ADVANCES IN ENGINEERING AND TECHNOLOGY
ADVANCES IN ENGINEERING AND TECHNOLOGY
Proceedings of the First International Conference on Advances in Engineering and Technology, 16-19 July 2006, Entebbe, Uganda
Edited by
J. A. Mwakali Department of Civil Engineering, Faculty of Technology, Makerere University, P.O. Box 7062, Kampala, Uganda
G. Taban-Wani Department of Engineering Mathematics, Faculty of Technology, Makerere University, P.O. Box 7062, Kampala, Uganda
2006
Amsterdam · Boston · Heidelberg · London · New York · Oxford · Paris · San Diego · San Francisco · Singapore · Sydney · Tokyo
Elsevier Ltd is an imprint of Elsevier with offices at:
Linacre House, Jordan Hill, Oxford OX2 8DP, UK
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
84 Theobald's Road, London WC1X 8RR, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, CA 92101-4495, USA

First edition 2006

Copyright © Elsevier Ltd. All rights reserved

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email:
[email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.

Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

For information on all biomaterials related publications visit our web site at books.elsevier.com

Printed and bound in Great Britain
06 07 08 09 10    10 9 8 7 6 5 4 3 2 1

ISBN-13: 978-0-08-045312-5
ISBN-10: 0-08-045312-0
Working together to grow libraries in developing countries
www.elsevier.com | www.bookaid.org | www.sabre.org
Elsevier Internet Homepage: http://www.elsevier.com
Consult the Elsevier homepage for full catalogue information on all books, major reference works, journals, electronic products and services. All Elsevier journals are available online via ScienceDirect: www.sciencedirect.com
To contact the Publisher
Elsevier welcomes enquiries concerning publishing proposals: books, journal special issues, conference proceedings, etc. All formats and media can be considered. Should you have a publishing proposal you wish to discuss, please contact, without obligation, the publisher responsible for Elsevier's materials and engineering programme:

Jonathan Agbenyega
Publisher, Elsevier Ltd
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
Phone: +44 1865 843000
Fax: +44 1865 843987
E-mail: j.agbenye@elsevier.com
General enquiries, including placing orders, should be directed to Elsevier's Regional Sales Offices - please access the Elsevier homepage for full contact details (homepage details at the top of this page).
PREFACE

The International Conference on Advances in Engineering and Technology (AET2006) was a monumental event for the engineering and scientific fraternity, not only from the African continent but also from the larger world, both technologically advanced and still developing. The Conference succeeded in bringing together to Uganda, affectionately called "The Pearl of Africa", scores of some of the world's most renowned scientists and engineers to share knowledge on recent advances in engineering and technology for the common good of humanity in a world that is no more than a global village.

These Proceedings are a compilation of quality papers that were presented at the AET2006 Conference held in Entebbe, Uganda, from 16th to 19th July, 2006. The papers cover a range of fields, representing a diversity of technological advances that have been registered in the last few decades of human civilization and development. The general areas covered range from advances in construction and industrial materials and methods to manufacturing processes; from advances in architectural concepts to energy efficient systems; from advances in geographical information systems to telecommunications, to mention but a few. The presentations are undoubtedly a pointer to more such advances that will continue to unfold in the coming years and decades to meet the ever growing demands and challenges of human survival in the face of diminishing natural resources for an ever-increasing population.

The timing of the Conference could not have been more appropriate: it came at a time when most of Africa is facing an unprecedented energy crisis engendered by a combination of factors, namely drought (resulting in the recession of water reservoir levels), accelerated industrialization that outstrips available power generation, inadequate planning, poor economies, etc. We think the AET2006 Conference has presented practical ideas for solving this and many other problems that face the peoples of Africa and other continents.

The editors of the Proceedings, on behalf of the AET2006 Conference Organising Committee, extend their thanks to the authors for accepting to share their knowledge in these Proceedings. All the experts who peer-reviewed the papers are most thanked for ensuring that quality material was published. The guidance given by the members of the International Scientific Advisory Board is greatly acknowledged. The Sponsoring Organisations are most sincerely thanked for making it possible for the Conference and its Proceedings to be realized. The staff of the Faculty of Technology, Makerere University, and particularly the Dean, Dr. Barnabas Nawangwe, are given special thanks for providing an environment that was conducive to the smooth accomplishment of the editorial work. Finally, the editors thank their families for the cooperation and support extended to them.
J. A. Mwakali
G. Taban-Wani
TABLE OF CONTENTS

CHAPTER ONE - KEYNOTE PAPERS

WATER QUALITY MANAGEMENT IN RIVERS AND LAKES
Fontaine, Kenner & Hoyer

IMPROVEMENTS INCORPORATED IN THE NEW HDM-4 VERSION 2   10
Odoki, Stannard & Kerali

CHAPTER TWO - ARCHITECTURE
SPATIAL AND VISUAL CONNOTATION OF FENCES (A CASE OF DAR ES SALAAM - TANZANIA)   23
Kalugila

A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 1: DEVELOPMENT   31
Goliger & Mahachi

A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 2: PILOT STUDY   40
Mahachi & Goliger

USE OF WIND-TUNNEL TECHNOLOGY IN ENHANCING HUMAN HABITAT IN COASTAL CITIES OF SOUTHERN AFRICA   49
Goliger & Mahachi

WOMEN PARTICIPATION IN THE CONSTRUCTION INDUSTRY   59
Elwidaa & Nawangwe

CHAPTER THREE - CIVIL ENGINEERING
STUDIES ON UGANDAN VOLCANIC ASH AND TUFF   75
Ekolu, Hooton & Thomas

COMPARATIVE ANALYSIS OF HOLLOW CLAY BLOCKS AND SOLID REINFORCED CONCRETE SLABS   84
Kyakula, Behangana & Pariyo

TECHNOLOGY TRANSFER TO MINIMIZE CONCRETE CONSTRUCTION FAILURES   91
Ekolu & Ballim

DEVELOPMENT OF AN INTEGRATED TIMBER FLOOR SYSTEM   99
Van Herwijnen & Jorissen

CONSIDERATIONS IN VERTICAL EXTENSION OF REINFORCED CONCRETE STRUCTURES   109
Kyakula, Kapasa & Opus

LIMITED STUDY ON A CHANGE FROM PRIVATE PUBLIC TO GOVERNMENT ONE TRANSPORT SYSTEMS   117
Ssamula

INFLUENCE OF TRUCK LOAD CHANNELISATION ON MOISTURE DAMAGE IN BITUMINOUS MIXTURES   125
Bagampadde & Kiggundu

THE EFFECT OF MEROWE DAM ON THE TRAVEL TIME OF FLOOD WAVE FROM ATBARA TO DONGOLA   135
Zaghloul & El-Moattassem

BUILDING MATERIAL ASPECTS IN EARTHQUAKE RESISTANT CONSTRUCTION IN WESTERN UGANDA   143
Kahuma, Kiggundu, Mwakali & Taban-Wani

BIOSENSOR TO DETECT HEAVY METALS IN WASTE WATER   159
Ntihuga

INTEGRATED ENVIRONMENTAL EDUCATION AND SUSTAINABLE DEVELOPMENT   167
Matiasi

MAPPING WATER SUPPLY COVERAGE: A CASE STUDY FROM LAKE KIYANJA, MASINDI DISTRICT, UGANDA   176
Quin

PHOSPHORUS SORPTION BEHAVIOURS AND PROPERTIES OF MBEYA-PUMICE   185
Mahenge, Mbwette & Njau

PRELIMINARY INVESTIGATION OF LAKE VICTORIA GROUNDWATER SITUATION FROM ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA   195
Mangeni & Ngirane-Katashaya

COMPARISON OF TEST RESULTS FROM A COMPACTED FILL   203
Twesigye-omwe

DEALING WITH SPATIAL VARIABILITY UNDER LIMITED HYDROGEOLOGICAL DATA. CASE STUDY: HYDROLOGICAL PARAMETER ESTIMATION IN MPIGI-WAKISO   211
Kigobe & Kizza

TOWARDS APPROPRIATE PERFORMANCE INDICATORS FOR THE UGANDA CONSTRUCTION INDUSTRY   221
Tindiwensi, Mwakali & Rwelamila

DEVELOPING AN INPUT-OUTPUT CLUSTER MAP FOR THE CONSTRUCTION INDUSTRY IN UGANDA   230
Mwesige & Tindiwensi

REGIONAL FLOOD FREQUENCY ANALYSIS FOR NORTHERN UGANDA USING THE L-MOMENT APPROACH   238
Kizza, Ntale, Rugumayo & Kigobe

QUALITATIVE ANALYSIS OF MAJOR SWAMPS FOR RICE CULTIVATION IN AKWA-IBOM, NIGERIA   251
Akinbile & Oyerinde

EFFICIENCY OF CRAFTSMEN ON BUILDING SITES: STUDIES IN UGANDA   260
Alinaitwe, Mwakali & Hansson

BUILDING FIRM INNOVATION ENABLERS AND BARRIERS AFFECTING PRODUCTIVITY   268
Alinaitwe, Widen, Mwakali & Hansson

FACTORS AFFECTING PRODUCTIVITY OF BUILDING CRAFTSMEN - A CASE OF UGANDA   277
Alinaitwe, Mwakali & Hansson

A REVIEW OF CAUSES AND REMEDIES OF CONSTRUCTION RELATED ACCIDENTS: THE UGANDA EXPERIENCE   285
Mwakali

THE RATIONALE FOR USE OF DECISION SUPPORT SYSTEMS FOR WATER RESOURCES MANAGEMENT IN UGANDA   300
Ngirane-Katashaya, Kizito & Mugabi

THE NEED FOR EARTHQUAKE LOSS ESTIMATION TO ENHANCE PUBLIC AWARENESS OF EXPOSURE RISK AND STIMULATE MITIGATING ACTIONS: A CASE STUDY OF KAMPALA CIVIC CENTER   309
Mujugumbya, Akampuriira & Mwakali

CHAPTER FOUR - CHEMICAL AND PROCESS ENGINEERING
PARTICLE DYNAMICS RESEARCH INITIATIVES AT THE FEDERAL UNIVERSITY OF TECHNOLOGY, AKURE, NIGERIA   315
Adewumi, Ogunlowo & Ademosun

MATERIAL CLASSIFICATION IN CROSS FLOW SYSTEMS   321
Adewumi, Ogunlowo & Ademosun

APPLICATION OF SOLAR-OPERATED LIQUID DESICCANT EVAPORATIVE COOLING SYSTEM FOR BANANA RIPENING AND COLD STORAGE   326
Abdalla, Abdalla, El-awad & Eljack

FRACTIONATION OF CRUDE PYRETHRUM EXTRACT USING SUPERCRITICAL CARBON DIOXIDE   339
Kiriamiti, Sarmat & Nzila

MOTOR VEHICLE EMISSION CONTROL VIA FUEL IONIZATION: "FUELMAX" EXPERIENCE   347
John, Wilson & Kasembe

MODELLING BAGASSE ELECTRICITY GENERATION: AN APPLICATION TO THE SUGAR INDUSTRY IN ZIMBABWE   354
Mbohwa

PROSPECTS OF HIGH TEMPERATURE AIR/STEAM GASIFICATION OF BIOMASS TECHNOLOGY   368
John, Mhilu, AIkilaha, Mkumbwa, Lugano & Mwaikondela

DEVELOPING INDIGENOUS MACHINERY FOR CASSAVA PROCESSING AND FRUIT JUICE PRODUCTION IN NIGERIA   375
Agbetoye, Ademosun, Ogunlowo, Olukunle, Fapetu & Adesina

CHAPTER FIVE - ELECTRICAL ENGINEERING
FEASIBILITY OF CONSERVING ENERGY THROUGH EDUCATION: THE CASE OF UGANDA AS A DEVELOPING COUNTRY   385
Sendegeya, Lugujjo, Da Silva & Amelin

PLASTIC SOLAR CELLS: AN AFFORDABLE ELECTRICITY GENERATION TECHNOLOGY   395
Chiguvare

IRON LOSS OPTIMISATION IN THREE PHASE AC INDUCTION SQUIRREL CAGE MOTORS BY USE OF FUZZY LOGIC   404
Saanane, Nzali & Chambega

PROPAGATION OF LIGHTNING INDUCED VOLTAGES ON LOW VOLTAGE LINES: CASE STUDY TANZANIA   421
Clemence & Manyahi

A CONTROLLER FOR A WIND DRIVEN MICRO-POWER ELECTRIC GENERATOR   429
Ali, Dhamadhikar & Mwangi

REINFORCEMENT OF ELECTRICITY DISTRIBUTION NETWORK ON PRASLIN ISLAND   437
Vishwakarma

CHAPTER SIX - MECHANICAL ENGINEERING
ELECTROPORCELAINS FROM RAW MATERIALS IN UGANDA: A REVIEW   454
Olupot, Jonsson & Byaruhanga

A NOVEL COMBINED HEAT AND POWER (CHP) CYCLE BASED ON GASIFICATION OF BAGASSE   465
Okure, Musinguzi, Nabacwa, Babangira, Arineitwe & Okou

ENERGY CONSERVATION AND EFFICIENT USE OF BIOMASS USING THE E.E.S. STOVE   473
Kalyesubula

FIELD-BASED ASSESSMENT OF BIOGAS TECHNOLOGY: THE CASE OF UGANDA   481
Nabuuma & Okure

MODELLING THE DEVELOPMENT OF ADVANCED MANUFACTURING TECHNOLOGIES (AMT) IN DEVELOPING COUNTRIES   488
Okure, Mukasa & Otto

CHAPTER SEVEN - GEOMATICS
SPATIAL MAPPING OF RIPARIAN VEGETATION USING AIRBORNE REMOTE SENSING IN A GIS ENVIRONMENT. CASE STUDY: MIDDLE RIO GRANDE RIVER, NEW MEXICO   495
Farag, Akasheh & Neale

CHAPTER EIGHT - ICT AND MATHEMATICAL MODELLING
2-D HYDRODYNAMIC MODEL FOR PREDICTING EDDY FIELDS   504
El-Belasy, Saad & Hafez

SUSTAINABILITY IMPLICATIONS OF UBIQUITOUS COMPUTING ENVIRONMENT   514
Shrivastava & Ngarambe

A MATHEMATICAL IMPROVEMENT OF THE SELF-ORGANIZING MAP ALGORITHM   522
Oyana, Achenie, Cuadros-Vargas, Rivers & Scott

BRIDGING THE DIGITAL DIVIDE IN RURAL COMMUNITY: A CASE STUDY OF EKWUOMA TOMATOES PRODUCERS IN SOUTHERN NIGERIA   533
Chiemeke & Daodu

STRATEGIES FOR IMPLEMENTING HYBRID E-LEARNING IN RURAL SECONDARY SCHOOL IN UGANDA   538
Lating, Kucel & Trojer

DESIGN AND DEVELOPMENT OF INTERACTIVE MULTIMEDIA CD-ROMs FOR RURAL SECONDARY SCHOOLS IN UGANDA   546
Lating, Kucel & Trojer

ON THE LINKS BETWEEN THE POTENTIAL ENERGY DUE TO A UNIT-POINT CHARGE, THE GENERATING FUNCTION AND RODRIGUE'S FORMULA FOR LEGENDRE'S POLYNOMIALS   554
Tickodri-Togboa

VIRTUAL SCHOOLS USING LOCOLMS TO ENHANCE LEARNING IN THE LEAST DEVELOPED COUNTRIES   562
Phocus, Donart & Shrivastaya

SCHEDULING A PRODUCTION PLANT USING CONSTRAINT DIRECTED SEARCH   572
Kibira, Kariko-Buhwezi & Musasizi

A NEGOTIATION MODEL FOR LARGE SCALE MULTI-AGENT SYSTEMS   580
Wanyama & Taban-Wani

CHAPTER NINE - TELEMATICS AND TELECOMMUNICATIONS
DIGITAL FILTER DESIGN USING AN ADAPTIVE MODELLING APPROACH   594
Mwangi

AUGMENTED REALITY ENHANCES THE 4-WAY VIDEO CONFERENCING IN CELL PHONES   603
Anand

DESIGN OF SURFACE WAVE FILTERS RESONATOR WITH CMOS LOW NOISE AMPLIFIER   612
Ntagwirumugara, Gryba & Lefebvre

THE FADING CHANNEL PROBLEM AND ITS IMPACT ON WIRELESS COMMUNICATION SYSTEMS IN UGANDA   621
Kaluuba, Taban-Wani & Waigumbulizi

SOLAR POWERED Wi-Fi WITH WiMAX ENABLES THIRD WORLD PHONES   635
Santhi & Kumaran

ICT FOR EARTHQUAKE HAZARD MONITORING AND EARLY WARNING   646
Manyele, Aliila, Kabadi & Mwalembe

CHAPTER TEN - LATE PAPERS

NEW BUILDING SERVICES SYSTEMS IN KAMPALA'S BUILT HERITAGE: COMPLEMENTARY OR CONFLICTING INTEGRALS?   655
Birabi

FUZZY SETS AND STRUCTURAL ENGINEERING   671
Kala and Omishore

A PRE-CAST CONCRETE TECHNOLOGY FOR AFFORDABLE HOUSING IN KENYA   680
Shitote, Nyomboi, Muumbo, Wanjala, Khadambi, Orowe, Sakwa, Bamburi, Apollo & Bamburi

ENVIRONMENTAL (HAZARDOUS CHEMICAL) RISK ASSESSMENT - ERA IN THE EUROPEAN UNION   696
Musenze & Vandegehuchte

THE IMPACT OF A POTENTIAL DAM BREAK ON THE HYDRO ELECTRIC POWER GENERATION: CASE OF OWEN FALLS DAM BREAK SIMULATION, UGANDA   710
Kizza & Mugume

LEAD LEVELS IN THE SOSIANI   722
Chibole

DEVELOPING A WEB PORTAL FOR THE UGANDAN CONSTRUCTION INDUSTRY   730
Irumba

LOW FLOW ANALYSIS IN LAKE KYOGA BASIN - EASTERN UGANDA   739
Rugumayo & Ojeo

SUITABILITY OF AGRICULTURAL RESIDUES AS FEEDSTOCK FOR FIXED BED GASIFIERS   756
Okure, Ndemere, Kucel & Kjellstrom

NUMERICAL METHODS IN SIMULATION OF INDUSTRIAL PROCESSES   764
Lewis, Postek, Gethin, Yang, Pao & Chao

MOBILE AGENT SYSTEM FOR COMPUTER NETWORK MANAGEMENT   796
Akinyokun & Imianvan

GIS MODELLING FOR SOLID WASTE DISPOSAL SITE SELECTION   809
Aribo & Looijen

AN ANALYSIS OF FACTORS AFFECTING THE PROJECTION OF AN ELLIPSOID (SPHEROID) ONTO A PLANE   813
Mukiibi-Katende

SOLAR BATTERY CHARGING STATIONS FOR RURAL ELECTRIFICATION: THE CASE OF UZI ISLAND IN ZANZIBAR   820
Kihedu & Kimambo

SURFACE RAINFALL ESTIMATE OF LAKE VICTORIA FROM ISLANDS STATIONS DATA   832
Mangeni & Ngirane-Katashaya

Author Index*   841

Keyword Index*   843

* Other than late papers (pages 655 to 840)
INTERNATIONAL CONFERENCE ON ADVANCES IN ENGINEERING AND TECHNOLOGY (AET 2006)

Local Organising Committee
Prof. Jackson A. Mwakali (Chairman), Makerere University
Dr. Gyavira Taban-Wani (Secretary), Makerere University
Dr. B. Nawangwe, Makerere University
Prof. E. Lugujjo, Makerere University
Prof. S.S. Tickodri-Togboa, Makerere University
Dr. Mackay E. Okure, Makerere University
Dr. Albert I. Rugumayo, Ministry of Energy and Mineral Development
International Scientific Advisory Board
Prof. Adekunle Olusola Adeyeye, National University of Singapore
Prof. Ampadu, National University of Singapore
Prof. Gerhard Bax, University of Uppsala, Sweden
Prof. Mark Bradford, University of New South Wales, Australia
Prof. Stephanie Burton, University of Cape Town, Cape Town, South Africa
Prof. R.L. Carter, Department of Electrical Engineering, University of Texas at Arlington, USA
Prof. David Dewar, University of Cape Town, South Africa
Prof. P. Dowling, University of Surrey, UK
Prof. Christopher Earls, University of Pittsburgh, USA
Prof. N. El-Shemy, Department of Geomatics Engineering, University of Calgary, Alberta, Canada
Prof. Tore Haavaldsen, NTNU, Norway
Prof. Bengt Hansson, Lund University, Sweden
Prof. H.K. Higenyi, Department of Mechanical Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Prof. Peter B. Idowu, Penn State Harrisburg, Pennsylvania, USA
Prof. N.M. Ijumba, University of Durban-Westville, South Africa
Prof. Ulf Isaacson, Royal Technical University, Stockholm, Sweden
Prof. Geofrey R. John, University of Dar-es-Salaam, Tanzania
Prof. Rolf Johansson, Royal Technical University, Stockholm, Sweden
Prof. Håkan Johnson, Swedish Agricultural University, Uppsala, Sweden
Prof. V.B.A. Kasangaki, Uganda Institute of Communications Technology, Kampala, Uganda
Prof. G. Ngirane-Katashaya, Department of Civil Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Prof. Badru M. Kiggundu, Department of Civil Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Dr. M.M. Kissaka, University of Dar-es-Salaam, Tanzania
Em. Prof. Björn Kjellström, Royal Technical University, Stockholm, Sweden
Prof. Jan-Ming Ko, Faculty of Construction and Land Use, Hong Kong Polytechnic University, Hong Kong, China
Em. Prof. W.B. Kraetzig, Ruhr University Bochum, Germany
Prof. R.W. Lewis, University of Wales, Swansea, UK
Prof. Beda Mutagahywa, University of Dar-es-Salaam, Tanzania
Prof. Burton M. L. Mwamilla, University of Dar-es-Salaam, Tanzania
Dr. E. Mwangi, Department of Electrical Engineering, University of Nairobi, Kenya
Dr. Mai Nalubega, World Bank, Kampala, Uganda
Prof. Jo Nero, University of Cape Town, South Africa
Prof. D.A. Nethercot, Imperial College of Science, Technology & Medicine, UK
Dr. Catharina Nord, Royal Institute of Technology, Stockholm, Sweden
Prof. A. Noureldin, Department of Electrical & Computer Engineering, Royal Military College of Canada, Kingston, Ontario, Canada
Prof. Rudolfo Palabazer, University of Trento, Italy
Prof. G.N. Pande, University of Wales, Swansea, UK
Prof. G. A. Parke, University of Surrey, UK
Prof. Petter Pilesjö, University of Lund, Sweden
Dr. Pereira da Silva, Faculty of Technology, Makerere University, Kampala, Uganda
Prof. Nigel John Smith, University of Leeds, UK
Prof. Lennart Soder, Royal Institute of Technology, Stockholm, Sweden
Prof. Örjan Svane, Royal Institute of Technology, Stockholm, Sweden
Prof. Sven Thelandersson, Lund University, Sweden
Prof. Roger Thunvik, Royal Institute of Technology, Stockholm, Sweden
Prof. Lena Trojer, Blekinge Institute of Technology, Sweden
Prof. F.F. Tusubira, Directorate of ICT Support, Makerere University, Kampala, Uganda
Prof. Brian Uy, University of Wollongong, Australia
Prof. Dick Urban Vestbro, Royal Institute of Technology, Stockholm, Sweden
Prof. A. Zingoni, University of Cape Town, South Africa
Sponsoring and Supporting Organisations
Makerere University
NUFU
Sida/SAREC
Ministry of Works, Housing and Communications
Uganda Institution of Professional Engineers
Construction Review
CHAPTER ONE KEYNOTE PAPERS
WATER QUALITY MANAGEMENT IN RIVERS AND LAKES

T. A. Fontaine, Department of Civil and Environmental Engineering, South Dakota School of Mines and Technology, Rapid City, SD, USA
S. J. Kenner, Department of Civil and Environmental Engineering, South Dakota School of Mines and Technology, Rapid City, SD, USA
D. Hoyer, Water and Natural Resources, RESPEC, Rapid City, SD, USA
ABSTRACT
An approach for national water quality management is illustrated based on the 1972 Clean Water Act in the United States. Beneficial uses are assigned to each stream and lake. Water quality standards are developed to support these beneficial uses. A data collection program is used to make periodic evaluation of the quality of water bodies in each state. A bi-annual listing of all impaired water is required, with a schedule for investigations to determine causes of pollution and to develop plans to restore desired water quality. The approach is illustrated using recent water quality investigations of two rivers in the Great Plains Region of the United States.

Keywords: water quality management, total maximum daily load, pollution.
1.0 INTRODUCTION
Water quality is related to the physical, chemical and biological characteristics of a stream, lake or groundwater system. Once the water quality of a water body is compromised, significant effort and cost are required to remediate the contamination. Protecting and improving the quality of water bodies enhances human health, agricultural production, ecosystem health, and commerce. Maintaining adequate water quality requires coordinated national policy and oversight of state and local water quality management.
A critical component of water quality management in the USA is the 1972 Federal Clean Water Act, which established additional rules, strategies, and funding to protect and improve water quality of streams and lakes. The US Environmental Protection Agency (EPA) is the federal administrator of the program. Water quality management of specific water bodies (rivers, lakes, and estuaries) is delegated to state and local governments that are required to meet the federal regulations. Key components of this process include (1) definition of beneficial uses for each water body, (2) assigning water quality standards that support the beneficial uses, (3) an antidegradation policy, and (4) continual water quality monitoring.

Each state must submit a list of impaired waters to the EPA every 2 years. The most common reasons for these waters to be impaired include pollution related to sediments, pathogens, nutrients, metals, and low dissolved oxygen. For each water body on the list, a plan is required for improving the polluted water resource. A fundamental tool in this plan is the development of a total maximum daily load (TMDL). For a specific river or lake, the TMDL includes data collection and a study of the water quality process, evaluation of current sources of pollution, and a management plan to restore the system to meet the water quality standards.

These aspects of water quality management are described in the remainder of this paper. The concepts of beneficial uses, water quality standards, the antidegradation policy, the listing of impaired water bodies, and the development of a TMDL are discussed. Case studies from recent research in South Dakota are then used to illustrate the development of a TMDL.
2.0 BENEFICIAL USES AND WATER QUALITY STANDARDS
The State of South Dakota has designated 11 beneficial uses for surface waters:
• Domestic water supply
• Coldwater permanent fish life propagation
• Coldwater marginal fish life propagation
• Warmwater permanent fish life propagation
• Warmwater semi-permanent fish life propagation
• Warmwater marginal fish life propagation
• Immersion recreation
• Limited contact recreation
• Fish and wildlife propagation, recreation, and stock watering
• Irrigation
• Commerce and industry

The EPA has developed standards for various beneficial uses. Each state can apply the EPA standards, or establish their own state standards as long as they equal or exceed the EPA standards. Examples of parameters used for standards for general uses include total dissolved solids, pH, water temperature, dissolved oxygen, unionized ammonia, and fecal coliform. Water quality standards for metals and toxic pollutants may be applied in special cases. Waters for fish propagation primarily involve parameters for dissolved oxygen, unionized ammonia, water temperature, pH, and suspended solids. Standards are either "daily maximum" or acute values, or "monthly average" or chronic values (an average of at least 3 samples during a 30-day period).

Additional standards for lakes include visible pollutants, taste- and odor-producing materials, and nuisance aquatic life. The trophic status of a lake is assessed with a Trophic State Index (TSI) based on measures of water transparency, Chlorophyll-a, and total phosphorus. Maximum values of the TSI allowed as supporting beneficial uses of lakes range from 45 to 65 across the state. The detailed numeric standards for surface water quality in South Dakota are described in South Dakota Department of Environment and Natural Resources (2004).

3.0 LISTING OF IMPAIRED WATER BODIES
Section 303d of the Federal Clean Water Act requires each state to identify waters failing to meet water quality standards, and to submit a list to the EPA of these waters and a schedule for developing a total maximum daily load (TMDL). A TMDL represents the amount of pollution that a waterbody can receive and still maintain the water quality standards for the associated beneficial use. The list of impaired waters (the "303d list") is required every 2 years. Examples of the most frequent reasons for listing waters across the USA are: (1) nutrients, sediments, low dissolved oxygen, and pH for lakes; and (2) sediments, metals, pathogens, and nutrients for streams. The number of waterbodies on the 303d list for South Dakota has been about 170 for the past 8 years.
The decision to place a waterbody on the 303d list can be based on existing data that document the impaired water quality, or on modeling that indicates failure to meet water quality standards. A waterbody that receives discharges from certain point sources can also be listed when the point source loads could impair the water quality. If existing data are used to evaluate whether or not a water should be listed, the following criteria apply: (1) 20 water quality samples of a specific parameter are required over the last 5 years; (2) over 10% of the samples must exceed the water quality standard for that parameter; and (3) the data must meet certain quality assurance requirements.

4.0 REMEDIATION STRATEGIES
For each water placed on the 303d list, a strategy for improving the water quality so that the standards are met is required. The development and implementation of a TMDL is the most common approach for remediation strategies. A TMDL is calculated as the sum of individual waste load allocations for point sources, and load allocations for nonpoint sources and for natural background sources, that are necessary to achieve compliance with applicable surface water quality standards. The units of the TMDL can be mass per day or toxicity per
day, for example, but not concentration. The waste load allocation involves point sources, which are regulated by the National Pollution Discharge Elimination System program (NPDES; see South Dakota Department of Environment and Natural Resources (2004)). A point source permit must be renewed every 5 years. Examples of load allocations (nonpoint sources) include agricultural runoff and stormwater runoff from developed areas. Natural background loads involve pollution from non-human sources. Examples include high suspended solids in watersheds with severe erosion due to natural soil conditions, high fecal coliform concentrations due to wildlife, and elevated streamwater temperatures due to natural conditions. A margin of safety is included in the TMDL to account for the uncertainty in the link between the daily pollutant load and the resulting water quality in the stream or lake.

The process of developing and implementing a TMDL usually involves a data collection phase, the development of proposed best management practices (BMPs), and an implementation and funding strategy. A water quality monitoring program may be required to generate data to define the watershed hydrologic system, measure water quality parameters, and identify the sources of pollution. A computer simulation model may be used to calculate the TMDL required for the stream or lake to meet the water quality standards for the beneficial uses involved. Once the TMDL is known, various management actions are evaluated for their effectiveness in decreasing the pollutant loads to the point where the water quality standards are met. Point source loads are managed through the NPDES permit system. Management of nonpoint sources requires cooperation among federal, state, and local agencies, business enterprises, and private landowners. Examples of activities by individuals, corporations, and government agencies that generate nonpoint pollution sources include agriculture (livestock and crop production), timber harvesting, construction, and mining. Federal and state funding can be applied for to promote voluntary participation in best management practices (BMPs) to reduce water pollution related to these activities.

Once the implementation phase of the TMDL begins, water quality monitoring continues on a regular basis to measure the impact on water quality of the selected BMPs. The state is allowed 13 years from the time the specific river or lake is placed on the 303d list to develop the TMDL, complete the implementation, and restore the water to the standards required to support the beneficial uses of that water.

A final aspect of the water quality management program is an antidegradation policy. Antidegradation applies to water bodies with water quality that is better than the beneficial use criteria. Reduction of water quality in high quality water bodies requires economic and social justification. In any case, beneficial use criteria must always be met.

Establishing desired beneficial uses for every surface water body, and the associated standards required to support those uses, provides the framework to protect and improve the
water quality of a country so that all benefit. The process of routine collection of water quality data provides the information needed to identify impaired waters, place them on the 303d list, and to define a plan to develop and implement a strategy for restoring the desired level of water quality. The following case studies illustrate some of the procedures and issues that are often involved in this process.
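The data-based listing test and the make-up of a TMDL described above can be expressed in a few lines of code. The sketch below is not part of the original paper and is not taken from any agency software; the function names, sample concentrations and load figures are invented purely for illustration. It encodes two rules stated in the text: a parameter qualifies a waterbody for data-based 303d listing when at least 20 samples exist for the last 5 years and more than 10% of them exceed the standard, and a TMDL is the sum of the point-source waste load allocations, the nonpoint and natural background load allocations, and a margin of safety (expressed as a daily load, never a concentration).

def is_303d_candidate(samples, standard, min_samples=20, exceed_fraction=0.10):
    # Data-based listing test from Section 3.0: enough samples and >10% exceedances.
    if len(samples) < min_samples:
        return False
    exceedances = sum(1 for x in samples if x > standard)
    return exceedances / len(samples) > exceed_fraction

def tmdl(waste_load_allocations, load_allocations, margin_of_safety):
    # TMDL = sum of point-source WLAs + sum of nonpoint/background LAs + margin of safety.
    return sum(waste_load_allocations) + sum(load_allocations) + margin_of_safety

# Hypothetical fecal coliform record (cfu/100 mL) and daily loads, for illustration only:
record = [120, 480, 150, 90, 700, 210, 60, 880, 140, 95,
          130, 450, 170, 80, 300, 520, 110, 75, 640, 200, 190]
print(is_303d_candidate(record, standard=400))        # True: 6 of 21 samples exceed 400
print(tmdl([2.5], [6.0, 3.5], margin_of_safety=1.2))  # 13.2 (e.g. kg/day)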
5.0 SPRING CREEK
Spring Creek is located on the eastern side of the Black Hills of South Dakota. The portion of Spring Creek involved in this project has a drainage area of 327 km² at the outflow gage at 43° N, 103°29'18" W. The annual mean discharge is 0.62 m³/s (1991-2004), the maximum daily mean discharge of record is 14.9 m³/s, and the minimum daily mean discharge is 0.0 m³/s. The average annual precipitation is 56 cm and the land cover is Ponderosa Pine forest. The beneficial uses of this section of Spring Creek are (1) cold-water permanent fish life propagation, (2) immersion recreation, (3) limited-contact recreation, and (4) fish, wildlife propagation, recreation, and stock watering.

Spring Creek was placed on the 303d list and scheduled for TMDL development because the standard for fecal coliform in immersion recreation waters was exceeded. Fecal coliform bacteria are present in the digestive systems of warm blooded animals, and therefore serve as an indicator that the receiving water has been contaminated by fecal material. Symptoms of exposure in humans include cramps, nausea, diarrhea, and headaches.

The objective of the project was to support the development of the TMDL using a water quality monitoring program and a computer simulation program (Schwickerath et al., 2005). Data from the water quality monitoring program helped identify the sources of fecal coliform and measure the current loads. The simulation model provided insight into the relation of the sources to the loads exceeding the standards for the immersion recreation use, and was used to estimate the reduction of pollution levels resulting from various water quality management activities in the watershed.
5.1 Monitoring Program
Fourteen monitoring sites were selected in the study area: 9 on the main channel of Spring Creek, 2 on Palmer Gulch Tributary, 2 on Newton Fork Tributary, and 1 on Sunday Gulch Tributary. Monthly grab samples were collected for 15 months at all 14 sites, and samples during storm-runoff events were collected at 6 stations. The storm event samples were collected over a 12 to 24 hour period on a flow-weighted basis. Streamflow measurements were taken periodically during the 15 month study to establish stage-discharge ratings at each station. A quality assurance program using field blanks and field replicates every 10 samples was used to measure the reliability of the data.
Samples were analyzed for fecal coliform, total suspended solids, pH, temperature, ammonia, and dissolved oxygen. The fecal coliform criterion for immersion contact recreation has two standards: (1) the geometric mean of at least 5 samples collected during a 30 day period must not exceed 200 colony-forming units (cfu) per 100 mL; or (2) a maximum of 400 cfu per 100 mL in a single sample. The water is considered impaired if either standard is exceeded by more than 10% of the samples. The water quality standards for the other relevant parameters were: total suspended solids less than 53 mg/L (daily maximum sample), pH between 6.6 and 8.6, water temperature of 18.3°C or less, and at least 6 mg/L dissolved oxygen. The standard for ammonia depends on the temperature and pH at the time of sampling.

The fecal coliform standard was exceeded in 17% of the samples from the main channel of Spring Creek, 30% of samples from Palmer Gulch Tributary, and 13% of samples from Sunday Gulch Tributary. More than 10% of samples from Palmer Gulch Tributary also exceeded standards for total suspended solids (22% exceeded), pH (11% exceeded), and ammonia (11% exceeded). Fourteen percent of samples in Newton Fork Tributary exceeded the temperature standard. These results confirm that a TMDL for fecal coliform bacteria is required for this section of Spring Creek. The results also indicate that Palmer Gulch Tributary should be considered for an independent listing on the 303d list of impaired water, and that additional monitoring is needed to investigate temperature conditions on Newton Fork Tributary.

Additional sampling was used to estimate the distribution of fecal contamination coming from humans and animals. A DNA fingerprinting analysis called ribotyping can indicate the source of fecal coliforms. Results of the initial ribotyping samples suggest that 35% of the fecal coliform in Spring Creek originates from humans, with the other 65% coming from livestock (cattle) and wildlife in the catchment. This information is used to develop remediation options to help Spring Creek meet the water quality standard. For example, potential sources of human coliform include leaking sewer systems, leaking treatment lagoons at Hill City (a town of 780 people in the center of the study area), and failed septic systems.
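As an illustration of how the two-part immersion recreation criterion quoted above can be checked, the short sketch below computes the 30-day geometric mean and the single-sample maximum for one set of samples. It is not part of the original study and uses invented concentrations rather than the Spring Creek data.

import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def meets_immersion_standard(samples_30_day):
    # Both parts of the criterion: geometric mean of >= 5 samples <= 200 cfu/100 mL,
    # and no single sample above 400 cfu/100 mL.
    if len(samples_30_day) < 5:
        raise ValueError("need at least 5 samples in the 30-day period")
    return geometric_mean(samples_30_day) <= 200 and max(samples_30_day) <= 400

# Hypothetical 30-day record (cfu/100 mL):
samples = [150, 220, 180, 90, 310]
print(round(geometric_mean(samples)))      # about 175
print(meets_immersion_standard(samples))   # True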
5.2 Simulation Modeling Analysis
The Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) and the HSPF simulation models were used to investigate the impact of various remediation activities on the coliform contamination in Spring Creek (US Environmental Protection Agency, 2001; Bicknell et al., 2000). These models provide comprehensive simulation of the hydrology, channel processes, and contaminant processes on a continuous basis. Field data were used to calibrate and validate the model.

The effectiveness of various best management practices (BMPs) for remediating pollution can be simulated with the models. The nonpoint sources of fecal coliform contamination in
Spring Creek include humans and urban runoff, runoff from agricultural land and livestock, and wildlife. Human and urban runoff sources include leaks from septic systems of individual homes, sewer pipes, and treatment lagoons, and animal feces. Livestock (primarily cattle in this watershed) generates waste in concentrated compounds near farms during cold months and across widely distributed areas of the catchment during the warmer open range season. Fecal coliforms from livestock are deposited near, and easily washed into, streams in areas where no fences exist along the riparian zones.

Examples of BMPs applied in the modeling analysis included improving failed septic systems, leaking sewer systems, and leaking treatment lagoons, and keeping cattle away from streams. Various combinations of these BMPs are simulated and the TMDL in Spring Creek is calculated for each scenario. Two of these scenarios were successful in reducing the TMDL to the point where the water quality in Spring Creek would be expected to support the beneficial uses. The final phase of the water quality program involves collaboration between the state environmental agency, local residents and landowners, and funding agencies to implement the effective BMPs. Water quality monitoring will continue during this period in order to measure the actual impact on fecal coliform loads, and to document the point when the water quality attains the standards for the beneficial uses of Spring Creek.

6.0 WHITE RIVER
The White River is located in the prairie region of southwestern South Dakota. The drainage area is 26,000 km² at the downstream boundary of the study area at 43° N, 99° W. The annual mean discharge is 16.2 m³/s (1929-2004), the maximum daily mean discharge of record is 1247 m³/s and the minimum daily mean discharge is 0.0 m³/s. Suspended sediment concentrations vary widely, with a maximum daily mean of 72,300 mg/L and a minimum daily mean of 11 mg/L (for the period of 1971 to 2004). Climate is semi-arid, with 41 cm of rain per year and 102 cm of lake evaporation per year. Land cover is rangeland and grassland, with areas of Badlands (steep terrain with highly erodible, bare soil).
The river basin has 19 streamflow gaging stations. Spring-fed baseflow provides most of the discharge in the upper portions of the drainage area. Streamflow is a combination of baseflow and storm-event runoff in the lower portions of the basin. An analysis of streamflow data and a physical habitat assessment indicated that the river basin could be divided into three sections (the upper, middle and lower reaches), each reflecting water quality characteristics related to the hydrology, geology, and land use of the section (Foreman et al., 2005).

The beneficial uses for the White River are (1) warm-water semi permanent fish life propagation; (2) limited contact recreation; (3) fish and wildlife propagation, recreation, and stock waters; and (4) irrigation waters. The White River is listed as impaired for the use of
warm-water semi-permanent fish life propagation because of excessive total suspended solids (TSS) and for the use of limited contact recreation because of excessive fecal coliform. The applicable standard for TSS is a daily maximum of 158 mg/L, or a 30-day average of 90 mg/L. The applicable standard for fecal coliform is a single sample with 2000 cfu per 100 mL, or a 30-day average of 1000 cfu per 100 mL.

Water quality standards for the other relevant parameters are: alkalinity less than 1313 mg/L (daily maximum), total residual chlorine less than 0.019 mg/L (acute), conductivity less than 4375 µmhos/cm (daily maximum), hydrogen sulfide less than 0.002 mg/L, nitrates less than 88 mg/L (daily maximum), dissolved oxygen of at least 5.0 mg/L, pH between 6.5 and 9.0, sodium adsorption ratio of 10, total dissolved solids less than 4375 mg/L (daily maximum), temperature less than 32.2°C, total petroleum hydrocarbons less than 10 mg/L, and oil and grease less than 10 mg/L. The standard for ammonia depends on the water temperature and pH at the time of sampling.
6.1 Analysis of Water Quality Data
Water quality data from six stations in the basin were analyzed to evaluate the water quality in the basin and to develop a TMDL summary report. Water is considered impaired for a specific beneficial use if more than 10% of samples exceed the standard for that use.

The median concentration of TSS (mg/L) was 139 in the upper reach, 1118 in the middle reach and 1075 in the lower reach. The percent of samples exceeding the 158 mg/L standard was 47% for the upper reach, 78% for the middle reach, and 79% for the lower reach. All three sections of the White River significantly exceed the maximum daily water quality standard for TSS. The TSS reduction required to meet the standard would be 90% in the upper section, 99% in the middle section and 99% in the lower section.

Most of the TSS in the White River is considered natural background loading because of the amount of drainage area having steep terrain and highly erodible soil types, and the Badlands area. The extensive sediment loads from these sources create large sediment deposits in the channel system of the White River, which are easily suspended and transported as streamflow increases. Therefore, best management practices (BMPs) are not feasible and would not be expected to have a significant impact on TSS loads. If it appeared that BMPs could be effective, examples commonly explored for reducing high TSS include conservation cover, stream bank protection, rotational grazing, and upland wildlife habitat management.

The median concentration of fecal coliform (cfu/100 mL) was 450 in the upper reach, 370 in the middle reach and 2075 in the lower reach. The percent of samples exceeding the 2000 cfu/100 mL standard was 9% for the upper reach, 54% for the middle reach, and 29% for the lower reach. The middle and lower sections of the White River significantly exceed the
water quality standard for fecal coliform of 2000 cfu/100 mL. The coliform reduction required to meet the standard would be 88% in the middle section and 66% in the lower section. Best management practices to consider for reducing fecal coliform levels include conservation cover, filter strips, rotational grazing, upland wildlife habitat management, and stream bank protection. Implementing a combination of these land management tools would be expected to lower the coliform levels to meet the water quality standard for limited contact recreation.

7.0 CONCLUSIONS
Water quality management policy and objectives are set at the national level, but a partnership at the federal, state and local levels is critical for effective water quality assessments and implementation of remediation projects. A water quality management program defines beneficial uses for each water body, assigns water quality standards to support those beneficial uses, and maintains a data collection program to identify impaired water and measure recovery. A periodic listing of impaired streams and lakes, along with a schedule of projects to restore water quality for beneficial uses, is also needed. The total maximum daily load (TMDL) is a tool for developing strategies for improving impaired waters. Implementing a TMDL-based solution requires collaboration of federal, state and local governments, plus individual landowners and business owners. The case studies of Spring Creek and White River in South Dakota illustrate these principles of water quality management.

REFERENCES
Bicknell, B.R., Imhoff, J.C., Kittle, J.L. Jr., Jobes, T.H., and Donigian, A.S., Jr., 2000. Hydrological Simulation Program-Fortran User's Manual for Release 12. US Environmental Protection Agency, Washington, DC.
Foreman, C.S., Hoyer, D., and Kenner, S.J., 2005. Physical habitat assessment and historical water quality analysis on the White River, South Dakota. ASCE World Water & Environmental Congress, Anchorage, May 2005, 12 pp.
Schwickerath, P., Fontaine, T.A., and Kenner, S.J., 2005. Analysis of fecal coliform bacteria in Spring Creek above Sheridan Lake in the Black Hills of South Dakota. ASCE World Water & Environmental Congress, Anchorage, May 2005, 12 pp.
South Dakota Department of Environment and Natural Resources, 2004. The 2004 South Dakota Integrated Report for Surface Water Quality Assessment. Pierre, SD, USA.
US Environmental Protection Agency, 2001. Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) User's Manual. US Environmental Protection Agency, Office of Science and Technology, Washington, DC, USA.
IMPROVEMENTS INCORPORATED IN THE NEW HDM-4 VERSION 2

J. B. Odoki, Department of Civil Engineering, University of Birmingham, UK
E. E. Stannard, HDMGlobal, University of Birmingham, UK
H. R. Kerali, World Bank, Washington DC, USA
ABSTRACT
The Highway Design and Maintenance Standards Model (HDM-III), developed by the World Bank, was used for over two decades between 1980 and 2000, to combine technical and economic appraisals of road projects, to prepare road investment programmes and to analyse road network strategies. The International Study of Highway Development and Management (ISOHDM) extended the scope of the World Bank HDM-III model, to provide a harmonised systems approach to road management, with adaptable and user-friendly software tools. The Highway Development and Management Tool (HDM-4 Version 1), which was released in 2000, considerably broadened the scope of traditional project appraisal tools such as HDM-III, to provide a powerful system for the analysis of road management and investment alternatives. Since the release of HDM-4 Version 1, the software has been used in many countries for a diverse range of projects. The experience gained from the project applications together with the feedback received from the broad user base, identified the need for improvements to the technical models and to the applications implemented within HDM-4. The improvements included in Version 2 of HDM-4 are described in detail in the paper and these are categorized as follows: new applications, improved technical models, improved usability and configuration, improved data handling and organization, and improved connectivity.

Keywords: HDM-4; roads; highways; investment appraisal; software tools; sensitivity analysis; budget scenarios; asset valuation; multi-criteria analysis; technical models; database.
1.0 INTRODUCTION
When planning investments in the roads sector, it is necessary to evaluate all costs and benefits associated with the proposed project over the expected life of the road. The purpose of road investment appraisal is to select projects that will maximise benefits to society/stakeholders. The purpose of an economic appraisal of road projects therefore is to determine how much to invest and what economic returns to expect. The size of the
investment is determined by the costs of construction and annual road maintenance, and these are usually borne by the agency or authority in charge of the road network. The economic returns are mainly in the form of savings in road user costs resulting from the provision of a better road facility. Road user costs are borne by the community at large in the form of vehicle operating costs (VOC), travel time costs, accident costs and other indirect costs. Road agency costs and road user costs constitute what is commonly referred to as the total (road) transport cost or the whole life cycle cost (Kerali, 2003).

The primary function of a road investment appraisal model is to calculate the individual components of total road transport cost for a specified analysis period. This is accomplished by modelling the interrelationships between the environment, construction standards, maintenance standards, geometric standards and traffic characteristics. The interaction among these factors has a direct effect on the annual trend in road condition, vehicle speeds and on the costs of vehicle operation and accident rates on the road. A road investment appraisal model may therefore be used to assist with the selection of appropriate road design and maintenance standards, which minimise the total transport cost or environmental effects.

The Highway Development and Management Tools (HDM-4) is the result of the International Study of Highway Development and Management (ISOHDM) that was carried out to extend the scope of the World Bank HDM-III model. The scope of the new HDM-4 tools has been broadened considerably beyond traditional project appraisals, to provide a powerful system for the analysis of road management and investment alternatives and to provide a harmonised systems approach to road management, within adaptable and user-friendly software tools. The HDM-4 system can be used for assessing technical, economic, social and environmental impacts of road investment for both MT and NMT modes of transport
(Kerali, 2000).

HDM-4 Version 1 software, which was released in 2000, has been used in many countries for a diverse range of projects. The experience gained from the project applications together with the feedback received from the broad user base, identified the need for improvements to the technical models and to the applications implemented within HDM-4. This paper describes in detail the improvements incorporated in Version 2 of HDM-4 and these are categorized as follows: new applications, improved technical models, improved usability and configuration, improved data handling and organization, and improved connectivity.

2.0 NEW APPLICATIONS
Improvements in applications that have been incorporated in HDM-4 Version 2 are: sensitivity analysis, budget scenario analysis, road asset valuation, multi-criteria analysis (MCA), and estimation of social benefits.
2.1 Sensitivity Analysis
Sensitivity analysis is used to study the effects of changes in one parameter on the overall viability of a road project as measured by various technical and economic indicators. This analysis should indicate which of the parameters examined are likely to have the most significant effect on the feasibility of the project because of the inherent uncertainty
(Odoki, 2002).

Scenario analysis is used to determine the broad range of parameters which would affect the viability of the road project. For example, a review of government long-term development plans could yield alternative economic growth rates. Investment projects should be chosen on their ability to deliver a satisfactory level of service across a range of scenarios. In this way, the economic return of a project need not be the sole criterion since social and political realities can also be taken into account. The key parameters considered for sensitivity analysis in HDM-4 are described below. The choice of which variables to test will depend upon the kind of study being conducted and it is a matter of judgement on the part of the user.

2.2 Traffic Levels
The economic viability of most road investment projects will depend significantly on the traffic data used. However, it is difficult to obtain reliable estimates of traffic and to forecast future growth rates (TRRL, 1988). Thus sensitivity analysis should be carried out, both of baseline flows and of forecast growth. In HDM-4, traffic is considered in three categories as normal, diverted and generated. Baseline flows are specified separately for motorised transport (MT) and for non-motorised transport (NMT) in terms of the annual average daily traffic (AADT) by vehicle type. Future traffic is expressed in terms of annual percentage growth rate or annual increase in AADT for each vehicle type.

2.3 Vehicle Use
In HDM-4, there are several parameters related to vehicle loading and annual utilisation which are difficult to estimate and should therefore be considered as candidate variables for sensitivity analysis. The vehicle use parameters include the average vehicle operating weight, equivalent standard axle load factor, baseline annual number of vehicle kilometres, and baseline annual number of working hours. The inclusion of these parameters for sensitivity and scenario analysis has enhanced the capability of HDM-4 for carrying out special research studies, for example the determination of road use cost.

3.0 NET BENEFITS STREAMS
Total net benefits stream is considered under three components namely: net benefits from savings in road agency costs, net benefits from savings in road user costs, and net benefits related to savings in exogenous costs.
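A one-parameter sensitivity run of the kind described in Sections 2.1 to 2.3, combined with the discounting of an annual net benefit stream, can be illustrated with a toy calculation: hold every input fixed except the traffic growth rate, regenerate the stream of annual net benefits, and compare the discounted totals. This sketch is not HDM-4 code; the growth rates, unit saving, discount rate and traffic level are invented, and the benefit model is deliberately simplistic (user cost savings proportional to AADT).

def npv(cash_flows, discount_rate):
    # Present value of annual net benefits for years 1..n.
    return sum(cf / (1 + discount_rate) ** (year + 1)
               for year, cf in enumerate(cash_flows))

def npv_for_growth(base_aadt, growth, years, unit_saving_per_veh, discount_rate=0.12):
    # Annual net benefit = traffic volume x assumed user cost saving per vehicle.
    flows = [base_aadt * (1 + growth) ** y * 365 * unit_saving_per_veh
             for y in range(years)]
    return npv(flows, discount_rate)

# Vary only the annual traffic growth rate and observe the effect on NPV:
for growth in (0.02, 0.04, 0.06):
    print(growth, round(npv_for_growth(base_aadt=1500, growth=growth,
                                       years=20, unit_saving_per_veh=0.03)))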
3.1 Budget Scenario Analysis
The amount of financial resources available to a road agency determines what road investment works can be affordable. The level of budget is not always constant over time due to a variety of factors including competing demands from other sectors, changes in a country's macro economic performance, etc. This variation of budget levels over time affects the functional standards as well as the size of road network that can be sustainable. It is therefore important to study the effects of different budget levels or budget scenarios on the road network performance. This feature has been implemented in HDM-4 and it permits comparisons to be made between the effects of different budget scenarios and to produce desired reports.

The most important aspect of budget scenario analysis is the presentation of results. This should be given at two levels as follows:
• At detail level: to include parameters for each section alternative analysed and the performance indicators.
• In aggregate terms: to present performance indicators for the whole road system over the analysis period for each budget scenario, and the results of comparison between the effects of different budget scenarios.

Figure 1 illustrates the effect of different budget scenarios on the road network condition.
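HDM-4's own budget optimisation procedure is not reproduced in this paper, but the idea of a budget scenario comparison can be sketched with a simple, hypothetical heuristic: under each annual budget cap, fund candidate treatments in order of economic merit and report an aggregate condition indicator such as average roughness. All section identifiers, costs, NPVs and roughness values below are invented for illustration.

# Each candidate: (section, cost, NPV, roughness if treated, roughness if not treated)
candidates = [
    ("A", 1.0, 4.2, 3.0, 7.5),
    ("B", 2.5, 6.0, 3.5, 8.0),
    ("C", 0.8, 1.5, 4.0, 6.0),
    ("D", 1.7, 2.1, 3.2, 9.0),
]

def average_roughness_under_budget(candidates, budget):
    # Fund treatments by NPV per unit cost until the cap is reached, then
    # average the resulting roughness over all sections (unweighted here).
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    remaining, roughness = budget, []
    for _, cost, _, iri_done, iri_not in ranked:
        if cost <= remaining:
            remaining -= cost
            roughness.append(iri_done)
        else:
            roughness.append(iri_not)
    return sum(roughness) / len(roughness)

for budget in (1.0, 3.0, 6.0):   # three budget scenarios, arbitrary money units
    print(budget, round(average_roughness_under_budget(candidates, budget), 2))

Larger budgets allow more treatments to be funded, so the aggregate roughness indicator falls; a comparison of this kind, produced by HDM-4, is what Figure 1 presents.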
3.2 Road Asset Valuation
The purpose of preparing annual asset valuations for a road network is to provide a means of checking on the success or otherwise of the road authority in preserving the assets it holds on behalf of the nation. All public assets should have associated with them a current capital value. For the implementation of road asset valuation in HDM-4, only the following components are relevant (Odoki, 2003):
• Road formation, drainage channels, and sub-grade (i.e. earthworks)
• Road pavement layers
• Footways, footpaths and cycle-ways
• Bridges and structures
• Traffic facilities, signs and road furniture

Depreciation accounting, which is based on the assumption that depreciation of the network equals the sum of the depreciation of all of the asset components making up the network, can be applied to road asset valuation. The basis of valuation used is as follows
(International Infrastructure Management Manual, 2002):
(i) The Optimised Replacement Cost (ORC) of each component of the road asset, which is defined in general terms as the cost of a replacement asset that most efficiently provides the same utility as the existing asset. This can be estimated as equivalent to the initial financial cost of construction, adjusted to current year prices.
(ii) The Optimised Depreciated Replacement Cost (ODRC) of each component; ODRC is the replacement cost of an existing asset after deducting an allowance for wear or consumption to reflect the remaining useful life of the asset.
The relevant basis of valuation and method for the road components considered is given in Table 1. The following ODRC methods are used for valuation of the road components: the straight-line method, production-based method, and condition-based method.
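A minimal numerical reading of these valuation methods (not taken from HDM-4 itself; the component values, lives and condition figures are invented) is that each component carries a replacement cost, and its depreciated value is obtained either from its age under straight-line depreciation or from a condition measure:

def odrc_straight_line(orc, age_years, useful_life_years):
    # Straight-line method: value falls linearly from ORC to zero over the useful life.
    remaining = max(0.0, useful_life_years - age_years)
    return orc * remaining / useful_life_years

def odrc_condition_based(orc, condition, condition_at_replacement):
    # One simple possibility for a condition-based method: scale ORC by how far the
    # component is from the condition at which it would need full replacement
    # (HDM-4's actual condition-based formula is not reproduced here).
    remaining_fraction = max(0.0, 1.0 - condition / condition_at_replacement)
    return orc * remaining_fraction

# Hypothetical components (values in millions, arbitrary currency):
print(odrc_straight_line(orc=12.0, age_years=10, useful_life_years=40))              # 9.0
print(odrc_condition_based(orc=30.0, condition=4.0, condition_at_replacement=12.0))  # 20.0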
[Chart: annual average roughness for the network, grouped by budget scenario (weighted by length), plotted against year]
Fig. 1: The effect of different budget scenarios on road condition
[Chart: output from the asset valuation procedures, plotted by year.]
Fig. 2: Road asset valuation
Table 1: Valuation methods of road assets considered in HDM-4

Feature/Component | Basis of valuation | Depreciation method
Road formation and sub-grade | ORC | -
Road pavement layers | ODRC | Production- or condition-based
Footways, footpaths and cycle-ways | ODRC | Straight line
Bridges and structures | ODRC | Straight line
Traffic facilities, signs and road furniture | ODRC | Straight line

The backbone of HDM-4 analysis is the ability to predict the life cycle pavement performance and the resulting user costs under specified road works scenarios. The asset valuation methodology used links the capital value of the asset with its condition, which is predicted annually using the road deterioration and works effects models in HDM-4. Figure 2 illustrates an output from the asset valuation procedures.
3.3 Multi-Criteria Analysis
Multiple criteria analysis provides a systematic framework for breaking a problem into its constituent parts in order to understand the problem and consequently arrive at a decision. It provides a means to investigate a number of choices or alternatives in light of conflicting priorities. By structuring a problem within the multiple criteria analysis framework, road investment alternatives may be evaluated according to pre-established preferences in order to achieve defined objectives (Cafiso et al., 2002). The analytical framework of HDM-4 has been extended beyond technical and economic factors to consider explicitly the social, political and environmental aspects of road investments. There are instances where it is important to consider the opinions of others interested in the condition of the road network (e.g. road users, industrialists, environmental groups, and community leaders) when evaluating road investment projects, standards and strategies. Examples include: a low-trafficked rural road that serves a politically or socially sensitive area of the country; the frequency of wearing course maintenance for particular road sections for which the economics are secondary to the minimisation of noise and intrusion from traffic (e.g. adjacent to hospitals); cases where national pride is deemed paramount, for example the road between a main airport and the capital city; and roads of strategic or security importance to the country. Table 2 gives a list of criteria supported in HDM-4 (Odoki, 2003). MCA basically requires the clear definition of possible alternatives, together with the identification of the criteria under which the relative performance of the alternatives in achieving pre-established objectives is to be measured. Thereafter it requires the assignment of preferences (i.e. a measure of relative importance, or weighting) to each of the criteria. The selection of a particular set of investment alternatives will greatly depend on the relative importance (or weights) assigned to each criterion.

Table 2: Criteria supported in HDM-4 multi-criteria analysis
Category | Criteria/Objectives | Attributes
Economic | Minimise road user costs | Total road user costs are calculated internally within HDM-4 for each alternative.
Economic | Maximise net present value | Economic net benefit to society is calculated internally within HDM-4 for each alternative.
Safety | Reduce accidents | Number and severity of road accidents. These are calculated internally within HDM-4.
Functional service level | Provide comfort | Good riding quality to road users, defined on the basis of average IRI (international roughness index). The average IRI is calculated internally within HDM-4.
Functional service level | Reduce road congestion | Delay and congestion effects. Level of congestion is defined in terms of volume-capacity ratio (VCR). VCR values are calculated internally within HDM-4.
Environment | Reduce air pollution | Air pollution is measured in terms of quantities of pollutants from vehicle emissions, which are computed within HDM-4.
Energy | Maximise energy efficiency | Efficiency in both global and national energy use in the road transport sector. Energy use is calculated internally within HDM-4.
Social | Maximise social benefits | Social benefits include improved access to social services (e.g. schools, health centres, markets, etc.). A representative value is externally user-defined for each alternative.
Political | Consider political issues | Fairness in providing road access, promotion of political stability, strategic importance of roads, etc. A representative value is externally user-defined for each alternative.
The Analytic Hierarchy Process (AHP) method has been selected for implementation in HDM-4 because it systematically transforms the analysis of competing objectives into a series of simple comparisons between the constituent elements. AHP is based on "pairwise" comparisons of alternatives for each of the criteria to obtain the ratings (Saaty, 1990). The MCA procedure incorporated in HDM-4 Version 2 will produce a matrix of "multiple criteria ranking numbers", or ratings, for each alternative of each road section included in the study. The alternative with the highest value is selected for each section. If the ranking number is the same for two or more mutually exclusive alternatives, then the minimum cost alternative should be selected.

3.4 Estimation of Social Benefits
It has often been necessary to include the social benefits of road investments within HDM-4. The simple framework for including social benefits has now been made more transparent by incorporating them within the exogenous costs and benefits user interface.
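A minimal sketch of the AHP weighting and rating step described in Section 3.3 is given below. It uses the common row geometric-mean approximation to the principal eigenvector of the pairwise comparison matrix; the judgements, criterion scores and alternative names are hypothetical, and the HDM-4 Version 2 implementation may differ in detail.

    import math

    # Illustrative AHP sketch (after Saaty, 1990), not the HDM-4 implementation.
    def ahp_weights(pairwise):
        """Approximate the principal eigenvector by the row geometric-mean method."""
        n = len(pairwise)
        geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
        total = sum(geo_means)
        return [g / total for g in geo_means]

    # Pairwise judgements for three criteria (economic, safety, social):
    # economic judged 3x as important as safety and 5x as important as social.
    comparisons = [[1.0, 3.0, 5.0],
                   [1 / 3, 1.0, 2.0],
                   [1 / 5, 1 / 2, 1.0]]
    weights = ahp_weights(comparisons)

    # Scores (0-1) of two mutually exclusive alternatives against each criterion.
    alternatives = {"overlay": [0.8, 0.6, 0.4], "reconstruction": [0.5, 0.9, 0.7]}
    ratings = {name: sum(w * s for w, s in zip(weights, scores))
               for name, scores in alternatives.items()}
    print(weights, ratings, max(ratings, key=ratings.get))   # highest rating wins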
4.0 IMPROVED TECHNICAL MODELS
4.1 Road Deterioration and Works Effects
The road deterioration (RD) and works effects (WE) models in HDM-4 Version 2 have been updated in accordance with the specification provided by PIARC. For bituminous pavements, the changes include improvements to the pothole progression model, an updated rut depth model,
improved user-calibration of the RD models, and updated WE models for patching and preparatory work effects. For unsealed roads, the most significant changes are the introduction of three different grading types (non-mechanical, light mechanical, and heavy mechanical grading), and improved calibration of the unsealed roughness model using section calibration factors and workspace configuration parameters.

4.2 Road User Effects
The Road User Effects (RUE) model in HDM-4 Version 2 has been updated in accordance with the specification provided by PIARC. The changes include the following: the engine speed model; parts consumption modelling; the constant service life model, which has been changed so that it no longer depends upon the percentage of private use; and a major update to the modelling of vehicle emissions.
5.0 IMPROVED USABILITY AND CONFIGURATION
5.1 Intervention Triggers for Road Works
The definition of the triggering logic of work items and improvements has been simplified and improved by the introduction of an improved intervention editor. The main areas of improvement are as follows:
- The need to select a scheduled or responsive intervention mode for a work item has been removed.
- The predefined limit parameters associated with the triggering logic are now optionally entered in the intervention editor as part of the main trigger expression.
- The triggering of works has been extended to allow the combination of AND/OR logic operators.
- Works can now be scheduled to occur in set years rather than just periodically.
- The user is no longer constrained to select a trigger attribute from a pre-defined list; in fact, any trigger can be used with any work type.

5.2 User-Interface for Defining Investment Alternatives
The user interface for the definition of analysis alternatives has been redesigned to reduce the number of dialogs and buttons involved, to improve navigation through the alternatives in a familiar style, and to give the user an improved view. The new user interface allows the user to navigate through the alternatives and their assignments using a view similar to the Windows Explorer directory navigation tree, and uses a context-sensitive spreadsheet-type view that facilitates easier assignment of maintenance and improvement standards.

5.3 The Model Dynamic Link Library Architecture
The model architecture has undergone some revision to improve maintainability and flexibility, and to allow future customisation. Some parts of the analysis framework have been revised
to take advantage of these architectural improvements, although these changes will not be visible to general users.
5.4 Post-Improvement Maintenance Standards
It is now possible to assign a maintenance standard to be applied after a road improvement standard has been applied (i.e. the maintenance standard will only be applied if the associated improvement is triggered). This facility is implemented in the new user-interface for defining alternatives.

5.5 Improvement Effects
After-work attributes for some improvement effects can now be defined either in terms of the change in attribute value or in terms of the final value of the attribute (i.e. either in relative or absolute terms). This is intended to make improvement standards less section-specific, so that they can be applied to a group of sections.

Temporary Exclusion of Road Sections from a Study
When setting up a project analysis it is now possible to select a section for the study, assign the traffic growth set and define its alternatives, but then exclude it from analysis without loss of data (i.e. traffic, alternatives, etc.). This was identified by users as a useful function if several sections have been selected in a project analysis and there is a need to focus on defining and refining the assignments of one section at a time, without the overhead of analyzing all the other sections each time.
5.6 Calibration Sets
Calibration sets have been introduced to allow users to define sets of section calibration coefficients (i.e. a calibration item) for the range of pavement types commonly found on their road network. Road sections which have the same characteristics can all use the same calibration. The process of defining a section has therefore been simplified, as a user now only has to select an appropriate calibration item for the section's known characteristics rather than supply values for all the calibration parameters.

5.7 Improved Configuration
A new HDM-4 data type has been provided to allow the user to model accident effects separately from speed flow types. An explanatory graph has been added to the user interface to explain the relationship between the capacity characteristic parameters. To reflect the correlation between road type and capacity characteristics, and to improve consistency, the "number of lanes" parameter has been moved from the road section to the Speed Flow Type item. A graph is now shown on this dialog to reflect the flow distribution data entered by the user. As the user changes this data, the graph changes accordingly. The graph is intended to improve user feedback and to engender understanding of the effects of the flow distribution data.
6.0 IMPROVED DATA HANDLING AND ORGANIZATION
6.1 Updated Database Technology
HDM-4 uses an object-orientated database to store its local data. HDM-4 Version 2 has been updated to use the latest version of this database to ensure that the latest developments and enhancements are available, and that continued support and backup from the provider remain accessible.
6.2 Redesign of New Section Facilities
A new approach allows new sections to be reused across studies and alternatives by defining new sections within the work standards folder in the workspace, and assigning them to alternatives using the new user-interface for alternatives.

6.3 Traffic Redesign
The management and entry of traffic-related data in HDM-4 has undergone a number of changes that affect road networks, sections, vehicle fleets and the three analysis modes. The traffic data for a section is now defined for each section within the road network. To enable this to take place, a road network is associated with a vehicle fleet. A user can enter multiple years of traffic data, which is now defined in terms of absolute AADT values. A traffic growth set defines how the traffic grows over time; it is defined within the vehicle fleet and assigned to a section within an analysis. The user interface for traffic growth sets is similar to that used in Version 1 for the definition of normal traffic. As growth sets may be used to define the traffic growth characteristics of multiple studies, the periods are defined as relative years rather than absolute years. These improvements allow the traffic data for a section to be common to each analysis in which the section is included, and typical traffic growths to be reused in each analysis. When creating a new analysis a user now only selects the road network to be used, as the vehicle fleet is associated with it.
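The idea of a traffic growth set defined in relative years can be illustrated with the small sketch below, in which annual percentage growth rates per vehicle type are applied to a section's baseline AADT. The vehicle types, rates and values are invented for the example and do not reflect HDM-4 data structures.

    # Hypothetical traffic growth set: annual growth rates per vehicle type,
    # keyed by the relative analysis year from which each rate applies.
    growth_set = {1: {"car": 0.05, "truck": 0.03},
                  11: {"car": 0.03, "truck": 0.02}}
    baseline_aadt = {"car": 1800, "truck": 350}   # absolute AADT entered for the section

    def aadt_in_year(vehicle, year):
        """Compound the baseline AADT forward using the rate in force each year."""
        aadt = baseline_aadt[vehicle]
        rate = 0.0
        for y in range(1, year + 1):
            rate = growth_set.get(y, {}).get(vehicle, rate)
            aadt *= 1.0 + rate
        return aadt

    print(round(aadt_in_year("car", 15)), round(aadt_in_year("truck", 15)))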
6.4 Report Management
Flexible reporting is important to view the results of an analysis and to present the results. HDM-4 Version 2 supports user-defined reports by using Crystal Report templates, and the management and organisation of these has been improved.

7.0 IMPROVED CONNECTIVITY
7.1 Run-Data in Microsoft Access Format The run-data produced by HDM-4 during an analysis is now output to a single file in Microsoft Access format. The main benefit of this change is that the use of the Access format
makes it easier for end-users to access the run-data with widely available software products (such as Microsoft Access and Microsoft Excel) and easier to share with other users. For users who wish to view the run-data but do not have an HDM-4 licence, a free tool, the HDM-4 Version 2 Report Viewer, will also be available.

7.2 Import/Export in Microsoft Access format
The import/export data produced by HDM-4 is now stored in a single file in Microsoft Access format, and replaces the multiple *.dbf and *.hdbf files of HDM-4 Version 1. The main benefit of this change is that the use of the Access format makes it easier for end-users to access the data with widely available software products, and easier to share with other users.

7.3 Import Validation
An import wizard has been introduced that guides the user through the process of importing externally-defined data into HDM-4 Version 2. Previously, no validation of the imported data was performed, and values that were outside the allowable range could produce numerical errors when an analysis was subsequently performed. HDM-4 Version 2 introduces the optional validation of vehicle fleet and road network data for incorrect values as the data is being imported.

8.0 SUPPORT TO EXISTING USERS
It has been recognised by the ISOHDM that the existing data used with HDM-4 Version 1.3 is valuable to an organisation, and therefore, as part of HDM-4 Version 2, a tool has been developed to aid the migration of this data into a format that can be used within the improved analysis framework.
The transition to HDM-4 Version 2 will require some recalibration of the RD and WE models to ensure that the updated technical models are correctly adapted to local conditions, and existing studies will need to be reviewed to take full advantage of the new features available.

9.0 CONCLUSION
The paper has presented the major improvements that have been incorporated in HDM-4 Version 2. These improvements relate to new analysis modules that enhance HDM-4 applications. They include sensitivity analysis, budget scenario analysis, road asset valuation, multi-criteria analysis, and estimation of social benefits. In addition, there are several software enhancements, including: improved connectivity to other databases, simplified import/export of data to/from HDM-4 with data import validation, updated database technology, redesign of the user interface, and enhanced report management. There have also been significant improvements to the technical models, including revisions to the bituminous Road Deterioration and Works Effects models, and several enhancements to the Road User Effects models. Version 2 also introduces the concept of Calibration Sets to allow users to
define calibration coefficients for the range of pavement types commonly found on their road networks. HDM-4 is the de facto international standard tool for analyzing road sector investments. HDM-4 is now used by all of the major international financing institutions, such as the World Bank, the UK Department for International Development, the Asian Development Bank and the African Development Bank, to assess their financing in the roads sector.

REFERENCES
Cafiso, S., Di Graziano, A., Kerali, H.R. and Odoki, J.B. (2002), Multi-criteria evaluation for pavement maintenance management using HDM-4. Journal of the Transportation Research Board No. 1816, National Academy of Sciences, Paper No. 02-3898, pp. 73-84, Washington, D.C.
Kerali, H.R. (2000), Overview of HDM-4. The Highway Development and Management Series, Volume 1. PIARC World Road Association, Paris, France, ISBN 2-84060-059-5.
Kerali, H.R. (2003), Economic appraisal of road projects in countries with developing and transition economies. Transport Reviews, vol. 23, no. 3, pp. 249-262.
Odoki, J.B. (2003), Specifications for road asset valuation in HDM-4. International Study of Highway Development and Management, University of Birmingham, UK.
Odoki, J.B. (2003), Implementation of multi-criteria analysis in HDM-4. International Study of Highway Development and Management, University of Birmingham, UK.
Odoki, J.B. (2002), Implementation of sensitivity and scenario analysis in HDM-4. International Study of Highway Development and Management, University of Birmingham, UK.
Saaty, T.L. (1990), The analytic hierarchy process: planning, priority setting, resource allocation. RWS Publications, Pittsburgh, PA.
The Institute of Asset Management (2002), International Infrastructure Management Manual, Version 2.0, United Kingdom Edition, UK.
TRRL Overseas Unit (1988), A guide to road project appraisal. Road Note 5, Transport and Road Research Laboratory, Crowthorne, Berkshire, UK.
CHAPTER TWO ARCHITECTURE
SPATIAL AND VISUAL CONNOTATION OF FENCES (A CASE OF DAR ES SALAAM, TANZANIA)
S. Kalugila, Department of Architecture, UCLAS, Tanzania
ABSTRACT
Increased immigration of people into cities, coupled with escalating urban poverty and unemployment, has generated social and economic problems which are associated with a rise in burglary, theft, mugging and rape. The demand for security has led to an increase in fences, especially for inhabitants of high and middle income status. This paper is an attempt to contribute towards addressing the spatial and visual implications of fencing that result from fences erected around properties. Using interviews and observation, the Sinza area was taken as a case to examine streetscapes as well as outdoor spaces in fenced properties. The discussion is based on the types of fences, architectural relationships within the built environment, the way people perceive fences, and the role that the legal framework plays in regulating fences. The last part gives suggestions on the way forward towards helping to create a harmonious living environment between fences. Keywords:
Fences; Built environment; Concrete; Cement; Urbanisation; Neighbourhood; Building permit; Finishes; Legal framework; Architecture.
1.0 INTRODUCTION
Communities in most parts of the world are increasingly living in a rapidly urbanising world. The pace of urbanisation is increasing in countries all over the world, Africa included. In urban centres in Tanzania, especially in the city of Dar es Salaam, rapid urban growth has outstripped public capacities to manage urban growth and promote public welfare, including the security and safety of urban inhabitants. Dar es Salaam, being the commercial city of Tanzania, has the biggest urban agglomeration and accommodates most social and economic resources. It accommodates 25% of the total urban population, i.e. 2.5 million out of 10 million. It is also an industrial centre with most Tanzanian
industries and the highest level of social services, including educational and health facilities (Lupala, 2002). Due to increasing insecurity, fences have been increasing in most residential areas, especially those inhabited by middle and high income settlers. Erection of a fencing wall (fortification) around a house or property has become a common feature in most housing areas and even in town neighbourhoods, which predominantly accommodate offices and commercial functions. Living between and within fences has become a way of life and, as a result, one often hardly notices them or takes note of their implications, even though they are a dominant feature in residents' daily lives. Fences seem to be important to the way we think about and value land or property and the protection one enjoys or expects his or her land or property to provide. Fences can define, protect, confine and liberate properties. Fences can also tell where residents belong and who one is in relation to others. This is often because fences vary in size, quality and complexity; most of these depict the extent of protection desired and the financial and social status of an individual. On the other hand, the public and the private spaces can be disjoined by a fence. It also announces who has, or who is denied, access to a certain property. Therefore fences also shape community and individual identity. At the same time, they protect assets from encroachment by unwanted visitors. "Though fence ranks among the minor matters of a building, it is far from being unimportant. Without it no residence can be properly protected or regarded as complete" (Dreicer, 1996:21). Amongst most people, particularly the affluent, living in a house which does not have a fence is considered both risky and a manifestation of incompletion of the building structure. In most urban areas the demand for, and creation of, fences seems to be increasing with time, an activity which at present remains largely unregulated by the local authorities in most urban areas in Tanzania, including Dar es Salaam. Provision of fencing in most cases seems to be largely an afterthought which often distorts the quality and visual value of the resulting built environment. The variety of designs, colours, forms and heights creates an inharmonious relationship between fenced buildings and their surroundings. The main reasons for fencing include security, boundary definition, privacy, and portraying status.

2.0 METHODOLOGY
A case study approach was used in which both qualitative and quantitative data collection methods were applied in Sinza. The quantitative method provided measurable facts and data, while the qualitative methods answered questions that provided data on people's views and perceptions of fences (Kalugila, 2005).
Sinza is a settlement located in Kinondoni Municipality, about eleven kilometres from the Dar es Salaam city centre. It is a settlement where both middle and high income earners live. There is a relationship between fences and income, because the more affluent one is, the more security a person needs, and the stronger the identity he or she will often employ to distinguish himself or herself from lower-income groups. Considering this factor, Sinza is regarded as an information-rich area.

3.0 DISCUSSION
The discussion is based on the types of fences that were found in the study area, the resulting architectural relationships, fences from the owner's and observer's perspectives, the effect of fences on the expected street functions, the resulting street life, and the role of the legal framework in relation to the existing fences.
3.1 Types of Fences
Variation in fence type has an impact on the visual quality and architecture of a street. Fences appear different mainly because of the materials used as well as the construction techniques applied. The case of Sinza demonstrated that the types of fences found were dominated by cement products in different designs. Concrete blocks could either be used singly or mixed with perforated blocks or iron bars. Fences made of bricks or stones were rarely found. Finishes were of either rough or smooth texture. Those who could afford plastering used different colours, including mixtures of grey, cream, red, black, blue or brown. Those who could not afford plaster left the walls fair-faced. Vegetation was used, but most of it was not properly taken care of. In many cases fences were built as an afterthought; as a result, little relationship existed between buildings and their fences. Figure 1 shows some of the existing types of fences. In relation to this there was a need to look into the kind of fencing architecture house owners come up with; the following section explains this further.
Fig. 1: Types of fences

3.2 The Resulting Architectural Relationship Between Fences and Houses
In an attempt to investigate the architectural coherence and unity between the fence and the enclosed building, the following were noted from observation: 33% of the visited houses had fences exhibiting a different language from the house in terms of colour, caps, perforations (openings) and materials; 50% had a resemblance only in colour; 37% had similarity between the caps used for the wall and those used for parapet elements on the roof; 20% had a similarity in the design of perforation elements; and only 33% resembled the house in the iron bars used. The beauty of most places was distorted because there was no common element that unified the enclosed structure and the fence. While owners might build without a particular architectural impression in mind, at the end of the day the extensively varying streetscape, as well as the lack of visual harmony between the enclosed house and the fence, generates an unattractive urbanscape. At the same time, for those producing some of the fencing elements, too much variation reduces economies of scale. It also indicates that owners prefer contrasting materials, or that by the time the fence was constructed the elements used for the house were no longer available.

3.3 Fences from the Owner's versus Observer's Perspectives
Together with performing the expected functions, house owners felt that fences had their disadvantages. Out of thirty owners interviewed, 13% said that they were experiencing discomfort due to limited air circulation (considering the warm-humid climate of Dar es Salaam). This was particularly reported by those with solid wall fences, where the fences acted as walls blocking air movement into or out of the enclosed space. Other adverse effects reported to arise from fences were boundary disputes. These arose when the setting of a fence was not according to one's property boundary, i.e. in cases where fences were used to demarcate two properties of different owners.
Observers were the most affected by the visual link and the resulting street created by fences. The finding from the interview sessions was that, out of ten respondents, 50% said that high and solid fences created a sense of fear. Others said they felt claustrophobic when passing through a street with high fences.
The most frightening hours in walled paths or streets were reported to be late evenings and nights. This was because most gates were then closed and there were no lights. Life on the street is also affected by the kind of enclosures used in moulding it, as discussed further in the following section.

3.4 The Visual Link and Resulting Street Life
Fences contribute to the degree of visual linkage between a fenced area and adjacent areas. They can also affect the richness of income-generating activities on the streets. What made a street with fences along its sides lively or dead, attractive or unattractive, was the absence or presence of activities on the sides, such as shops and petty traders. This is summarised in Figure 2.
Fig. 2: Degrees of transparency in relation to street life

Not only did the activities generate income, but their presence enhanced security and made the street a lively place to walk, stroll or play, even for children. As Sinza is a part of the city which is characterised by mostly single-storey buildings, the existence of fences formed
strong edges which were visually pronounced, as opposed to a case where fences enclose high buildings. Together with supporting an active street life, streets are expected to perform their main function, which is transportation, including service provision. The existence of fences may hinder this, as discussed in the following section.

3.5 Effect of Fences on the Streets' Expected Functions
Service provision is an important factor in a residential area. Solid fences in the Sinza neighbourhood were found to be causing difficulty in the delivery of basic services. This was because services like garbage collection, fire brigade services, and sewage collection require big trucks that need wide roads. Such trucks often require sufficient space for turning. This was not always available when every house had a fence, some of them protruding into the public road reserve. If there are fire outbreaks, the impact of such problems could be catastrophic. Due to fencing walls, truck drivers could not turn; they had to reverse into the major road in order to turn around (see, for example, Figure 3). These fences were erected the way they were because of the lack of a proper legal framework to guide them.
Fig. 3: Dumping site, ghost neighbourhood and service difficulty

3.6 The Legal Framework in Relation to Existing Fences
The Tanzania Building Regulations (1997) do not directly address the construction of permanent fences. In Dar es Salaam, normally when a building is designed, the Municipal Council is supposed to approve drawings which include the detailed design for the house and any other structure that is to be built on the plot. In this study, a question was designed to elicit respondents' views on this matter. From the interviews, seventeen (57%) out of 30 respondents said they had their house plans (without fence plans) approved by the Municipal Council, even though eighteen (60%) of the total had the fence built after the construction of the building was complete. This implies that, although slightly more than half had their house plans approved, the fences were not checked or approved. In cases of inherited buildings, it was difficult to know whether any building permit had been obtained.
The discussion with the local authority suggested that the council might not take action if the fence erected does not disturb peace and harmony. In other words, one may erect a fence and justify it as long as it is not provocative to anyone. Discussions with interviewees suggested that some house builders were ignorant of the need to get plans for fences approved if they were not submitted with building plans. During a discussion with one of the house owners, it was learnt that some house owners did not see the need to apply for a permit for fencing. They built when and how they wanted because nobody came to inspect them. What is clear, however, is that about half of the buildings are built with building permits. Yet there are those who submit plans for buildings together with fence plans. This implies that, overall, few fence plans were submitted to or approved by the Municipal Council.

4.0 CONCLUSION AND RECOMMENDATIONS
This study has shown that fences are more than vertical elements in a built environment. They have functions and exist in varieties in Sinza, depending on one's socio-economic situation, residential density, the purpose of the fence and exposure to alternatives. The functions of fences which were uncovered in the study area include privacy, security, exhibiting one's socio-economic status, and boundary definition. The limited awareness and knowledge people have about fences, their impacts, and the options available are some of the problems which lead to the erection of fences which are not in consonance with public requirements or in harmony with local environmental conditions. This study has empirically demonstrated that fences shape the built environment even though people knew and cared very little about them. Their implications were many, including environmental degradation, effects on service provision, distortion of the aesthetics of an area, and blocking of the visual continuity of a space.
From the foregoing discussions the following recommendations are made:

4.1 Need for a Clear Legal Framework
As noted, the existing legal framework is somewhat paradoxical about the approval of the design and construction of fences in residential areas. Therefore, a review of the current legislation, namely Cap 101 (Township Rules of 1920) and the Tanzania Building Regulations of 1997, together with the Town and Country Planning Act of 1956 (revised in 1961), is needed so as to make it explicit that fences require an approved plan and a permit issued by the Local Authority. Specifications and regulations for fences also have to be worked out under the revised Cap 101.

4.2 Decentralising Development Control Enforcement to Grass-roots Level
At present, Local Authorities are responsible for development control through the building inspectors in the Ward. The leaders (Ward Executive Officers) and Mtaa (sub-ward) Secretaries
or local residents are not involved in enforcing and monitoring land development, including house and fence construction activities, even though they are the victims of poor construction of fences, especially in cases where public interests have been disregarded. It is therefore recommended that, while the regulations and laws are formulated by Local Authorities and Central Government, enforcement should be a collective activity in which residents take a lead. This also underscores the need for public awareness creation to make community members aware of the pros and cons of varying fence types and the minimum conditions for erecting them, including respect for public interests.

4.3 Awareness Creation
It was also observed that many home builders were unaware of fence construction regulations, particularly the condition that requires them to submit fence plans for approval by the Local Authority before construction starts. It is important that, once the existing regulations are reviewed, a public awareness campaign is carried out. Builders should also be educated about the adverse effects of fences and options for reducing them. House builders should be encouraged to submit fence designs when applying for building permits, even though the construction might be done much later, so that the effects are considered by the authorities for approval.

REFERENCES
Dreicer, K. (1996), Between Fences. National Building Museum and Princeton Architectural Press, USA.
Kalugila, S. (2005), Fences and Their Implications in the Built Environment: A Case of Dar es Salaam, Tanzania. Oslo School of Architecture, Oslo. Unpublished Masters Thesis.
Lupala, J. (2002), Urban Types in Rapidly Urbanising Cities: Analysis of Formal and Informal Settlements in Dar es Salaam, Tanzania. Royal Institute of Technology, Stockholm. Published PhD Thesis.
United Republic of Tanzania (1920), Township Ordinance (Cap 101). Dar es Salaam: Government Printer.
United Republic of Tanzania (1956), Town and Country Planning, Cap 378, Revised in 1961. Dar es Salaam: Government Printer.
United Republic of Tanzania (1997), The Tanzania Building Regulations. Dar es Salaam: Government Printer.
A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 1: DEVELOPMENT
Adam Goliger, CSIR, P O Box 395, Pretoria 0001, South Africa
Jeffrey Mahachi, NHBRC, P O Box 461, Randburg 2125, South Africa
ABSTRACT One of the biggest challenges and economic achievements of South African society is the development of adequate housing for a large portion of its population. Despite the large pool of information on house construction (i.e. the correct applications of materials and technologies as well as minima standard requirements) available, unacceptable construction quality is apparent throughout the entire spectrum of housing. This issue requires an urgent attention and intervention at a national level. The paper presents a development process of a tool for post-construction quality assessment of houses, referred as Building Quality Index for Houses (BQIH). Keywords: BQIH; housing; quality assessment; quality systems.
1.0 QUALITY OF HOUSING IN SOUTH AFRICA
In South Africa a large pool of technical and legislative information on good house-construction practices is available. Various phases of the development process (i.e. land acquisition, planning, design, etc.) are supported by relevant legislative and technical norms. Nevertheless, inadequate quality is apparent throughout the entire spectrum of housing (i.e. low- to high-income dwellings). Figure 1a is a close-up view of the base of a load-bearing column supporting a second-floor bay-windowed room of an upmarket mansion in Pretoria. At the time the photograph was taken, the columns were already cast with the second floor in place, but almost all bricks underneath the base were loose. Figure 1b demonstrates an unacceptable practice of using loose bricks as infill of a foundation for a low-income housing unit. Despite the huge housing stock in South Africa (estimated at nearly 10 million units, including informal), there are no formal mechanisms, methodology or socially accepted platform either for proactive and consistent monitoring of its quality or for the development of relevant statistics. A need is therefore apparent for the development and implementation of a comprehensive and straightforward quality-appraisal system to measure housing quality standards.
Since 1994 the issues of the quality of house construction and risk management have been the concern of the National Home Builders Registration Council - NHBRC (Government Gazette, 1998; Mahachi et al, 2004). In 2003 the NHBRC commissioned the development of a system for assessing the quality of houses, and this was undertaken at the Division of Building and Construction Technology, CSIR. The philosophy and principles of the proposed Building Quality Index for Houses (BQIH) were based on an internationally accepted quality control scheme, Conquas 21 (1998), which was developed and implemented by the Construction Industry Development Board of Singapore. However, owing to the pronounced contextual and technological differences between the residential sectors of both countries, the two systems differ significantly.
Fig. 1a: Support of a column
Fig. 1b: In-fill of a foundation
2.0 DEVELOPMENT PROCESS OF THE BUILDING QUALITY INDEX FOR HOUSES (BQIH) The development process of the BQIH system is summarised in the flow chart presented in Figure 2. Various steps of the above process will be presented in the following sections.
Initially, following several interactions with the Singapore Construction Industry Development Board (CIDB), Conquas 21 has been analysed (blocks 1 and 2 in Figure 2) in the context of the South African situation (block 3). On the basis of that, the principles of the proposed system applicable to local conditions were identified (block 4). Based on the review of South African practice and standards (block 5) the scoring system (block 6) was developed. A series of initial appraisals have been carried out (block 7), and their analysis (block 8) served as the basis of an iterative process of calibrating and improving the scoring system (block 6) and developing scoring sheets (block 10). The information obtained from the analysis (block 8) also formed inputs to developing the user manual (block 9). A pocket-size-computer programme for calculating the scores (block 11) was developed. A pilot study (block 12) was undertaken in order to evaluate the applicability and relevance of the proposed system. The IT application system (block 13) was used to develop relevant statis-
tics on the quality of houses (block 14). The pilot study and its results are presented in a subsequent paper.
[Flowchart: analysis of CONQUAS 21 and interactions with CIDB Singapore; South African socio-economic context; evaluation of practices and standards; principles of BQIH; score sheets, components and weightings; trial appraisals, scores and observations; User Manual; score sheets; BQIH IT application system; pilot study analysis and statistics on quality.]
Fig. 2: Schematic flowchart of the development process

2.1 Conquas 21
Over the last 50 years or so, the focus and emphasis of the home-building industry worldwide has gradually shifted from quantity to quality in human shelter. Most countries have developed and introduced sets of policies, regulations and documentation relevant to their particular situation and aimed at safeguarding the interests of the consumer. Nevertheless,
relatively few quality-assessment systems are in place to monitor and capture aspects of construction quality in a structured and consistent way. Perhaps the most internationally accepted and established is the Construction Quality Assessment System (Conquas), which was launched in 1989 in Singapore where, until recently, nearly two thousand construction projects have been assessed. Within eight years of its implementation the average Conquas score improved steadily from about 68 to 75, which reflects a significant improvement in the quality of construction in Singapore (Ho, 2002). In view of its attractiveness, an analysis of the applicability of Conquas 21 to South African conditions, and in particular this country's house-construction industry, was carried out. Several contextual differences were identified, as summarised below.
- Geographical/climatic: Singapore is a fairly small and flat tropical island experiencing uniform climatic conditions, dominated by moist coastal air and cyclonic wind/rain events. South Africa's land surface is significantly larger, with a wide spectrum of altitudes, geological formations and climatic zones.
- Socio-economic: The population of Singapore is largely of an Eastern cultural background renowned for perfectionism, perseverance and attention to detail. The country experiences a high rate of employment, as well as high living and educational standards, and has access to a large pool of skilled/educated labour. Unfortunately, these socio-economic conditions do not prevail in South Africa.
- Spatial: Like elsewhere in Asia, and as a result of the lack of urban space and the lifestyle expectations of the community, most of the development in Singapore is high-rise. In South Africa, apart from the centres of large cities, most housing development is single-storey.
- Developmental: The entire Singaporean development and construction industry is centralised and strictly controlled. This is not the case in South Africa.
- Technical: Technical differences refer to general standards and tolerances, adherence to those requirements, and the general level of technical skills and professional inputs.
2.2 Principles of BQIH
Several aspects of the proposed system applicable to the South African situation and its needs were considered and investigated. These led us to the belief that:
- The system should follow the broad philosophy of Conquas in respect of its aims, its structure (i.e. division into building components) and the principle of relative weights.
- Both structural and architectural aspects of house construction should be considered. However, in line with the NHBRC mandate, the system should focus on assessing aspects of the quality of basic construction that affect the structural performance and safety of housing units.
- Important aims applicable to the South African situation were identified as:
  - the provision of an objective method for evaluating the performance of building contractors,
  - the identification of good and bad construction practices, and
  - the identification of the training needs of contractors.
- The system should be inclusive of the entire spectrum of the housing industry, from the low- to the high-income sector.
- The system should be self-contained, straightforward, concise and practicable.
Our research has shown that a large pool of information on required minimum construction standards is available in South Africa in the relevant codes of practice, building regulations, construction guides and requirements of national/local authorities. The problem is that this information is often not implemented, not easily accessible or understandable for less experienced people, and in some cases even confusing.
- The appraisal should be based on visual assessment of relevant items, assuming access to and verification of relevant technical documentation pertinent to the site. No destructive investigations and testing will be permitted.
- Following the initial research, one of the critical matters identified was the issue of subjectivity of assessment, with the obvious counter-measure being the appropriate training of the inspectors. Another tactic in this regard, which was adopted, was to introduce a relatively high number of items to be scored.

2.3 Benefits
There are several important benefits from implementing the proposed system. These benefits relate to various features of society and the relevant role-players, as summarised below:
- Perhaps the most obvious are the benefits to the consumer, i.e. the house owners.
- The contractors will also benefit from the system, which will serve as a tool to identify the problem areas in their business. Good performers can also use their BQIH Index for marketing purposes.
- For local authorities the most important benefit is the ability to make an independent comparison of the relative performance of various contractors involved in the construction process, and the introduction of a quality-driven management system for awarding contract work.
- From the perspective of the national authorities, implementation of the system will provide a platform for a comprehensive and consistent assessment of the quality of housing stock in South Africa. For low-income and subsidy housing, the statistical data obtained can form the basis for risk-assessment studies, as well as for budgeting and the allocation of resources (i.e. investment in new developments vs the maintenance and upgrading of existing stock).

3.0 SCORE SHEETS
The BQIH system contains score sheets, which include building components and items, as well as the User Manual.
3.1 Building Components
Five basic building components were adopted, as shown in Table 1.

Table 1: Building components
Reference | Description | Weighting (%)
1 | Foundations | 30
2 | Floors & stairs | 15
3 | Walls | 25
4 | Roofs | 20
5 | Electrical & plumbing | 10
3.2 Building Items
For each of the components listed in Table 1, a relevant list of items has been developed. The role of this list is to identify all aspects of a specific building component that influence or determine the overall quality performance of the component (e.g. plaster and brickwork to determine the quality of the walls). The process of identifying the relevant items was based on the initial comparative research work carried out in 2000-2002, and supported by input from Boutek's experts in relevant disciplines. The allocation of relative weightings followed an iterative process based on Boutek's experience in building pathology and trial appraisals of houses.

3.3 Assessment Criteria
The investigation into a suitable and reasonable set of assessment criteria was preceded by a comprehensive review of South African sources of technical data regarding minimum quality requirements in construction. This involved a review of relevant codes of practice, technical guides, specifications and national regulations. Most of the specifications appearing in various sources were found to be fairly consistent, although some differences are present. Direct comparison of them is often difficult in view of additional cross-referencing, and conditions/stipulations in the applicability of various clauses. This is demonstrated in Table 2, in which a sample comparison of selected issues is presented. (Also included are the corresponding stipulations of Conquas 21 .) Our interactions with small building contractors revealed that some information on minimum requirements is not readily accessible, while other information is difficult to interpret. Certain information given in technical specifications is impractical and deliberately ignored (or bypassed) by contractors.
36
Goliger & Mahachi
Table 2. Comparison of minimum requirements/allowable deviations NHBRC
[ref. 9] 10
50
SABS 0100
SABS 0155
S A B S 0 1 0 7 (1)
SABS 0400
Conquas
[ref. 8] [ref. 9] [ref. 6] [ref. 7] Minimum strength of concrete in foundations (MPa) 10
12o-6o
Minimum concrete cover of reinforcement (mm) I { t Deviations from level in finished floors (ram) 3-10 3mm over over 3m 2m length(2) length
10 over 6m length or 6 over 3m length (1) application of ceramic tiles (2) depending on external conditions _
[ref. 3] According to specs {
25 1 per lm, max deviation 10
Appraisal In Conquas 21 each of the components contains a detailed list questions regarding compliance with specific items, and facilitates only two options of scoring, namely: 0 for noncompliance and 1.0 for compliance. It was felt that in the South African context the direct application of this approach would be too restrictive and, in fact, could disqualify large portions of housing units. Furthermore, our initial trial tests using Conquas 21 indicated that this type of philosophy is suited for the assessment of individual aspects of finishes, and tends to distort the appraisal of structural elements as well as items of a more generic nature. It was therefore decided that, for certain building items (where possible and feasible), other than 0 and 1 ratings, to introduce an intermediate rating of 0.50, which enables a more graduated scoring of an item. This rating refers to the quality that is generally acceptable, with a few permissible non-compliances, which have been noted. The amount of noncompliances allowed for each type of item is specified in the User Manual, which is discussed in Section 4. Apart from human resources, the implementation of the present system requires a fairly limited amount of basic tools/instruments, which include a measuring tape, a spirit level, a torch, a ladder and a camera. The appraisal of houses is based on visual assessment of their elements, combined with verification of relevant documentation. Scoring of a component/unit is carried out once only, without any provision for re-working and subsequent re-scoring of a specific unit. (This is in line with the philosophy of CONQUAS 21 - i.e. to encourage a culture of 'doing things correctly right from the beginning'.)
4.0 USER MANUAL A self-contained User Manual has been developed to support the use of score sheets. This was done in such a manner that the headings and paragraph numbers in the manual correspond to those of the respective items in the score sheets. The manual includes a straightforward and practical guide to the compliance of specific items on the score sheets.
5.0 IT APPLICATION
A Microsoft-compatible computer system has been developed to accommodate electronic handling and calculation of the scores, as well as pre-processing of the data for further analysis. The system has been loaded into a pocket-size computer to enable on-site data capture and central data storage of all captured information. Upon the completion of a project, data from several pocket-size computers can be downloaded and synchronised with the main database. These data can subsequently be analysed.

6.0 CONCLUSIONS
The paper has presented a summary of the principles and process of development of a post-construction appraisal system for houses in South Africa, referred to as the Building Quality Index for Houses. The BQIH system offers a straightforward and concise assessment tool for the quality assessment of houses across the entire spectrum of the housing market in South Africa. A pilot assessment study on the implementation of the BQIH system is presented in a subsequent paper.

7.0 ACKNOWLEDGEMENTS
The development of the system has been made possible by the contributions of a large number of people. We would like to single out (in alphabetical order) the commitment and contribution of: Messrs M Bolton, W Boshoff, X Nxumalo, M Smit, F Wagenaar, T van Wyk and Drs M Kelly, J Kruger and B Lunt.

REFERENCES
Government Gazette (1998), Housing Consumer Protection Measure Act 1998, Act No. 95, 1998. Government Gazette No. 19418, Cape Town, RSA.
Mahachi, J., Goliger, A.M. and Wagenaar, F. (2004), Risk management of structural performance of housing in South Africa. In: A. Zingoni (ed.), Proceedings of the 2nd International Conference on Structural Engineering, Mechanics and Computation, July 2004, Cape Town.
CONQUAS 21 (1998), The CIDB Construction Quality Assessment System. Singapore, 5th Edition.
Ho, K. (2002), Presentation, Senior Development Officer, Quality Assessment Dept., Building and Construction Authority, Singapore.
NHBRC (1999), Home Building Manual. National Home Builders Registration Council, South Africa.
SABS 0400-1990 (1990), The application of National Building Regulations. Council of the South African Bureau of Standards, Pretoria.
SABS 0100-1992 (1992), Part 1, Code of practice for the structural use of concrete: Design. Council of the South African Bureau of Standards, Pretoria.
SABS 0155-1980 (1994), Code of practice for accuracy in buildings. Council of the South African Bureau of Standards, Pretoria.
SABS 0107 (1996), The design and installation of ceramic tiling. Council of the South African Bureau of Standards, Pretoria.
A BUILDING QUALITY INDEX FOR HOUSES (BQIH), PART 2: PILOT STUDY
Jeffrey Mahachi, NHBRC, P O Box 461, Randburg 2125
Adam Goliger, CSIR, P O Box 395, Pretoria 0001
ABSTRACT This paper is second in a series of two. The first paper summarises the development process of a Building Quality Index for Houses (BQIH) and the current one describes the process and selected results of a pilot study in which the BQIH system has been used.
Keywords: BQIH; housing; quality assessment; site
1.0 INTRODUCTION
The current paper is the second in a series of two which describe the proposed quality assessment system referred to as the Building Quality Index for Houses. In the first paper the development process of the proposed system was described, and the current paper presents its implementation on the basis of a pilot study. The aim of the pilot study was to test the operation of the BQIH system and assess its applicability and usefulness for a 'post-construction' appraisal of housing stock in South Africa. An assessment of nearly 200 houses was carried out in the course of the project.
2.0 HOUSES AND SITES
About 180 of the houses were 'subsidy', and 20 were 'non-subsidy'. (Subsidy housing refers to developments cross-subsidised by the relevant state authority.) All housing developments were located in the central and most industrialised province of South Africa, Gauteng. (The subsidy houses were located at Lotus Gardens in Pretoria West, Olievenhoutbosch in Midrand, and Johandeo, near Vanderbijlpark. The non-subsidy houses were selected at Cornwall Hill Estate, located to the south of Pretoria, and Portofino Security Village in Pretoria East.) Figure 1a presents a general view of the subsidy-housing scheme at Lotus Gardens, and Figure 1b a typical unit with an area of 32 m2. The Cornwall Hill, and to a lesser extent Portofino, developments represent the other 'end' of the housing spectrum in South Africa. One of the units, with a value of several million rand (i.e. more than US$0.5 million), is presented in Figure 1c. A comparison of Figures 1b and 1c clearly demonstrates the flexibility and inclusiveness of the proposed quality assessment system in respect of its ability to make a non-biased
appraisal of the relative construction quality achieved at seemingly non-comparable types of houses, constituting the extreme ends of the housing market in South Africa.

3.0 ASSESSMENT PROCESS
The assessment project was carried out during May and June 2004, well after the end of the rainy season. Nevertheless, an unexpected intense thunderstorm, which developed over the Midrand-Pretoria area during the inspection process, resulted in significant rainfall over Olievenhoutbosch and enabled us to validate our concerns regarding water penetration through the roofs and walls of the houses (see Section 4).
Fig. 1a: Lotus Gardens

Initially, a site-training session took place. This included the people involved in the development of the system and the assessors. The training was followed by a set of calibration tests. For these tests seven housing units at Lotus Gardens were selected and each of them was inspected independently by two assessors. The derived indexes compared well, with typical differences of between 2% and 5%.
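The paper does not state how the calibration differences were computed; the following minimal sketch (Python) shows one plausible comparison, assuming each assessor assigns a single overall index per unit. The unit labels and index values are illustrative only, not the pilot-study data.

# Hypothetical calibration check: compare the overall indexes assigned by two
# assessors to the same housing units and report the relative difference.
# The unit labels and values below are illustrative, not the pilot-study data.

pairs = {
    "unit_01": (64.0, 66.1),
    "unit_02": (58.5, 60.3),
    "unit_03": (71.2, 69.8),
}

for unit, (a, b) in pairs.items():
    mean = (a + b) / 2.0                  # reference value for the comparison
    diff_pct = abs(a - b) / mean * 100.0  # relative difference in percent
    print(f"{unit}: assessor A = {a:.1f}, assessor B = {b:.1f}, "
          f"difference = {diff_pct:.1f}%")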
Fig. lb: Unit type A
Fig. 1c: A non-subsidy house
4.0 GENERAL PROBLEMS OF SUBSIDY HOUSES
This section gives a summary of common issues and problems affecting the quality of the housing units, which were repeatedly evident during the inspections. The issues are not necessarily raised in order of their importance or prevalence.
4.1 Design Shortcomings
A few design shortcomings were observed. These relate to the roof support structure, inadequate overlap of the roof sheeting (Figure 2a), and a lack of attention to the problem of heaving soils affecting the water supply and disposal systems. Minor design inconsistencies were also noted.
4.2 Compliance with the Design and Minimum Specifications
Discrepancies between the design and the construction were observed. These relate to the presence and positioning of movement joints, the distribution and heights of internal walls (Figure 2b) and the installation of the sewerage system.
Fig. 2a Gap between sheets
Fig. 2b Height of internal walls
4.3 Completeness
At the time of the inspection process several housing units or their surroundings were incomplete, with respect to the external landscaping works, internal plumbing installations (wash-basins, toilets or taps) and glazing. (According to the site management, the latter omissions were precautionary measures against theft.) The incompleteness of some of the units offers an interesting insight into the advantageous nature and flexibility of the BQIH assessment system, which, despite these disparities, offers a fair platform for quality comparison of housing units.
4.4 Foundations
In principle, the assessment process at a post-construction stage does not offer adequate opportunities for foundation assessment and relies heavily on the availability of relevant geotechnical, engineering and concrete-supplier certifications. However, during the process of assessing completed units there was an opportunity to inspect a few neighbouring sites where the construction of foundation slabs was in progress. In some cases the geometry of the slabs did not comply with the design (Figure 2c), and unacceptable fill material and compaction were observed, together with insufficient depth of the foundations.

4.5 Water Penetration
Several issues observed during the inspection process indicate a fair potential for water penetration into houses (Figure 2d). These relate to the minimum height above the ground level, the water-tightness of walls and the roof cover.
Fig. 2c: Overhang of external walls
Fig. 2d: Water penetration
4.6 Other Problems
Other typical problems which were observed relate to:
• Faulty doors and window frames and/or their installation. These problems relate to the inadequate gauge of the sheeting combined with careless handling and installation of these elements (Figure 2e).
• Lack of tying of external and internal walls, structural cracks (Figure 2f) and unacceptable finish (plaster and paint) of internal walls.
• Poor quality of mortar, which typically crumbles between the fingers. (The origin of the cement used for the mortar is unknown and of questionable composition.)

5.0 GENERAL PROBLEMS OF NON-SUBSIDY HOUSES
Most of the non-subsidy houses reflect good (if not excellent) construction finishes. However, thorough investigations and discussions with occupants revealed that similar problems to those observed in the subsidy-housing sector occur. Typical problems related to non-compliance
with the design, insufficient compaction of in-fills, roof-leaks and inadequate waterproofing, structural cracks and improper installation of doors and windows. Most of the houses have architecturally pleasing but complicated roof geometries. Unfortunately such designs lead to insufficient or incorrect water flow over the roof surfaces and water penetration problems.
Fig. 2e: Re-installation of a frame
Fig. 2f: Structural crack

6.0 RESULTS OF SURVEY
In total, 179 subsidy and 19 non-subsidy houses were inspected and indexed. All scores obtained from the assessment of individual houses were transferred to a database.

6.1 Overall Index
An average index of nearly 65 (i.e. 64.98) was obtained, and Figure 3a presents the distribution of indexes obtained from the survey. It can be seen that the data follow a fairly well-defined trend, in which most of the indexes lie between 60 and 70. The number of houses decreases rapidly for indexes lower than 55 and higher than 75. An average index of 63.2 was obtained for the subsidy houses and 82.4 for the non-subsidy houses. A difference of nearly 20 points clearly indicates the disparity in quality of the product delivered to these two ends of the housing market.
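As an illustration of how such summary statistics can be reproduced from the raw scores, the short sketch below (Python) computes the overall and per-sector mean indexes and bins the indexes into 5-point classes for a histogram of the kind shown in Figure 3a. The record layout and sample values are assumptions; the actual database structure is not described in the paper.

from collections import Counter

# Illustrative records only; each entry is (overall index, sector).
houses = [
    (63.0, "subsidy"), (66.5, "subsidy"), (61.2, "subsidy"),
    (83.1, "non-subsidy"), (81.7, "non-subsidy"),
]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

overall = mean(idx for idx, _ in houses)
by_sector = {
    sector: mean(idx for idx, s in houses if s == sector)
    for sector in {"subsidy", "non-subsidy"}
}

# Bin indexes into 5-point classes (e.g. 60-65) for a histogram like Fig 3a.
bins = Counter(5 * int(idx // 5) for idx, _ in houses)

print(f"overall mean index: {overall:.1f}")
print("sector means:", {k: round(v, 1) for k, v in by_sector.items()})
print("histogram bins:", dict(sorted(bins.items())))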
Fig 3a: Distribution of quality indexes (all houses)

6.2 Comparison of Contractors
Table 1 is a summary report on the average index obtained by the five best quality achievers. It can be seen that the best quality construction was achieved by EG Chapman and SJ Delport, both operating within the non-subsidy sector. It can be noted, however, that the average index scored by Mr Ngobeni (subsidy housing) is not much different from that of Mr Delport (non-subsidy). This is encouraging, as it indicates the ability and scope for improvement of small/emerging builders.
Table 1. Top achievers in construction quality

Position | Builder        | Site             | No. of units | Average index
1        | EG Chapman     | Cornwall Hill    | 9            | 87
2        | SJ Delport     | Portofino        | 10           | 78
3        | Isaak Ngobeni  | Johandeo         | 11           | 70
4        | J Mbatha       | Olievenhoutbosch | 5            | 69
5        | Miriam Mmetle  | Olievenhoutbosch | -            | 68
6.3 Evaluation of Building Components
In Table 2 a comparative analysis of the average index values obtained for the various building components defined in the system is presented. It can be seen that, on average, the lowest index (60% of the maximum score) was measured for roof structures, followed by walls (64% of the maximum score). Foundations and floors reflect overall results in the region of 70% of the maximum score and higher.
Table 2. Summary of building components

Component ref. number | Description of component | Average index obtained | Maximum score | % of maximum score achieved
1 | Foundations            | 20,6 | 30 | 69
2 | Floors & stairs        | 11,3 | 15 | 75
3 | Walls                  | 16,1 | 25 | 64
4 | Roofs                  | 12,0 | 20 | 60
5 | Electrical & plumbing* | 9,0  | 10 | 90
* For this comparison only the non-subsidy houses were considered.

The results for the electrical and plumbing works do not reflect the true site situation, since for this summary only the non-subsidy houses were considered. This is because an electricity installation was not provided in all subsidy houses and in many cases the plumbing installation was incomplete. In Figure 3b the probability distribution of the overall indexes obtained for walls is plotted. It can be seen that the peak of the distribution corresponds to an index of about 16 and that the distribution tails off gradually towards lower indexes. A similar trend was observed in respect of floors.

Fig 3b: Distribution of indexes obtained for walls

The above trend offers an important insight into the current quality standards relevant to these building components, and also indicates a possible strategy for improvement, namely that future efforts should be directed at improving the lower standards (i.e. shifting the tail of the distribution to the right). A similar shift in the peak of the distribution towards the right would require much more input and effort (i.e. training, site controls, and improvements in materials and design).
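The percentage column of Table 2 follows directly from the average component indexes and the maximum component scores; the minimal sketch below (Python) reproduces that calculation using the Table 2 values. The dictionary layout is an assumption made for illustration.

# Average index obtained and maximum score per building component (Table 2).
components = {
    "Foundations":           (20.6, 30),
    "Floors & stairs":       (11.3, 15),
    "Walls":                 (16.1, 25),
    "Roofs":                 (12.0, 20),
    "Electrical & plumbing": (9.0, 10),
}

for name, (average, maximum) in components.items():
    achieved = 100.0 * average / maximum
    print(f"{name:<22s} {achieved:5.1f}% of maximum score")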
6.4 Correlation Between Building Components
Figure 3c presents a comparison of scores obtained for floors and walls. In order to enable a fair comparison, both sets of data were normalised by the respective maximum overall
weights, so that the percentage values obtained represent the relative, and therefore comparable, quality achievement for both components. The data are plotted so that, for each house, the overall normalised score for floors is projected along the horizontal axis and the score for walls along the vertical axis. Each house is then represented by a single data point. The diagonal line at 45 degrees (referred to as the line of unity) represents the situation in which both relative quality scores are the same. It can be seen in Figure 3c that most of the data points are scattered below this line, which indicates that for most of the houses more of the quality problems relate to the walls. This finding suggests that more effort (e.g. training) should be concentrated on the construction of walls rather than floors.
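A minimal sketch (Python) of the normalisation and of the 'below the line of unity' comparison described above is given here; the raw scores are illustrative, not the survey data, and the maximum weights are taken from Table 2.

# Each tuple is (floor score, wall score) for one house; illustrative values.
raw_scores = [(12.5, 15.0), (13.0, 18.2), (11.0, 14.1), (14.2, 20.5)]

MAX_FLOORS = 15.0   # maximum overall weight for floors & stairs (Table 2)
MAX_WALLS = 25.0    # maximum overall weight for walls (Table 2)

below_unity = 0
for floors, walls in raw_scores:
    floors_pct = 100.0 * floors / MAX_FLOORS   # normalised floor score
    walls_pct = 100.0 * walls / MAX_WALLS      # normalised wall score
    if walls_pct < floors_pct:                 # point falls below the 45-degree line
        below_unity += 1

share = 100.0 * below_unity / len(raw_scores)
print(f"{share:.0f}% of houses score relatively worse on walls than on floors")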
Fig. 3c: Comparison of scores obtained for floors and walls

7.0 CONCLUSIONS AND RECOMMENDATIONS
The results of the pilot study clearly indicate the applicability and usefulness of the proposed BQIH system for the post-construction assessment of houses in South Africa. The system constitutes a fair tool for comparing various sectors of housing in South Africa, from low-income subsidy houses to high-income non-subsidy housing. The results of the study indicate the system's ability to identify statistically the most critical problem areas, to evaluate the performance of various building contractors, and to identify elements of the construction process where additional training of contractors is required. The pilot study also enabled the identification of relevant issues and considerations for future implementation of the system on larger-scale projects. The most important issues were:
• full access to, and analysis of, all relevant documentation,
• adequate, relevant and comprehensive training of the assessors before commencement of a project,
• the timing of the inspection, since in the rainy season water-penetration and structural-crack problems might become more evident.

8.0 ACKNOWLEDGEMENTS
We would like to acknowledge the efforts of the CSIR's Inspection Team as well as the cooperation and support obtained from the NHBRC's management, its building inspectors, and the municipal inspectors of the City of Tshwane.

REFERENCES
Government Gazette (1998), Housing Consumer Protection Measure Act 1998, Act No. 95, 1998. Government Gazette No. 19418, Cape Town, RSA.
Mahachi, J., Goliger, A.M. and Wagenaar, F. (2004), Risk management of structural performance of housing in South Africa, ed. A. Zingoni, Proceedings of the 2nd International Conference on Structural Engineering, Mechanics and Computation, July 2004, Cape Town.
CONQUAS 21 (1998), The CIDB Construction Quality Assessment System, 5th edition, Singapore.
Ho, K. (2002), Presentation, Senior Development Officer, Quality Assessment Dept., Building and Construction Authority, Singapore.
NHBRC (1999), Home Building Manual, National Home Builders Registration Council, South Africa.
SABS 0400-1990 (1990), The application of National Building Regulations, Council of the South African Bureau of Standards, Pretoria.
SABS 0100-1992 (1992), Part 1, Code of practice for the structural use of concrete: Design, Council of the South African Bureau of Standards, Pretoria.
SABS 0155-1980 (1994), Code of practice for accuracy in buildings, Council of the South African Bureau of Standards, Pretoria.
SABS 0107 (1996), The design and installation of ceramic tiling, Council of the South African Bureau of Standards, Pretoria.
USE OF WIND-TUNNEL TECHNOLOGY IN ENHANCING HUMAN HABITAT IN COASTAL CITIES OF SOUTHERN AFRICA
Adam Goliger, CSIR, P O Box 395, Pretoria 0001, South Africa
Jeffrey Mahachi, NHBRC, P O Box 461, Randburg 2125, South Africa
ABSTRACT
At the southern tip of the African continent, most of the coastal cities are subject to strong and extreme wind conditions. The negative effects of strong wind events can be considered primarily in terms of their direct impact, i.e. wind damage to the built environment, as well as the wind discomfort and danger to pedestrians utilising the public realm. The paper presents selected examples and statistics of wind damage to structures and discusses the issue of human comfort in coastal cities. Wind-tunnel technology can be used as a tool for anticipating and preventing potential damage to structures, identifying areas affected by dangerous wind conditions, and investigating soil erosion and fire propagation in complex topography.
Keywords: wind-tunnel; climate of coastal cities; wind damage; wind environment; wind erosion
1.0 INTRODUCTION
Across the world and throughout history, coastal regions have attracted human settlement and development. This was due to several advantages of the coastal environment, including, amongst others, access to transportation routes as well as marine resources and, more recently, also its recreational benefits. Along the southern tip of the African continent, several large cities have been established, including Cape Town, East London and Port Elizabeth. These cities are subject to strong and extreme wind conditions, many of them originating in southerly trade winds and large frontal systems, occasionally accompanied by convective activities. Negative wind effects in coastal cities can be considered in terms of wind damage, wind discomfort/danger to people, soil erosion, as well as wind-induced propagation of fire.
2.0 WIND-TUNNEL TECHNOLOGY
Traditionally, boundary-layer wind-tunnel technology was used as a tool for the prediction of wind loadings and structural response, in support of the development of significant
wind-sensitive structures (e.g. tall buildings or long bridges) in developed countries of the world. This is largely not applicable to the African continent, where most of the development is low-rise and dynamically insensitive. Furthermore, the largest portion of the built environment receives very little or no engineering input during its design and construction stages. In an African scenario, wind-tunnel technology can be used as a tool for anticipating and preventing potential damage to medium- and low-rise structures and also for identifying areas in cities which can be affected by negative or dangerous wind conditions. The latter issue has become relevant in recent years, as the space between buildings has been identified as a focal point in developing highly pedestrianised, large-scale retail and leisure amenities.
3.0 NEGATIVE EFFECTS OF STRONG WINDS
There are various negative effects of strong winds on people living in coastal cities. From an engineering point of view, the primary concern is the direct wind damage to the built environment due to the wind forces exerted on structures. In recent years, more attention has also been given to wind discomfort and danger to people utilising the public realm in big cities, as well as to the danger posed by flying debris (e.g. broken glass or elements of sheeting). Other effects include those in which wind may be perceived as of secondary importance, which is often not the case. In fact, wind is the most important factor affecting the drying of soil (which has a large impact on the agricultural sector), soil erosion and transportation (important along the western coast of Southern Africa), as well as the spread of uncontrolled fires (a serious problem in the coastal regions of the Western and Eastern Cape Provinces of South Africa).
3.1 Damage to Structures and the Design Aspects
A database of wind damage due to strong winds, containing about 1 000 events, has been developed (Goliger, 2000). A monthly distribution of the wind-related damage in South Africa is presented in Figure 1, from which it can be seen that most of the devastating events occur in the summer months (October through to February). These are mainly due to high-intensity winds, which prevail inland, as well as the south-easterly coastal winds along the southern tip of the African continent.
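The monthly distribution of Figure 1 can be obtained by a straightforward aggregation over the damage database; a hedged sketch of such an aggregation (Python) is shown below. The record structure and the example events are assumptions, since the database format is not described here.

from collections import Counter
from datetime import date

# Illustrative damage records: (date of event, short description).
events = [
    (date(1999, 8, 29), "tornado damage, Cape Town area"),
    (date(1998, 12, 5), "roof sheeting lifted, Gauteng"),
    (date(2000, 1, 17), "crane collapse, Port Elizabeth"),
]

# Count events per calendar month, as in the monthly distribution of Figure 1.
per_month = Counter(d.month for d, _ in events)

for month in range(1, 13):
    print(f"month {month:2d}: {per_month.get(month, 0)} event(s)")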
Fig. 1: Distribution of damage in South Africa

Strong wind events can inflict various degrees of damage to buildings and structures. In progressive order, these can vary from minor damage to roof sheeting up to total collapse of walls. Figure 2 presents the devastation of the second floor of residential flats in Mannenberg, which occurred in August 1999 due to a large tornado that originated off the coast of Cape Town, and Figure 3 the collapse of a large container-terminal crane in Port Elizabeth caused by south-easterly coastal winds.
Fig. 2: Wind damage to Mannenberg
Fig. 3: Collapse of a container crane
Due to their nature, wind loading codes provide design information on the loads on typical geometrical forms of structures and buildings. Wind-tunnel modelling enables one to determine the critical wind loading of specific structures of unusual geometry and size. Figure 4 presents a wind-tunnel model of a container crane. The tests which were carried out provided information on the loading of the crane. Furthermore, they enabled an investigation into ways of improving the geometrical form of the crane in order to reduce the wind loading generated over its various components. Figure 5 is a wind-tunnel model of a medium-rise building of complex form. Wind-tunnel modelling provided information on the pressure distribution over the building façade and roof cladding, which was used in the design.
This information was critical for structural integrity of the building and also the safety of the public in its vicinity.
Fig. 4: Model of a crane
Fig. 5: Model of a building
3.2 Effects on People
In many coastal cities throughout the world, unpleasant and sometimes dangerous wind conditions are experienced by pedestrians. Apart from the harsh windy climatic conditions in these cities, extreme pedestrian winds are often introduced or intensified by unsuitable spatial development of urbanised areas, for example tall and/or large buildings surrounded by outsized open spaces envisaged for public use. A trend is evident in which the re-emergence of the 'public realm' (space between buildings) becomes a focus for city developments. This is accompanied by a growing public awareness of the right to safe and comfortable communal environments. The above trend has led professionals in the built environment to recognise the need to investigate the pedestrian-level wind environment, amongst other aspects that impact on people living and walking about in their own city's public space. A variety of problems are related to human safety and comfort in the context of the urban environment. Under strong wind conditions people are unable to utilise the public spaces (Figure 6). Extreme pedestrian-level winds may lead to the danger of people being blown over and injured or even killed, and vehicles being blown over (Figure 7). Physical discomfort or danger has an indirect socio-economic impact, in that people avoid uncomfortable public places. This lack of utilisation in turn affects the viability of commercial developments.
Fig. 6: Difficulty in walking
Fig. 7: Passenger vehicle overturned by wind
The use and application of wind-tunnel technology in investigating the wind environmental aspects of developments will be highlighted on the basis of a wind-tunnel testing programme of Cape Town's Foreshore. This area is renowned for its severe windy conditions due to the notorious Cape Southeaster or 'Cape Doctor', where in some places the internationally accepted threshold wind speed of human safety (23 m/s) is, on average, exceeded for a few hundred hours per year. A comprehensive programme of wind-tunnel testing was undertaken in co-operation with the city authorities, urban planners, and the architectural design team. The transportation/road design team was also involved, due to the presence of freeway bridges in the immediate vicinity of the proposed Convention Centre. The quantitative wind-tunnel measurements included wind erosion and directional techniques. Pedestrian wind conditions were found to be fairly variable and locationally sensitive. This is due to a combination of the effects of topography, the upwind city environment, and the bulk distribution and form of the proposed development. In Figure 8 a sample of the directional distribution of localised winds, at various places around the proposed development envisaged for pedestrian use, is presented. This flow pattern results from the approach of south-easterly winds, which are the most critical.
Fig. 8: Directional flow pattern, Cape Town Foreshore development

Figure 9 presents a sample of summary results of the quantitative measurements at one of the locations where unacceptable wind conditions occur. The graph was developed by integrating full-scale wind statistical data for Cape Town with wind-tunnel measurements of wind speeds for the entire range of wind directions compatible with the full-scale data. The graph is presented in terms of the wind-speed probability distribution function and includes acceptability criteria for wind conditions. It can be seen that the safety criterion of wind speeds higher than 23 m/s is exceeded, on average, for about 150 hours per year. As a result of the wind-tunnel study, and the subsequent data processing and analysis, several unacceptable and/or dangerous windy environments were identified and various ways of improving wind conditions were proposed, including, amongst others, setting up specifications regarding future developments in the immediate vicinity, limiting pedestrian access, and the addition of various architectural measures.
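The 'hours per year above 23 m/s' figure follows from combining the directional wind statistics for the site with the wind-tunnel speed-up factors and summing the exceedance probabilities over a year; a minimal sketch of that calculation (Python), assuming a Weibull wind climate per direction sector, is shown below. All numerical values are placeholders, not the Foreshore data.

import math

HOURS_PER_YEAR = 8760.0
THRESHOLD = 23.0  # m/s, accepted pedestrian-safety threshold

# Hypothetical directional wind climate at the reference weather station:
# (fraction of the year from this sector, Weibull scale c [m/s], shape k)
# plus a local speed-up factor measured in the wind tunnel for the location.
sectors = [
    # fraction, c,   k,   speed-up at the pedestrian location
    (0.30,      9.0, 2.0, 1.40),   # e.g. the critical south-easterly sector
    (0.25,      7.0, 2.0, 0.90),
    (0.45,      6.0, 1.9, 0.70),
]

hours = 0.0
for fraction, c, k, speed_up in sectors:
    # The local speed exceeds THRESHOLD when the reference speed exceeds
    # THRESHOLD / speed_up; Weibull exceedance probability of that speed.
    v_ref = THRESHOLD / speed_up
    p_exceed = math.exp(-((v_ref / c) ** k))
    hours += HOURS_PER_YEAR * fraction * p_exceed

print(f"estimated time above {THRESHOLD} m/s: {hours:.0f} hours per year")

In the actual study, the directional statistics and speed-up factors would come from the measured Cape Town wind climate and the wind-tunnel tests, respectively.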
Fig. 9: Wind speed probability distribution, Cape Town Foreshore development

Figure 10 depicts typical situations in which architectural elements were added to building structures. The photograph on the left shows a visitors' disembarking zone, which includes a continuous horizontal canopy and a set of vertical glass shields. The photograph on the right shows canopy structures introduced to protect the loading zones of the Convention Centre from a 'downwash' current generated by a nearby multi-storey hotel building.
Fig. 10: Architectural elements added to obviate wind nuisance

3.3 Soil Erosion
One of the mechanisms of structural damage to the built environment caused by wind action (which is often forgotten or neglected) is the erosion of foundations. Such erosion usually occurs in non-cohesive and non-saturated soils and in extreme cases it may lead to the undermining of foundations and the collapse of walls. Little information is available on this topic in the international literature. The issue of soil erosion and the consequent undermining of buildings (Fig. 11) and unwanted deposition of sand (Fig. 12) is applicable to several large coastal township developments in South Africa (e.g. Rosendal, Khayalitsha, Blue Downs).
Fig. 11: Undermining of foundations
Fig. 12 Unwanted deposition of sand
Results of an investigation (Van Wyk and Goliger, 1996) demonstrated the applicability of wind-tunnel technology to the investigation of wind-induced erosion. Initial characteristics (patterns) were identified, as well as the possibility of developing a general set of design principles to optimise the spacing (density) of the units, generic layouts, and the orientation of the grid with regard to the direction of the prevailing winds. In Figure 13 a sample of the wind erosion pattern obtained for one of the investigated layouts and a specific wind direction is presented.
Fig. 13: Erosion pattern within a mass-housing development
3.4 Propagation of Fire
In dry areas and/or during the dry season, large parts of the African continent are subject to extreme fire hazards. This refers predominantly to bush fires, but is also relevant to agricultural land, forestry and rural developments, e.g. the recent fires in the Cape Town area. Wind is one of the most important factors influencing the propagation of fire and its risk assessment, i.e. the Fire Danger Rating. The fire is influenced significantly by the:
• gustiness of the wind; sudden changes in speed and direction can severely hamper efforts to bring a fire under control, and can affect the personal safety of firefighters,
• direction, magnitude and duration of the prevailing winds, and
• dominant topographical features; for example, where strong winds coincide with rising slopes, the convection column of the fire does not rise up but progresses rapidly due to the acceleration of the flow, as presented schematically in Figure 8.
Figure 14 presents an aerial photograph of Cape Town and its surrounding topography, with an average elevation of 1000 m above sea level. Each year during the dry season, the slopes of the mountain are subject to severe runaway fire events. One of the most difficult aspects of these events is the instantaneous changes in the speed and directional characteristics of the spread of the fire. These parameters are determined by the presence and character of the dominant topography in relation to the direction of the approaching wind flow. A wind-tunnel study was undertaken to investigate the wind distribution around Table Mountain in Cape Town. The results of the tests determined the directional flow and wind-speed quantities (mean wind speed, peak wind speed, intensity of turbulence) as a function of the direction of the incoming wind flow.
Fig. 14: Cape Town's dominant topography
REFERENCES
Goliger, A.M. (2000), Database of the South African wind damage/disaster. Unpublished, Division of Building and Construction Technology, CSIR.
Van Wyk, T. & Goliger, A.M. (1996), Foundation erosion of houses due to wind: pilot study. Internal report BOU/I41, Division of Building Technology, CSIR.
WOMEN PARTICIPATION IN THE CONSTRUCTION INDUSTRY
E. Elwidaa and B. Nawangwe, Department of Architecture, Faculty of Technology, Makerere University
ABSTRACT
The paper looks at the role women in Uganda have played in the development of the construction industry. The paper examines the policy framework that has affected women participation as well as the role of educational and civic organizations, including NGOs and CBOs. A critical analysis of factors that could hinder the active participation of women in the construction industry at all levels is made. Recommendations are made for the necessary policy changes to encourage more women to participate in the construction industry at all levels. The paper is based on a study that was undertaken by the authors.
Keywords: Role of women, construction industry, policy framework, working hours, gender mainstreaming.
1.0 BACKGROUND
1.1 Introduction
In spite of the worldwide efforts made to bring about equity, regardless of race, sex or age, women are still marginalized almost everywhere in the world. Although women represent half (50%) of the world's adult population, one third (33.4%) of the official labour force, and perform two thirds (66%) of all working hours, they receive only a tenth (10%) of the world's income and own less than 1% of the world's property (NGP, 1997). However, through the hard work of lobbyists and researchers, the importance of the roles played by women in the economic and social development of their countries and communities is increasingly recognized (Malunga, 1998).
Fig. 1: The charts show the ratios of men to women according to population, labour force, working hours and share of income.

Gender issues are highly related to the socio-cultural aspects that dictate what is womanly or manly in a given society; hence gender issues are usually locally defined, or in other words contextualized. Despite variations, researchers and activists realize there are still many issues in common with respect to gender across the world. Awareness of and concern for gender issues have been raised in many ways that include, but are not confined to, the development of theories, workshops, speeches, research, programs and projects, as well as local and international conferences. It is through the international conferences that gender issues are transferred from the local level to the international level, where common issues are identified, addressed and discussed; ideas and thoughts are exchanged; and goals and the direction forward are agreed upon (Kabonessa, 2004).
1.2 Scope of the Study
The study focused on women participation in the formal employment of the construction sector. The geographical scope of the study encompassed the academic institutions that provide education and training in disciplines related to the construction sector, with Makerere University selected as a case study, being the principal university that supplies the sector with its professional workforce. Architectural, consultancy and contracting firms within Kampala City, Kampala City Council, the MoHWC and the professional registration organizations were all investigated and addressed. The Masese Women Housing Project (MWHP), a housing scheme that targets women as its main beneficiaries and executors, was studied and utilized as an example of the government's initiatives that target the enhancement of women participation in the construction sector.
1.3 Study Limitations
The main limitations to this study could be summarized as follows:
The study has a very strong socio-cultural element that could better be understood by tracing the life experiences of women involved in the sector, which could not be done due to time limitations. The lack of statistical records was another obstacle that hindered deeper investigation of the subject and also led to the exclusion of women participation in the informal workforce of the construction sector, which would have made the study more comprehensive.

2.0 CONCEPTS ABOUT WOMEN PARTICIPATION IN CONSTRUCTION
2.1 Introduction
In this section gender issues in relation to the construction sector are addressed. A definition of the term gender and the rationale for addressing it are provided, together with definitions of other gender-related terms that are used throughout the research. Gender in relation to construction in general, and at the Ugandan level in particular, is also addressed.

2.2 Definition of the Term Gender
Gender is a term that refers to socially constructed characteristics of what is manly or womanly, feminine or masculine. Hence, gender implies socially constructed expectations of women's and men's roles, relationships, attitudes and behaviour. Thus, being unmanly or unwomanly is to act, think or behave in a manner that contradicts the expectations of the society about men and women. Despite similarities between gender issues all over the world, being socially constructed and hence contextual, the definition of gender varies from one society to the other. What is feminine in one society might not be the same in another (Were, 2003).

2.3 Rationale for Addressing Gender
Often one might wonder why gender is investigated in terms of issues related to one's being a man or a woman in particular, as opposed to society, which is actually composed of men and women, in general. This may be explained by considering some of the benefits that addressing gender could bring. By addressing gender issues we are in a better position to understand the needs, capabilities, rights and other issues of both men and women as separate entities that constitute the society across its various classes, races and age groups. In so doing we are able to act accordingly and consider all members of that society, and hence minimize inequality and achieve social balance in that society. The elimination of inequality and the empowerment of society members without prejudice will increase self-esteem and minimize the psychological ills related to its absence.

2.4 Operational Definitions
For a better understanding of the issues addressed in this study, definitions of some terms that are used are provided in the following paragraphs.
The Construction Sector
The construction sector in this study refers to the actions pertaining to the planning, design and erection of buildings, roads and infrastructure, as well as supervising and monitoring the execution process to ensure compliance with the original designs and the approval of adjustments if the matter ever arises. The construction process starts with the architect translating the client's functional requirements or needs into a spatial design of the best possible functional, economical, technical and aesthetic form (architectural design).
Fig. 2: Standard Organisational Chart

During the construction process a supervisor or consultant, ideally the architect, is supposed to supervise the execution process and ensure that the building is built according to the initial architectural and technical design. Hence, in this research the construction industry refers to the sector that involves the design and execution of buildings. Much as the role of all technical engineers is acknowledged, this research considers only the architectural and contracting (civil) disciplines.

2.5 Women in the Construction Industry
The construction industry has always been a male-dominated field. This is evident even in countries where gender issues have long been addressed and women have received many of their rights and are treated as members of society equal to men. Women, being the weaker sex, have been marginalized by the assumption that construction activities need physical effort that is beyond their ability. However, it has been reported that even in countries where technological development has reduced dependency on physical power, such as the United States (Perreault, 1992), the United Kingdom (Ashmore, 2003) and Sweden, the construction sector is still dominated by men. Gender imbalances in the construction sector are further emphasized in some areas, not only by sex, but also by race and class.

2.6 The National Context
Women in Uganda constitute more than half (52.8%) of the total formal labour force, and it is believed that the percentage is higher in the informal sector, but no statistics are available. The majority of working women occupy jobs related to the agricultural sector (86%), with 12% in the service sector and only 3% in the industrial sector (ILO, 2003).
Fig. 3: Chart showing occupation of Ugandan women by economic sector

3.0 METHODOLOGICAL APPROACH
3.1 The Analytical Framework
To guide the investigation of the gender sensitivity of the construction sector, the study adopted a framework that identifies key issues which interact and are determinant of gender mainstreaming in the construction sector. It is assumed that these elements form a network that cooperates and continuously influences one another for the achievement of gender sensitization and mainstreaming in the construction sector. The first and foremost element identified in the framework was the attainment of the policy or decision-making bodies' commitment to gender mainstreaming of the sector. The framework argues that political commitment usually comes as a result of persistent lobbying, manipulation and the efforts of stakeholders, activists or any concerned bodies devoted to the cause, forming gender pressure groups. If granted, political commitment is assumed to result in resource allocation, policy formulation that would be translated into programs or projects, as well as institution building, all of which target gender sensitization or mainstreaming in the sector. The allocation of resources and the formulation of gender-sensitive policies and institutions would significantly assist in dismantling barriers against women participation in the sector, in addition to increasing the level of gender sensitization and awareness in the community. Together, these will enhance women's access to training and education opportunities in construction-related disciplines, which will empower them with construction knowledge and skills. Therefore, stemming from the framework, the main themes or areas of investigation could be stated as follows:
• The support and commitment of decision-making bodies or policy makers to gender issues and concerns.
• The dismantling of barriers against women participation in the construction sector, together with their empowerment in construction-related fields and skills.
• Women absorption in the construction formal workforce, referring to employment opportunities and type of employment.
• The level of gender sensitivity and awareness towards women participation in the construction sector.
• Identification of key actors and their roles with respect to mainstreaming gender in the construction sector among the professional and any other organizations who can act as pressure groups for the purpose.
The following diagram (Figure 4) further explains how the analytical framework operates for the attainment of gender mainstreaming in the construction sector.

Fig 4: Analytical framework of the study

4.0 ANALYSIS OF DATA AND DISCUSSION
4.1 Political Commitment towards Engendering the Construction Sector
The Government of Uganda recognizes the various imbalances in Ugandan society and has committed itself to resolving them. This is clearly stated in the Local
Government Act Amendment, 2001, which calls for "establishing affirmative action in favour of groups marginalized on the bases of gender, age, disability or any other reason created by history, tradition or custom, for the purpose of addressing imbalances which exist against them" (LGAA, 2001). With special reference to gender, the Government of Uganda went a step further and made gender policies an integral part of the national development process. It advocates the assurance that gender concerns are routinely addressed in the planning, implementation, monitoring and evaluation of program activities across all sectors (NGP, 1997). For this purpose, a gender desk has been placed in almost all ministries to address gender issues in the respective sector and to target gender mainstreaming in their policies, programs and projects. The Ministry of Housing, Works and Communications has not been an exception. Initially, gender mainstreaming had been among the responsibilities of the policy and planning department, which falls under the directorate of transport and communications (Figure 4). Subsequently, a section dealing with gender mainstreaming in the ministry was established on a consultancy basis and is to run for three years (2003-2006). This office is to act as a focal point that is supposed to develop policy statements, guidelines, strategies and checklists, and to equip the ministry's staff with the necessary tools to build their capacity to implement gender mainstreaming in the ministry's sections and departments. The section is placed within the quality management department, which falls under the directorate of engineering. The quality management department is responsible for quality assurance of the ministry's activities, including material tests and research, together with the protection and development of the environment. The environment is considered both physically and socially, and this is where gender is seen to relate (see the Ministry structure chart in Figure 4). It is important to note that although the gender unit is located within the engineering directorate that is concerned with buildings and construction, most of the activities of the unit were affiliated to mainstreaming gender in the construction of roads rather than buildings. The reason is that road projects usually receive more donor money, which facilitates the sector's activities and development. However, the unit's activities are expected to influence gender mainstreaming in all sectors of the ministry, including the buildings and construction sector, furnishing a precious opportunity for the purpose. It is also important to note that relating gender issues to quality assurance of the ministry's performance and activities indicates the gender sensitivity of its policy makers and decision-making bodies, which poses a valuable opportunity for engendering the sector. Investigation of the unit's activities showed no evidence of collaboration with professional civil organizations, like the Ugandan Institution of Professional Engineers (UIPE), or professional statutory organizations, like the Engineering Registration Board (ERB), which would have made the activities of the gender desk more comprehensive.
5.0 CASE STUDY: THE MASESE WOMEN HOUSING PROJECT (MWHP)
The Masese Women's Housing Project is located within Jinja Municipality. It started in 1989, funded by the Danish international development organization (DANIDA), implemented by the African Housing Fund (AHF), and monitored by the Ugandan Government through the Ministry of Lands, Housing and Urban Development together with Jinja Municipality. The project played a facilitating role; for example, it assisted in delivering building materials to the site (Figure 5), while women carried out the actual construction work and handled some managerial issues as well. In the beginning the project aimed to assist 700 families in possessing their own houses. Women were trained by the AHF training team in construction and managerial techniques for the purpose of building the houses as well as managing and monitoring the project. The construction and managerial skills with which women were empowered were to be utilized in income-generating activities for the betterment of their living standards during and after the end of the project. Women were also involved in the management, execution and monitoring of the project. Small loans, to be paid back to the African Development Bank (ADB), were also provided. Though valued, and to be repaid, in monetary terms, the loans were given in the form of construction materials to avoid diversion of use. Hence, the benefits of the project were channelled to the poor families through women. The group's skills in managing the project improved remarkably over the years. By the end of the project in 1993, three hundred and seventy houses, together with a day-care centre, had been constructed. In addition, jobs had been created for 200 members, and training, skills and income-generating potential were provided to many members. As a result of women empowerment, the Masese women's group managed to put up a factory that produces building materials, not only to supply the project but also the market outside. For example, women were trained to manufacture slabs for pit latrines that are used for the benefit of the project and also for marketing elsewhere.
Fig. 6 Latrine slabs produced by women to supply the project and the market
Due to the success of the project, DANIDA showed interest in funding a second phase that targeted the improvement of infrastructure and social services in Masese, as well as creating employment opportunities and supporting other construction programs. The Masese Women Construction Factory was to supply building materials for the construction of classrooms in five schools within Jinja Municipality. The second phase commenced in 1994, built 12 classrooms in 3 schools and produced some furniture for those schools. Plans for the project to improve the roads were made, together with the establishment of credit schemes to assist members not employed by the project in other income-generating activities for the betterment of their lives, and hence to be able to pay back the housing loans. Despite its success, the AHF encountered some problems and withdrew from the country in 1996 without officially handing over the project to the Ugandan Government. In 1999 the government intervened and took over the role of the AHF to ensure continuation of the project, targeting housing construction, building material production, women mobilization, housing loan recovery and employment opportunity generation, and thus maintaining the project's sustainability.

5.1 General Evaluation of MWHP
The inhabitants of the Masese area are very poor, with low levels of education and minimal opportunities to uplift their living standards. They mainly depend on brewing, commercial sex work and the provision of services and skilled labour to the nearby industries. People live in very poor housing conditions. The project thus managed to utilize and mobilize the available human resources for the betterment of their housing conditions and living standards. The amounts of the loans given to people were proportional to the ability of the beneficiary to pay back, which was very important for the sustainability of the project. The loans, though valued in monetary terms, were given in the form of building materials to avoid diversion of use, which is again a very good point in ensuring support to the construction sector. In general, the project had a positive influence with respect to housing provision and the development of the construction sector at Masese, as well as empowering the people who participated with managerial and construction skills that facilitated the upgrading of their living standards.

5.2 Gender Analysis of the MWHP
The gender component of the project posed its greatest challenge, as it was the first time a housing scheme was specifically designed targeting women not only as beneficiaries but also as implementers. In this respect the project has great accomplishments. In the following paragraphs some of the project's achievements are illustrated.
In spite of the patriarchal Ugandan society, it is realized that women are usually more sensitive to housing needs in terms of size, space utilization and design, as they spend more time and do more chores in the house. In some instances, they are even responsible for its maintenance. The project managed to tap this embedded knowledge, which was a key factor in its success. The project empowered women with skills that are to be utilized in income-generating activities as an alternative to prevailing practices like commercial sex work and
brewing. This helped in steering them towards better moral, economic and social standards. Through this project women's self-confidence and self-esteem were restored, and this contributed to a great extent to changing the prevailing attitudes towards women's ability to take charge of their lives and those of their families.
Fig. 7: Women show the satisfaction of achievement during the focus group discussion

By involving women in the construction activities, the project succeeded in demystifying the myth about women participation in the construction sector and showed how it could be useful to both society and the sector. Members of the project act as role models for others to emulate and take up construction work as a means of upgrading their housing conditions and as an income-generating activity to uplift their socio-economic standards. The project's success serves as a positive experience that can be replicated in other parts of the country. The project could be utilized for purposes of upgrading housing conditions, economic development, women empowerment and facilitating change in the socio-cultural attitudes towards women involvement in the construction sector, which is a key issue in its gender mainstreaming. Women who had been trained in construction and management skills provided a pool of trainers that would transfer their knowledge to others. Women's efforts in housing maintenance and upkeep usually go unnoticed and unpaid for, but the project managed to recognize and highlight these efforts. It evaluated labour in monetary terms and hence made it possible for the beneficiaries to pay back the loans they had taken. Women's skills were utilised to benefit the sector in the production of building materials for marketing and in increasing the knowledge base. Another problem encountered by the women of MWHP was the lack of consideration in the evaluation of their performance, productivity and hence payment during the special times
of pregnancy and breast-feeding. This caused them a financial drawback in covering their living expenses and in repaying the loans acquired. Lastly, one could conclude that in spite of its limited shortcomings, the Masese Women Housing Project was a real success story with respect to women empowerment and involvement in the construction sector, and hence its enhancement. The project also illustrates the government's genuine concern for engendering the construction sector.

5.3 Women Empowerment in Construction Related Fields
One of the important issues the study considered when investigating the gender sensitivity of the construction sector was to look into the fields of study and training that supply the sector with its professional workforce. The research identified civil engineering and architecture as the major disciplines for the purpose. Investigations were carried out mainly at the Faculty of Technology (FoT), Makerere University (MU), being the principal education institute that provides the mentioned disciplines in Kampala. At the technical level, the study considered technical institutions and organizations that provide training in skills related to construction, which include the vocational and technical institutes in general. However, Makerere University remains the principal case study for this research with reference to academic issues.

5.4 Women in Construction Related Academic Fields
As mentioned, the Faculty of Technology at Makerere University was utilized as the case study for conducting a gender analysis of the educational fields that supply the construction sector with its workforce. Within the faculty, civil engineering and architecture were the principal departments the study looked into.

5.5 Students Enrollment in Construction Related Disciplines by Gender
Student intake at FoT, MU has almost tripled during the past decade, increasing from 78 students in the 1992/93 academic year to 202 in 2003/04. The civil engineering department has always received the highest percentage of student enrollment among the various departments of the faculty, ranging between 34 and 35% of the total number of students during the last decade, while architecture accounts for only 6-14% over the same period. The small number of architecture students could be attributed to the late introduction of the discipline (1989), compared to civil engineering, which has been taught since 1970. However, it was noted that the number of architecture students has increased at a higher rate than that of civil engineering over the same period, with a 400% increase for the former and 250% for the latter.
Nevertheless, the architecture and civil engineering students put together usually account for almost half the student intake in the faculty, ranging between 47 and 57% during the last decade.
Fig. 8: Percentage of civil engineering and architecture students combined in relation to the total Faculty of Technology student intake

5.6 Female Staff Members as Role Models in Construction Disciplines
The department of architecture has a higher number of female staff members than civil engineering, which could be explained by the higher number of female architecture graduates compared to civil engineering. However, it should be noted that in spite of the higher number of female staff members in the department of architecture, none has ever held a senior position, such as head of department, dean or associate dean. It is only in the civil engineering department that a female has ever been head of a section (note: not a department). The reasons behind this were not very clear, but it could be due to the late launching of the architecture discipline, thus requiring more time to allow female graduates to acquire the necessary academic and professional qualifications for the senior posts.
It was realized that female staff members in the civil engineering and architecture departments pose as role models for younger generations both in the educational and the professional fields of construction, with a more predominant influence in the former, the educational field, than in the latter, the practical field. This was attributed to the embedded socio-cultural perception of the unsuitability of practical construction work for women. Women's greater prevalence in the academic fields compared to the practical ones could also be due to their academic excellence in their graduating year, which qualifies them for a teaching job immediately. The opportunity is usually eagerly accepted by female graduates as it saves them from the tedious and wearisome procedure of job searching in the professional practice, not to mention the security the teaching post provides. Further investigations show that there are many other factors responsible for female architects' and civil engineers' preference for the academic field of construction over the professional practice. A few of them are discussed hereafter. Girls who manage to break through all the socio-cultural myths and join these fields are usually bright and of strong character, not to mention determined and academically ambitious.
The exposure of female architecture and civil engineering students to female role models in the teaching profession during their years of study is higher than in the professional practice. The industrial training program, which students have to fulfil during their undergraduate studies, provides an opportunity for exposure to female role models in professional practice, which widens the students' scope and increases their options of work, but it is hardly utilized properly for the purpose. Moreover, the number of women in the profession is usually very small and they usually occupy junior positions. As a result, many are motivated to join the educational line after graduation.

5.7 Women in Construction Professional Practice
This section addressed women's position in professional practice, both as employees in construction or consultancy firms and as employers of others in such firms. Women as employees: The survey reflected that the construction sector was receptive to both civil engineering and architecture graduates, taking less than six months for a graduate in either discipline to get a job, irrespective of sex. However, the majority of respondents (75%) admitted that personal connections through relatives or friends were their means of getting employed, while random search using qualifications was the means of getting a job for a lower percentage (23%). Comparing civil engineering and architecture graduates shows that it is easier for the latter to get jobs on qualifications alone than for the former, whose appointment depends on networking and personal contacts.
With reference to contracting, the problem for civil engineering graduates becomes even more acute due to their large number compared to architecture graduates, resulting in higher competition, especially if we consider the added number of technical school graduates and the informal contractors.
6. CONCLUSION AND RECOMMENDATIONS
6.1 Conclusions
The conclusions have been arranged and presented in the same thematic order as the analysis, for better understanding and comprehension.
• Gender Sensitivity of the Policy Makers
The Ugandan Government recognizes the gender imbalances in society and has committed itself to their elimination. It has made gender policies an integral part of the national development process. To this effect, a gender desk has been incorporated in almost all ministries to ensure gender mainstreaming in their activities, thus posing a great opportunity for mainstreaming gender in the construction sector.
At the Ministry of Housing, Works and Communication (MoHWC), the gender desk's main activity is to develop policy statements, guidelines, strategies and checklists, and to equip the ministry's staff with the necessary tools, targeting building their capacity to implement gender mainstreaming in the ministry's sections and departments. However, its actual influence on gender mainstreaming of the construction sector is not yet clearly evident because it was established only recently.
• Women Empowerment in Relation to Construction
The research identified tertiary and vocational training as the major formal approaches that empower women to participate in the formal construction workforce. At the educational training level, the Faculty of Technology at Makerere University, being the principal academic institution that supplies the construction sector with its formal workforce, was selected as a case study for investigation. It was found that students of the civil engineering and architecture disciplines, the main departments within the faculty that supply the construction sector with professionals, together account for almost half the total number of students in the faculty. Within the two departments, females comprise a quarter of the total number of students, with a higher percentage of them in architecture than in civil engineering. It was noted that this percentage is reflected proportionally in the workforce. Investigations showed no evidence of biases or discrimination against female students or staff members in the faculty. Incidents of sexual harassment were also not reported. It was observed that female graduates in both civil engineering and architecture, given a chance, prefer working in the academic line to professional practice despite the greater financial returns of the latter. This was attributed to the following:
• Negative social attitudes towards women's involvement in the construction field.
• The highly competitive environment of the construction profession, which has been made harder for women due to the preference for men, especially in site work.
• Exposure of female students to role models in teaching more than in professional practice.
• The intellectual environment of teaching is more accommodating and gender sensitive for women than professional practice.
• Greater opportunities for promotion and career development in the academic line, as qualifications and competence are valued irrespective of gender, which is not the case in professional practice.
It was also realized that after a few years of teaching, some of the female staff change course and join professional practice, owing to the confidence they accumulate with time. This, coupled with the lower pay of teaching jobs and their increasing family responsibilities, drives them to join professional practice, which offers greater financial returns. Vocational training in construction related skills is provided mainly at technical institutes where women are very few, which is mainly due to the conviction that construction is not suitable for women. In very few cases training opportunities are provided as an element of a gender related programme.
• Women in the Workforce of the Construction Professional Practice
The research revealed that personal contacts play the principal role in acquiring a job for architects and civil engineers irrespective of gender, though men are always preferred to women for site work. The reason behind this is mainly socio-cultural, as people lack faith in women's ability to handle site work. Moreover, promotion possibilities for women are more available in office work than in site work. It was discovered that most women engineers who joined professional practice preferred to work as employees rather than be self-employed, either individually or in partnership, due to the following reasons:
• Lack of self-confidence caused by the negative socio-cultural attitudes against women's involvement in the male-dominated construction sector.
• Lack of capital required for the establishment of a private business, and constrained access to financial loans or credit.
• Lack of the business networking that is essential to the development and success of a construction business.
• Binding family commitments, which put pressure on women's time and activities.
• Assessing Awareness towards Gender Sensitivity of the Construction Sector
In spite of the identified good intentions and concerns towards gender sensitization of the construction sector, there is no evidence of serious actions to demonstrate them. The identified possible ways through which gender sensitization of the construction sector could be achieved are:
(i) Workshops and conferences: Although many workshops and conferences addressing gender sensitization of the construction sector took place in recent years, their influence on raising public awareness of the issue is limited, owing to the insufficient publicity they received and the confined venues where they took place. It was noted that in most of these conferences gender topics are handled superficially and proceedings are not closely followed up. This was attributed mainly to the gender topic being superimposed by donors on research and projects without genuine interest and concern.
(ii) The media: Although the value of the media in raising public awareness of gender issues is widely acknowledged, gender mainstreaming of the construction sector has never been addressed in the media.
(iii) Role and activities of the professional and gender-concerned organizations: The main professional bodies looked at were the Uganda Institute for Professional Engineers (UIPE), the Engineers Registration Board (ERB), the Architects Registration Board (ARB) and the Ugandan Society for Architects (USA). It was observed that, much as addressing gender sensitization of the construction sector should have been among their responsibilities and concerns, none had ever addressed the issue in any of its activities. Furthermore, it was noted that although Uganda boasts many gender-concerned organizations, generally none had gender mainstreaming in the construction sector as its main concern. It can therefore be concluded that there is no adequate response or action towards gender sensitization of the construction sector from the responsible construction professional organizations or from the gender-concerned ones.
6.2 Recommendations
In light of the previous conclusions, the following recommendations are made:
• Ugandan government initiatives should target the eradication of gender imbalances in the construction sector.
• Women should be encouraged to study construction related disciplines.
• Training opportunities for women in construction related skills should be increased, independently of development or housing schemes.
• Further research in gender mainstreaming in construction professions and training is generally needed.
CHAPTER THREE
CIVIL ENGINEERING STUDIES ON UGANDAN VOLCANIC ASH AND TUFF
S.O. Ekolu, School of Civil and Environmental Engineering, University of the Witwatersrand, South Africa
R.D. Hooton, Department of Civil Engineering, University of Toronto, Canada
M.D.A. Thomas, Department of Civil Engineering, University of New Brunswick, Canada
ABSTRACT
This study was conducted to investigate certain characteristics of tuff and volcanic ash quarried from Mt. Elgon and Mt. Rwenzori in Uganda that may render the materials beneficial for use in industrial applications as pozzolans. Both tuff and volcanic ash were ground and blended with Portland cement at varied replacement levels and tested for several properties. It was found that incorporation of 20 to 25% volcanic ash gave the highest compressive strength and substantially reduced alkali-silica reactivity. The ash met ASTM requirements for 'Class N' pozzolans. This study suggests that the volcanic ash, when ground to 506 m²/kg Blaine fineness, develops properties that make it suitable for potential use as a mineral admixture in cement and concrete. Conversely, the use of tuff was found to significantly increase alkali-silica reaction. This reiterates the possible harmful effects of some pozzolans on concrete if used without precaution, discretion or a thorough understanding of their characteristics.
Keywords: Pozzolans; Tuff; Volcanic ash; Compressive strength; Alkali-silica reaction; Fineness; Mineralogy.
1.0 INTRODUCTION
The use of natural pozzolans results in a reduction of the CO2 emissions associated with Portland cement production. A 50% Portland cement replacement by a natural pozzolan would mean a reduction of such greenhouse gas emissions in cement production by one half, which could have enormous positive consequences for the environment. Secondly, depending on the grindability (if necessary) and closeness to the construction site, natural
pozzolans can significantly reduce the cost of concrete production, dam construction or production of mass housing units. As found with ancient concrete (Day, 1990; Mehta, 1981), natural pozzolans used in normal proportions typically improve concrete performance and durability. Whereas the benefits of most pozzolans used far outweigh their disadvantages, it is imperative that a thorough study of any particular geological source of natural pozzolan is conducted to understand its performance characteristics. This also helps to define discretionary use of materials where applicable. In this investigation, tuff and volcanic ash quarried from the mountainous regions of Elgon and Rwenzori in Uganda were studied to determine their properties for potential use as pozzolans, to establish appropriate blending proportions for incorporation in cement and concrete, and to evaluate their pozzolanic activity. Earlier extensive studies by Mills and Hooton (1992) and by Tabaaro (2000) found the volcanic ash properties to be satisfactory for use in making lime-pozzolan cements. The pozzolan materials were blended with ordinary Portland cement in proportions ranging from 15 to 30%, and performance related parameters were measured and compared in accordance with ASTM C-311 procedures. The techniques employed include differential thermal analysis (DTA), petrography and scanning electron microscopy (SEM).
2.0 EXPERIMENTAL
2.1 Materials
A low-alkali ASTM Type I Portland cement and two different forms of natural pozzolans of volcanic origin, tuff and volcanic ash, were used in this investigation. Table 1 shows the chemical analyses of the cementitious materials. Both natural pozzolans had low CaO contents, typical of Class F fly ash (Malvar et al., 2002). The volcanic ash was a typically dark, broken rock material of highly irregular shape with networks of large bubble cavities. The tuff consisted of grayish consolidated chunks, most of them over 100 mm in diameter. The pozzolans were air-dried at room temperature and 50% RH for one week and then ground to the required fineness levels. The materials were ground to within the normal range of cement fineness.

Table 1: Chemical analyses of cementitious materials (%).
               SiO2    Al2O3   Fe2O3   CaO     MgO    SO3    K2O    Na2O   Na2Oe   LOI
Cement         20.34   4.94    2.33    63.50   2.45   2.93   0.47   0.17   0.48    2.64
Tuff           42.66   12.74   13.05   10.89   5.56   0.03   1.82   4.59   5.79    5.71
Volcanic ash   46.67   13.96   12.62   9.16    7.15   0.10   3.19   2.85   4.95    0.00
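The Na2Oe column of Table 1 can be cross-checked against the reported Na2O and K2O contents. The short Python sketch below is illustrative and not part of the original study; it assumes the conventional sodium-oxide-equivalent relation Na2Oe = Na2O + 0.658·K2O, which reproduces the tabulated figures.

```python
# Illustrative cross-check of the Na2Oe column of Table 1 (not part of the
# original study). Assumes the conventional relation Na2Oe = Na2O + 0.658*K2O.
oxides = {
    # material: (Na2O %, K2O %) from Table 1
    "Cement":       (0.17, 0.47),
    "Tuff":         (4.59, 1.82),
    "Volcanic ash": (2.85, 3.19),
}

for material, (na2o, k2o) in oxides.items():
    na2o_eq = na2o + 0.658 * k2o
    print(f"{material:13s} Na2Oe = {na2o_eq:.2f} %")
# Expected output: 0.48, 5.79 and 4.95 %, matching Table 1.
```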
2.2 Test Procedures and Specifications
The procedures described in ASTM C-305 were followed in the preparation of mortar mixtures. The mixtures used in the study were made in proportions of 15%, 20%, 25% and 30% of pozzolans by mass of cement. ASTM C-311 test procedures were followed. The water content of mortar mixtures was adjusted to ensure a flow of 100 to 115%. Properties of the pozzolans were evaluated in accordance with ASTM C-618 requirements. Thin sections prepared from chunks of the pozzolan materials were examined by optical microscopy equipped with polarized light. Lime-pozzolan pastes were studied by DTA for consumption of free C-H present in the hydrated specimens at different ages.

3.0 RESULTS AND DISCUSSION
3.1 Density and Fineness
The densities of the pozzolans were 2860 kg/m³ for volcanic ash and 2760 kg/m³ for tuff, as determined by the Le Chatelier flask method (ASTM C-188). The Blaine fineness levels of the raw materials (ASTM C-204), ground for different periods of time in a laboratory ball mill, are given in Table 2. Apparently volcanic ash requires a higher energy input for grinding as compared to tuff.

Table 2: Blaine fineness of pozzolan materials.
                              Volcanic ash            Tuff
                              Low       High          Low       High
                              fineness  fineness      fineness  fineness
Grinding period (hours)       3         8             1.5       3.5
Blaine fineness (m²/kg)       259       506           748       1080

Table 3: Compressive strengths of mortars of 0.5 w/cm ratio containing 20 to 30% pozzolan (OPC - ordinary Portland cement; w/cm - water/cementitious ratio).
Cementitious materials              Replacement   Bulk density at     Compressive strength (MPa)
                                                  28 days (kg/m³)     3 days   7 days   28 days
Control (OPC)                       100%          2271                32.5     38.9     54.3
Volcanic ash (259 m²/kg Blaine)     20%           2287                22.8     29.8     42.0
                                    25%                               18.4     24.5     34.7
                                    30%                               15.2     20.9     30.4
Tuff (748 m²/kg Blaine)             20%           2233                16.4     23.2     33.5
                                    25%                               16.4     23.4     30.2
                                    30%                               12.5     17.0     23.8
3.2 Compressive Strength
The compressive strength data for ages up to 28 days are shown in Table 3 and plotted in Figs. 1 and 2 for mixtures containing varied proportions of volcanic ash and tuff, respectively. After 3 days, the blended mixtures containing 20% volcanic ash had a strength of 70% of the strength of the control mix. This value increased significantly to 76% at 7 days and 77% at 28 days. Mixtures containing 20% tuff had compressive strengths of 50% of the strength of the control mix at 3 days, 60% at 7 days and 62% at 28 days. The results show that more strength gain took place between 3 days and 7 days than at later ages. However, other findings (Mehta, 1981) have suggested that the pozzolanic reaction taking place within the first 7 days of cement hydration is insignificant or nonexistent. At a relatively low fineness of 259 m²/kg, the compressive strength of mortar containing 20% volcanic ash was greater than the minimum requirement of 75% of the strength of the control (ASTM C-618) for both ages of 7 and 28 days.
Fig. 1: Compressive strength of mortars incorporating volcanic ash of 259 m²/kg Blaine fineness.
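For readers who wish to reproduce the comparison, the short Python sketch below computes the strength activity index (blended-mix strength as a percentage of the control strength) from the Table 3 values and checks it against the 75% minimum of ASTM C-618. The script and its names are ours; only the strengths are taken from Table 3.

```python
# Strength activity index from Table 3, checked against the ASTM C-618
# minimum of 75% of the control strength at 7 and 28 days.
CONTROL = {"7d": 38.9, "28d": 54.3}          # MPa, 100% OPC mortar

MIXES = {
    "20% volcanic ash (259 m2/kg)": {"7d": 29.8, "28d": 42.0},
    "20% tuff (748 m2/kg)":         {"7d": 23.2, "28d": 33.5},
}

ASTM_MIN = 75.0  # percent of control

for mix, strengths in MIXES.items():
    for age, fc in strengths.items():
        index = 100.0 * fc / CONTROL[age]
        verdict = "meets" if index >= ASTM_MIN else "fails"
        print(f"{mix}, {age}: activity index = {index:.1f}% ({verdict} the 75% limit)")
# The 20% volcanic ash mix gives 76.6% and 77.3%; the 20% tuff mix gives
# 59.6% and 61.7%, consistent with the values reported in Table 4.
```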
3.3 Pozzolanic Activity with Lime
Mixtures containing pozzolans of different fineness levels were tested for pozzolanic activity with lime. A low strength of 4.8 MPa was achieved at a low fineness of 259 m²/kg, as compared to 6.3 MPa at 506 m²/kg fineness for volcanic ash. The results plotted in Fig. 3 show that volcanic ash meets the minimum compressive strength of 5.5 MPa (based on ASTM C 618-89) when ground to high fineness.
Fig. 2: Compressive strength of mortars incorporating tuff of 748 m²/kg Blaine fineness.
3.4 Control of Alkali-Silica Reaction
The 14-day ASR expansions of specimens stored and measured as required in ASTM C-227 have been plotted in Fig. 4. At 14 days, the ASR expansions of all mixtures containing volcanic ash were lower than the expansion of the control mix. A volcanic ash replacement level of 20% reduced ASR expansion to 0.02%, much less than the required 0.06% (ASTM C-618). However, the opposite was found to be true for tuff. It is likely that tuff released alkalis into the pore solution, increasing ASR expansion regardless of the proportion of tuff incorporated into the mixtures.
Fig. 3: Lime-pozzolan activity (compressive strength of lime-pozzolan mixtures versus fineness of pozzolan used: 259 m²/kg ash, 506 m²/kg ash and 1080 m²/kg tuff).
Fig. 4: ASR expansion versus the proportion of volcanic ash or tuff replacing Portland cement.
3.5 Evaluation of the Characteristics of Volcanic Ash and Tuff
ASTM C-618 covers the requirements for use of natural pozzolans as mineral admixtures in concretes. In Table 4, results from experimental studies are compared against standard specifications for those tests performed on volcanic ash and tuff. The results summarized
in Table 4 reflect good performance by volcanic ash. Overall, the material meets the ASTM C-618 requirements for 'Class N' pozzolans, with test values well within the specified limits. Results of the mixes containing tuff did not measure up to the requirements of the standard.

Table 4: Evaluation of volcanic ash and tuff against some major standard requirements for 'Class N' pozzolans.
Requirement                                                          ASTM C618-01   Volcanic ash       Tuff
SiO2+Al2O3+Fe2O3, min (%)                                            70.0           73.3               68.5
SO3, max (%)                                                         4.0            0.1                0.03
Moisture content, max (%)                                            3.0            0.34               2.26
Loss on ignition, max (%)                                            10.0           0.00               5.71
Strength activity index at 7 days, min (%)                           75             76.6               59.6
Strength activity index at 28 days, min (%)                          75             77.3               61.7
Pozzolanic activity index with lime, min (MPa)                       5.5*           6.27               2.80
Water demand, max (% of control)                                     115            100                107
Expansion of test mixture as % of low-alkali cement control
at 14 days, max (%)                                                  100†           30#                217
Mortar expansion at 14 days in alkali expansion test, max (%)        0.06*          0.018 (25% ash)    0.13 (15% tuff)
* Based on ASTM C 618-89. † Expansion of the control made with low-alkali Portland cement. # Equivalent to a 70% reduction in ASR expansion.
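The pass/fail comparison of Table 4 can be mechanised as a simple data-driven check. The sketch below encodes the listed C-618 limits and the measured values; the data structures and the function name are illustrative, not part of the paper, and the two expansion rows are omitted for brevity.

```python
# Sketch of the Table 4 comparison: each requirement carries an ASTM C-618
# limit and a sense ("min" or "max"); a material passes if every limit is met.
REQUIREMENTS = [
    # (name, limit, sense)
    ("SiO2+Al2O3+Fe2O3 (%)",              70.0,  "min"),
    ("SO3 (%)",                            4.0,  "max"),
    ("Moisture content (%)",               3.0,  "max"),
    ("Loss on ignition (%)",              10.0,  "max"),
    ("Strength activity index, 7 d (%)",  75.0,  "min"),
    ("Strength activity index, 28 d (%)", 75.0,  "min"),
    ("Lime activity index (MPa)",          5.5,  "min"),
    ("Water demand (% of control)",      115.0,  "max"),
]

TEST_VALUES = {
    "Volcanic ash": [73.3, 0.1, 0.34, 0.00, 76.6, 77.3, 6.27, 100],
    "Tuff":         [68.5, 0.03, 2.26, 5.71, 59.6, 61.7, 2.80, 107],
}

def failed_requirements(values):
    """Return the names of the requirements that the supplied values fail."""
    failed = []
    for (name, limit, sense), value in zip(REQUIREMENTS, values):
        ok = value >= limit if sense == "min" else value <= limit
        if not ok:
            failed.append(name)
    return failed

for material, values in TEST_VALUES.items():
    print(material, "-> fails:", failed_requirements(values) or "none")
# Volcanic ash fails none of the listed limits; tuff fails the oxide sum,
# both strength activity indices and the lime activity index.
```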
3.6 Chemical Constituents, Mineralogy and Microanalysis
Some major differences in the chemical constitution of the pozzolans are evident in Table 1, which shows the results of their chemical analyses. The 5.71% loss on ignition of tuff may be due to bound water and the presence of a large proportion of inorganic or organic materials, in contrast to the practically 0% ignition loss of volcanic ash. Both pozzolans contained 5 to 6% Na2Oe alkali levels; however, the availability of these alkalis for reaction appears to be quite different for each of the pozzolans. It is implied from the ASR control test carried out that there was high availability of alkalis in the tuff, leading to promotion of ASR expansion. For the volcanic ash, alkalis may be in a bound state, enabling the ash to contribute to a reduction in ASR expansion. To further examine whether the materials being tested were pozzolanic, the consumption of C-H was monitored for volcanic ash, which had shown good results in the physical tests. The ash was mixed with hydrated lime and water in proportions of 1:2.25:5 lime to ash to water. The mix was shaken in a sealed vial to ensure uniformity and stored at 38°C for up to 3 years. At different ages, the lime-pozzolan pastes were removed and the amount of C-H left in the samples was determined using DTA analysis, as shown in Fig. 5. Most of the C-H in the samples was consumed within 28 days and after 3 years there was no
more of it left in the samples. It is interesting to note that at later ages, the consumption of the C-H was associated with the formation of another phase at around 180°C. The new phase is presumably some form of C-S-H.
Fig. 5: Calcium hydroxide consumption in lime-pozzolan pastes of volcanic ash stored at 38°C for up to 3 years (DTA curves for lime and va-lime pastes at 7 days, 28 days and 3 years; va represents volcanic ash, C-H is calcium hydroxide, C-S-H is calcium silicate hydrate).
Thin sections prepared from chunks of volcanic ash and tuff were used for petrography. The examination revealed that the volcanic ash was a scoriaceous basalt comprising olivine and clinopyroxene phenocrysts, and a ground mass of olivine, clinopyroxene, feldspar and magnetite. The tuff was made of fragments of basalt-rhyolite volcanic rock in a heavily altered, clay-rich matrix. Figs. 6 and 7 are scanning electron micrographs showing some of the mineralogical features described. Volcanic ash consisted of a mainly glassy structure and large bubble cavities.
Fig. 6: Olivine crystals and typically numerous bubble cavities. Scanning electron micrograph of volcanic ash.
It is likely that the heavily clayey matrix of tuff observed from the petrographic analysis contributed significantly to its high loss on ignition. Consequently, the tuff had low to poor strength properties and pozzolanic activity.
Fig. 7: Fragments of volcanic rock and mineral particles embedded in a largely clayey matrix. Scanning electron micrograph of tuff.
4.0 CONCLUSIONS
When evaluated for use as a pozzolan in concrete, volcanic ash met the requirements for 'Class N' pozzolans specified in ASTM C-618. The tuff failed to meet these requirements and may be of little use. Volcanic ash was found to be most effective at 20 to 25% replacement levels and 506 m²/kg Blaine fineness. Examination of the mineralogies of the pozzolans revealed the volcanic ash to be scoriaceous basalt with a presence of olivine, clinopyroxene, feldspar and magnetite minerals. The tuff consisted of fragments of basalt-rhyolite volcanic rock in a heavily altered, clay-rich matrix.
ACKNOWLEDGEMENTS
The authors are grateful to Professor Michael Gorton of the Department of Geology and Saraci Mirela of the Civil Engineering Department, both of the University of Toronto, for conducting studies on the mineralogy of the pozzolans. We are also grateful to Eng. Balu Tabaaro of the Department of Survey and Mines, Mineral Dressing Laboratory, Entebbe, Uganda, for providing some samples and literature.
REFERENCES
Day, R.L. (1990), Pozzolans for use in low-cost housing, A state-of-the-art report, International Development Research Centre, Ottawa, Canada, September 1990.
Malvar, L.J., Cline, G.D., Burke, D.F., Rollings, R., Sherman, T.W. and Green, J. (2002), Alkali-silica reaction mitigation: state-of-the-art and recommendations, ACI Materials Journal, vol. 99, no. 5, Sept-Oct 2002, 21 p.
Mehta, P.K. (1981), Studies on blended portland cements containing santorin earth, Cement and Concrete Research, vol. 11, no. 4, p. 507-518.
Mills, R.H. and Hooton, R.D. (1992), Final report to International Development Research Centre (IDRC) of Canada, on production of Ugandan lime-pozzolan cement, blended cements, their utilization and economic analysis, prepared by the Department of Geological Survey and Mines, Mineral Dressing Laboratory, Entebbe, Uganda, in conjunction with the Department of Civil Engineering, University of Toronto, Toronto, Canada, November 1992, 72 pages.
Tabaaro, E.W. (2000), Bio-composites for the building and construction industry in Uganda, International Workshop on Development of Natural Polymers and Composites in East Africa, Arusha, Tanzania, September 2000.
COMPARATIVE ANALYSIS OF HOLLOW CLAY BLOCKS AND SOLID REINFORCED CONCRETE SLABS
M. Kyakula, N. Behangana and B. Pariyo, Department of Civil and Building Engineering, Kyambogo University, Uganda
ABSTRACT
Over 99% of multi storey structures in Uganda are of reinforced concrete framing. Steel and brick structures account for less than 1%. Of the reinforced concrete structures currently under construction, 75% use hollow clay block reinforced concrete slabs. This paper looks at the form of the hollow clay block that contributes to its ease of use and enables it to be held in the slab both by mechanical interlock and friction. It explores its limitations and ways in which its form may be improved.
Designs of single slab panel, two storey reinforced concrete structures, with one side having a constant dimension of 8m while the other dimension is varied from 2m, 3m, 4m, 5m, 6m, 7m up to 8m, were carried out for both solid and hollow clay block slab construction. The design loads, moments, reinforcement, shear stresses and costs for each case of solid and hollow block slabs were compared. It was found that, contrary to common belief, solid slabs are cheaper than hollow clay block slabs. This is because hollow clay blocks need a minimum topping of 50mm and are manufactured in standard sizes of 125mm, 150mm, 175mm, 200mm and 225mm. This implies that for spans of about 2m, where solid slabs can be 75mm to 100mm thick, the minimum thickness of a hollow block slab is 175mm. Also, unlike solid slabs, hollow clay block slabs over 6m long may need shear reinforcement. As the length increases to 8m, the topping for hollow blocks increases to an uneconomic value. However, for large structures with over two storeys, hollow block slab construction might be cheaper as the reduced weight leads to smaller columns and foundations. Furthermore, hollow block slabs are easier to detail and construct, and are less prone to errors on site.
Keywords: Hollow clay blocks and solid RC slab; block shape; design loads; shear stress; moments; reinforcement; cost; ease of design/construction
1.0 INTRODUCTION
Concrete slabs behave primarily as flexural members and their design is similar to that of beams, except that the breadth of solid slabs is assumed to be one metre, while hollow block slabs are designed as T beams with effective width equal to the spacing between ribs. Slabs are designed to span smaller distances than beams and consequently have smaller effective depths (50 to 350mm). Also, the shear stresses in slabs are usually low and compression reinforcement is rarely used. Concrete slabs may be classified according to the nature and type of support, for example simply supported; direction of support, for example one-way spanning; and type of section, for example solid.
Until recently, the practice has been to use hollow blocks for lightly loaded floors such as residential flats. But a survey of 70 buildings currently being constructed in different parts of the country has revealed that hollow clay blocks are used in flats, hotels, students' hostels, offices, schools, libraries and shopping arcades (Pariyo, 2005). The basis of design justifies this advance in utilization: the design of hollow clay block slabs depends on the fact that concrete in tension below the neutral axis has cracked. Whereas this cracked concrete contributes to the rigidity of the floor, its only contribution to strength is through the concrete surrounding the tension bars, which holds the bars in the structure and provides bond. Thus any concrete in tension remote from the bars may be eliminated, reducing the weight while at the same time maintaining the strength of the slab. In hollow block slab construction, the hollow blocks are laid in a line with the hollow sides end to end, and the last block has its ends sealed to prevent entry of concrete into the holes. The slab is thus constrained to act as one-way spanning between supports. The slab acts and is designed as a T beam with the flange width equal to the distance between ribs, but is made solid within about 0.5m to 1.0m of the support to increase the shear strength. A weld mesh is laid in the topping to distribute any imposed load. Thus hollow block slabs can take most loadings. Hollow clay block slab construction is the most widespread form of slab construction; 60 of the 70 sites surveyed throughout the nation were using hollow clay block slab construction (Pariyo, 2005). The widespread usage and acceptability of this material necessitates that it should be thoroughly investigated. This paper is an attempt in this direction.
1.1 Hollow Blocks
A sketch of a typical clay hollow block is shown in Figure 1 below; its surface has small grooves which help introduce friction forces and a key for mechanical interlock, and these hold the block in the concrete. The dimensions given in Figure 1 were measured from actual hollow clay blocks on the market. The four hollow block sizes available on the Uganda market (from catalogues) are shown in Table 1. The limited number of sizes means that the least depth of hollow block slabs is 175mm; this is because the least height of hollow blocks is 125mm and the minimum topping allowed is 50mm. This implies that even for small spans such as 1m to 2m, which could require a slab thickness of 50mm to 100mm, one still has to use 175mm. However, as the span increases to 5m, the thickness of the solid floor slab and the hollow block slab are about equal.
Table 1: Hollow block types on the Ugandan market
S/No   Length (mm)   Width (mm)   Height (mm)   Weight (kg)
1      400           300          125           7.3
2      400           300          150           8.4
3      400           300          175           11.73
4      400           300          225           13.58
1.2 Implications of the Shape
A reasonable arrangement of blocks leaves a minimum width of 75mm, which allows for a 50mm diameter poker vibrator and 12.5mm clearance on either side. Thus a minimum rib width at the bottom is given as 75 + 2 x 40 = 155mm. This is greater than 125mm, the minimum rib width required for fire resistance as given in Figure 3.2 of BS8110. The applied shear stress v for a ribbed beam is given by v = V/(bv d), where V is the applied shear force, d is the effective depth and bv is the average rib width. Ribs created between the hollow blocks are 75mm wide at the top and 155mm at the bottom, as shown in Figure 2, for the case of a 175mm thick slab with hollow blocks of 125mm depth, a topping of 50mm and 25mm cover to the tension bars. It would be more conservative to use the smaller value of bv = 75mm in shear design calculations; however, in practice the larger value of bv = 155mm is used. Moreover, it may be difficult to justify using the average rib width if the rib width is not tapering. One alternative is to modify the hollow blocks such that the key is recessed into the blocks rather than projecting out, as illustrated in Figure 3. This could reduce the required rib width from 155mm to the minimum allowed of 125mm, thus saving on concrete and making the calculation of concrete shear stress easier, while at the same time providing the key for holding the hollow blocks safely in the slab.
2.0 COMPARATIVE ANALYSIS
Two sets of slabs were designed, one set using hollow blocks while the other used solid slabs. For each set, one side of the slab was kept at 8m while the other was varied from 2m, 3m, 4m, 5m, 6m, 7m, up to 8m. The imposed and partition loads were assumed to be 2.5 kN/m² and 1.0 kN/m² respectively. The floor finish and underside plaster were each assumed to be 25mm thick and of unit weight 24.0 kN/m³, giving a dead load from partitions and finishes of DL(P&F) = 1.0 + 0.05 x 24 = 2.2 kN/m². The dead load for the hollow block slab is given by DL(slab) = 24(h - Nb Vb) + Nb Wb, where h is the overall slab depth in metres, Nb is the number of blocks per m² of slab, Vb is the volume in m³ of a hollow block and Wb is the weight of a block in kN. The slab was assumed to be an interior panel in a building with over 3 panels in either direction. The corresponding beams, columns and pad footings were designed. Comparative analyses of the design loading, moments, reinforcement, shear forces and costs of construction were carried out, and a few of these are given below.
2.1 Design Loads per Square Metre
As the span and thus the loading increases, the design load in kN/m² increases for both solid and hollow block slabs. Figure 4 shows a comparison of design loads for hollow block and solid slabs. For hollow block slabs of less than 4m span, the design load is constant because the slab thickness used is dictated by topping requirements and the depth of available blocks. For this depth and span (175mm and less than 4m), deflection is not critical. On the other hand, the design depth increases with span in solid slabs because slab thickness varies as per allowable deflection requirements.
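The dead-load expression above is easy to evaluate for the block sizes of Table 1. The Python sketch below does this for a 175 mm slab built with the 125 mm block; the number of blocks per square metre and the block's envelope volume are illustrative assumptions, not values quoted in the paper.

```python
# Hollow-block slab self-weight from the expression used in the paper:
#     DL = 24*(h - Nb*Vb) + Nb*Wb     [kN/m^2]
# Block weight is from Table 1; Nb and Vb are illustrative assumptions.
GAMMA_CONCRETE = 24.0  # unit weight of reinforced concrete, kN/m^3

def hollow_slab_self_weight(h, n_b, v_b, w_b):
    """Self-weight (kN/m^2) of a hollow clay block slab of overall depth h (m)."""
    concrete = GAMMA_CONCRETE * (h - n_b * v_b)  # in-situ topping and ribs
    blocks = n_b * w_b                           # the clay blocks themselves
    return concrete + blocks

h = 0.175                        # 125 mm block + 50 mm topping, m
n_b = 6.25                       # assumed 400 x 400 mm plan module -> 1/(0.4*0.4) blocks per m^2
v_b = 0.400 * 0.300 * 0.125      # assumed envelope volume of one 125 mm block, m^3
w_b = 7.3 * 9.81 / 1000.0        # Table 1 block weight (kg) converted to kN

self_weight = hollow_slab_self_weight(h, n_b, v_b, w_b)
finishes = 1.0 + 0.05 * GAMMA_CONCRETE   # partitions plus 2 x 25 mm finishes = 2.2 kN/m^2
print(f"slab self-weight : {self_weight:.2f} kN/m^2")
print(f"total dead load  : {self_weight + finishes:.2f} kN/m^2")
```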
Figure 4: Variation of design loads for solid and hollow block floor slabs (design load, kN/m², versus span length, m; series: hollow blocks slab, solid slab).
2.2 Moments and Reinforcement
From Figure 5 it is seen that, despite the fact that the solid slab has a greater load and thus a greater applied moment, it has a greater reserve capacity: its ratio of applied to ultimate moments is less than that of the hollow block slab for all spans greater than 3m. Also, its area of reinforcement in mm² per m width of slab is less than that of the hollow block slab for all spans. This is because, for spans lower than 4m, even where the required area of reinforcement is small, one must provide the minimum allowed; the hollow block slab is treated as a Tee beam and one is required to provide a minimum area of steel given by 100As/(bw h) = 0.26 for fy = 460 N/mm², for flanged beams with the flange in compression, as per Table 3.25 of BS8110. On the other hand, solid slabs are provided with a minimum of 100As/(b h) = 0.13 in both directions. Also, for hollow block slabs it is preferable to provide one bar per rib; thus the next bar size has to be provided even where the required area of steel exceeds the previous bar size by only a small value.
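The minimum-steel provisions quoted above can be compared numerically. The sketch below uses the 0.26% and 0.13% figures from Table 3.25 of BS8110 as quoted in the text; the section dimensions and rib spacing are illustrative assumptions.

```python
# Minimum tension steel implied by the two provisions quoted from BS8110
# Table 3.25: 100*As/(bw*h) = 0.26 for the ribbed (hollow-block) section and
# 100*As/(b*h) = 0.13 for the solid slab. Section sizes are illustrative.
def min_steel(width_mm, depth_mm, percent):
    """Minimum tension steel area in mm^2 for the given section and percentage."""
    return percent / 100.0 * width_mm * depth_mm

# 175 mm hollow-block slab: one rib of average width 155 mm, ribs at 400 mm centres
as_per_rib = min_steel(155.0, 175.0, 0.26)
ribs_per_metre = 1000.0 / 400.0                    # assumed rib spacing
as_ribbed_per_m = as_per_rib * ribs_per_metre

# 100 mm solid slab, per metre width
as_solid_per_m = min_steel(1000.0, 100.0, 0.13)

print(f"hollow-block slab: {as_per_rib:.0f} mm^2 per rib "
      f"(about {as_ribbed_per_m:.0f} mm^2 per metre width)")
print(f"solid slab       : {as_solid_per_m:.0f} mm^2 per metre width")
# The higher minimum for the ribbed section illustrates why small-span
# hollow-block slabs attract more steel than the applied moment requires.
```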
Figure 5: Variation of the ratio of applied moment to moment of resistance for solid and hollow block slabs, plotted against span length (m).
2.3 Applied and Concrete Shear Stresses
The values of the applied shear stress v and the concrete shear stress vc obtained depend on the value of bv used. The usual practice is to stop the hollow blocks at about 500mm to 1000mm from the support, and for this length the slab is made solid. This serves to increase the shear resistance of the slab close to the support. It is also the practice to ignore the keys; then bv = 155mm and vc > v. However, if the keys are not ignored and bv = 75mm, then, as shown in Figure 6, for spans greater than 3m vc < v, thus necessitating shear reinforcement or the use of a solid slab up to a length at which the applied shear stress is no longer critical. On the other hand, the design concrete shear stress for the solid slab was greater than the applied shear stress for all lengths of span.
Figure 6: Comparison of applied shear stress v and concrete shear stress vc for the hollow block slab (bv = 75mm), plotted against span length (m).
2.4 Cost Comparisons
The costs of the various structural elements were derived and compared for both solid and hollow block slabs. The cost of each element designed using a solid slab was divided by that of the hollow block slab, and this ratio was plotted against span. Figure 7 shows the variation of the cost of solid and hollow block slabs with span length. It is seen that for spans less than 4m and greater than 5m, the cost of hollow block slabs is higher than that of solid slabs. This is due to the fact that for spans less than 4m, solid slabs allow a smaller depth, as per deflection requirements, while hollow block slabs have their thickness dictated by the depth of available blocks and the topping. Thus for spans of 2m and 3m, hollow block slabs have a bigger depth than solid slabs, with corresponding material requirements. At 4m and 5m, the hollow block slab becomes cheaper. Above 5m, the minimum topping (50mm) cannot be used because the available hollow blocks offer few standard depths, and in order to meet deflection requirements as the span increases, the only option is to increase the topping. Thus for an 8m span, deflection requirements dictate an overall depth of
340mm, yet the maximum depth of available hollow blocks is 225mm, giving an uneconomical topping of 115mm. The comparison of the cost of beams revealed that for spans less than 4m and greater than 5m, beams supporting solid slabs were slightly cheaper. This is because the current practice of using beams of the same size, even when the hollow blocks have constrained the slab to act as one-way spanning, maintains the rigidity of the structure and reduces the effective height of the columns, but offers no reduction in the materials used in the beams. The costs of columns were found to be the same for both cases because the case considered carried little weight and the reinforcement areas were dictated by minimum requirements rather than loading conditions. This implies that for structures supporting many floors, the columns for hollow block slabs will be cheaper because they will carry less load and the bending may be assumed to act about only one axis for all the columns. On the other hand, the foundations for a structure supporting hollow block slabs were found to be cheaper by an average of 10%. This is because the hollow block slabs ensured a reduced weight.
2.5 Design and Construction
Use of hollow blocks constrains the slab to act as one-way spanning. Such slabs are simple to analyse and design. The structural drawings are easy to detail and understand. During construction it is easier to lay the reinforcement, thus minimizing mistakes on site. The weld mesh included in the topping ensures distribution of imposed loading to the whole slab. Its ease of construction has contributed to its growing popularity, such that it now occupies 75% of the market share.
Figure 7: Variation of the cost of solid and hollow block slabs with span (ratio of the cost of the solid slab to that of the hollow block slab, plotted against span length, m).
3.0 CONCLUSION
The current shape of the hollow clay block has keys and grooves that provide mechanical interlock and friction resistance to hold the block firmly in the concrete. However, this shape could also decrease the shear resistance of the slab. A shape has been proposed that has all the advantages of the one currently used, while at the same time increasing the shear resistance of the slab and giving a saving in the concrete used. The limited range of hollow blocks available on the market makes hollow block slabs more expensive than solid slabs for spans less than 4m or greater than 5m. For spans less than 4m the minimum slab depth is 175mm, because the minimum available block depth is 125mm and the minimum topping required is 50mm, yet for solid slabs the depth can vary from 50mm to 150mm for spans varying from 1m to 3m, depending on loading and deflection requirements. For spans greater than 5m, deflection requirements dictate increasing depth with span, yet the maximum depth of available blocks is 225mm, leading to an uneconomical depth of topping. Using beams of the same size, even when the hollow blocks have constrained the slab to act as one-way spanning, maintains the rigidity of the structure and reduces the effective height of the columns, but offers no reduction in the materials used in the beams. The reduced weight due to the use of hollow block slabs results in reduced cost of columns and foundations. Moreover, since the use of hollow blocks constrains the slab to be designed and act as one-way spanning, the loading and thus the moments from one set of beams framing into the column are negligible compared to the other. Thus the columns experience uniaxial moments, which gives a saving in reinforcement.
REFERENCES
Balu Tabaaro, W. (2004), Sustainable development and application of indigenous building materials in Uganda, Journal of Construction Exhibition, Issue 1, p. 4-5.
BS8110-1 (1985, 1997), Structural Use of Concrete - Part 1: Code of practice for design and construction.
Mosley, W.H. and Bungey, J.H. (1989), Reinforced Concrete Design, 5th edition, Macmillan, London.
Pariyo, Bernard (2005), Comparative cost analysis of solid reinforced concrete slab and hollow clay blocks slab construction, Final year undergraduate project, Kyambogo University.
Seeley, I.H. (1993), Civil Engineering Quantities, 5th edition, Macmillan, London.
Uganda Clays, Kajjansi catalogue (2004), Price lists and weights of suspended floor units.
TECHNOLOGY TRANSFER TO MINIMIZE CONCRETE CONSTRUCTION FAILURES
S.O. Ekolu and Y. Ballim, School of Civil and Environmental Engineering, University of the Witwatersrand, South Africa
ABSTRACT
The use of concrete in developing countries is rapidly growing. There is, however, a strong possibility that its increasing application as a construction material will be accompanied by an increase in incidents of construction failures. Such problems have been experienced by many countries during the infancy of their concrete industries. Concrete construction is resource intensive, and construction failures come with significant economic costs, loss of resources and, sometimes, fatalities. For sustainable development in Africa, countries cannot afford to incur waste of resources and enormous expenses from failures that occur, especially in avoidable circumstances. Although research in concrete technology is growing rapidly and faces many challenges associated with skills and technological expertise, an important contributor to failure is that much existing knowledge is not adequately applied. The reason for this redundant knowledge base is inadequate technology transfer to all levels of the work force - from design engineers to the concrete work team at the construction site. This paper explores some of the barriers to effective technology transfer and considers ways of dealing with this problem in developing countries. Also presented is a case study of a recent fatal collapse of a new reinforced concrete building under construction in Uganda.
Keywords: Concrete; Construction failures; Technology transfer; Education; Skills development
1.0 INTRODUCTION
It is anticipated that developing countries are on the path to experience the largest growth in the world in the utilization of concrete in construction and the consumption of cementitious materials. The great existing need for infrastructure and general construction in these countries is a necessary ingredient for growth in lockstep with industrialization efforts. As an example, the recent trend of industrial growth in China, one of the large developing nations, has triggered significant use of concrete and cementitious materials, consuming about one-half of the world's cement production (Weizu, 2004). This is not to suggest that other developing countries will experience similar growth trends, but the need for physical infrastructure in Africa is being driven by pressures associated with population growth and increasing urbanization, as shown in Fig. 1, as well as ongoing industrial development and globalization trends that are likely to propel increases in cement consumption and the concrete construction industry.
Fig. 1: Forecast urban growth, 1990 to 2020, for developing and industrial countries (Source: United Nations, 1998) (CERF).
But the concrete industry in Africa is relatively young and could potentially experience disproportionately high construction failures. This is not to be pessimistic but rather to highlight the need for caution, so that the major past mistakes leading to failures experienced during the infancy of the concrete industry in North America and Europe over 100 years ago are not repeated in developing countries. In the early years of concrete construction, the concepts of concrete durability and sustainable development were either not known or not fully appreciated. In the present era, much knowledge has been accumulated on these issues and they can no longer be ignored in any credible concrete design and construction, more so for developing economies.
1.1 Early Precedents of Failures in Concrete Construction
At the inception of concrete as a new construction material, records indicate that rampant, and some spectacular, construction failures occurred. Based on past experience, it can be shown that there are very few new causes of construction failures today, other than variations of the same problems associated with the broad categories of design deficiencies, poor concrete materials and workmanship, formwork and scaffold problems during the construction process, foundations, and hazards (Feld, 1964; Mckaig, 1962). In an assessment of 484 building failures in Algeria, Kenai et al. (1999) found poor workmanship and poor concrete quality to be the main causes of building failures, in addition to soil movement. The lessons learnt from early experiences have been built into rules and procedures to act as safeguards to minimize re-occurrences of failures. These rules have been standardized into required building codes, construction material specifications, systematic selection procedures for engineers and contractors, professional registration requirements, and exposure of professionals to legal reprisals. Modern theories of technical risk management have been employed with the support of computer technology and analysis software. While these developments are most effective in defending against construction failures due to technical errors, their inappropriate use is often a problem of technology transfer. This manifests itself as ignorance of the existence of such codes and design guides, lack of understanding of the theoretical underpinnings of the code recommendations and specifications, inadequate application of such guides and specifications, and the absence of a quality assurance
procedure to ensure compliance. Human error adds a further dimension to the problem and it cannot be easily predicted, quantified or eliminated. The human error factor is a complicated subject that might not be fully handled technically, but its danger can be reduced with proper preparation, care and special attention to critical aspects of concrete science and technology in construction.
1.2 Construction Failures and Sustainable Development
Construction failures inhibit efficient and sustainable development and should be appropriately addressed. Although concrete is a relatively new construction material in developing countries, construction failures are not expected to be as frequent as they could be, nor are there any records to suggest so. Instead, most specifications and design codes governing construction practices are already in existence or have been adapted, or directly imported, from more developed countries. There are often problems associated with the direct importing of these standards (Ballim, 1999). Nevertheless, most of these procedures are often undermined in circumstances of compromised relationships between owners, designers and contractors, political and social uncertainties, and marginalization of local expertise due to foreign-influenced financing policies. Engineers and construction professionals in developing countries face a unique set of challenges. In many African countries, infrastructure construction projects have in the past been largely contracted to foreign firms or expatriates, citing incompetence and/or lack of local capacity. But the real challenge for professionals from developing countries is in translating the existing knowledge base from design to construction site, from theory to practice, while upholding the principles of effective and sustainable engineering and development within the local environment. This can only be successfully achieved through the development of appropriate and relevant specifications, education and training at all levels of staff in the design and construction process, and systems that assure quality and compliance. This paper presents some views on these issues and explores potential ways of minimising concrete construction failures within the context of effective resource utilisation.
2.0 THE TECHNOLOGY TRANSFER BOTTLENECK
Any construction project is a system of operations on and off site. The role of technology transfer is to bring together the main components of the system, suggested as construction systems and equipment, supplies and materials of construction, and human knowledge and skills. For an effective construction process, the independent operations of each component must be integrated to simultaneously perform in response to the other components, placing restrictions in accordance with output requirements. Human knowledge and skills play the pivotal role of planning, organising and executing works within the system towards optimal or efficient output. Many technical and non-technical errors are often made during integration and interaction of the system components, and deficiencies here often lead to construction failures. This segment forms the 'constriction in construction' shown in Fig. 2 of the simplified system model described.
Fig. 2: Technology transfer bottleneck. The technology and existing knowledge base (construction systems, tools and equipment; materials of construction; human knowledge and skills) passes through the technology transfer constriction to the construction job site.
Concrete construction is by and large an execution of its material science on the job site. This is where the major problem arises. Engineers design concrete structures using structural analysis concepts, but the structures have to be built through execution of concrete material science fundamentals on the job site. A construction site is also a concrete manufacturing factory. While the engineers, tradespersons, artisans and labourers need proper and appropriate skills in the fundamentals of concrete as a structural material to produce good construction, the designer, who may also be the supervisor, should be more focussed on the implications of concrete processing methods for design and analysis concepts. This is the stage where knowledge-based skills transfer becomes critical. Often these impediments are manifested as incompetence and ignorance on the part of tradespersons, deficiency in supervision, or outright negligence or lapses on the part of an engineer who otherwise is a competent and careful professional. These deficiencies translate into poor concrete materials lacking durability, poor workmanship, and problems in loading and removal of formwork and scaffolding, which often constitute the major causes of concrete construction failures.
3.0 TECHNOLOGY TRANSFER IN CONCRETE CONSTRUCTION
3.1 Concrete Technology, Skills Transfer and Education
There are likely to be many non-catastrophic construction failures in developing countries that are not reported or documented. Fear of legal reprisals and professional sanction discourages openness and record keeping of construction failures. However, the danger is that future engineers could repeat similar mistakes and further enhance the perception that engineering competence is lacking in developing countries and needs to be provided from the developed countries. On a positive note, construction failures provide an opportunity for betterment of skills and techniques through lessons learnt, and a chance to add value by including elements of
service-life extension into the repairs. Experience, formal and informal education, and appropriate training are required to improve existing technology and minimize construction failures. Concrete technology itself is changing fast, but concrete research and innovation is rarely developed or applied in developing countries. In most cases, engineering educational institutions emphasize design analysis while minimizing the fundamental concepts of the material science of concrete that are key to the process of effective concrete construction. It is often assumed that understanding of these important issues can be acquired through practice or continued professional development, which in most developing countries is not readily available to engineers except through serendipitous experience for the fortunate few. However, the concrete construction industry can benefit greatly from special courses and programs if provided by civil engineering institutions through their curricula. Current industry concerns such as construction failures, fundamentals of concrete making, ethics and many other topics can be easily accommodated as short courses or as units within major academic/educational programs.
3.2 Concrete Market and Industry in Developing Countries
Except for a few countries such as South Africa, the concrete market in most developing economies is highly fragmented. The concrete industry has multiple players, including producers and suppliers of construction materials, contractors, engineers and architects, unions of tradespersons and artisans, and formal institutions of research and education. None of these stakeholders benefit from construction failures, and it is important that they make their individual contributions through a representative structure that coordinates training and development to the benefit of the entire sector. Here lies an important challenge to all players in the concrete construction sector in developing countries, most of Africa included: they have to form a mutually supporting coordination structure which focuses on technology transfer through appropriate education, training and human resources development at all levels of the industry. This must be achieved if such countries are to grow positive and respectable indigenous concrete construction sectors.
3.3 Engineering for Sustainable Development
There are principles and procedures governing approval of construction projects and designs for physical infrastructure. Project cost and duration have traditionally been held as the main considerations, while evaluation is based on completion time, actual project cost, owner satisfaction and other factors. Recent advances have included environmental requirements in some construction project designs. But the concept of sustainable development has not been entrenched into construction from the engineering perspective. There is need to develop quantitative techniques that broadly measure the contribution of construction projects towards sustainable development. Such systems could then be built into the requirements for approval and evaluation of construction projects.
4.0 CASE STUDY: COLLAPSE OF J & M AIRPORT ROAD HOTEL, ENTEBBE, UGANDA
The collapse of a three-story concrete structure during construction of the J & M Airport Road Hotel on September 1, 2004, causing the death of 11 persons and injuring 27 others, was perhaps one of the most publicised recent incidents of a construction failure in the East African region. The section that collapsed was adjacent to a large section of an already erected six-
story reinforced concrete frame with brickwork wall filling. This brief overview is based on available reports, and is given only for the purpose of illustrating important issues concerning concrete in construction failures. The building was a standard reinforced concrete, column-and-beam type construction, with concrete floor slabs, brick wall partitions and cladding. On the date of collapse, construction of the section had reached the third floor. At around 10 am, when the structure collapsed, reports indicate that the workers had been removing scaffolding in preparation for erection of partitions (Bogere and Senkabinva, 2004). The whole section of the structure fell vertically, with the beams shearing off from the adjacent erected six-story section of the same hotel building. The results of a site survey and construction materials testing conducted by the Uganda National Bureau of Standards (UNBS, 2004) showed that concrete strength for columns was low and highly variable, ranging from 7 MPa to 20 MPa, well below its expected grade of 25 MPa. The report showed evidence of segregated and severely honeycombed concrete with loose aggregates that could be easily hand-picked, particularly at the joints of columns and beams or floors. No hazards were involved and foundation problems were unlikely. Even before considering the possibility of a design deficiency, a myriad of faults could be assembled. Poor workmanship and poor concrete quality were apparent. The removal of scaffolding and supports at the lower floor could have been the trigger for collapse, given that columns of such low strength concrete could easily fail to support the upper two floors. Indeed, columns of the existing six-story adjacent section had shown signs of buckling, and additional columns and props had to be provided at the ground floor level for further support. Clearly there was deficiency in ensuring that the fundamentals of concrete making, involving mixing, placing and curing, were not compromised. In this case, potential errors could have been related to some or all of the following: inadequate cement content, dirty or inferior quality aggregates, inappropriate mix design, incorrect batching of concrete mixture components, segregation during placing and compaction, poor curing, absence of supervision and quality control testing, premature removal of scaffolding/formwork, etc. These are all skills-related issues of specific concern for concrete materials in construction.
5.0 PROPOSALS
Generally, it has been recognized that concrete is a complex material and its market so diverse that coordinating structures in the form of non-profit organizations are necessary to bring together all stakeholders, who then consider the issues that potentially affect the sector. The key role of such a structure is to advance the local concrete market and technology. In addition, a non-profit organization for the concrete market in a developing economy would be expected to promote requirements for sustainable development in concrete construction and technology: alternative materials such as pozzolans, industrial waste utilisation, recycling and re-use, appropriate technologies for concrete products, training on fundamentals and advances in concrete technology, and research and innovations to meet local needs.
Institutions such as the Concrete Society of Southern Africa, the American Concrete Institute and the Cement Manufacturers' Association (India) are examples of coordinating structures that provide essential education on concrete technology and its advances, fund research and innovation, improve technology and skills transfer, facilitate information dissemination
and grow the concrete market in their regions. In East Africa and many other developing regions such frameworks are non-existent. As a result, the concrete industry is fragmented and not well protected against construction failures, and concrete technologies are simply transplanted from more developed countries. A second and equally important weakness is the dearth of locally appropriate design codes and specifications for durable concrete construction. These documents must be developed by the local concrete community and must be accompanied by the parallel development of systems and procedures for quality assurance to ensure compliance. The authors are also of the view that, while failures must be avoided in the first instance, when they do occur, more can be achieved by evaluating the impact of the failure on sustainability in addition to identifying the cause(s) of the construction failure. During repair or new construction after the failure, parameters for sustainable concrete construction can then be built into the project work in order to add value which compensates for the cost of the failure. In this way a construction failure can be converted into a channel for technology transfer while achieving the benefits of learning from it and promoting sustainable development. A simple technique has been proposed that can be developed and used to evaluate the impact of construction failures on sustainable construction engineering and development. It consists of four broad requirements already identified by the Africa Engineers Forum (AEF, 2004) as: (1) affordability, (2) sustainability, (3) appropriate technology, and (4) indigenous capacity and skills transfer. A scoring system can be used for each requirement based on qualifying indicators. For each of the requirements, the impact value can be calculated as:

SCEIV = Σ (weighted SCR_j) for j = 1 to 4;   where   SCR_j = (Σ RQI scores) / N_rqi
with: SCEIV = sustainable construction engineering impact value for a given project; SCR_j = sustainable construction requirement j; RQI scores = qualifying indicator scores for a specific requirement; and N_rqi = number of qualifying indicators for that requirement.
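As a rough illustration of how such a score might be computed in practice, the short sketch below evaluates the formula for a hypothetical project. The requirement names follow AEF (2004), but all weights and indicator scores are invented placeholders for illustration only, not values proposed by the authors.

```python
# Hypothetical SCEIV scoring sketch; weights and indicator scores are placeholders.
project_scores = {
    "affordability":              ([3, 4, 2], 0.25),     # (RQI scores, weight)
    "sustainability":             ([2, 3, 3, 4], 0.30),
    "appropriate technology":     ([4, 4], 0.25),
    "indigenous capacity/skills": ([1, 2, 3], 0.20),
}

sceiv = 0.0
for requirement, (rqi_scores, weight) in project_scores.items():
    scr = sum(rqi_scores) / len(rqi_scores)   # SCR_j: average of its RQI scores
    sceiv += weight * scr                     # weighted contribution of requirement j
    print(f"{requirement}: SCR = {scr:.2f}")
print(f"SCEIV = {sceiv:.2f}")
```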
6.0 CONCLUSIONS
It has been seen that some of the common causes of concrete construction failures are attributed to problems stemming from technical and human errors, when the construction labour force and professional teams do not give proper attention to basic concepts and advances in concrete technology. Poor workmanship, poor concrete quality and unsafe removal of scaffolding contributed to the collapse of the new reinforced concrete building discussed in the case study. It is proposed that an important reason for such failure is the lack of technology transfer to all sectors of the construction industry. This can best be addressed through the development of locally appropriate design codes and specifications, establishing local and regional coordinating structures which represent the development interests of the concrete sector, and aligning the curricula of education and training institutions to attend to the learning needs of employees in the sector. Furthermore, civil engineering institutions of higher learning are better placed to provide special course programs and, from time to time, to adjust their curricula to include relevant topics, especially construction failures, concrete materials technology and the understanding of design concepts. In addition to identifying failure causes, evaluation of the impact of construction failures on sustainable development needs to be considered from an engineering perspective. Repairs or new construction following failures could be conducted with value-adding components that promote sustainable development, perhaps recovering some of the long-term cost of the failure. The concept of using an algorithm has been suggested that can be developed to analyze the impact of construction failures on sustainable construction. Through these approaches some mistakes made at the inception of concrete construction in more developed countries could be avoided or improved upon by developing countries.
REFERENCES
AEF (2004), Africa Engineers Forum - protocol of understanding, second edition, SAICE, Private Bag 3, X200, Halfway House, 1685, South Africa.
Ballim, Y. (1999), Localising international concrete models - the case of creep and shrinkage prediction, Proceedings of the 5th International Conference on Concrete Technology for Developing Countries, New Delhi, November 1999, National Council for Cement and Building Materials, India, pp. 111-36 to 111-45.
Bogere, H. and Senkabinva, M. (2004), Collapsing building buries 25, The Monitor, news article, 2 September 2004, Monitor Publications Limited, P.O. Box 12141, Kampala, Uganda.
CERF, The future of the design and construction industry: where will you be in 10 years?, CERF monograph, 2131 K Street NW, Suite 700, Washington DC 20037.
Feld, J. (1964), Lessons from failures of concrete structures, American Concrete Institute, Monograph No. 1, Detroit, MI, USA.
Kenai, S., Laribi, A. and Berroubi, A. (1999), Building failures and concrete construction problems in Algeria - statistical review, Proceedings of the International Conference on Infrastructure Regeneration and Rehabilitation, University of Sheffield, Ed. R.N. Swamy, 28 June-2 July 1999, p. 1147.
McKaig, T.H. (1962), Building failures: case studies in construction and design, McGraw Hill, New York, 261 p.
UNBS (2004), Preliminary report on the collapse of the building for J & M Airport Road hotel apartment and leisure centre on Bwebajja Hill, Entebbe Road, Uganda National Bureau of Standards, Plot M217 Nakawa Industrial Area, P.O. Box 6329, Kampala, Uganda.
Wiezu, Q. (2004), What role could concrete technology play for sustainability in China?, Proceedings of the International Workshop on Sustainable Development and Concrete Technology, Ed. K. Wang, 20-21 May 2004, p. 35.
DEVELOPMENT OF AN INTEGRATED TIMBER FLOOR SYSTEM
F. van Herwijnen, Department of Structural Design, University of Technology Eindhoven, Netherlands; ABT Consulting Engineers, Delft/Velp, Netherlands.
A. J. M. Jorissen, Department of Structural Design, University of Technology Eindhoven, Netherlands; SHR Timber Research, Wageningen, Netherlands.
ABSTRACT
The requirements of building structures are likely to change during their functional working life. Therefore designers of building structures should strive for the best possible match between technical service life (TSL) and functional working life (FWL). Industrial, Flexible and Dismountable (IFD) building is defined and presented as a design approach for building structures to deal with these aspects. The IFD concept combines sustainability with functionality and results in a higher quality level of buildings. The IFD design approach leads, among other things, to integration and independence of disciplines. This will be shown in the development of a new lightweight integrated timber floor system. This timber floor system makes use of the neutral zone of the floor to accommodate technical installations. The paper describes the composition of the integrated timber floor system and focuses on the dynamic behavior (sound insulation and vibration) and fire safety of this lightweight floor system.
Keywords: Functional working life; IFD building; Integration; Floor system; Timber structures; Vibrations; Fire safety.
1.0 INTRODUCTION
The design process of structures should consider the whole period of development, construction, use and demolition or disassembly. The requirements of building structures change during their lifetime. The following terms regarding the lifetime of structures can be defined: (i) Technical service life (TSL): the period for which a structure can actually be used for its intended structural purpose (possibly with necessary maintenance but without major repair). (ii) Functional working life (FWL): the period for which a structure can still meet the demands of its (possibly changing) users (possibly with repairs and/or adaptations).
Because of the large expenses often involved in adapting building structures, it can be advantageous to strive for a functional working life equal to the technical service life. The IFD concept as described hereafter makes this possible. On the other hand, there is a tendency to organize the horizontal distribution of installations in combination with the floor system. To save height, the installations are accommodated inside the floor. To fulfill the changing demands of users, installations should be reachable for adaptations and repair during their technical lifetime. Also, because the floor structure and the installations have different technical lifetimes, the latter should be reachable inside the floor for replacement. To facilitate this, integrated floor systems have been developed, both as concrete and as composite structures. To reduce self-weight, lightweight integrated steel floor systems have also been introduced, however with uncomfortable vibration behavior. For this reason, the possibility of developing an integrated timber floor system with comfortable vibration behavior was investigated.
2.0 IFD CONCEPT
From the important notion of striving for sustainable building rose the concept of IFD building: Industrial, Flexible and Dismountable building. Industrialized and flexible building in itself is not new; the combination with dismountable building, however, is. The three elements of IFD building can be defined as follows. Industrial building in this context is the industrial manufacture of building products. Flexibility is the quality of a building or building component which allows adjustments according to the demands and wishes of the users. Flexibility may relate to two stages:
- Design stage: variability in the composition and the use of material;
- User stage: flexibility to adjust the composition and the applied building components to the changing demands of the same or varying users while in use.
Dismountable building is the construction of a building in such a way that a building component may be removed and possibly re-used or recycled, soiled as little as possible by other materials, and without damaging the surrounding building components. (In recycling we do not use the complete product, but only its raw material.) Dismountable building is also a means for the realization of flexibility, because building components may be easily detached and replaced by other (industrial) building components. The IFD concept combines sustainability with functionality and results in a higher quality level of the building (Van Herwijnen, 2000). Industrial building increases the quality of the components, reduces the amount of energy for production and construction and reduces the amount of waste on the building site: less waste and less energy. Flexibility by adaptation of the building structure increases the functional working life: long life. Dismountable building makes re-use of elements/components or restructuring possible: loose fit and less waste.
3.0 INTEGRATED FLOOR DESIGN
The IFD philosophy leads, among other things, to integration and independence of disciplines. Integration concerns the design of components taking other components into consideration; independence relates to the independent replaceability of components. This can be shown in three existing integrated floor systems: the composite Infra+ floor, the steel IDES floor and the concrete Wing floor, described and discussed in (Van Herwijnen, 2004). The goal of this research was to develop an integrated timber floor system that fulfills modern comfort criteria regarding vibrations, acoustics and fire safety.
4.0 STARTING POINTS FOR INTEGRATED TIMBER FLOOR SYSTEM
As stated before, the new timber floor system should be IFD-based: an industrial way of fabrication (i.e. prefabricated), flexible and dismountable. Besides that, the floor has to:
- accommodate technical installations inside the floor;
- be suitable for both office and residential buildings, resulting in a live load of 3 kN/m² and a dead load of 0.5 kN/m² for lightweight separation walls;
- have a free span of maximum 7.2 meters;
- have a width based on a multiple of a modular measure of 300 mm, with a maximum of 2.4 meters due to transport restrictions;
- transfer wind loads from the facades to diaphragm walls every 14.4 meters;
- have a fire resistance against failure of 90 minutes (top floor level <= 13 meters above ground level) or 120 minutes (top floor level > 13 meters above ground level);
- have a comfortable vibration behavior.
5.0 FLOOR DESIGN
5.1 Technical Installations
The dimension of the installation zone inside the floor is determined by the dimensions of the air ducts, their connections, the air grates and the space to be conditioned. The choice of the best installation system from a list of alternative solutions was made using a multi-criteria method. This resulted in an installation with: a balanced ventilation system; all air ducts inside the floor system, always reachable from above; a climate window (downstream type) in the facade; and air exhaust in sanitary rooms and climate facades. For a space to be conditioned of 7.2 x 3.6 meters, the installation zone inside the floor was determined to be at least 780 x 260 mm for rectangular air ducts, see Fig. 1.
5.2 Layout of Ground Plan
Dutch office buildings usually have a ground plan with two bays of 7.2 meters and a central corridor of 2.4 meters. The central corridor may have a suspended ceiling to create space for technical installations.
For residential buildings this ground plan also fits: two zones next to the facades for living and a central zone for installation shafts, vertical transport, bathrooms, kitchens and washrooms. This results in a typical cross section as shown in Fig. 2.
5.3 Typology of the Floor Section
To integrate the technical installations inside the floor thickness, a hollow floor section is needed. To reach the installation components from above for maintenance and repair, the top floor should be removable. This means that the top floor cannot be a structural part. The structural components should be a combination of a floor plate (as physical separation between two stories) stiffened by beams. Fig. 3 shows possible typologies of the floor sections. Typology c, with a width of 2.4 meters, was selected. The floor plate is a sound and fire barrier, and should not be penetrated. Adaptations to the installations can be done from above, without requiring the approval of neighbors below. No suspended ceiling is necessary.
Fig. 1 Required installation zone inside the floor (the panels compare a rectangular 150 x 400 mm and a circular Ø250 mm air-distribution duct and the installation space each requires).
Fig. 2 Typical cross section over the building, with two bays of 7.2 meters and a central corridor of 2.4 meters.
Fig. 3 Typologies of the floor sections: a = U-shape, b = T-shape and c = UU-shape.
6.0 COMPOSITION OF THE TIMBER FLOOR SYSTEM
6.1 Floor Spanning 7.2 Meters in Facade Area (see Fig. 4)
For the floor plate a laminated veneer lumber is chosen: Kerto Q, 33 mm thick. Plywood was not an option, because it is not available in a length of 7.2 meters. Moreover, Kerto Q has a higher flexural stiffness than plywood in the main direction (parallel to the floor beams). The floor beams, dimensions 110 x 350 mm, are made of laminated timber, class GL 28h, because of the required length of 7.2 meters. Plate and beams are glued together, and act as a composite T-structure. On top of the floor plate an acoustical and fire-protecting insulation of rock wool is applied, with a thickness of minimum 70 mm to maximum 100 mm.
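To give a feel for what the glued plate-and-beam combination achieves, the sketch below estimates the bending stiffness of one composite T-unit with an elementary composite-section calculation. The 1200 mm beam spacing is taken from the conclusions of this paper; the two moduli of elasticity are assumed typical values and are not stated by the authors.

```python
# Rough composite-section sketch of one T-unit: a 1200 mm wide strip of the 33 mm
# Kerto Q floor plate (at the bottom) glued to one 110 x 350 mm GL28h beam.
E_PLATE = 10_000.0   # Kerto Q modulus parallel to span [N/mm^2] (assumed)
E_BEAM = 12_600.0    # glulam GL28h modulus [N/mm^2] (assumed)

# (axial stiffness EA, centroid height above the plate soffit, own bending stiffness EI)
plate = (E_PLATE * 1200 * 33, 33 / 2, E_PLATE * 1200 * 33**3 / 12)
beam = (E_BEAM * 110 * 350, 33 + 350 / 2, E_BEAM * 110 * 350**3 / 12)

EA = plate[0] + beam[0]
y_na = (plate[0] * plate[1] + beam[0] * beam[1]) / EA        # neutral-axis height [mm]
EI = sum(ei + ea * (y - y_na) ** 2 for ea, y, ei in (plate, beam))
print(f"neutral axis {y_na:.0f} mm above the soffit, EI = {EI:.3g} N*mm^2 per T-unit")
```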
Fig. 4 Composition of the floor system, spanning 7.2 meters.
The floor beams are supported at the facade on longitudinal facade beams, spanning from column to column, dimensions 135 x 320 mm. The floor beams have a notch of maximum 95 mm (according to Eurocode 5), so that the undersides of the floor and facade beams are at the same level (see Fig. 5). The floor beam is connected to the facade beams by self-tapping screws. In case of fire these screws are able to transfer the load from floor beam to facade beam.
Fig. 5 Detail of floor panel support at the facade.
At the central corridor the floor beams are supported in the same way on longitudinal beams. Adjacent floor panels are connected to each other by means of self-boring screws (see Fig. 6). Wind loads on the facade have to be transferred to the diaphragm walls by means of the floor panels; this results in shear forces in the joints between the floor panels. A cam is helpful to create a flat underside of adjacent panels. A direct connection is possible because of the dimensional stability of Kerto and laminated timber. The dry top floor is composed (from the top downwards) of 18 mm chipboard, a 15 mm wood wool cement slab, 32
mm wood fiber insulation board and a 27 mm Kerto Q plate. This composition is based on acoustical considerations.
Fig. 6 Connection between adjacent floor panels.
6.2 Floor Spanning 2.4 Meters in Central Area
A different floor system is used for the central area, because more floor height is needed to create enough space for accommodation of technical installations. Timber floor beams, class C24, dimensions 96 x 171 mm, span over 2.4 meters and support the same dry top floor as mentioned before. These beams are dimensioned for bending due to vertical loads, and for compression in case of fire to transfer the horizontal wind load from one bay to another. The separate ceiling panel is composed of a 21 mm thick Kerto Q plate, stiffened with timber beams, class C24, dimensions 60 x 121 mm. This ceiling panel has a structural function only during assembly of the building structure, to install the technical installations.
7.0 DYNAMIC BEHAVIOUR
7.1 Sound Insulation
The contact- and airborne sound insulation is determined by the composition of the floor. The combination of a dismountable top floor and sufficient sound insulation proved to be very difficult. Several alternatives for the top floor were investigated by Koops (2005). Finally, the top floor (Fig. 6) is composed of a relatively heavy chipboard top layer, an acoustically open wood wool cement slab, an insulation panel and a relatively heavy Kerto Q under layer. The top floor is supported on the timber beams through rubber pads. The 33 mm thick Kerto Q plate has a favorable influence on the sound insulation. The rock wool layer in the cavity acts as a noise absorber. The whole floor structure has been modeled in BASlab, a computer program developed by the University of Technology Eindhoven, to check the sound insulation. Tests will be conducted to verify the results of the numerical analysis.
7.2 Vibration
The dynamic response of the floor to walking people depends on the natural frequency, the modal mass and the damping. The natural frequency of the timber floor can be calculated by modeling the floor as a one-mass-spring system. The first natural frequency of this system can be calculated with equation (1):

f1 = (1/(2π)) √(384 EI / (5 m l³)) = 0.5614 / √(u_m)   [Hz]   (1)

with: f1 = first natural frequency in Hz, EI = flexural stiffness, m = total floor mass, equal to 79% of the total dead weight of the single supported beam, in kg, l = floor span in meters, u_m = displacement in the middle of the beam due to m, in mm.
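A quick numeric check of equation (1) can be scripted as below; the stiffness and panel weight used here are assumed round figures for illustration only, not the design values of this floor.

```python
import math

EI = 3.0e7          # flexural stiffness of one 2.4 m wide panel [N*m^2] (assumed)
span = 7.2          # floor span [m]
dead_weight = 2750  # total dead weight of one panel [kg] (assumed)

m = 0.79 * dead_weight   # equivalent mass of the one-mass-spring system [kg]
f1 = (1.0 / (2.0 * math.pi)) * math.sqrt(384.0 * EI / (5.0 * m * span ** 3))
print(f"f1 = {f1:.1f} Hz")  # compare with the f1 >= 8 Hz requirement quoted below
```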
The modal mass of the single supported beam is taken as 64% of the dead load of the floor system that vibrates during walking, and consequently depends on the stiffness of the floor perpendicular to the beam axis. The damping of timber floor structures, according to Table 9 in (SBR, 2005) and including the influence of the interior (furniture), is about 7% for residential buildings. Eurocode 5 requires f1 >= 8 Hz for timber floors in residential buildings. (SBR, 2005) distinguishes comfort classes for different types of buildings; office and residential buildings belong to comfort class D (see Fig. 7). With f1 = 8 Hz, a modal mass of 2500 kg and a total damping of 7%, the developed timber floor satisfies the dynamic requirements of comfort class D and also the requirement f1 >= 8 Hz of Eurocode 5.
8.0 FIRE SAFETY
The floor structure acts as a separation between different fire compartments of the building, and should have a fire resistance of at least 90 minutes. The starting point for the design was that the timber structure should resist fire by itself rather than through covering materials. The reduced cross-section method according to Eurocode 5 was applied. The behavior of the structure after the start of a fire can be described as follows (see Fig. 8):
- Phase 1: floor structure completely intact; start of the burning-in process of the Kerto Q plate;
- Phase 2: failure of the Kerto Q plate and start of the burning-in process at the underside of the timber beams and of the insulation material in the floor cavity;
- Phase 3: failure of the insulation material and burning-in of the timber beams on three sides.
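The reduced cross-section bookkeeping mentioned above can be sketched as follows; the notional charring rate and the zero-strength layer are assumed values commonly used for glued-laminated timber, not figures stated by the authors.

```python
BETA_N = 0.7   # notional charring rate for glulam [mm/min] (assumed)
K0_D0 = 7.0    # zero-strength layer k0*d0 [mm] (assumed)

def effective_char_depth(minutes_exposed: float) -> float:
    """Effective charring depth d_ef = beta_n * t + k0 * d0 for one exposed face."""
    return BETA_N * minutes_exposed + K0_D0

# A face exposed for the full 60 minutes chars about 49 mm, the bottom-face figure
# quoted later in the text; the sides are only exposed from phases 2-3 onwards, so
# their exposure time, and hence their charring depth, is smaller.
print(effective_char_depth(60))
```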
Fig. 7. Diagram with comfort classes for floor systems with 7% damping, with modal mass on horizontal axis and first natural frequency of floor structure on vertical axis.
Fig. 8 Three phases of burning-in of the timber floor structure.
The low height/width ratio of the beams is favorable for the fire-resisting time of the beams: the burning-in depth after 60 minutes is about 49 mm at the bottom and 33 mm at both sides, so the structure has enough fire resistance for the required applications.
9.0 COMPARISON WITH OTHER INTEGRATED FLOOR SYSTEMS
Comparing the developed timber floor with two other floor systems, an Infra+ floor and a hollow core slab with a raised top floor, shows three advantages of the timber floor:
- The total mass of the timber floor (160 kg/m²) is about 46% lower than that of the Infra+ floor and even about 73% lower than that of the hollow core slab;
- The total height of the timber floor (511 mm) is almost equal to the height of the Infra+ floor (515 mm) but much lower than the height of the hollow core slab with
raised top floor (628 mm), all with the same space for accommodating technical installations;
- The timber floor has the best composition in view of sustainable building.
10.0 CONCLUSIONS
From the foregoing it can be concluded that: (i) an integrated timber floor system has been developed that fulfills current comfort requirements in the field of acoustics, vibrations and fire safety; (ii) timber concentrated in beams at 1200 mm centres has an advantage in case of fire (low burning-in velocity) and for installation space inside the floor; (iii) making use of a Kerto Q plate as floor panel, with the veneer layer in the direction of the span, leads to a very stiff composite floor element with a comfortable vibration behavior; (iv) strong points of the floor are the minimum floor thickness, the light weight and the possibilities for re-use of materials and components at the end of the FWL; weak points are the higher construction costs and the sensitivity during construction in view of the high acoustical requirements.
ACKNOWLEDGEMENT
This research project was carried out by Lars Koops as a graduate project in structural design at the University of Technology Eindhoven, and guided by Frans van Herwijnen and Andre Jorissen.
REFERENCES
Eurocode 1, Basis of design and actions on structures, Part 1: Basis of design.
Eurocode 5, Design of timber structures, Part 1-1 (General - Common rules and rules for buildings, 2004) and Part 1-2 (Structural fire design, 2004).
Herwijnen, F. van (2000), Development of a new adaptable and dismountable structural system for utility buildings, TU/e Research Papers 2000, pp. 55-67.
Herwijnen, F. van (2004), Integrated floor systems, based on an Industrial, Flexible and Dismountable design approach for building structures, Proceedings of the Third International Conference on Advances in Structural Engineering and Mechanics (ASEM 2004), Seoul, Korea, September 2-4, 2004.
Koops, L.K. (2005), Design of an Industrial, Flexible, Dismountable and Integral timber floor system for residential and office buildings, Graduate report (in Dutch), University of Technology Eindhoven, Netherlands.
SBR report (2005), "Vibrations of floors by walking", guidelines to predict, measure and judge (in Dutch).
CONSIDERATIONS IN VERTICAL EXTENSION OF REINFORCED CONCRETE STRUCTURES
M. Kyakula, S. Kapasa and E. A. Opus, Department of Civil & Building Engineering, Kyambogo University, Uganda
ABSTRACT
There has been an increasing tendency to undertake vertical extensions in a number of structures being renovated or improved. This development usually involves a change and improvement of the function of the structure. However, vertical extension has been complicated by the fact that most of the original plans of these structures are missing. Moreover, most of these were designed using older codes based on elastic analysis, such as CP114. This paper explains how buildings designed using the older codes based on elastic analysis have reserve strength when extensions designed to limit state design codes such as BS8110 and Eurocode 2 are added. It sets out considerations that have to be taken into account. These include: determining the strength of the existing structure and of the soil; investigating the capability of the existing structure to carry increased load; and effecting the modifications in the slab, beam, column and foundation design. The composite action of the new concrete that is added onto the old concrete is also considered.
Keywords: Reinforced Concrete Structures, Vertical Extensions, elastic versus limit state design, composite action.
1.0 INTRODUCTION
The need for vertical extension has become paramount in the central business district of Kampala, which has necessitated a discussion of the considerations that have to be taken into account during the analysis, design and construction of these extensions. In most cases the original plans for these buildings are missing. They could have been lost when the Indian community, which owned most of the multi-storey buildings in the city, was expelled, or during the looting in the several political upheavals (wars) that the country has gone through. Moreover, most of these buildings were designed using older codes based on elastic analysis, such as CP114. Therefore, during a vertical extension a number of considerations are taken into account. The existing structure needs to be analyzed and designed using the existing loading and the method (elastic or limit state design) that could have been used to design it. Generally very few multi-storey structures were designed/built in Kampala during the turbulent period between 1971 and 1980. Limit state design was introduced in 1972 with the publication of CP110; therefore it is safe to assume that those designed before 1970 used the modular ratio method of design. Furthermore, it is to be noted that presently there are very few structures of less than five storeys in the country that include lifts. However, due to the change in the social perspective whereby it is now of the essence to cater for the disabled,
the elderly and infirm, a lift is becoming a must. One of the modifications in vertical extension is the inclusion of lift shafts or a ramp. Thus a structure that was originally designed as unbraced will need to be redesigned as braced, because if the extended structure is to be braced, it may not be helpful to compare it with an existing structure that is unbraced. At the same time the existing structure must be investigated to determine its condition, the size of its members, and the areas and layout of the reinforcement in these members. A comparison of the area and layout of reinforcement and the size of footings of the designed and existing structure must then be carried out. If possible, the other members that are readily visible, such as slabs, beams and columns, are made the same size as in the existing structure. The structure is then analyzed and designed with the vertical extension and other modifications included, under the current design codes, trying as much as possible to use reinforcement of the same or similar size and strength to that in the existing structure. A comparison of the existing design with what it should have been with the bracing lift shafts added, and with that of the extended structure, is carried out. If the reinforcement in a member of the existing structure is the same or greater, then no structural modification is necessary. If it is less than that of the extended structure, then modifications and/or ways of increasing the reinforcement for the slabs, beams, columns and foundations have to be considered. Apart from increasing the reinforcement, member sizes may also need to be increased. These and other considerations are briefly discussed in the following sections.
2.0 COMPARISON OF ELASTIC AND LIMIT STATE DESIGNS
Most of the structures being modified or vertically extended were designed using the elastic design/modular ratio method of design. "This is based on the assumption that the stress-strain behavior of both steel and concrete remains elastic. This implies that both have a constant modulus of elasticity and thus a fixed ratio of the moduli; it further implies that the stresses are limited to permissible values. CP114 applies a factor of 1/3 to the cube stress to obtain the permissible stress for concrete under flexure. It also applies a factor of 1.8 to the yield stress to get the permissible stress in steel. At the permissible stress of steel in tension, the surrounding concrete has cracked. Therefore, to limit crack width, the permissible tensile stress in steel is limited to 230 N/mm², whatever the grade of steel" (Morrel, 1977). Solid slabs are usually designed as singly reinforced. Therefore consider a singly reinforced rectangular section. For the modular ratio method, the moments of resistance with respect to concrete and steel are given by equations (1) and (2) respectively.
The maximum permissible values of f_cb and f_st, denoted p_cb and p_st respectively, are given by: p_cb = (f_cu / 3) and p_st = (f_y / 1.8) <= 230 N/mm². Assuming f_cu = 25 N/mm², f_y = 410 N/mm² and α_e = 15, then p_cb = 8.33 N/mm² and p_st = 228 N/mm², and substituting these values in equations (1) and (2) gives equations (3) and (4).
From limit state design, the moments of resistance with respect to concrete and steel are given by equations (5) and (6) respectively. The practice in Uganda has been to use a partial materials factor of 1.15 for steel rather than the 1.05 recommended by the current editions of BS8110; this is because most designers are not sure of the quality of the steel on the market. In the following illustrations this practice has been upheld.
M_uc = 0.156 f_cu b d² = 3.9 b d²    (5)
M_us = 0.87 f_y A_s z = 0.87 f_y A_s [d - (0.9x/2)] = 276.4 A_s d    (6)
Comparing equations (4) and (6), it is seen that the moment capacity of an existing floor slab (or singly reinforced beam) originally designed using the modular ratio method of design is 37% greater when analyzed using limit state design. Similarly, consider a column that is axially loaded, with gross area of concrete A_c and area of steel A_sc. If it was originally designed using the elastic design method, its safe load is given by:
N = p_cc [A_c + (α_e - 1) A_sc]    (7)
where A_c is the gross area of concrete. Assuming f_cu = 25 N/mm², f_y = 410 N/mm², α_e = 15 and A_sc = 0.01 A_c, then p_cc = (f_cu / 3) = 8.33 N/mm² and p_sc = (f_y / 1.8) = 228 N/mm², and then:
N = 1.14 A_c p_cc = 9.5 A_c    (8)
The axial load according to limit state design (BS8110) is given by:
N = 0.4 f_cu A_c + 0.75 f_y A_sc = 13.075 A_c    (9)
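The two capacities in equations (7)-(9) can be checked numerically; the short script below uses the material strengths and the 1% steel ratio assumed in the text.

```python
f_cu = 25.0     # characteristic concrete cube strength [N/mm^2]
f_y = 410.0     # characteristic steel strength [N/mm^2]
alpha_e = 15.0  # modular ratio
rho = 0.01      # steel ratio A_sc / A_c assumed in the text

# Elastic (modular ratio) design, equations (7)-(8), per unit gross concrete area:
p_cc = f_cu / 3.0
n_elastic = p_cc * (1.0 + (alpha_e - 1.0) * rho)   # 9.5 N/mm^2 of A_c

# Limit state design (BS8110), equation (9), per unit gross concrete area:
n_limit = 0.4 * f_cu + 0.75 * f_y * rho            # 13.075 N/mm^2 of A_c

print(f"elastic {n_elastic:.2f}, limit state {n_limit:.3f}, "
      f"reserve {(n_limit / n_elastic - 1) * 100:.1f}%")   # about 37.6%
```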
Comparing equations (8) and (9), it can thus be seen that a column under purely axial load, which was originally designed by the modular ratio method, has a reserve load-carrying capacity of 37.6%. Where the column carries moments, or is eccentrically loaded, it is easier to design using design charts. Again, comparing both methods of design, it can be shown that a column designed using the modular ratio method has a reserve load-carrying capacity of about 35%. Thus one of the considerations in the vertical extension of structures that were originally designed using the modular ratio method is that their design strengths were underestimated by about 35%.
3.0 ADDITIONS AND IMPROVEMENTS OF BUILDING SERVICES
The existing buildings have some services, which may include drainage, plumbing, electricity, telephone and solar systems. These services may not have the capacity to sufficiently cater for the extensions. Sometimes their technology may have been superseded by modern technology that the owner wants to introduce. An investigation of the existing services should be made, and protective and safety precautions taken while construction is going on. Their capacity has to be established, compared to the new demand and, if needed, upgrading, improvement or replacement of faulty parts undertaken.
4.0 INVESTIGATING THE STRENGTH OF THE EXISTING STRUCTURE
The major complication with vertical extension is that the original structural plans and documentation connected with the buildings to be extended are usually not available. Thus one has to determine the quantities physically. The actual sizes of members need to be established; for some of them, like footings/foundations, this will involve excavations. Even where the plans are available, a check has to be made to ensure that the structure was built according to plan. In one case, an investigation of a three-storey structure to be vertically extended revealed that the pad footings were made of just plain concrete without any reinforcement (Mwakali et al., 2002). The strength of the concrete used in the foundation, columns, beams, slabs and staircases must be determined. Commonly used methods are rebound hammers and cut concrete cores on which compression tests are carried out. Other methods, such as the ultrasonic pulse velocity test, are not common for lack of equipment. The strength, amount and layout of the reinforcement in the member must also be determined. A cover meter or rebar meter can be used; however, such equipment is scarce and rarely used. Instead, narrow strips (600 mm long x 100 mm wide) are hacked at right angles to the reinforcement to expose the reinforcement, its cover and the depth of the slab at a few selected points, for example near supports and at mid span. For the columns, light chiselling around the column exposes the reinforcement bars. This should be undertaken after assessing the loading carried by the various columns, such that only one is selected from those carrying similar loading. The
same is followed in assessing foundations. The footing is exposed to determine its size, and the edges of the column base are hacked to expose the reinforcement.
5.0 SOIL INVESTIGATIONS
Soil investigation must be undertaken to determine the bearing capacity of the soil, its settlement rate and the position of the water table. One of the easiest methods is to dig trial pits; visual inspections are carried out and then samples with minimum disturbance are collected for subsequent laboratory testing. Where possible, drilling should be undertaken, as this enables one to obtain undisturbed samples from which the settlement rate and bearing capacity may be obtained. For soils that loosen, such as sand and gravel, a plate-bearing test can be used to determine the bearing capacity of the soil in situ and to design spread footings for the static loads. If the strength of the soil is not adequate for the increased loading, it is necessary to improve on the foundations by introducing piles, or by enlarging the footing and reinforcing it better to sustain the increased loading.
6.0 DETERMINATION OF NEW LOADS
A vertical extension may involve any, all or some of the following changes: (a) an increased/improved level of service, such as the introduction of lifts and car parks; (b) increased wind loading arising from the increased height of the building; (c) a change of function of the building, e.g. from residential flats to offices. The change of function implies that the imposed load on the floor slabs must change. For example, if it was designed as a self-contained dwelling unit (flat) the imposed load would have been 1.5 kN/m², and if it changes to an office for general use, filing/storage spaces and computer rooms, the imposed loads will be 2.0 kN/m², 5.0 kN/m² and 3.5 kN/m² respectively. A function change results in a change in the traffic on the floor, which in turn implies a change in the finishes on the floor and the loading from finishes. Also, the change in the function of the building implies a change in the configuration of the partitions and walls. This involves demolition of some and addition of others. For example, a residential building has brick wall partitions 100 mm thick and 3 m high, with a brick unit weight of 17.3 kN/m³ and 25 mm plaster on either side. The loading from the partition is W_p = 0.15 x 3.0 x 1.0 x 17.3 = 7.785 kN/m length of wall. The equivalent distributed load is n = 0.33 W_p = 0.33 x 7.785 = 2.57 kN/m² (BS6399, Part 1; Reynolds & Steedman). This partition loading is far larger than the normal partition loads taken for offices of about 1.0 kN/m². When the sum of the effects of changes in imposed, partition, floor/wall finishes and services loads is an increase in loading, moments and deflections, it may imply an increase in the amount of reinforcement. This could be handled in three ways. The first is increasing the slab depth and thus the lever arm, which in turn increases the moment capacity of the slab. For example, assume a 150 mm slab is increased to 200 mm, with concrete grade 25. With an initial cover of 25 mm to 12 mm diameter mild steel bars, the initial effective depth is 119 mm; this increases to 169 mm, and the ultimate moment capacity with respect to steel is given by equation (10).
M_us = 0.87 f_y A_s (0.775d) = 0.67425 f_y A_s d    (10)
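The figures quoted in the next sentence follow directly from equation (10); a short check, using the effective depths assumed above, is:

```python
coeff = 0.87 * 0.775           # equation (10): M_us = coeff * f_y * A_s * d

d_old, d_new = 119.0, 169.0    # effective depths of the 150 mm and 200 mm slabs [mm]
capacity_gain = (d_new / d_old - 1.0) * 100.0    # about 42% more moment capacity
dead_load_gain = (200.0 / 150.0 - 1.0) * 100.0   # about 33% more self-weight/moment

print(f"M_us coefficients: {coeff * d_old:.2f} and {coeff * d_new:.2f} (x f_y*A_s)")
print(f"capacity +{capacity_gain:.0f}%, load +{dead_load_gain:.0f}%, "
      f"net +{capacity_gain - dead_load_gain:.0f}%")
```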
For a 150 mm slab, with d = 119 mm, M_us = 80.24 f_y A_s, and for a 200 mm slab, with d = 169 mm, M_us = 113.95 f_y A_s. Therefore the increase in moment capacity M_us is 42%, while the increase in dead load, and thus in applied moment, is 33%, giving a net increase of only 9%. However, it must be realized that this introduces the problem of composite action between the fresh concrete laid on the old concrete, and attendant actions such as shrinkage and creep. Alternatively, the span of the slab may be decreased by introducing supports for the slab. This introduces the problems of securing the support to the slab and catering for the hogging moments over the introduced support. Otherwise, if it is not a requirement that the original architecture be preserved, the floor or the whole structure may be demolished and a new one constructed. This could probably be more expensive.
7.0 MODIFICATION OF WALLS/COLUMNS
If the columns cannot carry the increased load, options for modification include the following. If space and building dimensions allow, the existing column or wall may be left to carry the existing load, while a new set of columns is introduced, usually externally, to carry the load due to the extension and a small part of the load from the existing structure. The new columns have to be tied to the existing structure. This reduces the slenderness of the new columns and, at the same time, provides redundancy/an alternative load path in case of accidental damage to other members. The column size and reinforcement may need to be increased by "jacketing": the old outer cover around the column is hacked to a depth of about 20 mm, and a new cage of reinforcement and a layer of fresh concrete are added. Where this jacketed column intersects with the beams and slab, holes whose diameter is about 2 to 3 times that of the bars in the new outer cage are drilled in the floor to pass the steel reinforcement, which laps with the bars in the lower cage. The extra space is filled with a rich grout. In some of the old structures, the slab is supported on blockwork only. Leaving the existing structure intact, columns could be added to support the extensions. Columns could be introduced away from or within the line of the wall. Attaching the slab to the columns involves hacking away part of the slab to expose the reinforcement, which is welded to new reinforcement to form a column head/drop panel as for a flat slab. Thus the slab may need to be reanalyzed as a flat slab supported on walls and columns.
8.0 EFFECTING MODIFICATIONS IN THE FOUNDATION
The most common types of foundation used in Uganda are pad and strip footings, with only a few cases of raft foundations and rarely pile foundations. Therefore the considerations in this paper will be confined to pad footings. Modifications carried out on pad footings include the following.
Interconnecting the existing ground floor columns by ground beams minimizes differential settlement. If the ground beams are not placed just above the level of the column bases, they will alter the effective length of the columns, and if they are not attached to all columns, the relative stiffness of the columns will be altered. Connecting a column to a ground beam will involve hacking away the concrete, exposing the column reinforcement and attaching/welding the ground beam reinforcement to it. Alternatively, additional footings to carry the load from the extension and part of the load from the existing structure may be introduced. Or the foundation may be enlarged; this will involve hacking away the edges of the existing footing, welding reinforcement to the existing reinforcement and casting concrete around it so as to increase its width, length and sometimes depth. Other interventions, like draining away water by use of French drains or other suitable methods, may be needed if the water table is close to the new depth of foundation. In a few cases piling may be considered; this will involve underpinning.
9.0 COMPOSITE ACTION OF THE NEW & OLD CONCRETE
Concrete reaches about 80-90% of its maximum strength in 28 days, and this is the characteristic strength of the concrete. However, as long as it is exposed to moist conditions it will continue increasing in strength at a very low rate. A concrete of characteristic strength 25 N/mm² may have a strength of 31 N/mm² after 20 years. Since the drawings are usually missing, the strength of the concrete is obtained from testing cores cut from the existing concrete and from rebound hammer tests. When new concrete is added to concrete that has stood for over 20 years, the young and old concrete will have different strengths, moduli of elasticity, and shrinkage and creep characteristics; thus they form a composite of two materials. After a few years the difference in the properties of these concretes may reduce to a minimum such that they may be assumed to be uniform, but for the first years they should be considered to act as a composite material. Before adding the new concrete, any finishes have to be hacked off and removed, the old concrete chiselled to make it rough, its surface washed to remove any dust, and a bonding agent introduced. When extending staircases and columns, it is difficult to expose the anchorage length for lapping; therefore the bars may be joined by welding or couplers.
10.0 CONCLUSIONS
Before embarking on vertical extensions, the structural integrity of the building and the bearing capacity of the site and loaded soil need to be investigated. Vertical extensions may involve a change of function, leading to changes in imposed loads, partition loads, finishes and wind loading. For buildings originally designed using elastic design methods, it needs to be considered that when redesigned using limit state methods, their capacity is about 37% greater. Despite this reserve capacity, the reinforcement may not be adequate, and then the addition of more reinforcement and increases in member sizes such as slab depth, column and foundation sizes may be considered. However, increasing the slab depth gives only a small increase in net moment capacity: for an increase in slab depth (and therefore dead load and moment) of 33%, the increase in moment capacity was found to be 42%, giving a net increase of only 9%. For columns, an increase in column size and reinforcement by "jacketing" or the introduction
of other columns may be considered. However, where fresh concrete is added to existing concrete, it must be considered that they have different strengths, moduli of elasticity, and creep and shrinkage characteristics, and may act as composite materials. Also, due to consideration for disabled users, lift shafts must be included in extensions; thus a structure that was originally designed as unbraced may need to be redesigned as braced. It should also be noted that vertical extensions involve demolitions and the loading of various parts of the structure with construction loads; support systems and safety precautions to protect the public, the workers and the building need special attention.
REFERENCES
BS8110-1 (1985, 1997), Structural use of concrete - Part 1: Code of practice for design and construction.
Mwakali, J.A., Naika, R.B., Akena, R. p'Ojok and Legesi, K.S. (2002), "The challenges of structural appraisal in developing countries: case studies from Uganda", Structural Engineers World Congress, Yokohama, Japan, p. 435.
MBW Consulting Engineers 2-3 (2003), "Review and update of the architectural appraisal of Udyam House".
Morrel, P.J.B. (1977), "Design of Reinforced Concrete Elements", Granada Publishing Ltd, London.
LIMITED STUDY ON A CHANGE FROM PRIVATE PUBLIC TO GOVERNMENT-OWNED TRANSPORT SYSTEMS
Bridget Ssamula, PhD student, University of Pretoria, South Africa
ABSTRACT
This paper takes us back to highlight public transport and its advantages to all the relevant stakeholders. It seems that in focusing on trying to curb the effects of congestion from rural-urban migration in our growing cities, we have forgotten what role the government can play in controlling public transport. Developing cities need to take a step back to look at the benefits of having a government-owned public transport system in enhancing private and public partnerships in running the cities. The issues with the present public transport systems in African cities are highlighted. A case study is presented of Bogota's transformation of its public transport to government control, without excluding stakeholders such as the road user, the taxpayer, taxi operators, the public transport user and policymakers. An analysis of the transformation will show the benefit of moving public transport systems into government hands and the critical lessons that can be learned.
Keywords: Public transport; Mini-bus taxi industry; Government involvement; Private public transport.
1.0 BACKGROUND
In most developing countries in Africa, there has been a marked growth in urban population due to improved economic conditions and infrastructure rehabilitation. In South Africa, the change in city structure has arisen from the fact that in the post-apartheid era, jobs have moved away from the city centre to freeway locations. Movement of people to urban areas is mainly in search of employment and improved standards of living. With that come affordability and flexibility of travel, and the number of trips per household increases for reasons like school, work, entertainment, etc. The trend is that people continue using public transport until the point when a car is a more convenient and affordable means of travel. Newman (1989) defines public transport as a modal choice of travel for which the passengers are availed a service which allows for their mobility at a specified fare. The main objective of public transport is to provide the average-income city dweller with a reliable, efficient and inexpensive form of transportation that will save the average income earner time and money.
1.1 Problem Statement
The objective or end product of any transport system is the arrival of a person, good or service at the correct destination, at the correct time, at a correct price and safely. Against that definition, there is an overwhelming number of privately owned public transport systems, commonly known as either 'taxis' or 'matatus', in most developing cities in Africa.
1.2 Aim
The aim of this paper is to highlight the problems of this privately run public transport system and how governments can intervene, in spite of the politics behind the saturated mini-bus taxi industry.
1.3 Objectives
1. To show how the mini-bus taxi system of public transport needs government intervention as a source of income.
2. To establish the need to control the number of mini-bus taxis to reduce congestion in our developing cities.
1.4 Scope
The paper will look at the mini-bus taxi industry specifically, even though there are different forms of privately owned public transport systems like private cars, motorbikes, shuttles and bicycles. A case study will be carried out for one developing city that carried out this transformation.
2.0 MINI-BUS TAXI INDUSTRY SCENARIOS The mini-bus taxi industry is mostly privately owned and run by individuals who seen it as a thriving business. In most of these countries, the mini bus taxis all run under specified taxi Associations which represent the interests of multiple minibus-taxi owners these associations are formed for specific routes. The taxis are then given ownerships into these associations through payment of daily stage fees so as to be allowed to operate on a designated route. Specific mini-bus taxi industry scenarios are given below: In Kampala, the main capital city of Uganda, the public transport system is monopolised by mini-bus taxi services. From traffic surveys made in by KUTIP (2002) taxis made up 19-38% of the traffic, while transporting 65-81% of the population of the city centre. Presently there is congestion on the road caused by too many mini bus taxis. In Kenya, minibuses also known as ‘matatus’, are the most popular mode of transport within the city, as well as to the outskirts of Nairobi and upcountry routes. A ‘matatu’ is assigned to almost every route within the city (Kenyaweb, 2006). In South Africa the mini-bus taxi industry is unregulated, receives no subsidies and is largely operated by individual owners. Many mini bus taxis are old and not roadworthy, often overloaded and driven with poor quality or worn tyres, resulting in frequent accidents from tyre blowouts. The Government’s is currently undertaking the “taxi recapitali-
118
Ssamula
sation” plan which envisages replacing all current minibuses with 18 and 35 seat midi buses.
2.1 Problems of privately run Public Transport system The problems facing governments allowing the privately owned mini bus taxi system to run with minimal control which include: 1. Monopolisation of the public transport system since they transport a large percentage of people who can’t afford a private car as a means of travel. 2. The monopoly allows the taxi industry to control the industry with decisions like fares and when to increase or decrease them, especially in peak travel season and when fuel prices fluctuate. 3. The mini-bus taxi industry system is not monitored or supervised by a unit organisation. Operations are run to the discretion of the driver and the “conductor” who decide; Where to pick and drop off passengers 0 When to start and stop working since they don’t run at specific times 0 The collection mode and revenue charged to the users, and at what times of the day. 4. The industry is difficult to monitor in terms of revenue and tax collection purposes yet its obviously a very lucrative ‘business’, which seems to be able to persevere through increasing; taxis, fuel costs, numbers of privately owned cars, and stringent government efforts to control the industry. 5 . All the other transport systems that have tried to compete with the taxi system have either been sabotaged like the ‘Posta’ bus system in Uganda, were drivers were paid to “knock-out’’ the engines of the buses. 3.0 CASE STUDY: BOGOTA’S TRANSMILENIO’S BUS SYSTEM Public Transport is something that the government can do to give back to its people for what they pay in taxes, at an affordable cost. An efficient public transport facility is the duty of the government, for those who cannot afford other means. It was on the basis of this undertaking that the city of Bogota in Colombia embarked on a project that took them 4 years to implement the first phase. The TransMilenio is a public-private partnership mass transport system project that was put together by relevant stake holders for the people at an affordable cost. The organisation of the project, with regard to the stakeholders is summarised in Table 1. 3.1 Financing The capital costs sources involved financing for this project, from the time the project was implemented in 1999 to its opening, which would be a very crucial for a developing country, were carried out as shown in Figure I .
119
International Conference on Advances in Engineering and Technology
Table 1: 0 {anisation of Bogota’s TransMilenio Transport SECTOR Public
AGENCY OR COMPANY Office of the Mayor TransMilenio S. A. Institute for Urban Development
SKILLS Leadership Public company: In charge of planning management and control Contracting infrastructure development and oversight
Secretary for Transportation and Reorganization of existing transit routes; enTraffic forcement; regulation; signalling Department of Planning
Insertion of the bus rapid transit system in the comprehensive plan; approval of road, public space and urban design
Secretary of Finance
Budgeting and allocation of resources for infrastructure capital investments
City
Council
Body) Private
(Local
Elected Approval of plans, TRANSMILENIO S.A. creation and city budget.
Trunk Operation Concessionaries: S199 S.A.; Expres del Futuro S.A.; SITM S.A.; Metrobus S.A.
Companies created out of existing transit operators for bus acquisition, drivers and maintenance personnel retention, operation and maintenance of buses.
Feeder Buses operation contractors: SIDAUTO, CODATERMIL; URIBEURIBE; ALCON; ALNORTE Fare Collection Concessionary: ANGELCOM S.A.
Existing transit operators, transformed to be able to operate feeder buses
In charge of the billeting system, fare collection and money administration, using cutting edge technology Control centre provider: Elec- Supervision of the operations of the systems, through GPS location devices, voice and data tronic Traffic ETRA information installed in the buses. Monitors passenger numbers entering and leaving the buses Design, construction and supervi- Companies providing their knowledge and capacity to design, build and supervise the system sion contractors set up in 48 months
Extract: Project 46 (2006)
120
Ssamula
.
.
.
.
.
.
.
.
.
7'
[] World Bank ~ t ~ O /
[] Mayors' office
ii
i
,
37%
[] Capital resources and Electricity Company [] Gasoline surcharge
4%
, ,,~
9National government #
f..ltJ
/U
6% 4%
[] Capital
District
Figure 1: Pie Chart Showing Percentage of Funding for TransMilenio
As can be seen the most funding came from the government, but the fuel taxes levied on the private car users. The government therefore ends up covering about 60% of the funding for this project, meaning that the amount of money that was used as a loan was only 6%. The viability of this system can be seen by the fact that the system serves on average 900,000 passengers daily and collections daily amount to US$415,000. These monies are used to pay the various operators and also finance about 4% of the working and functioning processes. Secondary sources of income come from selling advertising space at the bus stations.
3.2 System Operations The system's nominee company is TransMilenio S.A, and it doesn't operate the system it's responsible for its planning, management and control. The operational scheme comprises of: 1. Trunk-route services including express services which stop at given stations only and ordinary services serve all the stations along the whole route 2. Feeder services attend to peripheral passenger zones on an integrated basis in combination with the trunk-route services. 3. 401 network buses and 138 feeders in operation. 4. Integrated ticket price $900 (US $ 0.54) 5. 38 kilometers are run in for the routes. 6. 60 stations in operation. 7. 8 express routes, 2 normal and 19 feeder routes. 8. 11 localities and more than 35 neighborhoods under the influence of the system. 9. 560,000 passengers daily. 10% of whom used to drive
3.3 Economic analysis of the Advantages of the System 1. Profitable operation: After only 4 years of service, the system is able to not only pay it operators, it also has enough money left over to fund the day to day operations by at least 4% .This is because of the large numbers using the system and the fare charged is very affordable US$0.52 to the average traveller, which is only 6% higher
121
International Conference on Advances in Engineering and Technology
than the ordinary transport services, for an obviously better service of high quality. Further more the system is not as capital intensive since the buses are acquired by the route operators 2 . Reduction in road accidents: In Bogota, the injuries from car accidents have reduced by 54%, Minor reported car accident have reduced by 86,4% and hit and run accident have reduced by 97,6%. 3. Job creation: The system has generated 7,300 direct jobs and 10,000 indirect jobs, so the fear of people losing their jobs is non-existent. 4. Road user costs: 10% of the users of this system were using private cars for transport. With a private car, the amount of money used to run the car, includes the fuel, insurance, licenses, servicing, maintenance, parking, etc. As a public transport user, the amount of money that would be spent for the exact same trip cannot even be compared. From the Public Transport cost Model for the Cape Town public Transport restructuring, the cost for running a car for a 20-km journey by car costs US$4, while for the similar journey at present public transport will cost US$l or less as seen with the fare used in the Bogota bus system. 5. Congestion and pollution; For every standard bus, on average 54 private cars are taken off the road, or alternatively 4 mini buses are taken off the road, assuming vehicle occupancy to be 1,2 for cars, 14 for mini-bus taxi’s and 65 for a standard bus. The number of cars and minibuses that would be taken off the road will lower congestion, fuel emissions and noise pollution 6. Man-hours saved; The average travel time has been reduced by 32%, for the public transport user. Thus if the average income of a public transport user is US$ .35 an hour, he will earn that much more in terms of time at work. For workers that earn monthly salaries, the savings done in man-hours are in terms of the productivity and efficiency levels if workers are able to come to work on time. 7. Cost to GDP; This refers to the economic cost to the country’s Gross Domestic Product, GDP, in terms of modal choice of transport. The cost will include factors like fuel, pollution, congestion, etc. The private car user contributes to the GDP, since the monies spent for a trip are higher in terms on fuel, road maintenance in form of taxes, licensing private cars, contribution to congestion and pollution. The more spent on a trip, the greater the GDP, but the flaw though is that productivity depending on the trip purpose is lowered, where man-hours of productivity are wasted daily in traffic jams. 8. Fuel related benefits; The amounts of fuel that would be used annually would decrease if the use of public transport were encouraged. This would decrease third world countries dependency on highly taxed fuel to land-locked countries. Basing on the vehicle occupancies assumed earlier, for the GPMC (1999) cost model, the amounts of fuel consumed, for standard buses and taxis that will be saved are US$40/1 and US$6/1 respectively for each of the transport modes. Table 2 is a comparison of indicators for taxis and buses versus cars.
122
Ssamula
Table 2: Comyaring lndicators for Taxis and Buses versus Cars Capacity Average occupancy Fuel consumption Modal (passengers) (veh-km) Choice Car Minibus Bus
4 16 90
1.2 8 45
12.72 18 48
Fuel consumption (passenger-km) 13 0.0 13 0.008
-
The system facilitated the involvement of the traditional transport companies, because 94% of the companies became associated in order to take part as trunk operators. 2. The payment of the trunk operators is done in terms of the number of miles serviced. This would eliminate problems like overcrowding in taxis. The passengers are picked and dropped at designated stations, thus the congestion 3. and traffic jam caused by drivers stopping at their own discretion is limited. 4. To contract the operators, collection concessions, the control services and the feeder services open bidding was carried out, to allow for competition. 5 . The implementation of the system involved a massive media campaign which was carried out to enable the users of the system, to get acquainted with the daily running of the system. This included workshops, civic guides, customer service free phone lines, and when the system was introduced, the users of the system were not charged for first three weeks of service. 6. The system is run by independently run bodies like the operators, control centre, collection, etc who audit each other, because their systems are synchronised in operation. The overall management and monitoring of the transport system is carried out by one Company. 7. The implementation of improved technology was introduced gradually, to allow for mistakes to be rectified as they are introduced and the high capital intensity involved with the improved technology came as a result of the increasing number of users, who would be able to pay for them. 1.
CONCLUSION The aim of this paper is to showcase the problems of the public transport system and how governments can intervene, inspite of the politics behind the saturated mini-bus taxi industry facing our developing nations. Problems facing the mini-bus taxi industry were highlighted, with regard to the monopoly and control the industry has in transporting the average worker. The case study of Bogota, a developing city, which embarked on a project to transfer ownership on the taxi industry into the hands of government, since the city was facing similar problems with congestion, accidents and monopoly of taxi owners. The transition process was carried out without excluding the stakeholders who were involved in the transport system, and more importantly it is money generating and has put the lucrative public transport system back into the hand of the city authorities.
123
International Conference on Advances in Engineering and Technology
REFERENCES City of Cape Town Metropolitan Council (CMC), The Cape Town Public Transport Restructuring Programme. PUBLIC TRANSPORT COST MODEL. Prepared by Stewart Scott Inc, May 2002 Greater Johannesburg Metropolitan Council (GPMC) PUBLIC TRANSPORT COSTS. Prepared by Del Mistro and Associates. 1999 Kampala Urban Traffic Improvement Plan (KUTIP), (2002), by Rites Ltd I New Delhi. Kenya transportation Industry: Kenyaweb; httu:/lwww.kenyaweb.com/transuort/transporters.html, downloaded January 261h2006 at 12.20 p.m. Newman, P. et al (l989), Cities and Automobile dependence: A source book, lst ed. Gower Technical, Aldershot, Brokefield USA, Hong Kong, Singapore, Sydney. Project 46: TransMilenio: A way of Life, Colombia, Stockholm Partnerships, www.partnerships.stockholm.se1searchview.asp?Id=46 downloaded 1/30/2006 TransMilenio S.A. 0 (2004)-0rgullo capital, Bogota’s Mass Transportation System, www.transmillenio.gov.co downloaded 1/30/2006
124
Bagampadde & Kiggundu
INFLUENCE OF TRUCK LOAD CHANNELISATION ON MOISTURE DAMAGE IN BITUMINOUS MIXTURES U. Bagampadde, Department of Civil Engineering, Makerere Universiw, Uganda B. M. Kiggundu, Uganda Electoral Commission, Kampula, Uganda
ABSTRACT The influence load channelisation on moisture damage was investigated using cores and block samples from a heavily loaded highway. The original 80/100 bitumen (virgin and oven aged) and aggregates were mixed. Moisture damage in loose mixtures was measured using ASTM D1664 and that of cores using visual diametral plane rating. Pore saturation and air voids were found to be influenced by ground water level and location across lanes. Visual stripping was rated higher in the wheel paths than between wheel paths, especially in shallow water table areas where it was observed to be 79% higher, implying possible dependency of moisture damage on channelisation.
Key words: Moisture damage; Wheel paths; Air voids; Saturation; Channelisation
1.0 INTRODUCTION Moisture damage in bituminous surfaced pavements is a serious problem especially in zones of high precipitation. Many least develoed copuntries in sub-Saharan Africa and southern Asia are spending large sums of donor funds on road construction materials, yet the return on investment is overwhelmed by short service lives due to moisture damage. Stripping is one of the common forms of moisture damage in bituminous paved roads. Kandhal et al., (2001) and Stuart, (1990) reported that water or its vapour can enter the air voids by infiltration from above or seepageicapillarity from the underlying water tableilocal aquifers. However, review of previous research indicates that how pore water or its vapour enters the bitumedaggregate interfacc, assuming the aggregate is well coated, is not yet well known. One of the postulated mechanisms is a macro-level means involving high pore water pressure build-up due to external cyclic stress, acting on the bitumenimastic film, especially under undrained conditions (mainly the pessimum voids). Efforts to test pore water pressure have been ongoing for the last 50 years. For example, literature documents work by Hallberg in the 1950s (Water pressure measurement by assessing the pore size effects), Johnson in the 1960s (measurement of thermally induced pore pressure), Jimenez in the 1970s (measurement of pore pressure using the Double punch method), and Mallick et al., in the 2000s (determined cyclic pressure through suction). A comprehensive review of more basic fundamentals behind this and other related mechanisms can be found in proceedings of the National Seminal on moisture damage in San Diego (2003) and Bagampadde et al., (2004).
125
International Conference on Advances in Engineering and Technology
The investigation reported in this paper dealt with studying a traffic load-induced pressure mechanism using field cores from a heavily trafficked road north of Lake Victoria. The study approach was based on evaluating the influence of truck load channelisation on moisture damage in mixtures since it causes excessive stresses in the wheel paths. A previous survey on heavy trucks by the Ministry of Public Works and Housing of Kenya gave a mean tire pressure of 0.70 MPa and a maximum of about 1.03 MPa on the main road from Mombasa to the boarder with Uganda (Wambura et al., 1999). Another study in Uganda indicated that the 95% confidence interval of typical tire pressures on the road joining Kenya and Uganda is 0.82 to 0.96 MPa (Research Report, 2003). These roads are part of the Northern regional corridor of East Africa. For such high pressures that are most of the time applied in the wheel paths, pore water pressure may increase possibly followed by separation of bitumen from the aggregate if water is present in the pores. Consequently, if it can be shown that heavily truck loaded location(s) in the carriageway exhibit loss of bitumedmastic films from the aggregate by stripping, it is then most likely that cyclic traffic loading relates to transport of water into the bitumedaggregate interface. A 65.6 Km stretch of the Northern regional corridor was selected for this study. This stretch is situated between Malaba (a border town with Kenya) and Bugiri located north of Lake Victoria. Major rehabilitation was done on the road and completed in September 2002. Within two years of service, the road surface showed moisture damage related failures in several areas and hence it’s selection for this study. The method employed involved a technique that is inexpensive, easy to perform, safe and requires no expensive capital equipment, and would be ideal for poor countries.
2.0 MATERIALS AND METHODS 2.2 Road Section Sampling and Coring Two randomly selected sections of the road were used for investigation namely, one located in an area with a shallow water table and another in an area with a deep water table. The road studied had seven sections of the first type and 16 sections of the second type. The truck traffic moves at average speed of 8 0 k d h r and with a maximum single axle load not exceeding 10 tons in the more heavily loaded Bugiri bound lane. In trading centers, the speed of the trucks was generally low. Stratified sampling was used to select the two random sections studied. Bugiri town was arbitrarily taken as the origin with a O+OO chainage. The two sections were found out to be between chainages 49+332 to 51+160 for the one with shallow water table and 52+206 to 54+ 134 for the one with deep water table, and were arbitrarily designated as Section A and Section B, respectively. Cores of diameter 100 mm were taken from both lanes of section A and section B. Coring was done towards the end of the March-May 2004 rainy season, by randomly sampling from the outer wheel track (OWT), inner wheel track (IWT) and midway between wheel tracks (BWT) of each lane.
126
Bagampadde & Kiggundu
This was done to determine the part(s) of the road (transverse to centerline) that had possibly undergone stripping damage. In total, 23 random cores were obtained, 1 1 from section A and 12 from section B with the location details listed in Table 1. The cores were drilled using water as a coolant. Consequently, the in situ moisture content could not be determined using these cores. Furthermore, 46 block samples (two close to each coring location) were dug after milling the wearing course during spot rehabilitation. All the cores and block samples were wrapped, sealed in air tight plastic bags and taken to the laboratory. able 1 : Core loc ion details for sections < :tion A Core Lane N Lanec Trac Chainage kd 0.
-1 2 3 4 5
6 7 8 9 10 11
M
B M B
B M B M B B M
--
IWT BWT OWT OWT IWT IWT OWT BWT BWT IWT OWT
494332 497386 49+52 1 49+6 10 so-2 1 8 50+406 50+5 17 50+778 50+912 5 I +029 51+160
and R
B - xtion Lane Core N 0.
1 2 3 4 5 6 7 8 9 10
Lanec
M
11
B B M B M B M B M B
19 I L
M
Trac kd OWT BWT OWT IWT IWT BWT IWT IWT BWT OWT OWT BWT
... --
Chainage S2+206 52+248 52+664 52+712 52+935 53+020 53+543 53+710 53+740 53+873 54+030 54+134
Between Wheel Tracks. OWT = Outer Wheel Track
2.3 Laboratory Tests The cores and block samples were tested in the laboratory to determine: (a) in-situ moisture by weighing portions of block samples prior and after carefully loosening them and air drying with a fan, (b) bulk specific gravity, Gh (AASHTO T-166), (c) Rice specific gravity, G, (AASHTO T-209), (d) % air voids, V, (AASHTO T-166), and (e) % saturation. Rice specific gravity was measured on cores after stripping evaluation. The cores were heated to 50°C for 1 hour followed by careful splitting along the diametral plane using a press machine. The diametral plane of each core was visually examined, by four independent evaluators (including an experienced researcher) to estimate stripping basing on uncoated aggregate area. Rating of stripping was based on a procedure close to one recommended by (Kandhal 1994). Estimates of stripping on fine (- 4.75 mm) and coarse (+ 4.75 mm) aggregates were judged independently and the results averaged to obtain stripping at each coring location. Ratings of stripping was done on a 0 3 scale, where 0 and 3 indicatc absence and severe stripping, respectively. ~
127
International Conference on Advances in Engineering and Technology
Due to subjectivity in judgment, the experienced researcher carefully trained the other three to strengthen validity of the results. Block samples were used to determine the field gradation. Gradation was obtained after trichloroethylene extraction of bitumen from portions of loose material larger than 75 pm. After extraction of bitumen, the aggregate was dried in an oven at 60°C for 24 hrs before sieve analysis. 16 samples were used in the sieve analysis and the mean of the percent passing each sieve obtained. In addition, loose mixtures were prepared using both virgin and RTFOT aged bitumens mixed with the parent aggregates for the road project. In each case, the loose mixture was tested for moisture sensitivity in accordance with ASTM D1664 Test Method for uncompacted mixtures. 100*5 g of coarse aggregate (6.3 - 9.5 mm) was coated with 3.8% bitumen (by weight). This bitumen content was determined after several starter trial tests and was found to be adequate for complete aggregate coating. Loose mixtures were immersed in water for 24 hours at 60°C. The degree of stripping was determined by visual inspection after the conditioning time. 1.0 RESULTS AND ANALYSIS Gradation is one of the important aggregate properties is a primary consideration in mix design. The mean gradation data of the 16 aggregate portions recovered from block samples, and the original design gradation for the field mixtures at the time of rehabilitation, are summarised on a 0.45 power chart (cf. Fig. 1). The results indicate that the material gradation was still within specification limits. In no case did the values of recovered samples differ from original design values by more than 12%. Consequently, there might have been minimal aggregate fracture that would expose new aggregate surfaces. Results of bulk specific gravity, maximum specific gravity, % air voids, % in-situ moisture and % saturation are presented in Table 4 for road sections A and B, respectively. The percent saturation (PS) was obtained using the method by (Kandhal 1994).
100
80 N
35 a
t
60
35 W
'' % 40 a
s.
20
0 0
1
2
3
4
Sieve Size (0.45 Power)
aggregates recovered from block samples
128
5
6
Fig. 1 : Gradation curve of the
Bagampadde & Kiggundu
The ground water levels for sections A and B were determined three times using a water level indicator with an audio device (No. 301 16, 50 m maximum depth fitted with a 12 mm diameter probe); first during a dry season, next during a mild season and last during a rainy season. In section B, this level was referenced to an adjoining natural spring. The observed water table levels at 24 points for section A gave a 95% confidence interval of 0.6 to 1.2 m. For section B, the water level was below 3.0 m for all the three times of measurement. Consequently, chances of water getting into the surface course of section B, by seepage or capillarity from underground, were perhaps minimal. Generally, the results in Table 2 indicate a significantly higher moisture saturation in section A ( X = 38.7 % and CV = 36 %) than in section B (X = 8.4 % and CV = 50 %). The large values of CV are indicative of large variability in the data. In particular, cores from BWT locations showed higher air voids and relatively lower values of saturation compared to IWT and OWT locations in both sections. The original design value of air voids for the wearing course was 4.2 %. Thus, the BWT locations with relatively high air voids seem to be highly permeable. able 2: Data on cores from the sections i restigated in this study
-- - - - - - - -- - (?4) No. (Yo) (%I (%I w.) (%I - - --- - Secti
,ore' No.
A (Sh
Section B (Df
ow w
r table)
Air
PSd
Glnh
G mn,
Voids
In-situ doisture
Core'
Gllib
G,*b
t
water table)
Air Voids
In-situ doisture
PSd 9.1
1
2.268
2.348
3.4
0.61
40.7
1
2.304
2.370
2.8
0.11
2
2.1 15
2.255
6.2
0.99
33.8
-7
2.145
2.256
4.9
0.09
3.9
3
2.344
2.382
I .6
0.28
41.0
3
2.299
2.348
2.1
0.13
14.2
4
2.307
2.356
2. I
0.44
48.3
4
2.278
2.35 1
3.1
0.12
8.8
5
2.245
2.305
2.6
0.53
45.8
5
2.254
2.317
2.7
0.11
9.2
6
2.198
2.264
2.9
0.65
49.3
6
2.207
2.338
5.6
0.08
3.2
7
2.298
2.340
1.8
0.46
58.7
7
2.258
2.335
3.3
0.11
1.5
8
2.189
2.324
5.8
0.32
12.1
8
2.267
2.335
2.9
0.07
5.5
9
2.107
2.227
5.4
0.42
16.4
9
2.156
2.243
3.9
0.10
5.5
10
2.206
2.271
3.1
0.64
45.5
2.373
2.5
0.14
13.0
2.31 I
2.370
2.5
0.37
34.2
10 II
2.3 I4
I1
2.859
2.926
2.3
0.13
16.2
2.199
2.320
5.2
0.11
4.1
- - - - - - -- Core ocation deta arc as indicated in ble ! 'ps=(% Moi ure ;b)/(Air Voids) 12
j
~
>
Average results of visual stripping rating on the cores are plotted in Fig. 2. Gradation results indicated fracture of aggregates during service. Thus, any observed uncoated aggregate surfaces were attributable to stripping. The data show a higher nominal visual stripping rating in section A (= 2.10 and CV = 28 %) than in section B (z 1.17 and CV = 13 %), respectively. In other words, section A exhibited about 79% higher visual stripping than section B.
129
International Conference o n Advances in Engineering and Technology
In addition, the data indicate higher stripping in either OWT or IWT than BWT, especially in section A. Owing to observational differences by the evaluators of the core split surfaces, there could be possibility of bias in estimating stripping. Bias was therefore examined by comparing the mean estimates of rating values obtained with those of the experienced researcher that were presumed to be nearly valid. Table 3 presents the comparisons for all the cores investigated. The errors were less than 13% of the ratings by the experienced researcher and were observed to be randomly distributed with respect to section and transverse location. Cases with relatively high errors like cores 2 and 6 from sections A and B, respectively, could possibly have resulted from the difficulty in estimating stripping in fine aggregates. In addition, for cores with stripping the parts closer to the surface exhibited more stripping than those closer to the bottom. On the whole, the bias was not large enough to impact validity of the results. Statistical analysis of these data was performed to test whether stripping rating obtained is ascribed to three main factors namely, SECTION [shallow water table (section A) versus deep water table (section B)], LANE [more heavily loaded versus less heavily loaded 1, and TRACK [Inner Wheel Track, Outer Wheel Track and Between Wheel Tracks]. Analysis of variance (ANOVA) was performed at a 0.05 level of confidence to determine the effect of each of the above three factors and their interactions on air voids, in-situ moisture, saturation and visual stripping rated. Table 4 presents the p-values obtained from ANOVA. The analysis shows high R2 values indicating high contribution to variability of the data by the significant factors with p < 0.05. ~
~~~~
~
1
~~~~~
2
3
~
4
5
6
~~~~
7
~
8
Core Number
130
~~
ISection B
0 Section A
9 1 0 1 1 1 2
Bagampadde & Kiggundu
Fig. 2: Stripping rating in the two sections investigated 'able 3: Bias in estimatio Sectic A (Shah __
water tablc
Core Lane Track No. 1 IW T 2 BWT 3 OWT 4 OWT I WT 5 6 IWT 7 OWT 8 BWI BWT 9 10 IWT II OWT
Av. Visual
Core Average
,f stripping in tt Sectic __ Core Errorg
Stripping
No. 1
2.1 I 1.01 2.61 2.42 2.41 2.37 2 33 I .6l 1.1 1 2.44 2.73
-9
+0.16 +0.23 -0.09 +0.12 -0.21 +0.16
3 4 5 6 7 8 9 I0 II owr 12 BWT __ Core Avemge =
10 9 6
1 8
=
Lane Track OWT BWT OWT IWT IWT BWT IWT IWT B WT OWT
4v. Visual Stripping 1.31 3.87 1.41
1.01 1.21 1.1 I 1.14 1 .ox 1.21 1.1 I 1.22 I .30
Bias'
%
Errorg 7 10 4 8 8
0.09 +0.10
-0.06 +0.09 +o. I I -0. I 3 +0.07 10.02 +O. 1 I 10.12 t0.09 +0.03
13 6 2 8 10
7 2 7
experienced researcher Bias = rating percentage of rating by experienced rcsearcher TRACK and its interaction with SECTION significantly affected the air voids. Consequently, air voids content in the cores can possibly depend on both the transverse location in the lane (either within or between wheel tracks) and location of the road section with respect to the water table level, simultaneously. Table 4: Summary of the results of ANOVA showing the significance (p-values) Source
% Air Voids
% In situ Moisture
"/o Saturation
Stripping Rating
Sinele Main Factor
.I
X
4
0.080 0.0 I3 0.165
X
0.766
11
x
Y
0. I95 0.192 0.337
Y
0.292
x x
0319 0033 0729
x
0363 0 956
0.000 0.062 0.000
0.000 0.160 0.340
0385 0 1U6 0000
II
0.872
X
X
x
0.949
X
.I
0.000 0.089 0.000
\I
X
.I \r
X
0.041 0.000 0 008
X
0.169
X
v
V
v
0.981
4 factor or interaction is significant with rcspect to major dependent variable x factor or interaction is not significant with rcspcct to major dependent variable
131
International Conference on Advances in Engineering and Technology
Multiple comparisons of air voids values of cores from the three locations (OWT, BWT and IWT) were done using the Bonferroni method. The results indicated significant differences in the mean air voids content between all possible pairs of OWT, IWT and BWT. The data in Table 2 show that air voids for cores located between wheel tracks were higher than air voids for cores located in the wheel tracks. This could perhaps be attributed to either: (a) absence of secondary compaction by traffic since truck tires are most of the time in the wheel paths, or (b) some ‘relaxation’ mechanism (with volume increase) during plastic flow of material from the heavily loaded wheel paths to the less loaded adjacent areas causing opening up of the interstitial spaces within the wearing course. In addition, multiple comparisons show that air voids for cores in the IWT were significantly higher than air voids in the OWT. This could probably be explained by differences in profile roughness in the two wheel paths which generate differing dynamic forces. The dynamic tire forces that heavy trucks apply on the two wheel paths would possibly fluctuate as analytically proved by Cebon (2000), causing disparity in traffic dynamic compaction and therefore varying air voids in the wheel paths. ANOVA to determine whether in-situ moisture varied with levels of the three main factors indicated that SECTION was the only significant factor (p < 0.05). All cores in particular longitudinal track locations of each lane exhibited in-situ moisture that was, on the average three times or higher in section A than section B (cf. Table2). For example, cores from BWT locations exhibited average insitu moisture contents of 0.58% and 0.10% for section A and section B, respectively. A similar analysis showed that SECTION, TRACK and their interaction significantly affect saturation of the voids in the wearing course (p < 0.05). The significance of TRACK is perhaps attributable to densification by truck tires in the wheel paths which increases the void water content. Table 2 indicates that average saturation in the IWT and OWT (about 45.3 and 45.6%, respectively) was almost twice as much as that in BWT (about 20.8%). ANOVA on visual stripping data from the cores showed that SECTION and TRACK, and all the 2-way interactions seem to be significant (p < 0.05). Multiple comparisons indicated that average visual stripping ratings of the cores in the OWT and IWT were not significantly different. However, visual stripping of cores from BWT was significantly lower than stripping in the cores from the wheel paths. The results of immersion tests on the loose mixtures prepared from original and RTFOT aged bitumen mixed with the aggregate are listed in Table 5. Four raters were employed in this evaluation. The results indicate a higher coated area for the oven aged bitumen than the unaged one. RTFOT aged bitumen is expected to nearly simulate the behavior of bitumen in the cores since the wearing course of the road studied had been in place for about two years. Thus, because of the relatively high uncoated area observed in the loose mixes from aged bitumen, any stripping observed in the field cores was not ascribed to the bitumen and aggregate used during rehabilitation of the road studied.
132
Bagampadde & Kiggundu
Raters Rater # 1 Rater #2 Rater # 3 Rater #4
Visual stripping resistance (loose mixtures), % retained Unaged bitumen 80 85 85 - 90 90 95 90 - 95 ~
~
RTFOT aged bitumen 90 - 95 90 - 95 > 95 > 95
Figure 3 shows how saturation and visual stripping, respectively, relate to depth of the water table and location of the longitudinal track in the lane. For both cases, there appears to be a similar pattern of dependence on the two factors. In other words, higher values of both saturation and visual stripping point to locations in the wheel paths and stretches with shallow water table. Basing on this finding, it can be asserted that presence of high pore water pressure (due to high traffic load stresses in the wheel paths and high saturation levels) is a possible cause of the high visual stripping observed.
2.0 CONCLUSIONS On the basis of results from this work, the following conclusions may be drawn: Water saturation of the voids in the cores from the road investigated was sensitive to both level of the water table and location across the carriageway lane. The magnitude of air voids of the cores from the road studied was influenced by location across the lane (either within or between wheel tracks) and section with respect to the water tablc level, simultaneously. Cores from between wheel tracks contained significantly higher air voids content than those from the wheel tracks. Average visual stripping was rated higher for cores located in the wheel paths than those located between wheel paths. This observation was more apparent in the section with shallow water table where visual stripping was on the average observed to be 79% higher than that in the random section with a deep water table. Owing to incompressibility of water, the dependency of observed visual stripping on traffic induced pore water pressure probably supports the assertion that transport of moisture to the interface across the bitumenhnastic film relates to traffic load stresses from channelisation.
133
International Conference on Advances in Engineering and Technology
3~t 2.5
4O
i
"~, 2.0
'~ 30 2(] IWT
1(
/T
( / .
.
.
.
.
.
7 0.5
IWT VT
0.0
Fig. 3" Dependence of saturation and visual stripping on significant factors (WT* = water table) ACKNOWLEDGEMENTS
This work was financed by Sida/SAREC of Sweden. Cooperation by the Materials Engineer and the Engineer in Chief, both of Ministry of Works, Housing and Communications, Uganda is acknowledged. REFERENCES
Bagampadde U, Isacsson U, Kiggundu B. M. (2004), Classical and contemporary aspects of stripping in bituminous mixtures. Road Materials & Pavmt Des., 5(1): 7 - 43. Cebon D. (2000), Handbook of vehicle-road interaction. Swets & Zeitlinger Publishers BV, Lisse, Netherlands. Hicks R. G. (1991), Moisture damage in bitumen concrete. NCHRP, Synthesis of Highway Practice 175, TRB. Kandhal P. S, Lubold C. W, Roberts F. L. (1989), Water damage to bitumen overlays: case histories. AAPT, vol. 58: pp. 40-76. Kandhal P. S. (1992), Moisture susceptibility of liMA mixes: Identification of problem and recommended solutions. National Center for Bitumen Technology;, Report 92-1. Kandhal P. S. (1994), Field and laboratory investigation of stripping in bitumen pavements: state of the art report. TRR, No. 1454: pp. 36-47. Kandhal P. S, Rickards I. J. (2001), Premature failure of bitumen overlays from stripping." case histories, AAPT., Vol. 70. Stuart K. D. (1990), Moisture damage in bitumen mixtures - state-of-the-art. Report No. FHWA-RD-90-019, FHWA, 6300, VA 22101-2296. National Seminar on Moisture Sensitivity of Bitumen Pavements, San Diego, California, Feb. 4 - 6 , (2003). Wambura J. H, Maina J, Smith H. R. (1999), Kenya bituminous materials study. Transportation Research Record, No. 1681: pp. 129-137.
134
Zaghloul & El-Moattassem
THE EFFECT OF MEROWE DAM ON THE TRAVEL TIME OF FLOOD WAVE FROM ATBARA TO DONGOLA Sohair Saad Zaghloul, Researcher, National Water Research Centre, Cairo, Egypt Mohamed El-Moattassem, Professor, National Water research Centre, Cairo, Egyvt
ABSTRACT Water is the vital and most important element for development in Egypt. Nile River with an estimated length of over 6800 km is considered the main source of water in Egypt. To meet the demands of expanding population and economy, and to promote the level of national prosperity, it is essential that water resources be developed and utilized. The Atbara is a river in northeast Africa , which rises in northwest Ethiopia and flows in the east of Sudan. Atbara is also a town in northabout 805 km (500 miles) to the eastern Sudan, at the point where the Atbara River joins the Nile. The Merowe Multi-Purpose Hydro Dam, or 'Hamdab Dam', is a large construction project in northern Sudan, about 570km north of the Atbara. It is situated on the river Nile, close to the 4th Cataract where the river divides into multiple smaller branches with large islands in between. Due to the great variation of the flow of the Nile River between high flood period and the low flow period, analysis and studies for the River flow upstream the HAD should be developed. This paper is conducted to analyze the natural flood wave from Atbara to Dongola during the last five years on a daily basis. According to this analysis, a short term forecasting for the water flow entering the High Aswan Dam could be available. Also another analysis has been done to the same flood wave, taking into consideration the construction of Merowe Dam and the predicted water flow to AHD.
Keywords: Nile River, Aswan High Dam, Merowe Dam, Time of travel of fluctuations, Forecasting.
1.0 INTRODUCTION According to the increasing in the population of Egypt (The population growth has tripled during the last 50 years), the needs for water has been increased. Starting from this point of view the analysis and study of each drop of water entering Egypt is very important. The focus area in this study is the reach between Atbara and Dongola. The distant of this reach is about 740 Km. In between the two stations and very close to the Fourth Cataract is the planned position of Merowe Dam. It is about 570 Km from Atbara.
135
International Conference on Advances in Engineering and Technology
The study is analyzing the natural water coming from Atbara to forecast the amount of water at Dongola using the time travel of the fluctuations. But in case of the construction of Merowe Dam, the rating curves and hydrographs of both stations should be studied again. This is because the water coming from Atbara will be controlled through Merowe Dam according to its operating rule. In this case, the operating rule of AHD should be modified according to the new pattern.
2.0 HYDROLOGY OF NILE RIVER The Nile is 6,800 km long, extending through 35 degrees of latitude as it flows from south to north. Its basin covers approximately one-tenth of the African continent, with a catchment's area of 3,007,000 kmz, which is shared by ten countries: Burundi, Democratic Republic of Congo, Egypt, Eritrea, Ethiopia, Kenya, Rwanda, Sudan, Tanzania, and Uganda. Its main sources are found in Ethiopia and the countries around Lake Victoria. The river system has two main sources of water: the Ethiopian highlands and the equatorial region around Lake Victoria as shown in figure 1. More than 80% of the river flow arriving in Egypt originates in the Ethiopian highlands by way of the Sobat, Blue Nile, and Atbara Rivers, with the bulk of this water coming down during the summer. The dependence of Egypt on the Nile has led to intensive studies of quantities of water carried by the main stream and its tributaries throughout the year. The annual average for long time period has been estimated, which was the basis of design for the storage at the Aswan high dam.
3.0 ASWAN HIGH DAM The Aswan High Dam (AHD) was built between 1960 and 1970. It is located few miles south of the Old Aswan Dam. The storage capacity of 160 billion cubic meters and has a power capacity of 2000 MW. The construction of the AHD has affected the entire economy of Egypt, allowing reliable irrigation throughout the year and satisfying now about 20% of the country's energy demands. Egypt's irrigation practices require nearly 55.5 Billion m3 of water from the Nile every year, which is the amount allocated to Egypt by the 1959 Nile Waters Agreement with Sudan. 4.0 MEROWE DAM PROJECT The Merowe Dam, also known as 'Merowe Multi-Purpose Hydro Project', is a large construction projects in northern Sudan, about 950km upstream the AHD. It is situated on the river Nile, close to the 4th Cataract where the river divides into multiple smaller branches with large islands in between. The main purpose of the dam will be the generation of electricity and irrigation. The name Mevowe Dam refers to a small island blocking the Nile course, on which the dam is built. The dam is designed to have a length of about 9km and a crest height of up to 67m. It will consist of concrete-faced rock-fill dams on each river bank, an earth-rock dam with clay. Once finished, it will contain a reservoir of 12.5 billion m3. The reservoir lake is planned to extend 174km upstream. The Merowe Dam is likely to have serious
136
Zaghloul & E1-Moattassem
environmental problems such as evaporation (evaporation losses of up to 1.7 billion m3/year can be expected) and sedimentation. Fig 2 shows the site of Merowe Dam NILE RIVER BASIN -'~ N A T I O N A L
CAPITALS
,,V,A J O R R O A D S
L tBYA
r
REP,
OF EGYPT
5/:,UDI
~o~
LTqAD
ETH1C~'IA R E P U A IC -
...~.........................-
20r
Fig. 1" Nile River
~EROSVE
DAIVI
~SITE
Fig. 2: Merowe Dam Site
137
International Conference on Advances in Engineering and Technology
Fig. 3: Longitudinal section of the study area
5.0 H Y D R O L O G Y OF THE STUDY AREA The normal rains in this reach are very rare. It was noticed some rains in the middle of the reach north Abu Hamad due to the tropical monsoon, which its maximum peak in August. The highest evaporation is recorded at Atbara and Merowe. The distant from Atbara to Dongola is abouit 740 Km. Fig. 3 shows a longitudinal section of the study area from Atbara to Dongola. There are two main cataracts between Atbara and Dongola, the 4 th and 5th Cataracts as shown in this figure. The Merowe site is about 580 Km from Atbara. 6.0 DONGOLA This is a major town of significant historic and commercial importance located half way between Khartoum and the northern borderline with Egypt. Dongola is the capital of the Northern State and the main producer of palm-dates, wheat, cereals and fruits.
7.0 ATBARA It is known as the Town of Fire and Iron. Atbara is located 350 Km north of Khatoum. It lies in the eastern bank of the Nile north of Ed Darner, to which it is connected by a narrow old bridge across Atbara River. It originates in Ethiopia joins the main Nile River north of Khartoum between the fifth and sixth cataracts (areas of steep rapids) and provides about 14 percent of the Nile's waters in Egypt. During the low-water season, which runs from January to June, the Atbarah shrinks to a number of pools. But in late summer, when torrential rains fall on the Ethiopian plateau, the Atbarah provides 22 percent of the Nile's flow.
8.0 THE GAUGE-DISCHARGE CURVE (RATING CURVE) The gauge-discharge curves at Atbara and Dongola in the natural conditions are seemed to be fairly regular ones. Figs 4, 5 show the rating curves at Dongola and Atbara (ten days means 1995-2005) in the natural conditions. The Dongola curve is expected to be changed according to effect of the construction of Merowe Dam.
138
Zaghloul & E1-Moattassem
Rating Curve at Atbara (Ten Days Means (1995.2005))
Rating Curve of Dongola (Ten Days Means (1995-2005)) 15.00 14.00
f
,n 13.00
-~ 12.00 11.00 10,00
S 0
16.00 15.00 =~ 14.00 ~ 13.00 12.00 -- 11,00 10.00
200
400
600
800
'/ 0
200
DischargesM3/Day
400
600
800
DischargesM3/Day
Fig 4: The rating curve at Dongola
Fig 5 The rating curve at Atbara
9.0 TIME OF TRAVEL OF FLUCTUATION FROM ATBARA TO DONGOLA The prominent fluctuations of daily gauge readings at Atbara and dongola for the period 1995 to 2005 have been identified. The times of travel of fluctuation between Atbara and Dongola and Atbara water levels have been noted and plotted on a graph, see Fig. 6. T h e Lag T i m e B e t w e e n
Atbara and Dongola
17
$
16 15
$
.
13
t
11
:
10 0
2
4
6
8
10
Lag Time ( d a y s )
Fig. 6" The lag time between Atbara and Dongola Consequently, the coming water at Dongola can be expected by using the lag time curve between Atbara and Dongola. The lag time between Atbara and Dongola is about 2.5 days for the maximum level at Atbara which is 16.22 m. The velocity-gauge curve level for Atbara to Dongola (Km/day) is slightly curved as shown in Fig. 7. The maximum velocity from Atbara to Dongola can be determined as;
139
International Conference on Advances in Engineering and Technology
V = the distant from Atbara to Dongola/lag time at maximum level V = 740/2.5 = 269Km/day Therefore:
V : 311!~i~
Velocity-level curve from Atbara to Dongola 19 17~
16t 15
I
9$ .a 1 3 J 12~
11! ~o4-0
T--
50
[
1O0
150
~
200
250
T
1
]
300
350
400
Velocity Km/day Fig. 7 The velocity-level curve from Atbara to Dongola 10.0 D E T E R M E N A T I O N OF T H E L A G T I M E AND V E L C I T Y OF W A T E R INSIDE M E R O W E D A M R E S E V O I R As a matter pf fact that the back water curve inside the reservoir affects the lag time and the velocity upstream the reservoir. The maximum back water length or Merowe reservoir length is 180 Kin. The estimated maximum evaporation volume of Merowe reservoir is around 1.7 billion m3/year, and the considered rate of evaporation rate from Merowe Reservoir is 6mm/day. So; Evaporation volume = reservoir surface area (A) x evaporation rate/day A = 1700 x 106 x 1000/6 x 365 x 106 A = 776 K m z A is considered as the maximum reservoir surface area The basic fundamental equation of the lag time in a reservoir is T=A*dh/dq Maximum Lag time in Merowe reservoir (T) = maximum reservoir surface area (A) x dh/dq at the maximum level In this case dh/dq is calculated from the rating curve at Merowe station which is;
140
Zaghloul & E1-Moattassem
Dh/dq = (lm) / (550 m3/day) Therefore, T=776"1/550
=1.4day
The maximum velocity in Merowe reservoir (Vm) = the back water distance or reservoir length (Lm)/lag time (T) is given as Vm = Lm/T Therefore, V m = 1 8 0 K m / 1 . 4 d a y = 128 K m / d a y =
It is clear from the previous calculations that the maximum velocity between Atbara and Dongola is 269 Kin/day or 3.4 m/sec in the natural conditions. In the other side, the maximum velocity in the reach f Merowe Dam Reservoir is 128 Kms/day or 1.5 m/sec. 11.0
THE LAG TIMES FROM ATBARA TO DONGOLA BEFORE AND AFTER MEROWE DAM
From the previous calculations the lag time from Atbara to Dongola in the natural condition before the Merowe Dam is in case of the maximum level at Atbara. After Merowe Dam, there are three reachs as follow: From Atbara to Dongola: the distant is 400 Kms, and the velocity is 269 Km/day T1 = 269/400 = 0.7 day Inside Merowe Reservoir: from the above calculations, we have T2 = 1.4 day From Merowe to Dongola: the distant is 170, and the velocity is 269 Kin/day as the same in the 1st reach. T3 = 269/170 = 1.6 day In this case the new lag time becomes TI+T2+T3 = 7+1.4+1.6 = ~ This differs from the result of 2.5 days got using the normal conditions. There is no doubt that after the construction of Merowe Dam, the travel time of fluctuations from Atbara to Dongola will be longer and, accordingly, the velocity will be slower. Consequently, the level and discharge hydrographs at Dongola are going to have a new pattern, and this will change the Peak of water coming at Dongola. In this case, the forecasting of water coming to Dongola in normal conditions is also going to be affected, as well as the operating rule of AHD. A new forecasting system of water should be conducted, taking into consideration the expected new lag time and the operation rule of Merowe Dam. CONCLUSIONS 9 The natural coming water at Dongola could be forecasted by using the travel time of the fluctuations method from Atbara to Dongola. 9 It is designed to have Merowe Dam Project in northern Sudan, and about 950kin upstream AHD. It is situated on the Nile River, close to the 4th Cataract. 9 In the reach from Abu Hamad to Merowe (the Merowe Dam Reservoir), the Lag time of fluctuations will increase, and consequently, the water velocity will decrease.
141
International Conference on Advances in Engineering and Technology
0
0
Accordingly, this will affect the natural hydrological pattern of the whole reach from Atbara to Dongola. As a result of this change, the forecasting of the coming water at Dongola is going to be affected, and this will give new vision for the operating rule of AHD, taking into consideration the operation rule of Merowe Dam. An analysis of the expected new hydrograph at Dongola after the construction of Merowe Dam should be performed to study its effect on the operating rule of AHD.
REFERENCES Hurst H. E. Black R. P., Simaika Y. M. (1959), The Hydrology of the Blue Nile and Atbara and of the Main Nile to Aswan, with some reference to Projects, The Nile Basin, Volume IX, Nile Control Department. Mohamed El Moattassem (1992), Impact of Water Resources Projects on the Nile in Egypt, River Nile Protection and Development ProjecfNile research Institute, Water research Center. Linsley Ray K. Franzini Joseph B. (1964), Water Resources Engineering, McGRAWHILL BOOK COMPANY. Askouri Ali (2004), Sudan Dam Will Drown Cultural Treatures, Destroy Nile Communities, World River Review. Merowe Hydropower Project, Merowe Dam Project Implementation Unit, Government of the Republic of Sudan. The Nile Basin (1999), ten-days Means and Monthly Mean Discharges of the Nile and its Tributaries, Nile Control Staff, Nile Water Sector, Ministry of Water Resources and Irrigation. Bosshard Peter, Hildyard Nicholas (2005), A Critical Juncture for Peace, Democracy, and the Environment: Sudan and Merowe/Hamadab Dam Project, Report from a visit to Sudan and a Fact-OFinding Mission to the Merowe Dam Project.
142
Kahuma, Kiggundu, Mwakali & Taban-Wani
BUILDING MATERIAL ASPECTS IN EARTHQUAKE RESISTANT CONSTRUCFTION IN WESTERN UGANDA A. K. Kahuma, Department qf Civil Engineering, Makerere University, Uganda
B. M. Kiggundu, Uganda Electoral Commission, Kampala, Uganda
J. A. Mwakali, Department qf Civil Engineering, Makerere University, Uganda
G. Taban-Wani, Department of Engineering Mathematics, Makerere University, Uganda
ABSTRACT This paper is based on the construction materials aspects in the development of low cost design for earthquake resistant shelter development in Western Uganda under the Support to Earthquake Disaster Project in the Ministry of Works, Housing and Communications. This was a partial fulfillment of the recommendations by the Government in response to the 1994 Kisomoro earthquake that left nine people dead, several injured and property loss worth US$ 60,000,000 in Kabarole District. The National Earthquake Disaster Task Force was required to come up with a low cost earthquake resistant design and develop corresponding builders’ manuals. The cost reduction of any design could only be achieved through use of low cost materials and these are none other than vernacular (locally available materials), only if their structural characteristics were known. The study area comprised the districts of Bundibugyo, Kabarole, Kasese, Kamwenge and Kyenjojo. These districts are located in the northern part of the Western branch of the East African Rift System (EARS). Although, the region has been experiencing moderate (5.01M56.5) earthquakes, some of them have caused damages in the region. These include the Tooro and Kisomoro earthquakes of 1966 and 1994, respectively. This report was based on an investigation of structural characteristics of locally available (both conventional and traditional) building materials and evaluation of their structural performance under static and dynamic loads (a simulation of an earthquake environment). The results obtained enabled the use of low cost materials quality characteristics and rules of thumb techniques in the development of earthquake resistant structural designs, development of retrofitting strategies for damaged structures that need repairirenovation and development of a builders’ manual that would benefit all in an earthquake prone Uganda for construction of: 9 Temporary structures traditional houses, mainly built out of locally available and unprocessed materials Semi-permanent structures shelters usually roofed with CGI sheets, walls built out of mud and wattle (sometimes plastered) and floors made out of normal concrete or compacted ground either finished with vernacular binders-sand screed or covered with woven materials. etc. ~
~
143
International Conference on Advances in Engineering and Technology
.
Permanent structures - these are houses built out of processed materials using skilled labour.
Keywords: Earthquake resistant construction, conventional and vernacular materials, grass thatch and CGI roofs, burnt and adobe bricks, volcanic ash briquettes, sand blending formula
1.0 NTRODUCTION Damage and destruction (collapse) of buildings caused by an earthquake depends among other factors on: (i) Properties of construction materials used, (ii) Design and construction methods (workmanship quality control), and (iii) Site conditions. Site conditions include type of rock (hard or soft) on which the building is sited and topography of the site (hilltop, hillside, valleys, flat plain and wetland). This study deals with the performance of buildings in the previous earthquakes in relation to site conditions and types of materials used. In addition, locally available building materials were assessed and results from laboratory tests on samples to determine their critical properties are presented, which formed the basis for the generation of builders manual for construction of earthquake resistant buildings and development of low cost earthquake resistant prototype building designs. 2.0 METHODOLOGY
2.1 Field Work
Data was collected using the following methods:
(i) results from a questionnaire survey (Building Materials and Construction Survey Checklist) that had been carried out by the socio-economic survey team;
(ii) direct observation of the situation on the ground;
(iii) sampling of materials for testing in the laboratory.
Samples of local building materials were collected for laboratory testing to determine some of their chemical and physical properties. The materials were divided into two groups, namely conventional and traditional materials.
Conventional (formal) materials are those with well-documented structural characteristics that have been in use for the construction of permanent structures. They include aggregates, cement, lime, metal, sawn timber, and bricks/blocks. Traditional (vernacular) materials are those that have been in use for the construction of traditional structures (semi-permanent and temporary) but whose structural characteristics are little known. Examples are wild trees, bamboos, reeds, organic fibres, volcanic ashes and stones. Identification of formal sample materials was based on the lists of materials
available with the district engineers of the respective districts and on the questionnaire checklist of materials commonly used in the study area. Identification of informal (traditional) materials was through personal interviews of people with experience in using such materials for construction works. This generated a list of all common traditional construction materials with both vernacular and scientific names. Sample materials were systematically coded; the codes were linked to the sample source and to the experiment for which the sample was intended. The following samples were collected:
(i) Sand: the samples from each source were obtained by the quartering procedure applied to equivalent quantities of material taken from four equidistantly distributed points covering the whole sample source location.
(ii) Soil: the samples of soil were collected from the top, middle and bottom parts of the nearest hill or raised ground near a selected source of common clay bricks. Representative samples were obtained by homogenisation and quartering of the soil excavated at three points not less than 20 m apart across the selected slope.
(iii) Bricks and blocks: these were obtained from the major sources identified in the area. Both burnt and un-burnt (adobe) bricks were collected. In places where alternative materials such as volcanic ash and stone slates had found their way into the construction industry, these were also collected.
(iv) Coarse aggregates: the samples were obtained from three points for each source so as to have a representative sample.
(v) Samples for mix design: these included coarse aggregates sieved to maximum sizes of 20 mm and 38 mm, soils and clay used in brick making, and sand used in conjunction with coarse aggregates.
3.0 LABORATORY ANALYSIS OF SAMPLES
Sample preparation involved air-drying of the samples, for one week for the formal materials and one month for the traditional materials. The experimental work carried out is summarised in Table 10.
4.0 RESULTS
4.1 Experimental Results, Conventional Materials
Weight, Consistency and Chemical Properties of Soils
The results of this experiment were as follows. The maximum bulk density, minimum bulk density, mean and standard deviation were 2.33 × 10³, 1.57 × 10³, 1.97 × 10³ and 177 kg m⁻³, respectively. The maximum dry density, minimum dry density, mean dry density and standard deviation were 2.10 × 10³, 1.43 × 10³, 1.81 × 10³ and 196 kg m⁻³, respectively. Moisture content values were: maximum 26.20%, minimum 6.00%, mean 13.27% and standard deviation 4.85%. Chemical tests revealed the presence of organic, chloride and sulphate impurities ranging from 0.02%, 0.001% and 0.001% to 0.06%, 0.007% and 0.005%, respectively. The severity limit of sulphate is described as mild, moderate, severe and very severe if the percentage of sulphate
concentration in the soil is less than 0.1, 0.1 to 0.2, 0.2 to 2 and more than 2, respectively. In addition, for reinforced concrete the chloride content of the aggregate should not exceed 0.05% by mass of the total aggregate; this is reduced to 0.03% when sulphate-resisting cement is used and further to 0.01% for pre-stressed concrete. The organic content is limited to a level at which its presence does not affect the matrix strength characteristics by more than 10%. The chemical tests show that the tested soils are of good quality for construction.
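The screening of measured impurity levels against these limits is illustrated by the short sketch below (Python); the threshold values are the ones quoted above, while the function names and example figures are purely illustrative and not part of the study.

```python
def sulphate_severity(sulphate_pct):
    """Classify sulphate severity in soil using the limits quoted above (per cent)."""
    if sulphate_pct < 0.1:
        return "mild"
    elif sulphate_pct <= 0.2:
        return "moderate"
    elif sulphate_pct <= 2.0:
        return "severe"
    return "very severe"

def chloride_within_limit(chloride_pct, concrete="reinforced"):
    """Check aggregate chloride content (% by mass of total aggregate) against the quoted limits."""
    limits = {"reinforced": 0.05, "sulphate_resisting": 0.03, "prestressed": 0.01}
    return chloride_pct <= limits[concrete]

# The extreme sulphate values reported for the tested soils both fall in the 'mild' band
print(sulphate_severity(0.001), sulphate_severity(0.005))
# The highest reported chloride value (0.007%) is well inside the reinforced-concrete limit
print(chloride_within_limit(0.007))
```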
4.2 Sieve Analysis
Soil: The cumulative percentage passing curves shown in Figure 1 indicate that, out of 18 samples, only 3 samples (211F, 112D and 222D), that is 36%, lacked D10 particle sizes. The sand from the Kanyangeya area (sample 211F) was used as the basis of comparison, and it is observed that sample 112D (volcanic ash from Kichuna) and sample 222D have particle size gradation characteristics close to it. The fineness modulus of the various soil samples in the study area varied from 0.67 to 7.01, with a mean of 3.35 and a standard deviation of 2.38. This implies that the weighted average size of the sieve on which the material is retained lies between the third sieve, of size 0.212 mm (sieve No. 70 ASTM), and the fourth sieve, of size 0.300 mm (sieve No. 50 ASTM), when counted from the finest sieve.
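As an illustration of how a fineness modulus such as those quoted above is computed, the sketch below sums the cumulative percentages retained over a sieve set and divides by 100. The sieve set and the gradation values are assumptions made for the example, not data from the study.

```python
def fineness_modulus(cum_passing):
    """Fineness modulus: sum of cumulative % retained on the sieve set, divided by 100.

    `cum_passing` maps sieve size (mm) to cumulative % passing.
    """
    return sum(100.0 - passing for passing in cum_passing.values()) / 100.0

# Hypothetical sand gradation (cumulative % passing, by sieve size in mm)
sample = {4.75: 100, 2.36: 95, 1.18: 80, 0.600: 55, 0.300: 25, 0.150: 5}
print(round(fineness_modulus(sample), 2))   # 2.40 for this made-up gradation
```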
Using AASHTO (American Association of State Highway and Transportation Officials) classification tests, the soils varied from A-1 to A-7, with the majority of the samples in grades A-1, A-4, A-6 and A-7, where A-1 indicates the best soil and A-7 the worst soil with regard to foundation construction.
Fig. 1: Particle Size Gradation Curves for Soils Found at Different Altitudes.
Fig. 2: Particle Size Gradation Curves for Sands from Bundibugyo and Kasese Districts (cumulative percentage passing against particle size in mm, compared with the standard sand).
Sand: The results of the sieve tests for the sand samples from the best sand sources in Bundibugyo and Kasese districts are plotted in Figure 2. It is observed that sand sample 211C lacks particle size D10, and only one sample out of six (17%) satisfied the ASTM limit that fines in soils used as construction material should never exceed 3%. From Figure 2, the sands from the two districts are observed to occupy opposite sides of the standard sand. This standard sand had been obtained by blending different types of sand from all the best sand sources in Kabarole, Kyenjojo and Kamwenge districts (Kahuma, 2002). The gradation curves of the sands from Kasese and Bundibugyo are approximately parallel and opposite to each other. The computed correlation coefficients of 0.93 (for samples 211C and 411C) and 0.99 (for samples 421 and 211R) indicate that their gradation characteristics are close to one another.
Aggregates: From the coarse aggregate particle size gradation curves shown in Figure 3 and the coefficients of uniformity and curvature shown in Table 1, it can be observed that, apart from sample 214C, the rest of the samples (86%) were poorly graded. The aggregate crushing values, that is, the toughness measure of a rock sample against failure by impact, for the coarse aggregate samples are shown in Table 2; they varied with an average of 26.14% and a standard deviation of 4.74%. BS 812: Part 112 and BS 882: 1992 prescribe maximum crushing values of 25% for aggregates to be used in heavy duty floors, 30% for aggregates to be used in concrete for wearing surfaces, and 45% for aggregates to be used in other concretes.
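The gradation coefficients in Table 1 follow directly from the characteristic particle sizes through the standard definitions Cu = D60/D10 and Cc = D30²/(D10·D60); the sketch below recomputes them for sample 214R using the values listed in Table 1.

```python
def gradation_coefficients(d10, d30, d60):
    """Return the uniformity coefficient Cu and the coefficient of curvature Cc."""
    cu = d60 / d10
    cc = d30 ** 2 / (d10 * d60)
    return cu, cc

# Sample 214R from Table 1: D10 = 13, D30 = 26, D60 = 37
cu, cc = gradation_coefficients(13, 26, 37)
print(round(cu, 2), round(cc, 2))   # 2.85 1.41, matching the tabulated values
```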
Table 1: Gradation Coefficients of Aggregate Samples from Kasese and Bundibugyo Districts

Sample Code   D10 (mm)   D30 (mm)   D60 (mm)   Uniformity coefficient, Cu   Coefficient of curvature, Cv
214R          13         26         37         2.85                         1.41
314           12         26         39         3.25                         1.44
124           12         22         30         2.50                         1.34
214C          7          14         20         2.86                         1.40
From the sodium sulphate (Na2SO4) soundness test (Table 2), it was revealed that 4 out of the 7 samples (57%) had soundness values not exceeding 1%; these aggregates are obtained from stone fragments that are transported downhill by rivers and are then reduced in size using hammers of weights varying from 5 to 20 kg. Aggregates such as those of samples 214R and 314 are obtained from weathered stone fragments that are excavated from naturally existing aggregate banks.
Fig. 3: Particle Size Distribution Curves of Coarse Aggregates from the Project Area (cumulative percentage passing against sieve size in mm for samples 114, 124, 214R, 214C, 314, 414 and 424).
Table 2: Particle Size Distribution of Coarse Aggregates from Kasese, Kabarole, Kyenjojo and Bundibugyo Districts
Sample   Sieve sizes / cumulative %-passing                          ACV    Na2SO4          Fineness
Code     50 mm   37.5 mm   20.0 mm   14.0 mm   10.0 mm   5.0 mm      (%)    soundness (%)   modulus
114      100     97        17        17        5         0           24     1               3.64
124      100     84        20        20        7         0           25     1               3.69
214R     98      63        12        12        2         10          20     4               4.13
214C     100     100       59        27        16        5           24     2               2.93
314      100     57        16        16        6         0           35     10              4.05
414      100     82        35        29        26        21          26     11              3.07
424      100     88        52        40        32        24          29     1               2.64
Mean                                                                  26.14  2.86            3.45
Standard deviation (SD)                                               4.74   3.34            0.58
ACV = aggregate crushing value.
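The mean and standard deviation rows of Table 2 can be reproduced from the ACV column, as in the short sketch below; the sample (n - 1) standard deviation is used.

```python
import numpy as np

# Aggregate crushing values (%) for the seven samples in Table 2
acv = np.array([24, 25, 20, 24, 35, 26, 29], dtype=float)

print(round(acv.mean(), 2))        # 26.14
print(round(acv.std(ddof=1), 2))   # 4.74 (sample standard deviation)
```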
The crushing strength ranged from 0.120 MPa to 0.920 MPa, as shown in Table 4. This indicates that all the bricks had crushing strengths less than 1.03 MPa, the minimum required crushing strength for a construction brick (Dutta, 1998). It is also observed that some bricks had a higher crushing strength than that of the cement-sand block samples. It can be concluded that the observed variation in the crushing strength of un-burnt bricks indicates a similar variation in strength when the same material is used as a jointing mortar. According to de Felice (2001), to achieve optimum stiffness and overall structural elastic behaviour of a masonry structure, the strength of the jointing mortar is limited to 20% of the strength of the building unit; that is, the jointing mortar strength should not be less than 0.206 MPa. However, for the construction of an earthquake resistant system, results from modelling and computer simulation of the 1940 El Centro earthquake (which measured 7.5 on the Richter scale) on a model of a five-storey masonry structure with 450 mm thick walls revealed that the ultimate crushing strength of a building unit for such an earthquake should not be less than 2.3 MPa. This indicates that the corresponding mortar strength should not be less than 0.46 MPa.
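A minimal sketch of the 20% mortar-to-unit rule discussed above: the two mortar limits follow from the quoted unit strengths, and the brick strengths reported in Table 4 can be screened against the 1.03 MPa minimum. The function and data layout are illustrative only.

```python
def required_mortar_strength(unit_strength_mpa, ratio=0.20):
    """Mortar strength implied by limiting mortar to 20% of the building unit strength (MPa)."""
    return ratio * unit_strength_mpa

print(round(required_mortar_strength(1.03), 3))   # 0.206 MPa, for the minimum construction brick
print(round(required_mortar_strength(2.3), 2))    # 0.46 MPa, for the simulated El Centro demand

# Compressive strengths (MPa) reported in Table 4, as (sample code, strength) pairs
bricks = [("113.1", 0.920), ("123.2", 0.493), ("123.3", 0.230), ("213.1", 0.288),
          ("213.2", 0.177), ("213.3", 0.283), ("313.1", 0.293), ("313.2", 0.293),
          ("413.1", 0.707), ("413.1", 0.165), ("413.2", 0.120), ("423.2", 0.345)]
print(all(strength < 1.03 for _, strength in bricks))   # True: every sample falls short
```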
Table 3: Water Absorption Characteristics of Bricks/Blocks from the Study Area

Sample   Observed water absorption (%)                            Average water
Code     1        2        3        4        5        6           absorption (%)
113.1    32.00    27.00    45.00    30.00    27.00    23.00       30.67
213.1    19.00    19.00    27.00    26.00    26.00    26.00       23.83
313.1    15.00    12.00    14.00    13.00    15.00    12.00       13.50
423.2    11.00    9.00     17.00    11.00    11.00    11.00       11.67
123.2    23.00    17.00    21.00    29.00    23.00    25.00       23.00
213.2    2.60     2.70     3.00     2.60     3.00     2.70        2.77
313.2    3.40     3.20     3.00     3.30     3.40     3.00        3.22
113.3    6.60     6.80     6.20     7.20     7.00     6.50        6.72

Table 4: Weight and Compressive Strength Characteristics of Bricks from the Study Area

Sample Code   Average density (kg/m³)   Compressive strength (MPa)
113.1         1505.67                   0.920
123.2         1548.67                   0.493
123.3         2115.00                   0.230
213.1         1445.00                   0.288
213.2         1523.67                   0.177
213.3         1838.67                   0.283
313.1         2077.83                   0.293
313.2         2011.83                   0.293
413.1         1681.67                   0.707
413.1         2129.33                   0.165
413.2         2807.33                   0.120
423.2         3926.00                   0.345
4.3 Water Absorption and Compressive Strength of Bricks
Water absorption of both the burnt and the adobe bricks ranged from 2.77% to 30.67%, as can be seen in Table 3. All the un-burnt bricks (113.1, 213.1, 123.2 and 313.1) had more than 10% water absorption, while 75% of the burnt bricks had less than 10% water absorption.
4.4 Brick/Block Making Soils
Results for the brick/block making materials, as shown in Table 5 and Figure 4, indicate that
the soils used in the production of bricks or blocks have high clay contents. These soils are locally categorised as clay for making clay bricks and as sand for making cement-mortar blocks. The samples were collected for mix design experiments, for which full testing is still in progress.

Table 5: Gradation Analysis Results on Brick/Block Making Materials from Kasese and Bundibugyo Districts (cumulative %-passing by sieve size)

Sample    37.5   20.0   10.0   6.3    5.0    2.0    0.600   0.425   0.300   0.212   0.150   0.063
Code      mm     mm     mm     mm     mm     mm     mm      mm      mm      mm      mm      mm
II31.4    100    100    100    100    99     98     91      87      84      75      68      61
II42.7    100    100    96     91     88     78     56      49      44      37      33      30
II31.1    100    100    100    100    100    100    97      90      73      37      29      21
II31.5    100    100    100    100    100    98     93      90      86      75      66      57
II42.5    100    100    100    100    100    99     86      79      72      61      55      49
II42.6    100    88     79     75     74     67     50      41      34      26      23      18
II42.1    100    97     92     80     75     62     49      45      42      38      36      34

AASHTO classification of these soils shows a variation from A-2 to A-6.
Fig. 4: Particle size gradation curves for brick/block-making materials from Kasese and Bundibugyo districts.
4.5 Traditional Materials
Traditional materials may be grouped under the following categories:
• Foundation materials: these include the natural soil found on site, in which a foundation trench is excavated to an average depth of 300 mm, down to a relatively hard stratum, with a width of 300 mm. The suitability (hardness) of the foundation level is judged from its resistance to digging with reasonable force by an averagely strong man. At the bottom of the foundation, holes 450 mm deep and 150 mm in diameter are made at 450-600 mm centre to centre. Poles are fixed and compacted into position in the foundation holes using sub-soil as good as or better than the soil excavated from the holes. The entire trench is then filled with mud mortar puddled to an appropriate consistence, such that it is soft enough to be portioned by the fingers, carried and placed by hand and compacted by the fingers, or with broken pieces of dry mud from demolished mud and wattle buildings jointed with mud mortar.
• Walling materials: these include poles with diameters of 75-150 mm, rails with diameters of 30-50 mm, and wall infilling materials. The role of the poles is to support the roof, the wall and all other operating forces in play within the structural environment; for sound construction, all these loads should be transferred safely to the foundation, that is, without any damage to the structure or to the natural foundation. The rails are wooden members, like the poles but of smaller diameter (in the range 12-50 mm), mainly used to collect the loads in the bays between poles and distribute them to the poles, from which the loads are transferred to the foundation. The structural supportive potential depends on how safely the structural loads are carried without either lateral or vertical displacement.
• Roofing materials include:
  o roof covering materials: grass (200-300 mm thick), papyrus (150-200 mm thick) and banana fibre (75-100 mm thick) placed on top of the roof structure. Iron sheets, mainly of gauge 30 or 32, are also used; these are fixed to the roof structure with roofing nails according to the carpenter's experience.
  o the roof structure, made out of purlins, rails, rafters, struts and ties. These are similar to the wall structural members but of reduced diametric sizes (30-75 mm), since they carry smaller loads.
  o the ceiling, where present, which is a carpet-like mat woven out of reeds, rails, papyrus, bamboo splints, etc., tied onto the roof frame with fibres.
• Plastering and finishing materials: these are mainly of soil origin, with traditional binders. Soils from ant-hills, when introduced into the soil-binder finishing matrix, greatly improve its strength characteristics. Occasionally, formal binders such as cement and lime are used, but the structural bond of the finishing matrix then deteriorates faster than when natural binders are used. Some people have attempted blending formal and informal binders in an attempt to cut down construction costs.
• Flooring materials: these are mainly rammed earth, with or without mud plaster.
4.6 Weight and Water Absorption
The weights and water absorptions shown in Table 6 reveal that these materials have dry densities lower than that of water and high water absorption coefficients. The sample with code III 118 had the lowest water absorption, 15.77%, while sample III 11170, of banana fibre, had the
highest water absorption, 254.50%. The median water absorption was 85.4%, which falls between the absorption coefficients of samples III 115 and III 1114.

Table 6: Weight and Water Absorption Relationships of Traditional Materials

Specimen type   Specimen label   Average dry density, kg/m³        Wet density,   Water
                                 (by water displacement method)    kg/m³          absorption, %
Wood            III 118          789.16                            913.63         15.77
Wood            III 114          631.52                            747.72         18.40
Wood            III 116          946.45                            1,165.01       23.09
Fibre           III 1111         730.95                            953.57         30.46
Wood            III 119          652.70                            871.57         33.53
Bamboo          III 1120 G       465.30                            748.48         60.86
Wood            III 115          364.29                            650.79         78.65
Water reed      III 1114         310.63                            596.88         92.15
Grass           III 1119         112.38                            223.37         98.76
Bamboo          III 1120 Y       242.78                            552.14         127.43
Fibre           III 1118         425.56                            977.78         129.77
Grass           III 1110         240.00                            620.00         153.17
Fibre           III 1121         612.50                            1,875.00       206.12
Banana fibre    III 11170        261.06                            925.44         254.50
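The water absorption figures in Table 6 are consistent with the wet and dry densities in the same rows; the sketch below recomputes two of them using absorption (%) = 100 x (wet - dry) / dry. The values are taken directly from Table 6.

```python
def water_absorption(dry_density, wet_density):
    """Water absorption (%) by mass, from dry and wet densities in kg/m3."""
    return 100.0 * (wet_density - dry_density) / dry_density

# Specimen III 118 (wood) from Table 6
print(round(water_absorption(789.16, 913.63), 2))    # 15.77, as tabulated

# Specimen III 11170 (banana fibre) from Table 6
print(round(water_absorption(261.06, 925.44), 2))    # 254.49, against the tabulated 254.50
```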
4.7 Compressive Strength Characteristics
• Wood and Bamboo: The results are summarised in Table 7. It is observed that the densities obtained by direct measurement of the wood were very low compared with those obtained by the water displacement method. The plot of ultimate crushing strength against moment of inertia for bamboo, shown in Figure 5, indicates that strength increases with the size of the bamboo, unlike that of wood, as shown in Figure 6.
Fig. 5: Crushing Strength and Diametric Relationship of Bamboo
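Since Table 7 and Figures 5 and 6 relate crushing behaviour to specimen size through the second moment of area, the sketch below recalls how that quantity is obtained for a solid circular (wood) and a hollow circular (bamboo) cross-section. The diameters used are illustrative values only, not test data from the study.

```python
import math

def inertia_solid(d):
    """Second moment of area of a solid circular section, I = pi * d**4 / 64 (mm^4)."""
    return math.pi * d ** 4 / 64.0

def inertia_hollow(d_outer, d_inner):
    """Second moment of area of a hollow circular section, e.g. a bamboo culm (mm^4)."""
    return math.pi * (d_outer ** 4 - d_inner ** 4) / 64.0

# Illustrative diameters only (mm)
print(round(inertia_solid(60.0)))          # about 636173 mm^4 for a 60 mm wood pole
print(round(inertia_hollow(60.0, 48.0)))   # about 375599 mm^4 for a culm with a 6 mm wall
```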
Table 7: Compressive Strength Test Results of Wood
(The table reports, for specimens III 111 to III 119: the external diameters R1 and R2 before and after loading, mm; the average height, mm; the average density, kg/m³; the ultimate crushing strength, N/mm²; the compressive and tensile strains, %; and the moment of inertia I, mm⁴.)
4.8 Compressive Strength Characteristics of Volcanic Ash
The compressive strength characteristics of volcanic ash and stone slate are shown in Table 8. The strength characteristics of the volcanic ash briquettes are comparable to those of cement-sand blocks.
Fig. 6: Crushing Strength and Diameter Relationship of Wood
Table 8: Weight and Compressive Strength Characteristics of Stone and Volcanic Ash Briquettes

Specimen type                        Average density (kg/m³)   Average ultimate crushing strength (N/mm²)
113.3                                2,249.01                  8.02
313.3 volcanic ash (whitish grey)    1,804.00                  32.59
313.3 volcanic ash (reddish grey)    2,293.86                  42.91
4.9 Tensile Strength Characteristics of Natural Fibres
The tensile strength characteristics of the natural fibres, as shown in Tables 6 and 7, indicate that all these fibres, under tensile loads, extend only to a limited extent up to the breaking point.

Table 9: Unit Cost (Ushs) of Building Materials in the Project Area

Material                          Bundibugyo   Kabarole    Kasese      Average unit cost, all districts
Sand (per trip)                   33,218.80    31,553.19   46,076.39   24,528.30
Coarse aggregates (per trip)      37,106.92    50,455.32   48,954.55   45,626.09
Hardcore (per trip)               41,792.45    59,226.50   40,142.86   51,189.25
Burnt clay bricks (per piece)     51.56        56.14       -           54.38
Mud bricks (per piece)            44.23        46.67       -           -
Cement blocks (per piece)         1,250        1,250       1,250       1,250
Timber (per piece)                7,222.73     4,872.40    2,688.68    5,244.27
Wattle (per piece)                500          1,333       5,000       581.20
Bamboo (per piece)                380.00       -           -           380.00
GCI sheets G32 (per piece)        11,809.09    10,047.30   17,303.68   12,466.61
Building stones (per piece)       44.75        -           -           -
5.0 COST OF BUILDING MATERIALS
The average costs of construction materials in the districts of Bundibugyo, Kabarole and Kasese are summarised in Table 9.
6.0 CONCLUSION
6.1 Quality of Available Materials
Soils exhibited a high level of variability, from A-1 (excellent foundation material) to A-7 (poor foundation material). Sand varies from poor to good, with high levels of fines/organic matter. Possible sources of sand are:
1. Kazingo - Kabarole District
2. Pempa - Kabarole District
3. Kabweeza - Kyenjojo District
4. Barwenda - Kyenjojo District
5. Byabasabu - Kamwenge District
6. Kanyangeye - Kasese District
7. Ntoroko - Bundibugyo District
8. Bumadu River - Bundibugyo District
Coarse aggregates are available but of poor gradation, mainly due to the poor method of production: the aggregates are produced manually by crushing rocks, and there is usually a temptation to look for softer rocks, which are easier to crush. Possible sources of coarse aggregates are:
1. Karusandara Hill - Kasese District
2. Kazingo - Kabarole District
3. Rugombe - Kyenjojo District
4. Kamwenge - Kamwenge District
5. Bumadu - Bundibugyo District
6. Karugutu - Bundibugyo District
Bricks exhibited very low crushing strengths (due to poor selection of the soils used) and high water absorption. Clay deposits are very sparse, and un-burnt soil bricks exhibited very low strength. Traditional materials are available and are mainly used in rural areas; the performance of buildings constructed out of traditional materials was found to be good in an earthquake environment.
7.0 RECOMMENDATIONS
(i) In order to enhance the sustainability of indigenous buildings, there is a need to subject local materials, such as poles, cane and reeds, to proper seasoning and anti-termite treatment. However, most anti-termite products are expensive, and the usual local method of preservation, using used engine oils, may have long term adverse environmental impacts. Further research into affordable anti-termite treatment, in addition to the measures proposed below, is therefore recommended.
(ii) The strength characteristics of materials such as sand, aggregates, and earth for brick making shall be tested for compliance with specifications. Where the materials do
not conform to the specifications, there will be a need to improve the materials mechanically (through blending) as much as possible. Because cost is a constraining factor, the improvements shall be limited to the least cost options. Where these options do not meet the specifications, resizing of the structural members shall be done; for example, the sizes of beams, columns and walls may have to be increased to ensure that the stresses due to the predicted loads remain within safe limits.
(iii) The effects of seismic activities on buildings shall be treated as wind forces due to pressure exerted on the impact surface. To ensure the safety of users, buildings higher than two storeys shall be designed by professional architects and engineers. In particular, the development of institutional and public buildings shall set examples of best practice in their socio-economic, physical and legal environment.
(iv) Footings for buildings shall, as much as possible, be on firm ground. Where such a firm foundation would be too deep, professional advice for simple houses shall be sought through manuals, or a request placed with the District Administration for such assistance.
(v) Fine aggregates from the recommended sources shall be used after preliminary treatments such as:
• washing, to remove impurities;
• screening, to remove oversize particles.
(vi) Coarse aggregates:
• sieves of appropriate gradation sizes shall be used at the quarries;
• the feasibility of introducing a stone crushing plant in the region shall be explored.
(vii) Bricks
(viii) For traditional materials, poles and rails used shall be preserved against termites by:
• smoking (partially burning the surface);
• ash application in the foundation/base;
• polythene paper cover of the base;
• used oil application on the surface.
(ix) Thatch:
• to be replaced after every two rain seasons;
• a study is in progress;
• iron sheets are better because of the load reduction.
(x) Agro-afforestation of trees with good (natural or genetically modified) properties shall be done in consultation with the Forest Authorities and NEMA.
8.0 REFERENCES
Dutta, B.N., 1998. Estimation and Costing in Civil Engineering. Delhi: Replika Press (P) Ltd.
Ebinger, C. J., 1989. Tectonic development of the western branch of the East African rift system. Geological Society of America Bulletin, 101, 885-903.
Kahuma, K.A., 2002. An Investigation of the Relative Suitability of Sand Deposits in Kabarole, Kyenjojo and Kamwenge Districts for Construction of Earthquake Resistant Systems. M.Sc. Thesis, Makerere University.
Loupekine, L. S., 1966. The Toro earthquake of 20 March 1966. Earthquake Reconnaissance Mission, UNESCO, Paris, 34 p.
National Earthquake Disaster Committee, 1994. Preliminary Report on the Earthquake Disaster in Kabarole, Bundibugyo and Kasese Districts. Ministry of Labour and Social Affairs, Republic of Uganda.
Twesigomwe, E.M., 1996. Probabilistic Seismic Hazard Assessment of Uganda. Ph.D. Thesis, Makerere University.
Twesigomwe, E. M., 1997. Seismic hazards in Uganda. Journal of African Earth Sciences, 24, 183-195.
Upcott, N. M., Mukasa, R. K., Ebinger, C. J. and Karner, G. D., 1996. Along-axis segmentation and isostasy in the Western Rift, East Africa. Journal of Geophysical Research, 101, 3247-3268.
Wagner, G. S. and Langston, C. A., 1988. East African body wave inversion with implications for continental structure and deformation. Geophysical Journal, 94, 503-518.
BIOSENSOR TO DETECT HEAVY METALS IN WASTE WATER
J.N. Ntihuga, Department of Food Science and Technology, Kigali Institute of Science and Technology
ABSTRACT Heavy metals are among the most toxic substances affecting the environment. Since they are not biodegradable, such metals can accumulate in the environment and produce toxic effects in plants and animals even at very low concentrations. Heavy metal ions cause health hazards in humans such as anemia, kidney failure, neurological damage, loss of memory and loss of appetite. Owing to the highly toxic nature of heavy metals, there is an obvious need to determine their levels on site. Biosensors, owing to their simplicity and selectivity, are very promising for environmental pollution monitoring, especially for heavy metals. A urease based biosensor for the determination of heavy metal ions in wastewater, based on a modified sol gel immobilization technique, was developed. Crude urease (32.4 U) from Dolichos uniflorus immobilized on a cellulose strip was used as the bio-recognition element.
Keywords: Heavy metals, toxic substances, environment, biosensor, pollution, waste water, monitoring system.
1.0 INTRODUCTION
Contamination of soils due to the discharge of industrial effluents is one of the most significant problems faced by man. Heavy metals are widely present in these contaminated environments. For example, many places are considerably polluted with chromium from tannery waste waters. In these areas, chromium exists in both the hexavalent and the trivalent forms. Plants grown in such areas can accumulate chromium ions. These ions have certain threshold levels for the essential functions of living organisms and man, but cause toxic effects if the tolerance levels are exceeded. Analytical tests for the determination of chromium ions are tedious, time consuming and expensive. Biosensors, owing to their simplicity and selectivity, are very promising for environmental pollution monitoring, especially for heavy metals.
2.0 METHODOLOGY
The determination of heavy metal ions using an immobilized biosensor is based on the measurement of the activity of urease, which is inhibited by heavy metal ions. In many instances, monitoring is not continuous but requires a number of individual measurements to be made at different times. In such cases, sensors should be manufactured inexpensively so that they may be disposed of after a single reading. Keeping this in view, the present work had the following objectives:
• to develop an enzyme based sensor for heavy metals, especially for chromium ion detection;
• to study the factors affecting inhibition of the enzyme;
• to evaluate the performance of the sensor.
In order to achieve the above mentioned objectives, the following analyses were done:
2.1 Inhibition of Dolichos uniflorus Urease by Cr6+
Dolichos uniflorus urease with an activity of 32.4 U was used. The activity of urease, uninhibited and inhibited by Cr6+ ions, was studied in 20 mM phosphate buffer, pH 7.2, at room temperature. The inhibition by Cr6+ ions was determined in two experimental systems. In the first system (unincubated), urea was mixed with the inhibitor and the reaction was initiated by the addition of a small volume of concentrated urease solution (1 ml of a solution of concentration 1.25 mg urease/ml). In the second system (incubated), the enzyme was mixed with the inhibitor and the mixture was incubated for 20 minutes at room temperature; the reaction was then initiated by the addition of a small volume of concentrated urea solution (1 ml of a 5 M solution). After mixing, at t = 0 the composition of the reaction mixture was identical in both systems. The reaction was monitored by measuring the residual concentration of urea, using the diacetyl monoxime reagent method, in samples removed from the reaction mixture at time intervals.
2.2 Urease Immobilization
The modified sol gel process provides a very attractive and convenient technique for the immobilization of enzymes.
2.3 Preparation of TMOS Stock Sol Gel Solution and Enzyme Solution
A homogeneous stock sol gel solution was prepared within 5 minutes by vigorously mixing 570 µl of methanol, 50 µl of tetramethoxysilane, 10 µl of 3.8% cetyl trimethyl ammonium bromide solution, 10 µl of 5 mM sodium hydroxide and 60 µl of water in a small test tube at room temperature. This stock gel solution was then cooled to 4°C immediately after mixing. The enzyme stock solution was prepared by dissolving 80 mg of urease in 50 ml of 0.02 M phosphate buffer (pH 7.2). The enzyme solution was then stored at 4°C in a refrigerator.
2.2 Preparation of Enzyme Electrodes
Initially, 50 µl of enzyme stock solution together with 5 µl of glycerol was pipetted onto the surface of the electrode and distributed gently over the entire surface with the help of a capillary tube. The electrode was allowed to dry in ambient conditions for 1 hour. Then 50 µl of stock gel solution was pipetted on to cover the enzyme layer formed over the surface of the electrode. The electrode was allowed to polymerize and dry for 1 hour at ambient temperature. Finally, the enzyme electrode was immersed in phosphate buffer and kept at 4°C in a refrigerator overnight. When not in use, the electrode was stored in phosphate buffer (pH 7.2) at 4°C in a refrigerator.
2.3 Estimation of Kinetic Parameters of Free and Immobilized Urease
For the estimation of the kinetic parameters, a series of runs was conducted in test tubes at pH 7.2 and room temperature, with an initial urease concentration of 1.25 mg/ml in 20 mM phosphate buffer. The initial urea concentration was varied in the range 10-50 mM, and the liberated ammonia was determined at appropriate time intervals after the start of the reaction. The initial reaction rate was calculated by the initial velocity estimation method.
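The kinetic constants reported later (Km and Vm) can be estimated from such initial-rate data by fitting the Michaelis-Menten equation v = Vm·S/(Km + S). The sketch below shows one way of doing this with a nonlinear least-squares fit; the data points are invented for illustration and this is not necessarily the estimation procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vm, km):
    """Initial reaction rate v = Vm * S / (Km + S)."""
    return vm * s / (km + s)

# Hypothetical initial urea concentrations (mM) and measured initial rates (mM/min)
s = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
v = np.array([3.5, 5.3, 6.4, 7.1, 7.6])

(vm, km), _ = curve_fit(michaelis_menten, s, v, p0=(10.0, 20.0))
print(f"Vm = {vm:.2f} mM/min, Km = {km:.2f} mM")
```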
2.4 Standardization of the Sensor for Urea
The biosensor response is a function of the concentration of urea and of the activity level of the urease. All measurements were carried out at room temperature in a 50 ml beaker filled with a 10 ml test sample. Standard samples of urea were prepared at known concentrations; 10 ml of a known concentration of urea was taken in a 50 ml beaker, the biosensor was dipped into the sample and the biosensor response was measured. The residual urea was quantitatively measured by a spectrophotometric method. The sensor was standardized for various concentrations of urea in the range 10 mM to 50 mM.
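A minimal sketch of this standardization step: over the linear range reported later (10-50 mM), the sensor response can be regressed against the known urea concentrations and the fitted line inverted to read an unknown sample. The response values and units below are invented for illustration.

```python
import numpy as np

# Known urea standards (mM) and corresponding sensor responses (arbitrary units, invented)
conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
response = np.array([0.21, 0.40, 0.62, 0.79, 1.01])

slope, intercept = np.polyfit(conc, response, 1)   # linear calibration line

def urea_from_response(r):
    """Invert the calibration line to estimate urea concentration (mM) from a response."""
    return (r - intercept) / slope

print(round(urea_from_response(0.55), 1))   # about 27 mM on this invented calibration
```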
2.5 Incubation Time Evaluation
The level of inhibition by chromium ions depends on the preincubation time. In order to optimize the incubation time, the effect of the preincubation time of the developed biosensor in a 50 ppm Cr6+ solution on the biosensor response was examined. The electrode was dipped in the 50 ppm solution for various preincubation times ranging from 1 minute to 50 minutes, and the level of inhibition for each incubation time was measured.
1.0 STABILITY
1.1 Operational Stability
The stability of the enzyme in the immobilized state was checked by measuring the activity of the same biosensor up to 10 times, both for immobilized urease stored in phosphate buffer (pH 7.2) at 4°C and for that stored at 4°C only. The response of the sensor for a 50 mM urea concentration was measured, and the variation of the response current for the 50 mM urea concentration was recorded.
1.2 Storage Stability The stability of the enzyme in the immobilized state was checked by measuring the activity up to 8 days (one set of measurements per day) for both immobilized urease stored in phosphate buffer pH 7.2 at 4°C and that stored at 4°C only. The response of the sensor for 50 mM urea concentration was measured. The same electrode was used repeatedly for all the 8 days. The variation of the response current for 50 mM urea concentration was recorded. 1.3 Reactivation of The Enzyme The electrode incubated in 50 ppm chromium ions solution was chosen for evaluation. The reactivation of inhibited enzyme was carried out by soaking the sensor strip in the solution containing 1 mM EDTA (Ethylene diamine tetra acetic acid) for 2 hours and 10
mM EDTA for 15 minutes, respectively. The sensor was then subjected to urease activity determination.
1.4 Reproducibility
The reproducibility of the electrodes was checked by measuring the response of different electrodes individually for a known concentration of urea. Ten independently made electrodes were chosen for evaluation, and the variation of the response current for a 50 mM urea concentration across all 10 electrodes was recorded.
2.0 FINDINGS
The following list summarises the findings of the investigation.
• For free urease: Km = 20.044 mM, Vm = 10.62 mM/min.
• For immobilized urease: Km = 22.049 mM, Vm = 3.28 × 10⁻³ mM/min.
• Reason for the non-significant change in Km: unchanged affinity of the enzyme for urea upon immobilization.
• Reasons for the decrease in Vm: conformational changes in the tertiary structure of the enzyme, steric effects resulting from limitation of the accessibility of substrate to the active sites, and denaturation of the enzyme.
• Inhibition of urease by Cr6+ ions is non-competitive.
• Initially the reaction was weakly inhibited, i.e. high reaction rates; the inhibition then grew stronger, i.e. lower reaction rates.
• Inhibition depends strongly on time and on the inhibitor concentration; the effect of temperature was found to be negligible.
• Optimum pH for free urease: 7.2; optimum temperature for free urease: 40°C.
• Optimum pH for immobilized urease: 6.7; optimum temperature for immobilized urease: 70°C.
• The activity remained the same upon using the strip six times; thereafter it reduced by a maximum of 6%.
• The activity remained the same upon using the strip for four days; thereafter it reduced by a maximum of 2%.
• Using EDTA, immobilized urease was restored to up to 12% of its original activity after inhibition by chromium.
• The reproducibility was 40% to 60%.
3.0 ANALYSIS OF RESULTS
Using a modified sol gel method, urease was immobilized on a cellulose strip and the performance of the resulting sensor was analysed. The enzyme electrode was used for the detection of chromium ions. The sensor was evaluated for various parameters, and the factors affecting inhibition and immobilization of the enzyme were studied.
3.1 Estimation of the Kinetic Parameters Km and Vm of Free and Immobilized Urease
The kinetic constants of the native and the immobilized (nonwoven cellulose) urease, namely the Michaelis constant Km and the maximum reaction rate Vm in the absence of inhibitors, were determined
at the experimental conditions (50 mM urea, 20 mM phosphate buffer, pH 7.2 and 6.7, at room temperature). The kinetic parameters were estimated as 10.62 mM/min and 20.044 mM for free urease, and 3.28 × 10⁻³ mM/min and 22.049 mM for immobilized urease, respectively. It can be observed that Vm decreased from 10.62 mM/min for the free form to 3.28 × 10⁻³ mM/min for the bound form of urease, while Km did not change significantly, increasing from 20.044 mM to 22.049 mM urea, denoting the unchanged affinity of the enzyme for urea upon immobilization. The increase in Km could not be attributed to mass transfer resistance due to the liquid film over the nonwoven cellulose, because the Da (Damkohler number) value was far greater than 1 (12.04). The drastic decrease in Vm may have resulted from conformational changes in the tertiary structure of the enzyme during immobilization, steric effects resulting from limitation of the accessibility of substrate to the active sites, or denaturation of the enzyme during immobilization.
3.2 Inhibition Studies
The inhibition of urease by Cr6+ ions was studied in 20 mM phosphate buffer (pH 7.2). The inhibition was observed in two systems, which differed in the order in which the components of the reaction mixture were mixed. In the first (unincubated), the reaction was initiated by adding urease to the mixture of urea and Cr6+ ions, and in the second (incubated), by adding urea to the mixture of urease incubated with Cr6+ ions prior to the reaction. Figures 1 and 2 depict the reaction progress curves for the unincubated and incubated systems, respectively.
Fig. 1: Unincubated urease - urea - Cr6+ ion system: progress curves of the urease-catalysed hydrolysis of urea carried out in the presence of Cr6+ ions (residual urea against time, min). Numbers denote Cr6+ concentration (ppm).
In both systems, in the initial period of the reaction, the reaction was weakly inhibited, characterized by high reaction rates; in the later period the inhibition grew stronger, characterized by lower reaction rates. Figure 3 shows the effect of the inhibition constants on the initial velocities. Inhibition depends strongly on time: Figure 4 shows the effect of time on inhibition; at the beginning the degree of inhibition is small, it increases as time increases, and it becomes constant at saturation.
Fig. 3: Effect of inhibition constants on initial velocities (initial velocity against substrate concentration, mM).
Fig. 4: Effect of time on inhibition of urease from Dolichos uniflorus (degree of inhibition against time, min). Numbers denote inhibitor concentration in ppm.
3.3 Calibration of the Sensor for Urea
The steady state response of the biosensor as a function of urea concentration under the specified conditions was examined. The biosensor showed a sharp increase in response up to 10 mM; beyond this, it exhibited a linear response to changes up to a 50 mM urea concentration.
3.4 Effect of Incubation Time
The rate of enzyme inhibition by heavy metal ions is rather slow (Lee & Lee, 2002). Therefore, the biosensor to be tested was preincubated for a definite period of time in the test solution in order to obtain the inhibition. The effect of incubation time on the sensor was evaluated, and the results are depicted in Figure 4. Urease was inhibited by up to 50% at the end of 25 min; beyond this, the level of inhibition was more or less constant. The inhibition of urease activity by the Cr6+ ion at 25 minutes was recorded, and for all other experiments the incubation time was taken as 25 minutes.
3.5 Calibration of the Sensor for Chromium
Under the predetermined optimum conditions, the biosensor was used to determine the concentration of chromium ions in waste water. Figure 5 shows the biosensor current responses due to inhibition of urease by known amounts of Cr6+.
Fig. 5: Experimental set-up showing the current due to inhibition of urease by Cr6+ (1 ppm).
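A sketch of how a current response such as that in Fig. 5 can be turned into a degree-of-inhibition reading and then into a chromium estimate via a calibration of inhibition against Cr6+ concentration. All numbers below are invented for illustration and are not data from the study.

```python
import numpy as np

def inhibition_percent(response_uninhibited, response_inhibited):
    """Relative inhibition I (%) = 100 * (R0 - Ri) / R0, from the sensor responses."""
    return 100.0 * (response_uninhibited - response_inhibited) / response_uninhibited

# Invented calibration: degree of inhibition (%) measured for known Cr6+ standards (ppm)
cr_ppm = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
inhibition = np.array([8.0, 18.0, 27.0, 41.0, 50.0])

def cr_from_inhibition(i_pct):
    """Interpolate the calibration curve to estimate Cr6+ (ppm) from an inhibition reading."""
    return float(np.interp(i_pct, inhibition, cr_ppm))

i = inhibition_percent(12.0, 9.0)        # 25% inhibition for these invented currents
print(round(cr_from_inhibition(i), 1))   # about 8.9 ppm on the invented calibration
```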
3.6 Operational and Storage Stability
Enzyme immobilization is one of the crucial steps in the fabrication of a biosensor. Immobilized enzymes generally have greater stability and higher activity, and stability is one of the factors that determine the usefulness of a biosensor. In the present study, the stability of the developed sensor was investigated. It was found that the activity remained the same upon using the strip six times and for four days; thereafter its activity reduced by a maximum of 6% and 2% for operational and storage stability, respectively.
3.7 Reactivation of Immobilized Urease
The immobilized urease activity after inhibition by chromium ions was not restored when the sensor was washed in buffer solution for a long time. However, soaking the sensor chip in buffer solution containing EDTA caused urease reactivation. The immobilized urease had its activity restored to up to 12% of its original activity after inhibition by Cr6+, as a result of soaking in 1 mM and 10 mM EDTA for 2 hours and 15 minutes, respectively. This limited recovery reflects the irreversible inhibitory action of heavy metal ions.
3.8 Reproducibility
The reproducibility of the sensor strip was checked by measuring the responses of different strips individually for a known concentration of urea; the reproducibility of the obtained response was 40% to 60%. The deviation was due to the uneven distribution of enzyme over the electrode surface and the non-uniform cutting of the electrodes.
4.0 CONCLUSION
In the present work, a urease based mechanism was developed to determine chromium ions at trace levels. The present investigation showed that the modified sol gel immobilized sensor can be used as a reliable means for the determination of heavy metal ions in liquid samples, especially in laboratories. Chromium levels in waste water from two different sources were tested with the developed sensor. Further research may be directed towards the development of an electronic device based on the mechanism suggested. The factors affecting immobilization, such as the purity of the enzyme, the thickness of the sol gel and the size of the cellulose strip, were not analysed in the present work; it is expected that further evaluation of the sensor for these parameters will enhance its performance.
REFERENCES
Chern, L.H., Heng, L.Y. and Musa Ahmed (2001). A potentiometric biosensor based on urease enzyme. Proc. NSF Workshop, Kuala Lumpur.
Krajewska, B., Zaborska, W. and Leszko, M. (2000). Inhibition of chitosan-immobilized urease by slow-binding inhibitors: Ni2+, F- and acetohydroxamic acid. Journal of Molecular Catalysis B: Enzymatic, 14, 101-109.
Kunze, G., Hank, G. and Weber, W. (2001). Monitoring of heavy metals by microbial biosensor and flow injection analysis for waste water management. Cooperation of Sensorics in Biotechnology, Project 13028108.
Lee, S.M. and Lee, W.Y. (2002). Determination of heavy metal ions using conductometric biosensor based on sol gel immobilized urease. Bull. Korean Chem. Soc., 23, 1169-1172.
Nikolelis, P.D., Krull, J.U., Wang, J. and Mascini, M. (1997). Biosensors for direct monitoring of environmental pollutants in the field. Proceedings of the NATO Advanced Research Workshop, Smolenice, Slovakia.
Samborska, A., Stepniewska, Z. and Stepniewski, W. (2004). Influence of different oxidation states of chromium (VI, III) on soil urease activity. Geoderma, 122, 317-322.
Biotechnol. Appl. Biochem., 34, 55-62.
Tsai, H.-C., Doong, R.-A., Chiang, H.-C. and Chen, K.-T. (2003). Sol gel derived urease based optical biosensor for rapid determination of heavy metals. Analytica Chimica Acta, 481, 75-84.
Zaborska, W., Krajewska, B., Leszko, M. and Olech, Z. (2000). Inhibition of urease by Ni2+ ions and analysis of reaction progress curves. Journal of Molecular Catalysis B, 13, 103-108.
Zhylyak, G.A., Elskaya, A.V., Korpan, Y.I., Soldatkin, A.P. and Dzyadevich, S.V. (1995). Application of urease conductometric biosensor for heavy metal ion determination. Sens. Act. B, 24-25, 145-148.
INTEGRATED ENVIRONMENTAL EDUCATION AND SUSTAINABLE DEVELOPMENT
Thomas M. Matiasi, Department of Water Resources and Environmental Management, The Kenya Water Institute, Kenya
ABSTRACT Sustainable development is development which should not destroy the integrity of the environment. Ensuring environmental sustainability is the key to sustainable development, but this is a major challenge in developing countries. In this paper, the issues which are components of integrated environmental education and are vital for sustainable development, and which are explained here, are: the role of different disciplines in environmental education; the relationship between population and environment; sustainable natural resources conservation; agriculture and environment; energy and environment; human settlements and environment; natural environmental hazards; environmental pollution and management; natural resources conservation; and environmental planning and management. Integrated environmental education is a necessity for professionals in the fields of architecture, chemical engineering, civil engineering, electrical engineering, mechanical engineering and geomatics. The paper provides a conceptual framework and stresses the importance of a holistic approach to integrated environmental education, for it can provide essential knowledge required in sustainable economic development.
Key words: Integrated environmental education; Advances in engineering and technology; Sustainable development; Environmental impact assessment; Local communities; Natural resource base; Environmental planning and management
1.0 INTRODUCTION
There is a real link between sustainable environmental management and sustainable economic development: the sustainability of economic development in any country depends on the sustainable management of environmental resources. Integrated environmental education can be defined as a process of viewing the environment in its totality. The interlocking systems of the environment, within which all living things interact, can be natural or biophysical or man made. Integrated environmental education supports sustainable development through knowledge creation and dissemination. Because of the many factors which affect the environment, it is important that the environment be studied in an integrated manner for it to provide essential knowledge, which is a useful tool for sustainable development. It is imperative for civil and environmental engineers to have comprehensive knowledge of integrated environmental education, for most of their activities have a potential impact on the environment. Sustainable development can be achieved through harmonising developmental activities with the proper management and conservation of natural resources at all levels, starting with the individual and the community up to the
national level. Development should be sustainable both ecologically and economically, and it should be pursued without jeopardising the natural resource base for future generations. At the global level, it is being realised that the environment can no longer stand too much abuse, and hence the need for sustainable environmental management, whose overall objective is to use the natural capital in a sustainable manner. In this paper, the issues which are components of integrated environmental education and which are discussed are: human populations, settlements and environment; environmental hazards; sustainable natural resources conservation; agricultural development and environment; energy management and environment; environmental pollution and management; and sustainable environmental planning and management. National environmental policies should provide guidance for actions in all sectors, hence the requirement for integrated environmental education by policy and decision makers. Table 1 shows the various disciplines, subjects and essential knowledge required for integrated environmental education.

Table 1: The Role of Each Discipline in Integrated Environmental Education
• Natural sciences: enable people to comprehend and integrate the basic interactions of the ecosystem; provide technology and skills that are in harmony with nature.
• Social sciences (economics, geography, history, sociology, psychology): develop awareness and provide knowledge of the social and cultural environment; alter structures to provide equity in housing and job opportunities.
• Moral education (religion, ethics, philosophy): provide the rationale to enable attitude change and the development of environmental values.
• Languages (French, Spanish, Arabic, German, English, Swahili, vernacular): give rise to ideas; communicate knowledge, perception, imagination and appreciation of the environment.
• Arts (dance, sculpture): provide a means of expression and response to the aesthetics of the environment; can portray the beauty of a clean environment.
2.0 HUMAN POPULATION, SETTLEMENTS AND ENVIRONMENT
There has been a population explosion in the world within the second half of the 20th century, and this has had an impact on the environment, causing an imbalance between population growth and the availability of the resources to support it. This imbalance is already apparent in some areas of the developing world, where food shortages, soil erosion and desertification are evident. Population education has the objective of helping people to know how to use natural resources such as water in a sustainable manner, to improve the environment and to contribute to the reduction of environmental degradation. A major input into environment and sustainable development is proper planning and utilization of resources in relation to the people they are meant to satisfy, and population education can provide planners with vital statistical information for planning and decision making. In both urban and rural areas, human settlements provide the framework for social, cultural and economic interactions among people and are therefore an integral part of the interlocking systems of the environment. Human settlements provide a central topic in the context of integrated environmental education, for most environmental problems are associated with human settlements. Urban and rural settlements alter the natural environment on which they are located, creating new issues that require new strategies to solve.
3.0 ENVIRONMENTAL HAZARDS
Environmental hazards may be both natural and human induced. The frequency and magnitude of natural disasters have increased dramatically over the last several decades. Natural hazards can be concentrated in time and space; a natural hazard can be viewed as the risk encountered in a place subject to a natural event such as lightning, a flood or a volcanic eruption. The hazard results from the interaction of the natural and social systems, but it is people who transform the environment into hazards or resources by using natural features for economic, social and aesthetic purposes. Environmental hazards can have positive and negative impacts on the environment. Natural hazards are triggered by natural phenomena and may be geophysical or biological in origin; the common geophysical ones are droughts, floods and lightning, which are of climatic and meteorological origin, and earthquakes, landslides, tsunamis and volcanic eruptions, which are of geological and geomorphic origin. Engineering technologies need to address these environmental phenomena, and knowledge of the impact of environmental hazards is necessary: in the application of engineering and technology to manage the hazards, there have to be environmental considerations. There is a need to formulate strategies with the main objective of mitigating the environmental problems associated with environmental hazards. Environmental education on the causes, impacts and control measures of environmental hazards is required by planners and decision makers, together with environmental engineers and sociologists. Advancement in engineering and technology should have an input into
the devising of methods of mitigating both natural and man made environmental hazards. Methods of predicting environmental hazards should be enhanced, and society should be educated on how to cope with environmental hazards when they have occurred.
4.0 SUSTAINABLE NATURAL RESOURCES CONSERVATION
Sustainable natural resources conservation is a process of rational use, skilful management and preservation of the natural environment with all its resources. Integrated environmental education can provide knowledge which is useful in the sustainable management of natural resources. All human efforts towards development are based upon the presence of natural resources. Although the earth has continued to support life for thousands of years, today it is facing serious environmental challenges which are a result of human impact, and this is a threat to life support systems: a potential ecological disaster. Integrated environmental education can be used as a tool to create the necessary awareness, one that would indicate and strongly emphasise the sustainable use of the natural resource base so as to protect the natural capital for future generations. Lack of environmental education and awareness, human greed and careless attitudes are threatening the natural resources with extinction. There is a need to develop approaches and management strategies that combine both developmental efforts and conservation measures for the natural resources; this would improve, maintain and protect the natural environment and its resources for the benefit of all mankind. Natural resources are finite, limited and capable of being destroyed by unsustainable use, and this can be a limiting factor on sustainable development. Some natural resources have been there in the past, but this might not be the case in the future, for it will depend on their mode of utilization. Hence environmental education on the characteristics of natural resources is required if they are to be managed in a sustainable manner, so that they do not become limiting factors to sustainable development. As communities think of advancement in engineering and technology, there is also a need to know the problems associated with the utilization of natural resources. Climate change and global warming have been associated with the utilization of some natural resources, and the utilization of some natural resources has caused problems for the environment owing to the wastes produced. In the course of the utilization of natural resources, human activities such as power generation, industrialization and transport have been responsible for the accumulation of greenhouse gases in the atmosphere. Gases such as carbon dioxide, methane, nitrous oxide and chlorofluorocarbons are associated with global warming, which has resulted in other environmental problems. These environmental problems can be mitigated by utilizing the natural resources on a sustainable basis.
5.0 AGRICULTURAL DEVELOPMENT AND ENVIRONMENT
Agricultural technologies can have adverse impacts on the environment, as indicated in Table 2, largely as a result of the increased demand for food production required by a growing population. The information in Table 2 shows that integrated environmental education is required at both the individual and community level if the negative impacts of agricultural technologies are to be mitigated.

Table 2: Potential negative impacts of agricultural technologies on the environment
Mechanisation: soils become compacted; reduced infiltration and drainage; plant rooting depth is reduced; reduction of yield with time; only applicable in large-scale farming.
Irrigation: groundwater contamination by agrochemicals; salinisation and alkalinisation of soils; formation of salt pans; poor yields; eutrophication of aquatic ecosystems; increased water-borne and water-related diseases; global water degradation.
High-yield varieties: genetic resources are reduced; dependence on imported grains; increased need for fertilisers and pesticides; risk to human health; yield limited by climate, disease and pests.
Agrochemicals: soil and water pollution; disruption of food chains; kill both target and non-target organisms; some pests develop resistance; general toxicity and secondary contamination.
Artificial animal feeds and hormones: quality reduced; animals may get unusual diseases; increased chance of cancer in humans.
Monoculture: food chains destroyed; disease and pest epidemics encouraged; less stable ecosystems develop; loss of genetic and species diversity; interference with nutrient recycling.
Biological control of pests: an introduced pest could become a threat later if the victim pest is eliminated.
Genetic engineering: ecosystem imbalance; disruption of the course of evolution and of ecosystems.
Continuous research on the environmental impact of the various agricultural technologies is necessary. It should be noted that some technologies have a potential negative impact on the environment, and this necessitates the creation of environmental awareness and education
for the local communities who use land and water resources for agricultural advancement. Ways and means of overcoming the problems and factors that hinder agricultural improvement, food production and the flow of food to where it is needed must be sought. However, such efforts should take into account any environmental costs incurred through technologies that are aimed at improving agriculture. Some environmental problems related to agriculture and food production are due to a lack of proper planning and of clear policies in agriculture and food production. The use of new agricultural technologies has brought mixed blessings for the environment. Agricultural food production has been boosted and pest control has been achieved, but the new agricultural technologies have also caused serious environmental problems, ranging from resource depletion and general degradation to environmental pollution. They have resulted in loss of genetic diversity, disruption of natural food chains, destruction of fragile ecosystems and increased vulnerability of the quality of the various components of the natural environment, with a potential impact on sustainable development.

6.0 ENERGY MANAGEMENT AND ENVIRONMENT
Sustainable energy management is a prerequisite for sustainable economic development, but the mode of energy utilisation should not cause adverse effects on the environment. In modern society energy is a crucial resource, and the manner in which it is produced, distributed and used has far-reaching environmental implications. Besides its many beneficial roles, energy is also a major cause of environmental degradation; a good example is the production and use of fossil energy, which is a source of air pollution and acid rain. Environmental concerns have also been raised by non-fossil energy sources such as nuclear power and the construction of large-scale hydroelectric reservoirs. All sources of energy can be divided into renewable and non-renewable sources. Energy production, transformation, transport and use have significant impacts on the environment, and these impacts depend on the source of energy, the technologies for its production and its use in the different sectors of the economy such as transport, agriculture, industry, and domestic and commercial activities. Knowledge of the entire fuel cycle, from extraction of raw materials through transportation, processing, storage and use of the fuel to the management of the wastes generated at every step, can be used to assess the environmental impacts of the different energy systems. In the 21st century, countries should be encouraged to adopt low-pollution, low-waste energy processes with three main objectives: to develop methodologies and guidelines for integrating environmental considerations into national energy policy planning and development; to provide information on the environmental impacts and risks of the different energy systems, together with guidelines for comparative assessment, management and conservation; and to support research projects that show how energy can be used in environmentally sound and rational ways. The present challenge for every individual, community
and country is to conserve and use efficiently the available energy while the search for safe, environmentally sound and sustainable energy sources continues.
7.0 ENVIRONMENTAL POLLUTION AND MANAGEMENT
Environmental pollution is one of the main dangers that spell ecological uncertainty. In recent years the phenomenon of pollution has featured both locally and internationally because it poses great danger to the environment. Natural processes are no longer able to cope with the type of wastes produced by human activities. The major problem is that the rate of waste production is so high that even the available waste management technologies cannot cope with the large volumes produced. Owing to the chemical, physical and biological complexity of the wastes produced, the natural processes of decomposition or biodegradation can no longer treat them effectively. All pollutants have an adverse effect on the quality of the environment. Pollution is the price of unsustainable development processes and can create heavy environmental costs; it is very costly in environmental terms because some of the damage it causes is irreversible. The challenge facing many countries is achieving effective pollution control. Issues related to chemical pollution that need to be addressed in integrated environmental education include pollution from industrial chemicals; air pollution and acidification; pollution from agricultural activities; eutrophication of water bodies; oil pollution; and environmental pollution due to solid waste disposal. There is need to formulate a strategy to manage environmental pollution due to chemical pollutants.
8.0 SUSTAINABLE ENVIRONMENTAL PLANNING AND MANAGEMENT
The rationale of environmental management is to ensure the long-term productivity of environmental resources so as to sustain development, while ensuring that in the process of development the interrelationships and interdependence within nature are maintained. Hence sustainable environmental planning and management is defined as the measures and controls undertaken at individual, community, national and international levels and directed at environmental conservation, so as to ensure that natural resources are allocated and utilised in a manner that will improve the quality of life for present and future generations. In the context of sustainable development, all environmental assets, natural and man-made, comprise the capital. Conservation of natural resources does not imply a lack of economic growth. As resources are utilised, there is need to reduce the conflict between environment and development. If present and future generations are to be assured of quality living, then development must be sustained by the environment and must, in turn, not destroy environmental resources. Hence sustainable development and environmental management should be addressed as one integrated system. The creation of environmental awareness is essential for effective environmental management if the goal of sustainable development is to be achieved; this is based on the premise that environmental problems require cross-sectoral and interdisciplinary solutions, hence the need for an integrated approach. There is need for a global strategy for integrated
environmental education and training. The information and opportunities provided by integrated environmental education will make people more knowledgeable about the environment in general and promote awareness of basic environmental principles. The development process should be measured in terms of its impact on the environment. This should be done before each development project is implemented, through a pre-project evaluation procedure known as Environmental Impact Assessment (EIA). Environmental Impact Assessment is an important planning tool for sustainable environmental management.

9.0 CONCLUSION
From the foregoing discussion, it can be concluded that:
(i) Knowledge of integrated environmental education is a necessity for sustainable environmental management.
(ii) The natural resource base should be used in a sustainable manner.
(iii) Communities should participate in environmental management.
(iv) There is need for governments to formulate sustainable environmental strategies.
(v) The world's natural resources must be managed, protected and conserved to meet the needs of present and future generations.
(vi) Advancement in engineering and technology should not be at the expense of the quality of the environment.
(vii) In the modern world, integrated environmental education is a necessity for professionals in the fields of architecture and building services, chemical engineering, civil engineering, electrical and electronics engineering, mechanical engineering and geomatics.
(viii) Geographical information systems and remote sensing can be used as decision tools in environmental planning and management.
(ix) The environmental aspects discussed in this paper can form a basis for integrated environmental education.
MAPPING WATER SUPPLY COVERAGE: A CASE STUDY FROM LAKE KIYANJA, MASINDI DISTRICT, UGANDA A. Quin, Department of Land and Water Resources Engineering, Royal Institute of Technology, Sweden
ABSTRACT In this paper the current methods of determining water supply coverage in Uganda are investigated and a new method is suggested. The Lake Kiyanja watershed in Masindi District is used as a study area. Known water sources are mapped using the Geographic Information System software, ArcGIS. This map is combined with a map of the local population distribution in order to produce a map of water supply coverage. The goal is to produce a map suitable for a Water Resources Engineering and Management Decision Support System. Such a system will enable water-supply planners to make appropriate decisions in order to improve a given population’s access to safe water supplies, effectively co-ordinate future development plans, and prepare for potential threats that may affect those water supplies.
Keywords: Rural Water Supply; Uganda; Water Supply Coverage; Geographic Information Systems; Decision Support Systems.
1.0 INTRODUCTION
With a population of over 27 million, an annual population growth rate of about 3.3%, and a population density of around 135 people/km2 (Ugandan Bureau of Statistics, 2006), the strain on Uganda's water resources is increasing. At the same time a substantial proportion of the population lacks access to improved water supplies: the percentage of the population with access to potable water (the water supply coverage) lies around 58% (Tindimugaya, 2004). In 1990 UNICEF & WHO initiated a joint monitoring programme to monitor water supply coverage worldwide. In a survey completed in 2002, the water supply coverage in Uganda was estimated to be 56% (UNICEF & WHO, 2004a). These percentages fall short of the UN Millennium Development Goal of 75% water supply coverage by the year 2015. Overall, Sub-Saharan Africa is lagging behind in the effort to reach the Millennium Development Goals (UNICEF & WHO, 2004b).
Furthermore, water resources in Uganda are, in general, under threat. Problems include unsustainable extraction rates from water sources, lack of sewage treatment systems, poorly developed dumping sites and uncontrolled dumping, and soil erosion leading to siltation of surface water, among other issues. Each of these problems is further exacerbated by population growth in both urban and rural areas. In this paper the methods used in Uganda for determining the percentage of the population
with access to potable water (the water supply coverage) are reviewed and compared with a new method presented in this paper. This method has been developed using the Geographic Information System software, ArcGIS. The Lake Kiyanja catchment in Masindi District is used as a study area. Within this catchment the known water resources are mapped, and are then combined with a map of the local population distribution to produce a map of the water supply coverage within the study area. From this combination, a more accurate value for the water supply coverage can be estimated. The resulting map can also be used to plan the development of new water sources. This work has been undertaken as part of a joint research project between the Water Resources Group at the Faculty of Technology, Makerere University, Kampala, Uganda and the Department of Land and Water Resources at the Royal Institute of Technology, Stockholm, Sweden. The goal of this research project is to develop an Integrated Water Resources Management framework for a study area in Uganda, and to test its suitability as a method of managing local water resources. Ultimately this should lead to the sustainable use of the water resources within a given watershed.

2.0 THE CASE STUDY AREA: LAKE KIYANJA
The study area has been selected as part of a research project, as described in the introduction, which is looking into applying Integrated Water Resources Management methods at local scales. Thus the study area is perhaps not ideal for introducing a new method of mapping water resources (which is typically done at District level by the Directorate of Water Development). However, the smaller scale has allowed data collected from various sources to be more easily corroborated in the field.
The Lake Kiyanja catchment lies roughly in the centre of Masindi District (Fig. 1). The catchment, with an area of approximately 345 km2, is characterised by low, rolling hills interspersed with wetlands in the northern parts, and more prominent hills in the south. Much of the catchment is settled. Approximately two-thirds of Masindi Town lies within the catchment, in the south-west. There are also many small villages and settlements lying within the catchment, around which small-scale farming takes place.

Figure 1: The Lake Kiyanja catchment. The inset map shows the location of the catchment in Masindi District.
Two major geological formations lie within the catchment. These are the granulite-gneissic complex (Taylor & Howard, 1998), which lies in the northern parts of the catchment, and the Bunyoro-Kyoga Series, lying in the south of the catchment. The Bunyoro-Kyoga Series is a Precambrian cover formation, composed of shales, arkoses, quartzites, phyllites, amphibolites and even some tillite-like rocks (Department of Surveys and Mapping, 1995; We Consult, 2002). Elements of this formation form the hills which can be seen in the south of the catchment, and act as a natural dam for the lake (Fig. 1). The depth of the weathered material overlying the bedrock (the regolith) averages 41 m (We Consult, 2002). The majority of the soils in the area are classified as loams and laterites. These soils are mainly derived from phyllites, gneisses and granites, depending on the source rock. Generally, a layer of loam lies above the laterite. The groundwater flow system is typical of that within crystalline basement rock, with flow occurring in both the regolith and fractures in the basement rock. In the past it was assumed that groundwater flow was greatest in the fractured bedrock, but recent work (Taylor & Howard, 1998; Tindimugaya, 1995) has shown that the primary flow of groundwater occurs within the regolith. The hydraulic conductivity is in the region of 0.5-5 m/day (Tindimugaya, 1995). Locals take water from protected springs, shallow wells, borehole wells and even streams and wetlands. Only the first three are categorised as "safe" water sources, but this may not necessarily be the case if the source is polluted. Recently a pumping station was built at Lake Kiyanja in order to supply Masindi Town with a piped water supply.
3.0 DETERMINING WATER SUPPLY COVERAGE: CURRENT METHODS
In Uganda two methods have been used to determine water supply coverage. The Joint Monitoring Programme (JMP) for Water Supply and Sanitation, initiated by UNICEF & WHO in 1990, has conducted world-wide household surveys. The Directorate of Water Development (DWD) calculates annual values based on the number of water sources.

3.1 JMP Methodology
UNICEF & WHO (2004a), in their Joint Monitoring Programme (JMP), presented water supply coverage statistics from a world-wide survey, which was completed in 2002. The water supply coverage in Uganda was estimated to be 56%. These results were obtained by conducting national household surveys using a sampling method in which clusters were chosen to represent a country as a whole (UNICEF & WHO, 2000). Only access to "improved" water supplies was considered, as presented in Table 1. "Access" is roughly defined as the possibility of obtaining at least 20 litres per person per day from a source not further than one kilometre away from the household.

Table 1: Water supply categories, as defined by UNICEF & WHO (2005).
Improved: piped water into dwelling, yard or plot; public tap/standpipe; tubewell/borehole; protected dug well.
Non-improved: unprotected dug well; unprotected spring; vendor-provided water; tanker truck water; surface water (e.g. river, stream, dam, lake, pond).
Although a water source may be classified as "improved", it might not be adequate in terms of quantity, or safe in terms of quality (UNICEF & WHO, 2000).
3.2 DWD Methodology
Two departments at the Directorate of Water Development (DWD) are responsible for calculating water supply coverage: the Urban Water Supply and Sanitation Department and the Rural Water Supply and Sanitation Department. In Uganda, urban areas are defined as gazetted cities, municipalities and town councils (Ugandan Bureau of Statistics, 2002). The rules used for calculating water coverage in urban and rural areas are slightly different. In both cases water sources that do not meet the Ugandan Standards for Water Quality should not be included in the coverage calculations.

Urban calculations (Directorate of Water Development, 2004):
Assumptions:
- A typical household contains 8 people
- A yard connection serves 4 households
- A public stand post serves a maximum of 300 people
- Hand-pumped wells serve 300 people
- Protected (or improved) springs serve 300 people per sprout

Thus:
Population served by household taps, yard taps and public stands = (number of household connections × 8 people) + (number of yard connections × 4 households × 8 people) + (number of stand posts × 300 people)
Population served by hand pumps = number of pumps × 300
Population served by protected springs = number of protected springs × 300 × number of sprouts
Thus, for a given urban area:
Water supply coverage = ((population served by household taps, yard taps and public stands + population served by hand pumps + population served by protected springs) × 100) / total population

Rural calculations (Directorate of Water Development, 2005):
Assumptions:
- Boreholes and shallow wells serve 300 people
- Protected springs serve 150 people

Thus, for a given rural area:
Water supply coverage = (((number of boreholes and shallow wells × 300 people) + (number of protected springs × 150 people)) × 100) / total population
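As a compact illustration, the urban and rural rules above can be written as two small functions. The sketch below is only a rendering of the stated formulas; the function names and example source counts are hypothetical, with the counts chosen so that the served population matches the 18,300 reported later in Table 2.

def urban_coverage(household_conns, yard_conns, stand_posts, hand_pumps,
                   protected_springs, sprouts_per_spring, total_population):
    # DWD urban rule: taps, stand posts, hand pumps and protected springs
    served = (household_conns * 8
              + yard_conns * 4 * 8
              + stand_posts * 300
              + hand_pumps * 300
              + protected_springs * 300 * sprouts_per_spring)
    return 100.0 * served / total_population

def rural_coverage(boreholes_and_shallow_wells, protected_springs, total_population):
    # DWD rural rule: 300 people per borehole/shallow well, 150 per protected spring
    served = boreholes_and_shallow_wells * 300 + protected_springs * 150
    return 100.0 * served / total_population

# Hypothetical rural example: 55 boreholes/shallow wells and 12 protected springs
# for a population of 67,535 (the catchment total reported in Table 2)
print(round(rural_coverage(55, 12, 67535), 1))   # about 27 % coverage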
It should be noted that the Water Resources Department publishes Water Supply Coverage Maps that depict the percentage of the population supplied on a parish-to-parish
basis, and Water Service Area and Population Density Maps, where water source service areas are overlain on a map showing the population density per parish (DWD 2005). Also, the Rural Water Supply and Sanitation Department (DWD 2002) calculated "LC1 service coverage"- the percentage of villages in a District with a water supply.
4.0 WATER SUPPLY COVERAGE MAPPING
The method proposed here, although requiring extra work, offers the possibility of more realistic water coverage calculations and gives the water resources planner a visual overview of the local water supply coverage. In summary, water source data collected by the local DWD office have been obtained and mapped. Service areas (the potential area supplied by a given water source) have been plotted. This map has been combined with a map of village population densities, produced by drawing polygons representing the villages and assigning each polygon a population density. The resulting map shows the populated areas that lie within the service areas of the water sources, and can be used to determine the water supply coverage. ArcGIS 9.1 was used for this work.

4.1 Preparing the Population Density Map
The population density map (Fig. 2) has been prepared using LC1 population data, LC1 boundary maps and topographical maps. "LC1" is a term in Uganda for the smallest political unit, Local Council Level 1. Typically the LC1 is composed of a village and its surrounding land. Although LC1 population data were collected by the Ugandan Bureau of Statistics in the 2002 census, they have not yet been compiled. Thus LC1 population data from the 1991 census have been used to estimate 2006 LC1 populations, for example by projecting forward at the national growth rate as sketched below.
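As a simple illustration of that projection step, the sketch below grows a 1991 LC1 population to 2006 using compound growth at the national rate of about 3.3% quoted in the introduction; the rate, the example population and the function name are assumptions for illustration only.

def project_population(pop_1991, years=15, annual_growth=0.033):
    # Compound growth: P(t) = P0 * (1 + r)^t
    return pop_1991 * (1.0 + annual_growth) ** years

# Hypothetical LC1 with 850 people in the 1991 census
print(int(round(project_population(850))))   # roughly 1380 people in 2006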
Figure 2: Map of village population densities.
Figure 3: Flow chart illustrating the preparation of the population density map.

4.2 Preparing the Water Sources Service Area Map
Water source data have been obtained from the DWD office in Masindi. The dataset included: Easting and Northing values in the co-ordinate system used in Uganda, required for plotting the locations of the sources; Source Name and Source Number, needed to identify the sources; Operational Status, used to identify which sources were working; and Source Type, required for determining the number of people supplied by the source.
Two intermediate maps need to be created before the map of the water sources service area can be made (Fig. 4). These are a map of circular polygons and a map of Thiessen polygons. The circular polygons map is comprised of overlapping circles which represent the area the water sources supply. The circles have their centre at the water source and a radius of 1.5 km (the greatest distance a household should be from a source). In order to divide these overlapping circles between their respective water sources, a map of Thiessen polygons is required. Each Thiessen polygon marks the set of locations that lie closest to a given water source; this attribute simulates the assumption that a household's inhabitants collect water at the nearest water source. To create the map of the water sources service area, the circular polygons map is combined with the Thiessen polygons map.

Figure 4: Map of water source service areas.
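The combination of 1.5 km circles with Thiessen (Voronoi) polygons amounts to assigning every location to its nearest source, provided that source is within 1.5 km. The Python sketch below reproduces that logic on a coarse grid of points rather than with true GIS polygons; the coordinates and grid spacing are invented for illustration, and a real workflow would use the ArcGIS buffer, Thiessen and intersect tools described in the text.

import math

# Hypothetical water source coordinates (metres, projected system)
sources = {"borehole_A": (1000.0, 2000.0), "spring_B": (3500.0, 1200.0)}
MAX_DIST = 1500.0   # 1.5 km service radius

def serving_source(x, y):
    # Return the nearest source if it lies within 1.5 km, else None
    name, dist = min(((n, math.hypot(x - sx, y - sy)) for n, (sx, sy) in sources.items()),
                     key=lambda item: item[1])
    return name if dist <= MAX_DIST else None

# Classify a coarse grid of points into service areas
grid = [(x, y) for x in range(0, 5000, 250) for y in range(0, 3000, 250)]
areas = {}
for x, y in grid:
    src = serving_source(x, y)
    if src:
        areas[src] = areas.get(src, 0) + 1
print(areas)   # number of grid cells served per source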
Figure 5: Flow chart illustrating the preparation of the water sources service area map (plotting of water source data; creation of circular polygons; creation of Thiessen polygons; combination into the water sources service area map).
4.3 Calculating the Water Supply Coverage
First, the populated areas that are supplied by a water source are determined by combining the population density map with the map of the water sources service area. The population within each polygon is estimated by calculating its area and multiplying this area by the population density. Next, the population supplied by each water source is calculated. If the population supplied is greater than 300 for a shallow well or borehole, or 150 for a protected spring, it is set at the respective maximum. The total population supplied is the sum of the population supplied by each source. Dividing this result by the total population and multiplying by 100 gives the water supply coverage (Fig. 6).
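A compact sketch of that final overlay calculation is given below. Each intersected polygon is represented simply by its area, population density and nearest source; the record structure, the example numbers and the function name are assumptions, while the 300/150-person caps are those used by the DWD.

CAPS = {"borehole": 300, "shallow_well": 300, "protected_spring": 150}

def water_supply_coverage(polygons, total_population):
    # polygons: list of dicts with area_km2, density_per_km2, source_id, source_type
    supplied_by_source = {}
    for p in polygons:
        people = p["area_km2"] * p["density_per_km2"]
        entry = supplied_by_source.setdefault(p["source_id"], {"type": p["source_type"], "people": 0.0})
        entry["people"] += people
    total_supplied = sum(min(s["people"], CAPS[s["type"]]) for s in supplied_by_source.values())
    return 100.0 * total_supplied / total_population

# Hypothetical example: two intersected polygons served by one borehole
polys = [{"area_km2": 0.8, "density_per_km2": 400, "source_id": "BH1", "source_type": "borehole"},
         {"area_km2": 0.5, "density_per_km2": 250, "source_id": "BH1", "source_type": "borehole"}]
print(round(water_supply_coverage(polys, 1500), 1))   # capped at 300 people -> 20.0 %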
Figure 6: Flow chart of the final steps required to calculate the water supply coverage.

5.0 RESULTS AND DISCUSSION
The results calculated using the DWD method are presented alongside the results obtained from the mapping method in Table 2.

Table 2: Results from the DWD and mapping methods
Total population within the catchment: 67,535
Population supplied (DWD method*): 18,300
Population supplied (mapping method): 17,688
Population within 1.5 km of a source: 57,609
Water supply coverage (DWD method*): 27%
Water supply coverage (mapping method): 26%
Percentage of population within 1.5 km of a source: 85%
*Calculated using the DWD guidelines for rural areas. All areas were assumed to be rural for this comparison.

The population supplied and the water supply coverage for both the DWD and mapping methods are similar. This can be explained by the fact that the populated areas covered by the map of the water source service areas are densely populated, so the maximum of 300 people for shallow wells and boreholes and 150 people for protected springs is reached in all but a few cases; only a few sources supply fewer people than these maximum values. Also interesting to note is the large proportion of people (85%) living within 1.5 km of a water source, as revealed by the mapping method. It is thus possible that some sources may be overexploited, and that a number of people are left without access to water from
an "improved" source. However, it might not be a problem if these water sources are capable of providing more people than the maximum values of 300 and 150 people. This possibility strongly suggests that these maximum values should be carefully reviewed to see if they are too conservative. An improvement might be to set new maximums based on the yield of each individual water source. Obviously, the time spent waiting to collect water at a source should also be incorporated when estimating new maximums. Using this new methodology water supply coverage results may both increase dramatically and be more realistic. Another advantage of the water supply coverage map is that it offers water resource planners an overview of the areas that lie within 1.5 km of a source, and those that do not (Fig. 7). Combining this with information on the water sources that lie within densely populated areas (and are likely to be heavily exploited), the planner can identify which populated areas are in need of new or upgraded water supplies. Masindi Town, and some surrounding rural settlements, are connected to a gravity-fed piped water supply (using water pumped from Lake Kiyanja). The locations of the grav" ity-fed supply taps are unknown, and were therefore excluded from the water supply coverage calculations presented in this report. Thus the results do not reflect the actual situation. This does not affect the aim of the report, which was to present the concept of mapping water supply coverage. It goes without Legend saying that up-to-date infor..................... ~ ~ B~'r~da~r'~' mation is essential in order to estimate, with reasonably accuracy, the water supply Figure 7." Map showing the populated areas lying coverage. within 1.5km of a water source, and those that do not.
6.0 CONCLUSION
The procedure for mapping water supply coverage presented in this paper offers the possibility of more realistic estimates. It is particularly useful in identifying populated areas which lie at a distance greater than 1.5 km from a water source, and can also be used to identify water sources which are potentially overexploited. This enables water supply planners to co-ordinate effectively the construction or upgrading of water sources. Improvements to this procedure can be made, for example: improved mapping of the population distribution; inclusion of data on pollutants, to help determine whether a water source is safe; and a method for setting unique values for the maximum number of people supplied by a water source, based on yield results from that source. Incorporation of this
method into a Decision Support System would further improve the effectiveness of managing the construction and upgrading of water sources.
REFERENCES
Department of Surveys and Mapping (1995) Geological Map of Uganda. Department of Surveys and Mapping, Governmental Department, Uganda.
Directorate of Water Development (2005) Nakagonsola District Groundwater Report. Directorate of Water Development, Governmental Department, Uganda.
Directorate of Water Development (2004) Small Towns Water and Sanitation. Unpublished report, Directorate of Water Development, Governmental Department, Uganda.
Directorate of Water Development (2002) The National Rural Water Supply Atlas. Directorate of Water Development, Governmental Department, Uganda.
Taylor, R.G. & Howard, K.W.F. (1998) Post-Palaeozoic evolution of weathered landsurfaces in Uganda by tectonically controlled deep weathering and stripping. Geomorphology 25: 173-192.
Tindimugaya, C. (2004) Groundwater mapping and its implications for rural water supply coverage in Uganda. 30th WEDC International Conference, Vientiane, Lao PDR.
Tindimugaya, C. (1995) Regolith importance in groundwater development. 21st WEDC Conference, Sustainability of Water and Sanitation Systems, Kampala, Uganda.
Ugandan Bureau of Statistics (2002) 2002 Uganda Population and Housing Census.
Ugandan Bureau of Statistics (2006) Website (accessed February 2006).
UNICEF & WHO (2005) Water for Life: Making it Happen (accessed February 2006).
UNICEF & WHO (2000) Global Water Supply and Sanitation Assessment 2000 (accessed February 2006).
UNICEF & WHO (2004b) Meeting the MDG Drinking Water and Sanitation Report.
PHOSPHORUS SORPTION BEHAVIOURS AND PROPERTIES OF MBEYA-PUMICE A.S. Mahenge, T.S.A. Mbwette and K.N. Njau, WSP & Constructed Wetlands Research Project, Prospective Campus College of Engineering & Technology, University of Dar es Salaam.
ABSTRACT
Mbeya-Pumice is a potential filter medium in constructed wetlands. The substrate is light, porous and volcanic, found near the slopes of Mount Rungwe and collected from a village known as Mporoto about 20 km from Mbeya town. The study of Mbeya-Pumice's phosphorus sorption behaviours and properties was carried out at laboratory scale, whereby 1-2 mm, 2-4 mm and 4-8 mm grains were tested using batch experiments. The results show that Mbeya-Pumice has a high phosphorus sorption capacity of 2.2 g P/kg. For the 1-2 mm and 4-8 mm grains, about 50% of the phosphorus sorption in Mbeya-Pumice occurs in the first 17 and 18 hours, respectively. Compared with the 4-8 mm and 2-4 mm grains, temperature did not significantly influence phosphorus sorption on the 1-2 mm grain. Mbeya-Pumice has high potential for phosphorus removal from wastewaters and can be recommended for use as a substrate in constructed wetlands to remove phosphorus.

Keywords: Phosphorus sorption; Brunauer Emmett and Teller; Sorption Equilibrium; Sorption Models; Sorption Isotherms; Behaviours and Properties.
1.0 ABBREVIATIONS
Al2O3 - Aluminum Oxide
CaCl2 - Calcium Chloride
CaO - Calcium Oxide
Fe2O3 - Iron Oxide
Fig - Figure
K2O - Potassium Oxide
KH2PO4 - Potassium Hydrogen Phosphate
LOI - Loss On Ignition
MgO - Magnesium Oxide
MnO - Manganese Oxide
Na2O - Sodium Oxide
O - Oxygen
P - Phosphorus
P2O5 - Phosphorus Oxide
PO4 - Phosphate Ion
rpm - Revolutions Per Minute
SiO2 - Silicon Oxide
WSP - Waste Stabilization Ponds
2.0 INTRODUCTION
Phosphorus (P) is an essential macronutrient in all organisms. However, loadings of phosphorus in effluent discharges are detrimental to the quality of receiving surface water bodies. Phosphorus concentrations in excess of allowable levels have been associated with eutrophication of lakes and rivers, which leads to algal blooms. Eutrophication is a process whereby water bodies become over-enriched with nutrients such as nitrogen and phosphorus from sewage disposal, run-off of agricultural fertilizers, and similar sources. This stimulates excessive growth of plants (algae, attached algae or periphyton, and nuisance weeds), often called an algal bloom, which reduces dissolved oxygen concentrations in the water when dead plant material decomposes. This is detrimental to aquatic life. For this reason, the treatment of wastewater is necessary to correct its characteristics in such a way that the use or final disposal of treated effluent can take place without causing an adverse impact on the ecosystem of the receiving water bodies (Headley et al., 2001). Subsequently, many technologies in association with management systems have been devised both to prevent and to mitigate the loss of phosphorus, which can cause eutrophication of surface waters. One approach, a land-based ecologically engineered method, is the use of wetlands. Wetland ecosystems, both constructed and natural, are becoming important mitigating measures in water resource management around the world, as they have the ability to cycle and retain nutrients such as phosphorus (Braskerud, 2002). Phosphorus removal in wetlands mainly occurs by sorption and plant uptake. Phosphorus sorption in wetland substrates occurs by adsorption and precipitation. Phosphorus retention in the substrate via adsorption and precipitation is controlled by properties of the substrate (Fe-, Mg-, Al- and Ca-minerals, specific surface area and porosity) and by the physico-chemical environment (pH, redox potential, dissolved ions) (Grüneberg et al., 2001). Wetland substrates (e.g. clay, soil, gravel) for wastewater treatment are normally not considered efficient for continual removal of phosphorus; phosphorus removal above 20-30% in long-term monitoring (more than 5 years) is seldom reported. To achieve high phosphorus removal, various media have been tested, such as granite, limestone, metal oxide-rich natural sands, mining industry by-products and slags (Salnacke, 1999). Some of them have a very high phosphorus sorption capacity. The capacity for phosphorus sorption is finite in all media, and longevity with respect to phosphorus sorption depends on the loading rates. The design for phosphorus removal in substrates with high phosphorus sorption is quite different from that for other water quality parameters, the intention being to exhaust a short-term capacity, regenerate the filter and repeat the cycle (Braskerud, 2002). Mbeya-Pumice is a substrate that is rich in minerals that facilitate sorption of phosphorus. However, little is known about its behaviours and properties in sorption of phosphorus and its potential for use as a constructed wetland substrate.
3.0 MATERIALS AND METHODS
3.1 Materials
Substrate Origin
Mbeya-Pumice is the substrate used in this study. It is a porous volcanic substrate found near the slopes of Mount Rungwe, collected from a village known as Mporoto about 20 km from Mbeya town. The substrate is light in terms of both weight and colour.

Substrate Chemical Composition, Porosity and Surface Area
The chemical composition of the Mbeya-Pumice substrate was analyzed by the X-ray fluorescence (XRF) technique. Crushed soil particles of less than 420 µm were dried in an oven at 110°C to determine moisture content. A known weight of the dried sample was used to determine loss on ignition at 1,100°C. The calcined sample was mixed with sodium borohydrate at a ratio of 1:9 w/w, respectively. The mixture was melted to obtain the specimen disc for the XRF analysis. The porosity and specific surface area of the substrate were measured by the Brunauer Emmett and Teller (BET) standard method for particle sizes ranging from 125 µm
This section describes the methodologies used in conducting relevant experiments.
Experimental Set Up for the Study of Phosphorus Sorption Behaviours
The batch experimental set up for the study of the phosphorus sorption behaviours and properties of the substrates is as shown in Figure 1.
Fig 1: Batch Experimental Set Up

Phosphorus sorption experiments were carried out in batches at ambient temperature with a slow rotating shaker (25 rpm) to avoid material alteration. Only one experiment was carried out for each sorption behaviour, and four samples from each experiment were analyzed.
The analysis of the phosphorus sorption behaviours of the substrate involved: determination of the phosphorus sorption isotherms; determination of the phosphorus sorption capacity of the substrate; determination of phosphorus sorption as a function of time; and determination of the effect of temperature on phosphorus sorption.
Phosphorus Sorption Isotherms Experiment
To determine phosphorus sorption, twelve 8 g samples of each substrate grain size (1-2 mm, 2-4 mm and 4-8 mm) were suspended in 200 ml of solution containing different phosphorus concentrations (0, 2, 4, 8, 16, 20, 40, 80, 160, 320, 600 and 1200 mg P/L). KH2PO4 was used as the background solution. The solutions were placed in 300 ml glass bottles. The bottles with the incubated solutions were placed on a 25-rpm HS 500 digital shaker (Labo Tech, Holland) for twenty-four hours. The experiment was kept at pH 8 and a temperature of 30°C. The solutions were filtered with a 0.45 µm filter membrane and the filtrates analyzed for phosphorus.
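The amount sorbed in such a batch test is normally obtained from a simple mass balance between the initial and equilibrium phosphorus concentrations. The sketch below assumes that standard balance together with the 8 g sample and 200 ml solution volume quoted above; the example concentrations and the function name are hypothetical.

def sorbed_p_mg_per_kg(c0_mg_per_l, ce_mg_per_l, volume_l=0.2, mass_kg=0.008):
    # Mass balance: Q = (C0 - Ce) * V / m
    return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_kg

# Hypothetical: initial 20 mg P/L reduced to 12 mg P/L at equilibrium
print(sorbed_p_mg_per_kg(20.0, 12.0))   # 200.0 mg P/kg of substrate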
Phosphorus Sorption as a Function of Time Experiment
To study phosphorus sorption as a function of time, four 300 ml glass bottles were employed. The bottles were filled with 200 ml of solution containing a phosphorus concentration of 20 mg P/L and 8 g of substrate. In this experiment the grain sizes used were 1-2 mm and 4-8 mm. The bottles with the incubated solutions were then placed on a slow rotating shaker (25 rpm) and the contact time was varied from 1 to 300 hours. The experiment was kept at pH 8 and a temperature of 30°C. The solutions were filtered with a 0.45 µm filter membrane and the filtrates analyzed for phosphorus.
Influence of Temperature on Phosphorus Sorption Experiment
To investigate the effect of temperature on phosphorus sorption, twenty-four 300 ml glass bottles were employed. One 8 g sample of each substrate grain size (1-2 mm, 2-4 mm and 4-8 mm) was suspended in 200 ml of solution with a phosphorus concentration of 30 mg P/L. The experiment was conducted at 35°C in a water bath (Julabo type, ECO Temp TW8, Germany), and at 21°C, 40°C and 45°C in an incubator (WTB Binder type, model 720, Germany), for twenty-four hours. The experiment was kept at pH 8. The solutions were filtered with a 0.45 µm filter membrane and the filtrates analyzed for phosphorus.

Statistical Validity of Data
For each experiment, four replicates were used. The data used in all the discussion are the mean values of the four replicates. ANOVA and regression analysis were performed. For the ANOVA test, a single test was performed to see if there are significant differences between the three grain sizes of the Mbeya-Pumice substrate (1-2 mm, 2-4 mm and 4-8 mm) at the 95% confidence interval (0.05 probability level). Linear regression analysis was used to determine the relationships between the amount of phosphorus sorbed by the substrate and the phosphorus concentration, between the amount sorbed and temperature, and between the amount sorbed and time.
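The one-way ANOVA described above can be reproduced with standard statistical routines, for instance SciPy. The replicate values in the sketch below are invented placeholders, not the study's data; only the test itself follows the procedure in the text.

from scipy.stats import f_oneway

# Hypothetical sorbed-P replicates (mg P/kg) for the three grain sizes
grain_1_2 = [205, 198, 210, 202]
grain_2_4 = [150, 143, 156, 149]
grain_4_8 = [110, 118, 105, 112]

f_stat, p_value = f_oneway(grain_1_2, grain_2_4, grain_4_8)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# The calculated F is compared with the tabulated F at the 0.05 level,
# as done in Section 4.2 of the paper.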
4.0 RESULTS AND DISCUSSION
4.1 Physical and Chemical Properties of the Substrate
The physical properties and chemical composition of the substrate are presented in Table 1 and Table 2, respectively.
Table 1: Physical properties of the Mbeya-Pumice substrate
Material name: Mbeya-Pumice
Average particle diameter (mm): 1
Specific surface area (mm2/g): 15.25
Porosity: 69%
Particle density (kg/m3): 2246
pH: 8.5

As presented in Table 1, the analysed physical properties of the substrate were the specific surface area, porosity, pH and particle density. The specific surface area and porosity are an indirect measure of the surface available for the adsorption of phosphorus, although this also depends on the density of the active sites on the material.

Table 2: Chemical composition of the Mbeya-Pumice substrate, given as percentages by weight of Al2O3, CaO, MgO, Fe2O3, MnO, Na2O, SiO2, P2O5, K2O, TiO2 and LOI.
From Table 2, the analysis shows that the Mbeya-Pumice substrate has high levels of Al and Si. These high levels of Al and Si are a positive indicator of its potential for phosphorus precipitation.
Phosphorus Sorption Isotherms for the Mbeya-Pumice Substrate
The phosphorus sorption isotherms obtained for the different Mbeya-Pumice grain sizes are presented in Figure 2. Above 300 mg P/L the sorption curves showed no significant change, indicating that the maximum sorption capacity of the substrate had been reached. According to this analysis, the maximum sorption capacities of the substrate were 2240 mg P/kg, 1700 mg P/kg and 1500 mg P/kg for the 1-2 mm, 2-4 mm and 4-8 mm grains, respectively.
Fig 2: The Phosphorus Sorption Isotherms for the Mbeya-Pumice Substrate

4.2 Statistical Test for Significant Differences in Phosphorus Sorption between Mbeya-Pumice Grains
The calculated F value (F = 11.23) exceeds the tabulated F value (F = 4.3) at the 95% confidence interval, so there is a significant difference in phosphorus sorption between the Mbeya-Pumice grains (i.e. 1-2 mm, 2-4 mm and 4-8 mm).

4.3 Freundlich Model
The Freundlich model is useful for describing physical sorption and can represent non-linear sorption of phosphorus from solution. The model equation is exponential and is represented by equation (1) (Volesky, 1999):
Q = K C^(1/n)    (1)
where Q = the amount sorbed (mg/g), C = the equilibrium concentration (mg/L), K = a dimensionless Freundlich constant indicating the sorption capacity, and n = a dimensionless Freundlich constant indicating the sorption intensity. The sorption (Q) to concentration (C) relationship given by the Freundlich model is presented in Figure 3 and equations 2-4.
Fig 3: Fitting Mbeya-Pumice's Sorption Data with the Freundlich Model (plot of Q against Log(C) for the 1-2 mm, 2-4 mm and 4-8 mm grain sizes)
1-2 mm: Q = 0.660 Log(C) + 1.451, where R² = 0.857    (2)
2-4 mm: Q = 0.747 Log(C) + 1.243, where R² = 0.906    (3)
4-8 mm: Q = 0.817 Log(C) + 1.019, where R² = 0.873    (4)
S o r p t i o n as a F u n c t i o n o f T i m e
Sampling during the first day and increasing the incubation time to 260 hours obtained a curve. Figure 4 presents the effect of incubation time on the sorbed phosphorus at different grain sizes of Mbeya-Pumice substrate. For each case concentration of 20 mg/1 of phosphorus was added. 250 ~= ~
!
200 |
-
•
~
--
1-2 mm grain size
150
~
100
. ~ ~ 4-8 mm grain size
50
0 0
50
100
150
200
250
300
Incubation time (hours)
Fig 4: The Phosphorus Sorption as a Function of Time for the Mbeya-Pumice Substrate

4.5 Michaelis-Menten Model
The Michaelis-Menten equation was adopted to express phosphorus sorption as a function of time. The model is represented by equation (5) (Zhu et al., 2002):

Q(t) = (Qmax × t) / (k + t)    (5)
where t = incubation time (hr), Q(t) = the amount of sorbed phosphorus (mg P/kg substrate) at time t, Qmax = the maximum phosphorus sorption at a given phosphorus loading (mg P/kg substrate), and k = a constant equal to the time at which 50% of the maximum sorption occurs. To determine the constants k and Qmax, the Lineweaver-Burk method of linearising substrate-sorption data was used. When the sorption activity follows the Michaelis-Menten model, a reciprocal plot of 1/Q versus 1/t is made. From this plot, the Y-intercept =
1/Qmax, the X-intercept = -1/k, and the slope = k/Qmax. The 1/Q versus 1/t relationships obtained by the Lineweaver-Burk method for the 1-2 mm and 4-8 mm grain sizes of the substrate are presented in Figure 5, while the Q versus t relationships given by the Michaelis-Menten model are presented in equations 6-7.
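The Lineweaver-Burk linearisation described above amounts to regressing 1/Q on 1/t and reading Qmax and k from the intercept and slope. A minimal sketch of that calculation is given below; the incubation times and sorbed amounts are invented placeholders rather than the measured series.

import numpy as np

# Hypothetical incubation times (h) and sorbed phosphorus (mg P/kg)
t = np.array([2.0, 5.0, 10.0, 24.0, 48.0, 96.0, 192.0])
Q = np.array([30.0, 62.0, 95.0, 140.0, 165.0, 183.0, 193.0])

slope, intercept = np.polyfit(1.0 / t, 1.0 / Q, 1)    # 1/Q = (k/Qmax)*(1/t) + 1/Qmax
q_max = 1.0 / intercept
k = slope * q_max
print(f"Qmax = {q_max:.1f} mg P/kg, k = {k:.1f} h")   # k is the time at which Q reaches Qmax/2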
Fig 5: Fitting Mbeya-Pumice's Sorption Data with the Lineweaver-Burk Analysis Method (reciprocal plot of 1/Q against 1/t for the 1-2 mm and 4-8 mm grain sizes)
4.6 Michaelis-Menten Model Equations for the Mbeya-Pumice Substrate
1-2 mm: Q(t) = (204.1 × t) / (11 + t), R² = 0.983    (6)
4-8 mm: Q(t) = (144.9 × t) / (12.5 + t), R² = 0.933    (7)
From equations 6-7, the highest phosphorus sorption of the two tested grain sizes, when the phosphorus concentration in the initial solution was 20 mg P/L, is 204.1 and 144.9 mg P/kg of Mbeya-Pumice for the 1-2 mm and 4-8 mm grains, respectively. For the 1-2 mm and 4-8 mm grains, 50% of the phosphorus sorption in the substrate occurs in the first 17 and 18 hours, respectively. It takes about 192 hours and 230 hours for the 1-2 mm and 4-8 mm grains, respectively, to reach the maximum sorption when the phosphorus concentration in the initial solution is 20 mg P/L.
4.7 Effect of Temperature on Phosphorus Sorption
The results of the effect of temperature on phosphorus sorption are presented in Table 3 and Figure 6.

Table 3: The effect of temperature on phosphorus sorption (mg P/kg) for the substrate
Temperature: 1-2 mm / 2-4 mm / 4-8 mm
21°C: 162.6 / 105.9 / 67.5
35°C: 174.7 / 131.2 / 105.9
40°C: 197.0 / 148.4 / 140.3
45°C: 217.2 / 180.8 / 176.7
Average decrease (%): 25 / 41 / 62

These results show that phosphorus sorption decreases as temperature decreases, and the effect of temperature increases with increasing grain size. With the treatment of 30 mg P/L in the original incubation solution, a temperature decrease from 45°C to 21°C for
the substrate led to a decrease in phosphorus sorption of 25%, 41% and 62% for the 1-2 mm, 2-4 mm and 4-8 mm grains, respectively. The plot of sorbed phosphorus versus temperature gives a linear relationship (Fig 6). High temperatures increase the activity of the metal ions and thus increase the rate of metal ion transfer.
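The linear relationship between sorbed phosphorus and temperature noted above can be checked with a simple least-squares fit. The sketch below uses the 1-2 mm values from Table 3; the function call pattern is illustrative only.

import numpy as np

temperature_c = np.array([21.0, 35.0, 40.0, 45.0])
sorbed_p = np.array([162.6, 174.7, 197.0, 217.2])      # mg P/kg, 1-2 mm grains (Table 3)

slope, intercept = np.polyfit(temperature_c, sorbed_p, 1)
print(f"sorbed P ~ {slope:.2f} * T + {intercept:.1f} mg P/kg")   # positive slope: sorption rises with temperature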
Fig 6: Effect of Temperature on Phosphorus Sorption for the Mbeya-Pumice Substrate

5.0 CONCLUSIONS
From the foregoing analysis and discussion, the following conclusions are made:
(i) The grain size of the tested Mbeya-Pumice can influence the sorption of phosphorus: the finer the grain size, the higher the phosphorus sorption. The substrate has a high sorption capacity.
(ii) Since phosphorus concentrations in sewage are normally in the range of 5-15 mg P/L (Metcalf and Eddy, 1998), a bed of Mbeya-Pumice can be used for phosphorus removal from sewage for a long duration without being overloaded.
(iii) From the phosphorus sorption as a function of time experiment, it takes a longer time for the coarse substrate and a shorter time for the fine substrate to reach sorption equilibrium. The Mbeya-Pumice substrate has a high phosphorus removal rate.
(iv) There was an effect of temperature on phosphorus sorption: temperature has a greater effect on the coarser sizes than on the finer sizes, and as temperature decreases the sorption of phosphorus decreases.
Based on the studies and tests carried out, the substrate has high potential for phosphorus removal. It can be recommended for use as a substrate in constructed wetlands to remove phosphorus.
REFERENCES
APHA, AWWA and WEF (1998). Standard Methods for the Examination of Water and Wastewater. American Public Health Association, American Water Works Association and Water Environment Federation, Washington D.C.
Braskerud, B.C. (2002). Factors affecting phosphorus retention in small constructed wetlands treating agricultural non-point source pollution. Ecological Engineering 19: 41-61.
Grüneberg, B. and Kern, J. (2001). Phosphorus retention capacity of iron-ore and blast furnace slag in surface flow constructed wetlands. Water Sci. Technology.
Headley, T.R., Huett, D.O. and Davison, L. (2001). The removal of nutrients from plant nursery irrigation runoff in subsurface horizontal-flow wetlands. Water Sci. Technology, Vol. 44, No. 11.
Kumar, V.K., Subanandam, K., Ramamurthi, V. and Sivanesan, S. (2004). GAC Sorption Process: Problems and Solutions. Department of Chemical Engineering, A.C. College of Technology, Anna University, India.
Metcalf and Eddy, Inc. (1998). Wastewater Engineering: Treatment, Disposal and Reuse, 3rd Ed., McGraw-Hill Inc., New York.
Njau, K.N., Minja, R.J.A. and Katima, J.H.Y. (2002). Pumice soil: a potential wetland substrate for treatment of domestic wastewater. Proceedings, 8th International Conference on Wetland Systems for Water Pollution Control, Eds. Mbwette, T.S.A., Katima, J.H.Y., Kayombo, S. and Pratap, H.B., Arusha, Tanzania, Vol. 1, pp 290-303.
Volesky, B. (1999). Evaluation of sorption performance. Biotechnol. Progress 11, pp 235-250.
Waste Stabilization and Constructed Wetland Research Group, University of Dar es Salaam, Tanzania. http://www.ucc.co.tz/Wetlands
Zhu, T., Maehlum, T., Jenssen, P.D. and Krogstad, T. (2002). Phosphorus sorption characteristics of Light Weight Aggregate. Proceedings, 8th International Conference on Wetland Systems for Water Pollution Control, Eds. Mbwette et al., Arusha, Tanzania, Vol. 1, pp 556-566.
PRELIMINARY INVESTIGATION OF LAKE VICTORIA GROUNDWATER SITUATION FROM ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA B. Mangeni and G. Ngirane-Katashaya, Department of Civil Engineering, Makerere University, Kampala, Uganda
ABSTRACT This study used the findings that lake temperatures from processed thermal infrared data can be used to identify possible inflow zones of groundwater into a lake. Spatial and temporal temperature anomalies are assumed to indicate groundwater inflow into a lake. National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer scenes of Lake Victoria catchment for different seasons of 2004 were acquired, processed and analyzed. The surface temperature maps of the lake produced from this data indicated two major seasonal patterns of surface temperature distribution compared to measured lake surface temperatures. These warm and cold season patterns are indicative of possible groundwater inflow into the lake. Ground truth studies were attempted to ascertain these inferences from infrared imagery without much success due to various limitations. It is however hoped that the findings of this preliminary groundwater assessment will serve as a starting point for the recommended more detailed Lake Victoria groundwater investigations including possible subsurface outflow. Critical evaluation of the lake water balance can only be made after conclusive studies on this often ignored water balance component are accomplished.
Keywords: Thermal infrared anomalies; surface temperature maps; warm and cool season patterns; groundwater inflow, ground truth; enlarged images
1.0 INTRODUCTION
Lake Victoria is the second largest fresh water lake in the world, with an average depth of 40 m and a surface area of about 69,000 km2 in a catchment of 263,000 km2. In spite of decades of water balance studies, consideration of its groundwater situation had never featured. This paper reports on the preliminary findings of the groundwater phenomena deduced from consideration and analysis of National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA AVHRR) infrared images. NOAA AVHRR infrared images were acquired and processed to produce surface temperature maps for Lake Victoria. The surface temperature distribution over the lake was then analysed for thermal anomalies, which are known to be indicative of possible groundwater inflow into the lake. Aerospace data was first applied in groundwater studies as early as 1973 in the Florida Plateau, Jamaica and the West Indies. These early studies dealt mainly with identifying
submarine springs that discharged into the sea from the surrounding continents. Kohout et al. (1979) carried out more detailed remote sensing studies for Jamaica and its coastal waters. In that study, more than 17,000 frames of color, infrared aerial photography and thermal infrared imagery of Jamaica, its coast and its waters were collected. The main ground investigation was at Discovery Bay on the Jamaican north-central coast where numerous submarine springs were known to occur. Simultaneously, a NASA (National Aeronautics and Space Administration) aircraft was used to gather satellite data followed by ground truth investigations. The study also included a one day reconnaissance of reported submarine springs in the Alligator Pond area on the South coast of Jamaica. The reconnaissance study had the objective of gathering data on temperature, salinity and discharge of submarine springs for extrapolation to much larger areas of remote sensing coverage. On the infrared imagery, where the submarine springs discharged primarily clear water from the karstic limestone orifices into the surrounding more turbid sea water at Discovery Bay, thermal anomalies appeared as dark blue zones. East of Montego Bay, another submarine discharge of groundwater was detected as a thermal anomaly by the infrared scanner on the NASA aircraft. Another area where thermal infrared imagery was used to establish the fact that groundwater was a major source of the water budget of the lakes was The Nebraska Sand Hills (Winter, 1986). Methodologies were developed to identify groundwater discharge zones into shallow lakes using Landsat thermal infrared imagery. These were complimented by ground-based methods that included direct water temperature measurements. The studies were based on the fact that there was a contrast between ground and surface water, the former being cooler during the warm season and warmer during cool seasons, Evgueni et al. (2003). Signatures of groundwater discharge could be detected by the Thematic Mapper (TM) on Landsat 4 and 5 or the Enhanced Thematic Mapper Plus (ETM+) on Landsat 7. Earlier attempts to apply thermal infrared remote sensing methods in Sand Hills included those of Rundquist et al. (1 985) who used airborne Thermal Infrared Multispectral Scanner (TIMS) to identify groundwater inflow into the shallow lakes. Banks et al. (1996) used TIMS to locate groundwater discharge and extent in coastal waters. Roseen et al. (2001) also applied airborne thermal imagery in combination with detailed piezometric mapping and aquifer characterization to quantify groundwater discharge into the Great Bay Estuary, New Hampshire and found good agreement between the two methods. Gosselin et al. (2000) used the Landsat Multispectral Scanner (MSS) data from 1972 to 1989 to study seasonal and annual lake fluctuations in Nebraska Sand Hills. Evgueni (2002) applied Remote sensing methods to delineate groundwater flow systems in the Western Sandhills, Nebraska. Evgueni’s study hypothesized that the spatial and temporal temperature anomalies could be used to indicate groundwater flow into Crescent, Blue, Island and Hackleberry lakes in Western Sandhills. Landsat images of the lakes acquired for different weather conditions from 1989 to 2002 were processed and analyzed. Distribution of uncorrected surface temperatures of the resulting Landsat infrared data indicated that each lake exhibited one or several zones with warm season patterns. 
Warm season patterns occurred when some cooler zones were detected during the warm season, and cold season patterns were recognized by the existence over the lake of warmer zones during the cool season. These zones, identified by black thermal anomalies, were not only consistent with TIMS data, but were also in agreement with the Crescent Lake reconnaissance of January 2002, when the first ice melt was found in the warmer zones associated with warmer temperatures near the groundwater inflow zones. Detailed understanding of the lake hydrological system and the mechanisms of groundwater-surface water interactions requires knowledge of the location and the spatial and temporal distribution of zones of active groundwater discharge. These include zones of significant flux of groundwater across the lakebed, which could be identified using the conventional hydrological techniques of wells and piezometers; these are, however, time- and labour-intensive and prohibitively expensive. Several analytical and numerical modeling studies have demonstrated that groundwater mostly seeps into a lake through the littoral zones and lake water is discharged out of the lake across the lakebed to the groundwater system through the deep parts of the lake (John and Lock, 1977; Lee, 1977; Winter and Pfannkuch, 1984; Winter, 1984). It is also generally accepted that the seepage flux from the groundwater system to a shallow lake decreases as the distance from the shore increases.
2.0 BRIEF METHODOLOGY
NOAA AVHRR images of the lake for 2004 were collected and processed to produce surface temperature maps using the Split Window Technique (SWT). This is one of the methods used to estimate surface characteristics from satellite data. Using the SWT, the difference in corrected brightness temperatures between two nearby infrared channels, channels 4 and 5 of the AVHRR sensor, is used to estimate the effective surface temperature. The greater the difference between channels 4 and 5 brightness temperatures, the higher the surface temperatures of the pixels in question. NOAA AVHRR data were imported into Winchips software, the Copenhagen Information Processing System for Windows, with modules to process AVHRR images. During import the following processing was performed independently of the file format:
- Image line synchronization is checked. Noisy and missing lines are identified and removed, or blank lines are inserted into the data stream.
- Image data is unpacked and converted into a set of Chips images in 8 or 16 bits.
- Calibration coefficients are determined using the in-flight calibration data embedded in the data stream for thermal and non-thermal bands.
- A calibration lookup table, which expresses the conversion of digital numbers into physical units, is created.
- If the input file format contains embedded orbital elements, they are extracted and stored in the Chips orbit file that is created.
At the end of the import process there is a text file containing a calibration table of in-flight calibration data combined with stored database information, an orbit file and 1 to 5 channel images. The imported images are optionally navigated, a form of georeferencing of the images to improve the precision of geocoding. Sun and satellite angles representing the sun and satellite view angles for each pixel, necessary for atmospheric corrections of the data, were created and concurrently rectified with the five channel images to preserve the angular information for later use. The rectification process facilitates the creation of new geocoded images by resampling the existing images. The rectified images for channels 4 and 5 were calibrated and corrected to produce brightness temperature maps. Calibration converted raw AVHRR pixel values into reflectances and temperatures using the calibration tables created during data import. Various corrections of the AVHRR signal for atmospheric distortion, accounting for the sun and satellite geometry for each pixel in the input image, were executed during this process. The corrected brightness temperature maps were combined using the SWT to produce the lake surface temperature maps from which warm and cold season patterns were identified.
3.0 REFERENCE LAKE TEMPERATURE
The reference Lake Victoria surface temperatures were those from Talling (1969) as given by Yin (1998), and the corresponding air temperatures were from FAO (1984), as given in Table 1. To identify warm and cool season patterns the lake surface temperatures were compared with the reference lake temperatures. A simple examination of the lake temperature data in Table 1 and the surface temperature maps revealed that the periods October to May and June to September correspond to warm and cool seasons respectively over the lake, although the temperature differences are quite small. Average air temperatures from lakeshore stations would similarly characterize lake seasons. The distribution of warm season and cold season patterns was then identified from the resulting surface temperature maps.

Table 1: Lake Victoria surface and air temperatures
Month                            Nov     Dec
Lake surface temperature (°C)    25.4    25.7
Air temperature (°C)             22.1    22.4
3.1 Surface Temperature Maps
Surface temperature maps and other products may be created from the calibrated images. Surface temperature maps were retrieved from the two thermal channels using the standard SWT procedure. This technique has been designed to account for the absorption effects that atmospheric water vapour has on radiometric surface temperatures: the more the water vapour, the greater the difference between channels 4 and 5 brightness temperatures. The SWT is based on similar emissivity values in the 10.3 to 12.4 µm
spectral range and is derived according to the algorithm given in the expression below, a default method due to Coll and Caselles (1997):

T = 0.39 Ch4^2 + 2.34 Ch4 - 0.78 Ch4 Ch5 - 1.34 Ch5 + 0.39 Ch5^2 + 0.56    (1)
where T is the lake surface temperature in Kelvin, and Ch4 and Ch5 are the channel 4 and channel 5 brightness temperature maps respectively, in Kelvin. Calibration of the Split Window Technique coefficients is done using real data. This default method of surface temperature retrieval is strictly accurate for Sea Surface Temperature (SST) retrieval, but it was considered a good approximation for the Lake Victoria surface temperature owing to the lake's large surface area. The surface temperature maps were then analyzed for warm and cold season patterns in order to infer groundwater inflow into Lake Victoria.
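As a minimal illustration of how equation (1) can be evaluated pixel by pixel, the following Python sketch applies the split-window expression to co-registered channel 4 and channel 5 brightness temperature arrays. The function name and sample values are illustrative only and are not part of the Winchips processing chain used in the study.

import numpy as np

def split_window_lst(ch4_bt, ch5_bt):
    """Lake surface temperature (K) from AVHRR channel 4 and 5 brightness
    temperatures (K), using the quadratic split-window form of equation (1)."""
    ch4 = np.asarray(ch4_bt, dtype=float)
    ch5 = np.asarray(ch5_bt, dtype=float)
    return (0.39 * ch4**2 + 2.34 * ch4 - 0.78 * ch4 * ch5
            - 1.34 * ch5 + 0.39 * ch5**2 + 0.56)

# Illustrative brightness temperatures (K): a 1 K channel difference raises
# the retrieved surface temperature a few degrees above the channel 4 value.
print(split_window_lst([300.0, 301.0], [300.0, 300.0]))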
3.2 Seasonal Pattern Zones
These are the areas on the processed infrared images whose temperatures differ from the rest of the lake surface. These are normally dark coloured. If they appear in the littoral areas of the lake, they indicate possible areas of groundwater inflow into the lake, the boundary of the littoral zone being at the break of the lakebed slope. Apparent seasonal pattern zones in the deeper parts of the lakes are interpreted to be due to slower warming/cooling caused by the larger water volume, and not groundwater inflow.

3.3 Warm Season Patterns
According to the findings by Kohout et al. (1979) and Evgueni et al. (2003), the points of groundwater discharge will form thermal anomalies (blue/black spots) on the surface temperature maps. Groundwater is generally cooler than surface water during the warm season and warmer during the cool season. Therefore inflow of cooler groundwater into a warmer lake during the warm season results in plumes of cooler water and temperatures lower than the surface temperatures of the lake (warm season patterns). These signatures of groundwater discharge can be detected from the respective surface temperature maps by identifying the thermal anomalies.

3.4 Cold Season Patterns
During the cool season groundwater is generally warmer than the lake surface water, and inflow of warmer water into a cool lake results in warmer water in the discharge zones (cold season patterns). These are also identified by the thermal anomalies on the respective maps.

3.5 Temperature Distribution Summary
Processed images for October to May, the warm season, showed dark thermal anomalies (warm season patterns) of 297-299 Kelvin as opposed to the general lake temperatures of about 300-306 Kelvin. On the other hand, the cool season from June to September showed cool season patterns of between 300 and 303 Kelvin on a generally cool lake surface of 296 to 299 Kelvin. Fig. 1 is a typical Lake Victoria scene showing dark thermal anomalies in the North-Western and Eastern areas. An enlarged portion of the North-Western portion
is given in Fig. 2 showing temperatures of 301 Kelvin for the cool season pattern (dark) as opposed to the general lake temperature of 296 Kelvin (yellow).
Figure 1: Lake Victoria July 7, 2004 surface temperature Map showing typical cool season pattern (dark)
Figure 2: Enlarged portion of thermal anomaly showing cool season pattern (dark). Location of thermal anomaly is given by the crossing of the lines in the small bottom left map window.

4.0 GROUND TRUTH STUDIES
An attempt was made to carry out some ground truth studies. Although physical measurement of lake surface temperatures was not possible due to extremely rapid mixing over the lake surface, enlarged portions of the thermal anomalies showed clear temperature variations between the seasonal patterns and the rest of the lake (Fig. 2). Water samples were also taken for laboratory tests for Total Dissolved Salts (TDS) and turbidity, which were expected to differ between the areas of groundwater inflow (thermal anomalies) and the rest of the lake, but the results were not conclusive.

5.0 CONCLUSIONS AND RECOMMENDATIONS
According to the seasonal patterns theory and the surface temperature maps from AVHRR data, there existed warm/cool season patterns (black over the lake in Figures 1 to 4) indicating the possibility of groundwater inflow into the lake. Both warm and cold season patterns occurred predominantly in the Western and Eastern littoral zones, with some rare occurrence in the Southern part. According to the infrared thermal anomaly theory there is likely to be groundwater inflow into Lake Victoria, especially in the Western and Eastern parts of the lake. Thermal anomalies are an annual phenomenon, as they were exhibited by processed images for 2002, 2004 and 2005. Ground truth studies could not conclusively demonstrate temperature differences at thermal anomalies due to extremely rapid mixing, but these temperature differences were clearly illustrated in enlarged portions of the thermal anomalies, Figure 4. The study did not give conclusive results on the groundwater situation of the lake but indicated areas that should be the subject of more detailed studies. More detailed investigation of Lake Victoria groundwater using conventional hydrological techniques of observation wells and piezometer measurements is therefore recommended, particularly in the identified Western and Eastern areas of thermal anomalies. Geophysical explorations, including modern techniques such as Helicopter Electromagnetic (HEM) surveys, should be carried out to finally quantify the lake groundwater situation. Thorough knowledge of all the components of the Lake Victoria water balance is required for its comprehensive analysis.
REFERENCES
Banks, W.S.L., Paylor, R.L. and Hughes, W.B. (1996) Using thermal infrared imagery to delineate groundwater discharge. Groundwater, 34(3), 434-443.
Coll, C., Caselles, V. and Schmugge, T.J. (1994) Estimation of land surface emissivity differences in the split window channels of AVHRR. Remote Sensing of Environment, 48, 127-134.
Evgueni, N.T., Vitaly, A.Z. and Henebry, G. (2003) Using Landsat thermal imagery and GIS for identification of groundwater discharge into lakes.
Evgueni, N.T. and Vitaly, A.Z. (2002) Application of remote sensing for hydrological studies in the Nebraska Sandhills. Geological Society of America, Abstracts with Programs, Denver, CO.
FAO (1984) Agroclimatological data, Rome.
John, P.H. and Lock, M.A. (1977) The spatial distribution of groundwater discharge into the littoral zone of a New Zealand lake. Journal of Hydrology, 33, 391-395.
Kohout, F.A., Wiesnet, D.R., Deutsch, M., Shanton, J.A. and Kolipinski, M.C. (1979) Applications of aerospace data for detection of submarine springs in Jamaica. Proceedings of the 2nd International Symposium on Satellite Hydrology, pp. 437-445.
Lee, D.R. (1977) A device for measuring seepage flux in lakes and estuaries. Limnology and Oceanography, 22(1), 140-147.
Rundquist, D., Murray, G. and Queen, L. (1985) Airborne thermal mapping of a flow-through lake in the Nebraska Sandhills. Water Resources Bulletin, 21(6), 989-994.
Talling, J.F. (1969) The incidence of vertical mixing, and some biological and chemical consequences, in tropical African lakes. Verh. Int. Ver. Limnol., 17, 998-1012.
Winter, T.C. and Pfannkuch, H.O. (1984) Effect of anisotropy and groundwater system geometry on seepage through lakebeds: numerical simulation analysis. Journal of Hydrology, 75, 239-253.
Yin, X. and Nicholson, S.E. (1998) The water balance of Lake Victoria. Hydrological Sciences Journal, 43(5).
COMPARISON OF TEST RESULTS FROM A COMPACTED FILL

M. N. Twesigye-omwe, Department of Civil and Building Engineering, Kyambogo University, Uganda
ABSTRACT
The plate loading test (PLT) is an important field test used to determine in-situ properties of soils. In some cases, such as hard rock, plate tests are sometimes the best means of determining the necessary design parameters. Results of plate bearing tests are used in estimating the vertical settlement and strength properties of soil and rock masses, including estimating the bearing capacity of foundations, the degree of compaction of a fill and the design of pavements. This paper presents results of field tests carried out on a site on an embankment. Further, results of laboratory tests carried out on disturbed samples recovered from the test site are presented. Field tests include plate bearing tests, the California bearing ratio (CBR) and cone penetration tests (CPT). Plate load tests are not carried to failure; a hyperbolic model is used to obtain the ultimate pressure (qult). Laboratory tests include particle size distribution, CBR and quick undrained triaxial tests. Values of the resilient modulus (MR) and undrained shear strength (cu) are determined from plate load tests. Field and laboratory test results are compared.
Keywords: Plate Load Test, Cone Penetration Test, Bearing Capacity, Shear Strength, California Bearing Ratio, Resilient Modulus, Hyperbolic Model.
1.0 INTRODUCTION
Construction of earth structures involves the use of compacted soils. Compacted fills are essentially used in the construction of embankments and in making ground for other geotechnical construction. Geotechnical parameters of compacted fills can be measured in-situ or on laboratory samples compacted using standard methods. Plate load tests can be used to estimate deformation characteristics of soil. The tests may be carried out on levelled surfaces, in shallow pits or in boreholes. A steel plate is set on the soil on the levelled surface or at the bottom of the excavation. The test is conducted until failure or until penetration reaches 15% of the plate size, and the load causing this penetration is taken as the ultimate load (BS 1377: Part 9, 1990 and ENV 1997-3, 1999). This paper presents results of field tests conducted at the test site on an embankment on the N2 at the South Ashbourne Link Bridge West, north of Dublin, Republic of Ireland. The results of laboratory tests carried out on disturbed samples recovered from this site are also presented.
Plate load tests are not carried to failure. Further, a deflection of 15% of the size of the plate is not achieved. In order to obtain the ultimate pressure, an analytical method based on the hyperbolic model was used. A comparison was made between field and laboratory results to assess the reliability of the model in analysing plate load data.
2.0 EXPERIMENTAL PROGRAMME
2.1 Field Tests
Field tests include the plate loading test, California bearing ratio, cone penetration test, and density measurements.
Plate Loading Test (PLT)
The plate loading tests were carried out at four locations in accordance with Clause 4.1 of BS 1377: Part 9 (1990). A clean level surface was prepared and covered with a 5 mm thick layer of sand. A 20 tonne ground investigation truck provided the reaction. The truck was lifted by two sets of hydraulic jacks to limit possible movement of the truck on its wheels during testing. A 300 mm diameter, 48 mm thick plate was placed over a 420 mm diameter, 30 mm thick plate. Load was applied through a hydraulic jack. The plate deflection was recorded through transducers placed at 120° spacing on top of the 420 mm plate and attached to a reference beam. The plate load setup is shown in Fig. 1. Loading was terminated when the differential settlement of the plate reached 5 mm. Data were automatically recorded on the computerised data collection system using Strainsmart, a computer program.
The average deflection, Δ, was plotted against the pressure, q, as in Fig. 2. The data points for each test were reduced by plotting the end points of each load stage (for example A, B, C, D in Fig. 2) and are presented in Fig. 3.

In-situ Density and Moisture Measurements
A direct transmission nuclear gauge, shown in Fig. 4, was used in these measurements. Values of wet density, moisture content and dry density obtained are displayed in Table 1.
Table 1: In-situ density and moisture measurements

Test Location   Wet Density (Mg/m3)   Moisture Content (%)   Dry Density (Mg/m3)
ND 1            2.281                 11.5                   2.046
ND 2            2.493                 8.3                    2.302
ND 3            2.291                 12.8                   2.036
ND 4            2.211                 11.6                   1.981
Fig. 1: Plate load test set-up.

Fig. 2: Pressure-deflection curve from PLT 4.

Fig. 3: Results of plate load tests.
In-situ California Bearing Ratio (CBR)
A hole of depth approximately 0.1 m and diameter 0.3 m was made in the ground. The bottom of the hole was levelled. The ground investigation truck provided the reaction. The in-situ CBR setup is shown in Fig. 5. Numerical results are presented in Table 2.

Table 2: Numerical results of in-situ CBR

TEST      CBR1   CBR2   CBR3   CBR4
CBR (%)   16     40     45     16
Fig. 4: Nuclear density gauge.

Fig. 5: Field CBR set-up.
Cone Penetration Test (CPT)
Force was applied on the 60° electric cone through an assembly of 1 m push rods. The cone tip resistance, qc, and sleeve friction, fs, were measured along the depth of penetration and recorded using a computer program, PlotCPT. Plots of qc, fs and the friction ratio, Rf, against depth are shown in Fig. 6. Rf, expressed as a percentage, was calculated from:
Rf = fs / qc    (1)

where qc and fs are determined at the same depth.
Fig. 6: Results of the cone penetration tests.

The undrained shear strength, cu, was determined from the relation
cu = qc / Nk    (2)

where Nk is the cone factor, which was taken as 17.5 (Meigh, 1987).
Table 3: Numerical results of the CPTs

TEST    qc (MPa)   fs (MPa)   Rf (%)   cu (MPa)
CPT 1   14.48      0.21       2.5      0.827
CPT 2   12.00      0.29       2.6      0.686
CPT 3   6.92       0.19       3.8      0.396
CPT 4   7.91       0.16       2.9      0.452
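As a minimal check of equations (1) and (2) against the averaged values in Table 3, the short Python sketch below computes the friction ratio and the undrained shear strength from the listed tip resistance and sleeve friction; the cone factor Nk = 17.5 follows Meigh (1987) as stated. The cu values reproduce the last column; the tabulated Rf values appear to be taken from the full depth profiles, so the ratio of the averaged quantities need not match them exactly.

qc = [14.48, 12.00, 6.92, 7.91]   # average tip resistance, MPa (Table 3)
fs = [0.21, 0.29, 0.19, 0.16]     # average sleeve friction, MPa (Table 3)
NK = 17.5                          # cone factor (Meigh, 1987)

for i, (q, f) in enumerate(zip(qc, fs), start=1):
    rf = 100.0 * f / q             # friction ratio in %, equation (1)
    cu = q / NK                    # undrained shear strength, equation (2)
    print(f"CPT {i}: Rf = {rf:.1f} %, cu = {cu:.3f} MPa")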
2.2 Laboratory Tests
The bulk sample was first sieved to remove all particles with sizes greater than 20 mm and then divided into eight samples whose moisture contents were varied. In order to make a fair comparison between field and laboratory values, the moisture contents of some samples were brought within the range of the field moisture content.

Particle Size Distribution
A sieve analysis was carried out in accordance with procedures described in BS 1377: Part 2 (1990). The particle size distribution was obtained as follows: cobbles (2.4%), gravel (22.4%), sand (25.1%) and clay/silt (50.1%).
Laboratory California Bearing Ratio (CBR)
Four samples at different moisture content points were prepared by compacting three equal layers in a CBR mould using 62 blows of a 2.5 kg rammer falling through a height of 300 mm. The penetration test was carried out from the top and bottom of the sample. The results of CBR, bulk density (ρ), moisture content (w) and dry density (ρd) are presented in Table 4.

Table 4: Results of laboratory CBR tests

Sample   ρ (Mg/m3)   w (%)   ρd (Mg/m3)   CBR (%)   MR (MPa)
1        2.0361      15.6    1.7613       18        Not valid
2        2.0998      16.4    1.8040       11        81.66
3        2.0970      17.3    1.7877       6         55.40
4        2.0801      17.7    1.7673       5         49.30
5        2.0667      18.7    1.7411       4         42.74
Laboratory Resilient Modulus
Values of the laboratory MR in Table 4 were derived from:

MR = 17.6 (CBR)^0.64    (3)

Equation (3) is valid for values of CBR in the range 2 - 12% (Highways Agency, 1994).
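A quick sketch of equation (3) in Python, reproducing the MR column of Table 4, is shown below; the quoted validity range of 2-12% CBR accounts for the "Not valid" entry at CBR = 18%. The function name is illustrative only.

def resilient_modulus(cbr_percent):
    """Laboratory resilient modulus (MPa) from equation (3), MR = 17.6*CBR^0.64,
    quoted as valid only for CBR between 2 and 12 %."""
    if not 2.0 <= cbr_percent <= 12.0:
        return None                       # outside the quoted validity range
    return 17.6 * cbr_percent ** 0.64

for cbr in (18, 11, 6, 5, 4):             # laboratory CBR values from Table 4
    mr = resilient_modulus(cbr)
    print(cbr, "->", "Not valid" if mr is None else f"{mr:.2f} MPa")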
Undrained Shear Strength Test (cu)
Undrained shear strength was determined from the results of triaxial tests. Samples were prepared in a split mould, of diameter 101.6 mm and height 203.2 mm, by compacting five equal layers using 26 blows of a 2.5 kg rammer. A confining pressure of 200 kPa was used.
The values of the major principal stress at failure, σ1f, were calculated from the equation:

σ1f = (σ1 - σ3)f + σ3    (4)

where σ3 is the cell confining pressure and (σ1 - σ3)f is the peak deviator stress representing failure. Numerical results of the quick undrained triaxial tests are displayed in Table 5.

Table 5: Quick undrained triaxial results
3.0 ANALYSIS OF RESULTS
3.1 Plate Load Tests
There was no failure during the tests. In order to obtain the ultimate pressure, an analytical method based on the hyperbolic model was used. The equation of a hyperbola in terms of the pressure on the plate, q, and the plate deflection, Δ, is given by (Kondner, 1963):

q = Δ / (C1 Δ + C2)    (5)

where C1 and C2 are constants. Rearranging equation (5), we get

Δ / q = C1 Δ + C2    (6)

Values of Δ/q were plotted against Δ as shown in Fig. 7. The slope of the best line through the data, C1, was determined. The value of the ultimate pressure, qult, was given by 1/C1. A factor of safety of 3 was applied to the values of qult to obtain the allowable pressure qa. Numerical results from the PLT are displayed in Table 6.
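A minimal Python sketch of this procedure is given below: the linearised form of equation (6) is fitted by least squares, qult is taken as 1/C1 and the allowable pressure as qult/3. The deflection and pressure readings used are illustrative values only, not the measured PLT data.

import numpy as np

def hyperbolic_qult(deflection_mm, pressure_kpa, factor_of_safety=3.0):
    """Fit Delta/q = C1*Delta + C2 (equation 6) by least squares; return
    the ultimate pressure q_ult = 1/C1 and the allowable pressure q_ult/FoS."""
    d = np.asarray(deflection_mm, dtype=float)
    q = np.asarray(pressure_kpa, dtype=float)
    c1, c2 = np.polyfit(d, d / q, 1)      # slope C1 and intercept C2
    q_ult = 1.0 / c1
    return q_ult, q_ult / factor_of_safety

# Illustrative load-stage end points (mm, kPa):
defl = [2.0, 5.0, 10.0, 15.0, 20.0, 25.0]
pres = [180.0, 370.0, 600.0, 740.0, 830.0, 890.0]
print(hyperbolic_qult(defl, pres))        # qult and qa in kPa

For comparison, the fitted line shown in Fig. 7, Δ/q = 0.0008Δ + 0.0089, gives qult = 1/0.0008 = 1250 kPa, which matches the PLT 4 entry in Table 6.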
Figure 7: Determination of the ultimate pressure, qult (plot of Δ/q against deflection Δ; fitted line Δ/q = 0.0008Δ + 0.0089).
Table 6: Results of the PLT

TEST    qult (kPa)   qa (kPa)   Δ (m)     EPLT (MPa)   cu (MPa)   MR (MPa)
PLT 1   1428.57      476.19     0.0093    12.67        0.238      48.26
PLT 2   1000.00      333.33     0.0076    10.85        0.167      45.49
PLT 3   1428.57      476.19     0.0079    14.91        0.238      58.63
PLT 4   1250.00      416.67     0.0068    15.16        0.208      79.61
Elastic Modulus
The elastic moduli, EPLT, were determined from the relationship:

EPLT = π (1 - ν^2) qa D / (4 Δ)    (7)
where D = diameter of the plate. Poisson's ratio (ν) was assumed equal to 0.5 for undrained conditions of cohesive soils (ENV 1997-3, 1999).

Undrained Shear Strength
The undrained shear strength cu was determined from the relation:
cu = (qult - γz) / Nc    (8)

where qult = the ultimate contact pressure from the PLT results, γz = the total stress (density times depth) at test level (z = 0 at the surface), and Nc = the bearing capacity factor for circular plates (Nc = 6 for a PLT on the subsoil surface) (ENV 1997-3, 1999).

In-situ Resilient Modulus
The modulus of resilience, MR, was obtained from the load-unload-reload cycles of the PLT. Average values are in the range 48.26 - 79.61 MPa.
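The short Python sketch below evaluates equations (7) and (8) for PLT 1 from Table 6. It assumes that the 420 mm (0.42 m) lower plate diameter governs, which reproduces the tabulated EPLT of about 12.7 MPa; the surcharge term γz is taken as zero because the test was at the surface. The function names are illustrative only.

import math

def plate_modulus(qa_kpa, delta_m, diameter_m=0.42, poisson=0.5):
    """Elastic modulus in kPa from equation (7): E = pi*(1 - v^2)*qa*D / (4*Delta)."""
    return math.pi * (1.0 - poisson ** 2) * qa_kpa * diameter_m / (4.0 * delta_m)

def undrained_strength(qult_kpa, gamma_z_kpa=0.0, nc=6.0):
    """Undrained shear strength in kPa from equation (8): cu = (q_ult - gamma*z) / Nc."""
    return (qult_kpa - gamma_z_kpa) / nc

# PLT 1 values from Table 6: qa = 476.19 kPa, Delta = 0.0093 m, qult = 1428.57 kPa
print(plate_modulus(476.19, 0.0093) / 1000.0)   # ~12.7 MPa
print(undrained_strength(1428.57) / 1000.0)     # ~0.238 MPa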
Comparison of Field and Laboratory Shear Strength
Values of cu derived from the PLT and CPT are in the ranges 0.462 - 0.605 MPa and 0.396 - 0.827 MPa respectively. In-situ moisture contents ranged between 12.8% and
8.3%. Laboratory cu values are in the range 0.094 - 0.405 MPa at respective moisture contents of 18.4% and 8.7%. Field and laboratory values of cu compare favourably.
Comparison of Field and Laboratory Resilient Modulus
In-situ values of MR are in the range 48.26 - 79.61 MPa. Laboratory values are in the range 42.74 - 81.66 MPa. There is close agreement between field and laboratory values. Typical values of MR for semi-stiff soils are in the range 40 - 150 MPa (Brown, 2003).
Comparison of In-situ and Laboratory California Bearing Ratio
In-situ CBR values are in the range 16 - 45%; laboratory values are in the range 4 - 18%. In-situ values are generally higher than laboratory values. This could be attributed to the presence of rock material that made penetration difficult and/or the low field moisture content.

4.0 CONCLUSIONS
Field tests were carried out on an embankment compacted in accordance with the specifications of the National Road Authority (NRA) of Ireland. From the field and laboratory tests the following conclusions have been drawn: (i) Field values of the undrained shear strength from the plate load test, the cone penetration test and laboratory values are in close agreement. (ii) Field and laboratory values of the resilient modulus compare favourably. (iii) Field values of the California bearing ratio are higher than laboratory values. The agreement between field values obtained from the hyperbolic model for analysis of the PLT, values from the CPT and laboratory values is a good indication that the model is reliable.

5.0 ACKNOWLEDGEMENTS
The research was sponsored by the Irish Council for International Students (ICOS), to which the author is grateful. Special gratitude is extended to Dr Eric R. Farrell of the Department of Civil, Structural and Environmental Engineering at the University of Dublin, Trinity College, for his advice and guidance throughout this investigation.

REFERENCES
British Standards Institute (1990) The British Standards Methods of test for soils for engineering purposes, BS 1377: Parts 2, 4, 7 and 9. London.
Brown, S.F. (2003) Soil mechanics for pavement engineers. In: Frost, M.W., Jefferson, I., Faragher, E., Roff, T.E.J. and Fleming, P.R. (2003) Transportation Geotechnics: Proceedings of the Symposium held at the Nottingham Trent University School of Property and Construction, September 11, 2003. London, Thomas Telford Publishing, 19.
ENV 1997-3 (1999) Eurocode 7: Geotechnical Design - Part 3: Design assisted by field testing. CEN, Brussels.
Highways Agency (1994) Design Manual for Roads and Bridges, HD 25/94. London.
Kondner, R.L. (1963) Hyperbolic stress-strain response: cohesive soils. Journal of Soil Mechanics and Foundation Engineering, 89(1), 115-143.
Meigh, A.C. (1987) Cone Penetration Testing. London, Butterworths.
DEALING WITH SPATIAL VARIABILITY UNDER LIMITED HYDROGEOLOGICAL DATA. CASE STUDY: HYDROLOGICAL PARAMETER ESTIMATION IN MPIGI-WAKISO

M. Kigobe and M. Kizza, Department of Civil Engineering, Makerere University, P.O. Box 7062, Kampala, Uganda.
ABSTRACT
Parameter estimation during hydrologic modelling is usually constrained by limited data and a lack of ability to perfectly represent in-situ conditions. Costs incurred during field data collection and poor access to appropriate sampling locations are additional constraints limiting guaranteed randomness during sampling. Availability of sparsely sampled data as point data or spatially lumped data further complicates the estimation procedures. Recent endeavours have made use of geostatistical tools in hydrology to guide parameter derivations for unsampled locations. Analytical groundwater flow models were employed to analyze different pumping test records (constant discharge, step tests and recovery tests), and semivariograms and kriging tools were applied to the averaged results to interpolate between the sparsely sampled boreholes, in order to estimate hydraulic parameters in Wakiso and Mpigi districts, Uganda. The tests performed suggest that, given sufficient data, use of semivariograms and kriging tools can sufficiently provide estimates for aquifer parameters. Aquifer hydraulics models coupled with geostatistical estimation techniques can adequately guide studies of hydrogeological characterisation.
Keywords: Semivariograms, Kriging, Transmissivity
1.0 INTRODUCTION
Success of hydrological modelling heavily depends on the availability of records, usually in the form of time series. Records can either be available as point data or spatially lumped data. Information about trends with time introduces uncertainty associated with interpreting temporal variation. For small catchments a daily time step may be longer than a storm response time in the catchment, necessitating finer time tuning (Beven, 2001). Availability of measurements, even for small numbers of sites, is very useful for the model calibration process. Modelling for ungauged sites is possible, but it is a very difficult process; normally the parameters of such sites are inferred from gauged sites. Limited catchment gauging and monitoring is one of the biggest challenges for most of Uganda's catchments. Parameter estimation techniques can be useful; however, limited data can restrict their feasibility. This paper outlines selected results where parameter estimation techniques were applied to determine hydraulic parameters for an aquifer in Wakiso and Mpigi, Uganda.
1.1 Randomness of Data
Classical methods of interpolation have been applied to point measurements before, such as Thiessen polygons, triangulation, or the inverse distance method (Isaaks and Srivastava, 1989). Such deterministic functions come with the disadvantage of not taking into account the uncertainty involved. An alternative way to predict the value of an unknown variable at an unmeasured location is to assume that it is the outcome of a random (stochastic) variable, z. Models that are able to account for both the spatial similarity effect and the uncertainty are referred to as random field models. The spatial diversity effect can be taken into account by a description of the degree of dissimilarity of values at two locations with respect to the separation distance: a variogram model. A random field is said to be stationary if its multivariate cumulative density function is invariant under any translation (Deutsch and Journel, 1998). Completely random fields are characterised by a constant expectation value, a constant variance and a variogram function that depends only on the vectorial distance h between two points of the random field. Data modelled by such models should be stationary and should not hold trends. If not, a process of de-trending can be applied to the data to remove any temporal structures.

1.2 Geostatistical Estimation Using Kriging
Geostatistical tools assume stationarity of the input data; a proper decision on stationarity is critical for the representativeness and reliability of the geostatistical tools used. In other words, one must carefully choose domains where the stationarity assumption holds, as pooling data across geological facies may mask important geological differences; on the other hand, manipulation by splitting data into bins may lead to unreliable statistics based on too few data points per analysed bin. If we assume the value of an unknown variable at an unsampled location to be the outcome of a random field, a variogram model can be developed to estimate it in the first instance. A kriging algorithm can be used. This is a modified linear regression technique, estimating unsampled values through weighted linear combinations of neighbouring data values. Let z(p) denote an unknown value at location p; the estimated value ẑ(p) can be obtained by interpolation using the data points z_i = z(p_i) in the surrounding area. This is the basis of the kriging equation: ẑ = Σ_{i=1}^{n} w_i z_i, where z_i are the known data values in the region, w_i their corresponding weights, and n the number of these data points. The choice of the weights is made to minimise the prediction error E([z - ẑ]^2), where E is the expected value or mean: the average difference between true and estimated values at the unknown locations. To keep the estimation unbiased, the weights are also constrained such that Σ w_i = 1. As the prediction error itself is unknown, the expected value of the mean squared error (the error variance) is minimised instead. The error variance can be estimated from the variogram model. Traditionally, kriging has been performed to provide a best linear unbiased estimate (BLUE) for unsampled values; the kriging variance is used to define confidence intervals. Estimation by kriging is optimal in the least-squares sense because the local error variance is minimised.
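A minimal Python sketch of ordinary kriging as described above is given below. It solves the usual weight system, the semivariogram matrix bordered by the Σ w_i = 1 constraint through a Lagrange multiplier, for a single target location; any positive definite variogram function, such as the spherical model of Section 1.3, can be passed in. This is an illustration of the method only, not the EasyKrig routines used in the study.

import numpy as np

def ordinary_kriging(coords, values, target, variogram):
    """Ordinary kriging estimate and kriging variance at `target`.
    `variogram(h)` must return gamma for a separation distance h."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    # Left-hand side: semivariogram between data points, bordered by the
    # unbiasedness constraint sum(w) = 1.
    a = np.ones((n + 1, n + 1))
    a[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            a[i, j] = variogram(np.linalg.norm(coords[i] - coords[j]))
    # Right-hand side: semivariogram between each data point and the target.
    b = np.ones(n + 1)
    b[:n] = [variogram(np.linalg.norm(c - np.asarray(target))) for c in coords]
    sol = np.linalg.solve(a, b)
    weights, mu = sol[:n], sol[n]
    estimate = float(weights @ values)
    variance = float(weights @ b[:n] + mu)   # kriging (error) variance
    return estimate, variance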
Application of kriging to heterogeneous soils may smooth out the spatial variation of the soil attribute. Estimated values will be much less variable than actual values: small values are overestimated and large values are underestimated. Due to this shortcoming Goovaerts (1999) suggests that kriged maps should not be used for applications sensitive to the presence of extreme values and their patterns of continuity which is usually the case for most hydrological variables.
1.3 Modelling using Variograms
A variogram is often defined as a measure of spatial variability. Sampling at points close to each other typically produces similar outcomes compared to sampling at points separated by larger distances. The variogram measures the degree of dissimilarity γ(h) between data separated by a vector or class of vectors h. If z(x_i) and z(x_i + h) are pairs of samples lying within a given class of distance and direction, and N(h) is the number of data pairs within this class, the experimental semivariogram can be defined as the average squared difference between the components of the data pairs (Goovaerts, 1999), as in the following equation:

γ(h) = (1 / (2 N(h))) Σ_{i=1}^{N(h)} [z(x_i) - z(x_i + h)]^2    (1)

This spatial variability measure is called a semivariogram, but it is commonly referred to as a variogram. To interpolate between the sample variogram estimates a variogram model can be used. The variance of the entire data set is referred to as the sill, and the distance at which the model semivariogram meets the data set variance is defined as the range (Fig. 1). The variance of the sample at separation distance zero is called the nugget. For models that reach the sill asymptotically, the range is defined as the lag distance, a, at which the variogram reaches 95% of the sill c: γ(a) = 0.95c.
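A minimal Python sketch of equation (1), computing an isotropic experimental semivariogram by binning pairwise separation distances into lag classes, is shown below. The function and argument names are illustrative and are not part of the EasyKrig toolbox used in the study.

import numpy as np

def experimental_semivariogram(coords, values, lag_edges):
    """Isotropic experimental semivariogram, equation (1):
    gamma(h) = (1 / 2N(h)) * sum over pairs in the lag class of [z(x) - z(x+h)]^2."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    dists, sqdiffs = [], []
    for i in range(n):                       # all pairwise separations
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            sqdiffs.append((values[i] - values[j]) ** 2)
    dists, sqdiffs = np.array(dists), np.array(sqdiffs)
    gammas = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        in_bin = (dists >= lo) & (dists < hi)
        n_h = int(in_bin.sum())
        gammas.append(sqdiffs[in_bin].sum() / (2.0 * n_h) if n_h else np.nan)
    return np.array(gammas)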
Fig. 1. Anisotropic variogram (components of a semivariogram).

For a semivariogram to be modelled, the data functions should fulfil a mathematical condition called positive definiteness. This is a necessary condition to ensure a unique solution to kriging systems. Any linear combination of models meeting the positive definiteness criterion is permissible. Ways of selecting and fitting variograms to experimental data are still controversial and several methods have been proposed, ranging from automatic fitting using an optimisation algorithm to visual, or manual, fitting. The manual approach alone cannot return precision of fit, especially as the number of parameters increases, whereas unconstrained optimisation against experimental data values can return unrepresentative parameter sets. Optimisation algorithms are constrained by additional knowledge where available, such as physical knowledge of the area. Where this additional knowledge is difficult to quantify as constraints, final parameter values are checked for plausibility. The following are examples of positive definite models (Deutsch and Journel, 1998):
Nugget:

γ(h) = c    (2)

Spherical:

γ(h) = c [1.5 (h/a) - 0.5 (h/a)^3]    if h < a    (3)
γ(h) = c                              if h > a    (4)
where a is the range, c is the sill and h the lag distance. The advantage of providing additional information, especially for cases where sampling is limited, is one of the reasons why the use of semivariograms and kriging was adopted in this study.
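The nugget and spherical models of equations (2)-(4) can be written as short Python functions, for example as sketched below; the zero-at-origin behaviour of the nugget model follows the usual GSLIB convention and is an added assumption here.

import numpy as np

def nugget_model(h, c):
    """Pure nugget, equation (2): gamma = c for any non-zero separation."""
    h = np.asarray(h, dtype=float)
    return np.where(h > 0.0, c, 0.0)

def spherical_model(h, c, a):
    """Spherical model, equations (3)-(4): rises to the sill c at the range a."""
    h = np.asarray(h, dtype=float)
    inside = c * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, c)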
2.0 DESCRIPTION OF TESTS PERFORMED
Analysis of the time-drawdown behaviour of an aquifer's response to pumping is often complicated, which makes aquifer characterization difficult. The governing groundwater flow equations are mostly non-linear, and the release of water due to the decline of the water table cannot be described in a simple, mathematically straightforward way. Analytical groundwater flow models were employed to analyze different pumping test records (constant discharge, step tests and recovery tests). The choice of a theoretical model directly influences the accuracy in predicting the hydraulic parameters. A major difficulty is that system identification does not guarantee a unique solution for a selected theoretical model. However, the level of uncertainty can be reduced by conducting detailed fieldwork. Geostatistical techniques were explored to categorize and determine the hydraulic transmissivity values for sampled locations in the selected aquifer systems. Limited data of drawdown observations (in the pumping well itself) were collected for only 15 boreholes out of over 200 boreholes in the area (Fig. 2).
Fig. 2. Sampled boreholes (labels) and drainage network of Wakiso and Mpigi.

The methodology adopted involved determination of the hydraulic parameters, followed by fitting of semivariograms using a MATLAB toolbox. This code is interfaced by EasyKrig3.0 (Dezhang Chu, Woods Hole, 2004). EasyKrig3.0 is an optimisation MATLAB toolbox developed by Yves Gratton and Caroline Lafleur (INRS-Océanologie, Rimouski, Qc, Canada), and Jeff Runge (Institut Maurice-Lamontagne, now with the University of New Hampshire).

3.0 GEOLOGY OF THE STUDY AREA
The geology of Uganda is characterised by volcanic rocks in the east and the Rift Valley sediments in western Uganda; northern Uganda is highly faulted, with rocks of orogenic complexes; the central region has several locations of sedimentary rocks and granite; the southern part is less faulted and mostly formed of orogenic complexes. The regional hydrogeology is formed of shallow unconsolidated formations derived from the prolonged weathering of Precambrian crystalline bedrock of the Granulitic-Gneissic complex. Most of central and northern Uganda is covered by the Buganda-Toro system of mica schists, acid gneisses and quartzites (Wright, 1992 and Richard et al., 1995). Quaternary sediments line the Victoria Nile River and Sezibwa swamp as well as the north shore of Lake Victoria. Wakiso and Mpigi are mostly formed of metamorphosed argillites, quartzite and amphibolites of the Buganda-Toro Complex in the west (Mpigi) and the granodioritic and granite gneiss of the Buganda-Toro. In the east, Wakiso is predominantly underlain by intrusive granite, while in the west Mpigi is mostly underlain by Precambrian formations (Fig. 3). Geological logs for the sampled boreholes reveal a progression from a sandy clay with residual rock in the upper zones, through weathered and fractured rocks, to fresh, rarely fractured rocks at the bottom of the aquifer system.
(Map legend: sampled boreholes; 1 km borehole buffer; rivers/streams; open water; and geological units: intrusive granite, mainly coarse-grained porphyritic biotite; Cainozoic swamp, alluvium and lacustrine deposits; Precambrian cleaved sandstone, muscovite-biotite gneiss and subordinate schist, conglomerates and sandstones with subordinate shale bands, arkoses and silicified rocks, quartzites, amphibolites and epidosites.)
Fig. 3. Geology of Wakiso and Mpigi.

4.0 DATA ANALYSIS
Theis and Jacob's tests, followed by recovery tests for unsteady flow in a confined aquifer, were attempted. These were normally considered prior to unconfined aquifer models such as Hantush (1960). Eden and Hazel's well performance tests were used to establish the aquifer sustainable yield and the well losses that could have occurred during the drawdown of a pumped well. Fig. 4 summarises the averaged transmissivity values for the pumping-test borehole data that were analysed.
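As an illustration of the kind of analytical model referred to here, and not the authors' actual calculation, the Cooper-Jacob straight-line simplification of the Theis solution estimates transmissivity from the pumping rate and the drawdown change per log cycle of time. The numbers used below are placeholders, not measured Wakiso-Mpigi values.

import math

def cooper_jacob_transmissivity(pumping_rate_m3_per_day, drawdown_per_log_cycle_m):
    """Cooper-Jacob straight-line estimate: T = 2.303 Q / (4 * pi * delta_s),
    where delta_s is the drawdown change over one log cycle of time."""
    return 2.303 * pumping_rate_m3_per_day / (4.0 * math.pi * drawdown_per_log_cycle_m)

# Placeholder pumping-test figures for illustration only:
print(cooper_jacob_transmissivity(50.0, 2.0))   # about 4.6 m2/day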
Fig. 4. Average hydraulic transmissivity estimates at the location of each borehole.

Geostatistical estimation using kriging was applied to estimate hydraulic properties at unsampled locations, given that a model defining how the properties vary in space could be established. With the limited data, fitting of semi-variograms proved difficult (Fig. 5) due to the spatial diversity of the sampling points. Better results were feasible by separating the data into groups where records are spatially close. This had plausibility shortfalls; hence additional data was required for more accurate kriging. During the optimisation process the least-squares fit function was used to fit the data. The LSQ fit has a certain range for each parameter; different initial settings or parameters will produce different results. For most instances, the data was assumed anisotropic and an anisotropic variogram or correlogram model was fitted.
Figure 5. Typical experimental variograms and correlogram for entire data set Parameters obtained by fitting variograms or correlograms are used for the kriging operation. The "Relative Variance" parameter is useful to allow control of kriging limits while changing the variogram and kriging parameters to fine-tune parameters and improve the quality of the kriging results (Fig.6). ............... siii#iiy#'
.iiiiii~ii@@~ '~ ........
.-~|
~:.
~
215
it5
(~)
(b)
Fig. 6. Typical Kriging results, (a) and associated variance, (b) plots Given sufficient data, kriging by a "Batch Process", to allow grouping of data sets that have the same data structures, i.e., having the same data format and using the same
217
International Conference on Advances in Engineering and Technology
variogram and kriging parameters should be explored. More accurate results are feasible when a grid file to specify the coordinates of regular or irregular grids on which the kriged values are generated is provided. 5.0 FINAL HYDRAULIC PARAMETERS ESTIMATION: The final Transitivity values considered to represent hydraulic parameter estimates for Wakiso and Mpigi were derived from Kriging results and spatially analysed using GIS. Most wells were 5 km within reach of streams (Fig.8). Apart from two of the wells, the rest were also 5 km within reach of each other. This could have an influence on the well characteristic behaviour or aquifer response in form of induced recharge from streams or increased drawdown due to excessive pumping of the nearby well.
Fig. 7. Approximated Transmissivity Plots for Wakiso and Mpigi Due to relatively high uncertainty during kriging, categorisation of results into zones can provide insights to the approximate parameter estimates (Fig.8).
218
Kigobe & Kizza
Figure 8. Hydrologic property zoning.

6.0 CONCLUSIONS AND RECOMMENDATIONS
Boreholes were sparsely located and in regions of different geomorphology and different hydrogeological conditions. Boreholes located in valley areas tend to have relatively higher transmissivity values compared to those located on slopes and hilltops. The results reveal that, given limited field explorations, use of kriging techniques to estimate aquifer properties is possible. Aquifer hydraulics models coupled with geostatistical estimation techniques can adequately guide studies of hydrogeological characterisation. In addition, spatial analysis using GIS tools is an effective way of parameter visualisation. Parameter estimation by use of variograms and kriging is not limited to hydrological applications; as demonstrated here, the approach can be used to improve the level of understanding of the hydrological data and its interpretation. Given that adequate sampling is possible, other studies could include salinity levels in agricultural soils, soil pressure differences, soil pollutant movements, etc. Availability of data at finer spatial resolutions leads to improved estimation. Results provided here are more pedagogical than practical as a basis for groundwater abstractions/designs. In the absence of adequate aquifer sampling, there is a need to employ modelling tools to simulate groundwater flow patterns in Uganda's aquifer systems, and future work could involve stochastic analysis of aquifer properties.

7.0 ACKNOWLEDGEMENTS
Without the financial support from the SIDA/Sarec Faculty of Technology Research Funds, the insight into the work would not have been realised. The authors are also grateful to MBW Consulting Engineers for providing test data.

REFERENCES
Beven, J. K. (2001) Rainfall-Runoff Modelling - The Primer. John Wiley and Sons Ltd, England. ISBN: 0-470-86671-3.
Clark, I. (1984) Practical Geostatistics. Third printing, Elsevier Applied Science Publishers, London, England, 123 pp.
Deutsch, C.V. and Journel, A.G. (1992) GSLIB: Geostatistical Software Library and User's Guide. Oxford University Press, Oxford, 340 pp.
Deutsch, C.V. and Journel, A.G. (1998) GSLIB: Geostatistical Software Library and User's Guide. 2nd edition, Oxford University Press, New York, 369 pp.
Goovaerts, P. (1999) Geostatistics in soil science: state-of-the-art and perspectives. Geoderma, 89, 1-45.
Hantush, M.S. and Jacob, C.E. (1955) Non-steady radial flow in an infinite leaky aquifer. American Geophysical Union Transactions, 36, 95-100.
Isaaks, E.H. and Srivastava, R.M. (1989) An Introduction to Applied Geostatistics. Oxford University Press, New York, 561 pp.
Journel, A.G. and Huijbregts, C.J. (1992) Mining Geostatistics. Academic Press, New York, 600 pp.
Kitanidis, P.K. (1997) Introduction to Geostatistics: Applications in Hydrogeology. Cambridge University Press, 249 pp.
Taylor, Richard G. and Ken W.F. Howard (1995) Averting shallow-well contamination in Uganda. Sustainability of Water and Sanitation Systems, 21st WEDC Conference, Kampala, Uganda.
Marcotte, D. (1991) Cokriging with MATLAB. Computers & Geosciences, 17(9), 1265-1280.
Wright, E. P. (1992) The hydrogeology of crystalline basement aquifers in Africa. Geological Society, Publ. No. 66, ISBN: 090331777.
TOWARDS APPROPRIATE PERFORMANCE INDICATORS FOR THE UGANDA CONSTRUCTION INDUSTRY

D. Tindiwensi, Department of Civil Engineering, Makerere University, Uganda
J.A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
P. D. Rwelamila, University of South Africa
ABSTRACT
This paper presents a brief background on the development of performance indicators for the Uganda construction industry. The main objective of the paper is to demonstrate the need for performance measurement, the justification for the use of indicators, the methodology employed in the development of indicators and the rationale for validation. The paper is divided into three sections, namely: theoretical background, indicator development methodology, and way forward.
A set of ten indicators to measure the performance of the construction industry has been developed at the Department of Civil Engineering, Makerere University. The research sought to address the problem of the lack of construction industry performance indicators and trends upon which the Government of Uganda and other stakeholders may base intervention and/or development policies and strategies. The paper recommends the creation of a performance index that incorporates both growth and development indicators to appropriately measure the performance of construction industries in developing economies.
Keywords: Performance, Indicators, Construction, Industry, Measurement

1.0 THEORETICAL BACKGROUND
A set of ten indicators to measure the performance of the construction industry has been developed at the Department of Civil Engineering, Makerere University, as part of PhD research on the topic 'An Investigation into the Performance of the Uganda Construction Industry'. The research sought to address the problem of the lack of construction industry performance indicators and trends upon which the Government of Uganda and other stakeholders may base intervention and/or development policies and strategies. Therefore, the development of indicators for use in measuring the performance of the construction industry was the major objective.
1.1 The Need for Performance Measurement in Construction
The construction industry possesses several stakeholder groups which are both internal and external to it. These groups control the driving force issues to change and also the efficiency and sustainability of the industry. The increasing power of the various stakeholder groups and their multiple, contradictory and often changing preferences compounds the problem of ensuring their satisfaction (Freeman, 1984). There is, therefore, a strong and urgent need to determine, on a continuous basis, whether these preferences are being met, hence performance measurement. While advocating for total re-thinking of performance measurement, Eccles (1995) states that 'what gets measured gets attention' and suggests that the following questions should be posed while undertaking performance measurement: 'Given our strategy, what are the most important measures of performance? How do the measures relate to one another? What measures predict long term success?' To provide the answers to these questions Eccles (1995), while considering the example of a firm, suggests a shift from treating financial figures as the foundation for performance measurement to treating them as one among a broader set of measures: 'The figures reflect the consequences of yesterday's decisions rather than indicating tomorrow's performance.' There is, therefore, a need to define measures, and draw out a methodology for their determination, that describe performance not only in the past but also in the present, and make predictions about the future, focussing on the preferences of the stakeholders.

1.2 The Meaning of Performance Measurement
Measurement consists of assigning a numerical scale to the size, value or other characteristic of a tangible or intangible object (Kydos, 1998). Kydos (1998) presents the relative nature of all measures and asserts that a measure not referenced to something else has no meaning.
According to Kydos (1998), measurements can be categorised broadly as: qualitative or subjective - when numbers on a scale are assigned by human judgement; quantitative or objective - when measures are derived from physical measurements or countable units; attributes - when characteristics are measured as either being present or not; and lastly, variable or continuously variable - when the degree or extent of a variable is measured on a continuous scale.
1.3 Performance Measurement Metrics
According to Rolstadås (1994), an indicator is information we want to know and a metric is 'the specific method to quantify this information'. Metrics may be objective, accurately known and hierarchical in nature, and therefore possible to measure directly; this type of metric is described as a hard metric. Alternatively, other metrics are subjective, measure a surrogate indicator and represent a multivariable situation, e.g. attitudes; this type of metric is described as a soft metric. The choice of the type of metric to use in performance measurement is therefore important and will depend on the classification of the indicator to be measured.
1.4 Performance Measurement Indicators
Due to the urgent need for an improved and reliable flow of intelligence to the decision-making process, there has recently been a surge of interest among policy-makers in using statistical indicators (Wong, 1995). Statistical indicators are used in a number of ways: to measure the needs or opportunities of each area as a basis for resource allocation; to set up the contextual baseline of an area's conditions to help measure the additional improvement brought about by public policy intervention and assistance; and lastly, to help distinguish just which opportunity or problem is most important for each area. According to UNCHS (1997): 'Indicators are not data, rather they are models simplifying a complex subject to a few numbers which can be easily grasped and understood by policy makers and the general public. Indicators are statistics directed specifically towards policy concerns and which point towards successful outcomes and conclusions for policy. They are required to be user driven, and are generally highly aggregated and have easily recognisable purposes.' From the definition above, it is clear that indicators are not empirical data but rather simple informative numbers derived from empirical data. Coombes and Wong (1994), in their paper on methodological steps in the development of multivariate indexes for urban and regional policy analysis, emphasise simplicity. They prefer the use of 'soft operations research' in the acquisition and manipulation of empirical data to develop the indicators. From the definition advanced by UNCHS (1997) above, the process of developing indicators should be consonant with the explicit needs of the stakeholders. According to UNCHS (1995): 'Indicators should be measurable using immediately available data and should not normally require special surveys or studies. They should be related to the interests of one or more stakeholders, be cost effective and be independent with each indicator measuring a different outcome.' Independence of the indicators needs to be emphasised because it determines the mathematical method of manipulating data that may be employed to calculate indicators from empirical measurements. However, it is important to discuss the genesis of the concept of indicators, and in particular the socio-economic indicators that are being proposed for use in measuring construction industry performance. The use of socio-economic indicators to inform policy decisions can be traced back to at least the 1940s, but academic discourse on the subject is mainly reported in the mid 1960s (Wong, 2003). Monthly economic indicators were first published to measure the buoyancy of the US economy in the 1940s (Bauer, 1966). Wong (2003) asserts that the success in the development of a set of reliable economic indicators prompted American social scientists, welfare advocates and civil servants to develop indicators to measure social change in the mid 1960s. The term 'social indicators' was popularised by Bauer (1966) during an effort to examine the impact of the space programme on American society. Wong (2003) presents the chronology of the spread of use of social indicators.
‘The idea of compiling social indicators spread rapidly from the USA to international organisations such as the Organisation for Economic Cooperation and Development (OECD) and the Social and Economic Council of the United Nations, which began to develop social accounting and reporting schemes.’ This wave of research was named the ‘social indicators movement’. ... ‘The influence of the social indicators movement was also evident in the publication of social reports and compendia of social statistics in Britain, Germany and France in the 1970s....’
According to Carley (1981):
‘... The initially rapid development of the indicators movement suffered a setback in the late 1970s due to the failure of researchers to resolve conceptual and methodological difficulties.’
According to Wong (2003):
‘Social indicators also fell into disfavour with policy makers because they were not tailored to measure their policy concerns. More importantly, governments increasingly opted for the “magic of the market” rather than social intelligence and became less interested in social engineering and reform. ... Thus, although the bandwagon effect of research during the early period of the social indicators movement did produce revelatory and reflective work, many methodological and conceptual issues still remained unresolved.’
Wong (2003) reported a resurgence of interest in the 1990s in Britain among policy makers in using statistical indicators in the decision-making process, especially in relation to urban regeneration and local development. This has been necessitated by persistent urban and regional disparities in prosperity, and has encouraged policy makers to try to devise strategies that exploit the distinctive advantages and opportunities possessed by each area in order to maximise their development. This new form of the indicators movement integrates the needs of stakeholders, is stimulated by broad environmental concerns related to creating sustainable development, and has been branded the ‘community indicators movement’. The community indicators movement is differentiated from the social indicators movement because the latter was largely based in the context of social reform and welfare whereas the former is based on local development needs as espoused by the stakeholders. The new wave of interest has not left the international organisations behind, as seen in the call for suitable indicators of sustainability in Agenda 21 (UNCED, 1992) and the United Nations' Millennium Development Goals (MDGs). Wong (2003) gives a more in-depth description of the concepts, methods and application of indicators.
2.0 METHODOLOGICAL STEPS IN THE DEVELOPMENT OF INDICATORS
In order to ameliorate the danger of feeding in a haphazard collection of statistics in a ‘garbage in-garbage out’ approach, it is important to derive indicators in a systematic manner rather than on an arbitrary basis.
Coombes and Wong (1994) proposed a four step procedure, working from the general to the specific as a basis for a consistent development process to improve the quality of indicators, namely: conceptual consolidation, analytical structuring, identification of indicators, and creation of an index.
3.0 CALCULATION OF INDICATOR IMPORTANCE WEIGHTS
There are a number of techniques that may be employed in the calculation of indicator importance weights in an effort to create an index, namely: null, expert, literature, public opinion, z-scores, regression analysis, factor analysis, cluster analysis, and multi-criteria analysis methods. In order to minimise the shortcomings of the various methods, it is prudent to use them in combinations of two or more.
4.0 THE PROPOSED PERFORMANCE INDEX
The public opinion method was used to rank the indicators through a questionnaire survey, and multi-criteria analysis was used to calculate the indicator weightings. In addition, factor analysis was carried out to provide an alternative explanation of the relative importance of the indicators based on the proportion of total variance explained. The expert method was used to validate the weightings obtained by the multi-criteria analysis.
4.1 Multi-Attribute Utility Theory (MAUT)
Multi-attribute utility theory (MAUT) is the form of multi-criteria analysis that was used in this research to calculate weightings for the performance indicators. All attributes, whether monetary, non-monetary, economic or environmental, were assessed on an equal basis (Rogers, 2001). According to Rogers (2001): ‘The overall strategy within multi-criteria decision models is decomposition followed by aggregation.’ The decomposition process divides the problem into a number of smaller problems involving each of the individual criteria. Breaking a problem down in this way makes it easier for the decision-maker to analyse information coming from diverse origins (Rogers, 2001). The process of aggregation then draws all the individual pieces of information together to allow a final decision to be made. Within multi-criteria models, aggregation involves either the use of information or the making of certain assumptions concerning the relative importance weightings of the different criteria. Utility is a concept for expressing a decision-maker's level of satisfaction with a given outcome. It is used to determine the existence or absence of preference between the outcomes of a set of options under examination (Keeney and Raiffa, 1976). Each outcome or criterion has its own utility function. A utility function is expressed on an ordered metric scale; the numbers on this scale have no absolute physical meaning, and the scale is constructed by assigning numbers to the two extreme points, which correspond to the best and the worst possible outcomes for the attribute in question. In the vast majority of engineering situations, the decision-maker must take into account a large number of different attributes or types of consequences, relating to the economic, environmental, social and technical performance of the various options under examination. All the separate utility functions for the individual criteria are combined within one mathematical expression called a multi-attribute utility function.
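To make the decomposition-aggregation strategy concrete, the sketch below is a minimal illustration (not the authors' implementation; the criteria, outcome ranges, option scores and weights are all hypothetical): each criterion is first mapped onto a 0-1 utility scale between its worst and best outcomes, and the single-criterion utilities are then combined with an additive weighted sum of the kind formalised in Equation 1 below.

```python
# Minimal illustration of additive multi-attribute utility aggregation.
# All criteria, ranges, weights and option scores below are hypothetical.

def utility(value, worst, best):
    """Scale a raw outcome onto a 0-1 utility scale (0 = worst, 1 = best)."""
    return (value - worst) / (best - worst)

# Hypothetical criteria: (worst outcome, best outcome, importance weight).
criteria = {
    "cost_million_usd": (10.0, 2.0, 0.5),   # lower cost is better, so worst > best
    "quality_score":    (1.0, 5.0, 0.3),
    "safety_score":     (1.0, 5.0, 0.2),
}

def aggregate(option):
    """Additive multi-attribute utility: the weighted sum of single-criterion utilities."""
    return sum(w * utility(option[name], worst, best)
               for name, (worst, best, w) in criteria.items())

option_a = {"cost_million_usd": 4.0, "quality_score": 4.5, "safety_score": 3.0}
option_b = {"cost_million_usd": 3.0, "quality_score": 3.5, "safety_score": 4.5}

print(aggregate(option_a), aggregate(option_b))  # the option with the higher utility is preferred
```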
Any decision-maker implicitly attempts to maximise some function U that aggregates all the different points of view to be taken into account. Therefore, if a decision-maker is asked for information regarding a range of options, his/her reply will be both coherent and consistent with a certain unknown function U. This function is expressed in terms of a number of relevant criteria, and estimating its form is the basis of MAUT. In the simplest approach, if the utility of each criterion is independent of that of the others, i.e. if there is utility independence, then the multi-attribute utility function can be constructed as a weighted average of the utility functions for each individual attribute or consequence, as expressed in Equation 1 below (Rogers, 2001):

U(x) = \sum_{i=1}^{n} W_i U_i(x_i)    (1)
where x is a vector containing the n criteria and W_i is the weighting for criterion i, which specifies the relative contribution of each criterion to the final decision. Von Winterfeldt and Edwards (1986) indicated that the multi-attribute utility function may take a weighted additive, multiplicative, or multi-linear form. The additive model is valid if the criteria are selected carefully so as to minimise the possibility of inter-criterion interactions; the multiplicative and multi-linear models allow for inter-criterion relations such as dependence and correlation. However, Vincke (1992) reported that the additive model is the most widely used because of its simplicity. The simple additive weighting method was used in this research to calculate the weightings for the indicators in the creation of the performance index. The normalised importance weight for each indicator is calculated using the following formula (Rogers, 2001):
W_j = \frac{n - r_j + 1}{\sum_{i=1}^{n} (n - r_i + 1)}    (2)

where W_j is the normalised importance weight of the jth indicator, r_j is the ranking score for the jth indicator, and n is the number of indicators. The ranking scores are determined by arranging the indicators in descending order of importance as agreed by the experts.
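As a quick illustration (the short list of indicator names is hypothetical, not the study's full set of ten), Equation 2 can be evaluated directly from an agreed ranking:

```python
# Compute normalised importance weights from expert rankings (Equation 2).
# Only three hypothetical indicators are used here for brevity.
indicators = ["Sustainability", "Growth", "Corporate Development"]  # in descending importance

n = len(indicators)
raw_scores = {name: n - rank + 1 for rank, name in enumerate(indicators, start=1)}
total = sum(raw_scores.values())
weights = {name: score / total for name, score in raw_scores.items()}

print(weights)  # {'Sustainability': 0.5, 'Growth': 0.333..., 'Corporate Development': 0.166...}
```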
4.2 Research Results
As already indicated above, a questionnaire survey of public opinion on an a priori set of ten indicators was conducted, the results of which were used to calculate the relative importance weight of each indicator in the construction performance index. Table 1 below presents the ten indicators and their corresponding importance statements, which were put to the respondents in the questionnaire for rating on a 5-point Likert scale.

Table 1: Indicators and Importance Statements Used in the Questionnaire

Indicator | Acronym | Importance Statement
Construction Growth Indicator | Gi | The increase in the total output of the construction industry is an important indicator of its performance.
Construction Affordability Indicator | CAi | The ability of the general public to pay for construction goods and services is an important indicator of construction industry performance.
Corporate Development Indicator | CoDi | The formation and development of both consulting and construction firms is an important indicator of construction industry performance.
Construction Safety Indicator | CSAi | The reduction in the rate of accidents on construction sites is an important indicator of construction industry performance.
Construction Quality Indicator | CQi | The proportion of completed projects accepted with no defects at first time is an important indicator of construction industry performance.
Construction Sustainability Indicator | CSUi | The contribution of the construction industry to sustainable development is an important indicator of its performance.
Construction Skills Development Indicator | CSDi | The rate of training of skilled manpower is an important indicator of construction industry performance.
Organisational Complexity Indicator | OCi | The increase in the capacity of firms to handle complex projects is an important indicator of construction industry performance.
Construction Productivity Indicator | PRi | Productivity, i.e. the workers' output at workplace or project level, is an important indicator of construction industry performance.
Distribution of Involvement Indicator | DIi | An even distribution of projects to the various and competing categories of firms by size is an important indicator of construction industry performance.
Table 2 below presents the questionnaire results together with the relative importance weights calculated using Equation 2 above.

Table 2: Calculation of Indicator Relative Importance Weights

Indicator | Total Score | Average Score | Rank (r) | [n - r + 1] | Relative Importance Weight (W)
Construction Sustainability Indicator | 232 | 4.30 | 1 | 10 | 0.18
Construction Growth Indicator | 230 | 4.26 | 2 | 9 | 0.16
Corporate Development Indicator | 229 | 4.24 | 3 | 8 | 0.15
Organisational Complexity Indicator | 226 | 4.19 | 4 | 7 | 0.13
Construction Quality Indicator | 223 | 4.13 | 5 | 6 | 0.11
Construction Skills Development Indicator | 212 | 3.93 | 6 | 5 | 0.09
Construction Productivity Indicator | 205 | 3.80 | 7 | 4 | 0.07
Construction Affordability Indicator | 203 | 3.76 | 8 | 3 | 0.05
Construction Safety Indicator | 202 | 3.74 | 9 | 2 | 0.04
Distribution of Involvement Indicator | 178 | - | 10 | 1 | 0.02
SUM | | | | 55 | 1.00
From the results presented in Table 2 above, the proposed index for measuring performance of the Uganda construction industry is presented in Equation 3 below:
PI = 0.18 CSUi + 0.16 Gi + 0.15 CoDi + 0.13 OCi + 0.11 CQi + 0.09 CSDi + 0.07 PRi + 0.05 CAi + 0.04 CSAi + 0.02 DIi    (3)
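As a usage sketch (illustrative only: the indicator values below are invented, and the computation assumes each indicator has already been expressed on a common normalised scale), Equation 3 is evaluated as a simple weighted sum:

```python
# Evaluate the proposed construction performance index (Equation 3) as a weighted sum.
# Indicator values are hypothetical and assumed to be pre-normalised to a common scale.
weights = {"CSU": 0.18, "G": 0.16, "CoD": 0.15, "OC": 0.13, "CQ": 0.11,
           "CSD": 0.09, "PR": 0.07, "CA": 0.05, "CSA": 0.04, "DI": 0.02}

indicator_values = {"CSU": 0.6, "G": 0.7, "CoD": 0.5, "OC": 0.4, "CQ": 0.8,
                    "CSD": 0.5, "PR": 0.6, "CA": 0.3, "CSA": 0.7, "DI": 0.4}

performance_index = sum(w * indicator_values[name] for name, w in weights.items())
print(round(performance_index, 3))
```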
REFERENCES
Bauer, R. A. (1966) Social Indicators. MIT Press, Cambridge, MA.
Carley, M. (1981) Social Measurement and Social Indicators. George Allen and Unwin, London.
Coombes, M. G. and Wong, C. (1994) Methodological Steps in the Development of Multi-Variate Indexes for Urban and Regional Policy Analysis. Environment and Planning A, Vol. 26, pp 1297-1316.
Eccles, R. G. (1995) The Performance Measurement Manifesto. In: Holloway, J., Lewis, J. and Mallory, G. (Eds). Sage Publications, Open University Business School, London.
Freeman, R. E. (1984) Strategic Management: A Stakeholder Approach. Pitman, Marshfield, MA.
Keeney, R. L. and Raiffa, H. (1976) Decisions with Multiple Objectives. Wiley, New York.
Kydos, W. (1998) Operational Performance Measurement. St Lucie Press, Boca Raton.
Rogers, M. (2001) Engineering Project Appraisal: The Evaluation of Alternative Development Schemes. Blackwell Science Ltd, London.
Rolstadås, A. (Ed.) (1994) Performance Measurement: A Business Process Benchmarking Approach. Chapman and Hall.
UNCED (United Nations Conference on Environment and Development) (1992) Agenda 21. Conches, Switzerland.
United Nations Centre for Human Settlements (1995) Monitoring Human Settlements: Abridged Survey, Indicators Programme. UNCHS, Nairobi.
Vincke, P. (1992) Multi-Criteria Decision Aid. John Wiley and Sons, Chichester, UK.
Von Winterfeldt, D. and Edwards, W. (1986) Decision Analysis and Behavioural Research. Cambridge University Press, Cambridge, UK.
Wong, C. (1995) Developing Quantitative Indicators for Urban and Regional Policy Analysis. In: Hambleton, R. and Thomas, H. (Eds) (1995) Urban Policy Evaluation: Challenge and Change. Paul Chapman Publishing Ltd, London.
Wong, C. (2003) Indicators at the Crossroads: Ideas, Methods and Applications. Town Planning Review, Vol. 74, No. 3, pp 253-279.
DEVELOPING AN INPUT-OUTPUT CLUSTER MAP FOR THE CONSTRUCTION INDUSTRY IN UGANDA G. Mwesige, Department of Civil Engineering, Makerere University, Uganda D. Tindiwensi, Department of Civil Engineering, Makerere University, Uganda
ABSTRACT
This paper presents a brief description of the Uganda construction industry and its contribution to Gross Domestic Product. It examines the industry's organizational structure, classification criteria, and current practice in the collection, analysis and reporting of industry statistics, and identifies direct and indirect industrial clusters based on the systems approach and input-output analysis. It also gives a snapshot of vital statistics on the identified clusters in terms of employment, establishments and turnover, with the aim of creating a performance measurement base and a way to monitor progress from time to time. Finally, a description of the Uganda construction industry in terms of its sectoral structure based on clients, inputs to production and outputs (consumption) is represented on a cluster map. This gives a global picture of the industry to enable policy makers and investors to intervene in weak sectoral clusters, enhance the efficient delivery of goods and services and improve overall performance.
Keywords: Industrial Clusters; Construction; Mapping; Input-Output Analysis.
BACKGROUND
This paper presents "The Uganda Construction Industry Cluster Map", part of a wider research project titled "Developing an Input/Output Cluster Map for the Construction Industry in Uganda" by the same authors, and discusses the theory behind the development of the map. The cluster map, a preliminary 'sectoral map' of the construction industry, defines the key industrial sectors, identifies the major segments, describes key industry players and institutions, and provides the basis for exploring relationships, innovation and information flows within the industry; it will also benchmark further development and industrial analyses.
1.1 Introduction
The construction industry in Uganda, which consists of firms mainly engaged in the construction of residential and non-residential buildings and other civil engineering facilities and services, is an important part of the economy, accounting for 8.5% of Gross Domestic Product in Uganda's financial year 2000/01, as shown in Table 1. The industry has important links with other sectors, and therefore its impact on the economy goes well beyond the direct contribution of construction activities. The main features of the performance of the construction industry can be examined by mapping the main flows into the industry from other industries, the direct value added by the industry to the economy as a whole, and the broad destinations of its outputs (consumption).
Table 1: Construction industry contribution to GDP

Year | 1996/97 | 1997/98 | 1998/99 | 1999/00 | 2000/01
Million shillings | 229,848 | 247,301 | 273,039 | 297,320 | 322,489
GDP (%) | 7.2 | 7.6 | 10.4 | 8.9 | 8.5
(Source: Background to the Budget 2001/02). By identifying key inputs (production factors) and outputs (consumption), it is easy to monitor performance and explain negative or positive change over time. It is therefore a necessity to have objective information so as to be able to make effective decisions and understand the variability of the industry’s processes from time to time.
1.2 Objective of the Study
The research was aimed at establishing the fundamental empirical and statistical data necessary to provide snapshots of the structure and performance of Uganda's construction industry in terms of investment, productivity and innovation. This was achieved through:
- establishing the size and sectoral structure of the construction industry in Uganda;
- identifying and defining the major inputs into the construction industry in Uganda;
- identifying and defining the current outputs of the construction industry;
- developing a cluster map based on the factors that impact most on the performance of the industry.
2.0 DEFINITION OF KEY TERMS
The key terms that need to be defined are: industrial cluster, input-output analysis, mapping, and construction. These terms need to be made clear from the start so as to give the study a clear direction.
2.1 Industrial Cluster
An industrial cluster is defined as a geographical concentration of interconnected companies, specialised suppliers, service providers, firms in related industries, and associated institutions (for example, universities, standards agencies and trade associations) in particular fields that compete but also co-operate (Porter, 1990).
2.2 Input-Output Analysis
Input-output analysis is a macro-economic technique used for understanding the structure of, and interdependencies between, sectors in an economy. This economic analysis can give an indication of the effect on the construction industry of a change in demand for, or output of, a good or service in an interlinked sector. Some early results in this work 'suggest that the construction sector follows the economic destiny of the manufacturing sector, its primary partner in economic growth and development' (Bon et al. 1990). The inputs into construction include the goods and services required for the execution of projects (production factors), whereas the goods and services produced and distributed for various consumption purposes constitute the outputs.
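For readers unfamiliar with the mechanics, the sketch below is a minimal, hypothetical illustration of the standard Leontief relation x = (I - A)^-1 d that underlies input-output analysis; the two-sector technical coefficients and demand figures are invented for illustration and are not Ugandan data.

```python
# Minimal Leontief input-output illustration with invented two-sector coefficients.
import numpy as np

# Technical coefficients A[i, j]: input from sector i required per unit of output of sector j.
# Sectors: 0 = construction, 1 = manufacturing (hypothetical values).
A = np.array([[0.10, 0.05],
              [0.30, 0.20]])

final_demand = np.array([100.0, 200.0])                       # hypothetical final demand by sector
total_output = np.linalg.solve(np.eye(2) - A, final_demand)   # x = (I - A)^-1 d

# Total output change in both sectors caused by a 10-unit rise in final demand for construction.
delta_output = np.linalg.solve(np.eye(2) - A, np.array([10.0, 0.0]))
print(total_output, delta_output)
```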
The emphasis in input/output research is primarily on the structure, conduct and performance of firms that compete with each other within a particular market, and on the associated policy considerations (Martin, 1993). The input-output model as a methodology deals with the performance of business enterprises and the effects of market structures on market conduct (pricing policy, restrictive practices, innovation) and on how firms are organised, owned and managed (Bancock et al. 1998). The most important elements of market structure in these models are the nature of demand (buyer concentration), the existing distribution of power among rival firms (seller concentration), entry/exit barriers, government intervention, and the physical structuring of relationships (horizontal and vertical integration) (Litman, 1998). The role of the input-output model is to give substance to the traditional neo-classical abstract concepts of market structures (Litman, 1998).
2.3 Mapping
Mapping involves developing a two-dimensional diagrammatic representation of the industrial cluster, accommodating official industry statistics, including the major actors, activities and geographical concentration. The mapping structure draws out a number of features of industry that have all too often been overlooked. It highlights the important relation between manufacturing and services, and sheds light on user-producer relations, linkages and potential avenues of innovation and diffusion across the economy. The advantage of mapping an industry in this way is that it provides a mechanism by which to group activities and explore their inter-relations, while at the same time preserving the traditional structure of the industry enough to allow a simple and fairly direct concordance with official national statistical categories.
2.4 Construction
Construction is defined as comprising the "erection, repair, and demolition of all types of building and civil engineering structures". This definition includes specialist sub-contractors in building and civil engineering, the direct labour departments of public authorities, on-site industrialised building, and plant hire with operatives, and it excludes off-site manufacture, the quarrying of raw materials and the manufacture of materials and components (ISIC, rev 3). The construction sector comprises a wide range of economic activity, from individual house building and repair to major engineering works. Construction activity is usually divided roughly equally between housing, non-residential building and civil engineering works. Although attention is mostly focused on new construction, the renovation and maintenance of existing structures accounts for almost 50% of total construction output in some of the more developed economies, and an even higher percentage of employment (ILO, 2002).
3.0 CLUSTER DEVELOPMENT IN THE CONSTRUCTION INDUSTRY
In developing a cluster of an industry, one strives to represent all the industries and entities that impact on its competitiveness. This means that, in addition to the head contractor, vital contributors such as the suppliers of specialised components, materials and services take their place. This broadens the context to give a much better insight into the performance of the industry. The building and construction industry cluster also
includes government and other institutions: the tertiary institutions, the agencies that set standards, the providers of vocational training, the regulators and the industry associations. These all help shape the industry by providing specialised research, information, technical support and training.
‘The advantage of cluster analysis is that it not only acknowledges the importance of what goes on within a firm, it also analyses the complete context in which a firm operates’ (Macarthur, 1999). An industrial cluster is different from the classic definition of industrial sectors because it represents the entire value chain of a broadly defined industry, from suppliers to end products, including supporting services and specialised infrastructure. The process cluster map has six analytical dimensions: the regulatory environment, supply networks, project-based firms, the property sector, technology support infrastructure, and information and knowledge flow. Using this model as a base, it is possible to map the industry to reflect the major participants and activities involved in its main sub-sectors. Cluster analysis, however, rarely conforms to standard industrial classification systems, and these systems, in turn, also fail to capture many important factors and relationships that either induce or inhibit competitiveness. This has made the economic performance of the industry difficult to quantify. The cluster map divides the industry's firms and businesses into three distinct but closely related sectors: the supply network, project-based firms and property.
Cluster analysis leads to a number of important insights into the dynamics which shape the competitiveness of industries and firms. While the internal operations of the firm remain the dominant factor in determining competitiveness, cluster analysis also provides insights into the factors outside the firm which shape firm and industry performance. The way in which firms interact among themselves and with the institutional environment affects the development of knowledge and the flow of information. Cluster analysis also demonstrates that government and other public bodies are involved in shaping industry performance. It takes an outward look at the industry, defining it beyond the traditional view that confines the industry to contractors and sub-contractors. Cluster analysis asserts that the scope of the industry is much broader, and in doing so gives a clearer insight into the industry's workings, performance and potential.
3.1 Developing an Input/Output Cluster
The analysis above is based on the process cluster, identifying the major actors in the industry in general. It does not show what the outputs (consumption or broad destination of products) of the industry are, an important element in industrial analysis. Merging the process cluster map and the concept of input/output analysis leads to an input/output cluster map. The difference from the process map is that the primary and intermediary inputs into industrial production are shown on one side, and the outputs (consumption/destination of products) into the economy and the environment on the other side. This perspective not only identifies the major actors in the industry but also shows the requirements of the production process and the distribution of products.
The method uses the "systems approach to management", which asserts that an industry can be better analysed if it is looked at as a conversion process requiring particular inputs on one side to yield outputs on the other side. The quality and quantity of the outputs will depend on the inputs into the conversion process. However, it would be wrong to dwell only on the material inputs into construction; policy, regulation and support institutions should also be included, since they influence industrial production in ways not common to other industries.
3.2 Inputs to Construction
Construction inputs in a typical old economy are labour, capital, energy, materials and equipment as the basic factors of production. However, the new economy dictated by capitalism and globalisation demands aspects of information technology, computer services and communication as vital direct resources of production. Indirect resources that contribute to industrial development include policy, regulation and standards, which may vary with project type, scale and the technologies involved, and are both qualitative and quantitative in nature. In most cases the qualitative part has been ignored, which presents a severe loophole in the production process. The quantitative inputs (goods and services) are primary and intermediary in nature. The intermediate inputs are goods and services provided by other production sectors, which are either local or foreign. The primary inputs are services provided by the primary production factors, i.e. labour and capital (Input/Output tables for Uganda, 1995). The various inputs are provided by different sectors of the economy that may be directly involved in production (project-based firms) or closely support the production process (the regulation, supply, property and technical support sectors). The contribution of each sector to production is necessary for the analysis of industrial performance.
4.0 THE UGANDA CONSTRUCTION INDUSTRY CLUSTER MAP
The development of the Uganda construction industry cluster map is based on the systems approach, with the broad objective of establishing the key input sectors, their current state in terms of number of establishments and employment totals, the sectoral structure, and the outputs classified in terms of contribution to the environment, to physical infrastructure and to the economy. This was based on data from financial year 2001/02. The key input sectors identified include technical support infrastructure, supply, project-based firms, regulatory institutions, and the property sector. The research established that these sectors are central to the development of the industry. Each of the sectors was considered in terms of number of establishments and employment totals. However, the reader should understand that the map gives a global picture of these sectors; further research is necessary to understand the dynamics that shape the daily operation of individual sectors. The outputs were classified in terms of the contribution of the industry to the environment, physical infrastructure and the economy. The contribution to the environment was mostly considered in terms of the quantity of waste generated by the industry. However, due to the lack of organised records, it was hard to establish the quantity of construction waste from project sites. The physical infrastructure contribution was considered in terms of buildings, road development, water coverage, drainage and
solid waste management, expressed in terms of quantity and value. The contribution to the economy was expressed in terms of the share of Gross Domestic Product.
4.1 Inputs
Technical support infrastructure comprises Research and Development (R&D) and Technical and Vocational Training institutions. In the R&D sector there were 35 establishments employing 439 persons; in Technical and Vocational Training there were 310 establishments employing 3,308 persons. The supply sector comprises materials and equipment suppliers. In materials supply there were 2,384 establishments in manufacturing employing 12,439 persons and 100 establishments in the wholesale sector employing 379 persons. In construction equipment supply there were 41 establishments in manufacturing employing 476 persons and 33 establishments in wholesale employing 259 persons. In the project-based firms sector there were 247 established contractors employing 7,340 persons and 147 established consultants employing 1,036 persons. There were 18 regulatory institutions spread across government, industrial, professional, and educational and research bodies. In the property sector there were 81 established firms in real estate employing 500 persons and 151 firms in the rental sector employing 405 persons.
4.2 Outputs
The output to the environment, as explained earlier, is considered in terms of the quantities of waste generated by construction-related activities, including the supporting industries. The assumption is that if the construction-related businesses were non-existent, such waste would not exist. The research established that steel mills, cement, lime, quarries and brick/tile production generate 50,000; 50,000; 50,000; 150,000; and 100,000 respectively. The contribution to physical infrastructure comprises the building, road, water, drainage and solid waste management sectors. According to data obtained from Kampala City Council for the year 2001/02, there were 1,028 buildings approved, valued at 223,617 million shillings. In the road sector, 226.78 km were upgraded, valued at 408,204 million shillings. Water coverage was estimated at 70 towns, valued at 138,060 million shillings. The contribution to the economy was determined using macro-economic statistical data published by the Ministry of Planning and Economic Development, expressed as a share of the national Gross Domestic Product (GDP). The construction sector (which only represents contracting firms) contributed 609,623 million shillings. The research attempted to classify the contribution in terms of construction from utility sectors and other support industries. Figure 1 (the cluster map) gives a summary of the findings described above. It should be noted that the information given here is a summary of a wider research study; further information can be obtained from the full publication.
Fig. 1: An Input/Output Cluster Map for the Uganda Construction Industry.
5.0 CONCLUSION
The research found that the attributes of Uganda's construction industry represent an old economy, in which firms operate as independent ventures within national boundaries, exhibit low competition amongst themselves, and have low capital with a high dependence on the primary factors of production (land, labour and capital). Such attributes can no longer sustain an economy, especially for a sector that is central to the delivery of goods and services. To realise the goal of a "sustainable economy", the Government, with the support of individual firms and industry associations, needs to work with the industry to lift its performance. The private sector continues to dominate the building sector, with the highest concentration in the residential sector. Fewer industrial facilities were set up in 2001/02, possibly because government has pulled out of the building sector while private developers could not raise the capital required for industrial establishments. The research gives a global picture of the industry that enables policy makers and investors to improve performance by strengthening the weak sectors that are vital to the delivery of goods and services.
REFERENCES
Building for Growth: An Analysis of the Australian Building and Construction Industries, Report, 1999.
Defining construction, http://www.ilo.org/public/english/dialogue/sector/sectors/constr.htm, June 2002.
Government of Uganda, Background to the Budget 2001/2002, Ministry of Finance, Planning and Economic Development.
Industry structure, http://www.ilo.org/public/english/dialogue/sector/sectors/constr.htm, June 2002.
Lowe, J. G., Construction Economics: Cost Studies, Module BSUP339, Department of Building and Surveying, Glasgow Caledonian University.
Mapping the Building & Construction Product System in Australia (AEGIS), University of Western Sydney, Macarthur, May 1999.
The Monitor Business Directory 2003, Contracting and consulting firms, materials and equipment supply firms in Uganda, publication of Monitor Publications Ltd; printed by AG Printing and Publishing Ltd.
The Republic of Uganda, Input/Output Tables for Uganda (1989 & 1992), Statistics Department, Ministry of Finance, Planning and Economic Development, September 1995, Entebbe, Uganda.
The Republic of Uganda, Small Towns Water Supply and Sanitation, Ministry of Water, Lands and Environment, Directorate of Water Development, a paper for the Joint GOU/Donor Review for the Water and Sanitation Sector, 24-26 September 2002, International Conference Centre, Kampala, Issue Paper 3a.
The Republic of Uganda, Statistical Abstract 2002, Uganda Bureau of Statistics, November 2002.
Uganda Investment Authority, The Building and Construction Industry Profile, prepared with support from the International Development Agency (IDA), published by Uganda Printing and Publishing Corporation.
Uganda National Bureau of Standards, List of ISO 9000 certified companies in Uganda as of May 2002.
REGIONAL FLOOD FREQUENCY ANALYSIS FOR NORTHERN UGANDA USING THE L-MOMENT APPROACH M. Kizza, Department of Civil Engineering, Makerere University, Uganda
H. K. Ntale, African Ministers' Council on Water, Kampala, Uganda A. I. Rugumayo, Department of Civil Engineering, Makerere University, Uganda
M. Kigobe, Department of Civil Engineering, Makerere University, Uganda
ABSTRACT
This study was aimed at carrying out a regional flood frequency analysis for northern Uganda using the L-moment approach. The procedure involved carrying out data quality checks followed by the definition of hydrologically homogeneous regions. A statistical test was employed to ensure that the regions obtained are indeed homogeneous. Four homogeneous regions were identified in the study area. The region covering most of north-eastern Uganda had no reliable data on which to base the analysis and was excluded from the study. The underlying distributions for each region were identified using the L-moment ratio diagram technique. The identified frequency distributions were then subjected to predictive ability tests to verify their robustness in simulating the catchments. Since the true values for the proposed regions are unknown, the Monte Carlo simulation technique was used to generate data for comparison, and the lognormal distribution emerged as the most robust for all regions. Regional parameters of the lognormal distribution were then estimated and the frequency curves drawn for each region. Regression analysis was carried out to generate models for estimating the mean annual flood from the catchment characteristics of area and mean annual rainfall. For some of the regions (the Mt Elgon and Aswa River drainage basins) models that incorporate both area and mean annual rainfall showed better estimation efficiency, while for others (the Lake Kyoga and West Nile drainage basins) area alone showed better estimation efficiency. Regression models based on the whole study area gave poor results and are not recommended for use by researchers.
Keywords: Regional flood frequency analysis, L-moments, homogeneity, linear regression models, frequency distribution, probability weighted moments, mean annual flood, index flood.
1.0 INTRODUCTION
Estimating the frequencies of extreme environmental events such as floods is difficult because extreme events are by definition rare and the relevant data record is often short. Flood frequency analysis is used to estimate the flood magnitudes required for the design and economic appraisal of civil engineering structures, to estimate flood risk, and for environmental purposes, particularly floodplain management. The analysis involves estimation of the flood magnitude corresponding to a given return period or probability of exceedance (Chow et al, 1988; Al-Khudhairy, 1997). A primary assumption of hydrological flood frequency analysis is that the series of events being considered have random magnitudes that are independent and identically distributed, and that the hydrologic system producing them is stochastic as well as space- and time-independent (Bobee & Rasmussen, 1995). The hydrologic data analysed should be carefully selected so that the assumptions of independence and identical distribution are satisfied (WMO, 1989). The analysis is statistical, using instantaneous recorded annual maximum data. The disadvantage of flood frequency analysis using at-site data is that annual flood series are usually too short to allow reliable estimation of extreme events. In addition, gauging stations are usually sparsely distributed, and transferring at-site results to ungauged sites may be unreliable. The difficulties relate both to the identification of the appropriate statistical distribution and to the estimation of the parameters of the selected distribution. A suitable method that can be used to obtain flood quantiles at ungauged sites is "regional analysis", whereby all data from sites within a region that has been judged to be homogeneous are used in estimating event frequencies for any site within the region (i.e. trading space for time). The method has been used successfully by many investigators, such as Meigh et al (1997), Farquharson et al (1992), Mkhandi & Kachroo (1997), Al-Khudhairy (1997), Drayton (1980) and Muhara (2001), to obtain flood estimates.
2.0 L-MOMENTS AND THEIR ADVANTAGES
L-moments are an alternative system for describing the shapes of probability distributions; they are modifications of the "probability weighted moments" of Greenwood et al. (1979). The purpose of L-moments (like ordinary moments) is to summarise theoretical probability distributions and observed samples. They can also be used for parameter estimation, interval estimation and hypothesis testing. L-moments have a theoretical advantage over conventional moments in being able to characterise a wider range of distributions and, when estimated from a sample, in being more robust in the presence of outliers in the data (Muhara, 2001). Since sample estimators of L-moments are always linear combinations of the ranked observations, they are less subject to bias than ordinary moments. This is because computing ordinary moment estimators such as skewness and kurtosis requires squaring and cubing observations, which gives greater weight to observations that are far from the mean.
Probability weighted moments of a random variable X with cumulative distribution function F(.) and quantile function x(u) were defined by Greenwood et al (1979) to be the quantities

\alpha_r = \int_0^1 x(u)\,(1-u)^r \, du, \qquad \beta_r = \int_0^1 x(u)\, u^r \, du    (1)
For a random variable X with quantile function x(u), the L-moments of X are defined as the quantities

\lambda_{r+1} = \int_0^1 x(u)\, P_r^{*}(u) \, du, \qquad r = 0, 1, 2, \ldots    (2)

where P_r^{*}(u) is the r-th shifted Legendre polynomial. In terms of probability weighted moments, the L-moments are given by

\lambda_{r+1} = \sum_{k=0}^{r} p_{r,k}^{*}\, \beta_k, \qquad p_{r,k}^{*} = (-1)^{r-k} \binom{r}{k} \binom{r+k}{k}    (3)

In most cases it is convenient to define dimensionless versions of the L-moments; this is achieved by dividing the higher-order L-moments by the scale measure \lambda_2. The L-moment ratios are therefore defined as

\tau_r = \frac{\lambda_r}{\lambda_2}, \qquad r = 3, 4, \ldots    (4)

L-moment ratios measure the shape of a distribution independently of its scale of measurement. The coefficient of L-variation is also defined as

\tau = \frac{\lambda_2}{\lambda_1}    (5)
3.0 ESTIMATORS OF L-MOMENTS AND PWMS
The above L-moments have been defined for a probability distribution, but in practice they must often be estimated from a finite sample. For an ordered sample x_{1:n} \le x_{2:n} \le \ldots \le x_{n:n}, an unbiased estimator of the probability weighted moment \beta_r is

b_r = n^{-1} \sum_{j=r+1}^{n} \frac{(j-1)(j-2)\cdots(j-r)}{(n-1)(n-2)\cdots(n-r)} \, x_{j:n}    (6)

Analogously, the sample L-moments are defined by

l_{r+1} = \sum_{k=0}^{r} p_{r,k}^{*}\, b_k, \qquad r = 0, 1, \ldots, n-1    (7)

The sample L-moment l_r is an unbiased estimator of \lambda_r. The sample L-moment ratios are defined by

t_r = \frac{l_r}{l_2}, \qquad r = 3, 4, \ldots \qquad \text{and} \qquad t = \frac{l_2}{l_1}    (8)
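As an illustrative sketch (not the code used in the study, and with an invented sample), Equations 6-8 can be evaluated directly from an ordered sample; the routine below computes the unbiased PWM estimators b_r, the first four sample L-moments, and the ratios t, t3 and t4.

```python
# Sample probability weighted moments and L-moments (Equations 6-8).
def sample_pwm(x, r):
    """Unbiased estimator b_r of beta_r from an ordered sample (Equation 6)."""
    x = sorted(x)
    n = len(x)
    total = 0.0
    for j in range(r + 1, n + 1):          # j runs from r+1 to n
        num, den = 1.0, 1.0
        for k in range(1, r + 1):
            num *= (j - k)
            den *= (n - k)
        total += (num / den) * x[j - 1]
    return total / n

def sample_lmoments(x):
    """First four sample L-moments computed from the PWMs (Equation 7)."""
    b0, b1, b2, b3 = (sample_pwm(x, r) for r in range(4))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

flows = [120.0, 95.0, 310.0, 150.0, 80.0, 210.0, 175.0, 60.0]  # hypothetical annual maxima
l1, l2, l3, l4 = sample_lmoments(flows)
t, t3, t4 = l2 / l1, l3 / l2, l4 / l2       # L-CV and L-moment ratios (Equation 8)
print(l1, l2, t, t3, t4)
```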
4.0 LINEAR REGRESSION MODELS
Sometimes we need to estimate flood quantiles for sites whose flow data are not available. When other catchment characteristics are available, for example rainfall, slope and area, we resort to linear regression models to estimate the relationship between the mean flow and the catchment characteristics. These models usually take the form Q̄ = F(C), where C represents the catchment characteristics. The form of the equation adopted depends on the amount of physical and hydrological data available as well as on the actual dependence between characteristics (Muhara, 2001).

A simple regression model is of the form

\bar{Q} = C A^{a} S^{s} R^{r}    (9)

where A is the catchment area, S is the catchment slope, R is the mean annual rainfall, and C, a, s and r are constants. The efficiency of a regression model in estimating Q̄ is assessed using the sample coefficient of determination and the standard error of estimate.
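A model of the form of Equation 9 is commonly fitted by taking logarithms, which makes it linear in the parameters. The sketch below is an illustrative fit on invented data (not the study's dataset), estimating log C, a and r for Q̄ = C·A^a·R^r by ordinary least squares.

```python
# Fit log(Q) = log(C) + a*log(A) + r*log(R) by least squares (cf. Equation 9).
import numpy as np

# Hypothetical catchments: area (km^2), mean annual rainfall (m), mean annual flood (m^3/s).
area = np.array([450.0, 1200.0, 300.0, 2500.0, 800.0])
rain = np.array([1.10, 1.35, 0.95, 1.25, 1.05])
maf = np.array([35.0, 110.0, 20.0, 210.0, 60.0])

X = np.column_stack([np.ones(len(area)), np.log(area), np.log(rain)])
coef, *_ = np.linalg.lstsq(X, np.log(maf), rcond=None)
logC, a, r = coef
print(np.exp(logC), a, r)   # C, area exponent, rainfall exponent

# Predicted MAF for a hypothetical ungauged catchment of 1000 km^2 with 1.2 m annual rainfall.
print(np.exp(logC) * 1000.0**a * 1.2**r)
```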
5.0 STUDY AREA
The study area occupies the area north of Lake Kyoga between latitudes 1°N and 4°N, covering a total area of approximately 108,000 sq. km. The elevation varies between 600 metres above sea level (amsl) at Nimule on the north-west border with Sudan and 3,040 m amsl at the
peak of the volcanic mountains of Moroto (NEMA, 1996). The study area mainly experiences one or two rainy seasons, lasting for 5-7 months from April to October depending on location. The mean annual rainfall varies between 745 and 1,500 mm, with the eastern part receiving less than the western part (DWD, 2001).
6.0 ANALYSIS AND DISCUSSION Analysis of the data was done progressively from screening, through identification of homogeneous regions, choice and estimation of regional frequency distributions for each region to derivation of regression models for prediction of mean annual flood (MAF) for the derived regions.
6.1 Data Screening
Data screening was aimed at checking whether the data collected at each site are actually a true representation of the quantity being measured and come from the same frequency distribution (WMO, 1994). Three types of checks were used to achieve this: checks on individual data values to reveal gross errors (e.g. negative flows, very high flows); checks of each site's data separately with the aim of identifying outlying and repeated values; and comparison between data from different sites to check for correlation. The discordancy measure proposed by Hosking and Wallis (1993) was used to identify sites which are grossly inconsistent with the group as a whole; these were then removed from the dataset. At the end of the screening exercise, the resulting dataset comprised 25 sites with a total of 666 years of record. The average record length was 26.6 years. None of the stations had complete records, with most of them missing some records from 1976 to 1989. River Manafwa had the longest record (46 years) while River Akokorio had the shortest (12 years).
6.2 Identification of Homogeneous Regions
The objective was to form groups of sites whose frequency distributions are identical apart from a site-specific scale factor. The cluster analysis approach, a standard method of statistical multivariate analysis for dividing a dataset into groups, was used in the study. A data vector (made up of site attributes) was associated with each site, and sites were partitioned into groups according to the similarity of their data vectors. The site attributes used in the study included catchment area, gauge elevation, gauge latitude and gauge longitude. The heterogeneity measure proposed by Hosking and Wallis (1993) was used to check the homogeneity of the groups of sites at all stages. The premise was that all sites in a homogeneous region have the same L-moments. However, the sample L-moments will be different
due to sampling variability. The heterogeneity measure therefore compares the between-site variation in the sample L-moments with what could be expected if the region were homogeneous. The approach used in computing the heterogeneity measure was to simulate a large number N_sim of realisations of a region with N sites, each having its own frequency distribution. The simulated regions are homogeneous and have no cross-correlation or serial correlation. For each simulated region the dispersion V was obtained. From the simulations, the mean \mu_V and the standard deviation \sigma_V of the N_sim values of V were determined. The heterogeneity measure was then calculated from

H = \frac{(\text{observed dispersion}) - (\text{mean of simulations})}{(\text{standard deviation of simulations})}    (10)

For values of H \le 2 the region was taken to be acceptably homogeneous; otherwise it is heterogeneous.
Four hydrologically homogeneous regions were delineated from the study area. Table 1 shows the description of the regions, while Figure 1 shows the delineated regions.

Table 1: Description of the delineated homogeneous regions and their regional average L-moment ratios

Region | No. of sites | Description | Area (km²) | CV (τ) | Skew (τ3) | Kurt (τ4)
NU1 | 5 | Mt Elgon drainage area | 4,500 | 0.268 | 0.154 | 0.153
NU2 | 9 | L. Kyoga drainage area, mainly swampy | 25,400 | 0.535 | 0.396 | 0.239
NU3 | 3 | Aswa river basin | 36,800 | 0.419 | 0.361 | 0.258
NU4 | 4 | West Nile drainage basin; rivers mainly flow into the Albert Nile | 10,700 | 0.390 | 0.391 | 0.268
NU5 | 0 | Karamoja area; there was no reliable data to facilitate analysis for this region | 21,600 | N/A | N/A | N/A
Figure 1: Map of Uganda showing the delineated hydrologically homogeneous regions of the study area.

6.3 Choice of Regional Flood Frequency Distribution
Dimensionless L-moment diagrams, i.e. plots of L-skewness against L-kurtosis, were used to identify the statistical distribution from which the flood samples might have been drawn. Based on the work done by Mkhandi and Kachroo (1997) and Hosking and Wallis (1993), six distributional forms were considered for evaluation as possible candidate distributions for modelling flood flows in the proposed homogeneous regions in northern Uganda. Two-parameter distributions were not considered because they have been shown to perform poorly when used for regional data (Hosking and Wallis, 1997). The distributions considered were: Generalized Logistic (GLO), Generalized Extreme Value (GEV), Lognormal (LN3), Pearson Type III (PE3), Generalized Pareto (GPA), and Wakeby with lower bound (WAK5). The L-moment ratio diagram (Figure 2) showed that the GLO, GEV and LN3 distributions give acceptable fits for all four regions. In addition, the PE3 distribution gives an acceptable fit for NU1 while the GPA distribution gives an acceptable fit for region NU2.
Figure 2: Regional average L-moment ratios for the northern Uganda stream flows (1, ..., 4 represent NU1, ..., NU4).

Predictive ability tests were then used to select the distribution that is most robust in quantile estimation. The robustness of each frequency distribution was assessed using the bias (ME) and the root mean square error (RMSE) of the estimated quantile Q_T:

ME = E(\hat{Q}_T - Q_T)    (11)

RMSE = \left[ E(\hat{Q}_T - Q_T)^2 \right]^{1/2}    (12)
where \hat{Q}_T is the quantile estimate obtained using a given procedure and Q_T is the true value of the quantile as obtained by simulation. The robustness of each distribution in estimating the flood quantiles for each of the regions would ordinarily be evaluated by comparing the true values of the quantiles with those estimated using the proposed regional distributions. Since the true values of the quantiles are unknown, Monte Carlo simulations were used to generate data for a region with the same number of sites and the same record lengths. The simulation was carried out 10,000 times, and the errors in the simulated quantile estimates were computed each time and then averaged to yield estimates of ME and RMSE. The parameters were estimated using the method of probability weighted moments (PWM), as it suffers less bias and variance and also eliminates the negative effects of outliers in the samples. The results showed that the LN3 distribution was the most robust for estimating floods in northern Uganda. Table 2 shows the parameters for each region.
Table 2: Parameters of the regional lognormal distribution for northern Uganda

Region | Location (ξ) | Scale (α) | Shape (κ)
NU1 | 0.926 | 0.455 | -0.318
NU2 | 0.644 | 0.705 | -0.842
NU3 | 0.742 | 0.582 | 0.763
NU4 | 0.743 | 0.517 | -0.832
6.4 Estimation of the Frequency Distribution and Regional Growth Curves
The index flood method was used for estimating the frequency distributions, the basic assumption being that the frequency distributions for the different sites in a group are identical apart from a site-specific scale factor called the index flood. That is to say

Q_i(F) = \mu_i \, q(F)    (13)

where \mu_i is the index flood, taken as the mean of the at-site frequency distribution, and q(F) is the regional growth curve, a dimensionless quantile function common to every site. The index flood procedure assumes that for a given region there is no dependence between observations at different sites and no serial correlation between observations at the same site. These assumptions are usually not satisfied for environmental data. However, Hosking and Wallis (1997) have shown that the effect on the variability of the regional parameters of some heterogeneity between sites is always greater than the effect of inter-site dependence and correlation, and the index flood procedure can still be used to estimate the regional frequency distribution. The regional parameters were then used to compute standardised quantile estimates for different return periods, and the growth curves for each region were constructed together with their 90% error bounds. An investigation of the slopes of the derived frequency curves for each region indicates that the slopes are governed by the variability of the flood regimes: if the coefficient of variation (CV) of the flood flows is high (Table 1), then the frequency curve for that particular region will be steep (Figure 3). The apparent similarity between the regional curves for regions NU3 and NU4 suggests that on statistical grounds there is little justification for treating them as distinct, and that they could be merged to form a larger group. However, this argument should be treated with caution, because the absence of statistically significant differences between regional growth curves may merely reflect insufficiency of data. A scrutiny of Figure 3 reveals that the error bounds increase with return period; they are a minimum for return periods of about 10 years and grow thereafter due to extrapolation.
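To show how the index flood relation is applied in practice, the short sketch below is an illustrative example (the growth factors and the index flood value are invented, not taken from the regional curves derived here): a dimensionless regional growth curve q(F) is rescaled by a site's index flood to obtain site-specific quantiles, as in Equation 13.

```python
# Site quantile estimation with the index flood method (Equation 13): Q_i(F) = mu_i * q(F).
# Hypothetical regional growth factors q(T) for selected return periods T (dimensionless).
growth_curve = {2: 0.90, 10: 1.60, 50: 2.40, 100: 2.80}

index_flood = 85.0   # hypothetical site mean annual flood, m^3/s (e.g. from a regression model)

site_quantiles = {T: index_flood * q for T, q in growth_curve.items()}
for T, q in site_quantiles.items():
    print(f"T = {T:>3} years:  Q = {q:6.1f} m^3/s")
```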
Figure 3: Estimated regional frequency curves for regions NU1, NU2, NU3 and NU4 with their 90% error bounds.
6.5 Prediction of Mean Annual Flood
Relationships between the mean annual flood (Q̄), in m³/s, and the catchment characteristics of area (A), in km², and mean annual rainfall (R), in m, for each of the four regions plus the study area as a whole were obtained by regression analysis using the simple linear regression model. An assessment of the prediction efficiency for each of the regions and for the study area as a whole was also carried out. A forward stepwise regression was performed. The coefficients and variables in the final regional equations were selected based on the following criteria:
(i) The coefficient of correlation between the dependent and independent variables had to be significantly high.
(ii) The standard error of the estimate had to be a minimum.
(iii) The final predictor variables had to be independent of each other.
(iv) Entry into the regression equation had to be significant at the 5% level using the F-ratio.
The prediction efficiency for the study area as a whole was less than that of the models generated for each of the regions. This could be due to differences in the flood-generating mechanisms across northern Uganda as a whole but relative similarity within each of the regions. Table 3 below shows the regression models that performed best for each of the regions.
Table 3: Best-performing regression models for each region

Region | Equation | R²
NU1 | Q̄ = 0.029 A^1.197 | 0.5482
NU2 | Q̄ = 0.3086 A^0.420 R^6.208 | 0.7740
NU3 | Q̄ = 0.0127 A^1.070 | 0.5743
7.0 CONCLUSIONS
The main aim of regionalisation is to provide a model for estimating flood frequency curves at ungauged sites or at gauged sites with short records. Regional flood frequency analysis augments the data from the site of interest with data from other sites that are judged to have frequency distributions similar to that of the site of interest. In practice, frequency distributions at different sites are not exactly identical. Nevertheless, a well conducted regional flood frequency analysis will yield quantile estimates that are accurate enough for most applications. This is especially true for the study area, which has a limited number of gauging stations with limited data. By lumping together sites that are deemed to have similar flood generation mechanisms (homogeneous regions), more reliable models were obtained for estimating flood flows. A number of distributions, including the generalised logistic, generalised extreme value, lognormal, Pearson Type III and generalised Pareto, provided good fits to the data in northern Uganda. The suitability of each distribution was assessed based on its robustness in estimating the regional flood quantiles. A robust procedure is one that yields quantile estimates whose accuracy is not seriously degraded when the true physical process deviates from the model's assumptions in any plausible way. Since the true values of the quantiles are unknown, controlled Monte Carlo simulations were carried out to generate data, which were then compared with the quantile estimates from each of the proposed regional distributions to assess their robustness. The lognormal distribution proved to be the most robust in predicting the flood quantiles in northern Uganda.
Regions NU3 and NU4 gave similar regional growth curves. This implies that on statistical grounds there is little justification for considering them as distinct, and they could be merged and treated as one. However, the two growth curves were retained in this study because the absence of statistically significant differences may merely reflect insufficiency of data. Based on the R² value and the standard error of estimate, the results of the regression analysis showed that in two of the regions, NU1 and NU3, area alone was a better estimator of the MAF, while in the other two a combination of area and mean annual rainfall gave better estimates. However, the generated regression models generally showed poor estimation efficiency for the MAF, with low values of R² and high values of the standard error of estimate.
8.0 RECOMMENDATIONS
During the process of data quality checking it was discovered that a major source of discrepancies is rating curves that are not well defined for high discharges. There is, therefore, a need for the Water Resources Management Department of the Ministry of Water, Lands and Environment of Uganda to work out a way of improving the rating curves. There is also a need to collect more catchment information that would help in defining homogeneous regions with more certainty. The results obtained from this study were based on the data that were available for the analysis. The distribution of the stations used in the analysis varied from one region to another. In particular, the region covering most of Karamoja had no reliable data on which to base the analysis. There is, therefore, a need to improve the results presented in this study in future when more data become available. In this study only the statistical frequency analysis approach was used to develop suitable flood estimation procedures for northern Uganda. Further research work is required to extend the analysis to other methods such as rainfall-runoff modelling.
REFERENCES
Al-Khudhairy, D.H.A. (1997) Regional Flood Frequency Analysis. European Commission, Joint Research Centre, Institute for Systems, Informatics and Safety. http://poplar.sti.irc.it/~ublic/iain/floods/re~ional.html. Accessed on 26th Oct 2001.
Bobee, B. and Rasmussen, P. (1995), Recent advances in flood frequency analysis, Reviews of Geophysics, vol. 33, suppl.
Chow, V. T., Maidment, D. R. and Mays, L. W. (1988). Applied Hydrology. McGraw Hill, Inc., New York. 572 p.
Drayton, R. S., et al. (1980). A regional analysis of river floods and low flows in Malawi. Inst. of Hyd., Malawi Water Resources Division. Report No. 72.
Directorate of Water Development (2001), Hydroclimatic Study, Ministry of Water, Lands and Environment, 2001 (unpublished report).
Farquharson, F. A., Meigh, J. R. and Sutcliffe, J. V. (1992), Regional flood frequency analysis in arid and semi arid areas, Journal of Hydrology, Vol. 138, p. 487-501.
Greenwood, J. A., Landwehr, J. M., Matalas, N. C. and Wallis, J. R. (1979). Probability weighted moments: Definition and relation to parameters of several distributions expressible in inverse form. Water Resources Research, 15, 1049-54.
Hosking, J. R. M. and Wallis, J. R. (1993). Some statistics in regional frequency analysis, Water Resources Research, Vol. 29(2), p. 272-281.
Hosking, J. R. M. and Wallis, J. R. (1997). Regional frequency analysis: an approach based on L-moments. Cambridge University Press, Cambridge, U.K., 224 p.
Meigh, J. R., Farquharson, F. A. K. and Sutcliffe, J. V. (1997), A worldwide comparison of regional flood estimation methods and climate, Hydrological Sciences Journal, Vol. 42(2), p. 225-244.
Mkhandi, S. and Kachroo, R. (1997), Regional flood frequency analysis for Southern Africa, Southern Africa FRIEND: IHP IV Technical Documents in Hydrology No. 15, p. 130-150.
Muhara, G. (2001), Selection of flood frequency model in Tanzania using L-moments and the region of influence approach, 2nd WARFSA/WaterNet Symposium: Theory, Practice, Cases; Cape Town, 2001.
NEMA (1996), State of the environment report for Uganda 1996, National Environment Management Authority, Ministry of Water, Lands and Environment.
QUALITATIVE ANALYSIS OF MAJOR SWAMPS FOR RICE CULTIVATION IN AKWA-IBOM, NIGERIA
C.O. Akinbile, Department of Agricultural Engineering, Federal University of Technology, Akure, P.M.B. 704, Akure, Ondo State, Nigeria. [email protected]
A.S. Oyerinde, Department of Agricultural Engineering, Federal University of Technology, Akure, P.M.B. 704, Akure, Ondo State, Nigeria. [email protected]
ABSTRACT
Ten swamps located at different areas along major rivers of Akwa-Ibom State were sampled and analyzed for the purpose of domestic and irrigation uses. These are Ikpa East, Nwaniba, Ibiaku Uruan, Afaha Nsai and Nkana. Others are Itu Cross, Enyong Creek, Okoroma, Ikpa Uruan and Idim Ntouro. The suitability or otherwise of their respective soils for rice farming was also determined. All the water samples analyzed were found to be non-saline and to have low sodium concentrations. The samples also contained no excessive nutrients and had only minute traces of heavy metals, well below levels toxic to plant growth or soil fertility, with the exception of the Nwaniba swamp which had an iron (Fe) value of 1.04 mg/L, higher than the 1.0 mg/L maximum permissible by the WHO. Though not quite harmful, this is undesirable on aesthetic grounds. As for the bacteriological assay, 75% of the samples had Escherichia coli, which provides definite evidence of faecal pollution, making them unsuitable for domestic purposes. The soils of the different swamps evaluated were found to be poorly drained clayey loam soils, making them suitable for rice cultivation. The results could be used as a basis for planning, policy formulation and implementation of rice irrigation projects along these rivers for food security in the region and the country as a whole. It is also suggested that further research be conducted to determine the crop evapotranspiration and consumptive water use of rice in order to minimize waste and improve yield.
Keywords: Swamp, Rice, Irrigation, Water, Akwa-Ibom, Nigeria.
1.0 INTRODUCTION
Water has been known to sustain plant and human life and its proper treatment has also been beneficial to mankind. It is a basic requirement for household consumption and for agricultural, industrial and institutional uses (Twort et al, 1987). Swampland is usually a low-lying parcel of land that is completely submerged in water. The land, damp and muddy, is usually filled by flooding. Flooding brings about a series of physical, chemical and biological changes which provide a completely different set of soil-plant relationships, causing entrapped air to explode in soil pores and decreasing permeability and aggregate stability. These conditions favour rice cultivation, mostly in the tropics. Irrigation is an age-old art of water application to crops to supplement rain deficit and could be described as any process other than natural precipitation that supplies water to crops, orchards, grass or any other cultivated plants (Stern, 1994). About 250 million hectares, representing 17 percent of global agricultural land, are irrigated worldwide today, nearly
five times more than at the beginning of the 20th century. This contributes about 40 percent of the global food production of cereal crops (IFPRI, 2002). Irrigation, especially in hot climates, is an increasingly common practice to meet the food requirements of the world's growing population, and water consumption by irrigation has seen considerable growth in recent years because of the potential for enhancing crop yield (Twort et al, 1987). In countries where water is in short supply, sewage effluents are disinfected and then used for irrigation, while the physical and chemical qualities of irrigation water are often the criteria for suitability. In swampland, surface water usually accumulates in the form of rainwater or floodwater and, depending on the time of the year, could be used for swampland rice cultivation. Rice cultivation has been practised for a long time. It is the main cereal crop grown in many parts of the world. It is an important source of food for humans and forms part of animal feed. Rice is one of the staple foods most widely consumed in Nigeria and its consumption cuts across every tribe, sex, colour or status as it is generally accepted (Ngambeki and Idachaba, 1985). Rice provides 27 percent of dietary energy supply and 20 percent of dietary protein intake in the developing world. It is cultivated in over 113 countries and is the staple food for over half of the world's population (This Day, 2004). Rice is typically a swamp plant, which is well adapted to growing in naturally flooded places such as riverbanks, valleys, brackish and fresh water as well as mangrove swamps (Akinyosoye, 1985). Its importance to the Asian economy is tremendous, as about 80 percent of the irrigated area there is under rice (Seckler et al, 1998). In West Africa, upland rice is grown on well-drained land while swamp rice is cultivated in swamps. Rice thrives well on wet soils with low pH but rich in mineral salts and matures in 5-7 months after sowing. It suffers from moisture stress and thus water must be kept between field capacity and flooding. In flooded conditions, rice has the ability to oxidize its own rhizosphere, eliminate water stress, make weed control easier and make useful chemical nutrients of the soil available as the pH tends to neutrality (Ngambeki and Idachaba, 1985). The water quality parameter of major importance is salinity, measured by either electrical conductivity (EC) or total dissolved solids (TDS), which seriously affects crop growth. When present in high concentration, a TDS level of > 1600 mg/L might well reduce crop yield, but this is unlikely to occur unless the water is saline or large quantities of salts from industrial effluents have leached into the water. The relative proportion of sodium to other cations, as measured by the sodium adsorption ratio (SAR), is important in preventing the soil from becoming more alkaline. The objective of this study, therefore, is to determine, using various analyses, the characteristics of the swamp waters and their respective soils with a view to locating suitable swamps for the cultivation of rice along major rivers in Akwa-Ibom State, Nigeria.
2.0 MATERIALS AND METHODS
The swamp waters were analyzed to determine their suitability for domestic and/or irrigation purposes. The samples were collected from ten (10) different swamps in Akwa-Ibom State, Nigeria. The swamps were located at Ikpa East, Nwaniba, Ibiaku Uruan, Afaha Nsai,
Nkana, Itu Cross, Enyong Creek, Okoroma, Ikpa Uruan and Idim Ntouro. Two (2) litres of each sample were analyzed to determine the physical characteristics such as colour, temperature, turbidity, odour and taste, and the chemical and bacteriological characteristics, which included the pH, electrical conductance, suspended solids, hardness and sodium adsorption ratio (SAR), with titrimetric methods used for cations and anions. Flame photometry was used in determining sodium (Na) and potassium (K), just as atomic absorption spectrophotometry was used for the determination of Zn and Fe. As for the bacteriological assay, the samples were sent to the medical laboratory for isolation and identification of Escherichia coli (E. coli) and coliform bacteria in line with the guidelines of AOAC (1990). Samples of bottled natural water (Btw) and of a water suspected to be seawater (the Eket sample) were also analyzed and used as controls for the two extreme cases of the experiment. As for the soils at the sampled sites, physical soil measurements, namely permeability by the auger hole method, infiltration by the double ring method and soil texture by the hydrometer method, together with some mineral constituents, were carried out to ascertain the suitability of such soils for cultivation.
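The sodium adsorption ratio referred to above is conventionally computed as SAR = Na / sqrt((Ca + Mg)/2), with all concentrations expressed in milliequivalents per litre. The sketch below shows the conversion from concentrations reported in mg/L; the function name is ours, and the example uses the Ikpa East cation values from Table 3, which reproduces the SAR of about 0.39 reported for that swamp in Table 4.

def sar(na_mg_l, ca_mg_l, mg_mg_l):
    """Sodium adsorption ratio from Na, Ca and Mg concentrations in mg/L.

    Concentrations are first converted to milliequivalents per litre using
    equivalent weights: Na 23.0, Ca 20.04 (40.08/2), Mg 12.15 (24.31/2).
    """
    na = na_mg_l / 23.0
    ca = ca_mg_l / 20.04
    mg = mg_mg_l / 12.15
    return na / (((ca + mg) / 2.0) ** 0.5)

print(round(sar(4.0, 2.0, 3.6), 2))   # Ikpa East: Na 4.0, Ca 2.0, Mg 3.6 mg/L -> ~0.39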
2.1 Study Area
Akwa Ibom State is one of the six south-south states in the eastern region of Nigeria. It was carved out of the old Cross River State in September 1987. It has Uyo as the state capital and is located between latitudes 4°53' and 5°08' North and longitudes 7°45' and 8°05' East of the Greenwich meridian. The state has a population of over one million (1992 National Census) with a density of 1,149 persons per square km. The annual rainfall varies from 1900 mm to 2600 mm, while the mean maximum temperature varies from 31°C to 33°C and the mean minimum temperature ranges between 26°C and 27.6°C. It has two distinct seasons: the rainy season, which spans from March to November, and the dry season from December to February. The climate of the state is influenced by both the tropical continental air mass from the Sahara desert and the tropical maritime air mass from the Atlantic Ocean. Akwa Ibom is bounded by Abia State in the north, Rivers State in the west, Cross River State in the east and the Atlantic Ocean in the south. It is predominantly an estuarine delta state which leads to the Atlantic. The vegetation is tropical rain forest. Farming, mainly practised at subsistence level, is the major occupation. The landscape is relatively flat and the elevation varies between 80.3 and 91.5 m above mean sea level.
3.0 RESULTS AND DISCUSSION
3.1 Soil
The suitability of land for any given use is the result of a combination of several factors such as soil characteristics, topography, climate and susceptibility to flooding. These are the main factors considered in this study. Table 1 contains average values of the textural analyses and classes of soils at the ten swamp sites. From the table, the soils are predominantly clays with poor natural drainage. The soils are classified as imperfectly, poorly or very poorly drained according to their degree of wetness. The poorly drained clays, which are predominant in virtually all the swamp areas, have a topsoil of about
13 cm thick consisting of dark greyish brown clay with a few medium-sized yellowish red mottles and a moderately developed medium blocky structure. The subsoil is greyish brown clay brightly mottled with strong brown. The measured saturated hydraulic conductivity is moderately rapid above 20 cm. The minimum infiltration rate from 6 determinations was 1.8 cm/hr after 1½ hours, a very high figure for a clay soil. This gives an indication of the initial infiltration rate and, because of extensive cracking, it is very doubtful whether the true basic infiltration rate could be measured in the dry season without 2 or 3 days of pre-wetting. The nutrient status indicated that responses to nitrogen-, phosphorus- and potassium-containing fertilizers are likely; in view of the already low pH values, acidifying fertilizers must be avoided. The soil is not suitable for all crops but has some potential suitability after flood protection and drainage, which is most likely to be feasible on all the soils except the Ikpa East swamp. Some of these soils are already covered by rice and fallowed rice farms, with some forest in the Enyong Creek and Nkana swamps. The swamp rice assessment assumes control of local flooding, i.e. flooding caused by streams entering the swamps directly from the upland to the south and west. The Idim Ibom River was not included because its flow is too large to be controlled by minor flood protection works.
Table 1: Soil physical measurements
Swamp               %CS     %FS     %SILT   %CLAY   Texture
1. Ikpa East        30.3    8.9     10.0    50.8    C
2. Nwaniba          30.8    1.8     16.6    50.6    SCL
3. Ibiaku Uruan     45.2    10.6    15.5    34.6    SCL
4. Afaha Nsai       30.0    6.4     23.4    40.0    CL
5. Nkana            61.2    2.0     16.0    20.8    SC
6. Itu Cross        40.0    6.4     19.4    34.2    SCL
7. Enyong Creek     54.8    5.8     18.0    21.4    SCL
8. Okoroma          59.4    1.2     18.0    21.4    SCL
9. Ikpa Uruan       46.4    11.4    12.0    30.2    SCL
10. Idim Ntouro     10.4    5.8     23.0    60.8    C
Key: CS - coarse sand; FS - fine sand; C - clay; CL - clay loam; SC - sandy clay; SCL - sandy clay loam; SL - sandy loam.
3.2 Water Quality
The results of the physical, chemical and bacteriological analyses of the water samples, including the controls, are presented in Tables 2, 3 and 4. In considering the qualities of the water samples, it is necessary to relate the qualities to the intended use; in this case, the discussion is focused on use for domestic and/or irrigation purposes.
Table 2: Physical characteristics of the swamp water samples

Swamp               Temp (°C)   Turbidity   Colour      Taste       Odour
1. Ikpa East        27.2        Nil         Nil         Nil         Offensive
2. Nwaniba          26.5        Nil         Coloured    Tasty       Offensive
3. Ibiaku Uruan     25.7        Turbid      Nil         Tasteless   Offensive
4. Afaha Nsai       25.9        Nil         Nil         Tasteless   Offensive
5. Nkana            27.4        Nil         Nil         Nil         Inoffensive
6. Itu Cross        28.8        Turbid      Nil         Nil         Inoffensive
7. Enyong Creek     29.8        Nil         Coloured    Tasty       Offensive
8. Okoroma          26.1        Nil         Nil         Nil         Inoffensive
9. Ikpa Uruan       25.8        Nil         Nil         Tasteless   Inoffensive
10. Idim Ntouro     27.3        Nil         Coloured    Nil         Inoffensive
Table 3: Chemical analyses of the samples (all values in mg/l; Na, K, Mg, Ca are cations; NO3-, PO43-, SO42-, S2-, HCO3-, Cl- and CN- are anions; NH4+, Fe and Zn are listed as others)

Swamp / sample        Na     K      Mg     Ca     NO3-   PO43-  SO42-  S2-    HCO3-  Cl-    CN-    NH4+   Fe     Zn
1. Ikpa East          4.0    1.8    3.6    2.0    0.09   7.4    9.3    0.76   6.1    17.8   0.02   0.75   0.29   0.21
2. Nwaniba            1.6    6.8    2.4    4.0    0.09   1.2    9.9    0.84   6.1    17.8   0.00   0.49   1.04   0.28
3. Ibiaku Uruan       0.8    4.0    1.2    2.0    0.00   6.8    5.7    2.43   6.1    14.2   0.03   0.83   0.25   0.14
4. Afaha Nsai         0.8    1.0    2.4    4.0    0.00   8.7    6.9    3.18   6.0    17.8   0.03   0.97   0.29   0.14
5. Nkana              0.8    6.0    3.6    4.0    0.00   14.6   7.5    1.68   0.9    17.8   0.02   0.78   0.71   0.14
6. Itu Cross          0.8    1.8    1.2    6.0    0.00   14.0   6.9    1.87   27.5   17.8   0.03   0.49   0.13   0.14
7. Enyong Creek       0.8    2.1    1.2    4.0    0.09   12.1   6.6    1.99   24.4   14.2   0.02   0.38   0.21   0.14
8. Okoroma            0.4    2.3    2.2    1.8    0.00   9.0    10.2   2.33   6.2    10.7   0.00   0.51   0.50   0.26
9. Ikpa Uruan         0.8    0.9    2.4    2.0    0.00   8.7    9.3    1.95   6.1    17.8   0.00   0.49   0.42   0.28
10. Idim Ntouro       0.4    3.0    1.2    4.0    0.00   8.7    4.5    0.00   9.1    14.2   0.40   0.49   0.13   0.14
11. Bottled water     14.4   2.4    3.6    16.0   0.09   0.22   36.3   6.2    6.9    0.20   18.3   26.6   0.00   1.46
    (Btw)
12. Seawater (Eket)   4x00   230    493.0  156    0.00   0.21   62.4   3.15   91.5   9310   0.04   3.30   0.13   0.35
Table 4: Some chemical and bacteriological analyses of the swamps

Swamp / sample        pH     E. cond.    Total      Sus. solids   E. coli      Coliform     SAR
                             (mS/cm)     hardness   (mg/l)        per 100 ml   per 100 ml
1. Ikpa East          6.30   0.280       20         250           50           0            0.39
2. Nwaniba            5.89   0.012       20         250           0            5000         0.16
3. Ibiaku Uruan       5.89   0.008       10         450           5000         0            0.11
4. Afaha Nsai         5.75   0.007       20         300           50           0            0.08
5. Nkana              6.11   0.030       25         400           0            5000         0.07
6. Itu Cross          6.29   0.039       20         450           500          0            0.80
7. Enyong Creek       6.27   0.039       15         400           500          0            0.07
8. Okoroma            6.53   0.035       7          240           600          5000         0.05
9. Ikpa Uruan         6.31   0.009       15         250           400          0            0.09
10. Idim Ntouro       6.05   0.006       15         250           500          0            0.05
11. Bottled water     7.05   0.133       55         0             0            0            0.84
    (Btw)
From Table 2, most of the samples were colourless though not crystal clear, perhaps due to the continuous movement (flow) of the water. The coloured samples from Nwaniba, Enyong Creek and Idim Ntouro may have been caused by humic and peaty material and naturally occurring mineral salts such as iron and manganese from the upland of such swamps, though the iron present is below the acceptable level of 1.0 mg/L with the exception of the Nwaniba swamp, which is higher. This is a clear indication that the swamps are good for irrigation but may pose a health hazard if the water is drunk. Taste and odour, a subjective test, were also assessed. Most of the swamps were tasteless and inoffensive, which tends to suggest that they are good enough for domestic uses, though the other chemical and bacteriological analyses did not prove them to be. The offensive nature of Nwaniba, Enyong Creek and Afaha Nsai signifies the presence of decomposed organic material and/or decaying vegetation. Some odours are indicative of increased biological activity and the combined perception of the substances detected. Turbidity is an indication of the clarity of water, just as the degree of disorderliness is turbulence. Since turbidity is caused by material in suspension, it is difficult to correlate it with the quantitative measurement of suspended solids. All the samples had over 5 nephelometric turbidity units (NTU) and almost half of the samples were turbid. This signifies the presence of suspended particles and flocculants in the samples. As for suspended solids, the samples are high in suspended solids, making them unsuitable for drinking but good for irrigation. Though none of the samples reaches the maximum permissible level of 500 mg/L, the samples from Ibiaku Uruan and Itu Cross are as high as 450 mg/L (Table 4). This is similar to the case of turbidity, and the content varies with flow and season. From Table 2, high temperatures were noticed in all samples, which range between
25 and 30°C, which confirms the presence of some biological species and their rates of activity in such water. Temperature has an effect on most chemical reactions that occur in natural water systems and on the amount of dissolved gases. The pH of the water samples ranged from 5.75 to 6.53, except for the bottled water and seawater used as controls, with values of 7.05 and 7.40. The permissible pH range of 6.5-8.5 given by the World Health Organization for domestic purposes (WHO, 1996) indicates that the samples are acidic (though mildly) and theoretically not suitable for domestic consumption (Table 4). Also from the table, the high value of electrical conductivity (EC) indicated that the Eket sample, which is suspected to be seawater, is very saline, while all the others, which have very low EC values, are soft and therefore good for domestic use. Water from the Eket sample is not suitable for irrigation due to its high EC of 27.5 millisiemens/cm and high sodium hazard with SAR > 26; it is put in the very high salinity and very high sodium hazard class (C4-S4) and is hence not suitable for irrigation. However, all the other samples analyzed have low electrical conductivity (EC) and low salinity hazards. This therefore puts them in the low salinity and low sodium hazard class (C1-S1) using the USDA classification system. The samples have EC < 0.25 millisiemens/cm and sodium adsorption ratio (SAR) < 10 (Table 4). The nutrient and trace element contents were generally low in the water samples and in most cases below the highest desirable or permissible limits set by the WHO for drinking water standards. However, a number of the samples contain high levels of iron (Fe), particularly the Nwaniba sample which has a value of 1.04 mg/L, higher than the maximum permissible level of 1.0 mg/L given by the WHO (Table 3). The element is not particularly harmful but is undesirable on aesthetic grounds. The implication is that it imparts a bitter taste when present in large amounts, making the water unpalatable. The other trace elements in all the samples are much below the FAO recommended maximum concentrations for irrigation water, which implies that their concentrations in the water will not adversely affect the growth of plants. They are below the toxic levels given by the FAO (1998). From Table 4, the bacteriological quality was quite poor in a good number of the samples. About 75 percent of the samples had E. coli counts of over 1 per 100 ml, making them unfit for drinking, and almost 42 percent had coliform organisms. The detection of faecal (thermotolerant) coliform organisms, in particular Escherichia coli, provides definite evidence of faecal pollution, which makes the samples unsuitable for domestic purposes. The water would have to be treated to eliminate the pathogenic organisms in order to make it potable. Apart from minerals and gases, harmful water (with high E. coli and coliform counts) causes dysentery, typhoid, cholera and gastroenteritis, which are injurious to human health; such water is therefore unfit for drinking but allowable for irrigation uses. In summary, all the samples fall within the limits acceptable for irrigation but require a high level of treatment before they could be used for domestic purposes.
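The C-S classification used above can be approximated with simple thresholds: salinity classes C1-C4 at EC breakpoints of 0.25, 0.75 and 2.25 millisiemens/cm and, as a simplification of the US Salinity Laboratory diagram (whose sodium-class boundaries actually slope with EC), sodium classes S1-S4 at SAR breakpoints of 10, 18 and 26. The sketch below is such an approximation, not the exact diagram used for the classification above.

def usda_irrigation_class(ec_ms_cm, sar):
    """Approximate US Salinity Laboratory class, e.g. 'C1-S1'.

    ec_ms_cm : electrical conductivity in millisiemens/cm
    sar      : sodium adsorption ratio
    Note: fixed SAR cut-offs are a simplification; the real S-class
    boundaries depend on EC as well.
    """
    c = 1 + sum(ec_ms_cm > t for t in (0.25, 0.75, 2.25))
    s = 1 + sum(sar > t for t in (10.0, 18.0, 26.0))
    return f"C{c}-S{s}"

print(usda_irrigation_class(0.012, 0.16))   # a typical swamp sample -> C1-S1
print(usda_irrigation_class(27.5, 40.0))    # the Eket (seawater) control -> C4-S4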
4.0 CONCLUSION
All the ten (10) swamps identified were good for rice cultivation except the Ikpa East river swamp, where the volume and duration of the runoff have to be determined
for the dry harvesting period of rice. The water samples from all the swamps were found to have acidity levels exceeding the limit recommended by the WHO for potable water. In other respects, however, the water was found to be of good quality. The samples contained no excessive nutrients or heavy metals at toxic levels, going by WHO guidelines. A good number of them contained the pathogenic bacterium E. coli, a situation that requires purification before the water could be used for domestic purposes. For irrigation purposes, all the samples were found to be of excellent quality. They were non-saline and had very low sodium concentrations. They will therefore not impose any salinity or sodium hazards on the soil or on crops grown on it. Expectedly, the first control, the bottled natural water, satisfied all the WHO minimum criteria for drinking water and can therefore serve the dual purpose of domestic and irrigation uses, while the other control, the seawater (Eket sample), was suitable for neither domestic nor irrigation purposes. It has a high electrical conductivity and SAR, of over 26 millisiemens/cm and about 40 respectively, which are above the WHO standards, as well as high contents of basic cations such as Na, K, Mg and Ca, which contribute to hardness and scale formation in water.
As for the soils, all the samples analyzed have the same combination of land suitabilities for the several crops considered, which is based on the poorly drained clays. This is very suitable for both rice and maize cultivation with fertility limitations, but only marginally suitable for cassava and vegetables, with texture limitations for the former, fertility limitations for the latter and wetness limitations for both.
5.0 RESEARCH PRIORITIES
The soil conditions and water quality contribute towards a favourable environment for rice production. Having considered the quality of all the samples good enough for irrigation, further research should be carried out regarding the exact quantity of water required by rice for the entire growing season. Estimating irrigation requirements for rice is difficult since its actual evapotranspiration is about 110 percent of that of grass (Seckler et al, 1998). Rice fields are kept flooded primarily for weed control. This creates high percolation 'losses' from the fields. The belief that either running water through rice fields or simply holding stagnant water increases yield (and perhaps taste) is erroneous. There is no scientific evidence for this belief except that during very hot days running water may beneficially cool the plant. On the other hand, this practice flushes out the fertilizer and contributes to water pollution. This aspect can only be evaluated after a more precise identification of development options in all the sample sites. If necessary, this will be studied where irrigation schemes with their main supply from the river have been identified.
6.0 ACKNOWLEDGEMENTS
The authors wish to acknowledge the immense contributions of Engr. Richard Ekpe, Director, Engineering Services, Akwa-Ibom State Ministry of Agriculture and Natural Resources, Uyo, for his assistance in site assessment and for allowing the use of some of the project's equipment in data processing. The efforts of Anietie Bassey and Peter Offong in data collection are also appreciated. The grant for the project was sourced from the ADB (African Development Bank) by the government of Akwa-Ibom State.
REFERENCES
Akinyosoye, V.O. (1985) Senior Tropical Agriculture, Macmillan Publishers Limited, London, pp 88-90.
Association of Official Analytical Chemists (1990) Official Methods of Analysis, 15th Edition, AOAC, Arlington, Virginia.
FAO (1998) Food Production and Consumption Database, Food Production and Import Data for South Africa, Rome, FAO.
International Food Policy Research Institute (IFPRI) (2002) Green Revolution, Curse or Blessing, Washington DC.
National Population Census, Ministry of Internal Affairs, Abuja, 1992.
Ngambeki, D.S. and Idachaba, F.S. (1985) Supply Responses of Upland Rice in Ogun State of Nigeria: A Producer Panel Approach, Journal of Agricultural Economics, Vol. 35, No. 2, pp 239-249.
Seckler, D., Amarasinghe, U., Molden, D., de Silva, R. and Barker, R. (1998) World Water Demand and Supply, 1990 to 2025: Scenarios and Issues, Research Report 19, Colombo, Sri Lanka, International Water Management Institute (IWMI).
Stern, P.H. (1994) Small Scale Irrigation, Intermediate Technology Publications, London, pp 13-18.
This Day (2004) A Report Titled 'FAO Organizes Global Contest to Boost Rice Production' on the Agric Business Page of This Day Newspaper, by Crusoe Osagie, Vol. 10, No. 3291, pp 40, Tuesday, April 27.
Twort, A.C., Law, F.M. and Crowley, F.W. (1987), Water Supply, 3rd Edition, Edward Arnold Publishers, London, pp 47-56, 200-230.
WHO (1996) Guidelines for Drinking Water Quality: Health Criteria and Supporting Information, 2nd Edition, Geneva, 271 pp.
EFFICIENCY OF CRAFTSMEN ON BUILDING SITES: STUDIES IN UGANDA
H. Alinaitwe, Department of Civil Engineering, Makerere University, Uganda
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
B. Hansson, Division of Construction Management, Lund University, Sweden
ABSTRACT
Many researchers have recently been concerned about low and, in some cases, declining labour productivity in the construction industry. Some have gone on to question whether labour is efficiently utilized. This paper reports on a survey carried out on building craftsmen to find out how efficiently they utilize the time available to them for building activities on site. It involved taking observations of 150 craftsmen from 52 building sites and 45 contractors. In all, 37,500 usable observations were made. The study found that the craftsmen use about 20 percent of the available time for making the buildings grow. The statistics obtained are comparable with what has been found in other countries. It was also found that building construction craftsmen spend about 33 percent of their time on non-value adding activities. Research should be directed to reducing the non-value adding activities on sites.
Keywords: Activity Sampling, Craftsmen, Building Sites, Efficiency, Productivity
INTRODUCTION
Many researchers have recently been concerned about low and in some instances declining labour productivity in the construction industry. Allmon et al (2000), for example, show that labour productivity in the United States has been declining over the years. In the United Kingdom, the Egan report (1998) was produced following public outcry. A lot of research is currently going on to improve the performance of construction industries following these various reports. According to Buchan et al (1993) and Gilleard (1992), labour cost is somewhere between 20% and 50% of the total project cost. Hence, how efficiently labour is utilised on construction sites is very important for the improvement of the performance of the industry. The construction industry is characterised by repeated delays and cost overruns, more so in developing countries (Mansfield et al, 1994). The time and cost overruns have been so severe in some cases that questions as to the efficiency of human factors in the construction process have emerged (Imbert, 1990). Research in developed countries has been going on into how efficiently labour is utilised. According to Jenkins and Orth (2004), the time used by workers on a daily basis on productive work averages about 29% of the total time available for construction work.
Oglesby et al (1989) argue that direct work activities on construction sites take 40 to 60% of the available time. Agbulos and AbouRizk (2003) made activity-sampling studies on plumbers, with each activity observed over a full-time shift. They concluded that 32% of the time was value adding whereas 68% was classified as non-value adding. Motwani et al (1995) noted that improving communication skills, preplanning and stricter management could help to raise the individual productivity rate from an average of 32 percent productive time per hour to almost 60 percent per hour. Motwani et al further argue that concentrating on productivity improvement in the larger portions of non-productive employee time would be more effective in improving productivity in construction, where there is a variety of uncontrollable productivity influence factors. Strandberg and Josephsson (2005) found that building workers in Sweden spend less than 20% of their time on direct work activities. However, there has not been a similar study in developing countries, where the level of mechanisation is low and construction activities are largely labour intensive. The studies by Orth and Jenkins (2003) and Strandberg and Josephsson (2005) that used activity sampling concentrated on single sites and on a few craftsmen. To have a better picture, one needs a wider view from a number of sites and different tradesmen. The objective of this research was to find out how efficiently labour is utilised in Uganda and to compare the results with those from other countries. The other objective was to find out the way the time is distributed so that efforts in future research can focus on reducing the wasted and non-value adding activities. Activity sampling was used to find out the way construction workers utilize the time available to them.
METHODS
2.1 Research Method
There are five research styles: experiment, survey, action research, ethnographic research and case study (Fellows and Liu, 2003). Research in construction is usually carried out through experiments, surveys or case studies. Because of the several factors that affect construction productivity, experiments would not be easily performed; they would require a lot of time and the cost would also be high. Case studies would also not be appropriate because they provide a limited area of study. Activity sampling, motion analysis, time study and the method productivity delay model have been used before for evaluating the influence of factors on, and the efficiency of, worker productivity (Adrian and Adrian, 1995). In this case, activity sampling was used. It is a simple, quick and inexpensive technique which involves a series of instantaneous observations of work in progress taken at random times over a period of time (Jenkins and Orth, 2003). Observations, also known as samples, which can be taken from a large number of workers, are compiled together at the end of the study to show the percentage of the day spent by workers performing various activities. By using the information obtained from the study, one can evaluate which components are detrimental to productivity and where improvement is needed. It provides information on the amount of time workers spend on productive and non-productive work and identifies site-specific factors that have either a positive or an adverse effect on productivity. It is less obtrusive to workers since the samples are random. Workers are more apt to cooperate with activity sampling since the
results focus on the work performed by workers as a whole, rather than singling out individual performance (Jenkins and Orth, 2003). Winch and Carr (2001) describe activity sampling as one of the best ways of obtaining a detailed knowledge of the performance of any production process. Results from the work sampling can be used as a benchmark for future studies. Activity sampling is less controversial than other approaches such as output per unit time. The argument made against activity sampling is that the direct work time does not necessarily correlate with unit rate productivity, the reasoning being that the level of labour skill and the standard of equipment influence productivity without necessarily influencing the percentage of time used on direct work activities. However, there is a high correlation between the time used efficiently and the level of productivity. The purpose would therefore be to identify the causes of inefficiencies and find ways of reducing or eliminating them altogether.
2.2 Observation Form Design
A form was designed for taking observations of worker activity. It was based on the breakdown of time used by Winch and Carr (2001). The time when workers are on site can be divided into productive time, statutory ancillary time, support ancillary time and non-value adding (wasted) time. Productive time comprises time for making the building grow (F); preparation of materials (P); handling materials at the workplace (H2); cleaning up (C); and unloading (U). Statutory ancillary time comprises official break (BK); safety related (HS); and inclement weather (RO). Support ancillary time comprises supervision (SU); material distribution support (H3); setting out (T3); and testing (T2). Non-value adding wasted time comprises absent (A); materials transfer (H1); not working (I); making good (RT); walking around (W); and waiting (N).
2.3 Pilot Studies
The observation form was shown to two researchers whose input was taken into account by modifying the layout of the form. Pilot studies were then carried out to ensure the clarity and relevance of the form to the research assistants. Five different research assistants tried it on one bricklayer, one roof joiner, one painter, one concretor and one plasterer so that they could learn how to use it and become conversant with the codes. A second phase of the pilot study was conducted on another set of five craftsmen. Based on the feedback received, it was calculated that the percentage of time used on making the building grow was 19%. The grouping of the observation codes into the four time categories is sketched below.
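A minimal Python sketch of this grouping is given below; the mapping follows the code breakdown above, while the short list of sample observations is hypothetical.

from collections import Counter

# Observation codes grouped into the four time categories described above
CATEGORIES = {
    "productive":          {"F", "P", "H2", "C", "U"},
    "statutory ancillary": {"BK", "HS", "RO"},
    "support ancillary":   {"SU", "H3", "T3", "T2"},
    "non-value adding":    {"A", "H1", "I", "RT", "W", "N"},
}

def summarise(observations):
    """Return the percentage of observations falling in each category."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {category: 100.0 * sum(counts[c] for c in codes) / total
            for category, codes in CATEGORIES.items()}

# Hypothetical run of instantaneous observations for one craftsman
obs = ["F", "F", "H2", "W", "BK", "I", "F", "SU", "P", "N", "F", "RT"]
for category, pct in summarise(obs).items():
    print(f"{category:20s} {pct:5.1f} %")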
2.1 Observation Form Design A form was designed for taking observations of worker activity. It was based on the breakdown of time used by Winch and Carr (2001). The time when workers are on sites can be utilized in production; statutory ancillary time; support ancillary time; and nonvalue adding wasted time. Productive time is comprised of time for making the building grow (F); preparation of materials (P); handling materials at the workplace (H2); cleaning up (C) and unloading (U). Statutory ancillary time is comprised of Official Break (BK); Safety related (HS); and Inclement Weather (RO). Support ancillary time is comprised of supervision (SU); material distribution support (H3); setting out (T3); and Testing (T2). Non-value adding wasted time comprises of absent (A); materials transfer (Hl); not working (I); Making good (RT); walking around (W) and waiting (N). 2.2 Pilot Studies The Observation Form was shown to two researchers whose input was taken into account by modifying the layout of the form. Pilot studies were then carried out to ensure the clarity and relevance of the form to the research assistants. Five different research assistants tried it on one bricklayer, one roof joiner, one painter, one concretor and plasterer so that they learn how to use it and get conversant with the codes. A second phase of the pilot study was conducted on another set of five craftsmen. Based on the feedback received, it was calculated that the percentage of time used on making the building grow was 19%. 2.3 Sample Size Use was made of the formula in equation (1) given by Harris and McCaffer (2001)
N = Z² × P × (1 − P) / L²                                   (1)
where N is the number of observations required; P is the activity rate observed from the pilot study, in this case 19%; L is the limit of accuracy required, in this case 5%; and Z is the standard normal variate corresponding to the chosen level of confidence, in this case 1.96 at the 95% level of confidence.
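A quick numerical check of equation (1), using the values just defined, could look like the following (illustrative only):

z, p, limit = 1.96, 0.19, 0.05
n = z**2 * p * (1 - p) / limit**2
print(round(n, 1))   # about 236, so 250 observations per craftsman were adopted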
Substituting the values above in equation (1) gives N = 236, and so 250 samples per craftsman were adopted. O'Brian (2001) regards an accuracy of around plus or minus 2 percent, obtained with about 5,000 samples, as very accurate. As a result, activity sampling targeted 250 observations on each craftsman. A decision was made to make observations on 30 tradesmen in each of the 5 categories in order to have a sufficiently large sample of 7,500 observations per trade. The observations were made at intervals of 5 minutes such that, for each person, they were made over three days.
2.5 Sample Selection
The selection of the subjects for study was carried out through quota sampling as follows. Contractors who are registered with the contractors' association, the Uganda National Association of Building and Civil Engineering Contractors (UNABCEC), and doing formal building contracts were targeted. From the survey, data were gathered from as many different contractors and as broad a geographic area within Uganda as possible. For the purposes of this survey, the mailing list of building contractors during the year 2005 was used. There were 167 contractors on the list but, for various reasons, 30 could not be reached, so the target was reduced to 137 contractors. They were requested by telephone to provide sites on which observations could be made on the following activities:
- Bricklayers laying a 230 mm brick wall of height not more than 3 m.
- Concrete layers laying oversite concrete 100 mm thick on hardcore.
- Painters applying paint on a freshly plastered wall.
- Joiners joining timber for a roof with timber trusses and rafters.
- Plasterers applying 1:4 cement-sand plaster on a brick wall not more than 3 m high.
The contractors would inform the research team when they would be carrying out the work on any of those items. These are typical activities on new building work and they are from sections that form big cost centres. Observations were made on one craftsman in each of the activities per contractor until 30 craftsmen in each category of work had been observed.
2.6 Surveys
Meetings were held with the site foremen to inform them about the purpose and nature of the study. They also confirmed the availability of work in the desired categories and identified the craftsmen to use in the study. The exact location of the work areas and site-specific issues such as times for start, break, lunch and end were made known to the survey team beforehand. The observations made on the first day were not utilized. This was to enable the observer to settle in on site and also to make sure that the observations made did not affect workers on site. The supervisor would then explain to the workers that the purpose of the study was research and not the evaluation of the work performance of each individual employee. Only tradesmen with experience of not less than 2 years were used.
Officially, work would normally start at 8.00 and close at 16.30 hours, with a 45-minute lunch break and a 15-minute break, but the specifics depended on the sites. 52 different building sites and 45 contractors were involved from various parts of the country. Sites
were chosen based on the availability of work in the required categories for at least four consecutive working days and on workers being paid on daily performance rather than on a subcontract basis. Each observer would observe not more than three craftsmen on one site, and they would be of different trades. Observations were made on only one craftsman from each trade per site. Three research assistants carried out the observations over a period of three months from August to October 2005. It was the observer's responsibility to record what the individual worker was doing at the very first instant the observation took place. The observer would record according to the categories indicated on the observation sheet. The observations would be made at random times, but only one observation would be made per person within an interval of 5 minutes.
3.0 RESULTS AND DISCUSSION
The percentages of time that the various categories of craftsmen spent performing different work items were calculated and are presented in Table 1. The results show that, on average, productive time, which comprises time spent on making the building grow, preparation of materials, handling materials at the workplace, cleaning up and unloading, takes about 39.6% of the total time available on site. Statutory ancillary time, which comprises official breaks, safety related time and inclement weather, takes about 13.7%. Support ancillary time, which comprises supervision, material distribution support, setting out and testing, takes about 13.5%. Non-value adding wasted time, which comprises time when the workers are absent, materials transfer, not working, making good, walking around and waiting, consumes about 33.3% of the total time. The percentage of time spent on making the building grow for the various building activities averages 19.34%. Bricklayers perform worst on this aspect because they wait for someone else to prepare the mortar and then deliver the bricks to the work site, thereby losing a lot of time. Painters scored highest, possibly because their tasks do not involve much transferring of materials. The painters spend the biggest percentage (6.55%) of time cleaning up. This could be partly because they do not provide ample cover of the areas where paint could easily spill. One way for them to improve this is to cover up all affected surfaces before work starts. The concretors spend the most time unloading (5.04%). This is due to the very nature of the way the concreting process is done: concrete was unloaded from chutes and wheelbarrows on the sites where observations were made. Bricklayers spend the least time unloading. There were porters to unload the materials at the workplace and the bricklayers only stepped in to unload when there was a shortage at the work areas. The average percentage of time spent on official breaks was 5.42%. It was observed, however, that the official break time was not strictly adhered to. Some people were taking longer than granted and, in some cases, some were taking a shorter time when there was a need to complete given tasks. This had an effect in that the gangs were not balanced when some of the workers were not there. It is important that all observe break times and resume work together in order to minimize unbalanced gangs. Regarding safety related issues, the workers spend very little time on briefing on safety related matters. Most of the instances recorded were on attending to injuries and the preparation of supports and barricades to safeguard the workers. Inclement weather on average takes a big percentage of time. Uganda, being in the tropics, experiences heavy torrential rains; these usually come in the afternoon, last about 30 minutes and tend to disrupt work on many sites. The sun is at times hot around midday and makes working uncomfortable when the workers are not protected from it. Construction practice in Uganda is such that most of the work is done in situ and the roof is provided last, after the walls have been erected, as a matter of construction procedure. There is a need on some occasions to provide cover, by changing the construction procedure so that the roof is provided earlier, in order to avoid inclement weather. Setting out takes about 2.69% and testing takes about 0.60% of the time on average. For a big part of the time (4.4%), the workers were absent. They were generally missing
in action. The worst performers in this area are the bricklayers, who were absent for 6.51% of the time. This may show laxity in supervision and enforcement of work. It was observed that a lot of time is spent on materials transfer, i.e. 5.31% of the total time. The worst culprits are the plasterers, who spend 7.2% of their time on this work item. It is therefore necessary to examine how the work gangs are formed: transferring materials is mainly the work of labourers on many sites. On average, about 6.6% of the time is spent not working; the workers are largely idle and talking amongst themselves. The survey indicates that for 5% of the time workers were walking around, while for another 5% they were waiting for materials. This reflects laxity in inspection and inadequate work targets. For 5.59% of the time, the craftsmen were seen reworking. Rework not only consumes time but also leads to a lot of wastage of materials. This research shows that the efficiency of building craftsmen in Uganda is low. However, what is interesting to note is that the availability of workers for productive work is comparable to that in developed countries, where workers have easy access to tools, equipment and materials. The availability of time for making the building grow of 19.3% is comparable with what Strandberg and Josephsson (2005) obtained after similar studies in Sweden. Yet in Sweden the assumption that project managers make is that craftsmen are available for productive work 50% of the total time on site. The average time spent by workers on productive activities of 39.6% seems to be less than, or at most on the lower boundary of, the range of 40-60% observed in the United States. This could partly be because most of the materials handling in Uganda is manual. Some of the materials, like fresh concrete, are mixed on site and this takes the time of the craftsmen. The skills of the workers and the level of supervision are poorer in comparison and are possible causes of the lower availability for productive work. All the workers studied were men. On average, the percentage of total time used by workers not working, walking around, waiting and absent, excluding the official break, amounts to 22.4%. This is much greater than the 14% relaxation allowance recommended by the International Labour Organisation (ILO) (1978), assuming that they are men working under awkward (bending) conditions and carrying 4 kg loads.
4.0 CONCLUSIONS
Due to various reasons, the proportion of time that the workers spend on productive activities, at 39.6 percent, seems to be low. However, this is comparable with what was observed in Sweden and the United States. The proportion of time spent on non-value adding activities, at 33.3 percent, is significant. The time used by craftsmen on productive activities in the building industry is lower than that recommended by the ILO. Comparison with studies from developed countries shows that there is not much difference in how efficiently the craftsmen are employed. Those in developed countries have machinery and equipment at their disposal, and that is why their productivity could be higher in spite of the low efficiency.
REFERENCES
Agbulos, A. and AbouRizk, S. M. (2003) An application of lean concepts and simulation for drainage operations maintenance crews. In Chick, S., Sanchez, P. J., Ferrin, D. and Morrice, D. J. (eds), Proceedings of the 2003 Winter Simulation Conference, 1534-1540.
Adrian, J. J. and Adrian, D. J. (1995) Total Productivity and Quality Management of Construction. Champaign, IL: Stipes Publishing.
Allmon, E., Haas, C., Borcherding, J. and Goodrum, P. (2000) U.S. construction labour productivity trends, 1970-1998, Journal of Construction Engineering and Management, 126(2), 97-104.
Buchan, R. D., Fleming, F. W. and Kelly, J. R. (1993) Estimating for builders and quantity surveyors. Butterworth-Heinemann, Oxford.
Egan, J. (1998) Rethinking Construction. HMSO Department of Trade and Industry, London.
Fellows, R. and Liu, A. (2003) Research Methods for Construction, Second edition, Blackwell Science, Oxford.
Gilleard, J. D. (1992) The creation and use of labour productivity standards among specialist subcontractors in the construction industry, Cost Engineering, 34(4), 11-16.
Harris, F. and McCaffer, R. (2001) Modern Construction Management. Fifth edition. Blackwell Publishing, London.
Imbert, I. D. C. (1990) Human issues affecting construction projects in developing countries, Construction Management and Economics, 8(2), 219-228.
International Labour Organisation (1978) Introduction to work study methods. Third edition, ILO, Geneva.
Jenkins, J. L. and Orth, D. L. (2004) Productivity improvement through work sampling. Cost Engineering, 46(3), 27-32.
Mansfield, N., Ugwu, O. and Doran, T. (1994) Causes of delays and cost overruns in Nigerian construction projects. International Journal of Project Management, 12(4), 254-260.
Motwani, J., Kumar, A. and Novakoski, M. (1995) Measuring construction productivity: A practical approach. Work Study, 44(8), pp 18-20.
O'Brian, K. E. (2001) Improvement of on site productivity. KE O'Brian & Associates, Inc., Toronto, Canada.
Oglesby, C., Parker, H. and Howell, G. (1989) Productivity Improvement in Construction. McGraw-Hill, New York.
Strandberg, J. and Josephsson, P. (2005) What do construction workers do? Direct observations in housing projects. In A. S. Kazi (ed.) Systemic Innovation in the Management of Construction Processes, 184-193.
Winch, G. and Carr, B. (2001) Benchmarking on-site productivity in France and the UK: a CALIBRE approach. Construction Management and Economics, 19(6), pp 577-590.
BUILDING FIRM INNOVATION ENABLERS AND BARRIERS AFFECTING PRODUCTIVITY
H. Alinaitwe, Department of Civil Engineering, Makerere University, Uganda
K. Widen, Division of Construction Management, Lund University, Sweden
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
B. Hansson, Division of Construction Management, Lund University, Sweden
ABSTRACT
Efforts to promote innovation are at the heart of current research in building, in an endeavour to make it more productive and competitive. At the same time, there is a lack of statistical data on innovation. A review of the major barriers and enablers to innovation at business level has been carried out, which formed a basis for making propositions about the factors that affect innovation. A questionnaire was then developed and a survey carried out on building contractors in Uganda. The identified enablers and barriers to innovation were then ranked and correlated. Having an educated, technically qualified workforce and having an experienced, diverse workforce are regarded as the greatest enablers to innovation for building firms that will drive forward productivity. The effect of design on construction and the level of tax regimes are regarded as the greatest barriers to innovation in construction firms.
Keywords: Construction Firm, Productivity, Innovation, Barriers, Enablers.
INTRODUCTION
Efforts to promote innovation are at the heart of current research in construction in the endeavour to make it more productive and competitive. In comparison to other sectors, construction is usually regarded as a traditional or low-technology sector with low levels of expenditure on activities associated with innovation, such as research and development (R&D) (OECD, 2000; Seaden and Manseau, 2001). It appears that many construction firms are in a vicious cycle of low performance, anaemic levels of profitability, limited investment and poor organisational capabilities (OECD, 2000). In the UK, the Egan report outlines a new vision for the system of design, production and operation of the built environment, involving considerable investment in new technologies, management practices and techniques of production (Egan, 1998). Its recommendations have been taken up by researchers in the construction industry worldwide because of their general relevance. The new vision implies change and therefore a need for more innovativeness. In response to the challenges, changes are taking place in the delivery of construction goods and services around the world (Seaden et al, 2003).
There have been a number of case studies of how successful firms have been able to make a range of different organisational, managerial or technological innovations to overcome the limits of their environment (Slaughter, 1993; Sexton and Barrett, 2003). Innovation is common in all sectors; the potential for innovation for an individual firm is shaped by its own activities and the environment it operates within (Reichstein et al, 2005). A recent survey in Canada explores the importance of innovation and the use of advanced practices in shaping the performance of construction firms (Seaden et al, 2003). It focuses on the sources of success and failure in innovation among construction firms. However, the barriers and enablers of innovation faced by construction firms have not been sufficiently studied and quantified. The objectives of the present research were to identify and rank the main enablers and barriers to innovation in the building industry in Uganda. The main barriers and enablers of innovation at firm level were identified through a literature search.
2.0 FIRM LEVEL INNOVATION ENABLERS AND BARRIERS
This section reviews the literature on barriers and enablers to innovation at business/firm level. The identified factors are formulated into propositions and numbered FE1 ..., FB1 ..., etc. The factors are drawn from industry in general, but the intention was to capture those that affect productivity in construction firms.
2.1 Firm Level Innovation Enablers
The principal drivers for innovation are often created at the firm level, within a stimulating macro-economic context (Seaden & Manseau, 2001). In most industrialised countries, a few construction companies have achieved a superior market position through the use of innovative practices. Developing countries like Uganda need to create stimulating macro-economic conditions that will encourage innovation. There is great need for empirical studies to determine the key success factors of innovative firms to allow others to emulate their examples.
Research and development (R&D) is a key enabler, as the winning of new knowledge is the basis of human civilisation. There is strong statistical evidence of a positive relationship between R&D activities at firm level and the adoption of innovations (Dodgson, 2000).
FE1: Innovation at the firm level is positively affected by research and development.
The implementation of quality control procedures is associated with innovation. Innovative firms integrate process improvement with quality control (Rothwell, 1992; Chiesa et al, 1996).
FE2: Innovation at the firm level is positively affected by quality control procedures.
A highly educated and technically qualified workforce is more receptive to innovations. The extent of training is associated with innovation (Reichstein et al, 2005).
FE3: Innovation at the firm level is positively associated with an educated, technically qualified workforce.
The proportions of staff that are scientists, engineers or managers and that have relevant experience in another company stimulate innovation. The use of technocrats increases the production of innovative ideas. It is also argued that organisations whose staff are from diverse backgrounds and experiences will be more receptive to innovation, as such staff will generate a wide range of innovative suggestions.
FE4: Innovation at the firm level is positively associated with an experienced, diverse workforce.
Communication, both internal and external, is vital for the implementation of innovations. Internal communication enables the circulation of new ideas to all employees. External communication enables interaction with suppliers and customers and therefore enables feedback from other stakeholders. Circulation of new ideas keeps the personnel aware of the firm's direction and enhances innovation (Chiesa, 1996; Souitaris, 2002).
FE5:- Innovation at the firm level is positively associated with communication, both internal and external.
Strength in marketing enables innovation. Strong marketing programmes feature strong user linkages and a significant effort towards identifying user requirements (Cooper, 1984; Rothwell, 1992).
FE6:- Innovation at the firm level is positively associated with strong marketing programmes.
The presence of a project champion is an enabler of innovation (Schon, 1975). The project champion is an individual who enthusiastically supports an innovation project and who is personally committed to it. The champion is particularly effective in maintaining impetus and support when the project encounters major difficulties.
FE7:- Innovation at the firm level is positively associated with the presence of a project champion.
Teamwork is regarded as an issue of major interest in innovation and a number of authors have highlighted its importance. Teamwork, linkages and clusters for horizontal and upward cooperation are associated with innovation (Chiesa et al, 1996).
FE8:- Innovation at the firm level is positively associated with teamwork, employment conditions and linkages.
2.2 Firm Level Barriers to Innovation
According to Pihkala et al (2002), the financing cost of invention and diffusion is a key barrier. This cost may not be affordable for many organisations.
FB1:- Innovation at the firm level is negatively associated with high financing cost of invention and diffusion.
Lack of risk propensity is a major barrier in many firms (Pihkala et al, 2002). Firms that are risk averse are less likely to be innovative. Adapting to change is part of innovation, and this involves taking some risk.
FB2:- Innovation at the firm level is negatively associated with lack of propensity to take risk.
A fragmented industry of many small companies, which are not in a position to meet the cost of R&D, hinders innovation (Seaden & Manseau, 2001).
FB3:- Innovation at the firm level is negatively associated with fragmentation of the building industry.
High tax regimes also discourage innovation. Uncertainty of occupation and insecurity of employment stifle innovation. Lack of flexibility and empowerment of workers in the employment policy, which would encourage the creation and diffusion of knowledge, is a further hindrance (Dodgson, 2000; Pihkala et al, 2002).
FB4:- Innovation at the firm level is negatively associated with high tax regimes.
FB5:- Innovation at the firm level is negatively associated with uncertainty of occupation of the workers.
FB6:- Innovation at the firm level is negatively associated with lack of flexibility on the part of workers.
FB7:- Innovation at the firm level is associated with the effect of design on construction.
3.0 METHODS
Surveys are one of the most frequently used methods of data gathering in social research. The survey protocol of random sampling allows a relatively small number of people to represent a much larger population (Ferber, 1980). The opinions and characteristics of a population can be explained through the use of a representative sample. Surveys are an effective means of gaining a lot of data on attitudes, issues and causal relationships, and they are inexpensive to administer. The study aimed at using a representative sample rather than anecdotal or case study evidence based on a few, select firms. However, surveys can only show the strength of statistical association between variables. Cross-sectional surveys like the one used here do not explain changes in attitudes and views over time. Surveys also provide no basis to expect that the questions are correctly interpreted by the respondents.
3.1 Questionnaire Design
The main barriers and enablers to innovation, which affect productivity, were identified through the literature search summarized in the literature review section. The search identified 8 main enablers and 7 barriers at firm level. Respondents were asked to
rate each of the listed factors as either enabling or acting as a barrier to innovation in order to achieve greater productivity in the building industry. A five-point Likert scale (Kothari, 2003) was used, where 1 stood for ‘no effect’, 3 for ‘fairly significant effect’, and 5 for ‘very big effect’.
3.2 Pilot Studies
Pilot studies were carried out to ensure the clarity and relevance of the questionnaire to the contractors. The questionnaire was shown to two researchers. Based on their feedback, amendments were made to the questionnaire, and the second phase of the pilot study was conducted on four building contractors in Uganda. Based on the feedback received, minor amendments were again made to the questionnaire to remove any ambiguities and discrepancies. This pilot study was conducted to validate and improve the questionnaire in terms of its format and layout, the wording of statements and the overall content. The draft questionnaire was revised to include the suggestions of these participants. In short, the questionnaire was validated through this process, which provided the research with improvement opportunities before launching the main survey.
3.3 Sample Selection
The survey gathered data from chief executives of building contractors from as broad a geographic area within Uganda as possible. For this purpose, it was determined that the largest contractors registered with the contractors’ association (UNABCEC) be targeted. This follows the argument by Schumpeter (1976) that companies need to be large and in a dominant position in order to innovate. One of the aims of UNABCEC at the moment is to increase productivity (UNABCEC, 2004). It was decided that all those in categories A and B be the source of potential participants. At the national level, one recognized way of categorizing construction companies is by the UNABCEC class. The classification from A to E takes into account the financial strength, size and ability to carry out jobs. Those in class A are the biggest and undertake works of the biggest magnitude, and include some international companies. For the purposes of this survey, the 2005 mailing list of contractors was reduced to those in classes A and B that deal in building construction. Owing to the relatively small number of firms within the two categories, all 57 building contractors in classes A and B were targeted. Three companies did not participate for various reasons, so a total of 54 questionnaires were sent out.
3.4 Survey Response
As a result of mailing, telephone and physical follow up, a total of 44 questionnaires were completed out of the 54 that were sent to contractors, making the total response rate 82 percent, as summarized in Table 1. The survey package comprised a covering letter, the questionnaire and a pre-stamped self-addressed envelope.
Table 1: Response on questionnaire from the contractors

UNABCEC class   No. of questionnaires sent   No. of responses   Percentage response
A               38                           34                 89
B               16                           10                 63
All             54                           44                 82
A review of the responses indicated no measurable differences in the respondents’ answers to the questions. Because class B had fewer than 30 respondents, the two groups were combined for the analysis of this survey.
4.0 RESULTS AND DISCUSSION
This section contains a summary of the statistical analysis and gives the results of the survey and the discussion ensuing from them. The mean ratings, standard deviations and correlation coefficients were determined for enablers and barriers at the firm level as perceived by the contractors. Statistical analysis of the Likert scale ratings given through the questionnaires was conducted using the Statistical Package for the Social Sciences (SPSS 10) software. The ranking of the factors according to the mean rating of the enablers and barriers to innovation in the building industry, both at the firm level and national level, as perceived by building contractors is summarized in Tables 2 and 3. Table 2 gives the ranking for enablers at firm level starting with the highest rated. Table 3 gives the ranking for barriers at firm level. The ranking of enablers at firm level in Table 2 indicates that having an educated, technically qualified workforce (FE3) is the highest rated enabler to innovation for increasing labour productivity in the building industry, with a mean rating of 4.32. Having an experienced, diverse workforce (FE4) is rated the second highest enabler to innovation at the firm level. The high rating of a technically qualified workforce agrees with research indicating that lack of skills is one of the major factors that negatively impact productivity in the building industry (Reichstein et al, 2005; Sha and Jiang, 2003). The implication is that construction companies should invest more in training, or find other ways of training their craftsmen, if innovation is to increase. At the moment, many firms are not directly involved in formally training their workforce.
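As an aside, the ranking procedure described above can be illustrated with a short Python sketch; the study itself used SPSS, and the factor labels and ratings below are hypothetical. The sketch simply shows how mean ratings and standard deviations would be computed from raw Likert responses and the factors ranked by mean.

from statistics import mean, stdev

# Hypothetical Likert ratings (1-5) given by contractors to three factors
responses = {
    "Educated, technically qualified workforce (FE3)": [5, 4, 5, 4, 4],
    "Experienced, diverse workforce (FE4)":            [4, 5, 4, 4, 4],
    "Involvement in R&D (FE1)":                        [3, 4, 3, 3, 4],
}

# Compute the mean and (sample) standard deviation for each factor
summary = [(factor, mean(r), stdev(r)) for factor, r in responses.items()]

# Rank from the highest to the lowest mean rating
summary.sort(key=lambda row: row[1], reverse=True)

for rank, (factor, m, sd) in enumerate(summary, start=1):
    print(f"{rank}. {factor}: mean = {m:.2f}, s.d. = {sd:.2f}")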
Table 3: Ranking of barriers to innovation at the firm level

Rank   Factor                                                     Mean   Standard deviation
1      Effect of design on construction (FB7)                     3.77   0.91
2      Level of tax regimes (FB4)                                 3.73   0.95
3      Level of uncertainty of occupation of the workers (FB5)    3.59   1.13
4      Level of fragmentation of the building industry (FB3)      3.48   0.85
5      Level of cost for invention and diffusion (FB1)            3.48   1.23
6      Degree of propensity to take risks (FB2)                   3.34   1.01
7      Level of flexibility on the part of workers (FB6)          3.07   0.95
The effect of design on construction (FB7) is rated the most serious barrier to innovation at firm level. This result appears to agree with previous research suggesting that fragmentation of the industry and low levels of design-and-build type procurement are some of the biggest hindrances to innovation (Reichstein et al, 2005; Dulaimi et al, 2002). It might be used to support the argument that design should be more integrated with construction for contractors to be more innovative, and it therefore suggests that a shift from the traditional form of procurement, where design is separate from construction, would make the building industry more innovative. All the identified enablers and barriers have mean ratings of more than 3.0, which implies that all are regarded as having at least a fairly significant effect on innovation in the building industry. Among the enablers, having an educated and technically qualified workforce and having an experienced, diverse workforce at the firm level have average ratings above 4.0. This implies that these two factors are perceived as having the biggest effect on innovation. The contractors regard the level of involvement in R&D as the weakest enabler of innovation among the listed factors that would improve productivity in the building industry, with a mean rating of 3.36. It may be that firms do not like spending on R&D and would prefer more government involvement in R&D, as was found in Singapore (Dulaimi et al, 2002). From Table 3, the standard deviations of the factors that are ranked highest are generally the smallest. This suggests that there is closer agreement in the rating by the contractors for those factors with a high mean rating. This survey was carried out with building contractors in focus because they are the ones who carry out the building work. The survey did not include the informal contractors, who also carry out a significant amount of construction work. The authors believed they could not readily obtain representative samples from the informal contractors before gathering ample data on their activities and addresses. The survey could also have included consultants, clients and other stakeholders in the construction industry; however, each of these categories requires a different set of questions that are relevant to them. It is also important to note that barriers and enablers to innovation are related: lack of an enabler can be regarded as a barrier, and the converse is true. The search tried
to identify the major enablers and barriers at the industry level but it is possible that some of the factors were not included.
5.0 CONCLUSIONS
The enablers and barriers to innovation at firm level, from the viewpoint of the building contractors, have been identified. Having a technically qualified workforce and having an experienced, diverse workforce are regarded as the greatest enablers to innovation in building construction that will drive productivity forward. Construction firms and policy makers should therefore focus on how to strengthen the identified enablers. The effect of design on construction and the level of tax regimes are the most serious barriers to innovation leading to low productivity in the building industry in Uganda. Parties to the construction process, especially designers and clients, should address the separation of design from construction. Policy makers in government should also address the level of taxation and how it affects innovation in the construction industry.
REFERENCES
Chiesa, V, Coughlan, P and Voss, C A (1996) Development of a Technical Innovation Audit. Journal of Product Innovation Management, 13(2), 105-136.
Cooper, R. G. (1984) The Strategy-Performance Link in Product Innovation. R & D Management, 14(4), 247-259.
Dodgson, M (2000) The Management of Technological Innovation. Oxford: Oxford University Press.
Dulaimi, M F, Ling, F Y, Ofori, G and De Silva, N (2002) Enhancing integration and innovation in construction. Building Research and Information, 30(4), 237-247.
Egan, J (1998) Rethinking Construction. London: HMSO Department of Trade and Industry.
Ferber, R. (1980) Readings in the analysis of survey data. New York: American Marketing Association.
Kothari, C R (2003) Research Methodology, Methods and Techniques. New Delhi: Wisha Prakashan.
OECD (2000) Technical Policy: An International Comparison of Innovation in Major Capital Projects. Paris: OECD.
OECD/Eurostat (1997) Proposed Guidelines for Collecting and Interpreting Technological Innovation Data - Oslo Manual. Paris: OECD.
Pihkala, T, Ylinenpaa, H and Vesalainen, J (2002) Innovation barriers amongst clusters of European SMEs. International Journal of Entrepreneurship and Innovation Management, 2(6), 520-536.
Reichstein, T., Salter, A. and Gann, D (2005) Last among equals: a comparison of innovation in construction, services and manufacturing in the UK. Construction Management and Economics, 23(6), 631-644.
Rothwell, R (1992) Successful Industrial Innovation: Critical Factors for the 1990s. R & D Management, 22(3), 221-239.
Schumpeter, J. A. (1976) Capitalism, Socialism and Democracy. London: Routledge.
Seaden, G and Manseau, A (2001) Public policy and construction innovation. Building Research and Information, 29(3), 182-196.
Seaden, G., Guolla, M. and Doutriaux, J (2003) Strategic decisions and innovation in construction firms. Construction Management and Economics, 21(6), 603-612.
Sexton, M and Barrett, P (2003) Appropriate innovation in small construction firms. Construction Management and Economics, 21(6), 623-633.
Sha, K and Jiang, Z (2003) Improving rural labourers’ status in China’s construction industry. Building Research and Information, 31(6), 464-473.
Schon, D (1975) Deutero-learning in organisations: learning for increased effectiveness. Organisational Dynamics, (3), 2-16.
Slaughter, E S (1993) Builders as sources of construction innovation. Journal of Construction Engineering and Management, 119(3), 532-549.
Souitaris, V (2002) Technological trajectories as moderators of firm-level determinants of innovation. Research Policy, 31(6), 877-898.
UNABCEC (2004) Improving Uganda’s Construction Industry. Construction Review, 15(10), 18-19.
FACTORS AFFECTING PRODUCTIVITY OF BUILDING CRAFTSMEN - A CASE OF UGANDA
H. Alinaitwe, Department of Civil Engineering, Makerere University, Uganda
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
B. Hansson, Division of Construction Management, Lund University, Sweden
ABSTRACT
Poor productivity of construction workers is one of the causes of cost and time overruns on construction projects. The productivity of labour is particularly important in developing countries, where most building construction work is still done manually. This paper reports on a survey of project managers of building projects in Uganda. The managers were asked to rate, based on their experience, the way 36 factors affect productivity with respect to time, cost and quality. The survey was carried out through a questionnaire, with responses received over a period of three months. The most significant problems affecting labour productivity were identified as incompetent supervisors; lack of skills; rework; lack of tools/equipment; poor construction methods; poor communication; inaccurate drawings; stoppages because of work being rejected by consultants; political insecurity; tools/equipment breakdown; and harsh weather conditions.
Keywords: Labour, productivity, factors, ranking, building craftsmen.
INTRODUCTION
Construction industries in many countries are greatly concerned about the low level of productivity (Egan, 1998; Lim and Alum, 1995). Poor productivity of craftsmen is one of the most daunting problems that construction industries, especially those in developing countries, face (Kaming et al, 1997). Although some research has been carried out (Imbert, 1990; Olomolaiye et al, 1987; Kaming et al, 1997; Rahaman et al, 1990), research on construction craftsmen productivity is generally in its infancy in developing countries. The construction industry in Uganda constitutes over 7% of the Gross Domestic Product (Uganda Bureau of Statistics, 2005) and has witnessed steady growth for the last 20 years. It is assumed that any effort directed at improving productivity will greatly enhance the country’s chances of realizing her development goals. The construction industry in Uganda suffers from cost and time over-runs (Mubiru, 2001). Over-runs in the construction industry are indicators of productivity problems. Improving construction productivity will go a long way towards eliminating time and cost overruns. Identifying and evaluating the factors that influence productivity are critical issues faced by construction managers (Motwani et al, 1995). Research on factors that affect productivity in
developed countries has been explored extensively (Yates and Guhathakurta, 1993; Borcherding, 1976). Strategies for performance improvement have been identified and implemented mainly on the basis of the identified key factors. It is therefore important that the factors affecting productivity in the industry are well identified so that efforts can be made to improve the situation. However, the results from some of the earlier research were based on perceptions with regard to time only. For example, the Importance Index used by Lim and Alum (1995) is based on the frequency of encountering the factors. It is important that time, cost and quality aspects are all included in assessing productivity. The three common indicators of performance in construction projects are cost, schedule and quality (McKim et al, 2000). The objective of this study is to identify and rank the major factors that affect the productivity of craftsmen in Uganda. The goal is to find an appropriate strategy for improving the productivity of craftsmen in Uganda.
PRODUCTIVITY PROBLEMS
Although factors influencing productivity are widely researched, they are not yet fully explored even in developed countries (Lema, 1996). There is therefore a need to study further the factors that affect labour productivity. To improve productivity, the impact of each of the factors can be assessed using statistical methods and attention given to those particular parameters that adversely affect productivity. What follows is a review of earlier studies. Lack of materials, incomplete drawings, incompetent supervisors, lack of tools and equipment, absenteeism, poor communication, instruction time, poor site layout, inspection delay and rework were found to be the ten most significant problems affecting construction productivity in Thailand (Makulsawatudon and Emsley, 2003). Kaming et al (1997) found that lack of materials, rework, worker interference, worker absenteeism, and lack of equipment were the most significant problems affecting workers in Indonesia. Lema (1996), through a survey of contractors in Tanzania, found that the major factors that influence productivity are leadership, level of skill, wages, level of mechanization, and monetary incentives. Lack of materials, weather and physical site conditions, lack of proper tools and equipment, design, drawing and change orders, inspection delays, absenteeism, safety, improper plan of work, repeating work, changing crew size and labour turnover were found through a survey to be the most important factors in Iran (Zakeri et al, 1996); that ranking was based on project managers’ perceptions of influence and potential for improvement. Motwani et al (1995) found through a survey in the United States of America that the five major problems that impede productivity are adverse site conditions; poor sequencing of works; drawing conflict/lack of information; searching for tools and materials; and weather. Olomolaiye et al (1987) found that the most significant factors in Nigeria are lack of materials, rework, lack of equipment, supervision delays, absenteeism, and interference. Lim and Alum (1995) found that the major problems with labour productivity in Singapore are recruitment of supervisors, recruitment of workers, high rate of labour turnover, absenteeism at the workplace, communication with foreign workers, and inclement weather. From the literature cited above, material shortage was usually the most significant problem found in the various studies; according to Lim and Alum (1995), however, material shortage was ranked only eighth. It is important to note that the questionnaires
and rankings in those studies were based on the time aspect, that is, the frequency of occurrence. However, quality and cost are equally important in assessing the factors that affect productivity. Craftsmen can deliver varying quantities of work, but the quality and cost should be acceptable to their supervisors. Rosefielde and Mills (1979) argued that any measure of construction productivity that does not account for changes in design and quality will lead to low, if not negative, measures of construction productivity.
METHODS
3.1 Research Method
Fellows and Liu (2003) highlight five research styles: experiment, survey, action research, ethnographic research and case study. Research in construction is usually carried out through experiments, surveys or case studies. Experiments on barriers and enablers in the construction industry would take a long time to yield results and at the same time would be expensive. Case studies would not provide results that are easy to generalise, as different companies face different problems. Surveys through questionnaires were found appropriate because of the relative ease of obtaining standard data appropriate for achieving the objectives of this study. Surveys are one of the most frequently used methods of data gathering in social research. The survey protocol of random sampling allows a relatively small number of people to represent a much larger population (Ferber, 1980). The opinions and characteristics of a population can be explained through the use of a representative sample. Surveys are an effective means of gaining a lot of data on attitudes, issues and causal relationships, and they are inexpensive to administer. However, they can only show the strength of statistical association between variables. Cross-sectional surveys do not explain changes in attitudes and views over time. They also provide no basis to expect that the questions are correctly interpreted by the respondents.
3.2 Questionnaire Design
Factors that affect the productivity of craftsmen were identified through the literature based on previous research (Makulsawatudon and Emsley, 2003; Kaming et al, 1997; Zakeri et al, 1996; Lim and Alum, 1995; Motwani et al, 1995; Lema, 1996; Sanders and Thomas, 1991; Olomolaiye et al, 1987; Borcherding, 1976). A total of 36 factors were identified. The project managers were required to rate the factors according to the way they affect productivity in relation to time, cost and quality, from their own experience on building sites. The questionnaire required the respondents to rank their answers on a Likert scale (Kothari, 2003). The survey package comprised a covering letter, the questionnaire and a pre-stamped self-addressed envelope.
3.3 Pilot Studies
Pilot studies were carried out to ensure the clarity and relevance of the questionnaire to the contractors. The questionnaire was shown to two researchers. Based on their feedback, amendments were made to the questionnaire, and the second phase of the pilot study was conducted on four building contractors in Uganda among those who were not going to
participate in the final survey. Based on the feedback received, minor amendments were again made to the questionnaire to remove any ambiguities and discrepancies.
3.4 Sample Selection
The survey gathered data from project managers of building contractors from as broad a geographic area within Uganda as possible. For this purpose, it was determined that all contractors registered with the contractors’ association should participate. The target population was the 167 contractors registered with the Uganda National Building and Civil Engineering Contractors’ Association (UNABCEC) and engaged in formal building work. At the national level, one recognized way of categorizing construction companies is by the UNABCEC grade. The classification from A to E takes into account the financial strength, size and ability to carry out jobs. Those in class A are the biggest and undertake works of the biggest magnitude, and include some multinational companies. At the time of the survey, UNABCEC had a membership of 189, including civil engineering contractors. For the purposes of this survey, the mailing list of all those who dealt in building construction during the year 2005 was used. A total of 159 questionnaires were sent out; 22 of the contractors did not participate for various reasons, so the sample reduced to 137. The survey was carried out within a period of three months, from mid-July to October 2005.
3.5 Survey Response
As a result of mailing and follow up, a total of 73 usable questionnaires were completed and returned. The distribution across the various grades of the 137 contractors who were contacted and the 73 who responded is given in Table 2.
A review of the responses from the national surveys indicated no measurable differences in the respondents’ answers to the questions. All the questionnaires were therefore combined for the analysis of this survey.
4.0 RESULTS AND DISCUSSION
4.1 Data Analysis and Results
The average rankings were calculated based on four different criteria: mean rating for effect on time, effect on cost, effect on quality, and a combined importance index. The means for time, cost and quality were calculated using the formula
R_x = (Σ R_i) / Z

where x = time, cost or quality; R_x is the mean rating with respect to time, cost or quality from the Z raters; and R_i is the rating given by respondent i. The mean combined importance index was then calculated from the three mean ratings, where R_t is the rating on time, R_c is the rating on cost and R_q is the rating on quality.
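To make the calculation concrete, the following Python sketch computes the per-criterion mean ratings and a combined importance index for a single factor. It is illustrative only: the ratings are invented, and the combined index is assumed here to be the simple arithmetic mean of the three mean ratings, since the exact combining formula is not reproduced legibly above.

def mean_rating(ratings):
    # R_x = (sum of R_i) / Z, where Z is the number of raters
    return sum(ratings) / len(ratings)

def combined_importance_index(r_t, r_c, r_q):
    # Assumed form: simple average of the time, cost and quality mean ratings
    return (r_t + r_c + r_q) / 3.0

# Hypothetical Likert ratings (1-5) for one factor from six project managers
time_ratings = [5, 4, 4, 5, 3, 4]
cost_ratings = [4, 4, 3, 4, 3, 4]
quality_ratings = [5, 5, 4, 4, 4, 3]

r_t = mean_rating(time_ratings)
r_c = mean_rating(cost_ratings)
r_q = mean_rating(quality_ratings)

print(f"R_t = {r_t:.2f}, R_c = {r_c:.2f}, R_q = {r_q:.2f}")
print(f"Combined importance index = {combined_importance_index(r_t, r_c, r_q):.2f}")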
Table 2 is a summary of the calculated mean values for the different factors and their ranking within the groups.
4.2 Discussion
This section presents the results from the ratings as given in Table 2 and a discussion of the factors; a section on checking the reliability of the ratings obtained follows. We discuss the ten highest ranked factors within the overall ranking, and the five highest ranked in terms of time, cost and quality where they are not already discussed. The highest ranked according to the overall Importance Index are: incompetent supervisors; lack of skills among the workers, e.g. inexperienced, poorly trained workers; rework, e.g. arising from poor work done; lack of tools/equipment; poor construction method, e.g. poor sequencing of work items; poor communication, e.g. inaccurate instructions; inaccurate drawings; stoppages because of work being rejected by consultants; political insecurity, e.g. insurgency, wars and risk; tools/equipment breakdown; and harsh weather conditions. Material shortages and delays is ranked first in terms of time. This is similar to what was found in other countries (Makulsawatudon and Emsley, 2003; Kaming et al, 1997; Olomolaiye et al, 1987). However, on the overall Importance Index it is ranked seventeenth. Material shortages consume a lot of the contractors’ time, but the effect on cost and quality is relatively lower. The main cost incurred due to shortages is for the idle time that craftsmen spend waiting for the materials. The factor of incompetent supervisors is rated highest on the overall Importance Index. This could be partly because supervisors do not attend refresher courses; most of the supervisors are trained, but their formal training stops when they leave school. The way knowledge is managed is important, and there is therefore a need for continuous training of supervisors. Lack of skills is a major problem and seriously affects the time, cost and quality achieved. The hope is that, since the government of Uganda is promising to introduce technical schools in all sub-counties, the right skills will be developed in future, but this will take at least three years to
have an impact on the industry. There is a need to make a needs assessment and identify the key trades and the right numbers to train in order to change the situation.
Table 2: Ranking of factors according to time, cost, quality and combined importance index
Rework is rated third overall on the Importance Index. It is ranked second, first and seventh against time, cost and quality respectively. It is mainly caused by failure to follow specifications. Specifications should be made clear and explained to the executing team to avoid rework. Repeating instructions every day with visual management aids could make it easier for the foremen and workers to refer to them; at the moment, the specifications are usually kept in the office and relayed only when they are needed. Lack of tools and equipment is ranked fourth overall. Tools are mainly provided to the craftsmen engaged on a full-time basis. Casual workers are expected to bring their own tools, partly because casual workers tend to take the very tools they are provided with. Some equipment is not readily available in some places, even for hiring. Poor construction methods is ranked fifth on the overall importance index. Poor construction methods are mainly due to poor planning of the work, and poor planning is partly due to the incompetence of the supervisors. The other problem is that of designs that are not easily buildable. Lack of buildability is due to designs that do not take into account the available resources for construction and an inadequate appreciation of construction techniques. Poor communication, due for instance to inaccurate instructions and inaccurate drawings, is ranked sixth, and stoppages because of work being rejected by consultants is rated seventh overall. Political insecurity, e.g. insurgency and wars, is rated eighth on the overall importance. The factor of risk and insecurity has not been rated highly before. This might have come up because Uganda has not been at peace for a long time; currently a big portion of the total area of the country faces the risk of insecurity from armed rebels, and this affects the execution of building contracts. Tools/equipment breakdown is ranked ninth according to the overall Importance Index. This relates to the breakdown of equipment like vibrators, water pumps, powered machinery, etc. These break down due to poor maintenance and lack of regular service; many of them are also not in the best condition as they lack spares. There is a need for good garages and workshops to take care of repairs and maintenance, and for contractors to understand that there is an optimal age for replacing such tools and equipment. Harsh weather conditions is ranked tenth on the overall importance index. Uganda, being along the equator, experiences wet and dry conditions. The rains are heavy but in many cases last for short periods; they cause damage to unprotected building components under construction, which are mainly constructed in situ. The afternoons are generally hot, with an average maximum of about 28-33 °C when there is no cloud cover.
5.0 CONCLUSION
From the survey, the five highest ranked factors that affect the productivity of labour, taking into account the effect on time, cost and quality, are incompetent supervisors; lack of skills among the workers; rework; lack of tools/equipment; and poor construction methods. The competency of supervisors and the level of skills of construction workers should be improved. The contractors too should focus on improving these areas by giving refresher courses, rewarding on the basis of skill and output, and participating in structured training of workers in the construction industry. Research geared at improving productivity should focus on the identified factors, preferably those at the top of the list by importance index.
REFERENCES
Borcherding, J. D. (1976) Improving productivity in industrial construction. Journal of the Construction Division, Proceedings of the ASCE, 102(CO4), 599-614.
Egan, J. (1998) Rethinking Construction. DETR, HMSO, London.
Fellows, R. and Liu, A. (2003) Research Methods for Construction. Second edition, Blackwell Science, Oxford.
Ferber, R. (1980) Readings in the analysis of survey data. American Marketing Association, New York.
Imbert, I. D. C. (1990) Human issues affecting construction projects in developing countries. Construction Management and Economics, 8(2), 219-228.
Kaming, P. F., Olomolaiye, P. O., Holt, G. and Harris, F. (1997) Factors influencing craftsmen productivity in Indonesia. International Journal of Project Management, 15(1), 21-30.
Kothari, C. R. (2003) Research Methodology, Methods and Techniques. Wisha Prakashan, New Delhi.
Lema, M. N. (1996) Construction labour productivity analysis and benchmarking - the case of Tanzania. PhD Thesis, Loughborough University.
Lim, E. C. and Alum, J. (1995) Construction productivity: issues encountered by contractors in Singapore. International Journal of Project Management, 13(1), 51-58.
Makulsawatudom, A. and Emsley, M. (2003) Critical factors influencing construction productivity in Thailand. Construction Innovation and Global Competitiveness, CIB 10th International Symposium, Cincinnati.
McKim, Hegazy and Attalla (2000) Project performance control in reconstruction projects. Journal of Construction Engineering and Management, 126(2), 137-141.
Motwani, J., Kumar, A. and Novakoski, M. (1995) Measuring construction productivity: a practical approach. Work Study, 44(8), 18-20.
Mubiru, F. (2001) Comparative analysis of bidding strategies of contractors in Uganda. Master of Engineering Dissertation, Makerere University, Kampala.
Olomolaiye, P., Wahab, K. and Price, A. (1987) Problems influencing craftsmen productivity in Nigeria. Building and Environment, 22(4), 317-323.
Rosefielde, S. and Quinn Mills, D. (1979) Is construction technologically stagnant? In: Lange, J. and Mills, D. (eds), The Construction Industry: Balance Wheel of the Economy. Lexington Books, Lexington.
Sanders and Thomas (1991) Analysing construction company profitability. Cost Engineering, 33(2), 7-15.
Uganda Bureau of Statistics (2005) Statistical Abstract. UBOS, Entebbe.
Yates, J. K. and Guhathakurta, S. (1993) International labour productivity. Cost Engineering, 35(1), 15-26.
Zakeri, M., Olomolaiye, P., Holt, G. and Harris, F. (1996) A survey of constraints on Iranian construction operatives’ productivity. Construction Management and Economics, 14(5), 417-426.
A REVIEW OF CAUSES AND REMEDIES OF CONSTRUCTION RELATED ACCIDENTS: THE UGANDA EXPERIENCE
J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
ABSTRACT
With robust economic conditions fuelling a construction boom in Uganda, the frequency and severity of construction site accidents is bound to increase in the future. In recent years many such accidents have been reported, the most severe ones still fresh in the minds of many Ugandans being the Bwebajja building accident of 2004, in which a multi-storeyed hotel building under construction collapsed, and the collapse in March 2006 of a church structure under construction in Kalerwe. There is therefore a need to increase or strengthen health and safety activities in the construction sector. The paper, based on studies by various researchers, presents the most common causes of construction-related accidents in Uganda and beyond, and suggests remedies to reduce them.
Keywords: Accident; Building; Bwebajja; Civil; Collapse; Construction; Formal construction; Hazard; Health; Informal construction; Infrastructure; Injury; Labour; Legislation; Occupational; OSH; Regulation; Safety; Workers.
1.0 INTRODUCTION One lexical definition of infrastructure is “the system or structures which are necessary for the operation of a country or an organization” (Longmans dictionary). Civil infrastructure refers to public and private works, namely buildings, roads, bridges, and water and sewerage facilities. Civil engineers are responsible for their planning, design, construction, operation and maintenance. A nation’s infrastructure system provides for the delivery of essential services and a sustained standard of living. An efficient system of infrastructure is fundamental to the well-being of a country and indispensable for the promotion of productive activities and social development. Civil constructions are normally medium to large-scale (both physically and financially), involve many trades and products which have to be selected with care, and are for use by many people. Public safety is therefore paramount. Such safety can only be guaranteed by experts in construction, in particular civil engineers. Mistakes in civil constructions are usually very costly. Due to the nature of the construction trade, individuals employed on construction sites find themselves confronted with dangerous, life-threatening work conditions on a daily basis. Serious accidents and injuries resulting in personal injury occur with alarming frequency at construction sites throughout the world. For example, in the European Union more than 1,300 people are killed in construction accidents every year. In many
countries, construction is the sector most at risk of accidents. Worldwide, construction workers are three times more likely to be killed and twice as likely to be injured as workers in other occupations. The costs of these accidents are immense to the individual, to the employer and to society, and can amount to an appreciable proportion of the contract price. Most construction firms tend to be Small and Medium Enterprises (SMEs), the latter being the most affected by construction accidents. See European Agency for Safety and Health at Work (ESAW, 2001). In Uganda, as robust economic conditions have been fuelling a construction boom, the frequency and severity of construction site accidents is bound to increase in the future. The most severe construction accidents still fresh in the minds of many Ugandans are probably the Bwebajja building accident of 1st September 2004, in which 11 people died and 26 were injured when a multi-storeyed hotel building under construction suddenly collapsed on the Kampala-Entebbe highway (Mwakali, 2004), and the church building collapse at Kalerwe, a poor suburb on the north of Kampala City, in which dozens of worshippers died and hundreds were injured on the night of 8th March 2006 (The New Vision, 2006). Figs. 1 and 2 below show the extent of the Bwebajja and Kalerwe church building collapses, respectively. The situation of the construction industry therefore underpins the need to increase or strengthen health and safety activities.
Fig. 1. A section of the collapsed Bwebajja building. (From Mwakali, 2004)
Fig. 2. Kalerwe church collapse of 8th March 2006. (From http://news.bbc.co.uk/2/hi/in__pictures/4788872.stm. Accessed on 12th March 2006)
2.0 ACCIDENT STATISTICS FROM EUROPE
According to European Statistics of Accidents at Work (ESAW, 2001):
- About 4.8 million accidents at work resulted in more than 3 days' absence from work in the 15 Member States of the EU 15.
- The estimated total number of accidents at work in the EU 15 is about 7.4 million.
- In 2000, there were 5,200 fatal accidents at work.
- The fatal accident incidence rate decreased between 1994 and 2000.
There are variations in the accident pattern throughout the workforce:
- Men have more accidents than women.
- Young workers (18-24 yrs) have a much higher accident incidence rate than other age groups, but older workers (55-64 yrs) have more fatal accidents.
- The accident incidence rates in industry sectors vary widely.
- In the wood industry, every year 10% of workers have an accident.
- The rate of accidents is higher in small companies than in large enterprises.
- Accidents occurring at night tend to be more fatal than ones occurring at other times.
- The "upper extremities" (arms, etc.) are the parts of the body most frequently injured by accidents at work.
- Wounds and superficial injuries are the most common type of injury.
According to the European Survey on Working Conditions 2000 (EWSC):
- 17% of illness absences from work are due to accidents at work.
- This adds up to about 210 million working days lost due to accidents at work.
The Eurostat Labour Force Survey reveals:
- Workers who have under 5 years' seniority in an enterprise are more likely to suffer an accidental injury at work.
- Workers usually or sometimes doing shift work have a higher accident incidence rate than those never doing shift work.
- 2.3 million Europeans consider themselves to have a longstanding disability due to an accident at work.
Every year, about 5,500 people are killed in the workplace across the European Union, with another 4.5 million accidents resulting in more than 3 days' absence from work (amounting to around 146 million working days lost). These accidents are estimated to cost the EU about 20 billion Euro. The problem affects all sectors of the economy and is particularly acute in enterprises with fewer than 50 workers. Due to accidents at work, around 5% of people were forced to change their job or place of work or reduce their working hours, and 0.2% stopped working permanently. Between 1998 and 1999, it is estimated that work-related accidents cost the EU 150 million working days per year. A further 350 million days were lost through work-related health problems. Together, the total 'bill' was 500 million days per year.
3.0 ACCIDENT STATISTICS FROM UGANDA
Uganda is now experiencing significant economic growth at an average rate of 6.3% of GDP p.a., making it one of the fastest growing economies in the world. The high economic growth rate has had a very positive impact on the construction sector, with growth going up from 15% in 1990 to 40% in 1999, making it the second largest employer after agriculture. The informal construction sector accounts for up to 70%, and the formal construction sector is dominated by small and medium size contractors and a few big, often international, ones subcontracting a number of local contractors. Generally, this rapid growth in the industry has brought about an increased threat to occupational safety and health (OSH), and as such there have been a number of injuries and accidents, some of them fatal, in the recent past. See Senyonjo (undated).
Fig. 3 gives the total number of reported accidents and dangerous occurrences in Uganda for the period 1984 to 1994, while Fig. 4 gives a breakdown of the numbers by industry. Fig. 5 gives the numbers for construction-related accidents in the same period. For the same period, Fig. 6 shows the percentage distribution of accidents by type while Fig. 7 shows the percent of those that were fatal. However, surveys commissioned by the British Health and Safety Executive (HSE, 2003) indicate a reporting rate by employers for other reportable injuries of less than 40%. Thus, published statistics are the tip of the iceberg.
Fig. 3. Recorded accident trends in Uganda 1984-1994. (Adapted from Ojok, 1996)
Fig. 4. Accident occurrence totals in Uganda by industry for 1984-1994. (Adapted from Ojok, 1996)
Fig. 5. Construction accidents in Uganda in the period 1984-1994. (Adapted from Ojok, 1996)

Fig. 6. Accident distribution by "type" in Uganda for the period 1984-1994: accidents in factories 69%; miscellaneous 18%; accidents in building and construction sites 7%; accidents at other premises subject to the Factories Act 5%; fires 1%. (Adapted from Ojok, 1996.)
4.0 THE COST OF CONSTRUCTION RELATED ACCIDENTS
4.1 Economic Cost
Accidents resulting in failed infrastructure mean loss of investment. In the case of Uganda, a good portion of the resources invested in public works (road bridges, school
buildings, dams, etc) is borrowed money. Hence, the taxpayer has to pay for something from which they got no value in return. In addition, lack of sufficient infrastructure, or postponed realization of the same, results in retarded economic growth. Failed infrastructure, such as roads, also results in operating inefficiencies, hence increasing operating costs (e.g. vehicle operating costs). This further aggravates a very bad economic situation.
Fig. 7. Accident fatalities by "type" in Uganda for the period 1984-1994: accidents in factories 55%; miscellaneous 23%; accidents in building and construction sites 16%; accidents at other premises subject to the Factories Act 6%; fires 0%. (Adapted from Ojok, 1996.)

Given a weak insurance sector in Uganda, infrastructural failures are especially costly because there is usually no fallback position and the investor has to absorb the whole loss.
4.2 Social Cost
The social cost resulting from construction accidents is mainly in terms of lost and injured lives. Death is the ultimate loss, since human life is sacred. Lost or maimed lives in addition mean loss of skills needed by the economy. Many work hours get lost because of injuries to workers. Large compensation claims can accrue from accidents. The cost of occupational accidents is high for enterprises, including: sick pay; overtime payments; temporary replacement labour; early retirement; recruiting new labour; retraining; lost production time and business; damage to plant, equipment, materials and products; lost management time to deal with accidents; increased insurance premiums; lawyers' costs; and lower worker morale.
4.3 Political cost
An accident-prone economy is bad publicity for the country in general and managers of the affected sectors in particular. The damage to tourism and infrastructural investment can be great. 4.4 Environmental cost
Failed infrastructure often leads to environmental hazards, such as unsightly rubble, dust, floods, toxic emissions/discharges, outbreaks of disease, etc. 5.0 COMMON CONSTRUCTION RELATED ACCIDENTS AND THEIR CAUSES 5.1 Key Work Related Hazards and Risks
The type of hazard, the degree of risk it poses, and the severity of harm that may result will vary from workplace to workplace and from sector to sector. The following list compiled by ESAW (2003) highlights some of the common work-related hazards and risks:
- Work equipment and plant. Under this we have inadequate mechanical safeguards to prevent contact with dangerous objects; lack of maintenance of work equipment and vehicles; cuts and splinters (from blades, corners, sheet metal, tools, edges, etc.); and electrical hazards.
- The workplace. This involves poor housekeeping (order, cleanliness, control); poor visibility in areas where vehicles and lifting equipment (e.g. mobile cranes) are working; and the mixing of people and vehicles (particularly at entrances and exits to garages, warehouses, and depots). Workplace-related hazards also include: (i) thermal burns caused by working with hot surfaces, hot liquids, vapours, gases or heating systems; (ii) chemical burns due to corrosive substances, in particular strong acids and bases used in some processes (e.g. cleaning, maintenance, surface treatment); (iii) inhalation of certain dangerous substances, e.g. the 'silent killer' carbon monoxide, generated by incomplete combustion; (iv) dangerous work involving exposure to the risk of asphyxiation, i.e. a lack of oxygen, e.g. in confined spaces (such as vats, tanks, reactors, or tubes); (v) fires and explosions caused by the conjunction of three factors: fuel, oxygen, and an ignition source; (vi) falls and falling objects - many deaths at work are due to people falling (from scaffolding, ladders, staircases, mobile ramps, etc.), falling objects do considerable harm, and people can fall over because of cluttered, dirty, slippery, corroded passages and poor surfaces; (vii) workplace transport, covering the uncontrolled movement of objects (e.g. poorly secured loads and containers in storage, transport, distribution, or handling) and people (being struck or run over by moving vehicles, e.g. during reversing; falling from vehicles; being struck by objects falling from vehicles; or vehicles overturning).
- The workforce, i.e. lack of information, instruction, training, supervision, and education.
- Psychosocial factors, such as stress, which can considerably increase the risk of industrial accidents.
Senyonjo (undated) has listed a number of reasons for the occurrence of industrial
injuries and accidents in Uganda. They range from poor or inadequate legislation and regulation concerning the workplace, to a poorly trained and poorly facilitated workforce, to natural causes. Fig. 8 shows the percentage of accidents by "cause" in Uganda in 1994. In general, most sources indicate that human factors are the cause of 80%-90% of accidents, with only 10%-20% being caused by material factors. This proportion can vary significantly from one sector to another, and/or from one activity to another.
Fig. 8. Accident causes in Uganda in 1994: power driven machinery 34%; falling objects 15%; falls of persons 10%; stepping on or striking against objects 9%; other causes (collapse of building etc.) 8%; handling of goods or articles 8%; use of hand tools 7%; hot or corrosive substances 5%; fire or explosion 3%; transport 1%. (Adapted from Senyonjo, undated.)

5.2 Summary of Accident Causes
It is therefore possible to summarise causes of construction related accidents under the following four categories (HSE, 2003):
(i) Worker and work team causes. Worker or work team factors in construction accidents involve the actions of individuals, their capabilities and communication problems. Possible influences on these arise from worker attitudes and motivation, pay and remuneration, supervision and deployment, education and training, health, and working hours. The term 'worker' is used broadly and includes operatives, trade personnel and specialist professionals.
(ii) Workplace causes. Workplace factors influence safety through the presence of local hazards, size of working space, and environmental aspects (e.g. lighting, noise, vibration and the weather). These, in turn, are affected by constraints of the particular site, consequences arising from work scheduling, and the effectiveness of housekeeping procedures.
(iii) Materials and equipment. Material and equipment characteristics include their suitability, usability and condition (including maintenance). Factors influencing these are the material/equipment design and specification, and their supply and availability.
(iv) Originating influences. These are possible underlying or root causes of the accidents, and include the following.
Permanent works design - inadequate designs, unclear designs, innovative designs, incorrect drawings or information on utility services, too many revisions. When infrastructure is not properly designed, it can fail and cause an accident. Because there is a lack of regulated design and construction of infrastructure, especially by the private sector, many structures are never subjected to rigorous engineering scrutiny and judgement. Many designers and constructors of structures pass as engineers when they have never been in an engineering college. There are also numerous cases of inexperienced engineers being given charge of the design and/or construction of complex infrastructure. An engineer must aim to learn how a structure might behave and not be satisfied with a quick but poorly understood solution. Perhaps the advice given by Skempton (1961), the president of the UK Institution of Civil Engineers, is telling: "Optimism and overconfidence may impress one's clients, but they have no influence on the great forces of nature."
Project management - confusion of ownership of responsibility (e.g. between main and sub-contractors), quality and quantity of labour supply, work scheduling and time pressure.
Construction processes - confusing, inadequate or missing method statements.
Safety culture - lack of training in OSH, production pressures overriding safe practice, widespread disinclination among operatives, supervisors and site managers to take responsibility for safety, absence of risk assessments for construction projects, workers simply being 'told' what to do without being part of any consultation, lack of accident investigation, and failure to identify accident remedial action.
Almost all the above causes were greatly manifested in the Bwebajja building accident (Mwakali, 2004).
6.0 HOW TO REDUCE CONSTRUCTION RELATED ACCIDENTS
6.1 A view from Uganda
In 1998, the Uganda National Association of Building and Civil Engineering Contractors (UNABCEC) embarked on a two-year OSH programme following the concern over the many injuries and accidents that had hit the construction industry. The main objective was to promote health and safety awareness in the construction industry. The programme made the following recommendations, among others:
- Review and update OSH legislation, particularly the regulations governing the construction industry, and strictly enforce them
- Increase OSH awareness programmes through sustainable OSH training and information activities for all stakeholders
- Enhance collaboration and co-ordination between trade unions, employers' organisations, Government enforcement agencies, professional bodies, insurers, etc., in supervising OSH practices
- Include OSH programmes in the curricula of training institutions
- Introduce incentive schemes in the construction industry
- Avail and promote the use of Personal Protective Equipment (PPE)
- Immediately report occurrences to enforcement authorities for investigation to avoid reoccurrence
- Establish a professional body, like an institution of OSH, to co-ordinate training and research in OSH
- Have a long term programme of OSH awareness and capacity building so as to instil and sustain a culture of OSH in the construction industry in particular and Uganda in general
- Ensure obligation of safety and health by designers of construction projects at the onset
- Ensure professional bodies monitor the ethical conduct of their members
- Develop guidelines to districts following the Local Government Act of 1995
6.2 A view from Europe
Under European Union directives, employers have responsibilities for the safety and health of their workers. Directive 89/391 provides the general framework for health and safety management, risk identification and prevention. Employers are required to assess risks and take practical measures to protect the safety and health of their workers, keep accident records, provide information and training, consult employees, and co-operate and co-ordinate measures with contractors. A hierarchy of prevention is set, including: avoid risks; combat risks at source; adapt work to the worker; replace the dangerous with the non-dangerous; and give collective measures priority over individual measures. The following sections give a detailed description of the measures for accident prevention at work.
Actions at Enterprise Level to Prevent Work-Related Accidents
Companies should ensure the safety and health of workers in every aspect related to their work. Therefore, employers should take the necessary measures for the safety and health protection of workers, including the prevention of occupational risks and the provision of information and training, and provide the necessary organisation and means. To prevent accidents at company level, enterprises should establish a safety management system that incorporates hazard identification, risk assessment, implementation of prevention measures, monitoring and review. Risk assessment involves: (i) identifying hazards; (ii) judging who might be harmed and how seriously, including employees, contractors and the general public; (iii) deciding how likely it is to happen; (iv) deciding how these risks can be eliminated or reduced; (v) setting priorities for action
according to the size of risk, numbers affected, etc.; (vi) implementing control measures; (vii) assessing whether control measures are working; (viii) including employee consultation in the process and providing information on risk assessment results; and (ix) ensuring that occupational health and safety knowledge, skills and expertise are available. Furthermore, the company should ensure that working conditions and machinery have been adapted to technical progress. A structured approach to management ensures that risks are fully assessed and that safe methods of work are introduced and followed. Periodic review checks that these measures remain appropriate. Performance should be monitored; this can be reactive, e.g. using accident records, or proactive, e.g. using feedback from inspections, audits and staff surveys. Accident investigations should identify the immediate and underlying causes, including management failings. The aim is to ensure that systems and procedures are working and to take any corrective action needed immediately.
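To make the prioritisation step concrete, the scoring logic behind such an assessment can be sketched as follows. This is a minimal illustration only: the hazards, scales and scores shown are hypothetical rather than drawn from Directive 89/391 or any particular workplace.

# Minimal sketch of risk ranking for a workplace risk assessment.
# Hazards, scales and scores are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    severity: int         # 1 (minor) .. 5 (fatal / multiple)
    people_exposed: int   # workers, contractors, public

    @property
    def risk_score(self) -> int:
        # Simple likelihood x severity matrix, weighted by exposure.
        return self.likelihood * self.severity * self.people_exposed

hazards = [
    Hazard("Fall from scaffold", likelihood=3, severity=5, people_exposed=12),
    Hazard("Manual handling strain", likelihood=4, severity=2, people_exposed=30),
    Hazard("Dust inhalation", likelihood=5, severity=2, people_exposed=25),
]

# Rank hazards so control measures are applied to the largest risks first.
for h in sorted(hazards, key=lambda h: h.risk_score, reverse=True):
    print(f"{h.name:28s} score={h.risk_score}")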
Accidents and Gender
Taking a 'gender-neutral' approach to risk assessment and accident prevention can result in risks to female workers being underestimated. A careful examination of real work circumstances shows that both women and men can face significant accident risks at work, so it is important to include gender issues in workplace risk assessments, for example by: having a positive commitment and taking gender issues seriously; looking at the real working situation; involving all workers, women and men, at all stages; avoiding prior assumptions about what the hazards are and who is at risk; and ensuring collaboration and co-operation with the workers.
Accidents and People with Disabilities
People with disabilities should receive equal treatment at work, and this includes equality regarding health and safety at work. Risk assessments and accident prevention measures should take account of individual workers' differences, and it is important not to assume that all workers are the same. Separate risk assessments are needed for pregnant workers and, similarly, may also be necessary for disabled workers. Accident prevention measures for persons with disabilities may also help to reduce accidents among all workers, for example good lighting in the workplace; safe workplace access and egress; well-maintained pedestrian and traffic routes; and clear communication of hazards and risks in the workplace (e.g. by good signing).
Consultation
Consulting the workforce is important. Using workers' knowledge helps to ensure that hazards are correctly spotted and workable solutions implemented. Employees must be consulted on health and safety measures and also before the introduction of new technology or products. Consultation helps to ensure that workers are committed to safety and health procedures and improvements.
Information and Training
Workers have a right to receive information about the risks to health and safety, preventive measures, first aid and emergency procedures. Employees have duties to co-operate actively with employers' preventive measures, following instructions in accordance with training given and taking care of their own and workmates' safety and health.
All workers need to understand how to work safely. Therefore training should cover: what the risks are; the protective measures to follow; and emergency procedures. Training should be relevant and understandable, including for workers who speak a different language. Training should be provided for new workers and for existing workers when work practices or work equipment change, on a change of job, or when new technology is introduced. Training requirements may vary according to worker ability, and particular care should be taken when training workers with disabilities.
6.3 Checklist for Management to Prevent Accidents
- Have clear procedures and responsibilities for health and safety been set, and does everyone know their own and others' responsibilities?
- Do you know what you have to do to comply with health and safety legislation? If not, have you appointed a competent person who can provide advice?
- Have you identified the main risks to health and safety and taken action to eliminate or reduce them?
- Are your arrangements for the maintenance of work equipment adequate?
- Have you provided your workers with any necessary personal protective equipment for risks that cannot be avoided by other means? Have you trained them in its use?
- Have you provided information to the workers on the risks, and trained them in safe working and emergency procedures?
- Do you consult your workers about health and safety issues, including changes to policy, work procedures and equipment?
- Do workers know how to report unsafe conditions and accidents?
- Do you take prompt action to investigate accidents, near misses and reported problems?
- Do you regularly inspect the workplace, and check that workers are following safe working procedures?
- Do you have a system for reviewing your health and safety policy and working procedures?
6.4 A view from the UK
According to a wide-ranging study for the HSE (2003), achieving a significant and sustained reduction in accidents will require concerted efforts directed at all levels of the hierarchy of causation. Important points are:
- Responsibility for safety needs to be owned and integrated across the project team, from designers and engineers through to skilled trade personnel and operatives. Research has shown how the lead given by front-line supervisors has a strong influence on safety performance. Worker participation in managing safety is important, to generate ideas and to build ownership and responsibility.
- Where safety depends on communication and coordination, it is important that a
robust safe system of work is established.
- A step change is required in standards of site layout and housekeeping. Principal contractors should raise expectations of what constitutes acceptable practice.
- Greater attention should be given to the design and selection of tools, equipment and materials. Safety, rather than price, should be the paramount consideration.
- There needs to be greater sophistication in the design and use of PPE. Current PPE is often uncomfortable and impedes performance, and forcing workers to wear PPE when risks are not present is counterproductive. PPE should be a last rather than a first resort for risk management.
- There is a need across the industry for proper engagement with risk assessment and risk management. Emphasis should be on actively assessing and controlling risk, rather than treating risk assessment as merely a paper exercise.
- Construction should be encouraged to benchmark its safety practices against other industries. The excuse that construction is 'different' in some way does not stand up to scrutiny.
- Greater opportunity should be taken to learn from failures, with implementation of accident investigation procedures by both employers and government.
- It is important that 'safety' is disassociated from 'bureaucracy'. Frequently, safety does not have to come at a price. Where there are cost implications, however, regulatory bodies and trade associations should work to make sure there is a level playing field.
Most of these changes depend on achieving widespread improvement in the understanding of health and safety. Education is needed over and above training, so as to promote intelligent knowledge rather than unthinking, rule-based attention to safety.
6.5 Putting it all Together
In a nutshell, it is suggested that regulation and support of the construction industry are the remedy that would help greatly reduce construction-related accidents in Uganda. This involves, among other things:
- Strict enforcement of existing laws that relate to the construction industry, such as the Factories Act, the Public Health Act, the Workers Compensation Act, etc.
- Urgent revision of the following laws: the Building Control Act, to curb informal construction; the Factories Act, to include construction and engineering works initiated by and for the private sector; and the Engineers Registration and Architects Registration Acts, to provide for disciplinary action against errant engineers and architects, respectively.
- Urgent consideration of the proposed Occupational Safety and Health Bill so that the Public Health Act can be better complemented.
- Developing and enforcing standards for construction materials, technologies, design, practice, etc.
- Operationalisation of the recently formulated Local Construction Industry policy.
- Investment in the education and training of technical managers and operatives of the construction industry (engineers, architects, surveyors, etc.).
- Strengthening the insurance sector.
- Building capacity to respond to national emergency situations involving major incidents.
7.0 CONCLUSION
All accidents are multi-causal, with a rare combination of factors needing to coincide to give rise to an incident. Underlying each of the causal factors is a range of influences determining the extent to which they undermine safety. Operatives' actions, for example, are influenced by their attitudes towards safety, their knowledge and skills, and their alertness and health. These, in turn, are affected by peer pressure, education and training, working hours, payment schemes, previous injuries or ill-health, and so on. The existence of hazards on site is a consequence of influences such as planning and preparation, supervision, housekeeping, project management and safety culture. For materials and equipment, the suitability, usability, condition and ultimately safety of these are a result of their design and selection, and then of their availability and supply. It is possible to drastically reduce construction-related accidents by regulating and managing construction.
REFERENCES
European Agency for Safety and Health at Work (2001), Accident prevention in the construction sector, Facts 15. Accessed on 20th June 2005 from http://agency.osha.eu.int/publications/factsheets/.
Health and Safety Executive (2003), Causal factors in construction accidents, HSE Books, Sudbury, Suffolk, UK.
Mwakali, J. A. (Chairman) (2004), Bwebajja building accident report, Bwebajja Building Accident Investigation Committee, Ministry of Works, Housing and Communications, Kampala.
Ojok, J. R. M. (1996), "Heavy physical work in Uganda", 1994 Annual Report for Commissioner Occupational Safety and Health. In African Newsletter 2/1996, pp. 50-52. Accessed on 20th June 2005 from http://www.ttl.fi/Internet/English/Information/Electronic+journals/African+Newsletter/.
Senyonjo, William K. Mukasa (undated), A health and safety programme in the construction industry in Uganda, Occupational Safety and Health Department, Kampala, Uganda. Accessed on 20th June 2005 from http://www.cramif.fr/pdf/th4/Paris/senyonjo.pdf.
Skempton, Alec (1961). Presidential address. Quoted in New Civil Engineer, No. 99, p. 19.
The New Vision (March 9, 2006), "20 perish as church collapses", The New Vision, Vol. 21, No. 058. Kampala, Uganda.
THE RATIONALE FOR USE OF DECISION SUPPORT SYSTEMS FOR WATER RESOURCES MANAGEMENT IN UGANDA G. Ngirane-Katashaya, F. Kizito, I. Nansubuga Mugabi, Department of Civil Engineering, Makerere University, Uganda
ABSTRACT
Water is a major factor in the socio-economic fabric of Ugandan society and a major determinant of the development potential of the country. The increasing demand for water in Uganda requires that water use efficiency be improved. Management of water resources is a complex problem that typically involves a variety of stakeholder interests and fundamental environmental uncertainties; this calls for the application of an integrated approach to water resources planning and management characterized by informed, fair and equitable decision making. The plurality of concerns establishes a pressing need for improved and more comprehensive water resource planning and management, which considers all three dimensions - hydrological, ecological and socioeconomic - in arriving at management decisions. There is a need to develop practical tools and methodologies to underpin and support sustainable development and management of the country's water resources, in the form of comprehensive Decision Support Systems (DSS) that integrate data and stakeholder development priorities. In spite of rapidly advancing computer technology and the proliferation of software for decision support, relatively few DSSs have been developed, implemented, and evaluated in the field of water resources management in Uganda. Furthermore, there are still open methodological questions about the development and structure of operational DSSs in the field of WRM, and so there is room for applied research in developing tools that match local needs. Such tools should be tailored to the local conditions prevailing in the country, and accommodate specific needs as identified by stakeholders in a participatory, bottom-up development framework. They should feature user interfaces that allow easy interaction, are simple enough to be used directly and mastered by local decision makers without the constant support of computer analysts, and present outputs in formats that are easy to interpret. Equally important in the context of Uganda is for the process of development and deployment of the DSSs and associated methodologies to be inexpensive and cost-effective.
Keywords: Decision Support Systems, Integrated Water Resources Management, stakeholder participation, GIS, Uganda
1.0 INTRODUCTION
The Constitution of Uganda (1995), under objective XIV, provides for the state to fulfill the fundamental rights of all Ugandans to social justice and economic development.
The objective ensures that all Ugandans enjoy rights and opportunities and have access to education, health services, and clean and safe water. Furthermore, in objective XXI, the state undertakes "to take all practical measures to promote a good water management system at all levels", and in objective XXVII "to promote sustainable development and public awareness of the need to manage land, air and water resources in a balanced and sustainable manner for the present and future generations". Monitoring, assessment, allocation and protection of the water resources of Uganda is inherently a responsibility of the State, through its established institutions (specifically the Directorate of Water Development). This responsibility is enshrined in both the 1995 Constitution and the Local Government Act of 1997. Water is a major factor in the socio-economic fabric of Ugandan society and a major determinant of the development potential of the country. The various uses to which water is put have important implications for the state of Uganda's environment, as does the way water resources are managed. The increasing demand for water in Uganda requires that water use efficiency be improved. As competition grows for scarce resources from different sectors in which unit values of water can differ markedly, there is an increasing need for improved management of the resource. Formulated in 1997, the National Water Policy (NWP) promotes an integrated approach to managing the country's water resources in ways that are sustainable and most beneficial to the people of Uganda (NWP, 1997). This approach is based on the continuing recognition of the social value of water, while at the same time giving much more attention to its economic value. Allocation of both water and investments in water-use schemes aims at achieving the maximum net benefit to Uganda from its water resources both now and in the future. The current mode of thinking on improving water resources management endorses an integrated, multi-sectoral approach in the prevailing socioeconomic context, including: treating water as a social and economic good; relying on markets and pricing to determine water allocation among various sectors and user groups; involving the beneficiaries and the private sector in managing water at the lowest appropriate level; and recognizing that water is a finite resource that contributes to economic development and supports natural ecosystems. Management of water resources is a complex problem that typically involves a variety of stakeholder interests and fundamental environmental uncertainties. Furthermore, interdependencies exist between regimes (ground water, surface water, land), processes (hydrology, meteorology, hydrogeology), uses (water supply, irrigation, hydropower, recreation),
and social, economic, political and environmental concerns (stakeholder priorities, environmental impacts, costs, gender issues, treaties and regulations). The above interrelationships call for the application of an integrated approach to water resources planning and management characterized by informed, fair and equitable decision making.

2.0 INTEGRATED WATER RESOURCES MANAGEMENT
Integrated Water Resources Management (IWRM) may be defined as "a process which promotes the coordinated development and management of water, land and related resources in order to maximize the resultant economic and social welfare in an equitable manner without compromising the sustainability of vital ecosystems" (Rogers and Hall, 2003). Implementation of IWRM involves the establishment of three "pillars", as illustrated below:
Fig. 1: The "three pillars" of Integrated Water Resources Management - an Enabling Environment (policies, legislation), an Institutional Framework (central/local, river basin, public/private) and Management Instruments (assessment, information, allocation instruments) - balancing economic efficiency, equity and sustainability between "water for livelihood" and "water as a resource". (Source: GWP)
IWRM calls for analysis of varying technical, socioeconomic, environmental and political value judgments, and involves complex trade-offs between divergent criteria. The plurality of concerns establishes a pressing need for improved and more comprehensive water resource planning and management, which considers all three dimensions - hydrological, ecological, and socioeconomic - in arriving at management decisions. It has been noted that decision-making related to water resource management would benefit from water resources engineering expertise combined with suitable use of informatics.
Mysiak (2003) reports that in the year 2000, after four years of extensive discussions, the European Union (EU) Council and Parliament adopted Directive 2000/60/EC, also known as the Water Framework Directive (WFD), a common European legislative framework for the protection of water resources. In order to support the implementation of the WFD, a key action line has been dedicated under the 5th Framework Programme to issues related to the sustainable use of water resources. One of the priorities under this action line is the development of decision support systems for water resources management, providing a means of exploring and solving water-related issues in an integrated and participatory manner.

3.0 DECISION SUPPORT SYSTEMS
At its broadest, a Decision Support System (DSS) is any methodology that helps a decision maker to resolve issues of trade-offs through the synthesis of information. Within the context of water resources management, a DSS would typically contain or rely on information from databases, GIS coverages, computer simulation models, economic analysis models as well as decision models. The actual analytical processes may use linear programming techniques, decision theory or rule-based expert systems.
Several authors, cited by Watkins and McKinney (1995), describe an extension of the DSS concept to what they refer to as Spatial Decision Support Systems (SDSS). This involves the integration of DSS and Geographic Information Systems (GIS). The latter may be defined as a general-purpose technology for handling spatial geographic data in digital format, with the ability to pre-process data into a form suitable for analysis, to support direct modelling and analysis, and to post-process results into a form suitable for graphical display. Water resources management issues are usually characterized by spatial features, so it seems logical that GIS become a part of a DSS for water resources management. A more recent view regards a DSS as a context or platform for helping all those involved in decision-making processes to access the necessary information and data for a useful debate to take place. Bruen (2002) defines a Stakeholder Decision Support System as a DSS which can be used jointly by decision makers, technical experts and other non-technical stakeholders to explore the consequences of combinations of preference schemes and alternative scenarios, in the hope of achieving mutually acceptable compromises. This is also referred to as a Participatory DSS.
Few DSSs for water resources management have been found to be in use in Uganda. Examples of those encountered include the Lake Victoria Decision Support System (LVDSS) and the Nile Decision Support Tool (Nile DST). Also currently under development is a Water Resources Engineering and Management Decision Support System (WREM-DSS), which is designed to incorporate and enhance the two desirable attributes (spatial and participatory) within the same DSS.
3.1 Lake Victoria Decision Support System (LVDSS)
This is a water resources management decision support system being developed for the Lake Victoria Basin. It is a collaborative effort of the Food and Agriculture Organization (FAO) of the United Nations (UN) and its Lake Victoria Water Resources Project on the one hand, and the Environmental Hydraulics and Water Resources Group at the Georgia Institute of Technology in Atlanta, Georgia, on the other. The LVDSS is being developed in accordance with the following guiding principles:
- The DSS should be a "shared-vision" system, able to capture all relevant information pertaining to management decisions and represent it in a form that the users can intuitively appreciate;
- The role of the DSS will be to assist the Lake Victoria basin partners in their efforts to formulate mutually agreed-upon management strategies. As such, it should have the ability to generate trade-offs among the various water uses, and assess the gains and costs of various development and management scenarios;
- The Lake Victoria basin partners should be able to continue to utilize and develop the DSS technology under a changing environment.
3.2 Nile Decision Support Tool (Nile DST)
In 1999 the Nile Basin Initiative (NBI), a partnership initiated and led by the riparian states of the Nile River through the Council of Ministers of Water Affairs of the Nile Basin states (Nile-COM), was created. The NBI programme consists of two complementary subprogrammes: the Shared Vision Program and the Subsidiary Action Program. The former focuses on fostering an enabling environment for cooperative development, while the latter addresses physical investments at sub-basin level.
Involvement of the Government of Italy, through the Italian Cooperation, in the Nile Basin started in 1996 with project GCP/RAF/286/ITA, "Operational water resources management and information system in the Nile Basin countries". This was later followed by project GCP/INT/752/ITA, "Capacity building for Nile-Basin water resources management", which was implemented as part of the NBI Shared Vision Program. One of the focus areas for the two projects was the development of a Nile decision-support tool (Nile DST). The Nile DST, developed by the Georgia Water Resources Institute, is a prototype software that models the entire Nile Basin system and assesses the trade-offs and consequences of various cross-sector and basin-wide development scenarios. The system incorporates modules for river simulation and reservoir operation, agricultural planning, and watershed hydrology. It allows the impacts of various levels of regional coordination to be examined, and serves as a cornerstone for information integration. The Nile DST was released by Nile-COM in February 2003.
3.3 Water Resources Engineering and Management Decision Support System (WREM-DSS)
The WREM-DSS is a prototype DSS under development as part of an ongoing study that is being carried out within the framework of the Sida/SAREC-funded project "Sustainable Technological Development in the Lake Victoria Region". The prototype DSS is designed to take full advantage of the traditional spatial capabilities of GIS, together with a focus on enhanced stakeholder involvement in the decision-making process, facilitated by the embedding of Multi-Criteria Decision Analysis (MCDA) techniques within the decision engine.

4.0 THE NEED FOR DSS DEVELOPMENT IN UGANDA
One of the priority actions that have been identified in order to achieve the policy goal of sustainable water resources management is the establishment of planning and prioritization capabilities for decision makers (WAP, 1995). These capabilities are intended to enable decision makers to make choices between alternative actions based on agreed policies, available resources, environmental impacts, and the social and economic consequences. It has been recognized (DWD, 2002(1)) that the capacity at district and lower levels to plan and implement sector activities is low, and additional central support is still needed. Likewise, the capacity at the center (in terms of skills, technology, etc.) is also limited. Efforts geared towards building up the requisite capabilities are timely and desirable. Based on the status of her various macroeconomic and human development indices (UNDP, 1996), Uganda is classified as a "developing country". A very topical catchphrase in Uganda today is "modernization", which is viewed as key to addressing the poverty and underdevelopment prevalent in the country (NEMA, 1996). Within this context, development of appropriate technologies has a crucial role to play. While in the past there has been some skepticism regarding the suitability of modern Information Technology (IT) within an "appropriate technology" framework, there is now a growing school of thought that sees advanced IT as actually underpinning the development effort in underdeveloped countries such as Uganda (Moriarty and Lovell, 2000). This is mainly so in light of the continuing fall in prices and rise in availability of computing power. There is therefore a need to develop practical tools and methodologies to underpin and support sustainable development and management of the country's water resources, in the form of comprehensive decision support systems that integrate data and stakeholder development priorities. In spite of rapidly advancing computer technology and the proliferation of software for decision support, relatively few DSSs have been developed, implemented, and evaluated in the field of water resources management in Uganda.
Such decision support tools need to be structured to fit in with existing policy frameworks and responsibility allocation in Uganda's water sector. They should be tailored to the local conditions prevailing in the country, and accommodate specific needs as identified by stakeholders in a participatory, bottom-up development framework. By building a DSS, many needs of policy-makers and resource managers in the water sector can be met, such as the provision of mapping capability for land and water resources, a common digital database for information, a suite of spatial analysis tools, development of predictive models, and provision of a basis for evaluation of management alternatives. Another important prerequisite to achieving the policy objectives for integrated water resources development and management is a good understanding of the physical resources, including the interplay of meteorological, hydrological and hydro-geological factors (DWD, 2002(2)). Equally important are good forestry, agriculture and land use management programmes and practices, as these have a direct impact on the water regime. The process of building up a comprehensive, integrated DSS offers the involved stakeholders an opportunity to gain insight into and an understanding of the various sub-sectoral interdependencies.
5.0 SOME CONSIDERATIONS IN DEVELOPING A DSS FOR WRM IN UGANDA
According to Pereira and Quintana (2002), one of the key desirable features of a DSS is adaptability - an adaptive system that corresponds to diverse decision makers' needs, supporting a variety of decision-making processes yet independent of any one in particular. Thus, construction of a DSS calls for a concept-driven approach - that is, an approach that begins with the establishment of a conceptual framework and then finds suitable tools and technologies that would support and implement that framework. Furthermore, mere generation of knowledge about interactions among physical and socioeconomic processes in a watershed is insufficient. The knowledge must be delivered to potential users in a way that maximizes its usefulness in watershed planning and management. In this respect, it would be necessary that a computer-based water resources management DSS feature a user interface that allows easy interaction, is simple enough to be used directly and mastered by local decision makers without the constant support of computer analysts, and presents outputs in formats that are easy to interpret. Modularity within the context of DSS development means that, starting with the identification of various needs as perceived by the different stakeholders, individual analytical and modeling tools can be developed, or adopted and adapted, to constitute sub-components of the DSS. A framework would then be established within which each of these sub-component modules can be integrally accessed and utilized in a holistic manner, taking into account the multiple objectives and constraints at play within the watershed as a whole. The use of an open architecture for the DSS would ensure ease of upgrade of component modules, as well as addition of new modules in response to identified needs.
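As a simple illustration of the kind of analytical module that could form one such sub-component, a weighted-sum multi-criteria scoring of water-management alternatives might be sketched as follows. The alternatives, criteria, weights and scores shown are hypothetical, and operational DSSs typically embed richer MCDA methods than this.

# Minimal sketch of a weighted-sum MCDA module for ranking water-management
# alternatives. Criteria, weights and scores are illustrative assumptions.

criteria_weights = {          # agreed with stakeholders, summing to 1.0
    "water_supply_reliability": 0.35,
    "cost": 0.25,
    "ecosystem_impact": 0.25,
    "equity_of_access": 0.15,
}

# Normalised scores (0 = worst, 1 = best) for each alternative on each criterion.
alternatives = {
    "New wellfield":     {"water_supply_reliability": 0.8, "cost": 0.4,
                          "ecosystem_impact": 0.6, "equity_of_access": 0.5},
    "Demand management": {"water_supply_reliability": 0.5, "cost": 0.9,
                          "ecosystem_impact": 0.9, "equity_of_access": 0.7},
    "Small surface dam": {"water_supply_reliability": 0.9, "cost": 0.3,
                          "ecosystem_impact": 0.3, "equity_of_access": 0.6},
}

def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

for name, scores in sorted(alternatives.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name:20s} {weighted_score(scores):.2f}")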
Equally important in the context of Uganda is for the process of development and deployment of the DSS and associated methodologies to be cost-effective, demanding minimal hardware, software and licensing fees. This necessitates the identification, adoption and adaptation of suitable existing tools, models and routines, with particular emphasis on the usage of non-proprietary, inexpensive or widely available industry-standard software tools.
6.0 CONCLUSIONS
Many decision support systems have been developed to address the problems of water resources management, in different parts of the world and focusing on different aspects of WRM. The need for a computerized DSS clearly emerges as a result of the increasing complexity of decision situations, caused by the numerous conflicting, often spatially related objectives and the dissimilarity of the stakeholders involved. However, there are still open methodological questions about the development and structure of operational DSSs in the field of WRM, and so there is room for applied research in developing tools that match local needs.

REFERENCES
Bruen, M. (2002). Multiple Criteria and Decision Support Systems in Water Resources Planning and River Basin Management. Paper presented at the National Hydrology Seminar. URL: http://www.opw.ie/hydrology/data/speeches (Accessed: 18/10/2004).
Directorate of Water Development (DWD), 2002(1). Overview of the Water Sector, Reform, SWAP and Financial Issues. Issue Paper 1, prepared for the Joint Government of Uganda/Donor Review for the Water and Sanitation Sector, Kampala, 2003.
Directorate of Water Development (DWD), 2002(2). Water Resources Management. Issue Paper 5, prepared for the Joint GoU/Donor Review for the Water and Sanitation Sector, Kampala, 2003.
Moriarty, P.B., Lovell, C.J. (2000). The development and use of a network integrating a database/GIS and groundwater model as a management tool in the Romwe Catchment Study: the effects of land management on groundwater resources in semi-arid Zimbabwe. Proceedings of Electronic Workshop: Land-Water Linkages in Rural Watersheds; Case Study 19.
Mysiak, J. (2003). Development of Transferable Multi-criteria Decision Tools for Water Resource Management. Annals of the Marie Curie Fellowship Association, Vol. III.
National Environment Management Authority (NEMA) (1996). State of the Environment Report for Uganda, 1996.
National Water Policy (NWP), Ministry of Natural Resources (1997).
Pereira, A.G., Quintana, S.C. (2002). From Technocratic to Participatory Decision Support Systems: Responding to the New Governance Initiatives. Journal of Geographic Information and Decision Analysis (GIDA), Vol. 6, No. 2, 2002.
Rogers, P. and A. W. Hall, 2003. Effective Water Governance. Global Water Partnership, 2003. United Nations Development Programme (UNDP), 1996. Human Development Report, Uganda, 1996. Water Action Plan (WAP), Directorate of Water Development (1995). Watkins, D.W., McKinney, D.C. (1995). Recent Developments associated with Decision Support Systems in Water Resources. U.S. National Report to IUGG, Rev. Geophys. Vol. 33 (American Geophysical Union).
THE NEED FOR EARTHQUAKE LOSS ESTIMATION TO ENHANCE PUBLIC AWARENESS OF EXPOSURE RISK AND STIMULATE MITIGATING ACTIONS: A CASE STUDY OF KAMPALA CIVIC CENTER P. B. Mujugumbya, K. C. Akampuriira and J. A. Mwakali, Department of Civil Engineering, Makerere University, Uganda
ABSTRACT
The field of earthquake loss estimation is currently in an exciting stage of development. Advances in software and computer technology, database management programs, and improved data collection in recent earthquakes are all contributing to the development of models to better assess the performance of structures, the types and extent of economic losses, casualties, and other social impacts. Estimates of earthquake loss are needed for a variety of purposes, including emergency response, risk management and hazard mitigation. Since all hazard and risk estimates have uncertainties, we need to think through the best ways to express them to the public and to policy makers in responsible ways that do not negate the significance of our conclusions.
Keywords: Loss estimation; hazard; earthquake; vulnerability; disaster management; workmanship; supervision; engineered buildings; RADIUS; HAZUS.
1.0 INTRODUCTION
The recent earthquakes in Uganda have claimed lives, caused material losses and brought attention to the increasing social and economic vulnerability to seismic risks. There continue to be large human losses from earthquakes, and the economic losses are rising dramatically, mainly because of the continuously increasing concentrations of population in exposed urban areas and the increasing dependency on a complex network of lifelines. The suffering of earthquake victims raises difficult policy issues of how best to decrease the human losses and economic damage from earthquakes and how to spread the financial burdens. Once an earthquake strikes a large city, the damage can be tremendous and a terrible toll can be inflicted, as attention and resources needed for the greatest challenges of developing countries, i.e. poverty eradication and sustainable development, are diverted to recovery programs. In 1998, the government realized the need to establish the Department of Disaster Preparedness and placed it under the Prime Minister's office [1]. Though there is a disaster division under the department, there has been no implementation of earthquake disaster management projects for our city as has been done for other earthquake-prone cities,
namely Teheran (Iran), Kathmandu Valley (Nepal), Bogota (Colombia) and Istanbul (Turkey), and more studies are being carried out on them [2]. Traditionally, in disaster management the focus has been placed on preparedness for response. However, we should shift to developing disaster reduction strategies and measures that aim to enable societies to be resilient to earthquake disasters and to minimize the threat to both poverty eradication and sustainable development. In such projects, first and foremost, a risk assessment consisting of hazard, vulnerability and risk analyses and evaluation has to be carried out. Based on the risk assessment, disaster reduction strategies and measures can be developed through a holistic, namely multi-sectoral, multi-disciplinary and inter-agency, approach. Local participation during the development process is crucial, as previous experience clearly demonstrates that local government is the most effective player in front-line activities, as well as in building citizens' capacity. In this paper the study for the Civic Center is briefly described as an example of a project for earthquake loss estimation of a city in a developing country.

2.0 BARRIERS TO PUBLIC LOSS REDUCTION ACTIONS
Earthquake hazard and risk communications face some formidable barriers to being understood and believed by many recipients. Actions in response to successful hazard and risk messages include both private decisions, based on the recipient's perception of risk and their personal aversion to that risk, and the community's consensus that loss-reducing regulations and other types of action are justified to keep the exposure to risk within a specific comfort zone. The upper boundary of the comfort zone is usually referred to as the "level of acceptable risk". The most important public policy efforts to reduce future earthquake losses in Uganda have come on the heels of disastrous events rather than through the receipt of prior hazard and risk messages. Our policy makers want to show that they are doing something after disasters.
3.0 APPLICATION OF LOSS ESTIMATION TO EARTHQUAKE LOSS REDUCTION
In a democratic society, the things that are given significant attention are generally "consumer-driven" issues. It is clear that both citizens and public policy makers of the city are unaware of the extent of their exposure to earthquake risk, and consequently they do not have the motivation to invest in loss reduction improvements. Our attempts at risk and hazard communication are not strategically successful. Citizens will only advocate and support policies that advance seismic safety if they appreciate the extent of their own risk exposure and find it to be above their consensus level of acceptability. Properly used, RADIUS 1999 and the newly developed HAZUS loss estimation modeling capability can enable us to justify improvements without waiting for the next disastrous earthquake. The citizens receiving our messages are the key to any improvements in seismic safety that may be made. We do not have to wait for the next disaster! First
we must strive to eliminate barriers to the recipient's perception of our messages and then construct our messages so that the recipients are prompted to take action.

4.0 ESTIMATED LOSSES IN UGANDA EARTHQUAKES DURING THE PAST YEARS
Looking at the past years, earthquake losses have increased dramatically. The tabulation below shows recorded earthquake losses in Uganda between 1945 and 2006 [3].
Event | Location | Latitude | Longitude | Magnitude | Estimated Damages and Losses
18th March 1945 | Masaka | - | - | - | Five people were killed.
20th March 1966 | Toro | 0.7 | - | - | 160 people died; 7,000 buildings were damaged and destroyed.
5th February 1994 | Kisomoro | 0.593 | 30.037 | 6.0 | 8 people died and several were injured, with buildings damaged; 2 more people were killed and 1 injured by a landslide in Kasese.
All these earthquakes occurred in relatively unpopulated areas. As urban areas continue to expand, the population and infrastructure at risk increase. If these or similar events were to recur, or occur closer to populated areas, the damage would be much more significant. For example, the Kisomoro earthquake of 1994 caused approximately $60 million in damage, of which 0?40 was covered by insurance. City decision makers are frequently called upon to make decisions on development, redevelopment and hazard mitigation priorities. Clearly, these decisions could profit from an understanding of the expected future losses from earthquakes. This understanding should begin on a regional scale, applicable to regional policy decisions. To this end, to provide a credible first-order estimation of future earthquake losses in Kampala, the Faculty of Technology, Makerere University, has implemented an evaluation of expected earthquake losses in Kampala City. Of course, we cannot say when and where earthquakes will occur, how big they will be and what their effects will be, but we do apply the best available current understanding of earthquakes and their effects to produce this evaluation. The approach used provides a publicly available model that can be applied at regional scales to assist in the development and prioritization of mitigation, response and recovery strategies. To this end, we include a short list of possible suggestions and issues on loss reduction that arise from the damage analysis.
5.0 RESULTS OF LOSS ESTIMATION
Using data collected from the target area, the theoretical potential damage caused by the hazard scenarios can be estimated. This estimate includes structural and non-structural damage to buildings, loss of human life, number of injuries and direct economic loss. When associated losses, such as losses to contents, inventory and income, are included, the expected annual loss for the whole of the Civic Center increases to a figure many times that anticipated. It should be perfectly clear that these results are purely theoretical, using average damage functions that do not include particular characteristics of local systems; those particular characteristics are considered in the non-theoretical damage estimation. Note also that past earthquakes may not provide a realistic estimate of the effects of future earthquakes. It may be helpful to note that once all required information has been gathered and the use of Geographic Information System (GIS) software or a standard format effected, it is very easy to estimate damage for different earthquakes, since only the intensity distribution changes. The city could then investigate how several hypothetical events would affect it. This underlines the need to enhance this technology for better and more accurate results.

6.0 COMPARISON WITH OTHER PUBLISHED ESTIMATES
The loss estimates presented in this evaluation may seem very large or very small, but they cannot be conclusively compared with previously published results. Most of the quantified losses have occurred in developed countries. It is now widely held that even if the rate of occurrence of natural disasters is not increasing, the damage that results from them is increasing as the number of people and structures exposed to the hazards increases.
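To make the mechanics of the damage estimation described in Section 5.0 concrete, the core calculation can be sketched as follows. This is a simplified illustration only: the building classes, damage ratios, values and intensities shown are hypothetical and are not taken from the Kampala study.

# Minimal sketch of scenario-based earthquake loss estimation.
# Building classes, damage functions and intensities are illustrative assumptions.

# Mean damage ratio (fraction of replacement value lost) per building class,
# indexed by scenario intensity level.
damage_functions = {
    "unreinforced_masonry": {6: 0.10, 7: 0.30, 8: 0.60},
    "reinforced_concrete":  {6: 0.02, 7: 0.08, 8: 0.25},
}

# Inventory: (zone, building class, total replacement value in USD).
inventory = [
    ("zone_A", "unreinforced_masonry", 5_000_000),
    ("zone_A", "reinforced_concrete", 20_000_000),
    ("zone_B", "unreinforced_masonry", 8_000_000),
]

def estimate_loss(intensity_by_zone: dict) -> float:
    """Sum the expected direct loss for one scenario intensity distribution."""
    total = 0.0
    for zone, bclass, value in inventory:
        intensity = intensity_by_zone[zone]
        total += damage_functions[bclass].get(intensity, 0.0) * value
    return total

# Only the intensity distribution changes between scenarios.
scenario_1 = {"zone_A": 7, "zone_B": 6}
scenario_2 = {"zone_A": 8, "zone_B": 7}
print(f"Scenario 1 loss: ${estimate_loss(scenario_1):,.0f}")
print(f"Scenario 2 loss: ${estimate_loss(scenario_2):,.0f}")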
7.0 RECOMMENDATIONS TO CREATE FURTHER EARTHQUAKE AWARENESS AND INITIATION OF MORE EFFECTIVE LOSS REDUCTION EFFORTS
Where do we go from here? It is clear that the Civic Center faces a serious financial threat to its buildings and economy from future earthquake losses, but the problem receives serious attention only after catastrophic events. Solutions seem too expensive for the nebulous gain to be received at some distant time in the future. What can we as scientists, engineers, economists and sociologists do to help alleviate these future losses? Better communication needs to take place between the technical community and the policy makers and general public. Scenario loss estimates fare just as poorly: a scenario event may be catastrophic and cost millions of dollars, but it is likely not to occur in our lifetime, or at least not in the tenure of any city leader. New approaches must be invented to better communicate the level of risk to the general public, to public policy and decision makers and to the financial institutions. A process of assessing the success of important new communication efforts by surveying the intended audiences will facilitate continuing improvement of our efforts [4].
We recognize that it is difficult to understand how to use estimates of future loss that carry large uncertainties. We need to put both the uncertainties and the significance of the current loss estimates into a perspective that will justify action and not favor procrastination. In addition, the technical community must continue its efforts to use and improve loss estimation methodologies so as to reduce the uncertainty in the results. Dollar loss estimates should be more comprehensive, including loss estimates for critical facilities and lifelines. The better the results presented to the public, the more credible the message and the more useful the understanding is in designing mitigation strategies. Together, these increase the likelihood that the public and policy makers will respond meaningfully to the loss estimation insights. In many countries the main issues in preventing human and economic losses appear to be the enforcement of existing building codes, the quality of materials used in construction, and the workmanship and supervision on the construction of engineered buildings. In a nutshell, human and economic losses can be reduced by improving the built environment. There may also be difficult problems in financing earthquake recovery because of the heavy burden this places on government budgets. Moreover, households and businesses appear to be reluctant to invest in retrofitting and other mitigation measures, as well as in insurance, for events that appear remote with respect to their everyday needs. For the purpose of mitigating losses, we should emphasize the need to link private risk management tools, such as insurance, with public risk regulations, such as well-enforced building codes. We also argue that insurance could play a role in reducing earthquake losses by linking premiums with household and business mitigation measures. A set of risk management issues should be examined both for mitigating earthquake
losses and for spreading the economic losses after the disaster. In developed countries, surveys have shown how losses are transferred from the victims to insurance companies, the government and international aid givers. The sum of insurance claims, public compensation and voluntary aid ranges from 40 to 60% of the direct losses from these events. However, the distribution is uneven, with insurance playing the major role in some countries and public assistance in others; voluntary aid appears to play only a minor role. In Uganda, the reverse is true. For the government to reduce its role in post-disaster assistance, there should be institutional arrangements for a national insurance system. One important suggestion is that disaster insurance be made mandatory for households and businesses in high-risk areas and urban centers, although this raises the issue of how low-income persons will cover the costs. Arrangements for bundling natural disaster insurance with fire or property insurance might be an interesting alternative to mandatory insurance, since the demand for "bundled" policies appears to be far less elastic with respect to price than that for "unbundled" policies.
8.0 CONCLUSION
Whether Africa needs a risk modeling tool comparable to HAZUS or RADIUS remains a significant issue. It is up to us to stress its necessity in order to anticipate risk on an African scale with a compatible technology that gives higher priority to the local risk assessment process. The latter should follow good practice as developed by the RADIUS program. Key issues in risk assessment should be to increase awareness and to understand risk assessment as an integral part of a risk management strategy.
REFERENCES
1. Owor M., "Disaster management in Uganda". Proceedings of the Uganda Second Seismic Safety Association International Conference on Earthquake Disaster Preparedness, Entebbe, Uganda, December 2-3, 2002.
2. Takahashi M. and Tanaka T., "Seismic Risk Assessment of Istanbul in the Republic of Turkey". Proceedings of the 3rd WSSI International Workshop on Seismic Risk Management for Countries of the Asia Pacific Region, December 7-8, 2003, Bangkok, Thailand.
3. Geological Survey and Mines Department, Ministry of Energy and Mineral Development, Uganda.
4. Communicating Risk to the Public: International Perspectives, 1999, Kluwer Academic Publishers.
CHAPTER FOUR
CHEMICAL AND PROCESS ENGINEERING

PARTICLE DYNAMICS RESEARCH INITIATIVES AT THE FEDERAL UNIVERSITY OF TECHNOLOGY, AKURE, NIGERIA
B. A. Adewumi, A. S. Ogunlowo and O. C. Ademosun, Department of Agricultural Engineering, Federal University of Technology, Akure, Nigeria
ABSTRACT
The interactions between particles and the analysis of the interplaying forces in moving air are essential in separation, sorting, cleaning and grading processes. Effective design of machines for particle separation, sorting, cleaning and grading also requires a good knowledge of particle dynamics. Separation, sorting, cleaning and grading processes mostly exploit the differences in the physical and aerodynamic properties of particles, especially size, shape, density and terminal velocity. Conventionally, concurrent and counter-current flows are utilized for separation, and the relevant theories are well established. However, concurrent and counter-current systems can only separate materials with sharp differences in size, density and terminal velocity, and cannot classify materials. A cross-flow system was therefore proposed for material classification. This paper reports the initiatives of the Department of Agricultural Engineering, Federal University of Technology, Akure, Nigeria, in developing equipment for cross-flow particle dynamics research, with the view of developing small/medium-scale machines that could be used simultaneously for sorting, cleaning and grading grains.
Keywords: Material clarification, material classification, particle dynamics, equipment
1.0 INTRODUCTION
Impurities and contaminants are separated from sound grains during the cleaning process. Grain cleaning reduces the problems that occur during storage and handling. Clean grains save storage space and increase marketability. Separation and cleaning are achieved as a result of the differences in the size, density and terminal velocity of the mixed product (Kulkarni 1989). Cleaning of grains is mostly done using a screen or a pneumatic separator. Screen
separation is mostly used to separate products on the basis of differences in their sizes. Multi-screen separators are used for classifying grains into size grades, and some screen separators incorporate a fan to remove light particles (Ogunlowo & Adesuyi 1999). Screen separators include the rotary and the reciprocating types (Kulkarni 1989, Ademosun 1993). Pneumatic separation exploits the differences in the aerodynamic properties of the mixed material to partition it into two major groups. Ogunlowo & Oladapo (1990) stated that the air velocity (Va) is selected such that it is between the terminal velocity of the grain (VTg) and that of the light material (VTL); hence VTg > Va > VTL. Pneumatic separators/cleaners are basically of two types, namely the vertical air stream and horizontal air stream separators (Gorial & O'Callaghan 1991a, b). In the vertical air stream separator, the air stream flows vertically against the injected mixed product such that the heavy particles (grains) drop through the air (counter-current flow) while the light materials (chaff) move upward and are carried along by the air (concurrent flow). In the horizontal air stream separator, air is blown horizontally, or at an angle inclined to the horizontal, against the mixed product injected along the vertical plane. The mixed product is displaced along the horizontal plane to various distances based on its physical and aerodynamic properties (Gorial & O'Callaghan 1991b).
2.0 FACTORS AND PARAMETERS INFLUENCING PARTICLE MOVEMENT IN FLUID
The solution to the motion of particles in a fluid has attracted various approaches by different researchers, including Kiker & Ross (1966), Rumble & Lee (1970), Farran & Macmillan (1979), Gorial & O'Callaghan (1991a, b), Macmillan (1999), Ogunlowo & Coble (2000) and Adewumi (2005). The study of the behaviour of particles in moving air is an essential aspect of cleaning and separating threshed materials. Knowledge of the flow of solid particles in air, the forces involved and the parameters affecting grain separation is required. The three primary aerodynamic parameters of paramount importance are the drag coefficient, terminal velocity and Reynolds number. The drag coefficient includes the coefficient of pressure drag and the coefficient of friction drag. Hence, the resistance of a sphere to motion in an air flow is the sum of the frictional effect (skin drag) and the pressure gradient effect, otherwise known as form drag (Tabak & Wolf 1998). It is a function of the Reynolds number and the geometric mean diameter (Gorial & O'Callaghan 1990). The form drag is closely associated with the mechanism of flow in the wake region, while the skin drag depends on surface characteristics. The form drag depends on the form or shape of the particle and decreases to a minimum as a body approaches an eccentric ellipsoid.
The terminal velocity, otherwise called the suspension velocity, is the velocity at which the drag force and the gravitational force are in equilibrium. It is a function of particle moisture content, increasing with an increase in moisture content (Joshi et al 1993). The Reynolds number is an essential aerodynamic parameter that determines the nature of flow. Flow can be turbulent, laminar, uniform or non-uniform, depending on the Reynolds number; in pneumatic systems it is usually in the range of 10^3 to 2 x 10^4 (Tabak & Wolf 1998). A number of forces influence air-solid interaction in bulk flow. Such forces include the gravitational force, drag force, frictional force, pressure gradient force, lift force, transverse force caused by the Magnus effect, collision forces due to momentum and tangling, and so on (Ogunlowo & Coble 2000, Macmillan 1999, Rumble and Lee 1970, Kiker & Ross 1966).
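For a roughly spherical particle of diameter d and density ρp in air of density ρa, equating the drag force to the gravitational force gives the familiar estimate VT = [4 g d (ρp - ρa) / (3 Cd ρa)]^(1/2), where g is the gravitational acceleration and Cd the drag coefficient at terminal conditions; this standard relation is quoted here for orientation only and is not taken from the works cited above.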
A number of parameters have been identified to influence the separation of particles in a fluid medium. These include the fluid velocity, the particle direction in the air flow, the direction at which the particle is injected, the particle feed rate, the loading ratio, the residence time of the particle in the separation chamber, the ratio of grain to material other than grain (MOG) and the air turbulence intensity (Hamilton & Butson 1979, Shellard & Macmillan 1978).

3.0 THEORETICAL BASIS FOR PARTICLE SEPARATION IN HORIZONTAL AIR STREAM
Much literature is available on the aerodynamic theory of separation of threshed material in a vertical air stream (Gorial & O'Callaghan 1991a); only a few studies treat the aerodynamic theory of particles in a horizontal air stream (Gorial & O'Callaghan 1991b). Unlike flow in vertical ducts, which is concurrent and counter-current in nature, the flow in a horizontal duct is a cross flow. Gorial & O'Callaghan (1991b) proposed a theory for the separation of particles in a horizontal air stream, as discussed below. The aerodynamic drag force (F_d) exerted upon a particle by a stream of air is given by:

F_d = \frac{1}{2} C_d \rho_a A V_r^2 \qquad (1)

where C_d is the drag coefficient, \rho_a the air density, A the projected area of the particle and V_r the velocity of the particle relative to the air.
The drag force accelerates the particle until it acquires the velocity of the air stream. The equation of the acceleration is:

\frac{dV_p}{dt} = \frac{F_d}{m} \qquad (2)

where m is the particle mass and V_p its velocity.
Assuming the particles are spherical and using the diameter of the equivalent sphere (d) to calculate the projected area (A), as discussed in Gorial and O'Callaghan (1990), the acceleration is written as:
\frac{d^2 x}{dt^2} = \frac{3 C_d \rho_a}{4 \rho_p d} V_r^2 \qquad (3)

where \rho_p is the particle density.
Equation (3) suggests that if two grains of different size are introduced with the same initial velocity into an air stream, the acceleration imparted to the grains depends directly on their drag coefficients and on the velocity of the particle relative to the air, and inversely on their diameters and densities. Assuming the same density and drag coefficient, as for the same type of grain, small grains experience a greater acceleration and hence travel a greater distance in the air stream than larger grains. A similar conclusion can be drawn for two grains of equal size but different densities introduced into the air stream. This theory suggests the possibility of separating grains into size and density grades in a horizontal air stream.
4.0 THE CROSS FLOW SEPARATOR AT THE FEDERAL UNIVERSITY OF TECHNOLOGY, AKURE (FUTA), NIGERIA
A thresher-cleaner was developed for particle dynamics studies at FUTA. Figure 1 shows the entire experimental rig at FUTA. It is made up of the conveyor unit, threshing unit, fan unit and cleaning chamber. The conveyor unit, threshing unit, transmission system and the centrifugal fan were designed using conventional design analysis, but the aerodynamic theory of separation proposed by Gorial & O'Callaghan (1991a, b) was utilized and modified for the design of the separation chamber. Based on this proposition, a trajectory model was developed using the drag and gravitational force components, for a more realistic condition. The resolved forces were integrated twice to obtain the displacement components in the x and y directions. The displacement components were solved numerically, using MATLAB software and a Fortran 77 program, for materials of varied terminal velocities, representing chaff, stem, leaf, stone and grain. The displacement components incorporate the fan angle of inclination, air velocity and particle velocity. The numerical solution involved iteration with a 0.01 s time step until steady state was achieved, and the trajectory plots were obtained using another program. The plots of the particle trajectories under practical conditions were used to select the size of the separation chamber; a dimension of 0.99 m x 1.3 m was selected. The thresher-cleaner has been subjected to initial testing and found effective. Preliminary experimentation with the rig ascertained the feasibility of utilizing a cross flow system to clean, sort and grade grains, particularly grain legumes. It is recommended as a test rig for particle dynamics studies (Adewumi 2005). The rig is now to be provided with accessories for the purpose of research.
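The kind of trajectory computation described above can be illustrated with a short sketch. The fragment below is not the authors' MATLAB/Fortran 77 implementation; it is a minimal explicit time-stepping of the drag and gravitational force components (cf. equation (3)) with a 0.01 s step, and every numerical value in it is an assumed placeholder.

```python
import math

def particle_trajectory(v_air, incline_deg, v_inject, d, rho_p,
                        c_d=0.44, rho_a=1.2, g=9.81, dt=0.01, t_end=0.6):
    """Explicit time-stepping of a particle injected vertically into an
    inclined air stream, under drag (relative to the air) and gravity.
    Returns a list of (t, x, y) points; x is horizontal drift, y is fall."""
    theta = math.radians(incline_deg)
    u_air, w_air = v_air * math.cos(theta), v_air * math.sin(theta)
    x = y = 0.0
    vx, vy = 0.0, -v_inject                      # particle injected downwards
    k = 3.0 * c_d * rho_a / (4.0 * rho_p * d)    # drag factor per unit mass, cf. eq. (3)
    path = [(0.0, x, y)]
    t = 0.0
    while t < t_end:
        rel_x, rel_y = u_air - vx, w_air - vy    # air velocity relative to the particle
        rel = math.hypot(rel_x, rel_y)
        ax = k * rel * rel_x
        ay = k * rel * rel_y - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
        path.append((t, x, y))
    return path

# Assumed inputs: 8 m/s air stream inclined 10 degrees above the horizontal,
# a 6 mm grain of density 1150 kg/m3 injected downward at 0.5 m/s.
print(particle_trajectory(8.0, 10.0, 0.5, 0.006, 1150.0)[-1])
```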
5.0 CONCLUSION
The development of the rig is aimed at initiating particle dynamics research at FUTA. It is intended that a particle dynamics laboratory and a centre for particle dynamics studies shall eventually be established at FUTA. The end goal of all these efforts is to develop adaptable, effective and efficient aerodynamic systems/machines for the African nations.

6.0 ACKNOWLEDGEMENT
The authors thank the Federal University of Technology for providing a minor research grant to support the project.

REFERENCES
Ademosun, O. C. (1993). Development and performance evaluation of a pedal-operated multi-crop cleaner. Journal of Agricultural Engineering and Technology 1: 27 - 37.
Adewumi, B. A. (2005). Development and evaluation of an equipment for threshing/cleaning of legume crops. PhD Thesis, Federal University of Technology, Akure, Nigeria.
Adewumi, B. A., Ademosun, O. C. and Ogunlowo, A. S. (2005). Research contributions to the development of a legume thresher. Paper presented and accepted for publication at the International Conference on Science and Technology, Federal University of Technology, Akure.
Farran, I. G. and Macmillan, R. H. (1979). Grain-chaff separation in a vertical air stream. Journal of Agricultural Engineering Research 24 (2): 115 - 129.
Gorial, B. Y. and O'Callaghan, J. R. (1990). Aerodynamic properties of grain/straw material. Journal of Agricultural Engineering Research 48 (4): 275 - 290.
Gorial, B. Y. and O'Callaghan, J. R. (1991a). Separation of grain from straw in a vertical air stream. Journal of Agricultural Engineering Research 48: 111 - 122.
Gorial, B. Y. and O'Callaghan, J. R. (1991b). Separation of particles in a horizontal air stream. Journal of Agricultural Engineering Research 49 (4): 273 - 284.
Hamilton, A. J. and Butson, M. J. (1979). Approaches to the problem of combine grain loss on sloping ground II: An alternative to sieve separation. Journal of Agricultural Engineering Research 24 (3): 293 - 299.
Joshi, D. C., Das, S. K. and Mukherjee, R. K. (1993). Physical properties of pumpkin seeds. Journal of Agricultural Engineering Research 54 (3): 219 - 229.
Kulkarni, S. D. (1989). Pulse processing machinery in India. Agricultural Mechanisation in Asia, Africa and Latin America 20 (2): 42 - 48.
Kiker, C. F. and Ross, I. J. (1966). An equation of motion for multiple grain particles in free fall in an enclosed vertical duct. Transactions of the American Society of Agricultural Engineers 9: 468 - 473, 479.
Macmillan, R. H. (1999). Winnowing in the wind: a computer study. Agricultural Mechanisation in Asia, Africa and Latin America 30 (1): 56 - 58.
Ogunlowo, A. S. and Adesuyi, A. S. (1999). A low cost rice cleaning/destoning machine. Agricultural Mechanisation in Asia, Africa and Latin America 30 (1): 20 - 24.
Ogunlowo, A. S. and Coble, L. G. (2000). A mathematical model for investigating bulk solid flow in a vertical counter current chamber. FUTAJEET 2 (1): 10 - 19.
Rumble, D. W. and Lee, J. H. A. (1970). Aerodynamic separation in a combine shoe. American Society of Agricultural Engineers, Trans. Paper 68-610, pp 6 - 8.
Shellard, J. E. and Macmillan, R. H. (1978). Aerodynamic properties of threshed wheat material. Journal of Agricultural Engineering Research 23 (3): 273 - 281.
MATERIAL CLASSIFICATION IN CROSS FLOW SYSTEMS

B. A. Adewumi, A. S. Ogunlowo and O. C. Ademosun, Department of Agricultural Engineering, Federal University of Technology, Akure, Nigeria
ABSTRACT
Classification of agricultural material into size grades is an essential process because it improves quality and market value. Size grading is conventionally achieved by exploiting differences in the geometry and linear/radial dimensions of the material. Aerodynamic means of particle separation are not commonly used and are yet to be fully exploited, although they are possible and feasible. This paper reports trial tests conducted to classify cowpea grains into size grades based on differences in their density, diameter and terminal velocity, using a cross flow test rig developed at the Department of Agricultural Engineering, Federal University of Technology, Akure, Nigeria. The spread pattern of materials collected in the cleaning chamber was studied.

Keywords: Material classification, Grain size, Terminal velocity, Air speed, Density, Air inclination.
1.0 INTRODUCTION
Grain legumes and cereals are staple foods worldwide and have wide industrial applications. Cleaning, sorting and grading are essential separation processes required to obtain high grade, quality grains. Cleaning, sorting and grading can each be, and mostly are, handled as unit operations using different machines for each operation (Kulkarni 1989, Ademosun 1993, Ogunlowo & Adesuyi 1999). The separation process includes the removal of trash, stones, pebbles and other foreign materials from grains. It also involves the removal of broken or damaged grains from whole grains and the grading of whole grains into distinct size ranges. These essentially comprise cleaning, sorting and grading (Adewumi et al. 2005). Separation and cleaning are achieved as a result of differences in the size, density and terminal velocity of the mixed product (Kulkarni 1989). Cleaning of grains is mostly done using a screen or pneumatic separator. Screen separation is mostly used to separate products on the basis of differences in their sizes. Multi-screen separators are used for classifying grains into size grades. Some screen separators incorporate a fan to remove light particles (Ogunlowo & Adesuyi 1999). Screen separators include the rotary and the reciprocating types (Kulkarni 1989, Ademosun 1993). Pneumatic separation exploits the differences in the aerodynamic properties of the mixed material to partition it into two major groups. Ogunlowo & Oladipo (1990) stated that the air velocity (Va) is selected such that it
lies between the terminal velocity of the grain (VTg) and that of the light material (VTL); hence VTg > Va > VTL. Pneumatic separators/cleaners are basically of two types, namely the vertical air stream and the horizontal air stream separators (Gorial & O'Callaghan 1991a, b, Aguirre & Garay 1999, Farran & Macmillan 1979, Jiang et al. 1984). In the vertical air stream separator, the air stream flows vertically against the injected mixed product such that heavy particles (grains) drop through the air (counter-current flow) while the light materials (chaff) move upward and are carried along by the air (concurrent flow). In the horizontal air stream separator, air is blown horizontally, or at an angle inclined to the horizontal, against the mixed product injected in the vertical plane. Gorial & O'Callaghan (1991b) proposed the feasibility of separating materials of similar density, shape and size. They suggested the use of a horizontal air stream, since this could subject materials to lateral drift as they settle under gravity. Such a separation process could therefore sub-divide a mixture of particles into several fractions, deposited at different distances from the inlet point, depending on the tendency of each individual particle to settle vertically under gravity and/or drift horizontally under the drag force of the air stream. Adewumi (2005) further suggested that such a cross flow system could perform the following functions: (i) separate stones, pebbles and relatively denser materials from grains; (ii) separate light, broken or infested grains from whole grains; (iii) grade whole grains into various sizes or size grades; (iv) blow off admixed chaff from grains. A cross flow aerodynamic system could therefore have the advantage of using a single machine for several processes. Hence, it could reduce the overall unit cost of machinery, save space and increase the rate of production. The aim of this paper is therefore to demonstrate the feasibility of classifying cowpea grain with a cross flow air stream.
2.0 MATERIAL AND METHOD
A thresher-cleaner developed at the Department of Agricultural Engineering of the Federal University of Technology, Akure (FUTA), Nigeria was used for the study. The equipment is made up of the conveyor unit, threshing unit, fan unit and cleaning chamber. Fig. 1 shows a full view of the fan unit and the cleaning chamber, which are the most relevant to this study. A collection tray with seven equal partitions at about 12.5 cm intervals is located at the lower part of the cleaning chamber. The threshed cowpea material that passed through the sieve of the threshing unit was discharged into the cleaning chamber for cleaning, sorting and grading. The discharged
material includes whole seeds, damaged seeds and pods. Table 1 shows the size range of the material loaded into the collection tray from the threshing chamber. At a pod moisture content of 14% (wb) and a grain moisture content of 15.3% (wb), the relative proportions, by weight, of whole seed, damaged seed, threshed pods and unthreshed pods loaded into the cleaning chamber were 18, 17, 63 and 2% respectively. The percentage of damaged seed was relatively high because the seeds were subjected to insect infestation before the experiment in order to obtain seeds of varying weight. The fan, running at a speed of 900 rpm, was inclined at 90° relative to the materials being dropped. The quantity of the various materials collected in each of the partitions of the collection tray was recorded.

Table 1: Classification of material loaded into the cleaning chamber
Material                 Specification                                   Size range (mm)
Cowpea (whole seeds)     Big size                                        7.00 - 11.00
                         Medium size                                     5.00 - 7.00
                         Small size                                      3.80 - 4.99
Cowpea (damaged seeds)   Broken seeds, infested seeds, immature seeds    3.00 - 10.00
Pods                     Unthreshed                                      40 - 120
                         Threshed whole                                  100 - 120
                         Threshed medium size                            40 - 99
                         Threshed small size                             5 - 39
3.0 RESULTS AND DISCUSSION
Table 2 shows the materials recovered in each of the seven partitions of the collection tray. Fig. 2 shows the spread pattern of the materials collected at a fan angle of inclination (θ), pod moisture content (PMC), grain moisture content (GMC) and fan speed (V) of 90°, 14.0%, 15.3% and 900 rpm respectively. All the grains (whole and damaged) loaded into the cleaning chamber were recovered within the cleaning chamber, while all threshed and unthreshed pods were blown away. 85% of the big and medium sized grains, together with the big sized damaged grains, were collected within a distance of 38 cm from entry (within the first three trays). The majority of the small sized grains (68%) were collected within the fourth to sixth trays, 38.0 to 74.5 cm from entry. The highest percentage (above 45%) of the big sized grains (whole and damaged) was recovered in the first tray (12.5 cm from entry). This is because the terminal velocity of the damaged big sized grains is closest to that of the whole seeds. The simple implication of these results is that the classification of seeds into size grades in a cross flow system is feasible, provided that the variable parameters such as the angle of inclination of the fan, fan speed, air velocity, grain and pod moisture content and so on are properly selected and monitored.
Table 2: Mean material distribution at PMC = 14.0%, GMC = 15.3%, θ = 90°, Vf = 900 rpm

SN  Materials           1      2      3      4      5      6      7    Total
1   Big seeds          5.9    4.0    2.4    0.7    0.1    0.0    0.0    13.1
2   Medium seeds       0.4    1.1    0.4    0.2    0.0    0.0    0.0     2.1
3   Small seeds        0.0    0.2    0.0    0.3    0.1    0.0    0.0     0.7
4   Damaged seeds      6.5    3.6    2.1    1.6    0.1    0.0    0.0    14.0
5   Whole pods         0.0    0.0    0.0    0.0    0.0    0.0    0.0     0.0
6   Medium pods        0.0    0.0    0.0    0.0    0.0    0.0    0.0     0.0
7   Small pods         0.0    0.0    0.0    0.0    0.0    0.0    0.0     0.0
8   Unthreshed pods    0.0    0.0    0.0    0.0    0.0    0.0    0.0     0.0
    Total             12.9    8.9    4.9    2.9    0.3    0.0    0.0    29.9
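As a quick cross-check of Table 2, the share of each seed class recovered within the first three trays (roughly the first 38 cm from the entry point) can be computed directly from the tabulated masses. The snippet below is only a verification aid, not part of the original analysis.

```python
# Fraction of each seed class recovered in the first three collection trays,
# computed from the values in Table 2.
table2 = {
    "Big seeds":     [5.9, 4.0, 2.4, 0.7, 0.1, 0.0, 0.0],
    "Medium seeds":  [0.4, 1.1, 0.4, 0.2, 0.0, 0.0, 0.0],
    "Small seeds":   [0.0, 0.2, 0.0, 0.3, 0.1, 0.0, 0.0],
    "Damaged seeds": [6.5, 3.6, 2.1, 1.6, 0.1, 0.0, 0.0],
}

for name, masses in table2.items():
    share = 100.0 * sum(masses[:3]) / sum(masses)
    print(f"{name}: {share:.0f}% recovered in trays 1-3")
```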
Fig. 2: Spread pattern of material in the trays at θ = 90°, PMC = 14.0%, GMC = 15.3% and V = 900 rpm (horizontal axis: distance from entry point or horizontal displacement, cm)
4.0 CONCLUSION
Adequate selection of essential parameters such as the angle of inclination of the fan, fan speed, air velocity, grain moisture content, pod moisture content, height of fall and inlet speed of the grains would enhance adequate separation, sorting, cleaning and grading of grains in a cross flow system. It is therefore suggested that further research should be initiated in this field of study, particularly in view of the fact that the cross flow separator has the potential of reducing the unit cost of production and saving space.

5.0 ACKNOWLEDGEMENT
The authors thank the Federal University of Technology, Akure, Nigeria for providing a minor research grant to support the project.
REFERENCES
Ademosun, O. C. (1993). Development and performance evaluation of a pedal-operated multi-crop cleaner. Journal of Agricultural Engineering and Technology 1: 27 - 37.
Adewumi, B. A. (2005). Development and evaluation of an equipment for threshing/cleaning of legume crops. PhD Thesis, Federal University of Technology, Akure, Nigeria.
Adewumi, B. A., Ademosun, O. C. and Ogunlowo, A. S. (2005). Research contributions to the development of a small/medium scale legume thresher. Paper presented and accepted for publication at the International Conference on Science and Technology, Federal University of Technology, Akure.
Aguirre, R. and Garay, A. E. (1999). Continuous flow portable separator for cleaning and upgrading bean seed and grains. Agricultural Mechanisation in Asia, Africa and Latin America 30 (1): 59 - 63.
Farran, I. G. and Macmillan, R. H. (1979). Grain-chaff separation in a vertical air stream. Journal of Agricultural Engineering Research 24 (2): 115 - 129.
Gorial, B. Y. and O'Callaghan, J. R. (1991a). Separation of grain from straw in a vertical air stream. Journal of Agricultural Engineering Research 48: 111 - 122.
Gorial, B. Y. and O'Callaghan, J. R. (1991b). Separation of particles in a horizontal air stream. Journal of Agricultural Engineering Research 49 (4): 273 - 284.
Jiang, S., Bilanski, W. K. and Lee, J. H. A. (1984). Parameters for an aerodynamic combine precleaner for separation of grain from straw and chaff using a multi-air-jet sieve. Transactions of the American Society of Agricultural Engineers 27 (1): 36 - 40, 44.
Kulkarni, S. D. (1989). Pulse processing machinery in India. Agricultural Mechanisation in Asia, Africa and Latin America 20 (2): 42 - 48.
Ogunlowo, A. S. and Adesuyi, A. S. (1999). A low cost rice cleaning/destoning machine. Agricultural Mechanisation in Asia, Africa and Latin America 30 (1): 20 - 24.
Ogunlowo, A. S. and Oladipo, F. O. (1990). A multi purpose grain cleaning and grading machine. Proceedings of the Nigerian Society of Engineers 14: 268 - 275.
APPLICATION OF SOLAR-OPERATED LIQUID DESICCANT EVAPORATIVE COOLING SYSTEM FOR BANANA RIPENING AND COLD STORAGE

S. A. Abdalla, Kamal Nasr Eldin Abdalla, M. M. El-awad and E. M. Eljack, Department of Mechanical Engineering, University of Khartoum, Sudan
ABSTRACT
A banana facility is distinguished by the process of ripening, during which the bananas are exposed for a 24-hour period to a small pre-measured amount of ethylene gas in a confined airtight room whose air temperature should be maintained between 20°C and 25°C and relative humidity between 90% and 95%. Banana ripening rooms require continuous circulation of large volumes of conditioned air throughout the entire load, in addition to automatic room humidifiers, to maintain the required humidity range and prevent excessive fruit dehydration. The contribution to the Earth's ozone layer depletion and to global warming of emissions during the production and use of the halogenated hydrocarbons employed in vapor compression refrigeration, besides the need to conserve high grade energy, necessitates exploring and developing less energy demanding alternative cooling technologies. Indirect evaporative cooling, in which the incoming air wet-bulb temperature is reduced by sensibly cooling the incoming air using vapor compression refrigeration and the secondary air stream is then cooled and humidified by evaporation, has been applied to improve the cooling achieved. Evaporative cooling, a low cost cooling technology utilizing direct contact between water and air to cool the air by evaporation, seems to be the best choice; yet direct evaporative cooling cannot be used because of the limitation imposed by the incoming air wet-bulb temperature on the cooling achieved. The purpose of this paper is to describe a solar-operated liquid desiccant evaporative cooling system for providing banana ripening and cold storage conditions in Khartoum (Sudan), and to present the mathematical modelling of the system's components used in simulating the system.
Keywords: Desiccant cooling - desiccant dehumidifier - evaporative cooling - solar collector heater
1.0 INTRODUCTION
Air-conditioning has been achieved reliably and efficiently over the last few decades owing to the popularity gained by vapor compression machines as a result of the discovery of halogenated hydrocarbons. The need to conserve high grade energy, besides the contribution to the
Earth's ozone layer depletion and global warming of the emissions of halogenated hydrocarbons during their production and use, necessitates exploring alternative techniques. Evaporative cooling, a very simple, robust and low cost cooling technology basically achieved by the evaporation of water in air, is one proposition. Evaporative air coolers are devices utilizing direct contact between water and atmospheric air to cool and humidify the air by evaporating part of the water sprayed into it. Despite its potential to reduce cooling electricity use and peak demand, evaporative cooling is not widely used in air-conditioning because of technical barriers: declining cooling capacity with increasing outdoor humidity, and indoor humidity control concerns. Unlike vapor compression cooling, which relies on high energy technology, desiccant cooling, in which solar heating can be utilized to provide the heating required for desiccant regeneration during the cooling season and hot air during the heating season, relies on the low energy technology of evaporative cooling. A cold storage solution that significantly reduces electrical cooling energy demand, greenhouse gas emissions and dependence on harmful refrigerants can thus be provided by supplying cool humidified air using a solar-driven desiccant evaporative cooling technology. Desiccant cooling, which cools and humidifies process air, dehumidified using a desiccant solution, by direct evaporative cooling (both requiring no refrigerant), is one modification of direct evaporative cooling that can cater to different climates. The desiccant cooling cycle, an open heat driven cycle affording the opportunity to utilize heat that might otherwise be wasted to cool as well as humidify air, can be coupled with solar heating to produce nearly saturated cool air for banana ripening and cold storage purposes. This can significantly reduce cooling electrical energy demand in comparison with conventional vapor compression refrigeration, and should in theory be extremely environment friendly as it eliminates greenhouse gas emissions and dependence on harmful refrigerants. Although liquid desiccant regeneration requires heating roughly equal to the energy it provides for dehumidification, the liquid desiccant evaporative system could still be cost effective if delivering cold water or dry air at a relatively high COP is the final target.
2.0 SOLAR-DRIVEN LIQUID DESICCANT EVAPORATIVE COOLING SYSTEM DESCRIPTION
Figure 1 shows the schematic of the solar-driven liquid desiccant evaporative cooling system, which serves as an open cycle absorption system operating with solar energy. The cooling system consists of the following major components: a continuous fin-tube type air-to-air pre-cooler, a continuous fin-tube type air-to-water cooler, an isothermal packed tower air dehumidifier (absorber), an adiabatic packed tower regenerator, a water-to-solution pre-cooler, a water-to-solution cooler, a solution-to-solution pre-heater, a solution-to-thermal fluid heater, a solar collector thermal fluid heater, a counter-flow packed bed type evaporative water cooler, a re-circulated spray type evaporative air cooler, and appropriate instruments for the various measurements. Arabic numerals indicate the working fluid states at specific locations; thick solid lines represent air flow, while thin solid and dashed lines represent solution and water flow.
Figure 1: Schematic of the solar-operated liquid desiccant evaporative cooling system

The system is connected in a flow arrangement that allows concentrated absorbent solution storage and the capability to work in two different automatic modes, as selected by the user. One automatic mode is for full operation of the system, in which all system components operate, including the thermal fluid storage circuit, while the second is for regeneration only. In the full automatic mode, the absorber solution pump P1 pumps absorbent solution from the absorber sump (state 8) through the solution-to-water heat exchanger, where it is cooled by water from the evaporative water cooler. The cold solution leaving the heat exchanger (state 9) is then supplied to the distribution system at the top of the absorber chamber, from where it trickles down in counter flow to the air stream and collects in the sump. A fan draws ambient air through the air-to-air heat exchanger, where it is pre-cooled to state 2, and the air-to-water heat exchanger, where it is finally cooled to state 3, into the absorber chamber for dehumidification. Water vapor is removed from the sensibly cooled process air entering the bottom of the absorber (state 3) by being absorbed into the absorbent solution. Part of the dehumidified air leaving the absorber (state 4) is passed to an evaporative water cooler to generate the cold water required in the different heat exchangers (the air-to-water heat exchanger, the water-to-solution pre-cooler and the cooling coil enclosed within the dehumidifying chamber to keep the solution temperature constant).
The remainder of the dehumidified air is passed to the evaporative air cooler, where it is brought into direct contact with re-circulated water. To maintain the liquid desiccant at the proper concentration for moisture removal, a controlled amount of solution is pumped (using pump 3) from the absorber sump (state 10) through the solution-to-solution heat exchanger (where it is pre-heated to state 11 by recovering heat from the hot solution leaving the regenerator) and the solution-to-thermal fluid heat exchanger (where it is heated to state 12). The hot solution then trickles down the regenerator distribution system in counter flow to atmospheric air entering at the bottom of the regenerator, which serves to re-concentrate the absorbent to state 13. The vapor-pressure difference between the ambient air and the hot solution causes the ambient air to absorb water vapor from the solution; the hot humid air is discharged to the atmosphere and the strong solution (state 13) is pumped through the solution-to-solution heat exchanger (where it is cooled to state 14) to the dehumidifier (absorber) sump. The regenerator is similar to the dehumidifier, and so are the flow system and associated components. The regeneration side of the system will shut down for a certain time if the thermal fluid storage tank cannot supply thermal fluid at a sufficiently high temperature or if the absorbent solution concentration in the absorber pool rises above a set limit.
Figure 2: Solar-driven desiccant evaporative cooling cycle

Figure 2 shows the schematic of the solar-driven desiccant evaporative cooling system thermodynamic cycle, employed solely to provide the conditioned air used by the banana cold store. Lines 1-2 and 2-3 represent the cooling path of the moist air from point 1 to point 3 in the pre-cooler and cooler, while lines 3-4 and 4-5 represent the dehumidifying, cooling and humidifying paths of the moist air.
3.0 SYSTEM SIMULATION
This section presents the basic equations, solved using MATLAB, that govern the processes of heat and mass transfer in the solar-operated liquid desiccant evaporative cooling system used for banana ripening and cold storage. The mass and energy governing equations are written by taking each system component as a control volume and dividing the domain of interest into a finite number of computational cells using a finite difference technique. The governing equations are derived and simplified based on a set of assumptions, yet they are complex, mainly because they are coupled in a way that makes a closed form solution intractable.
4.0 HEAT EXCHANGERS MODELS
The goal of the heat exchanger modeling is to relate the inlet and outlet temperatures, the overall heat transfer coefficient and the geometry of the heat exchanger to the rate of heat transfer between the two fluids. This study employs the effectiveness-number of transfer units (ε-NTU) approach for simulating the heat exchangers. The analysis begins by determining the overall heat transfer coefficient and then calculates the effectiveness, ε, as a function of the number of transfer units, NTU, and the capacity rate ratio, Z. The effectiveness formulae for the cross flow and counter flow configurations are given by equations (1) and (2) respectively:
\varepsilon = 1 - \exp\!\left[-\frac{1 - \exp(-Z \cdot NTU)}{Z}\right] \qquad (1)

\varepsilon = \frac{1 - \exp[-NTU(1 - Z)]}{1 - Z \exp[-NTU(1 - Z)]} \qquad (2)
Once the effectiveness is known, the heat transferred, Q, and the outlet temperature of the cold stream leaving the exchanger (stream 2), T_{2,out}, can be calculated:

Q = \varepsilon \, C_{min} (T_{1,in} - T_{2,in})

T_{2,out} = T_{2,in} + \frac{Q}{C_2}
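A minimal sketch of this ε-NTU calculation is given below, using the counter flow effectiveness of equation (2); the UA value, capacity rates and inlet temperatures are assumed illustrative numbers, not design data from this system.

```python
import math

def effectiveness_counterflow(ntu, z):
    """Counter-flow effectiveness, equation (2); z = C_min / C_max."""
    if abs(1.0 - z) < 1e-9:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - z))
    return (1.0 - e) / (1.0 - z * e)

def exchanger_outlets(ua, c_hot, c_cold, t_hot_in, t_cold_in):
    """Heat duty and cold-stream outlet temperature from the e-NTU relations."""
    c_min, c_max = min(c_hot, c_cold), max(c_hot, c_cold)
    eff = effectiveness_counterflow(ua / c_min, c_min / c_max)
    q = eff * c_min * (t_hot_in - t_cold_in)
    return q, t_cold_in + q / c_cold

# Assumed values: UA = 0.8 kW/K, capacity rates 1.2 and 0.9 kW/K,
# ambient air at 43 C against cooling water at 23 C.
print(exchanger_outlets(0.8, 1.2, 0.9, 43.0, 23.0))
```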
5.0 THE ABSORBER/REGENERATOR MODEL
The absorber/regenerator modeling procedure employs the finite difference method to derive the governing equations of the liquid-desiccant system components. In the analysis it is assumed that only water is transferred between the air and the desiccant, that the chamber cross-section is uniform, that the solution temperature remains constant, that the liquid phase resistance to heat transfer is negligible (so that the interfacial temperature equals the desiccant bulk temperature), that the principal resistance to heat and mass transfer occurs in the boundary layer of air surrounding the solution droplets, and that heat conduction and mass diffusion in the solution and in the air normal to the flow direction are negligible. The governing equations are obtained by dividing the packed bed height into small segments and solving energy and mass balances for each segment from the top to the bottom of the tower. At each node the air specific heat, the water vapor enthalpy, the mass and heat transfer, the incremented air enthalpy, the solution mass flow rate, concentration, temperature and enthalpy, and the air humidity, temperature and enthalpy must all be calculated. Because the liquid-desiccant chamber uses a sorbent to humidify/dehumidify the air, a salt mass balance is included in the liquid-desiccant heat and mass finite-difference model. A steady state mass balance for the given absorber configuration, as obtained from the typical control volume of Figure 3, shows that the mass of absorbent (salt) in solution and the dry air mass flow rate are constant, and that the mass of water vapor absorbed by the solution equals the mass of water vapor removed from the air.
Figure 3: Differential heat and mass transfer element used to derive the absorber finite difference model equations

5.1 Absorber Finite Difference Model Equations
The rate of mass transfer, in terms of a constant interfacial area per unit volume, a, across a differential length dL of a tower whose empty cross-sectional area is A, may be given by:

-m_a \, dW = h_m a A (W - W_{s,T_s}) \, dL

where W_{s,T_s} is the humidity ratio of air in equilibrium with the solution at its temperature.
A steady state energy balance for the control volume yields:

-m_a \, di_a = h_m a A (W - W_{s,T_s}) \, i_{fg} \, dL + h_c a A (T_a - T_s) \, dL

m_s \, di_s = -m_a \, di_a - \Delta H_s
where \Delta H_s is the heat that must be removed or added to keep the solution temperature constant. Steady state energy and mass balances for the regenerator configuration yield:
m_a C_{p,a} \, dT_a = h_c a A (T_s - T_a) \, dL \qquad (9)

m_s \, di_s = -m_a \, di_a = h_m a A (W - W_{s,T_s}) \, i_{fg} \, dL + h_c a A (T_a - T_s) \, dL \qquad (10)
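As a rough illustration of how these coupled balances can be integrated segment by segment along the packed height (the marching procedure described in the next paragraph), the sketch below steps only the air state through the packing with the solution state held fixed; it omits the salt balance and the solution-side updates of the full model, and every number in it is an assumed placeholder.

```python
def absorber_march(n_seg, height, area, a_v, h_m, h_c, cp_air,
                   m_air, t_air_in, w_air_in, t_sol, w_eq_sol):
    """Segment-by-segment march of the air state through the packed absorber,
    with the solution held at a constant temperature t_sol and a constant
    equilibrium humidity ratio w_eq_sol (isothermal-absorber assumption)."""
    dl = height / n_seg
    t_air, w = t_air_in, w_air_in
    water_absorbed = 0.0
    for _ in range(n_seg):
        dm = h_m * a_v * area * (w - w_eq_sol) * dl   # vapour to solution, kg/s
        dq = h_c * a_v * area * (t_air - t_sol) * dl  # sensible heat to solution, kW
        w -= dm / m_air
        t_air -= dq / (m_air * cp_air)
        water_absorbed += dm
    return t_air, w, water_absorbed

# Placeholder geometry, transfer coefficients and inlet states (assumed values).
print(absorber_march(n_seg=50, height=1.0, area=0.05, a_v=200.0,
                     h_m=0.05, h_c=0.06, cp_air=1.006,
                     m_air=0.10, t_air_in=30.0, w_air_in=0.018,
                     t_sol=25.0, w_eq_sol=0.008))
```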
These non-linear, first order, coupled differential equations, which form the detailed liquid desiccant absorber/regenerator heat and mass transfer model, are solved in this work by numerical integration along the absorber/regenerator height, choosing one end as a starting point and dividing the height into a number of small segments, with properties assumed constant within each segment. The output is obtained by carrying out a step-by-step finite difference analysis, given the air flow rate and inlet conditions at the bottom of the regenerator and the salt-water solution entering conditions (temperature, concentration and mass flow rate) at the top of the absorber/regenerator. The solution routine starts by arbitrarily assuming the salt-water solution outlet values; integration begins at the process air inlet, the output of one step becomes the input to the next (forming a closed loop), and the calculated output of the last step is checked against the known entering salt-water solution condition. The desired condition is achieved if the guessed salt-water solution outlet values give calculated inlet conditions that match the known entering values within a certain tolerance; otherwise a new assumption is generated until convergence occurs. The operating conditions (temperature, concentration, humidity ratio, etc.) are evaluated at each step such that the mass and energy balance equations for the whole system are satisfied.

6.0 EVAPORATIVE WATER COOLER MODEL
Figure 4 shows the schematic diagram of the counter flow evaporative water cooler (cooling tower) used in this paper to derive, by the finite difference method, the basic differential equations for modeling the cooler performance. Water is admitted at the upper part of the tower and falls downward in counter flow to air circulated upwards through the tower by a fan; a packing (fill) retards the rate of water fall and increases the water surface exposed to the air. The analysis ignores the water lost by drift and the heat transfer through the cooler walls, and assumes that the cooler cross sectional area and the temperature throughout the water stream at any cross section are both uniform, that the temperature normal to the direction of thickness is constant, and that the overall heat and mass
transfer coefficients are constant. A steady state mass balance for the control volume yields:

dm_w = m_a \, dW = h_m a A (W_{s,T_s} - W) \, dL \qquad (11)

Energy balances on the air and on the water for the control volume yield:

m_a \, di_a = -m_w \, di_w \qquad (12)

-m_w \, di_w = h_m a A (W_{s,T_s} - W) \, i_v \, dL + h_c a A (T_s - T_a) \, dL \qquad (13)
Figure 4: Differential control volume of the counter flow evaporative water cooler
7.0 EVAPORATIVE AIR COOLER MODEL
7.1 Cooler Finite Difference Model Equations
By assuming that the water temperature remains unchanged, Figure 4 can also be used to derive the basic differential equations for modeling the performance of the re-circulated counter-flow spray type evaporative air cooler. The differential equations are derived assuming that the rate at which make-up water is added to the sump is negligibly small compared with the rate of water flow through the distribution system, that heat transfer through the cooler walls from the ambient may be ignored, and that the small addition of energy to the water by the pump has a negligible effect upon the water temperature. Steady state conditions for the cooler control volume yield:
dm_w = m_a \, dW = h_m a A (W_{s,T_s} - W) \, dL \qquad (14)
m_a \, di_a = m_a \left( C_{p,a} \, dT_a + i_v \, dW \right) = m_a \, dW \, i_f
Assuming m_a, h_m and T_s to be constant, integrating equation (14) yields:

\frac{W_{s,T_s} - W_{out}}{W_{s,T_s} - W_{in}} = \exp\!\left(-\frac{h_m a A L}{m_a}\right)

Taking the air specific heat at constant pressure, C_{p,a}, as constant, the evaporative cooler efficiency is:

\eta = \frac{W_{out} - W_{in}}{W_{s,T_s} - W_{in}} = 1 - \exp\!\left(-\frac{h_m a A L}{m_a}\right)
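A small numerical illustration of this result is given below; it applies the exponential approach factor from the integrated equation (14) to both humidity and temperature (which presumes a Lewis-number-of-unity type analogy), and the NTU value and inlet states are assumed numbers rather than data from this system.

```python
import math

def evaporative_cooler_outlet(ntu, t_in, w_in, t_water, w_sat_water):
    """Outlet state of the re-circulated spray evaporative cooler using the
    exponential approach to the water-surface state; ntu = h_m*a*A*L / m_a."""
    approach = math.exp(-ntu)
    w_out = w_sat_water - (w_sat_water - w_in) * approach
    t_out = t_water - (t_water - t_in) * approach   # same approach assumed for temperature
    return t_out, w_out

# Assumed values: NTU = 1.5, dehumidified air entering at 30 C and 0.008 kg/kg,
# re-circulated water at 19 C with a saturation humidity ratio of 0.0137 kg/kg.
print(evaporative_cooler_outlet(1.5, 30.0, 0.008, 19.0, 0.0137))
```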
8.0 THE SOLAR HEATING SYSTEM MODEL
The performance of all solar heating systems depends on the weather; both the energy collected and the energy demanded (heating load) are functions of solar radiation, the ambient temperature and other meteorological variables. The weather, best described by irregular functions of time on both short (hourly) and long (seasonal) time scales, may be viewed as a set of neither completely random nor fully deterministic time-dependent forcing functions. Since solar energy system analysis often requires examining performance over long periods, and it is difficult to vary parameters to see their effect on system performance, experiments are very expensive and time-consuming. Computer simulations, supplied with meteorological data and mathematical models, provide the same thermal performance information as physical experiments with significantly less time and expense; they can be formulated to simulate the transient performance of solar energy systems and used directly as a design tool by repeatedly simulating the system performance with different values of the design parameters. A mathematical model of a heating system, formulated either numerically or analytically, is a powerful means to analyze different possible configurations and component sizes so as to arrive at an optimal system design; it represents the thermal behavior of a component or a system. Sizing a solar liquid heater involves determining the total collector area and the storage volume required to provide the necessary amount of hot fluid.
Figure 7: Schematic of a forced-circulation solar liquid heater

The solar heating system consists basically of a collector for heating the working fluid, a working fluid storage tank, and a heat exchanger in which the working fluid exchanges heat with the load (Figure 7). For material compatibility, economic and safety reasons, a heat exchanger may sometimes be provided between the solar collector and the load, to isolate the collector's working fluid from the load and to prevent freezing of the working fluid. Depending upon the overall objective of the model, the complexity of the system can be increased to reflect the actual conditions by including pipe losses, heat exchanger effectiveness, etc. Assuming that all collector components have negligible heat capacity, that the glass cover is thin and of negligible solar absorptivity, that the collector plate solar absorptivity is close to unity and independent of the angle of incidence, and that the collector plate fins and back side have highly reflective surfaces so that radiation heat transfer from these surfaces to the inside surface of the insulation is negligible, the instantaneous total useful energy delivered by the flat plate collector is given by:

Q_u = A_c F_R \left[ H\alpha - U_L (T_{fi} - T_a) \right]
Solving the collector energy balance for the temperature of the thermal fluid leaving the collector, and subtracting T_a from both sides, gives the corresponding collector outlet temperature relation.
Assuming the storage tank is fully mixed, the temperature rise of the thermal fluid in the storage tank can be written, following the simplified mathematical model described by Beckman et al. [1], as:

C_{st} \, \frac{dT_s}{dt} = Q_d - Q_L - Q_e \qquad (22)
Assuming the rate of energy delivered to the storage tank, Q_d, to be equal to the useful energy delivered by the collector, Q_u = A_c F_R [H\alpha - U_L (T_{fi} - T_a)]; writing the load Q_L in terms of the thermal fluid mass-specific heat capacity product C_f, the temperature of the thermal fluid leaving the collector T_{fo} (assumed equal to the storage tank temperature T_s) and the thermal fluid return temperature T_r as Q_L = C_f (T_s - T_r); and writing the loss from the storage tank Q_e in terms of the storage tank loss coefficient-area product (UA)_s, the tank temperature T_s and the ambient temperature T_a as Q_e = (UA)_s (T_s - T_a); equation (22) can be numerically integrated to obtain the new storage tank temperature, which becomes the collector inlet temperature for the next hour. Thus the entire day's useful energy delivered can be obtained, as well as the storage tank temperature profile.
T_s^{+} = T_s + \frac{\Delta t}{C_{st}} \left[ A_c F_R \left( H\alpha - U_L (T_s - T_a) \right) - (UA)_s (T_s - T_a) - C_f (T_s - T_r) \right] \qquad (23)
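The hourly integration of equation (23) can be sketched as follows; the collector, tank and load parameters are assumed illustrative values, and the simple cut-off that keeps the collector gain non-negative is an added assumption rather than part of the model above.

```python
def storage_tank_step(t_s, t_amb, t_return, solar_abs, dt,
                      a_c, f_r, u_l, ua_tank, c_load, c_storage):
    """One explicit time step of equation (23) for the storage-tank
    temperature; all quantities in SI units (W, W/K, J/K, s, degrees C)."""
    q_useful = max(a_c * f_r * (solar_abs - u_l * (t_s - t_amb)), 0.0)  # collector gain
    q_tank_loss = ua_tank * (t_s - t_amb)                               # tank loss to ambient
    q_load = c_load * (t_s - t_return)                                  # load drawn from the tank
    return t_s + dt / c_storage * (q_useful - q_tank_loss - q_load)

# Assumed values: 4 m2 collector, F_R = 0.8, U_L = 5 W/m2.K, absorbed radiation
# 600 W/m2, tank capacity 840 kJ/K (about 200 litres of water), tank loss 2 W/K,
# load capacity rate 50 W/K.
t_storage = 60.0
for hour in range(3):
    t_storage = storage_tank_step(t_storage, t_amb=35.0, t_return=45.0,
                                  solar_abs=600.0, dt=3600.0,
                                  a_c=4.0, f_r=0.8, u_l=5.0,
                                  ua_tank=2.0, c_load=50.0, c_storage=840e3)
    print(hour + 1, round(t_storage, 1))
```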
9.0 RESULTS OF THE NUMERICAL SIMULATION
A computer code (the code and its results are not included in this paper) developed from unit subroutines containing the governing equations of the system's components was employed in this study. In this code, the components are linked together by a main program that calls the unit subroutines according to the user's specification to form the complete cycle; a mathematical solver routine is employed to solve all the established cycle equations simultaneously. Property subroutines contained in the program provide the thermodynamic properties of the different working fluids. The property subroutine for LiCl-water, the particular working fluid employed in this study, contains correlations derived from the work of Conde (2004). The computer simulation yields the temperature and humidity ratio of the air at the evaporative air and water cooler outlets, as well as the heat duties of the various system components, as functions of the specified inlet conditions and other operating parameters. In conducting the simulation, a reference case (ambient air condition of 43°C dry-bulb and 23.4°C wet-bulb temperature, and indoor conditions of 23°C dry-bulb and 90% relative humidity) was selected and the values of the relevant parameters were varied around it. Only one parameter (cooling water flow rate, air flow rate through the absorber, air cooler, water cooler or regenerator, salt-water solution flow rate or concentration) was varied at a time; all others remained fixed at their design values.

10.0 CONCLUSION
The above description reveals a number of advantages of solar-driven desiccant evaporative cooling for banana ripening and cold storage over conventional air-conditioning cycles:
1. The liquid desiccant evaporative cooling system seems to be the most cost-appropriate banana ripening and cold storage technology option for future applications, not only because it is environmentally friendly and requires little high grade energy input, but also because it improves the banana ripening and cold storage facility substantially in a most energy efficient manner.
2. Pressure-sealed units are avoided, as the whole system operates at atmospheric pressure.
3. Greater flexibility, as the water evaporation process in the regenerator is independent of the dehumidification in the absorber.
4. Efficient utilization of very low heat source temperatures is possible.
5. In contrast to conventional air-conditioning systems, moisture control in the liquid desiccant system adds no cooling load to the system; moisture control in conventional air-conditioning systems adds a significant cooling load, as the moisture added must be removed using refrigeration.
6. Compared to conventional air-conditioning systems, the product (banana) is exposed to high air volume rates (good air circulation) and lower temperature differentials; this minimizes chilling disorders bananas may encounter after storage.

NOMENCLATURE
A_c    Area of the collector plate, m²
a      Mass transfer area per unit volume of chamber, m²/m³
a      Heat transfer area per unit volume of evaporative chamber, m²/m³
C_p,a  Specific heat of moist air at constant pressure, kW/kg·°C
C      Mass x specific heat product, kW/K
C_p,w  Specific heat of water at constant pressure, kW/kg·°C
F_R    Collector heat removal factor
Hα     Solar radiation absorbed by the collector
h_c    Convection heat transfer coefficient, kW/m²·°C
h_m    Convection mass transfer coefficient, kg/s·m²
i_fg   Latent heat of vaporization of water, kJ/kg
i_f    Specific enthalpy of saturated liquid water, kJ/kg
i_v    Specific enthalpy of saturated water vapor, kJ/kg
L      Chamber total height, m
T_a    Ambient temperature, °C
U_L    Overall heat loss coefficient, kW/m²·°C
W      Humidity ratio, kg water vapor per kg dry air
REFERENCES
S. A. Abdalla, Non-adiabatic Evaporative Cooling for Banana Ripening, M.Sc. Thesis, Faculty of Engineering & Architecture, University of Khartoum, Sudan, 1985.
Andrew Lowenstein, A Solar Liquid-desiccant Air-conditioner, AIL Research Inc., Princeton, NJ 08543.
ASHRAE Handbook, Fundamentals Volume, American Society of Heating, Refrigeration & Air-conditioning Engineers, Inc., 1997.
J. L. Threlkeld, Thermal Environmental Engineering, Prentice-Hall International, London.
P. Stabat, S. Ginestet & D. Marchio, Limits of Feasibility & Energy Consumption of Desiccant Evaporative Cooling in Temperate Climates, Ecole des Mines de Paris, Centre of Energy Studies, 60 boulevard Saint Michel, 75272 Paris, France.
Sanjeev Jain, Desiccant Augmented Evaporative Cooling: An Emerging Air-conditioning Alternative, Department of Mechanical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi - 110016, India.
Esam Elsarraj & Salah Ahmed Abdalla, Banana Ripening and Cold Storage in Sudan Using a Solar Operated Desiccant Evaporative Cooling System, Proceedings of WREC 2005, Aberdeen, UK.
Conde, Manuel R. (2004). Properties of aqueous solutions of lithium and calcium chlorides: formulations for use in air conditioning equipment design, International Journal of Thermal Sciences.
Michael Wetter, Air-to-Air Plate Heat Exchanger: Simulation Model, Simulation Research Group, Building Technologies Department, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720.
FRACTIONATION OF CRUDE PYRETHRUM EXTRACT USING SUPERCRITICAL CARBON DIOXIDE

H. Kiriamiti, Department of Chemical and Process Engineering, Moi University, Kenya; S. Sarmat, Department of Chemical and Process Engineering, Moi University, Kenya; C. Nzila, Department of Textile Engineering, Moi University, Kenya
ABSTRACT
Fractionation of pyrethrum extract (crude extract) using supercritical carbon dioxide shows that fixed oils and pyrethrin can be separated in a supercritical extractor with two separators in series. In the first separator more of the oil, which is less volatile, is obtained, and in the second separator more of the pyrethrin is obtained. Fractionation of the extract from ground pyrethrum flowers gives 24% pyrethrin in the first separator and 34% in the second separator. In the case of fractionation of crude hexane extract (oleoresin), the percentage of pyrethrin in the second separator is twice that in the first separator. In all cases, the product obtained is solid because of the waxes, which are fractionated into both separators.
Keywords: pyrethrin, pyrethrum, extraction, fractionation, supercritical fluid, oleoresin
1.0 INTRODUCTION
Today there is a high demand for natural insecticides due to the growth of biological farming in the western world. Among the well-known natural insecticides are pyrethrin, nicotine, rotenone, limonene, azadirachtin from neem oil, camphor, turpentine, etc. (Salgado 1997). Except for pyrethrin and rotenone, most natural insecticides are expensive to exploit. Pyrethrin is one of the most widely used natural domestic insecticides and is extracted from pyrethrum flowers. Pyrethrin is a mixture of six active ingredients, which are classified as Pyrethrins I and Pyrethrins II. Pyrethrins I are composed of pyrethrin I, jasmolin I and cinerin I, while Pyrethrins II are composed of pyrethrin II, jasmolin II and cinerin II. Pyrethrin is non-toxic to warm blooded animals and decomposes very fast in the presence of light. In the conventional commercial process, extraction with organic solvents such as hexane is carried out to obtain an oleoresin concentrate. The oleoresin is purified in several steps to eliminate waxes, chlorophyll pigments and fixed oils, to obtain a final product referred to in the industry as "pyrethrin pale". In early work, Stahl (1980) observed that between 20°C and 40°C no decomposition of pyrethrin occurred in either liquid or supercritical carbon dioxide (CO2).
Marr (1984) developed a method for the identification of the six active ingredients in pyrethrum extract using High Performance Liquid Chromatography (HPLC). In 1980, Sims (1981) described and patented a process for the extraction of crude extract from pyrethrum flowers using liquid CO2. Wynn (1995) described a preparative supercritical CO2 extraction process for crude extract from pyrethrum flowers at 40°C and 80 bar. Otterbach (1999) compared crude extract obtained by ultrasonic extraction, Soxhlet extraction using hexane, and supercritical CO2 extraction, and observed that the supercritical CO2 process yielded a better quality product in terms of colour and pyrethrin content. Della Porta (2002) extracted pyrethrin from the ground powder of pyrethrum flowers with simultaneous successive extractive fractionation and post-extractive fractionation. In our previous work, Kiriamiti (2003a, b), we showed the effect of pressure, temperature, particle size and pre-treatment on the amount of crude extract and its pyrethrin content, and also developed a method for the purification of crude hexane extract (CHE) using carbon dioxide. In this paper we study the fractionation of pyrethrin and fixed oil in a post-extractive fractionation of crude extract obtained directly from pyrethrum flowers using CO2, and of CHE.

2.0 MATERIALS AND METHODS
Pyrethrum flowers were bought from local farms in Kenya. Batch extraction of pyrethrin from ground pyrethrum flowers with hexane was conducted in an agitated mixing vessel at ambient temperature. The batch process was repeated several times, until the colour of the solvent in the mixing vessel was clear. CHE was obtained by evaporation of hexane from the cumulative extracts of all batches. A CHE with a pyrethrin content of 0.16 g/g CHE was obtained.
The CO2 extraction was performed with a pilot plant from Separex Chimie Fine, France (series 3417, type SF 200), having an extraction capacity of 200 ml and three separators in series of 15 ml capacity each, with a maximum CO2 flow rate of 5 kg/h. A schematic diagram of the pilot plant is shown in our previous work, Kiriamiti (2003a, b). The extractor and the separators are jacketed to maintain a constant temperature. The ground flowers or the CHE slurry were put in the extractor's cylinder and filter mesh screens were placed at both ends of the cylinder. The cylinder is then introduced into the temperature-controlled extractor. Care is taken to ensure that the air is purged before the extraction process is started. The CO2 is pumped at a constant flow rate and directed into the bottom of the extractor. The fluid phase from the extractor is passed through valves where the pressure is throttled via the three separators in series. The CO2 is then cooled and recycled back into the system. The extracts are collected, at regular intervals, only in the first and second separators. Samples are weighed and analysed. In all experiments, the CO2 flow rate was kept constant at 0.403 kg/h. Analyses of the extracts were performed using a high-performance liquid chromatograph (HPLC), a Hewlett Packard series 1050 chromatograph equipped with a 250 mm x 4.6 mm Lichrosorb Si60, 5 µm column, as proposed by Marr (1984).
Elution was conducted with a mixture of ethyl acetate and hexane in a ratio of 1:10 at a constant flow rate of 1.5 ml per minute, leading to a 15-minute analysis. The UV detector was set at a wavelength of 242 nm, in series with a Light Scattering Detector (LSD). A refined pyrethrin sample whose pyrethrin content was 21.1% (by weight) was bought from RdH Laborchemikalien & Co. KG (Germany) for standardisation of the analytical method.
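The use of the refined standard for quantification can be illustrated with a hypothetical single-point calibration; the peak areas and injected masses below are invented for illustration only, and the proportional-response assumption is ours, not a statement of the authors' calibration procedure.

```python
# Hypothetical single-point calibration against the 21.1 % (by weight)
# refined pyrethrin standard; all numbers are made-up illustrative values.
def pyrethrin_mass_fraction(area_sample, mass_sample,
                            area_standard, mass_standard,
                            standard_purity=0.211):
    """Estimate the pyrethrin mass fraction of an extract from HPLC peak
    areas, assuming detector response proportional to injected pyrethrin mass."""
    response = area_standard / (mass_standard * standard_purity)  # area units per g pyrethrin
    return area_sample / (response * mass_sample)

print(pyrethrin_mass_fraction(area_sample=1350.0, mass_sample=0.010,
                              area_standard=980.0, mass_standard=0.010))
```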
3.0 RESULTS AND DISCUSSION
3.1 Fractionation of crude CO2 extract
Experiments were carried out in order to compare the use of supercritical CO2 and liquid CO2 extraction when fractionation of the crude CO2 extract from the extractor is implemented by a cascade of pressure reductions through the various separators. The operating conditions chosen, as well as the results obtained at the end of the extraction process, are presented in Tables 1 and 2. The quantity of pyrethrum flowers used was 45 g.
Table 1: Operating conditions

                                         Liquid CO2    Supercritical CO2
Extractor          Pressure (bar)           120.00           250.00
                   Temperature (°C)          19.00            40.00
                   CO2 density (kg/m³)      890.375          798.45
Separator 1        Pressure (bar)            80.00            80.00
                   Temperature (°C)          35.00            37.00
                   CO2 density (kg/m³)      429.349          338.80
Separators 2 and 3 Pressure (bar)            50.00            50.00
                   Temperature (°C)          28.00            30.00
                   CO2 density (kg/m³)      127.76           122.91
Table 2: Mass fraction of pyrethrin, oil and impurities in the crude CO2 extract

                                               Liquid CO2   Supercritical CO2
Mass fraction of pyrethrin      Separator 1      0.1548          0.1182
                                Separator 2      0.3437          0.2489
Mass fraction of oil            Separator 1      0.1227          0.3230
                                Separator 2      0.1251          0.1579
Mass fraction of impurities     Separator 1      0.7225          0.5588
(mainly waxes)                  Separator 2      0.5312          0.5932
It clearly appears that the amount of pyrethrin obtained in separator 2 is higher than that in separator 1 for both supercritical and liquid CO2. The distribution of oil between the two separators is about the same for liquid CO2, but with supercritical CO2 more oil is deposited in separator 1. The quantity of impurities (mainly waxes) obtained in separator 2 is lower for liquid CO2, which contributes to the improvement of the quality of the partial extract, while for supercritical CO2 the quantities are almost the same. Fractionation thus makes it possible to obtain a product more concentrated in pyrethrin in separator 2, with the partial extract containing
34.37% of pyrethrin when extracted with liquid CO2 and 24.89% in the case of extraction with supercritical CO2. Figure 1 shows the evolution of the cumulative pyrethrin mass recovered in separator 2 with respect to time. It was observed that in the two cases (liquid and supercritical CO2) the quantity of pyrethrin recovered is very similar. On the other hand, the higher quantity of oil recovered in the case of supercritical CO2 caused a drop in the quality of the pyrethrin extract. Figure 2 shows the mass fractions of pyrethrin and oil extracted, indicating that a higher mass fraction of pyrethrin is obtained with extraction using liquid CO2.
Figure 1: Cumulative mass of (a) pyrethrin and (b) oil in the second separator; liquid CO2 (120 bar, 19°C) and supercritical CO2 (250 bar, 40°C)
Figure 2: Mass fraction of (a) pyrethrin and (b) oil for liquid CO2 (120 bar, 19°C) and supercritical CO2 (250 bar, 40°C)

In general, the quantity of extracted (and thus recovered) impurities is lower in the case of extraction using liquid CO2. This is explained by the fact that the solubility of waxes is lower at low temperatures. One can deduce from these experiments that the most satisfactory product can be obtained by an extraction using liquid CO2 followed by on-line fractionation. Working at pressures and temperatures lower than the critical point therefore seems to improve the quality of the pyrethrin extracted. However, in all cases the end product obtained is a yellow solid, meaning that it still contains a large quantity of waxes. This result confirms those obtained by Stahl (1980), who noticed that fractionation
with two separators can improve the quality of the extraction. To eliminate the waxes more effectively, Della Porta (2002) imposed a very low temperature (-15°C) in the first separator at a pressure of 90 bar. In both works, the authors did not mention the state of the product extracted.
3.2 Fractionation of Crude Hexane Extract (CHE)
In this study, CHE was re-extracted with supercritical CO2 at 250 bar and 40°C, followed by on-line fractionation in separators. In order to maintain stable conditions and avoid carry-over of products from one separator to the next, a low CO2 flow rate was used, fixed at 0.403 kg/h. The conditions in the first separator were fixed at 100 bar and 40°C, and in the second and third separators at 50 bar and 40°C. Under these conditions, extract was found only in the first and second separators. Figure 3 shows the cumulative mass of pyrethrin in the two separators and Figure 4 the cumulative mass of oil. The quantity of extracted pyrethrin, as well as its mass fraction, is much greater in separator 2 than in separator 1. The results obtained from fractionation of CHE resemble those obtained from the crude CO2 extract.
Figure 3: Quantity of pyrethrin recovered in the first separator and second separator, extracted at 250 bar and 40°C with a flow rate of 0.403 kg/h.
Figure 4: Quantity of oil recovered in separator 1 and separator 2, extracted at 250 bar and 40°C with a flow rate of 0.403 kg/h.
Figure 5 shows the yield of pyrethrin extracted. Because of the very low CO2 flow rate (0.403 kg/h), the extraction takes a relatively long time. It was thus observed that after 410 minutes only 42% of the pyrethrin initially present in the extractor had been extracted. Figure 6 shows the mass fraction of pyrethrin recovered in the two separators and Figure 7 that of oil. In separator 2, a product more concentrated in pyrethrin was obtained. At the end of the extraction process, a product containing more than 63% of pyrethrin by mass was obtained. In separator 1 a very low mass of pyrethrin, with a concentration of 39% by mass, was extracted. This result is satisfactory, but the product obtained is solid at ambient temperatures, which poses a problem for final product formulation.
Figure 5: Total pyrethrin yield extracted at 250 bar and 40°C with a flow rate of 0.403 kg/h, followed by fractionation of CHE.
Figure 6: Mass fraction of extracted pyrethrin recovered in separator 1 and separator 2 at 250 bar and 40°C with a flow rate of 0.403 kg/h.
Figure 7: Mass fraction of extracted oil recovered in separator 1 and separator 2 at 250 bar and 40°C with a flow rate of 0.403 kg/h.
The initial ratio of pyrethrin I to pyrethrin II in the CHE in the extractor was 1.95. A value of 1.68 was obtained in the first separator while a value of 2.56 was obtained in the second separator. The mass fraction of oil is much higher in the second separator than in the first at the beginning of the extraction process, but at the end they are identical. In the first separator, the majority of the compounds are undesirable. Through this post-extractive fractionation, we hoped that the less soluble oils would be recovered in the first separator and that an extract more concentrated in pyrethrin would be recovered in the second separator. Extraction from CHE at 250 bar and 40°C normally dissolves many compounds because of the increased CO2 density.

4.0 CONCLUSION
The experimental results nevertheless showed the presence of a considerable quantity of pyrethrin in the first separator, as well as a considerable quantity of oil in the second separator. This fractionation therefore does not reach the expected ideal, because the separators are too small, the residence times are very short, and a thermodynamic model describing the phase equilibria of these mixtures is lacking. In particular, specific interactions between wax, oil and pyrethrin in the presence of CO2 contribute to the mutual solubility of oil and pyrethrin, which reduces the effectiveness of the fractionation. Fractionation of the crude extract in two separators nevertheless gives a better quality product than a single-step extraction process, as observed in our previous work (Kiriamiti 2003a). The product is solid at normal temperatures, a property which is undesirable in the formulation of insecticides. This process of fractionation of crude extract can be used to concentrate dewaxed extract and also to obtain products of different pyrethrin I/pyrethrin II ratios.
5.0 ACKNOWLEDGMENT
We would like to acknowledge the Laboratoire de Génie Chimique (LGC), Toulouse, for enabling the use of their facilities to carry out the experimental work. We are grateful to
Professor Jean-Stephene Condoret for his advice and for providing the use of the SFC equipment.

REFERENCES
Della Porta G., Reverchon E. (2002), Pyrethrin extraction, 4th International Symposium on High Pressure Technology and Chemical Engineering, Venice, Italy.
Kiriamiti, H. K., Camy, S., Gourdon, C., Condoret, J-S. (2003a), Pyrethrins extraction from pyrethrum flowers using carbon dioxide, J. Supercrit. Fluids, 26(3), p. 193-200.
Kiriamiti H., Camy S., Gourdon C., Condoret J. S. (2003b), Supercritical carbon dioxide processing of pyrethrum oleoresin and pale, J. Agric. Food Chem., 51(4), p. 880-884.
Marr, R., Lack, E., Bunzenberger (1984), CO2-extraction: comparison of supercritical and subcritical extraction conditions, Ger. Chem. Eng., 7, p. 25-31.
Otterbach, A., Wenclawiak, B. W. (1999), Supercritical fluid extraction kinetics of pyrethrins from flowers and allethrin from paper strips, J. Anal. Chem., 365(8), p. 472-474.
Salgado V. L. (1997), The modes of action of spinosad and other insect control products, Down to Earth, Dow AgroSciences, Midland, MI, 52(2), p. 35-43.
Sims M. (1981), Liquid carbon dioxide extraction of pyrethrins, US Pat. 4,281,171.
Stahl, E., Schutz, E. (1980), Extraction of natural compounds with supercritical gases, J. of Medicinal Plant Research, 40, p. 12-21.
Wynn, H. T. P., Cheng-Chin, C., Tien-Tsu, S., Frong, L., Ming-Ren, S. F. (1995), Preparative supercritical fluid extraction of pyrethrin I and II from pyrethrum flower, Talanta, 42, p. 1745-1749.
MOTOR VEHICLE EMISSION CONTROL VIA FUEL IONIZATION: "FUELMAX" EXPERIENCE
G. R. John, L. Wilson and E. Kasembe, Energy Engineering Department, Faculty of Mechanical and Chemical Engineering, University of Dar-es-Salaam, P.O. Box 35131, Dar-es-Salaam, Tanzania
ABSTRACT
World energy supply is dominated by fossil fuels, which are associated with uncertainties of supply reliability. The world energy crises of 1973/74 and 1978/79, followed by the recent supply fluctuations resulting from regional conflicts, call for their rational use. Further to the supply fluctuations, oil reserves are being depleted fast: it is estimated that the existing reserves will last about 40 years, while natural gas reserves are estimated to last about 60 years. The use of fossil fuels for motor vehicle propulsion is a major cause of environmental pollution and of the associated greenhouse gas effect. Air pollutants from motor vehicles include ozone (O3), particulate matter (PM), nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), sulphur dioxide (SO2) and general hydrocarbons (HCs). While demand curtailment and alternative energy sources are effective measures against supply and environmental problems, increasing the efficiency of existing motor vehicles has an immediate effect. One of the methods of achieving this is fuel ionization. Fuel ionization has been shown to enhance fuel combustion, thereby improving engine performance at reduced emissions. This paper discusses the findings of a study carried out on a diesel engine under laboratory conditions. Fuel ionization was achieved using a magnetic frequency resonator (type FuelMAX) fitted to the pressurised fuel supply line feeding the engine from the fuel tank. Fuel consumption with and without FuelMAX is compared.
Keywords: Fuel ionization; Vehicle emission control; Fuel conversion efficiency; Specific fuel consumption; Brake mean effective pressure
1.0 INTRODUCTION
World energy supply is dominated by petroleum fuels, which account for 37.3% of total energy supply, and the majority of this fuel is consumed by the transport sector (BP, 2004). Due to the important role it plays in economies, worldwide average annual energy consumption growth over the period 2001 to 2025 is estimated at 2.1 percent (Energy Information Administration, 2004). The overdependence on petroleum fuels is a major source of greenhouse gas emissions and poses a risk of depletion of world resources. Road transport alone releases 20-25 percent of the greenhouse gases, particularly carbon dioxide (SAIC, 2002). On the other hand, oil reserves are being depleted fast and it is estimated that the existing reserves will last about another 40 years (Almgren, 2004). Various measures are deployed to minimize environmental pollution from motor vehicles. These include demand curtailment, use of efficient engine designs, cleaner fuels,
alternative fuels and the application of exhaust gas after-treatment devices. Measures based on efficiency improvement and fuel substitution are said to have more impact on greenhouse gas mitigation than measures addressing travel demand (Patterson 1999, Yoshida et al 2000). The application of exhaust gas after-treatment devices such as three-way catalytic converters is capable of reducing tail-pipe emissions of CO, HC and NOx. Further to these techniques, the environmental performance of existing motor vehicles can also be improved by fuel ionization. Fuel ionization has been shown to enhance fuel combustion and thereby reduce combustion emissions. Fuel ionization can be deployed through retrofits to the engine. The simplest of these retrofits is the clamp-on ionization type similar to the one known as FuelMAX, marketed by International Research and Development (IRD), a company based in the U.S.A. FuelMAX consists of two halves of a strong magnetic material made from neodymium (NdFeB37), which are clamped on the fuel line near the carburettor or injection system. When the fuel passes through the strong magnetic resonator, the magnetic moment rearranges its electrons at the molecular and atomic level. Because it is a fluid, the now positively charged fuel attracts air for better oxidation, resulting in a more complete burn. The existing FuelMAX is reported to reduce fuel consumption in the range of 20% - 27%, while the respective emission savings of CO and HC are reported to be in the range of 40% - 50% (RAFEL Ltd. 2005, Sigma Automotive 2005). Consequently, the use of FuelMAX is reported to improve engine horsepower by up to 20% (Fuel Saver Pro, 2005). This paper presents findings of a study carried out to quantify the fuel savings and the respective reduction in pollution achieved by the application of a fuel ionizer of type FuelMAX. The study was carried out under laboratory conditions using a diesel engine.

2.0 EXPERIMENTAL AND METHODOLOGY
A single-cylinder diesel engine, Hatz Diesel type E 108U No. 331082015601, was utilized for the laboratory testing. Fuel ionization was achieved by utilizing a magnetic frequency resonator type Super FuelMAX, Fig. 1. The resonator was fitted to the fuel line that supplies diesel to the engine from the fuel tank.
Fig. 1. Schematic representation of fuel ionization.
The engine was serviced (which included replacing the air cleaner, changing the lubricant and checking for proper nozzle setting) prior to performing the test. One set of data was obtained before fitting the resonator and the other was recorded after making the retrofit.
Upon fitting the resonator, the engine was run for 30 minutes at idling speed before collecting the experimental data. This ensured the removal of carbon and varnish deposits from the engine. The engine speeds set during experimentation were 1500 rpm, 1700 rpm, 1800 rpm, 2000 rpm, 2100 rpm and 2200 rpm. Initially, one complete set of data was obtained without loading. Later on, three loads (14.32 Nm, 21.48 Nm and 28.65 Nm) were applied at each test speed. The loading was achieved by a Froude hydraulic dynamometer size DPX2 No. BX31942. The engine's fuel consumption was obtained by measuring the time elapsed to consume 50 cc of diesel fuel. A single data point consisted of four readings, which were averaged for analysis.

3.0 FUEL IONIZATION ANALYSIS
Diesel engines utilize either an open chamber (termed direct injection, DI) or a divided chamber (termed indirect injection, IDI). Here the mixing of fuel and air depends mostly on the spray characteristics (Graves, 1979). Consequently, the engines are sensitive to spray characteristics, which must be carefully worked out to secure rapid mixing. Air-fuel mixing is assisted by swirl and squish (Henein, 1979). Swirl is a rotary motion induced by directing the inlet air tangentially, which also results in a radial velocity component known as squish. Fuel ionization by FuelMAX similarly enhances fuel-air mixing, resulting in optimized combustion.

4.0 RESULTS AND DISCUSSION
4.1 Results
Table 1 shows summarized results for the performance testing of FuelMAX. Average fuel consumption over the no-load and all test load conditions (14.32 Nm, 21.48 Nm, 28.65 Nm) was 1.72 litres per hour without FuelMAX and 1.69 litres per hour with FuelMAX fitted. Consequently, the overall fuel saving accrued from the use of FuelMAX was 1.61%.

Table 1. Summary of FuelMAX performance.

LOAD (Nm)    FUEL CONSUMPTION RATE (l/hr)        SAVINGS, %
             WITHOUT            WITH
No load      1.071              1.075            (0.35)
14.32        1.621              1.534            5.37
21.48        1.930              1.915            0.77
28.65        2.268              2.254            0.64
AVERAGE      1.722              1.694            1.61

(The value in parentheses indicates a slight increase in consumption.)
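The rates and savings in Table 1 follow directly from the measurement procedure described above (time to consume 50 cc of fuel, averaged over four readings). The sketch below is a minimal Python illustration of that arithmetic; the stopwatch readings in it are hypothetical values, chosen only so that they reproduce the 14.32 Nm row of Table 1.

```python
def rate_l_per_hr(times_s, volume_cc=50.0):
    """Average fuel consumption rate from repeated timings of a 50 cc draw-down."""
    avg_t = sum(times_s) / len(times_s)            # seconds per 50 cc
    return (volume_cc / 1000.0) * 3600.0 / avg_t   # litres per hour

def saving_percent(rate_without, rate_with):
    """Percentage fuel saving relative to the baseline (no-resonator) rate."""
    return 100.0 * (rate_without - rate_with) / rate_without

# Hypothetical stopwatch readings (four per data point, as in the test procedure)
times_without = [111.0, 111.2, 110.9, 111.1]   # -> ~1.62 l/hr
times_with    = [117.3, 117.5, 117.2, 117.4]   # -> ~1.53 l/hr

r0 = rate_l_per_hr(times_without)
r1 = rate_l_per_hr(times_with)
print(f"without: {r0:.3f} l/hr, with: {r1:.3f} l/hr, saving: {saving_percent(r0, r1):.2f}%")
```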
At no-load conditions FuelMAX produced no significant reduction in fuel consumption for the test engine, Fig. 2.
Fig. 2. FuelMAX performance under the no-load condition.
Typical FuelMAX performance under loaded conditions was investigated at low speeds below 1700 rpm and at higher speeds above 2000 rpm. Good performance was experienced at part-load conditions close to 14.32 Nm and for mid-range speeds of 1700 - 1800 rpm, Fig. 3. Performance at other conditions is depicted in Figs. 4 and 5.

4.2 Discussion
While the details differ from one engine to another, engine performance maps are similar. The maximum brake mean effective pressure (bmep) contour occurs in the mid-speed range, and the minimum brake specific fuel consumption (bsfc) island is located at a slightly lower speed and at part load, see Fig. 6. At very low speeds the combustion quality of the fuel-air mixture deteriorates and dilution with exhaust gas becomes significant. On the other hand, very high speeds increase the sfc of motor vehicles: the already good fuel conversion efficiency is outweighed by friction losses, which increase almost linearly with increasing speed. Other contributing factors are the variation in volumetric efficiency (ηv) and the marginal increase in indicated fuel conversion efficiency (ηf). Indicated fuel conversion efficiency increases slowly owing to the decreasing importance of heat transfer per cycle with increasing speed (Slezek and Vossmeyer 1981).
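To make the bsfc terminology concrete, the sketch below computes brake power and brake specific fuel consumption for a load/speed/fuel-rate combination of the kind measured in this study (14.32 Nm, a mid-range speed of 1700 rpm, and the average consumption at that load from Table 1). The pairing of speed and average rate is illustrative only, and the diesel density of 0.84 kg/l is an assumed typical value, not a figure from the paper.

```python
import math

def brake_power_kw(torque_nm, speed_rpm):
    """Brake power P = 2*pi*N*T, with N in rev/s."""
    return 2.0 * math.pi * (speed_rpm / 60.0) * torque_nm / 1000.0

def bsfc_g_per_kwh(fuel_l_per_hr, power_kw, fuel_density_kg_per_l=0.84):
    """Brake specific fuel consumption in g/kWh (density is an assumed value)."""
    return fuel_l_per_hr * fuel_density_kg_per_l * 1000.0 / power_kw

P = brake_power_kw(torque_nm=14.32, speed_rpm=1700)   # ~2.55 kW
print(f"brake power: {P:.2f} kW, bsfc: {bsfc_g_per_kwh(1.534, P):.0f} g/kWh")
```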
Fig. 3. FuelMAX performance under 14.32 Nm load condition
Fig. 4. FuelMAX performance under 21.48 Nm load condition
Fig. 5. FuelMAX performance under 28.65 Nm load condition
Fig. 6. Typical performance map for an 8-cylinder, air-cooled, naturally aspirated, medium-swirl DI diesel engine (6.54 dm³) [Slezek and Vossmeyer 1981].
5.0 CONCLUSION
Performance testing of the fuel ionizer type FuelMAX was successfully done under laboratory conditions utilizing a single-cylinder engine, Hatz Diesel type E 108U No. 331082015601. The following are the main observations:
• FuelMAX achieved an overall fuel saving of 1.61% compared to the test cycle done without ionization.
• Fuel saving was most noticeable at mid-range speeds of 1700 - 1800 rpm.
• Tests at higher and lower engine speeds did not achieve significant savings, nor did the no-load condition.

6.0 REFERENCES
Almgren, A. (2004): Energy Endurance- The Transition Towards an Electricity/Hydrogen Future. Cogeneration and On-Site Power Generation, March - April issue, pp. 21-25 Energy Information Administration, Office of Integrated Analysis and Forecasting, U.S. Department of Energy (2004): International Energy Outlook 2004 Fuel Saver Pro (2005): FuelMAX. [Intemet] available from http://www.fuelsaverpro.com/, 2005; accessed on 16th October 2005 Graves, G. (1979): Response of Diesel Combustion Systems to Increase of Fuel Injection Rate, SAE Paper Number 790037, SAE Transactions, Vol. 88 No. 6, pp. 69-81 Henein, N. A. (1979): Analysis of Pollution Formation and Control and Fuel Economy in Diesel Engines. Energy and combustion science, Pergamon Press, ISBN 0-08 024780-6 Patterson, P. D. (1999): Transportation's Contribution to Global Climate Change. The Earth Technologies Forum, Washington, D. C. RAFEL Ltd (2005): Super FuelMAX. [Intemet] available from http://www.fuelmax.lv/en/index.php, accessed on 16th October 2005 SAIC - Science Applications International Corporation (2002): Greenhouse Gas Emission Reductions and Natural Gas Vehicles; A Resource Guide on Technology Options and Project Development, 8301 Greensboro Drive, E-5-7 McLean, Virginia 22102 Sigma Automotive (2005): FuelMAX. [Internet] available from http://www.sigmaautomotive.com/IRD/superfuelmax.php, accessed on 16th October 2005 Slezek, P.J. and Vossmeyer, W. (1981): New Deutz High Performance Diesel Engine. SAE Paper Number 810905, SAE Transactions, Vol. 90 No. 3, pp. 905-912 Yoshida, Y., Ishitani, H., and Matsuhashi, R. (2000): Reducing CO2 in the Transport Sector in Japan. International Journal of Global Energy Issues, Vo
MODELLING BAGASSE ELECTRICITY GENERATION: AN APPLICATION TO THE SUGAR INDUSTRY IN ZIMBABWE
Charles Mbohwa, Department of Mechanical Engineering, University of Zimbabwe, P.O. Box MP 167, Mount Pleasant, Harare, Zimbabwe. Tel.: +263-4-303211 Ext. 1852 or 1380, Cell phone: +263-23-812860, Fax: +263-4-303280. E-mail: [email protected] or cmbowa@yahoo.com
ABSTRACT
This paper describes the process of modelling and designing a bagasse/coal power plant, which is then applied to the Zimbabwean sugar industry. The co-fired power plant is separated from the sugar factory so that attention can be paid to the specialist requirements of each plant, without compromising efficiency and investment decisions. Assuming 60% steam on sugarcane, the electrical power potential is found to be 183 MWe when feed water is at normal conditions and exhaust steam for sugar processing is at 2 bar. When the system is fully condensing to 0.2 bar, the turbo-alternator output rises to 288 MWe, owing to the resulting increase in the efficiency of conversion of heat energy to electricity. The results of the method are comparable to the operating characteristics of high-pressure bagasse power plants in practice, particularly the state-of-the-art plants in Mauritius and Reunion islands.
Keywords: Bagasse, Power plant modelling, Combined heat and power, Cogeneration
1.0 I N T R O D U C T I O N The diminishing reserves of fossil fuel and their effect on the environment are issues of major concern. The emission of carbon dioxide and other noxious gases like sulphur oxides and nitrogen oxides, are a threat to the sustainability of human beings. Carbon dioxide is a greenhouse gas that is one of the major contributors to global warming. On the other hand the acidic oxides have negative environmental effects in the form of acid rain, which affect plants and animals. Renewable energy can play an important role in alleviating these environmental effects. Biomass electricity production is one way of using an almost carbon dioxide neutral fuel, provided the organic material used is replaced. The most promising biomass electricity generating fuel is bagasse. Bagasse is the solid fibrous material, which leaves the final mill after extraction of juice. Bagasse power generation has traditionally been used as a means of incinerating waste in very low efficiency low-pressure boilers to get electricity and steam for sugar processing. Technological improvements over time have extended the benefits of this cogeneration technology to the provision of electricity to local communities and to workers, powering of irrigation pumps and even power exports to the grid. The power plants in Reunion and Mauritius have provided some of the most unique and striking examples of advanced bagasse cogeneration plants of the future. (Deepchand 2000, Quevauvilliers
2001) A challenge that will face many sugar plants that want to develop advanced cogeneration plants is the design methodology and the issues to consider when undergoing this transformation. The primary purpose of this paper is to present a bagasse power generation model that can be used for any size of sugarcane plant. A number of critical thermodynamic and engineering design steps and models are used that can offer guidance to sugar factories. The next section presents a model of the bagasse power plant that is separated from the sugarcane processing plant and discusses the merits of this arrangement. A number of design issues are discussed. Then the fuel properties are summarized, comparing bagasse with other fuels. This is followed by theoretical considerations of a steam power plant. This theory is then applied to the Zimbabwean sugar industry and some results derived. 2.0 SEPARATE THE BAGASSE P O W E R PLANT F R O M THE SUGAR FACTORY
The ideal set-up for bagasse power production is when the power plant is completely dissociated from the sugar factory. This enables the concentration of core skills on both sides. The sugar operators concentrate on the skills needed to produce high quality products that can compete effectively on the world market. Contamination risks of the various flows are reduced. The power plant personnel can concentrate on electricity production, efficiency and system availability. High technology and skills can now be created to enable the power plant to work as an independent power producer, a role that is closer to that of the utility than that of a sugar factory that exports excess power to the grid. The system layout is as shown in Fig. 1.
Fig. 1. The separated sugar factory and power plant.
The aim of this paper is to estimate the steam requirements of the plant so that the boilers are sized appropriately in terms of steam capacity, operating pressure and temperature.
The boiler should be able to fire alternative fuels. The turbo-alternators used can be of the multi-stage, single-action type with low-pressure stages. Such a condensing extraction turbine normally has two ports, one of which can be used for controlled extraction of steam to supply the power plant auxiliaries, such as de-aerators and heaters, and the sugar mill. The second extraction port is uncontrolled and is used to reheat the turbine condensate before it enters the de-aerator. The water that is de-aerated comes from pre-heaters, condensers, the sugarcane mill and make-up water from the demineralised tank. The feed water pumps supply the boilers for steam generation. An ion-exchange water pre-treatment system is needed to treat the raw water. This water can then be pumped through cation, anion and mixed-bed vessels to produce demineralised water to serve as make-up in the de-aerator. Evaporative counter-flow induced-draft cooling towers can be connected to the turbine condensers for cooling (Quevauvilliers 2001).

3.0 PROPERTIES OF BAGASSE AND ALTERNATIVE FUELS
The boiler can be co-fired with bagasse and an alternative fuel. The alternative fuel is normally coal, oil or wood. The carbon and hydrogen content generally indicates the energy value of the fuel. Coal and oil emit large amounts of sulphur oxides, and coal also emits nitrogen oxides. The bulk density of bagasse is low, making it difficult to store. This is worsened by the fact that when it is dry it can be carried by wind, creating a dusty and dirty environment. It has to be consumed quickly to avoid storage of large amounts. The combustion of the fuel is an exothermic oxidising reaction. The oxidising agent is air and the main products are nitrogen, carbon dioxide, water vapour and oxygen. Secondary products are trace amounts of sulphur, phosphorus and vanadium compounds. An adequate amount of oxygen must be brought into contact with the fuel to attain complete and efficient combustion. Combustion efficiency can be determined by measuring the percentages of carbon dioxide (CO2), oxygen (O2) and carbon monoxide (CO) in the flue gases. CO indicates incomplete combustion, whilst CO2 indicates the quantity of excess air present. Each 1% of CO in the flue gases is equivalent to a 4.5% loss of heat. To ensure complete combustion a small amount of excess air is needed; this amount varies from 10 to 50% over and above the theoretical requirement (Mugadhi 1999). The normal characteristics of bagasse, wood, coal and oil are shown in Table 1 (Mugadhi 1999).
3.1 Energy Losses During Combustion of Bagasse
The moisture in the bagasse is one of the figures which drastically affects the efficiency of the boiler plant. Under normal mill conditions moisture content varies between 47% and 56%. Whilst superficially this may seem a narrow band, the specific moisture content increases rapidly in this range: 46% more water must be evaporated per unit mass of dry fibre at 56% moisture than at 47%. Below a moisture content of 47%, factors other than moisture begin to dominate performance characteristics (Mugadhi 1999). With drier fuels, combustion and furnace-leaving temperatures increase. This can result in grate-level slagging and severe fouling of heating surfaces. The furnace gas-leaving temperature then becomes the critical parameter. Besides energy losses due to moisture, other energy losses occur when heat from the fuel is transferred to steam in the boiler. The losses of heat in the furnace and at the boiler consist of the following:
• Latent heat of the water formed by the combustion of the hydrogen in the bagasse
• Latent heat of the water contained in the bagasse
• Humidity loss; this is 0.5% when the relative humidity is 80%
• Sensible heat of the flue gas leaving the boiler
• Dry gas loss, 10% if the outlet gas temperature is 200°C
• Losses in unburned solids
• Losses by radiation from the furnace and especially from the boiler
• Unaccountable losses
• Losses due to bad combustion, carbon giving CO instead of CO2.
The use of the net calorific value takes into account the first two losses. The three losses due to unburned solids, radiation and incomplete combustion are taken into account by means of coefficients applied to the total quantity of heat that is still available after the other losses. The total losses as a percentage are shown in Table 1; they are further classified in Table 2 (Mugadhi 1999).

Table 1: Chemical and physical properties of fuels used in cane sugar factories.

Property                                                        Bagasse   Wood      Coal      Oil
Carbon %                                                        22.5      30.2      67.0      83.5
Hydrogen %                                                      3.0       3.7       4.2       11.7
Sulphur %                                                       Trace     Trace     1.3       3.3
Nitrogen %                                                      -         -         1.6       1.4
Oxygen %                                                        23.0      25.8      5.8       -
Phosphorus %                                                    -         -         0.1       -
Moisture %                                                      50        40        4         -
Ash %                                                           1.5       0.3       16.0      -
Gross Calorific Value kJ/kg (wet)                               9558      11980     27450     43030
Net Calorific Value kJ/kg                                       7671      10189     26423     40170
Theoretical air for complete combustion of 1000 kg fuel (kg)    2635      2929      3144      3053
Maximum theoretical CO2 in flue gases %                         20.70     20.10     18.45     15.60
Practical excess air requirement for complete combustion %      48        43        37        15
Total losses %                                                  35.89     29.71     18.27     15.66
Efficiency on NCV, %                                            80.00     82.55     84.80     90.40
Efficiency on GCV, %                                            64.11     70.29     81.73     84.34
Bulk Density Range kg/m3                                        80-145    112-320   800-880   960-1040
Bulk Density Average kg/m3                                      112       190       865       990
Ratio of volume of bagasse burnt to fuel burnt to produce
equivalent steam output                                         1         1.96      17.4      30.3
Ratio of mass of bagasse burnt to fuel burnt to produce
equivalent steam output                                         1         1.6       3.5       8
Table 2: Percentage energy losses for bagasse, wood, coal and oil during combustion.

The Causes or Types of Energy Losses                     Bagasse   Wood    Coal    Oil
Unburnt carbon in ashes, grits and stack discharge, %    3.00      2.00    3.16    -
Dry gas loss, %                                          8.71      9.03    9.52    7.44
Wet gas loss, %                                          22.80     17.30   4.21    6.84
Moisture in the air loss, %                              0.15      0.15    0.15    0.15
Radiation losses, %                                      0.73      0.73    0.73    0.73
Unmeasured losses, %                                     0.50      0.50    0.50    0.50
Total losses, %                                          35.89     29.71   18.27   15.66
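As a quick cross-check between Tables 1 and 2, the itemized losses for each fuel sum to the "total losses" figure, and the boiler efficiency on the gross calorific value follows as 100 minus that total. A minimal sketch of that arithmetic:

```python
# Itemized combustion losses (%) from Table 2
losses = {
    "bagasse": [3.00, 8.71, 22.80, 0.15, 0.73, 0.50],
    "wood":    [2.00, 9.03, 17.30, 0.15, 0.73, 0.50],
    "coal":    [3.16, 9.52, 4.21, 0.15, 0.73, 0.50],
    "oil":     [7.44, 6.84, 0.15, 0.73, 0.50],
}

for fuel, items in losses.items():
    total = sum(items)          # total losses, %
    eff_gcv = 100.0 - total     # efficiency on GCV, % (cf. Table 1)
    print(f"{fuel}: total losses {total:.2f}%, efficiency on GCV {eff_gcv:.2f}%")
```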
3.2 The Calorific Value of Bagasse
The gross calorific value (GCV) of dry bagasse has a mean value of 19605 kJ/kg. The gross calorific value of wet bagasse is based on the composition of the wet bagasse: water has no calorific value and it also absorbs heat as it is vaporized during combustion. The combustion reactions of bagasse as a fuel are given in Table 3 (Mugadhi 1999). The net calorific value of bagasse with around 48% moisture content is about 7670 kJ/kg (Deepchand 2000, Mugadhi 1999, Murefu 2001).

Table 3: Combustion reactions of bagasse (per kg of wet bagasse).

Constituent   Mass (%)   Oxygen Required (kg)   Product          Mass of Product (kg)   Volume of Product (m3)
Carbon        22.5       0.60                   Carbon dioxide   0.825                  0.417
Hydrogen      3.0        0.24                   Water            0.270                  -
Nitrogen      -          -                      Nitrogen         2.010                  1.600
Oxygen        23.0       -                      -                -                      -
Water         50.0       -                      Water            0.500                  -
Ash           1.5        -                      Ash              0.015                  -
Total         100.0      0.84                                    3.720                  2.017
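The entries in Table 3 follow from the elemental composition and simple reaction stoichiometry (C + O2 -> CO2, H2 + 1/2 O2 -> H2O), with the nitrogen row representing the nitrogen carried in with the theoretical combustion air. A minimal sketch reproducing the first two rows, per kilogram of wet bagasse:

```python
# Elemental composition of wet bagasse (mass fractions), from Table 3
carbon, hydrogen = 0.225, 0.030

# C + O2 -> CO2: 32/12 kg O2 per kg C, producing 44/12 kg CO2 per kg C
o2_for_c   = carbon * 32.0 / 12.0      # ~0.60 kg O2
co2_formed = carbon * 44.0 / 12.0      # ~0.825 kg CO2

# H2 + 1/2 O2 -> H2O: 8 kg O2 per kg H, producing 9 kg H2O per kg H
o2_for_h   = hydrogen * 8.0            # ~0.24 kg O2
h2o_formed = hydrogen * 9.0            # ~0.27 kg H2O

print(f"O2 required: {o2_for_c + o2_for_h:.2f} kg")   # ~0.84 kg, as in Table 3
print(f"CO2 formed: {co2_formed:.3f} kg, H2O formed: {h2o_formed:.3f} kg")
```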
4.0 THE STEAM POWER PLANT
The conversion of thermal energy into electricity can be estimated by a theoretical consideration of the related thermodynamic cycles. The generation of electricity using steam involves taking heat from a high-temperature source, converting part of it into work and rejecting the rest to a low-temperature sink. One example of such a power-producing cycle is the Rankine cycle, which is most suitable for electrical power production from fuels like coal, furnace oil, natural gas, agricultural waste, wood and city waste. All thermal power plants are based on this cycle. It is made up of four components: a boiler, a turbine, a condenser and a pump. The thermodynamic processes of the cycle are standard and have been widely published (Iynkaran et al 1989). The main stages of the process are:
• Constant-pressure heat addition to the water, which is the working fluid. Water is heated to saturation point and dry saturated steam is formed.
• Isentropic expansion of the working fluid, involving reversible and adiabatic expansion of the steam in a turbine to low-pressure steam. Work is done on the turbine shaft in the process to generate electricity.
• Constant-pressure heat rejection from the working fluid, involving condensation of the low-pressure exhaust steam to saturated water.
• Isentropic compression, or pumping, of the working fluid, which involves pumping the low-pressure water to boiler pressure by doing work on it.
The arrangement is as shown in Fig. 2 below; a short numerical illustration follows the figure caption.
Fig. 2. The basic components of the Rankine Cycle
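To illustrate how the cycle figures in Tables 4 to 7 are obtained, the sketch below evaluates one operating point using the enthalpy values listed later for the 80 bar saturated-steam case exhausting at 2 bar (h1 = 2760 kJ/kg, h2 = 2163.6 kJ/kg, pump work 7.8 kJ/kg, h4 = 133.5 kJ/kg). Only the arithmetic is shown here; the state-point enthalpies themselves come from steam tables.

```python
def rankine_point(h1, h2, w_pump, h4):
    """Specific work and thermal efficiency of a simple Rankine cycle (kJ/kg basis)."""
    w_turbine = h1 - h2           # turbine work
    w_net = w_turbine - w_pump    # net work
    q_supplied = h1 - h4          # heat added in the boiler
    return w_turbine, w_net, 100.0 * w_net / q_supplied

# 80 bar saturated steam, exhausting at 2 bar (values from Table 4)
wt, wnet, eta = rankine_point(h1=2760.0, h2=2163.6, w_pump=7.8, h4=133.5)
print(f"Wt = {wt:.1f} kJ/kg, Wnet = {wnet:.1f} kJ/kg, efficiency = {eta:.2f}%")
# -> Wt = 596.4, Wnet = 588.6, efficiency = 22.41%, matching the 80 bar row of Table 4
```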
4.1 The Effect of Boiler Pressure on Steam Power Plant Efficiency
The thermal efficiency of the power plant rises with increasing boiler pressure, since this is associated with an increase in the maximum cycle temperature. This efficiency increase occurs up to a maximum at around 160 bar. Beyond this pressure the latent heat decreases drastically, resulting in less heat being transferred at the maximum cycle temperature and hence in a slight fall in efficiency at higher pressures. This is shown in Fig. 3, which plots the characteristics of a power plant which feeds water at atmospheric pressure and 30°C into a boiler to produce dry saturated steam at various pressures.
Fig. 3. Variation of Efficiency with Steam Pressure with Exhaust Steam at Atmospheric Pressure
5.0 ESTIMATING THE BAGASSE POWER PLANT IN THE SUGAR INDUSTRY IN ZIMBABWE
The Zimbabwean sugar industry produces about 4.7 million tonnes of sugarcane, which is milled at the two sugar plants, Triangle Sugar Limited and Hippo Valley Estates. The two plants have a maximum potential milling capacity of 1200 tonnes of sugarcane per hour (Mbohwa 2003). At 30% bagasse on cane this provides 360 tonnes of bagasse per hour, which when burnt in boilers provides 2 760 GJ of energy per hour. This is equivalent to a thermal power of 767 MWt. The fibre in the cane is generally sufficient for the bagasse produced during milling to supply all the steam necessary for power production and sugar manufacture. With a well-balanced and well-designed factory, an excess of bagasse (or of steam) also remains, which may be used for other purposes such as the supply of electrical energy to the regional network. Unbalanced factories may experience a shortfall or surplus of bagasse, resulting in costly auxiliary fuels in cases of shortfall and costly bagasse disposal when there is a surplus. Where off-crop outside power and irrigation loads are needed, they can be met either by increasing factory thermal efficiencies and stockpiling baled bagasse for use during the off-crop season, or by burning an auxiliary fuel. Table 4 presents the results for a bagasse power plant with the boiler set at various pressures between 10 bar and 200 bar, while exhausting steam for sugar processing during the on-crop season. It is assumed that the boiler produces saturated vapour, which is then used to power a turbo-alternator.
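The 767 MWt figure follows directly from the milling capacity, the bagasse fraction and the net calorific value of wet bagasse quoted in Section 3.2 (about 7670 kJ/kg). A quick check:

```python
cane_rate_t_per_h = 1200.0     # combined milling capacity (t cane/h)
bagasse_fraction = 0.30        # 30% bagasse on cane
ncv_kj_per_kg = 7670.0         # net calorific value of wet bagasse (kJ/kg)

bagasse_kg_per_h = cane_rate_t_per_h * 1000.0 * bagasse_fraction   # 360 000 kg/h
heat_gj_per_h = bagasse_kg_per_h * ncv_kj_per_kg / 1.0e6           # ~2760 GJ/h
thermal_power_mw = heat_gj_per_h * 1000.0 / 3600.0                 # ~767 MWt
print(f"{heat_gj_per_h:.0f} GJ/h  ->  {thermal_power_mw:.0f} MWt")
```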
Table 5 represents the situation during the off-crop season, when the steam extraction ports are closed and the saturated steam is fully condensed at the turbine exhaust. It can be observed that this increases the thermal efficiency of the plant by about 10 to 14 percentage points, depending on the operating pressure. It is noted that the gain in efficiency per pressure step above 80 bar is well below one percentage point, which suggests that it is better to size the boilers at that pressure. A pressure of 80 bar was therefore chosen for the power plants. The output of the bagasse power plants in Zimbabwe can be maximized for a given amount of heat energy that can be transferred from bagasse to steam. In reality, it is necessary to superheat the steam to improve the power plant efficiency and to set an optimal boiler operating pressure. Assuming that the steam requirements for sugarcane processing are based on 60% steam on cane, 720 tonnes of steam per hour at a pressure of 2 bar are needed to process the sugar at the two factories in Zimbabwe. This represents a heat energy share of 3833 kJ per kg of steam from the bagasse. Assuming a total energy loss of 10% of the net calorific value in the process (Mugadhi 1999, Murefu 2001), the heat energy converted to steam is 3450 kJ/kg. With the input water at 30°C, the thermal efficiencies were computed for different degrees of superheat; the results are shown in Table 6 and suggest a superheat temperature of between 550°C and 600°C.
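The heat-share figures in this paragraph can be reproduced from the totals above; a short check, with the 10% process loss applied as quoted:

```python
heat_from_bagasse_kj_per_h = 2.760e9     # ~2760 GJ/h from Section 5.0
steam_demand_kg_per_h = 720_000.0        # 60% steam on 1200 t cane/h

heat_per_kg_steam = heat_from_bagasse_kj_per_h / steam_demand_kg_per_h   # ~3833 kJ/kg
useful_per_kg_steam = 0.90 * heat_per_kg_steam                           # ~3450 kJ/kg after 10% loss
print(f"{heat_per_kg_steam:.0f} kJ/kg available, {useful_per_kg_steam:.0f} kJ/kg converted to steam")
```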
Table 4: The performance of a bagasse power plant exhausting steam at 2 bar, 120°C.

Pressure (bars)   s1       x2      h1      h2      Wp     h4      Qs       Wt       WNET     Efficiency (%)
10                6.5819   0.903   2777.9  2492.0  0.8    126.5   2651.4   285.93   285.13   10.75
20                6.3366   0.859   2797.5  2395.5  1.8    127.5   2670     402.03   400.23   14.99
30                6.1836   0.831   2802.4  2335.3  2.8    128.5   2673.9   467.12   464.32   17.36
40                6.0681   0.811   2800.4  2289.8  3.8    129.5   2670.9   510.56   506.76   18.97
50                5.9734   0.794   2794.2  2252.6  4.8    130.5   2663.7   541.61   536.81   20.15
60                5.8904   0.779   2785.1  2219.9  5.8    131.5   2653.6   565.16   559.36   21.08
70                5.8158   0.766   2773.4  2190.6  6.8    132.5   2640.9   582.81   576.01   21.81
80                5.7472   0.753   2760    2163.6  7.8    133.5   2626.5   596.40   588.60   22.41
90                5.682    0.742   2744.7  2138.0  8.8    134.5   2610.2   606.75   597.95   22.91
100               5.6199   0.731   2727.9  2113.5  9.8    135.5   2592.4   614.38   604.58   23.32
110               5.5594   0.720   2709.3  2089.7  10.8   136.5   2572.8   619.58   608.78   23.66
120               5.5      0.709   2689.1  2066.4  11.8   137.5   2551.6   622.75   610.95   23.94
130               5.441    0.699   2667.4  2043.1  12.8   138.5   2528.9   624.26   611.46   24.18
140               5.3807   0.688   2642.5  2019.4  13.8   139.5   2503     623.08   609.28   24.34
150               5.3181   0.677   2615.3  1994.8  14.8   140.5   2474.8   620.51   605.71   24.48
160               5.2531   0.665   2585.1  1969.2  15.8   141.5   2443.6   615.88   600.08   24.56
170               5.186    0.653   2552.1  1942.8  16.8   142.5   2409.6   609.28   592.48   24.59
180               5.1137   0.640   2514.4  1914.4  17.8   143.5   2370.9   600.02   582.22   24.56
190               5.0346   0.626   2471.4  1883.3  18.8   144.5   2326.9   588.14   569.34   24.47
200               4.9436   0.610   2420.1  1847.5  19.8   145.5   2274.6   572.64   552.84   24.30

p3 = 2 bar (exhaust pressure); p1 = p4 (boiler pressure); hf2 (2 bar) = 504.7; hfg2 (2 bar) = 2201.6; sf2 (2 bar) = 1.5304; sfg2 (2 bar) = 5.5963; h3 = hf at 30°C = 125.7 (enthalpies in kJ/kg, entropies in kJ/kg·K).
Table 5: The performance of a bagasse power plant with a condenser, exhausting at 0.2 bar, 60°C.

Pressure (bars)   s1       x2      h1      h2      Wp      h4       Qs        Wt       WNET     Efficiency (%)
10                6.5819   0.812   2777.9  2167.4  0.98    251.98   2525.92   610.48   609.50   24.13
20                6.3366   0.778   2797.5  2085.7  1.98    252.98   2544.52   711.82   709.84   27.90
30                6.1836   0.756   2802.4  2034.7  2.98    253.98   2548.42   767.70   764.72   30.01
40                6.0681   0.740   2800.4  1996.2  3.98    254.98   2545.42   804.19   800.21   31.44
50                5.9734   0.726   2794.2  1964.7  4.98    255.98   2538.22   829.55   824.57   32.49
60                5.8904   0.715   2785.1  1937.0  5.98    256.98   2528.12   848.11   842.13   33.31
70                5.8158   0.704   2773.4  1912.1  6.98    257.98   2515.42   861.26   854.28   33.96
80                5.7472   0.694   2760    1889.3  7.98    258.98   2501.02   870.72   862.74   34.50
90                5.682    0.685   2744.7  1867.6  8.98    259.98   2484.72   877.15   868.17   34.94
100               5.6199   0.676   2727.9  1846.9  9.98    260.98   2466.92   881.04   871.06   35.31
110               5.5594   0.668   2709.3  1826.7  10.98   261.98   2447.32   882.60   871.62   35.62
120               5.5      0.660   2689.1  1806.9  11.98   262.98   2426.12   882.19   870.21   35.87
130               5.441    0.651   2667.4  1787.2  12.98   263.98   2403.42   880.15   867.17   36.08
140               5.3807   0.643   2642.5  1767.2  13.98   264.98   2377.52   875.35   861.37   36.23
150               5.3181   0.634   2615.3  1746.3  14.98   265.98   2349.32   869.01   854.03   36.35
160               5.2531   0.625   2585.1  1724.6  15.98   266.98   2318.12   860.47   844.49   36.43
170               5.186    0.615   2552.1  1702.3  16.98   267.98   2284.12   849.83   832.85   36.46
180               5.1137   0.605   2514.4  1678.2  17.98   268.98   2245.42   836.22   818.24   36.44
190               5.0346   0.594   2471.4  1651.8  18.98   269.98   2201.42   819.58   800.60   36.37
200               4.9436   0.581   2420.1  1621.5  19.98   270.98   2149.12   798.60   778.62   36.23

p3 = 0.2 bar (condenser pressure); p1 = p4 (boiler pressure); hf2 (0.2 bar) = 251.5; hfg2 (0.2 bar) = 2358.3; sf2 (0.2 bar) = 0.8322; sfg2 (0.2 bar) = 7.0773; h3 = hf = 251.
Table 6: Performance of a bagasse power plant using superheated steam exhausting at 2 bar, 120°C.

Temperature (°C)   s1      x2      h1     h2       Wp    h4      Qs       Wt        WNET      Efficiency (%)
300                6.066   0.810   2784   2289.0   7.8   133.5   2650.5   494.98    487.18    18.38
325                6.21    0.836   2896   2345.7   7.8   133.5   2762.5   550.33    542.53    19.64
350                6.332   0.858   2986   2393.7   7.8   133.5   2852.5   592.34    584.54    20.49
375                6.441   0.877   3065   2436.5   7.8   133.5   2931.5   628.46    620.66    21.17
400                6.54    0.895   3138   2475.5   7.8   133.5   3004.5   662.51    654.71    21.79
450                6.72    0.927   3272   2546.3   7.8   133.5   3138.5   725.70    717.90    22.87
500                6.881   0.956   3398   2609.6   7.8   133.5   3264.5   788.36    780.56    23.91
550                7.029   0.983   3520   2667.9   7.8   133.5   3386.5   852.14    844.34    24.93
600                7.167   1.007   3641   2722.2   7.8   133.5   3507.5   918.85    911.05    25.97
650                7.298   1.031   3761   2773.7   7.8   133.5   3627.5   987.31    979.51    27.00
700                7.423   1.053   3881   2822.9   7.8   133.5   3747.5   1058.13   1050.33   28.03
750                7.542   1.074   4001   2869.7   7.8   133.5   3867.5   1131.32   1123.52   29.05
800                7.656   1.095   4122   2914.5   7.8   133.5   3988.5   1207.47   1199.67   30.08
850                7.766   1.114   4245   2957.8   7.8   133.5   4111.5   1287.20   1279.40   31.12
900                7.873   1.133   4368   2999.9   7.8   133.5   4234.5   1368.10   1360.30   32.12
950                7.976   1.152   4492   3040.4   7.8   133.5   4358.5   1451.58   1443.78   33.13
1000               8.076   1.170   4618   3079.8   7.8   133.5   4484.5   1538.24   1530.44   34.13

p3 = 2 bar; p1 = p4; hf2 (2 bar) = 504.7; hfg2 (2 bar) = 2201.6; sf2 (2 bar) = 1.5304; sfg2 (2 bar) = 5.5963; h3 = hf at 30°C = 125.7.
The same analysis for full condensation suggests a superheat temperature of between 600°C and 650°C, see Table 7. A boiler operating temperature of 500°C is chosen for the Zimbabwean boilers, at a thermal efficiency of 23.91%. The electrical power output would then be about 183 MWe during the on-crop season. However, at this temperature it is expected that excess steam would be generated, requiring more condensation in the turbo-alternator and thus yielding more power. For full condensing operation during the off-crop season, a thermal efficiency of 37.57% is achievable, giving an electrical power output of 288 MWe.
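These electrical outputs are simply the chosen cycle efficiencies applied to the 767 MWt of bagasse heat estimated in Section 5.0; a one-line check:

```python
thermal_power_mw = 767.0   # bagasse thermal power from Section 5.0

on_crop_mwe  = 0.2391 * thermal_power_mw   # 500 C superheat, exhausting at 2 bar  -> ~183 MWe
off_crop_mwe = 0.3757 * thermal_power_mw   # fully condensing at 0.2 bar           -> ~288 MWe
print(f"on-crop: {on_crop_mwe:.0f} MWe, off-crop (fully condensing): {off_crop_mwe:.0f} MWe")
```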
Experience in Mauritius has shown that boilers operating at 82 bar and 525°C are feasible, fired by bagasse during the on-crop season and by coal during the off-crop season. Based on their output capacity, it has been empirically estimated that the bagasse resources in Zimbabwe would yield at least 210 MWe (Mbohwa 2003). This would be made up of 3 boilers and 3 turbo-alternators of 35 MW each at each of the two sugar factories. The boiler capacities would be at least 120 tonnes of steam per hour each.

Table 7: Performance of a bagasse power plant using superheated steam exhausting at 0.2 bar, 60°C.

Temperature (°C)   s1      x2      h1     h2       Wp     h4       Qs        Wt        WNET      Efficiency (%)
300                6.066   0.740   2784   1995.5   7.98   258.98   2525.02   788.49    780.51    30.91
325                6.21    0.760   2896   2043.5   7.98   258.98   2637.02   852.51    844.53    32.03
350                6.332   0.777   2986   2084.1   7.98   258.98   2727.02   901.86    893.88    32.78
375                6.441   0.793   3065   2120.5   7.98   258.98   2806.02   944.53    936.55    33.38
400                6.54    0.806   3138   2153.5   7.98   258.98   2879.02   984.55    976.57    33.92
450                6.72    0.832   3272   2213.4   7.98   258.98   3013.02   1058.57   1050.59   34.87
500                6.881   0.855   3398   2267.1   7.98   258.98   3139.02   1130.92   1122.94   35.77
550                7.029   0.876   3520   2316.4   7.98   258.98   3261.02   1203.60   1195.62   36.66
600                7.167   0.895   3641   2362.4   7.98   258.98   3382.02   1278.62   1270.64   37.57
650                7.298   0.914   3761   2406.0   7.98   258.98   3502.02   1354.96   1346.98   38.46
700                7.423   0.931   3881   2447.7   7.98   258.98   3622.02   1433.31   1425.33   39.35
750                7.542   0.948   4001   2487.3   7.98   258.98   3742.02   1513.66   1505.68   40.24
800                7.656   0.964   4122   2525.3   7.98   258.98   3863.02   1596.67   1588.69   41.13
850                7.766   0.980   4245   2562.0   7.98   258.98   3986.02   1683.02   1675.04   42.02
900                7.873   0.995   4368   2597.6   7.98   258.98   4109.02   1770.36   1762.38   42.89
950                7.976   1.009   4492   2632.0   7.98   258.98   4233.02   1860.04   1852.06   43.75
1000               8.076   1.024   4618   2665.3   7.98   258.98   4359.02   1952.72   1944.74   44.61

p3 = p2 = 0.2 bar; p1 = p4; hf2 (0.2 bar) = 251.5; hfg2 (0.2 bar) = 2358.3; sf2 (0.2 bar) = 0.8322; sfg2 (0.2 bar) = 7.0773; h3 = hf = 251.
6.0 DISCUSSION OF THE RESULTS
The power output results have been understated because it has been assumed that 60% steam on cane is necessary. This is currently the case for the plants in Zimbabwe, because there have not yet been efficiency improvements in the sugar processes. Triangle Sugar Limited operates at this high percentage because of its ethanol plant, which places additional demand on steam. At times values of 55% steam on cane have been achieved. In addition, some of the auxiliaries in the sugar plant are driven by steam; Hippo Valley Estates has more steam-driven processes. These are shown in Table 8 (Murefu 2001).
Table 8: Steam consumption of turbo-alternator sets
The shredder, de-watering mills, drying-off mills and the steam feed pumps are currently operating inefficiently and would have to be electrified to maximise efficiency and the power output during the on-crop season. The two sugar plants in Zimbabwe effectively operate at below 40 MW during the on-crop season and below 10 MW during the off-crop season (Murefu 2001, Mbohwa 2003, Nyamuzihwa 1999). The model plants would produce far more power and export the excess to the grid. This aspect has to be traded off against the fact that the energy losses from the net calorific value have been assumed to be 10%; an increase in these losses would reduce the power output. The power output during full condensation has, however, been overstated. There are some heat losses in the turbine, and the pressure of the condensate, though definitely below atmospheric pressure, can be well above the 0.2 bar assumed. This would reduce the power output during the off-crop season. Naturally the turbo-alternator will be sized for bagasse combustion during the on-crop season at a fixed power output. During the off-crop season coal would be used in the Zimbabwean plants, at a lower steam output rate that matches the size of the alternators. The sugar plants in Zimbabwe also currently use some of the high-pressure steam for sugar plant processes through cooling and pressure reduction. The cooling down and pressure reduction of high-pressure steam is a very inefficient process: Mutsambiwa (2001) indicates that the de-superheating station at Hippo Valley Estates in Zimbabwe provides about 108 tonnes per hour of the steam needed for sugar processing. The proposed model plants do not accommodate this at all, resulting in higher efficiency and power output. The real steam cycle deviates from the Rankine cycle. This is mainly due to pressure drops as a result of frictional effects and heat loss to the surroundings. Irreversibility results and entropy is generated. As a result more energy is needed to pump water into the boiler. The turbine losses consist of heat losses and friction at the blades and nozzles. The expansion of steam is therefore no longer isentropic as assumed in the model. The work output is reduced and the steam is exhausted with larger enthalpy and entropy. The pumping process is also not isentropic in reality and a higher work input is necessary. The isentropic efficiencies of the turbine and the pump range from 80% to 85%, and these reduce the effective work at the shaft of the turbo-alternator. It has been noted
that increasing the boiler operating pressure increases the thermal efficiency of the plant, since this increases the maximum cycle temperature. The Rankine cycle can be further improved by reheating the steam and by regeneration. Superheating has already been applied to improve thermal efficiency. The maximum superheating temperature possible is 1100 degrees Celsius; beyond this, normal superheater tubes would no longer withstand the heat. In practice the limitation in the sugar industry is set at 525 degrees Celsius, based on experience so far. Superheating increases the efficiency of a condensing turbine by about 5%, and hence increases the power output at the shaft by at least 50%. Reheating the steam in a superheated steam power plant increases the work output by 10% and results in a 0.4% thermal efficiency improvement. Regeneration is the heating of feed-water with steam extracted from the turbine; one regenerative heater can increase efficiency by about 2%, and large plants can have up to seven regenerative heaters. Economisers can be used to heat up the feed water using the residual heat of the flue gas before it enters the steam drum. An economiser can normally be located between a mechanical dust collector and the electrostatic precipitator, if these are included in the design. The combustion air can also be preheated using low-pressure steam and water from the economiser. These improvements will result in more electrical power output at a higher efficiency.

7.0 CONCLUSION
A system model of a bagasse power plant has been presented. The method used has proved to be quite accurate and can be used for any sugar plant whose sugarcane-processing rate is known. The results obtained are reasonable and agree with practical experience of the high-pressure bagasse power plants in operation at the moment. An area that would require improvement is the determination of the steam requirements of sugar processing. This is a critical parameter that any plant intending to set up a power plant for efficient electricity production and for power export to the grid would have to determine accurately. In addition, sugar-processing efficiency would have to be improved through electrification of all prime movers. The method used makes the design of bagasse power plants easy. For any given sugar plant characteristics, the system design can be done using computer software, and sensitivity can be tested for a variety of design parameters. The method can be extended to other types of thermal plants powered by alternative fuels.

REFERENCES
Deepchand K. (2000), Bagasse Energy Cogeneration in Mauritius and its Potential in the Southern and East African Region, Presented at an AFREPREN Workshop, Paper No. 272, Nairobi, Kenya, Sept. 2000.
Iynkaran K. and Tandy D. J. (1989), Applied Thermofluids and Pollution Control, Prentice Hall, New York.
Mbohwa C. (2003), Bagasse energy cogeneration potential in the Zimbabwean sugar industry, Renewable Energy, Elsevier, Vol. 28, pp 191-204.
Mugadhi A. (1999), Case Study on Steam Raising: A complete energy balance for Triangle Sugar Limited, A Report Prepared for Triangle Sugar Limited, Triangle, Zimbabwe.
Murefu M. (2001), Steam Reticulation at Hippo Valley Sugar Estates, Publication of the Department of Mechanical Engineering, University of Zimbabwe, Harare.
Mutsambiwa S. (2001), Sale of Power to the Grid: Challenges Faced by Sugar Factories in Zimbabwe, Paper Presented to the AFREPREN Energy Workshop on Power Sector Reforms - Implications for the Cogeneration Industry, Quatre Bornes, Mauritius, Aug. 2001.
Nyamuzihwa S. (1999), Energy Management at Triangle Limited, Publication of the Department of Mechanical Engineering, University of Zimbabwe, Harare, Apr. 1999.
Quevauvilliers J. M. (2001), Implications for Cogeneration Industry: Description of an Advanced Cogeneration Plant, Paper Presented to the AFREPREN Energy Workshop on Power Sector Reforms - Implications for the Cogeneration Industry, Quatre Bornes, Mauritius, Aug. 2001.
PROSPECTS OF HIGH TEMPERATURE AIR/STEAM GASIFICATION OF BIOMASS TECHNOLOGY
G.R. John, C.F. Mhilu, I.S.N. Mkilaha, M. Mkumbwa, W. Lugano and O. Mwaikondela,
College of Engineering and Technology (CoET), Energy Engineering Department, University of Dar es Salaam. Email: [email protected]
ABSTRACT
The production of bio-fuel as a potential source of energy derived from the application of high temperature air/steam gasification of biomass and wastes is feasible. The suitability of the technology, based on thermo-chemical conversion to combustible gases, mainly hydrogen and carbon monoxide, has been established together with its associated environmental benefits. During the gasification process, biomass or wastes are converted into a gaseous fuel by heating the feedstock in a gasification medium of air and/or steam at high temperatures well above 1000°C. As such, high temperature air/steam gasification allows the conversion of the intrinsic chemical energy of the carbon in the biomass or wastes into a combustible gas in two stages, pyrolysis and gasification. The produced gas is easier and more versatile to use than the original biomass, and can therefore be used to power gas engines and gas turbines, or used as a chemical feedstock to produce liquid fuels. The technology can be applied to conventional and non-conventional means of electricity generation. This is a new technology, albeit with some limitations which have to be addressed.
Keywords: High Temperature Air/ Steam Gasification; Biomass; Thermal Chemical Conversion; Pyrolysis; Moisture content; Mineral matter
1.0 INTRODUCTION
Improved biomass thermochemical conversion processes such as pyrolysis, gasification and combustion have a key role in achieving both the economic and environmental benefits of renewable energy from biomass. The challenges posed by biomass materials to their acceptability as a fuel range from social and economic to technical issues. Economic issues to be considered in the utilization of biomass resources include harvesting/collection costs, transportation costs, preparation costs and storage costs (Cameron et al. 2005). To promote biomass use, policies that are taxation-based or agriculture-based, as well as fuel mandates, are important for promoting long-term growth of biofuels. For instance, Saddler (2004) showed that the prices of ethanol from grain and of biodiesel were higher than those of their petroleum counterparts, gasoline and diesel respectively. While the price of biodiesel was U.S. $0.55 per litre, that of petroleum diesel was U.S. $0.27 per litre. Besides the provision of power, biomass conversion technologies provide economic opportunities to the rural areas where the crops are grown, including access to social services such as clean water, education and improved medical care.
version technologies, particularly in biomass, create more employment than conventional energy technologies (Sims and Richards, 2004). The Full-Time Equivalent (FTE) staffing needed to operate and maintain conventional gas- or coal-fired plants ranges from 1 - 6 per 100 GWh/yr, solar thermal 25 - 27 per 100 GWh/yr, and small hydro 8 - 9 per 100 GWh/yr. Bioenergy projects would need 42 personnel per 100 GWh/yr to provide sufficient biomass fuel supplied at the plant when obtained from energy crops, 10 from collected forest residues, or 36 from agricultural waste. A further 22 staff would be needed to run sufficient biogas plants to generate 100 GWh/yr, or 8 to operate combustion/steam turbine plants, or 9 for wood gasification plants where closer supervision is needed. Increased energy security and energy efficiency is another feature that increases the attractiveness of renewable energy sources like biomass. The application of biomass for energy is possible via a number of conversion routes. One such possibility is the utilization of a high temperature air or steam gasifier (HTA/SG). Here, highly preheated air/steam at temperatures above 1000°C is used to gasify biomass. The application of high temperature air and/or steam enhances thermal decomposition of the biomass and provides additional energy to the gasification process, leading to numerous advantages. The process is efficient and produces a clean syngas for direct combustion or electricity generation.
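The employment figures quoted above can be put on a common basis of staff per 100 GWh/yr. The short Python sketch below simply tabulates those figures (Sims and Richards, 2004) for comparison; it is illustrative only, and the quoted ranges are collapsed to single representative values.

# Illustrative only: full-time-equivalent (FTE) staffing per 100 GWh/yr,
# using representative values from the ranges quoted above (Sims and Richards, 2004).
fte_per_100gwh = {
    "gas- or coal-fired plant": 6,                    # upper end of 1 - 6
    "solar thermal": 27,                              # upper end of 25 - 27
    "small hydro": 9,                                 # upper end of 8 - 9
    "bioenergy fuel supply (energy crops)": 42,
    "bioenergy fuel supply (forest residues)": 10,
    "bioenergy fuel supply (agricultural waste)": 36,
    "biogas plant operation": 22,
    "combustion/steam turbine plant operation": 8,
    "wood gasification plant operation": 9,
}

for technology, fte in sorted(fte_per_100gwh.items(), key=lambda kv: -kv[1]):
    print(f"{technology:45s} {fte:3d} FTE per 100 GWh/yr")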
2.0 THE GASIFICATION TECHNOLOGY
2.1 Low Temperature Gasification
In 2001, industrialized countries with 15% of the world population consumed 223.1 quadrillion kJ of commercial energy, Asia with 54% of the world population consumed 98.7 quadrillion kJ, whilst Africa with 14% of the world population used 13.1 quadrillion kJ (Energy Information Administration, 2004). The National Energy Policy of many African countries shows that about 90% of the total energy is consumed in the household, and 92% of it is in the form of biomass converted at low efficiency by poor technologies. Besides other uses, biomass can be utilized for generating producer gas from low temperature gasifiers, which can be used for electricity generation or by direct firing for various industrial processes. Thermal biomass gasification has its origins in the old coal gasification technology. The first confirmed use of producer gas generated from coal, for room lighting, was reported by Murdock in England around 1792. Dowson in England introduced producer gas-engine systems in 1872, and J. W. Parker in Scotland operated a producer gas passenger vehicle between 1901 and 1905 (Goss, 1981). Large-scale, rapid development of small-scale producer gas-engine systems continued between World Wars I and II. After World War II the most extensive and well-demonstrated producer gas-engine research for commercial trucks and farm tractors was done by the National Machine Testing Institute in Sweden. This work was funded by the Swedish Government to provide an alternative to the long-term curtailment of fuel experienced during the World Wars (Solar Energy Research Institute, 1979). The performance of low temperature gasifiers is well documented, together with their efficacy (Jingjing et al. 2001). Many types of gasifiers, with varying schemes for both reactor design and reaction media over a wide range of operating conditions, have been used before. The most common gasification reactor types are the counter-current fixed bed (updraft), the co-current fixed bed (downdraft) and the fluidized bed. In biomass gasification, the combustible output gases are mainly carbon monoxide, hydrogen and non-condensable hydrocarbons,
which are mostly methane. A small percentage of the hot gas as it comes from the gas producer will contain combustible tar vapours. The updraft gas producer, which has been used before, is best suited for close coupling to the firebox of a steam boiler, where the gas burns readily. Its output gas contains more condensable tar vapours than the gas from a downdraft gas producer. Updraft gas producers are normally used to gasify coal or wet (up to 50% moisture content) crop and wood residues and can handle fuels with high ash content. The fluidized bed gasification process has the important operational feature of being able to control the temperature throughout the fluidized gasification chamber. Typical problems associated with low temperature gasifiers are due to the high ash content of some biomass such as rice straw. A high ash content (around 25%) is unsatisfactory for simple air-blown downdraft gas producers due to ash agglomeration problems (Blasi et al. 1999). Disposal of the ash in landfills is an environmental hazard that would cause leaching of heavy metals and result in contamination of groundwater. Both the high ash content and the lower gasifying temperature in these gasifiers result in a low calorific value producer gas. The presence of tar in the producer gas is undesirable. The estimated amount of tar produced by an updraft gasifier is around 100 g/m3 (at STP), by a fluidized bed gasifier around 10 g/m3, and by a downdraft gasifier around 1 g/m3 (Devi, 2005). On the basis of these problems, research efforts have been directed at the application of emerging new biomass conversion technologies that are both efficient and environmentally friendly.
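As a rough illustration of what the tar figures above imply for the gas cleaning duty, the sketch below multiplies the reported tar concentrations by an assumed producer-gas flow. The 500 Nm3/h flow rate is a hypothetical example chosen for illustration only, not a value from the studies cited.

# Rough tar-load estimate from the concentrations quoted above (Devi, 2005).
# The gas flow rate is an assumed, illustrative figure.
tar_g_per_m3 = {"updraft": 100.0, "fluidized bed": 10.0, "downdraft": 1.0}

gas_flow_m3_per_h = 500.0   # assumed producer-gas flow at STP
hours_per_day = 24.0

for gasifier, tar in tar_g_per_m3.items():
    tar_kg_per_day = tar * gas_flow_m3_per_h * hours_per_day / 1000.0
    print(f"{gasifier:14s}: ~{tar_kg_per_day:7.1f} kg of tar per day to be handled")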
2.2 High Temperature Air/Steam Gasification (HTA/SG)
The emerging HTA/SG technology is only now receiving steady recognition, in Japan (Tsuji et al., 2002) and in Sweden (Blasiak et al., 2002), where the main features of the gasification technology have been established. HTA/SG is an efficient process of converting agro-forest biomass into a high calorific value fuel gas. An optimized HTA/SG process increases the share of biomass renewable energy in the primary energy supply stream and thereby reduces the associated environmental degradation accrued from the use of fossil fuels. High temperature gasification (HTAG) technology has been developed in recognition of the emerging environmental concerns, the need for energy efficiency and the sustainability of energy supply. The technology uses highly preheated air/steam in excess of 1000°C as the gasifying medium. The preheated air temperature exceeds the ash melting point of the gasified materials. In this manner, the ash disposal problems of meeting stringent trace metal limits, as reported by Kouvo (2002), are avoided. The preheated gasifying medium provides additional energy to the gasification process to enhance thermal decomposition. The application of this technology has given an indication of a reduction in energy use of approximately 30% and a 25% reduction in the physical size of facilities as compared to the traditional type of furnace, and has demonstrated extremely high reduction levels of emissions of nitric oxides, by 50% (Tsuji et al., 2002). High temperature air/steam gasification has been reported in both batch (Lucas et al. 2004) and continuous (Kalisz et al. 2004) gasifiers. The studies show the direct influence of the temperature of the feed gas on the formation of the combustible gases H2 and CO. Lucas et al. (2004) reported that the formation of H2 increased from about 10% up to nearly 14% with an increase of the temperature of the preheated feed gas from 350°C to 830°C. The
increase of the H2 formation was favoured by the increase of the gasifier temperature due to the combined effect of the exothermic character of the water-gas shift reaction, which begins and predominates between 500°C and 600°C, as seen in Eq. 1:

CO + H2O ⇌ CO2 + H2    (1)
The primary water gas reaction, which becomes significant from 1000°C to 1100°C and above, is explained by Eq. 2:

C + H2O ⇌ CO + H2    (2)
The existence of CH4 and other light hydrocarbons during the gasification process is also in line with the fact that the high temperature within air-blown gasifiers thermally cracks the lighter oils into gaseous components, aromatics and polynuclear aromatics, and suggests that the gaseous products from tar cracking may undergo gas phase reactions among themselves, thus solving the tar problem. Thermal cracking of tar improves the energy content of the producer gas. The maximum lower heating value (LHV) of the dry fuel gas produced with the feed gas preheated at 350°C was 6.9 MJ/Nm3; it increased to 8.2 MJ/Nm3 and rose slightly again to 8.7 MJ/Nm3 for the feed gas preheated at 700°C and 830°C respectively.
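The reported dry-gas LHV values can be read as three points on a curve of heating value against feed-gas preheat temperature. The sketch below linearly interpolates between those points; it is only a convenience for visualising the trend reported by Lucas et al. (2004), not a model from that study.

# Linear interpolation of dry fuel-gas LHV against feed-gas preheat temperature,
# using the three points reported above: 350, 700 and 830 degC.
points = [(350.0, 6.9), (700.0, 8.2), (830.0, 8.7)]  # (preheat T in degC, LHV in MJ/Nm3)

def lhv_estimate(t_preheat_c: float) -> float:
    """Piecewise-linear estimate of dry-gas LHV (MJ/Nm3) between the reported points."""
    (t0, v0), (tn, vn) = points[0], points[-1]
    if t_preheat_c <= t0:
        return v0
    if t_preheat_c >= tn:
        return vn
    for (t1, v1), (t2, v2) in zip(points, points[1:]):
        if t1 <= t_preheat_c <= t2:
            return v1 + (v2 - v1) * (t_preheat_c - t1) / (t2 - t1)
    raise ValueError("unreachable")

for t in (350, 500, 700, 830):
    print(f"preheat {t:4d} degC -> LHV ~ {lhv_estimate(t):.1f} MJ/Nm3")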
A schematic diagram of fuel gas production utilizing a biomass and wastes gasification process based on the HTAG process is shown in Fig. 1. As can be seen in the diagram, high temperature air is produced by means of an air preheater equipped with a pair of ceramic honeycomb heat regenerator storage beds. The flue gas and air pass through the honeycomb heat storage beds alternately with a short switching period (Blasiak, 2002). The syngas exits the gasifier at about 1200°C. Consequently, the thermal energy of the syngas can be captured for low-pressure utility use. The resulting lower temperature syngas is passed through a gas cleaning system comprising cyclones and filters and is cooled to 25°C. By combusting the clean syngas in a furnace, gas engine or turbine, the desired thermal energy or electricity can be produced for process use or for electricity generation. The gasification process can be controlled in such a way that the gasification by-products are either in the form of ash or slag. Further work on the possible use of the ash as fertilizer, if proved feasible, may enhance the efficient utilization of the gasification process. Additionally, the slag formed can find use in the construction industry.
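Because the syngas leaves the gasifier at roughly 1200°C and is finally cooled to about 25°C, a large share of its sensible heat is available to the low-pressure utility mentioned above. The sketch below estimates that recoverable heat per normal cubic metre of syngas; the mean volumetric heat capacity and the recovery cut-off temperature are assumed values introduced for illustration, not data from the paper.

# Rough estimate of the sensible heat recoverable while cooling HTAG syngas.
# cp_mean and the heat-recovery cut-off temperature are assumed values.
T_GASIFIER_EXIT_C = 1200.0   # syngas temperature leaving the gasifier (from the text)
T_RECOVERY_CUTOFF_C = 200.0  # assumed temperature below which heat is no longer recovered
CP_MEAN_KJ_PER_NM3_K = 1.4   # assumed mean volumetric heat capacity of the syngas

def recoverable_heat_kj_per_nm3(t_hot_c: float = T_GASIFIER_EXIT_C,
                                t_cold_c: float = T_RECOVERY_CUTOFF_C,
                                cp: float = CP_MEAN_KJ_PER_NM3_K) -> float:
    """Sensible heat released per Nm3 of syngas while cooling from t_hot_c to t_cold_c."""
    return cp * (t_hot_c - t_cold_c)

q = recoverable_heat_kj_per_nm3()
print(f"~{q:.0f} kJ (about {q/1000:.1f} MJ) of sensible heat per Nm3 of syngas")
# Compared with a dry-gas LHV of 7 - 9 MJ/Nm3, this is a non-trivial fraction of the
# fuel energy, which is why heat recovery from the hot syngas matters.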
Fig. 1. Envisaged schematic diagram of fuel gas production and utilization using a biomass and wastes gasification process based on the high temperature air gasification (HTAG) process
3.0 INFLUENCING PARAMETERS
3.1 Moisture Content
Known features affecting the gasification process are the release of moisture, the mineral matter content in the raw biomass, and the gasifying medium composition. Biomass materials contain a certain percentage of moisture which is bound to the cell walls; its evaporation requires more energy than that needed to boil free water and may occur at temperatures exceeding 100°C (Janssens, 2004). Increasing the fuel moisture content increases the energy requirement for bringing the material to the ignition temperature and reduces the ignition speed in the fuel bed (Horttanainen et al. 2000, Amagai et al. 2000). Consequently, the overall efficiency of the thermochemical reactions is reduced (Dobie and Haq, 1980). The presence of water in high temperature air gasification of biomass was investigated by Lucas et al. (2004). The study findings showed the direct influence of the temperature of the feed gas on the formation of the combustible gases H2 and CO. The formation of H2 increased from about 10% up to nearly 14% with an increase of the temperature of the preheated feed gas from 350°C to 830°C. The increase of the H2 formation was favoured by the increase of the gasifier temperature, due to the combined effect of the exothermic water-gas shift reaction (CO + H2O ⇌ CO2 + H2), which begins and predominates between 500°C and 600°C, and of the primary water gas reaction (C + H2O ⇌ CO + H2), which becomes significant from 1000°C to 1100°C and above. A certain amount of steam present in the oxidizer has been reported to boost the yield and LHV of gasifiers that operate at relatively high temperatures, due to the water-gas shift reaction (Lucas et al. 2004).
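The energy penalty of fuel moisture discussed above can be illustrated with the standard estimate of the as-received lower heating value, in which the latent heat of the water to be evaporated (about 2.44 MJ/kg) is subtracted. The dry-basis LHV used below is an assumed typical value for woody biomass, not a figure from this paper.

# Effect of moisture on the usable heating value of biomass (standard approximation).
# LHV_DRY is an assumed typical value for woody biomass; 2.44 MJ/kg is the latent
# heat of vaporisation of water.
LHV_DRY_MJ_PER_KG = 18.0        # assumed dry-basis lower heating value
LATENT_HEAT_MJ_PER_KG = 2.44    # evaporation penalty per kg of water

def lhv_as_received(moisture_fraction: float,
                    lhv_dry: float = LHV_DRY_MJ_PER_KG) -> float:
    """As-received LHV (MJ/kg) of fuel containing the given wet-basis moisture fraction."""
    if not 0.0 <= moisture_fraction < 1.0:
        raise ValueError("moisture fraction must be in [0, 1)")
    return lhv_dry * (1.0 - moisture_fraction) - LATENT_HEAT_MJ_PER_KG * moisture_fraction

for w in (0.10, 0.25, 0.50):
    print(f"moisture {w:4.0%}: LHV ~ {lhv_as_received(w):5.2f} MJ/kg as received")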
3.2 Mineral Matter Content
Biomass materials contain varying degrees of mineral matter and ash potential. The effects of these on the ultimate deployment of the biomass are on record. Factors limiting the use of biomass materials include their composition, in particular the presence of alkali metals (Na, K), alkaline earth metals (Ca, Mg), Si, Cl and S. These are the major ash-forming components
leading to deterioration of the energy content and causing ash disposal problems. Due to the formation of corrosive compounds like sulphuric acid (H2SO4) and hydrochloric acid (HCl), there is alkali corrosion and deposition that damage process hardware (Vamvuka and Zografos, 2004). The current industrial gas turbine specification limit for alkali metals in the product gas entering the combustion chamber is 0.1 ppm by weight (Salo and Mojtahedi, 1998). In addition, the presence of toxic trace elements of heavy metals such as Cu, Zn, Co, Mo, As, Ni, Cr, Pb, Cd, V and Hg influences the combustion process by forming gaseous and solid emissions and significantly affects the ash melting behaviour as well as fouling and corrosion. The resulting emissions are an environmental concern requiring the deployment of treatment processes to meet existing environmental legislation. Wide ranges of heavy metal concentrations are found across biomass materials and biomass streams. Levels are especially high for zinc, copper and lead. Some of the heavy metals will remain in the gasifier ash, although they will partly evaporate in the gasifier, particularly those with a low melting point (cadmium, mercury and lead). Heavy metals are found in biomass in limited concentrations, but their effects are of concern for the environment rather than for the combustion equipment.
4.0 CONCLUSION
- HTAG and HTA/SG prospects elucidate potential benefits as compared to low temperature technology.
- It is imperative to check the moisture content in biomass, as this has a significant impact on the fuel gas quality and the energy balance of the gasification process, whilst it is worth registering that the operating parameters of the gasification process have a significant role in the conversion phenomena and the properties of the other gasification products.
The influence of the mineral matter is mainly on system hardware, while heavy metals are mainly an environmental emission concern rather than a cause of hardware deterioration.
REFERENCES
Amagai, K., Saito, M., Ogiwara, G., Kim, C. J., and Arai, M. (2000): Combustion Characteristics of Moist Wood. International Joint Power Generation Conference, Miami, Florida, IJPG2000-15071
Blasi, C. D., Signorelli, G. and Portoricco, G. (1999): Countercurrent Fixed-bed Gasification of Biomass at Laboratory Scale. Ind. Eng. Chem. Res., 1999, pp. 2571 - 2581
Blasiak, W., Szewczyk, D., Lucas, C., and Mochida, S. (2002): Gasification of Biomass Wastes With High Temperature Air and Steam. IT3 2002 Conference, New Orleans, Louisiana
Cameron, J., Kumar, A., and Flynn, P. (2005): Scale Impacts on the Economics of Biomass Utilization. BIOCAP Conference, Ottawa, Canada
Devi, L. (2005): Catalytic Removal of Biomass Tars; Olivine as Prospective In-bed Catalyst for Fluidized Bed Biomass Gasifiers. Ph.D. Thesis, Technische Universiteit Eindhoven
Dobie, J. B. and Haq, A. (1980): Outside Storage of Baled Rice Straw. ASAE paper number 80-2304, American Society of Agricultural Engineers, St. Joseph, Michigan 49085, pp. 990 - 993
Energy Information Administration (2004): International Energy Outlook 2004. Office of Integrated Analysis and Forecasting, U.S. Department of Energy, Washington, DC 20585.
Goss, J. R. (1981): Gasification of Rice Straw. ASAE paper number 81-5703, American Society of Agricultural Engineers, St. Joseph, Michigan 49085
Hanaoka, T., Yoshida, T., Fujimoto, S., Kamei, K., Harada, M., Suzuki, Y.
Horttanainen, M. V. A., Saastamoinen, J. J. and Sarkomaa, P. J. (2000): Ignition Front Propagation in Packed Beds of Wood Particles. IFRF Combustion Journal, article number 200003, ISSN 1562-479X
Janssens, M. L. (2004): Modeling of the Thermal Degradation of Structural Wood Members Exposed to Fire. Fire and Materials 2004, 28: pp. 199 - 207
Jingjing, L., Xing, Z., DeLaquil, P., Larson, E. D. (2001): Biomass energy in China and its potential. Energy for Sustainable Development, Volume V, No. 4, pp. 66 - 80
Kalisz, S., Abeyweera, D., Szewczyk, D., Jansson, C., Lucas, C., and Blasiak, W. (2004): Energy Balance of High Temperature Air/Steam Gasification of Biomass in Up-draft, Fixed Bed Type Gasifier. IT'04 Conference, May 10-14, 2004, Phoenix, Arizona
Kouvo, P., Sandelin, K. and Backman, R. (2002): Trace Element Partitioning in the Cofiring of RDF, Saw Dust and Peat. IFRF Combustion Journal, article number 200205, ISSN 1562-479X
Lucas, C., Szewczyk, D., Blasiak, W. and Mochida, S. (2004): High-Temperature Air and Steam Gasification of Densified Biomass. Biomass and Bioenergy, Vol. 27, pp. 563-575
Saddler, J. (2004): Biofuels for Transport. IEA Bioenergy Task 39, Forest Products Biotechnology Group, Faculty of Forestry, University of British Columbia, Vancouver, B.C., Canada
Salo, K. and Mojtahedi, W. (1998): Fate of Alkali and Trace Metals in Biomass Gasification. Biomass and Bioenergy, Vol. 15, issue 3, pp. 263-267
Sims, R. and Richards, K. (2004): Bioenergy for the Global Community. Renewable Energy World, January - February issue, pp. 128-133
Solar Energy Research Institute (1979): Generator Gas - The Swedish Experience from 1939-1945. Prepared for the U.S. Department of Energy, contract number E.G. 77 C 01 4042
Tsuji, H., Gupta, A. K., Hasegawa, T., Katsuki, M., Kishimoto, K. and Morita, M. (2002): High Temperature Air Combustion - From Energy Conservation to Pollution Prevention. CRC Press, ISBN 0 8493 10369
Vamvuka, D. and Zografos, D. (2004): Predicting the Behaviour of Ash from Agricultural Wastes During Combustion. Fuel, Vol. 83, issues 14-15, pp. 2051-2057
DEVELOPING INDIGENOUS MACHINERY FOR CASSAVA PROCESSING AND FRUIT JUICE PRODUCTION IN NIGERIA
L.A.S. Agbetoye, O.C. Ademosun, A.S. Ogunlowo, O.J. Olukunle, O.P. Fapetu and A. Adesina, Department of Agricultural Engineering, Federal University of Technology,
Akure, Ondo State, Nigeria.
ABSTRACT
There are many emerging agro-based industries in Nigeria. Two such industries are the fruit juice production and cassava processing (into gari, chips, flour, starch and grit) industries. In fact, because of their increasing industrial and revenue-earning potential, many Nigerians are now investing in either the fruit juice production or the cassava processing business. This paper identifies the machinery required for processing cassava into its major products and for producing fruit juice from tropical fruits. It discusses the problems associated with the use and procurement of imported machinery, and also presents the various machines developed by the Federal University of Technology, Akure, Nigeria for fruit juice production and cassava processing. Some of the developed machines have been installed in the factory established by the University and are being used for commercial production, while others are being fabricated and sold to farmers. Recommendations on the improvement of the machines for adoption and commercialization are proposed. Keywords: Cassava; Fruits; Indigenous machinery; Cassava processing; Fruit juice production; Nigeria.
INTRODUCTION
The technology of processing, handling and packaging of agricultural products in Nigeria has largely remained at the peasant level. However, efforts are now being intensified by the federal government to diversify the export potential of the country through the exportation of processed agricultural materials such as cassava in the form of chips, pellets, gari and starch. Furthermore, significant amounts of agricultural produce are lost during the peak harvesting seasons due to inadequate storage and processing technology. Such materials include fruits such as oranges, mango, pineapple and banana, vegetables such as amaranthus, tomato, pepper and okro, and tuber crops such as cassava, yam and cocoyam, to mention a few. Many policies of successive governments of Nigeria aimed at self-sufficiency in food production, including the Accelerated Food Production Programme (AFPP), Operation Feed the Nation (OFN), Integrated Rural Development (IRD), Green Revolution (GR), Agricultural Development Programme (ADP), Directorate of Food, Roads and Rural Infrastructure (DIFFRI) and the National Agricultural Land Development Authority (NALDA), have not succeeded because of many factors (Ogunlowo, 2003). One major factor is the indiscriminate influx of machinery and equipment into the farms resulting
from the implementation of these agricultural policies. There is therefore the need to evolve our own indigenous technology to address the issue of food security in Nigeria. Many authors have reported on the need to develop indigenous technology for engineering the various aspects of our agricultural operations (Ademosun, 1997; Agbetoye, 2003a & 2004; Olukunle, 2002; Ademosun et al., 2003; Adewumi, 1998). The problems militating against the development of agricultural equipment and their commercial manufacture in Nigeria have been stated (Agbetoye, 2004). Governments in Nigeria have been making concerted efforts to encourage local production of agricultural equipment but have been faced with some problems. Among these is the fact that agro-processing depends largely on imported technologies and machinery. Apart from the enormous attendant cost, the unavailability of spare parts for the sophisticated equipment, and in some cases of the relevant manpower to operate and service the equipment, has given rise to the need to look inward for locally fabricated equipment. It is an established fact that the economic development of any nation depends considerably on the level of its industrial development. Industrial development can only be achieved when there is a strong industrial base. The level of manufacturing activity in Nigeria can be measured by the volume of output of locally manufactured goods (Igbeka, 1996; Ige, 1987). Presently, there is an upward movement in the involvement of Nigeria in the industrial processing of our natural resources. This trend has been made possible by the emergence and growth of indigenous equipment manufacturers (Igbeka, 1997). Ogazi and Chukwujekwu (1998) attributed some of the problems of equipment production to the calibre of the people that head and staff most of the existing fabrication companies in Nigeria. They pointed out that most of the fabrication companies are headed by those with limited engineering training who can copy existing equipment or a prototype but can neither draw nor interpret an engineering drawing. Their knowledge of material properties is limited and therefore they cannot make an adequate choice of material for a given job. Nigeria is blessed with abundant land resources, well suited for mechanised agriculture. Major food crops cultivated include yam, cassava, cocoyam, maize, sorghum and cowpea. Major cash crops produced include coffee, cocoa and cashew. Many of these crops are processed by traditional methods. Two crops that are gaining more popularity in terms of emerging processing factories are cassava and fruits. This paper identifies the relevant machinery required and also examines the efforts being made by the Federal University of Technology, Akure in Nigeria in developing indigenous machinery for cassava processing and fruit juice production.
CASSAVA PROCESSING IN NIGERIA
Economic Importance of Cassava
Since 1990, Nigeria has surpassed Brazil as the world's leading producer of cassava, with an estimated annual production of 26 million tonnes from an estimated area of 1.7 million hectares of land (FAO, 1991). Other major producers of cassava are Zaire, Thailand, Indonesia, China, India, Malaysia, Malawi, Togo and Tanzania. The importance of cassava as a cheap source of calorie intake in the human diet, especially in the tropi-
cal areas of Africa, Asia and Latin America, as well as a source of carbohydrates in the production of animal feed (chips and pellets) and of industrial raw materials such as starch and alcohol, has been reported (Odigboh, 1983; Ugwu and Okereke, 1985; Agbetoye, 1995 and 2003; Kawano, 2000; Ali and Ogbu, 2003). Cassava starch is an ingredient in the manufacture of dyes, drugs, chemicals and carpets and in the coagulation of rubber latex (Odigboh, 1983). Cassava, which had previously been regarded as a poor man's food, is increasing in industrial and economic potential (Agbetoye, 1995). In fact, there has been a revenue generation projection of about $100 million for cassava in Nigeria by 2005 (Ali and Ogbu, 2003). According to Professor Dupe Olatunbosun of the University of Agriculture, Abeokuta, in his lecture titled "Cassava Revolution - Implications for Civil Servants", world import demand for cassava in 2004 was 25 million tonnes, while local demand by poultry farmers in the country has reached 400,000 tonnes annually. Meanwhile many State and Local governments, apart from individuals, have embarked on large-scale cassava production. Among these is the Ogun State Government, which in February 2005 established 12 centres for the rapid multiplication of high yielding, disease and pest resistant varieties of cassava cuttings, with each centre having an area of 20 ha. Currently, there is a high demand for cassava products both locally and abroad. On the local scene, there is a federal government directive that producers of flour for bread baking must include 10% cassava in their product. The Chinese and other Asian countries have also ordered large quantities of cassava chips. There is no doubt, therefore, that the cost of gari, the most popular food derived from cassava, will increase. One solution to the impending scarcity of gari is the development of small-scale technologies for increased gari production. Furthermore, for the federal government to exploit the opportunity to generate revenue from cassava and to remove over-dependency on oil revenue, urgent attention must be given to the development of machinery for the mechanized production and processing of cassava. Most of the cassava produced in Nigeria still comes from peasant farmers who depend on manual tools for their field operations. An increase in the production of cassava implies the mechanization of its cultivation, harvesting and processing. The demand is likely to increase because of the superior quality of Nigerian cassava (Agbetoye, 2005).
The Need for Cassava Processing
Like many other foods, such as fruits and vegetables, roots and tubers are rarely eaten raw. They normally undergo some form of processing before consumption. Even though raw sweet cassava is occasionally eaten in the Congo region, Tanzania and West Africa, cassava is not generally consumed raw. Cassava consists of a high percentage of water. Processing it into a dry form therefore reduces the moisture content and converts it into a more durable and stable product. Gari is very popular in West Africa and is a staple food in Nigeria, Ghana, Benin and Togo. Its ability to store well and its acceptance as a "convenience food" are responsible for its increasing popularity in the urban areas of West and Central Africa (IITA, 1990 and FAO, 1990). It is often consumed as the main meal in the form of dough or a thin porridge. Both are prepared in the household by mixing dry gari with hot or cold water and cooking, and are served with soup or stew.
Gari is also eaten as a snack when mixed in cold water with sugar, and sometimes milk.
The processing of gari from cassava has been reported by many authors as a labour-demanding operation, and women and children are the major producers. Onwueme (1978) stated that, in the traditional setting, only very simple hand equipment is employed in the production of gari. According to Nweke et al. (1994), the poor processing quality of gari emanates from the difficulty of processing, such as the problems associated with peeling, grating, milling, dewatering, toasting, sifting etc., which are labour-intensive tasks. Francis (1984), in his study of the problems involved in the traditional processing of cassava into gari in Ibadan, highlighted several problems involved in each stage of gari processing and concluded that gari production is an energy-, time- and labour-consuming operation. Likewise, Ikpi and Hahn (1989) reported that cassava processing is almost entirely performed by women at the household level or at a central location such as a village or town market place. They estimated that at least 45% of the labour requirement is accounted for by peeling and sifting.
Some Locally Developed Cassava Processing Machines at FUTA, Nigeria The efforts of the Federal University of Technology, Nigeria in developing indigenous machines for mechanized processing of cassava are presented below.
Cassava Washing Machine
Washing of cassava tubers before processing is desirable to remove adhering soil particles before peeling and to remove other debris even after peeling. Proper cleaning of the tubers before and after peeling will engender a good quality product. A washing machine has been designed for the pilot plant for cassava processing in the University. It comprises rollers equipped with washing brushes that wash the tubers as they are conveyed from one end of the machine to the other. A pipe network and a tank equipped with a pump are provided to circulate water. Fabrication of the machine has reached an advanced stage.
Cassava Peeling Machines
Different models of cassava peeling machine have been developed in the University. These include two models of hand-fed (single gang and double gang) and self-fed cassava peeling machines. The self-fed model consists of a dual abrasive brush mounted in parallel. An auger which permits the tubers to ride through in between the dual brushes is also provided. The cassava peeler is powered by a 5 kW petrol engine. The machine is required in the production line of the following products: cassava grit, gari, cassava flour, cassava chips and pellets, lafun, pupuru and starch.
Cassava Graters
Cassava graters are common in Nigeria. They utilize grating drums with rough and abrasive surfaces powered by diesel engines or electric motors. An improved grating machine has been developed in the University by replacing the grating drum with grating spikes fashioned out of corrosion-resistant materials.
[Photographs: self-fed peeling machine; (c) motorised cassava sifter; (d) cassava washing machine; (e) cassava drying/gari frying machine]
Figure 1: Cassava processing machines developed at the Federal University of Technology, Akure, Nigeria.
Continuous Press for Cassava Mash
Hydraulic and screw presses, which are produced locally, are plentiful in Nigeria. However, the demand for mechanized processing operations has necessitated the development of a continuous press in the University (Figure). The dewatering of the grated cassava mash is performed by a series of pistons pressing small quantities of mash through eight confined cylinders arranged in a circle. The grated cassava mash is fed continuously via an auger. The machine is powered by a 7.5 kW three-phase electric motor equipped with a speed reduction gearbox (see the sketch below).
Cassava Sifter
Different models of cassava sifter have been developed by the authors, including a dual-powered model reported by Agbetoye and Oyedele (2005). Another motorized model has been fabricated and coupled to a new gari fryer/chip dryer, also developed for use in the cassava processing pilot plant.
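As a back-of-the-envelope check on the dewatering press described above, the available pressing force can be estimated from the 7.5 kW motor rating using power = force x piston speed. The drive efficiency and piston speed below are assumed, illustrative values, since the paper does not report them.

# Rough estimate of the pressing force available from the 7.5 kW press drive.
# Drive efficiency and piston speed are assumed, illustrative values.
MOTOR_POWER_W = 7_500.0
DRIVE_EFFICIENCY = 0.8       # assumed overall gearbox/mechanism efficiency
PISTON_SPEED_M_PER_S = 0.05  # assumed average piston advance speed

available_force_n = MOTOR_POWER_W * DRIVE_EFFICIENCY / PISTON_SPEED_M_PER_S
print(f"Available pressing force ~ {available_force_n/1000:.0f} kN "
      f"(shared between the cylinders that are pressing at any instant)")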
Gari Fryer/Chip Dryer
Two major operations in cassava processing that are regarded as critical are gari frying and the drying of cassava chips. A lot of attempts have been made at improving the process of frying gari from the traditional method of using open clay pots mounted on a wood
fire. However, this operation is still largely done by manual methods. The truth is that this method cannot cope with the emerging mechanized methods being utilized for the other processing operations. Many people are advocating flash drying of cassava chips and starch. At the Federal University of Technology, Akure, Nigeria, an electric gari fryer cum dryer for cassava chips has been developed. It is equipped with instrumentation for monitoring the frying and drying parameters for effective performance. It was demonstrated at the Second Nigeria Universities Research and Development Fair in Abuja in December 2005. The machine is much cheaper to produce than the flash dryer and has shown promising performance.
DEVELOPMENT OF FRUIT JUICE PRODUCTION MACHINERY IN FUTA
Fruits are plentiful in the Southern and Middle Belts of Nigeria. They include oranges, pineapples, bananas and mangoes. During the peak harvesting season, up to 50% of the harvest is lost as waste (RAIDS, 1989). Between 1997 and 1998, the then Vice-Chancellor of FUTA, Prof. L.B. Kolawole, mandated a research team to develop machinery for the establishment of a fruit juice factory. The factory was commissioned for production in 1999 (Ademosun et al., 2001). The machines developed for the factory include a fruit washing machine, extractors, a homogenizer, a pasteurizer and a heat exchanger (Fig. 2). Currently, the university is reactivating the factory for full-time production. Lately, a machine for removing essential oil from the orange peels in order to eliminate manual peeling during processing and a manual bottle corking machine have been developed and installed in the factory.
Fruit Washing Machine
The fruit washing machine (Fig. 2a) has a capacity for washing up to 1400 oranges per batch. The major components of the washing machine are the frame, washing drum, water bath, washing drum adjuster and power transmission system.
Fruit Juice Extractor
The fruit juice extractor (Fig. 2b) was designed based on the principle of converting rotary motion to linear motion. The essential elements of the extractor include the electric motor, gear reduction box, crank wheel and pin, connecting rod, press plate, slot for the crank pin, perforated expression tray, expression base tray and frame. Power for the extractor comes from a 1420 rpm, 3-phase, 4 kW electric motor. The speed reduction gearbox assembly reduces the motor speed to the 2 rpm required at the crank wheel (see the sketch below). The crank pin, by moving through the crank pin slots, converts the rotary motion of the crank wheel to linear motion of the connecting rod. The press plate fits into the perforated expression tray containing the peeled fruits to be expressed. The expressed juice is collected using the base tray and piped into a concentrate tank located at the base of the frame.
Homogenizer, Filling Tank and Manual Bottle Corking Machine
The homogenizer, fabricated from stainless steel material (Fig. 2c), is used to mix the expressed juice with water and other additives. The electric motor driven machine contains a main vessel in which water is heated, a thermostat, a stirrer and a framework to support the machine. A stainless steel tank supported on metal frames and fitted with a pipe network, valves and taps dispenses the bottled juice into 33 cl bottles. A manually operated corking machine has also been fabricated locally to cork the bottled juice and is being used in the factory (Fig. 2f).
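The drive figures quoted for the juice extractor imply a very large speed reduction and a slow pressing cycle, which can be checked directly. The sketch below uses only the 1420 rpm motor speed and 2 rpm crank speed given above; no other data are assumed.

# Speed reduction and pressing-cycle time of the fruit juice extractor drive.
MOTOR_SPEED_RPM = 1420.0
CRANK_SPEED_RPM = 2.0

reduction_ratio = MOTOR_SPEED_RPM / CRANK_SPEED_RPM
cycle_time_s = 60.0 / CRANK_SPEED_RPM   # one crank revolution = one press/return cycle

print(f"Overall gear reduction ~ {reduction_ratio:.0f}:1")
print(f"Pressing cycle time ~ {cycle_time_s:.0f} s per crank revolution")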
Pasteurizer
A pasteurizer equipped for loading and offloading of bottled fruit juice (in crates) was produced and installed in the factory (Fig. 2d). It contains the crate containers, a hot water tank with cover, a heater unit, a water pump, a hoist and carriage system and a control unit. The pasteurizer has a length of 2.2 m, a width of 2.2 m and a height of 0.87 m. The hot water tank has a heater unit located at the base. It contains four 12 kW heaters to heat up water from an ambient temperature of 25°C to 90°C. The centrifugal pump circulates heated water from the base to the upper portion of the tank to maintain equilibrium. The control unit at the lower end regulates the temperature and flow of the water. Based on an estimated pasteurization time of 15 minutes (including loading and offloading) per batch of bottled juice, three batches per day are required to pasteurize 48 crates of 24 bottles per crate per day (see the sketch below).
Essential Oil Remover for Orange Fruits
A machine for removing essential oil from orange fruits, so as to eliminate manual peeling during processing, has been designed, and a model of it was fabricated and installed in the factory. The picture of the machine is shown in Figure 2e. The major components are the vibrating trays, frame, suspension springs, out-of-balance weights, water tank, water pump, pipes, belt and pulley drive system and the electric motor that drives the machine. The preliminary results indicated satisfactory performance.
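The heater rating and batch figures given for the pasteurizer can be sanity-checked as follows. The volume of water in the hot-water tank and the heat-loss allowance are assumed values, since they are not stated in the paper.

# Sanity check on the pasteurizer: warm-up time of the hot-water tank and daily throughput.
# Tank water volume and heat-loss allowance are assumed, illustrative values.
HEATER_POWER_KW = 4 * 12.0          # four 12 kW heaters (from the text)
WATER_VOLUME_L = 800.0              # assumed volume of water in the tank
HEAT_LOSS_FACTOR = 1.15             # assumed 15% allowance for losses
CP_WATER_KJ_PER_KG_K = 4.19
T_START_C, T_END_C = 25.0, 90.0     # from the text

energy_kj = WATER_VOLUME_L * CP_WATER_KJ_PER_KG_K * (T_END_C - T_START_C) * HEAT_LOSS_FACTOR
warmup_min = energy_kj / HEATER_POWER_KW / 60.0   # kJ / kW = seconds, then to minutes

BATCHES_PER_DAY = 3
CRATES_PER_DAY = 48
BOTTLES_PER_CRATE = 24
bottles_per_day = CRATES_PER_DAY * BOTTLES_PER_CRATE

print(f"Initial warm-up of the tank: ~{warmup_min:.0f} minutes at {HEATER_POWER_KW:.0f} kW")
print(f"Throughput: {BATCHES_PER_DAY} batches/day -> {CRATES_PER_DAY} crates "
      f"({bottles_per_day} bottles of 33 cl) per day")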
CONCLUSIONS
Some emerging locally produced machines for the processing of cassava and for fruit juice production have been identified and discussed. Such locally produced machines should be encouraged and supported financially, especially as they are produced directly by a University of Technology. The iron and steel industries in Nigeria should be revived to enhance local production of agro-allied machinery.
[Photographs: (a) fruit washing machine; (d) pasteurizer (hot water tank, carriage system and crate containers); essential oil remover; manual corking machine; corking in progress]
Fig. 2: Machines in the FUTA fruit juice-processing factory (Ademosun et al., 2001).
ACKNOWLEDGEMENTS
The authors wish to express their appreciation to the former Vice-Chancellor, Professor
L.B. Kolawole who inspired the setting up of the FUTA fruit juice factory in 1998, and for providing the required funds through the University Research Grants Committee to develop and install the machines for fruit juice production in the factory. The enthusiasm and commitment of the current Vice-Chancellor, Professor Peter O. Adeniyi towards the sustenance and commercialization of the fruit juice factory, as well as for the development of cassava processing machineries in the university through the provision of funds are gratefully acknowledged. The contribution of all the technical staff in the department of Agricultural Engineering in fabricating the developed machines is thankfully recognized. Furthermore, I acknowledge the contributions of the members of the Committee on Fruit Project, FUTA led by Professor V.A. Aletor. Finally, I wish to thank the Chairman and Management of the Centre for Continuing Education, Federal University of Technology for sponsoring my attendance at this Conference in Uganda. REFERENCES
Ademosun, O.C. (1997). Indigenous Technology for Local Agro-based Industries. Inaugural Lecture Series 11, Federal University of Technology, Akure, pp. 55.
Ademosun, O.C.; Ogunlowo, A.S.; Fapetu, O.P. and Agbetoye, L.A.S. (2001). The Establishment of a Medium Scale Fruit Processing Factory: FUTA Experience, Journal of Science, Engineering & Technology, 8(5): 3155-3164
Ademosun, O.C.; Adewumi, B.A.; Olukunle, O.J. and Adesina, A.A. (2003), Development of Indigenous Machines for Weeding and Grain Harvesting: FUTA Experience, Journal of Engineering Technology, 3(2): 77-84.
Adewumi, B.A. (1998), Developing Indigenous Machinery Base for Crop and Food Processing Industry in Nigeria, Proceedings of the National Engineering Conference of the Nigerian Society of Engineers held at Maiduguri International Hotel, Maiduguri, Nigeria, pp. 43-49.
Agbetoye, L.A.S. (1999), Mechanics of Cassava Lifting, Unpublished PhD thesis, Silsoe College, Cranfield University, Bedford, United Kingdom, pp. 280.
Agbetoye, L.A.S. (2003), Engineering Challenges in Developing Indigenous Machinery for Cassava Production and Processing, Proceedings of the Annual Conference of the Nigerian Society of Engineers (Lagelu 2003), Cultural Centre, Ibadan, Oyo State, 8th-12th December, 2003, pp. 80-86.
Agbetoye, L.A.S. (2004). The Problems Affecting Development and Manufacture of Agricultural Machinery in Nigeria, submitted for publication in "The Nigerian Engineer" Journal, pp. 9.
Agbetoye, L.A.S. (2005). Improving the Technology of Cassava Harvesting and Processing Mechanisation for Food Security in Nigeria, Paper presented at the International Conference on Science and Technology, held at the Federal University of Technology, Akure, Ondo State, Nigeria, August 14-19.
Ali, Y. and Ogbu, C. (2003). Cassava Export: Ogbeh's Team Returns from Search for Market. In: The Punch, edited by Azubuike Ishiekuene, Tuesday 16th September, 2003, p. 5.
FAO (1990). Roots, Tubers, Plantains and Bananas in Human Nutrition, Food and Agricultural Organisation of the United Nations, Rome, Italy, pp. 59-60, 64.
FAO (1991), Production Yearbook for 1990, Food and Agricultural Organisation of the United Nations, Rome, Italy.
FAO (1998). Storage and Processing of Root and Tuber in the Tropics. Food and Agricultural Organisation of the United Nations, Rome, Italy, pp. 3-4.
Igbeka, J.C. (1996). Problems of Processing Cassava into Gari with Reference to Mechanization in Ibadan. Unpublished H.N.D. Project Report, IAR&T, Ibadan, pp. 2.
Igbeka, J.C. (1996). Resourcing Scientific Indigenous Technological Development in Nigeria, Paper presented at the National Consultative Workshop on Equipment Maintenance, Rehabilitation, Manufacture and Technology Development in Nigeria, International Conference Centre, Abuja, 9-10 October 1996.
Igbeka, J.C. (1997). Agro-processing Machinery and Equipment Manufacturing in Nigeria: Proceedings of the National Workshop on Appropriate Agricultural Mechanisation for Skill Development. In: Low-Cost Agricultural Mechanisation Practices, pp. 339-348.
Ige, M.T. (1987), Development and Management of Appropriate Farm Power and Machinery Technology for Achieving a Self-Reliant Integrated Rural Development, Proceedings of the Nigerian Society of Agricultural Engineers, 2: 71-73.
Ikpi, A.E. and Hahn, N.D. (1989). Cassava: Lifeline for the Rural Household, pp. 60.
Igoni, A.H. (2000), A Continuous Flow Rotary Gari Sieve, Conference paper number NIAE/2000/PRS-22, Nigerian Institution of Agricultural Engineers.
IITA (1990). Cassava in Tropical Africa: A Reference Manual, International Institute of Tropical Agriculture, Ibadan, Nigeria, pp. 87, 98, 95.
Jimoh, A.G. and Oladipo, I.O. (2000). Development of a Mechanical Gari Sieving Machine for Small-Scale Production, Conference Paper number NIAE/2000/PRS.09, Nigerian Institution of Agricultural Engineers.
Kawano, K. (2000). Cassava as a source of animal feed and income generation in upland farming communities of Asia. Science Reports of Faculty of Agriculture, Kobe University, Japan, 24(1): 123-124.
Nweke, F.I., Dixon, A.G.O., Asiedu, A. and Folayan, S.A. (1994). Cassava Varietal Needs of Farmers and the Potential for Production Growth in Africa, pp. 321; 89-90.
Odigboh, E.U. (1983). Cassava Production, Processing and Utilization, In: Chan Jnr., H.T. (ed.), Handbook of Tropical Foods, Marcel Decker Pub., Inc., 270 Madison Avenue, New York, pp. 145-200.
Ogazi, P.O. and Chukwujekwu, S.E. (1998). Locally Designed and Manufactured Goods: Prospects for The Third Millennium in Nigeria, A Paper Presented at the COREN Engineering Assembly, Port Harcourt, 26th-28th August 1998.
Olukunle, O.J. (2002). Development of an Indigenous Combine Harvester, Unpublished Ph.D. Thesis, Department of Agricultural Engineering, Federal University of Technology, Akure, Nigeria.
Onwueme, I.C. (1978). The Tropical Tuber Crops, John Wiley and Sons Ltd., pp. 147-148.
Ugwu, B.O. and Okereke, O. (1985), The problem of inadequate supply of raw cassava tubers for industrial processing: A case study of the Nigeria Root Crop Production Company, Enugu, Agricultural Systems, 18(3): 155-170.
CHAPTER FIVE ELECTRICAL ENGINEERING
FEASIBILITY OF CONSERVING ENERGY THROUGH EDUCATION: THE CASE OF UGANDA AS A DEVELOPING COUNTRY
A. Sendegeya, Department of Electrical Engineering, Makerere University, Uganda
E. Lugujjo, Department of Electrical Engineering, Makerere University, Uganda
I. P. Da Silva, Department of Electrical Engineering, Makerere University, Uganda
M. Amelin, Department of Electric Power Engineering, KTH, Stockholm, Sweden
ABSTRACT This paper discusses the possibility of supporting energy conservation programs and the dissemination of sustainable energy technologies in a developing nation (Uganda) through education. This can be done through both formal and informal education. Training the young generation on different energy technologies, forms of energy, sustainable use of energy, energy saving methods, energy reserves management, and energy and environment can have a positive impact on national energy security. Energy resources and the status of formal and informal education in Uganda are presented. The paper also analyses the role of local leaders in promoting energy conservation programs. It finally shows that in order to change the mentality of a whole nation regarding energy conservation, there should be a nationwide campaign involving all stakeholders including the government, NGOs, institutions and the media. Keywords: Energy; Conversion; Conservation; Education
1.0 INTRODUCTION There is a direct linkage between energy and development. Education is vital for the dissemination of any technology. Thus, both energy and education play a significant role in development. In developing countries like Uganda, there is a need to promote energy conservation programs and to reduce overdependence on wood fuel (commonly burnt in inefficient wood fuel stoves) and imported fossil fuels (such as petroleum).
Uganda is highly endowed with a variety of renewable energy sources, yet little has been done to utilise and develop these resources in an appropriate and sustainable manner. The government of Uganda is concerned about the above mentioned energy situation [1], [2]. Energy conservation is neither taught in the Ugandan formal education system nor disseminated through the media. The consequences of this are the lack of concern for the conservation of the country's scarce energy resources and a disregard for environmental protection.
1.1 Energy Sources and Utilisation in Uganda
Uganda is a country endowed with a variety of renewable energy resources, which include:
- Biomass (wood fuels in the form of firewood and charcoal, and plant residues in the form of saw dust, coffee and rice husks, nut shells, sugarcane bagasse, etc.)
- Solar, with a potential for both photovoltaic (PV) and thermal applications
- Hydropower, both small scale and large scale hydropower generation schemes
Petroleum is the only fossil fuel consumed in the country and it is imported, though the country has unexploited oil reserves. The sectors in which petroleum products are consumed include transport, commerce, industry and electric power generation. Biomass in the form of firewood and charcoal dominates the thermal energy applications for both the domestic and commercial sectors. At a conservative estimate, fuel wood constitutes over 90% of the energy used in the country. The conversion of biomass energy in the domestic sector takes place in inefficient three-stone stoves and metallic charcoal stoves. The efficiency of these types of stoves ranges between 5% and 10%. To tackle the problem of the extensive use of these inefficient biomass stoves, the government together with NGOs started a massive campaign to develop and disseminate energy saving biomass stoves. It has been done through training local artisans, local cooperatives and organisations such as women's groups. This paper suggests that such programs should be extended to schools and training institutions.
Another unfortunate thing is that only a small percentage of agricultural residues is used to produce energy. For instance, the use of bagasse to generate power occurs solely in the two sugar factories in the country. These industries generate power just to supplement their own power demands. Uganda has an average insolation of 5 kWh/m2/day. This insolation ranks among the highest in the world. Despite this, the solar thermal and PV sectors are still incipient due to the fact that few Ugandans can afford the initial investment required to acquire these technologies. The grid electricity used in the country is mainly generated from the sole hydropower station (with a potential of 300 MW) situated at the source of the Nile. There are other small generation stations with a total generation capacity of 17 MW. The use of fossil-fuelled decentralised generator sets is common, especially in areas remote from the grid and among grid-connected dwellers who use them during load shedding. Uganda is a signatory to the Kyoto Protocol. In order for the government to show its commitment to comply with the international trend of CO2 emission reduction, it is willing to support energy conservation and efficient utilisation programs.
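Two of the figures above, the 5 - 10% stove efficiency and the 5 kWh/m2/day insolation, translate directly into everyday quantities. The sketch below makes that translation; the firewood heating value, the improved-stove efficiency and the PV module efficiency are assumed illustrative values, not data from this paper.

# Illustrative calculations from the figures quoted above.
# Firewood LHV, improved-stove efficiency and PV efficiency are assumed values.
WOOD_LHV_MJ_PER_KG = 15.0        # assumed heating value of air-dry firewood
TRADITIONAL_STOVE_EFF = 0.07     # mid-range of the 5 - 10% quoted above
IMPROVED_STOVE_EFF = 0.25        # assumed efficiency of an improved stove
INSOLATION_KWH_PER_M2_DAY = 5.0  # from the text
PV_MODULE_EFF = 0.12             # assumed module efficiency including system losses

# Useful cooking energy delivered per kg of firewood.
useful_traditional = WOOD_LHV_MJ_PER_KG * TRADITIONAL_STOVE_EFF
useful_improved = WOOD_LHV_MJ_PER_KG * IMPROVED_STOVE_EFF
print(f"Useful heat per kg of wood: {useful_traditional:.1f} MJ (three-stone stove) "
      f"vs {useful_improved:.1f} MJ (improved stove)")
print(f"Wood saving for the same cooking task: "
      f"~{(1 - TRADITIONAL_STOVE_EFF / IMPROVED_STOVE_EFF):.0%}")

# Daily electrical yield of one square metre of PV at the quoted insolation.
pv_daily_kwh = INSOLATION_KWH_PER_M2_DAY * PV_MODULE_EFF
print(f"PV yield: ~{pv_daily_kwh:.2f} kWh per m2 per day")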
1.2 Energy Conservation
Energy is defined as the capacity to accomplish work, and the word conservation refers to the preservation of available natural resources (such as energy resources). The preservation can be achieved through proper management of the resources, especially during conversion, delivery and utilisation. Therefore the term Energy Conservation means the management of energy resources for the benefit of future generations. External factors such as environmental degradation or improvement and the benefits involved must be considered. Energy can easily be conserved when conversion efficiency is improved and/or energy wastage is minimised or avoided. Figure 1 illustrates the energy conservation concept, including the technologies and the human factor. The demand should be satisfied according to equation 1:

E_O,i = E_in,i × η_C,i × η_D,i ≥ E_L,i,    i = 1, 2, 3, ...    (1)

where E_in,i is the primary (fuel) energy input to conversion chain i, η_C,i and η_D,i are the conversion and delivery efficiencies, E_O,i is the delivered energy and E_L,i is the demand (load). The idea of energy conservation is to use as little energy as possible to satisfy a need (e.g. lighting, heating, motion etc.). Minimising the fuel input E_in,i for the same delivered consumption E_L,i can be achieved either through improving the conversion efficiency (to reduce system losses) or through educating the consumers on energy saving techniques, e.g. demand side management. It often happens that an initial investment in more efficient technologies is paid back in the saving on fuel and also in increased production capacity.
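Equation (1) can be turned around to show how much primary fuel a given demand requires and how much is saved when the conversion or delivery efficiency improves. The numerical values in the sketch below are illustrative assumptions, not measurements.

# Primary energy required for a given demand, from Eq. (1): E_O = E_in * eta_C * eta_D >= E_L.
# The demand and efficiency values used here are illustrative assumptions.
def primary_energy_required(demand_kwh: float, eta_conversion: float, eta_delivery: float) -> float:
    """Fuel input E_in needed so that the delivered energy just meets the demand E_L."""
    if not (0 < eta_conversion <= 1 and 0 < eta_delivery <= 1):
        raise ValueError("efficiencies must be in (0, 1]")
    return demand_kwh / (eta_conversion * eta_delivery)

demand = 100.0                                   # kWh of useful energy (assumed)
before = primary_energy_required(demand, eta_conversion=0.10, eta_delivery=0.90)
after = primary_energy_required(demand, eta_conversion=0.30, eta_delivery=0.90)

print(f"Primary energy before improvement: {before:6.0f} kWh")
print(f"Primary energy after improvement:  {after:6.0f} kWh")
print(f"Fuel saved by the efficiency gain: {before - after:6.0f} kWh ({(before - after)/before:.0%})")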
[Figure 1: block diagram of the energy conservation concept. Primary Energy (fuel sources) → Secondary Energy (conversion and delivery stages, where conservation techniques are needed in conversion, transmission and distribution) → Tertiary Energy or Demand (the human factor, where education on energy saving techniques is to be emphasised). For each energy source i the chain is Energy Source (E_in,i) → Energy Conversion System (η_C,i) → Delivery (η_D,i) → delivered energy E_O,i = E_in,i × η_C,i × η_D,i supplying the load E_L,i, i = 1, 2, 3, ...]
Fig. 1: Energy conservation through improving technologies (i.e. improving efficiency) and human factors (education)
1.3 Formal Education System in Uganda
The system consists of primary, secondary and post-secondary or tertiary education (universities and colleges) [8]. All schools and colleges use the same curricula developed by the National Curriculum Development Centre (NCDC) and are examined by one body, the Uganda National Examinations Board (UNEB). The curriculum specifies the knowledge and skills that learners should acquire at different levels. The education structure is summarised in Figure 2. The schematic diagram shows that it is possible for students to join the work force via different levels of education. At the same time, upgrading to different levels is possible either directly or indirectly after gaining working experience. This indicates that there are opportunities to train people on energy technologies, applications and conservation at all levels using the existing education structure. Besides, pupils and students are good ambassadors to deliver energy related knowledge and skills to people outside the formal education system.
[Figure 2: flow diagram of the formal education structure — Primary Education (attended for 7 years) → "O" Level education (attended for 4 years) → "A" Level education (attended for 2 years) → University Education, with Junior Technical Schools, Technical and Vocational Institutions and Polytechnics, and Technical and Teacher Colleges as parallel routes, and exits to Employment from several levels]
Fig. 2: Schematic diagram showing the formal education structure in Uganda.
2.0 INTRODUCTION OF ENERGY CONSERVATION PROGRAMS INTO THE FORMAL EDUCATION SYSTEMS
In most cases formal education is referred to as simply "education". This education setup is very useful to disseminate knowledge on energy. Any attempt to sensitise people on a nationwide basis has to make use of this existing structure.
2.1 Primary School Level
Children are expected to attend this level for seven years. In 1997 the government introduced free primary education (Universal Primary Education, UPE); this radically increased school enrolment from 3.6 million to 7 million in a period of 5 years. It is unfortunate that currently about 30% of the working population has no formal education.
The primary school syllabus comprises four examinable subjects, namely Mathematics, Science, Social Studies and English language. According to the contents of these subjects, energy could feature in Science and/or Social Studies. In most rural areas of Uganda, women and children play a significant role in domestic energy consumption. This group of people spends long hours searching for firewood and cooking in unhealthy, smoky kitchens. This is an opportunity for pupils to apply in real life the knowledge acquired at school. The children can be taught basic energy conservation techniques such as designing and making low-smoke, energy saving firewood stoves. Generally the areas that should be considered when training these pupils include:
- Schools should organise practical exercises about energy technologies. Demonstrations must focus on the technologies that can be used to develop local energy sources to address local energy needs.
- Schools must organise field trips and excursions to practical systems, especially where appropriate technology is applied in using local renewable energy technologies to solve local problems. Examples of such systems are solar energy systems, biogas, small hydro and energy saving stoves.
- Pupils should be able to internalise concepts such as energy conversion, delivery, storage and consumption.
Imparting knowledge to the young generation is an intelligent way of planning for the future.
2.2 Secondary School Level
About 50% of the children leaving the primary level join post-primary education (secondary schools) and training institutions. Secondary education is divided into two levels: ordinary level ("O" Level) and advanced level ("A" Level), normally attended for four and two years respectively. After "O" Level, candidates graduate with the Uganda Certificate of Education (UCE) and those from "A" Level are awarded the Uganda Advanced Certificate of Education (UACE).
At "O" Level students do ten to sixteen subjects. The compulsory subjects, which are done by all students, are English language, Mathematics, Chemistry, Physics, Biology, Geography and History. Other optional subjects include Health Science, Agriculture, Commerce, Accounts, Fine Art and languages (French, German, Swahili and local languages). Some schools offer technical/vocational subjects such as technical drawing, metal work and fabrication, carpentry and joinery, electricity and electronics, stenography, tailoring etc.
Energy basics do feature in the "O" Level education structure. They are part of Physics, Chemistry, Biology and other vocational subjects like Electricity and Electronics, and Power and Energy. Nonetheless, the topic is taught theoretically and somewhat detached from everyday life. To tackle energy education at this level, all concepts introduced at primary level should be reinforced by laboratory experiments and by relating the teaching to real life problems. In the case of vocational training, students at this level should be trained in energy conversion, utilisation and conservation measures. After four years of "O" Level education some students join "A" Level, while others join technical institutions and colleges to acquire various practical skills. Students who join "A" Level choose three to four subjects (including vocational subjects) from the "O" Level. Energy could be introduced as one of the four subjects at this level. Science fairs and exhibitions by schools should be encouraged to promote energy related practical ideas. This can be done at local, regional or national level.
2.3 Junior Technical School Level
At these schools students attain professional qualifications (practical skills) that enable them to join the work force. In junior technical schools, students study craft courses leading to the award of the Uganda Junior Certificate. The acquired skills are in carpentry and joinery, brick laying and concrete practice, motor vehicle maintenance, agricultural mechanics, tailoring and cutting, electrical installation, plumbing, painting and decoration, mechanical engineering, and pottery and ceramics. All these courses are practical and there is a linkage (either direct or indirect) between them and the use of energy. Today most of the practical activities require efficient use of energy in different forms such as electricity and petroleum products. Therefore it is vital to demonstrate to students various sustainable ways of improving energy efficiency in production processes. Students must understand that reducing energy use is reflected in the reduction of energy bills and hence a more profitable production process. There is a need to revise the curricula of these institutions in order to include energy related programs. The programs could deal with topics such as:
• The global and national energy crisis
• Depleting energy resources and the linkage among energy conversion, delivery, utilisation, environment and development
• Energy saving techniques at all stages of energy transfer
• Planning, designing, installing, maintaining and operating simple energy systems (like solar systems)
• Energy economics and auditing
2.4 Tertiary Education Level
Institutions and Colleges
At present in Uganda there are no professionally qualified technicians to deal with energy related problems. Even electrical installation technicians are not trained in energy management. This has created a gap in society hindering the promotion of new technologies such as solar PV. For instance, one cannot feel comfortable buying a solar home system when there are no technical personnel who can handle the installation and maintenance. The paper proposes two possible ways of bridging this gap: one is a short term plan, to organise tailor-made courses on energy related issues for in-service technicians. The second approach is a long term plan which may need a revision of the existing curriculum to introduce energy related topics and/or an energy course. The energy course should focus more on the study of local energy resources such as solar, biomass and hydro. It would lead to a certificate or diploma in Energy. In order to address this issue, the Ministry of Energy and Mineral Development in collaboration with the Ministry of Education and Sports, with assistance from GTZ, has developed a solar photovoltaic installation course for technicians. The course has been introduced in the curricula of technical institutions and it will be examined by the national examinations board with effect from 2007. Training of trainers workshops were organised for the tutors of these institutions.
University Level
Uganda has both private and public universities. Though the activities in these universities are directly monitored by the ministry of education, there is no accredited body responsible for reviewing and approving their syllabi. After graduation, students from universities join the practical work force and eventually become policy makers. Introducing energy programs in the curricula of universities can directly contribute to the long term development of sustainable energy programmes. Therefore it is a challenge to the administrators and professors in university academia to introduce energy related programmes in the university syllabi. Examples of university programs where energy related courses can be introduced include engineering, architecture, agriculture, science and education. The lack of special energy programs in the curricula of universities is not only a Ugandan problem; there are few universities in Africa where special energy programs are available. A good example of an African university where a postgraduate course in renewable energy is offered is Harare University, in Zimbabwe. Other countries in Africa should take Zimbabwe as a model and introduce special programmes focusing on sustainable development and use of the continent's abundant energy resources. In order to build capacity in energy research it is necessary for the universities to develop energy programs and courses at undergraduate level. This can be done either by introducing special research topics on energy related issues in the existing curricula or by developing special energy programs similar to that in Zimbabwe. Examples of areas which need consideration include energy policy, renewable energy, environmental technologies for sustainable development, pollution and its control, energy conservation and management, energy planning, economics and audit. There is a need to encourage students to get involved in research projects on energy related topics. An effective mechanism for the diffusion of energy related research and independent study among undergraduate students can play a significant role in the sustainable development and use of energy.
The universities should solicit funds from donors, governments, industries and other related organisations to promote research at graduate level. Innovative research which encourages students to develop new technologies and improve existing technologies can be introduced. These research activities should focus on topics such as:
• energy policy; energy development, use and management;
• energy conservation techniques, e.g. demand side management;
• new and renewable energy technologies;
• the clean development mechanism;
• energy economics and auditing;
• pollution from energy conversion and usage, plus other environmental issues related to energy extraction, generation, transmission and distribution.
The teaching should focus on the theories and methods required for students to understand, analyse and perhaps influence patterns of energy consumption. At this level it is necessary to show the relation between energy consumption and the production of goods.
Teaching and Research at the Faculty of Technology (Makerere University)
In order to address the issue of capacity building in the area of training and research, the Faculty of Technology, with assistance from the Private Sector Foundation, has started a Centre for Research in Energy and Energy Conservation (CREEC). The centre is focusing on research related to energy development, policy and conservation. Also, in order to enhance research and training in appropriate technology (where energy research is an element), the office of the Dean has set up another centre (Technology Development and Transfer Centre, TD&TC). This centre deals with practical matters related to all kinds of appropriate technologies. There are very few energy related courses in the current engineering syllabus at the faculty. For instance, in the department of Electrical Engineering the only elective courses offered are Energy Conversion and Energy Utilisation. Even in these courses not all energy related issues are tackled. The department is trying to introduce more energy content into the existing courses. The department is also investigating the possibility of a specialised course focusing only on the energy issues discussed above. At the faculty, some students (at both undergraduate and graduate levels) are involved in academic research projects in the field of energy. With assistance from international donors such as Sida/SAREC and NORAD, the faculty has started research programs, mainly at postgraduate level, on this topic. There are various PhD research programs in the field of energy which are carried out by members of staff in the departments of electrical and mechanical engineering. This is a strategic long term plan to intensify energy programs in the faculty and in the university in general.
3.0 INTRODUCTION OF ENERGY CONSERVATION INTO THE INFORMAL EDUCATION SYSTEM
Informal education is a way of delivering knowledge in an informal manner, such as through advertisements, cultural channels (e.g. poems), etc. The government of Uganda has started educating the public on issues of energy conservation using an informal approach. For instance, the Ministry of Energy, with assistance from GTZ, has started a nationwide awareness campaign sensitising the public on the benefits of saving energy and on different energy saving methods. This is done by printing pamphlets and brochures and by using the media, such as broadcasting programs on radio and television, advertisements in newspapers, billboards and street posters. The sensitization programs target the domestic, commercial and industrial sectors.
3.1 Short Courses
Short courses provide a low-cost option by which members of the public are able to increase their knowledge and skills on sustainable energy related issues such as energy conservation. These could be tailor-made courses organised for people already in service. The training programs of the courses could include lecture series, class exercises, case studies and fieldwork. The first priority programs should be training of trainers, targeting teachers at all levels. Initially this exercise may require experts, e.g. foreign consultants or fieldworkers with experience in the dissemination of energy programs. Besides teachers, short courses can be organised for other technical personnel already in service. These include industrialists, government officials, fieldworkers, agriculturalists etc.
3.2 Involvement of Stakeholders
In this case stakeholders are consulted to seek their advice and comments on energy training programs. Besides consultation, stakeholders can work as a bridge in delivering the relevant energy related programs to the public. Brainstorming workshops and seminars can be organised for a selected group to ensure that effective programs are included in the education system. These seminars and workshops should be organised by the relevant government ministries (e.g. the Ministry of Energy and Mineral Development) and academic institutions. The key stakeholders are academic staff from tertiary institutions, government officials, the media, industrialists, local leaders and the private sector. For instance, through training local leaders the message can reach the public because, administratively, Uganda has adopted a decentralised system of governance. An innovative and co-operative effort by the government to involve practicing engineers in the country in energy conservation should be emphasised. The government should emphasise energy education in the energy policy and take the initiative to educate the public about this policy. This can be done by translating it into an understandable format using local languages.
4.0 CONCLUSION
The paper has outlined the possibility of implementing energy conservation programs through educating the public. For long term planning, energy education programs should start with lower schools and continue through tertiary institutions. For short term planning, courses should be organised for people already in the work force, such as technical personnel working in both the public and private sectors. In academic and research institutions, innovative and competitive research must be emphasised. Though the paper focuses on the Ugandan education system as a case study, the knowledge can be adapted to the education systems of other developing countries. Depending on the education system of a nation, energy education programs can be adapted to achieve long term sustainable energy development and utilisation.
5.0 ACKNOWLEDGEMENT
The authors would like to extend their sincere appreciation to those people who provided information that enabled them to compile this paper. Special thanks to Sida/SAREC and Makerere University (School of Graduate Studies) for accepting to fund the research and for availing all the financial support that has enabled the principal author to write the paper.
PLASTIC SOLAR CELLS: AN AFFORDABLE ELECTRICITY GENERATION TECHNOLOGY
Z. Chiguvare, Renewable Energy Programme, Department of Mechanical Engineering, University of Zimbabwe, P.O. Box MP 167, Mt. Pleasant, Harare, Zimbabwe.
ABSTRACT
Polymer photovoltaics has become a very interesting area of research given the success of polymer electronic devices like displays, field effect transistors and light emitting diodes. We developed bulk heterojunction polymer-fullerene solar cells, with a blend of poly(3-hexylthiophene) and a soluble fullerene derivative, [6,6]-phenyl-C61 butyric acid methyl ester (P3HT:PCBM), as the active layer, that yielded above 2.5 % energy conversion efficiency at standard test conditions. The solar cells were characterized by analyzing current-voltage characteristics at various temperatures, from 80 to 400 K, and measuring their external quantum efficiencies at room temperature. We discuss the origin of the observed open circuit voltage and estimate that the upper limit of the open circuit voltage for P3HT:PCBM based polymer heterojunction solar cells is 1.2 V. The highest current density measured was 8.5 mA/cm², while the energy conversion efficiency was 2.71 % for 100 mW/cm² white light intensity, AM 1.5 spectrum, at 300 K. A maximum external quantum efficiency of 65 % was also determined for monochromatic light of wavelength 550 nm. Some of the major challenges in efficiency improvement of such solar cells are discussed.
Keywords: Polythiophene; Fullerene; Bulk heterojunctions; Solar cells; Temperature dependence; Light intensity dependence; Quantum efficiency.
1.0 INTRODUCTION
The widespread use of solar photovoltaic power has been elusive because it can be difficult and costly to manufacture commercial photovoltaic cells, which are made of inorganic crystals such as silicon. A novel approach to generating electricity from solar energy is that of using organic polymer materials, which can be processed as easily as plastics, as light absorbers. Unlike today's semiconductor-based photovoltaic devices, plastic solar cells can be manufactured in solution in a beaker without the need for clean rooms. They do not require the high deposition temperatures or complex processing required in inorganic devices, and they can be deposited onto large flexible substrates. This provides design options that could lower the cost of using the cells. Unfortunately, lagging energy conversion efficiencies have held their application back. The best organic cells convert a little more than 2 percent of sunlight into electric current, while commercial inorganic cells reach efficiencies of 20 percent. Inorganic cells, such as those based on silicon or on thin films of CdTe or Cu(In,Ga)Se2, can display efficiencies greater than 15 percent. Poly(3-hexylthiophene) (P3HT) and similar plastic semiconductors are currently a hot area of research in solar cell technology (Brabec et al 2003), but by themselves these
plastics achieve very low light-conversion efficiencies. This has stimulated further investigations of new cell structures such as interpenetrating networks of donor and acceptor-type materials. In this article a description of the development of an organic solar cell based on interpenetrating networks of P3HT and PCBM, a soluble fullerene derivative, is given. The measured temperature and illumination dependent J-V characteristic curves as well as calculated efficiencies and fill factors are presented. Further, the general tendencies are discussed and recommendations on further improvement are suggested.
2.0 THEORY
The efficiency of solar cells depends on their capability for the absorption of photons, charge carrier generation, separation and transport to the electrodes. Interpenetrating conjugated polymer-Fullerene (donor-acceptor) networks, also referred to as bulk heterojunctions, are a very promising approach for the improvement of efficiency of polymer solar cells. Photovoltaic devices based on these interpenetrating networks provide increased charge carrier-generating interfaces, as compared to bi-layer photovoltaic devices. A simplified schematic of the principle of operation of a solar cell based on interpenetrating networks of donor (polymer) and acceptor (fullerene) is shown in Fig. 1.
[Figure: light incident on an indium tin oxide electrode / active layer / aluminium electrode stack, with exciton creation, D-A electron transfer and losses indicated]
Fig. 1: A simplified diagram showing the photovoltaic effect in a bulk heterojunction solar cell based on a conjugated organic polymer absorber material.
The general scheme of the charge carrier generation processes in non-degenerate conjugated polymers (without acceptor) can be described as follows: the mobile charge carriers responsible for the photocurrent are produced as a result of the dissociation of primarily generated singlet excitons due to interchain interaction, the presence of oxygen (Antoniadis et al 1994, Barth et al 1997, Barth and Bassler 1997) or impurities, or a Schottky interface at metal electrodes. Nevertheless, the charge carrier generation yield remains
low, since other competitive processes, for example photoluminescence and non-radiative recombination, also occur. The charge carrier generation yield can be enhanced by the presence of a strong acceptor species, such as the C60 molecule (Sariciftci 1995). The process of charge separation in polymer:fullerene composites is ultra fast, and can occur within 40 fs in PPV:PCBM composites (Brabec et al 2001), whereas the electron back transfer is much slower (Brabec et al 1995). This results in the effective formation of a metastable charge-separated state. The photoinduced charge transfer is dependent upon the electronic overlap of the donor (D)-acceptor (A) pair of molecules. A simple scheme for the electron transfer mechanism is as follows: first the donor is excited; the excitation is delocalised on the D-A complex before charge transfer is initiated, leading to an ion radical pair; and finally the charge separation can be stabilised, possibly by carrier delocalisation on the D+ (or A-) species by structural relaxation (Sariciftci 1995). Symmetrical electrodes on such a film produce no voltage (Chiguvare, 2005); therefore there is a need to select asymmetrical electrodes in such a way as to provide minimal resistance to the collection of the generated charge carriers. For efficient charge collection from the absorber layer to an external circuit, both the positive and negative electrodes must form ohmic contacts with the donor and acceptor networks, respectively. If this is not the case, charge collection would be limited depending on the nature of the potential barriers built up at the contacts. The negative electrode must form an ohmic contact with the electron transport level of the blend, typically the lowest unoccupied molecular orbital (LUMO) of PCBM. The positive electrode must first filter the charge carriers, i.e., block the passage of electrons and allow only holes to travel through it. In this context PEDOT:PSS must form an ohmic contact with the hole transport level of the blend, which is typically the highest occupied molecular orbital (HOMO) of the polymer. The (PEDOT:PSS)/(P3HT:PCBM) interface is therefore the charge carrier separating interface, while the P3HT/PCBM heterojunctions provide exciton dissociating interfaces, which on their own cannot generate a voltage. The ITO is needed since it is easier to contact with external wires than the PEDOT:PSS, from our experimental point of view. The energetic picture of the operation of a P3HT:PCBM solar cell is shown in Fig. 2. We observed that the PCBM also absorbs light and provides an alternative path of exciting the blend. In this case electron transfer occurs from the P3HT LUMO to the PCBM LUMO. In either case, the result is the formation of a positive polymer radical and a negative PCBM radical. However, although the ITO/PEDOT:PSS interface is ohmic, a small loss of Voc may be observed due to band bending which may unavoidably occur at that interface.
3.0 MATERIALS AND METHODS
P3HT is a conjugated polymer with an absorption spectrum that shows an onset at around 2.14 eV. This amount of energy can be transferred to the electrons forming the covalent bonds by several mechanisms, one of which is visible and ultraviolet light. It is therefore possible to dislodge an electron from a double bond by absorption of sunlight. It has been shown that a high electron affinity species such as PCBM in the neighbourhood may attract the dislodged electron within 40 fs, and hence create a metastable charge separated state. Electron transfer was shown to occur in the case of the developed solar cell by strong quenching of the photoluminescence of P3HT in the presence
of PCBM. Fig.3 shows the formulae of the materials used as donor and acceptor in the developed bulk heterojunction polymer solar cell.
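As a quick orientation for the numbers quoted above, the 2.14 eV absorption onset can be converted to a vacuum wavelength using lambda = hc/E (with hc of roughly 1239.84 eV nm); a minimal sketch:

```python
# Convert the quoted P3HT absorption onset (~2.14 eV) to a wavelength.
E_onset_eV = 2.14
hc_eV_nm = 1239.84                     # Planck constant times speed of light
wavelength_nm = hc_eV_nm / E_onset_eV
print("Onset wavelength ~ %.0f nm" % wavelength_nm)   # about 580 nm
```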
[Figure: energy level diagram on a scale from about -2.5 to -6.0 eV showing the LUMO and HOMO levels of P3HT and PCBM together with the ITO, PEDOT:PSS and Al electrode levels; the layers are labelled positive electrode, hole transport layer, active layer (polymer-fullerene blend) and negative electrode]
Fig. 2: Simplified energetic diagram of the photovoltaic process in an organic solar cell: absorbed photons with energy hv > 2.14 eV excite electrons from the HOMO of P3HT (about 5.14 eV) into the LUMO of P3HT, and the electrons are then transferred to the LUMO of PCBM (about 4.2 eV), from which they can be collected by a negative Al electrode with workfunction of about 4.28 eV. Holes are collected, via PEDOT:PSS of workfunction about 5.0 eV, by the positive ITO electrode with workfunction about 4.8 eV. The red arrows indicate an alternative path of photo-induced electron transfer.
ITO/PEDOT:PSS/P3HT:PCBM/Al heterojunction solar cells were prepared in the nitrogen atmosphere of a glove box and characterised. First, a thin layer of polyethylenedioxythiophene doped with polystyrene-sulfonic acid (PEDOT:PSS) (Baytron P, Bayer AG, Germany) was spin coated on patterned, clean ITO coated glass substrates in order to smoothen the surface of the ITO and hence avoid possible short circuits due to the spiky roughness of the ITO surface. PEDOT:PSS is known as a good hole transport material, and assures better hole collection from the active layer onto the ITO electrode, especially if its Fermi level lies between the workfunction of ITO and the HOMO level of the polymer. An active layer consisting of a mixture of P3HT:PCBM in a 1:1 mass ratio, dissolved in chloroform (5 g/ml), was then spin coated on top of the dry PEDOT:PSS film. Finally, 100 nm Al contacts were deposited on the active layer by thermal evaporation at low rate in a high vacuum of better than 1x10^-6 mbar in all cases. The device structure is shown schematically in Fig. 4.
[Figure: chemical structures of poly(3-hexylthiophene-2,5-diyl) (P3HT) and [6,6]-phenyl-C61 butyric acid methyl ester (PCBM), with photoexcitation hv indicated]
Fig. 3 Formulae of the materials used as donor and acceptor in the developed bulk heterojunction polymer-fullerene solar cell. Light of sufficient energy excites both P3HT and PCBM and ultra fast electron transfer to PCBM occurs.
[Figure: layer stack of the device - aluminium electrode / P3HT:PCBM blend (1:1) / PEDOT:PSS / indium tin oxide on a glass substrate, illuminated through the glass]
Fig. 4: Device structure of a P3HT-PCBM solar cell
After fabrication the solar cells were thermally annealed on a hot plate at 120 °C in a nitrogen glove box for 2 minutes. Temperature and illumination dependent current-voltage characteristics were obtained by utilising an Advantest Source Measure Unit (Advantest TR 6143), with the solar cell placed in a liquid-nitrogen-cooled cryostat at better than 10^-5 mbar vacuum. A 150 W xenon lamp (Osram XBO 150W/XBR) was used as the illumination source, with a water filter placed in the light path to approximate the AM1.5 solar spectrum. The light intensity reaching the device inside the cryostat was calibrated to 100 mW/cm² and neutral filters were used to vary the intensity. Quantum efficiency measurements were performed under normal atmosphere at room temperature.
4.0 RESULTS AND DISCUSSION
Typical current density-voltage characteristic curves of an ITO/PEDOT:PSS/P3HT:PCBM/Al solar cell have been plotted in a semi-logarithmic representation in Fig. 5 at different light intensities from 0.1 mW/cm² to 100 mW/cm² at T = 300 K. The dark J-V characteristic curves at room temperature show diode-like behaviour with a rectification factor of about 3.5 x 10³ at ±1.2 V at 300 K. The rectification is not due to the
presence of a space charge region as in a p-n junction (or, alternatively, to a Schottky contact), but possibly due to the workfunction difference of the two different electrode materials as well as the different mobilities of electrons and holes within the bicontinuous interpenetrating network of the two components. Under illumination, the rectification ratio decreases from 400 at 0.1 mW/cm² to 16 at 100 mW/cm². This is a common feature of polymer based solar cells, but the underlying reasons remain unclear. We suggest that this is due to photoconductivity, where light increases the number of charge carriers participating in the conduction of current.
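The rectification factor quoted above is simply the ratio of the current densities at equal forward and reverse bias. A small sketch of that definition, using a toy dark J-V curve rather than the measured data:

```python
import numpy as np

def rectification_ratio(V, J, V0=1.2):
    """|J(+V0)| / |J(-V0)| from a J-V curve; V must be in ascending order."""
    return abs(np.interp(+V0, V, J)) / abs(np.interp(-V0, V, J))

# Toy dark curve only, to illustrate the definition (not measured data).
V = np.linspace(-1.2, 1.2, 241)
J = 1e-4 * (np.exp(V / 0.15) - 1.0) + 2e-5 * V      # mA/cm^2
print("rectification ratio at +/-1.2 V: %.0f" % rectification_ratio(V, J))
```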
Fig. 5: Illumination dependent current density-voltage characteristics of a typical ITO/PEDOT:PSS/P3HT:PCBM/Al solar cell at 300 K.
For this cell, the values obtained for the main parameters, namely the power conversion efficiency η, open-circuit voltage Voc, short-circuit current density Jsc, and fill factor FF, at different temperatures are shown in Table 1. All the values indicated were obtained at 100 mW/cm² white light intensity.
Table 1: Output characteristics of a typical ITO/PEDOT:PSS/P3HT:PCBM/Al solar cell.
Temperature (K)   Jsc (mA/cm²)   Voc (V)   FF (%)   η (%)
80                1.56           0.72      18.27    0.21
100               2.31           0.70      19.39    0.31
120               2.76           0.69      20.07    0.38
140               3.75           0.68      21.54    0.55
160               4.50           0.67      22.98    0.69
180               5.48           0.67      25.10    0.92
200               6.33           0.66      28.38    1.19
220               7.41           0.65      35.95    1.73
240               7.83           0.62      41.76    2.03
260               8.04           0.61      45.38    2.23
280               8.12           0.60      49.81    2.43
300               8.50           0.59      53.98    2.71
340               8.41           0.57      57.93    2.78
360               8.26           0.56      58.73    2.72
380               8.12           0.55      58.19    2.60
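The tabulated efficiencies are consistent with the usual relation η = Jsc · Voc · FF / Pin. As a cross-check, a minimal sketch using the 300 K row of Table 1:

```python
# Cross-check of one Table 1 row: eta = Jsc * Voc * FF / P_in
Jsc  = 8.50e-3    # A/cm^2  (8.50 mA/cm^2 at 300 K)
Voc  = 0.59       # V
FF   = 0.5398     # fill factor (53.98 %)
P_in = 100e-3     # W/cm^2  (100 mW/cm^2 white light)

eta = Jsc * Voc * FF / P_in
print("eta = %.2f %%" % (100 * eta))   # ~2.71 %, matching the table
```

The same check can be applied to any other row of the table.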
Figure 6 shows the dependence of the short circuit current density on temperature, for different illumination intensities. We observed an increase of short circuit current density with temperature, which tends to saturate towards some maximum value around 300 K and then remains constant. The Jsc, being thermally activated at low temperatures, saturates and becomes nearly temperature independent at elevated temperatures, clearly indicating that the charge carriers traverse the active layer without significant losses. The short circuit current density increases almost linearly with light intensity, for all the intensities considered in the experiment. This suggests that concentrated light may give higher current, and could be considered as a way of harvesting more electric power from such solar cells. Figure 7 shows the dependence of the open circuit voltage on temperature, for different incident light intensities. The open circuit voltage of a solar cell based on ITO/PEDOT:PSS/P3HT:PCBM/Al decreases almost linearly when the temperature is increased from 80 K to 380 K. The linear decrease with increase in temperature is consistent with inorganic solar cell theory. Voc also increases with incident light intensity, and tends to saturate at high intensities: the curves of Fig. 7 are closer together for high intensities. However, for all the intensities considered, the Voc increases, and a maximum may be reached at intensities higher than 100 mW/cm². The highest recorded open circuit voltage value of 0.72 V was obtained at 80 K, at 100 mW/cm² white light, normal incidence. We suggest that the open circuit voltage is dependent on the energetic levels of p and n dopants and on their concentrations within the donor and acceptor materials. PCBM introduces a transport level in the band gap of P3HT in the same fashion as an n dopant does for inorganic semiconductors. PSS, on the other hand, introduces a transport level in PEDOT, making it a p-type material. The junction formed should therefore be expected to behave like a p-n junction, and the open circuit voltage is determined by the transport level of the n-type material (P3HT:PCBM) and the transport level of the p-type material (PEDOT:PSS), which both tend to pin the electrode work-functions to their respective transport levels. The upper limit of the open circuit voltage is determined by the energetic difference between the electron affinity of PEDOT:PSS and the LUMO of PCBM, which gives about 1.2 V. This can only be obtained at zero Kelvin. Extrapolation of the 100 mW/cm² curve of Fig. 7 yields a 0.76 V open circuit voltage at zero K. The results also indicate an increase of the energy conversion efficiency with increase in temperature (see Table 1), which saturates at about 280 K, remains constant, and is followed by a decrease after 380 K. This suggests an optimum operating temperature range from about 280 K to 380 K. The highest efficiencies were recorded at 3 mW/cm² white light intensity for all temperatures, suggesting that at higher intensities, although the charge carriers are generated, the material has poor transport properties and cannot transport them effectively. The highest achieved efficiency at 300 K, 100 mW/cm² white light intensity for our devices was 3.1%. A maximum external quantum efficiency of 65 % was also determined for monochromatic light of wavelength 550 nm. We believe that there is still scope for improvement of the efficiency of P3HT:PCBM based solar cells. Siemens Solar reported an achievement of 5 % efficiency for similar cells in December 2003.
If repeatable on an industrial scale, then the cells will be slowly
introduced in small consumer electronic gadgets such as calculators, mobile phone battery chargers, watches, etc., in the near future.
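The 0.76 V value quoted above for the zero-kelvin extrapolation can be reproduced by a straight-line fit to the Voc column of Table 1 (all values at 100 mW/cm²); a brief sketch:

```python
import numpy as np

# Linear fit of Voc(T) from Table 1, extrapolated to 0 K.
T = np.array([80, 100, 120, 140, 160, 180, 200, 220,
              240, 260, 280, 300, 340, 360, 380], dtype=float)      # K
Voc = np.array([0.72, 0.70, 0.69, 0.68, 0.67, 0.67, 0.66, 0.65,
                0.62, 0.61, 0.60, 0.59, 0.57, 0.56, 0.55])          # V

slope, intercept = np.polyfit(T, Voc, 1)
print("dVoc/dT = %.2e V/K, Voc(0 K) = %.2f V" % (slope, intercept))  # ~0.76 V
```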
Fig. 6: Temperature dependent short circuit current density of an ITO/PEDOT:PSS/P3HT:PCBM/Al solar cell at different incident light intensities.
Fig. 7: Temperature dependent open circuit voltage of a typical ITO/PEDOT:PSS/P3HT:PCBM/Al solar cell at different incident light intensities.
5.0 CONCLUSIONS
A configuration of an ideal donor/acceptor heterojunction solar cell that consists of an interpenetrating network of donor and acceptor as the absorber layer has been fabricated and characterised by means of temperature and illumination dependent current density-
voltage characteristics. We stress, however, the need for a homogeneous mixture of donor and acceptor to ensure sufficient electronic overlap between molecules of the D-A blend, and propose an optimum mixture ratio of 1:1 by mass. Junction formation procedures that eliminate any possibility of contact with oxygen or other contaminants are another possible way of improving the efficiency of solar cells based on P3HT.
6.0 ACKNOWLEDGEMENTS
I acknowledge the contributions of the following: V. Dyakonov, J. Parisi, and the PV research group of the University of Oldenburg in Germany, where all the experiments were carried out. Acknowledgements also go to the GTZ and DAAD - Germany for funding the research.
REFERENCES
Antoniadis, H. et al, Phys. Rev. B 50, 14911 (1994).
Assadi, A., Appl. Phys. Lett. 53, (1988).
Barth, S. and Bassler, H., Phys. Rev. Lett. 79, 4445 (1997).
Barth, S., Bassler, H., Rost, H. and Horhold, H. H., Phys. Rev. B 56, 3844 (1997).
Brabec, C. J., Dyakonov, V., Parisi, J. and Sariciftci, N. S., Organic Photovoltaics: Concepts and Realization, Springer Series in Materials Science, 60, Springer Verlag, 2003.
Brabec, C. J., Zerza, G., Sariciftci, N. S., Cerullo, G., DeSilvestri, S., Luzatti, S., Hummelen, J. C., Brutting, W., Berleb, S. and Muckl, A. G., Organic Electronics (accepted), 2000.
Chiguvare, Z., Electrical and optical characterisation of bulk heterojunction polymer-fullerene solar cells, PhD Thesis, University of Oldenburg, Germany, (2005).
Sariciftci, N. S., Prog. Quant. Electr., 19, 131 (1995).
Shaheen, S. E., Brabec, C. J., Padinger, F., Fromherz, T., Hummelen, J. C. and Sariciftci, N. S., Appl. Phys. Lett. 78 (2001) 841.
IRON LOSS OPTIMISATION IN THREE PHASE AC INDUCTION SQUIRREL CAGE MOTORS BY USE OF FUZZY LOGIC
B. B. Saanane, A. H. Nzali and D. J. Chambega, Department of Electrical Power, University of Dar es Salaam, Tanzania
ABSTRACT
Until now, the computation of iron (core) losses in induction motors cannot be performed through exact analytical methods but depends mainly on empirical formulae and the experience of motor designers and manufacturers. This paper proposes a new approach through the use of fuzzy logic, with the aim of optimizing the iron loss and hence the total machine loss in order to improve the efficiency. The multi-objective optimization algorithm through fuzzy logic is therefore used to tackle the optimization problem between the objective parameters (core losses and magnetic flux density) and the airgap diameter, which define the machine geometry (e.g. slot and tooth dimensions, airgap thickness, core length etc.). The fuzzy logic toolbox, based on the graphical user interface (GUI), is employed in the Matlab 6.5 environment. The optimal points of airgap diameter, airgap magnetic flux density and iron loss are then used to reconfigure a new motor geometry with an optimized total loss. The new motor design is simulated with a 2D-FEM to analyse the new motor response. Experimental results, which agree with the design results, show an improvement in motor efficiency.
Keywords: Fuzzy logic model, optimisation, analysis, motor efficiency.
1.0 INTRODUCTION
Fuzzy logic deals with degrees of truth and provides a conceptual framework for approximate rather than exact reasoning. Therefore, fuzzy logic has come to mean any mathematical or computer system that reasons with fuzzy sets. It is based on rules of the form "if...then" that convert inputs to outputs, one fuzzy set into another (Canova et al, 1998). The rules of a fuzzy system define a set of overlapping patches that relate a full range of inputs to a full range of outputs. In that sense, the fuzzy system approximates some mathematical function or equation of cause and effect. Fuzzy set theory and fuzzy logic provide a mathematical basis for representing and reasoning with knowledge in uncertain and imprecise problem domains. Unlike Boolean set theory, where an element is either a member of the set or it is not, the underlying principle in fuzzy
set theory is the partial set membership which is permitted (Canova et al, 1998; Jung-Hsien & Pei-Yi, 2004). In this paper, the multi-objective optimization algorithm for iron loss optimization through fuzzy logic is the approach employed to tackle the optimization problem between the objective parameters (core losses and magnetic flux density) and the airgap diameter which defines the machine geometry (e.g. slot and tooth dimensions, airgap thickness, core length etc.). The fuzzy logic toolbox employed is based on the graphical user interface (GUI) in the Matlab environment.
2.0 PROPOSED NEW APPROACH
The multi-objective optimisation was performed through an algorithm linked to outputs of the developed iron loss optimization model. Therefore, the fuzzy logic model was represented by a set of objective values y_i(X), which also defined the value of the fuzzy global objective function as in Canova et al (1998):

O(X) = min_{i=1,...,n} μ_i(y_i(X))    (1)

where: n = the number of objective functions; X = the vector of machine design parameters, such as the airgap diameter D and the airgap magnetic flux density B; μ_i = the i-th membership function of a machine parameter, normalized between 0 and 1; and y_i = a set of objective values. Through this approach, the optimisation problem became scalar and consisted in the determination of the vector X* such that:

O(X*) = max_{X ∈ X} O(X) = max_{X ∈ X} min_{i=1,...,n} μ_i(y_i(X))    (2)
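A minimal numerical sketch of the max-min formulation of Eqs. (1)-(2) is given below. The Gaussian membership-function centres and widths, the parameter ranges and the iron-loss surface are illustrative placeholders, not the values or the model used by the authors:

```python
import numpy as np

def gaussian_mf(x, mean, sigma):
    """Gaussian membership function, normalized between 0 and 1."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def iron_loss(D, B):
    """Placeholder for the developed iron-loss prediction model P_fe(D, B).
    The real model is empirical; this quadratic surface is only illustrative."""
    return 600.0 + 9000.0 * (D - 0.15) ** 2 + 4000.0 * (B - 0.55) ** 2 - 80.0

def global_objective(D, B):
    """Fuzzy global objective O(X) = min_i mu_i(y_i(X)), Eq. (1)."""
    mu_D   = gaussian_mf(D, 0.15, 0.03)                  # preference band for D
    mu_B   = gaussian_mf(B, 0.60, 0.08)                  # preference band for B
    mu_Pfe = gaussian_mf(iron_loss(D, B), 500.0, 80.0)   # low iron loss preferred
    return min(mu_D, mu_B, mu_Pfe)

# Eq. (2): maximise the minimum degree of satisfaction over the design space.
D_grid = np.linspace(0.10, 0.20, 101)
B_grid = np.linspace(0.40, 0.80, 101)
best = max(((global_objective(D, B), D, B) for D in D_grid for B in B_grid))
print("O(X*) = %.3f at D = %.3f m, B = %.2f T" % best)
```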
The multi-objective optimisation was then accomplished through an algorithm linked to outputs of the developed iron loss optimization model [3] as shown in Figure 1.
[Figure: block diagram with design inputs D, X1, ..., Xn and B feeding the IRON LOSS MODEL; the outputs are converted to membership functions and combined into the Fuzzy Global Performance Index]
Fig. 1: Block diagram of the Proposed Fuzzy Approach
Following fuzzy theory, a fuzzy set, characterized by a membership function (MF), was associated with the chosen objective parameters D and B as shown in Figure 1. The objective parameters D, B and Pfe were converted to membership functions (μ_i) within the limits, in correspondence with degrees of satisfaction normalized between 0 (unacceptable value) and 1 (total fulfilment of the requirement). Such an approach made it easy to optimise the chosen parameters, which were defined within a band of acceptable values. The membership functions for D and B were passed through the fuzzy logic model to obtain a single global quality index, representing the overall balancing, by means of the minimum operator. In this research the single global quality index parameter Pfe was also represented as a membership function, which is the minimized iron loss value, Pfe, for the motor frame investigated.
3.0 METHODOLOGY
The Fuzzy Inference System (FIS), which implements the specific methods, was used and implemented in the Fuzzy Logic Toolbox on the Matlab platform with the Graphical User Interface (GUI) tools. This was a process of formulating the mapping from the two inputs D and B to the iron loss Pfe as an output using fuzzy logic. The actual process of fuzzy inference involved the formulation of the membership functions, fuzzy logic operators, and if-then rules. Mamdani's methodology (Canova et al, 1998) was applied for the fuzzy inference system as an algorithm for the decision processes. In this Mamdani-type inference, as defined for the Fuzzy Logic Toolbox, the output membership functions are also fuzzy sets. After the aggregation process, there is then a fuzzy set for each output variable that needs de-fuzzification, that is, resolving to a single number, in this case the optimised value of the iron loss Pfe for each motor frame under consideration.
Although it was possible to use the Fuzzy Logic Toolbox by working strictly from the command line, it was much easier to build the system graphically. There were five primary GUI tools for building, editing and observing the fuzzy inference system in the Fuzzy Logic Toolbox. These GUI tools are dynamically linked, such that changes made to the FIS using one of them can affect what is seen on any of the other open GUI tools. It was also possible to have any or all of them open for any given system. The GUI tools made it possible to implement the following elements of the fuzzy inference process:
• the membership functions;
• AND methods;
• OR methods;
• implication methods;
• aggregation methods; and
• de-fuzzification methods.
Figure 2 shows the block diagram for the employed FIS.
Fig. 2: Fuzzy Logic block diagram for computation of the optimized parameters.
Mamdani's methodology was applied for the fuzzy inference system as an algorithm for the decision processes, utilizing the following rule:
If (Airgap diameter D [m] is mf1) and (Airgap induction B [T] is mf2) then (Ironloss Pfe [W] is mf3).
408
Saanane, Nzali & Chambega
Pf~
as an output using fuzzy logic. The actual process of the fuzzy inference involved the
formulation of the membership function, fuzzy logic operators, and if-then rules.
The Process Used in the Fuzzy Inference System (FIS) Below is a description of the process which was used for organizing the major blocks in the FIS. The description is provided for one frame size M3AP 160 L-4. Therefore, the process for the major blocks of FIS is as shown below:
The membership functions: The Gaussian distribution curve was used to build the membership functions for the fuzzy sets D, B and
Pf~.
The fuzzy sets D and B as
shown in Figure 3 and Figure 4 were simultaneously varied in order to optimize fuzzy set
Pf~
shown in Figure 5, such that the fuzzy operator in all the antecedents were made
to be AND. That is,
D: min(/aD (x)),
(3)
B: max(/a~ ( x ) ) ,
(4)
AND, implication and aggregation methods: The fuzzy AND aggregated the two membership functions by an output having a value at a given input x (D or B). The result of fuzzy AND served as a weight showing the influence of this rule on the fuzzy set in the consequent (Jung-Hsien & Pei-Yi, (2004), and Qilian & Jerry, (2000)). The aggregated membership of the antecedent was then used as a weight factor to modify the size and shape of the membership function of the output fuzzy set
Pie
in a way of trun-
cation as in Xu and coresearchers in the Textile Research Journal, Vol. 72, No. 6. The truncation was done by chopping-off the gaussian output function. Considering that the membership function of the output fuzzy set as /.tpfe( x ) a n d the weight generated from the antecedent as w, so the truncated functions had the form:
,ues~(x) - max
f,f~(x), w},
(5)
So, Figure 5 represents the weighted membership functions of the output fuzzy sets of
Pie
for frame size M3AP 160 L-4.
409
International Conference on Advances in Engineering and Technology
De-fuzzification method: After all the fuzzy rules and evaluations were done the FIS needed to output a crisp member to represent the classification result
Pie
for the input
data of D and B. This step is called defuzzification. The most popular method for defuzzification, the centroid calculation was employed which gave a grade weighted by the area under the aggregated output function. Let
al, a 2 ,..., a~
be the areas of the truncated areas under the aggregated function, and
cl, c2, ..., c n be the coordinates of their centers on the axis. The centroid of the aggregated area is given by Xu and group again.
G=
aic i Z i=1
(6)
~-'a i i=1
Therefore the location of the centroid indicated the value of optimized ironloss
Pfe
to
the input D and B as shown in Figure 6. The solution to the optimiztion problem was represented as a three-dimensional surface equivalent to the mapping Pfe ( D , B) as shown in Figure 7.
3.2 Implementation of the Fuzzy Logic Model Implementation of the model for parameters D, B and
Pfe
was based on the use of the GUI
tools of the Fuzzy Logic Toolbox (Lehtla, (1996), Papliski, (2004), Qilian & Jerry, (2000))]as shown in Figure 2 with the parameters D and B as the two inputs and
Pfe
as one
output parameter. The Mamdani's methodology was applied for the fuzzy inference system as an algorithm for the decision processes utilizing the following rule as in Papliski, (2004). If (Airgap diameter D [m] is is
mfl
and (Airgap induction B IT] is mf2) then (Ironloss
Pie
[W]
mf 3).
In Section 3.2.2 below, curves which show the shapes of the adopted membership functions with the ordinate values ranging between 0 and 1 and the procedure for fuzzy logic optimization of the iron loss
Pie
Fuzzy Logic Model Inputs
410
for the frame size M3AP 160 L-4. are provided.
Saanane, Nzali & Chambega
Table 1: Inputs to the Fuzzy Logic Model Type of moAirgap diameAirgap induction, B, ter, D, lml ITI tor M3AP 160 L4
0.14
Ironloss, Pfe, [W] by Fuzzy Logic model
530
0.60
Table 1 shows the results of optimization of parameters optD, optB and optPfe through the Fuzzy Logic Model in comparison to the same values obtained by the developed Ironloss Prediction Model. 4.0 MODEL RESULTS" CURVES FOR MOTOR FRAME M3AP 160 L-4 FIS Variables
ap__diameter lf~s__Pfe[,Ar
ap~ndu "~ionBrT1
i? 5:
i~..
~
....
input variable "Airgap_..diarneter__D[m]"
i~iiiiiiiiiiiiiiiiiiiii iiiiiiiiiiii~~!i~ii'iiiI ~'~
ii~ii?iiiiiii?!ii~!iiiiiiiiiiii!iiii~i!i!!!~iiii~ it ~i~
!
~ -
~
Fig. 3" Membership function for the Airgap diameter D [m], M3AP 160 L-4.
411
International Conference on Advances in Engineering and Technology
I~|174
Ill~~l~iill!~iil
[0.07070.~8~5.23e-0050.~2]
I~~~l~~1,0.~ 0.~, ~ ~ ~~~~0~,~000~0,0~,,0,,,,,,,,,~0,,,,,~,~0~,,,,,, ~,,,,,,,,,,,,,,,,,~,~,,,,,,,,~,,,~,,,,~ ~NI~NN~',IiNNNNi~i'::;ilNii~N~i~, 9...................................................................................................................................... .~ iN~iii~!i~i~NN,::iiN i~i~| ~iiii~iiiii;,iiiii~iii~
Fig. 4: Membership function for the Airgap Induction B [T], M3AP 160 L-4.
412
Saanane, Nzali & Chambega
""~176176176
~176
2:2:
.N~i|174174
Fig. 5: Membership function for the Ironloss Pjo [W], M3AP 160 L-4.
413
International Conference on Advances in Engineering and Technology
Airgap__diameter.._D[m]= 0 ,I 61
Airgap._induction_B[T]
=0,68 Ironloss.._Pfe=[547 vV]
A 1.I
~""
.176
0,57
0.69
t
I =
539 Fig. 6" Optimization of Ironloss Pie [W], M3AP 160 L-4.
Fig. 7: Curve Pfe(D, B), M3AP 160 L-4.
5.0 2D-FEM SIMULATION RESULTS FOR MOTOR TYPE M3AP 160 L-4
The 2D FEM was used as a tool for analysis of the core losses in different sections of the motor core. The steady state response was obtained with a magneto-dynamic simulation in the FEM software to compute the iron losses in the motor cores and also to plot the torque versus slip relationship. The magneto-harmonic model of no-load operation for rated source voltage and frequency was employed. The main numerical results of the no-load simulation were: (1) the no-load current for each phase was I10 = 18.7 A, 19 A, 20.9 A; (2) the stator and rotor iron loss was 343 W. The air gap flux density and its harmonics for the new geometry of frame M3AP 160 L-4 are shown in Figure 8. The shape of the torque-slip curve in Figure 9, for the original motor geometry, differs from the curve in Figure 10 for the new motor geometry. The rotor slots in the new geometry are deeper than those in the original geometry, and therefore the skin effect, as in [4], was more pronounced in the torque-slip curve of the new motor geometry.
Fig. 8: Air gap flux density and the harmonics for new geometry of frame M3AP 160 L-4.
Fig. 9: Torque - slip curve for original geometry of frame M3AP 160 L-4.
Fig. 10: Torque - slip curve for new geometry of frame M3AP 160 L-4.
6.0 CONCLUSION
The fuzzy logic model was verified with the two-dimensional finite element simulation and thereafter validated with experimental data from a dynamometer set-up. Comparing the iron loss value of the old motor geometry with that of the new geometry, the latter has a lower value, and this leads to a lower total loss and therefore to an improvement of the motor efficiency as a result of the iron loss optimization, as seen in Table 2 and Figure 11 below.

Table 2: Comparison of computed iron loss, Pfe, with the experimental value (motor type M3AP 160 L-4)
Practical iron loss value, Pfe [W]: 609
Data from Fuzzy Logic model, Pfe [W]: 544
Original geometry iron loss, Pfe [W] by FEM: 342.8
New optimized geometry, Pfe [W] by FEM: 247.20
Experimental data on original motor, Pfe [W]: 262
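For orientation, the relative iron-loss reduction implied by the two FEM columns of Table 2 can be computed directly; a small sketch:

```python
pfe_original_fem = 342.8   # W, original geometry (2D-FEM)
pfe_new_fem = 247.20       # W, new optimized geometry (2D-FEM)

reduction = (pfe_original_fem - pfe_new_fem) / pfe_original_fem
print("Iron loss reduction by FEM: %.1f %%" % (100 * reduction))   # ~27.9 %
```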
Figure 11: Comparison of computed iron loss, Pfe, with the experimental value
REFERENCES
Canova A., Chiampi M., Ragusa C. and Repetter M., "Automated design of magnetic circuit of induction machines using the multi-objective optimization techniques and finite element method", International Journal of Applied Electromagnetics and Mechanics, Vol. 9, pp. 1-9, 1998.
Jung-Hsien and Pei-Yi H., "Support Vector Learning Mechanism for Fuzzy Rule-Based Modeling: A New Approach", IEEE Transactions on Fuzzy Systems, Vol. 12, No. 1, 2004.
Saanane B.B., Nzali A.H. and Chambega D.J., "Computational Methods and Experimental Measurements XII: Design Approach of Squirrel Cage Induction Motors by Use of an Iron Loss Optimization Method for Improving Efficiency", WIT Transactions on Modelling and Simulation, Vol. 41, pp. 621-629, © 2005 WIT Press.
Lehtla T., "Parameter Identification of an Induction Motor Using Fuzzy Logic Controller", PEMC'96, Part 3, pp. 292-296, Budapest, 1996.
Papliski A. P., "Foundations of Fuzzy Logic - Matlab 6.5 Fuzzy Logic Toolbox User's Guide", Neuro-Fuzzy Computing, 2004.
Qilian L. and Jerry M. M., "Interval Type-2 Fuzzy Logic Systems: Theory and Design", IEEE Transactions on Fuzzy Systems, Vol. 8, No. 5, 2000.
Xu B., Dale D. S. and Huang Y., "Cotton Color Classification by Fuzzy Logic", Textile Research Journal, Vol. 72, No. 6, p.
PROPAGATION OF LIGHTNING INDUCED VOLTAGES ON LOW VOLTAGE LINES: CASE STUDY TANZANIA
R. Clemence and M. J. Manyahi, Department of Electrical Power Engineering,
University of Dar es Salaam, Tanzania
ABSTRACT
Transient surges on the topmost conductor due to a lightning strike cause interference in the other conductors. This is due to the electromagnetic coupling that exists between conductors. This phenomenon is observed in power lines and telecommunication lines. In this paper various factors which influence the propagation of transient surges on multi-transmission lines above a perfectly conducting ground have been investigated in detail. The separation distance between the conductors and the conductors' termination loads have been found to influence the magnitude of transient surges in multi-transmission lines.
Keywords: Lightning; Multi-Transmission Lines; Transient Surges; Crosstalk; Finite Difference Time Domain, Electromagnetic Interference.
1.0 INTRODUCTION
Today's societies use modern electronic devices that operate at lower voltages (e.g. computers, communication equipment, industrial controls, etc.). As a consequence, the sensitivity of these devices to transient surges and electromagnetic interference (EMI), whether man made or controlled by nature, is high. This in turn has created a need for advancing our knowledge of the interactions of transient surges and EMI with power and telecommunication systems. Among the various sources of transient surges and EMI, lightning appears to be the most devastating and has caused the loss of a large number of human and animal lives as well as equipment damage. In Tanzania, lightning strikes to power lines are a common phenomenon because the transmission and distribution systems are overhead in many places and span thousands of kilometres. In the analysis of a Multi-Transmission Lines (MTLs) system, the topmost conductor is always referred to as the emitter whereas the other conductors are referred to as receptors. The earlier design of the lightning protection scheme for the power network did not take into consideration the effect of induced transient surges in the power networks due to crosstalk problems (nearby field coupling). The main reason for this is that in past years the consumer-connected equipment was not as sensitive to transient surges as it is today. As mentioned earlier, in today's societies there is an increased use of and dependence on electronics based equipment, which is more vulnerable to transient surges. Thus, the damage to and failure of this equipment due to lightning caused surges have increased.
Due to the increase in equipment sensitivity to transient surges, various researchers have in recent years studied extensively the factors that could influence the propagation of transient surges on power lines. The losses due to the skin effect of the ground and the skin effect of the conductors influence transient surge propagation along the line when compared to the ideal (lossless) case Deft, et al, (1981), Theethayi, et al, (2004). This influence is due to the frequency-dependent penetration depth of the fields in the finitely conducting soil and conductors. There are various ways in which the ground influences the electric and magnetic fields incident on the power lines due to indirect lightning strikes. The electromagnetic field coupling between the power lines and a nearby object is stronger when the conductors and the ground are lossy than when they are lossless. However, most of the detailed analyses of the interaction of lightning-generated fields and the propagation of surges on power lines have been carried out for a single conductor above the ground Cooray, et al, (1998), Cooray, (1994). A single-conductor system does not model power networks well, because many electrical power networks have more than one conductor. Furthermore, a study of a single-conductor system does not include crosstalk problems (i.e. nearby field coupling). In recent years, various researchers have investigated the factors that influence the propagation of transient surges in a Multi-Transmission Lines (MTLs) system. From these studies it has been deduced that the height of the receptor above the ground and the ground conductivity influence the crosstalk surges on the receptor Theethayi, et al, (2004). The findings were based on a study of a two-conductor system arranged horizontally with a separation of 1.0 m between the two conductors. In that model, the emitter was kept at a fixed height of 10.0 m above the ground whereas the receptor was at a height of 10 m, 7.5 m, 5.0 m, 2.5 m or 0.5 m. The current source was connected to the emitter, and the crosstalk on the receptor at the different heights was investigated. The model developed by Theethayi, et al, (2004) does not model the actual electrical power network well, but it explains clearly the idea of crosstalk in a two-conductor system: during the simulations the receptor was shifted from one height to another, whereas a real network has its conductors at fixed positions.

Fig. 1: System configuration (LV-L1, LV-L2 and LV-L3 spaced 0.3 m apart vertically, with the neutral N at 8.1 m above the ground plane).
Fig. 2: 0.36/26.8 µs impulse current source waveform.
In the crosstalk analysis for the model developed by Theethayi, et al, (2004), there is electromagnetic coupling between the emitter and one receptor at a time. Thus, in this paper, the model is developed in such a way that the conductors of the network are kept in their positions to represent the real power network, so that there is electromagnetic coupling between the emitter and each receptor, and between one receptor and the other receptors, at the same time.
1.1 Description of the System under Study
A 1.2 km transmission line was used in the simulation to represent a typical Low Voltage (LV) network in Tanzania. The conductors of the LV system are arranged vertically at heights of 9 m, 8.7 m, 8.4 m and 8.1 m above ground level Tanzania Electric Supply Company Limited, (1993) (see Fig. 1). The conductor size used for the simulations is 100 mm² AAC, which is typical of the conductors of the LV system. Both ends of the emitter were terminated with the self-surge impedance. The ends of the receptors are terminated with the self-surge impedance (SSI), an open circuit (OC) or a short circuit (SC). The impulse current source models the lightning strike; it is a double-exponential wave with a peak current of about 10 kA, a rise time of about 0.36 µs and a time to half value of 26.8 µs. This current source was connected at the middle of the emitter, and its waveform is shown in Fig. 2. The induced voltages at the loads connected to the two ends of the receptors are studied. The induced voltages at the first node from the point of strike are also presented.
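As an illustration, the double-exponential source can be written as i(t) = I0 (e^(-at) - e^(-bt)). The Python sketch below evaluates such a wave; the decay constants a and b are assumptions chosen only to approximate the stated 0.36/26.8 µs waveshape, since the paper gives the waveshape and peak value but not the constants.

```python
import numpy as np

# Double-exponential impulse current i(t) = I0 * (exp(-a*t) - exp(-b*t)).
# The constants a and b are illustrative assumptions chosen to give roughly
# a 0.36 us front and a 26.8 us time to half value; the paper does not state them.
I_PEAK = 10e3          # peak current, A (from the paper)
a = 2.6e4              # 1/s, governs the tail (assumed)
b = 7.0e6              # 1/s, governs the front (assumed)

t = np.linspace(0.0, 50e-6, 5001)          # 0-50 us window, as in the plotted figure
wave = np.exp(-a * t) - np.exp(-b * t)
i = I_PEAK * wave / wave.max()             # normalise so the peak equals 10 kA

# Crude estimates of the 10%-90% front time and the time to half value
pk = i.argmax()
t10 = t[np.searchsorted(i[:pk], 0.1 * I_PEAK)]
t90 = t[np.searchsorted(i[:pk], 0.9 * I_PEAK)]
t_half = t[pk + np.searchsorted(-i[pk:], -0.5 * I_PEAK)]
print(f"front time ~ {(t90 - t10) * 1e6:.2f} us, time to half value ~ {t_half * 1e6:.1f} us")
```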
2.0 TELEGRAPHER'S EQUATIONS
The voltage and current distributions along the lines are most widely obtained by solving the Telegrapher's equations Rachidi, et al, (1999), Paul, (1994), Theethayi, et al, (2004). The Telegrapher's equations can be made to include the losses due to the ground and the losses due to the skin effect of the conductors Cooray, V, (1994), Rachidi, et al, (1996), Rachidi, et al, (1997), Rachidi, et al, (1999), Theethayi, et al, (2004). However, in this paper, only the losses due to the skin effect of the conductors are considered. The modified Telegrapher's equations that include the skin effect of the conductors are given by equation (1); equation (1a) is the voltage wave equation and equation (1b) is the current wave equation.
∂V(z,t)/∂z + A·I(z,t) + B ∫₀ᵗ [∂I(z,τ)/∂τ] / √(t−τ) dτ + L·∂I(z,t)/∂t = 0    (1a)

∂I(z,t)/∂z + C·∂V(z,t)/∂t = 0    (1b)

In equation (1), C and L are the per-unit-length capacitance and inductance matrices respectively. The matrices A and B in (1) are based on the √t dependence of the skin effect of the conductors. It should be noted that both A and B are diagonal matrices, given by (2a) and (2b) respectively.

A = r_dc = 1/(π σ r²)    (2a)

B = (1/(2πr)) · √(μ₀/(π σ))    (2b)
where σ is the conductor's conductivity and r is the radius of the conductor.

2.1 Numerical Solutions for the Telegrapher's Equations
The Telegrapher's equations were solved numerically using the Finite Difference Time Domain (FDTD) technique described in Paul, (1994). The advantage of this technique is that the losses can be efficiently incorporated in the time domain by means of a convolution. In order to incorporate the convolution, the transient impedance corresponding to the skin effect of the conductors has been approximated as a sum of exponential functions Paul, (1994). In the present work, the transient impedance for the skin effect of the conductor has been approximated by means of a standard ten-term Prony approximation Paul, (1994). The program code to solve these expressions has been implemented in Matlab.

3.0 SIMULATION RESULTS
The parameters that are changed during the simulations are the receptors' termination loads; the other parameters are set according to the existing LV system. Throughout the simulations, the current source was connected at the middle of the emitter. The results obtained with the FDTD technique were first compared with known results in the literature and the accuracy was found to be quite satisfactory Paul, (1994), Theethayi, et al, (2004). Finally, the results for the induced voltages at the nearest and at the farthest node from the point of lightning strike (in this case from the middle of the emitter) are shown in Fig. 3 and Fig. 4. They are also summarized in Table 1.
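For illustration, the leapfrog FDTD marching scheme used to integrate equations (1a) and (1b) can be sketched as follows for a single lossless conductor. This is not the authors' Matlab code: the per-unit-length L and C and the source constants are assumed values, and the multi-conductor matrices and the skin-effect convolution with its ten-term Prony approximation are omitted.

```python
import numpy as np

# Minimal leapfrog FDTD update for a single lossless line (per-unit-length L, C).
length = 1200.0                     # 1.2 km line, as in Section 1.1
L_pul, C_pul = 1.6e-6, 7.0e-12      # H/m and F/m (assumed values)
nz = 600
dz = length / nz
dt = 0.9 * dz * np.sqrt(L_pul * C_pul)   # time step satisfying the Courant limit

def source(t):
    # assumed 10 kA double-exponential current injected at the line midpoint
    return 10e3 * (np.exp(-2.6e4 * t) - np.exp(-7.0e6 * t))

V = np.zeros(nz + 1)                # node voltages
I = np.zeros(nz)                    # branch currents, interleaved half a cell from V
mid, vmax = nz // 2, 0.0
for n in range(2000):
    I -= dt / (L_pul * dz) * (V[1:] - V[:-1])          # dI/dt = -(1/L) dV/dz
    V[1:-1] -= dt / (C_pul * dz) * (I[1:] - I[:-1])    # dV/dt = -(1/C) dI/dz
    V[mid] += dt / (C_pul * dz) * source(n * dt)       # lightning current injection
    V[0] = V[-1] = 0.0                                 # short-circuited ends (illustrative only)
    vmax = max(vmax, np.abs(V).max())

print(f"peak line voltage in this toy run: {vmax / 1e3:.0f} kV")
```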
Table 1: Peak voltages at the nearest point from the point of strike and at either end of the LV lines for different receptor termination loads

Receptors' termination loads   Conductor   Peak voltage (kV) at the nearest point   Peak voltage (kV) at either end
SSI                            LV-L1       2423                                     2423
                               LV-L2       1217                                     659.9
                               LV-L3       1011                                     460.5
                               N           889                                      381.3
OC                             LV-L1       2701                                     2711
                               LV-L2       1382                                     1387
                               LV-L3       1148                                     1153
                               N           1009                                     1013
SC                             LV-L1       3220                                     2537
                               LV-L2       1258                                     0
                               LV-L3       1046                                     0
                               N           919.4                                    0
Fig. 3: Induced voltages at the nearest point (i.e. 2 m) from the point of lightning strike (four panels: the struck conductor LV-L1 and the receptors LV-L2, LV-L3 and N; time axes in microseconds).
Fig. 4: Induced voltages at either end of the LV lines (four panels: the struck conductor LV-L1 and the receptors LV-L2, LV-L3 and N; time axes in microseconds).

4.0 OBSERVATIONS
The following observations are made from the simulation results:
1. The magnitude of the induced transient voltage is larger on the struck conductor (emitter) than the crosstalk-induced voltages on the receptors. The induced voltage on each receptor also decreases as its vertical separation from the emitter increases.
2. When the termination loads of both the emitter and the receptors are the self-surge impedance (SSI), the peak induced voltage on the struck conductor is larger at either end than near the point of strike (i.e. about 2.0 m away), whereas the peak induced voltage on each receptor is larger near the point of strike than at either end.
3. When the emitter is terminated with the self-surge impedance (SSI) and each receptor is terminated with a very high impedance, i.e. open circuit (OC), the peak induced voltage at the ends of the emitter and of each receptor (i.e. 0.6 km from the point of strike to either end) is larger than the peak induced voltage near the point of strike (i.e. 2.0 m from the point of strike).
4. When the emitter is terminated with the self-surge impedance (SSI) and each receptor is terminated with a very low impedance, i.e. short circuit (SC), the peak induced voltage at the ends of the emitter (i.e. 0.6 km from the point of strike to either end) is smaller than the peak induced voltage near the point of strike (i.e. 2.0 m from the point of strike). At the far ends, the peak induced voltages on the receptors are almost zero.
5. From Table 1 it can be seen that, for the different loads at the ends of the receptors (SSI, OC and SC) while the emitter is terminated with SSI, the struck conductor (emitter) has its maximum peak induced voltage near the point of strike when the loads connected at the ends of the receptors are very small (short circuit), whereas all the receptors (LV-L2, LV-L3 and N) have their maximum peak crosstalk voltages far from the point of strike when the loads connected at the ends of the receptors are very high (open circuit).
6. The struck conductor (emitter) has its highest peak magnitude near the point of strike (2.0 m from the point of strike) when the loads at the ends of the receptors are very low (short circuit) (see Table 1).
7. All receptors have their highest peak induced voltages far from the point of strike (i.e. 0.6 km from the point of strike to either end) when the loads at the ends of the receptors are very high (open circuit) (see Table 1).
5.0 DISCUSSION OF THE RESULTS
1. In this case study (the LV network in Tanzania), the induced transients in the conductor system are influenced by the terminating impedances at the ends of the emitter and the receptors, and by the vertical separation between adjacent conductors.
2. The induced voltages on the receptors depend on the strength of the electromagnetic coupling between the emitter and the receptor (i.e. the magnitudes of the induced voltages on the receptors decrease as the coupling becomes weaker). The electromagnetic coupling between the emitter and a receptor is strong when the separation distance between them is small. Thus, in the existing LV system, the crosstalk voltages are very high between the emitter LV-L1 and the receptor LV-L2 (0.3 m apart) and very low between the emitter LV-L1 and the receptor N (0.9 m apart) (see Table 1).
3. A transmission line terminated in its surge impedance has no reflections from its ends. In the crosstalk case, however, there are reflections from both ends towards the middle (where the current source is injected) even though the line is terminated in its surge impedance. This is due to the presence of equivalent series voltage sources resulting from the magnetic coupling between the emitter and the receptors. That is why the peak induced voltages near the point of strike and at the ends of the receptors differ even when the receptors are terminated with the self-surge impedance.
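For reference, the self-surge impedance used for the terminations can be estimated from the classical expression Z = 60 ln(2h/r) for a single conductor at height h above a perfectly conducting ground. The sketch below assumes the radius of a 100 mm² AAC conductor, since the paper does not quote the impedance value.

```python
import math

# Self-surge impedance of a single overhead conductor above a perfectly
# conducting ground, Z = 60 * ln(2h/r).  The radius is an assumed value for
# the 100 mm^2 AAC conductor named in Section 1.1.
def self_surge_impedance(height_m: float, radius_m: float) -> float:
    return 60.0 * math.log(2.0 * height_m / radius_m)

radius = math.sqrt(100e-6 / math.pi)       # ~5.6 mm for a round 100 mm^2 conductor
for h in (9.0, 8.7, 8.4, 8.1):             # conductor heights from Section 1.1
    print(f"h = {h} m -> Z ~ {self_surge_impedance(h, radius):.0f} ohms")
```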
6.0 CONCLUSIONS
This work has discussed in detail how two factors, namely the separation distance between the struck conductor and the receptor and the different conductor termination loads, influence the propagation of surges in a Multi-Transmission Lines (MTLs) system. When a receptor is closer to the emitter, the induced voltages on that receptor are much higher than the induced voltages on receptors that are farther from the emitter. A severe distortion of the input waveform (the double-exponential waveform) has been observed on both the emitter and the receptors.
Also, of the three cases of termination loads considered in this work, for protecting the struck conductor LV-L1 against a direct lightning strike it is proposed that the loads at the ends of the receptors be treated as short circuits (very low impedance), whereas for protecting the other conductors (receptors) against crosstalk (nearby field coupling) the termination loads at the receptors should be treated as very high impedances (open circuit).

REFERENCES
Araneo, R. & Celozzi, S. (2001), Direct Time Domain Analysis of Transmission Lines Above a Lossy Ground, IEE Proc.-Sci. Meas. Technol., Vol. 148, No. 2, March 2001, pp. 73-79.
Cooray, V. (1986), Effects of Propagation on the Return Stroke Radiation Fields Produced by Positive Return Strokes and their Submicrosecond Structure, J. Geophys. Res., pp. 7907-7911.
Cooray, V. & Scuka, V. (1998), Lightning-induced overvoltages in power lines: validity of various approximations made in overvoltage calculations, IEEE Trans. on Electromagnetic Compatibility, Vol. 40, No. 4, Part 1, 1998, pp. 355-363.
Cooray, V. (1994), Calculating Lightning-Induced Overvoltages in Power Lines: A Comparison of Two Coupling Models, IEEE Trans. on Electromagnetic Compatibility, Vol. 36, No. 3, 1994, pp. 179-182.
Geri, A. (1999), Behaviour of grounding systems excited by high impulse currents: the model and its validation, IEEE Transactions on Power Delivery, Vol. 14, No. 3, 1999, pp. 1008-1017.
Theethayi, N., Yaqing, L., Montano, R. & Thottappillil, R. (2004), On the influence of conductor heights and lossy ground in multi-conductor transmission lines for lightning interaction studies in railway overhead traction systems, Electric Power Research, Vol. 71, pp. 186-193, 2004.
Paul, C.R. (1994), Analysis of Multiconductor Transmission Lines, John Wiley and Sons, New York.
Rachidi, F., Nucci, C.A. & Ianoz, M. (1999), Transient Analysis of Multiconductor Lines above Lossy Ground, IEEE Trans. on Power Delivery, Vol. 14, pp. 294-302.
Rachidi, F., Nucci, C.A., Ianoz, M. & Mazzetti, M. (1996), Influence of a lossy ground on lightning-induced voltages on overhead lines, IEEE Trans. on EMC, Vol. 38, No. 3, Aug. 1996, pp. 250-264.
Rachidi, F., Nucci, C.A. & Ianoz, M. (1997), Response of Multiconductor Power Lines to Nearby Lightning Return Stroke Electromagnetic Fields, IEEE Trans. on Power Delivery, Vol. 12, No. 3, 1997, pp. 1404-1411.
Tanzania Electric Supply Company Limited (1993), Distribution Construction Hand Book.
A CONTROLLER FOR A WIND DRIVEN MICROPOWER ELECTRIC GENERATOR
Sayyid Ahmed Ali, Vasant Dhamadhikar and Elijah Mwangi; Department of Electrical and Electronic Engineering, University of Nairobi, PO BOX 30197, Nairobi 00100, KENYA.
ABSTRACT
The design of a controller for a wind-driven electric generator is presented. The system is based on the Intel 8051 microcontroller and controls a wind turbine by activating a brake when the wind speed exceeds a preset value and releasing the brake when the wind speed drops. The controller also measures the wind speed, the turbine rotor speed, the generated voltage and the ambient temperature. These parameters are displayed on an LCD screen and are also sent to a PC via an RS232 link for data logging. The assembly code was developed using the KEIL µVISION development tools and burned into the EPROM of the microcontroller.
Keywords: 8051 Microcontrollers, microprocessor applications, wind turbine control, digital control.
1.0 INTRODUCTION
A controller is essential in a wind-driven electric generator so as to avoid damage to the turbine due to excessive mechanical stresses and vibrations. In micro-power wind-driven generators, this usually takes the form of activating and releasing a brake in response to a given wind speed cut-off. In this paper, a controller has been designed based on the Intel 8051 microcontroller. In addition to the main function of controlling the speed, the circuit also measures the following parameters: (i) the outdoor temperature in the range 0°C to 60°C; (ii) the turbine rotor speed from standstill to 2000 rpm; (iii) the prevailing wind speed from 0 m/s to 36 m/s; (iv) the generated voltage from 0 V to 60 V. These analogue inputs were simulated as voltage signals, conditioned, and then converted to a digital format by an 8-bit ADC. An Intel 8051 microcontroller accepts the digital inputs and a program code is used to control the generator. The circuit is also interfaced to a PC via an RS232 connection; this gives a remote user vital information about the turbine operation and offers the possibility of data logging. A manual control panel circuit that uses a relay to disconnect the generator and to override the operation of the microcontroller has also been designed, offering a provision for emergency shutdown of the turbine. The circuit was laid out on PCB using EXPRESS PCB and the assembly code was developed using the KEIL µVISION development tools. The results obtained from the circuit indicate effective control of the generator with a cut-out wind speed of 25 m/s. All parameters, as well as the system status such as whether the brake is on or off, or whether the generator is on or
off, were displayed on an LCD screen. The data is also sent through the UART terminal of the 8051 to a PC and displayed in the HyperTerminal window. This paper is organised as follows. In Section 2, a brief treatise on the principles of a wind turbine and methods of power control is given; the active brake control method is identified as a simple and popular control method. In Section 3, a design methodology based on the popular 8051 microcontroller is presented. Results from the design are given in Section 4. Lastly, conclusions and recommendations for further work are given in Section 5.

2.0 PRINCIPLES OF A SIMPLE WIND TURBINE
A mass of air in motion can exert a sizeable force when its momentum is stopped or slowed down. The momentum transfer can be used to rotate propeller-style wind sails, often called wind turbines. The rotating turbine is connected to an electrical generator through a gearbox. The minimum wind speed at which a turbine can begin producing useful electricity is often called the cut-in speed; many systems have a typical cut-in speed of 4.5 m/s. When the wind speed exceeds a certain level, many wind turbines disconnect their generators from the power grid and stop the propellers to prevent damage to the turbine. The cut-out speed of many systems is around 24 m/s to 27 m/s. Most generator systems are synchronized to the utility grid, so the generators spin at a constant speed. To compensate for different wind conditions, most systems have variable-pitch propellers that can capture more or less wind force. Some wind generators also have gearboxes with variable input-to-output gear ratios as another way to deal with different wind speed conditions. The better machines carefully measure the wind speed and make corrections to the propeller blades and the gearbox to maintain maximum mechanical-to-electrical conversion efficiency (ITDG, 2005).

2.1 Power Control of Wind Turbines
The power, P, obtained when the wind passes through a circular cross-sectional area perpendicular to the direction of air flow is given by (Danish Wind Industry, 2003):
P = 0.5 ρ v³ π r²    (1)
where ρ is the density of air (approx. 1.23 kg/m³), v is the velocity of the wind and r is the radius of the cross-sectional area. Thus, a small change in wind speed results in a large change in wind power. This cube relationship is what makes control systems for extracting power from the wind so difficult to design: a small change in wind speed means a large change in electrical power output. Of course, a perfect machine that can extract all of the available energy does not exist; a more realistic efficiency figure is 30%.
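As a quick illustration of the cube law in equation (1), the sketch below evaluates the captured power at a few wind speeds; the 2 m rotor radius is an arbitrary assumption for a micro-power machine, and the 30% figure is the realistic overall efficiency mentioned above.

```python
import math

def wind_power_kw(wind_speed_ms: float, rotor_radius_m: float,
                  efficiency: float = 0.30, air_density: float = 1.23) -> float:
    """Captured power from equation (1), P = 0.5*rho*v^3*pi*r^2, scaled by an
    overall conversion efficiency (30% is the realistic figure quoted above)."""
    p_watts = 0.5 * air_density * wind_speed_ms ** 3 * math.pi * rotor_radius_m ** 2
    return efficiency * p_watts / 1000.0

# Illustrative speeds only: the 4.5 m/s cut-in, the ~15 m/s design speed and
# the 25 m/s cut-out used later in this paper (rotor radius of 2 m is assumed).
for v in (4.5, 15.0, 25.0):
    print(f"v = {v:4.1f} m/s -> about {wind_power_kw(v, rotor_radius_m=2.0):6.1f} kW")
```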
From equation (1), the electrical power produced is proportional to the wind collection area, so it is advantageous for a wind farm to collect as much wind as possible by having large-diameter wind turbine propellers. However, conventional wind turbines do not scale very well: as wind turbines become larger, the cost of the supporting towers and the propellers drives up the cost per kilowatt hour of the energy converted. Large arrays of medium-size wind turbines are therefore generally used in wind farms, so that the swept rotor area is spread out over many wind turbines. Wind turbines are designed to produce electrical energy as cheaply as possible and are therefore generally designed so that they yield maximum output at wind speeds around 15 m/s. It is unnecessary to design turbines that maximise their output in stronger winds, because such strong winds are rare. In the case of stronger winds it is necessary to waste part of the excess energy of the wind in order to avoid damaging the wind turbine; it is for this reason that wind turbines are designed with some sort of power control. There are diverse ways of doing this safely on modern wind turbines, depending on the output power capacity (Danish Wind Industry, 2003). For smaller (micro-power) wind turbines, an active brake method is applied: when the wind speed is too high, a brake is applied to prevent damage to the turbine. This method is usually cheaper and easier to install, since the turbine itself is not aerodynamically critical as in the other methods. This is the control method chosen in our work, mainly due to its simplicity of implementation (Energy Centre, 2005).

3.0 CONTROLLER DESIGN
In order to design an effective controller, it is necessary to capture the following four parameters: (i) temperature from a minimum of 0°C to a maximum of 60°C; (ii) turbine rotor speed from a minimum of 0 rpm to a maximum of 2000 rpm; (iii) prevailing wind speed from a minimum of 0 m/s to a maximum of 36 m/s; (iv) generated voltage from 0 V to a maximum of 60 V. These parameters, together with the system status (e.g. whether the brake is on or off and whether the generator is switched on or off), are also to be displayed on an LCD screen. In addition, the controller should send the above data to a remote PC for data logging purposes. For this reason, a microcontroller such as the 8051, which has an in-built UART, is used rather than a general microprocessor (Mazidi, 2002). It is also essential for the controller to activate the turbine brake on detecting the cut-out wind speed of 25 m/s and to deactivate the brake if the wind speed is below 25 m/s. A manual override control panel is included to shut down the turbine in emergencies or for maintenance purposes. A suitable block diagram is illustrated in Fig. 1.

A brief description of each component in the system is as follows. The analogue inputs, namely the rotor RPM, generated voltage, temperature and anemometer signals, are first signal-conditioned (Horowitz & Hill, 1995); this involves analogue processing techniques such as amplification and level shifting. Thereafter, they are converted to an 8-bit digital format by the A/D converter operating at a clock frequency of 500 kHz. The output of the A/D converter (ADC0809) is taken to the 8051 microcontroller for further processing as per the program code. An assembly language code controls the microcontroller operation. The analogue data is displayed on a 20x4 LCD screen, which gives the user information about the wind speed, temperature and turbine operation. An RS-232 connection (using a MAX232CPE) also gives a remote user vital information about the turbine operation. Data logging is also possible with this kind of serial connection.
A manual control panel is the part of the system that overrides the microcontroller action; this is a provision for emergency shutdown of the turbine and for maintenance by the user. The complete circuit diagram is given in Fig. 2(a) and (b) (Ahmed, 2005).
Fig. 1: The 8051-based controller circuit (block diagram: turbine; anemometer, rotor RPM, generated voltage and temperature inputs; signal conditioning and F/V converter; multiplexer and ADC0809; 8051 µC; LCD display; RS-232 interface; manual shutdown panel).
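The measurement chain in Fig. 1 ends in the 8-bit ADC0809, so each parameter is read back as a count of 0-255. A minimal sketch of the reverse scaling is given below; the linear mapping of each sensor range onto the 0-5 V ADC input is an assumption about the signal conditioning.

```python
# Converting raw 8-bit ADC0809 readings (0-255 over the 0-5 V conditioned input)
# back to engineering units, using the measurement ranges listed in Section 3.
# The linear scaling is an assumption about how each sensor is conditioned.
RANGES = {
    "temperature_C":   (0.0, 60.0),
    "rotor_speed_rpm": (0.0, 2000.0),
    "wind_speed_ms":   (0.0, 36.0),
    "generator_V":     (0.0, 60.0),
}

def adc_to_value(adc_count: int, quantity: str) -> float:
    lo, hi = RANGES[quantity]
    return lo + (hi - lo) * adc_count / 255.0

# Example: a count of 177 on the anemometer channel corresponds to ~25 m/s,
# i.e. the cut-out wind speed at which the brake is applied.
print(f"{adc_to_value(177, 'wind_speed_ms'):.1f} m/s")
```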
The manual control panel circuit consists of three switches:
(i) Backlight switch: this simply toggles the backlight of the LCD screen.
(ii) Generator connect switch: the user can manually connect or disconnect the generator from the charging system by activating or deactivating the generator relay.
(iii) Brake override switch: the user can activate the brake regardless of wind speed. There is an OR relationship between the override switch and the software activation condition (i.e. wind speed greater than 25 m/s). The operation is illustrated further in Table 1.

Table 1: The operational status of the manual control panel

Override switch status   Software activation status    Brake signal output
OFF                      OFF (wind speed < 25 m/s)     OFF
ON                       OFF (wind speed < 25 m/s)     ON
OFF                      ON (wind speed > 25 m/s)      ON
ON                       ON (wind speed > 25 m/s)      ON
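The brake decision of Table 1 is a simple OR of the manual override and the software wind-speed condition. A minimal sketch (not the actual 8051 assembly routine) is:

```python
CUT_OUT_WIND_SPEED = 25.0  # m/s, from the design specification

def brake_signal(override_switch_on: bool, wind_speed_ms: float) -> bool:
    """OR relationship of Table 1: the brake is applied when either the manual
    override switch is on or the measured wind speed exceeds the cut-out value."""
    return override_switch_on or wind_speed_ms > CUT_OUT_WIND_SPEED

assert brake_signal(False, 20.0) is False
assert brake_signal(True, 20.0) is True
assert brake_signal(False, 30.0) is True
assert brake_signal(True, 30.0) is True
```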
An 8051 assembly code was written based on the following pseudo-code (Ahmed, 2005):

Start
Fetch adc data {select an analogue channel, activate ALE of the 8051, start adc conversion, check EOC status, store data}
Convert binary to ASCII {load Acc with binary data, go to conversion subroutine, store ASCII data}
Check switches {if wind speed > 25 activate brake, else check switch status; if brake switch is on, activate brake relay; if generator switch is on, activate generator relay}
Transmit serial data {set baud rate, set TX mode, send data to buffer, if TI flag is set then clear TI flag}
Send data to LCD {initialise LCD if busy flag low, then send display data, else exit}
Create time delay {set timer, set countdown, stop when count = 0}
Go to fetch adc data
End
4.0 RESULTS
The circuit operates from a 12 V dc supply, and potentiometers are used to represent the sensors; each can vary an input voltage from 0 V to 5 V to represent a particular parameter. The input data is sampled every 10 seconds; this period can be adjusted by setting the appropriate value in the time-delay routine of the assembly code. The circuit was powered and the manual overriding switches were switched on and off. The RS-232 serial connection was made to a PC with the HyperTerminal program open.
Fig. 2a: The circuit diagram of the controller.
Fig. 2b: The circuit diagram of the controller (continued).

The program settings were: microcontroller clock frequency = 11.0592 MHz; baud rate = 9600; 1 stop bit; 8-bit data; no parity bit. It was confirmed that data was being transmitted to the PC.
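With an 11.0592 MHz crystal, the 9600 baud setting has zero timer error, which is one reason this crystal value is commonly used with the 8051 UART. A small sketch of the standard Timer 1 reload calculation (assuming mode 2 auto-reload and SMOD = 0) is:

```python
# Reload value for 8051 Timer 1 in mode 2 (8-bit auto-reload) used as the baud
# rate generator.  With SMOD = 0 the UART baud rate is fosc / (384 * (256 - TH1)).
FOSC = 11_059_200      # crystal frequency used in this design, Hz
BAUD = 9_600           # serial setting used in this design

th1 = 256 - FOSC // (384 * BAUD)
print(f"TH1 reload value: {th1} (0x{th1:02X})")               # -> 253 (0xFD)
print(f"actual baud rate: {FOSC / (384 * (256 - th1)):.0f}")  # -> 9600, zero error
```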
4.1 Observations
(i) The circuit activated the brake relay once the cut-out wind speed was reached. All the parameters were seen on the LCD screen and in the HyperTerminal window of the PC.
(ii) Relay 1 is used as the switch for connecting/disconnecting the generator while relay 2 is used for activating/deactivating the turbine brake. Both operated as per the program code.
(iii) The system action was exactly as per the assembly code written; i.e. the cut-out wind speed of 25 m/s was observed.
(iv) The system has maximum power consumption when both relays are on and the LCD backlight is also on; in this case the current drawn from the source is 226 mA, giving a power consumption of 0.27 W.

5.0 CONCLUSION
Wind energy is an innovative and cost-effective method of tapping energy from the environment. All wind turbines need some sort of power control: although wind turbines are designed for average wind speeds, they need to be shut down in strong winds or stormy weather to avoid damage to the turbine and to protect the generator. The control circuit was quite a challenge to design and implement. From the results obtained, the designed system operated as expected and met the design specifications. It was noticed that although the hardware design was fixed, it was easy to add new features to the system by simply modifying the assembly code. A further extension of the design would be to develop PC data-logging software to monitor and record the remote serial data originating from the controller circuit. A copy of the complete assembly code is available from the authors on request.

REFERENCES
Ahmed, S. A. (2005), A controller for a wind driven micropower electric generator, unpublished BSc (Electrical & Electronic Eng.) final year project report, University of Nairobi.
Danish Wind Industry Association (2003), http://www.windpower.org
Energy Center of Wisconsin wind power web site (2005), http://www.wind.ecw.org
Horowitz, P. & Hill, W. (1995), The Art of Electronics, Cambridge University Press.
Mazidi, M. A. & Mazidi, J. G. (2002), The 8051 Microcontroller and Embedded Systems, Prentice Hall.
ITDG (Intermediate Technology Development Group) (2005), Wind Electricity Generation, http://www.itdg.org
REINFORCEMENT OF ELECTRICITY DISTRIBUTION NETWORK ON PRASLIN ISLAND
Suresh Vishwakarma, Principal Engineer, PUC - Electricity Division, Republic of Seychelles
ABSTRACT
Praslin, the second biggest island of the Seychelles archipelago, is a major contributor to the economy of this tourism-based country, so a reliable electricity supply network on the island is of significant importance. The electricity transmission and distribution system on Praslin was developed and expanded more than 25 years ago. Within a restricted scope of expansion, some additional developments were made in subsequent years to meet the increasing electricity demand and to keep various parameters within allowable limits at the points of usage. The fact that the majority of the island is a protected natural reserve was another reason for not building a large T&D network at the initial stage. However, since the Public Utilities Corporation began supplying electricity, and particularly during the past ten years, the airport and many big hotels and tourism establishments have come up, demanding bulk supply of power as well as a high index of system reliability. The peak-hour electricity demand has grown to 5 MW within 25 years. The voltage profile on one of the distribution feeders has already approached the maximum allowable limits; at a similar growth rate of demand, the voltage profile in the last sections of this feeder will cross the allowable limits within the next 5 years. Moreover, under the present configuration, the last sections of 2 of the 4 distribution feeders do not have open-ring supply arrangements, so there are no flexible supply arrangements in these 2 sections in the event of any major fault or breakdown on either of them, and they supply power to many important tourism establishments and the airport. The future growth in electricity demand is therefore highly likely to make the reliability of the power supply vulnerable due to the single-circuit radial configuration. This paper compiles a study of the various parameters of the existing distribution network and a critical assessment of the network performance with respect to the existing and future electricity demand. Lastly, some proposals are suggested to upgrade the system and to interconnect and reinforce the distribution network to meet the future demand.

Keywords:
Seychelles, Praslin Island, La Digue Island, 11 kV Vallee De Mai Feeder, 11 kV Cote D'or Feeder, 11 kV Baie St. Anne Feeder, Cable, Conductor, Voltage Regulation, Line Efficiency.
1.0 INTRODUCTION
Seychelles is one of the world's most beautiful archipelagos, located 4° south of the equator in the western part of the Indian Ocean, north of Madagascar and 1,593 km east of Mombasa, Kenya. There are 115 islands in the entire archipelago, of which only 10 are inhabited. The majority of the total 455 sq km of land area is conserved as national parks and reserves; for comparison, Seychelles is about two and a half times the size of Washington DC. The capital city, only urban centre and port is Victoria, located on the largest island, Mahe. The other colonised islands are Praslin and La Digue. Praslin is the second most populous island of Seychelles, with an area of 40 square kilometres and a population of 7,500. The distribution network on this second biggest island of Seychelles has been developed and expanded since the year 1981 to meet the increasing demand for electricity. Electricity is generated on Praslin Island by diesel-based generating stations, and 11 kV distribution feeders emanate from the generating station. The data below are indicative:
- Maximum demand = 5 MW
- Number of consumers = 3,000
- Route length of 11 kV lines = 56 km
- Distribution transformer capacity = 10 MVA
All distribution feeders on Praslin are partly underground and partly overhead. Due to the topography of the island, their routes run partly along the road and partly over the mountains or through valleys, which often makes maintenance difficult, especially during inclement weather. The energy generated at the power station is distributed to consumers via overhead lines, underground cables and undersea cable networks. Except in the areas designated as National Reserve, almost all medium and low voltage distribution on Praslin is by overhead lines. Electricity supply is available to almost all existing household and business premises. The route length of the 11 kV overhead lines is 40 km and of the underground cable is 6 km; the low voltage distribution network is quite extensive, with a route length of approximately 110 km. The following 11 kV feeders emanate from the generating station:
1. Baie St. Anne Feeder
2. Cote D'or Feeder
3. Vallee De Mai Feeder
4. La Digue Feeder
The Baie St. Anne feeder is the shortest and the lightest loaded feeder. The Cote D'or feeder is a comparatively longer feeder. The Vallee De Mai feeder is the longest feeder. These 3 feeders are on Praslin Main Island. La Digue Feeder supplies electricity to the neighbouring La Digue Island about 8 km away from Praslin. Consumers on La Digue Island receive their supply via an undersea cable laid in the year 1985. This is the longest feeder emanating from Praslin Generating station.
Fig. 1. Marine parks and protected natural resource reserves at Praslin Island
2.0 GEOGRAPHICAL FEATURES OF PRASLIN ISLAND
The map of Praslin Island in Fig. 1 shows the protected natural reserves and the surrounding marine parks. The nearby La Digue Island, about 8 km from Praslin, is not visible in the picture. As seen, the international airport is located in the north-west part of the island. The major part of the island is composed of granite-based mountains, and very little flat land is available along the sea coast. Many tourism establishments already exist on Praslin as well as on the surrounding islands; apart from new ones coming up, most of the existing ones are in the process of expansion. An undersea cable was laid to supply electricity to La Digue Island in the year 1985. An additional undersea cable was laid in 2005 up to Round Island (seen at the top of the map) to meet the demand, and plans are underway to extend this cable from Round Island to La Digue Island to meet the increasing demand.

3.0 11 kV VOLTAGE NETWORK AT PRASLIN ISLAND
Fig. 2 shows the existing 11 kV distribution network at Praslin Island, including the locations of the generating station and the submarine cable to La Digue Island. The three feeders are drawn in different colours. The Vallee De Mai feeder is the longest whereas the Baie St. Anne feeder is the shortest. As apparent from the sketch, the Vallee De Mai and Cote D'or feeders become radial after a certain length; in the event of any major fault in their last sections, a large area remains off supply due to the unavailability of multiple supply arrangements.
Fig. 2. Existing 11 kV distribution network at Praslin Island
Table 1: Technical specifications of each feeder

Sr.  Characteristic               Baie St. Anne    Cote D'or       Vallee De Mai   La Digue
1    Total route length           9.6 km           13.2 km         13.8 km         19.55 km
2    Undersea route length        Nil              Nil             Nil             8 km
3    Underground route length     0.6 km           0.8 km          4.8 km          1.55 km
4    Overhead route length        9 km             11.2 km         9 km            1 km
5    Underground cable size       95 mm² copper    95 mm² copper   95 mm² copper   95 mm² copper
6    Overhead conductor size      50 mm² AAC       50 mm² AAC      50 mm² AAC      50 mm² Al cable
7    Connected transformers       10 nos.          26 nos.         20 nos.         8 nos.
8    Total installed capacity     935 kVA          3595 kVA        4465 kVA        1545 kVA
9    Current maximum demand       545 kVA          1549 kVA        1950 kVA        1194 kVA
10   Maximum current in feeder    30 A             86 A            102 A           64 A

4.0 CHARACTERISTICS OF 11 kV FEEDERS AT PRASLIN
Some important technical specifications of each feeder are tabulated above (Table 1). The chart in Fig. 3 below shows the daily loading on the above 3 feeders recorded in January 2006.
8. 120 ._. 100
_.i
80
0
@ @ @ @ @ @ @ @ @ .@ .~ .@ .@ .@ @ @ .@ @ @ .@ .@ .@ .@ .@ Time in h o u r s
Fig.3. Chart o f daily loading on the above 3 feeders recorded in January 2006
5.0 T E C H N I C A L D A T A O F 11 K V F E E D E R S O N M A I N I S L A N D Basic Technical Data o f Baie St. A n n e F e e d e r is presented in Table 2. As can be c o n c l u d e d from the table, the m a x i m u m loading on the feeder remains b e t w e e n 18:00 hours till 21:00 hours with the p e a k load around 19:00 hours.
441
International Conference on Advances in Engineering and Technology
Table 2. Basic technical data for Baie St. Anne Time
P/kW
PF
Ph3/ kW 11.0
Phl/ Amps 18.2
Ph3/ Amps 14.8
F/Hz
kVA
50.0
315
00:00
268
0.848
Phl / kW 11.0
01:00
228
0.808
11.0
11.0
18.0
12.9
50.2
280
02:00
222
0.810
11.0
11.0
14.1
12.6
50.1
273
03:00
222
0.802
11.0
11.0
15.1
12.6
50.1
281
04:00
212
0.813
11.0
11.0
13.9
12.0
50.2
261
05:00
302
0.843
11.0
11.0
15.5
13.0
50.1
333
06:00
291
0.858
11.0
11.0
18.6
14.0
50.1
336
07:00
304
0.853
11.0
11.0
22.4
18.9
50.1
447
08:00
359
0.879
11.0
11.0
23.0
19.4
50.1
395
09:00
315
0.848
11.0
11.0
22.3
19.0
50.1
435
10:00
348
0.833
11.0
11.0
19.4
17.3
50.1
387
11:00
388
0.833
11.0
11.0
21.8
18.6
50.0
398
Ph3/ kW
Phl/ Amps
F/Hz
kVA
Table 2 (continued) Time
P/kW
PF
Phl / kW
Ph3/ Amps
12:00
345
0.846
11.0
11.0
21.4
18.8
50.2
391
13:00
353
0.829
11.0
11.0
20.9
18.7
50.2
412
14:00
385
0.829
11.0
11.0
19.7
18.8
50.1
299
15:00
335
0.843
11.0
11.0
20.4
18.4
50.1
370
16:00
334
0.863
11.0
11.0
23.4
19.1
50.2
439
17:00
332
0.836
11.0
11.0
20.4
16.6
50.0
378
18:00 19:00 20:00
430 479 502
0.895 0.909 0.919
11.0 11.0 11.0
11.0 11.0 11.0
26.2 28.3 29.8
21.6 23.7 24.8
50.1 50.0 50.0
483 535 545
21:00
427
0.912
11.0
11.0
27.3
22.0
50.1
483
22:00
334
0.865
11.0
11.0
21.1
17.0
50.1
378
5.1 Line Details
Total length of feeder = 9.6 km (cable length: 0.65 km, conductor length: 8.95 km)
Impedance of 50 mm² Cu cable = 0.26 + j0.1 Ohms/km
Impedance of cable in feeder = (0.26 + j0.1) x 0.65 = 0.169 + j0.065 Ohms
Impedance of 50 mm² conductor = 0.609 + j0.3 Ohms/km
Impedance of conductor in feeder = (0.609 + j0.3) x 8.95 = 5.45 + j2.685 Ohms
Therefore, total impedance = 5.62 + j2.75 Ohms
Table 3. Basic Technical Data for Cote D'or Feeder: Time
P/kW
PF
00:00
644
0.85
Phl / kW 11.0
Ph3/ kW 11.0
Phl/ Amps 40.3
Ph3/ Amps 45.0
F/Hz
kVA
01:00
617
0.83
11.0
11.0
38.6
42.6
50.3
813
50.2
764
02:00
643
0.83
11.0
11.0
39.6
42.4
50.2
03:00
631
0.83
11.0
11.0
39.0
40.6
50.2
760 758
04:00
608
0.83
11.0
11.0
38.1
41.6
50.2
726
05:00
613
0.84
11.0
11.0
37.3
40.5
50.2
728
06:00
742
0.88
11.0
11.0
43.4
47.0
50.2
847
07:00
802
0.88
11.0
11.0
46.0
50.8
50.1
922
08:00
945
0.89
11.0
11.0
51.5
56.0
50.0
1003
09:00
1224
0.89
11.0
11.0
63.4
72.0
50.2
1223
10:00
1274
0.91
11.0
11.0
73.2
65.2
50.3
1388
11:00
1131
0.90
11.0
11.0
68.8
75.7
50.1
1384
12:00
1196
0.89
11.0
11.0
65.0
72.0
50.0
1321
13:00
946
0.88
11.0
11.0
55.4
60.8
50.0
1075
14:00
965
0.87
11.0
11.0
58.9
62.8
50.0
1127
15:00
1104
0.89
11.0
11.0
65.2
71.4
50.1
1222
16:00
995
0.88
11.0
11.0
57.3
64.5
50.2
1152
Ph3/ kW 11.0
Ph 1/ Amps 59.4
Ph3/ Amps 60.7
F/Hz
Table 3 (continued) Time
P/kW
PF
17:00
942
0.88
Ph 1 / kW 11.0
18:00
1271
0.92
11.0
11.0
73.5
75.5
50.0
1432
19:00 20:00
1451 1367
0.93 0.92
11.0 11.0
11.0 11.0
82.2 77.9
86.2 81.0
50.0 50.0
1549 1519
50.1
kVA 1061
21:00
1370
0.92
11.0
11.0
79.9
81.0
50.1
1462
22:00
959
0.89
11.0
11.0
60.0
59.4
50.0
1134
23:00
777
0.86
11.0
11.0
46.7
58.3
50.0
1000
As seen, the maximum loading on the feeder occurs between 18:00 hours and 21:00 hours, with the peak load around 19:00 hours.

5.2 Line Details
Total length of feeder = 13.2 km (cable length: 1 km, conductor length: 12.2 km)
Impedance of 50 mm² Cu cable = 0.26 + j0.1 Ohms/km
Impedance of cable in feeder = (0.26 + j0.1) x 1 = 0.26 + j0.1 Ohms
Impedance of 50 mm² conductor = 0.609 + j0.3 Ohms/km
Impedance of conductor in feeder = (0.609 + j0.3) x 12.2 = 7.43 + j3.66 Ohms
Total impedance = 7.69 + j3.76 Ohms
Table 4. Basic Technical Data of Vallee De Mai Feeder I Time
I P/kW
PF
Phl/ I Ph3/ Amp: Amps 00:00 i 1001 I 0.88 58.2 60 01:00 I 870 0.84 54 54.3 02:00 I 863 0.83 53.4 54.3 03:00 I 870 0.83 52.2 54.9 04:00 I 816 0.85 49.8 50.4 05:00 I 850 0.85 53.1 52.8 06:00 I 1096 0.9 63.9 64.2 07:00 I 1166 0.9 66.6 67.8 08:00 i 1235 o.89 ]] I 11 66.6 72.6 09:00 I 1386 0.85 !! ! 11 79.2 85.5 10:00 I 1448 0.85 10.9 I 11 87.6 89.4 11:00 I 1582 0.86 10.9 I 11 89.1 96.9 PI~tIOI i | t [ [ 0 1 R0]Rt[1 i | i ilOI~l ngl 84.6 |tlOlOi l | l | l R 0 1 : t ~ l / [OI~i i l l Dill 77.1 [ l ~ l I O l / [ [ 9 1 RI]Kt[ ! i [O]~i i l O ~ i l : ~ l 91.8 [ l l [ O I O l / i U / l O ] K t ! / lOl~i / [O]~i 1 1 ~ 1 82.8 [~ItIII/l~ll ll]g][ ! / [ t ~ 1 / [t]~i l:ll 86.4 IltIIi/|~/ill:}!/[l~i / [ll~l//~l 87 [:]~lIIi / I [ I ] i l t l ~ l / / [ t ~ i / I l/Ill 75.9 ~ l l I I I / l:| l ~ ! i l ] ~ J L ! / l l ] ~ l / I / / [l| 102.3 [lllItl / l~gtr / l l I ~ i / 1/ / ~ 92.4 1I~lItl / ![ w[ [ / I I I ~ ] r 82.8 lPItIII/llP//l~//1//1///il 76.2 Kl~ll I i l I ~ / ~ ~ Ii111 IlU 63 i
i
i
]
i
i
i
Phl/ kW 10.9 10.9 10.9 10.9 10.9 10.9 10.9 il
I Ph3/ I kW I 11 I 11 I 11 I 11 I 11 I 11 I 11 : 11
F/Hz
kVA
50.2 50.1 50.1 50.1 50.2 50.2 50.1 50.1 50.1 50.1 50.2 50.1 50.2 50.2 50.2 50.2 50.2 50.2 50.2
1143 1035 1035 1046 960.2 1006 1223 1292 1383 1629 1703 1846 1597 1469 1733 1563 1631 1642 1446
50.2
1950
50.2 50.2 50.2 50.2
1760 1578 1452 1200
|
|
|
|
|
|
|
|
i
|
i
|
|
The loading on this particular feeder remains fairly high throughout the day hours from 09:00 hours onwards till 22:00 hours with the peak load around 19:00 hours.
5.3 Line Details Total length of feeder Conductor length: Impedance for 50 mm 2 Cu Cable Impedance for Cable in feeder Impedance for 50 InII12 Conductor Impedance for Conductor in feeder Total Impedance
= 13.8 km (Cable length: 4.83 km, -9km) = 0.26 + j 0.1 Ohms / km =(0.26 +j 0.1)4.83 Ohms = 1.2558 +j 0.483Ohms = 0.609 + j 0.3 Ohms / km = (0.609 + j 0.3) x 9 Ohms = 5.481 + j 2.7 Ohms = 6.7368 + j 3.183 Ohms.
5.4 Sample Calculation of Utilization Factor, Voltage Regulation, Peak Power Loss and Line Efficiency of Baie St. Anne Feeder (as in January 2006):
Calculation of Voltage Regulation
444
Vishwakarma
Total length of feeder Installed Capacity of feeder M a x i m u m D e m a n d of feeder
= 9 . 6 km = 935 k V A = 545 kVA
Utilization Factor (UF)
= 545 / 935 = 0.58 = 5.62 + j 2.75 Ohms = 5.62 O h m s
Total Impedance Resistance (R) M a x i m u m Current in feeder Average Current in feeder
= = = =
30 Amps M a x i m u m Current x Utilization Factor 30 x 0.58 17.4 Amps
Power factor at M a x i m u m D e m a n d = 0.919 (Corresponding Sin q~ = 0.394) Voltage Regulation = I(R Cos q~ + X Sin q~) = 17.4 [5.62 (0.919) + 2.75 (0.394)] = 17.4 (5.164 + 1.083) = 17.4 x 6.247 - 108.7 Volts (Phase Voltage) = 108.7 (~/3) Volts = 188.28 Volts (Line Voltage) Percentage Voltage Regulation = [ 188.28 / 11000] (100) % = 1 . 7 1 % Sending End Power
Peak Power Loss (Line)
= M a x i m u m D e m a n d in k V A x Power factor = 545 x 0.919 =501 kW - 3 x IZx R x (UF) 2 = 3(30) 2 (5.62) (0.58) 2 = 5.105 k W
5.5 P e r c e n t a g e E f f i c i e n c y (q) It is equal toReceiving End Power / Sending End Power) x 100% =[(Sending End P o w e r - Losses) / Sending End Power] x 100% = [(501 - 5.1) / 501] x 100% Therefore, Percentage Efficiency (q) = 98.98 %
On the basis of above calculations the same parameters have been calculated for the other two feeders. The values obtained are compiled in the table on following page. Table 5. Comparative performance of 11 kV Feeders at Praslin in year 2006 Name of Installed Max. DeUtilization VoltageReguPeak Power feeder Capacity mand factor lation Loss
Efficiency at peak load
445
International Conference on Advances in Engineering and Technology
Baie St. Anne Cote D'or Vallee De Mai
kVA
kVA
Uv
%
kW
%
935
545
0.58
1.71
5.105
98.98
3595
1549
0.431
4.99
31.84
97.78
4465
1950
0.436
5.2
39.971
97.8
It can be seen that the present values of different technical parameters are within limits. However efficiency at peak load of Cote D'or and Vallee De Mai feeders is approaching towards maximum allowable limits. 6.0 TREND OF LOAD G R O W T H AT PRASLIN The rate of growth of load on different feeder has shown a tremendous change during past few years due to fast growing infrastructure on the Island especially in the tourism sector. The installed capacity has been increased 3 times from the initial commissioning of generating station in view of upcoming demand. Moreover the maximum demand has also been increasing at the same pace during past 25 years ever since the commissioning of independent generating station. The graph in Fig. 4 below shows the continual increase in the installed capacity vis-/t-vis the increasing maximum demand at Praslin. If the maximum demand keeps increasing at the same rate, then it may reach 6 MW by the year 2010 and 7 MW by the year 2015. 7.0 PROBLEMS LIKELY TO OCCUR IN THE N E T W O R K IN FUTURE Considering an approximate rate of 3 % annual rise in the transformer installed capacity and an approximate rate of 5 % annual rise in maximum demand on all feeders, using the same pattern of calculations sampled in the Technical Manuals listed in the reference section., the predicted performance of 11 kV Feeders at Praslin by the year 2011 shall be as in Table 6.
The under noted observations can be made from the different values tabulated above: 1. The predicted values of Voltage Regulation and Line efficiency at peak load of Bale St. Anne feeder shall remain within allowable limits till the year 2011. 2. The predicted values of Voltage Regulation and Line efficiency at peak load of Cote D'or feeder shall reach the maximum allowable limits demanding improvement in near future. 3. The predicted values of Voltage Regulation and Line efficiency at peak load of Vallee De Mai feeders shall exceed the maximum allowable limits demanding immediate improvement. Apart from initiating necessary procedures to control the exceeding values of Voltage Regulation and Line Efficiency, as mentioned in the abstract, 1. The last sections Cote D'or and Vallee De Mai feeders presently do not have open ring
446
Vishwakarma
supply arrangements. There are no flexible supply arrangements in their last sections in the event of any major fault or breakdown. It needs redress. More than 60% of the load on Vallee De Mai feeder is supplied by the last 3.8 km of its route length. The Airport, important hotels and establishments are fed from its last sections. The existing network therefore demands reinforcement.
.
Installed Capacity & Maximum Demand at Praslin t4,000
._.E 12,000 "o 10,000 E
8,000
-o a
6,000
~- ~ ~E -- •
4,000
=E
0
e,-
, _
2,000
Year ,,,,,,,,,,Installed Capacity in KW ~ M a x i m u m
Demand in KW
Fig. 4. Installed capacity and peak demand curves at Praslin Island
Table 6. Prdicted performance of 11 kV feeders at Praslin by 2011 Name of feeder
Installed Capacity
Max. Demand
Utilization factor
Voltage Regulation
kVA
kVA
UF
1075
685
4145 5140
Efficiency at peak load
%
Peak Power Loss kW
0.637
2.25
8.866
98.6
1940
0.468
6.4
52.38
97
2450
0.476
7.17
75.73
96.7
%
Baie St. Anne Cote D'or Vallee De Mai
8.0 S U G G E S T I V E S O L U T I O N S TO R E D R E S S THE S I T U A T I O N
The under noted corrective actions may be applied to redress the problems likely to occur in the network during forthcoming years. 1. Installing Capacitor Banks for reactive kVAR compensation. 2. Reducing overall Impedance of the feeder by upgrading existing conductor / cable. 3. System Reconfiguration by reducing length of feeder or transfer of some of its load. 4. Inter connection of radial sections to ensure multiple supply arrangements.
447
International Conference on Advances in Engineering and Technology
These actions need to be analyzed from different technical and economic aspects before applying them to the system.
8.1 Installation of Capacitor Banks for Reactive Kvar Compensation" Capacitors are installed in the Distribution network to achieve power and reduce energy loss. They help to maintain voltage profile within allowable limits at the desired points.
Reactive kVAR compensation requirement for Vallee De Mai feeder: Predicted Peak load on Feeder in year 2011 : Corresponding kVA Corresponding kVAR Load power factor at Predicted peak load Desired load power at Predicted peak load Under present load power factor, Cos q~l Corresponding Sin q~l Corresponding Tan q~l Under assumed improved load power factor, Cos r Corresponding Sin q~2 Corresponding Tan q~2 kVAR to be compensated to improve load power factor to 0.98
= 2271 kW = 2450 kVA = 883 kVAR =0.932 =.0.98 =0.932 =0.362, =0.388 =0.98 =0.198, =0.203 = kW(Tan q)l
-
-
Tan
q~2) = 2271 ( 0 . 3 8 8 0.203) = 420 kVAR = 450 kVAR (approx.) Using the same pattern of calculations sampled in Article 5, the predicted values of rameters upon 450 kVAR compensation are tabulated below: Sr. 1 2 3 4
Parameter(s) Voltage Regulation Line Efficiency Line Loss Annual Energy Loss
Situation Before 7.17 % 96.7 % 20.71 kW 181,412 kWh
SituationAfter 6.6 % 97 % 18.76 kW 164,382 kWh
pa-
Results Achieved Improved by 0.55 % Improved by 0.25 % Reduced by 9.4 % Reduced b 9.4 %
The above corrective action of Reactive kVAR Compensation is quite result oriented. Moreover existing network is quite compatible for installing such shunt capacitors. A set of 3 capacitor units may therefore be installed at strategic locations especially in the last sections to improve the exceeding parameters within allowable limits.
8.2 Reducing Overall Impedance of Feeder by Upgrading Existing Conductor/Cable The effective component of Resistance and Reactance of the existing conductor / cable forms the overall impedance of the feeder. The impedance value should be as low as possi-
448
Vishwakarma
ble to reduce line losses. Impedance of the Vallee De Mai feeder could be reduced to some extent by extending the existing underground cable network or augmenting the conductor size whichever feasible. Considering the geographical locations, environmental aspects, way-leave constraints and other associated factors prevailing at this eco-sensitive I s l a - Praslin, it would be quite tough to augment the existing conductor size from AAC 50 mm 2 to a higher size. However, the existing underground cable laid up to 4.83 km route length can be further extended up to 6.3 km for a route length of approximately 1.5 km The total impedance of feeder shall improve from existing value of 5.481+ j 2.7 Ohms to 6.205 + j 2.88 Ohms by such extension of cable. Using the same pattern of calculations sampled in Article 5, the predicted values of parameters upon reducing overall impedance are tabulated below: St. 1 2 3 4
Parameter(s) Voltage Regulation Line Efficiency Line Loss Annual energy loss
Situation Before 6.6 % 97 % 18.76 kW 164,382 kWh
Situation After 6.05 % 97.25 % 17.24 kW 151,051 kWh
Results Achieved Improved by 0.55 % Improved by 0.25 % Reduced by 8.1% Reduced by 8.1%
This corrective action of reducing the overall impedance of the feeder is equally effective and gives positive results. Moreover, there is no constraint on extending the underground cable by 1.5 km. Extension of the underground cable by 1.5 km may therefore be carried out to bring the exceeding parameters back within allowable limits.

8.3 System Reconfiguration by Reducing Length and Transferring Load
System reconfiguration involves reorganizing and rearranging the existing facilities in the system, eventually resulting in improved reliability and reduced supply interruptions, both in duration and in frequency. System reconfiguration may involve the creation of additional feeders or switching arrangements in the feeders. The following arrangements are suggested for reconfiguring the existing network at Praslin:
• Possibilities of reducing the length(s) of the Vallee De Mai and Cote D'or feeders can be explored by creating additional feeders or transferring some load to other feeders. This would involve extension of the existing overhead/underground network.
• As discussed in Article 8.2, the extension of the underground cable of the Vallee De Mai feeder by a distance of 1.5 km shall facilitate the creation of independent feeders at an ideal geographical location at Pasquiere, where all 3 overhead feeders on Praslin could be operated in a primary loop or open ring system.
• Under the present configuration, any abnormality in the overhead network is cleared by the feeder protection at the beginning of the network at the power station, and large areas suffer from the outage. Creation of an additional switching station can limit the supply outage to small areas.
• The network requires augmentation and strengthening. Introduction of a new sub-station at the said location, as shown in the single line diagram, shall be of great help.
• By installing a switching sub-station, 2 independent feeders can be created, namely the Grand Anse feeder and the Pasquiere feeder, to improve the network's reliability.
• Upon creation of the new switching sub-station at Pasquiere and extension of the underground cable, the 200 kVA load of the Grand Anse Church transformer would transfer to the Baie St. Anne feeder, thus relieving the Vallee De Mai feeder of 200 kVA.
• With the creation of the new feeders, the probability of outage of the cable network due to transient faults on the overhead network shall be drastically reduced or eliminated, thus improving the reliability of the network in the first section (route length 6.3 km). Any failure will then be confined to a well defined geographical area rather than a large area.
• The risk of circuit failure is proportional to the length of the feeder and to its km-kVA loading. The cost of an outage is proportional to the energy not supplied and to consequential losses which cannot be precisely quantified. With the above augmentation and strengthening of the network, the outage cost is expected to be reduced to a great extent.
The suggested reconfiguration, including the reduction of some load on the Vallee De Mai feeder, the creation of independent feeders and the commissioning of a new switching sub-station at a strategic location, is shown in the single line diagram below.
Fig. 7: 11 kV Praslin network

8.4 Interconnection of Radial Sections to Ensure Multiple Supply Arrangements
As discussed in the earlier sections and seen in the network diagram, the radial sections of the Vallee De Mai and Cote D'or feeders need to be interconnected to ensure open ring supply arrangements in their last sections. The distance between their end points is about 3.75 km. A sketch showing the suggested 11 kV interconnection is given below.
(Sketch: suggested 11 kV interconnection between the Cote D'or and Vallee De Mai feeders in the Anse Lazio area.)
In implementing the interconnection, considering the geographical locations and the prevailing environmental constraints, the 11 kV inter-connector between the Cote D'or and Vallee De Mai feeders will have to cross hilly regions and pass along coastal areas. The complete HV interconnection would require the following 4 stages:
1. The existing 2-phase, 11 kV line, for a route length of 0.75 km, shall have to be upgraded to 3-phase, 11 kV up to the end of the Cote D'or feeder at Anse Lazio.
2. A 3-phase high voltage ABC cable shall have to be additionally strung for a route length of 3 km to connect with the end of the Vallee De Mai feeder at Mt. Plaiser.
3. Installation of 11 kV air break isolators at the start and end of the inter-connector.
4. Installation of surge diverters at both ends of the high voltage ABC cable.
The suggested 11 kV interconnection would bring the following improvements to the network:
1. Availability of flexible supply arrangements between the Vallee De Mai and Cote D'or feeders, since either of them can be extended further during a supply outage.
2. Reduction in off-supply durations during faults or routine maintenance works.
9.0 CONCLUSIONS
Upon analyzing the different technical parameters of the 11 kV feeders of the existing distribution network at Praslin, it has been observed that they are within allowable limits and the present performance is quite acceptable. However, considering the growth of the load at the prevailing rate, one of the feeders shall become critical by the year 2011. Using modern engineering principles, several short term and long term solutions have been suggested to reinforce the distribution network, after carrying out the associated calculations and predicting their end results, so as to meet the future demand.

REFERENCES
Annual Reports of PUC - Seychelles (1998 - 2004).
Pabla, A.S., Electric Power Distribution, Tata McGraw-Hill Publishing Company Limited, New Delhi, 5th edition.
Log Sheets of Electricity Generating Stations of PUC - Seychelles.
Manuals and Technical Literature supplied by various Manufacturers.
Vishwakarma, Suresh (2005), High Voltage Network Interconnection and Reinforcement on Praslin Island, Seychelles, EC - UK.
CHAPTER SIX
MECHANICAL ENGINEERING
ELECTROPORCELAINS FROM RAW MATERIALS IN UGANDA: A REVIEW
P.W. Olupot, Department of Mechanical Engineering, Makerere University, Uganda.
S. Jonsson, Department of Material Science and Engineering, Royal Institute of Technology, Sweden.
J.K. Byaruhanga, Department of Mechanical Engineering, Makerere University, Uganda.
ABSTRACT Porcelains are vitrified and fine grained ceramic whitewares, used either glazed or unglazed. Electrical porcelains are widely used as insulators in electrical power transmission systems due to the high stability of their electrical, mechanical and thermal properties in the presence of harsh environments. They are primarily composed of clay, feldspar and a filler material, usually quartz or alumina. These materials are widely available in Uganda, but little research has been carried out on them in relation to technical porcelains let alone their use for the same. Based on the abundance of the requisite materials and the corresponding demand for insulation materials, this paper reviews the current and traditional methods of manufacturing electrical porcelains. The major objective is to review the processes for production of electric porcelains from the basic raw materials including material characterisation methods. Keywords: Porcelain, materials, properties, characterisation, Uganda.
1.0 INTRODUCTION
Porcelains are polycrystalline ceramic bodies containing typically more than about 10 volume percent of a vitreous phase (Cho & Yoon, 2001). The vitreous phase controls densification, porosity and phase distribution within the porcelain and, to a large extent, its mechanical and dielectric properties. Porcelains are widely used as insulators in both low and high voltage applications, mainly due to the high stability of their electrical, mechanical and thermal properties in harsh environments (Kingery, 1967; Bribiesca et al, 1999). They are classified as triaxial, steatite or non-feldspathic types depending on the composition and the amount of vitreous phase present. Non-feldspathic porcelains are found in the system MgO-Al2O3-SiO2 (Buchanan, 1991). The raw materials used are talc (3MgO·4SiO2·2H2O), kaolinite clays, and alkaline earth fluxes such as
BaCO3 and CaCO3. These porcelains are typically of higher purity than the triaxial porcelains, with superior dielectric properties, but are more difficult to produce because of a narrower sintering range. Triaxial porcelain forms the large base of commonly used porcelain insulators for both low and high tension insulation. It is considered one of the most complex ceramic materials and the most widely studied ceramic system (Dana et al, 2004), yet significant challenges remain in understanding it in relation to raw materials, processing science, and phase and microstructure evolution (Carty & Senapati, 1998). Triaxial porcelains are made from a mixture of the minerals clay, flint and feldspar. The clay [Al2Si2O5(OH)4] gives plasticity to the ceramic mixture; flint or quartz (SiO2) maintains the shape of the formed article during firing; and feldspar [KxNa1-x(AlSi3)O8] serves as flux. These three place triaxial porcelain in the phase system (K,Na)2O-Al2O3-SiO2 in terms of oxide constituents (Buchanan, 1991). The fired product contains mullite (Al6Si2O13) and undissolved quartz (SiO2) crystals embedded in a continuous glassy phase originating from the feldspar and other low melting impurities in the raw materials. By varying the proportions of the three main ingredients, it is possible to emphasize certain properties, as illustrated in Figure 1 (Thurnauer, 1954). The above mentioned materials are widely available in Uganda, but there is no evidence of their successful use for porcelains in the country. This paper reviews the current and traditional methods of producing electrical porcelains. The objective is to give an understanding of the process for production of triaxial electric porcelains from raw materials.
Figure 1: XRD patterns of some of Uganda's feldspar deposits (Olupot et al, in press). Key: M - microcline, ordered (KAlSi3O8); A - albite (NaAlSi3O8); Q - quartz; patterns shown for the Lunya and Mutaka feldspars (x-axis: 2-theta, degrees).
Figure 2: Porcelains of various properties in the triaxial diagram (Thurnauer, 1954).
2.0 RAW MATERIALS FOR TRIAXIAL PORCELAIN
2.1 Kaolin
Kaolin is a clay material that consists primarily of the clay mineral kaolinite, Al2Si2O5(OH)4. It is white when dry and white when fired. In terms of origin, kaolins exist as residual and sedimentary kaolins (Norton, 1970; Dombrowski, 1998; Murray, 1998). A pure kaolinite crystal has the composition Al2O3·2SiO2·2H2O, giving it a theoretical composition of 39.8 wt% Al2O3, 46.3 wt% SiO2, and 13.9 wt% H2O. Used alone, kaolin is difficult to shape into objects because of its poor plasticity. Its refractoriness also makes it difficult to mature by firing into a hard, dense object. In practice, other materials are added to it to increase its workability and to lower the kiln temperature necessary to produce a hard, dense product. Because of its coarse grain structure, kaolin has little dry strength and a low firing shrinkage.
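The theoretical composition quoted above follows directly from the molar masses of the constituent oxides. The sketch below is a minimal illustration of that arithmetic using rounded molar masses (assumed, not from the paper); small differences from the quoted 39.8/46.3/13.9 wt% are due to rounding.

```python
# Theoretical oxide composition of kaolinite (Al2O3.2SiO2.2H2O) from approximate molar masses.
M = {"Al2O3": 101.96, "SiO2": 60.08, "H2O": 18.02}   # g/mol, rounded
formula = {"Al2O3": 1, "SiO2": 2, "H2O": 2}           # moles per formula unit

total = sum(n * M[ox] for ox, n in formula.items())
for ox, n in formula.items():
    print(f"{ox}: {100 * n * M[ox] / total:.1f} wt %")   # ~39.5, ~46.5, ~14.0 wt %
```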
2.2 Ball Clay
Ball clay consists largely of the mineral kaolinite, but with a smaller crystal size than that of other clays. The reasons for including ball clays in whiteware bodies are: increased workability of the body in the plastic state (Singer & Singer, 1979), development of increased green strength, increased fluidity imparted to casting slips, and the fluxing ability of some ball clays. The amount of ball clay in the whiteware, however, has to be controlled because in most cases ball clays contain substantial amounts of iron oxide and titania, which impair the whiteness of the fired bodies and reduce the translucency of vitreous ware. If whiteness is desired, not more than about 15% of ball clay can be added to a clay body (Powell, 1996). In addition, the large amount of water that must be added to develop high plasticity results in large shrinkage during drying. Therefore ball clays cannot completely replace kaolins in a ceramic body without causing cracking and warpage of the ware.
2.3 Feldspar
Feldspars of importance to ceramics are aluminosilicates of sodium, potassium, and calcium (Jones & Berard, 1993). They are used as fluxes to form a glassy phase in bodies, thus promoting vitrification and translucency. They also serve as a source of alkalis and alumina in glazes. The pure spars are albite (NaAlSi3O8), orthoclase or microcline (KAlSi3O8) and anorthite (CaAl2Si2O8). The soda spars are used in glasses and glazes, and the high potash spars in whiteware bodies. Potassium feldspar (KAlSi3O8) enables the broadest firing range and the best stability of the body against deformation during firing. Sodium feldspar (NaAlSi3O8) exhibits a lower viscosity than potassium feldspar when melted at a given temperature. This enables vitrification at lower temperatures but carries the risk of increased deformation. Calcium feldspar (CaAl2Si2O8) has an extremely narrow firing range. Figure 2 shows the mineralogy of some of Uganda's feldspar deposits.
2.4 Quartz
Silica, SiO2, occurs in nature as a dense rock (quartzite) and as silica sand. Sand is the preferred raw material for ceramics as it does not need the energy-consuming crushing process. Quartzites, however, dissolve more rapidly than sand in the molten phase, as indicated by the transformation rate to cristobalite during heating (Schuller, 1997). Quartz is added to ceramic bodies as a filler. Fillers are minerals of high melting temperature that are chemically resistant at commercial firing temperatures (<1300°C) (Iqbal & Lee, 2000). They reduce the tendency of the body to warp, distort or shrink when it is fired to temperatures that produce substantial quantities of viscous glass.
3.0 MANUFACTURING PROCESS FOR ELECTRICAL PORCELAINS
The major shapes and uses of porcelain insulators include: suspension insulators used in 'strings' for voltages over 66 kV, pin-type insulators for voltages up to 50-60 kV, bushings,
lead-in insulators, transformer parts, circuit breaker parts, mast bases, etc. (Singer & Singer, 1979). For electrical insulation applications, the properties of most concern are the dielectric and mechanical strength (Islam et al, 2004). Despite the role stated in section 2.4, quartz grains embedded in the porcelain glassy matrix have a deleterious effect on the mechanical strength because of micro cracks caused by the β→α quartz inversion during cooling (Schroeder, 1978). Several investigators have reported significant improvements in the mechanical properties by reducing or eliminating the use of quartz. These include replacement of quartz with kyanite (Schroeder, 1978), replacement of quartz with alumina (Kobayashi et al, 1987; Das & Dana, 2003), replacement of quartz with rice husk ash (Prasad et al, 2001), replacement of quartz and feldspar by sillimanite sand and alumina/cordierite glass ceramic, respectively (Maity & Sarkar, 1996), replacement of quartz with fly ash (Dana et al, 2004), partial replacement of feldspar and quartz by fly ash and blast furnace slag (Dana et al, 2005), replacement of quartz with silica fume (Prasad et al, 2002), and substitution of quartz by a mixture of rice husk ash and silica fume (Prasad et al, 2003). An effort to substitute part of the quartz with fired porcelain by Stathis et al (2004) did not have a positive effect on the bending strength. On the other hand, there is abundant evidence that under certain conditions quartz has a beneficial effect on the strength of porcelain. Such evidence includes the use of fine-particle quartz in the range of 5-30 µm. Different views have also been expressed regarding the optimum conditions when working with quartz to improve strength (Ece & Nakagawa, 2002; Bragança & Bergmann, 2003; Stathis et al, 2004). Other modifications which have proven successful include replacement of clay with aluminous cement (Tai et al, 2002), substitution of feldspar with nepheline syenite (Esposito et al, 2005), use of soda feldspar in preference to potash feldspar (Das & Dana, 2003), partial substitution of feldspar by blast furnace slag (Dana & Das, 2004), and use of recycled glass powder to replace feldspar to reduce the firing temperature (Bragança & Bergmann, 2004).
3.1 Strength Considerations
Carty & Senapati (1998) presented the three major hypotheses that have been developed to explain the strength of porcelain: the mullite hypothesis, the matrix reinforcement hypothesis and the dispersion strengthening hypothesis. The mullite hypothesis suggests that porcelain strength depends on the felt-like interlocking of fine mullite needles. Specifically, the higher the mullite content and the greater the interlocking of the mullite needles, the higher the strength. Hence the strength of porcelain depends on the factors that affect the amount and size of mullite needles, such as the firing temperature and the composition of alumina and silica in the raw materials. The matrix reinforcement hypothesis concerns the development of compressive stresses in the vitreous phase as a result of the different thermal expansion coefficients of dispersed particles or crystalline phases and the surrounding vitreous phase. The larger these stresses are, the higher the strength of the porcelain body. The phenomenon is known as the pre-stressing effect. The dispersion strengthening hypothesis, on the other hand, states that dispersed particles in the vitreous phase of a porcelain body, such as
quartz and mullite crystals in the glassy phase, limit the size of Griffith flaws, resulting in increased strength. There is evidence supporting each of these hypotheses (Maity & Sarkar, 1996; Stathis et al, 2004; Islam et al, 2004). The typical strength controlling factors in multiphase polycrystalline ceramics are the thermal expansion coefficients of the phases, the elastic properties of the phases, the volume fractions of the different phases, the particle size of the crystalline phases and phase transformations. Islam et al (2004) conclude that the best mechanical and dielectric properties are achieved with a high mullite and quartz content, a low amount of glassy phase and an absence of micro cracks. A high amount of SiO2 leads to a high amount of glassy phase, which is detrimental to the development of high dielectric strength. An increase of the glassy phase increases the length of the free path of mobile ions such as Na+, K+ and Al3+ and hence increases the conductivity.
Figure 3: American triaxial bodies (Norton, 1970). Key: (A) electrical porcelain; (B) sanitary ware; (C) vitrified floor tile; (D) semi-vitreous ware; (E) Parian porcelain; (F) dental porcelain; (G) stoneware.
Figure 4: Differential thermal analysis results of some of Uganda's porcelain raw material deposits (Olupot et al, in press). Curves: 1 - Lido beach sand; 2 - Mutaka quartz; 3 - Lunya feldspar; 4 - Mutaka feldspar (x-axis: temperature, °C).
3.2 Forming Processes for Electric Porcelains
Both low and high tension porcelain bodies have similar basic raw material requirements (Fig. 3), although the production process for high-tension porcelain is more exacting (Norton, 1970). A manufacturing flow diagram presented by Norton (1970) describes the wet and dry mix methods. In the wet method, the product is formed either by slip casting or by plastic forming. The dry mix method, according to Indulkar & Thiruvengadam (1992), produces hygroscopic and porous products unless they are well glazed.
3.3 Reactions during Firing of Triaxial Porcelain
The influence of the firing cycle is related to the kind of furnace, the firing atmosphere, the maximum temperature and the soaking time (Bragança & Bergmann, 2003). The reactions occurring during firing can be outlined as follows:
i) Removal of water ends at about 200°C, followed by oxidation of organic matter contained in the ball clays in the temperature range 200-700°C (Norton, 1970).
ii) Dehydroxylation of the crystal structure of kaolinite to form metakaolin (Al2O3·2SiO2) occurs at about 550°C (Iqbal & Lee, 2000) according to the following equation:

Al2O3·2SiO2·2H2O → Al2O3·2SiO2 + 2H2O (g)    (≈550°C)

iii) Reports cited by Carty & Senapati (1998) and Kirabira (2003) reveal that dehydroxylation kinetics, believed to be of first order, yield a dehydroxylation rate directly proportional to the surface area of the kaolin. The dehydroxylation process is endothermic, accompanied by a reorganization of octahedrally coordinated aluminium in kaolinite to tetrahedrally coordinated aluminium in metakaolin.
iv) Ordinary quartz transforms abruptly into a form called high quartz when heated to 573°C (Fig. 4). This is not accompanied by a large overall volume change, as the individual quartz crystals are small (Jones & Berard, 1993; Bragança & Bergmann, 2003).
v) Sanidine, a homogeneous, high temperature, mixed alkali feldspar, forms within 700-1000°C. The formation temperature is dependent on the sodium:potassium ratio (Carty & Senapati, 1998).
vi) Metakaolin transforms into a non-equilibrium spinel-type structure and amorphous free silica at approximately 950-1000°C. The amorphous silica liberated during the metakaolin decomposition is highly reactive, possibly assisting eutectic melt formation at around 990°C (Carty & Senapati, 1998).
vii) Pure potassium and sodium feldspars melt at about 1150°C and 1050°C respectively (Reed, 1995). The temperature of formation of the eutectic melt of feldspar with silica depends on the type of feldspar; the eutectic melt forms at about 990°C and 1050°C for potash and soda feldspar respectively (Routschka, 2004). The lower liquid formation temperature in potash feldspar systems is beneficial for reducing the porcelain firing temperature. The presence of albite in potash feldspar can reduce the liquid formation temperature by as much as 60°C (Carty & Senapati, 1998).
viii) The fluxing reaction of feldspar with kaolin above 1050°C produces a glass and needle shaped (primary) mullite nearer the feldspar side and plate like (secondary) mullite nearer the kaolin side (Reed, 1995). Elongated crystals are associated with a fluid matrix. The fluidity arises from increased firing temperature or from the composition of the flux. Na2O-rich feldspars produce a more fluid liquid and so larger needles than K2O-rich fluxes in bodies fired at the same temperature. A third form of mullite, called tertiary mullite, has been detected in porcelains containing alumina as filler. This is precipitated from alumina-rich liquid formed from dissolution of the alumina filler but has only been observed in minor quantities (Lee & Iqbal, 2001). At about 1200°C the melt becomes saturated with silica. Dissolution of quartz ends, and the quartz-to-cristobalite transformation begins. Quartz in contact with feldspar liquid dissolves slowly above 1250°C. At higher temperatures near the final
stage of sintering, mullite crystals grow as prismatic crystals into the remains of the feldspar grains (Carty & Senapati, 1998). Quartz grains smaller than 20 µm dissolve completely at about 1350°C, and at about 1400°C porcelain bodies consist almost entirely of mullite and glass with little quartz.
ix) As the body starts to cool, pyroplastic deformation and relaxation within the glass phase prevent the development of residual stresses until the glass transition temperature is reached. As the body cools below the glass transition temperature, residual stresses develop because of the thermal expansion mismatch between the glass and the included crystalline phases (i.e. mullite and quartz and, in some cases, alumina and cristobalite). Cracks commonly observed in and around large quartz grains occur because of the large thermal expansion mismatch between the crystalline quartz and the glassy phase in the temperature range 20-750°C (Iqbal & Lee, 2000).
x) Cooling through the quartz inversion (573°C) results in a quartz particle volume decrease of 2%, which can produce sufficient strain to crack the glassy matrix and the quartz grains. The severity of cracking is dictated by the quartz particle size and the cooling rate. Finally, the β- to α-cristobalite inversion at 225-250°C is similar to the quartz inversion, but it produces a larger volumetric change (approximately 5%). With a higher activation energy barrier, the transformation is less severe than that of quartz (Carty & Senapati, 1998).
The final microstructure of fired bodies consists of 10-25% mullite, with composition ranging from 2Al2O3·SiO2 to 3Al2O3·2SiO2, 5-25% α-quartz (SiO2), and 0-8% pores dispersed in 65-80% potassium aluminosilicate glass. Bodies with a high percentage of quartz may also contain cristobalite (Iqbal & Lee, 2000). Under normal firing conditions, equilibrium at the firing temperature is achieved above 1400°C, and the structure consists of a mixture of siliceous liquid and mullite. The liquid formed at the firing temperature cools to form glass, so that the resulting phases at room temperature are normally glass, mullite and quartz in amounts depending on the initial composition and the conditions of firing. The concentration of the phases and the size of the mullite crystals also influence the dielectric properties (Chaudhuri & Sarkar, 2000).
3.4 Glazing
A glaze is a thin vitreous layer formed on ceramic ware by the application of special materials and secured to the surface by firing at high temperatures. Glazes are applied to bodies to make them impervious, mechanically stronger, resistant to scratching, chemically more inert and more pleasing to the touch and eye. In chemical nature, glazes are alkaline, alkaline-earth, lead, or other aluminium silicate and aluminium borosilicate glasses (Budnikov, 1964). The raw materials for the manufacture of glazes can be grouped into the three categories of flux, glass former (silica) and stabiliser (kaolin) (Norsker & Danisch, 1993).
4.0 CHARACTERIZATION OF RAW MATERIALS
Characterization of the starting powders aids in maintaining and modifying the properties of ceramic bodies (Katz, 1976). Important factors for starting materials are chemical and mineralogical composition, moisture content, particle size distribution, surface area of the powder, particle density, shape, and chemical exchange capacity for clay materials
(Jones & Berard, 1993). Characterization methods exist for each of the above factors (Table 1).

Table 1: Methods of characterisation of materials
Characteristic/Property                          Characterisation method
Chemical and crystallographic characterisation   Wet chemistry*, emission spectroscopy, flame emission spectroscopy, atomic absorption spectroscopy, X-ray fluorescence, mass and infrared spectroscopy
Phase analysis                                   Optical, scanning and transmission electron microscopy, X-ray diffraction
Thermochemical and thermophysical                Thermogravimetric analysis (TGA), differential thermal analysis (DTA), differential scanning calorimetry (DSC)
Morphology                                       Screening through standard sieves, sedimentation technique, electrical sensing, laser diffraction, light intensity fluctuation, X-ray broadening and air permeability techniques, BET method, liquid phase sorption
*This method is less utilized by ceramists than other methods of chemical analysis because of the basic inertness and refractoriness of ceramic compounds.

Uganda has many ceramic raw material deposits (GSMD, 1998). Characterisation studies on some of them have revealed their suitability for the production of porcelains (Nyakairu et al, 2001; Kirabira et al, 2005; Olupot et al, in press). Some characteristics of some deposits are given in Table 2 and Figure 5.

Table 2: Chemical composition of some deposits (weight %)
Compound   Lunya Feldspar   Mutaka Feldspar   Mutaka Quartz   Lido Beach Sand   Mukono Ball clay   Mutaka Kaolin   Mutundwe Kaolin
SiO2       65.7             62.9              101.0           100.0             67.200             48.800          46.400
Al2O3      18.3             22.5              0.193           0.127             18.200             36.000          38.700
CaO        <0.1             <0.09             <0.09           <0.09             0.306              <0.090          <0.009
Fe2O3      1.65             0.065             2.57            0.201             2.830              0.238           0.791
K2O        12.3             11.8              <0.06           <0.06             0.975              1.140           0.214
MgO        0.0436           <0.02             0.0251          <0.02             0.363              0.038           <0.020
MnO        0.0219           <0.003            0.0199          0.0092            0.026              0.028           0.004
Na2O       1.84             0.409             <0.04           <0.05             0.185              0.048           <0.040
P2O5       0.0366           0.0874            0.0177          0.0171            0.049              0.009           0.043
TiO2       0.0053           0.0036            0.0051          0.172             1.380              0.004           0.039
LOI*       0.1              3.1               -0.4            0.4               8.100              12.6            13.8
*LOI is loss on ignition at 1000°C. Source: Olupot et al (in press); Kirabira et al (2005).
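Section 2.3 noted that potash feldspars are preferred for whiteware bodies. As a rough, illustrative reading of Table 2 (not an analysis from the paper), the K2O/Na2O weight ratio can be used as a crude indicator of how potash-dominated the two feldspar deposits are:

```python
# Crude flux-character indicator for the feldspars in Table 2 (illustrative only).
feldspars = {"Lunya": {"K2O": 12.3, "Na2O": 1.84},
             "Mutaka": {"K2O": 11.8, "Na2O": 0.409}}

for name, ox in feldspars.items():
    ratio = ox["K2O"] / ox["Na2O"]
    print(f"{name} feldspar: K2O/Na2O = {ratio:.1f}")   # ~6.7 and ~28.9, both potash-dominated
```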
Figure 5: SEM of (a) Mutaka kaolin and (b) Mutaka quartz (Olupot et al, in press)
5.0 CONCLUSION
Good quality deposits of ceramic raw materials exist in Uganda for the production of triaxial electric porcelains. High quality porcelains can be made from these materials with minimal beneficiation, careful control of particle size, firing schedule and forming process, and adoption of a suitable composition to optimise the requirements of high dielectric and mechanical strength with minimal porosity. This paper is part of an ongoing study to achieve the above.
6.0 ACKNOWLEDGEMENTS
The authors express their gratitude to the Sida/SAREC-Makerere University Collaborative Research Programme for the financial support.
REFERENCES
1. Bragança, S.R. and Bergmann, C.P. (2003) "A View of Whitewares Mechanical Strength and Microstructure" Ceramics International, 29, pp 801-806.
2. Bragança, S.R. and Bergmann, C.P. (2004) "Traditional and Glass Powder Porcelain: Technical and Microstructure Analysis" Journal of the European Ceramic Society, 24, pp 2383-2388.
3. Bribiesca, S., Equihua, R. and Villaseñor, L. (1999) "Photothermal Characterisation of Electrical Porcelain: Effects of Alumina Additions on Thermal Diffusivity and Elastic Constants" Journal of the European Ceramic Society, 19, pp 1979-1985.
4. Buchanan, R.C. (1991) "Ceramic Materials for Electronics" (R.C. Buchanan, Ed.) Chap. 1, Dekker, New York.
5. Budnikov, P.P. (1964) "The Technology of Ceramics and Refractories" The M.I.T. Press, U.S.A.
6. Carty, W.M. and Senapati, U. (1998) "Porcelain - Raw Materials, Processing, Phase Evolution and Mechanical Behaviour" J. Am. Ceram. Soc., 81, pp 3-20.
7. Chaudhuri, S.P. and Sarkar, P. (2000) "Dielectric Behaviour of Porcelain in Relation to Constitution" Ceramics International, 26, pp 865-875.
8. Chaudhuri, S.P., Sarkar, P. and Chakraborty, A.K. (1999) "Electrical Resistivity of Porcelain in Relation to Constitution" Ceramics International, 25, pp 91-99.
9. Cho, Y.S. and Yoon, K.H. (2001) "Handbook of Advanced Electronic and Photonic Materials and Devices" (H.S. Nalwa, Ed.) Vol. 4, Chap. 5, Academic Press, New York.
10. Dana, K., Dey, J. and Das, K.S. (2005) "Synergistic effect of fly ash and blast furnace slag on the mechanical strength of traditional porcelain tiles" Journal of the European Ceramic Society 31, pp 147-152.
11. Dana, K. and Das, K.S. (2004) "Partial substitution of feldspar by B.F. slag in triaxial porcelain: Phase and microstructural evolution" Journal of the European Ceramic Society 24, pp 3833-3839.
12. Dana, K., Das, S. and Das, K.S. (2004) "Effect of substitution of fly ash for quartz in triaxial kaolin-quartz-feldspar system" Journal of the European Ceramic Society 24, pp 3169-3175.
13. Das, K.S. and Dana, K. (2003) "Differences in densification behaviour of K- and Na-feldspar-containing porcelain bodies" Thermochimica Acta 406, pp 199-206.
14. Dombrowski, T. (1998) "The origin of kaolinite - implications for utilization" (Carty and Sinton, Ed.) Science of Whitewares II, The American Ceramic Society.
15. Ece, O.I. and Nakagawa, Z. (2002) "Bending Strength of Porcelains" Ceramics International 28, pp 131-140.
16. Esposito, L., Salem, A., Tucci, A., Gualtieri, A. and Jazayeri (2005) "The use of nepheline-syenite in a body mix for porcelain stoneware tiles" Ceramics International 31, pp 233-240.
17. Geological Surveys and Mines Department (GSMD), Uganda (1998) "Annual Report" Entebbe, Uganda.
18. Indulkar, C.S. and Thiruvengadam, S. (1992) "An Introduction to Electrical Engineering Materials" S. Chand & Co. Ltd, Ram Nagar, New Delhi.
19. Iqbal, Y. and Lee, W.E. (2000) "Microstructural evolution in triaxial porcelain" J. Am. Ceram. Soc. 83 [12], pp 3121-27.
20. Islam, R.A., Chan, Y.C. and Islam, M.F. (2004) "Structure-property relationship in high-tension ceramic insulator fired at high temperature" Materials Science and Engineering B106, pp 132-140.
21. Jones, J.T. and Berard, M.F. (1993) "Ceramics: Industrial Processing and Testing" 2nd Ed., Iowa State University Press, U.S.A.
22. Katz, R.N. (1976) "Treatise on Materials Science and Technology" (Herbert Herman, Ed.) Characterisation of Ceramic Powders, Vol. 9, Ceramic fabrication processes, Academic Press, New York.
23. Kingery, W.D. (1967) "Introduction to Ceramics" John Wiley and Sons Inc., New York.
24. Kirabira, J.B. (2003) "Characterisation of Ugandan raw-minerals for firebricks - before and after sintering" Licentiate Thesis, Royal Institute of Technology, SE-100 44, Stockholm, Sweden.
25. Kirabira, J.B., Jonsson, S. and Byaruhanga, J.K. (2005) "Powder Characterization of High Temperature Ceramic Raw Materials in the Lake Victoria Region" Silicates Industriels 70 [9-10], pp 127-134.
26. Kobayashi, Y., Ohira, O., Ohashi, Y. and Kato, E. (1987) "Strength and Weibull distribution of alumina strengthened whiteware bodies" Journal of the Ceramic Society of Japan, International Edition 95, pp 837-841.
27. Lee, W.E. and Iqbal, Y. (2001) "Influence of mixing on mullite formation in porcelain" Journal of the European Ceramic Society 21, pp 2583-2587.
28. Maity, S. and Sarkar, B.K. (1996) "Development of high-strength whiteware bodies" Journal of the European Ceramic Society 16, pp 1083-1088.
29. Murray, H.H. (1998) "Processing Kaolins and Ball Clays for Ceramic Markets" (Carty and Sinton, Ed.) Science of Whitewares II, The American Ceramic Society.
30. Norsker, H. and Danisch, J. (1993) "Glazes for the Self-Reliant Potter" Friedr. Vieweg & Sohn Verlagsgesellschaft mbH, Braunschweig, Germany.
31. Norton, F.H. (1970) "Fine Ceramics, Technology and Applications" McGraw-Hill Book Co., New York.
32. Nyakairu, G.W.A., Koeberl, C. and Kurzweil, H. (2001) "The Buwambo kaolin deposit in Central Uganda: Mineralogical Composition" Geochemical Journal 35, pp 245-256.
33. Olupot, W.P., Jonsson, S. and Byaruhanga, J.K. (In Press) "Characterization of Feldspar and Quartz Raw Materials in Uganda for Manufacture of Electrical Porcelains" J. Aust. Ceram. Soc.
34. Powell, P.S. (1996) "Ball clay basics" American Ceramic Society Bulletin, Vol. 75(6).
35. Prasad, C.S., Maiti, K.N. and Venugopal, R. (2001) "Effect of rice husk ash in whiteware compositions" Ceramics International 27, pp 629-635.
36. Prasad, C.S., Maiti, K.N. and Venugopal, R. (2002) "Effect of silica fume addition on the properties of whiteware compositions" Ceramics International 28, pp 9-15.
37. Prasad, C.S., Maiti, K.N. and Venugopal, R. (2003) "Effect of substitution of quartz by rice husk ash and silica fume on the properties of whiteware compositions" Ceramics International 29, pp 907-914.
38. Reed, J.S. (1995) "Principles of Ceramics Processing" (2nd Ed.) John Wiley & Sons, Inc., New York.
39. Routschka, G. (Ed.) (2004) "Pocket Manual Refractory Materials: Basics - Structures - Properties" (2nd Ed.) Vulkan-Verlag GmbH.
40. Schroeder, E.J. (1978) "Inexpensive high strength electrical porcelain" Am. Ceram. Soc. Bull. 57, 526.
41. Schuller, K.H. (1997) "The Development of Microstructures in Silicate Ceramics", a paper presented at the EMMSE-West European Coordinating Workshop Committee, with support from NATO Science Programmes.
42. Singer, F. and Singer, S.S. (1979) "Industrial Ceramics" Chapman and Hall Ltd, London.
43. Stathis, G., Ekonomakou, A., Stournaras, C.J. and Ftikos, C. (2004) "Effect of firing conditions, filler grain size and quartz content on bending strength and physical properties of sanitary ware porcelain" Journal of the European Ceramic Society 24, pp 2357-2366.
44. Tai, Weon-Pil, Kimura, K. and Jinnai, K. (2002) "A new approach to anorthite porcelain bodies using non-plastic raw materials" Journal of the European Ceramic Society 22, pp 463-470.
45. Thurnauer, H. (1954) "Dielectric Materials and Applications" (A.R.V. Hippel, Ed.) Ceramics, Chapman & Hall, London.
A NOVEL COMBINED HEAT AND POWER (CHP) CYCLE BASED ON GASIFICATION OF BAGASSE
M. A.E. Okure, Department of Mechanical Engineering, Makerere University, Uganda W.B. Musinguzi, Department of Mechanical Engineering, Makerere University, Uganda B.M. Nabacwa, Department of Mechanical Engineering, Makerere University, Uganda G. Babangira, Department of Mechanical Engineering, Makerere University, Uganda N.J. Arineitwe, Department of Mechanical Engineering, Makerere University, Uganda R. Okou, Department of Electrical Engineering, Makerere University, Uganda,
ABSTRACT
The common method of bagasse utilization for process steam and power generation in sugar mills has for long been direct combustion, generating steam at relatively low pressure. The target has been to produce just enough power and process steam to meet the factory's needs. This paper proposes an integrated gasification combined cycle for cogeneration and shows that it promises to produce excess electricity for other consumers at a competitive rate. Different options for increasing electricity generation in sugar mills by using a more advanced steam process and combined cycle technology, with bagasse as fuel, have been analyzed. The choice of process data is by optimization. For the high pressure turbine, an optimal pressure of 70 bar and a steam temperature of 450°C were selected as inlet parameters, with exhaust steam from the low pressure condensing turbine at 0.1 bar. For a fairly large-sized sugar mill, introduction of the proposed cycle will make it possible to achieve a specific electrical energy generation of 200 kWh/ton cane. The financial evaluation indicates a payback period of about six years. This should be of interest to sugar cane industries, which have for long relied on income mainly from sugar production, as well as to the nation, which is seeking sustainable sources of power.
Keywords: Bagasse, Gasification, Combined-cycle, electricity production, sugar mill
1.0 INTRODUCTION A huge potential for power generation from waste biomass fuels exists within the sugar cane industry. An estimated 1.2 billion tonnes of sugar cane is harvested annually,
which corresponds to a worldwide electricity production potential of 40 GW or 300 TWh/annum in the eighty countries where sugar cane is grown on a significant basis [Morris and Waldheim, 2004]. The cane residue, commonly called bagasse, which is left after extraction of the sugar juice, is used as a fuel in steam generators. In a process called cogeneration, the steam is generated at a pressure and temperature exceeding the needs of sugar processing and is therefore first passed through back-pressure turbines and then exhausted at conditions appropriate for the drying of the sugar juice, frequently about 2 bar and 120°C [Kjellstrom and Gabra, 1995]. Back-pressure turbines are used for driving the cane crushers as well as for driving an alternator which generates electricity. Normally this electricity is consumed by the sugar factory itself. Most sugar factories were built when there was little incentive for energy conservation and electricity prices were low. This is true for sugar mills in Uganda. This made it financially unattractive to invest in a more advanced process which could generate more electric power. Increasing energy prices worldwide, however, have led to the design of equipment with higher efficiencies. More efficient boilers, motors, pumps and fans are all available. Their adoption in sugar mills presents an opportunity to increase the profitability of sugar factories by increasing electricity generation from cane residues. This would reduce expenditure on purchased power. In addition, sale of the excess power would increase revenues. This paper proposes an integrated gasification combined cycle for cogeneration and demonstrates its potential to produce excess electricity for other consumers at a competitive rate.
2.0 COMBINED HEAT AND POWER CYCLES
Three different types of cycle exist for cogeneration. The first is a conventional cycle which uses steam at low pressure and temperature, close to 20 bar/300°C, generating steam and power just sufficient for the sugar mill's own consumption and usually employing less efficient electricity generation equipment. The second is the advanced cycle, which is similar to the conventional cycle set-up but with higher pressure and temperature parameters of, say, 60 bar/500°C. This results in significant electricity generation in excess of the sugar mill's own requirements, and thus the excess power can be sold to outside consumers. The third cycle is the biomass integrated gasification combined cycle, which also yields excess power. The detailed analysis of the latter two is of interest in this paper. The analysis uses a "typical sugar mill" with the specifications given in Table 1.
2.1 Advanced Steam Process
Figure 1 shows the advanced steam process plant cycle. An optimal pressure of 70 bar and a steam temperature of 450°C are taken as the inlet steam parameters for the high pressure turbine.
Table 1" Typical sugar mill data Item
Value
Annual operating period Effective operating time Effective crushing rate Sugar production Average bagasse produced Moisture content of fresh bagasse Process steam required Energy supplied with process steam Crushing power Electric energy required for process Factory electricity generation Total power required by the factory Annual electric energy consumption Purchased electric energy annually
44 weeks 144 hours/week 84 tonne cane/hour 420 tonnes/day 33 tonnes/hour 51.2% 0.4 tonnes/tonne cane 300 kWh(thermal)/tonne cane 8.1 kWh(mek)/tonne cane 32 kWh(el)/tonne cane Nil 3 MW 23400 MWh 23400 MWh
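A couple of quantities used implicitly later in the paper follow directly from Table 1. The sketch below is an illustrative derivation only (no new data); the variable names are assumptions of this example.

```python
# Derived quantities from Table 1 (illustrative arithmetic, not additional data).
hours_per_week = 144
weeks_per_year = 44
crushing_rate_tph = 84                     # tonne cane / hour

operating_hours = hours_per_week * weeks_per_year        # 6,336 hours per season
cane_per_season = crushing_rate_tph * operating_hours    # ~532,000 tonnes cane per season
print(operating_hours, round(cane_per_season))
```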
As shown in the figure, part of the steam is exhausted (extracted) at 12 bar to a back-pressure turbine, which is used to drive the cane crushers, and to the feed water tank for preheating the feed water to saturation point. Preheating the feed water increases the boiler efficiency.
Fig. 1: Schematic of the advanced steam process (showing the boiler, HP turbine, mill turbine, condenser, condensate pump, feed tank, sugar process, flue gases and make-up water; key: steam, water).
International Conference on Advances in Engineering and Technology
The remaining steam at 12 bars enters the low pressure turbine and is expanded to 1.8 bar, the pressure at which more steam is exhausted to the sugar process. Excess steam after the need of the sugar process is passed through the low pressure condensing turbine exhausting at 0.1 bars. With the proposed condensing steam turbine, the exhaust pressure can be as low as 0.05 bar in order to maximize power output. Analysis shows that condensing steam turbines give higher electricity output provided the condensing medium is obtained cheaply. The analysis considered average bagasse consumption of 25 ton/hour with a heating value of 7.76 MJ at the moisture content of 51.2%.
2.2 Biomass Integrated Gasification Combined Cycle (BIGCC) According to Reed and Gaur [2001] research has shown the potential of Integrated Gasification Combined Cycle (IGCC) based systems to be competitive with, if not superior to, conventional combustion power plants because of their higher efficiency, superior environmental performance, and competitive cost. However much of the advancements are still under research and development. BIGCC is a combination of two leading technologies. The first technology is gasification, which uses biomass to create a cleanburning gas (producer gas). This is depicted in Figure 2. The gasification portion of the BIGCC plant produces a clean gas which fuels the gas turbine. FUEL
PRODUCER GAS_ y
__-__GASIFIER~ _
CYCLONE
ASH~2
STORAGE
AIR
FE COl',,
_
I
~~q
WASTE ASH
PUMP ) ~ WATER
WATER!CRUBBER ~R
cBOOSTER m Esso
~_~~ "~
PR-ODUCER GASTO COMBUSTOR
Fig. 2 Schematic diagram of the fluidised bed gasifier system
The second technology is combined-cycle, which is the most efficient method of producing both process heat and electricity commercially. Figure 3 shows this cycle.
468
Okure, Musinguzi, Nabacwa, Babangira, Arineitwe & Okou
M~k~.~JnWater Exhausf Gases~ Feedwatei HeatRecovery generator
steam
~ .... ~ n . . . . . ~. .r. u. ! m c;ic -iump
Feedwater ~ i c e e ss tank sugar M ~il tur in Condensate
...................l....................................................................................................................................................................................................
producergas
pump
condenser
Intake air Fig. 3: Combined cycle b a s e d on the producer g a s fuel
For this system, the gasification stage is carried out in a fluidised bed. The bed consists of silica sand and ash and the fuel content is 2-3%. Typical operating temperature of a fluidised bed is 800-850oC. Air is blown through the bed at a sufficient velocity to keep the bed materials in a state of suspension. The fuel particles are introduced at the bottom of the reactor, very quickly mixed with the bed material and almost instantaneously heated up to the bed temperature and hence the subsequent producer gas generation. After the producer gas has left the fluidised bed chamber, it goes through a cleaning/cooling unit. The gas after the cleaner/cooler unit is then led to a boost compressor, which compresses it to the gas turbine combustion chamber pressure conditions. The exhaust heat from the combustion turbine is recovered in the heat recovery steam generator to produce steam. This steam then passes through a steam turbine to power another generator, which produces more electricity. Combined cycle is more efficient than conventional power generating systems because it re-uses waste heat to produce more electricity.
2.3 Performanee Analysis A comparison of the performance parameters of the two cycles is shown in Table 2 below.
Table 2: Performance Parameters of the Two Cycles

                                                     Advanced Steam Process Cycle      Biomass Integrated Gasification Combined Cycle
Parameter                                            Milling season   Off season       Milling season   Off season
Electric power output (MWe)                          10.3             14.5             16.81            21.3
Specific electrical energy output (kWh/ton cane)     123              173              200              254
Mechanical power output (MW)                         0.68             Nil              0.89             Nil
Thermal power output (MWt)                           23.3             Nil              23.3             Nil
Electric yield/electric efficiency (%)               23               32               32               40.2
Total efficiency (%)                                 75.3             32               74.4             40.2
In both schemes, more electricity is produced during the off season because of reduced or zero extraction of process steam. All the steam is expanded to condenser pressure for electrical energy generation, unlike in the milling season when some steam is extracted for sugar processing. During the off season a higher power-to-heat ratio (higher electric yield) is also obtained. There is no cane crushing during the off season, hence the zero mechanical power requirement. Comparing the two, the Biomass Integrated Gasification Combined Cycle achieves a higher power output, a higher specific energy output and a higher electric yield/electric efficiency. The total efficiency of the Biomass Integrated Gasification Combined Cycle is marginally lower than that of the advanced steam process cycle in the milling season but higher in the off season. Technically, therefore, the Biomass Integrated Gasification Combined Cycle is superior to the advanced steam process cycle.
3.0 FINANCIAL EVALUATION
In order to carry out a financial comparison of the two cycles, it is necessary to scale component data to account for the different capacities. The scaling formula used is:
Ic = Ic,b (P / Pb)^x
where Ic is the component investment at capacity P, Ic,b is the component cost at capacity Pb, and x is the degression exponent, taken to be 0.7 for power plant investments. A numerical illustration of this scaling rule is sketched below. Table 3 shows the resulting investment costs and Table 4 the expected financial performance.
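The sketch below applies the scaling rule to a hypothetical component; the base cost and capacities are made-up numbers, and only the exponent (0.7) is taken from the text.

```python
# Capacity-scaling of component investment cost: Ic = Ic,b * (P / Pb)**x  (illustrative).
def scaled_cost(base_cost_kusd, base_capacity, capacity, x=0.7):
    """Degression-exponent cost scaling used for power plant components."""
    return base_cost_kusd * (capacity / base_capacity) ** x

# Hypothetical example: a boiler costed at 5,000 kUSD for 40 MWth, scaled to 50 MWth.
print(round(scaled_cost(5000, 40, 50)))   # ~5,800 kUSD
```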
Table 3" Investment Costs for the Typical Sugar Mill
470
Okure, Musinguzi, Nabacwa, Babangira, Arineitwe & Okou
Item
Advanced Steam Process Cycle Capacity cost kUSD
Fuel handling, ton/h Fuel drying, ton/h Biomass fuelled boiler Fluidized bed gasifier Gas turbine generator MWe
25 0 50 0 0
0
11.4
6408
Back pressure steam turbine generator, MVA Condensing turbo generator MWe with condenser Cooling Tower MWth Transformer MW Switchboard Sub-total 1
0.68
1661
0.99
2160
14.9
3749
6.78
2160
53.5 14.9
513 244 96
65.3 22.4
590 324 96
Purchased cost
3134 0 6156 0
Biomass Integrated Gasification Combined Cycle Capacity cost kUSD
25 25 41
3134 2500 5357
62
4320
15553 Cost factor
Cost kUSD
27049 Cost factor
Cost kUSD
Mechanical equipment Electrical equipment Instrumentation and control
0.3 0.1 0.1
4666 1555 1555
0.3 0.1 0.1
8115 2705 2705
Buildings
0.2
3111
0.2
5410
Sub-total 2
10,887
Direct cost
Cost factor
Engineering Miscellaneous
Cost kUSD
0.06 0.1
Total investment cost (kUSD)
1586 2644 30670
18935 Cost factor
Cost kUSD
0.06 0.1
2759 4598 53341
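The build-up of the total investment cost in Table 3 can be reproduced from Sub-total 1 and the cost factors alone. The sketch below is an illustrative re-calculation using the figures in the table.

```python
# Reproduce the total investment cost in Table 3 from Sub-total 1 and the cost factors.
def total_investment(subtotal1_kusd):
    factors_on_subtotal1 = {"mechanical": 0.3, "electrical": 0.1,
                            "instrumentation": 0.1, "buildings": 0.2}
    subtotal2 = sum(f * subtotal1_kusd for f in factors_on_subtotal1.values())
    direct_cost = subtotal1_kusd + subtotal2
    engineering = 0.06 * direct_cost
    miscellaneous = 0.10 * direct_cost
    return direct_cost + engineering + miscellaneous

print(round(total_investment(15553)))   # ~30,670 kUSD (advanced steam process cycle)
print(round(total_investment(27049)))   # ~53,341 kUSD (BIGCC)
```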
Table 4: Comparison of the Financial Indicators

Item                                                   Advanced Steam Process Cycle   Biomass Integrated Gasification Combined Cycle
Net electrical energy produced (MWh/yr)                96,276                         153,721
Specific electrical energy production (kWh/ton cane)   123                            200
Price of 1 unit of energy sold (USD/kWh)               0.1                            0.1
Annual profit (kUSD)                                   6479                           9936
Pay-back period (years)                                4.7                            5.4
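The payback periods in Table 4 are consistent with a simple (undiscounted) ratio of total investment to annual profit, using the totals from Table 3. The sketch below is an illustrative check only.

```python
# Simple payback check: investment / annual profit, ignoring discounting (illustrative).
options = {
    "Advanced steam process": {"investment_kusd": 30670, "annual_profit_kusd": 6479},
    "BIGCC":                  {"investment_kusd": 53341, "annual_profit_kusd": 9936},
}
for name, d in options.items():
    years = d["investment_kusd"] / d["annual_profit_kusd"]
    print(f"{name}: payback ~ {years:.1f} years")    # ~4.7 and ~5.4 years, matching Table 4
```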
The investment cost for the Biomass Integrated Gasification Combined Cycle is about 73% higher than that of the advanced steam process cycle, the profit accruing is 53% higher, while the payback period is longer by about 15%.
4. CONCLUSION
The results of the analysis reveal that a significant amount of power can be generated from bagasse. The electric power potential is sufficient not only for the sugar factory's own requirements but also for sale to other consumers. The increased generation of electric power can be achieved partly by replacing old, inefficient system components with more advanced, efficient, modern ones. Employing the advanced steam process, with a pressure of about 70 bar and a steam temperature of about 450°C, gives a very good specific electrical energy generation of 123 kWh/ton cane during the milling season. The combined cycle (i.e. BIGCC) results in a more significant specific electrical energy generation of 200 kWh/ton cane. Financial evaluation of the two options indicates that the combined cycle has an edge over the advanced steam process in terms of profitability, but this comes at a higher investment cost and a longer payback period. This indicates that BIGCC has the potential to produce excess electricity for other consumers at a competitive rate. However, it is important to note that the initial investment in the two cycles proposed is higher than that for the conventional cycle, though in the long run the advanced cycles give an appreciable cost benefit. In addition, a host of other factors, such as pricing, may influence the above analysis.
REFERENCES
Kjellstrom, B. and Gabra, M. (1995), A Pre-feasibility Assessment of Cane Residues for Cogeneration in the Sugar Industry, Stockholm Environment Institute, Stockholm.
Gabra, Mohamed (2000), Study of possibilities and some problems of using cane residues as fuel in a gas turbine for power generation in the sugar industry, Ph.D. Dissertation, Lulea University of Technology, Lulea, Sweden.
Morris, M. and Waldheim, L. (2004), Biomass Power Generation: Sugar Cane Bagasse and Trash, TPS Termiska Processer AB, 611 82 Nykoping, Sweden.
Reed, Thomas B. and Gaur, Siddhartha (2001), A survey of biomass gasification - gasifier projects and manufacturers around the world, National Renewable Energy Laboratory, Colorado, USA.
ENERGY CONSERVATION AND EFFICIENT USE OF BIOMASS USING THE E.E.S. STOVE
E.H. Kalyesubula, Ceramics and Chemical Engineering Division, Uganda Industrial Research Institute, Kampala, Uganda.
ABSTRACT
Energy is the life-blood of development, as recognized by the Uganda government. To meet the energy needs of Uganda's population, the government set up an energy policy in 1995 with a vision to provide an increased and improved modern energy supply. The government's objective is to achieve impressive economic growth, access to affordable modern services and sustainable development for its people. The policy aims to overturn the widespread energy poverty all over Uganda, and its objectives and strategies have been developed for the supply and demand sub-sectors. Apparent energy poverty at all levels in Uganda is exhibited in the dominant reliance on wood fuel. To reduce the consumption of wood fuel, an Extra Energy Saving Stove (E.E.S. Stove), which allows for fuel substitution and is in line with the recommendations of the Kyoto Protocol, was developed. Its slow spread is attributed mainly to social, economic and cultural hindrances.
Key words: Energy, environmental, stoves, renewable, technology, insulated, bricks, poverty, efficiency, biomass.
1.0 INTRODUCTION
Rational utilization of energy means the sensible, outright usage of, and accountability for, energy in the system at work. Energy is the life-blood of development, as recognized by the Uganda government. Uganda's policy goal is "To meet the energy needs of Uganda's population for social and economic development in an environmentally sustainable manner." This policy was established in the 1995 Constitution of the Republic of Uganda, which provided the mandate to establish an appropriate energy policy and which states, "The state shall promote and implement energy policies that will ensure that people's basic needs and those of environmental preservation are met." The energy policy emphasizes the need to meet:
• An impressive economic growth.
• Widespread access to affordable modern services that will improve the living standards of all the people of Uganda.
• Sustainable development.
The policy aims to overturn the widespread energy poverty all over Uganda, which is a result of:
• Lack of planning in either the hydrological or the renewable resource sector.
• Poor transmission and distribution infrastructure.
• Lack of access to affordable, reliable and adequate energy supply.
• Lack of balance between the physical and social environmental impacts created by energy development, especially hydropower.
• Lack of harmony between the energy policy and the policies of other sectors of the economy, e.g. the education policy, the health policy, etc.
• Lack of compatibility between the energy policy and regional or global policies.
• Lack of a strong legal policy in the areas of the downstream petroleum industry, renewable energy, energy conservation/efficiency and atomic energy application.
The policy objectives and strategies have been developed for:
• The supply sub-sector, i.e. power, petroleum, biomass and renewable energy.
• The demand side sub-sector, i.e. households and institutions, industry and commerce, transport and agriculture.
This policy framework provides for the Uganda government's vision, which is "Increased and improved modern energy supply for sustainable economic development as well as improving the quality of life of the Ugandan population." Thanks go to the Ministry of Energy and Mineral Development, stakeholders in government (e.g. Makerere University, Uganda Industrial Research Institute), development partners (e.g. World Bank, NORAD, SIDA, JICA) and the private sector, who are all involved in the dissemination of the technologies.
2.0 INNOVATIONS
The Ceramics and Chemical Engineering Division of UIRI has been supporting interventions to improve energy supply and utilization patterns for households and small-scale enterprises in Uganda. It is a contribution to improving the kitchen environment and conserving energy. This presentation is a success factor for improved biomass stoves, which in turn should lead to an improvement in the quality of life for the millions of people depending on biomass energy for their livelihoods today. The researcher condemns those who have erroneously promoted high mass, uninsulated clay or mud stoves in the belief that they are efficient, only to be confronted with disastrous results. Emphasis is placed on making a fast cooking, affordable, durable, low maintenance cost and less fuel consuming portable stove. The clearance between the rim of the stove and the edge of the pot sitting on the stove equals, or is less than, half (1/2) the thickness of the figure. Complete burning of the wood gas reduces the blackening of cooking utensils and also increases the generation of heat, and therefore gives a high combustion efficiency. This stove meets basic performance and operational criteria. Though the stove is expensive, the reduction in fuel cost pays back the initial investment in buying it. The E.E.S. Stove can use cow dung, maize cobs, charcoal, coffee husks, briquettes and other agricultural wastes.
This stove offers better advantages in use compared with the Mali Metal Jiko, the Turbo Stove (Tapio Niemi of Finland), the Mulanje Clay Stove (GTZ/IFSP), the Mud Stove (Zimbabwe and North Malawi, GTZ/IFSP), the Double Burner Rocket (Aprovecho), the Eco-Rocket Stove (Prolena), the Shisa Stove (New Dawn Engineering of Swaziland), the Tsotso Stove (Namibia) and the Nakatigi Stove (Enosi Kalyesubula of Uganda).
Fig. 1: Stoves currently used in Southern Africa: Mali Metal Jiko; Turbo Stove (Tapio Niemi); Mulanje Clay Stove (GTZ/IFSP); Mud Stove (Zimbabwe and North Malawi, GTZ/IFSP); Shisa (New Dawn); Tsotso (Namibia); Double Burner Rocket (Aprovecho); Eco-Rocket Stove (Prolena).
Fig. 2: Nakatigi stoves (Enosi Kalyesubula, Uganda): Nakatigi Charcoal Stove and Nakatigi Wood Stove.
Fig. 3: Extra energy saving stove designed by the Ceramics and Chemical Engineering Division of the Uganda Industrial Research Institute (views of the stove from the top and from the sides).

Table 1: Materials and consumption of the Extra Energy Saving Stove.

Item         | Weight (kg)   | Temperature range (°C)   | Quantity needed for the stove | Estimated life-span
Brick        | 1.0 - 1.5     | 1200 - 1500              | sixteen (16) bricks           | -
E.E.S. Stove | 25 - 30       | operating temp. 0 - 900  | -                             | more than ten years (10 yrs)
Charcoal     | -             | -                        | 360 kg                        | a full charge cooks for three hours (0.5 kg of beans cooked ready)
A fully filled charcoal stove can cook for three hours (3 hrs), thus reducing the time spent tending to it as compared with other stoves and fires. Because the walls are insulated, the combustion temperature rises very fast and does not fall, unlike in the other stoves mentioned, in which the walls have to gain heat first. It is even good for cooking small quantities of food, because even a fraction of its full capacity can cook effectively; you do not need to fill the E.E.S. Stove. A hotter, cleaner combustion and improved heat transfer are attained in this modern improved ceramic cooking stove, the E.E.S. Stove. It is true that for this technology to be successful, women should be involved in the design of the E.E.S. Stove in order for it to be adapted to their needs. This remains a challenge to the innovators, although the design has been tested for two years and incorporated culturally. Dr. Larry Winiarski, Technical Director of the Aprovecho Research Center ([email protected]), says that "it makes me confident that safer, cleaner, and more efficient stoves are on their way to those in need". The E.E.S. Stove meets these aspirations and also underscores the rocket stove principles (Dr. Larry Winiarski: [email protected]), which are:
- Insulate, particularly the combustion chamber, with low-mass, heat-resistant materials in order to keep the fire as hot as possible and not to heat the higher mass of the stove body.
- Within the stove body, above the combustion chamber, use an insulated, upright chimney of a height that is about two or three times the diameter before extracting heat to any surface (griddle, pots, etc.).
- Heat only the fuel that is burning (and not too much). Burn the tips of sticks as they enter the combustion chamber, for example. The object is NOT to produce more gases or charcoal than can be cleanly burned at the power level desired.
- Maintain a good air velocity through the fuel. The primary Rocket stove principle and feature is the use of a hot, insulated, vertical chimney within the stove body that increases draft.
- Do not allow too much or too little air to enter the combustion chamber. We strive to have stoichiometric (chemically ideal) combustion: in practice there should be the minimum excess of air supporting clean burning.
- The cross-sectional area (perpendicular to the flow) of the combustion chamber should be sized within the range of the power level of the stove. Experience has shown that roughly twenty-five square inches will suffice for home use (four inches in diameter or five inches square). Commercial sizes are larger and depend on usage.
- Elevate the fuel and distribute airflow around the fuel surfaces. When burning sticks of wood, it is best to have several sticks close together, not touching, leaving air spaces between them. Particle fuels should be arranged on a grate.
- Arrange the fuel so that air largely flows through the glowing coals. Too much air passing above the charcoal cools the flames and condenses gaseous vapours.
- Throughout the stove, insulate any place where hot gases flow from the higher mass of the stove body, exposing only pots, etc. to direct heat.
- Transfer the heat efficiently by making the gaps as narrow as possible between the insulation covering the stove body and the surfaces to be heated, but do this without choking the fire. Estimate the size of the gap by keeping the cross-sectional area of the flow of hot flue gases constant (a sizing sketch follows this list). EXCEPTION: when using an external chimney or a fan, the gaps can be substantially reduced as long as adequate space has been left at the top of the internal short chimney for the gases to turn smoothly and distribute evenly; this is the tapering of the manifold. In a common domestic griddle stove with an external chimney, the gap under the griddle can be reduced to about one half inch for optimum heat transfer.
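The constant cross-sectional area rule in the last principle can be turned into a quick sizing check. The short sketch below is only an illustration added here, not part of the original stove documentation; it solves for the pot-skirt gap whose annular flow area equals the combustion-chamber area, and the chamber and pot diameters are assumed example values.

```python
import math

def skirt_gap_cm(chamber_diameter_cm: float, pot_diameter_cm: float) -> float:
    """Gap (cm) between pot wall and skirt whose annular flow area equals
    the cross-sectional area of the combustion chamber/internal chimney."""
    chamber_area = math.pi * (chamber_diameter_cm / 2.0) ** 2
    r_pot = pot_diameter_cm / 2.0
    # Solve pi * ((r_pot + gap)^2 - r_pot^2) = chamber_area for gap.
    r_outer = math.sqrt(chamber_area / math.pi + r_pot ** 2)
    return r_outer - r_pot

# Assumed example: a 10 cm (about 4 in) chamber and a 28 cm diameter pot.
print(f"{skirt_gap_cm(10.0, 28.0):.2f} cm gap keeps the flow area constant")
```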
The E.E.S. Stove design meets the practical needs of our people rather than only laboratory or research needs. It creates a bridge between theory and practice. It makes cooking of food easy, even for beans, which seem hard to cook as they consume a lot of fuel when other means of cooking are used. It accounts for the following points from the user's point of view:
- Type of foods routinely used by the communities. Some foods cooked by communities take six hours to cook, and clay and sand stoves prove quite useful in that respect. The mass will take the heat for the first 30 minutes, but later the mass retains the heat within itself as the fire is directed to the pot. This makes it very ideal where the heat is used for house warming as the food cooks.
Certainly, a stove made that way, for a community that uses it in that way, is far more useful and effective than a three-stone fire. Let me elaborate on the six-hour cooking period. There is the heating and boiling period of about one and a half to two hours, and then there is the simmering period. In all these stages of cooking, different feeding of firewood or charcoal to the fire is required. Take the example of cooking matoke (steamed bananas in Uganda): you need a steaming period that requires a very low level of heating, and the mass of sand and clay plays a great part in achieving the right cooking environment. Dry maize and beans (empengere) in Uganda take between 5 and 6 hours just to bring them to the cooked stage. Then they have to be pounded, which is another 30 minutes to one hour of work. Using that example, the E.E.S. Stove is far more efficient than the use of a three-stone fire.
- Length of time the food takes to cook. This is more important as a stove efficiency consideration in design.
- The type of pot used in the cooking. Many pots used cannot fit very well with a situation where heat is to be forced to scrape against the sides of the pot: a very unfortunate reality. So the E.E.S. Stove is better in the field than the three-stone fires. Some foods require pounding as they cook, which influences the way the pot has to sit on the stove.
- The most forgotten aspect by stove designers is that the fire is used by the users for other activities apart from cooking, e.g. house warming and also warming themselves around the fireplace as they socialize, roasting of maize or potatoes, and even in some cases providing light in the kitchen (some kitchens have no lamps), etc., as culture demands. Some form of allowance for this is needed if the community is to adopt the stove fully; otherwise they will still make another side-fire next to the most efficient stove in the kitchen to carry out the other functions of the fireplace.
After all that has been tackled, the secondary considerations from the user's point of view include:
- Stove design and its looks: is the stove appealing to the eye?
- Affordability or accessibility: is the stove affordable, or can it be made easily by the user or a helper?
- Efficiency: is it helping to save some wood or charcoal?
- Duration: how long does it take to cook?
- Durability: users are more interested in something that will last for some time.
- Portability: the issue of whether the stove can be moved from one place to another is important to some communities.
You will realize that it is only in stove efficiency that the user and the developer have found common ground in the case of other designs. But the reality is that common ground must be cultivated in all the other aspects if stove programmes are to be successful. The designers of the E.E.S. Stove appropriately integrated all those aspects. There is apparent energy poverty at all levels in Uganda, particularly at household level in the rural areas. This is exhibited in:
- Low level of consumption of modern energy forms (electricity and petroleum products).
- Inadequacy and poor quality of electricity services.
- The dominant reliance on wood fuel sources.
To alleviate these conditions an Extra Energy Saving Stove (E.E.S. Stove) has been designed and produced to bridge the gap between the favoured urban population and the marginalized rural poor population that depends on biomass. This design will:
- Lead to poverty eradication as stipulated in the Poverty Eradication Action Plan (PEAP), the Plan for Modernization of Agriculture (PMA), the Decentralization Act, and the liberalized economic environment.
- Ensure that Uganda meets its greenhouse gas emission commitments, as it is a signatory to the United Nations Climate Change Convention (UNCCC).
- Benefit from the Global Environment Facility and the Clean Development Mechanism.
- Satisfy the New Partnership for Africa's Development (NEPAD) objective of optimizing development, use of resources and providing cost-effective energy services.
- Reduce the high share of biomass in Uganda's total energy consumption, which is currently 90%.
- Reduce the degradation of the forests, as wood reserves are depleted at a rapid rate in many regions of Uganda. Charcoal consumption increases at a rate close to the urban population growth rate of 6% per annum.
- Eliminate the inefficient wood and charcoal stoves and charcoal production kilns currently used in Uganda.
- Provide for improved Renewable Energy Technologies (RETs).
Since expenditure on energy constitutes a large proportion of the country's GDP, and a particularly large proportion of poor household expenditure, it is necessary to emphasize the effective and efficient use of energy through the E.E.S. Stove. This stove allows for fuel substitution, for example dried cow dung for charcoal. Emphasis is put on the Households and Institutions energy conservation section rather than Transport, Industry and Commerce, or Agriculture.

3.0 FORECASTED PROBLEMS
The slow spread and usage of the E.E.S. Stove may be due to:
- High initial investment and production cost.
- Lack of awareness among energy end-users about energy conservation possibilities and practices.
- Socio-economic barriers existing among the population.
- Lack of appropriate research and development that suits production in all environments, including the rural areas.
- Lack of appropriate curricula in energy studies at institutes of higher learning.
- Low public awareness about the efficiency and potential of the E.E.S. Stove.
- Poor quality of some of the Renewable Energy Technologies (RETs) available on the market, which reduces their lifetime and damages the image of RETs.
- Lack of initiative on the part of the government to facilitate the propagation of the E.E.S. Stove technologies and applications.
- For cooking "ugali" (the staple food), the stove is not as good as the traditional ones.
- The stove is not very good for roasting maize.
4.0 CONCLUSION
Effective utilization, coupled with zealous support from all leaders (especially those of Africa), will encourage the speedy acquisition of the E.E.S. Stove technologies as well as their use. This will encourage:
- Reduction in the cumulative production of "greenhouse gases", as stipulated in the Kyoto Protocol, as a result of reduced production of the gases and reduced cutting down of forests for wood and charcoal fuel.
- Reduction in cancerous diseases, especially among women, resulting from smoky kitchens.
- Reduction in poverty resulting from excessive expenditure on wood fuel, treatment of diseases caused by smoky fires, time wastefully spent on slow cooking processes, and cumulative replacement of stoves that last for only a short part of their useful lifespan.
- Further innovation in Renewable Energy Technologies based on utilization of the E.E.S. Stove as a heating source.
Therefore a lasting solution is achievable only if the E.E.S. Stove technologies are inculcated in all heating processes of any sort.

REFERENCES
[email protected]
http://ecoharmony.net/hedon/malawistove.php
http://ecoharmony.com/hedon/claystove.php
http://www.kfpe.ch/index.htm
http://www.newdawn-engineering.com/website/stove/singlestove/shisal.htm
http://www.repp.org/discussiongroups/resources/stoves/crispin/malistove.html
http://www.repp.org/discussiongroups/resources/stoves/still/Aprovecho Plans/double burner rocket plans.pdf
http://www.repp.org/discussiongroups/resources/stoves/miranda/ecostove/ecostove.html
http://www.repp.org/discussiongroups/resources/stove/namibia/Tsotso%20stove.jpg
http://www.sirdc.ac.zw
http://www.theire.org/action/index.cfm?
http://www.turbostove.fi/eng/ant/turbostove.php3
[email protected]
FIELD-BASED ASSESSMENT OF BIOGAS TECHNOLOGY: THE CASE OF UGANDA
B. Nabuuma and M.A.E. Okure, Department of Mechanical Engineering, Makerere University, P.O. Box 7062, Kampala, Uganda.
ABSTRACT
Dissemination of renewable energy technologies has received heightened attention in Uganda as in the rest of the world. One such technology is biogas, for which several projects have supported especially the rural population in adopting the technology. This paper presents the findings of research conducted to follow up on biogas systems in the field, to assess their performance and identify barriers that have blocked the wider adoption of the technology. Findings of the study show that the biogas systems were performing dismally compared to countries like China and India. A catalogue of problems, which seem to have originated right from the conception of the dissemination programmes, through the system design, to the construction, operation and maintenance process, was found to affect the performance of these plants. They include weak materials of the digesters and gas reservoirs, inadequate gas production and pressure, leaking digesters, gas reservoirs and gas distribution systems, and blocked pipes and burners. The paper proposes context-based strategies for the revitalization of existing systems and renewed dissemination, on the basis that biogas technology still has potential in Uganda since the raw materials and energy demand exist.
Key words: Biogas systems, Performance assessment.
1.0 BACKGROUND
Motivated by the need to meet the ever-increasing energy demand and by sustainability consciousness, many governments and civil society organizations have promoted such technologies as solar photovoltaics, wind, and biogas. In Uganda, where over 80% of the population live in the rural areas and rely on agriculture for their livelihood, several biogas technology dissemination projects have been undertaken (Odogola et al, 2001). Initial introduction dates back to the 1950s, when the Church Missionary Society constructed two floating drum digesters which acted as good demonstration units. In the early 1980s, the People's Republic of China in conjunction with the Ministry of Animal Husbandry and Fisheries constructed seven digesters, which, with the exception of one, were not functioning by 1987 (Karekezi et al, 1997). Since then, a number of government and private initiatives have embarked on the development and popularisation of this technology. Notable examples of these are the works of the Ministry of Energy and Mineral Development, the Agricultural Engineering and Appropriate Technology Institute, the Integrated Rural Development Initiative, Heifer Project International and Makerere University.
Although some plants have been constructed on a self-sponsorship basis, construction of most units was a result of facilitation from donors such as the World Bank, the People's Republic of China, and Innovations at Makerere (I@MAK). In the case of cost sharing, the beneficiaries provided some of the locally available construction materials while the facilitating organs provided some of the materials and the technical expertise. With the exception of a few which were constructed for commercial purposes, most of the plants are household units. The expectation was that the technology would be enthusiastically adopted, as has been the case for countries like China and India (Kristoferson et al, 1986). Around 2003, a pilot survey was carried out since it was suspected that the technology was not being enthusiastically adopted. Furthermore, most of the tentative reasons for dismal performance were attributed mostly to socio-economic effects (Kristoferson et al, 1986). No technical information was available concerning the performance of plants in the field. This paper presents the findings of the research that was conducted to follow up on the biogas systems in the field to assess their performance and identify barriers that have blocked the wider adoption of the technology.

2.0 METHODOLOGY
A sample of several plants across 4 districts was selected, and the performance of biogas technology in the field was assessed through interviews, observations and measurement of various variables. Gas production was measured using a gas flow meter set between the reservoir and the burner; pH was measured using a pocket-size digital "pHep" meter with a range of 0.0 to 14.0 pH; and feedstock was weighed with a Salter weighing scale. Other parameters such as digester temperature and pressure at the burner end of the system were measured using a mercury-in-glass thermometer and a manometer respectively. In addition to these measurements, systems were checked for leaks and blockage. This technology has spread to a few districts in the country, but the districts where the survey was conducted are shown in Figure 1.

3.0 ADOPTED DESIGNS
Three types of biogas digester designs have been promoted in Uganda. These are floating cover, fixed dome and tubular, as shown in the figures that follow. Digester design and size were in most cases decided upon by the programme implementers. This was especially true for cases where the biggest percentage of the cost was a donation. In some cases plants of the same type and size were constructed irrespective of family energy demands or the available quantity of feed. Exclusion of family energy demands as a determining factor in the selection of digester sizes in most cases resulted in daily gas outputs that were not sufficient to meet the entire energy needs of the family. Therefore, rather than use two types of energy to cook one meal, biogas plants were abandoned in favour of other sources which were in abundance.
Although tubular digester designs were adopted because of the low installation cost of about 350,000 Uganda Shillings (170 US dollars), they had the highest percentage failure.
Floating cover and fixed dome digester types were found to be a more reliable option.
Fig. 1: Map of Uganda showing districts covered during the survey.
Fig. 2: Fixed dome digester (showing the feed inlet pipe, gas exit pipe and effluent outlet pipe).
Fig. 3: Floating cover digester (showing the feed inlet pipe, guide and mixing pipe, metallic gas holder, and effluent holding tank).
Fig. 4: Tubular digester (showing the feed inlet, biogas outlet, digester contents and levelled surface).
4.0 TECHNOLOGY PERFORMANCE
Generally, 48% of the plants were not functioning. Of these, more than 80% failed in less than six years after construction. While this percentage is general, all the non-functioning tubular digester systems failed in less than four years. On the other hand, more than 70% of the functioning systems have been in operation for less than six years. With the current status quo, we cannot be sure that the currently functioning systems will stay in operation for more than six years, especially when only 17% had lasted more than 8 years at the time of the survey. Specific gas production, which is a measure of the biogas produced per day in m³ per m³ of digester, and the pressure developed were used as indicators of system performance. The highest specific biogas production registered during the study was 0.23 m³/m³/day and the lowest 0.05 m³/m³/day, both of which were for fixed dome digester systems.
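As a simple illustration of how this indicator is obtained, the sketch below divides a metered daily gas volume by the digester volume; the readings used are invented examples, not survey data, and only the 0.25 m³/m³/day benchmark comes from the on-station results cited below.

```python
def specific_gas_production(daily_gas_m3: float, digester_volume_m3: float) -> float:
    """Specific gas production in m3 of biogas per m3 of digester per day."""
    return daily_gas_m3 / digester_volume_m3

# Assumed example readings: 0.9 m3 of gas metered over one day from an 8 m3 digester.
measured = specific_gas_production(0.9, 8.0)   # about 0.11 m3/m3/day
expected_fixed_dome = 0.25                     # on-station figure for fixed dome digesters
print(f"measured {measured:.2f} vs expected {expected_fixed_dome:.2f} m3/m3/day")
```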
Comparing these actual figures with the expected results obtained from on-station tests carried out at AEATRI with consistent daily feeding of plants reveals a very big deviation. While an average of 0.23, 0.25 and 0.17 m³/m³/day was expected for floating cover, fixed dome and tubular digesters respectively (Odogola et al, 2001), the field figures indicated an average of 0.11 and 0.14 m³/m³/day for floating cover and fixed dome respectively. Specific gas production for tubular digester systems could not be established because of inconsistent feeding as well as technical faults which rendered the plants non-functioning. Some of the plants did not develop enough pressure to allow for the flow of gas from the digester storage chambers to the gas consumption points, so they could only be used once in a while when the pressure had increased to sufficient levels. Possible explanations for the dismal performance were investigated along two lines: design and construction, as well as operation and maintenance. The findings are outlined as follows.

4.1 Design and Construction Problems
Most of the blockages in the pipes were due to the lack of, or poor location of, water drain valves. This hindered effective condensate removal and eventually led to intermittent flow. The tubular digesters were placed in positions unprotected from moving animals and objects and were therefore susceptible to damage. As a result, most of these bags were pierced or developed holes and could no longer hold gas. In addition, most of the bag reservoirs were placed on tree branches. The effect of the sun on the continually expanding and contracting bag resulted in wear and tear, and finally the bags failed. With the failure of the biogas reserve bags, the systems could no longer develop enough pressure. Some of the supply lines were placed under concrete, making maintenance activities such as emptying blocked pipes a tedious exercise since it necessitates breaking the concrete before accessing the lines. Some gas outlet pipes were placed below the highest slurry levels in the digester and were therefore blocked by the digester contents, hindering gas flow to the appliances.

4.2 Operational and Maintenance Problems
Besides the technical faults on the biogas plants, problems such as insufficient gas production and low pressure in most of the plants were found to arise from inconsistent feeding, incorrect dilution ratios and insufficient loading of digesters. Although dilution ratios should be 1:1 and 1:2 for fresh and dry manure respectively, results show biogas system operators, whose source of feed was paddocks and who were therefore using dry manure, operating with ratios such as 1:0.9, 1:1.2 and 1:3.6. In some cases, at the time of the survey they no longer had animals from which to get digester feed, or the animal waste was too little to be of any use. Other plants were abandoned simply because of the burden of transporting effluent to gardens, especially for homes whose gardens were far away from the biogas effluent chambers.
These, coupled with lack of awareness of the labour-intensive operation and maintenance requirements for biogas systems, as well as the availability of alternative sources of energy such as wood and charcoal, led to the abandonment of the biogas systems. A number of non-functioning floating cover and fixed dome systems could have been saved from further deterioration by carrying out simple, minimal-cost maintenance activities such as draining water condensate and cleaning the burners. In some cases all that was required was the replacement of low-cost system components such as drain valves or hosepipes, but this too was neglected. Although this is possible with floating cover and fixed dome systems, replacement of faulty parts for tubular digester systems is rather costly. According to information provided by the Agricultural Engineering and Appropriate Technology Institute, installing a 4 m³ tubular digester system would require about 240,000 Uganda shillings (140 dollars) in materials and 100,000 Uganda shillings (60 dollars) in labour. In most cases the damaged components of this system were found to be the bag digester and gas reservoir, which cost about 75,000 Uganda shillings (45 dollars). This means replacement of the bags would require more than 30% of the initial installation cost of the entire system. Therefore, although considered affordable because of the low initial cost, tubular digester systems are not sustainable unless an element of protecting the bags, both from moving objects and from the effect of the sun, is incorporated into the construction costs, an idea which would compromise the initial objective of providing low-cost biogas digester systems. One of the greatest hindrances to the sustainability of biogas technology in Uganda is lack of awareness of even the minimum requirements for good performance of a biogas plant, possible failures, causes and remedies. There is a very big knowledge gap between the promoters of the technology and the users. While the promoters are aware of all the benefits as well as the necessary inputs to acquire them, most of the operators or users of the technology are unaware of the benefit package as well as the operation and maintenance requirements. Based on the information availed by the users in the field, only close to 20% of the users admitted to having received both written and oral instructions, while the rest received only oral instructions. None of those who received written instructions had a copy of the same. Most of the information that the operators had was verbally handed down to them from the owners, or had been obtained from the previous operator. The possibility of losing some of the information in the transfer is quite high since the contents are not constant. Further analysis revealed that 60% of the users were aware of the technical problems on their systems and 48% of these knew the causes; only 32% knew how to solve the problem and only one user managed to solve it. Repair was not done either because of lack of technical know-how or because of the high cost of replacing component parts.

5.0 SUMMARY AND RECOMMENDATIONS
The failure of most of the systems cannot be entirely blamed on the technology. These problems seem to have originated right from the conception of the dissemination programmes, through the system design, to the construction, operation and maintenance process. The survey revealed that the technical aspect of biogas technology cannot be totally isolated from the social and economic aspects.
From the study, it seems the only
reason for acceptance of the technology was the convenience associated with using the energy obtained from a biogas plant. This, compared to the cost of operation, was not a strong enough reason to favour continual commitment to the technology, especially where users perceived that other sources are available and convenient. It seems the benefits of the technology were over-emphasized while the inputs required for generating them were neglected. This is evidenced by the enthusiastic early adoption of the technology, which quickly wanes, with eventual abandonment of the technology even when the infrastructure is still in good condition. Since the majority of Uganda's population still depends on agriculture for its livelihood and the energy demand is still on the increase, biogas technology still has potential in Uganda. As a way forward, the following recommendations may be made:
(1) In order to achieve the benefits of the biogas technology, the owners/users should be given all the necessary information regarding the inputs required for good performance of a biogas system, in a way that can easily be passed on from one user to another with minimal knowledge loss. Biogas is a labour-intensive technology and emphasis should be placed on specifying labour, substrate, and operation and maintenance requirements. Information regarding the potential hazards of gas inhalation and fire risks, as well as emergency medical care, should be availed to the users.
(2) There is need to develop technical and maintenance skills. The first step in this is to encourage the owners to participate in the construction of the entire biogas system while highlighting possible problems, causes and remedies.
(3) For future projects, dissemination should be based on an end-user approach instead of the supply-side approach, in order to facilitate the sustainability of the technology. Dissemination programme teams should endeavour to highlight how each of the benefits of the technology can transform the lives of the users, as a way of fostering commitment to the technology.

REFERENCES
Gunnerson, Charles G. and Stuckey, David C. (1986). Integrated Resource Recovery Series. World Bank, Washington D.C.
Hao, Tang Ying, Lo, Ibrahim and Megersa, Buyene (1989). Biogas Manual. African Regional Centre for Technology.
Karekezi, Steven and Ranja, Timothy (1997). Renewable Energy Technologies in Africa. Zed Books Ltd, London and New Jersey.
Kristoferson, L.A. and Bokalders, V. (1986). Renewable Energy Technologies, Their Application in Developing Countries. Pergamon Press, Oxford, New York, Beijing.
Odogola, R. W., Kato, C., Benoni, B. and Makumbi, G. (2001). Biogas Production for Smallhold Energy Needs. Unpublished report, Agricultural Engineering and Appropriate Technology Research Institute.
Wafula, James (1992). Biomass Energy Development, Rural Biogas Plants: Technical and Dissemination Aspects. GTZ-SEP, Ministry of Energy.
Walugembe, D. and Kamani, M. (1992). Biomass Energy Development, Technical Issues. Publisher: Regional Wood Energy Programme for Africa.
MODELLING THE DEVELOPMENT OF ADVANCED MANUFACTURING TECHNOLOGIES (AMT) IN DEVELOPING COUNTRIES
M.A.E. Okure, Department of Mechanical Engineering, Makerere University, Uganda
N. Mukasa, Department of Mechanical Engineering, Makerere University, Uganda
Bjorn Otto Elvenes, Norwegian University of Science and Technology (NTNU), Norway
ABSTRACT
This paper presents models that can collectively be used to analyse the manufacturing industry in developing countries. The models take into account the existing environment and highlight the effect that production strategy, as a moderating factor, has in influencing decisions to adopt Advanced Manufacturing Technologies (AMT) as well as in upgrading technical skills to the levels necessary for absorption of these technologies. A method of quantifying proponents, skill levels and production strategies as main effects on the degree of automation is presented. Finally, the type of activity a firm is engaged in is explored as an influential factor in AMT adoption.
Keywords: Models, manufacturing industry, existing environment, advanced manufacturing technology, skill levels, firm activity.
1.0 INTRODUCTION
The manufacturing industry in developing countries is generally characterised by low growth and low volume/capacity, lacks high responsiveness and consequently cannot survive in highly competitive markets. Small-to-medium batch sizes and non-flow-line production technologies are typical of industries in this sector. The vitality and speed with which other sectors, such as telecommunications, have adopted advances in technology has not been observed in the manufacturing industry, particularly in developing countries. Small manufacturing firms are the norm rather than the exception, employing the majority of manufacturing employees, and can contribute enormously to the vitality of these economies. Past research on AMT adoption and implementation has mainly focussed on large firms, since it is assumed that small firms do not have the resources to make extensive use of these technologies. The level of inventiveness found in smaller firms is apparently inexistent among larger manufacturers. The general trend worldwide to achieve the efficiency and utilisation levels of mass production, while retaining the flexibility that job shops have in batch production, through Flexible Manufacturing Systems (FMS), has not taken root in developing countries. Dearth of appropriate government policies, lack of awareness, poor industrial strategy, no external markets to complement the domestic one, high tool investment costs, lack of
suitable financing, unreliable electricity supply and low returns can be cited as bottlenecks to the growth of the industry. This article explores the modelling of the effect that education levels and technical skills have on the technological development of this industry, the main incentives for AMT adoption in existence, and the types and degree of automation appropriate to the various categories of establishments in developing countries.

2.0 MODELS
Several models can be used to try to unravel the status and prospects of the manufacturing industry in a developing country, especially with respect to adoption and utilization of AMTs.

2.1 Education Levels Model
This model fits the relationship between education levels and the degree of automation. It takes the form of a multiple regression model:
AMT_i = \beta_0 + \beta_1 (CE)_i + \beta_2 (SEC)_i + \beta_3 (MGR)_i + \beta_4 (ENG)_i + \beta_5 (BCW)_i + \varepsilon_i   (1)

where AMT_i = breadth of AMT adoption of firm i, CE_i = percentage of clerical employees that use computer-based technologies on a daily basis in firm i, SEC_i = percentage of secretaries that use computer-based technologies on a daily basis in firm i, MGR_i = percentage of managers that use computer-based technologies on a daily basis in firm i, ENG_i = percentage of engineers that use computer-based technologies on a daily basis in firm i, BCW_i = percentage of blue-collar workers that use computer-based technologies on a daily basis in firm i, and \varepsilon_i is the error term.

2.2 Proponents Model
This model fits proponents of AMTs and degree of automation, also in a multiple regression model of the form:
AMT_i = \beta_0 + \beta_1 (TAX)_i + \beta_2 (ENV)_i + \beta_3 (CUST)_i + \beta_4 (MD)_i + \beta_5 (ENG)_i + \beta_6 (MRT)_i + \varepsilon_i   (2)
where AMT_i = breadth of AMT adoption of firm i, TAX_i = firm i's response to tax incentives and/or favourable financing, ENV_i = firm i's response to environment, safety or health, CUST_i = firm i's response to customers, MD_i = firm i's response to the Managing Director or Chief Executive Officer, ENG_i = firm i's response to the Engineering/Production departments, MRT_i = firm i's response to the Marketing/Sales department, and \varepsilon_i is the error term.
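Models (1) and (2), like model (4) below, are ordinary multiple regressions, so each can be estimated by ordinary least squares once the survey responses are arranged in a design matrix. The sketch below is purely illustrative: the data are invented and the columns merely echo the variable definitions of model (1).

```python
import numpy as np

# Invented responses for six firms; columns follow model (1):
# CE, SEC, MGR, ENG, BCW (percentage of each group using computer-based tools daily).
X = np.array([
    [10, 20, 50,  60,  5],
    [ 0,  5, 20,  10,  0],
    [30, 40, 80,  90, 15],
    [ 5, 10, 30,  40,  2],
    [20, 25, 60,  70, 10],
    [40, 50, 90, 100, 20],
], dtype=float)
y = np.array([3, 1, 8, 2, 6, 10], dtype=float)  # breadth of AMT adoption (0-21)

# Prepend an intercept column and solve the least-squares problem.
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("estimated coefficients beta_0 ... beta_5:", np.round(beta, 3))
```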
2.3 Activity Model
This model tests the effect of the manufacturing activity of firms on AMT adoption. The following analysis of variance (ANOVA) model is used:

y_{ij} = \mu_j + \alpha_{ij} + e_{ij}   (3)

H_0: \mu_1 = \mu_2 = \dots = \mu_c
H_{01}: \alpha_1 = \alpha_2 = \dots = \alpha_{ir} = 0

where y_{ij} is the i-th firm's breadth of AMT in the j-th category of activity and \alpha_{ij} is the effect due to the i-th firm.

2.4 Production Strategy Model
This model checks the dependence of the degree of automation on production strategies and is made up of the following multiple regression model:
AMT_i = \beta_0 + \beta_1 (PRDCT)_i + \beta_2 (LBCT)_i + \beta_3 (PRD)_i + \beta_4 (PRDQT)_i   (4)
      + \beta_5 (CUSTSQ)_i + \beta_6 (DMRT)_i + \beta_7 (FMRT)_i + \beta_8 (CPADV)_i   (5)
      + \beta_9 (FLX)_i + \varepsilon_i   (6)
where again AMT_i = breadth of AMT adoption of firm i, PRDCT_i = firm i's response to reduction in the cost of finished goods, LBCT_i = firm i's response to reduction in labour costs, PRD_i = firm i's response to increase in overall productivity, PRDQT_i = firm i's response to increased quality of product(s), CUSTSQ_i = firm i's response to increased quality of customer services, DMRT_i = firm i's response to increased domestic market share, FMRT_i = firm i's response to increased foreign market share, CPADV_i = firm i's response to superior firm image, FLX_i = firm i's response to increase in the flexibility of the manufacturing process, and \varepsilon_i is the error term.

2.5 Interaction Model 1
This model measures the moderating effect production strategy has on technological skills as determinants of the degree of automation. The following multivariate regression model is used:
AMT_{ijk} = \beta_0 + \beta_1 (PS)_{ij} + \beta_2 (TS)_{ik} + \beta_3 (PS)_{ij} \times (TS)_{ik} + \varepsilon_i   (7)

Here, (PS)_{ij} = effect of firm i with dimension j of production strategy, (TS)_{ik} = effect of firm i with technical capability k, and
(PS)_{ij} \times (TS)_{ik} = interaction effects between strategic motivations and technical skills.
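The moderating effect in models (7) and (8) enters the design matrix as a simple product term alongside the main effects. A minimal, purely illustrative sketch (the scores are invented and stand in for the PS and TS variables defined above):

```python
import numpy as np

ps = np.array([1, 2, 3, 4, 5, 3], dtype=float)    # production-strategy score per firm
ts = np.array([2, 1, 4, 3, 5, 2], dtype=float)    # technical-skill score per firm
amt = np.array([2, 1, 7, 5, 12, 4], dtype=float)  # breadth of AMT adoption

# Design matrix: intercept, main effects, and the PS x TS moderation term.
X = np.column_stack([np.ones(len(ps)), ps, ts, ps * ts])
beta, *_ = np.linalg.lstsq(X, amt, rcond=None)
print("beta_0, beta_PS, beta_TS, beta_PSxTS:", np.round(beta, 3))
```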
2.6 Interaction Model 2
This model measures the extent to which production strategies modify the form of the relationship between degree of automation and influence of proponents. The following multivariate regression model is used:
AMT_{ijk} = \beta_0 + \beta_1 (PS)_{ij} + \beta_2 (IP)_{ik} + \beta_3 (PS)_{ij} \times (IP)_{ik} + \varepsilon_i   (8)

where (PS)_{ij} = effect of firm i with dimension j of production strategy, (IP)_{ik} = effect of firm i with influence of proponents k, and (PS)_{ij} \times (IP)_{ik} = interaction effects between strategic motivations and the influence of proponents.

3.0 RESEARCH METHODOLOGY
To test the models and later validate them, the following methodology is used.

3.1 Sampling
The research adopts the firm as its unit of analysis. The population is manufacturing establishments in Uganda employing more than 5 people that make use of and own machine tools. The sampling frame to be used is the Uganda Bureau of Statistics 2003 Business Register. Within the register, 1939 firms were identified as meeting the criteria for inclusion in the research population.
In order to ensure that all manufacturing activities are represented, stratified random sampling methods will be used. In the compilation of the sampling frame, the population is divided into twelve groups based on the kind of activity the establishments are engaged in. These form the strata. On this basis, the following numbers were found in each group (a proportional-allocation sketch follows the list):
- Food processing, which includes processing of meat, fish and dairy products; grain milling; bakeries; sugar and jaggery; coffee roasting and processing; tea processing; other food processing and animal feeds (584)
- Tobacco and beverages (45)
- Textile, clothing, leather and footwear (124)
- Timber, paper and printing (165)
- Chemicals, paint and soap (53)
- Plastics and rubber (22)
- Bricks and cement (88)
- Steel and steel products (256)
- Foam products (5)
- Furniture (449)
- Civil works (141)
- Miscellaneous (7)
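One common way to draw a stratified random sample is to allocate the overall sample to the strata in proportion to their sizes in the register. The sketch below uses the stratum counts listed above; the target sample size of 200 firms is only an assumed example, not a figure from the study.

```python
# Stratum sizes from the Uganda Bureau of Statistics 2003 Business Register (as listed above).
strata = {
    "Food processing": 584, "Tobacco and beverages": 45,
    "Textile, clothing, leather and footwear": 124, "Timber, paper and printing": 165,
    "Chemicals, paint and soap": 53, "Plastics and rubber": 22,
    "Bricks and cement": 88, "Steel and steel products": 256,
    "Foam products": 5, "Furniture": 449, "Civil works": 141, "Miscellaneous": 7,
}

def proportional_allocation(sizes: dict, sample_size: int) -> dict:
    """Allocate a sample to strata in proportion to stratum size (rounded)."""
    total = sum(sizes.values())
    return {name: round(sample_size * n / total) for name, n in sizes.items()}

print(proportional_allocation(strata, 200))  # assumed total sample of 200 firms
```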
Data in this study will be collected by interviewing leaders of the selected establishments. In addition, a survey instrument is filled in.
3.2 Operationalisation of Variables
In the research, the dependent variable is AMT. The survey instrument asks firms to identify the type and number of AMT they had adopted. A firm's breadth of adoption (AMT) was the number of different types of advanced manufacturing technologies used by each firm. The survey instrument identified 21 possible such technologies. Thus, a firm's AMT can range from "0" (a firm which has no AMT) to "21" (a firm which has adopted all 21). The various independent variables are outlined below, based on the different models already presented above.
3.3 Education Levels Model
In the Education Levels Model, the technical capabilities of the different categories of employees are measured as the actual percentage of employees within each category who use computer-based technologies on a daily basis. In their study, Lefebvre et al. (1996) noted that the level and type of educational background and the extent of functional experience were poor proxies for the level of technical skills: for example, some extremely skilled machinists operating computerized numerically controlled machines (CNC machines) had only two or three years of experience and no post-secondary diploma. They also noticed that, in the more sophisticated firms, an extensive use of computer-based technologies by the non-productive employees was almost invariably associated with a higher AMT adoption rate. As a result of this, the survey instrument assesses the extent of use of computer-based applications for all types of employees.

3.4 Proponents Model
The variables in this model are measured using a five-point Likert scale. Firms are asked to respond to the following question: "On a scale of 1-5, indicate how the following groups, individuals or factors influence decisions to adopt AMTs in your firm." The influences are then categorized into two groups:
- External influences:
  - Tax incentives/favourable financing
  - Environment, safety/health
  - Customers
- Internal influences:
  - Managing Director/Chief Executive Officer
  - Engineering/Production Department
  - Marketing/Sales Department

3.5 Production Strategy Model
The variables here are also measured on a five-point Likert scale, and firms are asked to respond to a question of the form: "On a scale of 1-5, indicate how the following strategic motivations would influence or influenced your decision to adopt AMTs". In all, nine production strategies are listed in the instrument.
4.0 IMPLICATIONS OF THE PROPOSED MODELS
The models are so far presented as a broad framework for the explanation of educational levels, proponents of AMTs, production strategies and manufacturing activity phenomena in the context of AMT adoption in developing countries. The following are the implications of the models:
1. The models can be used as a useful means to measure the strengths and weaknesses of manufacturing firms in developing countries in their efforts to adopt more efficient technologies and therefore compete favourably in the global market.
2. The analysis of the resulting data may reveal that certain key variables predominate in determining the efficacy of the interacting variables. This effort may pinpoint key factors which determine success in AMT adoption and therefore provide a basis for policy intervention to accelerate the movement towards industrialisation.
3. Consequently, the models will explain the slow pace at which firms in developing countries are introducing advanced technologies. The resulting propositions can then be used to suggest coping strategies that can alleviate these problems.
4. The models can act as a basis for establishing more credible and valid models which can be used to explain interaction factors in the context of AMT adoption in developing countries.
5.0 CONCLUSIONS
The establishment of production facilities in developing countries is limited in part by the increasing scales that emerged with industrialization. The small size of their domestic markets reinforces the argument advanced by Alcorta (2001), who points out that exports could provide a way out of the scale problem but immediately cautions that a minimum of efficiency is often necessary prior to entering foreign markets. Understanding why these firms are not adopting agile manufacturing and time-based technologies can be critical to their competitiveness. The paper has presented general models that can be used as a launching pad for further work. Production strategy has been used as the main moderating variable, but both influence of proponents and educational levels may be taken on separately as interacting variables. Models that measure the interacting effects of impeding factors, flexibility types, openness to innovation and technical collaboration have not been presented. This study took manufacturing establishments that make use of and own machine tools as its population. Future work can be in the area of analyzing tooling facilities in the establishments to identify any moderating role played in the context of AMT adoption.

REFERENCES
Acs, Z.J. & Audretsch, D.B. (1998). Innovation and firm size in manufacturing. Technovation, 7, 197-210.
Ahluwalia, I.J. (1991). Productivity and Growth in Indian Manufacturing. Delhi: Oxford University Press.
Alcorta, L. (2001). Technical and Organization Change and Economies of Scale and Scope in Developing Countries. Oxford Development Studies, 29, 77-101.
Bennet, D., Vaidya, K. & Zhao, H. (1999). Valuing Transferred Machine Tool Technology. International Journal of Operations & Production Management, 19, 491-515.
Bonaccorsi, A. (1992). On the relationship between firm size and export intensity. J. Int. Bus. Studies, 3(4), 605-635.
Carlsson, B. & Stankiewicz, R. (1991). On the nature, function and composition of technological systems. Journal of Evolutionary Economics, 1(2), 93-118.
Hobday, M. (1995). Innovation in East Asia. Edward Elgar, London.
Katrak, H. (2000). Economic Liberalization and the Vintages of Machinery Imports in Developing Countries: An Empirical Test for India's Imports from the United Kingdom. Oxford Development Studies, 28, 309-323.
Lall, S. (1990). Building Industrial Competitiveness in Developing Countries. Paris: OECD Development Centre Studies.
Lefebvre, L.A., Lefebvre, E. & Harvey, J. (1996). Intangible assets as determinants of advanced manufacturing technology adoption in SMEs: Toward an evolutionary model. IEEE Transactions on Engineering Management, 43(3), 307-322.
Merideth, J. (1987b). The strategic advantages of new manufacturing technologies for small firms. Strategic Management Journal, 8, 249-258.
Naik, B. & Chakravarty, A.K. (1992). Strategic acquisition of new manufacturing technology: A review and research framework. International Journal of Production Research, 30(7), 1575-1601.
Primrose, P.L. & Leonard, R. (1985). Evaluating the "intangible" benefits of flexible manufacturing systems by use of discounted algorithms within a comprehensive computer program. Proc. Inst. Mechan. Engineers, 199, 23-28.
Rajagopalan, N., Rosheed, A.M. & Datta, D.K. (1993). Strategic decision processes: An integrative framework and future directions. Oxford, U.K.: Blackwell.
Rodrik, D. (1992). The limits of trade policy reform in developing countries. Journal of Economic Perspectives, 6, 87-105.
Samuels, J., Greenfield, S. & Mpoku, H. (1992). Exporting and the small firm. Int. Small Bus. J., 10(2), 24-36.
Soete, L. (1985). International diffusion of technology, industrial development and technological leapfrogging. World Development, 13(3), 409-432.
Sung, T.K. & Carlsson, B. (2003). The evolution of a technological system: the case of CNC machine tools in Korea. Journal of Evolutionary Economics, 13, 435-460.
Tsuji, M. (2003). Technological innovation and the formation of Japanese technology: the case of the machine tool industry. Journal of AI & Society, 17, 291-306.
CHAPTER SEVEN GEOMATICS
SPATIAL MAPPING OF RIPARIAN VEGETATION USING AIRBORNE REMOTE SENSING IN A GIS ENVIRONMENT. CASE STUDY: MIDDLE RIO GRANDE RIVER, NEW MEXICO
F. Farag, Strategic Research Unit, National Water Research Center, Cairo, Egypt
O. Akasheh, Biological and Irrigation Engineering Department, Utah State University, USA
C. Neale, Biological and Irrigation Engineering Department, Utah State University, USA
ABSTRACT
This paper demonstrates a procedure for classifying riparian vegetation in the middle Rio Grande River, New Mexico, USA, using high-resolution airborne remote sensing in a GIS environment. Airborne multispectral digital images with a spatial resolution of 0.5-meter pixels were acquired over the riparian corridor of the middle Rio Grande River, New Mexico, in July of 2001, using the new Utah State University (USU) digital imaging system, covering approximately 175 miles (282 km). The images were corrected for vignetting effects and geometric lens distortions, rectified to 1:24000 USGS digital orthophoto quads as a base map, mosaicked and classified. Areas of the vegetation classes and in-stream features were extracted and presented. The surface water area within the river, along with meso-scale hydraulic features such as riffles, runs and pools, was classified. The water surface area parameters are presented not as an indication of water flow volume in the river, though they could be related, but as a means of showing how changes occur moving downstream. Analysis of the river images shows that water diversions have a big effect on the water surface of the river. Records of river flows on that date confirm these classification results. Riparian vegetation mapping using high-resolution remote sensing gives a broad and comprehensive idea of the riparian zone health and condition along the river. In the case of the Middle Rio Grande, the vegetation classification image maps will help decision makers to study and identify problems that affect the river system. The maps will also provide a base from which to monitor the riparian vegetation in the future and a basis for change detection resulting from any management plan applied to the river corridor with the aim of protecting and restoring the river ecosystem.
Keywords: Riparian Vegetation; Remote Sensing; GIS; Meso-scale hydraulic features.
INTRODUCTION
Riparian vegetation systems are important for maintaining the water quality and habitat diversity of rivers. Traditional methods of mapping riparian systems include aerial photography, as well as extensive ground-based mapping using well-established surveying and measurement techniques (Neale, 1997). Global Positioning Systems (GPS) have also been used as aids in ground-based mapping. Furthermore, airborne multispectral videography systems gained acceptance over the last several years as applications developed to assist in mapping such riparian systems. More recently, the airborne multispectral digital system has been gaining acceptance as new applications have been developed and proven viable. In addition, improved digital camera systems have become available commercially, providing better quality imagery in digital format. Airborne multispectral digital imagery provides some advantages over traditional aerial photography: whereas the processing of aerial photographic film is expensive, airborne digital imagery can provide a quick turnaround of multi-band imagery in digital form ready for computer processing. Aerial photographs must be scanned for computer use in digital form, or features of interest interpreted and digitized from the photographs. Calibrated multispectral digital images lend themselves well to computer image classification and the automated extraction of features such as vegetation types, soils, vegetation density and cover, standing water, wetland areas, in-stream hydraulic features, exposed banks, and other features of interest in a riparian zone. As municipal and irrigation water demands increase due to the growing world population, rivers and streams are exposed to extensive pumping and diversion of water. In the past, there has been no consideration for riparian and wetland vegetation water requirements. Encroachment of agriculture and reduced river flows have led to the reduction of riparian vegetation areas. Estimation of riparian vegetation water requirements is considered one step toward conserving this resource. This will set limits for water diversion from rivers and streams for irrigation and municipal purposes. The estimation of riparian vegetation evapotranspiration is not sufficient to quantify the riparian vegetation water requirement unless the riparian vegetation is mapped and the area of the main species is estimated precisely. Furthermore, classification of surface water areas within the rivers, along with the meso-scale hydraulic features, will assist in water resources planning and management. High-resolution airborne remote sensing is a powerful technique for riparian vegetation mapping and monitoring. This paper demonstrates the processing steps and procedures required to use this type of multispectral digital imagery for riparian vegetation classification and mapping over the riparian corridors of the middle Rio Grande River, New Mexico. Furthermore, the study presents classification of the surface water areas within the river strip along with meso-scale hydraulic features such as riffles, runs and pools.
METHODS
2.1 Image Acquisition and Processing
High-resolution airborne multispectral images were acquired using the Utah State University (USU) airborne digital system at a nominal spatial resolution of 0.5-meter pixels. The middle Rio Grande was covered from Cochiti Dam down to Elephant Butte reservoir, approximately 175 river miles. The image acquisition flights occurred on the 24th, 25th and 26th of July 2001 under mostly clear sky conditions. Figure (1) shows the flown riparian buffer over the river, covering approximately 175 miles (282 km). The USU airborne multispectral digital system acquires spectral images centered in the green (0.55 µm), red (0.67 µm) and near-infrared (0.80 µm) portions of the electromagnetic spectrum (Neale, 2001).
Fig. 1: The flown riparian buffer over the middle Rio Grande River, NM, USA in 2001.

The images were acquired at a nominal overlap of 60% along the flight lines in one swath centered over the river. For the most part, the 1 km swath width was enough to cover the riparian zone on both sides of the river up to the drains that run parallel to the river on both sides. The individual spectral band images were geometrically corrected for radial distortions, radiometrically adjusted for lens vignetting effects and registered into 3-band images using the same technique developed by Neale and Crowther (1994) and Sundararaman et al. (1997). The 3-band images were then rectified to 1:24000 USGS digital orthophoto quads using common control points visible in both sets of imagery. The rectified images were mosaicked into larger image strips along the flight lines, representing reaches of the river. The mosaicked strips were calibrated to a reflectance standard using the USU system calibration and measurements of incoming radiation developed by Crosby et al. (1999).
2.2 Image Classification
Supervised classification was conducted using ground truthing information obtained during a field campaign using the technique developed by Neale (1997). Prints of selected multispectral images, along with a Global Positioning System, were used to locate and identify different vegetation types and spectral signatures visible in the images. In addition, a second ground truthing data set for the study area, provided by the US Bureau of Reclamation, Denver office, Colorado, was used. Part of the ground truthing data set was used to extract vegetation signatures and train the computer software (ERDAS IMAGINE) to recognize different surface types. The remainder of the ground truthing data was used to verify the accuracy of the classification of the major riparian vegetation classes. Spectral signatures were extracted from the riparian zone, the surrounding areas and within the river using the Seed Property Tools in IMAGINE. Several signatures were extracted visually and iteratively from the image to cover most features and surfaces that appeared in the images. The spectral statistical separability of the classes was studied using the Transformed Divergence method within the ERDAS IMAGINE software (ERDAS Field Guide, 2001). The final classification was then conducted using the Maximum Likelihood scheme and all pixels in the images were assigned to a specific class. Figure (2) shows the 3-band image and the corresponding classified image, as well as the final list of the classes for a section of the river, showing the areas for each class.
(Fig. 2 legend: Sand, Bare Soil, Cottonwood, Dense Tamarisk, Sparse Tamarisk, Dead Tamarisk, Gooding Willow, Acacia/Bushes, Wet Soil/Wet Sand, Grasses, Dirt Road, Asphalt Road, Shadow, Drain Water, Submerged Sand, Backwater, Riffle, Railway, Coyote Willow, Russian Olives, Elm Tree, Burnt Cottonwood, Lillies, Rock, Crops, with the classified area of each class)
Fig. 2: 3-band image and the corresponding classified image for a section of the river
2.3 Accuracy Assessment
An accuracy assessment was conducted on the classified images using the ground truthing data that was not used in the classification process. The accuracy assessment was conducted on the major vegetation classes, which were: Cottonwood, Tamarisk, Russian olive and Coyote willow. Ground truthing data that was not used in the signature set training process was compared to the classified results for that area. The matching and mismatching events were
recorded. In the case of mismatching events, the mismatching class was noted under the corresponding class in the classification column of the confusion table. In this table four parameters were calculated: user's accuracy, producer's accuracy and overall accuracy, as well as omission and commission errors. User's accuracy is calculated by dividing the number of samples correctly classified as a given class by the total number of samples classified as that class. Producer's accuracy is a measure of how well a certain area is classified and is calculated as the number of samples of the ground truth class that were correctly classified divided by the total number of ground truth samples of that class. Commission and omission errors are 100% minus the user's and producer's accuracy, respectively.
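A minimal sketch of these accuracy measures computed from a confusion (contingency) table whose rows are classified labels and whose columns are ground-truth labels; the example matrix is hypothetical, not the study's data.

```python
import numpy as np

classes = ["Cottonwood", "Tamarisk", "Russian olive", "Coyote willow"]
# Hypothetical confusion table: rows = classified as, columns = ground truth.
cm = np.array([[45,  2,  1,  0],
               [ 3, 50,  4,  1],
               [ 1,  4, 28,  0],
               [ 1,  2,  1, 33]])

correct = np.diag(cm)
user_acc = correct / cm.sum(axis=1)        # correct / total classified as the class
producer_acc = correct / cm.sum(axis=0)    # correct / total ground-truth samples of the class
overall_acc = correct.sum() / cm.sum()
commission_err = 1.0 - user_acc
omission_err = 1.0 - producer_acc

for i, name in enumerate(classes):
    print(f"{name}: user {user_acc[i]:.0%}, producer {producer_acc[i]:.0%}, "
          f"commission {commission_err[i]:.0%}, omission {omission_err[i]:.0%}")
print(f"Overall accuracy: {overall_acc:.0%}")
```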
3.0 RESULTS AND ANALYSIS The following paragraphs summarize the findings of the investigation.
3.1 Vegetation Distribution Analysis
Areas of different vegetation classes were extracted from the final classified images for every corresponding quadrangle base map, using the ERDAS IMAGINE software. The area statistics were extracted from the riparian zone, digitized as an area-of-interest (AOI) polygon over the 3-band image mosaics, which essentially corresponded to the region between the two drains that run parallel to the river in most sections of the river. Figure (3) shows the full-resolution 3-band multispectral imagery and the corresponding classified image for part of the river. Figure (4) shows the results of the vegetation class areas per quadrangle sheet, listed from north to south along the x-axis. The most important observation is that the Cottonwood area decreased from north to south while the Tamarisk area increased within the riparian zone. This distribution may indicate problems in the river ecosystem. Tamarisk results in the deterioration of the chemical and physical properties of the soil and prevents new cottonwood seedlings and other native species from emerging. Irrigation water diversions and the resulting decrease of in-stream flows as the river flows from north to south might be affecting the balance and the capacity of the native species to compete with Tamarisk. The vegetation distribution chart also indicates areas where Tamarisk control activities such as burning result in the dead Tamarisk class, mostly on the lower sections of the middle Rio Grande. Figure (5) shows a section of the river with dead Tamarisk in the 3-band image and the corresponding classified image. The value of the near-infrared band reflectance drops significantly over the dead Tamarisk compared with the healthy one.
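The area statistics described above amount to counting classified pixels inside the riparian AOI and converting the counts to areas using the 0.5-meter pixel size. The sketch below assumes a classified raster already clipped to the AOI, with made-up class codes; it is not the IMAGINE workflow itself.

```python
import numpy as np

PIXEL_AREA_M2 = 0.5 * 0.5          # 0.5-meter pixels
ACRES_PER_M2 = 1.0 / 4046.856

# Hypothetical classified raster clipped to the riparian AOI; 0 = outside the AOI.
class_codes = {1: "Cottonwood", 2: "Dense Tamarisk", 3: "Wet Soil/Wet Sand"}
classified = np.random.default_rng(1).integers(0, 4, size=(2000, 3000))

def class_areas(raster):
    codes, counts = np.unique(raster, return_counts=True)
    areas = {}
    for code, count in zip(codes, counts):
        if code == 0:
            continue                               # skip pixels outside the AOI
        name = class_codes.get(int(code), f"class {code}")
        areas[name] = count * PIXEL_AREA_M2 * ACRES_PER_M2
    return areas

for name, acres in class_areas(classified).items():
    print(f"{name}: {acres:,.1f} acres")
```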
Fig. 3: Full resolution of the 3-band multispectral imagery and the corresponding classified image
Fig. 4: Surface class statistics of the riparian area along the middle Rio Grande resulting from image classification (legend: Cottonwood, Tamarisk, Dead Tamarisk, Gooding Willow, Acacia/Bushes, Elm Tree, Grasses, Coyote Willow, Burnt Cottonwood, Lillies, Russian Olives, Crops)
Fig. 5: Area with dead Tamarisk resulting from a fire
3.2 In-Stream Surface Water Distribution Analysis
Figure (6) shows the surface water area within the river along with the meso-scale hydraulic features such as riffles, runs and pools. The water surface area is presented not as an indication of the water flow volume in the river, though they could be related, but as a means of showing how changes occur as we move downstream. The figure shows the typical change in vegetation pattern and water surface area in the river from north (Albuquerque) to south (San Antonio). More Cottonwood and less Tamarisk appear in the northern section of the imagery, while more Tamarisk and less water surface area are present in the downstream section south of San Antonio. Analyzing the river images from upstream to downstream, it was clear that water diversions have a large effect on the water surface area in the river, a strong indication that the water flows were affected as well. Records of river flows on that date confirm these classification results. Figure (7) shows the mean daily stream flow measurements for the month of July 2001, when the flight took place. These measurements are for four gauging stations going from north (below Cochiti) to south (San Marcial). There is a significant change in stream flow
going from north to south. The peaks might be attributed to the effects of precipitation events in the area or in the watershed. These peaks disappear as we go south due to the extensive water use upstream. Other statistics to note are the decrease in the water surface area as diversions occur and the subsequent increase of wet soil/sand and submerged sand downstream resulting from these diversions. Rapid variations in river flows affect in-stream habitats and riparian vegetation. The river corridor downstream from the diversion dam has a much higher area of dry and wet sand and a lower water surface area.
Fig. 6: Section of the river in the Albuquerque area (left), where Cottonwood is the predominant class (dark green), and classified image from San Antonio south (right), where Tamarisk is the major vegetation class (light green)
Fig. 7: Mean daily stream flow at four different locations along the Middle Rio Grande (Below Cochiti Dam, Albuquerque, San Acacia and San Marcial), July 2001
Figure (8) shows the in-stream class area distribution along the river going from north to south. The total water surface area decreased as the river flowed from north to south. These statistics will hopefully aid policy makers in setting diversions and preserving in-stream flows to support the native riparian vegetation and river habitats.
Fig. 8: In-stream class area distribution (Run, Pool, Sand, Wet Soil/Wet Sand, Submerged Sand, Backwater, Riffle) obtained from the classified airborne images for the Middle Rio Grande River
3.3 Accuracy Assessment Analysis
The contingency table for the four major vegetation classes in the river shows that Coyote Willow had the highest producer accuracy of the classification, with 92%, while Cottonwood had the highest user accuracy, with 89%. The classification methodology identified Tamarisk and Russian Olives with user accuracies of 86% and 82% and producer accuracies of 86% and 82%, respectively. The reason for the high accuracy for Coyote Willow and Cottonwood is that they are easier to distinguish from other trees: cottonwood are larger trees with larger shadows, while coyote willow is small, similar in size to a shrub, with almost no shadow. The ground truthing that was used to create this table was not used in the signature extraction process. The overall accuracy was calculated to be 88%. In summary, the classification accuracy was considered to be good and comparable to other studies using airborne multispectral imaging with spectral classification.
4.0 CONCLUSIONS AND RECOMMENDATIONS
Vegetation mapping using high resolution remote sensing gives a broad and comprehensive idea of the health and condition of the riparian zone along the river. In the case of the Middle Rio Grande, the vegetation classification image maps will help decision makers to study and identify problems that affect the river system. This map will also provide a base from which to monitor the riparian vegetation in the future and provide the basis for detection of change resulting from any management plan applied to the river corridor with the aim of protecting and restoring the river ecosystem. The paper presents some examples of the use of airborne multispectral digital images for mapping the riparian system as well as the meso-scale hydraulic features along the middle Rio
Grande River, located in New Mexico, USA. Digital images from these systems are well suited for image processing and spectral classification of vegetation types and densities. The images can be easily incorporated and analyzed within a GIS environment. Georeferenced images can also be used for geo-morphological studies and physical measurements within the riparian zone. Image resolution (0.5-meter pixels) is selected according to the size of the riparian system of interest. Future monitoring of the river using high resolution airborne remote sensing will aid in the detection of the escalating problems due to water quantity and quality and its availability for riparian vegetation and other habitats. High resolution remote sensing can aid in detecting changes in vegetation due to natural causes or control practices on introduced vegetation such as Tamarisk. The effectiveness of new Tamarisk control methods using a beetle imported from Asia could be assessed with a future flight of the river. REFERENCES
Bartz, K., J. L. Kershner, R. D. Ramsey, and C. M. U. Neale (1994). Delineation riparian cover types using multispectral, airborne Videography. Pages 58-67 in C. M. U. Neale, editor. The proceedings of the 14th Biennial Workshop on Color aerial photography and videography for resources monitoring, May 1993, Logan, Utah. American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland. Crosby G.S., Neale C. M. U. and Seyftried M. (1999). Vegetation Parameter Scaling on a Semiarid Watershed. Proc. 17th Biennial Workshop on Color Photography and Videography in Resource Assessment, May 5-7, 1999, Reno, Nevada. American Society of Photogrammetry and Remote Sensing, and Department of Environmental and Resource Sciences, University of Nevada, Reno, Nevada, pg. (218- 222). Edited by Paul T. Tueller. Neale, C.M.U. & Crowther, B.G. (1994) An airborne multispectral video/radiometer remote sensing system: development and calibration. Remote Sensing of Environment 49. Neale, C. M. U. (1997). Classification and mapping of riparian systems using airborne multispectral Videography. Journal of Restoration Ecology Vol. 5 No. 45, pp 103 - 112. Neale, C.M.U. 1991. "An airborne multispectral video/radiometer remote sensing system for agriculture and environmental monitoring". ASAE Symposium on Automated Agriculture for the 21st Century: December 16-17, 1991, Chicago, Illinois Redd, T., C. M. U. Neale, and T. B. Hardy (1994). Classification and delineation of riparian vegetation on two western river systems using airborne multispectral video imagery. Pages 202-211 in C. M. U. Neale, editor. The proceedings of the 14th Biennial Workshop on Color aerial photography and videography for resources monitoring, May 1993, Logan, Utah. American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland. Sundararaman S., and Neale C.M..N., (1997). Geometric Calibration of The USU Videography System. Proceeding of the 16th Biennial Workshop on Videography and Color Photography in Resource Assessment, April 29 - May 1st, 1997, Weslaco, Texas. American Society of Photogrammetry and Remote Sensing, USDA Subtropical Agricultural Research Laboratory.
CHAPTER EIGHT
ICT AND MATHEMATICAL MODELLING
2-D HYDRODYNAMIC MODEL FOR PREDICTING EDDY FIELDS
A. M. EL-Belasy, HRI, National Water Research Center, Delta Barrage, Egypt.
M. B. Saad, First Undersecretary of the Ministry of Water Resources & Irrigation, Egypt.
Y. I. Hafez, NRI, National Water Research Center, Delta Barrage, Egypt.
ABSTRACT
A 2-D mathematical model was modified in the Hydraulic Research Institute to predict eddy formation and to determine eddy dimensions and back velocities. The model includes a module for varying turbulent viscosity to obtain a better representation of the turbulence phenomena occurring downstream of hydropower structures. Irregular boundaries can be represented very well by the modified model. The modified model was applied to the experiments of Jain et al (1988) for a hydropower structure with a navigation installation. In this phase the predicted flow velocity gave good agreement with the measured data in terms of matching the eddy length and back velocity. In addition, the model was tested by comparing the results with experimental runs carried out in the Hydraulic Research Institute (MWRI) for Esna and Naga Hamaadi Barrages. The results simulated well the formation of large eddies and back flow. Keywords: 2-D Mathematical models; flow field; eddies; back velocity; turbulent viscosity.
1.0 INTRODUCTION
Most hydropower structures on a river consist of a powerhouse, a sluiceway and a navigational lock. The existence of these components, along with the operational scheme of the structure, induces a flow field with high turbulence intensity and complicated eddies, which cause problems for navigation and may cause erosion and sedimentation that may threaten the stability of structures. Mathematical models are needed to study how to minimize the negative effect of such flow fields. The high turbulence is introduced when most of the river flow is diverted through the powerhouse, thus creating a jet with very high average velocities which might be of the order of 2-4 m/s. It is customary to locate the
powerhouse on the opposite side to the navigational lock in order to avoid the reverse flow coming from the powerhouse jet into the navigational lock. When the river flow is released through the powerhouse near the bank side, the skewness of the jet discharge creates a large eddy with reverse flow just downstream of the lock, Fig. 1. This reverse flow creates problems for the navigational units coming out of the lock. To cope with this back flow, the guide walls of the lock are usually constructed in a way that reduces the reverse flow or its effect as much as possible. Investigators have resorted to physical and/or mathematical models in order to investigate the nature of the eddy structure downstream of hydropower structures and to find solutions for reducing its effect. The present study aims to modify a 2-D mathematical model for predicting eddy formation, eddy dimensions and back velocity, and to introduce turbulent viscosity as a function of space. The present study also aims to compute the 2-D flow pattern around the hydraulic structures. The study phases thus proceed by:
• Reviewing the available mathematical models
• Modifying the hydrodynamic model of Molinas & Hafez (2000)
• Applying the modified model
Fig. 1: Formation of a large eddy just downstream of a lock (after Jain et al, 1988)
2.0 AVAILABLE MATHEMATICAL MODELS
Determination of the flow distribution around hydraulic structures is an important aspect of their protection and safety. In general, flow around hydraulic structures can be numerically simulated through the use of 2-dimensional or 3-dimensional models. 2-dimensional models have already been used for a long time for tidal flow in seas and estuaries. They are also used for quasi-steady flows in river flow computations. As for harbors and bays, several models are available. Among them is the model of Kuipers & Vreugdenhil (1973). This model uses a 2-dimensional depth-averaged formulation for un-
steady free surface flow to predict steady recirculating flows. The model neglects the turbulent transport terms in the equations, but, owing to a smoothing procedure introduced to obtain numerical stability in their central difference scheme, terms which exert a diffusive action are effectively introduced. This is physically unreasonable, since the diffusion present in the numerical solution depends only on the smoothing coefficient used. On the other hand, many models are available for predicting eddy dimensions and the flow field in curved channels, such as: McGuirk & Rodi (1978), Booij (1989), Yu & Zhang (1989), Chapman & Kuo (1985), Yee-Chung Jin & Peter (1993), Bravo & Holly (1996), Lien, Hsieh & Yang (1999), Ouillon & Dartus (1997) and Molinas & Hafez (2000). These models underestimated the predicted size of the recirculation zone. From the above-mentioned survey of 2-D models, it can be concluded that most of the 2-D models are depth-averaged ones. As for jet flows, the jet induces a velocity field that causes a large surface eddy with high reverse flow, which causes problems for navigational units. These surface currents need to be predicted, and they cannot be predicted by depth-averaged models. Therefore, a surface model needed to be modified in order to simulate jet and curved (circulating) flows. The model of Molinas & Hafez (2000) was thus chosen for modification.
3.0 THE HYDRODYNAMIC MODEL
The differential governing equations (Molinas & Hafez, 2000) are written in Cartesian X-Y coordinates, where the X-direction is the main flow direction and the Y-direction is the lateral direction, as shown in Fig. (2).
Fig. 2: Cartesian X-Y coordinate directions
The complete equations of motion for a viscous fluid are known as the Reynolds-averaged equations. It is assumed that the fluid is incompressible and follows the Newtonian shear stress law, whereby the viscous force is linearly related to the rate of strain. For two-dimensional steady incompressible flows, the flow hydrodynamics governing equations are the equation for conservation of mass and the equations for conservation of momentum. The conservation of mass equation takes the form of the continuity equation, while Newton's equations of motion in two dimensions express the conservation of momentum. The continuity equation is
\frac{\partial U}{\partial X} + \frac{\partial V}{\partial Y} = 0 \qquad (1)
The momentum equation in the longitudinal (X) direction is given by
U\frac{\partial U}{\partial X} + V\frac{\partial U}{\partial Y} = -\frac{1}{\rho}\frac{\partial P}{\partial X} + \frac{\partial}{\partial X}\left(2\nu_e\frac{\partial U}{\partial X}\right) + \frac{\partial}{\partial Y}\left[\nu_e\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)\right] + F_x + \tau_{fx} \qquad (2)
The momentum equation in the lateral (Y) direction is given as:
U\frac{\partial V}{\partial X} + V\frac{\partial V}{\partial Y} = -\frac{1}{\rho}\frac{\partial P}{\partial Y} + \frac{\partial}{\partial X}\left[\nu_e\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)\right] + \frac{\partial}{\partial Y}\left(2\nu_e\frac{\partial V}{\partial Y}\right) + F_y + \tau_{fy} \qquad (3)
where U = longitudinal surface velocity, V = lateral surface velocity, P = mean pressure, ν_e = kinematic eddy viscosity, F_x = body force in the X direction = g sin θ, F_y = body force in the Y direction = 0.0, g = gravitational acceleration, θ = average water surface slope, ρ = fluid density, τ_fx = turbulent frictional stress in the X-direction and τ_fy = turbulent frictional stress in the Y-direction. The governing equations of the mathematical model were modified to meet the objective of the study by adding a new module for the kinematic eddy viscosity. The kinematic eddy viscosity is assumed to be a function of the velocity gradients or, more precisely, of the shear and normal turbulent stresses, as in equation (5). Equation (5) therefore gives a better representation of the turbulence than the constant turbulent viscosity models of Molinas & Hafez (2000) and Bravo & Holly (1996). The kinematic eddy viscosity is calculated according to Smagorinsky (1963) as:
\nu_e = C\,dx\,dy\left[\left(\frac{\partial U}{\partial X}\right)^2 + \left(\frac{\partial V}{\partial Y}\right)^2 + \frac{1}{2}\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)^2\right]^{1/2} \qquad (4)
Equation (4) is modified in the model to become

\nu_e = C\,dx\,dy\left[\left(\frac{\partial U}{\partial X}\right)^2 + \left(\frac{\partial V}{\partial Y}\right)^2 + \frac{1}{2}\left(\frac{\partial U}{\partial Y} + \frac{\partial V}{\partial X}\right)^2\right]^{1/2} + \nu_{BG} \qquad (5)
The parameter C is equal to 0.1 (Smagorinsky, 1963), dx dy is the area of the element, and ν_BG is a background turbulent viscosity that accounts for the turbulence generated by the bed and transported by the mean flow vertical velocity gradient. The two frictional stress terms (τ_fx, τ_fy) were evaluated at the water surface by Molinas & Hafez (2000) as shown below:
\tau_{fx} = \frac{k}{H}\,\frac{U}{|\vec{V}|}\,|\vec{V}|^{\,1+m} \qquad (6)

\tau_{fy} = \frac{k}{H}\,\frac{V}{|\vec{V}|}\,|\vec{V}|^{\,1+m} \qquad (7)

where H is the flow depth, |\vec{V}| = \sqrt{U^2 + V^2} is the magnitude of the surface velocity, and k and m are friction parameters.
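A small numerical sketch of the varying-viscosity module: given the velocity gradients at an element integration point, it evaluates a Smagorinsky-type eddy viscosity of the form of equation (5), with C = 0.1 and the element area dx·dy, plus a background value. The gradient values, element size and background viscosity below are illustrative assumptions, not values from the study.

```python
import math

def eddy_viscosity(dudx, dudy, dvdx, dvdy, dx, dy, c=0.1, nu_bg=0.5):
    """Smagorinsky-type eddy viscosity plus background value, in m^2/s (equation 5 form)."""
    strain = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx)**2
    return c * dx * dy * math.sqrt(strain) + nu_bg

# Illustrative values: a 5 m x 5 m element in the shear layer of the powerhouse jet.
print(eddy_viscosity(dudx=0.02, dudy=0.30, dvdx=0.05, dvdy=-0.02, dx=5.0, dy=5.0))
```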
In this study, the numerical technique used to solve the governing equations is based on the Galerkin finite element method (FEM). Application of the finite element method begins by dividing the water body being modeled into elements. Quadrilateral elements are used; this shape can easily be arranged to fit complex boundaries. The elements are defined by a series of node points at the element vertices. Following the Galerkin method, values of the dependent variables are approximated within each element using the nodal values and a set of interpolation functions. Substituting these approximations of the dependent variables into the governing equations results in a set of integral equations. These equations are integrated over each element using four-point Gaussian quadrature. The contributions of all element integrations are added together to obtain a global matrix, the solution of which represents the finite element approximation of the boundary value problem. Due to the presence of the inertia terms, the governing equations are nonlinear; therefore the global matrix representing these terms is also nonlinear. Because of this nonlinear nature of the governing equations, the numerical solution is obtained by assuming initial values for the variables and by iterating. The initial values assumed for the two velocity components and the pressure were zero at all interior nodes. Gauss forward elimination and back substitution techniques are used to solve the systems of equations. After each iteration the solution vector is updated using:
\mathbf{U}^{n+1} = \mathbf{U}^{n} + \theta\,[\Delta \mathbf{U}]^{n+1} \qquad (8)
where θ is the relaxation coefficient, and the superscripts (n) and (n+1) refer to the iteration counter. The iteration process is continued until the maximum difference between two successive iterations across all the nodes of the mesh is less than a specified tolerance. The iterative penalty concept is used to enforce the constraint of incompressibility. In this approach, the non-hydrostatic pressure is considered as an implicit variable that adjusts itself to enforce the incompressibility constraint. In the iterative penalty concept, the pressure is expressed as (Zienkiewicz, 1989):

P^{\,n+1} = P^{\,n} - k\left(\frac{\partial U}{\partial X} + \frac{\partial V}{\partial Y}\right) \qquad (9)
where k is the penalty parameter and n is the iteration counter. Substituting the pressure from Eq. (9) into Eqs. (2) and (3) indirectly enforces the conservation of mass condition. The continuity equation (1) may then be omitted from the set of governing equations, reducing the number of simultaneous equations from three to two and therefore improving the computational efficiency. After solving the system of equations and obtaining the velocities U and V, the pressure value is updated according to Eq. (9).
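To illustrate how the relaxation update of equation (8), the penalty pressure update of equation (9) and the convergence test fit together, the sketch below runs the outer iteration on placeholder arrays. The linear solve is replaced by a stand-in function, so this is an illustration of the iteration logic only, not the Galerkin finite element assembly of the model; all numbers are assumptions.

```python
import numpy as np

def solve_linearized(U, V, P):
    """Placeholder for the assembled Galerkin FEM solve; returns nodal increments.
    Here it simply nudges the fields toward an arbitrary target so the loop converges."""
    U_target, V_target = 1.0, 0.2
    return U_target - U, V_target - V

def iterate(n_nodes=50, theta=0.5, penalty_k=1.0e3, tol=1.0e-6, max_iter=200):
    U = np.zeros(n_nodes)              # initial values: zero at all interior nodes
    V = np.zeros(n_nodes)
    P = np.zeros(n_nodes)
    for it in range(max_iter):
        dU, dV = solve_linearized(U, V, P)
        U_new = U + theta * dU                 # equation (8) applied to U
        V_new = V + theta * dV                 # and to V
        divergence = np.zeros(n_nodes)         # stands in for dU/dX + dV/dY at the nodes
        P = P - penalty_k * divergence         # equation (9): iterative penalty update
        change = max(np.abs(U_new - U).max(), np.abs(V_new - V).max())
        U, V = U_new, V_new
        if change < tol:                       # stop when successive iterates agree
            return U, V, P, it + 1
    return U, V, P, max_iter

U, V, P, iters = iterate()
print(f"converged in {iters} iterations; U[0] = {U[0]:.4f}")
```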
As for the boundary conditions, uniform longitudinal and lateral velocities (U and V) are prescribed at the upstream boundary. Fully developed flow conditions are applied for the longitudinal velocity at the downstream boundary. For the lateral velocity equation, the lateral boundary shear is set to zero. At the channel walls, the no-slip boundary condition is applied (U = 0.0, V = 0.0).
4.0 MODIFIED MODEL APPLICATION TO JAIN ET AL (1988) EXPERIMENTS
Jain et al (1988) investigated experimentally the flow fields and navigation conditions induced by hydropower releases, see Fig. 1. They used undistorted models to identify the size of eddies and the back velocity in a typical lock and dam installation. With release from the powerhouse and no flow over the spillway, a large eddy is formed, as sketched in Fig. 1. The size of the eddy was taken herein as the distance between its downstream and upstream ends, identified by letters C and D in Fig. 1. The back velocity of the eddy, U, was specified as the average velocity over the distance CD. The measured values of the eddy size and the back velocity were L = 165 m and U = 0.8 m/s.
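As an illustration of how these two reported quantities can be extracted, the sketch below takes a longitudinal velocity profile sampled along the line through points C and D, measures the eddy length as the extent of reversed (negative) velocity, and takes the back velocity as the mean reverse-flow speed over that extent. The sampled profile is synthetic, not the experimental data.

```python
import numpy as np

# Synthetic longitudinal velocity U (m/s) sampled every 5 m along the line C-D;
# negative values indicate reverse (upstream-directed) flow near the lock exit.
x = np.arange(0.0, 300.0, 5.0)
U = np.where((x > 20.0) & (x < 185.0), -0.7, 0.4)

reverse = U < 0.0
if reverse.any():
    xs = x[reverse]
    eddy_length = xs.max() - xs.min()           # distance between points C and D
    back_velocity = np.abs(U[reverse]).mean()   # average reverse-flow speed over C-D
    print(f"eddy length ~ {eddy_length:.0f} m, averaged back velocity ~ {back_velocity:.2f} m/s")
else:
    print("no reverse flow detected along the sampled line")
```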
It was found that the experiments of Jain et al (1988) provide valuable data sets. In addition, the application of the Bravo & Holly (1996) model provides a valuable basis for comparison with the modified-model results. The domain of the experiments of Jain et al (1988) is divided into a finite element mesh as shown in Fig. 3. The total number of elements was 3940, which produced 4086 nodes using the four-node element. This number of nodes is nearly twice the number of grid points used by Bravo & Holly (1996). The complex nature of the eddy structure dictates that as fine a resolution as possible is needed. In the context of constructing the finite element mesh, the domain of the study was initially divided into trapezoidal loops or macro-elements, where a total of 34 loops were used. These loops are then divided into elements. The source model of Molinas & Hafez (2000), before modification, was applied to the experiments of Jain et al (1988). In that model the eddy viscosity is constant. Values of 10 m²/s, 5 m²/s and 2.5 m²/s for the global eddy viscosity were used in this study. Usseglio-Polatera & Schwartz-Benezeth (1987) suggested using a value ν = 10 m²/s for flows with high vorticity zones. Eddy viscosity in natural open channels can be related to the bed shear velocity and depth (Rodi, 1982) by

\nu = \nu_0 + \alpha\,U_* H \qquad (10)
where ν₀ = base kinematic eddy viscosity, U_* = bed shear velocity, H = water depth, and α = a dimensionless coefficient, approximately equal to 0.6 in natural channels. A constant eddy viscosity is assigned by specifying α = 0.0 and ν₀ > 0.0. Following equation (10), the second and third values were selected (5 m²/s and 2.5 m²/s). The discharge given to the model was 745 m³/s from the hydropower structure and the average water depth was taken as 4.74 m. The Darcy-Weisbach friction factor was taken equal to 0.05.
Fig. 3: Finite element mesh for the experiments of Jain et al (1988)
The results showed that the calculated recirculation in the main backflow eddy is underpredicted with a value ν = 10 m²/s: the computed eddy length was L = 95 m (approximately 58% of the measured value) and the averaged back velocity (Fig. 1) was U = 0.1 m/s (approximately 12.5% of the measured value). The results obtained using values of ν = 5 m²/s and 2.5 m²/s are shown in Table 1.
Table 1: Comparison of the measured averaged back velocity and eddy length of Jain et al (1988) with values computed by Bravo & Holly (1996), Molinas & Hafez (2000) and the modified model
Parameter                       | Jain et al (1988) experiment | Bravo & Holly (1996) | Molinas & Hafez (2000)              | Modified model
                                |                              |                      | ν=10 m²/s | ν=5 m²/s | ν=2.5 m²/s |
Averaged back velocity (m/s)    | 0.80                         | 0.5                  | 0.1       | 0.28     | 0.35       | 0.65
Eddy length (m)                 | 165                          | 150                  | 95        | 195      | 225        | 160
The modified model was applied to the experiments of Jain et al (1988) using the same finite element mesh that was used in the application of the source model of Molinas & Hafez (2000). Fig. 4 shows a vector plot of the two-dimensional velocity field obtained from the developed model. In this figure the jet flow generates two eddies with different size and strength. The weaker eddy, with smaller size and strength, is below the jet stream and confined by the channel walls. It has been generated by the shearing action of the jet stream on the nearly stagnant water in this region. The modified model succeeded in picking up the details of this eddy, which did not appear in the experimental data. The eddy length simulated by the modified model was 160 m, while the predicted averaged back velocity (Fig. 1) was 0.65 m/s. These values are to be compared with the measured eddy length of 165 m and back flow velocity of 0.8 m/s, while the corresponding Bravo & Holly values are 150 m and 0.5 m/s. Therefore, the modified model prediction of the general eddy parameters (length and back flow velocity) can be considered reasonable based on the experimental data. Velocity profiles at three cross-sections were predicted and compared to the experimental data of Jain et al (1988) and the predictions of Bravo & Holly (1996). The locations of the measured cross-sections 1, 2 and 3
are shown in Fig. 5 and the comparison between the velocity profiles is shown in Fig. 6. It is clear from Fig. 6 that the velocity profiles computed by the developed model are close to both the experimental data and the Bravo & Holly model predictions. The model was also run to compare the results with experimental runs carried out in the Hydraulic Research Institute for Esna and Naga Hamaadi Barrages. The results simulated well the formation of large eddies and back flow.
Fig. 4: Velocity field obtained from the modified model
Fig. 5: Locations of the measured cross sections

Fig. 6: The model predictions (velocity distributions at CS No. 1 and CS No. 2) along with the experimental data of Jain et al (1988) and the predictions of Bravo & Holly (1996)
5.0 SUMMARY AND CONCLUSIONS
A two-dimensional mathematical model was modified to include a module for varying turbulent viscosity that can be used to achieve better identification and prediction of the location and size of eddies and of the back velocity around hydraulic structures. This enables the designer to solve navigation difficulties and to improve the design of hydraulic structures. The modified two-dimensional mathematical model was examined using the experiments of Jain et al (1988) for a hydropower structure with a navigation installation. The predicted flow velocity gave good agreement with the measured data in terms of matching the eddy length and back velocity. The application of the Bravo & Holly (1996) model to the experiments of Jain et al (1988) provides a valuable basis for comparison with the modified-model results. The conclusions that can be deduced from the present study are:
1. Models with a constant eddy viscosity value give underpredicted values for the eddy length and back velocity downstream of hydropower structures.
2. Introducing a module for varying turbulent viscosity gives reasonable results for the circulating eddies and back flow downstream of hydropower structures.
REFERENCES
Bravo, H. R. & Holly, F. M. (1996), Turbulence model for depth-averaged flow in navigation installations, Journal of Hydraulic Engineering, ASCE, 122(12).
Booij, R. (1989), Depth-averaged k-ε modeling, 23rd IAHR Congress, Ottawa, Canada, pp. A-199-A-206.
Chapman, R. S. & Kuo, C. Y. (1985), Application of the two-equation k-ε turbulence model to a two-dimensional, steady, free surface flow problem with separation, International J. for Num. Methods in Fluids, Vol. 5, pp. 257-268.
Jain, S. C., Bravo, H. R. & Kennedy, J. F. (1988), Evaluation and minimization of effect of hydroplant release on navigation, IHR Rep., Iowa Inst. of Hydr. Res., Univ. of Iowa, Iowa City, Iowa.
Kuipers, J. & Vreugdenhil, C. B. (1973), Calculations of two-dimensional horizontal flow, Rep. No. S 163, Part 1, Delft Hydr. Lab., Delft, the Netherlands.
Lien, H. C., Hsieh, T. Y. & Yang, J. C. (1999), Bend flow simulation using 2-D depth-averaged model, Journal of Hydraulic Engineering, ASCE, Vol. 125, No. 10.
Lien, H. C., Hsieh, Y. Y. & Yang, J. C. (1999), Use of two-step split-operator approach for 2-D shallow water flow computation, Int. J. Numer. Methods in Fluids, in press.
McGuirk, J. J. & Rodi, W. (1978), A depth-averaged mathematical model for the near field of side discharge into open channel flow, J. Fluid Mech., Vol. 86, Part 4, pp. 761-781.
Molinas, A. & Hafez, Y. I. (2000), Finite element surface model for flow around vertical wall abutments, Journal of Fluids and Structures, Vol. 14, pp. 711-733.
Ouillon, S. & Dartus, D. (1997), Three-dimensional computation of flow around groins, Journal of Hydraulic Engineering, ASCE, 109(11).
Rodi, W. (1982), Hydraulic computations with the k-ε turbulence model, in Smith, P. E. (ed.), Proceedings of the Conference of the Hydraulics Division of the American Society of Civil Engineers, Jackson, Miss., 1982.
Smagorinsky, J. (1963), General circulation experiments with the primitive equations. I: The basic experiment, Monthly Weather Rev., 91, 99-164.
Usseglio-Polatera, J. M. & Schwartz-Benezeth, S. (1987), CYTHERE program. User guide, Sogreah, Grenoble, France.
Yee-Chung Jin & Peter, M. S. (1993), Predicting flow in curved open channels by depth-averaged method, Journal of Hydraulic Engineering, ASCE, 119, No. 1.
Yu, L. R. & Zhang, S. N. (1989), A new depth-averaged two-equation (k-ε) turbulent closure model, Proc. 3rd International Symposium on Refined Flow Modeling and Turbulence Measurements, Tokyo, Japan, July 1988, pp. 549-555.
Zienkiewicz, O. C. (1989), The Finite Element Method, Fourth Edition, Vol. 1, New York: McGraw-Hill.
SUSTAINABILITY IMPLICATIONS OF UBIQUITOUS COMPUTING ENVIRONMENT Manish Shrivastava and Donart A Ngarambe, Department of CELT, Kigali Institute of
Science, Technology and Management, Kigali, Rwanda
ABSTRACT
In a ubiquitous computing environment, a person might interact with hundreds of computers at a time, each invisibly embedded in the environment and wirelessly communicating with the others. The vision of ubiquitous computing is to make computers available in everyday objects; it is a new kind of relationship between people and computers. As progress in ubiquitous computing continues, significant opportunities and threats to social and environmental sustainability will also arise. There are many issues regarding sustainability, such as: How will ubiquitously available computing systems affect the ecological balance? What happens to society when hundreds of invisible microcomputers surround each person and communicate with each other? What are the implications for social sustainability? This paper explores theoretical issues of the social, environmental and ethical implications of the ubiquitous computing environment for sustainable development. Keywords: Sustainable Development, Ubiquitous Computing, Ubiquitous Society
1.0 INTRODUCTION Ubiquitous computing refers to a new vision of applying Information and Communication technologies to our daily lives. It involves the miniaturization and embedding of hundreds of invisible computers in everyday objects that communicate to each other using wireless networking, thus making computers ubiquitous in the world around us. Due to the ubiquitous computing environment, the world is moving towards a ubiquitous society where people can access and operate their computing devices anywhere, anytime. The key devices involved in building a ubiquitous computing environment are Mobile and Smart phones, PDAs (Personal Digital Assistants) and Hand-held devices, Sensors and Wearable computers etc. The history of computing has been associated with paradigm shifts in the relationship between humans and computers. As Patric Mckeown wrote "More than 25 year before, the first personal computer changed the way people thought about computing. One negative aspect of the use of personal computers relates to the location of the resulting data and information on a single computer located in a home or office because people often want data and information on computer other than their personal computers. Instead of having a personal computer, people want to have personal information available to them on any kind of machine, no matter where they are working" Mckeown (2003).
As advances in ubiquitous computing continue, the field moves away from being purely technology-driven and towards a more human-centric perspective. It is a new kind of relationship of people to computers, where computers will be invisibly embedded in everyday objects and will support people in their daily life. If our life is to be surrounded and supported by these miniature computing devices, designers of ubiquitous computing systems must take into account the potential social, ecological and ethical impact of their systems. One must ask whether these technologies might have undesirable side effects on human health, and whether sustainable development will be supported or not. How will ubiquitously available computing systems affect the ecological balance? What about social sustainability if consumers' privacy and freedom of choice are threatened? The more ubiquitous computers become in society, the larger this concern will become. This paper explores such issues of the ubiquitous computing environment on sustainable development.
2.0 THE PARADIGM OF UBIQUITOUS COMPUTING
Ubiquitous computing is referred to as the "third paradigm" of computing. First were mainframes, where one computer was shared by many people. Second is the personal computing era, where one computer is used by a single person. Next comes ubiquitous computing, where one person can use many computers. As Weiser and Brown wrote, "The third wave of computing is that of ubiquitous computing, whose cross-over point with personal computing will be around 2005-2020" Weiser, et al, (1996). This emerging paradigm is a result of the rapid advancements in the ongoing miniaturization of electronic circuits and the corresponding exponential increase in embedded computational power. The increasing miniaturization of computer technology will make it possible to integrate small processors and tiny sensors into more and more everyday objects, leading to the disappearance of traditional PC input and output media such as keyboards, mice, and screens. Instead, we will communicate directly with our clothes, watches, pens, and furniture, and these objects will communicate with each other and with other people's objects. This era was once described by former IBM Chairman Lou Gerstner as "A billion people interacting with a million e-businesses through a trillion interconnected intelligent devices" Gerstner (2000).
2.1 What is Ubiquitous Computing? In Latin the word 'ubiquitous' means "God exists everywhere simultaneously". Webster defines ubiquitous as "Existing or being everywhere at the same time". Ubiquitous computing has roots in many aspects of computing. In its current form, it was first articulated by Mark Weiser at the Computer Science Lab at Xerox PARC. He described it as "Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user" Weiser (1993). Weiser suggests, "Computing technology will become so specialised and well integrated into our physical world that we will no longer be aware of it in itself, just as we would
now not particularly think of the pen or pencil technology that we use when writing some notes on a sheet of paper" Weiser (1991). Rick Belluzo, general manager of Hewlett-Packard, compared ubiquitous computing to electricity, calling it "the stage when we take computing for granted. We only notice its absence, rather than its presence" Amor Danie (2001). 2.2 The Technology Trends Ubiquitous computing comprises a broad and dynamic spectrum of technologies. Two of the most common placeholders for these devices are the personal technologies and smart environments.
Personal Area Network (PAN): This is an interconnection of personal technology devices that communicate over a short distance, which is less than 33 feet or 10 meters or within the range of an individual person, typically using some form of wireless technology. Some of these technologies are:
• Bluetooth technology: The idea behind Bluetooth is to embed a low cost transceiver chip in each device, making it possible for wireless devices to be totally synchronized without the user having to initiate any operation. The chips would communicate over a previously unused radio frequency at up to 2 Mbps. The overall goal of Bluetooth might be stated as enabling ubiquitous connectivity between personal technology devices without the use of cabling, as written in Mckeown (2003a).
• High rate W-PANs: As per standard IEEE 802.15 TG3, launched in 2003, these technologies use higher power devices (8 dBm) than regular Bluetooth equipment (0 dBm) to transmit data at a rate of up to 55 Mbps and over a range of up to 55 m (Ailisto et al, 2003).
• Low power W-PANs: As per standard IEEE 802.15 TG4, these technologies are particularly useful for handheld devices since energy consumption for data transmission purposes, and costs, are extremely low. The range of operation of up to 75 m is higher than current Bluetooth applications, but the data transfer rate is low (250 Kbps) (Ailisto et al, 2003).
BodyArea Network (BAN): Wireless body area networks interlink various wearable computers and can connect them to outside networks and exchange digital information using the electrical conductivity of the human body as a data network. Advantages of BANs versus PANs are the short range and the resulting lower risk of tapping and interference, as well as low frequency operation, which leads to lower system complexity. Technologies used for wireless BANs include magnetic, capacitive, low-power far-field and infrared connections Raisinghani et al (2004). Sensors and Actuators: Sensors are essential in capturing physical information from the real world. Different types of sensors are needed for different phenomena. These devices collect data about the real world and pass it on to the computing infrastructure for enabling decision-making. They can detect and measure mechanical phenomena of the user like movements, tilt angle, acceleration and direction. Actuators provide the output
direction from the digital world to the real world. These devices allow a computing environment to affect changes in the real world.
Smart Tags: The smart tags contain microchips and wireless antennas that transmit data to any nearby receiver which is acting as a reader. Beyond just computing a price, the smart tags will enable companies to track a product all the way. New tags can recognize more than 268 million manufacturers, each with more than 1 million products. They use Radio frequency identification (RFID) system, which encompasses wireless identification through radio transmission. 3.0 SUSTAINABLE DEVELOPMENT The most widely cited definition of sustainable development was given by the World Commission on Environment and Development in 1987: In order to be considered sustainable, a pattern of development has to ensure "that it meets the needs of the present without compromising the ability of future generations to meet their own needs" (WCED, 1987). The world summits on environment and development in Rio de Janeiro in 1992 and in Johannesburg 2002 have shown that the goal of attaining sustainable development has become a predominant issue in international environmental and development policy. According to the UN statement following the 'Rio+5' event in 1997: "Economic development, social development, and environmental protection are interdependent and mutually reinforcing components of sustainable development. Sustained economic growth is essential to the economic and social development of all countries, in particular developing countries" retrieved from http://www.ecouncil.ac.cr/rio/susdevelopment.htm. It is widely accepted that sustainability has an environmental, a social and an economic dimension. These sustainability dimensions are (Ducatel et al, 2005). 9
• Personal physical and psychological sustainability: Can it reduce (mental) health risks from information stress, virtual identities and information overload? What precautionary evaluation is needed to avoid new health impacts of these pervasive electronic radiations?
• Socio-economic sustainability: Digital divides emerging from unequal developments and access to the ubiquitous infrastructure could be related to income, education and skills, age and work.
• Environmental sustainability: There are pressures created by new growth and the material wealth associated with these technologies. The embedding of computers implies considerable extension of recycling and reclamation of electronic waste.
4.0 SOCIAL IMPLICATIONS The omnipresence of computing power and its widespread use has begun to affect our everyday lives in many ways we do not even notice. The sustainability-related opportunities and risks of ubiquitous computing for society are illustrated here. 4.1 Social Opportunities
Personal Empowerment There are two main motives for personal augmentation. The first is to overcome some of the physical disabilities and the second is to augment the capabilities of a normal healthy person. Using wearable computers and sensors, individual sensory and physical capabilities can be significantly enhanced. Access to information and knowledge will work more efficiently under ubiquitous computing. Access will be possible everywhere and anytime (pervasiveness), and be dependent upon one's location and local environment (context sensitivity). Improvements in both physical and mental performance of a human being can be enhanced by providing him a smart working environment. Several concepts are in development that could increase individual's work efficiency and personal productivity. Protection As RFID is intended to be used for unique identification of real-world objects (e.g., items sold in supermarkets), using RFID transponders in the form of "smart labels" will probably become the first and most widespread example of ubiquitous computing. With "smart labels" it will be much easier to protect goods from theft or imitation. 4.2 Social Risks
Consumer Freedom of Choice As more and more objects and environments are being equipped with ubiquitous technology, the degree of our dependence on the correct, reliable functioning of the deployed devices and microcomputers, including their software infrastructures, is increasing accordingly. Today, in most cases, we are still able to decide for ourselves whether we want to use devices equipped with modern computer technology or not. But in a largely computerized future, it might not be possible to escape from this sort of technologically induced dependence, which leads to a number of fundamental social challenges (Jürgen Bohn et al, 2004). Moreover, a loss of competition among service providers may occur if proprietary de facto standards continue to play a significant role in the computer economy. As a result the consumer may lose the power to decide which ICT products or ICT services he uses and what price he pays (Andreas Koehler et al, 2005).
Knowledge Sustainability Most information in our everyday life today remains valid for an extended period of time, e.g. food prices in our favourite supermarket, or prices for public transport. Using acquired knowledge and prior experience, individuals manage future situations and tasks. In a highly dynamic world, an experience that was valid and useful one minute could become obsolete and unusable the next minute. For example, with mobile phones, individuals no longer remember most phone numbers, and the numbers change very frequently; moreover, if a mobile phone is not working, it becomes very difficult to make contact. Such a loss of knowledge sustainability could, in the long term, contribute to an increased uncertainty and lack of direction for people in society.
Impact on Privacy One major characteristic of ubiquitous computing technology is that it has a significant impact on the social environments in which it is used. Although data protection/privacy is not a new problem, ubiquitous computing introduces a new privacy risk due to timely and accurate location data for an individual (both real-time and historical) being made available. Because location management is part of such an environment, it can also be used to intrude on the privacy of people. Some users may be uncomfortable with the ability of ubiquitous computing system to be able to obtain their locations at any time. For example, assume your apartment is outfitted with all kinds of sensors to feed a ubiquitous computing system that could help you to manage life threatening situations, such as fires. However, a big concern would be about how the collected data is used, would you want your neighbour police station to be able to monitor in which room you are currently residing and how much alcohol you are consuming. Gathering data of any kind irrevocably leads to privacy concerns. Psychological Stress Apart from the privacy implications the ubiquity of sensors may also lead to psychological unease on the part of users. The constant feeling of observability, as it can be generated by the perpetual presence of certain sensors can, hence, lead to undesirable psychological feelings and unease about the sensor-laden environment. The old sayings that 'the walls have ears' and 'if these walls could talk' have become the disturbing reality. Effects of Ubiquitous computing can indirectly influence the user's behavior and the social context encountered. Such as poor usability, disturbance and distraction, the feeling of being under surveillance, the possible misuse of technology for criminal purposes, as well as increased demands on individuals' productivity. Stress has many side effects on health. 5.0 ENVIRONMENTAL IMPACTS Generally most ubiquitous pervasive computing devices will have one significant environmental advantage over traditional computers: that they are physically smaller and inherently consume less material. On the other hand, they have many other disadvantages on our environment in terms of raw material consumption, energy consumption, and disposal. The low cost will encourage rapid replacement and in addition, their small size, weight, embedding in other materials and overall design for ubiquity will disperse them widely. Ecological sustainability will be influenced by the following ways:
Resource Consumption Intel expects that semiconductor technology will develop continuously towards design geometry of 22 nanometers within the coming ten years without a general change in material composition. However, due to the increasing number of components that will be used, the total material and energy consumption caused by the production of electronic goods is still expected to accelerate global resource depletion. Furthermore, the trend toward throwaway electronics caused by price reductions will shorten the average service life of electronic devices and components in general. For these reasons, a reduc-
tion of the total demand for raw materials by the ICT sector can be anticipated only in the moderate scenario (Andreas Koehler et al, 2005). Because many of these devices run on low-power batteries rather than fixed AC power, there is a great potential for power savings. On the other hand, these batteries often contain heavy metals and are an environmental hazard in themselves.
End-of-Life Treatment Another environmental risk of ubiquitous computing is the release of pollutants caused by the disposal of the resulting waste.. Service life is an essential parameter of the waste generation by ICT products. Halving service life means doubling the resource use for production and doubling the amount of waste disposed per service unit. Disposable versions of some devices, like disposable cell phones will soon emerge. By this effect, ubiquitous computing could indirectly contribute to an increasing demand for raw materials and an increasing amount of waste. End-of-life treatment of Ubiquitous computing will have to deal with large numbers of small electronic components that are embedded in other products. More and more microelectronic throwaway products, including rechargeable batteries, will be found in waste streams outside that of electronic waste (packaging, textiles). As a consequence, the risk of uncontrolled disposal of toxic substances as a part of household waste could counteract the goals of the Environmental Sustainability. If no adequate solution is found for the end-of-life treatment of the electronic waste generated by millions of very small components, precious raw materials will be lost and noxious pollutants emitted to the environment. Andreas Koehler, et al, (2005).
Indirect Effects The ubiquitous use of miniaturized and embedded microelectronic components interconnected in wireless networks could have an influence on human health due to the additional exposure to non-ionizing radiation. Non-ionizing radiation (NIR) is emitted for wireless data transfer, which is one of the basic technologies of ubiquitous computing. As a consequence, a great part of the emitted radiation will be absorbed by human body. Even sources of low transmitting power may cause high exposure to radiation if they are very close to body tissues. Due to the wide range of substances used for microelectronics, the risk of allergic reactions or chronic poisoning increases. However the level of risk depends on the substances contained and the kind of encapsulation. In the future, new types of microelectronics will emerge that may release new potentially harmful substances. 6.0 CONCLUSION Ubiquitous computing is a socio-technical phenomenon in which computers are integrated into people's lives and the world at large. This paper discussed some of the issues about the possible consequences of this technology from social and environmental perspectives. To make ubiquitous computing sustainable, precautionary measures have to be initiated quickly.
REFERENCES
Ailisto, H., Kotila, A. and Str6mmer, E. (2003), Ubicom applications and technologies, Presentation slides from ITEA, http://www.vtt.fi/ict/publications/ailisto et al 030821.pdf Amor, Danie (2001), Pervasive Computing: The Next Chapter on the Internet, http://www.informit.com/articles/article.asp?p = 165227&rl = 1 Andreas, Koehler and Claudia, Sore (2005), Effects of Pervasive Computing on Sustainable Development, http://www.patmedia.net/tbookman/techsoc/Koehler.htm Ducatel, K., Bogdanowicz, M., Scapolo, F. and Leijte, J. (2005). That's what friends are for. Ambient Intelligence (AmI) and the IS in 2010, http://www.itas.fzk.de/esociety/preprints/esociety/Ducatel%20et%20al.pdf Gerstner, L.V (2000), IBM, http://www5.ibm.com/ de/entwicklung/produkte/ pervasive.html Jtirgen Bohn, Vlad Coroam~, Marc Langheinrich, Friedemann Mattern, Michael Rohs (2004), Living in a World of Smart Everyday Objects - Social, Economic, and Ethical Implications, Human and Ecological Risk Assessment, Vol. 10, No. 5, October 2004. McKeown, Patrick (2003), Information technology and the networked economy, second edition, (p.164), Thomson course technologies. USA McKeown, Patrick (2003a), Information technology and the networked economy, second edition, (p.84), Thomson course technologies. USA McKeown, Patrick (2003b), Information technology and the networked economy, second edition, (pp.438-439), Thomson course technologies. USA Raisinghani, Mahesh S, Benoit Ally, Ding Jianchun, Gomez Maria, Gupta Kanak, Gusila Victor, Power Daniel and Schmedding Oliver (2004), Ambient Intelligence: Changing Forms of Human-Computer Interaction and their Social Implication, Journal of Digital Information, 5(4), Article No. 271, 2004-08-24, http ://j odi.tamu, edu/Artic le s/v05/i04/Rai singhani/?printab le= 1 Weiser, M.(1991), The Computer for the 21st Century, Scientific American, September 1991, pp. 94- 104. http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html Weiser, Mark (1993), Some Computer Science Issues in Ubiquitous Computing, Commun. ACM 36(7), pp.74-84, http ://www. informat ik. unii er. de/-l ey/db /j oumals/ cacm/ cacm3 6.html Weiser, Mark, Brown John S. (1996), Designing Calm Technology, Power Grid Journal, v 1.01, http://powergrid.electriciti.com/1.01 W C E D - World Commission on Environment and Development (1987). Our Common Future. Oxford: Oxford University Press.
A MATHEMATICAL IMPROVEMENT OF THE SELF-ORGANIZING MAP ALGORITHM

Tonny J. Oyana, Department of Geography and Environmental Resources, Southern Illinois University, USA
Luke E. K. Achenie, Department of Chemical Engineering, University of Connecticut, USA
Ernesto Cuadros-Vargas, School of Computer Science, Universidad Catolica San Pablo, Peru
Patrick A. Rivers, Health Management Program, Southern Illinois University, USA
Kara E. Scott, Department of Geography and Environmental Resources, Southern Illinois University, USA
ABSTRACT
The objective of this paper is to report a mathematical improvement of the self-organizing map (SOM) algorithm implemented using real georeferenced biomedical and disease informatics data. The SOM algorithm is a very powerful unsupervised neural network with both competitive and cooperative learning abilities. It provides a foundation for knowledge discovery in large spatial databases and has successfully been applied to recognize patterns in several problem domains. Although significant progress has been achieved in using SOM to visualize multidimensional data or utilizing SOM for data mining purposes, certain limitations related to its performance still exist. In this paper, we propose a mathematical improvement as a result of discovering these limitations while using SOM-trained data for biomedical applications. The paper also introduces a new SOM-based model, the mathematically improved learning-SOM (MIL-SOM*).

Keywords: SOM; MIL-SOM*; GIS; Clustering; Geography; Algorithms; Spatial Data Mining; Visualization; Biomedical Applications.
INTRODUCTION
The self-organizing map (SOM) is a special type of artificial neural network (ANN) that clusters high-dimensional data vectors according to a similarity measure (Kohonen 1982). The SOM is not only used for clustering in high-dimensional spaces, but it is also designed to self-organize similar data which have not yet been classified. In the SOM, neurons compete with each other in order to represent the input data. As a result, data in the multidimensional attribute space can be abstracted to a much smaller number of latent dimensions, which is organized on the basis of a
predefined geometry in a space of lower dimensionality, usually a regular two-dimensional array of neurons. The SOM clusters the data in a manner similar to other clustering algorithms, but has the additional benefit of ordering the clusters and enabling the visualization of large numbers of clusters. Although a number of SOM applications have been developed for the biomedical sciences (Manduca 1994; Tamminen et al. 2000; Sugiyama and Kotani 2002), none have integrated SOM-trained data with geographic information systems (GIS) data models. Similarly, to the best of our knowledge, no one has attempted to integrate non-SOM-like biomedical data with GIS. The increasing demand for spatial databases for biomedical applications also provides an incredible opportunity for the development of more sophisticated geospatial tools. In the context of spatial databases, for example, the integration of SOM-trained data with GIS data models could assist in a physical database design that could thus lend further support for the development of more reliable spatial access methods (SAM) (Gaede and Guenther 1997).

2.1 The Basic Structure of Kohonen's SOM Algorithm
The basic structure of Kohonen's SOM algorithm consists of five major steps:

1. Begin with an input vector $X = [X_{k=1}, \ldots, X_{k=n}]$ with $d$ dimensions, represented by an input layer $w_{ij}$ containing a grid of units ($m \times n$) with $(i, j)$ coordinates.

2. Define the SOM training parameters (size, training rate, map grid, and neighborhood size). Equally important is the main principle in selecting the size, because it is contingent upon the number of clusters and the pattern or structure of the SOM; however, defining an initial network may no longer be necessary, as illustrated by the growing neural gas example (Fritzke 1995).

3. Compute and select the winning neuron or Best Matching Unit (BMU) based on a distance measure and a neighborhood function, illustrated in Equations 1 and 2, respectively. In most cases the metric space is considered in Euclidean terms while the neighborhood function is Gaussian.

$\| X_k - w_{bmu} \| = \arg\min_i \{ \| X_k - w_{ij} \| \}$   (1)

In Equation 1, $\| \cdot \|$ is the absolute distance, $w_{bmu}$ is the winning neuron, and $w_{ij}$ corresponds to the coordinates on the grid of units.

$h_{ci}(t) = \exp\left( -\frac{d_{ci}^{2}}{2\sigma(t)^{2}} \right)$   (2)

In Equation 2, $h_{ci}(t)$ is the neighborhood function, $\sigma(t)$ is the neighborhood radius at time $t$, and $d_{ci}$ is the distance between neurons $c$ and $i$ on the SOM ($m \times n$) grid.

4. Update the attributes of the winning neuron using the update rule in Equation 3:

$w_{ij}(t+1) = w_{ij}(t) + \alpha(t)\, h_{ci}(t)\, [X_k(t) - w_{ij}(t)]$   for $i \in N_c(t)$
$w_{ij}(t+1) = w_{ij}(t)$   for $i \notin N_c(t)$   (3)

In Equation 3, $X_k(t)$ is a sample vector randomly taken from the input vectors, $w_{ij}(t)$ is the output vector for the coordinates on the ($m \times n$) grid with coordinates $i$ and $j$ within the neighborhood $N_c(t)$, and $\alpha(t)$ and $h_{ci}(t)$ are the learning rate function and neighborhood kernel function, respectively. Since $N_c(t)$ specifies the topological neighborhood for the neurons surrounding the winning neuron, its size reduces slowly as a function of time, i.e. it starts with fairly large neighborhoods and ends with small ones. The training rate function can be linear, exponential or inversely proportional to time $t$. The training length is divided into two periods: $t_1$ is the initial period and $t_2$ is the fine-tuning period with neighboring units $h_{ci}(t)$.

5. Repeat steps 3 and 4 until complete convergence is realized for the SOM network.
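As an illustration of Steps 1-5, the following sketch implements the standard Kohonen update of Equations 1-3 in Python/NumPy. It is not the authors' code; the grid size, the exponential decay schedules for the learning rate and radius, and the random data in the usage example are illustrative assumptions.

import numpy as np

def train_som(X, m=10, n=10, n_iter=1000, alpha0=0.5, sigma0=3.0, seed=0):
    """Minimal sequential Kohonen SOM; X has shape (n_samples, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.random((m, n, d))                     # Steps 1-2: (m x n) grid of weight vectors
    # grid coordinates of every unit, used by the Gaussian neighborhood of Equation 2
    coords = np.stack(np.meshgrid(np.arange(m), np.arange(n), indexing="ij"), axis=-1)
    for t in range(n_iter):
        alpha = alpha0 * np.exp(-t / n_iter)      # learning rate alpha(t), decays with time
        sigma = sigma0 * np.exp(-t / n_iter)      # neighborhood radius sigma(t), decays with time
        x = X[rng.integers(len(X))]               # random sample vector X_k(t)
        # Step 3 / Equation 1: the BMU is the unit minimizing ||X_k - w_ij||
        dist = np.linalg.norm(W - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # Equation 2: Gaussian neighborhood h_ci(t) around the BMU on the grid
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
        # Step 4 / Equation 3: pull every unit towards the sample, weighted by h
        W += alpha * h[..., None] * (x - W)
    return W

# usage example on random six-dimensional vectors (Step 5 is the loop above)
data = np.random.rand(500, 6)
codebook = train_som(data, m=20, n=20, n_iter=2000)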
2.2 Mathematical Improvements to Kohonen's SOM Model

In the new mathematically improved learning-SOM (MIL-SOM*) model, we have proposed a better updating procedure than the one in Step 4 of Kohonen's SOM model. Figure 1 gives the pseudo code for the MIL-SOM* algorithm. This pseudo code specifies an augmented mathematical learning procedure that improves, in particular, the updating method outlined in Equation 3. The new learning method was suggested to address a number of efficiency and convergence issues associated with Kohonen's SOM model. The algorithm addresses four issues: (1) the speed and quality of clustering, (2) stabilizing the number of clusters, (3) the updating procedure for the winning neurons, and (4) the learning rate in the SOM model. Cuadros-Vargas and Romero (2005) also investigated some of these issues. In future studies, we plan to compare and analyze the performance of their two constructive algorithms (SAM-SOM* and MAM-SOM*) with this MIL-SOM* model.

3.0 EXPERIMENTAL DESIGN

In this study, we compared standard SOM and MIL-SOM* learning procedures together with GIS methods to explore disease data. We built a topological structure representing the original surface by encoding the disease map via a 3-D spherical mesh output. The neurons were positioned based on their weight vectors. The BMU was the neuron nearest to the sampling point in a Euclidean distance measure. We performed three experiments using two disease datasets encoded with a vector data structure (point and polygon data structures) and a randomly generated dataset. In addition to encoded disease data, each map also contained unorganized sample points.
The two published datasets (Oyana and Lwebuga-Mukasa 2004; Oyana et al. 2004; Oyana and Rivers 2005; Oyana et al. 2005) contained geographically referenced data points of adult (n = 4,910) and children (n = 10,289) patients diagnosed with asthma between 1996
and 2000. The biomedical datasets were obtained from Kaleida Health Systems, a major healthcare provider in western New York. Vector patterns consisted of six dimensions (X, Y, case_control/code, IN500, IN1000, and PM1000).

The MIL-SOM* algorithm for training a 2-dimensional map is defined as follows.

Let
  X be the set of n training patterns X_{k=1}, X_{k=2}, ..., X_{k=n}
  W be an m x n grid of units w_ij, where i and j are their coordinates on that grid
  Jsom be the best cluster after iterations, where p is the distance between all possible pairs of neural nodes and data points
  alpha be the original learning rate, assuming values in [0,1], initialized to a given initial learning rate
  alpha1 = alpha*a1 be the first improved learning rate
  alpha2 = alpha*a2 be the second improved learning rate
  a1 be the first non-negative parameter of alpha1; when set to zero it yields the original SOM update
  a2 be the second non-negative parameter of alpha2; when set to zero it also yields the original SOM update
  diff(Xk - wij) be the differentiation of (Xk - wij)
  int((Xk - wij), 0, (n-1)) be the integral term for (Xk - wij) over the interval 0 to n-1 (1 to n)
  radius (sigma) be the radius of the neighborhood function H(wij, wbmu, sigma), initialized to a given radius

Repeat
  for k = 1 to n
    for all wij in W, calculate the absolute distance dij = Xk - wij
    for p = 1 up to number_iteration
      calculate the sum of the distances Jsom between all possible pairs of neural nodes and data points
    assign the unit that minimizes dij as the winning neuron wbmu
    iterate to minimize the quantization and topological errors and select the best SOM clusters with minimum Jsom
    -- standard SOM update of each unit wij in W:
    --   wij = wij + alpha * H(wbmu, wij, sigma) * ||Xk - wij||
    define Xk, wij as symbolic variables (syms Xk, wij)
    apply the improved procedure to update each unit wij in W:
      wij = wij + H * (alpha*(Xk - wij) + alpha1*diff(Xk - wij) + alpha2*int((Xk - wij), 0, (n-1)))
    -- note: d/dt(Xk - wij) will tend towards zero as learning improves
  decrease the values of alpha, alpha1, alpha2, and radius
Until alpha, alpha1, and alpha2 reach 0
-- visualize the output of MIL-SOM* using the distance matrix, e.g., the U-Matrix

Figure 1: Pseudo code for the MIL-SOM* (Mathematically Improved Learning-SOM) Algorithm
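Read literally, the update in Figure 1 augments the standard Kohonen correction alpha*(Xk - wij) with a derivative term diff(Xk - wij) weighted by alpha1 and an integral term int(Xk - wij) weighted by alpha2, so that setting a1 = a2 = 0 recovers the original SOM. The sketch below is one possible discrete reading of that update, not the authors' implementation: the derivative is approximated by the change in the error since the previous step and the integral by a running sum of past errors, and all parameter values and names are assumptions.

import numpy as np

def mil_som_update(W, prev_err, err_sum, x, h, alpha=0.5, alpha1=0.05, alpha2=0.005):
    """One MIL-SOM*-style update step (discrete reading of Figure 1).

    W        : (m, n, d) codebook of unit weight vectors
    prev_err : (m, n, d) error x - W from the previous step (for the derivative term)
    err_sum  : (m, n, d) running sum of past errors (for the integral term)
    x        : (d,) current sample vector X_k
    h        : (m, n) neighborhood weights H around the BMU
    With alpha1 = alpha2 = 0 this reduces to the standard Kohonen update.
    """
    err = x - W                                   # proportional term (Xk - wij)
    d_err = err - prev_err                        # discrete derivative of (Xk - wij)
    err_sum = err_sum + err                       # discrete integral of (Xk - wij)
    W = W + h[..., None] * (alpha * err + alpha1 * d_err + alpha2 * err_sum)
    return W, err, err_sum

# illustrative call on a 20 x 20 x 6 codebook
W = np.random.rand(20, 20, 6)
prev_err, err_sum = np.zeros_like(W), np.zeros_like(W)
x, h = np.random.rand(6), np.ones((20, 20))       # a real run would use the Gaussian h_ci(t)
W, prev_err, err_sum = mil_som_update(W, prev_err, err_sum, x, h)

In a full training loop the three coefficients and the neighborhood radius would be decreased together, as the "decrease the values of alpha, alpha1, alpha2, and radius" step in Figure 1 indicates.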
In this case, X and Y represent the coordinates of the patients; the case_control/code indicates whether the patient has asthma (case) or gastroenteritis (control); IN500 indicates whether the patient is within 500 m of the highway; IN1000 indicates whether the patient is within 1000 m of a pollution source; and PM1000 indicates whether the patient is within 1000 m of the sampling site of measured particulate matter concentrations.

The plan for conducting these experiments was to train three datasets at different epochs. The first epoch consisted of training the two published datasets. These were clinically acquired geospatial biomedical datasets, which identified geographic patterns of childhood and adult asthma. The second epoch consisted of training a randomly generated dataset. The training was designed this way in order to compare the performance of the traditional SOM (Kohonen 1982) with the new MIL-SOM* model, using two real datasets and a random, computer-generated dataset. We randomly selected either 1000 or 2000 data points from the entire dataset, then continued adding the same number of data points (e.g., 1000, 2000, 3000, etc. or 2000, 4000, 6000, etc.) until the completion of training. We implemented different data ranges for the distinct datasets due to their different sizes, and trained these three datasets using the standard SOM and MIL-SOM* algorithms.

We conducted our experiments in two phases using the standard SOM and MIL-SOM* algorithms. The first phase (rough training), immediately following initialization of both algorithms, performed separately, consisted of taking relatively large initial learning rates and neighborhood radii. The second phase (fine-tuning) concentrated on much smaller rates using the same criteria. A 20 x 20 map size was used, and the initial neighborhood radii for rough training and fine-tuning were 5 and 1.25, respectively. We initialized the weight vector for each of the neurons (vector quantization) in a linear fashion. We engaged the standard SOM and MIL-SOM* algorithms to train six-dimensional data vectors. Batch and sequential training algorithms were used. The training procedure for both algorithms corresponded to approximately the same space as the input data and the fine-tuned maps. Table 1 shows the training parameters of the standard SOM and MIL-SOM*.
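As a concrete reading of this two-phase schedule, the fragment below strings together a rough-training pass with a large radius and a fine-tuning pass with a small one, reusing the train_som sketch given earlier. Only the 20 x 20 map size and the radii of 5 and 1.25 come from the text; the iteration counts, learning rates and random stand-in data are assumptions, and a faithful run would carry the codebook from the first pass into the second rather than reinitializing it.

import numpy as np

# two-phase schedule: 20 x 20 map, radii max(msize)/4 = 5 and (max(msize)/4)/4 = 1.25
msize = (20, 20)
rough = {"n_iter": 2000, "alpha0": 0.5, "sigma0": max(msize) / 4}        # rough training, radius 5
fine = {"n_iter": 4000, "alpha0": 0.05, "sigma0": max(msize) / 4 / 4}    # fine-tuning, radius 1.25

data = np.random.rand(4910, 6)       # stand-in for the six-dimensional input vectors
W = train_som(data, m=msize[0], n=msize[1], **rough)
W = train_som(data, m=msize[0], n=msize[1], **fine)   # see caveat above about reusing W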
Table 1" Training parameters of Standard SOM and M I L - S O M * algorithms
Data Points
Standard SOM
MIL-SOM*
Standard SOM
Elapsed Time (s)
Elapsed Time (s)
Qe
Te
MIL-SOM*
Qe
Te
Childhood Asthma data
2000
5.406
1.016
2081
0.052
1064
0.019
4000
8.016
1.625
1780
0.032
1103
0.035
6000
10.75
2.094
1637
0.028
961
0.036
8000
12.75
2.797
1502
0.029
887
0.031
10000
18.547
3.922
1309
0.031
828
0.037
1000 2000
3.25 5.61
0.922 1.094
813 635
0.034 0.007
571 467
0.021 0.039
3000
7.109
1.297
612
0.018
434
0.04
4000
7.875
1.563
537
0.015
416
0.031
4910
7.5
1.641
507
0.019
403
0.043
Adult Asthma data
Randomly Generated Numbers
2000
5.625
1.141
0.379
0.065
0.345
0.107
4000
7.813
1.516
0.374
0.059
0.338
0.127
6000
10.344
2.016
0.365
0.057
0.336
0.147
8000
12.672
2.828
0.347
0.065
0.322
0.141
10000
19.765
3.703
0.336
0.062
0.318
0.136
* Note: There was a general improvement in map quality (quantization error (Qe), topological error (Te)) in the last two columns when we integrated the new method (MIL-SOM*) for initialization and training the standard SOM. We used initial neighborhood radii for the rough training phase and fine tuning phase as max(msize)/4 = 5 and max(msize)/4)/4 =1.25, respectively until the fine tuning radius reached 1, where max is the m a x i m u m value of the map size matrix. For all the datasets the map size was [20 20], so max is 20. We also could easily adjust and specify a different map size, a different radius, and a different training length to obtain better results with M I L - S O M * 4.0 EXPERIMENTAL
RESULTS
AND DISCUSSIONS
We have successfully implemented a mathematical improvement to resolve efficiency and convergence concerns associated with the standard SOM. The successful implementation introduced a new family of constructive MIL-SOM* algorithms, which focused on three significant opportunities: (1) evaluation at the metrics level, (2) a focus on finding optimal clustering solutions, and (3) augmenting the learning rate of the standard SOM. As noted, the performance of this constructive MIL-SOM* was tested using three datasets.
In order to understand the performance of MIL-SOM*, we trained the same datasets using the standard SOM. In all three experiments, the newly modified (MIL-SOM*) algorithm showed a dramatic improvement in performance during training and in map quality when compared to the standard SOM. Figures 2 through 4 illustrate the experimental results for the standard SOM and MIL-SOM* by comparing the number of data points and the quantization error. The figures clearly indicate that MIL-SOM* definitively outperforms the standard SOM by revealing an overall decrease in quantization error.

For the childhood asthma data, the quantization errors before training were relatively the same for either algorithm. We observed that the quantization errors for both the standard SOM and MIL-SOM* dropped tremendously after training. After training, however, the quantization error using MIL-SOM* showed much greater improvement, with a steady decline relative to that of the standard SOM. According to Table 1, the elapsed time improved by approximately a factor of one-sixth when using MIL-SOM*.

For the adult asthma data, we observed that before training with either algorithm the quantization error was roughly 1400, reflecting minimal change as the number of data points increased. This error showed a continuous decrease following training using both the standard SOM and MIL-SOM* algorithms. The results do, however, reveal an even greater and continuous decline after using MIL-SOM*. The elapsed time for the adult asthma data decreased by a factor of approximately one-fifth.

For the randomly generated dataset, the quantization error before training, using either the standard SOM or MIL-SOM*, was approximately 0.58. Although both algorithms revealed a downward trend after training, the error following training with MIL-SOM* showed a greater improvement than that following the standard SOM, with a greater decrease in quantization error. Another benefit of MIL-SOM*, as revealed in Table 1, was an improvement in elapsed time when compared to that of the standard SOM.

A key property of the MIL-SOM* algorithm is the minimal additional computation per learning step, which makes it convenient and easy to implement. Learning with MIL-SOM* is also accomplished using the same methods as for the standard SOM. Since only a small fraction of the MIL-SOM* has to be modified during each training step, adaptation is fast and the elapsed time is low, even if a large number of iterations is necessary or the dataset is unusually large. The MIL-SOM* algorithm also has other properties: it is very stable, has increased performance, and maximizes the time available for processing data. Thus it is scalable and independent of the input and insertion sequence.
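The two map-quality measures quoted throughout this section have standard definitions: the quantization error (Qe) is the mean distance between each input vector and its best matching unit, and the topological error (Te) is the proportion of inputs whose first and second best matching units are not adjacent on the grid. The sketch below computes both for a trained codebook; it is an illustrative reading of those definitions, not the authors' evaluation code.

import numpy as np

def map_quality(W, X):
    """Quantization error (Qe) and topological error (Te) for a codebook W of shape (m, n, d)."""
    m, n, d = W.shape
    units = W.reshape(-1, d)                      # flatten the grid to (m*n, d)
    coords = np.array([(i, j) for i in range(m) for j in range(n)])
    qe, te = 0.0, 0
    for x in X:
        dist = np.linalg.norm(units - x, axis=1)
        first, second = np.argsort(dist)[:2]      # best and second-best matching units
        qe += dist[first]
        # count a topological error when the two BMUs are not neighbors on the (m x n) grid
        if np.abs(coords[first] - coords[second]).max() > 1:
            te += 1
    return qe / len(X), te / len(X)

# usage: qe, te = map_quality(codebook, data)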
[Figure 2: Standard SOM and MIL-SOM* applied to a published childhood asthma dataset. The plot shows quantization error against the number of data points (2000 to 10000), before and after training, for both the standard SOM and MIL-SOM*.]
[Figure 3: Standard SOM and MIL-SOM* applied to a published adult asthma dataset. The plot shows quantization error against the number of data points (1000 to 4910), before and after training, for both the standard SOM and MIL-SOM*.]
[Figure 4: Standard SOM and MIL-SOM* applied to a randomly generated dataset. The plot shows quantization error against the number of data points (2000 to 10000), before and after training, for both the standard SOM and MIL-SOM*.]

5.0 CONCLUSIONS AND RECOMMENDATIONS

This study has introduced a new family of MIL-SOM* developed from suggested mathematical improvements of the original SOM (Kohonen 1982). We are confident that the properties of this new family will be useful for data classification, visualization, and mining georeferenced data. These improvements can be used for georeferenced biomedical applications, and to address problems associated with the integration of SOM-trained data with GIS data models for physical database design efforts. The GIS data models are computer encodings of abstracted forms or constructs of geographic space based on a simple graph, topology, geometry, or an array of pixels. MIL-SOM* can also be used for Similarity Information Retrieval, as suggested by Cuadros-Vargas and Romero (2005), as well as for building and exploring homogeneous spatial data.

While we have not developed a specific plan in this experiment, we recognize that assigning a confidence level to the SOM results is very important. Therefore, in future studies we will explore the use of hypothesis testing and the Bayes' inference method to assess the probability of obtaining the correct clustering results (i.e., class confidence) from MIL-SOM*.

6.0 ACKNOWLEDGEMENT

Supported in part by the SIUC Faculty Start-up Fund and an SIUC ORDA Faculty Seed Grant. Mr. Dharani Babu Shanmugam, a Computer Programmer, was responsible for implementing the
mathematical improvements in SOM for these experiments. Dr. Jamson S. Lwebuga-Mukasa, Department of Internal Medicine, UB School of Medicine and Biomedical Sciences, Kaleida Health Systems Buffalo General Division, provided the datasets for this study.

7.0 PROTECTION OF HUMAN SUBJECTS
All research was approved by the Southern Illinois University Carbondale Human Investigation Review Board in accordance with national and institutional guidelines for the protection of human subjects.

REFERENCES
Cuadros-Vargas, E. and Romero, R.A.F. (2005), Introduction to the SAM-SOM* and MAM-SOM* Families, International Joint Conference on Neural Networks (IJCNN) 2005, Montreal.
Gaede, V. and Guenther, O. (1997), Multidimensional Access Methods, ACM Computing Surveys, 30(2): 123-169.
Kohonen, T. (1982), Self-organized formation of topologically correct feature maps, Biological Cybernetics 43: 59-69.
Manduca, A. (1994), Multiparameter medical image visualization with self-organizing maps, IEEE World Congress on Computational Intelligence, IEEE International Conference on Neural Networks (1994), 6(27): 3990-3995.
Oyana, T.J., Boppidi, D., Yah, J., and Lwebuga-Mukasa, J.S. (2005), Exploration of geographic information systems-based medical databases with self-organizing maps: A case study of adult asthma, In Proceedings of the 8th International Conference on GeoComputation, 1st-3rd August 2005, Ann Arbor, University of Michigan.
Oyana, T.J., and Lwebuga-Mukasa, J.S. (2004), Spatial relationships among asthma prevalence, healthcare utilization, and pollution sources in Buffalo neighborhoods, New York State, Journal of Environmental Health 66(8): 25-38.
Oyana, T.J., Rogerson, P., and Lwebuga-Mukasa, J.S. (2004), Geographic clustering of adult asthma hospitalization and residential exposure to pollution sites in Buffalo neighborhoods at a U.S.-Canada Border Crossing Point, American Journal of Public Health 94(7): 1250-1257.
Oyana, T.J., and Rivers, P.A. (2005), Geographic variations of childhood asthma hospitalization and outpatient visits and proximity to ambient pollution sources at a U.S.-Canada border crossing, International Journal of Health Geographics 4(1): 14.
Sugiyama, A. and Kotani, M. (2002), Analysis of gene expression data by using self-organizing maps and k-means clustering, IJCNN 2002, Proceedings of the 2002 International Joint Conference on Neural Networks, 12-17 May 2002, 2: 1342-1345.
Tamminen, S., Pirttikangas, S. and Roning, J. (2000), Self-organizing maps in adaptive health monitoring, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN), 24-27 July 2000, 4: 259-264.
BRIDGING THE DIGITAL DIVIDE IN RURAL COMMUNITY: A CASE STUDY OF EKWUOMA TOMATOES PRODUCERS IN SOUTHERN NIGERIA

S. C. Chiemeke and S. S. Daodu, Department of Computer Science, University of Benin, Nigeria
ABSTRACT

Information and Communication Technology (ICT) has become a potent force for transforming social, economic and political life globally. In an attempt to bridge the "digital divide" (between those who have access to information resources and those who do not), the three-tier levels of government in Nigeria (Federal, State and Local Government) have laid emphasis on rural electrification, thus promoting informative education through radio and television as well as the licensing of GSM (Global System for Mobile Communication) operation. In Delta State of Nigeria, the wife of the Governor, Chief (Mrs.) N. Ibori, initiated women centers. She also introduced an interactive 30-minute weekly programme on television for Ibo tomato producers. The Ibo communities are known to produce 70% of pluvial tomatoes (available from June to October) in two out of six geographical zones of Nigeria. Recently, there has been an attempt to improve the production, distribution, preservation and sales of pluvial tomatoes in these communities through ICT. The introduction of ICT is intended to replace the older forms of IT (Information Technologies), that is, the town crier, smoke piper, drumming, etc.
This paper looks into the life style of the Ibo community that promotes community-based ICT. It also examines access to and the effectiveness of ICT, as well as social and economic activities through ICT, in the Ekwuoma community of Delta State of Nigeria.

Keywords: Digital Divide, Information Technology, Rural Concept, Pluvial Tomatoes.
1.0 INTRODUCTION

Worldwide, Information and Communication Technology (ICT) provides a great development opportunity. ICT contributes to information dissemination and provides an array of communication capabilities, thereby increasing access to technology and knowledge, among other benefits. ICT is a key weapon in the war against world poverty. When used effectively, it offers huge potential to empower people in developing countries and disadvantaged communities (Peter, 2003). ICT overcomes obstacles, addresses social problems, strengthens communities' democratic institutions, offers a free press, and boosts local economies. With the growing popularity of ICT (such as the internet, software packages and cellular telephones), people are incorporating technology into their daily routine. These technologies are
making people's lives easier. The people most able to utilize these advancements, however, are the ones who have access to physical resources and knowledge of the newest changes. On the other hand, there are people who, for economic, social, cultural or educational reasons, do not have access to computers and the internet. These people will not be able to utilize the information provided by these technologies. This leads to a gap between those who have access and those who do not. This gap is a global problem and is referred to as the "digital divide" (Novak & Hoffman, 2000).

In Nigeria, four aspects of the digital divide are worthy of consideration. These include:

(i) Regional aspects, which manifest within the country in the distribution of amenities due to the influx of young people from rural to urban areas in search of greener pastures. Many of these youths discover, to their displeasure, how hard and unfriendly the community they find themselves in is. Thus many youths spend most of their time in cyber cafés hoping to better their lives through information. The unequal access to ICT facilities between urban and rural people is probably the worst in Nigeria.

(ii) The socio-economic aspects, given the skewed distribution of income in the country, where only members of the elite can afford access to the technical equipment required for joining the ICT order. This means that only a small and minute proportion of Nigerian society has access to electronic information media, particularly e-mail.

(iii) Gender disparities, that is, the disparity between male and female access to ICTs. In Nigeria, one of the most frequently cited solutions for targeting the digital divide is to construct Cyber Cafés. A Cyber Café is a public location with Internet-connected computers available for a per-minute or package-minute fee. Cyber Cafés are termed "youth centers" (i.e. for ages between 15 and 24). These cafés enable people who do not own a computer at home to utilize Internet technology, and in this way it is thought that they help to bridge the digital divide. However, in Nigeria cyber cafés are not targeted at women, due to socio-cultural factors. The Cyber Café can serve as a medium that perpetuates immoral acts: the centers serve as convergence points for lousy hangouts, especially for adolescents who are involved in watching pornographic films. There are also constant robbery raids. The violence, coupled with the lousy environment, makes it difficult for females to want to browse. All these reasons leave Nigerian females disadvantaged and discouraged. The culture frowns on females hanging out with males, and these age groups are very inquisitive and exploring. Nigerian males are minimally involved in home keeping, so they can spend the time in a Cyber Café, unlike Nigerian females, who are culturally assigned the responsibilities of home management and have less time to spend on the internet. The direct effect is that many Nigerian females are technology shy.

(iv) Disparity between the rich and the poor. There is a real risk that poor people will continue to be marginalized and that the existing educational divide will be compounded by a growing digital divide.
The contributing factors to this digital divide are inadequate provision of electricity (Never expect power at all - NEPA) and of television and radio broadcasting. Broadcasting has traditionally provided the basic information infrastructure for Africa's entry into the information society. Access to radio and television is by far greater, per capita, than access to newspapers, telephones or even computers. According to UNICEF, there were 226 radio sets and 66 television sets per 1000 population in Nigeria in 1977 (Idowu et al., 2003). With the emergence of many new and independent radio and TV stations, their impact has been quite far reaching, although the lack of universal access to electricity limits the coverage of local television.

2.0 RURAL CONCEPT OF INFORMATION TECHNOLOGY IN NIGERIA

Information Technology has existed in Nigeria in its primitive form from time immemorial, when jingle drums, cave drawings, smoke signals, etc., were used for communication between villages, communities and hamlets (Ajayi & Ighoroje, 1996). Other basic forms used then were town criers, who conveyed information and announcements within a community. Communal living makes it possible to pass information from one household to another. Some of these older and primitive forms of IT are still relevant in some communities, especially rural communities (like Ekwuoma village in Delta State of Nigeria), due to their overall slow pace of development. The impact of modern technology is therefore hardly felt in these communities because, in extreme cases, these technologies are not accepted. Due to high poverty and illiteracy levels, it is sometimes very difficult to effect dramatic changes in the philosophy and attitudes of rural dwellers towards technological growth. There is overdependence on governmental interventions, which are grossly inadequate or even nonexistent in some cases. Such interventions could be educational support at all levels and provision of the necessary infrastructure, like constant power supply and a reliable telecommunication network. All the reasons enumerated above are direct causes of the low patronage of ICT in rural areas.

Furthermore, the non-availability and unreliability of the supply of electricity is one of Nigeria's Achilles heels. In spite of the abundant sources of energy in the country - thermal, solar and oil - Nigeria is perpetually short of electricity. These perennial shortages and the epileptic nature of the inadequate supply constitute a major constraint to the realization of technology-driven globalization in the rural communities of Nigeria. Rural communities occupy the deepest part of this digital divide.

3.0 BRIDGING THE DIGITAL DIVIDE IN NIGERIA: A CASE STUDY OF EKWUOMA COMMUNITY
In bridging the digital divide, the three-tier levels of government (Federal, State and Local Government) are involved. At the Federal level, President Olusegun Obasanjo initiated rural electrification at the inception of his administration on May 29, 1999. The Federal Government is currently executing 1,141 rural electrification projects across the country (Government in Action, 2003). The projects were distributed on a zone-by-zone basis, and 241 rural
electrification projects were distributed to the south-south, one of the six geopolitical zones. At the state level, the Delta State governor's wife (Chief (Mrs.) N. Ibori) initiated women centres in rural communities, while at the local government level, facilitators were provided to empower rural dwellers in the various social, economic and political activities peculiar to their communities.

Ekwuoma is an Ibo community in Delta State located in the south-south geographical region of Nigeria. The Ibos are known to live in close-knit communities, which keeps their older, primitive forms of IT (town criers, smoke pipers) relevant. They are mostly peasant farmers and are known to produce 70% of pluvial tomatoes (available from June to October) in two out of six geographical zones of Nigeria (http://www.bj.refer.org). In spite of these activities, they are poor and neglected, because they lack access to the global world. They are highly skilled and willing to adopt any method that will enhance the productivity and sales of their farm products, especially tomatoes.

The life style of the Ibo community made it possible for the wife of the Delta State governor, Chief (Mrs.) N. Ibori, to initiate an agricultural centre in the Ekwuoma community. Chief (Mrs.) Ibori also introduced an interactive 30-minute weekly programme on television for Ibo tomato producers. This centre serves as a means of boosting agricultural production through the transfer of technology. The agricultural centre is located in the central part of the village (within a 1.5 km radius of an individual home). Accessibility is made easy by an earthen footpath. The centre opens once every two weeks, on Sundays between 4.30 pm and 6.00 pm. Notices of meetings are circulated through town criers and reinforced by GSM (Global System for Mobile Communication) calls where necessary. Agricultural extension officers serve as facilitators. These facilitators are involved in educating the farmers on how to boost the production and sales of agricultural products, especially pluvial tomatoes. The information is disseminated in local dialects. It is an interactive session, which is later relayed as a weekly television programme.

Undocumented reports suggest that these broadcasting cum agricultural centres are welcome developments in many rural areas, especially Ekwuoma. ICT is brought nearer home and its effectiveness is greatly enhanced due to self-driven methods. There is better social interaction among the community's tomato producers. This has led to the formation of vibrant cooperative societies, which fix prices and handle the distribution of pluvial tomatoes in Ekwuoma village. The GDP of the inhabitants of Ekwuoma village has improved due to access to, and the effectiveness of, ICT. Within the last two (2) years, access to telecommunication in the rural communities has increased by over 50% (Ajayi, 2003). For example, in Ekwuoma village, all tomato producers possess GSM phones, television and radio through cooperative societies. This has enhanced interaction among the pluvial tomato producers and sellers. Transportation of harvested tomatoes to the neighboring villages or towns where they are sold has improved from the usual carriage by head or bicycle to the use of the autobyke or motorcycle. This level of farming has also boosted the social life of the tomato farmers
by forming different clubs, apart from cooperative societies, where they socialize and exchange ideas on improving land acquisition, improved tomato varieties, and methods of farming through the use of tractors and planters as the case may be. This is usually enhanced by the local agricultural extension officers, who help the farmers with these advanced implements. Part of the way the government encourages these local farmers is by awarding prizes to the best farmers of the year through yearly exhibitions and assessments by government officials. In exceptional cases, scholarships are also awarded to the children of the best farmers.

4.0 CONCLUSION

The partnership between the three-tier levels of government and the tomato producers is a welcome development in Nigeria. We feel that in the next few years, the digital divide between the rural and urban communities will be reduced to the barest minimum.

REFERENCES
Ajayi, G. O. (2003). The Nigerian National Information Technology Policy. http://www.jidaw.com/policy.htm
Ajayi, O. B. & Ighoroje, A. D. A. (1996). Female Enrolment for Information Technology in Nigeria. In Achieving the Four E's, edited by Prof. R. K. Banerijee, GASAT 8, Amenbad, India, 1, 41-51.
Government in Action (2003). FG Executes Rural Electrification Projects. Available online: http://www.nigeriafirst.org/article_977.shtml
http://www.bj.refer.org/benin_ct/eco/lares/thema/thema7/English/point2.htm
Idowu, B., Ogunbedede, E. & Idowu, B. (2003). Information and Communication Technology in Nigeria. Journal of Information Technology Impact, Vol. 3, No. 2, pp. 69-76.
Novak, T. P. & Hoffman, D. L. (2000). Bridging the Digital Divide: The Impact of Race on Computer Access and Internet Use. Vanderbilt University.
Peters, T. (2003). Bridging the Digital Divide. The Evolving Internet, edited by William Peters, Washington D.C., U.S. Department of State.
STRATEGIES FOR IMPLEMENTING HYBRID E-LEARNING IN RURAL SECONDARY SCHOOL IN UGANDA

P. O. Lating, Sub-Department of Engineering Mathematics, Makerere University, Uganda
S. B. Kucel, Department of Mechanical Engineering, Makerere University, Uganda
L. Trojer, Division of Technoscience Studies, Blekinge Institute of Technology, Sweden
ABSTRACT
This paper discusses the strategy that should be used when introducing e-learning in rural girls' secondary schools in Uganda for the benefit of female students of advanced-level Physics and Mathematics. The strategy was formulated after numerous field visits to Arua, one of the poorest districts in Uganda. Urban secondary schools where the Uconnect and SchoolNet projects are being implemented were also visited, and some literature on the subject was reviewed from the Web. The results show that a limited form of e-learning, hybrid e-learning, can be introduced in rural secondary schools, with the CD-ROM as the main delivery platform. To implement hybrid e-learning, a multistakeholder participatory approach, VSAT Internet connectivity, and the use of open source software are recommended. The implementation of this strategy will help reduce the digital divide and contribute, at reduced cost, to achieving the Millennium Development Goal of empowering women.

Keywords: ICT; E-learning; Hybrid; Rural; Poverty; Secondary School; Female Students; Gender.
1.0 INTRODUCTION

Rural secondary schools in Uganda are poor and have inadequate infrastructure, facilities and qualified teachers for Physics and Mathematics. These are the essential technology and engineering subjects required for entry into degree courses in the Faculty of Technology, Makerere University, the most dominant tertiary institution in Uganda with a sound research base. Students perform poorly in Physics and Mathematics, especially female students from rural schools. The result is the low participation of female students from rural secondary schools in the engineering and technology profession. This disparity is distinctly evident in the graduation patterns of students from the Faculty of Technology, Makerere University; see Table 1.
From Table 1, it can be seen that in 4 years (from 2000 to 2003) Makerere University produced 417 engineers, of which 85 were female, giving a 20.4% graduation ratio of female engineers compared with the total number that graduated over the four-year period.

Table 1. Graduations of Undergraduate Students by Gender from Faculty of Technology, Makerere University, March 2000 to March 2003

Course     Civil Engineering   Electrical Engineering   Mechanical Engineering   Total
Male       154                 102                      76                       332
Female     35                  34                       16                       85
Total      189                 136                      92                       417

Source: Academic Registrar's Office, Makerere University

It was observed that in the 2005/2006 admissions into the Faculty of Technology, all the female students admitted were from only four urban, educationally elite districts: Kampala (the capital city of Uganda) and its surrounding districts of Mukono, Wakiso and Mpigi. There are currently 69 districts in Uganda. Therefore, 65 rural districts failed to produce female students who could perform well enough in Physics, Chemistry and Mathematics to qualify for admission into Makerere University for engineering training. This is a reflection of gender inequality in the education sector: rural female students do not participate in the engineering profession. Such inequalities should be addressed.

The main causes of the poor performance of rural secondary schools in national examinations are:
• Absence of senior laboratories for advanced-level experiments. Those schools that have laboratories cannot equip them with chemicals and the necessary facilities for practical work.
• Such schools lack libraries. Those with libraries cannot stock them with recommended text books.
• Shortage of qualified and committed teachers. Good teachers go to urban and peri-urban schools where they are better remunerated and have attractive fringe benefits.

The schools are too poor to invest in laboratories and libraries, nor can they attract and remunerate qualified teachers. These are poverty-related problems that must be solved by the application of ICT in education.

The paper starts by looking at some key international documents that support ICT and gender research. The relevant policies of the Ugandan government that support this research are then reviewed. Problems of science education in rural secondary schools are highlighted. There have been some attempts to introduce e-learning in Ugandan secondary schools under a number of projects with the aim of solving some of these problems. These projects are analyzed to see if they are the appropriate approach to introducing
e-learning in schools. At the end of the paper is a strategy for implementing e-learning in the rural secondary school science education of female students in Arua district. The research is in progress.

2.0 WHY ICT AND GENDER RESEARCH

2.1 United Nations Millennium Development Goals
In September 2000, 189 world leaders, under the auspices of the United Nations (UN), agreed and set eight Millennium Development Goals (MDGs) to guide the development of its member countries in the 21st century (UN Publications). All 191 UN Member States have pledged to meet these goals by the year 2015. UN MDG No. 3 specifically deals with the empowerment of women. As an indicator of the achievement of this specific goal, gender disparity in primary and secondary education must be eliminated, preferably by 2005, and at all levels by 2015.

2.2 The World Summit on the Information Society
In 2003, the World Summit on the Information Society (WSIS) set the objectives and targets necessary for UN member countries to achieve the MDGs, mainly through the application of Information and Communication Technologies (ICT) in every sector of human endeavour (UN Publications). WSIS operates under the patronage of the UN Secretary General, Mr. Kofi Annan.

2.3 The World Summit on the Information Society Gender Caucus
The WSIS Gender Caucus identified six Plans of Action. The sixth plan strongly recommends the need for research, analysis and evaluation to guide actions by UN member countries (UN Publications). It says: "Governments and other stakeholders must apply creative research and evaluation techniques to measure and monitor impacts - intended or unintended - on women generally and subgroups of women. At minimum, Governments and others should collect information disaggregated by sex, income, age, location and other relevant factors. On the basis of these data, and applying a gender perspective, we should intervene and be proactive in ensuring that the impacts of ICTs are beneficial to all." This particular Plan of Action calls for a more proactive involvement in ICT and gender research.

3.0 NATIONAL ICT AND GENDER EQUALITY POLICIES IN UGANDA

3.1 National ICT and Rural Communications Policies in Uganda
The Uganda Government has identified ICT as one of eight strategic intervention areas, and has approved the National Draft ICT Policy (December 2003) for the country. The growth of ICT use in Uganda was boosted by the Government's decision to exempt all ICT equipment from customs taxes. This helps make equipment such as computers more affordable to people.
In 1998, the Uganda Communications Commission (UCC) was set up under the Uganda Communications Act of 1997 as an independent communications regulator in the country. UCC adopted a Rural Communication Development Policy (July 2001). According to this policy, the three National Telecommunications Operators have been required, directly through their license rollout obligations, to attend to rural communication development. The three National Operators are charged 1% of their annual gross turnover as a contribution to the Rural Communication Development Fund (RCDF), which UCC set up and manages. The fund, while limited, is being used to leverage investment in rural communications through competitive private sector bidding.

3.2 Gender Policy of the Ugandan Government

There are a number of gender-related policies that the Uganda government is implementing, but the National Gender Policy (2003) is the most relevant. At all levels of leadership in Uganda, gender mainstreaming is being emphasized. Women have a specified number of seats in Parliament and in Local Government Councils. Gender is a component in the composition of the Boards of Public Institutions and Corporations. In education, female advanced-level senior secondary school students get an additional 1.5 points when they are being considered for entry into public Universities or Tertiary Institutions. Rwendeire (1998) defended educating women in Uganda by identifying the relevant social benefits involved.

3.3 Limitations of the ICT and Gender Policies in Uganda

Unfortunately, both the ICT policy and the National Gender Policy are being implemented without the necessary laws having been enacted by parliament to guide their implementation.
Access tariffs for the Internet, however, remain quite high because Uganda's international access is only through European or American satellites, which are expensive relative to our level of development. Minges & Kelly (1997) found that for dial-up access to the Internet for 30 hours a month (i.e. one hour a day), the monthly tariff was almost the same as the annual Gross National Income (GNI) per capita, which was only 240 US$ in 2003. The tariffs consist of fixed ISP and telephone subscription charges and variable telephone usage charges.

UCC established the local Internet Exchange Point for local Internet use. However, services are still affected by the lack of appropriate high-capacity backbone infrastructure, resulting in high local connection costs and bandwidth constraints. There has been a move by UCC to improve this by waiving license fees on Internet Access Service Licenses and the use of the 2.4 GHz spectrum from July 2004. By the end of 2003, Internet bandwidth in Uganda had grown to about 25 Mbps (up link) and 10 Mbps (down link), from only about 1 Mbps (for both up and down links) in 1998.
The UN annually measures the level of ICT penetration and diffusion as one of the Human Development Index parameters. In its Human Development Report (2003), Uganda's number of Internet users (per 1,000 population) was only 2.5. This is very low compared with a country like Sweden, which had 516.3 in the same report.

4.0 INTERNET CONNECTIVITY IN RURAL SECONDARY SCHOOLS IN UGANDA

There have been some attempts aimed at introducing the Internet for learning and teaching in some selected secondary schools in Uganda. Most notable were the SchoolNet Uganda Project and the Uconnect Project.
4.1 The SchoolNet Project

SchoolNet connected the Internet to some schools at a capital cost of 30,340 USD. Generally, VSAT connectivity methods are used, with some schools connected using Broad Spectrum technology. Dial-up and other wired Internet connectivity methods like ISDN, DSL, leased lines and fiber optic are not suitable for rural areas: they are narrow band, and teledensity is low in rural schools. The project is mainly funded by donors, especially the World Bank. However, such schools have problems in sustaining the project and cannot meet the recurrent monthly expenditure of 1,680 USD. Moreover, the Internet in those schools is not being used for e-learning. Furthermore, the schools that SchoolNet chose are the best urban schools in the country, with relatively good science laboratories, libraries, infrastructure and qualified teachers. SchoolNet intends to introduce a commercial, proprietary e-learning platform, Blackboard. This is a very expensive platform to acquire (the cheapest version is at 12,000 USD) and maintain. No rural secondary school can afford this. Makerere University has also rejected it.

4.2 The Uconnect Project
542
Lating, Kucel & Trojer
5.0 S T R A T E G I E S F O R I M P L E M E N T I N G E - L E A R N I N G IN R U R A L S E C O N D A R Y S C H O O L S IN U G A N D A
5.1 What does "rural" mean in the Ugandan context? In the Ugandan context, the word "rural" means "poor" and it is not a classification based on whether an area is sparsely populated or not as is the case in Europe. Therefore, a rural secondary school in Uganda is another name for a poor secondary school. When implementing e-learning in such poor schools, their unique situations must be borne in mind. And one of the crucial decisions to make is the choice of the delivery platform(s) that will be used. 5.2 Hybrid Type of E-Learning Platforms most Suitable for Rural Secondary Schools
in Uganda The following types of platforms are used for electronic/distance learning purposes: 9 C D - R O M s are stand-alone instructional or informational programs not connected to Internet or other communication processes. 9 Web-sites are linked wed pages on an Internet or Intranet. They can be compared to a reference manual or reading a book. They provide passive information. 9 Asynchronous Internet Communication (AIC) is a listserver forum using communication tools, such as e-mail or bulletin boards, on an Internet or Intranet and is usually accompanied with an archive or database accessible by participants and the instructor. The users log on and write to each other at different times. A listserver is a program that automatically sends e-mail to a list of subscribers. It is the mechanism that is used to keep newsgroups informed. 9 Synchronous Internet Communication (SIC) is a form of communication like chat, video conferencing via the Internet, and voice chat. The chat function is when the individuals are simultaneously connected to a common site where typed messages are displayed for everyone to see. Each person can type his or her own message. Bulletin boards can be used the same way. 9 Web-based training is an on-line learning platform containing communication and course management tools on an Internet or Intranet, and can combine the above features. 9 Hybrids are any combination of the above with classical classroom training or coaching or group facilitation. In the circumstances of rural, poor secondary schools in Uganda, hybrid platforms are most suitable. And the main course delivery platform should be the CD-ROMs.
543
International Conference on Advances in Engineering and Technology
5.3 Multistakeholder Best Practices to be used when Implementing Hybrid E-learning in Rural Secondary Schools The implementation of Hybrid E-learning Project should be done using the Multistakeholder participatory approach. Local Government, Businesses, the participating schools, and NGOs, etc. should join hands with Makerere University, Faculty of Technology, in implementing the Research Project. Poverty-related problems cannot be solved single-handedly. 5.4 lnternet Connectivity In circumstances where teledensity is low and private businesses are reluctant to operate in rural areas, VSAT Internet connectivity is only viable method of introducing Internet to schools. The Broad Spectrum technology can be used to connect Internet to schools that are within a radius of 30 kms from a hub, which can be one of the schools itself. Refurbished computers with multimedia capability can be used in rural schools to reduce costs. Four monitors may also be connected to one CPU to further reduce costs. To reduce bandwidth requirement, web-caching will be used. The schools can operate as telecenters and allow the communities to access Internet. This will help to reduce the digital divide. 5.5 Course Management System For managing the learning environment, a Course Management System will be required and a Website for the Research Project created. This must be an open source software, not proprietary. The Mambo is a good product to try and is hosted on an open source server by bone.net. The author has some experience in using the Mambo. 6.0 CONCLUSIONS In Uganda, as one of the least developed countries, people in the rural areas face a situation of being among the poorest of the poor in the country. Support to the education system in the rural areas is highly needed. The scarcity of the schools includes teachers (whether skilled or not), textbooks and other learning materials, laboratories, infrastructure like electricity and, in the context of this paper, Internet connection. Uganda has adopted one of the most radical gender equality policies in the world. When this policy is linked to the policies of development and poverty reduction, the emphasis on well educated women and men in Uganda is inevitable. As elsewhere there is still a long way to go having gender balance especially in higher education. The paper is considering this issue for science education in rural secondary schools. Thus the Faculty of Technology at Makerere University is expected to benefit from the increased admission of female students from the target secondary schools. This is one way to bridge the gender gap existing currently in the Technology and Engineering training, where only about 18 - 20% of the students are female.
544
Lating, Kucel & Trojer
Going from policy to practice implies a number of challenges on fundamental levels. Issues such as general technology, ICT, multistakeholder collaboration, open source software, hybrid e-learning platforms, open archive resources as well as web caching for developing digital libraries constitute ways forward. Conclusions have been drawn that e-learning using hybrid delivery platforms can be put into practice in Advanced-level rural secondary science education of female students. This is expected to result in improved performance of female students in Physics and Mathematics. Still another impact of the hybrid e-learning project is the parallel development of using the targeted schools as telecentres for the surrounding society. If successful, it is expected that Internet use in the rural community will increase. This may have some impact on the digital divide. REFERENCES
2003 UN Human Development Report. http://www.undp.org~dr2003/indicator/cW_f_UGA.html
Minges & Kelly (1997). Uganda Internet: Case Study. http://www.itu.int/africainternet2000/Documents/ppt/Uganda%20Case%20Study.ppt#1
Ministry of Gender, Labour and Social Development: National Guidelines in Uganda (1997, 1999, 2003). http://www.ilo.org/public/english/employment/gems/eeo/guide/uganda/mglsd.htm
Ministry of Works, Housing and Communications: National Information and Communication Technology Policy Framework (Draft). (May 2002). http://www.logosnet.net/ilo/150_base/en/init/uga_1.htm
Rural Communications Development Policy for Uganda. (2001). http://www.ucc.co.ug/rcdf/rcdfPolicy.pdf
Rwendeire, A. (1998). Presentation at the World Conference on Higher Education, 5-9 October, 1998, Paris. http://unesdoc.unesco.org/images/0011/001173/117374e.pdf
SchoolNet Uganda VSAT pilot project. http://www.schoolnetuganda.sc.ug/hompage.php?option=vsatproject
Uconnect Schools Project. http://www.uconnect.org/projectextract.html
UN Millennium Development Goals. (n.d.). http://www.un.org/millenniumgoals/
World Summit on the Information Society Gender Caucus: Recommendations for Action. (n.d.). http://www.genderwsis.org/fileadmin/resources/Recommendations_ForAction_Dec_2003_Engl.pdf
World Summit on the Information Society: Objectives, Goals and Targets. (n.d.).
All the websites were retrieved on March 10th, 2006.
DESIGN AND DEVELOPMENT OF INTERACTIVE MULTIMEDIA CD-ROMs FOR RURAL SECONDARY SCHOOLS IN UGANDA
P. O. Lating, Sub-Department of Engineering Mathematics, Makerere University, Uganda
S. B. Kucel, Department of Mechanical Engineering, Makerere University, Uganda
L. Trojer, Division of Technoscience Studies, Blekinge Institute of Technology, Sweden
ABSTRACT The paper discusses the design and development of interactive multimedia CD-ROMs for advanced-level secondary school Physics and Mathematics for use by the disadvantaged rural female students in the rural district of Arua. Multimedia content of the CD-ROMs was developed at a Workshop of advanced level secondary school Physics and Mathematics teachers from the district in September, 2005. The Interface design and production of the two CD-ROMs (one for each subject) were made after some 'trade offs' and are being tested in the two girls' schools in Arua: Muni and Ediofe. It is expected that their use by the female students will result in improved performance in national examinations. This is the first successful case of advanced level course content being delivered to students using CD-ROMs that were locally developed based on the Ugandan curriculum. It is also a successful application of ICT in women empowerment.
Keywords" Interactive. Multimedia. CD-ROMs. Physics. Mathematics. Rural. Secondary School. Uganda. Poverty
1.0 INTRODUCTION
Rural advanced-level senior secondary schools in Uganda lack laboratories for core science subjects. Those with laboratories cannot afford to equip them and buy chemicals for practical work or experiments. Libraries are not well stocked. Science and Mathematics teachers are few and poorly remunerated, and most of them teach in more than one school in order to get more pay. This makes them less committed, and in many instances the syllabus is not completed. Government financial assistance to such schools is negligible and schools rely on the meagre contributions of poor parents whose annual income per capita is under 300 US$. The situation has led to students, especially female ones, dropping science subjects, especially Physics and Mathematics, key engineering subjects. A strategy for implementing hybrid e-learning in such rural secondary schools to support disadvantaged female students in advanced-level Physics and Mathematics was developed
by Lating, Kucel &Trojer, (unpublished). The Hybrid E-learning research project is currently being implemented in the Ugandan rural district of Arua in the two girls' advanced level secondary schools, Muni and Ediofe. In this project, the main course delivery platforms are the interactive self-study multimedia CD-ROMs which were designed and are being developed based on the local Ugandan curriculum. The project is being financed by Sida/SAREC as part of its support for research activities of Makerere University, Faculty of Technology. The paper starts by reviewing literatures on the advantages of CD-ROMs before describing the content and interface designs of the CD-ROMs that have been developed for advanced- level secondary school Physics and Mathematics based on the local Ugandan Syllabus. The paper ends by giving the hardware and software used in the production process. The CD-ROMs are being tested in Muni and Ediofe before mass production for use in other Ugandan secondary schools.
2.0 ADVANTAGES OF INTERACTIVE MULTIMEDIA CD-ROMs
There are no interactive multimedia CD-ROMs based on the local advanced-level curriculum being used in Ugandan secondary schools at the moment. Therefore, the delivery of content using interactive multimedia CD-ROMs for advanced-level secondary schools is a new phenomenon in Uganda. But in many developed countries, training CD-ROMs are used quite extensively. The main advantages of CD-ROMs are big memory capacity, multimedia applications and popularity due to their standardization.
2.1 Big Memory Capacity
Advantages of interactive multimedia CD-ROMs have been stated by Tapia et al. (2002). As a storage medium, CD-ROMs have high capacity (650 to 700 megabytes) and are relatively cheap compared to media like floppy disks (1.44 MB). For example, Woodbury (n.d.) notes that a CD-ROM disc with a memory capacity of 550 megabytes of data is equivalent to about 250,000 pages of text. Most common CD-ROMs these days have memory capacities of up to 700 megabytes, enough memory to store about 300,000 text pages. Therefore, CD-ROMs are very suitable for presenting rich graphic information, videos and animations that would be difficult to download from a website. Beheshti, Large & Moukdad (2001), while supporting the use of CD-ROMs, further clarified that limited bandwidth has hindered the efficient transmission of large quantities of information over the Internet. A related problem in accessing online information is the need for complex networking technologies. Even today, many countries, including Uganda, lack the necessary telecommunications infrastructure to effectively and reliably use the Web, especially in rural areas. But such countries can afford to buy inexpensive computers with CD-ROM drives. CD-ROMs can be designed and implemented for large-class teaching with very modest investments in equipment.
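To make the capacity comparison above concrete, the figures quoted in this section (650-700 MB per CD-ROM, 1.44 MB per floppy disk, roughly 250,000 text pages in 550 MB) can be checked with a short calculation. The snippet below is an illustrative sketch added here, not part of the original paper.

```python
# Rough capacity comparison for a CD-ROM versus floppy disks and plain text,
# using only the figures quoted in Section 2.1 (illustrative only).
CD_CAPACITY_MB = 700.0          # upper end of the 650-700 MB range
FLOPPY_CAPACITY_MB = 1.44       # standard 3.5" floppy disk
PAGES_PER_550_MB = 250_000      # Woodbury's estimate for 550 MB of text

floppies_per_cd = CD_CAPACITY_MB / FLOPPY_CAPACITY_MB
pages_per_mb = PAGES_PER_550_MB / 550.0
pages_per_cd = pages_per_mb * CD_CAPACITY_MB

print(f"One 700 MB CD-ROM holds roughly {floppies_per_cd:.0f} floppy disks' worth of data")
print(f"and roughly {pages_per_cd:,.0f} pages of plain text at the quoted density.")
```

At these figures one disc corresponds to roughly 486 floppy disks and a little over 300,000 pages of text, consistent with the estimates above.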
Interactive CD-ROMs have very fast data transfer rates compared to the Internet. The transfer rate for a CD-ROM is typically between 300 and 1,200 Kbytes/sec, as compared to the Internet technology that Ranky (1997) describes as relatively slow, with transfer rates from 1.8 Kbytes/sec (for a 14.4 modem) to 16 Kbytes/sec (for ISDN). This means that the CD-ROM is more capable of supporting real-time learning needs than the Internet (a rough comparison is sketched at the end of this section).
2.2 Interactive Multimedia Applications
The basic components of an interactive multimedia CD-ROM include the interface, texts, graphics, sound effects, animation, narration and video clips.
2.3 Popularity and Standardization of Interactive Multimedia CD-ROMs
For a digital delivery medium like CD-ROMs, standards are necessary so that software manufacturers have an established unit to run their ever-changing software. This gives the consumers confidence that the digital hardware they are purchasing will not be obsolete soon after purchase.
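As a rough illustration of the transfer rates quoted in Section 2.1 above, the sketch below (an added example, not from the original paper; the 30 MB clip size is an assumption chosen purely for illustration) compares the delivery time of one video clip over the different media:

```python
# Time to deliver a hypothetical 30 MB video clip at the transfer rates quoted
# in the text (CD-ROM: 300-1,200 KB/s; 14.4 dial-up modem: 1.8 KB/s; ISDN: 16 KB/s).
CLIP_SIZE_KB = 30 * 1024  # assumed clip size, for illustration only

rates_kb_per_s = {
    "CD-ROM (slowest, 300 KB/s)": 300,
    "CD-ROM (fastest, 1200 KB/s)": 1200,
    "14.4 modem (1.8 KB/s)": 1.8,
    "ISDN (16 KB/s)": 16,
}

for name, rate in rates_kb_per_s.items():
    seconds = CLIP_SIZE_KB / rate
    print(f"{name}: {seconds / 60:.1f} minutes")
```

Under these assumptions the clip takes under two minutes from a CD-ROM, about half an hour over ISDN and several hours over a dial-up modem, which is the point being made about real-time learning needs.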
The basic digital hardware for CD-ROM is already a standard agreed by two very influential companies: Sony and Philips. Having set the standard for CD-Audio, the two companies have continued the trend with CD-ROMs, and most recently CD-I, a multimedia offshoot of CD-ROM. One beneficial effect of standardization of the compact disc is that the runaway popularity of CD-Audio has helped to set the compact disc in the public mind. Since CD-ROM is a close relative, its acceptance is made easier. That is why there are now many drives that will play both CD-ROM and CD-Audio. This gives better assurance to those interested in CD-ROMs that it is a delivery medium that is here to stay.
3.0 DESIGN AND DEVELOPMENT PROCEDURE FOR THE CD-ROMs
3.1 Multimedia Local Content Design
A workshop of advanced-level Physics and Mathematics teachers was held from 4th to 11th September, 2005 at the Arua district headquarters. Thirty-three advanced-level Physics and Mathematics teachers attended the Workshop and created hand-written interactive local content in both subjects based on the current local examination syllabus. The local content created was in English, the official language in all schools in Uganda. It was hand-written since only 4 of the 33 teachers had basic computer literacy skills. The facilitators of the Workshop were officials from the National Curriculum Development Center (NCDC), an institution under the Ugandan Ministry of Education and Sports.
The teachers were guided by the facilitators on how to structure the content logically. Each topic in Physics or Mathematics was divided into sub-topics, lessons and a number of teaching units. Content was created for all the teaching units under a particular topic. A teaching unit would have a title and a text of 100-300 words, and the subject teacher was to indicate where an accompanying illustration, graphic, student activity or animation should appear. This would help the programmer/producer to include the interactivity in the CD-ROM. Common interactivities that the teachers were told to use include animations, text entries, multiple choices, drag and drop, match the correct answer, true or false statements, comparing answers and zooming. The content was written in such a way that it encourages higher-order thinking skills (based on a problem-solving approach). Activities must encourage the learner to think and play a role. The student discovers the concepts/ideas herself; she should discover her own answer. Every teaching unit would have three activities to introduce the concept. The first one is to be done by the teacher with little student involvement. The second activity was to ensure that the student is involved 50% (i.e. half way). The third activity was to be done exclusively by the student to discover the answer. The learner would have the impression that she is being taught but the teacher is not there.
3.2 Interface Design
The interface was created to establish seamless navigation among the multimedia content. The design of the interface was based on the following principles:
• Audience: Advanced-level female secondary school students.
• Interface consistency: As much as possible the screens have identical layout, terminology, prompts, menus, colours, and fonts. Navigational aids are situated in the same locations on each screen so that the students will not have to search for them. Colour schemes and type size and fonts are consistent for all screens.
• Ease of use and learning: The students should need minimum training, if any, in the use of the interface.
• Efficiency: The students should navigate through the materials relatively efficiently.
• Colourful and meaningful navigational tools: All the navigational aids and buttons are clearly and vividly marked to be distinguished from surrounding objects.
• Information feedback: For every student action, the interface provides feedback. For example, the buttons are activated when the mouse pointer is moved over them, or they are clicked, and the student is provided with immediate feedback.
• Error prevention: The system is error proof. Objects are only activated on the student's command.
• 'Previous' and 'Next' buttons make it easier to browse through the entire lesson. Other buttons on the screen are Chapter for proceeding to the introductory screen of a particular chapter, Contents for proceeding to the contents screen, etc.
4.0 PRODUCTION OF THE INTERACTIVE MULTIMEDIA CD-ROMs
4.1 Hardware Requirements
For the production of the CD-ROMs, a DELL computer with multimedia capability was purchased. It has 512 MB of RAM, an 80 GB hard disk, a Pentium® 4 CPU at 2.60 GHz and integrated audio and video cards. The monitor is 17". It has a CD/DVD/RW/R unit with a writing speed of 32X.
4.2 Software
Adobe Photoshop and Corel PhotoPaint are used for resizing images and creating graphical interface elements. Macromedia Flash MX 2004, Macromedia Flash Player 7 and Macromedia Flash HomeSuite+ were installed for purposes of developing animations. Roxio Easy CD Creator 5 and Burn CDs & DVDs with Roxio are some software applications that we use for recording CDs. Macromedia Dreamweaver 4 is used for multimedia asset development like graphics, animations, video and sound.
4.3 'Trade-offs'
Multimedia assets consume a huge amount of computer storage space, and their inclusion in the CD-ROMs was restricted to only the essential parts. The hardware and software requirements for them are prohibitive. For example, a video clip requires 3 MB per minute of disc space. Video clips were therefore restricted. Topics that are relatively straightforward will be delivered in the conventional way, just like those that do not have any significant multimedia input. There is no point having a textbook on a CD-ROM. Iskander et al (1996) call this a "trade-off" when developing CD-ROMs. Focus is placed on visualization of abstract and highly mathematical topics, interactive participation in laboratory experiments and on presentation of practical applications.
Secondly, multimedia authoring is enormously time-consuming, according to Hinchliffe's (2002) experience in the production of training CD-ROMs. He notes that a 30-second animation, which might occupy the user's attention for two or three minutes only, might take an hour or two to put together. Animations are good interactive methods but they also require a lot of bandwidth. All these considerations were made during development of the interactive multimedia CD-ROMs for A-level Physics and Mathematics; a rough storage budget illustrating the trade-off is sketched below.
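To illustrate why such trade-offs are unavoidable, the sketch below (an added example, not from the original paper; the asset mix is hypothetical) budgets a 700 MB disc using the 3 MB-per-minute video figure quoted in Section 4.3:

```python
# Hypothetical storage budget for one 700 MB CD-ROM, using the 3 MB/minute
# figure for video quoted in Section 4.3. The asset mix is illustrative only.
CD_CAPACITY_MB = 700.0
VIDEO_MB_PER_MINUTE = 3.0

budget_mb = {
    "text and HTML pages": 20.0,
    "images and interface graphics": 80.0,
    "narration and sound effects": 100.0,
    "animations": 150.0,
}

reserved = sum(budget_mb.values())
left_for_video = CD_CAPACITY_MB - reserved
video_minutes = left_for_video / VIDEO_MB_PER_MINUTE

print(f"Reserved for non-video assets: {reserved:.0f} MB")
print(f"Space left for video: {left_for_video:.0f} MB "
      f"(about {video_minutes:.0f} minutes of video)")
```

Even with only modest allowances for the other assets, well under two hours of video fits on a disc, which is why video clips had to be restricted to the essential parts.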
4.4 Production of the CD-ROMs
Digitalization of the Local Content Created
The stages in the production of the CD-ROMs are shown in Table 1. The main stages of the CD-ROM production process included digitalization, authoring, multimedia integration, production of test copies, producing master copies and mass production.
Arrangements were made with some secretaries and the manual hand-written copies of the local content created were converted into electronic copies. The teachers could not do it because of very low computer skills. The scanner we have does not have the capability of recognizing hand-written letters and numbers. It is a digital flatbed scanner with photo-quality results of 2400 dpi and 48-bit colour. This made the exercise of digitalization very tedious.

Table 1: Timeline for the production of the CD-ROMs

Activity | Responsibility | Expected Completion Date | Status
Local content creation | Subject teachers in a Workshop | 19.9.2005 | Done
Digitization of content | Researcher, Secretaries | 2.11.2005 | Done
Copyright permission for some resources | Researcher | On-going | Done
Interface design | Researcher, Multimedia Programmer | 15.1.2006 | Done
Programming | Researcher, Multimedia Programmer | 10.2.2006 | Done
Narration, music | Researcher, Multimedia Programmer | 30.1.2006 | Done
Writing/editing | Researcher, Multimedia Programmer | February, 2006 | Done
Testing | S6 students of Muni, Ediofe | March, 2006 | Ongoing
CD covers/insert design | Researcher | March, 2006 | In progress
Production (500 copies) | Researcher | April, 2006 |
Marketing in other schools | Researcher | From April, 2006 |
For the completed CD-ROMs there will be small pamphlets or inserts included in their jewel boxes. The pamphlets will contain information about the CDs and how to use them. They will also have copyright statements and hardware requirements. Finally, master CDs will be burnt along with the other documents for mass manufacture and possible use in other rural secondary schools.
5.0 CONCLUSIONS
In Hybrid E-Learning for poor rural secondary schools, the course delivery platform to be used is the CD-ROM. The main advantages of CD-ROMs are big memory capacity, multimedia applications and popularity due to their standardization. Web-based (Internet) delivery options are inherently not suitable for this purpose and for the context of rural communities.
In designing and developing interactive multimedia CD-ROMs for advanced-level Physics and Mathematics, the main objective was to demonstrate that the technology could be viable for use in poor rural schools that cannot afford to build science laboratories and libraries, and at the same time cannot attract committed and qualified teachers. The CD-ROMs are being tested in the two girls' schools in the rural District of Arua, 500 km from Kampala, the capital city of Uganda. The two girls' schools are Muni and Ediofe. The aim is to encourage more female students to pursue engineering careers later, after improving their performance at the secondary level of education. This will help to narrow the gender gap that currently exists in the engineering profession in Uganda. In a wider context, the CD-ROMs will contribute towards the achievement of Millennium Development Goal No. 3, which specifically deals with women empowerment.
In the design and development of the CDs, a multistakeholder participatory approach was used. The content was created based on the local curriculum by the subject teachers in the District of Arua and was facilitated by experts from the National Curriculum Development Center, an institution in the Ugandan Ministry of Education and Sports. Headteachers of secondary schools in Arua and Koboko Districts willingly allowed their teachers to attend the content creation Workshop. Hardware and software for the production were procured by Makerere University, Faculty of Technology, with financial support from Sida/SAREC. This type of participatory approach is good when handling community, poverty-related issues; the community takes ownership of the process.
While designing the multimedia content, the context of the schools where the female students are studying had to be considered. They have earlier versions of refurbished PCs. The multimedia capabilities of such computers may not be very good; they do not have a lot of memory and the hard disc capacity is also limited. So some 'trade-offs' were made. Video clips, which require a lot of bandwidth, were restricted to the remarks of the Dean, Faculty of Technology, Makerere University. Other multimedia assets like graphics, simulations, etc. were also limited to an absolute minimum.
Digitalization was the lengthiest process since the content created by teachers was hand-written, owing to their low computer skills. The scanner that could have been used did not have character-recognition capability. This caused a lot of delays when creating digital copies of the content. The final products are the CD-ROMs for advanced-level Physics and Mathematics. It is the first time training CDs have been produced locally for that level of education in Uganda.
REFERENCES
Beheshti, J., Large, A., & Moukdad, H. (2001). Designing and Developing Multimedia CD-ROMs: Lessons from the Treasures of Islam. http://www.ingentaconnect.com/search/expand?pub=infobike://mcb/264/2001/00000025/00000004/art00001&unc=
Hinchliffe, P. (2002). Developing an Interactive Multimedia CD-ROM to support AS Physics. http://ferl.becta.org.uk/display.cfm?resID=2937
Iskander, M.F., Catten, J.C., Jameson, R.M., & Rodriguez, A. (n.d.). Interactive Multimedia CD-ROMs for Education. www.ieeexplore.ieee.org/iel3/4276/12342/0573085.pdf
Okidi-Lating, P., Kucel, S.B., & Trojer, L. (unpublished). Strategies for Implementing Hybrid E-learning in Rural Secondary Schools in Uganda.
Ranky, P.G., & Ranky, M.F. (1997). Engineering Multimedia CD-ROMs with Internet Support for Educating the Next Generation of Engineers for Emerging Technologies (A Presentation with an Interactive Multimedia CD-ROM Demonstration). http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=616247
Tapia, C., Sapag-Hager, J., Muller, M., Valenzuela, F., & Basualto, C. (2002). Development of an Interactive CD-ROM for Teaching Unit Operations to Pharmacy Students. www.ajpe.org/legacy/pdfs/aj660312.pdf
Woodbury, V. (n.d.). CD-ROM: Potential and Practicalities. http://calico.org/journalarticles/Volume6/vol6-1/Woodbury.pdf
Note: All references were retrieved on 28th November from the respective websites.
ON THE LINKS BETWEEN THE POTENTIAL ENERGY DUE TO A UNIT-POINT CHARGE, THE GENERATING FUNCTION AND RODRIGUE'S FORMULA FOR LEGENDRE'S POLYNOMIALS
Sandy S. Tickodri-Togboa, Department of Engineering Mathematics, Makerere University
ABSTRACT Open any textbook of advanced engineering mathematics that has chapters on special functions. Turn to that chapter on Legendre's functions. Two items that you will never fail to find are the generating function and Rodrigue's formula for Legendre polynomials. Invariably, the generating function will be reported as a very effective device that facilitates proofs of numerous interrelationships between Legendre polynomials of various orders and/or their derivatives. Rodrigue's formula, on the other hand, will invariably be introduced as a formula that enables you to generate Legendre polynomials of any order without recourse to the method of separation of variables. Rarely, except perhaps for electrical engineers, will you be told that the two items have their roots traceable to the potential energy of a unit point-charge, situated at some distance from the origin along the z - axis of the Cartesian coordinate system! This paper aims to trace this linkage and demonstrate that this linkage leads to the definition of Legendre polynomials without actually having to solve the traditional Legendre differential equation. It simply happens that the polynomials so defined turn out to satisfy a special case of Legendre's differential equation.
Keywords: Cartesian coordinates, Generating function, Legendre polynomials, Potential energy, Unit-point charge, Rodrigue's formula, Taylor series expansion, Spherical polar coordinates
1.0 INTRODUCTION
Polynomials are popularly defined as functions obtained by means of the basic arithmetic operations of addition and multiplication¹ on the constant and the linear functions. However, there is at least one set of polynomials, the Legendre polynomials, that can be obtained by other means, namely, by means of the calculus operation of differentiation. These polynomials feature very prominently in scientific and engineering applications, particularly in expansions of functions in series, extrapolations, and in connection with solutions of boundary-value problems in spherical domains by the method of separation of variables. They also feature in fields treated by finite element methods.
¹ Here we regard subtraction simply as the inverse of addition, and division, where this is admissible, as the inverse of multiplication.
In most treatments of Legendre polynomials, two items that never escape our attention are the generating function and Rodrigue's formula for Legendre polynomials. Invariably, the generating function is almost always introduced as a very effective device that facilitates proofs of numerous interrelationships between Legendre polynomials of various orders and/or their derivatives. Rodrigue's formula, on the other hand, is introduced as a formula that enables one to generate Legendre polynomials of any order without recourse to series solutions of one of the equations that results from application of the method of separation of variables to Laplace's equation in spherical coordinates, hence avoiding the need to specify some arbitrary coefficients in special circumstances. Rarely, except perhaps for electrical engineers, is it ever stated that the two items have their roots traceable to the potential energy of a unit point-charge, displaced from the origin along the z-axis of the Cartesian coordinate system! This paper endeavours to trace the links between the potential energy of a unit point-charge situated on the z-axis and displaced a distance h from the origin, the generating function of Legendre's polynomials and Rodrigues's formula for Legendre polynomials².
2.0 POTENTIAL ENERGY OF A POINT-CHARGE DISPLACED FROM ORIGIN
Consider the potential energy of a point charge situated on the z-axis at a distance h from the origin, as depicted in Figure 1 below. Let this point charge initially be q Coulombs. Its location in terms of Cartesian coordinate system variables is then of course (0, 0, h).
We now wish to examine the potential energy of this point charge at the point P(x, y, z), whose location in terms of the spherical polar coordinate variables is (r, \varphi, \theta). These spherical coordinate system variables are related to the rectangular Cartesian coordinate system variables through the expressions

x = r\cos\varphi\sin\theta, \qquad y = r\sin\varphi\sin\theta, \qquad z = r\cos\theta.    (1)

The reverse relationships of the Cartesian coordinate system variables to the spherical polar coordinates are of course provided by the expressions

r^2 = x^2 + y^2 + z^2, \qquad \varphi = \arctan\frac{y}{x}, \qquad \theta = \arccos\frac{z}{\sqrt{x^2 + y^2 + z^2}}.    (2)
According to Coulomb's law, the potential energy at the point P(x, y, z), due to the point charge while it is located at the origin, is given by

U(x, y, z) = \frac{q}{\sqrt{x^2 + y^2 + z^2}}.    (3)
2 Note that in the context of gravitational potential, analogous relationships can equally be established by supplanting point-charges with point-masses.
With the point charge displaced to a distance h from the origin in the positive direction of the z-axis, the new expression for the corresponding potential energy is then given by

U(x, y, z, h) = \frac{q}{\sqrt{x^2 + y^2 + (z - h)^2}}.    (4)

Figure 1: Location of a point charge (the charge sits on the z-axis at (0, 0, h); the field point is P(x, y, z) = P(r, \varphi, \theta)).

Considering the case of q = 1 Coulomb, the potential energy at the point P(x, y, z) due to a unit point-charge located at the point (0, 0, h) is then specified by the expression

U(x, y, z, h) = \frac{1}{\sqrt{x^2 + y^2 + (z - h)^2}}.    (5)
We would, however, like to express this potential energy in terms of the potential energy of a unit point-charge positioned at the origin and in terms of any changes that the potential energy may have undergone as a consequence of the displacement. For this purpose, we invoke Taylor series expansion to write
U(x, y, z, h) = U(x, y, z) + h\,\frac{\partial U}{\partial h}\bigg|_{h=0} + \frac{h^2}{2!}\,\frac{\partial^2 U}{\partial h^2}\bigg|_{h=0} + \frac{h^3}{3!}\,\frac{\partial^3 U}{\partial h^3}\bigg|_{h=0} + \frac{h^4}{4!}\,\frac{\partial^4 U}{\partial h^4}\bigg|_{h=0} + \cdots    (6)
The differential coefficients in this expansion can then of course be obtained through successive differentiations of the potential energy expression (5) with respect to the displacement h and evaluations of the limits as h \to 0 (as we approach the origin). As a result, we find that the derivatives are given by the expressions

\frac{\partial U}{\partial h}\bigg|_{h=0} = \frac{z}{\left[x^2 + y^2 + z^2\right]^{3/2}},

\frac{\partial^2 U}{\partial h^2}\bigg|_{h=0} = \frac{2z^2 - (x^2 + y^2)}{\left[x^2 + y^2 + z^2\right]^{5/2}},

\frac{\partial^3 U}{\partial h^3}\bigg|_{h=0} = \frac{6z^3 - 9(x^2 + y^2)z}{\left[x^2 + y^2 + z^2\right]^{7/2}},

\frac{\partial^4 U}{\partial h^4}\bigg|_{h=0} = \frac{24z^4 - 72(x^2 + y^2)z^2 + 9(x^2 + y^2)^2}{\left[x^2 + y^2 + z^2\right]^{9/2}},

and so on. Recalling from expressions (1) and (2) that x^2 + y^2 + z^2 = r^2, so that x^2 + y^2 = r^2 - z^2, and z = r\cos\theta, these derivatives may be recast in the alternative forms

\frac{\partial U}{\partial h}\bigg|_{h=0} = \frac{\cos\theta}{r^2},

\frac{\partial^2 U}{\partial h^2}\bigg|_{h=0} = \frac{3\cos^2\theta - 1}{r^3},

\frac{\partial^3 U}{\partial h^3}\bigg|_{h=0} = \frac{3\left(5\cos^3\theta - 3\cos\theta\right)}{r^4},

\frac{\partial^4 U}{\partial h^4}\bigg|_{h=0} = \frac{3\left(35\cos^4\theta - 30\cos^2\theta + 3\right)}{r^5},

and so on.
By multiplying the n-th derivative by \frac{1}{n!}, for n = 1, 2, 3, 4, \ldots, we find that the coefficients in the potential energy expansion (6) are given by

\frac{1}{1!}\,\frac{\partial U}{\partial h}\bigg|_{h=0} = \frac{\cos\theta}{r^2},

\frac{1}{2!}\,\frac{\partial^2 U}{\partial h^2}\bigg|_{h=0} = \frac{3\cos^2\theta - 1}{2}\cdot\frac{1}{r^3},

\frac{1}{3!}\,\frac{\partial^3 U}{\partial h^3}\bigg|_{h=0} = \frac{5\cos^3\theta - 3\cos\theta}{2}\cdot\frac{1}{r^4},

\frac{1}{4!}\,\frac{\partial^4 U}{\partial h^4}\bigg|_{h=0} = \frac{35\cos^4\theta - 30\cos^2\theta + 3}{8}\cdot\frac{1}{r^5},

and so on. On plugging these results in expression (6), we have

U(r, \varphi, \theta, h) = U(x, y, z) + \left[\cos\theta\right]\frac{h}{r^2} + \left[\frac{3\cos^2\theta - 1}{2}\right]\frac{h^2}{r^3} + \left[\frac{5\cos^3\theta - 3\cos\theta}{2}\right]\frac{h^3}{r^4} + \left[\frac{35\cos^4\theta - 30\cos^2\theta + 3}{8}\right]\frac{h^4}{r^5} + \left[\frac{63\cos^5\theta - 70\cos^3\theta + 15\cos\theta}{8}\right]\frac{h^5}{r^6} + \cdots    (7)

as the desired expression for the potential energy at the point P(r, \varphi, \theta) due to the unit
point-charge at the location (0, 0, h). Can it be expressed differently? This is the question we wish to address next.
3.0 RODRIGUE'S FORMULA AND THE GENERATING FUNCTION
First, we note that the terms in the square brackets appear to submit to some pattern of consistency, in that the powers of the cosine terms in a given pair of square brackets are either all even or all odd! Furthermore, we can easily verify that the first few of them can be expressed alternatively as
\cos\theta = \frac{1}{2}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)\left(\cos^2\theta - 1\right),

\frac{3\cos^2\theta - 1}{2} = \frac{1}{8}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^2\left(\cos^2\theta - 1\right)^2,

\frac{5\cos^3\theta - 3\cos\theta}{2} = \frac{1}{48}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^3\left(\cos^2\theta - 1\right)^3,

\frac{35\cos^4\theta - 30\cos^2\theta + 3}{8} = \frac{1}{384}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^4\left(\cos^2\theta - 1\right)^4,

and so on. Moreover, it is also easily verified that the bracketed coefficients are actually captured within the general differentiation formula
\frac{1}{2^n n!}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^n\left(\cos^2\theta - 1\right)^n = \frac{1}{2^n n!}\left(\frac{1}{\sin\theta}\frac{d}{d\theta}\right)^n\sin^{2n}\theta.    (8)

Accordingly, therefore, the potential energy expression (6) may be captured in the forms

U(r, \varphi, \theta, h) = \sum_{n=0}^{\infty}\left[\frac{1}{2^n n!}\left(\frac{1}{-\sin\theta}\frac{d}{d\theta}\right)^n\left(\cos^2\theta - 1\right)^n\right]\frac{h^n}{r^{n+1}}    (9)

or

U(r, \varphi, \theta, h) = \sum_{n=0}^{\infty}\left[\frac{1}{2^n n!}\left(\frac{1}{\sin\theta}\frac{d}{d\theta}\right)^n\sin^{2n}\theta\right]\frac{h^n}{r^{n+1}}.    (10)

What is even more interesting is that if we set

\cos\theta = \zeta,    (11)

so that

-\sin\theta\,d\theta = d\zeta,    (12)

both potential energy expressions (9) and (10) may be collapsed into the single expression

U(r, \varphi, \theta, h) = \sum_{n=0}^{\infty}\left[\frac{1}{2^n n!}\frac{d^n}{d\zeta^n}\left(\zeta^2 - 1\right)^n\right]\frac{h^n}{r^{n+1}}.    (13)
An alternative path to expressing the potential energy at the point P(r, \varphi, \theta) due to the point-charge at the location (0, 0, h) in terms of the potential energy of the unit point-charge located at the origin and any changes that the potential energy may have undergone as a consequence of the displacement is to revert to equation (5) and subject its right-hand side to the usual binomial series expansion. First though, we need to note that by expanding the square term (z - h)^2, the equation may be re-written as

U(x, y, z, h) = \frac{1}{\sqrt{x^2 + y^2 + z^2 - 2zh + h^2}}.

On migrating to the spherical polar coordinates (r, \varphi, \theta), this expression becomes

U(r, \varphi, \theta, h) = \frac{1}{\sqrt{r^2 - 2rh\cos\theta + h^2}}.    (14)
By extracting the factor r outside the square root sign in the denominator, it may be recast as

U(r, \varphi, \theta, h) = \frac{1}{r\sqrt{1 - 2(h/r)\cos\theta + (h/r)^2}}.    (15)

By setting \frac{h}{r} = u and recalling that \cos\theta = \zeta, we may indeed re-write it in the simpler-looking form

U(r, \varphi, \theta, h) = \frac{1}{r\sqrt{1 - 2u\zeta + u^2}}.    (16)

On re-writing this expression in the form

U(r, \varphi, \theta, h) = \frac{1}{r}\left[1 - (2u\zeta - u^2)\right]^{-1/2},    (17)

and invoking the standard binomial expansion theorem to expand the component \left[1 - (2u\zeta - u^2)\right]^{-1/2} in powers of (2u\zeta - u^2), we get

\left[1 - (2u\zeta - u^2)\right]^{-1/2} = 1 + \frac{1}{2}(2u\zeta - u^2) + \frac{3}{8}(2u\zeta - u^2)^2 + \frac{5}{16}(2u\zeta - u^2)^3 + \frac{35}{128}(2u\zeta - u^2)^4 + \cdots    (18)
After multiplying out the powers of (2u\zeta - u^2) higher than unity and reorganising the terms in this expansion through collection of the coefficients of the various powers of u, we find that

\left[1 - (2u\zeta - u^2)\right]^{-1/2} = \sum_{n=0}^{\infty} P_n(\zeta)\,u^n,    (19)
where P_n(\zeta), n = 0, 1, 2, 3, \ldots turn out to be Legendre polynomials. In other words, the expansion of the function \left[1 - (2u\zeta - u^2)\right]^{-1/2} in binomial series thus "generates" Legendre polynomials and provides the basis for calling \left[1 - (2u\zeta - u^2)\right]^{-1/2} the generating function of Legendre polynomials. With result (19), the potential energy expression (17) thus becomes

U(r, \varphi, \theta, h) = \frac{1}{r}\sum_{n=0}^{\infty} P_n(\zeta)\,u^n = \sum_{n=0}^{\infty} P_n(\zeta)\,\frac{h^n}{r^{n+1}}.    (20)
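As a quick check added here (it is not part of the original derivation), collecting the powers of u up to u^2 in (18) reproduces the first few Legendre polynomials,

P_0(\zeta) = 1, \qquad P_1(\zeta) = \zeta, \qquad P_2(\zeta) = \frac{3\zeta^2 - 1}{2},

where P_1 and P_2 are exactly the bracketed coefficients of h/r^2 and h^2/r^3 in expansion (7) with \zeta = \cos\theta. The same values follow from the differentiation formula (21) below; for instance, for n = 2,

P_2(\zeta) = \frac{1}{2^2\,2!}\frac{d^2}{d\zeta^2}\left(\zeta^2 - 1\right)^2 = \frac{1}{8}\frac{d^2}{d\zeta^2}\left(\zeta^4 - 2\zeta^2 + 1\right) = \frac{12\zeta^2 - 4}{8} = \frac{3\zeta^2 - 1}{2}.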
Upon comparing statements (13) and (20), it is not difficult to confirm the definition

P_n(\zeta) = \frac{1}{2^n n!}\frac{d^n}{d\zeta^n}\left(\zeta^2 - 1\right)^n, \qquad n = 0, 1, 2, 3, \ldots    (21)
of Legendre's polynomials, known as Rodrigue's formula. It is thus evident that Legendre polynomials can be obtained through operations other than the "pure" arithmetic operations of addition and multiplication. Consideration of the potential energy of a unit point-charge in a spherical domain setting seems to provide the link.
4.0 CONCLUSION
We have thus traced the links between the potential energy at the point P(r, \varphi, \theta) due to a unit point-charge at the location (0, 0, h), the definitions of Legendre polynomials by means of Rodrigue's formula and the generating function for the same polynomials. Hopefully, we have dispelled the mystery about having to obtain Legendre's polynomials through the series solution of the corresponding Legendre differential equation.
REFERENCES
Dunham Jackson (1941), Fourier Series and Orthogonal Polynomials, Dover Publications, Inc., New York.
Harry F. Davis (1963), Fourier Series and Orthogonal Functions, Dover Publications, Inc., New York.
Oliver Dimon Kellogg (1954), Foundations of Potential Theory, Dover Publications, Inc., New York.
VIRTUAL SCHOOLS USING LOCOLMS TO ENHANCE LEARNING IN THE LEAST DEVELOPED COUNTRIES
Ngarambe Donart, Kigali Institute of Science and Technology, Department of CEIT, Kigali,
Rwanda
Ntayombya Phocus, UNICEF, Kigali, Rwanda
Shrivastava Manish, Kigali Institute of Science and Technology, Department of CELT, Kigali, Rwanda
ABSTRACT The long-time traditional classroom mode of education has been struggling a lot to cope with low educational budgets, especially among the Least Developed Countries (LDCs), against the disproportionate exponential population growth rate to support the expected increases both in classroom spaces and training of educators. With the inception of ICT technologies, both constraints are being overcome because it has now become possible that: (1) few educators can be utilized to serve a large geographical area, or even an entire global area, thus saving on the budgets towards the training of educators; (2) learners can learn at any time and from anywhere, thus making classroom spaces a less compulsory learning factor. The paper discusses Local College Learning Management System (LoColms) for the cheap delivery, of rich full-motion video contents, usually of prohibitive costs, to Virtual Schools in the poor communities. We have developed a system, LoColms, a learning management system that can be used to support a scenario of Virtual Schools whereby learners can watch educators' demonstrations, which is a much better learning environment. This solution as well addresses the cost issues that come with the video resource delivery; which is achieved by employing Proxy Cache Servers & Multimedia Stream storage Servers and Point-toPoint communication Protocol (PPP) Technologies over the ubiquitous Public Switched Telephone Network (PSTN), which in most of the least developed countries is now fully digital and having sufficient bandwidth. With this approach LDCs' governments can save in two major areas: 1) on the budgets involved in training and salary payments of many teachers, and on procurement of scholastic materials such as text books, chalks, etc. For example, some cases only one computer would be required per virtual school class room; 2) fewer classroom spaces would be required as classes can go for 24 hours everyday, since no educator would be physically required, as the contents would be asynchronously accessed from the Proxy Cache server It
provides a sustainable solution as this aims at empowering local institutions to prepare and disseminate contents that are of local relevance. Key words: Virtual School, LoColms, Proxy Cache Server, PPP
1.0 INTRODUCTION In several Regional and International Declarations it was emphasized that education should be a must for all: in the Universal Declaration of Human Rights,(UDHR), (1948), the freedom of education for all was recognized among other human rights that need to be protected - "Everyone has the right to education" -; World Conference on Education For All, (WCEFA, 1990), held in Jomtien, Thailand, the four conveners of the conference, UNESCO, UNICEF, UNDP, and the World Bank, the WCEFA participants, 155 governments, 33 intergovernmental bodies, and 125 NGOs, adopted an initiative intended to stimulate international commitment to a new and broader vision of basic education to " meet the basic learning needs of all, to equip people with the knowledge, skills, values, attitudes they need to live in dignity, to continue learning, to improve their own lives, and to contribute to the development of their communities and nations"; In the Beirut Declaration on Higher Education in the Arab States (BDOHEIAS, 1998), it was also acknowledged that education is a useful tool in the economic growth in the face of Globalisation and towards Regional Peaceful coexistence. The LDCs represent the poorest and weakest segment of the International community, comprisingabout 49 poorest countries, constituting a population of about 1.2 billion people. These countries are characterized by their exposure to a series of vulnerabilities and constraints such as limited human, institutional and productive capacity; acute susceptibility to external economic shocks, natural and man-made disasters and communicable diseases; limited access to education, health and other social services and to natural resources; poor infrastructure; and lack of access to information and communication technologies, (The Third United Nations Conference on the Least Developed Countries Brussels, UNCOLDC (2001). The emergence of distance education provides an important way to address these concerns (Gundani et al, 1997),. The basic barriers to distance education in these countries are the lack of: 1) resources needed for meaningful development and sustenance of technologybased learning; 2) infrastructures (which includes information and communication hardware systems) to support modem technologies in least developed and/or low-technology countries, and; 3) recurrent funding necessary to acquire or develop appropriate software and courseware on a continuous basis, and maintain, service and replace the equipment. In an attempt to find a solution that is relevant to the situation in the LDCs, we have developed a web-based distance educational system, Local Colleges Learning Management Sys-
563
International Conference on Advances in Engineering and Technology
tern, (LoColms), whose primary objective is to empower the local educational institutions to improve their educational capacities in qualitative and quantitative terms. The rationale is to take the advantage of what already exists in these countries, like the PSTN's wellestablished infrastructure, which, presumably, has already been digitally upgraded for the ease of data communication, and the local educational institutions. The key technologies supporting these LoColms supported Virtual Schools, are the PSTN (have a high Quality of Service (QoS) infrastructure, the Point-to-Point Protocols (PPP), and the Proxy Caches (ProCa). The choice of the PSTN is to eliminate the duplication of communication networks (such as Internet, a packet-switched network, that has been the main infrastructure of the Web), and the choice to utilize the local educational institutions is in order that these can be empowered, by owning the process. The PPP is used to provide a direct TCP/IP link between the local educational institutions and the Virtual Schools, while the Proxy Caches are to minimize communication traffic and costs. The choice of the optical fiber cable or Digital Subscriber Link (xDSL) is to provide a broadband environment over the ordinary telephone subscriber lines to provide very high quality of shareable multimedia study materials (video contents). The LoColms seeks to address financial and administrative problems. Its aim is to integrate Distance Education into the main stream of educational system of individual colleges to avoid the quagmire of many players, and aims at achieving both the cost-effectiveness ('cheapness' of educational provision- usually expressed in terms of per-student costs) and the cost-efficiency (the optimal balance between cost, student numbers, and educational quality) for each college on the system. In considering sound educational investment, it is essential to distinguish effectiveness from efficiency. The main advantage of the LoColms is that it addresses the financial considerations, in that there is no need for major government funding required. Probably the governments would only come in with policies to ensure the smooth operation of the educational system over commercially run Virtual Schools as well as providing preferential treatment to the operators of the Virtual Schools. Also, since the remote students would not require any of the university's facilities for accommodation, feeding, healthcare, classroom, library, etc., necessarily the cost for tuition should be reasonably very low. The costs over the telephone system are almost eliminated by the use of the Proxy Caches, which are normally used to reduce latency and traffic of the study contents over communication networks. This paper provides a scenario of Virtual Schools solution to improve the per capita children educational deficit in the low income countries. In this way the pressure of training great numbers of trainers on the side of these governments does get eased, and the necessity of providing specified locations and times for providing teaching/learning becomes less mandatory, and for Learners to have access to the more learning friendly mode - Full Motion Video - either from the Virtual Schools centers established in existing schools and/or in established study centers. The architecture of such services will consist of Multimedia Storage
564
Donart, Phocus & Manish
Servers that are connected to client sites via high-speed networks. However, in this paper we opt for asynchronous delivery of media quanta (a Database based resources delivery) in order to cut costs even much further. 2.0 C H O I C E OF SCHOOLS
TECHNOLOGY
FOR
THE
LOCOLMS
BASED
VIRTUAL
Studies reveal that education in the LDCs is still far from being a right but continues to be a privilege for most people due to a couple of constraints: 9 Budgetary constraint: According to the (UNESCO, "Education and Training in the Least Developed Countries", UETLDC (1995)), it is indicated that the majority of educational systems in the developing countries, especially the LDCs, are confronted with major setbacks, mainly due to the use of inadequate, inappropriate, often inefficient and most always costly educational strategies. The links between cost-benefits, costefficiency, and cost-effectiveness remain weak in most of these countries because of high costs of educational materials and services, burdensome procedures and mechanisms for educational spending, and the use of inappropriate technologies and educational methods. 9 Unsustainable supporting technologies constraint: According to, Hilary Perraton, et al, (2000), it is stated that there cannot be a practical substitute for Primary schools --Children need to learn within a social environment. However, the paper observes that technologies may play a part in meeting the needs of children or adults who cannot get to school or conventional class and it makes sense to look at the technologies together, from print to broadcasting to computers. Even though the International Community is showing a great concern and doing everything possible to improve the educational situation in the LDCs, by for example the World Bank supporting Computer for Linking Schools projects Hilary Perraton et al (2000); the USAID supporting the Interactive Radio Instruction projects; UNESCO supporting Computer For Teacher Training projects; and many other funding agencies and NGOs supporting other various educational projects, obviously these noble efforts can only offer a temporary solution. This can be deduced from the example of an ambitious attempt to use technology to raise the quality of basic education and widen access using the Television Broadcast Project in C6te d'Ivoire, Hilary Perraton, et al (2000), a program that was launched in 1971, with the intention of reaching 21 000 1st grade children in the first year and with the other 5 grades added every year which by 1975 was reaching 235 000 children but, in 1981 the government of C6te d'Ivoire closed it down. We suppose the reason for the closure was probably because the government realized it would not be able to takeover the task in the event the funding agencies withdrew their role. Since the inception of the digitization of the PSTN system, data communication has been possible over the traditional voice communication network, and also its traditional bandwidth limitations no longer exist. For instance, with the use of data compression techniques,
565
International Conference on Advances in Engineering and Technology
the most modest bandwidths required are as follows: 64Kbps for video conferencing applications, 2Mbps for Full-motion broadcast video applications, and 19Mbps for Highdefinition television applications. The data rates might continue to scale down, substantially, in future with more improvements on the compressions power. To date, inter switching stations with data rates in Gigabits/s are possible over PSTN using SONET/SDH technologies; and about 2 Mbps and STS-1 by ISDN (H12) and xDSL, respectively, over the telephone subscriber loop twisted pair copper cables. Over the PSTN, we utilize the Point-to-Point Protocol (PPP), the protocol that supports TCP/IP over serial communications lines where routers and gateways are not used, and the ProCa, which regulate communication traffic and the communication costs over the PSTN. PPP provides a standard method for transporting multi-protocol datagrams over point-topoint links. The PPP encapsulation is suitable over SONET/SDH, DSL, and ISDN links, since by definition these are point-to-point circuits. The system employs ProCa to provide temporary storage web objects (HTML pages, images and files) for later retrieval. The basic Internet client-server model (where clients connect directly to servers) wastes resources, especially for the often-repeated transfer highly popular information. That's because popular objects are requested only once, and served to a large number of clients. Because ProCa usually have a large number of users behind them, they are very good at reducing latency and traffic: 1) reduce latency - Because the request is satisfied from the cache (which is closer to the client) instead of the origin server, it takes less time for the client to get the object and display it; 2) reduce traffic - Because each object is only gotten from the server once, it reduces the amount of bandwidth used by a client. This saves money for the client is paying by traffic, and keeps their bandwidth requirements lower and more manageable. Freshness and Validation of the contents are controlled (LastModified or If-Modified-Since common validators) to avoid having to download the entire object when they already have a copy locally, but are not sure if it's still fresh by time. 3.0 THE L O C O L M S BASED VIRTUAL SCHOOLS
A Virtual School is based on the concept of the networking learning environment and the technical possibilities offered by new information and communication technology, which is able to deal with all the tasks of school without the need for a physical school building. Virtual school, thus, does not exist according to an ontological analysis as a concrete building with classrooms, office rooms, teachers, other staff, or pupils. Virtual school is a logical extension of the use of computers in teaching (Tella et al, 1996). We can thus regard virtual school defined narrowly (Illich, 1972), as a school without a building but still connected to society. Independence of time and place and historical neutrality is central to the concept of virtual school. Virtual school can work as a virtual extension of ordinary school or classroom activity.
566
Donart, Phocus & Manish
If we regard virtual school as a symbiotic extension of ordinary school, part of the activities of physical school may be moved to virtual school and carried out there with the aid of information and communication technologies. The LoColms system provides virtuality in the educational systems but sticking to the local colleges' educational standards. The system allows for customization for the sake of protecting the autonomy of the educational culture of individual nations. It primarily encourages full involvement of the local resources, such as lecturers and their teaching materials.As has been indicated earlier, the emergence of distance education provides an important way of addressing the precarious situation of education in the LDCs, but for reasons ranging from economic disadvantages to the level of sophistication of the Information Technology (IT) systems, technology supported education might delay to have an impact on these developing nations. This system is not merely aiming at raising enrollment figures, even though that apparently is the ultimate objective, but also at improving on educational conditions in a given country. The objective is to help educational local institutions to increase schools' enrollment capacities without requiring any major governments' or foreign donors funding, because it is potentially a commercially viable system for the educational local institutions as well as other parties involved in the implementation of the system. The size of colleges will virtually increase by remote enrollment, and the limited resources, such as qualified teachers and study materials can tremendously be stretched, made accessible to all, all the time. LoColms based Virtual Study Centers (VSCs) is arranged as in fig. 1.
PSTN (PPP) Network (Connection to the
, '
''
~"
,,
,
~
I
'
~
,
~
'
/
9
~-
\
OllO )
,'
~,.
"ll!' 'hi
proCa
AC
Fig. 1" VSCs linked to Local Educational Colleges over PSTN by PPP LoColms supports asynchronous multimedia (mainly video recorded) instructions that are transferred over the PSTN. The system utilizes the proxy caches to store the downloaded study contents at the learners' end, yet keeping the education providers constantly informed
567
International Conference on Advances in Engineering and Technology
of the online learners' progress. Basically, the system addresses two asynchronous instruction pedagogical concerns: 1) how to keep track of learners' attendance; 2) and how to provide an online support mechanism for the online evaluation. The purpose of organizing learners on the LoColms supported virtual schools in virtual school study centers is to enable the use of broadband media connections, and easy invigilation for the online examinations. On making a PPP dialup connection to the respective educational institution, the system verifies password for authentication. This also allows the server to be kept informed each time an online learner reports for a lesson. After the logon onto the system, the PPP dialup connection is established, the client side guides the remote scholar through login process, and when the LoColms Client side has gathered all the information about the student's intentions, such as the college, the year of study, the Program (or major), and subject of study, the LoColms Server is contacted for: 1) password and payment verification; and 2) providing the required service. These are provided by the client side of the LoColms application, so that the request to the LoColms server is made only once. After the study resources from the College database servers have been downloaded by individual learners, they are temporarily stored into the VSC's ProCa, for the subsequent learners, until they are replaced by successive course packages by automatic prefetch according to the Course Sequencing Prerequisites and Completion Requirements. It is highly hoped that many courses would be shared, although different remote learners may be studying different Assignment Units (Aus) of a same course, according to the Course Structure Format (CSF) of each course, SCORM 1.0 (2000). Each time a learner is issued with the current Assignment Unit (AU), according to the curricular Taxonomy, it should include an exercise and frequently asked questions (FAQ) for each AU, and each time the student has finished the AU's exercise, a had- class is uploaded to the system server. If the student's intention is to study, the LoColms will first check to make sure these contents are not already saved on the ProCa. If the course contents are not in the ProCa, they will be downloaded from the university LoColms server and then the system will disconnect the PSTN transmission channel. The learner(s) carry on learning from the ProCa of the VSC's LAN. In other words, the telephone line is only in use during the login, for administrative procedures, and for downloading only when the content doesn't exist in the ProCa. The system mainly uses video materials of the recorded class sessions saved into the LoColms servers of the university. So broadband links (Optical fiber or DSL,) between the VSCs' LANs, the local educational institutions' LANs, and the telephone central office are a must, which is much a better method of online learning than the textual or audio modes. The Course Structure Format (CSF) and Course "Packaging" in the LoColms is emulated in the ProCa, although the ProCa requires contents by block units of a topic of the subjects being studied at a time. This is because the contents are big video recorded class sessions files. The elements block, Assignment Unit au, and objective will satisfy the prerequisites and
568
Donart, Phocus & Manish
completionReq of the CSF. The learners from the study centers will be served with topic assignment units (Tau) according to the prerequisites and completionReq procedure, (Tau~&]au~ &...&laux); studied sequentially according to the order the units were taught at the physical school, with an after Tau exercise to mark the completion of the Tau if attempted after a period of 60 minutes, and a finished Tau is recorded in the LoColms server against the topic of a given subject.The application consists of two servers, the ProCa server and the LoColms server, as in fig. 2, and fig. 3 study procedure screen shots.
Fig. 2" system architecture *'~ ~:~:: ~!~ :~::::~:~:.i:~.; ...... ................. .~:.~ .: :~: :;;:~..~: e : ~ : : . ~ : . N~:.;..; ~::~:.NN;N:::,~:::.~ ~. . . . . . . . . . . . . . . . . . .
~
..
: ~: :::;!111::~,~ ~ ~ :::: ;;;~-r
:: . . . . . . . . ~ : ~ .~ :~ ~
~i~
~::::~i~
~
!~ ~!:,
: ;
i:::: :::: : . . . : ..... : : : : ::: ... ~:-. ::.. ::::
Fig.3. Student study procedure screen shot
569
International Conference on Advances in Engineering and Technology
4.0 CONCLUSION In paper we discussed the Local College Learning Management System (LoColms) based Virtual School Application, whose objective is to provide both a sustainable and economical solution, suitable for educational situation in the LDCs. The application is a web-based system, and aims at improving the traditional form of education by empowering the local educational institutions. Its economicability comes from the fact that it is supported by traditional communication technology, the public switching telephone network system (PSTN), formerly regarded a voice communication system, which already exists in all of the LDC countries to avoid the costs that would be involved in deploying packet switched networks or dedicated private virtual networks (PVN) usually required in similar situations along side, that would otherwise become an intimidating factor to the decision makers in these countries. The work discussed is an innovation, whereby different technologies are combined to make web based education a realizable dream in the least developed countries in the soonest possible time. By this approach a lot can be achieved: 1) the WWW infrastructure would be economically established and with ease; 2) individual colleges' enrollment would, virtually, rise exponentially; 3) the local resources would be helped to develop; 4) the web-based educational system would be sustainable. We hope that this work will stimulate further research in appropriate technologies, especially the web-based ones, that will be more applicable in the LDCs' situations, for the interest of education in the LDCs in particular and any other socio-economic aspects in an effort to bridge the digital divide in general, relying on the locally available resources with an aim of strengthening them. Although the mastery of IT related technologies should become a priority, it shouldn't be a precondition for these countries to engage in the technology-based education systems, especially if there already exists a minimum technological capacity with which to start. In our view, what is required is a formula, and in what combinations of these technologies. REVERENCE BDOHEIAS - Beirut Declaration on Higher Education for Arab Regional Conference on Higher Education Beirut, Lebanon, 2 - 5, (1998). Gundani,& Govinda Shrestha, "Distance Education in Developing Countries", http://www.undp.org/info21/public/distance/pb-dis.html#up, (1997) Hilary Perraton & Charlotte Creed "Applying New Technologies and Cost-Effective Delivery Systems in Basic Education Thematic Review": International Research Foundation for Open Learning for the Department for International Development on behalf of multiagency review, (2000) Illich, I. "Towards a School-Free Society." Helsinki: Otava, (1972). SCORM 1.0, http://www.adlnet.org, (2000).
Tella, S., "Virtual School in a Networking Learning Environment." University of Helsinki. Lahti Research and Training Centre, 146-176, (1995). UDHR- General Assembly of the United Nations proclaimed this Universal Declaration of Human Rights (Article 26.1), (1948), http://www.historyoftheuniverse.com/udhr.html. UETLDCUMTROETLDC - UNESCO, "Education and Training in the Least Developed Countries", Mid-term Review, Paris, (1995). UNCOLDC, The Third United Nations Conference on the Least Developed Countries, Brussels, (2001) WCEFA - Final Report, World Conference on Education for All (Jomtien, Thailand, 5-9), (1990).
SCHEDULING A PRODUCTION PLANT USING CONSTRAINT DIRECTED SEARCH
D. Kibira, Department of Mechanical Engineering, Makerere University, Uganda
B. Kariko-Buhwezi, Department of Mechanical Engineering, Makerere University, Uganda
P. I. Musasizi, Department of Engineering Mathematics, Makerere University, Uganda
ABSTRACT
The paper presents the application of constraint directed search to production scheduling at Uganda Clays Limited, to increase productivity and timely order delivery. An experienced human has in the past performed the scheduling task for the 69 clay products made on the same facility. This has been a serious challenge. The production process consists of five major sections i.e. the Silo, Green production section, dryers, kilns, and the stockyard. A scheduling system based on the Multiple Perspective Scheduling technique has been developed for the Green production section. The system uses products and resources data and provides for selection of scheduling policies. The schedules developed are similar to those of an experienced human. Therefore, the developed system would go a long way in addressing the need for an automated Production Scheduling Decision Support System at Uganda Clays. It can also be adapted to similar production environments. Keywords: Production Scheduling; Uganda Clays Limited; Green Production Section; Constraint-Directed Search; Multiple Perspective Scheduling; Decision Support System; Resource Utilization; Order Due-Date Compliancy.
1.0 INTRODUCTION
Uganda Clays Ltd (UCL) is the leading manufacturer of baked-clay building materials and other clay products in Uganda, with a market share of about 65%. It manufactures a variety of products under the following classes: roofing tiles, walling and partitioning blocks, decorative grilles, suspended floor units, ventilators and other products. Each of these product classes has a number of variants, giving a range of over 69 different product items. Scheduling such batch-produced items on the same production facility to satisfy customer due dates has been a serious challenge. The production process consists of four series processing sections: the silo for milling and blending the clay, the green production section for moulding the milled and blended clay into different products, the dryers, and the kilns for firing the dried products. There is also a stockyard for sorting, grading and storing before delivery. Each of the above sections
performs different scheduling activities, with the corporate goal being the optimum utilization of resources and order due-date compliance. In this paper, we present the development of an improved production scheduling system for the green production section.
1.1 Production System Description
The Green Production Section has three production lines, each fitted with an extruder and a cutting table. The extruders are the Bongioanni 15-MA, the Morando MVA-400 and the Synthesis SYN-550. There are three tile presses: the Bongioanni Automatic Tile Press (ATP), the Morando Automatic Tile Press (ATPNL), and the Manual Tile Press (MTP).
Fig. 1: Schematic representation of the production system (Silo, then the Green Production Section with extruders 15-MA, MVA-400 and SYN-550 and presses ATP, ATPNL and MTP, then the Dryers, Kilns and Stockyard)
The production system is schematically represented as shown in Figure 1 above. The Green Production Section has two types of machines and processes: the extrusion process and the pressing process. Some products are just extruded while others are extruded and pressed. The extruders and presses can be used to produce different products by changing the mould. Currently, certain products can only be produced on particular machines because moulds are designed for a specific machine and the moulds available do not cover all products on all machines.
1.2 Current Scheduling Practice
Currently, production scheduling is done manually on a weekly basis. Due to the inherent complexity associated with scheduling in this environment, developing a satisfactory production schedule without the support of an information system is a difficult and protracted procedure. Some objectives are often conflicting, for instance satisfying high-priority orders and meeting due dates. Additionally, orders arrive in a stochastic manner with varying dispatch priority and other order characteristics such as the ordered product array and quantities.
Every year some products are flagged "prime products" based on the demand history and must be catered for while making the weekly production schedule. Therefore, UCL gives priority to the production of prime products and caters for the others based on weekly orders made; these latter are called "special orders". The prime products for the year 2004 were Mangalore Tiles, Portuguese Tiles, Ridges, Bricks, Half Bricks, Maxpan 5" and Maxpan 6". Production scheduling of prime products is based on monitoring their stock levels in the stockyard. The so-called special orders are made on a make-to-order strategy. Product items are produced in batches per shift, whose sizes are limited to the budgeted capacity (or planned production level) of a particular machine. The plant runs two eight-hour shifts a day, and there are five working days in a week. In this arrangement, a weekly green production schedule is produced. The actual execution of the schedule is of course affected by factors such as machine breakdowns, bad weather, and intermittent electricity supply.
1.3 Data Collection
The data collected includes products' specifications, prime products, green production equipment specifications, and the classification of products by machine. This information provided the basis for, and the input to, the development of the Decision Support System (DSS); the selected scheduling policies utilize this data to generate the Weekly Green Production Schedule. Interviews with production personnel were carried out in the form of discussions. Detailed information on product and equipment characteristics was collected using data collection and recording sheets. The information collected includes products data, special orders data, and resource utilization data. The exhibits of collected data are shown below.
(i) Budgeted Green Production, January to December 2005.
(ii) The prime products were Mangalore, Marseille and Portuguese Tiles, Half-Bricks (GR), Ridges, Maxpan 5" and Maxpan 6". The budget was based on a day of two shifts, five working days a week and 4 weeks a month (22 working days).

Table 1: Green Production Budget for Prime Products for the year 2005

Product          | Machine | Pieces/day | Number of days | Pieces/month
Mangalore Tiles  | ATP     | 16,200     | 22             | 356,400
Mangalore Tiles  | ATPNL   | 13,000     | 20             | 260,000
Mangalore Tiles  | MTP     | 12,000     | 15             | 180,000
Marseille Tiles  | MTP     | 12,000     | 3              | 36,000
Portuguese Tiles | ATPNL   | 12,000     | 2              | 24,000
Half-Brick (GR)  | 15MA    | 26,958     | 22             | 593,076
Ridges           | MTP     | 6,500      | 4              | 26,000
Maxpan 5"        | SYN-550 | 7,700      | 5              | 38,500
Maxpan 6"        | SYN-550 | 8,000      | 9              | 72,000
Other information collected: (i) Products data; i.e. product name, description, specifications (weight and physical dimensions), production process and lead-time. (ii) Green production equipment data; i.e. equipment name, description, design capacity (tonnes per shift), budgeted capacity, whether available for both shifts, whether it can be powered by the standby generator and whether it has been reserved for any product. (iii) Products by machine; i.e. whether a particular machine can make a given product.
1.4 Scheduling Technique
Scheduling can be defined as the allocation of resources over time in order to perform a collection of tasks (Baker, 1974), and many useful models have been devised over time for its solution. In many cases, however, scheduling problems manifest themselves as sequencing problems. The scheduling problem at hand requires an algorithm for determining a good sequence for producing a given set of orders over a week's period, given the constraints imposed by due dates, order priority, machine capacity and capability, and pre-budgeted production levels for prime products. Given that even simple models of scheduling (e.g. job-shop scheduling) are NP-hard (Garey and Johnson, 1979), the search process typically depends on heuristic commitments, propagation of the effects of commitments, and the retraction of commitments. In more complex scheduling models the goal is not simply meeting due dates but also satisfying many complex (and interacting) constraints from disparate sources within the organization as a whole (Fox, 1983 and Fox, 1990). Therefore, the solution technique selected under the circumstances is knowledge-based constraint-directed search, which requires a representation of the scheduling problem and a search for a solution that focuses upon the constraints in the problem.
There are different solution approaches in constraint-directed scheduling (Fox, 1990), but the one best suited to this environment is Multiple Perspective Scheduling, which, together with the Preferred Customer Order (PCO) dispatch rule, was used to model the scheduling process. This technique is applicable to environments, such as UCL, with high resource contention. It is both order and resource centred, with the aim of optimizing resource allocation. It involves identification of the island bottleneck, and then proceeds, guided by constraints and a priority rule, through Order Selection, Capacity Analysis, Resource Analysis and Resource Assignment to establish the associated resource reservations. Limited production capacity is the most critical (island) bottleneck at UCL. The scheduling challenge is to assemble and utilize all the quantifiable data drawn from the UCL operations domain and develop a weekly schedule for the Green Production Section that maximizes machine utilization and order due-date compliance, subject to the constraints manifested by the production system. The following constraints were identified as the major determinants of the Weekly Green Production Schedule:
1.5 Budgeted Green Stock Levels
This requires that the sum of units of a product to be produced must be at least equal to the "Budgeted Green Stock Level" for that product for the week being scheduled. During scheduling, the product with the highest "Budgeted Green Stock Level" takes precedence over the rest, i.e. it is the Preferred Customer Order.
1.6 Order Characteristics
Since production of a particular product is restricted to run until the end of a shift, it was noted that when a special order is received there could already be stock (baked stock) that can be used to service part or all of the order. Each order also specifies the quantity of each product item being ordered. This implies that the reservation for production of an item on special order should only cater for the deficit between the units in baked stock and the units on order. However, due to losses in the dryers and kilns, not all green stock is output as baked stock. The losses are a result of breakages during handling and over-firing in the kilns. Management considers a 30% loss factor on all green stock (i.e. only 70% of the green stock is expected to become good baked stock).
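As an illustration of this reservation rule, the minimal sketch below computes the green-stock quantity to reserve for a special order. Only the 0.7 yield factor comes from the text; the function name and the example figures are hypothetical.

```python
def green_units_to_reserve(units_on_order, units_in_baked_stock, yield_factor=0.7):
    """Green-stock reservation for a special order.

    Only the deficit between the order and the existing baked stock is produced,
    inflated by the expected dryer/kiln yield (70% of green stock is expected to
    become good baked stock, per the 30% loss factor assumed by management).
    """
    deficit = max(0, units_on_order - units_in_baked_stock)
    if deficit == 0:
        return 0
    # Produce enough green units so that, after losses, the deficit is covered.
    return int(-(-deficit // yield_factor))  # ceiling division on a float

# Hypothetical example: an order for 10,000 Maxpan 5" with 2,500 already in baked stock
print(green_units_to_reserve(10_000, 2_500))  # -> 10715 green units
```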
Additionally, since all orders have varying priority, the following classification was adopted to indicate the order priority, in line with the PCO dispatch rule:

Order Classification | Basis for Classification
Hot                  | The order must be served before normal time
Normal               | The order is to be served on normal time
Cold                 | The order can be served after normal time
The products in Hot orders are hence given first priority; those in Normal and Cold orders follow in that sequence. Within the order classes, product items are scheduled by giving first priority to the product with the highest green stock level.
1.7 Machine Capabilities
If a machine is reserved, it can only be scheduled to produce, up to the green stock level, the end product for which it is reserved (this excludes reservations for Slabs), and it is not scheduled for any other product thereafter. Otherwise, a product is scheduled on a particular machine only if that machine has the mould for that product.
1.8 Resource Availability
The factors that affect resource availability (emergency breakdowns, power failures, etc.) are of a stochastic nature and cannot be represented by deterministic quantities; since they are probabilistic, they would call for rescheduling at each unique instantiation. The developed system
does not cover reactive scheduling; it is limited to consideration of production system parameters that can be represented by deterministic quantities.
To provide for comparison among several simulated scenarios ("what-if" simulation) resulting from the application of different scheduling policies derived from the UCL production environment, an event-driven model was designed, giving the DSS user the opportunity to define the scheduling policies by selecting any desired basis for the DSS to build a schedule. The scheduling policies modelled were categorized as follows (a sketch of the resulting priority-driven schedule construction is given after the system description below):
• Prime Products Only. This is the default scheduling option; the user does not select to turn it on or off. If no other option is selected, a schedule based on considering only Prime Products is developed by the system.
• Prime Products and Machine Capabilities. This option gives the user the opportunity to see the effect of Machine Capabilities on the schedule if only Prime Products are considered.
• Prime Products and Special Orders. This option gives the user the opportunity to see the effect, on the schedule, of assuming that all presses can produce any extruded-and-pressed product and all extruders can produce any extruded product, when Prime Products and Special Orders are considered.
• Prime Products, Special Orders and Machine Capabilities. This option is a combination of the Prime Products and Special Orders option and the Prime Products and Machine Capabilities option. It shows the effect of all the considered constraints and rules on the schedule.
Within each policy, each Production Order is assigned a priority index and each machine is assigned a sequence index.
2.0 THE DEVELOPED SYSTEM
The system model was implemented as a computer program using the Microsoft Visual Basic 6.0 Integrated Development Environment. It is supported by a dynamic graphical user interface through which the scheduler can easily access and manipulate information about products and resources. The interface provides for the definition of the desired scheduling policies, the update of production data, the display of the generated schedule, and the manipulation of the schedule report through an export method that enables interfacing with other text processing software.
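To make the policy and priority logic above concrete, the following minimal Python sketch fills machine shifts in priority order. It is not the VB6 implementation described in the paper; the class names, data structures and sample data are hypothetical, and only the rules stated above (Hot/Normal/Cold precedence, green stock levels, machine capability and reservation checks) are taken from the text.

```python
from dataclasses import dataclass, field

SHIFTS = [(day, shift) for day in ["Mon", "Tue", "Wed", "Thu", "Fri"] for shift in ("I", "II")]
PRIORITY = {"Hot": 0, "Normal": 1, "Cold": 2}  # Hot orders are served first

@dataclass
class Order:
    product: str
    order_class: str          # "Hot", "Normal" or "Cold"
    green_stock_level: float  # budgeted green stock (or reservation deficit for special orders)

@dataclass
class Machine:
    name: str
    moulds: set               # products this machine can currently make
    reserved_for: str = None  # if set, the machine only makes this product
    schedule: dict = field(default_factory=dict)  # (day, shift) -> product

def can_run(machine, product):
    if machine.reserved_for is not None:
        return product == machine.reserved_for
    return product in machine.moulds

def build_schedule(orders, machines):
    # Priority rule: order class first, then highest green stock level within a class.
    ranked = sorted(orders, key=lambda o: (PRIORITY[o.order_class], -o.green_stock_level))
    for order in ranked:
        for slot in SHIFTS:
            for machine in machines:          # the machine sequence index is the list order
                if slot not in machine.schedule and can_run(machine, order.product):
                    machine.schedule[slot] = order.product
                    break
            else:
                continue
            break  # only one batch (one shift on one machine) is reserved per order here
    return machines

# Hypothetical data for illustration only
machines = [Machine("ATP", {"Mangalore Tiles"}), Machine("SYN-550", {'Maxpan 6"'})]
orders = [Order('Maxpan 6"', "Hot", 8000), Order("Mangalore Tiles", "Normal", 16200)]
for m in build_schedule(orders, machines):
    print(m.name, m.schedule)
```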
2.1 Sample Test Run
A set of data was input to the system with the objective of testing the performance of the developed system. Using the policy of prime products only, a production schedule is generated as indicated in the exhibit below; a different schedule is generated for each of the policies.
[Exhibit: UGANDA CLAYS LIMITED, Weekly Green Production Schedule for Monday 22 to Friday 26 August 2005, two shifts (I and II) per day. Columns cover the six machines (Bongioanni 15MA, Synthesis 550 and Morando MVA 400 extruders; Bongioanni, Morando and Manual tile presses); cell entries assign products such as Slabs, Maxpan 5", Maxpan 6", Halfbricks (GR), Mangalore, Portuguese, Marseille and Andalusia tiles, and Ridges to each machine and shift.]
The program output is consistent with what would be achieved if the schedules were otherwise developed manually based on the same rationale.
3.0 CONCLUSION
A scheduling system has been developed to aid the scheduler at UCL in achieving the following:
• Finding feasible Weekly Green Production Schedules instantaneously, based on the data derived from the system requirements and the scheduling policies defined by the scheduler.
• Simplifying the regular scheduling exercises by requiring only general knowledge of the UCL production system's characteristics from the user.
• Carrying out sufficient what-if analysis of the scheduling decisions.
• Achieving higher productivity using the same resources.
• Maximizing due-date compliance and hence enhancing competitiveness.
Enhanced predictability in the production process at UCL can result in enhanced reliability and improvements in the associated economic performance and the degree of utilization of production equipment, due-date compliance and overall customer satisfaction.
REFERENCES
Adams, J., Balas, E. and Zawack, D. (1988). The Shifting Bottleneck Procedure for Job Shop Scheduling. Management Science, Vol. 34, pp. 391-401.
Baker, K. R. (1974). Introduction to Sequencing and Scheduling. Wiley and Sons.
Beck, C. J. and Fox, M. S. (2000). Constraint-Directed Techniques for Scheduling Alternative Activities. Artificial Intelligence, Vol. 121, pp. 211-250, Elsevier Science.
Fox, M. S. (1990). Constraint-Guided Scheduling - A Short History of Research at Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A. Computers in Industry, Vol. 14, pp. 79-88, Elsevier Science.
Fox, M. S. and Smith, S. F. (1984). ISIS - A Knowledge-Based System for Factory Scheduling. Expert Systems, Vol. 1, pp. 25-49.
Froeschl, K. A. (1993). Two Paradigms of Combinatorial Production Scheduling - Operations Research and Artificial Intelligence. In: Scheduling of Production Processes, Chapter 1, Dorn, J. and Froeschl, K., eds., Ellis Horwood.
Garey, M. R. and Johnson, D. S. (1979). A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York.
A NEGOTIATION MODEL FOR LARGE SCALE MULTI-AGENT SYSTEMS
T. Wanyama, Department of Electrical Engineering, Makerere University, Uganda
G. T. Wani, Department of Engineering Mathematics, Makerere University, Uganda
ABSTRACT
Modeling agent negotiation is of key importance in building multi-agent systems, because negotiation is one of the most important types of agent interaction. Negotiation provides the basis for managing the expectations of the individual negotiating agents, and it enables selecting solutions that satisfy all the agents as much as possible. Thus far, most negotiation models have serious limitations and weaknesses when employed in large-scale multi-agent systems. Yet, large-scale multi-agent systems find their use in major domains of human development, such as space exploration, military technology, disaster response systems, and health technology. This paper presents a negotiation model for large-scale multi-agent systems that is based on Qualitative Reasoning, Game Theory, and similarity criteria. In the model, each agent classifies its negotiation opponents according to the similarity of their preference models. The agents use the Qualitative Reasoning component of the negotiation model to estimate the preference models of their negotiation opponents and to determine the "amount" of tradeoff associated with the various solution options. Moreover, they use the Game Theory component of the negotiation model to determine the social acceptance of each of the solution options. The output of the Qualitative Reasoning and Game Theory components is used to determine the rationale for accepting or rejecting offers made by the agents' negotiation opponents.
Keywords: Centralized, Decentralized, Decision-Making, Game-Theory, Group-Choice, Large-Scale, Multi-Agent, Negotiation, Opponent, Preferences, Reasoning
1.0 INTRODUCTION
When solving Group-Choice problems where agents have to select solution options from sets of alternatives, each agent normally has its own preference model, which is made up of a set of decision variables (criteria for evaluating solution options) and preference value functions (criteria weights). Furthermore, each agent applies its preference model independently to the features of the solution options, using a Multi-Criteria Decision Making (MCDM) technique. This results in a ranking of the solution options for each agent. A solution that is ranked highest by all the agents is a dominant solution, and it should be selected as the agreement solution. However, Group-Choice problems normally do not have
dominant solutions, due to the differences in the preference models of the agents. In this case, an agreement (best-fit) solution option is identified through a negotiation process. Negotiation is a form of agent interaction that aims at identifying agreement solution options through an iterative process of making proposals (offers). The attributes of these proposals depend heavily on the preference models of the agents concerned, and on the knowledge that the agents have about the preference models of their negotiation opponents. Consequently, appropriate negotiation models should be able to assist agents to collect preference information about their negotiation opponents, and to integrate this information with their own preference models in order to identify and make proposals that are most likely to be accepted as the agreement solution options. Although many negotiation models have been reported in the literature, they all fall under two distinct categories, namely Analytic Based Models (Barbuceanu & Lo, 2000; Kraus, 1997) and Knowledge Based Models (Raiffa, 1982). In the context of Group-Choice problems in Large Scale Multi-Agent Systems (LSMAS), negotiation models in the literature have the following shortfalls:
• Both categories of negotiation models are developed with an implicit assumption that agents are always available during the entire negotiation period. This is not realistic in large-scale distributed multi-agent systems, because in such systems agents are terminated or crash without warning.
• Most analytic based agent negotiation models employ techniques that are naturally centralized; this is against the principle of decentralization, which is fundamental to the concept of Multi-Agent Systems (MAS). Moreover, analytic based models require the central processor to have complete information about the preferences of all the negotiating agents. This is impractical for LSMAS.
• Most knowledge based negotiation models result in random behavior (a lack of any mechanism to track the negotiation process) of the negotiating agents. This behavior results in unnecessary deadlocks in LSMAS. The few knowledge based negotiation models that track negotiation processes are invariably feasible only for negotiations involving two agents, such as in the buyer-seller negotiation problem.
This paper presents a negotiation model for solving group-choice problems in LSMAS. The model is based on categorizing the negotiation opponents of agents according to the similarity of their preferences. Since the agents focus on making proposals that are acceptable to classes of opponents, instead of dealing with each opponent individually, the model presented in this paper enables the agents to address issues associated with many negotiation opponents. This makes the model practical for both small and large scale MAS. Furthermore, the model allows agents to seamlessly join or leave the negotiation process, which addresses the issue of agents crashing, or being started and terminated without warning.
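As a minimal illustration of the preference-model evaluation described in the introduction, the sketch below scores and ranks solution options with a simple weighted-sum MCDM rule. The specific MCDM technique used by the agents is not stated here, so the weighted sum, the function names and the sample numbers are assumptions for illustration only.

```python
def score_option(option_features, preference_model):
    """Weighted-sum score of one solution option under one agent's preference model.

    preference_model maps each decision variable (criterion) to its weight;
    option_features maps the same criteria to normalised performance values in [0, 1].
    """
    return sum(weight * option_features.get(criterion, 0.0)
               for criterion, weight in preference_model.items())

def rank_options(options, preference_model):
    """Return option names ordered from most to least preferred for this agent."""
    return sorted(options, key=lambda name: score_option(options[name], preference_model),
                  reverse=True)

# Hypothetical data: two COTS-like options evaluated on three decision variables.
prefs = {"cost": 0.5, "reliability": 0.3, "usability": 0.2}
options = {
    "Option A": {"cost": 0.9, "reliability": 0.4, "usability": 0.6},
    "Option B": {"cost": 0.5, "reliability": 0.9, "usability": 0.8},
}
print(rank_options(options, prefs))  # a dominant solution exists only if every agent ranks the same option first
```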
2.0 RELATED WORK
Negotiation is a very extensive subject that spans from pre-negotiation to post-negotiation analysis, both at the local and the social level. Consequently, a considerable amount of work on negotiation is available in the literature from different domains, such as operational research, economics, and decision theory (Jennings et al., 2001; Faratin et al., undated). In this section, we present the work that is directly related to our negotiation model.
The analytic based agent negotiation models utilize analytic techniques such as Game Theory to determine the solutions that maximize the social welfare of the negotiating agents (Kraus, 1997). In most of these models, each agent evaluates the solution options according to the preference model of its clients, a process that results in performance scores of each solution option for every agent. These scores are sent to a central processor that determines the 'best-fit' solution option and/or a ranking of the solution options with respect to the combined preferences of the negotiating agents. Analytic based agent negotiation models minimize communication among the negotiating agents; however, besides the drawbacks associated with LSMAS presented in Section 1, these models invariably have the following general shortfalls:
• The agents have no control over the tradeoffs made during the negotiation process; the models consider only the quantity of the tradeoffs, disregarding their quality. That is, analytic based models are used with an implicit assumption that the negotiating agents accept any tradeoffs so long as they are associated with the smallest total tradeoff quantity. However, this is not always true, since agents may sometimes be more willing to give larger concessions on some decision variables than to give a small concession on others.
• The analytic based models do not follow the natural process of negotiation, where, in between offers and counter-offers, multiple negotiation decision variables are traded off against one another in order to identify the solution that maximizes the social welfare.
Kraus (2001) presents a knowledge based agent negotiation model that implicitly depends on tradeoffs made by the negotiating agents to determine the agreement solution. In the model, the agents evaluate the solution options individually, and then start the process of making offers and counter-offers. In between each negotiation round, the agents make tradeoffs aimed at identifying a solution option that is acceptable to all negotiating agents. This model has the following major shortfalls:
• It does not give any guarantee that the agreement solution maximizes the social welfare of the negotiating agents.
• It does not support learning from the offers made by the agents' negotiation opponents in order to enable the agents to make offers that are more socially acceptable as the negotiation progresses; this results in random behavior of the agents.
• The agents have no way of knowing whether the negotiation is converging or not.
To circumvent the shortfalls of the analytic models, as well as the shortfalls of the Kraus (2001) model, Faratin et al. (undated) have proposed an agent negotiation model that depends on utility, similar to the analytic models. Moreover, the model enables the agents to make tradeoffs during negotiation, like the knowledge-based models. The negotiating agents can utilize the model proposed by Faratin et al. even if they have only partial information about the solution; thus the model has the potential of enabling the agents to search a larger solution space. However, in the context of LSMAS, the model has the following shortfall: it is viable for only two negotiating agents, as in buyer-seller negotiation problems. Therefore, the approach of Faratin et al. may not be applicable to LSMAS in its current form.
Ray & Triantaphyllou (1998) propose a negotiation model that is based on the possible number of agreements and conflicts on the relative importance of the decision variables. However, having different preference functions does not necessarily mean preferring different solution options; therefore, this model is too inefficient to be utilized in LSMAS. The other shortfalls of this model are the assumptions that the clients of the agents have the same concerns, hence the same set of decision variables, and that the preference models of the negotiating agents are public information. In practice, agent clients normally have different concerns, which lead to different sets of decision variables as well as preference value functions, and this information is private.
This paper presents an agent negotiation model that is based on Qualitative Reasoning (QR) and Game Theory (GT). We call it the Universal Agent NEgotiation Model (UANEM), because it is applicable to both small and large scale MAS. Moreover, the model can be used in a variety of negotiation problems such as Group-Choice negotiation, Seller-Buyer negotiation, and Auction problems. It should be noted that this paper focuses on the use of the model in Group-Choice negotiation problems. The QR component of the model assists the agents to estimate the preference models of their negotiation opponents, and to determine the similarities between preference models. On the other hand, each of the agents utilizes the Game Theory component to determine how acceptable each of the solution options is to all the negotiating agents. The UANEM is similar to the negotiation model of Faratin et al., except that our model utilizes a Game Theory component to support negotiation among n agents.
3.0 NEGOTIATION MODEL FOR LSMAS
Increasing the number of agents in a MAS introduces complexity into the modeling of agent negotiation processes. As a matter of fact, there are agent negotiation models that are highly efficient for negotiations between two agents, but whose efficiency drops considerably when the number of agents in the MAS is increased by just one agent. The model proposed by Faratin et al. is a good example of such models. In the following sub-sections, we describe how we developed an automatic agent negotiation model for Group-Choice problems, and how we modified it to become applicable to LSMAS.
3.1 The First Version of Our Group-Choice Negotiation Model
We developed our Group-Choice Negotiation Model (GCNM) for use in a Decision Support System (DSS) for the selection of Commercial-Off-The-Shelf (COTS) products, on which we were working [10]. The main objective of that project was to develop a DSS which allows both the group and the individual stakeholder processes to be carried out concurrently. Therefore, our main concern was the provision of appropriate user agents for the various stakeholders of the COTS selection process, and the integration of the user information to automatically identify the 'best-fit' COTS products. The automatic negotiation was not meant to replace the human decision makers, but to assist the stakeholders to carry out simulation-based analysis and ask 'what if' questions, both at the individual and group levels. At this stage we did not mind whether the resulting MAS was centralized or decentralized. Moreover, the COTS selection problem normally involves few (3-10) stakeholders, so our agent negotiation model did not have to satisfy the requirements imposed by LSMAS. Figure 1 shows the first version of our GCNM. To facilitate this model, each user agent has a negotiation engine with three components:
• The first component has a Multi-Criteria Decision Making (MCDM) algorithm that enables the agents to evaluate and rank the solution options according to their performance scores against the preference models of their clients.
• The second component of the negotiation engine has a simple comparison algorithm that allows the agents to compare the individual ranking of the solution options to the group ranking.
• The third component of the engine contains a Qualitative Reasoning algorithm that enables the agents to adjust the preference models of their clients automatically.
[Figure 1 residue: each user agent applies its agent preference model to the features of the solutions and sends its scores of the solution options to the arbitrator agent; the arbitrator returns a group ranking of the solution options; if the individual and group rankings have different best solutions the agent adjusts its preferences, otherwise it sends a "no change" message to the arbitrator agent.]
Figure 1: First version of our GCNM
The negotiation model shown in Figure 1 works as follows:
(i) Each user agent j determines the score π_j(i) of every solution option i, and sends its scores for all the solution options to the arbitrator agent.
(ii) The arbitrator agent determines the optimal solution option for the negotiating agents using a Game Theory model.
(iii) The arbitrator agent ranks the solution options according to how close they are to the optimal solution. We refer to the closeness of a solution option to the optimal solution as the degree of fitness of the solution option in meeting the combined concerns of all stakeholders. The degree of fitness of the solution options is represented by their Social Fitness Factors (G_f).
(iv) The arbitrator agent sends the Social Fitness Factors of the solution options to all negotiating agents.
(v) If the 'best' Social Fitness Factor corresponds to the most preferred solution option for all agents, the negotiation ends. However, if any of the agents prefers another option, it adjusts its preference model in such a way as to improve the score (payoff) of the option with the best G_f. The agent targets the solution option with the best Social Fitness Factor because it is aware that it has to maximize its payoff subject to the satisfaction of the group. After adjusting its preferences, the agent evaluates all solution options using the new preference model and then sends the new scores of the solution options to the arbitrator agent. This amounts to calling for another round of negotiation.
The above five steps continue until all agents prefer the alternative with the 'best' G_f, or all agents acknowledge that there is nothing they can change to improve their negotiated payoffs without depreciating the G_f of the best-fit alternative considerably. The negotiation model in Figure 1 turned out to be very unreliable: whenever the arbitrator agent was unavailable, it was not possible to carry out any group processes. This was very frustrating, since we had designed our negotiation model in such a way as to support asynchronous decision making, where agents that are not available at some stage of the negotiation process can catch up with the others at a later stage without being at an advantage or a disadvantage. Moreover, the model assumes environments where only the grand coalition maximizes the utility of the agents; yet, in practice, forming a grand coalition does not guarantee maximum utility for the involved agents. Finally, the negotiation model in Figure 1 does not follow the natural process of negotiation, where agents trade offers and counter-offers; instead the model relies on the arbitrator to resolve the differences between the agents.
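The five-step protocol of this first version can be summarised in a hedged sketch. The paper does not give the Game Theory model used by the arbitrator, so the Social Fitness Factor below is simply the distance of each option's score vector from the per-agent optimum, and all names and the preference-adjustment rule are illustrative assumptions.

```python
import numpy as np

def social_fitness_factors(scores):
    """scores[j, i] = score of option i reported by agent j.

    Illustrative stand-in for the arbitrator's Game Theory model: an option is
    'fitter' the closer its score vector is to the ideal (each agent's best score).
    """
    ideal = scores.max(axis=1, keepdims=True)          # best score each agent gives any option
    distance = np.linalg.norm(scores - ideal, axis=0)  # distance of each option from that ideal
    return -distance                                   # higher (less negative) = socially fitter

def negotiate(scores, rounds=50, step=0.05):
    scores = scores.copy()
    for _ in range(rounds):
        gf = social_fitness_factors(scores)
        best = int(np.argmax(gf))
        if all(int(np.argmax(agent_scores)) == best for agent_scores in scores):
            return best                                # every agent already prefers the fittest option
        # Each dissenting agent nudges the fittest option's payoff upward (preference adjustment).
        for j in range(scores.shape[0]):
            if int(np.argmax(scores[j])) != best:
                scores[j, best] += step
    return best

scores = np.array([[0.7, 0.5, 0.2],    # agent 1's scores for options 0..2 (hypothetical)
                   [0.4, 0.6, 0.3],
                   [0.3, 0.8, 0.5]])
print("Agreement option:", negotiate(scores))
```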
3.2 The Second Version of Our Group-Choice Negotiation Model
We addressed the above-mentioned shortfalls of the negotiation model in Figure 1 by modifying the agent negotiation engines as follows:
• The Qualitative Reasoning algorithm was modified to be able to estimate the preference models of the negotiation opponents of agents based on their offers. This enables the agents to estimate the scores of the solution options with respect to the preference models of their various negotiation opponents. Furthermore, the Qualitative Reasoning algorithm was modified to determine the 'amount' of tradeoff (Tradeoff Factors) associated with the various solution options. This helps the agents to know in advance what they gain and/or lose if a particular solution is selected.
• A coalition formation component was added to the negotiation engine. The component has a coalition formation algorithm that assists the agents to join coalitions that maximize their utilities according to the negotiation strategies of their clients. These strategies determine the basis for joining coalitions and the level of commitment that the agents have to their coalitions.
• The arbitrator agent was removed from the MAS and a social welfare component was added to the negotiation engine of the user agents. This component has a Game Theory model, which is used to determine the Social Fitness Factors of the solution options. The inputs to the Game Theory model are the estimated scores of the solution options for the coalition mates of the concerned agent, as well as the actual solution scores for the concerned agent.
• An acceptance component was inserted in the negotiation engine of the user agents. This component has an algorithm for combining the Social Fitness Factors, the Tradeoff Factors, and the parameters of the agent strategies to determine the Acceptance Factors of the solution options.
• The decision-making algorithm was changed from making decisions based on whether the solution with the 'best' Social Fitness Factor is the one preferred by the concerned agent, to selecting offers to be made to the opponents of the agent based on the ranking of the solution options according to the preferences of the agent. In addition, the algorithm was modified to make it capable of deciding how to respond to offers made by the opponents of the agent, based on the Acceptance Factors of the offers.
The above modifications of the agent negotiation engine resulted in the second version of our Group-Choice agent negotiation model. The model enabled decentralizing the MAS such that if any of the agents was not available for some reason, the others could go ahead with the negotiation process. This increased the reliability of the MAS. Moreover, the modifications resulted in a negotiation model that follows the natural process of negotiation, where agents trade offers and counter-offers after evaluating the solution options. Figure 2 shows the negotiation model associated with the modified agent negotiation engine. It should be noted that the solution option with the highest score is the offer that the concerned agent makes.
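Before turning to Figure 2, note that the paper does not give the formula by which the Social Fitness Factors, Tradeoff Factors and strategy parameters are combined into Acceptance Factors. The fragment below is therefore only one hypothetical way such a combination could look, with every name, weight and number invented for illustration.

```python
def acceptance_factor(social_fitness, tradeoff, strategy_weight=0.5):
    """Hypothetical combination of the three quantities named in the text.

    social_fitness : how acceptable the option is to the group (higher is better)
    tradeoff       : the 'amount' the agent gives up if this option is chosen (higher is worse)
    strategy_weight: client strategy parameter trading self-interest against the group
    """
    return strategy_weight * social_fitness - (1.0 - strategy_weight) * tradeoff

def accept_offer(offer_option, factors, threshold=0.0):
    """Accept an opponent's offer if its acceptance factor clears the agent's threshold."""
    return factors[offer_option] >= threshold

# Hypothetical factors for three options
factors = {opt: acceptance_factor(sf, to) for opt, (sf, to) in
           {"A": (0.8, 0.3), "B": (0.4, 0.1), "C": (0.6, 0.6)}.items()}
print(factors, accept_offer("A", factors))
```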
[Figure 2 residue: within a single negotiation round, the Acceptance Factors of the solution options are computed after the 1st offer, updated after the 2nd offer, and so on, until the final updated Acceptance Factors for the round are used to decide on the Nth offer, after which the round ends or the negotiation terminates.]
Figure 2" Second version of our GCNM Therefore, Figure 2 illustrates that on receiving an offer, the agent checks it to determine its type. This results in the following scenarios: (i) The offer is the same as the solution option that the agent prefers. In which case, the offer is accepted.
(ii) The offer is not the preferred solution option of the agent, and it is made by an agent that is not a member of the agent's coalition. Such an offer is sent to the decision component of the negotiation engine to determine whether it satisfies the acceptance criteria before accepting or rejecting it.
(iii) The offer is not the preferred solution option of the agent, and it is made by a member of the agent's coalition. The offer is sent to the Reasoning Component of the negotiation engine to estimate the Acceptance Factors of the solution options. The Acceptance Factors are thereafter sent to the Decision Component of the engine to determine whether the offer satisfies the acceptance criteria.
Figure 2 illustrates how the Acceptance Factors of the solution options are updated as more coalition members make offers. It should be noted that the figure depicts only a single negotiation round. Moreover, Figure 2 shows that if an agreement is not reached by the end of a negotiation round, the final Acceptance Factors of the solution options are used in the negotiation engine to modify the preference model of the concerned agent in preparation for the next negotiation round. The agent modifies its preference model by adjusting the preference values of some decision variables in such a way as to increase the score of the solution option with the 'best' Acceptance Factor; if that solution is not the agent's most preferred, the modified preference model is used to evaluate the solution options at the beginning of the next negotiation round.
When we employed the second version of our negotiation model in Group-Choice problems that involve many (more than 15) stakeholders, the model proved to be inefficient. For example, an agent running on a Personal Computer (PC) with the following specifications (AMD Duron (tm) processor, 1.10 GHz, 256 MB of RAM) would cause the PC to freeze for up to 5 seconds whenever the agent received the last offer in a negotiation round involving 20 negotiation opponents. Since we designed our agents to run on general purpose PCs and/or servers, this level of resource utilization was unacceptable, because it interfered with other processes running on these machines. Moreover, such time delays would definitely affect the applicability of the negotiation model to time-constrained Group-Choice problems such as resource management in wireless networks. We modified the agent negotiation engine to reduce the computational resources, as well as the time, required by agents to respond to offers. The negotiation model that resulted is applicable to both small scale and large scale MAS, and it can be modified to become applicable to other negotiation problems such as buyer-seller negotiation and auction problems. We therefore refer to this model as the Universal Agent NEgotiation Model (UANEM).
3.3 The UANEM
To make our agent negotiation model applicable to LSMAS, we reduced the amount of offer processing by enabling the agents to classify their negotiation opponents according to the similarity of their preference models. This was achieved by adding, to the
Qualitative Reasoning algorithm, the capability to compare offers, as well as the estimated preference models of the negotiation opponents of agents. The resulting agent negotiation model, which we refer to as the UANEM, is similar to the model shown in Figure 2, but instead of the input to the Game Theory model being the estimated scores of the solution options with respect to all the negotiation opponents of the concerned agent, together with the actual scores of the solution options for the concerned agent, it is a set of the scores of the solution options associated with the various classes of negotiating agents, together with the number of agents in each class. This compresses the input data to the Game Theory model, resulting in a reduction of the computational resources and the time required by the agents to respond to offers.
The UANEM can be viewed as a version of the model in Figure 2 that has a memory of previous offers, and that has the ability to classify the negotiation opponents of agents according to the similarities of their offers. On receiving an offer, agents in a negotiation process based on the UANEM are required to check the offer to determine whether the same offer has previously been received in the current negotiation round. This results in two scenarios:
• The offer has previously been received. In this case the agent proposing the offer is added to the class of agents associated with its offer, and the number of agents in each class, as well as the scores of the solution options corresponding to every agent class, are sent to the Social Welfare Component of the negotiation engine of the concerned agent.
• The offer has not previously been received. In this case, the preference model of the proposing agent is estimated and then compared with the representative preference models of the existing agent classes. If it is found to be similar to one or more of the class representative preference models, the agent is marked as a member of the class whose preference model is most similar to its own. However, if the preference model of the proposing agent is not similar to any of the representative preference models of the existing agent classes, the proposing agent is marked as the first member of a new agent class, and its preference model is labelled the representative preference model of the new agent class.
It should be noted that the level of similarity (ω) between two preference models can be set anywhere between the extremes of 0% and 100%. The 100% setting means that for two preference models to be similar, they must have the same decision variables and the same preference value functions; in other words, the two preference models must be identical. On the other hand, the 0% setting of ω implies that the preference models being compared do not have to have anything in common to be treated as similar. In fact, with a 0% setting there is no need to go through the process of memorizing previous offers or comparing preference models. The 0% setting reduces the UANEM to the model proposed by Kraus (2001); in that model, agents do not process the offers of their opponents, and they adjust their preference models randomly at the end of every negotiation round.
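A minimal sketch of the opponent-classification step described above is given below. The similarity measure (cosine similarity between estimated preference-weight vectors) and all names are assumptions, since the paper specifies only that preference models are compared against a similarity level ω.

```python
import numpy as np

def similar(model_a, model_b, omega):
    """Return True if two estimated preference models meet the similarity level omega (0..1).

    Models are dicts of decision variable -> estimated weight; cosine similarity is an
    assumed stand-in for the comparison performed by the Qualitative Reasoning component.
    """
    keys = sorted(set(model_a) | set(model_b))
    a = np.array([model_a.get(k, 0.0) for k in keys])
    b = np.array([model_b.get(k, 0.0) for k in keys])
    if not a.any() or not b.any():
        return omega == 0.0
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine >= omega

def classify_opponent(estimated_model, classes, omega):
    """Assign an opponent to an existing class or open a new one (UANEM-style grouping)."""
    for representative, members in classes:
        if similar(estimated_model, representative, omega):
            members.append(estimated_model)
            return classes
    classes.append((estimated_model, [estimated_model]))  # first member of a new class
    return classes

classes = []
for opponent in [{"cost": 0.6, "speed": 0.4}, {"cost": 0.58, "speed": 0.42}, {"cost": 0.1, "speed": 0.9}]:
    classes = classify_opponent(opponent, classes, omega=0.95)
print(len(classes), "opponent classes")  # the two cost-focused opponents share one class
```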
4.0 SIMULATION EXPERIMENTS
In these experiments, agents were required to select a Commercial-Off-The-Shelf (COTS) product to be used in the development of a web-shop from a set of eight solution options. The agents evaluated the solution options based on preference models made up of twelve predefined decision variables, and the initial preference value functions of the agents were generated using a truncated random number generator. Three types of agent negotiation model were tested in the experiments: the model proposed in Kraus (2001), the second version of our agent negotiation model, and the UANEM. In all experiments we kept the number of solution options constant (eight solution options), and the number of negotiating agents was increased from 2 to 50 in steps of 1. For each number of agents, we ran the simulation one hundred times, noting the negotiation rounds and the time taken by one of the two agents with which the simulation started (Agent a) to process the last offer in every negotiation round. The last offers in the rounds are targeted because they involve processing the preference information of all the negotiating agents, thus resulting in the maximum offer processing time. For the UANEM, we carried out simulations with the value of ω set to 0%, 50% and 100%. Moreover, for simplicity we made the following assumptions with regard to the second version of our agent negotiation model: all negotiating agents subscribe to the grand coalition, and every agent is totally committed to maximizing the utility of the grand coalition. The simulation measurements were carried out on a computer with the following specifications: AMD Duron (tm) processor, 1.10 GHz, 256 MB of RAM. The MAS that we tested in the simulations was developed in Java, and it ran on Windows XP machines with the Java Runtime Environment (JRE 1.4.2).
5.0 RESULTS
Figure 3 shows the variation of the maximum number of negotiation rounds with the number of agents involved in the negotiation process, and Figure 4 shows the variation of the average of the maximum offer processing time with the number of negotiating agents.
Figure 3: Variation of maximum number of negotiation rounds with the number of negotiating agents (curves shown for the Kraus model / UANEM with ω = 0%, UANEM with ω = 50%, UANEM with ω = 100%, and the second version of our model)
Figure 4: Variation of maximum offer processing time with the number of negotiating agents (curves shown for UANEM with ω = 100% and the second version of our model)
6.0 DISCUSSION OF RESULTS
Figure 3 reveals that the negotiation model proposed in Kraus (2001) is synonymous with the UANEM with the similarity level (ω) set to 0%. Moreover, the figure shows that the performance (in terms of negotiation rounds) of the UANEM with ω set to 100% is comparable to that of the second version of our agent negotiation model. Any other setting of ω results in a negotiation-rounds performance that lies between that of the UANEM with ω = 0% (Kraus's model) and that of the UANEM with ω = 100% (see the performance of the UANEM with ω = 50%). For Kraus's negotiation model (Kraus, 2001), the number of negotiation rounds increases sharply as the number of agents involved in the negotiation increases (see Figure 3). This makes it inappropriate for LSMAS. Kraus's agent negotiation model (Kraus, 2001) does not require the agents to carry out any processing of the offers that they receive. This saves processing time, but it results in random behavior of the agents, leading to poor or no control of the dynamics of the negotiation process. On the other hand, the second version of our agent negotiation model requires agents to process the offers that they receive in order to identify appropriate counter-offers. This controls the dynamics of the negotiation process. However, processing of offers results in an offer processing time that increases sharply as the number of agents involved in the negotiation process increases (see Figure 4). This makes the second version of our agent negotiation model inappropriate for LSMAS. Furthermore, Figure 4 shows that the UANEM (with ω set to 100%) results in an offer processing time that does not vary significantly with the number of agents involved in the negotiation process, implying that the model is applicable to LSMAS.
7.0 CONCLUSION AND FUTURE WORK
This paper presents an agent negotiation model for GCDM in LSMAS. Moreover, the paper describes how the negotiation model for LSMAS was derived from a simple centralized negotiation model. The simulation results presented in this paper show that the negotiation model proposed in Kraus (2001) and the second version of our agent negotiation model represent the two extremes of knowledge-based negotiation models. That is, in the context of LSMAS, Kraus's model is associated with minimum (zero) offer processing time and the maximum number of negotiation rounds, while the second version of our agent negotiation model is associated with the maximum offer processing time and the minimum number of negotiation rounds. Furthermore, the simulations reveal that the UANEM is associated with low offer processing time (close to Kraus's model) and few negotiation rounds (close to the second version of our agent negotiation model), making it suitable for LSMAS. From Figures 3 and 4, it is noticed that the offer processing time and the number of negotiation rounds vary in opposite directions with the variation of the similarity level (ω). Therefore, in future work we would like to establish the optimal similarity levels associated with different agent negotiation situations.
REFERENCES
Barbuceanu, M. and Lo, W. (2000), A Multi-attribute Utility Theoretic Negotiation Architecture for Electronic Commerce, Proceedings of the Fourth International Conference on Autonomous Agents, Barcelona, Spain.
Faratin, P., Sierra, C. and Jennings, N. R., Using similarity criteria to make issue trade-offs in automated negotiations, Artificial Intelligence, Vol. 142.
Jennings, N. R., Faratin, P., Johnson, M. J., O'Brien, P. and Wiegand, M. E. (1996), Using Intelligent Agents to Manage Business Processes, Proceedings of the Practical Application of Intelligent Agents and Multi-Agent Technology Conference, London, The United Kingdom.
Jennings, N. R., Faratin, P., Lomuscio, A. R., Parsons, S., Sierra, C. and Wooldridge, M. (2001), Automated Negotiation: Prospects, Methods, and Challenges, International Journal of Group Decision and Negotiation, Vol. 10, No. 2.
Kraus, S. (1997), Negotiation and Cooperation in Multi-agent Environments, Artificial Intelligence, Vol. 9, Nos. 1-2.
Kraus, S. (2001), Strategic Negotiation in Multiagent Environments, Cambridge: Massachusetts Institute of Technology Press.
Raiffa, H. (1982), The Art and Science of Negotiation, Cambridge: Harvard University Press, USA.
Ray, T. G. and Triantaphyllou, E. (1998), Theory and methodology: Evaluation of Rankings with regard to the Possible Number of Agreements and Conflicts, European Journal of Operational Research.
Wanyama, T. and Far, B. H. (2004), Multi-Agent System for Group-Choice Negotiation and Decision Support, Proceedings of the 3rd Workshop on Agent Oriented Information Systems, New York, USA.
Yokoo, M. and Hirayama, K. (1998), Distributed Constraint Satisfaction Algorithm for Complex Local Problems, Proceedings of the Third International Conference on Multi-Agent Systems, IEEE Computer Society Press.
CHAPTER NINE: TELEMATICS AND TELECOMMUNICATIONS
DIGITAL FILTER DESIGN USING AN ADAPTIVE MODELLING APPROACH
Elijah Mwangi, Department of Electrical & Electronic Engineering, University of Nairobi, P.O. Box 30197, Nairobi 00100, Kenya.
ABSTRACT
The design of an FIR filter using the Wiener approach is presented and compared to an LMS design. The Wiener filter is synthesized by computing the optimum weights from the signal characteristics. For the LMS filter, the optimum weights are obtained iteratively by minimising the MSE of an error signal that is the difference between the filter output and the output of an ideal filter that meets the design specifications exactly. Results from MATLAB computer simulations show that both methods give filters that meet the design specifications in terms of cut-off frequency and linear phase response. The presentation gives an alternative design methodology for FIR filters and is also suitable for illustrating the properties of the LMS algorithm as an approximation to the Wiener approach. Keywords: FIR Filters, LMS algorithm, Wiener filtering, Adaptive filters.
1.0 INTRODUCTION
The aim of this paper is to illustrate adaptive signal processing concepts through the design of an FIR filter. Since the design methodologies for FIR filters are well established and easily understood, the design of such filters using an adaptive process forms a good basis for introducing adaptive signal processing techniques. In the proposed method, the ideal magnitude and phase characteristics of the FIR filter are specified. The design problem can be stated as follows: given the magnitude and phase response of a discrete linear time-invariant system, an FIR filter is to be synthesized by using an adaptive solution that gives a good minimum square error fit to the magnitude specifications. The filter phase response should also exhibit linear phase characteristics. Two synthesis methods are investigated: the Wiener approach and the LMS algorithm approach. The Wiener method computes an optimum set of filter weights that give a response that best fits the design
specifications. The LMS algorithm method provides an iterative solution that converges to a set of filter weights that approximates the Wiener solution. The adaptive process of the LMS algorithm is demonstrated by a reduction in the Mean Square Error (MSE) at each iteration. In this paper an FIR low-pass filter with a specified number of coefficients is designed using both the Wiener and the LMS algorithms. Results obtained by MATLAB simulation show that the LMS algorithm filter gives a similar magnitude and phase response to that obtained with the Wiener approach.
2.0 THE PROBLEM STATEMENT
The design process can be modelled as a system that consists of two parts: An ideal filter that meets the design specifications exactly, and an adaptive filter that gives an approximation to the specifications. The difference between the ideal filter output and the adaptive filter output is an error signal can be used to adjust the filter weights. The process is illustrated in Figure 1. Let the input signal x(n) be a sum of K-sinusoids, where each sinusoid is sampled at a frequency, f In addition, each sinusoid is of unit amplitude. K
x(n) - Z
sin 2~r(f k / fs )n
(1)
k=l
The output of the ideal filter is also a sum of sinusoids that exhibits a phase difference from the input. For the ideal filter, some of the output sinusoids will be attenuated and others will pass through the filter as per the design specifications. Thus, K
d(n) - Z Ak sin 2~[(fk / f~ )n + Ok ]
(2)
k=l
where A~ is the magnitude specification at frequencyJ~ and with a phase shift 0h.
595
International Conference on Advances in Engineering and T e c h n o l o g y
x(n)
d(n) ~[ Ideal Filter r-i
+
e(n]
.~[ ad *"1
I
ter v(n~
Fig 1. The adaptive process model. The adaptive filter is an FIR filter with a transfer function:
H(z) - w(O)+ w(1)z -1 + w(2)z -2 + .... + w ( M - 1 ) z -(M-l)
(3)
Where the filter coefficients, or weights, w(i), i=0,1,2 .... (M-I); are adjustable. The output
y(n) of the adaptive filter is: M-1
(4)
y(n) - Z w(n)x(n- m) m=0
3.0 THE W I E N E R SOLUTION: The error signal e(n) is the difference between the desired output d(n) and the adaptive filter output y(n), i.e. M-1
e(n) - d ( n ) - y(n) - d ( n ) - Z w(n)x(n- m)
(5)
m=0
As per the Wiener filter theory (Widrow & Steams, 1985), the optimum set of filter coefficients Woptare given by:
Wopt - R - 1 P
596
(6)
Mwangi
Where, the autocorrelation matrix R is a Toeplitz matrix with elements as given in equation
(7). i K
rxx (m) - --2 Zk=l cos 2er(f k / f,. )m
(7)
P is the cross-correlation vector of the input signal samples to the desired signal samples and is computed as shown in equation (8). K
k =1
K
K
coso ,Z Ak c~
/ L ) + Ok),.., Z
Ak cos(2(M- 1)~r(fk / f~ ) + 0 h ]r (8)
k=l
k =1
4.0 THE L M S S O L U T I O N
In the LMS algorithm, the computation of the optimum weight vector is done iteratively by minimizing the MSE. Thus the LMS is a steepest descent algorithm where the weight vector is updated for every input sample as follows (Ifeachor & Jervis, 1993): Wi+, - WX - / N #
(9)
Where W/+: is the updated weight vector, /4/.1is the current weight vector, ~ is a gradient vector. The parameter ,a controls the convergence rate of the algorithm and also regulates adaptation stability. The value of/1 is restricted in the range 0 to [1/tr(R)], where tr(R) is the trace of the autocorrelation matrix (Widrow & Steams, 1985). If P is the cross-correlation of the input and desired samples and R is the autocorrelation of the input samples, then the gradient vector at the jth sampling instant is: V j - -2Pi + 2Rj WX - - 2 X j d ( n ) + 2Xj X ~ Wj - - 2 X [ d ( n ) - X ~ Wj ]
(lo)
The signal Xjr~ 9is the filtered output of an FIR filter with a weight vector W and input signal vector X. Therefore the error signal is: gj (n) - d(n) - X.lr. Wi.
(11)
Substituting equation (11) into equation (10) gives: V j - -2~j X
(12)
597
International Conference on Advances in Engineering and Technology
Thus, the weights update in equation (9) becomes: m j +1 = m j -k- 2 /u~X j
(13)
It can be noted from the above derivation that the LMS algorithm gives an estimate of Wj-+I without the need of direct computation of signal statistics. The need for matrix inversion which can be computationally expensive is also avoided. The computation procedure for the LMS algorithm is as summarized below.
Step (i): Initially, the filter weights are set to an arbitrary fixed value, say: w(m)=O. O; for i=O, 1, (M-I). Step (ii)- The adaptive filter output is computed. M-1
y(n) - Z w(n)x(n - m)
(14)
m=0
Step (iii)- The error estimate c(n) is then obtained as the difference between the desired output and the adaptive filter output. a(n) = d(n) - y(n) (15) Step (iv)" The adaptive filter weights are then updated so that there is symmetry about the centre weight. This ensures that the filter will exhibit linear phase characteristics. Wj+1(i) = Wj (i) + 2/,tCjXj (k - m)
(16)
Step (v): For each subsequent sampling instant steps (ii) to (iv) are repeated. The process is stopped if either the change in the weight vector is insignificant as per some preset criterion or for given number of iterations. The comparison of the LMS algorithm to the Wiener filtering is best illustrated by the computation of the MSE at each iteration stage. This is given by: (Widrow & Steams, 1985). -- ~min + ( W - Wopt ) T R ( W - Wopt)
where ~:mi,is the Wiener MSE.
598
(17)
Mwangi
5.0 A DESIGN EXAMPLE A digital FIR low pass filter with the following specifications is to be designed. Passband: dc to 3.4kHz; Phase: Linear, Sampling frequency: 8kHz 1.1 Wiener Approach A pseudo filter with an ideal magnitude response is used. The passband has a magnitude of unity while the magnitude is zero in the stopband. An ideal brick wall transition from the passband to the stopband is used. The phase is made to vary linearly with frequency in the passband. For a filter of length N, a good approximation of the phase response is given by: (Rabiner & Gold, 1975). o(co) = - a c o
(18)
where, c~=(N-1)/2. The magnitude and the phase response of the simulated filter are illustrated in Figure 2. 1.2 The LMS Approach The same ideal filter that is used in the Wiener approach simulation is also employed in the filter simulation using the LMS algorithm. The adaptive filter length is also kept at N=I 7. The magnitude and phase response are shown in Figure 3. These characteristics are obtained after 400 iterations and with a value of 11=0.001. In order to monitor the progress of the LMS algorithm the MSE was computed at each iteration stage. The results are illustrated in Figure 4. 2.0 DISCUSSION From the results displayed in Figure 2 and Figure 3, it can be noted that the Wiener filter and the LMS filter have identical characteristics that closely match the design specifications. A summary of the filter parameters is given in Table 1. The figure given for the attenuation is the maximum side lobe attenuation in the stop-band.
599
International Conference on Advances in Engineering and Technology
10 . . . . . . . .
:. . . . . .
..........
]
=
-
7
......
J 0 L 133 "u9
.......................................
-10
--
: .
.
4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
~
---~ ~
.
-,
---
(1) g
133
-20
.
.
.
-30
. . . . .
.
.
.
.
.
.
.
.
.
~
.
.
.
.
.
.
J
.
9
.
:
_ -40
.......
-50
.
,
. . . 500
0
.
.
.
. . . 1000
.
.
.
. . . 1500
.
.
.
. . 2000
.
Frequency
o ~:::::::
i
. . . . .
.................
.
.
. . . 2500
3000
3500
4O0O
(Hz)
....................
....................................... -5oo
i
.
. . . . . . . . . . .
::: :-
--
g, "o
=
i
.... .
t'n
-1000
.
.
.
.
.
.
.
-1500 0
500
1500
1000
2000
2500
Frequency
3000
3500
4000
(Hz)
Fig 2. The Magnitude and Phase response of the Wiener filter.
10
0
. . . . . . . . . . . . . . . . . . . . . . . .
.
m -10
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
-.--
. . . . . . . . . . .
--
.
i
'.
.+7._= -20
-
.
-3o
.
-40
.....................
0
.
-
.
i
.
:
.
.
.
.
: 500
.
.
.
.
• 1000
.
.
.
.
.
.
.
.
.
: : i
.
-
%-. -500~
.
.
.
.
.
i
1500
2000
2500
.
.
.
.
3000
3500
4000
(Hz)
.
.
.
.
.
!
.
i '
v
.
-
Frequency
0. . . . :
.
:":i
9 :
.
.
.
~
.
.
.
.........
....
.
.
.
.
.
.
9
-lOOO
-~nn
..... 0
~ 500
~ 1000
: 1500
: 2000
.
2500
.
.
. 3000
.
.
. 3500
4000
Frequency(Hz)
Fig 3. The Magnitude and Phase response of the LMS filter.
600
Mwangi
0
50
1 O0
150
200
250
300
350
400
Number of iterations
Fig 4. Learning curve of the LMS adaptation process with g=0.001.
Filter Ideal Wiener LMS
Table 1. Filter parameters Cut-off frequency Attenuation 3.4kHz Not specified 3.3kHz -22dB 3.3kHz - 18dB
Phase Linear Linear Linear
It can be noted that the Wiener filter does not offer any significant improvement over the LMS filter in terms of the cut off frequency accuracy. However, the Wiener filter exhibits deeper stop-band attenuation. A further observation is that both filters satisfy the linear phase requirement. Results on the level of MSE obtained at each iteration stage as illustrated in Figure 4 serve to indicate that the LMS algorithm is simply an approximation of the Wiener process. After the 50 th iteration, the filter coefficients quickly converge to near Wiener coefficients. Table 2 gives the coefficients of the Wiener filter and those of the LMS filter at convergence. It can be noted that all the 17 coefficients are very close. The above results can be improved by increasing the number of sampling points on the magnitude characteristics of the ideal filter to more points than 30 that have been used in the
601
International Conference on Advances in Engineering and Technology
simulation. Further improvement in stop-band attenuation and in sharper pass-band transition may also be obtained by increasing the filter length. Table 2. Coefficients of the Wiener and the LMS filter. COEFFICIENT w(1)=w(17) w(2)=w(16) w(3)=w(15) w(4)=w(14) w(5)=w(13) w(6)=w(12) w(7)=w(11) w(8)=w(1 O)
w(9)
WIENER 0.0292 -0.0143 -0.0107 0.0399 -0.0736 0.1041 - 0.1311 0.1473
0.8457
LMS 0.0263 -0.0108 -0.0119 0.0433 -0.0749 0.1082 -0.1335 0.1532
0.8427
3.0 CONCLUSION In this paper, we have presented both the Wiener method and the LMS algorithm method for the design of an FIR filter from ideal filter specifications. The computer results show that both methods give filters with magnitude and phase characteristics that meet the design criteria. The application of adaptive signal processing algorithms in FIR filter design is hence illustrated. REFERENCES
Fisher, M., Mandic, D., Bangham, J., and Harvey, R. (2000). Visualising error surfaces for adaptive filters and other purposes. IEEE International Conference on Acoustics, Speech & Signal Processing. p3522-3525. Ifeachor, E. C., and Jervis, B. W. (1993). Digital Signal Processing, A practical approach. Addison-Wesley Longmans Ltd. Essex, UK.. Rabiner, L. R., and Gold, B. (1975). Theory and Applications of Digital Signal Processing. Prentice Hall international. New Jersey, USA. Widrow, B. and Steams, S.D. (1985). Adaptive Signal Processing. Prentice Hall International. New Jersey, USA.
602
Anand
A U G M E N T E D REALITY ENHANCES THE 4-WAY VIDEO C O N F E R E N C I N G IN CELL PHONES P.M.Rubesh Anand, Department of Electronics and Telecommunication Engineering, Kigali Institute of Science and Technology, Rwanda.
ABSTRACT Third generation (3G) mobile networks are currently being deployed and user demand for multimedia applications are ever increasing. Four-way video conferencing is one of such an application which could be possible through the cell phones too. This paper deals with the difficulties faced by the video quality in the cell phones during the video conferencing between more than two persons and analysed the possible ways to overcome those difficulties. End user's satisfaction determines the Quality of Service (QoS) and obviously the satisfaction depends on the reality of the image the user is watching. Due to the small screen size and lower bandwidth, the quality of the image in cell phones ever be made perfect from the transmitter side and the quality of image has to be improved at the receiver side only. Augmented Reality (AR) is one of such promising approaches which guarantees for the high quality of video. This paper proposes an idea of using AR in cell phones for enhancing the video quality during video conferencing.
Keywords: Four-way video conferencing; cell phone; Augmented Reality; 3G; QoS.
1.0 I N T R O D U C T I O N A live voice can be somewhat reassuring, but there is nothing like a live picture to bring a sense of relief or satisfaction. In 1964, AT&T demonstrated the picture phone at the New York World's Fair which was not a successful one. There are several reasons which are frequently cited for its failure but all the studies highlights that the failure is because of the non-realistic nature of video that was transmitted. Though today's mobile phones have the capability of transmitting high quality videos, the question of realistic nature of those videos still arises. It is obvious that the new technology, new media will have new problems. Quality of service support in future mobile multimedia systems is one of the most significant challenging issues faced by the researchers in the field of telecommunications. An issue on an end-to-end basis, necessarily with appropriate harmonization consideration among heterogeneous networks of wired and wireless is still under research. It has been estimated that by the end of the year 2006 approximately 60% of all cell-phones will be equipped with digital cameras. Consequently, using Augmented Reality in cell-phones has a lot of end user applications. Compared to high-end Personal Digital Assistants (PDAs) and Head-Mounted Displays (HMDs) together with personal computers, the implementation of Augmented Reality (AR) on the 3G cell phone is a challenging task: Ultra-low video stream resolutions,
603
International Conference on Advances in Engineering and Technology
little graphics and memory capabilities, as well as slow processors set technological limitations. The main motivation for this research is due to the demand for better availability of services and applications, rapid increase in the number of wireless subscribers who want to make use of the same handheld terminal while roaming and support for bandwidth intensive applications such as videoconferencing. The basic idea of AR is to enhance our perception of the real world with computer generated information. It is obvious that AR technology has a high potential to generate enormous benefit for a large amount of possible applications. AR superimposes computer-generated graphics onto the user's view of the real world. AR allows virtual and real objects to coexist within the same space. Most AR application scenarios proposed up to now belong to engineering. In particular maintenance applications are very popular. In contrast to that trend, the paper decided to find an AR application that attracts the mass market. The target user should be not a specialized person like a maintenance engineer, but a general user. Successful mass applications in most cases result in increasing great demand for devices and appropriate application services. 2.0 CURRENT ISSUES IN BANDWIDTH High bit rates are required to provide end users with the necessary service quality for multimedia communications. Separation of call and connection/bearer control such as speech, video and data could be associated with one single call and these could be handed over separately. 3G-324M is the 3GPP standard for 3G mobile phone conferencing. 3G networks are mostly based on Wideband Code Division Multiple Access (W-CDMA) technology to transfer data over its networks which is based on CDMA2000 technology. W-CDMA sends data in a digital format over a range of frequencies, which makes the data move faster, but also uses more bandwidth than digital voice services. UMTS also has its own wireless standard which works with GSM technology which also offers the high data rate up to 2 Mbps. In the case of multimedia applications, such as videoconferencing, it is also necessary to maintain synchronization of the different media streams. Failure to provide low enough transfer delay will result in unacceptable lack of quality. For videoconferencing, the targets are similar, except the play-out delay has to be much less so that end-to-end delay does not exceed 400 ms. The degree of jitter that must be compensated is up to 200 ms. Throughput must range from 32 Kbps upwards, including the specific rates of 384 and 128 Kbps for packet and circuit switching, respectively. Video compression techniques should be used to reduce the bandwidth required. H.264/MPEG-4 part 10 video coding standard was recently developed by the JVT (Joint Video Team). The basic technique of motion prediction works by sending a full frame followed by a sequence of frames that only contain the parts of the image that have changed. Full frames are also known as 'key frames' or 'I-frames' and the predicted frames are known as 'P-frames'. Since a lost or dropped frame can cause a sequence of frames sent after it to be illegible, new 'I-frames' are sent after a predetermined number of 'P-frames'. This compression standard saves the bandwidth used for video conferencing.
604
Anand
3.0 A U G M E N T E D R E A L I T Y Augmented Reality (AR) is a growing area in virtual reality research. As this field is still young, no standard methodology and product seems to be recognized yet. As a consequence, the development of applications is reduced because the learning process can only be done through sparse conferences and literature reading. AR is a very interesting field because it requires multidisciplinary expertise to be achieved correctly. An augmented reality is a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments the scene with additional information. The application domains reveal that the augmentation can take on a number of different forms. The ultimate goal of this paper is to create a system such that the user cannot tell the difference between the real world and the virtual augmentation of it. To the user of this ultimate system it would appear that he is looking at a single real scene. Most AR research focuses on "see-through" devices, usually worn on the head that overlay graphics and text on the user's view. Virtual information can also be in other sensory forms, such as sound or touch, but this paper concentrates only on visual enhancements. AR systems in HMDs track the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world. Through this process, known as registration, graphics software can place a threedimensional image over it. AR systems employ some of the same hardware technologies used in virtual-reality research, but there's a crucial difference: whereas virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it. 4.0 PREVIOUS A P P R O A C H E S WITH A U G M E N T E D R E A L I T Y SYSTEM The fields of computer vision, computer graphics and user interfaces are actively contributing to advances in augmented reality systems. The previous approaches used the AR system consisting of the AR client and the AR server. In the first step, the built-in camera of the AR client is pointed to a certain object in the real world. The images from the camera (a video stream or single image) are sent to the remote AR server via wireless data transmission. In addition to the image data, interaction data are also sent from the AR client to the AR server. Interaction data control the AR application. Then the AR server analyses the image data received from the AR client. The real world object is recognized. For successful recognition, certain information on this object must be stored in a database on the AR server. After successful recognition, additional graphical information is generated by the AR server and mixed with the real world images. The rendered, i.e. computer generated information, are some kind of overlay to the real world images. There are two kinds of computer generated information: 3D data and 2D data. Handling of 3D data is especially challenging since it needs to be rendered spatially correct, i.e. in correct position and orientation. All additional information has to be pre-defined by an AR authoring system and stored in the database of the AR server. The computer enhanced image are encoded and sent back to the AR client via wireless data transmission and the client decodes and displays the image in its display.
605
International Conference on Advances in Engineering and Technology
5.0 D R A W B A C K S IN T H E E X I S T I N G A R SYSTEMS The AR systems are used only to display text and information about the object over the real scene which looks like an information system. The video enhancement is not proved in the existing AR systems. There is requirement to run real-time video through the existing AR system, which in this case means that the system must be capable to display real-time frame rates (i.e. at least 20-25 frames per second) on the AR client. Real-time update rates are necessary because position and orientation of an object will constantly vary on the real world image, if the user moves around it. Since it is almost impossible to avoid delays in the system, lower updates rates (1-2 frames per second) have to be accepted at this time. In contrast to the majority of AR applications available today, augmentation is not executed on the mobile device because the object recognition and augmentation require a considerable amount of computing power that is not available on mobile devices like cell phones. Furthermore, the AR server consists of a database to store the information for object recognition and augmentation whereas the cell phones do not have memory capacity as in server. So the clients like cell phones can only display the video data and serves as interaction device for the user.
6.0 PROPOSED AUGMENTED REALITY SYSTEM FOR CELL PHONES Recent 3G mobiles have the capabilities of real-time video image processing, computer graphic systems and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3D environment surrounding the user. By using all the advantages of the recent 3G cell phones, the paper proposes an AR system which can enhance the video quality in cell phones during four-way video conferencing. During the four-way video conferencing, the cell phone screen is split into four windows and the video is displayed independently. Fig.1 shows the continuous presence screen layout in the cell phone display. The diagram shows the full-screen mode, 2-way and 4-way video conferencing modes. During four way video conferencing, the screen can also be used to display one large video sequence and three small video sequences along with the control information in a small window.
Full Screen
2-way
4-way
1 large + 4 Small
Fig. 1: Continuous Presence Screen Layout in Cell phone Display
606
Anand
The proposed block diagram of the AR system (fig.2) differs from the traditional AR system which has client and server. In this system the functions of the server like object recognition and augmentation is done at the client side itself. The AR system processes the three videos independently and simultaneously (as the fourth one is from the user camera which needs not to be processed). This AR system and the display system get the image at the same time and the AR system identifies them separately while the display system projects them. The identified three individual images are aligned with their corresponding graphics image generators. Then each image is checked for their intensity and chromacity levels by their respective graphics image generators. Image recognition is separated in two sub modules: Low-Level-Vision (LLV) and High-Level-Vision (HLV). LLV uses image recognition algorithms to detect certain distinct characteristics of the image. For this purpose, dominant edges of the image are used in the current application. Based on the results of LLV, HLV then recognizes information about intensity and colour levels by comparison with recognition data stored in the database of the AR system (Generally, the human faces are stored in database as the paper deals only with the face to face video conferencing). According to the recognized edges of the face, the intensity or/and colour of the image is enhanced for better viewing quality. The amount of intensity and colour to be added with the original image are generated by the graphics image generators with the help of the database (as reference) and displayed immediately over the cell phone display system along with the original image which is already at its background. AR works in the following manner: All points in the system are represented in an affine reference frame. This reference frame is defined by 4 non-coplanar points, po ... P3. The origin is taken as po and the affine basis points are defined by D, P2 and P3. Having defined the affine frame, the next step is to place the virtual objects into the image. The camera, real scene and virtual scene are defined in the same affine reference frame. Thus the original image along with the corrected graphics image projecting over it forms the augmented image. Now the user can enjoy a high quality of video in all the screens. In this system, all the blocks shown in the fig.2 are embedded inside the cell phone itself. The cell phone now acts both as a server and a client. The AR system requires only the video from the transmitting section rather than cell phone camera's intrinsic (focal length and lens distortion) and extrinsic (position and pose) parameters. Hence the integration between different types of cell phones will be easy and the future coming models can also be joined with the proposed AR system without any modification in the hardware and software.
607
International Conference on Advances in Engineering and Technology
Real Scene
Image Coordinates
3G-324~ Mobile
Align the graphics Image generator-1 to real image-1
Generate the augmented image for real image-1
Align the graphics Image generator2 to real image-2
Generate the augmented image for real image-2
Align the graphics Image generator3 to real image-3
Generate the augmented image for real image-3
/
~_~
IlL~
[
Video
~
Image
[~L~ \~L Graphics Image Coordinates
Graphics I J
k \
LI ~l
/
Image /
J."
"" ", ..T
/ Augmented Video
,." / '
Combine the entire augmented image and align with the global affine plane and project on the cell phone screen
Fig. 2: Block Diagram of the Proposed Augmented Reality System for Cell phone
7.0 P E R F O R M A N C E ISSUES IN THE P R O P O S E D A R S Y S T E M
Augmented reality systems are expected to run in real-time such a way that the user will be able to see a properly rendered augmented image all the time. This places two performance criteria on the system. They are: 9 Update rate for generating the augmenting image, 9 Accuracy of the registration of the real and virtual image. Visually the real-time constraint is manifested in the user viewing an augmented image in which the virtual parts are rendered without any visible jumps. To appear without any jumps, a standard rule is that the graphics system must be able to render the virtual scene at
608
Anand
least 10 times per second. This is well within the capabilities of current graphics systems in computers but it is a question in cell phones for simple to moderate graphics scenes. For the virtual objects to appear realistically, more photorealistic graphics rendering is required. The current graphics technology does not support fully lit, shaded and ray-traced images of complex scenes. Fortunately, there are many applications for augmented reality in which the virtual part is either not very complex or will not require a high level of photorealism. The second performance criterion has two possible causes. One is a misregistration of the real and virtual scene because of noise in the system. As mentioned previously, our human visual system is very sensitive to visual errors which in this case would be the perception that the virtual object is not stationary in the real scene or is incorrectly positioned. Misregistrations of even a pixel can be detected under the right conditions. The second cause of misregistration is time delays in the system. As mentioned previously, a minimum cycle time of 0.1 seconds is needed for acceptable real-time performance. If there are delays in calculating the camera position or the correct alignment of the graphics camera then the augmented objects will tend to lag behind motions in the real scene. The system design should minimize the delays to keep overall system delay within the requirements for real-time performance. The combination of real and virtual images into a single image presents new technical challenges for designers of augmented reality systems. The AR system relies on tracking features in the scene and using those features to create an affine coordinate system in which the virtual objects are represented. Due to the nature of the merging of the virtual scene with the live video scene, a virtual object drawn at a particular pixel location will always occlude the live video at that pixel location. By defining real objects in the affine coordinate system real objects that are closer to the viewer in 3D space can correctly occlude a virtual object. The computer generated virtual objects must be accurately registered with the real world in all dimensions. Errors in this registration will prevent the user from seeing the fused real and virtual images. Discrepancies or changes in the apparent registration will range from distracting which makes working with the augmented view more difficult, to physically disturbing for the user making the system completely unusable. The phenomenon of visual capture gives the vision system a stronger influence in our perception. This will allow a user to accept or adjust to a visual stimulus overriding the discrepancies with input from sensory systems. In contrast, errors of misregistration in an augmented reality system are between two visual stimuli which we are trying to fuse to see as one scene. In the cell phone video conferencing, faces are the usual image under consideration. So the problem of merging a virtual scene with a real scene is reduced to: 9 Tracking a set of points defining the affine basis that may be undergoing a rigid transformation, 9 Computing the affine representation of any virtual scene, 9 Calculating the projection of the virtual objects for a new scene view as linear combinations of the projections of the affine basis points in that new view.
609
International Conference on Advances in Engineering and Technology
Achieving a consistent lighting situation between real and virtual environments is important for convincing augmented reality applications. A rich pallet of algorithms and techniques has to be developed that match illumination for video-based augmented reality to provide an acceptable level of realism and interactivity. Methods have to be developed which create a consistent illumination between real and virtual components. Diffuse real images and to illuminate them under new synthetic lighting conditions is very difficult to achieve unless efficient image processing software is available. Latency is as much of a problem in augmented reality systems. Other than just getting faster equipment, predictive methods which help to mitigate the latency effects should be developed. Models of the human operator and the position measurements should be considered in the algorithms of predicting forward in time to get a perfect AR video. 8.0 C O N C L U S I O N S A N D F U T U R E R E S E A R C H D I R E C T I O N S
Augmented reality will truly change the way we view the world. The majority of AR achievements have found few real-world applications. As for many other technological domains, AR needs to provide sufficient robustness, functionality and flexibility to find acceptance and to support its seamless integration into our well-established living environments. In this paper, a look at this future technology, its components and how it will be used were discussed. Many existing mobile devices (3G mobile phones) fulfill the basic requirements for augmenting real world pictures with computer generated content. But the bandwidth and processing facilities are the strong obstacles prohibiting their success. Though the bandwidth requirements may be solved by 3G standards, and up coming 4G standards, the processing efficiency solely depend on the hardware of the cell phones which has to be improved. The paper concludes with a proposed AR system for cell phones and its success mainly rely on the processing capability of the cell phone. The paper does not deal with the cell phone video conferencing and augmented reality at mobility. In UMTS, the maximum speed envisaged for the high mobility category is 500 km/h using terrestrial services to cover all high-speed train services and 1000 km/h using satellite links for aircraft. The data-rate is restricted during mobility to 144 Kbps instead of the guaranteed rate of 2 Mbps. The mobility problem can be used for the future research by improving its technology such that the virtual elements in the scene become less distinguishable from the real ones. REFERENCES
Bimber, O.,Grundh6fer, A., Wetzstein, G., and Kn6del, S. (2003), Consistent Illumination within Optical See-Through Augmented Environments, In proceedings of International Symposium on Mixed and Augmented Reality, The National Center of Sciences ,Tokyo. Christian Geiger, Bernd Kleinnjohann, Christian Reimann and Dirk Stichling. (2001), Mobile AR4ALL, In Proceedings of the IEEE and ACM International Symposium on Augmented Reality, Columbia University, New York. International Telecommunication Union, Recommendation ITU-T (2003), H.264: Advanced Video Coding for Generic Audiovisual Services, ITU-T.
610
Anand
Vasconcellos, S.V. and Rezende, J.F. (2002), QoS and mobility in 3G Core Networks, Proceedings of Workshop on QoS and Mobility, Brazil. Nilufar Baghaei, Ray Hunt. (2004), Review of quality of service performance in wireless LANs and 3G multimedia application services, In the Journal of Computer Communications, Elsevier, Netherlands. Wang, Y., Ostermann, J. and Zhang, Y. (2001), Video Processing and Communications, Prentice-Hall, Englewood Cliffs, New Jersey.
611
International Conference on Advances in Engineering and Technology
D E S I G N OF S U R F A C E W A V E FILTERS R E S O N A T O R W I T H C M O S L O W NOISE A M P L I F I E R Et. Ntagwirumugara, Department of Electronics and Telecommunication, Kigali Institute of Science and Technology, Rwanda T.Gryba, IEMN, UMR CNRS 8520, Department OAE, Valenciennes University,BP311,
59313- Valenciennes Cedex, France J. E. Lefebvre, IEMN, UMR CNRS 8520, Department OAE, Valenciennes University,BP311,
59313- Valenciennes Cedex, France
ABSTRACT In this communication we present the analysis of a ladder-type filter with a CMOS low noise amplifier (LNA) in the frequency band of 925-960 MHz. The filter will be developed on a structure with three layers of a ZnO film and aluminium (A1) electrodes on a silicon (Si) substrate with Ti/Au for metallization. This filter is composed of six resonators on the same port. After, we added a 943-MHz fully integrated CMOS low noise amplifier (LNA), intended for use in a global system mobile (GSM) receiver, have been implemented in a standard 0.35~tm CMOS process. Design procedure and simulation results are presented in this paper. The amplifier provides a forward gain of 10.6dB with a noise figure of only 1dB while drawing 8.4mA from a 2.5 V power supply.
Keywords: SAW filter; Resonator; Coupling-of-modes (COM); IDT; ZnO/Si/A1, CMOS; Low-noise amplifier; Low power; Low voltage; Noise figure.
1.0 INTRODUCTION Expansion of small size and dual-band mobile phones require strongly the development of compact devices. Because of their small size, low height and light weight, SAW filter and LNA are used as a key component in GSM and GPS communication equipment. Recently, a new and exciting RF circuit capability came to light as radio front-end integrated circuits have been developed in silicon or GaAs technologies. As the first bloc of radio frequency receiver following the antenna, filter and low noise amplifier play a significant role. RF design for applications below 2 GHz moved from the printed circuit board and discrete components to large-scale integration. For these reasons, co-integration of SAW filters with RF systems in the same substrate (Si or GaAs) appears to be a key solution. On the present work, the SAW filters on the ZnO/Si/A1 structure are study by employing the design approaches based on multiple resonator techniques (Thorvaldsson, 1989; Wright, 1986).
612
Ntagwirumugara, Gryba & Lefebvre
2.0 S A W F I L T E R D E S I G N The COM-
Modelling
The transducer generates forward and backward propagating surface waves with amplitudes R(x) and S(x) that are coupled together (Fig. 1).
f-....__s ........ ................f"
~1 = k]./p
":~I~
P
Fig .1" Geometry of a SAW transducer The general COM equations describing both the multiple reflections and the SAW excitation of an IDT are given by (Chen & Herman, (1985), Suzuki et al, (1975)).
dR(x) dx
dS(x)
= -jkllR(x
) - jk,2e 2JO:'S ( x ) + jc~eJ~
- jk,2e-ZJO:"R(x)+ j k l , S ( x ) - jc~ e - J ~ V
(la)
(lb)
dx
dZ(x)
9 9 _/&
= --2JC~ e
R(x) - 2j~ze/~S(x) + jcoQV
(lc)
dx where,
R ( x ) , S ( x ) = The slowly varying amplitudes of the forward and backward waves respectively, k~j, k~2 - coupling coefficients, V and I are applied voltage and current drawn by the IDT such that, 6 = (co - co0)/v f = k l - k0 (wave mismatch), k t - co / v l (the free wave vector), k 0 - 27r / A, (A~ is show in fig 1), (z) , C S and a' are respectively the radian frequency, the static capacitance per unit length and with of the transducer and the transduction coefficient. T h e C O M P a r a m e t e r s kll and kl2 The parameters kll and kl2 are closely related to the more commonly used parameters of average velocity shift At_)/ol. and acoustic mismatch A Z / A Z 0 respectively Chen & Herman, (1985). These relationships were derived (Suzuki et al, 1976; Thorvaldsson, 1986).
613
International Conference on Advances in Engineering and Technology
(2a)
kl l / k f - - ( A u / o f )
k,2/kf k12/kf
-(1/~)(AZ / Z o )
(2b)
- -(1/~)(AZ/Zo)
(2c) (2d)
where k z is the electromechanical coupling coefficient, Hm is the metal film thickness and ,,~ is the acoustic wavelength. The first terms on the fight side of both (6) and (7) represent the piezoelectric loading effect, and the two fight most terms represent the mechanical loading effect. The Electrical And Mechanical Perturbation Terms
The electrical perturbation terms Dk and Rk can be written as
] D k - - - ~11 1+ P~s9(- c~ P_0 (- cos( rrl))
[
(3a)
1
(3b)
where, Pv(x) is the Legendre function of order v and r] is the metallization ratio (see B.11 and B. 14 in Chen & Herman (1985)). The mechanical perturbation terms are given by (Chen & Herman, (1985), Wright, (1986))
Om
Rm
= - - ~
z
-
C,
1- p o f
(4a)
(4b)
c,
where, /9 is mass density, O1(1 and t51(2 characterize the overlay material. Ui and (p represent magnitude and phase of the free surface mechanical displacements and electrical potential (Chen & Herman, 1985).
614
Ntagwirumugara, Gryba & Lefebvre
Simulation Results The former model is applied to calculate and optimize the performances of the ladder-type filter. The results of simulation allow the choice of the optimal geometry, position of IDT fingers either on the surface or between ZnO and metallization. The optimization of the filter will be carried using Math-lab program while acting on several parameters: The aperture, the number of the fingers of the transducer and reflectors, the thickness of IDT electrodes and the acoustic wavelength. The structure of the ladder-type SAW filter realized on the silicon substrate and using the frequency band 925-960 MHz with RI=R3=R5 and R2=R4=R6 is as follows: Ri R; Rf
........
R2 ~
Vo ~ "f"l , , T
R~ m
R~ l
rs
7-
Fig.2" The structure of the ladder-type SAW filter
Reso
1
O
1111111111 Resort
a:t~u:r 2
Fig.3" Fundamental structure of a ladder type SAW filter
.....
!
Fig.4: SAW fabrication
A summary of parameters for ZnO/Si/A1 obtained with the theory described above is given in Tables 1 and 2.
Of [m/s] 5083.83 Rm 2.4718
Table 1" SAW parameters k2 Cs[e-10F/m] Dm 0.84 -0.3706 0.0575 Dk Rk -7285 -0.7178
615
International Conference on Advances in Engineering and Technology
Table 2 9Design parameters
~
Acoustic wavelength ~ , / ] ' 2 [gm]
= 5.32 =5.48 Nt=51 Nr=80 W~=80 ,;~ W2-- 160 A~. 700 0.5
Number of fingers in each IDT Number of fingers in each reflector Aperture Metal film thickness [gm] Metallization ration
Tran~duc~r!
Re~<>n~t~r i
i!i
.................... oo~176176
Fig. 5" Conductance and Susceptance
Fig. 6: Conductance and Susceptance of Transducer 1
,f
Fig. 7: Conductance and Susceptance
Coaduc,tan, 9
Fig. 8: Conductance and Susceptance of Transducer 2
Fig. 9" Insertion Loss
616
~.
Ntagwirumugara, Gryba & Lefebvre
3.0 LNA CMOS DESIGN For our study of low noise amplifier in CMOS technology, we have used a tuned-cascode LNA topology shown below (Nguyen et a12004" Mitrea & Glesner, 2004; Shaeffer & Lee, 1997):
9-c>
--
m
I
Fig. 10: Schematic of a cascade LNA topology
3.1 Equations Used for Simulation For the cascade LNA with inductive source degeneration shown in figure 7, its input impedance is (Shaeffer & Lee, 1997)" Zi, , = j w ( L
1 + Ls)+
jw(C,,
gm = /leffCoxWgsat tg(2 + tg)
2(1 +
t9) 2 -
g ~, - - +LC d + C ~ ) + R g + R ~ + Cg,
s
2p 0 I I + p / 2 1 __ /a, VadLg ..... /9(1 + p) /a41 1 + Vod /L,5",,,,,
p
m
h
<~ L
2 WLC 3
2 OX
(6) (7)
g s aJt
G -G-v~ *'
(5)
(8) P0
3 V~/Jv ..... ~ .....
(1+2/9 ] k
p
)
(9) (lO)
R
- Ro
W 2 3q L
(11)
where,
617
International Conference on Advances in Engineering and Technology
Cgs 9Gate-to-source capacitance of M1, gm " Transconductance of M1, Rl" Parasitic resistance of L g and L s R 0 9Sheet resistance of the gate poly silicon r/" Number of gate fingers of M1 ~ Effective gate resistance of the NMOS transistor M~ and L 9The total gate with and length of device r 's and Cox represent the field-limited electron mobility, the saturation electric field and the gate oxide capacitance per unit area. P0 9Power dissipation (P0 - IdVsupply ). The drain current I d is (Nguyen et a12004; Mitrea & Glesner, 2004; Shaeffer & Lee, 1997) I d -I_)satCoxm
V~ Vod + Ls
,
(12)
t
At the resonance frequency, we get L
_._
(13)
2 ~ / ( L g + L s )(Cg s + C d) The input matching at resonance frequency condition is given by (Shaeffer & Lee, 1997)
Rs
Ls ~ R g + Rz + coTL s Rg + R t + g m 1 Cg s + Cd
(14)
The cutoff frequency when C a is added to Cg S
(_DT =
gml Cg~ + Cd
3V,a, p ( 2 + p ) 2L(l+ p)2
(15)
The optimum transistor size is given by (Nguyen et al, 2004; Shaeffer & Lee, 1997) Wopt - [(2o)oZCoxRsQin,opt ) / 3~ 1
618
(16)
Ntagwirumugara, Gryba & Lefebvre
-Icl
1+
(17)
l + ?--T-, l +
Then the noise figure of the LNA can be shown as (Nguyen et al, 2004)
(18) F =1+
g
2
Rs --
---~
gs
S
g
5Z:cld.
i ~o m ,........, a.~ "
~
~rZ,
:L
=-
_
Fig. 11" Complete schematic of CMOS LNA Table 3" LNA design parameters Ls Lg W2 W3 [/am] [/am] [/am] [nil] [nil] 0.5 32 760 150 150 Cm[pF] Rd[ ~ ] CI[nF] C2[nF] Cd[pF] 1000 10 10 0.860 3 RL[ f~ ] RI[K ] R2[K ] U S [ ~ ] L d [nil] 50 1 1 50 7 Wl
Table 4: Performance summary of LNA Parameters
Operating frequency[MHz] Power gain [db] Noise figure[dB] Technology[/am] Supply voltage[V] Power dissipation [roW]
Values
943 10.6 0.87 0.35 2.5 21
619
International Conference on Advances in Engineering and Technology
................... ~ ................... . ................... + . . . . . . . . . . . . . .
. ..........
. ................... + . . . . . . . . . . . . . . . . . . , ......................
~
~
Vg+.,~O+8 Qin+o~O+392
+.+
=
1\
+it
++
Freq|.H~o|
+ ~,+.,+.i+
Fig. 12" Noise Figure vs Frequency
Vgs-O+8
Dev~ W~
Fig. 13" Noise Figure vs Width
4.0 CONCLUSIONS The structure carried out for the resonators presents the insertion loss of 2.2dB (< 3dB) and the broad band-width of 25MHz. The fabrication of the filter and LNA will be independently, which will influence that the time of manufacture will be too long. Whereas, the objective of our work is to c a w out in the future the two components on the same substrate composed by a layer doped for CMOS and the other un-doped for filter. Our amplifier has a suitable gain, low noise, low supply voltage and dissipates a low power. It was conceived for the application because the majority of these parameters were calculated and simulated. REFERENCES
Chen, D.P.and Herman, A.H. "Analysis of metal-strip SAW grating and transducers", IEEE Transactions on Sonics and ultrasonics, vol. SU-32, no. 3, pp. 395-408, May 1985. Mitrea, O. and Glesner M.," A power-constrained design strategy for CMOS tuned low noise amplifiers", Elsevier, Microelectronics reliability, vol.44, pp.877-883, 2004. Nguyen, T.K.,.Kim, C.H., Ihm, J., Yang, M.S. and Lee, S.G. "CMOS low-noise amplifier design optimisation techniques", IEEE Transactions on microwave theory and technique, vol.52, No. 5, pp. 1433-1442, May 2004. Shaeffer, D.K. and Lee, T.H. "A 1.5V, 1.5GHz CMOS low noise amplifier", IEEE journal of solid-state circuits, vol. 32, No.5, pp.745-759, 1997. Suzuki, Y., Shimizu, H., Takeuchi, M., Nakamura, K. and Yamada, A. "Some studies on SAW resonators and multi mode filters", IEEE Ultrasonics symposium, pp. 297, 1976. Thorvaldsson, T., Nyffeler, F.M. "A rigorous derivation of the mason equivalent circuit parameters from coupled mode theory", IEEE Ultrasonics symposium proc., pp. 91-96, 1986. Thorvaldsson, Thor "Analysis of the natural single phase unidirectional SAW transducer", IEEE Ultrasonics symp, pp. 91-96, 1989. Wright, P.V."Analysis and design of low-loss SAW devices with internal reflections using coupling-of-modes theory", IEEE Ultrason.Symp.RF Monolithics, pp. 141-152, 1989. Wright, P.V. "Experimental and theorical research on an innovative unidirectional surface acoustic wave transducer", Phase II final report, pp. 80-81, 1986.
620
Kaluuba, Taban-Wani & Waigumbulizi
THE F A D I N G C H A N N E L P R O B L E M AND ITS I M P A C T ON W I R E L E S S C O M M U N I C A T I O N S Y S T E M S IN UGANDA L.L. Kaluuba, Department of Electrical Engineering, Makerere University, Uganda. G. Taban-Wani: Department of Engineering Mathematics, Makerere University, Uganda. D. Waigumbulizi, Mobile Telecommunications Networks Ltd, Uganda.
ABSTRACT
Recent advances in radio communications, signal processing, and computer technologies have made wireless networking for data communication systems an achievable reality. Wireless communications for data networks is not only limited to outdoor systems, but has now also extended to one attractive application referred to as indoor wireless local area networks (WLAN). The advantages of wireless networks are that: they allow for mobility of users and re-wiring is unnecessary when a user of a WLAN moves - which is especially important for users of portable data terminals. Secondly, since the WLAN can operate in indoor environments, high-speed data transmission is possible without requiring an unrealistic amount of transmitter power. The explosive growth of wireless systems coupled with the proliferation of laptop and palmtop computers and the mobile phone, indicate a bright future for wireless networks, both as stand-alone systems and as part of the larger networking infrastructure. However, many technical challenges remain in designing robust wireless networks that deliver the performance necessary to support the emerging applications. We also know that for any wireless network to function properly, reliable wireless data communication links must first be established. For instance, there are several communication channel impairments which may lead to corruption of the information signal, leading to unavoidable errors. We also note that the communication channel is a complex mix of physical phenomena including noise, interference from other radio signal sources, multipath propagation and path loss, shadowing, etc. Multipath propagation, however, is the most challenging problem encountered in a wireless data communications link design process. Due to the complexity of the fading phenomena, engineers currently depend more on simulation techniques to predict the effects of the various parameters involved. In this paper we discuss the multipath problem, the physical causes of fading, and mitigation measures applied in the Uganda cellular communication systems environment. We also present a mathematical model for fading and characterize it as a stochastic process.
621
International Conference on Advances in Engineering and Technology
Keywords: Fading; fading phenomena; multipath propagation; fading channel manifestations; signal fading; delay-spread; Doppler spread; power delay profile; intersymbol interference. 1.0 INTRODUCTION For a wireless data network to function properly, reliable wireless data communication links must first be established. In a multipath propagation environment, the transmitted electromagnetic signal propagates from the transmitter to the receiver via many different paths. In general, these propagation paths have different amplitude gains, phase shifts, angles of arrival, and path delays that are functions of the reflection structure of the environment, Lee, (1997). The effects of multipath propagation include signal fading, delay spread, and, when there is relative motion between the transmitter and receiver, Doppler spread.
Signal fading refers to the phenomenon in a multipath propagation environment, whereby the strength of the received signal changes rapidly over a small distance or time interval and its strength is also strongly dependent on the locations of the transmitter and receiver. This occurs because in a multipath propagation environment, the signal received by the mobile at any point in space may consist of a large number of plane waves having randomly distributed amplitudes, phases, delays and angles of arrival. These multiple components combine vectorially at the receiver antenna. Fading is the interference of several scattered multiple copies of a transmitted signal arriving at a given point. They may combine constructively or destructively at different points in space, causing the signal strength to vary with location. Fading is often responsible for the most rapid and sometimes violent changes of the signal strength itself as well as its phase. If the objects in a radio channel are stationary, and channel variations are considered to be only due to the motion of the mobile, then signal fading is a purely spatial phenomenon, Lee, 1997. A receiver moving at high speed may traverse through several fades in a short period of time. If the mobile moves at low speed, or is stationary, then the receiver may experience a deep fade for an extended period of time. Reliable communication can then be very difficult because of the very low signal-to-noise ratio (SNR) at points of deep fades.
Delay spread refers to the spread of the duration of the received signal with respect to the transmitted signal. This is due to the different delays associated with the propagation paths. Delay spread introduces inter-symbol interference (ISI) in a digital wireless communication system, and this limits the achievable transmission rate. It also causes difficulties for symbol-timing recovery in a digital demodulator, Lee, 1997.
Doppler spread, on the other hand, refers to the spread of the frequency spectrum of the received signal with respect to that of the transmitted signal, when there is relative motion between the transmitter and the receiver. This is due to the different angles of arrival associated with the propagation paths. Since the spectrum of the received signal is wider than that of the transmitted signal, the multipath propagation channel is clearly a time-varying system. Adap-
622
Kaluuba, Taban-Wani & Waigumbulizi
tive signal processing techniques are therefore required to track channel variations for a mobile digital wireless communication system operating in a multipath propagation environment. Areas of interest in our research are 9 Models and simulation of fading multipath channels 9 Information-theoretic aspects of communication through fading channels 9 Channel coding and decoding techniques and their performance and 9 Adaptive coding and equalization techniques for suppressing inter-symbol interference. This paper is a presentation of the cardinal issues established during the analysis of the multipath fading problem and methods of dealing with the problem for the improvement of the wireless communication link design and implementation. Several adaptive equalization techniques and adaptive coding techniques for overcoming the problems of delay spread and Doppler spread have been proposed in the literature [4,Lee, 5]. We also present the different methods employed in combating the problem in the cellular communications environments in Uganda, with reference to the main mobile communication providers MTN and UTL systems. 2.0 M U L T I P A T H P R O P A G A T I O N In a wireless communication channel, the transmitted signal generally propagates to the receiver antenna through different paths. This phenomenon illustrated in Fig.1 is termed multipath propagation. Multipath propagation is due to the multiple reflections caused by reflectors and scatterers in the environment. Possible reflectors and scatterers may include mountains, hills and trees in rural environments, buildings and vehicles in built-up urban environments, or walls and floors in indoor environments. The receiver antenna will therefore receive multiple copies of the transmitted signal. Since the different versions of the signal propagate through different paths, they will in general have different attenuation, phase shifts, time delays, and angles of arrival. The receiver antenna output is the sum of the multiple signal copies weightedby the antenna gain pattern. Multipath propagation is a rather complicated phenomenon that is cumbersome to characterize. One common approach is to treat the received signal as a spatial-temporal random process. The statistics of this random process can be collected from extensive field measurements in selected operation environments. Since the properties of the received signal are clearly a strong function of the multipath environment, statistical characterization of the received signal is often done in a two-step process. In the first step, it is assumed that the multipath environment is fixed, and variations of the received signal are measured for the given multipath environment. The statistics thus collected are referred to [5] as small-scale variations, because they are usually obtained from measurement data obtained at various locations in a small area.
623
International Conference on Advances in Engineering and Technology
In the second step, variations of the large-scale statistics are determined from measurements taken in different multipath environments. These variations are referred to as large-scale variations because they are obtained from measurement data taken at various locations in a large area. Our research focuses on mitigating the effects of the small-scale variations using digital signal processing techniques. The small-scale variations include signal fading, delayspread and Doppler-spread, which we believe contribute to transmission errors in a digital environment. In a multipath propagation environment the received signal consists of a large number of components having different delays [5]. Consequently, when a "narrow" pulse is transmitted over a multipath propagation channel, distorted replicas of the transmitted pulse arrive at the receiver at various different times, making the received signal "wider" in time than the transmitted signal. This phenomenon is referred to as delay spread. The significance of delay spread depends on the time-width of the signal relative to that of the channel; hence a quantitative characterization of the severances of channel delay-spread is necessary.
Fig. 1 Multipath propagation phenomena
3.0 FADING CHANNEL MANIFESTATIONS
Fading channel manifestations can be broadly divided into two categories: large-scale fading and small-scale fading. Large-scale fading represents the average signal power attenuation or path loss due to multipath signal propagation over large areas. Large-scale fading [2] is
624
Kaluuba, Taban-Wani & Waigumbulizi
affected by prominent terrain contours (such as hills, mountains, forests, billboards, clumps of buildings, etc.) between the transmitter and the receiver. The receiver is often represented as being "shadowed" by such prominences. The statistics of large-scale fading provide a way of computing an estimate of path loss as a function of distance. This is described as a mean-path loss and a log-normally distributed variation about the mean. Small-scale fading refers to the random changes in signal amplitude and phase that are experienced by a signal as a result of small changes (as small as a half wavelength) in the spatial positioning between a receiver and a transmitter. Small-scale fading manifests itself in two mechanisms [7], namely, time-spreading of the signal (or signal dispersion) and timevariance behavior of the channel. For mobile radio applications, the channel is time-variant because motion between the transmitter and the receiver results in propagation path changes. The rate of change of these propagation conditions accounts for the fading rapidity (rate of change of the fading impairments). 3.1 Rayleigh Distribution Model Extensive measurements have previously been done as reported in the literature [1,2] to characterize the small-scale spatial distribution of the received signal amplitude in multipath propagation environments. It was established that for many environments, the Rayleigh distribution provides a good fit to the signal amplitude measurement in environments where no line-of-sight or dominant path exists [2]. The fact that the Rayleigh distribution provides a good fit to the measured signal amplitudes in a non-line-of-sight environment can be explained as follows: When a signal is transmitted through a multipath propagation channel, the in-phase and quadrature-phase components of the received signal are sums of many random variables. Because there is no line-of-sight or dominant path, these random variables are approximately zero-mean. Therefore, by the central limit theorem, the in-phase and quadrature-phase components can be modeled approximately as zero-mean Gaussian random processes. The amplitude then is approximately Rayleigh distributed. Small-scale fading is also called Rayleigh fading because if the multiple reflective paths are large in number and there is no line-of-sight component, the envelope of the received signal is statistically described by a Rayleigh probability distribution function (pdf).
3.2 Rician Distribution Model
When a dominant non-fading signal component is present, such as a line-of-sight propagation path or a dominant reflected path, the small-scale fading envelope is described by a Rician pdf. In this case, according to the central limit theorem, the signal amplitudes are approximately Rician distributed when the number of paths is large. A mobile radio roaming over a large area must process signals that experience both types of fading: small-scale fading superimposed on large-scale fading.
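The same construction extends to the Rician case by adding a constant (dominant) component to one quadrature; the sketch below uses an arbitrarily chosen Rician K-factor purely for illustration, not a value measured in any particular environment:

import numpy as np

rng = np.random.default_rng(1)
K = 6.0                                      # assumed dominant-to-scattered power ratio
scatter_sigma = np.sqrt(0.5)                 # per-quadrature std of the scattered part
los_amplitude = np.sqrt(2.0 * K) * scatter_sigma   # amplitude of the dominant path

n = 100_000
i = los_amplitude + rng.normal(0.0, scatter_sigma, n)   # in-phase: dominant plus scatter
q = rng.normal(0.0, scatter_sigma, n)                    # quadrature: scatter only
envelope = np.hypot(i, q)                                # approximately Rician for K > 0

print("simulated mean power:", np.mean(envelope ** 2))
print("expected mean power :", los_amplitude ** 2 + 2.0 * scatter_sigma ** 2)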
Fig. 2: Fading channel manifestations
3.3 Power-delay Profile
One common measure for characterizing channel delay spread is the power-delay profile [7, 12]. The power-delay profile of an environment is obtained through field measurements by transmitting a short pulse and measuring the received power as a function of delay at various locations in a small area. These measurements are then averaged over the spatial locations to generate a profile of average received signal power as a function of delay.
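As an illustration of how such a profile is reduced to summary statistics, the sketch below computes the mean excess delay and the rms delay-spread as power-weighted moments of a power-delay profile; the delay bins and power values are invented for the example rather than taken from any field measurement:

import numpy as np

# Hypothetical averaged power-delay profile: excess delays in microseconds,
# relative received power in linear units (not dB).
delays_us = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
powers = np.array([1.0, 0.63, 0.40, 0.10, 0.01])

weights = powers / powers.sum()
mean_delay = np.sum(weights * delays_us)                            # first moment
rms_delay_spread = np.sqrt(np.sum(weights * delays_us ** 2) - mean_delay ** 2)

print(f"mean excess delay: {mean_delay:.3f} us")
print(f"rms delay spread : {rms_delay_spread:.3f} us")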
3.4 Frequency-Selective Fading and Flat Fading versus Fast Fading and Slow Fading
Two degradation categories can be defined for signal dispersion, namely frequency-selective fading and flat fading, and two degradation categories can be defined for fading rapidity, namely fast fading and slow fading.
3.5 Frequency-Selective Fading Channel
Viewed in the time domain, a channel is said to exhibit frequency-selective fading if the mean delay time is greater than the symbol time (Tm > Ts). This condition occurs whenever the received multipath components of a symbol extend beyond the symbol's time duration, thus causing channel-induced inter-symbol interference (ISI) [7].
3.6 Nonselective or Flat Fading Channel
Viewed in the time domain, a channel is said to be frequency non-selective, or to exhibit flat fading, if Tm < Ts. In this case all of the received multipath components of a symbol arrive within the symbol time duration; hence the components are not resolvable. Here there is no channel-induced ISI distortion, since the signal time-spreading does not result in significant overlap among neighbouring received symbols. In general, for a wireless digital communication system, the significance of channel delay spread depends on the relationship between the rms delay-spread of the channel and the symbol period of the digital modulation [5]. If the rms delay-spread is much less than the symbol period, then delay spread has little impact on the performance of the communication system, and the shape of the power-delay profile is immaterial to the error performance. This condition is called "flat fading".
On the other hand, if the rms delay-spread is a significant fraction of, or greater than, the symbol period, then channel delay spread will significantly impair the performance of the communication system, and the error performance will also depend on the shape of the power-delay profile. This condition is often referred to as "time-dispersive fading" or "frequency-selective fading". Since the power-delay profile is an empirical quantity that depends on the operating environment, for computer simulation purposes we can only postulate functional forms of the profile and vary the parameters of these functional forms in order to obtain results that are applicable to a broad spectrum of wireless environments.
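The comparison described above can be turned into a simple rule of thumb. The sketch below classifies a link from an assumed rms delay-spread and symbol rate; both numbers are arbitrary placeholders, and the factor-of-ten margin is one common heuristic rather than a threshold taken from the references:

def fading_type(rms_delay_spread_s: float, symbol_rate_baud: float, margin: float = 10.0) -> str:
    """Classify a link as flat or frequency-selective.

    The channel is treated as flat when the symbol period is much longer (by
    `margin`) than the rms delay-spread, so the multipath echoes of one symbol
    die out before the next symbol is appreciably affected.
    """
    symbol_period = 1.0 / symbol_rate_baud
    return "flat" if symbol_period > margin * rms_delay_spread_s else "frequency-selective"

# Example: channel with a 1 microsecond rms delay-spread.
print(fading_type(1e-6, 25_000))       # 40 us symbols  -> flat
print(fading_type(1e-6, 2_000_000))    # 0.5 us symbols -> frequency-selective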
3.7 Doppler Spread
When a single-frequency sinusoid is transmitted in a free-space propagation environment where there is no multipath propagation, the relative motion between the transmitter and receiver results in an apparent change in the frequency of the received signal. This apparent frequency change is called the Doppler shift (see Fig. 3).
Fig. 3: Illustration of Doppler shift in the free-space propagation environment.
The receiver moves at a constant velocity v along a direction that forms an angle α with the incident wave. The difference in path lengths travelled by the wave from the transmitter to the mobile receiver at points X and Y is given by

Δl = d cos α = v Δt cos α  .................... (1)

where Δt is the time required for the mobile to travel from X to Y. The phase change in the received signal due to the difference in path lengths is therefore

Δφ = 2πΔl/λ = (2πvΔt/λ) cos α  .................... (2)

where λ is the wavelength of the carrier signal. Hence the apparent change in the received frequency, or Doppler shift, is given by

f_d = (1/2π)(Δφ/Δt)  .................... (3)
    = (v/λ) cos α  .................... (4)
    = (v/c) f_c cos α  .................... (5)
In the last equation, c is the speed of light and f_c is the frequency of the transmitted sinusoid (carrier); note that c = f_c λ. Equation (5) shows that the Doppler shift is a function of, among other parameters, the angle of arrival of the transmitted signal. In a multipath propagation environment in which multiple signal copies propagate to the receiver with different angles of arrival, the Doppler shift will be different for different propagation paths. The resulting signal at the receiver antenna is the sum of the multipath components. Consequently, the frequency spectrum of the received signal will in general be "broader" or "wider" than that of the transmitted signal, i.e., it contains more frequency components than were transmitted. This phenomenon is referred to as Doppler spread. Since a multipath propagation channel is time-varying when there is relative motion, the amount of Doppler spread characterizes the rate of channel variations [5]. Doppler spread can be quantitatively characterized by the Doppler spectrum:
S(f) = K / √(1 − (f/f_max)²),  |f| ≤ f_max  .................... (6)
The Doppler spectrum is the power spectral density of the received signal when a single-frequency sinusoid is transmitted over a multipath propagation channel. The bandwidth of the Doppler spectrum or, equivalently, the maximum Doppler shift f_max, is a measure of the rate of channel variations. When the Doppler bandwidth is small compared with the bandwidth of the signal, the channel variations are slow relative to the signal variations; this is often referred to as "slow fading". On the other hand, when the Doppler bandwidth is comparable to or greater than the bandwidth of the signal, the channel variations are as fast as or faster than the signal variations; this is often called "fast fading".
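As a quick numerical illustration of these definitions, the sketch below evaluates the maximum Doppler shift of equation (5) and compares it with the signal bandwidth to label a link as slow- or fast-fading; the vehicle speed, carrier frequency and bandwidth are assumed example values, not figures taken from the measurements discussed in this paper:

C = 3.0e8  # speed of light, m/s

def max_doppler_shift_hz(speed_mps: float, carrier_hz: float) -> float:
    """Maximum Doppler shift f_max = (v / c) * f_c, i.e. equation (5) with cos(alpha) = 1."""
    return speed_mps / C * carrier_hz

def fading_rate(speed_mps: float, carrier_hz: float, signal_bandwidth_hz: float) -> str:
    """Slow fading when the Doppler bandwidth is small relative to the signal bandwidth."""
    f_max = max_doppler_shift_hz(speed_mps, carrier_hz)
    return "slow fading" if f_max < signal_bandwidth_hz else "fast fading"

# Example: 900 MHz carrier, receiver moving at 20 m/s (72 km/h), 200 kHz signal bandwidth.
print(f"f_max = {max_doppler_shift_hz(20.0, 900e6):.1f} Hz")    # 60.0 Hz
print(fading_rate(20.0, 900e6, 200e3))                          # -> slow fading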
4.0 PARAMETERS MEASURED TO DETERMINE FADING LEVELS
Fade Margin: The fade margin is the difference between the normal unfaded received signal level and the receiver threshold, the threshold being the received signal level required to cause the worst 3 kHz slot of the receiver baseband to have a 30 dB S/N. It is defined as

Fade Margin (dB) = System Gain (dB) − Net Path Loss (dB)  .................... (7)
Pathloss: Refers to the difference between transmitted and received power, or

Pathloss (dB) = Tx_Power − Rx_Power  .................... (8)
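A minimal link-budget sketch built on equations (7) and (8) follows; the system gain, transmitted power and received power are invented example figures rather than values for any of the links discussed here:

def path_loss_db(tx_power_dbm: float, rx_power_dbm: float) -> float:
    """Path loss as transmitted minus received power, equation (8)."""
    return tx_power_dbm - rx_power_dbm

def fade_margin_db(system_gain_db: float, net_path_loss_db: float) -> float:
    """Fade margin as system gain minus net path loss, equation (7)."""
    return system_gain_db - net_path_loss_db

# Example figures: +30 dBm transmitted, -78 dBm received, 115 dB system gain.
loss = path_loss_db(30.0, -78.0)                              # 108 dB
print(f"path loss  : {loss:.1f} dB")
print(f"fade margin: {fade_margin_db(115.0, loss):.1f} dB")   # 7 dB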
Threshold Crossing Rate: This is the average number of times per second that a fading signal crosses a certain threshold level.
Fade Duration: This is the average period of time for which the received signal is below a required or desired level.
Received Signal Strength Indication (RSSI): This is the strength of the received signal in dB/dBm.
Bit Error Rate (BER): This is the number of erroneous bits in a message transmitted over a particular link. Since these links or channels are digital communication channels, the BER is used to evaluate the level of erroneous bits in the message.
5.0 FADING MITIGATION TECHNIQUES
Fading mitigation techniques can be divided into three categories:
• Power control,
• Adaptive waveform, and
• Diversity [11].
Power control and adaptive waveform fade mitigation techniques are characterized by the sharing of some unused, in-excess resource of the system, whereas diversity fade mitigation techniques imply adopting a re-routing strategy. The former aim at directly compensating the fading occurring on a particular link in order to maintain or improve the link performance, whereas diversity techniques make it possible to avoid a propagation impairment by changing the frequency band or the geometry.
5.1 Power Control
In power control techniques, the transmitter power or the antenna beam shape is modified in order to adapt the signal to the propagation conditions. Several implementations are possible depending on the location of the control technique.
6.0 ADAPTIVE WAVEFORM OR SIGNAL PROCESSING TECHNIQUES
Three types of methods can be identified which translate into reductions of the power required to compensate for additional attenuation on the link, and which lead to modifications in the use of the system resource by acting on the bandwidth or on the data rate.
6.1 Adaptive Coding
When a link is experiencing fading, the introduction of additional redundant bits alongside the information bits to improve the error correction capability (FEC) makes it possible to maintain the nominal BER while reducing the required energy per information bit. Adaptive coding consists of implementing a variable coding rate in order to match the impairments due to propagation conditions. A gain varying from 2 to 10 dB can be achieved depending on the coding rate. The limitations of this fade mitigation technique are linked to additional bandwidth requirements for FDMA and larger bursts in the same frame for TDMA. Adaptive coding at constant information data rate therefore translates into a reduction of the total system throughput when various links are experiencing fading simultaneously.
6.2 Adaptive Modulation
Under clear-sky conditions, high system capacity for a specified bandwidth can be achieved by using modulation schemes with high spectral efficiency, such as coded modulation or combined amplitude and phase modulation [5, 6]. In case of fading, the modulation scheme can be changed to a more robust modulation requiring less symbol energy. As for adaptive coding, the aim of the adaptive modulation technique is to decrease the required bit energy per noise power spectral density ratio (Eb/N0) corresponding to a given BER, by using a lower-level modulation scheme at the expense of a reduction of the spectral efficiency.
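A toy illustration of this switching idea is sketched below; the Eb/N0 thresholds and the list of schemes are invented placeholders for the example (a real design would derive them from the target BER of each scheme, as in [5] and [6]):

# Hypothetical switching table: (scheme, bits per symbol, minimum Eb/N0 in dB
# assumed to meet the target BER).  Higher-order schemes carry more bits per
# symbol but need more energy per bit; under fading the controller steps down.
SCHEMES = [("64-QAM", 6, 18.0), ("16-QAM", 4, 14.0), ("QPSK", 2, 10.0), ("BPSK", 1, 7.0)]

def select_modulation(ebn0_db: float):
    """Pick the highest-rate scheme whose Eb/N0 requirement is still met."""
    for name, bits_per_symbol, required_db in SCHEMES:
        if ebn0_db >= required_db:
            return name, bits_per_symbol
    return None, 0          # link outage: no scheme meets the requirement

print(select_modulation(19.0))   # clear sky  -> ('64-QAM', 6)
print(select_modulation(11.5))   # faded link -> ('QPSK', 2)
print(select_modulation(5.0))    # deep fade  -> (None, 0)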
6.3 Data Rate Reduction
With data rate reduction, the information data rate is decreased when the link experiences fading; this translates into a decrease by the same amount of the required carrier power to noise power spectral density ratio (C/N0) if the required bit energy per noise power spectral density ratio (Eb/N0) is kept constant (no change in the coding gains and constant BER). The transmitted bit rate is reduced accordingly and turns into a similar reduction of the occupied carrier bandwidth. Operation at constant resource, by keeping a constant transmitted data rate, is also possible by adjusting the coding rate accordingly; in that case the coding gain adds to the reduction of the information data rate. This fade mitigation technique requires services that can tolerate a reduction of the information rate, such as video or voice transmission (assuming a change of the source coding at the expense of a reduction of the perceived quality) and data transmission (assuming an increase of transfer duration, or a reduced throughput in the case of Internet access). Moreover, extra delay and/or complexity may be required due to the exchange of signaling between the transmitter and the receiver [11].
6.4 Diversity Techniques [7, 12]
Diversity fade mitigation techniques involve setting up a new link when the primary link is experiencing fading. The new link can be implemented at a different frequency (Frequency Diversity), with a different geometry (Site or Station Diversity), or in a different period of time (Time Diversity).
6.5 Frequency Diversity
Provided that two different frequency bands are available, with frequency diversity the information is transmitted on a carrier in the frequency band that is least affected by the meteorological situation (typically the lowest frequency) when a fade is occurring. This requires a pair of terminals at each frequency at both link terminations, and suffers from inefficient use of the radio resource.
6.6 Site Diversity
With site diversity, the selection at one end of a terminal at a different location and in a different angular direction modifies the geometry of the link and prevents the path from going through the atmospheric perturbation that would produce a fade. Site diversity is based on the fact that the convective rain cells which produce deep fades are a few kilometres in size, and that the probability of simultaneous occurrence on two geometrically separated links is low. This technique requires re-routing the connection in the network.
6.7 Time Diversity
Time diversity aims at re-sending the information when the state of the propagation channel allows it to get through. This assumes that there are no or only weak time constraints on transmission of the data (e.g. push services), or that a variable delay (minutes to tens of minutes) between data packets is acceptable (non-real-time services).
7.0 FADE MITIGATION IN THE UGANDA CELLULAR ENVIRONMENT
Several methods have been adopted in the Uganda cellular communication networks for mitigating the effects of fading phenomena. These include antenna systems, multipath equalization techniques, proper frequency planning, frequency hopping, and discontinuous transmission and reception techniques.
8.0 CONCLUSION
Due to the presence of reflectors and scatterers in the environment, a signal transmitted through a wireless channel propagates to the receiver antenna via many different paths. The output of the receiver antenna is, therefore, a sum of many distorted copies of the transmitted signal. These copies generally have different amplitudes, time delays, phase shifts, and angles of arrival. This phenomenon is referred to as multipath propagation. The effects of multipath propagation can be classified into large-scale and small-scale variations. Small-scale variations include signal fading, delay spread, and Doppler spread.
Signal fading refers to the rapid change in received signal strength over a small travel distance or time interval. It occurs because of the constructive and destructive interference between the signal copies. Delay spread refers to the smearing or widening of a short pulse transmitted through a multipath propagation channel. It happens because different propagation paths have different time delays. Doppler spread refers to the widening of the spectrum of a narrow-band signal transmitted through a multipath propagation channel. It is due to the different Doppler shift frequencies associated with the multiple propagation paths when there is relative motion between the transmitter and the receiver. These small-scale effects can be quantitatively characterized using the signal amplitude distribution, the power-delay profile and rms delay-spread, and the Doppler spectrum. All these characterizations are empirical statistics that must be obtained through extensive field measurements. However, field measurement is expensive and difficult, and cannot be generalized to all situations. Because of the stochastic nature of the environment in which wireless systems operate, and because of the complexity of modern wireless systems, the use of simulation enables the design engineer to predict estimates of the degrading effects of fading, interference, power requirements and hand-off in a proposed system before installation of the actual system. During simulation of a multipath signal propagation environment, the power-delay profile and Doppler spectrum of the channel model can be investigated by properly specifying the distributions of model parameters such as the path delays, Doppler shifts and path phases. A special blend of advanced techniques and technologies is required to overcome fading and other interference problems in non-line-of-sight wireless communication.
REFERENCES
[1] A. Lapidoth and P. Narayan, "Reliable Communication Under Channel Uncertainty", 1998 IEEE International Symposium on Information Theory, Cambridge, MA, August 17-21, 1998.
[2] Bernard Sklar, "Rayleigh Fading Channels in Mobile Digital Communication Systems", IEEE Communications Magazine, July 1997, Part I: p. 90-100, Part II: p. 102-109.
[3] E. Biglieri, J. Proakis and S. Shamai, "Fading Channels: Information-Theoretic and Communications Aspects", IEEE Transactions on Information Theory, Vol. 44, No. 6, October 1998, p. 2619-2692.
[4] Mohamed-Slim Alouini and Andrea J. Goldsmith, "Capacity of Rayleigh Fading Channels Under Different Adaptive Transmission and Diversity-Combining Techniques", IEEE Transactions on Vehicular Technology, Vol. 48, No. 4, July 1999, p. 1165-1181.
[5] Yumin Lee, "Adaptive Equalization and Receiver Diversity for Indoor Wireless Data Communications", PhD Thesis, Stanford University, 1997.
[6] Andrea J. Goldsmith and Soon-Ghee Chua, "Adaptive Coded Modulation for Fading Channels", IEEE Transactions on Communications, Vol. 46, No. 5, May 1998, p. 595-602.
[7] Bernard Sklar, "Mitigating the Degradation Effects of Fading Channels", http://www.informit.com/content/images/art_sklar6_mitigating/
[8] Elena Simona Lohan, Ridha Hamila, Abdelmonaem Lakhzouri and Markku Renfors, "Highly Efficient Techniques for Mitigating the Effects of Multipath Propagation in DS-CDMA Delay Estimation", IEEE Transactions on Wireless Communications, Vol. 4, No. 1, January 2005, p. 149-162.
[9] Oghenekome Oteri and Arogyaswami Paulraj, "Fading and Interference Mitigation Using a Greedy Approach", Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA 94305.
[10] Ana Aguiar and James Gross, "Wireless Channel Models", Technical University Berlin, Telecommunication Networks Group (TKN), Berlin, April 2003, TKN Technical Reports Series.
[11] Ana Bolea Alamanac and Michel Bousquet, "Millimetre-Wave Radio Systems: Guidelines on Propagation and Impairment Mitigation Techniques Research Needs", COST Action 280, 1st International Workshop, July 2002.
[12] Andrea Goldsmith, "Wireless Communications", Cambridge University Press, 2005.
SOLAR POWERED Wi-Fi WITH WiMAX ENABLES THIRD WORLD PHONES
K.R. Santhi and G. Senthil Kumaran, Department of CELT, Kigali Institute of Science and Technology (KIST), Rwanda
ABSTRACT
The lack of access to reliable energy remains a significant barrier to sustainable socio-economic development in the world's poorest countries. The majority of their populations are concentrated in rural areas, where access to power is often sporadic or altogether lacking. Without power, the traditional telecom infrastructure is impossible, so these lower-income people have lived for years without electricity or telephones, relying on occasional visitors and a sluggish postal system for news of the outside world. If electricity is the stumbling block, there is a need to devise low-tech solutions that help bridge not only the digital divide but also the electrical divide. One such solution is the solar and pedal powered remote ICT system by Inveneo, a non-profit organization, which combines the power of the computer with a clever application of the increasingly popular Wi-Fi wireless technology powered by solar energy. With this system the rural villagers pedal onto a hand-built, bicycle-powered PC in the village, which sends signals via an IEEE 802.11b connection to a solar-powered mountaintop relay station. The signal then bounces to a server in the nearest town with phone service and electricity, and from there to the Internet and the world. This paper describes a prototype of how the wireless broadband WiMAX technology can be integrated into the existing system to gain global possibilities. With the suggested prototype each village connects to one WiMAX station through a Wi-Fi Access Point (AP) powered by solar means. The WiMAX tower then sends the radio signal to a fixed fiber backbone that connects the villages to the Internet and enables VoIP communications.
Keywords: Wi-Fi, WiMAX, Solar powered ICT, Pedal powered PC
1.0 INTRODUCTION
Rural villages are frequently "off-grid", away from wired power grids or energy infrastructure of any kind. For the people in these remote locations the telecommunications facility is very important, specifically the capability to make local and overseas calls. An innovative, low-cost, pedal powered wireless network can provide a communication facility through the Internet to off-grid villages. For them, telephony is the top priority, not the Internet. With this system villagers will be jumping on stationary bikes to pedal their way onto the Information Superhighway and be able to make phone calls using Internet-based voice technologies. A complete system will provide simple computing, email, voice and Internet
capabilities for remote villages using a pedal powered PC, solar powered Wi-Fi, WiMAX, VoIP and Linux technologies. While these might not sound like big technology breakthroughs, simple solutions like these could take computing power, and in turn communication facilities, to electricity-starved areas, bridging not only the digital divide but also the electrical divide. Section 3 describes the need for alternate energy sources for implementing telecommunication facilities in rural villages.
2.0 MOTIVATION OF RESEARCH
Rwanda is still one of the poorest nations in the world, heavily reliant on subsistence farming and international help. Disparities between rural and urban areas are widespread: over 94% of the Rwandan population without access to electrical power is located in rural areas, and in fact only 6% of the population lives in urban areas. Energy consumption in Rwanda is greatly inferior to that needed for industrialization. The required minimum is generally thought to be 0.6 tep per person per year, whereas at the moment the available energy is of the order of 0.16 tep per person per year. Today 80% of the electricity consumed is used by the capital city, Kigali, where only 5% of the population lives. In the present context, the lack or unreliability of power and phone lines, as well as the high cost of access to existing infrastructure, severely limit Rwanda's development. For example, these isolated communities depend on intermediaries for information, often leading to weak bargaining positions, which results in undervaluing the prices of their crops or paying too-high prices for the materials they require. So an innovative power management system that takes pedal powered personal computers combined with solar powered wireless technologies to the power-starved villages is a necessity to improve the living conditions of the population. One such path-breaking initiative is the ICT prototype described in this paper.
3.0 NEED FOR ALTERNATE ENERGY POWERED ICT
Today, in most places, and especially in rural areas, infrastructure and services are a key problem without a proper communication facility. Alternate energy powered ICT is a necessity for rural markets and an option for urban markets for the following reasons.
(i) Villagers in remote locations have lived for years without electricity or telephones, relying on occasional visitors and a sluggish postal system for news of the outside world. They have families scattered around the globe but no way to call relatives living abroad, or even in the next town.
(ii) There is a clear requirement for power back-ups that can enable the delivery of services to citizens in various e-governance projects.
(iii) Whether in villages, small towns or even metropolitan cities, long power cuts, lack of electricity and voltage fluctuations are part of everyday life in a poor country.
(iv) In countries like Rwanda, power and telephone service is absent and cellular phones struggle to get a signal in the hilly terrain.
(v) Cellular has sometimes proved to be the ideal platform when there is no electricity in some third world countries. But laptops, PDAs, cell and satellite phones all have batteries and can operate on their internal batteries only for short periods of time. In a truly off-grid situation, recharging is still a problem.
4.0 ALTERNATE METHODS OF POWER MANAGEMENT
There are a number of ways to power small-scale ICT installations in locations that are not served by the electricity grid. When grid extension is not an option, a standalone or distributed power system can be installed to generate electricity at a location close to the site where the electricity is needed. Examples of small-scale, standalone power systems include generator sets powered by diesel, solar PV systems, small wind systems, and micro-hydro systems. As illustrated in Table 1 below, the cost of providing power in off-grid locations is influenced by the technology, the size or capacity of the system, and the ongoing operating costs of fuel and maintenance. Renewable energy sources such as solar power and pedal power are considered as power solutions. Some of the technical equipment used includes a wind generator, solar panels, a bank of deep-cycle batteries, etc.

Table 1: Cost of providing power by various methods
                            Grid extension            Solar PV                   Small Wind               Micro-Hydro              Diesel/Gas generator
Capital Costs               $4,000 to 10,000 per km   $12,000 to 20,000 per kW   $2,000 to 8,000 per kW   $1,000 to 4,000 per kW   $250 to 1,000 per kW
Operating Costs (per kWh)   $80 to 120                $5                         $10                      $20                      -
4.1 Generator
Using a generator, or continuously running a vehicle engine, is impractical because it provides far more power than most electronic communication devices need. At the same time, recharging many electronic devices can take hours, so charging them from a vehicle battery is not always advisable. Most car/truck batteries are designed to maintain a relatively high charge, and deep, frequent discharges will dramatically shorten the life of the battery and/or diminish its performance.
4.2 Wind Power
An Air 403 wind generator can be mounted on a pole. This wind generator is capable of providing 400 Watts in strong wind and features an internal regulator and protection against excessive wind speeds. The wind generator requires guy-wire stays fixed in four directions for safety.
4.3 Solar power
Photovoltaic power is an interesting option worth considering for many remote ICT facilities. Small-scale PV systems turn sunlight directly into electricity for use by communications devices, computers and other kinds of equipment. An array of twelve 115 Watt solar panels effectively provides just over 1300 Watts (1.3 kilowatts) of power during full sunlight, as described in Psand (2004). This amount of power at 12 volts needs very careful handling and regulation: 1 kilowatt at 12 volts equates to a current of just over 100 Amperes. The following are the advantages of a solar power system, as described in Humaninet ICT (2005):
(i) Resource: Broad availability of the solar resource, sunlight, often makes PV the most technically and economically feasible power generation option for small installations in remote areas.
(ii) Maintenance: Since there are typically no moving parts in PV systems, they require minimal maintenance, so this technology is well suited to isolated locations and rural applications where assistance may be infrequently available.
(iii) Operation: The operation required for typical PV systems is the periodic addition of distilled water to the batteries when flooded batteries are used. More expensive systems, using sealed batteries, can run for extended periods without user intervention.
(iv) Environmental impacts: A solar system produces negligible pollutants during normal operation.
(v) Costs: Costs per installed Watt depend on system size, the installation site and component quality. Smaller systems (less than 1 kW) tend to be at the higher end of the cost range.
(vi) Viability: Unlike generator sets, PV systems are quiet and do not generate pollution. With proper design, installation and maintenance practices, PV systems can be more reliable and longer lasting.
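The array figures quoted above can be checked with a short calculation. In the sketch below the panel count, panel rating and 12 V bus come from the text, while the battery capacity and night-time load are assumed values added purely for illustration:

panel_watts = 115           # rating of one panel, from the text
panel_count = 12
bus_voltage = 12.0          # volts

array_watts = panel_count * panel_watts        # 1380 W, "just over 1.3 kW"
array_current = array_watts / bus_voltage      # roughly 115 A at 12 V

# Assumed figures for illustration only: a 100 Ah deep-cycle battery feeding a
# 60 W load (embedded PC plus Wi-Fi radio) after dark.
battery_ah = 100.0
load_watts = 60.0
hours_of_autonomy = battery_ah * bus_voltage / load_watts

print(f"array output: {array_watts} W ({array_current:.0f} A at {bus_voltage:.0f} V)")
print(f"autonomy    : {hours_of_autonomy:.1f} h on the assumed battery and load")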
Compared with options such as solar panels and generators, using pedal power for small appliances like PCs and printers could reduce the cost of the project. Power would be supplied by a car battery charged by a person pedaling on a stationary bike nearby. One minute of pedaling generates about five minutes of power: 12 V at 4 to 10 A, depending on the pedaler's effort. The main focus must be on how to apply the technology of pedal power to a laptop computer; there is a need for appropriate technology to be compatible with high technology. This task is a little difficult because computers are very sensitive to power surges: if you tried to plug your computer directly into the generator you would more than likely crash your computer. To avoid this outcome the generator is plugged into a battery that can then safely power your laptop computer. The battery delivers consistent power to the laptop, whereas power straight from the generator would be inconsistent due to the nature of pedaling. This set-up can be used to power many other appliances, for example lights, televisions, radios, and any other battery powered appliances. The following are the advantages of a pedal power system.
(i) The efficiency and the variable speed of the output are two features that can be exploited. Basically, any device that was hand-cranked, foot-powered, or powered by a fractional-horsepower electric motor could potentially be converted to pedal power.
(ii) It requires no fuel, and is not affected by time of day or weather, so it would make an excellent emergency generator.
5.0 DESCRIPTION OF THE EXISTING INVENEO REMOTE ICT SYSTEM
The Inveneo Remote ICT system provides access to essential information and communication tools in regions where there is limited and/or unaffordable electricity and telephone service. This low-cost solar and bicycle powered computer provides basic computing, voice calling, and Internet access for villages without access to electricity or telecommunications, and uses standard off-the-shelf PC, VoIP, Wi-Fi and open source software technologies (including Asterisk) that have been designed for low power consumption and integrated, ruggedized and adapted for the local environment and language.
The computer is powered by electricity stored in a car battery charged by foot cranks; these are essentially bicycle wheels and pedals hooked to a small generator. The generator is connected to a car battery and the car battery is connected to the computer. Each computer is connected to the others by a radio local area network (LAN). The rural villagers pedal onto the Internet via the bicycle-powered computer in the village, which sends signals to a repeater station, powered by solar means, on the ridge near the river valley. That station then sends the radio signal to the microwave tower nearby and eventually to a server in the town that connects the villages to the Internet. Systems like this are already being implemented in Phon Kham in Laos and in rural villages of Uganda, as shown in figure 1 (Inveneo, 2002); these are among the world's poorest areas, with very little infrastructure: no electricity, no phones, no running water, etc. With such a system the villagers will be able to call out and receive incoming calls on a regular telephone instrument hooked up to the computer. Inveneo's systems utilize open-source software (Linux, KDE, OpenOffice) for Internet access and productivity tools. The phone connections are established using the SIP VoIP signaling protocol and the Asterisk open-source PBX system. Each village has its own extension and voice mail box. The PBX system allows free calls among the connected villages. Any phone in the world can call the stations in the villages, and calls to any phone in Uganda are possible from the village stations.
Fig.1. Inveneo Remote ICT System
6.0 THE SUGGESTED NEW REMOTE ICT SYSTEM WITH WiMAX
In the new system the pedal powered village PCs interconnect among themselves on a wireless LAN (local area network), and each PC in turn connects to an "access point" which serves to relay message packets between different destinations. The access point is connected to the WiMAX relay station. With WiMAX, point-to-point broadband links of about 50 km can be reached. The WiMAX tower in turn connects the villages to the fiber backbone through a server in the nearest town and from there to the Internet and the world. The access point is a solar-powered IEEE 802.11b (Wi-Fi) connection.
6.1 About WiMAX
WiMAX (Worldwide Interoperability for Microwave Access) has rapidly emerged as the next generation broadband wireless technology, based on the 802.16x standard. The technology, officially known as 802.16, not only transfers data at up to 75 Mbps, but also goes through walls, has a maximum range of 30 miles, and provides Internet connections up to 25 times faster than today's broadband. Access to the Internet is of prime importance as it has turned into a fully converged network delivering voice, audio, image and video in addition to data. WiMAX extends the reach of IP broadband metropolitan fiber networks well beyond the relatively small LAN coverage of Wi-Fi in offices, homes or public access hot spots, out to rural areas. WiMAX is expected to provide a flexible, cost-effective, standards-based means of filling existing gaps in broadband coverage, and of creating new forms of broadband services not envisioned in a "wired" world.
6.2 Description of the new system
The system is based upon low-power embedded PCs running the GNU/Linux operating system. The PC also sports two PCMCIA slots to accommodate an IEEE 802.11b wireless local-area network (WLAN) card supporting Wi-Fi wireless communications and a voice-over-IP card (H.323) supporting voice communications, as described in Craig Liddell (2002). The phone card (DSP/phone interface card) can use a standard analog phone as well as a headset/microphone combination. All PCs in a village cluster use Wi-Fi to send data wirelessly to a central WiMAX tower. A single WiMAX tower can serve many clusters, as shown in figure 2.
Fig. 2: Remote ICT system with WiMAX (Wi-Fi with WiMAX serving the solar/pedal powered VoIP communication system of rural villages)
The system uses pedaling to charge a car battery attached to a special cycle battery unit called RP2, which, in turn, is connected to the PC to run it in those locations that do not have power. RP2 is a power management system that switches the computer to battery power
when the mains power phases out. The RP2 system can provide continuous power for about eight hours; moreover, one minute of pedaling yields five minutes of power. HCL Infosystems (Moumita, 2005) has designed the prototype of an external gadget that can be charged through pedaling and connects to a personal computer to run it under the most difficult of power situations, and this can easily be used. The existence of open source software supporting such wireless communications was central to this decision. The relay point would therefore have a router (the "relay PC") serving the access point function for the villages and providing a link, or "backhaul", to the phone lines of the remote village. Though the bikes will power much of the system, the access point and the routers are solar powered and highly resistant to environmental factors. The system consists of four distinctive parts:
(i) Main server: This system is placed in a location where phone lines, Internet access (dial-up or any kind of broadband) and electricity are available. The server incorporates a modem (V.34) and a PSTN interface card capable of emulating a telephone and converting the voice signals to and from digital form. The main server acts as the following:
• the gateway to the local phone network (PSTN, analog or digital);
• the Internet access gateway, maintaining the dial-up or broadband connection to the Internet;
• the voicemail system with mailboxes for individuals;
• the intranet web server for local content, file sharing, network monitoring, etc.
(ii) Solar powered relay system: This system consists of a WiMAX tower and a router acting as a repeater, extending the range of the signal from the main server to the access point and further towards the village PCs. It also allows extending the range of the wireless network by relaying the signal from the village PCs to the server PCs. Other features include:
• extends the range of village PCs up to 50 km away from the main server;
• enables point-to-point or point-to-multipoint connections;
• multiple relay stations can be connected to the central server to cover large areas.
(iii) Access point: 802.11 wireless network links act as the access point, with a range of 2 to 6 km. The PC is wired to a regular telephone set and a directional Wi-Fi antenna which transmits the Internet signal to the access point, from where it is routed via the router to the WiMAX tower.
(iv) The village PC: This system provides the users with access to a phone line, email, web browsing and basic computing applications. The village PCs are interconnected using wireless networking and have a telephone interface, so telephony is carried out using the standard telephone "human interface". Calls between villages and village clusters are routed by the router and cost nothing, like dialing another room from a hotel PBX.
7.0 ADVANTAGES OF THE REMOTE ICT SYSTEM WITH WiMAX
The use of the WiMAX technology in the ICT system contributes the following major advantages, among others:
• Practical limitations prevent cable and DSL technologies from reaching many potential broadband customers. The absence of a line-of-sight requirement, the high bandwidth, and the inherent flexibility and low cost of WiMAX prove useful in delivering broadband services to rural areas where it is cost-prohibitive to install landline infrastructure.
• WiMAX also provides backhaul connections to the Internet for Wi-Fi hotspots in remote locations.
• The network is designed and built in such a way that it will cost very little, around $25 a month, to operate.
• Even though the pedal device can also be powered by a solar or gas generator, the idea is that young people will earn money/computer time by pedaling the device.
• In addition to fulfilling the desire for telephone service, basic computer functionality is available for the preparation of documents and for spreadsheet functions.
• Because much of the project can be built around nonproprietary, or "open source", software, villagers can essentially own the system.
8.0 CHALLENGES
The main challenges are the following:
• There is a need for separate rural PCs that take into account factors including power, serviceability and local conditions such as heat, cold and dust. Everything has to be designed for a high-humidity environment. The physical security of the devices also matters a lot.
• The success of the pedal power PC hinges on crucial issues such as the time taken to charge the battery via pedaling, the number of hours that the PC can be used thereafter, and the price. It is estimated that it will take about one hour of labor to re-charge the battery for 4 hours of computer/printer/LCD screen use.
• Commercially available access point hardware is not programmable to the extent required for this monitoring purpose. So it is necessary to use a PC in the relay station to know the state of charge of the battery, given the monsoon season, and any other information (regarding tampering, for instance) that may prove of use in assessing the state of the installation.
• Although English web sites will remain in English, villagers will be able to send and receive messages only in their native language, so software featuring menus translated into the local language must be developed.
9.0 SUGGESTIONS
• Students can be trained to use the system and to teach older villagers.
• Working with computer science and engineering students and teachers of nearby universities, a local language version of the Linux-based graphical desktop can be developed.
• As a telecommunication system, it is obvious that a long service life is important and the network design must accommodate it.
• The system has to be made as automatic as possible and simple enough to be operated by villagers in order to reduce operating costs.
10.0 USES OF THE REMOTE ICT SYSTEM
(i) Family communication: The global population shift from rural to urban communities inevitably breaks up families. These remote ICT networks allow distant family members to remain in contact, with positive results for community stability.
(ii) Health care: Health clinics can communicate in real time with doctors and nurses in hospitals; provide AIDS awareness and prevention information (N-Ten, 2005); address complex medical treatment needs and emergencies, etc.
(iii) Human rights: Communities get access to information allowing them to take part in shaping their own destiny. They share information on human rights, women's rights and land issues, improving farming techniques, etc.
(iv) Education: The integration of ICT in teaching curriculums increases the availability of literacy and other training and provides youth with the opportunity to acquire computer skills, as said in N-Ten (2005).
(v) Economic empowerment: Beyond the support for traditional economic practices, the introduction of information, communication and energy technologies allows for the development of useful trade skills related to those technologies, from solar technicians to software programming.
(vi) Disaster relief: Rapid deployment of phone and data networks after disasters.
(vii) Income generation (Inveneo, 2002): Through improved communication, farmers access market data to maximize crop value by taking it to the highest paying nearby markets. Co-ops are formed between villages to improve buying power and share resources. This results in substantial income increases.
(viii) Aid distribution: Through access to databases in real time, resource information is provided on grants and funding from government agencies and NGO entities.
(ix) Communication and transportation (N-Ten, 2005): Improves local communication using phone and email, eliminating the time and expense of making the full-day journey between villages.
11.0 CONCLUSION
The suggested VoIP system with WiMAX can help send two-way voice signals with computers, mimicking the traditional phone system, and can make a big difference to the people in rural areas. So each country belonging to the third world should adopt this system, which is cost effective and would improve the living conditions of the people, in turn leading a path to economic empowerment. Many companies, like HCL Infosystems (Moumitha, 2005) of India, manufacture new affordable models charged by pedal power that can be adopted. But governments and other aid agencies must develop policies to
implement the communication infrastructure with WiMAX so that rural areas are easily connected. I hope that this system will soon become ubiquitous in the poor parts of the world and transform the third world.
REFERENCES
Alastair Ottar (2004), Pedal power Linux gets Ugandans talking, Tectonic: Africa's source for open source news.
Andreas Rudin (2005), Solar and pedal power ICT, [Linuxola] e-magazine.
Ashton Applewhite (2005), IT takes a village, IEEE Spectrum Careers, www.spectrum.ieee.org/careers/careerstemplate.jsp?ArticleId=p090303.
Craig Liddell (2002), Pedal powered: Look Ma, no wires, Australia.internet.com, http://www.australia.internet.com.
N-TEN (2005), Inveneo - Solar/Pedal powered ICT, Tech Success Story, http://www.nten.org/tehsucess-inveneo.
Inveneo (2002), Pedal and solar powered PC and communications system, 2005, http://www.inveneo.org.
Michael (2003), Green wireless networking, Slashdot - News for Nerds.
Lee Felsenstein (2003), Pedal powered networking update: Technical information for the Jhai PC and communications system, http://www.jhai.org.
Lee Thorn (2002), Jhai PC: A computer for most of the world, TEN Technology.
Roger Weeks (2002), Pedal powered Wi-Fi, Viridian note 00335.
Steve Okay (2003), Technical information for the Jhai PC and communication system - Software, http://www.jhai.org.
Lee Thorn et al. (2003), Remote IT village project, Laos, The Communication Initiative.
David Butcher, Pedal powered generator, http://www.los-gatos-ca.us/davidbu/pedgen.html.
Michael G. Richard (2005), Inveneo: Solar and pedal powered phones for Uganda, Treehugger, http://treehugger.com/files/2005/09/inveneo-solar-a.php.
Digital Divide Network (2005), Generation gaps in technology and digital divide, www.digitaldivide.net/blog.
Moumita Bakshi Chatterjee (2005), Bridging digital divide - New pedal power to run your computers, Business Line, http://thehindubusinessline.com/2005/07/29/stories.
Cyrus Farivar (2005), VoIP phones give villagers a buzz, Wired News, http://www.wired.com/news/technology/168796-0.html.
Humaninet ICT (2005), Humaninet ICT Features, http://www.humaninet.org/ICTfeatureslist.html.
Vinutha V (2005), A PC in every home, http://www.expresscomputeronline.com/20051024/market03.shtml.
Pragati Verma (2005), PCs that can bridge the electrical divide, OWL Institute Learning Service Provider, http://owli.org/node/355.
Buzz Webmaster (2005), Closing the digital divide, http://www.politicsonline.com/blog/archives/2005/07.
Psand Limited (2004), iTrike: The world's First Solar Powered Wireless Internet Rickshaw.
ICT FOR EARTHQUAKE HAZARD MONITORING AND EARLY WARNING
A. Manyele¹, K. Aliila, M.D. Kabadi and S. Mwalembe, Department of Electronics and Telecommunications, Dar es Salaam Institute of Technology, P.O. Box 2859, Dar es Salaam, Tanzania
¹ Corresponding author: Tel: 255-0-744-586727; E-mail: [email protected]
ABSTRACT
Tanzania lies within the two branches of the East African rift valley system and has experienced earthquakes with magnitudes of up to 6.0. Six broadband seismic stations, at Mbeya, Arusha, Tabora, Dodoma, Morogoro and Songea, currently record seismic activity in Tanzania independently of each other. Data recorded on magnetic tapes at each station are collected and delivered to the University of Dar es Salaam (UDSM) seismic processing center on a monthly basis. There is no real-time monitoring or reporting of earthquakes in Tanzania, which puts the population of Tanzania living in the rift valley at high risk from earthquakes and their related hazards. With the emerging development of information and communication technology (ICT), this paper proposes a new and potentially revolutionary option for the real-time monitoring of earthquake data and warning through the Short Message Service (SMS) and the Internet. The Tanzanian remote seismic recording stations would be connected to the UDSM center for real-time data collection using mobile phone networks. With this system, earthquake data would be sent through mobile phones as simple SMS messages to the computer at the UDSM data processing center. Using the Internet, the analyzed information can be sent to other emergency information centers for real-time dissemination of earthquake hazard information and early warning, as opposed to the current monthly reporting.
Keywords: Seismicity; East African rift valley system; Earthquakes; ICT; SMS; R-scale; Geological survey; Early warning; Earthquake monitoring; Real-time; Tsunami; Hazards; GPS; Radio transmitter.
INTRODUCTION
1.1 Seismicity of Tanzania
Tanzania lies between the two branches of the East African Rift valley system, which is seismically very active, and has experienced earthquakes from small to large magnitudes, as shown in figure 1 below. Figure 1(a) shows the position of the two branches of the rift valley with respect to Tanzania, and Figure 1(b) is the seismicity of Tanzania for the period 1990 to
2000, and Figure 1(c) is the seismicity of Tanzania with respect to seismic activity in the Indian Ocean. From the figure it can be observed that the areas along Lake Tanganyika, Lake Rukwa and Lake Nyasa have experienced numerous earthquakes of magnitude up to 6.6 on the R-scale. Figure 1(d) shows the locations of the seismic stations that monitor the seismic activity of different parts of Tanzania. These seismic stations record the seismic activity of particular areas of Tanzania independently of one another, and their data are usually collected and carried to the central recording site in person at intervals of one month. Among the historical earthquakes that have caused damage to communities are the earthquakes near the Mbozi area on 18/8/1994. In 2005 Tanzania was also among the countries affected by the tsunami; effects were felt countrywide, accompanied by loss of property as well as human lives. The information that Tanzania had been hit by the tsunami was obtained from international monitoring agents, and nothing was announced to the public locally to alert people to the possible after-shocks of the tsunami. Tanzanian seismic stations recorded the event, but its analysis had to wait for one month as per the collection calendar of the geological survey agent.
(a) East African Rift valley system
(b) Seismicity of Tanzania for the period 1990-2000
(c) Seismicity of the Indian Ocean for the period 1990-2000 (source: USGS center)
(d) Seismic station locations in Tanzania
Fig. 1: Tanzanian seismicity
1.2 Needs for Real-time Earthquake Data in Tanzania
From figure 1 above, it can be seen that Tanzania is seismically unsafe, with active volcanoes on its part of the African rift valley system and many occurrences of earthquakes. Earthquakes and volcanic eruptions can cause hazards that can lead to natural disasters; an example is the 26 December 2004 massive earthquake that caused the tsunami. Although natural hazards, by definition, cannot be prevented, their human, socio-economic and environmental impacts can and should be minimized through appropriate measures, including early warning and preparedness. This paper deals with the deficiencies existing in the current seismic monitoring network of Tanzania by proposing a system that could be implemented to transmit recorded seismic data to the UDSM center for real-time analysis.
2.0 EARTHQUAKE MONITORING SYSTEM
Real-time seismic data availability and analysis can be used to warn the population about identified earthquakes (Yunnan Province, PR China, 2001) and help to save lives as well as minimize their impacts on communities. To provide real-time data analysis at the UDSM center (the seismic analysis center in Tanzania), the seismic data recorded at remote stations should be transmitted directly to the processing center from all the existing seismic stations. The concept for the system layout is shown in figure 2 below.
Fig. 2: Basic system for earthquake monitoring
From Figure 2, recordings from the remote seismic stations will be transmitted to the available communication tower and re-transmitted to the UDSM center for analysis. The data to be transmitted will be purely the ground motions as recorded by the seismometers.
2.1 Technical Outline of the Current Remote Seismic Stations
Each seismic station in Tanzania is of the Orion Nanometrics Seismological Instruments type, producing a broadband signal that can be transmitted to the Global Positioning Satellite System through a built-in Global Positioning System (GPS) antenna. The Orion system is also equipped with a magnetic tape recorder for recording seismic data. Many remote seismic stations are reachable using mobile phones and are powered using solar panels.
2.2 How to Transmit the Seismic Signal to the UDSM Center
Three possible techniques can be used to transmit seismic signals to the UDSM center for analysis; which one to choose will depend on the financial position of the funding agent and the location of the seismic station. A description of each technique is given in the following sections.
(i) SMS messages through mobile phone operators
A wireless link is available to all five Tanzanian seismic stations, which are equipped with a communication port for easy connection to other systems. To use this option, a normal mobile phone receiver set and a computer will be installed at the seismic station to set up and control a communication link between the seismometer recording instrument and the transmitting mobile phone, where the latter will be connected to the recording computer at UDSM through a
second, receiving mobile phone. The computer will be configured to establish a connection at a prescribed time, download the recorded information, and then upload it to the mobile phone, which will transmit it to the UDSM center as a Short Message Service (SMS) message (Heaton, 1985; Charny, January 10, 2005). The set-up for this solution is shown in figure 3 below.
Fig. 3: Using mobile phone operators to transmit seismic data to UDSM
The daily running cost of this method is very low, being just the normal charges for downloading the recorded data as levied by the mobile operator.
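A minimal sketch of the station-side upload step is given below. It assumes a GSM modem (or phone) attached to the station PC on a serial port and uses the standard text-mode SMS AT commands; the serial port name, destination number and message format are placeholders, and a field deployment would also need error handling, message segmentation and scheduling around the prescribed download times:

import time
import serial  # pyserial

def send_sms(port: str, destination: str, text: str) -> None:
    """Send one text-mode SMS through a GSM modem using standard AT commands."""
    with serial.Serial(port, baudrate=9600, timeout=5) as modem:
        modem.write(b"AT\r")                     # check that the modem responds
        time.sleep(0.5)
        modem.write(b"AT+CMGF=1\r")              # select text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{destination}"\r'.encode())
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")     # Ctrl-Z terminates the message
        time.sleep(3)
        print(modem.read_all().decode(errors="ignore"))

# Hypothetical use: one packet of peak ground-motion readings from a station.
if __name__ == "__main__":
    reading = "STATION=MBEYA;TIME=2006-07-16T10:15:00Z;PGA=0.012g"
    send_sms("/dev/ttyUSB0", "+255700000000", reading)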
(ii) Transmission of the seismic signal through a VHF radio link
For the seismic stations that are unreachable by mobile communication towers, a VHF radio transmitter will be installed at the seismic recording station and controlled by the computer to transmit the seismic data as e-mail through a VHF modem; the data will be received at the UDSM center through a normal computer connected to the Internet. This solution has some disadvantages due to the increased cost of the VHF antennas and the power required to run them. The configuration of the transmitting antenna with the solar panel for powering is shown in figure 4 below.
Figure 4: VHF radio antenna with solar panel.
3.0 EARTHQUAKE EARLY WARNING SYSTEM
Whenever an earthquake occurs, most of the casualties are due to structural damage caused by S-waves, followed by L-waves and R-waves. These waves travel at a speed of 3 to 4 km per second in all directions. A house located 40 to 50 km away from the epicenter of a high-magnitude earthquake would therefore be hit by the S-wave after 10 to 12 seconds. It will take a further 10 to 15 seconds to damage the house if it is poorly structured. Therefore, a person gets a total of 20 to 25 seconds to move to a safer place if an alarm is raised just 10 seconds before the destructive waves hit the house. It has been observed that the surface effect of a high-magnitude earthquake (of the order of 6.5 to 9 on the Richter scale) extends only to about 400 to 500 km. Therefore, depending on the distance from the epicenter, an alarm time of 20 to 120 seconds is available to an early warning system. This is sufficient even if a middle-aged person takes 40 to 50 seconds to reach a safer place. Systems capable of providing a warning of several seconds to tens of seconds before the arrival of the strong ground tremors caused by a large earthquake are called Earthquake Early Warning Systems (Madison, 2003). An earthquake early warning system has the potential for optimum benefit as it can provide the critical alarms and information needed (i) to minimize loss of lives and property, (ii) to direct rescue operations, and (iii) to prepare for recovery from earthquake damage.
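A rough check of the warning times quoted above can be made directly from the quoted S-wave speed. The sketch below (Python) computes the nominal warning time for a range of epicentral distances, assuming a propagation speed of 3.5 km/s and a fixed detection-and-broadcast delay of 10 s; both values are illustrative assumptions consistent with the figures in the text rather than parameters of the proposed system.

S_WAVE_SPEED_KM_S = 3.5    # assumed mean of the 3-4 km/s range quoted above
PROCESSING_DELAY_S = 10.0  # assumed time to detect, locate and broadcast the alert

def warning_time(distance_km):
    # Seconds between the broadcast alert and the S-wave arrival at a site.
    return distance_km / S_WAVE_SPEED_KM_S - PROCESSING_DELAY_S

for d in (50, 100, 200, 300, 400, 500):
    print(f"{d:4d} km: about {warning_time(d):5.0f} s of warning")

For sites a few hundred kilometres from the epicentre this gives values of the same order as the 20 to 120 seconds quoted above.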
If we assume that each station communicates digital data in real time to a UDSM central processing centre, then to create an early warning system we need to install a real-time central processor and alert algorithms at UDSM. The basic features of the proposed seismic early warning system are shown in Figure 5 below. Ground motions recorded by the remote seismometers are transmitted to the UDSM central processing centre. The main parameters of an earthquake, i.e. the location, time of origin, magnitude, amplitude of ground tremors and reliability, are computed. Based on the location and the geological conditions, the nature of the ground motions expected at the site is determined. On the basis of this information the appropriate action is taken.
[Fig. 5 diagram: remote seismic stations → transmission antenna → UDSM CPU and alert algorithms → broadcast of the warning.]
Fig. 5: Earthquake Early Warning System.
The next step is the warning signal transmission. There are two possible strategies that can be adopted to communicate the alert information packet to the end users:
• direct communications from the central processing center to each user;
• an area-wide alert transmission to all users.
Direct communications from the central processing center to each end user would result in high communication costs, since a dedicated communication channel to each user must be available at all times. An area-wide broadcast of the alert information packets is the most cost-effective and flexible strategy. An area-wide alert broadcast will be accomplished via mobile phone service providers, radio and television broadcasts, and FM radio communications.
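The alert algorithms referred to above are not specified in the paper. As an illustration of the kind of first-stage processing commonly used in such systems, the sketch below (Python) implements a simple short-term-average/long-term-average (STA/LTA) trigger that flags the onset of strong shaking in an incoming ground-motion stream; the window lengths and trigger ratio are assumed values, not those of the proposed UDSM processor.

def sta_lta_trigger(samples, sta_len=50, lta_len=500, ratio=4.0):
    # Return the first sample index at which the short-term/long-term average
    # energy ratio exceeds the trigger level, or None if no event is detected.
    energy = [s * s for s in samples]
    for i in range(lta_len, len(energy)):
        lta = sum(energy[i - lta_len:i]) / lta_len
        sta = sum(energy[i - sta_len:i]) / sta_len
        if lta > 0 and sta / lta >= ratio:
            return i
    return None

# Tiny synthetic test: quiet background followed by a burst of strong motion.
background = [0.01] * 1000
burst = [1.0] * 200
print("trigger at sample:", sta_lta_trigger(background + burst))

A full system would add location and magnitude estimation on top of such a trigger, as described above.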
4.0 CONCLUSION
The current development in the telecommunication industry is allowing people to communicate cheaply with mobile phones anywhere in the world. Earthquake monitoring and early warning utilizing the mobile phone infrastructure is a suitable technique for establishing the system wherever the service is available. The system has room for expansion to accommodate warnings related to other sources such as tsunamis and floods, as well as diseases. We recommend that the government consider sponsoring the establishment of the system, because it will make saving the lives of Tanzanians an easier task and will save the unplanned budgets spent on disaster recovery.

5.0 ABBREVIATIONS
ICT   Information and Communication Technology
UDSM  University of Dar es Salaam
GPS   Global Positioning System
SMS   Short Message Service
REFERENCES
Yunnan Province, PR China (2001), Improvement of Earthquake Disaster Reduction and Early Warning System.
Heaton, T.H. (1985), A Model for a Seismic Computerized Alert Network, Science 228, 987-990.
National Research Council (1991), Real-time Earthquake Monitoring, National Academy Press.
Madison (May 7, 2003), Earthquake Alarm System May Ease Risk for Southern Californians.
Charny, B. (January 10, 2005), SMS Enlisted for Tsunami Warning System?
CHAPTER TEN: LATE PAPERS
NEW BUILDING SERVICES SYSTEMS IN KAMPALA'S BUILT HERITAGE: COMPLEMENTARY OR CONFLICTING INTEGRALS?
A.K. Birabi, Department of Architecture, Faculty of Technology, Makerere University, P.O. Box 7062, Kampala, Uganda.
ABSTRACT
The research looks at the present characterization of built heritage in Uganda, with a specific case study of Kampala. The emphasis is on built heritage conservation. It focuses on the historical aspects of the heritage and its current state of disuse, neglect, adulteration and destruction. An appraisal is presented of the trend and impacts of current design practices and of the upgrading and installation of new building services systems in the built heritage. The urgency of striking a balance between conserving the City's built heritage and appropriate installation of contemporary building services systems is real. The study compared the practice in Uganda with that of other countries, which guided the appraisal, against a backdrop of fundamental principles and practices of architectural conservation, which are critical determinants of conservation management in many cities today.
Keywords: Architectural heritage, Built heritage, Architectural planning, Historic buildings, Architectural conservation.
1.0 INTRODUCTION
In contemporary times, built heritage conservation has become a focus of attention in the discourse of architecture and planning, a powerful determinant of humanistic culture and a decision-making factor in urban design (Kovács 1999). Consequently, broader commitment towards safeguarding the built heritage is globally on the increase in many cities. However, while the built heritage can give us a sense of wonder and direction by virtue of numerous architectural, aesthetic, historic, social, economic, touristic, spiritual and symbolic values
which society can derive, this seems not to be the case with Kampala City. The current trend is characterized by neglect, adulteration or destruction of the built heritage, and yet the above-mentioned values demand that it should continue to stand and remain in use. In order to ensure the continued existence, sustainable use and care of the built heritage and historic buildings, there have to be combined efforts towards: (a) prevention of decay instigated by human beings, the climate and natural processes; (b) dynamic management of adaptive re-use and change. This paper reports on an appraisal of the trend and impacts of current design practices and of the upgrading and installation of new building services systems in the built heritage of Kampala. The urgency of striking a balance between conserving the City's built heritage and appropriate installation of contemporary building services systems is real. The appraisal was made against a backdrop of fundamental principles and practices of architectural conservation, which are critical determinants of conservation management in many cities today.

2.0 KEY ASPECTS IN FUNDAMENTAL PRINCIPLES AND PRACTICES OF ARCHITECTURAL CONSERVATION
Ever since the 19th Century moralistic and didactic writings of John Ruskin and William Morris, clear principles of intervention have been developed and fine-tuned, which remain orthodox in architectural conservation (Pendlebury 2002). Ruskin and Morris stressed upholding the sanctity of authentic historic fabric in a largely moralistic and cultural context. To date, built heritage conservation undertakings regarded as successful are those judged to have 'respected' and 'preserved' the very special qualities, values and special characteristics embedded in the attendant fabric. This is a cardinal architectural conservation principle, which stresses minimum intervention and harmonious weaving of any consequent changes or additions with the fabric's original setting. The above principle is significant on two grounds: (a) it provides security against unnecessary damage to the structural, compositional, aesthetic and visual appearance of the built heritage; (b) it ensures that irreversible changes in the historic fabric are kept to a minimum and that any repair or alteration is hence purposed, justified and precise.

3.0 THE CURRENT CONTEXT OF KAMPALA'S BUILT HERITAGE
Currently, Kampala's built heritage is caught in the vortex of the City's general deterioration of its historic environments, powered partly by demographic transpositions. These transpositions are characterized by the phenomenal rural-urban migration crisis. Accordingly, Kampala alone "...receives many migrants from rural areas every day..." and yet without
enough housing for them (Ngatya 2000). Consequently, the City's present housing stock and the built heritage are heavily pressurized. As a result, many structures of the historic fabric are either in poor condition, in danger of collapse or threatened with disappearance. Buturo (2001: 8-9) terms the conditions "...distressing housing standards... unacceptable and shameful". The structures require rehabilitation to reveal their historic and artistic importance. Derelict vehicles, dumped rubbish and scrap metal, and problems of poor stormwater drainage coupled with uncollected garbage undermine efforts towards a favourable built heritage maintenance regime. Inadequacies in services, infrastructure and street lighting contribute to the high rate of crime and general malaise (Ngatya 2000; Wegener 2001; Lwasa 2002). Walls, fences, hedges, paving, ground surfacing and roads are dilapidated. In fact, a number of roads in the city have lost their side-walks, road reserves and drainage channels, which explains their fast rate of deterioration and the erosion of adjacent areas. There is indifference to vehicular traffic control measures in Kampala. In fact, the concentration of good roads in the Central Division, coupled with an uncontrolled mix of options regarding vehicles, has immensely increased vehicular transit-ridership levels. Thus, most cars and heavy lorries pass through the Central Division, causing tremendous damage to the City's roads and historic environments (NEMA 2000; Buturo 2001). Also, shrubberies, gardens, parks and other green spaces have been overrun by other informally permitted land uses or squatter pursuits. Squatter pursuits include barbecue (muchomo) joints and the selling of charcoal, fish, tom... (see Figs. 1-4 for makeshift squatter pursuits).
Fig. 1: Food preparation joint in the grounds of a historic building, Old Kampala.
Fig. 2: A chapati seller's stall in the ruins of a historic building, Old Kampala.
Fig. 3: A pork joint on the route to Kasubi World Heritage Site.
Fig. 4: Green grocer in the road reserve.
In a related occurrence, Kampala's skyline and scenic beauty are cluttered with billboards placed haphazardly and with no statutory control by KCC. There is no doubt that these billboards are a severe visual intrusion and are central to the erosion of the visual ambience of Kampala (Fig. 5).
Fig. 5: No control on billboards: meeting point of Kyagwe and Rashid Khamis Rd.
Furthermore, the environs of Kampala are severely neglected. Vegetation is overgrown (Fig. 6). Unawareness of value has also translated into a poor maintenance culture for the entire built environment (Figs. 7 to 9). Poor maintenance also spans the degradation of one-time prestigious outdoor seating areas, lawns, parks and gardens enclosed by lanes and their houses.
Fig. 6: Bush surrounds historic buildings: the mid-ground is part of Plot No. 45 and in the background are flats on Plot No. 47, Rashid Khamis Rd.
Fig. 7: Steps of the walkway connecting Kiseka Market and Rashid Khamis Rd. through Plot No. 43.
Fig. 8: Neglect of historic buildings: entrance to Delhi Gardens.
Fig. 9: Part of Nsalo Rd., a one-time prestigious tarmac.
Thus, the local setting of Kampala's built environment has a chronic state of untidiness, with footpaths made at will in any spot, which also double as drainage channels where people dump their rubbish to be washed away by rain. In a parallel occurrence, sewage lines regularly flood, causing health hazards. Also, domestic garbage is dumped in storm water pathways due to insufficient disposal containers (Fig. 10 (a) and (b)).
Fig. 10 (a) and (b): Insufficient garbage disposal facilities.
The City's environs in some instances contain congested dwellings with high health-risk levels. They often have either poor or non-existent sanitation facilities, and so dwellers dispose of excreta in crude drainage channels in polythene bags (buvera) (Fig. 11). Thus, the setting of Kampala's built heritage has a chronic state of untidiness.
Fig. 11: Abused drainage channels.
Today, Kampala's historic urban neighbourhoods have fallen from high brilliance, and a number of historic properties have been adulterated, redeveloped or demolished. Buturo (2001: 8) deplores the city's appalling physical state and management and notes: At the present rate, if nothing is done, the city of Kampala will soon have the dubious distinction of qualifying as the biggest slum-cum-capital city of an independent country in Africa. Likewise, the World Monuments Watch (2004: 21) warns:
In such a climate, ignorance of the value of Kampala's architectural heritage is the most serious threat to the city's future. If present circumstances persist... Kampala's historic urban assets will soon be decimated.
Summed up, Kampala's built heritage is at risk. What surfaces is that deliberate efforts must be made to introduce a sustainable built heritage conservation management regime in Kampala, which seems to be lacking. Effective management of building services for the built heritage is part of such a conservation management regime, and is the main subject of this paper.

4.0 CURRENT MANAGEMENT OF BUILDING SERVICES SYSTEMS IN KAMPALA'S BUILT HERITAGE
As observed in many cities, adaptive re-use of the built heritage is a popular strategy for breathing new life into old historic fabric or entire historic environments. Apparently, the temptation to adapt the historic fabric for new uses and to rehabilitate or upgrade it to the present-day needs of occupants/users is irresistible. The experiences of different cities attest that incidences of rehabilitation and/or upgrading unleash new brands of competing logics and conflicting overtures controversial to architectural conservation principles. Most profound is the incompatibility of modern building materials and construction approaches with the construction techniques of the past. Most of the built heritage of Kampala was not designed to take modern high-tech building components. As the replacement of the old with new fixtures grows in magnitude and complexity, coupled with the demands of a high-tech lifestyle, as many problems are created as are solved, while failing to renew much of the built heritage for posterity. As a result, the instances in which new building services systems have replaced original types have raised serious technical, functional and aesthetic shortcomings despite initial good intentions. This problem has been aggravated by oversight of conservation-centred considerations during processes of rehabilitation or upgrading of building services systems in historic fabric. This is also due to the absence of a holistic and inter-disciplinary approach to conservation challenges in Kampala. I shall refer to the 'holistic and inter-disciplinary approach' as 'Category A'. The lack of this category in Kampala is because the varied professionals in the relevant disciplines are often in constant rivalry and struggle for territorial autonomy rather than engaging in meaningful inter-disciplinary interaction. Rowntree (1981: 135), a distinguished writer in Development and Professor at California University and the Open University since 1970, refers to inter-disciplinarity as an aspect in which two or more disciplines are brought together, preferably in such a way that the disciplines interact with one another and impose positive effects on one another's perspectives.
In the case of Kampala, it is evident that the varied professionals are instead more inclined to 'multi-disciplinarity'. I refer to this type as 'Category B'. Rowntree (1981: 135) also distinguishes it as that in which "...several disciplines are involved but do not interact with one another in coming to their conclusions". In some extreme situations some professionals go as far as cocooning themselves in uni-disciplinarity, which I refer to as 'Category C'. Uni-disciplinarity is the performance "...in which only one discipline is concerned" with problem-solving (Rowntree 1981: 135), supposedly the knower of everything! Reflecting on the above three situations, as far as 'best value' frameworks are concerned, Category A signifies an integrated mode of built heritage conservation accomplishment and management of the attendant building services systems. It is a more shrewd and effective strategy, which harnesses complementary processes. Complementary by definition means "something that completes, makes up a whole, or brings to perfection" (The American Heritage Dictionary of the English Language 2000). As long as we do not harness complementary processes in the drive for adaptive re-use and/or rehabilitation of the historic fabric, there will be continuous loss of the best attributes and features of Kampala's heritage buildings, despite our best intentions. The integrated, inter-disciplinary and holistic mode recognizes and validates the wisdom that is available in conservation-worthy building services systems among the built heritage, and holds that their meaningful replacement is complementary, rather than seeking extreme alternatives. Category A is preferred over Category B, which merely denotes bundling varied professionals together but with a rather artless and inconsequential point of departure for managing building services systems and their host built heritage. Certainly, Category C is a practice with ambiguity and short-sightedness which offers a disservice and damaging constraints on the long-term sustenance of building services systems and the built heritage in Kampala. As this paper reflects on these categories, the challenge for professionals, be they individuals or whole associations or learned societies gathered here today, is to immerse themselves in sincere critical self-evaluation. In short, given the chance for a culture of inter-disciplinary interaction, Kampala's built heritage would be pervaded with acceptable solutions and appropriate conservation practices. It would result in the application of the specific professional inputs of the relevant disciplines to levels relative to the requisite conservation accomplishment. In the cities leading in best built heritage conservation and management practices, teams of professionals are indispensable. Quite often they include architects, engineers, artisans, conservators, archaeologists, economists, surveyors, building historians, building contractors, town planners and environmentalists, together with certain specialist consultants (Swanke Hayden Connell Architects, 2000).
To pick up the thread of contemporary building services systems in particular, ongoing practices pose great challenges to the future of Kampala's built heritage. Among the challenges, insufficient care is paid to building services planning, design and installation in Kampala's general historic environments and particularly in the built heritage. While the architectural conservation discipline grants that building services elements in a building typically last 15 to 30 years, ongoing rehabilitation, renewal and/or upgrading of building services systems in Kampala is not being done in accordance with the character of the old fabric. For instance, pipe-work, water storage tanks, electrical and telecommunication cabling, lighting fixtures, ornamental switch-plates, etc., and their distribution routes, originally concealed, have been replaced with poor-quality services systems which are moreover exposed on walls, ceilings or roof-tops (Figures 12a, 12b and 12c, and Figures 13 to 31).
Fig. 12 (a), (b) and (c): Ruparelia House, Martin Rd., Old Kampala. Unsatisfactory electrical cabling.
Fig. 14: Dangerous electrical cabling: Nazziwa House, Plot No. 57, and Ruparelia House, Martin Rd., Old Kampala.
Fig. 15: Intrusive water tanks and plumbing: Martin Rd., Old Kampala.
Fig. 16: Lumumba Hall, Makerere University.
Fig. 17, Fig. 18 and Fig. 19: Details of intrusive elements: water tanks, TV antennas and relief plumbing.
Fig. 19: Thorny surface plumbing causes pedestrian pain.
Fig. 20: Awkward waste water plumbing, Lumumba Hall, Makerere University.
Fig. 21 (a) and (b): Clumsy bath tub plumbing, Livingstone Terrace, Makerere University.
Fig. 22
Fig. 23: Plumbing at the entrance of 10, Livingstone Terrace.
Fig. 24: Hazardous electrical wiring.
Fig. 25: Poorly designed additions.
Fig. 26: Hazardous wiring from the ceiling; a socket protector.
Fig. 27
Fig. 28: Current power shortages have also created new inappropriate building services: an unsealed high-voltage generator planted in the middle of a corridor, with unsatisfactory cabling, Mackay Rd.
Fig. 29 (a), (b) and (c): Air conditioners, complex fences, etc.

5.0 MODERN GADGETS AND FACETS PROBLEMATIC TO AESTHETIC APPEAL AND AMBIENCE OF HISTORIC BUILDINGS
Fig. 30 (a) and (b): Mailo Two, Bombo Rd.
Fig. 31: Amiss building additions are also creating ugly extrusions.
Additional new systems such as emergency systems for fire, burglar-proofing, window blinds, louvers and air-conditioning have not been appropriately designed or concealed to match the character of the built heritage. These have led to inappropriate and ugly extrusions and a consequent loss of the historicist character and retrospective atmosphere. In another critical occurrence, some of the buildings, which have lasted close to 100 years, have not had any replacement, renewal or upgrading of their building services components: gutters have fallen off; doors, windows and roof-timber trusses are in a precarious state; and effluent disposal is entirely non-functional. An example is the historic building at Mailo Two, Bombo Rd. A number of weaknesses are attributed to this trend of things and I shall touch on the outstanding ones. To begin with, concerning inspection and appraisal, Kampala residents or owner-occupiers of heritage buildings do not engage in inspecting existing building services whenever they carry out rehabilitation or upgrading of their properties. This is against the general background of poor or non-existent standards of documentation and information on the building services systems of the entire built environment, partly due to the past disturbances that Uganda went through, as a result of which such information went missing. In such circumstances,
while adequate preliminary investigation by means of a combined site survey, testing of existing building systems, assessment of the building's structural and aesthetic condition, and user consultation would be necessary, most Kampala owner-occupiers of heritage buildings view it as a luxurious and prohibitive expense. Concerning missing information, the ideal is a re-doing of measured drawings so as to comprehend the structures in question and their antique design concepts, and to unlock a creatively complementing re-touch. However, this notion is also often dismissed as a costly element. Consequently, meaningful renewal or refurbishment strategies, which would be the province of professionals such as architects, engineers, artists, conservators, archaeologists, surveyors, building historians, building contractors, town planners, etc., are rarely realized. Also, considering that the collaborative effort of these professions yields the best conservation of the old fabric, together with the safety, security, maintenance, health and comfort of users, it is imperative for inter-disciplinarity to take root in the management of Kampala's building services systems. In another related perspective, requisite financial and contingency planning is also overlooked. In addition, the building technologies and properties of the older materials are never grasped. This has influenced poor choices in the matter of selecting, designing and replacing building services systems. Serious mistakes have also occurred while replacing electrical and mechanical systems or introducing entirely new ones. Additions include interior design elements such as acoustics, carpeting, artificial ventilation and synthetic wall coverings, which have reduced the quality of the indoor environment, since the initial designs of the older buildings were based on the local natural tropical setting and low energy consumption. Thus, aspects such as natural ventilation, day-lighting and receptiveness to nature were kept in full view. On the contrary, the new additions have led to thermal, spatial, circulation, ventilation and environmental discomfort.

6.0 CONCLUSION
Kampala's built heritage gives us a sense of wonder and makes us appreciate its changing history, culture, economy and geography (Graham et al 2000). It has architectural, aesthetic, historic, social, economic, spiritual and symbolic values that are reminiscent of societal configurations and need to be preserved. Whereas Kampala's planning and development ethos has been similar to that of the post-WWII era of demolition and redevelopment, the built heritage must be given the chance to continue standing and to remain in use. In part, this requires harmonizing the replacement of building systems with conservation processes pertaining to the old structures. New additions should be made in a way that preserves original significant materials and features and preserves the historic character of the fabric. Relevant professionals need to pay the necessary attention to the design and management of building services systems when it comes to their replacement in old structures. Once properly replaced, building services are bound to boost Kampala's preventive conservation of historic
structures. They would prolong their lifetime, enhance their beneficial use and improve their internal environments.

REFERENCES
Birabi, A.K., "Diary of Events/Activities of an Academic Visit to the Bartlett School of Graduate Studies, UCL", CETA Introductory Programme (London: University College London, 1996).
Graham, B., Ashworth, G.J., and Tunbridge, J.E., A Geography of Heritage (London: Arnold, 2002).
Rowntree, D., A Dictionary of Education (London: Harper and Row Publishers, 1981).
Swanke Hayden Connell Architects, Historic Preservation: Project Planning & Estimating (Kingston: R. S. Means Company, 2000).
FUZZY SETS AND STRUCTURAL ENGINEERING
Zdeněk Kala and Abayomi Omishore, Brno University of Technology, Faculty of Civil Engineering, Institute of Structural Mechanics, Czech Republic
ABSTRACT
This paper addresses the issue of uncertainty in engineering, with which we are often forced to work whether we want to or not. The article presents an alternative approach of modelling initial imperfections as fuzzy numbers. The evaluation of the load carrying capacity of a steel plane frame loaded at the columns is presented. The load carrying capacity was solved utilizing the geometric non-linear beam solution. The fuzzy analysis of the frame load carrying capacity is compared with the random load carrying capacity determined according to the LHS method, based on real material and geometric characteristics, and according to EUROCODE 3.
Keywords: Fuzzy; Steel; Frame; Random; Stochastic; Imperfection; Geometric non-linear beam solution; Load carrying capacity.
1.0 INTRODUCTION
Uncertainty is an essential and inescapable part of life. Requirements on the load-carrying capacity and the serviceability of building structures are generally met with two types of uncertainty: randomness caused by the natural variability of fundamental quantities, and uncertainty due to vague and inaccurate definitions of the requirements of standards. The influence of imperfections on the uncertainty of the load carrying capacity of a steel plane frame subjected to loading at the columns is presented. Imperfections are considered both as random variables and as fuzzy numbers. The feasibility limits of mathematical statistics in the formalization of imprecision are demonstrated. In the case of imperfections, the limitations are due to the lack of statistical data from a higher number of experiments. The required measurements are, under heavy service conditions, either totally impossible to perform, or the quality of their information is so low in terms of securing the required robustness that they are unusable, Kala (2006). The unknown statistical data present a source of uncertainty in the solution of stochastic models. Using the example of a steel plane frame, the paper presents a number of problems limiting the widespread utilization of stochastic methods. Probabilistic methods are advantageous in the sense that they enable relatively easy evaluation of existing structures, even from the point of view of their residual lifetime. Probabilistic methods have their place in research activities, Kala (2005), Sadovsky et al (2004); however, there are problems limiting their widespread utilization. Classical stochastic methods, e.g. Monte Carlo, assume a stochastic character of events that are not fully accurately determined. The use of mathematical statistics for the treatment of uncertainty requires that the respective events
are well-defined elements of the set and that the enunciation of their significance is defined just as well. An alternative method of evaluation, with imperfections treated as fuzzy numbers, is provided utilizing fuzzy sets, Kala (2005).

2.0 FUZZY SYSTEM IMPERFECTIONS
Imperfections are input data necessary in a series of studies, e.g. of beam members, Kala (2005), and of thin-walled structures, Ravinger & Psotny (2004). The analysis of their random variability is, however, difficult. Random realizations cannot practically be obtained from measurements on many structures. We generally have only inaccurate information at our disposal, e.g. from the tolerance standards or from a small number of measurements, the evaluation of which is burdened with high statistical error.
[Fig. 1 diagram: steel plane frame composed of IPE 240 sections, with 4.5 m dimensions, loaded by forces F at the columns.]
Fig. 1: Geometry of the steel plane frame.
Three sets of system imperfections are assumed by the tolerance standard ENV 1090-1 (1996), (Fig. 2). Unlike in crisp set theory, where we would only differentiate between the cases when a member belongs to the set and when it does not, fuzzy set theory additionally defines the degree to which a member belongs to the set. The transitional region is gradual in fuzzy set theory. The more the frame shape with imperfection approximates the ideal shape (Fig. 1), the more it belongs to the set.
[Fig. 2: Tolerance limits of system imperfections — the three sets of column-top imperfections e_1, e_2 are defined in terms of the column height h, with e_a = (e_1 + e_2)/2 limited by h/500, and the sets e_b = e_1 = e_2 and e_c limited by ±h/100.]
Because the frame (Fig. 1) is symmetric, it suffices to assume three basic sets (Fig. 2). In complying with the tolerance limits (Fig. 2) it is possible to define three fuzzy sets of imperfections (Fig. 3). The remaining variables are assumed as singletons at their characteristic values. Steel S235 was considered.
[Fig. 3: Fuzzy sets of imperfections — three membership functions (degree of membership 0 to 1) plotted against the imperfections e_1, e_2 in mm, over ranges of about 0-10 mm, 0-60 mm and 0-50 mm.]
Fig. 3: Fuzzy sets of imperfections.
The boundary conditions at the lower ends of the columns were modeled as ideal hinges. The beam-to-column joints were considered as ideally rigid. The fuzzy analysis of the buckling lengths of a frame with semi-rigid joints was performed in, e.g., Omishore (2006).

3.0 LOAD-CARRYING CAPACITY AS A FUZZY NUMBER
The load carrying capacity was evaluated utilizing the geometric non-linear solution with beam finite elements. The meshing of the frame into finite elements and the numerical set-up of the non-linear solution were performed as in Kala (2006). The fuzzy analysis was evaluated according to the general extension principle, Novak (2000), utilizing the so-called response function, Stemberk (2000). Basic fuzzy arithmetic (addition, subtraction, multiplication, division) can then be performed utilizing this function. Let ∘ be an arithmetic operation (e.g. addition, division) and let Z_1, Z_2 ⊆ R be fuzzy numbers. The extension principle then allows the extension of the operation ∘ to an operation ⊗ on fuzzy numbers in the following manner:
$(Z_1 \otimes Z_2)(z) = \bigvee_{z = x \circ y} \bigl( Z_1(x) \wedge Z_2(y) \bigr)$
The result of the operation is a fuzzy number Z_1 ⊗ Z_2, consisting of elements z = x ∘ y with a membership function given by the minimum of the membership values of the operands, x in the fuzzy number Z_1 and y in the fuzzy number Z_2, maximized over all pairs producing z.
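As an informal illustration of the extension principle stated above, the following sketch (Python) represents fuzzy numbers discretised as dictionaries mapping support points to membership degrees; for each output value, the result takes the maximum, over all input pairs, of the minimum of the two memberships. A pointwise-maximum union, of the kind used later to combine the capacity sets, is included for completeness. The discretisation and example values are assumed for illustration only and are unrelated to the imperfection sets of this paper.

def extend(op, z1, z2):
    # Extension principle for discretised fuzzy numbers.
    # z1, z2: dicts {value: membership}. Returns the dict for z1 (op) z2.
    out = {}
    for x, mu_x in z1.items():
        for y, mu_y in z2.items():
            z = op(x, y)
            out[z] = max(out.get(z, 0.0), min(mu_x, mu_y))
    return out

def union(*fuzzy_sets):
    # Union of fuzzy sets: pointwise maximum of membership degrees.
    out = {}
    for fs in fuzzy_sets:
        for v, mu in fs.items():
            out[v] = max(out.get(v, 0.0), mu)
    return out

# Illustrative triangular fuzzy numbers on a coarse grid.
Z1 = {1: 0.0, 2: 0.5, 3: 1.0, 4: 0.5, 5: 0.0}
Z2 = {10: 0.0, 20: 1.0, 30: 0.0}
print(extend(lambda x, y: x + y, Z1, Z2))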
Fig. 4: Fuzzy sets of load carrying capacity [three membership-function plots, degree of membership versus load-carrying capacity, roughly 51-57 kN].
The results obtained for the three estimated fuzzy sets of the load carrying capacity are depicted in Figure 4. The resultant fuzzy set of the load carrying capacity is obtained through the union of the obtained fuzzy sets. The utilization of the union operation enables the transparent incorporation of imperfections traditionally treated according to the standard ENV 1090-1 (1996). The requirement that system imperfections of the "same direction" have stricter tolerance limits than imperfections of "different direction" is easily satisfied (Fig. 2). The resultant load carrying capacity obtained through the union of the three fuzzy sets is shown in Figure 5.
Fig. 5: Fuzzy set of load-carrying capacity [membership function over roughly 51-57 kN].

4.0 COMPARISON OF FUZZY, STOCHASTIC AND DETERMINISTIC (EC3) ANALYSIS
It would be best to consider imperfections from measurements on real structures. This, however, cannot practically be carried out, because building structures of one type are usually constructed as unique and so an observation file cannot be obtained. Another possibility would be to assume that the initial imperfections are found with a certain probability within the tolerance limits of the standard ENV 1090-1 (1996). One of the possible heuristic approaches was published in Kala
(2006). The standard ENV 1090-1 (1996) differentiates the sets of basic imperfection cases, which is impractical for stochastic models. The stochastic solution was evaluated utilizing 300 runs of the LHS method. The input random variables were considered according to Melcher et al (2004), see Tab. 1. The Gaussian probability distribution was assumed for all random variables (with the exception of e_1, e_2). The imperfections e_1 and e_2 were implemented according to Figure 8 of Kala (2006), for zero correlation. The remaining variables, which are not listed in Tab. 1, were considered as deterministic, given by their characteristic values; results of the sensitivity analysis of Kala (1998) were used for their estimation. The load carrying capacity was evaluated from the geometric non-linear solution utilizing the Euler-Newton-Raphson method. The comparison of the load carrying capacity given as fuzzy numbers, as random variables and as a deterministic value according to the standard is difficult. Each method is based on a different theory and processes qualitatively different information, and hence has different predictive capabilities. The deterministic semi-probabilistic solution is evaluated according to the standard EUROCODE 3, EN 1993-1-1 (2005). This standard is currently being translated into the Czech language and will soon be available to design engineers. The two basic methods of assessing the load carrying capacity of the frame in Figure 1 are the stability solution with the buckling length and the geometric non-linear solution. The membership functions were considered formally identical with the Gaussian distribution, see Tab. 1 (the degree of membership of 1.0 corresponds to the core of each fuzzy set). Similarly as in Section 3, the resultant fuzzy set of the load carrying capacity was obtained through the union of the three solutions obtained for the imperfection cases: sets A, B, C (Fig. 2). The load carrying capacity was evaluated from the geometric non-linear solution utilizing the response function.
Table 1: Input random variables

No.  Member      Variable          Symbol  Mean value   Standard deviation
01   Column 1    Flange thickness  t2      5.6601 mm    0.26106 mm
02   Column 1    Young's modulus   E       210 GPa      12.6 GPa
03   Column 1    Yield strength    fy      297.3 MPa    16.8 MPa
04   Column 1    Imperfection      e1      0 mm         3.7 mm
05   Cross beam  Flange thickness  t2      9.7314 mm    0.44884 mm
06   Cross beam  Young's modulus   E       210 GPa      12.6 GPa
07   Column 2    Flange thickness  t2      5.6601 mm    0.26106 mm
08   Column 2    Young's modulus   E       210 GPa      12.6 GPa
09   Column 2    Yield strength    fy      297.3 MPa    16.8 MPa
10   Column 2    Imperfection      e2      0 mm         3.7 mm
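For readers unfamiliar with the sampling scheme, the sketch below (Python) shows one common way of generating Latin Hypercube samples of independent Gaussian variables such as those in Table 1: each variable's probability range is divided into equal strata, one value is drawn per stratum through the inverse distribution function, and the columns are shuffled independently. This is a generic illustration under the stated assumptions (independence, no correlation control) and is not the specific LHS implementation used by the authors.

import random
from statistics import NormalDist

def lhs_gaussian(means, stds, n_runs, seed=0):
    # Latin Hypercube sample of independent Gaussian variables.
    # Returns a list of n_runs rows, each a list of sampled values.
    rng = random.Random(seed)
    columns = []
    for mean, std in zip(means, stds):
        dist = NormalDist(mean, std)
        strata = [(k + rng.random()) / n_runs for k in range(n_runs)]  # one draw per stratum
        values = [dist.inv_cdf(u) for u in strata]
        rng.shuffle(values)  # break the pairing between variables
        columns.append(values)
    return [list(row) for row in zip(*columns)]

# Two of the Table 1 variables: yield strength f_y [MPa] and imperfection e1 [mm].
samples = lhs_gaussian(means=[297.3, 0.0], stds=[16.8, 3.7], n_runs=300)
print(len(samples), samples[0])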
Fig. 6: Fuzzy sets of load-carrying capacity [four membership-function plots, degree of membership versus load-carrying capacity, roughly 40-70 kN].
Fig. 7: Comparison of complex indeterminate (fuzzy, stochastic, deterministic) analysis [degree of membership and relative frequency plotted against load-carrying capacity (kN); the fuzzy set, the LHS relative-frequency histogram, the stability solution and the EC3 non-linear minimum are indicated].
5.0 CONCLUSIONS
The differences between the fuzzy, stochastic and EC3 solutions presented in Figure 7 corroborate the general vagueness of modeling and indicate the limits of application of conventional methods in complex systems. The accuracy of the stochastic solution is dependent on the adequacy of the input random variables (including their correlation). The number of experiments carried out is frequently insufficient, and supplementary observations are often not available for technical or economic reasons. In general, the higher the complexity of a system, the more the solution may suffer from a shortage of input data. The example demonstrates the basic inadequacy of stochastic procedures, which rests on their ability to consider only uncertainty of the stochastic type. For the steel frame in Figure 1 it is practically impossible to obtain a higher number of experimental results from the evaluation of measurements on more frames. The displacement of each column top is a random variable, whereas the uncertainty in modeling rests in the uncertainty of the statistical characteristics and the correlation between the parameters of the system imperfections. With the aim of a broader implementation of stochastic methods into design practice, it is necessary to analyze and solve all the problems preventing their general utilization, Marek et al (1995).

6.0 ACKNOWLEDGEMENT
This research was supported by grant KJB201720602 of the AV ČR and by research center project 1M68407700001.
REFERENCES
Holický, M. (1999), Fuzzy Probabilistic Optimization of Building Performance, Journal of Automation in Construction 8, pp. 437-443, ISSN 12102717.
Kala, Z. (2006), From Partial Safety Factors Methods to the Probabilistic Concept of System Reliability Analysis, In Proc. of VII Conf. on Reliability of Structures, Praha.
Kala, Z. (2005), Fuzzy Sets Theory in Comparison with Stochastic Methods to Analyze Non-linear Behavior of a Steel Member under Compression, Journal of Non-linear Analysis: Modelling and Control, Vol. 10, No. 1, pp. 65-75, ISSN 1392-5113.
Kala, Z. (1998), Sensitivity of the Steel Plane Frame on Imperfections, Stavební Obzor, Praha: ČVUT, No. 5, pp. 145-149, ISSN 1210-4027.
Kala, Z., Omishore, A. (2005), Comparison of Fuzzy Set Theory and Stochastic Method in Application to the Analysis of the Load-Carrying Capacity of Steel Members Under Tension, In Proc. of Int. Conf. Lightweight Structures in Civil Engineering, Warsaw, pp. 188-189, ISBN 83-908867-9-0.
Kala, Z., Novák, D. and Vořechovský, M. (2001), Probabilistic Nonlinear Analysis of Steel Frames Focused on Reliability Design Concept of Eurocodes, In CD Proc. of the 8th International Conference on Structural Safety and Reliability, ICOSSAR, Newport Beach, California, USA, 2001, ISBN 905809 197 X.
Kala, Z., Strauss, A., Melcher, J., Novák, D., Fajkus, M., Rozlivka, L. (2005), Comparison of Material Characteristics of Austrian and Czech Structural Steels, International Journal of Materials & Structural Reliability, Vol. 3, No. 1, pp. 43-51.
Kmeť, S. (2005), Probability Design Values Pfa, In Proc. of VI Conf. on Reliability of Structures - From Deterministic to Probability Approach of Engineering Appraisal of Structure Reliability, Ostrava, pp. 109-118, ISBN 80-02-01708-0.
Marek, P., Guštar, M., Anagnos, T. (1995), Simulation-Based Reliability Assessment for Structural Engineering, CRC Press, Inc., Boca Raton, Florida.
Melcher, J., Kala, Z., Holický, M., Fajkus, M. and Rozlivka, L. (2004), Design Characteristics of Structural Steels Based on Statistical Analysis of Metallurgical Products, J. Constructional Steel Research 60, pp. 795-808, ISSN 0143-974X.
Mendel, J. (1995), Fuzzy Logic Systems for Engineering, In Proc. of the IEEE, Vol. 83, No. 3, pp. 345-377.
Novák, V. (2000), Basics of Fuzzy Modeling, BEN, ISBN 80-7300-009-1. (in Czech)
Omishore, A., Kala, Z. (2006), Fuzzy Analysis of Buckling Lengths of Steel Frame, In CD Proc. from the Conf. Modeling in Mechanics, Ostrava, ISBN 80-248-1035-2.
Omishore, A. (2005), Fuzzy Set Theory in Application to the Analysis of the Load-Carrying Capacity of Steel Member under Compression, In Abstracts Collection and Proceedings (CD) of the VII Conference with Abroad Participation "Statics and Physics Problems of Structures", Vysoké Tatry, Štrbské Pleso (SR), pp. 77-78, ISBN 80-232-0189-1.
Ravinger, J., Psotný, M. (2004), Stable and Unstable Paths in the Post-Buckling Behaviour of Slender Web, In Proc. of the Fourth International Conference on Coupled Instabilities in Metal Structures, Rome (Italy), pp. 67-75.
Sadovský, Z., Guedes Soares, C., Teixeira, A.P. (2004), On Lower Bound Solutions of Compression Strength of Plates with Random Imperfections, In Proc. of Fourth Int. Conf. on Thin-Walled Structures, Loughborough (England, UK), pp. 835-842, ISBN 0 7503 1006-5.
Strauss, A., Kala, Z., Bergmeister, K., Hoffmann, S., Novák, D. (2006), Technologische Eigenschaften von Stählen im europäischen Vergleich, Stahlbau, 75, Heft 1, ISSN 0038-9145.
Škaloud, M., Melcher, J., Zörnerová, M., Karmazinová, M. (2005), Two Studies on the Actual Behavior and Limit States of Steel Structures, In Proc. of the Int. Conf. on Advances in Steel Structures, Shanghai, pp. 1091-1098, ISBN 0-08-044637-X.
Stemberk, P. (2000), Fuzzy Set Theory Applications, Ph.D. Thesis, Prague, p. 87.
Stemberk, P. (2001), Alternative Way of Material Modeling, In Proc. of 9th Zittau Fuzzy Colloquium 1, pp. 180-190, Zittau/Görlitz.
Vičan, J., Koteš, P. (2003), Bridge Management System of Slovak Railways, In Proc. of the III International Scientific Conference Quality and Reliability of Building Industry, Levoča, pp. 539-542, ISBN 80-7099-746-X.
Zadeh, L. A. (1960), From Circuit Theory to System Theory, In Proc. of the Institute of Radio Engineers, Information Control, 50.
Zadeh, L. A. (1965), Fuzzy Sets, Information and Control, 8, 3, pp. 338-353.
EN 1993-1-1:2005(E): Eurocode 3: Design of Steel Structures - Part 1-1: General Rules and Rules for Buildings, CEN.
ENV 1090-1: 1996, Design of Steel Structures, Part 1.
JCSS Probabilistic Model Code, Internet Publication 2001, (http://www.jcss.ethz.ch/)
A PRE-CAST CONCRETE TECHNOLOGY FOR AFFORDABLE HOUSING IN KENYA
Shitote, S. M., Department of Civil and Structural Engineering, Moi University, Kenya
Nyomboi, T., Department of Civil and Structural Engineering, Moi University, Kenya
Muumbo, A., Department of Mechanical and Production Engineering, Moi University, Kenya
Wanjala, R. S., Department of Civil and Structural Engineering, Moi University, Kenya
Khadambi, E. L., Department of Civil and Structural Engineering, Moi University, Kenya
Orowe, J., Department of Civil and Structural Engineering, Moi University, Kenya
Sakwa, F., Bamburi Special Products, Kenya
Apollo, A., Bamburi Special Products, Kenya
ABSTRACT
Kenya is experiencing an acute shortage of housing for both its rural and urban population. The problem has become more evident over the last two decades as a result of the country's depressed economic performance. There is a proliferation of informal settlements due to the high demand for housing. There are also related problems, such as violation of set standards/by-laws in the construction of housing units and increased conflicts between tenants and landlords. These problems are especially manifest in the low-income areas within towns and cities. In rural areas the status of housing is characterized by the poor quality of the materials and construction methods used. To address this situation, a concerted effort by Government agencies (concerned Ministries, research institutions) and the private sector (financial, construction and professional) is required to provide affordable housing (individual and schemes) so as to improve the standards of living in both rural and urban areas. The success of such efforts is a promising prospect, particularly for manufacturers of affordable and durable building materials, contractors, researchers, professionals and financial institutions.
At the primary level in addressing the problem of affordable housing are the research institutions, because the interest from potential stakeholders will depend largely on the outcome of the proven affordable and/or low-cost technologies developed. This paper presents some background information on housing in Kenya and discusses the preliminary design and construction guidelines for an affordable model house developed as a joint venture between Moi University and Bamburi Special Products. In this initial stage, the teams involved in the
research were able to design and construct a model house of total floor area 45 m2, made of pre-cast steel fibre reinforced concrete walling panels, at a relatively affordable cost despite the fact that some materials were imported. It is envisaged that with appropriate replacement of the imported materials and other high-cost materials used in the model house, the overall cost will be further reduced.
Keywords: Pre-cast, Design, Model, Affordable housing, Concrete, Steel fibre.
1.0 BACKGROUND
1.1 Situational Analysis of Housing in Kenya
Housing development is strategically an important socio-economic investment for a country and its people. Furthermore, comfortable housing is necessary for good living; it generally comprises well planned/designed housing and infrastructure of acceptable standards and affordable cost which, when combined with essential services, affords dignity, security and privacy to the individual, family and community at large. Adequate availability of quality and affordable shelter also reduces the proliferation of slums and informal settlements, as well as preventing social unrest occasioned by the depravity and frustrations of people living in poor housing settlements. The situation for the majority of Kenya's population, as far as adequate and comfortable housing provision is concerned, is still far from good. The common forms of dwelling in rural Kenya are temporary houses made of mud and timber, with very few concrete/brick/stone constructions. In the urban areas, the majority of the homes are made of stone/concrete; however, there are areas commonly referred to as 'slums' whose housing structures are temporary, as they are made of mud and/or iron sheets. Just after Kenya's independence in the 1960s, the annual housing requirements were 7,600 and 38,000 new units in urban and rural areas respectively. By the 1974-1978 plan period, a total of 50,000 units per year were required in urban areas, of which 50% was achieved. In the 1980s, the housing shortfall was about 60,000 units per year and the net annual demand in this period was about 20%. However, in the 1997-2001 period, the net annual demand was about 89,600 units per year, representing an annual net demand of 49%. For the period between 1980 and 1997, there was a huge percentage increase in the net demand as compared to the 1974-1980 period. In the next decade from the year 2001, the annual demand has been estimated at 150,000 units per year (Ministry of Roads and Public Works, 2003). This represents an annual increase in demand for housing of about 67% for the period 2001-2010. According to the 1999 National Population and Housing Census (Ministry of Lands and Housing, 2004), there are about three and six million people in urban and rural areas respectively in need of proper housing. Based on the census average household size of 4 persons, there are about 750,000 and 1,500,000 households in urban and rural areas respectively in need of housing. To satisfy its urban housing needs, the Government of
Kenya plans to adopt innovative and proactive strategies to meet its pledge to build 150,000 housing units per year, which translates to an average of about 410 units per day, in urban areas, but only 30,000-50,000 units are expected to be constructed during the year. In addition, an estimated 300,000 housing units will need to be improved annually in the rural areas. It is clear from the above figures that the problem of housing has continued to persist, and it is for this reason that the Government has developed a housing policy to address the situation.
1.2 National Housing Policy Framework
Since independence Kenya has developed two National Housing Policies (Ministry of Lands and Housing, 2004). The first comprehensive National Housing Policy was developed in 1966/67 as Sessional Paper No. 5. At that time, Kenya's population was just over 9 million people, growing at a rate of 3 percent per annum for the whole country and 5 to 6 percent per annum in the urban areas. The policy directed the Government to provide the maximum number of people with adequate shelter and a healthy environment at the lowest possible cost. It also advocated slum clearance and encouraged the mobilization of resources for housing development through aided self-help and co-operative efforts. Emphasis was placed on enhanced co-ordination to increase efficiency in the preparation of programmes and projects. Other areas addressed in the policy paper included increased research into locally available building materials and construction techniques, and housing for civil servants through home ownership schemes in urban areas as well as institutional and pool housing schemes in remote stations. The second housing policy, recently released as Sessional Paper No. 3 (Ministry of Lands and Housing, 2004) on National Housing Policy for Kenya, was dated July 2004. This policy document aims to achieve six broad goals. The first is to enable the poor to access housing and the basic services and infrastructure necessary for a healthy living environment. The second is to encourage integrated, participatory approaches to slum upgrading. The third is to promote and fund research on the development of low-cost building materials and construction techniques. Under the latter, research institutions, in tandem with the Ministry concerned with housing, would be required to undertake the following:
(i) initiate, encourage, promote and conduct research related to the planning, design, construction and performance of buildings;
(ii) explore the social, economic and technical problems of housing and community planning and help establish appropriate standards for both rural and urban areas;
(iii) conduct research into the use and development of indigenous and innovative building materials and construction techniques;
(iv) provide reference and documentation services to parties interested in housing and building research and development;
(v) provide research-based advisory services to the Government on research, training and innovative development work conducted by the Institute and other bodies.
The fourth goal aims to harmonise existing laws governing urban development and electric power, while the fifth is to facilitate increased investment by the formal and informal private sectors. The sixth is to create a Housing Development Fund to be financed through budgetary allocations.

1.3 Current Initiatives
The advancement and adoption of modified building materials, production methods and construction techniques are paramount in the process of developing an affordable house. In Kenya, initiatives have been made to come up with what is termed low-cost housing. An example is the use of stabilised soil blocks and ferrocement construction in the Pumwani high-rise experiment in Nairobi, promoted by the Intermediate Technology Development Group (ITDG), an international non-governmental organisation (NGO), in conjunction with the National Housing Corporation (NHC). It was found that construction costs were reduced significantly while maintaining material quality (CIVIS, 2003).

2.0 PRE-CAST CONCRETE DESIGN
2.1 Customised Solution
It is in view of the above initiatives and trends that the current research on affordable housing is envisaged. This research has therefore explored means other than the existing building technologies by considering modified pre-cast sandwich concrete panels (1.5 m by 0.4 m by 0.2 m), whose structural integrity is guaranteed by steel fibre reinforcement as well as by the interlocking structure of the elements when assembled. The need for ease of construction, reduced costs, suitability for rural areas and psychological satisfaction (serviceability) is met through an element design of standard appearance and thickness that is nevertheless light (about 100 kg) owing to the lightweight core material. This allows the elements to be handled by hand (labour-intensive construction); it should be noted that most pre-cast concrete elements of this size are handled by equipment such as cranes. In this initial design, styrofoam was used as the core material. In ongoing research it is to be replaced with an inexpensive, locally available, environmentally friendly material sourced from sugar factory wastes. A replacement for the steel fibres, which are currently unavailable locally, is also expected as an outcome of further research on locally available wire-type steel. It is likewise expected that, with mass production of this type of house, steel fibres would either be imported in bulk or attract local production, reducing their cost in the long run. The production technique for the panels is labour intensive, using locally developed steel moulds. Pressing equipment for the production of the new core material is being developed. The culmination of this research is the development of a complete design and construction standard for adoption and use by professionals and the construction industry.
2.2 Design Considerations
Choice of Materials
In the effort to meet the housing needs pledged by the government, architectural and engineering standards have to be observed. Further, the new units have to meet the United Nations standards on comfortable, affordable housing. For walling, the main building methods currently employed in Kenya are the following: temporary walling in poles and mud, sawn timber walling, burnt clay bricks, concrete block walling and stone walling. In general, the first option does not meet the desired standards. Whereas it is a common solution for shelter in rural areas, its construction is not allowed in urban areas; it is also temporary in nature, and the search for a more permanent solution excludes it from the current research. Subject to the construction techniques employed, the other four options meet the required standards. However, they have a number of disadvantages, chief among them material costs and the duration required for construction. In view of these difficulties, a new option is proposed that is relatively cheap and can be constructed in a reasonably short time: pre-cast concrete elements are used for walling. This is consistent with practice elsewhere in the world where rapid construction is desired. All ingredients for the manufacture of concrete are readily available in Kenya.
Geometry and Performance Considerations
The basic geometry of the individual pre-cast concrete elements proposed for walling was chosen through a multi-criteria analysis using the permutation method. The parameters considered were the production process and lifespan. From the criteria evaluation matrix, and upon application of weight factors, the post-plate construction was chosen for the housing element design. In the design of the element, the main requirements considered were simplicity and weight; the targeted maximum weight was 100 kg. The basic element is illustrated in Figure 1 below.
Fig. 1: The basic element
For assembly and structural integrity of the construction, consideration was also given to element-column connections. Choices were made for the corner and internal columns. A typical plan for an assembled section of the house involving corner and internal columns is shown in Figure 2(a), (b) and (c) below.
Figure 2: Column-element connections, panels (a), (b) and (c)

Having chosen the geometric form of the columns and the walling elements, the architectural layout for a typical housing unit was developed. The unit has a total floor area of 45 m² and the plan is as shown in Figure 3 below. The description of packing the basic building blocks to form the walling is given in section 3.0.
Fig. 3: The plan of the pilot house (rooms indicated: two bedrooms, living room, kitchen, shower and toilet)
Strength Control Parameters
It has already been stated that the construction elements are basically pre-cast concrete. Noting the physical and mechanical properties of concrete, a number of considerations are made in order to meet the requirements on weight and simplicity. As shown for the basic element in Figure 1 above, thin concrete walls on a thick core of lightweight material are used. To enhance the mechanical performance of the thin concrete walls, steel fibres are used. It is known that when designing a concrete mix, the type of fibres used and the purpose of the mix should be taken into account in order to ensure that both the fresh and hardened concrete have the intended characteristics. Against this background, a mix design for the elements had the following proportions per 100 kg of concrete:

Coarse aggregate (6-9 mm): 32 kg
Fine aggregate (2-6 mm): 8 kg
Sand: 27 kg
Cement: 21 kg
Water: 10 kg
Fibres: 2 kg

The concrete and fibres used have the following characteristics.
Fibres: Dramix 45/30 RLBN; length 30 mm; diameter 0.62 mm; ft = 1050 N/mm²
Concrete: B25 (C25); ffctm,eq.300 = 2.4 N/mm²; ffctm,eq.150 = 2.1 N/mm²
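The stated proportions sum to exactly 100 kg, so quantities for a batch of any size follow by simple scaling. The short Python sketch below only illustrates that arithmetic; the function name and the chosen batch size are not from the paper.

```python
# Batch quantities for the element mix, scaled from the stated
# proportions per 100 kg of fresh concrete (values from the table above).
MIX_PER_100KG = {
    "coarse_aggregate_6_9mm": 32.0,
    "fine_aggregate_2_6mm": 8.0,
    "sand": 27.0,
    "cement": 21.0,
    "water": 10.0,
    "steel_fibres": 2.0,
}  # kg per 100 kg of concrete

def batch_quantities(total_mass_kg: float) -> dict:
    """Scale the mix proportions to an arbitrary batch mass."""
    return {k: v * total_mass_kg / 100.0 for k, v in MIX_PER_100KG.items()}

if __name__ == "__main__":
    assert abs(sum(MIX_PER_100KG.values()) - 100.0) < 1e-9  # proportions close at 100 kg
    # Illustrative batch for one ~100 kg wall element.
    for material, mass in batch_quantities(100.0).items():
        print(f"{material:28s} {mass:6.1f} kg")
```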
Loading considerations for the design of the elements and the overall completed structure comprised self-weight during production, transport, erection and service life, wind load, impact load, and the self-weight of the roof plus roof loads. Calculations were based on the British Standards (BS), Kenya Standards (KS) and, for some aspects, Dutch Standards and Eurocodes. The main consideration was that if one standard is met, the construction would have sufficient capacity. A self-weight q_sw = 546 N/m was calculated based on the weight of concrete. During transportation the element is carried by hand, resulting in a dynamic load. Taking the element's own length as the span and applying a dynamic factor of 2, the maximum moment and shear force are calculated as M_max = 307 Nm and V_max = 819 N (www.kenya.tudelft.nl, (29.03.06)). Wind loads were dealt with using Code of Practice 3 (CP 3, Chapter V, Part 2, (1972)). Considering a basic air pressure of 0.9 kN/m² and the respective pressure and suction factors of 0.8 and 0.4, use is made of the higher factor of 0.8 and the element height of 400 mm to obtain the maximum moment and shear force due to wind as M_max = 113 Nm and V_max = 302 N. These lead to the following in the columns: M_clamp = 5.92 kNm and V_max = 4.3 kN (www.kenya.tudelft.nl, (29.03.06)). A torsion analysis was also carried out as part of checking the mechanical integrity of the construction. The section is U-shaped; therefore, if loaded in the vertical plane, torsion will occur. This can arise if the element is carried by one of the webs or due to wind load. The complete analysis leads to the following values: T_web = 38424 Nmm, T_flange = 16650 Nmm and τ_total = 0.59 N/mm² (www.kenya.tudelft.nl, (29.03.06)).
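The quoted design actions can be reproduced from simply-supported beam formulas over the element's own length of 1.5 m. In the sketch below, the partial load factor of 1.4 on wind is an assumption inferred from the quoted results; the dynamic factor of 2, the 0.9 kN/m² basic pressure, the 0.8 pressure factor and the 400 mm element height are taken from the text.

```python
# Reproduce the element design actions from simple beam formulas.
# Assumed: simply supported span equal to the element length (1.5 m) and a
# partial load factor of 1.4 on wind (inferred, not stated in the paper).
L = 1.5            # element length / span, m
q_sw = 546.0       # self-weight, N/m

# Transport: hand carrying, dynamic factor 2 on self-weight
q_dyn = 2.0 * q_sw
M_transport = q_dyn * L**2 / 8.0   # ≈ 307 Nm
V_transport = q_dyn * L / 2.0      # ≈ 819 N

# Wind: basic pressure 0.9 kN/m², pressure factor 0.8, element height 0.4 m
w_wind = 0.9e3 * 0.8 * 0.4         # N/m on the 400 mm high element
gamma_wind = 1.4                   # assumed partial load factor
q_wind = gamma_wind * w_wind
M_wind = q_wind * L**2 / 8.0       # ≈ 113 Nm
V_wind = q_wind * L / 2.0          # ≈ 302 N

print(f"Transport: M = {M_transport:.0f} Nm, V = {V_transport:.0f} N")
print(f"Wind:      M = {M_wind:.0f} Nm,  V = {V_wind:.0f} N")
```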
Design Strength
For the illustrated stress and loading conditions on the element shown in Figure 4 below, the respective design strength capacities are as shown.

Fig. 4: Stress and loading conditions: (a) rectangular stress blocks, (b) horizontal load, (c) vertical load
(i) Moment Capacity of the Wall Element
Loaded in the vertical plane the element has the following moment capacity:
A_b = 25 × 300 = 7500 mm²
F_b = σ_b × A_b = 0.78 × 7500 = 5850 N
M_u = F_b × arm = 5850 × 0.2 = 1170 Nm

Loaded in the horizontal plane it has the following moment capacity:
b_w = 0.1 × 1500 + 25 = 175 mm
A_b = 175 × 25 + (148 - 25) × 25 = 7450 mm²
F_b = σ_b × A_b = 0.78 × 7450 = 5811 N
M_u = F_b × arm = 5811 × 0.1 = 581 Nm

Hence the capacity exceeds the calculated maximum load of 307 Nm (M_u > M_d), where M_d is the design moment, taken as the larger of the maximum moments M_max obtained in the strength control parameters section above.
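As a quick numerical check, the two bending capacities follow directly from the expressions as reconstructed above (design stress 0.78 N/mm², lever arms 0.2 m and 0.1 m); the sketch below only reproduces that arithmetic.

```python
# Wall element bending checks, reproducing the quoted values.
sigma_b = 0.78                      # design stress, N/mm²

# Vertical plane
A_vert = 25.0 * 300.0               # compression area, mm² (7500)
M_u_vert = sigma_b * A_vert * 0.2   # force in N times arm in m -> 1170 Nm

# Horizontal plane
b_w = 0.1 * 1500.0 + 25.0                        # effective width, mm (175)
A_horiz = b_w * 25.0 + (148.0 - 25.0) * 25.0     # mm² (7450)
M_u_horiz = sigma_b * A_horiz * 0.1              # -> 581 Nm

M_d = 307.0                         # design moment from the loading section, Nm
print(f"M_u,vert  = {M_u_vert:.0f} Nm")
print(f"M_u,horiz = {M_u_horiz:.0f} Nm")
print(f"Capacity OK: {min(M_u_vert, M_u_horiz) > M_d}")
```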
(ii) Shear Capacity of the Element
The shear strength of a section is given by the following formula:

V_u = V_cd + V_fd + V_wd

where:
V_cd = the contribution of the concrete
V_fd = the contribution of the fibres
V_wd = the contribution of the shear reinforcement

The contribution of the concrete (V_cd) is calculated according to Eurocode 2, because the Dramix guidelines (www.kenya.tudelft.nl, (29.03.06)) are used.

V_cd = {τ_Rd · k · (1.2 + 40·ρ_l) + 0.15·σ_cp} · b_w · d
τ_Rd = 0.25 · f_ctk,0.05 / γ_c = 0.26
k = 1.6 - d (d in m, k ≥ 1) = 1.4
ρ_l = A_s / (b_w · d) = 0
σ_cp = N_sd / A_c = 0
V_cd = 0.26 × 1.4 × 1.2 × 30 × 200 = 2.62 kN

The contribution of the steel fibres is given by the following formula.
V_fd = k_f · τ_fd · b_w · d
k_f = 1 + n · (h_f / b_w) · (h_f / d)
n = (b_f - b_w) / h_f = (175 - 30) / 25 = 5.8, where n ≤ 3, so n = 3
k_f = 1 + 3 × (25/30) × (25/200) = 1.31
V_fd = 1.31 × 0.25 × 30 × 200 = 1.97 kN

The Dramix guidelines state that if the fibres are the only shear reinforcement, which is the case, half of the shear force has to be absorbed by the fibres. The maximum permissible shear resistance of the beam is therefore

V_total = 2 × 1.97 = 3.94 kN, or τ = 3.94 × 10³ / (30 × 200) = 0.65 N/mm²

The calculated maximum shear load, V_max = 819 N (see the loading section above), is well below this value, so the capacity exceeds the load (V_total > V_max).
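The two shear contributions can likewise be reproduced numerically. The sketch below follows the Eurocode-2-type concrete term and the fibre term as reconstructed above; the symbol names are those of the reconstruction, and the cap of n at 3 follows the Dramix guideline quoted in the text.

```python
# Element shear capacity: concrete term plus steel-fibre term,
# using the values quoted above.
b_w, d = 30.0, 200.0          # web width and effective depth, mm
tau_Rd = 0.26                  # basic design shear strength, N/mm²
k = 1.6 - d / 1000.0           # = 1.4, with d taken in metres
rho_l, sigma_cp = 0.0, 0.0     # no longitudinal bars, no axial stress

V_cd = (tau_Rd * k * (1.2 + 40 * rho_l) + 0.15 * sigma_cp) * b_w * d  # ≈ 2620 N

# Fibre contribution (Dramix guideline form)
h_f, b_f, tau_fd = 25.0, 175.0, 0.25
n = min((b_f - b_w) / h_f, 3.0)            # 5.8 capped at 3
k_f = 1.0 + n * (h_f / b_w) * (h_f / d)    # ≈ 1.31
V_fd = k_f * tau_fd * b_w * d              # ≈ 1970 N

V_total = 2.0 * V_fd                       # fibres must carry half the shear
print(f"V_cd = {V_cd/1e3:.2f} kN, V_fd = {V_fd/1e3:.2f} kN, V_total = {V_total/1e3:.2f} kN")
```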
(iii) Moment Capacity of the Column
The column is made of normal concrete with normal reinforcement. The quality is presumed to be the same as the concrete used for the panels, C25. The reinforcement has a cover of 20 mm. Normally this would not be sufficient, but because the columns are totally surrounded by concrete panels, it is. The column has a trapezium shape with a depth of 110 mm and a width of 180 mm on one side and 200 mm on the other side. It has some openings for the installation of the elements, which leave a cross-section of 110 mm × 100 mm. It is reinforced with four steel bars with a diameter of 12 mm and a yield strength of fy = 460 N/mm² (normal reinforcement steel in Kenya).

A_s = 2 × 113 = 226 mm²
F = A_s × fy = 226 × 460 = 104 kN
d = 110 - 20 - 0.5 × 12 = 84 mm
z = 0.85 × d = 71 mm
M_u = F × z = 104 × 0.071 = 7.38 kNm

As can be seen, the maximum moment is 5.92 kNm, so the capacity exceeds the load (M_u > M_d). The pressure on the concrete:

A_b = 0.39 × d × b_h = 0.39 × 84 × 100 = 3276 mm²
M_d = 5.92 kNm
F_s = 5.92 / 0.071 = 83.4 kN
σ_b = F_s / A_b = 83.4 × 10³ / 3276 ≈ 25 N/mm²

The quality of the concrete is C25, so the concrete is able to withstand this load.
(iv) Shear Force Capacity of the Column
There is no shear reinforcement in the columns, so only the concrete contributes to the shear capacity. Again the shear capacity is calculated according to Eurocode 2 (www.kenya.tudelft.nl, (29.03.06)). Because the roof can be made of corrugated steel sheets, no normal force is taken into account.

V_cd = {τ_Rd · k · (1.2 + 40·ρ_l) + 0.15·σ_cp} · b_w · d
τ_Rd = 0.25 · f_ctk,0.05 / γ_c = 0.26
k = 1.6 - d = 1.6 - 0.085 = 1.5
ρ_l = A_s / (b_w · d) = 312 / (110 × 180) = 0.016
σ_cp = N_sd / A_c = 0
V_cd = {0.26 × 1.5 × (1.2 + 40 × 0.016)} × 110 × 180 = 14.2 kN

As can be seen, the maximum shear load is 4.3 kN, so the capacity exceeds the load (V_cd > V_max). The above designed elements were produced using specially designed and manufactured moulds.
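A compact check of the column capacities, reproducing the figures quoted above; the reinforcement ratio of 0.016 and the lever-arm factor of 0.85 are taken from the text, and small differences from the printed values are only rounding.

```python
import math

# Column moment capacity
bar_dia = 12.0                               # bar diameter, mm
A_s = 2 * math.pi / 4 * bar_dia ** 2         # two bars in tension, ≈ 226 mm²
f_y = 460.0                                  # yield strength, N/mm²
F = A_s * f_y                                # ≈ 104 kN (here in N)
d = 110.0 - 20.0 - 0.5 * bar_dia             # effective depth, 84 mm
z = 0.85 * d                                 # lever arm, ≈ 71 mm
M_u = F * z / 1e6                            # kNm, ≈ 7.4 (paper quotes 7.38 with rounded values)
print(f"Column M_u ≈ {M_u:.2f} kNm vs M_d = 5.92 kNm: {'OK' if M_u > 5.92 else 'NOT OK'}")

# Column shear capacity (concrete term only, Eurocode 2-type expression)
tau_Rd, k = 0.26, 1.5
rho_l = 0.016                                # reinforcement ratio as quoted
b_w, depth = 110.0, 180.0                    # mm
V_cd = tau_Rd * k * (1.2 + 40 * rho_l) * b_w * depth / 1e3   # kN, ≈ 14.2
print(f"Column V_cd ≈ {V_cd:.1f} kN vs V_max = 4.3 kN: {'OK' if V_cd > 4.3 else 'NOT OK'}")
```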
3.0 PILOT HOUSE
3.1 General Aspects
The pilot house is located at the Bamburi Special Products factory site at Athi River, about 30 km south-east of Nairobi. It is a 7.5 m by 6.0 m two-bedroom self-contained house, shown in Figure 3 above, made of the pre-cast steel fibre reinforced concrete elements as described. The reinforced pre-cast columns designed above secure the house. The roofing consists of timber trusses and iron sheets.
3.2 Construction
The overall sequence of the construction process of the pilot house was as illustrated in Fig. 5 below.
Fig. 5 Sequence of Construction
(i) Foundation and Columns
The foundation consists of a 300 mm hardcore bed overlaid with a 150 mm reinforced concrete slab. In the foundation, provisions were made to allow for clamping of the pre-cast columns during erection, as shown in Figure 6 below.
Fig. 6: Foundation and column erection

(ii) Placing the Elements between Columns
The columns were designed with provisions for element fixing in the form of reduced thickness at designated heights. Placing of the elements between the columns was made through these sections by a rotating movement which allowed for horizontal insertion on one column; once aligned with the next column, the element was fitted to it and finally slid down. Figure 7 shows the rotation and placement process.
Fig. 7 Element fixing Process
(iii) Roofing
The roof consists of timber trusses and iron sheets. The trusses were fabricated on the ground and then mounted on a timber wall plate running along the top ends of the columns. The wall plate was fastened onto the columns by stud bolts, which were fixed to the columns using epoxy resin. Figure 8 below shows the roofing process.

Fig. 8: Roofing

3.3 Summary Method Statement of Construction of the Model House
The method used in construction of the model house using the elements designed in section 2 above is outlined below.

Foundation
(i) Set out the layout of the building and include an extra 500 mm all around;
(ii) Excavate to reduced levels (500 mm deep) and remove to tip;
(iii) Set out the slab according to the dimensions of the design;
(iv) Excavate the area of the slab and include 500 mm extra all around;
(v) Lay a 300 mm hardcore bed;
(vi) Excavate into the hardcore for the beams under the walls, external and internal, as shown on the design drawings;
(vii) Fix appropriate shuttering around the perimeter of the slab;
(viii) Place polythene sheeting (dpm) on top of the hardcore with minimum overlaps on all joints;
(ix) Position the 'column pocket provision' formwork appropriately;
(x) Concrete the slab.

Columns and elements delivery on site
(xi) Receive the pre-cast columns and elements.

Erection of columns and elements
(xii) Position the columns (vertical and horizontal control);
(xiii) Concrete the column pockets to anchor the columns;
(xiv) Place the walling elements.

Installation of the roof
(xv) Fix the wall plate onto the columns;
(xvi) Fix trusses onto the wall plate;
(xvii) Fix purlins onto the trusses;
(xviii) Fix the roof covering onto the purlins.
3.4 Costs of the Pilot House
The cost of the pilot house is KShs 650,000 (US$ 9,200). The detailed costs are as shown in Table 3.1 below.

Table 3.1: Schedule of costs
Item                        KShs
Excavation and Earthworks   71,232.25
Floor Slab                  56,900.25
Walling Elements            153,255.60
Roof trusses                142,000.00
Roof covering               30,117.50
Finishes                    30,000.00
Labour                      145,051.68
Contingencies               21,442.72
TOTAL (KShs)                650,000.00
TOTAL (US dollars)          9,200.00
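The schedule can be checked by summing the items; the exchange rate is not stated in the paper and is derived here from the two totals purely for illustration.

```python
# Check the cost schedule: the items should sum to the quoted total, and the
# implied exchange rate follows from the two totals (derived, not quoted).
costs_kshs = {
    "Excavation and Earthworks": 71_232.25,
    "Floor Slab": 56_900.25,
    "Walling Elements": 153_255.60,
    "Roof trusses": 142_000.00,
    "Roof covering": 30_117.50,
    "Finishes": 30_000.00,
    "Labour": 145_051.68,
    "Contingencies": 21_442.72,
}
total_kshs = sum(costs_kshs.values())
total_usd = 9_200.00
print(f"Total: KShs {total_kshs:,.2f}")                        # 650,000.00
print(f"Implied rate: {total_kshs / total_usd:.1f} KShs/US$")  # ≈ 70.7
```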
4.0 EXPERIENCES
In the process of implementing the project a number of challenges were encountered. Most of the problems arose from deficiencies in the mould geometry during casting of the elements (particularly the wall elements). Some wall elements could not fit properly into the columns and were either too loose or too tight. This was caused by reduction or increase in the dimensions of the grooves at the wall element ends as a result of bulging and caving of the moulds during casting. The problem with the moulds was clearly due to inadequate thickness. Another geometric design problem was encountered on the vertical wall elements at the jambs (door and window openings). In the process of installing the jamb elements, it was realised that they fitted loosely, since they were attached to the columns on one side only, and the hinge formed by the two vertically aligned elements at the mid-height of the jambs caused an outward instability. A temporary solution was to bolt the elements onto the columns; however, a redesign of special elements for this purpose is being undertaken. The geometric design for the end gable elements did not work, as it was established that their anchorage was inadequate and a redesign was required. In this pilot project, however, the gable ends were finished with timber boarding. In the foundation, the provision of column pockets for clamping the columns proved tricky: it was established that during compaction of the foundation layers the walls of the pockets caved in, and this necessitated planking and strutting. After column erection, anchorage of the wall plate required that the columns be drilled at the top and a stud bolt fastened into them with epoxy resin. The drilling of the columns should not have been necessary; provision of the holes or in-situ casting of the bolts should have been made during casting of the columns. Local availability of the steel fibres was yet another challenge, given that the project was targeted to be affordable. In this pilot project the fibres used were imported from Belgium, while the element core material, styrofoam, was rather expensive. However, research into alternative solutions for these materials is ongoing, and success will lead to a considerable reduction in the overall cost of the house.

5.0 CONCLUSIONS AND RECOMMENDATIONS
Pre-cast concrete technology for the development of affordable housing in Kenya, and in any other country whose housing needs are acute, is feasible. Based on the pilot house, it was possible to come up with a relatively affordable two-bedroomed house (KShs 650,000) whose construction was customised and labour intensive and which can be installed within a short period of time. It is clear that with the replacement of the expensive materials and/or mass production of pre-cast housing units of this type, a lower cost per unit will be realised and this will allow for the provision of affordable shelter. Furthermore, success in such technology will spur industrial growth in materials and construction, since most of the elements would be mass produced independently and construction of the houses undertaken separately for individuals, firms and/or schemes. However, a number of improvements and redesigns are still needed in order to arrive at a sound and less costly pre-cast house. Based on the experiences of this pilot research, it is recommended that, in order to improve the element geometry for robust anchorage, a redesign of the jamb elements and a change in mould thickness be undertaken. Alternative solutions for the expensive and/or imported materials should be sought so as to allow for the development of a cheaper pre-cast unit. Furthermore, new designs with reduced internal geometric measurements and lighter mix proportions should be investigated for the production of much lighter elements than those developed for the pilot house.

REFERENCES
Republic of Kenya, National Housing Development Programme 2003-2007, Ministry of Roads and Public Works, P.O. Box 30260, Nairobi, 2003, pp 1.
Republic of Kenya, Sessional Paper No. 3 on National Housing Policy for Kenya, Ministry of Lands and Housing, P.O. Box 30450, Nairobi, July 2004, pp 7.
CIVIS, Shelter Finance for the Poor Series, Cities Alliance, April 2003, Issue IV, pp 5.
CP 3, Chapter V: Part 2: 1972, Wind Loads.
www.kenya.tudelft.nl, Low-cost housing in Kenya: Pre-cast low-cost housing with steel fibre reinforced concrete, as accessed on 29.03.06.
ENVIRONMENTAL (HAZARDOUS CHEMICAL) RISK ASSESSMENT (ERA) IN THE EUROPEAN UNION
Musenze Ronald S. and Michiel Vandegehuchte; Centre for Environmental Sanitation, Department of Biosciences Engineering, Ghent University, J. Plateaustraat 22, B-9000 Gent, Belgium.
ABSTRACT
The use of chemical substances causes complex environmental problems characterised by scientific uncertainty and controversies. Comprehensive risk assessments are now required by law, but they are still subject to debate, not least concerning how to interpret uncertainties.
When a chemical is discharged into the environment, it is transported and may undergo transformation. Knowledge of the environmental compartments in which the chemical substance will be present, and of the form in which it will exist, is paramount in the assessment of the possible impacts of the chemical on the environment. In the European Union (EU), risk assessment is often carried out and interpreted in accordance with the principles of sustainable development, as chemical substances can cause adverse effects in both short- and long-term exposure scenarios. According to the technical guidelines, ERA is completed in four steps: hazard identification, exposure assessment, effects assessment, and risk characterisation. Attention is drawn towards the negative effects that chemicals may cause to the environment. The procedure is discussed with emphasis on exposure and effects assessment.

Key words: Environmental risk assessment, environmental compartment, exposure assessment, hazardous chemicals, sustainable development, hazard identification, risk characterisation.
1.0 INTRODUCTION
The European Union (EU) directive 93/67, regulation 1488/94 and directive 98/8 require that an environmental risk assessment is carried out on notified new substances, on priority existing substances, and on active substances and substances of concern in a biocidal product. The experiences following more than half a century of use of man-made chemicals are divided. On the one hand, the benefits for society at large have been enormous. Production and use of chemicals play an indispensable role in e.g. agriculture, medicine, industry, and for the daily welfare of citizens. On the other hand, the use of many chemicals has caused severe adverse and complex problems characterised by scientific uncertainty and
controversies due to their toxic or otherwise 'hazardous' properties, such as persistence and liability to bioaccumulate (Karlsson, 2005). Adverse effects of already regulated hazardous substances prevail. For example, the levels of chlorinated hydrocarbons in oceans and marine biota are still high enough for authorities in the European Union to recommend or issue food restrictions for pregnant women (SNFA, 2004) and for the population at large (EC, 2001a). In the USA, PCB levels in fish in the Great Lakes are high enough to cause adverse health effects, such as impaired memory, among consumers with high fish consumption (Schantz et al., 2001). Among other examples of hazardous chemical environmental impacts, PCBs are also known for the 1968 Yusho cooking-oil mass poisoning in Japan (Tsukamoto et al., 1969, Yang et al., 2005), while methylmercury (Masazumi et al., 1995, 1999; Timbrell, 1989), dioxins and the medicine thalidomide are remembered for the 1956 Minamata disaster, the Agent Orange effect in Vietnam (Arnold et al., 1992) and the 1960 Softenon scandal respectively. Remediation costs for hazardous chemical substances are often high. The total remediation and waste management costs for PCB in the EU for the period 1971-2018 have been estimated at 15-75 billion euro (NCM, 2004), health and environmental costs uncounted. In addition to this, new problems are recognised. This is partly due to re-evaluations of earlier risk assessments, such as when the US National Research Council in 2001 considered arsenic in water to be ten times as carcinogenic as earlier thought (NRC, 2001), but it also follows from completely new assessments of existing and new chemicals. In the European Union, for example, the use and the effects of single phthalates (such as DEHP) and brominated flame retardants (such as deca-BDE) are under scrutiny, and new regulations are being developed or imposed (ECB, 2004). Nevertheless, most substances in the European Union have not been assessed at all for their health and environmental risks (Allanou et al., 1999). As a result, a proposal for a new regulatory framework for registration, evaluation, authorisation and restrictions of chemicals (REACH) has been presented in the European Union (European Commission, 2003). This proposal is at present the most disputed element of EU chemicals policy and has given rise to a heated debate in more or less all relevant political fora (European Commission, 2001a, 2002, 2003; Scott and Franz, 2002; US House of Representatives, 2004). In the EU, a system has also been developed to aid ERA. The European Union System for the Evaluation of Substances (EUSES) is now widely used for initial and refined risk assessments rather than for comprehensive assessments (http://ecb.jrc.it/new-chemicals/). It is an approved decision-support instrument that enables government authorities, research institutes and chemical companies to carry out rapid and efficient assessments of the general risks posed by chemical substances to man and the environment.

1.1 RA and the Principles of Sustainable Development
The concept of sustainable development is a cornerstone for ERA. In the European Union, sustainable development is stated in primary law as an objective for the union (European Commission, 1997), and a strategy for achieving the objective has been elaborated (European Commission, 2001b). The concept is often interpreted with reference to the World Commission on Environment and Development, meaning that 'the needs of the present' should be met 'without compromising the ability of future generations to meet their own needs' from environmental, economic, and social perspectives (WCED, 1987). This implies a moral duty to develop society with a much stronger emphasis on improving the state of the environment, as well as socioeconomic and environmental living conditions for present and future generations. Against this background, even chemicals which pose no adverse effects to the current generation are assessed for the inherent ability to do so in the long term. Due to the uncertainty and controversy surrounding the use of chemicals, risk management aiming at sustainable development always faces three important questions. How should the uncertainty be interpreted and managed? Who should do the interpretation and decide on management strategies? How should the responsibility for the management be distributed? (Karlsson, 2005). The answers are offered by three commonly accepted principles in environmental policy: the precautionary principle, the principle of public participation, and the polluter pays principle, all adopted by the international community as well as in the European Union (EC, 1997; UNCED, 1993). A good Hazardous Chemical Risk Assessment (HCRA) should thus recognise and take into account risk uncertainty, identify the polluter, and identify the magnitude of pollution expected or caused. Environmental risk assessment (ERA) is the link between environmental science and risk management. Its ultimate aim is to provide sufficient information for decision-making with the purpose of protecting the environment from unwanted effects of chemicals. ERA is normally based on a strategy aiming at comparing estimates of effect and exposure concentrations. According to the European Commission (2003a) it is completed in four steps: hazard identification, exposure assessment, effects assessment, and risk characterization (Fig 1). Risk Assessment (RA) is carried out for the three inland environmental compartments, i.e. the aquatic environment, the terrestrial environment and air, and for the marine environment.
Fig 1. Basic steps in Environmental Risk Assessment (van Leeuwen, Hermens 1995).

In addition, effects relevant to the food chain (secondary poisoning) and to the microbiological activity of sewage treatment systems are considered. The latter is evaluated because proper functioning of sewage treatment plants (STPs) is important for the protection of the aquatic environment. The main goal of RA strategies is to compare estimates of effect and exposure concentrations. In the EU, the procedure for calculating Predicted Environmental Concentrations (PECs) and Predicted No-Effect Concentrations (PNECs) is well laid out. Where this is not possible, the technical guidelines direct how to (1) make qualitative estimates of environmental concentrations and effect/no-effect concentrations (NOECs), (2) conduct a PBT (Persistence, Bioaccumulation and Toxicity) assessment, (3) decide on the testing strategy if further tests need to be carried out, and (4) use the results of such tests to revise the PEC and/or the PNEC.

1.3 Types of Emissions and Sources (TGD 2ed II, 2003)
Emission patterns vary widely, from well-defined point sources (single or multiple) to diffuse releases from large numbers of small point sources (like households) or line sources (like a motorway with traffic emissions). Releases may also be continuous or intermittent. Besides releases from point sources, diffuse emissions from articles during their service life may contribute to the total exposure for a substance. For substances used in long-life
materials, this may be a major source of emissions, both during use and as waste remaining in the environment. The emitted substance then enters distribution processes, which may be relevant for the different environmental compartments. Transport and transformation ("fate") describe the distribution of a substance in the environment, or in organisms, and its changes with time (in concentration, chemical form, etc.), thus including both biotic and abiotic transformation processes. For each compartment, specific fate and distribution models are applied to determine the environmental concentrations of the chemical during exposure assessment.

2.0 EXPOSURE ASSESSMENT (EA)
Environmental exposure assessment is based on representative measured data and/or model calculations. One of the major objectives of predicting environmental concentrations is to estimate human exposure to chemicals. This is an important step in assessing environmental risk (Katsuya, Kyong, 2005). If appropriate, available information on substances with analogous use and exposure patterns or analogous properties is taken into account. EA is more realistic when detailed information on the use patterns, release into the environment and elimination, including information on the downstream uses of the substance, is available. Though the general rule is that the best and most realistic information available should be given preference, it is often useful to initially conduct an exposure assessment based on worst-case assumptions, using default values when model calculations are applied. Such an approach is also used in the absence of sufficiently detailed data and if the outcome is that a substance is "of concern". The assessment is then, if possible, refined using a more realistic exposure prediction. Because exposure estimates vary with topography and climate, generic exposure scenarios, which assume that substances are emitted into a non-existing model environment with predefined, agreed environmental characteristics, are always used (TGD 2ed II, 2003). The environment may be exposed to chemical substances during all stages of their life-cycle, from production to disposal or recovery. For each environmental compartment (air, soil, water, sediment) potentially exposed, the exposure concentrations should be derived. In principle, the assessment procedure considers production, transport and storage, formulation (blending and mixing of substances in preparations), industrial/professional use (large-scale use including processing (industry) and/or small-scale use (trade)), private or consumer use, service life of articles, and waste disposal (including waste treatment, landfill and recovery) as the stages in the life-cycle of a chemical substance. Exposure may also occur from sources not directly related to the life-cycle of the chemical substance being assessed. Because of the cumulative effect that gives rise to a "background concentration" in the environment, previous releases are always considered during the EA of existing chemicals. Consideration is also given to the degradability of the chemical substance under assessment and the properties of the products that might arise.
2.1 Measured / Calculated Environmental Concentrations (TGD 2ed II, 2003)
Concentrations of new substances are always estimated by modelling, while data for a number of existing substances in the various environmental compartments have already been gathered. It may seem that measurements always give more reliable results than model estimations. However, measured concentrations can have a considerable uncertainty associated with them, due to temporal and spatial variations. Both approaches complement each other in the complex interpretation and integration of the data. Therefore, the availability of adequate measured data does not imply that PEC calculations are unnecessary. Where different models are available to describe an exposure situation, the best model for the particular substance and scenario is used and the choice explained.
When PECs have been derived from both measured data and calculation, they are compared and, if they are not of the same order of magnitude, analysis and critical discussion of the divergences are important steps in developing an ERA of existing substances.

2.2 Model Calculations (TGD 2ed II, 2003)
Calculation of the PEC value begins with the evaluation of the primary data; the estimation of the substance's release rate, based upon its use pattern, then follows. All potential emission sources are analysed, and the releases and the receiving environmental compartment(s) identified. The fate of the substance in the environment is then considered by assessing the likely routes of exposure and the biotic and abiotic transformation processes. Furthermore, secondary data (e.g. partition coefficients) are derived from primary data. The quantification of the distribution and degradation of the substance (as a function of time and space) leads to an estimate of PEClocal and PECregional. PEC calculation is not restricted to the primary compartments (surface water, soil and air) but also includes secondary compartments such as sediments and groundwater. Transport of the chemical substances between the compartments is, where possible, always taken into account. As the complexity (and relevance) of the model increases, its reliability usually decreases, since the large number of interacting parameters increases the rate of random errors and makes the tests less reproducible. Exposure to chemical substances can only be minimised after identification of emission sources. Multimedia mathematical models (Cowan et al., 1995; Mackay, 2004) are extensively used at the screening stage of risk assessment (Katsuya, Kyong, 2005).

3.0 EFFECTS ASSESSMENT (TGD 2ed II, 2003)
The effects assessment comprises hazard identification and the concentration-response (effects) assessment. Hazard identification is always the first step during ERA. It is basically the visualisation of what can go wrong as a result of accidental or deliberate exposure to the chemical substance(s). It also involves identification of emissions and their respective sources, and its main aim is to identify the effects of concern. For existing substances and
biocidal active substances and substances of concern in biocidal products, the aim is also to review the classification of the substance, while for new substances a classification proposal is made. Dose-response (effect) assessment is a study of the effects of varying concentrations of a chemical to which organisms are exposed, in relation to time. It is a quantification step whose ultimate purpose is to determine the Predicted No-Effect Concentration (PNEC), where possible. For both steps of the effects assessment, data are evaluated with regard to their adequacy and completeness. Evaluation is of particular importance for existing substances, as tests will often be available with non-standard organisms and/or non-standardised methods. Evaluation of adequacy addresses the quality and relevance of data. Indeed, the effects assessment process is suitably started with the evaluation of the available ecotoxicological data. Individual tests are described in terms of their (i) cost, (ii) ecological relevance (validity), (iii) reliability (reproducibility), and (iv) sensitivity. In this context, the term cost can refer to the monetary price of executing a test. Alternatively, it can also be used to denote the total social loss or detriment associated with a test; in the latter sense, sacrifice of animal welfare is part of the costs of the tests. By validity is meant that the test measures what it is intended to measure. Ecological relevance is the type of validity that is aimed at in ecotoxicology, namely that the test is appropriate for measuring potential hazards in the environment. By reliability is meant that repeated performance of the test will yield concordant results, and sensitivity means that the test has sufficient statistical power to reveal an effect even if it is relatively small. The notion of sensitivity can be operationalized in terms of the detection level (Hansson, 1995). With a sufficiently large number of tests fulfilling the above four criteria, the scientific uncertainties inherent in testing and risk assessment could be substantially reduced. But in reality every test is a trade-off between these aspects, and the combination of characteristics of the test is more or less optimized. Therefore different tests are combined into test systems, and the combinations are made so that the characteristics of the individual tests supplement each other. Just as for single tests, the design of a test system is aimed at optimizing the four factors. Most test systems are thus tiered (e.g., van Leeuwen, Hermens, 1995), which means that initial tests are used to determine the need for further testing, often in several stages. Different substances will take different paths in the test system, depending on the outcomes obtained in the tests to which they are successively subjected. Usually low cost is prioritized at lower tiers (to enable testing of many compounds), whereas reliability and ecological relevance increase at higher tiers (to enable well-founded risk management decisions).
Ecotoxicity tests may be acute or chronic as regards their duration, and either mortality or sub-lethal effects such as growth, reproduction and morphological deformation may be used as the test criteria. The exposure systems are static, recirculating, renewal or flow-through. For simplicity, single-species tests are used, but to face the challenges of ecological reality and complexity, multi-species tests are best suited. Two important assumptions that are usually made concerning the aquatic environment are: (1) ecosystem sensitivity depends on the most sensitive species, and (2) protecting ecosystem structure protects community function. These assumptions allow, however uncertain, an extrapolation to be made from single-species short-term toxicity data to ecosystem effects. Assessment factors as proposed by the US EPA and OECD (1992d) are then applied to predict a concentration below which an unacceptable effect will most likely not occur. Four major challenges, however, still remain: (1) intra- and inter-laboratory variation of toxicity data, (2) intra- and inter-species variations (biological variance), (3) short-term to long-term toxicity extrapolation (acute vs chronic) and (4) laboratory data to field impact extrapolation (additive, synergistic and antagonistic effects from the presence of other substances may also play a role here). The approach of statistical extrapolation is still under debate and needs further validation. The advantage of these methods is that they use the whole sensitivity distribution of species in an ecosystem to derive a PNEC, instead of always taking the lowest long-term No Observed Effect Concentration (NOEC).

3.1 Risk Characterisation
The risk decision process is traditionally divided into two stages, risk assessment and risk management. Risk assessment is the major bridge linking science to policy (Fig. 2). In risk assessment, scientific data on toxicological and ecotoxicological effects are used to determine possible adverse effects and the exposure levels at which these effects may be expected. Risk is thus characterised as the ratio of estimated ordinary (or worst-case) exposure levels to levels estimated to be harmful. An assessment thus compares "predicted environmental concentrations" with the "predicted no effect concentration", as well as the "no observed effect level" or the "lowest observed effect level" with ordinary exposure levels (European Commission, 1993, 1994). Depending on whether the risk characterisation is performed for a new substance, for an existing substance or for a biocidal active substance, different conclusions can be drawn on the basis of the PEC/PNEC ratio for the different endpoints, and different strategies can be followed when PEC/PNEC ratios greater than one are observed. Therefore, the descriptions of the risk characterisation approaches are given separately for new substances, for existing substances and for biocides. In general, the risk characterisation phase is an iterative process involving determination of the PEC/PNEC ratios for the different environmental compartments; depending on the outcome, further information or testing may lead to redefinition of the risk quotient until a final conclusion regarding the risk can be reached.
For the aquatic and terrestrial ecosystems, including secondary poisoning, a direct comparison of the PEC and PNEC values is carried out, presuming that the relevant data are available. If the PEC/PNEC ratio is greater than one the substance is "of concern" and further action has to be taken. For the air compartment usually only a qualitative assessment of abiotic effects is carried out. If there are indications that one or more of these effects occur for a given substance, expert knowledge is consulted or the substance is handed over to the relevant international group, e.g. to the responsible body in the United Nations Environment Programme (UNEP) for ozone depleting substances. In some cases also an assessment of the biotic effects to plants can be carried out (TGD 2ed II, 2003). For top predators, if the ratio of PECoral / PNECoral is greater than one and a refinement of the PECoral or the PNECoral is not possible or reasonable, risk reduction measures are considered. For microorganisms in sewage treatment systems; if the ratio of PECstp to the PNECmicroorganisms is greater than one, the substance may have a detrimental effect on the function of the STP and therefore is "of concern" (TGD 2ed II, 2003). In all, when PEC/PNEC ratios greater than one have been calculated, the competent authority consults the concerned industry for possibilities of getting additional data on exposure and/or ecotoxicity as to refine the assessment. The decision to request additional data should be transparent and justified and be based on the principles of lowest cost and effort, highest gain of information and the avoidance of unnecessary testing on animals. Risk characterization is used as a part of the basis for risk management decisions on appropriate measures to handle the risk. Such decisions range from taking no actions at all, via limited measures to reduce the highest exposures, to extensive regulations aiming at completely eliminating the risk, for instance by prohibiting activities leading to exposure. In the risk management decision, factors other than the scientific assessment of the risk are taken into account, such as social and economical impacts, technical feasibility, and general social practicability. According to the European Commission Technical Guidance Document for risk assessment (European Commission, 2003) "the risk assessment process relies heavily on expert judgment".
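A minimal sketch of the screening logic described above: derive a PNEC from the most sensitive test result using an assessment factor, then flag the substance as "of concern" when the PEC/PNEC ratio exceeds one. All names and numbers below are illustrative assumptions, not values from the TGD or from any actual assessment.

```python
# Minimal PEC/PNEC screening in the spirit of the procedure described above.
ASSESSMENT_FACTOR = 1000.0   # illustrative factor applied to the lowest effect concentration

def pnec(lowest_effect_conc_mg_l: float, assessment_factor: float = ASSESSMENT_FACTOR) -> float:
    """Predicted No-Effect Concentration from the most sensitive test result."""
    return lowest_effect_conc_mg_l / assessment_factor

def characterise(pec_mg_l: float, pnec_mg_l: float) -> str:
    """PEC/PNEC > 1 flags the substance as 'of concern' for that compartment."""
    ratio = pec_mg_l / pnec_mg_l
    verdict = "of concern" if ratio > 1.0 else "no further action at this tier"
    return f"PEC/PNEC = {ratio:.2f} -> {verdict}"

# Hypothetical aquatic-compartment screening
lowest_lc50 = 2.0          # mg/L, most sensitive of the available tests (illustrative)
pec_local = 0.005          # mg/L, predicted local concentration (illustrative)
print(characterise(pec_local, pnec(lowest_lc50)))   # PEC/PNEC = 2.50 -> of concern
```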
3.2 Classification and Labelling
Once the substances have been satisfactorily assessed, the scientific information is put into a publicly accessible category/class format by labelling. The classification and labelling system is particularly interesting since, according to current regulations, certain aspects of the classification of a substance should depend only on the information that is summarized and evaluated in the risk assessment process (Hansson and Rud'en, 2005). According to the Institute for Health and Consumer Protection of the EC Joint Research Centre and the European Chemicals Bureau (ECB), classification and labelling involves evaluation of the hazard
of a substance or preparation in accordance with Directive 67/548/EEC (substances) and 1999/45/EC (preparations) and a communication of that hazard via the label. According to the EU regulations (Commission Directive 2001/59/EC) and the TGD, substances and preparations are classified according to their inherent toxicological and ecotoxicological properties into the danger classes summarized in Table 1. Substances and preparations belonging to these classes have to be provided with a warning label, as well as standardized risk and safety phrases that are assigned in a strictly rule-based way. The labelling is the first and often the only information on the hazards of a chemical that reaches the user, who could be a consumer or a worker. The classification rules are all inflexible in the sense that, if one of the rules puts a substance into one of these classes, additional information cannot lower the classification of that substance but can lead to a stricter classification (Hansson and Rud'en, 2003).

[Figure 2 (caption below): panels labelled Research; Risk assessment (hazard identification, dose-response assessment, exposure assessment, risk characterization, with extrapolation methods); Risk management (decision, agency).]
Fig 2. The Risk decision process as it is usually conceived (National Research Council, 1994).

The European Chemical Substances Information System (ESIS), a subsidiary of the ECB, provides a link to the European INventory of Existing Commercial chemical Substances (EINECS). This online service readily disseminates useful information to the public about the risk assessment status (with draft RA reports) of a number of chemical substances that have already been assessed (http://ecb.jrc.it/ESIS/). This has raised awareness regarding the different chemical substances that are being manufactured and/or used in the EU.

Table 1. The classes used in the European classification and labelling system (Hansson and Rud'en, 2005).
Very toxic (T+)
Toxic (T)
Corrosive (C)
Harmful (Xn)
Irritant (Xi)
Sensitizing (Xn or Xi)
Carcinogenic (T or Xn)
Mutagenic (T or Xn)
Toxic to reproduction (T or Xn)
Dangerous to the environment (N)
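The "stricter only" behaviour of the rule-based classification described above can be illustrated with a few lines of code. The severity ranking used below is an illustrative assumption made for this sketch, not part of the EU rules, and only four of the Table 1 classes are included.

```python
from typing import Optional

# Illustrative severity ranking (assumption for this sketch only).
SEVERITY = {"Irritant (Xi)": 1, "Harmful (Xn)": 2, "Toxic (T)": 3, "Very toxic (T+)": 4}

def update_classification(current: Optional[str], proposed: str) -> str:
    """Return the stricter of the current class and a newly proposed class."""
    if current is None or SEVERITY[proposed] > SEVERITY[current]:
        return proposed
    return current

label = None
for finding in ["Harmful (Xn)", "Irritant (Xi)", "Toxic (T)"]:
    label = update_classification(label, finding)
print(label)  # 'Toxic (T)': the later, milder finding did not lower the class
```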
3.3 Challenges for Improved Ecotoxicological Testing
New regulations, in particular the new European chemicals legislation (REACH), will increase the demands on environmental risk assessment (ERA). This legislation will result in a number of changes to the ERA process, the most significant being that a large number of previously untested substances will have to undergo testing and risk assessment (EC, 2003b). An interesting aspect of REACH is that the burden of proof that a chemical is safe now lies with industry and no longer with governments. Development of the effective, simple and sensitive tools needed to fulfil the objectives of environmental policies, as required by REACH, needs an improved understanding of ecotoxicological structures and their interrelationships (Breitholtz et al., 2005). In the EU today, the requirements on efficient ecotoxicological testing systems are well known and can be summarised as 10 major issues (challenges) for the improvement of ERA practices:
(1) The choice of representative test species;
(2) The development of test systems that are relevant for ecosystems in different parts of the world;
(3) The inclusion of sensitive life stages in test systems;
(4) The inclusion of endpoints on genetic variation in populations;
(5) Using mechanistic understanding of toxic effects to develop more informative and efficient test systems;
(6) Studying disruption in invertebrate endocrine mechanisms, which may differ radically from those we know from vertebrates;
(7) Developing standardized methodologies for testing of poorly water-soluble substances;
(8) Taking ethical considerations into account, in particular by reducing the use of vertebrates in ecotoxicological tests;
(9) Using a systematic (statistical) approach in combination with mechanistic knowledge to combine tests efficiently into testing systems; and
(10) Developing ERA so that it provides the information needed for precautionary decision-making.
Since most of the research necessary for the safety evaluation of chemicals requires the killing of laboratory animals, toxicologists are now faced with an ethical conflict between their professional duties and the interests of the animals. In the past, the protection of consumers
against chemical injury was considered to be of the greatest importance, and society approved of all efforts to detect even the slightest hazards from chemicals. More recently, however, toxicologists have become aware of their ethical responsibilities not only for the safety of the human population but also for the welfare of the animals (Zbinden, 1985, www.ncbi.nlm.nih.gov). Consequently, many resources are now being invested to observe the 'three Rs' (Replacement, Reduction, Refinement) concerning the use of laboratory animals in toxicological testing (Otto, 2002). The trend is shifting towards the development and use of alternative methods that permit the investigation of toxicological responses in unicellular organisms and cell cultures (Zbinden, 1985), and of molecular methods.

4.0 CONCLUDING REMARKS
It is a common misunderstanding that, since the precautionary principle is a risk management principle, it does not influence risk assessment. According to one interpretation, it consists of taking at least some risk-reducing measures at certainty levels that are lower than those required for considering an effect scientifically proven (Hansson and Rud'en, 2004). It should thus be clear that it is the task of risk assessors to provide risk managers with the information they need to take their decisions according to the criteria they choose.
Even in the developed world, where the manufacture and use of chemical substances is unequalled, the process of ERA still faces hardships. Though there is little rigidity in most assessment procedures as provided by the EU technical guidelines, expert judgement and professional knowledge are fundamental for each case at hand. The challenges highlighted above are just a tentative checklist that is now used in optimising the ecological relevance of any ERA activity. The list is non-exhaustive and the factors can be weighted differently from one assessment to another depending on the underlying objective. The combined effect of scientific uncertainty and a high degree of flexibility and reliance on individual experts makes it practically impossible to achieve a risk assessment process that is fully consistent and systematic in all aspects (Hansson & Rud'en, 2005). It is therefore essential to scrutinize and evaluate the risk assessment process, in order to learn more about (1) how scientific information is reflected in risk assessment, (2) how and to what degree risk assessment influences risk management, and (3) to what degree the risk decision process as a whole satisfies general quality criteria such as efficiency, consistency, and transparency. Additional knowledge about the applications of general risk assessment and risk management principles in different regulatory settings, in particular the substitution principle (i.e. the principle that a chemical substance should be substituted when a safer alternative is available) and the precautionary principle (i.e. that actions to reduce unwanted effects from chemicals should be taken even if the scientific indications of the existence of that effect do not amount to full scientific proof), is furthermore needed.
It is also imperative to acknowledge that there is no single perfect test for any chemical during RA. However, combining tests with different strengths and weaknesses into scientifically well-founded and resource-efficient test systems is possible, though challenging. It should also be noted that, because of the influence of both biotic and abiotic factors on the fate of chemicals, exposure models used in the EU cannot always be used in other regions prior to modification and validation if conclusive results are to be achieved. The biggest challenge now ahead of the EU is to monitor the implementation of REACH, and to evaluate the actual working of this system, compared to the system it is replacing, once it comes into force.

REFERENCES
European Commission, 2003. Technical guidance document in support of Commission Directive 93/67/EEC on risk assessment for new notified substances and Commission Regulation (EC) 1488/94 on risk assessment for existing substances. (Available online at: www.ecb.it).
Institute for Health and Consumer Protection, European Chemicals Bureau, European Commission Joint Research Centre, 2003. Technical Guidance Document on Risk Assessment (TGD), 2nd ed., Part II. IHCP/JRC, Ispra, Italy.
ECB (European Chemicals Bureau), 2003b. European chemical Substances Information System (ESIS). Ispra, Italy.
Schantz, S.L., Gasior, D.M., Polverejan, E., McCaffrey, R.J., Sweeney, A.M., Humphrey, H.E.B., Gardiner, J.C., 2001. Impairments of memory and learning in older adults exposed to polychlorinated biphenyls via consumption of Great Lakes fish. Environ. Health Persp. 109, 605-611.
Mikael Karlsson, 2005. Science and norms in policies for sustainable development: Assessing and managing risks of chemical substances and genetically modified organisms in the European Union. Regulatory Toxicology and Pharmacology 44 (2006) 49-56.
NRC, 2001. Arsenic in Drinking Water: 2001 Update. Subcommittee to Update the 1999 Arsenic in Drinking Water Report, Committee on Toxicology, Board on Environmental Studies and Toxicology, National Research Council (NRC). National Academy Press, Washington.
Breitholtz, M., Rudén, C., Hansson, S.O., Bengtsson, B., 2005. Ten challenges for improved ecotoxicological testing in environmental risk assessment. Ecotoxicology and Environmental Safety 2005. Article in press.
Frederik A.M. Verdonck, Geert Boeije, Veronique Vandenberghe, Mike Comber, Watze de Wolf, Tom Feijtel, Martin Holt, Volker Koch, André Lecloux, Angela Siebel-Sauer, Peter A. Vanrolleghem, 2004. A rule-based screening environmental risk assessment tool derived from EUSES. Chemosphere 58 (2005) 1169-1176.
Masazumi Harada, Hirokatsu Akagi, Toshihide Tsuda, Takako Kizaki, Hideki Ohno, 1999. Methylmercury level in umbilical cords from patients with congenital Minamata disease. The Science of the Total Environment 234 (1999) 59-62.
Harada, M., 1995. Minamata disease: Methylmercury poisoning in Japan caused by environmental pollution. Crit Rev Toxicol 1995;25:1-24.
Sven Ove Hansson, Christina Rudén, 2005. Evaluating the risk decision process. Toxicology 218 (2006) 100-111.
Hansson, S.O., Rudén, C., 2003. Improving the incentives for toxicity testing. J. Risk Res. 6, 3-21.
Rudén, C., Hansson, S.O., 2003. How accurate are the European Union's classifications of chemical substances? Toxicol. Lett. 144 (2), 159-173.
Otto Meyer, 2002. Testing and assessment strategies, including alternative and new approaches. Toxicology Letters 140-141 (2003) 21-30.
Katsuya Kawamoto, Kyong A Park, 2005. Calculation of environmental concentration and comparison of output for existing chemicals using regional multimedia modelling. Chemosphere xxx (2005) xxx-xxx.
Michael Fryer, Chris D. Collins, Helen Ferrier, Roy N. Colvile, Mark J. Nieuwenhuijsen, 2006. Human exposure modelling for chemical risk assessment: a review of current approaches and research and policy implications. Available at www.sciencedirect.com.
Chiu-Yueh Yang, Mei-Lin Yu, How-Ran Guo, Te-Jen Lai, Chen-Chin Hsu, George Lambert, Yueliang Leon Guo, 2005. The endocrine and reproductive function of the female Yucheng adolescents prenatally exposed to PCBs/PCDFs. Chemosphere 61 (2005) 355-360.
C.J. van Leeuwen and J.L.M. Hermens, 1997. Risk Assessment of Chemicals: An Introduction. Kluwer Academic, Dordrecht. Reviewed in Aquatic Toxicology 38 (1997) 199-201.
Steve K. Teo, David I. Stirling and Jerome B. Zeldis, 2005. Thalidomide as a novel therapeutic agent: new uses for an old product. Drug Discovery Today, Volume 10, No. 2, January 2005. Available at www.sciencedirect.com/science/journal.
J. A. Timbrell, 1989. Introduction to Toxicology. Taylor & Francis, Basingstoke. Reviewed in Environmental Pollution, Volume 61, Issue 2, 1989, Pages 171-172.
Zbinden, 1985. Ethical considerations in toxicology. Food Chem Toxicol. 1985 Feb;23(2):137-8.
THE IMPACT OF A POTENTIAL DAM BREAK ON THE HYDRO ELECTRIC POWER GENERATION: CASE OF OWEN FALLS DAM BREAK SIMULATION, UGANDA

Michael Kizza; Department of Civil Engineering, Makerere University, P.O. Box 7062, Kampala, Uganda, [email protected]
Seith Mugume; Department of Civil Engineering, Makerere University, P.O. Box 7062, Kampala, Uganda, [email protected]
ABSTRACT:
Dams play a vital role in the economy of a country by providing essential benefits such as irrigation, hydropower, flood control, drinking water and recreation. However, in the unlikely and rare event of failure, they may cause catastrophic flooding in the downstream area, which may result in huge loss of human life and property worth billions of dollars. The loss of life would vary with the extent of the inundation area, the size of the population at risk, and the amount of warning time available. A severe energy crisis would also befall a nation whose energy is heavily dependent on hydro electric power. This would in the long run hamper industrial progress and the economic development of the nation.
Keywords: Dam Break Simulation, Flood control, recreation, Hydro Electric Power, energy crisis, Catastrophic flooding, downstream, installed capacity.
1.0 INTRODUCTION
Uganda is a developing country which is heavily dependent on hydro electric power to feed the national grid. Uganda's installed capacity is 380 MW after the extension of the Owen Falls (Nalubaale) Dam Complex in Jinja, Uganda. The dam was formally opened on Thursday 29th April 1954 by Her Majesty Queen Elizabeth II as a single investment that would lay the foundation for industrial development in Uganda. It is a reinforced concrete gravity dam with a design life of 50 years and is located on the Victoria Nile River in southeastern Uganda near Jinja. The old Owen Falls (Nalubaale) Dam has a capacity of 180 MW of hydro electricity. An additional 200 MW of installed capacity was realised after the completion of the Owen Falls Extension Project (Kiira Dam). No structure is permanent, however advanced the construction and material technologies employed (Anderson et al., 2002; Fleming, 2001). According to the US Association of Dam Safety Officials, the average life expectancy of an unmaintained dam is approximately 50 years (Donnelly et al., 2001). The dam has therefore outlived its design life of 50 years and some serviceability failures are already showing in the form of cracks and leakages within
the Nalubaale powerhouse structure, raising concerns about the safety of the dam in its current state. Though major dams are designed for a very low risk of failure, it is important to note that the risk becomes more significant with age. Therefore, as we ponder the current energy crisis, it is important to keep in mind the risk of failure and its associated threats.
Figure 1: Owen Falls Dam Complex, Jinja, Uganda (Source: Eskom (U) Ltd)

2.0 POTENTIAL FAILURE INCIDENTS
From previous research, the types of potential incidents that could occur to the dam complex are presented below. The characteristics of the catchment and the configuration and arrangement of the Owen Falls complex are unusual and heavily influence the type and nature of the incidents that could occur.
2.1 Earthquake Damage
The dam lies in a relatively inactive earthquake zone between the two more active zones of the Rift Valley and the Rwenzori Mountains. The Owen Falls Dam complex was designed to withstand, with no damage, an earthquake acceleration of 0.06g (Operating Basis Earthquake, OBE) and to withstand, with no failure, an earthquake acceleration of 0.17g (Maximum Design Earthquake, MDE). These earthquakes have return periods of 1,000 and 10,000 years respectively and are applied as a horizontal force (or 2/3 of the vertical force). However, this does not eliminate the extremely remote possibility of a large event. If a large event occurred, it could cause instability of the intake dams or the main concrete dam, stability failure of the wing wall embankments adjacent to the Kiira power station or of the cutting/embankment forming the west bank of the canal, or damage leading to later failure.

2.2 Terrorist Attack
The Owen Falls Dam complex is strategically located as a gateway linking Uganda to the coast at Mombasa. Given its strategic location and the damage that could be inflicted by deliberate action, the Owen Falls complex must be regarded as a potential terrorist target.
2.3 Sliding or Overturning of Concrete Gravity Sections
The failure of gravity sections, should it occur, would be by: (i) overturning (toppling), (ii) sliding, or (iii) a combination of the two. The gravity sections were designed to international dam safety standards. The configuration and height of the structures also naturally limit the discharge that would result from a credible failure. Stability failure is therefore unlikely, but a worst credible event can be derived for the purpose of generating a flooding event. This can be assumed to be the collapse of the Owen Falls Nalubaale intake block at the level of the machine hall, of the main Owen Falls Dam, or of the Kiira power station.

2.4 Embankment Instability
Embankment instability would take the form of settlement and/or slip circle failure. Any failure of this sort is likely to be progressive and therefore gives some measure of warning. The embankments, such as those adjacent to the Kiira power station or the cutting/embankment forming the west bank of the canal, were designed and assessed for stability. These assessments, including settlement and slip circle failure analysis, were performed to modern safety standards. Hence failure resulting from embankment instability is highly unlikely.

2.5 Embankment Seepage Failure
Embankment seepage failure would take the form of seepage through the structure or foundation which increases and removes material as the flow builds up, leading to the development of a pipe and ultimately to failure. Depending on the location of the seepage outlet, some measure of warning may be expected. The embankments, such as those adjacent to the Kiira power station or the embankment forming the west bank of the canal, have been designed and assessed for seepage failure. These assessments were performed to modern safety standards and hence seepage failure is unlikely. In addition, as with the other forms of embankment failure, the configuration and width of the canal would limit the discharge resulting from a seepage failure.

3.0 MODELLING AND SIMULATION
A model is a simplified representation of a complex system. Modelling refers to the work of making a simple description of a system or process that can be used to explain it. Simulation refers to the development and use of a computer model for evaluating actual or postulated dynamic systems (McGraw-Hill, 1987c). During simulation, a particular set of conditions is created artificially in order to study or experience a system that could exist in reality. Engineering models can be used for planning, design, operation and research.

3.1 Dam Break Modelling
Dam break modelling consists of:
(i) Prediction of the outflow hydrograph due to the dam breach
(ii) Routing of the hydrograph through the downstream valley to obtain the maximum water level and discharge, along with the time of travel, at different locations of the river downstream of the dam.
Dam break studies can be carried out using either (i) scaled physical hydraulic models or (ii) mathematical simulation using computers. A modern tool for dam break analysis is the mathematical model, which is most cost effective and approximately solves the governing flow equations of continuity and momentum by computer simulation. Computer models such as SMPDBK, DAMBRK and MIKE11 have been developed in recent years; however, these models depend on certain inputs regarding the geometrical and temporal characteristics of the breach. The state of the art in estimating these breach characteristics is not as advanced as the computer programs themselves, and this remains a limiting factor in dam break analysis.

3.2 The Simplified Dam Break Model
The SMPDBK model was developed by Wetmore and Fread (1984) at the National Weather Service (NWS) of the USA. This model produces the information needed for determining the areas threatened by dam-break flood waters while substantially reducing the amount of time, data, computer facilities, and technical expertise required by more sophisticated unsteady flow routing models such as the DAMBRK model. The NWS SMPDBK model computes the dam break outflow from a simplified equation and routes the outflow based on curves generated with the NWS DAMBRK model. Flow depths are computed from Manning's equation.
3.2.1 Data Requirements
(i) Breaching parameters (final height and width of breach)
(ii) Breach formation time
(iii) Non-dam-break flow (spillway/turbine/sluice gate/overtopping flow)
(iv) Volume of reservoir
(v) Surface area of reservoir
(vi) Manning's roughness coefficient for the river channel downstream
(vii) Elevation vs. width data for up to five downstream river cross sections
In producing the dam break flood forecast, the SMPDBK model first computes the peak outflow at the dam, based on the reservoir size and the temporal and geometrical description of the breach. The computed flood wave and the channel properties are used in conjunction with routing curves to determine how the peak flow will be diminished as it moves downstream. Based on this predicted flood wave reduction, the model computes the peak flows at specified downstream points with an average error of less than 10%. The model then computes the depth reached by the peak flow based on the channel geometry, slope, and roughness at these downstream points. The model also computes the time required for the peak to
reach each forecast point and, if the user entered a flood depth for the point, the time at which that depth is reached as well as when the flood wave recedes below that depth, thus providing the user with a time frame for evacuation and fortification on which the preparedness plan may be based.
3.2.2 Peak Outflow Computation
The peak breach outflow is computed from

$$Q_{b\max} = Q_o + 3.1\,B_r\left[\frac{C}{t_f/60 + C/\sqrt{h_d}}\right]^{3} \qquad (1)$$

where $C = 23.4\,S_a/B_r$ and
$Q_o$ = spillway/turbine/overtopping flow (cfs)
$B_r$ = breach width (ft)
$t_f$ = time of failure (min)
$h_d$ = height of dam (ft)
$S_a$ = surface area of reservoir (acres)
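The relation in Eq. (1) is straightforward to evaluate directly. The sketch below is a minimal Python illustration of it; all input values are illustrative placeholders, not the Owen Falls figures.

```python
# Minimal sketch of the simplified peak-outflow relation (Eq. 1).
# Units follow the equation: cfs, ft, minutes, acres.

def peak_breach_outflow(q_o, b_r, t_f, h_d, s_a):
    """Peak dam-break outflow (cfs).

    q_o: non-breach flow (cfs); b_r: breach width (ft); t_f: failure time (min);
    h_d: height of dam/breach (ft); s_a: reservoir surface area (acres).
    """
    c = 23.4 * s_a / b_r
    return q_o + 3.1 * b_r * (c / (t_f / 60.0 + c / h_d ** 0.5)) ** 3

# Illustrative inputs only (rough US-unit magnitudes, not the study values)
print(peak_breach_outflow(q_o=42_000, b_r=525, t_f=10, h_d=56, s_a=88.4))
```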
3.2.3 Flow Depth Computation
The model computes flow depths from Manning's equation for a known discharge:

$$Q = \frac{1.49}{n}\,A\,(A/B)^{2/3}\,\sqrt{S} \qquad (2)$$

$$S_c = \frac{77000\,n^{2}}{D^{1/3}} \qquad (3)$$

where $n$ = Manning's roughness coefficient, $S$ = slope of the channel, $S_c$ = critical slope, $A$ = flow area, $B$ = top width and $D$ = hydraulic depth.
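Since Eq. (2) increases monotonically with depth for a given cross section, the depth corresponding to a routed peak discharge can be found with any one-dimensional root finder. The sketch below uses simple bisection; the power-law section B = k·h^m is an assumption used only to generate area and top width from depth.

```python
# Sketch: invert Manning's equation (Eq. 2) for depth by bisection.

def manning_q(h, n, slope, k=200.0, m=0.5):
    """Discharge (cfs) at depth h (ft) for an assumed section with top width k*h**m."""
    b = k * h ** m                   # top width (ft)
    a = k * h ** (m + 1) / (m + 1)   # flow area (ft^2)
    return 1.49 / n * a * (a / b) ** (2.0 / 3.0) * slope ** 0.5

def depth_for_flow(q, n, slope, h_lo=0.01, h_hi=100.0, tol=1e-3):
    """Depth (ft) giving discharge q; Manning's Q is monotonic in depth."""
    while h_hi - h_lo > tol:
        h_mid = 0.5 * (h_lo + h_hi)
        if manning_q(h_mid, n, slope) < q:
            h_lo = h_mid
        else:
            h_hi = h_mid
    return 0.5 * (h_lo + h_hi)

print(depth_for_flow(q=150_000, n=0.05, slope=0.0005))  # illustrative inputs
```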
3.2.4 Routed Flow
Routing of the peak discharge is done using empirically derived relationships (as used in the DAMBRK model). It is represented via the dimensionless parameters $Q_p/Q_{b\max}$, $X/X_c$, $V^*$ and $F_r$. Routing curves are then used. These curves were derived from numerous executions of the NWS DAMBRK model and they are grouped into families based on the Froude number associated with the flood wave peak (Fread and Wetmore, 1981). To determine the correct family and member curve that most accurately predicts the attenuation of the flood, the user must define the routing parameters listed above. This requires the user to first describe the river channel downstream from the dam to the first routing point as a prism.
$$X_c = \frac{6\,VOL}{A_c\left(1 + 4(0.5)^{m+1}\right)} \qquad (4)$$

$$V^{*} = \frac{VOL}{A_c X_c} \qquad (5)$$

$$X^{*} = \frac{X}{X_c} \qquad (6)$$

where
VOL = volume in reservoir (ft³)
$X_c$ = distance parameter (ft)
$m$ = average channel geometry fitting coefficient

$$F_r = \frac{V}{\sqrt{g A_c/B}} \qquad (7)$$

$$Q^{*} = \frac{Q_p}{Q_{b\max}} \qquad (8)$$

where $F_r$ = Froude number and $g$ = acceleration due to gravity.
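A small helper evaluating the dimensionless parameters of Eqs. (4)-(8) may clarify how the routing-curve family and member curve are selected. The formula forms follow the reconstruction above, and every numerical input in the call is illustrative only.

```python
# Sketch: dimensionless routing parameters used to pick a routing curve.
from math import sqrt

def routing_parameters(vol, a_c, b, x, q_p, q_bmax, v, m=0.5, g=32.2):
    """vol: reservoir volume (ft^3); a_c: average channel area (ft^2);
    b: average top width (ft); x: distance to forecast point (ft);
    v: average velocity (ft/s); m: channel fitting coefficient."""
    x_c = 6.0 * vol / (a_c * (1.0 + 4.0 * 0.5 ** (m + 1)))  # distance parameter (ft)
    v_star = vol / (a_c * x_c)                               # dimensionless volume
    x_star = x / x_c                                         # dimensionless distance
    f_r = v / sqrt(g * a_c / b)                              # Froude number
    q_star = q_p / q_bmax                                    # dimensionless peak flow
    return x_c, v_star, x_star, f_r, q_star

print(routing_parameters(vol=3.7e8, a_c=15_000, b=3_000, x=26_400,
                         q_p=4.5e5, q_bmax=6.1e5, v=8.0))
```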
[Figure: simplified dam break routing curves for a Froude number family F = 0.25, plotting $Q_p/Q_{b\max}$ against $X/X_c$ for member curves $V^* = 1.0$ to $5.0$.]
Fig. 2: Simplified Dam Break Routing Curves
Computation of Time to Peak ($T_p$)
The simplified dam break model computes the time to peak using the following equations:

$$T_p = t_f + \frac{X}{C} \qquad (9)$$

$$C = 0.682\,V_{x_i}\left[\frac{5}{3} - \frac{2m}{3m+3}\right] \qquad (10)$$
$$V_{x_i} = \frac{1.49}{n}\,\sqrt{S}\,D_{x_i}^{2/3} \qquad (11)$$

$$D_{x_i} = \frac{h_{ref}}{m+1} \qquad (12)$$

$$h_{ref} = f\left(Q^{*}, n, \sqrt{S}, A_{x_i}, D_{x_i}\right) \qquad (13)$$

From Manning's equation we have:

$$Q = \frac{1.49}{n}\,A\,D^{2/3}\,\sqrt{S} \qquad (14)$$

$$Q^{*} = \frac{Q_p}{2}\left(0.3 + \frac{Q_o}{Q_p}\right) \qquad (15)$$
Computation of Time to Flooding ($T_{fld}$) and De-flooding ($T_{dfld}$) of Elevation $H_f$

$$Q_f = a\,(h_f)^{b} \qquad (16)$$

$$t_{fld} = t_p - \left(\frac{Q_p - Q_f}{Q_p - Q_o}\right)t_p \qquad (17)$$

$$t_{dfld} = t_p + \left(\frac{Q_p - Q_f}{Q_p - Q_o}\right)t_f \qquad (18)$$
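As a brief illustration of Eqs. (9)-(11), the sketch below estimates the arrival time of the peak at a point a given distance downstream. The kinematic factor in brackets follows the reconstruction of Eq. (10) above, the 0.682 constant converts ft/s to mi/hr, and all inputs are illustrative only.

```python
# Sketch: peak travel time from the dam to a downstream forecast point.

def time_to_peak(t_f_hr, x_mi, v_fps, m=0.5):
    """Peak arrival time (hr) at x_mi miles downstream, for failure time t_f_hr (hr),
    average velocity v_fps (ft/s) and channel fitting coefficient m."""
    celerity = 0.682 * v_fps * (5.0 / 3.0 - 2.0 * m / (3.0 * m + 3.0))  # mi/hr
    return t_f_hr + x_mi / celerity

print(time_to_peak(t_f_hr=0.17, x_mi=10.0, v_fps=8.0))  # illustrative inputs
```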
4.0 MODEL SETUP
4.1 Reservoir Data
Due to the hydraulic constraint at the former Rippon Falls, the outflow from Lake Victoria will not significantly empty the lake in the short term, apart from the 2.8 km reach from Rippon Falls to Owen Falls. This reach will therefore act as the effective reservoir in the short term (immediately after the breach):
(i) Length of reach = 2.8 km (1.75 mi)
(ii) Surface area = 0.884 km² (88.4 acres)
(iii) Average depth within reach = 12 m (39.37 ft)
(iv) Total live storage volume = 10,600,000 m³ (3480.315 acre-ft)
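As a rough consistency check (this arithmetic is not in the paper): $0.884\times10^{6}\ \mathrm{m^2}\times 12\ \mathrm{m}\approx 10.6\times10^{6}\ \mathrm{m^3}$, which agrees with the quoted total live storage volume.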
4.2 Flow Data
Nalubaale Power Station
(i) Turbine flow (10 turbines) = 1200 m³/s
(ii) Sluice gate flow (6 sluices) = 1200 m³/s
Kiira Power Station
(i) Turbine flow = 1100 m³/s
(ii) Three-bay spillway = 1740 m³/s
Victoria Nile
(i) Average flow rate in the Victoria Nile = 1154.175 m³/s (Source: ADCP flow data from WRMD)
4.3 Victoria Nile Channel Geometry and Model Set-up
The Victoria Nile reach that has been modelled is 51 km long and comprises 14 cross sections obtained from ADCP measurements carried out along the Victoria Nile and Napoleon Gulf in 2004 by the WRMD of the DWD. Missing values of altitude along the Nile were obtained by interpolation between known ones, and missing cross-sectional data were modelled as average values of the known cross sections. The average depth of water in the River Nile model has been taken as 5 m. In addition, 1:50,000 topographic maps were obtained from WRMD, i.e. Jinja (sheet 71/1), Kagoma (sheet 62/3), Kamuli (sheet 62/1), Kayonza (sheet 61/2) and Bale (sheet 51/4). These were used to obtain distances along the Nile from the Owen Falls dam to the particular points of interest.

5.0 DAM BREAK ANALYSES
Failure of the dam will most likely result from earthquake damage to one of the concrete sections or from a terrorist attack (sabotage). Breaching of the dam is therefore considered under three scenarios:
(i) Breaching of the Nalubaale power station intake block
(ii) Breaching of the Kiira power station intake block
(iii) Breaching of the main Owen Falls Dam.
The Owen Falls Nalubaale intake block can be considered as a concrete gravity dam which will tend to fail by partial breaching as soon as one or more of the monolithic concrete sections formed during construction are removed by the escaping water. The time required for breach formation in this situation is in the range of a few minutes.

5.1 Breaching Parameters
Fig. 3: Breaching parameters illustrated.
Table 1: Breaching parameters for the three scenarios

Section of Dam                 Breaching Width (m)   Breaching Height (m)   Breaching time (min)   Manning's n Value
Nalubaale Power Intake Block   160                   17                                            0.050
Kiira Power Station            56                    24                                            0.050
Main Owen Falls Dam            190                   30                                            0.050
Table 4: Nalubaale Dam Break Results

Chainage [km]   Max Flow [m³/s]   Max Elevation [mASL]   Max Depth [m]   Time [hr] to Max Depth
0.00            17365.94          1113.83                3.53            0.03
3.46            15144.01          1113.13                3.58            0.12
4.42            12409.82          1112.97                3.63            0.21
6.00            10128.76          1112.97                3.97            0.29
8.00            7776.00           1108.33                3.33            0.52
11.01           7199.47           1100.62                3.62            0.63
13.01           7131.96           1089.96                2.96            0.66
19.01           5990.76           1079.56                3.56            0.99
26.00           5190.61           1065.25                3.25            1.38
28.00           5012.93           1061.79                3.59            1.50
36.00           4483.80           1046.25                3.25            1.96
38.00           4334.34           1045.85                3.65            2.11
41.01           4138.93           1045.85                4.86            2.37
51.01           3561.75           1042.65                3.65            3.30
7.0 ANALYSIS OF RESULTS
7.1 Nalubaale Dam Break Results
[Figure: outflow hydrograph for the Nalubaale dam breach (discharge [m³/s] vs. time [hrs]) and discharge [m³/s] vs. chainage [km] along the Nile.]
7.2 Kiira Dam Break Results

Chainage [km]   Max Flow [m³/s]   Max Elevation [mASL]   Max Depth [m]   Time [hr] to Max Depth
0.00            11579.80          1113.80                3.50            0.03
3.46            10382.45          1113.15                3.60            0.17
[Figure: outflow hydrograph for the Kiira dam breach (discharge [m³/s] vs. time [hrs]) and discharge [m³/s] vs. chainage [km] along the Nile.]
Maximum Flow
(i) Nalubaale Dam Break = 17365.94 m³/s
(ii) Kiira Dam Break = 11579.80 m³/s
These compare with a maximum flow in the River Nile over 50 years of 1396 m³/s, i.e. the Nalubaale breach peak is roughly twelve times the 50-year maximum.
8.0 CONCLUSIONS
(i) Any breach at Owen Falls would, in the short term, result in a sudden flood of up to 17365.94 m³/s and a subsequent increased steady flow in the Nile of up to 5000 m³/s controlled by the Rippon Falls.
(ii) The 2.8 km stretch of the Napoleon Gulf would be emptied in minutes and the extent of flooding would affect the reach of the Victoria Nile between Jinja and Lake Kyoga. Attenuation of the flows would occur at Lakes Kyoga and Albert, limiting the flooding effects further downstream.
(iii) The water level in Lake Victoria would reduce significantly, affecting water supply, water transport and fishing activities on the lake.
(iv) The following infrastructure would potentially be at risk:
(v) the road bridge across the Owen Falls Dam;
(vi) the road bridge across the new canal;
(vii) the Njeru town main water supply crossing the Nalubaale power station intake, the Owen Falls Dam, and the new canal bridge;
(viii) the telecommunications landline from Kampala to Jinja;
(ix) the fibre optic control line which connects the two control rooms in the Nalubaale and Kiira power stations; this route crosses the Owen Falls Dam and runs along the left bank of the canal;
(x) the minor electric power line along the roadway;
(xi) the power and control cables to the sluices on the Owen Falls Dam;
(xii) the transmission lines at Jinja connecting the Kiira power station to the switchyard at the Nalubaale power station;
(xiii) the new MTN fibre optic line installed in June 2003.
(xiv) A serious energy crisis would result in the nation.
9.0 RECOMMENDATIONS
(i) There is a need to carry out an immediate dam safety analysis.
(ii) There is a need to carry out an inventory/structural appraisal of the dam to ascertain its remaining life span.
(iii) Plans should be set to either decommission the dam or carry out immediate renovation works.
(iv) The resulting flood should be taken into account in the design of the flood control structures at Bujagali Dam.

REFERENCES
Fread D.L. (1998) Dam Breach Modelling and Flood Routing: A perspective on present capabilities and future directions.
Jacobs J.E. (2003) Emergency Preparedness Plan (ECP), Volume 1.
Masood S.H., Nitya N.R., One-dimensional Dam Break Flood Analysis for Kameng Hydro Electric Project, India. Source: www.ymparisto.fi/default.asp?contentid, accessed on 18th Nov 2005.
Tony L. Wahl (1998) Uncertainty of Predictions of Embankment Dam Breach Parameters. ASCE Paper.
Wetmore J.N., Fread D.L. (1980) The New Simplified Dam Break Flood Forecasting Model for Desktops and Hand Microcomputers. Hydrologic Research Laboratory.
LEAD LEVELS IN THE SOSIANI

O.K. Chibole, School of Environmental Studies, Moi University, Kenya, [email protected]
ABSTRACT
River Sosiani is a tributary of r. Nzoia, one of the major rivers draining the eastern water catchment of Lake Victoria, the largest fresh water lake in Africa. River Sosiani also bisects Eldoret town, the biggest town in the northern region of Kenya. Although there is provision for unleaded fuel in Kenya, leaded fuel is still popular among motorists. The widespread habit of washing vehicles, including petroleum tankers, along the banks of the Sosiani, traffic congestion on the bridges over the Sosiani during peak hours, and the dumping of solid waste, including used motor vehicle batteries, next to the river course are all causes for concern. The River Sosiani drainage course was divided into three zones: (1) the forested zone (Fz), the upper reach of the river in the forest, (2) the agricultural zone (Az), the middle reach, and (3) the urban zone (Uz), the lower reach in Eldoret town. There were two sampling sites (Fz1, Fz2) in Fz, used as reference, and four sampling sites each in Az (Az1, Az2, Az3, Az4) and Uz (Uz1, Uz2, Uz3, Uz4). Water samples and sediment samples, where feasible, were collected from each sampling site once a month for a period of two years and analysed using AAS (Varian SpectrAA 100/200 model) at the Moi University School of Environmental Studies laboratory. Results show very low lead levels (<0.01 mg/L) both in water and sediments in Fz, and 400% and 500% increases in lead levels in water and sediments, respectively, in Uz. Az levels show a mixed picture (0.05 ± 0.02 mg/L).
Keywords: Lead; Concentration; Water; Sediments; River Sosiani.
1.0 INTRODUCTION
Lead is one of the most versatile heavy metals, widely used in various industrial applications. Major uses are as a petrol additive, in coatings for steel products, and in batteries, paints, dyes and cables. Lead, however, is known to be associated with a continuum of health effects at both high and low levels of exposure. These include neurobehavioural and psychological problems, premature delivery or stillbirth, anaemia due to interruption of haem synthesis, impaired growth, poor hearing and learning difficulties, hyperactivity and low IQ (Grant et al., 1989). A high-fat diet and a low Ca/Se ratio increase absorption. Infants are especially at high risk, as they absorb lead 3-5 times more readily than adults. Although there is a provision for unleaded petrol in Kenya, leaded petrol is popular among Kenyan motorists. Eldoret town, the major industrial hub of the northern region of Kenya, has a population of 300,000
people. The main transport means in Eldoret is the "Matatu" (a 12-seater minibus), most of which are powered by leaded regular petrol. Car washing is also widespread along the banks of the Sosiani in Eldoret town.

[Figures: Fig. 1 Study area; Fig. 2 Source of the Sosiani; Fig. 3 The Sosiani meanders into town; Fig. 4 Car wash (cw); Fig. 5 cw effluent]
2.0 METHODOLOGY
2.1 Sampling Strategy
The selection of sampling sites was dictated by human activities in the study area, which was divided into three zones: the forested zone (Fz), the agricultural zone (Az), and the urban zone (Uz) (Fig. 1). Fz is upstream of Az, near the source of the river Sosiani, with minimal anthropogenic impact. Two sampling sites (Fz1, Fz2) were established in the forested zone. Az is upstream of Uz and sparsely populated. Az had three sampling sites and Uz had four: Az1, Az2, Az3; and Uz1, Uz2, Uz3, Uz4, respectively (Fig. 1). Standard sampling methods were employed within the conceptual framework.
2.2 Analysis
Sediments: Sediment samples were oven dried for 24 hours at 70°C to avoid the loss of volatile constituents (Van Loon and Barefoot, 1989). Oven-dried samples were then pulverised and passed through a 0.2 mm sieve. The samples were subsequently digested using a nitric acid/hydrochloric acid (aqua regia) mixture (Lewis and McConchie, 1994). All digested solutions were filtered through a 0.45 µm ash-less Whatman filter paper using a vacuum pump. The filtrates were made up to 50 ml with distilled water. Lead was analysed on an Atomic Absorption Spectrometer (AAS), Varian SpectrAA 100/200 model, by acetylene-air flame atomic absorption spectroscopy, whose detection limit is 0.01 mg/L Pb. A minimum of three calibration standards were used to construct calibration graphs. Three readings were taken and averaged to give the mean concentration value, appropriately adjusted for volume/weight correction.

Water: After thorough mixing, a 50 ml water sample was pipetted into a flask and concentrated on a hot plate to 15 ml. The sample was then digested with a nitric/sulphuric acid mixture (Greenberg et al., 1992). No filtering was done prior to digestion, so the obtained values are representative of both the dissolved and suspended fractions of the elements. After digestion the filtrate was made up to 50 ml using distilled water before analysis. Control blanks yielded values below detectable limits, while certified reference material gave values within the certified concentration range.

Precision and Accuracy Measurements
(i) Blanks: Blanks were prepared and four of each type taken through the digestion procedures for water and sediment. The table below shows the results of the analysis; the figures are given to three decimal places because of their small values.
Table 3.5: Precision measurements in blanks

Sample type   # of samples   Pb (mg/kg)
Water         4              0.003 ± 0.001
Sediment      4              0.001 ± 0.001

(ii) Standard reference materials: Standard reference materials were analysed by the same methods used in the determination of lead in the samples.

Table 3.6: Results of accuracy tests using standard reference material (flame AAS method)

Certified material   Element   Certified value (mg/kg)   Measured value (mg/kg)   Accuracy (% recovery)
                     Pb        60.00                     57.92                    96.5
Water (EMPA)         Pb        0.05 ± 0.01               0.04                     80-100
With reference to both the accuracy and precision results above, the methods of sample preparation and analysis for lead used in the study produced comparable metal recoveries, 95% of which lie within the reported concentration range of the reference samples.

3.0 RESULTS

Table 1: River water and sediment lead concentrations in Fz (mg/L)

Site      Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec    Mean ± SD
Fz1 (w)   0.06   0.06   0.07   0.08   0.08   0.07   0.08   0.09   0.08   0.07   0.07   0.07   0.07 ± 0.02
Fz1 (s)   0.10   0.11   0.11   0.12   0.12   0.13   0.13   0.13   0.13   0.12   0.11   0.10   0.12 ± 0.01
Fz2 (w)   0.00*  0.05   0.05   0.08   0.09   0.08   0.07   0.09   0.09   0.07   0.07   0.06   0.05 ± 0.02
(w) = water, (s) = sediment
*BDL = below detection limit

Table 2: River water and sediment lead concentrations in Az (mg/L)

Site      Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec    Mean ± SD
Az1 (w)   0.11   0.09   0.09   0.10   0.13   0.10   0.13   0.12   0.12   0.13   0.11   0.12   0.11 ± 0.01
Az1 (s)   0.15   0.16   0.17   0.17   0.19   0.20   0.21   0.20   0.21   0.21   0.19   0.19   0.19 ± 0.02
Az2 (w)   0.15   0.14   0.16   0.17   0.19   0.21   0.19   0.18   0.19   0.19   0.17   0.16   0.12 ± 0.02
Az2 (s)   0.21   0.21   0.23   0.23   0.24   0.25   0.25   0.24   0.23   0.24   0.23   0.23   0.23 ± 0.01
Az4 (w)   0.17   0.18   0.17   0.19   0.21   0.21   0.20   0.23   0.23   0.21   0.22   0.19   0.20 ± 0.02
Az4 (s)   0.31   0.31   0.32   0.33   0.34   0.34   0.35   0.34   0.36   0.35   0.33   0.32   0.33 ± 0.02
(w) = water, (s) = sediment
Table 3: River water and sediment lead concentrations in Uz (mg/L)

Site      Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec    Mean ± SD
Uz1 (w)   0.17   0.18   0.18   0.20   0.21   0.21   0.23   0.22   0.21   0.20   0.21   0.19   0.20 ± 0.02
Uz1 (s)   0.48   0.48   0.49   0.42   0.51   0.49   0.50   0.52   0.49   0.50   0.51   0.50   0.49 ± 0.03
Uz2 (w)   0.24   0.25   0.26   0.28   0.31   0.33   0.32   0.34   0.33   0.30   0.29   0.27   0.29 ± 0.03
Uz2 (s)   0.79   0.79   0.81   0.85   0.89   0.89   0.87   0.88   0.87   0.85   0.83   0.83   0.85 ± 0.04
Uz3 (w)   0.23   0.24   0.26   0.27   0.29   0.30   0.31   0.31   0.30   0.31   0.29   0.27   0.28 ± 0.03
Uz3 (s)   0.83   0.82   0.83   0.85   0.86   0.87   0.88   0.87   0.86   0.88   0.85   0.85   0.86 ± 0.02
Uz4 (w)   0.22   0.23   0.23   0.24   0.25   0.29   0.28   0.27   0.27   0.25   0.23   0.22   0.25 ± 0.02
(w) = water, (s) = sediment

Table 4: Summary of Pb levels (mg/L) in all sample types in the various zones*

Zone   Sample type   N    Min    Max    Mean ± SD
Fz     Water         24   0.00   0.13   0.05 ± 0.01
Fz     Sediment      24   0.08   0.13   0.12 ± 0.02
Az     Water         36   0.09   0.23   0.12 ± 0.02
Az     Sediment      36   0.15   0.36   0.25 ± 0.03
Uz     Water         48   0.17   0.34   0.27 ± 0.03
Uz     Sediment      36   0.42   0.88   0.73 ± 0.04
*Abbreviations: Fz, Forested zone; Az, Agricultural zone; Uz, Urban zone.

Table 5: Variability of Pb measures in water in the three zones

Zone   Mean (SD)          Mean (CV)
Fz     0.05 mg/L (0.01)   0.05 mg/L (20%)
Az     0.12 mg/L (0.02)   0.12 mg/L (17%)
Uz     0.27 mg/L (0.03)   0.27 mg/L (11%)

Table 6: Quantification of Pb loadings (kg/yr) in the river in the various zones

Zone   Q̄ (m³/s)   Mean Conc. (mg/L)   Loadings (ton/yr)
Fz     0.35        0.05                0.5
Az     0.94        0.12                3.5
Uz     1.03        0.27                8.8
*Source: O.K. Chibole, unpublished thesis, "Scientific model for r. Sosiani's water quality management".
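The loadings in Table 6 follow directly from the mean discharge and mean concentration: a concentration in mg/L equals g/m³, so the product Q̄ × C gives a mass flux in g/s. The short sketch below reproduces the arithmetic using the values listed in Table 6 (small differences are rounding).

```python
# Sketch: annual Pb load (tonnes/yr) from mean discharge (m^3/s) and
# mean concentration (mg/L = g/m^3). Values are those listed in Table 6.

SECONDS_PER_YEAR = 365 * 24 * 3600  # about 3.15e7 s

def annual_load_tonnes(q_m3s, conc_mg_per_l):
    grams_per_year = q_m3s * conc_mg_per_l * SECONDS_PER_YEAR  # (g/s) * (s/yr)
    return grams_per_year / 1e6                                 # g -> tonnes

for zone, q, c in [("Fz", 0.35, 0.05), ("Az", 0.94, 0.12), ("Uz", 1.03, 0.27)]:
    print(zone, round(annual_load_tonnes(q, c), 1), "t/yr")
# -> roughly 0.6, 3.6 and 8.8 t/yr, close to the 0.5, 3.5 and 8.8 figures in Table 6
```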
4.0 DISCUSSIONS
4.1 Forested Zone
Total concentrations of lead in water and sediments in the forested zone ranged from 0.00 to 0.09 mg/L and 0.08 to 0.13 mg/L respectively, and had means of 0.05 ± 0.01 and 0.12 ± 0.02 mg/L respectively (Table 4). These are in essence the baseline values for Pb in the water and sediments of the Sosiani, since there are minimal human activities in the reserved indigenous forest zone. The overall average of Pb in water in Fz compares favourably with the world average for Pb (0.04 ppm) carried in solution by major unpolluted rivers (Meybeck, 1988).
When deposited in water, whether from air or through run-off, Pb partitions rapidly between the sediment and the aqueous phase, depending upon the salt content of the water as well as the presence of organic complexing agents. At pH > 5, for example, the total solubility of lead is about 30 µg/L in hard water and 500 µg/L in soft water (Davies & Everhart, 1973). In addition, the presence of sulphate and carbonate ions can limit lead solubility, as described by Hem & Durum (1973) in a review of the aqueous chemistry of lead. At Fz2 there are no sediment Pb values because of the rocky relief (Fig. 1, Table 1).

4.2 Agricultural Zone
Water Pb levels in Az of 0.12 ± 0.02 mg/L and sediment levels of 0.25 ± 0.03 mg/L represent increases of 140% (0.12/0.05) and 108% (0.25/0.12), respectively, over Fz. Likewise, Pb levels in sediments are roughly double those in water (Table 4).
Water-borne lead has been found to exist as soluble lead or undissolved colloidal particles, either suspended in the aqueous phase or adsorbed as surface coatings on other suspended solids (Lovering, 1976). The ratio of lead in suspended solids to lead in the dissolved form has been found to vary from 3:1 in rural areas to 27:1 in urban streams (Gertz et al., 1977).

4.3 Urban Zone
Uz has the highest lead levels, an increase of 440% (0.27/0.05) in water and 508% (0.73/0.12) in sediments compared with the levels in Fz. It is interesting to note that the levels of Pb in sediments in Uz are 170% (0.73/0.27) higher than in water, compared with 140% and 108% in Fz and Az, respectively. Once released into the environment, lead may be transformed from one inorganic species or particle size to another. Lead-containing particles in automobile exhaust, for example, are usually lead halides or double salts with ammonium halides (Biggins & Harrison, 1979). Lead halides are fairly soluble; PbCl2, for example, has Ksp = 1.6 × 10⁻⁵ at 25°C in fresh waters. Both natural organic compounds (humic and fulvic acids) and those of anthropogenic origin (e.g., ethylenediaminetetraacetic and nitrilotriacetic acids) may complex Pb found in surface waters (Neubecker & Allen, 1983). The presence in water of such chelators may increase the rate of dissolution of lead compounds (e.g., PbS) 10 to 60 times over that in water at the same pH without fulvates (Bondarenko, 1968; Lovering, 1976). The variability of the lead measures is least in Uz and greatest in Fz (Table 5). This is probably due to the relatively higher levels of lead in both the water and sediments, which allow equilibrium to be attained and more precise measurements to be obtained.
5.0 CONCLUSIONS
(i) River Sosiani has relatively high lead levels in Eldoret town. The Pb loading is as high as 8.8 tons per year in town, 18-fold the level in the relatively pristine waters of the Sosiani in the forested catchment zone.
(ii) Lead loading levels in Az lie in between, but are nearer the Fz than the Uz values (Table 6).
(iii) Although there are noticeable increases in the lead level in the river water during the wet months of April-August and September-November in all three zones, the increase in the urban zone is the most conspicuous (Tables 1, 2, 3). This could be attributed to automotive and other sources of lead emissions that are washed into the river, aided by the high rate of runoff from the paved surfaces in town during the rainy season.
(iv) The lead level immediately downstream of the Elligerin dam (Fz2) in the forested zone is below the detection limit during the dry month of January; the area also has no sediments.
(v) There is a need to monitor lead concentrations in the atmosphere in Eldoret.

REFERENCES
Biggins PDE & Harrison RM (1979) Atmospheric chemistry of automotive lead. Environ Sci Technol, 13: 558-565.
Bondarenko GP (1968) An experimental study of the solubility of galena in the presence of fulvic acids. Geochem Int, 5: 525-531.
Davies PH & Everhart WH (1973) Effects of chemical variations in aquatic environments: Volume 3 - Lead toxicity to rainbow trout and testing application factor concept. Washington, DC, US Environmental Protection Agency (EPA Report No. R373011C).
Ernhart CB, Morrow-Tlucak M, Marler MR, & Wolf AW (1987) Low level lead exposure and intelligence in the preschool years. Sci Total Environ, 71: 453-459.
Gertz LL, Haney AW, Larimore RW, McNurney JW (1977) Transport and distribution in a watershed ecosystem. In: Boggess WR ed. Lead in the environment. Springfield, Virginia, National Technical Information Service, pp 105-113 (National Science Foundation report No. NSF/RA-770214).
Grant LD & Davis JM (1989) Effects of low-level lead exposure on paediatric neurobehavioural development: Current findings and future directions. In: Smith MA, Grant LD, & Sors AI ed. Lead exposure and child development: An international assessment. Dordrecht, London, Kluwer Academic Publishers, pp 49-115.
Greenberg AE, Clesceri LS, Eaton AD (1992) Standard Methods for the Examination of Water & Wastewater, 18th Edition, APHA, USA.
Hem JD & Durum WH (1973) Solubility and occurrence of lead in surface water. J Am Water Works Association, 65: 562-568.
Lewis DW & McConchie DM (1994) Analytical Sedimentology. Chapman & Hall, New York, Oxford University Press.
Lovering TG ed. (1976) Lead in the environment. Washington, DC, US Department of the Interior, Geological Survey (Geological Survey Professional Paper No. 957).
Neubecker TA & Allen HE (1983) The measurement of complexation capacity and conditional stability constants for ligands in natural waters. Water Res, 17: 1-14.
DEVELOPING A WEB PORTAL FOR THE UGANDAN CONSTRUCTION INDUSTRY

Richard Irumba, Faculty of Technology, Makerere University, Kampala, Uganda.
ABSTRACT
Over the past fifteen years the construction industry in Uganda has experienced an annual growth rate of 7.8%, markedly higher than the national GDP growth rate of 5.5%. However, the dispersed nature of construction information and the lack of an organised system for sharing it between industry partners have resulted in poor performance of construction projects, because the available information is inadequate, insufficient, inaccurate, and inconsistently disseminated. This research investigated the effectiveness of using a web portal to enhance information sharing in the Ugandan construction industry and, based on the results of the study, a conceptual web portal model was developed. It is envisaged that the web portal will improve construction project planning and management through the provision of basic construction information. The study population consisted of 233 construction and consulting firms. A multi-method approach was employed in the research. This included a questionnaire survey of 80 firms (the sample size) and an interview survey of 9 representatives of key stakeholders in the industry. A systemic approach was used to elicit the perspectives of stakeholders in the industry and to develop a holistic view of the research problem. The results of the research show that Ugandan construction industry participants have adequate IT infrastructure and Internet access capacity to benefit from the web portal. The research underscores the fact that successful implementation of the web portal will depend not only on the "hard" (or technical) factors but also on the "soft" (or people) factors such as changing users' thinking and methods of work, developing trust, and building partnerships amongst participants. In essence, the project proposes the establishment of a construction information centre to house and coordinate the web portal and its related activities.
Keywords: construction industry, information sharing, information technology, web portal.
1.0 INTRODUCTION
The construction industry is information intensive. It needs accurate, reliable, and timely information ranging from legal requirements, building codes and standards, construction industry research results and manufactured product specifications to site-specific data about past and present construction projects (Mathur et al., 1993). Mbadhwe (2000) argues that construction is one of the most information-dependent industries, with diverse forms of information including detailed drawings, images, photos, cost analysis sheets, budget reports, risk analysis charts, contract documents and planning schedules. Therefore, to move fully into the knowledge soci-
ety and knowledge economy, the construction industry needs to build upon the existing information base. This information base is unique within individual countries (although it may significantly overlap between some countries) and is usually widely spread. Drawing together the information resources and connecting them to each other enables a more effective, informed, and intelligent industry (Bloomfield and Amor, 2001). As observed by Tindiwensi (2000), the Ugandan construction industry lacks cost and resource databases and as a result monitoring of project cost performance is very difficult if not impossible. Tindiwensi (2000) also indicates that to effectively control project costs, contractors need to have a good grasp of measured works i.e. concrete, reinforcement, and formwork, and also require having access to labour rates. Such basic data is not available in an organised database in Uganda. To this effect, a web portal can serve as a centrallyaccessible system where participants in the industry can access and use construction information. A web portal is defined as "a single website that orients users within an organisation to various internal and external information sources such as department websites, external websites, databases, and electronic documents" (Detlor, 2003:122). Rivard et al. (2004:30) define a web portal as "a website that offers a broad array of resources and services to attract and keep a large audience and aims to become its main entrance to the Internet". Therefore, a web portal is a source and provider of information resources, and a gateway to other information sources. This research investigated the effectiveness of using a web portal to enhance information sharing in the Ugandan construction industry and based on the results of the study, a conceptual web portal model was developed. The research is a step towards the establishment of a web portal for the local industry. It is envisaged that the web portal will improve construction project planning and management through the provision of basic construction information. 2.0 RESEARCH A P P R O A C H The approach used in the research is a multi-method approach. The study applied both quantitative and qualitative research methodologies. During the research, questionnaires were used to collect quantitative data on the sample, and interviews were used to collect qualitative data as well as a back up to questionnaire surveys. Literature search was used as a tool to develop a theoretical framework of the study as well as to review the existing documentation on the subject and a systemic approach was used to elicit the perspectives of stakeholders in the industry and to develop a holistic view of the research problem.
The study population was a total of 233 construction and consulting firms consisting of 52 Architectural firms, 12 Quantity Surveying firms, 19 Consulting Engineering firms, and 150 Contracting firms. The population was compiled using lists of firms and professionals regis-
731
tered with the professional associations and the Ministry of Works, Housing and Communications (MOWHC). The sample size for the research was a total of 80 firms consisting of 20 Architectural firms, 12 Quantity Surveying firms, 19 Consulting Engineering firms, and 29 Contracting firms. The sampling strategy adopted for the research was stratified random sampling, in which the study population was divided into strata according to type of firm and units within each stratum were selected using a table of random numbers. The research registered a response rate of 88%, with 70 out of 80 questionnaires returned. Following Gillham's (2000) recommendation that interviewees be selected according to their representativeness of the population, nine interviewees representing the key stakeholders in the Ugandan construction industry were selected for the research. The interviewees consisted of senior executives of the different professional associations, commissioners of MOWHC, and heads of relevant schools at Makerere and Kyambogo Universities. The systemic approach adopted in this research provided an effective means of eliciting the responses and perspectives of stakeholders in the Ugandan construction industry with the aim of developing a holistic view of the research problem. It served as a means of qualitatively putting together the data captured through the questionnaire and interview surveys. Systemic analysis was carried out using the hard and soft systems methodologies and, in particular, systems diagrams. This involved constructing systems maps, rich pictures, influence diagrams, multiple cause diagrams, and sign graphs.

3.0 RESULTS
3.1 Respondents' Profiles
Respondents consisted of Architectural firms (23%), Quantity Surveying firms (13%), Consulting Engineering firms (14%), Contractors (37%), Project Management firms (10%) and "other" types of firm (3%). Project Management is still a young discipline in Uganda and is yet to be recognised as an independent discipline by the existing professional associations and MOWHC; however, a few firms, formerly Quantity Surveying and Architectural, are practising as Project Managers. The "other" type refers to categories of firms which do not fall in any of the above categories, mainly because of the multi-disciplinary nature of their operations. No respondents classified themselves as Facility Management firms.
The majority of the firms have offices in Kampala city (74%). 3% of the firms have offices in Central Uganda, 3% in Western Uganda, 7% in Northern Uganda, 6% in Eastern Uganda and 7% in Southern Uganda. All Quantity Surveying firms, 90% of Consulting Engineering firms, 94% of Architectural firms, and 54% of Contractors have offices in Kampala city. Central Uganda is characterised by high construction business but because of its proximity to Kampala city, few firms have their offices in the region.
Most of the firms (67%) can be categorised as medium size, with full-time staff of between 5 and 25. 20% of the firms have fewer than 5 full-time staff, 7% have between 26 and 50 full-time staff, and 6% have more than 50 full-time staff. The last two categories are mainly constituted by Contractors.

3.2 Computer Availability and Access to Internet
Almost all respondents (97%) have computers, and even the 3% who do not own computers have access to commercial computer services. Access to computer services is not influenced by geographical location, with firms in Kampala city as well as upcountry having access to computer services. The high access to computer services can be explained by the relatively low prices of computers, their usefulness and popularity, and the continued demand for computer-processed presentations by clients. The 97% computer availability in the Ugandan construction industry is comparable to the 99% availability in Canada (Rivard, 2000) and higher than the 88% availability in Sweden (Samuelson, 2002). The survey revealed that 77% of the respondent firms are connected to the Internet. In comparison to the construction industry in Canada (Internet access of 90% as reported by Rivard, 2000), Malaysia (94% as reported by Mui et al., 2002) and Sweden (83% as reported by Samuelson, 2002), Internet access in the Ugandan construction industry can be considered low. However, for a developing country such as Uganda, 77% access to the Internet should be considered satisfactory. A web portal is a web-based application and therefore computer availability and Internet access are a major consideration for its successful implementation.

3.3 Existing Means of Information Sharing
The majority of respondent firms (93%) are involved in some form of information sharing with other parties in the industry. Information sharing is more prevalent amongst Architectural firms (100%), between Quantity Surveying and Architectural firms (100%), amongst Consulting Engineering firms (90%), between Consulting Engineering firms and Contractors (90%), between Quantity Surveying and Consulting Engineering firms (89%), and between Architectural and Consulting Engineering firms (88%). Overall it can be said that Architectural firms are more involved in information sharing than any other category of firm. It can also be seen from the statistics that information sharing is more prevalent amongst the construction consultants than contractors. Most of the information shared amongst participants is in the form of computer-aided drawings (70%), estimates (67%), word-processed documents (66%) and paper drawings (64%). Database and project management information are amongst the least shared categories, with 19% and 40% of respondent firms respectively involved in sharing them. This could be partly explained by the fact that, at 39% and 50% respectively, database systems and project planning packages are not widely used in the industry. 97%,
87%, and 62% of respondent firms use word processing software, spread sheets, and CAD software respectively. Hand delivery is the most commonly used means of sharing and exchanging information with 90% usage by the respondent firms. 88% of respondent firms use telephone as a means of sharing information, 77% use postage and 74% use E-mails. The interview survey revealed that project meetings are a common means of sharing information and storage devices such as CDs and Floppy disks are commonly used to exchange information. The study revealed that safety of shared information is the biggest weakness of the existing means of information sharing. Additionally, 47% of the respondent firms consider delays in information delivery as a weakness of the existing methods of information sharing. The lack of trust between the participants was also cited by 36% of the respondent firms as a weakness of the existing means of information sharing. 3.4 Viability and Management of the Web Portal from the Users' Perspective 83% of respondent firms consider the web portal to be an effective means of sharing information, 3% do not consider it effective, and 14% were not certain. 93% of the respondent firms are willing to share the information produced by their companies and 7% are not willing to share information from their companies. The argument raised by most of those not willing to share construction information is the fear of competition. This category of participants argue that certain types of information such as estimates and designs should not be shared and indicate that in the event that such information is shared, copyright should remain with the producer of information. The above argument points to the fact that there is stigma associated with information sharing amongst some participants and also acknowledges the fact that there are legal implications related to information sharing.
Respondents were optimistic of the benefits of using a web portal to enhance information sharing. 80% of the respondent firms indicated that information sharing will result in savings in time and cost during design, construction and maintenance phases and 73% indicated that information sharing will improve the quality of decision making due to availability of information. 73% of the respondents indicated that information sharing and exchange will lead to improved team work on projects and 56% indicated that information sharing and exchange will lead to efficient working of dispersed construction parties. 56% of the respondent firms consider professional bodies to be best placed to manage and maintain the web portal. 40% and 24% consider academic and research institutions, and the government ministry of works and communications respectively, to be best placed to manage and maintain the web portal. A common opinion expressed through interviews was that the web portal should be managed by an independent body constituted by the stakeholders in the industry.
3.5 Assessment of Users' Information Needs
The users' information needs were assessed through a questionnaire survey by asking respondent firms to rank different categories of information according to their importance to the firm. Information on construction materials and costs, building codes and standards, and opportunities for service delivery was considered most important by the majority of respondent firms, at 59%, 59%, and 54% respectively. Information on construction professional bodies (61%), developments in the international construction industry (61%), documentation of previous projects (57%), laws and legislation (56%), opportunities for capacity building and training (56%), and research results from the local construction industry (51%) was considered important by the majority of respondent firms. The users' information needs model is presented as Figure 1 below. As recommended by Tuzmen (2002), the users' information needs model was validated by experts in construction communication. Three experts were drawn from the Ugandan construction industry and the other three from the South African construction industry. The inclusion of experts from the Ugandan construction industry was primarily aimed at validating the model to suit local construction industry needs. It can be argued that appraisal of the model by experts from the South African construction industry helps to demonstrate that the model can be adopted elsewhere with minor modifications to suit local conditions.

4.0 CHALLENGES OF ESTABLISHING A WEB PORTAL
4.1 Capital and Maintenance Costs
Establishing a web portal is a capital-intensive investment requiring inputs of computer and Internet infrastructure as well as skills development. The web portal also requires an effective administrative system to oversee its operation and to coordinate the various stakeholders.

4.2 Legal Framework
A web portal requires a legal framework constituted by relevant laws and regulations on information sharing and exchange. Requirements of information authentication and non-repudiation are crucial to any information sharing system. The web portal system also requires established procedures for appeals and arbitration in case of disputes and conflicts arising from the information sharing and exchange process.

4.3 Change of Users' Thinking and Methods of Work
Establishing a web portal requires a change in users' thinking, particularly in the importance they attach to information. The information presented on the portal would be of little use if not fully utilised by users. The system also requires users to change their methods of work, especially a shift from paper-driven methods to computer-driven (or digital) approaches.
4.4 Data Requirements
Data presented on the web portal must meet certain standards such as the Industry Foundation Classes (IFC), the Standard for the Exchange of Product Model Data (STEP), and web authoring standards. Procedures that ensure data integrity and data security are also crucial.
[Figure 1 groups the ranked information items into four categories: Planning Information, Design Information, Procurement/Construction Information and Facility Management Information.]
Fig. 1: Users' information needs model.
5.0 WEB PORTAL ARCHITECTURE
The proposed architecture of the web portal is a modified n-tier client/server architecture. The n-tier architecture has been found effective for web portals and separates applications into layers that handle presentation services, business processes and rules, and data management services (Murray, 2002). Desktop clients process the user interface, database servers handle data management, and a business services tier supports the processing of the application logic using appropriate software. The proposed architecture of the web portal is presented as Figure 2 below.
[Figure 2 shows the web portal at the centre, behind a web portal firewall, exchanging information with the industry stakeholders: property developers, property agents, project managers, architects, quantity surveyors, engineers, facility managers, government departments and contractors.]
Fig. 2: Web Portal architecture.

6.0 CONCLUSIONS AND RECOMMENDATIONS
The results of the research have shown that the Ugandan construction industry participants have adequate IT infrastructure and Internet access capacity to benefit from the web portal. The high number of participants (93%) willing to share the information produced by their organisations is a big incentive to establishing the system. The participants acknowledge the benefits of the web portal in terms of savings in time and cost, improved quality of decision making, improved teamwork on projects, and efficient working of dispersed parties. Establishing a web portal faces a number of challenges; however, their existence should not be a reason to thwart the establishment of the system. Rather, they should be seen as the unavoidable socio-technical factors which can be met by the beneficiaries of the system, including construction professionals, professional associations, the government, and property developers. The implementation of the web portal should not only focus on the "hard" (or technical) factors but also on the "soft" (or people) factors such as changing users' thinking and
methods of work, developing trust, and building partnerships amongst participants. In essence, this research recommends the establishment of a construction information centre to house and coordinate the web portal and related activities.

REFERENCES
Bloomfield, D. and Amor, R. (2001) I-SEEC: An Internet Gateway to European Construction Resources. Proceedings of the CIB W78 Conference, Mpumalanga, South Africa, 30 May - 1 June 2001, pp 12.1-12.14.
Detlor, B. (2003) Internet-Based Information Systems Use in Organisations: an Information Studies Perspective. Information Systems Journal, Volume 13, Issue 2, pp 113-132, April 2003.
Gillham, B. (2000) Developing a Questionnaire. London: Continuum.
Mathur, K.S., Betts, M.P. and Tham, K.W. (1993) Management of Information Technology for Construction. Singapore: World Scientific & Global Publications Services.
Mbadhwe, J. (2000) Improving Communication in the Construction Industry using the World Wide Web. Journal of the Uganda Institution of Professional Engineers, Volume 4, Issue 4, pp 25-30, September 2000.
Mui, L.Y., Aziz, A.R.A., Cheng, A., Yee, W.C., and Lay, W.S. (2002) A Survey of Internet Usage in the Malaysian Construction Industry. Electronic Journal of Information Technology in Construction (ITcon), Volume 7, pp 259-269.
Murray, M. (2002) An Investigation of Specifications for Migrating to a Web Portal Framework for the Dissemination of Health Information Within a Public Health Network. Proceedings of the 35th Hawaii International Conference on Systems Sciences, 7-10 January 2002, Island of Hawaii.
Rivard, H. (2000) A Survey on the Impact of Information Technology on the Canadian Architecture, Engineering, and Construction Industry. Electronic Journal of Information Technology in Construction (ITcon), Volume 5, pp 37-56.
Rivard, H., Froese, T., Waugh, L.M., El-Diraby, T., Mora, R., Torres, H., Gill, S.M., and O'Reilly, T. (2004) Case Studies on the Use of Information Technology in the Canadian Construction Industry. Electronic Journal of Information Technology in Construction (ITcon), Volume 9, pp 19-34.
Samuelson, O. (2002) IT Barometer 2000: The Use of IT in the Nordic Construction Industry. Electronic Journal of Information Technology in Construction (ITcon), Volume 7, pp 1-25.
Tindiwensi, D. (2000) Development of a Building Cost Control Model for the Ugandan Construction Industry. Unpublished Research Findings, Faculty of Technology, Makerere University, Kampala.
Tuzmen, A. (2002) A Distributed Process Management System for Collaborative Building Design. Engineering, Construction and Architectural Management, Volume 9, Issue 3, pp 209-221.
LOW FLOW ANALYSIS IN LAKE KYOGA BASIN, EASTERN UGANDA
A. I. Rugumayo; Department of Civil Engineering, Makerere University
J. Ojeo; Directorate of Water Development, Kampala
ABSTRACT
A key input for effective water resources management is knowledge of the availability of the resource. One such assessment is low flow studies. This paper discusses how low flow methods are applied to eight catchments in eastern Uganda in order to estimate low flow indices. These catchments are relatively climatically homogenous and have sufficient streamflow data. Internal relationships are also developed between the different low flow indices and their durations.
The low flow indices are then correlated with the catchment characteristics, using statistical software, to develop relationships for estimating low flow indices at ungauged sites. It was observed that the models developed are linear, show a very high degree of correlation and can be used for preliminary design at ungauged catchments. The results provide an input into a database and show that the methodology applied here can be used in similar, relatively homogenous climatic regions.
Key words: Flow duration curve, Low flow frequency curve, Base flow index, Average recession constant, Main stream length, Stream frequency, Multiple linear regression, Storage yield analysis.
1.0 INTRODUCTION
An important aspect of water resources management is the allocation, at a sustainable level, of available water resources. This also means prioritization between different uses, especially in cases of scarce water resources. Sustainable development and management of water resources provide a great challenge to engineers and water resources managers. In order to execute the rational management of water resources, it is necessary to obtain knowledge on the available resources and the extent of present and possible future exploitation of these resources. Seasonal variations are related to differences in water availability between distinct wet and dry seasons. They are also related to considerable variations from year to year in the timing
of the seasons and amounts of rainfall and stream flow. These have led to uncertainty and drought risk, especially in areas with low mean annual rainfall. It is therefore necessary to prepare an assessment of the dynamic aspects of water resources, through a case study of the time series of important hydro-meteorological parameters, in order to detect possible climatic trends and some of their effects. There is also a need to develop appropriate technologies to utilize the water in rivers in such a way as to limit the environmental impact. This calls for extensive hydrological studies, one area of which is low flow studies. Low flow studies provide one of the ways of deriving the important parameters that need to be considered in hydrological and water resources planning. The study of low flows of rivers is aimed at obtaining methods that can be used to estimate the variability of a river when the river, as a resource, is exploited. Other applications of low flow studies include: control of discharge to ensure availability of water throughout the year; the design of sewage treatment works; hydropower and reservoir design and operation; the design of water supply systems; the protection of marine life; determining the return periods of severe droughts; river flow forecasts; hydrogeological studies; and the licensing of water abstractions. Within the context of Uganda's national resource allocation, there has been little prioritization of comprehensive low flow studies. Planners, policy makers, donors and interested parties need to appreciate the significance of low flow studies, within the national and international context, for effective water resources management. Only preliminary investigations have been initiated by the Water Resources Management Department; more detailed studies are therefore required. The objectives of this study were threefold: firstly, to initiate the development of a database on low flow indices; secondly, to develop a methodology for determining low flow indices of ungauged catchments; and thirdly, to recommend the applicability of these methods in homogenous climatic regions.

2.0 STUDY AREA
This study is based on the analysis of available streamflow data from eight catchments in eastern Uganda. They are Kapiri, Malaba, Manafwa, Mpologoma, Namalu, Namatala, Simu and Sipi, as shown in Fig. 1. The study area falls in the Kyoga climatic zone, which embraces a great part of northern and eastern Uganda. A large proportion of the area is represented by Lake Kyoga together with extensive papyrus wetlands. Rainfall in the zone averages about 1250 mm and occurs in 140 to 170 days of the year. The wet season extends from April to October, with peaks in April, May and August. A minimum occurs in June and July. Rainfall is mainly convectional, characterized by afternoon and evening occurrences. The April-May peak and the September-October rainfall are due to the passage of the Equatorial trough.
The vegetation of the catchments is well varied (National Environment Management Authority, Ministry of Water, Lands and Environment, 1997). It includes high altitude moorland and heath, woodland, wooded savannah and grassed savannah. High altitude moorland and heath is found in the upstream parts of the Sipi, Simu and Namalu catchments; the rest of the catchments consist of woodland, wooded savannah and grassed savannah. The biomass density in the study area is 10-30 tons per hectare (Forestry Department, Ministry of Water, Lands and Environment, 2002), which corresponds to semi-moist lowland specified as Agro-ecological Zone 3. This is characterized by trees in the 5 to 20 cm diameter class and between 300 and 100 stems per hectare. Some parts of the high altitude area near Mt. Elgon have dense biomass of up to 180 tons per hectare. In eastern Uganda, farmland is the most extensive land cover, followed by water bodies and grasslands, with tropical high forests and woodlands coming last. A large section of the area, in particular the catchments of Mpologoma, Kapiri and Namalu, is covered with wetlands. These areas are seasonally or permanently flooded and have a significant influence on the discharge from the river basins (National Environment Management Authority, Ministry of Water, Lands and Environment, 2002b). Their main hydrological effects relate to their large water retaining capacity, which reduces the peak flow, causes suspended sediment to settle and increases evapotranspiration. The shallow depth of the swamps and dense vegetation lead to losses of very large quantities of water through evapotranspiration, and river flows are generally reduced through the swampy areas. The rest of the study area is high plateau rising to nearly 1300 m. Nearly 80% of the study area is covered by Ferralitic soils (National Environment Management Authority, Ministry of Water, Lands and Environment, 1999). The Namalu and Kapiri catchments have Ferrisols combined with organic and hydromorphic material. Small portions of the Manafwa, Namatala, Simu and Sipi catchments have hydromorphic (organic) soils near their sources, followed by mostly Ferralitic soils and Ferrisols. The average depth of overburden is 3-4 m. The basic geology of the study area consists of Basement Complex rocks underlying most of the catchments (Barifaijo, 2002). The Mpologoma catchment is in the Nyanzian system with mobilized and intrusive granites. The catchments of Sipi and Simu are underlain by the Karasuk, pre-Karasuk, Basement Complex and Pleistocene sediments.

3.0 METHODOLOGY
Each of the eight catchments selected from the Acholi-Kyoga climatic zone in eastern Uganda had more than 15 years of continuous streamflow data. The available data (Water Resources Management Department, 2003) showed major abstractions from only three of the catchments; these are abstractions for urban water supply schemes within the respective catchments. The percentage abstractions in relation to the ADF in the particular catchments are shown in parentheses: Manafwa (1.5%) and
Malaba (0.2%). Other data on groundwater abstractions were not readily available. It should also be noted that the data period considered is before abstractions had reached the current level, and therefore the effect of abstractions is minimal. The quality of the flow data was checked using the double mass curve method (Ayoade, 1988), and data infilling was also done using the double mass curve. The complete set of records was plotted on the x-axis to ease the computation; a trend line was generated from the plot and its equation obtained, together with the R² value.
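The double mass check just described can be sketched as follows; a minimal Python illustration, assuming annual totals for the test station and a reference pattern (for example, the mean of neighbouring stations) are available as lists. The function and variable names are illustrative and not from the paper.

```python
import numpy as np

def double_mass_check(station_annual, pattern_annual):
    """Double mass curve check: cumulative station totals are regressed
    against cumulative reference (pattern) totals and the slope, intercept
    and R^2 of the fitted trend line are returned."""
    x = np.cumsum(np.asarray(pattern_annual, dtype=float))   # cumulative reference
    y = np.cumsum(np.asarray(station_annual, dtype=float))   # cumulative test station
    slope, intercept = np.polyfit(x, y, 1)                    # straight trend line
    y_hat = slope * x + intercept
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return slope, intercept, r2

# Example: a consistent record gives R^2 close to 1 and a near-constant slope.
slope, intercept, r2 = double_mass_check(
    station_annual=[812, 905, 760, 840, 910, 875],
    pattern_annual=[800, 890, 770, 830, 900, 880])
print(f"slope={slope:.3f}, R2={r2:.4f}")
```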
Fig. 1: A map of Uganda showing the eastern catchments (scale 1:3,700,000).
From the annual flows together with the in-filled data, the average daily flow for each catchment was estimated by averaging the daily mean flows over the period of data available. The methodology for deriving the low flow indices was based on previous studies by the Institute of Hydrology (Institute of Hydrology, 1980; Drayton et al, 1980; Gustard et al, 1992). The details are given below.

3.1 Flow Duration Curves
The daily flows were averaged over 10 days. To allow comparison between catchments, all the flows were expressed as a percentage of the average daily flow (ADF). Daily flow data were arranged into groups of 10 consecutive days for the entire flow record. Averages of these 10-day flows were computed and assigned to class intervals, and the number of occurrences within each interval was counted. The lower limits of the flow classes were then expressed as a percentage of ADF. The total number of occurrences above the lower limit of each class interval was expressed as a percentage of the total number of 10-day flow averages in the record.
This percentage was then plotted against the lower limit of the interval on a semi-logarithmic graph to give the flow duration curve, from which the values of Q75(10) and Q95(10) were read off. The interpretation of a point on a 10-day curve is that it gives the proportion of the 10-day periods during which the average discharge is above a given value.
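The 10-day flow duration curve construction described above might be sketched as follows; a minimal Python illustration under the assumption that a continuous daily flow series is available. Names are illustrative.

```python
import numpy as np

def flow_duration_indices(daily_flows, d=10, percentiles=(75, 95)):
    """Build a D-day flow duration curve from a daily flow series and read off
    QX(D) values expressed as a percentage of the average daily flow (ADF)."""
    q = np.asarray(daily_flows, dtype=float)
    adf = q.mean()
    n = (len(q) // d) * d
    block_means = q[:n].reshape(-1, d).mean(axis=1)        # non-overlapping D-day means
    block_pct_adf = 100.0 * block_means / adf               # standardise by ADF
    sorted_desc = np.sort(block_pct_adf)[::-1]
    exceedance = 100.0 * np.arange(1, len(sorted_desc) + 1) / len(sorted_desc)
    # QX(D): flow (as % ADF) equalled or exceeded X percent of the time
    return {p: float(np.interp(p, exceedance, sorted_desc)) for p in percentiles}

# Example use: flow_duration_indices(daily_flows, d=10) returns
# {75: Q75(10), 95: Q95(10)}, both as percentages of ADF.
```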
3.2 Regression Equation
In order to produce a flow duration curve for any duration D for each of the stations, it is necessary to establish a relationship between Q75(10) and Q75(D). D-day flow duration curves for different D values were derived for each of the stations. Values for Q75(D) were read off from the various flow duration curves for each catchment and a curve of Q75(D)/Q75(10) - 1 against (D - 10) was plotted in log space. For each catchment, and for the various values of D, the most appropriate relationship of the form illustrated by the equation below was derived:

Q75(D)/Q75(10) - 1 = CONST (D - 10)^EXP    (1)

where CONST and EXP are constants. The choice of equation (1) to fit the experimental data was firstly because the relationship between the magnitude of flow and the percentage time of occurrence is non-linear, and secondly because of its use in previous studies in tropical Africa (Drayton et al, 1980). Optimum estimates for EXP and CONST were obtained for each of the given catchments by linear regression in log space using the power series. A graph of CONST against Q75(10) was plotted on normal graph paper and the power series trend line was used to generate a regression equation of the form:

CONST = d (Q75(10))^t    (2)

By substituting equation (2) into equation (1), an equation can then be obtained for estimating the value of Q75(D) from Q75(10) for any duration D, in the form shown below:

Q75(D) = Q75(10) + d (Q75(10))^t (D - 10)^EXP for D > 10 days    (3a)

where d, t and EXP are constants.
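A sketch of the log-space fit behind equations (1) and (2), assuming Q75(D) values have already been read from the D-day flow duration curves of one catchment; the function name and the sample numbers are illustrative only.

```python
import numpy as np

def fit_duration_relationship(durations, q75_d, q75_10):
    """Fit (Q75(D)/Q75(10) - 1) = CONST * (D - 10)**EXP by linear regression
    in log space: log y = log CONST + EXP * log(D - 10)."""
    d = np.asarray(durations, dtype=float)
    y = np.asarray(q75_d, dtype=float) / q75_10 - 1.0
    mask = (d > 10) & (y > 0)                       # only points usable in log space
    log_x, log_y = np.log(d[mask] - 10.0), np.log(y[mask])
    exp_, log_const = np.polyfit(log_x, log_y, 1)   # slope = EXP, intercept = log CONST
    return float(np.exp(log_const)), float(exp_)

# Illustrative call:
# const, exp_ = fit_duration_relationship([30, 60, 90, 180], [52, 58, 63, 71], q75_10=40)
```

Fitting CONST for every catchment in this way and then regressing CONST against Q75(10), again in log space, yields the constants d and t of equation (2).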
3.3 Low Flow Frequency Curves
The data used here are the annual minimum flows averaged over 10 days, MAM(10). The lowest average flow over 10 days in each year at each station was determined. These flows were ranked from highest (i = 1) to lowest (i = N) and expressed as a percentage of ADF. A plotting position Wi was assigned to each ranking as determined from the exceedance probability Pi. Blom's plotting position formula for the ith largest of a sample of N is given by:
Pi = (i - 0.44) / (N + 0.12)    (4)

Wi = 4 (1 - (-ln Pi)^(1/4))    (5)
where i is the rank of the given flow and N is the total number of flows. The discharge, expressed as a percentage of ADF, was plotted against the plotting position on Weibull's probability paper. The smooth curve drawn through the points yields the Low Flow Frequency Curve.
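The ranking and plotting-position step can be sketched as follows, using Blom's formula (equation 4) and the reduced variate as reconstructed in equation (5); a minimal Python illustration with illustrative names.

```python
import math

def lffc_plotting_positions(annual_minima_10day, adf):
    """Rank annual 10-day minima (as % of ADF), assign Blom exceedance
    probabilities P_i = (i - 0.44)/(N + 0.12) and the reduced variate
    W_i = 4*(1 - (-ln P_i)**0.25) used for the Weibull plot."""
    flows = sorted((100.0 * q / adf for q in annual_minima_10day), reverse=True)
    n = len(flows)
    points = []
    for i, q_pct in enumerate(flows, start=1):      # i = 1 is the highest minimum
        p = (i - 0.44) / (n + 0.12)
        w = 4.0 * (1.0 - (-math.log(p)) ** 0.25)
        points.append((q_pct, p, w))
    return points

# Plotting q_pct against the plotting position (on Weibull probability paper)
# and drawing a smooth curve through the points gives the low flow frequency curve.
```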
3.4 Regression Equation
The MAM(10) values for all the stations were plotted against Q75(10) on normal graph paper, and from this plot a relationship for determining MAM(10) from Q75(10) was obtained. The relationship between MAM(D) and MAM(10) is given as:
MAM(D)/MAM(10) - 1 = CONST2 (D - 10)^EXP    (6)

where CONST2 and EXP are constants. The values of MAM(D) were obtained from Q75(D) for the chosen duration D, using the regression equation derived above. A similar analysis using all the stations was performed for the duration relationship between MAM(D) and MAM(10), similar to that described earlier for the flow duration curve. This yields equations of the form:

MAM(D) = MAM(10) + a (Q75(10))^b (D - 10)^EXP for D > 10 days    (7a)

MAM(D) = MAM(10) - a (Q75(10))^b (10 - D)^EXP for D < 10 days    (7b)
3.5 Standardized Storage Yield Curves
In the derivation of the standardized type curves for the catchments, the data employed were the monthly flows for the river with the longest data range. The monthly flows, Q, were obtained by computing monthly averages for all the data. For a given yield, Y, the differences (Q - Y) were computed and cumulated on a yearly basis. The end of the refill period was identified at the point where the accumulation Σ(Q - Y) = 0. The storage requirement, V, for each year was then obtained up to the end of the refill period. The storage requirements were arranged in descending order and the J largest values taken and ranked (for N > 20, N/4 < J), with the plotting position of the ith largest storage requirement computed as:

Fi = (i - 0.375) / (N + 0.25)
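The per-yield storage computation described in this section might be sketched as follows; a simplified within-year running-deficit variant of the cumulative Σ(Q - Y) procedure, with illustrative names and storages expressed in units of flow multiplied by months.

```python
import numpy as np

def yearly_storage_requirements(monthly_flows, yield_rate, months_per_year=12):
    """For a constant yield Y, cumulate the monthly deficits (Y - Q) within each
    year; the largest cumulative deficit before the reservoir refills is the
    storage requirement V for that year."""
    q = np.asarray(monthly_flows, dtype=float)
    n = (len(q) // months_per_year) * months_per_year
    years = q[:n].reshape(-1, months_per_year)
    requirements = []
    for year in years:
        deficit, worst = 0.0, 0.0
        for flow in year:
            deficit = max(0.0, deficit + (yield_rate - flow))  # refill resets the deficit
            worst = max(worst, deficit)
        requirements.append(worst)
    return sorted(requirements, reverse=True)   # ranked, largest storage first
```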
After repeating the above steps for various yield values, standardized storage yield curves are then produced by plotting the storage requirement, as a percentage of annual volume, against the return period of failure for the various yield values.

3.6 Storage Yield Analysis
For a given uniform supply rate, the storage-yield diagram obtained above can be used to determine the volume of storage required for a given probability of failure. The gross yields are determined from the flow duration curves and expressed in cumecs by multiplying the values obtained above by (ADF/100). An appropriate return period of failure, such as 1 in 5 years, is selected for the analysis. For each gross yield, the required storage is read from the standardized storage yield curves and converted to cubic metres. The net yield is plotted against storage for the various yields considered on normal graph paper, and a line of best fit drawn through these points yields the storage yield curve for that catchment.

3.7 Base Flow Index
To obtain the base flow index, the HYDATA computer program was applied; the algorithm it uses is based on average daily flow data. The HYDATA program applies smoothing and separation rules to the recorded flow hydrographs, from which the index is calculated as the ratio of the flow under the separated hydrograph to the flow under the total hydrograph.
The program calculates the minima of five-day non-overlapping consecutive periods and subsequently searches for turning points in this sequence of minima. The turning points are then connected to obtain the base flow hydrograph, which is constrained to equal the observed hydrograph ordinate on any day when the separated hydrograph exceeds the observed.

3.8 Recession Constants
For each of the stations, the average monthly flow during the dry season was computed for each year. A graph was then plotted of the natural logarithm of monthly flow against the months, for each year separately. The line of best fit for each year was drawn and the value of its slope, a, read off. The recession constant, K, for each year was then computed using the following relationship (a sketch of this yearly fit is given below):

K = 1 / e^a    (9)
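A minimal Python sketch of the yearly recession fit of equation (9). The slope is taken in magnitude here so that K < 1, consistent with the KREC values reported in Table 2; this sign convention is an assumption, since the paper does not state it explicitly.

```python
import numpy as np

def yearly_recession_constant(dry_season_monthly_flows):
    """Fit ln(Q) against month number for one dry season and return
    K = 1 / e**a (equation 9), with a the magnitude of the fitted slope."""
    q = np.asarray(dry_season_monthly_flows, dtype=float)
    months = np.arange(1, len(q) + 1)
    slope, _intercept = np.polyfit(months, np.log(q), 1)
    a = abs(slope)                      # assumed sign convention: a > 0, so K < 1
    return float(1.0 / np.exp(a))

# KREC for a catchment is the arithmetic mean of the yearly constants, e.g.
# krec = np.mean([yearly_recession_constant(year) for year in dry_season_years])
```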
The average recession constant, KREC, was then obtained for each catchment by computing the arithmetic mean of the yearly recession constants for that catchment.

3.9 Catchment Characteristics
The catchment characteristics obtained from maps and the database of the Water Resources Management Department (2003) were: catchment area (AREA), main stream length (MSL), slope (S1085), stream frequency (STRFQ), mean annual rainfall (MAR) and potential evaporation (PE) (Ruks et al, 1970).
Relationships were derived between the low flow indices and the catchment characteristics using a multiple regression model (Haan, 1982), together with the Statistical Package for the Social Sciences (SPSS) software. These relationships can be used to derive low flow indices at ungauged sites.

4.0 RESULTS
4.1 Quality of Data
The coefficients of correlation for the double mass curves ranged from 0.9263 for the Simu and Sipi catchments to 0.9936 for the Malaba and Manafwa catchments (Ojeo, 2002, unpublished).
The Table 1 (Water Resources Management Department, Directorate of Water Development, Ministry of Water Lands and Environment, (2003)), 6 gives the quality of data of catchment characteristics. The drainage description of the catchments is good; the main stream length; the slope and the stream frequency are very good.
746
International Conference on Advances in Engineering and Technology
Table 1: Quality of Data of Catchment Characteristics

Catchment  | Drainage Description | Main Stream Length (MSL) | Slope (S1085) | Stream Frequency (STFREQ)
Kapiri     | Digitized            | Good      | Good      | Good
Malaba     | Partially digitized  | Very Good | Very Good | Very Good
Manafwa    | Main channel unique  | Very Good | Very Good | Very Good
Mpologoma  | Partially digitized  | Very Good | Good      | Good
Namalu     | Digitized            | Good      | Good      | Good
Namatala   | Good main channel    | Very Good | Very Good | Very Good
Simu       | Good channel         | Good      | Good      | Good
Sipi       | Good channel         | Very Good | Very Good | Very Good
Fig. 2: A 10-day flow duration curve for R. Namatala (flow as % ADF against percentage of time a flow is exceeded).
Fig. 3: A low flow frequency curve for R. Namalu (flow against plotting position Wi).
A typical flow duration curve and a low flow frequency curve are shown in Figs 2 and 3 respectively.
4.2 Regression Equations
Flow Duration Curves
In relation to equation (2), the following relationship was generated:
CONST = 16654 (Q75(10))^t    (10)

By substituting the generated equation into the original expression, it becomes:

Q75(D) = Q75(10) + 16654 (Q75(10))^t (D - 10)^EXP for D > 10 days    (11a)

Similarly:

Q75(D) = Q75(10) - 16654 (Q75(10))^t (10 - D)^EXP for D < 10 days    (11b)
Equations 11a and 11b can then be used for estimating the value of Q75(D) for any duration D.
Low Flow Frequency Curves
In relation to equation (6), the following relationship was generated:

CONST2 = 9409.4 (MAM(10))^(-2.915)    (12)

By substitution into the original equation, the regression equation is:

MAM(D) = MAM(10) + 9409.4 (MAM(10))^(-2.915) (D - 10)^1.092 for D > 10 days    (13a)

Similarly,

MAM(D) = MAM(10) - 9409.4 (MAM(10))^(-2.915) (10 - D)^1.092 for D < 10 days    (13b)

Equations 13a and 13b can be used for estimating values of MAM(D) for any duration D.
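Equations (13a) and (13b), with the constants as reconstructed above, can be applied directly; a small Python sketch with illustrative names.

```python
def mam_d_from_mam10(mam10_pct_adf, d_days):
    """Estimate MAM(D) (as % ADF) from MAM(10) using equations (13a)/(13b)."""
    term = 9409.4 * mam10_pct_adf ** (-2.915)
    if d_days >= 10:
        return mam10_pct_adf + term * (d_days - 10) ** 1.092   # equation (13a)
    return mam10_pct_adf - term * (10 - d_days) ** 1.092       # equation (13b)

# Example call: mam_d_from_mam10(16.6, 30) estimates the 30-day annual minimum
# for a catchment whose MAM(10) is 16.6 % of ADF (the mean value in Table 2).
```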
4.3 Standardized Storage Yield Curves
The monthly river flow data for R. Manafwa were used to derive the standardized curves, since it has the longest range of data of all the rivers used in this analysis. The monthly flows were determined from the daily flows, and yield values were determined for Q40, Q50, Q60, Q70, Q80 and Q90 in cumecs. For each of these yields, the storage requirement was determined considering the entire recession period for each year. The plot of the storage requirement against return period yields the standardized storage yield curves, as shown in Fig. 4.
Fig. 4. Standardised storage yield curves
Curves are plotted on the same graph to compare the standardized curves of the different rivers for the same yield, Q40. The curves obtained are shown in Fig. 5.
Fig. 5: Standardised curves for Q40 for five of the catchments of rivers in eastern Uganda.
Judging from the closeness of the curves, it is sufficient to conclude that a series of standardized curves drawn for different yields of one river is an adequate reference for the other rivers, and so will give a fair approximation for the storage yield analysis of all the other rivers in the analysis. Graphs of storage requirement against yield were then plotted for each catchment and the line of best fit drawn through the plotted points. The storage-yield curves for two of the catchments are shown in Fig. 6. Typical hydrographs are shown in Fig. 7 and indicate bimodal rainfall characteristics with peaks in May to June and August to October. The recession periods are October to February and June to July, with the least flow occurring in March to April.
Fig. 6: Typical storage-yield curves (R. Simu and R. Sipi).
-
R.
M a l a b a at dinja-Tororo Road.
(82218)
I
I
1 0.0
1956
1 -
1957
1958
-
1959
-
1960
/
-
1961
-
1962
TotalNydrograph
-
1963
-
1964
-
1965
-
1966
-
1967
-
1966
-
1969
-
1970
1971
: -
1973
-
1w
-
/ Baseflow
Fig. 8: A plot of the base flow index (R. Malaba at Jinja-Tororo Road, station 82218).
Fig. 8 indicates a typical plot of the base flow index. It was also noted that the base flow indices of rivers flowing through wetlands were very high, giving an indication of the magnitude of the base flow contribution to the total flow of these rivers. The effect of wetlands on river flow is a result of their ability to retain water; furthermore, large parts of the runoff from upstream catchments evaporate within the wetlands. The low flow indices derived and the catchment characteristics are shown in Tables 2 and 3 respectively. Table 4 displays the monthly low flow parameters.
Table 2: Low Flow Indices

Catchment  | ADF (m3/s) | Q75(10) (% ADF) | Q95(10) (% ADF) | MAM(10) (% ADF) | BFI    | KREC
Mpologoma  | 19.545     | 11   | 0.007  | 9.838  | 0.8747 | 0.5434
Manafwa    | 6.872      | 35   | 17     | 19.214 | 0.6287 | 0.6855
Namalu     | 0.316      | 22   | 15     | 19.296 | 0.5832 | 0.7144
Kapiri     | 14.644     | 10   | 0.0001 | 17.710 | 1.0000 | 0.6732
Malaba     | 14.181     | 32   | 9      | 14.591 | 0.7194 | 0.6111
Sipi       | 2.976      | 2.5  | 0.004  | 11.069 | 0.7823 | 0.5167
Simu       | 3.477      | 36   | 16     | 16.19  | 0.7870 | 0.6511
Namatala   | 2.614      | 40   | 13     | 25.005 | 0.6024 | 0.6648
Mean       | 13.258     | 23.6 | 8.75   | 16.614 | 0.7472 | 0.6324
Table 3: Catchment Characteristics

Catchment  | Area (km2) | S1085 | STFRQ | MSL (km) | MAR (mm) | PE (mm)
Mpologoma  | 2479.6     | 17.34 | 0.113 | 128.9    | 1436.0   | 2005.3
Manafwa    | 477.6      | 19.07 | 0.60  | 61.8     | 1459.2   | 1694.7
Namalu     | 34.0       | 30.0  | 0.21  | 12.0     | 1180.0   | 1791.2
Kapiri     | 23946.9    | 2.77  | 0.011 | 160.2    | 999.8    | 2007.0
Malaba     | 1603.8     | 13.57 | 0.025 | 73.9     | 1432.3   | 1911.0
Sipi       | 92.0       | 76.3  | 0.34  | 32.5     | 1731.9   | 1942.2
Simu       | 165.0      | 77.69 | 0.37  | 34.0     | 1816.2   | 1653.8
Namatala   | 123.6      | 43.8  | 0.60  | 27.9     | 1344.9   | 1652.7
Mean       |            |       |       |          | 1425.0   | 1832.2
Table 4: Monthly Low Flow Parameters

Catchment  | Driest month | Average driest flow (m3/s) | Min flow recorded (m3/s) | 1.5-year low (m3/s)
Manafwa    | February     | 2.99 | 0.66 | 0.618
Namatala   | February     | 1.38 | 0    | 0.322
Mpologoma  | March        | 3.82 | 0    | 3.616
Malaba     | February     | 3.99 | 0.98 | 1.232
Kapiri     | April        | 4.13 | 0    | 1.64
Namalu     | March        | 0.07 | 0    | 0.070
Simu       | March        | 1.2  | 0.35 | 0.356
Sipi       | March        | 1.01 | 0    | 0.086
Table 5 provides the results of model verification, achieved after comparing the derived values and the predicted values for a selection of the indices for the different catchments.
Table 5: Derived and Predicted Indices

Catchment  | Index   | Derived | Predicted | Error %
Mpologoma  | Q75     | 11      | 11.6      | 5.45
Manafwa    | Q95     | 17      | 16        | 5.88
Namalu     | MAM(10) | 19.296  | 20.6      | 6.76
Kapiri     | KREC    | 0.6732  | 0.6727    | 0
Malaba     | BFI     | 0.7194  | 0.7198    | 0
Sipi       | ADF     | 2.976   | 2.914     | 2.08
Table 6: Models Generated for Ungauged Catchments (coefficient ranking in parentheses)

Dependent variable | Constant | MAR (x10-3) | AREA (x10-6) | S1085 (x10-3) | STRFQ (x10-2) | MSL (x10-3) | PE (x10-4) | KREC (x10-5) | R
Q75(10)  | 232.631 | 2038 (6)   | 146.9 (4)  | 314000 (3)  | -2250.7 (-) | 3.195 (5)  | -1210000 (2) | 0         | 0.961
Q95(10)  | 103.586 | 1130 (6)   | 193.6 (5)  | -191000 (2) | -1042.2 (1) | -82.32 (4) | -526.8 (3)   | 0         | 0.989
KREC     | 1.660   | -4.465 (5) | 5.206 (6)  | -1.202 (2)  | -9.819 (1)  | -0.722 (3) | -4.719 (4)   | 0         | 0.978
MAM(10)  | 73.807  | -1139 (-)  | 170.6 (5)  | -7332 (4)   | 435.6 (6)   | -22.16 (3) | -225.5 (2)   | 0         | 0.974
ADF      | 2.485   | 387 (5)    | -456.8 (4) | -50.99 (2)  | -743.1 (1)  | 152000 (6) | -24.48 (3)   | 0         | 0.992
BFI      | -179    | 4.917 (3)  | -2.545 (2) | 3.667 (6)   | 10700 (1)   | 2.821 (5)  | 354 (7)      | 1.844 (4) | 1
Table 7: The Sums of the Ranking Coefficients for the Catchment Characteristics

                    | MAR | AREA | S1085 | STFRQ | MSL | PE
Sum of ranking      | 26  | 18   | 27    | 11    | 19  | 26
Excluding BFI       | 14  | 22   | 10    | 13    | 24  | 23
Excluding BFI, KREC | 18  | 10   | 19    | 11    | 16  |
Table 6 shows the models developed for determining low flow indices at ungauged sites, together with the corresponding multiple regression coefficients. In all cases there is a very high degree of correlation.

5.0 DISCUSSION
The data used are of good quality, as was evidenced by the double mass plots. This was further confirmed by their use in the development of models that can predict the low flow indices. The flow duration curve is like a signature of a catchment and can be used for general hydrological description, for licensing abstractions or effluents, and for hydropower schemes. The 90% exceedance flow value can be used as one of the measures of groundwater contribution to stream flow, and the ratio Q90/Q50 can be used to represent the proportion of stream flow derived from groundwater sources.
The low flow frequency curve (LFFC) can be used to obtain the return period of drought, in the design of water schemes, and in water quality studies. The Malaba, Manafwa, Namalu and Namatala catchments, underlain by Basement Complex rocks, have lower base flow indices than the other catchments of Mpologoma, Simu, Sipi and Kapiri, underlain by Nyanzian granites, Tertiary volcanics and Pleistocene sediments and alluvium respectively. The slope of the LFFC may also be considered a low flow index, represented by the difference between two flow values (normalized by the catchment area), one from the high and another from the low probability domain. The similarity in the values of the average recession constant, KREC, may imply that the rocks are comparable in their storativity and that the catchment climates are related; this is consistent with the fact that they are in the same climatic area. Other indices may be obtained from the LFFC where it exhibits a break in the curve near the modal value. Though not a general feature, this is regarded by some researchers (Velz and Gannon, 1953) as the point where a change in drought characteristics occurs: higher frequency values are no longer drought flows but tend towards normal conditions. It may also indicate conditions at which a river starts getting water exclusively from deep subsurface storage. The storage yield diagram can be used for the preliminary design of reservoir sites, including their screening, and to estimate yield levels at given levels of reliability. By plotting the storage requirements of other catchments on the same graph, as shown in Fig. 5, the hypothesis that all flows corresponding to a particular percentile on the flow duration curve would have the same storage yield curve is tested. The curves are close, and the hypothesis is therefore considered valid; one curve can thus be used to approximate the storage-return period relationship of the other catchments. The above analysis is based on the assumption that the flow duration curve is an appropriate index of the storage yield characteristics of catchments.
In comparison with the Malawi study (Drayton et al, 1980), the values of ADF, Q75(10) and MAM(10) here are higher, implying lower river flow values in Malawi. The average recession constants (KREC) are much lower here, implying a less variable rainfall distribution than in Malawi. In comparison with South African rivers (Smakhtin and Watkins, 1997), the flows here are higher and there is less variability of rainfall than in South Africa. Furthermore, the models developed here for estimating low flow parameters on ungauged catchments are linear, as compared to the UK studies (Gustard et al, 1992). The subscript after each value in Table 6 ranks the coefficients of the catchment characteristics according to their effect on the dependent low flow parameter. If the rankings of all the coefficients of a particular catchment characteristic are added, the value obtained gives an indication of the impact of that individual catchment characteristic on the low flow indices. In Table 7, the first row gives a summation of the rankings of the coefficients and shows that, when all low flow indices are considered, the mean annual rainfall, the area and the main stream length (MSL) are the most significant independent variables. The next factors are the slope and potential evaporation respectively, and lastly the stream frequency and recession constant. The stream frequency has the most effect on the MAM(10). The BFI is mostly dependent on the recession constant and slope. In the second row, after excluding BFI, the significant independent variables remain MAR, AREA and MSL. In the third row, when both KREC and BFI are excluded from the dependent variables, the significant factors remain the same. The effect of potential evaporation is indicated by a negative coefficient in all the indices except BFI. These observations compare with the UK low flow studies (Gustard et al, 1992), where area and rainfall were significant factors, while the potential evaporation had a negative coefficient.

6.0 CONCLUSIONS
(i) The development of a database on low flow indices has been initiated, taking into account the eastern Uganda catchments that have sufficient stream flow data. The data available per stream ranged from 17 to 28 years. The indices provide data for licensing abstractions, hydropower assessment, hydrological description, return periods of drought, reservoir design, short term forecasting and hydrogeology.
(ii) The models developed for estimating low flow indices at ungauged sites, based on multiple linear regression, provide very good estimates of the indices. These linear models can be used for design purposes at ungauged catchments.
(iii) The applicability and accuracy of these models is a function of the quality and length of record of the streamflow data, together with the accuracy of measurement of the catchment characteristics within the region.
(iv) The results show that the methodology applied here can be used for other relatively homogenous climatic regions with fairly uniform soil and geologic conditions.
(v) The results show that the dominant catchment characteristics that determine the values of the low flow indices are the mean annual rainfall, the area, the main stream length, the slope and potential evaporation, in that order.

REFERENCES
National Environment Management Authority, Ministry of Water, Lands and Environment (1997) State of Environment Report 1996. Kampala, Uganda.
Forestry Department, Ministry of Water, Lands and Environment (2002) National Biomass Study, Technical Report. Kampala, Uganda.
National Environment Management Authority, Ministry of Water, Lands and Environment (2002) State of Environment Report 2000/2001. Kampala, Uganda.
National Environment Management Authority, Ministry of Water, Lands and Environment (1999) State of Environment Report 1998. Kampala, Uganda.
Barifaijo, E. (ed) (2002) Geology of Uganda. Geology Department, Makerere University, Kampala, Uganda.
Water Resources Management Department, Directorate of Water Development, Ministry of Water, Lands and Environment (2003) Database. Entebbe, Uganda.
Ayoade, J.O. (1988) Tropical Hydrology and Water Resources. Macmillan, London, UK.
Institute of Hydrology (1980) Low Flow Report. Institute of Hydrology, Wallingford, UK.
Drayton, A.R.S., Kidd, C.H.R., Mandeville, A.N. and Miller, J.B.A. (1980) A Regional Analysis of River Floods and Low Flows in Malawi. Institute of Hydrology, Wallingford, UK.
Gustard, A., Bullock, A. and Dixon, J.M. (1992) Low Flow Estimation in the United Kingdom. Institute of Hydrology, Wallingford, UK.
Ruks, D.A., Owen, W.G. and Hanna, L.W. (1970) Potential Evaporation in Uganda. Water Development Department, Ministry of Mineral and Water Resources, Entebbe, Uganda.
Haan, C.T. (1982) Statistical Methods in Hydrology. Iowa State University Press, Iowa, USA.
Ojeo, J. (2002) A Low Flow Study of Eastern Catchments. Unpublished Report, Department of Civil Engineering, Makerere University, Kampala, Uganda.
Smakhtin, V.Y. and Watkins, D.A. (1997) Low Flow Estimation in South Africa. Water Research Commission Report No. 494/1/97, Pretoria, South Africa.
Velz, C.J. and Gannon, J.J. (1953) Low Flow Characteristics of Streams. Ohio State University Studies, Engineering Survey XXII: 138-157, Ohio, USA.
SUITABILITY OF AGRICULTURAL RESIDUES AS FEEDSTOCK FOR FIXED BED GASIFIERS
M. Okure, J.A. Ndemere, S.B. Kucel; Department of Mechanical Engineering, Faculty of Technology, Makerere University, P.O. Box 7062 Kampala, Uganda. Tel. +256-41 541173, Fax +256-41 530686
B.O. Kjellstrom; Professor Emeritus, Division of Heat and Power Technology, The Royal Institute of Technology, SE-100 44 Stockholm, Sweden
ABSTRACT
The use of agricultural residues as feedstocks for fixed bed gasifiers could help Uganda and other developing countries to break their over-dependence on expensive fossil fuels for heat and power generation. Uganda produces residues, such as bagasse, rice husks, maize cobs, coffee husks and groundnut shells, that are usually wasted or poorly and inefficiently used. This paper presents the results of an investigation into using the different agricultural residues in the same gasifier units, where the only major difficulty is the fuel feeding system. The results of the physical and thermo-chemical tests carried out showed that gasification of these residues is a promising technology, with relatively high expected gas yields and heating values.
Key words: Agricultural residues, gasification, particle size, heating values.
1.0 INTRODUCTION
In the world today, the leading sources of energy are fossil fuels, mainly coal, oil and natural gas. The ever-growing demand for heat and power for cooking, district heating and other heating processes, construction, manufacturing, communications, transportation, lighting and other utility needs has led to a great reduction of these energy sources and to subsequent price increases over the years. This high demand is attributed to the growth of economies, especially of the developing countries. This has called for a reduction of the dependency on this depletable energy source and has advocated the utilization of more abundant and renewable energy sources such as biomass, hydropower, solar energy, wind energy, geothermal and tidal energy, and the use of more efficient energy conversion technologies aimed at replacing inefficient, high energy consumption technologies. Gasification, the subject of this research, is a thermo-chemical process for converting a solid fuel into combustible gases by a partial combustion process (Gabra Mohamed, 2000). The gas generated can be combusted in a stove to produce heat, or in an internal combustion engine or a gas
turbine to produce electricity through an electric generator. This technology helps to change solid fuels into a form that can be used easily and with improved efficiency. The types of solid fuels include coal, peat and biomass. Among these, biomass is the most environmentally friendly source of energy and is an important renewable energy source with a large potential in many parts of the world. Biomass includes wood and forest residues, agro-wastes, industrial wastes such as sawdust, and human/animal wastes. The biomass potential in Uganda is enormous (Ministry of Energy and Mineral Development, 2001). At present, wood and forest residues and sawdust are mainly used in thermal processes such as cooking or heating, while animal wastes are utilized to some extent by anaerobic digestion for the production of biogas. In both rural and urban areas of Uganda, the use of biomass fuels for heat generation is widely practiced. This applies not only to areas not connected to the national electric grid but also to a big percentage of people or settings that use hydro-electricity, mainly because biomass fuels are cheaper than electricity for heating and cooking. Wood fuels are widely used in rural areas while charcoal is mainly used in urban areas. Due to the increasingly high demand for wood and forest residues, forests have been cleared for energy needs, settlement and farming, which has led to increases in the prices of biomass fuels and to further environmental degradation. Large amounts of agricultural residues are burnt in the fields as a means of disposal, leaving only little for improving or maintaining soil fertility. This leaves a great deal of energy potential unutilized and wasted. A technology that can efficiently convert these agricultural residues into heat and power would lessen the pressure being put on the forests and also reduce the over-dependency on wood and forest residues. Gasification in a fixed bed appears to be a technology suitable for applications of the relatively limited capacity appropriate in Uganda. Not much research had been done on these residues as far as physical and thermo-chemical properties are concerned, and therefore it was not easy to ascertain which of the various agricultural residues are best suited for the gasification technology. It is imperative to note that some physical and thermo-chemical properties of the different categories of biomass vary from place to place. There was a need, therefore, to carry out a thorough study to determine the suitability of agricultural residues available in Uganda as feedstocks for fixed bed gasifiers.

2.0 AGRICULTURAL RESIDUES IN UGANDA
Biomass energy constitutes 92-94% of Uganda's total energy consumption (Okure, 2005). Traditional bio-energy forms include firewood (88.6%), charcoal (5.9%) and agricultural residues (5.5%). Modern biomass includes biogas, briquettes, pellets, liquid bio-fuels and producer gas. The use of biogas is limited to a few households (Sebbit, 2005).
Biomass can be classified as woody biomass and non-woody biomass (Okure, 2005). Woody biomass includes both hard and soft wood; non-woody biomass includes agricultural residues, grasses, sawdust and cow dung (Skreiberg, 2005). Agricultural residues are the leftovers after crops are harvested or processed. Currently most of these residues are left unused or burnt in the fields, and on a small scale they are used for space heating in rural areas as well as commercially in a few thermal industrial applications (Kuteesakwe, 2005). The use of these agricultural residues for industrial purposes is a much more environmentally friendly practice than many residue disposal methods currently in use. Agricultural residues are an excellent alternative to woody biomass for many reasons: aside from their abundance and renewability, using agricultural residues will benefit farmers, industry, human health and the environment (Meghan, 1997).

3.0 EXPERIMENTAL PROCEDURE AND RESULTS
Various tests were carried out on the agricultural residues, including tests of fuel physical properties and thermo-chemical properties, as well as fuel feeding characteristics.
The determination of fuel moisture content was based on a wet basis (% wt.b) (Sluiter, 2004a). Coffee husks were found to have the highest moisture content, 14.073%, and rice husks the lowest, 10.038%. Figure 1 shows the results of the moisture content tests for the various agricultural residues used in the study; the lower and upper quartiles of the data are represented by q1 and q3 respectively.
758
International Conference on Advances in Engineering and Technology
The next experiment was determining the bulk density, Pb (kg/m3) suing the method described in Albrecht Kaupp, (1984). The results showed that coffee husks have the highest bulk density in comparison to the other agricultural residues. The details can be seen in Figure 2. Bulk density Vs Fuel
350.000
A
Standard deviation
300.000 250.000
,._,,
4, Min
200.000
Median Q) 150.000
Max q3
100.000 50.000 0.000
. ^,se"
.~,"
. ^,5e"
..~o~"
oo"
Fuel
Figure 2: Bulk density for agricultural residues.
Particle size was also determined. For fuels with relatively big particles, the particle sizes were determined by measuring their length, width and height using a metre rule. For small particle sizes, the particles were spread on a sheet of paper and a metre rule was used to measure the size with the help of a magnifying glass. The results showed that maize cobs had the biggest particle size, followed by bagasse, groundnut shells, coffee husks and rice husks, in that order. The heat contents of the agricultural residues were determined based on the lower heating value (LHV) using a bomb calorimeter (Cussions Technologies, undated). The results are shown in Table 1. Also included are the lower heating values for the dry fuels as well as for the dry ash-free fuels, which were calculated from Dulong's formula using data from the ultimate analysis (a sketch of this calculation is given below). Bagasse had the highest heating value, 17.84 MJ/kg, while rice husks had the lowest, 13.37 MJ/kg. The tests for ash content involved complete burning of a dry fuel sample of known weight in an oven at 550°C and weighing the remains (Sluiter, 2004b). Rice husks were found to have the highest ash content, 21.29%, while maize cobs had the lowest, 2.07%. Figure 3 shows the detailed results.
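A sketch of the Dulong-type heating value estimate from the ultimate analysis. The coefficients below are one commonly quoted form of Dulong's formula, and the composition in the example is illustrative; the paper does not reproduce the exact variant or the measured compositions used.

```python
def dulong_hhv_mj_per_kg(c, h, o, s=0.0):
    """Higher heating value (MJ/kg, dry basis) from mass fractions of carbon,
    hydrogen, oxygen and sulphur, using a common form of Dulong's formula."""
    return 33.8 * c + 144.3 * (h - o / 8.0) + 9.4 * s

def lhv_mj_per_kg(hhv, h, moisture=0.0):
    """Lower heating value: subtract the latent heat (about 2.26 MJ/kg) of the
    water formed from fuel hydrogen (9 kg water per kg H) and of any moisture."""
    return hhv - 2.26 * (9.0 * h + moisture)

# Illustrative biomass composition (not measured data from this study):
# lhv_dry = lhv_mj_per_kg(dulong_hhv_mj_per_kg(c=0.47, h=0.06, o=0.42), h=0.06)
```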
Table 1: Fuel heating values for the agricultural residues

Fuel             | Fuel ID | LHV for dry fuel (MJ/kg) | LHV for dry ash-free fuel (MJ/kg) | LHV measured (MJ/kg, ±2%)
Rice husks       | F1      | 11.92 | 16.31 | 13.37
Groundnut shells | F2      | 17.89 | 20.70 | 17.27
Coffee husks     | F3      | 16.08 | 17.54 | 17.08
Bagasse          | F4      | 16.53 | 17.34 | 17.84
Maize cobs       | F5      | 16.25 | 16.28 | 17.54
20.000 A i ,-. e-9 15.000
. Min
i ~i
1:: Median
I
8
,.~ 10.000
~:~:.M a x
i
~
i~q3 5.000
0.000
#,-, ~,,o
oo
~,Fuel
Figure 3: Ash contents for the agricultural residues.
The composition of the producer gas was analyzed using a gas chromatograph. Methane, carbon dioxide and a negligible amount of ethane were detected. The concentrations of carbon monoxide, methane and hydrogen were then used to calculate the gas heating value. Gas produced from maize cobs was found to have the highest gas heating value, 3.797 MJ/Nm3, with rice husks showing the lowest, 2.486 MJ/Nm3, as shown in Figure 4.
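The gas heating value calculation from the chromatograph results amounts to a volume-fraction-weighted sum of component heating values; a minimal Python sketch, with component values taken from typical literature figures rather than from the paper, and an illustrative gas composition.

```python
# Approximate lower heating values of the combustible components, MJ/Nm^3
# (typical literature values, not from the paper).
LHV_COMPONENT = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}

def producer_gas_lhv(volume_fractions):
    """Gas heating value (MJ/Nm^3) as the sum of component LHVs weighted by
    volume fraction; inert CO2 and N2 contribute nothing."""
    return sum(LHV_COMPONENT[gas] * frac
               for gas, frac in volume_fractions.items() if gas in LHV_COMPONENT)

# Illustrative composition: 18% CO, 14% H2, 2.5% CH4 (balance CO2/N2) gives
# producer_gas_lhv({"CO": 0.18, "H2": 0.14, "CH4": 0.025}) of about 4.7 MJ/Nm^3.
```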
Figure 4: Gas heating values for producer gas from the agricultural residues.
The determination of the bulk flow characteristics of the various agricultural residues was also considered important. The results are shown in Table 2.

Table 2: Flow characteristics of the agricultural residues
Fuel sample      | Average angle of repose (°) | Hopper angle (°)
Rice husks       | 32.6 | 57.4
Groundnut shells | 30.4 | 59.6
Coffee husks     | 25.8 | 64.2
Bagasse          | 30   | 60
Maize cobs       | 27   | 63
4.0 DISCUSSION
Low moisture content has a positive impact on the fuel heating value as well as on the gas heating value, because fuels with low moisture content burn more efficiently. Efficient combustion allows the reduction and oxidation processes of gasification to take place at higher temperatures, yielding producer gas with a higher heating value. Therefore, fuels with high moisture contents have low fuel heating values and produce a gas with a low heating value. On this basis, rice husks should have the highest fuel and gas heating values; instead they have the lowest fuel heating value and the second lowest gas heating value. This is because rice husks have a relatively high ash content (>20%).
The handling and flow of the fuel into the gasifier depends on several factors including bulk density, particle size and angle of repose. Due to the differences in bulk densities, the feeding system should be able to handle the various agro-residues for a multi-fuel system. Bulk density does not only depend on the moisture content and the size of the fuel particle but also on the manner in which the fuel is packed in the container. This certainly varies from fuel to fuel and from one individual to another. Fuels with small particles such as rice husks are likely to cause flow problems in the gasifier. There is also a possibility of a high pressure drop in the reduction zone, leading to low temperatures and tar production. Large particles like maize cobs may lead to start-up problems and poor gas quality, i.e. a low gas heating value due to the low fuel reactivity. The fuel heating value is generally a representation of the carbon and hydrogen contents in the fuel, which in effect influence the gas heating value. It should be noted that the fuel ash content also affects the fuel heating value. The higher the fuel moisture content, the lower the fuel heating value. However, fuels for gasification should not be completely dry because some moisture is necessary for the water gasification of char. Ash content impacts greatly on the running and operation of fixed bed gasifier units. Ash can cause various problems that include the formation of slag, which leads to excessive tar formation and blocking of the gasifier unit. Gasification of fuels with ash contents of less than 5% does not lead to the formation of slag, while severe slag formation is encountered with fuels which have ash contents of at least 12%. The heating values for producer gas from fuels with high bulk densities are generally higher compared to those for low bulk density fuels. This means that the heating values increase with increasing bulk density. However, for coffee husks, the gas heating value is slightly out of the general trend because their high moisture content reduces the thermal efficiency of the gasification process, hence the abnormally low gas heating value of coffee husks compared to other agricultural residues. It should also be noted that many of the characteristics investigated in this study could change due to various reasons. The fuel moisture content could vary with changes in weather as well as location and storage. Particle size could vary depending on the harvesting and shelling methods and technologies. Bulk density could vary depending on the level of packing, which changes from person to person. These physical properties could in turn affect the thermal-chemical properties.
5.0 CONCLUSIONS
This study showed that agricultural residues such as maize cobs, bagasse, coffee husks and groundnut shells, as well as, to a small extent, rice husks, can be used as feedstocks for fixed bed gasifiers. The availability of large amounts of agricultural residues such as coffee husks, bagasse, maize cobs and rice husks all year round presents Uganda with a sustainable energy source that could contribute to solving the country's current energy problems. This in turn could impact greatly on the country's economy, bringing about growth and development and hence improving the quality of living of the people. Gasification of agricultural residues has great potential in Uganda and it could help in reducing the unsustainable exploitation of woody biomass for cooking and lighting, hence preserving nature as well as maintaining a clean environment.
REFERENCES
Gabra Mohamed (2000), "Study of possibilities and some problems of using cane residues as fuels in a gas turbine for power generation in the sugar industry", Doctoral Thesis, Lulea University of Technology, Sweden.
Ministry of Energy and Mineral Development (2001), "National Biomass Energy Demand Strategy 2001-2010", Draft Document.
Okure Mackay (2005), "Biomass resources in Uganda", presented at the Norwegian University of Science and Technology and Makerere University Joint Summer Course - Energy Systems for Developing Countries.
Sebbit Adam (2005), "Traditional use of biomass in Uganda", presented at the Norwegian University of Science and Technology and Makerere University Joint Summer Course - Energy Systems for Developing Countries.
Skreiberg Oyvind (2005), "An introduction to heating values, energy quality, efficiency, fuel and ash analysis and environmental aspects", presented at the Norwegian University of Science and Technology and Makerere University Joint Summer Course - Energy Systems for Developing Countries.
Kuteesakwe John (2005), "Biomass commercial utilization", presented at the Norwegian University of Science and Technology and Makerere University Joint Summer Course - Energy Systems for Developing Countries.
Meghan Hayes (1997), "Agricultural residues: A Promising Alternative to Virgin Wood Fiber", Resource Conservation Alliance, Washington DC.
Sluiter Amie (2004a), "Determination of Total Solids in Biomass", Laboratory Analytical Procedure.
Albrecht Kaupp (1984), "Gasification of Rice Hulls: theory and praxis", Gate/Friedr. Vieweg & Sohn, Braunschweig/Wiesbaden.
Cussons Technology (Undated), "The P6310 Bomb Calorimeter set", Instruction Manual, 102 Great Clowes Street, Manchester M7 1RH, England.
Sluiter Amie (2004b), "Determination of Ash in Biomass", Laboratory Analytical Procedure.
NUMERICAL METHODS IN SIMULATION OF INDUSTRIAL PROCESSES
Roland W. Lewis, Eligiusz W. Postek, David T. Gethin, Xin-She Yang, William K.S. Pao, Lin Chao; [email protected]; Department of Mechanical Engineering, University of Wales Swansea, Singleton Park, SA2 8PP Swansea, Wales
ABSTRACT
The paper gives an overview of some industrial applications that lead to the formulation of advanced numerical techniques. The applications comprise squeeze casting processes, forming of tablets and petroleum reservoir modelling. All of the problems lead to the solution of highly nonlinear, coupled sets of multiphysics equations.
Keywords: squeeze forming, powder compaction, oil fields, coupled problems, thermomechanics, porous media, fluid flow, nonlinear solid mechanics, phase transformations, microstructural solidification models, numerical methods, contact problems, discrete elements, finite elements.
1.0 INTRODUCTION
Contemporary technology requires increasingly sophisticated numerical techniques. The complexity of most industrial processes and natural phenomena usually leads to highly nonlinear, coupled problems. The nonlinearities are embedded in the behaviour of the materials, body interactions and the interaction of the tensor fields. Further complexities, which are also sources of nonlinearities, are the existence of widely understood imperfections, e.g. geometrical and material ones. All of these require the elaboration of new numerical algorithms embracing such effects and the effective solution of the arising multiphysics systems of nonlinear differential equations. These problems require a solution in order to improve the design and quality of products and, in consequence, the quality of life. A few applications of such algorithms, which describe manufacturing processes of everyday products and the description of natural large-scale phenomena, are presented herein.
2.0 SQUEEZE FORMING PROCESSES
2.1 General Problem Statement
The analysis of squeeze forming processes is currently divided into two parts, namely, mould filling and the analysis of thermal stresses. During mould filling, special attention is paid to
the metal displacement and during the stress analysis to the pressure effect on the cooling rate and to the second order effects. The metal displacement during the die closure in squeeze casting is an important process, because many defects such as air entrapment, slag inclusion, cold shuts and cold laps may arise during this process. Modelling the metal displacement is an efficient approach to optimize an existing process or guide a new design. As a typical numerical approach, the finite element method has been used successfully in modelling the mould filling process of conventional castings [1-4]. However, little work has been done on modelling the metal displacement in the squeeze casting process except for the early work by Gethin et al. [5], in which an approximate method was employed to incorporate the effect of the metal displacement in the solidification simulation. The analysis of stresses during the squeeze casting process leads to highly nonlinear coupled thermomechanical problems including phase transformations. The effective and accurate analysis of the stresses is important and should lead to an evaluation of the residual stresses in the workpieces and the stress cycles in the die. The accurate estimation of the stress levels in the die should allow the prediction of the life time of the die from a fatigue aspect.
2.2 Mould Filling
In this paper, a quasi-static Eulerian finite element method is presented for modelling the metal displacement in the squeeze casting process. The dynamic metal displacement process is divided into a series of static processes, referred to as subcycles, in each of which the dieset configuration is considered as being in a static state; thus the metal displacement is modelled by solving the Navier-Stokes equations on a fixed mesh. For each subcycle, an individual mesh is created to accommodate the changed dieset configuration due to the motion of the punch. Mesh-to-mesh data mapping is carried out regularly between two adjacent subcycles. The metal front is tracked with the pseudo-concentration method, in which a first order pure convection equation is solved by using the Taylor-Galerkin method. An aluminum alloy casting is simulated and the numerical results are discussed to assess the effectiveness of the numerical approach. The associated thermal and solidification problems are described in the thermal stress analysis section, since both analyses exploit the same mathematical formulation.
Fluid Flow and Free Surface Tracking
The flow of liquid metal may be assumed to be Newtonian and incompressible. The governing Navier-Stokes equations, which represent the conservation of mass and momentum, are given below in terms of the primitive flow variables, i.e. the velocity vector u and pressure p:

\nabla \cdot \mathbf{u} = 0,  (1)

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \nabla \cdot (\mu \nabla \mathbf{u}) + \rho \mathbf{g},  (2)
where ρ is the density, p is the pressure, μ is the dynamic viscosity and g is the gravitational acceleration vector. The free surface movement is governed by the following first order pure advection equation:

\frac{\partial F}{\partial t} + (\mathbf{u} \cdot \nabla) F = 0,  (3)
where F is the pseudo-concentration function, which is defined as a continuous function varying between -1 and 1 across the element lying on the free surface. Details of the finite element formulation and numerical algorithm can be found in Lewis, (2000).
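A minimal one-dimensional sketch of the pseudo-concentration idea is given below: F is simply advected with the flow and the F = 0 level marks the metal front. The explicit upwind scheme used here is only a stand-in for the Taylor-Galerkin solution referred to above, and all numerical values are illustrative.

```python
import numpy as np

# Minimal 1D sketch of Eqn (3): advect the pseudo-concentration F with a uniform
# velocity u and track the front at F = 0. First-order upwind is used here purely for
# illustration; the paper uses a Taylor-Galerkin finite element scheme.
nx, u, dt, nsteps = 201, 0.5, 0.002, 400
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
F = np.where(x < 0.3, 1.0, -1.0)             # filled region F = +1, empty region F = -1
for _ in range(nsteps):
    F[1:] -= u * dt / dx * (F[1:] - F[:-1])  # upwind update for u > 0 (CFL ~ 0.2)
front = x[np.argmin(np.abs(F))]
print(f"front position ~ {front:.3f} (exact: {0.3 + u * dt * nsteps:.3f})")
```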
Modelling of Metal Displacement
The metal displacement in the die closure process of squeeze casting is a dynamic process in which the liquid metal is driven by the continuous downward punch movement. As a result of the fluid flow, the metal front moves upwards in the die cavity and, in some cases where the die has secondary cavities, overspill may take place as well. During this process, the whole die cavity, including the filled and unfilled regions, and all of the molten metal are forced to relocate frequently in the varying die cavity until the process is finished. Obviously, the metal displacement in the squeeze casting process is different from the mould filling of conventional casting processes. As mentioned earlier, an Eulerian type approach is employed in the present study, which implies that the fluid flow and free surface are computed on a fixed finite element mesh which is placed over the entire domain of the filled and unfilled regions. To accommodate the variation of the die cavity, more than one mesh, generally a set of meshes corresponding to different punch positions, has to be generated to cover the whole process of the die closure. Accordingly, the dynamic process of metal displacement is divided into a series of static processes, in each of which a fixed dieset configuration and its corresponding finite element mesh are employed to model the fluid flow and free surface movement. The combination of all the static processes is used to represent the dynamic process approximately. This is the reason why the present method is termed a "quasi-static approach". Each of the static processes is referred to as a subcycle, and any two adjacent subcycles are linked by appropriate interpolation of the velocity, pressure and pseudo-concentration function from the previous mesh to the following one, which is also called data mapping in this paper. In addition, it is noticeable that the total volume of the molten metal should be constant provided any volume change caused by cooling and solidification is negligible. Therefore, the global volume or mass conservation must be ensured in the simulation.
Punch Movement Simulation The downward punch movement has two direct impacts. One of them is to change the shape and size of the whole die cavity which can be accommodated by generating a series of finite element meshes as mentioned earlier. The other impact is to force the molten metal to flow into the die cavity.
Fig. 1. Schematic illustration of modelling the metal flow in the squeeze casting process
In the present work, a velocity boundary condition is imposed at the interface between the punch and the liquid metal to simulate the effect of the punch action, as shown in Fig. 1. This manifests itself as a prescription of the inlet velocity boundary condition in conventional mould filling simulations. However, there are some differences with respect to the normal "inlet" condition. In the conventional mould filling process, the position and size of the inlet do not change. In contrast, in the squeeze casting process the punch/metal interface may vary with the movement of the metal front. This implies that the punch/metal interface, where the velocity boundary condition is to be prescribed, depends upon the profile of the metal front, which is an unknown itself. Therefore, an iterative solution procedure is employed, in which the status of each node on the punch/metal boundaries is switched "on" or "off" dynamically by referring to its pseudo-concentration function value. Whether the boundary velocity is prescribed depends on the node status.
Fig.2. The initial and final dieset configurations for the casting without metal overspill.
Mesh-To-Mesh Data Mapping
The mesh-to-mesh data mapping from a previous subcycle to the following one is implemented based on a mesh of three-node triangular elements which is generated by splitting the six-node flow elements. As mentioned earlier, the values of the velocity and the pseudo-
concentration function are assigned to all of the nodes, but the pressure values are solved only for the corner nodes of the six-node triangular elements. To enable the three-node elements to be used for the data mapping, the pressure values for the mid-side nodes of the flow element are calculated by using a linear interpolation. In the data mapping process, a node-locating procedure, in which all of the new-mesh nodes are located in the old mesh, is followed by a simple linear interpolation based on three-node triangular elements.
Global Mass Conservation
The global mass conservation for the molten metal must be guaranteed in the modelling. Based on the above description, the metal mass in the die cavity after the data mapping is less than that at the initial moment. The initial metal mass can be used as a criterion to judge whether it is time to finish an ongoing subcycle and commence a new subcycle. In detail, the total mass of the metal at the initial moment is calculated and is denoted by M0. In the computation for each subcycle, the metal mass in the die cavity is monitored after each iterative loop. Once it achieves M0, the ongoing subcycle is ended immediately and a new subcycle commences.
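A minimal sketch of this termination test is shown below; the element volumes, the pseudo-concentration values, the density and the mapping of F to a fill fraction are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Hedged sketch of the subcycle termination criterion: integrate the metal mass from
# the pseudo-concentration field and compare it with the initial mass M0. The data and
# the mapping of F in [-1, 1] to a fill fraction are illustrative assumptions.
def metal_mass(F_elem, vol_elem, rho=2520.0):
    filled = np.clip((F_elem + 1.0) / 2.0, 0.0, 1.0)   # element fill fraction
    return rho * np.sum(filled * vol_elem)

vol = np.full(1000, 1.0e-6)                            # 1000 elements of 1 cm^3 each
F = np.where(np.arange(1000) < 600, 1.0, -1.0)         # 600 elements currently filled
M0 = 2520.0 * 650 * 1.0e-6                             # target mass: 650 filled elements
print("end current subcycle:", metal_mass(F, vol) >= M0)
```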
Fig. 3. The evolution of the metal profile in the die cavity; panels (a)-(k) correspond to punch displacements of 0, 3, 6, 9, 12, 15, 18, 21, 24, 27 and 30 mm.
Numerical Simulation and Results
Numerical simulation is carried out for an aluminum alloy casting. The computer code employed in the simulation is developed based on the mould filling part of the integrated finite element package, MERLIN (Lewis, (1996)), which has been tested with benchmark problems of fluid flow (Lewis, (2000)). The initial and final dieset configurations for the casting are shown in Fig. 2. As the casting has an axisymmetric geometry, only half of the vertical section of the casting and dieset configuration is considered in the numerical simulation. The outer diameter of the casting is 320 mm, the height 80 mm, and the wall thickness 10 mm. The total displacement of the punch, from its immediate contact on the metal surface to the end of the die closure process, is 30 mm and is divided into 10 equal displacement increments in the simulation. The speed of the punch is 5.0 mm/s and the whole metal displacement process lasts 6.0 s. Fig. 3 shows the evolution of the metal profile in the die cavity. The simulation results clearly expose the process in which the liquid metal is displaced in the continuously changing die cavity as a result of the punch action.
2.3 Thermal Stresses Analysis
With respect to the stress analysis the following assumptions are made: the mechanical constitutive model is elasto-visco-plastic, the problem is transient and a staggered solution scheme is employed. The influence of the air gap on the interfacial heat transfer coefficient is also included. These issues are illustrated by 3D numerical examples. An extensive literature exists concerning the solution of thermomechanical problems, for example, Lewis, (1996), Zienkiewicz & Taylor, (2000).
Governing Equations
The FE discretized thermal equation is of the form

\mathbf{K}_T \mathbf{T} + \mathbf{C}_T \dot{\mathbf{T}} = \mathbf{F}_T,  (5)
where K_T and C_T are the conductivity and heat capacity matrices and F_T is the thermal loading vector. Eqn (5) can be solved using implicit or explicit time marching schemes. In our case the implicit scheme is chosen. For the case of a nonlinear static problem, with the assumption of large displacements, the mechanical problem is of the form

(\mathbf{K}_{e\text{-}vp} + \mathbf{K}_\sigma)\, \Delta\mathbf{q} = \Delta\mathbf{F} + \Delta\mathbf{G} + \Delta\mathbf{F}_c,  (6)

where K_{e-vp} is the elasto-viscoplastic matrix, K_σ is the 'initial stresses' matrix, ΔF is the increment of the external load vector, ΔG is the increment of the body forces vector, ΔF_c is the increment of contact forces and Δq is the displacement increment. In the case of a dynamic problem the equation governing the mechanics is of the form

\mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}} + (\mathbf{K}_{e\text{-}vp} + \mathbf{K}_\sigma)\, \mathbf{q} = \mathbf{F} + \mathbf{F}_c,  (7)
where M and C are the mass and damping matrices, and F and F_c are the external and contact forces, respectively. The increment of stresses includes the thermal and the viscoplastic effect assuming Perzyna's model (Perzyna, (1971)) and reads:

\Delta\boldsymbol{\sigma} = \mathbf{D}\,(\Delta\boldsymbol{\varepsilon} - \Delta\boldsymbol{\varepsilon}^{vp} - \Delta\boldsymbol{\varepsilon}_T), \qquad \dot{\boldsymbol{\varepsilon}}^{vp} = \gamma\, \langle \Phi(F) \rangle\, \frac{\partial Q}{\partial \boldsymbol{\sigma}}, \qquad \langle \Phi(F) \rangle = \begin{cases} 0, & F \le 0 \\ \Phi(F), & F > 0 \end{cases}  (8)
where F and Q are the yield and plastic potential functions (assumed F = Q, associative plasticity) and γ is the fluidity parameter. Additionally, in the case of phase transformation the existence of shrinkage strains is assumed when the temperature passes the liquidus threshold in the cast material. The yield limit is a function of temperature.
Outline of the Solution Algorithms
A staggered scheme is adopted for the two field problems (thermal and mechanical), Felippa & Park, (1980) and Vaz & Owen, (1996). The general scheme for this type of problem is presented in Fig. 4 (a). The solution is obtained by sequential execution of two modules (thermal and mechanical).
Fig.4. Illustration of the solution methods: staggered solution - exchange of information between thermal and mechanical modules (a), and enthalpy method (b).
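A toy illustration of the staggered exchange in Fig. 4 (a) is sketched below, with a single thermal and a single mechanical unknown; the cooling law, time constant and the use of simple thermal contraction are assumptions made only to show the module-by-module structure, not the MERLIN formulation.

```python
# Toy staggered loop, assuming lumped single-DOF "modules": an implicit cooling step
# (thermal module) followed by a thermal-contraction update (mechanical module).
# The time constant and cooling law are illustrative; only the exchange pattern of
# Fig. 4 (a) is intended to be representative.
alpha, L0, T_env, tau, dt = 2.1e-5, 0.075, 20.0, 8.0, 0.5
T = 700.0                                   # initial cast temperature (deg C)
for step in range(1, 21):
    T = (T + dt * T_env / tau) / (1.0 + dt / tau)   # thermal module: dT/dt = -(T - T_env)/tau
    u = alpha * (T - 700.0) * L0                    # mechanical module: contraction from new T
    if step % 5 == 0:
        print(f"t = {step * dt:4.1f} s   T = {T:6.1f} C   contraction = {u * 1e3:7.3f} mm")
```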
The sources of coupling are as follows: thermomechanical contact relations (dependence of the interfacial heat transfer coefficient on the width of the gap between the cast and mould), dependence of the yield limit on temperature and the existence of the shrinkage strains during phase transformation. In the case of phase transformation, due to the existence of a strong discontinuity in the dependence of heat capacity with respect to time (Fig. 4, (b)), the enthalpy method is applied, as shown by Lewis et al, (1978, 1996). The essence of the application of the enthalpy method is the involvement of a new variable (enthalpy). This formulation enables us to circumvent the problems involved with the sharp change in heat capacity due to latent heat release during the phase transformation and leads to a faster convergence. Introducing the new variable, H, and employing the finite element approximation result in the thermal equation taking the form
\rho c_p = \frac{dH}{dT}, \qquad \mathbf{K}_T \mathbf{T} + \frac{dH}{dT}\, \dot{\mathbf{T}} = \mathbf{F}_T  (9)
The definitions of the enthalpy variable for pure metals and alloys are given by Eqn (10) as follows

H = \begin{cases} \int c\, dT, & T < T_m \\ \int c\, dT + (1 - f_s)\, \Delta h_f, & T = T_m \\ \int c\, dT + \Delta h_f, & T > T_m \end{cases} \qquad
H = \begin{cases} \int c\, dT, & T < T_{sol} \\ \int c\, dT + (1 - f_s)\, \Delta h_f, & T_{sol} \le T \le T_{liq} \\ \int c\, dT + \Delta h_f, & T > T_{liq} \end{cases}  (10)
The finite difference approximation (Lewis et al, (1978)) is used for the estimation of the enthalpy variable. The same solution scheme is used in the case of mould filling analysis.
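A small numerical illustration of the enthalpy variable of Eqn (10) for an alloy is given below. The solidus and liquidus temperatures and the latent heat are the LM25-like figures quoted in the examples later in this section; the specific heat value and the linear release of latent heat over the freezing interval are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of Eqn (10) for an alloy: sensible heat plus latent heat released
# (linearly, as an assumption) between solidus and liquidus. T_sol = 532 C, T_liq = 612 C
# and the latent heat 398 kJ/kg are quoted in later examples; c = 900 J/(kg K) is assumed.
c, dh_f, T_sol, T_liq = 900.0, 398.0e3, 532.0, 612.0

def enthalpy(T):
    f_l = np.clip((T - T_sol) / (T_liq - T_sol), 0.0, 1.0)   # liquid fraction = 1 - f_s
    return c * T + f_l * dh_f

T = np.linspace(400.0, 700.0, 601)
dHdT = np.gradient(enthalpy(T), T)                 # effective heat capacity, smooth in T
print(f"dH/dT: {dHdT[0]:.0f} J/(kg K) outside, up to {dHdT.max():.0f} J/(kg K) in the interval")
```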
Mechanical Contact
The basic assumption is that the whole cast part is in perfect contact with the mould at the beginning of the thermal stress analysis. The assumption is justified by the fact that the thermal stress analysis starts after the commencement of solidification. Because of the assumption concerning small deformations we may consider the so-called "node to node" contact. A penalty formulation is used, which is briefly described. The potential energy of the mechanical system is augmented with a system of constraints represented by the penalty stiffness κ. After minimization the resulting equations of equilibrium are of the form

\Pi = \frac{1}{2}\, \mathbf{q}^T \mathbf{K} \mathbf{q} - \mathbf{q}^T \mathbf{F} + \frac{1}{2}\, \kappa\, \mathbf{g}^T \mathbf{g}, \qquad \mathbf{K}' \mathbf{q} = \mathbf{F}',  (11)
where K' and F' are the augmented stiffness matrix and the equivalent force vector, respectively. The term g represents a vector of penetration of the contacting nodes into the contact surface. In the case of non-existence of contact the distance between the nodes is calculated and
in consequence the value is transferred to the thermal module where the interfacial heat transfer coefficient is calculated.
Thermal Contact
As mentioned above, the interfacial heat transfer coefficient is used for establishing the interface thermal properties of the layer between the mould and the cast part. The coefficient depends on the air conductivity (k_air), the thermal properties of the interface materials and the magnitude of the gap (g). The formula from Lewis, (2000), Lewis & Ransing, (1998), is adopted: h = k_air/(g + k_air/h_0). The value h_0, an initial heat transfer coefficient, should be obtained from experiments and reflects the influence of the type of interface materials, where coatings may be applied. Additionally, from a numerical point of view, it allows us to monitor the dependence of the resulting interfacial coefficient on the gap magnitude.
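The gap dependence of this coefficient can be illustrated directly; the air conductivity and the initial coefficient h_0 in the sketch below are assumed values for illustration (h_0 = 6000 W/m²K matches the coefficient quoted in a later example, but is not necessarily the value used here).

```python
# Sketch of h = k_air / (g + k_air / h0) from the text above; k_air and h0 are
# assumed illustrative values, not calibrated data from this study.
def interfacial_htc(gap, k_air=0.03, h0=6000.0):
    """Interfacial heat transfer coefficient (W/m^2 K) for a given air-gap width (m)."""
    return k_air / (gap + k_air / h0)

for g in (0.0, 1.0e-5, 1.0e-4, 1.0e-3):
    print(f"gap = {g:8.1e} m   h = {interfacial_htc(g):8.1f} W/m^2 K")
```

The rapid drop of h as the gap opens is what links the mechanical contact state to the cooling rate in the coupled analysis.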
Microstructural Model
The main assumptions of a microstructural solidification model are presented herein (Thevoz, Desbiolles and Rappaz, (1989), Celentano, (2002)). The partition laws are given below. They state that the sum of the solid, f_s, and liquid, f_l, fractions is 1. The solid fraction consists of the sum of the dendritic, f_d, and eutectic, f_e, fractions. A further assumption, valid for the equiaxed dendritic solid fraction, is that the solid fraction consists of the dendritic grain, f_g^d, internal, f_i, and intergranular eutectic, f_g^e, volumetric fractions, respectively
f_s + f_l = 1, \qquad f_s = f_d + f_e, \qquad f_s = f_g^d\, f_i + f_g^e.  (12)
The internal fraction is split into the sum of dendritic, fd, and eutectic, fe, internal volumetric fractions which leads to the final formulae for dendritic and eutectic fractions, i.e.,
f_i = f_i^d + f_i^e, \qquad f_d = f_g^d\, f_i^d, \qquad f_e = f_g^d\, f_i^e + f_g^e  (13)

with the assumption that in the alloy the intergranular eutectic phase does not appear, f_g^e = 0, and spherical growth

f_g^d = \frac{4}{3}\, \pi N_d R_d^3, \qquad f_i^e = \frac{4}{3}\, \pi N_e R_e^3,  (14)

where N_d, N_e and R_d, R_e are the grain densities and grain sizes described by nucleation and growth laws.
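The spherical-growth relation of Eqn (14) is easy to evaluate directly; in the sketch below the grain density is the maximum value quoted in the example that follows, while the grain radius is an assumed illustrative value.

```python
import numpy as np

# Hedged sketch of Eqn (14): volumetric grain fraction from grain density N (grains/m^3)
# and grain radius R (m). N = 3.0e9 is the maximum grain density quoted in the example
# below; R = 300 micrometres is an assumed illustrative radius.
def grain_fraction(N, R):
    return 4.0 / 3.0 * np.pi * N * R**3

print(f"f = {grain_fraction(3.0e9, 300.0e-6):.2f}")
```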
Illustrative Examples
Cylinder
To demonstrate the effect of the applied pressure, the geometry of a cylindrical sample is adopted. The diameter of the mould is 0.084 m, the diameter of the cast is 0.034 m, the height of the cast is 0.075 m and the height of the mould is 0.095 m.
Fig. 5. Discretized mould and cast (a); temperature variation close to the bottom of the cast, squeezed and free casting (b).
The sample was discretized with 9140 isoparametric linear bricks and 10024 nodes. The finite element mesh for half of the cylinder (even though the whole cylinder was analyzed) is presented in Fig. 5. The following thermal boundary and initial conditions were assumed: a constant temperature of 20°C on the outer surface of the mould, 200°C on the top of the cast, 700°C as the initial temperature of the cast and 200°C as the initial temperature of the mould, respectively. The mould is fixed rigidly to the foundation. The die is made of steel H13 with the properties: Young's modulus 0.25E+12 N/m², Poisson's ratio 0.3, density 7721 kg/m³, yield stress 0.55E+10 N/m², thermal expansion coefficient 0.12E-5; and the material properties of the cast (aluminium alloy, LM25) are: Young's modulus 0.71E+11 N/m², Poisson's ratio 0.3, density 2520 kg/m³, yield stress 0.15E+9 N/m², fluidity parameter 0.1E-2, thermal expansion coefficient 0.21E-4, contraction 0.3E-12, T_liq = 612°C, T_sol = 532°C.
Fig.6. Solidification (a, b) and displacement (c, d) patterns (squeeze casting - left, no external pressure - right).
The effect of pressure applied to the top of the cast is demonstrated in Fig. 6. When comparing the displacement patterns for both cases it is seen that the displacements for the squeezed workpiece are the smallest at the bottom, where the gap is closed. This implies a higher cooling rate and, in consequence, faster solidification; the solidified region is larger for the squeezed part. The temperature close to the bottom is lower in the squeezed part than in the one without external pressure (Fig. 5, right).
Aluminium Part - Influence of Large Displacements and Initial Stresses
The analysed aluminium part has overall dimensions of 0.47 m x 0.175 m x 0.11 m. The finite element discretization of the cast and mould are presented in Fig. 7. The parts are discretized with 2520 linear bricks and 3132 nodes.
Fig. 7. Finite element mesh of the mould (a) and the cast (b).
Fig.8: Solidification patterns: small displacements, no external pressure (a); squeeze casting (b); squeeze casting, large displacements (c).
Thermal boundary and initial conditions are assumed as in the previous case. The mould is fixed rigidly to the foundation and the pressure is applied to the top of the cast. The material data are set as for the previous example. The process is simulated over the first 30 sec of the cooling cycle. Results concerning three cases are given in Fig. 8. We focus our attention on the solidification patterns. Assuming small displacements it can be seen that the effect of pressure is significant, namely, the solidification is further advanced when applying pressure than for the case of a free casting, Fig. 8 (a) and (b). For the case of nonlinear geometry, Fig. 8 (c), the solidification appears to be less advanced than in the case of neglecting this effect, Fig. 8 (b). However, the solidification is still more advanced in the case of squeeze forming than without applying the external pressure.
Coupling the Mould Filling and Thermal Stress Analyses
In this case we follow the general assumption that the process is sequential, which implies that the thermal stress analysis is performed after filling the mould with metal and reaching the final position of the punch. The latter implies that the final shape has been achieved. In
this process the temperature field obtained at the end of the mould filling process represents the initial condition for the thermal stress transient analysis. An example of an industrial squeeze forming process is described herein. Fig. 9 presents the coolant channel system of the punch and die. The problem is actually considered as axisymmetric, and the part being formed is a wheel. The material properties are the same as presented in the previous examples. The diameter of the wheel is 0.5 m, the diameter of the die-punch-ring system is 0.9 m, the height of the punch is 0.23 m and the thickness of the part is 0.015 m. The initial temperatures of the particular parts of the system were as follows: cast 650°C, die and ring 280°C, and punch 300°C.
Fig. 9: Coolant channels system, die (a), punch (b).
The sequence of the punch positions and the corresponding advancement of filling of the cavity is shown in Fig. 10. The maximum punch travel is 49 mm. The temperature distribution, after completion of the filling process, is given in Fig. 11 (a). The next figure, Fig. 11 (b), shows the temperature distribution after 16 sec of the cooling phase. The corresponding solidification pattern is given in Fig. 11 (c) and the von Mises stress distribution is presented in Fig. 11 (d). The highest von Mises stress, 325 MPa, occurs in the punch close to the top of the cast part.
Fig.10: Sequence of the punch positions (0 mm, 10 mm, 35 mm, 40 mm, 45 mm, final position - 39 mm) and the advancement of the metal front (pseudo-concentration function distribution).
Fig. 11: Temperature distribution after the end of the filling process (a), temperature distribution after 16 sec of the cooling phase (b), solidification pattern after 16 sec (c), von Mises stress distribution after 16 sec (d).
Example of a Microstructural Model
The geometry of the cylindrical sample presented above is adopted. The mechanical properties are taken from the previous example.
Fig. 12. Growth laws, dendritic (left), eutectic (right).
The material is an aluminium alloy with 7% silicon, with the following solidification properties of the dendritic and eutectic parts: average undercoolings of 4.0 deg and 3.5 deg, standard deviations of 0.8, maximum grain densities of 3.0E+09, latent heats of 398000 J/kg and liquidus diffusivity of 0.11. The temperature at the top of the sample is kept constant at 600°C. The growth evolution laws are presented in Fig. 12 and distributions of the internal variables at time 2.7 sec, e.g. the liquidus and solid dendritic and eutectic fractions, are given in Fig. 13 (a, b, c). The conductivity distribution is also given in Fig. 13 (d).
Fig. 13. Distributions: liquidus (a), solid dendritic fraction (b), solid eutectic fraction (c), conductivity (d).
Thermal Stress Analysis, Microstructural Solidification, Industrial Example
An example of the solidification and thermal stress analysis of a wheel is presented. The die and cast are discretized with 25108 isoparametric bricks and 22909 nodes. The discretization scheme for the cast and mould is given in Figure 14. The material data are assumed as above.
Figure 14. Discretization scheme, mould (a) and cast (b)
Fig.15. Liquidus distribution (a), eutectic fraction (b), dendritic fraction (c), temperature distribution (d).
Fig. 16. Mises stress distribution.
The temperature distribution, Fig. 15 (a), and the distribution of the main microstructural solidification model internal variables (i.e. liquidus, eutectic fraction, dendritic fraction) are given in Figure 15 (b, c, d). The distribution of the eutectic and dendritic fractions is similar and almost uniform. The Mises stress distribution is shown in Figure 16. The Mises stresses are significant and reach the yield limit of the material (red regions). It may be seen that annealing would be necessary to decrease these values. The application of the microstructural model should allow a better prediction of the thermal stress distribution. Solidification already takes place in the filling phase of the process. However, this problem is highlighted in the paper and will be a topic for future research.
2.4. Microstructural Solidification during Mould Filling
The theoretical framework of the problem arises directly from the considerations presented above. The problem comprises mould filling, thermal analysis and the solidification constitutive model. The numerical example is the filling of a valve with the aluminium alloy LM25. During the filling process solidification of the material is observed. The material data are assumed as above. The initial temperature of the cast is 650°C and the initial temperature of the mould is 150°C. The ambient temperature is 20°C. The interfacial heat transfer coefficient is 6000 W/m²°C.
Fig. 17. Discretization of the valve, mould (a), cast (b).
The wall friction angle is 135°. The filling time is 10 sec. The mould is made of steel H13. The cast and mould are discretized with 10422 nodes and 4917 elements. The discretization is given in Fig. 17. A qualitative difference is seen between Figs 18 (a), 18 (b) and Figs 19 (a), 19 (b), where the temperature distributions and the distributions of the dendritic fraction are presented.
Fig. 18. Temperature distribution, end of processes, free filling (a), pressurized filling (b).
Fig. 19. Dendritic fraction distribution, end of processes, free filling (a), pressurized filling (b).
Fig. 20. Valve, dependence of the filling percentage versus time, free and pressurized filling.
This happened due to a much faster filling of the mould when the pressure was applied. The dependence of the filling percentage on time is given in Fig. 20. When the pressure is applied the filling time is 2.3 sec, while for the case of free filling the time is increased to 10 sec.
2.5 Final Remarks
A quasi-static, Eulerian finite element method is presented for modelling the metal displacement in the squeeze casting process. An aluminium alloy casting is modelled and the evolution of the metal profile in the die cavity as well as the velocity distribution within the melt is revealed. The numerical approach has proved to be efficient in modelling the mould filling during the squeeze casting process. On considering the stress analysis it is shown that the squeezed parts solidify faster. The effect is caused by reduced, or even closed, gaps between the mould and the cast and this affects the interfacial heat transfer coefficient. Further examples concern the effect of pressure and the incorporation of the microstructural solidification model and coupling the
thermomechanical with the fluid-thermal analyses. The given examples are of industrial significance.
3.0 PHARMACEUTICAL POWDER COMPACTION
3.1 Problem Statement
Pharmaceutical powder compaction is a very important process for pharmaceutical manufacturing, and the uniformity of the powder compaction is directly related to the quality of the formed tablets. The compaction problem has typically been treated using a continuum approach, and thus finite element methods (FEM) have been widely used [21-23] (Cundall & Strack, (1979), Kibbe, (2000), Lewis & Schrefler, (1998)). In reality, however, the dry powder consists of millions of tiny particles of different sizes, and the conventional continuum methods cannot model these discrete features. The discrete/finite simulations for compaction and granular materials have been successfully used for uniform spherical particles which are either rigid or deformable (Gethin et al, (2001), Goodman, Taylor & Brekke, (1968), Ransing et al, (2000), Rowe & Roberts, (1996)). Due to the complexity of the particle sizes and shapes in pharmaceutical powders, the results will be more realistic if we can simulate the effect of these irregular particle shapes and the particle size distribution.
Fig. 21. Real powder particles and their random representation.
Therefore, this study will focus on the efficient formulation of a discrete/finite element method to simulate the effects of random particle distribution and irregular shapes. These irregular particles are approximated by polygons meshed with adaptive finite element meshing, and a contact detection algorithm is optimized. This subsequently increases the overall efficiency of the simulation system.
3.2 Powders and Packing Representation
Pharmaceutical powders consist of a wide range of particle sizes and shapes (see Fig. 21). For example, the particle size distribution for starch powders is: 2-45 μm for corn starch, wheat starch and rice starch; 10-100 μm for potato starch (Rowe & Roberts, (1996)). For practical simulations, we simulate the powder system with pseudo-particles in a representative volume using balls and bars/plates as the representation. The initial configuration is generated using the Macropac package (Oxmat, (2001)). Then, each particle is meshed with a finite element mesh with different element sizes determined by an adaptive technique. Then the compaction
of the particle assembly is computed using the discrete/finite element method described in detail in the following sections.
3.3 Particle-Scale Discrete/Finite Element Method
A powder system usually contains millions of particles, and even a representative volume will have thousands of particles. To maximize the efficiency, we have developed a discrete/finite element formulation on the particle scale. The novelty of this formulation is that the particle-scale assembly of the stiffness matrix is over the elements on each particle, which comprises 50 to 200 elements per particle. For a given N particles, the matrix size is much smaller and its inversion is usually N times more efficient.
Fig. 22. Meshing on each particle (a) and forces at contacts (b).
For a plane-stress elastic problem, the stress-strain relationship can be simply written as

\begin{Bmatrix} \sigma_x \\ \sigma_y \\ \tau_{xy} \end{Bmatrix} = \mathbf{D} \left( \begin{Bmatrix} \varepsilon_x \\ \varepsilon_y \\ \gamma_{xy} \end{Bmatrix} - \boldsymbol{\varepsilon}_0 \right), \qquad \mathbf{D} = \frac{E}{1-\nu^2} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & \frac{1-\nu}{2} \end{bmatrix},  (15)
where ε_0 is the initial strain. For given shape functions N_i(x,y) and mesh types, after some straightforward calculations, we have the generic form

\mathbf{M}\ddot{\mathbf{u}} + \mathbf{C}\dot{\mathbf{u}} + \mathbf{K}\mathbf{u} = \mathbf{F},  (16)

where M and C are the mass and damping coefficient matrices, respectively, K is the stiffness matrix, and F is the contribution of the body forces, contact forces and boundary conditions. For each particle, the procedure for the finite element analysis is the same as for the continuum counterpart. The only difference is that the contact forces and loading conditions are calculated dynamically at each time step. Thus, the major effort is devoted to the particle contact detection and the computation of contact forces.
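A direct evaluation of the plane-stress matrix of Eqn (15) is sketched below; the elastic constants are the illustrative starch-like values used later in this section, and the applied strain is an arbitrary example.

```python
import numpy as np

# Plane-stress elasticity matrix D of Eqn (15). E = 1e6 Pa and nu = 0.3 are the
# illustrative values quoted for the starch particles later in this section; the strain
# vector is an arbitrary example (no initial strain).
def plane_stress_D(E, nu):
    return E / (1.0 - nu**2) * np.array([[1.0, nu, 0.0],
                                         [nu, 1.0, 0.0],
                                         [0.0, 0.0, (1.0 - nu) / 2.0]])

D = plane_stress_D(1.0e6, 0.3)
strain = np.array([1.0e-3, 0.0, 0.0])
print("stress [sigma_x, sigma_y, tau_xy] =", D @ strain)
```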
3.4 Contact Force and Detection Algorithm
The velocity, rotation and acceleration of each particle are computed using the local equilibrium of particles in direct contact, and the unbalanced forces and moments approach equilibrium incrementally. Given a stiffness k and damping c for the particle interactions, the force
f can be computed from the impact velocity, v, the penetration δ of the overlapping particles and the prescribed halo interaction thickness Δ:

f = \frac{k\,\delta}{\Delta - \delta} + c\,v,  (17)
The acceleration, velocity and position, as well as the rotation, of each particle are updated at every time step. Contact detection is an important part of determining the forces and subsequent penetration between particles, especially when the number of particles N is very large. For better performance, we have developed a two-stage contact detection algorithm, which works in the following way: the first stage is a coarse detection using a particle contact list, and the second stage is the detailed contact detection for the particles concerned, Fig. 22. The complexity of the present algorithm is O(N*log(N)*m), which is better than most existing algorithms.
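A compact sketch of such a two-stage search for circular pseudo-particles is given below: the coarse stage bins particle centres into grid cells to build candidate lists, and the detailed stage applies the overlap test only to those candidates. This mirrors the split described above but is not the authors' implementation; all names and values are illustrative.

```python
import numpy as np
from collections import defaultdict

# Hedged sketch of a two-stage contact search for circular pseudo-particles.
# Stage 1 (coarse): bin particle centres into grid cells and gather candidates from
# neighbouring cells. Stage 2 (detailed): exact overlap test on the candidates only.
def contacts(centres, radii, cell=0.1):
    grid = defaultdict(list)
    for i, (cx, cy) in enumerate(centres):
        grid[(int(cx // cell), int(cy // cell))].append(i)
    found = set()
    for (gx, gy), members in grid.items():
        cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                for j in grid.get((gx + dx, gy + dy), [])]
        for i in members:
            for j in cand:
                if j > i and np.hypot(*(centres[i] - centres[j])) < radii[i] + radii[j]:
                    found.add((i, j))
    return sorted(found)

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                      # 200 particle centres in a unit square
print(len(contacts(pts, np.full(200, 0.02))), "contacts detected")
```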
3.5 Numerical Examples
By using the combined discrete-finite element method, we can simulate pharmaceutical powder compaction with different particle sizes and shapes. For real powders, such as starch, the particles vary greatly in size and shape. Fig. 23 shows the compaction and shear stress distribution of 14 starch particles with E = 10^6 Pa and ν = 0.3. We can see that the irregular particles slide, bend and deform significantly during the stage of compaction, with a highly non-uniform shear stress. The contact area also increases dramatically, and thus so does the strength of the compact. Here we only show that the present method works well for real powders. However, one can in principle simulate thousands, or millions, of particles using the present methodology if the computing time and computer memory are adequate. In addition, parallel algorithms can be used to speed up the simulation process.
Fig.23: Starch particles during compression and shear stress
Fig. 24. Macropac generated particles (a) and D-FEM idealization (b)
Let us consider a larger matrix of particles, namely 200. They are generated using Macropac and their idealization using the D-FEM scheme is given in Fig. 24. The material parameters are as follows: Young's modulus 1000 MPa, Poisson's ratio 0.3, yield limit 448 MPa and viscosity 13 MPa.s. The dimensions of the mould and particles are normalized; when the mould is 1.0 by 1.0 the particle sizes vary between 0.0415 and 0.0248. The maximum punch displacement is 15% of the mould height. The particles behave elastically in the process, since the maximum von Mises stress never exceeds the yield limit and has a value of 356 MPa. The averaged shear strain and stress distributions are given in Fig. 25. The shear strain reaches 2.4% and the maximum values of strain and stress are concentrated along the 45° line.
Fig. 25. Shear strains distribution (a), shear stress distribution (b).
3.6 Adopting Adaptive Meshing
An application of a routine ready to be employed in the D-FEM scheme is the nonlinear explicit dynamic code [29]. Apart from the fact that the problem of compaction is dynamic, the particles can be strongly deformed under severe compaction conditions, i.e. large punch movements. It is
expected that the particles will undergo plastic deformation. Considering such conditions, an explicit time integration algorithm seems to be appropriate. In addition, because of the complex shapes of the particles, e.g. sharp corners, we believe that mesh adaptivity procedures are necessary. The Zienkiewicz-Zhu (Zienkiewicz & Zhu, (1987)) error estimator has been applied

\eta = \frac{\left[ \int_\Omega (\boldsymbol{\sigma} - \hat{\boldsymbol{\sigma}})^T (\boldsymbol{\sigma} - \hat{\boldsymbol{\sigma}})\, d\Omega \right]^{1/2}}{\left[ \int_\Omega \boldsymbol{\sigma}^T \boldsymbol{\sigma}\, d\Omega \right]^{1/2}},  (18)
where σ and u represent the exact values of the stresses and displacements, while σ̂ and û are the finite element approximations, respectively. The mesh refinement is carried out when η > η_perm. In this example a ductile particle material has been chosen, i.e. pregelatinised starch with the following properties: Young's modulus 5.315 x 10^10 N/m², Poisson's ratio 0.3, yield stress 448 MPa, density 660 kg/m³ and particle diameter 0.02 mm. In the given problem the particle is squeezed by a rigid plate. The initial mesh, the shape of the particle after pressing and the resultant mesh are presented in Fig. 26 (a, b and c). After three adaptive iterations are performed, the global error of the final mesh was reduced to 2.2%, the mesh containing 2483 elements. The refinement of the mesh is performed in the area around the rigid surface, indicating higher gradients of stresses. It can be seen from Fig. 26 (d) that the maximum effective stresses occur at the locations close to the punch. In other sections of the particle, further away from the punch, the effective stresses decrease gradually. The adaptivity procedure makes possible the tracking of the stress evolution in the particle.
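The error measure of Eqn (18) reduces to a ratio of two energy-like norms; the sketch below evaluates it on synthetic stress data, with a smoothed field standing in for the exact stresses, as is usual in practice. The arrays and the 5% noise level are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of the relative error of Eqn (18): norm of the difference between the
# reference stresses and the FE stresses, normalised by the norm of the reference field.
# Synthetic data are used; in practice a recovered (smoothed) field replaces the exact one.
def zz_error(sigma_fe, sigma_ref, weights):
    diff = sigma_ref - sigma_fe
    num = np.sum(weights * np.einsum("ij,ij->i", diff, diff))
    den = np.sum(weights * np.einsum("ij,ij->i", sigma_ref, sigma_ref))
    return np.sqrt(num / den)

rng = np.random.default_rng(1)
s_ref = rng.normal(size=(50, 3))                  # reference stresses at 50 sampling points
s_fe = s_ref + 0.05 * rng.normal(size=(50, 3))    # FE stresses with ~5% noise added
print(f"eta = {zz_error(s_fe, s_ref, np.ones(50)):.3f}")
```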
Figure 26. Initial mesh (a), deformed mesh after applying the adaptive procedure (b), contact region (c), effective von Mises stress distribution (d).
3.7 Discussion
The pharmaceutical powder and tableting process is simulated using a combined discrete-finite element method with adaptive meshing on deformed particles and contact dynamics among irregular particles. The particle-scale formulation and two-stage contact detection algorithm developed for the present method enhance the overall efficiency of the calculations of particle packing and elastic deformation. The irregular particle shapes and random particle sizes are represented as a pseudo-particle assembly based on real powders. Simulations show that particle size and shape have a significant influence on the features of compaction. Particle shapes are more related to the non-uniform shear stress distribution. The results will enhance the design of high-quality tablets and reduce the possibility of capping in the formed tablets. However, the compaction involves many other factors of the tableting procedure. Factors such as constitutive relationships, lubrication and coating of the particles may also have an important influence on the behaviour of the powder compaction. These topics are under active research.
4.0 COUPLED RESERVOIR SIMULATION
4.1 Problem Statement
Consideration of production-induced hydrocarbon reservoir compaction and its subsequent surface subsidence is uncommon in traditional reservoir engineering, even though specific
cases have been reported in the literature. Much of the information and data on subsidence above oil and gas reservoirs is confidential in nature. Those reports available in the literature which are most complete generally concern older fields (Lewis, Makurat & Pao, (2000)). Generally, oil operators show no interest in the issue of oil production-induced reservoir compaction and the land surface subsidence it subsequently causes.
Fig. 27: Ekofisk Complex in the North Sea.
However, in 1984, this viewpoint took a dramatic turn because of the discovery of a subsidence bowl in the Ekofisk field of the North Sea (56°N, 03°E) (Johnson, Rhett & Siemers, (1989)). Apart from the fact that the subsidence of the Ekofisk field causes the sinking of the production platform and endangers both wells and pipelines, the influence of reservoir compaction in the North Sea has allowed the production of more crude oil than initially anticipated. Fig. 27 shows a photo of the Ekofisk complex and the inset shows the detection of the subsidence at the concrete base in 1986. The interaction between the geomechanical response and multiphase fluid behaviour is a complex phenomenon, firstly due to the complexity of the physics involved, and secondly due to the structurally complicated geometry of the reservoir and cap rock. The traditional way of analysing such a problem is to treat the fluid and solid matrix as two separate continua, the so-called uncoupled approach [33, 34]. However, it can be shown that for the uncoupled approach the pore pressure change for a single-phase simulator is calculated as (Settari, Kry & Yee, (1989)):
\Delta P = (\mathbf{C} + \Delta t\, \mathbf{T})^{-1} (-\Delta t\, \mathbf{T} P + \Delta q),  (19)

whereas for the coupled approach, it yields

\Delta P = (\mathbf{L}^T \mathbf{K}^{-1} \mathbf{L} + \Delta t\, \mathbf{T})^{-1} (-\Delta t\, \mathbf{T} P + \Delta q + \mathbf{L}^T \mathbf{K}^{-1} \Delta \mathbf{F}).  (20)
It is obvious from this analysis that the coupled and uncoupled approach lead to different expressions for the pore pressure change. In fact, the most common practice in uncoupled reservoir simulation is to add an 'artificial' compressibility term, Cp, into the fluid equation. In the following, we present an analysis of this compressibility term and investigate its effect on the reservoir equations.
4.2 Governing Equations and Analysis
The governing equations for the flow-dependent deformation analysis of porous media can be written as

\nabla \cdot \boldsymbol{\sigma} - \alpha S \nabla p = 0,  (21)

for momentum balance and

\nabla \cdot (\lambda \nabla p) = \frac{\partial}{\partial t}\left( \frac{\phi S}{B} \right),  (22)
for the fluid mass conservation, where σ, p, α, λ, B, φ, S are the effective stress, pore fluid pressure, Biot modulus, mobility, formation volume factor, porosity and fluid saturation, respectively. In order to proceed with the analysis, we assume that S = 1 for simplicity. The RHS of Eqn (22) can be written as
\frac{\partial}{\partial t}\left( \frac{\phi}{B} \right) = \frac{\phi\, c_T}{B} \frac{\partial p}{\partial t},  (23)
where c_T = c_f + c_p is termed the total compressibility of the fluid-matrix medium. The terms c_f and c_p are the fluid and pore compressibility, respectively, with their expressions given by
dp
,
C p -
#dp
.
(24)
Notice that in Eqn (24) the porosity change is assumed to be a unique function of pressure, which is the key assumption used in an uncoupled reservoir simulation. Due to the change of the porosity accompanying the skeleton deformation caused by fluid withdrawal, the meaning of c_p needs to be analysed critically, since it is the term which differentiates the uncoupled from the coupled methodology. Two assumptions are usually made when using c_p in a reservoir simulation: (a) the reservoir deforms uniaxially or (b) the reservoir deforms isotropically. In the first case, it has been shown that the pore compressibility term is given by (Gutierrez, Lewis & Masters, (2001)):
c_p = \frac{1}{\phi \left( K + \frac{4}{3} G \right)},  (25)

while in case (b), it is derived as

c_p = \frac{1}{\phi K},  (26)
where K and G are the bulk and the shear modulus, respectively. Eqns (25) and (26) are two values of the pore compressibility which are usually used in an uncoupled analysis. Even for these two cases, the differences in the values are significant. For instance, using a Poisson's ratio of ν = 0.2, the calculated pore compressibility for isotropic compaction is twice the value for the uniaxial strain compaction for the same Young's modulus and porosity. This simple analysis shows that it is impossible to characterise the pore deformation response by a single-valued parameter. In the case of uniaxial compaction, it is only valid when the formation is horizontally infinite. Real reservoirs are, however, bounded laterally and will not deform uniformly, even under a uniform pressure drawdown. On the other hand, for isotropic compaction, it is very unlikely that the horizontal and vertical strains will be of the same order. In fact, field observation via drilling logs has shown that during compaction a dynamic stress arch is formed in the crestal part of the field, which results in load re-distribution in the overburden. The presence of slip layers allows stress 'venting' to take place through micro-seismic events, which further invalidates the constant total stress assumption in uncoupled modelling.
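The factor-of-two difference quoted above is easy to verify; the sketch below evaluates Eqns (25) and (26) for an assumed Young's modulus, the stated Poisson's ratio of 0.2 and the 35% porosity quoted for the chalk reservoir below.

```python
# Comparison of the two pore-compressibility assumptions, Eqns (25) and (26).
# E = 10 GPa is an assumed illustrative modulus; nu = 0.2 and phi = 0.35 follow the
# values quoted in the surrounding text.
E, nu, phi = 10.0e9, 0.2, 0.35
K = E / (3.0 * (1.0 - 2.0 * nu))                      # bulk modulus
G = E / (2.0 * (1.0 + nu))                            # shear modulus
cp_uniaxial = 1.0 / (phi * (K + 4.0 * G / 3.0))       # Eqn (25): uniaxial strain
cp_isotropic = 1.0 / (phi * K)                        # Eqn (26): isotropic compaction
print(f"c_p uniaxial  = {cp_uniaxial:.3e} 1/Pa")
print(f"c_p isotropic = {cp_isotropic:.3e} 1/Pa (ratio = {cp_isotropic / cp_uniaxial:.2f})")
```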
791
International Conference on A d v a n c e s in Engineering and T e c h n o l o g y
(a) (b) Fig. 28_ Oil platform in North Sea (a) and region of interest (b). '
~= 350
'
'
I
'
I
F
,
,
,
I
'
'
'
to:~ er, i~:~- 1437," ~ 1
I .~..,
./"
.....'......... [
400 35O
~'3o0 x
Y
e'250 "13
v
~
300~ x
:
~
:
~
""
: ~>,
:
250
,r
~200 P
200~
f" w
'~15o
o.
150~
..f"
,~
=
~I00
lOO~ 0
,~w"~'
e~_ ~:~.... ~
.
.
.
.
.
.~.e@*;,+~ . . . .
tl00 . . . .
l , 150
0
Time(Months)
Y~
Fig. 29. Total oil production and water cut (a) and comparison of the present study with field measurement (b).
Fig. 30. Stress arch formation in the overburden layer.
The reservoir is composed mainly of heterogeneous chalk with an average porosity of 35%. The depth of the reservoir is approximately 2.7-2.8 km subsea. In order to minimise the run time, a relatively coarse mesh was used for the representation. A total of 166 months of history were simulated. Fig. 29 (a) shows the total oil production and the water cut ratio spanning the simulation period and Fig. 29 (b) shows the comparison of the present study with field measurement. The analysis showed that the seabed has sunk approximately 0.11 m below the subsea level. The maximum vertical displacement occurs at the crestal region of the field, where most of the active production wells are located. The magnitude of the maximum seabed subsidence is approximately 3.3 m. The extent of the subsidence caused by the production is very large, covering an area of approximately 112 sq. km. Fig. 30 shows the stress arch formation in the top of the overburden layer. Due to the load redistribution, the flanks of the reservoir experience overpressuring. The vertical downward movement of the reservoir forces the reserves into the flank region. This also explains the relatively low overall average reservoir pressure decline, apart from the replenishment of reservoir energy due to compaction.
4.4 Conclusions
In this paper, we have presented an analysis of the coupled reservoir equation and perform a critical analysis of the pore compressibility term. It is shown here that the traditional approach of using a single-valued parameter to characterise the pore compressibility is insufficient. In addition to that, field observation has repeatedly invalidated the fundamental assumption of constant total stress in uncoupled formulation. Finally, we present a field case study of a real-life reservoir in North Sea. The analysis showed that formational compaction replenishes the reservoir energy and extend the life of the production period. During active operation, the heterogeneous weak chalk formation experiences compaction in the range of subsidence/compaction ratio (S/C) of 0.7-0.75. 5.0 CLOSURE We have presented a few representative successful applications of developed algorithms and programs. The applications comprise manufacturing and natural phenomena connected with the exploitaition of natural resources. Further research will be connected with the development of algorithms and programs allowing deeper get into nature of the mentioned processes, namely, structural interactions, fluid flow structure interactions, fluid flow temperature interactions, fluid flow structure temperature interactions along with the investigation of influences of different type imperfections by means of extensive parametric studies and design sensitivity analysis. 6.0 A C K N O W L E D G M E N T
The support of Engineering and Physical Research Council, UK, GKN Squeezeform, AstraZeneca, BP Amoco and Elf is gratefully acknowledged.
793
International Conference on Advances in Engineering and Technology
REFERENCES
Usmani A.S., Cross J.T., Lewis R.W., A finite element model for the simulations of mould filling in metal casting and the associated heat transfer, Int. j. numer, methods eng., Usmani A.S, Cross J.T., Lewis R.W., The analysis of mould filling in castings using the finite element method, J. of Mat. Proc. Tech., 38(1993), pp. 291-302. Lewis R.W., Usmani A.S., Cross J.T., Efficient mould filling simulation in castings by an explicit finite element method, Int. j. numer, methods fluids, 20(1995), pp. 493-506. Lewis R.W., Navti S.E., Taylor C., A mixed Lagrangian-Eulerian approach to modelling fluid flow during mould filling, Int. j. numer, methods fluids, 25(1997), pp. 931-952. Gethin D.T., Lewis R.W., Tadayon M.R., A finite element approach for modelling metal flow and pressurised solidification in the squeeze casting process, Int. j. numer, methods eng., 35(1992), pp. 939-950 Ravindran K., Lewis, R.W., Finite element modelling of solidification effects in mould filling, Finite Elements in Analysis and Design, 31 (1998), pp. 99-116. Lewis R.W., Ravindran K., Finite element simulation of metal casting, Int. j. numer, methods eng., 47(2000). Lewis R.W., MERLIN-An Integrated finite element package for casting simulation, University of Wales Swansea, 1996. Zienkiewicz, O.C. and Taylor, R.L., The Finite Element Method, fifth ed., ButterworthHeinemann, Oxford, 2000. Sluzalec, A., Introduction to nonlinear thermomechanics, Springer Verlag, 1992. Kleiber, M., Computational coupled non-associative thermo-plasticity, Comp. Meth. Appl. Mech. Eng., 90 (1991), pp. 943-967. Perzyna, P., Thermodynamic theory of viscoplasticity, in Advances in Applied Mechanics, Academic Press, New York, 11 (1971) Felippa, C..A., Park, K.C., Staggered transient analysis procedures for coupled dynamic Vaz M., Owen D.R.J., Thermo-mechanical coupling: Models, Strategies and Application, CR/945/96, University od Wales Swansea, 1996. Lewis, R.W., Morgan, K., Zienkiewicz, O.C., An improved algorithm for heat conduction problems with phase change, Int. j. numer, methods eng., 12 (1978), pp. 1191-1195. Lewis, R.W., Morgan, K., Thomas, H. R., Seetharamu, K.N., The Finite Element Method in Heat Transfer Analysis, Wiley, 1996. Lewis, R.W., Ransing R.S., The optimal design of interfacial heat transfer coefficients via a thermal stress model, Finite Elements in Analysis and Design, 34 (2000), pp. 193-209. Lewis R.W., Ransing R.S., A correlation to describe interfacial heat transfer during solidification simulation and its use in the optimal feeding design of castings, Metall. Mater. Trans. B, 29 (1998), pp. 437-448. Thevoz Ph., Desbiolles J., Rappaz M., Modelling of equiaxial formation in casting, Metall. Trans. A, 20A, 1989, 311. Celentano D.J., A thermomechanical model with microstructure evolution for aluminium alloy casting processes, Int. J. of Plasticity, 18, 2002, pp. 1291-1335.
794
International C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
Cundall P.A., and Strack O.D.L., A discrete element model for granular assemblies, Geotechnique, 29 (1979), pp. 47-65. Kibbe, A.H., Pharmaceutical Excipients, APA and Pharmaceutical Press, 2000. Lewis R.W. and Schrefler B.A., The Finite Element Method in the Static and Dynamics Deformation and Consolidation of Porous Media, 2nd Ed., John Wiley & Son, England, 1998. Gethin D.T., Ransing R.S., Lewis R.W., Dutko M., Crook A.J.L., Numerical comparison of a deformable discrete element model and an equivalent continuum analysis for the compaction of ductile porous material, Computers & Structures, 79 (2001), pp. 1287-1294. Goodman R.E., Taylor R.L. and Brekke T., A model for mechanics of jointed rock, J. Soil Mech Found, ASCE, 1968. Ransing R.S., Gethin D.T., Khoei A.R., Mosbah P., Lewis R.W., Powder compaction modelling via the discrete and finite element method, Materials & Design, 21 (2000), pp. 263-269. Rowe R.C. and Roberts R.J., Mechanical properties, in: Pharmaceutical powder compaction technology, eds Alderborn G. and Nystrom C., Marcel Dekker Inc, New York, 1996, pp. 283-322. Macropac reference manual, Oxmat, 2001. Dong L.L., Lewis R.W., Gethin D.T., Postek E., Simulation of deformation of ductile pharmaceutical particles with finite element method, ACME conference, April 2004, Cardiff, UK. Zienkiewicz O.C., Zhu J. A simple error estimator and adaptive procedure for practical engineering analysis. International Journal of Numerical Methods in Engineering, 24 (1987), pp. 337--357. Lewis R.W., Makurat A. and Pao W.K.S., Fully coupled modelling of seabed subsidence and reservoir compaction of North Sea oil fields. J. of Hydrogeology. (27), 2000. Johnson J.P., Rhett D.W., Siemers W.T., Rock mechanics of the Ekofisk reservoir in the evaluation of subsidence. JPT, July (1989), pp 717-722. Settari A., Kry P.R., Yee C.T., Coupling of fluid flow and soil behaviour to model injection into uncemented oil sands. JCPT, 28 (1989), pp. 81-92. Finol A. and Farouq-Ali S.M., Numerical Simulation of Oil Production with Simultaneous Ground Subsidence. SPEJ, (1975), pp. 411-424. Gutierrez M., Lewis R.W. and Masters I. (2001) Petroleum reservoir simulation coupling fluid flow and geomechanics. SPE Reser. Eval. & Engrg., June (2001), pp. 164-172.
795
International Conference on Advances in Engineering and Technology
MOBILE AGENT SYSTEM FOR COMPUTER NETWORK MANAGEMENT O. C. Akinyokun; Bells University of Technology, Ota, Ogun State, Nigeria A. A. Imianvan; Department of Computer Science, University of Benin, Benin, Nigeria
ABSTRACT Conventionally, the management of computer networks involves the physical movement of the Network Administrator from one computer location to another. The mathematical modeling and simulation of mobile agent systems for the management of the performance of computer networks with emphasis on quantitative decision variables have been reported in the literature. The prototype of an expert system for the administration of computer networks resources with emphasis on both quantitative and qualitative decision variables using mobile agent technology is presented in this paper. The architecture of the system is characterized by the relational database of the computer networks resources and the process of management is modeled by using Unified Modeling Language (UML) and Z-Notation. The implementation of the mobile agent is driven by intelligent mechanism for de-assembling, serializing, queuing, and Divide-and-Conquer Novelty Relay Strategy of its component parts (subagents), and a mechanism for assembling the output reports of the subagents. The ultimate goal is to provide an intelligent autonomous system that can police the economic use of the computer network resources and generate desirable statistics for policy formulation and decision making. Keywords: Mobile, Agent, management, Launch, Migrate, Queue, En-Queue, Serialize, Assemble, Relay
1.0 INTRODUCTION A computer network is a group of computers connected together and separated by physical distance. The searching for resources in a network, conventionally, involves the physical movement of the Network Administrator from one computer location to another. The approach is not only stressful but introduces some delays in monitoring events on the network. Besides, events on the network were not monitored as they arise and Network Administrator were, often, bored with the issue of which computer to monitor next. Mobile agents are autonomous and intelligent software that are capable of moving through a network, searching for and interacting with the resources on behalf of the Network Administrator. Mobile agent technology has been applied to electronic commerce transactions in (Jonathan et al, (1999), Olga, (2003), Yun et al, (2002), Harry et al, (2000), Li et al, (2003),
796
International C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
Youyong et al, (2003), Dipanjan & Hui, (2004)); distributed information retrieval in (Lalana et al, (2001), Harry & Tim, (2002), Jeffrey & Anupam, (2003), Meas, (1994)), and Network management in (Cassel et al, (1989), Krishnan & Zimmer, (1991), Allan & Karen, (1993), Marshall, (1994), German & Thomas, (1998)) In (Akinyokun, (1997)), an attempt is made to develop a utility program using Pascal to police the economic utilization of the RAM and Hard Disk of microcomputers. The utility program is capable of being activated and run on a stand-alone computer to: (a) Keep track of the users' data and program files in a computer hard disk. (b) Give notice of each of the users file that: (i) Is more than one month old. (ii) Has not been accessed and updated within a month. (iii) Occupies more than a specified memory space. (c) Raises an alarm and recommend that the offending files be backed up by the Operations Manager (d) Automatically delete the offending users' files at the end of the third alarm.. In (Lai & Baker, (2000); Saito & Shusho, (2002)), attempts were made to develop mobile agents that were capable of managing bandwidth of computer network environment. A mathematical modeling and simulation of the performance management of computer network throughput, utilization and availability is proposed in (Aderounmu, (2001)). This research attempts to study the features of a computer network environment and develop an expert system using mobile agent technology to intelligently and practically manage computer network bandwidth, response time, memory (primary and secondary), file (data and program) and input-output device. Each of the resources of a computer network environment has unique features; hence the management of each resource is supported by a unique model using Unified Modeling Language (UML) (Bruce, (1998), Meilir, (2000)), and Z Notation (Bowen et al, (1997), Fraser et al, (1994), Spivey, (1998)). The details of the UML and Z Notation of each model of a subagent have been presented in (Imianvan, (2006)). In an attempt to minimize the size of this paper, the UML for managing the bandwidth of computer network is presented as a typical example while the Z Schema of all the subagents are presented. The ultimate goal is to provide an intelligent autonomous system that can police the economic use of the computer network resources and generate desirable statistics for policy formulation and decision making. 2.0 D E S I G N OF THE M O B I L E A G E N T
The (a) (b) (c)
computer network resources that are of interest are as follows: Bandwidth. Primary memory. Secondary memory.
797
International Conference on Advances in Engineering and Technology
(d) (e) (f) (g) (h)
Response time. Data files. Program files. Input device. Output device.
A modular architecture is proposed whereby each resource is administered by a subagent. The platform for the take off of the mobile agent at the source computer and the platform for its landing at the target computer are the source host operating system and target host operating system respectively. An interface is developed to aid the launching of the mobile agent while it migrates with another interface which causes it to be broken into its component parts (subagents) at the target. Each subagent interacts with the host operating system of the target and its appendages or utility programs for network monitor and cyber clock for the purpose of assessing and evaluating the resources that are available. At the source, the mobile agent is decomposed into its constituent parts (subagents). The results obtained by all the subagents after a successful visit to a set of target workstations are assembled for the purpose of reporting them for external analysis, interpretation, policy formulation and decision making by the Network Administrator. 2.1 Bandwidth Management The used bandwidth in the computer network environment denoted by 'B' is evaluated by:
m mbiy 8-ZZ i=1 j=l
t]
where, bij represents the bandwidth used in transmitting jth packet in ith workstation and tj is the transmission time of jth packet. The percentage of the used bandwidth (Bu) to the bandwidth subscribed (Bs) to by the computer network environment denoted by 'P' is evaluated as:
P - 1 0 0 B.
B,
The Unified Modeling Language (UML) specification of the logic of bandwidth management is presented in Figure 2.2 while its Z Schema is presented as follows: 2.2 Z Schema of Bandwidth Management
NumberofPacets?, NumberofWorkstations? 9~] PacketSize, TargetPacketSize? 9 TransmissionTime, TargetTime? 9 BandwidthUsed, BandwidthUsed' 9 i,j, m, n "~]
798
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g a n d T e c h n o l o g y
BandwidthSubcribed, PercentageOfUsedBandwidth 9 BandwidthUsed ,--- 0 n +-- NumberofWorkstations? m ~-- NumberofPackets? Vi, 1 < i < n /* loop over workstations or targets */ Begin Vj, 1 < j < m /* loop over packets transmitted in ith workstation*/ Begin PacketSize ,-TargetPacketSize? TransmissionTime +-- TargetTime? BandwidthUsed' ~- BandwidthUsed + (PacketSize / TransmissionTime) DisplayData (PacketSize, TransmissionTime, BandwidthUsed) End End BandwidthPercentage ~-- (BandwidthUsed' / BandwidthSubcribed) * 100 DisplayData (BandwidthUsed', BandwidthSubcribed, BandwidthPercentage) Packet
VodBandw--~ Request
,= v
Number of targets, Target identity
Transmission Time Module Facket t: 'ansmitt, ,'dPacket q ize
Bandwidth Module
Generate Report
Used Bandwidth, %of Bandwidth ,.. Used p,-
Transmi: ;sion time y
mdwidth valuator. Display t 'acket transmitted, P tcket Size, Transmi~ ;sion Time, Bandwid :h Used, Bandwidth Subscribed, Percent; tge of Used Band, vidth Fig. l - UML Diagram for Bandwidth Management
2.3 Primary and Secondary Memory Management The primary or secondary memory of a target computer used over a period of time 't' denoted by 'R' is evaluated by: m
m
R-ZZ, i=l j : l
799
International Conference on Advances in Engineering and Technology
where ri,j represents the primary or secondary memory space used by the jth packet in the ith workstation. The percentage of the used memory denoted by 'Pr' is evaluated as:
P~ -100 R" Rw where 'Rw' represents the memory size of the target computer and 'R.' represents the memory size used by the packets. The Z Schema of memory management is presented as:
2.4 Z Schema of Memory Management NumberofPackets?, NumberofWorkstations? 9~] PacketSize, TargetPacketSize? 9 TimeFrame, TargetPeriodofMemoryusage? 9 Memoryused, Memoryused' : i,j, m, n "~] SubcribedMemory" PercentageMemoryused 9 TotalMemoryused ~-- 0 n +-- NumberofWorkstations? m +-- NumberofPackets? '7'i, 1 _
m tij
"=j=l
r
where, ti,,j = qi,j + gij + Pi,j + di,j represents the packet delays used for the jth packet in the ith workstation and 'r' is the number of packets. qi,j = Queuing delay time.
800
International Conference on Advances in Engineering and Technology
gij = Packet propagation time. Pij = Processing delay time. di:i = Destination delay time. The percentage of the response time denoted by 'PRy' is evaluated as"
PRr - 1 O0 Tc
rE
PRT = 1 0 0 TC /
TE
The Z Schema of response time management is presented as follows:
2.6 Z Schema of Response Time Management NumberofPackets?, NumberofWorkstations?
9
QueuingDelay?, Propagationdelay? 9 Processingdela 9 DestinationDelay? i,j, m, n : ~
y"
ResponseTime, SumofWaitingandDelay, SumofWaitingandDelay' SumofWaitingandDelay ~-- 0 n ~- NumberofWorkstations? m ~- NumberofPackets? Vi, 1 < i < n /* loop over workstations */ Begin
9
Vj, 1 < j < n /* loop over packets transmitted in the ith workstation */ Begin Pi,j *-- Processingdelay? 9 qi,j +-- QueuingDelay? gi,j ~-- Propagationdelay? 9 di,j ~-- DestinationDelay? SumofWaitingandDelay' ,-- SumofWaitingandDelay + (qi,j + gij + P~j + dij) DisplayData (qi,j, gij, Pi,j, d~,j ) End ResponseTime +-- SumofWaitingandDelay' / m DisplayData (SumofWaitingandDelay', ResponseTime) End DisplayData (ResponseTime,)
2.7 File Management The evaluator gets into the files of the target operating system to collect relevant information desirable for estimating files in use in the target workstation. File evaluator generates report on the filename(s), directory/folder name, file size, count of the number and category of files available. The Z Schema of file management is presented below. 2.8 Z Schema of File Management Target_id?, NumberofWorkstations?, NumberofFile?
9~]
801
International Conference
on Advances
in E n g i n e e r i n g
and Technology
DriveLabel 9C H A R DirectoryName,FileName 9seq C H A R ReportDriveLabel!, ReportDirectoryName!" W I N D O W ReportFileName!, ReportFileCount! 9W I N D O W i, j, m, n, count, TotalFileCount 9 n ~-- NumberofWorkstations? 9 m ~ NumberofFile? 9count ~-- 0 '7'i, 1 < i < n
/* loop over workstations */
Begin V j , 1 < j < m /* loop over files (packets) transmitted */ Begin ReportDriveLabel! *-- DISPLAY (DriveLabel) ReportDirectoryName! ~-- DISPLAY (DirectoryName) ReportFileName! ~-- DISPLAY (FileName) count' ~-- count + 1 Va, b
9FileEvaluator 9
a r b =:~ a. DriveLabel 4=b. DriveLabel Vc, d
9FileEvaluator 9
Ve, f
9FileEvaluator ~
c r d :=~ c. DirectoryName -r d. DirectoryName e r f =:~ e. DataFileName 4= f.DataFileName End TotalFileCount' ~-- TotalFileCount + count' End ReportFileCount! ~-- DISPLAY (TotalFileCount')
2.9 Input-Output (I-O) Device Management The evaluator gets into the files o f the target operating system to collect relevant information desirable for estimating I-O device in use in the target workstation. I-O device evaluator generates report on device name, device makes, device identity. The Z Schema of I-O device management is presented as:
2.10 Z Schema of I-O Device Management T a r g e t i d ? , NumberofWorkstations? 9 I-ODeviceName, I-ODeviceMake, I-ODeviceIdentity 9seq C H A R ReportDeviceName!, ReportDeviceMake!, ReportDeviceIdentity! 9W I N D O W
i,n"
~]
n ~-- NumberofWorkstations? 'V'i, 1 < i < n
/* loop over workstations */
Begin ReportDeviceName! ~
802
DISPLAY (I-ODeviceName)
International Conference on Advances in Engineering and Technology
ReportDeviceMake! +- DISPLAY (I-ODeviceMake) ReportDeviceldentity! +-- DISPLAY (I-ODeviceIdentity) V a , b : I-ODeviceEvaluator 9 a :? b ==> a. I-ODeviceName r b.I-ODeviceName Vc, d : I-ODeviceEvaluator ~ c 4=d =:> c. I-ODeviceMake r d.I-ODeviceMake Ve, f
: I-ODeviceEvaluator 9 e ODeviceldentity
:P f
=:>
e.
I-ODeviceldentity
4= f.I-
End 3.0 IMPLEMENTATION TECHNIQUE OF THE MOBILE AGENT A mobile agent is an artificial life which is capable of birth (creation), survival (launching) and death (disposed). The processes of birth, survival and death are characterized by a sequence of logical steps called the mobile agent life cycle are: presented in Figure 3.1. The mobile agent life cycle involves the following series of: (a) Create the Mobile Agent. (b) De-assemble the Mobile Agent. (c) Serialize and En-queue Sub-Agent. (d) Migrate to target computers. (e) Execute sub-agent. (f) Relay (Collaborate with the next Sub-Agent). (g) De-serialize and Assemble Sub-Agents Results. (h) Dispose the Mobile Agent.
The detail design of each of the processes involved in the mobile agent life cycle has been presented in [Imianvan 2006]. The Z Schema for the creation of a mobile agent as a typical example is given as: 3.1 Z Schema of Mobile Agent Creation Agent? 9agent_type Sub_Agent" array[ 1..n] of agent_type Target_Machine_id 9array[ 1..m] of STRING /* target computer identity */ i, k 91..n /* i, k are loop variable, n is maximum number of subagents */ j 91..m /* j is a loop variable, m is maximum number of computers */ CREATE(Agent?) DIS-ASSEMBLE (Agent?, SubAgent, n) VI, 1
{ SERIALIZE And_EN-QUEUE (Sub_Agent(i)) V j, l___j<_m
{
803
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g a n d T e c h n o l o g y
MIGRATE(Target_Machine_id(j), Sub_Agent( i )) EXECUYE(Yarget_Machine_id(j), Sub_Agent( i ))
} RELAY(Sub_Agent( i ))
} Vk,
1
{ DESERIALIZE_And_ASSEMBLE (Sub_Agent(k), n, Agent) ReportAgentData! = display(Sub_Agent(k), data) /* output agent results */
} The sensitive nature of the mobile agent system necessitates the need to incorporate in the design, a mechanism that can prevent unauthorized usage of the system. Consequently, it is essential to encode the Network Administrator's name and password as part of the design process. The Z Schema of the encoding of the Network Administrator's name and password for the mobile agent system is presented as follows:
3.2 Z Schema of Encoding the Network Administrator's Logon Name? 9 seq CHAR Password? 9 seq CHAR ~,k" Concat_Name/, Concat_Name, Encoded_Name 9 STRING Concat_Password / Concat Password, Encoded Password" STRING a, b, x, y 9STRING Concat Name =' ' WINDOW(Name?) Vi, 1 <_ i <_LEN(Name?) { /* Begin of encoding network administrator's name. */ Concat Name/= Concat_Name + ASC(LEFT$(Name?, i))} Encoded Name = Concat Name' V a, b ~ Encoded_Name" a # b =--) a.Encode_Logon # b.Encode_Logon Concat Password = ' ' WINDOW(Password?) Vk, 1 _< k _
} Encoded Password = Concat Password' V x, y s Encoded_Password 9x r y =") x.Encode_Logon r
y.Encode_Logon
The operation of the mobile agent system is activated on sensing the name and password of the Network Administrator. The verification and validation of the identity of the Network Administrator involves decoding of the name and password. The Z Schema of the mobile
804
International Conference on Advances in Engineering and Technology
agent logon verification and validation process is presented as follows: 3.3 Z Schema of Logon Verification and Validation E Encode Logon Admin Name?, Admin Password 9 STRING Name2, Name2/, Password2, Password2 / 9seq CHAR
iN~ Decoded_Name, Decoded_Password" STRING Vi, 1 <_ i _
805
International Conference on Advances in Engineering and Technology
and available data and program files. Modem design tools (UML, Z-notations, and Petri Nets) were employed in the design process. A Divide-And-Conquer Novelty Relay technique is designed and used to facilitate intelligent hand shaking of the subagents as each accomplishes its task. In a novelty relay, members split themselves out, queue up, run or move, handover to other team members until game is over or victory is won. The implementation technique of the mobile agent also involves de-assembling, serialization, queuing, and migration to the target machine, handing over, de-serialization and assembling of the subagents results. A technique for the implementation of the mobile agent system in an environment characterized by Micro Soft Windows NT operating system, SQL Server and Visual BASIC application language has been proposed. The authors have successfully developed a prototype of the mobile agent and are, currently, carrying out the case study of some computer networks in the University of Benin, Benin, Nigeria and Kigali Institute of Science Technology and Management, Rwanda. The results obtained from the experimental study shall form the basis for a future paper. REFERENCES
Aderounmu G.A. (2001), "Development of an intelligent Mobile Agent for Computer Network Performance Management", PhD Thesis, Obafemi Awolowo University, Ile-lfe. Nigeria. Akinyokun O.C. (1997), "Catching and Using the Virus", The Journal of the Institute of the Management of Information Systems (IMIS), London, United Kingdom, Vol. 7, No. 6, Pps 12-17. Allan L. and Karen, F. (1993). "Network Management: A Practical Perspective". Addison Wesley. Bowen J.P., Hinchey M.G. and Till D. (1997). "ZUM'97: The Z Formal Specification Notation, Springer- Verlag, LNCS 1212. Bruce P. D. (1998). "Real-Time UML: Developing Efficient Object for Embedded Systems". Addison Wesley. Cassel L.N., Patridge C. and Westcott, J. (1989). "Network Management Architecture and Protocols: Problems and Approaches". IEEE Journal on Selected Areas in Communications, VoL 7, No. 7, pp. 1104-1114. Dipanjan C. and Harry C. (1999), "Service Discovery in The Future Mobile Commerce", A CM Crossroads, Vol. 7(2), pages 18 - 24, November 1999. Dipanjan C. and Hui L, (2004). "Extending the Reach of Business Processes", IEEE Computer. Dipanjan C. and Hui L. (2004), "Pervasive Enablement of Business Processes", Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04), IEEE Computer Society Washington, DC, USA
806
International Conference on Advances in Engineering and Technology
Faser M.D., Kumar K. and Vaishnavi V.K., (1994). "Strategies for Incorporating Formal Specifications in Software Development". Communications o f ACM, 37(10): 74-86. German G., Thomas J. Y. (1998), "Delegated Agents for Network Management". IEEE Communication Magazine pp. 66- 70. Harry C., Tim F. and Anupam J. (2000), "Service Discovery in the Future Electronic Market", Workshop on Knowledge-based Electronic Markets, AAAI-2000, Austin, Texas, USA. HmTy C. and Tim F. (2002), "Beyond Distributed AI, Agent Teamwork in Ubiquitous Computing", Workshop on Ubiquitous Agents on Embedded Wearable, and Mobile Devices, AAMAS-2002, Bologna, Italy. Imianvan A.A.. (2006). "Development of a Mobile Agent for Assessing and Evaluating Computer Network Resources" Unpublished PhD Thesis, Federal University of Technology, Akure, Nigeria. Jean-Chrysostome B. (1993), "Characterizing End-to-End Delay and Packet Loss in the Intemet", Journal of High-Speed Networks. Available at." http ://ils. unc. edu/demps ey/18 6s OO/bolo t.pdf Jeffrey L U. and Anupam J. (2003), "Data Mining, Semantics and Intrusion Detection: What to Dig for and Where to Find it", Next Generation Data Mining. Available at." ....
~;~
,'. o ~ .
.
.
.
~,......
~.,.,
.... ,
~
.
,
.
,.
Jonathan B., David K. and Daniela R., (1999), "Economic Markets as a Means of Open mobile-agents system". Proceedings of the Workshop on Mobile Agents in the Context of Competition and Cooperation (MAC 3) - Autonomous Agents '99, pp. 43-49. Krishnan L. and Zimmer W. (1991). "Intelligent Network Management" Elsevier Science Publishers, B. V. IFIP. Lai K. and Baker M. (2000), "Measuring Link Bandwidths using a Deterministic Model of Packet Delay". A CM SIGCOMM Computer Communication Review. Lalana K., Timothy F. and Yun P. (2001), "A Delegation Based Model for Distributed Trust", Workshop on Autonomy, Delegation, and Control." Interacting with Autonomous Agents, International Joint Conferences on Artificial Intelligence, pages 1 - 8. Maes, P. (1994), "Agents that Reduce Work Load and Information Overload". Communications of the A CM,, 3 7(7). 31 - 40. Marshall R. (1994). "The Simple Book: An Introduction to Network Management". Second Edition, Prentice - Hall, Eaglewood Cliffs, New Jersey. Meilir P., (2000). "Fundamentals of Object-Oriented Design in UML", Addison Wesley. Olga V. (2003), "eNcentive: A Framework for Intelligent Marketing in Mobile Peer-To-Peer Environments", The 5th International Conference on Electronic Commerce (ICEC 2003). Sasikanth A., Jeffrey U., Anupam J. and John P. (2004), "Security for Sensor Networks", Wireless Sensor Networks, Kluwer Academic Publishers Norwell, MA, USA, Pages." 253 - 275. Spivey J. M. (1998). "The Z notation: A Reference Manual", Prentice Hall International, United Kingdom.
807
International Conference on Advances in Engineering and Technology
Steels L. (1990), "Cooperation between Distributed Agents through Self Organization". In Demaneau, Y. and Muller J. P. editors Decentralized A I - Proceedings of the First European Workshop on Modeling Autonomous Agents in a Multi-Agents Worlds (MAAMA W 89), pp 175 -176. Elsevier Science Publisher B. V. Amsterdam. Troy Bennette (1998), "Network Performance Evaluation Throughput", Computer Science Department, School of Engineering, California Polytechnic State University, California, pages 1-20. Available at." http://www.ee.calpoly.edu/3comproject/senior-projects/BennettTroy.pdf Youyong Zou, Tin Finin, Li Ding, Harry Chen, Rong Pan, (2003), "Using Semantic Web Technology in Multi-Agent systems: A Case Study in the TAGA Trading Agent Environment", Proceeding of the 5th International Conference on Electronic Commerce, pages 1- 7.. Saito H. and Chusho T. (2002), "Design and Implementation of a Network Performance Evaluation System Through Client Observation". Meo'i University, Japan. Troy B. (1998), "Network Performance Evaluation Throughput", Computer Science Department, School of Engineering, California Polytechnic State University, California.
808
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
GIS M O D E L L I N G FOR SOLID WASTE DISPOSAL SITE SELECTION L. Aribo, Department of Meteorology, Ministry of Water, Lands and Environment, P.O. Box
3 Entebbe, Uganda; tel.'+256-712832926/(41)320910. J. Looijen, Department of Natural Resources, ITC, Enschede, The Netherlands B. Toxopeus, Department of Natural Resources, ITC, Enschede, The Netherlands
ABSTRACT
The importance of Remote Sensing (RS) data as an input in Geographic information system) GIS is evidenced in this case study for Chinchina City (Colombia) in 2004, to identify sites suitable for solid waste disposal. Multiple maps were analysed using boolean logic models based on seven criteria set by a team comprising of a spatial ecologist, hydrologist, geologist, geomorphologist, an engineer and GIS/RS specialist, after a field work of one month. The maps were then combined using boolean AND operation to produce possible suitable sites for solid waste disposal. Three suitable sites with sufficient area of at least lhactare were then located after area numbering (assigning each area a unique number) during the GIS modelling process. This case study contains basic principles which are applicable in solving similar problems elsewhere and other applications such as selecting suitable sites for construction works. Keywords: Remote sensing; Geographical Information system; Boolean logic; Modeling; Area numbering; Solid waste; Construction.
1.0 INTRODUCTION With advances in technology, remote sensing (RS) and Geographic information system (GIS) have become important tools for managing environmental problems such as site selection for solid waste disposal. Increasing population, urbanisation, human activities and change in lifestyle have resulted in changing amounts and composition of solid waste (food remains, polythene, green plants, metal cans, chemicals etc) in most parts of the world. In developing African countries due to inadequate resources or insufficient infrastructure, solid waste is not properly disposed of and some are not collected, resulting in contamination of land and water resources, environmental pollution and diseases. The wastes attract pests and vectors of pathogens of epidemic diseases (rats transmitting plague, flies transmitting diarrhoea and trachoma).
809
International C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
Uncollected waste obstructs storm surface runoff and blocks drainage systems generating stagnant water bodies and swampy areas that are favourable breeding places for mosquitoes hence enhancing malaria and yellow fever transmission in the tropics and subtropics. Surface and ground water is contaminated through direct dumping of waste and leaching of decomposed waste from near by sources. All the above impact on health, agricultural and industrial production, aquatic flora and fisheries hence socio-economic development. Having had insight into afore mentioned environmental impacts of solid waste, there is need to develop appropriate techniques to carefully select sites suitable for waste disposal as a safe guard. The general aim of this research was to promote clean and healthy environment through advances in technology; and specifically to investigate most optimal site(s) suitable for solid waste disposal. The pertinent research question is then: where is/are the most optimal site(s) suitable for solid waste disposal in the research area? 2.0 M E T H O D O L O G Y 2.1 Study Area and Data The data set used for the GIS modeling to tackle the above problem of site selection for solid waste, for Chinchina municipality study area located in central cordilleta of Andes (South America) included; slope map (raster with units of degrees), land use map (raster: classified multispectral SPOT image), land slide distribution map (raster with classes: stable, dormant & active), map of Chinchina city centre (raster), table of borehole data (locations, litho-logy, thickness, of overburden, clay percentage and permeability) and map of major roads in the study area (vector). 2.2 (i) (ii) (iii) (iv) (v)
Criteria for Waste Disposal Site Selection An area of active or dormant land slide. A terrain of slope less than 20% Not in areas of important economic or ecological importance Within 2km from the city centre but further than 300m from any built up area Constructed on clay rich soils of 5 meters minimum thickness, and permeability of less than 0.05 meters/day (vi) Area of at least 1 hectare (10000m 2) (vii) Accessibility by road.
2.3 Methods After a field work of one month, the input maps were prepared by the team. With Boolean logic model (Bonham-carter, (1994)), using conditional IFF function, a value of 1 (suitable)
810
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g a n d T e c h n o l o g y
or 0 (unsuitable) was assigned to each pixel, to derive binary pattern maps from the input raster maps based on the set criteria. Distances were calculated from the source maps before creating buffers around Chinchina City centre and the built up areas for criterion'd'. The individual suitability maps were combined using the Boolean AND function (ITC-ILWIS, May 2001, Westen, 2004), intersecting the binary pattern bitmaps to select most suitable sites for waste disposal. After area numbering (assigning each area a unique number), by map calculation suitable sites with sufficient area of at least lhectare (to allow the site to be used for a longer period) were delineated. Overlaying of the vector road map was then performed to make final decision for selecting most optimal sites suitable for solid waste disposal. 3.0 RESULTS AND DISCUSSION The following map shows the locations of the areas identified most suitable for solid waste disposal in Chinchina Municipality. .,,,,,:
::,:,,:,:,:,,,
:,,
::
::::::
:
:
:
:Suitable site for :Waste diis~sal Based on ~ o l e a n logic Model 104800~
Legend
i 047DC 9
St: S,a~a=.ble
Been~ta~s ~08d
t 04~
N t 04~
i042000
82~
829000 830000 83-~-~ . 8 : ~ 0
.B33000 8 3 4 ~
8~000 836000
. . . . . . - ; - ~ :2500
Producer: Adtx~ L
1 59940
UTM Map projection and WGS84 datum Fig.l: Map showing the three most optimal sites suitable for solid waste disposal (of at least 1 ha in area). These three sites (encircled) satisfied all the seven conditions stipulated in the selection criteria. 4.0 CONCLUSION AND R E C O M M E N D A T I O N This study reveals the support of Geo-Information and communication technology in as a tool for decision making and for addressing environmental problems that can impact on
811
International Conference on A d v a n c e s in Engineering and T e c h n o l o g y
health and socio-economic development. It is therefore recommended that this approach be adopted by developing countries to save the environment for the future. REFERENCES
Born-carter F.G. (1994): Geographic information systems for geoscientists; modeling with GIS. Computer Methods in the Geosciences, 13 :pp267-302. Pergamon Press.. ITC-ILWIS (May, 2001): ILWIS3.0 academic users' guide, PP.287-291,335, 379-380. ITC, Enschede, Netherlands. van Westen.C.J (2004): Tools for map analysis. ILWIS application guide, chapter 18,pp 219. ITC, Enschede, Netherlands.
812
International Conference on Advances in Engineering and Technology
AN ANALYSIS OF FACTORS AFFECTING THE PROJECTION OF AN ELLIPSOID (SPHEROID) ONTO A PLANE Mukiibi-Katende M.W., PhD, Head, Department of Surveying, Faculty of Technology,
Makerere University, Kampala- Uganda ABSTRACT
The choice of a projection depends on many factors, which can be divided into three groups. The first group consists of factors which characterise the geographical location of the territory to be projected, its size and form. The second group is associated with factors characterising not only the type of the map to be made, but also its use. In this group one can talk about the purpose of the map, its specialisation, the scale contents on it, the problems that will be solved using the map and its precision. The third group is composed of factors, which characterise the projection itself, the character of distortion, the condition leading to the attainment of minimum distortions and the maximum allowable linear, angular and area misclosures, the curvature depicting the geodetic lines i.e. loxodrome and the stereographic level of the projection. The choice of a cartographic projection is carried out in two stages, namely (i) The establishment of the characteristics of different projections from which the choice will be made. (ii) The choice, itself, of the desired projection. Key words: Map projections, geographical location, scale of map, map precision, misclosure, curvature, geodetic line, cartography, loxodrome.
1.0 M E T H O D O L O G Y
For many maps the choice is the cylindrical projection if the territory to be mapped lies astride or symmetrical to the equator. Conical projections are often used if the territory to be mapped is asymmetrical to the equator or if it is situated in mid-latitudes. Likewise azimuthal projections will be chosen not only for territories near the poles but also for territories circular in form.
813
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g a n d T e c h n o l o g y
The second group of factors is the basic one when solving the problem before us. Proceeding from the requirements and conditions of this group we try to determine the relative significance of each of these factors in solving the concrete problems involved in choosing a projection. As a means for solving the above problem we shall try to formulate a criteria for the evaluation of the various cartographic projections. 2.0 M O D E L L I N G On the basis of Bugaevsky's work in determining the best projection the following common criteria (8) is used.
e2={p1
+P2(aebe-1)2+ P3[(ae-1)2+(b-1)2]+P41kmcmax-1
-1
S/AdLma-1
+
(1)
nt-P6~,( Z~mma z~vmx 1/2+PT(2ax- 1)2+P8(CT@ax-1)2 9(~ax PAu /--1/2}+Plo(~,~ATma A~"x- 1/2 }{1/__~t _
+p~
where, P1, P2 ,..., Plo - the required significant weights. a e, b e - extremal relative linear scales
ae --1
,(aeb e -1)2,-~
-
+
-
] - values forangular, areaand linear
distortions respectively. kmc - mean curvature for the projection of a geodetic line in the meridians and parallels. Ad L
- deflection of the loxodrome from a straight line in the projection.
Akm - k m - kmj - difference
between curvature of the meridian in
the j-point of the projection and the given value
A k p - kp - kpj
- difference
(kin)
between the curvature of the parallel in the j-point of the
projection and the given value. C r - value characterising the stereographic nature of the projection. As - s
- s
- difference between the deflection angle in the j point of the projection
and the given value A y - y - 7"g - difference between the convergence of the meridians at the point of projection and the given value.
814
International Conference on Advances in Engineering and Technology
i
900
anglesbetween projecting meridians and parallels on the projection
In computing the above values the following formulae are used, eqn. (2): ad+bd
-
m 2+
h-
-
rt 2 ; a e b e
=
h ; c o s i _ f ; t a n T , _ Y _ y _ ~ ; f _ x , X ~ +y~y~; n~nsin/;sin/-~ /av /av X~
g ;,-
v=
(2)
where relative scales of the lines along the meridians and parallels
m~n
m-
~ +Y~
;n--
p
x +g~
r-Ncoso
r
where /9, r] principalradii of curvature of the spheroid in the meridian and the parallel respectively
Kmc
l+cosi/,+cosi / v )
1 sin (,o~ sin 2r 7T (Pg
-
-
m. sin 1
+
m.n sin ~Og
n<,
~ - m ~
M
t
7
mnM
-
Km ~ (x2 +y2# =
/1 =
i/a xlgg + v~o sec
=
AdL -- I(XjB--X/~ +IE (pj g v
(pj + Aq~;(,oj~
- - + gz
xj2yj2_ YJ -arctg xjlYJII-YJ -Xj (,oj + 2Acp;2
g. tan(45 ~
4 + t a n a [ / . v 1.vj
eg. tan(45 ~
2 a2 b 2 Acrjkm e =------2---;Cr =~;cr~a
v
[L \("~"y
2 ;sin~
esin(p;
j k - X j ) 2 + (YJ~ _yj)2 ]~ ;
ojk m
A O'jk -- (7"jk -- O'jm m ;
sin (,ok
sin Z cos Ak cos (,o; + cos Z sin (,oj
815
International C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
1 8
1 7
O'ik ; A O'ikm -- - ~ - ' A o - ~ ;
O'ik m -'- -~ Z k=l
Z,a, ak,A~o X r , Xr
7 k=l
- are given.
; X ~ , X ~z ; Y~o , Yr ; Y~ , Y~,~ ; m x , m ~o; ir ; 7"~o, y,~ ; g~o,gz; flz, v~o -respective par-
ticular derivatives The above proposed criteria takes into account the majority of the possible requirements for the choice of the best projection and makes it possible to compare and at the same time get rid of some very absurd requirements for the choice of the best projection. Having said that the best projection will be that one in which the least value of one of the following functions is adopted in all parts of the projected area E2 - _ 1 I g 2 d s Ss where, g2 be chosen.
-
the chosen general criteria needed for the assessment of the projection to
In order for the function E 2 to be determined it is necessary to divide the territory to be projected into smaller parts in whose central points the criteria oc'2 i s computed. Then we can find;
n i=1 After carrying out a comparative analysis of the cartographic projections with the help of the chosen criteria we select the desired projection. 3.0 RESULTS AND CONCLUSIONS
In connection with what is discussed above, let us, as an example, analyse the Cassini projection, which was once in use in Uganda. The Cassini is a transverse, cylindrical equidistant projection. As a result the graticules are transversed through 90 ~ The Equator therefore is replaced by a central meridian, distances along which are correct.
816
I n t e r n a t i o n a l C o n f e r e n c e o n A d v a n c e s in E n g i n e e r i n g a n d T e c h n o l o g y
P
P T S Q P ' - central meridian P U V P ' - meridian through U EQVE' Equator ETUE' - great circle W S U Z - parallel through U -
O UV=QS-
- centre of the earth true Northings of U.
If P T S Q P ' is the chosen central meridian then point U, on the globe, is defined with respect to the great circle E T U E ' , which passes through U and is perpendicular to the central meridian at T. The distances Q T = X and T U = Y are plotted as rectangular co-ordinates. Let: (1) arcs Q T and T U subtend angles ~ and ~ at the centre of the Earth 0 respectively. Then X = R. 6~, a n d Y = R . [# J
(2) (~),)~) - be the geographical co-ordinates of point U relative to the central meridian. Then by spherical trigonometry it can be shown that s i n / 3 - sin 2 cos (/9
X - R tan -1 (sec ;L tan (,o) and
tan ~z - sec 2 tan (,o
Y - R sin -~ (sin 2, cos (,o)
As a result the Y- grid lines should be free from scale errors since any point is plotted at its true arc distance from the central meridian. On the other hand the X grid lines have an increasingly large scale factor the further one goes away from the central meridian. This is so because Sec)~ becomes bigger as )~ gets bigger. Consequently there will be proportionately large linear distortions. Because of large linear and angular distortions the Cassini projection was dropped in favour of the Universal Transverse Mercator Projection(U.T.M). The U.T.M is an International system applicable to all parts of the world within the given lati-
817
International Conference on Advances in Engineering and Technology
tudes. The system however is slightly modified to suit Uganda's needs. It is also similar to the Transverse Mercator (T.M) with the following exceptions. (i) Spheroid used (Clarke, (1880)) (ii) Origin Equator and centyal Meridian for each O u u O zone (3 , 9 , 15 , ...,33 ) (iii) Zone width 6o (iv) Unit of measure Metre (v) False origin Northings, 0 metres (10,000,000m in Southem Hemisphere) Lastings 500,000 metres west (vi) Scale Factor at central Meridian - 0.9996 (vii) Latitude limits of the system - 80 0 N, 80 0 S O O (viii) Zone Numbering starting from l (at 180 - 174 West) to 60 (at 174 U - 180UEast) (ix) Zone designation: this is indicated as follows: Each zone is 6~ by 4~ The belts measured from the equator north are prefixed NA, NB, NC etc and from the equator south are prefixed SA, SB, SC etc. The designations for width of 6 ~ longitude are suffixed with the numbers 1 - 60. The zones from 30 east to 36 east are suffixed with the number 36. Most of Uganda falls in zone NA 36.
Longitude NB 35
Longitude NB NB 36
22
NA
35
NA
NA
37
SA
35
SA
36
SA
37
SB
35
SB
36
Latitude 4~ 36
Equator Latitude 4~ South 30 oEast
SB 37 36 oEast
REFERENCES Adams Oscar S. 'Latitude Developments connected with geodesy and cartography' Washington, 1949. Annual Reports 1972, 1974 Department of Lands and Surveys, Kampala, Uganda Ashkenazi V. 'National and Continental Geodetic control networks, XV International Congress of Surveyors'. Stockholm, Sweden, 1977. Australian Surveyor, Volume 28, No. 4 December 1976.
818
International C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
Brown C, Girdler R.W. 'A gravity Traverse across northern Africa'; Journal of Geophysical Research, Volume 85, 1980. Bugaevsky L.M. 'Evaluation creteria in the selection of cartographic projections.Izvestia Vuzov 'Geodesia i Aerophotosiomka, no 3, 1982 Iraas T. F. 'Gravimmetric Data Management; Bulletin of Geodesy no. 56,1982. Measurement of an Arc of Meridian in Uganda,Volume 1, London, 1912. National Reports on Geodesy, 1971 - 1974; XVI General Assembly of the IUIG, Grenoble,1975. Oberson G. 'Geodetic Mapping of Africa'; Boll di Geodesy, nos. 2 and 3 1978. Uganda Survey manual, Volumes I and 11, 1967; Printed by the Department of Lands and Surveying, Entebbe, Uganda.
819
International Conference on Advances in Engineering and Technology
SOLAR BATTERY CHARGING STATIONS FOR RURAL ELECTRIFICATION: THE CASE OF UZI ISLAND IN ZANZIBAR J. Kihedu, Department of Energy Engineering, Faculty of Mechanical and Chemical Engi-
neering, University of Dar es Salaam, Tanzania. C. Z. M. Kimambo, Department of Energy Engineering, Faculty of Mechanical and Chemical Engineering, University of Dar es Salaam, Tanzania. ABSTRACT
Battery Charging Systems (BCS) provide an alternative means of rural electrification in remote rural communities that are not connected to power utility grid and cannot afford Solar Home Systems (SHS). Unlike the expensive SHS, which can not be afforded by the majority households of rural people in many developing countries the costs involved in BCS, on the part of the user, are those of the battery and the charging costs. Solar Battery Charging Stations (SBCS) utilize solar photovoltaic generated power as the source of battery charging energy. This option reduces the charging cost, provides a more environmentally sound source of energy and therefore is more effective in remote rural applications. Battery charging stations can also be powered by grid power, diesel generators, through an ordinary solar home system or other electrical sources. This paper analyses a project for electrification of fifty houses in Uzi Island in Zanzibar. Ten solar photovoltaic panels have been installed, each with the capacity of 120 Watts and powering a SBCS for five households. Each of the households is served by one battery of capacity 50 to 75 Ah. The system is designed such that the houses being served by the SBCS are close together in order to minimize battery carrying distances between the SBCS and the houses. The project was implemented following a detailed feasibility study that covered technical, environmental and socio-economical aspects of the project. The project is run by the villagers and each user pays battery-charging fees to cover costs for operation, maintenance, replacements and expansion of the project to include other villagers. The Uzi Island project has improved access to modem information systems in the island, education and replacement of kerosene lamps. The project has proved that SBCS can adequately meet the energy needs of the people in the island. The project has shown that SBCS can be decentralized, are cost-effective and technically viable in supporting electrification of low-income rural population. It has also shown that SBCS have to be simple in operation, reliable, able to charge batteries fully in a day, capable of serving the required number of customers daily and require low operating, maintenance and replacement costs. Economical analysis favours SBCS in comparison to other available options under situations, which are typical for the rural areas in developing countries. Experiences drawn from Uzi Island project suggest that solar battery charging stations are an effective and affordable means to electrifying low-income people in remote rural areas of Tanzania.
820
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g and T e c h n o l o g y
Keywords:
photovoltaic; battery; charging; rural; electrification.
Abbreviations" BCS SBCS SHS DGBCS GBCS LVD PV DTP TASEA ZASEA UziESCO
Battery Charging Systems Solar Battery Charging Stations Solar Home Systems Diesel Generator Battery Charging Station Grid Battery Charging Station Low Voltage Disconnect Photovoltaic Deutsch-Tansanische Partnersschaft e.V. Tanzania Solar Energy Association Solar Energy Association Uzi Electricity Supply Committee
1.0 INTRODUCTION
1.1 Solar Battery Charging Stations Photovoltaic conversion is a process whereby sunlight is converted into electricity through a devise commonly known as the solar cell. A photovoltaic cell is a non-mechanical device usually made from silicon alloys. Sunlight is composed of photons, or particles of solar energy. These photons contain various amounts of energy corresponding to the different wavelengths of the solar spectrum. When a photon strikes a photovoltaic cell, it may be reflected, pass right through, or be absorbed; depending on the amount energy it contains. Only the absorbed photons provide energy to generate electricity. The electricity generated can then be used in various applications e.g. in Solar Home Systems (SHS) and Solar Battery Charging Stations (SBCS). SHS can either be stand-alone system or utility interactive with or without backup. Basically stand-alone systems include a solar panel that converts sunlight into DC power to charge a battery or set of batteries via a charge controller. The charge controller ensures that the battery is properly charged and discharged. DC appliances can be powered directly from the battery, but AC appliances require an inverter to convert the DC into AC power. As opposed to SHS, SBCS are primarily dedicated to charge batteries for more than one household. SBCS fall under two basic designs configurations; (a) Charging outlets come from a common busbar, each outlet having a charge controller to prevent battery over-charging and a blocking diode to prevent battery current reversal. (b) One array of modules dedicated to one charge outlet, making a charge controller and blocking diode optional.
821
I n t e r n a t i o n a l C o n f e r e n c e on A d v a n c e s in E n g i n e e r i n g a n d T e c h n o l o g y
In addition to SHS and SBCS, other options for battery charging are Diesel Generator Battery Charging Station (DGBCS) and Grid Battery Charging Station (GBCS). DGBCS usually comprises of diesel generator set, which powers a multiple outlet battery charger conversion of AC power from the generator to DC and regulation for battery charging. With GBCS number of batteries that can be charged. The systems are normally not limited by the capacity of the source i.e. the grid. However GBCS are often far from the communities they serve. This means that the users of the service incur high transport costs of the batteries to and from the charge service in addition to payment for the charging service. Many rural communities can afford to own a battery, which can meet their electrical energy demand. In developing countries like Tanzania, car batteries and in some cases used ones are used. Battery charging electrification can improve the life style of the people in such areas through provision of electrical power for lighting, radio, television, mobile telephone charging and other uses with low energy consumption. Availability and affordability of such energy services can also reduce the scale of rural-urban divide and hence migration. To meet the actual demand in its service, SBCS should be designed to charge batteries of various designs, types and capacities and protect them from damage in the course of battery charging. Also, technically unsophisticated SBCS, which depend on operator control in addition to electrical controls, are required for reliability of the rural SBCS services.
1.2 Description of Uzi Island and the Project Uzi Island is one of the two small islands surrounding Unguja Island, which is the larger of two main islands of Zanzibar. The Uzi Island is large enough to support economic lives of the people residing on it. The other small island is Tumbatu. Both these small islands are not connected to the national grid power supply. Connection of grid power to these islands requires submarine cables from Unguja Island, something that is not expected to be done in the foreseeable future. Uzi Island is in the South-West part of Unguja Island and it has a total of 3,200 inhabitants. There are two villages in the island, Uzi village with 2,200 inhabitants and Ng'ambwa village having 1000 inhabitants. The access to this island from Unguja Ukuu village, which is the border village on the Unguja Island side, is by a two and half kilometers stone built road, which is passable only during low tide i.e. only some hours of the day. Otherwise the alternative access is by ferryboat. The request to implement this project was made during the training workshop on 'How to Built a Solar Home System' that was conducted by two solar PV experts including one of the authors of this paper in March 2003, through the support of Deutsch-Tansanische Partnersschaff e.V. (DTP), and in collaboration with Department of Energy Zanzibar and Haile Selassie Secondary School. The Director of the Energy Department in Zanzibar requested the Chairperson of DTP to support the electrification of Uzi and Tumbatu Islands. In response to the request DTP requested a feasibility study to be conducted to determine the electricity demand, examine the technical feasibility, economic viability, social acceptability and environmental
822
International Conference on Advances in Engineering and Technology
implications of the various alternative energy solutions suitable for the island. The feasibility study, which was conducted by the authors, representing the Tanzania Solar Energy Association (TASEA) - an organization that was requested to conduct the study - paved the way to the implementation of the project, which is the subject of this paper. In addition to solar PV applications, other alternative energy solutions considered during project feasibility study include the extension of the national electricity grid, wind electricity generation systems, tidal and wave energy options. Grid extension to the island by a three kilometers marine cable from Unguja Ukuu Village on the main Unguja Island was considered. However high capital investment required was a pullback to the idea, although grid capacity to respond to varying loads, minimal standby losses, reliable power supply and capability to promote various households and public socio-economical activities were also among the factors that were considered. The wind energy option was limited by geographical location of the island, in addition to the high capital investment required. Tidal and wave energy options were dropped due to the fact that both technologies are not yet commercially mature. Solar energy and PV application in particular was favoured due to the abundance of the energy resource in the island, low running costs and proven technology typical for rural applications. In analyzing the various solar PV options, SHS were found to be not cost-effective compared to SBCS with based on the actual household energy demand of within the island. A design was made to electrify fifty households in the island through the SBCS. After having convinced DTP that the SBCS was the most viable option, DTP then agreed to finance the implementation of Project. TASEA provided technical support and services in undertaking the installation and commissioning of the systems. DTP covered the initial financing of the systems, except for the batteries, which are financed by the users. Members of Zanzibar Solar Energy Association (ZASEA) take part in installation and commissioning activities as a continuation of their practical training to foster the knowledge they acquired during the training workshop on 'How to Built a Solar Home System'. The tasks covered during the installation and commissioning of the systems were the following; (a) Electrical installation of ten SBCS each serving a total of five households. Each SBCS is powered by a 120 Wp solar PV module, fitted with a 10 A charge controller. (b) Electrical wiring of fifty 50 households, all fitted with one Low Voltage Disconnect (LVD) and thirteen among them with 250 W inverter. Each household is required to own and care for its battery. (c) Installation of each SBCS under custody of its operator who must be one of the five users served by the SBCS. Installation site for the SBCS is central to other four households and that average distance for battery carriage between a home and SBCS shall not exceed 300 meters. (d) Providing training of all the users on the basics of the installed systems; operators of the SBCS on battery charging; and electricians living in the island on relevant critical technical skills.
(e) Operationalising the financing scheme for the systems. All users contribute a monthly fee, currently standing at between TSh 2,000/= and TSh 3,000/= (US$ 1.6 to US$ 2.4)¹, to cover operating and investment costs and hence the expansion of the project services to other villagers.
(f) Setting up the organization of the project management system. A village electricity supply committee handles the responsibility of project operation; the Uzi Electricity Supply Committee (UziESCO) was formed.
(g) Providing business training to UziESCO. Members of UziESCO were trained on how to operate the project as a business venture. The training covered decision-making, some basics of financial management including book keeping, and how to recover the project investment, maintenance and replacement costs.

2.0 FEASIBILITY OF USING SBCS IN UZI ISLAND

2.1 Background Information
The focus of the project, and hence of this paper, is on domestic energy uses for lighting, radio and television on the island. These were previously covered by the use of dry non-rechargeable batteries, automotive batteries, portable solar PV lanterns (SOLUX I lamps) and petrol generator sets. Dry non-rechargeable batteries, sold within the village at a price of TSh. 100.00 (US$ 0.08) and TSh. 250.00 (US$ 0.20) per piece for size AA and D batteries respectively, have been used to power radios and torches. Three households on the island were using automotive batteries to power televisions, radios and lights (incandescent lamps). The batteries were charged at Unguja Ukuu village, which is two and a half kilometres away across the water, at a single charging fee of TSh. 500.00 (US$ 0.40). One household was using a petrol-powered generator set, consuming on average four litres of petrol (costing TSh. 1,210.00, equivalent to approximately US$ 1) in six hours of daily operation. A survey of the 50 users indicated that 88% of them require the use of at least three lamps and a radio. In addition, 41% of them use televisions; among these, 24% use AC-powered colour televisions and the rest use black-and-white DC-powered televisions. Mobile telephone usage among the 50 households was at a rate of 8%. The automotive batteries being used by households were in the range of 50 Ah to 70 Ah capacity. Such batteries were capable of providing sufficient power for the household for between 7 and 18 days after a single charging of the battery. Mobile telephone users had to travel to Unguja Ukuu to recharge their handsets. Uzi School runs a SOLUX I lamp renting station, which is powered by a 40 Wp PV module connected to 15 lanterns. The school collects a monthly fee of TSh. 2,000 (US$ 0.80) per lantern, which is spent on teaching materials and buying more lanterns. Through the monthly fees, Uzi School has been able to raise enough funds to purchase a second renting station.
¹ Using the current exchange rate of TSh. 1,250.00 to US$ 1.
2.2 Cost-Benefit Analysis of the Energy Alternatives
A cost-benefit analysis was conducted on the potential power supply alternatives for Uzi Island. The options that were considered are the SHS, SBCS, grid extension, wind energy, and tidal and wave energy. The following facts and figures were used for the Uzi SBCS system:
(a) Equipment acquisition, taxes and transport costs for the entire system of TSh. 20,834,500 (US$ 16,668).
(b) Installation, commissioning and training costs of TSh. 1,390,000 (US$ 1,112).
(c) Annual operating costs for the entire system of TSh. 213,000 (US$ 170).
Assuming a total investment cost of TSh. 22,224,500 (US$ 17,780) covering equipment acquisition, taxes, transport, installation, commissioning and training costs, the equivalent annual investment cost is TSh. 985,000 (US$ 788)². Also assuming the fixed operating cost mentioned above, i.e. TSh. 213,000 (US$ 170), the net annual cost is TSh. 1,197,500 (US$ 958). Based on the current monthly collection of TSh. 112,000 (US$ 90), which is equivalent to an annual collection of TSh. 1,344,000 (US$ 1,075), UziESCO will save around TSh. 146,200 (US$ 117) annually. Financial analyses were carried out for each of the energy alternatives. The results obtained are summarized in the basic multi-approach comparison of the energy alternatives presented in Table 1.

Table 1: Comparison of the Energy Alternatives

Comparison Factor | SHS | SBCS | Grid Extension | Wind | Tidal and Wave
Resource | Abundant | Abundant | Not reachable | Moderate | Abundant
Technology | Stable | Stable | Stable | Stable | Relatively New
Initial Costs | High | High | Extra High | Very High | Very High
Operation and Maintenance Costs | Very Low | Low | Moderate | Low | Moderate
Maintenance | Easy | Easy | Difficult | Easy | Difficult
Practicability | Practicable | Practicable | Practicable | Practicable | Marginal
Sustainability | Sustainable | Sustainable | Sustainable | Sustainable | Uncertain
Environmental Impact | Friendly | Friendly | Installation & Fault Cases | Sight Distortion, Noise | Disturbs Marine Life, Crowds Beach
Acceptability | Better | Moderate | Best | Better | Least
Efficiency | Moderate | Moderate | Sufficient | Sufficient | Least
Reliability | Reliable | Reliable | Reliable | Reliable | Least
Recommendation | Feasible | Feasible | Feasible | Feasible | Not Feasible

² In addition to the relevant costing, the figure takes into account the service life of the equipment as specified by the manufacturers.
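The annualised figures above follow from a standard capital-recovery calculation. The sketch below is illustrative only: the discount rate and single service life used here are assumed values, whereas the paper's own figure of TSh. 985,000 reflects the manufacturer-specified service lives of the individual components.

```python
# Illustrative annualised-cost calculation for the Uzi SBCS.
# The discount rate and service life below are ASSUMPTIONS, not the paper's inputs.
def capital_recovery_factor(rate: float, years: int) -> float:
    """Convert a present investment into an equivalent uniform annual cost."""
    if rate == 0:
        return 1.0 / years
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

investment_tsh = 22_224_500      # equipment, taxes, transport, installation, commissioning, training
operating_tsh = 213_000          # annual operating cost for the entire system
collections_tsh = 112_000 * 12   # current monthly user contributions, annualised

rate, life_years = 0.02, 30      # assumed discount rate and service life
annual_investment = capital_recovery_factor(rate, life_years) * investment_tsh
net_annual_cost = annual_investment + operating_tsh

print(f"Equivalent annual investment cost: TSh {annual_investment:,.0f}")
print(f"Net annual cost:                   TSh {net_annual_cost:,.0f}")
print(f"Annual surplus for UziESCO:        TSh {collections_tsh - net_annual_cost:,.0f}")
```

With these assumed parameters the result lands close to, but not exactly on, the paper's TSh. 985,000 and TSh. 146,200 figures; the exact values depend on the component-by-component service lives used in the original costing.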
2.3 Environmental Impact Assessment for the Project
Climate change is a global problem requiring action from the entire international community. Countries from around the world are working together to share technologies, experience, resources and talent to lower net greenhouse gas emissions and reduce the threat of global climate change. Countries also participate in various bilateral and multilateral technology cooperation initiatives that aim at encouraging the use of technologies that will reduce greenhouse gases and increase carbon sinks. With this consideration, an Environmental Impact Assessment (EIA) was part and parcel of the feasibility study. SBCS applications have distinct advantages, including being relatively environmentally friendly: they produce no smoke emissions or greenhouse gases, do not cause acid rain, and do not contribute to deforestation and land degradation. SBCS also reduce the volume of used lamps requiring disposal, through the use of long-lasting energy-saving lamps, while the use of rechargeable batteries reduces the environmental problems associated with dry non-rechargeable cells. Other positive environmental effects of using SBCS as the source of power for Uzi Island include the reduction or total elimination of noise, air and water pollution. SBCS, however, are not entirely free of environmental pollution. Lead-acid batteries that are used with SBCS, for example, contain corrosive sulphuric acid which, if spilled, could cause severe environmental damage and health hazards. Batteries also give off explosive hydrogen gas when they are being charged, thus posing the danger of explosion if adequate ventilation is not provided. Batteries contain large amounts of electrical energy; if the terminals are shorted, they could cause a severe electric shock or fire. Disposal of batteries is also of environmental concern, owing to dumping at rubbish collection places or, in the worst cases, burning. Safe and environmentally friendly means of disposing of and recycling
batteries have been developed, although these are not yet disseminated. As with batteries, there is an environmental concern arising from the disposal of used PV panels. The long life of PV panels (over 20 years) makes the scale of this problem less serious in comparison with the other energy supply options. Other environmental concerns related to SBCS, and also to the other supply options, arise from the production processes of the various components making up the systems. These are not covered in this assessment. Based on this analysis, the positive environmental impact of the SBCS application by far outweighs its negative impacts and also the positive impacts of the other energy supply options.

2.4 Sustainability of the Project
The feasibility study proved that the SBCS energy option was the most viable based on the criteria of technical design, performance analysis, user considerations, and financial and environmental analysis. Such predictions, which have been confirmed by a recent review of the project, were based on the following facts:
(a) The initial funding covered by the donor was crucial, since the users could not afford it.
(b) The users can economically run the project using available resources for the betterment of their lives, thereby reducing the scale of the energy crisis they are currently facing.
(c) The technical performance of the SBCS has been proven sufficient to meet the charge demand of the households. The villagers are taking good care of the systems and technical problems are very minimal.
(d) The environmental impact assessment conducted proved satisfactory; there are no noticeable negative environmental effects of the project on Uzi Island.
(e) The financial analysis of the project shows that the project investment is within reasonable cost.
(f) The socio-economic impact assessment suggests that the project is of vital importance to the villagers, and they have therefore positively accepted the project. The villagers are paying their monthly contributions to the village fund without problems.

3.0 DESIGN OF THE SOLAR BATTERY CHARGING SYSTEM

3.1 Household Power Demand
Based on the discussion made under Section 2.1, the projected electrical energy demand of the households is summarized in Table 2. The specification has taken into consideration both the AC and DC power options.
Table 2: Basic Adopted Load Specification for Households

System Type | Appliance | Number of Appliances | Power Rating (W) | Usage (Hours)
Household Systems without Inverters | Lamps | 3 | 10 | 2-3
Household Systems without Inverters | Radio | 1 | 10 | 2-4
Household Systems without Inverters | Television | 1 | 14 | 1-2
Household Systems without Inverters | Mobile Phone Charger | 1 | 0.5 | 1-2
Household Systems with Inverters | Lamps | 3 | 10 | 2-3
Household Systems with Inverters | Radio | 1 | 10 | 2-4
Household Systems with Inverters | Television | 1 | 50 | 1-2
Household Systems with Inverters | Mobile Phone Charger | 1 | 0.5 | 1-2
3.2 Sizing of SBCS and Household System Components
The system design calculator presented in Table 3 was used in carrying out the sizing calculations. Extreme cases, namely the month with the lowest average insolation (referred to herein as the design month) and each household recharging its battery on the fifth day, were assumed.

Table 3: System Design Calculator Sheet

Daily energy use:
  AC powered appliance (television, 50 W): 50 WHrs
  Inverter losses (5%): 2.5 WHrs
  Subtotal (AC power): 52.5 WHrs
  Total daily appliance energy use (DC + AC): 137.5 WHrs
  Estimated system losses (5%): 6.875 WHrs
  Total daily system energy demand: 144.375 WHrs
Location of the site: South West Zanzibar
Design month: April
Battery storage days for design month: 5 days
System energy requirement for design month: 721.875 WHrs
System voltage: 12 VDC
System charge requirement for design month: 60.2 AHrs
Estimated insolation value of the site: 5.6 hours
System design charging current: 10.7 A
Solar module selection: 120 Wp module, 7.45 A charging current; number of panels needed (WHrs sizing): 1.0
Battery selection: maximum allowable depth of discharge of batteries: 80%; required system battery capacity: 75.2 Ah; battery size: 75 Ah; number of batteries needed (AHrs sizing): 1.003
Charge controller selection: module side input current to be rated at least 9.60 A; load side output current to be rated at least 9.56 A
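The sizing chain in Table 3 can be expressed compactly. The sketch below re-derives the main quantities from the entries of the table (daily demand, storage days, system voltage, design-month insolation and allowable depth of discharge); it is a simplified illustration rather than a reproduction of the full calculator sheet.

```python
# Simplified SBCS sizing chain using the figures quoted in Table 3.
daily_demand_wh = 144.375     # total daily system energy demand, incl. inverter and system losses
storage_days = 5              # each household recharges its battery on the fifth day
system_voltage = 12.0         # VDC
peak_sun_hours = 5.6          # estimated insolation for the design month (April)
max_dod = 0.80                # maximum allowable depth of discharge
module_rated_current = 7.45   # A, for the selected 120 Wp module

energy_per_cycle_wh = daily_demand_wh * storage_days             # ~721.9 Wh between charges
charge_per_cycle_ah = energy_per_cycle_wh / system_voltage       # ~60.2 Ah
required_battery_ah = charge_per_cycle_ah / max_dod              # ~75.2 Ah -> 75 Ah battery
design_charging_current = charge_per_cycle_ah / peak_sun_hours   # ~10.7 A to recharge in one design day

print(f"Energy per charge cycle : {energy_per_cycle_wh:.1f} Wh")
print(f"Charge per cycle        : {charge_per_cycle_ah:.1f} Ah")
print(f"Required battery size   : {required_battery_ah:.1f} Ah")
print(f"Design charging current : {design_charging_current:.1f} A "
      f"(one 120 Wp module supplies {module_rated_current} A)")
```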
3.3 SBCS and Household System Specifications
Based on the design procedure outlined in the preceding sections, the resulting specifications for the SBCS and the household systems are as presented in Table 4.

Table 4: SBCS and Household System Specifications

System | Component | Rating
SBCS | Solar Panel | 120 Wp, 7.45 A (rated current)
SBCS | Charge Controller | 10 A (13.4 V, fully charged battery)
SBCS | Daily Charging Capacity | 720 Whrs (dependent on insolation)
Household System | LVD | 11.6 V (disconnects a discharged battery)
Household System | Daily Battery Discharging Capacity | 140/85 Whrs (with/without inverter)
Household System | Inverter | 200 W, 95% efficient
4.0 CONCLUSIONS AND RECOMMENDATIONS

4.1 Conclusions
SBCS are appropriate when they are designed to keep the capital and operating costs low. Such a design would incorporate the following:
(a) Household systems with an LVD should be preferred to the use of a charge controller.
(b) Panel sizing should allow a discharged battery to be charged under average one-day solar conditions.
(c) Batteries of 50 to 75 Ah capacity are the optimum size for a household at 80% depth of discharge. This means that solar batteries, and not car batteries, have to be used.
(d) Differently sized or coloured mating plugs should be used on the battery terminals to prevent cross wiring at home and at the SBCS.
The Uzi Island inhabitants live a moderate lifestyle. The island is fertile and fishing is the main economic activity. However, this does not guarantee them a good life, especially access to electricity to meet their demand for power for lighting and the other low-energy appliances. That is to say, the SBCS project cannot make available sufficient electrical power to meet all the demands of the modern world for domestic and industrial applications. However, the SBCS project has demonstrated a viable means of providing such people with access to the basic electric power needed for day-to-day uses at home and school, such as lighting, communication and entertainment.

4.2 Recommendations
SBCS should be considered as one of the viable options for household energy supply when targeting low-income communities in remote rural areas that are not connected to grid systems, and households that have limited electrical energy needs. As energy needs increase, SHS become the next cost-effective supply option for the relevant households. It is therefore recommended that SBCS be used as a dissemination strategy for solar PV technology, to prepare for the development of the SHS market.
Based on the experience obtained through the Uzi project, it is recommended that proper battery maintenance practices be taught to the users of the systems. Proper training of SBCS operators on the technical aspects of the technology and the systems is critical for reliable operation and sustainability of the system. It is also recommended that SBCS schemes make provision for the users to own the batteries; this has proved to promote good battery maintenance practices and avoids the costs of monitoring battery use by the SBCS operators. Technically unsophisticated systems should preferably be used, implying minimal initial, maintenance and replacement costs. The lesson learnt from the Uzi project SBCS operators
and technicians is that providing for SBCS operator control, rather than relying solely on the electrical controls of the SBCS, reduces the scale of system sophistication required. The same applies to the users: providing for household system user control, rather than relying solely on the electrical controls of the household system, is preferable.

REFERENCES
SGA Energy Limited (1999) Solar Battery Charging Stations: An Analysis of Viability and Best Practices. Report submitted to the Asia Alternative Energy Program (ASTAE).
Harkins, M. (1991) Small Solar Electric Systems for Africa. Commonwealth Science Council and Motif Creative Arts Ltd.
Strong, S.J. and Scheller, W.G., The Solar Electric House: Energy for the Environmentally-Responsive, Energy-Independent Home.
Kimambo, C.Z.M. and Kihedu, J. (2004) Feasibility Study for Application of Solar Photovoltaic Systems at Uzi Island, Zanzibar, Tanzania. Study implemented by the Tanzania Solar Energy Association (TASEA) as requested by Deutsch Tansanische Partnerschafft (DTP).
Kimambo, C.Z.M. and Kihedu, J. (2004) The Study on the Use of Dry Rechargeable Batteries in Tanzania. Study implemented by the Department of Energy Engineering, University of Dar es Salaam, as requested by Deutsch Tansanische Partnerschafft (DTP).
Kimambo, C.Z.M. and Magembe, R.M. (2003) Workshop Reader: How to Build a Solar Electric Home System. Workshop organized by Deutsch Tansanische Partnerschafft, the Department of Energy Engineering, the Revolutionary Government of Zanzibar and Haile Sellasie Secondary School, Zanzibar.
Ahmed, S.M. (2004) Alternative Source of Electricity for Pemba. Project Number 23-0304, Department of Energy Engineering, University of Dar es Salaam.
SURFACE RAINFALL ESTIMATE OF LAKE VICTORIA FROM ISLANDS STATIONS DATA

Bennie Mangeni, Lecturer and PhD Student, Department of Civil Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
Gaddi Ngirane-Katashaya, Associate Professor, Department of Civil Engineering, Faculty of Technology, Makerere University, Kampala, Uganda
ABSTRACT
The estimates of the water balance components of Lake Victoria by earlier studies indicated that the two largest components, rainfall and evaporation, were the most difficult to determine. Further analysis has shown that this was due to the quality of the data used, the unaccounted-for diurnal effect, and the methods with which these parameters were estimated. In most of the water balance studies, rainfall data from catchment stations were used in computing lake rainfall as straight or weighted averages for the lake, disregarding the diurnal phenomenon but striving to maximize spatial coverage. This resulted in underestimation of lake rainfall and created an error in the water balance itself.
In this paper, areal estimation of rainfall from island station data was reviewed before choosing an appropriate estimation method. The simple exponential kriging method was chosen to estimate the rainfall over the lake from mean annual rainfall data from island stations plus a few lakeshore stations. The island stations' mean annual rainfall values and their locations were used to create a point map. The point map was used to check the arrangement pattern and correlation of the rainfall stations before using it as input into a surface-fitting interpolation procedure to determine the rainfall distribution over the lake. The average annual lake rainfall was then derived using an overlain grid. The resultant average annual rainfall was 1815 mm.
Keywords: Areal Rainfall Estimation; direct and surface-fitting methods; point map; pattern and correlation analysis; optimal interpolation/kriging
1.0 INTRODUCTION
In hydrology, lake rainfall is ideally estimated from island and lakeshore stations. In this study, rainfall over Lake Victoria was derived from island and lakeshore stations' rainfall data. Several methods of estimating regional precipitation from point values were considered, some providing estimates for spatially averaged precipitation only, whereas others provided estimates of the spatial distribution of precipitation within a region of interest as
well as of the spatial average. Conceptually the average precipitation, P, over a region, A, could be represented as
P = \frac{1}{A} \iint_{A} p(x, y)\, dx\, dy \qquad (1)

where A was the area of the region, x and y represented rectangular coordinates of points in the region, and p(x,y) was the precipitation at all such points where measurements of precipitation had been made by gages for the period of interest. Approaches for computing an estimate of P can be divided into direct methods, which compute the estimate directly as a weighted average of the measured values, and surface-fitting methods, which first use the measured values to estimate precipitation at points in A and then use some scheme to approximate the integral in equation 1.
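For the surface-fitting approach, once a rainfall surface p(x, y) has been estimated on a grid covering the lake, the integral in equation 1 is approximated by an area-weighted mean of the grid cells falling inside the lake. The short sketch below illustrates this final step only; the rainfall grid and lake mask are placeholder inputs standing in for the interpolated surface and the lake boundary.

```python
import numpy as np

def areal_average(rainfall, lake_mask):
    """Discrete approximation of equation 1 assuming equal-area grid cells:
    the mean of the interpolated rainfall over the cells lying inside the lake."""
    lake_mask = np.asarray(lake_mask, dtype=bool)
    return float(np.asarray(rainfall, dtype=float)[lake_mask].mean())

# Toy illustration with a 3 x 3 grid (values in mm/year); a real grid would come
# from the interpolation surface and the digitized lake boundary.
rain = np.array([[1700.0, 1900.0, 1600.0],
                 [2100.0, 2000.0, 1500.0],
                 [1800.0, 1750.0, 1400.0]])
mask = np.array([[False, True, True],
                 [True,  True, True],
                 [False, True, False]])
print(f"Mean annual lake rainfall (toy grid): {areal_average(rain, mask):.0f} mm")
```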
1.1 Areal Estimation Method
The choice of the method used in this study was based on the comparative studies of Shaw and Lynn (1972), Court and Bare (1984), Creutin and Obled (1982), Tabios and Salas (1985) and Lebel et al. (1987). These studies assessed weaknesses of the various methods and finally established that the choice of a method for computing areal precipitation depends on several factors, including the objective of the analysis, the nature of the region, and the time and computing resources available. After the above assessments, the researchers unanimously concluded that the optimal interpolation/kriging method, a type of surface-fitting method, provided the best estimate of regional precipitation in a variety of situations. This was attributed to the fact that optimal interpolation is based on the spatial correlation structure of precipitation in the region of application, whereas other methods impose essentially arbitrary spatial structures. Since optimal interpolation methods had the additional advantage of providing a model of the spatial variability of precipitation and estimates of uncertainty, they were chosen to evaluate rainfall over the lake. An extra and final condition for choosing an areal rainfall estimation method was that the results of a good method had to reflect the actual physical conditions in the study area. The simple exponential model kriging method fulfilled this last condition as well.
1.2 Simple Exponential Model Kriging Kriging is a geostatistical interpolation technique that considers both distance and the degree of variation between known data points when estimating values in unknown areas. A kriged estimate is a weighted linear combination of the known sample values around the point to be estimated. Applied properly, Kriging allows the user to derive weights that result in optimal and unbiased estimates. It attempts to minimize the error variance and set the mean of the prediction errors to zero so that there are no over- or under-estimates. Included with the Kriging routine is the ability to construct a semivariogram of the data which is used to weight nearby sample points when interpolating. It also provides a means for users to understand and model the trends of the data. A unique feature of Kriging is that it provides an estimation of the error at each interpolated point, providing a measure of confidence in the
modeled surface. Kriging should be applied where best estimates are required, data quality is good and error estimates are essential. All these points were requirements in this study, hence the choice of optimal interpolation/kriging as the method of areal rainfall estimation over the lake. Kriging is a Best Linear Unbiased Estimator (BLUE) procedure that relies on assumptions regarding the randomness, homogeneity and, to some extent, isotropy of the data in a field, the spatial correlation structure of which is modeled by a covariance function or semivariogram selected by the user to yield interpolated estimates with the lowest estimation error.

1.3 Attribute Table and Point Map
The starting point in the interpolation process is the creation of a point map. In the Integrated Land and Water Information System (ILWIS), a point map is a data object used to store spatial geographic information (points) such as water wells, sample points, etc. The relation between the points and their position on earth is defined by the coordinate system. The point map was obtained by converting the attribute table (Table 1), containing station names, locations (coordinate columns) and mean annual precipitation, to a point map. Figure 1 represents the point map produced by applying the ILWIS table-to-point-map operation on Table 1 with the base map as reference. The point map was overlain over the base map so that the points were properly located on the base map. This was achieved by assigning the two maps the same georeference/coordinate system. This produced Figure 2, in which the stations may be represented by either their names or their mean annual rainfall values. For good results from interpolation it was necessary to ascertain that the chosen island and lakeshore stations were geographically correctly located.

2.0 ANALYSIS OF SAMPLE POINTS
Sample points may possess a random, clustered or regular arrangement. Such points may or may not be correlated to each other. The correlation may be strong positive correlation, strong negative correlation, or no correlation when they are randomly arranged. For a point map to be interpolated, its points must either be clustered or regularly arranged, in addition to being correlated. These two conditions were checked by pattern analysis and spatial correlation operations on the point map before interpolation.

2.1 Pattern Analysis
Data in a point map represent the spatial occurrence of a particular phenomenon. To investigate the occurrence of the phenomenon, it was necessary to establish the spatial distribution of the points in the map. Point pattern analysis was the technique used to obtain information about the arrangement of point data in space in order to determine the occurrence of certain patterns. The points could have random, clustered or regular patterns.
2.2 Measures of Dispersion
The Distance column in an ILWIS pattern analysis operation on a point map defines a number of distances from 0 to the upper limit of each distance figure. In each case the probability of finding one or more other points at a given distance from a chosen point is calculated, giving Prob1Pnt, Prob2Pnt, ..., Prob6Pnt, where:
• Column Distance lists distances from any point in the input point map, also called a search radius.
• Column Prob1Pnt lists the probability that, within the specified Distance of any point, at least one other point will be found. The Distance is the one at which any point's nearest neighbour can be found. For example, if for a distance of 0.7 the Prob1Pnt is 0.9333, it means that for 93.3% of all points in the input point map, one nearest neighbour can be found within a radius of 0.7 scale points on the map.
• Column Prob2Pnt lists the probability that from any point at least two other points will be found within a radius of Distance. The same interpretation applies to Prob3Pnt, ..., Prob6Pnt.
For a dataset of n points, ProbAllPnt is the summation of Prob1Pnt to Prob(n-1)Pnt divided by (n-1). The formula to calculate probabilities within a certain distance is given by equation 2.

\mathrm{ProbAllPnt} = \frac{1}{n-1} \sum_{i=1}^{n-1} P_i \qquad (2)

where P_i is the probability of finding i neighbours within a specified radius Distance of any point.

Table 1: Attribute table of the island and some lakeshore stations

Station name | Latitude in ° | Longitude in ° | Annual Rainfall in mm
Entebbe | 0.050 | 32.450 | 1629
Jinja Met | 0.417 | 33.200 | 1227
Buvuma Is | 0.183 | 33.300 | 1696
Kalangala Is | -0.333 | 32.317 | 2085
Bumangi Is | -0.317 | 32.233 | 2086
Bukasa Is | -0.444 | 32.542 | 2443
Kisumu airport | -0.100 | 34.750 | 1357
Rusinga Is | -0.392 | 34.190 | 1053
Musoma airport | -1.500 | 33.717 | 839
Ukerewe Is Met | -1.927 | 32.914 | 1502
Mwanza airport | -2.467 | 32.917 | 1080
Kishanda Met | -1.733 | 31.550 | 1055
Izigo Trading | -1.620 | 31.720 | 1731
Bukerebe Is | -1.468 | 32.109 | 2609
Bukoba Met | -1.333 | 31.817 | 2060
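The nearest-neighbour probabilities described above can be illustrated directly from the station coordinates in Table 1. The sketch below is an independent re-implementation of the idea rather than the ILWIS pattern-analysis routine itself, and it treats latitude and longitude as planar map coordinates, which is adequate for illustration close to the equator; the 1.0-degree search radius is an assumed value.

```python
import numpy as np

# (latitude, longitude) of the island and lakeshore stations in Table 1
stations = np.array([
    (0.050, 32.450), (0.417, 33.200), (0.183, 33.300), (-0.333, 32.317),
    (-0.317, 32.233), (-0.444, 32.542), (-0.100, 34.750), (-0.392, 34.190),
    (-1.500, 33.717), (-1.927, 32.914), (-2.467, 32.917), (-1.733, 31.550),
    (-1.620, 31.720), (-1.468, 32.109), (-1.333, 31.817),
])

def neighbour_probabilities(points: np.ndarray, distance: float, k_max: int = 6):
    """Prob_kPnt: fraction of points that have at least k other points
    within the given search radius (planar distance in map units)."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)                 # ignore each point itself
    neighbours = (dist <= distance).sum(axis=1)    # neighbours found per point
    return [float((neighbours >= k).mean()) for k in range(1, k_max + 1)]

probs = neighbour_probabilities(stations, distance=1.0)
for k, p in enumerate(probs, start=1):
    print(f"Prob{k}Pnt at a radius of 1.0 degrees: {p:.3f}")

# ProbAllPnt as in equation 2: the mean of Prob1Pnt ... Prob(n-1)Pnt.
all_probs = neighbour_probabilities(stations, distance=1.0, k_max=len(stations) - 1)
print(f"ProbAllPnt at a radius of 1.0 degrees: {sum(all_probs) / (len(stations) - 1):.3f}")
```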
Fig. 1: The point map from the attribute table (Table 1)
2.3 Spatial Correlation
By plotting ProbAllPnt against Distance, it was possible to determine up to which distance autocorrelation existed between point pairs. The resultant graph was compared to standard graphs and used in conjunction with the pattern analysis table to deduce the arrangement pattern of the sample points in the point map. On the other hand, the maximum distance from this graph was one of the inputs in calculating autocorrelation (Moran's I), spatial variance (Geary's c) and the semi-variogram for the assessment of spatial correlation. In this study
the omnidirectional method was used, meaning that directions between points were neglected. Odland (1988) claims that autocorrelation exists whenever a variable exhibits a regular pattern over space in which values at a certain set of locations depend on values of the same variable at other locations.
Fig. 2: Basemap overlain by point map and georeferenced

2.4 Interpretation of Moran's I and Geary's c
For all point pairs in a distance class, a value for Moran's I and Geary's c was obtained. Geary's c compares the squared differences of point pair values to the mean of all values, Geary (1954). Moran's I relates the product of differences of point pair values to the overall difference, Moran (1948). The general interpretation of both statistics is summarized below:
• Strong positive autocorrelation: I > 0, c < 1
• Strong negative autocorrelation: I < 0, c > 1
• Random distribution of values: I = 0, c = 1
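The two statistics can also be reproduced outside ILWIS from the Table 1 data. The sketch below uses a binary distance-band weight matrix with the 3.5-unit search distance reported in Section 3.1; the weighting scheme and planar distances are assumptions of this illustration, so the resulting values need not match those reported by the ILWIS routine exactly.

```python
import numpy as np

def morans_i_gearys_c(values, coords, max_distance):
    """Moran's I and Geary's c with binary weights: w_ij = 1 if a pair of
    points lies within max_distance (omnidirectional), 0 otherwise."""
    x = np.asarray(values, dtype=float)
    pts = np.asarray(coords, dtype=float)
    n = len(x)
    dist = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    w = ((dist > 0) & (dist <= max_distance)).astype(float)
    s0 = w.sum()
    dev = x - x.mean()
    denom = (dev ** 2).sum()
    moran = (n / s0) * (w * np.outer(dev, dev)).sum() / denom
    geary = ((n - 1) / (2.0 * s0)) * (w * (x[:, None] - x[None, :]) ** 2).sum() / denom
    return moran, geary

# Mean annual rainfall and (latitude, longitude) of the Table 1 stations.
rain = [1629, 1227, 1696, 2085, 2086, 2443, 1357, 1053, 839,
        1502, 1080, 1055, 1731, 2609, 2060]
coords = [(0.050, 32.450), (0.417, 33.200), (0.183, 33.300), (-0.333, 32.317),
          (-0.317, 32.233), (-0.444, 32.542), (-0.100, 34.750), (-0.392, 34.190),
          (-1.500, 33.717), (-1.927, 32.914), (-2.467, 32.917), (-1.733, 31.550),
          (-1.620, 31.720), (-1.468, 32.109), (-1.333, 31.817)]
I, c = morans_i_gearys_c(rain, coords, max_distance=3.5)
print(f"Moran's I = {I:.3f}, Geary's c = {c:.3f}")
```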
3.0 RESULTS AND DISCUSSION
3.1 Pattern Analysis and Spatial Correlation
The pattern analysis results indicated a regular arrangement of points. Furthermore, comparison of the ProbAllPnt against Distance graph to standard sample arrangement graphs also showed a regular pattern. Correlation analysis showed that for the search Distance of 3.5 units, when ProbAllPnt = 1.0, Moran's I had a value of -0.213 and Geary's c had a value of 1.02. The interpretation of these parameters indicated a correlation of the sample points between strong positive and strong negative. This meant that the stations in the point map had a regular arrangement and were correlated. Thus the point map satisfied the conditions for interpolation.
3.2 Interpolation Surface Figure 3 showed the interpolation surface from the Exponential Simple Model interpolation procedure. The surface showed interpolated rainfall values distributed over the lake. The interpolated rainfall distribution over the lake depicted higher rainfall values for the Western half of the lake as compared to the Eastern half which was in agreement with the general rainfall regime over the lake caused by the diurnal effect. The optimal interpolation surface was obtained after a trial and error procedure applying several surface fitting procedures. A few interpolation procedures could not be applied due to inadequate data.
Fig. 3: Interpolation Surface showing locations of the stations used.
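For readers without access to ILWIS, the sketch below shows textbook ordinary kriging with an exponential semivariogram model. It is not the exact 'simple exponential model' routine used in the study, and the sill, range and nugget parameters are placeholders that would normally be fitted to the experimental semivariogram of the station data.

```python
import numpy as np

def exponential_gamma(h, sill=1.0, rng=1.5, nugget=0.0):
    """Exponential semivariogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(sample_xy, sample_z, target_xy, **variogram):
    """Estimate the value at target_xy as a weighted sum of the sample values,
    with weights obtained by solving the ordinary-kriging system."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    z = np.asarray(sample_z, dtype=float)
    n = len(z)
    d = np.sqrt(((sample_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(-1))
    # Left-hand side: sample-to-sample semivariances plus the unbiasedness constraint.
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = exponential_gamma(d, **variogram)
    a[n, n] = 0.0
    # Right-hand side: sample-to-target semivariances.
    d0 = np.sqrt(((sample_xy - np.asarray(target_xy, dtype=float)) ** 2).sum(-1))
    b = np.append(exponential_gamma(d0, **variogram), 1.0)
    weights = np.linalg.solve(a, b)[:n]
    return float(weights @ z)

# Placeholder use with a few of the Table 1 stations; variogram parameters are assumed.
xy = [(0.050, 32.450), (-0.333, 32.317), (-0.444, 32.542), (-1.333, 31.817)]
z = [1629.0, 2085.0, 2443.0, 2060.0]
print(ordinary_kriging(xy, z, target_xy=(-0.8, 32.3), sill=2e5, rng=1.5, nugget=0.0))
```

In practice the estimator is evaluated on every cell of a grid covering the lake, producing a surface analogous to Figure 3, and the same system also yields the kriging variance used as the per-pixel error estimate.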
3.3 Mean Annual Rainfall for Lake Victoria
From Figure 3, it was clear that the rainfall was highest in the central western part of the lake (Bukoba to Bukasa), varying from about 2600 to 2220 mm year⁻¹. It reduced to about 850 mm year⁻¹ on the eastern shores around Musoma, and to 1125, 1250 and 1100 mm year⁻¹ respectively on the western, northern and southern shores of the lake. Due to the irregularity of the interpolation surface, an overlain grid and areal weights had to be utilized in the overall estimate of average over-lake rainfall. The estimated mean annual rainfall over Lake Victoria was 1815 mm. The pixel value interpolation errors ranged between 0 and 0.03 mm for the lake area. The mean annual value of 1815 mm was a reasonable estimate compared to the range of 1700-1800 mm year⁻¹ quoted as the rough consensus for most previous studies, Yin et al. (2000). This average value would also be in agreement with measurements at island stations near the northwestern shore, which gave an average annual rainfall of 2000-2250 mm, Flohn and Freadrich (1966).

4.0 CONCLUSIONS AND RECOMMENDATION
• Pixel values in Figure 4 indicated that the mean annual rainfall in the central western part of the lake varied from about 2600 mm to 2220 mm. This value reduced to less than 1300 mm at the lakeshores.
• The distribution of rainfall over the lake reflected the known physical situation, with higher rainfall occurring over the western half of the lake than over the eastern half. This is caused by the diurnal effect.
• The Lake Victoria mean annual lake rainfall was 1815 mm.
• The mean annual rainfall is higher than the values given by earlier studies, most likely because the diurnal effect was accounted for by using the more representative island stations data and the exponential simple model interpolation method, which relies on the inherent correlation structure of the point map rather than on arbitrary correlation structures.
• Unavailability of rainfall data for islands at the centre of the lake must have affected the outcome, since optimal interpolation could not be achieved due to inadequate data.
• Installation of new island meteorological stations and better instrumentation of the existing stations on the lake should be accelerated to improve the data quantity and quality.
REFERENCES
Clark, I. (1979) Practical Geostatistics. Applied Science Publishers, London. 129 pp.
Court, A. and Bare, M.T. (1984) Basin precipitation estimates by Bethlahmy's two-axis method. J. of Hydrol. 68: 149-158.
Creutin, J.D. and Obled, C. (1982) Objective analysis and mapping techniques for rainfall fields: an objective comparison. Water Resources Research 18: 413-431.
Geary, R.C. (1954) The contiguity ratio and statistical mapping.
Lebel, T., Bastin, G., Obled, C. and Creutin, J.D. (1987) On the accuracy of areal rainfall estimation: a case study. Water Resources Research 23: 2123-2134.
Author Index A K. N. E. Abdalla, 326 S. A. Abdalla, 326 L. E. K. Achenie, 522 O. C. Ademosun, 315, 321,375 A. Adesina, 375 B. A. Adewumi, 315, 321 L.A.S. Agbetoye, 375 K. C. Akampuriira, 309 O. Akasheh, 495 C.O. Akinbile, 251 K. Aliila, 647 Sayyid Ahmed Ali, 429 H. Alinaitwe, 260, 268,277 M. Amelin, 385 P.M.R. Anand, 603 N.J. Arineitwe, 465
B G. Babangira, 465 U. Bagampadde, 125 Y. Ballim, 91 N. Behangana, 84 J.K. Byaruhanga,454
C D.J.Chambega, 404 S. C. Chiemeke, 533 Z. Chiguvare, 395 R. Clemence, 421 E. Cuadros-Vargas, 522
D S. S. Daodu, 533 I. P. Da Silva, 385 Vasant Dhamadhikar, 429 N. Donart, 562
E S.O. Ekolu, 75, 91 M. M. El-awad, 326 E. M. Eljack, 326 A. M. EL-Belasy, 504 M. E1-Moattassem, 135 Bjom Otto Elvenes, 488 E. Elwidaa, 59
F O.P. Fapem, 375 F. Farag, 495 T. A Fontaine, 1
G A. Goliger, 31, 40, 49 T.Gryba, 612
H Y. I. Hafez, 504 B. Hansson, 260, 268, 277 F. van Herwijnen, 99 R.D. Hooton, 75 D. Hoyer, 1
J G. R. John, 347, 368 S. Jonsson, 454 A. J. M. Jorissen, 99
K M.D. Kabadi, 647 A. K. Kahuma, 143 S. Kalugila, 23 L.L. Kaluuba, 621 E. H. Kalyesubula, 473 S. Kapasa, 109 B. Kariko-Buhwezi, 572 E. Kasembe, 347 H. R. Kerali, 10 S. J. Kenner, 1 D. Kibira, 572 B. M. Kiggundu, 125, 143 M. Kigobe, 211,238 H. Kiriamiti, 339 F. Kizito, 300 M. Kizza, 211, 238 S.B.Kucel, 538, 546 G. S. Kumaran, 635 M. Kyakula, 84, 109
L P. O. Lating, 538, 546 J. E. Lefebvre, 612 W. Lugano, 368 E. Lugujjo, 385
M J. Mahachi, 3 l, 40, 49 A.S. Mahenge, 185 B. Mangeni, 195 S. Manish, 562 M. J. Manyahi, 421 A. Manyele, 647 T. M. Matiasi, 167 Charles Mbohwa, 354 T.S.A. Mbwette, 185 C.F.Mhilu, 368 I.S.N. Mkilaha, 368 M. Mkumbwa, 368 I. N. Mugabi, 300 P. B. Mujugumbya, 309 N. Mukasa, 488 P. I. Musasizi, 572 W.B. Musinguzi, 465 O. Mwaikondela, 368 J. A. Mwakali, 143, 221,260, 268, 277, 285,309 S. Mwalembe, 647 Elijah Mwangi, 429 E. Mwangi, 594 G. Mwesige, 230
E. A. Opus, 109 T. J. Oyana, 522 A.S. Oyerinde, 251
P B. Pariyo, 84 N. Phocus, 562
Q A. Quin, 176
R P. A. Rivers, 522 A. I. Rugumayo, 238 P. D. Rwelamila, 221
S M. B. Saad, 504 B.B.Saanane, 404 K.R.Santhi, 635 S. Sarmat, 339 K. E. Scott, 522 A. Sendegeya, 385 M. Shrivastava, 514 E. E. Stannard, 10 B. Ssamula, 117
N B.M. Nabacwa, 465 B. Nabuuma, 481 B. Nawangwe, 59 C. Neale, 495 D. A. N garambe, 514 G. Ngirane-Katashaya, 195,300 K.N. Njau, 185 Et. Ntagwirumugara, 612 H. K. Ntale, 238 J.N. Ntihuga, 159 A.H.Nzali, 404 C. Nzila, 339
O J. B. Odoki, 10 A. S. Ogunlowo, 315,321,375 R. Okou, 465 M.A.E. Okure, 465, 481,488 O.J. Olukunle, 375 P.W. Olupot, 454
T G. Taban-Wani, 143,580, 621 M.D.A. Thomas, 75 S. S. Tickodri-Togboa, 554 D. Tindiwensi, 221,230 L.Trojer, 538, 546 M. N. Twesigye-omwe, 203
V S. Vishwakarma, 437
W D. Waigumbulizi, 621 T. Wanyama, 580 K. Widen, 268 L. Wilson, 347
XYZ S. Saad Zaghloul, 135
Keyword Index Numerals
11 kV Vallee De Mai Feeder, 437 11 kV Cote D'or Feeder, 437 11 kV Baie St. Anne Feeder, 437 2-D Mathematical models, 504 3G, 603 8051 Microcontrollers, 429 A Accident, 285 Activity Sampling, 260 Adaptive filters, 594 Advanced manufacturing technology, 488 Advances in engineering and technology, 167 Air inclination, 321 Air speed, 321 Air voids, 125 Akwa-Ibom, 251 Algorithms, 522 Alkali-silica reaction, 75 Analysis, 404 Architecture, 23 Asset valuation, 10 Aswan High Dam, 135 Augmented Reality, 603 B
Back velocity; 504 Bagasse, 354, 465 Barriers, 268 Bearing Capacity, 203 Behaviours and Properties, 185 Biogas systems, 481 Biomass, 368,473
843
Biomedical Applications, 522 Biosensor, 159 Block shape, 84 BQIH, 31, 40 Brake mean effective pressure, 347 Bricks, 473 Brunauer Emmett and Teller, 185 Budget scenarios, 10 Building, 285 Building craftsmen, 277 Building permit, 23 Building Sites, 260 Built environment, 23 Bulk heterojunctions, 395 Burnt and adobe bricks, 143 Bwebajj a, 285 C Cable, 437 California Bearing Ratio, 203 Cartesian coordinates, 554 Cassava, 375 Cassava processing, 375 CD-ROMs, 546 Cell phone, 603 Cement, 23 Centralized, Decentralized, 580 Characterisation, 454 Channelisation, 125 Civil, 285 Climate of coastal cities, 49 Clustering, 522 CMOS, 612 Cogeneration, 354 Collapse, 285 Combined-cycle, 465
Combined heat and power, 354 Composite action, 109 Compressive strength, 75 Concrete, 23, 91 Conductor, 437 Cone Penetration Test, 203 Conservation, 385 Constraint-Directed Search, 572 Construction, 221, 230,285 Construction failures, 91 Construction Firm, 268 Construction industry, 59 Conventional and vernacular materials, 143 Conversion, 385 Cost, 84 Coupling-of-modes (COM), 612 Craftsmen, 260 Crosstalk, 421 D
Database, 10 Decision-Making, 580 Decision Support Systems, 176, 300, 572 Delay-spread, 621 Density, 321 Desiccant cooling, 326 Desiccant dehumidifier, 326 Design loads, 84 Digital control, 429 Digital Divide, 533 Disaster management, 309 Doppler spread, 621 E
Early warning, 647 Earthquake, 309
Earthquakes, 647 Earthquake monitoring, 647 Earthquake resistant construction, 143 Ease of design/construction, 84 East African rift valley system, 647 Eddies, 504 Education, 91,385 Efficiency, 260, 473 Elastic versus limit state design, 109 E-learning, 538 Electricity production, 465 Electromagnetic Interference, 421 Enablers, 268 Energy, 385,473 Engineered buildings, 309 Enlarged images, 195 Environment, 159 Environmental, 473 Environmental impact assessment, 167 Environmental planning and management, 167 Equipment, 315 Evaporative cooling, 326 Existing environment, 488 Extraction, 339 F
Factors, 277 Fading, 621 Fading channel manifestations, 621 Fading phenomena, 621 Female Students, 538 Fences, 23 Fineness, 75 Finishes, 23
844
Finite Difference Time Domain, 421 FIR Filters, 594 Fire safety, 99 Firm activity, 488 Floor system, 99 Flow field, 504 Forecasting, 135 Formal construction, 285 Four-way video conferencing, 603 Fractionation, 339 Frequency distribution, 238 Fruit juice production, 375 Fruits, 375 Fuel conversion efficiency, 347 Fuel ionization, 347 Fullerene, 395 Functional working life, 99 Fuzzy logic model, 404 G Game-Theory, 580 Gasification, 465 Gender, 538 Gender mainstreaming, 59 Generating function, 554 Geographic Information Systems, 176 Geography, 522 Geological survey, 647 GIS, 300, 495,522 Grass thatch and CGI roofs, 143 Government involvement, 117 GPS, 647 Grain size, 321 Green Production Section, 572
Ground truth, 195 Groundwater inflow, 195 Group-Choice, 580 H
Hazard, 285,309 Hazards, 647 HAZUS, 309 HDM-4, 10 Health, 285 Heavy metals, 159 High Temperature Air/ Steam Gasification, 368 Highways, 10 Hollow clay blocks and solid RC slab, 84 Homogeneity, 238 Housing, 31, 40 Hybrid, 538 Hyperbolic Model. 203 I
ICT, 538, 647 IDT, 612 IFD building, 99 Index flood, 238 Indicators, 221 Indigenous machinery, 375 Industrial Clusters, 230 Industry, 221 Informal construction, 285 Information Technology, 533 Infrastructure, 285 Injury, 285 Innovation, 268 Input-Output Analysis, 230 Insulated, 473 Integrated environmental education, 167 Integrated Water Resources Management, 300
Integration, 99 Interactive., 546 Inter-symbol interference, 621 Investment appraisal, 10 Irrigation, 251 K
Kriging, 211 L Labour, 277, 285 La Digue Island, 437 Large-Scale, 580 Legal framework, 23 Legendre polynomials, 554 Legislation, 285 Light intensity dependence, 395 Lightning, 421 LMS algorithm, 594 Linear regression models, 238 Line Efficiency, 437 L-moments, 238 Local communities, 167 LoColms, 562 Loss estimation, 309 Low-noise amplifier, 612 Low power, 612 Low voltage, 612
Measurement, 221 Merowe Dam, 135 Meso-scale hydraulic features, 495 Microprocessor applications, 429 MIL-SOM, 522 Mineralogy, 75 Mineral matter, 368 Mini-bus taxi industry, 117 Models, 488 Monotoring system, 159 Moisture content, 368 Moisture damage, 125 Motor efficiency, 404 Multi-Agent, 580 Multi-criteria analysis, 10 Multimedia, 546 Multipath propagation, 621 Multiple Perspective Scheduling, 572 Multi-Transmission Lines, 421 N Natural resource base, 167 Negotiation, 580 Neighbourhood, 23 Nigeria, 251,375 Nile River, 135 Noise figure, 612
M
Mapping, 230 Manufacturing industry, 488 Material clarification, 315 Material classification, 315,321 Materials, properties, 454 Mathematics, 546 Mean annual flood, 238
Pedal powered PC, 635 Performance, 221 Performance assessment, 481 Phosphorus sorption, 185 Physics, 546 Plate Load Test, 203 Pluvial Tomatoes, 533 Policy framework, 59 Pollution, 1,159 Polythiophene, 395 Porcelain, 454 Potential energy, 554 Poverty, 473,538,546 Power delay profile, 621 Power plants modelling, 354 Pozzolans, 75 PPP, 562 Praslin Island, 437 Preferences, 580 Private public transPort, 117 Probability weighted moments, 238 Production Scheduling, 572 Productivity, 260, 268, 277 Proxy Cache Server, 562 Public transport, 117 Pyrethrin, 339 Pyrethrum, 339 Pyrolysis, 368
O Occupational, 285 Oleoresin, 339 Opponent, 580 Optimisation, 404 Order Due-Date Compliancy, 572 OSH, 285
QoS, 603 Quality assessment, 31, 40 Quality systems, 31 Quantum efficiency, 395
P
R
Particle dynamics, 315
Radio transmitter, 647
Q
RADIUS, 309 Ranking, 277 Real-time, 647 Reasoning, 580 Regional flood frequency analysis, 238 Regulation, 285 Reinforced Concrete Structures, 109 Reinforcement, 84 Remote Sensing, 95 Renewable, 473 Resilient Modulus, 203 Resonator, 612 Resource Utilization, 572 Rice, 251 Riparian Vegetation, 495 Roads, 10 Rodrigue's formula, 554 Role of women, 59 R-scale, 647 Rural, 538, 546 Rural Concept, 533 Rural Water Supply, 176 S
Safety, 285 Sand blending formula, 143 Saturation, 125 SAW filter, 612 Secondary School, 538, 546 Seismicity, 647 Semivariograms, 211 Sensitivity analysis, 10 Seychelles, 437 Shear Strength, 203 Shear stress, moments, 84 Signal fading, 621 Site, 40 Skills development, 91 Skill levels, 488
SMS, 647 Software tools, 10 Solar cells, 395 Solar collector heater, 326 Solar powered ICT, 635 SOM, 522 Sorption Equilibrium, 185 Sorption Models, 185 Sorption Isotherms, 185 Spatial Data Mining, 522 Specific fuel consumption, 347 Spherical polar coordinates, 554 Stakeholder participation, 300 Stoves, 473 Sugar mill, 465 Supercritical fluid, 339 Supervision, 309surface temperature maps, 195 Sustainable development, 167, 514 Swamp, 251 T Taylor series expansion, 554 Technical models, 10 Technology, 473 Technology transfer, 91 Thermal infrared anomalies, 195 Temperature dependence, 395 Terminal velocity, 321 Thermal Chemical Conversion, 368 Timber structures, 99 Time of travel of fluctuations, 135
Total maximum daily load, 1 Toxic substances, 159 Transient Surges, 421 Transmissivity, 211 Tsunami, 647 Tuff, 75 Turbulent viscosity, 504 U Ubiquitous Computing, 514 Ubiquitous Society, 514 Uganda, 176, 300, 454, 546 Uganda Clays Limited, 572 Unit-point charge, 554 Urbanisation, 23 V Vehicle emission control, 347 Vertical Extensions, 109 Vibrations, 99 Virtual School, 562 Visualization, 522 Volcanic ash, 75 Volcanic ash briquettes, 143 Voltage Regulation, 437 Vulnerability, 309
W
Warm and cool season patterns, 195 Waste water, 159 Water, 251 Water quality management, 1 Water Supply Coverage, 176 Wheel paths, 125
Wiener filtering, 594 Wi-Fi, 635 WiMAX, 635 Wind damage, 49 Wind environment, 49 Wind erosion, 49 Wind-tunnel, 49 Wind turbine control, 429 Workers, 285 Working hours, 59 Workmanship, 309 XYZ ZnO/Si/A1, 612