Human and Nature Minding Automation
International Series on
INTELLIGENT SYSTEMS, CONTROL AND AUTOMATION: SCIENCE AND ENGINEERING
Volume 41
Editor
Professor S. G. Tzafestas, National Technical University of Athens, Greece

Editorial Advisory Board
Professor P. Antsaklis, University of Notre Dame, IN, U.S.A.
Professor P. Borne, Ecole Centrale de Lille, France
Professor D. G. Caldwell, University of Salford, U.K.
Professor C. S. Chen, University of Akron, Ohio, U.S.A.
Professor T. Fukuda, Nagoya University, Japan
Professor F. Harashima, University of Tokyo, Japan
Professor S. Monaco, University La Sapienza, Rome, Italy
Professor G. Schmidt, Technical University of Munich, Germany
Professor N. K. Sinha, McMaster University, Hamilton, Ontario, Canada
Professor D. Tabak, George Mason University, Fairfax, Virginia, U.S.A.
Professor K. Valavanis, University of Southern Louisiana, Lafayette, U.S.A.
Professor S. G. Tzafestas, National Technical University of Athens, Greece
For other titles published in this series, go to www.springer.com/series/6259
Spyros G. Tzafestas
Human and Nature Minding Automation
An Overview of Concepts, Methods, Tools and Applications
Spyros G. Tzafestas
School of Electrical and Computer Engineering
National Technical University of Athens
15773 Athens, Greece
[email protected]
ISBN 978-90-481-3561-5
e-ISBN 978-90-481-3562-2
DOI 10.1007/978-90-481-3562-2
Springer Dordrecht Heidelberg London New York
Library of Congress Control Number: 2009941472
© Springer Science+Business Media B.V. 2010
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.
Cover design: eStudio Calamar S.L.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Dedicated to my wife, Niki
Preface
Man is the best thing in the World. Nature does nothing uselessly.
Aristotle

There is a pleasure in the pathless woods,
There is a rapture on the lonely shore,
There is society, where none intrudes,
By the deep sea, and music in its roar:
I love not Man the less, but Nature more.
Lord Byron

The basic purpose of development is to enlarge people's choices. The objective of development is to create an enabling environment for people to enjoy long, healthy and creative lives.
Mahbub ul Haq, Founder of the Human Development Report
The aim of this book is to provide a compiled set of concepts, principles, methods and issues used for studying, designing and operating human-minding and nature-minding automation and industrial systems. The depth of presentation is sufficient for the reader to understand the problems involved and the solution approaches, to appreciate the need for human–automation cooperative interaction, and to recognize the importance of the efforts required for environment and ecosystem protection during any technological and development process in society.

Humans and technology live, and have to live, together in a sustainable society and nature. Humans must not be viewed as components of automation and technology in the same way as machines. Automation and technology must incorporate humans' needs and preferences, and radiate "beauty" in all ways, namely functionally, technically and humanistically. Overall, automation and technology should create comfort and give pleasure.

The achievement of human-minding or human-centered automation was made possible by employing concepts and techniques of the human factors and ergonomics field. This is the easy part of the human- and nature-minding automation and technology story. To achieve truly nature-minding industry and automation, more complex and difficult decisions and tools are required. The partners here are not only the machines and the scientists or engineers, but also the politicians and
governors worldwide. Nature-minding design has to determine the way a product is produced on the basis of its impact on nature and the ecosystem, including pollution, waste generation, biodiversity decrease, and consumption of the Earth's resources. A society can develop in a truly sustainable way only if it manages its human, economic, natural and cultural resources as a whole, not only in the short term but also in the long term. In our time the problems of sustainable symbiosis among humans, automation/technology, and nature have become more difficult and crucial than ever.

This book provides a consolidated tutorial overview of these problems and their solutions, suitable for scientists and professionals interested in the humanistic and environmental issues of the use of technology and automation in modern society's activity and development. It is primarily intended for use as a general information source, but academic teachers of applied sciences, behavioral sciences, and engineering can use the material of the book in relevant introductory human, automation, and/or environment courses.

Athens, June 2009
Spyros G. Tzafestas
Contents
Everything should be made as simple as possible, but not simpler.
Albert Einstein

More power, and more choice, and more freedom require more wisdom if they are to add more humanity.
Emmanuel G. Mesthene

The machine replaced human labor and now human brain-power. But I think technology's next step will be to work for the spirit, the heart.
Sotori Miyagi
1 Automation, Humans, Nature, and Development
   1.1 Introduction
   1.2 The Field of Automation
   1.3 Brief History of Control and Automation
   1.4 The Principle of Feedback
      1.4.1 Some Examples
   1.5 The Humans in Automation
   1.6 Automation in the Nature
   1.7 Social Issues of Automation
      1.7.1 Training and Education
      1.7.2 Unemployment
      1.7.3 Quality of Working Conditions
      1.7.4 Productivity and Capital Formation
      1.7.5 Advantages
      1.7.6 Disadvantages
   1.8 Human Development and Modernization
      1.8.1 Human Development Components
      1.8.2 Modernization
      1.8.3 Human Development Index
      1.8.4 Life Expectancy, Literacy and Standard of Living
      1.8.5 Human Development Report

2 Human Factors in Automation (I): Building Blocks, Scope, and a First Set of Factors
   2.1 Introduction
   2.2 The Human Factors Field: Building Blocks and Scope
      2.2.1 Building Blocks
      2.2.2 The Human Features
      2.2.3 Human–Automation Relation
      2.2.4 Automation
      2.2.5 Goals and Scope of the Human Factors Field
   2.3 Human Factors in Automation System Design and Development
      2.3.1 General Issues
      2.3.2 Developmental Elements
      2.3.3 System Development Concepts
   2.4 The Workload Factor in Automation
   2.5 Three Key Human Factors in Automation
      2.5.1 Allocation of Function
      2.5.2 Stimulus–Response Compatibility
      2.5.3 Internal Model of the Operator
   2.6 The Operator Reliance Factor

3 Human Factors in Automation (II): Psychological, Physical Strength, Human Error and Human Values Factors
   3.1 Introduction
   3.2 Psychological Factors
      3.2.1 Job Satisfaction
      3.2.2 Job Stress
      3.2.3 A Psychosocial Stress Model
   3.3 Physical Strength
   3.4 Human Bias
   3.5 Human Error
      3.5.1 Skill-Based Error-Shaping Factors
      3.5.2 Rule-Based Error-Shaping Factors
      3.5.3 Knowledge-Based Error-Shaping Factors
   3.6 Human Values and Human Rights

4 Human–Machine Interaction in Automation (I): Basic Concepts and Devices
   4.1 Introduction
   4.2 Applications of Human–Machine Interactive Systems
   4.3 Methodologies for the Design of Human–Machine Interaction Systems
   4.4 Keys and Keyboards
      4.4.1 Keyboard Layout
   4.5 Pointing Devices
      4.5.1 Touch Screens
      4.5.2 Light Pens
      4.5.3 Graphic Tablets
      4.5.4 Track Balls
      4.5.5 Mouse
      4.5.6 Joysticks
      4.5.7 Selection of the Input Device
   4.6 Screen Design
      4.6.1 Screen Density Reduction Methods
      4.6.2 Information Grouping and Highlighting
      4.6.3 Spatial Relationships Among Screen Elements
   4.7 Work Station Design
      4.7.1 Physical Layout Factors
      4.7.2 Work Method Factors
      4.7.3 Video Display Terminal Factors

5 Human–Machine Interaction in Automation (II): Advanced Concepts and Interfaces
   5.1 Introduction
   5.2 Graphical User Interfaces
      5.2.1 General Issues
      5.2.2 Design Components of Graphical Interfaces
      5.2.3 Windowing Systems
      5.2.4 Components of Windowing Systems
   5.3 Types and Design Features of Visual Displays
      5.3.1 Visual Display Types
      5.3.2 Further Design Features of Visual Displays
   5.4 Intelligent Human–Machine Interfaces
   5.5 Natural Language Human–Machine Interfaces
   5.6 Multi-Modal Human–Machine Interfaces
   5.7 Graphical Interfaces for Knowledge-Based Systems
      5.7.1 End-User Interfaces
      5.7.2 Graphical Interfaces for the Knowledge Engineer
   5.8 Force Sensing Tactile Based Human–Machine Interfaces
   5.9 Human–Machine Interaction via Virtual Environments
   5.10 Human–Machine Interfaces in Computer-Aided Design

6 Supervisory and Distributed Control in Automation
   6.1 Introduction
   6.2 Supervisory Control Architectures
      6.2.1 Evolution of Supervisory Control
      6.2.2 Rasmussen's Architecture
      6.2.3 Sheridan's Architecture
      6.2.4 Meystel's Nested Architecture
   6.3 Task Analysis and Task Allocation in Automation
   6.4 Distributed Control Architectures
      6.4.1 Historical Remarks
      6.4.2 Hierarchical Distributed Systems
      6.4.3 Distributed Control and System Segmentation
   6.5 Discrete Event Supervisory Control
   6.6 Behavior-Based Architectures
      6.6.1 Subsumption Architecture
      6.6.2 Motor Schemas Architecture
   6.7 Discussion

7 Implications of Industry, Automation, and Human Activity to Nature
   7.1 Introduction
      7.1.1 The Concepts of Waste and Pollution Control
   7.2 Industrial Contaminants
      7.2.1 Organic Compounds
      7.2.2 Metals and Inorganic Nonmetals
   7.3 Impact of Industrial Activity on the Nature
      7.3.1 Air Pollution
      7.3.2 The Earth's Carbon Cycle and Balance
      7.3.3 Global Warming, Ozone Hole, Acid Rain and Urban Smog
      7.3.4 Solid Waste Disposal
      7.3.5 Water Pollution
   7.4 Energy Consumption and Natural Resources Depletion
   7.5 Three Major Problems of the Globe Caused by Human Activity
   7.6 Environmental Impact: Classification by Human Activity Type

8 Human-Minding Automation
   8.1 Introduction
   8.2 System-Minding Design Approach
   8.3 Human-Minding Automation System Design Approach
   8.4 Human-Minding Interface Design in Automation Systems
      8.4.1 User–Needs Analysis
      8.4.2 Task Analysis
      8.4.3 Situation Analysis and Function Allocation
   8.5 The Human Resource Problem in Automation
      8.5.1 Allocation of System Development Resources
      8.5.2 Investment in Human Resources
      8.5.3 Innovation and Technology Transfer
   8.6 Integrating Decision Aiding and Decision Training in Human-Minding Automation
      8.6.1 Novice
      8.6.2 Expert
   8.7 International Safety Standards for Automation Systems
   8.8 Overlapping Circles Representation of Human-Minding Automation Systems

9 Nature-Minding Industrial Activity and Automation
   9.1 Introduction
   9.2 Life-Cycle and Environmental Impact Assessments
      9.2.1 Life-Cycle Assessment
      9.2.2 Environmental Impact Assessment
   9.3 Nature-Minding Design
   9.4 Pollution Control Planning
   9.5 Natural Resources-Energy Conservation and Residuals Management
      9.5.1 Water Conservation
      9.5.2 Energy Conservation
      9.5.3 Residuals Management
   9.6 Fugitive Emissions Control and Public Pollution Control Programs
      9.6.1 Fugitive Emissions Control
      9.6.2 Public Pollution Control Programs
   9.7 Environmental Control Regulations
      9.7.1 General Issues
      9.7.2 Environmental Regulations in the United States
      9.7.3 International and European Environmental Control Regulations
   9.8 The Concept of Sustainability
   9.9 Environmental Sustainability Index
   9.10 A Practical Guide Towards Nature-Minding Business-Automation Operation
      9.10.1 The Four Environmental R-Rules
      9.10.2 Four More Nature-Minding Rules
   9.11 Nature-Minding Economic Considerations
      9.11.1 Cost Allocation: The Polluter-Pays Principle
      9.11.2 Environmental Standards
   9.12 Nature-Minding Organizations

10 Modern Automation Systems in Practice
   10.1 Introduction
   10.2 Office Automation Systems
   10.3 Automation in Railway Systems
   10.4 Automation in Aviation Systems
      10.4.1 Aircraft Automation
      10.4.2 Air Traffic Control
      10.4.3 The Free Flight Operational Concept
   10.5 Automation in Automobile and Sea Transportation
      10.5.1 Advanced Traveler Information Systems
      10.5.2 Collision Avoidance and Warning Systems
      10.5.3 Automated Highway Systems
      10.5.4 Vision Enhancement Systems
      10.5.5 Advanced Traffic Management Systems
      10.5.6 Commercial Vehicle Operations
      10.5.7 Sea Transportation
   10.6 Robotic Automation Systems
      10.6.1 Material Handling and Die Casting
      10.6.2 Machine Loading and Unloading
      10.6.3 Welding and Assembly
      10.6.4 Machining and Inspection
      10.6.5 Drilling, Forging and Other Fabrication Applications
      10.6.6 Robot Social and Medical Services
      10.6.7 Assistive Robotics
   10.7 Automation in Intelligent Buildings
   10.8 Automation of Intra- and Inter-Organizational Processes in CIM
      10.8.1 Intra-Organizational Automation
      10.8.2 Inter-Organizational Automation
   10.9 Automation in Continuous Process Plants
   10.10 Automation in Environmental Systems
   10.11 Discussion on Human- and Nature-Minding Automation and Technology Applications

11 Mathematical Tools for Automation Systems I: Modeling and Simulation
   11.1 Introduction
   11.2 Deterministic Models
   11.3 Probabilistic Models
      11.3.1 Discrete Probability Model
      11.3.2 Continuous Probability Model
      11.3.3 Bayes Updating Formula
      11.3.4 Statistics
   11.4 Entropy Model
   11.5 Reliability and Availability Models
      11.5.1 Definitions and Properties
      11.5.2 Markov Reliability Model
   11.6 Stochastic Processes and Dynamic Models
      11.6.1 Stochastic Processes
      11.6.2 Stochastic Dynamic Models
   11.7 Fuzzy Sets and Fuzzy Models
      11.7.1 Fuzzy Sets
      11.7.2 Fuzzy Systems
   11.8 System Simulation
      11.8.1 Simulation of Dynamic Systems
      11.8.2 Simulation of Probabilistic Models

12 Mathematical Tools for Automation Systems II: Optimization, Estimation, Decision, and Control
   12.1 Introduction
   12.2 System Optimization
      12.2.1 Static Optimization
      12.2.2 Dynamic Optimization
      12.2.3 Genetic Optimization
   12.3 Learning and Estimation
      12.3.1 Least-Squares Parameter Estimation
      12.3.2 Recursive Least Squares Parameter Estimation
      12.3.3 Least Squares State Estimation: Kalman Filter
      12.3.4 Neural Network Learning
   12.4 Decision Analysis
      12.4.1 General Issues
      12.4.2 Decision Matrix and Average Value Operators
      12.4.3 Fuzzy Utility Functions
   12.5 Control
      12.5.1 Classical Control
      12.5.2 Modern Control
   12.6 Concluding Remarks

References

Index
Outline of the Book
Progress imposes not only new possibilities for the future but new restrictions.
Norbert Wiener

Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves.
Herbert Simon

Participation is one of the ends as well as one of the means of development.
UN System Network on Rural Development and Food Security
The book comprises 12 chapters. The first 10 chapters, which constitute the main body of the book, present the concepts, principles, technologies and methods without any mathematics. Chapters 11 and 12 provide a brief exposition of the basic underlying mathematical models and tools that are available and used in the analysis and design of automation systems.

Chapter 1, "Automation, Humans, Nature and Development", is a general introductory chapter that provides the definition of automation, the landmarks of the history of control and automation, the role of humans in automation, and the position of automation and technology in nature, including a short discussion of the social issues of automation and the issues of human development and modernization.

Chapters 2 and 3 deal with the human and ergonomic factors in automation. Specifically, Chapter 2 presents the building blocks and the scope of the human factors field. Here, a first set of human factors is examined, namely: the workload factor, allocation of function, stimulus–response compatibility, the internal model of the operator, and the operator reliance factor.

Chapter 3 examines a second set of human factors relevant to automation systems. These are psychological factors (job satisfaction, job stress), physical strength, human bias, and human error. The chapter concludes with a discussion of human
values and human rights, which must be respected by automation systems' technological, managerial, organizational, and production processes.

Chapters 4 and 5 are concerned with human–machine interaction in automation, which is the central prerequisite for achieving harmonic human–machine cooperation. Chapter 4 presents the basic concepts of human interface devices (keyboards, mice, pointing devices) and discusses screen design and workstation design. Chapter 5 examines the major advanced human–machine interfaces, namely graphical user interfaces, intelligent interfaces, natural language interfaces, and human–machine interaction via virtual environments.

Chapter 6 discusses the principal supervisory control architectures proposed for human–automation systems. These are Rasmussen's S-R-K architecture, Sheridan's five-function architecture, and Meystel's nested architecture. Then the distributed control architectures, used mainly in the process control industry, are considered. Finally, the discrete event supervisory control concept and the behavior-based control architectures are presented.

Chapter 7 deals with the implications of industry, automation and general human activity to nature, namely: air pollution, solid wastes and water pollution, and the phenomena of global warming, ozone thinning, acid rain and urban smog, including the depletion of natural resources and energy, and the impact of fishing, transport, trade, tourism, households and biotechnology.

Chapter 8, "Human-Minding Automation", discusses the basic issues and requirements for achieving the desired human-centered symbiosis of automation and humans. Three basic problems that have to be solved are user-needs determination, task analysis and design, and function allocation (to human and automation). These problems are examined in some detail, along with the human resource problem and the integration of decision aiding and decision training. Finally, a short look is taken at the internationally set safety standards of automation components and systems.

Chapter 9, "Nature-Minding Industrial Activity and Automation", is concerned with the problems that must be addressed for achieving automation–technology–nature symbiosis that leads to sustainable development. The methods discussed include life-cycle assessment, environmental impact assessment, design for reuse, remanufacturing, recycling, pollution control planning, fugitive emissions control, and municipal pollution control programs. The chapter continues with the environmental control regulations, a discussion of the sustainability concept and the environmental sustainability index initiative, a practical guide for nature-minding company operation, and an outline of the key nature-minding economic issues. Finally, a list of worldwide nature-minding organizations is provided.

Chapter 10, "Modern Automation Systems in Practice", gives a representative set of real-life examples where automation, combined with human interaction, has been applied with great success. These examples are office automation, railway, aviation, automobiles, sea transportation, industrial and service/assistive robotics, intelligent buildings, computer-integrated discrete manufacturing, continuous process industry, and environmental systems. These examples show that human factors
and human–machine interfaces play a dominant role in all cases, of course with particular differences in the details of the design. A discussion of human- and nature-minding automation and technology applications closes the chapter.

Chapters 11 and 12 are intended for the reader who wishes to see what mathematical models and tools are used, and can be used, for the analysis and design of automation systems. Chapter 11 deals with system modeling and simulation. Deterministic and probabilistic or stochastic models are discussed, including continuous-time and discrete-time state-space models, Bayesian and Markovian models, entropy models, reliability models, and fuzzy logic models. The study of simulation concentrates on the simulation of dynamic systems using the Euler and Runge–Kutta techniques, and the simulation of probabilistic models using the Monte Carlo technique.

Chapter 12 presents the fundamentals of mathematical system optimization, parameter and state estimation, decision making, and feedback control. The material offered includes static optimization, dynamic optimization, learning, least-squares estimators, neural networks, decision analysis, utility theory, and classical and modern control. Several simple examples are included to show how the methods are used and what kind of results are obtained.

Overall, the book provides a good picture of the current state of the art regarding the symbiosis of human, automation/technology, and nature. Of course, much remains to be done for achieving new technological, economic and social systems with further human- and nature-friendly features, and for fully accepting and implementing the United Nations and European Union agreements and regulations for the protection of the environment and ecosystem. Intensive and deep studies in the field (see Section 9.9) have convincingly shown that the three pillars of global sustainability and sustainable development are economic growth, social progress, and nature (environment) protection, as pictorially illustrated in the following figure.
Fig. 1 The three pillars of the sustainable development (SD) building: economic growth, social progress, and nature protection (adapted from http://www.sustainability-ed.org/pages/what3-1.htm)
The next figure shows pictorially the elements of the human–automation–nature symbiosis concept.
Fig. 2 Human–Automation–Nature Symbiosis is a fundamental prerequisite for sustainable development (Picture design by Entergraphics, Nafpaktos, Greece)
Chapter 1
Automation, Humans, Nature, and Development
All human beings are born free and equal in dignity and rights.
Universal Declaration of Human Rights

I hold that while a man exists, it is his duty to improve not only his own condition, but to assist in ameliorating mankind.
Abraham Lincoln

Man's ability to participate intelligently in the evolution of his own system is dependent on his ability to perceive the whole.
Immanuel Wallerstein
1.1 Introduction

Automation systems perform many operations and activities that can be monitored and controlled at several levels of abstraction. A modern automated system has to be able to adapt to fast internal and external changes. To this end, a variety of successful models and control and supervision techniques have been developed during the last five decades, based on the principles of systems engineering, information technology, human factors engineering, and management science [31, 158, 513]. A central position in modern automation is held by the human, who performs several functions, physical, mental, or both. Thus, the attention of automation systems scientists and engineers soon turned towards the study of the physical, mental, and psychological features of the human at work. This has produced the field of human factors engineering, and has led to the so-called human-centered automation [32, 523].

After a brief discussion of "what is automation", we give some historical landmarks of control and automation, and present in an elementary way the concept of feedback control. Next, we provide an outline of the role of humans in automation and a list of the effects that automation (and technology) has on nature (the Earth). On the basis of the above, we then explain the title of the present book, "Human and Nature Minding Automation". Then we present a number of important social issues of automation. Finally, we discuss the human development (HD) and modernization process, including the HD index and the HD report.
1.2 The Field of Automation

Modern systems are large and complex, and so for their analysis and design one needs to study not only the characteristics of their subsystems, but also their interactions. The whole ("holon", from the Greek word ὅλον) is much more than the sum of the parts, and determines the parts. As the German philosopher Hegel (1770–1831) said, "the parts cannot be studied if we isolate them from the whole, because they are interdependent and connected dynamically". The problem of making the proper decisions and exerting the necessary control actions that assure the achievement of the desired performance (cost, productivity, reliability, resource distribution, life time, environmental impact, etc.) is called the "system design problem" [92, 132, 370]. The technological part of this problem contains only the machines. The overall problem also contains the humans (managers, engineers, programmers, operators, workers, etc.). Thus the system design problem has a dual nature, namely technological and behavioral, which dictates the combined way in which it must be treated.

The term Automation (automatic organization) was coined in 1952 by D. S. Harder of the Ford Company to denote the methodology which analyzes, organizes and controls the production means such that all material, machine and human resources are used in the best way [31, 158]. The principal goal of automation is the optimal allocation of the human effort (muscular, mental) so as to maximize productivity, i.e., the ratio of the product obtained to the human effort needed to obtain it. It is clear that the automation design problem is equivalent to the system design problem.

Today, the term automation is used in all cases where the system operation is automated to various degrees. The operation of the system is usually performed in a sequential or parallel multistage manner. Computerized systems with terminals, displays, sensors and other human–computer interfaces are now considered part of automation (even if they do only processing and not control or supervision). The system whose operation is to be automated can be any man-made system (physical, chemical, enterprise, or other) [32, 344, 420, 441, 444, 523].

As we shall explain in more detail later, the automated operation of the system is achieved by using the principle of feedback control (automatic control). The feedback loop is closed via suitable measurement and sensing devices, and the control action is exerted by suitable actuators (motors and other prime movers or executives, including the human). One or more computers are used for data storage and data processing, and cooperate with the various elements of the system (machines and humans) via suitable interfaces and displays [444, 513]. An overall picture which shows how the above ideas may be integrated in an automation system is given in Fig. 1.1. Although not explicitly shown in Fig. 1.1, all modern automation systems contain, besides the information processing and control elements, appropriate communication links (channels) – analog and/or digital – through which their various parts communicate (exchanging messages and signals) [37].
Fig. 1.1 Pictorial representation of automation (HMI = human–machine interface)
The box "output to nature" stands for the effects of the automation system on the nature (environment and ecosystem) in which humans live. These effects should also be sufficiently controlled, as will be described later. The box "values, goals and specifications" refers to the human values and goals that must be respected by the automation system, and to the technical specifications of the operation of the system which ensure the desired quality of the product or service delivered to the human customer.
1.3 Brief History of Control and Automation

Automatic control (or control engineering) plays a fundamental role in modern life and lies at the heart of automation. The engineering use of control is much older than the theory, and can be traced back to ancient Egypt, where the Greek engineer Ktesibios (285–222 BC), working for King Ptolemaeos II, designed and constructed the so-called automatic "water clock". About two centuries later, Heron of Alexandria (Greek mathematician and engineer, 100 BC) designed several regulating mechanisms, examples of which are a mechanism for the automatic opening of temple doors and one for the automatic distribution of wine [309, 523]. He was actually the first to use the term automatization, in his work "Peri Automatopoiitikes" (Περί Αυτοματοποιητικής, About Automatization) [309]. A big step forward in the development of control engineering was made during the industrial revolution. The machines that were developed enhanced considerably the potential to turn raw materials into products useful for the public. A major finding of this time was James Watt's fly-ball governor [41, 319, 343], a mechanism able to regulate the speed of a steam engine by throttling the flow of steam (Fig. 1.2).
Fig. 1.2 Sketch of the fly-ball governor
As the engine shaft rotates faster and faster, the centrifugal force acting on the fly-balls pushes the balls out and closes the throttle valve a little, thereby reducing steam flow to the engine and tending to reduce the shaft speed. The opposite effect appears when the engine shaft rotates slower and slower. This way of operation is an example of negative feedback, which is now applied for the regulation and stabilizing control of all systems.

The key results of control theory were developed around the period of the Second World War by Bode, Nyquist, Nichols and Evans, and are now called "classical control theory". In the 1960s came the development of the state-space approach to control, which allowed multivariable control problems to be treated in a unified way. Particular results developed using this approach include the optimal estimator (Kalman filter), the linear pole placement controller, and the linear-quadratic controller (deterministic and stochastic). All these results are now collectively called "modern control theory". Adaptive, hierarchical and decentralized control theory then followed, and in the 1980s the so-called "robust control theory" (H2, H∞, ℓ1 theory) was developed. In parallel with the above theories, substantial work was done in the analysis and design of nonlinear controls.
On the technology side, the evolution of automated computerized control of numerical machines has passed through the following five main stages:

Stage 1: Appearance of simple production
Stage 2: Fixed automated machines and production lines (beginning of the twentieth century)
Stage 3: Machine tools with simple automatic control
Stage 4: Introduction of numerical control (NC) in machine tools (1952)
Stage 5: Appearance of numerical control of machine tools using computers (1970), the so-called computerized numerical control (CNC)

In 1912 Henry Ford achieved a production of 1,000 cars per day using the principle of mass production, in a period in which the private car was really a luxury good. In 1924 the English company Morris at Coventry produced the first automated transportation machine. The industrial robots were developed in parallel with CNC. The first industrial robot was put in operation in 1961, but robots started playing a dominant role at the end of the 1970s. Nowadays, automation via robots and computers is applied to both industrial and non-industrial tasks (services, medical applications, etc.).

Automation of discrete and continuous products was aided by the parallel evolution of computers. The electronic computer was introduced as a principal component of automation in 1960, although the first computer with a predefined program was developed in 1940 by Bell Laboratories, and the first commercial sale of an electronic computer – UNIVAC – took place in 1951. The transistor was invented in 1948, and from 1950 computers have used printed circuits. The year 1964 is considered the starting point of third-generation computers, with the use of printed circuits and microcircuits based on MOS (Metal Oxide Semiconductor), FET (Field Effect Transistor) and TFT (Thin Film Transistor) technology. Currently we use computer networks and the "Internet" and "World Wide Web", which was invented at the beginning of the 1990s by T. Berners-Lee at CERN (Conseil Européen pour la Recherche Nucléaire, Geneva), in his effort to facilitate information exchange among scientists in different universities and institutions all over the world.
1.4 The Principle of Feedback

The principle of feedback control will first be presented using the basic diagram of Fig. 1.3, and then illustrated by a few simple practical examples [122, 407]. The basic elements of this feedback control system, which is also called a closed-loop control system, are the following:

- The system or process under control
- The sensor or feedback element, which measures the actual output of the system (a certain characteristic variable or feature of the system's product)
- The error detector, which compares the real (actual) output with the desired output and sends the error to the controller
- The controller, which processes the error according to the given goals and provides the control signal to the system through the actuator
- The actuator or effector, which produces the control action in accordance with the control signal and exerts it upon the system

Fig. 1.3 Operational diagram of a typical feedback control system

The goals are set by the owner, the designer or the user of the system. In order for the control to be effective, the decisions and control actions must be exerted without, or with only a very small, time delay; otherwise special care is needed. The fact that the above system involves negative feedback is indicated by the negative sign in the feedback path, which produces the error signal ε = x − y. If the actual output y is greater than the desired output x, then the error ε is negative, and the controller–actuator pair exerts a negative action on the system, forcing it to decrease the actual output y towards the desired output and reduce the error. If y < x, then ε > 0 and the action imposed on the system is positive, so as to increase y and again reduce the error.

The above type of controller, which produces a control signal proportional to the actual error, is called a Proportional Controller (P). Other types of control use either the integral of the error (Integral Controller: I) or the derivative of the error (Derivative Controller: D). In practice, we usually use combinations of the above controllers, namely PI, PD, or PID. A controller which is also frequently used in practice is the two-valued controller. Here, the control signal u takes a value in accordance with the sign of the error, which changes at a desired sequence of time instants, called switching times. If the two values of u are 0 and 1, the controller is called an on–off controller; if they are −1 and +1, the controller is called a bang–bang controller.
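To make these controller types concrete, the following minimal sketch implements them in code. Python is used purely for illustration (the book itself contains no code), and the gains kp, ki, kd and the sampling interval dt are hypothetical tuning values rather than quantities from the text. Setting ki = kd = 0 recovers the pure proportional (P) controller described above, while bang_bang realizes the two-valued controller.

```python
class PIDController:
    """Discrete-time PID control law: u = Kp*e + Ki*integral(e) + Kd*de/dt.

    With ki = kd = 0 this is a P controller; keeping only some of the
    terms gives the PI, PD, I, or D variants mentioned in the text.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # running integral of the error (I term)
        self.prev_error = 0.0   # previous error, for the derivative (D term)

    def update(self, desired, actual):
        error = desired - actual                          # error detector: e = x - y
        self.integral += error * self.dt                  # accumulate for the I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


def bang_bang(desired, actual):
    """Two-valued controller: the control signal is -1 or +1
    according to the sign of the error."""
    return 1.0 if desired - actual >= 0.0 else -1.0
```

Calling update() with the desired and actual outputs at each sampling instant closes the loop of Fig. 1.3 in software: the subtraction plays the role of the error detector, and the returned value is the control signal sent to the actuator.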
1.4.1 Some Examples 1. Position control of a water valve Figure 1.4 shows a system regulating a water valve. The differential potentiometer is of the rotating type. Part 1 of the potentiometer transmits the input voltage x and
1.4 The Principle of Feedback
7
Potentiometers
x
y’
x’
+ −
e Amplifier
+ −
Water valve u
Motor
Axis y
Fig. 1.4 A negative feedback system regulating a water value
Potentiometer 1
x’ +
e
Amplifier
u
Motor
y’ −
y
Water valve Load
Potentiometer 2
Fig. 1.5 Operational diagram of the water valve regulator
+ + e Amplifier
U volts
Generator
u
Motor
−
Angle of rotation
+ U’
Tachometer
−
Fig. 1.6 A negative feedback system controlling the speed of a motor
Part 2 transmits the output y. Thus, the voltage signal © is the error x y. The amplifier is used to amplify the error © such that to be able to drive the motor. The system can be redrawn as in Fig. 1.5 to fit the operational diagram of Fig. 1.3. 2. Motor speed control Figure 1.6 shows a system which controls the speed of an electric motor. The input potentiometer is of the rotating type. The motor axis is unloaded. The operational diagram of the system (see Fig. 1.3) has the form shown in Fig. 1.7. 3. Speed control of the steam engine Returning to the speed control of the steam engine using the fly-ball governor of Watt (see Fig. 1.2) we can draw its operational diagram which is as shown in Fig. 1.8. The speed of rotation is determined by the steam flow and the external load of the engine.
8
1 Automation, Humans, Nature, and Development
Input
Potentio- U + meter
e
Amplifier
Generator
u
Output Speed
Motor
y
− U’
Tachometer
Fig. 1.7 Operational diagram of the motor speed control system Load Steam flow
Steam valve
Output (Speed)
Steam engine
Watt’s Governor
Fig. 1.8 Operational diagram of the steam engine speed control system
Fig. 1.9 Driving control system of a car
The governor (feedback element) takes the engine's speed as input, and gives a negative feedback signal (here the displacement of a lever) to the steam valve.

4. Direction control of a car

The operational diagram of the direction (driving) control system of a car has the form shown in Fig. 1.9. Here, the error detector and controller is the human driver, and the control is manual (not automatic). The driver senses the actual direction of the car and compares it with the desired direction on the road. If the actual direction is to the left of the desired one, he (she) turns a little to the right. In the opposite case the turn is to the left. If the car goes in the correct direction, no action is taken by the driver
(the steering wheel is kept at this position). Of course, it is assumed that the driver is not over- or under-reacting, i.e., that the amplification (gain) applied to the direction error is the correct one. A numerical sketch of this point is given below.
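The following minimal Python sketch (illustrative only; the gain and step values are assumptions, not from the text) simulates this manual proportional steering loop and shows the effect of the gain:

    # Driver as a proportional controller of the car's direction.
    def simulate_driver(gain, steps=50, dt=0.1):
        desired, actual = 0.0, 1.0       # car starts 1 unit off the desired direction
        for _ in range(steps):
            error = desired - actual     # negative feedback comparison
            actual += gain * error * dt  # proportional steering correction
        return actual

    print(simulate_driver(gain=5.0))   # ~0: well-tuned driver converges
    print(simulate_driver(gain=25.0))  # diverges: over-reacting driver oscillates

With a moderate gain the direction error decays to zero; with an excessive gain each correction overshoots and the "driver" oscillates with growing amplitude, which is precisely the over-reaction mentioned above.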
1.5 The Humans in Automation

Automation and humans have to live together. The principal and permanent element of any automation system is the human, under various roles: decision maker, operator, programmer, maintenance specialist, etc. The basic principle here is that humans should never be subservient to machines and automation, but machines and automation should be subservient to humans. Thus, modern automation systems should be designed in a non-Tayloristic way [471, 559]. Humans and machines have many differences, but they also have many similarities [523].

The idea that the human and machine have to be used in a cooperative and symbiotic way is gaining increasing popularity. The cooperation of the human with automation and machines must start from the design phase, and continue in the manufacturing, installation, operation, and maintenance phases [319, 479]. To achieve the desired symbiosis of humans and machines, the use of interfaces that are intelligent (smart) is needed. Here, interface intelligence allows the inclusion of explicit representations of humans' goals and plans, which constitute the basis of human–machine interaction. These representations make the interfaces capable of understanding humans' actions in terms of the intentions underlying the behavior [477].

Humans are no longer regarded as components of automation systems in the same way as machines and software programs. On the contrary, humans' responsibilities for system performance goals are the "reason of existence" for the hardware and software components. Thus, all decisions for the design and construction of automation systems are made so as to meet, to the maximum extent, the humans' intentions and preferences in achieving the goals for which they are responsible. The term which was used in the literature for this type of automation is human-centered automation (or human-centered technology) [46, 215, 470, 476, 479]. In other words, automation embodies a part of the human purpose of production, and is not designed to replace the skills and abilities of humans, but rather to assist them and make them more efficient. Automation should give scope for skills to change and develop as technology itself develops.

The achievement of human-centered automation systems was made possible by using concepts and techniques from the field of human engineering or human factors engineering or simply human factors [53, 486, 487, 615]. "Human factors" is the field which applies behavioral and
According to F.W. Taylor's Scientific Management: "the workman is told minutely just what he is to do and how he is to do it; and any improvement which he makes upon the orders given to him is fatal to success". From the Greek word συμβίωση (symbiosis, living together) [319].
biological sciences to the design of machines and human–machine systems. The behavioral sciences include cognitive psychology and the broader experimental psychology, which deal with the study of memory, sensation, perception, and reasoning. From the biological sciences, the one which is used for human-centered designs is human physiology, which studies the dynamic behavior of the human organs as "whole entities", i.e., above the cellular level. In many cases use was made of concepts and techniques from sociology, group psychology, and psychometrics. An alternative name for human factors established in the literature is ergonomics (the laws of work, or work study), but ergonomics is usually restricted to the study of human factors that appear in purely physical human work (body kinematics, muscle dynamics, muscular fatigue and, in general, human biomechanics). For personnel selection issues, use is currently made of concepts and techniques from the human resources management field [125, 163, 271, 318, 342, 542].

In this book we use the term human-minding automation, since it better reflects the desired symbiosis of the humans and the machines involved in automation systems. Modern automation should take care of the humans first, and then of the machines, the productivity and the economic return on investment. To this end, the design and management of automation systems should respect the human values and human rights as codified by the relevant international organizations (United Nations, UNESCO, etc.) (see Section 3.6).
1.6 Automation in the Nature

The human-centered design of automation, as it has been described so far, is not adequate for the assurance of a high quality of life in the short term, and of human survival in the long term. As shown pictorially in Fig. 1.1, any automation and technological system operating in our nature (the mother earth) affects it in several ways. Hence arise what the environmental, climatological and ecological scientists call "environmental pollution", "climatic change" and "ecological damage", three very serious problems of our modern life. Other problems with serious consequences on human life that need proper political solutions are the increasing consumption of earth's natural resources, the continuing rapid growth of earth's population, the increasing armament (despite the declarations for the opposite), and the anisotropic (unequal) distribution of human goods (food, education, etc.) in the earth, which leads to the cruel and inhuman poverty of the so-called third-world populations. As is argued in justified and convincing ways by national and international bodies, as well as by well-known scientists and thinkers, all these problems contribute towards
From the combination of the Greek words ergon (ἔργον = work) and nomos (νόμος = law). Ecology comes from the Greek word "οἰκολογία", where "οἶκος" = house and "λόγος" = speech. Here, "οἶκος" is the nature and "λόγος" has the meaning of study. Actually, ecology is the study (science) of the interactions of living and nonliving entities in the nature.
a continuous degradation of the humans' quality of life, and eventually may lead to catastrophic and irreversible implications for the habitability of the earth.

In this book we will deal with the implications of automation technology upon the nature (in the sense mentioned before) and indicate some of the ways in which automation itself can reduce these implications. A term which was adopted for this in the literature of manufacturing automation is "environmentally conscious manufacturing" [187, 609], or green manufacturing, to describe the effort that must be devoted toward reducing the effects of manufacturing upon the environment. Here we use the term "nature-minding automation" and combine it with "human-minding automation" as "human-and-nature minding automation" to show that both the human and the nature must be taken care of by automation (and technology in general). Of course, minding for the human implies that the system should mind for the nature too (the air the human breathes, the water she/he drinks, the food she/he eats, etc.), but the combined term is used to emphasize the strong need to mind for both.

It is remarked that the problem of studying and protecting the nature was a primary concern in the ancient Greek society, as is evidenced by the work of the father of Medicine, Hippocrates (460–377 BC), entitled "About the winds, the waters, and the places" ("Περὶ ἀέρων, ὑδάτων καὶ τόπων"). A representative, but not exhaustive, list of issues that have been under scientific investigation over the years and have been the subject of mass-media attention around the world is the following [387, 424, 583]:
Greenhouse effect (global warming) [288]
Ozone hole (stratospheric ozone thinning) [211, 553]
Acid rain (rain containing nitric and sulphuric acids) [320]
Urban smog (due to violation of clean-air regulations) [385]
Solid waste [402]
Radioactive releases (from industrial accidents and nuclear weapons trials) [400]
Land quality degradation (e.g., by soil erosion and salinization) [101]
Deforestation (clearing of forest land to harvest timber, etc.) [356]
Ecosystem damage (leading to decrease of biodiversity) [11, 35]
These issues will be studied in Chapter 7.
1.7 Social Issues of Automation

In the following we briefly present a number of critical issues of automation that concern the human and society (other than the implications of automation on the nature discussed in Section 1.6). These are [209, 340, 513, 523]:
1.7.1 Training and Education

There is still a shortage of trained or retrained technical experts in the automation field. More well-educated computer and automation experts (scientists, engineers, programmers, operators and technicians) are continuously needed. Automation is still alien to most persons. As a result, they either do not trust it at all or they may over-trust it. Neither is good, and so education and training are needed.
1.7.2 Unemployment

Unemployment was the most important issue in discussions about the social effects of automation one or two decades ago, but it is now at an acceptable equilibrium level due to the increasing generation of new jobs. In any case, automation and related technologies can affect labor in several ways, such as:

The effects of automation on the relative proportion of machines to humans (the capital–labor ratio) in a given industry
The need for expert workers with particular job skills and abilities in a certain industry
The extent of change in production numbers and prices in the countries in which automation and new technology are introduced

To assess the effects of automation on future labor levels, a reference line is needed against which job loss or gain can be measured. This reference line might be a projection of current trends, but it must also take into account the virtual unemployment and virtual employment issues. Virtual unemployment represents the jobs which would have been lost if a given plant or organization had not responded to market demands by automating. Virtual employment represents the jobs which were not explicitly eliminated, but that would have existed had automation not been adopted.
1.7.3 Quality of Working Conditions

Working conditions are improved if automation is used for jobs that are dangerous, boring or unpleasant, and if the new jobs created by automation are better.
Productivity increases may also, in the longer term, result in a shorter and more flexibly scheduled work week.
Equipping an employee with a job helper (e.g., a robot extender) not only eases job stress but also opens job opportunities to people with handicaps or other limitations.

Of course, whether the above benefits are realized depends, in part, on the specific ways in which industry and administration use automation. Many people
1.7 Social Issues of Automation
13
have expressed concern that automation increases the possibilities for employer surveillance of employees, and that automation could be used by employers to “downgrade” jobs that require working with automated systems.
1.7.4 Productivity and Capital Formation

Productivity is a complex concept, not uniquely defined and measured. Furthermore, even after some specific definition is chosen, industrial (and office) productivity depends on many interacting factors. Therefore, productivity improvements cannot be attributed to any single methodology or technology. Robotics, for example, is an input to the productivity ratio P:

P = (Units of output)/(Units of input)

which represents both capital and technical knowledge. Human labor is another input to P. The "human–computer" as a united entity is a third input to P. What combination of inputs to the productivity ratio should be adopted is a social issue of great importance.

Capital formation is another issue of automation related to productivity. Economists often attribute the capability to create new investment capital to the growth of productivity. Two social questions about capital formation are the following:

Is sufficient capital available to fund the construction of new systems, and the modernization of existing ones, that will use automation technologies?
Is there sufficient capital to fund research and development by businessmen who wish to develop new types of automation equipment?

The answers to these questions depend on the legal and economic status of each state, and on the extent to which automation is perceived by investors and managers to be a promising technology in which to invest. Looking particularly at robotics, one of the main components of automation, the following advantages and disadvantages have been documented and recorded in the literature [18, 257, 510].
1.7.5 Advantages

Mechanical Power: Humans can lift about 45–50 kg, whereas robots can lift many tons.
Motion Velocity: Humans can respond and act at 1 cps, whereas robots can do so at 1,000 cps.
Reliability: Robot work is much more reliable than human work.
Sensitivity: Robots have much less sensitivity to environmental conditions (temperature, pressure, vibration) than humans.
Endurance: Robots can work uniformly until breakdown from wear. Humans have much reduced endurance and monitoring capabilities (about 30 min of monitoring).
Precision of Work: Robots have definite precision of work, whereas human precision varies according to physical, psychological and training conditions.
1.7.6 Disadvantages

Robots are incompatible with humans in terms of workspace.
Robots are incompatible with humans in terms of motion (robots move linearly or at acute angles and stop abruptly, but humans do not).
Human safety: some workers are at risk of injury from robots (maintenance workers, programmers, personnel outside the danger zone, etc.).
Robots are difficult to operate (control panels differ from robot to robot: lack of standardization).
Feeling of isolation among workers surrounded by many robots.
Telepresence and virtual reality in telerobotic and other systems raise again old philosophical questions of being and existence in the field of ontology [522].
1.8 Human Development and Modernization

In this section we provide a short discussion of human development and modernization, of which automation and industrialization are two of the fundamental components. Human development (HD) is defined as the process of achieving an optimum level of health and well-being. Naturally, it involves physical, biological, mental, educational, social, economic, and cultural components [45, 99, 139, 256, 379, 610, 611]. Practically, human development is achieved through the enlargement of people's choices, the most critical of which are to lead a long and healthy life, to attain a proper level of education, and to enjoy a decent standard of living. Other important choices include secured human rights and self-respect, and political freedom.

Development theory investigates issues that involve the question whether modern societies represent "progress" over traditional societies. To study this question, development researchers go back to the earliest foundations of modernization theory, i.e., to the traditional society, which is commonly described as "primitive", "backward" and "having rigid social structures", with economies limited to "rural and agricultural levels". Since the middle of the nineteenth century the gap between developed and under-developed countries has increased, and social scientists have been proposing measures and policies that must be followed to
reduce this gap as much as possible. Human development recognizes that people are the real wealth of the nations, and puts the human being at the center of the process; i.e., the primary objective of HD is to create an environment that enables people to live long, healthy and creative lives.
1.8.1 Human Development Components

HD goes beyond BN ("Basic Needs")-type goods and services, and considers other issues, such as freedom, democracy, gender, environment, societal culture, and all other issues that may affect human beings' potential. Human well-being goes beyond money incomes, and HD allows the human to choose his/her priorities; i.e., it is concerned with the "broadening of human choices". Of course, HD accepts that human beings constitute an important resource too, and that they are the objective of development. The three principal components of HD that contribute synergetically to the widening of human choices are the following [611]:

Socioeconomic development
Emancipative value change
Democratization
Socioeconomic development is the most basic component of HD and refers to a class of closely related changes that involve, among others, technological modernization, automation, productivity improvement, betterment of health and life quality, increases of personal income, rising levels of education, widening access to information, and increasing social complexity [139] and social transactions between humans. These changes include social mobilization, urbanization and occupational differentiation, and strengthen horizontal bargaining relations by weakening vertical authority relations. Socioeconomic development increases individual resources, diminishes the most dominant constraints on human choice, and as a result provides people with the objective means of choice.

Emancipative value change is the second component that contributes to human choice. Rising emancipative values direct people's subjective orientation towards human choice. This is compatible with the fact that human choice does not depend only on resources, but is strongly influenced by one's motivation and mind. Removing or weakening the constraints posed on human autonomy changes and reshapes people's value orientations in many ways, known under several names (e.g., individual modernity values, self-expression values, civic cultural values, postmaterialistic values, etc.). Clearly, no matter what terminology is used for the change of values, all these approaches have in common the fact that traditional conformity values, which subordinate human autonomy to community rules, tend to be replaced by more emancipative values dominated by human choice.

Democratization is the most remarkable development of modern society. During the past three or four decades democratization has occurred in two distinct ways:
(i) many authoritarian regimes evolved into formal democracies by establishing democratic constitutions, and (ii) most of the existing formal democracies have applied or widened direct democratic institutions, leading to rising levels of direct civic participation [99].

Discussion. A question about the above three processes, debated over the years, is whether these processes represent irreversible linear societal changes, or follow cyclical patterns with notable setbacks, or are uniformly global or culture-specific within an inherently Western model. As stated in [611], one point (hard to deny) is that: "If socioeconomic development, emancipative value change and democratization occur, they tend to go together". It is a fact that poor societies suffering from scarce resources (e.g., Sub-Saharan African societies) tend to be dominated by conformity values that reflect constraints on human autonomy, and are usually governed by authoritarian regimes. An integrated theory of social change is not yet available, although many modernization scientists have revealed that there are visible relations between socioeconomic development, emancipative values and democracy levels.

The position presented in [611] is that socioeconomic development, elevating emancipative values and effective democracy work synergetically to promote human choice among societies [610]. Socioeconomic development diminishes the most existential restrictions on human choice by increasing individual resources, which broadens the scope of possible human activities and autonomy. Emancipation strengthens people's desire to have free choice and control of their lives. Finally, as already mentioned, democracy, the third component of HD, contributes to the widening of choice via the institutionalization of the legal rights that secure people's freedom to control their private and public activity [379]. The important thing here is that these rights are not only guaranteed formally, but work effectively in everyday life. In this way we get the so-called effective democracy, as contrasted to formal democracy. Effective democracy provides effective rights for human choice. Thus, in this sense, effective democratization is any extension and enrichment of people's effective rights.
1.8.2 Modernization

Modernization is defined to be the transformation of social life from a traditional, rural society to an urban, industrial society. Modernization theory looks at all societal and economic factors of a country, trying to locate and define all social variables that play a major role in social evolution. The ultimate goal is to explain how this process of development takes place, and to find ways of assuring an optimum sustainable change. In addition, modernization theory is concerned with the study of the response to that change [263, 372, 373].

To modernize a society means first of all to industrialize it, which, as is now generally recognized, goes far beyond pure economic and technological change and includes all cultural, social and political issues. Modernization is not a once-and-for-all-time change, but a continuous dynamic open-ended process with uncertain,
uneven and irregular components. Moreover, the modernization process is not restricted to the interior of a particular country, but extends globally, from its Western foundation to the entire world. Along the lines of this modernization principle the process of "globalization" [116, 532] emerged, which by its proponents is meant as the integration of social, political and economic structures and their spread all over the world. Globalization theory tries to explain and theorize the development of a global economy in the direction of a unique central society. Some of the means that play a major role towards globalization are world-wide tourism, communications, large-scale transportation, and new technologies. The theorists of globalization consider that it is the response to new technologies that causes change. However, despite its positive consequences, globalization has also many negative consequences, which include, among others, the widening over time of the difference between the rich and the poor, the people being left behind who are exposed to several kinds of criminal activities, and so on.

Modernization has two principal stages. During the first, progressive and upward, stage, it enhances the institutions and human values. But beyond some point the second stage starts, which is typically characterized by dissatisfaction and discontent of an increasing level. The initial rising expectations are not met, and groups of people move towards increasing demands on the state that are really becoming difficult to meet. At this second stage, modern societies face a gamut of new problems that are difficult to solve within the framework and competence of the conventional nation state. It is remarked here that the processes of industrialization and modernization, which emerged about two centuries ago and have been the subject of scientific study much later, have not yet arrived at any concrete closure.

Two theories of "international development" [142, 265] that historically have strongly questioned the theories of "globalization" and "free market" are the "dependency theory" [85, 123, 151, 548] and the "world-system theory" [79, 80, 220, 339, 605, 606]. Dependency theory, which was first formulated in the 1950s, asserts that low levels of development in underdeveloped countries spring from their dependence on the advanced economies. This is because natural resources flow from underdeveloped and poor countries (called peripheral countries) to a group of developed countries (called core countries), enriching them at the expense of the peripheral countries themselves. Dependency theory opposes the ideology of the free market, which argues that free trade and open markets help poor countries to follow an enriching trajectory towards full economic development and integration into the global economy as equal players. Dependency theory started losing some of its support and influence after the economic success and growth of India and Thailand.

The world-system theory was initiated by Immanuel Wallerstein [220, 605, 606] and lies somewhere between the theories of Marx and Weber. At the theoretical level it is based on the theory of the Annales school of Fernand Braudel (http://fbc.binghampton.edu/), and in many respects it is an adaptation of dependency theory [79, 80]. For Wallerstein "a system is a unit with a single division of labor and multiple cultural systems", and in world-system history there have been three kinds of societies, viz. mini-systems and two types of world systems, namely
single-state world-empires, and multi-polity world economies. The systematic flow of surplus from the periphery to the industrialized high-technology core is what Wallerstein calls "unequal exchange", which leads to "capital accumulation" on a global scale. The world-system theory has inspired a large number of research programs, the most well-known of which is the study of "long-term business cycles". As an interdisciplinary theory, the world-system theory has attracted the attention of sociologists, anthropologists, culture scientists, economists, development investigators, and historians.
1.8.3 Human Development Index

The human development index (HDI) is recognized as the leading measure for ranking human well-being in different countries worldwide. HDI combines normalized measures of the following three indicators [137, 222, 346]:

Life expectancy at birth
Adult literacy rate and mean years of schooling
Income as measured by gross domestic product (GDP) per capita
Of course “human development” involves many other components (already described in this section). Therefore, like all one-dimensional indices that attempt to measure complex variables it is subject to inaccuracies. Nevertheless it is really a good comparative measure of the well-being of a population. This index was developed in 1990 and has been used since then by the United Nations Development Program (UNDP) as the basis for the annual Human Development Report. 226, 234 Life expectancy at birth is an index of the health and longevity of people, adult literacy rate is an index that reflects the knowledge and education of the population, and GDP per capita is a measure of the standard of living (expressed in Purchasing power parity (PPP)) in US dollars. HDI has been questioned right from the beginning of its creation as a redundant index which does not add any significant value to the value of the individual measures that compose it. It has been argued that it is actually an index indicating a relative ranking which is actually useless for inter-temporal comparisons and difficult to interpret since the HDI for a country in a given year depends on the levels of, for example, life expectancy or GDP per capita of other countries in that year. However, the United Nations uses it as a compound indicator of economic development that attempts to go beyond purely monetary measurements by combining GDP per capita with life expectancy and literacy in a weighted average. Mathematically the normalized and unit-free value (index) xindex between 0 and 1 of a variable x that can take a minimum value xmin and a maximum value xmax (in certain units) is given by 222 : xindex D .x xmin /=.xmax xmin /
This permits the addition of several indices to find an overall average index. HDI is actually the average of the following three indices, LE_index, E_index and GDP_index, where LE is the life-expectancy-at-birth variable, E is the education variable and GDP is the gross domestic product variable expressed in PPP US dollars. These three indices are given by:

LE_index = (LE − 25)/(85 − 25)
E_index = (2/3) AL_index + (1/3) GE_index
GDP_index = (log(GDP) − log(100))/(log(40,000) − log(100))

where the adult literacy (AL) index and the gross enrollment (GE) index are given by:

AL_index = (ALR − 0)/(100 − 0), GE_index = (CGER − 0)/(100 − 0)

with ALR being the adult literacy rate (for ages greater than or equal to 15) and CGER being the combined gross enrollment ratio for primary, secondary and tertiary schools. The basic use of HDI is to rank the UN countries according to their level of human development and classify them as developed, developing or under-developed countries. If the HDI is high, the rank in the list can easily be used as a means of national "aggrandizement", and if it is low, it can be used to highlight national insufficiencies.
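As a concrete illustration, the following Python sketch (not part of the original text; the country figures used in the example are hypothetical) computes the HDI from the above formulas:

    import math

    def normalize(x, x_min, x_max):
        # x_index = (x - x_min)/(x_max - x_min), a unit-free value in [0, 1]
        return (x - x_min) / (x_max - x_min)

    def hdi(le, alr, cger, gdp):
        le_index = normalize(le, 25, 85)   # life expectancy at birth (years)
        e_index = (2/3) * normalize(alr, 0, 100) + (1/3) * normalize(cger, 0, 100)
        gdp_index = (math.log(gdp) - math.log(100)) / (math.log(40000) - math.log(100))
        return (le_index + e_index + gdp_index) / 3   # HDI = average of the three

    # Hypothetical country: LE = 70 years, ALR = 90%, CGER = 75%,
    # GDP per capita = 8,000 PPP US$
    print(round(hdi(70, 90, 75, 8000), 3))   # 0.777 -> "developing" range

Note how the logarithm in the GDP index compresses high incomes, so that an extra dollar matters far more to a poor country's index than to a rich one's.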
1.8.4 Life Expectancy, Literacy and Standard of Living

In the following we discuss in a little more detail the three constituents of the human development index [137, 222, 226, 346].

Life expectancy (LE) at a given age is defined as the average number of years of life after that age, and depends very much on the sample community or group of people for which it is measured. From the existing world-wide data it follows that life expectancy varies according to class and gender. In the USA, life expectancy increased considerably in the twentieth century (by about 30 years), mainly due to improvements in public health and medical care. Poverty has a significant and dominating negative effect on life expectancy. Climatic conditions (global and national) have also been documented to have an effect on life expectancy. In some countries very high infant mortalities are observed. In these countries, instead of the life expectancy at birth, the life expectancy at age 5 is used, to exclude the early-childhood mortality component. According to Wikipedia, the present world-wide life expectancy at birth is estimated to be 66.12 years.

Literacy is traditionally defined as the ability to read and write, or the ability to use a language for reading, writing, speaking and listening (Wikipedia and www.ncte.org). According to the UNESCO Education Sector, "Literacy is the ability
to identify, understand, interpret, create, communicate, compute and use printed and written materials associated with varying contexts". A practical literacy standard in several communities is the ability to read a newspaper. But according to the OECD (Adult Literacy Survey, 2000), our modern society's increasing requirements in communication and commercial activities demand the ability to use computers and other information technologies. More advanced present-day literacy requirements include the use of multimedia, the Internet, and other technologies.

The standard of living is based on the quantity and quality of goods and services offered to people, and on how these goods and services are made available and distributed within a given society. The real income (which takes into account inflation) and the poverty rate are two basic indices of the standard of living. Other indices include education, health care and income growth. The concept of standard of living is actually different from the "quality of life" concept which, besides the material standards, incorporates all the other issues of life, such as social life, entertainment, leisure, health, quality of environment, etc.

The conventional measure of the standard of living is GDP (gross domestic product) per capita, but the use of PPP (purchasing power parity) is a better measure, especially for the comparison of standards of living in different countries. This is because PPP takes into account the long-term equilibrium exchange rate of two currencies for equalizing their purchasing power. The fluctuations of the real exchange rates (i.e., the PPP exchange rates) are primarily caused by the motion of the market exchange rates. The typical PPP exchange rate used in most cases is the so-called "international dollar". Actually, the PPP exchange rate is recognized as the most practical and realistic reference for the economic comparison of different countries. Theoretical perspectives on human growth and development can be found in http://www.unm.edu/jka/courses/achive/theory1.html
1.8.5 Human Development Report

The human development report (HDR), first launched in 1990, aims to put people back at the center of the development process in terms of economic debate, policy and advocacy [226, 234]. The HDR is an independent report commissioned by the UNDP and is produced by a selected team of scholars, HD practitioners and members of the UNDP. The report is published in more than 12 languages and launched in more than 100 countries annually. Besides the HDI, three other composite indices for HD are: the Gender-related Development Index (GDI), the Gender Empowerment Measure (GEM), and the Human Poverty Index (HPI).

Each year the HDR debates several distinct challenges facing humanity. For example, the HDR 2007/2008 was primarily focused on climate change, and argued that climate change poses several challenges at many levels; failure to meet these challenges enlarges the risk of unpredictable setbacks in HD. The HDR-2009 is primarily concerned with migration, both within countries and beyond them, and investigates the migration process and its consequences in the
context of demographic changes and trends in both growth and inequality. HDR-2009 demonstrates how an HD approach contributes to the restoration of the endogenous social issues that broaden the benefits of mobility and/or force migration. Countries with HDI below 0.5 are characterized as "low development" countries, and countries with HDI ≥ 0.8 are called "high development" countries. In the HDR-2007/2008 there were 22 countries with low development (all located in Sub-Saharan Africa). Countries with high development are those of North America, Western Europe, Oceania and Eastern Asia (plus some of the developing countries that are near HDI = 0.8 and have an ascending HDI trend) [222, 226].
Chapter 2
Human Factors in Automation (I): Building Blocks, Scope, and a First Set of Factors
The care of human life and happiness, and not their destruction, is the first and only object of good government.
Thomas Jefferson

You cannot make yourself feel something you do not feel, but you can make yourself do right in spite of your feelings.
Pearl S. Buck

Design should begin by identifying a human or societal need – a problem worth solving – and then fulfill that need by tailoring technology to the specific, relevant human factors.
Kim Vicente
2.1 Introduction

Human factors play a dominant role in the successful, safe and efficient operation of any modern technological system. In automation, "human factors" may be collectively defined as the study of relationships and interactions between equipment, processes and products, and the humans who make use of them. On the part of the human, this interaction involves issues of several kinds: physical, psychological, cognitive, and thinking factors. As with many fields of science and technology, the field of human factors began under the demands of a war (here, World War II), where it was required to match the human to the war equipment in terms of size, strength and sensing–cognitive capabilities. Taking into account the human factors, automation designers try to continuously improve the machines and processes which the human is called to use, so as to make them more 'humane' in many respects.

This chapter starts by discussing the building blocks of the human factors field and presenting its scope. Then, a conceptual account of the human factors in automation system design and development is provided, with emphasis on the new thinking concepts centered around the human and the human's concerns. Next, a first set of human factors that are important for the operation of automation systems
are briefly analyzed. These are: the workload factor, the allocation-of-function factor, the stimulus–response compatibility factor, the internal model of the operator factor, and the operator reliance/trust factor.
2.2 The Human Factors Field: Building Blocks and Scope

2.2.1 Building Blocks

According to D. Meister, the building blocks which form the conceptual and operational structure of the human factors field are [353]:

Elements
Constructs
Parameters
Assumptions
Variables
Hypotheses
Elements: The entities that compose the human factors discipline. The two core elements are the human and automation (technology). The human element embraces all persons directly or indirectly interacting with the technology (including the human factors professionals). The automation as a whole includes all concrete instances of technology (computers, automobiles, ships, aircraft, nuclear reactors, machine tools, chemical plants, etc.).

Constructs: Constructs have dimensions that can be regarded as variables (e.g., the system construct has the dimensions of complexity, organization, size, life time, etc.). The human factors constructs include the human, the technology, the system, the measurement, the application, and the system design and development.

Parameters: Lower-level elements which compose the generic constructs. For example, the human factors field as a construct has as parameters the subject matter, the purpose, and the functions performed by the field.

Assumptions: A priori assumptions about the relationships among elements and parameters, which are taken for granted and cannot be tested (something analogous to the axioms of mathematical theories or the doctrines of religions). The basic assumption of the human factors field is that there exists a relationship between the human and the technology she/he uses.

Hypotheses: Imagined conclusions derived from assumptions. For example, a hypothesis may be that the human–technology relationship changes as the technology advances.
2.2.2 The Human Features

In the human factors field the human can be considered as a black box, where her/his behavior is determined only by stimuli as inputs to the box and responses as outputs. The human features that are studied are:

Physical: strength, sensory and perceptual limits, etc.
Cognitive: e.g., a human cannot carry out in her/his mind large numbers of calculations without computational assistance.
Intellectual: e.g., does there exist a preferred way of information processing by the human which must be embedded in the design?
Motivational: e.g., can we comprehend human behavior without reference to her/his motivation, or is it necessary to take this motivation into account?
2.2.3 Human–Automation Relation

In the past, the human's role in automation systems was simply that of a pure executor (or controller). The operator was controlling a device, monitoring its performance, and modifying it via a specified sequence of actions. With the development of computerized systems the human's role has changed. The human monitors the system performance, but it is the computer software that controls individual equipment to perform in desired ways. Today, the human is a partner of the machine; her/his role is to perceptually recognize the stimuli, and cognitively interpret the meaning of those stimuli. This means that the concept of error changes definition, which implies that the measurement process also changes.

It is clear that automation affects people, because it is a source of stimuli to which the human responds. This response is not shown as an overt behavior, because the human is not a controller. The response can be a change in the observer's attitude or concept structure. Since one of the major goals of the human factors field is to make humans (employees, etc.) happier with their technology, their responses must be taken into account.
2.2.4 Automation

Today we look at the human (employee/worker) as more than a unit of production, since there is a reciprocal relationship between automation and the human. Of course, the human determines the type and extent of automation, but, once developed, the automation determines to a certain (sometimes high) degree how the humans behave. The elements of automation vary from the molecular level (resistors, transistors, microchips, etc.), where the human's involvement is just to repair or replace a component that fails, to the simple-tools level, where the human's attention is restricted
to the tool at hand, and finally to complete systems, where the human's intervention becomes more sophisticated and is expressed by high-level human functions. The three levels (degrees) of automation are the following [353]:

Mechanization: Mere replacement of a human performance with that of a machine.
Computerization: Replacement of a human performance by a machine which is now a computer (hardware and software). The replacement is now more precise, more extensive, quicker, etc.
Artificial Intelligence: The degree of human replacement by artificial intelligence software is increasing, but it is still very limited.

The automation (technology) is one of the primary stimuli of human behavior, since it does not only change the human role in immediate interaction with physical objects, but serves as a backdrop for almost all human actions. The human factors scientist is not a pure observer like sociologists, anthropologists, etc., but an activist whose education and role is to intervene in the human–automation relationship, by measuring performance, producing a stimulus and observing its effect.

Complex automation systems inevitably need organization, which in its simplest form is a set of rules for humans to interact with and within systems. These rules are needed since the human–system interaction usually determines the efficiency of system performance. A simple piece of equipment does not require an organization, but a multi-component system with humans to operate it does need some kind of organization to harmonize equipment and people operations.
2.2.5 Goals and Scope of the Human Factors Field

The human factors field is defined by its formal concept structure, its goals, and its functions. The formal concept structure is expressed by theories such as the human–machine system theory, the signal-detection theory [184], and the attentional resources theory of Wickens and Goettle [614]. The goals refer to productivity, to comfort, and to the knowledge base. The human factors field, as a science, adds to the knowledge base of human–technology relationships, and must have utility to people, which may be expressed as an increase of productivity or an increase of the comfort/safety of people interacting with the machines.

The scope of the human factors discipline is its scientists' conceptualization of what the field represents or encompasses. In the narrow sense, the field is concerned with research on those variables that affect the human when the human interacts with the automation, and with the application of that research to system design and development. In the broad sense, the scope includes everything that relates the human to automation. For example, the attitudes of users towards the technology they use fall within the scope of the human factors field. The functions performed by the field are research (i.e., topic selection, subject selection, the environment where the research
is carried out, etc.), and application (e.g., application for a task, behavior for a task, generation of alternative solutions, selection of one solution, testing, refinement, etc.). Three distinct conceptualizations of the human factors field by its professionals that have appeared over the years are the following [353]:

Human factors (and ergonomics) is a subspecies of another discipline (e.g., psychology, physiology, engineering). Some professionals think that what they do belongs to psychology (industrial or engineering psychology), and others that it is a special case of either physiology or engineering.
The human factors field is an interdisciplinary field, which means that what is distinctive about it is its approach to the problems' solutions.
The human factors field is a distinctive (independent and autonomous) discipline possessing special features that differentiate it from other disciplines.

Obviously, each attitude has a significant effect on the way in which the professionals practice their profession, i.e., the methods of analysis, measurement, synthesis, and research in general. Full details on this can be found in the works of D. Meister and others [282, 352–354].
2.3 Human Factors in Automation System Design and Development

2.3.1 General Issues

The primary purpose of the human factors (and ergonomics) field is to try to modify information and automation technology, through the use of behavioral science concepts and methodologies, to make it more humane. In this way, the users of automation will be more satisfied and their behavior will be improved, naturally leading to better overall system performance (smaller error rate, smaller accident rate, higher productivity, etc.).

In addressing the role of human factors in automation system design and development (including the testing function), D. Meister [354] has distinguished the term specialist from the term designer in the following way. A specialist is a member of the human factors personnel working in design, development and testing, but who is different from the engineer (electrical, electronic, mechanical, control, manufacturing, logistical, etc.). A designer is an engineer of any specialization participating in the design, development and construction of the automation system. To simplify the terminology further, Meister used the term design to refer to all the above stages, namely design, development and testing. The topics addressed by Meister about the human factors in automation system design are the following [354]:

Critical elements of system development and the people involved.
How designers design.
Available information resources and the information required by designers.
The role of the specialist and the information she/he needs.
Criteria and features of a good design.
The role of human factors research in system design.
Differences between hardware and software design, and between commercial and noncommercial design.
The role of the user in system design and the information that must be elicited from users.
The difference of user-centered design from other types of design.
Features of the development process and relevant theories.
Methods of analyzing design requirements and testing systems at several levels of complexity.
The contribution of the organization to system development.
Individual differences among designers and design practices, and their causes.
2.3.2 Developmental Elements

The basic developmental elements, each of which implies a question (what, how, and why), are the following:

The Nature of the Overall Developmental Process: The design process is distinguished from system development.
The Nature of the Equipment or System to be Designed: Equipment is the individual work station; the system contains many work stations that operate together for a global mission or purpose.
The Nature of System Design: This is a matter of high-level conceptualization and is under no one's control. Thus system design contains everything and is actually our perception of phenomena.
The Features of Designers and Specialists: Theorists may propose formal design methods, but whether these are implemented depends on designer/specialist skills, expertise, and attitudes.
Information Resources: These refer to information transmission, reception and utilization, and include design specifications, guidelines, documentation, existing experiential data, etc.
Formal and Informal Methods: The specialists use behavioral methods according to their characteristics and their relative merits.
Design Inputs and Outputs: Design is a process that receives inputs and produces outputs which must be analyzed and understood.
The User: A user is always one who employs an object, and may be a company, a governmental agency or a general public customer. Today the role of the user in the development process is important, if only because development is initiated by the attempt to satisfy the user's goals. The designer and specialist should try to learn as much as possible about what the user desires before designing.
2.3.3 System Development Concepts

System development is a process consisting of design, testing, administration and management, and can be conceptualized at several hierarchical levels. Design is actually a problem, and its solution depends on, and needs, the creativity of the designer. The desired specifications are given in more or less detail, but how to achieve them is usually not clear. We distinguish three forms of design: new design, improvement, and redesign. In a new (or initial) design there is no previous system available to be used as a reference model. In improvement (or update), additional features and capabilities are required with respect to the predecessor system, which can be used as a basis for the required alterations. The same is true for redesign, where it is required to remedy existing defects and deficiencies. The steps of the conventional design process are:
Formulate the problem (analysis).
Develop possible solutions (alternative hypotheses).
Analyze and evaluate these alternatives.
Select the best alternative according to some criterion.
Implement the selected solution (option).
Evaluate the implemented alternative.
The design process is a vast subject. Some models that have been proposed in the literature are the following: (i) Meister's conceptual model [352], (ii) Leifer's model [313], and (iii) McKim's psychological/behavioral model [348]. To facilitate the design process, and succeed in better control of it, a number of concepts that describe system development have been formulated. These fall into the following categories:

Descriptive Concepts: The most popular descriptive concept is the well-known top-down design concept, which follows an if-then type of reasoning and proceeds in a sequential, logical manner.
Development as Problem Solving: Here the designer is called to analyze the problem at hand in terms of its implications for both the human and machine operations.
Analytical Concepts: Here, concepts that emphasize information processes and the communication of information are included. One can also consider the development process from an economic viewpoint, as a sequence of trade-offs and negotiated compromises. This is because each design factor has special functionality merits and associated costs.
Nontraditional Concepts: The above concepts characterize the first generation of design processes, which considered the human as a pure executive/control element (Tayloristic system-centered design [549]). To be used in modern computerized systems, many ways of modifying them and many new concepts have been developed. These new thinking concepts are centered on the user and the user's concerns [549]. The orientation toward human-centered design has its roots in long-standing socio-psychological demands, but the realization of human-centered automation systems became possible only with the advent of
computerized systems. It must be remarked that human-centered design goes beyond the traditional human factors concept of tailoring the design to human limitations, because the emphasis is not on avoiding excessive demands on the human, but on exploiting human capabilities (e.g., assuring the job satisfaction of the operator and enhancing the pleasure in performing her/his tasks). This is achieved by handing over to the user some control over the analysis, design, and testing of design alternatives. The above user-centered design concept will be studied in more detail in Chapter 8 (Human-Minding Automation).
2.4 The Workload Factor in Automation

One of the advantages of automation anticipated by early system developers was that the use of automation would reduce operator mental workload. Although this is true in many cases, it was revealed already three decades ago by Edwards [129, 413] that automation does not necessarily reduce the human's workload. Moreover, on the presumption that reduced workload leads to safer operation, the general forecast was that automation would reduce human error and improve system safety. The invalidity of this line of thinking was recognized quite early in the technical literature [63] and the public press. In fact, extensive observations on the operation of many advanced automated systems indicated that automation does not consistently lead to reduced mental workload of the operator. A classical example of such systems is the aircraft, as documented by Wiener [617] on the basis of a survey of commercial pilots. A significant proportion of the pilots declared that advanced automation had indeed reduced the pilot's workload, but an equal number disagreed. In general, the pilots' experience is that automation often reduces workload, but usually at flight phases where workload is already low (e.g., during cruise), whereas the workload may increase at critical phases such as take-off and landing. Actually, what automation does is to change the pattern of workload between the various work phases.

Another fallacy in early thinking about the advantages of automation was the belief that an operator would have less to do, and so would have more time for vigilant monitoring, which was seen as a low-workload task. This myth was exploded very early by Sheridan [516] and others, but it is still alive in the air. It was documented (and verified later) that there are cases where the monitoring workload of the operator may be higher with an advanced automated system (e.g., a high-performance military aircraft) than with the systems that existed prior to automation. As stated by Wickens [613], the number of human decisions increases from one to three in order to correctly diagnose a failure in an automated system. Let us take the example of the automated monitoring of the doors of a commercial aircraft to ensure that they are closed during flight. If the crew sees a failure indication, they have to decide which of the following three conditions is actually present: an open door, a failure of the automated monitor, or a malfunction of the display indicator. It is clear that automation has actually increased the operator's mental workload. The reason for this is that the operator has the additional workload of monitoring the automation.
It must be remarked here that humans are poor monitors of automation. The human's physiological capabilities are not suitable for continuous monitoring and vigilance. The bandwidth of the human nervous system is not sufficiently wide to face the sudden information load which may arise in critical situations of automated systems. We have many examples of this, such as nuclear reactors, chemical process plants, commercial and military aircraft, etc. The human fails statistically at the low end of the bandwidth (less than 1 Hz), and fails almost surely at frequencies higher than 1 Hz.

Older studies of human vigilance produced the arousal theory of vigilance, which postulated that the level of physiological arousal falls during a vigilance task, leading to the traditional vigilance reduction over time [124]. Newer studies have linked arousal to the deployment of attentional resources [192]. The results of these studies show that vigilance tasks (even very simple ones) can impose considerable mental workload in the form of decision making and problem solving. Thus the claim that automation always reduces workload is not correct. As already mentioned, automation changes the pattern of workload across work segments, and may increase the human monitoring workload because of the demand to monitor the automation.

In general, monitoring and vigilance are considered by humans as 'unstimulating' and 'undesired' tasks. Nevertheless, in all cases the human operators are assigned the task of monitoring and vigilance, and system developers are trying to achieve systems compatible with the human capabilities. Full automation of the monitoring process is not a general solution, since automated monitors will increase the number of alarms, which is already high in many cases, and human response to multiple alarms raises many human factors concerns [550]. On the other hand, to protect against failure of automated monitors, a higher-level system that monitors the automated monitor is required, a process that could lead to infinite regress.
2.5 Three Key Human Factors in Automation

The fact that human workload may be considerably increased in automated systems (contrary to the design goal) does not mean that automation is necessarily harmful per se, but that a more careful implementation of automation is needed in modern complex large-scale systems. Three human factors that play a key role and should be considered in the design and implementation of flight decks (and are also applicable to other sophisticated automation systems) are the following 281:
- Allocation of function
- Stimulus–response compatibility
- Internal model of the operator
2.5.1 Allocation of Function

One of the basic questions in human factors engineering is whether a specific task is better assigned to a human or to automation. 283 A first approach that was
followed and applied to answer this question was based on lists of the functional merits and disadvantages of people versus machines. This approach, although logical, was not successful, since machines and humans are not really comparable entities, even though human factors professionals relate them in many details. As Kantowitz and Sorkin pointed out, 'Any table that can compare human and machine, especially if numerical indexes of relative performance or equations can be listed, as any good engineer would attempt, is bound to favor the machine'. Therefore, the allocation of functions among people and machines should not be made in a purely 'mechanistic' way. Technology is now available that can support intelligent interfaces between people and automation. 279 The traditional on–off interface (automation is either on or off) does not always lead to lower operator (pilot) workload as more automation is embedded. Thus, an intelligent interface is needed that would take into account both the workload of the operator (pilot) and the state of the system environment (weather, fuel load, status of equipment), and recommend an optimal allocation of functions, where the pilot decides what flight automation should be used at any moment in the flight. Present flight deck automation is machine-centered (not human-centered) and has little resident intelligence.
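The adaptive-interface idea just described can be sketched as a tiny recommender. The following is a hypothetical illustration only; the thresholds, the normalization, and the recommendation strings are invented here, not taken from the book or from any fielded flight-deck system.

```python
# Hypothetical sketch of an adaptive function-allocation recommender:
# it weighs the pilot's workload against the environment state and only
# recommends; the pilot retains the final decision. Values are invented.
def recommend_allocation(pilot_workload: float, environment_risk: float) -> str:
    """Both inputs are assumed normalized to [0, 1]."""
    if pilot_workload > 0.7:
        return "engage automation for routine subtasks"
    if environment_risk > 0.7:
        return "keep pilot in the loop, automation assists"
    return "manual operation acceptable"

print(recommend_allocation(pilot_workload=0.8, environment_risk=0.3))
```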
2.5.2 Stimulus–Response Compatibility

Stimulus–response compatibility is concerned with the relationship (geometric and conceptual) between a stimulus (such as a display) and a response (such as a control action). 284 In aircraft, one stimulus–response compatibility question is the determination of the relative advantages of moving-airplane (outside-in) versus moving-horizon (inside-out) artificial horizon indicators. According to Kantowitz and Sorkin, 282 a better display has both moving. Two cases recorded in the literature with low stimulus–response compatibility are the altitude deviation in the MD-80 under control of flight deck automation, and the vertical navigation (VNAV) functions of the FMS. In both cases the systems are difficult to use correctly, but with improved training the frequency of error can be decreased. In general, automated systems with low stimulus–response compatibility create extra workload for the pilot and lower trust in flight deck automation. 280
2.5.3 Internal Model of the Operator

This model describes the operator's internal representation of the controlled system (i.e., her/his conceptual understanding of the system components, processes, and input/output quantities). Unavoidably, the internal model varies considerably as a function of individual operators, tasks, and environments. The internal model is used by the operator as a basis for planning future activities, deriving hypotheses about
relationships among system components, and performing system tasks. Unfortunately, knowledge about how pilots comprehend their tasks and the behavioral features of cockpit displays and controls is still incomplete, and so remains a key issue in the development of flight decks. 281 As pointed out by several researchers and national bodies, the effectiveness of automation depends on matching the designs of automated systems to pilots. If the pilot's expectations are violated (e.g., by low stimulus–response compatibility), increased pilot mental workload may occur due to increased uncertainty. In conclusion, the ideal situation is a "fit" between the actual operating features of the automated system, the operator's (pilot's) internal model of the system, and the designer's model of the system. Sheridan and Hennessy 526 pointed out the necessity to 'bring all these models into harmony, since they ultimately influence the decision processes of the human supervisor and the consequences of the system operation'.
2.6 The Operator Reliance Factor

The decision of an operator to rely or not to rely on automation is one of the most critical decisions that must be made during the operation of any complex system. Victor Riley 461 mentions an aircraft crash during a test flight, killing all seven on board, 544 caused by a delay of only 4 s by the pilot in retaking manual control. Actually, there are many cases in the history of automation which have shown that the decision to rely or not to rely on automation has been a critical link in the chain of events leading to an accident (aircraft, rail, ships, nuclear plants, electric power plants, medical systems, process control, etc.). The two extreme possibilities are: (i) the operator over-relies on automation and fails to monitor or examine its performance, or (ii) the operator does not rely on automation at all because she/he has high (possibly erroneous) confidence in her/his own ability to carry out the job manually. Many aircraft accidents belong to the first possibility; the Chernobyl nuclear accident belongs to the second category. The issue here is to understand and study the factors and biases that affect human reliance on automation, and to incorporate them into the design of operator training programs. With reference to this problem, Sheridan and Ferrel 524 have expressed strong concern about the changing roles of human operators and automation, and have incorporated the operator's trust in automation as one of the primary issues of supervisory control functions. Some studies on operators' trust in automation are the following:
- Muir 376, 377 developed a theory of trust in automation and performed some
experiments using a process control simulator. The main results of this theory are: (i) trust in automation can be measured using a subjective scale, and (ii) operators are able to distinguish an error-free (normal) component from a malfunctioning component.
- Riley 460 asserted that an operator's decision to rely on automation depends not only on her/his personal level of trust in the system, but rather on the relationship
among trust, self-confidence, and other factors including workload, fatigue, and the level of risk associated with each particular situation. If the operator has more confidence in her/his own ability, she/he is more likely to do the task manually. In the opposite case the operator relies on automation and the task is performed automatically.
- Lee 310 and Lee and Moray 311 carried out extensive studies to explore further Muir's theory of trust, and especially the relationships among automation accuracy, trust in automation, and reliance.
Details of some experiments performed using a simple computer-based test bed were reported by Riley. 460, 461 The topic of human reliance on automation is very complex and crucial, and surely needs further investigation.
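Riley's qualitative account of the trust/self-confidence trade-off can be caricatured in code. The functional form below is an invention for illustration only (the literature cited above does not propose this formula); it merely captures the tendency that reliance follows when trust exceeds self-confidence, with high workload pushing the operator toward automation.

```python
# Toy sketch (invented functional form) of the reliance decision.
def relies_on_automation(trust: float, self_confidence: float,
                         workload: float = 0.0) -> bool:
    """All inputs assumed in [0, 1]. The operator tends to rely on
    automation when trust exceeds self-confidence; high workload
    lowers the effective threshold for handing the task over."""
    return trust > self_confidence - 0.2 * workload

# Moderate trust, slightly higher self-confidence, but heavy workload:
print(relies_on_automation(trust=0.6, self_confidence=0.7, workload=0.8))
```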
Chapter 3
Human Factors in Automation (II): Psychological, Physical Strength, Human Error and Human Values Factors
Pleasure in the job puts perfection in the work.
Aristotle

We'll be looking at operations, we'll be looking at human factors.
Mark Rosenker

We just want to get on with ourselves and do our jobs well. When we make use of technology, we want to focus on achieving our goals, not on deciphering the technology.
Kim Vicente
3.1 Introduction

Human operators are playing and will continue to play dominant roles in automated systems. If either the machines or the humans involved in an automation system fail to perform their assigned tasks correctly, serious accidents may occur, such as an aircraft crash or a nuclear reactor accident. Therefore, it is important to design automation systems taking into consideration the capabilities of both the machines and the humans. Given that machines are easier to design correctly with the technology available at any given time (machines obey the laws of physical causality more strictly), more emphasis should be given to the human operator or user, whose behavior depends on more complicated factors (psychological, mental, physical). Our aim in this chapter is to present a second set of human factors (additional to those presented in Chapter 2) relevant to computerized and automated systems. Specifically, we are first concerned with psychological factors and study the job satisfaction and job stress problems in connection with the use of video-display terminals. Then, we discuss the problem of assessing/measuring the human's physical strength. Next, we present the principal issues of human bias and human error, which are built-in features of the human and have to be balanced in practice.
Finally, we discuss the issue of human values and human rights, which stand above any particular technological design but have to be respected by the managerial and organizational processes of any automation system.
3.2 Psychological Factors

The transition from traditional types of work to computer-based work in automation and other computerized organizations was not easy in many cases, and has produced significant psychosocial problems for employees. Two of these problems of great importance are: (i) the job satisfaction problem and (ii) the job stress problem. Here, these two problems will be discussed in connection with the video-display terminals (VDTs) that are used in almost all computer-based workplaces around the world. 105, 179, 539, 541 Many complaints from workers have been recorded about the visual and musculoskeletal demands of work at VDTs. It is now generally recognized that improper workstation design combined with workload, postural demands, and job requirements may produce, or at least contribute to, neck, shoulder, back, and hand/wrist discomfort, fatigue, and pain for many VDT users. Visual discomfort can be the result of improper illumination and glare, VDT screen design, workloads, and task characteristics. Similarly, the claims that working with VDTs produces psychological distress are well documented in the literature. 535, 541
3.2.1 Job Satisfaction

Job satisfaction is a complex notion and reflects both "global" and "specific" characteristics. If an employee is asked how much she/he likes her/his job, the answer could be "I like my job" or "I like my job very much". In the second case the reply is given after a personal weighting of the good and poor aspects of the job. For example, a worker may not be satisfied with job facets such as computer hardware or software facilities, but still be very happy with the nature and other conditions of the job. On the other hand, an important issue is the relation between worker satisfaction and workplace performance, i.e., the satisfaction derived from the work she/he does. By understanding these and similar psychological factors which make a worker happy with the job, the design of jobs can be improved so as to enhance workers' productivity. To this end, many approaches to the study of job satisfaction have been proposed, the two principal ones being: (i) the causal models approach, and (ii) the content approach. 324 Three causal models of job satisfaction are the following 105:
- Expectancy model
- Need model
- Value model
In the expectancy model, the individual's affective reactions depend on the discrepancy between what her/his environment provides and what that individual has adapted to or expects. The need model uses the degree to which the job fulfills personal needs, and the value model focuses on the values of the employee, which are defined as conditions that she/he desires (consciously or subconsciously). Content approaches determine the specific needs that have to be fulfilled in the job itself, or the values that must be respected, in order for the employee to be satisfied by the job. One of these approaches 340 assumes a hierarchy of needs, where lower-order needs are sought to be satisfied first. Physiological needs (water, food) are at the lowest level, safety needs (economic security, freedom from physical injury) are at the next higher level, love and belongingness at the next level, and self-actualization at the top level. Of course, it is not necessary for lower needs to be fulfilled before the next level begins to motivate the person. A second content approach is the so-called two-factor model of job satisfaction. 209 Of the five factors that influence job satisfaction, namely (1) achievement, (2) recognition, (3) the work itself, (4) responsibility, and (5) advancement, the last three (the work itself, responsibility, and advancement) have a dominating influence on the attitudes that persist. The central issue in the two-factor model is the employee's relationship to the tasks performed.
3.2.2 Job Stress

In physiological theory, stress is understood as a cause–effect (stimulus–response) process: the concept of stress describes the organism's response, which is stimulated in the same way by any environmental demand (called a "stressor"). This stimulus–response model is limited, because the same reaction pattern can appear for a wide class of stimulus conditions, including exercise. The cognitive approach to stress has been more popular, since empirical research on work, stress, and health has indicated that under the same work conditions not all workers experience stress. 308 This is because cognitive processes determine the quality and the intensity of the reactions to the environment. A stressor will not have an effect unless it is recognized and assessed by the person. Stress is felt as a dysfunctional reaction that can lead to adverse performance and health consequences (acute or chronic). 535, 536, 539 A compiled summary of the results drawn from a number of studies on VDT users is the following:
- Users in lower-paying and less-skilled jobs show greater psychological distress, and more stress when jobs are transferred from a given technology to a new one.
- Older users feel that they face greater job changes than younger ones, and also report more stress when technology changes.
- The stress level produced depends on the job category, but the following job stressors were consistent across different job categories: high job demands, inability to control or participate in the decisions, monotony, lack of variety or task content, poor or absent supervision, and technology problems (breakdowns, slowdowns, etc.).

Fig. 3.1 A model of VDT use and its impact on the human (interacting elements: Environment, Task, Organization, and the Human User with the Automation Technology (VDT))
3.2.3 A Psychosocial Stress Model

A model which conceptualizes the various elements of a work system that can exert demands on employees, with potential psychological and physiological consequences, is shown in Fig. 3.1. 535, 538, 540 There is an interaction among all these elements during the work process. The human user lies at the center of this model, and uses automation technology (such as VDTs) to carry out job tasks. The characteristics of the technology and the task requirements affect the user's performance, as well as the knowledge and skills required. The environment affects the user's attitudes, psychological moods, and comfort. Finally, the organizational structure determines the nature and level of individual participation, interaction, control, supervision, and performance standards. Details and experimental findings on all these factors of the model can be found in the literature. 538, 540
3.3 Physical Strength

The assessment (measurement) of human physical strength is useful, among others, for the following four purposes 75, 76, 165, 213, 272, 298, 552:
- To build an anthropometric database and produce data for products, tasks, equipment, etc.
- To select and place workers
- To design jobs
- To aid research on the strength phenomenon
Selection and placement programs aim at guaranteeing that jobs with heavy physical requirements are not assigned to persons lacking the necessary strength capabilities. 165 This method of personnel selection is used as a provisional measure for the control of work-related musculoskeletal disorders where job design cannot be used to relieve task demands. The method is also used to establish a test's predictive value, i.e., its ability to determine who is at risk of such future musculoskeletal dysfunctions. Job design has been a fundamental psychophysical technique for determining acceptable forces and weights for whole human groups, not for determining individual worker strength capabilities and comparing them to job demands. When the acceptable workloads for a group are found, the job or task is designed to accommodate the great majority of the population. Muscular strength is a complex function which can vary considerably depending on the methods of measurement. Thus, special care is needed in order to avoid misunderstanding and fuzziness. The four most popular techniques for strength assessment used in ergonomics are the following 165:
- Isometric Strength: This is the capacity to generate force or torque with a voluntary isometric contraction, i.e., a contraction in which the muscles maintain a constant length. 76
- Isoinertial Strength: This is the strength measured while the mass properties of an object are held constant, as when lifting a given weight over a predefined distance. This is done using appropriate lifting devices. 272, 298
- Psychophysical Strength: This is based on the psychophysical relationship between the strength of a perceived sensation Y and the intensity X of the physical stimulus, 552 namely Y = aX^n, where a is a constant and the exponent n depends on the nature of the stimulus (e.g., temperature, brightness, loudness, etc.). For muscular effort, experiments showed that n = 1.7.
- Isokinetic Strength: In this measurement method the velocity of the motion is kept constant throughout a predefined range. 213 To this end, a means of speed control is used, although load and resistance are of course present in the technique. Since the speed of motion is kept constant in isokinetic strength exercise, the resistance perceived during a contraction is equivalent to the force applied over the range of motion, and so the muscle is allowed to contract at its maximum capacity at all points of the range of motion.
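As a numerical illustration of the psychophysical power law quoted above, the following minimal sketch (with an arbitrary scaling constant a, an assumption introduced here) shows that with n = 1.7 for muscular effort, doubling the physical intensity roughly triples the perceived sensation.

```python
# Minimal sketch of the power law Y = a * X**n used for psychophysical
# strength; a and the stimulus values are illustrative placeholders.
def perceived_magnitude(intensity: float, a: float = 1.0, n: float = 1.7) -> float:
    return a * intensity ** n

# Doubling the physical intensity with n = 1.7: 2**1.7 is about 3.25,
# i.e., the perceived sensation roughly triples.
ratio = perceived_magnitude(2.0) / perceived_magnitude(1.0)
print(f"perceived ratio for doubled intensity: {ratio:.2f}")
```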
3.4 Human Bias

Human judgement and estimation are never made solely on the basis of the evidence at hand. This is a built-in mental feature of the human, arising from the fundamental nature of her/his perception and reasoning. Human judgement always depends intimately on personal knowledge, beliefs, and desires. The best way to avoid bias is to
use double-blind procedures, i.e., always have a comparison system and ensure that neither the experimenter nor the participant knows which is the new system and which is the comparison system. Some examples of human biases which have been observed to occur systematically are the following. 117, 118
- Weight Bias: Underestimate if the object is compact; overestimate if the object is bulky.
- Speed Bias: Underestimate if the object is decelerating; overestimate if the object is accelerating.
- Height Bias: Underestimate when looking up; overestimate when looking down.
- Horizontal Distance Bias: Underestimate.
- Temperature Bias: Underestimate cold; overestimate heat.
- Probability Bias: Underestimate the likelihood of unpleasant events; overestimate the likelihood of pleasant events.
Referring to the decision making style of humans, there have been well documented differences in several issues, such as: Risk acceptance (most humans do no like to take risks, whereas others are risk
neutral) Decision upon impulse (many humans do not easily decide deliberately) Higher weighting of personal cost (in comparison to the cost of others) Level of decision making skill (based on probabilistic, safety, or risk assessments)
Other biases of humans in making their decisions include:
- Tendency to ignore the reliability of evidence
- Tendency to rely on confirming evidence and disregard disconfirming evidence
- Tendency to overestimate the probability of interdependent events and underestimate the probability of independent events
- Tendency to infer illusory causal relations
- Tendency to overweight recent evidence and underweight previous evidence
- Tendency, after the fact, to overestimate the probability they would have assigned to outcomes that actually occurred (hindsight bias)
3.5 Human Error

Actually, there does not appear to exist a generally accepted definition of the concept of human error. Two of the definitions are those given in Webster's Dictionary:
- The state of believing what is untrue
- Something incorrectly done
The first definition sees the error from a cognitive point of view, and the second from a physical point of view. In general, a human error can be regarded as an action that does not conform to some standard (explicit or implicit), but then the
problem is what the standard is. Human errors can lead to catastrophic events with human fatalities. Thus the study of human errors and their remedies is of primary importance for any system in which the human is involved as an operator, driver, or user (robot systems, nuclear plants, chemical plants, aircraft, etc.). In the following, a brief exposition of some human error issues drawn from the literature will be given. 219, 431, 442, 449, 511, 555 More detailed considerations can be found in the literature. One way to view a human error is in terms of its consequences, i.e., as an action or event whose effect is not within the ranges required by a particular system. 511 Another way is to define human errors as experiments in an unkind environment. 442 Finally, a third way is to see human errors as the debit side of what are otherwise useful and essential mental processes. In any case, it is generally accepted that human errors cannot be defined by looking solely at human performance, but have to be defined by taking into account human intentions or expectations. 219, 431, 442 Errors can be distinguished: (i) according to observed error consequences (called error forms or error phenotypes), and (ii) according to error causes (called error types or error genotypes). Typical errors in the error-consequence class are omissions and substitutions. For example, if an error is described as pressing a <shift> key instead of the intended
key, this description would be called a phenomenological description. Such descriptions indicate "how" an error occurred, not "why" it happened. Two error types, characterized by their cause, are slips and mistakes:
- Slips are failures of execution, i.e., the plan for the action is correct but the actions
do not follow the plan (for example, some actions may be omitted or executed in the wrong order).
- Mistakes occur when the plan itself is wrong, although the actions go as planned.
A practical three-level model of human behavior (symbolized as S-R-K) was proposed by Rasmussen. 444 These levels of behavior are:
- Skill-based behavior (without conscious control, e.g., clicking the left mouse button
for selection, etc.)
- Rule-based behavior (with conscious control using known rules or procedures)
- Knowledge-based behavior (for new situations for which no rules or procedures
are already known)
Following Rasmussen's S-R-K hierarchical model, Reason 449 identified the following three basic error types:
- Skill-Based Slips: These occur due to an error in one or more of the mental steps
of attention, perception, memory, and execution control.
- Rule-Based Mistakes: These occur due to failures in the selection or application
of problem-solving rules (e.g., use of wrong rules, incorrect recall of correct rules, etc.).
- Knowledge-Based Mistakes: These can occur as a result of the hypothesis-testing procedure (trial-and-error learning of new cases), or from interference in functional reasoning due to wrong analogies.
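The slip/mistake and S-R-K distinctions above amount to a small decision procedure, which can be sketched in code. The snippet below is a hypothetical illustration (the two boolean questions are a simplification introduced here, not Reason's own procedure): slips have a correct plan but faulty execution, while mistakes have a faulty plan, split by whether a known rule applied to the situation.

```python
# Hypothetical sketch: tagging incident reports with Reason's error types.
from enum import Enum

class ErrorType(Enum):
    SKILL_BASED_SLIP = "failure of attention, perception, memory, or execution"
    RULE_BASED_MISTAKE = "wrong rule selected or correct rule misapplied"
    KNOWLEDGE_BASED_MISTAKE = "faulty hypothesis testing or wrong analogy"

def classify(plan_was_correct: bool, known_procedure_existed: bool) -> ErrorType:
    """Slips: correct plan, faulty execution. Mistakes: faulty plan,
    split by whether a known rule/procedure applied to the situation."""
    if plan_was_correct:
        return ErrorType.SKILL_BASED_SLIP
    if known_procedure_existed:
        return ErrorType.RULE_BASED_MISTAKE
    return ErrorType.KNOWLEDGE_BASED_MISTAKE

print(classify(plan_was_correct=False, known_procedure_existed=True))
```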
Rasmussen and Vicente, 446 aiming at eliminating the error effect via effective cognition-based design, proposed the following error classification scheme:
- Learning and adaptation errors
- Errors due to lack of resources (lack of knowledge, lack of a reasoning method, etc.)
- Errors due to built-in human variability (e.g., stochastic variability, memory slips, etc.)
According to Reason, 449 the characteristic error forms are shaped via combinations of psychological factors and situational factors, acting on several cognitive control levels. Understanding these error factors helps in designing human–machine interfaces that do not have features that might aid or create these error-shaping factors. A list of factors following the S-R-K scheme is the following. 431
3.5.1 Skill-Based Error-Shaping Factors

- Omissions of actions or action sequences, etc.
- Perceptual confusion (a familiar match is accepted instead of the correct match)
- Speed–accuracy trade-offs
- Interference due to concurrent events
- Variability in the control of actions
- Non-required repeated actions
3.5.2 Rule-Based Error-Shaping Factors

- Erroneous or deficient encoding
- Cognitive conservatism (refusal to change the familiar procedure)
- Use of only the available rules (which may not be the proper ones)
- Inadvisable rules (rules that satisfy the goals but may cause side effects)
- First exceptions (errors arising because the current situation is an exception)
3.5.3 Knowledge-Based Error-Shaping Factors

- Problem complexity
- Delayed feedback in executing decisions
- Excessive memory requirements (information overload)
- Biased reviewing (of a planned course of action)
- Illusory correlation (failure to understand the logic of covariation)
- Attentional limitations (due to limited resources)
Although there does not exist a general theory of human error causation, the above classifications are very helpful in designing effective human–machine interfaces in practice. As an indication of other possible error causes, we mention the following 191, 449, 473, 511, 523:
- Invalid mental models, i.e., the system models existing in the human's brain
- The state of the human's nervous system
- Hypothesis verification (the tendency to seek confirming evidence and disregard contradictory evidence)
- The latent error property (which is due to inefficient equipment or process design, etc.)
- Excessive task demands that are beyond physical or mental capacities
3.6 Human Values and Human Rights

The involvement of a person, as a human being and as an employee, in an automation or other type of organization (public or private) raises, in addition to the above psychological, physical, and mental/cognitive human factors, the question of respect for human values and personal rights. The central concern of workers is job security, which for most employees has a higher priority than the wage-effort bargain. The basic human values, such as truth, human life, freedom, and justice, have proven to have a diachronic permanence at least equal to that of the physical laws which have been established by experiment and revalidation through generations of physicists. 47, 115 New knowledge and new conditions do not change the meaning of human values, nor do they question their validity, since new facts only clarify and complete our previous ethical laws. Human rights are based on the morally validated human values, such as the value of "equal dignity" of all human beings, and the values mentioned above. The way this is done, or can be done, is a subject of philosophical and societal discussion. Human and job rights are implemented, assured, and protected through the rules and laws of organized cohabitation. These rules and laws take particular shape in the Constitution and the laws of every democratic state, and are collectively named "democratic rights". 106 Today there are internationally codified (but not yet globally assured) universal human rights embodied in the so-called "International Bill of Rights", which consists of: (i) the "Universal Declaration of Human Rights (UDHR)" adopted by the General Assembly of the United Nations (UN) in Paris (10 December 1948), (ii) the "International Covenant on Civil and Political Rights (ICCPR)", and (iii) the "International Covenant on Economic, Social and Cultural Rights (ICESCR)", the latter two adopted by the UN. These Covenants came into force in 1976 and have now been ratified by more than 140 states. 321 In addition to the above, UNESCO (United Nations Educational,
Scientific and Cultural Organization) published in 1991 the 'Declaration on the Elimination of All Forms of Intolerance and of Discrimination Based on Religion or Belief', and in 2004 a book entitled 'Où vont les valeurs?' (French edition) or 'The Future of Values' (English edition), where the authors of the various contributions show in a clear and precise manner what an ethics of the future is about. 47 The human rights defined by the UN Declaration and Covenants go beyond their establishment as International Law, since they are ethically grounded in a 'universalist doctrine of inherent and equal human dignity'. The human values and human rights of the Western countries were strongly influenced by the philosophical, cultural, and moral views of the ancient Greek philosophers, poets, and "wise men". The ancient Greeks created philosophy (i.e., the love of wisdom) and pure science, and educated the human for the exercise of her/his intelligence and freedom. Democritus declared that "human is a single person and human is all persons"; Protagoras taught that 'human is the measure of all things'; Aristotle held that "ethical rules should always be seen in the light of the traditions and the accepted opinions of the community"; and Plato argued that 'morals are based on the knowledge of universal ideas and so they have a universal character'. 115 From the above it is clear that, as a general rule, the ancient Greeks advocated models of life with the 'human' as their central value (the value 'par excellence'). They also set as a goal of the state not just to protect the life of its citizens, but also to motivate them towards a life of high quality. Individual rights in a democratic society are always balanced by reciprocal responsibilities for protecting the rights of others. This is exactly the well-known 'golden rule' of behavior, which is applied in almost all contemporary religions under several variants (e.g., "never do to others what would produce pain to you", or 'act upon others in the same way as you wish them to act upon you'). 212, 558 Democratic values must shape both the public and the private employment relationship. Public employment values evolve around three concepts: the merit principle, public responsiveness, and the protection of individual rights. 106 The merit principle is the outcome of belief in the individual and a career open to talent. Public (or political) responsiveness means that democracy places governments at the service of their people. Public professionals are not owners or entrepreneurs; first and foremost, they must serve the needs of others. The protection of individual rights emphasizes the importance and value attached by our social contract to each and every individual. Combining these three principles in a workable and sustainable way is a very difficult issue. Conflicts arise continuously, and their resolution is the task of justice. Employees, whether in government or the private sector, want protection from unjustified actions on the part of their employers. In America, over the past few decades, the doctrine of privilege (according to which government employment was a special privilege rather than a contractual property right) became less and less tenable, and has been replaced with a doctrine of substantial interest. 469 Government is now called upon to justify and prove the need for any infringement upon the rights of the citizens whom it employs. A newer version of the old doctrine of privilege is the doctrine of employment-at-will.
This doctrine does not deal directly with employees’ rights as citizens, but
with the right of the government to decide what jobs it needs and whom it wants to hire to carry out those jobs. 106 Thus it is directly an issue of efficiency and merit. But from the employee's point of view it is also an issue of job security, and thus it concerns job rights. Organizations are in favor of the employment-at-will doctrine, since it seems to provide them with the flexibility to adapt successfully to their continuously changing competitive environment. The employment-at-will doctrine allows an organization's management to objectively and efficiently adjust its resources, including people, to meet the needs of the organization and guarantee its survival. 287 However, this doctrine also very much frightens workers, since many cases of arbitrary and capricious actions have been recorded. Contractual rights establishing dismissal for cause give a small degree of protection from such practices. 106, 287 Thus, the employment-at-will doctrine will continue to generate strong opposition from workers. Human values have been studied throughout the years, and today many websites provide important information regarding their evolution in the process of human development. For example, the Human Values Network website (www.humanvalues.net/) is dedicated to the proliferation of human values and designed to give the opportunity to share visual information regarding all aspects and factors that affect humankind. In particular, the human values web-blog allows reading and commenting upon articles dealing with human values, ethics, human rights, and the human spirit. As noted on this site, we have all the means for developing safe and nurturing conditions for all people on the planet; we have intelligence but not wisdom; we have science and technology which are used for destruction rather than for the benefit of humankind. Each of us has individual values instilled by education and practice, which however are mainly directed to the safety and happiness of ourselves and our loved ones. For a happier future for all people, these personal values must become the general criteria of human well-being and human activity, putting aside short-term self-centered goals. The question of "whether there is any way in which science is relevant to ethics" was studied by V. Turchin (http://pespmc1.vub.ac.be/SCIVAL.html). This author states that "human values" are something we appreciate and wish to have or achieve. "Values" are something we consider "good" and are prepared to set as the "goals of life". Values describe the part of our goals that is not immediately necessary for survival. They represent the "spiritual" part of our life, going beyond the physical or biological part. Goals are hierarchically ordered. When facing two conflicting goals, we need a higher goal (or a principle) to resolve the situation and decide. The thing of highest interest today is the "top" of these principles, i.e., the "Supreme Goal" or the "Supreme Value" of human life. This is the problem of Ethics. Although science provides us with knowledge, it does not immediately direct our will. It is not possible to bridge completely the gap between knowledge and will. Goals can be generated only by other goals, not from knowledge. Therefore the answer that V. Turchin gives to the question of whether there is a bridge between science and ethics is that this bridge is
provided by the concept of "evolution", and by the inborn characteristic of human beings which he calls the "will for immortality" (1991, Principia Cybernetica, http://pespmc1.vub.ac.be/ETHICS.html). Huston Smith, in his talk entitled "The Search for Universal Values" at the Academy of Athens Symposium (Athens, 2004), 47, 115, 321 pointed out that two universal values are already in place:
1. The general agreement that there is a standard by which the difference between right and wrong can be determined. This presupposes that there is an objective standard (e.g., in case of disagreement there is an objective referee to judge and determine who is right).
2. Lists of values or virtues that characterize the standard mentioned above. Greece heads its list with "the good, the true, and the beautiful". Traditional India's list is headed by "existence, consciousness, and bliss". The lists may extend without end.
Smith notes that a third universal value exists which needs discussion. This is the value of "science, technology and progress", the so-called "Modernity", which is still on its way out. Modernity's three-part scenario considers science as the oracle of truth, focuses on technology as the fountain from which all blessings flow, and ends in the dream of automatic progress through ever-advancing technology. According to Smith, that scenario is no longer credible. The catastrophic events of the twentieth century show clearly that without human and social values the myth of progress is no more than a cruel joke. If universal values are to be arrived at, at least three needs must be met:
- The need to understand the dynamics of prejudice
- The need to table our prejudices
- The need to listen to others
Understanding the dynamics of prejudice is crucially important, especially in a time of war. As for science as the royal road to truth, Smith says that it is now realized that its methods deal with life's physical constituents only, and that without values, meanings, and purposes only lifeless matter remains. Science and technology can help us to live longer lives, but they cannot tell us why we should live longer or how to make the added years worthwhile.
Chapter 4
Human–Machine Interaction in Automation (I): Basic Concepts and Devices
The only important thing about design is how it relates with people.
Victor Papanek

The design process, at its best, integrates the aspirations of art, science, and culture.
Jeff Smith

The technology upon which the human–computer interface is built changes rapidly relative to the time with which psychological experimentation yields answers. Our design principles must be of sufficient generality to outlast the technological demands of the moments.
Donald Norman
4.1 Introduction

Interactive communication, or dialog, between humans and machines (computers) is a two-way communication, with each party giving the other feedback about its understanding of the last piece of information received and the progress made on any action that was requested. Human–machine interaction (HMI) is now a well-established field of computer science and automation which uses concepts, principles, and techniques from human factors engineering, cognitive and experimental psychology, and other closely related disciplines. 30, 138, 303, 305, 349, 386, 389 The main goals of HMI are the following:
technology is having on human’s productivity, job satisfaction, human–human communication, and the general quality of life. To understand better what consequences, good or bad, automation and information technology, could have on human life and on future generations, and what can be done to minimize the undesired effects and maximize the desired ones. S.G. Tzafestas, Human and Nature Minding Automation, Intelligent Systems, Control and Automation: Science and Engineering 41, DOI 10.1007/978-90-481-3562-2 4, c Springer Science+Business Media B.V. 2010
- To make automation and computer-based systems easier to use and more effective in the hands of experts.
Human–machine interface design has been shown to have a significant influence on factors such as learning time, speed of performance, error rates, and user satisfaction. While good interface design can lead to significant improvements in task performance, poor designs may hold the user back. Thus, work on HMI is of crucial importance. This chapter, the first of two chapters on human–machine interaction, presents the basic concepts of human–machine interactive systems (applications, design methodologies) and discusses two fundamental categories of interactive input–output devices, namely "keys and keyboards" and "pointing devices" (touch screens, light pens, graphic tablets, trackballs, mice, and joysticks), including some guidelines for input device selection. Then, the issues of screen design and workstation design are considered, with the primary intent to arrive at practical guidelines recommended for human satisfaction and for the minimization of stress, fatigue, and risk of injury. In the next chapter more advanced human–machine interfaces will be discussed, such as graphical interfaces, knowledge-based interfaces, and intelligent interfaces.
4.2 Applications of Human–Machine Interactive Systems

The applications of human–machine interactive systems are continuously increasing, ranging from office automation and medical automation to industrial and space automation. In general, these applications can be divided into two main categories 30, 349:
1. Applications concerned with pure information exchange
2. Advanced applications that make use of the information exchange
The first category includes the following four basic applications:
- Information Acquisition:
This deals with the introduction of data into a computer via an interactive process in which the data are checked on entry, any errors that occur are corrected at once, and the computer is used to acquire only the desired information. In this way there is a reduced need for redundant transcription and transmission of data (a minimal sketch of this check-on-entry idea is given at the end of this section).
- Information Retrieval: This deals with the recovery of information which has been stored in a computer, again in an interactive, conversational way, to ensure that the information required has been fully identified and that the user is helped to find what she/he really desires. Usually, the retrieval process is a prerequisite for the introduction of data, because the originators of information may ask questions before making a decision about what is to be entered into files.
- Editing: The computer provides useful assistance in the preparation of stored text material for publication. The main part of this process consists of accepting raw text input and allowing the author to make alterations, i.e., adding or deleting
material and moving portions of it around. Non-textual or highly structured material, such as computer programs or market catalog records, can also be edited.
- Instruction: Here, the instructional process might be considered that of asking the user (e.g., a student) questions the answers to which are already known, and comparing her/his responses with the known information. The computer then acts differently according to whether she/he can or cannot answer the questions correctly.
In information acquisition the computer asks for data it does not already have, and in information retrieval it works with the user to ascertain what is wanted. In the more advanced applications the human and computer act jointly to produce a useful output. The output may be a directly usable product (e.g., an electronic circuit diagram, a decision about an enterprise, a computer program, etc.). Here the human–machine interaction is much closer to human–computer symbiosis than in the pure information-exchange applications. Three such applications are the following:
- Interactive Programming: The production of computer programs is one of the most effective applications of human–machine communication. The term interactive programming is very frequently used interchangeably with the term time-sharing, but the two are different: the former is usually an application of the latter, but each can also exist without the other, and in many situations it is very difficult to keep the two concepts separate.
- Computer-Aided Design: The process of design is, from an abstract viewpoint, the creation of an information structure or model which usually is highly complex and expresses relationships which cannot easily be verbalized (e.g., color, shape, etc.). The design is different from the product, since it is information descriptive of the product. The computer is used to provide valuable help in evolving the design (i.e., the information set), and the design is then used separately to develop the product. In many cases the computer is also used to develop the product. For example, a computer can assist in the design of a textile fabric and then control the process machine directly to produce the fabric previously designed. 326 Today's design departments are usually equipped with advanced computer-aided design (CAD) systems which increase design efficiency and rationalize the product-development process.
- Computer-Aided Decision Making: Real decisions in government, business, or industry are not an abstract selection between a yes and a no; they are as much the design of a plan or strategy as the selection from among a set of well-defined and discriminated alternatives. Thus computer-aided decision making should be largely restricted to situations where the system and decision variables are relatively few and well defined. Unfortunately, real management or engineering presents the opposite situation, i.e., many ill-defined variables to work with. Thus, management and automation systems of today usually employ combinations of information retrieval and simulation. The machines recover information on
demand and make requested tests, but do not decide for the human; rather, they assist her/him in making a decision by supplying needed information and sometimes reminding her/him of what is to be done or decided.
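As promised in the description of information acquisition above, the following minimal sketch illustrates the check-on-entry idea: invalid data are rejected at the moment of entry, so errors are corrected at once instead of propagating into stored files. The entries and the validator are invented for illustration.

```python
# Illustrative sketch of "check on entry, correct at once" in interactive
# information acquisition; names and data are hypothetical.
def acquire(entries, is_valid):
    """Scan simulated user entries; reject invalid ones immediately so
    errors are corrected at the point of entry rather than stored."""
    accepted = []
    for entry in entries:
        if is_valid(entry):
            accepted.append(entry)
        else:
            print(f"rejected on entry, ask user to re-enter: {entry!r}")
    return accepted

# Example: acquiring four-digit part numbers.
print(acquire(["1234", "12a4", "0042"], lambda s: s.isdigit() and len(s) == 4))
```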
4.3 Methodologies for the Design of Human–Machine Interaction Systems

Productivity is a major concern of any developed state, and it is believed that human–machine interaction technology will be a dominant factor in a country's ability to compete in the world market and secure a high standard of living for its citizens. In the opinion of most economists, technological innovation and automation are the most effective means of improving productivity. But real life shows that, on average, the effect of computerization on worker efficiency and economic productivity is still relatively small. Landauer 305 explains that the root cause of this problem is the inadequacy of the existing engineering design methods for application software. This can be largely remedied by the development of powerful and efficient HMI devices and systems. A partial list of promising methodologies for the design and development of such HMI means is the following 388:
- Task Analysis:
Here the focus of attention is on the goals of the activity. Task analysis involves formal and informal techniques by which one can find out what the goals are to which the system is to be applied, and what role new technology could play in achieving these goals.
- Finding Potential Users: The designers seek out users in the work environment where the system will be used, and bring them into the process at every step.
- User Testing: A mock-up of the system is tested with users like those for whom the system is intended, with tasks like those they will usually do.
- Performance Analysis: Performance analysis is used for the assessment of innovative design (not for the improvement of existing design), and focuses on time, errors, and individual differences that can provide insight into the deficiencies of present technologies and processes, and the potential ways of improvement (a minimal sketch of such measures is given at the end of this section).
In all cases, fundamental research in cognitive and engineering psychology might provide the basis for improving HMI. The dominant hardware/software element whose future progress is particularly important to HMI is the visual display. The main psychological factors associated with it were discussed in Chapter 3. Actually, computer displays (screens) provide a smaller viewing and interaction space than paper and print technology does. Of course, this drawback is partially compensated by the serial presentation of text and graphics in multiple overlapping windows, and by the availability of dynamic displays not possible in print; but, even so, the computer does not exploit very well the human's wide field of vision and action in space. Therefore much room for improvement is available here (e.g., advanced graphical user interfaces, graphical interfaces for knowledge engineering, etc.).
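The kind of measures collected in user testing and performance analysis can be illustrated with a small sketch. All names and numbers below are invented; the point is simply that task time, error counts, and the spread across individual users are the basic quantities such an analysis reports.

```python
# Minimal sketch of performance-analysis measures from user testing.
from statistics import mean, stdev

trials = [  # (user, task_time_seconds, errors) -- illustrative values
    ("u1", 42.0, 1), ("u2", 55.5, 3), ("u3", 38.2, 0), ("u4", 61.0, 2),
]
times = [t for _, t, _ in trials]
errors = [e for _, _, e in trials]
print(f"mean time {mean(times):.1f}s (sd {stdev(times):.1f}), "
      f"mean errors {mean(errors):.1f}")
```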
4.4 Keys and Keyboards

Keyboards are widely used both on typewriters and as input devices to computers. The original work on the typewriter keyboard was devoted to improving its mechanical action so that it would operate more smoothly with fewer faults. More recent efforts have focused on improving the speed and precision of typing. The main technological design features of keys and keyboards that affect human typing performance are the following 317:
- Keyboard layout
- Keyboard height and slope
- Keyboard profile (relative angles and placement for different rows of keys)
- Key size and shape
- Key force and tactile feedback
- Key movement
- The design of special-purpose keys (such as the "Backspace" key, the "Enter" key, cursor control keys, etc.)
Here, a brief discussion of the keyboard layout issue will be given. For a review of the studies on the other key and keyboard features in the above list, the reader is referred to the article by Lewis et al. 317 and the references contained therein.
4.4.1 Keyboard Layout

The keyboard layout is concerned with the position of letters and numbers on the keys. Originally, the letters had an alphabetic arrangement (Sholes keyboard). 631 Today, the standard keyboard layout is the so-called QWERTY layout (named from the left-most letters of the top row), which has a larger spacing between common pairs of letters, leading to a reduced frequency of jamming of sequentially struck type bars. The QWERTY layout was first patented in 1878, 93, 399 and since then many efforts have been made to improve the keyboard layout by designing non-QWERTY character arrangements. The principal of these efforts is the so-called DSK layout (Dvorak Simplified Keyboard, patented in the USA by Dvorak in 1936). Dvorak's design was based on principles of time-and-motion analysis, assuming ten-fingered touch typing. 126 Other assumptions include the assumption that simple motions are easier to learn and quicker to perform, and the assumption that rhythmic motions are less fatiguing than erratic ones. When using the DSK arrangement, the typist uses the right hand more than the left, and her/his fingers are assigned proportionate amounts of work (Fig. 4.1a–c). Later experiments and studies comparing the DSK with the QWERTY arrangement include the following 317:
- The Navy Department Study (USA):
Two groups of typists were compared. The first group involved QWERTY-trained typists who learned the DSK layout.
Fig. 4.1 Sample keyboards: (a) Logitech Classic, (b) IBM International, (c) Fentek Kinesis Maxim. Sources: (a) www.logitech.com/index.cfm, (b) www.powerbrixx.com/index.php, (c) www.fentek-ind.com/ergo.htm
The second group consisted of QWERTY typists who received extra training on the standard keyboard. On the basis of a cost–benefit analysis, the overall conclusion was highly favorable to DSK retraining.
- Kinkead's Simulation: The conduct of a fair experiment comparing the DSK and QWERTY layouts is actually a difficult task, due to the widespread use of QWERTY. For this reason Kinkead 292 made the assumption that the time to make a particular finger motion is the same for both QWERTY and DSK. His overall conclusion was that at best DSK is 2.3% faster than QWERTY.
- The Norman and Fisher Simulation: This is another comparison study of QWERTY and DSK using a computer simulation of the hand and finger motions of a skilled typist. 396 The result of the study is that DSK shows a 5.4% advantage in typing speed over QWERTY (58 words per minute for DSK, 56 words per minute for QWERTY). It should be remarked that both Kinkead's and the Norman–Fisher simulations considered only expert typists and the typing-speed feature.
- Search for the Optimal Key Layout: This was an attempt to find the best keyboard layout using an automated (artificial intelligence) search. 392 The model of Norman and Fisher was employed. Examining 50,000 keyboard layouts from the first to the final iteration, the best layout found was about 10% better than QWERTY and 1.2% better than DSK.
The overall conclusion of the above studies is that the DSK layout is about 2.3% to 17% faster than the QWERTY layout. However, because there are many unknown factors (such as how long it would take a particular typist to retrain), the practical productivity improvement to be expected from switching from QWERTY to DSK is very small (estimated as an increase from about 30 single-spaced pages to 31.5 pages per 8-h non-stop typing day, with roughly 800 words per page). This, together with the observation that a typist trained on QWERTY can easily transfer her/his skill to any other standard keyboard, whereas a typist trained on DSK cannot, gives QWERTY the overall preference. Several studies have also indicated that alphabetically arranged keyboards do not provide any benefit over QWERTY, even for unskilled typists using a reduced-size keypad 157 or for typists restricted to using a mouse or stylus. 317 Overall, the suggestion is that in most cases a QWERTY layout is more suitable than an alphabetic-order layout. In addition to the principal alphabetic section, most computer keyboards possess an extra keypad for data entry in telephone and other applications.
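The logic behind the Kinkead and Norman–Fisher style estimates can be sketched as a frequency-weighted sum of inter-key motion times. The digraph frequencies and motion times below are invented placeholders, not the published values; the sketch only shows the form of the computation.

```python
# Toy sketch of a digraph-based typing-time estimate for one layout:
# expected time = sum over digraphs of (relative frequency * motion time).
digraph_freq = {"th": 0.033, "he": 0.030, "in": 0.023}   # relative frequency
motion_time = {"th": 0.16, "he": 0.19, "in": 0.14}       # seconds, per layout

expected = sum(f * motion_time[d] for d, f in digraph_freq.items())
print(f"frequency-weighted time contribution: {expected:.4f} s")
# Comparing this quantity computed with QWERTY vs. DSK motion-time tables
# yields the few-percent differences reported by the studies above.
```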
Fig. 4.2 Sample pointing devices: (a) Fentek ergonomic touchpad mouse, (b) Logitech notebook mouse, (c) Logitech cordless Trackman optical (trackball), (d) Fentek low-profile ergonomic trackball, (e) 3M Renaissance vertical mouse. Sources: (a), (d), (e) fentekind.com/ergmouse.htm, (b) logitech.com (notebook products), (c) logitech.com (trackballs)
4.5 Pointing Devices

Pointing devices designate positions and movements in two-dimensional space, and include: touch-screen devices, light pens, graphic tablets, mice, trackballs, and joysticks (Fig. 4.2). These devices are all appropriate for pointing at or selecting among items on a display, as well as for entering graphical information, but not for entering alphanumeric data, for which keys and keyboards remain the devices of choice. Joel Greenstein 186 provides a short account of the subject, and Sherr 528 gives a full account of input device technologies. Here, we will briefly discuss the basic factors that affect the design of pointing devices, along the lines of Greenstein.
4.5.1 Touch Screens

Touch screens generate an input signal in response to a touch or movement of the finger on the display. The two fundamental ways of touch-screen operation are 42, 505: (i) overlay contact (resistive, capacitance, piezoelectric, cross-wire), and (ii) interruption of signals projected over the screen by the finger (surface acoustic wave screens, infrared touch screens, etc.). The touch resolution of resistive screens varies from 1,000 × 1,000 to 4,000 × 4,000 discrete touch points. The resolution of capacitive screens is relatively high, but the resolution of acoustic wave screens is in general lower. When the touch surface or the detectors are separated from the targets, we have the parallax effect. For all touch-screen technologies, the touch surface
will be at least slightly above the target because of the glass surface of the screen. The durability of a touch screen is always a problem in dirty environments and under continuous use. Infrared touch screens preserve the best optical clarity; if the clarity of a display is reduced, operator stress and fatigue should be expected. The principal benefit of touch screens is that the input device is also the output device. Thus, no additional space is required, and direct eye–hand coordination, as well as a direct relationship between the operator's input and the displayed output, are obtained.
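As a small illustration of how an overlay's discrete touch points relate to display pixels, the following sketch scales raw overlay coordinates to screen coordinates. The resolutions are illustrative values within the ranges quoted above, and all names are ours.

# Minimal sketch: map a resistive overlay's discrete touch points to
# display pixels. Resolutions are illustrative (see text: roughly
# 1,000 x 1,000 to 4,000 x 4,000 touch points on resistive screens).

TOUCH_W, TOUCH_H = 4000, 4000      # overlay touch-point grid
PIXEL_W, PIXEL_H = 1280, 1024      # target display resolution

def touch_to_pixel(tx, ty):
    """Scale raw overlay coordinates to screen coordinates."""
    x = tx * (PIXEL_W - 1) // (TOUCH_W - 1)
    y = ty * (PIXEL_H - 1) // (TOUCH_H - 1)
    return x, y

print(touch_to_pixel(2000, 2000))  # -> roughly the screen centre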
4.5.2 Light Pens
A light pen is a stylus which creates position information when pointed at the display screen. The light pen contains a light detector or photocell. It can operate in two modes: pointing mode (a character or figure is chosen by pointing to a spot on the screen) and tracking mode (the light pen is pointed at a cursor on the display, and the pen is then moved to trace a line). The light pen is not appropriate for detailed and precise sketching; it is suitable only for menu selection or rough drawing. 200
4.5.3 Graphic Tablets
These are formed by a flat panel placed on a table in front of or near the display. The surface of the tablet represents the display, and the motion of a stylus or finger on the tablet gives cursor location information. Specifically, when a finger is placed on the tablet, the display cursor can be programmed to move from its current position and appear at the position corresponding to that of the finger on the tablet. Brown et al. 62 show how to create windows on graphic tablets.
4.5.4 Track Balls
Here, a ball is held in a fixed housing and can be rotated freely in any direction by the fingertips. The motion of the ball is detected by shaft or optical encoders, which in turn produce output used to specify the motion of the display cursor (Fig. 4.3). The main benefit of a large trackball on a supportive surface is that it can be used comfortably for long periods, because users can rest their forearm, keep the hand in one plane, and spin and stop the trackball with the fingers. 329
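The encoder-to-cursor mapping just described can be sketched as follows; the gain and screen size are illustrative assumptions rather than values from any cited device.

# Sketch of relative pointing: trackball encoder counts are turned
# into cursor motion through a gain (counts and gain are illustrative).

GAIN = 0.5          # pixels of cursor motion per encoder count

def update_cursor(cursor, dx_counts, dy_counts, screen=(1280, 1024)):
    """Apply one sample of encoder deltas; clamp to the screen bounds."""
    x = min(max(cursor[0] + GAIN * dx_counts, 0), screen[0] - 1)
    y = min(max(cursor[1] + GAIN * dy_counts, 0), screen[1] - 1)
    return (x, y)

pos = (640, 512)
pos = update_cursor(pos, dx_counts=40, dy_counts=-12)
print(pos)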
Fig. 4.3 Keyboards with built-in touchpad or trackball: (a) Fentek keyboard with built-in trackball, (b) Fentek keyboard with built-in touchpad. Source: www.fentek-ind.com/trackballkeyboard.htm
4.5.5 Mouse
This is a very popular hand-held device (a small box) which fits under the palm and fingertips. 269 Cursor movement is generated by moving the mouse on a flat surface. To support menu changes, line drawing and input confirmation, a mouse typically has several buttons.
4.5.6 Joysticks
A joystick is a device consisting of a lever mounted vertically in a fixed base. 155 The movements of the lever are sensed by a potentiometer, and there is a continuous relationship between the amount of lever displacement and the magnitude of the corresponding output signal. If the lever is spring-loaded, it returns to the central position when released. An isometric joystick (also called a force joystick) has a rigid lever which cannot move noticeably in any direction. The force applied to the joystick (magnitude and direction) is measured by a strain gauge, and the cursor is moved proportionally to the magnitude of the exerted force. The output falls back to zero when the lever is released. The advantage of joysticks is that they can be made small enough to fit into a keyboard, and that if a palm or hand rest is provided, a joystick can be used for extended periods without fatigue. 371 Their drawback is that, due to their small size, precise positioning in absolute mode requires excessively high gains. This restricts their utility for drawing tasks (joysticks cannot be used to trace or digitize drawings). 155, 371
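A minimal sketch of the rate control used by isometric joysticks follows, assuming illustrative values for the deadband, gain and saturation limit (none of which come from the cited studies).

# Rate-control sketch for an isometric (force) joystick: cursor
# velocity is proportional to applied force, with a small deadband so
# the cursor stays still at rest. All constants are illustrative.

DEADBAND_N = 0.3        # forces below this (newtons) are ignored
GAIN = 120.0            # pixels/second per newton
MAX_SPEED = 600.0       # saturation, pixels/second

def cursor_velocity(force_n):
    """Map one axis of joystick force to cursor speed."""
    if abs(force_n) < DEADBAND_N:
        return 0.0
    v = GAIN * force_n
    return max(-MAX_SPEED, min(MAX_SPEED, v))

# Integrate over one 10 ms control tick:
dt = 0.01
x = 400.0
x += cursor_velocity(2.0) * dt    # 2 N push to the right
print(x)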
4.5.7 Selection of the Input Device
Actually, it is very risky to draw general conclusions about the optimality of a specific input device for a given task or environment. Therefore, in practice one must
follow some convenient selection procedure. A procedure of this kind involves the following steps 186 (a hedged scoring sketch for step 2 follows the list):
1. Determine the characteristics of the task(s), the user(s) and the environment (both present and future requirements should be taken into account).
2. Determine and compare the features of the candidate input devices (reviewing previous theoretical and practical experience with the devices under consideration).
3. Take into account the user's preferences. It is important to provide the user with a tool she/he likes to use, provided the tool does the job.
4. Test and evaluate the performance of the selected input device in the real working environment if possible (or by simulation, if working in the real environment is not possible).
The general rule is that the selection procedure should ultimately suggest an input tool (device) which is accepted by its user(s) and is well matched to its tasks and environments.
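One hedged way to make step 2 concrete is a weighted scoring matrix over task, user and environment criteria; the devices, criteria, weights and ratings below are purely illustrative.

# Weighted-scoring sketch for comparing candidate input devices.
# Criteria, weights and 1-5 ratings are hypothetical examples.

criteria = {"pointing speed": 0.4, "precision": 0.3,
            "fatigue over long use": 0.2, "desk space": 0.1}

devices = {
    "mouse":     {"pointing speed": 5, "precision": 4,
                  "fatigue over long use": 4, "desk space": 2},
    "trackball": {"pointing speed": 4, "precision": 3,
                  "fatigue over long use": 5, "desk space": 4},
    "joystick":  {"pointing speed": 3, "precision": 2,
                  "fatigue over long use": 4, "desk space": 5},
}

for name, ratings in devices.items():
    score = sum(w * ratings[c] for c, w in criteria.items())
    print(f"{name:10s} weighted score: {score:.2f}")

Such a matrix supports, but cannot replace, steps 3 and 4: user preference and testing in the real working environment remain decisive.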
4.6 Screen Design
Screen design is one of the key issues of HMI in computerized and automated systems, and the term usually has two meanings: the first refers to the process of specifying the visual appearance and content of a single visual frame, and the second refers to the end result of this process. Presently, with the popularity of graphical user interfaces (GUIs), the common meaning of "screen design" is the design of a particular window or dialog box rather than of the entire physical screen. The importance of screen design is due to the fact that the visual channel is still the dominant means of presenting information to the user. An overview of the literature on screen design over a period of about three to four decades was provided by Tullis, 570 covering the following issues:
Physiological and human factors
Experience of operators and application designers
Experience of graphic design
A comprehensive investigation of alphanumeric screen formats was carried out by Tullis, 568, 569 and a computer program was employed to measure the following features of the screen formats 570 : overall density, local density, number of groups, average group size, number of items, and item alignment. Multiple regression was then used to fit search times and subjective ratings of ease of use to these format features. Search time is defined as the time needed to extract a specific data item from the screen. Despite the many theoretical and practical efforts already made, many screen design issues still have to be addressed empirically, particularly those related to GUI screens. The most important of these issues is the
amount of information to present. All screen design guidelines suggest that the total amount of information on each screen should be minimized by presenting only what is absolutely necessary to the user. 164, 537 The information density on a character-mode screen is expressed as the percentage of available character spaces that are used. A character-mode screen typically has a fixed size (80 characters wide by 25 lines high), so the designer has simply to decide how much of that space to use. On GUI screens the designer has to decide on two issues: how large to make the window, and how much information to put in it. The maximum window size is determined by the lowest resolution of the target display hardware (e.g., 640 × 640), but for windows and dialog boxes other than the main application window (which is typically resizable), the designer attempts to use less than the total physical screen.
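The density measures used by Tullis can be illustrated with a short sketch. Overall density follows the definition just given; the local-density function is a simplified stand-in for Tullis's weighted measure, not his exact formula.

# Sketch of Tullis-style density metrics for an 80x25 character screen.

def overall_density(screen):
    """Percentage of available character cells that are occupied."""
    cells = sum(len(row) for row in screen)
    filled = sum(1 for row in screen for ch in row if ch != " ")
    return 100.0 * filled / cells

def local_density(screen, radius=2):
    """Mean neighbourhood occupancy around each filled cell (simplified)."""
    rows, cols = len(screen), len(screen[0])
    scores = []
    for r in range(rows):
        for c in range(cols):
            if screen[r][c] == " ":
                continue
            neigh = [screen[rr][cc]
                     for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                     for cc in range(max(0, c - radius), min(cols, c + radius + 1))
                     if (rr, cc) != (r, c)]
            scores.append(sum(ch != " " for ch in neigh) / len(neigh))
    return sum(scores) / len(scores) if scores else 0.0

screen = ["NAME:  SMITH, J.  " + " " * 62] + [" " * 80 for _ in range(24)]
print(f"overall density: {overall_density(screen):.1f}%")
print(f"local density  : {local_density(screen):.2f}")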
4.6.1 Screen Density Reduction Methods
The next issue (after determining what information must be displayed) is to avoid overloading the screen with the information conveyed. Thus, methods for reducing screen density are needed. Some of them are the following 567, 570 :
Use appropriate abbreviations (to save space).
Don't use unnecessary detail.
Use concise wording (understandable by the users).
Use standard (recognizable) data formats.
Use tabular formats with column headings.
Some methods for conveying information in a window without having to present it all at once are the following 537, 570 :
Expanding Dialog Boxes The dialog box has two sizes (small and large); the small version is shown by default and contains the typically needed items. Pressing the expansion button shows the large version of the box, which presents additional items (a minimal sketch follows below).
Tab Folders Typically, the file-folder metaphor is used for easy switching between various sets of data in the same window. The same result can be obtained via a group of mutually exclusive buttons.
Drop-Down Lists and Other Pop-Ups Typically the user clicks on a button which temporarily presents the extra information or options related to a particular control.
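As an illustration of the first method, the following minimal Tkinter sketch implements an expanding dialog box; the widget labels and layout are hypothetical.

# Minimal Tkinter sketch of an expanding dialog box: the small version
# shows the typically needed items; an "Options >>" button reveals
# the rest. Labels and fields are illustrative.

import tkinter as tk

root = tk.Tk()
root.title("Print")

basic = tk.Frame(root)
basic.pack(fill="x", padx=8, pady=4)
tk.Label(basic, text="Copies:").pack(side="left")
tk.Entry(basic, width=4).pack(side="left")

extra = tk.Frame(root)                      # hidden by default
tk.Label(extra, text="Paper size:").pack(side="left")
tk.Entry(extra, width=8).pack(side="left")

def toggle():
    if extra.winfo_ismapped():
        extra.pack_forget()
        button.config(text="Options >>")
    else:
        extra.pack(fill="x", padx=8, pady=4, before=button)
        button.config(text="<< Options")

button = tk.Button(root, text="Options >>", command=toggle)
button.pack(pady=4)

root.mainloop()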
4.6.2 Information Grouping and Highlighting
In actual work it is very often necessary to visually group the elements shown on the screen. This information grouping can be done using several existing
techniques. 70, 100, 109 Information grouping facilitates the extraction of information from the screen by the user and the interpretation assigned to it. Information highlighting means focusing attention on certain elements of the screen (e.g., by color, brightness, flashing, underlining, reverse video, etc.). 164
4.6.3 Spatial Relationships Among Screen Elements
The spatial relationships among the elements shown on a screen are also very important. Many spatial relationships, such as symmetry, are very useful in helping the user locate a particular element. Some examples of such spatial relationships among elements are the following:
Alignment
Label/data relationships
Process associations
Symmetry
These relationships have been studied by many researchers, and several guideline documents are available; 135, 537 a collective discussion was presented by Tullis. 570 Almost all computer-generated displays (even those with extended graphics) involve some form of textual information on the screen. Many useful guidelines are currently available for the presentation of text, the outcome of extensive empirical studies. 480 The use of graphics for conveying information on a screen is a key problem of GUIs and will be studied in the next chapter. For now, it is useful to recall that in most situations "a picture is worth a thousand words".
4.7 Work Station Design
Workstation design, together with the associated human work issues, is one of the primary concerns that must be addressed in computerized and automated systems. As we saw in Chapter 3 (Section 3.2), the use of VDTs in the workplace can create cumulative trauma disorders. 183 Workstation design methods can help reduce the stress imposed by the required postures and motions, thereby reducing the risk of injury. Of course, a good workstation design can also improve human effectiveness, resulting in better efficiency and higher productivity. Three ways to reduce operator stress and fatigue levels are: reduction of extreme joint movement, reduction of excessive forces, and avoidance of highly repetitive jobs. 439 In the following we summarize some general guidelines for ergonomic workstation design, as described in the human factors, ergonomics and automation literature. 13, 178, 496, 531
The factors which will be discussed are:
Physical layout factors
Work method factors
Video display terminal factors
4.7.1 Physical Layout Factors
The physical layout of the workstation should fit the user, who may have special needs. Workstations must permit very high flexibility, in order to accommodate the contemporary work population (elderly workers, minorities, etc.). Factors such as sitting and postural conditions during working hours are very important for the health and job satisfaction of employees. The work table must be adjustable so that the worker is able to work in various seated positions. The ideal work situation is to alternate between sitting and standing at regular intervals; changing body postures normally minimizes the fatigue and discomfort which would arise from keeping the same posture for long intervals. The chair should be designed very carefully to reduce the likelihood of pain, fatigue or injury in the neck, shoulders and lower back. The working area should be designed around elbow height when sitting or standing in full extension postures (for heavy manual work the working height must be 10–12 cm below the elbow, whereas for precise work it should be 5–10 cm above the elbow). 178, 439 Working materials, tools, controls and other equipment should be easily accessible to the worker, to avoid inconvenient and awkward postures. According to Putz and Anderson, 439 all reaching should be in front of and below the shoulder. Normal work should be in the region which can be comfortably reached by the sweep of the arm, with the upper arm hanging naturally at the side of the body. Lighting conditions also play a critical role: inadequate lighting may force workers to stand or sit in uncomfortable postures, and if precise and detailed work is required, the lighting must be properly tuned.
4.7.2 Work Method Factors
In addition to physical workstation factors, work method factors should also be considered and optimized, as supplementary or temporary means of assuring comfortable working conditions, especially when workstation design alterations are not easy or immediately possible. To this end, workers and operators should be trained in how to carry out their tasks and how to use the controls and tools. The general rule is to tune the working methods and conditions so as to reduce stress effects and increase the feeling of job satisfaction (e.g., by increasing the
amount of conscious control and meaningful experience of the employee). The stress due to boring, repetitive actions can be considerably reduced by rotating job tasks among the workers.
4.7.3 Video Display Terminal Factors
As already mentioned in Chapter 3, work at VDTs may lead to visual and musculoskeletal problems if the workstation and VDT are not properly designed. 191 If an operator views objects on the VDT screen at close range for long intervals, or there is excessive reflected glare, visual fatigue is very likely to occur. Screen reflections in a brightly lit office environment (the standard office set-up) also create a high risk in VDT use. This risk can be reduced in many ways, for example by reorienting the VDT screen, selectively removing light sources, or using partitions or blinds. 496 If these methods are not feasible or satisfactory, a microfilament mesh filter can be placed over the screen, or a parabolic lighting fixture can be fitted below a standard fluorescent fixture, to reduce screen glare. A good way to reduce visual fatigue is to take rest breaks: the National Institute of Occupational Safety and Health (NIOSH) suggests, at minimum, a break after every 2 h of continuous VDT work. 13 Of course, VDT workers should have their vision tested at the beginning of their employment and at regular intervals afterwards. Regarding the musculoskeletal problems of VDT workers, the initial results of the Wisconsin–NIOSH study showed that the rate of "occasional" back, neck and shoulder discomfort exceeds 75%. 497 Appropriate design of the workstation physical layout (as described above) is the first step towards improving VDT working conditions. Some recommended VDT workstation features are the following: 529, 531, 556
Movable keyboards (with adjustable height)
Adjustable backrest (to support the lower back)
Adjustable chair (height and depth of the seat)
Screen between 30 and 60 cm away
Indirect general lighting and moderate brightness
Direct adjustable task lighting
Sufficient work-table space
Appropriate ventilation
In any case, the workers should be informed about the ergonomic factors, and be trained to properly adjust their own workstations and perform their tasks ergonomically. Physical exercises recommended for VDT operators were presented by Lee and co-workers. 312
Chapter 5
Human–Machine Interaction in Automation (II): Advanced Concepts and Interfaces
A design isn’t finished until somebody is using it. Brenda Laurel Ease of use and ease of learning are not the same. Katherine Haramundaris The more technology becomes complicated inside, the more it has to be simple outside. Derrick de Kerckhove
5.1 Introduction
The design of advanced human–machine interfaces (HMIs) is actually a multidisciplinary problem needing the cooperation of experts on human cognition, display technologies, graphics, software design, natural language processing, artificial intelligence, etc. One can say that the design of an HMI is more an art than a science. As we already know, the goal of designing efficient HMI components in automated systems is to improve operational efficiency and overall productivity while providing a safe, comfortable and satisfying front-end for the operator or user. To this end, the capabilities, the limitations and the 'idioms' of the human operator should be analyzed and taken into account in the HMI design. The fundamental concepts and the basic human interface devices were presented in the previous chapter. Our aim here is to discuss graphical user interfaces (GUIs), static and dynamic visual displays, a number of further design features of visual displays, and some advanced HMIs, namely intelligent HMIs, natural language HMIs, multi-modal HMIs, HMIs for knowledge-based systems, and force sensing/tactile-based HMIs.
5.2 Graphical User Interfaces
5.2.1 General Issues
The degree to which a human–machine interface can respond to the user's cognitive requirements and support her/his natural cognitive structures and processes is one of the basic factors determining the efficiency of human–computer interaction. Graphical interfaces may be extremely helpful in this respect, especially when the user's 'natural idiom' is graphical. 154 Today, graphical user interfaces (GUIs) are widely available and enable all kinds of users (novice, intermediate and experienced) to perform their automation tasks more effectively and productively. This is because "pictorial communication" is both natural and efficient for humans, while still being sufficiently precise for computer processing. 155 It is known that graphical representations are appropriate for displaying multi-dimensional information and permit a much wider bandwidth of human–computer communication than is possible with text alone ("a picture is worth more than a thousand words"). Experimental evidence shows that recall is better for information presented pictorially, and supports the common claim that "people think in pictures". 338 Actually, humans seem to enjoy interacting graphically with a computer more than via conventional alphanumeric methods. This is supported by strong evidence that people work with mental images in a way closely similar to the way they work with actual images of the real world. 515 It has been asserted that one of the best problem-solving strategies humans employ is based on creating virtual physical representations of mental images of physical structures, which can then be examined and understood via powerful pattern-matching capabilities. Thus, the designers of human–machine interfaces went one step further by allowing users to act on displayed information in the same way as they act on mental images. This is the philosophical ground of direct manipulation interfaces, which are usually based on real-world metaphors, such as the desktop metaphor, that attempt to model parts of the physical world. They give the user graphical representations of objects within that world (e.g., files or mail trays in the desktop metaphor), which she/he can then manipulate directly, using a mouse, without the need to decompose tasks mentally into verbal commands with complex syntax. Thus, the user can perform an action on a symbolic representation of the object at hand and observe its results in a natural and realistic manner. Direct manipulation interfaces are now well established, are used in many automation applications, and offer high levels of user satisfaction and task performance. Typical applications of GUIs are:
Programming tools and environments (flow charts, graphical programming languages) 173, 530
Database management (conceptual knowledge model for the database, iconic command language, molecular model metaphor) 159, 453
Simulation and training systems (e.g., CAD systems which use graphical representations of objects and processes) 56, 218
Application-independent GUIs (they use heuristic knowledge about the types of graphical representations appropriate for particular purposes, and generate displays for several applications automatically) 84, 150
5.2.2 Design Components of Graphical Interfaces
Current GUIs employ windows, icons (from the Greek word εικών (eikon) = image), menus, and pointing devices, with two-dimensional screens, and for standard office automation applications they simulate a desktop environment on the display. As already mentioned in several places, a GUI must be human-centered, i.e., it must relate efficiently and precisely to the tasks, workflow, purposes, technical skills, personality, and culture of the user. The basic design components of GUIs, which must be integrated in an operational and aesthetic way, are the following 336:
Metaphors Mental model Navigation of the mental model Appearance Interaction
Metaphors are fundamental concepts communicated via terms and images. The mental model is the user's organization of data, operations, tasks and roles. Navigation of the mental model includes the menus, icons, dialog boxes and windows. Appearance includes the visual, auditory, and verbal features of controls and of the decorative background. Finally, interaction concerns the operation of interactive screen controls and the physical input/output display devices. The proper organization and quality of the above components, and the effective employment of visual elements and interaction, are all crucial for achieving humane and usable systems.
5.2.3 Windowing Systems
Current GUI-based building tools are suitable for constructing various applications easily and quickly, although these tools do not automatically guarantee that the applications are optimally designed. The primary standard GUI products available in the market, each with unique features, are the following:
Macintosh GUI (Apple, 1984)
Microsoft Windows and OS/2 Presentation Manager (1985)
The Macintosh was the first computer with a GUI. The classical Macintosh GUI was a single-tasking system, but current versions allow the development and operation of multi-tasking applications. Actually, because of its historical precedence and popularity, the Macintosh established the standard for evaluating the other GUIs.
Microsoft Windows was developed as a graphics-oriented alternative to the character-based environment of MS-DOS on PC systems. Actually, graphics-oriented software on PCs was opened up by Microsoft's mouse-driven menus and bit-mapped screens. The OS/2 Presentation Manager was created by IBM and Microsoft for IBM and compatible microcomputers. Other GUIs developed over the years are:
NeXTStep GUI (1988, for the NeXT computer)
Open LOOK GUI (Sun Microsystems and AT&T)
OSF/Motif GUI (Open Software Foundation)
Of these, the Open LOOK GUI was discontinued in 1994, although some applications still exist. Besides the above standard GUIs, many current products are based on non-standard, semi-customized versions of commercial GUIs, especially for multimedia and Web-oriented applications. The quality of the displays and the level of interaction with the user depend on the windowing code (software structure), the windowing management technique, and the base window system's method of displaying an image. The two typical windowing system architectures are 336 :
Kernel-Based Architecture This architecture provides strong interactivity, which depends on the resources available in a single machine.
Client–Server-Based Architecture This architecture permits the window system software to be shared among networks of heterogeneous machines, but the response time is restricted by the communication bandwidth of the network. A server is a computer that runs software for an entire network of interconnected machines; a client is a piece of software that requests and employs the server's capabilities.
Kernel-based systems provide better performance, because the communication overhead of client–server systems may lead to performance degradation. For the automatic arrangement of windows there are the following options (a small layout sketch follows the list):
Tiled windows (the available display space is automatically filled)
Overlapping windows (they have depth values which represent their distances from the viewer)
Cascading windows (a special case of overlapping windows, where the windows are arranged progressively so that no window is completely obscured)
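The arrangement policies can be made concrete with a small sketch that computes window rectangles; the sizes and offsets are illustrative choices, not values from any particular windowing system.

# Sketch computing window rectangles (x, y, w, h in pixels) for two of
# the arrangement options; cascading is a special case of overlapping.

SCREEN_W, SCREEN_H = 1280, 1024

def tiled(n):
    """Split the display into n equal vertical tiles (one simple policy)."""
    w = SCREEN_W // n
    return [(i * w, 0, w, SCREEN_H) for i in range(n)]

def cascaded(n, step=30, w=800, h=600):
    """Offset each window so that every title bar stays visible."""
    return [(i * step, i * step, w, h) for i in range(n)]

print(tiled(3))
print(cascaded(3))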
5.2.4 Components of Windowing Systems
Any windowing system is characterized by a set of standard components, which are also used by the GUIs. These components are 336 :
Windows
Menus
Controls
Dialog boxes
Modal dialogs
Modeless dialogs
Control panels
Query boxes
Message boxes
Mice and keyboards
A brief description of them is as follows:
Windows Discrete areas of the visual display that can be sized, moved and rendered independently on the display screen.
Menus They allow a designer to see and point instead of remembering and typing.
Controls Visually represented window components that are directly manipulable with the keyboard or mouse.
Dialog Boxes A dialog is any interactive information exchange between the user and the system. Dialog box types are control panels, query boxes, and message boxes.
Modal Dialogs Dialogs to which the user must respond before any other action can be taken.
Modeless Dialogs These dialogs have limited scope, and so do not restrict the subsequent actions of the user.
Control Panels They provide information reflecting the current parameter values, which can be altered interactively while the panel is on the display.
Query Boxes They appear as a result of user actions (not requested explicitly) and allow the user to cancel the action that produced the query.
Message Boxes They give critical information to the user and appear when the system is about to enter, or has entered, a non-reversible and potentially dangerous situation.
Mice and Keyboards These are the typical interaction devices of GUI systems. The mouse is used for tasks requiring spatial manipulation (e.g., menu navigation, window sizing, and partitioning). The keyboard is mostly appropriate for sequential tasks (e.g., text entry).
5.3 Types and Design Features of Visual Displays
5.3.1 Visual Display Types
Visual displays are classified according to the type of information they present to the user, which can be static or dynamic in nature. Static displays do not show, in real time, changes in the information content over time (the displays themselves do not change with time). Displaying dynamic information requires capturing the changing nature of the information. Static displays can present, in the form
of graphs, the changes over time of a variable of a dynamic event, after the event has occurred. Typically, dynamic visual displays contain elements of one of the basic forms of static displays, i.e., textual information, information in the form of graphical displays, and information in coded or symbolic form.
Static Visual Displays There are two types of static displays: textual displays in hardcopy format, and textual displays on visual display terminals or computer screens. 367 The features of textual displays are:
Visibility of the text (symbol distinguishability/separation from the surroundings)
Legibility of the text (the characteristic of alphanumeric characters that enables identification of one character from another)
Readability of the text (the characteristic of alphanumeric characters that allows the organization of the content into meaningful groups of information, such as words and sentences)
The factors affecting the visibility, legibility and readability of textual information presented in hardcopy form are typography, size, case, layout, and ease of reading.
Dynamic Visual Displays These displays are suitable for presenting information about variables which change with time, and are divided into quantitative and qualitative visual displays. 367 Quantitative displays give information about the quantitative value of the variable at hand. Typical cases are analog displays (fixed scale and moving pointer, or moving scale and fixed pointer) and digital displays (mechanical-type counters). Qualitative visual displays depict information about a time-varying parameter based on qualitative information about it (e.g., a trend or rate of change of the parameter). Qualitative displays can also be used to determine the status of a variable within predefined regions (e.g., whether the temperature is low, medium or high, or whether the fuel tank is empty, half-full or full). Usually, qualitative information is conveyed by color variables with a specific meaning, or by shapes (or areas) with a special characterization (e.g., "dangerous", "safe", etc.). Another type of qualitative display is the so-called 'check-reading' display, used to determine whether a particular reading is normal or not. Finally, the class of qualitative displays includes the "indicator displays". Indicators represent distinct (discrete) pieces of information, such as whether a condition is normal or abnormal (dangerous), or whether a working surface is hot or cold. The usual form of status indicators is colored lights.
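A check-reading or indicator display of the kind just described amounts to mapping a reading onto qualitative regions and indicator colors. A minimal sketch follows, with thresholds and colors chosen purely for illustration.

# Sketch of a qualitative 'check-reading' style indicator: a numeric
# reading is reduced to a status region and an indicator colour.
# Thresholds and colours are illustrative assumptions.

def fuel_status(level_pct):
    """Map a fuel-tank reading to the qualitative regions in the text."""
    if level_pct < 10:
        return "empty", "red"          # abnormal: demands attention
    if level_pct < 60:
        return "half-full", "yellow"
    return "full", "green"             # normal condition

for reading in (5, 40, 95):
    state, colour = fuel_status(reading)
    print(f"{reading:3d}% -> {state:9s} ({colour})")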
5.3.2 Further Design Features of Visual Displays
Among the various general design features of visual displays, we have selected the following for presentation here: 367, 523
Display dedication
Display integration
Compatibility
Control–display arrangement
Display adaptiveness
Display Dedication This is the property that each variable has a dedicated display (screen) fixed in an individual position, in contrast to the case where several variables share a single computer display or a small group of displays. With dedicated displays, the user must look at a particular position to identify and monitor the corresponding variable. The drawback of dedicated displays is that if the number of variables of interest is large, a large number of dedicated displays is required, and console real estate may become scarce. If many variables share a single display or a small set of displays, the displays can be sized or shaped in any way, and one can call them up as required via menus or other facilities. However, with shared displays we have the so-called 'keyhole problem', which occurs when the operator looks at the world as if through a keyhole. The keyhole problem on a single visual screen can be partially overcome by dedicating some parts of the screen to an overview of information in standard format, or to icons; the Macintosh GUI and Microsoft Windows work in this way. In aircraft, the number of individual displays kept increasing up to the Concorde (1970), but since then a continuous effort has been made to reduce the number of independent displays. In a large-scale system it is useful to have a dedicated safety display for the overall review of the "normal operation" of the system, represented by a small set of critical parameters (5–10 parameters), although the actual number of system parameters/variables may be very large (some hundreds of variables).
Display Integration This is the combination of representations of many variables in a convenient, sensible way. For example, if there are two variables to display, we may have a two-dimensional plot of their values instead of a separate indicator for each of them. The set of states above the plotted boundary may then be the "dangerous area", whereas the area below it is the "safe area" of system operation states (a small sketch of this idea follows at the end of this section).
Compatibility This is the relationship between the expectations of the user and the input stimuli and responses of the system interacting with the user (see also Chapter 2, Section 2.5.2). Good compatibility leads, in general, to fewer user errors and better overall human performance. Compatibility falls into four categories: 365 (i) conceptual compatibility, (ii) movement compatibility, (iii) spatial compatibility, and (iv) modality compatibility. Conceptual compatibility is the match which must exist between certain types of stimuli (e.g., symbols) and the conceptual associations that humans make with these stimuli. Movement compatibility (otherwise called population stereotypes) refers to the relationship between the movement of displays and controls and the output response of the system under control (e.g., movement of a control to follow the movement of a display, movement of a control to control the movement of a display, movement of a control to produce a particular system response).
Spatial compatibility concerns the relationship which must exist between the physical features and arrangement of the controls and their associated displays (e.g., compatibility between the physical design of the function keys on a keyboard and the corresponding labels for these keys). Modality compatibility refers to the situation where certain stimulus–response combinations are more compatible with some tasks than with others.
Control–Display Arrangement The physical arrangement of controls and displays in the workspace must be in harmony with the sensory capabilities and the biomechanical and anthropometric features of the human operator. A truly optimal arrangement of every display and every control is very difficult to achieve, but a few guidelines of good control–display placement are the following:
Select a minimum number of controls.
Controls must be easy and simple to activate (within the human ability to adjust posture).
Design hand controls for high-speed or high-precision operations.
Design foot controls for operations needing high forces.
Emergency controls and displays must be clearly separated from controls and displays of normal operation.
Use the same grouping for major controls and displays (otherwise, the exception should be very clearly identifiable).
Display Adaptiveness This is the ability to alter the logic and format of the display according to the situation at hand. Typical displays in industrial and aerospace systems have fixed display formats (e.g., labels, scales and ranges of variables are designed into the display), and conventional alarms have fixed set points. Adaptive formats are computer-generated formats that are reset and altered at various stages of the operation of the system (e.g., take-off, landing, and en-route travel of an aircraft), or adjusted to the personal requirements of the operator. Note that adaptive display formatting may present problems (e.g., the operator may confuse the new format with a previous one, or may be uncertain which format is currently in use). Thus, adaptive formatting should be employed with extra care; for these reasons it is often advisable to use fixed display formats. Real problems and systems are subject to several parameter, variable and functional constraints, which in turn constrain the actions the human is allowed to take. The environment affects the actions, while the actions affect the environment, in full reciprocity. Thus, displays should be designed so that the constraints are evident and convey in a natural way what the appropriate action is. Such displays were called ecological displays by Rasmussen and Vicente. 81, 445, 523
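The display-integration idea mentioned earlier in this section (judging two variables jointly against a boundary curve rather than via two separate indicators) can be sketched as follows; the variables and the boundary are hypothetical.

# Sketch of display integration: two process variables are judged
# jointly against a boundary curve. The boundary is an arbitrary
# illustrative choice, not a real plant characteristic.

def boundary(pressure_bar):
    """Maximum safe temperature (deg C) at a given pressure (hypothetical)."""
    return 180.0 - 8.0 * pressure_bar

def region(temperature_c, pressure_bar):
    """Classify the joint operating point for the integrated display."""
    return "dangerous" if temperature_c > boundary(pressure_bar) else "safe"

print(region(120.0, 5.0))   # safe: below the curve
print(region(160.0, 5.0))   # dangerous: above the curve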
5.4 Intelligent Human–Machine Interfaces
The need to develop intelligent human–machine interfaces in automation systems is motivated by several factors. First, there are increasing levels of automation, data processing, information fusion, and intelligent control between the human
operator/user and the real system (plant, process, enterprise) and the source of sensory data. As a result, the operator generally has less direct knowledge of (or control over) the process at hand, and has to rely on intermediate processing and control systems to provide state information or control capabilities. Second, advanced artificial intelligence (AI) and knowledge-based (KB) techniques need to be employed and embedded in the loop to achieve high levels of automation via monitoring, event detection, situation identification, and action selection functions. Since most of the intended users and operators do not actually know how these AI and KB techniques work, an appropriate translator must be designed that can decipher the output of the various subprocesses and convey this information to a wide variety of potential users and operators. To this end, human attention and decision-making functions must be integrated with explanation facilities, and intelligent human–machine interfaces must be designed that are more than a medium for interaction with a system. Rather, these interfaces must be a tool provided to an operator for understanding the status of a problem or situation, and for performing tasks in cooperation with an intelligent system. Thus, the design of the user interface must no longer be concerned merely with graphics and text formatting, window management, and dialog formats. The human–machine interface should be the medium for conveying information and knowledge about the state of the system at hand, and the means for interacting with and manipulating a problem. Therefore, for a human–machine interface (HMI) to be intelligent, access to a variety of knowledge sources is required. These include the following 590:
Knowledge of the user
Knowledge of the user tasks
Knowledge of the tools
Knowledge of the domain
Knowledge of interaction modalities
Knowledge of how to interact
A good practice when designing a particular intelligent HMI is to require some knowledge in each of the above areas and substantial knowledge in the areas of particular relevance to the HMI at hand. The general structure of an intelligent HMI is shown in Fig. 5.1. The technological system (robotic, industrial, enterprise, etc.) involves a supervisor, a planner and a controller, and sometimes (depending on its size and complexity) a Decision Support System (DSS) component which contributes to cooperative human–machine decision making and control. The three main types of users are operators, engineers and maintenance specialists. These users interact with the technological system (robotic system, continuous physical/chemical process, manufacturing system, etc.) via the HMI. Users have, in general, different but overlapping needs with respect to the depth and quantity of information. The principal functions of intelligent human–machine interfaces are the following:
Input handling
Perception and action
Dialog handling
Fig. 5.1 General structure of an intelligent HMI (the user — operator, engineer or maintenance specialist — issues commands through a command interpreter to a modeler holding the user goals/task model, the user model and the technical system model; output formatting with model selection and comparison returns information to the user; the technological system, comprising supervisor, controller and planner, exchanges data with the HMI via a system toolkit of sensors, effectors and communications)
Tracking interaction
Explanation
Output generation
The input handling function should provide the means to handle the types of inputs received by the system, which may be analog, digital, probabilistic, linguistic, fuzzy, etc. The perception and action function plays a key role in overall HMI performance and is supported by the presentation level of the HMI, which determines how to present information to the user and how to transform her/his control inputs. Dialog handling (control) deals with determining what information to treat and when. A dialog is any logically coherent sequence of actions and reactions exchanged between the user and the HMI; human–machine dialogs are necessary for many automation operations, e.g., scheduling, supervision, planning, and control. Tracking interaction deals with tracking the entire interaction between the HMI and the human user, as well as between the HMI and the system at hand. The explanation function needs a model of the technical system to be available. Its role is to explain to the user, upon request, the meaning of the various aspects and components of the technical system, and sometimes of the HMI itself. It should also be capable of explaining how the various parts of the system operate.
Output generation is realized using graphical editors and typically offers appropriate graphical and textual pictures which change dynamically. In more recent applications, multimedia presentations are also provided (see Section 5.6). If the HMI is required to adapt to different users or user classes, a user model is also needed. To design a user model, it is necessary to use our knowledge of human information-processing behavior and to represent the cognitive strategies via rules, algorithms and reasoning mechanisms. A more complete user model must also include a model of the technical system, in order to incorporate the user's view of the technical system. 51
5.5 Natural Language Human–Machine Interfaces
A special class of intelligent HMIs used in sophisticated automation systems (e.g., research and service robotic systems) is the class of Natural Language Interfaces (NLIs). NLIs possess humanized properties, since the user can communicate with the system through a kind of verbal language (e.g., a small subset of English). Actually, NLIs are not the best interfaces in all cases. Thus, to decide whether or not to use an NLI, one has to consider several factors, for example the following:
Cost The cost of NLIs is usually higher than that of standard HMIs.
Ease of Learning If a full natural language is used, no human effort is necessary to learn it. This is not so if a restricted language with legal statements is used.
Conciseness The desire for conciseness is usually in conflict with user friendliness.
Precision Many English sentences are ambiguous. This is natural, since English does not use parentheses as artificial logical languages do.
Need for Pictures Words are not the best way to describe shapes, positions, curves, etc. Programs that handle graphical objects (e.g., CAD systems) are good candidates for NLIs and other linguistic interfaces.
Semantic Complexity Natural languages are concise and efficient when the universe of possible messages is large. Actually, no trivial language can perform the interfacing job, since the number of different messages that must be handled is extremely large.
The components of an NL understanding system, i.e., a system that transforms statements from the language in which they were made into a program-specific form that initiates appropriate actions, are:
Words and lexicons
Grammar and sentence structure
Semantics and sentence interpretation
Three ways to combine the above primary components into an integrated understanding system are:
Interactive Selection The system displays the options to the user, who chooses among them to gradually construct a complete statement corresponding to actions that the target program can perform.
Semantic Grammars The window-based approach does not allow the user to control interactions or compose free-form statements that the system has to understand. The alternative is for the user to compose entire statements. A semantic grammar provides one implementation of this alternative and is appropriate when a relatively small subset of an NL has to be recognized (a minimal illustration is sketched at the end of this section).
Syntactic Grammars If a large part of an NL is used as the HMI, as much of the language's regularity as possible must be captured. To this end, it is necessary to capture the syntactic regularity of the NL at hand; thus one needs a syntactically motivated grammar.
Today, several tools are available to assist in building the lexicon, the grammar, the semantic rules and the code that uses all of them. Some programs also exist that do most of the understanding in all three approaches discussed above. Some examples of the use of NLIs in robotic systems are the following:
Torrance, 563 where an NL interface is used to navigate an indoor mobile robot.
Neumann 384 and Herzog and Wasinski, 210 where the synergetic integration of NL and vision processing in robotic systems is considered.
Sondheimer, 543 where the spatial reference problem of NL robot control is investigated.
Nilsson, 389 where a mobile robot (SHAKEY) capable of understanding simple NL commands is presented.
Sato and Hirai, 495 where NL instructions are employed for teleoperation control.
Vere and Bickmore, 601 Chapman 77 and Badler et al., 29 where the control of autonomous agents in 2D or 3D workspaces is achieved via NL commands.
Fischer et al., 152 where a comprehensive NLI is designed and used for a service mobile manipulator (ROMAN).
Wahlster et al., 603 Bajcsy et al., 34 Neumann 384 and Herzog and Wasinski, 210 where the utilization of combined sensory information and verbal descriptions in NL interface design for intelligent robotic systems is investigated.
5.6 Multi-Modal Human–Machine Interfaces
In computer science the meaning of the concept "modality" is in general ambiguous, but in human–machine interaction the term typically refers to the human senses: vision, hearing, touch, smell, and taste. Oviatt 410 defined multimodal systems as systems which coordinate the processing of combined natural input modalities, such as speech, touch, hand gestures, eye gaze and head and body movements, with multimedia system output. This definition was further refined by Turk and Robertson, 571 who note that multimedia research focuses on the media, while multimodal research focuses on the human perceptual channels. Multimodal output employs several
modalities (display, audio, tactile feedback) to engage human perceptual, cognitive, and communication skills in understanding what is being presented. Multimodal human–machine interaction systems can use several modalities independently, simultaneously, or in a tightly coupled way. Currently available theoretical approaches have little practical value, and their application in standard software design processes is still difficult. A generative methodology for analyzing modality types and their combinations, according to a proposed classification of unimodal representation modalities, was presented by Bernsen. 43 A generic modeling framework for specifying multimodal HMIs using the Object Management Group's Unified Modeling Language (UML) 403 was developed by Obrenovic and Starcevic. 404 Incorporating a generic HMI framework into UML offers a good way to produce quantifiable and analyzable models. To this end, two problems were explored:
Defining the modality concept precisely
Identifying a UML extension for modeling multimodal interaction
A generic model is based on the concept of an abstract modality (rather than on a specific interaction modality such as speech, gestures, or graphics), which defines the common features of HMI modalities regardless of their specific manifestations. To define multimodal human interface models one needs a vocabulary of modeling primitives. The metamodel of Obrenovic and Starcevic formally describes the basic multimodal interaction concepts. 404 The metamodel's primary concept is that an HMI modality engages human capabilities to produce an effect on users. The four main categories of these effects are:
Sensory effects, which describe the human sensory apparatus processing stimuli
Perceptual effects, which are the outcome of the human perceptual system's analysis of sensory data
Motor effects, which describe human mechanical actions (e.g., pressure or head movement)
Cognitive effects, which occur at higher levels of human information processing and include memory, attention, and curiosity processes
These concepts are subclasses of the Effect class of the simplified human–machine interface modalities model shown in Fig. 5.2, where a complex HMI modality integrates two or more modalities to use them simultaneously. UML is a general-purpose modeling language possessing built-in facilities for customizing (or profiling) a particular domain. Through formal extension mechanisms one can extend UML semantics to include:
Stereotypes (i.e., ornaments giving new semantic meanings to modeling constructs)
Tagged values (i.e., key–value pairs that can be associated with modeling constructs)
Constraints (i.e., rules that define the model's well-posedness)
The above modeling framework was used for both basic modality models (textual, tabular, aimed hand movement) and higher-level models of complex multimodal HMIs.
Fig. 5.2 Simplified HMI modalities model (UML class diagram: the HMI modality class, specialized into simple and complex, input and output, event-based, streaming-based (sampling frequency) and recognition-based (errorRate) modalities, and into static and dynamic output modalities with animation; modalities are linked to the Effect class via a causality association, and complex modalities integrate several modalities; dynamic output is related to the human response time scale)
An intelligent interface with the ability to process multiple modalities (sound, image, and force feedback) while interacting with the human user was developed by He and Agah. 203 This interface allows the user to feel compliance, damping, and vibration effects through a force-feedback joystick. A series of experiments with different combinations of media (modalities) was conducted with 60 human users, aiming to establish the effects of force feedback (and associated time delays) when used in combination with visual and auditory information as part of a multimodal interface.
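A hedged Python rendering of the simplified metamodel of Fig. 5.2 may help fix the ideas: modalities produce effects on the user, and a complex modality integrates simpler ones. Class and attribute names follow the figure loosely and add nothing beyond it.

# Sketch of the Fig. 5.2 metamodel as plain Python classes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Effect:
    """Base concept: sensory, perceptual, motor or cognitive effect."""
    name: str

@dataclass
class HMIModality:
    effects: List[Effect] = field(default_factory=list)

@dataclass
class SimpleModality(HMIModality):
    pass

@dataclass
class ComplexModality(HMIModality):
    """Integrates two or more modalities used simultaneously."""
    parts: List[HMIModality] = field(default_factory=list)

@dataclass
class RecognitionBasedModality(SimpleModality):
    error_rate: float = 0.0      # the 'errorRate' attribute in the figure

speech_in = RecognitionBasedModality(
    effects=[Effect("perceptual")], error_rate=0.05)
force_out = SimpleModality(effects=[Effect("sensory"), Effect("motor")])
multimodal = ComplexModality(parts=[speech_in, force_out])
print(len(multimodal.parts))     # 2 integrated modalities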
5.7 Graphical Interfaces for Knowledge-Based Systems
The typical applications of direct graphical interfaces were listed in Section 5.2.1. Here, we discuss the ways in which graphical representations have been used both in end-user interfaces and in interfaces for the knowledge engineer. End-user interfaces represent the domain itself, whereas interfaces for the knowledge engineer clarify the system's internal representation of domain knowledge. 274
5.7.1 End-User Interfaces
End-user interfaces are particularly needed in expert advisory systems which use deep knowledge. They are most successful in technological and medical systems where pictures, maps and diagrams are the natural means of interaction. Two representative systems for which end-user graphical interfaces were designed are the following:
ONCOCIN System This is a graphical interface system which helps doctors determine the best therapy for cancer patients. 566 When a consultation starts, the query system displays a window presenting a graph of the hierarchical relationship between diseases and chemotherapies. The user can then restrict the enquiry to a certain situation via direct manipulation of the graph using the mouse. Further information can be requested using pop-up menus, in textual form or through further graphical displays.
GUIDON-WATCH This is a graphical interface developed for the NEOMYCIN medical expert system, which permits the user to browse through the system's knowledge base and view the inference process during a consultation. 459 Like ONCOCIN, the GUIDON-WATCH graphics paradigm is based on direct manipulation and multiple windows. All kinds of relevant knowledge (causal relations, disease taxonomies, dynamic task trees) are shown in the form of simple graphs involving textual nodes and line links, with boxing, flashing and several print styles.
5.7.2 Graphical Interfaces for the Knowledge Engineer
These interfaces are very sophisticated and are appropriate only for the experienced knowledge engineer, who can choose from a large repertory of languages, tools and environments. The standard languages (Prolog, Lisp) have fairly simple text-based interfaces, but the newer tools and environments, particularly those offered on workstations, have increasing numbers of graphical components. 201 For example, the Interlisp-D tool, run on the Xerox 1186 workstation, possesses text and bit-map editors, graphic on-line help facilities and "file browsers" that enable the user to search over a hierarchical representation of the system's file structure. 199 A prototype system for the graphical construction of Prolog programs based on attributes and operators was developed by Rueher and colleagues, 481 and another graphical tool, which uses a graphical representation of Prolog execution as part of an advanced tracing and debugging facility, was presented by Eisenstadt and Brayshaw. 131 More specialized graphical tools for knowledge engineering are available in expert system shells that run on PCs and workstations (Art, KEE, LOOPS, Knowledge Craft, etc.). 358, 375, 512
5.8 Force Sensing Tactile Based Human–Machine Interfaces
A basic requirement for manual interaction with several automation systems in real and virtual environments (such as robotized powered wheelchairs, teleoperators, etc.) is some kind of force or tactile (haptic) interface. In particular, the joystick is the dominant control interface between a disabled person and a Robotized Wheelchair (RW). To solve the problem of selecting a control interface and a control site (hand, foot, tongue, shoulder, etc.), one needs to pursue a specialized study and evaluation. 94, 95 Two of the main problems that persons with physical impairment face are intention tremor (i.e., involuntary rhythmic oscillation when postural maintenance or purposeful movement is attempted) and spastic hypertonia (i.e., a motor disorder characterized by a velocity-dependent increase in tonic stretch reflexes with exaggerated tendon jerks). 3, 171 To overcome these problems, many researchers have attempted to design joysticks with specialized damping control features 59, 207, 456, 462 (a simple damping filter is sketched at the end of this section). The haptic interaction (i.e., sensing by touch) between a human operator and a Virtual Environment (VE) involves two distinct modalities: 272, 291, 589
Tactile Sense This includes all the cutaneous sensory information, in which the primary role is played by the mechanoreceptors of the finger pad.
Kinesthesis The perception of forces and movements by sensory receptors in the skin around the joints, and in tendons, joint capsules, and muscles.
Several HMIs which use either of these two sensory modalities have been constructed in industrial and research establishments. Haptic interfaces are generally classified into:
Body-based interfaces
Ground-based interfaces
Tactile displays
The first two types of haptic interfaces excite kinesthetic sensors, while the third type excites tactile sensors. Some available force-sensing dextrous-hand and haptic interface devices are the following: 59, 60, 66, 148, 194, 227, 239, 249, 252, 255, 268, 272, 273, 291, 519, 589
Sarcos dextrous arm master (a force-reflecting device)
Utah dextrous arm master (it has 16 degrees of freedom actuated in an antagonistic tendon way)
EXOS dextrous arm master (a light exoskeleton for the human hand)
LRP hand master (an exoskeleton force-feedback glove)
Freflex exoskeleton (a prototype 7-dof exoskeleton arm)
PHANToM haptic interface (a very popular haptic interface)
Tsukuba haptic master (a force display device)
CMU magnetic levitation haptic interface (it has a handle via which the user interacts)
SAFiRe master (a force-reflecting master glove)
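As an illustration of the damping idea mentioned above, the following minimal Python sketch low-pass filters the raw joystick deflection and adds a velocity-dependent damping term. The class name, cutoff frequency and gains are illustrative assumptions, not values from the cited designs.

```python
import math

class DampedJoystick:
    """Sketch of a tremor-attenuating wheelchair joystick filter.

    A first-order low-pass stage attenuates the oscillation band typical
    of intention tremor, and a velocity-dependent damping term opposes
    abrupt deflections such as spastic jerks.  Cutoff and gains are
    illustrative, not tuned values from the cited studies.
    """

    def __init__(self, cutoff_hz=1.5, damping=0.8, dt=0.01):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = dt / (dt + rc)   # low-pass smoothing coefficient
        self.damping = damping
        self.dt = dt
        self.y = 0.0                  # filtered deflection
        self.prev_raw = 0.0           # previous raw sample

    def update(self, raw):
        """Return the damped command for one raw joystick sample."""
        velocity = (raw - self.prev_raw) / self.dt
        self.prev_raw = raw
        self.y += self.alpha * (raw - self.y)               # low-pass filtering
        return self.y - self.damping * velocity * self.dt  # damping term

joystick = DampedJoystick()
for k in range(5):
    raw = 0.5 + 0.2 * math.sin(2 * math.pi * 5 * k * 0.01)  # 5 Hz tremor
    print(round(joystick.update(raw), 3))
```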
5.9 Human–Machine Interaction via Virtual Environments
The force-sensing and tactile-based human–machine interface devices discussed above represent one of the possible paradigms of human–machine interaction via virtual environments (VEs) or virtual reality (VR). 66, 112, 133, 134, 208, 266, 554, 602 Other devices and gloves can measure very quickly the shape and/or position of the user's hand and convey these signals to a graphics computer. The computer can then display a graphical representation of the user's hand via a head-mounted display, such that the graphical image of the hand changes shape and position in a way similar to that of the user's real hand. Humans wearing the displays can then act within a virtual (synthetic) environment produced by the computer as they would act within the real world.
The VEs created via computer graphics are new interactive communication media, and are typically experienced through head-coupled, virtual-image, stereoscopic displays which can synthesize a coordinated multisensory presentation of a synthetic environment. 133 Using these interaction media, the human operator can experience an immersive involvement in the operation of the system. The three basic constituents of a VE are the content, the geometry and the dynamics. The content consists of objects and actors. The geometry is a description of the environment's field of action, and has dimensionality, metrics (i.e., rules establishing an ordering of the contents), and extent (the range of possible values for the elements of the position vector). The dynamics is represented by the rules of interaction among the VE contents, describing their performance as they exchange information or energy. The components of a VE are useful for enhancing the interaction of the operators with their simulations.
Virtualization is the process by which a human (observer) interprets patterned sensory impressions to represent objects in an environment other than that from which the impressions physically originate. The three levels of virtualization are:
Virtual space
Virtual image
Virtual environment
The virtual space is the result of a construction process by which a viewer perceives a 3-dimensional layout of objects in space when viewing a flat surface presenting the pictorial cues to space, i.e., perspective, shading, occlusion, and texture gradients. This is the most abstract of the three virtualization levels. 133, 275 The virtual image is the perception of an object in depth, in which accommodative, ocular vergence and (possibly) stereoscopic disparity cues are present, although they may not necessarily be consistent (this definition agrees with the one used in geometric optics). 48 The virtual environment is the final form of an environment, where the main additional sources of information are observer-slaved motion parallax, depth-of-focus variation, and a wide field of view without visible restriction. These additional features can be properly used to stimulate the major space-related psychological responses or physiological reflexes (e.g., accommodative vergence, vergence accommodation of the "near response", etc.). 114, 383
Fig. 5.3 General structure of VR-systems: the human operator's sensors (eyes) and effectors (hands) are coupled, through sensors (data glove, joystick), effectors (CRT display) and the linking hardware (teleoperation hardware, simulation computers), to the effectors (cursor, robot) and sensors (video camera) of the physical or virtual worksite
It is noted that, although virtualization was defined above with reference to visual information, analogs of virtual spaces, virtual images and virtual environments exist for other sensory modalities as well (acoustic, contact [touch], and shape/position). Virtual spaces, images and environments can be constructed either from the viewpoint actually assumed by the users (egocentric viewpoint), or from a position different from that where the users are represented to be (exocentric viewpoint). 347 The three complementary technologies used to create the illusion of immersion in a VE are:
Sensors (e.g., head position or hand shape sensors)
Effectors (e.g., stereoscopic displays or headphones)
Special-purpose hardware and software (connecting the sensors and effectors so as to create experiences encountered by people immersed in a physical environment)
A general diagram showing the structure of a VR-based system and the linkages of its components is shown in Fig. 5.3. The environment experienced with a teleoperator display is real, while that experienced via a VE simulation is imaginary. Real and simulated data can be combined via digital processing to produce intermediate environments of real and simulated (synthetic) objects. The applications of virtual reality and virtual environments include the following (a minimal sketch of the basic interaction loop follows the list):
Vehicle simulation
Three-dimensional cartography
Medical visualization
Teleoperation and telerobotics
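The structure of Fig. 5.3 can be summarized as a simple sensor–simulation–effector loop. The following Python sketch shows this loop under assumed, hypothetical device functions (read_glove, simulate_step, render_frame); a real VR system would replace these stubs with device drivers and a graphics engine.

```python
# A minimal sketch of the interaction loop implied by Fig. 5.3: sensors on
# the human side (data glove, joystick) drive a simulated worksite, whose
# state is rendered back through the effectors (display).  All device and
# function names below are hypothetical placeholders.

def read_glove():
    """Placeholder for sampling hand shape/position from a data glove."""
    return {"position": (0.0, 0.0, 0.0), "flexion": [0.0] * 5}

def simulate_step(world, hand_state, dt):
    """Placeholder for advancing the virtual worksite by one time step."""
    world["hand"] = hand_state
    return world

def render_frame(world):
    """Placeholder for drawing the world on a head-mounted display."""
    pass

def vr_loop(duration_s=10.0, dt=1.0 / 60.0):
    world = {"objects": [], "hand": None}
    t = 0.0
    while t < duration_s:
        hand = read_glove()                     # sensor side of the loop
        world = simulate_step(world, hand, dt)  # simulation computers
        render_frame(world)                     # effector side of the loop
        t += dt

vr_loop(duration_s=0.1)
```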
Two examples where force feedback is particularly important in safety-critical virtual reality/environment simulations are the following 564:
Sophisticated Flight Simulation
Here force feedback is used to enhance the feeling of immersion. The human operator sits on a six-degrees-of-freedom parallel manipulator that reproduces part of the flight dynamics, and holds an active stick which provides the exact resistance to motion.
Surgeon Training
Training surgeons in VEs using force-feedback techniques avoids the risk of casualties and minimizes cost. The force feedback is absolutely necessary; otherwise the perceptual experience in the multimedia world will not correspond to the real one, making the training useless, if not dangerous.
In most practical applications of VR, combinations of images, sounds and forces are all required. In particular, for tasks requiring dexterous manipulation, the information needs of the sense of touch cannot be met without tactile feedback. 602 Today there exists a whole new family of software packages that help the engineer do computer-aided design, computer-aided manufacturing, and computer-aided control (e.g., the Boeing 777 was almost entirely designed by computer). Virtual reality can be used not only to simulate and mimic automation, but also the human body, human motions, and human actions. Currently, VEs are created and used for entertainment and moviemaking using computer animation. It is remarked, however, that whenever VE/VR is used for practical and engineering tasks, this should be done with great care, because the fact that the designed system looks nice in VR (passing some tests of human-in-the-loop simulation) does not mean that the same will necessarily be true in real life. This is because many unseen, unpredictable and unexpected events occur when field tests are performed and a system is used in the real environment.
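A common way to realize force feedback of the kind described above is spring-damper force rendering against a virtual surface. The sketch below is a generic one-axis illustration with assumed stiffness and damping values; it is not the algorithm of any particular device mentioned in this section.

```python
def wall_force(x, v, wall_x=0.0, k=800.0, b=2.0):
    """Spring-damper force rendering for a virtual wall along one axis.

    x, v   : probe position (m) and velocity (m/s)
    wall_x : wall surface location; penetration occurs for x > wall_x
    k, b   : stiffness (N/m) and damping (N*s/m), illustrative values

    Returns the force (N) to command to the haptic device: zero in free
    space, a restoring spring-damper force while the probe penetrates.
    """
    penetration = x - wall_x
    if penetration <= 0.0:
        return 0.0                    # free-space motion: no feedback force
    return -k * penetration - b * v   # push back against the penetration

# Probe approaching the wall at 0.1 m/s, sampled every millisecond:
x, v, dt = -0.002, 0.1, 0.001
for step in range(50):
    x += v * dt                       # trivial probe kinematics for the demo
    if step % 10 == 9:
        print(f"x = {x:+.4f} m, F = {wall_force(x, v):+.2f} N")
```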
5.10 Human–Machine Interfaces in Computer-Aided Design
The purpose of the computer-aided design (CAD) systems that are steadily being introduced in modern design departments is to increase design efficiency and to rationalize the product development process. The key characteristics of a CAD system that help in achieving this purpose are adequacy to the task requirements and adaptation to different user skills and experiences.
From a technical viewpoint, in the generation of a product, design is the functional unit in a company where the customer orders, specified as customer demands or technical specifications, are typically transformed into a graphical model of the product (which includes parts lists, parts drawings, and composition drawings). The output of the CAD system is the work result coded in a suitable computerized form, which means that engineering data produced in the design department can be used, via access to the corresponding data base, by all functional units in the whole company. From an organizational viewpoint, the design process consists of successive phases, which are subdivided into tasks and subtasks with a specified intermediate or final work result. From an ergonomic/human factors viewpoint, design work is "informatory work" in the sense that it consists only of human information-processing procedures, albeit with different levels of cognitive control.
The technical features of CAD systems are classified in the following categories: 327
Technical features on the physical level (computer hardware, input devices, output devices)
Technical features on the syntactic level (information input type, dialog method, system messages, help facilities, error remedy)
Technical features on the semantic level (operating system/network characteristics, CAD visualization aids, CAD functions, 3-dimensional basic objects, drawing documentation, engineering data bases)
Technical features on the pragmatic level (integration of engineering calculations, integration into the company information infrastructure (CIM), organization of CAD)
Fig. 5.4 Man–machine interaction: (a) handling real objects; (b) working with a computer mouse (in both cases visual, tactile and interoceptive perception feed visual information processing, the operation plan, motor control and motoric action; in case (b) the loop closes through the CAD system's graphic display, tactile output, information processing and mouse)
Typical input–output devices in CAD systems are those described in Section 4.5 (screens, tablets, keyboards, mice, trackballs, pens, etc.). In the case of complex geometrical objects the designer uses physical models (e.g., clay models, prototypes, etc.). One of the principal hardware problems in using CAD systems is the handling of large drawings or models on a 20 in. monitor (e.g., the construction of buildings in architecture). Especially if more than one drawing or model is used concurrently, the monitor area is too small. Therefore, most CAD systems must use two or more graphic monitors to expand the drawing area and to reduce the work needed for organizing windows, etc.
A comparison of the information flow while using a mouse and while handling real objects reveals an essential difference. For the movement of the mouse, humans have two senses available, namely visual information and interoceptive information (muscle tension and joint angles). But to handle a real object, a third important sense is used, i.e., the tactile and kinesthetic senses, which give information about touching the object and about exerted or feedback forces. All three senses can be processed in parallel, because they employ different human resources. Figure 5.4 shows the information flow for man–machine interaction when handling a real object and when working with a (tactile) computer mouse. 174 The mouse is used primarily as a graphical input device for selection and pointing on graphical objects. Specific useful guidelines and recommendations for human–machine interfaces in CAD systems were given by Heinecke. 205
Chapter 6
Supervisory and Distributed Control in Automation
I learn by going where I have to go. Theodore Roethke Lo! Men have become the tools of their tools. Henry David Thoreau A tool knows exactly how it is meant to be handled, while the user of the tool can only have an approximate idea. Milan Kundera
6.1 Introduction
Supervisory control is a general concept that embraces many paradigms of human intervention in the operation of automation systems through human–machine interfaces appropriate for each paradigm. 335, 516, 519, 520, 525 The term supervisory control was borrowed from management science, 335 where a human supervisor (director) interacts with her/his subordinate employees, giving them commands for specific jobs that have to be done and receiving from them, at regular times, information about the status and the outcome of their work. Then, on the basis of this information, the supervisor/director decides on particular changes and adjustments that have to be made in order to completely fulfill the goals of the enterprise. Exactly the same type of interaction occurs when a human supervisor interacts with machines that can be regarded as her/his subordinates, with specific physical and intelligence capabilities.
To understand human supervisory control performance, one needs to formulate and investigate models that describe the behavior of the human supervisor. The human's behavior depends, in general, on her/his internal representation of the system dynamics, the tasks, and the system's environment (disturbances, uncertainties, etc.). Much work has been done in this area. 444, 518, 525
Fig. 6.1 The three primary levels of intelligent control: organization (highest control level), coordination (intermediate control level), and execution (lowest control level)
Supervisory control falls within the framework of intelligent control, which according to Saridis 488, 490, 599, 607 has three main hierarchical levels of control and interaction (feedback), as shown in Fig. 6.1. Using the control structure of Fig. 6.1, Saridis has stated and studied the so-called 'principle of increasing precision with decreasing intelligence' (IPDI), which is a manifestation of the human organizational pyramid. Specifically, as we proceed from the highest control level to the lowest, the precision of control increases but the degree of intelligence is reduced. The organization level is designed to organize a sequence of high-level actions or rules, the coordination level serves as an interface between the organization and execution levels, and the execution level performs the appropriate control functions on the processes involved. Saridis has developed an analytic theory that assigns analytical models to the various levels in the control hierarchy, and has applied it to particular paradigms of autonomous robotic and manufacturing systems. 599
Distributed control is a concept where the decision is made through negotiation among the executive subsystems and executed by them; it differs from decentralized control, where each executive subsystem makes its own decisions and executes only those decisions. A building element of both the supervisory and distributed control structures, developed relatively recently, is the concept of an agent.
The structure of this chapter is the following. First, the historical evolution of supervisory control is presented, starting from purely manual (direct) control and ending with computer-aided and remote supervisory control. Then, the three well-known architectures of supervisory control, viz. Rasmussen's, Sheridan's, and Meystel's architectures, are discussed. Next, the task analysis and task allocation problems are considered, and answers to the questions of 'how much' and 'when' to automate are given. We continue by presenting the distributed control architecture (historical remarks, hierarchical distributed systems, system segmentation), followed by an outline of the class of discrete event supervisory control systems; finally, we briefly discuss two behavior-based architectures which utilize the concept of agent, namely the subsumption architecture and the motor-schemas architecture. Besides these, many other behavior-based architectures have been
proposed in the literature, such as the NIST reference model architecture, 5, 6 the LAAS hierarchical architecture, 395 the distributed architecture for mobile navigation, 468 and the action–selection architecture. 331
6.2 Supervisory Control Architectures
6.2.1 Evolution of Supervisory Control
Throughout the history of automation many architectures for supervisory control have been proposed. In conventional (early) human–machine control, the human operator manipulated the controls of the actuators and the machines directly, and was able to observe the outcome of that manipulation on the states and outputs of the machines directly or via suitable sensors and measurement devices. Later, with the introduction of computer control, the situation changed, and the human became more loosely coupled to the systems that she/he controls. Finally, in more recent times the interaction of the human with the machines (automation) is achieved only through the human–machine interfaces and the interacting computers, where the human may be at a remote distance from the machines (e.g., in telesurgery, undersea operations, nuclear plant teleoperation, and so on). More often than not, in our times the human has to supervise the control of many machines by a computer via individual (local or dedicated) computers. This sequence of evolution of supervisory control is illustrated in Fig. 6.2, where the following code is used 523: H: Human, P: Process under control, HMI: Human–machine interface, C: Computer, TIC: Task-interactive computer, DHC: Direct human control, IHC: Indirect human control, CAIC: Computer-aided indirect control, SC: Supervisory control, RSC: Remote supervisory control, RMTSC: Remote multi-task supervisory control, CC: Communication channel.
Fig. 6.2 Evolution of control from direct (purely manual) to computer-aided and remote supervisory control (configurations DHC, IHC, CAIC, SC, RSC and RMTSC)
In the following, three particular supervisory control architectures will be described, namely:
Rasmussen's architecture
Sheridan's architecture
Meystel's architecture
each one looking at the supervisory control functions from a different viewpoint.
6.2.2 Rasmussen’s Architecture This architecture is based on the three-level human-behavior’s model (S-RK model) developed by Rasmussen, 443, 444 and discussed here in Chapter 3 (Section 3.5). This model distinguishes a target oriented skill-based behavior, a goal oriented rule-based behavior, and a goal controlled knowledge-based behavior, and shows to what extent human performance modeling is possible and valuable. Skill-based behavior (SBB) is necessary for manually controlling simple systems with relatively fast time-invariant dynamics, and also for intervention tasks in supervisory control. However, monitoring, interpreting and teaching tasks of nonstationary or stationary process control require rule-based behavior (RBB). Finally, knowledge-based behavior (KBB) is needed for tasks that demand intelligence and creativity of the operator (e.g., for fault diagnosis, repair and management, planning, and optimization tasks). The above three levels of performance of skilled operators are pictorially illustrated in Fig. 6.3. 443 The SBB operates with signals, the RBB with signs, and the KBB with symbols. Goals
Fig. 6.3 Rasmussen's hierarchical levels of human supervisors: sensory input passes through feature formation to the SBB level (automated sensory-motor patterns, operating on signals), the RBB level (recognition, state–task association, stored rules for tasks, operating on signs) and the KBB level (identification, decision of task, planning, operating on symbols and goals), leading to actions
Over the years many human performance models have been produced, primarily for detection and manual control tasks, both in the time and frequency domains. Examples of such models are the optimal control model, the PID control model, and the predictive control model. 574, 575, 579 Similarly, much effort was devoted to developing models of rule-based human supervisory behavior in situations where the control task consists of a number of subtasks, such as monitoring, interpreting, etc. Examples of rule-based behavior models are the observer decision control model, 295 the PROCRU model (for the behavior of an airliner's crew), 36, 551 and the fuzzy rule-based control model. 585 Here, it is useful to mention the work of Pfeifer and colleagues regarding the internal representation model of navigators and pilots of very large crude carriers. 422, 551
Finally, the modeling of KBB human behavior was greatly aided by the development of knowledge-based (expert) systems, in which the internal representation of human operators is expressed in terms of linguistic (symbolic) rules. In this way, KBB behavior can be mapped onto the RBB level. 576, 579, 586 These concepts are best illustrated by the diagram of supervision interaction with computer aiding at the KBB, RBB and SBB levels, shown in Fig. 6.4. 444, 551 A similar general functional diagram of supervisory control is shown in Fig. 6.5, where the KBB level corresponds to the expert supervision level, the RBB level corresponds to the information generator, and the SBB level corresponds to the controller. 590 Although this supervision structure was developed with the intention of being used fully autonomously, many functions and tasks can be performed by a human operator, depending on the nature of the controlled system and on the separation of tasks decided between the human supervisor and the automatic (computer) supervisor. An analogous correspondence holds between Rasmussen's KBB, RBB and SBB levels and Saridis' organization, coordination and execution levels (see Fig. 6.1).
Fig. 6.4 Human–supervisor computer interaction at the KBB, RBB and SBB levels: high-level goals, if–then commands and control commands flow from the human supervisor to knowledge-based, rule-based and skill-based computer aiding, with advice, rules and demonstrations returned on request; the SBB level connects to the system under control via manual and automatic control
Fig. 6.5 Functional diagram of a 3-level supervisory control scheme applied successfully to a turbocharged diesel engine system 1, 6: an expert supervision system (detection, localisation, classification, evaluation, decision making, action) receives condensed information from an information generator (real-time estimation, signal processing, calculation of various items, statistical test results), which in turn receives the measured variables from the controller + process loop; operator information and actions to be analyzed by the operator are exchanged at the top
In Fig. 6.5 the lowest hierarchical level corresponds to the classical controller–process loop, in which the manipulated variables are computed at each sampling point. This level supplies on-line measured input–output variables and/or state variables to the next level. The second level is the information generator element, which continuously provides the third level with condensed useful information via numerical items and values. A priori information may also be available in the case of intentional process changes. In the highest (expert supervision) level, both quantitative and qualitative information is handled. It is a purely logical element in which the supervisory functions described before are executed. This level must also be able to ask the second level for more detailed information or parameter adjustments, so that the state of functioning of the process can be communicated continuously to the operator.
6.2.3 Sheridan’s Architecture Sheridan’s architecture is built around five primary functions that a human supervisor has to perform in a rather causal and temporal order, with feedback as shown in Fig. 6.6. 518, 521, 525 These functions (or roles) are: plan, teach, monitor, intervene, and learn.
Fig. 6.6 The five principal functions of the human supervisor: plan, teach, monitor, intervene and learn, with intervention feedback (intervene to monitor) and learning feedback (learn to plan)
Each of these functions can be further decomposed into specific subfunctions (subroles) that have associated with them respective mental models and computer aids, as follows:
Planning is decomposed into: (i) understanding the process under control, (ii) satisfying the objectives, (iii) setting a general policy, and (iv) deciding and testing the control actions. The associated mental models of planning are transfer relations, preferences and indifferences, general procedures and guidelines, and state–procedure–action relations and outcomes of control actions. 167 The associated computer aids are aids for physical process training, procedure training, optimization, and action decision (in situ simulation).
Teaching involves deciding, testing, and communicating commands. The corresponding mental model is a command language (consisting of symbols, syntax, and semantics), and the computer aid deals with command editing.
Monitoring is decomposed into: (i) acquiring, calibrating and combining measures of process state, (ii) estimating process state, and (iii) evaluating process state (fault or halt detection, diagnosis and remedy). 576, 579 The respective mental models of these subfunctions are (i) state information sources, (ii) expected results of past actions, and (iii) modes and causes of faults and halts. The associated computer aids are aids for: (i) editing commands, (ii) combining estimation measures, and (iii) failure detection and diagnosis. 261
Intervening is decomposed into two subfunctions (or rules), i.e., "if failure, then execute the planned abort" and "if the termination of the task is normal, then the task is complete". 435, 520 The mental models are the criteria and options for abort and for task completion. The computer aids are aids for execution abort and for normal completion of execution.
Learning involves the subfunctions of recording immediate events and analyzing cumulative experience. The result of the first subfunction is fed back to the 'deciding and testing the control actions' subfunction, and the result of the second is fed back to the 'understanding the process under control' subfunction. The associated mental models of the learning subfunctions are the immediate memory and the cumulative memory of salient events. The respective computer aids are aids for an immediate memory jogger, and for cumulative record and analysis.
From the above it follows that, overall, the five primary human supervisor functions in complex control systems are implemented by twelve subfunctions, for which both human and computer play particular roles, and for each there exists
Fig. 6.7 Pictorial diagram of computer decision aids and mental models of Sheridan's supervisory functions (computer aids are depicted by operational blocks and mental models by thought boxes): training aid, satisficing aid, procedures training and optimization aids, procedures library/action decision aid, command editing aid, measure calibration and combination aid, process state estimation aid, aid to detect and diagnose fail/halt, and aid to execute abort/complete, linking the human supervisor to the task-interactive computer and the system under control
a suitable "mental model". Sheridan has related his supervisory control functions to Rasmussen's SRK model of human behavior, and has embedded them in the hierarchical structure of Fig. 6.3. 520 A pictorial diagram showing the various computer aids needed for implementing human supervisory control, along with the associated mental models, is shown in Fig. 6.7. 520 In the following we give some more explanations of Sheridan's supervisory functions, along with some paradigms from the associated literature.
In the narrow sense, human supervisory control involves one or more human operators who set initial conditions for, intermittently adjust, and receive information from a computer that closes a loop around the process under control. In a broader sense, human supervisory control involves a computer which transforms human operator commands to produce specific control actions or integrated overall summary displays. In classical control systems the human operator sends only control commands to the process under control via the human-interactive computer, but in current complex/large-scale control systems the human also sends decision-aid and higher-level commands for functions of the type shown in the expert supervision level of Fig. 6.5. As explained before, Sheridan has distinguished these functions as planning, teaching/editing, monitoring, fault detection and diagnosis, etc.
The first subfunction of planning is 'understanding the process under control', which in current computer-based systems is achieved using artificial intelligence concepts such as rules, semantic nets, frames, and objects. 427, 482, 586 The planning subfunction of 'satisfying the objectives' is performed using suitable objective/performance criteria that express the 'optimality/goodness/utility' of the system's operational state. Here, the techniques of optimization, multicriteria theory, operational research, and optimal control theory are used. 572, 579 The 'policy setting' subfunction is based on given procedures or operation guidelines that the human operator must follow, and is typically performed 'off-line'. The 'decide and test control actions' subfunction refers to the control actions for executing specific operations, such as turning, accelerating or braking a vehicle, opening a valve, or starting a motor.
We now come to the teaching function. Here, the commands communicated to the computer must be sufficient for the complete generation of the necessary control actions. These commands include the 'cursor control' of the computer screen, the 'alphanumeric keys' of the keyboard, the joystick or teach pendant of a robot, etc.
In the monitoring function, the human operator observes the automatic execution of the programmed actions and the operation of the process under control, to see if they agree with what was taught. The 'acquire, calibrate and combine measures of process state' subfunction needs the supervisor to have a mental model of the potential sources of relevant information (including the likelihood of measurement and reporting bias). The 'estimating process state from current measurements and past control actions' subfunction again needs a mental model of how past control actions affect the present process response. The human operator uses this, in combination with the available measurement data on the current process state, to find a better overall estimate of the current state. 355
In the intervene function, which is based on the capability of detecting a failure or a halt condition, a decision branching must be available, depending upon whether there is a failure (in which case the operator executes the planned abort) or a normal execution of the task (in which case the operator considers the task completed and cycles back to the 'decide and test control actions' subfunction). 261, 360
Finally, in the learn function, if there is neither a failure nor completion, the human records the immediate events, so that she/he memorizes the significant events and updates the computer data base. If the task is terminated by normal termination (completion), or by abnormal termination (abortion) due to failure, the human operator exercises the learn function in more detail by 'analyzing the cumulative experience', which implies that the supervisor should recall and contemplate the entire task experience (no matter how many command cycles are involved) in order to improve her/his readiness when called upon for the next task.
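To make the causal order of Sheridan's five functions concrete, the following minimal Python sketch runs a plan–teach–monitor–intervene–learn cycle over a stubbed process; all task and process details are hypothetical placeholders, not Sheridan's own formalization.

```python
# A minimal sketch of the plan-teach-monitor-intervene-learn cycle as a
# supervisory loop, with the two feedback paths (intervention feedback to
# monitoring, learning feedback to planning) realized through the
# 'experience' record.  Process behavior is a random stub.

import random

def plan(goals, experience):
    return {"goals": goals, "policy": "baseline", "lessons": len(experience)}

def teach(plan_):
    return [f"command for {g}" for g in plan_["goals"]]

def monitor(process_state):
    return process_state["failed"]          # True if a fault was detected

def intervene(commands):
    return "abort"                          # execute the planned abort

def supervise(goals, n_cycles=3):
    experience = []
    for _ in range(n_cycles):
        p = plan(goals, experience)         # learning feedback enters here
        commands = teach(p)
        state = {"failed": random.random() < 0.2}   # stubbed process outcome
        if monitor(state):
            outcome = intervene(commands)   # intervention feedback path
        else:
            outcome = "complete"
        experience.append(outcome)          # learn: record salient events
    return experience

print(supervise(["keep temperature at setpoint"]))
```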
6.2.4 Meystel’s Nested Architecture The nested intelligent control architecture (NICA), also called “multiresolutional control architecture” (MCA), was first introduced in the area of computational mathematics as ‘domain decomposition or multi-grid method.’ 190 Hierarchical aggregation of linear systems with multiple time scales was considered by Coderch and colleagues, 86 and provided a mathematical study of nested hierarchical controllers. This theory was used by Meystel and was applied in the area of autonomous intelligent mobile robots (IMRs). 361–363 Meystel’s intelligent supervisory control model involves four principal components, namely planner, navigator, pilot and execution controller. The planner provides a rough plan and carries out a rough planning of the time profiles of input variables which are supposed to assure the desirable output time profiles. The navigator computes a more accurate trajectory of motion to be executed, and determines motion compensation and refinement of the initial plan, if required. The pilot develops on-line tracking control taking into account the deviations from the expected situations that can be observed only in the immediate vicinity of them. Finally, the execution controller executes the plans and compensations computed by the planner, the navigator and the pilot. Meystels’s three level multiresolution control architecture (MCA), which is also called the 6-box (P,K,P/C,S,W,A) architecture has the form shown in Fig. 6.8, where P stands for perception, K for knowledge representation, interpretation and processing, P/C for planning and control, A for a set of actuators, W for the world (i.e., the process under control), and S for a set of sensors. The fundamental properties of MCA are the following: P1. Computational independence of the resolutional levels. P2. Each resolution level represents a different domain of the overall system. P3. Different resolution levels deal with different frequency bands within the
overall system. P4. Loops at different levels are 6-box diagrams nested in each other. P5. The upper and lower parts of the loop correspond to each other.
Fig. 6.8 Three-level multiresolutional architecture: at each level a P–K–P/C loop closes through the sensors S, the world W and the actuators A (each level has its own feedback loop)
P6. The system behavior is the result of superposition of the behaviors generated by the actions at each resolution level.
P7. The algorithms of behavior generation are similar at all levels.
P8. The hierarchy of representation evolves from linguistic at the top to analytical at the bottom.
P9. The subsystems of the representation are relatively independent.
A different, self-explanatory pictorial illustration of the multiresolutional/nested ICA for a mobile robot system consisting of six levels is shown in Fig. 6.9. The six nested hierarchical elements are: the high-level planner, navigator, pilot, path monitor, controller, and low-level control system (sensors, actuators). A minimal sketch of such nested loops is given below.
Fig. 6.9 Six-level nested ICA (Alex Meystel): planner (goal recognition, global path planning), navigator (sub-goal formulation, local path planning), pilot (target generation, dynamic path planning), path monitor (target location, path correction), controller (command, tasking), and low-level control (sensors, actuators)
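The following Python sketch illustrates the nested, multiresolutional idea: three loops (planner, navigator, pilot) run at decreasing resolution and increasing rate, each supplying a reference to the level beneath it. The one-dimensional robot model, update rates and gains are assumptions made purely for illustration.

```python
# A minimal sketch of nested/multiresolutional control: the higher
# (coarser, slower) level supplies reference values to the level below,
# and each level closes its own loop on the current position.

def planner(goal, position):
    """Coarse level: pick the next waypoint toward the goal (runs slowly)."""
    step = 1.0
    return min(position + step, goal)

def navigator(waypoint, position):
    """Intermediate level: refine the waypoint into a local reference."""
    return position + 0.5 * (waypoint - position)

def pilot(reference, position, gain=0.3):
    """Fine level: proportional tracking command (runs at the fastest rate)."""
    return gain * (reference - position)

position, goal = 0.0, 5.0
for tick in range(60):
    if tick % 20 == 0:
        waypoint = planner(goal, position)         # lowest resolution, slowest
    if tick % 5 == 0:
        reference = navigator(waypoint, position)  # intermediate resolution
    position += pilot(reference, position)         # highest resolution, fastest
print(round(position, 2))
```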
6.3 Task Analysis and Task Allocation in Automation
The problems of task analysis and task allocation in automated systems are very important, and their solution influences the actual overall properties of the designed system both from the viewpoint of human factors and from the viewpoint of productivity and product quality. There exist many popular techniques for carrying out task analysis, but for the task allocation problem there are no generally accepted techniques. This is so because the subtasks into which each task can be decomposed are not usually independent of one another, and the ways in which humans interact with computers vary considerably from case to case. As a result, it is impossible to quantify the suitability of the various human–machine combinations. The selection of the human–machine mix is typically done by empirical guidelines and qualitative criteria.
Task analysis methods started with the so-called Tayloristic method of scientific management. 471, 559 Current task analysis methods start with listing the sequential steps and proceed to the specification of the logical conditions and probabilities for the transition from one step to another, the specification of whether a human or a machine performs each step, and so on. As for task allocation, the first solution attempt is the Fitts list (called the MABA-MABA list) 153 of what "men are better for" and what "machines are better for". This list, which helps in deciding how much and when to automate in each case, is the following 153, 520:
Men Are Better for
Detecting small quantities of visual, auditory or chemical energy
Perceiving patterns of light and sound
Improvising and using flexible procedures
Storing information for extended periods of time and recalling appropriate parts
Carrying out inductive inference
Performing judgement
Machines Are Better for
Responding quickly to control signals
Exercising large forces accurately and smoothly
Storing information briefly, and erasing it completely
Carrying out deductive reasoning
The MABA-MABA list has been criticized by many researchers, since the human cannot truly be compared to the machine. Craik 98 pointed out that we know how to replace a human with a machine only to the extent that the human is understood as a machine. Jordan 276 concluded that "the main benefit of using the MABA-MABA list is that humans and machines are complementary". Price 434 indicated that using the MABA-MABA list requires context-dependent data, which in most cases are not available. A good example showing that in practice there is always a cooperation of human and automation is the space program. Indeed, in an unmanned spacecraft many tasks are performed by humans via remote control from the ground, and on a manned spacecraft many tasks are performed autonomously. This is why, as explained above, the human–computer task allocation cannot be done by simply assigning independent task functions (elements) to either human or computer on the basis of a priori criteria: task functions are seldom independent, and there are infinitely many ways in which the human can cooperate (interact) with the machine to perform a given task function. However, the task allocation search space can be considerably reduced by using the following guidelines proposed by Sheridan 520:
Try to find and remove any unnecessary constraints which make the task allocation more difficult (going back to task analysis).
Try to find the obvious allocations (e.g., tasks that are easy to computerize must be allocated to the computer, most non-repetitive tasks must be allocated to the human, etc.).
Try to examine the extremes A pro-computer extremist allocates to the human only the tasks that cannot be done by the computer. Conversely, a pro-human extremist supports the human with computer help only for the tasks where there is no other choice.
Use a proper human–computer task allocation between the two extremes Sheridan 521 has proposed a working scale of computerization/automation from zero to one hundred per cent, which helps the designer very much in making this selection (see Table 6.1; a minimal sketch of point 2(v) of the scale is given at the end of this section).
Examine how fine an allocation makes sense One must take into account that task allocation to the human at an excessively fine degree makes no sense, because human cognition and attention cannot be partitioned arbitrarily, or just turned on or off. Human memory tends to be less good with details, whereas it is better with more complete pictures and patterns.
Select between trading and sharing Trading occurs when the human and computer perform their tasks serially, each handing the task back to the other when finished with only one part. Sharing occurs when the human and the machine work on the task in parallel (at the same time). Human and machine can cooperate in both modes. The designer must select the best mode on the basis of the task context.
Use many salient criteria To this end, the designer must write down the available criteria for judging when one allocation is better than another, and try to rank-order them in a suitable way, possibly using convenient quality weights.
Table 6.1 Sheridan's scale of degrees of automation
1. The computer does nothing. Everything is performed by the human.
2. (i) The computer gives the full set of alternatives of action, or
(ii) reduces the alternatives to a few, or
(iii) proposes a single alternative, and
(iv) executes this alternative, or
(v) gives the human a certain period of time to veto before switching to automatic operation, or
(vi) executes automatically, then necessarily informs the human, or
(vii) informs the human only if asked, or
(viii) informs the human only if it decides to do so on its own.
3. The computer does everything and acts fully autonomously, ignoring the human.
A popular criterion that helps the designer decide when to automate and use supervisory control is the time needed to perform a task by the human and by the computer. In general, the time to execute a task increases as the complexity of the task increases. The time required to perform a task by supervisory control is the sum of the time spent planning and teaching the task to be done and the time to execute the task. Supervisory control is faster than manual control only when the complexity of the task is such that the speed of execution by the computer (the machine) outweighs the planning and teaching overhead. Actually, it is very difficult to program the computer for very complex tasks, e.g., complex teleoperation tasks. In some situations it is really faster to perform these tasks manually (as is done, for example, in space telescope repairs during extravehicular activities by astronauts). In other situations it is better (more economical) to perform the tasks automatically. For example, automated and supervisory control is
cheaper when a manufacturing system operates in large–batch mode, whereas for very small batch operation or one-of-a-kind fabrication manual control is preferred.
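One point on Sheridan's scale, option 2(v) in Table 6.1, can be made concrete with a small sketch: the computer proposes a single action and waits for a veto window to elapse before executing automatically. The proposal and veto-polling functions below are hypothetical stubs, not part of any real HMI library.

```python
# A minimal sketch of Sheridan's scale point 2(v): propose a single action,
# give the human a fixed period of time to veto, then execute automatically.

import time

def propose_action():
    return "reduce feed rate by 10%"

def human_veto_received(timeout_s):
    """Stub: poll the operator console for a veto during the window."""
    time.sleep(timeout_s)   # a real HMI would poll an input queue here
    return False            # no veto arrives in this demo

def execute(action):
    print(f"executing: {action}")

def veto_then_execute(timeout_s=0.1):
    action = propose_action()
    print(f"proposed: {action} (veto window {timeout_s}s)")
    if human_veto_received(timeout_s):
        print("vetoed by the human supervisor")
    else:
        execute(action)     # automatic execution after the veto window

veto_then_execute()
```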
6.4 Distributed Control Architectures
6.4.1 Historical Remarks
The distributed control systems of today have evolved by combining the developments of two particular branches of automation: digital computers and process control systems. Microprocessor-based distributed control systems were able to introduce the advantages of digital technology without sacrificing the reliability and fault tolerance of the earlier analog systems. 584 There are now about three decades of experience with distributed systems, and the formulation of design considerations for the configuration of plant control systems is in a mature state.
The major consideration in control system architecture is its relation to the plant and the nature of the control problem. The design of a control system architecture for systems that have hard real-time constraints, or for which the consequence of a control system failure might be catastrophic, can be very different from the design of systems where the process can easily be controlled manually when a failure occurs. In the same way, the nature of the system under control, the subsystem boundaries, and the relations between the subsystems are, from a structural viewpoint, important considerations in defining the control system architecture.
The first application of supervisory control in the chemical process industry was in the ammonia plant at Luling in 1959. 557 The first application of a process computer in control was by Texaco. 23 Supervisory control was used only for deciding the best operating point (via static optimization) and for deriving from it the set points of the lower-level analog controllers. The use of traditional lower-level controllers assured that the availability of the system was not affected by the use of a computer. The first attempt at direct online digital control was made in the 1960s by Imperial Chemical Industries. 557 The philosophy of direct digital control was to realize the controls on a small computer for small plants and a large computer for large plants. Coupled with the use of direct digital control, attempts were later made to automate the operation of the plants. 2
As computers became more reliable and less costly, there was a gradual increase of computer use in control applications, including supervisory control, monitoring, and direct digital control. The utilization of direct digital control promoted the centralized control concept even more. The first break from this was made by Honeywell's TDC 2000 distributed system (the so-called T-model). Later, a number of other control system manufacturers introduced similar distributed control systems. Soon, distributed systems were quickly gaining popularity in other areas such as metallurgical systems, power plants, etc.
6.4.2 Hierarchical Distributed Systems
The earlier distributed systems consisted of a number of local controllers communicating with each other, and a large central (host) computer for operator communication. The structure of these systems was hierarchical, with a number of different processors for the various tasks or functions, as shown in Fig. 6.10. The communication between the distributed processors which implement the control tasks of the process, and between them and the host computer, was the key to system performance, and was achieved using a suitable communication network. The greater the computational capacity of the distributed processors (controllers), the less information has to be transmitted to the host computer.
Fig. 6.10 Two examples of hierarchical distributed control system structure: operators interact with large and medium minicomputers, which supervise microcomputers connected through I/O to the plant complex (μC stands for microcomputer)
Fig. 6.11 The three early communication network topologies: (a) ring topology, (b) point-to-point topology, (c) bus topology (nodes connected via an interface unit to a shared bus)
The three communication network topologies used in these distributed control systems are (Fig. 6.11a–c) 437:
Ring network topology
Point-to-point (star) connection topology
Bus network topology
The ring network topology (Fig. 6.11a) uses a large control computer which communicates with all the distributed processors. The drawback of this topology is that the system is limited by the capacity of the central processor; if it fails, all communications in the system fail. The point-to-point (or star) connection topology (Fig. 6.11b) is faster and has a reduced probability of overall failure. However, it is more expensive, because the number of ports required at the nodes grows with the number of nodes. The bus network (or data highway) topology was a major development in distributed control (Fig. 6.11c). In this topology all nodes talk to each other via a high-speed serial data bus (i.e., a shared resource), which can be used by a functionally and geographically distributed system with several nodes. This concept was first introduced in Honeywell's TDC 2000 and then followed by other major manufacturers. It must be remarked that in these systems a centralized process operator could interact with local operators via the data highway of the system.
If the data processing is not required to satisfy real-time constraints, bus contention is allowed. Bus contention is otherwise avoided either by using a bus master, which mediates between the nodes and decides who has access to the bus, or by using a token, which moves across the network, with the holder of the token having access to the bus. The use of a token is the most popular approach and is adopted in an increasing number of distributed systems (a minimal sketch of token-based bus access is given at the end of this section). After the mid-1980s, distributed control systems used more powerful processors (16- or 32-bit microcomputers), which employ a high-speed bus, have almost identical computational power at each node, and can operate without a host computer in the network. This has simplified the system structure considerably, has allowed highly distributed intelligence with a smaller number of building blocks, and has permitted the combination of closed-loop and open-loop controls, as well as data acquisition and other computational operations. These systems possess a unilevel structure, 260, 437 as shown in Fig. 6.12a, b.
Fig. 6.12 Unilevel distributed control structure: (a) bus topology, (b) ring topology (computational, operator station, archiving and controller nodes, the latter connected through I/O to the process)
The unilevel structure is characterized by the simplicity of an almost identical node configuration (designed to perform a given number of tasks with the same building blocks organized in various ways), with the software of each node being specific to the tasks that node performs. Another advantage of unilevel systems is that only a few types of cards are needed for the whole system, which leads to a smaller spares inventory and easier maintenance. Early systems of unilevel structure are those of Westinghouse, Taylor, Hitachi, Honeywell (TDC 2000, TDC 3000), etc. Present-day systems are equipped with modulating control modes and binary control modes, which can be matched with programmable logic controllers (PLCs) and supervisory control and data acquisition (SCADA) systems.
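The token-based bus access mentioned above can be sketched as follows: only the node holding the token may transmit on the shared data highway, and the token then rotates to the next node in a fixed ring order. Node names, messages and the class interface are illustrative assumptions.

```python
# A minimal sketch of token-based access to a shared data highway: a node
# may transmit only while holding the token, which then passes to the next
# node in the ring order.  Node behavior is stubbed.

from collections import deque

class TokenBus:
    def __init__(self, node_ids):
        self.ring = deque(node_ids)   # token-passing order
        self.bus_log = []

    def holder(self):
        return self.ring[0]           # the node currently holding the token

    def transmit(self, node_id, message):
        """A node may put a message on the bus only while holding the token."""
        if node_id != self.holder():
            raise PermissionError(f"{node_id} does not hold the token")
        self.bus_log.append((node_id, message))

    def pass_token(self):
        self.ring.rotate(-1)          # token moves to the next node

bus = TokenBus(["controller-1", "controller-2", "operator-station"])
for _ in range(3):
    node = bus.holder()
    bus.transmit(node, f"status update from {node}")
    bus.pass_token()
print(bus.bus_log)
```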
6.4.3 Distributed Control and System Segmentation
Typically, the control problem of a complex system or a set of subsystems can be segmented into a set of control subproblems with information exchange among them. This segmentation is usually made on the basis of the natural boundaries of the subsystems. However, many industrial control systems require a large number of computations in real time, which can be assured only by breaking the computational problem into many subproblems that can be solved using a distributed computing configuration. Such problems are encountered, for example, in avionics and robotics, which have hard real-time requirements (on the order of milliseconds). Another strong requirement of avionics is that the control system architecture should have fault tolerance as one of its primary requirements. It is noted that the distributed computing problem differs from system to system, but the control problems more often than not allow the use of a range of design options within the constraints that have to be met. The sampling rate and control computation speed must be an order of magnitude higher than the speed of the process dynamics, and the sampling rate for failure detection and protection must be higher than that of normal regulation control tasks. To summarize, the distributed system architecture is primarily determined by process requirements, i.e., the nature of the process under control should be very well understood in order to achieve a good distributed control configuration.
Following Saridis' intelligent hierarchical control architecture, a typical structure of hierarchical-distributed control is as shown in Fig. 6.13. The enterprise organization level tops all the activities of the enterprise, such as the analysis of market and customer demands, sales statistics, dispatching of orders, production planning, monitoring, and supervision. The production organization level provides production scheduling, production dispatching, supervision, rescheduling and reporting for stock control, and so on. The coordination level supervises the plant monitoring and control (optimal, adaptive, coordinated control). Finally, the execution level performs data collection and preprocessing, data logging, and the low-level control functions (open-loop and closed-loop control). At all levels appropriate software packages are used.
Fig. 6.13 Typical structure of hierarchical-distributed control: the enterprise organization level (top management computer) connects via a local area network (LAN) to the production organization level (production management computer), via a long-distance bus to the coordination level (supervisory computers), and via the system bus to the execution level (microcomputers controlling the large-scale plant)
In all cases, system segmentation (decomposition) can be done as mentioned above, i.e., either in terms of control functions (tasks) or in terms of the process subsystems' boundaries. Present-day systems permit both the regulating control and the logic tasks to be built into the same hardware. The human–machine interfaces are implemented using computer displays connected to keyboards, joysticks, mice, lightpens, and so on. The major design goals of distributed process control are 260, 438:
The control system structure should be embedded in the process structure.
The information structure among nodes should be minimized, so that each node can operate almost autonomously.
Each node should be loaded taking the above into consideration.
Sufficient redundancy should be provided to assure the desired safety and reliability requirements.
Care must be taken to ensure the minimum possible cost.
6.5 Discrete Event Supervisory Control
An important class of supervisory control systems is the class of discrete event systems (DES). The state of a DES may take discrete (or symbolic) values and change only at discrete time instants (possibly asynchronously), in contrast to the traditional continuously varying dynamic physical systems, which are modeled by differential or difference equations. The field of discrete event systems began with the simulation of human-made systems in the mid-sixties. 307, 633 The first products were software simulation languages (e.g., GPSS), followed by software simulation tools (e.g., SIMSCRIPT II.5, SLAM II, SIMAN). 307 Later,
the activity was extended to the use of the theoretical concepts of automata and languages for the modeling, analysis and design of discrete event systems. The field of discrete event supervisory control (DESC) was initiated by Ramadge and Wonham 440 and extended by other researchers. 74, 588 This theory of discrete event supervisory control is based on partitioning the discrete event behavior of a physical process or plant into legal and illegal categories. The legal behavior of the plant dynamics is modeled by a deterministic finite-state automaton (DFSA). The DFSA model can represent a regular language over an alphabet of finitely many events, which is partitioned into subsets of events that can be controlled (i.e., disabled) and events that cannot be controlled (i.e., cannot be disabled). 161, 162 Using the regular language of an unsupervised discrete event process, the supervisory control theory synthesizes a supervised discrete event controller as a new regular language, sharing a common alphabet with the process language, which assures restricted legal behavior of the plant under control on the basis of the given performance specifications (a minimal sketch of this event-disabling idea is given at the end of this section).
The methodology of discrete event supervisory control has been developed in the framework of computer science, control science, or a combination of the two. Among the classical books on discrete event systems are those of Kumar and Garg, 299 Cassandras and Lafortune, 74 and Ho and Cao. 214 A book that contains several contributions towards developing quantitative measures for discrete event supervisory control is the one edited by Ray et al. 448 The topics of this book deal with optimal and robust supervisory control of regular languages, and applications of the language measure to mobile robotic systems, gas turbine engines, and software systems.
An alternative way to model discrete event systems is by using Petri nets (PNs), which are widely used in manufacturing control systems and are represented as directed bipartite graphs. 455, 577 In particular, logical PNs and fuzzy PNs can be cast in the framework of knowledge-based control systems and knowledge-based or expert control. 577 Several flexible manufacturing systems are controlled using deterministic, knowledge-based, and fuzzy logic-based discrete event controllers designed using PN models. 64, 65, 168, 278, 325, 464
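The event-disabling idea of discrete event supervisory control can be illustrated with a minimal Python sketch: a plant modeled as a deterministic finite-state automaton, a set of controllable events, and a supervisor map that disables a controllable event when allowing it would violate the specification. The machine model and the repair-budget specification are assumptions chosen for illustration, not an example from the cited literature.

```python
# A minimal sketch in the Ramadge-Wonham spirit: uncontrollable events are
# always enabled; the supervisor may only disable controllable events.

import random

PLANT = {  # deterministic finite-state automaton: state -> {event: next state}
    "idle": {"start": "busy"},
    "busy": {"finish": "idle", "break": "down"},
    "down": {"repair": "idle"},
}
CONTROLLABLE = {"start", "repair"}    # events the supervisor may disable

def enabled_events(state, repairs_used, max_repairs=2):
    """Supervisor map: 'start' is disabled once the repair budget is spent,
    so the closed-loop behavior comes to rest in the legal 'idle' state."""
    enabled = set()
    for event in PLANT[state]:
        if event in CONTROLLABLE and event == "start" and repairs_used >= max_repairs:
            continue                          # disabled by the supervisor
        enabled.add(event)
    return enabled

state, repairs, trace = "idle", 0, []
for _ in range(20):
    events = enabled_events(state, repairs)
    if not events:                            # supervisor allows nothing more
        break
    event = random.choice(sorted(events))     # environment picks an enabled event
    repairs += (event == "repair")
    trace.append(event)
    state = PLANT[state][event]
print(trace)
```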
6.6 Behavior-Based Architectures

The concept of the "agent" has contributed to the development of the so-called behavior-based intelligent control architectures. Though considerably varied, these architectures possess the following common features:

Emphasis on the importance of coupling sensing and action tightly.
Avoidance of representational symbolic knowledge.
Decomposition into contextually meaningful units (behaviors or situation–action points).
Fig. 6.14 Pictorial representation of the elements of an agent: a Communicator (communication), a Head (planning and action selection), and a Body (execution)
Using the concept of an agent, it is possible to describe and explain both the hierarchical and nested architectures. An agent consists of the following three parts (Fig. 6.14):

Communicator (for connecting the head to other agents on the same communication level or higher)
Head (for planning and action selection)
Body (for action execution)
The planning and action components (heads of agents), as well as the overall agents, belong to one of the following three classes (types) 302 :

Centralized Action Selection The information is centrally processed by a central decision-making component and transformed into an action for the agent's body.
Decentralized Action Selection The information is processed independently by each decision-making component and transformed locally into its own action decision for the agent's body (motor schema).
Distributed Action Selection The information is processed by several decision-making components, which communicate and negotiate to arrive at a decision. Then, the information is transformed locally and globally into an action for the agent's body.

From an execution-oriented viewpoint, this classification allows the description of available intelligent control architectures (ICA), and also the description of multi-agent systems in a way similar to single-agent systems; a minimal sketch of the three selection schemes is given below. Two very popular behavior-based architectures are the "subsumption architecture" 61 and the "motor schemas architecture". 19–21 The subsumption architecture departs from the classical sense–plan–act paradigm, which was first employed in the autonomous robot Shakey. 389
6.6.1 Subsumption Architecture

In the subsumption architecture, the task-achieving behaviors are specified and treated as separate layers. Individual layers operate on individual goals concurrently and asynchronously. An augmented finite state machine (AFSM) is employed to represent each behavior at the lowest level (Fig. 6.15). The AFSM contains a particular behavioral transformation function. Stimulus or response signals can be suppressed or inhibited by other active behaviors, and a behavior is returned to its start conditions using a proper reset input.
Fig. 6.15 AFSM employed in the subsumption architecture (Rodney Brooks): a behavioral unit with a reset input R, a suppressor node S acting on its input wires, and an inhibitor node I acting on its output wires
Each action is performed by a respective AFSM which is responsible for its own perception of the world. 21 The reactions are organized in a hierarchy of levels, where each level corresponds to a set of possible behaviors. Under the influence of an internal or external stimulus, a particular behavior is invoked; it then emits an influx towards the inferior level. At this level, another behavior arises as a result of the simultaneous action of the influx and other stimuli. The process continues until terminal behaviors are activated. The name "subsumption" comes from the coordination process used between the layered behaviors: complex actions subsume simple behaviors, and a priority hierarchy sets the topology. The lower levels in the architecture have no information about the levels above them. This allows incremental design: 587 higher-level functions are added on top of an already working control system without any modification of the lower levels. The world itself is actually the primary medium of communication. Actions taken by one behavior lead to changes in the world or in the system's relationship to it, and new perceptions of those changes communicate the results to the other behaviors.
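As a minimal illustration of layered suppression (a sketch in the spirit of Fig. 6.15, with invented behaviors; it is not code from the original architecture):

# Two subsumption layers: a higher "avoid" layer suppresses the output
# of a lower "wander" layer at the suppressor node whenever it is active.
def wander(sensors):
    return "move-forward"            # lower layer: default behavior

def avoid(sensors):
    if sensors["obstacle"]:
        return "turn-away"           # higher layer becomes active
    return None                      # inactive: no suppression occurs

def subsumption_step(sensors):
    """The higher level's output, when present, suppresses the lower
    level's output (node S in Fig. 6.15)."""
    higher = avoid(sensors)
    return higher if higher is not None else wander(sensors)

print(subsumption_step({"obstacle": True}))   # turn-away
print(subsumption_step({"obstacle": False}))  # move-forward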
6.6.2 Motor Schemas Architecture

This architecture was more strongly motivated by the biological sciences and uses the theory of schemas, whose origin goes back to the eighteenth century (Immanuel Kant). Schemas represent a means by which understanding is able to categorize sensory perception in the process of realizing knowledge of experience. The first applications of schema theory include an effort to explain postural control mechanisms in humans, a mechanism for expressing models of memory and learning, a cognitive model of interaction between motor behaviors in the form of schemas interlocking with perception in the context of the perceptual cycle, and a cognitive model which employs contention-scheduling mechanisms as a means for cooperation and competition between behaviors. 39, 204, 382, 398, 423
From among the various definitions of the schema concept available in the literature, we give here the following representative ones 21 :

A pattern of action or a pattern for action. 382
An adaptive controller based on an identification procedure for updating the representation of the object under control. 16
A perceptual entity corresponding to a mental entity. 423
A functional unit that receives special information, anticipates a possible perceptual content, and matches itself to the perceived information. 297

A convenient working definition is the following: 21 "A schema is the fundamental entity of behavior from which complex actions can be constructed, and which consists of the knowledge how to act or perceive, as well as the computational process by which it is enacted". Using schemas, robot behavior can be encoded at a coarser granularity than neural networks while maintaining the features of concurrent cooperative–competitive control involved in neuroscientific models. More specifically, schema-theoretic analysis and design of behavior-based systems possesses the following capabilities:

It can explain motor behavior in terms of the concurrent control of several different activities.
It can store both how to react and how to realize this reaction.
It can be used as a distributed model of computation.
It provides a language for connecting action and perception.
It provides a learning approach via schema elicitation and schema tuning.
It can explain the intelligence functions of robotic and other systems.
Motor schema behaviors are relatively large-grain abstractions, which can be used in a wide class of cases. Typically, these behaviors have internal parameters which offer extra flexibility in their use. Associated with each motor schema is an embedded perceptual schema which provides the view of the world specific to that particular behavior and is capable of providing suitable stimuli. Perceptual schemas are defined in a recursive manner. Some examples of motor schemas are the following (a sketch of how two of them may be combined is given after the list) 21 :

Move ahead (i.e., move in a particular direction)
Move to goal (i.e., move towards a desired goal)
Avoid static obstacle (i.e., move away from passive or non-threatening navigational obstacles)
Escape (i.e., move away from the projected intercept point between the robot and an approaching object)
Dock (i.e., approach an object from a desired direction)
Avoid past (i.e., move away from places visited in the past)
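Concretely, motor schemas are commonly combined by summing the velocity vectors that each schema outputs. The following sketch (Python; the gains, the obstacle's sphere of influence, and the geometry are illustrative assumptions, not values from the text) combines move to goal with avoid static obstacle:

# Sketch of motor-schema combination by vector summation.
import math

def move_to_goal(pos, goal, gain=1.0):
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    return (gain * dx / d, gain * dy / d)        # unit vector toward goal

def avoid_static_obstacle(pos, obs, gain=1.0, radius=2.0):
    dx, dy = pos[0] - obs[0], pos[1] - obs[1]
    d = math.hypot(dx, dy) or 1e-9
    if d > radius:
        return (0.0, 0.0)                        # outside sphere of influence
    w = gain * (radius - d) / radius             # repulsion grows when closer
    return (w * dx / d, w * dy / d)

pos, goal, obs = (0, 0), (10, 0), (1.5, 0.3)
gx, gy = move_to_goal(pos, goal)
ax, ay = avoid_static_obstacle(pos, obs)
print((round(gx + ax, 2), round(gy + ay, 2)))    # combined velocity command

Each schema contributes its own vector concurrently, so adding a new behavior requires no change to the existing ones, which reflects the cooperative-competitive control mentioned above.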
The typical design steps for creating a schema-based robot system are the following 17 :

Represent the problem in terms of the motor behaviors required to achieve the goal(s).
Decompose the motor behaviors to their most elementary level (in a way similar to biological systems whenever feasible).
Formulate mathematical models that express the robot's reaction to perceived environmental events.
Assess by simulation the performance of the behavior under study in the desired environment.
Specify the perceptual requirements needed to satisfy the inputs for each motor schema.
Develop suitable perceptual algorithms for extracting the required data for each behavior (via action-oriented perception, expectations, and focus-of-attention processes).
Embed the designed control system into the robot hardware.
Evaluate the overall performance of the system.
Iterate and modify/improve the behavioral issues as appropriate.
6.7 Discussion

Referring to distributed control systems (DCSs), we note that today's DCSs are equipped with a large variety of software packages at the system level and the application level. Software at the system level provides proper tools for the development, testing, running and maintenance of user-created programs. At the application level, the software involves the monitoring, control-loop configuration, and communication components. The system development tools are composed of several integrated compilers and utility programs. The application programs extend the content of the library of functions and make possible the configuration of more complex control loops.

In DCSs, data exchange is needed between the various subsystems, from the instrumentation level up to the mainframe level. This requires the use of different types of data communication networks at the various hierarchical levels, viz. sensor data acquisition and control signal distribution at the field level, a high-performance bus for interfacing at the control level, and real-time local networks or long-distance communication links at the production control and management level.

Referring to the behavior-based architectures (BBAs), it is worth mentioning that there are a number of other BBAs, such as the circuit architecture, the colony architecture, the action–selection architecture (ASA) and the skill network architecture (SNA). 21 The circuit architecture combines the reactivity principle of the subsumption architecture, the abstractions employed in the real-time control system (RCS) architecture, and proper logic formulations of the behaviors. The colony architecture is a direct descendant of the subsumption architecture which employs simpler coordination schemes and allows a more flexible definition of the behavior relations (tree-like priority ordering instead of the layered ordering). The ASA 331 uses a dynamic mechanism for the selection of behavior, where each individual behavior (competence module) has its own activation level that provides the basis for run-time arbitration.
The SNA uses graphical animation combined with the behavior-based approach; its coordination scheme is based on a modification of the ASA mechanism. Other behavior-based control architectures for robotic systems include the animate agent architecture (AAA), the distributed architecture for mobile navigation (DAMN), 468 the behavioral architecture for robot tasks (BART), and the autochronous behaviors (AB) architecture. We close our discussion by mentioning four hybrid BBAs, each one using a different hybrid architecture strategy. These are 21 :

AuRA (Autonomous Robot Architecture) This architecture uses the selection strategy, which regards planning as configuration. The planner determines the behavioral composition and the parameters used, and reconfigures them in case of system failures.
Atlantis This architecture uses the advising strategy, which regards planning as advice giving. The planner proposes alterations that may or may not be adopted by the reactive control system.
Planner–Reactor This architecture uses the adaptation strategy, which views planning as adaptation. The planner continuously adapts the ongoing reactive components to the changing conditions of the robot's world and tasks.
PRS (Procedural Reasoning System) This architecture uses the postponement strategy, where planning is considered a least-commitment process. The planner postpones making decisions on actions as much as possible, and elaborates plans only when necessary.
Chapter 7
Implications of Industry, Automation, and Human Activity to Nature
We’ve got to pause and ask ourselves: How much clean air do we need? Lee Iacocca This is the foundation of all. We are not to imagine or suppose, but to discover, what nature does or may be made to do. Francis Bacon The great task for environmental historians is to record and analyze the effects of man’s recently achieved control over the natural world. What is needed is a longer – term global, comparative, historical perspective that treats the environment as a meaningful variable. John Richards
7.1 Introduction

As mentioned in Section 1.6, any technological and automation system operating in nature affects it through the waste and hazardous contaminants released directly, or produced indirectly by chemical reactions occurring in nature after their entrance into it. Industrial contaminants can enter nature in several ways and forms (see Fig. 7.1). They may originate directly from industrial operations, in the form of air emissions from smoke stacks or process reactions exposed to the atmosphere, or from plant wastewaters discharged into receiving streams with or without previous treatment. Wastewaters may also be directed to a public treatment plant, while solid technological wastes may be disposed of in landfills, producing possible land or groundwater pollution or, after incineration, potential air pollution. Other sources of contaminants released to nature are fugitive emissions from leaking equipment (such as valves and pumps) in chemical and petrochemical industrial plants. The careful study of this impact of automation and industry on nature is absolutely necessary for developing efficient strategies and technologies for pollution prevention/control and waste minimization.
Fig. 7.1 Bishop's diagram of the flow and transformation of industrial contaminants in nature (depicting dispersion of contaminants by wind, sunlight and precipitation, volatilization, photochemical reactions, inflow, acid–base equilibria, hydrolysis, photolysis, biodegradation, bioaccumulation, sedimentation, and drainage to groundwater from sewers and landfills)
A self-explanatory schematic diagram showing the transformation and transportation of technological contaminants in nature is depicted in Fig. 7.1. 50 Our purpose here is to describe the various implications of industrial-automated systems for nature (air pollution, solid wastes, water pollution) and the resulting phenomena of global warming, ozone thinning, acid rain and urban smog, including the problems of depletion of natural resources and energy and, more generally, the environmental impact (EI) of the various actions and activities of modern human life. To this end, the concepts of waste and pollution control are first defined, followed in Section 7.2 by the common classification and terminology of industrial contaminants. 50, 69, 128, 359, 498, 499, 595, 608
7.1.1 The Concepts of Waste and Pollution Control

In general, one must be very careful in how one defines waste. 89 Commonly, waste is considered to be any solid product left over at the end of a physical or chemical process or technological action, but actually waste is a much wider concept than that, including the wastage of energy or water in producing or using a product. In describing waste, one must consider several issues. For example, recycling beverage cans may be very profitable, due to the conservation of natural resources and the decrease in the landfill space needed, but driving long distances to deposit small quantities of empty cans, or glass and plastic bottles, at collection stations may involve excessive consumption of gasoline. Also, the resources consumed for this and for transporting the materials to the recycling plant might exceed the resources saved
by not throwing them away. On the other hand, industrial waste, i.e., the materials produced by a manufacturing plant but not directly used in it, may be of value to somebody else. In general, an industrial by-product may be characterized as a waste or a useful commodity, depending on its properties and its marketability. Thus, an industrial by-product does not necessarily have to be a waste. This leads to the definition of a waste as "a resource out of place". 50 Determining the point at which a waste can be turned into a useful resource is the subject of pollution control and waste minimization. Pollution control (or prevention) is collectively defined as the elimination or reduction of waste streams through proper production technologies and strategies. 50, 128, 509 According to the U.S. Environmental Protection Agency (EPA): "Pollution prevention is the use of materials, processes, or practices that reduce or eliminate the creation of pollutants (contaminants) or wastes at the source. It includes practices that reduce the use of hazardous materials, energy, water, or other resources, and practices that protect natural resources through conservation or more efficient use" 595 (www.epa.gov). This definition implies that pollution control embraces, on the one hand, the modification of industrial processes to minimize the production of wastes and, on the other, the application of proper techniques and technologies that lead to the conservation of valuable natural resources. The Air Pollution Prevention and Control Division (APPCD) of EPA creates, implements and demonstrates air pollution prevention and control technologies for key industries, indoor environments and sources of greenhouse gases (GHGs).
7.2 Industrial Contaminants

Industrial contaminants (pollutants) are the waste materials that are produced as direct industrial by-products, or that are left after the recycling and reprocessing of such by-products into other useful compounds. All these contaminants contribute to the pollution of nature (the environment), with potential consequences for human health. Industrial pollutants are distinguished into 50, 69, 89, 128, 359, 498, 499, 509, 595, 608 :

Organic compounds
Metals and inorganic nonmetals
7.2.1 Organic Compounds

An organic compound is any compound that contains carbon and usually hydrogen. Other elements that may be contained in organic compounds are oxygen, sulfur, nitrogen, phosphorus, metals, etc. The carbon atom (normally) has four electrons to share with other atoms when forming a compound (in straight or branched chains or rings). A carbon atom with all its four bonds connected to different atoms is
said to be saturated. The two main categories of organic compounds, according to structure, are the following:

(i) Aliphatic Compounds These compounds involve straight or branched chains of carbon atoms, or are formed in rings which contain single bonds between the carbons. Examples are pentane, 3-ethylhexane, and cyclohexane.

[Structural formulas of pentane, 3-ethylhexane, and cyclohexane]
(ii) Aromatic Compounds These compounds contain carbon-based rings or multi-rings with alternating single and double carbon–carbon bonds. Examples of aromatic compounds are benzene, phenols and dioxins.

[Structural formula of benzene]
Aliphatic compounds Aliphatic compounds are further distinguished into alkanes, alkenes, alkynes, and their derivatives, depending on the degree of saturation of the carbon bonds. In alkanes, all bonds between carbon atoms are single bonds (saturated); a compound in which all bonds are saturated is said to be a saturated compound. Isomers are compounds with the same number of carbon atoms (four or more) but different structures and properties. The straight-chain alkanes (otherwise called paraffins) are the following, in increasing number of carbon atoms: methane CH4 (one carbon), ethane CH3—CH3 (two carbons), propane (three carbons), butane (four carbons), pentane (5), hexane (6), heptane (7), octane (8), nonane (9), decane (10), undecane (11), and dodecane (12 carbons). Other alkanes have other atoms or radicals substituted for one or more of the hydrogen atoms. These may include halides (—Cl, —F, —Br, —I), nitrogen groups (amines —NH2, amides —CONH2, nitriles —CN, or nitrosamines —N—N=O), phosphorus, sulfur, and metals. Typical solvents used in industry are halide-substituted (e.g., chlorinated) alkanes. A group of aliphatic compounds that contain sulfur are called mercaptans; they are commonly found in industrial wastes and are usually toxic. A class of substituted
alkanes that has been of strong environmental concern is that of the chlorofluorocarbons (CFCs), which have been implicated in ozone destruction in the upper atmosphere. One CFC, the well-known Freon 11, is CCl3F (trichlorofluoromethane, a carbon atom bonded to three chlorine atoms and one fluorine atom).
Alkenes are aliphatic compounds that have double bonds between two adjacent carbon atoms (general formula CnH2n). Because of the double bond, these compounds are characterized as unsaturated compounds. Their names always end in "-ene", as in 2-butene (commonly called 2-butylene). Alkynes are characterized by a triple bond between two carbon atoms. These compounds are very unstable and, except for acetylene (HC≡CH), are not usually found as waste products. The names of alkynes end in "-yne", and their numbering system is the same as for alkenes, for example 2-pentyne (CH3—C≡C—CH2—CH3). Organic acids typically have a carboxylic acid group (—COOH) attached to one end of the molecule, and their names end in "-anoic acid"; for example, propanoic acid is CH3—CH2—COOH.
The organic acids are used in industry as process chemicals, or are produced as by-products of chemical processes. Unsaturated organic acids are also encountered in nature and are typically employed as process chemicals (e.g., in plastics such as Plexiglas and in a variety of oils). Many saturated and unsaturated monocarboxylic acids are encountered in nature as constituents of fats, oils, and waxes. Other organic compounds are:

Esters Compounds formed by the reaction of alcohols and organic acids.
Ethers Compounds formed by combining two alcohols.
Aldehydes The oxidation products of primary alcohols (R—OH, where R is an organic group).
Ketones The oxidation products of secondary alcohols (R—CH(OH)—R′).
Acetaldehyde and Formaldehyde Two chemicals frequently used in organic synthesis reactions.
Cyclic Aliphatic Compounds These are contained in petroleum (e.g., the most common cyclic aliphatic is cyclohexane, and a cyclic ketone is cyclohexanone).
[Structural formulas of cyclohexane and cyclohexanone]
Aromatic compounds In aromatic compounds there are rings with alternating single and double bonds between the ring carbons. These ring bonds typically do not behave like the covalent bonds of aliphatic compounds; the compounds are very stable, since it is not easy to add an atom across such a ring bond. The simplest aromatic compound is benzene, C6H6. When one hydrogen of benzene is replaced by something else, the compound is named by placing the name of the substituent first, followed by "-benzene"; for example, nitrobenzene.

[Structural formulas of benzene and nitrobenzene, together with their shorthand ring forms]
The aromatic group that results when benzene is attached to an aliphatic chain is named using the term phenyl, e.g., 3-phenylpentane, CH3—CH2—CH(C6H5)—CH2—CH3.
It is also possible to substitute benzene's hydrogens in two or more positions, which must be indicated (e.g., 1,3-dichloro-5-nitrobenzene). A simple way to name the compounds resulting from benzene with only two substituents is to use the terms ortho- (o-), meta- (m-) and para- (p-). "Ortho" means that the substituents are adjacent to each other (1,2 position), "meta" means that they are displaced by one carbon on the ring (1,3 position), and "para" that they are opposite to each other (1,4 position). Another class of organic compounds is the class of polychlorinated dibenzodioxins (PCDDs) and dibenzofurans (PCDFs). These compounds are not produced
for any desired purpose, but are undesired by-products generated during the manufacture of other organic compounds or during the combustion of chlorinated organic materials such as plastics. Chlorinated dioxins are derivatives of dibenzo-p-dioxin.
7.2.2 Metals and Inorganic Nonmetals

Metals are, in general, elements that lose electrons to form positive ions. Nonmetals are elements that hold electrons firmly and tend to gain electrons to produce negative ions. 498 Metals with atomic numbers greater than that of iron and densities higher than 5.0 g/cm3 are called heavy metals (e.g., lead, cadmium, chromium, mercury). Metal wastes are produced in metal finishing processes (e.g., rinsing metals after plating, disposal of spent metal plating baths). These wastewaters are usually discharged into receiving streams near the industrial plant or into public wastewater treatment plants. The sludges of plating baths or public treatment plants are frequently disposed of in landfills, where the metals can be washed off into the groundwater. Similarly, waste metals in discarded products can also enter the groundwater or surface waters in solubilized form. Metals in wastes undergoing incineration (e.g., tin cans and other metallic refuse) can volatilize under the high temperatures involved and become air pollutants; the nonvolatilized metals accumulate in the fly ash or bottom ash and can contaminate groundwaters after landfilling.

Humans and animals need small quantities of many metals as essential nutrients, but in higher quantities these can be toxic, the toxicity depending on the type of metal present. Metals that are essentially insoluble usually pass through the human body and are expelled without causing any damage. More soluble forms, however, can be retained in the blood or the tissues and can cause severe toxicity. Heavy metals and inorganic pollutants often bioaccumulate in nature: these compounds are more soluble in human tissues than they are in the water, or in the tissue of the lower-order organisms that have been consumed, so their concentrations in the human body can be orders of magnitude higher than those found in the water at the original industrial discharge point. Even an innocuous quantity in an industrial waste entering a receiving water may be concentrated to toxic levels in fish, or later in the humans that eat those fish. It is known that oysters and mussels can contain mercury or cadmium at concentrations hundreds or thousands of times higher than those of the water in which they live. Some inorganic contaminants which have significant usage in industry are the following.

Arsenic Arsenic compounds are very toxic: less than 0.1 g of arsenic is typically fatal, and arsenic is a known carcinogen. Arsenic is not a true metal but a semimetal or metalloid. The industrial use of arsenic has decreased over time, but it is still used in agriculture as a herbicide and as an animal disinfectant. Drinking water contains inorganic arsenic at an average of 2.5 µg/L; the maximum permissible concentration of arsenic in drinking water is 0.05 mg/L.

Cadmium This is a metal existing in nature together with zinc, and is typically produced in industry as a by-product of zinc smelting. Cadmium is commonly
employed in metal plating due to its increased resistance to corrosion, and is also used in polyvinyl chloride (PVC) plastics as a stabilizer. It is also used in nickel–cadmium batteries (now falling out of use due to their severe environmental impacts). Plants absorb cadmium from irrigation waters and from soils where it has accumulated from atmospheric deposition or from the land spreading of wastes. Only 1 g is a fatal dose, and its half-life in humans is about 20–30 years. The maximum allowable concentration of cadmium in drinking water is 5 µg/L.

Mercury This metal is a liquid at room temperature and can appear in both organic and inorganic forms. Mercury is highly volatile, and its vapor is extremely toxic. Mercury sulfide (HgS) is very insoluble in water, but mercuric nitrate (Hg(NO3)2) is very water soluble. Methylmercury (CH3HgX, where X is an anion, usually a halide) and dimethylmercury (Hg(CH3)2) are highly toxic volatile liquids that are much more toxic than the mercury salts (e.g., mercuric chloride, HgCl2) from which they are produced after being methylated by anaerobic bacteria in anaerobic waters or sediments. Fish such as tuna and swordfish can carry mercury concentrations one million times higher than the concentrations in the water in which they live. Mercury has also been used to produce chlorine, chlorinated compounds, and sodium hydroxide. In this process, mercury is largely recovered and reused, but some escapes into the air and into the plant's cooling water. Presently, this process has been abandoned and replaced by a membrane process which does not need mercury: the NaCl solution and the chloride-free solution are separated by a membrane that allows Na+, but not Cl−, to pass. Because mercury is highly toxic, the highest permissible concentration in drinking water is 2 µg/L.

Lead Lead has a low melting point (327 °C) and a high density. Due to its malleability, it can be used to make pipes of different shapes. Although new lead water pipes are no longer installed nowadays, lead is still used in roofing and flashing, and in many types of electrical solder. Its principal uses are in automobile batteries, electroplating, plastics, glass, and electronic devices. In pure form, lead does not usually cause health problems, but it becomes toxic when it dissolves, giving ionic forms. Lead has a high boiling point (1,740 °C) and a low vapor pressure, so volatilization of lead is not a major problem. At high levels, lead is a general metabolic poison. At lower concentrations, it can interfere with the production of hemoglobin and cause anemia; it can also cause kidney dysfunction, high blood pressure, and permanent brain damage. Today most lead-bearing materials (e.g., lead batteries) are recycled (more than 85% of the quantity of lead refined each year), and of course private cars use unleaded gasoline.

Cyanides Cyanide is an inorganic nonmetallic anion with the structure C≡N− and constitutes the conjugate base of the weak acid hydrogen cyanide (HCN). Under neutral or acidic conditions, the highly toxic gas HCN is produced, which in high concentrations is fatal; exposure of a human to hydrogen cyanide results in asphyxia. Although cyanide solutions are never deliberately mixed with acids (because of this potential for a life-threatening condition), in real life this has occurred accidentally in many situations. Cyanide salts are typically used in metal plating
baths, in industry as intermediates, and in the recovery of gold and silver during ore refining. Drinking water is not allowed to contain cyanide concentrations higher than 0.2 mg/L.
7.3 Impact of Industrial Activity on Nature

Industrial activity (automated processes and plants) has a strong impact on the environment and nature. Any industrial process or operation produces some kind and amount of waste; no operation fully converts basic materials into finished products. The effect of industrial wastes on nature and on human health depends on where the wastes go (atmosphere, ground, water bodies and streams, etc.). Pollutants emitted into the atmosphere fall back to the Earth via rain or gravity and become soil or water contaminants. Some of them are taken up by organisms living in the water, such as fish, and may be bioconcentrated, with the risk of producing a hazard for the health of humans who eat the fish. In general, nature pollutants are classified as 50, 89, 128, 509 :

Air pollution
Solid waste disposal
Water pollution
Air pollution contributes to urban smog formation, acid rain, global warming and ozone depletion (thinning). Solid wastes include dust particles and slag from coal; liquid wastes originate from various processes, for example radioactive coolants from nuclear power plants; and gaseous wastes are produced by the chemical industry. During the incineration of solid wastes there is energy recovery, which is a type of resource recovery; many materials in the waste can also be recovered directly. In the following, we discuss the above three types of nature (environment) pollutants and their consequences, along with the issues of energy consumption, hazardous waste management, and natural resource depletion.
7.3.1 Air Pollution

Air pollution is the result of the emission of gaseous, liquid, and particulate materials from industrial plants into the atmosphere. It is actually a very complex phenomenon, because of the various physical and chemical processes involved and the intricate pollution transportation taking place. The effects of air pollution on nature and the human population can appear and act at the local, regional, national, or global level. The boilers of an industrial plant that burn coal emit exhaust gases containing sulfur dioxide and hydrogen sulfide, which may cause severe odor problems in the surrounding region, and damage to materials via the acids that result from the reaction of the pollutants with water vapor in the air. Other materials
in the atmosphere may also react with acids, thus contributing to the formation of smog. These local effects can extend to larger areas, because winds transport the pollutants far away and thus cause acid rain on a regional, national and global scale. Many pollutants can reach the upper levels of the atmosphere and, through several reactions, may finally cause a thinning of the ozone layer above the Earth, the layer that keeps ultraviolet radiation from reaching humans and protects them from the well-known harmful consequences of this radiation. These impacts can also contribute to the increase of the average temperature of the Earth, a phenomenon called "global warming". Throughout the years, many damaging effects and accidents due to air pollution have occurred and been reported all over the world. 547

The composition of clean, dry air is as follows (volume percentages): Nitrogen (N2) 78.08%, Oxygen (O2) 20.94%, Argon (Ar) 0.934%, Carbon dioxide (CO2) 0.033%, Neon (Ne) 0.00182%, Helium (He) 0.00052%, Methane (CH4) 0.00015%, Krypton (Kr) 0.00011%, Hydrogen (H2) 0.00005%, Nitrous oxide (N2O) 0.00005% and Xenon (Xe) 0.000009%. Water vapor concentrations lie between 0% and 4%. This natural (clean, dry) air composition is changed by industrial air pollutants. Air pollutants are distinguished into:

Primary air pollutants
Secondary air pollutants
Primary pollutants are emitted directly into the air in a hazardous form, whereas secondary pollutants are transformed into a harmful form by chemical reactions occurring in the atmosphere after their entrance into it. Primary pollutants include hydrocarbons, particulates, sulfur dioxide and nitrogen compounds. Typical secondary pollutants are photochemical oxidants and atmospheric acids produced via solar-energy-initiated reactions of less hazardous compounds. Eight major pollutants which have severe consequences for the quality of the environment and human health are the following 50 :

Carbon monoxide (CO)
Hydrocarbons
Sulfur dioxide (SO2)
Particulates
Nitrogen oxides (NO, NO2)
Photochemical oxidants
Carbon dioxide (CO2)
Hazardous air pollutants (HAPs)
Carbon monoxide, an odorless, colorless and nonirritating poison, can rapidly cause death at very small concentrations, because it reduces the capability of the blood to transfer oxygen to the tissues of the body. Carbon monoxide is generated by the incomplete burning of gasoline, coal, wood and other materials. Its effects on humans are headache, drowsiness and finally asphyxia. The carbon monoxide in urban areas is mainly due to automobiles (67%); other sources of CO are industrial plants (6%) and stationary fuel combustion (20%).
Hydrocarbons are volatile organic compounds that are released through the evaporation of petroleum-based fuels and as remnants of fuel not completely burned. Again, automobiles are the principal sources of hydrocarbons, followed by refineries and other plants. Hydrocarbons may cause cancer and other health problems.

Sulfur dioxide, a colorless corrosive gaseous poison, produces respiratory irritation and can react with the ozone, water vapor, etc. of the atmosphere to give sulfuric acid (H2SO4), which causes severe damage to metals and other construction materials. Entering the human body in aerosol form through respiration, it causes serious tissue damage. SO2 is produced by the combustion of sulfur-containing fuels (such as coal and petrol) and by industrial plants.

Particulates are small pieces of solid or liquid material, such as dust, ash, smoke, soot, etc., entering the atmosphere. Their principal human-based sources are unburned fuels from stationary fuel combustion and transportation, and also industrial plants. Particulates can cause reduced visibility, respiratory problems and cancer (especially particulates smaller than 2.5 µm, which can travel deep into the lungs).

Nitrogen oxides are produced by the oxidation of the nitrogen in air during combustion at temperatures higher than 2,000 °F (N2 + O2 → 2NO; 2NO + O2 → 2NO2). They are a major component in smog-forming reactions. Nitric acid (HNO3), produced by the reaction of nitrogen oxides with water vapor, may contribute to acid precipitation problems. The sources of nitrogen oxides are automobiles and the combustion of coal, oil and gas at sufficiently high temperatures.

Photochemical oxidants are the result of secondary, solar-energy-driven atmospheric reactions. One of them is ozone (O3), a strong oxidant which destroys lung tissue and the chlorophyll in plants. Other strong photochemical oxidants are peroxyacetyl nitrate (PAN), a strong eye irritant, and acrolein; both PAN and acrolein can severely damage materials.

Carbon dioxide is a gas conventionally regarded as nontoxic and innocuous (a non-pollutant), but at high concentrations in the upper atmosphere it contributes substantially to global warming.

Hazardous air pollutants (HAPs) are other chemicals (more than 200) to which no ambient air quality standards apply, but which cause or contribute to an increase in mortality or in serious irreversible, or incapacitating reversible, diseases.
7.3.2 The Earth's Carbon Cycle and Balance

Carbon on the Earth is exchanged on a continuous basis between the atmosphere, the oceans, the landmass and the biosphere, which constitute a closed system (see Fig. 7.1 and www.koshland-science-museum.org). During the long-term carbon cycle, which takes place over the geological ages of the Earth's life, carbon in the air combines with water to produce weak acids that dissolve rocks very slowly.
This carbon is transported to the oceans, forming several kinds of coral reefs and shells that are carried deep into the Earth by drifting continents and finally re-enter the atmosphere through volcanic releases. During the short-term cycles, carbon is exchanged quickly between plants and animals via photosynthesis (i.e., the conversion by plants of CO2 into energy-rich carbon compounds) and respiration (i.e., the slow combustion of carbon materials within living organisms, which generates energy and releases CO2). During the short-term cycle, gas is also exchanged between the oceans and the atmosphere. Naturally, the Earth sustains a carbon balance through the long-term and short-term cycles. But because the release of carbon into the atmosphere by humans via fossil-fuel burning proceeds at a much higher rate than the ocean uptake (dissolution of CO2 into the oceans and transfer of carbon to them by rivers from land), ocean release (return of ocean carbon back to the atmosphere in the form of CO2 gas), sedimentation (limestone, coal, gas and oil coming from animal and plant matter slowly deposited on land and on the ocean floor), respiration and photosynthesis processes, the natural carbon balance cannot be maintained, and the concentration of CO2 in the atmosphere is increasing.
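The imbalance just described can be illustrated with a toy one-box budget (all stock and flux values below are round, illustrative figures, not data from the text): whenever emissions exceed the net ocean and land uptake, the atmospheric stock grows year after year.

# Toy one-box carbon budget (illustrative round numbers, in GtC).
atmosphere = 750.0      # assumed atmospheric carbon stock
emissions = 8.0         # assumed fossil-fuel release per year
ocean_uptake = 2.0      # assumed net ocean uptake per year
land_uptake = 2.0       # assumed net photosynthesis-minus-respiration uptake
for year in range(1, 6):
    atmosphere += emissions - ocean_uptake - land_uptake
    print(year, round(atmosphere, 1))   # the stock grows by 4 GtC every year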
7.3.3 Global Warming, Ozone Hole, Acid Rain and Urban Smog

In the following, we discuss the four major consequences of air pollution on the lower and higher atmosphere, namely (see Section 1.6):

Global warming 288
Ozone hole 211, 553
Acid rain 320
Urban smog 385
7.3.3.1 Global Warming and Greenhouse Effect

The climate of the Earth is affected by several factors, one of which is the Earth's temperature. The Earth's temperature is believed to have varied considerably over the ages, but to have been consistently subject to a "greenhouse effect" of significant magnitude, i.e., warming as a result of the reflection, by atmospheric carbon dioxide and some other trace gases, of infrared radiation back to the Earth's surface. A century ago, Svante Arrhenius (Sweden) and Thomas Chamberlin (USA) predicted that a gradual increase in atmospheric temperature would take place as a result of the increasing carbon dioxide concentration. Climatologists do not fully agree on the extent to which this prediction is proving true, but it is generally believed that over the past century a rise in mean temperature of about one-half degree Celsius has occurred. It has been predicted that the concentration of CO2 in the atmosphere will double by the middle of the present century, 262 and scientists generally agree that action to prevent the strengthening of the greenhouse effect should be taken
immediately. 288, 612 The increase of the Earth's temperature could lead to melting of the ice caps, rising sea levels, coastal flooding, shifting crop-producing regions, and adverse impacts on populations of humans and other living organisms. 419 However, other scientists have argued that the Earth has ways to balance the increasing carbon dioxide production: not all of the carbon dioxide added to the atmosphere remains there; some is dissolved in the oceans and some is embedded in plants and animals as biomass. In any case, the majority of climatologists suggest that planning needs to be done to fight this increasing trend if it is verified to be true.

We close our discussion with a few words on why the atmospheric carbon dioxide concentration plays such a dominant role in determining the Earth's climate. The reason is that it increases the ability of the Earth to retain heat, much as glass does in a greenhouse (which is why this phenomenon has been named the greenhouse effect). Greenhouses are warmer inside than the air outside because the glass is transparent to light: it allows short-wavelength light to pass through and heat the contents of the greenhouse, while reflecting the longer-wavelength heat radiating from inside the greenhouse and preventing it from passing out. The overall result is that heat becomes trapped in the greenhouse and the temperature increases. The same occurs in the upper atmosphere due to the presence of CO2 and other heat-trapping gases (methane, nitrous oxide (N2O), CFCs). The energy of the sun reaching the upper atmosphere is partly (about 50%) reflected away or absorbed by the atmosphere and the particulates in it, and partly (the other 50%) passes through, warming the Earth. Visible light from the sun passes through, but the ultraviolet portion of the sun's energy is absorbed almost fully by the ozone in the upper atmosphere, while the infrared portion is absorbed by carbon dioxide and water in the troposphere. Carbon dioxide is transparent to visible sunlight and allows it to arrive at the Earth's surface. Eventually, all of the energy absorbed at the Earth's surface is reemitted back into space as longer-wave infrared heat; if this did not happen, the Earth would act as an energy sink and its temperature would continually increase. Carbon dioxide, although transparent to visible light, is very efficient at absorbing the infrared heat radiation emitted by the Earth's warm surface. It traps the heat in the air close to the Earth's surface and reemits it back toward the Earth. The result is a continuous increase in the amount of energy reaching the Earth's surface and an increase in temperature. It should be noted that the greenhouse effect is fundamental for life on Earth: without it, temperatures on Earth would be comparable to those on the Moon (which has no atmosphere and so no greenhouse effect), i.e., the average surface temperature on Earth would be about 35 °C colder, precluding most living organisms.

The amount of heat-trapping gases, known as greenhouse gases (GHGs), that is emitted to or removed from the atmosphere over a given period of time (typically 1 year) is called a greenhouse gas inventory (GHG inventory). A GHG inventory also gives data on the human activities that cause emissions or removals of GHGs, as well as a description of the methods employed for calculating the inventory. On the basis of GHG inventories, managers and policy decision-makers create short- and long-term policies and assess progress.
The data of GHG inventories are used as inputs to environmental-economic models. Corporate GHG inventories deal with the emissions caused by a company’s operation.
In the USA, the EPA has prepared, each year since 1990, a GHG inventory report which provides estimates of GHG emissions and removals (sinks). Many other countries also develop GHG inventories using methods similar to the EPA's. These national GHG inventories are used to produce global GHG inventories. Internationally accepted GHG inventory methodologies are published regularly by the Intergovernmental Panel on Climate Change (IPCC). Full information on GHG emissions can be obtained from www.epa.gov/climatechange/emissions/index.html#ggo.
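A rough, standard textbook estimate (not taken from this book) quantifies the figure quoted above for an Earth without greenhouse gases. Balancing absorbed solar power against emitted infrared power gives the effective radiating temperature

\[ \frac{S(1-\alpha)}{4} = \sigma T_e^4 \;\Rightarrow\; T_e = \left( \frac{1366 \times (1-0.3)}{4 \times 5.67 \times 10^{-8}} \right)^{1/4} \approx 255\ \mathrm{K}, \]

where S is the solar constant, \(\alpha\) the planetary albedo and \(\sigma\) the Stefan–Boltzmann constant. This is about 33 °C below the observed mean surface temperature of roughly 288 K, consistent with the roughly 35 °C difference mentioned in the text.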
7.3.3.2 Ozone Hole

Ozone is a critical ingredient of the atmosphere, encountered above about 25 km altitude, which absorbs almost totally the dangerous ultraviolet radiation of the sun. On absorbing ultraviolet light, ozone is split into an oxygen molecule and an oxygen radical (O3 + ultraviolet radiation → O2 + O). The combination of oxygen radicals with oxygen molecules allows ozone to be re-formed (O2 + O → O3), which then absorbs more ultraviolet radiation. Thus, ozone does not allow ultraviolet radiation (with wavelengths less than 340 nm) to reach the Earth's surface, where it might cause skin cancer, cataracts or mutations in humans and other living organisms (marine life, etc.). If the amount of ozone in the upper atmosphere is reduced, more ultraviolet radiation reaches the Earth's surface. The most critical manifestation of the loss (thinning) of ozone in the upper atmosphere is the "ozone hole" (an almost complete absence of ozone) that has appeared over Antarctica every spring (during which the air becomes stagnant) throughout the years. 216 However, there is evidence that less severe losses have occurred elsewhere around the world, especially in the higher latitudes of the northern hemisphere. 436 Actually, the size of the Antarctic ozone hole varies each year depending on the respective weather conditions. In the southern hemisphere winter, the atmosphere over the Antarctic area is kept cut off from exchanges with mid-latitude air by the predominating winds, the so-called polar vortex. The polar vortex has very low temperatures that lead to the appearance of polar stratospheric clouds (PSCs). As the polar spring comes (September/October), the returning sunlight, combined with the PSCs, causes the release of highly ozone-reactive chlorine radicals that break ozone down into oxygen molecules; a single chlorine atom can cause the breakdown of thousands of ozone molecules. The extent to which the loss of ozone from the stratosphere is a result of human activity is not precisely known, but research has indicated that the primary contributing factor to ozone depletion in the stratosphere is the industrial emission of chlorofluorocarbons (CFCs), which are called ozone-depleting substances (ODSs) (see Section 7.2.1). At the Earth's surface, CFCs are almost inert, but when affected by ultraviolet radiation in the stratosphere they release chlorine atoms, which, as mentioned above, react quickly with ozone molecules, breaking them down to oxygen (CF2Cl2 + ultraviolet radiation → CF2Cl + Cl; Cl + O3 → ClO + O2).
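The catalytic character of this destruction, which is what allows a single chlorine atom to destroy thousands of ozone molecules, becomes evident when the standard regeneration step (not written out above) is added to the cycle:

\[ \mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2}, \qquad \mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2}, \qquad \text{net: } \mathrm{O_3} + \mathrm{O} \rightarrow 2\,\mathrm{O_2}. \]

The chlorine atom emerges from the cycle unchanged and is free to attack the next ozone molecule.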
The concentrations of CFCs (freons) in the stratosphere increased rapidly after they began being used as refrigerants, aerosol propellants and solvents several decades ago. 177 Thus, a total control strategy is to minimize the emission of CFCs at industrial plants and to eliminate the above nonessential uses of CFCs. 345, 597 This strategy was adopted by many countries (currently more than 140) after the 1989 Montreal Protocol on Substances that Deplete the Ozone Layer, and ozone levels are now gradually returning to normal. One of the European bodies that monitors the ozone hole annually is DLR (the German Aerospace Center). DLR found that in 2008 the area of ozone loss above the South Pole was about 27 million km2, whereas in 2007 it was 25 million km2; in 2006 this ozone-thinning area extended over 29 million km2, i.e., about the size of North America. The detection of indications of ozone recovery is a difficult task and needs continuous measurement of the global ozone layer. Usually, the ozone hole is measured not in terms of ozone concentrations (typically a few parts per million) but by the reduction in the total column ozone above a point on the Earth's surface, which is commonly expressed in DUs (Dobson units). International reports on the ozone hole are also issued regularly by the World Meteorological Organization Global Ozone Research and Monitoring Project, which is strongly in favor of the Montreal Protocol (www.esrl.noaa.gov).
7.3.3.3 Acid Rain

Acid rain (acid deposition) is rain that is more acidic than conventional rain because it contains sulfuric and nitric acids (H2SO4, HNO3), which are formed when sulfur dioxide, nitrogen oxides and other materials emitted by industrial plants and automobiles combine with water in the atmosphere. Acid rain, besides contributing to the acidification of lakes and streams and the destruction of trees, causes the corrosion of materials, including famous historical and cultural treasures. The acidity may also be caused by acids associated with particulates contained in the air; these acids can reach materials on Earth by direct deposition or via the particulates contained in rain droplets. All these types of acid precipitation are collectively included in the term "acid rain". Acid rain can be produced by burning oil-based fuels, but most acid rain problems are caused by the combustion of coal that contains high concentrations of sulfur (like the coal of West Virginia, Ohio and Kentucky in the USA, and of other areas of the world, for example Eastern Canada and Western Europe). Comparison of the acidity of old ice with the acidity of snow or rain that falls in our days has led to the conclusion that the acidity of precipitation has increased from almost neutral to mildly acidic over the last two centuries (particularly in the Eastern USA and Western Europe). Several extreme cases of acidity have been recorded over the years (e.g., a 1974 storm in Scotland where the rain was the acidic equivalent of vinegar, with a pH of 2.4, and fog in the greater Los Angeles area with a pH of 2, which is about the acidity of lemon juice 374, 504 ). All living entities, whether animals or plants, whether living on land or in the water, are affected by acid rain (directly or indirectly). For example, the roots of vegetation and crops are
damaged by acidic rainfall, making the plants "atrophic" or even causing their death, and acid rain makes leaves and plants vulnerable to diseases. Acid rain causes harmful metals like aluminum and mercury to be leached from the soil and rocks and transported into lakes, affecting aquatic life. Similarly, every animal, from the lower life forms up the food chain, is affected. Entire fish stocks in certain lakes have been affected, creating economic problems for people who depend on fish and other aquatic life. In rural areas, people who depend upon lakes, rivers and wells feel the effect of acid rain on their livelihood. Harmful metals like lead, copper and aluminum dissolve more easily in acid rain, and this has been correlated with Alzheimer's disease. State and private plans and actions are being developed to face these problems of acid rain. Highly acidic lakes of small size can be treated by adding large quantities of alkaline substances such as quicklime, via a process known as "liming". More on the effects of acid rain on the Earth's life can be found in Wikipedia and at the websites library.thinkquest.org and www.essortment.com.
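Because the pH scale is logarithmic, the extreme episodes quoted above are far more severe than the numbers may suggest. Taking unpolluted rain to be at pH ≈ 5.6 (its natural CO2-equilibrated value, a standard figure not given in the text), the pH 2.4 Scottish storm was

\[ \frac{[\mathrm{H^+}]_{\mathrm{pH}\,2.4}}{[\mathrm{H^+}]_{\mathrm{pH}\,5.6}} = 10^{\,5.6-2.4} = 10^{3.2} \approx 1600 \]

times more acidic than natural rain.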
7.3.3.4 Urban Smog

Smog, a blend of the words "smoke" and "fog", is a term that was first used to characterize the air conditions in London, but it is now commonly used to describe the so-called photochemical smog, which is caused by the interaction between nitrogen oxides and hydrocarbons under the influence of sunlight. The resulting mixture of photochemical oxidants from these reactions (acrolein, PAN, ozone, and others) can easily react with, and harm, other materials. The major source of photochemical smog in most urban areas is automobile emission. Automobile exhaust contains large quantities of nitrogen oxide (NO), which reacts with oxygen in the air to give nitrogen dioxide (2NO + O2 → 2NO2). The resulting nitrogen dioxide gives the air a reddish-brown color and reduces visibility. Furthermore, in the upper atmosphere, oxygen molecules under the action of ultraviolet radiation split into two oxygen radicals (O2 + ultraviolet energy → 2O) which, due to their high energy, react with oxygen to produce ozone (O2 + O → O3). As we already know, some ultraviolet radiation enters the lower atmosphere; when absorbed by nitrogen dioxide, it energizes it and splits it into nitrogen oxide and an oxygen radical (NO2 + ultraviolet radiation → NO + O). The oxygen radical then reacts with oxygen and produces the highly reactive ozone, which is very short-lived and would return back to molecular oxygen if no other compounds existed in the air. But if hydrocarbons are present, such as volatile organic compounds (VOCs), the ozone reacts with them and produces PAN and other photochemical oxidants. These photochemical oxidants have a longer life and are poisons, sometimes carcinogens (formaldehyde, benzaldehyde, acetaldehyde). Therefore, photochemical smog may appear if nitrogen oxides, hydrocarbons and sunlight are combined and the climatological/geographical conditions are such that the reactive materials are concentrated near the Earth's surface rather than dissipating into higher layers of the atmosphere. The amount of sunlight reaching the Earth's surface cannot be controlled, and the control of nitrogen oxide emissions from automobiles is very expensive to
achieve. Therefore, much of the effort to control urban smog has concentrated on reducing hydrocarbon emissions from automobiles (using positive crankcase ventilation (PCV) valves, leak-proof gasoline filler caps, and catalytic converters). The term urban smog is used to emphasize the fact that the conditions required for photochemical smog formation are typically encountered in major urban areas. 337, 385, 508
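The reactions quoted in this subsection can be consolidated into the photolytic NO2 cycle (a standard summary, slightly more detailed than the simplified account above):

\[ \mathrm{NO_2} + h\nu \rightarrow \mathrm{NO} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_2} \rightarrow \mathrm{O_3}, \qquad \mathrm{O_3} + \mathrm{NO} \rightarrow \mathrm{NO_2} + \mathrm{O_2}. \]

In clean air these three reactions form a closed cycle with no net ozone production; hydrocarbons perturb the cycle by providing alternative routes that convert NO back to NO2, allowing ozone and oxidants such as PAN to accumulate.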
7.3.4 Solid Waste Disposal

Solid waste disposal is of serious concern, since its magnitude is increasing dramatically [293]. In America, every year about 200 million tons of municipal solid waste (MSW) and 400 million tons of industrial waste (IW) are discarded. To emphasize this problem several authors have used graphic comparisons such as: "every 5 years the average American discards, directly and indirectly, an amount of waste equal in weight to the Statue of Liberty" [293], or "the amount of MSW produced in the USA annually would fill 5 million trucks, which placed 'end-to-end' would stretch around the world twice" [447]. Although these waste amounts pose a tremendous management problem, they are small when compared to the 3 billion tons per year of mining wastes and the 500 million tons of agricultural wastes. An average distribution of wastes in the USA is as follows [50]: mining 75%, agriculture 12%, industry 9%, municipal 3% and sewage sludge 1%.

Mining wastes are distinguished into surface mining (strip mining) wastes and underground mining (shaft- and tunnel-based) wastes. Both of them lead to water pollution problems. Agricultural wastes (such as crop residues or manure from animal feeding) pose problems in rural areas. Industrial solid wastes originate from process wastes remaining after manufacturing a product, and institutional/commercial wastes from office activities, restaurants, laboratories, and the like. Sewage sludges are left over after treating water or wastewater, and their proper disposal is a serious problem. Finally, municipal solid wastes involve residential wastes (garbage, trash, yard wastes, ashes from heating or fireplaces, and other wastes such as furniture or hospital wastes). The composition of urban municipal wastes varies from place to place and depends on socioeconomic, climatic, urbanization-degree, and recycling-activity factors.

According to Wikipedia, by waste management (control) we mean the collection, transport, processing, recycling or disposal, and monitoring of waste materials. Approaches to the waste control problem vary in cost and impact on the environment, but none of them is ideal. A dominant effort is being put into reducing the creation of waste through resource conservation programs and the recycling and reprocessing of waste materials. Effort is also put into the development of incinerator technologies which reduce waste volume without aggravating the problem of air pollution. Waste reduction and recycling seem to be major components of the long-term solution, which is facilitated if the public collects and sorts consumer wastes at the source. Disposal or storage of radioactive waste poses a special set of
problems with controversial solutions. One possible, yet controversial, solution that has been proposed and adopted is to deposit such radioactive wastes in steel drums in salt beds half a mile below the surface of the ground [54]. Newer techniques and policies for radioactive waste management are described in the OECD's Nuclear Energy Agency Report No. 5296 [400]. More up-to-date information regarding energy and the management of radioactive waste in the EU can be found on the "europa" website [146].
7.3.5 Water Pollution

The global water cycle includes evaporation, transpiration, vapor transport, precipitation, surface runoff, percolation, and ground water flow. This cycle produces about 9,000 km³ of fresh water available at any time for human use (in principle sufficient to sustain 20 billion people) [301]. However, this does not mean that all of the five billion humans presently living on Earth have an adequate water supply, because the world population is not distributed optimally with respect to where the water supplies are located. Today several regions of the world have insufficient water for their people.

The major source of water pollution is industry, which disposes of process wastewaters, cooling waters, spent process chemicals and other contaminants into surface waters directly (by piping them to a nearby lake, river or stream) or indirectly (by adding them to a public sewer which finally leads to a water body). Treating these wastes so that they cease to be hazardous to human health and the environment is excessively costly. Therefore, most industries are adopting the approach of minimizing their use of water and their discharge of wastes to water bodies. About 90% of the water used in industry is for cooling purposes, so care must be taken to avoid contaminating this water during its utilization. It must be remarked, however, that even with adequate control of industrial water pollution inputs, proper control of nonpoint water pollution sources (such as storm water runoff or agricultural runoff) is needed in order to achieve a major improvement in stream water quality.
7.4 Energy Consumption and Natural Resources Depletion

Any human society needs some kind of energy supply. Up to the industrial revolution, most energy sources were used for cooking and heating, with small quantities used in industry. The industrial revolution forced an increased utilization of conventional fuels (wood and coal) and initiated the development of new ones. In the USA, wood supplied 90% of the total energy in 1850 but only 20% in 1900, by which time coal supplied 70% [136]. The first oil well was drilled in 1859 in Pennsylvania. Much of the demand for coal was quickly replaced by oil, due to oil's high energy content.
In our times, oil provides about 40% of the energy used and natural gas about 30%, on average. Fusion and hydrogen are the new energy technologies of the future. The Middle East oil crisis of the early 1970s initiated great interest in the efficient use and conservation of energy, as well as in the increased use of natural energy resources. Largely because of the crisis, the developed world as a whole became more energy conscious, and between 1973 and 1985 managed to decrease the amount of energy used to produce a fixed unit of product by about one fifth [170].

Presently, fossil fuels (coal, oil, natural gas) offer almost 95% of the total commercial energy in the world, whereas renewable or sustainable energy resources (solar energy, biomass, hydroelectric) provide about 2.5%. The remaining 2.5% is nuclear energy. Obviously, as fossil fuel reserves are consumed, a shift toward either nuclear energy or renewable resources will be necessary.

Energy consumption is not equally distributed around the world. The developed countries, which represent only 20% of the world's population, consume about 80% of the natural gas, 65% of the oil and 50% of the coal produced each year [170]. Over the last two decades the per capita energy consumption in the OECD (Organization for Economic Cooperation and Development) member countries (which are the most developed ones) has remained constant or has increased slightly [50]. The economies of these countries have shifted toward more service-based activities, with energy-intensive industries moving to less developed countries, where energy consumption is now rising more rapidly. Population growth during this period has actually led to only a slight increase in total energy consumption.

The fossil fuel resources are not equitably distributed over the world, and energy reserves (i.e., the resources that can be profitably extracted using available technology) represent only a small portion of the total energy resources. On the other hand, not all the material available as a reserve is actually available for use in industry or in the home; sometimes a considerable amount of energy has to be expended to obtain energy materials (e.g., the energy spent for coal excavation, processing and transportation).

Nuclear energy was once believed to be the solution, and the estimates were that by the year 2000 almost all of the world's electricity would be produced by nuclear plants. These estimates were not verified, since after the 1970s the construction of new reactor plants essentially ceased, due to safety or fail-safe requirements (which made their implementation economically impossible) and to the implied political problems. Although there are indications that nuclear energy plants may reenter the design stages in the near future, this will not widely be the case until technology ensures that they will be truly safe for the environment and the public.

The most serious long-term economic and environmental problem posed to the world seems to be the high consumption rate of natural resources. As the quantities of these resources become smaller and smaller, their costs will increase, making products that use them much more costly, and nations will fight to maintain access to them. The world's resources are distinguished into:

Nonrenewable (or exhaustible) resources (minerals, fossil fuels, etc.)
Renewable resources (wind, solar energy, biomass)
Actually, we are reaching the point of exhaustion for some of the nonrenewable resources, and so there is an urgent need for enhanced recycling and more efficient use of them, in order to prolong their availability in the future. The field of natural resource depletion has received, and is continually receiving, increasing attention. The Earth contains an enormous number of minerals, but they are not all easily recoverable with available technologies. Proven reserves are the resources which have been fully mapped and can be recovered at current prices with current technologies. Known resources are the resources that have been located but are not fully mapped; they will be recoverable in the future. Recoverable resources are accessible with available technology, but will not be economically feasible to extract in the near future. Nonrecoverable resources are the resources which are so remote or diffuse that they are never likely to be technologically accessible. Only 0.01% of the mineral resources in the upper 1 km of the Earth's crust are economically recoverable.
7.5 Three Major Problems of the Globe Caused by Human Activity

Three other major problems of the world, of varying severity from place to place, are the following [387]:

Deforestation
Desertification
Decreasing biodiversity
Deforestation is the process of clearing forest lands in order to make space for agricultural or industrial/urban development, or for harvesting timber. The consequence of deforestation that has received the most attention is the increase of the quantity of carbon dioxide in the atmosphere, which contributes to the problem of climate change. This is because trees and other vegetation remove carbon dioxide from the atmosphere and return oxygen. Therefore, any substantial reduction of the globe's forest-covered surface will result in an increased concentration of carbon dioxide in the atmosphere [387, 457]. Of course, harvesting timber does not imply a permanent reduction in forest land, because timber can be regrown. However, timbering operations can be performed in a way that makes the land less conducive to forest growth, so that reforestation is delayed for very long times. Thus, proper reforestation programs are needed in order to reduce the possibility of further enrichment of the air by carbon dioxide. But as Merland [356] has indicated, reforestation by itself cannot solve the problem completely, since this would require doubling the net annual production of the globe's forests [290]. It is noted that, according to the Nature Conservancy, "every year 20 million hectares of rainforest are cut down, releasing millions of tons of carbon emissions in the atmosphere" (The Nature Conservancy, wiki.nus.sg/display/CC/Human+Factors).

Desertification is the degradation of land (especially once-arable land) which is caused by soil erosion, salinization from irrigation, and other processes. As Pillsbury
states [425], the technology for solving this problem, i.e., for allowing continual land irrigation without causing salinization, is available today, but is not yet being widely applied. According to a United Nations report, about 60% of the agricultural land outside the humid regions of the Earth is experiencing desertification to some extent [101]. The world loses over 25 billion tons of topsoil each year, which is approximately the quantity that covers Australia's wheatlands [330]. However, these estimates of desertification should be treated with great caution, because of the lack of a unique definition of desertification and the unavailability of accurate methods for assessing land degradation processes.

Decreasing biodiversity is probably the worst problem facing the world: not energy/resource depletion, economic collapse, wars and the like, but the loss of genetic and species diversity through the destruction of natural habitats [356]. The total number of species of organisms on the globe is not known; estimates of this number lie between 4 and 30 million, of which only about 1.4 million species have been scientifically described. A direct consequence of the fast increase of the human population [290] is the tremendous decrease of biological diversity. According to Wilson [621], biodiversity is now believed to be at its lowest point since the end of the Mesozoic era (65 million years ago), and this decrease is still continuing. The world is losing about 150 species per day because of urbanization, deforestation, pollution, application of pesticides, and other human activities [622]. Despite recent efforts to establish rules for preventing all this, by developing national reserves and parks, by placing bounds on hunting, etc., the results are not very encouraging. Biodiversity can be regarded as an indicator of nature's health, because it is essential for the resilience of ecosystems. Almost every activity of humans (trade, agricultural operations, regional development, and so on) affects biodiversity, and so there is a strong need to develop and apply policies for preserving biodiversity by assuring that no species are brought to the brink of extinction. One way towards this goal is to regard nature in an inclusive way, i.e., to consider nature and civilization/culture as a single whole and not merely as the two sides of a coin. The loss of diversity due to the clearing of large portions of the rain forests alone (the Earth areas with the highest biodiversity) is estimated by Wilson [622] to be 4,000–6,000 species per year, which is about 10,000 times the natural rate of extinction that takes place in the absence of human interference.
7.6 Environmental Impact: Classification by Human Activity Type

The term environmental impact (EI) is used to collectively denote any change of environmental conditions, or generation of new environmental conditions (beneficial or adverse), due to a human action or set of actions under consideration. In particular, according to EIONET (European Environment Information and Observation Network) of the EEA (European Environment Agency), EI involves the following [241].
EI of Energy The production, transportation and consumption of energy (in any form) has a visible, substantial environmental impact that includes air, water and thermal pollution, and solid waste disposal. Urban air pollution is mainly caused by the emission of air pollutants from fossil fuel combustion. Other causes of energy-based EI involve petroleum handling operations (which spill oil on the earth or in the water), coal mining and other mining operations (which pollute the waters with various mineral materials that can produce an acid environment), and solid wastes.

EI of Agriculture and Aquaculture Agricultural operations significantly degrade water quality via stream sedimentation from erosion and via runoff with increased concentrations of nutrients, pesticides, and salts. Improper use of pesticides destroys natural predators, kills local wildlife and contaminates human water supplies. Similarly, improper use of fertilizers changes the vegetation types and the fish types living in nearby rivers and waterways. Fish farming pollutes the waters with nutrients, hydrogen sulphide and methane, which have dangerous effects on farmed fish and other water life.

EI of Fishing Among the negative effects of fishing on the environment, we mention the damage to wild fish, seals and shellfish that may be caused by effluent and waste from fish farms. Many fishing techniques, such as drift nets, kill large numbers of birds, whales and seals, and catch millions of non-target fish. Other processes destroying marine life are illegal dynamite and cyanide fishing.

EI of Forestry Cutting timber at higher rates than the forest can regenerate is a serious problem of the world. Forest is a natural resource that underpins the world timber trade and provides the environment for wild forest-based livelihoods. Deforestation increases soil erosion and downstream flooding, and results in the undesired loss of species and genetic resources.

EI of Industry Industries produce commodities that are sold for profit. As already mentioned in other places of this book, industrial activity also produces, together with the desired commodities, undesired wastes (solid, liquid and gaseous) that seriously affect the quality of the environment (see Section 7.3).

EI of Transport This includes air pollution, noise, displacement of people and businesses, disruption of wildlife on earth, and overall growth-inducing effects.

EI of Recreation and Tourism The damage to the environment due to recreation and tourism includes the damage to aquatic ecosystems (e.g., caused by hotel accommodations, sewage disposal works, roads, car parks, coastlines, or by increased angling, swimming, water skiing, use of motor boats in the water body, etc.). It also includes the despoiling of coastlines (by construction of tourist facilities), the destruction of historic buildings (to make room for touristic facilities), and the loss of agricultural land (to make room for airport facilities), etc.

EI of Trade This may be direct (such as the trade of endangered species, the trade of natural resources, or the trade of local natural products, e.g., tropical timber), or indirect (such as deforestation, loss of habitats, pollution due to mining or to energy use or to oil spilling, global warming, and so on).
EI of Households This includes domestic heating emissions (hot air, CO2, CO, water vapour, oxides of nitrogen and sulphur, etc.), domestic sewage (human bodily discharges, water from kitchens and laundries, etc.), and the dumping of bulky wastes (e.g., old washing machines, refrigerators, cars, etc.), usually near the countryside.

Water Endangering This can be caused by various means, e.g., farm pollution from animal wastes, and liquors from green-leaf cattle food to which molasses has been added to enhance fermentation and preservation. These liquors (known as silage liquors) are highly polluting and can cause seasonal fish deaths in small streams.

Socioeconomic Impact of Biotechnology Biotechnology typically refers to the industrial use of microorganisms (sometimes genetically altered) to carry out chemical processing, such as hormone or enzyme production for medical and other purposes. Biotechnology helps to increase farm production and the efficiency of food processing, to lower the respective costs, and to enhance food quality and safety.
Chapter 8
Human-Minding Automation
With automation, jobs are physically easier, but the worker now takes home worries instead of an aching back.
Homer Bigart

Action will remove the doubt that theory cannot solve.
Tehyi Hsieh

No design works unless it embodies ideas that are held common by the people for whom the object is intended.
Adrian Forty
8.1 Introduction

As we saw in Section 1.5, humans and automation are living together and have to live together. The term human-minding automation is used to express the fact that humans are not regarded as components of automation systems in the same way as machines and computer programs. Humans and machines have to cooperate right from the beginning of the design phase, and continue to do so during the installation, operation, and maintenance phases. To assure that an automation system is human-minding, the following issues must be faced:

Avoid and correct all the drawbacks that automation poses to the human side.
Use appropriate human-friendly interfaces.
Use convenient decision aids.
Take care that the human has a dominant role.
Educate the human so that she/he understands how automation works and what it does. Automation is still foreign to many humans.
The main types of human-minding automation design are [478]:

Control Key Level Type (methodologies are needed for analyzing human–computer interaction in tasks such as word processing, etc.) [71, 396]
Interface Level Type (methodologies are needed for studying the human–machine interaction and the functionality that supports the human's/user's intentions) [397]
Task Level Type (the nature of the system that operates in a complex dynamic environment determines the form and the constraints of the human tasks) [445, 476]
Organizational Level Type (broader aspects, above the task level, have to be considered here, such as selection and training of personnel against investments in the human–machine interface) [472, 473]

In all these types, the human role should be analyzed, and procedures that minimize or avoid human errors should be developed. Furthermore, in all cases methodologies should be applied which ensure job satisfaction for the human and eliminate her/his possible alienation caused by automation [470, 517, 518]. The success of human–automation symbiosis can be achieved only if 'human–human' interaction is supported, even though some of the participants are only "supervisors". Human-minding automation at the organizational level unavoidably involves the issues of the relevant group behaviors. The above show that human–automation cooperation is a difficult and multi-faceted process, in which traditional control models and individual-oriented user-centered design cannot capture all the issues of interest and their interactions.

This chapter discusses a number of fundamental concepts, issues and principles which are used (and must be used) when designing and building human-minding automation systems. To appreciate the differences between system-minding and human-minding automation, the system-minding design approach is first briefly outlined (Section 8.2). Then, the general issues of human-minding automation system design are described, followed by more details regarding the problem of human-minding interface design (Sections 8.3 and 8.4). Next, the human resource allocation problem is considered in conjunction with innovation and technology transfer, and the problem of integrating decision aiding and decision training is addressed. Finally, the safety operational standards of automation systems, which determine the safety requirements both for the human and the machine, are presented.
8.2 System-Minding Design Approach

In this approach the overall system has a purpose, which is incorporated into it by its designers. This is done by embedding a goal into the system components and requiring the humans (workers, etc.) to conform to the purpose of the system. Obviously, when the system speeds up, the humans must also speed up; when it slows down, they must slow down. The system, the production line, and the management define what the humans are to do and how quickly they must do it. All control is taken from them, and they are required to subordinate themselves as servants to the purpose of the system. In other words, the focus is on the goals of the system and the organization, and the system designer does not take the humans (users) into account before designing the system. This implies that the humans have to remember too much information.
Fig. 8.1 System-minding (Tayloristic) design approach: Stage 1, analysis of the current system; Stage 2, specification and design of the new system; Stage 3, building and testing of the new system; Stage 4, delivery and use of the system
This approach, which, as we saw in Section 1.5, is also known as the Tayloristic design approach [471, 559], involves the basic stages shown in Fig. 8.1. The systems designed in this way may be confusing to new users and cannot tolerate even small human errors. Typically, systems of this type do not provide the functions desired by the humans, and force their users to carry out their tasks in undesired and uncomfortable ways. In a Tayloristic automation system, the same also applies to human relations. Through rewards and penalties, human employees can be brought to conform to a specified performance. But their underlying human goals have not been changed, and the conformity is only superficial. Many problems may arise due to a lack of commitment, and the imposed control will continually need to be tightened [471, 604].
8.3 Human-Minding Automation System Design Approach

The issues discussed in the previous section, and many others (e.g., the continuous upward trend in nonfatal occupational injuries in automation systems, where all simpler tasks are automated and the more difficult ones are left for the humans to perform), suggest that automation systems must be designed with human purposes and human limitations in mind, in order for them to be productive, error-free and safe, and for their products (goods and services) to be of high quality.

The traditional way to incorporate the human in automation systems is to consider her/him simply as an information processor with given sensory and motor capabilities. In this case the system can function efficiently without errors if the human operators are provided with sufficient information about the system and with proper means to control it. The required information is obtained by the humans via displays, and the control ability is gained via suitable controls (keyboards, joysticks, etc.). Human feedback is achieved through tactile or force sensory interaction with the controls. The above discussion shows that the conventional approach to human–system interaction in automation relies on displays and controls for a two-way information exchange (see also Chapters 4 and 5). Diagrammatically, the conventional type of human–machine interaction in automated systems is shown in Fig. 8.2.

The modern approach to including the human in automation systems is to regard the human operator as a supervisory controller (Fig. 8.3), as explained in Chapter 6, following one of the available architectures (Rasmussen, Sheridan, etc.).
Fig. 8.2 Structure of traditional human–machine interaction

Fig. 8.3 Structure of modern system with the human in the role of supervisory controller
Although in this approach the human operator has an enhanced role, displays and controls still offer the basic means for the human–system interaction. Here, the human factors and ergonomics issues discussed in Chapters 2 and 3 must be taken into account in the design of interfaces, in order for the system to be human-friendly and successful. As explained at several places in this book, the three basic problems that have to be addressed are:

User needs analysis
Task analysis and design
Function allocation to humans and to automated components
Obviously, the determination of the user needs is a prerequisite for both proper task analysis/design and proper function allocation. A number of issues concerning the last two problems, in general, have been discussed in Section 6.3. In the following
we shall focus on the user needs and provide some additional information on the other two problems, considering only the selection and design of human–machine interfaces.
8.4 Human-Minding Interface Design in Automation Systems

8.4.1 User-Needs Analysis

The principal steps of human-minding (or anthropocentric, or human-friendly) interface design are the following [367]:

1. Collect information.
2. Design the new interface.
3. Evaluate the new design.
4. Develop a prototype.
5. Test the prototype.
6. Deliver the prototype to users.
7. Get feedback from users.
The goal of the 'collect information' step is to acquire information about the users, their needs, their cognitive and mental models, the available interface, the requirements of the design, the demands of the environment, and the like. Specific issues that must be examined include: the nature of the target group (ages, proportion of males and females) and its cultural characteristics, the role of the user, the major activities of the task (job), the main responsibilities of the user, the user's schedules, the reporting scheme for the user, the expected quality of output from the user, and the turnover rate of the user. Other issues about the user are: her/his experience and skills on the specific job, the training scheme that must be followed, the motivating or demotivating factors, the learning/interaction style of the user, and her/his physical abilities.

Information that must be collected about the task activities includes: what the inputs and outputs of the task are, what transformation (input–output transfer) is involved, what the decision points are, what planning is required, and what equipment is used. Also: the dependency and concurrency relationships between the tasks, the user's difficulties in carrying out every task, the performance criteria (speed, accuracy, quality, utility, etc.), the sequence and frequency of actions, the freedom of the user to specify priorities and procedures, and, finally, the task requirements with respect to the physical, cognitive, perceptual, and health conditions and abilities of the user.
8.4.2 Task Analysis

The methods available for task analysis are [367]:

Hierarchical Task Analysis Method Here the tasks are the elements by which the goals are achieved in the presence of constraints (e.g., material, resource availability, etc.).

Activity Sampling Method Here the type and frequency of the activities which make up every task must be determined. Techniques for activity scheduling and for analyzing activity samples are needed.

Decision–Action Flow Chart Method The flow chart (or decision diagram) provides a sequence of questions (representing decisions) and possible yes/no replies (representing the actions which have to be taken). This is the most widely used technique (see the sketch after this list).

Walk-Through/Talk-Through Method Here, experienced operators walk and talk the analyst through the observable task elements of a system in real time (without using simulation) [352, 354].

Coding Consistency Method Coding consistency surveys are utilized to see if there is consistency between the coding schemes used and the respective meanings. The method determines if and where extra coding is required.

Link Analysis Method Here, the links (relationships) that exist between individual operators and certain parts of the system are identified. This technique is especially useful in cases where the physical layout of the machinery is important for optimizing the human–system interaction.

Operator Modification Surveys Method In this method the operator checks the adequacy of the interface design through surveys conducted on similar working systems.

Simulation Method Here, suitable simulators are used to replicate/emulate and observe the performance of the overall system (including the operator) in an environment as close as possible to the real one. It is noted that there must always be a compromise between simulation cost and simulation accuracy/fidelity. A variety of simulators and simulation techniques are currently available.
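As a minimal illustration of the decision–action flow chart method, the following Python sketch encodes a tiny chart as nested yes/no questions and walks it to an action. The questions and actions are invented placeholders, not taken from this chapter.

```python
# Minimal encoding of a decision-action flow chart: each node is either a
# yes/no question with two branches, or a terminal action. Questions and
# actions below are invented placeholders for illustration.

from dataclasses import dataclass
from typing import Union

@dataclass
class Action:
    name: str

@dataclass
class Decision:
    question: str
    yes: Union["Decision", Action]
    no: Union["Decision", Action]

def walk(node, answers):
    """Follow the chart using a dict mapping question -> bool answer."""
    while isinstance(node, Decision):
        node = node.yes if answers[node.question] else node.no
    return node.name

chart = Decision(
    "Is an alarm active?",
    yes=Decision("Is the fault isolated?",
                 yes=Action("resume operation"),
                 no=Action("initiate shutdown")),
    no=Action("continue monitoring"),
)

print(walk(chart, {"Is an alarm active?": True,
                   "Is the fault isolated?": False}))
# -> initiate shutdown
```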
8.4.3 Situation Analysis and Function Allocation

Besides the user and task variables that might influence the performance of the system, the external environment of the system can also affect the effectiveness of the human–machine interface. Situation analysis is aided by appropriate checklists of the situations typically faced, for which the system designer has to obtain answers and attempt to accommodate them in the interface design. Items that must be included in these checklists involve the console, the panel, the displays, and the controls (for details see Chapters 4 and 5).
The decision on function allocation follows the stage of collecting comprehensive information about the users and the tasks/activities (as described in Sections 8.4.1 and 8.4.2), and divides the tasks to be performed between the humans and the machines. This division depends on the extent (degree) of automation which is desired or unavoidably imposed. Today, the decision is no longer whether to automate or not, but to what extent and how to automate. This problem has been studied in Section 6.3, where the set of guidelines suggested by Sheridan [520] was given. A good general rule is to allocate to humans all those functions that cannot be specified in precise engineering terms, and all the remaining functions (those that can be specified) to machines. Several points that have been revealed by research on function (task) allocation, and that practitioners are advised to take into account, are the following:

Function allocation cannot be determined by a formula or example (rules applied in one situation may not apply in others).
Function allocation is not a one-shot decision (it depends on the activities that form the tasks, on conflicts, etc.).
Function allocation can be systematized through proper sequential steps.
Both humans and machines may be good or bad for certain tasks.
Function allocation can be aided via analogies.
Function allocation should take into account the nature of each task (cognitive, perceptual, physical, etc.).
Functions involving high speed and volume, large forces and weights, or hazards must always be assigned to machines.
Function allocation must consider the best available technology, and should be based on sound economic analysis.
The criteria used for solving the function allocation problem include, but are not limited to, the following:

Criteria based on specific performance indices (time to completion, etc.) [368, 415]
Criteria based on comparison of the capabilities and limitations of humans [28, 33, 432]
Criteria based on economic parameters [366, 368]
Criteria based on human safety [257, 270]
In general, functions that are well-procedurized, allow an algorithmic style, and need little or no creative action are suitable for automated operation (machines), whereas functions that need high-level cognitive skills (planning, decision making, dexterity, exception handling, etc.) are best suited to humans. In particular, activities that must be performed in narrow and confined spaces, or that require specialized manipulation skills, or for which the available equipment is of poor reliability or no suitable equipment and technology is available, should be allocated to humans. For a generic methodology, in the form of decision-making flow charts, for the systematic allocation of functions between humans and machines, the reader is referred to the work of Mital and co-workers [369].
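The general allocation rule just stated can be illustrated with a toy rule-based allocator. The task attributes, the rule order and the example tasks below are invented for illustration; a real allocation would follow a systematic methodology such as that of Mital and co-workers.

```python
# Toy rule-based function allocator reflecting the general rule above:
# hazardous or high-speed/high-force functions and well-procedurized
# functions go to machines; functions needing creativity, work in
# confined spaces, or lacking reliable equipment go to humans.
# All attributes and rules are illustrative, not from the text.

def allocate(task):
    if task.get("hazardous") or task.get("high_speed_or_force"):
        return "machine"          # mandatory machine assignment
    if task.get("needs_creativity") or task.get("confined_space") \
            or not task.get("equipment_reliable", True):
        return "human"
    if task.get("well_procedurized"):
        return "machine"          # algorithmic style suits automation
    return "human"                # default: keep the human in control

tasks = {
    "spot welding":       {"hazardous": True, "well_procedurized": True},
    "exception handling": {"needs_creativity": True},
    "parts counting":     {"well_procedurized": True},
}
for name, attrs in tasks.items():
    print(f"{name:18s} -> {allocate(attrs)}")
```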
The basic difficulties of human-minding design are the following:

The system usability is limited by the usability of its goals. If the user provides inappropriate usability goals, the system will be unusable.
The design approach is good for designing new systems, not for the redesign of existing systems.
The approach cannot take qualitative data into account, because the assessment of usability can only be done accurately on the basis of quantitative data.

Despite these limitations, the human-minding design is worthwhile to apply, because it proactively includes the human user in the system design and guarantees the usability of the resulting products.
8.5 The Human Resource Problem in Automation

Our aim here is to address the problem of human resources in automation, which includes manpower, employees, job design, equipment design, training, aiding, and safety. The optimal allocation of human resources has to maximize the number of tasks performed (completed) and minimize the human errors made in performing the required tasks. Three important issues of the human resource problem are the following:

Allocation of system development resources
Investment in human resources
Innovation and technology transfer
8.5.1 Allocation of System Development Resources

The allocation of system development resources should be done so as to minimize the likelihood of human errors, especially of human errors that may contribute to major failures or accidents. As we saw in Section 6.3, this can be achieved by studying and taking into consideration the proper human factors in each case, by analyzing the possible improvements that might be obtained by proper personnel selection and training, by properly designing the job for each individual or human group, and by using procedures borrowed from decision support systems to aid job behavior. An allocation model for system development resources, along with a model for the training/job-aiding tradeoff, were proposed by Rouse [478]. These models are depicted in Figs. 8.4 and 8.5.

The model of Fig. 8.4 illustrates the resource allocation to the various factors, and was used to carry out a sensitivity analysis of the system performance with respect to variations in the times to complete tasks, and in the times to deliver resources for training, job design and job performance aiding. This model helps to determine the primary tradeoffs among investments (delivery times), returns on investment (performance improvements), and receptivity to investments (aptitudes of personnel).
Fig. 8.4 Rouse's system development and resource allocation model

Fig. 8.5 Rouse's training/job-aiding tradeoff model
The best support strategy (i.e., the best sequencing and duration of combinations of aiding, explaining, and tutoring) was determined for each mix of the above parameters. It was found that the best support strategy was primarily influenced by task frequencies and expected performance improvements. Delivery time was essential only for low task frequencies; for high frequencies the return on investment was sufficient to make the initial investment inconsequential. It was also observed that the support strategy was insensitive to personnel aptitude.
The model of Fig. 8.5 illustrates how training (tutoring) and aiding (explaining) can be combined to improve performance. A more detailed diagram of this model was provided by Rouse in 1987 [475]. These two models provide a working way to view the design aspects and tradeoffs in a broader sense than the traditional one, and illustrate that sensitivity analysis can give useful insights even in the absence of most of the relevant data.
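To illustrate the kind of sensitivity analysis such models enable, the following is a purely hypothetical toy model (not Rouse's actual formulation): the performance gain is taken as a saturating function of the investments in training, job design and aiding, and each investment is perturbed one at a time. All coefficients are invented for illustration.

```python
# Purely illustrative toy model (not Rouse's formulation): expected
# performance gain as a saturating function of three investments,
# followed by a one-at-a-time (+10%) sensitivity sweep.

import math

def performance(train, design, aid, task_freq=10.0):
    """Saturating returns on each investment, weighted by task frequency.
    All weights and scales are invented for illustration."""
    gain = (0.5 * (1 - math.exp(-train / 20.0))
            + 0.3 * (1 - math.exp(-design / 30.0))
            + 0.2 * (1 - math.exp(-aid / 10.0)))
    return task_freq * gain

base = dict(train=15.0, design=15.0, aid=15.0)
p0 = performance(**base)
for factor in base:                      # perturb one investment at a time
    bumped = dict(base, **{factor: base[factor] * 1.1})
    dp = performance(**bumped) - p0
    print(f"+10% {factor:6s}: performance change {dp:+.4f}")
```

Even such a crude sketch shows the point made above: the ranking of the investments can be read off from the sensitivities without precise data on any of them.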
8.5.2 Investment in Human Resources

Human performance is shown in Figs. 8.4 and 8.5 as a block (goal) in itself. The basic goal of optimal human resource allocation is to maximize the number of tasks completed and minimize the human errors. However, this does not reflect well the context within which human performance occurs. One needs a model (or models) for the effect of human resource investments on economic profit. A model of this type should be compatible with the legal, social and ethical reasons for investing in human resources. Inputs to such a model are:

Manpower, personnel and training (MPT)
Human factors engineering (HFE)
Best available technology (BAT)
Human values (e.g., job satisfaction, wages, culture, ethical codes, etc.)
Environmental values
Economic growth
In practice, some tradeoff among them (i.e., suboptimality) is always accepted, which nevertheless assures an acceptable level of competitiveness. A human resource (investment) model which does not include human and environmental values is shown in Fig. 8.6 [478]. This model does not show a direct relationship between technology investments and job satisfaction. It must be remarked here, however, that recent investigations of a repertory of past technology have revealed that it is the management practices associated with a technology, rather than the technology itself, which are usually the source of workers' unrest [72]. For the inclusion of social and humanistic issues in such a model the reader is referred to the relevant literature [217, 271, 318, 330, 342, 471, 613].
8.5.3 Innovation and Technology Transfer

Clearly, technology innovation applied to any given automated system influences its performance capabilities and/or the overall economic return. The conceptualization of human resource issues in automated system design, as discussed above, can be regarded as a convenient basis for technological change and human-minding automation in a broader context.
Fig. 8.6 A pure technological–economic model for human resources investment
However, because the results of research efforts cannot be readily and quickly transferred to practice, technology transfer is more an organizational problem than a technical one [474]. By thinking in terms of total system behavior and overall economic return, it is easy to communicate and work with actual practitioners. However, there is a problem of communication between humans familiar with different scientific disciplines, because each discipline and subdiscipline regards the same world using different kinds of models [472]. As an example of this, we mention the difficulty that behavioral scientists working in manpower/personnel training and in human factors engineering frequently have in communicating about common problems, such as the selection of training procedures. In general, it has been verified that it is better to work toward long-term improvements instead of short-term improvements. The need for constant technological change and for quality/economic competitiveness are topics of permanent interest for engineers, managers, and human factors specialists. In many situations a variety of organizational entities and potential conflicts have to be addressed.
8.6 Integrating Decision Aiding and Decision Training in Human-Minding Automation

In the past, training and aiding have been considered quite distinct and separate aspects. However, as shown in Fig. 8.4, they can be combined so as to enhance the performance of the overall automated system. Our purpose here is to provide a number of conceptual issues about the integration of decision aiding and decision training [632].
First of all we note the following:

Human decision making in the laboratory is different than it is in society. Therefore it should be considered and studied in its naturalistic context.
The situation in which a specific problem is posed, and the underlying task in which a decision is embodied, affect in a critical way the framing of the human approach followed for its solution.
Action and decision are highly interdependent, to the extent that decisions can only be evaluated after the actions are taken.
Humans do not make decisions in an analytic way, or even in a conscious manner. They actually use their knowledge and expertise collectively to "decide what to do" rather than to "figure out how to decide".

A basic question here is how decision-making skill evolves as individuals move from novice to expert. Several investigations of novice–expert differences and decision skill learning have led to the conclusion that there exist clear differences between novice and expert decision makers, as shown below:
8.6.1 Novice

Domain knowledge (facts, basic concepts)
Problem-solving features (sequential subgoals, local focus, analytical problem solving, weak general techniques)

8.6.2 Expert

Domain knowledge (causal relationships, abstract/general concepts, interrelationships between concepts)
Problem-solving features (goal decomposition, global focus, case-based/intuitive problem solving, powerful domain-dependent techniques)

As a human is trained and gains expertise, she/he constructs a mental model of the domain that captures more and more of its peculiarities (increased quantity and better quality of domain knowledge, transition from local focus to global focus, replacement of weak/shallow methods by stronger/deeper ones, etc.). During the initial period of decision support development, little attention was given to issues such as the expertise level of the decision makers versus the expertise level of the aid, or to the differences between training benefits and aiding benefits. Now, such factors have been identified and their importance has been documented. Some principles for how and where to apply decision support aids are the following [632]:

Decision aiding is maximally effective if the problem representation in the decision aid reflects the problem representation and cognitive aspects of the decision maker using the aid.
Decision-making training has to take into account the current mental model and knowledge structure of the trained human, and also the specific details of the domain or system which is the subject of training. The training process must integrate behavioral and cognitive aspects.
Decision-making skill involves conceptual skill/knowledge, procedural skill/knowledge, and relational skill/knowledge, which integrates the conceptual and procedural kinds. Skill evolution involves a progression from conceptual to procedural and relational skill/knowledge.
Training is distinguished into incremental (or within-level) training and representational training. Incremental training aims at providing small but visible improvements in decision behavior using additions or changes of selected items of knowledge. Representational training aims to lead the trainee to make major revisions in her/his problem representation model.

It is noted that while a decision support aid must clearly reflect the various levels of expertise of its users, the design of its training and aiding components must be based on a common cognitive analysis of the user. A pictorial representation of a decision support framework that integrates training and aiding is shown in Fig. 8.7 [632]. In this diagram, the basic training which gives the novice (or neophyte, from the Greek words "neos" = new and "phoeto" (φοιτώ) = learn) sufficient knowledge about the domain to begin applying general-purpose decision strategies is not included. This is because the basic training is not regarded as any kind of support.

The left side of the diagram depicts various support system architectures, each adopting a representational structure of a specific level of user expertise. A cognitive modeling and analysis process is adopted at each one of these expertise levels, in order to capture the representational structure and decision strategies of individuals at that level. This model and cognitive analysis are employed to determine the training and aiding requirements at that level, which are then used, together with the analysis, to specify a support system architecture that satisfies them. The support system finally gives two kinds of aiding to its users:

Performance Aiding This helps the user of a given level to improve her/his performance in actual decision situations.
Incremental Training This helps the user at that level to incrementally improve her/his knowledge and skill, and to create and apply better decision policies. Incremental training can be applied to all levels of skill and knowledge.

The details of a design methodology for decision aiding along the lines shown above were given by Ryder and colleagues [483]. This methodology involves the following phases:

Cognitive decomposition and modeling
Requirements definition
Functional design
Architecture specification
Detailed design and implementation
Fig. 8.7 Ryder–Zachary framework of the decision maker from novice to expert
In particular, the functional design phase uses the identified requirements to define specific aiding and training functions for (i) decision aiding, (ii) incremental decision training, and (iii) representational decision training.
8.7 International Safety Standards for Automation Systems

Safe operation of automation equipment and systems is a must, both for human safety and for the achievement of the desired product quality (www.isa.org/safety). The well-established international automation safety standard IEC/EN 61508, focused on the safety of electrical, electronic and programmable electronic systems, has been in use since 2004 [451]. IEC/EN 61508 has been developed to cover the following automation areas:
Machines and hardware (IEC/EN 62061)
Physical/chemical processes (IEC/EN 61511)
Nuclear processes and power systems (IEC/EN 61513)
This standard is adopted to assure a smooth and safe operation of automated industry, both as a customer of materials and as a user of electrical, electronic and medical equipment. For process technology, IEC/EN 61511 represents the current state of the art. For the construction of hardware and machinery the international safety standard is IEC/EN 62061, which covers the previous standard DIN V VDE 0801 (fundamentals for computers in systems with safety tasks). Today, the standard EN 954 also enjoys global popularity and is known as ISO 13849-1.

The risk of a continuous automated process depends on the process type, the materials and substances mixed or reacted, and the environmental conditions of the installation. To avoid/minimize the risk, all processes that are selected must be as safe as possible. Then, protection devices and protection measures (e.g., minimum safety distances, safety grids, inner walls of sufficient thickness, separation walls, fault detection and isolation, fault restoration, alarms, shutdown procedures, hardware and analytic redundancy, etc.) should be used. The risk level of an industrial installation is determined using the IEC/EN 61511 standard. The methods that must be used depend on the actual risk level. All materials used should comply with the requirements of IEC/EN 61508.

Both the IEC/EN 61511 and IEC/EN 61508 standards distinguish four safety levels, SIL1, SIL2, SIL3 and SIL4 (SIL = Safety Integrity Level). SIL1 represents the minimum safety level, and SIL4 stands for the maximum safety level. The higher the risk level, the more reliable the automation components and equipment used should be [627]. The risk is typically expressed by the failure rate $\lambda(t)$, defined as (see Chapter 11, Section 11.5):
$$\lambda(t) = \frac{\text{failures per unit time}}{\text{number of relevant system components}}$$
If $\lambda(t) = \lambda_0 = \text{constant}$, then the number of failures in the time interval $t$ is equal to $F(t) = \lambda_0 t$. Failures are distinguished into those which are dangerous (non-safe) and those which occur safely. Therefore, we have the following four failure rates:
$\lambda_{dd}$: rate of dangerous detected failures
$\lambda_{du}$: rate of dangerous undetected failures
$\lambda_{sd}$: rate of safe detected failures
$\lambda_{su}$: rate of safe undetected failures
Assuming an exponential failure probability distribution, and taking into account only the dangerous undetected (undetectable) failures, the probability of failure on demand (PFD) is given by:

$$\mathrm{PFD} = 1 - e^{-\lambda_{du} t}$$
For $\lambda t \ll 1$ and $\lambda_{du} = \lambda_{dd} = \lambda_0/2$, we get:

$$\mathrm{PFD} = \frac{1}{2}\lambda_0 t$$
The average PFD over a time interval $T_0$ is equal to:

$$\mathrm{PFD}_{\mathrm{avg}} = \frac{1}{T_0}\int_0^{T_0} F(t)\,dt$$
This definition is in agreement with IEC/EN 61508. If $\lambda t \ll 1$, then:

$$\mathrm{PFD}_{\mathrm{avg}} = \frac{1}{2}\lambda_d T_0$$
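A short numerical check of the above formulas is sketched below in Python, using an assumed (arbitrary) dangerous undetected failure rate. It also shows the linear growth of PFD_avg with the proof-test interval that is visible in Table 8.1.

```python
# Numerical check of the PFD formulas above with an assumed dangerous
# undetected failure rate. For lambda*t << 1 the average PFD grows
# linearly with the proof-test interval T_proof.

import math

HOURS_PER_YEAR = 8760.0
lam_du = 2.0e-8      # assumed dangerous undetected failures per hour

def pfd_exact(t_hours):
    """PFD = 1 - exp(-lambda_du * t)."""
    return 1.0 - math.exp(-lam_du * t_hours)

def pfd_avg(t_proof_hours):
    """PFD_avg = lambda_du * T0 / 2, valid for lambda*t << 1."""
    return 0.5 * lam_du * t_proof_hours

for years in (1, 2, 5):
    t = years * HOURS_PER_YEAR
    print(f"T_proof = {years} yr: PFD_exact = {pfd_exact(t):.2e}, "
          f"PFD_avg = {pfd_avg(t):.2e}")
# PFD_avg doubles from 1 to 2 years and is 5x at 5 years, matching the
# linear scaling of the entries in Table 8.1.
```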
Two fundamental indexes of automation system safety are the following:

SFF (Safe Failure Fraction) This index indicates the fraction of errors or failures that are controllable and safe (non-dangerous).
HFT (Hardware Fault Tolerance) This index provides an estimate of the hardware system's tolerance to faults.

To achieve operational "safety" of an automated industrial system, all available technical and managerial actions should be taken. We must look at the overall result; it makes no sense to look only at a particular isolated part or control loop of the system without reference to the other pieces of equipment. Here, the time interval $T_{proof}$ between the periodic checks of the system operation plays a dominant role. Tables 8.1 and 8.2 show how the safety data of an industrial automated system are related using PFD, SFF, HFT, SILs and $T_{proof}$ [627]. Clearly, the shorter the time interval between successive safety tests ($T_{proof}$), the higher the probability that the system works properly. Actually, the PFD values increase linearly with $T_{proof}$ (see Table 8.1). The safety testing cycle time $T_{proof}$ can be reduced or increased, according to the desired SIL value and the respective required values of PFD. Of course, this is based on the assumption that the PFD value of a module is zero at the time of its commissioning and at the end of each testing cycle.
Table 8.1 PFD_avg of various failure categories for T_proof = 1, 2 and 5 years

Failure category | T_proof = 1 year | T_proof = 2 years | T_proof = 5 years | SFF (%)
Fail low (L) = safe, fail high (H) = safe | 1.6 × 10⁻⁴ | 3.2 × 10⁻⁴ | 8.0 × 10⁻⁴ | >91
Fail low (L) = safe, fail high (H) = dangerous | 2.2 × 10⁻⁴ | 4.5 × 10⁻⁴ | 1.1 × 10⁻³ | >87
Fail low (L) = dangerous, fail high (H) = safe | 7.9 × 10⁻⁴ | 1.6 × 10⁻³ | 3.9 × 10⁻³ | >56
Fail low (L) = dangerous, fail high (H) = dangerous | 8.6 × 10⁻⁴ | 1.7 × 10⁻³ | 4.3 × 10⁻³ | >52
Table 8.2 Relation of SFF and HFT/SIL

Fraction of non-dangerous failures, SFF (%) | HFT = 0 | HFT = 1 | HFT = 2
<60 | SIL1 | SIL2 | SIL3
60–90 | SIL2 | SIL3 | SIL4
90–99 | SIL3 | SIL4 | SIL4
>99 | SIL3 | SIL4 | SIL4
Table 8.2 shows the maximum expected SIL as a result of HFT and SFF, according to IEC/EN 61508-2, for type A (non-complex) subsystems. HFT is the number of faults/errors that may appear in the system's hardware without failure of the operation. A module with zero HFT can create a dangerous situation even with a single fault. On the contrary, a system with a sufficiently high HFT can perform the desired functions even in the presence of faults and deviations. IEC/EN 61508 requires, for each region of SFF values, a minimum level of HFT. This relation is shown in Table 8.2. For example, if the SFF of an automation module lies between 60% and 90%, then in the one-out-of-one (1oo1) configuration, without redundancy (HFT = 0), this module can achieve SIL2. More information on the safety standards of automation systems and their use can be found on the Web [223, 627].
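Table 8.2 can be encoded directly as a lookup of the maximum achievable SIL from SFF and HFT for type A subsystems. A minimal sketch follows; the function name and the handling of the band boundaries are our own choices.

```python
# Lookup of the maximum achievable SIL from SFF and HFT, encoding
# Table 8.2 (IEC/EN 61508-2, type A subsystems).

SIL_TABLE = {                 # SFF band (%) -> (SIL at HFT 0, 1, 2)
    (0.0, 60.0):   (1, 2, 3),
    (60.0, 90.0):  (2, 3, 4),
    (90.0, 99.0):  (3, 4, 4),
    (99.0, 100.0): (3, 4, 4),
}

def max_sil(sff_percent, hft):
    """Return the maximum SIL for a given SFF (%) and HFT (0, 1 or 2)."""
    if hft not in (0, 1, 2):
        raise ValueError("HFT must be 0, 1 or 2")
    for (lo, hi), sils in SIL_TABLE.items():
        if lo <= sff_percent < hi or (hi == 100.0 and sff_percent >= 99.0):
            return sils[hft]
    raise ValueError("SFF out of range")

# Example from the text: SFF between 60% and 90%, no redundancy (HFT = 0)
print("max SIL:", max_sil(75.0, 0))   # -> 2
```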
8.8 Overlapping Circles Representation of Human-Minding Automation Systems

According to Harwood, human-minding (centered) issues of automation (HMA) systems (e.g., in Air Traffic Control) fall into three categories, namely:

- Domain Suitability (S)
- Technical Usability (U)
- User Acceptability (A)

A pictorial representation of their combination is shown in Fig. 8.8. Domain suitability includes the information content, the display modes, and the decision-aiding issues. Technical usability deals with the perceptual and physical aspects of the HMI and the anthropometric features of the workstation. User acceptability has to do with the suitability and ease of use of the means provided for the cognitive task requirements, as well as with job satisfaction issues. It is noted that a similar three-overlapping-circles representation has been given over the years for many other composite fields involving the blending and synergy of three other well-established fields. Here we mention the following four examples: intelligent
Fig. 8.8 The synergy of domain suitability (S), technical usability (U), and user acceptability (A) in human-minding automation (HMA) (Source: http://www.aviationsystemsdivision.arc.nasa.gov/publications/more/hf/harwood_01_93.pdf)
control, neuro-fuzzy control, sustainable development, and office automation. The three constituents (pillars) in each case are as follows:

- Intelligent control (IC) [488, 490, 599, 607]: artificial intelligence (AI), operational research (OR), and control (C)
- Neuro-fuzzy control (NFC) [585]: neural networks (NN), fuzzy logic (FL), and control (C)
- Sustainable development (Section 9.9) [225]: economic growth, social progress, and environment protection
- Office automation (see Fig. 10.1): information processing, communications, and office technology
The way of synergy and interaction of the constituent fields differs in each case and depends on the nature of these fields. For details the reader is referred to the related publications.
Chapter 9
Nature-Minding Industrial Activity and Automation
Nature is ruled over only with obedience to her.
Francis Bacon

The preservation of wilderness areas is but one aspect of a series of conflicts, compromises, and accommodations involving use and preservation.
Lawrence Rakestraw

I would feel more optimistic about a bright future for man if he spent less time proving that he can outwit Nature and more time tasting her sweetness and respecting her seniority.
William Westmoreland
9.1 Introduction

Nature-minding (or clean, or green) industrial activity and automation is a necessity for human survival in both the short and the long term. Any human activity, and especially industrial activity, produces waste (which may be hazardous to human health) and consumes energy and other natural resources. Therefore, to protect the nature in which we live, waste generation and resource depletion must be considerably reduced through proper planning, pollution control policies, and resource-use techniques. The earth's resources (soils, waters, forests, oil, minerals, etc.) are not inexhaustible, and so there is a strong need for conservation and sustainability. Our economic and societal development should be sustainable. However, there is increasing evidence that many present global trends in the use of natural resources, or of sinks for wastes, are not sustainable. This is not a new problem, but its severity has certainly increased in modern life. Throughout history, man has damaged or destroyed natural resources. Different areas of the planet may have different problems; for example, a certain area may have sustainability difficulties with renewable resources (e.g., forests), whereas other areas may face waste, pollution or greenhouse problems. All of these problems are contained in the global problem of 'unsustainability'.
In this chapter we provide an exposition of the main issues related to nature-minding industrial and technological activity. Section 9.2 is concerned with life-cycle assessment (the evaluation of industrial effects on nature) and the more general environmental impact assessment (i.e., the evaluation of the effects on nature of any technological or human development activity), and in Section 9.3 we discuss several aspects of design for the environment (design for minimum environmental impact). We continue with the methodology of pollution control planning (Section 9.4), which lies at the center of nature-minding industrial automation. Next, the problems of natural resource/energy conservation and residuals management are considered (Section 9.5). Then we discuss the control of fugitive emissions, followed by a summary of the public (municipal) pollution control programs (Section 9.6). Next, a discussion of the environmental control regulations is provided, followed by an exposition of the sustainability concept, including the key strategic sustainability functions and principles, and the environmental sustainability index initiative. Section 9.10 gives a set of eight practical rules for nature-minding business and automation operation, and Section 9.11 deals with the basic nature-minding economic issues. The chapter closes with a description and a list of nature-minding (green) organizations.

The material of this chapter is largely based on the comprehensive textbook of Bishop [50], and on information and knowledge acquired from the Web. Particular topics considered in the chapter are fully treated in the books of Eckenfelder and Dasgusta [128], Freeman [160], Curran [103], Pfeffer [421], Haas and Vamos [189], Reed et al. [452], Metcalf and Eddy [357], Shen et al. [514], Daly and Cobb [107], and Bartelmus [38]. Research in the area of nature-minding human activity is continually ongoing, and a large number of papers, reports and guidelines are produced each year. A central position in these efforts is held by attempts to model and analyze environmental problems with mathematical techniques implementable on computers [78, 169, 246, 333, 334, 405]. Today, many websites exist where rich information can be found about all aspects of environment protection, sustainability, and ecosystem preservation (e.g., the European Environment Agency site www.eea.europa.eu; EIONET's site www.eionet.europa.eu; the US EPA's site www.epa.gov, etc.).
9.2 Life-Cycle and Environmental Impact Assessments

9.2.1 Life-Cycle Assessment

Life-cycle assessment (LCA) is the investigation and evaluation of the environmental effects produced by any specific technological activity, from the original acquisition of raw materials from the earth until the time at which all wastes and residuals are returned to the earth [50, 160]. Clearly, all the derivative releases into nature (air, water, soil) from the gathering of the raw materials (including energy), the utilization of the product and the processing of the product itself, as well as its final disposal,
must be included in the life-cycle assessment. That is, both direct releases (e.g., emissions and energy use during manufacturing) and indirect releases (e.g., effects of raw material extraction, product dispatching, consumer use, disposal, energy consumption, etc.) are identified, measured and taken into account by life-cycle assessment (including recycling and waste management) [380]. LCA is also known as life-cycle analysis, ecobalance, and cradle-to-grave analysis (see Wikipedia). The two primary purposes for conducting the life-cycle assessment of an activity producing a product are the improvement and the cost reduction of the production process. The next three motivators are decision making, proactive environmental positioning, and customer requirements (which are expected to play an increasing role in the future). Finally, other motivators of lower weight are, in order: ISO standards, liability determination, regulatory issues, marketing, research priorities, ecolabeling, product comparison, optimization, reduction of toxic waste, and waste stream management (Foust and Gish) [103]. Life-cycle assessments are increasing rapidly in Europe, especially as the basis for packaging recovery and recycling goals. In general, environment protection laws and regulations can be better met through the use of life-cycle assessment methods. The environmental management standards (ISO 14000), now implemented all over the world, will surely expand the use of life-cycle assessments rapidly in all industrially developed countries. Naturally, life-cycle assessment is performed using a systems approach to determine the effects of various industrial alternatives. This systems-based analysis should take into account the following issues in an integrated way:

- Premanufacturing factors (i.e., choices of raw materials)
- Manufacturing process (including use/reuse and maintenance)
- Recycling
- Waste management
- Use of energy/consumption of resources
- Product use
- Environmental releases

This is a very demanding task, which usually goes far beyond the capability of most companies. In addition, the analytical techniques needed are still under development, and the data required for the analysis are voluminous and possibly unavailable. Thus, in practice not all aspects of a life-cycle assessment can be considered, and it is advisable in each case to establish first the extent of the LCA which is feasible. Even simple products may need hundreds of steps when all materials-processing steps are included, or when all the possible uses/reuses and disposal options for the product are considered. Of course, when a subset of the overall LCA is adopted, this must be done with great caution, to ensure that important steps are not ignored. In recent years, several attempts have been made to systematically reduce the workload needed for a full LCA by streamlining the processes in an appropriate way, eliminating some phases (e.g., raw materials acquisition, or postconsumer product disposal, etc.). This is facilitated by developing a system flow diagram (or chart), which provides a qualitative graphical representation of all the relevant processes occurring in the system under study. The system flow chart is composed of
a sequence of boxes (representing the processes) connected by arrows (representing the flows of materials). A systematic stepwise procedure for LCA is the so-called inventory analysis, which is followed by impact analysis, and by interpretation and improvement analysis. A brief description of these procedures follows.

Inventory Analysis: This analysis (assessment) quantifies energy and raw material requirements, atmospheric emissions, waterborne emissions, solid wastes, packaging, and all activities/emissions for the whole life-cycle of a product (Curran) [103]. It is performed in the following basic steps (Bokuski et al.) [50, 103]:

- Scope and goal (focusing on the most important issues or functional units for the inventory analysis of the intended application)
- Data collection (using the system flow chart and the relevant work sheets or checklists)
- Computer model construction (i.e., creation and implementation of a computer model that captures the environmental impacts and includes proper sensitivity models)
- Analysis and presentation of the results (taking care to separate out the secondary information)
- Interpretation of the results (with reference to the questions posed in the original scope of the LCA project)

It is remarked that normally the conclusions must suggest ways to reduce energy and resource consumption and to minimize environmental releases.

Impact Analysis: Life-cycle impact analysis is concerned with a quantitative and/or qualitative study of the potential implications for the environment and human health caused by the use of resources and the environmental releases (Fava et al.) [103]. Essentially, the impact analysis is a systematic way to convert the data delivered by the inventory analysis into a form suitable for assessing the impacts of various possible production scenarios. Conceptually, impact analysis consists of the following three phases [50], followed by the improvement analysis phase.

- Classification (i.e., assignment of inventory assessment items to a smaller number of classes, such as human health, ecological quality, and natural resource depletion). For each impact class, a list of stressors is created (e.g., emitted contaminants or human health effects).
- Characterization (i.e., description of the impacts of concern, e.g., conversion of the emitted amount of carcinogenic air pollutants to a projected number of new cancers caused by this pollutant loading, etc.). For each impact category, the data can be analyzed using the following procedures (ordered in increasing complexity): loading, equivalency, inherent chemical properties, generic exposure and effects, and site-specific exposure and effects (Vigon) [103].
- Impact weighting (i.e., assignment of relative weights or values to different impacts, which allows comparison of their importance). The final impact assessment is a reduction of complicated inventory data to impact-related figures and a final judgment of the environmental impacts of these figures.
- Improvement analysis (i.e., determination of the possibilities to reduce energy consumption, raw materials depletion, or environmental emissions over the whole life-cycle of the process or the product). Actually, the improvement analysis uses the results of the impact analysis to develop strategies that lead to the maximum improvement of the process or the product.

According to the ISO 14040 and 14044 standards, the most important phase is the "Interpretation" phase, in which an analysis of major contributions, together with sensitivity and uncertainty analyses, is used to see where the goals can be achieved. This phase is naturally performed before the improvement phase or simultaneously with it.
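To make the classification–characterization–weighting chain concrete, here is a minimal Python sketch; the inventory amounts, characterization factors, and category weights are all invented placeholders, not standard values:

```python
inventory = {"CO2": 1200.0, "CH4": 8.0, "SO2": 3.5}  # kg emitted (from inventory analysis)

category = {"CO2": "climate", "CH4": "climate",       # classification step
            "SO2": "acidification"}

char_factor = {"CO2": 1.0, "CH4": 25.0, "SO2": 1.0}   # characterization (e.g., kg CO2-eq)

weight = {"climate": 0.6, "acidification": 0.4}       # impact weighting step

scores = {}
for substance, amount in inventory.items():
    cat = category[substance]
    scores[cat] = scores.get(cat, 0.0) + amount * char_factor[substance]

total_impact = sum(weight[cat] * value for cat, value in scores.items())
print(scores, total_impact)
```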
Streamlining LCAs and Pollution Control Factors: As mentioned before, streamlining provides several ways to reduce the cost and the work needed for full LCA studies. In this case, the assessors have to be very careful in selecting the information they are going to omit from their analysis. Three criteria for this selection are the following (Todd) [50, 103]:

- The study must contain some form of inventory, impact assessment, and improvement assessment.
- The study must describe well the methods used to streamline the accepted LCA methodology, and the boundaries adopted for the study.
- The results of the streamlined assessment must be consistent with the results found by a full-scale LCA of the product.

The U.S. Environmental Protection Agency (EPA), in an attempt to reduce the work and difficulties of producing a complete full-scale LCA, has developed a new LCA approach called "P2 (Pollution Prevention) factors", which aims to be an indicator of the general degree of environmental improvement over a whole life cycle that has taken place, or might occur, as a result of applying a particular P2 activity [102]. A P2 factor is a ratio, where the denominator is the summed score of a set of criteria before P2 is applied, and the numerator is the summed score of the same criteria after the application of the P2 activity (a small sketch follows the list below). The environmental impacts are scored individually on a five-number scale (1, 3, 5, 7, 9) indicating descending levels of environmental impact likely to be generated by the activity, with 1 indicating the most and 9 the least environmental impact. LCAs can be used by an industrial company as part of its production and sales strategies, or by outside evaluators of products. Some ways of achieving pollution prevention using LCA concepts are the following (Vigon) [50, 160]:
- Corporate strategic planning
- Product development
- Process selection and/or modification
- Market claims and advertising
- Evaluation by governmental agencies
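The P2 factor computation described above reduces to a simple ratio of score sums; here is a small sketch with invented criterion scores on the EPA 1-3-5-7-9 scale (1 = most, 9 = least environmental impact):

```python
before = {"waste_volume": 3, "toxicity": 1, "energy_use": 5}  # scores before the P2 activity
after  = {"waste_volume": 7, "toxicity": 5, "energy_use": 5}  # scores after the P2 activity

p2_factor = sum(after.values()) / sum(before.values())
print(f"P2 factor = {p2_factor:.2f}")  # a value above 1 indicates overall improvement
```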
Computer Models for Life-Cycle Assessment: As already pointed out, a life-cycle assessment is typically a data-intensive exercise. Today, software packages are available which help in the collection and analysis of all the data, mainly from government agencies and university research groups. This software can be classified into [103]:
- Strict LCA tools
- Product design tools
- Engineering tools
Strict LCA tools provide information that supports LCA as a stand-alone activity. Product design tools help engineers who may not be experts in LCA. The LCA engineering tools are general engineering tools adapted for use in LCA (e.g., simulation software packages). The development of LCA computer tools and the creation of LCA databases is currently at its most advanced stage in Europe. In the USA, EPA provides important information on P2 and LCA software (http://www.epa.gov/ORD/NRMRL/lcaccess/). It is noted that LCA studies and tools are not restricted to industrial plants; they have been extensively used for other purposes as well, such as waste management practices by municipalities (especially for solid wastes). A number of variants of LCA, as presented in Wikipedia (http://en.wikipedia.org/wiki/Life-cycle-assessment), are: cradle-to-grave (full LCA from manufacture to use and disposal), cradle-to-gate (partial LCA from manufacture to the factory gate), cradle-to-cradle (a cradle-to-grave LCA where the end-of-life disposal step for the product is a recycling process), gate-to-gate (a partial LCA examining only one value-added process in the entire production chain), and well-to-wheel (an LCA of the efficiency of fuels used for road transportation). Worth mentioning in the LCA area is the Life Cycle Initiative launched by UNEP (the United Nations Environment Programme, Division of Technology, Industry and Economics (TIE), www.unep.org, www.uneptie.org) and SETAC (the Society of Environmental Toxicology and Chemistry, www.setac.org) with the cooperation of twelve sponsoring partners and seven activity sponsors and supporting partners. According to UNEP, a life cycle approach promotes the following functions:
- Awareness that no selection is made in isolation
- Assurance that the selections are suitable for the longer term
- Improvement of systems as a whole, not only of single parts of them
- Making informed selections (e.g., by looking for unintentional impacts of human actions)
The UNEP TIE Division aims to help governmental, business and industry decision-makers to create and apply policies which promote and assure:
- Sustainable production and consumption
- Efficient energy use and saving
- Proper management of chemicals
- Consideration of environmental costs
SETAC, a non-profit association, analyzes and explores the problems of the impact of chemicals and technology on the environment, and provides a neutral meeting platform/forum, not to defend positions, but to deploy the best available science and technology for the benefit of humans and the environment.
9.2.2 Environmental Impact Assessment

Environmental impact assessment (EIA) is a process similar to LCA, applied to technological activities and R&D projects of any kind. Three definitions of EIA given over the years are the following [240]:

- EIA is a study for the evaluation and prediction of the effects of an activity or project on the environment.
- EIA is a methodology for identifying and assessing the potential environmental impacts of a proposed project, evaluating the alternatives, and designing proper mitigation, management and monitoring procedures or systems.
- EIA is the analysis of probable changes in several biophysical characteristics, socio-economic aspects and the environment that may result from a proposed or impending activity or project.

More definitions can be found at www.gdrc.org/uem/eia/define.html. EIA compares several alternatives for a project and proposes the best one on the basis of global environmental, social and economic criteria, trying to ensure that both the beneficial and the adverse consequences of the project are taken into account throughout the project design. Furthermore, EIA determines and suggests proper measures to mitigate the adverse effects, and forecasts whether the remaining adverse environmental effects, after the mitigation is applied, are still significant and unallowable. This must be done early in the project's planning, in order to ensure that the environment is really protected with the minimum incurred cost and the maximum saving of overall time. Currently, more than 120 countries have adopted and implemented EIA regulatory processes. Three fundamental features that any EIA process must possess are the following:

- Integrity: EIA must be objective, balanced, unbiased, and fair.
- Utility: EIA must provide reliable and well-balanced information to the decision maker.
- Sustainability: EIA must lead to implementable environmental safeguards.

The steps through which an EIA should go are the following [240]:

1. Screening: Determine whether the project at hand needs an EIA, and at what level of detail.
2. Scoping: Identify the key factors and impacts that must be further investigated, including the time and other limitations of the study.
3. Impact analysis: Identify, predict and evaluate the environmental and social impacts of the project.
4. Mitigation: Identify and suggest proper actions that can reduce or overcome the adverse environmental consequences.
5. Reporting: Present the EIA findings in a clear written form for the decision maker concerned.
6. Review: Examine the completeness and adequacy of the EIA report.
7. Decision making: Decide whether the project is approved, rejected, or needs further improvement.
8. Post-monitoring: Check that the impact of the implemented project does not actually exceed the legal limits, and that the mitigation procedures are as suggested in the EIA report.

Typically, the exploration of the project's impacts is made along four principal directions, namely: physical–chemical, biological–ecological, social–cultural, and economic–operational. In general, there are various other forms of impact assessment, for example health impact assessment (HIA) and social impact assessment (SIA). HIA and SIA are concerned with the health and social consequences of development, which must be taken into account together with the environmental assessment. A dominant class of impact assessment is the so-called strategic environmental assessment (SEA), which is concerned with strategic actions, such as development plans, programmes and policies. These strategic actions extend the goals of the EIA process beyond the project level, and proactively integrate the environmental issues into the higher decision-making levels.
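A minimal sketch of the eight-step sequence above as a checklist, with the screening step acting as a gate; the predicate needs_full_eia is a placeholder for whatever national screening criteria apply:

```python
EIA_STEPS = ["Screening", "Scoping", "Impact analysis", "Mitigation",
             "Reporting", "Review", "Decision making", "Post-monitoring"]

def run_eia(project, needs_full_eia):
    """Return the steps actually carried out for a project; screening
    may terminate the process early if no full EIA is required."""
    if not needs_full_eia(project):
        return EIA_STEPS[:1]
    return EIA_STEPS

print(run_eia("new road bypass", lambda p: True))
```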
9.3 Nature-Minding Design

Nature (environment)-minding design (NmD), or design for the environment (DfE), aims at determining the way a product is produced on the basis (at least in part) of its environmental impact. An industrial plant can reduce the pollution caused by the manufacture or use of a product in several ways, some of which are the following [12]:

- Evaluate the manufacturing process by which the product is produced, with the goal of reducing process wastes.
- Examine the materials used in manufacture, in order to determine the efficiency of using less dangerous materials or materials that are more sustainable.
- Redesign the product so that it is more easily recyclable or more easily disassembled for reuse or recycling.
- Redesign the product to extend its useful life or reduce its energy use.
- Try to minimize the potential pollution from the final disposal of the product after its use.

In nature-minding design, the environmental considerations are integrated into the product and process engineering design procedures. Clearly, environment-minding design and life-cycle assessment share the common goal of refining a manufacturing process, or the product itself, so as to reduce pollution. To be most effective, this refining process should start at the earliest conceptual phases of the design and include all aspects of the product and its manufacturing process (materials selection, manufacturing process selection, product design).
Among the new emerging fields of nature-minding industry and automation is the so-called green chemistry (or benign chemistry, or clean chemistry), which deals with the synthesis, processing and use of chemicals that reduce risks to humans and to the environment [15]. The overall ultimate goal of clean chemistry is to develop and apply alternative syntheses for the required industrial chemicals so as to control and prevent environmental pollution [14]. Although it is not totally possible to obtain chemicals that are exactly "benign", it is possible to replace the use of toxic and harmful chemicals with much more innocuous ones. This is based on the available biomedical information about the effect of chemicals on human health, and also on their effect on nature and the ecosystem. Clean chemistry is also concerned with the effects on humans and nature of the by-products of the clean chemicals. Actually, the largest releaser of chemicals into nature is the chemical manufacturing sector. The steps in the manufacturing of a chemical are the collection of raw materials from nature and their conversion into proper feedstocks and intermediates. These are then reacted to produce the desired chemicals, which must be isolated from the bulk of the reaction mixture, purified, packaged, and transported to the consumer.

Design (or redesign) for reuse (otherwise called demanufacturing or remanufacturing) is an important contributor to nature-minding automation. It is the process of collecting, dismantling, selling, and reusing the valuable components of industrial products that have reached the end of their useful life. Reformation of the material is not reuse. Demanufacturing is a class of reuse where a product is disassembled for reuse, in the same or other types of products. Recycling is not the same as reuse, because it involves the reformation or reprocessing of a recovered material; the reprocessed or reformed material is used in new products. The decision to reuse or recycle a material is based mainly on market and economic considerations. For example, a company will not use recycled materials if cheaper virgin materials are available. The recycle/disposal hierarchy, from the most preferable to the least preferable option, is the following [50]:
- Reduce materials content
- Reuse components/refine assemblies
- Remanufacture
- Recycle materials
- Incinerate for energy
- Landfill
In the European Union there has been a directive on packaging and packaging waste since 1994, which requires a minimum recycling rate of 15% for all plastic, steel, wood, glass, and paper [416]. Many European Union countries are now recycling over 50% of their packaging. In the USA, reuse/recycling is mainly driven by economics. A number of practical rules for everyday nature-minding design, automation, and operation are given in Section 9.10.
9.4 Pollution Control Planning

Pollution control planning (PCP) should be based on a detailed study of how an industrial company makes its products and how it does business. The plan adopted must provide a mechanism for a systematic and continuous review of the company's activity with regard to its effect on the environment. The initiative for developing a pollution control program in an industrial plant may originate with upper management, but usually it starts in middle management (where the workers are closer to the production line and can more easily appreciate the benefits of pollution prevention) or with the environmental control staff of the company. The basic steps of a pollution control program are [50]:

Step 1: Define and organize the PCP (statement of policy, statement of goals, naming of a task force).
Step 2: Carry out a preliminary assessment (collect data, review sites, establish priorities).
Step 3: Write the program plan (define objectives, develop a schedule, identify obstacles).
Step 4: Perform a detailed assessment (review data and sites, organize documentation).
Step 5: Do a feasibility analysis (technical, economic, environmental).
Step 6: Write the assessment report.
Step 7: Realize the plan.
Step 8: Evaluate the progress (acquire data, analyze the results).
Step 9: Maintain the pollution control program.

In the above PCP structure there is obvious feedback from Step 2 to Step 1, and from Step 8 to Step 4. The preliminary assessment in Step 2 reviews and investigates the available data and determines priorities and procedures for the detailed assessments [596]; EPA has released a suitable guide for this (1992). Many PCPs attempt to provide a mechanism for investigating P2 initiatives. The detailed assessment in Step 4 examines in a comprehensive manner the prioritized set of pollution control options proposed by the assessment team, once the sources and the nature of the waste generated are determined. Several practical procedures exist for prioritizing pollution control projects. The factors that determine the priority of a project vary from plant to plant, depending on the P2 goals established during the planning process. A commonly used prioritizing procedure is the so-called "option rating weighted-sum method" developed by the U.S. EPA (1992) [596]. Some criteria used in this method are: reduction in waste quantity, reduction in waste hazard, reduction in waste treatment/disposal costs, reduction in raw material costs, and reduction in insurance and liability costs (a small sketch is given below). The most difficult step of the PCP process is the implementation step, since here the best option has to be sold to the management side of the company, which may have a restricted understanding of the environmental issues. The technical staff of the industrial plant has to convince management of the project's value, namely that in the long term the PCP project will save the company money.
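A sketch of the option rating weighted-sum idea mentioned above; the criteria weights and the 0-10 option scores below are illustrative choices, not EPA-prescribed values:

```python
weights = {"waste_quantity": 0.30, "waste_hazard": 0.25,
           "treatment_cost": 0.20, "raw_materials": 0.15, "liability": 0.10}

options = {  # each pollution control option scored 0-10 per criterion (assumed scores)
    "solvent_substitution": {"waste_quantity": 8, "waste_hazard": 9,
                             "treatment_cost": 6, "raw_materials": 4, "liability": 7},
    "counterflow_rinsing":  {"waste_quantity": 7, "waste_hazard": 3,
                             "treatment_cost": 8, "raw_materials": 6, "liability": 4},
}

def rating(scores):
    """Weighted-sum rating of one option."""
    return sum(weights[c] * s for c, s in scores.items())

for name in sorted(options, key=lambda n: rating(options[n]), reverse=True):
    print(f"{name}: {rating(options[name]):.2f}")  # highest-priority option first
```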
Many pollution programs may require modifications in operating procedures, purchasing techniques, and materials inventory control, or changes in employee training schemes. All these alterations must be performed with great care in order to minimize disruption to normal business activities. A PCP must be a "living document", i.e., a document continuously updated as new options become known. The above steps and procedures have to be integrated into a working system for managing environmental impacts from industry. The ISO 14000 standards mentioned in Section 9.2, formally adopted in 1996, establish benchmarks for environmental management performance and specify the actions that industry must take to conform to these standards [102]. The aspects covered by ISO 14000 are divided into organization evaluation aspects and product evaluation aspects, namely [73]:

Organization Evaluation Aspects
- Environmental management system
- Environmental auditing
- Environmental performance
Product Evaluation Aspects
- Environmental aspects in product standards
- Environmental labeling
- Life-cycle assessment
The basic components of an environmental management system (EMS) are [50]:

- Continual review and improvement
- Measurement and evaluation
- Implementation
- Environmental management plan
- Commitment and environmental policy
It is remarked that the environmental management system should be an integral part of the overall management structure of the company, and not simply a stand-alone system. The basis of the EMS is the commitment of top management that an environmental management plan will be adopted, implemented, evaluated and continually improved, in compliance with the state environmental laws. ISO 14000 encourages companies to consider the implementation of best available technologies (BATs), but it does not mandate their use. BATs are often the most efficient and least costly options from a life-cycle viewpoint, but a company is not obliged to adopt them to become accredited. BATs are usually taken as a reference, and OECD and European Union countries are asked to use the best available technologies when establishing environmental permit conditions for certain industrial plants (IPPC Directive 96/61/EC) [97, 119, 406].
Environmental auditing is not compulsory in the USA but is strongly recommended by EPA, and is defined as "a systematic, documented, periodic, and objective review by regulated entities (public entities, private bodies, state or local agencies) of facility operations and practices adopted for satisfying environmental requirements" (see ISO 14010). An audit used only to verify compliance with environmental regulations is called a compliance audit; it covers a short time frame, showing a snapshot of the emissions from a plant at that time. Other audits are the environmental liability audit (which is conducted before the purchase, lease, sale or financing of land or buildings for commercial or industrial use) and the waste management contractor audit (which is conducted on waste management contractors by waste-generating industrial companies to make sure these contractors are properly managing the waste being shipped to them).

The development and implementation of any PCP has a cost. Many inputs used in the economic assessment of a proposed PCP project are relatively easy to obtain. The capital costs of purchasing the required equipment and the costs of maintaining and using it are usually available. Similarly, the estimated benefits in terms of reduced wastage, increased productivity and so on are easy to find. What is difficult to estimate is the long-term liability due to possible future environmental impacts caused by the disposed materials. Here, it must be pointed out that a PCP option that is worthwhile on the basis of typical business accounting procedures may, in real life, be a poor selection if it leads to a major long-term liability cost. Conversely, a PCP project that minimizes (or, better, eliminates) long-term potential liability may, in the long run, be preferable, even if its short-term economics are less attractive than those of another option.
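The point about long-term liability can be made concrete with a net-present-value comparison; all cash flows, the discount rate, and the liability estimate below are hypothetical:

```python
def npv(cashflows, rate=0.08):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Option A: cheap up front, but carries an estimated disposal liability in year 10.
option_a = [-50_000] + [12_000] * 9 + [12_000 - 400_000]
# Option B: more expensive up front, but eliminates the waste stream (no liability).
option_b = [-120_000] + [15_000] * 10

print(f"Option A NPV: {npv(option_a):,.0f}")  # turns negative once liability is counted
print(f"Option B NPV: {npv(option_b):,.0f}")  # the better long-run choice in this example
```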
9.5 Natural Resources/Energy Conservation and Residuals Management

9.5.1 Water Conservation

One of the major materials used in industry is water, which dissolves reagents and provides the medium in which reactions occur. Water is also used to separate various materials, either via gravity (for immiscible materials) or, for miscible materials, by means of differences in the affinity of the material between two solvent phases (liquid–liquid, liquid–gas, liquid–solid). The two major uses of water are cleaning and cooling. Although water is considered a 'free' resource because of its relatively low cost, it is actually not free, since if it is used in large amounts the cost can increase considerably. Drinkable water, especially, must be used and consumed with great care. In particular, the cost of treating the resulting contaminated wastewater is very often excessively high. Thus, measures must be taken to minimize water use in all areas, namely housekeeping and industry, as well as for treating industrial wastewaters.
In industry, large quantities of water are needed for cleaning and degreasing, to remove dirt, oil and grease from both feed materials and finished products. A typical cleaning method is to put the parts to be cleaned in one or more successive rinsing tanks that contain stagnant or flowing water. The single running rinse tanks used in the past needed a large quantity of water for contaminant removal. Today, series rinse tanks are used, which reduce the overall waste volume to a certain extent and have the additional benefit that they can be heated or controlled separately, because they have separate feeds. However, series rinse tanks are not as efficient as countercurrent rinse systems, which have the greatest efficiency (see the sketch below). In general, by suitable modification of flow patterns, we can achieve a substantial reduction of the pollution resulting from industrial operations.
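The water savings of countercurrent rinsing can be estimated with the standard plating-shop rule of thumb Q ≈ D·R^(1/n), where D is the drag-out rate, R the required dilution ratio, and n the number of countercurrent stages; the figures below are assumed:

```python
D = 2.0     # drag-out carried into the rinse system, L/h (assumed)
R = 1000.0  # required dilution ratio C_dragout / C_final (assumed)

for n in (1, 2, 3):
    q = D * R ** (1.0 / n)  # rinse water flow needed with n countercurrent stages
    print(f"{n} stage(s): about {q:,.0f} L/h of rinse water")
# 1 stage ~2,000 L/h; 2 stages ~63 L/h; 3 stages ~20 L/h
```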
9.5.2 Energy Conservation

A recent flow analysis technique, the so-called pinch analysis, is increasingly being adopted in industry [322]. Originally developed for optimizing the use of energy in industry, it has since been adapted for optimizing water and chemical use in industrial plants [323]. Pinch analysis is extensively used for heat optimization, covering both energy and mass transfer situations in reaction and separation systems. It provides a tool for determining the capital and operating cost requirements of alternative pollution prevention programs. Another technique, called heat exchanger network (HEN) synthesis [467], was developed (via pinch analysis) to maximize the efficiency of energy use in a process by transferring waste heat from one stage of the process to another. HEN synthesis has contributed large cost savings in many industries, in particular those with hundreds of heat exchangers needed to bring process streams to the desired temperatures (e.g., refineries).
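As a toy illustration of the pinch idea, the sketch below computes the maximum heat recoverable between one hot and one cold stream in a single countercurrent exchanger, subject to a minimum approach temperature. Real pinch analysis builds composite curves over many streams; the stream data here are invented:

```python
def max_recovery(hot, cold, dt_min=10.0):
    """hot/cold = (CP in kW/K, T_supply, T_target), assuming constant heat
    capacity flowrates. Returns the maximum recoverable heat in kW."""
    cp_h, th_in, th_out = hot
    cp_c, tc_in, tc_out = cold
    q_hot  = cp_h * (th_in - th_out)   # heat the hot stream must reject
    q_cold = cp_c * (tc_out - tc_in)   # heat the cold stream must absorb
    # Approach-temperature limits at the two ends of a countercurrent unit:
    q_end_hot  = cp_h * (th_in - (tc_in + dt_min))  # hot outlet >= tc_in + dt_min
    q_end_cold = cp_c * ((th_in - dt_min) - tc_in)  # cold outlet <= th_in - dt_min
    return max(0.0, min(q_hot, q_cold, q_end_hot, q_end_cold))

print(max_recovery(hot=(2.0, 150.0, 40.0), cold=(3.0, 20.0, 120.0)))  # -> 220.0 kW
```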
9.5.3 Residuals Management

Despite all efforts to eliminate industrial pollution, no industrial plant is 100% efficient and pollution-free. As already shown, techniques such as product recovery, minimization of by-product formation, and recycling of waste materials can give significant reductions in the wastes needing treatment, but there will always be some residual material which cannot be economically or practically removed from the waste stream. Another source of waste materials is leaking equipment and the evaporation of volatile organic compounds (VOCs) from open containers, process equipment, storage tanks, and the like. The topic of residuals management is vast, and many comprehensive study reports and books are available today [189, 314, 421, 452]. Here we briefly outline wastewater treatment, the waste neutralization process, the disposal methods (landfill and incineration), and resource recovery.
Wastewater Treatment: Wastewaters coming from industrial activity may be acidic or alkaline, may contain oxygen-depleting organic compounds or nutrients which may cause eutrophication, may be aesthetically unacceptable (due to color, odor or waste), or may contain hazardous or toxic materials. A general methodology for treating wastewaters is therefore impossible, and case-by-case treatment is essential. The three ways of industrial water treatment are [111, 357, 593]:

- Treatment before discharge to a receiving stream
- Discharge to a municipal sewer and subsequent treatment in a publicly owned treatment works (POTW)
- Pretreatment at the industrial site to reduce the existing quantity of pollutants, followed by discharge to a POTW

In many cases the option preferred by industries is to send their wastes to a POTW (if it is of sufficient size to treat these wastes adequately), since this relieves the industry of the responsibility of treating the wastes and eliminates the work necessary for this; the industry simply pays a user fee to the municipality. All POTWs have pretreatment programs for controlling the discharge into the sewer of materials which could be harmful to the treatment plant or dangerous to the POTW personnel. The U.S. EPA has released 'prohibited discharge standards' (for all non-domestic discharges) and categorical pretreatment standards (for particular industries). The local POTWs are responsible for establishing a pretreatment program enforcing these standards. For a list of the prohibited pollutants see EPA 40 CFR 403.5.

Waste Neutralization: Neutralization is required when the industrial wastes have a much higher or much lower pH than that needed for suitable treatment or for discharge into a municipal sewer or a receiving stream. Neutralization is achieved by adding acid reagents to an alkaline waste, or alkaline reagents to an acidic waste, in order to bring the pH of the waste to an acceptable value. If the wastes are to be treated biologically, the pH should lie between 6.5 and 9.0; if the wastes are to be discharged into a municipal sewer, the pH range is usually between 5.0 and 9.0. In many cases, an industry can mix an acidic waste stream and an alkaline waste stream together to neutralize both streams, but this should be done cautiously to avoid undesirable side reactions. The quantity of reagent required to neutralize or adjust the waste pH can be determined via a titration curve, where known reagent amounts are added incrementally to an aliquot of waste and the resulting pH is measured.
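In practice the titration curve is a set of bench measurements, and the reagent dose for a target pH is read off by interpolation; the curve points below are made-up bench data (mL of alkaline reagent per litre of acidic waste versus resulting pH):

```python
curve = [(0.0, 2.1), (5.0, 2.6), (10.0, 3.4), (12.0, 4.5),
         (13.0, 6.2), (13.5, 7.0), (14.0, 8.4), (15.0, 10.1)]

def dose_for_ph(target_ph):
    """Linearly interpolate the reagent dose that reaches target_ph."""
    for (d0, p0), (d1, p1) in zip(curve, curve[1:]):
        if p0 <= target_ph <= p1:
            return d0 + (d1 - d0) * (target_ph - p0) / (p1 - p0)
    raise ValueError("target pH outside the titrated range")

# Biological treatment needs pH 6.5-9.0; aim near the middle of that band.
print(f"{dose_for_ph(7.0):.2f} mL of reagent per litre of waste")
```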
Solid Waste Disposal Methods: The two primary solid waste disposal methods are landfill and incineration. In most countries, disposal in a landfill is done by burying the waste. Landfills are usually created in unused quarries, mining voids or borrow pits. Modern, well designed and operated landfills provide a relatively low-cost method of disposing of waste materials without the adverse environmental impacts of old-fashioned landfills (where methane and CO2 were produced, which cause odor problems, kill surface vegetation, and act as greenhouse gases). Incineration involves the combustion of waste material. It belongs to the class of so-called high-temperature waste treatment methods, and converts waste material into heat, gas, steam and ash. Systems that burn waste in a furnace or boiler to produce heat, steam and/or electricity are known as waste-to-energy (WtE) or energy-from-waste (EfW) systems. Incinerators produce micro-pollutants in their gaseous emissions and have raised social concerns; on the other hand, they produce heat, i.e., energy.

Resource Recovery: One form of indirect resource recovery is the energy recovery during the incineration of solid wastes. In nature and in human life there are several waste materials that can be recovered directly, such as paper, aluminum cans, glass, plastics, cardboard, waste oil, and so on. In many countries the recovery of materials is becoming compulsory.
9.6 Fugitive Emissions Control and Public Pollution Control Programs

9.6.1 Fugitive Emissions Control

Fugitive emissions may occur whenever industrial components (e.g., pumps, flanges, compressors, valves, pipe connections, etc.) leak. Typically, fugitive emissions are fluids (liquids or gases) and dusts from various technological activities (e.g., mining, construction, waste collection, agriculture, road traffic, etc.). Fugitive emissions from a single piece of equipment may be small, but the cumulative effect from thousands of such components in a plant may be enormous. The two basic methods for controlling (reducing) fugitive emissions from equipment are:

- Modify or replace existing equipment
- Apply a leak detection and repair (LDAR) scheme

Modification of equipment includes the installation of additional components which eliminate or reduce emissions, or the replacement of existing equipment with sealless types (e.g., installing a cap on an open-ended line, replacing an existing pump with a sealless type, etc.). Most fugitive emissions come from leaking valves. Therefore, a packing material is usually used around the valve stem to form a seal which permits the valve stem to move while keeping process fluids from leaking. Effective control of fugitive emissions from valves requires component monitoring, stem sealing, and assurance of good mechanical condition [10]. Monitoring techniques are the most cost-effective way of controlling fugitive emissions from valves, with resulting reductions from 25% to 75%, depending on the monitoring rate and the repair level achieved [40]. Emissions from process valves can be eliminated if the valve stem is isolated from the process fluid (using, for example, sealless diaphragms). LDAR programs are designed to identify components that are emitting unacceptable quantities of material, so that repair is unavoidable. Each LDAR program defines the frequency of component sampling and the screening value at which a "leak" is indicated.
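The bookkeeping core of an LDAR scheme is small: flag every component whose screening value exceeds the programme's leak definition and queue it for repair. The threshold and readings below are illustrative, not values mandated by any specific rule:

```python
LEAK_DEFINITION_PPMV = 500.0  # assumed leak definition for this programme

readings = {  # component id -> latest screening value (ppmv)
    "valve-101": 120.0,
    "pump-007": 4300.0,
    "flange-203": 650.0,
}

repair_queue = sorted(
    (cid for cid, ppmv in readings.items() if ppmv >= LEAK_DEFINITION_PPMV),
    key=lambda cid: -readings[cid],  # worst leaks first
)
print(repair_queue)  # -> ['pump-007', 'flange-203']
```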
Emissions from volatile organic liquids in storage tanks (e.g., petroleum tanks, oil transfer tanks, etc.) can be controlled by several methods. For example, in fixed-roof tanks we can install an internal floating roof and seals to minimize evaporation of the stored liquid. Fugitive emissions may also occur in wastewater treatment plants, whether these fully treat the wastes before discharging them to a receiving stream, or pretreat them before discharging them to a municipal sewer. To minimize fugitive emissions from these plants, we cover them, collect the escaping volatile organic compounds (VOCs), and treat or destroy them. Other control techniques include the minimization of turbulence at points where it is not required (e.g., weirs or drop structures), and the use of powdered activated carbon in the activated sludge process to trap VOCs so that they cannot volatilize [514]. Computer software is currently available for estimating fugitive emissions and suggesting proper techniques to reduce them (see, e.g., EPA, EIONET).
9.6.2 Public Pollution Control Programs

Today, municipal pollution control facilities, which as we saw before are called publicly owned treatment works (POTWs), face increasing system overloads which cause inadequate wastewater treatment before discharge to receiving streams. The obvious solution of expanding the POTW facilities to meet this increased demand is usually economically prohibitive. Therefore, the option of requiring industrial pretreatment programs for all POTWs has been widely adopted. In a pretreatment program, industries must obtain a permit before using the municipal sewer, which limits the discharge of specific pollutants. These restrictions usually involve concentration-based limits, although in many cases volume limits are imposed. Another widely used option followed by municipalities is to establish a pollution control program as part of the pretreatment program. Currently, only a few municipalities have fully implemented pollution control programs, but their number is continually increasing. Actually, today major laws that refer to pollution control and management in a particular medium acknowledge the key role of control as a national obligation and policy, and provide a strong incentive for pollution control by including the associated expenses and responsibilities for properly managing existing pollution. Several Acts now exist, including the following U.S. EPA acts (see also Section 9.7.2) [50]:

- Resource Conservation and Recovery Act (RCRA), enhanced with the Hazardous and Solid Waste Amendments (HSWA)
- Emergency Planning and Community Right-to-Know Act (EPCRA)
- Clean Water Act (CWA)
- Pollution Prevention Act (PPA)
The Clean Water Act defines the framework for the imposition of industrial wastewater control programs on municipalities and the subsequent setting of regulations by municipalities on industrial users. As stricter requirements are imposed
on POTW discharges, municipalities must strengthen their control over what is discharged to the POTW system. The EPA requires that wastewater treatment plants (WWTPs) with pretreatment programs either produce local discharge standards or prove that no standards are required. Once the pollutants of concern and the users of concern are targeted, a pollution control policy can be developed. POTWs have the authority to require permitted industrial users to meet discharge limits through proper pollution control strategies. It should be remarked, however, that companies are not likely to apply pollution control programs to achieve acceptable pollution levels without incentives. Local authorities want to create positive economic incentives in order to assure the participation of small companies in pollution control programs [58]. To this end, market-based programs have been developed, such as marketable wastewater (MWW) permits, which constitute an option with simultaneous pollution prevention and economic benefit possibilities [7]. An industry holding a MWW permit can release any quantity of pollutants up to the level allowed by the marketable permit. Water quality is maintained as long as no more than the permissible amount of pollution is emitted (see Section 9.11).
9.7 Environmental Control Regulations

9.7.1 General Issues

To control the environmental pollution caused by industry or by the products of industry, so as to minimize the environmental damage and the threats to human health, many countries of the world have enacted over the years an increasing number of laws and regulations. Typically, laws are written to provide the general framework for enforcement, while regulations or directives define the actual implementations. The environmental control problem is a worldwide problem which can be faced more efficiently if global measures are taken against environmental pollution. To this end, the so-called Kyoto Protocol was launched by world leaders at a meeting held in Kyoto (Japan) in December 1997. The aim of the Kyoto Protocol was to develop a strategy for controlling carbon dioxide (CO2) and other greenhouse gas emissions (methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride) on a global scale. Industrialized countries consume more energy than less developed countries (to produce more goods), and so they produce and emit larger amounts of CO2. Thus, developing countries have argued that the countries currently emitting the most CO2 should be required to achieve the largest part of the needed emissions reductions. On the other hand, the developed countries say that they are already using the best available technologies for their operations, and so developing countries should also contribute to the necessary CO2 emission reductions by adopting more energy-conserving processes. After long discussion and negotiation, the Kyoto Protocol was agreed. A goal of this protocol is to reach, by 2010–2012, a minimum reduction of greenhouse gases of 5% below the 1990 levels.
The required reduction level differs from country to country: the USA accepted a 7% reduction from 1990 emission levels, Japan a 6% reduction, and most European countries an 8% reduction. Developing countries did not accept any restrictions and were exempted. This may have bad consequences, since it is possible that companies of developed countries will move their operations to these developing countries to escape the obligation to reduce CO2 and other emissions. If this happens, the Kyoto Protocol will in effect have been inactivated. It must also be remarked that, because the industrial development of developing countries is continuously expanding, and these countries are not forced to reduce greenhouse gas emissions, the levels of CO2 in the atmosphere will continue to rise despite the reductions made by developed nations.

Besides the governmental efforts, there has also developed the field of environmental ethics, which deals with a systematic consideration of the moral relationships between human beings and the nature in which they live. As mentioned in Chapter 1 (Section 1.6), environmental ethics has its roots in ancient Greece (recall Hippocrates' work entitled "About the winds, the waters, and the lands"). Plato introduced for the first time an environmental protection law by stating: "Whoever pollutes the water of the other with contaminants has to pay not only the cost of cleaning it, but also a penalty to those who have the authority by the state to receive it" (Laws 845). Contemporary concerns about environmental ethics were expressed by many people (e.g., G. Pinchot, H.D. Thoreau, J. Muir). Comprehensive books on environmental ethics include those of Des Jardins (1997) and Hayward (1994). The major philosophies with which environmental ethics has been described are the following [50]:
- Conservationism (Pinchot): based on the view that wilderness is a resource that must be simultaneously utilized and protected.
- Preservationism (Thoreau and Muir): based on the view that nature must be enjoyed and experienced by humans, and that it is our obligation to protect the wilderness for the enjoyment of future generations.
- Deep ecology (Arne Naess): extends the base of morality to include all life on Earth, including plants and animals.
- Social ecology (Knauer, 1997): places a strong value on human existence (while still appreciating the uniqueness of nature), and considers human interactions as the main problem to be solved.

All the above philosophies have in common the responsibility of all humans (and organizations or businesses) to minimize their impact on nature as much as they can.
9.7.2 Environmental Regulations in the United States

Bills in the United States of America are developed by Congress and become law when signed by the President. The laws contain only the goals; the technical and other
details are studied and provided by the regulatory agencies. The detailed procedures are called "regulations", which are compiled in the CFR (Code of Federal Regulations). Regarding the environment, it was the task of EPA to develop, implement and enforce the environmental regulations. EPA also issues guidance documents that help in the actual implementation of the regulations. On top of these federal statutes, each state develops and enforces local environmental laws and regulations, which may be stricter (but not softer) than the federal ones. Currently, there are many regulations, called 'Acts', regarding air, food, hazardous compounds, etc. Classified according to type of activity/product, these regulations are the following [50, 409, 562, 594]:

Air–Water–Natural Resources–Wastes
- CAA: Clean Air Act (containing National Ambient Air Quality Standards for ozone, CO, particulate matter <10 μm, particulate matter <2.5 μm, SO2, NO2, and lead)
- OPA: Oil Pollution Act
- RCRA: Resource Conservation and Recovery Act
- SDWA: Safe Drinking Water Act
- CWA: Clean Water Act
- TSCA: Toxic Substances Control Act

Manufacturing and Industrial Products
- OSHA: Occupational Safety and Health Act [409]
- PPA: Pollution Prevention Act
- FIFRA: Federal Insecticide, Fungicide, and Rodenticide Act
- RCRA: Resource Conservation and Recovery Act

Consumer Products
- CPSA: Consumer Product Safety Act
- FFDCA: Federal Food, Drug and Cosmetics Act
- FHSA: Federal Hazardous Substances Act
- PPPA: Poison Prevention Packaging Act
- FIFRA: Federal Insecticide, Fungicide, and Rodenticide Act
- PPA: Pollution Prevention Act

Old Waste Landfills
- SARA: Superfund Amendments and Reauthorization Act

Transportation
- HMTA: Hazardous Materials Transportation Act

Feedstocks and Processing
- OSHA: Occupational Safety and Health Act
- PPA: Pollution Prevention Act
Fugitive Emissions
- CAA: Clean Air Act
In addition to federal laws, several executive orders about pollution control have been issued by the President without passage by Congress. These include provisions for energy and natural resource conservation at federal facilities, compulsory compliance with EPCRA (Emergency Planning and Community Right-to-Know Act, 1986), revisions of procurement processes to assure the purchase of energy-saving machinery and the use of alternatives to ozone-depleting substances, encouragement of the use of recycled materials, and the establishment of reuse and recycling programs at federal facilities [562]. The goal of all these executive orders is to show how the government does business with efficient pollution control programs. Details on the various environmental impact and pollution prevention issues, as studied and faced by EPA, can be found at http://epa.gov.
9.7.3 International and European Environmental Control Regulations

On an international basis we have the ISO 14000 series, which consists of the environmental management standards (see also Section 9.2.1 on life-cycle assessment). ISO standards are developed by the International Organization for Standardization (ISO), the world federation of national standards agencies from over 115 countries, established in 1947 with headquarters in Geneva, Switzerland [267]. ANSI (the American National Standards Institute) is the representative of the USA to ISO. We note that "ISO" recalls the Greek word "isos" (= equal). The purpose of ISO is to develop and expand standardization all over the world, with a view to enhancing the international exchange of goods and services (with improved quality, reliability, usability, safety, health, environmental protection and waste reduction). The outcome of ISO's activity is the publication of international agreements in the form of International Standards [267]. One of these international standards is ISO 9001, which sets and describes quality standards for manufacturing processes and equipment from concept to implementation. The ISO 14000 mentioned above was upgraded to a Draft International Standard. ISO 14001 sets the specifications that a product must satisfy in order to be certified as meeting the standards of environmental safety. Similarly, other standards in the ISO 14000 series provide guidelines and rules for a business to develop and realize the respective environmental management system (EMS). In general, the environmental protection system is embedded in the overall management system with equal weighting to product quality, price control, maintenance, workforce, etc. [624]. In the European Union (EU), developments in environmental policy are reported every year in the Environmental Policy Review (EPR), which reviews progress under the 6th Environmental Action Plan (6th EAP); the 6th EAP continually sets and updates the overall framework for EU environmental policy [507].
The main priorities of the 6th EAP are: (i) climate change, (ii) biodiversity loss, and (iii) environmental impacts on human health. The three permanent goals of the EU agenda are:

- Strengthening implementation measures to meet Kyoto commitments
- Launching internal discussion on measures for global emissions reduction after 2012
- Preparing to adapt to unavoidable climate change
9.7.3.1 Climate Change

The EU continually promotes the integration of environmental aspects into other policy areas, with emphasis on transport and development aid. A special effort is also made to secure and enforce more efficient use of energy. Overall, at the time of writing this book, 12 of the 25 Member States have emissions above the linear path to meeting the Kyoto Protocol targets. Working closely with Member States, industry, civil society and academia, the European Climate Change Program (ECCP) has been developed to help the EU determine cost-effective ways of meeting its Kyoto Protocol commitments. Within the ECCP and the respective national climate change programs, a number of policies and measures have been adopted and are (to be) implemented with a view to the 2008-2012 timeframe. An important component resulting from the ECCP is the creation of an EU-wide emission trading scheme to help the EU reduce its emissions of greenhouse gases cost-effectively. Directive 2003/87/EC [143] of the European Parliament and the European Council, which establishes a scheme for greenhouse gas emission allowance trading within the EU and amends Council Directive 96/61/EC, entered into force on 25 October 2003, providing for EU-wide emissions trading from January 2005. [144, 145]

9.7.3.2 Biodiversity

The EU has established the Natura 2000 network of protected areas in the EU-15 territory. The sites, selected to cover habitats and species of conservation concern, cover about 20% of this territory. Good progress has also been shown toward the designation of ecoprotection sites in the new Member States (i.e., extending to the EU-25). As verified at the Malahide conference (May 2004), there is a clear consensus among stakeholders on the measures that have to be taken to protect biodiversity at the European scale. This conference developed priority goals and objectives for meeting the commitment to halt the decline of biodiversity in the EU by 2010. The Bergen op Zoom conference (November 2004) identified specific priority actions on bird conservation. The first meeting of the parties to the Cartagena Protocol on Biosafety specified the documentation compliance mechanisms for genetically modified organisms (GMOs). The parties to the Convention on International Trade in Endangered Species (CITES) agreed to apply stronger controls on trade in a number of endangered species, including an action plan on illegal trade in ivory.
9.7.3.3 Environment and Health

The EU environment and health action plan was presented to the WHO conference on Environment and Health (Budapest), where Environment and Health Ministers from 52 countries adopted the Children's Environment and Health Action Plan for Europe (CEHAP). Better access to environmental information was made available by the European Pollutant Emission Register (EPER), the first Europe-wide public register of emissions into air and water from industrial plants. The EU ratified a United Nations Europe-wide agreement and joined a second global convention to eliminate pollution by Persistent Organic Pollutants (POPs), toxic substances that can travel long distances, persist in the environment, and accumulate in the food chain. The European Environment Agency (EEA), with its extensive networks, plays a useful continuous role in developing more effective and transparent shared environment systems for EU needs, based on modern technologies. The process of regulatory simplification is continually reducing the administrative burden on the public sector and companies, while maintaining high environmental standards. The following thematic strategies, adopted in 2005, greatly facilitate the achievement of further regulatory simplification:

- Communication on climate change: future EU strategy (analysis of the benefits and costs of mid- and long-term climate strategies) [DG ENV.C.2 Climate Ozone & Energy].
- Thematic strategy on air pollution (outline of the environmental objectives for air quality and the measures necessary to meet these objectives) [DG ENV.C.1 Clean Air & Transport].
- Thematic strategy on the prevention and recycling of waste (identification of means to further develop a more sustainable waste management policy by minimizing the environmental impacts of waste, also taking into account economic and social issues) [DG ENV.G.4 Sustainable Production & Consumption].
- Thematic strategy on the sustainable use and management of resources (community policy and measures that allow resources to be used in a sustainable way without further harming the environment) [DG ENV.G.4 Sustainable Production & Consumption].
- Thematic strategy on the conservation and protection of the marine environment (addressing a number of policy areas from the marine environment perspective to assure that different policies and legislative measures provide high levels of environmental protection) [DG ENV.D.2 Protection of Water & Marine Environment].
- Communication on reducing the climate change impact of aviation (options for economic instruments to reduce the climate impact of aviation) [DG ENV.C.1 Clean Air & Transport].
- Thematic strategy on pesticides (measures and initiatives for reducing the impact of pesticides on human health and the environment) [DG ENV.B.4 Biotechnology and Pesticides].
- Thematic strategy on the protection of soils (a cost-effective approach for soil protection in the short, medium and long term) [DG Environment Unit B.1: Agriculture and Soil].
- Communication on biodiversity (priority objectives and actions to meet the EU and global objectives relating to halting (EU) and significantly reducing (global) the decline of biodiversity by 2010) [DG Environment Unit B.2: Nature and Biodiversity].
- Thematic strategy on the urban environment (policies for improving the environmental performance of Europe's towns and cities, and securing a healthy living environment for Europe's urban citizens) [DG ENV.D.4 Health & Urban Areas].
9.8 The Concept of Sustainability

We start by remarking again that humans who live in developed areas and have the means and money to enjoy all material comforts cause environmental degradation via automobiles, electronics and industrial activities, which contribute substantially to pollution. By contrast, poor people reuse most of the goods thrown away by rich people, but may overuse some natural resources (water, wood, forest for heating, etc.). The contribution of these poor people to greenhouse gases, water pollution, and the like is much lower than that of the more developed parts of the world. Sustainability depends on humans using as few resources as possible. To sustain means to support without collapse. Sustainability can be defined in both weak and strong senses. [417] The difference between the various definitions can be easily understood by referring to their assumptions about how far technology and human ingenuity can replace natural resources and ecological services. Strong sustainability definitions are based on the assumption that the possibility of such replacement is very limited or very uncertain, so that it cannot lead to ecologically safe and acceptable industrial growth. Weak sustainability definitions are based on the assumption that effectiveness in the use of resources (due to replacement of natural resources by human ingenuity) will continue to increase as in the past. [107] The historical evidence on the patterns of technological improvement seems to support the weak point of view, which is called 'techno-optimism'. [27] Some specific definitions of sustainability are the following (of course many other definitions exist) [50]:

Definition 9.1 (Robert Gilman). Sustainability is the ability of a society, ecosystem, or any similar ongoing system, to continue operating in the indefinite future without being forced into decline through exhaustion of key resources.

Definition 9.2 (William D. Ruckelshaus). Sustainability is the doctrine that economic growth and development must take place, and be maintained over time, within the bounds set by ecology, the interrelationships of humans and their works, the biosphere, and the physical/chemical laws that it obeys. That is, environmental protection and economic growth are complementary rather than antagonistic processes.
Definition 9.3 (Gro Harlem Brundtland, Norway). Sustainable development meets the needs of today without compromising the ability of future generations to meet their own needs.

Definition 9.4 (Muscoe Martin, 1995). Sustainability means 'holding up' or 'to support from below'. Thus, a society must be supported from below by its members, present and future.

Sustainable development usually refers to ecological sustainability, although other terms such as economic, societal, and cultural sustainability are now visibly entering the scene. The correct approach is to think of a combination of all of them when we speak about sustainable development (or, equivalently, about sustainability). Another way to define sustainability is through the union of all factors that influence it, as given by Olaitain Ojuroye (Nigeria), namely:

SUSTAINABILITY = Safe + Universally accepted + Stable + Technology that benefits all + Antipollution + Improvement in life quality + Nontoxic + Awareness + Beautiful + Indigenous knowledge + Least-cost production + Income + Total quality + Youth
Sustainable development is the latest version of an old ethic concerning the human relationship with nature and the current generation's responsibilities to future generations. A society can be truly sustainable only if it sustains its economic, environmental and cultural resources as a whole, not only in the short term but also in the long term. The three interrelated lessons that follow from the study of sustainability are the following:

- The environment is not a free resource; it is an integral component of the economy.
- Equity between developing and developed countries is a must for sustainability and should be advanced.
- Any entity (from societies and governments to individual persons) must operate with long-term futuristic goals (not with short-term ones). Policies have to be proactive rather than reactive.

In actuality, the transition to a sustainable society needs a proper balance between long-term and short-term goals, focusing on equity, efficiency, and life quality (not only on the quantity of products and services). Other requirements for sustainability are maturity, compassion, and wisdom. [107, 417] The four key strategic functions for sustainable growth and development are the following [38]:
- Assessment
- Research and analysis
- Planning and policies
- Support
These functions must be considered and analyzed in terms of both sustainable economic development and growth at the local, national, and international levels. Once the strategies to achieve the sustainability goals have been developed, the next step is to determine the ways in which these strategies can be implemented. Seven useful principles that can be considered when developing a strategy for sustainable development are the following [50, 484]:

- Integrative Principle (the strategy should be integrative, balancing environmental, social, and economic objectives)
- Focus on Issues Principle (the strategy should address the major structural issues in achieving a future that is economically viable, socially accepted, and ecologically maintainable)
- Goal Orientation Principle (the strategy should be based on clearly defined objectives and priorities compatible with the two previous principles)
- Compatibility Principle (the strategy should be adapted to the policy cycle and institutional culture)
- Consensus Principle (the strategy should have wide public consensus)
- Action Orientation Principle (the strategy should result in practical steps that ensure a long-term, systemic transition in production and consumption patterns)
- Capacity Enhancement Principle (the strategy must involve capacity-building processes that sharpen the sustainability concepts and tools, promote public awareness, and improve skills and competencies)
The indicators that tell us whether we are on the right path to achieving sustainability have been derived from natural ecosystem indicators, and are the following:

- Accurate and repeatable measurability
- Consistency in representing a critical ecosystem component
- Amenability to isolation in the environment
- Understandability in terms of the ecosystem's health
- Understandability and acceptance by the society
- Potential to be linked to other sustainability indicators
- Compatibility and relation to important societal values
Practical examples of sustainability indicators currently in use are the following [50, 198]:

- Economic indicators: income, business, training, quality of life, human development.
- Environmental indicators: air quality, drinking water quantity and quality, land use.
- Resource use indicators: energy, land, hazardous substances.
- Societal and cultural indicators: abuse (e.g., child abuse), racism perception, volunteer rate for sustainability processes.
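To make the use of such indicators concrete, the following sketch (in Python) aggregates a handful of normalized indicator values into a single composite score, in the spirit of composite measures such as the environmental sustainability index discussed in Section 9.9. All indicator names, bounds, and weights are hypothetical illustration values, not data from this book.

    # Minimal sketch: aggregating sustainability indicators into a composite
    # score. All indicator names, bounds and weights are hypothetical.

    INDICATORS = {
        # name: (raw value, worst case, best case, weight)
        "air_quality_index":   (62.0,    0.0,   100.0, 0.3),
        "income_per_capita":   (18000,  5000,  40000,  0.3),
        "energy_per_unit_gdp": (7.2,    12.0,    3.0,  0.4),  # lower is better
    }

    def normalize(value, worst, best):
        """Map a raw value onto [0, 1]; also works when best < worst."""
        score = (value - worst) / (best - worst)
        return max(0.0, min(1.0, score))

    def composite_score(indicators):
        """Weighted average of the normalized indicator values."""
        total_weight = sum(w for (_, _, _, w) in indicators.values())
        return sum(
            w * normalize(v, worst, best)
            for (v, worst, best, w) in indicators.values()
        ) / total_weight

    print(f"Composite sustainability score: {composite_score(INDICATORS):.2f}")

The normalization step is what allows incommensurable quantities (air quality, income, energy intensity) to be combined; the choice of weights remains a policy decision, not a technical one.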
The selection of suitable indicators and the development of a sustainability program is a large and complex process that needs the collaboration of public and private bodies and entities. Let us now look at what has been done to achieve sustainability worldwide. [50] In the USA, the Presidential Council on Sustainable Development (PCSD) was established by an Executive Order (No. 12852). The Council is charged with the development of bold new approaches to integrate economic and environmental policies. The 25 members of the Council are distributed over the following task forces: (i) Eco-efficiency, (ii) Energy and Transportation, (iii) Natural Resources Management and Protection, (iv) Principles, Goals and Definitions, (v) Public Linkage, Dialogue, and Education, (vi) Sustainable Agriculture, (vii) Sustainable Communities, and (viii) Population and Consumption. In the Third World, the activities towards sustainability can be classified as 'indigenous', 'Western', and 'hybrid indigenous and Western'. These categories are visible in all areas of life (economic, agricultural, tourism, social systems, etc.). All of them have obvious benefits and drawbacks regarding the environment. Population growth is a major problem of developing countries, since increases in population cause increases in demand, which result in growth of the social and economic structure, finally leading to quality degradation of the environment. According to the United Nations Population Fund (UNFPA, 1991), two thirds of the CO2 emissions increase, 80% of tropical forest depletion, the reduction and degradation of fresh water quantities, and the degradation of coastal regions are due to population growth. A good model which has been implemented and tested in many developing nations is the so-called 'revised minimum standard model' (developed by the World Bank). Other sustainability models are the 'public sector and management information system' and the 'computable models of general worldwide equilibrium' (including the Forrester world dynamics model). [67, 104, 158, 176, 450] The major role of industry in achieving a sustainable society, by adopting cleaner manufacturing methods and establishing green marketing, is obvious. But individuals can also contribute towards achieving sustainability through changes in life-style (e.g., reducing energy consumption at home, sharing goods rather than owning them individually, etc.). The achievement of human-nature minding industry (clean industry) can be pursued through the following four approaches [50, 182]:

- Precautionary Approach (the potential polluter has to prove that an activity or substance will not produce environmental degradation)
- Preventative Approach (this approach is based on the fact that it is cheaper and more effective to prevent environmental damage than to 'cure' or 'manage' it)
- Democratic Control Approach (people have the right to access information and be involved in decision making; clean production should take into account the opinion of all those affected by industrial activities)
- Holistic Approach (society must follow an integrated and holistic approach to the use and consumption of natural resources)

Concluding, we emphasize again that sustainability should be the top objective of the entire world community. To this end, at the Earth Summit in Rio de Janeiro
(1992), a set of principles was agreed by over 120 countries to guide future development towards sustainability. These principles were based to a large extent on the Stockholm Declaration of the 1972 UN Conference on the Human Environment. In Rio it was realized and declared that long-term economic growth cannot be achieved and sustained without linking it with environmental protection. The Rio Declaration was further strengthened at the World Summit for Social Development at Copenhagen in March 1995. This Summit recognized the strong need to assure equality and equity between women and men, and that social and economic development cannot be achieved in a sustainable manner without the full participation of women. All humans must be at the centre of the worldwide concerns for sustainable development. The Copenhagen Declaration stated clearly that economic development, social development, and nature protection are interdependent and mutually reinforcing elements of sustainable development, and that democracy is the absolutely necessary basis for the realization of societal and human-centered sustainable development. As we mentioned in Section 9.7.1, by now the most specific worldwide environmental control action is the Kyoto Protocol, the purpose of which is to reach by 2010-2012 a minimum reduction of greenhouse gases (carbon dioxide CO2, methane CH4, nitrous oxide N2O, hydrofluorocarbons HFCs, perfluorocarbons PFCs, and sulfur hexafluoride SF6) by an average of 5.2% below the 1990 levels. So far, the largest event in the series of worldwide efforts is the 11th UN Climate Change Conference (11th UNCCC) held at Montreal (28 November-9 December 2005), where several policies and measures were negotiated and adopted to complement and refine the contents and agreements of the Kyoto Protocol. [230, 232] At the 13th UNCCC (13th Conference of the Parties: COP 13), held in Bali, Indonesia (December 2007), a global agreement on an Action Plan and Road Map for the post-2012 framework was achieved. [231] The next Conference of the Parties (14th UNCCC/COP 14) took place in Poznan, Poland (December 2008). [251] At COP 14 several decisions were made covering a broad range of topics, including the Adaptation Fund under the Kyoto Protocol (http://www.iisd.ca/download/pdf/enb12395e.pdf). The 15th UNCCC (COP 15) has been scheduled for 6-18 December 2009 in Copenhagen, Denmark. The 192 countries that signed the United Nations Framework Convention on Climate Change (UNFCCC) will participate, and their representatives and government officials will try to thrash out a successor to the Kyoto Protocol, the first phase of which ends in 2012. According to the UNFCCC executive secretary (Yvo de Boer), major issues that need to be addressed at COP 15 are the following (http://www.guardian.co.uk/video/2008/dec/08/monbiot-yvo-de-boer-climate):

- How much reduction of GHGs are the developed countries prepared to accept?
- What measures will major developing countries (such as India and China) commit to take in order to limit the growth of their emissions?
- What economic help will developing countries get, to be engaged in the reduction of their emissions? How is that money going to be managed?
Some other thoughts and statements of Yvo de Boer about climate change can be found at: http://www.climateactionprogramme.org/climate_leaders/article/view_interview_yvo_de_boer. General information on current events and conferences on environment, climate change, ecology and related topics can be obtained from: www.environmental-expert.com. Useful opinions and news on global warming and climate change can be found at: http://en.cop15.dk (climate thinkers blog). One of the principal proponents of the theory that human activity may be responsible for the global warming of the Earth is the IPCC (Intergovernmental Panel on Climate Change), a UN-funded scientific organization (IPCC, 2007: Climate Change 2007). However, although the majority of climatologists agree with this conclusion, there are many scientists who remain skeptical and point to several natural causes of climatic change during the last century (www.takeonit.com/question/5.aspx). The World Meteorological Organization (WMO) states that the decade 1998-2007 is the warmest on record. The global mean surface temperature for 2007 is estimated at 0.41 degrees C (0.74 degrees F) above the 1961-1990 annual average of 14.00 degrees C (57.20 degrees F) (Global Warming, Encyclopedia of Earth, http://wiki.nus.sg/display/CC/Human+Factors).
9.9 Environmental Sustainability Index

The environmental sustainability index (ESI) is an initiative concerned with the measurement of environmental performance, offering via the web reports, data and map galleries. [229] The ESI is a joint activity of YCELP (the Yale Center for Environmental Law and Policy) [254] and CIESIN (the Center for International Earth Science Information Network) of Columbia University [237], in collaboration with the WEF (World Economic Forum) and the JRC (Joint Research Centre) of the European Commission. The documents offered to interested readers provide in-depth details on the analytical approach, quantitative methodology, and data sources upon which each subsequent version of the ESI is based. More information and data on environmental sustainability indicators are provided on SEDAC's environmental sustainability subtask website. Four definitions of sustainability were given in Section 9.8. Two more definitions are the following. Environmental sustainability is:

- The long-term maintenance of the ecosystem components and functions for the future generations [243]
- Working and behaving in a way that protects the sources of raw materials to ensure that they are available in an ongoing way to future generations [236]

A glossary of terms related to ecology and environment can be found in [236]. A presentation of the concept of sustainable development broken into its three constituent parts, viz. environmental sustainability, economic sustainability, and sociopolitical
sustainability can be found in Wikipedia. [225] Sustainable development is development that consumes only the natural resources that can be supplied by the local environment, and the financial resources that can be provided by the local communities and local market, and so has the potential to carry on indefinitely. Appropriate development is development that ensures the correctness of the development actions technically, economically, socially and culturally, so that it is acceptable to all, economically affordable, and sustainable in the context in which it is implemented. Sustainable development has been pictorially represented using its three constituents (economic growth, social progress, environmental protection) in the following three alternative ways [225]:

- Pillar representation
- Nested circles representation
- Intersecting circles representation
In the first, the building of sustainable development is supported by the three pillars (constituents). In the second, the circle representing economic growth is bounded by the circle of social progress, and both of them are encircled by environmental protection. In the third, sustainable development lies at the intersection of the three circles. Here, we present a new representation, called the 'systemic representation of sustainable development', which illustrates the synergy, interdependence, and feedback of the social progress, nature protection and economic growth subsystems towards sustainable development (Fig. 9.1). A good reference on the subject is the book 'Minding Nature: The Philosophers of Ecology' edited by David Macauley. [328]
Fig. 9.1 Systemic visualization of sustainable development (S stands for synergy). The diagram connects: human values, rights and choices; desired social indicators; economic indicator settings; desired environmental quality (clean air, clean water, land to live, good food quality); the subsystems of social progress, nature protection and economic growth; competitive and sustainable economies; feedback corrective, predictive and adaptive actions; and the actual life of human and nature.
9.10 A Practical Guide Towards Nature-Minding Business-Automation Operation

Nature-minding (green, environmentally conscious) automation and design is now adopted at several levels by most manufacturers of continuous and discrete products in order to comply with state and international environmental regulations, and similar techniques are also used by enterprises and business companies. Details on how some paradigm companies are implementing suitable programs for producing nature-minding products and services can be found in [233, 235, 244, 245, 247, 248]. Review material on nature-minding automation and business is provided in [88, 141, 142, 180, 181, 188, 418, 454, 546, 623]. The general and global way of evaluating the effects of technological and other human development activities on the environment, and of improving them for the environment, has been described in Section 9.2 (Life-cycle and Environmental impact assessment). Our purpose here is to outline a small set of practical rules that help any company and business to become and grow green (nature-minding). As already described at several places in this book, a nature-minding company must use renewable resources so as to be environmentally sustainable. All operational phases of a nature-minding organization, viz. the design, purchasing, development, production, and service phases, should have a positive impact on nature. One of the fundamental elements is to contribute to slowing down the process of global warming and climate change. Some basic questions that have to be addressed by the business company are the following:
- How does being nature-minding integrate with the company's business plan?
- Does the company get a competitive advantage by being nature-minding?
- Does the company want to be certified as being nature-minding?
- How can any purchased products or services be qualified as nature-minding?
9.10.1 The Four Environmental R-Rules

The four fundamental practical operational rules for being nature-minding/green are the following (www.factmonster.com/ipka/A0775891.html, www.gobiotrend.com):

1. Reduce consumption, waste, and pollution
2. Reuse what is available
3. Recycle everything that can be recycled
4. Replenish what is consumed
The above set of rules is known as the 'four R's'. There are many practical everyday ways to follow the 'first R rule', e.g., reduce car use, take public transportation, buy in bulk, turn off lights, install timers, keep cars and machines well maintained, recycle automotive fluids, reduce water consumption, avoid using disposable products (paper plates, cups, plastics), buy durable goods and energy-efficient appliances, reduce to a minimum the quantities of
hazardous chemicals purchased or stored, ensure that the 'power save' mode of all electronic equipment is operating properly, etc. To comply with the 'second R rule' the company must purchase reusable products (e.g., rechargeable batteries, washable towels), save packing material received, use whiteboards and e-mail to replace sticky notes, use reusable shopping bags and resealable containers (instead of plastic bags), refill ink cartridges, etc. Some ways of following the 'third R rule' include recycling toner cartridges and purchasing refilled toner cartridges; recycling paper, cardboard, plastic objects, bottles, and cans; recycling e-waste (cellular phones, computers, TVs, etc.); and recycling paint by reprocessing and reblending. We do not throw such materials away but bring them back into industry, and thus avoid using more exhaustible material. Finally, the 'fourth R rule' is replenish. Replenishing means, for example, that when we take a tree out, we replenish what we have taken.
9.10.2 Four More Nature-Minding Rules

In addition to the above four basic R-rules, a company that wants to operate and develop as a nature-minding company has to follow several other rules. Four very helpful ones are the following:
- Use a nature-minding checklist.
- Be nature-minding from start-up.
- Review and improve the company's operation.
- Educate the company's employees.
The nature-minding checklist must contain elements for all aspects, namely: reduction of hazardous materials (those that cannot be recycled or disposed of without proper care), utility bills (with the goal of reducing costs), waste reduction, water conservation, energy conservation, and compliance with environmental regulations. It is most beneficial and more economic to create a 'nature-minding' company from start-up. This can be done by developing a proper nature-minding mission and statement of values, and integrating it into the business plan. The company's green mission and operation should be promoted on its website, with an elegant, attractive logo. Another very important step is to become a member of a group of companies that also operate in a nature-minding way, and to get a certificate of 'green business'. The company's advertisements should be properly written and include the relevant nature-minding information. The company's bills can also be green (electronic, PDF invoices, online forms instead of paper ones, scanned contracts, etc.). Nothing should be static. The company has to review from time to time its operations over the entire product lifecycle (including selection of products, purchasing of products, use of products, disposal of waste, etc.). An important issue to consider is the Total Cost of Ownership (TCO) of an item purchased, i.e., how much the company has to pay for this product over its entire lifetime. Usually, nature-minding
products have higher initial costs, but in the long run their use leads to considerable savings. The TCO issue should be carefully examined in all cases. Products must be used and disposed of in the safest way, and stored suitably, protected from damage. Toxic chemicals should be stored in secured, locked places designated for chemical storage. The company must find ways to reuse the available products as much as possible, and recycle everything that can be recycled, contacting the proper local waste-treatment or recycling organization. The company must review its impact on the environment via proper analysis or computer programs, and redesign its processes with a view to minimizing this impact. Finally, the company must create an educational program for its employees to train them in the four environmental R-rules (Reduce, Reuse, Recycle, Replenish) and the other nature-minding rules. Here, any printed reference material is very helpful. The company's website, if it exists, is one of the most effective tools for the 'green training' of the personnel. A very general and useful checklist that involves five principal rule categories is provided in the book 'Design + Environment'. [316] These rule categories, which are fully discussed in the book, are listed below (a minimal sketch of how such a checklist can be scored follows the list):
- Choose materials of low environmental impact.
- Avoid hazardous materials.
- Adopt cleaner production/operation processes.
- Design for waste minimization.
- Optimize the efficiency of energy and water use.
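As an illustration of how a design team might track such a checklist in software, the sketch below (in Python) stores yes/no items under each of the five rule categories and reports a per-category compliance score. The individual items and the scoring scheme are our own illustrative assumptions, not content from the cited book.

    # Minimal sketch of an ecodesign checklist with per-category scoring.
    # The items listed under each category are hypothetical examples.

    CHECKLIST = {
        "Low impact materials": {
            "Recycled content specified": True,
            "Renewable material considered": False,
        },
        "Hazardous materials avoided": {
            "No restricted substances in bill of materials": True,
        },
        "Cleaner production": {
            "Solvent-free process selected": False,
            "Process energy audit done": True,
        },
        "Waste minimization": {
            "Scrap take-back loop in place": True,
        },
        "Energy and water efficiency": {
            "Standby power below target": True,
            "Water reuse in cleaning stage": False,
        },
    }

    def category_scores(checklist):
        """Return the fraction of satisfied items per rule category."""
        return {
            category: sum(items.values()) / len(items)
            for category, items in checklist.items()
        }

    for category, score in category_scores(CHECKLIST).items():
        print(f"{category}: {score:.0%} of items satisfied")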
Many other detailed guidelines for nature-minding design and sustainable business development can be found in the literature, but essentially all of them contain more or less the same 'core rules'. [68, 381, 408, 465] For example, the Okala Guide [408] provides an introduction to environmentally and ecologically sustainable design, and envisions a future where the value of the global ecology, and of the work that assures its protection, is of primary concern. The Okala Guide provides updated Lifecycle Impact Assessment methods, impact factors for 240 materials and processes, global climate change values (in carbon dioxide equivalents) for these materials, state-of-the-art design rules for recycling and disassembly, and extensive discussions and explorations in environmental ethics. The Okala ecodesign checklist contains seven main classes of design rules, namely:
- Design for innovation
- Design for low impact materials
- Design for optimized manufacturing
- Design for low impact use
- Design for optimized product lifetime
- Design for optimized distribution
- Design for optimized end-of-life
This strategic checklist serves as a basis for reviewing products as they are developed. Okala means 'life-sustaining energy' in the indigenous Hopi language. The Okala Guide is offered as a course for practicing and beginning designers that can be easily integrated with existing nature-minding automation and design courses.
9.11 Nature-Minding Economic Considerations

Minding nature incurs a cost, but not minding nature also has a cost. In the first case a company pays for implementing a pollution prevention (P2) program; in the second case it pays compulsory 'fines' or 'taxes' if the pollution caused by its operation rises above the threshold set by the regulating/state authority. Our purpose here is to provide a short discussion of these economic considerations of environmental protection, which are collectively included in the so-called field of Environmental Economics. [7, 193, 196] As we already know, an LCA or EIA evaluates a product or activity over its entire lifecycle. But to be worth implementing by the interested company, it must be complemented with economic analysis and assessment. This is so because a P2 proposal that is very expensive to implement may never be adopted, no matter how good it is for the environment or for the protection of human health. The comparison of P2 alternatives must be made in an integrated way, taking into account both the environmental/human-health issues and the economic issues. [7] Economics deals with the study of producing, marketing and selling goods and services, and includes pricing, demand and human labor issues, as well as the way humans choose to use scarce or limited resources (such as land, water, minerals, energy, forest, equipment, know-how, human power, etc.). The relation of demand to price is formally expressed by the 'price elasticity of demand' ratio E_pd for a product, which is defined as follows:

E_pd = (percent increase in demand) / (percent decrease in price)

If E_pd > 1, demand is elastic; if E_pd < 1, demand is inelastic. Price elasticity plays a dominant role in assessing the feasibility of a candidate P2 program. If the P2 process results in even a small reduction in price (due to cost savings associated with the P2 process), a considerable increase in the market demand for the product may occur. But if demand has high price elasticity and the price of a commodity must be raised to pay for the P2 activity, one may experience a large reduction in the commodity's demand, which may cause serious financial problems for its production. Thus, a detailed analysis is necessary for any product before adopting a specific P2 program. [50] To protect the public and the environment, governments and municipalities set environmental control laws and regulations expressed by well-documented and well-justified standards (and thresholds) that all activities of the society should respect. [405, 596, 598] Those who do not respect the regulations pay penalties, not only in the short term but also in the long term (due to possible future environmental impacts arising from disposing of waste materials in chemically insecure landfills). All these costs should be included in the business financial plans in the same way as salaries and raw materials or chemicals are included in a project's cost analysis.
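As a small numerical illustration of this ratio, the sketch below (in Python) computes E_pd from hypothetical before/after demand and price figures and classifies the demand as elastic or inelastic; all numbers are invented for illustration.

    # Minimal sketch: price elasticity of demand, E_pd, from the ratio above.
    # All prices and quantities are hypothetical illustration values.

    def price_elasticity(demand_before, demand_after, price_before, price_after):
        """E_pd = (percent increase in demand) / (percent decrease in price)."""
        pct_demand_increase = (demand_after - demand_before) / demand_before * 100
        pct_price_decrease = (price_before - price_after) / price_before * 100
        return pct_demand_increase / pct_price_decrease

    # Suppose a P2 program's cost savings allow a 4% price cut, and demand
    # then rises by 6%: E_pd = 6/4 = 1.5, i.e., elastic demand.
    e_pd = price_elasticity(demand_before=1000, demand_after=1060,
                            price_before=50.0, price_after=48.0)
    print(f"E_pd = {e_pd:.2f} ->", "elastic" if e_pd > 1 else "inelastic")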
Environmental economics is based on the concept of market failure, i.e., failure of the market to allocate resources efficiently. If the market allocates limited/scarce resources without achieving the maximum social welfare, we say that a market failure takes place. Typical forms of market failure are [242]:

- Externalities
- Non-excludability
- Non-rivalry

Externality: An externality occurs when somebody makes a choice which affects other people and this is not taken into account in the market price (e.g., a company emitting pollution usually does not take into account the cost this pollution imposes on others).

Non-excludability: In cases where excluding people from access to a rivalrous environmental resource has an excessive cost, the market allocation may not be efficient.

Non-rivalry: Public goods may lead to market failure in the sense that the market price does not reflect the social benefits of their provision (e.g., protection from climate change impacts is a public good, since its provision is both non-excludable and non-rival).

To overcome the negative effects of the above market failures in connection with the environment, governments can apply one or more of the following alternative solutions [50, 242]:

Environmental Regulations: These are designed using a cost-benefit approach. Typically, the regulations are enforced by fines in the form of taxes whenever pollution exceeds the prescribed thresholds.

Quotas on Pollution: These assure that reductions in pollution are obtained at minimum cost. Using these quotas, a company reduces its own pollution by itself only if doing so costs less than paying someone else to make the same reduction. Quotas on pollution are implemented in the form of marketable permits. A scheme of marketable (tradeable) permits gives a firm the right to pollute the air up to a certain upper limit, set so as to secure the desired air quality. Additional pollutant removal by the firm is of course desirable, but not required; usually industrial plants do not remove more pollutants than necessary, since this imposes additional costs. But if an industry (with the best available technology) releases lower amounts of pollutants than the allowable ones, the industry is entitled (under the marketable permit scheme) to use its pollution credits at another pollution source in the company. Pollution credits can also be sold on the open market to the highest bidder. The marketable permit scheme assures that the global goal of maintaining environmental quality can be achieved while leaving the free market to establish who will be responsible for the clean-up (a minimal numerical sketch of permit trading is given after this list).

Taxes and Tariffs on Pollution: High taxes on pollution normally discourage polluting and provide a dynamic incentive. In some cases, instead of a direct tax, a tax-on-pollution policy (the so-called green tax) is applied.
Better Defined Property Rights: Suppose that a factory has the right to pollute, and the humans having their homes in the area near the factory have the right to clean air and water. Then either the factory pays those affected by the pollution, or the people could pay the factory not to pollute.
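Returning to the marketable-permit scheme above, the sketch below (in Python) shows how two hypothetical firms with different abatement costs each choose the cheaper of abating or buying permits. The emissions, cost figures, and fixed permit price are invented illustration values, and the matching of permit buyers with sellers is abstracted away.

    # Minimal sketch: compliance choice under a marketable-permit scheme.
    # Emissions, abatement costs and the permit price are all hypothetical.

    CAP_PER_FIRM = 80      # permits allocated to each firm (tons of pollutant)
    PERMIT_PRICE = 50.0    # market price per permit (currency units per ton)

    firms = {
        # name: (uncontrolled emissions in tons, abatement cost per ton)
        "CleanTech Ltd": (100, 20.0),  # low-cost abater
        "OldPlant Inc":  (100, 90.0),  # high-cost abater
    }

    for name, (emissions, abatement_cost) in firms.items():
        excess = emissions - CAP_PER_FIRM  # tons above the allocated permits
        # Each firm picks the cheaper of abating itself or buying permits.
        if abatement_cost < PERMIT_PRICE:
            cost, action = excess * abatement_cost, f"abates {excess} tons itself"
        else:
            cost, action = excess * PERMIT_PRICE, f"buys {excess} permits"
        print(f"{name}: {action}, compliance cost = {cost:.0f}")

The point of the mechanism is visible even in this toy case: the low-cost abater cleans up itself, the high-cost abater pays the market, and the overall cap is met at lower total cost than uniform regulation would achieve.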
As we saw in Section 9.10.2, a nature-minding company takes into consideration the total cost of ownership (TCO) of an item purchased. Usually nature-minding products are cheaper in the long run than non-nature-minding ones, although their initial costs are higher. In the same way, before deciding to use a product or to apply a nature-minding process, a company should carry out a total cost assessment (TCA), otherwise called life-cycle costing or environmental accounting. [50] A TCA of a P2 project will help a firm find out whether a certain investment will be economically beneficial to it. To this end, the cash flows must be calculated over the whole life of the project, and the same must be done for the profitability indexes (a discounted cash-flow sketch is given after the list below). Environmental accounting assesses the environmental cost and performance results of:

- Alternative locations for the company's premises
- Alternative production materials (raw materials, chemicals, solvents, cleaners, etc.)
- Alternative product or process designs
- Alternative suppliers
- Alternative packaging and delivery systems
- Alternative waste management and recycling processes
- Return of packaging or discarded items
- Application of just-in-time (JIT) or build-to-order (BTO) programs
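A minimal sketch of the cash-flow side of such a TCA follows: it discounts the yearly savings of a hypothetical P2 investment to a net present value (NPV), one common profitability index. The upfront cost, savings, and discount rate are illustrative assumptions, not figures from this book.

    # Minimal sketch: net present value of a hypothetical P2 investment.
    # Upfront cost, yearly savings and discount rate are invented figures.

    initial_cost = 120_000.0           # P2 equipment purchase and installation
    yearly_savings = [30_000.0] * 6    # avoided fees, materials, energy; 6 years
    discount_rate = 0.08               # assumed cost of capital

    def net_present_value(cost, savings, rate):
        """NPV = -cost + sum over t of savings_t / (1 + rate)**t."""
        return -cost + sum(s / (1 + rate) ** t
                           for t, s in enumerate(savings, start=1))

    npv = net_present_value(initial_cost, yearly_savings, discount_rate)
    print(f"NPV = {npv:,.0f} ->", "worth adopting" if npv > 0 else "not profitable")

With these assumed figures the NPV is positive (roughly 18,700 currency units), illustrating how a P2 project with a higher initial cost can still be the economically preferable choice over its life.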
A good, regularly updated source of information on the analysis of the economic and health impacts of environmental regulations and policies is the site of EPA's National Center for Environmental Economics (NCEE). NCEE analyzes important policy decisions and makes recommendations to EPA based on in-depth economic and other related studies. Guideline economic topics included in this site at the time of finalizing the present book (June 2009) are:
- Discounting future benefits and costs
- Establishing a baseline
- Analyzing benefits
- Analyzing costs
- Distribution analyses
Surveys on pollution abatement costs and expenditures (PACE) are also provided. Emissions fees or taxes are set equal to the environmental damage caused by the pollutant, i.e., it is actually the free-market system (rather than reliance on compulsory regulations) that forms the basis for pollution abatement. If the fee is sufficiently high that the firm can no longer pollute and still be price-competitive, then there is a strong incentive for the firm to acquire pollution control equipment so as to avoid the fee. We close our discussion on nature-minding economics by summarizing the guiding principles concerning the international economic aspects of environmental
policies recommended by the OECD Council. These guiding principles, which must be observed by the Governments of OECD Member countries (presented here in a short form by the author), are the following [405]:
9.11.1 Cost Allocation: The Polluter-Pays Principle

- Environmental resources are exhaustible and their use may lead to deterioration. If the cost of this deterioration is not adequately incorporated in the price system, the market fails to reflect the scarcity of these resources (nationally and internationally). Public measures are thus needed to reduce pollution and assure a better allocation of resources.
- In many cases the reduction of pollution beyond a certain level will not be practical, or even necessary, because of the cost incurred.
- The principle to be used for properly allocating the costs of pollution prevention and control measures is the so-called 'Polluter-Pays Principle', which implies that the polluter should bear the cost of carrying out the above measures imposed by the public authorities. (Recall that this is exactly the principle introduced by Plato in connection with the water pollution of his times; see Section 9.7.1.) This principle should be followed by Member countries, although exceptions or special arrangements can be made (particularly in transition periods), provided that no significant deviations and distortions are incurred in international trade and investment.
9.11.2 Environmental Standards

- Due to differing national environmental policies, different assimilative capacities of the environment in its present state, and different social objectives and priorities, a very high degree of harmonization of environmental policies may be difficult to achieve in practice.
- Where true reasons for differences do not exist, Governments should try to harmonize environmental policies (e.g., with respect to timing and the general scope of regulation for particular industries) to avoid unacceptable disruption of international trade patterns and resource allocation.
- Environmental protection measures should not create non-tariff barriers to trade. For internationally traded products where significant obstacles to trade may exist, Governments should seek common standards for polluting products, with mutually agreed regulations and timing.
- Measures taken within an environmental policy, regarding polluting products, should be applied following the principles of national treatment (identical treatment of similar domestic and imported products) and non-discrimination (identical treatment of imported products regardless of their national origin).
- The definition of common, mutually agreed OECD procedures for validating conformity to standards created for environmental protection and control is highly desirable.
- According to the GATT rules, differences in environmental policies should not lead to the imposition of compensating import levies or export rebates, or measures having equivalent effects, created to balance out the consequences of these price differences.

These guiding principles were compiled in 1995 by the Pollution Prevention and Pesticide Management Branch of the Ministry of Environment, Lands and Parks (British Columbia, Canada). They remain valuable and valid, since many of them have not yet been fully adopted and implemented by all OECD countries. Three dynamic sources of information on environmental economics, which include literature, seminars and conferences in the field, are the websites of AERE (Association of Environmental and Resource Economists) [4], EAERE (European Association of Environmental and Resource Economists) [127], and ISEE (International Society for Ecological Economics) [265].
9.12 Nature-Minding Organizations

The protection and conservation of nature is of continuous concern worldwide, and a large number of organizations (public and private) currently exist, falling into two principal categories:

- Environmental organizations (which have environmental protection as their main task)
- Nature conservation organizations (an example being Greenpeace)
At the other extreme there are environmentally destructive organizations (e.g., some military organizations). Usually the destruction of the environment is an unintended side effect of the activities of such organizations, typical examples of which are industrial plants. Another class of organizations strongly affecting the environment is the class of so-called 'utilities', organizations concerned with essential functions of modern life, such as the delivery of energy and the removal of waste. The question is how, and how much, these organizations are trying to reduce the environmentally damaging effects of the delivery of their products and services. All environmental and nature conservation organizations have individual web homepages where they inform the international society about their scope, their roles and their contribution towards the achievement of nature-minding societal, industrial and technological operations. In particular, many of them provide useful guidelines for environment protection measures and nature preservation. For example, on EPA's website one can find guidelines that can be used by everybody, every day and everywhere, i.e., at home and in the garden, at work, at school, during shopping, in the community, and on the road. The presentation of
the environmental and related topics on the EPA homepage, which covers a large repertory of issues, is organized alphabetically as follows: "Air, Clean up, Compliance and Enforcement, Economics, Ecosystems, Emergencies, Environmental Management, Environmental Protection Agency, Environmental Technology, Government, Human Health, Industry, International Cooperation, Pesticides, Pollutants/Toxics, Pollution Prevention, Radiation and Radioactivity, Research, Treatment and Control, Wastes, Water". Analogous information is provided on the website of the EEA (European Environment Agency), where a glossary of environmental terms is also included, together with up-to-date statistical data and other information concerning Europe's environment. In particular, a new EEA report called "Waste Without Borders in the EU", devoted to the investigation of the increase in cross-border waste shipments, reveals that the number of reported illegal shipments of waste in the EU is increasing, and that this waste is in fact disposed of within EU borders. Two European thematic organizations associated with the EEA are the following:
is part of EIONET (The European Environment Information and Observation Network) and has, among its activities, the networking with experts in the EU on the quality, harmonization and data exchange in “land use and spatial information” and particularly in building capabilities. EU-OSHA (The European Agency for Safety and Health at Work) which aims “to make Europe’s workplaces safer, healthier and more productive”. This is achieved by bringing together and sharing knowledge and information, to promote a culture of risk prevention. Through the European Risk Observatory new types of risks to the safety and health of workers are identified and the trends and possible changes in the working environment are anticipated. In the following we give a number of websites of nature-minding organizations, additional to those already provided in other places of this book, noting that they represent only a small subset of those actually existing. EU-OSHA
EU-OSHA: http://osha.europa.eu/en
ETC-LUSI: http://etc-lusi.eionet.europa.eu
Pacific Environment: www.pacificenvironment.org
Seacology: www.seacology.org
Jane Goodall Institute (for wildlife research, education and conservation): www.janegoodall.org
Green-Stone Organization: www.green-stone.org
Nature Conservation: www.Ecologic.org
World Health Organization (WHO): www.who.int/phe/en
ADEQ Office of Border Environmental Protection: www.azdeq.gov/obep/partner.html
The International Ecotourism Society: www.ecotourism.org
Animal Protection Organization: www.greenpeople.org/animalrights.htm
Environmental Awareness: www.AudubonInternational.org
Wikipedia List of Environmental Organizations: en.wikipedia.org/wiki/List_of_environmental_organizations
European Business Awards to Green Industries: ec.europa.eu/environment/awards/index_en.htm
Co-op America (Strategies for a Better World): www.coopamerica.org
NACEC (North American Commission for Environmental Cooperation): www.cec.org
Lists of Environmental Protection Organizations: www.bugbog.com/directory/environment.htm and www.hotfrog.com/Products/Environmental-Protection_Organization
Yahoo Environment and Nature Organizations List: dir.yahoo.com/Society_and_Culture/environment_and_nature_organizations
Air Transport Bureau Environment Section: www.icao.int/env
Canadian Environmental Partnership: http://sealhunt.ca/MainPages/Partnerskjl.html
The majority of these sites provide, in addition to data and information on the organizations' activities, useful free tutorials and techno-economic material which, overall, covers all major issues of the implications of modern life for nature and the measures taken worldwide for their effective minimization. The proponents of environmental protection send in all directions the message: "Look how dirty the water and atmosphere around an industrial plant are, and how terrible these waste dumps are" (Fig. 9.2), whereas the nature conservationists say: "Look how pretty and useful all the elements of nature are, which we are obliged to care for and save" [561] (Figs. 9.3-9.5).
Fig. 9.2 Air pollution by a wax factory (Source: http://www.pbase.com/homerhomer/image/37264606/medium, By permission, Copyright Peter Kozikowski)
Fig. 9.3 Incredible sunset (Photo by Peter & Jackie) (Source: http://www.pbase.com/mr2c280/image/4317204/medium, By permission)
Fig. 9.4 Stunning coastal drive (Photo by Peter & Jackie) (Source: http://www.pbase.com/mr2c280/image/28167196/medium, By permission)
Fig. 9.5 A beautiful lake (Photo by Peter & Jackie) (Source: http://www.pbase.com/mr2c280/image/39808489/medium, By permission)
Chapter 10
Modern Automation Systems in Practice
Machines will be capable of doing any work Man can do.
Herbert Simon

Eventually, robots will make everything.
Marvin Minsky

Technology makes it possible for people to gain control over everything, except over technology.
John Tudor
10.1 Introduction

Automation has now entered almost all areas of human activity (industrial, economic, societal, domestic, medical). Aircraft, ships, automobiles, and trains are automated. Offices, educational systems, enterprises, hospitals, surgical operations, power generation plants, physical/chemical plants, robots, houses, entertainment facilities, and so on, are automated to one degree or another. Our purpose in this chapter is to give a few representative examples in which automation, combined with human interaction, has already been applied and has found wide public acceptance. These examples are the following:
- Office systems
- Railway systems
- Aviation systems
- Automotive systems
- Sea transportation systems
- Robotic systems
- Intelligent building systems
- Computer-integrated manufacturing systems
- Continuous process plants
- Environmental systems
It is noted that, although automation is currently applied with success in the above and other systems, there are still many problems to be solved for even better, safer, more economic, more human-minding, and more environment-minding operation.
10.2 Office Automation Systems Office automation systems are information systems that are able to handle, process, transfer, and distribute all the data/information involved in the operation of an enterprise or organization. In the handling of information, the following functions are included: acquisition, registration, storage, and search. The type of information treated should be recognizable by the human, and so an office automation system must be capable to deal with oral, written, numerical, graphic, and pictorial information. Other names of office automation systems are: office information systems, “bureautique”, etc. Office automation involves the following components:
- Computer systems (hardware/software)
- Communication network(s)
- Procedures
- Human decisions and actions
and uses the following technologies:
- Data processing
- Word and text processing
- Image processing
- Voice processing
- Communications processing
Actually, a modern automated office uses, in an integrated and synergetic way, concepts, techniques, and tools from the following three disciplines:

- Information processing
- Communications
- Office technology
as shown pictorially in Fig. 10.1. Office automation is one of the newest achievements of information and computer science, and started being applied immediately after the appearance of microcomputers and word processors. The purpose of office automation systems is to make the office more 'humanized': the machines perform all the routine-like, non-mental tasks, leaving to the humans the tasks that need thinking and responsibility. According to M. Zisman (MIT), the evolution of office automation from the end of the seventies up to now is as shown in Fig. 10.2.
Fig. 10.1 The synergy of technologies in the modern office. The diagram shows the modern office at the intersection of information processing (computer, database, operating system), communications (remote terminals; data, voice, facsimile and image traffic; fax, telex), and office technology (typewriter, computer keyboard, photocopy machine, scanner, laptop, word processor).
Fig. 10.2 The evolution stages of office automation (investment versus time): 1 Introduction (1975-1978), 2 Extension (1979-1984), 3 Standardization (1984-1990), 4 Maturity (1990- )
The goals of each stage in Fig. 10.2 were as follows: Stage 1:
Introduction
Technological progress Productivity/cost improvement Text processing Use of paper
Stage 2:
Extension
Replacement of paper Automated means (mail, telephone, diary)
4 Maturity (1990- )
Time
196
10 Modern Automation Systems in Practice
Organization/system approach Experimental future offices
Stage 3:
Integration Automated processes/procedures Compatibility Active systems with memory Abolition of routine tasks
Stage 4:
Standardization
Maturity
Operational stability Decision support Integration refining Technology assimilation Work methods modification
The operations of an office are related to the following:
The type of information (texts, tables, numbers, pictures, graphs, graphical representations, etc.)
The presentation and modification of the above types of information
The reproduction and distribution of the information to the people needing it
The transmission of the information to remote agents via computer networks (e.g., Local Area Networks/LANs)
The retrieval of documents
The communication and interaction through the Internet (electronic mail) and world-wide web is today the primary way of transmitting information (textual, graphic, video, etc.) between people, offices and organizations, and has dramatically changed the style of office automation systems. Currently, concepts like e-commerce, e-training, e-advertising, e-conferencing, e-entertainment, etc., are used to denote the respective human activities performed electronically through the Internet. All these possibilities are changing the life style of the modern human being, unfortunately not in all cases in a better direction from a cultural point of view. An example of computers in an office environment is shown in Fig. 10.3.
10.3 Automation in Railway Systems

A railway transportation system is (like any other public transportation system) a typical public system and must have high quality levels of serviceability, reliability, and safety. Attempts are currently made to achieve and promote these requirements using automation through microcomputers. The principal subsystems that have to be incorporated in a railway system as a whole are [259, 341, 630]:
Fig. 10.3 The computer in the office environment (Source: http://mormonsoprano.files.wordpress.com/2009/01/typing-at-office-computer.jpg)
Train Traffic Control Subsystem This subsystem deals with train traffic monitoring, quality improvement, labor saving, and improvement of service for passengers. The above are performed by route control, bulletin display, automatic public address, and information transmission to the train according to the train schedule.
Automatic Train Operation Subsystem Automatic operation plus maintenance of safety are achieved via automatic control of acceleration, constancy of speed, stopping at predetermined points at the stations, and car public address.
Electric Power Supply Control Subsystem In this subsystem, quality improvement is performed by observing the status of equipment in substations and electric equipment rooms, by scheduling, and by power monitoring.
Information Transmission Subsystem Here, the unification of transmission lines linking substations, and the automation of issued-ticket data processing, are implemented by the control center, which receives all the data and distributes them to the proper subsystems.
Automatic Car Inspection Subsystem This subsystem deals with the automated and manual car maintenance operations, which are performed by inspecting the car-mounted equipment without breaking the train formation, and by managing the historical data of the cars.
Supporting Business Managing Subsystem Here, automation and labor saving in business management are carried out by a wide repertory of data processing operations, such as making various reports of issued-ticket data, accounting data processing, and budget management.
As an example we mention that the traffic control system of the Kobe City Subway (installed in June 1983) is of the decentralized type and uses, as the data
transmission subsystem, the optical ADL-Net (Autonomous Decentralized Loop), which has the architecture shown in Fig. 10.4 [341, 630].

Fig. 10.4 Architecture of the ADL-Net

The ADL-Net has the following principal features:
It is a double loop (two unidirectional loops assigned to transmit messages in opposite directions).
Each network control processor (NCP) is connected to an adjacent NCP on the same loop and to a partner NCP on the other loop.
Each pair of NCPs is connected online to one host processor.
There is no transmission right that occupies the loops; thus, whenever a transmission NCP wants to send messages to the loop, it can do so.
Every NCP has the same fault detection and recovery mechanism (this is possible due to the decentralization concept).

The main role of the human operator (driver) in a train is to control the speed (by predicting uphills and downhills). He (she) must be aware of the desired speed limits at the various points along the track where there are curves, switches, grade crossings or congested population areas. Also, the driver must be aware of any limits associated with different weather conditions or maintenance situations that exist at certain locations. Furthermore, the train driver must be in contact with the central controller to be informed about possible unanticipated train movements, etc. The main difficulty in train operation is caused by the train's very high momentum, which implies that a train moving at, say, 300 km/h needs up to 3 km to stop, even in emergency braking situations. At present, the automatic speed controllers installed in some trains around the world cannot apply the necessary braking when another train is detected immediately ahead. The on-board automatic train control (ATC) system developed in Japan consists of three subsystems, namely: the automatic train supervision (ATS) subsystem, the automatic train protection (ATP) subsystem, and the automatic train operation (ATO) subsystem. The ATO operates the train in place of the train driver, and is
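As a rough sanity check of this stopping-distance figure (an illustration added here, assuming a constant emergency deceleration of about $a = 1.2\ \mathrm{m/s^2}$, a typical order of magnitude for steel-wheel-on-rail braking), the kinematic stopping distance is

$d = \frac{v^2}{2a}, \qquad v = 300\ \mathrm{km/h} \approx 83.3\ \mathrm{m/s} \;\Rightarrow\; d \approx \frac{83.3^2}{2 \times 1.2} \approx 2.9\ \mathrm{km},$

which is consistent with the kilometer-scale emergency braking distances mentioned above.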
divided into two units: (i) the navigation control unit (NVC), and (ii) the driving control unit (DVC). These units have Motorola-6800 microprocessors and are connected via serial communication lines. The NVC unit performs the required supervisory control functions (communication control to/from ATS, passenger addressing, emergency control, door control), and the DVC unit performs the required dynamic control functions of the train (departure control, speed-maintaining control, station stop control). The source programs were written in Hitachi's structured programming language (PL/H). This ATC system is used for automatic train operation of subways, monorails, and medium-size guideway rapid transit. The external view of two modern trains is shown in Fig. 10.5.
Fig. 10.5 Modern high-speed trains. (a) TGV duplex train set 213 at Gare de Lyon in Paris, Photo by Manfred Kalivoda (Source: www.trainweb.org/tgvpages/images/duplex/index.html). (b) Nagano Shinkansen (Source: www.japan-guide.com/daily/?040515)
10.4 Automation in Aviation Systems

Aviation systems are undoubtedly automated to a higher degree than other man-made systems (e.g., railways). Commercial aircraft have received the greatest attention for reasons of passenger comfort and safety. On the other hand, military aircraft are required to have very high-level performance because they carry weapons. We start with aircraft automation systems [491], and then we discuss air traffic control systems, which are also undergoing increasing automation [616].
10.4.1 Aircraft Automation

Despite the advanced control and automation devices and techniques used in commercial aircraft, accidents continue to occur and are mostly attributed to the human pilots. Cockpit automation is a mixed blessing. Warnings of possible problems with cockpit automation were raised as early as the late 1970s [130], and since then the concerns have become more severe because of the incidents and accidents involving automated aircraft [545]. Actual experience with advanced cockpit technology verified that automation did have a positive effect regarding the pilot's workload, the operational cost, the precision, and the human errors. But the impact of automation turned out to be different and much more complex than predicted. Workload and pilot faults were not simply reduced, and advanced automation led to new requirements which were qualitative and context dependent rather than quantitative and uniform in nature. In the 1980s, failure rates in transition training for the new glass cockpit aircraft were at an all-time high [618, 619]. Early concerns with cockpit automation focused on questions such as how to reduce the amount of pilot workload, and how much information the pilot needs. Regarding training, recent intensive research, and a better understanding of the problems faced by glass cockpit pilots, revealed that it is the nature of training that has to be changed rather than its duration. The glass cockpit was introduced in the 1970s (Boeing's 757 and 767 commercial aircraft), and consists of several CRT or LED displays, integrated by computers, which replaced the multiple independent mechanical instruments used until that time. Cockpit technology has changed in several ways which require new practice and training methods, since, e.g., a 767 is not merely a 727 with some additional boxes. A very powerful addition to the flight deck is the so-called flight management system (FMS), which extended manual control (originally via a 2-axis stick, then via a yoke, and finally via fly-by-wire) to the automatic handling of a variety of tasks including navigation, flight path control, and aircraft systems monitoring [46]. The most advanced FMSs currently in operation are highly autonomous and reliable, and can perform long sequences of actions without pilot input (authority). The FMS allows the pilot to select among several automation levels, provides advice on navigation, and detects and diagnoses abnormalities or faults.
According to Wiener, who studied extensively the results of cockpit automation, the effect of automation on pilot workload is not an overall decrease or increase, but rather a redistribution of the workload [618]. For this reason he introduced the term ‘clumsy automation’. Wiener distributed two sets of questionnaires to Boeing 727 pilots 1 year apart to collect information on pilot workload, pilot errors, crew coordination, and pilot training for automation. Most of the pilots (more than 55%) replied that they were still being surprised by the automation after more than 1 year of line experience on the aircraft. Similar conclusions were also drawn by Sarter and Woods, who sampled a group of pilots of the B737-300/400, which are equipped with a different glass cockpit [492]. These pilots also indicated the nature of and the reasons for the automation surprises (which can be regarded as symptoms of a loss of awareness of the status and behavior of automation, i.e., of a loss of mode awareness). Mode errors appear to occur due to a combination of gaps and misconceptions in the pilots' models of the automated systems. The above discussion shows that, even with sophisticated cockpit automation such as the flight management system, the pilot has a high workload and can make erroneous actions. The sophistication and variety of new automation systems on the flight deck have led to new control modes for the pilot to understand, which need careful consideration to minimize pilot confusion over states and modes in automated cockpits [493, 494]. Figure 10.6 shows the view of a modern automated cockpit.
Fig. 10.6 The cockpit of the A320 (Source: www.aerospace-technology.com/projects/a320/a3201.html)
10.4.2 Air Traffic Control

The original method of controlling takeoffs and landings was the use of an air traffic controller (ATC) standing in a prominent place on the airfield and employing colored flags to communicate with the pilots. The waving of a green flag meant that the pilots were to proceed with their planned takeoff or landing. But if the controller waved a red flag, the pilots were to hold their position until the controller had determined that it was safe to continue. With this early type of air control it was difficult to handle more than one aircraft simultaneously, and impossible to operate at night or during stormy weather. The next attempt was the use of light guns. A light gun is a device that allows the controller to direct a narrow beam of high-intensity colored light to a specific aircraft. The gun was equipped with different-colored lenses to allow the controller to easily change the color of the light. The controller operated the light gun from a control tower (a glassed-in room on the top of a hangar), or from a movable light gun station located near the arrival end of the runway. Light guns are still used today in most control towers as back-ups, either when the radios in the control tower or the aircraft are inoperative or when an aircraft is not radio equipped. The modern system of air traffic control started by equipping a control tower with radio transmitting and receiving equipment (at Cleveland, 1936) [83, 121, 394, 411, 616]. Today, radio has become the primary means of pilot-controller communication in the air traffic control system. The radio equipment has changed considerably since 1936, but the basic principles of radio communication remain unchanged. The earliest type of radio communication was one-way, i.e., traffic controllers could communicate with pilots but not vice versa. An interim solution for two-way communication was the use of receiving equipment in the control towers and transmitting equipment in the aircraft. To eliminate the so-called navaid interference, the aircraft transmitters used a different frequency than the ground-based navaids. This two-frequency system is called a ‘duplex communication system’ (Fig. 10.7a). The radio frequency bands allocated to aeronautical communications are determined by international agreements. These frequency bands exist mainly in the high frequency (HF), very high
Fig. 10.7 (a) Duplex transmission (separate transmit/receive frequencies). (b) Simplex transmission (same frequency)
frequency (VHF) and ultra-high frequency (UHF) spectra. The duplex system has some drawbacks, which led to the development of a radio system that allows pilots to communicate with controllers using one discrete frequency. This is known as simplex communications (see Fig. 10.7b). Simplex communications are today used in every ATC facility worldwide. The International Civil Aviation Organization (ICAO) developed standards for the world's aviation systems and suggested procedures for aviation regulatory agencies [258]. These standards are known as “International Standards and Recommended Practices” (and classified individually as ICAO Annexes). Every member country of ICAO has agreed to generally abide by these ICAO Annexes, unless they must be modified to meet national requirements. The adoption of these procedures has allowed pilots to fly all over the world using a unique language (English), unique navigation aids (VOR: VHF omnidirectional range, ILS: instrument landing system, NDB: non-directional radio beacon, and MLS: microwave landing system), and the same procedures. ICAO requires that every country publish manuals describing its ATC system and any deviations from ICAO standards. ICAO recommends three types of aircraft operations, namely VFR: Visual Flight Rules, IFR: Instrument Flight Rules, and CVFR: Controlled VFR. Controlled VFR flights are separated by controllers as if they were IFR, but the pilots are not IFR rated and must remain in VFR conditions. The ICAO agreements specify that each nation will control its own sovereign airspace but will permit ICAO to determine who shall provide air traffic control service within international airspace. ICAO is only a voluntary regulatory agent, and so international ATC has been delegated to those member nations willing to accept this responsibility. ICAO has divided the total world airspace into flight information regions (FIRs), which identify which country controls the airspace and determine which procedures should be followed. For the purpose of ‘en route’ ATC, each FIR identifies, normally, one major air traffic control facility. Typically, the boundaries of each FIR follow the geopolitical boundary of the concerned country. ICAO uses unique four-letter commercial airport identifiers for ATC, whereas IATA (International Air Transport Association) uses three-letter codes primarily for travel agents and airline personnel. Conventional ATC is implemented via a network of stations around the world that employ two-way radio and “see” the aircraft through radar. Commercial aircraft and other aviation aircraft carry transponders which identify them to the ATCs using a simple code. For aircraft that are to fly into the airspace of international airports, these transponders must also be able to transmit aircraft altitude. This gives an identification tag next to the blip seen on the ATC operator's radar display. A typical new automation system that helps the ATC keep the required separation between aircraft, while properly sequencing them and providing speed and descent advisories, is CTAS (Center TRACON Automation System). This system was developed by NASA to provide users with airspace improvement, delay reduction, and fuel savings benefits by applying computer-based automation. Actually, CTAS performs four principal functions: traffic management advisor (TMA), descent advisor (DA), final approach spacing tool (FAST), and expedite departure path (EDP).
Fig. 10.8 Traffic management advisor timeline graphs. (a) TGUI timeline, (b) PGUI timeline (Source: http://www.aviationsystemsdivision.arc.nasa.gov/research/foundations/tma.shtml)
Fig. 10.9 Graphical display of aircraft tracks on the PGUI (Source: http://www.aviationsystemsdivision.arc.nasa.gov/research/foundations/tma.shtml)
TMA schedules arrival traffic to runways and generates a landing sequence as far out as 200 miles from the airport. A linear clock, graphically depicted, is used to display aircraft identification tags alongside scheduled and estimated times of arrival. This gives the traffic managers a visual reference of the time interval between consecutive aircraft and the relative position of all aircraft (Figs. 10.8 and 10.9) [394]. DA provides descent points, along with speed, heading, and altitude advisories. On the
basis of traffic, aircraft performance, weather, and airport/airspace configurations, DA continually computes conflict-free descent profiles and routings. FAST assigns landing runways and sequence numbers in conjunction with accurate speed and turn advisories to help approach controllers in efficiently spacing aircraft to the runway. Finally, the EDP program provides climb profiles and routing for departure traffic. This helps the ATC to sequence departures from satellite airports together with aircraft operating from the primary airport. Other additional facilities newly developed for cooperation with ATCs are the following:
URET: User Request Evaluation Tool (a computer program which probes for potential conflicts of selected flight paths, processing real-time flight plan and track data from the host computer via a one-way interface)
PRAT: Prediction/Resolution Advisory Tool (a decision support system that performs conflict prediction and resolution assistance for the ATC)
WARP: Weather and Radar Processor (for the collection, processing and dissemination of next-generation radar (NEXRAD) and other weather information to controllers, traffic management specialists, area supervisors, pilots, and meteorologists)
ITWS: Integrated Terminal Weather System (a fully automated weather prediction system providing enhanced information on weather hazards in the airspace within 60 nautical miles of an airport)
PRM: Precision Runway Monitor (designed to face the problem that, during instrument meteorological conditions, airports with parallel runways spaced less than 4,300 feet apart cannot conduct independent simultaneous instrument approaches due to equipment limitations)
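To make the TMA scheduling function concrete, the following minimal Python sketch (an illustration added here, not part of CTAS; the function name, data layout, and the 90-second separation are assumptions) delays each estimated time of arrival just enough to maintain a required separation between consecutive landings:

def schedule_arrivals(etas, min_separation=90.0):
    """Greedy runway scheduling: earliest ETA first, then push each
    scheduled time of arrival (STA) back so that at least
    min_separation seconds elapse between consecutive landings."""
    order = sorted(etas, key=lambda item: item[1])  # (flight_id, eta_seconds)
    schedule = []
    last_sta = None
    for flight_id, eta in order:
        sta = eta if last_sta is None else max(eta, last_sta + min_separation)
        schedule.append((flight_id, sta, sta - eta))  # third field: imposed delay
        last_sta = sta
    return schedule

# Example: three arrivals bunched within one minute of each other.
for fid, sta, delay in schedule_arrivals([("AF101", 0.0), ("BA202", 30.0), ("LH303", 50.0)]):
    print(f"{fid}: STA={sta:5.0f} s, delay={delay:4.0f} s")

The greedy rule mirrors the timeline idea of Figs. 10.8 and 10.9: aircraft keep their estimated order, and only the necessary delay is inserted at the runway.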
10.4.3 The Free Flight Operational Concept

Under the free-flight mode, pilots operating under instrument flight rules (IFR) will be able to select their aircraft's path, speed, and altitude in real time. The ATC system will be involved only when it is necessary to provide positive aircraft separation. A flight plan established by the pilot is essentially a contract with the air traffic controller, and so any modification (required, for example, in the case of a thunderstorm) should be renegotiated with an air traffic controller. Under the free-flight concept, the pilot will be free to select the aircraft's route, speed and altitude, and to make alterations without ATC preapproval (provided, of course, that the traffic density does not preclude a free flight). In military aviation there is an additional factor (in comparison with commercial aviation), namely the need to deal with an enemy. Therefore, additional automation is required (including advanced radar and optical systems) for safe fast maneuvering and weapons firing. Still more advanced automation systems and facilities are required for UAVs (unmanned air vehicles), which are remotely controlled by an operator (pilot) on the ground.
10.5 Automation in Automobile and Sea Transportation

Electronics-based automation is now applied to many different components of the automobile, such as engine control, transmissions, instrumentation, in-vehicle comfort, and in-vehicle entertainment. This in-vehicle automation, together with new developments in driver interfaces (DI), advanced traveler information systems (ATIS), collision avoidance and warning systems (CAWS), automated highway systems (AHS), vision enhancement systems (VES), advanced traffic management systems (ATMS), and commercial vehicle operations (CVO), constitutes what is collectively known as intelligent transportation systems (ITS). The programmable nature of integrated electronics will further allow the adaptation of the functioning of ITSs to different vehicle categories, driver capabilities, and environmental situations [120, 147, 428, 463, 620].
10.5.1 Advanced Traveler Information Systems

The main goals of ATIS are the following:
Reduce urban congestion via more efficient use of existing transportation resources.
Improve transportation safety via driver-alerting automation equipment.
Reduce environmental pollution by improving fuel efficiency.
These goals can be achieved by integrating the knowledge on driver behavior (human factors) and decision making into the design of the automation systems that make up ITS. The development of new roads and light rail, and the manufacture of alternative-fuel vehicles, along with the implementation of new legislation governing the crash safety of vehicles, provide major ways towards the full achievement of the above ITS goals. ATIS can regulate the flow of vehicles along roads and highways through the use of advanced sensory, control, communication, and computation technologies. These systems can also increase the safety and efficiency of travel by providing the driver with suitable warning and safety signals (messages), especially needed in bad weather and visibility conditions. Typically, ATIS involve the following subsystems:
IRANS: In-Vehicle Routing and Navigation System (this system provides information about how to go from one place to another and about the current urban traffic congestion)
IMSIS: In-Vehicle Motorist Service Information System (this system provides data about weather, overnight lodging and fueling, entertainment, medical services, etc.)
IVSAWS: In-Vehicle Safety Advisory and Warning System (this system provides warning information on immediate hazards, road conditions, and other related factors that affect the roadway ahead)
10.5.2 Collision Avoidance and Warning Systems

The increasing number of on-road vehicles results in a severe growth of motor vehicle crashes leading to fatalities or to non-fatal injuries. Electronics-based automation is helping to reduce these human fatalities and injuries through the development of proper collision avoidance and warning systems (CAWS). Crashes are mainly due to human errors, including recognition, decision, and performance errors. Of course, many crashes involve several interacting causal factors such as driver errors, environmental factors, or vehicle factors. Recognition errors include inattention, distraction or improper lookout (the driver “looked but did not see”). Decision errors occur when the driver recognizes a crash threat but selects an inappropriate action for it. Performance errors are caused when the driver selects the proper action for a comprehended crash threat, but this action is not executed correctly. Crashes belong to several categories, namely: backing, rear-end, lane-change, single-vehicle roadway departure, head-on, and intersection or crossing paths. Available means to avoid collisions include: headway detection systems (that monitor the separation and closing rate between equipped vehicles and other vehicles or objects in their forward travel paths), and infrastructure-based warning systems (suitable for crashes on slippery roads and/or involving excessive speeds, particularly in curves and other hazardous locations).
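As an illustration of the headway-detection principle just described (a sketch under simple constant-speed assumptions, not any deployed CAWS algorithm; the 3-second threshold is an assumption), a forward collision warning can be derived from the time-to-collision, i.e., the separation divided by the closing rate:

def time_to_collision(gap_m, closing_rate_mps):
    """Return the time-to-collision in seconds, or None if the gap is opening."""
    if closing_rate_mps <= 0.0:       # not closing: no collision course
        return None
    return gap_m / closing_rate_mps

def should_warn(gap_m, own_speed_mps, lead_speed_mps, ttc_threshold_s=3.0):
    """Warn when the time-to-collision falls below the chosen threshold."""
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s

# 40 m gap, closing at 15 m/s: TTC is about 2.7 s, so a warning is raised.
print(should_warn(gap_m=40.0, own_speed_mps=25.0, lead_speed_mps=10.0))  # True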
10.5.3 Automated Highway Systems

An AHS is a system which uses vehicle and roadway instrumentation to produce some kind of automated driving. Vehicles that are designed to operate on an automated highway must also be able to operate on normal roads (dual-mode vehicles). Automated highway systems can be classified according to:
The Degree of Automation (Full Control, Partial Control) Full control includes steering and speed maintenance, and also the coordination of all vehicle movements within the automated lane. The driver must only give an exit destination from the automated highway, and the highway system will drive the vehicle to that destination. In partial control systems, the driver selects the speed she (he) wants to travel at, and the vehicle tries to keep that speed (a minimal sketch of such a speed-keeping loop is given after this list).
Vehicle Infrastructure and Equipment (Roadside System-Dependent, Roadside System-Independent) In the first case, every vehicle is fitted with equipment to communicate with the roadside system, sensors to detect other vehicles, and controllers to execute the steering, braking and accelerating commands issued by the roadside system. In the second case, the vehicle itself possesses the intelligence needed to do all the operations independently of the roadside system.
Degree of Separation Between Automated and Manually Controlled Traffic Some systems have the automated lanes physically separated from normal driving lanes, with separate entrance and exit points, whereas in other systems the separation is provided using barriers.
Vehicle Control Rules In fully automated systems, individual vehicles or groups of vehicles (e.g., a string or platoon of vehicles) are automatically controlled.
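The following minimal Python sketch illustrates the partial-control (speed-keeping) case referred to above; the proportional law, gains, and actuator limits are illustrative assumptions rather than a description of any deployed AHS controller:

def speed_hold_step(current_speed, target_speed, dt=0.1, kp=0.8,
                    max_accel=2.0, max_brake=4.0):
    """One control step of a proportional speed controller.
    Returns the commanded acceleration (m/s^2), saturated to the
    vehicle's actuator limits, and the speed after dt seconds."""
    accel = kp * (target_speed - current_speed)
    accel = max(-max_brake, min(max_accel, accel))   # respect actuator limits
    return accel, current_speed + accel * dt

v = 20.0                      # current speed, m/s
for _ in range(50):           # simulate 5 s of driving
    _, v = speed_hold_step(v, target_speed=30.0)
print(round(v, 2))            # speed approaches the 30 m/s target

A real AHS controller would of course add feedforward terms, headway constraints, and platoon coordination on top of this basic speed loop.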
10.5.4 Vision Enhancement Systems

These systems aid the driver's vision under limited visibility conditions (e.g., fog) or during night time. Technologies used for VES include infrared sensors and Doppler radar. Infrared imaging technology allows the driver to see beyond the distance normally illuminated by the headlights under night-time conditions. Doppler radar technology allows the driver to see objects usually obscured by inclement weather (fog, rain, snow) or by vehicle construction (blind spots due to vehicle hood-line or roof-pillar location). All VES involve two principal components, i.e., the sensor and the display. System information can be displayed either as a primary or a secondary source of visual information. A primary vision source can be obtained using a head-up display (HUD), in the form of a virtual image presented to the driver outside the vehicle. A secondary source of visual information to the driver can be obtained using an in-vehicle display or a non-contact analog HUD. The principal question when using a VES is how it actually affects driver performance and driving behavior.
10.5.5 Advanced Traffic Management Systems

ATMSs are capable of minimizing delays in the movement of people, vehicles, and goods. This can be done via the proper interpretation, by the system's operators, of information obtained from a multiplicity of sensors that monitor the traffic flow and the roadway conditions. An ATMS permits the operators to control several factors such as traffic signals, ramp meters, and closed-circuit TV cameras, and to monitor their images in order to recognize roadway events and initiate proper responses. Clearly, the implementation of ATMSs requires the use of advanced human–driver interfaces that fall beyond the framework of conventional HMI approaches. Issues like the transfer of control between the driver and the automated system (and vice versa), and the like, have to be studied and evaluated.
10.5.6 Commercial Vehicle Operations

Commercial vehicles have essential differences from vehicles used by private drivers, and so they need ITSs with different capabilities. Drivers of commercial vehicles are subject to a greater amount of training and must meet more stringent health, knowledge and experience requirements to operate such vehicles. The three main types of commercial (or public) vehicles are:
Vehicles for goods and materials transportation
Vehicles for public transportation (buses, taxis)
Vehicles for emergency operations (fire, police, ambulance)
Intelligent transportation systems can improve commercial vehicle operation in several aspects, such as route planning, dispatching, in-vehicle driver tasks, and performance compliance. New ITS technologies, such as advanced communication systems, advanced regulation systems and the like, are expected to further enhance the efficiency of the interaction between commercial vehicle operators and their customers or the local/national regulatory agencies.
10.5.7 Sea Transportation

Sea transportation is performed by ships which (together with naval undersea vessels) require sophisticated automation. Issues that must be addressed include automatic roll stabilization, tracking of depth and obstacles (using sonar), and localization in latitude and longitude for navigation (which is now accurately done via GPS). Very large ships (supertankers) cannot easily steer through harbor channels and into ports with complex terrain or with unfamiliar tides or obstacles. The process of maneuvering in these cases is facilitated using predictor displays working together with GPS localization. Here, the predictor display gives a displayed trace, on a map, of the computer-simulated trajectory of the ship based on present position, speed, acceleration, current, and wind loads. The display also gives a proposed program of thrust and steering commands over the next short period of time (i.e., over the prediction period), which is used to predict the trajectory obtained by alternative thrust and steering programs, and to select the best one that brings the ship to the desired location while avoiding the obstacles.
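The predictor display described above rests on fast forward simulation of the ship's motion under candidate command programs. The sketch below is a deliberately crude illustration (a kinematic model with a constant drift term standing in for real ship dynamics; all names and numbers are assumptions): it propagates the present position over the prediction period and compares two candidate headings by their final distance to the berth.

import math

def predict_track(pos, heading_deg, speed_mps, drift_mps=(0.3, -0.1),
                  horizon_s=120.0, dt=5.0):
    """Euler-integrate a candidate heading/speed program and return the
    predicted trace of (x, y) points shown on the predictor display.
    drift_mps lumps together the current and wind loads."""
    x, y = pos
    trace = [(x, y)]
    for _ in range(int(horizon_s / dt)):
        h = math.radians(heading_deg)
        x += (speed_mps * math.sin(h) + drift_mps[0]) * dt
        y += (speed_mps * math.cos(h) + drift_mps[1]) * dt
        trace.append((round(x, 1), round(y, 1)))
    return trace

# Compare two candidate headings and keep the one ending closer to the berth.
berth = (500.0, 900.0)
best = min((predict_track((0.0, 0.0), h, 5.0) for h in (20.0, 30.0)),
           key=lambda tr: math.dist(tr[-1], berth))
print(best[-1])   # predicted end point of the better candidate program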
10.6 Robotic Automation Systems

Robotic automation is a technology with a future and for the future. Robotic automation helps companies innovate and compete, and there is an increasing awareness around the world, among companies of all sizes and in practically every industry, that robotic automation can help them stay globally competitive. According to data collected by the United Nations Economic Commission for Europe and the International Federation of Robotics, the operating robot population in the “Big Six” supplier countries (USA, UK, Japan, Germany, France, Italy) started at about 440,000 units in 1991, grew almost linearly to 590,000 units in 1996, and then increased with a higher slope, reaching about 780,000 units in 2000. The corresponding figures for the number of robots operating worldwide are 520,000 units (1991), 680,000 units (1996), and 950,000 units (2000). It is believed that robotics will continue showing a substantial increase in manufacturing applications all over the world. Manufacturers
of consumer goods, electronics, food and beverages, and other nonautomotive products are continually taking advantage of robotic automation to become stronger global competitors, and small, medium and large companies in almost every industry are taking a new look at robotic automation to see how this powerful technology can help them face manufacturing challenges. Also, nonmanufacturing applications in security, material transport, hazardous materials handling, nuclear clean-up, and undersea exploration are rapidly accelerating. According to Joe Engelberger, the father of modern robotics, many applications start with a simple question: “Do you think a robot can do this?” Now, more often than not, the answer is “yes”. In the following we briefly describe a few, but important, applications of robotic automation [18, 257, 393].
10.6.1 Material Handling and Die Casting

The robots used in purely material handling operations are usually ‘pick-and-place’ robots. These applications make use of the basic capability of robots to transport objects (the robot's manipulative skills are of less importance here). The main benefits of using robots for material handling are the reduction of direct labor costs and the removal of humans from tasks that may be hazardous, tedious, or fatiguing. Also, robots typically cause less damage to parts during handling, a major reason for using robots to move fragile objects. Die casting is one of the major application areas in the developed countries. Die casting operations are notoriously hot, dirty, and hazardous, and provide a particularly unpleasant working environment for human workers. Robots contribute to cost reduction and to the improved quality resulting from their consistent performance. In simple installations the robot is used to remove the part from the die and place it on a conveyor. In more complex applications the robot may carry out a number of tasks, including part removal, quenching, trim press loading and unloading, and periodic die maintenance. The specific functions performed depend on several factors, such as casting cycle times, physical layout, and robot speed and type.
10.6.2 Machine Loading and Unloading

In addition to unloading die casting machines, robots are also used in many other machine loading and unloading applications. Loading and unloading is actually a more sophisticated robot application than simple material handling. Such applications include grasping a workpiece from a supply point, transporting it to a machine, orienting it correctly, and then inserting it into the workholder on the machine. After processing, the robot unloads the workpiece and transfers it to another machine or conveyor.
10.6.3 Welding and Assembly

Welding is distinguished into spot welding and arc welding. Spot welding of automotive bodies represents the largest single application. Spot welding is typically performed by point-to-point servo-robots holding a welding gun. Arc welding is performed by robots using noncontact seam trackers. Currently, robotic arc welders are low-cost, easily programmable, and durable. The design of a robotic assembly system needs a combination of economic and technical considerations. Nowadays, the complexity of products, the requirement to manufacture products in many models whose designs change rapidly, and the need for manufacturers to be more responsive to changing demand and just-in-time manufacturing, enforce the design of flexible assembly systems. Robotic assembly must compete against manual assembly, rigid automation, or some combination of them. Robotic assembly provides an alternative with some of the flexibility of humans and the uniform performance of fixed automation. Robotic assembly includes two phases: the planning phase, where a review is made of the design of the product at a variety of levels, and the assembly phase, where the product first comes to life and can be tested for proper functioning. Assembly is also the phase where production directly interfaces with customer orders and warranty repairs. Thus assembly is not just putting parts together, but includes the task of determining how to meet a variety of business requirements, ranging from quality to logistics. A programmable robotic assembly system typically consists of one or more robot workstations and their associated grippers and parts presentation equipment (Fig. 10.10). One area of today's robot assembly operations includes the insertion of light bulbs into instrument panels, the assembly of typewriter ribbon cartridges, the
Fig. 10.10 A robotic assembly system (Source: http://www.prirobotics.com/stock/assembly.jpg)
insertion and placement of components onto printed circuit boards, and the assembly of small electric motors. More sophisticated assembly tasks require improved sensory feedback, improved accuracy, repeatability and reproducibility, and stronger programming languages.
10.6.4 Machining and Inspection

In machining applications, a robot typically holds a powered spindle and performs drilling, grinding, deburring, routing, and other analogous operations on the workpiece. The problem in machining processes like drilling and milling is the creation of very large cutting forces. These forces may lead to tool deflection and, as a result, to a reduction of accuracy. Robots do not usually have the mechanical rigidity typically possessed by machine tools. In inspection, robots are used together with sensors (vision, laser, ultrasonic, etc.) to check part locations or detect defects. Examples of inspection tasks are the inspection of valve cover assemblies for car engines, the sorting of metal castings, and the inspection of the dimensional accuracy of openings in car bodies. Inspection applications of robots, making use of low-cost sensors and devices of improved positioning accuracy, are expected to represent one of the future high-growth areas.
10.6.5 Drilling, Forging and Other Fabrication Applications

Drilling and forging are two very important machining operations which can be performed by robots. In drilling, the feed rate is single-dimensional (usually along the z axis). The majority of drilling operations are done by fixed drill presses, but in the fabrication of large products such as space vehicles, aircraft, ships, railroad locomotives, and the like, the standard procedure is hand-held drilling. This is because the workpieces are so large that it is more convenient and feasible to bring the tool and the jig to the workpiece than to fixture the workpiece in a drill press. A robot equipped with a drill at its end-effector has many of the capabilities of a human operator with a hand-held drill. The benefits of robotic drilling operations are quality, safety and economy. Forging operations range from the loading and unloading of forging presses to the movement of workpieces from one die station to another. The biggest class of applications in forging is material handling; other applications are drop forging, upset forging, roll forging, swaging, furnace loading, press trimming, and moving forged workpieces from presses to drawing benches. Other fabrication areas of robot use are: electronic processing, glassmaking, plastics processing, food processing, chemical processing, textiles, and clothing.
10.6.6 Robot Social and Medical Services

The potential use (and market) for robotic automation in services is expected to be much larger than that in manufacturing, but service robots should have more capabilities than industrial robots, such as intelligence, user-friendliness, higher manipulability and dexterity, advanced sensing capabilities (visual, tactile, sonar, speech), and so on. Robots are used for hospital material transport, security and surveillance, floor cleaning, inspection in the nuclear field, explosives handling, pharmacy automation systems, integrated surgical systems, and entertainment. In the following we discuss in a little more detail medical robotics and computer-integrated surgery. Medical robots are programmable manipulation systems used in the execution of interventional medical procedures, mainly surgery. For centuries, surgery (now called classical surgery) has been practiced in essentially the same way. The surgeon formulates a general diagnosis and surgical plan, makes an incision to get access to the target anatomy, performs the procedure using hand tools with visual or tactile feedback, and closes the opening. Modern anesthesia, sterility methods, and antibiotics have made classical surgery extremely successful. However, the human has several limitations that have brought this classical approach to a point of diminishing returns. These limitations include the following:
It is still hard to couple medical imaging information (X-rays, CT, MRI, ultrasound, etc.) to the surgeon's natural hand-eye coordination (limited planning and feedback).
Natural hand tremor makes repairing many anatomical structures (e.g., retina, small nerves, small vascular structures) extremely tedious or impossible (limited precision).
The dissection needed to gain access to the target is often far more traumatic than the actual repair. Minimally invasive methods (e.g., endoscopic surgery) provide faster healing and shorter hospital stays, but severely limit the surgeon's dexterity, visual feedback, and manipulation precision (limited access and dexterity).
The above difficulties can be overcome by automated surgery via computer-integrated robotic surgical systems (Fig. 10.11). These systems exploit a variety of modern automation technologies, such as robots, smart sensors and human–machine interfaces, to connect the “virtual reality” of computer models of the patient to the “actual reality” of the operating room. A possible taxonomy for considering ways to exploit this synergy is the following [560]:
CASP/CASE (or Surgical CAD/CAM) These systems are analogous to manufacturing CAD/CAM systems and integrate computer-assisted surgical planning (CASP) with robots or other computer-assisted surgical execution (CASE) systems to accurately execute optimized patient-specific treatment plans. Two dominant modes for such systems are stereotactic CASP/CASE and model-interactive CASP/CASE.
Surgical Augmentation Systems These systems extend human sensory-motor abilities to overcome many of the limitations of classical surgery.
Fig. 10.11 Automated surgery via computer-integrated robotic surgical systems
They are basically building blocks both for CASP/CASE and for more sophisticated surgical assistant systems, but they can also be used directly by human surgeons in otherwise conventional surgical settings. Currently, extensive research is carried out to augment human manipulation abilities for endoscopic surgery and microsurgery.
Simulation Systems These systems were primarily used (and developed) for training, but are now a vital part of the CASP/CASE paradigm. Robots and robotic devices are currently employed as parts of simulators to provide realistic haptic feedback to surgeons. In these applications the robot design requirements are similar to those for the “master” in force-reflecting master–slave telerobotic systems. The main difference is that the surgeon interacts with a numerical simulation of a physical system (e.g., tissue compliances) rather than with an actual “slave” system.
Surgical Assistant Systems These systems work in a cooperative manner with a surgeon to automate many of the tasks performed by surgical assistants. Currently, working systems are limited to very simple tasks such as laparoscopic camera aiming, limb positioning, tissue retraction, and microscope control. These systems can save costs by reducing the number of people required to perform a surgical procedure and by performing assistive tasks such as retraction more consistently and with less trauma to the patient. The above can be achieved if the surgeon has effective means to supervise the assistant system without having to continually control it. Clearly, a robot, to replace a human assistant successfully, must be versatile enough to perform substantially all of the work done by the human being replaced. Figure 10.11 shows the Da Vinci surgical system in action and a specific robotic surgery example.
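To illustrate the kind of computation behind such haptic feedback (a toy spring-damper tissue-compliance model, added here for illustration; it is not the algorithm of any actual surgical simulator, and all constants are assumptions):

def tissue_reaction_force(depth_m, velocity_mps, k=300.0, b=5.0, max_force_n=15.0):
    """Force reflected to the haptic master when the virtual tool
    indents simulated tissue: a spring-damper law, zero outside contact,
    saturated to the device's maximum renderable force."""
    if depth_m <= 0.0:                  # tool not touching the tissue
        return 0.0
    force = k * depth_m + b * max(velocity_mps, 0.0)
    return min(force, max_force_n)

print(tissue_reaction_force(0.01, 0.05))  # 3.25 N for a 1 cm indentation

In a real simulator this computation runs at kilohertz rates so that the reflected force feels smooth to the surgeon's hand.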
10.6.7 Assistive Robotics

Assistive robotics (AR) is a branch of assistive technology (AT), which develops adaptive and intelligent systems capable of serving Persons with Special Needs (PwSN) in several environments (home, professional, etc.). These systems have been classified according to several criteria. The dominant categorization has been produced by the World Health Organization (WHO), and internationally standardized (ISO 9999). Assistive robotics encompasses all robotic systems that are developed for PwSN and attempt to enable disabled people to reach and maintain their best physical and/or social functional level, improving their quality of life and work productivity. The main categories of PwSN are [206, 285]:
PwSN with loss of lower limb control (paraplegic patients, spinal cord injury, tumor, degenerative disease)
PwSN with loss of upper limb control (and associated locomotor disorders)
PwSN with loss of spatio-temporal orientation (mental, neuropsychological impairments, brain injuries, stroke, ageing, etc.)
The field of AR was initiated in North America and Europe in the 1960s. A landmark assistive robot is the so-called Golden Arm developed in 1969, a 7-degrees-of-freedom orthosis moving the arm in space (Rancho Los Amigos Hospital, California). In 1970 the first robotic arm mounted on a wheelchair was designed. Today many smart AR systems are available [9, 55, 57, 91, 108, 110, 206, 221, 224, 228, 250, 253, 285, 286, 306, 378, 390, 433, 502, 581, 582, 592, 625, 626, 628, 629], including:
(i) Smart/intelligent wheelchairs that can relieve the user of the task of driving the wheelchair, and can detect and avoid obstacles and other risks, (ii) Wheelchair-mounted robots (WMRs), which offer the best solution for people with motor disabilities, increasing the user's mobility and the ability to handle objects. Today WMRs can be operated in all alternative ways (manual, semiautomatic, automatic) through the use of proper interfaces, (iii) Mobile autonomous manipulators (MAMs), i.e., robotic arms mounted on mobile platforms, that can follow the user's (PwSN's) wheelchair in the environment, can perform tasks in open environments, and can be shared among several users. Three well-known European assistive robots are the French MASTER robot, the Dutch MANUS robot, and the UK RTX robot. The European Union launched in 1991 the “Technology Initiative for Disabled and Elderly People” (TIDE). During the pilot phase of TIDE the following robotic systems were developed: MARCUS, M3S, RAID and MECCS. During the next phase (Bridge Phase) the following systems were created in the framework of respective R&D projects: SENARIO, FOCUS, EPI-RAID, OMNI and MOVAID [108, 285, 286, 582]. Other AR systems include the Autonomous Vehicle for Disabled Persons (VAHM) developed at the University of Metz (France), the ROLLAND wheelchair developed at the University of Bremen (Germany) [306], and the MICA (Mobile Internet Connected Assistant) system developed
Fig. 10.12 The “My Spoon” meal-assistance unifunction robot (Source: Secom.co.jp [250])
at Luleå University (Sweden) [378], the SMART wheelchair developed at the Call Centre of the University of Edinburgh (UK) [390], the VTT Wheelchair Drive Assistant (VTT Automation, Finland) [253], the MAid robotic wheelchair for crowded public environments (FAW, Ulm, Germany) [433], and the FRIEND system (University of Bremen) [55]. A very popular unifunctional assistive robot (a robot with fixed specific tasks) is the “My Spoon” robot, developed to help those who need assistance to eat. It is controlled via a joystick that drives the robotic arm to pick up food from the tray and bring it to the user's mouth. This robot cannot be used if the user has problems with chewing, swallowing, or vision, cannot move his/her head towards the spoon, or has problems understanding how to operate My Spoon (Fig. 10.12). The Bremen autonomous wheelchair system ROLLAND is based on the commercial Meyra power wheelchair model Genius 1.522, a nonholonomic vehicle which is driven by its front axle and steered by its rear axle [228]. The user controls the system using a joystick. The basic issues addressed by ROLLAND are spatial cognition, safety, and mobility assistance for the PwSN. Safety is the most important issue, and the system was designed so that, in case the user issues a command that may lead to a collision with an obstacle, the system modifies the dangerous target command to a safe one. The French autonomous wheelchair system VAHM (Véhicule Autonome pour Handicapé Moteur) [629] offers all three operational modes (manual, semiautonomous, autonomous). The choice among them is usually based on parameters that are easily comprehensible (single-switch or proportional HMI sensors, modeled or nonmodeled environment, and so on), but the selection of local control (e.g., wall or direction following) depends on the environment configuration. The MAid (Mobility Aid for Elderly and Disabled People) is based on the SPRINT Meyra model and has two operational modes (semi-autonomous, autonomous) (Fig. 10.13) [228]. It is equipped with a gyroscope, encoders, ultrasonic sensor heads, infrared scanners, and a 2D laser range finder.
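The ROLLAND-style safety behavior described above, in which a dangerous user command is modified to a safe one, can be pictured as a filter between the joystick and the motors. The sketch below is hypothetical (the stopping-distance model and all parameters are assumptions, not the actual ROLLAND code):

def safe_drive_command(requested_speed, obstacle_distance_m,
                       stop_margin_m=0.3, decel_mps2=1.0):
    """Clamp the user's speed command so the chair can always stop
    stop_margin_m short of the nearest obstacle ahead, using the
    kinematic bound v_max = sqrt(2 * a * available_distance)."""
    available = obstacle_distance_m - stop_margin_m
    if available <= 0.0:
        return 0.0                                  # too close: stop immediately
    v_max = (2.0 * decel_mps2 * available) ** 0.5
    return min(requested_speed, v_max)

# The user asks for 1.5 m/s, but an obstacle 0.8 m ahead limits the chair to 1.0 m/s.
print(safe_drive_command(1.5, obstacle_distance_m=0.8))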
Fig. 10.13 The MAid intelligent wheelchair (Source: FH Brandenburg [228])
The semiautonomous mode is particularly suitable for the user to perform single maneuvers (such as passing narrow doorways) or movements in narrow, cluttered space. In the autonomous mode, MAid can cross crowded concourses (e.g., in a railway station with many people moving around, in a museum building, in a shopping mall, etc.). The SMART wheelchair is suitable for children with severe and multiple disabilities (Fig. 10.14), and is available commercially (from Call Centre and Smile Rehab Ltd.) [221]. The first prototype was built in 1987. On collision the wheelchair stops and initiates an avoiding action (i.e., stop, back off, turn around the obstacle, etc.), as may be required. The German semiautonomous WMR system FRIEND (University of Bremen, Institute of Automation: IAT) consists of a Meyra electric wheelchair equipped with the MANUS robot arm (Exact Dynamics, Holland) [228] (Fig. 10.15). The robot arm is connected to a PC through a CAN-bus. The system incorporates a camera on the top of the gripper, and a speech HMI that translates naturally spoken words into commands. The MOVAID (Mobility and Activity Assistance System for the Disabled) is a MAM system developed by a consortium led by the Scuola Superiore Sant'Anna (ARTS Laboratory) [110]. The philosophy behind MOVAID was “design for all” and “user oriented”. The system is accompanied by several PCs located at the places of activities (kitchen, bedroom, etc.), and is able to navigate, avoid obstacles, dock, grasp, and manipulate objects. Commands to the robot are given by the user via GUIs running on the fixed workstation. Visual feedback from on-board cameras is given to the PwSN, allowing the monitoring of what the robot is doing.
Fig. 10.14 The Call Centre/Smile Rehab smart wheelchair (Source: Call Centre/Smile Rehab Ltd. [221])
Fig. 10.15 The FRIEND wheelchair-mounted robot of IAT-Bremen (Source: FH Brandenburg [228])
A well-developed mobile robotic service manipulator is the Care-O-bot [502]. The latest available version is Care-O-bot3, which has a very flexible arm and a three-finger hand able to pick up objects without exerting excessive gripping forces (via force sensory feedback) (Fig. 10.16) [625, 628]. It can be controlled by spoken commands, and can also recognize and respond to gestures. It is equipped with a tray mounted at its front on which the robot can carry items (e.g., a cup of tea). A comprehensive study on assistive technologies, covering the underlying principles and a wide spectrum of practical issues, is provided in the book of Cook and Hussey [91], and an integrated approach to smart house development for PwSN is presented by Allen [9]. Information on ten advanced mobility robots for the handicapped can be found on the web [626].
10.7 Automation in Intelligent Buildings

Intelligent or smart buildings are those buildings that are equipped with automated facilities for the movement of people, the handling of materials, or the performance of various operations. Here, we discuss the motion of mobile robots performing several service operations within intelligent buildings. The autonomous navigation of a mobile robot needs a large amount of peripheral equipment, such as sonar, laser, and optical sensors. This usually leads to a cumbersome robot that can only move on one floor of the building and has difficulty navigating through doors. However, using
Fig. 10.16 The Care-O-bot3 mobile autonomous manipulator (Source: Care-O-bot3: Always at your service [625])
a “smart” environment it is possible to aid the navigation of a mobile robot and reduce the required number of sensors. Specifically, employing wireless communication technology (e.g., IEEE 802.11), a mobile robot can access building services within the “smart” environment [401]. For example, the robot can request elevators by propagating the proper event on the control network. The device in the elevator receives the event and takes the robot to the correct floor. Office doors can also be easily controlled. When a robot wants to move from one room to another, it need only instruct the building to open the door which is blocking its path. To navigate through large areas, a mobile robot uses maps and local sensors to face changing (dynamic) situations. If the departure point of the robot is known, these maps can be used as a localization technique. Devices that monitor and control the building can also store a small map of their local area. A robot moving around a building can download one of these local maps from the closest device, thus reducing the need for a large storage device on the robot. Multi-robot communication and cooperation is needed when the task to be performed requires more than one robot. In a recent autonomous decentralized utility system for indoor transportation (AMADEUS) [277], intelligent autonomously guided vehicles (AGVs) communicate with job-shop cells in a framework to allocate an AGV to a transportation task, and to coordinate multiple AGV actions so that their goals can be achieved. The nodes of the control network are distributed throughout the building. Each node has a table of known “interesting” objects local to it, and also an adjacency table of the nodes local to it. As a robot enters an area controlled by a specific node, this node indicates the direction of movement to complete the task. Contrary to what happens in other multi-robot systems, individual robots do not communicate directly with one another, but through a distributed information space mediated by the intelligent building. Among the other applications of intelligent buildings, we mention here medical monitoring at home with communication to the hospital. This has been made possible by the advances in sensor and communication technologies. A patient can be monitored while at home or at work (or even during shopping, etc.), sending status signals (sound alarms, etc.) to the nursing stations. In more advanced cases in the future, preliminary diagnoses will be carried out by computer-based systems at home and, if needed, transmitted to the diagnosticians in the hospital for evaluation and further advice and action.
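The elevator-request interaction described above can be sketched as event passing over the building's control network. The Python fragment below is purely illustrative (the message fields, service address, and reply format are hypothetical assumptions; no real building protocol is implied):

import json
import socket

def request_elevator(robot_id, current_floor, target_floor,
                     building_addr=("building.local", 5005)):
    """Publish an elevator-request event on the building control network
    and wait for the acknowledgement naming the assigned elevator car."""
    event = {"type": "ELEVATOR_REQUEST", "robot": robot_id,
             "from": current_floor, "to": target_floor}
    with socket.create_connection(building_addr, timeout=5.0) as sock:
        sock.sendall(json.dumps(event).encode() + b"\n")
        reply = json.loads(sock.makefile().readline())
    return reply["car"]            # e.g., "elevator-2"

The same event-passing pattern would cover door opening and local-map download: the robot names a service, the nearest building node mediates, and no robot-to-robot communication is needed.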
10.8 Automation of Intra- and Inter-Organizational Processes in CIM

Modern computer-integrated manufacturing (CIM) systems involve several hierarchical levels of functioning and control. At the highest level a powerful computer supervises the various manufacturing activities, and at the lowest level there are stand-alone computer-controlled machines and robots, the operation of which is controlled by the intermediate (coordinator) level of the CIM system. 580 Throughout the operation of the system there is feedback from a lower level to its superior level(s). Therefore, CIM implies a systemic approach to the overall operation of a manufacturing enterprise, i.e., it involves plant functions, production functions, business functions, and administrative functions. The interfaces of the above functions (activities) to the CIM system are workstations or interactive terminals for the people, and instrumentation for the equipment.

Information technology (IT) helps improve a CIM company's competitive advantage in many ways. 429 The three major ways are: (i) by changing the structure of the industry to create new rules of competition, (ii) by creating competitive advantage through the development of new ways of outperforming the company's competitors, and (iii) by creating entirely new businesses, often from within a company's existing operations.
10.8.1 Intra-Organizational Automation

One approach proposed for automating intra-organizational operations is via an open system architecture for CIM (known as the CIM-OSA model approach). 82, 294, 296 The CIM-OSA model enables the CIM company to perform its business in an adaptive and real-time way, and employs two major elements:

- The CIM-OSA reference architecture
- The CIM-OSA particular architecture
The former provides building blocks and guidelines, whereas the latter contains the particularized building blocks for each specific company. The CIM-OSA reference framework possesses three levels of architectural genericity (generic level, partial level, particular level), three modeling levels (enterprise modeling level, intermediate modeling level, implementation modeling level), and four different views (organizational view, resource view, information view, and function view).

The CIM-OSA integrating infrastructure provides a structured family of system-wide services that help in avoiding functional redundancy in the system, and form the basis for integration. Actually, it is the integrating infrastructure that provides the integrated part of CIM (i.e., integration via IT) (Fig. 10.17). The user sees the system-wide services (business process management services, information management services, exchange services, front end services) as a single service across all nodes of the system, and she (he) does not need to know how and where the service is provided. The management services control the performance of the system on the basis of the released implementation model, and so they contribute to business integration. The front end services interact with the implemented functional entities to get the required functions executed, i.e., they deal with application integration. The function related services deal with the enterprise (management, control, execution) operations, and so they help to achieve function integration.
Fig. 10.17 The CIM-OSA integrating infrastructure services (diagram: released implementation model with organization, resource, information and function views; system-wide business process management, information management and exchange services; front end services for human, machine, application, data and communication entities, over the real-world implemented functional entities)
10.8.2 Inter-Organizational Automation

One approach to automating inter-organizational operations is by using the so-called CMSO (CIM for Multi-Supplier Operations) model. 500, 501, 503 The CMSO model addresses the issues of inter-organizational structures, market requirements, and improved effectiveness of multi-supplier/multi-distributor (MS/MD) chains in terms of a generalized 'customer service' performance measure that includes factors such as quality, delivery, price, innovation, and product range. The main body of the CMSO MS/MD reference model is formed by a combination of several organizational units that manage the business of the automotive supply industry. It integrates in a conceptual and operational way the following three types of chains:

- Manufacturing logistics chain (MC)
- Distribution logistics chain (DC)
- Product development and support chain (PDSC)
which cover all areas and problems involved (Fig. 10.18). Each chain consists of a set of elements that represent particular entities of the automotive industry (e.g., vehicle manufacturers (VM), supplier companies, part distributor companies, etc.). The MC starts with sub-suppliers at the lowest level, goes to the supplier level and then to the VM level, and ends up with the dealer who sells the vehicle to the end customer. The connection of the individual elements is performed by appropriate electronic data interchange (EDI) communication functions. The DC starts again at the sub-supplier level and ends up at the installer, with the supplier, the prime distributor, the area distributor, and the local distributor as intermediate levels. The integration is achieved at all levels, i.e., the strategic level, tactical level, and operational level, in a conceptual manner and is realized using the EDI reference model.
Fig. 10.18 The CMSO model of the manufacturing and distribution logistics chains (diagram: the manufacturing logistics chain runs from raw materials and sub-suppliers through suppliers and the vehicle manufacturer to the dealer network and the end customer of the vehicle; the distribution logistics chain runs through the prime, area and local distributors to the installer and the end customer of spare parts)
Fig. 10.19 The CMSO EDI multilayer reference model (diagram: application, integration/extraction, message, and interchange communication layers on each side of the exchange)
For the tactical level, a logistics chain simulator is employed. The CMSO EDI reference model offers a conceptual framework for services dealing with the exchange of managerial and technical EDI messages, structured in five sub-layers ranging from communication support functions (e.g., OSI application services) at the lowest layer to CIM applications (such as front end applications) at the highest layer (Fig. 10.19). In the area of product support, CMSO integrates a natural language input/output handler and a diagnostic expert system shell with a CD-ROM facility. The natural language system consists of a parser and a semantic mapper, an object/entity model, and a dialogue manager.
10.9 Automation in Continuous Process Plants

Process control is the term used for the control of physical and/or chemical process plants that produce products which are continuous in time and space, flowing through the plant or being transferred through transportation lines. 506 Such processes include chemical processes, oil processes, rubber processes, cloth making, electric power systems, thermal systems, and so on. Process plants were the first type of plants to be controlled via automation. Variables (physical quantities) that are controlled include fluid/gas flow, pressure, temperature, material composition, neutron level, color, pH, etc. The most popular type of control for these variables is proportional plus integral plus derivative (PID) control (or, as it is otherwise called, three-term control), which is designed empirically using standard or modified Ziegler-Nichols tuning procedures. Today the programmable logic controller (PLC) is also used in combination with SCADA (Supervisory Control And Data Acquisition) facilities. More advanced controllers used in process control include self-tuning controllers, predictive controllers, and adaptive controllers, which are implemented on powerful microcomputers equipped with suitable software codes. 24, 391, 534

An enormous theoretical and technical literature exists in the field of process control, covering particular processes or classes of processes. In a relatively recent book, 574 several digital control algorithms are studied (in a general way) and a number of particular control applications using microprocessors are described. These refer to the following systems: heating and ventilation systems, thermal power plants, electric power systems, steel industry, gas pipeline networks, cement industry, cutter suction dredging ships, and railway systems.

The role of the human operator in most of these systems is to attend to a large number of independent signal indicators and alarm lights. This job is very demanding, and it is actually very difficult (if not impossible) for an operator to see quickly what is going wrong. SCADA systems help a lot in this area. Extensive training via simulated systems has also proven to contribute partially towards a reduction of the problem. Finally, the study and consideration of the various human factors associated with this type of activity facilitates the design of the system and the operator tasks. 172

Throughout the years many accidents in process control and nuclear power plants have occurred. Regarding the nuclear plants, the two most serious accidents are the Three Mile Island reactor accident (1979) and the Chernobyl accident in the former Soviet Union (1986). These accidents appear to have undermined society's confidence in nuclear power both in the United States and in Europe. A picture of a modern ergonomic process control room is shown in Fig. 10.20.

As we saw in Chapter 9, in all cases care must be taken so that process efficiency is improved, either by design or by control or both, for minimal environmental impact. The most effective and cost-efficient mode of attack in the pollution prevention area is to adopt more advanced process technologies, using less polluting reagents, changing cleaning processes and chemicals, using catalysts to increase reaction efficiency, segregating waste and process streams, and improving
Fig. 10.20 A modern ergonomic control room (the ABB Industrial IT System 800xA) (Source: www.abb.com; Per Lundmark and William Zeng, Projections of Productivity)
operating and maintenance procedures. Often, more than one of these techniques is used in an integrated way to achieve optimum production while minimizing waste generation. In any case, the basic point of attack is usually the increase of the reaction efficiency, so as to reduce the quantity of process chemicals required, as well as the conversion efficiency to product. A representative example of process change to minimize pollution is the rapidly growing use of powder coating in place of traditional paint. Solvent-based paints produce two significant sources of hazardous wastes, namely sludges and waste solvents. Powder coating, or dry powder painting, is changing this situation. The dry paint powder is typically applied electrostatically by ionizing the air used to spray the paint, which then charges the dry powder particles. The surface to be coated carries the opposite charge, and the powder is electrostatically attracted to the surface. Then, the coating is fused to the surface and cured in conventional ovens. In this way the problems associated with solvent-based paints are eliminated and no solvents are required for cleanup.
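To make the three-term control law mentioned at the beginning of this section concrete, the following minimal Python sketch implements a discrete PID controller tuned with the classic Ziegler-Nichols rules. The plant-specific values of the ultimate gain Ku and ultimate period Pu, the time step, and the setpoint are illustrative assumptions, not data from any particular plant.

```python
# A minimal discrete PID (three-term) controller with classic Ziegler-Nichols
# tuning from the ultimate gain Ku and ultimate period Pu (assumed values).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Classic Ziegler-Nichols rules for a PID controller:
# Kp = 0.6*Ku, Ti = Pu/2, Td = Pu/8, with Ki = Kp/Ti and Kd = Kp*Td.
Ku, Pu = 4.0, 10.0          # assumed results of a closed-loop oscillation test
Kp = 0.6 * Ku
Ki = Kp / (Pu / 2)
Kd = Kp * (Pu / 8)
controller = PID(Kp, Ki, Kd, dt=0.1)

# One control step: drive a temperature measurement of 48.2 toward 50.0.
u = controller.update(setpoint=50.0, measurement=48.2)
```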
10.10 Automation in Environmental Systems

Information and automation technology has contributed a lot towards more effective management and preventive control of environmental pollution. For example, the replacement of the transportation of people, materials and goods by the transmission of information has given significant advantages of several forms. Computer and telecommunications technologies may enable a large number of employees to work at home or at near-home local offices for all or part of their working time. This possibility will reduce the amounts of energy needed for the workers' transportation from their homes to their working places, thus leading to less environmental pollution due to automobile emissions, less traffic congestion, and smaller requirements for car space.

Other directions in which information technology and automation can benefit the environment include the increase of manufacturing and transportation efficiency, which again decreases the consumption of energy and its impact on the environment. Through proper schedules communicated in real time to truck drivers, the amount of travel done by unloaded trucks (and other goods-transportation media) can be minimized. On the other hand, through the web and information technology, manufacturers can locate the most accessible suppliers, and so on. The wide use of electronic mail has decreased the transportation of hard-copy mail, and the introduction of electronic journals, newspapers, directories, and books is expected to reduce substantially the need for the use and transportation of printed material. However, at the moment there is no evidence that less paper is used, since most people still print, use and read the hard-copy form of the desired information.

Information technology and automation have offered advanced and efficient means for environmental monitoring in many respects. The importance of world-wide monitoring of the earth's surface temperature, as well as of the monitoring of emissions into the atmosphere of CO2 and other gases that may contribute to the earth's temperature increase, is now
globally recognized. Also, the accurate monitoring of the effect of public regulatory policies concerning the emission of pollutants is necessary in order to enforce the most appropriate ones for better efficiency. Today much research is devoted to developing more efficient weather models, which need massive computations, and better models for representing the origin and dispersion of the emissions that cause acid rain. This is because it is now most important to develop computational techniques for better interpretation and management of environmental data that are already collected. There is still much room for development and improvement of the beneficial effects of IT and automation upon the environment. Some specific ways in this direction are 387:

- Development of improved techniques and systems for monitoring atmospheric and oceanic data, and organizing, interpreting and disseminating the data
- Development of improved data analysis and management software systems tailored for large amounts of weather, climatic, and pollution type data
- Development of better data compression techniques for use in conjunction with available network technology
- Development of computer-aided and automated image analysis techniques and tools suitable for rapidly extracting useful information from LANDSAT data
- Development of better large environmental impact models for evaluating the effects of changes of the relevant variables
- Development of models that combine the economics and the environmental impact (short-term, long-term) of particular policies and practices
- Development of information and control techniques and systems that will increase the efficiency of manufacturing and distribution processes
- Investigation of the application of nuclear models to climatic and environmental processes
- Development of proper techniques for dealing with human error, crisis management, and decision making under stress
10.11 Discussion on Human- and Nature-Minding Automation and Technology Applications

Currently, a large and increasing number of automation companies are producing human-minding components, subsystems and systems, including services, in all areas of concern. The final goal is to produce and offer more efficient, cleaner and more economic technologies and products that are better for nature and the human. These areas include, but are not limited to, the following:
- Agriculture
- Electronics
- Mechatronics
- Manufacturing
- Robotics
- Industrial control
- Embedded control
- Communications
- Data logging systems
- Pneumatics-hydraulics
- Automotive
- Aircraft industry
- Medical systems
- Energy
- Civil engineering
In general, nature-minding (green) automation and energy conservation save health, money and the planet. Most of the nature-minding automation and technology applications can be classified in the following five categories (www.ni.com):

Machine and Process Monitoring: Old-fashioned equipment and technology is gradually abandoned and replaced by more efficient, cleaner equipment with improved performance and quality control.

Environmental Monitoring: The environment is now measured and monitored systematically by more accurate and reliable methods and equipment. In this way we can verify that our efforts to reduce the greenhouse gases and the resulting climatic change actually have the desired result.

Renewable Energy: This type of energy receives increasing attention from investors, politicians, and state and private power generation organizations. It involves the profitable use of wind, solar energy, water and other renewable resources.

Development and Test: This includes, among others, hardware-in-the-loop testing of hydrogen vehicles, automated sorting of plastics for recycling, etc.

Power Quality Monitoring: It is essential for the environment to assure the correct green operation of power systems through proper monitoring of power quality and power metering analysis.

The materials used in the construction of the mechanical parts of technological systems must be robust to oxidation and corrosion effects. Fine stainless steel, hot-dip galvanized steel, and anodized aluminum are excellent for this purpose. Methods which ensure the growing of more plants per square meter in greenhouses, such as hydroponic plant production lines and nursery tables, should be adopted; see www.greenautomation.fi. In general, all the application areas and examples presented in this chapter offer considerable room for potential nature-minding improvements.
Fig. 10.21 HYDRIADA: A prototype floating ecological desalination system based on a wind generator and solar cells (Source: www.ecowindwater.gr)
Examples of such nature-minding improvements can be found in the following web sites:

1. www.startupnation.com
2. www.controleng.com
3. www.electricalautomation.com
4. www.managingautomation.com
5. www.alternativeenergyfoundation.org
6. www.chemicalprocessing.com
7. www.ceasiamag.com
8. www.aea.on.ca
9. www.mbtmag.com
10. www.automationcontrols.com
11. www.controlglobal.com
12. www.automation.com
13. www.tonke.cn/company-en-ehgaia.html
14. www.industrialcontroldesignline.com
15. www.plantengineering.com
16. www.machine.design.com
17. www.all-energy.co.uk
18. www.engineerlive.com
19. www.ngneering.com/engineering.php
20. www.solariengineering.com
21. www.greensourceautomation.com
22. www.projectmechatronics.com
23. www.subbottomcutter.com
Figure 10.21 shows a prototype nature-minding off-shore desalination unit developed in Greece by a consortium co-sponsored by the European Commission and the Greek Government. The energy required for the desalination process is provided by a wind generator complemented by solar cells. The author’s IAS group was a partner of this consortium (www.ece.ntua.gr/images/pages/ias/).
Chapter 11
Mathematical Tools for Automation Systems I: Modeling and Simulation
Nature's great book is written in mathematics.
Galileo Galilei

It is by logic that we prove, but by intuition that we discover.
Henri Poincaré

Mathematics consists of content and know-how. What is know-how in mathematics? The ability to solve problems.
George Polya
11.1 Introduction

To describe and analyze the subsystems and the overall structure of human–automation systems we need to use appropriate mathematical methods and tools. The first step in any attempt to study and design a physical system by mathematical methods is to determine a descriptive model of how the system actually works. This process is collectively known as system modeling. Using the mathematical model of a system we can also formulate techniques and develop mathematics-based tools for imitating its operation using a computer. This is known as system simulation. The purpose of this chapter is to provide a brief exposition of the principal mathematical system models and the basic system simulation techniques at a minimal level of detail which assures their understanding and interpretation.

The overwhelming majority of mathematical system models fall into one of three categories: deterministic models, probabilistic-stochastic models, and fuzzy models. The type of model which must be used in each case is usually dictated by the system or problem at hand, but more often it is a matter of choice. In many applications, more than one type of model can be used. For example, a large Monte Carlo simulation model may be used in conjunction with a smaller, more tractable, deterministic model that uses expected values.
Section 11.2 presents the deterministic continuous-time and discrete-time state-space models. Section 11.3 discusses the probability models, including Bayes' probability updating rule and a short outline of statistics. Sections 11.4 and 11.5 are concerned with the entropy model and the reliability/availability model, respectively. Section 11.6 presents an account of stochastic processes and stochastic dynamic models (continuous- and discrete-time Gauss–Markov models). Section 11.7 provides the basic elements of fuzzy sets and fuzzy models. Finally, Section 11.8 discusses the simulation of dynamic systems using Euler and Runge–Kutta numerical techniques, and the simulation of probabilistic models using the Monte Carlo technique. Twelve carefully selected examples illustrate most of the concepts and techniques presented in the chapter.
11.2 Deterministic Models

The majority of automation systems involve processes that evolve over time. These processes are modeled by dynamic models, which are classified into continuous-time dynamic models and discrete-time dynamic models. The first type of model uses differential equations to represent the changing behavior of the system, whereas the second type uses difference equations. As a general rule, dynamic models are easy to formulate and difficult to solve. Exact analytic solutions are available only for a few cases (e.g., linear systems), and numerical methods do not always give a good qualitative understanding of the system performance. Therefore, in many cases use of graphic techniques is made. In state space the dynamic models are the following 122, 407:

Continuous-Time Model

$$\dot{x}(t) = f(x, u, t), \quad y(t) = g(x, u, t) \tag{11.1}$$

Discrete-Time Model

$$x_{k+1} = F(x_k, u_k, t_k), \quad y_k = G(x_k, u_k, t_k) \tag{11.2}$$

where $x \in X$ is the state vector, $X$ is the state space, $u \in U$ is the input (or control) vector which belongs to the permissible control space $U$, $y \in Y$ is the output vector, and $Y$ is the output space. If the functions $f$ and $g$ (or $F$ and $G$) are linear, then the model is linear, i.e.:

$$\dot{x} = Ax + Bu, \quad y = Cx + Du \tag{11.3}$$

$$x_{k+1} = Ax_k + Bu_k, \quad y_k = Cx_k + Du_k \tag{11.4}$$

where $A, B, C, D$ are time-varying or time-invariant matrices of proper dimensionality.
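As a brief illustration of how the linear continuous-time model (11.3) can be simulated, the following Python sketch integrates it with simple Euler steps (the Runge–Kutta methods discussed in Section 11.8 would be more accurate). The matrices are those derived in Example 11.1 below; the step size, horizon, and unit-step input are arbitrary illustrative choices.

```python
import numpy as np

def simulate_linear(A, B, C, D, x0, u, dt, steps):
    """Euler integration of the continuous-time linear model (11.3)."""
    x, outputs = x0.astype(float), []
    for k in range(steps):
        t = k * dt
        outputs.append(C @ x + D * u(t))      # y = Cx + Du
        x = x + dt * (A @ x + B * u(t))       # x(t+dt) ~ x(t) + dt*(Ax + Bu)
    return np.array(outputs)

# Matrices derived in Example 11.1 below, driven by a unit-step input.
A = np.array([[-5.0, 1.0], [-6.0, 0.0]])
B = np.array([1.0, 1.0])
C = np.array([1.0, 0.0])
y = simulate_linear(A, B, C, 0.0, np.zeros(2), u=lambda t: 1.0, dt=0.01, steps=1000)
print(y[-1])   # approaches 1/6, the steady-state gain of (D+1)/(D^2+5D+6)
```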
If the differential equations of the model contain partial derivatives (i.e., if the system occupies space: line, surface, volume), then the system is said to be a distributed-parameter system. For example, the heat or diffusion model is:

$$\frac{\partial y(s,t)}{\partial t} = c^2 \frac{\partial^2 y(s,t)}{\partial s^2} + u(s,t), \quad 0 < s < 1$$

$$y(0,t) = \left[\partial y(s,t)/\partial s\right]_{s=0} = 0, \quad y(s,0) = y_0(s), \quad 0 \le s \le 1$$

When a dynamic system has reached a stable equilibrium state, it is described by an algebraic (steady-state) model where $\dot{x}(t) = 0$ or $x_{k+1} - x_k = 0$. In this case the dynamic models (11.1) through (11.4) are reduced to their steady-state counterparts.

Example 11.1. Consider a process described by the following second-order differential equation:

$$(D^2 + 5D + 6)\, y(t) = (D + 1)\, u(t), \quad y(0) = y_0, \quad Dy(0) = Y_0 \tag{11.5}$$

where $D = d/dt$ is the time-derivative operator. To determine the state-space model (11.3) of this system, we divide (11.5) by $D^2$ and solve for y, i.e.:

$$y = \frac{1}{D}(u - 5y) + \frac{1}{D^2}(u - 6y)$$

where $1/D$ represents the integration operator. Here, two state variables $x_1$ and $x_2$ are needed because the process dynamics is of second order. Therefore, defining $x_1$ and $x_2$ as:

$$x_1 = y, \quad x_2 = \frac{1}{D}(u - 6y)$$

we get the state equations:

$$\dot{x}_1 = -5x_1 + x_2 + u, \quad \dot{x}_2 = -6x_1 + u, \quad y = x_1$$

which in vector-matrix form are written as $\dot{x} = Ax + Bu$, $y = Cx + Du$, where:

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad A = \begin{bmatrix} -5 & 1 \\ -6 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 0$$
The required initial conditions are:

$$x_1(0) = y(0) = y_0, \quad x_2(0) = \dot{x}_1(0) + 5y(0) - u(0) = Y_0 + 5y_0 - u(0)$$

Example 11.2. We consider the problem of training an astronaut to learn and practice a manual maneuver to bring an orbiting spacecraft to rest relative to another orbiting craft. The astronaut can control the acceleration/deceleration using the hand controls on the basis of the spacing between the two vehicles, the rate of which is measured by a sensing device on board. 351 To achieve the desired goal, the astronaut must act as follows. If he (she) observes that the closing velocity is zero, then no action should be taken (since the goal is already met). Otherwise, he (she) must move the acceleration hand control so that it is opposite and proportional to the closing velocity (for positive velocity he (she) must slow down and for negative velocity he (she) must speed up proportionally to the closing velocity magnitude). After some time the astronaut must look again at the closing velocity and repeat the procedure. In the following we will see under what conditions this procedure can be effective.

Let $t_{c,k} > 0$ be the time taken by the astronaut to adjust the controls, and $t_{w,k} \ge 0$ the waiting time until the next observation; $t_{w,k}$ can be freely selected. The time between observations of the velocity indicator is:

$$\Delta t_k = t_{c,k} + t_{w,k}$$

If $\alpha_k$ is the acceleration setting after the kth adjustment, the increment $\Delta v_k = v_{k+1} - v_k$ of the closing velocity $v_k = v(t_k)$ is equal to:

$$\Delta v_k = \alpha_{k-1} t_{c,k} + \alpha_k t_{w,k}$$

The control law that must be used is:

$$\alpha_k = -\lambda v_k$$

where $\lambda$ is a proportionality constant, i.e., the acceleration at time $t_k$ must be proportional to the velocity at time $t_k$ with opposite sign. Combining the last two equations we get:

$$\Delta v_k = -\lambda t_{c,k} v_{k-1} - \lambda t_{w,k} v_k$$

which means that the change in velocity over the time step depends on both $v_k$ and $v_{k-1}$. Here, we assume that $t_{c,k} = t_c$ and $t_{w,k} = t_w$, i.e., that they are constant (independent of k). Then, defining the state variables as:

$$x_{1,k} = v_k, \quad x_{2,k} = v_{k-1}$$

we obtain the state-space model of the problem:

$$x_{1,k+1} = x_{1,k} - (\lambda t_w)\, x_{1,k} - (\lambda t_c)\, x_{2,k}, \quad x_{2,k+1} = x_{1,k}$$

which is a linear discrete-time model of the form (11.4), with:

$$x_k = \begin{bmatrix} x_{1,k} \\ x_{2,k} \end{bmatrix}, \quad A = \begin{bmatrix} 1 - \lambda t_w & -\lambda t_c \\ 1 & 0 \end{bmatrix}, \quad B = 0$$

The equilibrium point corresponds to $x_{1,k+1} = x_{1,k}$ and $x_{2,k+1} = x_{2,k}$, i.e., to the point $(x_1, x_2) = (0, 0)$, which may be unstable if $\lambda$, $t_c$ and $t_w$ are large. But, if $t_c$ is much smaller than $t_w$, and $v_k \approx v_{k-1}$, we can use the approximation:

$$\Delta v_k \approx -\lambda t_w v_k$$

in which case $\Delta x_{1,k} = x_{1,k+1} - x_{1,k} = -(\lambda t_w)\, x_{1,k}$, and

$$A = \begin{bmatrix} 1 - \lambda t_w & 0 \\ 1 & 0 \end{bmatrix}$$

The equilibrium point $(0, 0)$ will be stable if the eigenvalues $z_1 = 0$ and $z_2 = 1 - \lambda t_w$ of A are inside the unit circle. Obviously, $z_1$ is inside, and $z_2$ will be so if $\lambda t_w < 2$. In particular, if $\lambda t_w < 1$, then the system will go to the equilibrium asymptotically without overshooting.
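The stability condition just derived is easy to check numerically. The sketch below iterates the approximate closed-loop map for three parameter settings; the values of λ and t_w are assumptions chosen only to exhibit the three qualitative regimes.

```python
# Iterate the approximate closed-loop map x_{1,k+1} = (1 - λ t_w) x_{1,k}
# from Example 11.2 and watch how the closing velocity evolves.
def closing_velocity(lam, t_w, v0=10.0, steps=20):
    v, history = v0, [v0]
    for _ in range(steps):
        v = (1.0 - lam * t_w) * v   # eigenvalue z2 = 1 - λ t_w
        history.append(v)
    return history

print(closing_velocity(lam=0.5, t_w=1.0))  # λ t_w = 0.5 < 1: monotone decay
print(closing_velocity(lam=0.5, t_w=3.0))  # 1 < λ t_w = 1.5 < 2: decaying oscillation
print(closing_velocity(lam=0.5, t_w=5.0))  # λ t_w = 2.5 > 2: divergence
```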
11.3 Probabilistic Models

Nearly all real-life problems involve elements of uncertainty, and in human–automation systems we may introduce random elements to account for uncertainties in human behavior. Very often one may be unsure of the exact physical laws that govern the system dynamics, or these laws are essentially random (e.g., in quantum mechanics). Probabilities are in some cases used for convenience, whereas in other cases they are used as a matter of necessity.

Probability is an intuitive and familiar concept. The simplest probability models are those that concern a discrete set of possible outcomes and do not involve dynamic elements. 96, 351, 412 Practically, the probability of events in class A is defined as:

$$\text{Prob}(A) = N_A / N_T$$

where $N_A$ is the number of observed occurrences of events in class A, and $N_T$ is the totality of occurrences of all events (i.e., those in class A and those not in class A). This experimental or empirical probability is only an approximation of the
axiomatic (logical) probability, which is defined as:

$$\text{Prob}(A) = \lim_{N_T \to \infty} N_A / N_T$$

Humans also use subjective estimates of probability based on all knowledge currently available. These estimates are updated using Bayes' formula (see Eq. 11.12).
11.3.1 Discrete Probability Model

Let x be a random variable which can take any of a discrete set of values $x \in \{x_1, x_2, x_3, \ldots\}$. If $x = x_i$ occurs with probability $p_i$, we write:

$$\text{Prob}\{x = x_i\} = p_i \tag{11.6}$$

Of course, we must have:

$$\sum_i p_i = p_1 + p_2 + p_3 + \cdots = 1$$

Since x takes the value $x_i$ with probability $p_i$, the average (or expected) value of x should be a weighted average of the possible values $x_i$, weighted according to their relative occurrence likelihoods $p_i$. We write:

$$E[x] = \sum_i x_i p_i \tag{11.7}$$

where $E[\cdot]$ is called "the expectation or averaging operator". The probabilities $p_i$ represent what is called "the probability distribution" of x.
11.3.2 Continuous Probability Model

Now, we introduce the probability concept for random variables that take values over a continuum. This concept is fully analogous to the discrete case, but now integrals replace sums. Let x be a random variable that takes values on the real line. To describe the probability structure of x we define the function:

$$F(\xi) = \text{Prob}\{x \le \xi\} \tag{11.8}$$

which is called the "distribution function of x". If $F(\xi)$ is differentiable, then the function:

$$f(\xi) = F'(\xi) \tag{11.9}$$

is called the "probability density function of x" and the following equality holds:

$$\text{Prob}\{a \le x \le b\} = F(b) - F(a) = \int_a^b f(\xi)\, d\xi$$

This means that the area under the density curve represents probability. The expected or average value of x is defined as:

$$E[x] = \int_{-\infty}^{\infty} \xi f(\xi)\, d\xi \tag{11.10}$$

which is directly analogous to the discrete case (11.7).
11.3.3 Bayes Updating Formula

Formally, the probability of event A occurring, given that the event B occurs, is given by:

$$\text{Prob}(A|B) = \frac{\text{Prob}(A \text{ and } B)}{\text{Prob}(B)}$$

and similarly:

$$\text{Prob}(B|A) = \frac{\text{Prob}(A \text{ and } B)}{\text{Prob}(A)}$$

For notational simplicity, the above formulas are written as:

$$P(A|B) = \frac{P(A, B)}{P(B)}, \quad P(B|A) = \frac{P(A, B)}{P(A)} \tag{11.11}$$

where $P(A, B)$ is the probability of A and B occurring jointly. Combining the two formulas in (11.11) we get the so-called Bayes formula:

$$P(A|B) = \frac{P(B|A)\, P(A)}{P(B)} \tag{11.12}$$

Now, if $A = H$ where H is a hypothesis (about the truth or cause), and $B = E$ where E represents observed symptoms or measured data or evidence, the formula (11.12) gives the so-called "Bayes updating (or learning) rule":

$$P(H|E) = \frac{P(E|H)\, P(H)}{P(E)} \tag{11.13}$$

where $P(E) = P(E|H)\, P(H) + P(E|\text{not}H)\, P(\text{not}H)$ and $P(\text{not}H) = 1 - P(H)$.
Here, $P(H)$ is the "prior" probability of the hypothesis H (before any evidence is obtained), $P(E)$ is the probability of the evidence, and $P(E|H)$ is the probability that the evidence is true when H holds. The term $P(H|E)$ is the "posterior" probability (i.e., the "updated" probability) of H after the evidence is obtained.

Consider now the case where two items of evidence $E_1$ and $E_2$ about H are observed one after the other. Then, (11.13) gives:

$$P(H|E_1) = \frac{P(E_1|H)\, P(H)}{P(E_1)}, \quad P(H|E_1, E_2) = \frac{P(E_2|H)\, P(H|E_1)}{P(E_2)} = \frac{P(E_2|H)\, P(E_1|H)\, P(H)}{P(E_1)\, P(E_2)} \tag{11.14}$$

Clearly (11.14) says that $P(H|E_1, E_2) = P(H|E_2, E_1)$, i.e., the order in which the evidence (data) is observed does not influence the resulting posterior probability of H given $E_1$ and $E_2$.

Now, suppose that we have two independent hypotheses $H_1$ and $H_2$. Then, applying (11.14) to both of them we get:

$$\frac{P(H_1|E_1, E_2)}{P(H_2|E_1, E_2)} = \frac{P(E_2|H_1)}{P(E_2|H_2)} \cdot \frac{P(E_1|H_1)}{P(E_1|H_2)} \cdot \frac{P(H_1)}{P(H_2)}$$

Here, $P(H_1)/P(H_2)$ is called the "prior odds ratio", $P(H_1|E_1, E_2)/P(H_2|E_1, E_2)$ is called the "posterior odds ratio", and $P(E_i|H_1)/P(E_i|H_2)$ is called the "likelihood ratio" of $E_i$ (i = 1, 2). The above result is true when the underlying statistics is stationary (i.e., time invariant). If the statistics is not stationary, the above Bayesian updating formula must be modified by discounting data according to how old it is.

The odds $O(H)$ (in favor) of a hypothesis H and its probability $P(H)$ are related by:

$$O(H) = \frac{P(H)}{1 - P(H)} \quad \text{or} \quad P(H) = \frac{O(H)}{1 + O(H)} \tag{11.15}$$

Thus, for a hypothesis H with probability 0.5, the odds in favor of H are "1 over 1". Using the definition (11.15) for $O(H)$, Bayes' formula (11.13) is written as:

$$O(H|E) = \frac{P(E|H)}{P(E|\text{not}H)}\, O(H) = LR(H|E)\, O(H)$$

where $LR(H|E) = P(E|H)/P(E|\text{not}H)$ is the likelihood ratio of the hypothesis H with evidence E, and

$$O(H|E) = \frac{P(H|E)}{1 - P(H|E)}$$
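A small numerical sketch of the updating rule (11.13) may help fix ideas. The prior and the likelihoods below are invented for illustration; the run also confirms the order-independence property implied by (11.14).

```python
# Sequential Bayes updating per (11.13); prior and likelihoods are assumed.
def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e     # posterior P(H|E)

p_h = 0.20                                 # prior P(H)
p_h = bayes_update(p_h, 0.9, 0.3)          # after evidence E1: ~0.429
p_h = bayes_update(p_h, 0.8, 0.4)          # after evidence E2: ~0.600
print(p_h)
# Applying E2 first and E1 second yields the same 0.600, as (11.14) predicts.
```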
Example 11.3. A manufacturer produces a variety of diodes which must be checked for quality before their shipping to customers. This is what is known as quality control. One way to do this is to test each diode separately, and another is to put a number of diodes in series and test the entire group. If this group fails, then one or more diodes of the group will be faulty. It is estimated that 0.3% of the diodes produced are faulty. The testing cost of a single diode is 5 cents, and the cost of a group of n diodes is 4 + n cents (n > 1). If a test shows that a group fails, then each diode in the group must be retested individually to find the faulty one(s). The problem is to determine the cheapest quality control procedure for detecting bad diodes. 351

The decision variable here is the number n of diodes in each group (n = 1, 2, ...). The testing cost C (cents) for one group is the random outcome (variable) of the quality control procedure selected. The problem is to select n so as to minimize the average testing cost D per diode, i.e., to minimize the value of the quantity:

$$D = (\text{average value of } C)/n = E[C]/n$$

If n = 1, then D = 5 cents; otherwise (if n > 1) we have C = 4 + n if the test shows that all diodes in the group are good, and C = (4 + n) + 5n if the group test indicates a failure (in which case each diode in the group must be retested). Suppose that p is the probability that all the diodes are good. Then the probability of one or more diodes being bad is 1 - p. Therefore, the expected (average) value of C is:

$$E[C] = (4 + n)\, p + [(4 + n) + 5n](1 - p)$$

We are given that the probability that one individual diode (among the n diodes) is bad is 0.003. Therefore, the probability that one individual diode is good is 1 - 0.003 = 0.997. Under the assumption of independence, the probability that all n diodes in one test group are good is $p = 0.997^n$. Thus, the expected value of the cost C is equal to:

$$E[C] = (4 + n)\, 0.997^n + [(4 + n) + 5n](1 - 0.997^n) = (4 + n) + 5n(1 - 0.997^n) = 4 + 6n - 5n\,(0.997)^n$$

Finally, we find that the average testing cost per diode is:

$$D = E[C]/n = 4/n + 6 - 5\,(0.997)^n$$

The value of n that extremizes D is found by solving the equation $dD/dn = 0$ with respect to n. The extremum is a minimum if $d^2D/dn^2$ evaluated at this value is positive. Here, we find that the minimum value of D is 1.48 cents/diode and occurs at n = 17. The above result shows that, using a group testing method, the quality control procedures for detecting faulty diodes can be made much more cost-effective than single diode testing.
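The same minimum can be found by a direct numerical search instead of differentiation, as the following sketch shows; scanning group sizes up to 100 is an arbitrary but safe choice.

```python
# Direct search for the group size n minimizing the per-diode test cost
# D(n) = 4/n + 6 - 5*(0.997)**n from Example 11.3.
def cost_per_diode(n: int) -> float:
    return 4.0 / n + 6.0 - 5.0 * 0.997 ** n

best_n = min(range(2, 101), key=cost_per_diode)
print(best_n, round(cost_per_diode(best_n), 2))   # -> 17 1.48, matching the text
```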
11.3.4 Statistics

Statistics is the study of measurement in the presence of random fluctuations. The use of statistics is necessary for the analysis of any probability model. 96, 351, 412 Suppose that $x, x_1, x_2, x_3, \ldots$ are independent random variables, all with the same probability distribution. We know that if x is discrete, the average (mean) value is:

$$E[x] = \sum_k x_k\, \text{Prob}\{x = x_k\} = \sum_k x_k p_k$$

and if x is continuous with density $f(\xi)$, then:

$$E[x] = \int_{-\infty}^{\infty} \xi f(\xi)\, d\xi$$

Another parameter of the probability distribution is the variance, which measures the extent to which x tends to deviate from its mean $E[x]$. The variance is generally defined as:

$$V[x] \equiv \text{Var}[x] = E\left[(x - E[x])^2\right] \tag{11.16}$$

If x is discrete, we have:

$$V[x] = \sum_k (x_k - E[x])^2\, \text{Prob}(x = x_k) \tag{11.17}$$

and if x is continuous with probability density $f(\xi)$, we have:

$$V[x] = \int_{-\infty}^{\infty} (\xi - E[x])^2 f(\xi)\, d\xi \tag{11.18}$$

A major result in statistics is the so-called central limit theorem, which says that as $n \to \infty$ the distribution of the sum $x_1 + x_2 + \cdots + x_n$ gets closer and closer to a certain type of distribution called a normal distribution. Let us define:

$$m = E[x] \quad \text{and} \quad \sigma^2 = V[x]$$

Then, for all real y, we have:

$$\text{Prob}\left\{\frac{x_1 + x_2 + \cdots + x_n - nm}{\sigma\sqrt{n}} \le y\right\} \to F(y), \quad \text{as } n \to \infty$$

where $F(y)$ is a special distribution function called the normal (or Gaussian) distribution function, defined as:
Fig. 11.1 Graphical representation of the Gaussian (normal) probability density function
$$F(y) = \int_{-\infty}^{y} \frac{1}{\sqrt{2\pi}}\, e^{-\xi^2/2}\, d\xi \tag{11.19}$$

where:

$$f(\xi) = \frac{1}{\sqrt{2\pi}}\, e^{-\xi^2/2} \tag{11.20}$$

is the corresponding (normal or Gaussian) probability density function, which has the graphical representation shown in Fig. 11.1. Using numerical integration it was found that the area between $-1 \le \xi \le 1$ is about 0.68, and the area between $-2 \le \xi \le 2$ is approximately 0.95. This means that for sufficiently large n, we have

$$-1 \le \frac{x_1 + x_2 + \cdots + x_n - nm}{\sigma\sqrt{n}} \le 1$$

about 68% of the time, and

$$-2 \le \frac{x_1 + x_2 + \cdots + x_n - nm}{\sigma\sqrt{n}} \le 2$$

about 95% of the time. The above imply that we are 68% sure that:

$$nm - \sigma\sqrt{n} \le x_1 + x_2 + \cdots + x_n \le nm + \sigma\sqrt{n} \tag{11.21}$$

and 95% sure that:

$$nm - 2\sigma\sqrt{n} \le x_1 + x_2 + \cdots + x_n \le nm + 2\sigma\sqrt{n} \tag{11.22}$$

In practice, it is common to consider the 95% interval from (11.22) as the range of normal variation in a random sample. If the sum $x_1 + x_2 + \cdots + x_n$ does not lie in the interval (11.22), we say that the deviation is statistically significant at the 95% level.
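A Monte Carlo sketch (a technique discussed further in Section 11.8) can illustrate the 95% interval (11.22). The choice of uniform summands, with m = 0.5 and variance 1/12, and the sample sizes are illustrative assumptions.

```python
# Monte Carlo check of (11.22): sums of n uniform variables fall inside
# nm +/- 2*sigma*sqrt(n) about 95% of the time.
import random, math

n, trials = 100, 10_000
m, sigma = 0.5, math.sqrt(1.0 / 12.0)       # mean and std of Uniform(0,1)
half_width = 2 * sigma * math.sqrt(n)
inside = sum(
    abs(sum(random.random() for _ in range(n)) - n * m) <= half_width
    for _ in range(trials)
)
print(inside / trials)                       # ~0.95
```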
11.4 Entropy Model

Entropy is a quantity that can be used as a measure of uncertainty, 412 and was coined by Shannon (in the 1940s) in his seminal work on information theory. Specifically, the entropy H of a class of transmitted information signals s that belong to the space $\Omega$ is defined as:

$$H(s) = -\int_\Omega p(s) \log_2 p(s)\, ds \tag{11.23}$$

where $p(s)$ is a Gaussian probability density function over the space $\Omega$. If the space is a discrete space of signals $\Omega = \{x_1, x_2, x_3, \ldots\}$, then H is defined as:

$$H(x_i) = -\sum_i p(x_i) \log_2 p(x_i) \tag{11.24}$$

Practically, the entropy H indicates the average uncertainty of an observer before the receipt of a message or signal (that reveals which $x_i$ is the correct one), given a set of possible messages (signals) $x_i$ and their probabilities $p(x_i)$. Clearly, if after the receipt of a message all messages in the set have zero probability except for one which has probability one, the uncertainty is zero. Therefore, the set of messages (signals) is characterized by its uncertainty-reducing capability. Some properties of $H(x_i)$ are the following: (i) $H(x_i)$ is a continuous function of $p(x_i) = p_i$; (ii) if $p_1 = p_2 = \cdots = p_N = 1/N$, then H is an increasing function of N. Shannon's theory was generalized by various authors for dynamic systems. 22, 52, 426
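The discrete entropy (11.24) is straightforward to compute, as the following short sketch with invented probability distributions shows; the three cases illustrate the zero-uncertainty case and property (ii) discussed above.

```python
# Discrete entropy per (11.24); the distributions below are illustrative.
import math

def entropy(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits
print(entropy([1.0]))              # 0.0 bits: a certain message carries no surprise
print(entropy([0.25] * 4))         # 2.0 bits: uniform over N=4, maximal for N=4
```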
Example 11.4. Consider a linear system described by:

$$y(t) = \sum_{i=1}^{m} w_i x_i(t) + n(t) \tag{11.25}$$

where $x_i(t)$ (i = 1, 2, ..., m) are the inputs, $y(t)$ is the output, $w_i$ (i = 1, 2, ..., m) are weight coefficients, and $n(t)$ is a Gaussian noise with zero mean value and variance $\sigma_n^2$. It is assumed that the noise $n(t)$ is uncorrelated with all $x_i(t)$, i.e., $E[n(t)\, x_i(t)] = 0$ (i = 1, 2, ..., m). The problem is to find when the amount of information contained in the inputs $x_i$ (i = 1, 2, ..., m) is preserved as much as possible in the output $y(t)$. This will be so if the average mutual information (mutual entropy) between the input vector $x = [x_1, x_2, \ldots, x_m]^T$ and the output y is maximized. The average mutual information $I(y; x)$ between y and x is equal to:

$$I(y; x) = H(y) - H(y|x) \tag{11.26}$$

where $H(y)$ is the differential entropy of y, and $H(y|x)$ is the conditional entropy of y given x. From (11.25) we obtain $H(y|x) = H(n)$. Thus, (11.26) gives:

$$I(y; x) = H(y) - H(n) \tag{11.27}$$

Since the system (11.25) is linear and $n(t)$ is zero-mean Gaussian, the output $y(t)$ is also zero-mean Gaussian. Let $\sigma_y^2$ be the variance of $y(t)$. In this case, the entropies $H(y)$ and $H(n)$ are calculated using the probability density function (11.20). The result is:

$$H(y) = \frac{1}{2}\left[1 + \log_2 2\pi\sigma_y^2\right], \quad H(n) = \frac{1}{2}\left[1 + \log_2 2\pi\sigma_n^2\right]$$

Thus, (11.27) gives:

$$I(y; x) = \frac{1}{2}\log_2\left(\sigma_y^2 / \sigma_n^2\right) \tag{11.28}$$

where $\sigma_y^2$ depends on $\sigma_n^2$. The ratio $\sigma_y^2/\sigma_n^2$ can be regarded as the "signal to noise" ratio. Equation 11.28 says that when $\sigma_n^2$ is given, the mean mutual information $I(y; x)$ is maximized if $\sigma_y^2$ is maximized. Of course, this result was obtained for a system of the form (11.25) and it is not true in all cases. Some applications of information theory concepts in human–machine systems are provided in the book of Sheridan and Ferrell. 524
11.5 Reliability and Availability Models

11.5.1 Definitions and Properties

Reliability is one of the most basic factors for the successful and effective operation of automation systems. Of primary importance in the design of a multicomponent system is to use the available resources in the best way so as to maximize the total reliability (and availability) of the system, or to minimize the consumption of resources while achieving desired goals of reliability and availability. The most direct way to increase the reliability of a system is to use more reliable components. But in practice there is a trend to use redundant units. This is due to the fact that the cost of constructing more reliable components increases more rapidly than the cost of redundancy. 426, 573

Definition. Reliability of a system is the probability that the system achieves its goal for the desired time period under given conditions of its environment.
Let $f(t)$ be the probability density function of the system's "life time". Then, $f(t)\,dt$ is the probability of fault in the time interval $[t, t + dt]$. The cumulative distribution of the life time is:

$$F(t) = \int_0^t f(\tau)\, d\tau = \text{Prob}\{\text{life time} \le t\}$$

The reliability (or survival function) $R(t)$ of a system in the interval $[0, t]$ is defined as the probability that the system survives up to time t, i.e.:

$$R(t) = 1 - F(t) = \int_t^{\infty} f(\tau)\, d\tau$$

since $\int_0^{\infty} f(\tau)\, d\tau = 1$. Differentiating $R(t)$ gives:

$$dR(t)/dt = -dF(t)/dt = -f(t) \tag{11.29}$$

The failure (or mortality) rate $\lambda(t)$ is defined on the basis of the following relation: $\lambda(t)\,dt$ is the probability of system failure in the interval $[t, t + dt]$ under the condition that the system was operating properly up to time t, i.e.:

$$\lambda(t)\, dt = \frac{\text{probability of failure in } [t, t + dt]}{\text{probability of system survival up to time } t} = \frac{f(t)\, dt}{R(t)}$$

Thus, $\lambda(t) = f(t)/R(t) = -(dR(t)/dt)/R(t)$. Hence:

$$R(t) = e^{-\int_0^t \lambda(\tau)\, d\tau} \tag{11.30}$$

Two special cases are the following:

(i) Constant failure rate $\lambda(t) = \lambda_0$:

$$R(t) = e^{-\lambda_0 t} \quad \text{(exponential reliability)} \tag{11.31a}$$

(ii) Linearly increasing failure rate $\lambda(t) = k_0 t$:

$$R(t) = e^{-k_0 t^2/2} \quad \text{(bell-type reliability)} \tag{11.31b}$$
A useful parameter used in practice is the "mean time to first failure" (MTFF), defined as:

$$\text{MTFF} = E[T] = -\int_0^{\infty} t\, dR(t) = \int_0^{\infty} R(t)\, dt \tag{11.32}$$

since $R(\infty) = 0$. From the above it follows that $R(t) = \text{Prob}\{T > t\}$, where T is the life (survival) time. Using this definition of $R(t)$ we find that the total (simultaneous) reliability of two independent systems with reliabilities $R_1(t)$ and $R_2(t)$, connected in series, is equal to:

$$R(t) = R_1(t)\, R_2(t) \quad \text{(systems in series)} \tag{11.33}$$

Using (11.30) we find that:

$$\lambda = \lambda_1 + \lambda_2$$

where $\lambda_1, \lambda_2$ are the failure rates of the two systems and $\lambda$ is the failure rate of their series combination. For two systems connected in parallel we find that $(1 - R) = (1 - R_1)(1 - R_2)$, i.e.:

$$R = 1 - (1 - R_1)(1 - R_2) \tag{11.34}$$

The above can be used for dealing with series-parallel combinations and for any number of component systems.

Let MUT be the mean time which elapses from the end of a repair up to the next failure (MUT = mean up-time), MDT the mean time between a failure and the next repair (MDT = mean down-time), and MCT the mean time between a failure and the next failure (MCT = mean cycle-time). Then, we have the following:

$$\text{MCT} = \text{MUT} + \text{MDT}, \quad f = 1/\text{MCT} \tag{11.35}$$

where f is the mean frequency of failures.

Definition. Availability A of a system is the ratio of mean up-time and mean cycle-time, i.e.:

$$A = \frac{\text{MUT}}{\text{MCT}} = 1 - \frac{\text{MDT}}{\text{MCT}} \tag{11.36}$$

If we know two of the above four parameters MUT, MDT, MCT and A (or f), we can always determine the other two parameters. The formulas (11.35) and (11.36) hold for any distribution of MUT and MDT.
11.5.2 Markov Reliability Model

In the Markovian approach, to compute the reliability measures we count all the possible states of the system at hand, and draw the transition diagram from state to state. In this way we determine the transition probability matrix, from which we can find the reliability and the other relevant parameters by solving the state equations. At each time instant some components (or subsystems) are functioning properly, some others are in stand-by, others are under repair, and the remaining subsystems are out of operation. To each of the above combinations corresponds a system state $S_j$. Let $E_{jk}$ be the event that causes transition from a state $S_j$ to the state $S_k$. If there is no such event, the states $S_j$ and $S_k$ are not connected. We assume that all subsystems (or components) are mutually independent and that their random behavior (faulty, repair, fault-free, etc.) always follows the exponential probability law $\lambda_{jk}\exp(-\lambda_{jk}t)$. Then in the interval dt the system behaves as a Markovian process with probability transition matrix:

$$\Lambda = \begin{bmatrix} 1 + \lambda_{11}\,dt & \lambda_{12}\,dt & \cdots & \lambda_{1n}\,dt \\ \lambda_{21}\,dt & 1 + \lambda_{22}\,dt & \cdots & \lambda_{2n}\,dt \\ \vdots & \vdots & & \vdots \\ \lambda_{n1}\,dt & \lambda_{n2}\,dt & \cdots & 1 + \lambda_{nn}\,dt \end{bmatrix}$$

This implies that the state vector $x(t + dt)$ of the system (a row vector) is equal to:

$$x(t + dt) = x(t)\,\Lambda$$

and so

$$dx(t) = x(t + dt) - x(t) = x(t)(\Lambda - I) = x(t)\, A\, dt$$

or

$$\frac{dx(t)}{dt} = x(t)\, A \qquad (x(0)\ \text{known})$$

where $A = [\lambda_{ij}]$. This model contains $n - 1$ independent equations, since the n components of the vector $x(t)$ satisfy the probability condition:

$$\sum_{j=1}^{n} x_j(t) = 1$$

Setting $dx(t)/dt = 0$ gives the steady-state (algebraic) model $x(t)\, A = 0$, which, together with the above probability condition, determines the probabilities in the steady state.
Fig. 11.2 State transition diagram of a system with redundancy (states S1, S2, S3 with transition probabilities λ1 dt and λ2 dt)
Example 11.5. A system contains two subsystems $\Sigma_1$ and $\Sigma_2$, where $\Sigma_2$ is redundant to $\Sigma_1$. The states of the system are:

- $S_1$: $\Sigma_1$ functions properly and $\Sigma_2$ is in standby
- $S_2$: $\Sigma_1$ is faulty and $\Sigma_2$ is functioning
- $S_3$: both $\Sigma_1$ and $\Sigma_2$ are faulty

The state transition diagram of the system is shown in Fig. 11.2. Here, we have $p(S_1 \to S_2) = \lambda_1\, dt$ and $p(S_2 \to S_3) = \lambda_2\, dt$, where $\lambda_1$ and $\lambda_2$ are the failure rates of $\Sigma_1$ and $\Sigma_2$, respectively. The states in which the system is functioning are $S_1$ and $S_2$. Therefore the reliability is equal to:

$$R(t) = x_1(t) + x_2(t)$$

The probability condition gives $x_1(t) + x_2(t) + x_3(t) = 1$. Here:

$$A = \begin{bmatrix} -\lambda_1 & \lambda_1 & 0 \\ 0 & -\lambda_2 & \lambda_2 \\ 0 & 0 & 0 \end{bmatrix}, \quad x(0) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$

The solution for $R(t)$ is found to be:

$$R(t) = 1 - x_3(t) = \frac{\lambda_2}{\lambda_2 - \lambda_1}\, e^{-\lambda_1 t} - \frac{\lambda_1}{\lambda_2 - \lambda_1}\, e^{-\lambda_2 t}, \quad \text{when } \lambda_2 \ne \lambda_1$$

and

$$R(t) = (1 + \lambda t)\, e^{-\lambda t}, \quad \text{when } \lambda_2 = \lambda_1 = \lambda$$

The mean time to first failure (MTFF) (see Eq. 11.32) is equal to:

$$\text{MTFF} = \int_0^{\infty} R(t)\, dt = \frac{1}{\lambda_1} + \frac{1}{\lambda_2}$$
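The closed-form solution of Example 11.5 can be cross-checked by integrating the state equation dx/dt = xA numerically, as in the following sketch; the failure rates, step size, and horizon are illustrative assumptions.

```python
# Numerical check of Example 11.5: integrate dx/dt = xA for the row vector
# x = (x1, x2, x3) and compare R(t) = x1 + x2 with the closed-form solution.
import numpy as np

lam1, lam2 = 0.01, 0.02                  # illustrative failure rates
A = np.array([[-lam1, lam1, 0.0],
              [0.0, -lam2, lam2],
              [0.0, 0.0, 0.0]])

x = np.array([1.0, 0.0, 0.0])            # start in S1
dt, t_end = 0.01, 100.0
for _ in range(int(t_end / dt)):
    x = x + dt * (x @ A)                 # Euler step of dx/dt = xA

R_numeric = x[0] + x[1]
R_exact = (lam2 * np.exp(-lam1 * t_end) - lam1 * np.exp(-lam2 * t_end)) / (lam2 - lam1)
print(R_numeric, R_exact)                # the two values should agree closely
```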
11.6 Stochastic Processes and Dynamic Models

11.6.1 Stochastic Processes

A stochastic process is defined to be a collection (or ensemble) of time functions $X(h, t)$ of random variables corresponding to an infinitely countable set of experiments h, with an associated probability description, for example $p(x, t)$ (see Fig. 11.3). 96, 412 At time $t_1$, $X(t_1)$ is a random variable with probability $p(x, t_1)$. Similarly, $X(t_2)$ is a random variable with probability $p(x, t_2)$. Here, $X(t_i)$ is a random variable over the ensemble at every time $t_i$, and so we can define first and higher-order statistics for the variables $t_1, t_2, \ldots$.

First-order statistics is concerned only with a single random variable $X(t)$ and is expressed by a probability distribution $P(x, t)$ and its density $p(x, t) = \partial P(x, t)/\partial x$ for the continuous-time case, or its distribution function $P(x_i, t)$ for the discrete-time case. Second-order statistics is concerned with two random variables $X(t_1)$ and $X(t_2)$ at two distinct time instances $t_1$ and $t_2$. In the continuous-time case we have the following probability functions for $x(t_1) = x_1$ and $x(t_2) = x_2$:

$$P(x(t_1), x(t_2); t_1, t_2) \quad \text{(joint distribution)}$$

$$p(x(t_1), x(t_2); t_1, t_2) = \partial^2 P/\partial x_1 \partial x_2 \quad \text{(joint density)}$$

$$p(x_2, t_2) = \int_{-\infty}^{\infty} p(x_1, x_2; t_1, t_2)\, dx_1 \quad \text{(marginal density)}$$

$$p(x_1, t_1 | x_2, t_2) = p(x_1, x_2; t_1, t_2)/p(x_2, t_2) \quad \text{(conditional density)}$$

Using the above probability functions we get the first- and second-order averages (moments) as follows:
Fig. 11.3 Ensemble representation of a stochastic process (sample functions h1, h2, h3, ..., hm observed at times t1 and t2)
$$\bar{x}(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, p(x, t)\, dx \quad \text{(mean value)}$$

$$R_{xx}(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, p(x_1, x_2; t_1, t_2)\, dx_1\, dx_2 \quad \text{(autocorrelation)}$$

$$C_{xx}(t_1, t_2) = E\{[x(t_1) - \bar{x}(t_1)][x(t_2) - \bar{x}(t_2)]\} = R_{xx}(t_1, t_2) - \bar{x}(t_1)\bar{x}(t_2) \quad \text{(autocovariance)}$$

$$C_{xx}(t, t) = \sigma_x^2 \quad \text{(variance)}$$

The sample-statistics time averages of order n over the sample functions of the continuous-time process $X(t)$ are defined as:

$$\langle X^n \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} X(t)^n\, dt$$

and the time averages of the discrete-time process $X_i$ as:

$$E[X_i^n] = \lim_{N \to \infty} \sum_{i=1}^{N} X_i^n\, P_X(x_i)$$

Stationarity. A stochastic process is said to be stationary if all its marginal and joint density functions do not depend on the choice of the time origin. If this is not so, then the process is called nonstationary.

Ergodicity. A stochastic process is said to be ergodic if its ensemble moments (averages) are equal to its corresponding sample moments, i.e., if:

$$E[X^n] = \int_{-\infty}^{\infty} x^n p(x)\, dx = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} X(t)^n\, dt = \langle X^n \rangle$$

Ergodic stochastic processes are always stationary. The converse does not always hold.

Stationarity and Ergodicity in the Wide Sense. Motivated by the normal (Gaussian) density, which is completely described by its mean and variance, in practice we usually employ only first- and second-order moments. If a process is stationary or ergodic up to second-order moments, then it is called a stationary or ergodic process in the wide sense.

Markov Process. A Markov process is a process in which for $t_1 > t_2 > \cdots > t_n$ the following property (called the Markovian property) is true:
$$\text{Prob}\{X(t_1) \le x_1 \mid X(t_2) = x_2, \ldots, X(t_n) = x_n\} = \text{Prob}\{X(t_1) \le x_1 \mid X(t_2) = x_2\}$$

Example 11.6. (Telegraph Signal and Brownian Motion) A telegraph signal is a stochastic process taking values $-1$ and $+1$ with probabilities:

$$P(X(t) = 1) = e^{-\lambda t} \cosh(\lambda t), \quad P(X(t) = -1) = e^{-\lambda t} \sinh(\lambda t)$$

The autocorrelation function of the telegraph signal is found to be:

$$R_{xx}(t_1, t_2) = e^{-2\lambda|t_1 - t_2|}$$

Brownian motion $X(t)$ is defined to be the random particle motion in physics, and has the following properties:

$$E[X(t)] = 0, \quad E\left[X(t)^2\right] = k s^2 = (t/T)\, s^2, \quad t = kT$$

with $s \to 0$ and $T \to 0$ ($T > 0$). Clearly, $E[X(t)^2] > 0$ if $s^2 = \alpha T$ for some constant $\alpha$. The process

$$W(t) = \lim_{T \to 0} X(t)$$

has the properties:

$$E[W(t)] = 0, \quad E\left[W(t)^2\right] = \alpha t$$

and is called the Wiener–Lévy process. The autocorrelation function of the Brownian motion $X(t)$ is found to be:

$$R_{xx}(t_1, t_2) = \begin{cases} \alpha t_2, & t_1 \ge t_2 \\ \alpha t_1, & t_1 \le t_2 \end{cases}$$
11.6.2 Stochastic Dynamic Models

We consider an n-dimensional linear discrete-time dynamic process described by:

$$x_{k+1} = A_k x_k + B_k w_k, \quad x_k \in R^n,\ w_k \in R^r$$
$$z_k = C_k x_k + v_k, \quad v_k \in R^m \tag{11.37}$$

where $A_k$, $B_k$, $C_k$ are matrices of proper dimensionality depending on the discrete time index k, and $w_k$, $v_k$ are stochastic processes (the input disturbance and measurement noise, respectively) with properties:

$$E[w_k] = 0, \quad E[v_k] = 0$$
$$E\left[w_k w_j^T\right] = Q_k \delta_{kj}, \quad E\left[v_k v_j^T\right] = R_k \delta_{kj}, \quad E\left[w_k v_j^T\right] = 0 \tag{11.38}$$

where $\delta_{kj}$ is the Kronecker delta defined as $\delta_{kk} = 1$, $\delta_{kj} = 0$ ($k \ne j$). The initial state $x_0$ is a random (stochastic) variable, such that:

$$E\left[v_k x_0^T\right] = E\left[w_k x_0^T\right] = E\left[w_k v_j^T\right] = 0 \tag{11.39}$$

The above properties imply that the processes $w_k$ and $v_k$ and the random variable $x_0$ are independent. If they are also Gaussian distributed, then the model is said to be a discrete-time Gauss–Markov model (or Gauss–Markov chain), since, as can easily be seen, the process $\{x_k\}$ is Markovian. The continuous-time counterpart of the Gauss–Markov dynamic model (11.37) is:

$$\dot{x}(t) = A(t)\, x(t) + B(t)\, w(t)$$
$$z(t) = C(t)\, x(t) + v(t) \tag{11.40}$$

where $x(0)$, $w(t)$ and $v(t)$ are Gaussian distributed with:

$$E[x(0)] = \bar{x}_0, \quad E\left[(x(0) - \bar{x}_0)(x(0) - \bar{x}_0)^T\right] = \Sigma_0$$
$$E[w(t)] = 0, \quad E\left[w(t)\, w^T(\tau)\right] = Q(t)\, \delta(t - \tau)$$
$$E[v(t)] = 0, \quad E\left[v(t)\, v^T(\tau)\right] = R(t)\, \delta(t - \tau)$$
$$E\left[x(0)\, w^T(t)\right] = E\left[x(0)\, v^T(t)\right] = E\left[w(t)\, v^T(\tau)\right] = 0 \quad (t \ne \tau) \tag{11.41}$$
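A minimal simulation sketch of the discrete-time Gauss–Markov model (11.37) follows, specialized to a scalar state for brevity; the numerical values of A, B, C, Q, and R are assumptions chosen only for illustration.

```python
# Scalar discrete-time Gauss-Markov simulation per (11.37)-(11.38);
# all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = 0.95, 1.0, 1.0
Q, R = 0.01, 0.1                       # disturbance and measurement noise variances

x = rng.normal(0.0, 1.0)               # Gaussian initial state x0
states, measurements = [], []
for k in range(200):
    w = rng.normal(0.0, np.sqrt(Q))    # w_k ~ N(0, Q), white by construction
    v = rng.normal(0.0, np.sqrt(R))    # v_k ~ N(0, R), independent of w_k and x0
    x = A * x + B * w                  # state propagation
    states.append(x)
    measurements.append(C * x + v)     # noisy observation z_k
```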
11.7 Fuzzy Sets and Fuzzy Models

11.7.1 Fuzzy Sets

The concept of set is the foundation of the mathematical discipline. In a classical (or crisp) set X only one of the following can be true: an element x belongs to X or does not belong to X, symbolically $x \in X$ or $x \notin X$. This dichotomy was broken by the fuzzy sets introduced by Zadeh (1965). 195, 197, 585

Let $X = \{x_1, x_2, x_3, x_4, x_5\}$ be a classical set. The set X is called the reference superset. Now, let $A = \{x_1, x_3, x_5\}$ be a classical subset of X. An equivalent representation of A is:

$$A = \{(x_1, 1), (x_2, 0), (x_3, 1), (x_4, 0), (x_5, 1)\}$$

which is an ordered set of pairs $(x, \mu_A(x))$, where $x \in X$ is the element of concern and $\mu_A(x)$ is the membership of x in the subset A, where:

$$\mu_A(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{if } x \notin A \end{cases}$$

That is, here we have $\mu_A: X \to \{0, 1\}$, where the set $\{0, 1\}$ has two elements, namely 0 and 1. If we allow the membership function $\mu_A(x)$ to be:

$$\mu_A: X \to [0, 1]$$

where $[0, 1]$ is the full closed interval between 0 and 1 (i.e., $0 \le \mu_A(x) \le 1$), then we have the fuzzy subset A of X, defined as:

$$A = \{(x, \mu_A(x)) \mid x \in X,\ \mu_A(x): X \to [0, 1]\} \tag{11.42}$$

Another notation for the fuzzy set A is:

$$A = \mu_A(x_1)/x_1 + \mu_A(x_2)/x_2 + \cdots + \mu_A(x_n)/x_n \tag{11.43}$$

where the symbol "+" represents union of points and the symbol "/" does not represent division.

Example 11.7. A fuzzy set with discrete points is:

$$A_1 = \{(7, 0.1), (8, 0.5), (9, 0.8), (10, 1), (11, 0.8), (12, 0.5), (13, 0.1)\}$$
$$= 0.1/7 + 0.5/8 + 0.8/9 + 1/10 + 0.8/11 + 0.5/12 + 0.1/13$$

A fuzzy set with a continuous domain of elements x is:

$$A_2 = \left\{(x, \mu_A(x)) \mid x \in X,\ \mu_A(x) = 1\big/\left[1 + (x - 10)^2\right]\right\}$$
Fuzzy Set Operations. The three fundamental operations of fuzzy sets are defined as extensions of the respective operations of classical sets, i.e.:

Intersection: $C = A \cap B = \{(x, \mu_C(x)) \mid x \in X,\ \mu_C(x) = \min\{\mu_A(x), \mu_B(x)\}\}$

Union: $D = A \cup B = \{(x, \mu_D(x)) \mid x \in X,\ \mu_D(x) = \max\{\mu_A(x), \mu_B(x)\}\}$

Complement: $A^c = \{(x, \mu_{A^c}(x)) \mid x \in X,\ \mu_{A^c}(x) = 1 - \mu_A(x)\}$

It is easy to verify that the standard properties of sets also hold here (i.e., De Morgan, absorption, associativity, distributivity, idempotency).

Fuzzy Set Image. The image $f(A)$ of a fuzzy set A through the mapping (function) $f(\cdot)$ is the fuzzy set:

$$f(A) = \sum_Y \mu_A(x)/f(x) \tag{11.44}$$

For example, if $y = f(x_1) = f(x_2)$ for $x_1 \ne x_2$, we have:

$$\mu_A(x_1)/f(x_1) + \mu_A(x_2)/f(x_2) = \max\{\mu_A(x_1), \mu_A(x_2)\}/y$$

Fuzzy Inference. Fuzzy inference (or fuzzy reasoning) is an extension of classical inference based on the modus ponens and modus tollens rules. Thus we have:

Fuzzy modus ponens
Rule: IF x = A THEN y = B
Fact: x = A'
Inference: y = B'

where A, A', B and B' are fuzzy sets.

Fuzzy modus tollens
Rule: IF x = A THEN y = B
Fact: y = B'
Inference: x = A'

Fuzzy Relations. Let X and Y be two reference supersets. Then by the term fuzzy relation R we mean a fuzzy set in the Cartesian product:

$$X \times Y = \{(x, y),\ x \in X,\ y \in Y\}$$
254
11 Mathematical Tools for Automation Systems I: Modeling and Simulation
which has membership function R .x; y/: R W X Y ! Œ0; 1 For each pair .x; y/ the membership function R .x; y/ represents the connection degree between x and y. Zadeh’s max–min composition On the basis of the above, we can formulate the rule of “max–min fuzzy composition” developed by Zadeh, which is as follows. Let the fuzzy sets A and B: A D f.x; A .x// jx 2 X g ; B D f.y; B .y// jy 2 Y g and a fuzzy relation upon X Y , namely: R D f..x; y/ ; R .x; y// j.x; y/ 2 X Y g Then, if A is the input to R, the membership function of the output set B is given by the relation: B .y/ D max fmin ŒA .x/ ; R .x; y/g (11.45) x
or, symbolically: B D AıR
(11.46)
where “ı” denotes the max–min operation. If we are given a fuzzy rule: IF x is A THEN y is B we can find the corresponding fuzzy relation R .x; y/ using one of the following rules: Mamdani’s rule (minimum) ˚ R xi ; yj D min A .xi / ; B yj Larsen’s rule (product) R xi ; yj D A .xi / B yj Zadeh’s arithmetic rule ˚ R xi ; yj D min 1; 1 A .xi / C B yj Zadeh’s maximum rule ˚ R xi ; yj D max min A .xi / ; B yj ; 1 A .xi / Example 11.8. Let X D Y D f1; 2; 3; 4g.
11.7 Fuzzy Sets and Fuzzy Models
255
A D “x smal l” D f.1; 1/ ; .2; 0:6/ ; .3; 0:2/ ; .4; 0/g and R D “x nearly equal to y” with fuzzy relation: xny 1 RD 2 3 4
1 1 0.5 0 0
2 0.5 1 0.5 0
3 0 0.5 1 0.5
4 0 0 0.5 1
Then, the max–min rule B D A ı R gives: B .y/ D max fmin fA .x/ ; R .x; y/gg x
D f.1; 1/ ; .2; 0:6/ .3; 0:5/ ; .4; 0:2/g Obviously, the result can be interpreted as the fuzzy set “xD nearly small”. Thus, in this case the “fuzzy modus ponens” rule gives: IF “x is small” AND “x is nearly equal to y” THEN “y is nearly small”.
11.7.2 Fuzzy Systems The general structure of a fuzzy system (or fuzzy decision algorithm) involves the following four units (Fig. 11.4)
A fuzzy rule base, i.e., a base of IF–THEN rules (FRB). A fuzzy inference mechanism (FIM). An input fuzzification unit (IFU). An output defuzzification unit (ODU).
The fuzzy rule base, usually contains, besides the fuzzy or linguistic rules, a standard arithmetic data base section. The fuzzy rules are provided by human experts or are derived through simulation. The input fuzzification unit (fuzzifier) receives the non fuzzy input values and converts them in fuzzy or linguistic form. The fuzzy inference mechanism is the core of the system and involves the fuzzy inference
Non-fuzzy inputs
IFU
FIM Fuzzy Rule Base
Fig. 11.4 General structure of a fuzzy system
ODU
Non-fuzzy outputs
256
11 Mathematical Tools for Automation Systems I: Modeling and Simulation
logic (e.g., the max–min rule of Zadeh, etc.). Finally, the output defuzzification unit converts the fuzzy results provided by FIM to non-fuzzy form using a defuzzification method. The fuzzifier performs a mapping from the set of real input values x D Œx1 ; x2 ; : : : ; xn 2 X to the fuzzy subset A of the superset X . Two possible choices of this mapping are: A x 0 D
8 0 ˆ ˆ <1 if x D x
.Singleton fuzzifier/ ˆ ˆ :0 if x 0 ¤ x # " 0 .x 0 x/T .x 0 x/ .Bell – type fuzzifier/ A x D exp 2
The two most popular defuzzification methods are the following: Center of Gravity (COG) Method The defuzzified value w0 is given by: w0 D
" X
#," wi B .wi /
i
X
# B .wi /
i
Mean of Maxima (MOM) Method Here, the defuzzified output w0 is equal to: 2 w0 D 4
m X
3, wj 5 m
j D1
where wj is the value that corresponds to the j maximum of the membership function B .w/. Example 11.9. A total assessment of any human–automation system cannot be complete without consideration of the human reliability. However this is not a simple task, because there are significant subjective aspects to all data, there is scarcity of published field data, simulation studies may be unrepresentative, and poor human performance may be suppressed for publicity reasons. The human error probability (HEP) is defined as the normalized frequency of a particular type of error in a given time period: HEP D Ne =No where Ne is the number of errors in a given period, and No is the maximum number of opportunities for the same error in the same period. Suppose that a task may be performed in a variety of environments ranging from benign to adverse. Let us categorize (fuzzify) the environment E as: Poor .P /, Fair .F /, Good .G/, and Excellent .Ex/
11.7 Fuzzy Sets and Fuzzy Models
257
This fuzzy partitioning of the reference (environment) superset, scaled in the interval X D Œ0; 1, is shown in Fig. 11.5(a). The analytic representation of the membership functions of Fig. 11.5 is the following. 2=3 E 1 nE 0 E 1=3 1=3 E 2=3 P 1 3E 0 0 F 3E 2 3E 0 G 0 .E 1=3/ = .1=3/ 3 .1 E/ E 0 0 .E 2=3/ = .1=3/ nT 0 T 0:25 0:25 T 0:5 0:5 T 0:75 0:75 t 1 G1 1 T =0:25 0 0 0 T =0:25 2 T =0:25 0 0 G2 0 .T 0:25/ =0:25 3 T =0:25 0 G3 0 0 .T 0:5/ =0:25 4 T =0:25 G4 0 0 0 .T 0:75/ =0:25 G5 The human error probability depends on the type (complexity) of each task and the mental abilities required for its execution. A possible task classification and the associated human failure probabilities is shown in Table 11.1 (according to BS 5760: Part 2). 197
a
b µ(T)
µ(E) 1
P
F
G
E
1/3
2/3
1.0
1
0 0
E
G1
G2
G3
G4
G5
0.75
1.0
0 0
0.25
0.5
T
Fig. 11.5 (a) Fuzzification of the Environment (E), (b) Fuzzification of the task quality (T ) Table 11.1 Task classes and corresponding HEPs Class of tasks (fuzzy set) Simple, frequently performed tasks, minor mental requirements Moderate difficulty, less time, more mental requirements Complex task, strong mental requirements Higher complexity, very strong mental requirements Limiting mental requirements, unfamiliar task
Probability of failure 0.001
Very low
0.01
Low
0.1 0.3
Medium High
1.0
Very high
258
11 Mathematical Tools for Automation Systems I: Modeling and Simulation µ 1
VL
LO
ME
HI
–2
–1
–0.5
VH
0 –3
0 p = log p
Fig. 11.6 Fuzzification of the log HEP shown in Table 11.1
We can use the data of Table 11.1 assuming the fuzzy characterization shown in the final column of the table. The sharp numerical boundaries of the categories are not truly realistic. Since the values of HEPs range from 0.001 to 1.0 (a thousand fold span) we can use a logarithmic scale, as shown in Fig. 11.6. We now symbolize the resulting five fuzzy sets (membership functions) as: VL D Very Low, LO D Low, ME D Medium, HI D High and VH D Very High. The analytic representations of these membership functions are as follows: n 3 2 2 1 1 0:5 0:5 0 VL 2 0 0 0 3C 1 0 0 LO 0 2C 1 2 0 ME 0 0 .1 C / =0:5 2 HI 0 0 0 .0:5 C / =0:5 VH The corresponding fuzzy relation is produced by induction based upon searching the knowledge base of the automation organization to find consensus judgment. A possible fuzzy relationship that might be obtained for the human performance and the task category is the following.
P F G Ex
G1 ME LO VL VL
G2 ME ME LO VL
G3 HI ME ME ME
G4 VH HI HI HI
G5 VH VH VH
Example 11.10. A task has T D 0:4 and the corresponding environment is judged to have E D 0:6. Using the results of the previous example we will compute the probability of failure, and will compare it with the failure probability of the same task when E D 0:3.
11.7 Fuzzy Sets and Fuzzy Models
259
In the first case the membership values are (see Fig. 11.5a,b): G2 D 2 T =0:25 D 2 0:4=0:25 D 0:4 G3 D .T 0:25/ =0:25 D .0:4 0:25/ =0:25 D 0:6 F D 2 3E D 2 3 .0:6/ D 0:2 G D .E 1=3/ = .1=3/ D .0:6 1=3/ = .1=3/ D 0:8 The fuzzy rules applied here have the general form: IF T AND E THEN … Therefore, using the above data, we have the following rules: Rule 1: IF T D G2 AND E D F THEN … D LO Rule 2: IF T D G2 AND E D G THEN … D LO Rule 3: IF T D G3 AND E D F THEN … D ME Rule 4: IF T D G3 AND E D G THEN … D ME Converting these rules in a fuzzy matrix relation using the Mamdani (min) rule we get: 0.8 TnE 0.2 0.4 0.2 LO 0.4 LO 0.6 0.2 ME 0.6 ME The overall conclusion is the union of the consequences, i.e.: … D 0:4LO [ 0:6ME This indicates that the dominant probability element belongs to the medium fuzzy set, but with a weaker element in the low set. The corresponding expression of ME is ME D 2 C . Thus 0:6 D 2 C , i.e., D 1:4 or log p D 1:4. Hence the human error probability is p D 0.04. In the second case the overall fuzzy relation is: TnE 0.4 0.6
0.9 0.4 ME 0.6 HI
0.1 0.1 ME 0.1 HI
and the overall conclusion is … D 0:6ME [ 0:1HI Here, the dominant probability element belongs to the medium set, with a minor element in the high set. Applying the COG defuzzification method, we find the defuzzified value: D Œ0:6 .1/ C 0:1 .0:5/ =0:7 D 0:928 Thus, log p D 0:928 or p D 0:118.
260
11 Mathematical Tools for Automation Systems I: Modeling and Simulation
This shows, that when the environmental grade falls from 0.6 to 0.3, the probability of human error increases from 0.04 to 0.118.
11.8 System Simulation Here we will discuss the simulation of both deterministic and probabilistic systems. Simulation has become the most effective and popular technique of analysis of dynamic systems. Exact solution (integration) methods of differential equation models are of limited scope, since actually we don’t have available methods for solving very many differential equations. On the contrary, almost any dynamic model that is encountered in practice can be simulated to a reasonable degree of accuracy. If we need quantitative results, and we cannot solve analytically, then the only way is to simulate.
11.8.1 Simulation of Dynamic Systems Essentially, there are two ways to approach the analysis of a dynamic system model. The analytic approach attempts to see what will happen according to the model in a variety of situations. In the simulation approach we build the model, turn it on, and find out. The most popular technique for numerically simulating a continuoustime dynamic system described by the state-space model (11.1) is the Runge–Kutta technique and its variations. 351 The simplest technique is the Euler technique.
11.8.1.1 Euler Simulation Technique In this technique we use the approximation: x .t C t/ x .t/ xkC1 xk d x .t/
D ; T D t dt t T
(11.47)
where x .t/ D x .kT/ D xk : Thus Eq. 11.1 gives: xkC1 D xk C Tf.xk ;uk ; k/.x0 known/
(11.48)
for k D 0; 1; 2; : : : ; N . The error of the method is 0 T 2 and so for high accuracy, t D T must be selected very small. An improved variation of the Euler technique is by approximating the derivative d x .t/ =dt at the point t D kt by the following quantity: 1 (11.49) S D ff .NxkC1 ; ukC1 ; k C 1/ C f .xk ; uk ; k/g 2
11.8 System Simulation
261
where xN kC1 is the solution provided by the simple Euler formula (11.47). Thus we obtain: xkC1 D xk C .T =2/ Œf .NxkC1 ; ukC1 ; k C 1/ C f .xk ; uk ; k/
(11.50)
The error of this method is 0 T 3 .
11.8.1.2 Runge–Kutta Simulation Technique We will give only two variations, namely the third-order and fourth-order techniques. Third-order technique C1 D T f .xk ; uk ; k/ .x0 known/ C2 D T f xk C C1 =2; ukC1=2 ; k C 1=2 C3 D T f .xk C 2C2 C1 ; ukC1 ; k C 1/ 1 xkC1 D xk C .C1 C 4C2 C C3 / 6
(11.51)
The error is 0 T 4 . Fourth-order technique C1 D T f .xk ; uk ; k/ .x0 known/ C2 D T f xk C C1 =2; ukC1=2 ; k C 1=2 C3 D T f xk C C2 =2; ukC1=2 ; k C 1=2 C4 D T f .xk C C3 ; ukC1 ; k C 1/ 1 xkC1 D xk C .C1 C 2C2 C 2C3 C C4 / 6
(11.52)
The error is 0 T 5 . Example 11.11. Let the system model xP .t/ D 2x .t/ C 2t; x .0/ D 0, in the interval t 2 Œ0; 1 with T D 0:1. Applying the Euler simulation formula (11.47) we get: xkC1 D xk C 0:1fk ; fk D 2xk C 0:2k since t D kT D 0:1k. Thus for k D 0; 1 we obtain: x0 D 0; f0 D 2 .0/ C 0:2 .0/ D 0 x1 D 0 C 0:1 .0/ D 0; f1 D 2 .0/ C 0:2 .1/ D 0:2 x2 D 0 C 0:1 .0:2/ D 0:02; f2 D 2 .0:02/ C 0:2 .2/ D 0:36
262
11 Mathematical Tools for Automation Systems I: Modeling and Simulation Table 11.2 Simulation by Euler technique 1 t x (computed) f 0 0.0 0.000000 0.000000 1 0.1 0.000000 0.200000 2 0.2 0.020000 0.360000 3 0.3 0.056000 0.488000 4 0.4 0.104800 0.590400 5 0.5 0.163840 0.672320 6 0.6 0.231072 0.737856 7 0.7 0.304857 0.790284 8 0.8 0.383886 0.832227 9 0.9 0.467108 0.865782 10 1.0 0.553687 0.892625
x (exact) 0.000000 0.009365 0.035160 0.074406 0.124664 0.183939 0.250597 0.323298 0.400948 0.482649 0.567667
Error 0.000000 0.009365 0.015160 0.018406 0.019864 0.020099 0.019525 0.018441 0.017062 0.015541 0.013980
The results of simulation up to x10 are shown in Table 11.2 along with the exact solution and the evolution of the error. Now, applying the Runge–Kutta technique of third order, we get: xkC1 D xk C
1 .C1 C 4C2 C C3 / 6
where C1 D Tf k D 0:1 .2xk C 0:2k/ C2 D Tf D xk CC1 =2; ukC1=2 ; k C 1=2 D 0:1 Œ2 .xk CC1 =2/C0:2 .k C0:5/ C3 D Tf .xk C2C2 C1 ; ukC1 ; k C1/ D 0:1 Œ2 .xk C2C2 C1 / C 0:2 .k C 1/ Thus we have, for example: C1 D 0:1 Œ2 .0/ C 0:2 .0/ D 0; C2 D 0:1 Œ2 .0 C 0/ C 0:2 .0 C 0:5/ D 0:01 C3 D 0:1 Œ2 .0 0 C 0:02/ C 0:2 .0 C 1/ D 0:016 and
1 .0 C 0:04 C 0:016/ D 0:009333 6 The simulation results and the error are given in Table 11.3. x1 D 0 C
11.8.2 Simulation of Probabilistic Models The transient or time-dependent behavior of stochastic models is difficult to resolve analytically. An effective way of treating such problems is the Monte Carlo simulation technique. 351 The construction of Monte Carlo simulation software may be time-consuming, but even so the Monte Carlo simulation models continue to enjoy a
11.8 System Simulation
263
Table 11.3 Simulation by Runge–Kutta technique (3rd-order) 1 t C1 C2 C3 x f 0 0.0 0.0000 0.0000 0.0000 1 0.1 0.0181 0.0100 0.0160 0.0093 0.1813 2 0.2 0.0329 0.0263 0.0312 0.0351 0.3297 3 0.3 0.0451 0.0396 0.0437 0.0743 0.4513 4 0.4 0.0550 0.0507 0.0538 0.1246 0.5506 5 0.5 0.0632 0.0595 0.0622 0.1839 0.6321 6 0.6 0.0698 0.0668 0.0691 0.2505 0.6988 7 0.7 0.0753 0.0728 0.0747 0.3232 0.7534 8 0.8 0.0798 0.0778 0.0792 0.4009 0.7981 9 0.9 0.0834 0.0818 0.0830 0.4826 0.8347 10 0.10 0.0851 0.0861 0.5676 0.8647 ()
Error. / 0.000000 0.000032 0.000053 0.000065 0.000010 0.000022 0.000030 0.000034 0.000036 0.000036 0.000035
Six decimal digits are given for the error.
very wide acceptance. A Monte Carlo simulation models random behavior, and can be based on any simple randomizing mechanism (e.g., coin flips, roll of dice, etc.), but usually a pseudo-random number generator is employed. Obviously, because of the random element, each repetition of the model with provide different results. Typically a Monte Carlo simulation is repeated a large number of times in order to determine an average (expected) result. Monte Carlo simulation is used to estimate one or more measures (parameters) of system performance. Suppose, for example, that in a problem there is only one simulation parameter Z to be studied. Repeated simulation produces the results Z1 ; Z2 ; : : : ; Zn which are considered to be independent and identically distributed random variables with unknown distribution. From the strong law of large numbers we know that as n ! 1: Z1 C Z2 C C Zn ! E ŒZ n Therefore, we can use the average of Z1 ; Z2 ; : : : ; Zn to estimate the true average value of Z. Also, by the central limit theorem we know that the random variable: Vn nm p ; Vn D Z1 C Z2 C C Zn n where: m D E ŒZ ; and 2 D Var ŒZ is approximately normal (Gaussian) for large n. In practice, the Gaussian approximation is acceptably good when n 10. Let us consider the difference between the experimental average Vn =n and the true average m D E ŒZ. This is equal to: Vn mD p n n
Zn nm p n
264
11 Mathematical Tools for Automation Systems I: Modeling and Simulation
Therefore, one can expect that the variation p of the experimental (observed) average will go to zero about as quickly as 1= n. This means that to obtain 10 times greater accuracy in E ŒZ we need 100 times as many repetitions of the simulation. Therefore, using a Monte Carlo simulation we have to be satisfied with fairly rough approximations of the average behavior. Certainly, any modeling problem typically involves many sources of error and variation, and usually the additional variation created by Monte Carlo simulation is a minor one. We note that Monte Carlo simulation of a stochastic process is very much simpler when the process is Markovian, because the quantity of information needed to be stored is considerably reduced. Example 11.12. Here we shall study the performance of a stochastic variant of the astronaut learning control problem considered in Example 11.2, using the Monte Carlo method. 351 We assume that tc;k (the time which the astronaut needs to make a control adjustment), tw;k (the waiting time before the next control adjustment), and ˛k (the acceleration after the control adjustment) are random variables. The time tc;k is the sum of three random components, i.e.: tc;k D tr;k C ts;k C ta;k where: tr;k D the time needed to observe the velocity of closing, ts;k D the time needed to calculate the proper acceleration adjustment, ta;k D the time needed to make the adjustment. We also assume that: ˛k D k C "k where "k is a small Gaussian random vector with zero mean and standard deviation " D 0:05, and that the time between control adjustments is constant equal to 15 s, i.e.: tw;k D 15 tc;k C k where k is a zero mean Gaussian error with D 0:1. The random variables tr;k , ts;k , and ta;k are all positive, mutually independent, and the outcomes near the mean are most likely. There is no reason to assume a particular distribution for them. From Example 11.2 we know that: kC1 D k C ˛k1 tc;k C ˛k tw;k We assume that the initial closing velocity (at t0 D 0) is equal to 50 m/s, and also that: E tr;k D 1; E ts;k D E ta;k D 2 Ideally, the control procedure will be successful if k ! 0 for k ! 1. Practically, we will consider that the velocity-control process is successful if the astronaut reduces it to 0.1 m/s. As a performance measure (index) we take the total time it takes
11.8 System Simulation
265
to succeed, i.e., the objective is to determine: T D min ftk W jk j 0:1g where: tkC1 D tk C tc;k C tw;k We will use Monte Carlo simulation to compute the total time T . Here the state variables of the system are: T D tk ; V D k ; A D ak ; B D ak1 Therefore, since the performance index is already a state variable we don’t need any additional state variable (and updating equation) for it. Let xk be the state vector of the system. Then, the Monte Carlo simulation method develops as shown in the following algorithm (pseudo code): Start Read data For k D 0 set xk D x0 (initialization) While, D0 Start Find distribution of xkC1 using xk Determine xkC1 using Monte Carlo method Update the values for performance indexes (PIs) End Compute and output PIs End The inner loop specifies the distribution of xkC1 and then, using a random number generator, specifies xkC1 according to this distribution. We exit the program when f(x)
docking time (sec)
1400 1200 1000 800 600 400 200 0 0 0.01
0.015 0.02 0.025 control parameter (k)
0.03
x
Fig. 11.7 Variation of docking time with respect to the control parameter for the case c D 1
266
11 Mathematical Tools for Automation Systems I: Modeling and Simulation
the objective (goal condition) is satisfied. A normal distribution generator can be constructed using the widely available uniform random generator. Actual computational results were obtained, 351 assuming that tc;k is normally distributed with mean mc D 5 s and standard deviation c D 1 s. The results of twenty simulation runs, with D 0:02, showed a docking time ranged between 156 and 604 s, with an average docking time of 305 s. This means that there is a wide variance in the time to complete the docking procedure. The major source of this variation is the time it takes the astronaut to complete the control adjustment procedure. The sensitivity of the docking time with respect to the standard deviation c of the time to make the control adjustment is relatively small (e.g., for c D 0:75 we have an average docking time of 340 s and for c D 1:25 we obtain c D 330 s). However, the sensitivity of the docking time with respect to the variations of the control parameter is bigger as shown in Fig. 11.7. 351
Chapter 12
Mathematical Tools for Automation Systems II: Optimization, Estimation, Decision, and Control
Mathematics is one of the humanity’s great achievements. By enhancing the capabilities of the human mind, mathematics has facilitated the development of science, technology engineering, business, and government. Kilpatrick, Swafford and Findell (2001) There is still a great deal of optimization of the system design that needs to be done. Craig Grimes Mankind’s history has been a struggle against a hostile environment. We finally reached a point where we can begin to dominate our environment and cease being victims of the vagaries of nature in the form of fire, flood, famine, and pestilence . . . As soon as we understand this fact, our mathematical interests necessarily shift in many areas from descriptive analysis to control theory. Richard Belman (Some Vistas of Modern Mathematics, University of Kentucky Press, 1968).
12.1 Introduction This chapter is a continuation of the previous chapter, and deals with the mathematical tools and methods developed and used for system optimization, parameter and state estimation, decision making, and feedback control. Optimization of the system operation and performance is always the main goal of any design, where the performance characterization and evaluation varies from case to case. In human–automation systems the optimization problem involves (and must involve) the system’s point of view, the human’s point of view, and the nature’s point of view. Quality, productivity, energy consumption, reliability, safety, competence, human satisfaction, and impact on the environment must be considered in a holistic way for the optimum system design. The methods to be discussed here use the mathematical S.G. Tzafestas, Human and Nature Minding Automation, Intelligent Systems, Control and Automation: Science and Engineering 41, DOI 10.1007/978-90-481-3562-2 12, c Springer Science+Business Media B.V. 2010
267
268
12 Mathematical Tools for Automation Systems II
models presented in the previous chapter. Obviously, the more accurate the available system model is, the more pragmatic (realistic) the resulting estimators, decision rules and control algorithms are. Section 12.2 deals with both static optimization (where one has to optimize a typical function of several variables) and dynamic optimization, where one has to optimize a functional (a function of several functions over an entire time interval). Section 12.3 is concerned with the learning and estimation problems, and derives least-squares estimators for the parameters and states of a system (linear or linearly parameterized). These estimators cover the majority of practical problems (continuous-time and discrete time). For the learning issue, we also provide the two basic neural network models, i.e., the multilayer perceptron (MLP) and the radial-basis (RBF) functions networks. In Section 12.4 we present the general methodology of decision analysis, including a discussion of the decision matrix concept and the so-called fuzzy utility functions. Finally, in Section 12.5 we present a brief account of the classical control and modern control methodologies giving more details about the proportional plus integral plus derivative (PID) control, the eigenvalue control, and the optimal linear-quadratic control (LQC). The chapter includes the solution of 14 simple examples, which will help the reader to see what kind of results are to be expected.
12.2 System Optimization Almost any problem in the analysis, design and operation of human–automation systems includes one or more optimization subproblems, in which it is desired to determine the smallest or largest value of a function of several variables. In general, the objective of optimization in human–automation systems is the improvement of their performance including productivity, quality, economic, human-minding and nature-minding performance aspects. Obviously, in order to improve any system we must be able to obtain at least one solution for that system, i.e., by defining the input to the system, we can find the resulting output. If this is not the case, we cannot design or operate or control the system, much less to optimize it. If a system is completely defined by a set of given inputs, the output will be fixed, and so no improvement can be made on this system unless one of the specifications is relaxed. If the system is not completely defined by specified inputs, i.e., if it is undetermined, then there is (at least in principle) an infinite number of solutions. This is a necessary condition in order for the use of optimization in a system analysis, design or operation problem to have sense. Since no single answer can be found in such a problem (system), it is necessary to choose the best solution among the multitude of possible solutions. To do this, it is always necessary to define the objective of the optimization which provides the basis for comparison of the solutions. 44
12.2 System Optimization
269
12.2.1 Static Optimization 12.2.1.1 Theory For static optimization problems, where the objective (or performance or cost) function is a function of several decision variables, we use the method of differential calculus. The general unconstrained optimization problem is: “Given a function f .x/ of the vectorial variable x D Œx1 ; x2 ; : : :; xn T , find the value(s) of x for which f .x/ becomes optimum (maximum or minimum).” The existence of a solution to this problem is assured by the Weirstrass theorem which states that “if a function f .x/ is continuous on a closed region R, then it will have a maximum or minimum value either in the interior of R or on the boundary @R of R” The value of x for which f .x/ becomes optimal is determined by solving the optimality equation: 2 3 2 3 @f =@x1 0 @f .x/ D 4 ::: 5 D 4:::5 fx .x/ D @x 0 @f =@xn
(12.1)
Expanding f .x/ in a Taylor series (up to second order) about the optimal point x D x0 , we get: f .x/ D f .x0 / C fxT .x0 / .x x0 / C
1 .x x0 /T fxx .x0 / .x x0 / 2
where AT denotes the transpose of A (rows of A become columns of AT and columns of A become rows of AT ). Clearly, fxT .x0 / D 0 (since x0 is an optimal point). Therefore, if the inequality 1=2 .x x0 /T fxx .x0 / .x x0 / > 0 holds, i.e., if the matrix: 2 2 3 @ f =@x1 @x1 : : : @2 f =@x1 @xn 5 fxx D 4 (12.2) ::: ::: ::: 2 2 @ f =@xn @x1 : : : @ f =@xn @xn is positive definite, the optimal point x0 is a point of minimum for f .x/, and if x0 /T fxx .x0 / .x x0 / < 0, i.e., fxx .x0 / < 0, then x0 is a point of maximum for f .x/.
1=2 .x
Constrained Optimization If we wish to optimize f .x/ under the constraint g .x/ D 0, where g .x/ D Œg1 .x/ ; : : :; gs .x/T ; .s n/, we solve g .x/ D 0 with respect to s components of x and replace in f .x/ to get the unconstrained optimization problem for f .x /, where the vector x has n s components (variable elimination method). In the general case we apply the method of Lagrange multipliers according to which: “The optimization of f .x/, under the constraint g .x/ D 0, is equivalent to the optimization of the extended (Lagrangian) function: F .x/ D f .x/ C œT g .x/
(12.3)
270
12 Mathematical Tools for Automation Systems II
without any constraint, where the s-dimensional vector œ is called the Lagrange multiplier vector and its components 1 ; 2 ; : : :; s are called Lagrange multipliers.” In this case, the optimality equations are: @f .x/ @gT .x/ @F .x/ D C œ D 0; @x @x @x
@F .x/ D g .x/ D 0 @œ
(12.4)
In the special case where g .x/ is scalar (g is one-dimensional), the first condition implies that at the optimum point the vectors @f =@x and @g=@x are collinear.
12.2.1.2 Computational Optimization Algorithms If the optimal point x D x0 cannot be found analytically, then we need to use some computational optimization algorithm. The two mostly known algorithms are the gradient (steepest descent, steepest ascent algorithm), and the Newton–Raphson algorithm. Gradient (1st-order) Algorithm Let xP an approximation of the optimal point of x. Then, the next approximation .p D 0; 1; : : :/ is given by: xpC1 D xp C "fx .xp / ; " > 0 .for maximization/
(12.5)
xpC1 D xp "fx .xp / ; " > 0 .for minimization/
(12.6)
and The parameter " 1 is used to limit the change ıf of f at each step. Unfortunately there is no general rule for choosing ", and in practice " is selected empirically. The updating formula (12.5) is called steepest ascent rule and the formula (12.6) is called steepest descent rule. These names are due to the fact that for given length k ıx kD T 1=2 of ıx, the increase (or decrease) of f .x/ is maximum if the vector ıx is ıx ıx colinear and with the same sign (or the opposite sign) as fx .x/. This follows from the relation ıf D fxT ıx Dk fx k k ıx k cos , where is the angle between fx and ıx. Indeed, for fixed k ıx k, ıf is maximum positive when D 0ı and maximum negative when D 180ı. The iteration over p in (12.5) is stopped when the quantity k xpC1 pC1 and (12.6) p p f .x / k is smaller than a respective desired x k or the quantity k f x small value. If the optimization is subject to the inequality constraint: g .x/ 0
(12.7)
and the gradient fx leads outside the permissible region, then we replace fx by its projection fx on the tangential plane g .x/ D 0, i.e.: xpC1 D xp ˙ "fx .xp /
(12.8)
12.2 System Optimization
271
Newton–Raphson (2nd-order) Algorithm Consider first the quadratic function: f .x/ D f0 C
1 .x x0 /T Q .x x0 / 2
(12.9)
where f0 is a constant, x is n-dimensional vector, and Q is a n n square matrix. Then, fx .x/ D Q .x x0 /, and so: ıx D x x0 D Q1 fx .x/ Also, fxx .x/ D Q. Therefore x0 D x fxx1 .x/ fx .x/
(12.10)
Equating to zero fx .x/ D Q .x x0 /, and solving for x we find xoptimum D x0 This means that if the function to be optimized is quadratic, the rule (12.10) leads to the optimum value in only one step (independently of the point x from which we start). Applying the rule (12.10) to non quadratic functions f .x/ we have xpC1 D xp "fxx1 .xp / f .xp /
(12.11)
The updating rule (12.11) converges to the optimum much faster than (12.6). It is called a second-order rule because it converges in one step if f .x/ is quadratic (second order function). In the literature it is known as the Newton–Raphson algorithm. The reason for the faster convergence is the use of the inverse Jacobian matrix fxx .x/. In case we wish to find a maximum of f .x/ the updating rule must be: xpC1 D xp C "fxx1 .xp / f .xp /
(12.12)
The role of " is the same as in the first-order algorithm, and the selection of " is again done empirically. Computationally, the use of (12.11) or (12.12) is very demanding because we need to compute the inverse fxx1 .xp / at every step. In practice, we can update fxx1 .xp / every N > 1 steps, or use some approximation of fxx1 .xp / which needs less computations. Example 12.1. (Static Linear-Quadratic Optimization) A linear system in the steady state is described by the algebraic model: x D A C Bu We wish to select the control variable u, so as to minimize the quadratic performance: J D Q .xN x/2 C R .Nu u/2
272
12 Mathematical Tools for Automation Systems II
where Q and R are positive constants and x; N uN are known quantities. The conditions for minimum are: dJ=du D 0 and d 2 J =du2 > 0; i.e., dJ=du D 2Q .xN A Bu/ .B/ C 2R .Nu u/ .1/ D 0 and d 2 J =du2 D 2QB 2 C 2R > 0 Solving dJ=du D 0 for u we get the optimal value u0 of u: u0 D
uN C B .Q=R/ xN AB .Q=R/ D A1 uN C A2 xN C A3 1 C B 2 .Q=R/
where: A1 D
1 BQ=R ABQ=R ; A2 D ; A3 D 2 2 1 C B Q=R 1 C B Q=R 1 C B 2 Q=R
This is an example of linear-quadratic static optimal control
Example 12.2. (Optimization with Equality Constraint) We shall find the minimum of the function: f D .x1 C x2 x3 1/2 C .x1 C x2 /2 C 5x12 on the surface 2x1 C x3 D 0. Without the constraint g D 2x1 C x3 D 0, the point x0 , at which f takes a minimum, is x0 D .0; 0; 1/T with fmin D 0. This point is not on the surface 2x1 C x3 D 0. We will first solve the problem with the variable elimination method. Thus, solving the constraint for x3 we have x3 D 2x1 . Now, replacing this expression for x3 in f we obtain the function: f1 D .3x1 C x2 1/2 C .x1 C x2 /2 C 5x1 which must be minimized without any constraint. Solving @f1 =@xi D 0 .i D 1; 2/ with respect to x1 and x2 we find that the optimal point of f1 is .x1 ; x2 / D .1=7; 3=14/. Thus the original function f has minimum at: .x1 ; x2 ; x3 / D .1=7; 3=14; 2=7/ To solve the problem with the method of Lagrange multipliers we form the extended function: L D .x1 C x2 x3 1/2 C .x1 C x2 /2 C 5x12 .x3 C 2x1 /
12.2 System Optimization
273
which must be minimized by selecting x1 ; x2 ; x3 and (without any constraint).The optimality conditions are: @L=@x1 D 2 .x1 C x2 x3 1/ C 2 .x1 C x2 / C 10x1 2 D 0 @L=@x2 D 2 .x1 C x2 x3 1/ C 2 .x1 C x2 / D 0 @L=@x3 D 2 .x1 C x2 x3 1/ D 0 @L=@ D .x3 C 2x1 / D 0 Solving the last equation for x3 and introducing the result x3 D 2x1 into the other equations, we find: 18x1 C 4x2 2 D 2; 8x1 C 4x2 D 2; D 2 6x1 2x2 Combining the first and third of them gives the equation 15x1 C 4x2 D 3. Therefore we have to solve the system: 8x1 C 4x2 D 2; 15x1 C 4x2 D 3 for x1 and x2 . The solution is .x1 ; x2 / D .1=7; 3=14/. Thus we again find the total solution: .x1 ; x2 ; x3 / D .1=7; 3=14; 2=7/ Example 12.3. (Optimization with Inequality Constraints) Our problem is to find the minimum of the function: y D 2x12 2x1 x2 C 2x22 6x1 under the inequality constraints: 3x1 C 4x2 6 and x1 C 4x2 2: Here, the unconstrained minimum of y is obtained for .x1 ; x2 / D .2; 1/ and is ymin D 6. The point .2; 1/ does not satisfy the constraints. To solve the constrained optimization problem we convert the inequality constraints into equality constraints using the subsidiary variables x3 and x4 , i.e., setting: 3x1 C 4x2 C x32 D 6 and x1 C 4x2 C x42 D 2 The corresponding Lagrangian function is L D y 1 g1 2 g2 D 2x12 2x1 x2 C 2x22 6x1 1 3x1 C 4x2 C x32 6 2 x1 C 4x2 C x42 2
274
12 Mathematical Tools for Automation Systems II
To assure that the solution will indeed correspond to the minimum of the given function y, we use, in place of L, the function: J D
2 X @L 2 j D1
@xj
C
2 X
gi2
i D1
D .4x1 2x2 6 3 1 C 2 /2 C .2x1 C 4x2 4 1 4 2 /2 2 C .2 1 x3 /2 C .2 2 x4 /2 C 3x1 C 4x2 C x32 6 2 C x1 C 4x2 C x42 2 Working as usual we find that the minimum of J occurs at: x1 D 1:4594; x2 D 0:4054; x3 D 0 x4 D 1:35565; 1 D 0:3245; 2 D 0 One can verify that the solution .x1 ; x2 / D .1:4594; 0:4054/ satisfies both inequality constraints. The fact that 2 D 0, implies that actually the second constraint does not have any effect on the solution
12.2.2 Dynamic Optimization When the constraints in an optimization problem are dynamic (involving differential or integral or integrodifferential equations), then the performance functions that must be optimized are not simple functions of the state and decision variables at a given time instant (i.e., they are not static functions), but functions of these variables over the entire time interval (horizon) of interest. Such functions are called functionals and actually are functions of other functions over the whole domain of their definition. Typically, these objective functionals have the form of integrals (if the domain of optimization is continuous) or of sums (if the domain of optimization is discrete). Three dynamic optimization techniques developed throughout the years are the dynamic programming technique of Bellman, the calculus of variations of Euler and Lagrange, and the maximum (or minimum) principle of Pontryagin. A brief description of them follows.
12.2.2.1 Dynamic Programming (Bellman) In this technique a complex optimization problem is embedded into a sequence of simpler optimization problems which, if solved, provide the solution of the original problem. This will be shown by considering the optimization of the function: f .x1 ; : : :; xN I y1 ; : : :; yN / D
N X kD1
g .xk ; yk /;
12.2 System Optimization
275
under the constraints: xk 0;
N X
xk D a1
kD1
yk 0;
N X
yk D a2
kD1
We will derive the functional equation of the problem. Let fN0 .a1 ; a2 / be the maximum value of f with respect to xk and yk . Then we have the following: For N = 1 f10 .a1 ; a2 / D f1 .x1 ; y1 / D max fg .x1 ; y1 /g 0 x1 a1 0 y1 a2
For N = 2 f20 .a1 ; a2 / D max fg .x2 ; y2 / C g .x1 ; y1 /g x;y
D D
max fg .x2 ; y2 / C g .a1 x2 ; a2 y2 /g
0x2 a1 0y2 a2
max
0x2 a1 0y2 a2
˚
g .x2 ; y2 / C f10 .a1 x2 ; a2 y2 /
For N = 3 f30 .a1 ; a2 / D max fg .x3 ; y3 / C g .x2 ; y2 / C g .x1 ; y1 /g x;y
Now, since ˚ f20 .a1 x3 ; a2 y3 / D max g .x2 ; y2 / C f10 .a1 x2 x3 ; a2 y2 y3 / ; x;y
f30 .a1 ; a2 / is written as: f30 .a1 ; a2 / D
max
0x3 a1 0y3 a2
˚ g .x3 ; y3 / C f20 .a1 x3 ; a2 y3 /
Therefore, by induction we get: fN0 .a1 ; a2 / D
max
0xN a1 0yN a2
˚ 0 .a1 xN ; a2 yN / g .xN ; yN / C fN1
which is the desired functional equation of the problem.
(12.13)
276
12 Mathematical Tools for Automation Systems II
12.2.2.2 Calculus of Variations (Euler–Lagrange) We wish to minimize the integral (functional): Ztf L .x; xP ; t/ dt; xP D dx=dt
J D
(12.14)
t0
by choosing appropriately the vectorial function x .t/. We assume that x .t/ has first- and second-order derivatives, and that the boundary conditions x .t0 / D x0 and x tf D xf are known. We introduce a variation ıx .t/ D "˜ .t/ ; " ! 0, around the optimal function x .t/, in which case the functional J gives: Ztf L .x C "˜; xP C "˜; P t/ dt
J ."/ D t0
Ztf D
2 @L @L C "˜P C0 " L .x; xP ; t/ C "˜ dt @x @Px
t0
where 0 "2 contains all the higher-order terms. The condition for maximum (with respect to the parameter "/ is: Œ@J ."/ =@""D0 D 0; i.e.,
Ztf ˜
@L @L dt D 0 C ˜P @x @Px
(12.15)
t0
Factor integration of the second term gives:
Ztf
Ztf @L tf @L d @L dt D ˜ dt ˜P ˜ @Px @Px t0 dt @Px
t0
t0
Thus, choosing ˜ .t/ such that ˜ .t0 / D ˜ tf D 0, (12.15) becomes Ztf t0
@L d ˜ .t/ @x dt
@L @Px
dt D 0
(12.16)
12.2 System Optimization
277
Equation 12.16 must be valid for every ˜ .t/. Therefore the following condition must hold:
@L d @L D0 (12.17) @x dt @Px This is known as Euler–Lagrange equation. Its solution gives the function x .t/; t0 t tf which minimizes (12.14). Some cases in which (12.17) is integrable are: L D L .x; t/, i.e., L is independent of xP L D L .x; xP /, i.e., L depends only on x and xP L D L .Px; t/, i.e., L is independent of x. If the boundary points are not fixed but belong to the curves x D ” 0 .t/ and x D ” f .t/, the above procedure gives again the Euler–Lagrange equation (12.17) and the additional condition:
tf @L @L xP dt C dx D0 L @Px @Px t0
(12.18)
which is known as transversality condition.
12.2.2.3 Minimum Principle (Pontryagin) The minimum principle (which originally was developed by Pontryagin as maximum principle) deals with the case of constrained dynamic optimization. By definition, the decision function u0 .t/ corresponds to a local minimum of the functional J .u/ if: J .u/ J u0 D J 0 for all permissible functions u .t/ near to u0 .t/. Let u D u0 C ıu. Then: J u0 ; ıu D ıJ u0 ; ıu C .higher-order terms/ The variation ıJ is a linear function of ıu, and the higher-order terms tend to zero for kıuk ! 0. If u .t/ is not subject to any constraints, then ıu .t/ can obtain any value and the necessary condition for minimality of J is: ıJ u0 ; ıu D 0 for kıuk sufficiently small: But if the decision function is subject to constraints, the variation of ıu is arbitrary only if the total u lies in the interior of the permissible region for all times t 2 t0 ; tf . As long as this is true, the constraints do not have any influence on the solution. However, if u falls on the boundary of the permissible region, at least for some time instants t 2 Œt1 ; t2 ; t0 t1 ; t2 tf , then there exist permissible variations ı uO for which the corresponding negative variations ı uO are not permissible.
278
12 Mathematical Tools for Automation Systems II
If we take only these variations the condition for minimum J is: ıJ u0 ; ı uO 0 Therefore the overall necessary condition for minimum is: ıJ u0 ; ıu 0
(12.19)
for sufficiently small kıuk, which guarantees that the signum of J is specified by the signum of ıJ . The condition (12.19) represents Pontryagin’s minimum principle.
12.2.3 Genetic Optimization Genetic optimization is based on the principles of genetics and natural selection (“survival of the fittest”). A genetic optimization algorithm is actually an adaptive search technique, that simulates a heuristic probabilistic search technique which is analogous to the biological evolutionary process. 175, 332, 364 A genetic algorithm (GA) involves a set of individual elements (that form the population) and a set of biologically-inspired operations acting on the population. Following the evolutionary theory, only the most suited elements of a population can survive and generate offsprings, thus transmitting their biological heredity to new generations. GAs use a coding of the decision parameters, not the parameters themselves. Usually a binary coding is employed, but there also exist other coding schemes with alphabets and figures. The implementation of a GA is facilitated if more simple coding is used. In computing language, a GA maps a problem into a set of binary strings (chromosomes) each string representing a solution. The GA then operates on the most promising strings and searches for improved solutions applying the following actions (evolutionary cycle):
Creation of a set of strings Evaluation of each string Selection of the best string Development of a new set of strings
Each set of chromosomes represents a ‘generation’ in the population of concern. The simplest form of a GA involves three operators: selection, crossover and mutation. Selection This operator selects the chromosomes in the population for reproduction, according to their fitness function values. The chromosomes with higher fitness values are more likely to be selected for reproduction more times. Crossover This operator chooses randomly a position and exchanges the substrings before and after this position between the two chromosomes to produce two children. For example, the chromosomes 10000100 and 11111111 can be crossover
12.2 System Optimization Fig. 12.1 The basic genetic algorithm
279 Begin Create original population Compute the fitness of each member Reproduction Generation +1
Crossover Mutation Convergence?
No
Yes
End
at the third position to give the following two children 10011111 and 11100100. The crossover operator is analogous to the sexual reproduction in biology. Mutation This operator, which is applied after the crossover, may introduce new genetic information into the population. It is implemented by occasionally altering a random bit in a chromosome. For example, the chromosome 00000100 could be mutated at the second position giving the result 01000100. Mutation can occur at any position of a chromosome with a very small probability (say 0.001). The evolutionary cycle is repeated until a specified termination criterion is satisfied. This criterion may be the number of evolutionary cycles (“computational runs”) or the number of changes of the chromosomes between the various generations, etc. The flow chart of the basic GA is shown in Fig. 12.1. Genetic algorithms have the important inherent property of providing the global optimum of a problem. They are fast, efficient and robust to variations of the environment in which the optimization is performed. Example 12.4. (Optimization Using Genetic Algorithm) The basic GA was applied to the function 466 : z D f .x; y/ D 3.1 x/2 e x
2 .yC1/2
10.x=5 x 3 y 5 /e x
2 y 2
.1=3/e .xC1/
2
y 2
This function is available in the MATLAB with the name “peaks” (Fig. 12.2). The strings length was selected as l D 8 and so the size of the search space was 28 28 D 65536 points. The following parameters and operators were used: Population size: n D 20. Fitness: The difference of the value of “peaks” function at the point of interest
minus the minimum value of “peaks” over the entire population (This ensures that all fitness values are nonnegative).
280
12 Mathematical Tools for Automation Systems II
10
5
0
–5
–10 4 2
4 2
0
0
–2
–2 –4
–4
Fig. 12.2 The function ‘peaks’ (Matlab file:peaks.m)
Single-point crossover (with rate Pc D 1:0). Uniform mutation (with probability Pm D 0:01). Mate selection policy (the two best individuals are kept for evolution).
The solution was found after 20 cycles. (The interested reader can reproduce the solution in the Matlab).
12.3 Learning and Estimation In this section we provide a short account of the least-squares estimation and learning algorithms for both the parameters and the states of a system.
12.3.1 Least-Squares Parameter Estimation The basis of estimation is the so-called least-squares estimator (LSE) which develops as follows. Consider a linearly parameterized model: y D f1 .u/ 1 C f2 .u/ 2 C C fn .u/ n
(12.20)
12.3 Learning and Estimation
281
T
where u D u1 ; u2 ; : : : ; up is the input vector, fi .i D 1; 2; : : : ; n/ are known functions of u, and 1 ; 2 ; : : : ; n are unknown parameters to be estimated on the basis of measured (or training) input data pairs f.ui ; yi /; i D 1; 2; : : : ; mg. Replacing the data pairs into (12.20), we get a set of m linear equations with respect to 1 ; 2 ; : : : ; n , i.e.: f1 .u1 / 1 C : : : C fn .u1 / n D y1 ::::::::::::::::::::::::::: ::::::::::::::::::::::::::: f1 .um / 1 C : : : C fn .um / n D ym which can be written in the matrix form: M™ D y
(12.21)
where: 3 3 2 3 2 y1 1 f1 .u1 / : : : fn .u1 / y D 4 : : : 5; ™ D 4 : : : 5; M D 4 : : : : : : : : : : 5 ym n f1 .um / : : : fn .um / 2
(12.22)
If m D n, the matrix M is square and, provided that it is also nonsingular, we can solve for ™ to get: (12.23) ™ D M1 y In general, m is greater than n (the case m < n is not considered since in this case we have less data pairs than the number of unknown parameters) and the data may be erroneous. So the model (12.21) is written as: y D M™ C e
(12.24)
where e is the error vector. Now, we can attempt to find a ™ D ™O for which the sum of squared error: m X 2 yi Ti ™ J.™/ D i D1
D eT e D .y M™/T .y-M™/;
(12.25)
where Ti is the i th row of M and e D y M™, is minimized. Here, J .™/ is quadratic: J.™/ D ™T MT M™ 2yT M™ C yT y Therefore, working as in Example 12.1 (i.e., solving @J =@™ D 0 with respect to ™) we obtain the LSE: 1 T M y (12.26) ™O D MT M
282
12 Mathematical Tools for Automation Systems II
which is valid when MT M is nonsingular. If MT M is singular, then the LSE is not O unique and we must apply the concept of generalized inverse to find ™. If the different components ei of e are not equally weighted, we use in (12.25) a suitable weighting matrix Q to obtain: JQ .™/ D .y M™/T Q.y M™/ in which case the LSE is found to be: 1 T M Qy ™O D MT QM
(12.27)
12.3.2 Recursive Least Squares Parameter Estimation The LSE (12.26) uses all available data at once and gives the so-called off-line estimate of ™. Here we will find a recursive least-squares estimator which improves (updates) the estimate ™k which is based on k measurements, using the next .k C 1/ th, measurement ykC1 . To this end, we define: "
# Mk yk MkC1 D D ; y kC1 ykC1 TkC1 1 1 T T † k D Mk Mk ; † kC1 D MkC1 MkC1
(12.28)
The matrices † kC1 and † k are related as follows: † kC1
Therefore:
0" #T " #11 #!1 " Mk T M M k k A D D@ T Mk kC1 kC1 TkC1 TkC1 1 1 1 D MTk Mk C kC1 TkC1 D † k C kC1 TkC1 1 T † 1 k D † kC1 kC1 kC1
(12.29)
Now, using (12.26) and (12.28) we get: ™O k D † k MTk yk ™O kC1 D † kC1 MT yk C kC1 ykC1 k
To express ™O kC1 in terms of ™O k we must eliminate MTk yk from the above equations. O The first of these equations gives MTk yk D † 1 k ™k . Thus, the second equation can be written as:
12.3 Learning and Estimation
™O kC1
283
O D † kC1 † 1 k ™k C kC1 ykC1 h i T O ™ C y D † kC1 † 1 k kC1 kC1 kC1 kC1 kC1
T D ™O k C † kC1 kC1 ykC1 kC1 ™O k
(12.30)
Equation 12.30 provides the desired recursive least-squares estimator (RLSE). The new estimate ™O kC1 is a function of the old estimate ™O k , the new data pair TkC1 ; ykC1 and the matrix † KC1 . Actually, the new estimate ™O kC1 is equal to the old estimate ™O k plus a correction term equal to an adaptation (or learning) gain vector † kC1 kC1 multiplied by the prediction (learning) error yQkC1 D ykC1 TkC1 ™O n . For this reason (12.30) is also called ‘least squares learning equation or rule’. We now have to find how † kC1 can be found from † k . From (12.29) we obtain1: 1 T † kC1 D † 1 k C kC1 kC1 1 D † k † k kC1 I C TkC1 † k kC1 kC1 † k D †k
(12.31)
† k kC1 TkC1 † k 1 C TkC1 † k kC1
for k D 0; 1; 2; : : : ; m 1. The full recursive least-squares estimation algorithm (12.30)–(12.31) is initiated using selected initial values † 0 and ™O 0 . One may also start from k D n, by using the first n measurements to get ™O n and † n directly: 1 † n D MTn Mn ; ™O n D † n MTn yn
(12.32)
Example 12.5. (Least Squares Parameter Estimation) Consider a system of the type (12.24) with a three-dimensional unknown parameter vector ™. We take four measurements yi .i D 1; 2; 3; 4/, with zero-mean random errors ei .i D 1; 2; 3; 4/ and finite variance. The measurement vector y and the matrix M are 489 : 2
2 3 10 4 6 23 7 63 6 7 y D M™ C e; y D 6 4 31 5; M D 4 0 12 3
2 1 4 4
3 1 07 7 15 1
We will find the LSE ™O of ™ using Eq. 12.26, which is indeed valid since m D 1 and 4 > n D 3. To this end we calculate the matrices MT ; MT M; MT M T 1 T M : M M Using the following matrix inversion lemma: .A C BC/1 D A1 A1 B.ICCA1 B/1 CA1 T with A D † 1 k , B D kC1 and C D kC1 1
284
12 Mathematical Tools for Automation Systems II
2
2 3 3 3 34 23 7 4 5 ; MT M D 4 23 37 10 5 1 7 10 3 2 3 11 1 29 T 1 1 4 D M M 1 53 179 5 194 29 179 729 2 3 17 34 25 8 T 1 T 1 4 M D M M 69 56 33 36 5 194 255 226 13 74 4 MT D 4 2 1
3 0 1 4 0 1
Thus 2 3 3 10 2 3 17 34 25 8 9:318 6 7 T 1 T 1 4 69 56 33 36 5 6 23 7 D 4 10:589 5 M yD ™O D M M 4 31 5 194 255 226 13 74 20:888 12 2
Now, we assume that an additional measurement is taken: y5 D T5 ™ C e5 ; y5 D 15; T5 D 2
2 1
with e2 D 1. We will use the recursive least squares estimator (12.30) and (12.31). We have: 1 T5 † 4 D T5 MT4 M4 D 2 2
3 0:057 0:050 0:150 0:273 0:923 5 1 4 0:050 0:150 0:923 3:760
2
D Œ0:064; 0:277; 1:614 1 where the value of MT4 M4 was found above. Similarly: 2 3 2 T5 † 4 5 C e2 D Œ0:064; 0:277; 1614 4 2 5 C 1 D 2:188 1 and
2
3 9:318 T5 ™O 4 D Œ2; 2; 1 4 10:589 5 D 18:926 20:888
12.3 Learning and Estimation
285
Thus 2 3 3 0:064 0:057 0:050 0:150 1 4 0:277 5 Œ0:064; 0:277; 1:614 † 5 D 4 0:050 0:273 0:923 5 2:188 1:614 0:150 0:923 3:760 2 3 0:055 0:058 0:103 4 D 0:058 0:238 0:719 5 0:103 0:719 2:569 2 2 3 3 2 3 0:064 9:318 9:203 1 4 0:277 5 .15 18:926/ D 4 11:086 5 ™O 5 D 4 10:589 5 C 2:188 1:614 20:888 23:784 2
12.3.3 Least Squares State Estimation: Kalman Filter 12.3.3.1 Discrete-Time Filter We consider a discrete-time Gauss–Markov system of the type (11.36)–(11.38), with available measurements fz.1/; z.2/; : : : ; z.k/g. Let xO . k C 1j k/ and xO . k C 1j k C 1/ be the estimate of x.k C 1/ with measurements up to z.k/ and z.k C 1/, respectively. Then the discrete-time Kalman filter is described by the following recursive equations: xO . k C1j k C1/ D A.k/Ox . kj k/ C K.k C1/ Œz.k C 1/ C.k C 1/A.k/Ox . kj k/ ; xO . 0j 0/ D 0 (12.33) T T K.k C 1/ D † . k C 1j k/ C .k C 1/ C.k C 1/† . k C 1j k/ C .k C 1/ CR.k C 1/1 † . k C 1j k C 1/ D † . k C 1j k/ K.k C 1/C.k C 1/† . k C 1j k/
(12.34) (12.35)
† . k C 1j k/ D A.k/† . kj k/ AT .k/ C B.k/Q.k/BT .k/; † . 0j 0/ D † 0 (12.36) where (12.37a) † . kj k/ D E xQ . kj k/ xQ T . kj k/ T † . k C 1j k/ D E xQ . k C 1j k/ xQ . k C 1j k/ with xQ . k C 1j k/ D x.k C 1/ D xO . k C 1j k/ (12.37b) and the notation z .k/ D zk ; Q .k/ D Qk , etc. is used. These equations can be represented by the signal-flow graph of Fig. 12.3. An alternative expression of K .k C 1/ is obtained by applying the matrix inversion Lemma:
286
12 Mathematical Tools for Automation Systems II xˆ (k + 1 / k + 1) I
z (k + 1)
I
K(k+1)
xˆ (k / k ) Iz−1
A(k)
C(k+1)
zˆ (k + 1 / k)
xˆ (k + 1 / k)
xˆ (k + 1 / k + 1) A(k) −I
Fig. 12.3 Signal flow graph of the discrete-time Kalman filter. It receives the input z.k C 1/ and gives as output the estimate xO . k C 1j k C 1/
K.k C 1/ D † . k C 1j k/ CT .k C 1/ C.k C 1/† . k C 1j k/ CT .k C 1/ CR.k C 1/1 1 D † . k C 1j k/ C CT .k C 1/R1 .k C 1/ C.k C 1/1 C.k C 1/R1 .k C 1/
(12.38)
in which case we get:
† 1 . k C 1j k/ C CT .k C 1/R1 .k C 1/C.k C 1/
1
D † . k C 1j k/ † . k C 1j k/ CT .k C 1/ (12.39) 1 T C.k C 1/† . k C1j k/ C.k C 1/† . k C1j k/ C .k C1/ C R.k C1/ D † . k C 1j k C 1/ The last relation follows from (12.34) and (12.35). Thus, introducing (12.39) into (12.38) we get: K .k C 1/ D † . k C 1j k C 1/ CT .k C 1/ R1 .k C 1/
(12.40)
The Kalman filter equations can be derived by several methods. One of them is to use the recursive least squares estimator (12.30) and (12.31), setting: ™O kC1 D xO . k C1j k C1/; ™O k D xO . k C1j k/ D A.k/Ox . kj k/; † k D † . k C1j k/; † kC1 D † . k C 1j k C 1/; TkC1 D C.k C 1/; and yk D z.k/: The initial conditions for the state estimate equation (12.33) and the covariance equation . (12.35) are: xO . 0j 0/ D 0 † . 0j 0/ D † 0 .positive definite/
(12.41)
12.3 Learning and Estimation
287
12.3.3.2 State Prediction Using the filtered estimate xO . kj k/ of x.k/ at the discrete time instant k, we can compute a predicted estimate xO . j j k/; j > k; k D 0; 1; 2; : : : of the state x .j / on the basis of measurements up to the instant k. This is done by using the equations: xO . k C 1j k/ D A.k/Ox . kj k/ xO . k C 2j k/ D A.k C 1/Ox . k C 1j k/ D A.k C 1/A.k/Ox . kj k/ xO . k C 3j k/ D A .k C 2/ xO . k C 2j k/ D A .k C 2/ A.k C 1/A.k/Ox . kj k/ and so on. Therefore: xO . j j k/ D A .j 1/ A .j 2/ : : : A.k C 1/A.k/Ox. kj k/; j > k
(12.42)
The stochastic process fQx. j j k/; j D k C 1; k C 2; : : :g, which is defined by the prediction error xQ . j j k/ D x .j / xO . j j k/, is a Gauss–Markov process with zero mean and covariance matrix: † . j j k/ D E xQ . j j k/ xQ T . j j k/
(12.43)
which can be easily computed using (11.36) and (12.43). The result is: Q .j; j 1/ † . j 1j k/ A Q T .j; j 1/ † . j j k/ D A C B .j 1/ Q .j 1/ BT .j 1/
(12.44)
for j D k C 1; k C 2; k C 3; : : : ; where: Q .j; k/ D A .j 1/ A .j 2/ A.k C 1/A.k/ A
(12.45)
12.3.3.3 Continuous Time Filter The continuous-time Kalman filter gives the estimate xO . tj t/ of the state of the Gauss–Markov model (11.40) on the basis of a measurement record fz ./ ; t0 t g. It can be derived again by using the least-squares (or orthogonal projection) method in continuous time, and has the equations: xPO . tj t / D A.t/Ox . tj t/ C K.t/ Œz.t/ C.t/Ox . tj t / ; xO . t0 j t0 / D 0 (12.46) K.t/ D † . tj t / CT .t/R1 .t/ (12.47) P . tj t / D A.t/†.t/ C †.t/AT .t/ †.t/CT .t/R1 .t/C.t/†.t/ † CB.t/Q.t/BT .t/; † . t0 j t0 / D † 0
(12.48)
288
12 Mathematical Tools for Automation Systems II Solution of Riccati equation Σ(t/t)
ˆ x(t/t) I
CT(t )R–1 z(t )
I
K(t)
.
ˆ x(t/t )
I/s
ˆ x(t/t)
C(t)
ˆ z(t/t)
Integrator A −I
Fig. 12.4 Signal flow graph of the continuous-time Kalman filter (Input z.t /, Output xO . t j t //
where † . tj t/ is the covariance matrix of the error process: xQ . tj t/ D x.t/ xO . tj t/. Equation 12.48 is the so-called Riccati equation for the filter covariance matrix. The signal flow graph representation of the filter equations (12.46)–(12.48) is shown in Fig. 12.4. The predicted state estimate xO . tj t1 / for some t > t1 , with data up to time t1 , is given by the solution of: xPO . tj t1 / D A .t/ xO . tj t1 /; t t1
(12.49)
with initial condition the filtered estimate xO . t1 j t1 / at time t1 , which is provided by the Kalman filter. The covariance matrix of the predicted estimate error xQ . tj t1 / D x.t/ xO . tj t1 / (which is a continuous-time Gauss–Markov process) is described by: P . tj t1 / D A.t/† . tj t1 / C † . tj t1 / AT .t/ C B.t/Q.t/BT .t/ †
(12.50)
for t t1 , with initial condition the filtered covariance matrix † . t1 j t1 / provided by the Kalman filter at time t1 . Example 12.6. (Discrete-Time Optimal Filter) We consider a scalar time-invariant discrete-time Gauss–Markov system x.k C 1/ D Ax.k/ C w.k/; z.k/ D x.k/ C v.k/ .k D 0; 1; 2; : : :/ with A D 1; Q D 25; R D 15 and †0 D 100. 350 The Kalman filter equations (12.33)–(12.37) give: xO . k C 1j k C 1/ D AxO . kj k/ C K.k C 1/ Œz.k C 1/ AxO . kj k/ ; xO . 0j 0/ D 0 † . k C 1j k/ D A2 † . kj k/ C Q K.k C 1/ D A2 † . kj k/ C Q = A2 † . kj k/ C Q C R † . k C 1j k C 1/ D R A2 † . kj k/ C Q = A2 † . kj k/ C Q C R ; † . 0j 0/ D †0
Table 12.1 Evolution of the optimal filter

  k    Σ(k|k−1)    K(k)     Σ(k|k)
  0    —           —        100
  1    125         0.893    13.40
  2    38.4        0.720    10.80
  3    35.8        0.704    10.57
  4    35.6        0.703    10.55
Since $\Sigma(k|k) \ge 0$, the second equation tells us that $\Sigma(k+1|k) \ge Q$, i.e., the one-step prediction accuracy is at minimum equal to the variance of the input disturbance $w(k)$. From the third equation we see that $0 \le K(k+1) \le 1$ $(k = 0, 1, 2, \ldots)$. The fourth equation implies that $\Sigma(k+1|k+1) = RK(k+1)$. Thus, $0 \le \Sigma(k+1|k+1) \le R$. This means that if $\Sigma(0|0) \gg R$, the use of the first measurement $z(1)$ gives $\Sigma(1|1) \approx R \ll \Sigma_0$, i.e., it provides a drastic reduction in the estimation error. Using the given values $A = 1$, $Q = 25$, $R = 15$ and $\Sigma_0 = \Sigma(0|0) = 100$, the variance equations give the results shown in Table 12.1. The steady-state value of $\Sigma(k|k)$ is found by setting $\Sigma(k+1|k+1) = \Sigma(k|k) = \Sigma$, in which case we obtain $\Sigma^2 + 25\Sigma - 375 = 0$. Since $\Sigma \ge 0$, the acceptable solution is $\Sigma = 10.55$. Thus $K = K(k+1) = 0.703$, and
$\hat{x}(k+1|k+1) = \hat{x}(k|k) + 0.703[z(k+1) - \hat{x}(k|k)] = 0.297\hat{x}(k|k) + 0.703z(k+1) \quad (k = 4, 5, 6, \ldots)$

Example 12.7. (Continuous-Time Optimal Filter) Consider a communication channel via which we wish to transmit Gauss–Markov messages (signals) described by $\dot{x} = -ax + w(t)$, $t \ge 0$ ($a$ = const.), with $x(0) = x_0$, $\Sigma(0|0) = \Sigma_0 = \sigma_0^2$ and $Q = \sigma_w^2 > 0$. The received signal is $z(t) = x(t) + v(t)$, where $\{v(t), t \ge 0\}$ is white zero-mean Gaussian noise, independent of $x(0)$ and $\{w(t), t \ge 0\}$, with variance $R = \sigma_v^2 > 0$. The noise $v(t)$ is due to atmospheric disturbances. We will determine the characteristics of the receiver which ensure the best separation of the signal from the noise. Here, by best separation we mean the minimization of the mean squared error of $\tilde{x}(t|t) = x(t) - \hat{x}(t|t)$, where $\hat{x}(t|t)$ is the Kalman filter estimate of $x(t)$. [350] Using the given parameter values, Eq. 12.48 gives the scalar Riccati equation:
$\dot{\Sigma} = -2a\Sigma - (1/\sigma_v^2)\Sigma^2 + \sigma_w^2, \quad t \ge 0$
Using the method of separation of variables, we find the solution:
$\Sigma(t|t) = \dfrac{\lambda_1 - \lambda_2\kappa e^{-2\beta t}}{1 - \kappa e^{-2\beta t}}, \quad t \ge 0$
where $\beta = \sqrt{a^2 + \sigma_w^2/\sigma_v^2}$, $\kappa = (\sigma_0^2 - \lambda_1)/(\sigma_0^2 - \lambda_2)$, and $\lambda_1$, $\lambda_2$ are the roots of the equation $\Sigma^2 + 2a\sigma_v^2\Sigma - \sigma_v^2\sigma_w^2 = 0$. The optimal estimate equation is:
$\dot{\hat{x}}(t|t) = -a\hat{x}(t|t) + (1/\sigma_v^2)\Sigma(t|t)[z(t) - \hat{x}(t|t)]$
For large $t$, we have $\Sigma(t|t) \to \lambda_1 = (-a + \beta)\sigma_v^2$. Therefore:
$\dot{\hat{x}}(t|t) = -a\hat{x}(t|t) + (\lambda_1/\sigma_v^2)[z(t) - \hat{x}(t|t)] = -\beta\hat{x}(t|t) + (\lambda_1/\sigma_v^2)z(t)$
for large $t$. We observe that there is an intrinsic limitation of the accuracy of this receiver (filter) which is determined by the parameter $\lambda_1$. Three important cases are the following:
If $a^2 \gg \sigma_w^2/\sigma_v^2$, then $\lambda_1 = a\sigma_v^2\left[\sqrt{1 + \sigma_w^2/(a^2\sigma_v^2)} - 1\right] \approx \sigma_w^2/2a \ll a\sigma_v^2$
If $a^2 = \sigma_w^2/\sigma_v^2$, then $\lambda_1 = 0.414\,a\sigma_v^2$
If $a^2 \ll \sigma_w^2/\sigma_v^2$, then $\lambda_1 \approx \sigma_w\sigma_v$
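The recursions of Example 12.6 are easy to verify numerically. The following short program (written in Python as an illustration of ours; only the values $A = 1$, $Q = 25$, $R = 15$, $\Sigma_0 = 100$ come from the example, while the simulated noise sequences are assumptions) reproduces the variance evolution of Table 12.1:

import numpy as np

# Scalar discrete-time Kalman filter of Example 12.6:
#   x(k+1) = A x(k) + w(k),  z(k) = x(k) + v(k)
# with A = 1, Q = 25, R = 15, Sigma(0|0) = 100.
A, Q, R = 1.0, 25.0, 15.0
S = 100.0                          # Sigma(k|k), initialized to Sigma(0|0)
x_hat = 0.0                        # xhat(0|0) = 0

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(S))    # true initial state (assumed simulation)

for k in range(5):
    # simulate the plant and the measurement
    x = A * x + rng.normal(0.0, np.sqrt(Q))
    z = x + rng.normal(0.0, np.sqrt(R))
    # prediction and update (Eqs. 12.33-12.37, scalar case)
    S_pred = A**2 * S + Q              # Sigma(k+1|k)
    K = S_pred / (S_pred + R)          # Kalman gain K(k+1)
    S = R * S_pred / (S_pred + R)      # Sigma(k+1|k+1) = R*K(k+1)
    x_hat = A * x_hat + K * (z - A * x_hat)
    print(f"k={k+1}: Sigma(k|k-1)={S_pred:.1f}  K={K:.3f}  Sigma(k|k)={S:.2f}")

# The printed variances and gains are deterministic (they do not depend on
# the measurements) and match Table 12.1, converging to Sigma = 10.55 and
# K = 0.703 as found analytically above.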
12.3.4 Neural Network Learning

Neural networks (NNs) are large-scale systems that involve a large number of special-type nonlinear processors called 'neurons'. [8, 49, 202] Biological neurons are nerve cells which have a number of internal parameters called 'synaptic weights'. The human brain consists of over ten billion neurons. The weights are adjusted adaptively according to the task under execution, so as to improve the overall system performance. Here we will discuss artificial NNs, the neurons of which are characterized by a state, a list of weighted inputs from other neurons, and a state equation governing their dynamic operation. The NN weights can take new values through a learning process, which is accomplished by the minimization of a certain objective function through the gradient or the Newton–Raphson algorithm. The optimal values of the weights are stored as the strengths of the neurons' interconnections. The NN approach is suitable for systems or processes that cannot be modeled with concise and accurate mathematical models, typical examples being machine vision, pattern recognition, control systems, and human-based operations. The three primary features of NNs are: (i) utilization of large amounts of sensory information, (ii) collective processing capability, and (iii) learning and adaptation capability. Learning and control in neurocontrollers are achieved simultaneously, and learning continues as long as perturbations are present in the plant under control and/or its environment. The practical implementation of NNs was made possible by the recent developments
in fast parallel architectures (VLSI, electrooptical, and other). The two NNs that are most suitable for decision and control purposes are the multilayer perceptron (MLP) and the radial basis function (RBF) networks.

Fig. 12.5 The multilayer perceptron (inputs $x_1, x_2, x_3$; input layer, two hidden layers, output layer; output $y$)
12.3.4.1 The Multilayer Perceptron

This NN has the structure shown in Fig. 12.5. All neurons of the network contain a sigmoid nonlinearity of the logistic type:
$y_k = f(v_k) = \dfrac{1}{1 + e^{-v_k}} \qquad (12.51)$
where $v_k = \sum_{j=1}^{p} w_{kj}y_j - \theta_k$ is the net internal activity of the neuron (node) $k$, $\theta_k$ is the threshold of the neuron $k$, and $y_k$ is the output of this neuron. The back propagation (BP) algorithm of learning has the following steps:
Step 1: Initialize the weights and thresholds.
Step 2: Present the input and output examples $(x_i, d_i)$, where $d_i$ is the $i$th desired output.
Step 3: Compute the actual outputs $y$ using the sigmoid activation function.
Step 4: Improve the weights starting from the output nodes and going backwards via the formula:
$w_{ij}(t+1) = w_{ij}(t) + \eta\delta_j(t)x_i(t)$
(12.52)
where $w_{ij}$ is the weight connecting node $i$ with node $j$ of the next layer at time $t$. If node $j$ is a node of the output layer, then:
$\delta_j = y_j(1 - y_j)(d_j - y_j) \qquad (12.53a)$
otherwise:
$\delta_j = x_j'(1 - x_j')\sum_k \delta_k w_{jk} \qquad (12.53b)$
where the sum over $k$ extends over all nodes of the layer following node $j$ (whose local gradients $\delta_k$ have already been computed in the backward pass). To speed up the convergence we add a momentum term, i.e.:
$w_{ij}(t+1) = w_{ij}(t) + \eta\delta_jx_i + \alpha[w_{ij}(t) - w_{ij}(t-1)] \qquad (12.54)$
with $\alpha$ being a parameter $0 < \alpha < 1$.
Step 5: Repeat from Step 2.
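The following minimal sketch illustrates Steps 1–5 in Python/NumPy; the two-layer network size, the learning rate $\eta = 0.5$, the momentum $\alpha = 0.9$ and the XOR training set are illustrative assumptions, not values prescribed by the text:

import numpy as np

# BP algorithm (Steps 1-5): logistic sigmoid (12.51), local gradients
# (12.53a,b) and the momentum update (12.54), on the XOR toy task.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
D = np.array([[0], [1], [1], [0]], float)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Step 1: initialize weights and thresholds (biases)
W1 = rng.uniform(-1, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.uniform(-1, 1, (4, 1)); b2 = np.zeros(1)
dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)
eta, alpha = 0.5, 0.9

for epoch in range(5000):
    # Steps 2-3: present the examples and compute the actual outputs
    h = sigmoid(X @ W1 + b1)               # hidden layer
    y = sigmoid(h @ W2 + b2)               # output layer
    # Step 4: local gradients, output layer (12.53a), then hidden (12.53b)
    delta2 = y * (1 - y) * (D - y)
    delta1 = h * (1 - h) * (delta2 @ W2.T)
    # weight update with momentum (12.54)
    dW2 = eta * h.T @ delta2 + alpha * dW2
    dW1 = eta * X.T @ delta1 + alpha * dW1
    W2 += dW2; b2 += eta * delta2.sum(0)
    W1 += dW1; b1 += eta * delta1.sum(0)
    # Step 5: repeat from Step 2

print(np.round(y.ravel(), 2))   # typically approaches [0, 1, 1, 0]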
12.3.4.2 The Radial Basis Function (RBF) Network

An RBF network approximates an input–output mapping by employing a linear combination of radially symmetric functions (see Fig. 12.6). The $k$-th output $y_k$ is given by:
$y_k(x) = \sum_{i=1}^{m} w_{ki}\Phi_i(x) \qquad (12.55)$
where:
$\Phi_i(x) = \Phi(\|x - c_i\|) = \Phi(r_i) = \exp\left(-\dfrac{r_i^2}{2\sigma_i^2}\right), \quad r_i \ge 0, \ \sigma_i \ge 0 \qquad (12.56)$
The RBF networks always have one hidden layer of computational nodes with non-monotonic transfer functions $\Phi(\cdot)$. Theoretical studies [430] have shown that the choice of $\Phi(\cdot)$ is not very crucial for the effectiveness of the network. In most cases the Gaussian RBF given by (12.56) is used, where $c_i$ and $\sigma_i$ $(i = 1, 2, \ldots, m)$ are selected centers and widths, respectively. The training procedure of the RBF network involves the following steps:
Step 1: Group the training patterns in $M$ subsets using some clustering algorithm (e.g., the k-means clustering algorithm) and select their centers $c_i$.
Fig. 12.6 The radial basis function network (inputs $x_1, \ldots, x_n$; hidden RBF layer $\Phi_1, \ldots, \Phi_m$; outputs $y_1, \ldots, y_p$; weights $w_{11}, \ldots, w_{np}$)
Step 2: Select the widths $\sigma_i$ $(i = 1, 2, \ldots, m)$ using some heuristic method (e.g., the $p$ nearest-neighbor algorithm).
Step 3: Compute the RBF activation functions $\Phi_i(x)$ for the training inputs using Eq. 12.56.
Step 4: Compute the weights by least squares. To this end, write Eq. 12.55 as $b_k = Aw_k$ $(k = 1, 2, \ldots, p)$ and solve for $w_k$, i.e.:
$w_k = A^{\dagger}b_k, \quad w_k = [w_{k1}, \ldots, w_{km}]^T$
(12.57)
where $A^{\dagger}$ is the generalized inverse of $A$ given by:
$A^{\dagger} = (A^TA)^{-1}A^T$
(12.58)
and $b_k$ is the vector of the training values for the output $k$.
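A compact sketch of Steps 1–4 on a toy one-dimensional regression task is given below (Python/NumPy); the grid placement of the centers (in place of k-means) and the width heuristic are simplifying assumptions of ours:

import numpy as np

# RBF training (Steps 1-4): Gaussian activations (12.56) and
# least-squares weights (12.57)-(12.58), fitting noisy sin(x).
rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, (200, 1))
b = np.sin(x).ravel() + 0.1 * rng.normal(size=200)   # training values b_k

# Step 1: centers (a uniform grid instead of k-means, for brevity)
c = np.linspace(-3, 3, 10).reshape(-1, 1)
# Step 2: widths from the nearest-neighbor distance between centers
sigma = np.full(10, np.diff(c.ravel())[0])
# Step 3: activation matrix, Phi_i(x) of Eq. 12.56
r2 = (x - c.T) ** 2
A = np.exp(-r2 / (2 * sigma**2))
# Step 4: weights by least squares, w = (A^T A)^{-1} A^T b
w, *_ = np.linalg.lstsq(A, b, rcond=None)

y = A @ w
print("training RMS error:", np.sqrt(np.mean((y - b) ** 2)))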
12.4 Decision Analysis

12.4.1 General Issues

Decision analysis and the related decision models are based on utility theory and value theory, which provide the tools for modeling the value perceptions of a decision maker under various situations (risky/riskless, single/multi-attribute situations). A decision is any conscious selection between at least two alternative solutions or actions. One of the clearest descriptions of classical decision making theory is the one given by Lindley (Making Decisions, J. Wiley, 1985): "First, the uncertainties appearing in the problem must be quantified on the basis of probabilities (we extend this here using also fuzzy sets). Second, the various consequences of the action policies must be expressed with the aid of utilities. Third, the action must be selected which is expected to give the greatest utility." The three "must"s in the above formulation mean that any deviation from what they suggest may lead the decision maker to processes and procedures that are unreasoned, invalid or unaccepted. This definition referred to statistical decision making, but it can now also be used for fuzzy-logic-based decision making. The principal probabilistic method used for decision making is the Bayesian inference formula (11.12), which for the case where there are several candidate hypotheses (conditions, solutions, etc.) $H_i$ $(i = 1, 2, \ldots, N)$ is written as:
$P(H_i|E_j) = \dfrac{P(E_j|H_i)P(H_i)}{\sum_{i=1}^{N} P(E_j|H_i)P(H_i)} \qquad (12.59)$
where $E_j$ is an evidence element (measurement, observation, symptom, etc.).
A logical decision rule is to select the hypothesis $H_i$ for which the posterior probability $P(H_i|E_j)$ is maximum, taking into account all the evidence elements (i.e., the probabilities $P(E_j|H_i)$ of all the observations $E_j$, $j = 1, 2, \ldots, m$). The utility index, or simply utility $U$, is a quantified expression of the values (worths) of various events. Several forms of utility can represent costs or revenues (gains) of alternative actions on a subjective or objective basis. The decision is based on the expected subjective utility (ESU). That is, if we are given a set of alternative actions $A_i$, we select the action that produces a result which maximizes the ESU given by:
$ESU = P(A_i)U(A_i)$
(12.60)
where $U(A_i)$ is the utility index used for $A_i$. In practice, the utility concept (which was first used by Von Neumann and Morgenstern, 1944) encounters the problem of subjectivity, as well as the difficulty of determining precisely the "utility" of some results (e.g., the quality of life after a surgery). The Von Neumann–Morgenstern axiom of utility, in its simplest form, specifies the utility $U$ of an event (object or action) $A$ that occurs with certainty, using the ESUs of two other mutually exclusive and collectively exhaustive events $B$ and $C$, as follows:
$U(A) = ESU(B) + ESU(C) = P(B)U(B) + P(C)U(C)$
(12.61)
where $P(B) + P(C) = 1$, i.e., $P(C) = 1 - P(B)$. The general diagram of the decision making process is shown in Fig. 12.7. The process starts with the problem definition and ends with the implementation of the decision. The managerial and automation systems' decisions can be classified in many ways, e.g., according to the hierarchy of the consequences, according to the time horizon to which they refer, according to the needed data, and so on. A typical, widely accepted classification is the following:
Level 1: Strategic planning decisions
Level 2: Managerial control decisions
Level 3: Operational control decisions
Level 4: Operational performance decisions
Fig. 12.7 Classical structure of the decision making process (problem definition → definition of alternative actions → quantification of alternative actions → application of decision schemes → decision → implementation)
The first level involves decisions related to the highest-level policies and goals (organizational level). The second level refers to the decisions made to ensure effectiveness in the acquisition and use of the available resources. The third level deals with decisions needed to ensure the operability of the system, and finally, the fourth level includes all the everyday decisions that have to do with the execution of system operations. As we move from top to bottom, the frequency of the decisions increases and the intelligence needed to make them decreases. Conversely, in the bottom-up direction, the required intelligence and the importance of the decisions increase.
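Returning to the probabilistic selection rule of (12.59), the following small sketch (Python, with assumed priors and likelihoods that are not data from the text) performs the repeated Bayesian updating over several evidence elements and then selects the most probable hypothesis:

import numpy as np

# Maximum-posterior decision rule built on (12.59), two hypotheses and
# three evidence elements. P_E_given_H[j, i] = P(E_j | H_i).
prior = np.array([0.5, 0.5])                 # P(H_1), P(H_2) (assumed)
P_E_given_H = np.array([[0.9, 0.2],          # evidence E_1
                        [0.7, 0.4],          # evidence E_2
                        [0.6, 0.5]])         # evidence E_3

post = prior.copy()
for j in range(P_E_given_H.shape[0]):
    # Bayes updating (12.59): posterior after observing E_j
    post = P_E_given_H[j] * post
    post /= post.sum()
    print(f"after E_{j+1}: P(H_i | evidence) = {np.round(post, 3)}")

print("selected hypothesis: H", post.argmax() + 1)   # max-posterior rule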
12.4.2 Decision Matrix and Average Value Operators

A general way to study the decision making problem under uncertainty is to use the so-called decision matrix, which is shown in Fig. 12.8. Here, we have a set of decisions (alternative solutions, actions, etc.) $A = \{A_1, A_2, \ldots, A_i, \ldots, A_m\}$ from which we must select one. These decisions depend on a variable $x$ which takes values from a set $X = \{x_1, x_2, \ldots, x_n\}$. This variable is usually called the state variable (or simply the state) of the world, i.e., of the system, and affects the payoff that corresponds to each decision. Let $C_{ij}$ be the payoff that corresponds to the decision $A_i$ when the system state is $x_j$. The goal is to select the "best (optimal)" alternative decision (solution), i.e., the decision that gives the greatest payoff. Let $A_i$ and $A_k$ be two decisions such that for all $j$ the inequality $C_{ij} > C_{kj}$ holds. In this case there is no reason to select $A_k$ whatever the system state is, and we say that the decision $A_i$ dominates over $A_k$. Moreover, if there exists a decision which dominates over all the other alternative decisions, then it is the optimal one. This solution is called "Pareto optimal" (after the Italian economist Vilfredo Pareto). The general case of comparing payoff vectors is solved by converting these $n$-dimensional vectors to scalar quantities that give a measure of the "worth" (value) of each alternative decision. This is done through a function or mapping $F$:
$F: \mathbb{R}^n \to \mathbb{R}$
In this way we have to select the alternative decision with the largest scalar value. The function $F$ must satisfy a number of objective conditions, while at the same time being compatible with the subjective characteristics of the decision maker.
Fig. 12.8 Decision matrix (rows $A_1, \ldots, A_m$; columns $x_1, \ldots, x_n$; entries $c_{ij}$)
Three objective properties of $F$ are the following:
1. Pareto Optimality (Monotonicity): if $C_{ij} \ge C_{kj}$ for all $j$, then
$F(C_{i1}, C_{i2}, \ldots, C_{in}) \ge F(C_{k1}, C_{k2}, \ldots, C_{kn}) \qquad (12.62)$
A function $F$ that satisfies (12.62) is called "monotonic".
2. Upper and Lower Boundedness:
$\min_j\{C_{ij}\} \le F(C_{i1}, C_{i2}, \ldots, C_{in}) \le \max_j\{C_{ij}\} \qquad (12.63)$
This means that $F$ must be bounded from below by the smallest payoff and from above by the greatest (best) payoff. In particular, if $C_{ij} = c$ for all $j$, then:
$\min_j\{C_{ij}\} = \max_j\{C_{ij}\} = c \quad \text{and} \quad F(c, c, \ldots, c) = c$
3. Symmetricity:
$F(C_{i1}, C_{i2}, \ldots, C_{in}) = F(C_{i1}', C_{i2}', \ldots, C_{in}') \qquad (12.64)$
where $C_{i1}', C_{i2}', \ldots, C_{in}'$ are the payoffs $C_{i1}, C_{i2}, \ldots, C_{in}$ in any other ordering. A function that satisfies (12.64) (independently of the state ordering) is called symmetrical. The functions that have the above properties are called average value operators and are widely used in decision making under uncertainty. Typical examples of average value operators are:
$M(a_1, \ldots, a_n) = \dfrac{1}{n}\sum_{j=1}^{n} a_j \quad \text{(simple average value)} \qquad (12.65a)$
$M_w(a_1, \ldots, a_n) = \sum_{j=1}^{n} w_jb_j \quad \text{(weighted average value)} \qquad (12.65b)$
where $b_j$ is the $j$th maximum of the $a_i$ $(i = 1, 2, \ldots, n)$, and $w_j \in [0, 1]$ are weights with $w_1 + w_2 + \cdots + w_n = 1$. Extreme average value operators are the maximum and minimum value operators, $\max_j\{a_j\}$ and $\min_j\{a_j\}$.
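The following sketch evaluates an assumed $3 \times 4$ payoff matrix with the simple average (12.65a), the weighted average (12.65b) and the extreme operators; the payoffs and the weight vector are illustrative choices of ours:

import numpy as np

# Decision-matrix selection with average value operators.
# Rows = decisions A_i, columns = states x_j (assumed payoffs C_ij).
C = np.array([[6.0, 2.0, 8.0, 4.0],
              [5.0, 5.0, 5.0, 5.0],
              [9.0, 1.0, 3.0, 7.0]])

# Simple average (12.65a)
simple = C.mean(axis=1)
# Weighted average (12.65b): b_j is the j-th largest payoff of each row;
# the normalized weights below emphasize the better outcomes.
w = np.array([0.4, 0.3, 0.2, 0.1])
weighted = np.sort(C, axis=1)[:, ::-1] @ w
# Extreme operators: optimistic (max) and pessimistic (maximin)
print("simple  :", simple,   "-> choose A", simple.argmax() + 1)
print("weighted:", weighted, "-> choose A", weighted.argmax() + 1)
print("maximin :", C.min(1), "-> choose A", C.min(1).argmax() + 1)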
12.4.3 Fuzzy Utility Functions

A mapping $\mu: 2^X \to [0, 1]$, where $X = \{x_1, x_2, \ldots, x_n\}$, is called a fuzzy measure if $\mu(\varnothing) = 0$, $\mu(X) = 1$ and $\mu(B) \ge \mu(A)$ if $A \subseteq B$. Two known fuzzy measures
are the probability distribution and the possibility distribution. We say that $\mu_1$ is stricter than $\mu_2$ if $\mu_1(E) \le \mu_2(E)$ for all $E$; we write $\mu_1 \le \mu_2$. Let $Z$ be a variable that corresponds to the state of the world, and $X = \{x_1, x_2, \ldots, x_n\}$ the set of allowable values of $Z$. Let $\mu$ be a fuzzy measure on $X$ which represents our knowledge about the likelihood of occurrence of an element of $X$. Finally, let $C$ be an $n$-dimensional vector of payoffs such that for each $x_j \in X$ the corresponding payoff is $C_j$. If $E \subseteq X$, then $M(C_E) = M(C_1, C_2, \ldots, C_s)$ denotes the average value of the payoffs that correspond to $E = \{x_1, x_2, \ldots, x_s\}$ $(s \le n)$. Obviously, since $M(\cdot)$ is an average value operator, it possesses the three Pareto optimality properties (12.62)–(12.64). Now, we extend the concept of utility given in (12.60) to the case where the probability measure $P(\cdot)$ is replaced by the (more general) fuzzy measure $\mu(\cdot)$, and define the fuzzy utility function (index) as:
$U_F(C) = \max_{E \subseteq X}\{\mu(E)M(C_E)\}$
It is easy to verify that $U_F(C)$ possesses the three properties of average value operators (symmetricity, monotonicity, upper–lower boundedness) under the assumption of nonnegative payoffs: $\min_j\{C_j\} \ge 0$. Clearly, the membership function $\mu(E)$ of a fuzzy set $E$, with $\mu(\varnothing) = 0$ and $\mu(X) = 1$, can be used as an evaluation function in the decision matrix for the selection of decisions under uncertainty.

Example 12.8. (Marginal Utility Reduction) A customer buys a certain good (product) because of the utility he (she) expects to obtain from its consumption. The total utility (TU) is an increasing function of the consumed units of this product per unit of time. However, the additional (marginal) utility (MU) which results from the consumption of each additional unit is decreasing. This result is known as the "marginal utility reduction law". To illustrate this law, consider Table 12.2, where the utility is measured in utility units, called utiles. The total utility $TU_x$ of the product $x$ corresponding to the consumption of $x$ units is given in the second column of the table. The third column shows the marginal utility from each additional product unit consumed. We see that $MU_x$ for the first unit is 10 utiles, for the second 8 utiles, and so on. This illustrates the marginal utility reduction law.

Table 12.2 Illustration of the marginal utility reduction law
  Quantity (n)    TU_x (utiles)    MU_x (utiles)
  0               0                —
  1               10               10
  2               18               8
  3               24               6
  4               28               4
  5               30               2
Table 12.3 Illustration of the utility equilibrium concept

  Units of goods    MU_x    MU_y
  1                 10      6
  2                 8       5
  3                 6       4
  4                 4       3
  5                 2       2
Example 12.9. (Utility Equilibrium) A consumer maximizes his (her) utility by buying (and consuming) goods. We say that the consumer is in equilibrium if the marginal utility of the last monetary unit he (she) pays for each good $x, y$, etc. is the same, i.e., if:
$MU_x/P_x = MU_y/P_y = \cdots =$ common MU of the last monetary unit spent on each good,
where $P_x$ and $P_y$ are the unit prices of the goods $x$ and $y$, respectively. To illustrate this, we use the data of Table 12.3. Suppose that the consumer has 7 monetary units available to pay for the goods $x$ and $y$, and that $P_x = 2$ monetary units and $P_y = 1$ monetary unit are the prices of $x$ and $y$, respectively. The consumer maximizes the total utility and is in equilibrium by spending four monetary units (of the available seven) to buy $2x$, and the remaining three monetary units to buy $3y$. At this point we have:
$\dfrac{MU_x\ (8\ \text{utiles})}{P_x\ (2\ \text{mon. units})} = \dfrac{MU_y\ (4\ \text{utiles})}{P_y\ (1\ \text{mon. unit})} =$ "MU of 4 utiles for the last monetary unit spent on the goods $x$ and $y$"
In this case (i.e., buying $2x$ and $3y$) we have:
$TU_x = 10 + 8 = 18$ utiles, $\quad TU_y = 6 + 5 + 4 = 15$ utiles
$TU = TU_x + TU_y = 18 + 15 = 33$ utiles
If the customer spends his (her) seven monetary units in any other way, the resulting total utility will be smaller.

Example 12.10. (Bayesian Medical Diagnosis) We will show how Bayes' updating formula can be used to diagnose a human disease on the basis of symptom observations. Suppose that we want to diagnose whether a human has influenza on the basis of several symptoms such as 'fever', 'runny nose', etc. We start with the posterior probability formula (11.13) (Chapter 11), which is now written as:
$P(\text{influenza}|\text{fever}) = \dfrac{p_yp}{p_yp + p_n(1 - p)}$
where: $p = P(H)$ = prior probability of influenza, $p_y = P(E|H)$ = probability of fever when the human has influenza, $p_n = P(E|\text{not}H)$ = probability of fever when the human does not have influenza, and $1 - p = P(\text{not}H)$ = prior probability that a human does not have influenza. This formula gives the posterior probability of influenza on the basis of the 'fever' symptom. By repeated application of this formula (as shown in Eq. 11.13) for another symptom, we can obtain an even better estimate of the posterior probability of the hypothesis "the human has influenza". If, on the basis of all the symptoms, the total posterior probability of influenza exceeds a certain threshold, then we can assert that the human has influenza and proceed to the proper treatment. This example shows that medical diagnosis (as well as machine fault diagnosis) is actually a decision making problem under uncertainty. The basic difficulty of Bayes' rule is that it assumes independence of the variables (symptoms) involved. If this assumption does not hold, then the results may be in error, and the doctor (decision maker) must be very careful.

Example 12.11. (Bayesian Decentralized Detection with Fusion) We consider a two-detector decentralized system for a binary hypothesis-testing problem:
$H_1: y_i = m + n_i \quad (i = 1, 2)$
$H_0: y_i = n_i$
where $m$ is a constant signal (message), the $n_i$ are zero-mean unit-variance Gaussian random variables, and $H_0$ and $H_1$ are two hypotheses (hence the name binary hypothesis testing) for a (common) phenomenon H. We assume that the observations at the two detectors (sensors) DM1 and DM2 are independent of each other under both hypotheses. We also assume identical decision rules at the two detectors. The detection configuration has the structure of Fig. 12.9. The local decisions are transmitted over band-limited channels to the data fusion center, where they are combined to give an overall inference. Each local decision can take the value 0 or 1, depending on whether the detector DM$_i$ decides $H_0$ or $H_1$. The fusion center yields the global decision $u$ on the basis of the received local decisions $u_i$, $i = 1, 2$, which are collectively represented by the local decision
Fig. 12.9 Structure of a decentralized detection system with fusion and two detectors (detectors DM1, DM2 observe the phenomenon H via $y_1, y_2$ and send local decisions $u_1, u_2$ to the data fusion center, which produces $u$)
vector $\mathbf{u} = [u_1, u_2]^T$. The probability of false alarm and the probability of detection (true positive) corresponding to the detector DM$_i$ are denoted by $P_{F_i}$ and $P_{D_i}$, respectively, and are given by:
$P_{F_i} = P\{\text{DM}_i \text{ decides } H_1|H_0 \text{ present}\} = P(u_i = 1|H_0)$
$P_{D_i} = P\{\text{DM}_i \text{ decides } H_1|H_1 \text{ present}\} = P(u_i = 1|H_1)$
Similarly, the overall probability of false alarm and the probability of detection are denoted by $P_F$ and $P_D$, respectively, and are defined as:
$P_F = P(u = 1|H_0), \quad P_D = P(u = 1|H_1)$
The objective of our Bayesian detection (hypothesis testing) problem is to obtain the fusion rule and the local decision rule(s) that minimize the average cost, here the Bayes risk function:
X
X
P . u D 1j u/ P . uj H0 /
u
P . u D 1j u/ P . uj H1 /;
u
where $C$, $C_F$ and $C_D$ are positive constants. Here, the local decision rules are found to be likelihood ratio tests with identical thresholds $h$, i.e. $(i = 1, 2)$:
If $\dfrac{p(y_i|H_1)}{p(y_i|H_0)} > h$, then decide $H_1$; if $\dfrac{p(y_i|H_1)}{p(y_i|H_0)} < h$, then decide $H_0$.
These tests reduce to:
$y_i \underset{H_0}{\overset{H_1}{\gtrless}} \dfrac{1}{m}\ln h + \dfrac{m}{2} = h^*$
where:
$h = P_0\dfrac{1 - p_F}{P_1p_M}$ for the OR fusion rule, $\quad h = P_0\dfrac{p_F}{P_1(1 - p_M)}$ for the AND fusion rule,
with $p_M = 1 - p_D$ the local miss probability, and $p_D = P_{D_i}$, $p_F = P_{F_i}$ $(i = 1, 2)$, since the detector observation distributions and the detector thresholds are assumed identical.
Table 12.4 Numerical results for P0 = 1/3

  m      h*(OR)     h*(AND)    R_OR      R_AND
  0.6    0.0850     −1.0400    0.2996    0.3069
  1.2    0.7420     −0.3100    0.2050    0.2133
  1.8    1.1770     0.1050     0.1224    0.1287
  2.4    1.5550     0.4500     0.0652    0.0692
  3.0    1.9150     0.7700     0.0311    0.0333
Table 12.5 Numerical results for P0 = 1/2

  m      h*(OR)     h*(AND)    R_OR      R_AND
  0.6    0.8050     −0.2100    0.3572    0.3572
  1.2    1.1200     0.0830     0.2323    0.2323
  1.8    1.4300     0.3700     0.1367    0.1367
  2.4    1.7500     0.6500     0.0725    0.0725
  3.0    2.0750     0.9250     0.0346    0.0346
The thresholds $h^*$ and the average probabilities of error (APE) $R$ were computed as a function of the constant signal (message) $m$ for the two fusion rules (AND, OR). For an account of signal detection using fuzzy sets (membership functions) the reader is referred to the work of Parasuraman et al. [414] The results for two values of $P_0$, namely $P_0 = 1/3$ and $P_0 = 1/2$, are shown in Tables 12.4 and 12.5, respectively. [600] We see that for the value $P_0 = 1/2$ the resulting average probability of error is the same for both the AND and OR fusion rules. But for the value $P_0 = 1/3$ the OR fusion rule is superior.

Example 12.12. (Fuzzy Evaluation of Working Environment) A large bank explores the service conditions of the personnel at its local branches, in order to evaluate them in relation to the complaints of its customers over a 12-month period. On the basis of the replies to a questionnaire completed by the bank's personnel, the management group has found that the following service (work) factors (conditions) most strongly affect the performance of the employees [195]:
Salary level (L)
Technical support (T)
Managerial support (S)
Personal benefits (N)
The conditions' prototype (classical working conditions) is expressed by the fuzzy set A, where:
$A = 0.9/L + 0.8/T + 0.7/S + 0.7/N$
The evaluation of the quality of work (or services) which corresponds to the above conditions prototype is expressed by the fuzzy set B:
$B = 0.6/SU + 0.8/ST + 0.6/LS + 0.3/IN$
where the linguistic values SU, ST, LS and IN have the following meaning: SU = Superb, ST = Standard, LS = Low standard, IN = Inferior. We will evaluate the quality of services that a branch of the bank offers if its working conditions are $L = 0.5$, $T = 0.6$, $S = 0.3$, $N = 0.4$. Our data are formulated in the form of the following rule: IF the service conditions are A, THEN the quality of services is B. Our problem is to find the quality of services when the working conditions are:
$A' = 0.5/L + 0.6/T + 0.3/S + 0.4/N$
We have:
$\min\{\mu_x, \mu_y\} = \begin{bmatrix} 0.9 \\ 0.8 \\ 0.7 \\ 0.7 \end{bmatrix} \wedge [0.6\ \ 0.8\ \ 0.6\ \ 0.3] = \begin{bmatrix} 0.6 & 0.8 & 0.6 & 0.3 \\ 0.6 & 0.8 & 0.6 & 0.3 \\ 0.6 & 0.7 & 0.6 & 0.3 \\ 0.6 & 0.7 & 0.6 & 0.3 \end{bmatrix}$

$(1 - \mu_x) = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.3 \\ 0.3 \end{bmatrix}[1\ \ 1\ \ 1\ \ 1] = \begin{bmatrix} 0.1 & 0.1 & 0.1 & 0.1 \\ 0.2 & 0.2 & 0.2 & 0.2 \\ 0.3 & 0.3 & 0.3 & 0.3 \\ 0.3 & 0.3 & 0.3 & 0.3 \end{bmatrix}$

$R = \max\{\min\{\mu_x, \mu_y\},\ 1 - \mu_x\}$ (Zadeh's maximum rule)

Applying the max–min composition $B' = A' \circ R$, the quality of services $B'$ is:
$B' = 0.6/SU + 0.6/ST + 0.6/LS + 0.3/IN$
As expected, the working conditions of this branch are below the prototypical ones. This is verified by the evaluation of the services' quality $B'$, which gives a clear picture of the comparison with the prototypical conditions.
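The max–min computations of this example can be checked with a few lines of code (a Python sketch using the membership values of the example; the vectorized formulation is our own):

import numpy as np

# Fuzzy max-min inference of Example 12.12.
A  = np.array([0.9, 0.8, 0.7, 0.7])   # prototype conditions (L, T, S, N)
B  = np.array([0.6, 0.8, 0.6, 0.3])   # prototype quality (SU, ST, LS, IN)
Ap = np.array([0.5, 0.6, 0.3, 0.4])   # observed conditions A'

# Zadeh's maximum rule: R_ij = max(min(A_i, B_j), 1 - A_i)
R = np.maximum(np.minimum.outer(A, B), (1 - A)[:, None])
# Max-min composition: B'_j = max_i min(A'_i, R_ij)
Bp = np.max(np.minimum(Ap[:, None], R), axis=0)
print("B' =", Bp)   # -> [0.6, 0.6, 0.6, 0.3], as in the example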
12.5 Control

As we discussed in Section 1.4, control is the process of ensuring that the output (or state) variable of a system behaves in a desired way. This is achieved through proper control laws or algorithms. Typically, we have available a command (or reference) input variable and an undesired input variable (disturbance), and the goal of control is for the system output to follow the reference variable precisely and to be affected by the disturbance as little as possible. If the command variable is fixed, we have the
case of regulation (regulating control), and if the command variable is changing, the control is called tracking (or trajectory-following) control. Throughout the history of control, a multitude of control methods and control laws have been developed, which are broadly classified in two categories:
Classical control
Modern control
Classical control uses the model of transfer functions in continuous time or discrete time, which is based on Laplace transforms and z-transforms, respectively. Modern control uses the state space models (11.1) (continuous time) and (11.2) (discrete time), which in the linear system case are reduced to the forms (11.3) and (11.4), respectively.
12.5.1 Classical Control

Classical control is founded on the concept of a feedback control loop, as shown in Fig. 1.3 and repeated in Fig. 12.10. Here, the symbols $G_c(s)$, $G_a(s)$, $G_p(s)$ and $F(s)$ represent, respectively, the transfer functions of the controller, the actuator, the system (process) under control, and the feedback element (which usually is a sensor or measurement device). The symbols $\bar{c}(s)$, $\bar{e}(s)$, $\bar{y}_f(s)$ and $\bar{y}(s)$ represent the Laplace transforms of the command signal $c(t)$, the error signal $e(t) = c(t) - y_f(t)$, the feedback signal $y_f(t)$, and the output signal $y(t)$, where for example:
$\bar{y}(s) = \int_0^\infty y(t)e^{-st}\,dt$
and $s = a + j\omega$ is the complex Laplace variable. Assuming for the moment that no disturbance exists $(\bar{d}(s) = 0)$, and combining the controller, actuator and system as:
$G(s) = G_c(s)G_a(s)G_p(s)$ (forward transfer function)
Fig. 12.10 Basic control loop for a controlled system $G(s)$ with negative feedback (controller $G_c(s)$, actuator $G_a(s)$, system $G_p(s)$, feedback element $F(s)$, disturbance $\bar{d}(s)$)
and using the equations:
$\bar{y}_c(s) = G(s)\bar{e}(s), \quad \bar{e}(s) = \bar{c}(s) - \bar{y}_f(s) = \bar{c}(s) - F(s)\bar{y}_c(s)$
where $\bar{y}_c(s)$ is the component of $\bar{y}(s)$ due only to $\bar{c}(s)$, we find the closed-loop transfer function:
$\dfrac{\bar{y}_c(s)}{\bar{c}(s)} = \dfrac{G(s)}{1 + G(s)F(s)} \qquad (12.66)$
In the same way, assuming that $\bar{c}(s) = 0$, we find:
$\dfrac{\bar{y}_d(s)}{\bar{d}(s)} = \dfrac{G_p(s)}{1 + G(s)F(s)}$
(12.67)
where $\bar{y}_d(s)$ is the component of $\bar{y}(s)$ due only to $\bar{d}(s)$. The total output is:
$\bar{y}(s) = \bar{y}_c(s) + \bar{y}_d(s)$ (superposition principle)
(12.68)
Suppose that $\bar{c}(s) = 1/s$, which is the Laplace transform of the unit step signal $c(t) = 1$, $t \ge 0$; $c(t) = 0$, $t < 0$. Then the goal of the control design is to select the controller $G_c(s)$ such that $y_c(t) \to 1$ and $y_d(t) \to 0$ as $t \to \infty$ (for various types of disturbance), with acceptable overshoot and acceptable steady-state error:
$e_{ss} = \lim_{t \to \infty} e(t) = \lim_{s \to 0} s\bar{e}(s)$
The available methods for achieving this goal are: the Evans root-locus method, the Nyquist plot method, the Bode diagram method, and the Nichols chart method. The reader is referred to the standard control textbooks for the details of these methods. [122, 407] Here, we will outline in some detail the empirical Ziegler–Nichols method, which is applicable when the controller is of the PID (proportional plus integral plus derivative) type:
$u(t) = K_a\left[e(t) + \tau_d\dfrac{de(t)}{dt} + \dfrac{1}{\tau_i}\int_0^t e(t')\,dt'\right]$
(12.69a)
or
$G_c(s) = \dfrac{\bar{u}(s)}{\bar{e}(s)} = K_a\left(1 + s\tau_d + \dfrac{1}{s\tau_i}\right)$
(12.69b)
where $K_a$ is the controller's gain, $\tau_d$ is the time constant of the derivative term, and $\tau_i$ is the time constant of the integral term (usually called the "reset time"). If the error $e(t)$ has the form of a unit ramp function (Fig. 12.11a), then the control signal $u(t)$ (i.e., the output of the PID controller) has the form of Fig. 12.11b. Since here the controller has the fixed structure (12.69b), our design task is reduced to selecting the three parameters $K_a$, $\tau_d$ and $\tau_i$. The procedure of selecting these parameters is known in the literature as PID controller tuning.
Fig. 12.11 Unit ramp response of the PID controller: (a) unit ramp error, (b) PID control signal corresponding to (a)
The most popular PID parameter tuning method is the Ziegler–Nichols method (1942). Among the various existing variants of this method, we describe here the one based on the stability limits of the closed-loop system. It includes the following steps, which are performed by a human operator:
Step 1: Disconnect the derivative and integral terms (i.e., use only the proportional term).
Step 2: Increase the gain $K_a$ until the stability limit is reached and oscillations start. Let $T_0$ be the oscillation period and $K_c$ the critical value of the gain.
Step 3: Select the parameters $K_a$, $\tau_i$ and $\tau_d$ as follows:
For proportional control: $K_a = K_c/2$
For PI control: $K_a = 0.45K_c$, $\tau_i = 0.8T_0$
For PID control: $K_a = 0.6K_c$, $\tau_i = T_0/2$, $\tau_d = \tau_i/4$
The performance achieved by these values in typical systems is acceptable (giving about 10–20% overshoot). The discrete-time form of the PID controller can be found using the so-called rectangular (orthogonal) integration:
$s \to \dfrac{z - 1}{T}$
or the trapezoidal integration (Tustin's approximation), which is more accurate:
$s \to \dfrac{2}{T}\dfrac{z - 1}{z + 1}$
The rectangular integration leads to the discrete-time PID controller form:
$u(k) = K_a\left\{e(k) + \dfrac{\tau_d}{T}[e(k) - e(k-1)] + \dfrac{T}{\tau_i}\sum_{i=0}^{k-1} e(i)\right\}$
This controller is of non-recursive type. The recursive equivalent is obtained by applying it at time $k - 1$ and subtracting, to find:
$\Delta u(k) = u(k) - u(k-1) = \beta_0e(k) + \beta_1e(k-1) + \beta_2e(k-2)$
(12.70a)
where:
$\beta_0 = K_a\left(1 + \dfrac{\tau_d}{T}\right), \quad \beta_1 = -K_a\left(1 + \dfrac{2\tau_d}{T} - \dfrac{T}{\tau_i}\right), \quad \beta_2 = K_a\dfrac{\tau_d}{T}$
(12.70b)
The present value $u(k)$ of the control signal is now computed as $u(k) = u(k-1) + \Delta u(k)$, i.e., by adding the correction $\Delta u(k)$ to the previous value $u(k-1)$. The z-transfer function of the PID controller (12.70a, b) is found to be:
$G_c(z) = \dfrac{U(z)}{E(z)} = \dfrac{\beta_0 + \beta_1z^{-1} + \beta_2z^{-2}}{1 - z^{-1}}$
which can be written in the typical parallel three-term form:
$G_c(z) = K_a\left[1 + c_d(1 - z^{-1}) + \dfrac{c_iz^{-1}}{1 - z^{-1}}\right] \qquad (12.71)$
where $K_a$ is the proportional term, $K_ac_d(1 - z^{-1})$ is the derivative term, and $K_ac_iz^{-1}/(1 - z^{-1})$ is the integral term. The tuning procedure now amounts to the proper selection of $K_a$, $c_d$, and $c_i$.
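The following sketch implements the incremental PID law (12.70a, b) with the Ziegler–Nichols PID settings of Step 3; the first-order plant model, the critical gain $K_c$ and the period $T_0$ are illustrative assumptions of ours, not values from the text:

# Incremental (recursive) PID controller (12.70a,b) on an assumed plant.
Kc, T0, T = 2.0, 2.0, 0.1                # assumed critical gain, period, sampling period
Ka, tau_i, tau_d = 0.6 * Kc, T0 / 2, (T0 / 2) / 4   # Ziegler-Nichols PID rules

b0 = Ka * (1 + tau_d / T)
b1 = -Ka * (1 + 2 * tau_d / T - T / tau_i)
b2 = Ka * tau_d / T

y, u = 0.0, 0.0
e1 = e2 = 0.0                            # e(k-1), e(k-2)
for k in range(300):
    e = 1.0 - y                          # unit step command, unity feedback
    du = b0 * e + b1 * e1 + b2 * e2      # correction Delta u(k), Eq. (12.70a)
    u += du                              # u(k) = u(k-1) + Delta u(k)
    e2, e1 = e1, e
    # assumed first-order plant: y(k+1) = 0.9 y(k) + 0.1 u(k)
    y = 0.9 * y + 0.1 * u

print(round(y, 3))                       # output approaches the setpoint 1.0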
12.5.2 Modern Control

Classical control system design is based on the satisfaction of performance specifications in the time and/or frequency domain (such as overshoot, rise time, settling time or phase margin, gain margin, bandwidth, etc.). It leads to suitable compensators (phase lead, phase lag, PI or PID, and so on). Modern control design employs state feedback and succeeds in achieving desired transfer functions of the closed-loop system or minimizing specific integral performance indexes (cost functionals). The principal branches of modern control theory are the following. [122, 575, 579]
Eigenvalue control (or placement)
Model-matching control
Optimal control
Stochastic control
Predictive control
Adaptive control
Robust control
Intelligent control
In all these types one can use full state feedback or output feedback and the system may be a continuous-time or discrete-time system (and sometimes a hybrid continuous-discrete time system).
12.5.2.1 Model Matching and Eigenvalue Control

Consider a single-input single-output (SISO) system:
$\dot{x}(t) = Ax(t) + Bu(t), \quad y(t) = Cx(t) + Du(t)$
(12.72)
where $u$ is a scalar input, $y$ is a scalar output, $x$ is an $n$-dimensional state vector, and the constant matrices $A$, $B$, $C$ have proper dimensionality (here $D$ is a scalar constant). Suppose that the control input $u(t)$ is constructed as:
$u(t) = Fx(t) + \nu(t)$
(12.73)
where $\nu(t)$ is a new scalar input, and $F = [f_1, f_2, \ldots, f_n]$ is a $1 \times n$ constant matrix (row vector). Introducing (12.73) into (12.72), we get the state-space equations of the closed-loop system:
$\dot{x} = (A + BF)x + B\nu, \quad y = (C + DF)x + D\nu$
The general state-feedback problem is to select the matrix gain $F$ such that the transfer function of the closed-loop system:
$H_c(s) = \bar{y}(s)/\bar{\nu}(s) = (C + DF)(sI - A - BF)^{-1}B + D$
is equal to a desired transfer function (or model) $H_d(s)$. This is the model-matching control problem, which includes as special cases the eigenvalue control problem, in which we only wish to achieve a desired closed-loop characteristic polynomial:
$\chi_d(s) = (s - \lambda_1)(s - \lambda_2)\cdots(s - \lambda_n)$
where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the desired eigenvalues, and the input–output decoupling problem, in which the desired transfer function is diagonal. The eigenvalues of the system can be placed at desired positions by the state feedback control law (12.73) if and only if the system (12.72) is state controllable, i.e., if and only if:
$\text{rank}\,P = \text{rank}[B\ \ AB\ \ \ldots\ \ A^{n-1}B] = n \qquad (12.74)$
where $P$ is the so-called 'controllability matrix'.
The matrix $F$ must be chosen equal to:
$F = -[0\ \ 0\ \ \ldots\ \ 1]\,P^{-1}\chi_d(A)$
(12.75)
where the controllability matrix $P = [B\ \ AB\ \ \ldots\ \ A^{n-1}B]$ is invertible (due to the condition (12.74)) and $\chi_d(A)$ is the desired characteristic polynomial evaluated at $A$. Equation 12.75 is known as Ackermann's formula.

Example 12.13. (Eigenvalue Placement) Let the system $\dot{x} = Ax + Bu$ with:
$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -5 & -6 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$
We will determine the controller $u = Fx = [f_1, f_2, f_3]x$ which places the eigenvalues (poles) of the closed-loop system at $s = -2 \pm j4$ and $s = -10$. Here, the controllability matrix $P$ is:
$P = [B\ \ AB\ \ A^2B] = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & -6 \\ 1 & -6 & 31 \end{bmatrix}$
and has determinant $\det P = |P| = -1 \ne 0$. Thus $\text{rank}\,P = 3$, and it is possible to find an $F$ that meets our goal. The desired characteristic polynomial is found to be:
$\chi_d(s) = (s + 2 - j4)(s + 2 + j4)(s + 10) = s^3 + 14s^2 + 60s + 200$
Therefore:
$\chi_d(A) = A^3 + 14A^2 + 60A + 200I = \begin{bmatrix} 199 & 55 & 8 \\ -8 & 159 & 7 \\ -7 & -43 & 117 \end{bmatrix}$
Then, Ackermann's formula (12.75) gives:
$F = -[0\ \ 0\ \ 1]\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & -6 \\ 1 & -6 & 31 \end{bmatrix}^{-1}\begin{bmatrix} 199 & 55 & 8 \\ -8 & 159 & 7 \\ -7 & -43 & 117 \end{bmatrix} = -[0\ \ 0\ \ 1]\begin{bmatrix} 5 & 6 & 1 \\ 6 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 199 & 55 & 8 \\ -8 & 159 & 7 \\ -7 & -43 & 117 \end{bmatrix} = -[199\ \ 55\ \ 8]$
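The computation of this example can be verified with a few lines of code implementing Ackermann's formula (12.75); the sketch below is written in Python/NumPy:

import numpy as np

# Eigenvalue placement for Example 12.13 via Ackermann's formula.
A = np.array([[0, 1, 0], [0, 0, 1], [-1, -5, -6]], float)
B = np.array([[0], [0], [1]], float)

# Controllability matrix P = [B  AB  A^2 B], condition (12.74)
P = np.hstack([B, A @ B, A @ A @ B])
assert np.linalg.matrix_rank(P) == 3

# Desired characteristic polynomial chi_d(s) = s^3 + 14 s^2 + 60 s + 200,
# evaluated at the matrix A
chi_d_A = (np.linalg.matrix_power(A, 3) + 14 * np.linalg.matrix_power(A, 2)
           + 60 * A + 200 * np.eye(3))
F = -np.array([[0, 0, 1]]) @ np.linalg.inv(P) @ chi_d_A
print(F)                                # -> [[-199.  -55.   -8.]]
print(np.linalg.eigvals(A + B @ F))     # -> -2 ± 4j, -10 as desired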
12.5.2.2 Optimal Control

The general optimal control problem for the nonlinear continuous-time system (11.1):
$\dot{x} = f(x, u, t), \quad x(t_0) = x_0 \qquad (12.76)$
is to determine the control input $u(t)$, $t \in [t_0, t_f]$, so as to minimize the cost functional [26, 315, 579]:
$J(u) = \int_{t_0}^{t_f} L(x, u, t)\,dt \qquad (12.77)$
To this end, we define the optimal cost $J^0(x, t)$ from the current time $t$ to the final time $t_f$ as:
$J^0(x, t) = \min_{u \in [t, t_f]} \int_t^{t_f} L(x, u, t)\,dt \qquad (12.78)$
where $J^0(x, t_0) = J^0(x_0)$. Then, using Bellman's dynamic programming technique (see Section 12.2.2), the minimization in (12.78) can be done in two steps, i.e.: (a) from $t$ to $t + \Delta t$, and (b) from $t + \Delta t$ to $t_f$. Thus:
$J^0(x, t) = \min_{u \in [t, t_f]}\left\{\int_t^{t+\Delta t} L\,dt + \int_{t+\Delta t}^{t_f} L\,dt\right\} = \min_{u(t)}\left[\int_t^{t+\Delta t} L\,dt + J^0(x + \Delta x, t + \Delta t)\right] \qquad (12.79)$
Expanding $J^0(x + \Delta x, t + \Delta t)$ in a Taylor series and introducing it into (12.79), we get:
$0 = \min_{u(t)}\left\{L\Delta t + (\partial J^0/\partial x)^T\Delta x + (\partial J^0/\partial t)\Delta t + O(\Delta t)\right\}$
Now, dividing by $\Delta t$ and letting $\Delta t \to 0$, we obtain the so-called Hamilton–Jacobi–Bellman equation:
$-\dfrac{\partial J^0(x, t)}{\partial t} = \min_{u(t)} H(x, u, t) \qquad (12.80)$
where $H$ is the Hamiltonian function:
$H = L(x, u, t) + \left(\dfrac{\partial J^0}{\partial x}\right)^T\dot{x} = L(x, u, t) + \left(\dfrac{\partial J^0(x, t)}{\partial x}\right)^Tf(x, u, t)$
(12.81)
The solution of the Hamilton–Jacobi–Bellman (H-J-B) equation gives both the optimal control input $u^0(t)$, $t \in [t_0, t_f]$, and the optimal cost $J^0(x, t)$. In the general case we need to use numerical computational techniques. But if the system is linear and the cost functional is quadratic, the H-J-B equation reduces to a Riccati differential equation. Indeed, let:
$\dot{x} = A(t)x + B(t)u, \quad J = \int_{t_0}^{t_f} L\,dt, \quad L = \dfrac{1}{2}x^TQ(t)x + \dfrac{1}{2}u^TR(t)u \qquad (12.82)$
In this case the optimal cost is also quadratic, of the form:
$J^0(x, t) = \dfrac{1}{2}x^T(t)P(t)x(t)$
(12.83)
where $P(t)$ is a positive definite matrix. Now, $H$ is equal to:
$H = \dfrac{1}{2}x^TQx + \dfrac{1}{2}u^TRu + \dfrac{1}{2}x^TP(Ax + Bu) + \dfrac{1}{2}(Ax + Bu)^TPx$
(12.84)
and so $\partial H/\partial u = Ru + B^TPx = 0$ gives the optimal control law:
$u^0(t) = -R^{-1}(t)B^T(t)P(t)x(t)$
(12.85)
Introducing (12.85) into (12.84), we obtain the optimal Hamiltonian. Then, solving (12.80), we get an equation for the gain matrix $P(t)$, namely:
$-\dfrac{dP(t)}{dt} = A^TP + PA + Q - PBR^{-1}B^TP, \quad P(t_f) = P_f$
(12.86)
This is the Riccati equation of the optimal control, and it can be integrated in reverse time from $t_f$ down to $t_0$ to give $P(t)$, $t \in [t_0, t_f]$, which is stored and used in forward time to give $u^0(t)$ according to (12.85). The optimal control problem (12.82), where the system is linear and the cost functional quadratic, is known as the linear-quadratic regulator (LQR) problem. The steps required to implement the controller (12.85) are the following:
Step 1: Select the weighting matrices $Q$ and $R$, and the final value of the gain matrix $P(t_f) \ge 0$.
Step 2: Solve the Riccati equation (12.86) for $P(t)$ in reverse time from $t_f$ to $t_0$ with final-time condition $P(t_f)$. The solution can be computed off-line and $P(t)$ stored in the computer memory.
Step 3: Compute and apply to the system the control signal $u^0(t)$ in forward time, using (12.85) and the gain matrix $P(t)$ from the memory.
Step 4: Evaluate the performance of the system by simulation. If it is not satisfactory, choose new values for $Q$, $R$ and $P(t_f) \ge 0$, and repeat the procedure. Using available software packages (e.g., MATLAB), this procedure is easy and fast.
If the system is time invariant ($A, B, Q, R$ are constant matrices) and the final time is $t_f = \infty$ (infinite horizon), then the solution of the Riccati equation reaches a steady-state value, obtained from (12.86) by setting $dP(t)/dt = 0$ for $t \to \infty$. Then the differential matrix Riccati equation reduces to the algebraic matrix Riccati equation:
$A^TP + PA + Q - PBR^{-1}B^TP = 0 \qquad (12.87)$
and the control law becomes:
$u(t) = Fx(t), \quad F = -R^{-1}B^TP_\infty$
(12.88)
where $P_\infty$ is the solution of (12.87). In this case, the optimal cost is equal to $J^0 = \frac{1}{2}x^T(0)P_\infty x(0)$. Analogous results can also be easily derived in the discrete-time case, where:
$x_{k+1} = A_kx_k + B_ku_k, \quad J = \dfrac{1}{2}x_N^Thx_N + \sum_{k=0}^{N-1} L_k, \quad L_k = \dfrac{1}{2}x_k^TQ_kx_k + \dfrac{1}{2}u_k^TR_ku_k$
When converting a continuous-time system to discrete time via sampling, care must be taken to satisfy the so-called Shannon sampling condition (theorem). This condition is $T \le \pi/\omega_{max}$, where $T$ is the sampling period, $x_k = x(kT) = x(t)|_{t=kT}$, and $\omega_{max}$ is the maximum frequency involved in the amplitude frequency response of the system at hand (i.e., of the response to sinusoidal inputs with frequencies $0 < \omega < \infty$). Typically, acceptable results are obtained if $T$ is selected much smaller than the dominant time constant $\tau_{dom}$ of the system (e.g., $T \le \tau_{dom}/3$).

Example 12.14. (Optimal Control of Motor) Consider a direct-current motor driven via the rotor voltage, with state-space equations:
$di/dt = -ai - k'\omega + bu, \quad d\omega/dt = -a'\omega + ki$
where $i$ is the rotor's current, $\omega$ is the motor's angular velocity, $u(t)$ is the input voltage (control input), $1/a$ is the electrical time constant, and $1/a'$ is the mechanical time constant of the motor. We will derive the optimal control equations when $Q$, $R$ and $P(t_f)$ in (12.82) and (12.86) are:
$Q = \text{diag}[q_i, q_\omega], \quad R = r, \quad P(t_f) = \text{diag}[q_{i,f}, q_{\omega,f}]$
Let
$P(t) = \begin{bmatrix} p_1(t) & p_2(t) \\ p_2(t) & p_3(t) \end{bmatrix}$
Then the Riccati equation (12.86) gives:
$-\dot{p}_1 = -2ap_1 + 2kp_2 - \beta p_1^2 + q_i$
$-\dot{p}_2 = -(a + a')p_2 - k'p_1 + kp_3 - \beta p_1p_2$
$-\dot{p}_3 = -2a'p_3 - 2k'p_2 - \beta p_2^2 + q_\omega$
where $\beta = b^2/r$. The optimal controller is given by (12.88) with:
$F = -R^{-1}B^TP = -[(b/r)p_1,\ (b/r)p_2]$
i.e.,
$u(t) = -(b/r)p_1(t)\,i(t) - (b/r)p_2(t)\,\omega(t)$
The above equations were simulated in MATLAB with $r = 1$ and $q_i = q_\omega = q$. The results obtained when $q_{i,f} = q_{\omega,f} = 0$ are shown in Fig. 12.12a, b. We see that as the value of $q$ is increased, the state variables (here $\omega(t)$ and $i(t)$) go to zero faster. It is clear that simulation helps us select the most suitable values for the cost parameters $q_i$ and $q_\omega$.
12.5.2.3 Stochastic Control

Stochastic control is concerned with the optimal control of stochastic systems. In particular, linear quadratic Gaussian (LQG) control is the control of Gauss–Markov systems of the form (11.39) with an additional control term $B_uu(t)$ in the state equation, i.e., $\dot{x} = Ax + B_uu + Bw$, when the cost functional is [350, 489]:
$J^* = E\left[\int_{t_0}^{t_f} L^*\,dt + J_f^*\right], \quad L^* = \dfrac{1}{2}x^TQ^*x + \dfrac{1}{2}u^TR^*u, \quad J_f^* = \dfrac{1}{2}x^T(t_f)Q_f^*x(t_f)$
The solution to this problem is again given by (12.85) and (12.86), with the only difference that now, in place of $x(t)$, we use the estimate $\hat{x}(t|t)$ provided by the Kalman filter (12.46)–(12.48), i.e.:
$\hat{u}^0(t) = -(R^*)^{-1}(t)B_u^T(t)P(t)\hat{x}(t|t)$
Fig. 12.12 Linear-quadratic regulator results for the d.c. motor: (a) angular speed $\omega(t)$, (b) optimal control signal (voltage) $u(t)$, for cost parameter values $q = 0.1, 1, 10, 100$
The fact that the LQG control can be obtained by solving separately the state estimation (filtering) problem and the LQ control problem is known in the literature as the 'separation principle' of linear estimation and control. This principle does not in general hold when the system is nonlinear and/or the cost functional is non-quadratic.
12.5.2.4 Predictive Control

Predictive control uses an internal model for the simulation of the future behavior of the system, a reference trajectory for the smooth transition of the system from its present output to the desired output within a given future time interval (called the 'coincidence horizon'), and a control law that ensures an optimal transition of the system's output as close as possible to the desired reference trajectory. The internal model used may be a state-space model, a convolution-type (integral) model, or a recursive discrete-time model. The reference trajectory is typically the output of a first-order low-pass filter. [113, 166, 458]
12.5.2.5 Adaptive Control

Adaptive control always involves an on-line parameter estimator and a standard controller. Depending on the method of estimation and control design, adaptive control is distinguished into self-tuning control (STC) and model reference adaptive control (MRAC). [25, 149, 304] In almost all cases, the design of adaptive control needs some kind of linear parameterization of the dynamics of the controlled system. In MRAC, the parameter estimator (or parameter adaptation mechanism) searches for the parameter values such that the system response under MRAC follows the response of a given reference model. This means that the error between the closed-loop system and the reference model states (or responses) must tend asymptotically to zero (i.e., the error dynamics must be asymptotically stable). The general structure of MRAC has the form shown in Fig. 12.13. Self-tuning control is obtained by combining (coupling) a control law and a parameter estimator (of any type). The general structure of STC is shown in Fig. 12.14.
Fig. 12.13 Architecture of MRAC ($\hat{f}$ represents the estimated parameters; the adaptation mechanism drives the error $e$ between the system output $y$ and the reference model output $y_m$ to zero)

Fig. 12.14 Architecture of STC ($\hat{f}$ represents the estimated parameters supplied by the estimator to the control law)

12.5.2.6 Robust Control

Robust control combines the stabilization of a system with the optimization of a norm of the closed-loop transfer function. [185, 300, 533, 565] The three commonly used norms are $H_2$, $H_\infty$ and $l_1$. A more general tool is the so-called 'structured singular value $\mu$'. Robustness of a system means 'the ability to maintain a desired performance despite the existence of uncertainties in the structure and/or the parameters of the system'.
The simplest robustness measure of a system is its distance from instability, represented in classical control by the gain and phase margins. The uncertainties may exist either in the input signals (signal uncertainty) or in the system itself (structured or unstructured uncertainty). A system with additive unstructured uncertainty (disturbance) has transfer function:
$G_r(s) = G(s) + \Delta(s)$
where $G(s)$ is the nominal transfer function and $\Delta(s)$ is the term that expresses the uncertainty in the transfer function. A system with multiplicative unstructured uncertainty has a transfer function of the form:
$G_r(s) = (I + \Delta_M(s))G(s)$
The structured uncertainty of a system appears as parametric uncertainty, in which case the parameters of the system take real values within certain intervals. For example, in a mass–spring system with transfer function:
$G(s) = \dfrac{1}{ms^2 + \beta s + k}$
the mass $m$, the friction coefficient $\beta$, and the spring constant $k$ may not be known exactly, but lie within certain permissible intervals:
$m \in [m_0 - \Delta_m,\ m_0 + \Delta_m], \quad \beta \in [\beta_0 - \Delta_\beta,\ \beta_0 + \Delta_\beta], \quad k \in [k_0 - \Delta_k,\ k_0 + \Delta_k]$
In this case the system transfer function with the uncertainty has the form:
$G_r(s) = \dfrac{1}{(m + \delta_m)s^2 + (\beta + \delta_\beta)s + (k + \delta_k)}$
The general robust control problem is ‘to design the controller K in the system of Fig. 12.15 that maintains the norm (size) of the performance vector z small despite the exogenous signals w’.
Fig. 12.15 Standard configuration for robust controller design (generalized plant $G$ with exogenous inputs $w$, performance outputs $z$, control input $u$, measurements $y$, and controller $K$)
The classical disturbance rejection problem is a special case of the above problem. The effect of $w$ upon $z$ is expressed by the transfer function $T_{zw}(s)$ from $w$ to $z$, which must have a desirably small size (norm). The norms $H_2$ and $H_\infty$ of a transfer function $G(s)$ are defined as follows:

Norm $H_2$:
$\|G\|_2 = \left\{\dfrac{1}{2\pi}\int_{-\infty}^{\infty}\text{trace}[G(j\omega)G^*(j\omega)]\,d\omega\right\}^{1/2} = \left\{\dfrac{1}{2\pi}\int_{-\infty}^{\infty}\sum_{i=1}^{r}\sigma_i^2(G(j\omega))\,d\omega\right\}^{1/2}$
where $\sigma_i$ is the $i$th singular value of $G(j\omega)$, $G^*(j\omega)$ is the conjugate transpose of $G(j\omega)$ (the conjugate of $G^T(j\omega)$), and $r$ is the rank of the matrix $G(j\omega)$.

Norm $H_\infty$:
$\|G(s)\|_\infty = \sup_\omega \sigma_{max}(G(j\omega))$
where "sup" is the minimum upper bound of the function $\sigma_{max}(\cdot)$. In practice we can use the maximum ("max") operator in place of "sup". Here, $\sigma_{max}(G(j\omega))$ is the maximum singular value of $G(j\omega)$, i.e., the square root of the maximum eigenvalue of $G^*(j\omega)G(j\omega)$. For a full account of robust control design ($H_2$, $H_\infty$, $\mu$) the reader is referred to standard textbooks. [185, 533, 565]
12.5.2.7 Intelligent Control

Intelligent control is an enhanced form of adaptive control, in which advanced sensory systems (vision, speech, etc.) are used together with new learning and reasoning techniques such as neural networks and fuzzy reasoning systems. This topic is very vast, and the reader is referred to the relevant literature. [488, 490, 585, 586, 599, 607]
12.6 Concluding Remarks

Most controllers in practice are still based on classical techniques, combined with the experiential procedures of human operators. Fuzzy systems and controllers are gaining increasingly wide popularity, since they can face nonlinear situations and system uncertainty. Intelligent control is required in sophisticated systems such as robotic assistance and surgery systems. Optimal control seems too restrictive for industrial plant control. The advent and use of digital computers (in replacement
of analog computers) has helped toward achieving better performance and facing more complex environments with fast and accurate controllers. Automation systems in industry, space and enterprise always include control loops (single or multiple) of one kind or another. But, as we have seen throughout this book, pure control algorithms are not sufficient in most cases. The human factors, abilities, and errors must necessarily be studied and taken into account. The techno-economic and environmental issues must also be included in the overall design, implementation and operation of the system. In all cases, proper negative feedback (no matter whether it is closed automatically, by the human operator, or both) reduces the system's sensitivity to variations in the system under control, reduces to some extent the effect of nonlinearities, and improves the stability and overall performance of the system.
References
1. S. Abu el Ata-Doss, J. Brunet, On-line expert supervision for process control, in Proceedings of the 25th IEEE Decision and Control Conference, Athens, Greece, 1986, pp. 631–632
2. B. Adams, W.F. Kinney, C.R. Hamm, J.O. Robichau, Extending power plant automation with a process computer, Reprint E-19, Bailey Meter Co., 1967
3. B.D. Adelstein, M.J. Rosen, The effect of mechanical impedance on abnormal tremor, in Proceedings of the 9th Northeast Conf. on Bioengineering, New Brunswick, NJ, March 1981, pp. 205–209
4. AERE (http://www.aere.org/)
5. J. Albus, System description and design architecture for multiple autonomous undersea vehicles, NIST Technical Note 1251, Washington, DC, September 1988
6. J. Albus, R. Quintero, Towards a reference model architecture for real-time intelligent control systems (ARTICS), in Robotics and Manufacturing (ASME), vol. 3 (ASME Press, New York, 1990)
7. J.R. Aldrich, Pollution Prevention Economics: Financial Impacts of Business and Industry (McGraw-Hill, New York, 1996)
8. I. Aleksander, H. Morton, Neural Computing (Chapman and Hall, London, 1990)
9. B. Allen, An integrated approach to smart house technology for people with disabilities. Med. Eng. Phys. 18, 203–206 (1996)
10. D.T. Allen, K.S. Rosselot, Pollution Prevention for Chemical Processes (Wiley, New York, 1997)
11. J.I. Allen, A modelling study of ecosystem dynamics and nutrient cycling in the Humber plume, U.K. J. Sea Res. 38, 333–359 (1997)
12. B.R. Allenby, Design for environment: A tool whose time has come. SSA J. Sept. 1991, vol. 5, pp. 5–10
13. American National Standard for Human Factors Engineering of VDT Workstations, Human Factors Society, Report ANSI/HFS 100–1988, Santa Monica, CA, 1988
14. P.T. Anastas, C.A. Farris (eds.), Benign by Design: Alternative Synthesis Design for Pollution Prevention, ACS Symp. Series 577 (American Chemical Society, Washington, DC, 1994)
15. P.T. Anastas, T.C. Williamson, Green chemistry: An overview, in Green Chemistry: Designing Chemistry for the Environment, ACS Symp. Series 626, ed. by P.T. Anastas, T.C. Williamson (American Chemical Society, Washington, DC, 1996), pp. 1–17
16. M.A. Arbib, Perceptual structures and distributed motor control, in Handbook of Physiology: The Nervous System II: Motor Control, ed. by V.B. Brooks (American Physiological Society, Bethesda, MD, 1981), pp. 1449–1480
17. M.A. Arbib, Schema theory, in The Encyclopedia of Artificial Intelligence, ed. by S. Shapiro (Wiley, New York, 1992), pp. 1427–1443
18. D.D. Ardayfio, Fundamentals of Robotics (Marcel Dekker, New York, 1987)
19. R.C. Arkin, Motor schema-based mobile robot navigation. Intl. J. Robotics Research 8(4), 92–112 (1989)
20. R.C. Arkin, Cooperation without communication: Multi-agent schema based robot navigation. J. Robotic Syst. 9(2), 351–364 (1992)
21. R.C. Arkin, Behavior-Based Robotics (MIT Press, Cambridge, MA, 1998)
22. W.R. Ashby, An Introduction to Cybernetics (Wiley, New York, 1975)
23. K.J. Astrom, Process control: Past, present and future. IEEE Control Systems Magazine 5(3), 3–9 (1985)
24. K.J. Astrom, B. Wittenmark, Computer Controlled Systems (Prentice Hall, Englewood Cliffs, 1984)
25. K.J. Astrom, B. Wittenmark, Adaptive Control (Addison-Wesley, Reading, MA, 1989)
26. M. Athans, P. Falb, Optimal Control (McGraw-Hill, New York, 1966)
27. R.U. Ayres, Technological Transformations and Long Waves, International Institute for Applied Systems Analysis: RR-89–1, Laxemburg, Austria, 1989
28. J. Badaraco, The Knowledge Link (Harvard Business School Press, Cambridge, MA, 1990)
29. N.I. Badler, B.L. Webber, J. Kalita, J. Esakov, Animation from instructions, in Making Them Move: Mechanics, Control and Animation of Articulated Figures, ed. by N.I. Badler (Kaufmann, San Mateo, CA, 1991), pp. 51–93
30. R.M. Baeker, W.A. Buxton, Readings in Human-Computer Interaction: A Multidisciplinary Approach (Morgan Kaufman, Los Altos, CA, 1987)
31. L. Bagrit, The Age of Automation, The New American Library of World Literature (Mentor Books, New York, 1965)
32. L. Bainbridge, Ironies of automation. Automatica 19, 775–779 (1983)
33. L.S. Bainbridge, S.A.R. Quintanilla, Developing Skills with Information Technology (Wiley, Chichester, 1989)
34. R. Bajcsy, A. Joshi, E. Krotkov, A. Landscan, A natural language and computer vision system for analyzing aerial images, in Proceedings of the 9th IJCAI, Los Angeles, CA, 1985, pp. 919–921
35. J.W. Baretta, W. Ebenhoh, P. Ruardij, The European regional seas ecosystem model, a complex marine ecosystem model. Netherlands J. Sea Res. 33, 233–246 (1995)
36. S. Baron, R. Muralidharan, R. Lancraft, G. Zacharias, PROCRU: A model for analyzing crew procedures in approach to landing, NASA-Ames Technical Report No: NAS2–10035, 1980
37. J.R. Barry, E.A. Lee, D.G. Messerschmitt, Digital Communication (Kluwer, Boston/Dordrecht, 2004)
38. P. Bartlemus, The Concepts and Strategies of Sustainable Environment, Growth and Development (Routledge, London, 1994)
39. F.C. Bartlett, Remembering: A Study in Experimental and Social Psychology (Cambridge University Press, London, 1932)
40. C.E. Bello, J.H. Siegel, Why valves leak: A search for the cause of fugitive emissions. Environ. Progr. 16, 13–15 (1997)
41. S. Bennett, A History of Control Engineering 1800–1930 (Peter Peregrinus, Amsterdam, 1963)
42. D.B. Beringer, J.G. Peterson, Underlying behavioural parameters of the operation of touch-input devices: Biases, models, and feedback. Hum. Factors 21, 445–458 (1985)
43. N.O. Bernsen, Foundations of multimodal representations: A taxonomy of representation modalities. Interact. Comput. 6, 347–371 (1994)
44. G.S. Beveridge, R.S. Schechter, Optimization: Theory and Practice (McGraw-Hill, New York, 1970)
45. A. Bhadhuri, Development with Dignity (National Book Trust, New Delhi, India, 2005)
46. C.E. Billings, Aviation Automation: The Search for a Human-Centered Approach (Erlbaum, Mahwak, NJ, 1997)
47. J. Bindé, The future of values: What ethics for the 21st Century? in Proceedings of the International Symposium on Universal Values, ed. by L.G. Christophorou, G. Contopoulos (Academy of Athens, Athens, Greece, 2004), pp. 91–98
48. P.O. Bishop, Binocular vision, in Physiology of the Eye, ed. by R.A. Moses, W.M.H. Adlers Jr (C.V. Mosby, Washington, DC, 1987), pp. 619–689
49. C. Bishop, Neural Networks and Pattern Recognition (Oxford University Press, Oxford, 1995)
50. P.L. Bishop, Pollution Prevention: Fundamentals and Practice (McGraw-Hill, Boston, 2000)
References
321
51. S. Bodker, Through the Interface (Erlbaum, Hillsdale, NJ, 1991) 52. K.L. Boettcher, A.H. Levis, Modeling the interacting decision maker with bounded rationality. IEEE Trans. System Man Cybernet. SMC-12(3), 334–344 (1982) 53. K. Boff, L. Kaufmann, J. Thomas (eds.), Handbook of Perception and Human Performance (Wiley, New York, 1987) 54. R. Borchelt, Brine seepage may delay disposal plants. Natl. Res. Council News Report 38(5), 18–19 (1989) 55. B. Borgerding, O. Ivlev, C. Martens, N. Ruchel, A. Gr¨aser, FRIEND: Functional robot arm with user friendly interface for disabled people, in Proceedings of the 5th European Conference for the Advancement of Assistive Technology, D¨usseldorf, November 1–4, 1999 56. A. Borning, Thinglab-A constraint-oriented simulation laboratory, Technical Report SSL79–3, Xerox Palo Alto Research Centre, 1979 57. G. Bourhis, Y. Agostini, The VAHM robotized wheelchair: System architecture and humanmachine interaction. J. Intell. Robotic Syst. 22(1), 39–50 (1998) 58. B. Boyer, Creating Pollution Prevention Incentives for Small Business: The Erie County Program (Erie County Dept. of Environment and Planning, Office of Pollution Prevention, Buffalo, NY, 1993) 59. D.M. Brienza, J. Angelo, A force feedback joystick and control algorithm for wheelchair obstacle avoidance. Disabil. Rehabil. 18(3), 123–129 (1996) 60. T.L. Brooks, A.K. Bejczy, Hand controllers for teleportation, NASA CR 175890 (JPL Publications, Pasadena, California, 1986), pp. 85–11 61. R. Brooks, Intelligence without reason, AI Memo No. 1293, MIT, AI Lab., MA, 1991 62. E. Brown, W. Buxton, K. Murtagh, Windows on tablets as a means of achieving virtual input devices. Comput. Graph. 19, 225–230 (1985) 63. C. Bulloch, Cockpit automation and workload reduction. Too much a good thing? Interavia 3, 263–264 (1982) 64. G. Bruno, G. Marchetto, Process-Translatable Petri nets for the rapid prototyping of process control systems. IEEE Trans. Software Eng. SE.-12(2), 251–257 (1986) 65. G. Bruno, A. Elia, P. Laface, A rule based system to scheduling production. Computer 19, 32–39 (1986) 66. G. Burdea, P. Coiffet, Virtual Reality Technology (Wiley, New York, 1994) 67. J. Burns, D. Malone, Optimization techniques applied to Forrester model of the world. IEEE Trans. Syst. Man Cybernet. SMC-4, 164–172 (1974) 68. Business and Sustainable Development: A Global Guide, http://www.bsdglobal.com 69. R.S. Cahn, Introduction to Chemical Nonenclature (Butterworth, London, 1979) 70. A. Cakir, D.J. Hart, T.F.M. Stewart, Visual Display Terminals: A Manual Covering Ergonomics, Workplace Design, Health and Safety, Task Organization (Wiley, London, 1980) 71. S.K. Card, T.P. Moran, A. Newell, The Psychology of Human-Computer Interaction (Erlbaum, Hillsdale, NJ, 1983) 72. J. Carlopio, A history of social psychological reactions to new technology. J. Occup. Psychol. 61, 67–77 (1988) 73. J. Cascio, G. Woodside, P. Mitchell, ISO14000 Guide (Mc-Graw-Hill, New York, 1996) 74. C.G. Cassandras, S. Lafortune, Introduction to Discrete Event Systems (Kluwer, Boston, 1999) 75. D.B. Chaffin, Ergonomics guide for the assessment of human static strength. Amer. Ind. Hyg. Assoc. J. 36, 505–511 (1975) 76. D.B. Chaffin, G.B.J. Anderson, Occupational Biomechanics (Wiley, New York, 1991), pp. 464–466 77. D. Chapman, Vision Instruction and Action (MIT Press, Cambridge, MA, 1991) 78. F. Chiari, M. Delhom, J.F. Santucci, An hybrid methodology for the modelling and simulation of natural systems. Syst. Anal. Model. Simul. 42(2), 269–287 (2002) 79. D. 
Chirot, Social Change in the Modern Era (Harcourt Brace Jovanovich, New York, 1986) 80. D. Chirot, T.D. Hall, World system theory. Ann. Rev. Sociol. 8, 81–106 (1982) 81. K. Christofersen, C.N. Hunter, K.J. Vicente, A longitudinal study of the effects of interface design on fault management performance. Intl. J. Cognit. Ergonom. 1(1), 1–24 (1997)
322
References
82. CIM-OSA, Reference architecture specification, ESPRIT Project 668 (AMICE Consortium), Brussels, 1998 83. D.J. Clausing, The Aviator’s Guide to Modern Navigation (Blue Ridge Summit, Tab Books, PA, 1989) 84. E.K. Clemons, A.J. Greenfield, The sage system architecture: A system for the rapid development of graphics interfaces for decision support. IEEE Comput. Graph. Appl. November (1985) 85. J.D. Cockcroft, A.-G. Frank, D. Johnson (eds.), Dependence and Underdevelopment (Anchor Books, New York, 1972) 86. M. Coderch, A. Willsky, S. Sastry, D. Castanon, Hierarchical aggregation of linear systems with multiple time scales. IEEE Trans. Autom. Control AC-28(11), 1017–1029 (1983) 87. R.C. Conant, Laws of information which govern systems. IEEE Trans. System Man Cybernet. SMC-6(4), 240–255 (1976) 88. Control Engineering (Kenny Fu), http://www.highbeam.com/doc/1G1–176855669.html 89. R.A. Conway, Handbook of Industrial Waste Disposal (Van Nostrand Reinhold, New York, 1980) 90. G.R. Conway, Agroecosystem analysis. Agric. Admn. 20, 31–55 (1985) 91. A.M. Cook, S.M. Hussey, Assistive Technologies: Principles and Practice, 2nd edn. (Mosby, St. Louis, 2002) 92. W. Cooper, What is system engineering. Control 129–133 (Feb. 1969) 93. W. Cooper, Cognitive Aspects of Skilled Typewriting (Springer, New York, 1983) 94. R. Cooper, Rehabilitation engineering applied to mobility and manipulation (Institute of Physics Press, Bristol, MA, 1995) 95. R. Cooper, Intelligent control of powered wheelchairs. IEEE Eng. Med. Biol. Mag. 15(4), 423–431 (1995) 96. G.R. Cooper, C.D. McGillem, Probabilistic Methods of Signal and System Analysis (Holt, Rinehart and Winston, New York, 1971) 97. Council Directive 96/61/EC Concerning Integrated Pollution Prevention and Control, European Commission Publication Paper L257/26, September 1996 98. K.J.W. Craik, Theory of the human operator in control systems: Part 1, The operator as an engineering system. Br. J. Psychol. 38, 56–61; Part 2, Man as an element in a control system, ibid, 38, 142–148 (1947) 99. T.E. Cronin, Direct Democracy, 2nd edn. (Harvard University Press, MA, 1999) 100. A.G. Cropper, S.I. Evans, Ergonomics and computer display design. Comput. Bull. 12(3), 94–98 (1968) 101. P.R. Crossn, N.J. Rosenberg, Strategies for agriculture. Sci. Am. 261(3), 128–135 (1989) 102. M.A. Curran, Using LCA-based approaches to evaluate pollution prevention. Environ. Progr. 14, 247–253 (1995) 103. M.A. Curran (ed.), Environmental Life-Cycle Assessment (McGraw-Hill, New York, 1996) 104. J. Cuypers, Two simplified versions of Forrester’s model. Automatica 9, 399–401 (1973) 105. M.J. Dainoff, Occupational stress factors in visual display terminals (VDT) operation: A review of empirical research. Behav. Infor. Technol. 1(2), 141–176 (1982) 106. D.M. Daley, Strategic Human Resource Management: People and Performance Management in the Public Sector (Upper Saddle River, NJ, 2002) 107. H.E. Daly, J.B. Cobb, For the Common Goal: Redirecting the Economy toward Community, the Environment and a Sustainable Future (Beacon, Boston, 1989) 108. J.L. Dallaway, R.D. Jackson, P.H.A. Timmers, Rehabilitation robotics in Europe. IEEE Trans. Rehabil. Eng. 3, 35–45 (1995) 109. M.M. Danchack, CRT displays for power plants, Instrument. Technol. 23(10), 29–36 (1976) 110. P. Dario, E. Guglielmeni, C. Laschi, G. Teti, MOVAID: A personal robot in everybody life of disabled and elderly people. J. Technol. Disab. 10, 77–93 (1999). Also www.eic.es/upfiles/agenda/20080211 122141Robotica.pdf. 111. M. Davis, D.D. 
Cornwell, Introduction to Environmental Engineering (McGraw-Hill, New York, 1998)
References
323
112. U.C. Davis, http://trc.ucdavis.edu/tresum1/binder.htm#Multimedia (1999) 113. R.M.C. De Keyser, P.G.A. Van De Velde, F.A.G. Dumortier, A comparative study of self-adaptive long-range predictive control methods. Automatica 24(2), 149–163 (1988) 114. M. Deering, High resolution virtual reality. Comput. Graph. 26(2), 195–201 (1992) 115. C. Despotopoulos, The values of ancient Greece and the crucial problems of humankind today, in Proceedings of the Internaiotional Symposium on Universal Values, ed. by L.G. Christophorou, G. Contopoulos (Academy of Athens, Athens, Greece, 2004), pp. 27–49 116. Development versus Dependence Theory, http://www.revision.notes.co.uk/revisions/619.html 117. B.S. Dhillon, Human Reliability: With Human Factors (Pergamon, New York, 1986) 118. B.S. Dhillon, Human factors in robotic systems, in Intelligent Systems: Safety Reliability and Maintainability Issues, Springer, NATO ASI Series F, ed. by O. Kaynak, G. Honderd, E. Grant (Springer, Berlin, 1993), 114, pp. 221–231 119. R. Dijkmans, Methodology for selection of best available techniques (BAT) at the sector level. J. Cleaner Prod. 8, 11–21 (2000) 120. T.A. Dingus, A.W. Gellatly, Human computer interaction applications for intelligent transportation systems, in Handbook of Human-Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (Elsevier/North Holland, Amsterdam, 1997), pp. 1259–1282 121. G.L. Donohue (ed.), Air Transportation Systems Engineering (American Inst. of Aeronautics and Astronautics, Reston, 2001) 122. R.C. Dorf, R.H. Bishop, Modern Control Systems (Prentice Hall, Upper Saddle River, NJ, 2001) 123. T. Dos Santos, The structure of dependence, in Readings in US Imperialism, ed. by K.T. Fann, D.C. Hodges (Porter Sargent, Boston, 1971) 124. E. Duffy, The psychological significance and the concept of arousal or activation. Psychol. Rev. 64, 265–275 (1957) 125. M.D. Dunnete (ed.), The Handbook of Industrial and Organizational Psychology (Rand McNally, Chicago, 1977) 126. A. Dvorak, There is a better typewriter keyboard. Natl. Bussiness Educ. Quart. XII-2, 51–60 (1943) 127. EAERE (http://www.eaere.org/) 128. N.L. Eckenfelder, A. Dasgupta, Industrial Water Pollution Control (McGraw Hill, New York, 1989) 129. E. Edwards, Some aspects of automation in civil transport aircraft, in Monitoring Behavior and Supervisory Control, ed. by T.B. Sheridan and G. Johannsen (Plenum, New York, 1976) 130. E. Edwards, Automation in civil transport aircraft. Appl. Ergonom. 8,194–198 (1977) 131. M. Eisenstadt, M. Brayshaw, Aorta: Diagrams as an aid to visualising the execution of prolog programs, in Graphics Tools for Software Engineering, ed. by A.C. Kilgour, R.A. Earnshow (British Computer Science Documentation and Displays Group, Cambridge, U.K., 1988) 132. G. Elgozy, Automation et Humanisme (Calmann-Levy, Paris, 1968) 133. S.R. Ellis, What are virtual environments? IEEE Comput. Graph. Appl. 14(1), 17–22 (1994) 134. S.R. Ellis, D.R. Begault, E.M. Menzel, Virtual environments as human–computer interfaces, in Handbook of Human-Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 163–201 135. S.E. Engel, R.E. Granda, Guidelines for man/display interfaces, IBM Technical Report TR00.2720, Poughkeepsie, New York, 1975 136. E.D. Enger, B.F. Smith, Environmental Science: A study of Interrelationships (W.C. Brown, Dubuque, IA, 1995) 137. M. Engineer, I. King, N. Roy, The human development index as a criterion for optimal planning. Indian Growth Dev. Rev. 
1, 172–192 (2008) 138. W.K. English, D.C. Englebart, M.L. Berman, Display-selection techniques for text manipulation. IEEE Trans. Hum. Factors Electron. HFE-8, 5–15 (1967) 139. R.J. Estes, Trends in world social development. J. Dev. Soc. 14(1), 11–39 (1998) 140. A.G. Esteva, Development, in The Development Dictionary: A Guide to Knowledge as Power, ed. by W. Sachs (Zed Books, London, 1992)
324
References
141. D.C. Esty, Toward a global environmental mechanism, in World Apart: Globalization and the Environment, ed. by M. Ivanova et al. (Earth scan, London, 2003) 142. D.C. Esty, A.S. Winston, Green to gold: How smart companies use environmental strategy to innovate, create value and build competitive advantage (2006), http://www.eco-advantage. com/ 143. E.U., Directive 2003/87/EC, European Parliament and Council, CEC (1987) 144. E.U., Directive 96/61/EC, European Parliament and Council (1996) 145. E.U., Directive 2004/101/EC(OJ L 338, 13.11.2004), CEC (2004) 146. europa.eu.int/comm/energy/nuclear/publications/radioactive waste.html 147. L. Evans, Traffic Safety and the Driver (Van Nostrand Reinhold, New York, 1991) 148. Exos, Product Literature (Exos Inc, Burlington, MA, 1990) 149. S.G. Fabri, V. Kadirkmanathan, Functional Adaptive Control: An Intelligent Systems Approach (Springer, Berlin, 2001) 150. S. Feiner, Apex: An experiment in the automated creation of pictorial explanations. IEEE Comput. Graph. Appl. November, 5(11), 29–37 (1985) 151. V. Ferraro, Dependency Theory: An Introduction, http://www.mtholyoke.edu/acad/intrel/ depend.htm 152. C. Fischer, M. Buss, G. Schmidt, Human-robot interface for intelligent service robot assistance, in Proceedings of the IEEE International Workshop on Robot and Human Communication (ROMAN), Tsukuba, Japan, 1996, pp. 177–182 153. P.M. Fitts, Human engineering for an effective air navigation and traffic control systems, Ohio State University Research Foundation Report, Columbus, OH, 1951 154. J.D. Foley, Interfaces for advanced computing. Sci. Am. October, 2(4), 127–135 (1987) 155. J.D. Foley, A. van Dam, Fundamentals of Interactive Computer Graphics (Addison-Wesley, Reading, MA, 1982) 156. J. Forrester, World Dynamics (Wright Allen, Cambridge, MA, 1971) 157. M. Francas, S. Brown, D. Goodman, Alphabetic entry procedure with small keypads: Key layout does matter, in Proceedings of the Human Factors Society 27th Annual Meeting, Santa Monica, CA, 1983, pp. 187–190 158. W. Francois, Automation, Industrialization Comes of Age (Collier Books, New York, 1964) 159. C. Frasson, M. Er-radi, Principles of an icons-based command language, Proceedings of the ACM SIGMOD International Conference on Management of Data, Washington, DC, 1986 160. H. Freeman (ed.), Industrial Pollution Prevention Handbook (McGraw-Hill, New York, 1995), pp. 3.1–3.25 161. J. Fu., C.M. Lagoa, A. Ray, Robust optimal control of regular languages with event cost uncertainties, in Proceedings of the IEEE Conference Decision and Control, Hawai, December, 2003, pp. 3209–3214 162. J. Fu, A. Ray, C.M. Lagos, Unconstrained optimal control of regular languages. Automatica 40(4), 639–648 (2004) 163. S. Gael, The Job Analysis Handbook for Business, Industry, and Government (Wiley, New York, 1988) 164. W.O. Galitz, User-Interface Screen Design (QED Information Sciences, Wellesley, MA, 1993) 165. S. Gallager, J. Steven Moore, T.J. Stobbe, J.D. McGlothin, A. Bhattacharya, Physical strength assessment in ergonomics, in Handbook of Industrial Automation, ed. by R.L. Shell, E.L. Hall (Marcel Dekker, New York, 2000), pp. 797–827 166. C.E. Garcia, D.M. Prett, M. Morari, Model predictive control: Theory and practice – A survey. Automatica 25(3), 335–348 (1989) 167. D. Genter, A.L. Stevens (eds.), Mental Models (Erlbaum, Hillsdale, NJ, 1983) 168. J.C. Gentina, D. 
Corbeel, Colored adaptive structured Petri net: A tool for the automatic synthesis of hierarchical control of flexible manufacturing systems, in Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, 1987, pp. 1166–1173 169. W.M. Getz, R.G. Haight, Population Harvesting, Demographic Models of Fish, Forest and Animal Resources (Princeton University Press, Princeton, NJ, 1989)
References
325
170. J.H. Gibbons, P.D. Blair, H.L. Gwin, Strategies for energy use. Sci. Am. 261(3), 136–143 (1989) 171. M.M. Gillespie, Tremor. J. Neurosci. Nurs. 23(3), 170–174 (1991) 172. W.E. Gilmore, D. Gertman, H.S. Blackman, User-Computer Interfaces in Process Control (Academic, Boston, 1989) 173. E.P. Glinert, S.L. Tanimoto, Pict: An interactive graphical programming environment. IEEE Comput. 17(11), 7–25 (1984) 174. M. G¨obel, J. Springer, V. Hedicke, M. R¨otting, M. Luczak, Tactile feedback applied to computer mice. Intl. J. Hum. Comput. Interact. 71, 1–24 (1995) 175. D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (AddisonWesley, Menlo Park, CA, 1989) 176. F. Gould, A four variable world system, ibid, 455–469 (1975) 177. T.E. Graedel, P.J. Crutzen, The changing atmosphere. Sci. Am. 261(3), 58–68 (1989) 178. E. Grandjean, Fitting the Task to the Man (Taylor and Francis, London, 1988) 179. E. Grandjean, E. Vigliani, Ergonomic Aspects of Visual Display Terminals (Taylor and Francis, London, 1980) 180. Green Automation 2007 Proceedings, http://greenbusinesscentre.org 181. Green Automation and Energy Conservation, http://www.startupnation.com (K. Slovick, Greening your business) 182. Green Peace International, What is Clean Production, http://www.rec.org/poland/wpa/cpbl. htm, 1995 183. E.L. Greene, Cumulative trauma disorders on the rise. Med. Trib. July, 26 (1990) 184. D.M. Green, J.A. Swets, Signal Detection Theory and Psychophysics (Wiley, New York, 1966) 185. M. Green, D.J.N. Limebeer, Linear Robust Control (Prentice Hall, New Jersey, 1995) 186. J.S. Greenstein, Pointing devices, in Handbook of Human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 1317–1348 187. A.S. Gunn, P.A. Vesilind, Environmental Ethics for Engineers (Lewis Publishers, Chelsea, MI, 1986) 188. N.A. Gunningham, R.A. Kagan, D. Thornton, Shades of green: Business, regulation, and environment (2003), http://www.amazon.com 189. C.N. Haas, R.J. Vamos, Hazardous and Industrial Waste Treatment (Prentice Hall, Upper Saddle River, NJ, 1995) 190. W. Hackbush, Multi-Grid Methods and Applications (Springer, Berlin, 1982) 191. M. Hagberg, D. Rempel, Work related disorders and the operation of computer VDT’s, in Handbook of Human–Computer Interaction, ed. by M.G. Helander, T.K. Landauer, P.V. Prabhu (North-Holland, Amsterdam, 1997), pp. 1415–1429 192. P.A. Hancock, J.S. Warm, A dynamic theory of stress and sustained attention. Hum. Factors 31, 519–537 (1989) 193. N. Hanley, J. Shogen, B. White, Environmental Economics in Theory and Practice (Palgrave, London, 2007) 194. Haptic interfaces, http://www2.cs.utah.edu/classes/cs6360/Navhi/haptic.html 195. J. Harris, An Introduction to Fuzzy Logic Applications (Kluwer, Boston, 2000) 196. J. Harris, Environmental and Natural Resource Economics: A Contemporary Approach (Houghton Mifflin Company, Boston, MA, 2006) 197. J. Harris, Fuzzy Logic Applications in Engineering Science (Springer, Dordrecht, 2006) 198. M. Hart, Guide to Sustainable Development (Atlantic Center for the Environment, Ipswich, MA, 1995) 199. T. Hasemer, Lisp programming, in Artificial Intelligence Programming Environments, ed. by R. Hawley (Ellis Horwood, Chichester, 1987) 200. M. Hatamian, E.F. Brown, A new lightpen with subpixel accuracy, AT&T Tech. J. 64, 1065– 1075 (1985) 201. R. Hawley (ed.), Artificial Intelligence Programming Environments (Ellis Horwood, Chichester, West Sussex, U.K., 1987)
326
References
202. S. Haykin, Neural Networks: A Comprehensive Foundation (Macimillan College Publishing, New Jersey, 1994) 203. F. He, A. Agah, Multi-modal human interactions with an intelligent interface utilizing images, sounds and force feedback. J. Intell Robotic Syst. 32(2), 171–190 (2001) 204. H. Head, G. Holmes, Sensory disturbances from cerebral lessions. Brain 34, 102–103 (1911) 205. A.M. Heinecke, Developing recommendations for CAD user interfaces, in Human Aspects in Computing: Design and Use of Interactive Systems and Work with Terminals, ed. by H.J. Bullinger (Elsevier, Amsterdam, 1991) 206. S. Helal, M. Mokhtari, B. Abdulrazak (eds.), The Engineering Handbook of Smart Technology for Aging, Disability and Independence (Wiley, Hoboken, NJ, 2008) 207. J.L. Hendricks, M.R. Rosen, N.L.J. Berube, M.L. Aisen, A second-generation joystick for people disabled by tremor, in Proceedings of the RESNA 14th Annual Conference, Kansas City, MO, 1991, pp. 248–250 208. H. Henmi, T. Yoshikawa, Virtual lesson and its application to virtual calligraphy system, in Proceedings of the RO-MAN’97, 6th IEEE International Workshop on Robot and Human Communication, 1997, pp. 32–37 209. F. Herzberg, B. Mausner, B.B. Snyderman, The Motivation to Work (Wiley, New York, 1959) 210. G. Herzog, P. Wasinski, Visual translator linking perceptions and natural language descriptions. Artif. Intell. Rev. 9 (1994) 211. C. Heyes, W. Sch¨opp, M. Amann, U. Unger, A simplified model to predict long-term ozone concentrations in Europe, WP-96–12. International Institute for Applied Systems Analysis (IIASA), Laxemburg, Austria, 1996 212. E. Hirsh (ed.), Christianisme et Droits de l’Homme (Librairie de Libert´es, Paris, 1984) 213. H. Hislop, J.J. Perrine, The isokinetic concept of exercise. Phys. Ther. 47, 114–117 (1967) 214. Y.-C. Ho, X.-R. Cao, Perrturbation analysis of discrete event dynamic systems (Kluwer, Boston, 1991) 215. J.M. Hoc, From human–machine interaction to human–machine cooperation. Ergonomics 43, 833–843 (2000) 216. D.J. Hofmann, J.W. Harder, S.R. Rolf, J.M. Rosen, Baloon-borne observations of the development and vertical structure of the Antarctic ozone hole in 1986. Nature 326, 59–62 (1987) 217. G. Hofstede, Motivation, leadership and organization: D. American theories apply abroad? Organ. Dyn. 9, 42–63 (1980) 218. J.D. Hollan, E.L. Hutchins, L. Weitzman, Steamer: An interactive inspectable simulationbased training system. AI Mag. 5(2), 15–27 (1984) 219. E. Hollnagel, The phenotypes of erroneous actions: Implications of HCI design, in Human Computer Interaction and Complex Systems, ed. by G.R.S. Weir, J.L. Alty (Academic, London, 1989) 220. T.K. Hopkins, I. Wallerstein, Processes of the World System (SAGE Publications, Beverly Hills, CA, 1980) 221. http://callcentre.education.ed.ac.uk/downloads/smartchair/smartsmileleaflet. Also: www:// smilerehab.com/smartwheelchair.html 222. http://en.wikipedia.org/wiki/Human Development Index 223. http://en.wikipedia.org/wiki/IEC 61508 224. http://en.wikipedia.org/wiki/Smart wheelchair 225. http://en.wikipedia.org/wiki/Sustainable development 226. http://hdr.undp.org/en/humandev/ 227. http://intron.kz.tsukuba.ac.jp/HM/txt.html 228. http://ots.fh-brandenburg.de/downloads/scripte/ais/IFA-Serviceroboter-DB.pdf 229. http://sedac.ciesin.columbia.edu/es/esi 230. http://unfcc.int/documentation/documents/document lists/items/2960.php 231. http://unfcc.int/meatings/cop 13/items/4049.php 232. http://unfcc.int/meetings/cop 11/items/3394.php 233. http://www.ameinfo.com 234. 
http://www.answers.com/topic/human-development-index
References 235. 236. 237. 238. 239. 240. 241. 242. 243. 244. 245. 246. 247. 248. 249. 250. 251. 252. 253. 254. 255. 256. 257. 258. 259.
260.
261. 262. 263. 264. 265. 266. 267. 268.
269. 270. 271. 272.
327
http://www.automationworld.com http://www.biotechlearn.org.nz/site info/glossary/(namefilter)/e http://www.ciesin.columbia.edu http://www.copenhagenclimatecouncil.com http://www.cs.cmu.edu/afs/cs/project/msl/www/haptic desc.html http://www.cseindia.org/programme/industry/eia/introduction eia.pdf http://www.eionet.europa.eu http://www.en.wikipedia.org/wiki/Environmental Economics http://www.entrix.com/resources/glossary.aspx http://www.green-automation.com/en index.htm http://www.greenautomation.us http://www.hm-treasury.gov.uk/independent reviews/stern review economics climate change/stern review report.cfm http://www.igreenautomation.com http://www.intelligentactuator.com/green automation.php http://www.sarcos.com/telespec.dexarm.html http://www.secom.co.jp/english/myspoon/ http://www.un.org/climatechange/calendar.shtml http://www.vislab.iastate.edu/Projects/vr force/html/vr force.html http://www.vtt.fi/aut/kau/results/tracker/ http://www.yale.edu/envirocenter http://www-robotics.cs.umass.edu/p50/utah-mit-hand.html Human Growth and Development: Courses on Theoretical Perspectives, http://www.unm.edu/
jka/courses/archive/theory1.html V.D. Hunt, Industrial Robotics Handbook (Industrial Press, Inc., New York, 1983) ICAO, Annexes to the Convention on International Civil Aviation (ICAO, Montreal, Canada) H. Ihara, M. Nohmi, Current status of microcomputer applications on railway transportation systems, in Real Time Microcomputer Control of Industrial Processes, ed. by S.G. Tzafestas, J.K. Pal. (Kluwer, Dordrecht/Boston, 1990), pp. 481–508 C. Imamichi, A. Inamoto, Unilevel homogeneous distributed computer control systems and optimal system design, in Proceedings of the IFAC Distributed Computer Control Workshop, Monterey, CA, 1985 T. Inagaki, Situation-adaptive autonomy for time-critical takeoff decisions. Intl. J. Model. Simul. 19(4), 179–183 (1999) A.P. Ingersoll, The atmosphere. Sci. Am. 261(3), 118–126 (1983) R. Inglehart, W.E. Baker, Modernization, cultural change and the persistence of traditional values. Am. Sociol. Rev. 65, 19–51 (2000) International Development Wikipedia, the Free Encyclopedia http://en.wikipedia.org/wiki/ International Development ISEE (http://ecoeco.org/) M. Ishi, M. Nakata, M. Sato, Networked APIDAR: A networked virtual environment with virtual, auditory and haptic interactions. Presence 3, 351–359 (1994) ISO, Welcome to ISO online, http://www.iso.ch/welcome.html, 1996 S.C. Jacobson, E.K. Iversen, D.F. Knutli, R.T. Johnson, K.B. Biggers, Design of the Utah/MIT dextrous hand, in Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, 1986, pp. 1520–1532 H.D. Jellinek, S.K. Card, Powermice and user performance, Proceedings of the CHI’90 Conference on Human Factors in Computing Systems (ACM, New York, 1990), pp. 213–220 B.C. Jiang, O.S.H. Cheng, Six severity level design for robotic cell safety, in Human–Robot Interaction, ed. by M. Rahimi, W. Karkwowski (Taylor and Francis, London/Brighton, 1992) H. John Bernardin, R.W. Beatty, Performance Appraisal: Assessing Human Behavior at Work (Kent, Boston, 1984) L.A. Jones, Perception of force and weight: Theory and research. Psychol. Bull. 100(1), 29–42 (1986)
328
References
273. L.A. Jones, Dextrous hands: Human prosthetic and robotics. Presence 6(1), 29–56 (1997) 274. S. Jones, Graphical interfaces for knowledge engineering: An overview of relevant literature. Knowl. Eng. Rev. 3(3), 221–247 (1998) 275. R.K. Jones, M.A. Hagen, A perspective on cross cultural picture perception, in The Perception of Pictures, ed. by M.A. Hagen (Academic, New York, 1980), pp. 193–226 276. N. Jordan, Allocation of functions between man and machines in automated systems. J. Appl. Psychol. 47, 161–165 (1963) 277. T. Kamada, K. Oikawa, AMADEUS: A mobile, autonomous decentralized utility system for indoor transportation, in Proceedings of the 1998 IEEE International Conference on Robotics and Automation (ICRA’98), Leuven, Belgium, May 1998, pp. 2229–2236 278. M. Kamath, N. Viswanadham, Application of Petri net based models in the modelling and analysis of flexible manufacturing systems, in Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, 1986, pp. 312–317 279. B.H. Kantowitz, Interfacing human and machine intelligence, in Intelligent Interfaces: Theory, Research and Design, ed. by P. Hancock, M. Chignell (North-Holland, Amsterdam, 1989), pp. 49–68 280. B.H. Kantowitz, A.C. Bittner Jr., Using the aviation safety reporting system database as a human factors research tool, in Proceedings of the IEEE 15th Annual Aerospace and Defence Conference, Piscataway, NJ, 1992, pp. 31–39 281. B.H. Kantowitz, J.L. Campbell, Pilot workload and flightdeck automation, in Automation and Human Preformance: Theory and Applications, ed. by R. Parasuraman, M. Mouloua (Erlbaum, Mahwah, NJ, 1996), pp. 117–136 282. B.H. Kantowitz, R.D. Sorkin, Human Factors: Understanding People-System Relationships (Wiley, New York, 1983) 283. B.H. Kantowitz, R.D. Sorkin, Allocation of functions, in Handbook of Human Factors, ed. by G. Salvendy (Wiley, New York, 1987), pp 355–369 284. B.H. Kantowitz, T.J. Triggs, V. Barnes, Stimulus–response compatibility and human factors, in Stimulus–Response Compatibility, ed. by R.W. Proctor, T. Reeves (North-Holland, Amsterdam, 1990), pp. 365–388 285. N. Katevas (ed.), Mobile Robotics in Healthcare (IOS Press, Amsterdam, 2001) 286. N. Katevas, N.M. Sgouros, S.G. Tzafestas, G. Papakonstantinou, P. Beattie, J.M. Bishop, P. Tsanakas, P. Rabischong, The autonomous mobile robot SENARIO: A sensor – Aided intelligent navigation system for powered wheelchairs. IEEE Robotic Autom. Mag. 4(4), 60– 70 (1997) 287. E.J. Kellough, Reinventing public personnel management: Ethical implications for managers and public personnel systems. Public Pers. Manage. 28(4), 655–671 (1999) 288. R.A. Kerr, Is the greenhouse here? Science 239, 559–561 (1988) 289. R.A. Kerr, Report urges greenhouse action now. Science 241, 23–24 (1988) 290. N. Keyfitz, The growing human population. Sci. Am. 261(3), 118–126 (1989) 291. A. Kheddar, C. Tzafestas, P. Coiffet, T. Kotoku, K. Tanie, Multi-robot teleoperation using direct human hand actions. J. Adv.Robotic. 11(8), 779–825 (1998) 292. R. Kinkead, Typing speed, keying rates, and optimal keyboard layouts, in Proceedings of the Human Factors Society 19th Annual Meeting, Santa Monica, CA, 1975, pp. 159–161 293. N. Kirkpatrick, Application of life-cycle assessment to solid waste management practices, in Environmental Life-Cycle Assessment, ed. by M.A. Curran (McGraw Hill, New York, 1996), pp. 15.1–15.14 294. T. 
Klevers, The European approach to an open system architecture for CIM, in Proceedings of the 5th CIM Europe Conference (Springer Verlag, Heidelberg, Athens, Greece, 1989) pp. 109–120 295. J.J. Kok, H.G. Stassen, Human operator control of slowly responding systems: Supervisory control. J. Cybernet. Info. Sci. 3, 123–174 (1980) 296. K. Kosanke, F. Fernadat, M. Zelm (eds.), CIMOSA: CIM open systems architecture: Evolution and application in enterprise engineering and integration (Special Issue), Comput. Indus. 40 (2–3) (1999)
References
329
297. R. Koy-Oberthur, Perception by sensorimotor coordination in sensory substitution for blind, in Visuomotor Coordination: Amphibians, Comparisons, Models and Robots, ed. by J.P. Ewert, M.A. Arbib (Plenum, New York, MA, 1989), pp. 397–418 298. K.H.E. Kroemer, An isoinertial technique to assess individual lifting capability. Hum. Factors 25(5), 493–506 (1983) 299. R. Kumar, V. Carg, Modelling and control of logical discrete event systems (Kluwer, Boston, 1995) 300. H. Kwakernaak, Robust control and H1 : Optimization (Tutorial Paper). Automatica 29, 255– 273 (1972) 301. J.W.M. La Riviere, Threats to the world’s water. Sci. Am. 261(3), 80–94 (1989) 302. T. Laengle, T. Lueth, U. Rembold, H. Woern, A distributed control architecture for autonomous mobile robots: Implementation of the Karlsruhe multi-agent robot architecture. Adv. Robotic 12(4), 411–431 (1998) 303. S.P. Lajoic, S.J. Derry (eds.), Computers and Cognitive Tools (Erlbaum, Hillsdale, NJ, 1993) 304. Y.D. Landau, Adaptive Control: The Model Reference Approach (Marcel Dekker, New York, 1979) 305. T.K. Landauer, The Trouble with Computers: Usefulness, Usability and Productivity (MIT Press, Cambridge, MA, 1995) 306. A. Lankenau, O. Meyer, B. Krieg-Br¨uckner, Safety in robotics: The Bremen autonomous wheelchair, in Proceedings of the AMC’98: 5th International Workshop on Advanced Motion Control, Coimbra, Portugal, 1998, pp. 524–529 307. A.M. Law, W.D. Kelton, Simulation Modelling and Analysis (McGraw Hill, New York, 1991) 308. R.S. Lazarus, From psychological stress to emotions: A history of changing outlooks. Ann. Rev. Phychol. 44, 1–21 (1993) 309. C. Lazos, Engineering & Technology in Ancient Greece (University of Patras Press, Patras, Greece, 1998) (Translation from Greek by M. Lazou; Aeolos Editions, Athens, 1993) 310. J. Lee, Trust, self confidence, and operator’s adaptation to automation, Ph.D. Thesis, University of Illinois, Champaign, 1992 311. J. Lee, N. Moray, Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 1243–1270 (1992) 312. K. Lee, N. Swanson, S. Sauter, R. Wickstrom, A. Waikar, M. Magnum, A review of physical exercises recommended for VDT operators, Appl. Ergonom. 23(6), 387–408 (1992) 313. L. Leifer, On nature of design and an environment for design, in System Design: Behavioural Perspectives on Designers, Tools and Organizations, ed. by W.B. Rouse, K.R. Boff (North Holland, Amsterdam, 1987) 314. S. Levy, The Municipal Solid Waste Factbook (U.S. EPA, Washington, DC, 1996) 315. F.L. Lewis, Applied optimal control and estimation (Prentice Hall, Englewood Cliffs, 1992) 316. H. Lewis, J. Gertsakis, Design + Environment (Green Leaf Publishing Ltd, Sheffield, UK, 2001) 317. J. Lewis, K.M. Potosnak, R.L. Mayar, Keys and keyboards, in Handbook of Human– Computer Interaction, ed. by M.G. Helander, T.K. Landauer, P. Prablu (North-Holland, Amsterdam, 1997) 318. R. Lickert, The Human Organization: Its Management and Values (McGraw Hill, New York, 1967) 319. J.C.R. Licklider, Man-computer symbiosis. IRE Trans. Hum. Factors Electron. HFE-1, 4–11 (1960) 320. G.E. Likens, R.F. Wright, J.N. Galloway, T.J. Butler, Acid rain. Sci. Am. 241(4), 43–51 (1979) 321. T. Lindholm, Universal values and human rights, in Proceedings of the International Symposium on Universal Values, ed. by L.G. Christophorou, G. Contopoulos (Academy of Athens, Athens, Greece, 2004), pp. 27–45 322. B. Linhoff, Pintch analysis: A state-of-the-art overview. Trans. Chem. Eng. Res. Design 71, 503–522 (1993) 323. B. 
Linnhoff, Pintch analysis in pollution prevention, in Waste Minimization Through Process Design, ed. by A.P. Rossiter (McGraw-Hill, New York, 1995), pp. 53–67
330
References
324. E.A. Locke, The nature and causes of job satisfaction, in The Handbook of Industrial and Organizational Psychology, ed. by M.D. Dunnete (Rand McNally, Chicago, 1977), pp. 1297– 1349 325. C.G. Looney, Fuzzy Petri nets for rule-based decision making. IEEE Trans. Syst. Man Cybernet. SMC-18(1), 178–183 (1998) 326. J.R. Lourie, A.M. Bonin, Computer-controlled textile designing and weaving, in Proceedings of the IFIPS Conference, Edinburgh, 1968 327. H. Luczak, J. Springer, Ergonomics of CAD systems, in Handbook of human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 1349–1394 328. D. Macauley (ed.), Minding Nature: The Philosophers of Ecology (The Guilford Press, New York, 1996) 329. I.S. MacKenzie, A. Sellen, W. Buxton, A comparison of input devices in elemental pointing and dragging tasks, in Proceedings of the CHI’91 Conference on Human Factors in Computing Systems (ACM, New York, 1991), pp. 161–166 330. J. MacNeil, Strategies for sustainable economic development. Sci. Am. 261(3), 154–165 (1989) 331. P. Maes, The dynamics of action selection, in Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI’89), Detroit, MI, 1989, pp. 991–997 332. K.F. Man, S. Kwong, W.A. Halang, Genetic Algorithms for Control and Signal Processing (Springer, London, 1997) 333. A.S. Manne, C.O. Wene, MARCAL-MACRO: A linked model for energy-economy analysis, BNL-47161, Brookhaven National Laboratory, Upton, New York, 1992 334. A.S. Manne, R. Mendelsohn, R.G. Richels, MERGE: A model for evaluating regional and global effects of GHG reduction policies. Energ. Policy 23, 17–34 (1995) 335. J.G. March, H.A. Simon, Organizations (Wiley, New York, 1958) 336. A. Marcus, Graphical user interfaces, in Handbook of Human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 423–440 337. E. Marshall, Clean air? Don’t hold your breath. Science 244, 517–520 (1988) 338. C. Marshall, C. Nelson, M.M. Gardiner, Design guidelines, in Applying Cognitive Psychology to User Interface Design, ed. by M.M. Gardiner, B. Christie (Wiley, New York, 1987) 339. C.A. Martinez-Vela, World Systems Theory, ESD-83 (Fall 2001), http://web.mit.edu/esd83/ www/notebook/WorldSystem.pdf 340. A.H. Maslow, A theory of human motivation. Psychol. Rev. 50, 370–396 (1943) 341. H. Matsumaru, Recent computer application systems for railways. Hitachi Rev. 33(1), 1–6 (1984) 342. L. Maund, Introduction to Human Resources Management: Theory and Practice (Palgrave, New York, 2001) 343. O. Mayr, The Origins of Feedback (MIT Press, Cambridge, MA, 1970) 344. A. McDonough, Information, Economics and Management Systems (McGraw-Hill, New York, 1963) 345. M.B. McElroy, R.J. Salawitch, Changing composition of the global stratosphere. Science 243, 763–770 (1989) 346. M. Mc Gillivray, The human development index: Yet another redundant composite development indicator? World Dev. 18(10), 1461–1468 (1991) 347. M.W. McGreevy, S.R. Ellis, The effect of perspective geometry on judged direction in spatial information instruments. Hum. Factors 28, 439–456 (1986) 348. R.H. McKim, Visual Thinking (Lifetime Time Learning Publications, Belmont, CA, 1980) 349. C.T. Meadow, Man–Machine Communication (Wiley, New York, 1970) 350. J.S. Meditch, Stochastic Optimal Linear Estimation and Control (McGraw-Hill, New York, 1969) 351. M.M. Meerschaert, Mathematical Modeling (Academic, San Diego, 1999) 352. D. 
Meister, A cognitive theory of design and requirements for a behavioural design aid, in System Design: Behavioural Perspectives on Designers, Tools and Organizations, ed. by W.B. Rouse, K.R. Boff (North-Holland, Amsterdam, 1987), pp. 211–220
References
331
353. D. Meister, The History of Human Factors and Ergonomics (Erlbaum, Mahwah, NJ 1999) 354. D. Meister, T.P. Enderwick, Human Factors in System Design, Development, and Testing (Erlbaum, Mahwah, NJ, 2002) 355. M. Mendel, T.B. Sheridan, Optimal Combination of Information from Multiple Sources (MIT Man-Machine Systems Lab., Cambridge, MA, 1986) 356. G. Merland, Global deforestation could play a significant role in addressing the CO2 problem, in Annual Progress Report ORNL-6521 (Environmental Sciences Division), ed. by D.E. Reichle, et al. (Oakridge National Lab., Oakridge, TN, 1989) 357. J. Metcaff, C. Eddy, Wastewater Engineering: Treatment, Disposal and Reuse (McGraw-Hill, New York, 1991) 358. W. Mettrey, An assessment of tools for building large knowledge-based systems. AI Mag. 8(4), 81–89 (1987) 359. E. Meyers, Chemistry of Hazardous Materials (Prentice Hall, Englewood Cliffs, NJ, 1977) 360. J. Meyer, Adaptive changes in the reliance on hazard warnings, in Proceedings of the Human Interaction with Complex Systems (Beckman Institute, Urbana-Champaign, IL, 2000) 361. A. Meystel, Cognitive controller for Autonomous systems, in Proceedings of the IEEE Workshop on Intelligent Control, RPI, Troy, New York, 1985 362. A. Meystel (ed.), Intelligent control in robotics (Special Issue), J. Intell. Robotic Syst. 2(2–3), 95–360 (1989) 363. A. Meystel, Autonomous Mobile Robots: Vehicles with Cognitive Control (World Scientific, Singapore, 1991) 364. M. Michie, An Introduction to Genetic Algorithms (MIT Press, Cambridge, MA, 1996) 365. G. Miller, The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 63, 81–97 (1956) 366. A. Mital, L.J. George, Economic feasibility of a product line assembly: A case study. Eng. Econ. 35, 25–38 (1989) 367. A. Mital, A. Pennathur, Perspectives on designing human interfaces in automated systems, in Handbook of Industrial Automation, ed. by R.L. Shell, E.L. Hall (Marcel Dekker, New York, 2000), pp. 749–792 368. A. Mital, A. Mahajan, M.L. Brown, A comparison of manual and automated assembly methods, in Proceedings of the IIE Integrated System Conference (Institute of Industrial Engineers, Norcross, GA, 1988), pp. 206–211 369. A. Mital, A. Morotwala, M. Kulkarmi, M. Sindair, C. Siemieniuch, Allocation of functions to humans and machines in a manufacturing environment: Part II-The scientific basis (knowledge base) for the guide. Intl. J. Industr. Ergonom. 14, 33–49 (1994) 370. M. Mitchell, Systems analysis: The human element. Electro-Technology (Apr. 1966) 371. A.K. Mithal, S.A. Douglas, Differences in movement microstructure of the mouse and the finger-controlled isometric joystick, in Proceedings of the CHI’96 Conference on Human Factors in Computing Systems (ACM, New York, 1996), pp. 300–307 372. Modernization Theory–Wikipedia, the Free Encyclopedia, http://en.wilipedia.org/wiki/ Modernization Theory 373. Modernization: The Nature of Modern Society – Brittanica Online, http://www.britannica. com/EBchecked/topic/387301/modernization 374. V.A. Mohnen, The challenge of acid rain. Sci. Am. 259(2), 30–38 (1988) 375. T. Morgan, An overview of tools and languages, in Proceedings of the KBS’87 Online Publications, 1987 376. B. Muir, Trust between humans and machines, and the design of decision aids. Intl. J. ManMachine Stud. 27, 527–539 (1987) 377. B. Muir, Operator’s trust in and use of automatic controllers in a supervisory process control task, Ph.D. Thesis, University of Toronto, 1989 378. D. 
Murph, Sweden develops MICA: The intelligent autonomous wheelchair, www. engadget.com/2006/12/17/sweden-develops-mica-the-intelligent-autonomous-wheelchair (17/12/2006). Also: http://mica.csee.ltu.se/Rovaniemi/MICA-lisa20040519.pdf 379. J.D. Nagle, A. Mahr, Democracy and Democratization (Sage, Publications, London, 1999)
332
References
380. J. Nash, M.D. Stoughton, Learning to live with life-cycle assessment. Environ. Sci. Technol. 28, 236–237 (1994) 381. National Research Council of Canada, Design for Environmental Guide (2000), http:// www.nrc.ca/dfe 382. U. Neisser, Cognition and Reality: Principles and Implications of Cognitive Psychology (W.H. Freeman, San Francisco, 1976) 383. K. Nemire, R.H. Jacobi, S.R. Ellis, Simulation fidelity of a virtual display. Hum. Factors 36(1), 79–93 (1994) 384. B. Neumann, Natural language description of time-varying scenes, in Semantic Structures, ed. by D.L. Waltz (Erbaum, Hillsdale, NJ, 1989), pp. 167–207 385. R.E. Newell, H.G. Reichle Jr., W. Seiler, Carbon monoxide and the burning earth. Sci. Am. 261(4), 82–88 (1989) 386. R.S. Nickerson, Using Computers: The Human Factors of Information Systems (MIT Press, Cambridge, MA, 1986) 387. R.S. Nickerson, Understanding and controlling environmental change: Challeges and opportunities for information technology, in Robotics, Control and Society, ed. by N. Moray, W.R. Ferrel, W.B. Rouse (Taylor and Francis, London, 1990), Ch. 22 388. R.S. Nickerson, T.K. Landauer, Human–computer interaction: Background and issues, in Handbook of Human–Computer Interaction, ed. by M.G. Helander, T.K. Landauer, P.V. Prabhu (North-Holland, Amsterdam, 1997), pp 3–31 389. N.J. Nilsson, Shakey the robot, Technical Note No. 323, AI Center, SRI International, Menlo Park, CA, 1984 390. P.D. Nisbet, J.P. Odor, J.P. Loudon, The call centre smart wheelchair, in Proceedings of the 1st International Workshop on Robotic Applications in Medical and Health Care, Ottawa, Canada, 9:1–9:10, 1998 391. S. Nisimoto, Recent computer control in the steel industry. Hitachi Rev. 34(4), 194–200 (1985) 392. R.W. Noel, J.E. McDonald, Automating the search for good designs: About the use of simulated annealing and user models, Proceedings of the Interface 89 (Human Factors Society), Santa Monica, CA, 1989, pp. 241–245 393. S.Y. Nof, Handbook of Industrial Robotics (Wiley, New York, 1999) 394. M.S. Nolan, Fundamentals of Traffic Control (Books Cole Publ. Co., Pacific Grove, 1999) 395. F. Noreils, R. Chatila, Plan execution monitoring and control architecture for mobile robots. IEEE Trans. Robotic Autom. 11(2), 255–266 (1995) 396. D.A. Norman, D. Fisher, Why alphabetic keyboards are not easy to use: Keyboard layout doesn’t much matter. Hum. Factors 24, 509–519 (1982) 397. D.A. Norman, S.W. Draper (eds.), User Centered System Design: New Perspectives on Human-Computer Interaction (Erlbaum, Hillsdale, NJ, 1986) 398. D. Norman, T. Shalice, Attention to action: Willed and automatic control of behaviour, in Consciousness and Self-Regulation: Advances in Research and Theory, 4, ed. by R. Davidson, G. Schwartz, D. Shapiro (Plenum, New York, 1986), pp. 1–17 399. J. Noyes, The QWERTY keyboard: A review. Intl. J. Man Machine Stud. 18, 265–281 (1983) 400. Nuclear Energy Agency, Learning and adapting to societal requirements for radioactive waste management. NEA No. 5296, Organization for Economic Co-operation and Development (OECD), ISBN92-64-02080-2 (2004) 401. F.J. O’Hart, R.J. Greename, G. Lacey, D. Wenn, Communicating robots in healthcare environment, in Mobile Robotics in Healthcare, ed. by N. Katevas (IOS Press, Amsterdam, 2001), pp. 213–226 402. P.R. O’Leary, P.W. Walsh, R.K. Ham, Managing solid waste. Sci. Am. 259(6), 36–42 (1988) 403. Object Management Group, The Unified Modelling Language Specification, v.1.5, OMG, March (2003). www.omg.org/technology/documents/formal/uml.htm 404. Z. 
Obrenovic, D. Starcevic, Modeling multimodal-human-computer interaction. IEEE Comput. September, 65–72 (2004)
References
333
405. OECD Recommendation: Environment and Economics Guiding Principles Concerning International Economic Aspects of Environmental Policies, http://sedac.ciesin.org/entri/texts/ oecd/OECD-4.01.html 406. OECD: Organization for economic cooperation and development, Industrial case studies on the use of BAT and EQO in permits, in Proceedings of the Workshop on Environmental Requirements for Industrial Permitting, Paris, 1996 407. K. Ogata, Modern Control Engineering (Prentice Hall, Upper Saddle River, NJ, 1997) 408. Okala Guide 2007, http://www.idsa.org/whatsnew/sections/ecosection/okala.html 409. OSHA, The new OSHA: Reinventing worker safety and health, www://osha.gov/oshinfo/ reinvent/reinvent.html. 1996 410. S.L. Oviatt, Ten myths of multimodal interaction. Commun. ACM, November, 74–81 (1999) 411. E.H.J. Pallett, Aircraft Instruments: Principles and Applications (Pitman, London, 1981) 412. A. Papoulis, Probability, Random Variables and Stochastic Processes (McGraw-Hill, New York, 1991) 413. R. Parasuraman, Monitoring of automated systems, in Automation and Human Performance: Theory and Applications, ed. by R. Parasuraman, M. Mouloua (Erlbaum, Mahwah, NJ, 1996) 414. R. Parasuraman, T.B. Sheridan, C.D. Wickens, A model for types and levels of human interaction with automation. IEEE Trans. System Man Cybernet. SMC-30(3), 286–297 (2000) 415. R.P. Paul, S. Nof, Work methods measurements: A comparison between robot and human task performance. Int. J. Prod. Res. 17, 277–303 (1979) 416. M. Payamps, Changes at checkout, resources. Mag. Environ. Manage. 19(4), 6–9 (1997) 417. D. Pearce, A. Markandya, E.B. Barbier, Blueprint for a Green Economy (Earthscan, London, 1989) 418. C. Pellerin, Green automation. Assembly Autom. 15, 38–39 (1995) 419. W.R. Peltier, A.M. Tushingham, Global sea level rise and the greenhouse effect: Might they be connected? Science 244, 806–810 (1989) 420. R. Pew, A. Mavor, Modeling Human and Organizational Behavior (National Academy Press, Washington, DC, 1998) 421. J.T. Pfeffer, Solid Waste Management Engineering (Prentice Hall, Englewood Cliffs, NJ, 1992) 422. M.J. Pfeiffer, R. Papenhuijzen, N. Tholen, Eliciting the navigator’s knowledge on ship dynamics, in Proceedings of the 7th Annual Conference on Manual Control, Paris, France, 1998, pp. 201–206 423. J. Piaget, Biology and Knowledge: An Essay on the Relations Between Organic Regulations and Cognitive Processes (University of Chicago Press, Chicago, 1971) 424. R.A. Pielke, Mesoscale Meteorological Analysis (Academic, San Diego, CA, 1984) 425. A.F. Pillsbury, The salinity of rivers. Sci. Am. 245(1), 54–65 (1981) 426. A.M. Polovko, Fundamentals of Reliability Theory (Academic, San Diego, 1968) 427. D. Popovic, V.P. Bhatkar, Methods and Tools for Applied Artificial Intelligence (Marcel Dekker, New York, 1994) 428. M.M. Popp, B. Farber, Advanced display technologies, route guidance systems and the position of displays in cars, in Vision in Vehicles-III, ed. by A.G. Grale (Elsevier/North Holland, Amsterdam, 1991), pp. 219–225 429. M.E. Porter, V.E. Millar, How information gives you competitive advantage. Harvard Business Rev. 149–159, July–August (1985) 430. M.J. Powell, Radial basis functions for multivariable interpolation: A review, in Algorithms for Approximation, ed. by J.C. Mason, G.M. Cox (Oxford University Press, Oxford, 1987), pp. 143–167 431. P.V. Prabhu, G.V. Prabhu, Human error and user–interface design, in Handbook of Human– Computer Interaction, ed. by M.G. Helander, T.K. Landauer, P.V. 
Prabhu (North-Holland, Amsterdam, 1997), 79, pp. 489–501 432. C.K. Prahalad, G. Hamel, The core competence of the corporation. Harvard Business Rev. 68, 79–91 (1989)
334
References
433. E. Pr¨assler, J. Scholz, M. Strobel, MAid: Mobility assistance for elderly and disabled people, in Proceedings of the IECON’98: 24th International Conference of IEEE Industrial Electronics Soc., Aachen, Germany, 1998 434. H.E. Price, The allocation of functions in systems. Hum. Factors 27(1), 33–45 (1985) 435. A. Pritchett, Plot performance at collision avoidance during closely spaced parallel approaches. Air Traffic Control Quart. 7(1), 47–75 (1999) 436. M.H. Proffitt, D.W. Fahey, K.K. Kelly, A.F. Tuck, High latitude ozone loss outside the Antarctic ozone hole. Nature 342, 233–237 (1989) 437. P. Purkayastha, Configuration of distributed control systems, in Real-Time Microcomputer Control of Industrial Processes, ed. by S.G. Tzafestas, J.K. Pal (Kluwer, Boston/Dordrecht, 1990) 438. P. Purkayastha, Distributed control systems: Implementation strategies, in Proceedings of the International Seminar on Microprossesor Applications for Productivity Improvement, ed. by V.P. Bhatkar, K. Kant (Tata McGraw Hill, New Delhi, 1998), pp. 329–340 439. V. Putz-Anderson, Cumulative Trauma Disorders: A Manual for Musculoskeletal Diseases of the Upper Limbs (Taylor and Francis, London, 1988), 85–103 440. P.J. Ramadge, W.M. Wonham, Supervisory control of a class of discrete event processes. SIAM J. Control Optim. 25(1), 206–230 (1987) 441. A. Rapaport (ed.), Information for Decision Making: Quantitative and Behavioral Dimensions (Prentice Hall, Upper Saddle River, NJ, 1970) 442. J. Rasmussen, Human errors: A taxonomy for describing human malfunction in industrial installations. J. Occup. Accid. 4, 311–333 (1982) 443. J. Rasmussen, Skills, rules and knowledge: Signals, signs and symbols; and other distinctions in human performance models. IEEE Trans. Systems Man Cybernet. SMC-13(3), 257–266 (1983) 444. J. Rasmussen, Information Processing in Human Machine Interaction (North-Holland, Amsterdam 1986) 445. J. Rasmussen, K. Vicente, Ecological interfaces: A technological imperative in high tech systems? Intl. J. Hum. Comput. Interact. 2, 93–111 (1992) 446. J. Rasmussen, K.J. Vicente, Cognitive control of human activities and errors: Implications for ecological interface design, in Proceedings of the International Conference on Event Perception and Action, Trieste, Italy, August, 1987 447. W.L. Rathje, Rubbish! The Atlantic Monthly 99–109 (Atlantic Media Company, Washington, DC, http://www.theatlantic.com, Dec. 1989) 448. A. Ray, V.V. Phoha, S. Phoha, Quantitative Measure for Discrete Event Supervisory Control (Springer & Business Media, New York, 2005) 449. J. Reason, Human Error (Cambridge University Press, Cambridge, UK, 1990) 450. F. Rechnmann, World models: A case study on social responsibility and impact, in Proceedings of the 7th IFIP Conference on Optimization Techniques, Nice, 1, 1975, pp. 431–439 451. F. Redmill, An introduction to the safety standard IEC 61508. J. Syst. Safety Soc. 35(1), 1–12 (1999) 452. S.C. Reed, E.J. Middlebrooks, R.W. Crites, Natural Systems for Waste Management and Treatment (McGraw-Hill, New York, 1998) 453. P. Reid, Dynamic interactive display of complex data structures, in Graphics Tools for Software Engineering, British Computer Society Documentation and Displays Group, ed. by A.C. Kilgour, R.A. Earnshaw (BCS Publications, London, 1988) 454. F. Reinhardt, Down to earth: Applying business principles to environmental managements (2000), http://www.amazon.com 455. U. Rembold, R. Dillman, Computer-Aided Design and Manufacturing: Methods and Tools (Springer, Berlin, 1986) 456. 
D.W. Repperger, Management of spasticity via electrically operated force reflecting joystick, in Proceedings of the RESNA 13th Annual Conference, Kansas City, MO, 1991, p. 27 457. R. Revelle, Carbon dioxide and world climate. Sci. Am. 247(2), 35–43 (1982) 458. J. Richalet, A. Rault, J.L. Test, Model predictive heuristic control: Applications to industrial processes. Automatica 14(5), 413–428 (1978)
References
335
459. M.H. Richer, W.J. Clancey, Guidon-watch: A graphic interface for viewing a knowledge based system. IEEE Comput. Graph. Appl. 5(11), 51–64 (1985) 460. V. Riley, Human use of automation, Ph.D. Thesis, University of Minnesota, Minneapolis, 1994 461. V. Riley, Operator reliance on automation: Theory and data, in Automation and Human Performance: Theory and Applications, ed. by R. Parasuraman, M. Mouloua (Erlbaum, Mahwah, NJ, 1996), pp. 19–35 462. P.O. Riley, M.J. Rosen, Evaluating manual control devices for those with tremor disability. J. Rehabil. Res. Dev. 24(2), 99–110 (1987) 463. J. Rillings, R.J. Betsold, Advanced driver information systems. IEEE Trans. Veh. Technol. 40(1), 31–40 (1991) 464. M. Rillo, Using Petri nets and rule-based system in manufacturing systems, in Proceedings of the 12th IMACS World Congress, Paris, France, 1998, pp. 535–537 465. J. Rodrigo, F. Castells, J. Carlos Alonso, Electrical and Electronic Practical Ecodesign Guide, http://www.greenmarketing.com/articles/fivestrategies.html 466. J.-S. Roger Jang, C.T. Sun, E. Mizutani, Neuro-Fuzzy and Soft Computing: Parts A and B (Prentice Hall, Saddle River, NJ, 1997) 467. P. Roosen, B. Gross, Optimization strategies and their application to heat exchanger network synthesis, in Proceedings of the Exergoeconomical Analysis and Optimization in Chemical Engineering Seminar, Aachen, Germany, 1995 468. J. Rosenblatt, D. Payton, A fine-grained alternative to the subsumption architecture for mobile robot control in Proceedings of the International Joint Conference on Neural Networks, 1989, pp. 317–323 469. D.H. Rosenbloom, J.D. Carroll, Public personnel administration and law, in Handbook of Public Personnel Administration, ed. by J. Rabin et al. (Marcel Dekker, New York, 1995), pp. 71–113 470. H.H. Rosenbrock (ed.), Designing Human-Centered Technology (Springer, Berlin, 1989) 471. H.H. Rosenbrock, Machines with a Purpose (Oxford University Press, Oxford, 1990) 472. W.B. Rouse, On models and modelers: N cultures. IEEE. Trans. Systems Man Cybernet. SMC-12, 605–610 (1982) 473. B. Rouse (ed.), Advances in Man–Machine Research, vol. 1 (JAI Press, Greenwhich, CT, 1984), pp. 195–222 474. W.B. Rouse, On better mousetraps and basic research: Getting the applied world to the laboratory door. IEEE. Trans. Systems Man Cybernet. SMC-15, 620–630 (1985) 475. W.B. Rouse, Model-based evaluation of an intergrated support system concept. Large-Scale Syst. 13, 33–42 (1987) 476. W.B. Rouse, Intelligent decision support for advanced manufacturing systems. Manuf. Rev. 1, 236–243 (1988) 477. W.B. Rouse, Adaptive aiding for human/computer control. Hum. Factors 30, 431–438 (1988) 478. W.B. Rouse, Human resource issues in system design, in Robotics, Control and Society, ed. by N. Morey, W.R. Ferrell, W.B. Rouse (Taylor and Francis, London, 1990), pp. 177–186 479. W.B. Rouse, N.D. Geddes, R.E.Curry, An architecture for intelligent interfaces: Outline of an approach to supporting operators of complex systems. Hum. Comput. Interact. 3, 87–122 (1988) 480. R. Rubinstein, Digital Typography: An Introduction to Type and Composition for Computer System Design (Addison-Wesley, Reading, MA, 1988) 481. M. Rueher, M.-C. Thomas, A. Gubert, D. Ladret, A prolog based graphical approach for knowledge expression. Microsoft Eng. 24(4) (1986) 482. S. Russel, P. Norvig, Artificial Intelligence: A Modern Approach (Prentice Hall, Upper Sadle River, NJ, 1995) 483. J.M. Ryder, W.W. Zachary, A.L. Zaklad, J.A. 
Purcell, (I): A cognitive model for integrated decision aiding/training embedded systems (IDATES), Technical Report NTSC-92–010, (II): A design methodology for IDATES, Technical Report NTSC-92–011, Naval Training Systems Center, Orlando, FL, 1994
336
References
484. B. Sadler, T. Fenge, A national sustainable development strategy and the territorial North. Northern Perspect. 21(4) (1993), http://www.carc.org
485. M. Saisana, G. Dubois, A. Chaloulakou, P. Kassomenos, N. Spyrellis, Streamlining environmental monitoring networks: Applications to nitrogen dioxide in north Italy. Syst. Anal. Model. Simul. 43(2), 241–250 (2003)
486. G. Salvendy, Handbook of Human Factors (Wiley, New York, 1987)
487. M.S. Sanders, E.J. McCormick, Human Factors in Engineering and Design (McGraw-Hill, New York, 1987)
488. G.N. Saridis, Analytic formulation of intelligent machines as neural nets, in Proceedings of the IEEE Symposium on Intelligent Control, Washington, DC, August 1988
489. G.N. Saridis, Stochastic Processes, Estimation and Control: The Entropy Approach (Wiley, New York, 1995)
490. G.N. Saridis, Architectures for intelligent controls, in Intelligent Control Systems, ed. by M.M. Gupta, N.K. Sinha (IEEE Press, Piscataway, NJ, 1996), pp. 127–148
491. N.B. Sarter, Cockpit automation: From quantity to quality, from individual pilot to multiple agents, in Automation and Human Performance: Theory and Applications, ed. by R. Parasuraman, M. Mouloua (Erlbaum, Mahwah, NJ, 1996), pp. 267–280
492. N.B. Sarter, D.D. Woods, Pilot interaction with cockpit automation: Operational experiences with the flight management system. Intl. J. Aviat. Psychol. 2, 303–321 (1992)
493. N.B. Sarter, D.D. Woods, Team play with a powerful and independent agent: Operational experiences and automation surprises on the Airbus A-320. Hum. Factors 39, 553–569 (1997)
494. N.B. Sarter, D.D. Woods, C.E. Billings, Automation surprises, in Handbook of Human Factors and Ergonomics, ed. by G. Salvendy (Wiley, New York, 1997), pp. 1926–1943
495. T. Sato, S. Hirai, Language-aided robotic teleoperation system (LARTS) for advanced teleoperation. IEEE J. Robotic Autom. 3(5), 476–480 (1987)
496. S.L. Sauter, T.M. Schnorr, Occupational health aspects of work with video display terminals, in Environmental and Occupational Medicine, ed. by W.N. Rom (Little, Brown and Company, Boston, 1992)
497. S.L. Sauter, M.S. Gottlieb, K.C. Jones, V.N. Dodson, Job and health implications of VDT use: Initial results of the Wisconsin-NIOSH study. Commun. ACM 26, 284–294 (1982)
498. C.N. Sawyer, P.L. McCarty, G.F. Parkin, Chemistry for Environmental Engineering (McGraw-Hill, New York, 1994)
499. N.I. Sax, Hazardous Chemicals Desk Reference (Van Nostrand Reinhold, New York, 1987)
500. W. Schaber, M. Mathiesen, Interorganizational manufacturing/supplier operations in the automotive industry, in Proceedings of the ELEDIS'91: International Conference on Electronic Data Interchange Systems, Milan, Italy, May 1991
501. H.J. Schneider, S.G. Tzafestas, Integrated approach to computer-aided multi-supplier/multi-distributor operations in the automotive industry, in Proceedings of the 23rd International Symposium on Automotive Technology and Automation, vol. III, Vienna, Dec. 1990, pp. 264–273
502. R. Schraft, C. Schaeffer, T. May, Care-O-bot: The concept of a system for assisting elderly or disabled persons in home environments, in Proceedings of the 24th IEEE Industrial Electronics Conference, vol. 4, 1998, pp. 2476–2481
503. H.J. Schneider, M. Lock, M. Mathiesen, H. Rentschler, CMSO: CIM for multi-supplier operations, in Proceedings of the APMS'90: IFIP International Conference on Advances in Production Management Systems, Espoo, Finland, August 1990
504. S.E. Schwartz, Acid deposition: Unraveling a regional phenomenon. Science 243, 753–763 (1989)
505. A. Sears, Improving touchscreen keyboards: Design issues and a comparison with other devices. Interact. Comput. 3, 253–269 (1991)
506. D.E. Seborg, The prospects of advanced process control, in Proceedings of the 10th IFAC World Congress, Munich, 1987, pp. 281–289
507. SEC(2005)97, Communication from the Commission to the Council and the European Parliament, 2004 Environmental Policy Review, COM(2005) 17 final, CEC, February 2005
508. J.H. Seinfeld, Urban air pollution: State of the science. Science 243, 771–781 (1989)
509. N.J. Sell, Industrial Pollution Control: Issues and Techniques (Van Nostrand Reinhold, New York, 1992)
510. H. Selye, The stress concept: Past, present and future, in Stress Research: Issues for the Eighties, ed. by C.L. Cooper (Wiley, New York, 1983), p. 120
511. J.W. Senders, N.P. Moray, Human Error: Cause, Prediction and Reduction (Erlbaum, Mahwah, NJ, 1991)
512. C. Shearer, KEE and POPLOG: Alternative approaches to developing major knowledge based systems, in Proceedings of KBS'87 (Online Publications, Pinner, UK, 1987)
513. R.L. Shell, E.L. Hall, Handbook of Industrial Automation (Marcel Dekker, New York, 2000)
514. T.T. Shen, C.E. Schmidt, T.R. Card, Assessment and Control of VOC Emissions from Waste Treatment and Disposal Facilities (Van Nostrand Reinhold, New York, 1993)
515. R.N. Shepard, J. Metzler, Mental rotation of three-dimensional objects, in Thinking: Readings in Cognitive Science, ed. by P.N. Johnson-Laird, P.C. Wason (Cambridge University Press, Cambridge, UK, 1977)
516. T.B. Sheridan, On how often the supervisor should sample. IEEE Trans. Syst. Sci. Cybernet. SSC-6, 140–145 (1970)
517. T.B. Sheridan, Computer control and human alienation. Technol. Rev. 83, 21–44 (1980)
518. T.B. Sheridan, Measuring, modeling and augmenting reliability of man–machine systems. Automatica 19(6), 637–645 (1983)
519. T.B. Sheridan, Telerobotics, Automation and Human Supervisory Control (MIT Press, Cambridge, MA, 1992)
520. T.B. Sheridan, Task analysis, task allocation and supervisory control, in Handbook of Human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 87–105
521. T.B. Sheridan, Supervisory control, in Handbook of Human Factors and Ergonomics, ed. by G. Salvendy (Wiley, New York, 1997), pp. 1295–1327
522. T.B. Sheridan, Descartes, Heidegger, Gibson and God: Toward an eclectic ontology of presence. Presence 8(5), 549–557 (1999)
523. T.B. Sheridan, Humans and Automation: System Design and Research Issues (Wiley, New York, 2002)
524. T.B. Sheridan, W. Ferrell, Man–Machine Systems: Information, Control and Decision Models of Human Performance (MIT Press, Cambridge, MA, 1974)
525. T.B. Sheridan, G. Johannsen, Monitoring Behavior and Supervisory Control (Plenum, New York, 1976)
526. T.B. Sheridan, R.T. Hennessy, Research and Modelling of Supervisory Control Behaviour (National Academy Press, Washington, DC, 1984)
527. T.B. Sheridan, T. Vamos, S. Aida, Adapting automation to man, culture and society. Automatica 19, 605–612 (1983)
528. S. Sherr (ed.), Input Devices (Academic, San Diego, 1988)
529. J.P. Shield, Video display terminals and occupational health, in Proceedings of the Safety, 1990, pp. 17–19
530. B. Shneiderman, R. Mayer, D. McKay, P. Heller, Experimental investigations of the utility of detailed flowcharts in programming. Commun. ACM 20(6), 573–581 (1977)
531. C. Shoaf, A.M. Genaidy, Workstation design, in Handbook of Industrial Automation, ed. by R.L. Shell, E.L. Hall (Marcel Dekker, New York, 2000), pp. 793–796
532. T. Skelton, T. Allen, Culture and Global Change (Routledge, New York, 1999)
533. S. Skogestad, I. Postlethwaite, Multivariable Feedback Control: Analysis and Design (Wiley, New York, 1996)
534. M.R. Skrokov (ed.), Mini- and Microcomputer Control in Industrial Processes (Van Nostrand Reinhold, New York, 1990)
535. M.J. Smith, The physical, mental and emotional stress effects of VDT work. Comput. Graph. Appl. 4, 23–27 (1984)
536. M.J. Smith, Occupational stress, in Handbook of Human Factors, ed. by G. Salvendy (Wiley, New York, 1987), pp. 844–860
537. S.L. Smith, J.N. Mosier, Guidelines for designing user interface software, Technical Report ESD-TR-86-278, Hanscom Air Force Base, MA, 1986
538. M.J. Smith, P.C. Sainfort, A balance theory of job design for stress reduction. Intl. J. Ind. Ergonom. 4, 67–79 (1989)
539. M.J. Smith, G. Salvendy, Work with Computers: Organizational, Management, Stress and Health Aspects, vol. 12A (Elsevier, Amsterdam, 1989)
540. M.J. Smith, P. Carayon, New technology, automation and work organization: Stress problems and improved technology implementation strategies. Intl. J. Hum. Factors Manuf. 5, 99–116 (1995)
541. M.J. Smith, F.T. Conway, Psychological aspects of computerized office work, in Handbook of Human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (Elsevier, Amsterdam, 1997), pp. 1497–1517
542. S.A. Snell, J.W. Dean Jr., Integrated manufacturing and human resource management: A human capital perspective. Acad. Manage. J. 35(3), 467–504 (1992)
543. N.K. Sondheimer, Spatial reference and natural language machine control. Intl. J. Man–Machine Stud. 8, 329–336 (1976)
544. P. Sparaco, A330 crash to spur changes in Airbus. Aviat. Week Space Technol. 141(6), 20–22 (1994)
545. P. Sparaco, Human factors cited in French A320 crash. Aviat. Week Space Technol., 30–31, January 1994
546. Specialty Publishing Company, http://www.specialtypub.com/constructtech/article.asp?article id=7371
547. R. Springston, Signs of life after disaster, http://www.vcu.edu/cesweb/news/kepone.html, 1996
548. T. Spybey, Social Change, Development and Dependency: Modernity, Colonialism and the Development of the West (Polity Press, Oxford, 1992)
549. K.M. Stanney, T.L. Maxey, G. Salvendy, Socially centered design, in Handbook of Human Factors and Ergonomics, ed. by G. Salvendy (Wiley, New York, 1997), pp. 637–656
550. N. Stanton, Human Factors in Alarm Design (Taylor and Francis, London, 1994)
551. H.G. Stassen, Supervisory control behavior modelling: The challenge and necessity, in Robotics, Control and Society, ed. by N. Moray, W.R. Ferrell, W.B. Rouse (Taylor and Francis, London, 1990)
552. S.S. Stevens, On the psychophysical law. Psychol. Rev. 64, 153–181 (1957)
553. R.S. Stolarski, The Antarctic ozone hole. Sci. Am. 258(1), 30–36 (1988)
554. R. Stuart, The Design of Virtual Environments (McGraw-Hill, New York, 1996)
555. N. Sugimoto, Experimental study of human error in robotics, Paper collection No. 844-5, Machinery Institute of Japan, Tokyo, 1984
556. M. Sullivan, Video display health concerns. AAOHN J. 37(7), 254–257 (1989)
557. G.J. Suski, M.G. Rodd, Current issues in design, analysis and implementation of distributed computer-based control systems, in Proceedings of the 6th IFAC Distributed Computer Control Workshop, Mayschoss, Germany, 1986
558. L. Swidler, Religious Liberty and Human Rights in Nations and Religions (Ecumenical Press, Philadelphia, 1998)
559. F.W. Taylor, On the art of cutting metals. Trans. ASME 28, 31–35 (1906)
560. R.H. Taylor et al., Computer-Integrated Surgery (MIT Press, Cambridge, MA, 1996)
561. E. Tellegen, M. Wolsink, Society and the Environment: An Introduction (Gordon and Breach, New York, 1998)
562. J. Thurber, P. Sherman, Pollution prevention requirements in United States environmental laws, in Industrial Pollution Prevention Handbook, ed. by H.M. Freeman (McGraw-Hill, New York, 1995), pp. 27–49
563. M.C. Torrance, Natural communication with robots, M.Sc. Thesis, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, 1994
564. L.B. Rosenberg, B.D. Adelstein, Perceptual decomposition of virtual haptic surfaces, in Proceedings of the IEEE Symposium on Research Frontiers in Virtual Reality, San Jose, CA, 1993, pp. 46–53
565. C.-C. Tsui, Robust Control System Design: Advanced State-Space Techniques (Marcel Dekker, New York, 1996)
566. S. Tsuji, E.H. Shortliffe, Graphical access to the knowledge base of a medical consultation system, Technical Report KSL-85-11, Stanford Knowledge Systems Laboratory (1983)
567. T.S. Tullis, An evaluation of alphanumeric, graphic, and colour information displays. Hum. Factors 23, 541–550 (1981)
568. T.S. Tullis, Predicting the usability of alphanumeric displays, Doctoral Dissertation, Rice University, Houston, TX, 1984
569. T.S. Tullis, Optimizing the usability of computer-generated displays, in Proceedings of the HCI'86 Conference on People and Computers: Designing for Usability (British Computer Society, York, UK, 1986)
570. T.S. Tullis, Screen design, in Handbook of Human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 503–531
571. M. Turk, G. Robertson, Perceptual user interfaces (introduction). Commun. ACM, March 2000, 33–35
572. S.G. Tzafestas (ed.), Optimal Control of Dynamic Operational Research Models (North-Holland, Amsterdam, 1979)
573. S.G. Tzafestas, Optimization of system reliability: A survey of problems and techniques. Intl. J. Syst. Sci. 11(11), 455–486 (1980)
574. S.G. Tzafestas (ed.), Microprocessors in Signal Processing, Measurement and Control (D. Reidel, Dordrecht, 1983)
575. S.G. Tzafestas (ed.), Applied Digital Control (North-Holland, Amsterdam, 1985)
576. S.G. Tzafestas (ed.), Knowledge-Based System Diagnosis, Supervision and Control (Plenum, London, 1989)
577. S.G. Tzafestas, Artificial intelligence techniques in computer-aided manufacturing systems, in Knowledge Engineering, vol. II, ed. by H. Adeli (McGraw-Hill, New York, 1990), pp. 161–212
578. S.G. Tzafestas (ed.), Expert Systems in Engineering Applications (Springer, Berlin, 1993)
579. S.G. Tzafestas (ed.), Applied Control: Current Trends and Modern Methodologies (Marcel Dekker, New York, 1993)
580. S.G. Tzafestas (ed.), Computer-Assisted Management and Control of Manufacturing Systems (Springer, Berlin, 1997)
581. S.G. Tzafestas (Guest ed.), Autonomous mobile robots in health care services (Special Issue). J. Intell. Robotic Syst. 22(3–4), 177–350 (1998)
582. S.G. Tzafestas (Guest ed.), Autonomous robotic wheelchair projects in Europe improve mobility and safety (Special Issue). IEEE Robotics Autom. Mag. 17(1), 1–73 (2001)
583. S.G. Tzafestas, M. Hamza, Advances in Modelling, Planning, Decision and Control of Energy, Power and Environmental Systems (MECO'83 Proceedings, Athens) (ACTA Press, Anaheim/Calgary, 1983)
584. S.G. Tzafestas, J.K. Pal (eds.), Real-Time Microcomputer Control of Industrial Processes (Kluwer, Boston/Dordrecht, 1990)
585. S.G. Tzafestas, A.N. Venetsanopoulos, Fuzzy Reasoning in Information, Decision and Control Systems (Kluwer, Dordrecht/Boston, 1994)
586. S.G. Tzafestas, H.B. Verbruggen (eds.), Artificial Intelligence in Industrial Decision Making, Control and Automation (Kluwer, Dordrecht/Boston, 1995)
587. E.S. Tzafestas, S.G. Tzafestas, Incremental design of a flexible robotic assembly cell using reactive robots, in Artificial Intelligence in Industrial Decision Making, Control and Automation, ed. by S.G. Tzafestas, H.B. Verbruggen (Kluwer, Dordrecht/Boston, 1995), pp. 555–571
588. S.G. Tzafestas, F. Capkovic, Petri-net based approach to synthesis of intelligent control for DEDS, in Computer-Assisted Management and Control of Manufacturing Systems, ed. by S.G. Tzafestas (Springer, London/Berlin, 1997), pp. 325–351
589. C. Tzafestas, P. Coiffet, Dextrous haptic interaction with virtual environments: Hand-distributed kinesthetic feedback and haptic perception, in Proceedings of the IARP First International Workshop on Humanoid and Human-Friendly Robotics, Tsukuba, Japan, October 1998, pp. IV-3.1–IV-3.14
590. S.G. Tzafestas, E.S. Tzafestas, Human–machine interaction in intelligent robotic systems: A unifying consideration with implementation examples. J. Intell. Robotic Syst. 32(2), 119–141 (2001)
591. S.G. Tzafestas, S. Abu El Ata-Doss, G. Papakonstantinou, Expert system methodology in process supervision and control, in Knowledge-Based System Diagnosis, Supervision and Control, ed. by S.G. Tzafestas (Plenum, London, 1989), pp. 181–215
592. S.G. Tzafestas, D.G. Koutsouris, N.I. Katevas (eds.), Mobile robotics technology for health care services, in Proceedings of the 1st MobiNet Symposium, European Union Project MOBINET, Athens, Greece, 1997
593. U.S. EPA, Wastewater Treatment and Reuse by Land Application, EPA-660/2-73/006 (U.S. EPA, Washington, DC, 1973)
594. U.S. EPA, EPA Regulatory Agenda (U.S. EPA, Washington, DC, 1989)
595. U.S. EPA, Pollution prevention, EPA 21P-3003, U.S. EPA, Washington, DC, 1991
596. U.S. EPA, Facility Pollution Prevention Guide, EPA/600/R-92/088 (U.S. EPA, Washington, DC, 1992)
597. U.S. EPA, Protection of the Ozone Layer, http://www.epa.gov/ozone/science/indicat/indicat.html, 1996
598. U.S. EPA-NCEE: National Center for Environmental Economics, http://yosemite.epa.gov/ee/epa/eed.nsf/webpages/homepage
599. K.P. Valavanis, G.N. Saridis, Intelligent Robotic System Theory: Design and Applications (Kluwer, Boston, MA, 1992)
600. P.K. Varshney, Decentralized Bayesian detection theory, in Stochastic Large-Scale Engineering Systems, ed. by S.G. Tzafestas, K. Watanabe (Marcel Dekker, New York, 1992), pp. 1–22
601. S. Vere, T. Bickmore, A basic agent. Comput. Intell. 6(1), 41–60 (1990)
602. Virtex, http://virtex.com/haptic-feedback.html (1998)
603. W. Wahlster, H. Marburger, A. Jameson, S. Busemann, Over-answering yes–no questions: Extended responses in a NL interface to a vision system, in Proceedings of the 8th IJCAI (Karlsruhe, Germany, 1983), pp. 643–646
604. C.R. Walker, R.H. Guest, The Man on the Assembly Line (Harvard University Press, Cambridge, MA, 1952)
605. I. Wallerstein, The Modern World-System I: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century (Academic, New York, 1974)
606. I. Wallerstein, The Essential Wallerstein (The New Press, New York, 2000)
607. F. Wang, G.N. Saridis, A coordination theory for intelligent machines. Automatica 26(5), 833–844 (1990)
608. R. Watts, Hazardous Wastes: Sources, Pathways, Receptors (Wiley, New York, 1998)
609. M. Weck, W. Eversheim, W. König, T. Pfeifer, Production Engineering: The Competitive Edge (Butterworth-Heinemann, Oxford, 1991)
610. C. Welzel, R. Inglehart, Human development and the "explosion" of democracy, WZB Discussion Paper FS III 01-202, WZB, Berlin, 2001
611. C. Welzel, R. Inglehart, H.-D. Klingemann, The theory of human development: A cross-cultural analysis. Eur. J. Pol. Res. 42, 341–379 (2003)
612. R.M. White, The great climate debate. Sci. Am. 263(1), 36–45 (1990)
613. C.D. Wickens, Engineering Psychology and Human Performance (Harper Collins, New York, 1992)
614. C.D. Wickens, B. Goettle, Multiple resources and display formation: The implications of task integration, in Proceedings of the Human Factors Society Meeting, Santa Monica, CA, 1984, pp. 722–726
615. C.D. Wickens, S.E. Gordon, Y. Liu, An Introduction to Human Factors Engineering (Addison Wesley Longman, New York, 1998)
616. C.D. Wickens, A.S. Mavor, R. Parasuraman, J.P. McGee, The Future of Air Traffic Control: NRC Report (National Academy Press, Washington, DC, 1998)
617. E.L. Wiener, Cockpit automation, in Human Factors in Aviation, ed. by E.L. Wiener, D.C. Nagel (Academic, San Diego, CA, 1988), pp. 433–461
618. E.L. Wiener, Human factors of advanced technology ('glass cockpit') transport aircraft, Technical Report 177528 (NASA Ames Research Center, Moffett Field, CA, 1989)
619. E.L. Wiener, Crew coordination and training in the advanced-technology cockpit, in Cockpit Resource Management, ed. by E.L. Wiener, B.G. Kanki, R.L. Helmreich (Academic, San Diego, CA, 1993), pp. 199–223
620. W.W. Wierwille, Visual and manual demands of in-car controls and displays, in Automotive Ergonomics, ed. by B. Peacock, W. Karwowski (Taylor and Francis, London, 1993), pp. 299–320
621. E.O. Wilson, Threats to biodiversity. Sci. Am. 261(3), 108–116 (1989)
622. E.O. Wilson, The Diversity of Life (Belknap Press, Cambridge, MA, 1992)
623. P. Winsemius, U. Guntram, A Thousand Shades of Green: Sustainable Strategies for Competitive Advantage (2002), http://www.amazon.com
624. G. Woodside, J. Cascio, Total environmental management. Ind. Wastewater 4(5), 53–56 (1996)
625. www.gearfuse.com/tag/care-o-bot-3; also www.physorg.com/news134145359.html
626. www.mobilitynow.co.uk/top-amazing-mobility-robots-for-the-handicapted-and-disabled
627. www.pepperl-fuchs.com/selector/index.html
628. www.robotspodcast.com/forum/viewtopic.php?f=9&t=91&start=0
629. www.univ-metz.fr/culture sport/sam/Productions/Sciences/vahm.html
630. M. Yabushita, Autonomous decentralization concept and its application to railway control systems, in Proceedings of the 34th IEEE Vehicular Technology Conference, vol. 34, May 1984, pp. 285–260
631. H. Yamada, A historical study of typewriters and typing methods: From the position of planning Japanese parallels, Technical Report 80-05, Department of Information Science, University of Tokyo, Tokyo, Japan, 1980
632. W.W. Zachary, J.M. Ryder, Decision support systems: Integrating decision aiding and decision training, in Handbook of Human–Computer Interaction, ed. by M. Helander, T.K. Landauer, P. Prabhu (North-Holland, Amsterdam, 1997), pp. 1235–1258
633. B.P. Zeigler, Multifaceted Modelling and Discrete Event Simulation (Academic, London/Orlando, FL, 1984)
Index
A
Acid rain, 11, 117, 118, 120–125, 227
Adaptive control, 106, 224, 307, 314, 316
Air emissions, 109
Air pollution, 109–111, 117–120, 125, 130, 172
Air traffic control, 149, 200, 202–205
Aircraft automation, 200–201
Allocation function, 136, 138–140
Assistive robotics, 215–219
Automated highway systems, 207–208
Automatic control, 2, 3, 5, 197
Automation, 2
  human factors, 1, 23–46
  in environmental systems, 193, 226, 227
  in the nature, 10–11
  social issues, 11–14
Automobile automation, 206–209, 226
Availability model, 243–247
Average value operators, 295–297

B
Bang–bang controller, 6, 7
Bayes formula, 236–239, 293, 298
Bayesian detection, 300
Bayesian medical diagnosis, 298–299
Behavior-based architectures, 84, 103–107
Benign chemistry, 159
Biodiversity decrease, 128, 129
Bishop's flow diagram, 110
Brownian motion, 250

C
Calculus of variations, 274, 276–277
Car driving control, 8
Center-of-gravity technique (COG), 256, 259
CIM automation, 81, 220–224
Classical control, 88, 90, 268, 303–306, 315
Clean chemistry, 159
Climate change, 20, 122, 128, 171, 172, 177, 178, 180, 182, 184
Computational optimization, 270–274
Computer-aided design, 49, 62, 71, 79–81, 213
Constrained optimization, 269–270
Continuous process automation, 224–226
Control, 3–7, 65, 68, 83–108, 110–111, 155, 160–162, 165–173, 202–205, 267–316
Crossover, 278–280

D
Decision aiding, 134, 143–146, 149
Decision analysis, 268, 293–302
Decision matrix, 268, 295–297
Decision training, 134, 143–146
Deforestation, 11, 128–130
Desertification, 128, 129
Design for the environment, 152, 158
Detection with fusion, 299
Deterministic models, 231–235
Direct releases, 153
Discrete event control, 103
Display compatibility, 68
Distributed control architectures, 84, 97–102
Doctrine of
  employment at will, 45
  privilege, 44
Dynamic models, 232, 233, 248–251
Dynamic optimization, 268, 274–278
Dynamic programming, 274–275

E
Earth's carbon cycle, 119–120
Ecology, 168, 173, 178, 188
Eigenvalue control, 268, 306–309
Energy conservation, 152, 162–165, 181, 228
Entropy model, 232, 242–243
Environment R-rules, 180–182
Environmental auditing, 161, 162
Environmental control
  in European Union, 170
  in USA, 168–173
  regulations, 167–173
Environmental economics, 183–185, 187
Environmental impact, 2, 110, 116, 129–131, 152–158, 161, 162, 164, 170–172, 181, 226
Environmental standards, 172, 186–187
Estimation, 39, 89, 267–317
Euler simulation, 260–261
Expectancy model, 36, 37
F
Failure rate, 147, 200, 244, 245, 247
Feedback, 1, 2, 5–9, 42, 47, 51, 73, 74, 76, 78, 79, 81, 84, 88, 89, 92, 135, 137, 160, 180, 212–214, 217, 219, 221, 267, 303, 306, 307, 317
Feedback control system, 5–7
First-order statistics, 248
Flight simulation, 79
Fly-ball governor, 4, 7
Force sensing, 61, 76, 77
Fugitive emissions control, 152, 165–167, 170
Function allocation, 136, 138–140
Fuzzy evaluation, 301–302
Fuzzy inference, 253–255
Fuzzy set operations, 253
Fuzzy sets, 232, 251–260, 293, 301
Fuzzy systems, 255–260, 316
G
Genetic optimization, 278–280
Global warming, 11, 110, 117–126, 130, 178, 180
Gradient algorithm, 270, 290
Graphical user interface (GUI) design
  components, 63
  for knowledge based systems, 61, 74–75
Green automation, 228
Green chemistry, 159
Greenhouse effect, 120–122
H
Hierarchical distributed systems, 84, 98–101
Human bias, 35, 39–40
Human development
  index, 18–19
  report, 20–21
Human error probability (HEP), 256–258
Human factors
  field, 24–27
  goals, 25–27
  in automation, 23–46
Human–machine interaction (HMI), 47–81, 135, 136
Human–machine interactive systems via virtual environments, 76–79
Human–machine interfaces (HMI), 43, 48, 61, 62, 68–72, 76, 79–81, 85, 102
Human-minding automation
  design, 133–140, 182
  interface design, 137–140
  types, 133, 134
Human-nature minding industry, 176
Human resources problem, 140–143
Human rights, 10, 14, 36, 43–46
Humans in automation, 9–10
I
Impact analysis, 154, 155, 157
Impact of industrial activity, 117–126
Indirect releases, 153
Industrial contaminants, 109–117
Innovation transfer, 134, 140, 142–143
Inorganic nonmetals, 111, 115–117
Inspection with robots, 211
Intelligent control, 68, 84, 92, 103, 104, 307, 316
Intelligent HMI, 68–71
Internal model factor, 24, 31–33, 313
International safety standards, 146–149
Inter-organizational automation, 222–224
Intra-organizational automation, 221
Inventory analysis, 154
J
Job satisfaction, 30, 36–37, 47, 59, 134, 142, 149
Job stress, 12, 35–38
K
Kalman filter
  continuous time, 287–290
  discrete time, 285–286
Keys and keyboards, 51–53
Knowledge-based error, 42–43
L
Learning, 42, 48, 71, 89, 105, 106, 137, 144, 237, 264, 268, 280–293, 316
Least squares estimation, 280, 283
Least squares learning rule, 283
Life-cycle assessment computer models, 154–156
Life expectancy, 18–20
Life time, 2, 24, 181, 244
Linear quadratic regulator, 310, 313
Literacy, 18–20
M
Machining with robots, 212
MAid robot, 216
Mamdani rule, 254, 259
Markov process, 249–250, 287, 288
Markov reliability model, 246–247
Material handling, 210, 212
Mean cycle time, 245
Mean of maxima (MOM) method, 256
Mean time to first failure (MTFF), 245, 247
Medical robotic applications, 213
Metals, 111, 112, 115–117, 119, 124
Meystel's architecture, 84, 86
Minimum principle, 274, 277–278
Model matching control, 306
Model reference control, 314
Modern control, 4, 268, 303, 306–316
Modernization, 1, 14–21
Monte Carlo method, 265
Motor schemas architecture, 105–107
Motor speed control, 7, 8
MOVAID robot, 215
Multilayer perceptron, 268, 291–292
Multi-modal HMI, 61
Mutation, 122, 278–280
My Spoon robot, 216
N
Natural language HMI, 61
Natural resources conservation, 111, 152, 162–165
Natural resources depletion, 117, 126–128
Nature minding business operation, 152
Nature minding design, 158–159, 182
Nature minding economic considerations, 183–187
Nature minding industrial activity, 151–191
Nature minding organizations, 187–191
Nature minding rules, 181, 182
Need model, 36, 37
Neural network learning, 290–293
Neural networks, 290, 316
Newton–Raphson algorithm, 270, 271
O
Office automation, 48, 150, 194–196
Okala guide, 182
Operator reliance, 24, 33–34
Optimal control, 87, 91, 272, 306, 309–313, 316
Organic compounds, 111–115, 119, 164, 166
Ozone hole, 11, 120–125
P
Parameter estimation, 280–285
Pareto optimality, 296, 297
Physical layout factors, 59
Physical strength, 25, 35–46
Polluter-pays principle, 186
Pollution control
  factors, 155
  planning, 160–162
Predictive control, 87, 224, 306, 313
Primary air pollutants, 118
Probabilistic model simulation, 232, 262–266
Probabilistic models, 235–241
Probability of failure on demand (PFD), 147–149
Public pollution control, 166–167
R
Radial basis network (RBN), 268, 291–293
Railway systems automation, 193, 196–199, 225
Rasmussen's architecture, 86–88
Real object handling, 80, 81
Recursive estimation, 282–286, 306
Recycling, 110, 111, 125, 128, 153, 156, 158, 159, 163, 170, 172, 182, 185, 228
Reference superset, 252, 253, 257
Reliability model, 246
Residual management, 152, 162–165
Robot-based surgery, 213, 214, 316
Robot social applications, 213, 214
Robotic assembly, 211
Robotic automation systems, 209–219
Robust control, 307, 314–316
ROLLAND wheelchair, 215
Runge–Kutta simulation, 261–262
Ryder–Zachary framework, 146
S
Screen design, 36, 48, 56–58
Sea transportation automation, 193, 206–209
Selection, 10, 26, 27, 39, 41, 48, 49, 53–56, 72, 81, 85, 93, 95, 104, 107, 108, 134, 137, 140, 143, 155, 156, 158, 162, 176, 181, 216, 271, 278, 280, 293, 297, 306
Self-tuning control, 224, 314
Shannon's theorem, 311
Sheridan's architecture, 84, 86, 88–92, 135
Signal-to-noise ratio, 243
Situation analysis, 138–140
Skill-based error, 42
Solid waste disposal, 117, 125–126, 130, 164–165
Standard of living, 14, 18–20, 50
State estimation, 267, 285–290
State prediction, 287
State space models, 232, 233, 235, 260, 303, 313
Static optimization, 97, 268–274
Stationarity, 249
Statistics, 101, 232, 238, 240–241, 248, 249
Stimulus–response compatibility, 24, 31–33
Stochastic control, 306, 312–313
Stochastic models, 231, 262
Stochastic processes, 232, 248–251
Stress model, 38
Subsumption architecture, 84, 104–105, 107
Supervised control architectures, 85–93
Surgeon training, 79
Surgical assistant systems, 214
Sustainability
  index, 152, 178–179
  indicators, 175, 178
  principles, 152, 177
System design, 2, 23, 24, 26–30, 134–138, 140, 142, 267, 306
System development resources, 140–142
System-minding design, 134–135
System modeling, 231
System optimization, 267–280
System simulation, 231, 260–266
T
Tactile-based HMI, 61
Task allocation, 84, 93–97
Task analysis, 50, 84, 93–97, 138
Traffic management systems, 208
Tustin approximation, 305
U
Urban smog, 11, 110, 117, 120–125
Utility equilibrium, 298
V
Video display terminal factors, 59, 60
Virtual reality systems, 14, 77, 78, 213
Virtualization, 77, 78
Visual displays, 50, 61, 65–68
W
Waste control, 125
Waste neutralization, 164
Wastewater treatment, 164, 166
Water conservation, 1, 162
Water pollution, 109, 117, 125, 126, 173, 186
Welding, 211
Windowing systems, 63–65
Workload, 24, 30–32, 34, 36, 39, 153, 200, 201
Work-method factors, 59–60
Workstation design, 36, 58, 59
Z
Zadeh's rule, 254, 256, 302