Pipeline Risk Management Manual Ideas, Techniques, and Resources Third Edition
W. Kent Muhlbauer
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
ELSEVIER
Gulf Professional Publishing is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2004, Elsevier Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.
Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting "Customer Support" and then "Obtaining Permissions."
Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data
Muhlbauer, W. Kent.
Pipeline risk management manual : a tested and proven system to prevent loss and assess risk / by W. Kent Muhlbauer.-3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-7506-7579-9
1. Pipelines-Safety measures-Handbooks, manuals, etc. 2. Pipelines-Reliability-Handbooks, manuals, etc. I. Title.
TJ930.M84 2004
621.8'672 dc22    2003058315

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
ISBN: 0-7506-7579-9
For information on all Gulf Professional Publishing publications visit our Web site at www.gulfpp.com

03 04 05 06 07 08 09    10 9 8 7 6 5 4 3 2 1

Printed in the United States of America
Contents

Acknowledgments vii
Preface ix
Introduction xi
Risk Assessment at a Glance xv

Chapter 1   Risk: Theory and Application 1
Chapter 2   Risk Assessment Process 21
Chapter 3   Third-party Damage Index 43
Chapter 4   Corrosion Index 61
Chapter 5   Design Index 91
Chapter 6   Incorrect Operations Index 117
Chapter 7   Leak Impact Factor 133
Chapter 8   Data Management and Analyses 177
Chapter 9   Additional Risk Modules 197
Chapter 10  Service Interruption Risk 209
Chapter 11  Distribution Systems 223
Chapter 12  Offshore Pipeline Systems 243
Chapter 13  Stations and Surface Facilities 257
Chapter 14  Absolute Risk Estimates 293
Chapter 15  Risk Management 331

Appendix A  Typical Pipeline Products 357
Appendix B  Leak Rate Determination 361
Appendix C  Pipe Strength Determination 363
Appendix D  Surge Pressure Calculations 367
Appendix E  Sample Pipeline Risk Assessment Algorithms 369
Appendix F  Receptor Risk Evaluation 375
Appendix G  Examples of Common Pipeline Inspection and Survey Techniques 379

Glossary 381
References 385
Index 389
Acknowledgments

As in the last edition, the author wishes to express his gratitude to the many practitioners of formal pipeline risk management who have improved the processes and shared their ideas. The author also wishes to thank reviewers of this edition who contributed their time and expertise to improving portions of this book, most notably Dr. Karl Muhlbauer and Mr. Bruce Beighle.
Preface

The first edition of this book was written at a time when formal risk assessments of pipelines were fairly rare. To be sure, there were some repair/replace models out there, some maintenance prioritization schemes, and the occasional regulatory approval study, but, generally, those who embarked on a formal process for assessing pipeline risks were doing so for very specific needs and were not following a prescribed methodology. The situation is decidedly different now. Risk management is increasingly being mandated by regulations. A risk assessment seems to be the centerpiece of every approval process and every pipeline litigation. Regulators are directly auditing risk assessment programs. Risk management plans are increasingly coming under direct public scrutiny.

While risk has always been an interesting topic to many, it is also often clouded by preconceptions of requirements of huge databases, complex statistical analyses, and obscure probabilistic techniques. In reality, good risk assessments can be done even in a data-scarce environment. This was the major premise of the earlier editions. The first edition even had a certain sense of being a risk assessment cookbook: "Here are the ingredients and how to combine them." Feedback from readers indicates that this was useful to them. Nonetheless, there also seems to be an increasing desire for more sophistication in risk modeling. This is no doubt the result of more practitioners than ever before pushing the boundaries, as well as the more widespread availability of data and the more powerful computing environments that make it easy and cost effective to consider many more details in a risk model. Initiatives are currently under way to generate more
complete and useful databases to further our knowledge and to support detailed risk modeling efforts.

Given this as a backdrop, one objective of this third edition is to again provide a simple approach to help a reader put together some kind of assessment tool with a minimum of aggravation. However, the primary objective of this edition is to provide a reference book for concepts, ideas, and maybe a few templates covering a wider range of pipeline risk issues and modeling options. This is done with the belief that an idea and reference book will best serve the present needs of pipeline risk managers and anyone interested in the field. While I generally shy away from technical books that get too philosophical and are weak in specific how-to's, it is just simply not possible to adequately discuss risk without getting into some social and psychological issues. It is also doing a disservice to the reader to imply that there is only one correct risk management approach. Just as an engineer will need to engage in a give-and-take process when designing the optimum building or automobile, so too will the designer of a risk assessment/management process. Those embarking on a pipeline risk management process should realize that, once some basic understanding is obtained, they have many options in specific approach. This should be viewed as an exciting feature, in my opinion. Imagine how mundane the practice of engineering would be if there were little variation in problem solving. So, my advice to the beginner is simple: arm yourself with knowledge, approach this as you would any significant engineering project, and then enjoy the journey!
Introduction

As with previous editions of this book, the chief objective of this edition is to make pipelines safer. This is hopefully accomplished by enhancing readers' understanding of pipeline risk issues and equipping them with ideas to measure, track, and continuously improve pipeline safety.

We in the pipeline industry are obviously very familiar with all aspects of pipelining. This familiarity can diminish our sensitivity to the complexity and inherent risk of this undertaking. The transportation of large quantities of sometimes very hazardous products over great distances through a pressurized pipeline system, often with zero-leak tolerance, is not a trivial thing. It is useful to occasionally step back and re-assess what a pipeline really is, through fresh eyes. We are placing a very complex, carefully engineered structure into an enormously variable, ever-changing, and usually hostile environment. One might reply, "Complex!? It's just a pipe!" But the underlying technical issues can be enormous. Metallurgy, fracture mechanics, welding processes, stress-strain reactions, soil interface mechanics, mechanical properties of the coatings as well as their critical electrochemical properties, soil chemistry, every conceivable geotechnical event creating a myriad of forces and loadings, sophisticated computerized SCADA systems, and we're not even to rotating equipment or the complex electrochemical reactions involved in corrosion prevention yet! A pipeline is indeed a complex system that must coexist with all of nature's and man's frequent lack of hospitality. The variation in this system is also enormous. Material and environmental changes over time are of chief concern. The pipeline must literally respond to the full range of possible ambient conditions of today as well as events of months and years past that are still impacting water tables, soil chemistry, land movements, etc.
Out of all this variation, we are seeking risk 'signals.' Our measuring of risk must therefore identify and properly consider all of the variables in such a way that we can indeed pick out risk signals from all of the background 'noise' created by the variability. Underlying most meanings of risk is the key issue of 'probability.' As is discussed in this text, probability expresses a degree of belief. This is the most compelling definition of probability because it encompasses statistical evidence as well as interpretations and judgment. Our beliefs should be firmly rooted in solid, old-fashioned engineering judgment and reasoning. This does not mean ignoring statistics; rather, it means using data appropriately: for diagnosis, to test hypotheses, and to uncover new information. Ideally, the degree of belief would also be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence. This is the purpose of this book: to provide frameworks in which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline.

Some of the key beliefs underpinning pipeline risk management, in this author's view, include:

- Risk management techniques are fundamentally decision support tools.
- We must go through some complexity in order to achieve "intelligent simplification."
- In most cases, we are more interested in identifying locations where a potential failure mechanism is more aggressive than in predicting the length of time the mechanism must be active before failure occurs.
- Many variables impact pipeline risk. Among all possible variables, choices are required to strike a balance between a comprehensive model (one that covers all of the important stuff) and an unwieldy model (one with too many relatively unimportant details).
- Resource allocation (or reallocation) toward reduction of failure probability is normally the most effective way to practice risk management.

(The complete list can be seen in Chapter 2.)

The most critical belief underlying this book is that all available information should be used in a risk assessment. There are very few pieces of collected pipeline information that are not useful to the risk model. The risk evaluator should expect any piece of information to be useful until he absolutely cannot see any way that it can be relevant to risk or decides its inclusion is not cost effective. Any and all experts' opinions and thought processes can and should be codified, thereby demystifying their personal assessment processes. The experts' analysis steps and logic processes can be duplicated to a large extent in the risk model.
A very detailed model should ultimately be smarter than any single individual or group of individuals operating or maintaining the pipeline, including that retired guy who knew everything. It is often useful to think of the model-building process as 'teaching the model' rather than 'designing the model.' We are training the model to 'think'
like the best experts and giving it the collective knowledge of the entire organization and all the years of record-keeping.
Changes from Previous Editions

This edition offers some new example assessment schemes for evaluating various aspects of pipeline risk. After several years of use, some changes are also suggested for the model proposed in previous editions of this book. Changes reflect the input of pipeline operators, pipeline experts, and changes in technology. They are thought to improve our ability to measure pipeline risks in the model. Changes to risk algorithms have always been anticipated, and every risk model should be regularly reviewed in light of its ability to incorporate new knowledge and the latest information. Today's computer systems are much more robust than in past years, so short-cuts, very general assumptions, and simplistic approximations to avoid costly data integrations are less justifiable. It was more appropriate to advocate a very simple approach when practitioners were picking this up only as a 'good thing' to do, rather than as a mandated and highly scrutinized activity. There is certainly still a place for the simple risk assessment. As with the most robust approach, even the simple techniques support decision making by crystallizing thinking, removing much subjectivity, helping to ensure consistency, and generating a host of other benefits. So, the basic risk assessment model of the second edition is preserved in this edition, although it is tempered with many alternative and supporting evaluation ideas.

The most significant changes for this edition are seen in the Corrosion Index and Leak Impact Factor (LIF). In the former, variables have been extensively re-arranged to better reflect those variables' relationships and interactions. In the case of LIF, the math by which the consequence variables are combined has been made more intuitive. In both cases, the variables to consider are mostly the same as in previous editions. As with previous editions, the best practice is to assess major risk variables by evaluating and combining many lesser variables, generally available from the operator's records or public domain databases. This allows assessments to benefit from direct use of measurements or at least qualitative evaluations of several small variables, rather than a single, larger variable, thereby reducing subjectivity.

For those who have risk assessment systems in place based on previous editions, the recommendation is simple: retain your current model and all its variables, but build a modern foundation beneath those variables (if you haven't already done so). In other words, bolster the current assessments with more complete consideration of all available information. Work to replace the high-level assessments of 'good,' 'fair,' and 'poor' with evaluations that combine several data-rich subvariables such as pipe-to-soil potential readings, house counts, ILI anomaly indications, soil resistivities, visual inspection results, and all the many other measurements taken. In many cases, this allows your 'as-collected' data and measurements to be used directly in the risk model, with no extra interpretation steps required. This is straightforward and will be a worthwhile effort, yielding gains in efficiency and accuracy.

As risks are re-assessed with new techniques and new information, the results will often be very similar to previous assessments. After all, the previous higher-level assessments were no doubt based on these same subvariables, only informally. If the new processes do yield different results than the previous assessments, then some valuable knowledge can be gained. This new knowledge is obtained by finding the disconnect (the basis of the differences) and learning why one of the approaches was not 'thinking' correctly.
In the end, the risk assessment has been improved.
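The shift from qualitative 'good/fair/poor' ratings to scores built from as-collected subvariables can be sketched in a few lines of code. The function below is a hypothetical illustration only: the point bands, weightings, and thresholds are invented for the example (the -850 mV pipe-to-soil criterion is a widely used industry guideline, but its use here as a scoring breakpoint, and every other number, is an assumption, not a scoring rule from this book).

```python
# Hypothetical sketch: scoring cathodic protection effectiveness directly from
# three measured subvariables instead of a subjective good/fair/poor rating.
# All thresholds, bands, and point values are illustrative assumptions.

def cp_effectiveness_score(pipe_to_soil_mv, anomaly_count, soil_resistivity_ohm_cm):
    """Combine three as-collected measurements into a 0-15 pt score (higher = safer)."""
    # Pipe-to-soil potential: readings at or below -850 mV (CSE) are commonly
    # taken as indicating adequate protection (assumed point bands).
    if pipe_to_soil_mv <= -850:
        potential_pts = 7.0
    elif pipe_to_soil_mv <= -750:
        potential_pts = 3.0
    else:
        potential_pts = 0.0
    # Fewer ILI anomaly indications earn more points (assumed linear band).
    anomaly_pts = max(0.0, 5.0 - 0.5 * anomaly_count)
    # Higher soil resistivity suggests a less corrosive environment (assumed bands).
    if soil_resistivity_ohm_cm > 10000:
        resistivity_pts = 3.0
    elif soil_resistivity_ohm_cm > 2000:
        resistivity_pts = 1.5
    else:
        resistivity_pts = 0.0
    return potential_pts + anomaly_pts + resistivity_pts

print(cp_effectiveness_score(-900, 2, 12000))  # well-protected segment -> 14.0
```

The point is the structure, not the numbers: each measurement feeds the score directly, so a re-survey updates the risk model with no interpretation step in between.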
Disclaimer

The user of this book is urged to exercise judgment in the use of the data presented here. Neither the author nor the publisher provides any guarantee, expressed or implied, with regard to the general or specific application of the data, the range of errors that may be associated with any of the data, or the appropriateness of using any of the data. The author accepts no responsibility for damages, if any, suffered by any reader or user of this book as a result of decisions made or actions taken on information contained herein.
Risk Assessment at a Glance

The following is a summary of the risk evaluation framework described in Chapters 3 through 7. It is one of several approaches to basic pipeline risk assessment in which the main consequences of concern are related to public health and safety, including environmental considerations. Regardless of the risk assessment methodology used, this summary can be useful as a checklist to ensure that all risk issues are addressed.
Figure 0.1 Risk assessment model flowchart (the Third-party, Corrosion, Design, and Incorrect Operations indexes combine with the Leak Impact Factor to produce the Relative Risk Score).
Relative Risk Rating = (Index Sum) ÷ (Leak Impact Factor)

Index Sum = (Third Party) + (Corrosion) + (Design) + (Incorrect Operations)
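The top-level arithmetic of the framework can be expressed as a short function. Each index contributes 0-100 points, with higher points indicating more safety, and the index sum is then adjusted by the Leak Impact Factor; the sketch below assumes the division form, under which higher ratings correspond to lower relative risk. The input values are hypothetical.

```python
# Minimal sketch of the summary framework's top-level arithmetic.
# Four index scores (each 0-100 pts, higher = safer) are summed, then the
# Index Sum is divided by the Leak Impact Factor (higher LIF = worse
# consequences). Example scores below are hypothetical.

def relative_risk_rating(third_party, corrosion, design, incorrect_ops,
                         leak_impact_factor):
    index_sum = third_party + corrosion + design + incorrect_ops  # 0-400 pts
    return index_sum / leak_impact_factor

print(relative_risk_rating(70, 55, 80, 65, 10.0))  # -> 27.0
```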
Third-party Index
A. Minimum Depth of Cover ................ 0-20 pts   20%
B. Activity Level ........................ 0-20 pts   20%
C. Aboveground Facilities ................ 0-10 pts   10%
D. Line Locating ......................... 0-15 pts   15%
E. Public Education ...................... 0-15 pts   15%
F. Right-of-way Condition ................ 0-5 pts     5%
G. Patrol ................................ 0-15 pts   15%
   Total ................................. 0-100 pts 100%
Corrosion Index
A. Atmospheric Corrosion ................. 0-10 pts   10%
   A1. Atmospheric Exposure .............. 0-5 pts
   A2. Atmospheric Type .................. 0-2 pts
   A3. Atmospheric Coating ............... 0-3 pts
B. Internal Corrosion .................... 0-20 pts   20%
   B1. Product Corrosivity ............... 0-10 pts
   B2. Internal Protection ............... 0-10 pts
C. Subsurface Corrosion .................. 0-70 pts   70%
   C1. Subsurface Environment ............ 0-20 pts
       Soil Corrosivity .................. 0-15 pts
       Mechanical Corrosion .............. 0-5 pts
   C2. Cathodic Protection ............... 0-25 pts
       Effectiveness ..................... 0-15 pts
       Interference Potential ............ 0-10 pts
   C3. Coating ........................... 0-25 pts
       Fitness ........................... 0-10 pts
       Condition ......................... 0-15 pts
   Total ................................. 0-100 pts 100%
Design Index
A. Safety Factor ......................... 0-35 pts   35%
B. Fatigue ............................... 0-15 pts   15%
C. Surge Potential ....................... 0-10 pts   10%
D. Integrity Verifications ............... 0-25 pts   25%
E. Land Movements ........................ 0-15 pts   15%
   Total ................................. 0-100 pts 100%
Incorrect Operations Index
A. Design ................................ 0-30 pts   30%
   A1. Hazard Identification ............. 0-4 pts
   A2. MAOP Potential .................... 0-12 pts
   A3. Safety Systems .................... 0-10 pts
   A4. Material Selection ................ 0-2 pts
   A5. Checks ............................ 0-2 pts
B. Construction .......................... 0-20 pts   20%
   B1. Inspection ........................ 0-10 pts
   B2. Materials ......................... 0-2 pts
   B3. Joining ........................... 0-2 pts
   B4. Backfill .......................... 0-2 pts
   B5. Handling .......................... 0-2 pts
   B6. Coating ........................... 0-2 pts
C. Operation ............................. 0-35 pts   35%
   C1. Procedures ........................ 0-7 pts
   C2. SCADA/Communications .............. 0-3 pts
   C3. Drug Testing ...................... 0-2 pts
   C4. Safety Programs ................... 0-2 pts
   C5. Surveys/Maps/Records .............. 0-5 pts
   C6. Training .......................... 0-10 pts
   C7. Mechanical Error Preventers ....... 0-6 pts
D. Maintenance ........................... 0-15 pts   15%
   D1. Documentation ..................... 0-2 pts
   D2. Schedule .......................... 0-3 pts
   D3. Procedures ........................ 0-10 pts

Total Index Sum .......................... 0-400 pts
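As a quick illustration, the weighting structure tabulated above can be checked programmatically. The snippet below simply verifies that the Incorrect Operations Index subcategory weights sum to 100% and that the four index maxima together span the 0-400 point Index Sum; the dictionaries hold only values shown in the summary tables.

```python
# Consistency check on the summary tables: subcategory weights of the
# Incorrect Operations Index should total 100%, and the four indexes
# (each capped at 100 pts) should total the 400-pt Index Sum maximum.

incorrect_ops_weights = {"Design": 30, "Construction": 20,
                         "Operation": 35, "Maintenance": 15}
assert sum(incorrect_ops_weights.values()) == 100

index_maxima = {"Third-party": 100, "Corrosion": 100,
                "Design": 100, "Incorrect Operations": 100}
print(sum(index_maxima.values()))  # -> 400
```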
Leak Impact Factor
Leak Impact Factor = Product Hazard (PH) x Leak Volume (LV) x Dispersion (D) x Receptors (R)

A. Product Hazard (Acute + Chronic Hazards) .. 0-22 points
   A1. Acute Hazards
       a. Nf ............................. 0-4 pts
       b. Nr ............................. 0-4 pts
       c. Nh ............................. 0-4 pts
       Total (Nf + Nr + Nh) .............. 0-12 pts
   A2. Chronic Hazard RQ ................. 0-10 pts
B. Leak Volume (LV)
C. Dispersion (D)
D. Receptors (R)
   D1. Population Density (Pop)
   D2. Environmental Considerations (Env)
   D3. High-Value Areas (HVA)
   Total (Pop + Env + HVA)
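The multiplicative structure of the Leak Impact Factor can be sketched as follows. Only the structure follows the outline above (Product Hazard as acute plus chronic hazard points, Receptors as Pop + Env + HVA, and the four-way product); every numeric value is a hypothetical placeholder, and the Leak Volume and Dispersion inputs are simplified to plain numbers for illustration.

```python
# Sketch of the Leak Impact Factor as a product of its four components.
# All component values are hypothetical; LV and D are simplified to
# dimensionless numbers for this illustration.

def leak_impact_factor(product_hazard, leak_volume, dispersion, receptors):
    return product_hazard * leak_volume * dispersion * receptors

acute = 4 + 2 + 3               # Nf + Nr + Nh (each 0-4 pts, total 0-12)
chronic = 6                     # RQ-based chronic hazard (0-10 pts)
product_hazard = acute + chronic  # 0-22 points

receptors = 5 + 3 + 2           # Pop + Env + HVA (hypothetical subscores)
print(leak_impact_factor(product_hazard, 2, 1, receptors))  # -> 300
```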
Risk: Theory and Application

Contents
I.   The science and philosophy of risk
     Embracing paranoia 1/1
     The scientific method 1/2
     Modeling 1/3
II.  Basic concepts 1/3
     Hazard 1/3
     Risk 1/4
     Failure 1/4
     Probability 1/4
     Frequency, statistics, and probability 1/5
     Failure rates 1/5
     Consequences 1/6
     Risk assessment 1/7
     Risk management 1/7
     Experts 1/8
III. Uncertainty 1/8
IV.  Risk process: the general steps
1. The science and philosophy of risk

Embracing paranoia

One of Murphy's famous laws¹ states that "left to themselves, things will always go from bad to worse." This humorous prediction is, in a way, echoed in the second law of thermodynamics. That law deals with the concept of entropy. Stated simply, entropy
¹ Murphy's laws are famous parodies of scientific laws and life, humorously pointing out all the things that can and often do go wrong in science and life.
is a measure of the disorder of a system. The thermodynamics law states that "entropy must always increase in the universe and in any hypothetical isolated system within it" [34]. Practical application of this law says that to offset the effects of entropy, energy must be injected into any system. Without adding energy, the system becomes increasingly disordered. Although the law was intended to be a statement of a scientific property, it was seized upon by "philosophers" who defined system to mean a car, a house, economics, a civilization, or anything that became disordered. By this extrapolation, the law explains why a desk or a garage becomes increasingly cluttered until a cleanup (injection of energy) is initiated. Gases
diffuse and mix in irreversible processes, unmaintained buildings eventually crumble, and engines (highly ordered systems) break down without the constant infusion of maintenance energy. Here is another way of looking at the concept: "Mother Nature hates things she didn't create." Forces of nature seek to disorder man's creations until the creation is reduced to the most basic components. Rust is an example: metal seeks to disorder itself by reverting to its original mineral components. If we indulge ourselves with this line of reasoning, we may soon conclude that pipeline failures will always occur unless an appropriate type of energy is applied. Transport of products in a closed conduit, often under high pressure, is a highly ordered, highly structured undertaking. If nature indeed seeks increasing disorder, forces are continuously at work to disrupt this structured process. According to this way of thinking, a failed pipeline with all its product released into the atmosphere or into the ground, or equipment and components decaying and reverting to their original premanufactured states, represent the less ordered, more natural state of things. These quasi-scientific theories actually provide a useful way of looking at portions of our world. If we adopt a somewhat paranoid view of forces continuously acting to disrupt our creations, we become more vigilant. We take actions to offset those forces. We inject energy into a system to counteract the effects of entropy. In pipelines, this energy takes the forms of maintenance, inspection, and patrolling; that is, protecting the pipeline from the forces seeking to tear it apart. After years of experience in the pipeline industry, experts have established activities that are thought to directly offset specific threats to the pipeline. Such activities include patrolling, valve maintenance, corrosion control, and all of the other actions discussed in this text.
Many of these activities have been mandated by governmental regulations, but usually only after their value has been established by industry practice. Where the activity has not proven to be effective in addressing a threat, it has eventually been changed or eliminated. This evaluation process is ongoing. When new technology or techniques emerge, they are incorporated into operations protocols. The pipeline activity list is therefore being continuously refined. A basic premise of this book is that a risk assessment methodology should follow these same lines of reasoning. All activities that influence, favorably or unfavorably, the pipeline should be considered, even if comprehensive, historical data on the effectiveness of a particular activity are not yet available. Industry experience and operator intuition can and should be included in the risk assessment.
The scientific method

This text advocates the use of simplifications to better understand and manage the complex interactions of the many variables that make up pipeline risk. This approach may appear to some to be inconsistent with their notions about scientific process. Therefore, it may be useful to briefly review some pertinent concepts related to science, engineering, and even philosophy. The results of a good risk assessment are in fact the advancement of a theory. The theory is a description of the expected behavior, in risk terms, of a pipeline system over some future period of time. Ideally, the theory is formulated from a risk assessment technique that conforms with appropriate scientific
methodologies and has made appropriate use of information and logic to create a model that can reliably produce such theories. It is hoped that the theory is a fair representation of actual risks. To be judged a superior theory by the scientific community, it will use all available information in the most rigorous fashion and be consistent with all available evidence. To be judged a superior theory by most engineers, it will additionally have a level of rigor and sophistication commensurate with its predictive capability; that is, the cost of the assessment and its use will not exceed the benefits derived from its use. If the pipeline actually behaves as predicted, then everyone's confidence in the theory will grow, although results consistent with the predictions will never "prove" the theory. Much has been written about the generation and use of theories and the scientific method. One useful explanation of the scientific method is that it is the process by which scientists endeavor to construct a reliable and consistent representation of the world. In many common definitions, the methodology involves hypothesis generation and testing of that hypothesis:

1. Observe a phenomenon.
2. Hypothesize an explanation for the phenomenon.
3. Predict some measurable consequence that your hypothesis would have if it turned out to be true.
4. Test the predictions experimentally.
Much has also been written about the fallacy of believing that scientists use only a single method of discovery and that some special type of knowledge is thereby generated by this special method. For example, the classic methodology shown above would not help much with investigation of the nature of the cosmos. No single path to discovery exists in science, and no one clear-cut description can be given that accounts for all the ways in which scientific truth is pursued [56, 88]. Common definitions of the scientific method note aspects such as objectivity and acceptability of results from scientific study. Objectivity indicates the attempt to observe things as they are, without altering observations to make them consistent with some preconceived world view. From a risk perspective, we want our models to be objective and unbiased (see the discussion of bias later in this chapter). However, our data sources often cannot be taken at face value. Some interpretation and, hence, alteration is usually warranted, thereby introducing some subjectivity. Acceptability is judged in terms of the degree to which observations and experimentations can be reproduced. Of course, the ideal risk model will be accurate, but accuracy may only be verified after many years. Reproducibility is another characteristic that is sought and immediately verifiable. If multiple assessors examine the same situation, they should come to similar conclusions if our model is acceptable. The scientific method requires both inductive reasoning and deductive reasoning. Induction or inference is the process of drawing a conclusion about an object or event that has yet to be observed or occur on the basis of previous observations of similar objects or events. In both everyday reasoning and scientific reasoning regarding matters of fact, induction plays a central role.
In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, on the basis of data about a sample of that group or population; or we predict the occurrence of a future event on the basis of observations of similar past events; or we attribute a property to a nonobserved thing on the grounds that all observed things of
the same kind have that property; or we draw conclusions about causes of an illness based on observations of symptoms. Inductive inference permeates almost all fields, including education, psychology, physics, chemistry, biology, and sociology [56]. The role of induction is central to many of our processes of reasoning. At least one application of inductive reasoning in pipeline risk assessment is obvious: using past failures to predict future performance. A more narrow example of inductive reasoning for pipeline risk assessment would be: "Pipeline ABC is shallow and fails often, therefore all pipelines that are shallow fail more often." Deduction, on the other hand, reasons forward from established rules: "All shallow pipelines fail more frequently; pipeline ABC is shallow; therefore pipeline ABC fails more frequently." As an interesting aside to inductive reasoning, philosophers have struggled with the question of what justification we have to take for granted the common assumptions used with induction: that the future will follow the same patterns as the past; that a whole population will behave roughly like a randomly chosen sample; that the laws of nature governing causes and effects are uniform; or that we can presume that a sufficiently large number of observed objects gives us grounds to attribute something to another object we have not yet observed. In short, what is the justification for induction itself? Although it is tempting to try to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and its conclusions are, by and large, proven to be correct, this justification is itself an induction and therefore it raises the same problem: Nothing guarantees that simply because induction has worked in the past it will continue to work in the future.
The problem of induction raises important questions for the philosopher and logician whose concern it is to provide a basis for assessment of the correctness and the value of methods of reasoning [56, 88]. Beyond the reasoning foundations of the scientific method, there is another important characteristic of a scientific theory or hypothesis that differentiates it from, for example, an act of faith: A theory must be "falsifiable." This means that there must be some experiment or possible discovery that could prove the theory untrue. For example, Einstein's theory of relativity made predictions about the results of experiments. These experiments could have produced results that contradicted Einstein, so the theory was (and still is) falsifiable [56]. On the other hand, the existence of God is an example of a proposition that cannot be falsified by any known experiment. Risk assessment results, or "theories," will predict very rare events and hence not be falsifiable for many years. This implies an element of faith in accepting such results. Because most risk assessment practitioners are primarily interested in the immediate predictive power of their assessments, many of these issues can largely be left to the philosophers. However, it is useful to understand the implications and underpinnings of our beliefs.
Modeling

As previously noted, the scientific method is a process by which we create representations or models of our world. Science and engineering (as applied science) are and always have been concerned with creating models of how things work.
As it is used here, the term model refers to a set of rules that are used to describe a phenomenon. Models can range from very simple screening tools (i.e., "if A and not B, then risk = low") to enormously complex sets of algorithms involving hundreds of variables that employ concepts from expert systems, fuzzy logic, and other artificial intelligence constructs. Model construction enables us to better understand our physical world and hence to create better engineered systems. Engineers actively apply such models in order to build more robust systems. Model building and model application/evaluation are therefore the foundation of engineering. Similarly, risk assessment is the application of models to increase the understanding of risk, as discussed later in this chapter. In addition to the classical models of logic, techniques are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring "partial truths"-when a thing is neither completely true nor completely false-have been created based on fuzzy logic, originating in the 1960s at the University of California at Berkeley as a technique to model the uncertainty of natural language. Fuzzy logic, or fuzzy set theory, resembles human reasoning in the face of uncertainty and approximate information. Questions such as "To what degree is it safe?" can be addressed through these techniques. They have found engineering application in many control systems ranging from "smart" clothes dryers to automatic trains.
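The idea of a partial truth can be sketched with a simple membership function. The example below is purely illustrative: the variable (depth of cover) and the breakpoint values are hypothetical choices, not figures from this book. It shows how fuzzy logic grades a question such as "to what degree is this pipeline safely buried?" rather than forcing a yes/no answer:

```python
def membership_safe_depth(depth_m):
    """Degree (0 to 1) to which a burial depth is considered 'safe'.

    Illustrative trapezoidal membership function: depths at or below
    0.5 m are not safe at all, depths at or above 1.2 m are fully
    safe, and intermediate depths are partially safe. The breakpoints
    are hypothetical, chosen only to demonstrate the concept.
    """
    if depth_m <= 0.5:
        return 0.0
    if depth_m >= 1.2:
        return 1.0
    # Linear ramp between the two breakpoints: a "partial truth"
    return (depth_m - 0.5) / (1.2 - 0.5)

print(membership_safe_depth(0.3))              # fully "not safe": 0.0
print(membership_safe_depth(1.5))              # fully "safe": 1.0
print(round(membership_safe_depth(0.85), 2))   # partial truth: 0.5
```

A crisp (classical) rule would snap the middle case to true or false; the fuzzy version preserves the in-between information, which is exactly what makes such models useful for approximate natural-language questions.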
II. Basic concepts

Hazard

Underlying the definition of risk is the concept of hazard. The word hazard comes from al zahr, the Arabic word for "dice," which referred to an ancient game of chance [10]. We typically define a hazard as a characteristic or group of characteristics that provides the potential for a loss. Flammability and toxicity are examples of such characteristics. It is important to make the distinction between a hazard and a risk because we can change the risk without changing a hazard. When a person crosses a busy street, the hazard should be clear to that person. Loosely defined, it is the prospect that the person must place himself in the path of moving vehicles that can cause him great bodily harm were he to be struck by one or more of them. The hazard is therefore injury or fatality as a result of being struck by a moving vehicle. The risk, however, depends on how that person conducts himself in crossing the street. He most likely realizes that the risk is reduced if he crosses in a designated traffic-controlled area and takes extra precautions against vehicle operators who may not see him. He has not changed the hazard (he can still be struck by a vehicle), but his risk of injury or death is reduced by prudent actions. Were he to encase himself in an armored vehicle for the trip across the street, his risk would be reduced even further: he has reduced the consequences of the hazard. Several methodologies are available to identify hazards and threats in a formal and structured way. A hazard and operability (HAZOP) study is a technique in which a team of system experts is guided through a formal process in which imaginative scenarios are developed using specific guide words and analyzed by the team. Event-tree and fault-tree analyses are other tools. Such techniques underlie the identified threats to pipeline integrity that are presented in this book. Identified
threats can be generally grouped into two categories: time-dependent failure mechanisms and random failure mechanisms, as discussed later. The phrases threat assessment and hazard identification are sometimes used interchangeably in this book when they refer to identifying mechanisms that can lead to a pipeline failure with accompanying consequences.
Risk

Risk is most commonly defined as the probability of an event that causes a loss, together with the potential magnitude of that loss. By this definition, risk is increased when either the probability of the event increases or the magnitude of the potential loss (the consequences of the event) increases. Transportation of products by pipeline is a risk because there is some probability of the pipeline failing, releasing its contents, and causing damage (in addition to the potential loss of the product itself). The most commonly accepted definition of risk is often expressed as a mathematical relationship:

Risk = (event likelihood) × (event consequence)
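As a numerical sketch of this relationship, the expected annual loss for each failure event is its likelihood multiplied by its consequence, and the totals allow rough comparisons between alternatives. The event categories, frequencies, and cost figures below are hypothetical, invented only to illustrate the arithmetic:

```python
# Hypothetical illustration of Risk = likelihood x consequence.
# Frequencies (events per mile-year) and costs (dollars per event)
# are invented for this sketch, not data from any real pipeline.
events = {
    "third-party damage":   (2e-4, 1_500_000),
    "external corrosion":   (1e-4,   800_000),
    "incorrect operations": (5e-5,   400_000),
}

def expected_annual_loss(events, miles):
    """Sum of (likelihood x consequence) over all events, scaled by mileage."""
    return sum(freq * miles * cost for freq, cost in events.values())

# A 100-mile segment under these assumed rates:
print(f"${expected_annual_loss(events, 100):,.0f} per year")
```

Note how the definition behaves: doubling either the frequency of an event or its consequence doubles that event's contribution to the total risk, which is why mitigation can target either term.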
As such, a risk is often expressed in measurable quantities such as the expected frequency of fatalities, injuries, or economic loss. Monetary costs are often used as part of an overall expression of risk; however, using this metric requires the difficult task of assigning a dollar value to human life or environmental damage. Related risk terms include acceptable risk, tolerable risk, risk tolerance, and negligible risk, in which risk assessment and decision making meet. These are discussed in Chapters 14 and 15. A complete understanding of the risk requires that three questions be answered:

1. What can go wrong?
2. How likely is it?
3. What are the consequences?
By answering these questions, the risk is defined.
Failure

Answering the question of "What can go wrong?" begins with defining a pipeline failure. The unintentional release of pipeline contents is one definition. Loss of integrity is another way to characterize pipeline failure. However, a pipeline can fail in other ways that do not involve a loss of contents. A more general definition is failure to perform its intended function. In assessing the risk of service interruption, for example, a pipeline can fail by not meeting its delivery requirements (its intended purpose). This can occur through blockage, contamination, equipment failure, and so on, as discussed in Chapter 10. Further complicating the quest for a universal definition of failure is the fact that municipal pipeline systems like water and wastewater, and even natural gas distribution systems, tolerate some amount of leakage (unlike most transmission pipelines). Therefore, they might be considered to have failed only when the leakage becomes excessive by some measure. Except in the
case of service interruption discussed in Chapter 10, the general definition of failure in this book will be excessive leakage. The term leakage implies that the release of pipeline contents is unintentional. This lets our definition distinguish a failure from a venting, de-pressuring, blowdown, flaring, or other deliberate product release. Under this working definition, a failure will be clearer in some cases than others. For most hydrocarbon transmission pipelines, any leakage (beyond minor, molecular-level emissions) is excessive, so any leak means that the pipeline has failed. For municipal systems, determination of failure will not be as precise, for several reasons, such as the fact that some leakage is only excessive (that is, a pipe failure) after it has continued for a period of time. Failure occurs when the structure is subjected to stresses beyond its capabilities, resulting in its structural integrity being compromised. Internal pressure, soil overburden, extreme temperatures, external forces, and fatigue are examples of stresses that must be resisted by pipelines. Failure, or loss of strength leading to failure, can also occur through loss of material by corrosion or from mechanical damage such as scratches and gouges. The answers to what can go wrong must be comprehensive in order for a risk assessment to be complete. Every possible failure mode and initiating cause must be identified. Every threat to the pipeline, even the more remotely possible ones, must be identified. Chapters 3 through 6 detail possible pipeline failure mechanisms grouped into the four categories of Third Party, Corrosion, Design, and Incorrect Operations. These roughly correspond to the dominant failure modes that have been historically observed in pipelines.
Probability

By the commonly accepted definition of risk, it is apparent that probability is a critical aspect of all risk assessments. Some estimate of the probability of failure will be required in order to assess risks. This addresses the second question of the risk definition: "How likely is it?" Some think of probability as inextricably intertwined with statistics. That is, "real" probability estimates arise only from statistical analyses, relying solely on measured data or observed occurrences. However, this is only one of five definitions of probability offered in Ref. [88]. It is a compelling definition since it is rooted in aspects of the scientific process and the familiar inductive reasoning. However, it is almost always woefully incomplete as a stand-alone basis for probability estimates of complex systems. In reality, there are no systems beyond very simple, fixed-outcome-type systems that can be fully understood solely on the basis of past observations, which are the core of statistics. Almost any system of a complexity beyond a simple roll of a die, spin of a roulette wheel, or draw from a deck of cards will not be static enough or allow enough trials for statistical analysis to completely characterize its behavior. Statistics requires data samples: past observations from which inferences can be drawn. More interesting systems tend to have fewer available observations that are strictly representative of their current states. Data interpretation becomes more and more necessary to obtain meaningful estimates. As systems become more complex, more variable in nature, and where trial observations are less available, the historical frequency approach
will often provide answers that are highly inappropriate estimates of probability. Even in cases where past frequencies lead to more reliable estimates of future events for populations, those estimates are often only poor estimates of individual events. It is relatively easy to estimate the average adult height of a class of third graders, but more problematic when we try to predict the height of a specific student solely on the basis of averages. Similarly, just because the national average of pipeline failures might be 1 per 1,000 mile-years, the 1,000-mile-long ABC pipeline could be failure free for 50 years or more. The point is that observed past occurrences are rarely sufficient information on which to base probability estimates. Many other types of information can and should play an important role in determining a probability. Weather forecasting is a good example of how various sources of information come together to form the best models. The use of historical statistics (climatological data: what the weather has been like historically on this date) turns out to be a fairly decent forecasting tool (producing probability estimates), even in the absence of any meteorological interpretations. However, a forecast based solely on what has happened in previous years on certain dates would ignore knowledge of frontal movements, pressure zones, current conditions, and other information commonly available. The forecasts become much more accurate as meteorological information and expert judgment are used to adjust the base-case climatological forecasts [88]. Underlying most of the complete definitions of probability is the concept of degree of belief. A probability expresses a degree of belief. This is the most compelling interpretation of probability because it encompasses the statistical evidence as well as the interpretations and judgment.
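One common way to make "degree of belief plus sparse data" concrete is a Bayesian (gamma-Poisson conjugate) update of a failure rate. This is offered only as a sketch of the general idea, not a method prescribed by this book; the prior belief, its weight, and the observation counts below are all invented:

```python
# Blend a prior degree of belief about a failure rate with sparse
# observed failure counts via a gamma-Poisson conjugate update.
# All numbers here are hypothetical, for illustration only.

def updated_failure_rate(prior_rate, prior_weight_mile_years,
                         observed_failures, observed_mile_years):
    """Posterior mean failure rate (failures per mile-year).

    The prior belief is treated as if it were based on
    prior_weight_mile_years of hypothetical exposure: a heavier
    weight means the new observations move the belief less.
    """
    alpha = prior_rate * prior_weight_mile_years + observed_failures
    beta = prior_weight_mile_years + observed_mile_years
    return alpha / beta

# Prior belief: 1 failure per 1,000 mile-years, held with the weight
# of 2,000 mile-years of hypothetical experience. We then observe
# zero failures over 500 mile-years on a specific pipeline:
rate = updated_failure_rate(0.001, 2000, 0, 500)
print(f"{rate:.5f} failures per mile-year")  # belief shifts below 0.001
```

The result moves toward, but not all the way to, the observed frequency of zero, mirroring the text's point that statistics influence, but rarely fully determine, the belief.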
Ideally, the degree of belief could be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence. It is a key purpose of this book to provide a framework by which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline. (Note that the terms likelihood, probability, and chance are often used interchangeably in this text.)
Frequency, statistics, and probability

As used in this book, frequency usually refers to a count of past observations; statistics refers to the analysis of past observations; and the definition of probability is "degree of belief," which normally utilizes statistics but is rarely based entirely on them. A statistic is not a probability. Statistics are only numbers or methods of analyzing numbers. They are based on observations: past events. Statistics do not imply anything about future events until inductive reasoning is employed. Therefore, a probabilistic analysis is not only a statistical analysis. As previously noted, probability is a degree of belief. It is influenced by statistics (past observations), but only in rare cases do the statistics completely determine our belief. Such a rare case would be where we have exactly the same situation as that from which the past observations were made and we are making estimates for a population exactly like the one from which the past data arose: a very simple system. Historical failure frequencies, and the associated statistical values, are normally used in a risk assessment. Historical
data, however, are not generally available in sufficient quantity or quality for most event sequences. Furthermore, when data are available, they are normally rare-event data: one failure in many years of service on a specific pipeline, for instance. Extrapolating future failure probabilities from small amounts of information can lead to significant errors. However, historical data are very valuable when combined with all other information available to the evaluator. Another possible problem with using historical data is the assumption that conditions remain constant. This is rarely true, even for a particular pipeline. For example, when historical data show a high occurrence of corrosion-related leaks, the operator presumably takes appropriate action to reduce those leaks. His actions have changed the situation, and previous experience is now weaker evidence. History will foretell the future only when no offsetting actions are taken. Although they are important pieces of evidence, historical data alone are rarely sufficient to properly estimate failure probabilities.
Failure rates

A failure rate is simply a count of failures over time. It is usually first a frequency observation of how often the pipeline has failed over some previous period of time. A failure rate can also be a prediction of the number of failures to be expected in a given future time period. The failure rate is normally divided into rates of failure for each failure mechanism. The ways in which a pipeline can fail can be loosely categorized according to the behavior of the failure rate over time. When the failure rate tends to vary only with a changing environment, the underlying mechanism is usually random and should exhibit a constant failure rate as long as the environment stays constant. When the failure rate tends to increase with time and is logically linked with an aging effect, the underlying mechanism is time dependent. Some failure mechanisms and their respective categories are shown in Table 1.1. There is certainly an aspect of randomness in the mechanisms labeled time dependent, and the possibility of time dependency for some of the mechanisms labeled random. The labels point to the probability estimation protocol that seems to be most appropriate for the mechanism. The historical rate of failures on a particular pipeline system may tell an evaluator something about that system. Figure 1.1 is a graph that illustrates the well-known "bathtub" shape of failure rate changes over time. This general shape represents the failure rate for many manufactured components and systems over their lifetimes. Figure 1.2 is a theorized bathtub curve for pipelines.

Table 1.1 Failure rates vs. failure mechanisms

Failure mechanism | Nature of mechanism | Failure rate tendency
Corrosion | Time dependent | Increase
Cracking | Time dependent | Increase
Third-party damage | Random | Constant
Laminations/blistering | Random | Constant
Earth movements | Random (except for slow-acting instabilities) | Constant
Material degradation | Time dependent | Increase
Material defects | Random | Constant
Some pieces of equipment or installations have a high initial rate of failure. This first portion of the curve is called the burn-in phase or infant mortality phase. Here, defects that developed during initial manufacture of a component cause failures. As these defects are eliminated, the curve levels off into the second zone. This is the so-called constant failure zone and reflects the phase where random accidents maintain a fairly constant failure rate. Components that survive the burn-in phase tend to fail at a constant rate. Failure mechanisms that are more random in nature (third-party damages or most land movements, for example) tend to drive the failure rate in this part of the curve. Far into the life of the component, the failure rate may begin to increase. This is the zone where things begin to wear out as they reach the end of their useful service life. Where a time-dependent failure mechanism (corrosion or fatigue) is involved, its effects will be observed in this wear-out phase of the curve. An examination of the failure data of a particular system may suggest such a curve and theoretically tell the evaluator what stage the system is in and what can be expected. Failure rates are further discussed in Chapter 14.

Figure 1.1 Common failure rate curve (bathtub curve). [Plot of failure rate vs. time.]

Figure 1.2 Theorized failure rate curve for pipelines. [Plot of failures vs. time: the early and flat portions of the curve are driven by third-party damage, earth movements, and material defects; the rising portion is driven by corrosion and fatigue.]

Consequences

Inherent in any risk evaluation is a judgment of the potential consequences. This is the last of the three risk-defining questions: If something goes wrong, what are the consequences? Consequence implies a loss of some kind. Many of the aspects of potential losses are readily quantified. In the case of a major hydrocarbon pipeline accident (product escaping, perhaps causing an explosion and fire), we could quantify losses such as damaged buildings, vehicles, and other property; costs of service interruption; cost of the product lost; cost of the cleanup; and so on. Consequences are sometimes grouped into direct and indirect categories, where direct costs include:

- Property damages
- Damages to human health
- Environmental damages
- Loss of product
- Repair costs
- Cleanup and remediation costs

Indirect costs can include litigation, contract violations, customer dissatisfaction, political reactions, loss of market share, and government fines and penalties.
As a common denominator, the monetary value of losses is often used to quantify consequences. Such "monetizing" of consequences (assigning dollar values to damages) is straightforward for some damages. For others, such as loss of life and environmental impacts, it is more difficult to apply. Much has been written on the topic of the value of human life, and this is further discussed under absolute risk quantification (see Chapter 14). Placing a value on the consequences of an accident is a key component in society's determination of how much it is willing to spend to prevent that accident. This involves concepts of acceptable risk and is discussed in Chapter 15. The hazards that cause consequences and are created by the loss of integrity of an operating pipeline will include some or all of the following:
- Toxicity/asphyxiation threats from released products: contact toxicity or exclusion of air from confined spaces
- Contamination/pollution from released products: damage to flora, fauna, drinking waters, etc.
- Mechanical effects from the force of escaping product: erosion, washouts, projectiles, etc.
- Fire/ignition scenarios involving released products: pool fires, fireballs, jet fires, explosions
These hazards are fully discussed in following chapters, beginning with Chapter 7.
Risk assessment

Risk assessment is a measuring process, and a risk model is a measuring tool. Included in most quality and management concepts is the need for measurement. It has been said that "If you don't have a number, you don't have a fact; you have an opinion." While the notion of a "quantified opinion" adds shades of gray to an absolute statement like this, most would agree that quantifying something is at least the beginning of establishing its factual nature. It is always possible to quantify things we truly understand. When we find it difficult to express something in numbers, it is usually because we don't have a complete understanding of the concept. Risk assessment must measure both the probability and consequences of all of the potential events that comprise the hazard. Using the risk assessment, we can make decisions related to managing those risks. Note that risk is not a static quantity. Along the length of a pipeline, conditions are usually changing. As they change, the risk is also changing in terms of what can go wrong, the likelihood of something going wrong, and/or the potential consequences. Because conditions also change with time, risk is not constant even at a fixed location. When we perform a risk evaluation, we are actually taking a snapshot of the risk picture at a moment in time. There is no universally accepted method for measuring risk. The relative advantages and disadvantages of several approaches are discussed later in this chapter. It is important to recognize what a risk assessment can and cannot do, regardless of the methodology employed. The ability to predict pipeline failures (when and where they will occur) would obviously be a great advantage in reducing risk. Unfortunately, this cannot be done at present. Pipeline accidents are relatively rare and often involve the simultaneous failure of several safety provisions. This makes accurate failure predictions almost impossible. So, modern risk assessment methodologies provide a surrogate for such predictions. Assessment efforts by pipeline operating companies are normally not attempts to predict how many failures will occur or where the next failure will occur. Rather, efforts are designed to systematically and objectively capture everything that can be known about the pipeline and its environment, to put this information into a risk context, and then to use it to make better decisions. Risk assessments normally involve examining the factors or variables that combine to create the whole risk picture. A complete list of underlying risk factors (that is, those items that add to or subtract from the amount of risk) can be identified for a pipeline system. Including all of these items in an assessment, however, could create a somewhat unwieldy system, and one of questionable utility. Therefore, a list of critical risk indicators is usually selected based on their ability to provide useful risk signals without adding unnecessary complexities. Most common approaches advocate the use of a model to organize or enhance our understanding of the factors and their myriad possible interactions. A risk assessment therefore involves tradeoffs between the number of factors considered and the ease of use or cost of the assessment model. The important variables are widely recognized, but the number to be considered in the model (and the depth of that consideration) is a matter of choice for the model developers. The concept of the signal-to-noise ratio is pertinent here. In risk assessment, we are interested in measuring risk levels: the risk is the signal we are trying to detect. We are measuring in a very "noisy" environment, in which random fluctuations and high uncertainty tend to obscure the signal.
The signal-to-noise ratio concept tells us that the signal has to be of a certain strength before we can reliably pick it out of the background noise. Perhaps only very large differences in risk will be detectable with our risk models. Smaller differences might be indistinguishable from the background noise or uncertainty in our measurements. We must recognize the limitations of our measuring tool so that we are not wasting time chasing apparent signals that are, in fact, false positives or false negatives. Statistical quality control processes acknowledge this and employ statistical control charts to determine which measurements are worth investigating further. Some variables will intuitively contribute more to the signal, that is, the risk level. Changes in variables such as population density, type of product, and pipe stress level will very obviously change the possible consequences or failure probability. Others, such as flow rate and depth of cover, will also impact the risk, but perhaps not as dramatically. Still others, such as soil moisture, soil pH, and type of public education advertising, will certainly have some effect, but the magnitude of that effect is arguable. These latter are not arguable in the sense that they cannot contribute to a failure, because they certainly can in some imaginable scenarios, but in the sense that they may be more noise than signal, as far as a model can distinguish. That is, their contributions to risk may be below the sensitivity thresholds of the risk assessment.
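A minimal sketch of this screening idea, in the spirit of a statistical control chart: flag a segment's risk score only when it stands far enough above the background scatter to be treated as a real signal. The risk scores, the number of segments, and the two-standard-deviation threshold are all hypothetical choices for illustration, not values from this book:

```python
import statistics

def flag_outliers(scores, k=2.0):
    """Return indices of scores more than k standard deviations above the mean.

    Scores within k standard deviations are treated as indistinguishable
    from background noise and are not flagged.
    """
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)
    return [i for i, s in enumerate(scores) if s > mean + k * sd]

# Hypothetical relative risk scores for ten pipeline segments.
# Only segment index 7 stands far enough above the noise to flag.
scores = [42, 45, 41, 44, 43, 46, 42, 78, 44, 43]
print(flag_outliers(scores))  # -> [7]
```

The small differences among the other nine scores fall below the threshold, which is exactly the point: a measuring tool of limited sensitivity should not send the evaluator chasing them.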
Risk management

Risk management is a reaction to perceived risks. It is practiced every day by every individual. In operating a motor vehicle,
compensating for poor visibility by slowing down demonstrates a simple application of risk management. The driver knows that a change in the weather variable of visibility impacts the risk because her reaction times will be reduced. Reducing vehicle speed compensates for the reduced reaction time. While this example appears obvious, reaching this conclusion without some mental model of risk would be difficult. Risk management, for the purposes of this book, is the set of actions adopted to control risk. It entails a process of first assessing the level of risk associated with a facility and then preparing and executing an action plan to address current and future risks. The assimilation of complex data and the subsequent integration of sometimes competing risk reduction and profit goals are at the heart of any debate about how best to manage pipeline risks. Decision making is the core of risk management. Many challenging questions are implied in risk management: Where and when should resources be applied? How much urgency should be attached to any specific risk mitigation? Should only the worst segments be addressed first? Should resources be diverted from less risky segments in order to better mitigate risks in higher risk areas? How much will risk change if we do nothing differently? An appropriate risk mitigation strategy might involve risk reductions for very specific areas or, alternatively, improving the risk situation in general for long stretches of pipeline. Note also that a risk reduction project may impact many variables for a few segments or, alternatively, might impact a few variables but for many segments. Although the process of pipeline risk management does not have to be complex, it can incorporate some very sophisticated engineering and statistical concepts. A good risk assessment process leads the user directly into risk management by highlighting specific actions that can reduce risks.
Risk mitigation plans are often developed using "what-if" scenarios in the risk assessment. The intention is not to make risk disappear. If we make any risk disappear, we will likely have sacrificed some other aspect of our lifestyles that we probably don't want to give up. As an analogy, we could eliminate highway fatalities, but are we really ready to give up our cars? Risks can be minimized, however, at least to the extent that no unacceptable risks remain.
Experts

The term experts as it is used here refers to the people most knowledgeable in the subject matter. An expert is not restricted to a scientist or other technical person. The greatest expertise for a specific pipeline system probably lies with the workforce that has operated and maintained that system for many years. The experience and intuition of the entire workforce should be tapped as much as is practical when performing a risk assessment. Experts bring to the assessment a body of knowledge that goes beyond statistical data. Experts will discount some data that do not adequately represent the scenario being judged. Similarly, they will extrapolate from dissimilar situations that may have better data available.
The experience factor and the intuition of experts should not be discounted merely because they cannot be easily quantified. Normally little disagreement will exist among knowledgeable persons when risk contributors and risk reducers are evaluated. If differences arise that cannot be resolved, the risk evaluator can have each opinion quantified and then produce a compiled value to use in the assessment. When knowledge is incomplete and opinion, experience, intuition, and other unquantifiable resources are used, the assessment of risk becomes at least partially subjective. As it turns out, knowledge is always incomplete, and some aspect of judgment will always be needed for a complete assessment. Hence, subjectivity is found in any and all risk assessment methodologies. Humans tend to have biases, and experts are not immune from this. Knowledge of possible bias is the first step toward minimizing it. One source [88] identifies many types of bias and heuristic assumptions that are related to learning based on experiment or observation. These are shown in Table 1.2.
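The step of quantifying each opinion and producing a compiled value can be sketched with a weighted linear opinion pool, one common compilation scheme. This is an assumption-laden illustration, not a procedure prescribed by this book; the three probability estimates and the weights are invented:

```python
# Compile quantified expert opinions into one value via a weighted
# linear opinion pool. The estimates and weights are hypothetical.

def pooled_estimate(estimates, weights):
    """Weighted average of expert probability estimates.

    Weights reflect relative credence in each expert and must sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(p * w for p, w in zip(estimates, weights))

# Three experts judge the annual failure probability of a segment;
# here the senior field operator's opinion carries the largest weight.
estimates = [0.002, 0.004, 0.003]
weights = [0.5, 0.2, 0.3]
print(round(pooled_estimate(estimates, weights), 4))  # 0.0027
```

How the weights are chosen is itself a judgment call, which is one reason subjectivity remains present even in a formally compiled estimate.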
III. Uncertainty

As noted previously, risk assessment is a measuring process.
Like all measuring systems, measurement error and uncertainty arise as a result of the limitations of the measuring tool, the process of taking the measurement, and the person performing the measurement. Pipeline risk assessment is also the compilation of many other measurements (depth of cover, wall thickness, pipe-to-soil voltages, pressure, etc.) and hence absorbs all of those measurement uncertainties. It makes use of engineering and scientific models (stress formulas, vapor dispersion and thermal effects modeling, etc.) that also have accompanying errors and uncertainties. In the use of past failure rate information, additional uncertainty results from small sample sizes and comparability issues, as discussed previously. Further adding to the uncertainty is the fact that the thing being measured is constantly changing. It is perhaps useful to view a pipeline system, including its operating environment, as a complex entity with behavior similar to that seen in dynamic or chaotic systems. Here the term chaotic is being used in its scientific meaning (chaos theory) rather than implying a disorganized or random nature in the conventional sense of the word. In science, dynamic or chaotic systems refer to the many systems in our world that do not behave in strictly predictable or linear fashions. They are neither completely deterministic nor completely random, and things never happen in exactly the same way. A pipeline, with its infinite combinations of historical, environmental, structural, operational, and maintenance parameters, can be expected to behave as a so-called dynamic system, perhaps establishing patterns over time, but never repetition. As such, we recognize that, as one possible outcome of the process of pipelining, the risk of pipeline failure is sensitive to immeasurable or unknowable initial conditions.
In essence, we are trying to find differences in risk out of all the many sources of variation inherent in a system that places a man-made structure in a complex and ever-changing environment. Recall the earlier discussion on signal-to-noise considerations in risk assessment. In more practical terms, we can identify all of the threats to the pipeline. We understand the mechanisms underlying the
Table 1.2 Types of bias and heuristics

Availability heuristic: Judging likelihood by instances most easily or vividly recalled
Availability bias: Overemphasizing available or salient instances
Hindsight bias: Exaggerating in retrospect what was known in advance
Anchoring and adjustment heuristic: Adjusting an initial probability to a final value
Insufficient adjustment: Insufficiently modifying the initial value
Conjunctive distortion: Misjudging the probability of combined events relative to their individual values
Representativeness heuristic: Judging likelihood by similarity to some reference class
Representativeness bias: Overemphasizing similarities and neglecting other information; confusing "probability of A given B" with "probability of B given A"
Insensitivity to predictability: Exaggerating the predictive validity of some method or indicator
Base-rate neglect: Overlooking frequency information
Insensitivity to sample size: Overemphasizing the significance of limited data
Overconfidence bias: Greater confidence than warranted, with probabilities that are too extreme or distributions too narrow about the mean
Underconfidence bias: Less confidence than warranted in evidence with high weight but low strength
Personal bias: Intentional distortion of assessed probabilities to advance an assessor's self-interest
Organizational bias: Intentional distortion of assessed probabilities to advance a sponsor's interest in achieving an outcome

Source: Vick, Steven G., Degrees of Belief: Subjective Probability and Engineering Judgment, ASCE Press, Reston, VA, 2002.
threats. We know the options in mitigating the threats. But in knowing these things, we also must know the uncertainty involved: we cannot know and control enough of the details to entirely eliminate risk. At any point in time, thousands of forces are acting on a pipeline, the magnitudes of which are "unknown and unknowable." An operator will never have all of the relevant information needed to absolutely guarantee safe operations. There will always be an element of the unknown. Managers must control the "right" risks with limited resources because there will always be limits on the amount of time, manpower, or money that can be applied to a risk situation. Managers must weigh their decisions carefully in light of what is known and unknown. It is usually best to assume that

Uncertainty = increased risk
This impacts risk assessment in several ways. First, when information is unknown, it is conservatively assumed that unfavorable conditions exist. This not only encourages the frequent acquisition of information, but it also enhances the risk assessment's credibility, especially to outside observers. It also makes sense from an error analysis standpoint. Two possible errors can occur when assessing a condition: saying it is "good" when it is actually "bad," and saying it is "bad" when it is actually "good." If a condition is assumed to be good when it is actually bad, this error will probably not be discovered until some unfortunate event occurs. The operator will most likely be directing resources toward suspected deficiencies, not recognizing that an actual deficiency has been hidden by an optimistic evaluation. At the point of discovery by incident, the ability of the risk assessment to point out any other deficiency is highly suspect. An outside observer can say, "Look, this model is assuming that everything is rosy; how can we believe anything it says?" On the other hand, assuming a condition is bad when it is actually good merely has the effect of highlighting the condition until better information makes the "red flag" disappear. Consequences are far less with this
latter type of error. The only cost is the effort to get the correct information. So, this "guilty until proven innocent" approach is actually an incentive to reduce uncertainty.
Uncertainty also plays a role in inspection information. Many conditions continuously change over time. As inspection information gets older, its relevance to current conditions becomes more uncertain. All inspection data should therefore be assumed to deteriorate in usefulness and, hence, in risk-reducing ability. This is further discussed in Chapter 2.
The great promise of risk analysis is its use in decision support. However, this promise is not without its own element of risk: the misuse of risk analysis, perhaps through failure to consider uncertainty. This is discussed as a part of risk management in Chapter 15. As noted in Ref. [74]:

The primary problem with risk assessment is that the information on which decisions must be based is usually inadequate. Because the decisions cannot wait, the gaps in information must be bridged by inference and belief, and these cannot be evaluated in the same way as facts. Improving the quality and comprehensiveness of knowledge is by far the most effective way to improve risk assessment, but some limitations are inherent and unresolvable, and inferences will always be required.
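The "guilty until proven innocent" default philosophy can be sketched in a few lines of code. This is a minimal illustration only; the function names, the 0-to-10 "goodness" scale, and the depth-of-cover mapping are assumptions for the sketch, not part of this text's scoring protocols:

```python
# Hypothetical scoring helper: when a condition is unknown, assign the
# unfavorable (lowest) score so that uncertainty itself raises assessed risk.
def score_condition(measured_value, scale, default=0):
    """Return a 0-10 'goodness' score; missing data gets the worst score."""
    if measured_value is None:   # no information available
        return default           # conservative "assume bad" default
    return scale(measured_value)

# Illustrative example: depth of cover (inches) mapped to a 0-10 score.
depth_scale = lambda inches: min(10, inches / 6)

print(score_condition(48, depth_scale))    # known, favorable cover -> 8.0
print(score_condition(None, depth_scale))  # unknown -> 0 (worst case)
```

The incentive structure follows directly: the only way to remove the "red flag" produced by the default is to go collect the missing information.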
IV. Risk process-the general steps Having defined some basic terms and discussed general risk issues, we can now focus on the actual steps involved in risk management. The following are the recommended basic steps. These steps are all fully detailed in this text.
Step 1: Risk modeling The acquisition of a risk assessment process, usually in the form of a model, is a logical first step. A pipeline risk assessment model is a set of algorithms or rules that use available information and data relationships to measure levels of risk along a pipeline. An assessment model can be selected
from some commercially available existing models, customized from existing models, or created “from scratch” depending on your requirements. Multiple models can be run against the same set of data for comparisons and model evaluations.
Step 2: Data collection and preparation Data collection entails the gathering of everything that can be known about the pipeline, including all inspection data, original construction information, environmental conditions, operating and maintenance history, past failures, and so on. Data preparation is an exercise that results in data sets that are ready to be read into and used directly by the risk assessment model. A collection of tools enables users to smooth or enhance data points into zones of influence, categories, or bands to convert certain data sets into risk information. Data collection is discussed later in this chapter and data preparation issues are detailed in Chapter 8.
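The smoothing of point data into bands described above can be sketched as follows. The function name, the influence distance, and the pipe-to-soil readings are hypothetical, intended only to show the general shape of a data-preparation step:

```python
# Illustrative "zone of influence" preparation: a point measurement is taken
# as evidence about nearby pipe, out to an assumed influence distance.
def apply_zone_of_influence(readings, zone_ft):
    """readings: list of (station_ft, value) point measurements.
    Returns (start_ft, end_ft, value) bands a risk model can read directly."""
    return [(station - zone_ft, station + zone_ft, value)
            for station, value in readings]

# Hypothetical pipe-to-soil potential readings (volts) at two stations,
# each extended 250 ft in both directions.
bands = apply_zone_of_influence([(1000, -0.75), (5000, -0.92)], zone_ft=250)
print(bands)  # [(750, 1250, -0.75), (4750, 5250, -0.92)]
```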
Step 3: Segmentation Because risks are rarely constant along a pipeline, it is advantageous to segment the line into sections with constant risk characteristics (dynamic segmentation) or otherwise divide the pipeline into manageable pieces. Segmentation strategies and techniques are discussed in Chapters 2 and 8, respectively.
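Dynamic segmentation can be sketched as breaking the line at every station where any risk-relevant attribute changes, so that each resulting segment has constant characteristics. The attribute bands below (wall thickness, class location) are hypothetical:

```python
# Minimal dynamic-segmentation sketch: break the pipeline wherever any
# attribute band begins or ends, yielding constant-characteristic segments.
def dynamic_segments(line_start, line_end, bands):
    """bands: list of (start_ft, end_ft) spans of constant attribute value.
    Returns (start, end) pairs defining constant-attribute segments."""
    breaks = {line_start, line_end}
    for start, end in bands:
        breaks.update(b for b in (start, end) if line_start < b < line_end)
    pts = sorted(breaks)
    return list(zip(pts, pts[1:]))

# Hypothetical: wall thickness changes at 3000 ft, class location at 4500 ft.
segs = dynamic_segments(0, 10000, [(0, 3000), (3000, 10000), (0, 4500)])
print(segs)  # [(0, 3000), (3000, 4500), (4500, 10000)]
```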
Step 4: Assessing risks
Now the previously selected risk assessment model can be applied to each segment to get a unique risk "score" for that segment. These relative risk numbers can later be converted into absolute risk numbers. Working with results of risk assessments is discussed in Chapters 8, 14, and 15.

Step 5: Managing risks
Having performed a risk assessment for the segmented pipeline, we now face the critical step of managing the risks. In this area, the emphasis is on decision support: providing the tools needed to best optimize resource allocation. This process generally involves steps such as the following:
- Analyzing data (graphically and with tables and simple statistics)
- Calculating cumulative risks and trends
- Creating an overall risk management strategy
- Identifying mitigation projects
- Performing what-ifs
These are fully discussed in subsequent chapters, especially Chapter 15.
The first two steps in the overall process, (1) risk modeling and (2) data collection, are sometimes done in reverse order. An experienced risk modeler might begin with an examination of the types and quantity of data available and from that select a modeling approach. In light of this, the discussion of data collection issues precedes the model-selection discussion.

V. Data collection
Data and information are essential to good risk assessment. Appendix G shows some typical information-gathering efforts that are routinely performed by pipeline operators. After several years of operation, some large databases will have developed. Will these pieces of data predict pipeline failures? Only in extreme cases. Will they, in aggregate, tell us where risk hot spots are? Certainly. We obviously feel that all of this information is important: we collect it, base standards on it, base regulations on it, etc. It just needs to be placed into a risk context so that a picture of the risk emerges and better resource allocation decisions can be made based on that picture. The risk model transforms the data into risk knowledge.
Given the importance of data to risk assessment, it is important to have a clear understanding of the data collection process. There exists a discipline to measuring. Before the data-gathering effort is started, four questions should be addressed:
1. What will the data represent?
2. How will the values be obtained?
3. What sources of variation exist?
4. Why are the data being collected?

What will the data represent?
The data are the sum of our knowledge about the pipeline section: everything we know, think, and feel about it; when it was built, how it was built, how it is operated, how often it has failed or come close, what condition it is in now, what threats exist, what its surroundings are, and so on, all in great detail. Using the risk model, this compilation of information will be transformed into a representation of risk associated with that section. Inherent in the risk numbers will be a complete evaluation of the section's environment and operation.

How will the values be obtained?
Some rules for data acquisition will often be necessary. Issues requiring early standardization might include the following:
- Who will be performing the evaluations? The data can be obtained by a single evaluator or team of evaluators who will visit the pipeline operations offices personally to gather the information required to make the assessment. Alternatively, each portion of a pipeline system can be evaluated by those directly involved in its operations and maintenance. This becomes a self-evaluation in some respects. Each approach has advantages: in the former, it is easier to ensure consistency; in the latter, acceptance by the workforce might be greater.
- What manuals or procedures will be used? Steps should be taken to ensure consistency in the evaluations.
- How often will evaluations be repeated? Reevaluations should be scheduled periodically or the operators should be required to update the records periodically.
- Will "hard proof" or documentation be a requirement in all cases? Or can the evaluator accept "opinion" data in some circumstances? An evaluator will usually interview pipeline operators to help assign risk scores. Possibly the most common question asked by the evaluator will be "How do you know?" This should be asked in response to almost every assertion by the interviewee(s). Answers will determine the uncertainty around the item, and item scoring should reflect this uncertainty. This issue is discussed in many of the suggested scoring protocols in subsequent chapters.
- What defaults are to be used when no information is available? See the discussion on uncertainty in this chapter and Chapter 2.
What sources of variation exist?
Typical sources of variation in a pipeline risk assessment include:
- Differences in the pipeline section environments
- Differences in the pipeline section operation
- Differences in the amount of information available on the pipeline section
- Evaluator-to-evaluator variation in information gathering and interpretation
- Day-to-day variation in the way a single evaluator assigns scores

Every measurement has a level of uncertainty associated with it. To be precise, a measurement should express this uncertainty: 10 ft ± 1 in., 15.7°F ± 0.2°F. This uncertainty value represents some of the sources of variation previously listed: operator effects, instrument effects, day-to-day effects, etc. These effects are sometimes called measurement "noise," as noted previously in the signal-to-noise discussion. The variations that we are trying to measure, the relative pipeline risks, are hopefully much greater than the noise. If the noise level is too high relative to the variation of interest, or if the measurement is too insensitive to the variation of interest, the data become less meaningful. Reference [92] provides detailed statistical methods for determining the "usefulness" of the measurements.
If more than one evaluator is to be used, it is wise to quantify the variation that may exist between the evaluators. This is easily done by comparing scorings of the same pipeline section by different evaluators. The repeatability of a single evaluator can be judged by having her perform multiple scorings of the same section (this should be done without the evaluator's knowledge that she is repeating a previously performed evaluation). If these sources of variation are high, steps should be taken to reduce the variation. These steps may include:
- Improved documentation and procedures
- Evaluator training
- Refinement of the assessment technique to remove more subjectivity
- Changes in the information-gathering activity
- Use of only one evaluator

Why are the data being collected?
Clearly defining the purpose for collecting the data is important, but often overlooked. The purpose should tie back to the mission statement or objective of the risk management program. The underlying reason may vary depending on the user, but it is hoped that the common link will be the desire to create a better understanding of the pipeline and its risks in order to make improvements in the risk picture. Secondary reasons, or reasons embedded in the general purpose, may include:
- Identify relative risk hot spots
- Ensure regulatory compliance
- Set insurance rates
- Define acceptable risk levels
- Prioritize maintenance spending
- Build a resource allocation model
- Assign dollar values to pipeline systems
- Track pipelining activities

Having built a database for risk assessment purposes, some companies find much use for the information other than risk management. Since the information requirements for comprehensive risk assessment are so encompassing, these databases often become a central depository and the best reference source for all pipeline inquiries.
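The evaluator-to-evaluator comparison described earlier (scoring the same sections and comparing) can be sketched with a simple statistic. The scores below are hypothetical, and mean absolute difference is only one of several reasonable noise measures:

```python
# Sketch of quantifying evaluator-to-evaluator variation: two evaluators
# score the same sections; the mean absolute difference estimates the "noise."
def mean_abs_difference(scores_a, scores_b):
    return sum(abs(a - b) for a, b in zip(scores_a, scores_b)) / len(scores_a)

# Hypothetical risk scores for the same five sections from two evaluators.
eval_1 = [72, 65, 80, 58, 90]
eval_2 = [70, 69, 78, 60, 88]
noise = mean_abs_difference(eval_1, eval_2)
print(noise)  # 2.4 -- if this rivals real section-to-section differences,
              # the variation-reduction steps listed above are warranted
```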
VI. Conceptualizing a risk assessment approach

Checklist for design
As the first and arguably the most important step in risk management, an assessment of risk must be performed. Many decisions will be required in determining a risk assessment approach. While all decisions do not have to be made during initial model design, it is useful to have a rather complete list of issues available early in the process. This might help to avoid backtracking in later stages, which can result in significant nonproductive time and cost. For example, is the risk assessment model to be used only as a high-level screening tool, or might it ultimately be used as a stepping stone to a risk expressed in absolute terms? The earlier this determination is made, the more direct will be the path between the model's design and its intended use. The following is a partial list of considerations in the design of a risk assessment system. Most of these are discussed in subsequent paragraphs of this chapter.

1. Purpose: A short, overall mission statement including the objectives and intent of the risk assessment project.
2. Audience: Who will see and use the results of the risk assessment?
- General public or special interest groups
- Local, state, or federal regulators
- Company: all employees
- Company: management only
- Company: specific departments only
3. Uses: How will the results be used?
- Risk identification: the acquisition of knowledge, such as levels of integrity threats, failure consequences, and overall system risk, to allow for comparison of pipeline risk levels and evaluation of risk drivers
- Resource allocation: where and when to spend discretionary and/or mandated capital and/or maintenance funds
- Design or modify an operating discipline: create an O&M plan consistent with risk management concepts
- Regulatory compliance for risk assessment: if risk assessment itself is mandated
- Regulatory compliance for all required activities: flags are raised to indicate potential noncompliances
- Regulatory compliance waivers: where risk-based justifications provide the basis to request waivers of specific integrity assessment or maintenance activities
- Project approvals: cost/benefit calculations, project prioritizations and justifications
- Preventive maintenance schedules: creating multiyear integrity assessment plans or overall maintenance priorities and schedules
- Due diligence: investigation and evaluation of assets that might be acquired, leased, abandoned, or sold, from a risk perspective
- Liability reduction: reduce the number, frequency, and severity of failures, as well as the severity of failure consequences, to lower current operating and indirect liability-related costs
- Risk communications: present risk information to a number of different audiences with different interests and levels of technical ability
4. Users: This might overlap the audience group:
- Internal only
- Technical staff only: engineering, compliance, integrity, and information technology (IT) departments
- Managers: budget authorization, technical support, operations
- Planning department: facility expansion, acquisitions, and operations
- District-level supervisors: maintenance and operations
- Regulators: if regulators are shown the risk model or its results
- Other oversight (city council, investment partners, insurance carrier, etc.): if access is given in order to do what-ifs, etc.
- Public presentations: public hearings for proposed projects
5. Resources: Who and what is available to support the program?
- Data: type, format, and quality of existing data
- Software: current environments' suitability as residence for the risk model
- Hardware: current communications and data management systems
- Staff: availability of qualified people to design the model and populate it with required data
- Money: availability of funds to outsource data collection, database and model design, etc.
- Industry: access to best industry practices, standards, and knowledge
6. Design: choices in model features, format, and capabilities:
- Scope
- Failure causes considered: corrosion, sabotage, land movements, third party, human error, etc.
- Consequences considered: public safety only, environment, cost of service interruption, employee safety, etc.
- Facilities covered: pipe only, valves, fittings, pumps, tanks, loading facilities, compressor stations, etc.
- Scoring: define scoring protocols, establish point ranges (resolution)
- Direction of scale: higher points can indicate either more safety or more risk
- Point assignments: addition of points only, multiplications, conditionals (if X then Y), category weightings, independent variables, flat or multilevel structures
- Resolution issues: range of diameters, pressures, and products
- Defaults: philosophy of assigning values when little or no information is available
- Zone-of-influence distances: for what distance does a piece of data provide evidence on adjacent lengths of pipe
- Relative versus absolute: choice of presentation format and possibly model approach
- Reporting: types and frequency of output and presentations needed
General beliefs
In addition to basic assumptions regarding the risk assessment model, some philosophical beliefs underpin this entire book. It is useful to state these clearly at this point, so the reader may be alerted to any possible differences from her own beliefs. These are stated as beliefs rather than facts since they are arguable and others might disagree to some extent:

- Risk management techniques are fundamentally decision support tools. Pipeline operators in particular will find most valuable a process that takes available information and assimilates it into some clear, simple results. Actions can then be based directly on those simple results.
- We must go through some complexity in order to achieve "intelligent simplification." Many processes, originating from sometimes complex scientific principles, are "behind the scenes" in a good risk assessment system. These must be well documented and available, but need not interfere with the casual users of the methodology (everyone does not need to understand the engine in order to benefit from use of the vehicle). Engineers will normally seek a rational basis underpinning a system before they will accept it. Therefore, the basis must be well documented.
- In most cases, we are more interested in identifying locations where a potential failure mechanism is more aggressive than in predicting the length of time the mechanism must be active before failure occurs.
- A proper amount of modeling resolution is needed. The model should be able to quantify the benefit of any and all actions, from something as simple as "add 2 new ROW markers" all the way up to "reroute the entire pipeline."
- Many variables impact pipeline risk. Among all possible variables, choices are required that yield a balance between a comprehensive model (one that covers all of the important stuff) and an unwieldy model (one with too many relatively unimportant details).
- Users should be allowed to determine their own optimum level of complexity. Some will choose to capture much detailed information because they already have it available; others will want to get started with a very simple framework. However, by using the same overall risk assessment framework, results can still be compared, from very detailed approaches to overview approaches.
- Resource allocation (or reallocation) is normally the most effective way to practice risk management. Costs must therefore play a role in risk management. Because resources are finite, the optimum allocation of those scarce resources is sought.
- The methodology should "get smarter" as we ourselves learn. As more information becomes available or as new techniques come into favor, the methodology should be flexible enough to incorporate the new knowledge, whether that new knowledge is in the form of hard statistics, new beliefs, or better ways to combine risk variables.
- The methodology should be robust enough to apply to small as well as large facilities, allowing an operator to divide a large facility into subsets for comparisons within a system as well as between systems.
- The methodology should have the ability to distinguish between products handled by including critical fluid properties, which are derived from easy-to-obtain product information.
- The methodology should be easy to set up on paper or in an electronic spreadsheet and also easy to migrate to more robust database software environments for more rigorous applications.
- Methodology documentation should provide the user with simple steps, but also provide the background (sometimes complex) underlying the simple steps.
- Administrative elements of a risk management program are necessary to ensure continuity and consistency of the effort.

Note that if the reader concurs with these beliefs, the bulleted items above can form the foundation for a model design or an inquiry to service providers who offer pipeline risk assessment/risk management products and services.
Scope and limitations
Having made some preliminary decisions regarding the risk management program's scope and content, some documentation should be established. This should become a part of the overall control document set as discussed in Chapter 15. Because a pipeline risk assessment cannot be all things at once, a statement of the program's scope and limitations is usually appropriate. The scope should address exactly what portions of the pipeline system are included and what risks are being evaluated. The following statements are examples of scope and limitation statements that are common to many relative risk assessments.

This risk assessment covers all pipe and appurtenances that are a part of the ABC Pipeline Company from Station Alpha to Station Beta as shown on system maps. This assessment is complete and comprehensive in terms of its ability to capture all pertinent information and provide meaningful analyses of current risks. Since the objective of the risk assessment is to provide a useful tool to support decision making, and since it is intended to continuously evolve as new information is received, some aspects of academician-type risk assessment methodologies are intentionally omitted. These are not thought to produce limitations in the assessment for its intended use but rather are deviations from other possible risk assessment approaches. These deviations include the following:

Relative risks only: Absolute risk estimations are not included because of their highly uncertain nature and potential for misunderstanding. Due to the lack of historical pipeline failure data for various failure mechanisms, and incomplete incident data for a multitude of integrity threats and release impacts, a statistically valid database is not thought to be available to adequately quantify the probability of a failure (e.g., failures/km-year), the monetized consequences of a failure (e.g., dollars/failure), or the combined total risk of a failure (e.g., dollars/km-year) on a pipeline-specific basis.

Certain consequences: The focus of this assessment is on risks to public safety and the environment. Other consequences such as cost of business interruption and risks to company employees are not specifically quantified. However, most other consequences are thought to be proportional to the public safety and environmental threats, so the results will generally apply to most consequences.

Abnormal conditions: This risk assessment shows the relative risks along the pipeline during its operation. The focus is on abnormal conditions, specifically the unintentional releases of product. Risks from normal operations include those from employee vehicle and watercraft operation; other equipment operation; use of tools and cleaning and maintenance fluids; and other aspects that are considered to add normal and/or negligible additional risks to the public. Potential construction risks associated with new pipeline installations are also not considered.

Insensitivity to length: The pipeline risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings; that is, the scores are insensitive to length. If two pipeline segments, 100 and 2600 ft long, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length, because it has many more risk-producing points. Note: With regard to length sensitivity, a cumulative risk calculation adds the length aspect so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score.

Use of judgment: As with any risk assessment methodology, some subjectivity in the form of expert opinion and engineering judgment is required when "hard" data provide incomplete knowledge. This is a limitation of this assessment only in that it might be considered a limitation of all risk assessments. See also discussions in this section dealing with uncertainty.
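The length-sensitivity note above can be illustrated with a small calculation. Simple multiplication of a point score by segment length is an assumption for this sketch (one of several possible cumulative-risk formulations), as are the score values used:

```python
# Sketch of the cumulative-risk note: point risk scores are insensitive to
# length, so comparing whole segments weights each score by segment length.
def cumulative_risk(point_score, length_ft):
    return point_score * length_ft   # simple length-weighting assumption

short = cumulative_risk(point_score=50, length_ft=100)    # 100-ft segment
long_ = cumulative_risk(point_score=50, length_ft=2600)   # 2600-ft segment
print(long_ / short)  # 26.0 -- same per-point risk, 26x the overall risk
```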
Related to these statements is a list of assumptions that might underlie a risk assessment. An example of documented assumptions that overlap the above list to some extent is provided elsewhere.
Formal vs. informal risk management
Although formal pipeline risk management is growing in popularity among pipeline operators and is increasingly mandated by government regulations, it is important to note that risk management has always been practiced by these pipeline operators. Every time a decision is made to spend resources in a certain way, a risk management decision has been made. This informal approach to risk management has served us well, as evidenced by the very good safety record of pipelines versus other modes of transportation. An informal approach to risk management can have the further advantages of being simple, easy to comprehend and to communicate, and the product of expert engineering consensus built on solid experience.
However, an informal approach to risk management does not hold up well to close scrutiny, since the process is often poorly documented and not structured to ensure objectivity and consistency of decision making. Expanding public concerns over human safety and environmental protection have contributed significantly to raising the visibility of risk management. Although the pipeline safety record is good, the violent intensity and dramatic consequences of some accidents, an aging pipeline infrastructure, and the continued urbanization of formerly rural areas have increased perceived, if not actual, risks. Historical (informal) risk management therefore has these pluses and minuses:
Advantages
- Simple/intuitive
- Consensus is often sought
- Utilizes experience and engineering judgment
- Successful, based on the pipeline safety record

Reasons to change
- Consequences of mistakes are more serious
- Inefficiencies/subjectivities
- Lack of consistency and continuity in a changing workforce
- Need for better evaluation of complicated risk factors and their interactions

Developing a risk assessment model
In moving toward formal risk management, a structure and process for assessing risks is required. In this book, this structure and process is called the risk assessment model. A risk assessment model can take many forms, but the best ones will have several common characteristics, as discussed later in this chapter. They will also all generally originate from some basic techniques that underlie the final model: the building blocks. It is useful to become familiar with these building blocks of risk assessment because they form the foundation of most models and may be called on to tune a model from time to time. Scenarios, event trees, and fault trees are the core building blocks of any risk assessment. Even if the model author does not specifically reference such tools, models cannot be constructed without at least a mental process that parallels the use of these tools. They are not, however, risk assessments themselves. Rather, they are techniques and methodologies we use to crystallize and document our understanding of sequences that lead to failures. They form a basis for a risk model by forcing the logical identification of all risk variables. They should not be considered risk models themselves, in this author's opinion, because they do not pass the tests of a fully functional model, which are proposed later in this chapter.

Risk assessment building blocks
Eleven hazard evaluation procedures in common use by the chemical industry have been identified [9]. These are examples of the aforementioned building blocks that lay the foundation for a risk assessment model. Each of these tools has strengths and weaknesses, including costs of the evaluation and appropriateness to a situation:
- Checklists
- Safety review
- Relative ranking
- Preliminary hazard analysis
- "What-if" analysis
- HAZOP study
- FMEA analysis
- Fault-tree analysis
- Event-tree analysis
- Cause-and-consequence analysis
- Human-error analysis

Some of the more formal risk tools in common use by the pipeline industry include some of the above and others as discussed below.

HAZOP. A hazard and operability study is a team technique that examines all possible failure events and operability issues through the use of keywords prompting the team for input in a very structured format. Scenarios and potential consequences are identified, but likelihood is usually not quantified in a HAZOP. Strict discipline ensures that all possibilities are covered by the team. When done properly, the technique is very thorough but time consuming and costly in terms of person-hours expended. HAZOP and failure modes and effects analysis (FMEA) studies are especially useful tools when the risk assessments include complex facilities such as tank farms and pump/compressor stations.

Fault-tree/event-tree analysis. Tracing the sequence of events backward from a failure yields a fault tree. In an event tree, the process begins from an event and progresses forward through all possible subsequent events to determine possible failures. Probabilities can be assigned to each branch and then combined to arrive at complete event probabilities. An example of this application is discussed below and in Chapter 14.

Scenarios. "Most probable" or "most severe" pipeline failure scenarios are envisioned. Resulting damages are estimated, and mitigating responses and preventions are designed. This is often a modified fault-tree or event-tree analysis.
Scenario-based tools such as event trees and fault trees are particularly common because they underlie every other approach. They are always used, even if informally or as a thought process, to better understand the event sequences that produce failures and consequences. They are also extremely useful in examining specific situations. They can assist in incident investigation, valve siting, safety system installation, pipeline routing, and other common pipeline analyses. These are often highly focused applications. These techniques are further discussed in Chapter 14. Figure 1.3 is an example of a partial event-tree analysis. The event tree shows the probability of a certain failure-initiation event, possible next events with their likelihood, interactions of some possible mitigating events or features, and, finally, possible end consequences. This illustration demonstrates
[Figure 1.3 Event-tree analysis (partial). The tree begins with a third-party damage initiating event, excavation equipment contacting the line (1 strike per 2 years), and branches through damage states (large rupture, corrosion damage, reported or unreported leak, no damage), ignition versus no ignition, and, given ignition of a large rupture, the end consequences quantified in the text: detonation (1/600), high thermal damages (500/600), and torch fire only (99/600).]
how quickly the interrelationships make an event tree very large and complex, especially when all possible initiating events are considered. The probabilities associated with events will also normally be hard to determine. For example, Figure 1.3 suggests that for every 600 ignitions of product from a large rupture, one will result in a detonation, 500 will result in high thermal damages, and 99 will result in localized fire damage only. This only occurs after a 1/100 chance of ignition, which occurs after a 1/100 chance of a large rupture, and after a once-every-two-years line strike. In reality, these numbers will be difficult to estimate. Because the probabilities must then be combined (multiplied) along any path in this diagram, inaccuracies will build quickly.
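The multiplication of branch probabilities along an event-tree path can be sketched in a few lines. The frequencies and conditional probabilities below are the illustrative Figure 1.3 values quoted above, not measured data:

```python
# Combining branch probabilities along event-tree paths.
# Values follow the Figure 1.3 example discussed in the text.

strike_freq = 1 / 2        # line strikes per year (once every two years)
p_large_rupture = 1 / 100  # probability of a large rupture, given a strike
p_ignition = 1 / 100       # probability of ignition, given a large rupture

# Conditional outcomes, given ignition (per 600 ignitions)
p_detonation = 1 / 600
p_high_thermal = 500 / 600
p_torch_fire = 99 / 600

# Frequency (events per year) of an ignited large rupture, and of each
# end consequence along its path.
f_ignited = strike_freq * p_large_rupture * p_ignition
print(f"Detonation:   {f_ignited * p_detonation:.2e} per year")
print(f"High thermal: {f_ignited * p_high_thermal:.2e} per year")
print(f"Torch fire:   {f_ignited * p_torch_fire:.2e} per year")
```

Because four or five small probabilities are multiplied along each path, even modest errors in the individual estimates compound quickly, which is the caution raised in the text.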
Screening analyses. This is a quantitative or qualitative technique in which only the most critical variables are assessed. Certain combinations of variable assessments are judged to represent more risk than others. In this fashion, the process acts as a high-level screening tool to identify relatively risky portions of a system. It requires elements of subjectivity and judgment and should be carefully documented. While a screening analysis is a logical process to be used subsequent to almost any risk assessment, it is noted here as a possible stand-alone risk tool. As such, it takes on many characteristics of the more complete models to be described, especially the scoring-type or indexing method.
VII. Risk assessment issues In comparing risk assessment approaches, some issues arise that can lead to confusion. The following subsections discuss some of those issues.
Absolute vs. relative risks Risks can be expressed in absolute terms, for example, "number of fatalities per mile-year for permanent residents within one-half mile of pipeline. . . ." Also common is the use of relative risk measures, whereby hazards are prioritized such that the examiner can distinguish which portions of the
facilities pose more risk than others. The former is a frequency-based measure that estimates the probability of a specific type of failure consequence. The latter is a comparative measure of current risks, in terms of both failure likelihood and consequence.

A criticism of the relative scale is its inability to compare risks from dissimilar systems (pipelines versus highway transportation, for example) and its inability to directly provide failure predictions. However, the absolute scale often fails in relying heavily on historical point estimates, particularly for rare events that are extremely difficult to quantify, and in the unwieldy numbers that often generate a negative reaction from the public. The absolute scale also often implies a precision that is simply not available to any risk assessment method. So, the "absolute scale" offers the benefit of comparability with other types of risks, while the "relative scale" offers the advantages of ease of use and customizability to the specific risk being studied.

In practical applications and for purposes of communication, this is not really an important issue. The two scales are not mutually exclusive. Either scale can be readily converted to the other if circumstances so warrant. A relative risk scale is converted to an absolute scale by correlating relative risk scores with appropriate historical failure rates or other risk estimates expressed in absolute terms. In other words, the relative scale is calibrated with some absolute numbers. The absolute scale is converted to more manageable and understandable (nontechnical) relative scales by simple mathematical relationships.

A possible misunderstanding underlying this issue is the common misconception that a precise-looking number, expressed in scientific notation, is more accurate than a simple number. In reality, either method should use the same available data pool and be forced to make the same number of assumptions when data are not available.
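The calibration described above can be illustrated with a minimal sketch. The calibration points below are hypothetical: a few relative scores are paired with historical failure rates, a log-linear fit is made, and the fit then converts any relative score into an approximate absolute failure frequency:

```python
import math

# Hypothetical calibration points: (relative risk score, historical
# failure rate in failures per mile-year). Real calibration would use
# an operator's own incident history or industry statistics.
calibration = [(20, 1e-5), (50, 1e-4), (80, 1e-3)]

# Least-squares fit of log10(failure rate) as a linear function of score.
n = len(calibration)
xs = [s for s, _ in calibration]
ys = [math.log10(r) for _, r in calibration]
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

def absolute_rate(score):
    """Estimated absolute failure rate (per mile-year) for a relative score."""
    return 10 ** (intercept + slope * score)

# Estimate for a score falling between the calibration points.
print(f"{absolute_rate(65):.2e} failures per mile-year")
```

The log-linear form is only one plausible choice of calibration curve; the point is that a handful of absolute anchor points is enough to express relative scores in absolute terms.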
The use of subjective judgment is necessary in any risk assessment, regardless of how results are presented. Any good risk evaluation will require the generation of scenarios to represent all possible event sequences that lead to possible damage states (consequences). Each event in each sequence is assigned a probability, either in absolute terms or, in the case of a relative risk application, relative to other probabilities. In either case, the probability assigned should be based on all available information. For a relative model, these event trees are examined, and critical variables with their relative weightings (based on probabilities) are extracted. In a risk assessment expressing results in absolute numbers, the probabilities must be preserved in order to produce the absolute terms. Combining the advantages of relative and absolute approaches is discussed in Chapter 14.
Quantitative vs. qualitative models It is sometimes difficult to make distinctions between qualitative and quantitative analyses. Most techniques use numbers, which would imply a quantitative analysis, but sometimes the numbers are only representations of qualitative beliefs. For example, a qualitative analysis might use scores of 1, 2, and 3 in place of the labels "low," "medium," and "high." To some, these are insufficient grounds to call the analysis quantitative.
The terms quantitative and qualitative are often used to distinguish the amount of historical failure-related data analyzed in the model and the amount of mathematical calculation employed in arriving at a risk answer. A model that exclusively uses historical frequency data is sometimes referred to as quantitative, whereas a model employing relative scales, even if numbers are later assigned, is referred to as qualitative or semiquantitative. The danger in such labeling is that it implies a level of accuracy that may not exist. In reality, the labels often tell more about the level of modeling effort, cost, and data sources than about the accuracy of the results.
Subjectivity vs. objectivity In theory, a purely objective model will strictly adhere to scientific practice and will have no opinion data. A purely subjective model implies complete reliance on expert opinion. In practice, no pipeline risk model fully adheres to either. Objectivity cannot be purely maintained while dealing with the real-world situation of missing data and variables that are highly confounded. On the other hand, subjective models certainly use objective data to form or support judgments.
Use of unquantifiable evidence In any of the many difficult-to-quantify aspects of risk, some would argue that nonstatistical analyses are potentially damaging. Although this danger of misunderstanding the role of a factor always exists, there is similarly the more immediate danger of an incomplete analysis by omission of a factor. For example, public education is seen by most pipeline professionals to be a very important aspect of reducing the number of third-party damages and improving leak reporting and emergency response. However, quantifying this level of importance and correlating it with the many varied approaches to public education is quite difficult. A concerted effort to study these data is needed to determine how they affect risk. In the absence of such a study, most would agree that a company with a strong public education program will achieve some level of risk reduction over a company without one. A risk model should reflect this belief, even if it cannot be precisely quantified. Otherwise, the benefits of efforts such as public education would not be supported by risk assessment results. In summary, all methodologies have access to the same databases (at least when publicly available) and all must address what to do when data are insufficient to generate meaningful statistical input for a model. Data are not available for most of the relevant risk variables of pipelines. Including risk variables that have insufficient data requires an element of "qualitative" evaluation. The only alternative is to ignore the variable, resulting in a model that does not consider variables that intuitively seem important to the risk picture. Therefore, all models that attempt to represent all risk aspects must incorporate qualitative evaluations.
VIII. Choosing a risk assessment technique Several questions may direct the pipeline operator's choice of risk assessment technique:
What data do you have?
What is your confidence in the predictive value of the data?
What resources are available in terms of money, person-hours, and time?
What benefits do you expect to accrue in terms of cost savings, reduced regulatory burdens, improved public support, and operational efficiency?

These questions should be kept in mind when selecting the specific risk assessment methodology, as discussed further in Chapter 2. Regardless of the specific approach, some properties of the ideal risk assessment tool include the following:
Appropriate costs. The value or benefits derived from the risk assessment process should clearly outweigh the costs of setting up, implementing, and maintaining the program.

Ability to learn. Because risk is not constant over the length of a pipeline or over a period of time, the model must be able to "learn" as information changes. This means that new data should be easy to incorporate into the model.

Signal-to-noise ratio. Because the model is in effect a measurement tool, it must have a suitable signal-to-noise ratio, as discussed previously. This means that the "noise," the amount of uncertainty in the measurement (resulting from numerous causes), must be low enough that the "signal," the risk value of interest, can be read. This is similar to the accuracy of the model, but involves additional considerations surrounding the high level of uncertainty associated with risk management.
Comparisons can be made against fixed or floating "standards" or benchmarks.

Finally, a view to the next step, risk management, should be taken. A good risk assessment technique will allow a smooth transition into the management of the observed risks. This means that provisions for resource allocation modeling and for the evolution of the overall risk model must be made. The ideal risk assessment will readily highlight specific deficiencies and point to appropriate mitigation possibilities.

We noted previously that some risk assessment techniques are more appropriately considered to be "building blocks" while others are complete models. This distinction has to do with the risk assessment's ability not only to measure risks, but also to directly support risk management. As used here, a complete model is one that will measure the risks at all points along a pipeline, readily show the accompanying variables driving the risks, and thereby directly indicate specific system vulnerabilities and consequences. A one-time risk analysis (a study to determine the risk level) may not need a complete model. For instance, an event-tree analysis can be used to estimate overall risk levels or risks from a specific failure mode. However, the risk assessment should not be considered a complete model unless it is packaged in such a way that it efficiently provides input for risk management.
Four tests Four informal tests are proposed here by which the difference between a building block and a complete model can be seen. The proposition is that any complete risk assessment model should be able to pass the following four tests:
Model performance tests (See also Chapter 8 for discussion of model sensitivity analyses.) In examining a proposed risk assessment effort, it may be wise to evaluate the risk assessment model to ensure the following:

All failure modes are considered
All risk elements are considered and the most critical ones included
Failure modes are considered independently as well as in aggregate
All available information is being appropriately utilized
Provisions exist for regular updates of information, including new types of data
Consequence factors are separable from probability factors
Weightings, or other methods to recognize the relative importance of factors, are established
The rationale behind weightings is well documented and consistent
A sensitivity analysis has been performed
The model reacts appropriately to failures of any type
Risk elements are combined appropriately ("and" versus "or" combinations)
Steps are taken to ensure consistency of evaluation
Risk assessment results form a reasonable statistical distribution (outliers?)
There is adequate discrimination in the measured results (signal-to-noise ratio)
1. The "I didn't know that!" test
2. The "Why is that?" test
3. The "point to a map" test
4. The "What about ___?" test
Again, these tests are very informal but illustrate some key characteristics that should be present in any methodology that purports to be a full risk assessment model. In keeping with the informality, the descriptions below are written in the familiar, instructional voice used as if speaking directly to the operator of a pipeline.
The "I didn't know that!" test (new knowledge) The risk model should be able to do more than you can do in your head or even with an informal gathering of your experts. Most humans can simultaneously consider only a handful of factors in making a decision. The real-world situation might be influenced by dozens of variables simultaneously. Your model should be able to simultaneously consider dozens or even hundreds of pieces of information. The model should tell you things you did not already know. Some scenario-based techniques tend only to document what is already obvious. If there aren't some surprises in the assessment results, you should be suspicious of the model's completeness. It is difficult to believe that simultaneous consideration of many variables will not generate some combinations in certain locations that were not otherwise intuitively obvious.
Naturally, when given a surprise, you should then be skeptical and ask to be convinced. That helps to validate your model and leads to the next points.
The "Why is that?" test (drill down) So let's say that the new knowledge proposed by your model is that your pipeline XYZ in Barker County is high risk. You say, "What?! Why is that high risk?" You should be initially skeptical, by the way, as noted before. Well, the model should be able to tell you its reasons; perhaps coincident occurrences of population density, a vulnerable aquifer, and state park lands, coupled with 5 years since a close interval survey, no ILI, high stress levels, and a questionable coating condition, make for a riskier than normal situation. Your response should be to say, "Well, okay, looking at all that, it makes sense. . . ." In other words, you should be able to interrogate the model and receive acceptable answers to your challenges. If an operator's intuition is not consistent with model outputs, then one or the other is in error. Resolution of the discrepancy will often improve the capabilities of both operator and model.
The "point to a map" test (location specific and complete) This test is often overlooked. Basically, it means that you should be able to pull out a map of your system, put your finger on any point along the pipeline, and determine the risk at that point, either relative or absolute. Furthermore, you should be able to determine specifically the corrosion risk, the third-party risk, the types of receptors, the spill volume, etc., and quickly determine the prime drivers of the apparently higher risk. This may seem an obvious thing for a risk assessment to do, but many recommended techniques cannot do this. Some have predetermined their risk areas, so they know little about other areas (and one must wonder about this predetermination). Others do not retain information specific to a given location. Others do not compile risks into summary judgments. The risk information should be a characteristic of the pipeline at all points, just like the pipe specification.
The "What about ___?" test (a measure of completeness) Someone should be able to query the model on any aspect of risk, such as "What about subsidence risk? What about stress corrosion cracking?" Make sure all probability issues are addressed. All known failure modes should be considered, even if they are very rare or have never been observed for your particular system. You never know when you will be comparing your system against one that has that failure mode or will be asked to perform due diligence on a possible pipeline acquisition.
IX. Quality and risk management In many management and industry circles, quality is a popular concept, extending far beyond the most common uses of the term. As a management concept, it implies a way of thinking and a way of doing business. It is widely believed that attention to quality concepts is a requirement to remain in business in today's competitive world markets.

Risk management can be thought of as a method to improve quality. In its best application, it goes beyond basic safety issues to address the cost control, planning, and customer satisfaction aspects of quality. For those who link quality with competitiveness and survival in the business world, there is an immediate connection to risk management. The prospect of a company failure due to poor cost control or poor decisions is a risk that can also be managed.

Quality is difficult to define precisely. While several different definitions are possible, they typically refer to concepts such as (1) fitness for use, (2) consistency with specifications, and (3) freedom from defects, all with regard to the product or service that the company is producing. Central to many quality concepts is the notion of reducing variation. This is the discipline that may ultimately be the main "secret" of the most successful companies. Variation is normally evidence of waste. Performing tasks optimally usually means little variation is seen.

All definitions incorporate (directly or by inference) some reference to customers. Broadly defined, a customer is anyone to whom a company provides a product, service, or information. Under this definition, almost any exchange or relationship involves a customer. The customer drives the relationship because he specifies what product, service, or information he wants and what he is willing to pay for it. In the pipeline business, typical customers include those who rely on product movements for raw materials, such as refineries; those who are end users of products delivered, such as residential gas users; and those who are affected by pipelining activities, such as adjacent landowners.
As a whole, customers ask for adequate quantities of products to be delivered:

With no service interruptions (reliability)
With no safety incidents
At lowest cost

This is quite a broad-brush approach. To be more accurate, the qualifiers of "no" and "lowest" in the preceding list must be defined. Obviously, trade-offs are involved; improved safety and reliability may increase costs. Different customers will place differing values on these requirements, as was previously discussed in terms of acceptable risk levels.

For our purposes, we can view regulatory agencies as representing the public, since regulations exist to serve the public interest. The public includes several customer groups with sometimes conflicting needs. Those vitally concerned with public safety versus those vitally concerned with costs, for instance, are occasionally at odds with one another. When a regulatory agency mandates a pipeline safety or maintenance program, this can be viewed as a customer requirement originating from that sector of the public that is most concerned with the safety of pipelines. When increased regulation leads to higher costs, the segment of the public more concerned with costs will take notice.
As a fundamental part of the quality process, we must make a distinction between types of work performed in the name of the customer:

Value-added work. These are work activities that directly add value, as defined by the customer, to the product or service. By moving a product from point A to point B, value has been added to that product because it is more valuable (to the customer) at point B than it was at point A.

Necessary work. These are work activities that are not value added, but are necessary in order to complete the value-added work. Protecting the pipeline from corrosion does not directly move the product, but it is necessary in order to ensure that product movements continue uninterrupted.

Waste. This is the popular name for a category that includes all activities performed that are unnecessary. Repeating a task because it was done improperly the first time is called rework and is included in this category. Tasks that are done routinely, but that do not directly or indirectly support customer needs, are considered waste.

Profitability is linked to reducing the waste category while optimizing the value-added and necessary work categories. A risk management program is an integral part of this, as will be seen. The simplified process for quality management goes something like this: The proper work (value added and necessary) is identified by studying customer needs and creating ideal processes to satisfy those needs in the most efficient manner. Once the proper work is identified, the processes that make up that work should be clearly defined and measured. Deviations from the ideal processes are waste. When the company can produce exactly what the customer wants without any variation in that production, the company has gained control over waste in its processes. From there, the processes can be further improved to reduce costs and increase output, all the while measuring to ensure that variation does not return.
This is exactly what risk management should do: identify needs, analyze cost versus benefit of various choices, establish an operating discipline, measure all processes, and continuously improve all aspects of the operation. Because pipeline capacity is set by system hydraulics, line size, regulated operating limits, and other fixed constraints, gains in pipeline efficiency are made primarily by reducing the incremental costs associated with moving the products. Costs are reduced by spending in ways that reap the largest benefits, namely, increasing the reliability of the pipeline. Spending to prevent losses and service interruptions is an integral part of optimizing pipeline costs.

The pipeline risk items considered in this book are all either existing conditions or work processes. The conditions are characteristics of the pipeline environment and are not normally changeable. The work processes, however, are changeable and should be directly linked to the conditions. The purpose of every work process, every activity, even every individual motion is to meet customer requirements. A risk management program should assess each activity in terms of its benefit from a risk perspective. Because every activity and process costs something, it must generate some benefit; otherwise it is waste. Measuring the benefit, including the benefit of loss prevention, allows spending to be prioritized.
Rather than having a broad pipeline operating program to allow for all contingencies, risk management allows the direction of more energy to the areas that need it most. Pipelining activities can be fine-tuned to the specific needs of the various pipeline sections. Time and money should be spent in the areas where the return (the benefit) is greatest. Again, measurement systems are required to track progress, for without measurements, progress is only an opinion. The risk evaluation program described here provides a tool to improve the overall quality of a pipeline operation. It does not necessarily suggest any new techniques; instead it introduces a discipline to evaluate all pipeline activities and to score them in terms of their benefit to customer needs. When an extra dollar is to be spent, the risk evaluation program points to where that dollar will do the most good. Dollars presently being spent on one activity may produce more value to the customer if spent another way. The risk evaluation program points this out and measures results.
X. Reliability Reliability is often defined as the probability that equipment, machinery, or systems will perform their required functions satisfactorily under specific conditions within a certain time period. This can also mean the duration or probability of failure-free performance under the stated conditions. As is apparent from this definition, reliability concepts are identical to risk concepts in many regards. In fact, sometimes the only differences are the scenarios of interest. Where risk often focuses on scenarios involving fatality, injury, property damage, etc., reliability focuses on scenarios that lead to equipment unavailability, repair costs, etc. [45]. Risk analysis is often more of a diagnostic tool, helping us to better understand and make decisions about an overall existing system. Reliability techniques are more naturally applied to new structures or the performance of specific components.

Many of the same techniques are used, including FMEA, root cause analyses, and event-tree/fault-tree analyses. This is logical, since many of the same issues underlie risk and reliability: failure rates, failure modes, mitigating or offsetting actions, etc. Common reliability measurement and control efforts involve issues of (1) equipment performance, as measured by availability, uptime, MTTF (mean time to failure), MTBF (mean time between failures), and Weibull analyses; (2) reliability as a component of operating or ownership costs, sometimes measured by life-cycle cost; and (3) reliability analysis techniques applied to maintenance optimization, including reliability-centered maintenance (RCM), predictive preventive maintenance (PPM), and root cause analysis. Many of these are, at least partially, risk analysis techniques, the results of which can feed directly into a risk assessment model. This text does not delve deeply into specialized reliability engineering concepts.
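A minimal sketch of the equipment-performance measures mentioned above, computed from a hypothetical failure and repair history for a single unit (the hours are illustrative, not data from any pipeline system):

```python
# Basic equipment-reliability measures from a hypothetical operating
# history: hours of operation between successive failures, and hours
# needed to restore service after each failure.

uptimes = [1800.0, 2200.0, 1500.0, 2500.0]  # operating hours between failures
repairs = [20.0, 35.0, 15.0, 30.0]          # repair hours per failure

mtbf = sum(uptimes) / len(uptimes)   # mean time between failures
mttr = sum(repairs) / len(repairs)   # mean time to repair
availability = mtbf / (mtbf + mttr)  # long-run fraction of time in service

print(f"MTBF: {mtbf:.0f} h, MTTR: {mttr:.1f} h, availability: {availability:.3f}")
```

Measures like these, tracked over time, are the kind of reliability inputs that can feed directly into a risk assessment model.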
Chapter 10, Service Interruption Risk, discusses issues of pipeline availability and delivery failures.
Risk Assessment Process
I. Using this manual
To get answers quickly
Formal risk management can become a useful tool for pipeline operators, managers, and others interested in pipeline safety and/or efficiency. Benefits come not only from an enhanced ability to improve safety and reduce risk; experience has shown that the risk assessment process draws together so much useful information into a central location that it becomes a constant reference point and information repository for decision making across the organization. The purpose of the pipeline risk assessment method described in Chapters 3 through 7 of this book is to evaluate a pipeline's risk exposure to the public and to identify ways to effectively manage that risk. Chapters 8 through 14 discuss special risk assessment considerations, including special pipeline facilities and the use of absolute risk results. Chapter 15 describes the transition from risk assessment to risk management.
While the topic of pipeline risk management does fill the pages of this book, the process does not have to be highly complex or expensive. Portions of this book can be used as a "cookbook" to quickly implement a risk management system or simply to provide ideas to pipeline evaluators. A fairly detailed pipeline risk assessment system can be set up and functioning in a relatively short time by just one evaluator. A reader could adopt the risk assessment framework described in Chapters 3 through 7 to begin assessing risk immediately. An overview of the base model with suggested weightings of all risk variables is shown in Risk Assessment at a Glance, with each variable fully described in later chapters. A risk evaluator with little or no pipeline operating experience could most certainly adopt this approach, at least initially. Similarly, an evaluator who wants to assess pipelines covering a wide range of services, environments, and operators may wish
to use this general approach, since that was the original purpose of the basic framework. By using simple computer tools such as a spreadsheet or desktop database to hold risk data, and then establishing some administrative processes around the maintenance and use of the information, the quick-start applicator now has a system to support risk management. Experienced risk managers may balk at such a simplification of an often complex and time-consuming process. However, the point is that the process and underlying ideas are straightforward, and rapid establishment of a very useful decision support system is certainly possible. It may not be of sufficient rigor for a very detailed assessment, but the user will nonetheless have a more formal structure from which to better ensure consistency of decisions and completeness of information.
For pipeline operators Whereas the approach described above is a way to get started quickly, this tool becomes even more powerful if the user customizes it, perhaps adding new dimensions to the process to better suit his or her particular needs. As with any engineered system (the risk assessment system described herein employs many engineering principles), a degree of due diligence is also warranted. The experienced pipeline operator should challenge the example point schedules: Do they match your operating experience? Read the reasoning behind the schedules: Do you agree with that reasoning? Invite (or require) input from employees at all levels. Most pipeline operators have a wealth of practical expertise that can be used to fine-tune this tool to their unique operating environment. Although customizing can create some new issues, problems can be avoided for the most part by carefully planning and controlling the process of model setup and maintenance. The point here again is to build a useful tool, one that is regularly used to aid in everyday business and operating decision making, and one that is accepted and used throughout the organization. Refer also to Chapter 1 for ideas on evaluating the measuring capability of the tool.
II. Beginning risk management Chapter 1 suggests the following as basic steps in risk management:
Step 1: Acquire a risk assessment model A pipeline risk assessment model is a set of algorithms or "rules" that use available information and data relationships to measure levels of risk along a pipeline. A risk assessment model can be selected from some commercially available models, customized from existing models, or created "from scratch" depending on requirements.
Step 2: Collect and prepare data Data preparation comprises the processes that result in data sets that are ready to be read into and used by the risk assessment model.
Step 3: Devise and implement a segmentation strategy Because risks are rarely constant along a pipeline, it is advantageous to first segment the line into sections with constant risk characteristics (dynamic segmentation) or otherwise divide the pipeline into manageable pieces.
Step 4: Assess the risks After a risk model has been selected and the data have been prepared, risks along the pipeline route can be assessed. This is the process of applying the algorithm, the rules, to the collected data. Each pipeline segment will get a unique risk score that reflects its current condition, environment, and the operating/maintenance activities. These relative risk numbers can later be converted into absolute risk numbers. Risk assessment will need to be repeated periodically to capture changing conditions.
Step 5: Manage the risks This step consists of determining what actions are appropriate given the risk assessment results. This is discussed in Chapter 15. Model design and data collection are often the most costly parts of the process. These steps can be time consuming not only in the hands-on aspects, but also in obtaining the necessary consensus from all key players. The initial consensus often makes the difference between a widely accepted and a partially resisted system. Time and resources spent in these steps can be viewed as initial investments in a successful risk management tool. Program management and maintenance are normally small relative to initial setup costs.
III. Risk assessment models What is a model? Armed with an understanding of the scenarios that compose the hazard (see the Chapter 1 discussion of risk model building blocks), a risk assessment model can be constructed. The model is the set of rules by which we will predict the future performance of the pipeline from a risk perspective. The model will be the constructor's representation of risk. The goal of any risk assessment model is to quantify the risks, in either a relative or absolute sense. The risk assessment phase is the critical first step in practicing risk management. It is also the most difficult phase. Although we understand engineering concepts about corrosion and fluid flow, predicting failures beyond the laboratory in a complex "real" environment can prove impossible. No one can definitively state where or when an accidental pipeline failure will occur. However, the more likely failure mechanisms, locations, and frequencies can be estimated in order to focus risk efforts. Some make a distinction between a model and a simulation, where a model is a simplification of the real process and a simulation is a direct replica. A model seeks to increase our understanding at the expense of realism, whereas a simulation attempts to duplicate reality, perhaps at the expense of understandability and usability. Neither is necessarily superior;
Risk assessment models 2/23
either might be more appropriate for specific applications. Desired accuracy, achievable accuracy, intended use, and availability of resources are considerations in choosing an approach. Most pipeline risk efforts generally fall into the "model" category, seeking to gain risk understanding in the most efficient manner. Although not always apparent, the most simple to the most complex models all make use of probability theory and statistics. In a very simple application, these manifest themselves in experience factors and engineering judgments that are themselves based on past observations and inductive reasoning; that is, they are the underlying basis of sound judgments. In the more mathematically rigorous models, historical failure data may drive the model almost exclusively. Especially in the fields of toxicology and medical research, risk assessments incorporate dose-response and exposure assessments into the overall risk evaluation. Dose-response assessment deals with the relationship between quantities of exposure and probabilities of adverse health effects in exposed populations. Exposure assessment deals with the possible pathways, the intensity of exposure, and the amount of time a receptor could be vulnerable. In the case of hazardous materials pipelines, the exposure agents of concern are both chemical (contamination scenarios) and thermal (fire-related hazards) in nature. These issues are discussed in Chapters 7 and 14.
Three general approaches Three general types of models, from simplest to most complex, are matrix, probabilistic, and indexing models. Each has strengths and weaknesses, as discussed below.
Matrix models One of the simplest risk assessment structures is a decision-analysis matrix. It ranks pipeline risks according to the likelihood and the potential consequences of an event by a simple scale, such as high, medium, or low, or a numerical scale, from 1 to 5, for example. Each threat is assigned to a cell of the matrix based on its perceived likelihood and perceived consequence. Events with both a high likelihood and a high consequence appear higher on the resulting prioritized list. This approach may simply use expert opinion, or a more complicated application might use quantitative information to rank risks. Figure 2.1 shows a matrix model. While this approach cannot consider all pertinent factors and their relationships, it does help to crystallize thinking by at least breaking the problem into two parts (probability and consequence) for separate examination.
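As a sketch of how such a matrix might be implemented, the short Python fragment below ranks a few threats by the product of their likelihood and consequence ratings. The threat names and the 1-to-5 ratings are hypothetical, and multiplying the two ratings is just one common convention for ordering matrix cells, not a prescription from this book.

```python
# Minimal sketch of a decision-analysis risk matrix (illustrative only).
# Threat names and the 1-5 likelihood/consequence ratings are hypothetical.

def matrix_rank(threats):
    """Rank threats by their likelihood x consequence cell, highest risk first."""
    return sorted(threats,
                  key=lambda t: t["likelihood"] * t["consequence"],
                  reverse=True)

threats = [
    {"name": "third-party damage", "likelihood": 4, "consequence": 3},
    {"name": "external corrosion", "likelihood": 3, "consequence": 3},
    {"name": "ground movement",    "likelihood": 1, "consequence": 5},
]

for t in matrix_rank(threats):
    print(t["name"], t["likelihood"] * t["consequence"])
```

Note that a low-likelihood, high-consequence threat (ground movement here) can land low on the list, which is one reason the text cautions that a matrix alone cannot consider all pertinent factors.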
Probabilistic models The most rigorous and complex risk assessment model is a modeling approach commonly referred to as probabilistic risk assessment (PRA) and sometimes also called quantitative risk assessment (QRA) or numerical risk assessment (NRA). Note that these terms carry implications that are not necessarily appropriate, as discussed elsewhere. This technique is used in the nuclear, chemical, and aerospace industries and, to some extent, in the petrochemical industry. PRA is a rigorous mathematical and statistical technique that relies heavily on historical failure data and event-tree/fault-tree
Figure 2.1 Simple risk matrix (likelihood increasing along the horizontal axis, consequence along the vertical axis; highest risk at high likelihood and high consequence, lowest risk at low likelihood and low consequence).
analyses. Initiating events such as equipment failure and safety system malfunction are flowcharted forward to all possible concluding events, with probabilities being assigned to each branch along the way. Failures are backward flowcharted to all possible initiating events, again with probabilities assigned to all branches. All possible paths can then be quantified based on the branch probabilities along the way. Final accident probabilities are achieved by chaining the estimated probabilities of individual events. This technique is very data intensive. It yields absolute risk assessments of all possible failure events. These more elaborate models are generally more costly than other risk assessments. They are technologically more demanding to develop, require trained operators, and need extensive data. A detailed PRA is usually the most expensive of the risk assessment techniques. The output of a PRA is usually in a form that can be directly compared to other risks such as motor vehicle fatalities or tornado damages. However, for rare-event occurrences, historical data present an arguably blurred view. The PRA methodology was first popularized through opposition to various controversial facilities, such as large chemical plants and nuclear reactors [88]. In addressing the concerns, the intent was to obtain objective assessments of risk that were grounded in indisputable scientific facts and rigorous engineering analyses. The technique therefore makes extensive use of failure statistics of components as foundations for estimates of future failure probabilities. However, statistics paints an incomplete picture at best, and many probabilities must still be based on expert judgment. In attempts to minimize subjectivity, applications of this technique became increasingly comprehensive and complex, requiring thousands of probability estimates and a like number of pages to document.
Nevertheless, variation in probability estimates remains, and the complexity and cost of this method does not seem to yield commensurate increases in accuracy or applicability [MI]. In addition to sometimes widely differing results from "duplicate" PRAs performed on the same system by different evaluators, another criticism
includes the perception that underlying assumptions and input data can easily be adjusted to achieve some predetermined result. Of course, this latter criticism can be applied to any process involving much uncertainty and the need for assumptions. PRA-type techniques are required in order to obtain estimates of absolute risk values, expressed in fatalities, injuries, property damages, etc., per specific time period. This is the subject of Chapter 14. Some guidance on evaluating the quality of a PRA-type technique is also offered in Chapter 14.
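The probability chaining at the heart of event-tree analysis can be sketched as follows. The initiating-event frequency, branch structure, and branch probabilities below are purely hypothetical illustrations of the mechanics, not data from any real system.

```python
# Sketch of event-tree probability chaining (hypothetical numbers).
# Each path from the initiating event to an end state multiplies the
# probabilities of the branches taken along the way.

initiating_event_freq = 1e-3  # per year, hypothetical equipment-failure frequency

# Each outcome lists its (branch description, branch probability) pairs
paths = {
    "release with ignition":    [("release", 0.5), ("ignition", 0.1)],
    "release, no ignition":     [("release", 0.5), ("no ignition", 0.9)],
    "safe shutdown, no release": [("no release", 0.5)],
}

for outcome, branches in paths.items():
    p = initiating_event_freq
    for _, prob in branches:
        p *= prob                      # chain the branch probabilities
    print(f"{outcome}: {p:.2e} per year")
```

Because the branch probabilities at each node sum to 1, the outcome frequencies sum back to the initiating-event frequency, which is a useful consistency check on any hand-built event tree.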
Indexing models Perhaps the most popular pipeline risk assessment technique in current use is the index model or some similar scoring technique. In this approach, numerical values (scores) are assigned to important conditions and activities on the pipeline system that contribute to the risk picture. This includes both risk-reducing and risk-increasing items, or variables. Weightings are assigned to each risk variable. The relative weight reflects the importance of the item in the risk assessment and is based on statistics where available and on engineering judgment where data are not available. Each pipeline section is scored based on all of its attributes. The various pipe segments may then be ranked according to their relative risk scores in order to prioritize repairs, inspections, and other risk-mitigating efforts. Among pipeline operators today, this technique is widely used and ranges from a simple one- or two-factor model (where only factors such as leak history and population density are considered) to models with hundreds of factors considering virtually every item that impacts risk. Although each risk assessment method discussed has its own strengths and weaknesses, the indexing approach is especially appealing for several reasons:

- Provides immediate answers
- Is a low-cost analysis (an intuitive approach using available information)
- Is comprehensive (allows for incomplete knowledge and is easily modified as new information becomes available)
- Acts as a decision support tool for resource allocation modeling
- Identifies and places values on risk mitigation opportunities
An indexing-type model for pipeline risk assessment is a recommended feature of a pipeline risk management program and is fully described in this book. It is a hybrid of several of the methods listed previously. The great advantage of this technique is that a much broader spectrum of information can be included; for example, near misses as well as actual failures are considered. A drawback is the possible subjectivity of the scoring. Extra efforts must be employed to ensure consistency in the scoring and the use of weightings that fairly represent real-world risks. It is reasonable to assume that not all variable weightings will prove to be correct in any risk model. Actual research and failure data will doubtlessly demonstrate that some were initially set too high and some too low. This is the result of modelers misjudging the relative importance of some of the variables. However, even if the quantification of the risk factors is imperfect, the results nonetheless will usually give a reliable picture
of places where risks are relatively lower (fewer "bad" factors present) and where they are relatively higher (more "bad" factors present). An indexing approach to risk assessment is the emphasis of much of this book.
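A minimal sketch of an indexing model might look like the following. The variable names, weights, and 0-to-10 scores are hypothetical and far simpler than the schedules developed later in this book; here a higher score means a safer segment, so segments sort from highest to lowest relative risk.

```python
# Sketch of an indexing (scoring) risk model: weighted variable scores per
# segment. Variables, weights, and scores are hypothetical illustrations.

WEIGHTS = {"leak_history": 0.35, "population_density": 0.40, "coating_condition": 0.25}

def index_score(segment):
    """Combine 0-10 variable scores (higher = safer here) into a weighted score."""
    return sum(WEIGHTS[v] * segment[v] for v in WEIGHTS)

segments = {
    "seg A": {"leak_history": 8, "population_density": 3, "coating_condition": 6},
    "seg B": {"leak_history": 5, "population_density": 9, "coating_condition": 7},
}

# Rank segments from highest relative risk (lowest score) to lowest
for name, seg in sorted(segments.items(), key=lambda kv: index_score(kv[1])):
    print(name, round(index_score(seg), 2))
```

The weighted-sum structure is what makes the technique cheap to apply and easy to modify: adding a newly available variable is just another weighted term.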
Further discussion on scoring-type risk assessments Scoring-type techniques are in common use in many applications. They range from judging sports and beauty contests to medical diagnosis and credit card fraud detection, as discussed later. Any time we need to consider many factors simultaneously and our knowledge is incomplete, a scoring system becomes practical. Done properly, it combines the best of all other approaches because critical variables are identified from scenario-based approaches and weightings are established from probabilistic concepts when possible. The genesis of scoring-type approaches is readily illustrated by the following example. As operators of motor vehicles, we generally know the hazards associated with driving as well as the consequences of vehicle accidents. At one time or another, most drivers have been exposed to driving accident statistics as well as pictures or graphic commentary of the consequences of accidents. Were we to perform a scientific quantitative risk analysis, we might begin by investigating the accident statistics of the particular make and model of the vehicle we operate. We would also want to know something about the crash survivability of the vehicle. Vehicle condition would also have to be included in our analysis. We might then analyze various roadways for accident history including the accident severity. We would naturally have to compensate for newer roads that have had less opportunity to accumulate an accident frequency base. To be complete, we would have to analyze driver condition as it contributes to accident frequency or severity, as well as weather and road conditions. Some of these variables would be quite difficult to quantify scientifically. After a great deal of research and using a number of critical assumptions, we may be able to build a system model to give us an accident probability number for each combination of variables.
For instance, we may conclude that, for vehicle type A, driven by driver B, in condition C, on roadway D, during weather and road conditions E, the accident frequency for an accident of severity F is once for every 200,000 miles driven. This system could take the form of a scenario approach or a scoring system. Does this now mean that until 200,000 miles are driven, no accidents should be expected? Does 600,000 miles driven guarantee three accidents? Of course not. What we do believe from our study of statistics is that, given a large enough data set, the accident frequency for this set of variables should tend to move toward once every 200,000 miles on average, if our underlying frequencies are representative of future frequencies. This may mean an accident every 10,000 miles for the first 100,000 miles followed by no accidents for the next 1,900,000 miles; the average is still once every 200,000 miles. What we are perhaps most interested in, however, is the relative amount of risk to which we are exposing ourselves during a single drive. Our study has told us little about the risk of this drive until we compare this drive with other drives. Suppose we change weather and road conditions to state G from state F and find that the accident frequency is now once every 190,000
miles. This finding now tells us that condition G has increased the risk by a small amount. Suppose we change roadway D to roadway H and find that our accident frequency is now once every 300,000 miles driven. This tells us that by using road H we have reduced the risk quite substantially compared with using road D. Chances are, however, we could have made these general statements without the complicated exercise of calculating statistics for each variable and combining them for an overall accident frequency. So why use numbers at all? Suppose we now make both variable changes simultaneously. The risk reduction obtained by road H is somewhat offset by the increased risk associated with road and weather condition F, but what is the result when we combine a small risk increase with a substantial risk reduction? Because all of the variables are subject to change, we need some method to see the overall picture. This requires numbers, but the numbers can be relative, showing only that variable H has a greater effect on the risk picture than does variable G. Absolute numbers, such as the accident frequency numbers used earlier, are not only difficult to obtain, they also give a false sense of precision to the analysis. If we can only be sure of the fact that change X reduces the risk and reduces it more than change Y does, it may be of little further value to say that a once in 200,000 frequency has been reduced to a once in 210,000 frequency by change X and only a once in 205,000 frequency by change Y. We are ultimately most interested in the relative risk picture of change X versus change Y. This reasoning forms the basis of the scoring risk assessment. The experts come to a consensus as to how a change in a variable impacts the risk picture, relative to other variables in the risk picture. If frequency data are available, they are certainly used, but they are used outside the risk analysis system.
The data are used to help the experts reach a consensus on the importance of the variable and its effects (or weighting) on the risk picture. The consensus is then used in the risk analysis. As previously noted, scoring systems are common in many applications. In fact, whenever information is incomplete and many aspects or variables must be simultaneously considered, a scoring system tends to emerge. Examples include sporting events that have some difficult-to-measure aspects like artistic expression or complexity, form, or aggressiveness. These include gymnastics, figure skating, boxing, and karate and other martial arts. Beauty contests are another application. More examples are found in the financial world. Many economic models use scoring systems to assess current conditions and forecast future conditions and market movements. Credit card fraud assessment is another example where some purchases trigger a model that combines variables such as purchase location, the card owner's purchase history, items
purchased, time of day, and other factors to rate the probability of a fraudulent card use. Scoring systems are also used for psychological profiles, job applicant screening, career counseling, medical diagnostics, and a host of other applications.
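The averaging arithmetic behind the driving example above is easy to verify: an accident every 10,000 miles for the first 100,000 miles, then none for the next 1,900,000 miles, still averages once per 200,000 miles.

```python
# Check of the driving example's averaging arithmetic.

accidents_early = 100_000 / 10_000     # one accident per 10,000 miles -> 10 accidents
total_accidents = accidents_early      # no accidents in the remaining miles
total_miles = 100_000 + 1_900_000      # 2,000,000 miles driven in total

print(total_miles / total_accidents)   # 200000.0 miles per accident, on average
```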
Choosing a risk assessment approach Any or all of the above-described techniques might have a place in risk assessment/management. Understanding the strengths and weaknesses of the different risk assessment methodologies gives the decision maker the basis for choosing one. A case can be made for using each in certain situations. For example, a simple matrix approach helps to organize thinking and is a first step toward formal risk assessment. If the need is to evaluate specific events at any point in time, a narrowly focused probabilistic risk analysis might be the tool of choice. If the need is to weigh immediate risk trade-offs or perform inexpensive overall assessments, indexing models might be the best choice. These options are summarized in Table 2.1.
Uncertainty It is important that a risk assessment identify the role of uncertainty in its use of assumptions and also identify how the state of "no information" is assessed. The philosophy behind uncertainty and risk is discussed in Chapter 1. The recommendation from Chapter 1 is that a risk model generally assumes that things are "bad" until data show otherwise. So, an underlying theme in the assessment is that "uncertainty increases risk." This is a conservative approach requiring that, in the absence of meaningful data or the opportunity to assimilate all available data, risk should be overestimated rather than underestimated. So, lower ratings are assigned, reflecting the assumption of reasonably poor conditions, in order to accommodate the uncertainty. This results in a more conservative overall risk assessment. As a general philosophy, this approach to uncertainty has the added long-term benefit of encouraging data collection via inspections and testing. Uncertainty also plays a role in scoring aspects of operations and maintenance. Information should be considered to have a life span because users must realize that conditions are always changing and recent information is more useful than older information. Eventually, certain information has little value at all in the risk analysis. This applies to inspections, surveys, and so on. The scenarios shown in Table 2.2 illustrate the relative value of several knowledge states for purposes of evaluating risk where uncertainty is involved. Some assumptions and "reasonableness" are employed in setting risk scores in the absence of
Table 2.1 Choosing a risk assessment technique

When the need is to . . .                                          A technique to use might be

Study specific events, perform post-incident investigations,       Event trees, fault trees, FMEA, PRA, HAZOP
compare risks of specific failures, calculate specific
event probabilities

Obtain an inexpensive overall risk model, create a resource        Indexing model
allocation model, model the interaction of many potential
failure mechanisms, study or create an operating discipline

Better quantify a belief, create a simple decision support         Matrix
tool, combine several beliefs into a single solution,
document choices in resource allocation
Table 2.2 Uncertainty and risk assessment (rows ordered from least to most risk)

Action                                              Inspection results                                      Risk relevance

Timely and comprehensive inspection performed       No risk issues identified                               Least risk

Timely and comprehensive inspection performed       Some risk issues or indications of flaw potential
                                                    identified; root cause analysis and proper
                                                    follow-up to mitigate risk

No timely and comprehensive inspection performed    High uncertainty regarding risk issues                  More risk

Timely and comprehensive inspection performed       Some risk issues or indications of flaw potential       Most risk
                                                    identified; uncertain reactions, uncertain
                                                    mitigation of risk
data; in general, however, worst-case conditions are conservatively used for default values. Uncertainty also arises in using the risk assessment model since there are inaccuracies inherent in any measuring tool. A signal-to-noise ratio analogy is a useful way to look at the tool and highlights precautions in its use. This is discussed in Chapter 1.
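One way the "uncertainty increases risk" default and the life span of information might be encoded is sketched below. The worst-case value, the 5-year data life, and the linear discount are all hypothetical modeling choices for illustration, not this book's schedule.

```python
# Sketch of conservative defaults under uncertainty: missing or stale data
# scores as worst case. The 0-10 scale (higher = safer), the 5-year data
# life span, and the linear discount are hypothetical choices.

WORST_CASE = 0          # riskiest score, assigned when nothing is known
DATA_LIFE_YEARS = 5     # assumed life span of inspection information

def scored(value, age_years):
    """Return a variable's score, degraded toward worst case as data ages."""
    if value is None or age_years >= DATA_LIFE_YEARS:
        return WORST_CASE              # no information, or information too old
    # Linearly discount toward worst case as the data ages (one choice of many)
    return value * (1 - age_years / DATA_LIFE_YEARS)

print(scored(8, 0))      # fresh data: full credit
print(scored(8, 4))      # aging data: heavily discounted
print(scored(None, 0))   # no data: worst case assumed
```

The practical effect matches the text: segments with recent, comprehensive data can earn good scores, while unknowns are penalized, which in turn rewards data collection.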
Sectioning or segmenting the pipeline It is generally recognized that, unlike most other facilities that undergo a risk assessment, a pipeline usually does not have a constant hazard potential over its entire length. As conditions along the line's route change, so too does the risk picture. Because the risk picture is not constant, it is efficient to examine a long pipeline in shorter sections. The risk evaluator must decide on a strategy for creating these sections in order to obtain an accurate risk picture. Each section will have its own risk assessment results. Breaking the line into many short sections increases the accuracy of the assessment for each section, but may result in higher costs of data collection, handling, and maintenance (although higher costs are rarely an issue with modern computing capabilities). Longer sections (fewer in number), on the other hand, may reduce data costs but also reduce accuracy, because average or worst-case characteristics must govern if conditions change within the section.
Fixed-length approach A fixed-length method of sectioning, based on rules such as "every mile" or "between pump stations" or "between block valves," is often proposed. While such an approach may be initially appealing (perhaps for reasons of consistency with existing accounting or personnel systems), it will usually reduce accuracy and increase costs. Inappropriate and unnecessary break points limit the model's usefulness and hide risk hot spots if conditions are averaged within a section, or exaggerate risks if worst-case conditions are applied to the entire length. Fixed-length sectioning will also interfere with an otherwise efficient ability of the risk model to identify risk mitigation projects. Many pipeline projects are done in very specific locations, as is appropriate. The risk of such specific locations is often lost under a fixed-length sectioning scheme.
Dynamic segmentation approach The most appropriate method for sectioning the pipeline is to insert a break point wherever significant risk changes occur. A significant condition change must be determined by the evaluator with consideration given to data costs and desired accuracy. The idea is for each pipeline section to be unique, from a risk perspective, from its neighbors. So, within a pipeline section, we recognize no differences in risk, from beginning to end. Each foot of pipe is the same as any other foot, as far as we know from our data. But we know that the neighboring sections do differ in at least one risk variable. It might be a change in pipe specification (wall thickness, diameter, etc.), soil conditions (pH, moisture, etc.), population, or any of dozens of other risk variables, but at least one aspect is different from section to section. Section length is not important as long as characteristics remain constant. There is no reason to subdivide a 10-mile section of pipe if no real risk changes occur within those 10 miles. This type of sectioning is sometimes called dynamic segmentation. It can be done very efficiently using modern computers. It can also be done manually, of course, and the manual process might be suitable for setting up a high-level screening assessment.
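Dynamic segmentation reduces to a simple computation: collect the stations where any risk variable changes, and every distinct station becomes a segment boundary. A minimal sketch, with hypothetical stationing in miles:

```python
# Sketch of dynamic segmentation: a new segment starts wherever any risk
# variable changes. Stationing values (miles) are hypothetical.

def dynamic_segments(length, change_points):
    """Merge change points from all risk variables into segment boundaries."""
    boundaries = sorted({0.0, length, *change_points})  # set removes duplicates
    return list(zip(boundaries, boundaries[1:]))

# Stations where wall thickness, soil type, or population class changes;
# coincident changes (3.5 twice) collapse into a single break point
changes = [1.2, 3.5, 3.5, 7.0]

for start, end in dynamic_segments(10.0, changes):
    print(f"segment {start}-{end} mi")
```

Note that each resulting segment is internally uniform in every variable, so segment lengths fall out of the data rather than being imposed in advance.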
Manually establishing sections With today's common computing environments, there is really no reason to follow the relatively inefficient option of manually establishing pipeline sections. However, envisioning the manual process of segmentation might be helpful for obtaining a better understanding of the concept. The evaluator should first scan Chapters 3 through 7 of this text to get a feel for the types of conditions that make up the risk picture. He should note those conditions that are most variable in the pipeline system being studied and rank those items with regard to magnitude of change and frequency of change. This ranking will be rather subjective and perhaps incomplete, but it will serve as a good starting point for sectioning the line(s). An example of a short list of prioritized conditions is as follows:

1. Population density
2. Soil conditions
3. Coating condition
4. Age of pipeline
In this example, the evaluator(s) foresees the most significant changes along the pipeline route to be population density, followed by varying soil conditions, then coating condition, and pipeline age. This list was designed for an aging 60-mile pipeline in Louisiana that passes close to several rural communities, alternating between marshland (clay) and sandy soil conditions. Furthermore, the coating is in various states of deterioration (maybe roughly corresponding to the changing soil
conditions) and the line has had sections replaced with new pipe during the last few years. Next, the evaluator should insert break points for the sections based on the top items on the prioritized list of condition changes. This produces a trial sectioning of the pipeline. If the number of sections resulting from this process is deemed to be too large, the evaluator needs merely to reduce the list (eliminating conditions from the bottom of the prioritized list) until an appropriate number of sections is obtained. This trial-and-error process is repeated until a cost-effective sectioning has been completed.
Example 2.1: Sectioning the Pipeline Following this philosophy, suppose that the evaluator of this hypothetical Louisiana pipeline decides to section the line according to the following rules he has developed:

- Insert a section break each time the population density along a 1-mile section changes by more than 10%. These population section breaks will not occur more often than each mile, and as long as the population density remains constant, a section break is unwarranted.
- Insert a section break each time the soil corrosivity changes by 30%. In this example, data are available showing the average soil corrosivity for each 500-ft section of line. Therefore, section breaks may occur a maximum of 10 times (5280 ft per mile divided by 500-ft sections) for each mile of pipeline.
- Insert a section break each time the coating condition changes significantly. This will be measured by the corrosion engineer's assessment. Because this assessment is subjective and based on sketchy data, such section breaks may occur as often as every mile.
- Insert a section break each time a difference in age of the pipeline is seen. This is measured by comparing the installation dates. Over the total length of the line, six new sections have been installed to replace unacceptable older sections.

Following these rules, the evaluator finds that his top listed condition causes 15 sections to be created. By applying the second condition rule, he creates an additional 8 sections, bringing the total to 23 sections. The third rule yields an additional 14 sections, and the fourth causes an additional 6 sections. This brings the total to 43 sections in the 60-mile pipeline. The evaluator can now decide whether this is an appropriate number of sections. As previously noted, factors such as the desired accuracy of the evaluation and the cost of data gathering and analysis should be considered. If he decides that 43 sections is too many for the company's needs, he can reduce the number of sections by first eliminating the additional sectioning caused by application of his fourth rule. Elimination of these 6 sections caused by age differences in the pipe is appropriate because it had already been established that this was a lower-priority item. That is, it is thought that the age differences in the pipe are not as significant a factor as the other conditions on the list. If the section count (now down to 37) is still too high, the evaluator can eliminate or reduce sectioning caused by his third rule. Perhaps combining the corrosion engineer's "good" and "fair" coating ratings would reduce the number of sections from 14 to 8. In the preceding example, the evaluator has roughed out a plan to break the pipeline into an appropriate number of sections. Again, this is an inefficient way to section a pipeline and leads to further inefficiencies in risk assessment. This example is provided only for illustration purposes. Figure 2.2 illustrates a piece of pipeline sectioned based on population density and soil conditions. For many items in this evaluation (especially in the incorrect operations index), new section lines will not be created.
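The trial-and-error reduction in Example 2.1 can be tallied directly. The final count after the coating-rating merge is implied by the example rather than stated in the text.

```python
# Tally of Example 2.1's trial-and-error sectioning, following the text.

sections_by_rule = {"population": 15, "soil": 8, "coating": 14, "age": 6}

total = sum(sections_by_rule.values())
print(total)                         # 43 sections from all four rules

total -= sections_by_rule["age"]     # drop the lowest-priority rule (age)
print(total)                         # 37 sections

total -= 14 - 8                      # merge "good" and "fair" coating ratings
print(total)                         # 31 sections (implied, not stated, in the text)
```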
Items such as training or procedures are generally applied uniformly across the entire pipeline system or at least within a single
Figure 2.2 Sectioning of the pipeline (sections 4, 5, and 6 shown along a pipeline passing near a town, with break points inserted where population density and soil conditions change).
operations area. This should not be universally assumed, however, during the data-gathering step.
Risk algorithms vary in detail and complexity. Appendix E shows some samples of risk algorithms. Readers will find a review of some database design concepts to be useful (see Chapter 8).
Persistence of segments Another decision to make is how often segment boundaries will be changed. Under a dynamic segmentation strategy, segments are subject to change with each change of data. This results in the best risk assessments, but may create problems when tracking changes in risk over time. Difficulties can be readily overcome by calculating cumulative risks (see Chapter 15) or tracking specific points rather than tracking segments.
Results roll-ups The pipeline risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings. It is insensitive to length. If two pipeline segments, say, 100 and 2600 ft, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length because it has many more riskproducing points. A cumulative risk calculation adds the length aspect so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score. As noted earlier, dividing the pipeline into segments based on any criteria other than all risk variables will lead to inefficiencies in risk assessment. However, it is common practice to report risk results in terms of fixed lengths such as “per mile” or “between valve stations,” even if a dynamic segmentation protocol has been applied. This “rolling up” of risk assessment results is often thought to be necessary for summarization and perhaps linking to other administrative systems such as accounting. To minimize the masking effect that such roll-ups might create, it is recommended that several measures be simultaneously examined to ensure a more complete use of information. For instance, when an average risk value is reported, a worst-case risk value, reflecting the worst length of pipe in the section, can be simultaneously reported. Length-weighted averages can also be used to better capture information, but those too must be used with caution. A very short, but very risky stretch of pipe is still of concern, even if the rest of the pipeline shows low risks. In Chapter 15, a system of calculating cumulative risk is offered. This system takes into account the varying section lengths and offers a way to examine and compare the effects of various risk mitigation efforts. 
Other aspects of data roll-ups are discussed in Chapters 8 and 15.
IV. Designing a risk assessment model A good risk model will be firmly rooted in engineering concepts and be consistent with experience and intuition. This leads to the many similarities in the efforts of many different modelers examining many different systems at many different times. Beyond compatibility with engineering and experience, a model can take many forms, especially in differing levels of
Data first or framework first?

There are two possible scenarios for beginning a relative risk assessment. In one, a risk model (or at least a framework for a model) has already been developed, and the evaluator takes this model and begins collecting data to populate her model's variables. In the second, the modeler compiles a list of all available information and then puts this information into a framework from which risk patterns emerge and risk-based decisions can be made. The difference between these two approaches can be summarized in a question: Does the model drive data collection, or does data availability drive model development? Ideally, each will be the driver at various stages of the process. One of the primary intents of risk assessment is to capture and use all available information and to identify information gaps. Having data drive the process ensures complete usage of all data, while having a predetermined model allows data gaps to be easily identified. A blend of both is therefore recommended, especially considering the possible pitfalls of taking either exclusively. Although a predefined set of risk algorithms defining how every piece of data is to be used is attractive, it has the potential to cause problems, such as:

• Rigidity of approach. Difficulty is experienced in accepting new data, data in an unexpected format, or information that is loosely structured.
• Relative scoring. Weightings are set in relation to the types of information to be used. Weightings would need to be adjusted if unexpected data become available.
On the other hand, a pure custom development approach (building a model exclusively from available data) suffers from lack of consistency and from inefficiency. An experienced evaluator or a checklist is required to ensure that significant aspects of the evaluation are not omitted as a result of lack of information. Therefore, the recommendation is to begin with lists of standard higher level variables that comprise all of the critical aspects of risk. Chapters 3 through 7 provide such lists for common pipeline components, and Chapters 9 through 13 list additional variables that might be appropriate for special situations. Then, use all available information to evaluate each variable. For example, the higher level variable of activity (as one measure of third-party damage potential) might be created from data such as number of one-call reports, population density, previous third-party damages, and so on. In this way, higher level variable selection is standardized and consistent, yet the model is flexible enough to incorporate any and all information that is available or becomes available in the future. The experienced evaluator, or any evaluator armed with a comprehensive list of higher level variables, will quickly find many useful pieces of information that provide evidence on many variables. She may also see risk variables for which no information is available. Similar to piecing together a puzzle, a picture will emerge that readily displays all knowledge and knowledge gaps.
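The idea of building a higher level variable such as activity from whatever lower-level evidence is available can be sketched as follows. The thresholds, weights, and scales below are hypothetical illustrations, not values from the text; the point is that each piece of evidence contributes when present and a data gap is reported when nothing is known.

```python
# Sketch: blending available evidence into a higher level "activity" variable
# (third-party damage potential). All scaling constants are hypothetical.

def activity_level(one_call_reports_per_yr=None, pop_density=None, prior_damages=None):
    """Blend available evidence into a 0 (low) .. 10 (high) activity estimate."""
    scores = []
    if one_call_reports_per_yr is not None:
        scores.append(min(10, one_call_reports_per_yr / 5))   # 50+/yr saturates
    if pop_density is not None:
        scores.append(min(10, pop_density / 100))             # people per sq. mile
    if prior_damages is not None:
        scores.append(min(10, prior_damages * 2.5))
    if not scores:
        return None   # data gap: no evidence available for this variable
    return sum(scores) / len(scores)

# Evidence available for this segment: one-call activity and population density
level = activity_level(one_call_reports_per_yr=30, pop_density=400)
print(level)
```

A segment with no evidence at all returns `None`, flagging a knowledge gap in the emerging picture rather than silently scoring it.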
Risk factors

Central to the design of a risk model are the risk factors or variables (these terms are used interchangeably in this text) that will be included in the assessment. A complete list of risk factors, those items that add to or subtract from the amount of risk, can be readily identified for any pipeline system. There is widespread agreement on failure mechanisms and on the underlying factors influencing those mechanisms. Setting up a risk assessment model involves trade-offs between the number of factors to be considered and the ease of use of the model. Including all possible factors in a decision support system can create a somewhat unwieldy system. So, the important variables are widely recognized, but the number to be considered in the model (and the depth of that consideration) is a matter of choice for the model developers. In this book, lists of possible risk indicators are offered based on their ability to provide useful risk signals. Each item's specific ability to contribute without adding unnecessary complexity will be a function of a user's specific system, needs, and ability to obtain the required data. The variables and the rationale for their possible inclusion are described in the following chapters.

Types of information

It is usually the case that some data impact several different aspects of risk. For example, pipe wall thickness is a factor in almost all potential failure modes: It determines time to failure for a given corrosion rate, partly determines ability to survive external forces, and so on. Population density is a consequence variable as well as a third-party damage indicator (as a possible measure of potential activity). Inspection results yield evidence regarding current pipe integrity as well as possibly active failure mechanisms. A single detected defect can yield much information.
It could change our beliefs about coating condition, CP effectiveness, pipe strength, and overall operating safety margin, and may even provide new information about soil corrosivity, interference currents, third-party activity, and so on. All of this arises from a single piece of data (evidence). Many companies now avoid the use of casings, but casings were put in place for a reason. The presence of a casing is a mitigation measure for external force damage potential but is often seen to increase corrosion potential. The risk model should capture both of the risk implications of the presence of a casing. Numerous other examples can be shown.

A great deal of information is usually available in a pipeline operation. Information that can routinely be used to update the risk assessment includes:
• All survey results, such as pipe-to-soil voltage readings, leak surveys, patrols, depth of cover, population density, etc.
• Documentation of all repairs
• Documentation of all excavations
• Operational data, including pressures and flow rates
• Results of integrity assessments
• Maintenance reports
• Updated consequence information
• Updated receptor information: new housing, high-occupancy buildings, changes in population density or environmental sensitivities, etc.
• Results of root cause analyses and incident investigations
• Availability and capabilities of new technologies
Attributes and preventions

Because the ultimate goal of the risk assessment is to provide a means of risk management, it is sometimes useful to make a distinction between two types of risk variables. As noted earlier, there is a difference between a hazard and a risk. We can usually do little to change the hazard, but we can take actions to affect the risk. Following this reasoning, the evaluator can categorize each index risk variable as either an attribute or a prevention. The attributes correspond loosely to the characteristics of the hazard, while the preventions reflect the risk mitigation measures. Attributes reflect the pipeline's environment: characteristics that are difficult or impossible to change and over which the operator usually has little or no control. Preventions are actions taken in response to that environment. Both impact the risk, but a distinction may be useful, especially in risk management analyses. Examples of aspects that are not routinely changed, and are therefore considered attributes, include:

• Soil characteristics
• Type of atmosphere
• Product characteristics
• The presence and nature of nearby buried utilities

The other category, preventions, includes actions that the pipeline designer or operator can reasonably take to offset risks. Examples of preventions include:

• Pipeline patrol frequency
• Operator training programs
• Right-of-way (ROW) maintenance programs

The above examples of each category are fairly clear-cut, but the evaluator should expect to encounter some gray areas of distinction between an attribute and a prevention. For instance, consider the proximity of population centers to the pipeline. In many risk assessments, this impacts the potential for third-party damage to the pipeline. It is obviously not an unchangeable characteristic, because rerouting of the line is usually an option. But in an economic sense,
this characteristic may be unchangeable due to the unrecoverable expenses that may be incurred to change the pipeline's location. Another example is pipeline depth of cover. To change this characteristic would mean a reburial or the addition of more cover. Neither of these is an uncommon action, but the practicality of such options must be weighed by the evaluator as he classifies a risk component as an attribute or a prevention. Figure 2.3 illustrates how some of the risk assessment variables are thought to appear on a scale with preventions at one extreme and attributes at the other.

The distinction between attributes and preventions is especially useful in risk management policy making. Company standards can be developed to require certain risk-reducing actions to be taken in response to certain harsh environments. For example, more patrols might be required in highly populated areas, or more corrosion-prevention verifications might be required under certain soil conditions. Such a procedure would provide for assigning a level of preventions based on the level of attributes. The standards can be predefined and programmed into a database program to adjust the standards automatically to the environment of the section: harsh conditions require more preventions to meet the standard.

Figure 2.3 Example items on attributes-preventions scale (the scale runs from conditions at one extreme to actions at the other; depth of cover is one example item).
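The attribute-driven standards idea, where harsh conditions require more preventions, might be programmed along the following lines. The thresholds and patrol counts here are invented for illustration; an actual company standard would supply its own.

```python
# Sketch: a predefined standard that assigns required preventions from
# attribute severity. All thresholds and actions are hypothetical.

def required_patrols_per_month(population_density, soil_corrosivity):
    """Minimum patrol frequency driven by attribute levels of the section."""
    patrols = 1                                  # baseline for benign sections
    if population_density > 300:                 # people per sq. mile
        patrols += 2                             # harsher environment: more patrols
    if soil_corrosivity == "high":
        patrols += 1
    return patrols

print(required_patrols_per_month(500, "high"))
```

Encoded this way, the standard can be evaluated automatically for every section as attribute data change.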
Model scope and resolution

Assessment scope and resolution issues further complicate model design. Both involve choices of the ranges of certain risk variables. The assessment of relative risk characteristics is especially sensitive to the range of possible characteristics in the pipeline systems to be assessed. If only natural gas transmission pipelines are to be assessed, then the model does not necessarily have to capture liquid pipeline variables such as surge potential. The model designer can either keep this variable and score it as "no threat" or redistribute the weighting points to other variables that do impact the risk. As another example, earth movements often pose a very localized threat on relatively few stretches of pipeline. When the vast majority of a pipeline system to be evaluated is not exposed to any land movement threats, risk points assigned to earth movements will not help to make risk distinctions among most pipeline segments. It may seem beneficial to reassign those points to other variables that warrant full consideration. However, without direct consideration of this variable, comparisons with the small portions of the system that are exposed, or with future acquisitions of systems that have the threat, will be difficult.

Model resolution, the signal-to-noise ratio as discussed in Chapter 1, is also sensitive to the characteristics of the systems to be assessed. A model that is built for parameters ranging from, say, a 40-inch, 2000-psig propane pipeline to a 1-inch, 20-psig fuel oil pipeline will not be able to make many risk distinctions between a 6-inch natural gas pipeline and an 8-inch natural gas pipeline. Similarly, a model that is sensitive to differences between a pipeline at 1100 psig and one at 1200 psig might have to treat all lines above a certain pressure/diameter threshold as the same. This is an issue of modeling resolution.
Common risk variables that should have a range established as part of the model design include:

• Diameter range
• Pressure range
• Products to be included
The range should include the smallest to largest values in the systems to be studied, as well as future systems to be acquired or other systems that might be used as comparisons. Given the difficulties in predicting future uses of the model, a more generic model, widely applicable to many different pipeline systems, might be appropriate.
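The resolution point above can be made concrete with a small sketch (the range values are hypothetical): a variable normalized onto the model's designed range saturates at the range limits, so the model cannot distinguish values beyond them.

```python
# Sketch: normalizing a variable onto a model's designed range.
# Values outside the range saturate, losing distinctions.

def normalize(value, lo, hi):
    """Scale a variable onto 0-1 within the designed range [lo, hi]."""
    clamped = max(lo, min(hi, value))
    return (clamped - lo) / (hi - lo)

# Hypothetical model designed for 20-2000 psig:
# within range, 1100 vs. 1200 psig are distinguishable...
print(normalize(1100, 20, 2000), normalize(1200, 20, 2000))
# ...but both of these above-range lines score identically
print(normalize(2100, 20, 2000), normalize(2500, 20, 2000))
```

This is why the designed range should span the smallest to largest values expected, including future acquisitions.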
Special risk factors

Two possible risk factors deserve special consideration since they have a general impact on many other risk considerations.

Age as a risk variable

Some risk models use age as a risk variable. It is a tempting choice, since many man-made systems experience deterioration that is proportional to their years in service. However, age itself is not a failure mechanism; at most it is a contributing factor. Using it as a stand-alone risk variable can detract from the actual failure mechanisms and can also unfairly penalize portions of the system being evaluated. Recall the discussion of time-dependent failure rates in Chapter 1, including the concept of the bathtub failure rate curve. Penalizing a pipeline for its age presupposes knowledge of that pipeline's failure rate curve. Age alone is not a reliable indicator of pipeline risk, as is evidenced by some pipelines found in excellent operating condition even after many decades of service. A perception that age always causes an inevitable, irreversible process of decay is not an appropriate characterization of pipeline failure mechanisms. Mechanisms that can threaten pipe integrity exist but may or may not be active at any point on the line. Integrity threats are well understood and can normally be counteracted with a degree of confidence. Possible threats to pipe integrity are not necessarily strongly correlated with the passage of time, although the "area of opportunity" for something to go wrong obviously does increase with more time. The ways in which the age of a pipeline can influence the potential for failures are through specific failure mechanisms, such as corrosion and fatigue, or in consideration of changes in manufacturing and construction methods since the pipeline was built. These age effects are well understood and can normally be countered by appropriate mitigation measures.
Experts believe that there is no effect of age on the microcrystalline structure of steel such that the strength and ductility properties of steel pipe are degraded over time. The primary metal-related phenomena are the potential for corrosion and the development of cracks from fatigue stresses. In the case of certain other materials, mechanisms of strength degradation might be present and should be included in the assessment. Examples include creep and UV degradation possibilities in certain plastics, and concrete deterioration when exposed to certain chemical environments. In some situations, a slow-acting earth movement could also be modeled with an age component. Such special situations are discussed in Chapters 4 and 5.

Manufacturing and construction methods have changed over time, presumably improving and reflecting learning from past failures. Hence, more recently manufactured and constructed systems may be less susceptible to failure mechanisms of the past. This can be included in the risk model and is discussed in Chapter 5.

The recommendation here is that age not be used as an independent risk variable, unless the risk model is only a very high-level screening application. Preferably, the underlying mechanisms and mitigations should be evaluated to determine if there are any age-related effects.
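The recommendation to model the underlying mechanism rather than age itself can be illustrated with a simple corrosion sketch. The wall thicknesses and corrosion rates below are hypothetical, and a constant corrosion rate is itself a simplification; the point is that two lines of identical age can have very different integrity outlooks.

```python
# Sketch: age enters the model only through a mechanism (here, wall loss
# from a corrosion rate), not as a stand-alone variable. Values hypothetical.

def remaining_wall(nominal_wall_in, corrosion_rate_mpy, years):
    """Remaining wall thickness (inches); mpy = mils (0.001 in.) per year.
    Assumes a constant corrosion rate for illustration only."""
    return nominal_wall_in - corrosion_rate_mpy * 0.001 * years

# Two 40-year-old pipelines: same age, very different conditions
well_protected = remaining_wall(0.250, corrosion_rate_mpy=0.5, years=40)  # effective coating/CP
unprotected    = remaining_wall(0.250, corrosion_rate_mpy=5.0, years=40)  # active corrosion
print(well_protected, unprotected)
```

Scoring both lines the same because they are both 40 years old would miss the fivefold difference in remaining wall that the mechanism-based view exposes.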
Inspection age

Inspection age should play a role in assessments that use the results of inspections or surveys. Since conditions should not be assumed to be static, inspection data become increasingly less valuable as they age. One way to account for inspection age is to make a graduated scale indicating the decreasing usefulness of inspection data over time. This measure of information degradation can be applied to the scores as a percentage. After a predetermined time period, scores based on previous inspections degrade to some predetermined value. An example is shown in Table 2.3. In this example, the evaluator has determined that a previous inspection yields no useful information after 5 years and that the usefulness degrades 20% per year. By this scale, point values based on inspection results will therefore change by 20% per year. A more scientific way to gauge the time degradation of integrity inspection data is shown in Chapter 5.

Table 2.3 Example of inspection degradations

Inspection age (years)   Adjustment (degradation) factor (%)   Notes
0                        100                                   Fresh data; no degradation
1                        80                                    Inspection data is 1 year old and less representative of actual conditions
2                        60
3                        40                                    Inspection data is now 3 years old and current conditions might now be significantly different
4                        20
5                        0                                     Inspection results assumed to no longer yield useful information

Interview data

Collecting information via an interview will often require the use of qualitative descriptive terms. Such verbal labeling has some advantages, including ease of explanation and familiarity. (In fact, most people prefer verbal responses when replying to rating tasks.) It is therefore useful for capturing expert judgments. However, these advantages are at least partially offset by inferior measurement quality, especially regarding consistency. Some emerging techniques for artificial intelligence systems seek to make better use of human reasoning to solve problems involving incomplete knowledge and the use of descriptive terms. In mirroring human decision making, fuzzy logic interprets and makes use of natural language in ways similar to our risk models. Much research can be found regarding transforming verbal expressions into quantitative or numerical probability values. Most conclude that there is relatively consistent usage of terms. This is useful when polling experts, weighing evidence, and devising quantitative measures from subjective judgments. For example, Table 2.4 shows the results of a study in which certain expressions, obtained from interviews of individuals, were correlated against numerical values. Using relationships like those shown in Table 2.4 can help bridge the gap between interview or survey results and numerical quantification of beliefs.

Table 2.4 Assigning numbers to qualitative assessments

Expression           Median probability equivalent (%)   Range (%)
Almost certain       90                                  90–99.5
Very high chance     90                                  85–99
Very likely          85                                  75–90
High chance          80                                  80–92
Very probable        80                                  75–92
Very possible        80                                  70–87.5
Likely               70                                  65–85
Probable             70                                  60–75
Even chance          50                                  45–55
Medium chance        50                                  40–60
Possible             40                                  40–70
Low chance           15                                  10–70
Unlikely             15                                  10–30
Improbable           10                                  5–20
Very low chance      10                                  5–15
Very unlikely        5                                   —
Very improbable      2                                   1–15
Almost impossible    1                                   0–5

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.
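The inspection-age degradation of Table 2.3 (20% per year, no credit after 5 years) can be sketched as a simple adjustment factor applied to an inspection-based score. The score value used in the example is hypothetical.

```python
# Sketch of Table 2.3: inspection-based scores lose 20% of their credit
# per year and are worthless after 5 years.

def degradation_factor(age_years, rate_per_yr=0.20, floor=0.0):
    """Fraction of an inspection-based score still credited after age_years."""
    return max(floor, 1.0 - rate_per_yr * age_years)

score_from_inspection = 8.0                 # hypothetical inspection-based score
aged_score = score_from_inspection * degradation_factor(3)   # data now 3 years old
print(aged_score)
```

A 3-year-old inspection thus retains only 40% of its original credit, matching the table's adjustment factor for that age.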
Additional studies have yielded similar correlations with terms relating to quality and frequency. In Tables 2.5 and 2.6, some test results are summarized using the median numerical value for all qualitative interpretations along with the standard deviation. The former shows the midpoint of responses (an equal number of answers above and below this value) and the latter indicates how much variability there is in the answers. Terms that have more variability suggest wider interpretations of their meanings. The terms in the tables relate quality to a 1- to 10-point numerical scale.

Table 2.5 Expressions of quality

Term            Median   Standard deviation
Outstanding     9.9      0.4
Excellent       9.7      0.6
Very good       8.5      0.7
Good            7.2      0.8
Satisfactory    5.9      1.2
Adequate        5.6      1.2
Fair            5.2      1.1
Medium          5        0.6
Average         4.9      0.5
Not too bad     4.6      1.3
So-so           4.5      0.7
Inadequate      1.9      1.2
Unsatisfactory  1.8      1.3
Poor            1.5      1.1
Bad             1        1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.

Table 2.6 Expressions of frequency

Term              Median   Standard deviation
Always            10       0.2
Very often        8.3      0.9
Mostly            8        1.3
Frequently        7.4      1.2
Often             6.6      1.2
Fairly often      6.1      1.1
Moderately often  5.7      1.2
Sometimes         3.6      1
Occasionally      3.2      1.1
Seldom            1.7      0.7
Rarely            1.3      0.6
Never             0        0.1

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.

Variable grouping

The grouping or categorizing of failure modes, consequences, and underlying factors is a model design decision that must be made. The use of variables and subvariables helps understandability when variables are grouped in a logical fashion, but it also creates intermediate calculations. Some view this as an attractive aspect of a model, while others might merely see it as an unnecessary complication. Without categories of variables, the model takes on the look of a flat file, in a database design analogy. When categories that look more like those of a relational database design are used, the interdependencies are more obvious.
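A verbal-qualifier lookup in the spirit of Tables 2.4 through 2.6 can be sketched as follows. The entries shown are a subset of the probability medians tabulated above; using the median as the mapped value (rather than, say, the range midpoint) is an assumption made here for illustration.

```python
# Sketch: mapping experts' verbal likelihood expressions to numbers,
# using median probability equivalents in the spirit of Table 2.4.

PROBABILITY_MEDIANS = {     # percent equivalents (subset, from the table above)
    "almost certain": 90,
    "likely": 70,
    "even chance": 50,
    "unlikely": 15,
    "almost impossible": 1,
}

def to_probability(expression):
    """Map an expert's verbal likelihood term to a fractional probability."""
    return PROBABILITY_MEDIANS[expression.strip().lower()] / 100.0

print(to_probability("Likely"))
```

A lookup like this lets interview answers ("that failure is unlikely here") feed numeric parts of the model while preserving the original wording in the record.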
Weightings

The weightings of the risk variables, that is, their maximum possible point values or adjustment factors, reflect the relative importance of each item. Importance is based on the variable's role in adding to or reducing risk. The following examples illustrate the way weightings can be viewed. Suppose that the threat of AC-induced corrosion is thought to represent 2% of the total threat of corrosion; it is a relatively rare phenomenon. Suppose further that all corrosion conditions and activities are thought to be worst case: the pipeline is in a harsh environment with no mitigation (no coatings, no cathodic protection, etc.), and atmospheric, internal, and buried metal corrosion are all thought to be imminent. If we now addressed all AC corrosion concerns only, then we would be adding 2% safety, reducing the threat of corrosion of any kind by 2% (and reducing the threat of AC-induced corrosion by 100%). As another example, if public education is assumed to carry a weight of 15% of the third-party threat, then doing public education as well as it can be done should reduce the relative failure rate from third-party damage scenarios by 15%.

Weightings should be continuously revisited and modified whenever evidence shows that adjustments are appropriate. The weightings are especially important when absolute risk calculations are being performed. For example, if an extra foot of cover is assumed, via the weightings assigned, to reduce failure probability by 10%, but an accumulation of statistical data suggests the effect is closer to 20%, the predictive power of the model is obviously improved by changing the weightings accordingly.
In actuality, it is very difficult to extract the true influence of a single variable from the confounding influence of the multitude of other variables acting on the scenario simultaneously. In the depth of cover example, the reality is probably that the extra foot of cover impacts risk by 10% in some situations, 50% in others, and not at all in still others. (See also Chapter 8 for a discussion of sensitivity analysis.)

The issue of assigning weightings to overall failure mechanisms also arises in model development. In a relative risk model with failure mechanisms of substantially equivalent orders of magnitude, a simplification can be used. The four indexes shown in Chapters 3 through 6 correspond to common failure modes and have equal 0–100 point scales; that is, all failure modes are weighted equally. Because accident history (with regard to cause of failures) is not consistent from one company to another, it does not seem logical to rank one index over another on an accident history basis. Furthermore, if index weightings are based on a specific operator's experience, that accident experience will probably change with the operator's changing risk management focus. When an operator experiences many corrosion failures, he will presumably take actions to specifically reduce corrosion potential. Over time, a different mechanism may consequently become the chief failure cause. So, the weightings would need to change periodically, making the tracking of risk difficult. Weightings should, however, be used
to reflect beliefs about the frequency of certain failure types when linking relative models to absolute calculations, or when there are large variations in expected failure frequencies among the possible failure types.
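The mechanics of applying weightings can be sketched as a weighted combination of variable scores. The variable names, scores, and weights below are hypothetical; the structure simply shows that a weighting can be updated when statistics warrant, without rescoring the underlying variables.

```python
# Sketch: variables scored on a fixed 0-10 scale, combined with weightings
# that sum to 100% of an index. All names and values are hypothetical.

def index_score(scores, weights):
    """scores: variable -> 0-10; weights: variable -> fraction summing to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[v] * weights[v] for v in weights) * 10   # 0-100 index

scores  = {"depth_of_cover": 7, "patrol": 5, "public_education": 9, "activity": 3}
weights = {"depth_of_cover": 0.20, "patrol": 0.30, "public_education": 0.15, "activity": 0.35}
print(index_score(scores, weights))
```

If evidence later shows, say, depth of cover mattering more than assumed, only the weights dictionary changes; the 0-10 variable scores stay as assessed.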
Risk scoring

Direction of point scale

In a scoring-type relative risk assessment, one of two point schemes is possible: increasing scores or decreasing scores to represent increased risk. Either can be used effectively, and each has advantages. As a risk score, it makes sense that higher numbers mean more risk. However, by analogy to a grading system and to most sports and games (except golf), others prefer higher numbers being better: more safety and less risk. Perhaps the most compelling argument for the "increasing points = increasing safety" protocol is that it instills a mind-set of increasing safety. "Increasing safety" has a meaning subtly different from, and certainly more positive than, "lowering risks." The implication is that additional safety is layered onto an already safe system as points are acquired. This protocol also has the advantage of corresponding to certain common expressions, such as "the risk situation has deteriorated" = "scores have decreased" and "the risk situation has improved" = "scores have increased." While this book uses an "increasing points = increasing safety" scale in all examples of failure probability, note that this choice can cause a slight complication if the relative risk assessments are linked to absolute risk values. The complication arises because the indexes actually represent relative probability of survival; in order to calculate a relative probability of failure and link that to failure frequencies, an additional step is required. This is discussed in Chapter 14.
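The "additional step" can be hinted at with a deliberately simplified sketch. This is an assumption made here for illustration only (the book's actual linkage to failure frequencies is developed in Chapter 14): treat the safety index as a relative survival measure and complement its normalized value.

```python
# Illustrative sketch only: converting an "increasing points = increasing
# safety" index into a relative probability-of-failure measure.

def relative_pof(index_points, max_points=100):
    """Complement of the normalized safety index; a relative measure only,
    not a calibrated failure frequency."""
    return 1.0 - index_points / max_points

print(relative_pof(80))
```

Under this sketch, a segment scoring 80 of 100 safety points carries a relative failure measure of 0.2; calibrating such values against actual failure frequencies requires the further steps described in Chapter 14.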
important the risk will be until she sees the weighting of that variable. Confusion can also arise in some models when the same variable is used in different parts of the model and has a locationspecific scoring scheme. For instance, in the offshore environment, water depth is a risk reducer when it makes anchoring damage less likely. It is a risk increaser when it increases the chance for buckling. So the same variable, water depth, is a “good” thing in one part of the model and a “ b a d thing somewhere else.
Combining variables An additional modeling design feature involves the choice of how variables will be combined. Because some variables will indicate increasing risk and others decreasing, a sign convention (positive versus negative) must be established. Increasing levels ofpreventions should lead to decreased risks while many attributes will be adding risks (see earlier discussion of preventions and attributes). For example, the prevention of performing additional inspections should improve risk scores, while risk scores deteriorate as more soil corrosivity indications (moisture, pH, contaminants, etc.) are found. Another aspect of combining variables involves the choice of multiplication versus addition. Each has advantages.Multiplication allows variables to independently have a great impact on a score. Adhtion better illustrates the layering of adverse conditions or mitigations. In formal probability calculations, multiplication usually represents the and operation: If corrosion prevention = “poor” AND soil comsivity = “high” then risk = “high.”Addition usually represents the or operation: If depth of cover = “good” OR activity levef= ‘‘low’’ then risk =“low.” Option 1 Risk variable = (sum of risk increasers) -(sum of nsk reducers)
Where to assign weightings In previous editions ofthis model, it is suggested that point values be set equal to weightings. That is, when a variable has a point value of 3, it represents 3% of the overall risk. The disadvantage of this system is that the user does not readily see what possible values that variable could take. Is it a 5-point variable, in which case a value of 3 means it is scoring midrange? Or is it a 15-point variable, for which a score of 3 means it is relatively low? An alternative point assignment scheme scores all variables on a fixed scale such as C L l O points. This has the advantage of letting the observer know immediately how “good” or “bad” the variable is. For example, a 2 always means 20% from the bottom and a 7 always means 70% of the maximum points that could be assigned. The disadvantage is that, in this system, weightings must be used in a subsequent calculation. This adds another step to the calculation and still does not make the point scale readily apparent. The observer does not know what the 70% variable score really means until he sees the weightings assigned. A score of 7 for a variable weighted at 20% is quite different from a score of 7 for a variable weighted at 5%. In one case, the user must see the point scale to know that a score of, say, 4 points represents the maximum level of mitigation. In the alternate case, the user knows that 10 always represents the maximum level of mitigation, but does not know how
where the point scales for each are in the same direction. For example, Corrosion threat = (environment) - [(coating) + (cathodic protection)]
Option 2 Risk variable = (sum ofrisk increasers) + (sum ofnsk reducers)
Point scales for risk increasers are often opposite from the scale of risk reducers. For example, in an “increasing points means increasing risk” scheme, Corrosion threat = (environment) + [(coating) + (cathodic protection)]
where actual point values might be

(corrosion threat) = (24) + (-5 + -2) = 17
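Options 1 and 2 can be sketched in a few lines of code; the corrosion-threat point values mirror the worked example above, and both conventions give the same total:

```python
# Two additive schemes for combining risk variables. Point values
# follow the corrosion-threat example in the text; the function
# names are illustrative, not from the manual.

def option1(increasers, reducers):
    """Option 1: reducers carry positive point values and are subtracted."""
    return sum(increasers) - sum(reducers)

def option2(increasers, reducers):
    """Option 2: reducers carry negative point values and are added."""
    return sum(increasers) + sum(reducers)

# Environment adds risk; coating and cathodic protection mitigate it.
environment = 24
print(option1([environment], [5, 2]))    # 24 - (5 + 2) = 17
print(option2([environment], [-5, -2]))  # 24 + (-5 + -2) = 17
```

The only difference is where the sign convention lives: in the combination rule (option 1) or in the point scale itself (option 2).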
Option 3

In this approach, we begin with an assessment of the threat level and then consider mitigation measures as adjustment factors. So, we begin with a risk and then adjust the risk downward (if increasing points = increasing risk) as mitigation is added:

Risk variable = (threat) x (sum of % threat reduction through mitigations)
Example

Corrosion threat = (environment) x [(coating) + (cathodic protection)]

Option 3 avoids the need to create codes for interactions of variables. For example, a scoring rule such as "cathodic protection is not needed = 10 pts" would not be needed in this scheme. It would be needed in other scoring schemes to account for a case where risk is low not through mitigation but through absence of threat. The scoring should also attempt to define the interplay of certain variables. If one variable can be done so well as to make certain others irrelevant, then the scoring protocol should allow for this. For example, if patrol (perhaps with a nominal weight of 20% of the third-party damage potential) can be done so well that we do not care about any other activity or condition, then other pertinent variables (such as public education, activity level, and depth of cover) could be scored as NA (the best possible numerical score) and the entire index is then based solely on patrol. In theory, this could be the case for a continuous security presence in some situations. A scoring regime that uses multiplication rather than addition is better suited to capturing this nuance.

The variables shown in Chapters 3 through 6 use a variation of option 2. All variables start at a value of 0, the highest risk. Safety points are then awarded for knowledge of less threatening conditions and/or the presence of mitigations. Any of the options can be effective as long as a point assignment manual is available to ensure proper and consistent scoring.

Variable calculations

Some risk assessment models in use today combine risk variables using only simple summations. Other mathematical relationships might be used to create variables before they are added to the model. The designer has the choice of where in the process certain variables are created.
For instance, D/t (pipe diameter divided by wall thickness) is often thought to be related to crack potential, strength, or some other risk issue. A variable called D/t can be created during data collection and its value added to other risk variables. This eliminates the need to divide D by t in the actual model. Alternatively, data for diameter and wall thickness could be made directly available to the risk model's algorithm, which would calculate the variable D/t as part of the risk scoring. Given the increased robustness of computer environments, the ability to efficiently model more complex relationships is leading to risk assessment models that take advantage of this ability. Conditional statements, "If X then Y," including comparative relationships ["if (pop density) > 2 then (design factor) = 0.6, ELSE (design factor) = 0.72"], are becoming more prevalent. The use of these more complex algorithms to describe aspects of risk tends to mirror human reasoning and decision-making patterns. They are not unlike very sophisticated efforts to create expert systems and other artificial intelligence applications based on many simple rules that represent our understanding. Examples of more complex algorithms are shown in the following chapters and in Appendix E.
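Both ideas, a derived variable computed inside the model and a conditional rule, can be sketched as follows. The design-factor thresholds follow the text's hypothetical example; the pipe dimensions are illustrative:

```python
# Derived variable: D/t computed inside the model rather than
# during data collection. Dimensions are illustrative.
def d_over_t(diameter_in, wall_thickness_in):
    return diameter_in / wall_thickness_in

# Conditional rule mirroring the text's example:
# if (pop density) > 2 then (design factor) = 0.6, else 0.72.
def design_factor(pop_density):
    return 0.6 if pop_density > 2 else 0.72

print(d_over_t(12.75, 0.25))  # 51.0
print(design_factor(3))       # 0.6
print(design_factor(1))       # 0.72
```

Keeping the division inside the model means a wall-thickness correction automatically updates every score that depends on D/t.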
Direct evidence adjustments

Risk evaluation is done primarily through the use of variables that provide indirect evidence of failure potential. This includes knowledge of pipe characteristics, measurements of environmental conditions, and results of surveys. From these, we infer the potential presence of active failure mechanisms or failure potential. However, active failure mechanisms are directly detected by in-line inspection (ILI), pressure testing, and/or visual inspections, including those that might be prompted by a leak. Pressure testing is included here as a direct means because it will either verify that failure mechanisms, even if present, have not compromised structural integrity, or it will prompt a visual inspection. If direct evidence appears to be in conflict with risk assessment results (based on indirect evidence), then one of three scenarios is true:

1. The risk assessment model is wrong; an important variable has been omitted or undervalued, or some interaction of variables has not been properly modeled.
2. The data used in the risk assessment are wrong; actual conditions are not as thought.
3. There actually is no conflict; the direct evidence is being interpreted incorrectly, or it represents an unlikely, but statistically possible, event that the risk assessment had discounted due to its very low probability.

It is prudent to perform an investigation to determine which scenario is the case. The first two each have significant implications regarding the utility of the risk management process. The last is a possible learning opportunity. Any conclusions based on previously gathered indirect evidence should be adjusted or overridden, when appropriate, by direct evidence. This reflects common practice, especially for time-dependent mechanisms such as corrosion: best efforts produce an assessment of corrosion potential, but that assessment is periodically validated by direct observation. The recommendation is that, whenever direct evidence of failure mechanisms is obtained, assessments should assume that these mechanisms are active. This assumption should remain in place until an investigation, preferably a root cause analysis (discussed later in this chapter), demonstrates that the causes underlying the failure mechanisms are known and have been addressed. For example, an observation of external corrosion damage should not be assumed to reflect old, already-mitigated corrosion. Rather, it should be assumed to represent active external corrosion unless the investigation concludes otherwise. Direct or confirmatory evidence includes leaks, breaks, anomalies detected by ILI, damages detected by visual inspection, and any other information that provides a direct indication of pipe integrity, if only at a very specific point. The use of ILI results in a risk assessment is discussed in Chapter 5.
The evidence should be captured in at least two areas of the assessment: pipe strength and failure potential. If reductions are not severe enough to warrant repairs, then the wall loss or strength reduction should be considered in the pipe strength evaluation (see Chapter 5). If repairs are questionable (use of nonstandard materials or practices), then the repair itself
should be evaluated. This includes a repair's potential to cause unwanted stress concentrations. If complete and acceptable repairs that restored full component strength have been made, then risk assessment "penalties" can be removed. Regardless of repair, the evidence still suggests the potential for repeat failures in the same area until the root cause identification and elimination process has been completed. Whether or not a root cause analysis has been completed, direct evidence can be compiled in various ways for use in a relative risk assessment. A count of incidences or a density of incidences (leaks per mile, for example) will be an appropriate use of information in some cases, while a zone-of-influence or anomaly-specific approach might be better suited in others. When such incidences are rather common, occurring regularly or clustering in locations, the density or count approaches can be useful. For example, the density of ILI anomalies of a certain type and size in a transmission pipeline or the density of nuisance leaks in a distribution main are useful risk indications (see Chapters 5 and 11). When direct evidence is rare in time and/or space, a more compelling approach is to assign a zone of influence around each incident. For example, a transmission pipe leak incident is rare and often directly affects only a few square inches of pipe. However, it yields evidence about the susceptibility of neighboring sections of pipeline. Therefore, a zone of influence, X number of feet on either side of the leak event, can be assigned around the leak. The length of pipeline within this zone of influence is then conservatively treated as having leaked and as containing conditions that might suggest increased leak susceptibility in the future. The recommended process for incorporating direct evidence into a relative risk assessment is as follows:

A. Use all available leak history and ILI results, even when root cause investigations are not available, to help evaluate and score appropriate risk variables. Conservatively assume that damage mechanisms are still active. For example, the detection of pipe wall thinning due to external corrosion implies:

- The existence of a corrosive environment
- Failure of both coating and cathodic protection systems, or a special mechanism at work such as AC-induced corrosion or microbially induced corrosion
- A pipe wall thickness that is not as thought; pipe strength must be recalculated

Scores should be assigned accordingly. The detection of damaged coating, gouges, or dents suggests previous third-party damages or substandard installation practices. This implies that:

- Third-party damage activity is significant, or at least was at one time in the past
- Errors occurred during construction
- Pipe strength must be recalculated

Again, scores can be assigned accordingly.

B. Use new direct evidence to directly validate or adjust risk scores. Compare actual coating condition, pipe wall thickness, pipe support condition, soil corrosivity, etc., with the corresponding risk variables' scores. Compare the relative likelihood of each failure mode with the direct evidence. How does the model's implied corrosion rate compare with wall loss observations? How does third-party damage likelihood compare with dents and gouges on the top or side of pipe? Is the design index measure of land movement potential consistent with observed support condition or evidence of deformation?

C. If disagreement is apparent (the direct evidence says something is actually "good" or "bad" while the risk model says the opposite), then perform an investigation. Based on the investigation results, do one or more of the following:

- Modify risk algorithms based on new knowledge.
- Modify previous condition assessments to reflect new knowledge. For example, "coating condition is actually bad, not fair as previously thought" or "cathodic protection levels are actually inadequate, despite 3-year-old close interval survey results."
- Monitor the situation carefully. For example, "existing third-party damage preventions are very protective of the pipe and this recent detection of a top side dent is a rare exception, or is old and not representative of the current situation. Rescoring is not appropriate unless additional evidence is obtained suggesting that third-party damage potential is actually higher than assumed." Note that this example is a nonconservative use of information and is not generally recommended.
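The zone-of-influence idea described above can be sketched as a simple stationing check. The leak locations, the 500-ft half-width, and the point penalty are all hypothetical values chosen for illustration:

```python
# Assign a "zone of influence" around each leak, conservatively
# treating any pipe segment within X feet of a leak as sharing the
# leak's evidence. Stations, half-width, and penalty are hypothetical.

LEAK_STATIONS_FT = [1200, 5400]   # stationing of direct evidence
ZONE_HALF_WIDTH_FT = 500          # X feet on either side of each leak

def in_zone_of_influence(station_ft):
    return any(abs(station_ft - leak) <= ZONE_HALF_WIDTH_FT
               for leak in LEAK_STATIONS_FT)

def adjusted_score(base_score, station_ft, penalty=10):
    """Penalize a segment's score when it lies inside any zone of influence."""
    if in_zone_of_influence(station_ft):
        return base_score - penalty
    return base_score

print(adjusted_score(80, 1500))  # 70: within 500 ft of the 1200-ft leak
print(adjusted_score(80, 3000))  # 80: outside both zones
```

The penalty stays in place until a root cause analysis shows the underlying mechanism has been addressed, as the text recommends.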
Role of leak history in risk assessment

Pipeline failure data often come at a high cost: an accident happens. We can benefit from this unfortunate acquisition of data by refining our model to incorporate the new information. In actual practice, it is a common belief, sometimes backed by statistical analysis, that pipeline sections that have experienced previous leaks are more likely to have additional leaks. Intuitive reasoning suggests that conditions that promote one leak will most likely promote additional leaks in the same area. Leak history should be a part of any risk assessment. It is often the primary basis of risk estimations expressed in absolute terms (see Chapter 14). A leak is strong evidence of failure-promoting conditions nearby, such as soil corrosivity, inadequate corrosion prevention, problematic pipe joints, failure of the one-call system, active earth movements, or any of many others. It is evidence of future leak potential. This evidence should be incorporated into a relative risk assessment because, ideally, the evaluator's "degree of belief" has been affected by the leaks. Each risk variable should always incorporate the best available knowledge of conditions and possibilities for promoting failure. Where past leaks have had no root cause analysis and/or corrective action applied, risk scores for that type of failure can be adjusted to reflect the presence of higher failure probability factors. A zone of influence around the leak site can be established (see Chapter 8) to penalize nearby portions of the system. In some pipelines, such as distribution systems (see Chapter 11) where some leak rate is routinely seen, the determination as to whether a section of pipeline is experiencing a higher frequency of leaks must be made on a relative basis. This can be
done by making comparisons with similar sections owned by the company or with industry-wide leak rates, as well as by benchmarking against specific other companies, or by a combination of these. Note that an event history is only useful in predicting future events to the extent that conditions remain unchanged. When corrective actions are applied, the event probability changes. Any adjustment for leak frequency should therefore be reanalyzed periodically.
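The relative comparison described above might be sketched as follows. The leak counts, mileages, and the 2x flagging threshold are hypothetical, chosen only to illustrate benchmarking a section against a company-wide rate:

```python
# Compare a section's leak rate to a company-wide benchmark on a
# relative basis. All counts, mileages, and thresholds are hypothetical.

def leaks_per_mile_year(leak_count, miles, years):
    return leak_count / (miles * years)

section_rate = leaks_per_mile_year(6, 10.0, 5.0)    # 0.12 leaks/mile-yr
company_rate = leaks_per_mile_year(40, 200.0, 5.0)  # 0.04 leaks/mile-yr

# Flag sections leaking well above the benchmark for closer review.
print(section_rate / company_rate)      # 3.0x the company average
print(section_rate > 2 * company_rate)  # True -> flag for review
```

Industry-wide rates or benchmarks from specific peer companies could be substituted for the company rate without changing the structure.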
Visual inspections

A visual inspection of an internal or external pipe surface may be triggered by an ILI anomaly investigation, a leak, a pressure test, or routine maintenance. If a visual inspection detects pipe damage, then the respective failure mode score for that segment of pipe should reflect the new evidence. Points can be reassigned only after a root cause analysis has been done and demonstrates that the damage mechanism has been permanently removed. For risk assessment purposes, a visual inspection is often assumed to reflect conditions for some length of pipe beyond the portions actually viewed. A conservative zone some distance either side of the damage location can be assumed. This should reflect the degree of belief and be conservative. For instance, if poor coating condition is observed at one site, then poor coating condition should be assumed for as far as those conditions (coating type and age, soil conditions, etc.) might extend. As noted earlier, penalties from visual inspections are removed through root cause analysis and removal of the root cause. Historical records of leaks and visual inspections should be included in the risk assessment even if they do not completely document the inspection, leak cause, or repair, as is often the case. Because root cause analyses for events long ago are problematic, and their value in a current condition assessment is arguable, the weighting of these events is often reduced, perhaps in proportion to the event's age.
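The age-proportional reduction mentioned above can be sketched as a simple linear discount. The 20-year horizon and the penalty values are assumptions for illustration, not values from the text:

```python
# Reduce the weight of historical leak/inspection evidence in
# proportion to its age. The 20-year horizon is an assumed value.

def age_weight(event_age_yr, horizon_yr=20.0):
    """Linear discount: full weight when new, zero weight at the horizon."""
    return max(0.0, 1.0 - event_age_yr / horizon_yr)

def weighted_penalty(base_penalty, event_age_yr):
    return base_penalty * age_weight(event_age_yr)

print(weighted_penalty(10.0, 0))   # 10.0: a fresh leak counts fully
print(weighted_penalty(10.0, 15))  # 2.5: a 15-year-old leak counts less
print(weighted_penalty(10.0, 25))  # 0.0: beyond the horizon
```

Any monotonically decreasing function would serve; the point is only that older, poorly documented events contribute less to the current assessment.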
Root cause analyses

Pipeline damage is very strong evidence of failure mechanisms at work. This should be captured in the risk assessment. However, once the cause of the damage has been removed, if it can be, then the risk assessment should reflect the now safer condition. Determining and removing the cause of a failure mechanism is not always easy. Before the evidence provided by actual damage is discounted, the evaluator should ensure that the true underlying cause has been identified and addressed. There are no rules for determining when a thorough and complete investigation has been performed. To help the evaluator make such a judgment, the following concepts regarding root cause analyses are offered [32]. A root cause analysis is a specialized type of incident investigation process that is designed to find the lowest level contributing causes to the incident. More conventional investigations often fail to arrive at this lowest level. For example, assume that a leak investigation reveals that a failed coating contributed to a leak. The coating is subsequently repaired and the previously assigned leak penalty is removed from the risk assessment results. But then, a few years later, another leak appears at the same location. It turns out that the main root cause was actually soil movements that will damage any coating, eventually leading to a repeat leak (discounting the role of other corrosion preventions; see Chapter 3). In this case, the leak penalty in the risk assessment should have been removed only after addressing the soil issue, not simply after the coating repair. This example illustrates that the investigators stopped the analysis too early by not determining the causes of the damaged coating. The root is often a system of causes that should be defined in the analysis step. The very basic understanding of cause and effect is that every effect has causes (plural). There is rarely only one root cause. The focus of any investigation or risk assessment is ultimately on effective solutions that prevent recurrence. These effective solutions are found by being very diligent in the analysis step (the causes). A typical indication of an incomplete analysis is missing evidence. Each cause-and-effect relationship should be validated with evidence. If we do not have evidence, then the cause-and-effect relationship cannot be validated. Evidence must be added to all causes in the analysis step. In the previous example, the investigators were missing the additional causes and the evidence to causally explain why the coating was damaged. If the investigators had evidence of coating damage, then the next question should have been "Why was the coating damaged?" A thorough analysis addresses the system of causes. If investigators cannot explain why the coating was damaged, then they have not completed the investigation. Simply repairing the coating is not going to be an effective solution. Technically, there is no end to a cause-and-effect chain; there is no end to the "Why?" questions. Common terminology includes root cause, direct cause, indirect cause, main cause, primary cause, contributing cause, proximate cause, physical cause, and so on.
It is also true that between any cause-and-effect relationship there are more causes that can be added; we can always ask more "Why?" questions between any cause and effect. This allows an analysis to dig into whatever level of detail is necessary. The critical point here is that the risk evaluator should not discount strong direct evidence of damage potential unless there is also compelling evidence that the damage-causing mechanisms have been permanently removed.
V. Lessons learned in establishing a risk assessment program

As the primary ingredient in a risk management system, a risk assessment process or model must first be established. This is no small undertaking and, as with any undertaking, is best accomplished with the benefit of experience. The following paragraphs offer some insights gained through the development of many pipeline risk management programs for many varied circumstances. Of course, each situation is unique and any rules of thumb are necessarily general and subject to many exceptions. To some degree, they also reflect a personal preference, but they are nonetheless offered here as food for thought for those embarking on such programs. These insights include some key points repeated from the first two chapters of this book.
The general lessons learned are as follows:
- Work from general to specific.
- Think "organic."
- Avoid complexity.
- Use computers wisely.
- Build the program as you would build a new pipeline.
- Study your results.

We now take a look at the specifics of these lessons learned.

Avoid complexity

Every single component of the risk model should yield more benefits than the cost it adds in terms of complexity and data-gathering efforts. Challenge every component of the risk model for its ability to genuinely improve the risk knowledge at a reasonable cost. For example:

- Don't include an exotic variable unless that variable is a useful risk factor.
- Don't use more significant digits than is justified.
- Don't use exponential notation numbers if a relative scale can be appropriately used.
- Don't duplicate existing databases; instead, access information from existing databases whenever possible. Duplicate data repositories will eventually lead to data inconsistencies.
- Don't use special factors that are only designed to change numerical scales. These tend to add more confusion than benefit in creating easy-to-use numbers.
- Avoid multiple levels of calculations whenever possible.
- Don't overestimate the accuracy of your results, especially in presentations and formal documentation. Remember the high degree of uncertainty associated with this type of effort.
Work from general to specific

Get the big picture first. This means "Get an overview assessment done for the whole system rather than getting every detail for only a portion of the system." This has two advantages:

1. No matter how strongly the project begins, things may change before project completion. If an interruption does occur, at least a general assessment has been done and some useful information has been generated.
2. There are strong psychological benefits to having results (even if very preliminary; caution is needed here) early in the process. This provides incentives to refine and improve preliminary results. So, having the entire system evaluated to a preliminary level gives timely feedback and should encourage further work.

It is easy to quickly assess an entire pipeline system by limiting the number of risk variables in the assessment. Use only a critical few, such as population density, type of product, operating pressure, perhaps incident experience, and a few others. The model can then later be "beefed up" by adding the variables that were not used in the first pass. Use readily available information whenever possible.
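A first-pass "critical few" screen like the one described could look like the sketch below. The variable list follows the text, but the weights and the 0-10 scores are hypothetical:

```python
# First-pass screening model using only a critical few variables,
# each scored 0-10 (higher = higher risk). Weights sum to 1.0.
# Weights and scores are hypothetical, not from the manual.

FIRST_PASS_WEIGHTS = {
    "population_density": 0.4,
    "product_hazard": 0.3,
    "operating_pressure": 0.2,
    "incident_history": 0.1,
}

def first_pass_score(scores):
    """Weighted average of 0-10 variable scores for one segment."""
    return sum(FIRST_PASS_WEIGHTS[name] * value
               for name, value in scores.items())

segment = {"population_density": 8, "product_hazard": 5,
           "operating_pressure": 4, "incident_history": 2}
print(round(first_pass_score(segment), 2))  # 5.7
```

"Beefing up" the model later is then just a matter of extending the weight table and renormalizing, without restructuring the calculation.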
Think "organic"

Imagine that the risk assessment process and even the model itself are living, breathing entities. They will grow and change over time. There is the fruit: the valuable answers that are used to directly improve decision making. The ideal process will continuously produce ready-to-eat fruit that is easy to "pick" and use without any more processing. There are also the roots: the behind-the-scenes techniques and knowledge that create the fruit. To ensure the fruit is good, the roots must be properly cared for. Feed and strengthen the roots by occasionally using HAZOPS, statistical analysis, FMEA, event trees, fault trees, and other specific risk tools. Such tools provide the underpinnings for the risk model. Allow for growth, because new inspection data, new inspection techniques, new statistical data sets to help determine weightings, missed risk indicators, new operating disciplines, and so on will arise. Plan for the most flexible environment possible. Make changes easy to incorporate. Anticipate that regardless of where the program begins and what the initial focus was, eventually all company personnel might be visiting and "picking the fruit" provided by this process.
Use computers wisely

Too much reliance on computers is probably more dangerous than too little. In the former, knowledge and insight can be obscured and even convoluted. In the latter, the chief danger is that inefficiencies will result: an undesirable, but not critical, event. Regardless of potential misuse, however, computers can greatly increase the strength of the risk assessment process, and no modern program is complete without extensive use of them. The modern software environment is such that information is easily moved between applications. In the early stages of a project, the computer should serve chiefly as a data repository. Then, in subsequent stages, it should house the algorithm: how the raw information such as wall thickness, population density, soil type, etc., is turned into risk information. In later stages of the project, data analysis and display routines should be available. Finally, computer routines to ensure ease and consistency of data entry, model tweaking, and generation of required output should be available. Software use in risk modeling should always follow program development, not lead it.
- Early stage. Use pencil and paper or simple graphics software to sketch preliminary designs of the risk assessment system. Also use project management tools, if desired, to plan the risk management project.
- Intermediate stages. Use software environments that can store, sort, and filter moderate amounts of data and generate new values from arithmetic and logical (if . . . then . . . else . . .) combinations of input data. Choices include modern spreadsheets and desktop databases.
- Later stages. Provide for larger quantity data entry, manipulation, query, display, etc., in a long-term, secure, and user-friendly environment. If spatial linking of information is desired, consider migrating to geographical information systems (GIS) platforms. If multiuser access is desired, consider robust database environments.
Computer usage in pipeline risk assessment and management is further discussed in Chapter 8.
Build the program as you would build a new pipeline

A useful way to view the establishment of a risk management program, and in particular the risk assessment process, is to consider a direct analogy with new pipeline construction. In either case, a certain discipline is required. As with new construction, failures in risk modeling occur through inappropriate expectations and poor planning, while success happens through thoughtful planning and management. Below, the project phases of a pipeline construction are compared to a risk assessment effort.
I. Conceptualization and scope creation phase:
Pipeline: Determine the objective, the needed capacity, the delivery parameters, and schedule.
Risk assessment: Several questions to the pipeline operator may better focus the effort and direct the choice of a formal risk assessment technique: What data do you have? What is your confidence in the predictive value of the data? What are the resource demands (and availability) in terms of costs, man-hours, and time to set up and maintain a risk model? What benefits do you expect to accrue, in terms of cost savings, reduced regulatory burdens, improved public support, and operational efficiency? Subsequent defining questions might include: What portions of your system are to be evaluated: pipeline only? Tanks? Stations? Valve sites? Mainlines? Branch lines? Distribution systems? Gathering systems? Onshore/offshore? To what level of detail? Estimate the uses for the model, then add a margin of safety, because there will be unanticipated uses. Develop a schedule and set milestones to measure progress.

II. Route selection/ROW acquisition:
Pipeline: Determine the optimum routing; begin the process of acquiring needed ROW.
Risk assessment: Determine the optimum location for the model and expertise. Centrally done from corporate headquarters? Field offices maintain and use information? Unlike the pipeline construction analogy, this aspect is readily changed at any point in the process and does not have to be finally decided at this early stage of the project.

III. Design:
Pipeline: Perform detailed design hydraulic calculations; specify equipment, control systems, and materials.
Risk assessment: The heart of the risk assessment will be the model or algorithm: that component which takes raw information such as wall thickness, population density, soil type, etc., and turns it into risk information.
Successful risk modeling involves a balancing between various issues, including:

- Identifying an exhaustive list of contributing factors versus choosing the critical few to incorporate in a model (complex versus simple)
- Hard data versus engineering judgment (how to incorporate widely held beliefs that do not have supporting statistical data)
- Uncertainty versus statistics (how much reliance to place on the predictive power of limited data)
- Flexibility versus a situation-specific model (ability to use the same model for a variety of products, geographical locations, facility types, etc.)

It is important that all risk variables be considered, even if only to conclude that certain variables will not be included in the final model. In fact, many variables will not be included when such variables do not add significant value but reduce the usability of the model. These "use or don't use" decisions should be made carefully and with full understanding of the role of the variables in the risk picture. Note that many simplifying assumptions are often made, especially for complex phenomena like dispersion modeling, fire and explosion potentials, etc., in order to make the risk model easy to use and still relatively robust. Both probability variables and consequence variables are examined in most formal risk models. This is consistent with the most widely accepted definition of risk:

Event risk = (event probability) x (event consequence)

(See also "VI. Commissioning" for more aspects of a successful risk model design.)

IV. Material procurement:
Pipeline: Identify long-delivery-time items, prepare specifications, determine delivery and quality control processes.
Risk assessment: Identify the data needs that will take the longest to obtain and begin those efforts immediately. Identify data formats and level of detail. Take steps to minimize subjectivity in data collection. Prepare data collection forms or formats and train data collectors to ensure consistency.

V. Construction:
Pipeline: Determine number of construction spreads, material staging, critical path schedule, inspection protocols.
Risk assessment: Form the data collection team(s), clearly define roles and responsibilities, create a critical path schedule to ensure timely data acquisition, schedule milestones, and take steps to ensure quality assurance/quality control.

VI. Commissioning:
Pipeline: Testing of all components; start-up programs completed.
Risk assessment: Use statistical analysis techniques to partially validate model results from a numerical basis. Perform a sensitivity analysis and some trial "what-ifs" to ensure that model results are believable and consistent. Perform validation exercises with experienced and knowledgeable operating and maintenance personnel. It is hoped that the risk assessment characteristics were specified earlier, in the design and concept phases of the project, but here is a final place to check to ensure the following:
- All failure modes are considered.
- All risk elements are considered and the most critical ones are included.
- Failure modes are considered independently as well as in aggregate.
- All available information is being appropriately utilized.
- Provisions exist for regular updates of information, including new types of data.
- Consequence factors are separable from probability factors.
- Weightings, or other methods to recognize the relative importance of factors, are established.
- The rationale behind weightings is well documented and consistent.
- A sensitivity analysis has been performed.
- The model reacts appropriately to failures of any type.
- Risk elements are combined appropriately ("and" versus "or" combinations).
- Steps are taken to ensure consistency of evaluation.
- Risk assessment results form a reasonable statistical distribution (outliers?).
- There is adequate discrimination in the measured results (signal-to-noise ratio).
- Comparisons can be made against fixed or floating standards or benchmarks.

VII. Project completion:
Pipeline: Finalize manuals, complete training, ensure maintenance protocols are in place, and turn the system over to operations.
Risk assessment: Carefully document the risk assessment process and all subprocesses, especially the detailed workings of the algorithm or central model. Set up administrative processes to support an ongoing program. Ensure that control documents cover the details of all aspects of a good administrative program, including:

- Defining roles and responsibilities
- Performance monitoring and feedback
- Process procedures
- Management of change
- Communication protocols
Study the results

This might seem obvious, but it is surprising how many owners really do not appreciate what they have available after completing a thorough risk assessment. Remember that your final risk numbers should be completely meaningful in a practical, real-world sense. They should represent everything you know about that piece of pipe (or other system component): all of the collective years of experience of your organization, all the statistical data you can gather, all your gut feelings, all your sophisticated engineering calculations. If you can't really believe your numbers, something is wrong with the model. When, through careful evaluation and much experience, you can really believe the numbers, you will find many ways to use them that you perhaps did not foresee. They can be used to
Design an operating discipline
Assist in route selection
Optimize spending
Strengthen project evaluation
Determine project prioritization
Determine resource allocation
Ensure regulatory compliance
VI. Examples of scoring algorithms

Sample relative risk model

The relative risk assessment model outlined in Chapters 3 through 7 is designed to be a simple and straightforward pipeline risk assessment model that focuses on potential consequences to public safety and environmental preservation. It provides a framework to ensure that all critical aspects of risk are captured. Figure 2.4 shows a flowchart of this model. This framework is flexible enough to accommodate any level of detail and data availability. For most variables, a sample point-scoring scheme is presented. In many cases, alternative scoring schemes are also shown. Additional risk assessment examples can be found in the case studies of Chapter 14 and in Appendix E. The pipeline risk picture is examined in two general parts. The first part is a detailed itemization and relative weighting of all reasonably foreseeable events that may lead to the failure of a pipeline: "What can go wrong?" and "How likely is it to go wrong?" This highlights operational and design options that can change the probability of failure (Chapters 3 through 6). The second part is an analysis of the potential consequences should a failure occur (Chapter 7). The two general parts correspond to the two factors used in the most commonly accepted definition of risk:

Risk = (event likelihood) x (event consequence)
The failure potential component is further broken into four indexes (see Figure 2.4). The indexes roughly correspond to categories of reported pipeline accident failures. That is, each index reflects a general area to which, historically, pipeline accidents have been attributed. By considering each variable in each index, the evaluator arrives at a numerical value for that index. The four index values are then summed to a total value (called the index sum) representing the overall failure probability (or survival probability) for the segment evaluated. The individual variable values, not just the total index score, are preserved, however, for detailed analysis later. The primary focus of the probability part of the assessment is the potential for a particular failure mechanism to be active. This is subtly different from the likelihood of failure. Especially in the case of a time-dependent mechanism such as corrosion, fatigue, or slow earth movements, the time to failure is related to factors beyond the presence of a failure mechanism. These include the resistance of the pipe material, the aggressiveness of the failure mechanism, and the time of exposure. These, in turn, can be further examined. For instance, the material resistance is a function of material strength; dimensions, most notably pipe wall thickness; and the stress level. The additional aspects leading to a time-to-fail estimate are usually more appropriately considered in specific investigations.
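The index-sum mechanics just described can be sketched in a few lines. This is an illustrative sketch, not code from the book; the index names and scores are hypothetical examples.

```python
# Hypothetical sketch of the "index sum" described in the text:
# four index scores (each 0-100, higher = safer) are summed, while
# the individual values are preserved for detailed analysis later.
def index_sum(index_scores):
    """Return the overall index sum for one pipeline segment."""
    return sum(index_scores.values())

# Hypothetical scores for one segment
scores = {
    "third_party_damage": 55,
    "corrosion": 60,
    "design": 70,
    "incorrect_operations": 65,
}
print(index_sum(scores))  # 250 (out of a possible 400)
```

Keeping the scores in a mapping, rather than a single total, mirrors the text's point that individual variable values remain available for later drill-down analysis.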
In the second part of the evaluation, an assessment is made of the potential consequences of a pipeline failure. Product characteristics, pipeline operating conditions, and the pipeline surroundings are considered in arriving at a consequence factor. The consequence score is called the leak impact factor and includes acute as well as chronic hazards associated with product releases. The leak impact factor is combined with the index sum (by dividing) to arrive at a final risk score for each section of pipeline. The end result is a numerical risk value for each pipeline section. All of the information incorporated into this number is preserved for a detailed analysis, if required. The higher-level variables of the entire process can be seen in the flowchart in Figure 2.4.
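That combination step (dividing the index sum by the leak impact factor) can be written as a minimal sketch; the values below are hypothetical.

```python
# Hypothetical sketch: the index sum (higher = safer) is divided by
# the leak impact factor (higher = worse consequences) to yield a
# relative risk score in which higher numbers again indicate less risk.
def relative_risk_score(index_sum, leak_impact_factor):
    """Final relative risk score for one pipeline section."""
    return index_sum / leak_impact_factor

print(relative_risk_score(250, 10.0))  # 25.0
```

Note the convention this preserves: a safer section (larger index sum) or a milder consequence picture (smaller leak impact factor) both raise the final score.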
Basic assumptions

Some general assumptions are built into the relative risk assessment model discussed in Chapters 3 through 7. The user, and especially the customizer, of this system should be aware of these and make changes where appropriate.

Independence  Hazards are assumed to be additive but independent. Each item that influences the risk picture is considered separately from all other items; it independently influences the risk. The overall risk assessment combines all of the independent factors to get a final number. The final number reflects the "area of opportunity" for a failure mechanism to be active because the number of independent factors is believed to be directly proportional to the risk. For example, if event B can only occur if event A has first occurred, then event B is given a lower weighting to reflect the fact that there is a lower probability of both events happening. However, the example risk model does not normally stipulate that event B cannot happen without event A.

Worst case  When multiple conditions exist within the same pipeline segment, it is recommended that the worst-case condition for a section govern the point assignment. The rationale for this is discussed in Chapter 1. For instance, if a 5-mile section of pipeline has 3 ft of cover for all but 200 ft of its length (which has only 1 ft of cover), the section is still rated as if the entire 5-mile length has only 1 ft of cover. The evaluator can work around this through his choice of section breaks (see the Sectioning of the Pipeline section earlier in this chapter). Using modern segmentation strategies, there is no reason to have differing risk conditions within the same pipeline segment.

Relative  Unless a correlation to absolute risk values has been established, point values are meaningful only in a relative sense. A point score for one pipeline section only shows how that section compares with other scored sections. Higher point values represent increased safety (decreased probability of failure) in all index values (Chapters 3 through 6). Absolute risk values can be correlated to the relative risk values in some cases, as is discussed in Chapter 14.

Judgment based  The example point schedules reflect experts' opinions based on their interpretations of pipeline industry experience as well as personal pipelining experience. The relative importance of each item (reflected in the weighting of the item) is similarly the experts' judgment. If sound statistical data are available, they are incorporated into these judgments. However, in many cases, useful frequency-of-occurrence data are not available. Consequently, there is an element of subjectivity in this approach.

Public  Threats to the general public are of most interest here.
Risks specific to pipeline operators and pipeline company personnel can be included as an expansion to this system, but only with great care, since a careless addition may interfere with the objectives of the evaluation. In most cases, it is believed that other possible consequences will be proportional to public safety risks, so the focus on public safety will usually fairly represent most risks.
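The worst-case assumption can be sketched as follows. This is an illustrative sketch under stated assumptions: the per-location scoring borrows the depth-of-cover formula (inches of cover divided by 3, capped at 20 points) given as an example in Chapter 3, and the depth values are hypothetical.

```python
# Hypothetical sketch of the "worst case" assumption: when a segment
# contains multiple conditions, the worst (lowest-scoring) condition
# governs the point assignment for the whole segment.
def governing_cover_points(depths_in_inches, max_points=20):
    """Score each observed depth (inches / 3, capped) and keep the minimum."""
    return min(min(d / 3, max_points) for d in depths_in_inches)

# A section with mostly 36 in. of cover but one short span at 12 in.
print(governing_cover_points([36, 36, 12, 36]))  # 4.0 -- the 12-in. span governs
```

As the text notes, finer segmentation is the cleaner alternative: splitting the shallow span into its own section avoids penalizing the entire length.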
Figure 2.4 Flowchart of relative risk index system.
Mitigations  It is assumed that mitigations never completely erase the threat. This is consistent with the idea that the condition of "no threat" will have less risk than the condition of "mitigated threat," regardless of the robustness of the mitigation measures. It also shows that even with much prevention in place, the hazard has not been removed.
Other examples  See Appendix E for examples of other risk scoring algorithms for pipelines in general. Additional examples are included in several other chapters, notably Chapters 9 through 13, where discussions involve the assessments of special situations.
Third-party Damage Index
Third-party Damage Index
A. Minimum Depth of Cover       0-20 pts     20%
B. Activity Level               0-20 pts     20%
C. Aboveground Facilities       0-10 pts     10%
D. Line Locating                0-15 pts     15%
E. Public Education Programs    0-15 pts     15%
F. Right-of-way Condition       0-5 pts       5%
G. Patrol Frequency             0-15 pts     15%
Total                           0-100 pts   100%
This table lists some possible variables and weightings that could be used to assess the potential for third-party damage to a typical transmission pipeline (see Figures 3.1 and 3.2).
Background

Pipeline operators usually take steps to reduce the possibility of damage to their facilities by others. The extent to which mitigating steps are necessary depends on how readily the system can be damaged and how often the chance for damage occurs.
Third-party damage, as the term is used here, refers to any accidental damage done to the pipe as a result of activities of personnel not associated with the pipeline. This failure mode is also sometimes called outside force or external force, but those descriptions would presumably include damaging earth movements. We use third-party damage as the descriptor here to focus the analyses specifically on damage caused by people not associated with the pipeline. Potential earth movement damage is addressed in the design index discussion of Chapter 5. Intentional damage is covered in the sabotage module (Chapter 9). Accidental damage done by pipeline personnel and contractors is covered in the incorrect operations index chapter (Chapter 6). U.S. Department of Transportation (DOT) pipeline accident statistics indicate that third-party intrusions are often the leading cause of pipeline failure. Some 20 to 40 percent of all pipeline failures in most time periods are attributed to third-party damage. In spite of these statistics, the potential for third-party damage is often one of the least considered aspects of pipeline hazard assessment. The good safety record of pipelines has been attributed in part to their initial installation in sparsely populated areas and
Figure 3.1 Basic risk assessment model.
Figure 3.2 Assessing third-party damage potential: sample of data used to score the third-party damage index.

Minimum depth of cover: soil cover; type of soil (rock, clay, sand, etc.); pavement type (asphalt, concrete, none, etc.); warning tape or mesh; water depth
Activity level: population density; stability of the area (construction, renovation, etc.); one-calls; other buried utilities; anchoring, dredging
Aboveground facilities: vulnerability (distance, barriers, etc.); threats (traffic volume, traffic type, aircraft, etc.)
One-call system: mandated; response by owner; well-known and used
Public education: methods (door-to-door, mail, advertisements, etc.); frequency
Right-of-way condition: signs (size, spacing, lettering, phone numbers, etc.); markers (air vs. ground, size, visibility, spacing, etc.); overgrowth; undergrowth
Patrol: ground patrol frequency; ground patrol effectiveness; air patrol frequency; air patrol effectiveness
their burial 2.5 to 3 feet deep. However, encroachments of population and land development activities are routinely threatening many pipelines today. In the period from 1983 through 1987, eight deaths, 25 injuries, and more than $14 million in property damage occurred in the hazardous liquid pipeline industry due solely to excavation damage by others. These types of pipeline failures represent 259 accidents out of a total of 969 accidents from all causes. This means that 26.7% of all hazardous liquid pipeline accidents were caused by excavation damage [87]. In the gas pipeline industry, a similar story emerges: 430 incidents from excavation damage were reported in the 1984-1987 period. These accidents resulted in 26 deaths, 148 injuries, and more than $18 million in property damage. Excavation damage is thought to be responsible for 10.5% of incidents reported for distribution systems, 22.7% of incidents reported for transmission/gathering pipelines, and 14.6% of all incidents in gas pipelines [87]. European gas pipeline experience, based on almost 1.2 million mile-years of operations in nine Western European countries, shows that third-party interference represents approximately 50% of all pipeline failures [44].
Exposure

To quantify the risk exposure from excavation damage, an estimate of the total number of excavations that present a chance for damage can be made. Reference [64] discusses the Gas Research Institute's (GRI's) 1995 study that makes an effort to determine risk exposure for the gas industry. The study surveyed 65 local distribution companies and 35 transmission companies regarding line hits. The accuracy of the analysis was limited by the response: less than half (41%) of the companies responded, and several major gas-producing states were poorly represented (only one respondent from Texas and one from Oklahoma). The GRI estimate was determined by extrapolation and may be subject to a large degree of error because the data sample was not representative. Based on survey responses, however, GRI calculated an approximate magnitude of exposure. For those companies that responded, a total of 25,123 hits to gas lines were recorded in 1993; from that, GRI estimated total U.S. pipeline hits in 1993 to be 104,128. For a rate of exposure, this number can be compared to pipeline miles: For 1993, using a reported 1,778,600 miles of gas transmission, main, and service lines, the calculated exposure rate was 58 hits per 1000 line miles. Transmission lines had a substantially lower experience, a rate of 5.5 hits per 1000 miles, with distribution lines suffering 71 hits per 1000 miles [64]. All rates are based on limited data. Because the risk of excavation damage is associated with digging activity rather than system size, "hits per digs" is a useful measure of risk exposure. For the same year that GRI conducted its survey, one-call systems collectively received more than an estimated 20 million calls from excavators. (These calls generated 300 million work-site notifications for participating members to mark many different types of underground systems.) Using GRI's estimate of hits, the risk exposure rate for 1993 was 5 hits per 1000 notifications to dig [64].
Risk variables

Many mitigation measures are in place in most Western countries to reduce the threat of third-party damage to pipelines. Nonetheless, recent experience in most countries shows that this remains a major threat, despite often mandatory systems such as one-call systems. Reasons for continued third-party damage, especially in urban areas, include:

Smaller contractors ignorant of the permit or notification process
No incentive for excavators to avoid damaging the lines when the repair cost (to the damaging party) is smaller than the avoidance cost
Inaccurate maps/records
Imprecise locations by the operator.

Many of these situations are evaluated as variables in the suggested risk assessment model. The pipeline designer and, perhaps to an even greater extent, the operator can affect the probability of damage from third-party activities. As an element of the total risk picture, the probability of accidental third-party damage to a facility depends on:

The ease with which the facility can be reached by a third party
The frequency and type of third-party activities nearby.

Possible offenders include:

Excavating equipment
Projectiles
Vehicular traffic
Trains
Farming equipment
Seismic charges
Fenceposts
Telephone posts
Wildlife (cattle, elephants, birds, etc.)
Anchors
Dredges.

Factors that affect the susceptibility of the facility include:

Depth of cover
Nature of cover (earth, rock, concrete, paving, etc.)
Man-made barriers (fences, barricades, levees, ditches, etc.)
Natural barriers (trees, rivers, ditches, rocks, etc.)
Presence of pipeline markers
Condition of right of way (ROW)
Frequency and thoroughness of patrolling
Response time to reported threats.

The activity level is often judged by items such as:

Population density
Construction activities nearby
Proximity and volume of rail or vehicular traffic
Number of other buried utilities in the area.
Serious damage to a pipeline is not limited to actual punctures of the line. A mere scratch on a coated steel pipeline damages the corrosion-resistant coating. Such damage can lead to accelerated corrosion and ultimately a corrosion failure perhaps years in the future. If the scratch is deep enough to have removed enough metal, a stress concentration area (see Chapter 5) could be formed, which again, perhaps years later, may lead to a failure from fatigue, either alone or in combination with some form of corrosion-accelerated cracking. This is one reason why public education plays such an important role in damage prevention. To the casual observer, a minor dent or scratch in a steel pipeline may appear insignificant, certainly not worthy of mention. A pipeline operator knows the potential impact of any disturbance to the line. Communicating this to the general public increases pipeline safety. Several variables are thought to play a critical role in the threat of third-party damage. Measuring these variables can therefore provide an assessment of the overall threat. Note that in the approach described here, this index measures the potential for third-party damage, not the potential for pipeline failure from third-party damage. This is a subtle but important distinction. If the evaluator wishes to measure the latter in a single assessment, additional variables such as pipe strength, operating stress level, and characteristics of the potential third-party intrusions (such as equipment type and strength) would need to be added to the assessment. What are believed to be the key variables to consider in assessing the potential for third-party damage are discussed in the following sections. Weightings reflect the relative percentage contribution of the variable to the overall threat of third-party damage.
Assessing third-party damage potential

A. Minimum depth of cover (weighting: 20%)

The minimum depth of cover is the amount of earth, or equivalent cover, over the pipeline that serves to protect the pipe from third-party activities. A schedule or simple formula can be developed to assign point values based on depth of cover. In this formula, increasing points indicate a safer condition; this convention is used throughout this book. A sample formula for depth of cover is as follows:
(Amount of cover in inches) / 3 = point value, up to a maximum of 20 points

For instance,

42 in. of cover = 42 / 3 = 14 points
24 in. of cover = 24 / 3 = 8 points
Points should be assessed based on the shallowest location within the section being evaluated. The evaluator should feel confident that the depth of cover data are current and accurate; otherwise, the point assessments should reflect the uncertainty. Experience and logic indicate that less than one foot of cover may actually do more harm than good: it is enough cover to conceal the line but not enough to protect the line from even shallow earth-moving equipment (such as agricultural equipment). Three feet of cover is a common amount of cover required by many regulatory agencies for new construction. Credit should also be given for comparable means of protecting the line from mechanical damage. A schedule can be developed for these other means, perhaps by equating the mechanical protection to an amount of additional earth cover that is thought to provide equivalent protection. For example,

2 in. of concrete coating = 8 in. of additional earth cover
4 in. of concrete coating = 12 in. of additional earth cover
Pipe casing = 24 in. of additional cover
Concrete slab (reinforced) = 24 in. of additional cover.
Using the example formula above, a pipe section that has 14 in. of cover and is encased in a casing pipe would have an equivalent earth cover of 14 + 24 = 38 in., yielding a point value of 38 / 3 = 12.7. Burial of a warning tape (a highly visible strip of material with warnings clearly printed on it) may help to avert damage to a pipeline (Figure 3.3). Such flagging or tape is commercially available and is usually installed just beneath the ground surface directly over the pipeline. Hopefully, an excavator will discover the warning tape, cease the excavation, and avoid damage to the line. Although this early warning system provides no physical protection, its benefit from a failure-prevention standpoint can be included in this model. A derivative of this system is a warning mesh, where instead of a single strip of low-strength tape, a tough, high-visibility plastic mesh, perhaps 30 to 36 in. wide, is used. This provides some physical protection because most excavation equipment will have at least some minor difficulty penetrating it. It also provides additional protection via the increased width, reducing the likelihood of the excavation equipment striking the pipe before the warning mesh. Either system can be valued in terms of an equivalent amount of earth cover. For example:

Warning tape = 6 in. of additional cover
Warning mesh = 18 in. of additional cover.

As with all items in this risk assessment system, the evaluator should use his company's best experience or other available information to create his point values and weightings. Common situations that may need to be addressed include rocks in one region, sand in another (is the protection value equivalent?), and pipelines under different roadway types (concrete versus asphalt versus compacted stone, etc.). The evaluator need only remember the goal of consistency and the intent of assessing the amount of real protection from mechanical damage.
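The cover formula and equivalent-cover credits can be combined into one scoring sketch. The function and dictionary names below are my own; the credit values are the example values given in the text, and any real implementation would substitute the evaluator's own schedule.

```python
# Hypothetical sketch of depth-of-cover scoring with equivalent-cover
# credits, using the example values from the text.
EQUIVALENT_COVER_IN = {
    "2in_concrete_coating": 8,
    "4in_concrete_coating": 12,
    "pipe_casing": 24,
    "reinforced_concrete_slab": 24,
    "warning_tape": 6,
    "warning_mesh": 18,
}

def cover_points(depth_in, protections=(), max_points=20):
    """(actual cover + equivalent cover) / 3, capped at max_points."""
    total = depth_in + sum(EQUIVALENT_COVER_IN[p] for p in protections)
    return min(total / 3, max_points)

print(round(cover_points(14, ["pipe_casing"]), 1))  # 12.7, as in the text's example
```

The cap keeps a heavily protected, deeply buried line from exceeding the 20-point weighting assigned to this variable.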
If the wall thickness is greater than what is required for anticipated pressures and external loadings, the extra thickness is available to provide additional protection against failure from external damage or corrosion. Mechanical protection that may be available from extra pipe wall material is accounted for in the design index (Chapter 5). In the case of pipelines submerged at water crossings, the intent is the same: Evaluate the ease with which a third party can physically access and damage the pipe. Credit should be given for water depth, concrete coatings, depth below seafloor, extra damage protection coatings, etc. A point schedule for submerged lines in navigable waterways might look something like the following:
Figure 3.3 Minimum depth of cover (diagram showing ground surface, buried warning tape, and pipeline).

Depth below water surface:
0-5 ft                            0 pts
5 ft to maximum anchor depth      3 pts
> maximum anchor depth            7 pts

Depth below bottom of waterway (add these points to the points from depth below water surface):
0-2 ft                            0 pts
2-3 ft                            3 pts
3-5 ft                            5 pts
5 ft to maximum dredge depth      7 pts
> maximum dredge depth           10 pts

Concrete coating (add these points to the points assigned for water depth and burial depth):
None                              0 pts
Minimum 1 in.                     5 pts

The total for all three categories may not exceed 20 pts if a weighting of 20% is used.
The above schedule assumes that water depth offers some protection against third-party damage. This may not be a valid assumption in every case; such an assumption should be confirmed by the evaluator. Point schedules might also reflect the anticipated sources of damage. If only small boats can anchor in the area, perhaps this results in less vulnerability and the point scores can reflect this. Reported depths must reflect the current situation because sea or riverbed scour can rapidly change the depth of cover. The use of water crossing surveys to determine the condition of the line, especially the extent of its exposure to external force damage, indirectly impacts the risk picture (Figure 3.4). Such a survey may be the only way to establish the pipeline depth and the extent of its exposure to boat traffic, currents, floating debris, etc. Because conditions can change dramatically when flowing water is involved, the time since the last survey is also a factor to be considered. Such surveys are considered in the incorrect operations index chapter (Chapter 6). Points can be adjusted to reflect the evaluator's confidence that cover information is current, with the recommendation to penalize (show increased risk) wherever uncertainty is higher. (See also Chapter 12 on offshore pipeline systems.)
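The sample submerged-crossing schedule can be sketched as below. This is an illustrative sketch, not the book's code; the maximum anchor and dredge depths are site-specific inputs supplied by the evaluator, and the point values follow the example schedule.

```python
# Hypothetical sketch of the sample submerged-line schedule: three
# category scores are added and capped at the 20-pt index weighting.
def water_depth_points(depth_ft, max_anchor_ft):
    if depth_ft <= 5:
        return 0
    return 3 if depth_ft <= max_anchor_ft else 7

def burial_points(depth_ft, max_dredge_ft):
    if depth_ft < 2:
        return 0
    if depth_ft < 3:
        return 3
    if depth_ft < 5:
        return 5
    return 7 if depth_ft <= max_dredge_ft else 10

def coating_points(coating_in):
    return 5 if coating_in >= 1 else 0

def submerged_cover_points(water_ft, burial_ft, coating_in,
                           max_anchor_ft, max_dredge_ft, cap=20):
    total = (water_depth_points(water_ft, max_anchor_ft)
             + burial_points(burial_ft, max_dredge_ft)
             + coating_points(coating_in))
    return min(total, cap)

# Shallow shore approach, unburied pipe, 4-in. concrete coating
# (the governing worst case in Example 3.1's river crossing):
print(submerged_cover_points(0, 0, 4, max_anchor_ft=10, max_dredge_ft=8))  # 5
```

Scoring the worst-case point along the crossing (here, the shallow shore approach rather than the deep midpoint) follows the worst-case assumption stated earlier in the chapter.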
Figure 3.4 River crossing survey (river bank and previous survey profile).
Example 3.1: Scoring the depth of cover

In this example, a pipeline section has burial depths of 10 and 30 in. In the shallowest portions, a concrete slab has been placed over and along the length of the line. The 4-in. slab is 3 ft wide and reinforced with steel mesh. Using the above schedule, the evaluator calculates points for the shallow sections with additional protection and for the sections buried with 30 in. of cover. For the shallow case: 10 in. of cover + 24 in. of additional (equivalent) cover due to the slab = (10 + 24)/3 pts = 11.3 pts. Second case: 30 in. of cover = 30/3 = 10 pts. Because the minimum cover (including extra protection) yields the higher point value, the evaluator uses the 10-pt score for the pipe buried with 30 in. of cover as the worst case and, hence, the governing point value for this section. A better solution to this example would be to separate the 10-in. and 30-in. portions into separate pipeline sections for independent assessment.

In this section, a submerged line lies unburied on a river bottom, 30 ft below the surface at the river midpoint, rising to the water surface at shore. At the shoreline, the line is buried with 36 in. of cover. The line has 4 in. of concrete coating around it throughout the entire section. Points are assessed as follows: The shore approaches are very shallow; although boat anchoring is rare, it is possible. No protection is offered by water depth, so 0 pts are given here. The 4 in. of concrete coating yields 5 pts. Because the pipe is not buried beneath the river bottom, 0 pts are awarded for cover.
Total score = 0 + 5 + 0 = 5 pts
B. Activity level (weighting: 20%)

Fundamental to any risk assessment is the area of opportunity. For an analysis of third-party damage potential, the area of opportunity is strongly affected by the level of activity near the pipeline. It is intuitively apparent that more digging activity near the line increases the opportunity for a line strike. Excavation occurs frequently in the United States. The excavation notification system in the state of Illinois recorded more than 100,000 calls during the month of April 1997. New Jersey's one-call system records 2.2 million excavation markings per year, an average of more than 6000 per day [64]. As noted previously, it is estimated that gas pipelines are accidentally struck at the rate of 5 hits per every 1000 one-call notifications. DOT accident statistics for gas pipelines indicate that, in the 1984-1987 period, 35% of excavation damage accidents occurred in Class 1 and 2 locations, as defined by DOT gas pipeline regulations [87]. These are the less populated areas. This tends to support the hypothesis that a higher population density means more accident potential. Other considerations include nearby rail systems and high volumes of nearby traffic, especially where heavy vehicles such as trucks or trains are prevalent or speeds are high. Aboveground facilities and even buried pipe are at risk because an automobile or train wreck has tremendous destructive energy potential. In some areas, wildlife damage is common. Heavy animals such as elephants, bison, and cattle can damage instrumentation and pipe coatings, if not the pipe itself. Birds and other smaller animals and even insects can also cause damage by their normal activities. Again, coatings and instrumentation of aboveground facilities are usually most threatened. Where such activity presents a threat of external force damage to the pipeline, it can be assessed as a contributor to activity level here. The activity level item is normally a risk variable that may change over time, but is relatively unchangeable by the pipeline operator. Relocation is usually the only means for the pipeline operator to change this variable, and relocation is not normally a routine risk mitigation option. The evaluator can create several classifications of activity levels for risk scoring purposes. She does this by describing sufficient conditions such that an area falls into one of her classifications. The following example provides a sample of some of the conditions that may be appropriate. Further explanation follows the example classifications.

High activity level (0 points)  This area is characterized by one or more of the following:

Class 3 population density (as defined by DOT CFR 49 Part 192)
High population density as measured by some other scale
Frequent construction activities
High volume of one-call or reconnaissance reports (>2 per week)
Rail or roadway traffic that poses a threat
Many other buried utilities nearby
Frequent damage from wildlife
Normal anchoring area when offshore
Frequent dredging near the offshore line.
Medium activity level (8 points)  This area is characterized by one or more of the following:

Class 2 population density (as defined by DOT)
Medium population density nearby, as measured by some other scale
No routine construction activities that could pose a threat
Few one-call or reconnaissance reports (<5 per month)
Few buried utilities nearby
Occasional wildlife damage.
Low activity level (15 points)  This area is characterized by all of the following:

Class 1 population density (as defined by DOT)
Rural, low population density as measured by some other scale
Virtually no activity reports (
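The three example classifications can be mapped to their point values in a simple sketch. The condition checks below are simplified, hypothetical stand-ins for the fuller qualitative criteria listed above (only DOT class and report volume are used), so this is an illustration of the scoring structure, not a complete classifier.

```python
# Hypothetical sketch: map activity-level classification to points
# (0 = high activity, 8 = medium, 15 = low), using DOT class and
# weekly one-call report volume as simplified stand-in criteria.
def activity_level_points(dot_class, reports_per_week):
    if dot_class >= 3 or reports_per_week > 2:
        return 0   # high activity level
    if dot_class == 2:
        return 8   # medium activity level
    return 15      # low activity level (Class 1, rural)

print(activity_level_points(1, 0))  # 15
```

Because high activity is the riskiest condition, it earns the fewest points, consistent with the convention that higher scores mean greater safety.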
In each classification of the above example, population density is a factor. More people in an area generally means more activity: fence building, gardening, water well construction, ditch digging or clearing, wall building, shed construction, landscaping, pool installations, etc. Many of these activities could disturb a buried pipeline. The disturbance could be so minor as to go unreported by the offending party. As already mentioned, such unreported disturbances as coating damage or a scratch in the pipe wall are often the initiating condition for a pipeline failure sometime in the future. An area that is being renovated or is experiencing a growth phase will require frequent construction activities. These may include soil investigation borings, foundation construction, installation of buried utilities (telephone, water, sewer, electricity, natural gas), and a host of other potentially damaging activities. Planned or observed development is therefore a good indicator of increased activity levels. Local community land development or planning agencies might provide useful information to forecast such activity. Perhaps one of the best indicators of the activity level is the frequency of activity reports. These reports may come from direct observation by pipeline personnel, patrols by air or ground, and telephone reports by the public or by other construction companies. The one-call systems (discussed in a later section), where they are being used, provide an excellent database for assessing the level of activity, although they might only be a lagging indicator; that is, they may show where past activity has occurred but not necessarily be indicative of future activity. The presence of other buried utilities logically leads to more frequent digging activity as these systems are repaired, maintained, and inspected.
This increased exposure is perhaps partially offset by a presumption that utility workers are better versed in potential excavation damages than are some other industry excavators. If considered credible evidence of increased risk, the density of nearby buried utilities can be used as another variable in judging the activity level.
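The indicators discussed above (population density, observed development, report frequency, and buried-utility density) can be combined into a rough activity-level classification. The following sketch is purely illustrative: the thresholds, weights, and function name are assumptions invented for this example, not values prescribed by this manual.

```python
def activity_level(reports_per_month: float,
                   class_1_density: bool,
                   nearby_utilities: int) -> str:
    """Illustrative activity-level classifier.

    All thresholds and weights are assumed for this sketch; an
    evaluator would calibrate them to company data and judgment.
    """
    score = 0
    # Higher population density generally means more digging activity
    score += 0 if class_1_density else 2
    # One-call and patrol reports (remember: a lagging indicator)
    score += min(reports_per_month, 4)
    # Nearby buried utilities invite repair/maintenance excavation
    score += min(nearby_utilities, 2)
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"
```

A section in a developed area with frequent reports would score "high"; a rural Class 1 section with no reports would score "low".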
Anchoring, fishing, and dredging activities, along with dropped objects, pose the greatest third-party threats to submerged pipelines. To a lesser degree, new construction by open-cut or directional-drill methods may also pose a threat to existing facilities. Dock and harbor constructions and perhaps even offshore drilling activities may also be a consideration.
Seismograph activity

Of special note here is seismograph work or other activities involving underground detonations. As a part of exploratory work, perhaps searching for oil or gas reservoirs, energy is transmitted into the ground and measured to determine information about the underlying geology of the area. This usually involves crews laying shot lines: rows of buried explosives that are later detonated. The detonations supply the energy source to gather the information sought. Sometimes, instead of explosive charges, other techniques that impart energy into the soil are used. Examples include a weight dropped onto the ground, where the resulting shock waves are monitored, and a vibration technique that generates energy waves in certain frequency ranges.

Seismograph activity can be hazardous to pipelines, and the potential for such activity should be included in the risk assessment. The first hazard occurs if holes are drilled to place explosives. Such drilling can, of course, place the pipeline in direct jeopardy. Depth of cover provides little protection because the holes may be drilled to any depth. The second hazard is the shock waves to which the pipeline is exposed. When the explosive(s) is detonated, a mass of soil is accelerated. If there is not enough backup support for the pipeline, the pipe itself absorbs the energy of the accelerating soil mass [29]. This adds to the pipe stresses (Figure 3.5). Conceivably, a charge (or line of charges) detonated far below the pipeline can be more damaging than a similar charge placed closer to the line but at the same depth. An analysis must be performed on a case-by-case basis to determine the extent of the threat.
Figure 3.5 Seismograph activity near pipelines (soil mass moved by the detonation is driven against the pipeline when the charge detonates).
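One common screening approach for such a case-by-case analysis is the empirical square-root scaled-distance attenuation form used in blast-vibration estimation, PPV = k (D / sqrt(W))^(-beta). The constants k and beta are strongly site-specific; the defaults below are illustrative assumptions only, and this sketch is not a substitute for the detailed analysis the text calls for.

```python
import math

def scaled_distance_ppv(distance_ft: float, charge_lb: float,
                        k: float = 160.0, beta: float = 1.6) -> float:
    """Estimate peak particle velocity (in/s) at a pipeline from a
    buried charge using the generic square-root scaled-distance form:

        PPV = k * (D / sqrt(W)) ** (-beta)

    where D is distance (ft) and W is charge weight (lb).
    k and beta must be fitted to site data; the defaults here are
    illustrative assumptions, not recommended values.
    """
    scaled_distance = distance_ft / math.sqrt(charge_lb)
    return k * scaled_distance ** (-beta)
```

The form makes the text's point explicit: vibration severity depends on charge weight and separation together, so a larger or closer charge can dominate even when cover depth is generous.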
As of this writing, pipeline operators have little authority in specifying minimum distances for seismograph activity. Technically, the operator can only forbid activity on the few feet of ROW that he controls. Cooperation from the seismograph company is often sought.

As an additional special case of potentially damaging activity, directional boring is not always sensitive to line hits. It is possible for a boring equipment operator to hit a facility without being aware of the hit. The drill bits, designed to go through rock, experience little change in resistance when going through plastic pipe or cable and can cause much damage to steel pipelines. This is another aspect that can be included in the activity level variable.
C. Aboveground facilities (weighting: 10%)

This is a measure of the susceptibility of aboveground facilities to third-party disturbance. Aboveground pipeline components have a different type of third-party damage exposure compared to the buried sections. Included in this type of exposure are the threats of vehicular collision and vandalism. The argument can be made that these threats are partially offset by the benefit of having the facility in plain sight, thereby avoiding damages caused by not knowing exactly where the pipeline is (as is the case for buried sections). The evaluator should adjust the weighting factor and the point schedule to values consistent with the company's judgment and experience, but also recognize that one company's experience might not accurately represent the threats. One source reports that, due mainly to the greater chance of impact and increased exposure to the elements, equipment located above ground has a risk of failure approximately 100 times greater than for facilities underground [67]. This will, of course, be situation specific.

While the presence of aboveground components is often difficult or impossible to change (their location is usually based on strong economic and/or design considerations), preventive measures can be taken to reduce the risk exposure. This risk variable combines the changeable and nonchangeable aspects into a single point value but recognizes that a mitigated threat still poses more risk than a threat that does not exist at all. The evaluator can set up a point schedule that gives the maximum point value for sections with no aboveground components, where the threat does not exist. For sections that do have aboveground facilities, point credits should be given for conditions that reduce the risk of third-party damage (Figure 3.6). These conditions will often take the form of vehicle barriers or other obstacles or discouragements to intrusion.
Figure 3.6 Protection for aboveground facilities (illustrating point credits for protections such as trees (partial credit), concrete barrier, fence, distance from highway, and signs).
No aboveground facilities: 10 pts
Aboveground facilities: 0 pts, plus any of the following that apply (total not to exceed 10 pts):

Facilities more than 200 ft from vehicles: 5 pts
Area surrounded by 6-ft chain-link fence: 2 pts
Protective railing (4-in. steel pipe or better): 3 pts
Trees (12 in. in diameter), wall, or other substantial structure(s) between vehicles and facility: 4 pts
Ditch (minimum 4-ft depth/width) between vehicles and facility: 3 pts
Signs ("Warning," "No Trespassing," "Hazard," etc.): 1 pt
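The example schedule above can be applied mechanically, as in the following sketch. The condition keys are hypothetical labels invented for this illustration, not terms from the schedule itself.

```python
# Point credits from the example schedule; dictionary keys are
# hypothetical short names for the listed conditions.
ABOVEGROUND_CREDITS = {
    "distance_200ft": 5,        # facilities more than 200 ft from vehicles
    "chain_link_fence": 2,      # 6-ft chain-link fence
    "protective_railing": 3,    # 4-in. steel pipe or better
    "trees_wall_structure": 4,  # substantial structure between vehicles and facility
    "ditch": 3,                 # minimum 4-ft depth/width
    "signs": 1,                 # warning / no-trespassing signs
}

def aboveground_score(has_aboveground: bool, conditions: set) -> int:
    """Score per the example schedule: 10 pts when no aboveground
    facilities exist; otherwise 0 pts plus applicable credits,
    with the total not to exceed 10 pts."""
    if not has_aboveground:
        return 10
    credits = sum(ABOVEGROUND_CREDITS[c] for c in conditions)
    return min(credits, 10)
```

A well-protected station (distance, structures, and a ditch) reaches the 10-pt cap, reflecting the schedule's premise that a mitigated threat still scores no better than a threat that does not exist.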
Credit may be given for security measures that are thought to reduce vandalism (intentional third-party intrusions). The example above allows a small number of points for signs that may discourage the casual mischief-maker or the passing hunter taking target practice. Lighting, barbed wire, video surveillance, sound monitors, motion sensors, alarm systems, etc., may warrant point credits as risk reducers. Beyond minor vandalism potential, the threat of sabotage can be considered in a risk assessment. Chapter 9 explores that aspect of risk.
D. Line locating (weighting: 15%)

A line locating program or procedure (the process of identifying the exact location of a buried pipeline so that third parties can safely excavate nearby) is central to avoiding third-party damages. A one-call system is a service that receives notification of upcoming digging activities and in turn notifies all owners of potentially affected underground facilities. It is the foundation of many pipeline-locating programs. A conventional one-call system is defined by the DOT as "a communication system established by two or more utilities (or pipeline companies), governmental agencies, or other operators of underground facilities to provide one telephone number for excavation contractors and the general public to call for notification and recording of their intent to engage in excavation activities. This information is then relayed to appropriate members of the one-call system, giving them an opportunity to communicate with excavators, to identify their facilities by temporary markings, and to follow up the excavation with inspections of their facilities." [68] Such systems can also be established by independent entrepreneurs. In this text, one-call generically refers to all such notification systems, although many go by other names such as no-dig, miss utility, or miss-dig.

The first modern one-call system was installed in Rochester, New York, in 1964. As of 1992, there were 88 one-call systems in 47 states and Washington, D.C., plus similar systems operating in Canada, Australia, Scotland, and Taiwan. A report by the National Transportation Safety Board on a study of 16 one-call centers gives evidence of the effectiveness of this service in reducing pipeline accidents. In 10 instances (of the 16 studied), excavation-related accidents were reduced by 20 to 40%. In the remaining six cases, these accidents were reduced by 60 to 70% [68]. One-call systems operate within stated boundaries, usually in urban areas.
Participation in and use of a one-call system is mandatory in most states in the United States.
The effectiveness of a one-call system depends on several factors. The evaluator should assess this effectiveness for the pipeline section being evaluated. Here is a sample point schedule (with explanations following):

Effectiveness (maximum 6 pts):
  Proven record of efficiency and reliability: 2 pts
  Widely advertised and well known in community: 2 pts
  Meets minimum ULCCA standards: 2 pts
Appropriate reaction to calls: 5 pts
Maps and records: 4 pts
Add points for all applicable characteristics. The best one-call system is characterized by all of the above factors and will have a point value of 15 points.

The first variable is a judgment of the one-call system's effectiveness. To get any points at all, it should be a system mandated by law, especially when noncompliance penalties are severe. Such a system will be more readily accepted and utilized. Beyond that, elements of the one-call system's operation and results can be evaluated.

The next two point categories are more subjective. The evaluator is asked to judge the effectiveness and acceptance of the system. The degree of community acceptance can be assessed by a spot check of local excavators and by the level of advertising of the system. The evaluator may set up a more detailed point schedule to distinguish among differences he perceives. This detailed schedule could be tied to the results of a random sampling of the one-call system.

Another category in this schedule refers to standards established by the Utility Location and Coordination Council of America (ULCCA) for one-call centers. Local utility location and coordinating councils (ULCCs) are established by the American Public Works Association (APWA). The evaluator may substitute any other appropriate industry standard. This may overlap the first question of whether the one-call system is mandated by law. If mandated, certain minimum standards will no doubt have been established. Minimum standards may address:

Hours of operation
Record keeping
Method of notification
Off-hours notification systems
Timeliness of notifications.
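One reading of the sample schedule, sketched below, treats the three 2-pt items as components of the 6-pt effectiveness category, so that all items together total exactly the 15-pt maximum. The dictionary keys and the law-mandate gate are this sketch's assumptions, formalizing the text's statement that a system not mandated by law earns no points and that a nonparticipating section scores zero.

```python
# Hypothetical short names for the sample schedule's items.
# The three 2-pt items compose the 6-pt "effectiveness" judgment.
ONE_CALL_CREDITS = {
    "proven_record": 2,           # proven record of efficiency and reliability
    "well_advertised": 2,         # widely advertised, well known in community
    "meets_ulcca_standards": 2,   # meets minimum ULCCA standards
    "appropriate_reaction": 5,    # appropriate reaction to calls
    "maps_and_records": 4,        # accurate maps and records
}

def one_call_score(participating: bool, mandated_by_law: bool,
                   characteristics: set) -> int:
    """Sum credits for applicable characteristics, capped at the
    15-pt maximum. Per the text, a section not participating in a
    one-call program scores zero, as does a system not mandated
    by law (an assumed strict reading of 'to get any points at
    all, it should be a system mandated by law')."""
    if not participating or not mandated_by_law:
        return 0
    return min(sum(ONE_CALL_CREDITS[c] for c in characteristics), 15)
```

Under this reading, a best-case system (all characteristics present) scores the full 15 points.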
The U.S. National Transportation Safety Board (NTSB) [64] reports that there are very practical distinctions between one-call centers: An assortment of communication methods are used to receive excavators' calls and to issue notification tickets to the centers' participants: centers may use telephone staff operators, voice-recorded messages, e-mail, fax machines, Internet bulletin boards, or a combination of methods. Service hours may be seasonally limited to a few hours a day or extend to 24 hours a day. Some locations operate only seasonally because of construction demand; most operate year-round. Most centers have statewide coverage but may not strictly follow state boundaries. A center may cover portions of several states (Miss Utility in Virginia, Maryland, and the District of Columbia) or there may be several centers within a state (Idaho has six different one-call systems; Washington and Wyoming each have nine). Centers may provide
training to the construction community, conduct publicity campaigns to educate the public about excavation notification requirements, and work with facility operators to protect their underground facilities. Other centers do little work in these areas. Some centers use positive response procedures: members who do not mark facilities in the construction area [instead] confirm with the excavators that they have no facilities in the area, rather than just not marking a location; other centers do not have this requirement. A part of the Miss Utility program in the Richmond, Virginia, area uses positive response procedures to notify the excavator when the marking is complete. Facility owners directly inform a voice messaging system of the status of a notification ticket. As a time-saving alternative, the contractor can call the information system anytime to receive an up-to-date status of their marking request. Information indicating that marking has been completed, or that no facilities are located in the area of excavation, allows construction work to proceed as soon as marking is completed rather than waiting the full time period allowed for marking activity.

The important elements of an effective one-call notification center have been generally identified by industry organizations. For example, the position of the Associated General Contractors of America on one-call systems is summarized in six elements:

Mandatory participation
Statewide coverage
48-hour marking response
Standard marking requirements
Continuing education
Fair system of liability.

Participants at the Safety Board's 1994 workshop, on the other hand, developed detailed lists of elements they believed are essential for an effective one-call notification center, other elements a center should have, and elements it could have. All agreed, however, that first and foremost was the need for mandatory participation and use of notification centers by all parties.
The Safety Board concludes that many essential elements and activities of a one-call notification center have been identified but have not been uniformly implemented. [64]
The last scoring item deals with the pipeline company's response to a report of third-party excavation activity. Obviously, reports that are not properly addressed in a timely manner negate the effects of reporting. The evaluator should look for evidence that all reports are investigated in a timely manner. A sense of professionalism and urgency should exist among the responders. Appropriate response may include:

A system to receive and record notification of planned excavation activity
Dispatching of personnel to the site to provide detailed markers of pipeline location
Comprehensive marking and locating procedures and training
Accurate maps and records showing pipeline locations, depths, and specifications
Prejob communications or meetings with the excavators
On-site inspection during the excavation
A system to ensure updating of drawings
Inspection of the pipeline facilities after the excavation.

The evaluator may look for documentation or other evidence to satisfy himself that an appropriate number of these often critical actions is being taken.
Locating and marking methods

In awarding points, the evaluator may wish to distinguish between methods of direct line location. Methods may range from the use of a detection device (with verification by physical probing by experienced personnel) to merely sighting between aboveground facilities (a method that often leads to errors). Some pipe materials, such as plastic, are difficult to locate when buried, and some sites require expensive excavations to precisely locate the line, regardless of the material. Some materials are also susceptible to damage by the common probing techniques used to locate the line. Especially in congested areas, the need to determine the exact location of the pipeline is critical.

Modern locating techniques include instrumentation that can detect buried pipe via electromagnetic signals, impressed electric signals, and ground penetrating radar. These instruments are designed to determine line location and depth. Because they are susceptible to extraneous signals and barriers to signal reception, a degree of operator skill is required. For a variety of situation-specific reasons (such as pipe material, type of cover, and presence of interfering signals), not all pipelines can be located with this instrumentation. In some cases, special wires are inserted into nonconducting pipe materials to aid in line location. These tracer wires or locator wires are susceptible to damage from corrosion, lightning surges, and external forces. Another aid to pipeline location is the installation of small electronic markers that emit discrete radio-frequency (RF) signals when polled by surface instrumentation [66]. Line locating can also be accomplished by direct excavation and/or probing (also called prodding), using a stiff rod to penetrate the ground, sometimes with a water-jet assist, and physically contact the top of the pipe. With some pipe materials and coatings, these latter methods risk damage to the pipeline. Some common methods of line locating are listed in Table 3.1.

Practices for marking the underground facilities can have an impact on the risk of excavation damage.
Good practices include pre-marking of the intended excavation site by the excavator to clearly identify to the facility owner the area of digging; positive response by the utility owner to confirm that underground facilities have been marked or to verify that no marking was necessary; the use of industry-accepted marking standards to unambiguously communicate the type of facility and its location; marking facility locations within the specified notification time; and responding to requests for emergency markings, when necessary. The time frame for excavation marking is usually specified by state damage prevention laws. Twenty states require underground facility marking to be accomplished within 48 hours of excavation notification [64].

Some pipes that are difficult to locate from the surface require expensive excavations to determine their precise location. This is often the case for lines located beneath concrete sidewalks or roadways and those adjacent to buildings. For these reasons, modern distribution systems often rely heavily on accurate records and drawings to show exact piping locations. This allows for more potential human error, as is discussed in the incorrect operations index discussion (Chapter 6).

The use of standard marking colors informs the excavator about the type of underground facility for which the location has been marked (Table 3.2). Markings of the appropriate color for each facility are placed directly over the centerline of the pipe, conduit, cable, or other feature. Offset marking procedures are used when direct marking cannot be accomplished. For most surfaces, including paved surfaces, spray paint is used for markings; however, stakes or flags may be used if necessary. A proposed marking standard
Table 3.1 Common methods of locating buried pipelines

RF detection techniques
  Functional description: Conventional underground line detection method. Requires a transmitter and a receiver. Conductive tracing attaches the transmitter directly to the line or tracer wire. Inductive tracing does not require direct line connection.
  Attributes: Oldest, most widely used technology. Inductive signal detection is quicker, but conductive signal reading is more accurate.

Electromagnetic techniques
  Functional description: Records signal differentials of magnetic fields. Similar to RF technology.
  Attributes: Useful for detecting metal objects or structures that exhibit strong magnetic fields in the ground surface. This type of detector is affected by obstructions between the transmitting signal and the locating equipment.

Magnetic methods
  Functional description: Useful for detecting iron and steel facilities.
  Attributes: Magnetic flux methods are easy to use and inexpensive, but they are subject to interference from metal surface structures.

Vacuum extraction
  Functional description: Small test holes are dug from the surface by vacuuming out the soil. The activity, usually referred to as potholing, follows more preliminary locating work to identify the general facility location. The pothole then confirms the location and verifies a depth for that specific site.
  Attributes: Requires a preliminary records search to approximate the location for potholing, as well as special vacuum equipment. Process can be expensive and labor intensive.

Ground penetrating radar
  Functional description: Radar wave reflections from underground surfaces of different dielectric constants are used to identify subsurface structures.
  Attributes: This method is relatively expensive compared to other locator methods and does not work well in clay or saltwater.

Terrain conductivity
  Functional description: Detects current measures that differ from average ground surface conductivity.
  Attributes: This method can be useful in areas of high conductivity, such as marine clay soils, particularly for locating underground storage tanks.

Global positioning system (GPS)
  Functional description: Uses triangulated satellite telemetry to identify the latitude/longitude location of a ground unit.
  Attributes: Although not a detection technology, GPS coordinates are frequently used to define geographic location.
Source: National Transportation Safety Board, "Protecting Public Safety through Excavation Damage Prevention," Safety Study NTSB/SS-97/01, Washington, DC: NTSB, 1997.

Table 3.2 Uniform color code of the APWA utility location and coordinating council

Red: Electric power lines, cables, conduit, and lighting cables
Yellow: Gas, oil, steam, petroleum, gaseous materials
Orange: Communications, alarm or signal lines, cable or conduit
Blue: Water, irrigation, and slurry lines
Green: Sewers, drain lines
Pink: Temporary survey markings
Purple: Cable television
White: Proposed excavation
addresses conventions for marking the width of the facility, change of direction, termination points, and multiple lines within the same trench. The standard symbology indicates how to mark construction sites to ensure that excavators know important facts about the underground facilities, for example, hand-dig areas, multiple lines in the same trench, and line termination points [64]. Such conventions help to avoid misinterpretation between locators who designate the position of underground facilities and excavators who work around those facilities.
The points for the preceding categories are added to get a value for a one-call system. A section that is not participating in such a program would get zero points.
E. Public education program (weighting: 15%)

Public education programs are thought to play a significant role in reducing third-party damage to pipelines. Most third-party damage is unintentional and due to ignorance. This is ignorance not only of the buried pipeline's exact location, but also of the aboveground indications of the pipeline's presence and of pipelines in general. A pipeline company committed to educating the community on pipeline matters will almost assuredly reduce its exposure to third-party damage. Some of the characteristics of an effective public education program are shown in the following list, along with an example relative point scale. More explanation is given in the paragraphs that follow.

Mailouts: 2 pts
Meetings with public officials once per year: 2 pts
Meetings with local contractors/excavators once per year: 2 pts
Regular education programs for community groups: 2 pts
Door-to-door contact with adjacent residents: 4 pts
Mailouts to contractors/excavators: 2 pts
Advertisements in contractor/utility publications once per year: 1 pt
Add points for all characteristics that apply. The best public education program will score 15 points according to this schedule.

Regular contact with property owners and residents who live adjacent to the pipeline is thought to be the first line of defense in public education. When properly motivated, these people actually become protectors of the pipeline. They realize that the pipeline is a neighbor whose fate may be closely linked to their own. They may also act as good neighbors out of concern for a company that has taken the time to explain to them the pipeline's service and how it relates to them. Although it is probably the most expensive approach, door-to-door contact is arguably unsurpassed in effectiveness. This is perhaps especially true where pleasant and direct contact between large corporations and concerned citizens is rare. The door-to-door contact, when performed at least once per year, rates the highest points in the example schedule.

Other techniques that emphasize the good neighbor approach include regular mailouts, presentations at community groups, and advertisements. Mailouts generally take the form of an informational pamphlet and perhaps a promotional item such as a magnet, calendar, memo pad, pen, rain gauge, tape measure, or key chain with the pipeline company's name and 24-hour phone number. The pamphlet may contain details on pipeline safety statistics, the product being transported, and how the company ensures the pipeline integrity (patrols, cathodic protection, etc.). Most important perhaps is information that informs the reader of how sensitive the line can be to damage from third-party activities. Along with this is encouragement to the reader to notify the pipeline company if any potentially threatening activities are observed. The tokens often included in the mailout merely serve to attract the reader's interest and to keep the company's name and contact number handy.
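Applied mechanically, the example schedule sums to its 15-pt maximum when all characteristics are present. The activity keys in the following sketch are hypothetical names invented for this illustration.

```python
# Point values from the example public education schedule;
# keys are hypothetical short names for the listed activities.
EDUCATION_CREDITS = {
    "mailouts": 2,                    # mailouts to residents/landowners
    "public_official_meetings": 2,    # once per year
    "contractor_meetings": 2,         # once per year
    "community_programs": 2,          # regular education programs
    "door_to_door": 4,                # contact with adjacent residents
    "contractor_mailouts": 2,         # mailouts to contractors/excavators
    "advertisements": 1,              # contractor/utility publications
}

def public_education_score(activities: set) -> int:
    """Add points for all characteristics that apply; the best
    program scores the full 15 pts under this example schedule."""
    return min(sum(EDUCATION_CREDITS[a] for a in activities), 15)
```

Note how the schedule's weighting mirrors the text: door-to-door contact, judged the most effective technique, carries the largest single credit.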
Consideration should be given to all languages commonly spoken in the area. Written materials in several languages and bilingual contact personnel will often be necessary. Mailouts can be effectively sent to landowners, tenants, other utilities, excavation contractors, general contractors, emergency response groups, and local and state agencies.

Professional, entertaining presentations are always welcomed at civic group meetings. When such presentations can also get across a message for public safety through pipeline awareness, they are doubly welcomed. These activities should be included in the point schedule. Any regular advertisements aimed at increasing public awareness of pipeline safety should similarly be included in the evaluation schedule.

Meetings with public officials and local contractors serve several purposes for the pipeline operator. While advising these people of pipeline interests (and the impact on their interests), a rapport is established with the pipeline company. This rapport can be valuable in terms of early notification of government planning, impending project work, emergency response, and perhaps a measure of consideration and benefit of the doubt for the pipeline company. Points should be given for this activity to the extent that the evaluator sees the value of the benefits in terms of risk reduction.

Advertising can be company specific or can represent the common interests of a number of pipeline companies. Either way, the value is obtained as the audience is made aware or reminded of its role in pipeline safety.
F. Right-of-way condition (weighting: 5%)

This item is a measure of the recognizability and inspectability of the pipeline corridor. A clearly marked, easily recognized ROW reduces the susceptibility to third-party intrusions and aids in leak detection (ease of spotting vapors or dead vegetation from ground or air patrols) (Figure 3.7). The evaluator can establish a point schedule with clear parameters. The user of the schedule should be able to tell exactly what actions will increase the point value. The less subjective the schedule, the greater the consistency in evaluation, but simplicity is also encouraged. The following example schedule is written in paragraph form, where interpolations between paragraph point values are allowed.

Excellent (5 pts): Clear and unencumbered ROW; route clearly indicated; signs and markers visible from any point on ROW or from above, even if one sign is missing; signs and markers at all roads, railroads, ditches, water crossings; all changes of direction are marked; air patrol markers are present.

Good (3 pts): Clear route (no overgrowth obstructing the view along the ROW from ground level or above); well marked: markers are visible from every point of ROW or above if all are in place; signs and markers at all roads, railroads, ditches, water crossings.

Average (2 pts): ROW not uniformly cleared; more markers are needed for clear identification at roads, railroads, waterways.

Below average (1 pt): ROW is overgrown by vegetation in some places; ground is not always visible from the air or there is not a clear line of sight along the ROW from ground level; indistinguishable as a pipeline ROW in some places; poorly marked.

Poor (0 pts): Indistinguishable as a pipeline ROW; no (or inadequate) markers present.
Select the point values corresponding to the closest description of the actual ROW conditions observed in the section. Descriptions such as those given above should provide the operator with enough guidance to take corrective action. Point values can be more specific (markers at 90% of road crossings: 2 pts; at 75% of road crossings: 1 pt; etc.), but this may be an unnecessary complication.
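The allowance for interpolating between the paragraph point values can be expressed as a simple linear blend. The fit parameter below is an assumed formalization of the evaluator's judgment, not something the schedule itself defines.

```python
def interpolate_row_score(lower_pts: float, upper_pts: float,
                          fit: float) -> float:
    """Interpolate between two adjacent paragraph point values when
    the observed ROW falls between descriptions. `fit` is the
    evaluator's judgment: 0.0 means the ROW matches the lower
    description exactly, 1.0 means it matches the upper one.
    (An assumed formalization for illustration.)"""
    return lower_pts + fit * (upper_pts - lower_pts)
```

For example, a ROW judged halfway between "good" (3 pts) and "excellent" (5 pts) would score 4 pts.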
G. Patrol frequency (weighting: 15%)

Patrolling the pipeline is a proven effective method of reducing third-party intrusions. The frequency and effectiveness of the patrol should be considered in assessing the patrol value. Patrolling becomes more necessary where third-party activities are largely unreported. The amount of unreported activity will depend on many factors, but one source [11] reported the number of unreported excavation works around a pipeline system in the United Kingdom to be 25% of the total number of
Figure 3.7 ROW condition (signs showing company name and emergency phone; painted fenceposts).
excavation works. This is estimated to be around 775 unreported excavations per year on their 10,400-km system [11]. While unreported excavation does not automatically translate into pipeline damage, obviously the potential exists for some of those 775 excavations to contact the pipeline. These numbers represent a U.K. situation that has no doubt changed with the increasing use of one-call systems, but some countries do not have formal notification systems and rely on patrols as the primary means of identifying third-party activity near their pipelines.

From a reactive standpoint, the patrol is also intended to detect evidence of a leak such as vapor clouds, unusual dead vegetation, bubbles from submerged pipelines, etc. As such, it is a proven leak detection method (see Chapter 7 on the leak impact factor). From a proactive standpoint, the patrol also should detect impending threats to the pipeline. Such threats take the form of excavating equipment operating nearby, new construction of buildings or roads, or any other activities that could cause a pipeline to be struck, exposed, or otherwise damaged. Note that some activities are only indirect indications of threats. New building construction several hundred yards from the pipeline will not pose a threat in itself, but the experienced observer will investigate where supporting utilities are to be directed. Construction of these utilities at a later time may create the real threat. The patrol should also seek evidence of activity or land movements that have already passed over the line. Such evidence is usually present for several days after the activity and may warrant inspection of the pipeline. Training of observers and possibly the use of checklists are important aspects of patrol effectiveness. Reportable observations should include the following:
Land movements: landslides, subsidence, bank erosion, creek or riverbank instability, etc.
Construction activity: both nearby and likely to move toward the ROW
Encroachments: outbuildings, landscaping changes, gardens, etc., may warrant additional investigation
Unauthorized activities on ROW: off-road vehicles, motorcycles, snowmobiles, etc.
Missing markers/signs
Evidence of vehicular intrusions onto ROW: highway accident, train derailment, etc.
Plantings of trees, gardens
Third-party changes to slope or drainage.

Slope issues can be an important but often overlooked aspect of pipeline stability, detectable to some extent by patrol. Slope alterations near, but outside, the right-of-way by third parties should be monitored. Construction activities near or in the pipeline right-of-way may produce slopes that are not stable and could put the pipeline at risk. These activities include excavation for road or railway cuts, removal of material from the toe of a slope, or adding significant material to the crest of a slope. The ability to detect potentially damaging land movements is also a risk mitigation measure discussed in Chapter 5.

One measure of patrol effectiveness would be data showing the number of situations that were missed by the patrollers and/or accompanying observers when the opportunity was there. Indirect measures include observer presence, patroller/observer training, and analysis of the "detection opportunity." This opportunity analysis would look at altitude and speed of aerial patrol and, for ground patrol, perhaps the line of sight
3/56 Third-party Damage Index
along and on either side of the ROW. The opportunity for early discovery lies in the ability to detect activities before the pipeline ROW is encroached. Note also the ability of certain aircraft (helicopters) to take immediate action to interrupt a potentially dangerous activity. Such interruptions include landing the aircraft or dropping a container containing a message in order to alert the third party. The suggested point schedule will award points based on patrol frequency under the assumption of optimum effectiveness. If the evaluator judges the effectiveness to be less than optimum, he can reduce the points to the equivalent of a lower patrol frequency. This is reasonable because lower frequency and lower effectiveness both reduce the area of opportunity for detection. If practical, the patrol frequency can be determined based on a statistical analysis of data. Historical data will often follow a typical rare-event frequency distribution such as those shown in Figures 3.8 and 3.9. Figure 3.9 is based on tabulated estimates shown in Table 3.3. Once the distribution is approximated, analysis of the curve will enable some predictive decisions to be made. An analysis of the "opportunity to detect" various common excavation activities is presented at the end of this chapter. Such analyses can be the basis of determining patrol frequency or assessing the probability of detection for any given frequency. For example, management may decide that the appropriate patrol frequency should detect, with a 90% confidence level, 80% of all threatening events. This might be based on a cost/benefit analysis. Patrol frequencies at or slightly above this optimum can receive the highest points.
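As a sketch of how such a rare-event frequency distribution could be approximated for predictive decisions, patrol observation counts can be fit to a Poisson model. The flight log below is hypothetical, invented only to illustrate the approach; it is not data from the book.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of observing k potential threats on a single patrol flight."""
    return exp(-lam) * lam ** k / factorial(k)

# Hypothetical patrol log: threats spotted on each of 50 flights,
# shaped like the rare-event curve of Figure 3.8.
flights = [0] * 30 + [1] * 12 + [2] * 5 + [3] * 2 + [4] * 1
lam = sum(flights) / len(flights)  # maximum-likelihood Poisson rate

# Approximate the frequency-distribution curve for predictive use.
curve = {k: round(poisson_pmf(k, lam), 3) for k in range(7)}
print(lam, curve)
```

Once a curve like this is fitted, questions such as "how often will a single flight find two or more threats?" can be answered directly from the model rather than from sparse raw counts.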
Figure 3.8 Typical patrol data: frequency distribution curve based on recent historical data (x-axis: number of potential threats found on a single flight; y-axis: frequency).
An example point schedule is as follows:
Daily                                                      15 pts
Four days per week                                         12 pts
Three days per week                                        10 pts
Two days per week                                           8 pts
Once per week                                               6 pts
Less than four times per month; more than once per month    4 pts
Figure 3.9 Patrol detection opportunities: cumulative probability of detection (0 to 1.0) versus days of proximal activity and/or evidence (0 to 16).
Less than once per month                                    2 pts
Never                                                       0 pts

Table 3.3 Spectrum of third-party activities used to produce the "probability of detection" graph (Figure 3.9)

Activity                    Activity duration, days      Frequency of   Cumulative
                            (includes evidence           occurrence     frequency
                            remaining)
Highway construction        14                           0.03           0.03
Subdivision work            13                           0.03           0.06
                            12                           0.03           0.09
                            11                           0.03           0.12
Buried utility crossings    10                           0.05           0.17
Drainage work                9                           0.05           0.22
Swimming pools               8                           0.07           0.29
Land clearing                7                           0.07           0.36
Agricultural work            6                           0.07           0.43
Seismograph crew             5                           0.1            0.53
Fence post installation      4                           0.1            0.63
Other                        3                           0.1            0.73
                             2                           0.1            0.83
                             1                           0.1            0.93
                             0.5                         0.05           0.98
                             0.1                         0.01           0.99

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for US EPA and DOT, September 2000.

Patrol as an opportunity to prevent failures caused by third-party damages

Mitigation effectiveness for third-party damage: a scenario-based evaluation. Hypothesis to be examined: at least 9 out of 10 third-party damage failures that would otherwise be expected are avoided through the stringent implementation of the proposed mitigation plan.
Select the point value corresponding to the actual patrol frequency. This schedule is built for a corridor that has a frequency of third-party intrusions calling for a nominal patrol frequency of 3 days per week. In this case, the evaluator feels that daily patrols are perhaps justified and provide a measurably greater safety margin. Frequencies greater than once per day (once per 8-hour shift, for instance) warrant no more points than daily in this example. The evaluator may wish to give point credits for patrols during activities such as close interval surveys (see Chapter 4, Corrosion Index). Routine drive-bys, however, would need to be carefully evaluated for their effectiveness before credit is awarded. An analysis of the "opportunity to detect" potentially damaging third-party activities is shown in the example below.
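The schedule and the effectiveness downgrade described above can be sketched as a simple lookup. The frequency labels and the one-step-per-judgment downgrade mechanism are illustrative assumptions, not prescribed by the text, which only says to reduce points to the equivalent of a lower patrol frequency.

```python
# Patrol frequency point schedule from this section, ordered best to worst.
SCHEDULE = [
    ("daily", 15),
    ("4/week", 12),
    ("3/week", 10),
    ("2/week", 8),
    ("1/week", 6),
    ("<4/month, >1/month", 4),
    ("<1/month", 2),
    ("never", 0),
]

def patrol_points(freq_label, steps_down=0):
    """Look up points for a patrol frequency, optionally downgrading by
    whole frequency steps when the evaluator judges patrol effectiveness
    to be below optimum (per the downgrade rule described in the text)."""
    idx = [label for label, _ in SCHEDULE].index(freq_label)
    idx = min(idx + steps_down, len(SCHEDULE) - 1)
    return SCHEDULE[idx][1]

print(patrol_points("3/week"))    # nominal schedule value: 10 pts
print(patrol_points("daily", 2))  # daily patrols judged poorly effective
```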
Third-party damage mitigation analysis

A type of scenario risk analysis of third-party damage prevention effectiveness was done as part of an environmental assessment for a proposed pipeline [86]. This analysis is mostly an exercise in logic, testing whether it is plausible that mitigation measures could markedly reduce third-party damage failures from previous levels. This analysis is interesting not only because it demonstrates a type of analysis, but also because it discusses many concepts that underlie our beliefs about third-party failure potential. Excerpts from the analysis of Ref. [86] follow.
Discussion

This failure estimation is suggested by modeling and analyses shown elsewhere in this environmental assessment. There is a question of whether such an estimation can be supported by a logical event-tree type analysis and examination of some of the past failures. Therefore, the objective is to determine if the proposed mitigation measures could have interrupted past failure sequences, at least 90 percent of the time, under some set of reasonable assumptions. Third-party damage (or "outside force") is a good candidate for this examination since this failure category is often viewed as the most random and, hence, the least controllable through mitigations. Seven (7) out of twenty-six (26) historical leaks on the subject pipeline were categorized as being caused by "third-party damage." It is useful to characterize these incidents based on some situation specifics. At least six (6) of the incidents involved heavy equipment such as a backhoe, bulldozer, bulldozer with ripper/plow, and ditching machine (the seventh is not listed). Five (5) of the incidents suggest that a professional contractor was probably involved since activities are described as cable installations, water line installations, excavations for an oil/gas company, land clearing, etc. At least four (4) of these events occurred before a One-Call system was available in Texas (beginning in the early 1990s and mandated in late 1997). So, the opportunities for advance knowledge of the presence of the pipeline were limited to signs, ROW indications, and perhaps some records if the excavator was exceptionally diligent in a pre-job investigation. Contractor and public education efforts, ROW condition, and actual patrol frequency are unknown. Based on current survey information, depth of cover at these sites varies from 19 inches to over 48 inches.
Scenarios have been created to address the question: "How many failures, similar to these past incidents, might occur today?" These scenarios take into account the proposed mitigation measures. Two tables are offered to show potential failure sequences and opportunities to interrupt those sequences. Since the previously discussed incidents occurred despite some prevention measures, the estimates show opportunities for damage avoidance above and beyond prevention practices thought to be prevalent at the time of the incidents. These tables use terminology loosely to represent frequency of events and probability of events; this is not a rigorous statistical analysis. In the first table, the estimated probabilities of various scenario elements are presented. The table begins with the assumption that a potentially damaging third-party activity is already present in the immediate vicinity of the pipeline. Given that an activity is present, column 2 of the table characterizes the distribution of likely activities. The distribution assumes a predominance of heavy equipment involvement, as in previous incidents, and is therefore conservative since that category is perhaps the most threatening to the pipeline. Column 3 examines the possibility, under today's mandated and advertised One-Call system, that the system is used and the
process works correctly to interrupt a potential failure sequence. It is assumed that 60 percent of heavy equipment operators would have knowledge of and experience with the one-call process and would therefore utilize it. It is further assumed that the one-call process "works" 80 percent of the time it is used. (Both assumptions are thought to conservatively underestimate the actual effectiveness.) This yields a 48 percent chance (60 percent x 80 percent) that this variable interrupts the sequence for that type of activity. It is assumed that one in ten potentially damaging events would be similarly interrupted in the case of typical homeowner or farmer/rancher activity. This is lower than for the heavy equipment operators since the latter group is thought to be more targeted with training, advertising, and presentations from owners of buried utilities. The interruption rates reflect improvements over one-call effectiveness during the time period of the incidents, approximately 1969 to 1995, which includes periods when there was either no one-call system available or it was available but not mandated. The continuously increasing acceptance of the one-call protocols by the public and the response of the pipeline operator to notifications combine to create this estimated interruption rate. Columns 4, 5, and 6 examine the possibility that, given that an activity has escaped the one-call process, the impending failure sequence will be interrupted by improved ROW condition, signs, or public/contractor education. Assumed likelihoods range from five in 100 to 15 in 100. This means that out of every group of threatening activities, at least a few will be interrupted by someone noticing the ROW and/or a sign, or having been briefed on pipeline issues and reacting appropriately.
In the interest of conservatism, relatively small interruption rates are assigned to the proposed improvements in these variables, although they can realistically prevent an incident in numerous credible scenarios. Column 7 examines the effect of depth of cover. One reference [Ref. [58] in this book] cites Western European data (CONCAWE) suggesting that approximately 15 percent fewer third-party damage failures occur with each foot of cover over the normal (0.9 meters). Using this, a length-weighted average depth of cover was calculated for the pipeline. The pipeline shows between 7 percent and 4 percent improvement, based on the lengths that are covered deeper than about 0.9 meters. Based on this, a value of 5 percent was assigned to the cover variable for the "heavy equipment operations" type of activity. This means that five out of every 100 potentially damaging third-party activities would be prevented from causing damage by an extra amount of cover. For homeowner activities, depth of cover is judged to be a more effective deterrent, preventing three out of ten potential damages. One out of ten potentially threatening rancher/farmer activities is assumed to be rendered non-threatening by depth of cover. Finally, the impact of patrolling is examined in column 8. A table of common third-party activities is presented against a continuum of opportunity to detect, expressed in days (see the patrol column in Table 1). The "opportunity" includes an estimate of how long after the activity occurs its presence can still be detected. Since third-party activities can cause damage that does not immediately lead to failure, this ability to inspect evidence of recent activity is important. The table is intended to provide an estimate of the types of activities that can reasonably be detected in a timely manner by a patrol. The frequency of the various types of activities will be very location- and time-
specific, so the frequencies shown are very rough estimates. It seems reasonable to assume that activity involving heavy equipment requires more staging, is of longer duration, and leaves more evidence of the activity. All of these promote the opportunity for detection by patrol. Statistical theory confirms that, with a few reasonable assumptions, the probability of detection is directly proportional to the frequency of patrols. For example, calculations indicate that the probability of detection in two patrols is twice the probability of detection in one patrol if detection of the same event cannot occur in both patrols. This condition is essentially satisfied for these purposes since patrol sightings subsequent to the initial sighting are no longer considered to be "detections." The key point here is that the probability that one or more events will occur is the sum of their individual probabilities if the events are mutually exclusive. Discounting patrol errors, as the patrol interval approaches 0 hours (a continuous observation of the ROW), the detection probability approaches 100 percent. The patrol interval is changing from a historical maximum interval between patrols of 336 hours (once every two weeks on average, although it could be as high as three weeks, or 504 hours). The mitigation plan requires a patrol every 24, 60, or 168 hours, depending on the location. In theory, this improves the detection probability by multiples of 2 to 14. On the table of activities, patrol intervals of 24, 60, and 168 hours suggest detections of 93 percent, 75 percent, and 36 percent of activities, respectively. This means that, with a maximum interval between patrols of 24 hours, only 7 percent of activities would go undetected, given the assumed distribution of activities. Obviously, the real situation is much more complex than this simple analysis, but the rationale provides a background for making estimates of patrol benefits.
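Using the Table 3.3 duration spectrum, the quoted detection percentages can be approximately reproduced by counting the activities that last at least one patrol interval. This simple cutoff rule is an assumption about how the figures were derived; it yields 93, 73, and 36 percent for 24-, 60-, and 168-hour intervals, close to the 93/75/36 percent figures quoted in the text.

```python
# Activity-duration spectrum from Table 3.3: (duration in days, frequency).
SPECTRUM = [
    (14, 0.03), (13, 0.03), (12, 0.03), (11, 0.03),
    (10, 0.05), (9, 0.05), (8, 0.07), (7, 0.07), (6, 0.07),
    (5, 0.1), (4, 0.1), (3, 0.1), (2, 0.1), (1, 0.1),
    (0.5, 0.05), (0.1, 0.01),
]

def p_detect(patrol_interval_days):
    """Fraction of activities lasting at least one patrol interval,
    i.e. guaranteed to overlap a patrol (ignoring observer error)."""
    return sum(f for d, f in SPECTRUM if d >= patrol_interval_days)

for hours in (24, 60, 168):
    print(hours, round(p_detect(hours / 24), 2))
```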
In order to make conservative estimates (possibly underestimating the patrol benefits), the increased detection probabilities under the proposed mitigation plan are assumed to be 30 percent, 10 percent, and 20 percent for heavy equipment, homeowner, and ranch/farm operations, respectively. This means that about one-third of heavy equipment operations, one in every ten homeowner activities, and one in every five ranch/farm activities would be detected before damage occurred or, in the case of no immediate leak, would provide the operator time to detect and repair damages before a leak occurs. Homeowner and ranch/farm actions are judged to be more difficult to detect by patrol because such activities tend to appear with less warning and are often of shorter duration than the heavy equipment operations. Table 2 converts Table 1 columns 3 through 8 into probabilities of the sequence NOT being interrupted, the "opposite" of Table 1. Column 9 of Table 2 estimates the fraction of times that the line is under enough stress that, in conjunction with powerful enough equipment, a rupture would occur immediately. This stress level is a function of many variables, but it is conservatively estimated that 50 percent of the line is under a relatively high stress level. For the 50 percent of the line that could be damaged, but not to the extent that immediate leakage occurs, the mitigation plan's corrosion control and integrity reverification processes, which specifically factor in third-party damage potential when determining reinspection intervals, are designed to detect and remediate such damage before leaks occur.
Table 1

                                              p (interruption of event sequence by . . .)
Activity                          p (activ)   One Call   ROW    Signage   Public/contractor   Cover   Patrol
                                                                          education
Heavy equipment operations           80%        0.48     0.1     0.05          0.15            0.05     0.3
Homeowner equipment operations       10%        0.1      0.1     0.05          0.15            0.3      0.1
Ranch/agricultural equipment ops     10%        0.1      0.1     0.05          0.15            0.1      0.2
Notes                                 4        1, 12                             9             2, 3   6, 7, 8

Column 10 of Table 2 estimates the frequency of a third-party activity involving equipment of enough power to cause an immediate leak. This may be somewhat correlated to depth of cover, but no such distinction is made here. Heavy equipment is assigned a value of 0.9, indicating a high probability that the equipment has enough power to rupture the line. A minor reduction from the value of 1.0 that would otherwise be assigned is recognized: it is assumed that such heavy equipment normally is operated by skilled personnel. So, while heavy equipment is certainly capable of rupturing a line, a skilled operator can usually "feel" when something as unyielding as a steel pipe is encountered, and will investigate with hand excavation before extra power is applied. Homeowners and ranchers/farmers are assumed to be using powerful equipment in 30 percent and 60 percent of their activities, respectively. No credit for operator skill is assumed in these cases. Column 11 multiplies all column estimates and shows the combined frequency for the three types of activities. Although not quantified here, the impact of future focus on the issue of third-party damage can reasonably be included. The pipeline industry shares this concern with buried utilities carrying water, sewer, and any of several types of data transmission lines. Interruption of such lines can represent enormous costs. Additional unexamined activities that suggest future efforts to prevent such damage include ongoing government/industry initiatives addressing the issue.
Conclusions

It is important to note that this analysis is strictly a logic exercise, to test if the hypothesis could reasonably be supported through assumed effectiveness of individual mitigation measures. This analysis suggests that under the proposed mitigation plan, and assuming modest benefits from the mitigation measures, approximately 89 percent of third-party activities not interrupted under previous mitigation efforts could reasonably be expected to be interrupted before they cause a pipeline failure. The initial hypothesis therefore seems reasonable, given the results and the conservative assumptions employed in this analysis. These calculations are based on scenarios with assumptions that are thought to underestimate rather than overestimate prevention effectiveness. However, since they contain a large element of randomness, third-party damages are more difficult to predict and prevent. Scenarios can be envisioned where all reasonable preventive measures are ineffective and damage does occur. Such scenarios are usually driven by human error, an element that causes difficulty in making predictions.
Table 2

p (event) = 1 - p (interruption)

Activity                          p (activ)  One Call  ROW   Signage  Public/contractor  Cover  Patrol  p (high  p (equipment      p (leak after activity
                                                                      education                         stress)  powerful enough)  is proximal)
Heavy equipment operations           80%       0.52    0.9    0.95         0.85           0.95    0.7     0.5        0.9               9.05%
Homeowner equipment operations       10%       0.9     0.9    0.95         0.85           0.7     0.9     0.5        0.3               0.62%
Ranch/agricultural equipment ops     10%       0.9     0.9    0.95         0.85           0.9     0.8     0.5        0.6               1.41%
Total                               100%                                                              (note 5)   (note 10)            11.08%

Notes:
1. Assume that 60 percent of contractors follow the one-call procedure and that marking, etc., is 80 percent effective.
2. Western Europe data suggest a 15 percent failure reduction per foot of additional cover (over "normal" depth).
3. Assume cover is more effective against non-heavy-equipment damages.
4. At least six of the seven previous third-party incidents involved heavy equipment used by contractors.
5. Assumed percentage of the line that is in a highly stressed condition, enough to promote a leak upon moderate damage.
6. Assume that these percentages are detected prior to the incident or soon thereafter (damage assessment opportunity).
7. The previous third-party damage rate allowed 336 hours as the maximum interval between detection opportunities; the new maximum is 24, 60, or 168 hours.
8. Assumes that homeowner and ranch activities tend to appear faster than most heavy equipment projects.
9. Includes door-to-door efforts in Tier 3 and presentations to excavating contractors everywhere.
10. Chance that equipment is powerful enough that, in conjunction with a higher stress condition in the pipe wall, immediate rupture is likely.
11. p (damage detection before failure) = function of (patrol, CIS, ILI, fatigue, corrosion rate, stress level).
12. No one-call system was available for five out of seven previous third-party leaks.
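The bottom-line column of Table 2 is simply the product across each row. The arithmetic can be checked directly, with the values transcribed from the tables above (the row and variable names are shorthand, not from the source):

```python
# Each row: p(activity type), survival probabilities (1 - interruption)
# for one-call, ROW, signage, education, cover, and patrol, then
# p(high stress) and p(equipment powerful enough), as in Table 2.
ROWS = {
    "heavy equipment": (0.80, [0.52, 0.9, 0.95, 0.85, 0.95, 0.7], 0.5, 0.9),
    "homeowner":       (0.10, [0.90, 0.9, 0.95, 0.85, 0.70, 0.9], 0.5, 0.3),
    "ranch/agric":     (0.10, [0.90, 0.9, 0.95, 0.85, 0.90, 0.8], 0.5, 0.6),
}

def p_leak(p_activ, survivals, p_stress, p_power):
    """Probability that a proximal activity of this type leads to a leak."""
    p = p_activ * p_stress * p_power
    for s in survivals:
        p *= s
    return p

total = 0.0
for name, (pa, surv, ps, pp) in ROWS.items():
    p = p_leak(pa, surv, ps, pp)
    total += p
    print(f"{name}: {p:.2%}")
print(f"total: {total:.2%}")
```

Running this reproduces the table's 9.05, 0.62, and 1.41 percent rows and the 11.08 percent total.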
Corrosion Index
I. Overview

Corrosion Threat = (Atmospheric Corrosion) + (Internal Corrosion) + (Buried Metal Corrosion)
Corrosion Threat                             Points      Weight
A. Atmospheric Corrosion                     0-10 pts    10%
   A1. Atmospheric Exposures                 0-5 pts
   A2. Atmospheric Type                      0-2 pts
   A3. Atmospheric Coating                   0-3 pts
B. Internal Corrosion                        0-20 pts    20%
   B1. Product Corrosivity                   0-10 pts
   B2. Preventions                           0-10 pts
C. Subsurface Corrosion                      0-70 pts    70%
   C1. Subsurface Environment                0-20 pts
       Soil Corrosivity                      0-15 pts
       Mechanical Corrosion                  0-5 pts
   C2. Cathodic Protection                   0-25 pts
       Effectiveness                         0-15 pts
       Interference Potential                0-10 pts
   C3. Coating                               0-25 pts
       Fitness                               0-10 pts
       Condition                             0-15 pts
Overall Threat of Corrosion                  0-100 pts   100%
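A minimal sketch of how this schedule could be rolled up in an assessment tool. The component scores passed in are hypothetical; the caps and the simple sum follow the point schedule above, and the orientation of the score (whether higher means more or less threat) follows whatever convention the evaluator has adopted for the rest of the model.

```python
# Roll up the three corrosion sub-scores into the 0-100 point index.
# Caps (10, 20, 70) come from the schedule; inputs are hypothetical.
def corrosion_index(atmospheric, internal, subsurface):
    a = min(atmospheric, 10)  # A: atmospheric corrosion (10% weight)
    b = min(internal, 20)     # B: internal corrosion (20% weight)
    c = min(subsurface, 70)   # C: subsurface corrosion (70% weight)
    return a + b + c          # overall corrosion threat, 0-100 pts

print(corrosion_index(8, 15, 40))  # hypothetical section scores -> 63
```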
II. Background

The potential for pipeline failure caused by corrosion is perhaps the most familiar hazard associated with steel pipelines. This chapter discusses how common industry practices of corrosion analysis and mitigation can be incorporated into the risk assessment model (see Figure 4.1). A detailed discussion of the complex mechanisms involved in corrosion is beyond the scope of this text. Corrosion comes from the Latin word corrodere, meaning "gnaw to pieces." Corrosion, as it is used in this text, refers mainly to a loss of metal from pipe, although the concepts apply to many corrosion-like degradation mechanisms. From previous discussions of entropy and energy flow, we can look at corrosion from a somewhat esoteric viewpoint. Simply stated, manufactured metals have a natural tendency to revert to their original mineral form. While this is usually a very slow process,
Figure 4.1 Basic risk assessment model
it does require the injection of energy to slow or halt the disintegration. Corrosion is of concern because any loss of pipe wall thickness invariably means a reduction of structural integrity and hence an increase in risk of failure. Non-steel pipeline materials are sometimes susceptible to other forms of environmental degradation. Sulfates and acids in the soil can deteriorate cement-containing materials such as concrete and asbestos cement pipe. Some plastics degrade when exposed to ultraviolet light (sunlight). Polyethylene pipe can be vulnerable to hydrocarbons. Polyvinyl chloride (PVC) pipe has been attacked by rodents that actually gnaw through the pipe wall. Pipe materials can be internally degraded when transporting an incompatible product. All of these possibilities can be considered in this index. Even though the focus here is on steel lines, the evaluator can draw parallels to assess his non-steel lines in a similar fashion. As with other failure modes, evaluating the potential for corrosion follows logical steps, replicating the thought process that a corrosion control specialist would employ. This involves (1) identifying the types of corrosion possible (atmospheric, internal, subsurface), (2) identifying the vulnerability of the pipe material, and (3) evaluating the corrosion prevention measures used at all locations. Corrosion mechanisms are among the most complex of the potential failure mechanisms. As such, many more pieces of information are efficiently utilized in assessing this threat. Some materials used in pipelines are not susceptible to corrosion and are virtually free from any kind of environmental degradation potential. These are not miracle materials by any means. Designers have usually traded away some mechanical properties such as strength and flexibility to obtain this property. Such pipelines obviously carry no risk of corrosion-induced failure, and the corrosion index should reflect that absence of threat (see Figure 4.2).
The two factors that must be assessed to define the corrosion threat are the material type and the environment. The environment includes the conditions that impact the pipe wall, internally as well as externally. Because most pipelines pass through
several different environments, the assessment must allow for this either by sectioning appropriately or by considering each type of environment within a given section and using the worst case as the governing condition. Several types of human error can increase the risk from corrosion. Incorrect material selection for the environment (both internal and external exposures) is a possible mistake. Placing incompatible materials close to each other can create or aggravate corrosion potentials. This includes joining materials such as bolts, gaskets, and weld metal. Welding processes must be selected with corrosion potential in mind. Insufficient monitoring or care of corrosion control systems can also be viewed as a form of human error. These factors are covered in the incorrect operations index discussion of Chapter 6. In general, four ingredients are required for commonly seen metallic corrosion to progress: there must exist an anode, a cathode, an electrical connection between the two, and an electrolyte. Removal of any one of these ingredients will halt the corrosion process. Corrosion prevention measures are designed to do just that.
Three types of corrosion

The corrosion index assesses three general types: atmospheric corrosion, internal corrosion, and subsurface corrosion. This reflects three general environment types to which the pipe wall may be exposed.

Figure 4.2 Assessing corrosion potential: sample of data used to score the corrosion index
- Atmospheric corrosion (0-10 pts): A1 atmospheric exposures (casings; ground/soil interface; hot spots); A2 atmospheric type (temperature, humidity, contaminants); A3 atmospheric coating (type, age, application of coating; visual inspection age and results; other inspection age and results)
- Internal corrosion (0-20 pts): B1 product corrosivity (flowstream conditions; upset conditions; pH, solids, H2S, CO2, MIC, etc.; low-spot accumulations, equipment failure, etc.); B2 internal protection (internal coating; operational measures; monitoring)
- Subsurface corrosion (0-70 pts): C1 subsurface environment, covering soil corrosivity (resistivity, pH, moisture, carbonates, MIC, etc.) and mechanical corrosion (stress level, stress cycling, temperature, coating, CP, pH, etc.); C2 cathodic protection, covering effectiveness (test lead surveys, age, and results; close spaced surveys, type, age, and results) and interference potential (DC related; AC related; shielding potential); C3 coating, covering fitness (type, age, application of coating) and condition (visual inspection age and results; other inspection age and results)

Atmospheric corrosion deals with pipeline components that are exposed to the atmosphere. To assess the potential for corrosion here, the evaluator must look at items such as
- Susceptible facilities
- Atmospheric type
- Painting/coating/inspection program.

For the general risk assessment model described here, atmospheric corrosion is weighted as 10% of the total corrosion threat. This indicates that atmospheric corrosion is a relatively rare failure mechanism for most pipelines. This is due to the normally slower atmospheric mechanisms and the fact that most pipelines are predominantly buried and, hence, not exposed to the atmosphere. The evaluator must determine if this is an appropriate weighting for her assessments. Internal corrosion deals with the potential for corrosion originating within the pipeline. Assessment items include
- Product corrosivity
- Preventive actions.

Internal corrosion is weighted as 20% of the total corrosion risk in the examples. This indicates that internal corrosion is often a more significant threat than atmospheric corrosion, but still a relatively rare failure mechanism for most pipelines. Nevertheless, some significant pipeline failures have been attributed to internal corrosion. The evaluator may wish to give this category a different weighting in certain situations. Subsurface pipe corrosion is the most complicated of the categories, reflecting the complicated mechanisms underlying this type of corrosion. Among the items considered in this assessment are a mix of attributes and preventions including:
- Cathodic protection
- Pipeline coatings
- Soil corrosivity
- Presence of other buried metal
- Potential for stray currents
- Stress corrosion cracking potential
- Spacing of test leads
- Inspections of rectifiers and interference bonds
- Frequency of test lead readings
- Frequency and type of coating inspections
- Frequency and type of inspections of pipe wall
- Close interval surveys
- Use of internal inspection tools.

Subsurface corrosion is weighted as 70% of the total corrosion threat in the examples of this chapter. For nonmetal lines, the evaluator may wish to adjust this weighting to better reflect the actual hazards. Note that corrosion threats are very situation specific. The weightings of the three corrosion types proposed here are thought to generally apply to many pipelines but might be ill-suited to others. Any of the corrosion types might lead to a failure under the right circumstances, even when weightings suggest a relatively rare failure mechanism. The use of special alerts or even conversions to absolute probability scales might be appropriate, as addressed in discussions of data analysis later in this text. Especially in the case of buried metal, inspection for corrosion is commonly done by indirect methods. Direct inspection
of a pipe wall is often expensive and damaging (excavation and coating removal are often necessary to directly inspect the pipe material). Corrosion assessments therefore usually infer corrosion potential by examining a few variables for evidence of corrosion. These inference assessments are then occasionally confirmed by direct inspection. Characteristics that may indicate a high corrosion potential are often difficult to quantify. For example, in buried metal corrosion, soil acts as the electrolyte, the environment that supports the electrochemical action necessary to cause this type of corrosion. Electrolyte characteristics are of critical importance, but include highly variable items such as moisture content, aeration, bacteria content, and ion concentrations. All of these characteristics are location specific and time dependent, which makes them difficult even to estimate accurately. The parameters affecting atmospheric and internal corrosion potentials can be similarly difficult to estimate. Because corrosion is often a highly localized phenomenon, and because indirect inspection provides only general information, uncertainty is usually high. With this difficulty in mind, the corrosion index reflects the potential for corrosion to occur, which may or may not mean that corrosion is actually taking place. The index, therefore, does not directly measure the potential for failure from corrosion. That would require inclusion of additional variables such as pipe wall thickness and stress levels. So, the primary focus of this assessment is the potential for active corrosion. This is a subtle difference from the likelihood of failure by corrosion. The time to failure is related to the resistance of the material, the aggressiveness of the failure mechanism, and the time of exposure. The material resistance is in turn a function of material strength and dimensions, most notably pipe wall thickness, and the stress level.
In most cases, we are more interested in identifying locations where the mechanism is potentially more aggressive rather than predicting the length of time the mechanism must be active before failure occurs. An exception to this is found in systems where leak rate is used as a leading indicator of failure and where failure is defined as a pipe break (see Chapter 11).
Corrosion rate

Corrosion rate can be measured directly by using actual pipe samples removed from a pipeline and calculating metal loss over time. Extrapolating this sample corrosion rate to long lengths of pipe will usually be very uncertain, given the highly localized nature of many forms of corrosion. A corrosion rate can also be measured with coupons (metal samples) or electronic devices placed near the pipe wall. From these measurements, actual corrosion on a pipeline can be inferred, at least for the portions close to the measuring devices. In theory, one can also translate in-line inspection (ILI) or other inspection results into a corrosion rate. Currently, this is seen as a very problematic exercise given the spatial accuracy limitations of continuously changing ILI technologies and the need for multiple comparative runs over time. However, as data become more precise, corrosion rate estimates based on measurements become more useful. Because the corrosion scores are intended to measure corrosion potential and aggressiveness, it is believed that the scores relate to corrosion rates. However, the relationship can only be determined by using actual measured corrosion rates in a variety of environments. Until the relationship between corrosion index and corrosion rate can be established, only a theoretical relationship can be proposed. An example of this is shown in Chapter 14.
Information degradation

As discussed in earlier chapters, information has a useful life span. Because corrosion is a time-dependent phenomenon and corrosion detection is highly dependent on indirect survey information, the timing of those surveys plays a role in uncertainty and hence risk. The date of the information should therefore play a large role in any determination based on inspections or surveys. One way to account for inspection age is to make a graduated scale indicating the decreasing usefulness of inspection data over time. This measure of information degradation can be applied to the scores as a percentage. After a predetermined time period, scores based on previous inspections degrade, conservatively assuming increasing risk, to some predetermined value. An example is shown in Table 2.2. In that example, the evaluator has determined that a previous inspection yields no useful information after 5 years and that the usefulness degrades uniformly at 20% per year.
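The graduated scale described above can be sketched as a simple linear blend. This is an illustrative implementation only, not from the book: the 20%-per-year rate mirrors the example cited, while the function name and the idea of decaying toward a conservative default score are assumptions.

```python
# Sketch (assumed helper): linearly degrade the usefulness of an
# inspection-based score, per the example of 20% loss of usefulness per
# year and no useful information after 5 years.
def degraded_score(inspection_score, default_score, years_since_inspection):
    """Blend an inspection-based score toward a conservative default
    as the inspection ages; after 5 years only the default remains."""
    usefulness = max(0.0, 1.0 - 0.2 * years_since_inspection)
    return usefulness * inspection_score + (1.0 - usefulness) * default_score

# A fresh inspection keeps its full score; a 5-year-old one is fully
# replaced by the conservative default.
fresh = degraded_score(10, 2, 0)      # 10.0
halfway = degraded_score(10, 2, 2.5)  # 6.0
stale = degraded_score(10, 2, 5)      # 2.0
```

The linear schedule could just as easily be replaced by a stepwise table such as the book's Table 2.2; the point is only that the score used in the model should shrink toward a conservative value as the supporting inspection ages.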
Changes from previous editions

After several years of use of previous versions of the corrosion algorithm, some changes have been proposed in this edition of this book. These changes reflect the input of pipeline operators and corrosion experts and are thought to enhance the model's ability to represent corrosion potential. The first significant change is the modification of the weightings of the three types of corrosion. In most parts of the world and in most pipeline systems, subsurface corrosion (previously called buried metal corrosion) seems to far outweigh the other types of corrosion in terms of failure mechanisms. This has prompted the change in weightings as shown in Table 4.1. Note that these are very generalized weightings and may not fairly represent any specific situation. A pipeline with above-average exposures to atmospheric and internal corrosion mechanisms would warrant a change in weightings. Another significant change is in the groupings of subsurface corrosion variables. The new suggested scoring scheme makes use of the previous variables, but changes their arrangements and suggests new ways to evaluate them. A revised subsurface corrosion evaluation shows a regrouping of variables to better reflect their relationships and interactions.
Table 4.1  Changes to corrosion weightings

                               Previous weighting    Weighting in current examples
Atmospheric                    20                    10
Internal                       20                    20
Subsurface (buried metal)      60                    70
Total                          100                   100
Scoring the corrosion potential

All variables considered here continue to reflect common industry practice in corrosion mitigation/prevention. The variable weightings indicate the relative importance of each item in terms of its contribution to the total corrosion risk. The evaluator must determine if these weightings are most appropriate for the specific systems being assessed.

In the scoring system presented here, points are usually assigned to conditions and then added to determine the corrosion threat. This system adds points for safer conditions. For example, under subsurface corrosion of steel pipelines, three main aspects are examined: environment, coating, and cathodic protection. The best combination of environment (very benign), coating (very effective), and cathodic protection (also very effective) commands the highest points.

An alternative approach that may be more intuitive in some ways is to begin with an assessment of the threat level and then consider mitigation measures as adjustment factors. In this approach, the evaluator might wish to begin with a rating of environment, either atmosphere type, product corrosivity, or subsurface conditions, depending on which of the three types of corrosion is being examined. Then, multipliers are applied to account for mitigation effectiveness. For example, in a scheme where an increasing number of points represents increasing risk, perhaps a subsurface environment of Louisiana swampland warrants a risk score of 90 (very corrosive). A dry Arizona desert environment has an environmental rating of 20 (very low corrosion). Then, the best coating system decreases or offsets the environment by 50% and the best cathodic protection system offsets it by another 50%. So, the Louisiana situation with very robust corrosion prevention would be 90 x 50% x 50% = 22.5. This is very close to the Arizona desert situation with no coating or cathodic protection system.
This is intuitive since a very benign environment, from a corrosion rate perspective, can be seen as roughly equivalent to a corrosive environment with mitigation. Further discussion of scoring options such as this can be found in Chapter 2.
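The "threat times mitigation" alternative described above can be sketched as follows. The environment ratings and the 50% offsets are the book's illustrative numbers; the function name and the 0-to-1 offset convention are assumptions made for the sketch.

```python
# Sketch of the alternative scoring scheme: start from an environment
# threat rating (higher = more corrosive) and apply multiplicative
# credits for coating and cathodic protection effectiveness.
def corrosion_score(environment_rating, coating_offset, cp_offset):
    """Offsets are fractions between 0 (no mitigation) and 1 (perfect)."""
    return environment_rating * (1.0 - coating_offset) * (1.0 - cp_offset)

# Louisiana swampland (90) with best coating and CP (50% offsets each):
louisiana = corrosion_score(90, 0.5, 0.5)  # 90 * 0.5 * 0.5 = 22.5
# Arizona desert (20) with no coating or cathodic protection:
arizona = corrosion_score(20, 0.0, 0.0)    # 20.0
```

As the text notes, the heavily mitigated harsh environment (22.5) ends up close to the unmitigated benign one (20.0), which is the intuition the multiplier scheme is meant to capture.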
A. Atmospheric corrosion (weighting: 10% of corrosion threat)

Atmospheric corrosion is basically a chemical change in the pipe material resulting from the material's interaction with the atmosphere. Most commonly this interaction causes the oxidation of metal. In the United States alone, the estimated annual loss due to atmospheric corrosion was more than $2 billion, according to one 1986 source [31]. Even though cross-country pipelines are mostly buried, they are not completely immune to this type of corrosion. The potential for and relative aggressiveness of atmospheric corrosion is captured in this portion of the model. The evaluator may also include other types of potential degradation of exposed pipe, such as the effect of ultraviolet light on some plastic materials. A possible evaluation scheme for atmospheric corrosion is outlined below and described in the following paragraphs.
Atmospheric Corrosion (10% of corrosion threat = 10 pts)
  Exposures (50% of atmospheric = 5 pts)
  Environment (20% of atmospheric = 2 pts)
  Coatings (30% of atmospheric = 3 pts)
    Fitness (50% of coatings = 1.5 pts)
    Condition (50% of coatings = 1.5 pts)
      Visual inspection (50% of condition)
      Nondestructive testing (NDT) (30% of condition)
      Destructive testing (DT) (20% of condition)
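A weighted hierarchy like the one above rolls up by simple weighted summation. The sketch below is an assumed illustration, not from the book; the sub-scores and the 0-to-1 scoring convention are invented for the example, while the point weights (5/2/3 of the 10-point atmospheric total) follow the outline.

```python
# Sketch (assumed helper): combine sub-scores using their point weights.
# Each sub-score is expressed on a 0-1 scale (0 = worst, 1 = best), so the
# result is a number of points out of the parent's total.
def rollup(weights, scores):
    """Weighted sum of sub-scores; weights are in points."""
    return sum(weights[k] * scores[k] for k in weights)

# Atmospheric corrosion = 10 pts total, split 5 / 2 / 3:
weights = {"exposures": 5.0, "environment": 2.0, "coatings": 3.0}
scores = {"exposures": 0.4, "environment": 0.8, "coatings": 1.0}
atmospheric = rollup(weights, scores)  # 5*0.4 + 2*0.8 + 3*1.0 = 6.6 pts
```

The same pattern applies at every level of the outline: the coatings sub-score would itself be a rollup of fitness and condition before feeding into this calculation.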
A1. Atmospheric exposure (weighting: 50% of atmospheric corrosion)

The evaluator must determine the greatest risk from atmospheric corrosion by first locating the portions of the pipeline that are exposed to the most severe atmospheric conditions. Protection from this form of corrosion is considered in the next variable. In this way, the situation is assessed in the most conservative manner. The most severe atmospheric conditions may be addressed by the best protective measures. However, the assessment will be the result of the worst conditions and the worst protective measures found in the section. This conservatism not only helps in accounting for some unknowns, it also helps in pointing to situations where actions can be taken to improve the risk picture.

A schedule of descriptions of all atmospheric exposure scenarios should be set up. The evaluator must decide which scenarios offer the most risk. This decision should be based on data (historical failures or discoveries of problems), when available, and employee knowledge and experience. The following is an example of such a schedule for steel pipe:

Air/water interface               0 pts
Casings                           1 pt
Insulation                        2 pts
Supports/hangers                  2 pts
Ground/air interface              3 pts
Other exposures                   4 pts
None                              5 pts
Multiple occurrences detractor   -1 pt
In this schedule, the worst case, the lowest point value, governs the entire section being evaluated.

Air/water interface
The air/water interface is also known as a splash zone, where the pipe is alternately exposed to water and air. This could be the result of wave or tide action, for instance. Sometimes called waterline corrosion, the mechanism at work here is usually oxygen concentration cells. Differences in oxygen concentration set up anodic and cathodic regions on the metal. Under this scenario, the corrosion mechanism is enhanced as fresh oxygen is continuously brought to the corroding area and rust is carried away. If the water happens to be seawater or brackish (higher salt content), the electrolytic properties enhance corrosion because the higher ion content further promotes the electrochemical corrosion process. Shoreline structures often have significant corrosion damage due to the air/water interface effect.

Casings
Industry experience points to buried casings as a prime location for corrosion to occur. Even though the casing
and the enclosed carrier pipe are beneath the ground, atmospheric corrosion can be the prime corrosion mechanism. A vent pipe provides a path between the casing annular space and the atmosphere. In casings, the carrier pipe is often electrically connected to the casing pipe, despite efforts to prevent it. This occurs either through direct metallic contact or through a higher resistance connection such as water in the casing. When this connection is made, it is nearly impossible to control the direction of the electrochemical reaction, or even to know accurately what is happening in the casing. The worst situation occurs when the carrier pipeline becomes an anode to the casing pipe, meaning the carrier pipe loses metal as the casing pipe gains ions. Even without an electrical connection, the carrier pipe is subject to atmospheric corrosion, especially as the casing becomes filled with water and then later dries out (an air/water interface). The inability for direct observation or even reliable inference techniques causes this scenario to rate high in the risk hierarchy (see Figure 4.3 and The case for/against casings).
Ground/air interface
As with the air/water interface, the ground/air interface can be harsh from a corrosion standpoint. This is the point at which the pipe enters and leaves the ground (or is lying on the ground). The harshness is caused in part by the potential for trapping moisture against the pipe (creating a water/air interface). Soil movements due to changing moisture content, freezing, etc., can also damage pipe coating, exposing bare metal to the electrolyte.
Insulation
Insulation on aboveground pipe is notorious for trapping moisture against the pipe wall, allowing corrosion to proceed undetected. If the moisture is periodically replaced with fresh water, the oxygen supply is refreshed and corrosion is promoted. As with casings, such corrosion activity is usually not directly observable and, hence, can be potentially more damaging.
None
If there is no corrodible portion of the pipeline exposed to the atmosphere, the potential for atmospheric corrosion does not exist.
Supports/hangers
Another hot spot for corrosion, as determined by industry experience, is pipe supports and hangers, which often trap moisture against the pipe wall and sometimes provide a mechanism for loss of coating or paint. This occurs as the pipe expands and contracts, moving against the support and perhaps scraping away the coating. Mechanical-corrosion damage is also possible here. This type of damage often goes undetected.
Figure 4.3  Typical casing installation.
Other exposures
The above cases should cover the range of worst-case exposures for steel pipe exposed to the atmosphere. One of the above situations must exist for any aboveground piping; the pipe is either supported and/or it has one of the listed interfaces. A situation may exist, however, in which a non-steel pipe is not subject to degradation by any of the oxidation contributors listed. A plastic pipe may not be affected by any water or air or even chemical contact and yet may become brittle (and hence weaker) when exposed to sunlight. Sunlight exposure should therefore be included in that particular risk assessment.
Multiple occurrences detractor
In this example schedule, the evaluator deducts 1 point for sections that have multiple occurrences of a given condition. This reflects the increased opportunity for mishap because there are more potential corrosion sites. By this reasoning, a section containing many supports would receive 2 - 1 = 1 pt, the equivalent of a section containing a casing. This says that the risk associated with multiple supports equals the risk associated with one casing. A further distinction could be made by specifying a point deduction for a given number of occurrences: -1 point for 5 to 10 supports, -2 points for 10 to 20 supports, etc. This may be an unnecessary complication, however.
Typical casing installation.
Example 4.1: Scoring road casings

A section of steel pipeline being evaluated has several road crossings in which the carrier pipe is encased in steel. There are two aboveground valve stations in this section. One of these stations has approximately 25 ft of pipe supported on concrete and steel pedestals. The other one has no supports. The evaluator assesses the section for atmospheric corrosion "facilities" as follows:

Casings                 1 pt
Ground/air interface    2 pts
Supports                2 pts
Picking the worst case, the point value for this section is 1 pt. The evaluator feels that the number of casings, number of supports, and number of ground/air interfaces are roughly equivalent and chooses not to use the multiple occurrences option. If other sections being evaluated have a significantly different number of occurrences, adjustments would be needed to show the different risk picture. A distinction between a section with one casing and a section with two casings is needed to show the increased risk with two casings. In many modern assessments, segmentation is done so that sections with atmospheric exposures are distinct from those that have no such exposures. A cased piece of pipe will often be an independent section for scoring purposes since it has a distinct risk situation compared with neighboring sections with no casing. The neighboring sections will often have no atmospheric exposures and hence no atmospheric corrosion threat at all. This sectioning approach is a more efficient way to perform risk assessments, as is discussed in Chapter 2.
A2. Atmospheric type (weighting: 20% of atmospheric corrosion)

Certain characteristics of the atmosphere can enhance or accelerate the corrosion of steel. They are thought to promote the oxidation process. Oxidation is the primary mechanism evaluated in this section. Some of these atmospheric characteristics and some simplifying generalities about them are as follows:

Chemical composition. Either naturally occurring airborne chemicals such as salt or CO2, or man-made chemicals such as chlorine and SO2 (which may form H2SO3 and H2SO4), can accelerate the oxidation of metal.
Humidity. Because moisture can be a primary ingredient of the corrosion process, higher air moisture content is usually more corrosive.
Temperature. Higher temperatures tend to promote corrosion.

A schedule should be devised to show not only the effect of a characteristic, but also the interaction of one or more characteristics. For instance, a cool, dry climate is thought to minimize atmospheric corrosion. If a local industry produces certain airborne chemicals in this cool, dry climate, however, the atmosphere might now be as severe as a tropical seaside location. The following is an example schedule with categories for several different atmospheric types, ranked from most harsh to most benign, from a corrosion standpoint:
A. Chemical and marine                   0 pts
B. Chemical and high humidity            0.5 pt
C. Marine, swamp, coastal                0.8 pt
D. High humidity and high temperature    1.2 pts
E. Chemical and low humidity             1.6 pts
F. Low humidity and low temperature      2 pts
G. No exposures                          2 pts
A. Chemical and marine
Considered to be the most corrosive atmosphere, this includes certain offshore production facilities and refining operations, especially if in splash-zone environments. The pipe components are exposed to airborne chemicals and salt spray that promote oxidation, as well as occasional submersion in water.

B. Chemical and high humidity
Also quite a harsh environment, this may include chemical or refining operations in coastal regions. Airborne chemicals and a high moisture content in the air combine to enhance oxidation of the pipe steel.
C. Marine, swamp, coastal
High levels of salt and moisture combine to form a corrosive atmosphere here.

D. High humidity and high temperature
Similar to the situation above, this case may be seasonal or in some other way not as severe as the marine condition.
E. Chemical and low humidity
While oxidation-promoting chemicals are in the air, humidity is low, somewhat offsetting the effects. Distinctions may be added to account for temperatures here.
F. Low humidity and low temperature
The least corrosive atmosphere will have no airborne chemicals, low humidity, and low temperatures.
G. No exposures
There are no atmospheric exposures in the section being evaluated.

In applying this point schedule, the evaluator will probably need to use judgment. The type of environment being considered will not usually fit specifically into one of these categories, but will usually be comparable to one of them. Note, however, that given the low point values suggested here, scoring this variable does not warrant much research and scoring effort.
Example 4.2: Scoring atmospheric conditions

The evaluator is comparing three atmospheric conditions. The first case is a line that runs along a beach on Louisiana's Gulf Coast. This most closely resembles condition C. Because there are several chemical-producing plants nearby and winds may occasionally carry chemicals over the line, the evaluator adjusts the C score down by 50%. The second case is a steel line in eastern Colorado. While the line is seasonally exposed to higher temperatures and humidity, it is also frequently in cold, dry air. The evaluator assigns a point value based on an adjusted condition D. This is 1.6 pts, equivalent from a risk standpoint to condition E, even though there is no chemical risk. The final case is a line in southern Arizona. Experience confirms that this environment does indeed experience only minor,
nonaggressive corrosion. Because the evaluator foresees the evaluation of a line in a similarly dry, but also cold, climate, he awards points for condition F: 2 pts adjusted for higher temperatures = 1.9 points. (He plans to score the dry, cold climate as 2 pts.) These evaluations therefore yield the following rank order and relative magnitude:

Louisiana   0.4 pt
Colorado    1.6 pts
Arizona     1.9 pts
The evaluator sees little difference between conditions in Colorado and Arizona, from an atmospheric corrosion viewpoint, but feels that conditions around the line in south Louisiana are roughly four times worse.
A3. Atmospheric coating (weighting: 30% of atmospheric corrosion)

The third component in this study of the potential for atmospheric corrosion is an analysis of the preventive measures taken to minimize the threat. Obviously, where the environment is harsher, more preventive actions are required. From a risk standpoint, a situation where preventive actions are not required, a very benign environment, poses less risk than a situation where preventive actions are being taken to protect a pipeline from a harsh environment. The most common form of prevention for atmospheric corrosion is to isolate the metal from the offending environment. This is usually done with coatings. Coatings include paint, tape wraps, waxes, asphalts, and other specially designed coatings. For aboveground components, painting is by far the most common technique. No coating is defect free, so the corrosion potential will never be totally removed, only reduced. Note that, at this point, the evaluator is making no judgments as to whether a high-quality coating or inspection program is needed. That determination is made when the attributes of facilities and atmosphere type are combined with an assessment of these preventions.
Coating evaluations

Coating effectiveness depends on four factors:

1. Quality of the coating
2. Quality of the coating application
3. Quality of the inspection program
4. Quality of the defect correction program.
The first two address the fitness of the coating, its ability to perform adequately in its intended service for the life of the project. The second two address the current condition of the coating, how it is actually performing. For a general, qualitative evaluation, each of these components can be rated on a 4-point scale: good, fair, poor, or absent. The point values should probably be equivalent unless the evaluator can say that one component is of more importance than another. A quality coating is of little value if the application is poor; a good inspection program is incomplete if the defect correction program is poor. Perhaps an argument can be made that high scores in coating and application place less importance on inspection and defect correction. This would obviously be a sliding scale and is probably an unnecessary complication. An evaluation scale could look like this:

Good     3 pts
Fair     2 pts
Poor     1 pt
Absent   0 pts
Coating fitness (weighting: 50% of coating evaluation)

Coating quality. Evaluate the coating in terms of its appropriateness in its present application. Where possible, use data from coating stress tests or actual field experience to rate the quality. When these data are not available, draw from any similar experience or from judgment.

Good: A high-quality coating designed for its present environment.
Fair: An adequate coating, but probably not specifically designed for its specific environment.
Poor: A coating is in place but is not suitable for long-term service in its present environment.
Absent: No coating present.

Note: Some of the more important coating properties include electrical resistance, adhesion, ease of application, flexibility, impact resistance, flow resistance (after curing), resistance to soil stresses, resistance to water, and resistance to attack by bacteria or other organisms. In the case of submerged or partially submerged lines, marine life such as barnacles or borers must be considered.

Application. Evaluate the most recent coating application process and judge its quality in terms of attention to precleaning, coating thickness, the application environment (control of temperature, humidity, dust, etc.), and the curing or setting process.

Good: Detailed specifications used; careful attention paid to all aspects of the application; appropriate quality control systems used.
Fair: Most likely a proper application, but without formal supervision or quality controls.
Poor: Careless, low-quality application performed.
Absent: Application was incorrectly done, steps omitted, environment not controlled.
Coating condition (weighting: 50% of coating evaluation)

Inspection. Evaluate the inspection program for its thoroughness and timeliness. Documentation may also be an integral part of the best possible inspection program.

Good: Formal, thorough inspection performed specifically for evidence of atmospheric corrosion. Inspections are performed by trained individuals using checklists at appropriate intervals (as dictated by local corrosion potential).
Fair: Informal inspections, but performed routinely by qualified individuals.
Poor: Little inspection; reliance is on chance sighting of problem areas.
Absent: No inspection done.

Note: Typical coating faults include cracking, pinholes, impacts (from sharp objects), compressive loadings (stacking of coated pipes, for instance), disbondment, softening or flowing, and general deterioration (ultraviolet degradation, for example). The inspector should pay special attention to sharp corners and difficult shapes. They are difficult to clean prior to painting, and difficult to adequately coat (paint will flow away from sharpness). Examples are nuts, bolts, threads, and some valve components. These are often the first areas to show corrosion and will give a first indication as to the quality of the paint job.
Correction of defects. Evaluate the program of defect correction in terms of thoroughness and timeliness.

Good: Reported coating defects are immediately documented and scheduled for timely repair. Repairs are carried out per application specifications and are done on schedule.
Fair: Coating defects are informally reported and are repaired at convenience.
Poor: Coating defects are not consistently reported or repaired.
Absent: Little or no attention is paid to coating defects.

A more rigorous evaluation of coating condition would involve specific measurements of defects found, adjusted by the time that has passed since the inspection and the use of special equipment during the inspection. Nondestructive testing (NDT) performed during the inspection includes a visual inspection. The visual inspection can quantify and/or characterize the defects observed. Qualitative scales for such visual assessments can be found in National Association of Corrosion Engineers (NACE) guidelines. NDT using special equipment can also quantify the coating thickness and the extent of holidays. Current thickness can be compared against design or intended thickness to assess degradation or other inconsistency with design intent. If an electrical continuity tester is used, the extent of holidays can be expressed in terms of the voltage setting and number of indications or other measurement of the number and size of coating defects. An NDT inspection rating scale can be established for a more detailed evaluation; Table 4.2 provides an example. Destructive testing (DT) involves removing a sample of coating or pipe and performing laboratory tests. Properties investigated might include:

Holidays
Adhesion
Abrasion resistance
Shear
Impact resistance.
In some cases, a more detailed evaluation of coating condition might be warranted. An example list of variables for this more rigorous evaluation is as follows:

Atmospheric Coating Condition
  Visual inspection results
    Coating failures per square foot
    Date of last visual inspection
  NDT inspection results
    Thickness versus design thickness
    Holidays per square foot
    Chalking, cracking, blistering, flaking
    Date of NDT inspection
  DT inspection results
    Adhesion
    Abrasion resistance
    Impact resistance
    Shear strength
    Date of DT inspection
Example 4.3: Scoring coating condition (Good)

In this section of aboveground piping, records indicate that a high-quality paint was applied per NACE specifications. The operator sends a trained inspector to all aboveground sites once each quarter, and corrects all reported deficiencies at least twice per year. The evaluator awards points as follows:

Coating: good              3 pts
Application: good          3 pts
Inspection: good           3 pts
Defect correction: good    3 pts

Average                    3 pts

Note: Twice-per-year defect correction is deemed appropriate for the section's environment.
Example 4.4: Scoring coating condition (Fair)

Here, a section contains several locations of aboveground pipe components at valve stations and compressor stations. Touch-up painting is done occasionally at the stations. This is done by a general contracting company at the request of the
Table 4.2  Example NDT inspection rating scale

        Blister size    Blister frequency    Rusting     Chalking    Cracking    Flaking
Good    None            None                 None        None        None        None
Fair    <1/8 in         Medium               Light       Moderate                30%
Poor    <3/8 in         Dense                Complete    Moderate                50%
pipeline area foreman. No formal specifications exist. The foreman requests paint work whenever he feels it is needed (based on his personal inspection of a facility). The evaluator awards points as follows:

Coating: fair              2.0 pts
Application: fair          1.8 pts
Inspection: fair           2.2 pts
Defect correction: poor    1.0 pt

Average                    1.75 pts
Note: In this example, the evaluator wishes to make distinctions between the evaluation scores, so she uses decimals to rate items a little above or a little below the normal rating. This may be appropriate in some cases, but it adds a level of complexity that may not be warranted, given the low point values. The evaluator feels that the choice of paint is probably appropriate though not specified. Application is slightly below fair because no specifications exist and the contractor's workforce is usually subject to regular turnover. Inspection is slightly above fair because the foreman does make specific inspections for evidence of atmospheric corrosion and is trained in spotting this evidence. Defect correction is poor because defect reporting and correction appear to be sporadic at best.
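The equal-weight averaging used in Examples 4.3 and 4.4 can be sketched as follows, on the assumed scale of good = 3, fair = 2, poor = 1, absent = 0, with decimal adjustments for "slightly above/below" ratings; the function name and dictionary keys are invented for this illustration.

```python
# Sketch of the coating-condition averaging from the examples above:
# the four evaluation components are weighted equally.
def coating_score(components):
    """Average the coating-evaluation component ratings."""
    return sum(components.values()) / len(components)

# Example 4.4's ratings, with decimal adjustments:
example_4_4 = coating_score({
    "coating": 2.0,            # fair
    "application": 1.8,        # slightly below fair
    "inspection": 2.2,         # slightly above fair
    "defect_correction": 1.0,  # poor
})  # (2.0 + 1.8 + 2.2 + 1.0) / 4 = 1.75 pts
```

Unequal component weights could be substituted if the evaluator judges, say, defect correction to matter more than application quality, as the text discusses.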
The case for/against casings

Buried casings show up at several points in this risk assessment, sometimes as risk reducers, sometimes as risk creators. The following information provides a general discussion of the use of pipeline casings. Oversized pipe, called casing pipe, is sometimes placed over the carrier pipeline to protect it from external loadings and/or to facilitate repairs to the carrier pipe. Casings have long been used by the pipeline industry. They are generally placed under highways, roads, and railroads where higher external loadings are anticipated or where pipeline leaks might cause damage to a structure (see Figure 4.3 earlier). A casing also allows for easier replacement of the pipeline if a problem should develop. Instead of digging up a roadway, the pipeline can simply be pulled out of the casing, repaired, and reinstalled without disrupting traffic. A third potential benefit from casings is that a slow pipeline leak can be contained in the casing and detected via the casing vent pipe rather than slowly undermining the roadway or forming underground pockets of accumulated product.

An industry controversy arises because the benefits casings provide are at least partially offset by problems caused by their presence. These problems are primarily corrosion related. It is probably safe to say that corrosion engineers would rather not have casings in their systems. It is more difficult to protect an encased pipe from corrosion. The casing provides an environment in which corrosion can proceed undetected and prevention methods are less effective. Because the pipeline cannot be directly inspected, indirect methods are used to give indications of corrosion. These techniques are not comprehensive, sometimes unreliable, and often require expert interpretation.

Several dilemmas/problems are typically encountered with casings. Atmospheric corrosion can occur if any coating
defects exist, and yet, insertion of the pipeline into the casing is an easy way to damage the coating and create defects. End seals are used to keep water, mud, and other possible electrolytes out of the casing annular space, but are easily defeated by poor design and/or installation or by minor ground movements.The presence of electrolyte in the annular space can lead to corrosion cells between the casing and the pipeline, as well as interference problems with the cathodic protection system. Vent pipes are often installed to release leaked products, but these vents allow direct communication between the casing annular space and the atmosphere--consequently, moisture is almost always present in the annular space. Cathohc protection is usually employed to protect buried steel pipelines. The casing pipe can shield the pipeline from the protective currents if there is no electrical bond between the casing and the pipeline. If there is such a bond, the casing usually not only shields the pipeline from the current, but also draws current from it, effectively turning the pipeline into an anode that is sacrificed to protect the casing pipe, which is now the cathode! Several mitigative measures can be employed to reduce corrosion problems in casings. These were illustrated earlier in Figure 4.3 and are described below: Test leads. By comparing the pipe-to-soil potentials (voltages) of the pipeline versus the casing pipe, evidence of bonding between the two is sought. Test leads allow the voltage measurements to be made. Nonconductive spacers. These are designed to keep the pipeline physically and electrically separated from the casing pipe. They also help to protect the pipe coating during insertion into the casing. End seals. These are designed to keep the annular space free of substances that can act as an electrolyte (water, mud, etc.). Filling the annular space. 
Use of a dielectric (nonconductive) substance reduces the potential for electrical paths between the casing and the pipeline. Unfortunately, it also negates some of the casing benefits listed earlier.
Reflecting the trade-off in benefits, casings can be risk reducers (protection from external loads, including third-party damages and land movements) yet at the same time be risk adders in the corrosion index (promoting atmospheric and subsurface corrosion of metal). It would be nice to say that one will always outweigh the other, but we do not know that this is always the case. A risk cost/benefit analysis for casings can be performed by using a risk model to quantify the relative advantages and disadvantages from a risk standpoint.

Other factors must be considered in casing decisions. Often regulatory agencies leave no choice in the matter. The owner of the crossing (railroad, highway, etc.) may also mandate a certain design. Economics, of course, always play an important role. The costs of casings must include ongoing maintenance costs, but the costs of not using casing must include pipe strong enough to carry all loads and damages to the crossing, should pipeline replacement be needed. As an additional benefit of applying a risk management system such as this one to the problem of casings, the pipeline operator and designer have a rational basis for weighing the benefits of alternate designs.
Scoring the corrosion potential 4/71
B. Internal corrosion (weighting: 20% of corrosion threat)

Internal Corrosion (20%, 20 pts)
  Product corrosivity (50% of internal corrosion, 10 pts)
    From potential upsets (70% of product corrosivity, 7 pts)
      Equipment (30% of 7 pts, 2 pts)
      O&M (30% of 7 pts, 2 pts)
      Flow velocity (40% of 7 pts, 3 pts)
    From flow stream characteristics (30% of product corrosivity, 3 pts)
      Solids related (40% of 3 pts, 1 pt)
      Water related (60% of 3 pts, 2 pts)
  Preventions (50% of internal corrosion, 10 pts)
  Measured corrosion rate (adjustments)

In this section, an assessment is made of the potential for internal corrosion. Internal corrosion is pipe wall loss or damage caused by a reaction between the inside pipe wall and the product being transported. Such corrosive activity may not be the result of the product intended to be transported but rather the result of an impurity in the product stream. Seawater intrusion into an offshore natural gas stream, for example, is not uncommon. The natural gas (methane) will not harm steel, but saltwater and other impurities can certainly promote corrosion. Other corrosion-promoting substances sometimes found in natural gas include CO2, chlorides, H2S, organic acids, oxygen, free water, solids or precipitates, and sulfur-bearing compounds. Microorganisms that can indirectly promote corrosion should also be considered here. Sulfate-reducing bacteria and anaerobic acid-producing bacteria are sometimes found in oil and gas pipelines. They produce H2S and acetic acid, respectively, both of which can promote corrosion [79].

Pitting corrosion and crevice corrosion are specialized forms of galvanic or concentration cell corrosion commonly seen in cases of internal corrosion. Corrosion set up by an oxygen concentration cell can be accelerated if certain ions are present to play a role in the reactions. The attack against certain stainless steels by saltwater is a classic example. Erosion as a form of internal corrosion is also considered here.
Product reactions that do not harm the pipe material should not be included here. A good example of this is the buildup of wax or paraffin in some oil lines. While such buildups cause operational problems, they do not normally contribute to the corrosion threat unless they support or aggravate a mechanism that would otherwise not be present or as severe. Some of the same measures used to prevent internal corrosion, such as internal coating, are used not only to protect the pipe, but also to protect the product from impurities that may be produced by corrosion. Jet fuels and high-purity chemicals are examples of pipeline products that are often carefully protected from such contaminants. The threat from internal corrosion is evaluated by examining the product characteristics and the preventive measures being taken to offset certain product characteristics.
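The point breakdown listed above can be encoded as a small data structure and checked for arithmetic consistency. This is an illustrative sketch only; the point values are transcribed from the outline, while the structure and function names are ours:

```python
# Point breakdown for the internal corrosion threat (20% of the
# 100-pt corrosion index), transcribed from the outline above.
INTERNAL_CORROSION = {
    "product corrosivity": {            # 50% of internal corrosion = 10 pts
        "upsets": {                     # 70% of product corrosivity = 7 pts
            "equipment": 2,
            "O&M": 2,
            "flow velocity": 3,
        },
        "flow stream": {                # 30% of product corrosivity = 3 pts
            "solids related": 1,
            "water related": 2,
        },
    },
    "preventions": 10,                  # 50% of internal corrosion = 10 pts
}

def total(node):
    """Recursively sum leaf point values."""
    if isinstance(node, dict):
        return sum(total(v) for v in node.values())
    return node

print(total(INTERNAL_CORROSION))  # 20
```

Summing the leaves confirms that the subweights roll up to the stated 20-pt maximum.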
B1. Product corrosivity (weighting: 50% of internal corrosion potential)

This is an assessment of the relative aggressiveness of the pipeline contents that are in immediate contact with the pipe
wall. The greatest threat exists in systems where the product is inherently incompatible with the pipe material. Another threat arises when corrosive impurities can routinely get into the product. These two scenarios can be scored separately and then combined for an assessment of product corrosivity:

Product corrosivity = [flow stream characteristics] + [upset conditions]
These components are added since the worst case scenario would be one where both are active in the same pipeline: both a corrosive product and the potential for additional corrosion through upsets. The weighting of the two is situation specific, but because hydrocarbons are inherently noncorrosive and most transportation of hydrocarbons strives for very low product contaminant levels, a weighting emphasizing upset potential might be appropriate for many hydrocarbon transport scenarios. The following example point scores use a 30/70% weighting scheme, emphasizing product corrosivity episodes originating from unintentional contaminations (upsets). For convenience, the term contaminant is used here to mean some product component that is corrosive to the pipe wall, even though some amounts of the component might be allowable according to the product specification.

Normal flow stream characteristics

The normal flow stream characteristics should represent a measure of the corrosivity of the products transported in the pipeline. This measure assesses corrosion potential from normal contact between flowing product and the pipe wall, based on product specifications and/or product analyses. A “no-flow” condition might aggravate otherwise harmless contact between product and pipe wall. An example is the higher concentration of dropout contaminants that occurs during no-flow or low-flow conditions, such as water accumulation in low spots. These scenarios can be considered here (as normal flow conditions) or they might be more efficiently handled under the evaluation of corrosivity due to upset conditions (where they are considered to be abnormal flow conditions).

In many cases, the flow stream characteristics can be divided into two main categories, water related and solids related, for purposes of evaluating corrosivity [94]. These categories do not precisely reflect the role or transport state of the various contaminants, but might be useful for organizing variables.
Flow stream characteristics = [water related] + [solids related]
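The additive roll-up just shown, together with the per-contaminant point scales described in this section, can be sketched as follows. All contaminant names, measurement bounds, and per-contaminant weights here are hypothetical illustrations, not values from the text:

```python
# Illustrative sketch of the flow-stream roll-up: each contaminant is
# scored on a linear scale between best- and worst-case bounds, then
# summed into water-related and solids-related subscores.
# All bounds and weights below are hypothetical.
def scale(measured, best, worst, max_pts):
    """Best-case measurement earns max_pts (safest); worst-case earns 0."""
    frac = (worst - measured) / (worst - best)
    return max(0.0, min(1.0, frac)) * max_pts

# Water-related subscore (2 pts max), e.g. free water and chlorides:
water = scale(5.0, 0.0, 20.0, 1.0) + scale(10.0, 0.0, 100.0, 1.0)
# Solids-related subscore (1 pt max), e.g. suspended solids:
solids = scale(2.0, 0.0, 10.0, 1.0)

flow_stream = water + solids  # out of 3 pts
print(round(flow_stream, 2))  # 2.45
```

Out-of-range readings are clamped, so a measurement worse than the worst-case bound simply earns zero points for that contaminant.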
Water-related contamination potential might include an assessment of the concentrations of components such as water content, oxygen, pH, H2S, temperature, and chlorides. Solids-related contamination potential might include measuring the concentrations of components such as MIC and suspended solids (see also the discussion of erosion potential).
4/72 Corrosion Index
Sulfates, carbonates, and conductivity also fall into this assessment.

A detailed assessment of internal corrosion would use the actual measurements of the concentrations, especially where such measurements are easily available to the evaluator. Weightings can be assigned based on the perceived role of the contaminant in corrosion. Point scales can then be developed based on the weightings and the expected range of measurements, best case to worst case, for each contaminant.

Upset potential

This aspect of internal corrosion measures the potential for increased product corrosivity under abnormal conditions. This might include unintentional introduction of contaminants and changes in flow patterns that might aggravate previously insignificant corrosion potential. The introduction of contaminants is a function of (1) the processing prior to delivery into the pipeline, (2) equipment capabilities and failure potential, and (3) operations and maintenance practices of the facility delivering the product into the pipeline. Changes in flow patterns, including stagnant flow conditions, can be considered to be “upsets.” Low flow rates can lead to increased chances of liquid or solid dropout and accumulation at low spots, whereas high flow rates can lead to erosion. Contaminant dropout may lead to increased contact time between pipe wall and product, perhaps at higher contaminant concentrations (at low-spot accumulation points, for instance). Anything that leads to increased corrosive contaminant contact with pipe walls will logically increase corrosion potential and rate. Note, however, that subsequent higher flow rates might sweep accumulations and hence be a mitigation measure as described later.

Erosion is the removal of pipe wall material caused by the abrasive or scouring effects of substances moving against the pipe wall. It is a form of corrosion only in the pure definition of the word, but is considered here as an internal corrosion potential.
High velocities and abrasive particles in the product stream are the normal contributing factors to erosion. Impingement points such as elbows and valves are the most susceptible erosion points. Gas at high velocities may be carrying entrained particles of sand or other solid residues and, consequently, can be especially damaging to the pipe components. Historical evidence of erosion damage is of course a strong indicator of susceptibility. Other evidence includes high product stream velocities (perhaps indicated by large pressure changes in short distances) or abrasive fluids. Combinations of these factors are, of course, the strongest evidence. If, for instance, an evaluator is told that sand is sometimes found in filters or in damaged valve seats, and that some valves had to be replaced recently with more abrasion-resistant seat materials, he may have sufficient reason to penalize the pipe section for erosion potential.

The overall assessment of upset potential, as a contributing factor to internal corrosion potential, can be accomplished through an evaluation and scoring of the following:

Equipment: an evaluation of the types of equipment used to remove contaminants or prevent contaminant introduction into the pipeline, and the reliability of such equipment. Examples include product filters, dehydrators, and scrubbers. Potential for carryovers due to incorrect operations, improperly sized equipment, or unusual levels of contaminants received should be included in the evaluation.

O&M practices: an evaluation of the actions taken by the operator to prevent introduction of contaminants. This may include the degree of human intervention required and the number of redundancies that can interrupt a sequence of events that might otherwise result in increased contaminant concentrations. See also the discussion under mitigation measures.

Highest flow velocity + highest profile: an evaluation of the normal and worst case high flowing velocities and an assessment of their effect on erosion potential and contact time between contaminant and pipe wall. Both the average high and the peak velocities should be of interest.

Lowest flow velocity + lowest profile: an evaluation of the normal and worst case low flowing velocities and an assessment of their effect on erosion potential and contact time between contaminant and pipe wall. Both the average low and the lowest velocities should be of interest.

Points can be assigned to these factors based on observed or reported conditions and can be combined for a final assessment of upset potential.
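One way to sketch this combination uses the subweights from the internal-corrosion outline (equipment 2 pts, O&M 2 pts, flow velocity 3 pts, for a 7-pt maximum); the input fractions expressing how well each factor is managed are illustrative assumptions:

```python
# Sketch of combining the upset-potential subfactors per the weighting
# outline: equipment 2 pts, O&M 2 pts, flow velocity 3 pts (7 pts max).
WEIGHTS = {"equipment": 2.0, "o_and_m": 2.0, "flow_velocity": 3.0}

def upset_potential_pts(fractions):
    """fractions: dict of factor -> 0.0 (worst case) to 1.0 (best case)."""
    return sum(WEIGHTS[k] * fractions[k] for k in WEIGHTS)

# Well-maintained equipment, average O&M, some erosion concern:
print(upset_potential_pts(
    {"equipment": 1.0, "o_and_m": 0.5, "flow_velocity": 0.5}))  # 4.5
```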
Simplified scoring of product corrosivity

In many cases, the amount of detail described above may not be warranted for scoring internal corrosion potential. This is especially true if the risk evaluation is primarily used as a high-level screening tool. In this case, the above factors can be considered more generally and perhaps outside a formal scoring protocol. These considerations can then be used to assign point values in a more qualitative fashion. A simple schedule can be devised to assign points to the product corrosivity if a more generalized approach is appropriate:

Strongly corrosive ... 0 pts
Mildly corrosive ... 3 pts
Corrosive only under special conditions ... 7 pts
Never corrosive ... 10 pts
“Strongly corrosive” suggests that a rapid, damaging kind of corrosion is possible. The product is highly incompatible with the pipe material. Transportation of brine solutions, water, products with H2S, and many acidic products are examples of materials that are highly corrosive to steel lines. “Mildly corrosive” suggests that damage to the pipe wall is possible but only at a slow rate. Having no knowledge of the product corrosivity can also fall into this category; it is conservative to assume that any product can do damage unless we have evidence to the contrary. “Corrosive only under special conditions” means that the product is normally benign, but there exists the chance of introducing a harmful component into the product. CO2 or saltwater excursions in a methane pipeline are a common example. These natural components of natural gas production are usually removed before they can get into the pipeline. However, equipment used to remove such impurities is subject to equipment failures, and subsequent spillage of impurities into the pipeline is a possibility.
“Never corrosive” means that there are no reasonable possibilities that the product transported will ever be incompatible with the pipe material. The evaluator may also wish to interpolate and assign point values between the ones shown.
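The simplified schedule, with the interpolation the text suggests, can be sketched as a lookup plus an evaluator-chosen adjustment (the function and its adjustment parameter are our illustration, not part of the manual):

```python
# Simplified product-corrosivity schedule from the text, with an
# optional adjustment so the evaluator can interpolate between
# categories (e.g., deduct points for frequent upset conditions).
SCHEDULE = {
    "strongly corrosive": 0,
    "mildly corrosive": 3,
    "corrosive only under special conditions": 7,
    "never corrosive": 10,
}

def product_corrosivity_pts(category, adjustment=0):
    """Look up the category score and clamp the result to 0-10 pts."""
    return max(0, min(10, SCHEDULE[category] + adjustment))

# A special-conditions product with 2 pts deducted for frequent upsets:
print(product_corrosivity_pts("corrosive only under special conditions", -2))  # 5
```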
B2. Preventions (weighting: 50% of internal corrosion)

It is often economically advantageous to transport corrosive substances in pipe that is susceptible to corrosion by the substance. In these cases, it is prudent to take actions to reduce the damage potential. Having assessed the potential for a corrosive product, the evaluator can now examine and evaluate mitigation measures being employed against potential internal corrosion. A point schedule, based on the probable effectiveness of the measures, will show how the risk picture is affected. In the following example schedule, points are added for each preventive action that is employed, up to a maximum of 10 points.

Anticorrosion activities being performed:
None ... 0 pts
Internal monitoring ... 2 pts
Inhibitor injection ... 4 pts
Not needed ... 10 pts
Internal coating ... 5 pts
Operational measures ... 3 pts
Pigging ... 3 pts
None

This, of course, means that no actions are taken to reduce the risk of internal corrosion.
Internal monitoring

Normally, this is done in either of two ways: (1) by an electronic probe that can continuously transmit measurements that indicate a corrosion potential or (2) by a coupon that actually corrodes in the presence of the flowing product and is removed and measured periodically. Each of these methods requires an attachment to the pipeline to allow the probe or coupon to be inserted into and extracted from the flowing product. Another method involves the use of a spool piece, a test piece of pipe that can be removed and carefully inspected for evidence of internal corrosion. Searching for corrosion products in pipeline filters or during pigging operations is yet another method of inspection/monitoring.

To be creditable under this section, an inspection method requires a well-defined program of monitoring and interpretation of the data at specified intervals. It is further implied that appropriate actions are taken, based on the analysis from the monitoring program. Where a corrosion rate is actually measured, the overall internal corrosion score can be somewhat calibrated with this information. Ideally, the scores will reflect the corrosion potential and will correlate well with more direct evidence such as a measured corrosion rate. Caution must be exercised, however, when assigning favorable scores based solely on the nondetection of internal corrosion at certain times and at limited locations. The potential for corrosion might be high, and is worth noting, even when no active corrosion is detected. More is said about corrosion rate later in this chapter and in Chapter 14.
Inhibitor injection

When the corrosion mechanism is fully understood, certain chemicals can be injected into the flowing product stream to reduce or inhibit the reaction. Because oxygen is a chief corroding agent of steel, an “oxygen-scavenging” chemical can combine with the oxygen in the product to prevent this oxygen from reacting with the pipe wall. A more common kind of chemical inhibitor forms a protective barrier between the steel and the product, a coating, in effect. Inhibitor is reapplied periodically or continuously injected to replace the inhibitor that is absorbed or displaced by the product stream. In cases where microorganism activity is a problem, biocides can be added to the inhibitor.

The evaluator should be confident that the inhibitor injection equipment is well maintained and injects the proper amount of inhibitor at the proper rate. Inhibitor effectiveness is often verified by an internal monitoring program as described above. A pigging program may be necessary to supplement inhibitor injection. The pigging would be designed to remove free liquids or bacteria colony protective coverings, which might otherwise interfere with inhibitor or biocide performance.

Internal coating

Internal coating can take several forms including spray-on applications of plastics, mortar, or concrete as well as insertion liners for existing pipelines. New materials technology allows for the creation of “lined” pipe. This is usually a steel outer pipe that is isolated from a potentially damaging product by a material that is compatible with the product being transported. Plastics, rubbers, or ceramics are common isolating materials. They can be installed during initial pipe fabrication, during pipeline construction, or sometimes the material can be added to an existing pipeline. Such two-material composite systems are also discussed in the design index (Chapter 5).
For purposes of this part of the risk assessment, the evaluator should assure himself that the composite system is effective in protecting the pipeline from damage due to internal corrosion. A common concern in such systems is the detection and repair of a leak that may occur in the liner. The internal coating can be judged by the same criteria as coatings for protection from atmospheric corrosion and buried metal corrosion described in this chapter. Note that an internal coating that is applied for purposes of reduction in flow resistance might be of limited usefulness in corrosion control.

Operational measures

In situations where the product is normally compatible with the pipe material but corrosive impurities can be introduced, operational measures are often used to prevent the impurities. Systems used to dehydrate or filter a product stream fall into this classification. A system that strips sour gas (sulfur compounds) from a product stream is another example. Maintaining a certain temperature on a system in order to inhibit corrosion would also be a valid operational measure. These systems or measures are termed operational here because the operation of the equipment is often as critical as the original design. Procedures and mechanical safeties should be in place to prevent corrosive materials from entering the pipeline in case of equipment failure or system overloads. The evaluator should check to see that the conditions for which the equipment was designed are still valid, especially if the effectiveness of the impurities removal cannot be directly determined. The evaluator should look for consistency and
effectiveness in any operational measure purported to reduce internal corrosion potential.

Pigging

A pig is a cylindrical or spherical object designed to move through a pipeline for various purposes (Figure 4.4). Pigs are used to clean pipeline interiors (wire brushes may be attached), separate products, push products (especially liquids), gather data (when fitted with special electronic devices), detect leaks, etc. A wide variety of special-purpose pigs in many shapes and configurations is possible. There is even a bypass pig that is designed with a relief valve to clear debris from in front of the pig if the debris causes a high differential pressure across the pig. A regular program of running cleaning or displacement-type pigs to remove potentially corrosive materials is a proven effective method of reducing (but not eliminating) damage from internal corrosion. The program should be designed to remove liquids or other materials before they can do appreciable damage to the pipe wall. Monitoring of the materials displaced from the pipeline should include a search for corrosion products such as iron oxide in steel lines. This will help to assess the extent of corrosion in the line.

Pigging is partly an experience-driven technique. From a wide selection of pig types, the knowledgeable operator must choose an appropriate model, design the pigging protocol including pig speed, distance, and driving force, and assess the progress during the operation. The evaluator should be satisfied that the pigging operation is indeed beneficial and effective in removing corrosive products from the line in a timely fashion.
Example 4.5: Scoring internal corrosion A section of natural gas pipeline (steel) is being examined. The line transports gas from offshore production wells. The gas is dried and treated (removal of sulfur) offshore, but the offshore treating equipment malfunctions rather routinely. The operator injects inhibitor to control corrosion from any offshore liquids that escape the dehydration process. Recently, it was discovered that the inhibitor injector had failed for a period of 2 weeks before the malfunction was corrected. The operator also runs pigs once per month to remove any free-standing liquids in the pipeline. Corrosion probes provide continuous data on the corrosion rate inside the line.
The evaluator assesses the situation as follows:

A. Product corrosivity ... 5 pts

The line is exposed to corrosive components only under upset conditions, but 2 points are deducted because the upset conditions appear to be rather frequent.

B. Preventions
Internal monitoring ... 2 pts
Inhibitor injection ... 2 pts
Operational measures ... 2 pts
Pigging ... 3 pts
Total ... 9 pts (out of 10 pts max)
Points were deducted from each of two of the preventive measures (inhibitor injection and operational measures) because of known reliability problems with the actions. A penalty for the offshore operational measures was actually taken twice in this case, once in the product corrosivity and once in the preventive actions. The total score for internal corrosion is then: A + B = 5 + 9 = 14 pts
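As a quick check, the example's tally can be reproduced in a few lines (a sketch of the arithmetic, not code from the manual):

```python
# Reproduction of the Example 4.5 arithmetic: prevention points are
# summed and capped at the 10-pt maximum, then added to the product
# corrosivity score for the internal corrosion total.
preventions = {
    "internal monitoring": 2,
    "inhibitor injection": 2,   # reduced from 4 for the injector failure
    "operational measures": 2,  # reduced from 3 for routine malfunctions
    "pigging": 3,
}
product_corrosivity = 5  # corrosive only under (frequent) upset conditions

prevention_total = min(10, sum(preventions.values()))
internal_corrosion = product_corrosivity + prevention_total
print(prevention_total, internal_corrosion)  # 9 14
```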
C. Subsurface corrosion (weighting: 70%)

Subsurface Corrosion (70% of overall corrosion threat, 70 pts)
  Subsurface environment (20 pts)
    Soil corrosivity (15 pts)
    Mechanical corrosion (5 pts)
  Coating (25 pts)
    Fitness (10 pts)
    Condition (15 pts)
  Cathodic protection (25 pts)
    Effectiveness (15 pts)
    Interference potential (10 pts)
      AC related (2 pts)
      Shielding (1 pt)
      DC related (7 pts)
        Telluric currents (1 pt)
        DC rail (3 pts)
        Foreign lines (3 pts)
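As with the internal corrosion outline, the subsurface breakdown can be encoded and its totals verified; the structure below is our sketch, with point values transcribed from the outline:

```python
# Subsurface corrosion point breakdown (70% of the corrosion threat),
# transcribed from the outline above, with totals checked recursively.
SUBSURFACE = {
    "subsurface environment": {"soil corrosivity": 15, "mechanical corrosion": 5},
    "coating": {"fitness": 10, "condition": 15},
    "cathodic protection": {
        "effectiveness": 15,
        "interference potential": {
            "AC related": 2,
            "shielding": 1,
            "DC related": {"telluric currents": 1, "DC rail": 3, "foreign lines": 3},
        },
    },
}

def pts(node):
    """Recursively sum leaf point values."""
    return sum(pts(v) for v in node.values()) if isinstance(node, dict) else node

print(pts(SUBSURFACE))  # 70
```

Each branch rolls up as stated: environment 20 pts, coating 25 pts, cathodic protection 25 pts, for 70 pts in all.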
Figure 4.4 Examples of pipeline pigs
This part of the risk assessment will apply to metallic pipe material that is buried or submerged. If the pipeline being evaluated is not vulnerable to subsurface corrosion, as would be the case for a plastic pipeline or a totally aboveground pipeline, the evaluator should use the previous two sections and any other pertinent factors to assess the corrosion risk.

Of the three categories of corrosion, this is usually considered to be the most complex. Several corrosion mechanisms can be at work in the case of buried metals. This situation is further complicated by the fact that corrosion activity is normally deduced only from indirect evidence; direct observation is a rather limited option.

The most common danger is from some form of galvanic corrosion. Galvanic corrosion occurs when a metal or metals in an electrolyte (an electrically conductive fluid) form anodic and cathodic regions. A cathode is a metal region that has a greater affinity for electrons than the corresponding anodic region. This affinity for electrons is called electronegativity. Different metals have different electronegativities, and even different areas on a single piece of metal will have slightly different electronegativities. The greater the difference, the stronger the tendency for electrons to flow. If an electrical connection between anode and cathode exists, allowing this electron flow, metal will dissolve at the anode as metal ions are formed and migrate from the parent metal. Chemical reactions occur at the anode and the cathode as ions are formed and corrosion occurs. Such a system, with anode, cathode, electrolyte, and electrical connection between anode and cathode, is called a galvanic cell and is illustrated in Figure 4.5. Because soil is often an effective electrolyte, a galvanic corrosion cell can be established between areas along a single pipeline or between a pipeline and another piece of buried metal.
When a new piece of pipe is attached to an old piece, a galvanic cell can be established between the two metals. Dissimilar soils with differences in concentrations of ions, oxygen, or moisture can also set up anodic and cathodic regions on the pipe surface. Corrosion cells of this type are called concentration cells. When these cells are established, the anodic region will experience active corrosion. The severity of this corrosion is dictated by variables such as the conductivity of the soil (electrolyte) and the relative electronegativities of the anode and cathode.

Common industry practice is to employ a two-part defense against galvanic corrosion of a pipeline. The first line of defense is a coating over the pipeline. This is designed to isolate the metal from the electrolyte. If this coating is perfect, the galvanic cell is effectively stopped; the electric circuit is blocked because the electrolyte is no longer in contact with the metal. It is safe to say, however, that no coating is perfect. If only at the microscopic level, defects will exist in any coating system.

The second line of defense is called cathodic protection (CP). Through connections with other metals, the pipeline is turned into a cathode, which, according to the galvanic cell model, is not subject to loss of metal (as a matter of fact, the cathode actually gains metal). The theory behind cathodic protection is to ensure that the current flow is directed in such a way that current flows to the pipeline and away from an installed bed of metal that is intended to corrode. The installed metal that is to corrode is appropriately called a sacrificial anode. The sacrificial anode has a lower affinity for electrons than the steel it is protecting. Depending on electrolyte (soil) type and some economic considerations, a voltage may be imposed on the system to further drive the current flow. When this is necessary, the system is referred to as an impressed current system (Figure 4.6).
In an impressed current system, rectifiers are used to drive the low-voltage current flow between the anode bed and the pipeline. The amount of current required is dictated by variables such as coating condition, soil type, and anode bed design, all of which add resistance to this electric circuit.

Figure 4.5 The galvanic corrosion cell

Figure 4.6 Pipeline cathodic protection with impressed current rectifier

In the scoring approach described here, the subsurface corrosion threat is examined in three main categories: subsurface environment, cathodic protection, and coating. The weightings are generally equivalent, with subsurface environment weighted slightly less. This slight underweighting reflects a belief that most environmental conditions can be overcome with the right coating and CP system. If the evaluator does not believe this to be true, then she may wish to re-weight the main categories.
C1. Subsurface environment (weighting: 20% of corrosion threat)

In order to better visualize the position of this variable in the overall hierarchy of the corrosion threat assessment, the branch of the risk assessment leading to this variable can be seen as follows:
Corrosion Index
  Atmospheric Corrosion
  Internal Corrosion
  Subsurface Corrosion
    Subsurface environment (20 pts)
      Soil corrosivity (15 pts)
      Mechanical (5 pts)
    Coating (25 pts)
    Cathodic protection (25 pts)
A major aspect of assessing subsurface corrosion potential is an evaluation of the environment surrounding the pipe. A recommendation is to examine the soil corrosivity as the most important aspect of the environment. This can then be supplemented with an evaluation of the potential for specialized mechanical corrosion effects such as stress corrosion cracking.

Soil corrosivity (weighting: 15%)

Because a coating system is always considered to be an imperfect barrier, the soil is always assumed to be in contact with the pipe wall at some points. Soil corrosivity is often a qualitative measure of how well the soil can act as an electrolyte to promote galvanic corrosion on the pipe. Additionally, aspects of the soil that may otherwise directly or indirectly promote corrosion mechanisms should also be considered. These include bacterial activity and the presence of other corrosion-enhancing substances. The possibly damaging interaction between the soil and the pipe coating is not a part of this variable. Soil effects on the coating (mechanical damage, moisture damage, etc.) should be considered when judging the coating effectiveness as a risk variable.

The importance of soil as a factor in galvanic cell activity is not widely agreed on. Historically, the soil's resistance to electrical flow has been the measure used to judge the contribution of soil effects to galvanic corrosion. As with any component of the galvanic cell, the electrical resistances play a role in the operation of the circuit. Soil resistivity or conductivity
therefore seems to be one of the best and most commonly used general measures of soil corrosivity. Soil resistivity is a function of interdependent variables such as moisture content, porosity, temperature, ion concentrations, and soil type. Some of these are seasonal variables, corresponding to rainfall or atmospheric temperatures. Some researchers report that abrupt changes in soil resistivity are even more important to assessing corrosivity than the resistivity value itself. In other words, strong correlations are reported between corrosion rates and the amount of change in soil resistivity along a pipeline [41]. A schedule can be developed to assess the average or worst case soil resistivity (either could be appropriate; the choice, however, must be consistent across all sections evaluated). This is a broad-brush measure of the electrolytic characteristic of the soil.

MIC

Microorganism activity can promote corrosion. This is often termed microbially induced corrosion, or MIC. A family of anaerobic bacteria (no oxygen needed for the bacteria to reproduce), called sulfate-reducing bacteria, can cause the depletion of the hydrogen layer adjacent to the outside pipe wall. This hydrogen layer normally provides a degree of protection from corrosion. As it is removed, corrosion reactions can actually be accelerated. Soils with sulfates or soluble salts are favorable environments for anaerobic sulfate-reducing bacteria [79].

Although it does not actually attack the metal, the microorganism activity tends to produce conditions that accelerate corrosion. The sulfate-reducing bacteria are commonly found in areas where stagnant water or water-logged soil is in contact with the steel. Previous discovery of MIC, or at least microorganism presence, is often the best indicator of such damage potential. Some operators train employees to look for signs during any and all pipe excavation and exposure.
On excavation, evidence of bacterial activity is sometimes seen as a layer of black iron sulfide on the pipe wall. An oxidation-reduction probe can be used to test for conditions favorable for bacterial activity. (It does not determine if corrosion is taking place, however.) A normal cure for microorganism-promoted corrosion is increased levels of cathodic protection current.
pH

The ion concentration in the soil, as measured by pH, can have a dramatic effect on corrosion potential. A pH lower than 3 or higher than 9 (either side of the neutral 4 to 8 range) can promote corrosion. For metals, more acidic (lower pH) soils promote corrosion more than the more alkaline (higher pH) soils. The soil pH may affect other pipe materials in other ways.

Data sources

Some publicly available databases have relative soil corrosivity evaluations for steel and concrete. These correspond to specific geographical regions of the world. They also show pH, moisture content, sulfates, chlorides, water table depths, and many other soil characteristics. As of this writing, these data sets tend to be very coarse, averaging many factors so that the resolution is not fine enough to distinguish local hot spots of differing characteristics. In fact, the generalized information might exactly contradict more local information. An example would be where a large-area evaluation shows a very low soil moisture content but, in reality, there are several small areas within the larger area (perhaps near creeks and ravines) that have relatively high moisture contents most of the year. This might be significant information for pipelines traversing such areas. Therefore, very coarse resolution data are sometimes used only as a default or as a factor to consider in addition to other, more location-specific information.
Scoring soil corrosivity

A simple soil corrosivity assessment scale might use only soil resistivity as an indicator. An example is shown in Table 4.3. A more detailed evaluation might involve several additional variables as discussed above. Each variable is assessed on its own scale, either using actual measurements or in relative terms (such as high, medium, or low). They would then be combined using some relative weighting scheme in order to arrive at a final soil corrosivity score. An example is shown in Table 4.4. The soil corrosivity score could be the result of summing the subvariable scores:

    Soil corrosivity score = [soil resistivity] + [pH] + [soil moisture] + [MIC] + [STATSGO steel corrosion]

Weightings are established based on the corrosion expert's judgments or empirical data showing which factors are more critical in determining soil corrosivity. Different pipe materials have differing susceptibilities to damage by various soil conditions. Sulfates and acids in the soil can deteriorate cement-containing materials such as concrete or asbestos-cement pipe. Polyethylene pipe may be vulnerable to damage by hydrocarbons. Any and all special knowledge of pipe material susceptibility to soil characteristics should be incorporated into this section of the corrosion index. Chapter 11 shows an approach where soil corrosivity is assessed against various different pipe materials.
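The weighted summation described above can be sketched in a few lines of code. This is only an illustration: the subvariable names and the 0-100 scoring convention are assumptions, while the relative weightings follow the Table 4.4 example (30/25/25/15/5).

```python
# Illustrative sketch of the soil corrosivity summation described above.
# Subvariable names and the 0-100 scale (higher = less corrosive) are
# assumptions; the weightings follow the Table 4.4 example.

WEIGHTS = {
    "soil_resistivity": 0.30,
    "pH": 0.25,
    "soil_moisture": 0.25,
    "MIC": 0.15,
    "statsgo_steel": 0.05,
}

def soil_corrosivity_score(subscores):
    """Combine subvariable scores (each 0-100) into one weighted score."""
    return sum(WEIGHTS[name] * subscores.get(name, 0.0) for name in WEIGHTS)

# Example: moderately resistive soil, neutral pH, damp, no MIC evidence
example = {"soil_resistivity": 50, "pH": 100, "soil_moisture": 75,
           "MIC": 100, "statsgo_steel": 50}
print(soil_corrosivity_score(example))  # -> 76.25
```

Missing data defaults to 0 points here, mirroring the conservative "do not know" treatment in Table 4.3.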
Table 4.3 Example soil corrosivity assessment scale using only resistivity (from ASME/ANSI B31.8)

    Low resistivity (<1,000 ohm-cm) or active corrosion indicated: High corrosivity, 0 points
    Medium resistivity (1,000-15,000 ohm-cm) or moderately active corrosion indicated: Medium corrosivity, 50 points
    High resistivity (>15,000 ohm-cm) and no active corrosion indicated: Low corrosivity, 100 points
    Do not know: High corrosivity assumed, 0 points

Table 4.4 Example of a more detailed soil corrosivity assessment

    Soil factor                          Relative weighting (%)
    Soil resistivity                     30
    pH                                   25
    Soil moisture                        25
    MIC(a)                               15
    STATSGO(b) steel corrosivity rating   5
    Soil corrosivity score (total)      100

    (a) MIC = evaluation of the potential for microbially induced corrosion.
    (b) STATSGO = State Soil Geographic (STATSGO) soils data compiled by the Natural Resources Conservation Service of the U.S. Department of Agriculture.

Mechanical corrosion effects (weighting: 5% of corrosion threat)

This risk variable involves the potential for damaging phenomena that consist of both a corrosion component and a mechanical component. These include hydrogen stress corrosion cracking (HSCC), sulfide stress corrosion cracking (SSCC), hydrogen-induced cracking (HIC) or hydrogen embrittlement, corrosion fatigue, and erosion.

In the United States, stress corrosion cracking (SCC) reportedly caused more than 250 pipeline failures in the 1965-1985 period [52]. Some failure investigators think that these numbers represent an underreporting of the actual number of SCC-related failures, since such failures are often very difficult to recognize.

Stress corrosion cracking can occur under certain combinations of physical and corrosive stresses. Evidence shows that three conditions must be present: tensile stress, a susceptible pipe material, and a supporting environment at the pipe surface. SCC is sometimes referred to as an "environmentally assisted cracking" phenomenon. A breakdown in both the coating barrier and cathodic protection must occur before SCC initiates [63]. Two different forms have been identified: high-pH SCC (classical) and near-neutral, low-pH SCC. These are similar in many ways and differ in the role of temperature, electrolyte characteristics, and cracking morphology [63]. Both types are characterized by formation of corrosion-accelerated cracking in areas of the pipe wall subjected to high tensile stress levels. The presence of corrosive substances aggravates the situation.

Certain types of steel are more susceptible than others. In general, a steel with a higher carbon content is more prone to SCC. Characteristics of the steel that may have been brought about by welding or other post-manufacturing processes may also make the steel more susceptible. Materials that have little fracture toughness (see the design index discussion in Chapter 5) do not offer much resistance to brittle failure. Rapid crack propagation brought on by corrosion and stress is more likely in these materials. Note that SCC is also seen in plastic pipe materials.

Stress corrosion cracking is difficult to detect, and SCC failures are not predictable. The effects can be highly localized. Even a fairly noncorrosive environment can support an SCC process. A previous history of this type of process is, of course, strong evidence of the potential. In the absence of historical data, the susceptibility of a pipeline to this sometimes violent failure mechanism should be judged by identifying conditions that may promote the SCC process. Predictive models have been developed and have been effective in prioritizing excavations, finding higher occurrences than would be discovered under a plan of investigations during routine maintenance [63]. ASME/ANSI B31.8 notes the following as high risk factors, where further investigation may be warranted if all of the following are present in a segment:

    Operating stress > 60% of specified minimum yield strength (SMYS)
    Operating temperature > 100°F
    Distance from compressor station < 20 miles
    Age > 10 years
    Coating system other than fusion bonded epoxy (FBE)

An automatic assessment incorporating these criteria can be set up in a computer environment.
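An automated screen of the five ASME/ANSI B31.8 conditions (operating stress > 60% SMYS, temperature > 100°F, less than 20 miles from a compressor station, age over 10 years, and a coating other than FBE) might look like the following sketch. The field names and units are assumptions made for illustration; only the thresholds come from the text.

```python
# Hypothetical automated screen for the ASME/ANSI B31.8 SCC screening
# conditions. All five conditions must be present to flag a segment for
# further investigation; argument names and units are assumptions.

def scc_screen(stress_pct_smys, temp_f, miles_from_compressor,
               age_years, coating):
    """Return True if the segment meets all five B31.8 SCC risk criteria."""
    return (stress_pct_smys > 60
            and temp_f > 100
            and miles_from_compressor < 20
            and age_years > 10
            and coating != "FBE")

print(scc_screen(72, 110, 5, 25, "coal tar"))  # meets all criteria -> True
print(scc_screen(72, 110, 5, 25, "FBE"))       # FBE coating -> False
```

Because the criteria are conjunctive, a single non-qualifying attribute (here, the FBE coating) removes the segment from the high-risk set.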
Stress

Tensile stress at the pipe surface is thought to be a necessary condition for SCC. The stress might be residual, however, and hence virtually undetectable. The higher the stress, the more potential for crack formation and growth. Fluctuations in stress level are also thought to play an aggravating role, since such fluctuations produce fatigue loadings that can increase crack growth. It is reasonable to assume that all pipelines will be under at least some amount of stress. Because internal pressure is often the largest stress contributor, pipelines operating at higher pressures relative to their wall thickness are thought to have more susceptibility to SCC. Thermally induced stresses and bending stresses can also contribute to the overall stress level, but, for simplicity's sake, the evaluator may choose only internal pressure as a factor in assessing potential for SCC.

Environment

High pH levels close to the steel can be a contributing factor in classic SCC. This may be caused by a high pH in the soil, in the product, or even in the coating. Chlorides, H2S, CO2, and high temperatures are additional contributing factors. The presence of certain bacteria will increase the risk. Persistent moisture and coating disbondment are also threatening conditions. In general, any environmental characteristic that promotes corrosion should be considered a risk contributor here. This must include external and internal contributors.

Steel type

A high carbon content (≥0.28%) increases the likelihood of stress corrosion cracking. Low-ductility materials with low fracture toughness are more susceptible. Sometimes the rate of loading determines the fracture toughness: a material may be able to withstand a slow application of stress, but not a rapid application (see the design index discussion in Chapter 5). This further complicates the use of material type as a contributing factor.
A schedule can be developed that employs these contributing factors in an assessment of the potential for SCC. Low stress in a benign environment is the best condition, whereas high stress in a corrosive environment is the most dangerous condition. Stress level can be expressed as a percentage of maximum allowable operating pressure (MAOP) or specified minimum yield strength (SMYS) of the pipe: the highest normal operating pressure divided by MAOP or SMYS. A history of stress corrosion cracking should be seen as the strongest evidence of this risk and should accordingly score the section at 0 points.
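One way to implement such a schedule is a simple stress-versus-environment matrix, with any SCC history overriding everything else at 0 points, as the text directs. The band edges and point values below are invented for illustration only.

```python
# Hedged sketch of a stress-vs-environment schedule for SCC potential.
# Band edges and point values are assumptions; the only rules taken from
# the text are that the worst combination (high stress, corrosive
# environment) and any SCC history score 0 points.

def scc_points(pct_smys, environment, scc_history=False, max_points=10):
    """Score SCC potential: higher points = safer condition."""
    if scc_history:
        return 0  # strongest evidence of the threat
    stress_band = 0 if pct_smys < 30 else (1 if pct_smys < 60 else 2)
    env_band = {"benign": 0, "moderate": 1, "corrosive": 2}[environment]
    fraction = 1.0 - (stress_band + env_band) / 4.0
    return round(max_points * fraction, 1)

print(scc_points(20, "benign"))     # low stress, benign -> 10.0
print(scc_points(72, "corrosive"))  # high stress, corrosive -> 0.0
```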
C2. Cathodic protection (weighting 25% of corrosion threat) The branch of the risk assessment leading to the variable cathodic protection is as follows:
Corrosion Index
    Atmospheric
    Internal
    Subsurface
        Subsurface environment
        Coating
        Cathodic protection (25 pts)
            Effectiveness (15 pts)
            Interference potential (10 pts)
Cathodic protection is the application of electric currents to a metal in order to offset the natural electromotive force of corrosion. Chemical reactions occur at the anode and the cathode as corrosion occurs and ions are formed. Some form of CP system is normally used to protect a buried or submerged steel pipeline as one part of the common two-part defense against corrosion: coating and CP. The exceptions to CP use might be instances where temporary lines are installed in fairly noncorrosive soil and where conditions do not warrant cathodic protection. Nonmetal lines may not require corrosion protection. In this evaluation, the effectiveness of the CP system is assessed in general and in terms of possible interferences. The effectiveness is judged by the existence of a system that meets the following general criteria:

    Enough electromotive force is provided to effectively negate any corrosion potential.
    Enough evidence is gathered, at appropriate times, to ensure that the system is working properly.

In assessing interference potential, points are awarded based on the potential for any of three sources of interference and measures taken to mitigate such interference.
CP system effectiveness

To ensure that the CP system can be effective, the evaluator should seek records of the initial cathodic protection design. Are the design parameters appropriate? What was the projected life span of the system? Is the system functioning according to plan? The evaluator should then inspect documentation of the most recent checks on the system. Anode beds can become depleted, conditions can change, equipment can malfunction. Will the operator become aware of serious problems in a timely manner? Although cathodic protection problems can be caught during normal test lead readings and certainly during close interval surveys, problems such as malfunctioning rectifiers (or worse, rectifiers whose electrical connections have been reversed!) should ideally be found even more quickly.

Effectiveness criteria

The presence of adequate protective currents is normally determined by measurement of the voltage (potential) difference between the pipe metal and the electrolyte. By some common practices and regulatory agency requirements, a pipe-to-soil potential of at least -0.85 volts (-850 millivolts), as measured by a copper-copper sulfate reference electrode, is the general criterion indicating adequate protection from corrosion. Another common criterion is a minimum negative polarization voltage shift of 100 millivolts. This is the amount of shift in potential between the polarized pipeline (after current has been applied for some time) and the buried pipeline without a protective current applied: the native state. Many corrosion control experts believe that the 100-mV shift criterion is the most conclusive measure of CP effectiveness. Unfortunately, the 100-mV shift is also often the most costly measurement to obtain, requiring a polarization survey, whereby the pipeline is depolarized over hours or days and comparative pipe-to-soil measurements are made. The 0.85-volt criterion is normally adequate because it encompasses the 100-mV shift in almost every case, since native potentials are normally less than 700 mV.

A criterion for excessive CP currents is also often appropriate. Excessive currents might cause hydrogen evolution that can cause coating disbondment.

The actual practice of ensuring adequate levels of cathodic protection is often more complex than the simple application of criteria. Readings must be carefully interpreted in light of the measurement system used. Too much current may damage the coating. Higher levels of protection may be required when there is evidence of bacteria-promoted corrosion. A host of other factors must often similarly be considered by the corrosion engineer in determining an adequate level of protection.

In the interpretation of all pipe-to-soil measurements, attention must be paid to the resistances that are part of the pipe-to-soil reading. The reading that is sought, but difficult to obtain, is the electric potential difference between the outside surface of the pipe and a point in the adjacent soil a short distance away. In actual practice, a reading is taken between the pipe surface (via the test lead) and a point at the ground surface, usually several feet above the pipe. The soil component of the circuit is a nonmetallic current path. Consequently, this model is not directly analogous to a simple electrical circuit. The measured circuit is completed at the ground surface by contacting the soil with a reference electrode (a half cell, usually a copper electrode in a copper sulfate solution). Therefore, the normal pipe-to-soil reading measures not only the piece of information sought, but also all resistances in the electric circuit, including wires, pipe steel, instruments, connectors, and, the largest component, the several feet of soil between the buried pipe wall and the ground surface. The knowledgeable corrosion engineer will take readings in such a way as to separate the extraneous information from the needed data. The industry refers to this technique as compensating for the IR drop.

There is some controversy in the industry as to exactly how the readings should be interpreted to allow for the IR drop. An instant-off pipe-to-soil measurement, where the reading is taken immediately as the current source is interrupted, is often taken as a reading that is relatively IR free. Therefore, some operators use a more conservative adequacy criterion of "at least 850 mV interrupted (or instant-off) pipe-to-soil potential" instead of "at least 850 mV with the current applied." In some cases the controversy is more theoretical, because government regulations mandate certain techniques. The evaluator should be satisfied that sufficient expertise exists in the interpretation of readings to give valid answers.

Equipment

One aspect of the adequacy of protection will be the maintenance of the associated cathodic protection equipment. For impressed current CP systems, equipment such as
rectifiers and bonds must be maintained. Inspections of these pieces of equipment are usually performed at shorter intervals than the overall check of the potential levels. Because a rectifier provides the driving force for the cathodic protection system, the operator must not allow a rectifier to be out of service for any length of time. Monthly or at least bimonthly rectifier inspections are often the norm.

Use of a risk assessment adjustment factor that could be called something like a rectifier interruption factor is one way to account for the effects of inconsistent application of CP. An interruption or outage can be defined as some deviation from normal rectifier output (probably in amperes) once sufficient data have been accumulated to establish a baseline or normal output for a rectifier. The tracking of kilowatt-hours may also be useful in determining outage periods. The hours of outage per year can be tracked and accumulated to assign risk assessment penalties for both high outage hours in any year and the accumulation of outage hours over several years. Each indicates periods during which the pipeline might not be adequately protected from corrosion.

A potential difficulty in application of such rectifier interruption factors is that each rectifier must be linked to the specific pipeline lengths that are influenced by that rectifier. The adjustment factor is derived from each rectifier, but the penalty applies to the actual portions of the pipeline that suffered the inconsistent application of CP. In a complex system of rectifiers and pipelines, it is often difficult to ascertain which rectifiers are influencing which portions of pipeline. Tracking the equipment performance might also provide the ability to better quantify the benefits of remote monitoring capabilities.
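A rectifier interruption factor of the kind suggested above might be sketched as follows. The outage-hour thresholds and penalty sizes are assumptions chosen only to show the shape of the calculation (penalties for bad years plus a penalty for cumulative outages).

```python
# Illustrative rectifier interruption factor: penalize both high outage
# hours in any single year and cumulative outage hours over several
# years, as the text suggests. Thresholds and penalties are assumptions.

def rectifier_interruption_factor(outage_hours_by_year,
                                  annual_limit=168,       # ~1 week/year
                                  cumulative_limit=500):
    """Return a multiplier (0.0-1.0) for CP effectiveness points;
    1.0 means no penalty."""
    factor = 1.0
    for hours in outage_hours_by_year:
        if hours > annual_limit:
            factor -= 0.1            # penalty for each bad year
    if sum(outage_hours_by_year) > cumulative_limit:
        factor -= 0.2                # penalty for chronic outages
    return max(round(factor, 2), 0.0)

print(rectifier_interruption_factor([10, 20, 15]))    # -> 1.0
print(rectifier_interruption_factor([300, 250, 40]))  # -> 0.6
```

As the text notes, applying this factor still requires linking each rectifier to the pipeline lengths it actually influences.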
If a system is installed to monitor and alarm (in a control center as part of the SCADA system, perhaps) on rectifier malfunctions, and perhaps even on pipe-to-soil readings at test leads, then outage times and inadequate protection times can be minimized. Depending on the corrosion rates and economic considerations such as labor costs and availability, adding such monitoring capabilities might be justified.

Test leads

Often, the primary method for monitoring the effectiveness of a cathodic protection system is through the use of test leads: fixed survey points for taking pipe-to-soil voltage readings. A test lead is normally a wire attached (usually welded or soldered) to the buried pipeline and extended above the ground. A test lead allows a worker to attach a voltmeter with a reference electrode and measure the pipe-to-soil potential. Placement of test leads at locations where interference is possible is especially important. The most common points are metal pipe casings and foreign pipeline crossings. At these sites, careful attention should be paid to the direction of current flow to ensure that the pipeline is not anodic to the other metal. Where pipelines cross, test leads on both lines can show if the cathodic protection systems are competing.
Surveys

Several survey types are commonly used to verify that effectiveness criteria are being met. These include variations in how pipe-to-soil readings are taken and where they are taken. Examples of the former include on readings, instant-off readings, voltage gradient, and polarization measurements as previously described. Variations in where readings are taken include readings at test lead stations only versus close interval surveys, where readings are taken at short intervals such as every 3 to 15 feet.

An annual test lead survey is the cornerstone of many operators' CP verification programs. A pipe-to-soil measurement taken at a test lead indicates the degree of cathodic protection on the pipe because it indicates the tendency of current flow, both in terms of magnitude and direction (to the pipe or from the pipe) (see Figure 4.6). Uncertainty increases with increasing distance from the test lead because the test lead reading represents the pipe-to-soil potential in only a localized area. Because galvanic corrosion can be a localized phenomenon, the test leads provide only limited information regarding CP levels distant from the test leads. A test lead reading is therefore an indicator of cathodic protection only in the immediate area around the lead: a lateral distance along the pipe that is roughly equal to the depth of cover, according to one rule of thumb. Closer test lead spacings, therefore, yield more information and less chance of large areas of active corrosion going undetected. Because corrosion is a time-dependent process, the number of times the test leads are monitored is also important. Using these concepts, a coarse point schedule can be developed based on general criteria such as:

    Best: All buried metal in the vicinity of the pipeline is monitored directly by test leads, and test lead spacing is no greater than 1 mile throughout this section.
    Fair: Test leads are spaced at distances of 1 to 2 miles apart (maximum), and all foreign pipeline crossings are monitored via test leads; not all casings are monitored; there may be other buried metal that is not monitored.
    Poor: Test lead spacing is sometimes more than 2 miles; not all potential interference sources are monitored.

A more robust assessment can use actual distances from the nearest test lead to characterize each point along the pipeline.
This would penalize (show higher risks), on a graduated scale, those portions of the pipeline that are farther away from a test lead or other opportunity for a pipe-to-soil potential reading. The frequency of readings at test leads, taken with the IR drop understood and compensated, can be rated as follows:

    Best: readings at intervals of less than 6 months
    Fair: readings every 6 months to a year
    Poor: readings less often than annually
Notes: As previously explained, lack of proper IR drop compensation may negate the effectiveness of readings. For our purposes here, a test lead can be any place on the pipeline where an accurate pipe-to-soil potential reading can be taken. This may include most aboveground facilities, depending on the type of coating present. Readings taken at longer intervals, such as greater than one year, do have some value, but a year's worth of corrosion might have proceeded undetected between readings.

Close interval surveys

A powerful tool in the corrosion engineer's tool bag is a variation on test lead monitoring called close interval surveying (CIS) or close-spaced surveying. In this technique, pipe-to-soil readings are taken (with IR compensation ideally employed) every 2 to 15 feet along the entire length of the pipeline. In this way, almost all localized inadequate CP can be detected. It also normally yields some coating effectiveness information. Any aboveground pipeline attachment, including valves, test leads, and casing vents, can be used to connect to one side of a voltmeter. The other side of the voltmeter is connected by a wire to the reference half-cell that is used to make electrical connection at the ground surface as the surveyor walks along the pipeline. The voltmeter and data-logging device are therefore in the circuit between the two electrodes. Results are usually interpreted from a chart or database of the measurements that shows peaks and valleys as the current flow changes magnitude or direction (Figure 4.7).

Several types of CIS are in common use. These include DCVG (direct current voltage gradient) and various types of interrupted surveys with various distances between readings. AC readings can also be taken in conjunction with the DC readings. Ideally, such a profile of the pipe-to-soil potential readings will indicate areas of interference with other pipelines, casings, etc.; areas of inadequate cathodic protection; and even areas of bad coating. When needed, excavations are performed to verify the survey readings. A CIS is repeated periodically to identify changes in CP along the pipeline route.

The CIS technique is quite robust in monitoring the condition of buried steel pipelines and, hence, can play a significant role in risk management. It is also a proactive technique that can be used to detect potential problems before appreciable damage is done to the pipeline. The most credit toward risk reduction can be given for a thorough CIS recently performed over the entire pipeline section by trained personnel and with careful interpretations of all readings made by a knowledgeable corrosion engineer.
An accompanying assumption (to be verified by the evaluator) is that corrective actions based on survey results have been taken or are planned (in a timely fashion). The survey’s role in risk reduction can be quantified at a coarse level by simply assessing the time since the last survey.
If survey results are thought to be out of date and provide no useful risk information after, say, 5 years, then the point assignment equation could be:

    (maximum points) x [(5 - survey age in years) / 5]
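The age discount above can be sketched as a linear decay to zero credit over the assumed 5-year useful life of the survey information:

```python
# Sketch of the survey-age discount described above: survey information
# is assumed worthless after a 5-year useful life, declining linearly.

def survey_points(max_points, survey_age_years, useful_life=5.0):
    """Linearly discount survey credit with age; 0 after useful_life."""
    remaining = max(useful_life - survey_age_years, 0.0) / useful_life
    return max_points * remaining

print(survey_points(15, 0))    # fresh survey -> 15.0
print(survey_points(15, 2.5))  # half-aged -> 7.5
print(survey_points(15, 6))    # stale -> 0.0
```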
Using CIS in a more detailed risk assessment model will involve an assessment of the type of survey itself, as discussed in the next paragraphs.
Scoring of CP effectiveness

The assessment of CP effectiveness should include evaluations of how much information is available from various survey types and frequencies. In this regard, some surveys could be judged more valuable in terms of the uncertainty-reducing information they produce. In the following sample scoring scheme, the evaluator has weighted various survey techniques based on their value in ensuring adequate CP effectiveness. The survey scores are then adjusted by factors that consider the age of the survey and the prospect of a CP system failure.

In this scheme, the close spaced polarization survey warrants the highest point score: 55% of the maximum points for CP effectiveness. It also encompasses the other survey types because, in effect, it requires that "on" and "interrupted" readings also be captured. Therefore, this survey, done recently and finding no areas of inadequate CP, leads to the full point value for the risk variable of CP effectiveness: 100% of the maximum points. Other surveys are of lesser value, with a simpler CIS "on" survey being worth 30% and a CIS interrupted survey being worth 20% (but more often 50%, since the interrupted survey will normally include an "on" survey, so the points can be combined). The "test lead only" surveys warrant fewer points, given their reduced ability to confirm adequate CP at locations not close to a test lead. Anytime a pipe-to-soil reading does not meet the minimum criteria, CP effectiveness should be deemed inadequate and
[Figure 4.7 Close interval pipe-to-soil potential survey. The profile of pipe-to-soil readings plotted against distance measured along the pipeline shows a normal reading (adequately protected) at the test lead, a sudden dip (a possible interference problem, undetected by the test lead reading), and low readings (more current required to protect the pipe).]
scored accordingly (0 points, by the example scales). Table 4.5 shows an example of a scoring system for CP effectiveness. An age-of-survey adjustment would be used in arriving at final point values.

According to the above scoring rules, the evaluator has three options for scoring the annual test lead survey. As one option, he can consider each test lead reading to be a pipe-to-soil reading representing about 5 ft along the pipeline on either side of the reading location. This means that very short sections of pipe would receive 20, 30, or 55% of the CP effectiveness points (depending on what type of survey is used), where test lead readings show adequate CP levels. All pipe sections in between the reading locations would be penalized for having no pipe-to-soil voltage information at all: 0 points. For example, in an annual on-reading survey (current applied), where all readings show adequate CP, the risk assessment will show point values of (Maximum CP Effectiveness Points) x (Annual on survey weighting, option 1), or 15 points x 30% = 4.5 points for the 10 ft of pipe around the test lead location, and 0 points elsewhere. This indicates that the evaluator has no evidence that CP levels are adequate between test lead locations.

In another option, the evaluator feels that the test lead reading does yield useful information on CP levels between test lead reading locations, even at distances thousands of feet away. The weighting, however, must be far less than for a CIS, where the reading locations are very closely spaced. So, only 1% of the maximum possible points are awarded, but the points apply to all locations between test lead locations. In this case, the annual on-reading survey where all readings show adequate CP will show point values of (Maximum CP Effectiveness Points) x (Annual on survey weighting, option 2), or 15 points x 1% = 0.15 points everywhere.
In yet another option, the evaluator feels that the test lead reading yields information about surrounding lengths of pipe in proportion to their distance from the test lead location. In this case, the annual on-reading survey where all readings show adequate CP will show point values of (Maximum CP Effectiveness Points) x (Annual on survey, option 3) x (test lead adjustment), yielding point values of 15 points x 10% x 100% = 1.5 points for portions of the pipeline within 1 mile of a test lead and 15 x 10% x 50% = 0.75 points for portions of the pipeline 1.5 miles from a test lead.

This may appear to be a rather complicated scoring scheme, but it does reflect the reality of the complex corrosion control choices commonly encountered in pipeline operations. It is not uncommon for the corrosion specialist to have results of various types of surveys of varying ages and be faced with the challenge of assimilating all of these data into a format that can support decision making. The previous scenarios omit additional adjustments for age of survey and equipment malfunctions. Such adjustments should play a role in scoring (even though they are not illustrated here) because they are important considerations in evaluating actual CP effectiveness. The scoring scheme is patterned after the decision process of the corrosion control engineer, but of course considers only some of the factors that may be important in any specific situation.
Table 4.5 Sample of more detailed scoring for CP effectiveness

Information source (survey type) | Weight (multiplied by maximum CP effectiveness points) | Comments and directions for scoring
CIS polarization | 55% | A polarization survey usually earns 100% since the other survey types are performed as part of the polarization survey.
CIS on (current applied) | 30% | CIS readings with current applied. If pipe-to-soil criteria are met and the survey is recent, then points are 15 x 30% = 4.5 points.
CIS off (current is interrupted) | 20% | Establishes a static line the first time; the static line can be reused with subsequent CIS-interrupted surveys. A CIS-interrupted survey also gains credit for a CIS-on survey, so 30% + 20% = 50%.
Annual on or interrupted (at test lead locations only) | 10% | Use the survey type weighting (20%, 30%, or combination = 50%) and apply to 5 ft either side of a test lead location; apply to half the distance to the next test lead; multiply also by the test lead adjustment factor.
Annual polarization (at test lead locations only) | 15% | Same as above. The test is done at test lead stations by interrupting the rectifier and using a static polarization survey measure for comparison.
Test lead spacing | Adjustment | 100% when all parts of the pipeline segment are within 1 mile of a test lead; if any part of the segment is more than 1 mile from the test lead, degrade toward 0 points as the distance reaches 2 miles.
Rectifier out of service | Adjustment | Penalties for equipment outages in any year plus cumulative outages over several years. Penalties are removed after ILI or visual confirmation that no damage occurred.

Interference potential (weighting: 10% of corrosion threat)

[Corrosion index (100 pts): Atmospheric, Internal, Subsurface; the Subsurface score comprises Subsurface environment, Coating, and Cathodic protection, with Cathodic protection split between Effectiveness and Interference potential.]
Scoring the corrosion potential 4/83

Interference potential (10 pts), where:
Interference potential = AC related (20%) + DC related (70%) + Shielding (10%)
DC related = Telluric currents (1%) + DC rail (50%) + Foreign lines (49%)
Corrosion is an electrochemical process, and corrosion prevention methods are designed to interrupt that process, often with electrical methods like cathodic protection. However, the prevention methods themselves are susceptible to defeat by other electrical effects. The common term for these effects is interference. Three types of interference are evaluated: AC related, DC related, and shielding effects.

AC-related interference (weighting: 20% of interference potential)

Pipelines near AC power transmission facilities are exposed to a unique threat. Through either a ground fault or an induction process, the pipeline may become electrically charged. Not only is this charge potentially dangerous to people coming into contact with the pipeline, it is also potentially dangerous to the pipeline itself. The degree of threat that AC presents to pipeline integrity has been debated. Reference [38] presents case histories and an analysis of the phenomenon. This study concludes that AC can cause corrosion even on pipelines cathodically protected to industry standards: "The corrosion rate appears to be directly related to the AC density such that corrosion can be expected at AC current densities of 100 A/m2 and may occur at AC current densities greater than 20 A/m2" [38]. Given specific measurable criteria for the threat, the evaluator might be able to develop a threat assessment system around directly measured AC current levels. Otherwise, indirect evidence can lead to an assessment system.

A basic understanding of the AC issue will serve the evaluator in assessing the threat potential. Electric current seeks the path of least resistance. A buried steel conduit like a coated pipeline may be an ideal path for current flow for some distance. Almost always, though, the current will leave the pipeline for another, more attractive path, especially where the power line and the pipeline diverge after some distance of paralleling.
The locations where the current enters or leaves the pipe may suffer severe metal loss as the electrical charge arcs to or from the line. At a minimum, the pipeline coating may be damaged by the AC interference effects.

The ground fault scenario of charging the pipeline includes the phenomena of conduction, resistive coupling, and electrolytic coupling. It can occur as AC power travels through the ground from a fallen transmission line, from an accidental electrical connection onto a tower leg, through a lightning strike on the power system, or from an imbalance in a grounded power system. These are often the more acute cases of AC interference, but they are also often the more easily detectable cases. The sometimes high potentials resulting from ground faults expose the pipe coating to high stress levels. This occurs as the soil surrounding the pipeline becomes charged, setting up a high voltage differential across the coating. Disbondment or arcing may occur. If the potentials are great enough, the arcing may damage the pipe steel itself.

The induction scenario occurs as the pipeline is affected by either the electrical or the magnetic field created by the AC power transmission. This sets up a current flow or a potential gradient in the pipeline (Figure 4.8). These cases of capacitive or inductive coupling depend on such factors as the geometric relation of the pipeline to the power transmission line, the magnitude of the power current flow, the frequency of the power system, the coating resistivity, the soil resistivity, and the longitudinal resistivity of the steel [77]. Induced potentials become more severe as soil resistivity and/or coating resistivity increases.

Formulas exist to estimate the potential effects of AC interference under normal and fault conditions. To perform these calculations, some knowledge of the load characteristics of the power system, including steady-state line currents and phase relationships, is required. Estimations and measurements will be needed to generate soil, coating, and steel resistivity values, as well as the distances and configurations between the pipeline and the power transmission facilities. The key factors in assessing the normal effects for most situations will most likely be the characteristics of the AC power and the distance from and configuration with the pipeline. Fault conditions can, of course, encompass a multitude of possibilities. Induced AC voltage can also be measured by methods similar to those used to measure DC pipe-to-soil voltages for cathodic protection checks. Therefore, an AC survey can be a part of a close interval survey (see the earlier section on CIS), thereby generating a profile of AC voltages.

Methods used to minimize the AC interference effects, both to protect the pipeline and to protect personnel coming into contact with the line, include [53,62]:
Electrical shields
Grounding mats or gradient control electrodes
Independent structure grounds
Bonding to existing structures
Supplemental grounding of the pipeline via distributed anodes
Casings
Proper use of connectors and conductors
Insulating joints
Electrolytic grounding cells
Polarization cells
Lightning arresters
Monitoring should be an integral part of the AC mitigation effort. Because so many variables are involved in performing accurate calculations and this is a relatively rare threat to most pipelines, a simplified schedule can be set up for this rather complex issue. In terms of risk exposure, one of three possible scenarios can exist and be scored from a risk perspective:
No AC power is within 1000 ft of the pipeline: 3 pts
AC power is nearby, but preventive measures are being used to protect the pipeline: 1-2 pts
AC power is nearby, but no preventive actions are being taken: 0 pts
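The three-scenario scoring above, together with the AC current density thresholds cited from Reference [38], can be sketched as follows. The function names, the 2-pt default for mitigated sites (the text allows 1-2 pts), and the category labels are illustrative assumptions:

```python
def ac_interference_points(distance_ft: float, preventive_measures: bool) -> int:
    """Score the three AC exposure scenarios (maximum 3 pts)."""
    if distance_ft > 1000:       # no AC power within 1000 ft of the pipeline
        return 3
    if preventive_measures:      # AC nearby, but mitigations and monitoring in place
        return 2                 # evaluator's judgment call: 1-2 pts
    return 0                     # AC nearby, no preventive actions taken

def ac_density_concern(density_a_per_m2: float) -> str:
    """Screen AC current density against the thresholds from Reference [38]."""
    if density_a_per_m2 >= 100:  # corrosion can be expected
        return "corrosion expected"
    if density_a_per_m2 > 20:    # corrosion may occur
        return "corrosion possible"
    return "low concern"

print(ac_interference_points(1500, False))  # -> 3
print(ac_density_concern(120))              # -> corrosion expected
```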
4/84 Corrosion Index
[Figure 4.8 AC power currents on pipeline: a magnetic or electric field sets up current flow in the pipeline, potentially causing coating or metal damage.]
Also fitting into the second scenario might be cases such as:

Very low-power AC only
High-power AC present but at least 3000 ft away
AC nearby, but regular surveying confirms no induction occurring

Note, however, that significant inductive interference effects can be seen as far away as 1.2 miles in high-resistivity soils [50]. In some cases, a more thorough investigation of AC effects might be warranted. Before a site-specific analysis is done, a more robust risk-screening tool could be used to highlight the most critical of suspect locations. Breaking the AC interference potential into several items for more detailed scoring might involve the variables shown in Table 4.6.

Preventive measures can be designed for induction cases, for ground fault cases, or for both. As previously mentioned, grounding cells can be designed to safely handle the discharge of current from the pipeline. Close monitoring of the situation would be considered part of the preventive measures taken. The evaluator should be satisfied that the potential AC current problem is well understood and is being seriously addressed before credit is given for preventive measures.

Shielding (weighting: 10% of interference potential)

Shielding is the blocking of protective currents. Casing pipe, especially where such pipe is coated, is a common example of the potential to create shielding effects. Certain soil or rock types, concrete coatings, and other buried structures (retaining walls, culverts, foundations, etc.) are also examples. Any structure that is very close to the pipe (perhaps <2 ft) should alert the evaluator to shielding potential. Where the evaluator sees potential shielding situations, the points assigned should show a lowered interference score (higher interference potential). When the operator is sensitive to this potential and takes special precautions, points can be awarded. When there is no potential for shielding, as verified by appropriate surveys, maximum points should be given.

DC-related interference (weighting: 70% of interference potential)

The presence of other buried metal in the vicinity of a buried metal pipeline is a potential concern for corrosion prevention. Other buried metal can short circuit or otherwise interfere with the cathodic protection system of the pipeline. In the absence of a cathodic protection system, the foreign metal can establish a galvanic corrosion cell with the pipeline. This may cause or aggravate corrosion on the pipeline. The effect can be quite severe: 1 amp of DC current discharging from buried steel can dissolve more than 20 pounds of steel per year. The most critical interference situations occur when electrical contact occurs between the pipeline and the other metal.
Table 4.6 Sample variables that can be evaluated to assess AC interference potential

Variable | Notes (example scoring protocols)
Verification | Maximum credit given when an AC survey is conducted at least annually at a distance no greater than 1 mile; no credit when greater than 1 mile or more than 2 years between surveys.
AC present | If AC is detected on the pipeline, penalties are assigned, where the worst case is >15 volts (consideration of pipeline damage only, not personnel safety issues).
Configuration | Assesses the more problematic configurations between the pipeline and the AC power line; parallel and then diverging represents the highest potential for problems.
Strength | Higher strength means higher chance of problems.
Distance | Shorter distance means greater chance of problems; 0.5 mile or greater from the current source/pipeline is the best score (unless a low-resistance path, such as a waterway, exists).
Soil resistivity | Lower resistivity means higher chance of problems.
Mitigation | Points are "recovered" based on the type of mitigation present.
This is especially critical when the other metal has its own impressed current system. Electric railroads are a good example of systems that can cause special problems for pipelines whether or not physical contact occurs. The danger arises when the other system is competing with the pipeline for electrons. If the other system has a stronger electronegativity, the pipeline will become an anode and, depending on the difference in electron affinity, the pipeline can experience accelerated corrosion. As noted elsewhere, coatings may actually worsen the situation if all anodic metal dissolves from pinhole-sized areas, causing narrow and deep corrosion pits.

Common mitigation measures for interference problems include interference bonds, isolators, and test leads. Interference bonds are direct electrical connections that allow the controlled flow of current from one system to another. By controlling this flow, corrosion effects arising from the foreign systems can be mitigated. Isolators, when properly installed, can similarly control the flow of current. Finally, test leads are used to monitor for problems. By comparing the pipe-to-soil potential readings of the two systems, signs of interference can be found. As with any monitoring system, checks must be done regularly by trained personnel, and corrective actions must be taken when problems are identified.

A reasonable question when assessing interference potential from other buried metal is "How close is too close?" The proximity of the foreign metal obviously is a key factor in the risk potential, but the distance is not strictly measured in feet or meters. Longer distances can be dangerous in low-resistivity soil or in cases where the current levels are relatively high. If the foreign system also has an impressed current CP system, the strength and location of the source are also pertinent. A reasonable rule of thumb might be to consider all buried metal within a certain distance of the pipeline, perhaps 500 ft, if no other calculations or experience-based distances are available. This rule should be tailored to the specific situation, but then held constant for all pipelines evaluated. Points can be assessed based on how many occurrences of buried metal exist along a section. Again, the greater the area of opportunity, the greater the risk. For pipelines in corridors with foreign pipelines, higher threat levels of interference may exist (although it is not uncommon for pipeline owners in shared corridors to cooperate and thereby reduce interference potentials).

Note that many modern approaches to pipeline segmentation for risk assessment will create smaller, unique sections where counts of occurrences would not be appropriate. For instance, each occurrence of a cased road crossing would be an independent pipeline section for purposes of risk scoring. Such sections would carry the risk of interference (including shielding effects), whereas neighboring sections might not. Specific subvariables for assessing DC-related interference include those shown in Table 4.7.
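A weighted-sum sketch of the DC-related subvariables, using the relative weights listed in Table 4.7. The 0-to-1 subscore convention and the mitigation-credit formula are assumptions for illustration, not the author's method:

```python
# Relative weights from Table 4.7; each subscore runs 0 (worst) to 1 (best).
DC_WEIGHTS = {
    "dc_present": 0.50,
    "configuration": 0.25,
    "strength": 0.10,
    "distance": 0.10,
    "soil_resistivity": 0.05,
}

def dc_interference_score(subscores: dict, mitigation_credit: float = 0.0) -> float:
    """Weighted sum of subscores; the mitigation adjustment recovers a
    fraction of any lost credit, never exceeding full credit."""
    base = sum(DC_WEIGHTS[k] * subscores[k] for k in DC_WEIGHTS)
    return min(1.0, base + mitigation_credit * (1.0 - base))

best = {k: 1.0 for k in DC_WEIGHTS}
worst = {k: 0.0 for k in DC_WEIGHTS}
print(round(dc_interference_score(best), 6))        # -> 1.0
print(round(dc_interference_score(worst, 0.5), 6))  # -> 0.5
```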
C3. Coating (weighting: 25% of corrosion threat)

[Corrosion index diagram: Atmospheric, Internal, Subsurface; the Subsurface score comprises Subsurface environment, Coating, and Cathodic protection. Coating (25 pts) comprises Fitness (10 pts) and Condition (15 pts); Cathodic protection is allotted 25 pts.]
Table 4.7 Sample variables for assessing DC-related interference

Variable | Relative weight | Notes
DC present | 50% | Investigate for the presence of potentially interfering currents.
Configuration | 25% | Parallel and then divergent might be worst-case configurations.
Strength | 10% | This measures CP source strength, if any. Intermittent voltages (for example, electric trains) are often more problematic.
Distance | 10% | Use the shorter of the source distance or the structure distance (foreign pipeline, rail, etc.).
Soil resistivity | 5% | Lower soil resistivities might lead to longer distances of interest.
Mitigation | Adjustment | Improve scores where mitigations are employed.

Pipeline coatings are one part of the two-part defense against subsurface corrosion of metallic pipe. Commonly used coatings are often a composite of two or more layers of materials. Paints, plastics, rubbers, and other hydrocarbon-based products
such as asphalts and tars are common coating materials. A coating must be able to withstand a certain amount of mechanical stress from initial construction; from subsequent soil, rock, and root movements; and from temperature changes as the pipe moves against the adjacent soil. The coating will be continuously exposed to ground moisture and any damaging substances contained in the soil. Additionally, the coating must adequately serve its main purpose: isolating the steel from the electrolyte. To do so, it must be resistant to the passage of electricity and water. Because pipelines are designed for long life spans, the coating must perform all of these functions without losing its properties over time; that is, it must resist aging. Typical coating systems include:

Cold-applied asphalt mastics
Layered extruded polyethylene
Fusion-bonded epoxy
Coal tar enamel and wrap
Tapes (hot or cold applied)
Any coating system can fail. Factors contributing to failure include:

Mechanical damage from soil movements, rocks, roots, and construction activities
Disbondment caused by hydrogen generation from excessive cathodic protection currents
Incorrect coating type or application for the pipeline operating conditions and environment
Water penetration

Coatings can fail in numerous ways. Some failures result in large defects that are relatively easy to detect and repair. The presence of many small defects, however, indicates active coating degradation mechanisms that may result in massive coating failure unless the mechanisms are addressed [50]. Correction costs here may be considerably more expensive.

One of the main reasons for using cathodic protection systems is that no coating system is defect free. Cathodic protection is designed to compensate for coating defects and deterioration. As such, one way to measure the condition of the coating is to measure how much cathodic protection is needed. Cathodic protection requirements are partially a function of soil conditions and the amount of exposed steel on the pipeline. Coatings with defects allow more steel to be exposed and hence require more cathodic protection. Cathodic protection is generally measured in terms of current consumption. A certain amount of voltage is thought to negate the corrosion effects, so the amount of current generated while maintaining this required voltage is a gauge of cathodic protection. A corrosion engineer can make some estimates of coating condition from these numbers.

One potentially bad situation that is difficult to detect is an area of disbonded coating, where the coating is separated from the steel surface. While the coating still provides a shield of sorts, moisture can often get between the coating and the steel. If this moisture is occasionally replaced, active local corrosion can proceed while showing little change in current requirements. Excessively high CP currents may cause hydrogen generation that can lead to coating disbondment.

Another common type of coating defect is the presence of pinhole-sized defects. These can be especially dangerous not only because they are difficult to detect, but also because they can promote narrow and deep corrosion pits. Because galvanic corrosion is an electrochemical reaction, a given driving force (voltage difference) will cause a certain rate of metal ionization. If the exposed area of metal is large, the corrosion will be wide and shallow, whereas a small exposure will lose the same volume of metal, causing deeper corrosion. Deeper corrosion is potentially more weakening to the pipe wall. A small geometric discontinuity may also cause high stress concentrations (see Chapter 5).

To assess the present coating condition, several things should be considered, including the original installation process. An evaluation similar to the one used to assess the coating for atmospheric corrosion protection may be appropriate. Again, no coating is defect free; therefore, the corrosion potential will never be totally removed, merely reduced. How effectively the coating is able to reduce corrosion potential depends on four general factors:

Quality of the coating
Quality of the coating application
Quality of the inspection program
Quality of the defect correction program
Each of these components can be rated on a 4-point scale equating to the qualitative judgments of good, fair, poor, or absent. The weighting of each component should probably be equivalent unless the evaluator can say that one component is of more importance than another. A quality coating is of little value if the application is poor; a good inspection program is incomplete if the defect correction program is poor. Perhaps an argument can be made that high scores in coating and application place less importance on inspection and defect correction. This would obviously be a sliding scale and is probably an unnecessary complication.
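The four-component rating can be sketched minimally as below. The numeric values attached to good/fair/poor/absent and the equal default weights are illustrative assumptions (Example 4.6 later shows the evaluator exercising judgment rather than applying fixed values):

```python
# Illustrative 4-point scale for the qualitative judgments.
RATING = {"good": 1.0, "fair": 0.5, "poor": 0.25, "absent": 0.0}

def coating_quality_score(coating, application, inspection, correction,
                          weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted average of the four component ratings; equal weights by
    default, as the text suggests, unless the evaluator can justify
    emphasizing one component over another."""
    ratings = [RATING[c] for c in (coating, application, inspection, correction)]
    return sum(w * r for w, r in zip(weights, ratings))

print(coating_quality_score("good", "good", "fair", "fair"))  # -> 0.75
```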
Coating fitness (weighting: 50% of coating)

Coating  Evaluate the coating in terms of its appropriateness in its present application. Where possible, use data from coating stress tests to rate the quality. Hardness, elasticity, adhesion to steel, and temperature sensitivity are common properties used to determine appropriateness. When these data are not available, draw from company experience. The evaluation should assess the coating's resistance to all anticipated stresses, including a degree of abuse at initial installation, soil movements, chemical and moisture attack, temperature differentials, and gravity.

Good: a high-quality coating designed for its present environment
Fair: an adequate coating but probably not specifically designed for its specific environment
Poor: a coating in place but not suitable for long-term service in its present environment
Absent: no coating present

Note: Some of the more important coating properties include electrical resistance, adhesion, ease of application, flexibility, impact resistance, flow resistance (after curing), resistance to soil stresses, resistance to water (moisture uptake), and resistance to bacteria or other organism attack. In the case of submerged lines, marine life such as barnacles or borers must be considered.

Application  Evaluate the most recent coating application process and judge its quality in terms of attention to pre-cleaning, coating thickness, the application environment (temperature, humidity, dust, etc.), and the curing or setting process.

Good: Detailed specifications are used; careful attention is paid to all aspects of the application; appropriate quality control systems are used.
Fair: Most likely a proper application is done, but without formal supervision or quality controls.
Poor: A careless, low-quality application is performed.
Absent: Application was incorrectly done, steps were omitted, and the environment was not controlled.

An alternate approach to scoring coating fitness is to begin with a score for the type of coating, reflecting the coating's perceived future performance. This might be based on historical performance information or laboratory tests. Then, adjustments are applied to this score for any conditions that might affect the coating's ability to perform. The magnitude of an adjustment should reflect its possible impact on coating performance. Examples of adjustments are shown in Table 4.8.
Coating condition (weighting: 50% of coating)

Inspection  Evaluate the inspection program for its thoroughness and timeliness. Documentation will also be an integral part of the best possible inspection program. Inspection of underground coating can take several forms. Opportunities for visual inspection will occasionally present themselves as the pipe is exposed for various reasons. When this happens, the operator should take advantage of the situation by having trained personnel evaluate the coating condition and record the findings. A second inspection method, less direct than visual inspection, impresses a radio or electric signal onto the pipe and measures the signal strength at points along the pipeline (Figure 4.9). The signal strength should decrease linearly in direct proportion to the distance from the signal source. Peaks and unexpected changes in the signal indicate areas of non-uniform, perhaps damaged, coating. This technique is called a holiday detection survey. Based on the initial survey, test holes are dug for visual inspection of the coating in order to correlate actual coating condition with signal readings.
Another indirect method was mentioned in this section's introduction. A measure of the cathodic protection requirements, especially the change in these requirements over time, gives an indication of the coating condition (see Figure 4.7 earlier). The methods discussed above and other indirect observation methods require a degree of skill on the part of the operator and the analyzer. Industry opinion is divided on the effectiveness of some of these techniques. The evaluator should satisfy himself that the operator understands the technique and can demonstrate some success in its use for coating inspection.

Good: A formal, thorough inspection is performed specifically for evidence of coating deterioration. Inspections are performed by trained individuals at appropriate intervals (as dictated by local corrosion potential). Full use is made of visual inspection opportunities in addition to one or more indirect techniques.
Fair: Inspections are informal but performed routinely by qualified individuals. Perhaps an indirect technique is used, but maybe not to its full potential.
Poor: Little inspection is done; reliance is on chance sightings of problem areas. Informal visual inspections are made when there is the opportunity.
Absent: No inspection is done.

Note: Typical coating faults include cracking, pinholes, impacts (sharp objects), compressive loadings (stacking of coated pipes), disbondment, softening or flowing, and general deterioration (ultraviolet degradation, for example).

Correction of defects  Evaluate the program of defect correction in terms of thoroughness and timeliness.

Good: Reported coating defects are immediately documented and scheduled for timely repair. Repairs are carried out per application specifications and are done on schedule.
Fair: Coating defects are informally reported and are repaired at convenience.
Poor: Coating defects are not consistently reported or repaired.
Absent: Little or no attention is paid to coating defects.
The coating condition assessment can be made more data-driven if accurate measurements of cathodic protection current requirements exist. These measurements are usually in the form of milliamperes per square foot of pipeline surface area. A
Table 4.8 Adjustments to coating performance scores

Coating type | Points are assigned based on experience with the life cycle of various coating types.
Age | Adjustment that conservatively assumes ongoing coating deterioration.
Application | Adjustment that penalizes field-applied coating because application conditions and surface preparation are more difficult to control.
Damage potential | Examines the potential harm to the coating from its immediate environment.
Coating environment | Mechanical damage is the main consideration: rock impingement, soil stress, wave action, etc. Seismic faults and subsidences are areas of active soil movement that can damage coating.
Coating protection | Assesses the response to a potentially harmful environment: the forms of mitigation in place to minimize coating damage potential.
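The alternate fitness approach (a base score by coating type, then Table 4.8-style adjustments) might look like the sketch below. The base type scores and adjustment factors are hypothetical numbers chosen for illustration, not values from the text:

```python
# Hypothetical base scores reflecting perceived future performance by coating type.
TYPE_SCORE = {"fusion-bonded epoxy": 0.90, "coal tar enamel": 0.75, "tape wrap": 0.55}

def coating_fitness(coating_type: str, adjustments: dict) -> float:
    """Multiply a base score by adjustment factors (each 0-1) for conditions
    such as age, field application, damage potential, environment, protection."""
    score = TYPE_SCORE[coating_type]
    for name, factor in adjustments.items():
        score *= factor  # larger expected impact on performance = smaller factor
    return score

# Aged, field-applied coal tar enamel in a rocky ditch (hypothetical factors):
print(round(coating_fitness("coal tar enamel",
                            {"age": 0.8, "application": 0.9, "damage": 0.9}), 3))  # -> 0.486
```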
[Figure 4.9 Example of coating survey results: signal strength versus distance from the signal source (along the pipeline). The normal signal follows a steady slope of signal attenuation; an unexpected rise in signal strength indicates a possible coating damage area.]
schedule such as that shown in Table 4.9 could be used. This type of scale would depend on soil corrosivity. Again, lower electric current requirements mean less exposed metal and better electrical isolation from the electrolyte. A special situation, such as evidence of high microorganism activity or unusually low pH that promotes steel oxidation, should be accounted for by reducing the point value (but not below 0 points). Not knowing the soil corrosion potential would conservatively warrant a score of 0 points.

A more detailed approach to scoring coating condition is to use inspection results directly. Adjustments are applied to account for the age of the inspection: it is conservatively assumed that the coating deteriorates until inspection verifies actual condition. In the case of a visual inspection, a zone of influence might be appropriate. Even if only a short length of pipe is excavated and inspected, this provides some evidence of coating condition for some distance past the excavation if there are no known environmental changes. Zone of influence applications are discussed in Chapter 8. Table 4.10 shows inspection-related risk variables from a scoring system that considers three types of inspection: visual, nondestructive testing (NDT), and destructive testing (DT). Each variable can be weighted and/or further divided into specific measurements.
Table 4.9 Quantitative measure of coating condition

CP current requirement | Coating condition
0.0003 mA/sq ft | Good
0.003 mA/sq ft | Fair
0.1 mA/sq ft | Poor
1.0 mA/sq ft | Absent
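A classifier sketch based on Table 4.9. Placing the category boundaries at the tabulated current values is an assumption, since the table gives representative points rather than explicit ranges:

```python
def coating_condition(cp_current_ma_per_sqft: float) -> str:
    """Map CP current requirement (mA/sq ft) to coating condition per Table 4.9."""
    if cp_current_ma_per_sqft <= 0.0003:
        return "good"
    if cp_current_ma_per_sqft <= 0.003:
        return "fair"
    if cp_current_ma_per_sqft <= 0.1:
        return "poor"
    return "absent"  # on the order of 1.0 mA/sq ft: essentially bare pipe

print(coating_condition(0.0002))  # -> good
print(coating_condition(0.5))     # -> absent
```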
Example 4.6: Scoring coating

A buried oil pipeline in dry, sandy soil is cathodically protected by sacrificial anodes attached to the line at a spacing of about 500 ft. A pipe-to-soil voltage measurement is taken twice each year over the whole section to ensure that cathodic protection is adequate. Records indicate that the line was initially coated 22 years ago with a plant-applied coal tar enamel material that was applied over the sand-blasted and primed pipe. An inspector supervised the coating process. The pipe-to-ground potential has not changed measurably since original installation. This section of line has not been exposed for 10 years. The evaluator assesses the coating condition as follows:

Coating condition parameter | Max weighting | Score
Coating (good) | 25% | 20%
Application (good) | 25% | 20%
Inspection (fair) | 25% | 10%
Defect correction (fair) | 25% | 10%
Total | 100% | 60%

The coating type in this environment has a good track record. However, no confirmatory information is available. The evaluator feels that the semiannual pipe-to-soil voltage readings give a good indirect indication of coating condition. Full points would be awarded if this were confirmed by visual inspection (cracked or disbonded coating may not be found by the potential readings alone). Defect correction is an unknown at this point. Points are awarded based on the thoroughness with which the operator runs other aspects of his operations; in other words, some benefit of the doubt is given here. Coating selection and application processes appear to be high quality, based on records and conversations with the operator, but
Table 4.10 Three types of coating inspection

Inspection | Condition | Notes
Visual inspection | General | Derived from pipe inspection reports; inspection reports could be generated for each diameter length (36-in. pipe needs a report for every 36 in. of exposed pipe); the inspector notes the beginning and end stations of all anomalies; measures the percent failure area per square foot of surface area.
Visual inspection | Disbonding failures | Treated separately from other failure types since this failure mode is more problematic than others and difficult to protect against.
Visual inspection | Inspection age | Uses a rate of decay of inspection results; older inspections yield less useful risk information.
NDT inspection | Dry film thickness versus design | A measure of remaining coating.
NDT inspection | In situ coating surveys | An inferential measure of coating coverage and integrity.
NDT inspection | Holidays as detected by 100 volt/mil intensity | A direct measure of coating integrity, available during excavation and visual inspection.
NDT inspection | Inspection age | Uses a rate of decay of inspection results; older inspections yield less useful risk information.
DT inspection | Adhesion, abrasion, impact resistance, shear | Data obtained via laboratory or on-site tests of coating samples obtained from field investigations.
DT inspection | Inspection age | Uses a rate of decay of inspection results; older inspections yield less useful risk information.
again, scores are conservatively assigned in the absence of more evidence.
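The weighted tally in Example 4.6 can be reproduced directly. The variable names are illustrative; the 20%/10% scores are the evaluator's judgments from the example:

```python
# Example 4.6 scores: each parameter has a 25% maximum weighting.
scores = {
    "coating (good)": 0.20,
    "application (good)": 0.20,
    "inspection (fair)": 0.10,
    "defect correction (fair)": 0.10,
}
total = sum(scores.values())
print(f"{total:.0%}")  # -> 60%, then applied to the coating point allocation
```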
Adjustments Several adjustments to previously assigned scores are appropriate. Some have already been discussed, such as adjustments for age of surveys or equipment malfunction potential. Additional adjustments will often be warranted when direct evidence is included. The corrosion variables use mostly indirect evidence to infer corrosion potential, as is consistent with the historical practice of corrosion control in the industry. Because the scores will ideally correlate to corrosion rate, any detection of corrosion damages or direct measurements of actual corrosion rate can be used to calibrate the scores and/or tune the risk model. Where a corrosion rate is actually measured, the overall corrosion score can be calibrated with this information. The reverse is not always implied, however. Caution must be exercised in assigning favorable scores based solely on the non-detection of corrosion at certain times and at limited locations. It is important to note that the potential for corrosion can be high even when no active corrosion is detected.
Previous damages Results from an in-line inspection (ILI) or other inspections may detect previous corrosion damage. When there is actual corrosion damage, but risk assessment scores for corrosion potential do not indicate a high potential, then a conflict seemingly exists between the direct and the indirect evidence. Such conflicts are discussed in Chapter 2. Sometimes we will not know exactly where the inconsistency lies until complete investigations have been performed. The conflict could reflect an overly optimistic assessment of effectiveness of mitigation measures (coatings, CL etc.) or it could reflect an underestimate of the harshness of the environment. Another possibility is that some of the information might be inappropriately used by the risk model. For example, detection of corrosion damages might not reflect active corrosion. As a temporary measure to ensure that the corrosion scores always reflect the best available direct inspection information, limitations can be placed on corrosion scores, in proportion to the direct inspection results. This will force the risk model to preferentially use recent direct evidence over previous assumptions, until the conflicts between the two are investigated. Techniques to assimilate ILI and other direct inspection information into risk scores are discussed in Chapter 5 . If such direct inspection scores are created, they can be used as input to the corrosion scores. Basically, a ‘ceiling’ is created that uses the inspection information (adjusted for age and accuracy) to override scores derived from the more indirect evidence. This is illustrated in Table 4.1 1. In this sample, the worst ILI scores, indicating the most extensive corrosion damage, limit the risk scores for one or
more corrosion types, depending on whether the ILI indications are internal or external wall loss. For example, suppose that, prior to the ILI, an evaluator had assessed the coating condition, CP effectiveness, etc., and had assigned the segment a subsurface corrosion score of 55 out of 70 (higher points indicate more safety). If the ILI score, based on the recent inspection, indicates that some damage might have occurred (suspicious indications), then the subsurface corrosion score would be capped at 60% x 70 (maximum points possible) = 42, and the previously assigned 55 would be temporarily reduced to 42, pending an investigation. In other words, the previous assessment based on indirect evidence has been overridden by the results of the ILI. The segment would be reassessed after an investigation had determined the cause of the damage: how the mitigation measures may have failed and how the risk assessment may be incorrect. An ILI score that indicates no damage puts no limitations on corrosion scores. As discussed in Chapter 5, if the direct inspection score is based on unverified ILI results, it can eventually be improved through "pig digs," that is, excavation, inspection, and verification that anomalies are indeed damages. The limitation on corrosion scores can also be reduced, even if the direct inspection score does not improve by damage repair. This can happen if a root cause analysis of the detected damages concludes that active corrosion is not present, despite a poor inspection score. For example, the root cause analysis might use previous ILI results to demonstrate that corrosion damage is old and corrosion has been mitigated. A critical aspect is the determination of whether the damages represent active corrosion or are past damages whose progress has been halted through increased corrosion prevention measures.
Replacing anode beds, increasing current output from rectifiers, eliminating interferences, and recoating are all actions that could halt previously active corrosion. This type of adjustment should be only temporarily employed. It will not give satisfactory long-term support for the risk model since it is, in effect, overriding risk information rather than finding and correcting discrepancies of evidence.
Table 4.11 Temporarily limiting corrosion scores on the basis of recent inspections

Interpretation of direct inspection score                        % of maximum score
Severe corrosion damages are identified.                           0
Significant corrosion damages are identified.                     10
Possibility of some damages has been identified.                  30
Suspicious results suggest that damages might have occurred.      60
Direct evidence has verified that no corrosion has occurred.     100*

*Use this value or current corrosion score, whichever indicates higher corrosion threat.
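The capping mechanism of Table 4.11 can be sketched in a few lines of code. This is an illustrative sketch only: the category labels and the 0-70 subsurface corrosion point scale follow the text's example, while the function and dictionary names are hypothetical.

```python
# Illustrative sketch of the Table 4.11 ceiling. Category labels and the
# 0-70 point scale come from the text's example; names here are hypothetical.

ILI_CEILING_PCT = {
    "severe damage": 0,
    "significant damage": 10,
    "possible damage": 30,
    "suspicious indications": 60,
    "verified no corrosion": 100,
}

def capped_corrosion_score(indirect_score, max_points, ili_interpretation):
    """Temporarily limit an indirect-evidence corrosion score by the ceiling
    implied by the most recent direct (ILI) inspection result.
    Lower points indicate higher corrosion threat, so take the minimum."""
    ceiling = ILI_CEILING_PCT[ili_interpretation] / 100.0 * max_points
    return min(indirect_score, ceiling)

# Text's example: 55 of 70 points with suspicious ILI indications is
# capped at 60% x 70 = 42, pending investigation.
print(capped_corrosion_score(55, 70, "suspicious indications"))  # 42.0
```

Taking the minimum implements the table's footnote: the direct-inspection ceiling governs only when it indicates a higher corrosion threat than the current score.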
Design Index
Design Risk                      0-100 pts   100%
A. Safety Factor                  0-35 pts    35%
B. Fatigue                        0-15 pts    15%
C. Surge Potential                0-10 pts    10%
D. Integrity Verifications        0-25 pts    25%
E. Land Movements                 0-15 pts    15%
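As a sketch of how these weightings roll up into the index, the maximum points per variable are taken from the table above; the function and key names are hypothetical.

```python
# Sketch of rolling the five design variables up into the 0-100 point
# Design Index. Maximum points per variable are from the table above;
# the function and key names are hypothetical.
MAX_POINTS = {
    "safety_factor": 35,
    "fatigue": 15,
    "surge_potential": 10,
    "integrity_verifications": 25,
    "land_movements": 15,
}

def design_index(scores):
    """Sum variable scores, clamping each to its 0..max range
    (higher points indicate more safety, as elsewhere in this model)."""
    return sum(min(max(scores.get(v, 0), 0), cap)
               for v, cap in MAX_POINTS.items())

print(design_index({"safety_factor": 30, "fatigue": 10, "surge_potential": 8,
                    "integrity_verifications": 20, "land_movements": 12}))  # 80
```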
Background

The probability of pipeline failure is assessed, in this model, via an evaluation of four failure mechanisms. In three of those (third party, corrosion, and incorrect operations) the assessment focuses on the probability that the failure mechanism is active. The potential presence of the failure mechanism is assumed to directly correlate with failure potential, under the assumption that all failure mechanisms can eventually precipitate a failure, even in very strong pipelines. In the fourth, the design index, the assessment looks not only at the potential for an active failure mechanism, but also at the ability of the pipeline to withstand failure mechanisms. This resistance to
failure (safety factor and integrity verifications variables) will play a role in absolute risk calculations when a time-to-fail consideration is required. A significant element in the risk picture is the relationship between how a pipeline was originally designed and how it is presently being operated: its safety margin. Although this may seem straightforward, it is actually quite a complex relationship. All original designs are based on calculations that must, for practical reasons, incorporate assumptions. These assumptions deal with the variable material strengths and anticipated stresses over the life of the pipeline. Safety factors and conservativeness in design are incorporated with the assumptions in the interest of safety but further cloud the view of the actual margin. Further complications arise with the uncertainties in estimating the long-term interactions of many variables such as pipe support, activity of time-dependent failure mechanisms, and actual stress loadings imposed on the structure. In aggregate, then, the evaluator will always be uncertain in his estimation of the margin of safety. This uncertainty should be acknowledged, but not necessarily quantified. An evaluation system should incorporate all known information and treat all unknowns (and "unknowables") consistently. Because a relative risk picture is sought,
Figure 5.1 Basic risk assessment model
the consistency in treatment of design variables provides a consistent base with which to perform risk comparisons. Design is used as an index title here because most, if not all, of the risk variables here are normally addressed directly in the system's basic structural design. They all have to do with structural integrity against all anticipated loads: internal, external, time dependent, and random. This chapter, therefore, provides guidance on evaluating the pipeline's environment against the critical design parameters.
Load vs. resistance to load curves

Conservatism in design and specifications is not the result of insensitivity to costs and wastes of efforts and materials. Rather, it is an acknowledgment, after centuries of experience, of the inherent unpredictability of the real world. Safety factors, or allowances for margins of error, in any structural design are only prudent. The safety margin implies a level of risk tolerance, further discussed in Chapter 14. As many modern design efforts move toward limit-state design approaches, the historical notions of safety margins and extra robustness in a design are being quantified and re-evaluated. A visual model to better appreciate the relationships among pipeline integrity management, safety factors, and risk is shown in Figure 5.3, which illustrates the uncertainty involved in engineering design in general. We generally use single numbers to represent the material strength (load resistance) and the anticipated load (internal pressure plus external loads) at any point along the pipeline. However, we should not lose sight of the fact that our single numbers really are representations or simplifications of underlying distributions such as those shown in Figure 5.3A. The actual loads or forces on a pipeline are not constant either over time or space. They vary as we move along the pipeline and they vary at a single point on the pipeline over time. In the first distribution, we assume that the distributions shown represent changing loads and resistances along the pipeline. So, some pipe segments are exposed to relatively low loads, some relatively high, and most are in a midrange.
Similarly, each pipe segment will have a different ability to resist the loads. To find the differences, we might have to go down to a microscopic level in the case of a new pipeline with very consistent manufacturing processes, but there will be some joints with at least minor weaknesses, allowing failure at lower loadings; some with no weaknesses, allowing higher load resistance; and the vast majority behaving as predicted. The two distributions are initially separated by a generous distance, the safety factor, so that even if the tails are a bit longer than expected (that is, if some segments are exposed to more loads than expected and/or some segments have even less strength than expected), there is still no threat of failure. This represents the as-designed risk condition and is illustrated in Figure 5.3A. It is conservative and prudent to assume that any system's resistance will be weakened over time, despite best efforts to prevent it. Weaknesses are caused by time-dependent deterioration mechanisms and repeated stresses. It is also conservative to assume that loads might increase, perhaps due to an external force such as earth movements or increasing traffic loadings. In reality, actual loads are often less than the conservative design assumptions, leaving more safety margin. This conservatively assumed movement of the curves toward each other, the reduction in safety margin, leads us to assume a new risk state. This is shown in Figure 5.3B. This is of course what we are trying to avoid: an overlap whereby a low-resistance segment is exposed to a load that is too high, and a failure occurs. The role of integrity verification, a key component of risk assessment/management, is to figuratively stop the movement of the curves long before any overlap occurs. Integrity re-verification can reshape the resistance distribution so that we have less variability and fewer weak points.
The integrity verification in effect removes weaknesses, even if the only weakness was lack of knowledge (uncertainty = increased risk, as discussed in Chapters 1 and 2). Figure 5.3C illustrates the risk situation after integrity verification. The knowledge gained assures us that no weaknesses beyond a certain detection limit exist. This has the effect of at least truncating the "resistance to load" curve, if not providing enough evidence that the curve has not
Figure 5.2 Assessing threats related to design aspects: sample of data used to score the design index

Safety factor: maximum pressure; normal pressure; material strength; pipe wall thickness; external loadings; diameter; strength of fittings, valves, components
Fatigue: pressure cycle magnitude; pressure cycle frequency; material toughness; diameter/wall thickness ratio
Surge potential: fluid bulk modulus; pipe modulus of elasticity; rate of flow stoppage; flow rates
Integrity verifications: verification date; pressure test level; in-line inspection technique; in-line inspection accuracy
Land movements: seismic shaking; fault movement; subsidence; landslide; water bank erosion
changed shape at all. The distance between the load curve and this truncation point reestablishes our safety factor. Because we are uncertain about how exactly and how quickly the curves are changing, we will be uncertain as to how much time we can take between weakness removal efforts. The weakness removal interval selected has implications for failure probability as discussed in Chapter 14.
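The load-versus-resistance picture of Figure 5.3 can also be illustrated numerically. In the sketch below, the distributions and all parameter values are invented for demonstration; the only point is that failure probability corresponds to the overlap of the two curves.

```python
# Monte Carlo illustration of Figure 5.3: failure corresponds to the overlap
# of the load and resistance distributions. All distribution parameters are
# invented for demonstration, not taken from the text.
import random

def overlap_fraction(load_mean, load_sd, resist_mean, resist_sd,
                     trials=100_000, seed=1):
    """Estimate P(load > resistance) for two normal distributions."""
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(load_mean, load_sd) > rng.gauss(resist_mean, resist_sd)
        for _ in range(trials)
    )
    return failures / trials

# As designed (Fig. 5.3A): generous separation, essentially zero overlap.
print(overlap_fraction(40, 5, 100, 5))
# Degraded over time (Fig. 5.3B): curves move together and overlap appears.
print(overlap_fraction(60, 8, 80, 8))
```

Integrity verification, in this picture, truncates the lower tail of the resistance distribution, which drives the overlap (and hence the estimated failure probability) back toward zero.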
New pipelines

Evaluators will often need to perform a risk assessment on a proposed pipeline based on design documents. This should be considered a preliminary assessment. A preliminary risk assessment will be based on the best available preconstruction information such as route surveys and soil investigations. During actual installation, new information will usually arise that might be pertinent to the risk assessment. This information might include:

- Reroutes
- Unexpected subsurface conditions encountered
- Use of different pipe components (elbows versus field bends, etc.)
- Results of quality control inspections and tests.
Often, as-built information will be required before a detailed risk assessment can be completed or a preliminary risk assessment can be confirmed. New construction, followed immediately by integrity verification, decreases the chance of failure from design-related issues and from time-dependent failure mechanisms. After all, the design process itself is an exercise in risk management. Where conditions are judged to be more threatening, offsetting measures are employed. These include deeper burial, provisions for land stabilization, increased pipe wall thicknesses, and use of casings and anchors where appropriate. Theoretically, these responses to changing conditions should keep the probability of failure constant along the length of the line. Differences in failure probability occur when responses are more or less than required for the conditions. An over-response often occurs for economic reasons; standardization of materials designed for worst case conditions provides a benefit when conditions are not worst case. An under-response often occurs because of an inability to completely respond to a low-frequency, high-consequence event such as a landslide or earthquake. The challenge in a risk assessment of a new facility is to first establish the baseline risk level, then to identify areas where
such under- and over-responses have changed the risk picture, and then relate both of these to the consequences at specific areas along the pipeline route.
Maximum pressure

The terms maximum operating pressure (MOP), maximum allowable operating pressure (MAOP), maximum permissible pressure, and design pressure are often used interchangeably, and indeed they are used interchangeably in this text. They all imply an internal pressure level that comports with design intent and safety considerations, whether the latter stem from regulatory requirements, industry standards, or a company's internal policies. MOP is normally calculated. For purposes of risk assessment, MOP can incorporate any and all design safety factors, or it may exclude the operating safety factors that are mandated by government regulations. It should not exclude engineering safety factors that reflect the uncertainty and variability of material strengths and the simplifying assumptions of design formulas, since these are technically based limitations on operating pressure. These include adjustment factors for temperature, joint types, and other considerations. Regulatory operating safety factors, however, usually go beyond this to allow for errors and omissions, deterioration of facilities, and extra safety margins in general. Such allowances are certainly needed in pipeline operation, but can be confusing if they are included in the risk assessment. The actual margin of safety exists between the maximum stress level caused by the highest pressure and the stress tolerance of the pipeline. Measuring this margin directly, rather than the margin between a regulated stress level and the stress tolerance, makes the assessment more intuitive and useful when differing regulatory requirements make comparisons more complicated. Regulatory safety factors may therefore be omitted from the MOP calculations for risk assessment purposes. As with all elements of this risk assessment tool, such distinctions are ultimately left to the evaluator. Because a picture of risk relative to other pipelines is sought, any consistent definition of MOP will work.
Surge (water hammer) pressures may be included in maximum pressure determination or, alternatively, can be part of a separate risk variable, as shown in this proposed model. Surge potential is discussed in Appendix D. Pipe wall damages or suspected weaknesses (anomalies) may impact pipe strength and hence allowable pressures or safety margins. Anomalies are discussed in Appendix C. Reductions of MOP resulting from pipeline anomalies are normally based on remaining effective wall thickness calculations and conform to approaches described in industry standards such as ASME/ANSI B31G, Manual for Determining the Remaining Strength of Corroded Pipelines, or AGA Pipeline Research Committee Project PR-3-805, A Modified Criterion for Evaluating the Remaining Strength of Corroded Pipe. It may also be important to distinguish a safety-system-protected MOP from one that is impossible to exceed due to the absence of adequate pressure production, that is, where no pressure source (including static head and temperature effects) can cause an exceedance. This is covered more in
Chapter 6, but the evaluator should carefully define MOP for purposes of risk assessment.
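As an orientation to the B31G-style remaining strength calculation referenced above, the sketch below implements the original B31G Level-1 parabolic-area method from memory. It is illustrative only and should be checked against the current edition of the standard before any real use; the example pipe and defect dimensions are hypothetical.

```python
# Sketch of the original ASME B31G Level-1 remaining-strength check for a
# blunt corrosion defect. Written from the classic formulation for
# illustration; verify against the current edition of B31G before relying
# on it. Example numbers are hypothetical.
import math

def b31g_failure_pressure(smys_psi, d_in, t_in, defect_depth_in, defect_len_in):
    """Estimated failure pressure (psi) of a blunt corrosion defect."""
    flow_stress = 1.1 * smys_psi               # B31G flow stress
    z = defect_len_in ** 2 / (d_in * t_in)     # defect length parameter
    dt = defect_depth_in / t_in                # fractional depth
    if z <= 20:
        m = math.sqrt(1 + 0.8 * z)             # Folias bulging factor
        hoop = flow_stress * (1 - (2 / 3) * dt) / (1 - (2 / 3) * dt / m)
    else:                                      # long-defect limit
        hoop = flow_stress * (1 - dt)
    return hoop * 2 * t_in / d_in              # hoop stress -> pressure (Barlow)

# Hypothetical example: X52 pipe, 12.75 in OD, 0.250 in wall,
# defect 0.100 in deep x 3.0 in long.
print(round(b31g_failure_pressure(52_000, 12.75, 0.250, 0.100, 3.0)))
```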
V. Risk variables and scoring

The Design Index is more technically complex than most of the other components of the evaluation. If the evaluator does not possess expertise in matters of pipeline design, outside help may be beneficial. This is not a requirement, though. By making some conservative assumptions and being consistent, a nonexpert can do a credible job here. He must, however, be able to obtain some calculated values. Where original design calculations are available, few additional calculations are needed. The following paragraphs describe a risk assessment model that captures and evaluates design-related risk variables. All variables are listed together at the beginning of this chapter for quick reference.
A. Safety factor (weighting: 35%)

In this part of the assessment, the overall strength of the pipeline segment and its stress levels are considered. This includes an assessment of loads, stresses, and component strengths. Known and foreseeable weaknesses in pipe due to previous damage or suspect manufacturing processes are also considered here. In effect, we are calculating a safety factor or a margin of safety, comparing what the pipeline can do (design) versus what it is currently being asked to do (operations). The evaluation process involves an evaluation of loadings:

- Internal pressure
- External loadings
- Special loadings

System strength (resistance to loadings) is also evaluated:

- Pipe wall thickness
- Pipe material strength
- Pipe structural strength
- Possible weaknesses in pipe
- Other components.
Internal pressure

When calculating stresses due to internal pressure, evaluators may use either the maximum (design) pressures or the normal operating pressures, depending on the type of risk assessment being performed (see previous discussion of MOP definitions). The former is the most conservative and is appropriate for characterizing the maximum stress levels to which all portions of the pipeline might be subjected, even if the normal operating pressures for most of the pipe are far below this level. This use of design pressure or MOP might be more appropriate when characterizing an entire pipeline as one unit. It also avoids the potential criticism that the assessment is not appropriately conservative. The second alternative, using normal operating pressures, provides a more realistic view of stress levels along the pipeline. Portions immediately downstream of pumps or compressors would routinely see higher pressures, and downstream
Figure 5.3 Load and resistance distributions: (A) risk 1, situation as designed; (B) risk 2, assumed changes over time; (C) risk 3, after integrity verification.
portions might never see pressures even close to design limits. This approach might be more appropriate for operational risk assessments where differences along the pipeline are of most interest. Pressure cycling should be a part of the assessment since the magnitude and frequency of cycling can contribute to fatigue failure mechanisms. This is discussed elsewhere. Calculating pipe stresses from internal pressure is discussed in Appendix C.
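The basic relation behind these internal pressure stress calculations is Barlow's formula. A minimal sketch, with hypothetical example numbers:

```python
# Minimal sketch of Barlow's formula, the basic relation behind the internal
# pressure stress calculations referenced in Appendix C. Example numbers
# are hypothetical.

def hoop_stress(pressure_psi, diameter_in, wall_in):
    """Hoop stress (psi): sigma = P * D / (2 * t)."""
    return pressure_psi * diameter_in / (2 * wall_in)

def design_factor(pressure_psi, diameter_in, wall_in, smys_psi):
    """Operating hoop stress expressed as a fraction of SMYS."""
    return hoop_stress(pressure_psi, diameter_in, wall_in) / smys_psi

# 16 in OD, 0.312 in wall, X52 (SMYS = 52,000 psi), operating at 1,000 psi:
print(round(hoop_stress(1000, 16.0, 0.312)))               # hoop stress, psi
print(round(design_factor(1000, 16.0, 0.312, 52_000), 3))  # fraction of SMYS
```

Evaluating this at maximum (design) pressure versus normal operating pressure is exactly the choice described above: the former gives the conservative bounding stress, the latter the realistic stress profile along the line.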
External loadings

External loadings include the weight of the soil over the buried line, the loadings caused by traffic moving over the line, possible soil movements (settling, faults, etc.), external pressures and buoyancy forces for submerged lines, temperature effects (these could also be internally generated), lateral forces due to water flow, and pipe weight. Stress equations for some of these are shown in Appendix C. The diameter and wall thickness of the pipe combine to determine the structural strength of the pipeline against most external loadings. Pipe flexibility is also a factor. Rigid pipe generally requires more wall thickness to support external loads than does flexible pipe. This chapter focuses on steel pipe design. See Chapter 11 for a discussion of other commonly used pipe materials.
Overburden

The weight of the soil or other cover and anything moving over the pipeline comprises the overburden load. In an offshore environment, this would also include the pressure due to water depth. Uncased pipe under roadways may require additional wall thickness to handle the increased loads. Often, casing pipe is installed to carry anticipated external loads. A casing pipe is merely a pipe larger in diameter than the carrier pipe whose purpose is to protect the carrier pipe from external loads (see Figure 4.2). Casing pipe often causes difficulties in establishing cathodic protection to prevent corrosion. The effect of casings on the risk picture from a corrosion standpoint is covered in the corrosion index (see Chapter 4). The impact on the design index is found here, when the casing carries the external load and produces a higher pipe safety factor for the section being evaluated (see The case for/against casings).
Spans

An unsupported pipe is subject to additional stresses compared with a uniformly supported pipe. An unsupported condition can arise intentionally (an aerial crossing of a ditch or stream, for instance) or unintentionally (the result of erosion or subsidence, for example). From a risk perspective, the evaluator should be interested in a verification that all aboveground pipeline spans are identified, adequately supported vertically, and restrained laterally against credible loading scenarios, including those due to gravity, internal pressure, and externally applied loads. Especially in an offshore or submerged environment, this must include lateral loads such as current flow and debris impingement. Resistance to stresses from unsupported spans is generally modeled using beam formulas (see Appendix C).
Each aboveground span can be visually inspected to verify the existence of an adequate number of pipe supports, such that pipe spans do not exceed precalculated lengths based on applied dead load (i.e., load due to gravity) and the internal pressure. Pipe coating and pipe supports can also be inspected for integrity. Historical floodwater elevations can be identified based on field inspections and/or against floodplain maps. The maximum allowable pipe spans can be calculated based on accepted industry standards, such as ASME B31.4, Liquid Transportation Systems for Hydrocarbons, Liquid Petroleum Gas, Anhydrous Ammonia and Alcohol, that specify requirements for gravity loads and internal pressure. Allowable span lengths can conservatively be based on the assumption of a beam fully restrained against rotation at its supports when calculating the applied stresses in the span.
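The fixed-end-beam assumption above can be sketched as a simple calculation. This is a simplified illustration, not a B31.4 procedure: it considers a uniform dead load only (a real assessment would also combine longitudinal pressure and other stresses), and all example numbers are hypothetical.

```python
# Simplified sketch of a maximum-allowable-span check, assuming a beam fully
# restrained against rotation at its supports (M = w * L^2 / 12) under a
# uniform dead load only. A real B31.4 assessment would also combine
# pressure and other longitudinal stresses; example numbers are hypothetical.
import math

def max_allowable_span_in(w_lb_per_in, section_modulus_in3, allow_bending_psi):
    """Span (in) at which fixed-end bending stress reaches the allowable:
    sigma = (w * L^2 / 12) / Z  ->  L = sqrt(12 * sigma_allow * Z / w)."""
    return math.sqrt(12 * allow_bending_psi * section_modulus_in3 / w_lb_per_in)

# Example: pipe + contents weigh 10 lb/in, Z = 15 in^3, allowable 10,000 psi.
span = max_allowable_span_in(10.0, 15.0, 10_000)
print(round(span / 12, 1))  # span expressed in feet
```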
Third-party damage

Loadings from third-party strikes are not normally included in pipe design calculations, but the pipe's design certainly influences the pipe's ability to withstand such forces. According to one study, pipe wall thickness in excess of 11.9 mm could only be punctured by 5% of excavator types in service as of 1995, and none could cause a hole greater than 80 mm. Furthermore, no holes greater than 80 mm have occurred in pipelines operating at design factors of 0.3 or less with a wall thickness greater than 9.1 mm [58]. These types of statistics can be useful in assessing risks, either in a relative sense or in absolute terms (see Chapter 14).
Buckling

Pipe buckling or crushing is most often a consideration for offshore pipelines in deep water. Calculations can estimate the pressure level required for buckling initiation and buckling propagation. It is usually appropriate to evaluate buckle potential when the pipeline is in the depressured state and, thereby, most susceptible to a uniformly applied external force (see Appendix C). A distinction is sometimes made for buckling initiation pressure, since less pressure is needed to propagate a buckle than to initiate one.
Other

The pipe's ability to resist land movement forces such as those generated in seismic events (fault movement, liquefaction, shaking, ground accelerations, etc.) can also be included here. Soil movements associated with changing moisture conditions and temperatures can also cause longitudinal stresses to the pipeline and, in extreme cases, can cause a lack of support around the pipe. The potential for damaging land movements is considered in a later variable, but whether or not such forces are "damaging" depends on the pipeline's strength. The diameter and wall thickness are often good measures of the pipeline's ability to resist land movements. Loss of support is covered in the discussion of spans as well as in the evaluation of potential land movements. Hydrodynamic forces can occur offshore or in any situation where the pipeline is exposed to forces from moving water, including water-borne debris.
Buoyancy and buoyancy mitigation measures can introduce new stresses into the pipeline. Cyclic loadings and fatigue from external forces should be a consideration in material selection and wall thickness determination, as discussed elsewhere. Temperature effects can occur through internal or external changes in temperature. The maximum allowable material stress depends on the temperature. Hence, temperature extremes may require different wall thicknesses. Such changes introduce longitudinal stresses as discussed in Appendix C. In composite pipelines, such as a PE liner in a steel pipe, many more complexities are introduced. Often used to handle more corrosive materials, such composites may have a layer of corrosion-resistant or chemical-degradation-resistant material and a layer of higher strength (structural) material. Because two or more materials are involved, the stresses in each and the interaction effects must be understood. Such calculations are not easily done. Original design calculations must be used (or re-created when not available) to determine minimum required wall thicknesses. The evaluator must then be sure that the additional wall thickness of one or more of the materials will indeed add to the pipe strength and corrosion resistance, and not detract from it. It is conceivable that an increase in wall thickness in one layer may have an undesirable effect on the overall pipe structure. Further, some materials may allow diffusion of the product. When this occurs, composite designs may be exposed to additional stresses.
Accounting for external loads

If detailed calculations are not deemed to be cost effective, the evaluator may choose to use a standard percentage to add to the wall thickness required for internal pressure to account for all other loadings combined. For instance, 10 or 20% additional wall thickness, beyond requirements for internal pressure alone, would be conservative for most steel pipe under normal loading conditions. This percentage should of course be increased for sections that may be subjected to additional loadings or where diameter-to-wall-thickness ratios suggest diminished structural strength.
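The standard-percentage shortcut just described can be sketched as follows; the Barlow pressure-sizing step and all example numbers are illustrative and hypothetical.

```python
# Sketch of the standard-percentage shortcut described above: size the wall
# for internal pressure via Barlow (t = P*D / (2*S*F)), then add a flat
# 10-20% margin for all other loadings combined. Example numbers are
# hypothetical.

def required_wall_in(pressure_psi, diameter_in, smys_psi, design_factor,
                     external_load_adder=0.15):
    """Barlow wall thickness for pressure, plus a percentage adder
    to cover combined external loads."""
    t_pressure = pressure_psi * diameter_in / (2 * smys_psi * design_factor)
    return t_pressure * (1 + external_load_adder)

# 1,000 psi, 16 in OD, X52, design factor 0.72, 15% external-load adder:
print(round(required_wall_in(1000, 16.0, 52_000, 0.72), 3))  # 0.246
```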
Pipe wall thickness

The role of increased wall thickness in risk reduction is intuitive and verified by experimental work. Some general conclusions from this work can be incorporated into a risk analysis. Pipe wall thickness is assumed to be proportional to structural strength: greater wall thickness leads to greater structural strength (though not always linearly), with the accompanying assumption of uniform material properties and absence of defects. Most pipeline systems have incorporated some "extra" wall thickness in the pipe, beyond that required for anticipated loads, and hence have extra strength. This is normally because of the availability of standard manufactured pipe wall thicknesses. Such "off-the-shelf" pipe is often more economical even though it may contain more material than may be required for the intended service. This extra thickness will provide some additional protection against corrosion, external damage, and most other failure mechanisms. Excess strength (and increased margin of safety) also occurs when a pipeline is operated below
its design limits. This is quite common in the industry and happens for a variety of reasons:

- Downstream portions are intended to operate at lower stresses, even though they are designed for system maximum levels.
- Diminishing supplies reduce flows and pressures.
- Changes in service result in decreased stresses.

Regardless of the cause, any extra strength beyond the current operational requirements can be considered in the risk evaluation (Figure 5.4). Research indicates that at design factors of 0.5 and below, a typical corrosion or material defect flaw will fail in the through-wall direction, leading to a leak rather than a rupture. At design factors of 0.3 and below, all flaws, even those with sharp edges, will likewise fail in a similar manner [58]. A through-wall failure mode leading to less product release (and hence, lower consequences) is discussed in Chapter 7. However, research also indicates that increased wall thickness is not a cure-all. Increased brittleness, greater difficulties in detecting material defects, and installation challenges are cited as factors that might offset the desired increase in damage resistance [58]. As previously noted, certain wall thicknesses are also thought to reduce the chances of failure from excavating equipment. Some wall thickness-internal pressure combinations provide enough strength (safety margin) that most conventional excavating equipment cannot puncture them (see page 96 and Chapter 14). However, avoidance of immediate failure is only part of the threat reduction: nonlethal damages can still precipitate future failures through fatigue and/or corrosion mechanisms. When evaluating a variety of pipe materials, distinctions in material strengths and toughness can be made. In terms of external damage protection, a tenth of an inch of steel offers more than does a tenth of an inch of fiberglass.
The evaluator must make this distinction if she wants to compare the risks associated with pipelines constructed of different materials. An important consideration is the difference between nominal or "specified" wall thickness and actual wall thickness. Pipe strength calculations assume a uniform pipe wall, free from any defect that might reduce the material strength. This includes possible reductions in effective wall thickness caused by defects such as cracks, laminations, hard spots, etc. Pipeline integrity assessments are designed to identify areas of weakness that might have originated from any of several causes. Differences between nominal and effective wall thickness include:

- Allowable manufacturing tolerances
- Manufacturing defects
- Installation/construction damages
- Damages suffered during operation.
Manufacturing issues

Strength

It is commonly accepted that older manufacturing and construction methods do not match today's standards. Technological and quality-control advances have improved quality and consistency of both manufactured components and construction techniques. These improvements have varying
5/98 Design Index
for external loads
Wall thickness required for internal pressure
Figure 5.4 Cross section of pipe wall illustrating the pipe safetyfactor.
degrees of importance in a risk assessment. In a more extreme case, depending on the method and age of manufacture, the assumption of uniform material may not be valid. If this is the case, the maximum allowable stress value must reflect the true strength of the material. Modern pipe purchasing specifications address specified minimum yield stress (SMYS) and toughness criteria, among other properties, as critical measures of material strength. These properties are normally documented with certifications from a steel mill, as discussed later. The risk evaluator should ensure that specifications were appropriate, adhered to, and relevant to the current properties ofthe pipe or component. A history of failures that are attributable in part or in whole to a specific pipe manufacture process is sufficient reason to question the allowable stress level of the pipe material, regardless of pipe specifications or pressure test results. In some risk models, pipe materials received from certain steel mills over certain periods of time are penalized due to known weaknesses. Manufacturing tolerances The actual pipe wall thickness is
not usually the nominal wall thickness specified in the purchase agreement. Nominal wall thickness designates a wall thickness that can vary, plus or minus, by some specified manufacturing tolerance. For the purposes of a detailed risk assessment, the lowest effective wall thickness in the section would ideally be used. If actual thickness measurement data are not available, the nominal wall thickness minus the specified maximum manufacturing tolerance can be used. Note, however, that some stress formulas are based on nominal wall thickness rather than actual. In the case of longitudinally welded steel pipe, the weld seam and the area around it are often metallurgically different from the parent steel. If such seams are thought to weaken the pipe wall, this should be taken into account when assessing pipe strength. A higher susceptibility to certain failure mechanisms has been identified in older electric resistance welded (ERW) pipe.

ERW pipe
This applies to pipe manufactured with a low-frequency ERW process, typically seen in pipe manufactured prior to 1970. This type of weld seam is more vulnerable to failure mechanisms such as:

- Lack of fusion
- Hook cracks
- Nonmetallic inclusions
- Misalignment
- Excessive trim
- Fatigue/corrosion fatigue
- Selective corrosion (crevice corrosion)
- Hard spots
- Fatigue at the lamination/ERW interface.

These mechanisms, failure databases, and supporting metallurgical investigations are more fully described in technical literature references. Since 1970, the use of high-frequency ERW techniques coupled with improved inspection and testing techniques has resulted in a more reliable pipe product. U.S. government agencies issued advisories regarding the low-frequency ERW pipe issue, but did not recommend derating the pipe or other special standards. The increased defect susceptibility of this type of pipe is generally mitigated through integrity verification processes.

Risk variables and scoring 5/99

Laminations and blistering

A lamination is a metal separation within the pipe wall. Laminations are not uncommon in older pipelines and generally pose no integrity concerns unless they contribute to the formation of a blister. Hydrogen blistering occurs when atomic hydrogen penetrates the pipe steel to a lamination and forms hydrogen molecules, which cannot then diffuse through the steel. A continuing buildup of hydrogen pressure can separate the layers of steel at the lamination, causing a visible bulging at the ID and OD surfaces. Hydrogen blistering at laminations is a potential contributing cause of failure when there is an aggravating presence of hydrogen, such as from sour crude oil service. Although hydrogen generation from cathodic protection is possible under certain circumstances, this is not thought to be a common failure mechanism. There is no proven method of predicting the failure pressure level of a preexisting blister and no proven method to calculate its crack-driving potential from the standpoint of fatigue [86]. The potential for laminations surviving pressure tests, adding weaknesses to the pipe wall, and contributing to a future failure can be considered by the evaluator when deemed appropriate.
Construction issues

Similar to the discussion on pipe manufacturing techniques, the methods for welding pipe joints have improved over the years. Girth welds today must pass a more stringent inspection than welds from the original construction of the pipeline. Welding standards such as API 1104 (incorporated by reference into U.S. regulations) specify additional and different potential weld defects to be repaired than the standards from previous periods. It is not certain that girth weld defects, as defined by today's welding inspection standards, increase the probability of weld failure in an inspected and tested pipeline. However, this issue illustrates an improving safety and risk-awareness evolution over time, presumably rooted in actual experience and supported by engineering calculations. Arc burns, created during welding, are of concern due to the possibility of tiny cracks forming around the "hard spot" that can be created by the arc burn. A common procedure among pipeline operators is to remove arc burns. Some previous construction techniques might have permitted miter joints, wrinkles in field bends, certain branch reinforcement designs, certain repair methods, and other aspects not currently acceptable for most pipeline construction. These should be considered in evaluating the strength of the system. Offsetting these concerns to some extent might be the evidence of a pipeline system in continuous and reliable operation for many years. In other words, incorporating "withstood the test of time" evidence may be appropriate.
Damages during operations

Failure modes and potential damage can occur when the pipeline is in operation. These include damage from corrosion, dents, gouging, ovality, cracking, stress corrosion cracking (SCC), and selective seam corrosion. These are generally rare phenomena and involve simultaneous and coincident failure mechanisms. Potential corrosion damage and SCC are addressed in Chapter 4. Selective seam corrosion is a possible, but rare, phenomenon on low-frequency ERW pipe. However, the possibility cannot be dismissed entirely. It is an aggressive form of localized corrosion that has no known predictive models associated with it. Not all low-frequency ERW pipe is vulnerable since, apparently, special metallurgy is required for increased susceptibility [86]. Damages can be detected by visual inspection or through integrity verification techniques. Until an evaluation has shown that an indication detected on a pipe wall is potentially serious, it is normally called an anomaly. It is only a defect if it reduces pipe strength significantly, impairing the pipe's ability to be used as intended.
Many anomalies will be of a size that does not require repair because they have not reduced the pipe strength from required levels. However, a risk assessment that examines available pipe strength should probably treat anomalies as evidence of reduced strength and possible active failure mechanisms. A complete assessment of remaining pipe strength in consideration of an anomaly requires accurate characterization of the anomaly: its dimensions and shape. In the absence of detailed remaining-strength calculations, the evaluator can reduce pipe strength by a percentage based on the severity of the anomaly. Higher priority anomalies are discussed in Appendix C.
Stress calculations

Calculation of the required wall thickness for a pipeline to withstand anticipated loads involves several steps. First, Barlow's formula for circumferential stress is used to determine the minimum wall thickness required for internal pressure alone. This calculation is demonstrated in Appendix C. Barlow's calculation assumes a uniform material thickness and strength and requires the input of a maximum allowable stress for the pipe material. It yields a stress value for the extreme fibers of the pipe wall (for the stress due solely to internal pressure). By starting with a maximum allowable material stress, the wall thickness needed to contain a given pressure is calculated. Alternately, inputting a wall thickness into the equation yields the maximum internal pressure that the pipe can withstand. These calculations assume that there are no weaknesses in the pipe. Allowable material stress levels are normally specified in pipe purchase agreements and verified by material test reports accompanying the purchase. These reports are usually called mill certifications of pipe material composition and properties and are issued by the pipe manufacturer's steel mill. In the absence of mill certificates, reliable pipe specification documents (or recent pressure test data, especially if the material ratings are questioned) regarding the maximum pressure to which the pipe has been subjected (usually the preservice hydrostatic test) can be used to calculate a material allowable stress. That is, we input the maximum internal pressure into Barlow's formula to calculate a material allowable stress value. From this allowable stress value, we can then calculate a minimum required wall thickness.
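The two directions of this calculation can be sketched in a few lines of Python. The function names and the example figures are illustrative assumptions, not values from the text:

```python
def barlow_wall_thickness(p_psig, d_in, s_psi):
    """Minimum wall thickness (in.) to contain internal pressure p_psig
    in pipe of diameter d_in at allowable hoop stress s_psi.
    Barlow: hoop stress = P * D / (2 * t)."""
    return p_psig * d_in / (2.0 * s_psi)

def barlow_allowable_stress(p_psig, d_in, t_in):
    """Back-calculate an allowable hoop stress (psi) from a pressure the
    pipe is known to have survived (e.g., a preservice hydrostatic test)."""
    return p_psig * d_in / (2.0 * t_in)

# Illustrative 20-in. pipe (assumed values):
barlow_wall_thickness(2000, 20, 35000)    # -> ~0.571 in.
barlow_allowable_stress(2200, 20, 0.80)   # -> 27,500 psi
```

Either direction can then feed the wall thickness comparison used for scoring the safety factor.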
Scoring the pipe safety factor

The procedure recommended here is to calculate the required pipe wall thickness and compare it to the actual wall thickness (see Figure 5.4), adjusted by any integrity assessment information available. The required wall thickness calculation is more straightforward if it does not include standard safety factors. This is not only in the interest of simplicity, but also because some of the reasons for the safety factors are addressed in other sections of this risk analysis. For instance, regulations often base design safety factors on nearby population density. Population density is part of the consequences section (see Chapter 7, Leak Impact Factor) in this evaluation system and would cloud the issue of pipe strength if considered here also. Consequences are examined in detail separately from probability-of-failure considerations, for purposes of risk assessment clarity and risk management efficiency.
The comparison between the actual and the required wall thickness is most easily done by using a ratio of the two numbers. Using a ratio provides a numerical scale from which points can be assigned. If this ratio is less than one, the pipe does not meet the design criteria: there is less actual wall thickness than is required by design calculations. The pipeline system has not failed either because it has not yet been exposed to the maximum design conditions or because some error in the calculations or associated assumptions has been made. A ratio greater than one means that extra wall thickness (above design requirements) exists. For instance, a ratio of 1.1 means that there is 10% more pipe wall material than is required by design, and 1.25 means 25% more material. The actual wall thickness should account for all possible weaknesses, as discussed earlier and again in the integrity verification variable. This can be done using detailed stress calculations (see Appendix C) or through derating factors devised by the evaluator. When all issues have been considered, a simple point schedule such as that shown in Table 5.1 can be employed to award points based on how much extra wall thickness exists. This schedule uses the ratio of actual pipe wall to required pipe wall and calls this ratio t. A simple equation can also be used instead of Table 5.1. The equation

(t - 1) x 35 = point value
yields approximately the same values and has the benefit of more discrimination between differences in t.

Table 5.1 Point schedule based on extra wall thickness

  t           Points
  <1.0        -10 (WARNING)
  1.0-1.10    3.5
  1.11-1.20   7
  1.21-1.40   14
  1.41-1.60   21
  1.61-1.80   28
  >1.81       35
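As an illustration, the point equation with the schedule's floor and cap can be sketched in Python (the function name and the clamping behavior are my own framing of the scheme described above):

```python
def wall_thickness_points(t_ratio):
    """Points for the ratio of actual to required wall thickness:
    -10 (warning) below 1.0, otherwise (t - 1) * 35, capped at 35."""
    if t_ratio < 1.0:
        return -10  # less wall than design calculations require
    return min((t_ratio - 1.0) * 35.0, 35.0)

round(wall_thickness_points(1.16), 1)  # -> 5.6, matching the worked examples
```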
Some examples to illustrate the pipe component of the safety factor follow.
Example 5.1: Calculating the safety factor

A cross-country steel pipeline is being evaluated. The pipeline transports natural gas. Original design calculations are available. Pipe is the only type of pipeline component in the segment being assessed. The evaluator feels that no extraordinary conditions exist on the line and proceeds as follows:

1. He uses information from the design file to determine the required wall thickness. An MOP of 2000 psig using a grade of steel rated for 35,000-psi maximum allowable stress yields a required wall thickness of 0.60 in. for this diameter of pipe (see Appendix C). External load calculations show the need for an additional 0.08 in. of thickness to handle the additional stresses anticipated. Surge pressures, extreme temperatures, and other loadings are extremely unlikely. The total required wall thickness is therefore 0.60 + 0.08 = 0.68 in.
2. The actual pipe wall thickness installed is a nominal 0.88 in. Manufacturing tolerances allow this nominal to actually be as thin as 0.79 in. No documented thickness readings indicate that the line is any thinner than this 0.79-in. value, and recent integrity verifications indicate no defects, so the evaluator uses 0.79 in. as the actual wall thickness.
3. The ratio of actual to required wall thickness is therefore 0.79 / 0.68 = 1.16. Therefore, 16% of additional protection against external damage or corrosion exists. The point value for 16% extra wall thickness is 5.6, using the equation given earlier.
Example 5.2: Calculating the safety factor

Another section of cross-country steel pipeline is being evaluated. Hydrocarbon liquids are being transported here. In this case, original design calculations are not available. The line is 35 years old and is exposed to varying external loadings. The evaluator proceeds as follows:
1. Because of the age of the line and the absence of original documents, the most recent hydrostatic test pressure is used to determine the maximum allowable stress for the pipe material. Using the test pressure of 2200 psig, the stress level is calculated to be 27,000 psi (see Appendix C). The evaluator is thus reasonably sure that the pipeline can withstand a stress level of 27,000 psi. The maximum operating pressure of the line is 1400 psig. Using this value and a stress level of 27,000 psi, the required wall thickness (for internal pressure only) is calculated to be 0.38 in.
2. Using some general calculations and the opinions of the design department, the evaluator feels that an additional 10% must be added to the wall thickness to allow for external loadings for most conditions. This is an additional 0.04 in. He adds an additional 5% (total of 15% above requirements for internal pressure alone) for situations where the line crosses beneath roadways. This 5% is thought to account for fatigue loadings at all types of uncased road crossings, regardless of pipeline depth, soil type, roadway design, and traffic speed and type. In other words, 15% wall thickness above that required for internal pressure only is the requirement for the worst case situation. This is an additional 0.06 in. for sections that have uncased road crossings.
3. Water hammer effects can produce surge pressures up to 100 psig. Such surges could lead to an internal pressure as high as 1500 psig (100 psig above MOP). This additional pressure requires an additional 0.02 in. of wall thickness.
4. The required minimum wall thicknesses are therefore 0.38 + 0.06 + 0.02 = 0.46 in. for sections with uncased crossings, and 0.38 + 0.04 + 0.02 = 0.44 in. for all other sections.
5. The evaluator next determines the actual wall thickness. Records indicate that the original purchased pipe had a nominal wall thickness of 0.65 in.
When the manufacturing tolerance is subtracted from this, the wall thickness is 0.58 in. Field personnel, however, mention that wall thickness
Risk variables and scoring 5/101
checks have revealed thicknesses as low as 0.55 in. This is confirmed by documents in the files. Additionally, there may be weaknesses related to the low-frequency ERW pipe manufacturing process. No integrity verifications have been performed recently, so there is justification for a conservative assumption of pipe weaknesses. The evaluator chooses to apply a somewhat arbitrary 12% derating of pipe strength due to possible weaknesses and uses 0.48 in. as the actual "effective" wall thickness.
6. The actual-to-required wall thickness ratios are therefore 0.48 / 0.46 = 1.04 and 0.48 / 0.44 = 1.09 for sections with and without uncased road crossings, respectively. These ratios yield point values of 1.4 and 3.2, respectively. Conservatism requires that the evaluator assign a value of 1.4 points for this section of pipeline.
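The chain of allowances and deratings in Example 5.2 can be replayed as a short sketch. The figures come from the example; note that the example rounds the ratio to two decimals before scoring, so its 1.4-point result differs slightly from the unrounded arithmetic here:

```python
# Required wall thickness (Example 5.2 figures)
t_internal = 0.38   # in., internal pressure only (Barlow: 1400 psig MOP, 27,000 psi)
t_surge = 0.02      # in., water hammer allowance (surges to 1500 psig)
t_req_road = t_internal + 0.06 + t_surge    # 15% external-load allowance -> 0.46 in.
t_req_other = t_internal + 0.04 + t_surge   # 10% external-load allowance -> 0.44 in.

# Effective wall: lowest documented field measurement, derated 12%
# for possible low-frequency ERW weaknesses
t_eff = round(0.55 * (1 - 0.12), 2)         # -> 0.48 in.

# Ratios and points; the lower (road-crossing) ratio governs
ratio_road = t_eff / t_req_road             # ~1.04
points = (ratio_road - 1) * 35              # ~1.5 unrounded (text: 1.4)
```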
Alternative pipe strength scoring

An alternative scoring approach for the safety factor could add pipe diameter as a variable to further consider structural strength. Pipe strength, from an external loading standpoint, is related to the pipe's wall thickness and diameter. In general, larger diameter and thicker walled pipes have stronger load-bearing capacities and should be more resistant to external loadings. A thinner wall thickness and smaller diameter will logically increase a pipe's susceptibility to damage [48]. Some risk evaluators have used D/t as a variable both for resistance against external loadings and as a susceptibility-to-cracking indicator. As D/t gets larger, stress levels increase, increasing failure potential and risk. Another risk measure of pipe strength has been proposed [38] as a pipe geometry score, derived from a relationship where failure probability is estimated to be proportional to

1 / (t^2 x d)

where t = pipe wall thickness (in.) and d = pipe diameter (in.). As this number gets higher, the relative risk of failure from external forces increases. Either of these relationships is readily converted into a risk scoring scheme similar to the one described using simple wall thickness ratios.
Non-pipe components

The evaluation of the safety factor should also include non-pipe components whenever they are part of a segment being assessed. If a non-pipe component is the weakest part of the pipeline segment being evaluated, its point score should govern. Components include flanges, valve bodies, fittings, filters, pumps, flow measurement devices, pressure vessels, and others. Each pipeline component has a specified maximum operating pressure. This value is given by the manufacturer or determined by calculations. The lowest pressure rating in the system determines the weakest component and is used to set the design pressure. Ideally, the design pressure as it is used here should not include safety factors for the individual components, for the same reasons it is recommended that
regulatory safety factors be removed from MOP calculations (see page 00). It may be difficult, however, to separate the safety factor from the actual pressure-containing capabilities of the component. A flange, for instance, may be rated by the manufacturer to operate at a pressure of 1400 psig. It can be safely tested for short periods at pressures up to 2160 psig, as certified by the manufacturer. It is not obvious exactly how much pressure the flange can withstand from these numbers, and it is a nontrivial matter to calculate it. For purposes of this risk assessment, the value of 1400 psig should probably be used as the maximum flange pressure even though this value certainly has a safety factor built in. The separation of the safety factor would most likely not be worth the effort. It also makes the comparison to pipe strength (MOP) more valid when safety factors are removed from each. On the other hand, the design calculations for a pressure vessel are usually available. This would allow easy separation of the safety factor. Again, if these calculations are not available, the best course is probably to use the rated operating pressure. This will yield the most conservative answer. Again, consistency is important. As in the pipe analysis, a ratio can be used to show the difference between what a system component can do and what it is presently being asked to do. This can be the pressure rating of the weakest component divided by the system maximum operating pressure. When this ratio is equal to 1, there is no safety factor present (discounting some component safety factors that were not separated). This means that the system is being operated at its limit. If the ratio is less than 1, the system can theoretically fail at any time because there is a component of the system that is not rated to operate at the system MOP. A ratio greater than 1 means that there is a safety factor present; the system is being operated below its limit.
A simple schedule can now be developed to assign points. It may look something like this:

  Design-to-MOP ratio    Points
  >=2.00                 35 pts
  1.75-1.99              28 pts
  1.50-1.74              21 pts
  1.25-1.49              14 pts
  1.10-1.24              7 pts
  1.00-1.09              0 pts
  <1.00                  -10 pts

An equation can also be used instead of the point schedule:

[(Design-to-MOP ratio) - 1] x 35 = points

The steps for the evaluator are therefore:
1. Determine the pressure rating of the weakest system component.
2. Divide this pressure rating (from step 1) by the system-wide MOP.
3. Assign points based on the schedule.

This is equivalent to the previous pipe strength evaluation but uses pressure instead of wall thickness. Because pressure and wall thickness are proportional in a stress calculation, pressure could also be used in the pipe strength analysis.
Note that no credit is given for weaker components that are protected from overpressure by other means. These scenarios are examined in detail in the incorrect operations index (Chapter 6). The reasoning here is that the entire risk picture is being examined in small pieces. The fact that there exists a weak component contributes to this piece of the risk picture, regardless of protective actions taken. Even though a pressure vessel is protected by a relief valve, or a thin-walled pipe section is protected by an automatic valve, the presence of such weak components in the section being evaluated causes the lower design-to-MOP ratio and hence the lower point values. Of course, the evaluator may insert a section break if she feels that a higher pressure section is being penalized by a lower rated item when there is adequate isolation between the two. Regardless of her choice, the adequacy of the isolation will be evaluated in the incorrect operations index (Chapter 6).
Example 5.3: Calculating the safety factor for non-pipe components

The evaluator is examining a section of a jet fuel pipeline. The MOP of the pipeline is 1200 psig. This particular section has an aboveground storage tank that is rated for 1000 psig maximum. The tank is the weakest component in this section. It is located on the low-pressure end of the pipeline and is protected by relief systems and redundant control valves such that it never experiences more pressure than 950 psig. This effectively isolates the tank from the pipeline system and does not require that the pipeline be down-rated to a lower operating pressure. These safety measures, however, are not considered for this item, and the design-to-MOP ratio is as follows:

Weakest component / system MOP = 1000/1200 = 0.83

This is based on the fact that the weakest component can withstand only 1000 psig. This rates a point score of -10 points.
Example 5.4: Calculating the safety factor for non-pipe components

In this section, the only components are pipe and valves. The pipe is designed to operate at 2300 psig by appropriate design calculations. The overall system is rated for an MOP of 800 psig. The valve bodies are nominally rated for maximum pressures of 1400 psig, with permissible hydrostatic test pressures of 2200 psig. The evaluator rates the weakest component, the valve bodies, at 1400 psig. Because he has no exact information as to the strength of the valve bodies, he uses the pressure rating that is guaranteed by the manufacturer for long-term service. The design-to-MOP ratio is, therefore, 1400/800 = 1.75, which yields a point value of 26.3 points.
Example 5.5: Calculating the safety factor for non-pipe components

Here, a section has valves, meters, and pipe. The MOP is 900 psig. The pipe strength is calculated to be 1700 psig. The valve bodies and meters can all withstand pressure tests of 2700 psig and are rated for 1800 psig in normal operation. Again, the evaluator has no knowledge of the exact strength of the valves and meters, so he uses the normal operation rating of 1800 psig. The weakest component, the pipe, governs; therefore, 1700/900 = 1.89, which yields a point value of 31.2 points.
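The three examples above follow the same pattern and can be checked with a small scoring helper, a sketch of the schedule/equation with the same -10 floor and 35-point cap (the function name is mine):

```python
def design_to_mop_points(weakest_rating_psig, mop_psig):
    """Score the non-pipe safety factor: ratio of the weakest
    component's pressure rating to system MOP, scored with the
    (ratio - 1) * 35 equation, floored at -10 and capped at 35."""
    ratio = weakest_rating_psig / mop_psig
    if ratio < 1.0:
        return -10  # a component is not rated for system MOP
    return min((ratio - 1.0) * 35.0, 35.0)

design_to_mop_points(1000, 1200)  # Example 5.3 -> -10
design_to_mop_points(1400, 800)   # Example 5.4 -> 26.25 (text rounds to 26.3)
design_to_mop_points(1700, 900)   # Example 5.5 -> ~31.1 (text: 31.2, from ratio 1.89)
```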
Note that in the preceding examples, the pipeline segments being evaluated have a mixture of components. An alternative and often-preferable segmentation strategy would create a separate pipeline segment, to be independently scored, for each component present. This avoids the blending of dissimilar risks within a segment's score. It has the further benefit of allowing similar components to be grouped and compared, "apples to apples." See the discussions of segmentation strategy (Chapter 2) and of risk evaluations of station facilities (Chapter 13).
B. Fatigue (weighting: 15%)

Fatigue has been identified as the largest single cause of metallic material failure [47]. Historical pipeline failure data do not indicate that this is a dominant failure mechanism in pipelines, but it is nonetheless an aspect of risk. Because a fatigue failure is a brittle failure, it can occur with no warning and with disastrous consequences. Fatigue is the weakening of a material due to repeated cycles of stress. The amount of weakening depends on the number and the magnitude of the cycles. Higher stresses, occurring more often, cause more damage to the material. Factors such as surface conditions, geometry, material processes, fracture toughness, temperature, type of stress applied, and welding processes influence susceptibility to fatigue failure (see Cracking: a deeper look, in this chapter). Predicting the failure of a material when fatigue loadings are involved is an inexact science. Theory holds that all materials have flaws (cracks, laminations, other imperfections), if only at a microscopic level. Such flaws are generally too small to cause a structural failure, even under the higher stresses of a pressure test. These flaws can grow, though, enlarging in length and depth as loads (and, hence, stress) are applied and then released. After repeated episodes of stress increase and reduction (sometimes hundreds of thousands of these episodes are required), the flaw can grow to a size large enough to fail at normal operating pressures. Unfortunately, predicting flaw growth accurately is not presently possible from a practical standpoint. Some cracks may grow at a controlled, rather slow rate, while others may grow literally at the speed of sound through the material. The relationship between crack growth and pressure cycles is based on fracture mechanics principles, but the mechanisms involved are not completely understood. For the purposes of risk analysis, the evaluator need not be able to predict fatigue failures.
He must only be able to identify, in a relative way, pipeline structures that are more susceptible to such failures. Because it is conservative to assume that any amount of cycling is potentially damaging, a schedule can be set up to compare numbers and magnitudes of cycles. Stress magnitudes should be based on a percentage of the normal operating pressures. A 100-psi pressure cycle will have a potentially greater effect on a system rated for 150 psi MOP than on one rated for 1500 psi. Most research points to the requirement of large numbers of cycles, at all but the highest stress levels, before serious fatigue damage occurs. In many pipeline instances, the cycles will be due to changes in internal pressure. Pumps, compressors, control valves, and pigging operations are possible causes of internal pressure cycles. The following example schedule is therefore based on internal pressures as percentages of MOP. If another type of loading is more severe, a similar schedule can be developed. Stresses caused by vehicle traffic over a buried pipeline would be an example of a cyclic loading that may be more severe than the internal pressure cycles. This is admittedly an oversimplification of this complex issue. Fatigue depends on many variables, as noted previously. At certain stress levels, even the frequency of cycles (how fast they are occurring) is found to affect the failure point. For purposes of this assessment, however, the fatigue failure risk is being reduced to the two variables of stress magnitude and number of cycles. The following schedule is offered as a possible simple way to evaluate fatigue's contribution to the risk picture. One cycle is defined as going from the starting pressure to a peak pressure and back down to the starting pressure. The cycle is measured as a percentage of MOP. In this example of assessing fatigue potential, the evaluator uses the scoring protocol illustrated in Table 5.2 to analyze various combinations of pressure magnitudes and cycles. The point value is obtained by finding the worst case combination of pressures and cycles. This worst case is the situation with the lowest point value.
Note the "equivalents" in this table: 9000 cycles at 90% of MOP is thought to be the equivalent of 9 million cycles at 5% of MOP; 5000 cycles at 50% of MOP is equal to 50,000 cycles at 10% of MOP; and so on. In moving around in this table, the upper right corner is the condition with the greatest risk, and the lower left is the least risky condition. The upper left corner and the lower right corner are roughly equal. Note also that Table 5.2 is not linear. The designer of the table did not change point values proportionately with changes in either the magnitude or frequency of cycles. This indicates a belief that changes within certain ranges have a greater impact on the risk picture. The following example illustrates further the use of this table.
Table 5.2 Fatigue scores based on various combinations of pressure magnitudes and cycles

  Pressure cycle                        Lifetime cycles
  (% of MOP)     <1,000   1,000-    10,000-    100,000-     >1,000,000
                          10,000    100,000    1,000,000
  100              7        5         3           1             0
  90               9        6         4           2             1
  75              10        7         5           3             2
  50              11        8         6           4             3
  25              12        9         7           5             4
  10              13       10         8           6             5
  5               14       11         9           7             6
Example 5.6: Scoring fatigue potential

The evaluator has identified two types of cyclic loadings in a specific pipeline section: (1) a pressure cycle of about 200 psig caused by the start of a compressor about twice a week and (2) vehicle traffic causing a 5-psi external stress at a frequency of about 100 vehicles per day. The section is approximately 4 years old and has an MOP of 1000 psig. The traffic loadings and the compressor cycles have both been occurring since the line was installed. For the first case, the evaluator enters the table at (2 starts/week x 52 weeks/year x 4 years) = 416 cycles across the horizontal axis, and (200 psig/1000 psig) = 20% of MAOP on the vertical axis. This combination yields a point score of about 13 points. For the second case, the lifetime cycles are equal to (100 vehicles/day x 365 days/year x 4 years) = 146,000. The magnitude is equal to (5 psig/1000 psig) = 5%. Using these two values, the schedule assigns a point score of 7 points. The worst case, 7 points, is assigned to the section.
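Reading Table 5.2 as a lookup of stress level against lifetime-cycle bands, Example 5.6 can be sketched as follows. The cycle-band boundaries here are an inference from the table's "equivalents," and rounding the stress level up to the next table row is my own conservative convention, so treat both as assumptions:

```python
import bisect

# Table 5.2 as data: rows are pressure cycles as % of MOP; columns are
# lifetime-cycle bands (<1,000; 1,000-10,000; 10,000-100,000;
# 100,000-1,000,000; >1,000,000). Band edges are an assumption.
CYCLE_BAND_EDGES = [1_000, 10_000, 100_000, 1_000_000]
POINTS = {
    100: [7, 5, 3, 1, 0],
    90:  [9, 6, 4, 2, 1],
    75:  [10, 7, 5, 3, 2],
    50:  [11, 8, 6, 4, 3],
    25:  [12, 9, 7, 5, 4],
    10:  [13, 10, 8, 6, 5],
    5:   [14, 11, 9, 7, 6],
}

def fatigue_points(pct_mop, lifetime_cycles):
    """Round the stress level up to the next table row (fewer points =
    more conservative), then pick the column for the cycle count."""
    row = min((r for r in POINTS if r >= pct_mop), default=100)
    col = bisect.bisect_right(CYCLE_BAND_EDGES, lifetime_cycles)
    return POINTS[row][col]

# Example 5.6: the worst case (lowest score) of the two loadings governs
compressor = fatigue_points(20, 416)   # 20% of MOP, 416 cycles
traffic = fatigue_points(5, 146_000)   # 5% of MOP, 146,000 cycles -> 7
min(compressor, traffic)               # -> 7, as assigned in the example
```

Note that the example interpolates the 20% case to "about 13 points," while rounding the stress up to the 25% row gives 12; either way, the traffic loading's 7 points is the worst case and governs.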
Cracking: a deeper look

All materials have flaws and defects, if only at the microscopic level. Given enough stress, any crack will enlarge, growing in depth and width. Crack growth is not predictable under real-world conditions. It may occur gradually or literally at the speed of sound through the material. (See also discussions on possible failure hole sizes in Chapters 7 and 14.) As contributors to fatigue failures, several common crack-enhancing mechanisms have been identified. Hydrogen-induced cracking (HIC), stress corrosion cracking (SCC), and sulfide stress corrosion cracking (SSCC) are recognized flaw-creating or flaw-propagating phenomena (see Chapter 4). The susceptibility of a material to these mechanisms depends on several variables. The material composition is one of the more important variables. Alloys, added in small quantities to iron-carbon mixtures, create steels with differing properties. Toughness is the material property that resists fatigue failure. A trade-off often occurs as material toughness is increased: other important properties such as corrosion resistance, weldability, and brittle-ductile transitions may be adversely affected. The fracture toughness of a material is a measure of the degree of plastic deformation that can occur before full failure. This plays a significant role in fatigue failures. Much more energy is required to fail a material that has a lot of fracture toughness, because the material can absorb some of the energy that might otherwise contribute directly to a failure. A larger defect is required to fail a material having greater fracture toughness. Compare glass (low fracture toughness) with copper (high fracture toughness). In general, as yield strength goes up, fracture toughness goes down. Therefore, flaw tolerance often decreases in higher strength materials. Another contributor to fatigue failures is the presence of stress concentrators.
5/104 Design Index

Any geometric discontinuity, such as a hole, a crack, or a notch, can amplify the stress level in the material. Coupled with the presence of fatigue loadings, the situation can be further aggravated, making the material even more susceptible to this type of failure.

The process of heating and cooling of steel during initial formation, and also during subsequent heating (welding), plays a large role in determining the microstructure of the steel. The microstructures of two identical compositions that were heat treated in different manners may be completely different. One may be brittle (lacking toughness), and the other might be ductile at normal temperatures. The welding process forms what is known as the heat-affected zone (HAZ). This is the portion of the parent metal adjacent to the weld that has an altered microstructure due to the heat of the welding operation. The HAZ is often a more brittle area in which a crack might initiate. Because the HAZ is an important element in the structural strength of the pipe, special attention must be paid to the welding process that creates it. The choice of welding temperature, speed of welding, preheating, post-heating, weld metal type, and even the type of weld flux all affect the creation of the HAZ. Improper welding procedures, whether in the design or the execution of the welding, can create a pipeline that is much more susceptible to failure due to cracking. This element of the risk picture is considered in the potential for human error in the incorrect operations index discussion in Chapter 6.

So-called "avalanche" or "catastrophic" fractures, where crack propagation extends literally for miles along the pipeline, have been seen in large-diameter, high-pressure gas lines. In these "rapid-crack-growth" scenarios, the speed of the crack growth exceeds the pipeline depressurization wave. This can lead to a violent pipe failure in which the steel is literally flattened or radically distorted for great distances. From a risk standpoint, such a rupture extends the release point along the pipeline, but probably does not materially affect the amount of gaseous product released. An increased threat of damage due to flying debris is present.
Preventive actions against this type of failure include crack arresters (sleeves or other attachments to the pipe designed to slow the crack propagation until the depressurization wave can pass) and the use of more crack-resistant materials, including multilayer-wall pipe. If the evaluator is particularly concerned with this type of failure and feels that it can increase the risk picture in her systems, she can adjust the spill score in the leak impact factor (Chapter 7) by giving credit for crack arrester installations and by recognizing the increased susceptibility of large-diameter, high-pressure gas lines (particularly those lacking material toughness).
C. Surge potential (weighting: 10%)

The potential for pressure surges, or water hammer effects, is assessed here. The common mechanism for surges is the sudden conversion of kinetic energy to potential energy. A mass of flowing fluid in a pipeline, for instance, has a certain amount of kinetic energy associated with it. If this mass of fluid is suddenly brought to a halt, the kinetic energy is converted to potential energy in the form of pressure. A sudden valve closure or pump stoppage is a common initiator of such a pressure surge or, as it is sometimes called, a pressure spike. A moving product stream contacting a stationary mass of fluid (while starting and stopping pumps, perhaps) is another possible initiator. This pressure spike is not isolated to the region of the initiator. In a fluid-filled pipeline, a positive pressure wave is propagated upstream of the point where the fluid flow is interrupted.
A negative pressure wave travels downstream from the point of interruption. The pressure wave that travels back upstream along the pipeline adds to the static pressure already in the pipeline. A pipeline with a high upstream pressure might be overstressed as this pressure wave arrives, causing the total pressure to exceed the MOP. The magnitude of the pressure surge depends on the fluid modulus (density and elasticity), the fluid velocity, and the speed of flow stoppage. In the case of a valve closure as the flow stoppage event, the critical aspect of the speed of closure might not be the total time it takes to close the valve. Most of the pressure spike occurs during the last 10% of the closing of a gate valve, for instance.

From a risk standpoint, the situation can be improved through the use of surge protection devices or devices that prevent quick flow stoppages (such as valves being closed too quickly). The operator must understand the hazard and all possible initiating actions before corrective measures can be correctly employed. The evaluator should be assured that the operator does indeed understand surge potential (see Appendix D for calculations). He can then assign points to the section based on the chances of a hazardous surge occurring. To simplify this process, a hazardous surge can be defined as one that is greater than 10% of the pipeline MOP. It may be argued in some cases that a line, in its present service, operates far below MOP and hence a 10% surge will still not endanger the line. A valid argument, perhaps, but perhaps also an unnecessary complication in the risk assessment, since it removes a risk variable that might become important as operations change. The evaluator should decide on a method and then apply it uniformly to all sections being evaluated. The point schedule can be set up with three general categories and room for interpolation between the categories.
For instance, evaluate the chances of a pressure surge of magnitude greater than 10% of system MOP:

High probability: 0 pts
Low probability: 5 pts
Impossible: 10 pts
High probability exists where closure devices, equipment, fluid modulus, and fluid velocity all support the possibility of a pressure surge. No mechanical preventers are in place. Operating procedures to prevent surges may or may not be in place. Low probability exists when surges can happen (fluid modulus and velocity can produce the surge) but are safely dealt with by mechanical devices such as surge tanks, relief valves, and slow valve closures, in addition to operating protocol. Low probability also exists when the chance for a surge to occur is only through a rather unlikely chain of events. Impossible means that the fluid properties cannot, under any reasonable circumstances, produce a pressure surge of magnitude greater than 10% of MOP.
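The surge calculations referenced to Appendix D are not reproduced here, but a rough screening can be sketched with the standard instantaneous-closure (Joukowsky) relation, ΔP = ρ·a·Δv, where a is the pressure-wave speed. The fluid property values below are illustrative assumptions (a light crude in a rigid pipe, pipe-wall elasticity ignored), not numbers from the text:

```python
import math

# Joukowsky water-hammer screen for instantaneous flow stoppage.
# All inputs are illustrative assumptions, not from the original text.
rho = 850.0       # fluid density, kg/m^3 (light crude, assumed)
K = 1.5e9         # fluid bulk modulus, Pa (assumed)
v = 2.0           # flow velocity suddenly stopped, m/s (assumed)
MOP_pa = 6.9e6    # roughly 1000 psig expressed in Pa

# Pressure-wave speed in the fluid, ignoring pipe-wall elasticity,
# which would lower the wave speed somewhat in a real line.
a = math.sqrt(K / rho)

# Surge magnitude for a sudden, complete stoppage of flow.
dP = rho * a * v

# The chapter's screening threshold: a "hazardous" surge exceeds 10% of MOP.
hazardous = dP > 0.10 * MOP_pa
print(round(a), round(dP), hazardous)
```

For these assumed values the surge is well above 10% of MOP, which is why mechanical preventers (surge tanks, relief valves, slow closures) matter in the scoring above.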
Example 5.7: Scoring surge potential

A crude oil pipeline has flow rates and product characteristics that are supportive of pressure surges in excess of 10% of MOP. The only identified initiation scenario is the rapid closure of a mainline gate valve. All of these valves are equipped with automatic electric openers that are geared to operate at a rate less than the critical closure time (see Appendix D). If a valve must be closed manually, it is still not possible to close the valve too quickly: many turns of the valve handwheel are required for each 5% of valve closure. Points for this scenario are assessed at 5.
D. Integrity verifications (weighting: 25%)

Pipeline integrity is ensured by two main efforts: (1) the detection and removal of any integrity-threatening anomalies and (2) the avoidance of future threats to the integrity (protecting the asset). The latter is addressed by the many risk mitigation measures commonly employed by a pipeline operator, as discussed in Chapters 3 through 6. The former effort involves inspection and testing and is fundamental to ensuring pipeline integrity, given the uncertainty surrounding the protection efforts. The purpose of inspection and testing is to validate the structural integrity of the pipeline and its ability to sustain the operating pressures and other anticipated loads. The goal is to test and inspect the pipeline system at frequent enough intervals to ensure pipeline integrity and maintain the margin of safety. This was discussed earlier and illustrated by Figure 5.3.

A defect is considered to be any undesirable pipe anomaly, such as a crack, gouge, dent, or metal loss, that could later lead to a leak or spill. Note that not all anomalies are defects. Some dents, gouges, metal loss, and even cracks will not affect the service life of a pipeline. Possible defects include seam weaknesses associated with low-frequency ERW and electric flash welded pipe, dents or gouges from past excavation damage or other external forces, external corrosion wall losses, internal corrosion wall losses, laminations, pipe body cracks, circumferential weld defects, and hard spots.

A conservative assumption underlying integrity verification is that defects are present in the pipeline and are growing at some rate, despite preventive measures. By inspecting or testing the pipeline at certain intervals, this growth can be interrupted before any defect reaches a failure size. Defects will theoretically be at their largest size immediately before the next integrity verification.
This estimated size can be related to a failure probability by considering uncertainty in measurements and calculations. Therefore, the integrity re-verification interval implicitly establishes a maximum probability of failure for each failure mode. The absence of any defect of sufficient size to compromise the integrity of the pipeline is most commonly proven through pressure testing and/or ILI, the two most comprehensive integrity validation techniques used in the hydrocarbon transmission pipeline industry today. Integrity is also sometimes inferred through the absence of leaks and through verifications of protective systems. For instance, CP counteracts external corrosion of steel pipe, and its potential effectiveness is determined through pipe-to-soil voltage surveys along the length of the pipeline, as described in Chapter 4. All of these measurement-based inspections and tests are occasionally supported by visual inspections of the system. Each of these components of inspection and testing of the pipeline can, and usually should, be a part of the risk assessment. Common methods of pipeline survey, inspection, and testing are shown in Appendix G. Pipe wall inspections include nondestructive testing (NDT) techniques such as ultrasonic, magnetic particle, and dye penetrant examination to find pipe wall flaws that are difficult or impossible to detect with the naked eye.
Evaluating the integrity verification

For purposes of risk assessment, the age and robustness of the most recent integrity verification should drive the score assignment. The performance of a series of inspections, especially using in-line inspection, where successive results can be overlaid and even minor changes detected, is more valuable still.
Age of verification

The age consideration can be a simple proportional scoring approach using a predetermined information deterioration rate (see discussions in Chapter 2). Note that information deterioration refers to the diminishing usefulness of past data in determining current pipe condition. The past data should be used to characterize the current effective wall thickness until better information replaces it. Five- or 10-year information deterioration periods, after which the inspection or test data no longer provide meaningful evidence of current integrity, are common defaults, but these can be set more scientifically. An inspection interval is best established on the basis of two factors: (1) the largest defect that could have survived, or gone undetected in, the last test or inspection and (2) an assumed defect growth rate. A failure size must be estimated in order to calculate a time to failure. For cracklike defects, fracture mechanics and estimates of stress cycles (frequency and magnitude) are required to determine this. For metal loss from corrosion, the failure size for purposes of probability calculations can be determined by two criteria: (1) the depth of the anomaly and (2) a calculated remaining pressure-containing capacity of the defect configuration. Two criteria are advisable since the accepted calculations for remaining strength (see Appendix C) are not considered as reliable when anomaly depths exceed 80% of the wall thickness. Likewise, depth alone is not a good indicator of failure potential because stress level and defect configuration are also important variables [86]. Defect growth rates can be estimated after successive integrity evaluations or, when such information is unavailable, based on conservative assumptions. With knowledge of the maximum surviving defect size, the defect growth rate, and the defect failure size, all of the ingredients are available to establish an optimum integrity verification schedule.
This in turn sets the information deterioration scale. Unfortunately, most of these parameters are difficult to estimate with any degree of confidence, and the resulting schedules will also be rather uncertain.
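The interval arithmetic described above is straightforward once the three ingredients are assumed. The numbers below are illustrative assumptions, not values from the text; the 80%-of-wall failure criterion echoes the depth cap mentioned above, and the safety factor is an arbitrary conservatism:

```python
# Re-inspection interval from defect-growth reasoning.
# All numeric inputs are illustrative assumptions.
wall = 0.375                 # nominal wall thickness, in.
d_surviving = 0.10 * wall    # largest defect plausibly missed by the last inspection
d_failure = 0.80 * wall      # depth treated as the failure criterion (80% of wall)
growth = 0.010               # assumed corrosion growth rate, in./year
safety_factor = 2.0          # conservative margin on the computed life

# Time for the worst surviving defect to grow to failure size,
# then a re-inspection interval with margin.
time_to_failure = (d_failure - d_surviving) / growth
interval = time_to_failure / safety_factor
print(round(time_to_failure, 1), round(interval, 1))
```

An interval set this way can then drive the information deterioration scale: scores earned by the inspection decay to zero over roughly this interval.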
Robustness of verification

Integrity verifications vary in terms of their accuracy and their ability to detect all types of potential integrity threats. The robustness consideration for a pressure test can simply be the pressure level above the maximum operating pressure. This establishes the largest theoretical surviving defect. The role of pressure level and a possible scoring protocol are discussed below.
Visual and NDT inspections (see Appendix G) performed on exposed pipe can be very thorough, but they are very localized assessments. A zone-of-influence approach (see Chapter 8) or ideas taken from statistical sampling techniques can be used to extrapolate integrity information from such localized assessments. For an ILI, the assessment should ideally quantify the ability of the ILI to detect all possible defects and characterize them to a given accuracy. Given the myriad of possible defects, ILI tools, interpretation software, and post-inspection excavation programs, this can be a complex undertaking. In the final analysis, it is again the largest theoretical undetected defect that best characterizes the robustness. One approach is to characterize the ILI program (tool accuracy, data interpretation accuracy, excavation verification protocol) against all possible defect types. When both a pressure test and an ILI have been done, the scores can be additive up to the maximum allowed by the variable weighting.
Pressure test

A pipeline pressure test is usually a hydrostatic pressure test in which the pipeline is filled with water, pressurized to a predetermined pressure, and held at this test pressure for a predetermined length of time. It is a destructive testing technique because defects are discovered by pipe failures during the test. Other test media, such as air, are also sometimes used. Tests with compressible gases carry greater damage potential, since they can precipitate failures and cause more extensive damage than testing with an incompressible fluid. The test pressure exceeds the anticipated operational maximum internal pressure to prove that the system has a margin of safety above that pressure. It is a powerful technique in that it proves the strength of the entire system. It provides virtually indisputable evidence as to the system integrity (within the test parameters). However, pressure testing does not provide information on defects or damage present below its detection threshold. Such surviving defects might later worsen and cause a failure.

As noted previously, all materials have flaws and defects, if only at the microscopic level. Given enough stress, any crack will enlarge, growing in depth and width. Under the constant stress of a pressure test, it is reasonable to assume that a group of flaws beyond some minimum size will grow. Below this minimum size, cracks will not grow unless the stress level is increased. If the stress level is rather low, only the largest cracks will grow. At higher stresses, smaller and smaller cracks will begin to grow, propagating through the material. When a crack reaches a critical size at a given stress level, rapid, brittle failure of the structure is likely. (See previous explanations of fracture toughness and crack propagation in this chapter.) Certain configurations of relatively large defects can survive a hydrostatic test.
A very narrow and deep groove can theoretically survive a hydrostatic test and, because very little wall thickness remains, is more susceptible to failure from any subsequent wall loss (perhaps occurring through corrosion). Such defect configurations are rare, and their failure at a pressure lower than the test pressure would require ongoing corrosion or crack growth.
However, the inability to detect such flaws is a limitation of pressure testing. By conducting a pressure test at high pressures, the pipeline is subjected to stress levels higher than it should ever encounter in everyday operation. Ideally, then, when the pipeline is depressurized after the hydrostatic test, the only cracks left in the material are of a size that will not grow under the stresses of normal operations. All cracks that could have grown to a critical size under normal pressure levels would have already grown and failed under the higher stress levels of the hydrostatic test.

Research suggests that the length of time that a test pressure is maintained is not a critical factor. This is based on the assumption that there is always crack growth and that, whenever the test is stopped, a crack might be on the verge of its critical size and hence close to failure. The pressure level, however, is an important parameter. The term pressure reversal refers to a scenario in which, after a successful pressure test, the pipeline fails at a pressure lower than the test pressure. This occurs when a defect survives the test pressure but is damaged by the test so that it later fails at a lower pressure when the pipeline is repressurized. The higher the test pressure relative to the normal operating pressure, the greater the safety margin. The chances of a pressure reversal become increasingly remote as the margin between test and operating pressures increases. This is explained by the theory of critical crack size discussed earlier.

Immediately after the pressure test, uncertainty about pipeline integrity begins to grow again. Because a new defect could be introduced at any time, or defect growth could accelerate in a very localized region, the test's usefulness is tied to other operational aspects of the pipeline.
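The critical-crack-size argument can be made concrete with the basic fracture-mechanics relation K = Y·σ·√(πa): solving for the crack depth at which the stress intensity reaches the material's toughness shows why any flaw that survived a high test stress is subcritical at the lower operating stress. This is a textbook LEFM sketch, not a calculation from this chapter, and the toughness and stress values below are illustrative assumptions:

```python
import math

def critical_crack_size(K_Ic, sigma, Y=1.0):
    """Critical crack size a_c from K_Ic = Y * sigma * sqrt(pi * a_c)."""
    return (K_Ic / (Y * sigma)) ** 2 / math.pi

# Illustrative values (assumed, not from the text).
K_Ic = 60.0        # fracture toughness, ksi*sqrt(in)
sigma_test = 50.0  # hoop stress at hydrotest, ksi
sigma_op = 36.0    # hoop stress at normal operating pressure, ksi

a_test = critical_crack_size(K_Ic, sigma_test)
a_op = critical_crack_size(K_Ic, sigma_op)

# Any crack that survived the test is smaller than a_test, which is in
# turn smaller than the critical size at operating stress: the margin.
print(round(a_test, 3), round(a_op, 3))
```

Since a_c scales with (1/σ)², raising the test pressure shrinks the largest possible surviving flaw, which is the quantitative sense in which a higher test pressure buys a larger safety margin.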
Introduction of new defects could come from a variety of sources, such as corrosion, third-party damage, soil movements, pressure cycles, etc., all of which contribute to the constantly changing risk picture. For this reason, pressure test data have a finite lifetime as a measure of pipeline integrity. A pipeline can be retested at appropriate intervals to prove its structural integrity.

Interpretation of pressure test results is often a nontrivial exercise. Although the time duration of the test may not be critical, the pressure is normally maintained for at least 4 hours for practical reasons, if not for compliance with applicable regulations. During the test time (which is often between 4 and 24 hours), temperature and strain will affect the pressure reading. This requires a knowledgeable test engineer to properly interpret pressure fluctuations and to distinguish between a transient effect and a small leak in the system or the inelastic expansion of a component.

The evaluation point schedule for pressure testing can confirm proper test methods and assess the impact on risk on the basis of time since the last test and the test level (in relation to the normal maximum operating pressures). An example schedule follows:

(1) Calculate H, where H = (test pressure/MOP):

H < 1.10 (1.10 = test pressure 10% above MOP): 0 pts
1.11 < H < 1.25: 5 pts
1.26 < H < 1.40: 10 pts
H > 1.41: 15 pts

or a simple equation can be used:
(H - 1) × 30 = point score (up to a maximum of 15 points)
and where the minimum = 0 points.
(2) Time since last test:

Points = 10 - (years since test) (minimum = 0 points)

A test 4 years ago: 6 pts
A test 11 years ago: 0 pts

Add the points from (1) and (2) above to obtain the total hydrostatic test score. In this schedule, maximum points are given to a test that occurred within the last year and that was conducted to a pressure more than 40% above the maximum operating pressure.
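The two-part schedule above can be written as a small scoring function. This follows the formulas as given; the function name is ours:

```python
def hydrotest_score(test_pressure, mop, years_since_test):
    """Hydrostatic test score per the schedule above (max 15 + 10 points)."""
    H = test_pressure / mop
    # Part (1): (H - 1) x 30, clamped to the 0-15 point range.
    pressure_pts = min(max((H - 1.0) * 30.0, 0.0), 15.0)
    # Part (2): 10 - (years since test), minimum 0 points.
    age_pts = max(10.0 - years_since_test, 0.0)
    return pressure_pts + age_pts

# A 1400-psig test on a 1000-psig line, performed 6 years ago.
print(round(hydrotest_score(1400, 1000, 6), 2))  # 16.0
```

The same function reproduces the schedule's endpoints: a test at exactly MOP earns only age points, and a very high-pressure but very old test is capped at 15 points.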
Example 5.8: Scoring hydrostatic pressure tests

The evaluator is studying a natural gas line whose MAOP is 1000 psig. This section of line was hydrostatically tested 6 years ago to a pressure of 1400 psig. Documentation on hand indicates that the test was properly performed and analyzed. Points are awarded as follows:

H = 1400/1000 = 1.4
(1) (1.4 - 1) × 30 = 12 pts
(2) 10 - 6 years = 4 pts

Thus, 12 + 4 = 16 pts
In-line inspection

The use of instrumented pigs to inspect a pipeline from the inside is a rapidly maturing technology. In-line inspection, also called smart pigging or intelligent pigging, refers to the use of an electronically instrumented device traveling inside the pipeline that measures characteristics of the pipe wall. Any change in the pipe wall can theoretically be detected. These devices can also detect pipe wall cracks, laminations, and other material defects. Coating defects may someday also be detected in this fashion. The pipe conditions found that require further evaluation are referred to as anomalies.

The industry began to use these tools in the 1980s, and ILI now benefits from advancements in electronics and computing technology that make it much more useful to the pipeline industry. State-of-the-art ILI has advanced to the point that many pipeline companies are basing extensive integrity management programs around such inspection. A wealth of information is expected from such inspections when a high-quality in-line device is used and supported by knowledgeable data analysis. It is widely believed that pipe anomalies too small to be detected through failure under a normal pressure test can be detected through ILI. While increasingly valuable, the technology is arguably inexact, requiring experienced personnel to obtain the most meaningful results. The ILI tools cannot accommodate all pipeline system designs; there are currently restrictions on minimum pipe diameter, pipe shape, and radius of bends. All current ILI tools have difficulties in detecting certain types of problems; sometimes a combination of tools is needed for full defect
detection. In-line inspection is also relatively costly. Precleaning of the pipeline, possible service interruptions, risks of unnecessary repairs, and possible blockages caused by the instrument are all possible additional costs to the operation. The ILI process often involves trade-offs between more sensitive tools (and the accompanying more expensive analyses) requiring fewer excavation verifications, and less expensive tools that generate less accurate results and hence require more excavation verifications. Because this technique discovers existing defects only, it is a lagging indicator of active failure mechanisms. ILI must be performed at sufficient intervals to detect serious defect formations before they become critical.

General types of anomalies that can be detected to varying degrees by ILI include:

Geometric anomalies (dents, wrinkles, out-of-round pipe)
Metal loss (gouging and general, pitting, and channeling corrosion)
Laminations, cracks, or cracklike features.

Some examples of available ILI devices are caliper tools, magnetic flux leakage low- and high-resolution tools, ultrasonic wall thickness tools, ultrasonic crack detection tools, and elastic wave crack detection tools. Each of these tools has specific applications. Most tools can detect previous third-party damage or impacts from other outside forces. Caliper tools are used to locate pipe deformations such as dents or out-of-round areas. Magnetic flux leakage tools identify areas of metal loss, with the size of the detectable area dependent on the degree of resolution of the tool. Ultrasonic wall thickness tools detect general wall thinning and laminations. So-called "crack tools" are specifically designed to detect cracks, especially those whose orientation is difficult to detect by other means. Currently, no single tool is superior in detecting all types of anomalies, and not all ILI technologies are available for smaller pipeline sizes.
Depending on vendor specifications and ILI tool type, detection thresholds can vary. The degree of resolution (the ability to characterize an anomaly) also depends on anomaly size, shape, and orientation in the pipe. The probability of detecting an anomaly using ILI increases with increasing anomaly size; smaller anomalies, as well as certain anomaly shapes and orientations, are more difficult to detect than others.

The most common tools employ either an ultrasonic or a magnetic flux technology to perform the inspection. The ultrasonic devices use sound waves to continuously measure the wall thickness around the entire circumference of the pipe as the pig travels down the line. The thickness measurement is obtained by measuring the difference in travel time between sound pulses reflected from the inner pipe wall and the outer pipe wall. A liquid couplant is often required to transmit the ultrasonic waves from the transducer to the pipe wall. The magnetic flux pig sets up a magnetic field in the pipe wall and then measures this field. Changes in the pipe wall will change the magnetic field. This device emphasizes the detection of anomalies rather than measurement of wall thickness, although experienced personnel can closely estimate defect sizes and wall thicknesses. In either case, all data are recorded. Both types of pigs are composed of several sections to accommodate the measuring
instruments, the recording instruments, a power supply, and cups used for propulsion of the pig.

After receiving an ILI indication of an anomaly, an excavation is usually required to more accurately inspect the pipe, using visual and NDT techniques (see Appendix G), and make repairs. Sample excavating to inspect the pipe is also used to validate the ILI results. The process of selecting appropriate excavation sites from the ILI results can be challenging. The most severe anomalies are obviously inspected, but depending on the resolution of the ILI tool and the skills of the data analyst, significant uncertainty surrounds a range of anomalies that may or may not be serious. Some inaccuracies also exist in current ILI technology, such as distance-measurement errors and errors in pig data interpretation. These inaccuracies make locating anomalies problematic. Probability calculations can be performed to predict anomaly size survivability based on ILI tool detection capabilities, measurement accuracy, and follow-up validation inspections. These, combined with loading conditions and material science concepts, would theoretically allow a probabilistic analysis of future failure rates. Such calculations depend on many assumptions and hence carry significant uncertainty.

Several industry-accepted methods exist for determining corrosion-flaw severity and for evaluating the remaining strength of corroded pipe. ASME B31G, ASME B31G Modified, and RSTRENG are examples of available methodologies. Several proprietary calculation methodologies are also used by pipeline companies. These calculation routines require measurements of the depth, geometry, and configuration of corroded areas. Depending on the depths and proximity of the corroded areas to one another, some will have sufficient remaining strength despite the corrosion damage. The calculation determines whether the area must be repaired.

Scoring the ILI process

As previously noted, ILI robustness should be a part of the evaluation.
It should ideally quantify the ability of the ILI to detect all possible defects and characterize them to a given accuracy. It is the largest theoretical surviving defect that best characterizes the robustness. A complete evaluation of the ILI process can be part of the risk assessment to ensure that the integrity verification is robust. This will require an examination of the services and capabilities of the ILI provider, including
Tool types, performance, and tolerances
Analysis procedures and processes (human interpretations and computer analyses of pig outputs)
Identification and reporting of immediate threats to pipeline integrity
Overall report content and analysis, such as corrosion type, defect type, and minimum defect criteria
Performance specifications and variance approval processes
Vendor personnel qualifications.

An example of scoring the ILI program (tool accuracy, data interpretation accuracy, excavation verification protocol) against all possible defect types is shown in Table 5.3. In this example, the evaluator has identified five general types of defects that are of concern. Each is assigned a weighting, with the relative weights summing to 100% of the integrity threats. The weights are set based on each defect's expected frequency and severity. Historical failure rate data or expert judgment can be used to set these. The third column, Possible points, is simply each defect's weighting multiplied by the integrity verification variable's maximum point value (35 points). The next two columns reflect the capabilities of (1) the ILI tool and data interpretation accuracies and (2) the excavation verification program, respectively. These two capabilities are added together and then multiplied by the defect's point value to get the score for each defect. In the example values shown in the table, the ILI program is judged to be 40% effective in detecting significant cracking: 20% of that effectiveness comes from the ILI tool and 20% from the follow-up excavation program. Similarly, the program is judged to be 95% effective in detecting significant corrosion metal loss: 90% from the tool capability and 5% from the excavation program. No follow-up excavation occurs for the remaining defect types (in this example), so the effectiveness comes entirely from the tool capabilities. The sum of these scores is the assessment of the ILI robustness:

ILI robustness = sum{[defect weight] × [max points] × ([ILI capability] + [excavation verification capability])}
Adding the capabilities captures the belief that increased capabilities of either the tool or the excavation program can offset limitations of the other. The sum of the two capabilities for any defect type is always less than 1.0, since a value of 1.0 would represent 100% detection capability, which is not realistic. In the example of Table 5.3, the ILI tool and data interpretation are very capable in terms of detecting metal loss and geometry issues; little excavation verification is needed. Because those defects represent the bulk of anticipated integrity problems, the integrity verification variable receives a score of about 30.6 out of 35 possible points, based on this ILI program.

Table 5.3 Sample ILI robustness scoring program

Failure mode/defect                             Weight   Possible points   ILI capability   Excavation verification   Score
Fatigue/crack/ERW defects                        10%      3.5              0.2              0.2                        1.40
Corrosion/metal loss                             30%     10.5              0.9              0.05                       9.975
Third-party damage/dents/gouges                  30%     10.5              0.95             0                          9.975
Manufacturing defects/laminations/H2 blisters     5%      1.75             0.8              0                          1.40
Earth movement/ovality/buckling                  25%      8.75             0.9              0                          7.875
Totals                                          100%     35                                                           30.625

These points will be reduced over time until either the information has aged to the point of little value or the ILI is repeated. For instance, if a fixed 5-year deterioration is assumed, the score after 3 years will be:

(5 - 3)/5 × 30.625 ≈ 12 points
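The robustness sum and the fixed-window deterioration adjustment can be checked numerically using the Table 5.3 inputs. Exact arithmetic may differ slightly from any rounded totals in print:

```python
# ILI robustness: sum of weight * max_points * (ILI capability +
# excavation verification capability), using the Table 5.3 inputs.
MAX_POINTS = 35.0

defects = [
    # (name, weight, ILI capability, excavation verification capability)
    ("fatigue/crack/ERW defects",    0.10, 0.20, 0.20),
    ("corrosion/metal loss",         0.30, 0.90, 0.05),
    ("third-party damage/dents",     0.30, 0.95, 0.00),
    ("manufacturing/laminations",    0.05, 0.80, 0.00),
    ("earth movement/ovality",       0.25, 0.90, 0.00),
]

score = sum(w * MAX_POINTS * (ili + exc) for _, w, ili, exc in defects)

# Fixed 5-year information deterioration: value remaining after 3 years.
window, age = 5.0, 3.0
aged_score = max(window - age, 0.0) / window * score

print(round(score, 3), round(aged_score, 3))
```

The same structure extends naturally: adding a pressure test's contribution, capped at the variable's maximum weighting, follows the additive rule described earlier.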
Scoring ILI results

The previous discussion focused on scoring the ILI process: how timely and robust was it? It did not take into account the use of the results of the ILI. That aspect is discussed here. ILI results provide direct evidence of damage and, by inference, of damage potential. Such evidence should be included in a risk assessment. The specific use of direct evidence in evaluating risk variables is discussed in Chapter 2. ILI results provide evidence about possibly active failure mechanisms, as illustrated in Table 5.4.
Integrity assessment and pipe strength

When integrity assessment information becomes available, it can and should become a part of the pipe strength calculation. All defects left uncorrected should reduce calculated pipe strength in accordance with standard engineering stress calculations described in this chapter. Defects that are repaired should impact other risk model variables as direct evidence of failure mechanisms (see Appendix C). Even if no defects are detected, uncertainty has been reduced, with a corresponding reduction in perceived risk. If the information is from very specific portions of the pipeline, such as after a visual or NDT inspection of an excavated section of pipe, a zone-of-influence approach (see Chapter 8) or ideas taken from statistical sampling techniques can be used to expand integrity information for scoring longer stretches of pipeline. Full characterization of the impact of ILI indications on pipe strength would involve statistical analysis of anomaly measurements, considering tool accuracies. But even without detailed calculations, the effective actual wall thickness should be reduced depending on the nature of the anomalies detected in the pipeline segment being scored. For example, a severe corrosion indication might warrant a 50 to 70% reduction in effective pipe wall thickness. This direct consideration of ILI results presumes that specific anomalies have been mapped to specific pipeline segments and that anomalies are few enough to consider individually. If this is not the case, ILI results can also be used to generally characterize the current integrity condition. This can be done either as a preliminary step pending full investigations or as stand-alone evidence.
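As a rough illustration of the wall thickness derating described above, the sketch below applies a severity-based reduction factor. Only the 50 to 70% reduction for a severe corrosion indication comes from the text; the other severity bands, their factors, and all names are assumptions for illustration.

```python
# Hypothetical derating of effective wall thickness from ILI findings.
# The 50-70% reduction for severe corrosion follows the text; the other
# severity bands and factors are assumptions.
REDUCTION_FACTORS = {
    "none": 0.0,       # no anomalies detected: no derating
    "minor": 0.10,     # assumed
    "moderate": 0.30,  # assumed
    "severe": 0.60,    # mid-range of the 50-70% cited in the text
}

def effective_wall(nominal_wall_in, worst_anomaly="none"):
    """Reduce nominal wall thickness by a severity-based factor."""
    return nominal_wall_in * (1.0 - REDUCTION_FACTORS[worst_anomaly])

t_eff = effective_wall(0.250, "severe")  # 0.250-in. pipe derated to 0.100 in.
```

A fuller treatment would replace the lookup table with a statistical analysis of measured anomaly depths and tool accuracy, as the text notes.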
One challenge often faced by evaluators is the requirement that they use results from different types of ILI tools. Different tools will often have different detection capabilities and accuracies. Even similar tools used at different times can show significant variations due to evolving technologies. To make use of all available information, it might be necessary to establish equivalencies between indications from various tools. An indication from a low-resolution tool should be weighted differently from one from a high-resolution tool, given the different uncertainties involved in each.
Approach 1

An example system for generalizing ILI results is outlined here. Under this scoring scheme, pipeline segments are characterized in terms of past damages that might reduce pipe strength and indicate possibly active failure mechanisms. Data from the most recent ILI runs for every pipeline are collected. The pipelines are divided into fixed-length segments, perhaps 100 or 1,000 ft long. For each segment, all ILI indications are accumulated and characterized based on their frequency and severity. Each type of anomaly is counted and weighted and then used in setting five variables, discussed in the following subsections, that characterize the relative amount and severity of damage to the pipe wall.
External damage

This variable represents the relative quantity and severity of dents, gouges, and other indications of outside force damage. It is created by using the counts of dents, dents on welds, and top-side indications from recent ILI results. Each is weighted according to its possible impact on pipe strength. Higher weightings are assigned to anomalies on welds and/or those more likely to be related to third-party damage and, hence, possibly involving a gouge or a more severe contour or dent. As an overall adjustment to risk scores, this variable reduces the previously calculated third-party index by up to 10%.

Corrosion remaining strength

This variable represents the relative remaining strength, from a pressure-containing viewpoint, of the pipe after allowing for metal losses due to corrosion. It represents the relative severity of metal loss by accumulating the lengths and depths of metal loss indications in each pipeline segment. Greater emphasis is given to lengths, in keeping with commonly accepted formulas for calculating the remaining strength of pipe. As an adjustment to risk scores, this variable reduces the previously calculated safety factor by up to 30%.

5/110 Design Index

Corrosion metal loss

This variable represents the relative quantity and severity of corrosion damages. It measures the relative volume of metal losses from corrosion, either internal or external. The volume of each metal loss indication is approximated by assuming a parabolic shape for the metal loss configuration. As an adjustment to risk scores, this variable reduces the previously calculated corrosion index by up to 50%.

Table 5.4  Interpretation of ILI results

ILI anomaly                                                      Failure mechanism
Geometric anomalies (dents, wrinkles, out-of-round pipe)         Third-party damage (normally on top and sides); improper support/bedding (normally on bottom); excessive external loads
Metal loss (gouging and general, pitting, and channeling corrosion)   Gouge = third-party damage; metal loss = external or internal corrosion
Laminations, cracks, or crack-like features                      Fatigue and/or manufacturing defects
Crack

This variable represents the relative quantity and severity of cracking and crack-like indications. As an adjustment to risk scores, this variable reduces the previously calculated safety factor by up to 90%, recognizing the relative unpredictability and severity of cracking.
Pipe wall flaws

This is the combination of the other four variables described above. As an adjustment to risk scores, this variable reduces the previously calculated safety factor by up to 90%, in addition to previous reductions. After this analysis, each pipeline segment has been characterized in terms of the five defect-type variables shown above. Those five variables each impact a previously determined risk score, as noted. In other words, the pipeline segment is penalized for having damages that are evidence of inadequate corrosion control, weakened pipe wall, etc. The amount of the penalty is proportional to the ILI score and the maximum possible value of the risk variable. This worst-case penalty is set on the basis of how much influence that factor could have on failure probability. Default values are set for missing information, usually due to a lack of inspection information. Therefore, the default value represents a condition where no current inspection information is available and the presence of some level of anomalies will be conservatively assumed. In this particular application, it was conservatively assumed that an ILI yields no useful information after 5 years from the inspection date. ILI scores will therefore be assumed to worsen by 20% each year until the default value is reached. The ILI score is improved through visual inspections and removal of any damages present. A pipeline segment that is partially replaced or repaired will show an improvement under this scoring protocol, since the anomaly count will have been reduced, which reduces the corresponding defect penalty. The penalty can also be reduced, even if the ILI score does not improve through anomaly removal. This can happen if a root cause analysis of the ILI anomalies concludes that active mechanisms are not present, despite a poor ILI score. For example, the root cause analysis might use previous ILI results to demonstrate that corrosion damage is old and corrosion has been halted.
This is a rather complex approach and is not fully detailed here. It is included to demonstrate one possible method to more fully consider evidence from previous ILI in a general (not anomaly-specific) manner.
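The mechanics of Approach 1 might be sketched as follows. The 10%, 30%, 50%, and 90% penalty caps are taken from the text; the anomaly weights, the saturation level used to normalize counts to a 0-to-1 damage fraction, and all names are hypothetical.

```python
# Sketch of Approach 1: accumulate weighted ILI anomaly counts per fixed-length
# segment, normalize to a 0-1 damage fraction, and apply the capped penalties
# described in the text. Anomaly weights and the normalization are assumptions.
ANOMALY_WEIGHTS = {"dent": 1.0, "dent_on_weld": 3.0, "top_side_dent": 2.0}
MAX_PENALTIES = {
    "external_damage": 0.10,       # reduces third-party index (per the text)
    "corrosion_strength": 0.30,    # reduces safety factor
    "corrosion_metal_loss": 0.50,  # reduces corrosion index
    "crack": 0.90,                 # reduces safety factor
}

def damage_fraction(anomalies, saturation=10.0):
    """Weighted anomaly count scaled to 0-1 (saturation level is assumed)."""
    total = sum(ANOMALY_WEIGHTS.get(kind, 1.0) for kind in anomalies)
    return min(total / saturation, 1.0)

def penalized_score(base_score, variable, fraction):
    """Reduce a previously calculated risk score by up to the stated cap."""
    return base_score * (1.0 - MAX_PENALTIES[variable] * fraction)

frac = damage_fraction(["dent", "dent_on_weld"])         # (1 + 3) / 10 = 0.4
score = penalized_score(100.0, "external_damage", frac)  # 100 x (1 - 0.10 x 0.4) = 96.0
```

The same pattern would be repeated per segment and per defect-type variable; the 20%-per-year worsening of aged ILI scores could be layered on top, as in the earlier deterioration sketch.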
Approach 2

Another example of an ILI scoring application, in which corrosion evaluations are adjusted by recent ILI results, is presented here. First, an ILI score is generated that characterizes the overall corrosion metal loss in the pipeline segment. This characterization could be based on a system similar to that of Approach 1, or it could simply involve a scale for accumulating the frequency and severity of wall loss damages in a segment.
When an ILI score indicates actual corrosion damage, but risk assessment scores for corrosion potential do not indicate a high potential, then a conflict may exist between the direct and the indirect evidence. It will sometimes not be known exactly where the inconsistency lies until complete investigations are performed. The conflict could reflect an overly optimistic assessment of the effectiveness of mitigation measures (coatings, CP, etc.), or it could reflect an underestimate of the harshness of the environment. It could also be old damage from corrosion that has since been mitigated. To ensure that the corrosion scores always reflect the best available information, limitations could be placed on corrosion scores, in proportion to the ILI scores, pending results of the full investigation. This is illustrated in Table 4.11 of Chapter 4.
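One possible reading of this capping scheme in code, assuming a linear cap and an ILI damage score expressed on a 0-to-1 scale (both are assumptions; the text leaves the exact proportionality to Table 4.11):

```python
# Sketch of Approach 2: when ILI shows actual metal loss, cap the (indirect)
# corrosion-potential score in proportion to the ILI damage score, pending
# full investigation. The linear cap below is an assumption.
def capped_corrosion_score(indirect_score, ili_damage_score, max_score=100.0):
    """Limit the corrosion score so it cannot exceed what ILI evidence supports."""
    cap = max_score * (1.0 - ili_damage_score)  # ili_damage_score in 0-1
    return min(indirect_score, cap)

# An optimistic indirect score of 90 is capped to 60 when ILI shows 40% damage;
# a score already below the cap is left unchanged.
capped = capped_corrosion_score(90.0, 0.4)
```

The cap would be lifted (or made permanent) once the full investigation resolves whether the conflict reflects mitigation effectiveness, environment harshness, or old, halted corrosion.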
E. Land movements (15% weighting in example model)

A pipeline may be subjected to stresses due to land movements and/or geotechnical events of various kinds. These movements may be sudden and catastrophic, or they may be long-term deformations that induce stresses on the pipeline over a period of years. They can cause immediate failures or add considerable stresses to the pipeline and should be carefully considered in a risk analysis. A common categorization of failure causes is external forces. This category blends several failure causes and makes it difficult to separate land movements from third-party damages as a root cause of the failure. Since this separation is critical in risk management efforts, this risk assessment model isolates land movements as a specific failure mode under the Design Index. The land movement threat is very location specific. Many miles of pipeline are located in regions where potentially damaging land movements are virtually impossible. On the other hand, for other pipelines, land movements are the primary cause of failures, outweighing all other failure modes. All of these issues make the assignment of a weighting difficult. It often becomes an issue of model scope, as discussed in Chapter 2. The suggested weighting presented here should be examined in consideration of all pipelines to be assessed with the risk model. Where land movements are a very high threat, a fifth failure probability index can be established specifically for the land movement failure mode. Land movement, or geotechnical issues in general, can be categorized in various ways. One method is proposed whereby such events are referred to as natural hazards and categorized as shown in Table 5.6. In the following paragraphs, land movements are examined as the potential for landslides, soil movements, tsunamis, seismic events, aseismic faulting, and scour and erosion.
Additional threats such as sand dune formation and movement or iceberg scour (see Chapter 12) can be included in an existing category or evaluated independently. Land movements specific to the offshore environment are discussed in Chapter 12.
Landslide

Many of the potentially dangerous land movement scenarios involve a slope (Figure 5.5). The presence of a slope adds the force of gravity. Landslides, rockslides, mudflows, and
Table 5.6  Possible categorization of natural hazards

Category         Subcategory           Specific events
Geotechnical     On-ROW instability    Landslide; soil erosion; liquefaction
                 Off-ROW               Landslide; rockslide; debris flows; tsunami; volcano; fault rupture
Hydrotechnical                         Scour; channel degradation; bank erosion; encroachment; avulsion

Source: Porter, M., and K. W. Savigny, "Natural Hazard and Risk Management for South American Pipelines," Proceedings of IPC 2002: 4th International Pipeline Conference, Calgary, Canada, September 2002.
creep are the more well-known downslope movement phenomena. Another movement involving freezing, thawing, and gravity is solifluction, a cold-regions phenomenon distinct from the more common movements [93]. Landslides can occur after heavy rain, especially on slopes or hillsides with heavy cutting of vegetation or loadings from construction or other activities that disturb the land. Slides can also be caused by seismic activity. Landslide displacement of pipe can cause structural damage and leaks through increased external force loading if the pipeline is buried under displaced soil. Landslides can also happen offshore, where rockfall damage to the pipeline is possible. Slope issues can be an important but often overlooked aspect of changing pipeline stability. Slope alterations by third parties near, but outside, the right of way should be monitored.
Construction activities near or in the pipeline right of way may produce slopes that are not stable and could put the pipeline at risk. These activities include excavation for road or railway cuts, removal of material from the toe of a slope, or adding significant material to the crest of a slope. Given that maintenance activity involving excavation could potentially occur without engineering supervision, standard procedures may be warranted to require notification of an engineer should such conditions be found to exist. In soil sliding analyses, a pipeline experiences axial and bending loads depending on the direction of sliding movement with respect to the pipe axis. Axial strains in the pipeline are caused by soil sliding normal to the pipe axis. If the sliding movement is 90 degrees to the pipe axis, the pipeline will predominantly experience tensile strain, with small compressive bending strains present at the transition zones of the liquefied and nonliquefied soil sections. If the sliding movement is 45 degrees to the pipeline, both compressive and tensile axial strains increase significantly due to the combination of axial and bending loads. Impact loadings are also possible, especially involving rockslides and aboveground pipeline components. An evaluation for rockfall hazards to railroads has identified some key variables to assess. These are shown in Table 5.7. An evaluation methodology like this is readily modified to be applicable to pipelines. Some available databases provide rankings for landslide potential. As with soils data, these are very coarse rankings and are best supplemented with field surveys or local knowledge.
Soils (shrink, swell, subsidence, settling)

Effects that are not slope oriented include changes in soil volume causing shrinkage, swelling, or subsidence. These can be caused by differential heating, cooling, or moisture contents. Sudden subsidence or settling can cause shear forces as well as bending stresses.
Figure 5.5  Sudden slope failure over pipeline. The figure shows the original slope and pipe position, the slope profile after a slow failure, and the pipe position after slope failure; this displacement has added bending stresses to the pipeline.
Table 5.7  Rockfall hazard assessment: relative probability

Category: Source volumes
  Volume of rock that could fall during any one event: uses three categories of potential volume; the highest category is >3 m3.
  Structural geology: "favorable" or "unfavorable" geological orientations.
  Effective mitigation: use of measures to either hold source volumes in place (anchors, dowels, etc.) or protect the track (ditches, berms, etc.); measures are judged as either "effective" or "ineffective."

Category: Likelihood of source volume detaching and reaching railroad track
  Natural barriers: "effective" aprons, dense vegetation, larger distances, etc., that prevent contact with the track.
  Rock size: probability of certain dimensions and fragmentation of falling rock; characterizes resultant rubble on the track.

Source: Porter, M., A. Baumgard, and K. W. Savigny, "A Hazard and Risk Management System for Large Rock Slope Hazards Affecting Pipelines in Mountainous Terrain," Proceedings of IPC 2002: 4th International Pipeline Conference, Calgary, Canada, September 2002.

Many pipelines traverse areas of highly expansive clays that are particularly susceptible to swelling and shrinkage due to moisture content changes. These effects can be especially pronounced if the soil is confined between nonyielding surfaces. Such movements of soil against the pipe can damage the pipe coating and induce stresses in the pipe wall. Good installation practice avoids embedding pipes directly in such soils; a bedding material is used to surround the line to protect the coating and the pipe. Again, rigid pipes are more susceptible to structural damage from expansive soils. The shrink or swell behavior of pipeline foundation soils can lead to excessive pipe deflections. The potential for excessive stresses is often seen in locations where the pipeline connects with a facility (pump station or terminal) on a foundation. In this circumstance, the difference in loading on foundation soils below the pipeline and below the facility could lead to differences in settlement and stresses on connections. Frost heave is a cold-region phenomenon involving temperature and moisture effects that cause soil movements. As ice or ice lenses form in the soil, the soil expands due to the freezing of the moisture. This expansion can cause vertical or uplift pressure on a buried pipeline. The amount of increased load on the pipe is partially dependent on the depth of frost penetration and the pipe characteristics. Rigid pipes are more easily damaged by this phenomenon. Pipelines are generally placed at depths below the frost line to avoid frost loading problems. Previous mining operations (coal, for example) might increase the threat of subsidence in some areas.
Changes in groundwater can also contribute to the subsidence threat. Ground surface subsidence can be a regional phenomenon. It may be a consequence of excessive rates of pumpage of water from the ground and occasionally from production of oil and gas at shallow depths. This phenomenon occurs where fluids are produced from unconsolidated strata that compact as pore fluid pressures are reduced.
Seismic

Seismic events pose another threat to pipelines. Aboveground facilities are generally considered to be more vulnerable than buried facilities; however, high-stress mechanisms can be at work in either case. Liquefaction fluidizes sandy soils to a level at which they may no longer support the pipeline. Strong ground motions can damage aboveground structures. Fault movements sometimes cause severe stresses in buried pipe. A landslide can overstress both aboveground and buried facilities. Threats from seismic events include:

• Pipeline seismic shaking due to the propagation of seismic waves
• Pipeline transverse and longitudinal sliding due to soil liquefaction
• Pipeline flotation and settlement due to soil liquefaction
• Failure of surface soils (soil raveling)
• Seismic-induced tsunami loads that can adversely affect pipelines.

Key variables that influence a pipe's vulnerability to seismic events include:

Pipeline characteristics
• Diameter (empirical evidence from past seismic events indicates that larger diameters have lower failure rates)
• Material (cast iron and other more brittle pipe materials tend to perform worse)
• Age (under the presumption that age is correlated to level of deterioration, older systems might have more weaknesses and, hence, be more vulnerable to damage)
• Joining (continuous pipelines, such as welded steel, tend to perform better than systems with joints such as flanges or couplings)
• Branches (presence of connections and branches tends to concentrate stresses, leading to more failures)

Seismic event characteristics
• Peak ground velocity
• Peak ground deformation
• Fault offset
• Landslide potential
• Liquefaction
• Settlement.

To design a pipeline to withstand seismic forces, earthquake type and frequency parameters must be defined. This is often
done in terms of probability of exceedance. For instance, a common building code requirement in the U.S. is to design for an earthquake event with a probability of exceedance of 10% in 50 years:

Probability of exceedance = 1 - (1 - 1/T)^t

where t = design life and T = return period. For example, a 10% probability of exceedance in 50 years equates to an annual probability of 1 in 475 of a certain ground motion being exceeded each year. A ground motion noted as having a 10% probability of exceedance in 50 years means that the level of ground motion has a low chance of being exceeded in the next 50 years. In fact, there is a 90% chance that these ground motions will not be exceeded. This probability level requires engineers to design structures for larger, rarer ground motions than those expected to occur during a 50-year interval.

Fault displacement is another potential threat to a pipeline. The relative displacement of the ground on opposite sides of an assumed fault rupture will produce strains in a pipeline that crosses the rupture. Several types of fault movements are possible; each produces a different load scenario on the pipeline crossing the fault. Generally, normal fault displacement leads to bending and elongation of the pipeline (tension-dominant loading), whereas reverse fault displacement leads to bending and compression of the pipeline (compression-dominant loading). Strike-slip fault displacement will either stretch or compress the pipeline, depending on the angle at which the pipeline crosses the fault. Oblique faulting is a combination of normal or reverse movement combined with strike-slip movement. Oblique faulting will result in either tension-dominant or compression-dominant loading of the pipeline, depending on the pipeline's fault crossing angle and the direction of the fault movements. Fault displacement resulting in axial compression of the pipeline is generally a more critical condition because it can result in upheaval buckling. Upheaval buckling causes the pipeline to bend or bow in an upward direction.
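The exceedance formula can be checked numerically with a short sketch; the function and variable names are illustrative.

```python
# Probability of exceedance, per the formula in the text:
# P = 1 - (1 - 1/T)**t, with t = design life and T = return period.
def prob_exceedance(design_life_yr, return_period_yr):
    """Chance that a ground motion with the given return period is exceeded
    at least once during the design life."""
    return 1.0 - (1.0 - 1.0 / return_period_yr) ** design_life_yr

# A 475-year return period gives roughly a 10% chance of exceedance in 50 years,
# matching the 1-in-475 annual probability cited in the text.
p = prob_exceedance(50, 475)  # about 0.10
```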
In typical settlement/flotation analyses, the pipeline is subjected to bending where it passes through the liquefied soil section, and the bending is maximum at the transition of liquefied and nonliquefied soil zones. When bending occurs, axial strains are compressive in the inner fibers of the bend and tensile in the outer fibers of the bend, relative to the neutral axis of the pipeline. Calculations of maximum tensile and compressive strains for known faults can be made and incorporated into the assessment. Similar calculations can also be made for maximum strains in areas of seismic-induced soil liquefaction. These calculations require the use of assumptions such as maximum displacement, maximum slip angle, amount of pipeline cover, and intensity of the seismic event. Ideally, such assumptions are also captured in the risk assessment, since they indicate the amount of conservatism in the calculations.
Aseismic faulting

Aseismic faulting refers to shearing-type ground movements that are too small and too frequent to cause measurable earth tremors. Aseismic faults can be of a type that are not discrete fractures in the earth; rather, they can be zones of intensely sheared ground. In the Houston, Texas, area, such zones exist, measure a few tens of feet wide, and are oriented horizontally, perpendicular to the trend of the fault [86]. Evidence of aseismic faulting includes visible damage to streets (often with sharp, fault-like displacements) and foundations, although not all such damage is the result of this phenomenon. Aseismic faulting threatens pipe and pipe coatings because the soil mass is moving in a manner that can produce shear, bending, and buckling stresses on the pipeline. A monitoring program and stress calculations would be expected where a pipeline is threatened by this phenomenon. The risk evaluator can seek evidence that the operator is aware of the potential and has either determined that there is no threat or is taking prudent steps to protect the system.
Tsunamis

Tsunamis are high-velocity waves, often triggered by offshore seismic events or landslides. A seiche is a similar event that occurs in a deep lake [70b]. These events are of less concern in deep water but have the potential to cause rapid erosion and scour in shallow areas. Most tsunamis are caused by a major abrupt displacement of the seafloor. This hazard can be evaluated by considering the potential for seismic events, the beach geometry, the pipeline depth, and other site-specific factors. Often a history of such events is used to assess the threat.
Scour and erosion

Erosion is a common threat for shallow or above-grade pipelines, especially near stream banks or areas subject to high-velocity flood flows. Even buried pipelines are exposed to threats from scour in certain situations. One possibility is for the depth of cover to erode during flood flows, exposing the pipeline. If a lateral force were sufficiently large, the pipeline could become overstressed. Overstressing can also occur through loss of support if the pipeline is undermined. At pipeline crossings where the streambed is composed of rock, the pipeline will often have been placed within a trench cut into the rock. During floods at crossings where flow velocities are extremely high, the potential exists for pressure differences across the top of the pipeline to raise an exposed length of pipeline into the flow, unless a concrete cap has been installed or the overburden is otherwise sufficient to prevent this. Calculations can be performed to estimate the lengths of pipeline that could potentially be uplifted from a rock trench into flows of varying velocities. Fairly detailed scour studies have been performed on some pipelines. These studies can be based on procedures commonly used for highway structure evaluations, such as "Stream Stability at Highway Structures." A scour and bank stability study might involve the following steps:
• Review the history of scour-related leaks and repairs for the pipeline.
• Perform hydraulic calculations to identify crossings with potentially excessive flood flow velocities.
• Obtain current and historic aerial photographs for each of the crossings of potential concern to identify crossings that show evidence of channel instability.
• Perform site-specific geomorphic studies for specific crossings. These studies may suggest mitigation measures (if any) to address scour.
• Perform studies to address the issue of uplift of the pipeline at high-velocity rock bed crossings.

The flood flow velocities for a crossing can be estimated using cross sections derived from the best available mapping, flow rates derived from region-specific regression equations, and channel/floodplain roughness values derived from a review of vegetation from photography or site visits. Upstream and downstream comparisons can be made to identify any significant changes in stream flow regime or visual evidence of scour that would warrant a site-specific geomorphic study. Potential impact by foreign bodies on the pipeline after a scour event can be considered, as well as stresses caused by buoyancy, lateral water movements, pipe oscillations in the current, etc. The maximum allowable velocity against an exposed pipe span can be estimated and compared to potential velocities, as one means of quantifying the threat. The potential for wind erosion, including dune formation and movement, can also be evaluated here.
Evaluating land movement potential

The evaluator can establish a point schedule for assessing the risk of pipeline failure due to land movements. The point scale should reflect the relative risk among the pipeline sections evaluated. If the evaluations cover everything from pipelines in the mountains of Alaska to the deserts of the Middle East, the range of possible point values should similarly cover all possibilities. Evaluations performed on pipelines in a consistent environment may need to incorporate more subtleties to distinguish the differences in risk. As noted, public databases are available that show relative rankings for landslides, seismic peak ground accelerations, soil shrink and swell behavior, scour potential, and other land movement-related issues. These are often available at no cost through government agencies. However, they are often on a very coarse scale and will fail to pick up some very localized, high-potential areas that are readily identified in a field survey or are already well known.
Scoring of land movement

It is often advantageous to develop scoring scales for each type of land movement. This helps to ensure that each potential threat is examined individually. These can be added so that multiple threats in one location are captured. Directly using the relative ranking scales from the available databases, and then supplementing this with local information, can make this a very straightforward exercise. The threat can alternatively be examined in a more qualitative fashion and for all threats simultaneously. The following schedule is designed to cover pipeline evaluations in which the pipelines are in moderately differing environments.

Potential for significant (damaging) soil movements:

High        0 pts
Medium      5 pts
Low        10 pts
None       15 pts
Unknown     0 pts
High: Areas where damaging soil movements are common or can be quite severe. Regular fault movements, landslides, subsidence, creep, or frost heave are seen. The pipeline is exposed to these movements. A rigid pipeline in an area of less frequent soil movements should also be classified here due to the increased susceptibility of rigid pipe to soil movement damage. Active earthquake faults in the immediate vicinity of the pipeline should be included in this category.

Medium: Damaging soil movements are possible but rare or unlikely to affect the pipeline due to its depth or position. Topography and soil types are compatible with soil movements, although no damage in this area has been recorded.

Low: Evidence of soil movements is rarely if ever seen. Movements and damage are not likely. There are no recorded episodes of structural damage due to soil movements. All rigid pipelines should fall into this category as a minimum, even when movements are rare.

None: No evidence of any kind is seen to indicate a potential threat due to soil movements.

Unknown: In keeping with an "uncertainty = increased risk" bias, having no knowledge should register as high risk, pending the acquisition of information that suggests otherwise.
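The schedule above, applied per threat type and summed as the text suggests for capturing multiple threats at one location, might look like the following sketch (the threat list and function name are illustrative):

```python
# Sketch of per-threat land movement scoring using the point schedule in the
# text (High 0, Medium 5, Low 10, None 15, Unknown 0). Summing separate
# scales per threat type is one option the text describes; the threat names
# here are illustrative.
POINTS = {"high": 0, "medium": 5, "low": 10, "none": 15, "unknown": 0}

def land_movement_score(assessments):
    """Sum the points for each individually assessed threat type."""
    return sum(POINTS[level] for level in assessments.values())

score = land_movement_score({
    "landslide": "low",    # 10 pts
    "seismic": "medium",   # 5 pts
    "scour": "none",       # 15 pts
})
```

Note that under this scheme "unknown" scores the same as "high," reflecting the uncertainty-equals-increased-risk bias stated above.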
Mitigation

Initial investigation and ongoing monitoring are often the first choices in mitigation of potentially damaging land movements. Beyond that, many geotechnical and a few pipeline-specific remedies are possible. A geotechnical evaluation is the best method to determine the potential for significant ground movements. In the absence of such an evaluation, however, the evaluator should seek evidence in the form of operator experience. Large cracks in the ground during dry spells, sinkholes or sloughs that appear during periods of heavy rain, foundation problems on nearby buildings, landslide or earthquake potential, observation of soil movements over time or on a seasonal cycle, and displacements of buried structures discovered during routine inspections are all indicators that the area is susceptible. Even a brief survey of the topography, together with information as to the soil type and the climatic conditions, should either readily confirm the operator's experience or establish doubt in the evaluator's mind. Anticipated soil movements are often confirmed by actual measurements. Instruments such as inclinometers and extensometers can be used to detect even slight soil movements. Although these instruments reveal soil movements, they are not necessarily a direct indication of the stresses induced on the pipe; they only indicate an increased probability of additional pipe stress. In areas prone to soil movements, these instruments can be set to transmit alarms to warn when more drastic changes have occurred. Movements of the pipe itself are the best indication of increased stress. Strain gauges attached to the pipe wall can be
used to monitor the movements of the pipeline, but must be placed to detect the areas of greatest pipe strain (largest deflections). This requires knowledge of the most sensitive areas of the pipe wall and the most likely movement scenarios. Use of these gauges provides a direct measure of pipeline strain that can be used to calculate increased stress levels. Corrective actions can sometimes be performed to the point where the potential for significant movements is "none". Examples include dewatering of the soil using surface and subsurface drainage systems and permanently moving the pipeline. While changing the moisture content of the soil does indeed change the soil movement picture, the evaluator should assure herself that the potential has in fact been eliminated and not merely reduced, before she assigns the "none" classification. Moving the pipeline includes burial at a depth below the movement depth (determined by geotechnical study; usually applies to slope movements), moving the line out of the area where the potential exists, and placing the line aboveground (which may not be effective if the pipe supports are subject to soil movement damage). Earthquake monitoring systems tell the user when and where an earthquake has occurred and what its magnitude is, often only moments after the time of occurrence. This is very useful information because areas that are likely to be damaged can be immediately investigated. Specific pipeline designs to withstand seismic loadings are another mitigation measure. Scour and erosion threats can be reduced through armoring of the pipeline and/or reducing the potential through diversions or stabilizations. These can range from placements of gravel or sandbags over the pipeline, to installations of full-scale river diversion or sediment deposition structures, to deep pipeline installation via horizontal directional drill.
The evaluator must evaluate such mitigations carefully, given the relatively high rate of failure of scour and erosion prevention schemes. Where a land movement potential exists and the operator has taken steps to reduce the threat, point values may be adjusted by judging the effectiveness of threat-mitigation actions, including the acts of monitoring, site evaluations, or other information gathering. Monitoring implies that corrective actions are taken as needed. Continuous monitoring offers the benefit of immediate indication of potential problems and should probably reflect lowered risk compared with occasional monitoring. Continuous monitoring can be accomplished by transmitting a signal from a soil movement indicator or from strain gauges placed on the pipeline. Proper interpretation of and response to these signals is implied in awarding the point values. Periodic surveys are also commonly used to detect movements. However, surveying cannot be relied on to detect sudden movements in a timely fashion. In the case of landslide potential, especially a slow-acting movement, stress relieving is a potential situation-specific remedy and can be accomplished by opening a trench parallel to or over the pipeline. This effectively unloads the line from soil movement pressures that may have been applied. Another method is to excavate the pipeline and leave it aboveground. Either of these is normally only a short-term solution. Installing the pipeline aboveground on supports can be a permanent solution, but as already pointed out, may not be a good solution if the supports are susceptible to soil movement damage. The use of barriers to prevent landslide damage, for example, can also be scored as stress relieving.
Example 5.9: Scoring potential for earth movements In the section being evaluated, a brine pipeline traverses a relatively unstable slope. There is substantial evidence of slow downslope movements along this slope, although sudden, severe movements have not been observed. The line is thoroughly surveyed annually, with special attention paid to potential movements. The evaluator scores the hazard as somewhere between "high" and "medium" because potentially damaging movements can occur but have not yet been seen. This equates to a point score of 3 points. The annual monitoring increases the point score by 3 points, so the final score is 6 points.
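The arithmetic of Example 5.9 can be sketched as a small helper. The 10-point cap for this variable is an assumption for illustration, not a value fixed by the text:

```python
def earth_movement_score(hazard_points: int, mitigation_points: int,
                         max_points: int = 10) -> int:
    """Combine the base hazard score with mitigation credit, capped at an
    assumed maximum for this risk variable (10 pts, illustrative only)."""
    return min(hazard_points + mitigation_points, max_points)

# Example 5.9: hazard between "high" and "medium" scores 3 pts;
# annual monitoring adds 3 pts, giving a final score of 6.
score = earth_movement_score(hazard_points=3, mitigation_points=3)
```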
Incorrect Operations Index

A. Design (weighting: 30%)             0-30 pts
   A1. Hazard identification            0-4 pts
   A2. MOP potential                    0-12 pts
   A3. Safety systems                   0-10 pts
   A4. Material selection               0-2 pts
   A5. Checks                           0-2 pts
B. Construction (weighting: 20%)       0-20 pts
   B1. Inspection                       0-10 pts
   B2. Materials                        0-2 pts
   B3. Joining                          0-2 pts
   B4. Backfill                         0-2 pts
   B5. Handling                         0-2 pts
   B6. Coating                          0-2 pts
C. Operations (weighting: 35%)         0-35 pts
   C1. Procedures                       0-7 pts
   C2. SCADA/communications             0-3 pts
   C3. Drug testing                     0-2 pts
   C4. Safety programs                  0-2 pts
   C5. Surveys/maps/records             0-5 pts
   C6. Training                         0-10 pts
   C7. Mechanical error preventers      0-6 pts
D. Maintenance (weighting: 15%)        0-15 pts
   D1. Documentation                    0-2 pts
   D2. Schedule                         0-3 pts
   D3. Procedures                       0-10 pts

Total (100%)                           0-100 pts
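The index roll-up can be sketched as a clamped sum of the four category scores. The category maxima follow the 30/20/35/15 weightings; the item scores in the example call are purely illustrative:

```python
# Maximum points per category (Design 30, Construction 20,
# Operations 35, Maintenance 15 -> 100 total).
MAX_POINTS = {"design": 30, "construction": 20, "operations": 35, "maintenance": 15}

def incorrect_operations_index(scores: dict) -> float:
    """Sum category scores, clamping each to its maximum, to give the
    0-100 Incorrect Operations Index (more points = lower risk)."""
    total = 0.0
    for category, max_pts in MAX_POINTS.items():
        total += min(scores.get(category, 0.0), max_pts)
    return total

# Illustrative scores for one pipeline section:
idx = incorrect_operations_index(
    {"design": 22, "construction": 15, "operations": 28, "maintenance": 11})
```

Missing categories default to zero, again keeping the conservative "uncertainty = increased risk" posture.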
Human error potential It has been reported that 80% of all accidents are due to human fallibility. “In structures, for example, only about 10% of failures are due to a statistical variation in either the applied load or the member resistance. The remainder are due to human error or abuse” [57]. Human errors are estimated to have caused 62%
of all hazardous materials accidents in the United States [85]. In the transportation industry, pipelines are comparatively insensitive to human interactions. Processes of moving products by rail or highway or marine are usually more manpower intensive and, hence, more error prone. However, human error has played a direct or indirect role in most pipeline accidents. Although one of the most important aspects of risk, the potential for human error is perhaps the most difficult aspect to quantify. Safety professionals emphasize that identification of incorrect human behavior may be the key to a breakthrough in accident prevention. The factors underlying behavior and attitude cross into areas of psychology, sociology, biology, etc., and are far beyond the simple assessment technique that is being built here. The role of worker stress is discussed in Chapter 9 and can be an addition to the basic risk assessment proposed here. This index assesses the potential for pipeline failure caused by errors committed by the pipeline personnel in designing, building, operating, or maintaining a pipeline. Human error can logically impact any of the previous probability-of-failure indexes-active corrosion, for example, could indicate an error in corrosion control activities. Scoring error potential in a separate index has the advantage of avoiding duplicate assessments for many of the pertinent risk variables. For instance, assessments of training programs and use of written procedures will generally apply to all failure modes. Capturing such assessments in a central location is a modeling convenience and further facilitates identification of risk mitigation opportunities in the risk management phase. If the evaluator feels that there are differences in human error potential for each failure mode, he can base his score on the worst case or evaluate human error variables separately for each failure mode. 
Sometimes, an action deemed to be correct at the time later proves to be an error, or at least regrettable. Examples are found in the many design and construction techniques that have changed over the years, presumably because it was discovered that previous techniques did not work well or that newer techniques are superior. Low frequency ERW pipe manufacturing processes (see Chapter 5) and the use of certain mechanical couplings (see Chapter 13) are specific examples. These kinds
Figure 6.1 Basic risk assessment model.

Figure 6.2 Assessing human error potential: sample of data used to score the incorrect operations index (design, construction, operations, and maintenance variables).
of issues are really not errors since they presumably were determined based on best industry practices at the time. For a risk assessment, they are normally better assessed in the design index if they relate to strength (wrinkle bends, low frequency ERW pipe, etc.) or in the corrosion index if related to periods with no cathodic protection, incomplete pipe-to-soil reading techniques, etc. Actions such as vandalism, sabotage, or accidents caused by the public are not considered here. These are addressed to some extent in the third-party damage index and in the optional sabotage module discussed in Chapter 9. Many variables thought to impact human error potential are identified here. The risk evaluator should incorporate additional knowledge and experience into this index as such knowledge becomes available. If data, observations, or expert judgment demonstrates correlations between accidents and variables such as years of experience, time of day, level of education, diet, or salary, then these variables can be included in the risk picture. It is not thought that the state of the art has advanced to that point yet. Human interaction can be either positive (preventing or mitigating failures) or negative (exacerbating or initiating failures). Where efforts are made to improve human performance, risk reduction is achieved. Improvements may be achieved through better designs of the pipeline system, development of better employees, and/or improved management programs. Such improvements are a component of risk management. An important concept in assessing human error risk is the supposition that small errors at any point in a process can leave the system vulnerable to failure at a later stage. With this in mind, the evaluator must assess the potential for human error in each of the four phases of a pipeline's life: design, construction, operation, and maintenance.
A slight design or construction error may not be apparent for years until it is suddenly a contributor to a failure. By viewing the entire pipelining process as a chain of interlinked steps, we can also identify possible intervention points, where checks or inspections or special equipment can be inserted to avoid a human error-type failure. Because many pipeline accidents are the result of more than one thing going wrong, there are often several opportunities to intervene in the failure sequence. Specific items and actions that are thought to minimize the potential for errors should be identified and incorporated into the risk assessment. A point schedule can be used to weigh the relative impact of each item on the risk picture. Many of these variables will require subjective evaluations. The evaluator should take steps to ensure consistency by specifying, if only qualitatively, conditions that lead to specific point assignments. The point scores for many of these items will usually be consistent across many pipeline sections if not entire systems. Ideally, the evaluator will find information relating to the pipeline's design, construction, and maintenance on which risk scores can be based. However, it is not unusual, especially in the case of older systems, for such information to be partially or wholly unavailable. In such a case, the evaluator can take steps to obtain more information about the pipeline's history. Metallurgical analysis of materials, depth-of-cover surveys, and research of manufacturers' records are some ways in which
information can be reconstructed. In the absence of data, a philosophy regarding level of proof can be adopted. Perhaps more so than in other failure modes, hearsay and employee testimony might be available and appropriate to varying degrees. The conservative and recommended approach is to assume higher risks when uncertainty is high. As always, consistency in assigning points is important. This portion of the assessment involves many variables with low point values. So, most variables will not have a large impact on risk individually, but in aggregate, the scores are thought to present a picture of the relative potential for human error leading directly to a pipeline failure. Because the potential for human error on a pipeline is related to the operation of stations, Chapter 13 should also be reviewed for ideas regarding station risk assessment.
A. Design (weighting: 30%) Design and planning processes are often not well defined or documented and are often highly variable; consequently, this is perhaps the most difficult aspect to assess for an existing pipeline. The suggested approach is for the evaluator to ask for evidence that certain error-preventing actions were taken during the design phase. It would not be inappropriate to insist on documentation for each item. If design documents are available, a check or certification of the design can be done to verify that no obvious errors have been made. Aspects that can be scored in this portion of the assessment are as follows:
A1. Hazard identification    4 pts
A2. MOP potential            12 pts
A3. Safety systems           10 pts
A4. Material selection       2 pts
A5. Checks                   2 pts
A1. Hazard identification (0-4 pts) Here, the evaluator checks to see that efforts were made to identify all credible hazards associated with the pipeline and its operation. A hazard must be clearly understood before appropriate risk reduction measures can be employed. This would include all possible failure modes in a pipeline risk assessment. Thoroughness is important, as is timeliness: Does the assessment reflect current conditions? Have all initiating events been considered, even the more rare events such as temperature-induced overpressure, fire around the facilities, or safety device failure? (HAZOP studies and other appropriate hazard identification techniques are discussed in Chapter 1.) Ideally, the evaluator should see some documentation that shows that a complete hazard identification was performed. If documentation is not available, she can interview system experts or explore other ways to verify that at least the more obvious scenarios have been addressed. Points are awarded (maximum of 4 points) based on the thoroughness of the hazard studies, with a documented, current, and formal hazard identification process getting the highest score.
A2. MOP potential (0-12 pts) The possibility of exceeding the pressure for which the system was designed is an element of the risk picture. Obviously, a system where it is not physically possible to exceed the design pressure is inherently safer than one where the possibility exists. This often occurs when a pipeline system is operated at levels well below its original design intent. This is a relatively common occurrence as pipeline systems change service or ownership or as throughputs turn out to be less than intended. The ease with which design limits might be exceeded is assessed here. The first things required for this assessment are knowledge of the pressure source (pump, compressor, connecting pipelines, tank, well, etc.) and knowledge of the system strength. Then the evaluator must determine the ease with which an overpressure event could occur. Would it take only the inadvertent closure of one valve to rapidly build a pressure that is too high? Or would it take many hours and many missed opportunities before pressure levels were raised to a dangerous level? Structural failure can be defined (in a simplified way) as the point at which the material changes shape under stress and does not return to its original form when the stress is removed. When this "inelastic" limit is reached, the material has been structurally altered from its original form and its remaining strength might have changed as a result. The structure's ability to resist inelastic deformation is one important measure of its strength. The most readily available measure of a pipeline's strength will normally be the documented maximum operating pressure, or MOP. The MOP is the theoretical maximum internal pressure to which the pipeline can be subjected, reduced by appropriate safety factors. The safety factors allow for uncertainties in material properties and construction.
MOP is determined from stress calculations, with internal pressure normally causing the largest stresses in the wall of the pipe. Material stress limits are theoretical values, confirmed (or at least evidenced) by testing, that predict the point at which the material will fail when subjected to high stress. External forces also add stress to the pipe. These external stresses can be caused by the weight of the soil over a buried line, the weight of the pipe itself when it is unsupported, temperature changes, etc. In general, any external influence that tries to change the shape of the pipe will cause a stress. Some of these stresses are additive to the stresses caused by internal pressure. As such, they must be allowed for in the MOP calculations. Hence, care must be taken to ensure that the pipeline will never be subjected to any combination of internal pressures and external forces that will cause the pipe material to be overstressed. Note that MOP limits include safety factors. If pipeline segments with different safety factors are being compared, a different measure of pipe strength might be more appropriate. Appendix C discusses pipe strength calculations. To define the ease of reaching MOP (whichever definition of MOP is used), a point schedule can be designed to cover the possibilities. Consider this example point-assignment schedule:

A. Routine (0 pts)
Definition: Where routine, normal operations could allow the system to reach MOP. Overpressure would occur fairly rapidly due to incompressible fluid or rapid introduction of relatively high volumes of compressible fluids. Overpressure is prevented only by procedure or a single-level safety device.

B. Unlikely (5 pts)
Definition: Where overpressure can occur through a combination of procedural errors or omissions, and failure of safety devices (at least two levels of safety). For example, a pump running in a "deadheaded" condition by the accidental closing of a valve, and two levels of safety system (a primary safety and one redundant level of safety) failing, would overpressure the pipeline.

C. Extremely unlikely (10 pts)
Definition: Where overpressure is theoretically possible (sufficient source pressure), but only through an extremely unlikely chain of events including errors, omissions, and safety device failures at more than two levels of redundancy. For example, a large diameter gas line would experience overpressure only if a mainline valve were closed, and communications (SCADA) failed, and downstream vendors did not communicate problems, and local safety shutdowns failed, and the situation went undetected for a matter of hours. Obviously, this is an unlikely scenario.

D. Impossible (12 pts)
Definition: Where the pressure source cannot, under any conceivable chain of events, overpressure the pipeline.

In studying the point schedule for ease of reaching MOP, the "routine" description implies that MOP can be reached rather easily. The only preventive measure may be procedural, where the operator is relied on to operate 100% error free, or a simple safety device that is designed to close a valve, shut down a pressure source, or relieve pressure from the pipeline. If perfect operator performance and one safety device are relied on, the pipeline owner is accepting a high level of risk of reaching MOP.
Error-free work techniques are not realistic, and industry experience shows that reliance on a single safety shutdown device, either mechanical or electronic, allows for some periods of no overpressure protection. Few points should be awarded to such situations. Note that the evaluator is making no value judgments at this stage as to whether or not reaching MOP poses a serious threat to life or property. Such judgments will be made when the "consequence" factor is evaluated. The "unlikely" description, category B, implies a pressure source that can overpressure the segment and protection via redundant levels of safety devices. These may be any combination of relief valves; rupture disks; mechanical, electrical, or pneumatic shutdown switches; or computer safeties (programmable logic controllers, supervisory control and data acquisition systems, or any kind of logic devices that may trigger an overpressure prevention action). The requirement is that at least two independently operated devices be available to prevent overpressure of the pipeline. This allows for the accidental failure of at least one safety device, with backup provided by another. Operator procedures must also be in place to ensure the pipeline is always operated at a pressure level below the MOP. In this sense, any safety device can be thought of as a backup to proper operating procedures. The point value of category B should reflect the chances, relative to the other categories, of a
procedural error coincident with the failure of two or more levels of safety. Industry experience shows that this is not as unlikely an occurrence as it may first appear. Category C, "extremely unlikely," should be used for situations where sufficient pressure could be introduced and the pipeline segment could theoretically be overpressured, but the scenario is even more unlikely than category B. An example of a difference between categories B and C would be a more compressible fluid or a larger volume pipeline segment in category C, requiring longer times to reach critical pressures. As this chance becomes increasingly remote, points awarded should come closer to a category D score. The "impossible" description of category D is fairly straightforward. The pressure source is deemed to be incapable of exceeding the MOP of the pipeline under any circumstances. Potential pressure sources must include pumps, compressors, wellhead pressure, connecting pipelines, and the often overlooked thermal sources. A pump that, when operated in a deadheaded condition, can produce 1000-psig pressure cannot, theoretically, overpressure a line whose MOP is 1400 psig. In the absence of any other pressure source, this situation should receive the maximum points. The potential for thermal overpressure must not be overlooked, however. A section of liquid-full pipe may be pressured beyond its MOP by a heat source such as sun or fire if the liquid has no room to expand. Further, in examining the pressure source, the evaluator may have to obtain information from connecting pipelines as to the maximum pressure potential of their facilities. It is sometimes difficult to obtain the maximum pressure value as it must be defined for this application, assuming failure of all safety and pressure-limiting devices. In the next section, a distinction is
made between safety systems controlled by the pipeline operator and those outside his direct control.
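The four-category schedule for ease of reaching MOP amounts to a category-to-points lookup, with unknown situations treated conservatively. A sketch, assuming the point values from the example schedule (the category keys are illustrative):

```python
# Point values from the example schedule; keys are illustrative labels.
MOP_POTENTIAL_POINTS = {
    "routine": 0,              # normal operations could reach MOP
    "unlikely": 5,             # procedural error plus >= 2 safety failures required
    "extremely_unlikely": 10,  # theoretically possible, > 2 levels of redundancy
    "impossible": 12,          # pressure source cannot overpressure the line
}

def mop_potential_score(category: str) -> int:
    """Score the ease of reaching MOP. Unrecognized input is treated
    conservatively as 'routine' (0 pts), per the uncertainty bias."""
    return MOP_POTENTIAL_POINTS.get(category, 0)
```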
A3. Safety systems (0-10 pts) Safety devices, as a component of the risk picture, are included here in the incorrect operations index (Figure 6.2) rather than the design index of Chapter 5. This is done under the premise that safety systems exist as a backup for situations in which human error causes or allows MOP to be reached. As such, they reduce the possibility of a pipeline failure due to human error. The risk evaluator should carefully consider any and all safety systems in place. A safety system or device is a mechanical, electrical, pneumatic, or computer-controlled device that prevents the pipeline from being overpressured. Prevention may take the form of shutting down a pressure source or relieving pressurized pipeline contents. Common safety devices include relief valves, rupture disks, and switches that may close valves, shut down equipment, etc., based on sensed conditions. A level of safety is considered to be any device that unilaterally and independently causes an overpressure prevention action to be taken. When more than one level of safety exists, with each level independent of all other devices and their power sources, redundancy is established (Figure 6.3). Redundancy provides backup protection in case of failure of a safety device for any reason. Two, three, and even four levels of safety are not uncommon for critical situations. In some instances, safety systems exist that are not under the direct control of the pipeline operator. When another pipeline or perhaps a producing well is the pressure source, control of that source and its associated safeties may rest with
Figure 6.3 Safety systems. (Panels show pump overpressure protection with one level of safety and with two levels of safety; elements include the pump motor, high-pressure pump, safety relief valve, and vent line.)
the other party. In such cases, allowances must be made for the other party's procedures and operating discipline. Uncertainty may be reduced when there is direct inspection or witnessing of the calibration and maintenance of the other party's safety equipment, but this does not replace direct control of the equipment. There is some redundancy between this variable and the previously assessed MOP potential since safety systems are noted there also. A point schedule should be designed to accommodate all situations on the pipeline system. [Note: The evaluator must decide if she will be considering the pipeline system as a whole (ignoring section breaks) for this item. A safety system will often be physically located outside of the pipeline segments it is protecting (see Example 6.3 later).] An example schedule follows:

A. No safety devices present          0 pts
B. On site, one level only            3 pts
C. On site, two or more levels        6 pts
D. Remote, observation only           1 pt
E. Remote, observation and control    3 pts
F. Non-owned, active witnessing      -2 pts
G. Non-owned, no involvement         -3 pts
H. Safety systems not needed         10 pts
In this example schedule, more than one safety system “condition” may exist at the same time. The evaluator defines the safety system and the overpressure scenarios. He then assigns points for every condition that exists. Safety systems that are not thought to adequately address the overpressure scenarios should not be included in the evaluation. Note that some conditions cause points to be subtracted.
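Because multiple conditions can apply at once and some carry negative adjustments, the scoring reduces to summing points over all applicable conditions. A minimal sketch; the clamping to this variable's 0-10 range is an assumption, not stated in the text:

```python
# Point values from the example schedule for safety-system conditions.
SAFETY_CONDITION_POINTS = {
    "A": 0,   # no safety devices present
    "B": 3,   # on site, one level only
    "C": 6,   # on site, two or more levels
    "D": 1,   # remote, observation only
    "E": 3,   # remote, observation and control
    "F": -2,  # non-owned, active witnessing
    "G": -3,  # non-owned, no involvement
    "H": 10,  # safety systems not needed
}

def safety_systems_score(conditions, max_pts=10, min_pts=0):
    """Sum points for every condition that applies; the final score is
    clamped to the 0-10 range of the A3 variable (an assumption here)."""
    total = sum(SAFETY_CONDITION_POINTS[c] for c in conditions)
    return max(min_pts, min(total, max_pts))
```

For instance, an on-site pressure switch plus relief valve (condition C) with remote observation and control (condition E) would sum to 9 points before any communication-reliability adjustment the evaluator chooses to apply.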
A. No safety devices present. In this case, reaching MOP is possible, and no safety devices are present to prevent overpressure. Inadequate or improperly designed devices would also fall into this category. A relief valve that cannot relieve enough to offset the pressure source is an example of an ineffective device. Lack of thermal overpressure protection where the need exists is another example of a situation that should receive 0 pts.

B. On site, one level only. For this condition, a single device, located at the site, offers protection from overpressure. The site can be the pipeline or the pressure source. A pressure switch that closes a valve to isolate the pipeline segment is an example. A properly sized relief valve on the pipeline itself is another example.

C. On site, two or more levels. Here, more than one safety device is installed at the site. Each device must be independent of all others and be powered by a power source different from the others. This means that each device provides an independent level of safety. More points should be awarded for this situation because redundancy of safety devices obviously reduces risk.

D. Remote, observation only. In this case, the pressure is monitored from a remote location. Remote control is not possible and automatic overpressure protection is not present. While not a replacement for an automatic safety system, such
remote observation provides some additional backup: the monitoring personnel can at least notify field personnel to take action. Points can be given for such systems when such observation is reliable 95 to 100% of the time. An example would be a pressure that is monitored and alarmed (visible and/or audible signal to an observer) in a control room that is manned 24 hours a day and that has a communication reliability rate of more than 95%. On notification of an abnormal condition, the observer can dispatch personnel to correct the situation.

E. Remote, observation and control. This is the same situation as the previous one with the added feature of remote control capabilities. On notification of rising pressure levels, the observer is able to remotely take action to prevent overpressure. This may mean stopping a pump or compressor and opening or closing valves. Remote control capability can significantly impact the risk picture only if communications are reliable, 95% or better, for both receiving of the pressure signal and transmission of the control signal. Remote control generally takes the form of opening or closing valves and stopping pumps or compressors. This condition receives more points because more immediate corrective action is made possible by the addition of the remote control capabilities.
F. Non-owned, active witnessing. Here, overpressure prevention devices exist but are not owned, maintained, or controlled by the owner of the equipment that is being protected. The pipeline owner takes steps to ensure that the safety device(s) is properly calibrated and maintained by witnessing such activities. Review of calibration or inspection reports without actually witnessing the activities may, in the evaluator's judgment, also earn points. Points awarded here should reflect the uncertainties arising from not having direct control of the devices. By assigning negative points here, identical safety systems under different ownerships would have different point values. This reflects a difference in the risk picture caused by the different levels of operator control and involvement.

G. Non-owned, no involvement. Here again, the overpressure devices are not owned, operated, or maintained by the owner of the equipment that is being protected. The equipment owner is relying on another party for her overpressure protection. Unlike the previous category, here the pipeline owner is taking no active role in ensuring that the safety devices are indeed kept in a state of readiness. As such, points are subtracted: the safety system effectiveness has been reduced by the added uncertainty.
H. Safety systems not needed. In the previous item, MOP potential, the most points were awarded for the situation in which it is impossible for the pipeline to reach MOP. Under this scenario, the highest level of points is also awarded for this variable because no safety systems are needed.

For all safety systems, the evaluator should examine the status of the devices under a loss-of-power scenario. Some valves and switches are designed to "fail closed" on loss of their power supplies (electric or pneumatic, usually). Others are designed to "fail open," and a third class remains in its last position: "fail
last." The important thing is that the equipment fails in a mode that leaves the system in the least vulnerable condition. Three examples of the application of this point schedule follow.
Example 6.1 :Scoring safety systems (CaseA ) In the pipeline section considered here, a pump station is present. The pump is capable of overpressuring the pipeline. To prevent this, safety devices are installed. A pressure-sensitive switch will stop the pump and allow product to flow around the station in a safe manner. Should the pressure switch fail to stop the pump, a relief valve will open and vent the entire pumped product stream to a flare in a safe manner. This station is remotely monitored by the transmission of appropriate data (including pressures) to a control room that is manned 24 hours per day. Remote shutdown of the pump from this control room is possible. Communications are deemed to be 98% reliable.
Conditions present    Points
C                     6
E                     3

Total points = 9

Note that two levels of safety are present (pressure switch and relief valve), and that full credit is given to the remote capabilities only after communication effectiveness is assessed.

Example 6.2: Scoring safety systems (Case B) For this example, a section of a gas transmission pipeline has a supplier interconnect. This interconnect leads directly to a producing gas well that can produce pressures and flow rates which can overpressure the transmission pipeline. Several levels of safety are present at the well site and under the control of the producer. The producer has agreed by contract to ensure that the transmission pipeline owner is protected from any damaging pressures due to the well operation. The pipeline owner monitors flow rates from the producer as well as pressures on the pipeline. This monitoring is on a 24-hour basis, but no remote control is possible.

Conditions present    Points
C                     6
E                     1
G                     -3

Total points = 4

Note that credit is given for condition C even though the pipeline owner has no safety devices of his own in this section. The fact that the devices are present warrants points; the fact that they are not under the owner's control negates some of those points (condition G). Also, while contractual agreements may be useful in determining liabilities after an accident, they are not thought to have much impact on the risk picture. If the owner takes an active role in ensuring that the safety devices are properly maintained, condition F would replace G, yielding a total point score of 5.

Example 6.3: Scoring safety systems (Case C) In this example, a supplier delivers product via a high-pressure pump into a pipeline section that relies on a downstream section's relief valve to prevent overpressure. The supplier has a pressure switch at the pump site to stop the pump in the event of high pressure. The pipeline owner inspects the pump station owner's calibration and inspection records for this pressure switch. The pump station owner remotely monitors the pump station operation 24 hours per day.

Conditions present    Points
B                     3
F-G                   -2.5

Total points = 0.5

Note that in this case credit is not given for a relief valve not in the section being evaluated. The evaluator has decided that the downstream relief valve does not adequately protect the pipeline section being assessed. Note also that no points are given for the supplier's remote monitoring. Again, the evaluator has made the decision to simplify; he does not wish to be evaluating suppliers' systems beyond the presence of direct overpressure shutdown devices located at the site. Finally, note that the evaluator has awarded points for the pipeline owner's inspection of the supplier's maintenance records. He feels that, in this case, an amount of risk reduction is achieved by such inspections.

A4. Material selection (0-2 pts)

The evaluator should look for evidence that proper materials were identified and specified with due consideration to all stresses reasonably expected. This may appear to be an obvious point, but when coupled with ensuring that the proper material is actually installed in the system, a number of historical failures could have been prevented by closer consideration of this variable. The evaluator should find design documents that consider all anticipated stresses in the pipeline components. This would include concrete coatings, internal and external coatings, nuts and bolts, all connecting systems, supports, and the structural (load-bearing) members of the system. Documents should show that the corrosion potential, including incompatible material problems and welding-related problems, was considered in the design. Most importantly, a set of control documents should exist. These control documents, normally in the form of pipeline specifications, give highly detailed data on all system components, from the nuts and bolts to the most complex instrumentation. The specifications will address component sizes, material compositions, paints and other protective coatings, and any special installation requirements. Design drawings specify the location and assembly parameters of each component. When any changes to the pipeline are contemplated, the control documents should be consulted. All new and replacement materials should conform to the original specifications, or the specifications must be formally reviewed and revised to allow different materials. By rigidly adhering to these documents, the chance of mistakenly installing incompatible materials is reduced. A management-of-change (MOC) process should be in place.
Awarding of points for this item should be based on the existence and use of control documents and procedures that govern all aspects of pipeline material selection and installation. Two points are awarded for the best use of controls, 0 points if controls are not used.
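The arithmetic behind Examples 6.1 through 6.3 is simply a sum of condition point values. The sketch below reproduces it in Python. The full A-H schedule is defined by the evaluator, so the mapping here is illustrative only; values are those that can be read from the worked cases, and the 1-point value for monitoring without remote control is our reading of Case B.

```python
# Illustrative condition-to-points map inferred from Cases A-C.
# A real schedule covers conditions A-H; values here are only those
# suggested by the worked examples, plus labeled assumptions.
SAFETY_POINTS = {
    "B": 3.0,     # on-site shutdown device (per Case C)
    "C": 6.0,     # owned-level safety devices present (Cases A and B)
    "E": 3.0,     # 24-hour remote monitoring with remote control (Case A)
    "E-": 1.0,    # monitoring only, no remote control (assumption from Case B)
    "F": -2.0,    # non-owned devices, active witnessing (inferred)
    "F-G": -2.5,  # non-owned, records review only (Case C)
    "G": -3.0,    # non-owned, no involvement (Case B)
}

def safety_score(conditions):
    """Sum the point contributions of the conditions judged present."""
    return sum(SAFETY_POINTS[c] for c in conditions)

print(safety_score(["C", "E"]))        # Case A: 9.0
print(safety_score(["C", "E-", "G"]))  # Case B: 4.0
print(safety_score(["B", "F-G"]))      # Case C: 0.5
```

As in the examples, swapping condition G for F in Case B raises the total by one point.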
A5. Checks (0-2 pts) Here, the evaluator determines if design calculations and decisions were checked at key points during the design process. In the U.S., a licensed professional engineer often certifies designs. This is a possible intervention point in the design process. Design checks by qualified professionals can help to prevent errors and omissions by the designers. Even the most routine designs require a degree of professional judgment and are consequently prone to error. Design checks can be performed at any stage in the life of the system. It is probably impossible to accurately gauge the quality of the checks; evidence that they were indeed performed will probably have to suffice. Two points are awarded for sections whose design process was carefully monitored and checked.
B. Construction (suggested weighting: 20%) Ideally, construction processes would be well defined, invariant from site to site, and benefit from a high pride of workmanship among all constructors. This would, of course, ensure the highest quality and consistency in the finished product, and inspection would not be needed. Unfortunately, this is not the present state of pipeline construction practice. Conformance specifications are kept wide to allow for a myriad of conditions that may be encountered in the field. Workforces are often transient, and awarding of work contracts is often done solely on the basis of lowest price. This makes many projects primarily price driven; shortcuts are sought and speed is often rewarded over attention to detail. For the construction phase, the evaluator should find evidence that reasonable steps were taken to ensure that the pipeline section was constructed correctly. This includes checks on the quality of workmanship and, ideally, another check on the design phase. While the post-construction pressure test verifies the system strength, improper construction techniques could cause problems far into the future. Residual stresses, damage to corrosion prevention systems, improper pipe support, and dents or gouges causing stress risers are some examples of construction defects that may pass an initial pressure test, but contribute to a later failure. Variables that can be scored in the assessment are as follows:

B1. Inspection     10 pts
B2. Materials      2 pts
B3. Joining        2 pts
B4. Backfilling    2 pts
B5. Handling       2 pts
B6. Coating        2 pts
These same variables can also apply to ongoing construction practices on an existing pipeline. This might include repairs,
adjustments to route or depth, and addition of valves or connections. The stability of the buried pipeline during modifications is often a critical consideration. Construction activities near or in the pipeline right of way may produce slopes that are not stable and could put the pipeline at risk. These activities include excavation for road or railway cuts, removal of material from the toe of a slope, or adding significant material to the crest of a slope, in addition to construction activities on the pipeline itself. Slope alterations near, but outside, the right of way by third parties should be monitored and the responsible parties notified and consulted about their project’s effect on the pipeline. The evaluator can assess the potential for human error in the construction phase by examining each of the variables listed above and discussed in more detail next.
B1. Inspection (0-10 pts) Maximum points can be awarded when a qualified and conscientious inspector was present to oversee all aspects of the construction and the inspection provided was of the highest quality. A check of the inspector's credentials, notes during construction, work history, and maybe even the constructor's opinion of the inspector could be used in assessing the performance. The scoring of the other construction variables may also hinge on the inspector's perceived performance. If inspection is a complete unknown, 0 points can be awarded. This variable commands the most points under the construction category because current pipeline construction practices rely so heavily on proper inspection.
B2. Materials (0-2 pts) Ideally, all materials and components were verified as to their authenticity and conformance to specifications prior to their installation. Awareness of potential counterfeit materials should be high for recent construction. Requisition of proper materials is probably not sufficient for this variable. An on-site material handler should be taking reasonable steps to ensure that the right material is indeed being installed in the right location. Evidence that this was properly done warrants 2 points.
B3. Joining (0-2 pts) Pipe joints are sometimes seen as having a higher failure potential than the pipe itself. This is reasonable since joining normally occurs under uncontrolled field conditions. Highest points are awarded when high quality of workmanship is seen in all methods of joining pipe sections, and when welds were inspected by appropriate means (X-ray, ultrasound, dye penetrant, etc.) and all were brought into compliance with governing specifications. Where weld acceptance or rejection is determined by two inspectors, thereby reducing bias and error, assurances are best. Point values should be decreased for less than 100% weld inspection, questionable practices, or other uncertainties. Other joining methods (flanges, screwed connections, polyethylene fusion welds, etc.) are similarly scored based on the quality of the workmanship and the inspection technique. 100% inspection of all joints by industry-accepted practices warrants 2 points in this example.
B4. Backfill (0-2 pts) The type of backfill used and backfilling procedures are often critical to a pipeline's long-term structural strength and ability to resist corrosion. It is important that no damage to the coating occurred during pipeline installation. Uniform and (sometimes) compacted bedding material is usually necessary to properly support the pipe. Stress concentration points may result from improper backfill or bedding material. Knowledge and practice of good backfill/support techniques during construction warrants 2 points.
B5. Handling (0-2 pts) For this variable, the evaluator should check that components, especially longer sections of pipe, were handled in ways that minimize stresses and that cold-working of steel components for purposes of fit or line-up was minimized. Cold-working can cause high levels of residual stresses, which in turn can be a contributing factor to stress corrosion phenomena. Handling includes storage of materials prior to installation. Protecting materials from harmful elements should be a part of the evaluation for proper handling during construction. The evaluator should award 2 points when he sees evidence of good materials handling practices and storage techniques during and prior to construction.
B6. Coating (0-2 pts) This variable examines field-applied coatings (normally required for joining) and provides an additional evaluation opportunity for precoated components. Field-applied coatings are problematic because effects of ambient conditions are difficult to control. Depending on the coating system, careful control of temperature and moisture might be required. All coating systems will be sensitive to surface preparation. Ideally, the coating application was carefully controlled and supervised by trained individuals, and preapplied coating was carefully inspected and repaired prior to final installation of pipe. Coating assessment in terms of its appropriateness for the application and other factors is done in the corrosion index also, but at the construction stage, the human error potential is relatively high. Proper handling and backfilling directly impact the final condition of the coating. The best coating system can be defeated by simple errors in the final steps of installing the pipeline. The maximum points can be awarded when the evaluator is satisfied that the constructors exercised exceptional care in applying field coatings and caring for the preapplied coating. The evaluator must be careful in judging all of the variables just discussed, especially for systems constructed many years ago. System owners may have strong beliefs about how well these error-prevention activities were carried out, but may have little evidence to verify those beliefs. Evaluations of pipeline sections must reflect a consistency in awarding points and not be unduly influenced by unsubstantiated beliefs. A "documentation-required" rule would help to ensure consistency. Excavations, even years after initial installation, provide evidence of how well construction techniques were carried out.
Findings such as damaged coatings, debris (temporary wood supports, weld rods, tools, rocks, etc.) buried with the pipeline, low-quality coating applications over weld joints, etc., will still be present years later to indicate that perhaps insufficient attention was paid during the construction process.
C. Operation (suggested weighting: 35%) Having considered design and construction, the third phase, operations, is perhaps the most critical from a human error standpoint. This is the phase in which an error can produce an immediate failure, since personnel may be routinely operating valves, pumps, compressors, and other equipment. Emphasis therefore is on error prevention rather than error detection. Most hazardous substance pipelines have redundant safety systems and are designed with generous safety factors. Therefore, it often takes a rather unlikely chain of events to cause a pipeline to fail by the improper use of components. However, history has demonstrated that the unlikely event sequences occur more often than would be intuitively predicted. Unlike the other phases, intervention opportunities here may be less common. But a system can also be made less sensitive to human error through physical means. As a starting point, the evaluator can look for a sense of professionalism in the way operations are conducted. A strong safety program is also evidence of attention being paid to error prevention. Both of these, professionalism and safety programs, are among the items believed to reduce errors. The variables considered in this section are somewhat redundant with each other, but are still thought to stand on their own merit. For example, better procedures enhance training; mechanical devices complement training; better training and professionalism usually mean less supervision is required. Operations is the stage where observability and controllability should be maximized. Wherever possible, intervention points should be established. These are steps in any process where actions contemplated or just completed can be reviewed for correctness. At an intervention point, it is still possible to reverse the steps and place the system back in its prior (safe) condition.
For instance, a simple lock on a valve causes the operator to take an extra step before the valve can be operated, perhaps leading to more consideration of the action about to be taken. This is also the place in the assessment where special product reaction issues can be considered. For example, hydrate formation (production of ice as water vapor precipitates from a hydrocarbon flow stream, under special conditions) has been identified as a service interruption threat and also, under special conditions, an integrity threat. The latter occurs if formed ice travels down the pipeline with high velocity, possibly causing damage. Because such special occurrences are often controlled through operational procedures, they warrant attention here. A suggested point schedule to evaluate the operations phase is as follows:
C1. Procedures                    7 pts
C2. SCADA/communications          3 pts
C3. Drug testing                  2 pts
C4. Safety programs               2 pts
C5. Surveys/maps/records          5 pts
C6. Training                      10 pts
C7. Mechanical error preventers   6 pts
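Scoring a phase then amounts to clamping each variable to its cap and summing. Note that the operations caps above total 35 points, which is how the suggested 35% phase weighting is carried (the construction caps B1-B6 likewise total 20). A sketch of that arithmetic, with variable names of our own choosing:

```python
# Point caps for the operations-phase variables C1-C7; they total 35,
# matching the suggested 35% phase weighting. The same pattern applies
# to the construction variables B1-B6 (which total 20).
OPERATIONS_CAPS = {
    "procedures": 7,
    "scada_communications": 3,
    "drug_testing": 2,
    "safety_programs": 2,
    "surveys_maps_records": 5,
    "training": 10,
    "mechanical_error_preventers": 6,
}

def phase_score(awarded, caps):
    """Clamp each awarded value into [0, cap] and sum the results."""
    return sum(min(max(awarded.get(name, 0), 0), cap)
               for name, cap in caps.items())

# Hypothetical section: strong training and procedures, weaker elsewhere.
awarded = {"procedures": 6, "scada_communications": 2, "drug_testing": 2,
           "safety_programs": 1, "surveys_maps_records": 4,
           "training": 9, "mechanical_error_preventers": 5}
print(phase_score(awarded, OPERATIONS_CAPS))  # 29 of a possible 35
```

Clamping keeps an enthusiastic evaluator from awarding more than a variable's cap, so no single variable can dominate the phase beyond its intended weight.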
C1. Procedures (0-7 pts) The evaluator should be satisfied that written procedures covering all aspects of pipeline operation exist. There should be evidence that these procedures are actively used, reviewed, and revised. Such evidence might include filled-in checklists and copies of procedures in field locations or with field personnel. Ideally, use of procedures and checklists reduces variability. More consistent operations imply less opportunity for human error. Examples of job procedures include:

- Mainline valve checks and maintenance
- Safety device inspection and calibration
- Pipeline shutdown or startup
- Pump/compressor operations
- Product movement changes
- Right-of-way maintenance
- Flow meter calibrations
- Instrument maintenance
- Safety device testing
- Management of change
- Patrol
- Surveys
- Corrosion control
- Control center actions
- Lock-out and equipment isolation
- Emergency response

and many others. Note that work near the line, but not actually involving the pipeline, is also included because such activities may affect the line. Unique or rare procedures should be developed and communicated with great care. A protocol should exist that covers these procedures: who develops them, who approves them, how training is done, how compliance is verified, how often they are reviewed. A document management system should be in place to ensure version control and proper access to the most current documents. This is commonly done in a computer environment, but can also be done with paper filing systems. The evaluator can check to see if procedures are in place for the most critical operations first: starting and stopping of major pieces of equipment, valve operations, changes in flow parameters, instruments taken out of service, etc. The nonroutine activity is often the most dangerous. However, routine operations can lead to complacency. The mandated use of pre-flight checklists by pilots prior to every flight is an example of avoiding reliance on memory or habits. A strong procedures program is an important part of reducing operational errors, as is seen by the point level. Maximum points should be awarded where procedure quality and use are the highest. More is said about procedures in the training variable and in Chapter 13.

C2. SCADA/communications (0-3 pts)
Supervisory control and data acquisition (SCADA) refers to the transmission of pipeline operational data (such as pressures, flows, temperatures, and product compositions) at sufficient points along the pipeline to allow monitoring of the line from a single location (Figure 6.4). In many cases, it also includes the transmission of data from the central monitoring location to points along the line to allow for remote operation of valves, pumps, motors, etc. Devices called remote terminal units (RTUs) provide the interface between the pipeline data-gathering instruments and the conventional communication paths such as telephone lines, satellite transmission links, fiber optic cables, radio waves, or microwaves.

[Figure 6.4 Pipeline SCADA systems: valve and pump stations linked by common communications.]

So, a SCADA system is normally composed of all of these components: measuring instrumentation (for flow, pressure, temperature, density, etc.), transmitters, control equipment, RTUs, communication pathways, and a central computer. Control logic exists either in local equipment (programmable logic controllers, PLCs) or in the central computer. SCADA systems usually are designed to provide an overall view of the entire pipeline from one location. In so doing, system diagnosis, leak detection, transient analysis, and work coordination can be enhanced.

The main contribution of SCADA to human error avoidance is the fact that another set of eyes is watching pipeline operations and is hopefully consulted prior to field operations. A possible detractor is the possibility of errors emerging from the pipeline control center. More humans involved may imply more error potential, both from the field and from the control center. The emphasis should therefore be placed on how well the two locations are cooperating and cross-checking each other. Protocol may specify the procedures in which both locations are involved. For example, the operating discipline could require communication between technicians in the field and the control center immediately before:

- Valves opened or closed
- Pumps and compressors started or stopped
- Vendor flows started or stopped
- Instruments taken out of service
- Any maintenance that may affect the pipeline operation.

Two-way communications between the field site and the control center should be a minimum condition to justify points in this section. Strictly for purposes of scoring this variable, a control center need not employ a SCADA system. The important aspect is that another source is consulted prior to any potentially upsetting actions. Telephone or radio communications, when properly applied, can also be effective in preventing human error. Maximum points should be awarded when the cross-checking is seen to be properly performed.
Alternative approach

This subsection describes an alternative approach to evaluating the role of SCADA in human error avoidance. In this approach, a more detailed assessment of SCADA capabilities is made part of the risk assessment. Choice of approaches may be at least partially impacted by the perceived value of SCADA capabilities in error prevention. A SCADA system can impact risk in several ways:

- Human error avoidance
- Leak detection
- Emergency response
- Operational efficiencies.

As with any system, the SCADA system is only as effective and reliable as its weakest component. A thorough assessment of a SCADA system would ideally involve an examination of the entire reporting process, from first indication of an abnormal condition, all the way to the final actions and associated system response. This assessment would therefore involve an evaluation of the following aspects:

- Detection of abnormal conditions; for instance, what types of events can be detected? What is the detection sensitivity and reliability in terms of 100% of event type A occurrences being found, 72% of event type B occurrences being found, etc.? This includes assessment of redundant detection opportunities (by pressure loss and flow increase, for instance), instrument calibration and sensitivities, etc.
- Speed, error rate, and outage rate of the communications pathways; number of points of failure; weather sensitivity; third-party services; average refresh time for data; amount of error checking during transmission; report-by-exception protocols
- Redundancy in communication pathways; outage time until backup system is engaged
- Type and adequacy of automatic logic control; local (PLCs) versus central computer; ability to handle complex input scenarios
- Human response, if required, as a function of time to recognize problem; ability to set alarm limits; effectiveness of man/machine interface (MMI); operator training; support from logic, graphic, and tabular tools
- Adequacy of remote and/or automatic control actions; valve closing or opening; instrument power supply.

A list of characteristics that could be used to assess a specific SCADA system can be created. These characteristics are thought to provide a representative indication of the effectiveness in reducing risks:

- Local automatic control
- Local remote control (on-site control room)
- Remote control as primary system
- Remote control as backup to local control
- Automatic backup communications with indication of switchover
- 24-hour-per-day monitoring
- Regular testing and calibration per formal procedures
- Remote, on-site monitoring and control of all critical activities
- Remote, off-site monitoring and control of all critical activities
- Enforced protocol requiring real-time interface between field operations and control room; two sources involved in critical activities; an adequate real-time communications system is assumed
- Interlocks or logic constraints that prevent incorrect operations; critical operations are linked to pressure, flow, temperature, etc., indications, which are set as "permissives" before the action can occur
- Coverage of data points; density appropriate to complexity of operations
- Number of independent opportunities to detect incidents
- Diagnostics capabilities including data retrieval, trending charts, temporary alarms, correlations, etc.
Many of these characteristics impact the leak detection and emergency response abilities of the system. These impacts are assessed in various consequence factors in Chapter 7. As one variable in assessing the probability of human error, the emphasis here is on the SCADA role in reducing human error-type incidents. Therefore, only a few characteristics are selected to use in evaluating the role of a specific SCADA system. From the human error perspective only, the major considerations are that a second "set of eyes" is monitoring all critical activities and that a better overview of the system is provided. Although human error potential exists in the SCADA loop itself, it is thought that, in general, the cross-checking opportunities offered by SCADA can reduce the probability of human error in field operations. The following are selected as indicators of SCADA effectiveness as an error reducer:

1. Monitoring of all critical activities and conditions
2. Reliability of the SCADA system
3. Enforced protocol requiring real-time communications between field operations and control room; two sources involved in critical activities; an adequate real-time communications system(s) is assumed
4. Interlocks or logic constraints that prevent incorrect operations; critical operations are linked to pressure, flow, temperature, etc., indications, which are set as "permissives" before the action can occur.
Note the following assumptions:

- Critical activities include pump start/stop; tank transfers; and any significant changes in flows, pressures, temperatures, or equipment status.
- Monitoring is seen to be critical for human error prevention, but control capability is mostly a response consideration (consequences).
- Remote monitoring is neither an advantage nor a disadvantage over local (on-site control room) monitoring.
- Proper testing and calibration are implied as part of reliability.
Because item 4 above (interlocks or logic constraints) is already captured in the "Computer Permissives Program" part of the variable mechanical error preventers, the remaining three considerations can be "scored" in the assessment for probability of human error as shown in Table 6.1.
Table 6.1 Evaluation of SCADA role in human error reduction

Level 1  No SCADA system exists or is not used in a manner that promotes human error reduction.
Level 2  Some critical activities are monitored; field actions are informally coordinated through a control room; system is at least 80% operational.
Level 3  Most critical activities are monitored; field actions are usually coordinated through a control room; system uptime exceeds 95%.
Level 4  All critical activities are monitored; all field actions are coordinated through a control room; SCADA system reliability (measured in uptime) exceeds 99.9%.
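Table 6.1 can be applied mechanically once its qualitative bands are given working definitions. In the sketch below, the 80%, 95%, and 99.9% figures come from the table, while reading "most" as at least 75% is our own assumption:

```python
def scada_level(coverage, coordination, uptime):
    """Classify a SCADA installation per Table 6.1.

    coverage     -- fraction of critical activities monitored (0-1)
    coordination -- fraction of field actions coordinated through
                    the control room (0-1)
    uptime       -- measured system availability (0-1)
    """
    if coverage >= 1.0 and coordination >= 1.0 and uptime > 0.999:
        return 4  # all activities monitored and coordinated
    if coverage >= 0.75 and coordination >= 0.75 and uptime > 0.95:
        return 3  # "most" read here as at least 75%, an assumption
    if coverage > 0.0 and uptime >= 0.80:
        return 2  # some monitoring, informal coordination
    return 1      # no SCADA, or not used for error reduction

print(scada_level(1.0, 1.0, 0.9995))  # 4
print(scada_level(0.9, 0.8, 0.97))    # 3
print(scada_level(0.4, 0.2, 0.85))    # 2
print(scada_level(0.0, 0.0, 0.99))    # 1
```

The evaluator would substitute whatever operational definitions of "some" and "most" fit the assessed system.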
Other aspects of the SCADA role in risk reduction can be captured in the consequence section, under "Spill Reduction Factors." The more technical aspects of kind and quality of data and control (incident detection), and the use of that capability (emergency response), can be assessed there.
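The interlock or "permissive" logic mentioned in this section, in which a critical operation is allowed only when linked pressure, flow, and temperature indications are within bounds, can be sketched as a simple gate. Tag names and limits here are hypothetical, not from the text:

```python
# Hypothetical permissive check: a critical action (e.g., starting a
# pump) is allowed only when every linked indication is within its
# configured band. Tags and limits are illustrative only.
PERMISSIVES = {
    "suction_pressure_psig": (50, 300),
    "discharge_valve_open_pct": (95, 100),
    "case_temperature_f": (0, 180),
}

def action_permitted(readings, permissives=PERMISSIVES):
    """Return True only if every permissive indication is in band.

    A missing reading yields NaN, which fails the comparison, so an
    instrument taken out of service blocks the action by default.
    """
    return all(
        lo <= readings.get(tag, float("nan")) <= hi
        for tag, (lo, hi) in permissives.items()
    )

print(action_permitted({"suction_pressure_psig": 120,
                        "discharge_valve_open_pct": 100,
                        "case_temperature_f": 95}))   # True
print(action_permitted({"suction_pressure_psig": 20,
                        "discharge_valve_open_pct": 100,
                        "case_temperature_f": 95}))   # False
```

Failing safe on a missing reading mirrors the chapter's emphasis: the system should default to its least vulnerable condition.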
C3. Drug testing (0-2 pts) Government regulations in the United States currently require drug and alcohol testing programs for certain classes of employees in the transportation industry. The intent is to reduce the potential for human error due to an impairment of an individual. Company testing policies often include:

- Random testing
- Testing for cause
- Pre-employment testing
- Postaccident testing
- Return-to-work testing.

From a risk standpoint, finding and eliminating substance abuse in the pipeline workplace reduces the potential for substance-abuse-related human errors. A functioning drug testing program for pipeline employees who play substantial roles in pipeline operations should warrant maximum points. In cultures where drug and substance abuse is not a problem, a practice of employee health screening may be a substitute item to score.
C4. Safety programs (0-2 pts) A safety program is one of the nearly intangible factors in the risk equation. It is believed that a company-wide commitment to safety reduces the human error potential. Judging this level of commitment is difficult. At best, the evaluator should look for evidence of a commitment to safety. Such evidence may take the form of some or all of the following:

- Written company statement of safety philosophy
- Safety program designed with a high level of employee participation; evidence of high participation is found
- Strong safety performance record (recent history)
- Good attention to housekeeping
- Signs, slogans, etc., to show an environment tuned to safety
- Full-time safety personnel.

Most will agree that a company that promotes safety to a high degree will have an impact on human error potential. A strong safety program should warrant maximum points.

C5. Surveys/maps/records (0-5 pts)
While also covered in the risk indexes they specifically impact, surveys as a part of routine pipeline operations are again considered here. Examples of typical pipeline surveys include:

- Close interval (pipe-to-soil voltage) surveys
- Coating condition surveys
- Water crossing surveys
- Deformation detection by pigging
- Population density surveys
- Depth of cover surveys
- Sonar (subsea) surveys
- Thermographic surveys
- Leak detection
- Air patrol

Each item is intended to identify areas of possible threat to the pipeline. A formal program of surveying, including proper documentation, implies a professional operation and a measure of risk reduction. Routine surveying further indicates a more proactive, rather than reactive, approach to the operation. For the pipeline section being evaluated, points can be awarded based on the number of surveys performed versus the number of useful surveys that could be performed there.

Survey information should become a part of maps and records whereby the survey results are readily available to operations and maintenance personnel. Maps and records document critical information about the pipeline systems and therefore play a role in error reduction. That role can be evaluated here. As discussed in the third-party damage index discussion (Chapter 3), there is often a need to routinely locate a pipeline to protect it from pending excavations. When indirect means of line locating, such as drawings and other records, are used, there is an increased opportunity for incorrect locating. This is due to the human error potential in the creation and use of maps, including:

- Incorrect initial measurements of the line location during installation
- Errors in recording of these measurements
- Errors in creation of the record documents
- Failure to update documents
- Incorrect filing and retrieval of the documents
- Incorrect interpretation and communication of the data from the document.

While some pipe movement after construction is possible, this is normally not an important factor in line location. Maps and records are increasingly being stored on and retrieved from computers.
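The suggested basis for awarding survey points, surveys performed versus useful surveys that could be performed on the section, reduces to a proportion of the 0-5 point range. The proportional rule and rounding below are one possible interpretation, not the book's prescribed formula:

```python
def survey_score(performed, applicable, max_points=5.0):
    """Award points in proportion to how many of the survey types
    judged useful for the section are actually performed."""
    if not applicable:
        # No useful surveys exist for this section, so nothing is
        # missing; full credit is one defensible reading.
        return max_points
    done = sum(1 for s in applicable if s in set(performed))
    return round(max_points * done / len(applicable), 1)

# Hypothetical section where five survey types would be useful.
applicable = ["close interval", "coating condition", "depth of cover",
              "leak detection", "air patrol"]
performed = ["close interval", "leak detection", "air patrol"]
print(survey_score(performed, applicable))  # 3.0
```

The evaluator could instead weight individual survey types by their relevance to the section's dominant threats.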
Whether in digital or paper form, and similar to the evaluation of procedures discussed previously, the scoring of surveys/maps/records can be based on aspects such as:
Comprehensiveness--amount of the system covered by maps and records
Detail--level of detail shown (depth, landmarks, pipe specifications, leak history, current condition, etc.)
Clarity--ease of reading; chance of misinterpretation of information
Timeliness of updates
Document management system--ensuring version control and ready access to information.
Examples of common pipeline survey techniques are shown in Appendix H. The following information on maps and records is excerpted from a 1997 study, Ref. [64]:
Maps and Records In general, facility records maintained by the utility owners or pipeline operators are the most widely used sources of information about the underground infrastructure. In the U.S., operators are required to identify facilities in environmentally sensitive areas and in densely populated areas. In many pipeline environments, however, there is no specific requirement for system operators to maintain a comprehensive system map of their underground facilities. Nevertheless, many do maintain this information to facilitate their business operations. System records developed prior to the widespread use of computer technology most likely exist as architectural and engineering diagrams. For some systems, these diagrams have been electronically imaged so that they are easier to reference, update, and store. Digitized versions of early maps do not always reflect the uncertainty of information that may have been inherent in the hand-drafted version. Structural references and landmarks that define the relative locations of underground facilities also change over time and may not be reflected on maps. Many system maps lack documentation of abandoned facilities. Abandoned facilities result when the use of segments of the underground system is discontinued, when replaced lines run in new locations, or when entire systems are upgraded. Without accurate records of abandoned facilities, excavators run the risk of mistaking an abandoned line for an active one, thereby increasing the likelihood of hitting the active line. In addition to documenting the location of a facility, utility map records may also contain information on the age of the facility, type and dimensions of the material, history of leakage and maintenance, status of cathodic protection, soil content, and activity related to pending construction. However, the quality of this information varies widely. Excavators,
locators, and utility operators can use GPS information to identify field locations (longitude and latitude coordinates), and they can use this information to navigate to the sites. With the added capability of differential GPS, objects can be located to an accuracy of better than 1 meter (1.1 yards). This degree of accuracy makes differential GPS appropriate for many aspects of mapping underground facilities. Subsurface utility engineering (SUE) is a process for identifying, verifying, and documenting underground facilities. Depending on the information available and the technologies employed to verify facility locations, a level of quality of information can be associated with underground facilities. These levels, shown in Table 1, indicate the degree of uncertainty associated with the information; level A is the most reliable and level D the least reliable. This categorization is a direct result of the source of information and the technologies used to verify the information.
C6. Training (0-10 pts) Training should be seen as the first line of defense against human error and for accident reduction. For purposes of this risk assessment, training that concentrates on failure prevention is the most vital. This is in contrast to training that emphasizes protective equipment, first aid, injury prevention, and even emergency response. Such training is unquestionably critical, but its impact on the pipeline probability of failure is indirect at best. This should be kept in mind as the training program is assessed for its contribution to risk reduction. Obviously, different training is needed for different job functions and different experience levels. An effective training program, however, will have several key aspects, including
Table 1 Quality level of the information

Level D: Information is collected from existing utility records without field activities to verify the information. The accuracy or comprehensiveness of the information cannot be guaranteed; consequently, this least certain set of data is the lowest quality level.

Level C: Adds aboveground survey data (such as manholes, valve boxes, posts, and meters) to existing utility records. The Federal Highway Administration Office of Engineering estimates that 15-30 percent of level C facility information pertinent to highway construction is omitted or plotted with an error rate of more than 2 feet.

Level B: Confirmed existence and horizontal position of facilities are mapped using surface geophysical techniques. The two-dimensional, plan-view map is useful in the construction planning phase, when slight changes to avoid conflicts can produce substantial cost savings by eliminating the relocation of utilities.

Level A: Vacuum excavation is used to positively verify both the horizontal and vertical depth location of facilities.
common topics in which all pipeline employees should be trained. A point schedule can be developed to credit the program for each aspect that has been incorporated. An example (with detailed explanations afterwards) follows:

Documented minimum requirements    2 pts
Testing                            2 pts
Topics covered:
  Product characteristics          0.5 pts
  Pipeline material stresses       0.5 pts
  Pipeline corrosion               0.5 pts
  Control and operations           0.5 pts
  Maintenance                      0.5 pts
  Emergency drills                 0.5 pts
Job procedures (as appropriate)    2 pts
Scheduled retraining               1 pt
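The schedule above lends itself to a simple tally. The sketch below is illustrative only: the item names and credits mirror the example schedule, while the all-or-nothing credit per item is an assumption of this sketch (an evaluator could just as well award partial credit).

```python
# Illustrative tally of the C6 training point schedule (0-10 pts).
# Item names and credits mirror the example schedule above; the
# all-or-nothing credit per item is an assumption of this sketch.

TRAINING_CREDITS = {
    "documented_minimum_requirements": 2.0,
    "testing": 2.0,
    "topic_product_characteristics": 0.5,
    "topic_pipeline_material_stresses": 0.5,
    "topic_pipeline_corrosion": 0.5,
    "topic_control_and_operations": 0.5,
    "topic_maintenance": 0.5,
    "topic_emergency_drills": 0.5,
    "job_procedures": 2.0,
    "scheduled_retraining": 1.0,
}

def training_score(aspects_present):
    """Sum the credit for each program aspect present, capped at 10 pts."""
    total = sum(TRAINING_CREDITS[a] for a in aspects_present)
    return min(total, 10.0)
```

A program exhibiting every aspect earns the full 10 points; omitting, say, testing and scheduled retraining reduces the score accordingly.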
Documented minimum requirements A document that specifically describes the body of knowledge that is expected of pipeline workers is a good start for a program. This document will ideally state the minimum knowledge requirements for each pipeline job position. Mastery of this body of knowledge will be verified before that position is worked by an employee. For example, a pump station operator will not be allowed to operate a station until she has demonstrated a command of all of the minimum requirements of that job. This should include station shutdowns, alarms, monitors, procedures, and the ability to recognize any abnormal conditions at the station. Testing A formal program should verify operator knowledge and identify deficiencies before they pose a threat to the pipeline system. Tests that can be passed with less than 100% correctness may be failing to identify training weaknesses.
Ideally, the operator should know exactly what knowledge he is expected to possess. The test should confirm that he does indeed possess this knowledge. If the test indicates deficiencies, he may be retested (within reasonable limits) until he has mastered the body of knowledge required for his job. Testing programs vary greatly in technique and effectiveness. It is left to the risk evaluator to satisfy himself that the testing achieves the desired results. Topics covered Regardless of their specific jobs, all pipeline operators (and arguably, all pipeline employees) should have some basic common knowledge. Some of these common areas may include the following: Product characteristics. Is the product transported flammable, toxic, reactive, carcinogenic? What are the safe exposure limits? If released, does it form a cloud? Is the cloud heavier or lighter than air? Such knowledge decreases the chances of an operator making an incorrect decision due to ignorance about the product she is handling. Pipeline material stresses. How does the pipeline material react to stresses? What are indications of overstressing? What is the failure mode of the material? What is the weakest component in the system? Such basic knowledge must not be confused with engineering in the minds of the operators. All operators should understand these fundamental concepts only to help understand and avoid errors--not to replace engineering decisions. With this knowledge, though, an operator may find (and recognize the significance of) a bulge in the pipe indicating that yielding has occurred. All trainees may gain a better appreciation of the consequences of a pipeline failure. Pipeline corrosion. As in the above topic, a basic understanding of pipeline corrosion and anticorrosion systems may reduce the chances of errors. With such training, a field operator would be more alert to coating damage, the presence of other buried metal, or overhead power lines as potential threats to the pipeline.
Office personnel may also have the opportunity to recognize a threat and bring it to the attention of the corrosion engineer, given a fundamental understanding of corrosion. A materials handler may spot a situation of incompatible metals that may have been overlooked in the design phase. Control and operations. This is most critical to the employees who actually perform the product movements, but all employees should understand how product is moved and controlled, at least in a general way. An operator who understands what manner of control is occurring upstream and downstream of his area of responsibility is less likely to make an error due to ignorance of the system. An operator who understands the big picture of the pipeline system will be better able to anticipate all ramifications of changes to the system. Maintenance. A working knowledge of what is done and why it is being done may be valuable in preventing errors. A worker who knows how valves operate and why maintenance is necessary to their proper operation will be able to spot deficiencies in a related program or procedure. Inspection and calibration of instruments, especially safety devices, will usually be better done by a knowledgeable employee. Given that many maintenance activities involving excavation could occur without engineering supervision, safety training of maintenance crews should include education on the conditions potentially leading to slope failure or other stability/support issues. Standard procedures should be written to require notification of an engineer should such conditions be found to exist.
Emergency drills The role of emergency drills as a proactive risk reducer may be questioned. Emergency response in general is thought to play a role only after a failure has occurred and consequently is considered in the leak impact factor (Chapter 7). Drills, however, may play a role in human error reduction as employees think through a simulated failure. The ensuing analysis and planning should lead to methods to further reduce risks. The evaluator must decide what effect emergency drills have on the risk picture in a specific case.

Job procedures As required by specific employee duties, the greatest training emphasis should probably be placed on job procedures. The first step in avoiding improper actions of employees is to document the correct way to do things. Written and regularly reviewed procedures should cover all aspects of pipeline operation, both in the field and in the control centers. The use of procedures as a training tool is being measured here. Their use as an operational tool is covered in an earlier variable.

Scheduled retraining Finally, experts agree that training is not permanent. Habits form, steps are bypassed, things are forgotten. Some manner of retraining and retesting is essential when relying on a training program to reduce human error. The evaluator should be satisfied that the retraining schedule is appropriate and that the periodic retesting adequately verifies employee skills.

C7. Mechanical error preventers (0-6 pts) Sometimes facetiously labeled as “idiot-proofing,” installing mechanical devices to prevent operator error may be an effective risk reducer. Credit toward risk reduction should be given to any such device that impedes the accomplishment of an error. The premise here is that the operator is properly trained--the mechanical preventer serves to help avoid inattention errors. A simple padlock and chain can fit in this category, because such locks cause an operator to pause and, it is hoped, consider the action about to be taken. A more complex error prevention system is computer logic that will prevent certain actions from being performed out of sequence. The point schedule for this category can reflect not only the effectiveness of the devices being rated, but also the possible consequences that are being prevented by the device. Judging this may need to be subjective, in the absence of much experiential data. An example of a schedule with detailed explanations follows:

Three-way valves with dual instrumentation    4 pts
Lock-out devices                              2 pts
Key-lock sequence programs                    2 pts
Computer permissives                          2 pts
Highlighting of critical instruments          1 pt

In this schedule, points may be added for each application up to a maximum point value of 5 points. An application is valid only if the mechanical preventer is used in all instances of the scenario it is designed to prevent. If the section being evaluated has no possible applications, award the maximum points (5 points) because there is no potential for this type of human error.

Three-way valves It is common industry practice to install valves between instruments and pipeline components. The ability to isolate the instrument allows for maintenance of the instrument without taking the whole pipeline section out of service. Unfortunately, it also allows the opportunity for an instrument to be defeated if the isolating valve is left closed after the instrument maintenance is complete. Obviously, if the instrument is a safety device such as a relief valve or pressure switch, it must not be isolated from the pipeline that it is protecting. Three-way valves have one inlet and two outlets. By closing one outlet, the other is automatically opened. Hence, there is always an unobstructed outlet. When pressure switches, for instance, are installed at each outlet of a three-way valve, one switch can be taken out of service and the other will always be operable. Both pressure switches cannot be simultaneously isolated. This is a prime example of a very effective mechanical preventer that reduces the possibility of a potentially quite serious error. Points are awarded accordingly.

Lock-out devices These are most effective if they are not the norm. When an operator encounters a lock routinely, the attention-grabbing effect is lost. When the lock is an unusual feature, signifying unusual seriousness of the operation about to be undertaken, the operator is more likely to give the situation more serious attention.

Key-lock sequence programs These are used primarily to avoid out-of-sequence type errors. If a job procedure calls for several operations to be performed in a certain sequence, and deviations from that prescribed sequence may cause serious problems, a key-lock sequence program may be employed to prevent any action from being taken prematurely. Such programs require an operator to use certain keys to unlock specific instruments or valves. Each key unlocks only a certain instrument and must then be used to get the next key. For instance, an operator uses her assigned key to unlock a panel of other keys. From this panel she can initially remove only key A. She uses key A to unlock and close valve X. When valve X is closed, key B becomes available to the operator. She uses key B to unlock and open valve Y. This makes key C available, and so on. At the end of the sequence, she is able to remove key A and use it to retrieve her assigned key. These elaborate sequencing schemes involving operators and keys are being replaced by computer logic, but where they are used, they can be quite effective. It is important that the keys be nondefeatable to force operator adherence to the procedure.

Computer permissives These are the electronic equivalent of the key-locks described in the last section. By means of software logic ladders, the computer prevents improper actions from being taken. A pump start command will not be executed if the valve line-up (proper upstream and downstream valves
open or closed as required) is not correct. A command to open a valve will not execute if the pressure on either side of the valve is not in an acceptable range. Such electronic permissives are usually software programs that may reside in on-site or remotely located computers. A computer is not a minimum requirement, however, because simple solenoid switches or wiring arrangements may perform similar functions. The evaluator should assess the adequacy of such permissives to perform the intended functions. Furthermore, they should be regularly tested and calibrated to warrant the maximum point scores.
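As a rough illustration of such a permissive, the sketch below blocks a pump start command unless the valve line-up and suction pressure checks pass. The tag names, pressure limits, and function boundaries are hypothetical; a real permissive would live in SCADA or PLC logic reading live instrument values.

```python
# Hypothetical computer permissive: a pump start command executes only
# if the valve line-up and suction pressure are acceptable.  Tag names
# and limits are invented for illustration; a real permissive would be
# implemented in SCADA/PLC logic reading live instrument values.

def pump_start_permitted(upstream_valve_open, downstream_valve_open,
                         suction_pressure_psig,
                         min_psig=20.0, max_psig=275.0):
    """True only when the line-up and pressure checks all pass."""
    lineup_ok = upstream_valve_open and downstream_valve_open
    pressure_ok = min_psig <= suction_pressure_psig <= max_psig
    return lineup_ok and pressure_ok

def pump_start_command(**state):
    """Inhibit (not merely warn about) an improper start command."""
    return "start executed" if pump_start_permitted(**state) else "start inhibited"
```

The essential design choice mirrors the text: the improper action is prevented outright rather than flagged for the operator to notice.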
Highlighting of critical instruments This is merely another method of bringing attention to critical operations. By painting a critical valve the color red or by tagging an instrument with a special designation, the operator will perhaps pause and consider his action again. Such pauses to reconsider may well prevent serious mistakes. Points should be awarded based on how effective the evaluator deems the highlighting to be.

D. Maintenance (suggested weighting: 15%) Improper maintenance is a type of error that can occur at several levels in the operation. Lack of management attention to maintenance, incorrect maintenance requirements or procedures, and mistakes made during the actual maintenance activities are all errors that may directly or indirectly lead to a pipeline failure. The evaluator should again look for a sense of professionalism, as well as a high level of understanding of maintenance requirements for the equipment being used. Note that this item does not command a large share of the risk assessment points. However, many items in the overall pipeline risk assessment are dependent on items in this section. A valve or instrument that, due to improper maintenance, will not perform its intended function negates any risk reduction that the device might have contributed. If the evaluator has concerns about proper operator actions in this area, she may need to adjust (downward) all maintenance-dependent variables in the overall risk evaluation. Therefore, if this item scores low, it should serve as a trigger to initiate a reevaluation of the pipeline. Routine maintenance should include procedures and schedules for operating valves, inspecting cathodic protection equipment, testing/calibrating instrumentation and safety devices, corrosion inspections, painting, component replacement, lubrication of all moving parts, engine/pump/compressor maintenance, tank testing, etc. Maintenance must also be done in a timely fashion. Maintenance frequency should be consistent with regulatory requirements and industry standards as a minimum. Modern maintenance practices often revolve around concepts of predictive preventive maintenance (PPM) programs. In these programs, systematic collection and analyses of data are emphasized so that maintenance actions are more proactive and less reactive. Based on statistical analysis of past failures and the criticality of the equipment, part replacement and maintenance schedules are developed that optimize the operation--not wasting money on premature part replacement or unnecessary activities, but minimizing downtime of equipment. These programs can be quite sophisticated in terms of the rigor of the data analysis. Use of even rudimentary aspects of PPM provides at least some evidence to the evaluator that maintenance is playing a legitimate role in the company’s risk reduction efforts. The evaluator may wish to judge the strength of the maintenance program based on the following items:

D1. Documentation    2 pts
D2. Schedule         3 pts
D3. Procedures       10 pts

D1. Documentation (0-2 pts) The evaluator should check that a formal program exists for retaining all paperwork or databases dealing with all aspects of maintenance. This may include a file system or a computer database in active use. Any serious maintenance effort will have associated documentation. The ideal program will constantly adjust its maintenance practices based on accurate data collection through a formal PPM approach, or at least by employing PPM concepts. Ideally, the data collected during maintenance, as well as all maintenance procedures and other documentation, will be under a document management system to ensure version control and ready access to information.
D2. Schedule (0-3 pts) A formal schedule for routine maintenance based on operating history, government regulations, and accepted industry practices will ideally exist. Again, this schedule will ideally reflect actual operating history and, within acceptable guidelines, be adjusted in response to that history through the use of formal PPM procedures or at least the underlying concepts.
D3. Procedures (0-10 pts) The evaluator should verify that written procedures dealing with repairs and routine maintenance are readily available. Not only should these exist, it should also be clear that they are in active use by the maintenance personnel. Look for checklists, revision dates, and other evidence of their use. Procedures should help to ensure consistency. Specialized procedures are required to ensure that original design factors are still considered long after the designers are gone. A prime example is welding, where material properties such as hardness, fracture toughness, and corrosion resistance can be seriously affected by subsequent maintenance activities involving welding.
Incorrect operations index This is the last of the failure mode indexes in the relative risk model (see Figure 6.1). This value is combined with the other indexes discussed in chapters 3 through 6 and then divided by the leak impact factor, which is discussed in Chapter 7, to arrive at the final risk score. This final risk score is ready to be used in risk management applications as discussed in Chapter 15. Chapters 8 through 14 discuss some specialized applications of risk techniques. If these are not pertinent to the systems being evaluated, the reader can move directly to Chapter 15.
Leak Impact Factor

Contents

I. Changes in LIF Calculations
II. Background
III. Product Hazard
   Acute Hazards
   Chronic Hazards
IV. Leak Volume
   Hole Size
   Materials
   Stresses
   Initiating Mechanisms
   Release Models
   Hazardous Vapor Releases
   Hazardous Liquid Spills
   HVL Releases
V. Dispersion
   Jet Fire
   Vapor Cloud
   Vapor Cloud Ignition
   Overpressure Wave
   Vapor Cloud Size
   Cloud Modeling
   Liquid Spill Dispersion
   Physical Extent of Spill
   Thermal Effects
   Contamination Potential
   Spill Migration
   Spill and Leak Mitigation
   Secondary Containment
   Emergency Response
VI. Scoring Releases
   Scoring Hazardous Liquid Releases
   Scoring Hazardous Vapor Releases
VII. Scores
   Emergency Response
VIII. Receptors
   Population Density
   Environmental Issues
   Environmental Sensitivity
   High-Value Areas
   Equivalencies of Receptors
Leak Impact Factor Overview

Leak impact factor (LIF) = product hazard (PH) x leak volume (LV) x dispersion (D) x receptors (R)
A. Product Hazard (PH) (Acute + Chronic Hazards)    1-22 pts
A1. Acute Hazards
   a. Nf    0-4 pts
   b. Nr    0-4 pts
   c. Nh    0-4 pts
   Total (Nf + Nr + Nh)    0-12 pts
A2. Chronic Hazard (RQ)    0-10 pts
B. Leak/Spill Volume (LV)
C. Dispersion (D)
D. Receptors (R)
D1. Population Density (Pop)
D2. Environmental Considerations (Env)
D3. High-Value Areas (HVA)
Total Receptors = (Pop + Env + HVA)
Note: The leak impact factor is used to adjust the index scores to reflect the consequences of a failure. A higher point score for the leak impact factor represents higher consequences and a higher risk.
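The multiplicative structure noted above is easy to express directly. This sketch simply mirrors the LIF formula, including its property that a single zero-valued component drives the consequence score to zero (the numeric inputs are illustrative only):

```python
# Sketch of LIF = PH x LV x D x R with illustrative inputs.  Because the
# components multiply, any single zero-valued component (no hazard, no
# leak, no dispersion, or no receptors) yields zero consequence.

def leak_impact_factor(product_hazard, leak_volume, dispersion, receptors):
    return product_hazard * leak_volume * dispersion * receptors
```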
Figure 7.1 Relative risk model.

Figure 7.2 Assessing potential consequences: samples of data used to calculate the leak impact factor. (Product hazard: acute hazard--ignitability, reactivity, corrosivity, aquatic toxicity, mammalian toxicity, environmental persistence--and chronic hazard. Receptors: population, environment, high-value areas. Spill size: product state (gas, liquid, combination), flow rate, diameter, pressure, product characteristics, failure size, leak detection. Dispersion: weather, topography, surface flow resistance, product characteristics, volume released, emergency response.)
Changes in LIF calculations Some changes to the leak impact factor (LIF), relative to the first and second editions of this text, are recommended. The elements of the LIF have not changed, but the protocol by which these ingredients are mathematically combined has been made more transparent and realistic in this discussion. Additional scoring approaches are also presented. Given the increasing role of risk evaluations in many regulatory and highly scrutinized applications, there is often the need to consider increasing detail in risk assessment, especially consequence quantification. There is no universally agreed upon method to do this. This edition of this book seeks to provide the risk assessor with an understanding of the sometimes complex underlying concepts and then some ideas on how an optimum risk assessment model can be created. The final complexity and comprehensiveness of the model will be a matter of choice for the designer, in consideration of factors such as intended application, required accuracy, and resources that can be applied to the effort.
Background

Up to this point, possible pipeline failure initiators have been assessed. These initiators define what can go wrong. Actions or devices that are designed to prevent these failure initiators have also been considered. These preventions affect the “How likely is it?” follow-up question to “What can go wrong?” The last portion of the risk assessment addresses the question “What are the consequences?” This is answered by estimating the probabilities of certain damages occurring. The consequence factor begins at the point of pipeline failure. The title of this chapter, Leak Impact Factor, emphasizes this. What is the potential impact of a pipeline leak? The answer primarily depends on two factors: (1) the product and (2) the surroundings. Unfortunately, the interaction between these two factors can be immensely complex and variable. The possible leak rates, weather conditions, soil types, populations nearby, etc., are in and of themselves highly variable and unpredictable. When the interactions between these and the product characteristics are also considered, the problem becomes reasonably solvable only by making assumptions and approximations. The leak impact factor is calculated from an analysis of the potential product hazard, spill or leak size, release dispersion, and receptor characteristics. Although simplifying assumptions are used, enough distinctions are made to ensure that meaningful risk assessments result. The main focus of the LIF here is on consequences to public health and safety from a pipeline loss of containment integrity. This includes potential consequences to the environment. Additional consequence considerations such as service interruption costs can be included as discussed in later chapters. The LIF can be seen as the product of four variables:

LIF = PH x LV x D x R

where

LIF = leak impact factor (higher values represent higher consequences)
PH = product hazard (as previously defined)
LV = leak volume (relative quantity of the liquid or vapor release)
D = dispersion (relative range of the leak)
R = receptors (all things that could be damaged).

Because each variable is multiplied by all others, any individual variable can drastically impact the final LIF. This better represents real-world situations. For instance, this equation shows that if any one of the four components is zero, then the consequence (and the risk) is zero. Therefore, if the product is absolutely nonhazardous (including pressurization effects), there is no risk. If the leak volume or dispersion is zero, either because there is no leak or because some type of secondary containment is used, then again there is no risk. Similarly, if there are no receptors (human or environmental or property values) to be endangered by a leak, then there is no risk. As each component increases, the consequence and overall risks increase. The full range of hazard potential from loss of integrity of any operating pipeline includes the following:

1. Toxicity/asphyxiation--contact toxicity or exclusion of air from confined spaces.
2. Contamination/pollution--acute and chronic damage to property, flora, fauna, drinking waters, etc.
3. Mechanical effects--erosion, washouts, projectiles, etc., from force of escaping product.
4. Fire/ignition scenarios:
   a. Fireballs--normally caused by boiling liquid expanding vapor explosion (BLEVE) episodes, in which a vessel, usually engulfed in flames, violently explodes, creating a large fireball with the generation of intense radiant heat
   b. Flame jets--occur when an ignited stream of material leaving a pressurized vessel creates a long flame jet with associated radiant heat hazards and the possibility of a direct impingement of flame on nearby receptors
   c. Vapor cloud fire--occurs when a cloud encounters an ignition source, causing the entire cloud to combust as air and fuel are drawn together in a flash fire situation
   d.
Vapor cloud explosion--occurs when a cloud ignites and the combustion process leads to detonation of the cloud, generating blast waves
   e. Liquid pool fires--a liquid pool of flammable material forms, ignites, and creates radiant heat hazards

Naturally, not all of these hazards accompany all pipeline operations. The product being transported is the single largest determinant of hazard type. A water pipeline will often have only the hazard of “mechanical effects” (and possibly drowning). A gasoline pipeline, on the other hand, carries almost all of the above hazards. Hazard zones, that is, distances from a pipeline release where a specified level of damage might occur, are more fully discussed in Chapter 14. Example calculation routines are also provided there as well as later in this chapter. Figure 7.8, presented later in this chapter, illustrates the relative hazard zones of typical flammable pipeline products. There is a range of possible outcomes--consequences--associated with most pipeline failures. This range can be seen as a distribution of possible consequences, from a minor nuisance leak to a catastrophic event. Point estimates of the more
severe potential consequences are often used as a surrogate for the distribution in a relative risk model. When absolute risk values are sought, the consequence distribution must be better characterized, as described in later chapters. A comprehensive consequence assessment sequence might follow these steps:

1. Determine damage states of interest (see Chapter 14)
2. Calculate hazard distances associated with damage states of interest
3. Estimate hazard areas based on hazard distances and source (burning pools, vapor cloud centroid, etc.) location (see Table 7.6)
4. Characterize receptor vulnerabilities within the hazard areas
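Steps 2 through 4 above can be sketched crudely as follows. The circular hazard area and the point-receptor coordinates are simplifying assumptions for illustration; as noted above, actual hazard areas depend on source geometry such as burning pools and vapor cloud centroids.

```python
import math

# Crude sketch of consequence-assessment steps 2-4: turn a hazard
# distance for a damage state into a circular hazard area around the
# release point, then count receptors inside it.  The circular area and
# the point-receptor coordinates are simplifying assumptions.

def hazard_area(hazard_distance):
    """Area reached by the damage state (distance units squared)."""
    return math.pi * hazard_distance ** 2

def receptors_in_zone(release_xy, hazard_distance, receptor_points):
    """Count receptor points lying within the hazard distance."""
    rx, ry = release_xy
    return sum(1 for (x, y) in receptor_points
               if math.hypot(x - rx, y - ry) <= hazard_distance)
```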
Limited modeling resources often require some shortcuts to this process, leading to the use of screening simplifications and detailed analyses at only critical points. Such simplifications, and the use of conservative assumptions for modeling convenience, are discussed in this chapter.
A. Product hazard The primary factor in determining the nature of the hazard is the characteristics of the product being transported in the pipeline. It is the product that to a large degree determines the nature of the hazard. In studying the impact of a leak, it is often useful to make a distinctionbetween acute and chronic hazards.Acute can mean sudden onset, or demanding urgent attention, or of short duration. Hazards such as fire, explosion, or contact toxicity are considered to be acute hazards. They are immediate threats caused by a leak. Chronicmeans marked by a long duration.A time variable is therefore implied. Hazards such as groundwater contamination, carcinogenicity, and other long-term health effects are consideredto be chronic hazards. Many releases that can cause damage to the environment are chronic hazards because they can cause long-term effects and have the potential to worsen with the passage of time. The primary difference between acute and chronic hazards is the amount of time involved. An immediate hazard, created instantly upon initiation of an event, growing to its worst case level within a few minutes and then improving,is an acute hazard. The hazard that potentially grows worse with the passage of time is a chronic hazard. For example, a natural gas release poses mostly an acute hazard. The largest possible gas cloud normally forms immediately, creating a fire/explosionhazard,and then begins to shrink as pipeline pressure decreases. If the cloud does not find an ignition source, the hazard is reduced as the vapor cloud shrinks. (If the natural gas vapors can accumulate inside a building, the hazard may become more severe as time passesit then becomes a chronic hazard.) The spill of crude oil is more chronic in nature because the potential for ignition and accompanying thermal effects is more remote, but in the long term environmental damages are likely. A gasoline spill containsboth chronicand acute hazard characteristics. 
It is easily ignited, leading to thermal damage scenarios, and it also has the potential to cause short- and long-term environmental damages. Many products will have some acute hazard characteristics and some chronic hazard characteristics. The evaluator should imagine where his product would fit on a scale such as that shown in Figure 7.3, which shows a hypothetical scale to illustrate where some common pipeline products may fit in relation to each other. A product's location on this scale depends on how readily it disperses (the persistence) and how much long-term hazard and short-term hazard it presents. Some product hazards are almost purely acute in nature, such as natural gas. These are shown on the left edge of the scale. Others, such as brine, may pose little immediate (acute) threat, but cause environmental harm as a chronic hazard. These appear on the far right side of the scale.
A1. Acute hazards

Both gaseous and liquid pipeline products should be assessed in terms of their flammability, reactivity, and toxicity. These are the acute hazards. One industry-accepted scale for rating product hazards comes from the National Fire Protection Association (NFPA). This scale rates materials based on the threat to emergency response personnel (acute hazards). If the product is a mixture of several components, the mixture itself could be rated. However, a conservative alternative might be to base the assessment on the most hazardous component, because NFPA data might be more readily available for the components individually. Unlike the previous point scoring systems described in this book, the leak impact factor reflects increasing hazard with increasing point values.
Flammability, Nf

Many common pipeline products are very flammable. The greatest hazard from most hydrocarbons is from flammability. The symbol Nf is used to designate the flammability rating of a substance according to the NFPA scale. The five-point scale shows, in a relative way, how susceptible the product is to combustion. The flash point is one indicator of this flammability.
Figure 7.3 Relative acute-chronic hazard scale for common pipeline products (products placed on the scale include methane, ethane, propane, propylene, ethylene, oxygen, ammonia, gasoline, diesel, fuel oil, toluene, benzene, styrene, and brine; natural gas appears at the immediate/acute end and brine at the chronic end, as described in the text).
The flash point is defined as the minimum temperature at which the vapor over a flammable liquid will "flash" when exposed to a free flame. It tells us what temperature is required to release enough flammable vapors to support a flame. Materials with a low flash point (<100°F) ignite and burn readily and are deemed to be flammable. If this material also has a boiling point less than 100°F, it is considered to be in the most flammable class. This includes methane, propane, ethylene, and ethane. The next highest class of substances has flash points of less than 100°F and boiling points greater than 100°F. In this class, less product vaporizes and forms flammable mixtures with the air. This class includes gasoline, crude petroleum, naphtha, and certain jet fuels. A material is termed combustible if its flash point is greater than 100°F and it will still burn. This class includes diesel and kerosene. Examples of non-combustibles are bromine and chlorine. Use the following list or Appendix A to determine the NFPA Nf value (FP = flash point; BP = boiling point [26]):

Nf = 0   Noncombustible
Nf = 1   FP > 200°F
Nf = 2   100°F < FP < 200°F
Nf = 3   FP < 100°F and BP > 100°F
Nf = 4   FP < 73°F and BP < 100°F
More will be said about flammability in the discussion of vapor cloud dispersion later in this chapter.
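The flash point/boiling point criteria above reduce to a short classifier. The following is only an illustrative sketch; the function name, parameters, and the `combustible` flag are my own, and ratings for real products should come from Appendix A or the NFPA listings:

```python
def nfpa_nf(flash_point_f, boiling_point_f, combustible=True):
    """Assign the NFPA flammability rating Nf from flash point (FP)
    and boiling point (BP), both in degrees F, per the table above."""
    if not combustible:           # e.g., bromine, chlorine
        return 0
    if flash_point_f < 73 and boiling_point_f < 100:
        return 4                  # most flammable class (methane, propane)
    if flash_point_f < 100:
        return 3                  # gasoline, naphtha, certain jet fuels
    if flash_point_f < 200:
        return 2                  # combustibles such as diesel, kerosene
    return 1
```

Checking the most severe class first matters: a product with FP < 73°F and BP < 100°F would also satisfy the Nf = 3 test, so the Nf = 4 branch must come first.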
Reactivity, Nr

Occasionally, a pipeline will transport a material that is unstable under certain conditions. A reaction with air, water, or with itself could be potentially dangerous. To account for this possible increase in hazard, a reactivity rating should be included in the assessment of the product. The NFPA value Nr is used to do this. Although a good beginning point, the Nr value should be modified when the pipeline operator has evidence that the substance is more reactive than the rating implies. An example of this might be ethylene. A rather common chain of events in pipeline operations can initiate a destructive series of detonations inside the line. This is a type of reactivity that should indicate to the handler that ethylene is unstable under certain conditions and presents an increased risk due to that instability. The published Nr value of 2 might not adequately cover this special hazard for ethylene in pipelines. Use the following list or Appendix A to determine the Nr value [26]:

Nr = 0   Substance is completely stable, even when heated under fire conditions
Nr = 1   Mild reactivity on heating with pressure
Nr = 2   Significant reactivity, even without heating
Nr = 3   Detonation possible with confinement
Nr = 4   Detonation possible without confinement
Note that reactivity includes self-reactivity (instability) and reactivity with water. The reactivity value (Nr) can be obtained more objectively by using the peak temperature of the lowest exotherm value as follows [26]:

Exotherm (°C)   Nr
>400            0
305-400         1
215-305         2
125-215         3
<125            4
The immediate threat from the potential energy of a pressurized pipeline is also considered here. This acute threat includes debris and pipe fragments that could become projectiles in the event of a catastrophic pipeline failure. Accounting for internal pressure in this item quantifies the intuitive belief that a pressurized container poses a threat that is not present in a nonpressurized container. The increased hazard due solely to the internal pressure is thought to be rather small because the danger zone is usually very limited for a buried pipeline. When the evaluator sees an increased threat, such as an aboveground section in a populated area, she may wish to adjust the reactivity rating upward in point value.

In general, a compressed gas will have the greater potential energy and hence the greater chance to do damage. This is in comparison to an incompressible fluid. The pressure hazard is directly proportional to the amount of internal pressure in the line. Although the MOP could be used here, this would not differentiate between the upstream sections (often higher pressures) and the downstream sections (usually lower pressures). One approach would be to create a hypothetical pressure profile of the entire line and, from this, identify normal maximum pressures in the section being evaluated. Using these pressures, points can be assessed to reflect the risk due to pressure. So, to the Nr value determined above, a pressure factor can be added as follows:

Incompressible fluids (liquids)     Pressure factor
0-100 psig internal pressure        0 pts
>100 psig                           1 pt

Compressible fluids (gases)
0-50 psig                           0 pts
51-200 psig                         1 pt
>200 psig                           2 pts
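The pressure-factor lookup and its addition to Nr can be sketched as follows. Function and parameter names are mine; the cap at 4 points reflects the limit the text places on the total Nr score:

```python
def pressure_factor(pressure_psig, is_gas):
    """Point adder for internal pressure, per the tables above."""
    if is_gas:  # compressible fluid
        if pressure_psig <= 50:
            return 0
        return 1 if pressure_psig <= 200 else 2
    # incompressible fluid (liquid)
    return 0 if pressure_psig <= 100 else 1

def adjusted_nr(base_nr, pressure_psig, is_gas):
    """Reactivity score with the pressure factor added, capped at
    4 points so it cannot outweigh the Nf and Nh ratings."""
    return min(4, base_nr + pressure_factor(pressure_psig, is_gas))
```

For the natural gas case worked in Example 7.1 (methane Nr = 0 at 500 psig), `adjusted_nr(0, 500, is_gas=True)` returns 2.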
Total point values for Nr should not be increased beyond 4 points, however, because that would minimize the impact of the flammability and toxicity factors, Nf and Nh, whose maximum point scores are 4 points.

Example 7.1: Product hazard scoring
A natural gas pipeline is being evaluated. In this particular section, the normal maximum pressure is 500 psig. The evaluator determines from Appendix A that the Nr for methane is 0. To this, he adds 2 points to account for the high pressure of this compressible fluid. Total score for reactivity is therefore 2 points.

Toxicity, Nh
The NFPA rating for a material's health factor is Nh. The Nh value only considers the health hazard in terms of how that
hazard complicates the response of emergency personnel. Long-term exposure effects must be assessed using an additional scale. Long-term health effects will be covered in the assessment of chronic hazards associated with product spills. Toxicity is covered in more detail in the following section. As defined in NFPA 704, the toxicity of the pipeline product is scored on the following scale [26]:

Nh = 0   No hazard beyond that of ordinary combustibles
Nh = 1   Only minor residual injury is likely
Nh = 2   Prompt medical attention required to avoid temporary incapacitation
Nh = 3   Materials causing serious temporary or residual injury
Nh = 4   Short exposure causes death or major injury

Appendix A lists the Nh value for many substances commonly transported by pipeline.
Acute hazard score

The acute hazard is now obtained by adding the scores as follows:

Acute hazard (0-12 pts) = Nf + Nr + Nh
A score of 12 points represents a substance that poses the most severe hazard in all three of the characteristics studied. Note that the possible point values are low, but this is part of a multiplying factor. As such, it will have a substantial effect on the total risk score. Few preventive actions are able to substantially reduce acute hazards. To be effective, a preventive action would have to change the characteristics of the hazard itself. Quenching a vapor release instantly or otherwise preventing the formation of a hazardous cloud would be one example of how the hazard could be changed. While the probability and the consequences of the hazardous event can certainly be managed, the state of the art is not thought to be so advanced as to change the acute hazard of a substance as it is being released.
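The acute hazard sum is then a one-line calculation. A minimal sketch (the individual ratings are looked up as described above; the function name is mine):

```python
def acute_hazard(nf, nr, nh):
    """Acute hazard score, 0-12 points: the sum of the NFPA
    flammability, (pressure-adjusted) reactivity, and health
    ratings, each of which runs from 0 to 4."""
    for rating in (nf, nr, nh):
        if not 0 <= rating <= 4:
            raise ValueError("NFPA ratings run from 0 to 4")
    return nf + nr + nh
```

A score of 12 would require the maximum rating of 4 in all three characteristics.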
Direct measurement of acute hazards

Acute hazards are often measured directly in terms of fire and explosion effects when contact toxicity is not an issue. In the case of fire, the possible damages extend beyond the actual flame impingement area, as is readily recognizable from approaching a large campfire. Heat levels are normally measured as thermal radiation (or heat flux or radiant heat) and are expressed in units of Btu/ft2-hr or kW/m2. Certain doses of thermal radiation can cause fatality, injury, and/or property damage, depending on the vulnerability of the exposed subject and the time of exposure. Thermal radiation effects are discussed in this chapter and quantified in Chapter 14 (see also Figure 7.8 later in this chapter). Explosion potential is another possible acute hazard in the case of vapor releases. Explosion intensity is normally characterized by the blast wave, measured as overpressure and expressed in psig or kPa. Mechanisms leading to detonation are discussed in this chapter, and a discussion of quantification of overpressure levels can be found in Chapter 14. The amount of harm potentially caused by either of these threats depends on the distance and shielding of the exposed subjects.
A2. Chronic hazard

A very serious threat from a pipeline is the potential loss of life caused by a release of the pipeline contents. This is usually considered to be an acute, immediate threat. Another quite serious threat that may also ultimately lead to loss of life is the contamination of the environment due to the release of the pipeline contents. Though not usually as immediate a threat as toxicity or flammability, environmental contamination ultimately affects life, with possible far-reaching consequences. This section offers a method to rate those consequences that are of a more chronic nature. We build on the material presented in the previous section to do this. From the acute leak impact consequences model, we can rank the hazard from fire and explosion for the flammables and from direct contact for the toxic materials. These hazards were analyzed as short-term threats only. We are now ready to examine the longer term hazards associated with pipeline releases.

Figure 7.4 illustrates how the chronic product hazard associated with pipeline spills can be assessed. The first criterion is whether or not the pipeline product is considered to be hazardous. To make this determination, U.S. government regulations are used. The regulations loosely define a hazardous substance as a substance that can potentially cause harm to humans or to the environment. Hazardous substances are more specifically defined in a variety of regulations including the Clean Water Act (CWA), the Clean Air Act (CAA), the Resource Conservation and Recovery Act (RCRA), and the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA, also known as Superfund). If the pipeline product is considered by any of these sources to be hazardous, a reportable spill quantity (RQ) category designation is assigned under CERCLA (Figure 7.4). These RQ designations will be used in our pipeline risk assessment to help rate hazardous products from a chronic standpoint.
The more hazardous substances have smaller reportable spill quantities. Larger amounts of more benign substances may be spilled before the environment is damaged. Less hazardous substances, therefore, have larger reportable spill quantities. The designations are categories X, A, B, C, and D, corresponding to spill quantities of 1, 10, 100, 1000, and 5000 pounds, respectively. Class X, a 1-pound spill, is the category for substances posing the most serious threat. Class D, a 5000-pound spill, is the category for the least harmful substances.

The EPA clearly states that its RQ designations are not created as agency judgments of the degree of hazard of specific chemical spills. That is, the system is not intended to say that a 9-pound spill of a class A substance is not a problem, while a 10-pound spill is. The RQ is designed to be a trigger point at which the government can investigate a spill to assess the hazards and to gauge its response to the spill. The criteria used in determining the RQ are, however, appropriate for our purposes in ranking the relative environmental hazards of spills. Classifying a chemical into one of these reportable quantity categories is a nontrivial exercise outlined in U.S. regulations, 40 CFR Parts 117 and 302. The primary criteria considered include aquatic toxicity, mammalian toxicity (oral, dermal, inhalation), ignitability and reactivity, chronic toxicity, and potential carcinogenicity. The lowest of these criteria (the worst case) will determine the initial RQ of the chemical.
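The category-to-quantity correspondence is a simple lookup; a sketch (the dictionary and helper names are mine):

```python
# RQ category -> reportable spill quantity in pounds, per CERCLA.
# Smaller reportable quantities indicate more hazardous substances.
RQ_POUNDS = {"X": 1, "A": 10, "B": 100, "C": 1000, "D": 5000}

def more_hazardous(category_a, category_b):
    """True if category_a denotes a more hazardous substance
    (i.e., a smaller reportable quantity) than category_b."""
    return RQ_POUNDS[category_a] < RQ_POUNDS[category_b]
```

Note the inverse ordering: category X, the smallest quantity, marks the most serious substances.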
Figure 7.4 Determination of RQ (flowchart: products with a listed RQ, such as benzene, toluene, butadiene, and chlorine, use the assigned value; otherwise the product is tested against the "hazardous substance" definition, then for volatility, with examples methane, ethane, propane, ethylene, and propylene, and for whether a formal cleanup is required, with diesel, fuel oil, and kerosene receiving RQ = 100 and products such as water, nitrogen, hydrogen, and brine receiving RQ = "none").
The initial RQ may then be adjusted by analysis of the secondary criteria of biodegradation, hydrolysis, and photolysis. These secondary characteristics provide evidence as to how quickly the chemical can be safely assimilated into the environment. A chemical that is quickly converted into harmless compounds poses less risk to the environment. So-called “persistent” chemicals receive higher hazard ratings. The CERCLA reportable quantity list has been revised since its inception and will probably continue to be revised. One weakness of the system is that the best available knowledge may not always be included in the most current version. An operator who is intimately familiar with a substance may be in a better position to rate that product relative to some others. When operator experience suggests that the substance is worse than the published CERCLA RQ implies, the evaluator should probably revise the number to a more severe rating. This can be done with the understanding that the CERCLA rating is subject to periodic review and will most likely be updated as better information becomes
available. If the operator, on the other hand, feels that the substance is being rated too severely, the evaluator should recognize that the operator may not realize all aspects of the risk. It is recommended that RQ ratings not be reduced in severity based solely on operator opinion. Use of the RQ factor incorporates some redundancy into the already assigned NFPA ratings for acute hazards. However, the overlap is not complete. The RQ factor adds information on chronic toxicity, carcinogenicity, persistence, and toxicity to nonhumans, none of which is included in the NFPA ratings. The overlap does specifically occur in acute toxicity, flammability, and reactivity. This causes no problems for a relative risk analysis.
Primary criteria

The following is a brief summary of each of the CERCLA primary criteria [14]:
1. Aquatic toxicity. Originally developed under the Clean Water Act, the scale for aquatic toxicity is based on LC50, the concentration of chemical that is lethal to one-half of the test population of aquatic animals on continuous exposure for 96 hours (see Table 7.1; also see the Notes on toxicity section later in this chapter).
2. Mammalian toxicity. This is a five-level scale for oral, dermal, and inhalation toxicity for mammals. It is based on LC50 data as well as LD50 (the dose required to cause the death of 50% of the test population) data and is shown in Table 7.2.
3. Ignitability and reactivity. Ignitability is based on flash point and boiling point in the same fashion as the acute characteristic, Nf. Reactivity is based on a substance's reactivity with water and with itself. For our purposes, it also includes pressure effects in the assessment of acute hazards.
4. Chronic toxicity. To evaluate the toxicity, a scoring methodology assigns values based on the minimum effective dose for repeated exposures and the severity of the effects caused by exposure. This scoring is a function of prolonged exposure, as opposed to the acute factor, Nh, which deals with short-term exposure only. The score determination methodology is found in U.S. regulations (48 CFR 23564).
5. Potential carcinogenicity. This scoring is based on a high weight-of-evidence designation (either a "known," "probable," or "possible" human carcinogen) coupled with a potency rating. The potency rating reflects the relative strength of a substance to elicit a carcinogenic response. The net result is a high, medium, or low hazard ranking that corresponds to RQs of 1, 10, and 100 pounds, respectively [30].
Secondary criteria

As previously stated, the final RQ rating may be adjusted by evaluating the persistence of the substance in the environment. The susceptibility to biodegradation, hydrolysis, and photolysis allows certain substances to have their RQ ratings lowered one category (e.g., from RQ 10 to RQ 100). To be considered for the adjustment, the substance has to pass initial criteria dealing with the tendency to bioaccumulate, environmental persistence, the presence of unusual hazards (such as high reactivity), and the existence of hazardous degradation or transformation products. If the substance is not excluded because of these items, it may be adjusted downward one RQ category if it shows a very low persistence.

Table 7.1 Aquatic toxicity

RQ (lb)   Aquatic toxicity (LC50 range) (mg/L)
1         <0.1
10        0.1-1.0
100       1-10
1000      10-100
5000      100-500

Unfortunately, petroleum, petroleum feedstocks, natural gas, crude oil, and refined petroleum products are specifically excluded from the EPA's reportable quantity requirements under CERCLA. Because these products comprise a high percentage of substances transported by pipeline, an alternative scoring system must be used. This requires a deviation from the direct application of the EPA rating system when petroleum products are evaluated. For our purposes here, however, we can extend the spirit of the EPA system to encompass all common pipeline products. This is done by assigning RQ-equivalent classifications to substances that are not assigned an RQ classification by the EPA. For the products not specifically listed as hazardous by EPA regulatory agencies, a general definition is offered. If any one of the following four properties is present, the substance is considered to be hazardous [14]:

1. Ignitability. Defined as a liquid with a flash point of less than 60°C, or a nonliquid that can spontaneously cause a fire through friction, absorption of moisture, or spontaneous chemical changes and will burn vigorously and persistently.
2. Corrosivity. Defined as liquids with pH ≤ 2 or ≥ 12.5, or with the ability to corrode steel at a rate of 6.35 millimeters per year at 55°C.
3. Reactivity. Defined as a substance that is normally unstable, reacts violently with water, forms potentially violent mixtures with water, generates toxic fumes when mixed with water, is capable of detonation or explosion, or is classified as an explosive under DOT regulations.
4. Extraction procedure toxicity. This is defined by a special test procedure that looks for concentrations of materials listed as contaminants in the Safe Drinking Water Act's list of National Interim Primary Drinking Water Regulation contaminants [14].
Although petroleum products are specifically excluded from regulatory control, these definitions would obviously include most pipeline hydrocarbon products. This then becomes the second criterion applied in the evaluation of pipeline products.
Table 7.2 Mammalian toxicity

RQ (lb)   Oral LD50 range (mg/kg)   Dermal LD50 range (mg/kg)   Inhalation LC50 range (ppm)
1         <0.1                      <0.04                       <0.4
10        0.1-1                     0.04-0.4                    0.4-4
100       1-10                      0.4-4                       4-40
1000      10-100                    4-40                        40-400
5000      100-500                   40-200                      400-2000
Products that are not specifically listed with an EPA-assigned RQ but do fit the definition of hazardous are now divided into categories of volatile or nonvolatile. Products that do not meet the definition of "hazardous substance" set forth above, OR are not volatile AND do not require a formal cleanup, are assumed to have an RQ designation of "none" (see Figure 7.4).

Following the "hazardous substance" AND volatile branch of the flowchart (Figure 7.4), we now assess these volatile substances. Highly volatile products of concern produce vapors which, when released into the atmosphere, cause potential acute hazards but usually only minimal chronic hazards. Common pipeline products that will fall into this category include methane, ethane, propane, ethylene, propylene, and other liquefied petroleum gases. These products also meet the definition of "hazardous substances" set forth above. We can assume that the bulk of the hazard from highly volatile substances occurs in leaks to the atmosphere. We assume that all leaks of such products into any of the three possible environmental media (air, soil, water) will ultimately cause a release to the air. We can then surmise that the hazard from these highly volatile liquids is mostly addressed in the atmospheric dispersion modeling analysis that will be performed in the acute leak impact consequences analysis. The chronic part of this leak scenario is thought to be in the potential for (1) residual hydrocarbons to be trapped in soil or buildings and pose a later flammability threat, and (2) the so-called "greenhouse" gases that are thought to be harmful to the ozone layer of the atmosphere. These threats warrant an RQ equivalent of 5000 pounds in this ranking system.

This leaves the less volatile hazardous substances, which also need an assigned RQ. Included here are petroleum products such as kerosene, jet fuel, gasoline, diesel oil, and crude oils.
For spills of these substances, the acute hazards are already addressed in the flammability, toxicity, and reactivity assessment. Now, the chronic effects such as pollution of surface waters or groundwater and soil contamination are taken into account. Spills of nonvolatile substances must be assessed as much from an environmental insult basis as from an acute hazard basis. This in no way minimizes the hazard from flammability, however. The acute threat from spilled flammable liquids is addressed in the acute portion of the leak impact. The longer term impact of spilled petroleum products is obtained by assigning an RQ number to these spills. It is recommended that these products be classified as category B spills (reportable quantities of 100 pounds) unless strong evidence places them in another category. This means the RQ equivalent is 100 pounds. An example of evidence sufficient to move the product down one category (more hazardous) would be the presence of a significant amount of category X or category A material (such as methylene chloride, a category X substance). This is discussed further below. Evidence that could move the petroleum product into a category C or category D (less hazardous) would be high volatility or high biodegradation rates.

To make further distinctions within this group, more complex determinations must be made. The value of these additional determinations is not thought to outweigh the additional costs. For instance, it can perhaps be generally stated that the heavier petroleum products will biodegrade at a slower rate than the lighter substances. This is because the degradability is
linked to the solubility, and the lighter products are usually more soluble. However, it can also be generally stated that the lighter petroleum substances may more easily penetrate the soil and reach deeper groundwater regions. This is also a solubility phenomenon. We now have conflicting results of a single property. To adequately include the property of density (or solubility), we would have to balance the benefits of quicker degradation with the potential of more widespread environmental harm.

We have now established a methodology to assign a ranking, in the form of an RQ category, for each pipeline product. An important exception to the general methodology is noted. If the quantity spilled is great enough to trigger an RQ of some trace component, this RQ should govern. This scenario may occur often because we are using complete line rupture as the main leak quantity determinant. For example, a crude oil product that has 1% benzene would reach the benzene RQ number on any spill greater than 1000 pounds. This is because the benzene RQ is 10 pounds, and 1% of a 1000-pound spill is 10 pounds of benzene. To easily account for this general exception to the RQ assignment, the evaluator should start with the leak quantity calculation. She can then work from the CERCLA list and determine the maximum percentage of each trace component that must be present in the product stream before that component governs the RQ determination. Comparing this to an actual product analysis will point out the worst case component that will determine the final RQ rating. An example illustrates this.
Example 7.2: Calculating the RQ

An 8-in. pipeline that transports a gasoline known to contain the CERCLA hazardous substances benzene, toluene, and xylene is being evaluated. The leak quantity is calculated from the line size and the normal operating pressure (normal pressures instead of maximum allowable pressures are used throughout this company's evaluations) to be 10,000 pounds. This calculated leak quantity is now used to determine the component percentages that will trigger their respective RQs for this spill:

Benzene (RQ = 10):    10/10,000 = 0.001 = 0.1%
Toluene (RQ = 1000):  1000/10,000 = 0.1 = 10%
Xylene (RQ = 1000):   1000/10,000 = 0.1 = 10%
The evaluator can now look at an actual analysis to see if the actual product stream exceeds any of these weight percentages. If the benzene concentration is less than 0.1% and the toluene and xylene concentrations are each less than 10%, then the RQ is set at 100 pounds, the default value for gasoline. If, however, actual analysis shows the benzene concentration to be 0.7%, then the benzene RQ of 10 pounds governs. This is because more than 10 pounds of benzene will be spilled in a 10,000-pound spill of this particular gasoline stream. Gasolines generally are rich in benzene, but they are also fairly volatile. Heating oils, diesel, and kerosene are more persistent, but may contain fewer toxicants and suspected carcinogens. Crude oils, of course, cover a wide range of viscosities
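The trigger-percentage arithmetic of Example 7.2 generalizes directly; a sketch with names of my own choosing:

```python
def rq_trigger_percentages(spill_lb, component_rq_lb):
    """For each trace component, the weight percent above which a
    spill of spill_lb pounds releases more than that component's RQ."""
    return {name: 100.0 * rq / spill_lb
            for name, rq in component_rq_lb.items()}
```

For the 10,000-pound gasoline spill above, `rq_trigger_percentages(10000, {"benzene": 10, "toluene": 1000, "xylene": 1000})` gives 0.1% for benzene and 10% each for toluene and xylene; a stream with 0.7% benzene therefore exceeds the benzene trigger, so the 10-pound benzene RQ governs.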
and compositions. The pipeline operator will no doubt be familiar with his products and their properties. Note that there is a 2-point spread between each RQ classification. The evaluator may pick the midpoint between two RQs if she has special information that makes it difficult to strictly follow the suggested scoring. Once again, she must be consistent in her scoring.
Product hazard score

We arrive at a total product hazard score by using this equation:

Product hazard score = acute hazard score + chronic hazard score
Notes on toxicity

An important part of the degree of consequences, both acute and chronic, is toxicity. The degree of toxic hazard is usually expressed in terms of exposure limits to humans. Exposure is only an estimate of the more meaningful measure, which is dosage. The dose is the amount of the product that gets into the human body. Health experts have established dosage limits beyond which permanent damage to humans may occur. Because the intake (dose) is a quantity that is difficult to measure, it is estimated by measuring the opportunity for ingesting a given dose. This intake estimate is the exposure. There are three recognized exposure pathways: inhalation, ingestion, and dermal contact. Breathing contaminated air, eating contaminated foods, or coming into skin contact with the contaminant can all lead to an increased dose level within the body. Some of the exposure pathways can extend for long distances, and over long periods of time, from the point of contaminant release. Plants and animals that absorb the contaminant may reach humans only after several levels of the food chain. Groundwater contamination may spread over great distances and remain undetected for long periods.

Calculations are performed to estimate dosages for each exposure pathway. EPA ingestion route calculations include approximate consumption rates for drinking water, fruits and vegetables, beef and dairy products, fish and shellfish, and soil ingestion (by children). These consumption rates, based on the age and sex of the population affected, are multiplied by the contaminant concentration and by the exposure duration. This value, divided by the body weight and life span, yields the lifetime average ingestion exposure. In a similar calculation, the lifetime average inhalation exposure yields an estimate of the inhalation route exposure. This is based on studies of movement of gases into and out of the lungs (pulmonary ventilation). The calculation includes considerations for activity levels, age, and sex. The dermal route dose is obtained by estimating the dermal exposure and then adjusting for the absorption of the contaminant. Included in this determination are estimates of body surface area (which is, in turn, dependent on age and sex) and typical clothing of the exposed population. In each of these determinations, estimates are made of activity times in outdoor play/work, showering, driving, etc. Life spans are similarly estimated for the population under study.

We are not proposing that all of these parameters be individually estimated for purposes of a risk assessment. The evaluator should realize the simplifications he is making, however, in rating spills here. Because we are only concerned with relative hazards, accuracy is not lost, but absolute risk determination often requires more formal methods.

6. Leak volume

For purposes here, the terms leak, spill, and release are used interchangeably and can apply to unintentional episodes of product escaping from a pipeline system, whether that product is in the form of liquid, gas, or a combination. The total spill quantity is the sum of the leak volume prior to system isolation (which includes detection and reaction times), the leak volume after facility isolation (drain and/or depressure time), and the mitigated leak volume (secondary containment). The following paragraphs discuss pipeline spills and suggest ways to model spill size for a relative risk assessment.

Leaked volume or spill size is a function of leak rate, reaction time, and facility capacities. It is a critical determinant of damage to receptors under the assumption that hazard zone size is proportional to spill size. This assumption is a modeling convenience and will not hold precisely true for all scenarios. Some leaks have a negative impact that far exceeds the impacts predicted by a simple proportion to leak rate. For example, in a contamination scenario, a 1 gal/day leak rate corrected after 100 days is often far worse than a 100 gal/day leak rate corrected in 1 day, even though the same amount of product is spilled in either case. Unknown and complex interactions between small spills, subsurface transport, and groundwater contamination, as well as the increased ground transport opportunity, account for the increased chronic hazard. On the other hand, from an acute hazard perspective, such as thermal radiation, the slower leak is preferable.

The overall equation for LIF recommends breaking the spill and dispersion variables into separate components. This facilitates the assessment of possible spill mitigations. For example, dispersion potential can be affected by secondary containment, where the released contents are fully contained in a leak recovery system or at least limited in their spread by in-station berms or natural barriers.
So, even if the potential volume released has not changed, risk can be reduced by preventing the spread of the spill. However, in many of the sample approaches discussed below, it is a modeling convenience to select variables that impact both the spill size and dispersion potential and use them simultaneously to assess the overall spill scenario. Therefore, this leak evaluation section is organized into separate discussions for leak size, mitigation, and dispersion potential, but the actual scoring examples usually blend the three components.
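The total-spill-quantity accounting described above (leak volume before isolation, draindown after isolation, and a credit for secondary containment) can be sketched as a simple calculation. All function names and figures below are illustrative assumptions, not values from this text:

```python
def total_spill_volume(leak_rate_gpm, detection_min, reaction_min,
                       drain_volume_gal, contained_fraction=0.0):
    """Rough spill-volume model: leak volume before isolation (detection
    plus reaction time), plus drain/depressure volume after isolation,
    less any credit for secondary containment (berms, recovery systems)."""
    pre_isolation = leak_rate_gpm * (detection_min + reaction_min)
    gross = pre_isolation + drain_volume_gal
    return gross * (1.0 - contained_fraction)

# Hypothetical case: a 500 gal/min leak detected in 10 min, isolated after
# another 20 min, with 5,000 gal of draindown and berms capturing 30%.
volume = total_spill_volume(500, 10, 20, 5000, contained_fraction=0.3)
```

Setting `contained_fraction` to zero recovers the unmitigated worst case, which illustrates the point above: containment reduces risk even when the potential volume released is unchanged.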
Hole size As a critical component of assessing the volume or rate of a release, the size of the failure opening (hole size) through which the release occurs must be estimated. A criterion must be established for choosing a leak rate scenario for a release from a pipeline. It is reasonable to assume that virtually any size leak may form in any pipeline. The evaluator could simply choose a
1-in.-diameter hole as the leak size. However, this would not adequately distinguish between a 36-in.-diameter pipeline and a 4-in.-diameter pipeline. While a 1-in. hole in either might cause approximately the same spill or release (initially, at least), we intuitively believe that a 36-in.-diameter pipeline presents a greater hazard than does a 4-in.-diameter pipeline, all other factors being equal. This is no doubt because a much greater release can occur from the 36-in.-diameter pipeline than from the 4-in. line. The hole size is determined by the failure mode, which in turn is a function of pipe material, stress conditions, and the failure initiator. As an extreme example of failure mode, an avalanche failure is characterized by rapid crack propagation, sometimes for thousands of feet along a pipeline, which completely opens the pipe. Main contributing factors to an avalanche failure include low material toughness (a more brittle material that allows crack formation and growth), high stress level in the pipe wall (usually at the base of a crack), and an energy source that can promote rapid crack growth (usually a gas compressed under high pressure). In many applications, a risk assessment model does not attempt to distinguish among likely failure modes; a worst case scenario is often assumed for simplicity. Distinguishing between leak types adds a degree of complexity to the model; however, the added information that is provided can be useful in some cases. In a general sense, the leak size probabilities can somewhat offset an otherwise higher consequence event. For example, a smaller diameter line more prone to large breakage can equal the consequences of a larger line that is prone to small pinhole leaks. A spill size probability distribution can be developed from an examination of past releases. This is further discussed in Chapter 14.
We will assume, if only for the sake of simplification, that a larger hole size leads to a larger leak and that a larger leak has the potential for more severe consequences. Of course, under the right circumstances, a large- or small-area failure can be equally consequential in the pipeline system. For example, a more ductile failure that allows only a minor pipe wall tear can leak undetected for long periods, allowing widespread migration of leaked product. A more violent break of the pipe wall, on the other hand, may cause a rapid depressurization and quick detection of the problem. See also the discussion of leak detection in Chapters 7 and 11 for more details regarding possible leak volumes. Because of the many different materials and conditions that may need to be compared when studying some pipeline systems, a consideration can be included to allow for higher or lower anticipated incidence of large openings in a pipe failure. One intent is to make a distinction between pipes more likely to fail in a catastrophic fashion. This is highly dependent on pipe material toughness. Where pipe material toughness is constant, changing pipe stress levels or initiating mechanisms will govern. Figure 7.5 shows a model of the interrelationships among some of the many factors that determine the type of pipeline leak that is likely. Initiating mechanisms that promote cracks are more likely to lead to a large leak than are mechanisms that cause a pinhole-type leak. (See Chapter 5 for further discussion on fracture mechanics and crack propagation.)
Materials When different materials and various likely failure modes are to be included in the risk analysis, the spill size factor of the leak impact factor can be adjusted. Although such an adjustment is intended primarily to address the widely different materials often encountered in a single distribution system, it can also be used to address more subtle differences in pipelines of basically the same material but operated under different conditions. For example, a higher strength steel pipeline usually has slightly less ductility than Grade B steel and, when combined with factors such as changing stress levels and crack initiators, this raises the likelihood of an avalanche-type line break. An important difference lies in materials that are prone to more consequential failure modes. A large leak area is usually characterized by the action of a crack in the pipe wall. A crack is more able to propagate in a brittle material; that is, a brittle pipe material is more likely to fail in a fashion that creates a large leak area, equal to or greater than the pipe cross-sectional area. This problem is covered in more detail in a discussion of fracture mechanics in Chapter 5. The brittleness or ductility of a material is often expressed in combination with its strength as a material toughness or fracture toughness. Important material factors influencing toughness in pipeline steels include chemical composition (percentage of carbon, manganese, phosphorus, sulfur, silicon, columbium, and vanadium), deoxidization practices, cold work, and heat treatments [65]. The challenge of gauging the likelihood of a more catastrophic failure mode is further complicated by the fact that some materials may change over time. Given the right conditions, a ductile material can become more brittle. Material toughness is an important variable in the potential for certain failure modes.
Even in the same material, slight differences in chemical composition and manufacture can cause significant differences in toughness. The most common method used to assess material toughness is the Charpy V-notch impact test. This test has been shown to correlate well with fracture mechanics in that test results above certain values ensure that fatigue-cracked specimens will exhibit plastic behavior in failure. Charpy-Izod test results for some common pipeline materials are shown in Table 7.3. The ASTM has reported [7] that the tensile stress behavior of steel is not well correlated with its behavior in notched impact tests such as the Charpy test. In other words, acceptable ductile behavior seen in tension failures sometimes becomes unacceptable brittle behavior under notch impact failure conditions. Therefore, specifying minimum material behavior under tensile stress will not ensure adequate material properties from a fracture mechanics standpoint. Impact testing or some equivalent is needed to ensure that material toughness properties are adequate. Until the last decade or so, material toughness or material ductility was not normally specified when pipe was purchased. The rate of loading and the temperature are important parameters in assessing toughness. The likelihood of brittle failure increases with increasing speed of deformation and with decreasing temperature. Below a certain temperature, brittle fracture will always occur in any material.
Figure 7.5 Sample of factors that influence failure hole size. [Diagram: contributing factors (material type, stress, cycles, temperature, pressure, internal and external loading, fatigue, rate of stressing, ovality, geometry, the ductile-brittle transition, stress concentrators such as gouges, dents, laminations, and manufacturing defects, and crack detection methods) govern crack formation and growth from initiators (outside force, corrosion) and lead to the failure types: pinhole, puncture, tear, and crack.]
Stresses High stress levels in a pipe wall are one of the most important contributing factors to a catastrophic failure. High stress levels are a function of internal pressure, external loadings, wall thickness, and exact pipe geometry. Mechanical pipe damage (e.g., dents, gouges, buckles) and improper use of some pipe fittings (e.g., sleeve taps, branch connections) can dramatically impact stress levels by causing stress concentration points. The energy source should also be considered here. A compressed gas, due to the higher energy potential of the compressible fluid, can promote significantly larger crack growth and,
consequently, leak size. For relatively incompressible fluids, the decompression wave speed will usually exceed the crack propagation speed and hence will not promote large crack growth. In other words, on initiation of the leak, the pipeline depressures quickly with an incompressible fluid. This means that insufficient energy usually remains at the failure point to support continued crack propagation. The use of crack arrestors can also impact the risk picture. A crack arrestor is designed to slow the crack propagation sufficiently to allow the depressurization wave to pass. Once past the crack area, the reduced pressure can no longer drive crack growth. More ductile or thicker material (stress levels are
reduced as wall thickness increases) can act as a crack arrestor. Allowances for these can be made in the material toughness scoring or in the stress level scoring. As with other pressure-related aspects of this risk assessment, it is left to the evaluator to choose stress levels representing either normal operating conditions (routine pressures and loadings) or extreme conditions (MOP or rare loading scenarios). The appropriateness of either option will depend on the intended uses of the assessment. The choices made should be consistent across all sections evaluated and across all risk variables that involve pressure.

Table 7.3 Charpy-Izod tests

Material                        Tensile strength (psi)   Charpy-Izod test results (ft-lb)
High-density polyethylene        4,000                   1-12
Low-density polyethylene         2,000                   16
Polypropylene                    5,000                   1-11
PVC                              6,000                   1
Gray cast iron                  41,000                   4
Ductile cast iron               60,000                   20
Carbon steel (0.2% carbon)      60,000                   55
Carbon steel (0.45% carbon)     90,000                   20

Source: Keyser, C. A., Materials Science in Engineering, 3rd ed., Columbus, OH: Charles E. Merrill Publishing Company, 1980, pp. 75-101, 131-159.
Note: The Charpy-Izod impact test is an accepted method for gauging material resistance to impact loadings when a flaw (a notch) is present. The test is temperature dependent and is limited in some ways, but can serve as a method to distinguish materials with superior resistance to avalanche-type failures.
Initiating mechanisms Another consideration in the failure initiator is the type of damage to the pipe that has initiated the failure. For instance, some analyses suggest that corrosion effects are more likely to lead to pinhole-type failures, whereas third-party damage initiators often have a relatively higher chance of leading to catastrophic failures. Such statements are, of course, broad generalizations: many mechanisms, and interactions among them, might contribute to a failure, and the first contributor to the formation of a crack might be very different from the contributor that ultimately leads to the crack propagation and pipe failure. When crack formation and growth are very low possibilities, the likelihood of a tear or a pinhole instead of a larger failure is higher. When a strong correlation between initiator and failure mode is thought to exist, a scale can be devised that relates a consequence adjustment factor to the probability score. The probability score should capture the type of initiating event. For example, when the third-party damage index is higher (by some defined percentage, perhaps) than the corrosion index, the spill score is decreased, reflecting a larger possible leak size. When corrosion index scores are higher, the effective spill size can be decreased, reflecting a smaller likely hole size. This could similarly be done in the design index to capture stress and earth movement influences. It is left to the evaluator to more fully develop this line of reasoning when it is deemed prudent to do so. An example of the application of this reasoning to an absolute risk assessment is given in Chapter 14. As an example of assessing the higher potential of avalanche failures, an adjustment factor can be applied to a previously calculated spill score (see the following section). In this sample scheme, the two key variables that determine the adjustment factor, especially for compressed gas pipelines, are (1) stress level and (2) material toughness. Consideration for the initiating mechanism can also be added to this adjustment factor. As a base case, pipe failures are modeled as complete failures where the leak area is equal to the cross-sectional area of the pipe. This allows a simple and consistent way to compare the hazards related to pipes of varying sizes and operating pressures. To incorporate the adjustment factor, the base case should be further defined as some "normal" situation, perhaps the case of a Grade B steel line operating at 60% of the specified minimum yield strength of the material. This base or reference case will have a certain probability of failing in such a way that the leak area is greater than the pipe cross-sectional area. In situations where the probability of this type of failure is significantly higher or lower than the base case, an adjustment factor can be employed (see Table 7.4). This adjustment factor will make a real change in the risk values, but, since it is a measure of likelihood only, it does not override the diameter and pressure factors that play the largest role in determining spill size. The final spill score is as follows:

Final spill score = (effective spill size score) x (adjustment factor for larger openings)

where the effective spill size score is based on a small failure opening. Alternatively, the adjustment factor can decrease the effective spill size when the preliminary assessment assumes a full-bore rupture scenario. Therefore, when a material failure distinction is desired, the evaluator can create a scale of adjustment factors that will cover the range of pipe materials and operating stresses that will be encountered.
When stress levels and material toughness values reach certain levels, the effective spill size can be adjusted. For instance, a material with lower toughness, operated at high stress levels, might cause the score to double. This is the same effect as a large increase in leak size (normally caused by an increase in pipe diameter or pressure in the basic risk assessment model). Table 7.4 provides an example of an adjustment scale. As mentioned earlier, such scales are normally more useful in gas pipelines, given the higher energy level (and, hence, the higher possibility for catastrophic failures) associated with compressed gases.

Table 7.4 Effective spill size adjustment factors based on % SMYS of a high-pressure gas pipeline (a)

                                            % of SMYS
Toughness                                  <40%   50%   60%   70%   >80%
Lowest (PVC)                                1     1.5   1.5   2     2
Low (cast iron)                             1     1     1.5   1.5   1.5
Medium (PE, API 5LX 60 or higher steel)     1     1     1     1     1.5
Base case (A53 Grade B steel)               1     1     1     1     1

(a) Use smaller values when evaluating a liquid pipeline.
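A minimal sketch of applying a Table 7.4 style adjustment factor to a previously calculated spill score follows. The factor values, toughness labels, and stress bands are illustrative assumptions in the spirit of the table, not an authoritative encoding of it:

```python
# Adjustment factors keyed by toughness class and % SMYS band
# (illustrative values; a real model would encode the full table).
ADJUSTMENT = {
    "lowest_pvc": {"<40": 1.0, "50": 1.5, "60": 1.5, "70": 2.0, ">80": 2.0},
    "base_case":  {"<40": 1.0, "50": 1.0, "60": 1.0, "70": 1.0, ">80": 1.0},
}

def final_spill_score(effective_spill_size_score, toughness, smys_band):
    """Final spill score = (effective spill size score) x (adjustment
    factor for larger openings), per the scheme described in the text."""
    return effective_spill_size_score * ADJUSTMENT[toughness][smys_band]

# A low-toughness line at 70% SMYS doubles its effective spill size score,
# the same effect as a large increase in diameter or pressure.
score = final_spill_score(4.0, "lowest_pvc", "70")
```

Because the factor multiplies rather than replaces the spill size score, diameter and pressure still dominate the final result, as the text requires.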
Release models An underlying premise in risk assessment is that larger spill quantities lead to greater consequences. In addition to spill size, the evaluator must identify what kinds of hazards might be involved, since some are more sensitive to release rate, while others are more sensitive to total volume released. The rate of release is the dominant mechanism for most short-term thermal damage potential scenarios, whereas the volume of release is the dominant mechanism for many contamination potential scenarios. Table 7.5 shows some common pipeline products and how the consequences should probably be modeled. Each of the modeling types shown in Table 7.5 is discussed in this chapter. Because potential spill sizes are so variable and because different spill characteristics will be of interest depending on the product type, it is useful to create a spill score to represent the relative spill threat. To assess a spill score, the evaluator must first determine which state (vapor or liquid) will be present after a pipeline failure. If both states exist, the more severe hazard should govern, or the spill can be modeled as a combination of vapor and liquid (see Appendix B). Even though the difficult-to-predict dispersion characteristics of a vapor release appear more complex than those of a liquid spill, the liquid spill is actually more challenging to model. The vapor release scenarios lend themselves to some simplifying assumptions and the use of a few variables as substitutes for the complex dispersion models. Liquid spills, on the other hand, are more difficult to generalize because there are infinite possibilities of variables such as terrain, topography, groundwater,
and other characteristics that dramatically impact the severity of the spill. In both release cases, liquid or vapor, leak detection can play a role in potential risks. This is discussed briefly under Dispersion in a subsequent section and also in Chapter 11.
Hazardous vapor releases In an initial risk assessment for most general purposes, it is suggested that a leak scenario of a complete line failure (a guillotine-type shear failure) be used to model the worst case leak rate. This type of failure causes the leak rate to be calculated based on the line diameter and pressure. Even though this type of line failure is rare, the risk assessment is still valid; by consistency of application, we can choose any hole size and leak rate. We are simply choosing one here that serves the dual role of incorporating the factors of pipe size and line pressure directly into rating vapor release potential. Alternatively, several scenarios of failure hole sizes can be evaluated and then combined. These scenarios could represent the distribution of all possible scenarios and would require that the relative probability of each hole size be estimated. This requires additional complexity of analysis. However, because this approach incorporates the fact that the larger hole size scenarios are usually rare, it better represents the range of possibilities. Having determined the failure hole sizes to be used in the assessment, the vapor release scenario also needs estimates for the characteristics that will determine the potential consequences from the release. As discussed previously, the threats from a vapor release are generally more dependent on release rate than release volume, because the immediate vapor cloud formation and thermal effects from jet fires are of most concern. Exceptions exist, of course, most notably scenarios that involve accumulation of vapors in confined areas.
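Because the guillotine-break convention ties the worst case leak rate to line diameter and pressure, a relative (ranking-only) release rate proxy can be as simple as orifice area times pressure. This sketch is an illustrative assumption, not a discharge calculation from this text:

```python
import math

def relative_gas_release_rate(diameter_in, pressure_psia):
    """Relative, not absolute, worst case release rate proxy for a
    full-bore break: cross-sectional area times absolute pressure.
    Suitable only for ranking pipelines against one another."""
    area = math.pi * (diameter_in / 2.0) ** 2
    return area * pressure_psia

# A 36-in. line vs. a 4-in. line at the same 1,000 psia:
big = relative_gas_release_rate(36, 1000)
small = relative_gas_release_rate(4, 1000)
ratio = big / small
```

The 36-in. line scores 81 times the 4-in. line at equal pressure, consistent with the intuition that the larger line presents the greater hazard even though a 1-in. hole in either would leak similarly at first.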
Table 7.5 Common pipeline products and modeling of consequences

Product | Hazard type | Hazard nature | Dominant hazard model
Flammable gas (methane, etc.) | Acute | Thermal | Jet fire; thermal radiation
Toxic gas (chlorine, H2S, etc.) | Acute | Toxicity | Vapor cloud dispersion modeling
Highly volatile liquids (propane, butane, ethylene, etc.) | Acute | Thermal and blast | Vapor cloud dispersion modeling; jet fire; overpressure (blast) event
Flammable liquid (gasoline, etc.) | Acute and chronic | Thermal and contamination | Pool fire; contamination
Relatively nonflammable liquid (diesel, fuel oil, etc.) | Chronic | Contamination | Contamination
As one approach to assessing relative release rate impacts, the leak volume can be approximated by calculating how much vapor will be released in 10 minutes. Our interest in the leak volume under this approach is not a contradiction of the earlier statement of primary dependence on leak rate. The conversion of the leak rate into a volume is merely a convenience under this approach that allows a combined vapor-liquid release to be modeled in a similar fashion (see Appendix B). The highest leak rate occurs when the pressure is the highest and the escape orifice is the largest. This leads to the assumption that, in most cases, the worst case leak rate happens near the instant of pipeline rupture, while the internal pressure is still the highest and after the opening has reached its largest area. As will be discussed later, the highest leak rate generally produces the largest cloud. As the leak rate decreases, the cloud shrinks. As an exception, in the case of a dense cloud, vapors may "slump" and collect in low-lying areas or "roll" downhill and continue to accumulate as the cloud seeks its equilibrium size. In modeling the 10-minute release scenario, we are conservatively assuming that all the vapor stays together in one cloud for the full 10-minute release. We are also conservatively neglecting the depressuring effect of 10 minutes' worth of product leakage. This is done to keep the calculation simple. The 10-minute interval is chosen to allow a reasonable time for the cloud to reach maximum size, but not long enough to count an excessive mass of well-dispersed material as part of the cloud. The amount of product released and the cloud size will almost always be overestimated using the above assumptions. Again, for purposes of the relative risk assessment, overestimation is not a problem as long as consistency is ensured. See Appendix B for more discussion of leak rate determinations.
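The 10-minute convention above reduces to a one-line calculation; the leak rate used here is a hypothetical figure:

```python
def ten_minute_cloud_volume(leak_rate_per_min):
    """Conservative cloud proxy: all product released in the first 10
    minutes is assumed to remain in a single cloud, and no credit is
    taken for line depressuring over the interval (per the stated
    simplifications). Units follow whatever rate units are supplied."""
    return leak_rate_per_min * 10.0

cloud = ten_minute_cloud_volume(1200.0)  # e.g., a 1,200 scf/min escape rate
```

Because the same convention is applied to every section evaluated, the deliberate overestimation does not distort the relative ranking.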
An alternative approach that avoids some of the calculation complexities associated with estimating release quantities is to use pressure and diameter as proxies for the release quantities. Using a fixed damage threshold (thermal radiation levels; see page 308), it has been demonstrated that the extent of the threat from a burning release of gas is proportional to pressure and diameter [83]. Therefore, pressure and diameter are suitable variables for assessing at least one critical aspect of the potential consequences from a gas release. As in the first approach, this can incorporate conservative assumptions regarding cloud formation and dispersion. Because the immediate hazards from vapor releases are mostly influenced by leak rate, leak detection will not normally play a large role in risk reduction. One notable exception is a scenario where leak detection could minimize vapor accumulation in a confined space.
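One widely published form of this pressure-diameter proportionality, not given in this text but used in U.S. natural gas integrity management practice (ASME B31.8S), estimates a potential impact radius as r = 0.69 x d x sqrt(p), with d in inches, p in psig, and r in feet. It is shown here only as an illustration of a fixed-damage-threshold proxy, under the assumption that the B31.8S form applies:

```python
import math

def potential_impact_radius_ft(diameter_in, pressure_psig):
    # PIR form for natural gas pipelines: r = 0.69 * d * sqrt(p).
    # The 0.69 coefficient embeds a fixed thermal radiation threshold
    # for a burning full-bore release.
    return 0.69 * diameter_in * math.sqrt(pressure_psig)

radius = potential_impact_radius_ft(30, 1000)  # roughly 655 ft
```

Note how the hazard extent grows linearly with diameter but only with the square root of pressure, which is one reason diameter and pressure work well as ranking proxies.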
Hazardous liquid spills Potential liquid spill size is a variable that depends on factors such as hole size, system hydraulics, and the reliability and reaction times of safety equipment and pipeline personnel. Safety equipment and operation protocol are covered in other sections of the assessment, so the system hydraulics alone are used here to rank spill size. Based on the expected potential hazards from a liquid spill, including pool fire and contamination potential, the spill volume is a critical variable. Potential spill volume is estimated from potential leak rates and leak times.
Leak rate is determined with a worst case line break scenario: a full-bore rupture. As with the atmospheric dispersion, choosing this scenario allows us to incorporate the line size and pressure into the hazard evaluation. A 36-in.-diameter high-pressure gasoline line poses a greater threat than a 4-in.-diameter high-pressure gasoline line, all other factors being equal. This is because the larger line can potentially create the larger spill. Alternatively, several scenarios of failure hole sizes can be evaluated and then combined, as was noted for the vapor release modeling. The scenarios would represent the distribution of all possible scenarios and would require that the relative probability of each hole size be estimated. This requires additional complexity of analysis. However, because this approach incorporates the fact that the larger hole size scenarios are usually rarer, it better represents the range of possibilities. It also can evaluate scenarios where small amounts, below detection capabilities, are leaked for very long periods and result in larger total volume spills. This requires evaluation of leak detection capabilities and the construction of representative scenarios. Because the release of a relatively small volume of an incompressible liquid can depressure the pipeline quickly, the longer term driving force to feed the leak may be gravity and siphoning effects or pumping equipment limitations. A leak in a low-lying area may be fed for some time by the draining of the rest of the pipeline, so the evaluator should find the worst case leak location for the section being assessed. The leak rate should include product flow from pumping equipment. Reliability of pump shutdown following a pipeline failure is considered elsewhere. Based on the worst case leak rate and leak location for the section, the relative spill size can be scored according to how much product is spilled in a fixed time period of, say, 1 hour.
Leaks can be (and have been) allowed to continue for more than 1 hour, but leaks can also be isolated and contained in much shorter periods. The 1-hour period is therefore arbitrary, but will serve our purposes for a relative ranking. This approach will distinguish the more hazardous situations such as high-throughput, large-diameter liquid pipelines in low-lying areas. In many scenarios, reaction to a liquid spill plays a larger role in consequence minimization than does reaction to a gas release. An adjustment to the spill score can be applied when it can be shown that special capabilities exist that will reliably reduce the potential spill size by at least 50%, as is detailed in later sections. The consequences arising from various liquid spill volumes are closely intertwined with the dispersion potential of those volumes. Therefore, evaluating these scenarios is often best done by simultaneously considering spill volume and dispersion, as is discussed later in this chapter. Some modeling options are shown in Table 7.6.
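The fixed 1-hour scoring window can be sketched as follows; the pump rate and draindown volume are hypothetical inputs:

```python
def one_hour_spill_gal(pump_rate_gpm, drain_volume_gal):
    """Worst case 1-hour spill: pumped volume over the fixed 60-minute
    scoring window plus gravity draindown to the worst case (low point)
    leak location for the section."""
    return pump_rate_gpm * 60.0 + drain_volume_gal

spill = one_hour_spill_gal(2000.0, 50000.0)

# A demonstrated capability that reliably halves the potential spill
# could then justify an adjusted (reduced) spill score, per the text.
mitigated = spill * 0.5
```

The draindown term is what penalizes low-lying sections of an otherwise identical pipeline, which is exactly the distinction the 1-hour convention is meant to surface.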
Table 7.6 Liquid spill size analysis options

Flow rate only: Assumes high volume; full line rupture at MOP. Higher level screening to assess differences in consequence potential among different liquid pipeline systems (different locations, diameters, products, etc.). Generally assumes that things get worse uniformly in proportion to increasing leak size.

Flow rate and draindown potential: Adds more resolution to identify potential consequence differences along a single pipeline route, since relative low spots are penalized for greater stabilization release volumes after section isolation.

Add basic terrain considerations: Improves evaluations of potential consequences in general, since site-specific variables are included. Examples of terrain variables include slope, surface flow resistance, and waterway considerations obtained from maps or from general field surveys. Water body intersects can be determined and characterized based on the water body's flow (slope at the intersect point can be a proxy for water flow rate).

Hole size and pressure: More realistic, but must include probabilities of various hole sizes. Provides for an estimate of pinhole leak volumes over several years.

Particle trace: Also called flow path modeling, this is normally a computer application (GIS) that determines the path of a hypothetical leaked drop of liquid. Includes topography and sometimes surface flow resistance. The computer routine is sometimes called a costing function. Accumulation points and water body intersects are determined. Arbitrary stop points of potential flow paths may need to be set.

Particle trace with release volume considerations: Adds the aspect of ground penetration (soil permeability) and the driving force of the volume released in order to better characterize both the depth and lateral spread distance of the leak. Flow path stop points are automatically determined. Volumes may be determined based on worst case releases or probabilistic scenarios.

Add aquifer characteristics: Adds a hydrogeologic subsurface component to surface flow analyses to model groundwater transport of portions of a leak that may contact the aquifer over time. More important for toxic and/or environmentally persistent contamination scenarios.

Highly volatile liquid releases Calculating the quantity of material released under flashing conditions is a very complex task, due to the complex phenomena that take place during such a release. The process is nonlinear and out of equilibrium for at least part of the episode. Beyond the quantity calculation, the vapor cloud generation calculation adds further complications. Many variables such as weather conditions, heat transfer through soil, wind patterns, rate of air entrainment, and temperature changes must be considered. In simple terms, a gaseous cloud of highly volatile liquids (HVLs) will be formed from the initial leak, which in turn transitions into a combination of secondary sources. The secondary sources include the quantity of immediately flashing material, the vapor generation from a liquid pool, and the evaporation of airborne droplets. The initial release rate will be the highest release rate of the event. This rate will decrease almost instantly after the rupture. As the depressurization wave from a pipeline rupture moves away from the rupture site, pressures inside the pipeline quickly drop to vapor pressure. At vapor pressure, the pipeline contents will vaporize (boil), releasing smaller quantities of vapor. Releases of a highly volatile liquid are similar in many respects to the vapor release scenario. Key differences include the following:

- HVLs have multiphase characteristics near the release point. Product escapes in liquid, vapor, and aerosol form, increasing the vapor generation rate in the immediate area of the leak.
- Liquid pools might form, also generating vapors.
- As the pipeline reaches vapor pressure, the remaining liquid contents vaporize through flashing and boiling, until the release is purely gaseous.
- Lower vapor pressures (compared to pure gases) generally lead to heavier vapors (negatively buoyant), more cohesive clouds with more concentrated product, and possibly higher energy potential.
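The particle trace (flow path) option in Table 7.6 can be sketched as a steepest-descent walk over an elevation grid; production GIS routines add surface flow resistance, release volume limits, and water body intersects. The toy grid below is an assumption for illustration:

```python
# Minimal particle-trace sketch: from the spill cell, repeatedly step to
# the lowest neighboring cell until no neighbor is lower. The stopping
# cell is an accumulation point of the hypothetical leaked drop.
def trace_flow_path(elevation, start):
    rows, cols = len(elevation), len(elevation[0])
    path = [start]
    r, c = start
    while True:
        neighbors = [
            (r + dr, c + dc)
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < rows
            and 0 <= c + dc < cols
        ]
        lowest = min(neighbors, key=lambda rc: elevation[rc[0]][rc[1]])
        if elevation[lowest[0]][lowest[1]] >= elevation[r][c]:
            return path  # accumulation point reached
        r, c = lowest
        path.append(lowest)

grid = [
    [9, 8, 7],
    [8, 6, 5],
    [7, 5, 3],
]
path = trace_flow_path(grid, (0, 0))  # drains toward the grid's low corner
```

The hypothetical drop migrates from the spill cell to the low corner, which a fuller model would flag as an accumulation point or test against water body intersects.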
C. Dispersion A release of pipeline contents can impact a very specific area, determined by a host of pipeline and site characteristics. The
relative size of that impacted area is the subject of this portion of the consequence assessment. As modeled by physics and thermodynamics, spilled product will always seek a lower energy state. The laws of entropy tell us that systems tend to become increasingly disordered. The product will mix and intersperse itself with its new environment in an irreversible process. The spill has also introduced stress into the system. The system will react to relieve the stress by spreading the new energy throughout the system until a new equilibrium is established. The characteristics of the spilled product and the spill site determine the movement of the spill. The possibilities are spills into the atmosphere, surface water, soil, groundwater, and man-made structures (buildings, sewers, etc.). Accurately predicting these movements can be an enormously complex modeling process. For releases into the atmosphere, product movement is covered in the discussion of vapor dispersion. Liquid dispersion scenarios cover releases into other media. Some spill scenarios involve both the spill of a liquid and vapor generation from the spilled liquid as it disperses. For purposes of many assessments, accurate modeling of the dispersion of spilled product will not be necessary. It is the propensity to do harm that is of interest. A substance that causes great damage even at low concentrations, released into an area that allows rapid and wide-ranging spreading, creates the greatest hazard. If a product escapes from the pipeline, it is released as a gas and/or a liquid. As a gas, the product has more degrees of freedom and will disperse more readily. This may increase or decrease the hazard, since the product may cover more area, but in a less concentrated form. A flammable gas will entrain oxygen as it disperses, becoming an ignitable mixture. A toxic gas may quickly be reduced to safe exposure levels as its concentration decreases.
Dispersion 7/149
The relative density of the gas in the atmosphere will partly determine its dispersion characteristics. A heavier gas will generally stay more concentrated and accumulate in low-lying areas. A lighter gas should rise due to its buoyancy in the air. Every density of gas will be affected to some extent by air temperature, wind currents, and terrain.

A product that stays in liquid form when released from the pipeline poses different problems. Environmental insult, including groundwater contamination, and flammability are the most immediate problems, although toxicity can play a role in both short- and long-term scenarios.

For purposes of risk assessment, dispersion goes beyond the physical movement of leaked product. Thermal and blast effects can range far beyond the distance that the leaked molecules have traveled. The calculation of a hazard zone expands the concept of dispersion to include these additional ramifications. Dispersion is normally the determining factor of a hazard zone. Dispersion and, hence, hazard zone are also intuitively closely intertwined with spill quantity. This risk analysis assesses dispersion somewhat separately from spill size in the interest of risk management: there are risk mitigation opportunities to reduce spill size or dispersion independently.

Reductions in dispersion are assumed to reduce the potential consequences. From a risk standpoint, the degree of dispersion impacts the area of opportunity, because more wide-ranging effects offer greater chances to harm sensitive receptors. Reductions in the amount or range of the spill may occur through natural processes of evaporation and mixing and thereby reduce the potential consequences. Similarly, reductions in the harmful properties of the substance reduce the risk. This may occur through natural processes such as biodegradation, photolysis, and hydrolysis.
If the by-products of these reactions are less harmful than the original substance, which they often are, the hazard is proportionally reduced. Barriers that either limit dispersion or protect receptors from hazards also reduce risks.

Several dispersion mechanisms, the underlying processes that create the dispersion or hazard zone area, are examined in this section. The hazard zone for a gas release is established through either a jet fire or a vapor cloud. The hazard zone for a liquid release arises from either a pool fire or a contamination scenario. HVL hazard zones can arise from any of these mechanisms.
Jet fire

Release of a flammable gas carries the threat of ignition and subsequent fire. Thermal radiation from a sustained jet or torch fire, potentially preceded by a fireball, is a primary hazard to people and property in the immediate vicinity of a gas pipeline failure. In the event of a line rupture, a vapor cloud will form, grow in size as a function of release rate, and usually rise due to discharge momentum and buoyancy. This cloud will normally disperse rapidly and an ignited gas jet, or unignited plume, will be established. If ignition occurs before the initial cloud disperses, the gas may burn as a rising and expanding fireball.

A trench fire is a special type of jet fire. It can occur if a discharging gas jet impinges on the side of the rupture crater or some other obstacle. This impingement redirects the gas jet, reducing its momentum and length while increasing its width, and possibly producing a horizontal profile fire. The affected area of a trench fire can be greater than for an unobstructed jet fire because more of the heat-radiating flame surface may be concentrated near the ground surface [83].

Calculating hazard zones from jet fires is discussed later in this chapter and in Chapter 14. Those discussions illustrate that pressure, diameter, and energy content of the escaping gas are critical determinants of the thermal effects distances.
Vapor clouds (vapor spills)

Of great interest to risk evaluators are the characteristics of vapor cloud formation and dispersion following a product release. Vapor can be formed from product that is initially in a gaseous state or from a product that vaporizes as it escapes or as it accumulates in pools on the ground. The amount of vapor put into the air and the vapor concentrations at varying distances from the source are the subject of many modeling efforts.

At least two potential hazards are created by a vapor cloud. One occurs if the product in the cloud is toxic or displaces oxygen (that is, acts as an asphyxiant). The threat is then to any susceptible life forms that come into contact with the cloud. Larger clouds or low-lying clouds provide a greater area of opportunity for this contact to occur and hence carry a greater hazard. The second hazard occurs if the cloud is flammable. The threat then is that the cloud will find an ignition source, causing fire and/or explosion. Larger clouds logically have a greater chance of finding an ignition source and also increase the damage potential because more flammable material may be involved in the fire event. Of course, the vapor cloud can also present both hazards: toxicity and flammability.
Vapor cloud ignition

When an escaping pipeline product forms a vapor cloud, the entire range of possible concentrations of the product/air mixture exists. Within a specific fuel-to-air ratio range, the vapor cloud will be flammable. This is the range between the upper flammability limit (UFL) and the lower flammability limit (LFL), which are the threshold concentration levels of interest (also called explosion limits) representing the concentrations of the vapors in air that support combustion. Ignition is only possible for concentrations of vapors mixed with air that fall between these limits. Outside these limits, the mixture is either too rich or too lean to ignite and burn. Because mixing is by no means constant, the LFL distance will vary in any release event. A flammable gas will therefore be ignitable at this point in the cloud.

Although ignition is not inevitable, there is often a reasonable probability of ignition due to the large number of possible ignition sources: cigarettes, engines, open flames, residential heaters, and sparks, to name just a few. It is not uncommon during gaseous product release events for the ignition source to be created by the release of energy itself, including static electricity arcing (created from high dry gas velocities), contact sparking (e.g., metal to metal, rock to rock, rock to metal), or electric shorts (e.g., movement of overhead power lines). It is conservative to assume, then, that an ignition source will come into contact with the proper fuel-to-air ratio at some point during the release. The consequences of this contact range from
a jet fire to a massive fireball and detonation. On ignition, a flame propagates through the cloud, entraining surrounding air and fuel from the cloud. If the flame propagation speed becomes high enough, a fireball and possibly a detonation can occur. The fireball can radiate damaging heat far beyond the actual flame boundaries, causing skin and eye damage and secondary fires. If the cloud is large enough, a “fire storm” can be created, generating its own winds and causing far-reaching secondary fires and radiant heat damage. Ignition probabilities are discussed in Chapter 14.
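As a rough illustration of the flammability-limit concept described above, the sketch below flags whether a sampled vapor concentration falls in the ignitable band. The LFL/UFL values shown are commonly cited approximate figures for methane; product-specific limits should be substituted in a real assessment.

```python
# Sketch: is a fuel/air mixture within its flammable range?
# The LFL/UFL defaults are approximate, commonly cited values for
# methane (roughly 5-15 vol % in air); they are illustrative only.
METHANE_LFL = 5.0   # lower flammability limit, vol % in air (approx.)
METHANE_UFL = 15.0  # upper flammability limit, vol % in air (approx.)

def is_ignitable(concentration_pct: float,
                 lfl: float = METHANE_LFL,
                 ufl: float = METHANE_UFL) -> bool:
    """Return True when the vapor concentration lies between LFL and UFL."""
    return lfl <= concentration_pct <= ufl

# Inside the band the mixture can burn; outside it is too lean or too rich.
```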
Overpressure wave

In rare cases, a vapor cloud ignition can lead to an explosion. An explosion involves a detonation and the generation of blast waves, commonly measured as overpressure in psig. An unconfined vapor cloud explosion, in which a cloud is ignited and the flame front travels through the cloud quickly enough to generate a shock wave, is a rare phenomenon. Such a phenomenon is called an overpressure wave. A confined cloud is more likely to explode, but confinement is difficult to accurately model for an open-terrain release.

The intensity of the overpressure event is inversely proportional to the distance from the explosion point: the intensity is less at greater distances. Various overpressure levels can be related to various damages. An overpressure level of 10 psi generally results in injuries (eardrum damage) among an exposed population. Higher overpressure levels cause more damage but would only occur closer to the explosion point. It is conservatively assumed that an unconfined vapor cloud explosion can originate in any part of the cloud. Therefore, the overpressure distance is conservatively added to the LFL distance (the ignition distance) for purposes of hazard zone estimation.

The manner in which an ignited vapor cloud potentially transforms from a burning event to an exploding event is not well understood. It rarely occurs when the weight of airborne vapor is less than 1000 pounds [83]. Should a detonation occur, widespread damage is possible. A detonation can generate powerful blast waves reaching far beyond the actual cloud boundaries. Most hydrocarbon/air mixtures have heats of combustion greater than the heat of explosion of TNT [8], making them very high energy substances. The possibility of vapor cloud explosions is enhanced by closed areas, including partial enclosures created by trees or buildings. Unconfined vapor cloud explosions are rare but nonetheless a real danger.
Certain experimental military bombs are designed to take advantage of the increased blast potential created by the ignition of an unconfined cloud of hydrocarbon/air vapor. Damages that could result from overpressure (blast) events are discussed in Chapter 14.
Vapor cloud size

The predicted vapor cloud size is a function of variables such as the release rate, release duration, product characteristics, threshold concentrations of interest, and surrounding environment (e.g., weather, containment barriers, ignition source proximity) at the release site.

A cloud of lighter-than-air vapors such as natural gas or hydrogen will normally be buoyant: it will tend to rise quickly with minimum lateral spreading. An HVL cloud will normally be negatively buoyant (heavier than air), due mostly to the evaporative cooling of the material. These dense vapors tend to slump and flow to low points in the immediate topography. Typical cloud configurations are roughly cigar or pancake shaped.

A stable atmospheric condition is usually chosen for release modeling, in order to generate scenarios closer to worst case. Atmospheric stability classes are discussed in Chapter 14 and shown in Table 14.31. This stability class represents some fraction of possible weather-type days in any year. Under very favorable conditions, unignited cloud drift may lead to extended hazard zone distances, but such events are seen to be rare, difficult to estimate, and generally considered within the conservative assumptions already included in these estimates.

Many variables affect the dispersion of vapor clouds. In general, these include:
- Release rate and duration
- Prevailing atmospheric conditions
- Limiting concentration
- Elevation of source
- Surrounding terrain
- Source geometry
- Initial density of release [5].
Release duration is not as critical in estimating maximum cloud size, since the release rate will diminish almost instantly as the pipeline rapidly depressurizes under the rupture scenario. Smaller leaks could create vapor clouds that would be more dependent on release duration, especially under weather conditions that support cloud cohesiveness, but these scenarios are not thought to produce maximum cloud sizes.

The extreme complexities surrounding a vapor release scenario make the problem only approximately solvable for even a relatively closed system. An example of a somewhat closed system is a well-defined leak from a fixed location where the terrain is known and constant and where weather conditions can be reasonably estimated from real-time data. A cross-country pipeline, on the other hand, complicates the problem by adding variables such as soil conditions (moisture content, temperature, heat transfer rates, etc.), topography (elevation profile, drainage pathways, waterways, etc.), and often constantly changing terrain and weather patterns (amount of sunshine, wind speed and direction, humidity, elevation, etc.).

Even though it vaporizes quickly, a highly volatile pipeline product can form a liquid pool immediately after release. This could be the case with products such as butane or ethylene. The pool would then become a secondary source of the vapors. Vapor generation would be dictated by the temperature of the pool surface, which in turn is controlled by the air temperature, the wind speed over the pool, the amount of sunshine reaching the pool, and the heat transfer from the soil (Figure 7.6). The soil heat transfer is in turn governed by soil moisture content, soil type, and both recent and current weather. Even if all of these factors could be accurately measured, the system is still a nonlinear relationship that cannot be exactly solved.
Cloud modeling

A vapor cloud that covers more ground surface area, either due to its size or its cohesiveness, has a greater area of opportunity
Figure 7.6  Vapor cloud from pipeline rupture (the figure shows vapor concentration contours above ground level and the depressure wave in the pipeline)
to find an ignition source or to harm living creatures. This should be reflected in the risk assessment. To fully characterize the maximum dispersion potential, numerous scenarios run on complex models would be required. Even with much analysis, such models can only provide bounding estimates, given the numerous variables and possible permutations of variables. So again, we turn to a few easily obtained parameters that may allow us to determine a relative risk ranking of some scenarios. An exact numerical solution is not always needed or justified.

Dispersion studies have revealed a few simplifying truths that can be used in this risk assessment. In general, the rate of vapor generation, rather than the total volume of released vapor, is the more important determinant of cloud size. A cloud reaches an equilibrium state for a given set of atmospheric conditions. At this equilibrium, the amount of vapor added from the source theoretically exactly equals the amount of vapor that leaves the cloud boundary (the cloud boundary can be defined as any vapor concentration level). So when the surface area of the cloud reaches a size at which the rate of vapor escaping the cloud equals the rate entering the cloud, the surface area will not grow any larger (see Figure 7.6). The vapor escape rate at the cloud boundary is governed by atmospheric conditions. The cloud will therefore remain this size until the atmospheric conditions or the source rate change. This fact yields one quantifiable risk variable: leak rate. Given a constant weather condition, the cloud size is most sensitive to the leak rate. The sensitivity is not linear, however: a 10-fold increase in leak rate is seen as only a 3-fold increase in cloud size in some models.
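The "10-fold increase in leak rate, roughly 3-fold increase in cloud size" sensitivity noted above implies a power-law scaling with an exponent near log10(3) ≈ 0.48. The sketch below encodes that assumed relationship for relative comparisons only; the exponent is inferred from the cited model behavior, not a general result.

```python
import math

# Assumed power law: (cloud size ratio) = (leak rate ratio) ** N,
# with N chosen so that a 10x rate increase yields a 3x size
# increase, as some dispersion models suggest.
N = math.log10(3)  # ~0.477

def relative_cloud_size(rate_ratio: float) -> float:
    """Relative equilibrium cloud size for a given leak-rate ratio."""
    return rate_ratio ** N
```

Used for ranking, only the ratios matter: doubling the exponent's base case (a 100-fold rate increase) gives roughly a 9-fold size increase under this assumption.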
A second simplifying parameter is the effect of molecular weight on dispersion. Molecular weight is inversely proportional to the rate of dispersion. A higher molecular weight tends to produce a denser cloud that has a slower dispersion rate. A denser cloud is less affected by buoyancy effects and air turbulence (caused by temperature differences, wind, etc.) than a lighter cloud. This fact yields another risk variable: product molecular weight.

In the absence of more exact data, it is therefore proposed that the increased risk due to a vapor cloud can be assessed based on two key variables: leak rate and product molecular weight. Meteorological conditions, terrain, chemical properties, and the host of other important variables may be intentionally omitted for many applications. The omission may be justifiable for two reasons: First, the additional factors are highly variable in themselves and consequently difficult to model or measure. Second, they add much complexity and, arguably, little additional accuracy for purposes of relative risk evaluation. Therefore, measures of relative leak rate and molecular weight can be used to characterize the relative dispersion of a released gas.
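As a hypothetical illustration of the two-variable proposal above, a relative dispersion measure could scale up with leak rate and down with molecular weight. The functional form and the function name below are assumptions made for illustration; the text proposes only the two variables, not a specific formula.

```python
def relative_dispersion(leak_rate: float, molecular_weight: float) -> float:
    """Hypothetical relative dispersion measure: increases with leak rate,
    decreases with molecular weight (denser clouds disperse more slowly).
    Only ratios between scenarios on a consistent basis are meaningful."""
    return leak_rate / molecular_weight

# Example comparison at the same leak rate: methane (MW ~16) scores a
# higher relative dispersion than propane (MW ~44), reflecting the
# lighter gas's faster spreading.
```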
Liquid spill dispersion

Physical extent of spill

The physical extent of the liquid spill threat depends on the extent of the spill dispersion, which in turn depends on the size of the spill, the type of product spilled, and the characteristics of the spill site. The size of the spill is a function of the rate of release and the duration. Slow leaks gone undetected for long periods can sometimes be more damaging than massive leaks that are quickly detected and addressed.
To fully analyze a liquid spill scenario, a host of variables must be assessed:

Product characteristics
- Product viscosity
- Product vapor pressure
- Product flow rate
- Product pressure
- Product solubility
- Product miscibility
- Evapotranspiration rate.

Pipeline characteristics
- Root cause of failure
- Hole dimensions
- Proximity to isolation valves
- Time to recognize event
- Time to confirm release
- Time to close block valves
- Initial release volume
- Stabilization release volume.

Environment characteristics
- Soil infiltration rate
- Drainage pathways
- Weather patterns
- Proximity to ignition sources
- Vegetative cover effects
- Slope effects
- Groundwater flow patterns
- Proximity to surface waters.

In identifying all possible liquid leak impact ranges, it may not be necessary to fully evaluate all of the potential interplays among these variables. The added complexities and modeling costs often outweigh the benefits of such detailed calculations. A range of leak analysis options is available, each of which might be appropriate for a certain type of evaluation.

Topography will be a critical determinant in most liquid spill scenarios. It is difficult to generate a universally applicable scoring table for topography. Which is preferable: rapid, wide surface dispersion, or limited surface transport but more rapid ground penetration? The unfortunate (from a modeling perspective) answer is that "it depends." In some cases, a concentrated spill (limited dispersion) poses less risk, while in other cases, even at the same location, the opposite is true. A rapid and wide dispersion might reduce ignition probability and burn time, should ignition occur. In other cases, ignition might be preferable, thereby eliminating contamination potential or preventing migration of the spill to other receptors. The possible receptor interactions are critical elements of topographical considerations.
This includes receptors of ground and surface water, in addition to other environmental receptors, population density, and property. It is difficult to find simplifying assumptions to use in ranking potential liquid spill scenarios, given the widely varying threats accompanying the many differences in terrain, topography, and product characteristics. A range of analysis options is available, of which several methods, from the simpler to the more complex, are listed in Table 7.6.
Adding leak detection and emergency response considerations impacts the volumes released and adds a level of resolution to any of the above analyses. It is especially important to consider leak detection capabilities for scenarios involving toxic or environmentally persistent products. In those cases, a full line rupture might not be the worst case scenario. Slow leaks gone undetected for long periods can be more damaging than massive leaks that are quickly detected and addressed. A leak detection capability curve (see Figure 7.7) can be used to establish the largest potential volume release.

The more complex analyses are becoming more commonplace given the increased availability of powerful computing environments and topographical information in electronic databases. An important benefit of the more complex analysis approaches is the ability to better characterize the receptors that are potentially exposed to a spill: those that are actually "in harm's way." In many cases, receptors may be relatively close to, but upslope of, the pipeline and hence at much less risk. Focusing on the locations that are more at risk is obviously an advantage in risk management.

Spills in soil or water are the most common pipeline environmental concern. Such spills also carry the potential for groundwater contamination. Product movement through the soil depends on such soil factors as adsorption, percolation, moisture content, and bacterial content. Soil characteristics can best be assessed by using one of the common soil classification systems, such as the USDA soil classification system, which incorporates physical, chemical, and biological properties of the soil. For simplicity, only one soil characteristic, permeability, is considered in some risk evaluations. This is also the soil characteristic used in the EPA hazard ranking system (HRS): permeability of geologic materials [14].
Releases into surface waters are the second potential type of environmental insult and pathway to population receptors. The size of the body of water and its uses determine the severity of the hazard. If the water is used for swimming, fishing, livestock watering, irrigation, or drinking water, pollution concentrations must be kept quite low. Spills into water should take into account the miscibility of the substance with water and the water movement. A spill of immiscible material into stagnant water would be the equivalent of a spill into relatively impermeable soil. A highly miscible material spilled into a flowing stream is the equivalent of a spill into highly permeable soil. (See the later section dealing with spills into waterways.)
Thermal effects

Adding to the physical extent of the spilled product are the potential thermal effect distances arising from pools of ignited and burning product. These are more fully discussed in Chapter 14 under the calculation of hazard zones. Potential thermal effects are largely dependent on the size of the pool created from the spilled product. Pool growth can be simulated using a calculation method specified by the EPA, the Federal Emergency Management Agency, or the U.S. Department of Transportation (DOT) [5, 86]. This correlation relates the release size to the pool area:

log(A) = 0.492 log(M) + 1.617
where M represents the total liquid mass spilled in pounds and A is the pool area in square feet.
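A minimal sketch of this correlation, evaluating pool area (in square feet) from the spilled liquid mass (in pounds):

```python
import math

def pool_area_sqft(mass_lb: float) -> float:
    """Pool area (ft^2) from spilled liquid mass (lb), per the
    correlation log10(A) = 0.492 * log10(M) + 1.617 [5, 86]."""
    return 10 ** (0.492 * math.log10(mass_lb) + 1.617)

# e.g., a 1,000-lb spill gives a pool on the order of 1,200 ft^2
```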
Alternatively, a simple geometry expression can be used to calculate pool radius, once a pool depth is assumed. In either case, assumptions must be made regarding the rate of penetration into the soil, evaporation, and other considerations.

Thermal radiation is related to the emissivity and transmissivity. In accounting for shielding by surrounding layers of smoke, emissivity is related to the normal boiling point of the material. Higher boiling point fluids tend to burn with sooty flames. Emissivity has been correlated to boiling point by means of the following relationship:

E = -0.313 × Tb + 117

where E is the effective emissive power (kW/m²) and Tb is the normal boiling point in degrees Fahrenheit [5].

Transmissivity is a measure of how much of the emitted radiation is transmitted to a potential receptor. It is mainly a function of the path to the receptor: distance, relative humidity, and flame temperature. Water and carbon dioxide tend to reduce the transmissivity. With assumptions like constant transmissivity, the thermal radiation from a pool fire is related to spill size and boiling point. The emissive power value can be used in an inverse square relationship to calculate thermal radiation levels at certain distances from the fire. Equations for pool growth and emissivity are shown here because they offer the opportunity to extract some simplifying assumptions, as will be shown later. The calculation scheme could be:

estimate spill size → calculate pool area → add boiling point → calculate relative hazard distance
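Under the stated simplifications (constant transmissivity folded into the emissive power, inverse-square falloff), the chain "spill size → pool area → boiling point → relative hazard distance" can be sketched as below. Treating the burning pool as a point source radiating over a sphere is an assumption for relative ranking, not a definitive flame model.

```python
import math

def emissive_power(boiling_point_f: float) -> float:
    """Effective emissive power E (kW/m^2) from the normal boiling
    point (deg F), per E = -0.313 * Tb + 117 [5]."""
    return -0.313 * boiling_point_f + 117.0

def radiation_at(distance_m: float, pool_area_m2: float,
                 boiling_point_f: float) -> float:
    """Thermal radiation (kW/m^2) at a receptor, treating the burning
    pool as a point source of total output E * A spread over a sphere
    (inverse-square falloff); transmissivity is assumed constant."""
    e = emissive_power(boiling_point_f)
    return e * pool_area_m2 / (4 * math.pi * distance_m ** 2)
```

Consistent with the inverse-square assumption, doubling the receptor distance cuts the received radiation by a factor of four.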
Contamination potential

Most spills of hydrocarbon liquids will present hazards related to both fire and contamination. Potential damages from each hazard type tend to overlap, and are interchangeable in some cases and additive in others. Contamination potential sometimes depends on the thermal radiation potential: if the product burns on release, then the contamination potential is diminished or eliminated.

The environment can be very sensitive to certain substances. Contaminations in the few parts per billion or even parts per trillion are potentially of concern. If contamination is defined as 10 parts per billion, a 10-gallon spill of a solvent can contaminate a billion gallons of groundwater. A 5000-gallon spill from a pipeline can contaminate 500 billion gallons of groundwater to 10 ppb. The potential contamination is determined by the simple formula:

Vs × Cs = Vgw × Cgw

where

Vs = volume of spill
Vgw = volume of groundwater contaminated
Cs = average concentration of contaminant in spilled material
Cgw = average concentration of contaminant in groundwater.

It is very difficult to generalize a contamination area estimate. Any estimate is highly dependent on volume released, rate of release, soil permeability, surface flow resistance, groundwater movement, surface water intersects, etc., all of which are very location specific. As one possible generalization, the contamination can be modeled as being proportional to the potential pool size. Pool size can be estimated as described previously. Some multiple of this pool size (2× to 10×, perhaps) can be used as a standard relative measure of contamination distance. This will of course routinely over- and underestimate the true distances but, when used consistently, can help rank the damage potential.
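The mass-balance formula above rearranges directly to give the groundwater volume contaminated to a given limit; a minimal sketch (the units simply need to be consistent):

```python
def contaminated_volume(spill_volume: float,
                        spill_concentration: float,
                        limit_concentration: float) -> float:
    """Groundwater volume diluting the spill down to the contamination
    limit, from Vs * Cs = Vgw * Cgw. Concentrations are fractions
    (e.g., 10 ppb = 10e-9); the result shares spill_volume's unit."""
    return spill_volume * spill_concentration / limit_concentration

# 10 gallons of pure solvent (Cs = 1) against a 10-ppb limit works out
# to on the order of a billion gallons of groundwater, matching the
# example in the text.
```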
Spill migration

While a full topographical, hydrogeological analysis is the best way to estimate contamination potential, some basic concepts of the migration of hydrocarbons through a medium such as soil or water can be used to model the range of a spill. Depending on numerous factors, a hydrocarbon spill will spread laterally as well as penetrate the soil. Quantities of lighter-than-water hydrocarbons that penetrate the soil will often form a pancake shape at some level below the surface, as gravity and buoyancy forces are balanced and the spill spreads laterally. The surface spread, soil penetration depth, and movement through soil are generally related to the product and soil characteristics captured in a variable termed hydraulic conductivity.

An additional consideration that impacts the depth of penetration and the spread is the soil retention capacity (or residual saturation capacity, the hydrocarbon retained by soil particles), as shown in Table 7.7. Increasing soil retention reduces the spread of the spill. The product viscosity is the chief product characteristic to be considered. The soil permeability is well correlated with both the hydraulic conductivity and the soil retention capacity, so it is a valuable variable for the risk model. For example, a scoring regime can be set up for which the contamination potential is partly a function of the soil retention and hydraulic conductivity. In Table 7.7, higher scores represent greater spread of contaminants.

The phenomenon of source strength, the intensity with which dissolved chemicals may be released from a spilled hydrocarbon into water, is considered in assessing the product hazard component of this model (where that assessment should consider the presence of trace amounts of hazardous components) and in the use of groundwater depth as a variable. Deeper groundwater affords more opportunity for soil retention and may minimize the lateral spread of the spill.
Any changes to the hydraulic conductivity of the soil due to the spilled hydrocarbon are beyond the resolution of this model.
Table 7.7  Soil retention and hydraulic conductivities for various types of soil
Soil type                Retention capacity   Hydraulic conductivity (cm/s)   Score
Stone/coarse gravel      5                    10^0 to 10^2                    10
Gravel/coarse sand       8                    10^-1 to 10^0                   8
Coarse to medium sand    15                   10^-2 to 10^-1                  6
Medium to fine sand      25                   10^-4 to 10^-2                  4
Fine sand to silt        40                   10^-6 to 10^-4                  2
Clay                                          10^-8 to 10^-6                  1
A drawback to this scale appears for the special case in which a low-penetration soil promotes a wider spill surface area and hence places additional receptors at risk. In very general terms, a spill of a more acutely hazardous product might generate less risk with greater soil penetration. This is especially true when the product is less persistent in the soil, as is often the case with higher flammability products. The counter to this reasoning is the increased cleanup costs and decreased volatility as soil penetration increases. Note also that natural factors such as wind strength and direction or topography can protect a receptor from damage, even if that receptor is fairly close to the leak site.
Spill and leak mitigation

There are sometimes opportunities to reduce the volume or dispersion of released pipeline contents after a failure. The pipeline operator's ability to seize these opportunities can be included in the risk assessment. Secondary containment and emergency response, especially leak detection/reaction, are considered to be risk mitigation measures that minimize potential consequences by minimizing product leak volumes and/or dispersion. The effectiveness of each varies depending on the type of system being evaluated.
Secondary containment

Opportunities to fully contain or limit the spread of a liquid release can be considered here. These opportunities include:

- Natural barriers or accumulation points
- Casing pipe
- Lined trench
- Berms or levees
- Containment systems.

Most secondary containment opportunities are found at stations and are discussed in Chapter 13. Although waterways are often areas of special environmental and population concern, they also sometimes offer an environment in which a liquid release is readily isolated. This may be the case when the spill occurs in a stable water body such as a pond or lake, which offers limited transport mechanisms and in which the spilled product is relatively immiscible and insoluble. This can enable more rapid and complete cleanup, including the possible (and controversial) choice to burn off a layer of spilled hydrocarbon from the water surface. A more damaging scenario involves water bodies with more rapid transport mechanisms and spills that reach the more sensitive receptors that are typically found on shorelines.

Where secondary containment exists, or it is recognized that special natural containment exists, the evaluator can adjust the spill score accordingly. A system for evaluating secondary containment for pipeline stations is shown in Chapter 13.
Emergency response Emergency response and especially leak detection and reaction, is appropriately considered as a mitigation measure to minimize potential consequences by minimizing spill volumes
and/or dispersion. The effectiveness varies depending on the type of system being evaluated. Emergency response and leak detection evaluation methods are more fully discussed later in this chapter. Leak detection and vapor dispersion Leak detection plays a relatively minor role in minimizing hazards to the public in most scenarios of gas transmission pipelines. Therefore, many vapor dispersion analyses will not be significantly impacted by any assumptions relative to leak detection capabilities. This is especially true when defined damage states (see Chapter 14) use short exposure times to thermal radiation, as is often warranted. Reference [83] illustrates that gas pipeline release hazards depend on release rates which in turn are governed by pressure. In the case of larger releases, the pressure diminishes quicHymore quickly than would be affected by any actions that could be taken by a control center. In the case of smaller leaks, pressures decline more slowly but ignition probability is much lower and hazard areas are much smaller. In general, there are few opportunities to evacuate a pressurized gas pipeline more rapidly than occurs through the leak process itself, especially when the leak rate is significant. A notable exception to this case is that of possible gas accumulation in confined spaces. This is a common hazard associated with urban gas distribution systems and is covered in Chapter 11. Another, less common exception would be a rather remote scenario involving the ignition of a small leak that causes immediate localized damages and then more widespread damages as more combustible surroundings are ignited over time as the fire spreads. In that scenario, leak detection might be more useful in minimizing potential impacts to the public. Leak detection and liquid dispersion Leak detection capabilities play a larger role in liquid spills compared to gas releases. 
Liquid products can be detected long after a leak has occurred because they have more opportunities for accumulation and are usually more persistent in the environment. A small, difficult-to-detect leak that is allowed to continue for a long period can cause widespread contamination damages, especially to aquifers. Therefore, the ability to quickly locate and identify even small leaks is critical for some liquid pipelines.
Scoring releases

Once he or she has an understanding of release mechanisms and risk implications, the evaluator will next need to model potential releases for the risk assessment. This is often done by assigning a score to various release scenarios. To score the relative dispersion area or hazard zone of a spill or release, the relative measures of quantity released and dispersion potential can be combined and then adjusted for mitigation measures. When the quantity and dispersion components use the same variables, it might be advantageous to score the two components in one step. As more variables are added to the assessment in order to distinguish relative consequence potential more accurately, the benefits of the scoring approach diminish. Eventually the evaluator should consider performing the detailed calculations, estimating actual hazard zones using
models for dispersion and thermal effects. As noted in the introductory chapters of this book, the challenge when constructing a risk assessment model is to fully understand the mechanisms at work and then identify the optimum number of variables for the model's intended use. For instance, Table 7.8 implies that overpressure (blast effects from a detonation) is not a consideration for natural gas. This is a modeling simplification: unconfined vapor cloud explosions involving methane have not been recorded, but confined vapor cloud explosions are certainly possible. Table 7.8 lists the range of possible pipeline product types and shows the hazard type and nature. The type of model and some choices for key variables that are probably best suited to a hazard evaluation of each product are also shown. Assessment resolution issues further complicate model design, as discussed in Chapter 2. The assessment of relative spill characteristics is especially sensitive to the range of possible products, pipe sizes, and pressures. As noted in Chapter 2, a model built for parameters ranging from a 40-in., 2000-psig propane pipeline to a 1-in., 20-psig fuel oil pipeline will not be able to make many risk distinctions between a 6-in. natural gas pipeline and an 8-in. natural gas pipeline. Similarly, a model that is sensitive to differences between a pipeline at 1100 psig and one at 1200 psig might have to treat all lines above a certain pressure/diameter threshold as the same. This is an issue of modeling resolution. In most cases, the scoring of a pipeline release will closely parallel the estimation of a hazard area or hazard zone. This is reasonable since the spill score is ranking consequence potential, which in turn is a function of hazard area.
The hazard zone is a function of the damage state of interest, where the damage state is a function of the type of threat (thermal, overpressure, contamination, toxicity) and the vulnerabilities of the receptors, as discussed later. In the scoring examples presented below, it is important to recognize that the hazard zone is a measure of the distance from the source at which a receptor is threatened. The source might not be at the pipeline failure location. Especially in the case of hazardous liquids, whose hazard zones are often a function of pool size, the location of the pool can be some distance from the leak site. Envision a steeply sloped topography where the spilled liquid accumulates some distance from the leak site. Note also that a receptor can be very close to a leak site and not suffer any damages, depending on variables such as wind strength and direction, topography, or the presence of barriers.
Scoring hazardous liquid releases

As discussed, a relative assessment of potential consequences from a liquid spill should include relative measures of contamination and thermal effects potential, both of which are a function of spill volume. Contamination area is normally assumed to be proportional to the extent of the spill. Thermal effects are normally assumed to be a function of pool size and the energy content of the spilled liquid. Three possible approaches to evaluating relative hazard areas, independent of topographic considerations, are discussed next. Additional examples of algorithms used to evaluate relative liquid spill consequences are shown in Chapter 14. Since there are many trade-offs in risk modeling, and there is no absolutely correct procedure, the intention here is to provide some ideas to the designer of the risk assessment model.

Scoring approach A

One simple (and coarse) scheme to assess the potential liquid spill hazard area in a relative fashion is as follows:

Contamination potential = (spill volume score) × (RQ)
Thermal hazard = (spill volume score) × (Nf)

Here the relative consequence area is assessed in two components: contamination hazard and thermal hazard. The spill volume score is critical to both. It can be based on the relative pumping rate and maximum drain volume, and its scale should be determined from the full range of possible flow rates and drain volumes. The spill score is then multiplied by the pertinent product hazard component. We noted previously that, in many scenarios, an area of contamination can be more widespread than a thermally impacted area. Some multiplier applied to the estimated pool size might be representative of the relative contamination potential. Because RQ is on a 12-point scale and Nf is on a 4-point scale, this scheme is consistent with that belief and possibly avoids the need for a pool size multiplier. In other
Table 7.8 Dominant hazards and variables for various products transported

Product                                      Hazard type         Hazard nature                Dominant hazard model                                               Key variables impacting hazard area
Flammable gas (methane, etc.)                Acute               Thermal                      Torch fire; thermal radiation; vapor cloud dispersion               Molecular weight (MW), pressure, diameter
Toxic gas (chlorine, H2S, etc.)              Acute               Toxic                        Vapor dispersion modeling                                           MW, pressure, diameter, weather, toxicity level
HVL (propane, butane, ethylene, etc.)        Acute               Thermal and blast            Vapor dispersion modeling; torch fire; overpressure (blast) event   MW, pressure, diameter, weather, Hc, Cp
Flammable liquid (gasoline, etc.)            Acute and chronic   Thermal and contamination    Pool fire; contamination                                            MW, boiling pt, specific gravity, topography, ground surface
Relatively nonflammable liquid (diesel,      Chronic             Contamination                Contamination                                                       Topography, ground surface, toxicity, environmental persistence
fuel oil, etc.)
7/156 Leak Impact Factor
words, this approach uses the scales for RQ and Nf as a simplification to show the perceived relationships between consequence area and product characteristics. This scheme is based on an understanding of the underlying variables and seems intuitively valid as a mechanism for relative comparisons. It captures, for example, the idea that a gasoline spill and a fuel oil spill of the same quantity have equivalent contamination potential, but the gasoline potentially produces more thermal effects. However, this or any proposed algorithm should be tested against various spill scenarios before being adopted as a fair measure of relative consequence potential. This approach produces two nondimensional scores representing the relative consequences of contamination and thermal hazards from a liquid spill. Depending on the application, the contamination and thermal effects potentials might be combined for an overall score. In other applications, it might be advantageous to keep the more chronic contamination scenario score separate from the more acute thermal effects score. If an equivalency is to be established, the relative consequence "value" of each hazard type must be determined. When contamination potential is judged to be a less serious consequence than thermal effects (or vice versa), weightings can be used to adjust the numerical impacts of each relative to the other. Perhaps, from a cost and publicity perspective, the following relationship is perceived:

Thermal hazard = 2 × (contamination potential)
This implies that potential thermal effects should play a larger role (double) in risk assessment and therefore in risk management. This may not be appropriate in all cases.
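Scoring Approach A can be sketched in a few lines of Python. This is a minimal illustration, not the book's implementation; the input scores (spill volume, RQ on its 12-point scale, Nf on its 4-point scale) and the optional thermal weighting are assumed example values.

```python
# Minimal sketch of Scoring Approach A (illustrative only, not the
# book's implementation). Inputs are assumed to be pre-computed relative
# scores: spill volume, RQ (12-point scale), and Nf (4-point scale).

def approach_a_scores(spill_volume_score, rq_score, nf_score,
                      thermal_weight=1.0):
    """Return (contamination, thermal) relative consequence scores."""
    contamination = spill_volume_score * rq_score
    # Optional weighting, e.g., thermal_weight=2.0 to express
    # "thermal hazard = 2 x contamination potential".
    thermal = spill_volume_score * nf_score * thermal_weight
    return contamination, thermal

# Equal-volume gasoline-like (Nf=3) vs. fuel-oil-like (Nf=2) spills:
# equal contamination scores, different thermal scores.
gasoline = approach_a_scores(spill_volume_score=5, rq_score=8, nf_score=3)
fuel_oil = approach_a_scores(spill_volume_score=5, rq_score=8, nf_score=2)
```

Because the two components stay separate, the chronic (contamination) and acute (thermal) scores can be reported independently or combined, as the text discusses.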
Scoring approach B

Another example approach, which focuses only on a thermal hazard zone, combines the relative spill volume and thermal effects in an algorithm that relates some key variables. For example, the spill score for liquids can be based on the pool growth and effective thermal emissivity models previously described:

Liquid spill score = LOG[(spill mass) × 0.5] / [(boiling point)^0.5]
This relationship was created by examination of the underlying thermal effects formulas and a trial-and-error process of establishing equivalencies among various thermal effects hazard zones. It provided satisfactory differentiation capabilities for the specific scenarios to which it was applied. However, this algorithm has not been extensively tested to ensure that it fairly represents most scenarios. Pressure is not a main determinant of spill volume in this algorithm, since the product is assumed to be relatively incompressible. Except for a scenario involving spray of liquids, the potential damage area is not thought to be very dependent on pressure in any other regard. Potential contamination impacts are not specifically included in this relationship. It may be assumed that contamination areas are encompassed by the thermal effects or, alternatively, a separate contamination assessment can be performed.
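As a sketch, the relationship can be computed directly. The 0.5 exponent on boiling point is a reconstruction inferred from the worked spill-score values appearing later in this chapter; units follow the text's examples (spill mass in lb).

```python
import math

# Sketch of the Approach B liquid spill score as reconstructed from the
# text: LOG[(spill mass) x 0.5] / (boiling point)^0.5. The 0.5 exponent
# is an inference from the tabulated example scores, not a quoted value.

def liquid_spill_score(spill_mass_lb, boiling_point):
    return math.log10(spill_mass_lb * 0.5) / math.sqrt(boiling_point)

# For the same spill mass, a higher boiling point (a less volatile
# product such as fuel oil) yields a lower relative score.
```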
Scoring approach C

Scoring Approach C might be suitable for a simple relative assessment where potential contamination consequences are seen as the only threat. It assumes that the main threat is to groundwater, so soil permeability is a key determinant. The problem is simplified here to two factors: leak volume and soil permeability (or its equivalent if a release into water is being studied). Points can be assessed based on the quantity of product spilled under a worst case scenario. The worst-case scenario can range from a large-volume, sudden spill to a very slow, below-detection-limits spill.

Pounds spilled         Point score
<1,000                 1
1,001-10,000           2
10,001-100,000         3
100,001-1,000,000      4
>1,000,000             5
This is an example of a score-assignment table designed for a certain range of possible spills. The range of the table should reflect the range of spill quantities expected from all systems to be evaluated. This will usually be the largest diameter, highest pressure pipeline as the worst case, and the smallest, lowest pressure pipeline as the best case. Some trial calculations may be needed to determine the worst and best cases. If the range is too small or too large, comparisons among spills from different lines may not be possible. Table 7.9 can then be used to score the soil permeability for liquid spills into soil. This assignment of points implies that more or faster liquid movement into the soil increases the range of the spill. Of course, greater soil penetration will decrease surface flows and vice versa. Either surface or subsurface flow might be the main determinant of contamination area, depending on site-specific conditions. Since groundwater contamination is the greater perceived threat here, this point scale shows greater consequences with increasing soil permeability. When this is not believed to be the case, the evaluator can modify the awarding of points to better reflect actual conditions. The soil permeability score from Table 7.9 is the second of the two parts of the liquid spill score. The point values from the spill quantity table and Table 7.9 are added or averaged to yield the relative score for contamination area. This score represents the belief that a larger volume, spilled in a higher permeability soil, leads to a proportionally greater consequence area. Ultimately, a scoring of the spilled substance's hazards and persistence (considering biodegradation, hydrolysis, and photolysis) will combine with this number in evaluating the consequences of the spill.
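Approach C can be sketched as two small lookups. The band edges follow the example tables; the exact point values shown are illustrative assumptions, and the evaluator would tune them to the systems being assessed.

```python
# Sketch of Scoring Approach C: a spill-quantity point score combined
# with the Table 7.9 soil permeability score. Band edges follow the
# example tables; exact point values are illustrative assumptions.

QUANTITY_BANDS = [            # (upper bound, points) for pounds spilled
    (1_000, 1),
    (10_000, 2),
    (100_000, 3),
    (1_000_000, 4),
    (float("inf"), 5),
]

SOIL_POINTS = {               # per Table 7.9
    "gravel, sand, highly fractured rock": 5,
    "fine sand, silty sand, moderately fractured rock": 4,
    "silt, silty clay, loess, clay loams, sandstone": 3,
    "clay, compact till, unfractured rock": 2,
    "impervious barrier": 0,
}

def quantity_points(pounds_spilled):
    for upper, points in QUANTITY_BANDS:
        if pounds_spilled <= upper:
            return points

def contamination_score(pounds_spilled, soil_description):
    # The text allows adding or averaging the two point values; adding
    # is shown here.
    return quantity_points(pounds_spilled) + SOIL_POINTS[soil_description]
```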
Adjustments

As an additional consideration in any method of scoring the liquid hazard zone, adjustments can be made to account for local features that might act as dispersion amplifiers or reducers. These might include sloping terrain, streams, ravines, water bodies, natural pooling areas, sewer systems, and other topographical features that tend to extend or minimize a hazard area.

Scoring hazardous vapor releases

If the model is intended only to assess risks of natural gas pipelines (or another application with only one type of gas being transported), then a simple approach is to use only the
pipeline's pressure and diameter to characterize the relative hazard zone. This assumes that there is a fixed thermal radiation level of interest, as discussed in Chapter 14, but that level does not necessarily need to be identified for purposes of a relative risk assessment. Some modeling or scoring approaches to obtain relative consequence scores are presented next. Other examples can be found in Appendix E.

Table 7.9 Soil permeability score

Permeability (cm/sec)   Description                                         Point score
>10^-3                  Gravel, sand, highly fractured rock                 5
10^-5 to 10^-3          Fine sand, silty sand, moderately fractured rock    4
10^-7 to 10^-5          Silt, silty clay, loess, clay loams, sandstone      3
<10^-7                  Clay, compact till, unfractured rock                2
Impervious barrier                                                          0

Scoring approach A

A direct approach for evaluating the potential consequences from a natural gas release can be based on the hazard zone generated by a jet fire from such a release:

r = [(2348 × p × d^2) / I]^0.5

where
r = radius from pipe release point for given radiant heat intensity (ft)
I = radiant heat intensity (Btu/hr/ft^2)
p = maximum pipeline pressure (psi)
d = pipeline diameter (in.)

For natural gas, when a radiant heat intensity of 5000 Btu/hr/ft^2 is used as the potential damage threshold of interest, this equation simplifies to:

r = 0.685 × d × p^0.5

where
r = radius from pipe release point for given radiant heat intensity (ft)
p = maximum pipeline pressure (psi)
d = pipeline diameter (in.) [83].

In either case, the gas spill score can be related directly to the hazard radius:
Gas spill score = r

This can be normalized so that scores range from 0 to 10 or 0 to 100, based on the largest radius calculated for the worst case scenario evaluated. Note that these thermal radiation intensity levels only imply damage states. Actual damages depend on the quantity and types of receptors that are potentially exposed to these levels. A preliminary assessment of structures has been performed, identifying the types of buildings and distances from the pipeline. See Chapter 14 for more discussion of these equations.

Scoring approach B

When a model is needed to evaluate risks from various flammable gases such as methane, hydrogen, or even chlorine, then an additional variable is needed to distinguish among gases. Density might be appropriate when the consequences are thought to be more sensitive to release rate. MW or heat of combustion might be more appropriate for consequences more sensitive to thermal radiation. If a gas to be included is thought to have the potential for an unconfined vapor cloud explosion, then the model should also include overpressure (explosion) effects as discussed for HVL scenarios. One of the equations from Approach A above can be modified with some measures of energy content and dispersion content. The scoring could also be simplified to a relationship such as this one:

Gas spill score = (d × p^0.5) × MW

This algorithm is based on the previous thermal radiation relationship [83] and the supposition that dispersion, thermal radiation, and vapor cloud explosive potential are proportional to MW. This score can also be normalized as described in Approach A.

Scoring approach C

As an even simpler approach to scoring gas releases, a point schedule can be designed to quantify the increase in hazard as the dispersion characteristics of molecular weight and leak rate are combined (see Table 7.10). Table 7.10 is an example of a table designed for a certain range of possible spills. The range of the table should reflect the range of spill quantities expected. This will usually be the largest diameter, highest pressure pipeline as the worst case, and the smallest, lowest pressure pipeline as the best case. Some trial calculations may be needed to determine the worst and best cases. If the range is too small or too large, comparisons between spills from different lines may not be possible. See Appendix B for a discussion of leak size determination.
Table 7.10 Point schedule for quantifying hazards based on molecular weight and leak rate

         Product released after 10 minutes (lb)
MW       0-5,000    5,000-50,000    50,000-500,000    >500,000
≥50      3 pts      4 pts           5 pts             6 pts
28-49    2 pts      3 pts           4 pts             5 pts
≤27      1 pt       2 pts           3 pts             4 pts
These points are the vapor spill score. In Table 7.10, the upper right corner reflects the greatest hazard, while the lower left is the lowest hazard. By the way in which the dispersion factor is used to adjust the acute or chronic hazard, a higher spill score will yield a safer condition. By using only these two variables, several generalizations are implied. For instance, the release of 1000 pounds of material in 10 min potentially creates a larger cloud than the release of 4000 pounds in an hour. Remember, it is the rate of release that determines cloud size, not the total volume released. The 1000-pound release therefore poses a greater hazard than the 4000-pound release. Also, a 1000-pound release of a MW 16 material such as methane is less of a hazard than a 1000-pound release of a MW 28 material such as ethylene. The schedule must now represent the evaluator's view of the relative risks of a slow 4000-pound, MW 28 release versus a quick 1000-pound, MW 16 release. Fortunately, this need not be a very sensitive ranking. Orders of magnitude are sufficiently close for the purposes of this assessment.
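A point schedule of this kind reduces to a small lookup. The matrix below follows the reconstructed Table 7.10; the band edges and point values should be treated as illustrative assumptions.

```python
# Sketch of the Table 7.10 point schedule: points rise with molecular
# weight and with the quantity released in the first 10 minutes.
# Band edges and point values follow the reconstructed table and are
# illustrative.

MW_OFFSETS = [(27, 0), (49, 1), (float("inf"), 2)]
QUANTITY_POINTS = [(5_000, 1), (50_000, 2), (500_000, 3),
                   (float("inf"), 4)]

def vapor_spill_score(mw, lb_released_in_10_min):
    mw_offset = next(o for upper, o in MW_OFFSETS if mw <= upper)
    qty_points = next(p for upper, p in QUANTITY_POINTS
                      if lb_released_in_10_min <= upper)
    # Ranges from 1 (low MW, small release) to 6 (high MW, large release)
    return mw_offset + qty_points

# Methane (MW 16) vs. ethylene (MW 28) at the same 1000-lb release:
# the heavier gas scores one band higher, as the text describes.
```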
Combined scoring

When hazardous liquid and vapor releases are to be assessed with the same model, some equivalencies must be established. Such equivalencies are difficult given the different types of hazards and potential damages (thermal versus overpressure versus contamination damages, for example). A basis of equivalency must first be determined: Thermal radiation only? Total hazard area? Extent of contamination versus thermal effects area? Equivalencies become more problematic when applied across hazard types, since different types of potential damages must be compared. For instance, 10,000 square feet of contaminated soil or even groundwater is a different damage state than a 10,000-square-foot burn radius. Valuing the respective damage scenarios adds another level of complexity and uncertainty. One method to establish a relative equivalency between the two types of spills (gas and liquid), at least in terms of acute hazard, is to examine and compare the calculated hazard zones for human fatality and serious injury. Using some very specific assumptions, some damage zones involving multiple products, diameters, pressures, and flow rates were calculated to generate Table 7.11. The reader is cautioned against using these tabulated hazard distances because they are based on very specific assumptions that often will not apply to other scenarios. The specific assumptions are intentionally omitted to further discourage use of these values beyond the intention here. The estimates from Table 7.11 are based on very rudimentary dispersion and thermal effects analyses and are generated only to compare some specific scenarios. It is assumed that pumping (flow) rate is the determining factor for liquid releases and that orifice flow to atmosphere (sonic velocity) determines vapor release rates at MAOP.
Using the Table 7.11 list of "actual" hazard distances and the simplifying rules shown later, possible equivalencies are shown in Table 7.12, ordered by spill score rank. The rank merely normalizes all spill scores against the maximum spill score, which is assigned a value of 6. The spill scores in Table 7.12 were generated with some simple relationships, unlike the hazard zones, which required rather
Table 7.11 Sample hazard radius calculations

Product       Pipe diameter (in.)   Pressure (psig)   Flow rate (lb liquid/hr)   Hazard radius (ft)
Natural gas   40                    1450              0                          1045
Natural gas   20                    1440              0                          521
Natural gas   12                    800               0                          233
Natural gas   8                     1400              0                          205
Natural gas   6                     180               0                          55
Natural gas   4                     220               0                          41
Propane(a)    6                     1440              1,100,000                  760
Propane(a)    8                     1440              1,900,000                  1300
Fuel oil      12                    1000              15,200                     92
Gasoline      12                    1000              15,200                     196
Gasoline      8                     500               4,777                      147
Gasoline      8                     1000              6,756                      160
Gasoline      24                    1000              60,801                     275
Gasoline                                              3,213,000                  730
Gasoline                                              2,268,000                  670
Gasoline                                              945,000                    541
Gasoline                                              472,500                    456

(a) Propane cases include 0.4-psi overpressure from midpoint distance of UFL and LFL.
complex computations. The spill scores for the natural gas and propane scenarios used this equation:

Gas spill score = (d × p^0.5) × MW

based on the thermal radiation relationship [83] and the supposition that dispersion, vapor cloud explosive potential, and thermal radiation from a jet fire are proportional to MW. Spill scores for liquids (gasoline and fuel oil) were determined with this equation:

Liquid spill score = LOG[(spill mass) × 0.5] / [(boiling point)^0.5] × 20,000
based on the pool growth model, the effective emissivity model [86], and a constant (20,000) to put the liquid spill scores numerically on par with the vapor spill scores. This comparison between spill scores and hazard zones for various product release scenarios indicates that the spill score was fairly effective in ordering these scenarios from largest hazard area to smallest. It is not perfect, since some actual hazard radii are not consistent with the relative rank. Note, however, that many assumptions go into the actual calculations, especially where vapor cloud explosions are a potential (the propane cases in Table 7.12), so the actual hazard zone calculations are themselves uncertain. The table ranking also seems plausible from an intuitive standpoint. A large-diameter, high-pressure gas line poses the largest consequence potential, followed by an HVL system, and then a large-volume gasoline transport scenario. Of course, any of these could be far worse than the others in a specific scenario, so the ordering can be argued. On the lower end of the scale are the lower volume pipelines and the less flammable liquid (fuel oil). Other observations include the fact that liquid spill cases are relatively insensitive to pressure; flow rate is the main deter-
Table 7.12 Sample spill scores compared to hazard radii

Product       Pipe diameter (in.)   Pressure (psig)   Flow rate (lb liquid/hr)   Hazard radius (ft)   Spill score   Spill score rank
Natural gas   40                    1450                                         1045                 24,370        6.0
Propane       8                     1440              1,900,000                  1300                 13,357        5.5
Gasoline                                              3,213,000                  730                  12,412        5.1
Natural gas   20                    1440                                         521                  12,143        5.0
Gasoline                                              2,268,000                  670                  12,109        5.0
Gasoline                                              945,000                    541                  11,349        4.7
Gasoline                                              472,500                    456                  10,747        4.4
Propane       6                     1440              1,100,000                  760                  10,018        4.1
Gasoline      24                    1000              60,801                     275                  8,966         3.7
Gasoline      12                    1000              15,200                     196                  7,762         3.2
Gasoline      8                     1000              6,756                      160                  7,057         2.9
Gasoline      8                     500               4,777                      147                  6,756         2.8
Natural gas   12                    800                                          233                  5,431         2.2
Natural gas   8                     1400                                         205                  4,789         2.0
Fuel oil      12                    1000              15,200                     92                   4,481         1.8
Natural gas   6                     180                                          55                   1,288         0.5
Natural gas   4                     220                                          41                   949           0.4
minant of hazard zone. Except for a scenario involving sprayed material, this is plausible. Another observation is that the relative contamination potential is modeled as being equivalent to the relative spill score. As previously noted, this incorporates the assumption that, for a liquid spill, the thermal damages and contamination damages offset each other to some extent: as one increases, the other decreases. This is, of course, a modeling convenience only, and real-world scenarios can be envisioned where this is not the case.
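The combined scoring behind Table 7.12 can be reproduced with the two equations as reconstructed above. The boiling point values here are assumed examples, and the 20,000 constant is the text's liquid-to-vapor equalizer.

```python
import math

# Sketch of the combined scoring used to build Table 7.12, with the
# equations as reconstructed from the text. Boiling point values are
# assumed examples; the 20,000 constant puts liquid scores on par with
# vapor scores.

def gas_spill_score(diameter_in, pressure_psig, mw):
    return diameter_in * math.sqrt(pressure_psig) * mw

def liquid_spill_score(spill_mass_lb, boiling_point):
    return (math.log10(spill_mass_lb * 0.5)
            / math.sqrt(boiling_point)) * 20_000

scenarios = [
    ("Natural gas, 40-in., 1450 psig", gas_spill_score(40, 1450, 16)),
    ("Propane, 8-in., 1440 psig", gas_spill_score(8, 1440, 44)),
    ("Gasoline, 3,213,000 lb/hr", liquid_spill_score(3_213_000, 100)),
]
# Sorting descending reproduces the ordering of the top Table 7.12 rows.
ranking = sorted(scenarios, key=lambda s: s[1], reverse=True)
```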
VII. Adjustments to scores

As noted earlier in this chapter, two pipeline activities that can contribute to consequence reduction are secondary containment and emergency response. Both are useful only as consequence reducers, since both are reactions to a release that has already occurred and neither provides an opportunity to prevent a failure. There is little argument that, especially in scenarios involving more chronic consequences, secondary containment and emergency response can indeed minimize damages. They are therefore included as modifiers to the dispersion portion of the leak impact factor. The amount of the contribution to the overall risk picture is arguable, however, and must be carefully evaluated. Chronic hazards have an implied time factor: the potential damage level increases with passing time. Actions that can influence what occurs during the time period of the spill will therefore impact the consequences. Acute hazard scenarios offer much less opportunity to intervene in the potentially consequential chain of events. The most probable pipeline leak scenarios involving acute hazards suggest that the consequences would not increase over time, because the driving force (pressure) is being reduced immediately after the leak event begins and dispersion of spilled product occurs rapidly. This means that reaction times swift enough to impact the immediate degree of hazard are not very likely. We emphasize immediate here so as not to downplay the importance of emergency response. Emergency response can indeed influence the final outcome of an acute event in terms of loss of life, injuries, and property damage. This is not thought to impact the acute hazard, however. A spill with chronic characteristics, where the nature of the hazard causes it to increase in severity as time passes, can be impacted by emergency response. In these cases, emergency response actions such as evacuation, blockades, and rapid pipeline shutoff are effective in reducing the hazard. Consequence-reducing actions must do at least one of three things:

1. Limit the amount of spilled product.
2. Limit the area of opportunity for consequences.
3. Otherwise limit the loss or damage caused by the spill.
Limiting the amount of product spilled is done by isolating the pipeline quickly or changing some transport parameter (pressure, flow rate, type of product, etc.). The area of opportunity is limited by protecting or removing vulnerable receptors, by removing possible ignition sources, or by limiting the extent of the spill. Other loss is limited by prompt medical attention, quick containment, avoidance of secondary damages, and cleanup of the spill. The following consequence-reducing opportunities are discussed in this section:

Leak detection
Emergency response
Spill limiting actions
"Area of opportunity" limiting actions
Loss limiting actions
Leak detection

Leak detection can be seen as a critical part of emergency response. It provides early notification of a potentially consequential event and hence allows more rapid response to that event. Given the complexity of the topic, leak detection is examined independently of other emergency response actions, but it can be considered the spill-reducing aspect of emergency response.
The role of leak detection can be evaluated either in the determination of spill size and dispersion or as a stand-alone element that is then used to adjust previous consequence estimates. The former approach is logical and consistent with real-world scenarios: the benefit of leak detection is indeed its potential impact on spill size and dispersion. The latter approach, evaluating leak detection capabilities separately, offers the advantage of a centralized location for all leak detection issues and, therefore, a modeling efficiency. As discussed on pages 142-146, leak size is at least partially dependent on failure mode. Small leak rates tend to occur due to corrosion (pinholes) or some design (mechanical connections) failure modes. The most damaging leaks occur below detection levels for long periods of time. Larger leak rates tend to occur under catastrophic failure conditions such as external force (e.g., third party, ground movement) and avalanche crack failures. Larger leaks can be detected more quickly and located more precisely; smaller leaks may not be found at all by some methods, due to sensitivity limitations. The trade-offs involved between sensitivity and leak size are usually expressed in terms of uncertainty. The method of leak detection chosen depends on a variety of factors, including the type of product, flow rates, pressures, the amount of instrumentation available, the instrumentation characteristics, the communications network, the topography, the soil type, and economics. Especially when sophisticated instrumentation is involved, there is often a trade-off between sensitivity and the number of false alarms, especially in "noisy" systems with high levels of transients. At this time, instrumentation and methodology designed to detect pipeline leaks impact a relatively narrow range of the risk picture. Detection of a leak obviously occurs after the leak has occurred.
As is the case with other aspects of post-incident response, leak detection is thought to normally play a minor role, if any, in reducing the hazard, reducing the probability of the hazard, or reducing the acute consequences. Leak detection can, however, play a larger role in reducing the chronic consequences of a release. As such, its importance in risk management for chronic hazards may be significant. This is not to say that leak detection benefits that mitigate acute risks are not possible. One can imagine a scenario in which a smaller leak, rapidly detected and corrected, averts the creation of a larger, more dangerous leak. This would theoretically reduce the acute consequences of the potential larger leak. We can also imagine a case where rapid leak detection, coupled with the fortunate happenstance of pipeline personnel being close by, might cause reaction time to be swift enough to reduce the extent of the hazard. This would also impact the acute consequences. These scenarios are obviously unreliable, and it is conservative to assume that leak detection has limited ability to reduce the acute impacts from a pipeline break. Increasing use of leak detection methodology is to be expected as techniques become more refined and instrumentation becomes more accurate. As this happens, leak detection may play an increasingly important role. As noted previously, leak quantity is a critical determinant of dispersion and hence of hazard zone size. Leak quantity is important under the assumption that larger amounts cause more spread of hazardous product (more acute impacts), whereas lower rates impact detectability (more chronic impacts). The
rate of leakage multiplied by the time the leak continues is often the best estimate of total leak quantity. However, some potential spill sizes are more volume dependent than leak-rate dependent. Spills from catastrophic failures, or those occurring at pipeline low points, are not best estimated by leak rates, because the entire volume of a pipeline segment will often be involved regardless of response actions.
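The two spill-quantity estimates described above can be sketched simply. All inputs here are illustrative assumptions.

```python
# Sketch of the two spill-quantity estimates described above:
# rate-dependent (leak rate x time until the leak is stopped) versus
# volume-dependent (drain-down of the pipeline segment). Inputs are
# illustrative assumptions.

def rate_dependent_spill(leak_rate_lb_per_hr, hours_until_stopped):
    return leak_rate_lb_per_hr * hours_until_stopped

def estimated_spill(leak_rate_lb_per_hr, hours_until_stopped,
                    segment_volume_lb):
    # For catastrophic failures or low-point releases, the whole segment
    # volume may drain regardless of response actions, so the larger of
    # the two values is a conservative estimate.
    return max(rate_dependent_spill(leak_rate_lb_per_hr,
                                    hours_until_stopped),
               segment_volume_lb)
```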
Detection methodologies

Pipeline leak detection can take a variety of forms, several of which have been previously discussed. Common methods used for pipeline leak detection include:

Direct observation by patrol
Happenstance direct observation (by public or pipeline operator)
SCADA-based computational methods
SCADA-based alarms for unusual pressure, flows, temperatures, pump/compressor status, etc.
Flow balancing
Direct burial detection systems
Odorization
Acoustical methods
Pressure point analysis (negative pressure wave detection)
Pig-based monitoring
Each has its strengths and weaknesses and an associated spectrum of capabilities. Despite advances in sophisticated pipeline leak detection technologies, the most common detection method might still be direct observation. Leak sightings by pipeline employees, neighbors, and the general public as well as sightings while patrolling or surveying the pipeline are examples of direct observation leak detection. Overline leak detection by handheld instrumentation (sniffers) or even by trained dogs (which reportedly have detection thresholds far below instrument capabilities) is a technique used for distribution systems. Pipeline patrolling or surveying can be made more sensitive by adjusting observer training, speed of survey or patrol, equipment carried (may include gas detectors, infrared sensors, etc.), altitude/speed of air patrol, topography, ROW conditions, product characteristics, etc. Although direct observation techniques are sometimes inexact, experience shows them to be rather consistent leak detection methods.

More sophisticated leak detection methods require more instrumentation and computer analysis. A mainstay of pipeline leak detection includes SCADA-based capabilities such as monitoring of pressures, flows, temperatures, equipment status, etc. For instance, (1) procedures might call for a leak detection investigation when abnormally low pressures or an abnormal rate of change of pressure is detected; and (2) a flow rate analysis, in which flow rates into a pipeline section are compared with flow rates out of the section and discrepancies are detected, might be required. SCADA-based alarms can be set to alert the operator of such unusual pressure levels, differences between flow rates, abnormal temperatures, or equipment status (such as unexplained pump/compressor stops). Alarms set to detect unusual rates of change in measured flow parameters add an additional level of sensitivity to the leak
detection. These methods may have sensitivity problems because they must not give leak indications in cases where normal pipeline transients (unsteady flows or pressures, sometimes temporary as the system stabilizes after some change is made) are causing pressure swings and flow rate changes. Generally the operator must work around the inevitable tradeoff between many false alarms and low sensitivity to actual leaks. Because pipeline leaks are, fortunately, rare-occurrence events, the latter is often chosen.

SCADA-based capabilities are commonly enhanced by computational techniques that use SCADA data in conjunction with mathematical algorithms to analyze pipeline flows and pressures on a real-time basis. Some use only relatively simple mass-balance calculations, perhaps with corrections for linefill. More robust versions add conservation of momentum calculations, conservation of energy calculations, fluid properties, instrument performance, and a host of sophisticated equations to characterize flows, including transient flow analyses. The nature of the operations will impact leak detection capabilities, with less steady flows and more compressible fluids reducing the capabilities. The more instruments (and the more optimized the instrument locations) that are accurately transmitting data into the SCADA-based leak detection model, the higher the accuracy of the model and the confidence level of leak indications. Ideally, the model would receive data on flows, temperatures, pressures, densities, viscosities, etc., along the entire pipeline length. By tuning the computer model to simulate mathematically all flowing conditions along the entire pipeline and then continuously comparing this simulation to actual data, the model tries to distinguish between instrument errors, normal transients, and leaks. Reportedly, depending on the system, relatively small leaks can often be accurately located in a timely fashion.
How small a leak and how swift a detection is specific to the situation, given the large numbers of variables to consider. References [3] and [4] discuss these leak detection systems and methodologies for evaluating their capabilities.

Another computer-based method is designed to detect pressure waves. A leak will cause a negative pressure wave at the leak site. This wave will travel in both directions from the leak at high speed through the pipeline product (much faster in liquids than in gases). By simply detecting this wave, leak size and location can be estimated. A technique called pressure point analysis (PPA) detects this wave and also statistically analyzes all changes at a single pressure or flow monitoring point. By statistically analyzing all of these data, the technique can reportedly, with a higher degree of confidence, distinguish between leaks and many normal transients as well as identify instrument drift and reading errors.

A final method of leak detection involves various methods of direct detection of leaks immediately adjacent to a pipeline. One variation of this method is the installation of a secondary conduit along the entire pipeline length. This secondary conduit is designed to sense leaks originating from the pipeline. The secondary conduit may take the form of a small-diameter perforated tube, installed parallel to the pipeline, which allows air samples to be drawn into a sensor that can detect the product leaks. The conduit may also totally enclose the product pipeline (pipe-in-pipe design) and allow the annular space to be tested for leaks. Obviously these systems can cause a host of logistical problems and are usually not employed except on short lines.
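The negative-pressure-wave location estimate described above lends itself to simple arithmetic: with sensors bracketing the leak, the difference in wave arrival times fixes the leak position. The sketch below is a hedged illustration; the function name and the assumed wave speed are mine, not the book's.

```python
def locate_leak(sensor_spacing, wave_speed, t_a, t_b):
    """Estimate leak position from arrival times of the negative
    pressure wave at two sensors bracketing the leak.

    A leak at distance x from sensor A reaches A at x/c and sensor B
    at (L - x)/c, so t_a - t_b = (2x - L)/c and x = (L + c*dt)/2.
    Distances in meters, times in seconds, wave_speed in m/s."""
    dt = t_a - t_b
    return (sensor_spacing + wave_speed * dt) / 2.0

# Leak 3 km from sensor A on a 10-km liquid line, assuming an acoustic
# speed of 1,200 m/s: the wave reaches A after 2.5 s and B after
# 7000 m / 1200 m/s (about 5.83 s).
x = locate_leak(10_000, 1200, 2.5, 7000 / 1200)  # ~3000.0 m
```

The much lower wave speed in gases, as the text notes, stretches these arrival-time differences and degrades location accuracy accordingly.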
The method of leak detection chosen depends on a variety of factors including the type of product, flow rates, pressures, the amount of instrumentation available, the instrumentation characteristics, the communications network, the topography, the soil type, and economics. As previously mentioned, when highly sophisticated instruments are required, a trade-off often takes place between the sensitivity and the number of false alarms, especially in "noisy" systems with high levels of transients.
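The flow-balance idea and its sensitivity/false-alarm tradeoff can be sketched as a one-line check. Names, units, and threshold values below are hypothetical placeholders, not quantities from the text.

```python
def flow_imbalance_alarm(flow_in, flow_out, linefill_change, threshold):
    """SCADA-style mass-balance check: alarm when inflow minus outflow,
    corrected for the estimated change in linefill (line packing),
    exceeds a threshold. A low threshold catches smaller leaks but
    trips on normal transients; a high threshold suppresses false
    alarms at the cost of sensitivity."""
    imbalance = flow_in - flow_out - linefill_change
    return imbalance > threshold

# 40 units of apparent loss fully explained by line packing: no alarm
# at a 50-unit threshold. An unexplained 80-unit loss: alarm.
quiet = flow_imbalance_alarm(1000, 960, 40, 50)  # False
leak = flow_imbalance_alarm(1000, 920, 0, 50)    # True
```

The `linefill_change` correction is where the sophistication lives in real systems; the transient-flow models described above exist largely to estimate that term accurately.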
Evaluation of leak detection capabilities

The evaluator should assess the nature of leak detection abilities in the pipeline section he is evaluating. The assessment should include

- What size leak can be reliably detected
- How long before a leak is positively detected
- How accurately can the leak location be determined.

Note that many leak detection systems perform best for only a certain range of leak sizes. However, many overlapping leak detection capabilities are often present in a pipeline. A leak
detection capability can be defined as the relationship between leak rate and time to detect. This relationship encompasses both volume-dependent and leak-rate-dependent scenarios. The former is the dominant consideration as product containment size increases (larger diameter pipe at higher pressures), but the latter becomes dominant as smaller leaks continue for long periods. As shown in Figure 7.7, this relationship can be displayed as a curve with axes of "Time to Detect Leak" versus "Leak Size." The area under such a curve represents the worst case spill volume, prior to detection. The shape of this curve is logically asymptotic to each axis because some leak rate level is never detectable and an instant release of large volumes approaches an infinite leak rate.

A leak detection capability curve can be developed by estimating the leak detection capabilities of each available method for a variety of leak rates. A table of leak rates is first selected as illustrated in Table 7.13. For each leak rate, each system's time to detect is estimated. In assessing leak detection capabilities, all opportunities to detect should be considered. Therefore, all leak detection systems available should be evaluated in terms of their respective abilities to detect various leak rates. A matrix such as that shown in Table 7.14 can be used for this. References [3] and [4] discuss SCADA-based leak detection systems and offer methodologies for evaluating their capabilities. Other techniques will likely have to be estimated based on time between observations and the time for visual, olfactory, or auditory indications to appear. The latter will be situation dependent and include considerations for spill migration and evidence (soil penetration, dead vegetation, sheen on water, etc.). The total leak time will involve detection, reaction, and isolation time.
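The matrix idea supports a simple calculation: for each leak rate, take the fastest detection time across all overlapping methods, then multiply rate by that time for a worst-case pre-detection volume. All detection times below are invented placeholders for illustration only; real entries would come from the situation-specific estimates the text describes.

```python
# Hypothetical detection times (hours) per leak rate (gal/day);
# None means the method cannot detect a leak of that size.
DETECT_HOURS = {
    "SCADA mass balance": {1: None, 10: None, 100: 24, 1000: 1},
    "Patrol":             {1: None, 10: 336,  100: 168, 1000: 168},
    "Passerby report":    {1: None, 10: 720,  100: 48,  1000: 4},
}

def best_detection(rate):
    """Fastest available detection time (hours) for a given leak rate,
    across all overlapping detection methods."""
    times = [t[rate] for t in DETECT_HOURS.values() if t[rate] is not None]
    return min(times) if times else None

def worst_case_volume(rate):
    """Spill volume (gal) before detection: rate x fastest detection
    time, converting hours to days to match gal/day rates."""
    t = best_detection(rate)
    return None if t is None else rate * t / 24.0

for q in (1, 10, 100, 1000):
    print(q, best_detection(q), worst_case_volume(q))
```

With these placeholder numbers, the 1 gal/day leak is never detected (the asymptote of Figure 7.7), while the 1000 gal/day leak is caught within an hour; plotting `best_detection` against rate reproduces the capability curve described above.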
As a further evaluation step, an additional column can be added to Table 7.14 for estimates of reaction time for each detection system. This assumes that there are differences in reactions, depending on the source of the leak indication. A series of SCADA alarms will perhaps generate more immediate reaction
than a passerby report that is lacking in details and/or credibility. The former scenario has an additional advantage in reaction, since steps involving telephone or radio communications may not be part of the reaction sequence.

[Figure 7.7 Leak detection capabilities: a curve of leak size versus "Time to Detect Leak" (with detection times T0 through T1000), asymptotic to both axes.]
Emergency response

Spill volume limiting actions

This opportunity for consequence reduction includes leak detection/reaction and is often the most realistic way for the operator to reduce the consequences of a pipeline failure. Some common approaches to limiting spill volumes are discussed below. One of the theoretically fastest detection and response scenarios could be valves that automatically isolate a leaking pipeline section based on some continuously monitored parameter that has indicated a leak. However, in real applications, the value of such valves and the practicality of such automation is questionable. In addition to the previously noted limitations to leak detection capabilities, there are limitations and issues regarding other aspects of an automated detection and reaction system.

A. Automatic valves. Set to close automatically, these valves are often triggered on low pressure, high pressure, high flow, or rate of change of pressure or flow. Regular maintenance is required to ensure proper operation. Experience warns that this type of equipment is often plagued by false trips that are sometimes cured by setting relatively insensitive response trigger points.
Table 7.13 Detection of various leak volumes

Q (gal/day)   Time     Notes and detection times
Volume        T0       Total pipeline volume between valves that are predicted to be used to isolate the leak
1             T1       Very slow, very difficult to detect leak; T = a few days to several months
10            T10      Slow leak, difficult to detect; T = hours to days
100           T100     Significant leak, readily apparent to eyes, nose, ears; T = minutes to hours
1000          T1000    Large leak, immediately apparent; T = minutes

Check valves are another form of automatic valves and play a spill-reducing role. A check valve might be especially useful for liquid lines with elevation changes. Strategically placed
check valves may reduce the draining or siphoning to a spill at a lower elevation. Included in this section should be automatic shutoffs of pumps, wells, and other pressure sources. Redundancy should be included in all such systems before risk-reducing credit is awarded (see Chapter 6, Incorrect Operations Index).

Table 7.14 Matrix for evaluating ability of leak detection systems to detect various leak rates

Leak detection system                           T1    T10    T100    T1000
Mass balance for facility (SCADA and manual)
Patrol
Overline surveys
Acoustic monitoring
SCADA alarms
SCADA-based computational methods
SCADA-based mass balance
Staffing of surface facilities
Passerby reporting
Other

B. Valve spacing. Close valve spacing may provide a benefit in reducing the spill amount. This must be coupled with the most probable reaction time in closing those valves. Discounting failure opening size and pressure, the two components of a release volume from a liquid line are (1) the continued pumping that occurs before the line can be shut down and (2) the liquid that drains from the pipe after the line has been shut down. The former is only minimally impacted by additional valves, perhaps only helping to stop momentum effects from pumping if a valve is rapidly closed (but potentially generating threatening pressure waves). The main role of additional valving, therefore, seems to be in reducing drain volumes. Because a pipeline is a closed system, hydraulic head and/or a displacement gas is needed to affect line drainage. Hilly terrain can create natural check valves that limit hydraulic head and gas displacement of pipeline liquids. Concerns with the use of additional block valves include costs and increased system vulnerabilities from malfunctioning components and/or accidental closures, especially where automatic or remote capabilities are included.

For unidirectional pipelines, check valves (preventing backflow) can provide some consequence minimization benefits. Check valves respond almost immediately to reverse flow and are not subject to most of the incremental risks associated with block valves since they have less chance of accidental closure due to human error or, in the case of automatic/remote valves, failure due to system malfunctions. Their failure rate (failure in a closed position) is uncertain.

Studies of possible benefits of shorter distances between valves of any type produce mixed conclusions. Evaluations of previous accidents can provide insight into possible benefits of closer valve spacing in reducing consequences of specific scenarios. By one study of 336 liquid pipeline accidents, such valves could, at best, have provided a 37% reduction in damage [76]. Offsetting potential benefits are the often substantial costs of additional valves and the increased potential for equipment malfunction, which may increase certain risks (surge potential, customer interruption, etc.). Rusin and Savvides-Gellerson [76] calculate that the costs (installation and ongoing maintenance) of additional valves would far outweigh the possible benefits, and also imply that such valves may actually introduce new hazards.
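The drain-volume role of valve spacing can be illustrated with simple pipe geometry. This is a sketch under stated assumptions: the function names are mine, and the "drainable fraction" standing in for terrain effects is a made-up parameter, not a quantity from the text.

```python
import math

def segment_volume_gal(diameter_in, length_mi):
    """Liquid volume (gallons) of a pipe segment from its inside
    diameter (inches) and length (miles)."""
    d_ft = diameter_in / 12.0
    length_ft = length_mi * 5280.0
    cubic_ft = math.pi / 4.0 * d_ft ** 2 * length_ft
    return cubic_ft * 7.4805  # gallons per cubic foot

def drain_volume(diameter_in, valve_spacing_mi, drainable_fraction):
    """Post-shutdown drain volume between block valves. The drainable
    fraction stands in for terrain: hilly profiles act as natural
    check valves and trap much of the linefill."""
    return segment_volume_gal(diameter_in, valve_spacing_mi) * drainable_fraction

# All else equal, halving valve spacing halves the worst-case drain:
# an 8-in. line with 60% of its contents assumed drainable by gravity.
v10 = drain_volume(8, 10, 0.6)  # roughly 82,700 gal
v5 = drain_volume(8, 5, 0.6)    # half of v10
```

The linear dependence on spacing is why the studies cited above weigh valve additions against their cost and malfunction risk rather than treating them as automatic risk reducers.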
C. Sensing devices. Part of the equation in response time is the first opportunity to take action. This opportunity depends on the sensitivity of the leak detection. All leak detection will have an element of uncertainty, from the possibility of crank phone calls to the false alarms generated by instrumentation failures or instrument reactions to pipeline transients. This uncertainty must also be included in the following item.

D. Reaction times. If an operator intervention is required to initiate the proper response, this intervention must be assessed in terms of timeliness and appropriateness. A control room operator must often diagnose the leak based on instrument readings transmitted to him. How quickly he can make this diagnosis depends on his training, his experience, and the level of instrumentation that is supporting his diagnosis. Probable reaction times can be judged from mock emergency drill records when available. The evaluator can incorporate his incorrect operations index ratings (training, SCADA, etc.) into this section also. If the control room can remotely operate equipment to reduce the leak, the reaction time is obviously improved. Travel time by first responders must otherwise be factored in. If the pipeline operator has provided enough training and communications to public emergency response personnel so that they may operate pipeline equipment, response time may be improved, but possibly at the expense of increased human error potential. Public emergency response personnel are probably not able to devote much training time to a rare event such as a pipeline failure. If the reaction is automatic (computer-generated valve closure, for instance) a sensitivity is necessarily built in to eliminate false alarms. The time it takes before the shutdown device is certain of a leak must be considered.

Area of opportunity limiting actions
As noted previously, the area of opportunity can sometimes be limited by protecting or removing vulnerable receptors, by removing possible ignition sources, or by limiting the extent of the spilled product.
A. Evacuation. Under the right conditions, emergency response personnel may be able to safely evacuate people from the spill area. To do this, they must be trained in pipeline emergencies. This includes having pipeline maps, knowledge of the product characteristics, communications equipment, and the proper equipment for entering the danger area (breathing apparatus, fire-retardant clothing, hazardous material clothing, etc.). Obviously, entering a dangerous area in an attempt to evacuate people is a situation-specific action. The evaluator should look for evidence that emergency responders are properly trained and equipped to exercise any reasonable options after the situation has been assessed. Again, the criteria must include the time factor. Credit is given when the risk can be reliably reduced by 50% due to appropriate emergency response actions.

B. Blockades. Another limiting action in this category is to limit the possible ignition sources. Preventing vehicles from entering the danger zone has the double benefit of reducing human exposure and reducing ignition potential.
7/164 Leak Impact Factor
C. Containment. Especially in the case of restricting the movement of hazardous materials into the groundwater, quick containment can reduce the consequences of the spill. The evaluator should look for evidence that the response team can indeed reduce the spreading potential by actions taken during emergency response. This is usually in the form of secondary containment. Permanent forms of secondary containment are discussed in Chapter 11.
Loss limiting actions

Proper medical care of persons affected by the spilled product may reduce losses. Again, product knowledge, proper equipment, proper training, and quick action on the part of the responders are necessary factors. Other items that play a role in achieving the consequence-limiting benefits include the following:
- Emergency drills
- Emergency plans
- Communications equipment
- Proper maintenance of emergency equipment
- Updated phone numbers readily available
- Extensive training including product characteristics
- Regular contacts and training information provided to fire departments, police, sheriff, highway patrol, hospitals, emergency response teams, government officials.
These can be thought of as characteristics that help to increase the chances of correct and timely responses to pipeline leaks. Perhaps the first item, emergency drills, is the single most important characteristic. It requires the use of many other list items and demonstrates the overall degree of preparedness of the response efforts. Equipment that may need to be readily available includes

- Hazardous waste personnel suits
- Breathing apparatus
- Containers to store picked-up product
- Vacuum trucks
- Booms
- Absorbent materials
- Surface-washing agents
- Dispersing agents
- Freshwater or a neutralizing agent to rinse contaminants
- Wildlife treatment facilities.

The evaluator/operator should look for evidence that such equipment is properly inventoried, stored, and maintained. Expertise is assessed by the thoroughness of response plans (each product should be addressed), the level of training of response personnel, and the results of the emergency drills. Note that environmental cleanup is often contracted to companies with specialized capabilities.
Assessing emergency response capabilities

Many emergency response considerations have been mentioned here. The evaluator should examine the response possibilities and the most probable response scenario. The best evaluations of effectiveness will be situation specific: the role of emergency response in limiting spill size or dispersion for specific segments of pipeline. The next step is to incorporate those evaluations into the relative risk model.

By most methods of assessing the role of spill size in risk, an 8-in. diameter pipeline presents a greater hazard than does a 6-in. diameter pipeline (all other factors held constant). When the leak detection/emergency response actions can limit the spill size from an 8-in. line to the maximum spill size from a 6-in. line, some measure of risk reduction has occurred. For simplicity's sake, risk reduction could be assumed to be directly proportional to reductions in spill size and/or extent.

Alternatively, and as a further assessment convenience, a threshold level of consequence-reduction capabilities can be established. Below this threshold, credit would not be given in the risk assessment for emergency response capabilities. For instance, the threshold could be: "reliable reduction of consequences by at least 50% in the majority of pipeline failure scenarios." When response activities can reliably be expected to reduce consequences by 50% compared to consequences that would otherwise occur, the spill or dispersion score can be adjusted accordingly. Failure to meet this threshold (in the eyes of the evaluator) warrants no reduction in the previously calculated spill or dispersion scores.

At first look, it may appear that an operator has many emergency response systems in place and that they are functioning to a high level. Realistically, however, it is difficult to meet a criterion such as a 50% reduction in the effective spill size. The spill and dispersion scores assess the amount of product spilled, assuming worst case scenarios. To reduce either of these, emergency actions would have to always take place quickly and effectively enough to cut either the volume released or the extent of the spill in half.
The evaluator can take the following approach to tie this together to calculate the liquid spill score. An example follows.

Step 1: The evaluator uses the worst case pipeline spill scenario or a combination of scenarios from which to work. She calculates the worst case as a spill score based on a 1-hour, full-bore rupture.

Step 2: The evaluator determines, with operator input, methods to attain a 50% risk reduction such as reduce spill amount by 50%, reduce population exposure by 50% (number of people or duration of exposure), contain 50% of spill before it can cause damage, reduce health impact by 50%.

Step 3: The evaluator determines if any action or combination of actions can reliably reduce the risk by 50%. This is done with consideration given to the degree of response preparedness.
If she decides that the answer in Step 3 is yes, she improves the liquid spill score calculated earlier to show only one-half of the previously-assumed spill volume.
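The threshold-credit logic of these steps reduces to a simple all-or-nothing rule. The sketch below is one way to express that bookkeeping, not a prescribed formula; the names are mine.

```python
def effective_spill_volume(worst_case_volume, reliable_50pct_reduction):
    """Threshold credit for leak detection / emergency response: only
    when response can reliably cut consequences by at least half is
    the spill re-scored at one-half the worst-case volume; anything
    short of the threshold earns no credit at all."""
    return worst_case_volume * (0.5 if reliable_50pct_reduction else 1.0)

# Case A style (valves plus drills meet the threshold):
adjusted = effective_spill_volume(10_000, True)     # 5000.0
# Case B style (response cannot reliably meet the threshold):
unadjusted = effective_spill_volume(10_000, False)  # 10000.0
```

The all-or-nothing shape is deliberate: as the text notes, partial credit for response systems that cannot reliably halve consequences would overstate their benefit under worst-case assumptions.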
Example 7.3: Adjustments to the liquid spill score (Case A)

The evaluator is assessing a section of gasoline pipeline through the town of Smithville. The scenario he is using involves a leak of the full pipeline flow rate. This hypothetical leak occurs at a low point in the line profile, in the center of Smithville. He recognizes the acute
hazard of flammability and the chronic hazards of toxicity (high benzene component), residual flammability (from pockets of liquid), and environmental insult. He feels that a 50% reduction in risk can be attained if the spill size is reduced by 50%, if 50% of the spilled product is contained quickly, or if 50% of the potentially affected residents can be evacuated before they are exposed to the acute hazard. He has determined that the leak detection and emergency response activities are in place to warrant an adjustment of the leak impact factor. The basis for this determination is the following items observed or ascertained from interviews with the operators:

- Automatic valves are set to isolate pipeline sections around the town of Smithville. The valves trigger if a pressure drop of more than 20% from normal operating conditions occurs. The valves are thoroughly tested every 6 months and have a good operating history. A 20% drop in pressure would occur very soon after a substantial leak.
- Annual emergency drills are held, involving all emergency response personnel from Smithville. The drills are well documented and reflect a high degree of response preparedness.

The presence of the automatic valves should limit the spill to 50% of what it would be without the valves. This alone would have been sufficient to adjust the chronic leak impact factor. The strong emergency response program should limit exposure due to residual flammability and ensure proper handling of the gasoline during cleanup. Containment is not seen as an option, but by limiting the spill size, the environmental insult is minimized also. The evaluator sees no relief from the acute hazard but feels an adjustment for the chronic hazard is appropriate.
Example 7.4: Adjustments to the liquid spill score (Case B)

The evaluator is assessing a section of brine pipeline in a remote, unpopulated area. The leak scenario she is using involves a complete line rupture. The hazards are only chronic in nature; that is, there are no immediate threats to public or responders. The chronic threat is the exposure to the groundwater table, which is shallow in this area. The best chance to reduce the chronic risk by 50% is thought to be limiting the spill size by 50%. Emergency response will not reliably occur quickly enough to isolate the leaking pipeline before line depressurization and pump shutoffs slow the leak anyway. Containment in a timely fashion is not possible. No adjustments to the chronic leak impact factor are made.
D. Receptors

Of critical importance to any risk assessment is an evaluation of the types and quantities of receptors that may be exposed to a hazard from the pipeline. For these purposes, the term receptor refers to any creature, structure, land area, etc., that could "receive" damage from a pipeline rupture. The intent is to capture relative vulnerabilities of various receptors, as part of the consequence assessment.
Possible pipeline rupture impacts on the surrounding environmental and population receptors are highly location specific due to the potential for ignition and/or vapor cloud explosion. Variables include the migration of the spill or leak, the sensitivity of the receptor, the nature of the thermal event, the amount of shelter and barriers, and the time of exposure. Because gaseous product release from a pipeline is a temporary excursion, the pollution potential beyond immediate toxicity or flammability is not specifically addressed here for releases into the air. This discounts the accumulative damage that can be done by many small releases of atmosphere-damaging substances (such as possible ozone damages from greenhouse gases). Such chronic hazards are considered in the assignment of the equivalent reportable release quantity (RQ equivalent) for volatile hydrocarbons. Ideally, a damage threshold would lead to a hazard area estimation that would lead to a characterization of receptor vulnerability within that hazard area. Damage threshold levels for thermal radiation and overpressure effects are discussed in Chapter 14.
D1. Population density

As part of the consequence analysis, a most critical parameter is the proximity of people to the pipeline failure. Population proximity is a factor here because the area of opportunity for harm is increased as human activity occurs closer to the leak site. In addition to potential thermal effects, the potential for ingesting contaminants through drinking water, vegetation, fish, or other ingestion pathways is higher when the leak site is nearby. Less dilution has occurred and there is less opportunity for detection and remediation before the normal pathways are contaminated. The other pathways, inhalation and dermal contact, are similarly affected. A full evaluation of human health effects from pipeline failures is often unnecessary when the pipeline's products are common and epidemiological effects are well known (see discussion of product hazard, this chapter).

In assessing absolute probabilities of injury or fatality from thermal effects, the time and intensity of exposure must be estimated. This is discussed earlier in this chapter and methods for quantifying these effects are shown in Chapter 14. Shielding and ability to evacuate are critical assumptions in such calculations. Most general risk assessment methods will produce satisfactory results with the simple and logical premise that risks increase as nearby population density increases.

Population density can be taken into account by using any of the published population density scales such as the DOT Part 192 class locations 1, 2, 3, and 4 (see Table 7.15). These are for rural to urban areas, respectively. The class locations are determined by examining the area 660 ft on either side of the pipeline centerline, and 1 mile along the pipeline. This 1-mile by 1320-ft rectangle, centered over the pipeline, is the defined area in which to conduct counts of dwellings.
If any 1-mile stretch of pipeline has more than 46 dwellings inside this defined area, that section is designated a Class 3 area. A section with fewer than 46 dwellings but more than 10 dwellings in the defined area is a Class 2 area. Each mile with fewer than 10 dwellings is a Class 1 area. A Class 4 area exists when the defined area has a prevalence of multistory buildings.
A Class 3 area is also defined as a section of pipeline that has a high-occupancy building or well-defined outside meeting area within the defined area. Buildings such as churches, schools, and shopping centers that are regularly occupied (5 days per week or 10 weeks per year) by 20 or more people are deemed to be high-occupancy areas. The presence of one of these within 660 ft of the pipeline is a sufficient condition to classify the pipeline section as Class 3.

The population density, as measured by class location, is admittedly an inexact method of estimating the number of people likely to be impacted by a pipeline failure. A thorough analysis would necessarily require estimates of people density (instead of house density), people's away-from-home patterns, nearby road traffic, evacuation potential, time of day, day of week, and a host of other factors. This approach is further discussed in Chapter 14. The class location, however, is thought to be reasonably correlated with "potential" population density and, as such, will serve the purposes of this risk assessment. Table 7.15 shows some possible population estimates based on the class locations.

For more discrimination within class location categories, a continuous scale can be devised in which an actual house count would yield a score: 6 houses = 1.6, 32 houses = 2.7, etc. More qualitative estimates such as "high-density class 2 = 2.7" and "low-density class 2 = 2.1" would also serve to provide added discrimination. Additional provisions can be added to make distinctions between the four classifications. Another population density classification example, expanding and loosely linked to the DOT classifications, is shown in Table 7.16. The U.S. DOT also creates categories of populated areas for use in determining high consequence areas for certain pipelines.
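The class-location counting rules and the continuous within-class scale can be sketched as follows. The interpolation in `continuous_score` is an assumed scheme that approximately reproduces the example values given in the text (6 houses gives 1.6 exactly; 32 houses gives about 2.6 versus the text's 2.7), not a formula the book states.

```python
def dot_class(house_count, high_occupancy=False, multistory=False):
    """DOT Part 192 class location from a one-mile house count within
    the 660-ft corridor. A high-occupancy building or meeting area
    forces Class 3; prevalent multistory buildings force Class 4."""
    if multistory:
        return 4
    if high_occupancy or house_count > 46:
        return 3
    if house_count >= 10:
        return 2
    return 1

def continuous_score(house_count):
    """Hypothetical within-class interpolation: the class number plus
    the fraction of the way through that class's house-count range."""
    if house_count < 10:
        return 1 + house_count / 10.0
    if house_count <= 46:
        return 2 + (house_count - 10) / 36.0
    return 3.0  # Classes 3 and 4 hinge on building types, not counts
```

Boundary counts (exactly 10 or 46 dwellings) are not pinned down by the prose; the sketch assigns them to Class 2, and an evaluator could reasonably choose otherwise.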
As defined in a recent regulation (CFR Part 192), High Population Area (HPA) means "an urbanized area, as defined and delineated by the Census Bureau, which contains 50,000 or more people and has a population density of at least 1,000 people per square mile." Other Populated Area (OPA) means "a place, as defined and delineated by the Census Bureau, that contains a concentrated population, such as an incorporated or unincorporated city, town, village, or other designated residential or commercial area."

Table 7.15  DOT classifications of house counts and equivalent densities

DOT class location | One-mile house count | One-mile population count (estimate*)
1 | < 10 | < 30
2 | 10-46 | 30-150
3 | > 46 or high-occupancy buildings | 150-400
4 | Multistory buildings prevalent | > 400

* Not part of DOT definition; estimates only.

Table 7.16  Example of population density scoring

Population type | DOT class | Population score
Extraordinary situation | 4 | 10
Multistory buildings | 4 | 8-9
Commercial | 3 | 8
Residential urban | 3 | 7
Residential suburban | 2 | 6
Industrial | 2 | 5
Semi-rural | 1 | 4
Rural | 1 | 2
Isolated, very remote | 1 | 1

Table 7.17 provides an example of a scale characterizing population densities on a relative 1- to 2-point scale. By this scale, proximity or exposure to a light residential area would raise risk levels 1.5 times higher than those for a rural area. A heavy commercial (shopping center, business complex, etc.) or residential exposure would double the risk, compared to a rural area.

Table 7.17  Scale that characterizes population densities on a scale of 1 to 2 points

Population type | Value
Commercial heavy | 2
Commercial light | 1.5
Interstate | 1.2
Recreation | 1.2
Residential heavy | 2
Residential light | 1.5
Residential light future | 1.4
Residential medium | 1.7
Residential medium future | 1.6
Rural | 1
School | 1.9

Another scoring approach, which combines a general population classification with special considerations into a 20-point scale (where more points represent more consequences), is shown in Table 7.18:

Population score = [general population] + [special population]

Receptors 7/167

Table 7.18  Sample scoring of population density (20-point scale)

General population category | Points
Commercial | 10
High density | 10
Industrial | 10
Residential | 10
Rural | 5

Special population category | Points
Apartments/townhomes | 10
Hospital | 10
Multifamily, trailer park | 8
Residential backyards | 9
Residential backyards (fenced) | 9
Roadways | 5
School | 9

Considerations for restricted-mobility populations (nursing homes, rehabilitation centers, etc.) and difficult-to-evacuate populations might also be appropriate. Additional modeling efforts might also assess traffic volumes and occupancies based on time of day, day of week, and/or season. These methods of incorporating population density into the risk analysis support scoring or categorizing approaches. If absolute risk values in terms of injury or fatality rates are required, then additional analyses are usually needed. Other methods of quantifying populations are discussed in Chapter 14.

D2. Environmental issues

In most parts of the world today, there is a strong motivation to ensure that industrial operations are compatible with their environments. Ideally, an operation will have no adverse impact whatsoever on the natural surroundings: air, water, and ground. Realistically, some trade-offs are involved and an amount of environmental risk must be accepted. It is practical to acknowledge that, as with public safety risks, environmental risks cannot be completely eliminated. They can and should, however, be understood and managed. As with other pipeline risk aspects, it is important to establish a framework in which all environmental risk factors can be quantified and analyzed, at least in a relative fashion.

The assessment of environmental risk is best begun with a clear understanding of pertinent environmental issues and the pipeline company's position with regard to that understanding. A company will often write a document that states its policy on protecting the environment. This document serves to clarify and focus the company's position. The policy should clearly state what environmental protection is and how the company, in its everyday activities, plans to practice it. The policy is often a statement of intent to comply with all regulations and accepted industry practices. A more customized policy statement will more exactly define environmentally sensitive areas that apply to its pipelines. Here the phrase "unusually sensitive" is implied, because every area is sensitive to some degree to impact from a pipeline failure. A comprehensive policy will also address the company's position with regard to environmental issues that do not involve a pipeline leak. These issues will be defined later in this section.

Assessing the environmental risk is the object of this discussion. Environmental risk factors will overlap public safety risk factors to a large extent. The most dramatic negative environmental impact will usually occur from a pipeline spill. However, some impact can occur from the installation and the long-term presence of the pipeline. The pipeline can cause changes in natural drainage and vegetation, groundwater movements, erosion patterns, and even animal migratory routes. The presence of aboveground facilities can impact wildlife (and neighbors) in a variety of non-spill ways, such as these:

- Noise (pumps, compressors, product movements through valves and piping, maintenance and operations activities)
- Vibrations
- Vehicular traffic
- Odors
- Special activities such as flaring, pigging, painting, and cleaning
- Barriers to animal movements and other natural activities
- Rainwater runoff/drainage from facilities
- Visual/aesthetic impact.
These non-spill impacts are normally considered in the design phase of a new pipeline. Sensitivity to such issues in the design phase can save large remediation/modification costs at a later stage. Because these consequences can be fairly accurately predicted, the non-spill impacts can be seen more as anticipated damages than risks. Risk assessment plays a larger role where there is more uncertainty as to the consequences.

Another non-spill environmental risk category that is not directly scored in this module is the risk of careless pipelining activities. There are many opportunities to cause environmental harm through misuse of common chemicals and through improper operation and maintenance activities. In pipeline operations, some potentially harmful substances commonly used include

- Herbicides
- Pesticides
- Paints
- Paint removers
- Oils/greases/fuels
- Cleaning agents
- Vehicle fluids
- Odorants
- Biocides/corrosion inhibitors.

Some activities generate potentially dangerous waste products, including

- Truck loading/unloading
- Pigging/internal cleaning
- Hydrotesting
- Sandblasting
- Valve maintenance.
Even common excavating can disrupt natural drainage, allow water migrations between previously separated areas, promote erosion, and hinder vegetation and related wildlife propagation. As evidence of an operator's seriousness about environmental issues, the evaluator can examine these common practices and procedures to gauge the environmental sensitivity of the operation.

It is worth noting that pipeline construction can be, and has been, designed to incorporate environmental improvements. An example is a pipeline river crossing project that included, as an ancillary part of the scope, the installation of special rock structures that facilitated a ninefold increase in fish population [81]. Other examples can be cited where natural habitats have been improved or other environmental aspects enhanced as part of a pipeline installation or operation.
7/168 Leak Impact Factor

D3. Environmental sensitivity

For the initial phases of risk management, a strict definition of environmentally sensitive areas might not be absolutely necessary. A working definition by which most people would recognize a sensitive area might suffice. Such a working definition would need to address rare plant and animal habitats, fragile ecosystems, impacts on biodiversity, and situations where conditions are predominantly in a natural state, undisturbed by man. To more fully distinguish sensitive areas, the definition should also address the ability of such areas to absorb or recover from contamination episodes.

The environmental effects of a leak are partially addressed in the product hazard score. The chronic component of this value scores the hazard potential of the product by assessing characteristics such as aquatic toxicity, mammalian toxicity, chronic toxicity, potential carcinogenicity, and environmental persistence (volatility, hydrolysis, biodegradation, photolysis). When the RQ score is above a certain level, the pipeline surroundings might need to be evaluated for environmental sensitivity. Liquid spills are generally more apt to be associated with chronic hazards. The modeling of liquid dispersions is a very complex undertaking and is approximated for risk modeling purposes. Areas more prone to damage and/or more difficult to remediate can be identified and included in the risk assessment. Since the RQ definition includes an evaluation of environmental persistence, spills of substances whose chronic component, RQ, is 3 or greater are normally the types of products that can cause contamination damage. The threshold value of RQ ≥ 3 eliminates most gases and includes most non-HVL hydrocarbon substances transported by pipelines. Some exceptions exist, such as H2S, where the chronic component is 6 and yet the environmental impact of an H2S leak may not be significant. The evaluator should eliminate from this analysis substances that will not cause environmental harm. Accumulation effects such as greenhouse gas effects may need to be considered in environmental sensitivity scoring.
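The screening rule just described (flag products whose chronic RQ component is 3 or higher, minus judgment-based exceptions such as H2S) can be sketched as a simple filter. The product names and all RQ values below except H2S's chronic component of 6 are illustrative assumptions, not values from the text.

```python
# Chronic-component (RQ) scores. Values are illustrative except H2S,
# whose chronic component of 6 is cited in the text.
chronic_rq = {"crude oil": 4, "gasoline": 3, "natural gas": 1, "H2S": 6}

# Products the evaluator has judged environmentally benign despite a
# high RQ score (the text gives H2S as an example of such an exception).
exceptions = {"H2S"}

def needs_sensitivity_review(product: str) -> bool:
    """Flag products whose chronic RQ score is >= 3, minus exceptions."""
    return chronic_rq.get(product, 0) >= 3 and product not in exceptions

flagged = [p for p in chronic_rq if needs_sensitivity_review(p)]
print(flagged)  # ['crude oil', 'gasoline']
```

The exception set carries the evaluator's judgment that a substance "will not cause environmental harm" even when its chronic score clears the threshold.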
In the United States, a definition for high environmental sensitivity includes intake locations for community water systems, wetlands, riverine or estuarine systems, national and state parks or forests, wilderness and natural areas, wildlife preservation areas and refuges, conservation areas, priority natural heritage areas, wild and scenic rivers, land trust areas, designated critical habitat for threatened or endangered species, and federal and state lands that are research natural areas [81]. These area labels fit specific definitions in the U.S. regulatory world. In other countries, similar areas, perhaps labeled differently, will no doubt exist.

Shorelines can be especially sensitive to pipeline spills. Specifically for oil spills, a ranking system for impact to shoreline habitats has been developed for estuarine (where river currents meet tidewaters), lacustrine (lake shorelines), and riverine (river banks) regions. Ranking sensitivity is based on the following [20]:

- Relative exposure to wave, tidal, and river flow energy
- Shoreline type (rocky cliffs, beaches, marshes)
- Substrate type (grain size, mobility, oil penetration, and trafficability)
- Biological productivity and sensitivity.

The physical and biological characteristics of the shoreline environment, not just the substrate properties, are ideally used to gauge sensitivity. Many of the environmental rankings shown in Table 7.22 are taken from Ref. [20], which in turn modified the National Oceanic and Atmospheric Administration (NOAA) Guidelines for Developing Environmental Sensitivity Index (ESI) Atlases and Databases (April 1993), other NOAA guidance for freshwater environments, and the FWS National Wetlands Research Center.

In spills over water, the spilled material's behavior is critical in determining the vulnerability of the water biota and the potential migration of the spill to sensitive shorelines. Table 7.19 relates some properties of spilled substances to their expected behavior in water. This can be used to develop a scoring protocol for offshore product dispersion based on the material's properties. (See also Chapter 12 for offshore pipeline risk assessments.)

As an example of an assessment approach, an evaluation of a gasoline pipeline in the United Kingdom identified, weighted, and scored several critical factors for each pipeline segment. The environmental rating factors that were part of the risk assessment included

- Land cover type
- Distance to nearest permanent surface water
- Required surface water quality to sustain current land use
- Conservation value
- Habitat preserves
- Habitats with longer-lived biota (woods, vineyards, orchards, gardens)
- Slope
- Groundwater
- Rock type and likelihood of aquifer
- Permeability
- Depth to bedrock
- Distance to groundwater extraction points.

This assessment included consideration of costs and difficulties associated with responding to a leak event. Points were assigned for each characteristic and then grouped into qualitative descriptors (low, moderate, high, very high). Another example of assessing environmental sensitivities is shown in Appendix E.
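The UK gasoline-line assessment just described assigned points per factor, weighted them, and grouped segment totals into qualitative descriptors. The sketch below shows that general pattern; the weights, factor scores, and bin boundaries are hypothetical assumptions (the source gives only the factor names and the four descriptor labels).

```python
# Hypothetical (score, weight) pairs for one pipeline segment. The factor
# names come from the text; the numbers are illustrative only.
factors = {
    "land cover type": (6, 1.0),
    "distance to surface water": (8, 2.0),
    "groundwater/aquifer likelihood": (7, 1.5),
    "conservation value": (4, 1.0),
}

def environmental_rating(factors):
    """Weighted score grouped into the text's qualitative descriptors."""
    total = sum(score * weight for score, weight in factors.values())
    max_total = sum(10 * weight for _, weight in factors.values())
    fraction = total / max_total          # normalize to 0..1
    if fraction < 0.25:
        return "low"
    if fraction < 0.50:
        return "moderate"
    if fraction < 0.75:
        return "high"
    return "very high"

print(environmental_rating(factors))  # high
```

Normalizing by the maximum attainable total keeps the descriptor bins stable when factors are added or removed from the segment model.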
D4. High-value areas

For both gas and liquid pipelines, some areas adjacent to a pipeline can be identified as "high-value" areas. A high-value area (HVA) can be loosely defined as a location that would suffer unusually high damages or generate exceptional consequences for the pipeline owner in the event of a pipeline failure. In making this distinction, pipeline sections traversing or otherwise potentially exposing these areas to damage should be scored as more consequential pipeline sections. HVAs might also bring an associated higher possibility of significant legal costs and compensations to damaged parties. Characteristics that may justify the high-value definition include the following:

Higher property values. A spill or leak that causes damages in areas where land values are higher or more expensive structures are prevalent will be more costly to repair or replace. Another example of this might be agricultural land where more valuable crops or livestock could be damaged, and especially where such damage precludes the use of the area for some time.
Table 7.19  Spill behavior in water

Entries list boiling point; vapor pressure; specific gravity; solubility, followed by the expected behavior in water.

1. Below ambient; very high; any; insoluble:
All of the liquid will rapidly boil from the surface of the water. An underwater spill will most often result in the liquid boiling and bubbles rising to the surface.

2. Below ambient; very high; less than water; low or partial:
Most of the liquid will rapidly boil off, but some portion will dissolve in the water. Some of the dissolved material will evaporate with time from the water. An underwater spill will result in more dissolution in water than a surface spill.

3. Below ambient; very high; any; high:
As much as 50% or more of the liquid may rapidly boil off the water while the rest dissolves in water. Some of the dissolved material will evaporate with time from the water. Underwater spills will result in more dissolution in water than surface spills. Indeed, little vapor may escape the surface if the discharge is sufficiently deep.

4. Above ambient; any; less than water; insoluble:
The liquid or solid will float on the water. Liquids will form surface slicks. Substances with significant vapor pressures will evaporate with time.

5. Above ambient; any; less than water; low or partial:
The liquid or solid will float on water as above, but will dissolve over a period of time. Substances with significant vapor pressures may simultaneously evaporate with time.

6. Above ambient; any; less than water; high:
These materials will rapidly dissolve in water up to the limit (if any) of their solubility. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.

7. Above ambient; any; near water; insoluble:
Difficult to assess. Because they will not dissolve, and because specific gravities are close to water, they may float on or beneath the surface of the water or disperse as blobs of liquid or solid particles through the water column. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.

8. Above ambient; any; near water; low or partial:
Although a material with these properties will behave at first like the material described directly above, it will eventually dissolve in the water. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.

9. Above ambient; any; any; high:
These materials will rapidly dissolve in water up to the limit (if any) of their solubility. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.

10. Above ambient; any; greater than water; insoluble:
Heavier-than-water insoluble substances will sink to the bottom and stay there. Liquids may collect in deep-water pockets.

11. Above ambient; any; greater than water; low or partial:
These materials will sink to the bottom and then dissolve over a period of time.

12. Above ambient; any; greater than water; high:
These materials will rapidly dissolve in water up to the limit (if any) of their solubility. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.

Source: ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation), prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for the Handbook of Chemical Hazard Analysis Procedures (approximate date 1989) and software for dispersion modeling and thermal and overpressure impacts.
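Table 7.19 is essentially a decision table, so it can be mirrored directly as a lookup in code. The sketch below covers only a few of the rows (boilers, floaters, sinkers, and high-solubility materials) to show the pattern; it is a deliberate simplification, not a full reimplementation of the table.

```python
def spill_behavior(boiling_point: str, specific_gravity: str,
                   solubility: str) -> str:
    """Coarse spill-behavior lookup modeled on a few rows of Table 7.19.

    Arguments take the table's categorical values:
      boiling_point:    "below ambient" / "above ambient"
      specific_gravity: "less than water" / "near water" / "greater than water"
      solubility:       "insoluble" / "low or partial" / "high"
    """
    if boiling_point == "below ambient":
        return "rapidly boils off the water surface"
    if solubility == "high":
        return "rapidly dissolves up to its solubility limit"
    if specific_gravity == "less than water":
        return "floats, forming a surface slick"
    if specific_gravity == "greater than water":
        return "sinks to the bottom"
    # Near-water gravity, insoluble or partially soluble (table rows 7-8)
    return "may float or disperse through the water column"

print(spill_behavior("above ambient", "less than water", "insoluble"))
# floats, forming a surface slick
```

A production model would also carry the table's secondary notes (slow dissolution, evaporation over time, underwater-release behavior) rather than a single phrase per row.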
Areas that are more difficult to remediate. If a spill occurs where access is difficult or conditions promote more widespread damage, costs of remediation might be higher. Examples might be terrain difficult for equipment to access (steep slopes, swamps, dense vegetation growth); topography that widely and quickly disperses a spilled product, perhaps into sensitive areas such as streams; damages to surface areas that are disruptive to repair; and damages to agricultural activities where such damages would preclude the use of the area for long periods of time. The reader should realize that some remediation efforts can continue literally for decades.

Structures or facilities that are more difficult to replace. An example would be a hospital or university with specialized equipment that is not adequately reflected in property values.

Higher associated costs. If a spill occurs in a marina, a harbor, an airport, or other locations where access interruption could be potentially very costly to the local industry, this could justify the high-value area score. If a business is interrupted by a spill (for example, a resort area where beaches are made inaccessible), higher damages and legal costs can be anticipated.
Historical areas. Areas valuable to the public, especially when they are irreplaceable due to historical significance, may carry a high price if damaged by a pipeline leak. This high price might be seen indirectly in terms of public opinion against the company (or the industry in general) or increased regulatory actions. Archaeological sites may fit into this category.

High-use areas. These areas are generally covered by population density classifications (high-occupancy buildings such as churches, schools, and stores cause the class location to rise to Class 3 in U.S. regulations, if not already there) or by environmentally sensitive areas such as state and national parks. Evaluators may wish to designate other high-use areas such as marinas, beaches, picnic areas, and boating and fishing areas as high-value areas due to the negative publicity that a leak in such areas would generate.

Identification and scoring of HVAs can be done by determining the most consequential conditions that exist and scoring according to the following scale (or according to the scale of Table 7.21, shown later). Note that the probability of a leak, fire, and explosion is not evaluated here; only potential consequences should such an event occur. Interpolations between the classifications should be made. The following classifications use qualitative descriptions of HVAs and environmental sensitivities to score potential receptor damages.

Neutral (default): 0
No extraordinary environmental or high-value considerations. Because all pipeline leaks have the potential for environmental harm and property damage, the neutral classification indicates that there are no special conditions that would significantly increase the consequences of a leak, fire, or explosion.

Higher: 0.1-0.6
Some environmental sensitivity. A spill has a fair chance of causing an unusual amount of environmental harm. Values of surrounding residential properties are in the top 10% of the community. High-value commercial, public, or industrial facilities could be impacted by a leak's fire or explosion. Remediation costs are estimated to be about halfway between a normal remediation and the most extreme remediation.

Extreme: 0.7-1.0
Extreme environmental sensitivity. Nearly any spill will cause immediate and serious harm. High-cost remediation is anticipated. High-value facilities would almost certainly be damaged by a leak, fire, or explosion. Widespread community disruptions would occur, as well as long-term or permanent environmental damage.

Another sample of scoring HVAs is shown in Table 7.20. In this scheme, various high-value areas are "valued" on a 0- to 5-point scale, with higher points representing more consequential or vulnerable receptors. Attempts to gauge all property values and land uses along the pipeline may not be a worthwhile effort, especially since such evaluations must be constantly updated.
The HVA designation can be reserved for extraordinary situations. Experienced pipeline personnel will normally have a good feel for extraordinary conditions along the lines that merit special treatment in a risk assessment.

Table 7.20  Sample high-value area scoring

HVA description | Points
None | 0
School | 5
Church | 3.5
Hospital | 5
Historic site | 2
Cemetery | 2
Busy harbor | 3.5
Airport (major) | 3
Airport (minor) | 2
University |
Industrial center |
Interstate highway |
Recreational area/parks |
Special agriculture |
Water treatment/source |
Multiple |
Other |
Equivalencies of receptors

A difficulty in all risk assessments is the determination of a damage state on which to base frequency-of-occurrence estimates. This is further complicated by the normal presence of several types of receptors, each with different vulnerabilities to a threat such as thermal radiation or contamination. The overall difficulty is sometimes addressed by running several risk assessments in parallel, each corresponding to a certain receptor or receptor-damage state. In this approach, separate risk values would be generated for, as an example, fatalities, injuries, groundwater contamination, property damage values, etc. The advantage of this approach is in estimating absolute risk values. The disadvantage is the additional complexity in modeling and subsequent decision making. An example of this type of approach is shown in Appendix E.

Another approach is to let any special vulnerability of any threatened receptor govern the risk assessment. An example of this approach is shown in Appendix E. Appendix F presents a protocol for grouping various receptor impacts into three sensitivity areas: normal, sensitive, and hypersensitive. This was developed to perform an environmental assessment (EA) of a proposed gasoline pipeline. Receptors considered and the basis of their evaluation in this EA are shown in Table 7.21. Under this categorization, an area was judged to be sensitive or hypersensitive if any one of the receptors is defined to be sensitive or hypersensitive. This conservatively uses the worst-case element, but does not consider cumulative effects, when multiple sensitive or hypersensitive elements are present.

Table 7.21  Bases for evaluation of various receptors

Receptor | Basis of categorization
Human populations | House/building counts
Groundwater | Distance to public drinking water facilities in various geologic formations; special hydrogeological evaluation
Surface water | Stream characterization; flow path modeling; criticality of water; scoring model
Threatened and endangered species | Government agencies; studies; field spot checks
Recreational areas | Parklands; rivers/streams upstream of parklands; aquifers feeding parklands

A third option in combining various receptor types into a risk assessment is to establish equivalencies among the receptors. The following scheme is an example scoring of receptors for a hazardous liquid pipeline:

Population | 0-10 pts
High-value areas | 0-10 pts
Public lands (national parks and forests) | 0-5 pts
Wetlands | 0-5 pts
Water intakes | 0-5 pts
Waters | 0-5 pts
Commercially navigable waterways | 0-5 pts
Total | 45 pts

This approach might be more controversial because judgments are made that directly value certain types of receptor damages more than others. Note, however, that the other approaches are also faced with such judgments, although they
might be pushed to the decision phase rather than the assessment phase of risk management.

Table 7.22 presents another possible scoring scheme for some environmental issues and HVAs. In this scheme, the higher scores represent higher consequences. This table establishes some equivalencies among various environmental and other receptors, including population density. These equivalencies may not be appropriate in all cases. This table was designed to be used with a 4-point population density classification (the four classes defined by DOT). It proposes a 1- to 5-point scale to include scores not only for population density, but also for environmental sensitivity and high-value areas (HVAs). Scores are determined based on qualitative descriptions and are to be added to the population class number. The worst case (highest number) in each column should govern. When conditions from both columns coexist, both scores can be added to the population class number.

The extremes of this consequence scale will be intuitively obvious: the most environmentally sensitive area, the highest population class, and the highest-value areas simultaneously occurring in the same section would make it the highest-consequence section. The scale midrange, however, might discomfort some people in that a certain amount of environmental sensitivity (or value of the surroundings) is said to equal a certain population increase. In other words, environmental loss and economic loss are being equated to loss of life. In Table 7.22, the highest environmental sensitivity and the greatest HVA can each change the surroundings score by the equivalent of one population class designation. Assessment schemes such as the one shown in Table 7.22 are of course very general and contain value judgments that might be controversial. They can, however, be useful screening or high-level tools in a risk assessment.

Table 7.22  Scoring for environmental sensitivity and/or high-value areas

Score 0.9
Environmental sensitivity: Nesting grounds or nursing areas of endangered species; vital sites for species propagation; high concentration of individuals of an endangered species.
High value: Rare equipment; hard-to-replace facilities; extensive associated damages would be felt on loss of facilities; major costs of business interruptions anticipated; most serious repercussions are anticipated; high degree of public outcry; national/international news.

Score 0.8
Environmental sensitivity: Freshwater swamps and marshes; saltwater marshes; mangroves; vulnerable water intakes for community water supplies (surface or groundwater intakes); very serious damage potential.
High value: Very high property values; high costs and high likelihood of business interruption; expensive industry shutdowns required; widespread community disruptions are expected; high publicity regionally, some national coverage.

Score 0.7
Environmental sensitivity: Significant additional damages expected due to difficult access or extensive remediation; serious harm is done by a pipeline leak.
High value: Moderate business interruptions anticipated; well-known or important historical or archaeological sites; a degree of public outrage is anticipated.

Score 0.6
Environmental sensitivity: Shorelines with riprap structures or gravel beaches; gently sloping gravel riverbanks.
High value: Long-term (one growing season or more) damage to agriculture; other associated costs; some community disruption; regional news stories.

Score 0.5
Environmental sensitivity: Mixed sand and gravel beaches; gently sloping sand and gravel riverbanks; topography that promotes wider dispersion (slopes, soil conditions, water currents, etc.); more serious damage potential.
High value: Low-profile historical and archaeological sites; high-expense cleanup area due to access, equipment needs, or other factors unique to this area; high level of local public concern would be seen.

Score 0.4
Environmental sensitivity: Coarse-grained sand beaches; sandy river bars; gently sloping sandy riverbanks; national and state parks and forests.
High value: Unusual public interest in this site; high-profile locations such as recreation areas; some industry interruption (without major costs); local news coverage.

Score 0.3
Environmental sensitivity: Fine-grained sand beaches; eroding scarps; exposed, eroding riverbanks; difficulties expected in remediation; higher than "normal" spill dispersal.
High value: Some level of associated costs, higher than normal, is anticipated; limited-use buildings (warehouses, storage facilities, small offices, etc.) might have access restricted.

Score 0.2
Environmental sensitivity: Wave-cut platforms in bedrock; bedrock riverbanks; minor increase in environmental damage potential.
High value: Picnic grounds, gardens, high-use public areas; increasing property values.

Score 0.1
Environmental sensitivity: Shoreline with rocky shores, cliffs, or banks.
High value: Property values are higher than normal.

Score 0
Environmental sensitivity: No extraordinary environmental damages.
High value: Potential damages are normal for this class location; no extraordinary damage expected.

The following examples illustrate some general (high-level) consequence scoring for various receptors, based on Table 7.22.
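The combination rule just described (add the governing environmental and high-value adjustments from Table 7.22 to the population class number) reduces to simple arithmetic. The sketch below assumes the worst-case score in each column has already been selected, and reproduces the numbers worked in Examples 7.6 through 7.8.

```python
def receptor_score(population_class: int, env_score: float = 0.0,
                   hva_score: float = 0.0) -> float:
    """Population class plus Table 7.22 adjustments.

    env_score and hva_score are each the governing (worst-case) value
    from their column, on the 0-to-0.9 scale; when environmental and
    high-value conditions coexist, both adders apply.
    """
    return population_class + env_score + hva_score

# Arithmetic from the worked examples:
print(receptor_score(2, hva_score=0.5))   # 2.5 (Example 7.6)
print(receptor_score(4, hva_score=0.9))   # 4.9 (Example 7.7)
print(receptor_score(1, env_score=0.8))   # 1.8 (Example 7.8)
```

Selecting the worst case within each column before calling the function keeps the rule's "highest number governs" behavior outside the arithmetic itself.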
Example 7.5: Neutral consequences

A natural gas pipeline traverses an agricultural area of Class 1 and Class 2 (low and medium) population densities. Soil conditions are organic clay and sand. Nearby housing and commercial buildings are consistent with most comparable class locations. There are no known endangered species that could be impacted by a leak in this area. Natural gas is lighter than air, and a leak would have minimal chronic impact, as shown by the product hazard score (chronic component) of 2. No environmental or high-value receptors are vulnerable from these sections.
Example 7.6: Higher consequences

Outside a major metropolitan area, a subdivision of very expensive mansions has recently been constructed within 1800 feet of a 6-in., 400-psig fuel oil line. The class location is 2. The pipeline is located on a slope above the new houses. The soil is sandy. Groundwater contamination is a possibility, but there are no intake locations for community water supplies nearby. Spill remediation would be higher than normal due to the slope effects, the highly permeable soil, and the anticipated problems with long-term remediation equipment operating near the residential area. The housing is judged to be far enough from the pipeline, and the thermal effects from burning fuel oil are limited enough, that immediate impact to that community is a remote possibility. The evaluator scores this situation 0.5 on a 0- to 1.0-point scale, in consideration of the topography and the high house values. He adds this to the population class to get 2.5 as the receptor score.
Example 7.7: Extreme consequences

A high-pressure, 30-in. natural gas pipeline is in a corridor that runs within 300 ft of a major university, including a research/teaching hospital. By population density, the class location is 4 (very high). Cleanup costs for leaked natural gas are expected to be minimal. If a fire or explosion occurs, damage could be extensive. Given the unique nature of the structures nearby and the value of the contents within (specialized equipment, research in progress, records, and files), the evaluator feels the surroundings represent a higher value and scores the additional consequences for pipeline operations in this area as 0.9 on a 0- to 1.0-point scale. He adds the environmental/HVA score to the population class to get 4.9 as the receptor score. Emergency response to a gas leak would not always be quick enough to reduce potential damages. No spill score adjustments are made.
Example 7.8: Extreme consequences

A 24-in. crude oil pipeline traverses a wetlands area and parallels a stream for over a mile within the wetlands. This is a Class 1 area (low population). Cleanup of a spill in this freshwater marsh would involve much damage associated with heavy equipment and long-term remediation activities (temporary roads, establishment of pumping stations, etc.). Immediately adjacent to the wetlands area, and within one-quarter mile of the pipeline, a small community removes water from the stream to supplement its groundwater intakes. Noting the immediate wetlands threat from any spill, the high cost of remediation, and the threat to a community water supply, the evaluator scores the conditions as 0.8 on a 0- to 1.0-point scale. If the water intake were the community's only water supply and if endangered species were involved, the evaluator would have scored the situation as 0.85 or 0.9. The receptor score is then 1 + 0.8 = 1.8.

The operator has a very strong environmental program that includes a detailed, well-practiced response plan. Company-owned equipment and contract equipment are on standby and can be quickly placed in this area through the use of a helicopter that is also on 24-hour-per-day standby. Trained, equipped personnel can be at this site within 1 hour. A manned control room should be able to detect a significant leak here within a very few minutes. The evaluator judges that this level of response can indeed reduce spill consequences by 50% (threshold established for modeling purposes) and, hence, he adjusts the spill score, effectively reducing the assumed quantity spilled.
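The response-time adjustment in Example 7.8 (a 50% reduction in assumed spill quantity when rapid, well-equipped response is demonstrated) is a simple multiplicative credit. The function below is a hypothetical sketch of that adjustment; the 60-minute threshold and 50% credit come from the example, while the quantities are illustrative.

```python
def adjusted_spill_quantity(base_quantity_bbl: float,
                            response_minutes: float,
                            threshold_minutes: float = 60.0,
                            credit: float = 0.50) -> float:
    """Reduce the assumed spill quantity when trained, equipped crews
    can reach the site within the modeling threshold (Example 7.8 uses
    1 hour and a 50% consequence-reduction credit).
    """
    if response_minutes <= threshold_minutes:
        return base_quantity_bbl * (1 - credit)
    return base_quantity_bbl

print(adjusted_spill_quantity(1000, 55))   # 500.0
print(adjusted_spill_quantity(1000, 120))  # 1000
```

Modeling the credit as an all-or-nothing threshold mirrors the example's logic; a graded credit as a function of response time would be a natural refinement.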
Hazard zones

Hazard zones are defined as distances from a pipeline release point at which significant damage could occur to a receptor (see also Chapter 14). Therefore, a hazard zone is usually a function of how far potential thermal and overpressure effects may extend from the release point. Note that when a hazard zone is calculated as a distance from a source, such as a burning liquid pool or vapor cloud centroid, that source might not be at the pipeline failure location. In fact, the source can be some distance from the leak site. Relative hazard zones for a vapor release are illustrated in Figure 7.8. A hazard zone might also include potential liquid contamination distances for vulnerable receptors such as water intakes and sensitive environments. The thermal and overpressure distances are themselves a function of many factors including release rate, release volume, flammability limits, threshold levels of thermal/overpressure effects, product characteristics, and weather conditions. A hazard zone must be defined in terms of specific damage thresholds that could occur under defined scenarios. An example of a damage threshold is a thermal radiation (heat flux) level that causes injury or fatality in a certain percentage of humans exposed for a specified period of time. Another example is the overpressure level that causes human injury or specific damage levels to certain kinds of structures. Such damage threshold criteria are further discussed in Chapter 14. Receptors falling within the hazard zones are considered to be vulnerable to damage from a pipeline release. In the case of a gas release, receptors that lie between the release point and the lower flammable concentration boundary of the cloud are considered to be at risk directly from fire. Receptors that lie between the release point and the explosive damage boundary may additionally be at risk from direct overpressure effects. Some receptors within the hazard zone would also be at risk from thermal radiation effects from a jet fire as well as from any secondary fires resulting from the rupture-ignition event. See Chapter 14 and Figure 7.8. In the case of liquid spills, migration of spilled product, thermal radiation from a potential pool fire, and potential contamination would define the hazard zone. Because an infinite number of release scenarios, and subsequent hazard zones, are possible, some simplifying assumptions are required. A very unlikely combination of events is often chosen to represent maximum hazard zone distances. The
assumptions underlying such event combinations produce very conservative (highly unlikely) scenarios that typically overestimate the actual hazard zone distances. This is done intentionally to ensure that hazard zones encompass the vast majority of possible pipeline release scenarios. A further benefit of such conservatism is the increased ability of the estimations to withstand close scrutiny and criticism from outside reviewers. As an example of a conservative hazard zone estimation, the calculations might be based on the distance at which a full pipeline rupture, at maximum operating pressure with subsequent ignition, could expose receptors to significant thermal damage, plus the additional distance at which blast (overpressure) injuries could occur in the event of a subsequent vapor cloud explosion. The resulting hazard zone would then represent the distances at which damages could occur, but would exceed the actual distances that the vast majority of pipeline release scenarios would impact. More specifically, the calculations could be based on conservative assumptions generating distances to the LFL boundary, doubling this distance to account for inconsistent mixing, and adding the overpressure distance for a scenario where the ignition and epicenter of the blast occur at the farthest point. However, such conservatism may also be excessive, leading to inefficient and costly repercussions (in the case of land-use decisions, for example). An estimation of potential hazard zones from pipeline releases is an aspect of the previously discussed spill score. The same issues that are used to establish relative consequences are
Figure 7.8  Thermal and overpressure damage zones.
7/174 Leak Impact Factor
used to estimate hazard zones. Such estimates are an important part of modeling when absolute risk estimates are sought (see Chapter 14). The calculations underlying the estimates are important in a relative risk assessment because they identify the critical variables that make one release potentially more consequential than another. They also help the evaluator to better understand the threat from the pipeline and more appropriately characterize receptors that are potentially damaged. Damage states that can be used to define hazard zones are discussed in Chapter 14.
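The conservative stacking of effects described above (distance to the LFL boundary, doubled to account for inconsistent mixing, plus a blast-overpressure distance) can be sketched as a simple calculation. The function name and the input distances are hypothetical; in practice the two distances would come from dispersion and blast models.

```python
def conservative_hazard_zone_ft(lfl_distance_ft, overpressure_distance_ft):
    """Combine a doubled distance-to-LFL (accounting for inconsistent
    mixing) with a blast-overpressure distance, per the conservative
    scenario stacking described in the text."""
    return 2.0 * lfl_distance_ft + overpressure_distance_ft

# Hypothetical inputs: 300 ft to LFL, 150 ft overpressure radius.
zone = conservative_hazard_zone_ft(lfl_distance_ft=300.0,
                                   overpressure_distance_ft=150.0)
```

Such a combined distance intentionally exceeds what most release scenarios would actually produce, consistent with the conservatism discussed above.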
IX. Leak impact factor sample

Many approaches are possible for evaluating the relative consequences of a pipeline failure. For each component of the LIF that should be considered, some sample scoring protocols have been presented. Some additional algorithm samples can be found in Appendix E and the case studies of Chapter 14.

Leak Impact Factor Samples

In this sample LIF algorithm, a liquid pipeline operator uses the relationships shown in Table 7.23 to evaluate the LIF. A brief description of the variables used is as follows:

prod_haz: product hazard, scored as described in this chapter
spill: a score ranging from 0 to 1.0 proportional to relative volume of potential release; 1.0 reflects the largest volume spill possible in this risk model
V1: volume lost to leak prior to system shutdown
V2: volume lost to leak from detection to system isolation
V3: volume lost to leak due to drainage of isolated pipeline section
spread: measure of relative dispersion range
overland: measure of relative dispersion due to surface flows
subsurface: measure of relative dispersion due to subsurface flows
drain: surrogate for drain volume; this is actually the upstream and downstream lengths of pipe that would contribute to a specific leak location, scaled from 0 to 10
pipe_diameter: pipe diameter
max_flow_rate: pumped flow rate of product
HVA: high value area, as defined in this chapter, susceptible to spill damage
public_lands: binary measure of presence of national parks, wildlife refuges, etc., susceptible to spill damage
water: binary measure of presence of water body susceptible to spill damage
population: 0-10 point scale indicating relative population density susceptible to spill damage
water_intake: binary measure of presence of drinking water intake structure susceptible to spill damage
LIF: leak impact factor, as defined in this chapter
This sample algorithm is a high-level screening tool used to identify changes in consequence along the route of a specific pipeline. The relative consequences are measured by the LIF, whose main components are:

Product hazard (PH)
Receptors (R)
Spill volume (S)
Spread range or dispersion (D)

where S is a function of pumping rate, leak detection capabilities, drain volume, and emergency response capabilities; and

LIF = PH x R x S x D
This model is applied to a pipeline transporting butadiene, whose product hazard is greater than for most hydrocarbons, about twice as high as for butane or propane. A higher health hazard score (Nh per NFPA), a higher reactivity score (Nr per NFPA), and a lower CERCLA reportable quantity create the higher hazard level. In the initial application of this algorithm, changes in consequence are thought to be driven solely by changes in operating pressure and population density along this pipeline. Other variables are included in the model but are not used initially.
Table 7.23  Algorithms for scoring the leak impact factor

LIF = [(prod_haz) x (spill) x (spread) x (receptors)]

Product hazard: (prod_haz). Product hazard is calculated elsewhere and stored as a database variable.
Spill: {[(V1) + (V2) + (V3)]/23,000}/10 + 0.2. The three components of total spill volume are adjusted by scaling factors.
Spread: [(overland)/3 + (subsurface)/8]. Variables adjusted by scaling factors.
Receptors: [(population) + (HVA) + (public_lands) + (wetlands) + (water_intake) + (water)]. Total receptor score is the sum of individual receptor scores, weighted elsewhere.
V1: [(max_flow_rate)/12]. Spill volume contributed by pumping flow rate.
V2: (0). Volume contributed by leak detection and response time; to be included later.
V3: [(drain) x (pipe_diameter)^2]. Contributing lengths of upstream and downstream pipe are adjusted by pipe diameter as a surrogate volume calculation.
High value: (HVA).
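A minimal implementation of the Table 7.23 relationships might look like the following sketch. The 23,000 divisor, the 0.2 constant, and the scaling factors come from the table; the function name and any input values used with it are hypothetical.

```python
def lif_score(prod_haz, max_flow_rate, drain, pipe_diameter,
              overland, subsurface, receptors_sum):
    """Sample LIF per Table 7.23. V2 is zero for now: leak detection
    and response time are to be included later."""
    v1 = max_flow_rate / 12.0            # spill volume from pumping rate
    v2 = 0.0                             # placeholder per the table
    v3 = drain * pipe_diameter ** 2      # surrogate drain-volume term
    spill = ((v1 + v2 + v3) / 23000.0) / 10.0 + 0.2
    spread = overland / 3.0 + subsurface / 8.0
    return prod_haz * spill * spread * receptors_sum
```

Because every term is a simple arithmetic combination of database variables, this kind of algorithm is easily expressed either in spreadsheet formulas or in SQL, as discussed in Chapter 8.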
Table 7.24  Sample LIF algorithm

Spill size: square root of (pressure). Spill size is modeled as being a function of (proportional to the square root of) internal pipe pressure.
Spread: 1. No differences in dispersion potential are recognized.
Product hazard: 18. Constant for this pipeline; for comparisons with other pipelines, natural gas = 7, gasoline = 10.
Receptors: (general population density) + (special population areas). Two variables are used to quantify the receptor of interest (population density); see Table 7.18.
Leak detection: 1. Constant; no changes along pipeline.
Emergency response: 1. Constant; no changes along pipeline.
Table 7.24 shows the LIF variables used in the model. Future improvements to the analyses could include evaluations of leak
detection, drain volumes, emergency response, additional receptors, and dispersion potential.
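The Table 7.24 protocol can be sketched per segment, with only pressure and population varying along the line. The constants (product hazard = 18; spread, leak detection, and emergency response each = 1) follow the table; the segment data below are hypothetical.

```python
import math

PRODUCT_HAZARD = 18.0     # constant for this butadiene line (Table 7.24)
SPREAD = 1.0              # no dispersion differences recognized
LEAK_DETECTION = 1.0      # constant: no changes along pipeline
EMERGENCY_RESPONSE = 1.0  # constant: no changes along pipeline

def segment_lif(pressure, general_pop, special_pop):
    """LIF for one segment: only pressure and population vary."""
    spill_size = math.sqrt(pressure)
    receptors = general_pop + special_pop
    return (PRODUCT_HAZARD * spill_size * SPREAD * receptors
            * LEAK_DETECTION * EMERGENCY_RESPONSE)

# Hypothetical segments: (pressure, general population, special populations)
segments = [(900.0, 2.0, 0.0), (400.0, 6.0, 1.0)]
scores = [segment_lif(*s) for s in segments]
```

Future improvements (leak detection, drain volumes, emergency response, additional receptors, dispersion) would replace the constants with segment-specific values.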
8/177

Data Management and Analyses

Contents
I. Background 8/177
II. Introduction 8/177
III. Risk management process 8/178
  Definitions 8/178
IV. Data preparation 8/179
  Data collection and format 8/179
  Point events and continuous data 8/179
  Eliminating unnecessary segments 8/180
  Creating categories of measurements 8/180
  Assigning zones of influence 8/180
  Countable events 8/181
  Spatial analyses 8/181
  Data quality/uncertainty 8/181
V. Segmentation 8/181
VI. Scoring 8/182
I. Background

Although subsequent chapters discuss possible additions to the risk assessment, many readers will, at least initially, want to work solely with the results of the risk assessment methodology described in Chapters 3 through 7. Therefore, this chapter discusses some data management issues and then begins the natural progression from risk assessment to risk management. Risk assessment is, at its core, a measurement process. As noted in Chapter 1, there is a discipline, perhaps even an "art," to measuring. This involves a philosophy with a clear understanding of the intent of the measuring. Furthermore, it requires defined processes and structure for performing the measurements, including all associated data-handling efforts. Having accumulated some risk assessment data, the next step is to prepare that data for decision making. Guidance on data management issues that will often arise in the risk assessment process is offered here. Some of these issues may not be apparent until the later stages of the effort, so advance planning will help ensure an efficient process. The numerical techniques that help extract information from data are discussed here. Using that information in decision making is also discussed and then more fully detailed in Chapter 15, Risk Management.
II. Introduction

Risk assessment is a data-intensive process. It synthesizes all available information into new information: risk values, either relative or absolute. The philosophy behind data collection is discussed in Chapters 1 and 2. The risk values themselves become information that must be managed. All through this book the risk "model" has meant the
set of algorithms used to define risk. To many, the term model refers to the software that holds data and presents risk results. This is actually the environment in which the model resides, not the model itself, by the terminology of this book. The environment housing and supporting the risk model is critical to the success of the risk management process.
III. Risk management process

This section describes the mechanics of performing a risk assessment using the common software tools of a spreadsheet and desktop database. The risk assessment process is designed to capture pertinent information in a format that can be used first to create segments with constant risk characteristics and then to assign risk scores to those segments. Many of the data processing steps in a risk assessment can appear complex when first studied, especially when those steps are described rather than demonstrated. Most processes are, however, fairly self-evident once the risk assessment efforts are under way. The reader is therefore advised not to be deterred by the apparent complexity of the process, but rather to begin the effort and use the following sections as a reference as issues arise. Chapter 1 presents an overall process for risk management. This process can be revisited when considering potential software environments, since the software will ideally fully support each step in the process.
Step 1: risk modeling

As previously noted, a pipeline risk assessment model is a set of algorithms or "rules" that use available information and data relationships to measure levels of risk along a pipeline. A model can be selected from existing and commercially available models, customized from existing models, or created "from scratch," depending on your requirements. Algorithms can be created to use data directly from a database environment to calculate risk scores. Several common software environments will support efficient data storage, retrieval, and algorithm calculations.
Step 2: data preparation

Data preparation or conditioning produces data sets that are ready to be loaded into and used by the risk assessment model. Data preparation includes processes to smooth or enhance data into zones of influence, categories, or bands as may be appropriate. Computer routines greatly facilitate these processes.
Step 3: segmentation

Because risks are rarely constant along a pipeline, it is advantageous to first segment the line into sections with constant risk characteristics (dynamic segmentation) or otherwise divide the pipeline into manageable pieces. This might be a one-time or rare event, to ensure consistent segments. Alternatively, it might change every time the underlying data change. Again, computer routines facilitate this.
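Dynamic segmentation can be sketched as collecting every station at which any event band begins or ends, then cutting the line at those breakpoints so that each resulting segment has constant risk characteristics. The event bands shown here are hypothetical.

```python
def dynamic_segments(events):
    """events: list of (beg_station, end_station, event_name, condition).
    Returns (beg, end) segments within which no event changes."""
    # Every band boundary is a potential change in risk characteristics.
    breaks = sorted({s for beg, end, _, _ in events for s in (beg, end)})
    return list(zip(breaks, breaks[1:]))

events = [
    (0, 500, "population", "low"),
    (0, 300, "coating", "fair"),
    (300, 500, "coating", "good"),
]
segs = dynamic_segments(events)   # [(0, 300), (300, 500)]
```

Each output segment can then be scored by looking up the conditions of every event band that covers it.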
Step 4: assessing risks

After a risk model has been selected and the data prepared, risks along the pipeline route can be assessed. The previously selected risk assessment model can be applied to each segment to get a unique risk score for that segment. These relative risk numbers can later be converted into cumulative risk values and/or absolute risk numbers. Moving from overviews of risk down to the smallest details of a specific piece of pipe will often be necessary. Software that supports rapid tabularization and perhaps map overlays is useful. Spill flowpath or dispersion modeling, soil penetration, and hazard area determinations are aspects of the more robust risk assessments.
Step 5: managing risks

Having performed a risk assessment for the segmented pipeline, now comes the critical step of managing the risks. In this area, the emphasis is on decision support: providing the tools needed by the risk manager to best optimize resource allocation. This process generally involves steps such as these:

- Analyzing data (graphically and with tables and simple statistics)
- Calculating cumulative risks and trends
- Creating an overall risk management strategy
- Identifying mitigation projects
- Performing "what-if" scenarios.
Patterns, trends, and relationships among data sets can become an important part of managing risks. Software that supports analytical graphics routines will be useful.
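One simple way to roll segment scores up for trending and "what-if" comparisons is a length-weighted cumulative risk, sketched below. The segment lengths and scores are hypothetical, and this is only one of several reasonable roll-up conventions.

```python
def cumulative_risk(segments):
    """segments: list of (length_ft, relative_risk_score).
    A length-weighted sum, so long segments count for more than
    short ones when comparing scenarios."""
    return sum(length * score for length, score in segments)

before_score = cumulative_risk([(1000, 40.0), (500, 70.0)])
# Same line after a hypothetical mitigation lowers the second segment.
after_score = cumulative_risk([(1000, 40.0), (500, 55.0)])
benefit = before_score - after_score
```

Comparing such totals before and after a proposed project is the essence of the "what-if" scenario analysis listed above.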
Definitions

Several terms in this discussion might be used in a manner that is unfamiliar to the reader. Terminology is not consistent among all risk modelers, so these definitions are more for convenience in describing subsequent steps here. Each record must have an identifier that relates that record to a specific portion of the overall pipeline system, that is, an ID. This identifier, along with a beginning station and ending station, uniquely identifies a specific point on the pipeline system. It is important that the identifier-stationing combination does indeed locate one and only one point on the system. An alphanumeric identification system, perhaps related to the pipeline's name, geographic position, line size, or other common identifying characteristics, is sometimes used to increase the utility of the ID field. Risk variables are also commonly called events in keeping with modern GIS terminology. The current characteristics of each event are called conditions (also sometimes called attributes or codes). For example, for the event (population), possible conditions include residential high, residential low, commercial high, etc. For the event maps/records, possible conditions are excellent, fair, poor, none. The event depth of cover could have a number or numerical ranges such as 24", 19", >48", or 12-24" as its possible conditions. Events, as variables in the risk assessment, can be named using standardized labels.
Several industry database design standards are emerging as of this writing. Adhering to a standard model facilitates the efficient exchange of data with vendors and service providers (ILI, CIS, etc.), as well as other pipeline companies and governmental databases. Each event must have a condition assigned. Some conditions can be assigned as general defaults or as a system-wide characteristic. Each event-condition combination defines a risk characteristic for a portion of the system. A restricted vocabulary is enforced in the most robust software applications. Only predefined terms can be used to characterize events. This eliminates typos and the use of different conditions to mean the same thing. For instance, for the event pipe manufacturer = "Republic Steel Corp," and not "Republic" or "Republic Steel" or "republic" or "RSC"; coating condition = "fair" and not "F," "ok," "medium," or "med," etc. The data dictionary is a document that lists all events and their underlying source, as well as all risk variables. It should also show all conditions used for each event along with the full description of each condition and its corresponding point values. The data dictionary is designed to be a reference and control document for the risk assessment. It should specify the owner (the person responsible for the data) as well as update frequency, accuracy, and other pertinent information about each piece of data, sometimes called metadata. In common database terminology, each row of data is called a record and each column is called a field. So, each record is composed of several fields of information and each field contains information related to each record. A collection of records and fields can be called a database, a data set, or a table. Information will usually be collected and put into a database (a spreadsheet can be a type of database). Results of risk assessments will also normally be put into a database environment.
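A restricted vocabulary can be enforced with a simple lookup against the data dictionary, as in this sketch. The dictionary slice, event names, and second manufacturer are illustrative only; a real data dictionary would be maintained as a controlled document.

```python
# Hypothetical slice of a data dictionary: allowed conditions per event.
DATA_DICTIONARY = {
    "pipe_manufacturer": {"Republic Steel Corp", "Acme Pipe Co"},
    "coating_condition": {"excellent", "good", "fair", "poor"},
}

def validate(event, condition):
    """Reject any condition not predefined for the event, eliminating
    typos and synonyms ('F', 'ok', 'med') for the same meaning."""
    allowed = DATA_DICTIONARY.get(event)
    if allowed is None:
        raise KeyError(f"unknown event: {event}")
    if condition not in allowed:
        raise ValueError(f"{condition!r} is not a predefined condition "
                         f"for event {event!r}")
    return condition
```

Running every incoming record through such a check at load time keeps "Republic," "republic," and "RSC" from fragmenting what should be a single condition.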
GIS is a geographical information system that combines database capabilities with graphics (especially maps) capabilities. GIS is increasingly the software environment of choice for assets that span large geographic areas. Most GIS environments have a programming language that can extract data and combine them according to the rules of an algorithm. Common applications for more detailed risk assessments will be modeling of flowpath or dispersion distances and directions, surface flow resistance, soil penetration, and hazard zone calculations. The GIS can also be the calculating "engine" for producing risk scores. SQL refers to Structured Query Language, a software language recognized by most database software. Using SQL, a query can be created to extract certain information from the database or to combine or present information in a certain way. Therefore, SQL can take individual pieces of data from the database and apply the rules of the algorithm to generate risk scores.
IV. Data preparation

Data collection and format

Pertinent risk data will come from a variety of sources. Older data will be in paper form and will probably need to be put into electronic format. It is not uncommon to find many different identification systems, with some linked to original alignment sheets, some based on linear measurements from fixed
points, and some based on coordinate systems, such as the Global Positioning System (GPS). Alignment sheets normally use stationing equations to capture adjustments and changes in the pipeline route. These equations often complicate identifiers since a stationing shown on an alignment sheet will often be inconsistent with the linear measurements taken in most surveys. Information will need to be in a standard format, or translation routines can be used to switch between alignment sheet stationing and linear measurements. All input information should be collected in a standard data format with common field (column) names. A standard data format can be specified for collection or reformatting. Consider this example:

ID | Begstation | Endstation | Desc | Code | Notes

where

ID = identifier relating to a specific length of pipeline
Begstation = the beginning point for a specific event and condition, using a consistent distance measuring system
Endstation = the end point for a specific event and condition, using the same measurement system
Desc = the name of the event
Code = the condition.

Each record in the initial events database therefore corresponds to an event that reports a condition for some risk variable for a specific distance along a specific pipeline. In data collection and compilation, an evaluator may wish to keep separate data sets (perhaps a different data set for each event, or for each event in each operating area) for ease of editing and maintenance during the data collection process. The number of separate data sets that are created to contain all the information is largely a matter of preference. Having few data sets makes tracking of each easier, but makes each one rather large and slow to process and also may make it more difficult to find specific pieces of information. Having many data sets means each is smaller and quicker to process and contains only a few information types. However, managing many smaller data sets may be more problematic. Especially in cases where the number of event records is not huge, maintaining separate data sets might not be beneficial. Separate data sets will need to be combined for purposes of segmentation and assignment of risk scores. The combining of data sets can be done efficiently through the use of certain queries in the SQL of most common database software. A scoring assessment requires the assignment of a numerical value corresponding to each condition. For example, the event environ sensitivity is scored as "High," which equals a value of 3 points in a certain risk model. It is also useful to preserve the more descriptive condition (high, med, low, etc.).
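The standard event format and the SQL-based combining of data sets can be sketched with Python's built-in sqlite3 module. The events, conditions, and point values here are hypothetical; a production system would use the organization's own database and data dictionary.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Events table in the standard ID/Begstation/Endstation/Desc/Code format.
con.execute("""CREATE TABLE events
               (ID TEXT, Begstation REAL, Endstation REAL,
                Desc TEXT, Code TEXT)""")
con.executemany("INSERT INTO events VALUES (?,?,?,?,?)", [
    ("LINE-A", 0, 1200, "environ_sensitivity", "high"),
    ("LINE-A", 1200, 5000, "environ_sensitivity", "low"),
])
# Numerical point values corresponding to each condition (hypothetical).
con.execute("CREATE TABLE scores (Desc TEXT, Code TEXT, Points REAL)")
con.executemany("INSERT INTO scores VALUES (?,?,?)", [
    ("environ_sensitivity", "high", 3.0),
    ("environ_sensitivity", "low", 1.0),
])
# A join attaches the numerical value while preserving the descriptive
# condition, as recommended in the text.
rows = con.execute("""
    SELECT e.ID, e.Begstation, e.Endstation, e.Code, s.Points
    FROM events e
    JOIN scores s ON e.Desc = s.Desc AND e.Code = s.Code
    ORDER BY e.Begstation
""").fetchall()
```

The same join pattern extends to combining many separate event data sets ahead of segmentation and scoring.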
Point events and continuous data

There is a distinction between data representing a specific point versus data representing a continuous condition over a length of pipeline. Continuous data always have a beginning and ending station number. A condition that stays generally constant over longer distances is clearly continuous data. Point event data have a beginning station number but no ending station; that is,
an event with no length. The distinction often has more to do with how the data are collected. For instance, depth of cover is normally measured at specific points and then the depth is inferred between the measurements. So even though the depth itself is often rather constant, the way in which it is collected causes it to be treated as point data.
Examples of Point Event Data

- A pipe-to-soil measurement
- Soil pH measurements at specific points
- Depth of cover (actual measurements)
- Drain volume calculations at specific points
- Elevation data

Examples of Continuous Data

- Pipe specifications
- Depth of cover (when estimated)
- Flow rates
- Procedures score
- Training score
- Maintenance score
- Earth movement potential
- Waterway crossings
- Wetlands crossings

Some of these continuous data examples are evaluation scores, such as "Procedures score," which is described elsewhere.

Inferring continuous data

Because the risk model requires variables to be characterized continuously along the pipeline, all data must eventually be in continuous format. Special software routines can be used to convert point event data into continuous data, or it can be done manually. Some data are generated as point events, even though they would seem to be continuous by their nature. In effect, the continuous condition is sampled at regular intervals, producing point event data. There are an infinite number of possible measurement points along any stretch of pipeline. The measurements taken are therefore spot readings or samples, which are then used to characterize one or more conditions along the length of the pipeline. This includes point measurements taken at specific points, such as depth of cover, pipe-to-soil voltage, or soil pH. In these cases, measurements are assumed to represent the condition for some length along the line. Other point event data are not direct measurements but rather the result of calculations. An example is a drain volume calculated based on the pipeline's elevation profile. These can theoretically be calculated at every inch along the pipeline. It is common practice to select some spacing, perhaps every 100 ft or 500 ft, to do a calculation. These calculated points are then turned into continuous data by assuming the calculated value extends half the distance to the next calculation point. Other examples include internal pressure and population density. Internal pressure changes continuously as a function of flowrate and distance from the pressure source. Similarly, as one moves along the pipeline, the population density theoretically changes with every meter, since each meter represents a new point from which a circle or rectangle can be drawn to determine population density around the pipeline. These types of data are generally converted into continuous bands by assuming that each reading extends one-half the distance to the next reading.

Eliminating unnecessary segments

Data that are collected at regular intervals along the pipeline are often unchanging, or barely changing, for long stretches. Examples of closely spaced measurements that often do not change much from measurement to measurement include CIS pipe-to-soil potential readings, depth of cover survey readings, and soil pH readings. Unless this is taken into account, the process that breaks the pipeline into iso-risk segments will create many more segments than is necessary. A string of relatively consistent measurements can be treated as a single band of information, rather than as many separate short bands. It is inefficient to create new risk segments based on very minor changes in readings since, realistically, the risk model should not react to those minor differences. It is more efficient for a knowledgeable individual to first determine how much of a change from point to point is significant from a risk standpoint. For example, the corrosion specialist might see no practical difference in pipe-to-soil readings of 910 and 912 millivolts. Indeed, this is probably within the uncertainty of the survey equipment and process. Therefore, the risk model should not distinguish between the two readings. However, the corrosion specialist is concerned with a reading of 910 mV versus a reading of 840 mV, and the risk model should therefore react differently to the two readings. The use of normal operating pressures is another example. The pipeline pressure is continuously changing along the pipeline, but smaller changes are normally not of interest to the risk assessment.
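The half-distance rule for inferring continuous data from point readings can be sketched as follows; the station and value pairs are hypothetical.

```python
def to_continuous_bands(points):
    """points: list of (station, value) sorted by station.
    Each reading extends halfway to its neighbors, yielding
    (beg_station, end_station, value) bands."""
    bands = []
    for i, (station, value) in enumerate(points):
        beg = station if i == 0 else (points[i - 1][0] + station) / 2.0
        end = station if i == len(points) - 1 else (station + points[i + 1][0]) / 2.0
        bands.append((beg, end, value))
    return bands

# Hypothetical depth-of-cover readings (station, inches of cover).
bands = to_continuous_bands([(0, 36), (100, 30), (300, 42)])
```

The first and last readings are simply clipped at their own stations here; an implementer might instead extend them to the ends of the line.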
Creating categories of measurements

To eliminate the unnecessary break points in the event bands, a routine can be used to create categories or "bins" into which readings will be placed. For instance, all pipe-to-soil readings can be categorized into a value of 1 to 10. There will still be sharp delineations at the break points between categories. If a reading of -0.89 volts falls into category 4 and -0.90 volts falls into category 5, then some unnecessary segments will still be created (assuming the difference is not of interest). However, the quantity of segments will be reduced, perhaps vastly, depending on the number of categories used. The user sets the level of resolution desired by choosing the number of categories and the range of each. A statistical analysis of actual readings, coupled with an understanding of the significance of the measurements, can be used to establish representative categories. A frequency distribution of all actual readings will assist in this categorization process.
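Binning readings so that only risk-significant changes create new segments can be sketched with explicit category edges. The edges below are hypothetical stand-ins for values a corrosion specialist would choose; note that readings of 910 and 912 mV land in the same category while 840 mV does not, matching the discussion above.

```python
import bisect

def categorize(reading_mv, edges=(-1100, -1000, -900, -850)):
    """Bin a pipe-to-soil potential reading (millivolts, negative)
    using edges chosen so only risk-significant changes produce
    new categories. The edges here are hypothetical."""
    return bisect.bisect(edges, reading_mv)

cats = [categorize(-912), categorize(-910), categorize(-840)]
```

A frequency distribution of actual survey readings would guide the choice of edge values in practice.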
Assigning zones of influence

A special case of converting point data into continuous data involves assigning a zone of influence. Some data are very location specific but provide some information about the surrounding lengths of pipe. These data are different from the sample data previously discussed since the event of interest is
not a sample measurement but rather represents an event or condition that is tied to a specific point on the pipeline. However, it will be assumed to represent some distance on either side of the location specified. An example is leak or break data. A leak usually affects only a few inches of pipe, but depending on the type of leak, it yields evidence about the susceptibility of neighboring sections of pipeline. Therefore, a zone of influence, x number of feet on either side of the leak event, is reasonably assigned around the leak. The whole length of pipeline in the zone of influence is then conservatively treated as having leaked and as containing conditions that might suggest increased leak susceptibility in the future. Considerations will be necessary for overlapping zones of influence, when the zone for one event overlaps the zone for another, leaving the overlap region doubly influenced.
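Assigning zones of influence around leak events, and merging overlapping zones so the overlap region is handled once rather than as fragmented bands, can be sketched as follows. The half-width is a hypothetical choice; a specialist would set it per leak type.

```python
def zones_of_influence(leak_stations, halfwidth_ft=250.0):
    """Assign a +/- halfwidth_ft zone around each leak event and merge
    overlapping zones into single bands."""
    zones = sorted((s - halfwidth_ft, s + halfwidth_ft) for s in leak_stations)
    merged = []
    for beg, end in zones:
        if merged and beg <= merged[-1][1]:
            # Overlap: extend the previous zone instead of adding a new one.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((beg, end))
    return merged

# Hypothetical leak stations: the first two zones overlap and merge.
zones = zones_of_influence([1000.0, 1300.0, 5000.0])
```

An alternative design would keep overlapping zones separate and score the overlap region as doubly influenced, as the text notes.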
Countable events

Some point events may be treated not as sample measurements but rather as countable events. Examples are foreign line crossings, one-call reports, or ILI anomalies (when an anomaly-specific evaluation is not warranted). The count or density of such events might be of interest, rather than a zone of influence. The number of these events in each section can be converted into a density. However, a density calculation derived after a segmentation process can be misleading because section length is highly variable under a dynamic segmentation scheme. A density might need to be predetermined and then used as an event prior to segmentation.
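Predetermining a density (events per unit length) before segmentation can be sketched as below; the stations and section limits are hypothetical.

```python
def event_density_per_mile(event_stations, beg, end):
    """Count point events (one-call reports, foreign line crossings,
    ILI anomalies) falling in [beg, end) and normalize by length,
    since raw counts mislead when segment lengths vary."""
    count = sum(1 for s in event_stations if beg <= s < end)
    miles = (end - beg) / 5280.0
    return count / miles

# Hypothetical one-call report stations over a 1-mile section.
density = event_density_per_mile([100, 2500, 4000, 9000], beg=0, end=5280)
```

The resulting density value can then be stored as an ordinary event band and carried through segmentation like any other variable.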
Data quality/uncertainty

As discussed in Chapters 1 and 2, there is much uncertainty surrounding any pipeline risk assessment. A portion of that uncertainty comes from the data itself. It might be appropriate to characterize collected data in terms of its quality and age, both of which should influence the evaluator's perception of risk and, hence, the risk model. A 'rate of decay' for information age is discussed in Chapter 2. Adding to the decay aspect, a distinction can also be made regarding the origin and confidence surrounding the collected data. It is entirely appropriate to gather some data as a simple table-top exercise (for example, field personnel indicating on an alignment sheet their knowledge of ROW condition or depth of cover) with field verification to come later. However, it is useful to distinguish this type of assumed information from actual measurements taken in the field. A soil resistivity measured near the pipeline should usually have a greater impact on risk perception than an assumed regional level of soil corrosivity. Increasing uncertainty should be shown as increasing risk, for reasons detailed in earlier chapters. One way to account for variations in data quality is to 'penalize' risk variables that are not derived from direct measurement or observation. This not only shows increasing risk with increasing uncertainty, but also helps to value (show the benefits of) the direct measurements and justify the costs of such activities, which most agree intuitively are a risk mitigation measure. Table 8.1 shows an example of adjustments for data quality. The adjustment factor can then be used along with an age (decay) adjustment as follows:

Variable score x (Quality Adjustment Factor) x (Age Adjustment Factor)

to ensure that less certain information leads to higher risk estimates.

Table 8.1 Sample adjustments for data quality

Data quality        Factor   Description
Measurement         100%     Actual measured value or direct observation
Estimate            80%      Based on knowledge of the variable, nearby readings, etc.; confident of this condition, but not confirmed by actual measurement; value proposed will be correct 99% of the time
Informed guess      60%      Based on some knowledge and expert judgment, but less confident; value proposed will be correct 90% of the time
Worst case default  -        Applied where no reliable info is available

Spatial analyses

The most robust risk assessments will carefully model spill footprints and from those estimate hazard areas and receptor vulnerabilities within those areas. These footprints require sophisticated calculation routines to consider even a portion of the many factors that impact liquid spill migration or vapor cloud dispersion. These factors are discussed in Chapter 7. Establishing a hazard area and then examining that area for receptors requires extra data handling steps. The hazard area will be constantly changing with changing conditions along the pipeline, so the distances from the pipeline used to perform house counts, look for environmental sensitivities, etc., will also be constantly changing, complicating the data collection and formatting efforts. For instance, a liquid pipeline located on steep terrain would prompt an extensive examination of downslope receptors and perhaps disregard of upslope receptors. Modern GIS environments greatly facilitate these spatial analyses, but still require additional data collection, formatting, and modeling efforts. The risk assessor must determine if the increased risk assessment accuracy warrants the additional efforts.

8/182 Data Management and Analyses

V. Segmentation

As detailed in Chapter 2, an underlying principle of most pipeline risk models is that conditions constantly change along the length of the pipelines. A mechanism is required to measure these changes and assess their impact on failure probability and consequence. For practical reasons, lengths of pipe with similar characteristics are grouped so that each length can be assessed and later compared to other lengths. Two options for grouping lengths of pipe with similar characteristics are fixed-length segmentation and dynamic
segmentation. In the first, some predetermined length such as 1 mile or 1000 ft is chosen as the length of pipeline that will be evaluated as a single entity. A new pipeline segment will be created at these lengths regardless of the pipeline characteristics. Under this approach, then, each pipeline segment will usually have non-uniform characteristics. For example, the pipe wall thickness, soil type, depth of cover, and population density might all change within a segment. Because the segment is to be evaluated as a single entity, the non-uniformity must be eliminated. This is done by using the average or worst case condition within the segment. An alternative is dynamic segmentation. This is an efficient way of evaluating risk since it divides the pipeline into segments of similar risk characteristics: a new segment is created when any characteristic changes. Since the risk variables measure unique conditions along the pipeline, they can be visualized as bands of overlapping information. Under dynamic segmentation, a new segment is created every time any condition changes, so each pipeline segment has a set of conditions unique from its neighbors. Section length is entirely dependent on how often the conditions change. The smallest segments are only a few feet in length where one or more variables are changing rapidly. The longest segments are several hundred feet or even miles long where variables are fairly constant.
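Treating the risk variables as bands of overlapping information, dynamic segmentation can be sketched as: collect every station where any band begins or ends, then treat each interval between successive breakpoints as a segment with one uniform condition set. The variable names and station bands below are hypothetical.

```python
def dynamic_segments(bands):
    """Create dynamic segments from overlapping bands of data.

    bands: {variable_name: [(beg_station, end_station, value), ...]}
    A new segment starts wherever ANY variable changes, so every
    resulting segment has one uniform set of conditions.
    Returns [(beg, end, {variable: value}), ...].
    """
    breaks = sorted({s for recs in bands.values()
                       for beg, end, _ in recs for s in (beg, end)})
    segments = []
    for beg, end in zip(breaks, breaks[1:]):
        conditions = {}
        for name, recs in bands.items():
            for b, e, val in recs:
                if b <= beg and end <= e:     # band covers this segment
                    conditions[name] = val
        segments.append((beg, end, conditions))
    return segments

bands = {
    "wall_thickness": [(0, 1000, 0.25), (1000, 3000, 0.312)],
    "soil_type":      [(0, 1800, "clay"), (1800, 3000, "sand")],
}
# Wall thickness changes at 1000, soil at 1800, so three segments result
for seg in dynamic_segments(bands):
    print(seg)
```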
Creating segments

A computer routine can replace a rather tedious manual method of creating segments under a dynamic segmentation strategy. Related issues such as persistence of segments and cumulative risks are also more efficiently handled with software routines. A software program should be assessed for its handling of these aspects. Segmentation issues are fully discussed in Chapter 2.
VI. Scoring

The algorithms, or equations, are the "rules" by which risk scores will be calculated from input data. Various approaches to algorithm scoring are discussed in earlier chapters, and some algorithm examples are shown in Chapters 3 through 7 and also in Appendix E. The algorithm list is often best created and maintained in a central location where relationships between equations can be easily seen and changes can be tracked. The rules must often be examined and adjusted in consideration of other rules. If weightings are adjusted, all weightings must be viewed together. If algorithm changes are made, the central list can be set up to track the evolution of the algorithms over time. Alternate algorithms can be proposed and shown alongside current versions. The algorithms should be reviewed periodically, both as part of a performance-measuring feedback loop and as an opportunity to tune the risk model for new information availability or changes in how information should be used.
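A centrally maintained scoring rule, combined with the quality and age adjustment formula (variable score x quality factor x age factor) from the data-quality discussion, might look like the following sketch. The variable names, weights, and factor values are invented for illustration; they are not the book's actual algorithms.

```python
# Hypothetical central algorithm sketch: weighted variable scores with
# data-quality and age (decay) adjustments applied per the formula
# Variable score x (Quality Adjustment Factor) x (Age Adjustment Factor).
QUALITY = {"measurement": 1.00, "estimate": 0.80,
           "informed guess": 0.60}           # factors per Table 8.1

def adjusted_score(raw, quality, age_factor):
    return raw * QUALITY[quality] * age_factor

def index_score(variables, weights):
    """variables: {name: (raw_score, quality, age_factor)}"""
    return sum(weights[n] * adjusted_score(*v) for n, v in variables.items())

weights = {"depth_of_cover": 0.4, "patrol_frequency": 0.6}   # assumed weights
variables = {
    "depth_of_cover":   (10.0, "measurement", 1.0),   # fresh field reading
    "patrol_frequency": (8.0, "estimate", 0.9),       # older, estimated value
}
print(index_score(variables, weights))   # 0.4*10 + 0.6*(8*0.8*0.9), about 7.456
```

Keeping the weights and factors in named structures like these, rather than buried in formulas, mirrors the central-list approach: alternate weightings can be shown alongside current ones and changes tracked over time.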
Assigning defaults

In some cases, no information about a specific event at a specific point will be available. For example, it is not unusual to have no confirmatory evidence regarding depth of cover in many locations along an older pipeline. This can be seen as an information gap. Prior to calculating risk scores, it is necessary to fill as many information gaps as possible. Otherwise, the final scores will also have gaps that will impact decision making. At every point along the pipeline, each event needs to have a condition assigned. If data are missing, risk calculations cannot be completed unless some value is provided for the missing data. Defaults are the values that are to be assigned in the absence of any other information. There are implications in the choice of default values, and an overall risk assessment default philosophy should be established. Note that some variables cannot reasonably have a default assigned. An example is pipe diameter, for which any kind of default would be problematic. In these cases, the data will be absent and might lead to a non-scoring segment when risk scores are calculated. It is useful to capture and maintain all assigned defaults in one list. Defaults might need to be periodically modified. A central repository of default information makes retrieval, comparison, and maintenance of default assignments easier. Note that assignment of defaults might also be governed by rules. Conditional statements ("if X is true, then Y should be used") are especially useful:

If (land-use type) = "residential high" then (population density) = "high"
Other special equations by which defaults will be assigned may also be desired. These might involve replacing a certain fixed value, converting the data type, special considerations for a date format, or other special assignment.
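A central list of rule-governed defaults like those above might be sketched as follows. The specific variables, fallback values, and rule order are illustrative assumptions, not actual defaults from the book.

```python
# Hypothetical default rules, kept in one central list for easy review.
# Each rule is (variable, condition_test, default_value); the first
# matching rule fires, mirroring "if X is true, then Y should be used".
DEFAULT_RULES = [
    ("population_density", lambda r: r.get("land_use") == "residential high", "high"),
    ("population_density", lambda r: True, "medium"),   # fallback default
    ("depth_of_cover",     lambda r: True, 24),         # assumed worst-case inches
]

def apply_defaults(record):
    """Fill information gaps (None values) using the central rule list.

    Variables with no rule, such as pipe diameter, are deliberately
    left absent and will surface as non-scoring segments.
    """
    for variable, test, value in DEFAULT_RULES:
        if record.get(variable) is None and test(record):
            record[variable] = value
    return record

print(apply_defaults({"land_use": "residential high",
                      "population_density": None, "depth_of_cover": None}))
```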
VII. Quality assurance and quality control

Several opportunities arise to apply quality assurance and quality control (QA/QC) at key points in the risk assessment process. Prior to creating segments, the following checks can be made by using queries against the event data set (or in spreadsheets) as the data are collected:

- Ensure that all IDs are included, to make sure that the entire pipeline is included and that no portion of the system(s) to be evaluated has been unintentionally omitted.
- Ensure that only correct IDs are used; find errors and typos in the ID field.
- Ensure that all records are within the appropriate beginning and ending stations for the system ID; find errors in stationing, sometimes created when converting from field-gathered information.
- Ensure that the sum of all distances (end station - beg station) for each event does not exceed the total length of that ID; the sum might be less than the total length if some conditions are to be later added as default values.
- Ensure that the end station of each record is exactly equal to the beginning station of the next record. This check can also be done during segmentation since data gaps become apparent in that step. However, corrections will generally need to be made to the events tables, so the check might be appropriate here as well.
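The continuity check in the last item (each record's end station equals the next record's beginning station) can be sketched as a simple routine rather than a database query; the (beg, end) record format is an assumption.

```python
def continuity_gaps(records, id_begin, id_end):
    """Find gaps and overlaps in station-continuous event data.

    records: list of (beg_station, end_station) for one system ID.
    Flags any place where a record's end station does not exactly
    equal the next record's begin station, plus missing coverage at
    either end of the ID.
    """
    problems = []
    records = sorted(records)
    if records[0][0] != id_begin:
        problems.append(("missing at start", id_begin, records[0][0]))
    for (b1, e1), (b2, e2) in zip(records, records[1:]):
        if e1 < b2:
            problems.append(("gap", e1, b2))
        elif e1 > b2:
            problems.append(("overlap", b2, e1))
    if records[-1][1] != id_end:
        problems.append(("missing at end", records[-1][1], id_end))
    return problems

print(continuity_gaps([(0, 500), (700, 1000)], 0, 1200))
# [('gap', 500, 700), ('missing at end', 1000, 1200)]
```

Stretches reported as gaps can either be corrected in the events tables or left to be filled by default values, per the default philosophy discussed earlier.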
Computer environments 8/183

Several opportunities for QA/QC also arise after each section has been scored. The following checks can be made by using queries against the database of scores:

- Find places where scores are not being calculated. This will usually be the result of an information gap in some event required by the algorithm. After the default assignment, there should not be any more gaps unless it is an event for which a default cannot logically be assigned (such as "diameter" or "product type"). Common causes of non-scoring segments include misnamed events or conditions, incorrect condition values, and missing default assignments.
- Find places where score limits are being exceeded. This is usually a problem with the algorithm not functioning as intended, especially when more complex "if . . . then" conditional equations are used. Other common causes include date formats not working as intended and changes made to either an algorithm or a condition without corresponding changes made to the other.
- Ensure that scores are calculating properly. This is often best done by setting up queries to show variables, intermediate calculations, and final scores, especially for the more complex scores. Scanning the results of these queries provides a good opportunity to find errors such as incorrect data formats (dates seem to cause issues in many calculations) or point assignments that are not working as intended.

These QA/QC opportunities and others are summarized below. Common input data errors include
1. Use of codes that are not exactly correct, i.e., "high" when "H" is required, or misspelled codes.
2. Wrong station numbers, i.e., a digit left off, such as entering 21997 when 219997 is correct.
3. Conflicting information, i.e., assigning different conditions to the same stretch of pipeline, sometimes caused by overlap of the beginning and ending stations of two entries.

Some QA/QC checks that are useful to perform include the following:
1. Ensure that all pipeline segment identifiers are included in the assessment.
2. Ensure that only listed IDs are included.
3. Find data sets whose cumulative lengths are too long or too short, compared to the true length of an ID.
4. Find individual records within a data set whose beginning station and/or ending station are outside the true beginning and ending points of the ID.
5. Ensure that all codes or conditions used in the data set are included in the codes or conditions list.
6. Ensure that the end station of each record is exactly equal to the beginning station of the next record when data are intended to be continuous.
7. Ensure that correct/consistent ID formats are being used.

Common errors associated with risk score calculations include:
1. Problems with dates. The algorithms are generally set up to accommodate either a day-month-year format or a month-year format or a year-only format, but not more than one of these at a time. The algorithm can be made more accommodating (perhaps at the expense of more processing time) or the input data can be standardized.
2. Missing or incorrect codes. Non-scoring values (nulls) or errors are often generated when input data are missing or incorrect. These create gaps in the final scores.
3. Data gaps. As noted in item 2, these generally represent non-scoring values. Errors are easily traced to the initiating problem by following the calculation path backward. For example, using the algorithms detailed in Chapters 3 through 6, an error in IndexSum means there is an error somewhere in one of the four underlying index calculations (thdpty, corr, design, or incops). That error, in turn, can be traced to an error in some subvariable within that index.
4. Maximum or minimum values exceeded. Maximum and minimum queries or filters can be used to identify variables that are not calculating correctly.
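A sketch of the non-scoring and limit checks described above, assuming scores are held per segment in a simple dictionary; the index names follow the text (thdpty, corr, design, incops), but the score-limit convention is an assumption.

```python
def score_problems(segments, max_score=100.0):
    """Flag segments whose scores are missing (None) or out of range,
    and, for a missing index sum, name the underlying index at fault
    so the error can be traced backward along the calculation path.

    segments: {seg_id: {"thdpty":..., "corr":..., "design":...,
                        "incops":..., "index_sum":...}}  (None = non-scoring)
    """
    indexes = ("thdpty", "corr", "design", "incops")
    findings = {}
    for seg_id, scores in segments.items():
        issues = []
        if scores["index_sum"] is None:
            bad = [i for i in indexes if scores[i] is None]
            issues.append(f"non-scoring; trace back to: {', '.join(bad)}")
        elif not 0 <= scores["index_sum"] <= max_score * len(indexes):
            issues.append("index_sum outside limits; check algorithm")
        if issues:
            findings[seg_id] = issues
    return findings

segs = {"A-1": {"thdpty": 55, "corr": None, "design": 70,
                "incops": 62, "index_sum": None}}
print(score_problems(segs))   # {'A-1': ['non-scoring; trace back to: corr']}
```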
VIII. Computer environments

The computer is obviously an indispensable tool in a data-intensive process such as pipeline risk management. Because a great deal of information can be gathered for each pipeline section evaluated, it does not take many evaluations before the total amount of data becomes significant. The computer is the most logical way to store and, more importantly, organize and retrieve the data. The potential for errors in number handling is reduced if the computer performs repetitive actions such as the calculations to arrive at risk values.
Options

Many different software environments could be used to handle the initial data input and calculations. As the database grows, the need for programs or routines that can quickly and easily (from the user standpoint) search a database and display the results of the search becomes more important. More sophisticated risk assessment models will require more robust software applications. A model that requires spatial analyses of information, perhaps to determine spill migration or hazard zone perimeters, requires special software capabilities. Additional desired capabilities might include automatic segmentation, assignment of zones of influence, or calculation of intermediate pressures based on source strength, location, and flow rate.
Use computers wisely

An interesting nuance to computer usage is that too much reliance on computers is potentially more dangerous than too little. Too much reliance can degrade knowledge and cause insight to be obscured and even convoluted: the acceptance of 'black box' results with little application of engineering judgment. Underutilization of computers might result in inefficiencies, an undesirable but not critical outcome. Regardless of potential misuse, however, computers can obviously greatly
increase the strength of the risk assessment process, and no modern risk management process is complete without extensive use of them. The modern software environment is such that information is usually easily moved between various programs. In the early stages of a project, the computer should serve chiefly as a data repository. Then, in subsequent stages, it should house the algorithms: how the raw information, such as wall thickness, population density, soil type, etc., is turned into risk information. In later stages of the project, data analysis and display routines should be available. Finally, computer routines to ensure ease and consistency of data entry, model tweaking, and generation of required output should be available. Software use in risk modeling should always follow program development, not lead it. Software should be viewed as a tool, and different tools are appropriate for different phases of the project.

Early stage. Use pencil and paper or simple graphics software to sketch preliminary designs of the risk assessment system. Also use project management tools, if desired, to plan the project.
Intermediate stages. Use software environments that can store, sort, and filter moderate amounts of data and generate new values from arithmetic and logical (if . . . then . . . else) combinations of input data. The simplest choices include modern spreadsheets and desktop databases.

Later stages. Provide for larger quantity data entry, manipulation, query, display, etc., in a long-term, secure, and user-friendly environment. If spatial linking of information is desired, consider migrating to a GIS platform. If multiuser access is desired, consider robust database environments. At this stage, specialized software acquisition or development may be beneficial.
A decision matrix can be set up to help evaluate software options. An example (loosely based on an actual project evaluation) is shown in Table 8.2. The costs shown in Table 8.2 will not be relevant for many applications; they are for illustration only. Many variables will impact the costs of any alternative. These options should be better defined and fully developed with software developers, programmers, IT resources, GIS providers, and other available expertise.
Table 8.2 Decision matrix of software options

Option 1: Use spreadsheet tools only.
  Advantages: Inexpensive; completely flexible and user customizable; data easily (but manually) moved to other applications; in-house maintenance is possible.
  Disadvantages: Requires a knowledgeable user; relatively fragile environment; some maintenance required; lacks some features of a modern database environment.
  Estimated costs: Minimal.

Option 2: Enhance/upgrade spreadsheet tools to increase security and user friendliness.
  Advantages: As above, plus makes information and capabilities accessible to more users; automates some data handling.
  Disadvantages: Some costs; performance limitations in using spreadsheets.
  Estimated costs: 20-80 person-hours (~$2000-$7000).

Option 3: Migrate model to custom desktop database program with user-friendly front end, linked to GIS environment.
  Advantages: Increased performance; more robust data handling capabilities; more secure and user friendly; network and GIS compatible.
  Disadvantages: More costly; might have to rely on outside programming expertise for changes.
  Estimated costs: 100-200 person-hours for stand-alone program, plus some hours to build links to GIS (~$10,000-$20,000).

Option 4: Purchase commercial software, option A.
  Advantages: Existing customer base; vendor support of product; strong graphics modules included; uses common database engine.
  Disadvantages: Costly; reduced flexibility and capabilities; some data conversion needed; limited data analysis capabilities; outside programming support needed for modifications.
  Estimated costs: ~$50,000 per user plus maintenance fees, plus 80-200 hours of data conversion and entry effort.

Option 5: Purchase commercial software, option B.
  Advantages: Inexpensive; directly compatible with existing data; secure and user friendly; strong data analysis routines; uses common Microsoft Access database engine.
  Disadvantages: Reduced flexibility; outside programming support needed for modifications.
  Estimated costs: <$10,000 per user plus some data formatting and entry effort.

Option 6: GIS module programmed directly into GIS software.
  Advantages: Seamless integration with GIS environment is possible.
  Disadvantages: Possible high costs; possibly less common software; outside programming support may be needed for modifications.
  Estimated costs: 100-200 person-hours for stand-alone program, plus some hours to build links to GIS (~$10,000-$20,000).

Option 7: Modify/upgrade commercial software to link directly to existing spreadsheet tools.
  Advantages: Keeps flexible spreadsheet environment; adds power of the existing application and desktop database environment.
  Disadvantages: Custom application; outside programming support needed for modifications.
  Estimated costs: $2,000 plus 50-100 hours (~$5,000-$10,000); costs assume (and do not include original cost of) an existing software program.
Applications of risk management

A critical role of risk assessment is, of course, supporting risk management. Some potential user applications are discussed in the following subsections.
Application 1: risk awareness

This is most likely the driving force behind performing risk evaluations on a pipeline system. Owners and/or operators want to know how portions of their systems compare from a risk standpoint. This comparison is perhaps best presented in the form of a rank-ordered list. The rating or ranking list should include some sort of reference point, a baseline or standard to be used for comparisons. The reference point, or standard, gives a sense of scale to the rank ordering of the company's pipeline sections. The standards may be based on:

- Governing regulations, either from local government agencies or from company policies. Here, the standard is the risk score of a hypothetical pipeline in some common environment that exactly meets minimum requirements of the regulations.
- A pipeline or sections that are intuitively thought to be safer than the other sections.
- A fictitious pipeline section: perhaps a low-pressure nitrogen or water pipeline in an uninhabited area for a low-risk score, or a high-pressure hydrogen cyanide (very flammable and toxic) pipeline through a large metropolitan area for a high-risk score.

By including a standard, the user sees not only a rank-ordered list of his facilities, but also how the whole list compares to a reference point that he can understand. Ideally, the software program to support Application 1 will run something like this: Data are input for the standard and for each section evaluated. The computer program calculates numerical values for each index, the leak impact factor (product hazards and spill scores), and the final risk rating for every section. Any of these calculations may later be required for detailed comparisons to standards or to other sections evaluated. Consequently, all data and intermediate calculations must be preserved and available to search routines. The program will likely be called on to produce displays of pipeline sections in rank order.
Sections may be grouped by product handled, by geographic area, by index, by risk rating, etc.
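A minimal sketch of Application 1's rank-ordered display with an embedded standard follows. The section names and scores are fabricated, and the convention that a higher score means higher risk is an assumption made for the illustration.

```python
# Hypothetical section scores, with a reference standard included in the
# list so the ranking carries a sense of scale (higher score = higher risk).
sections = {
    "TX-014": 182, "TX-007": 154, "OK-031": 201, "LA-002": 146,
    "STANDARD (meets minimum regulations)": 160,   # reference point
}

ranked = sorted(sections.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank}. {name:40s} {score}")
```

Sections falling above the standard in this display are riskier than a line that just meets minimum regulatory requirements, which is exactly the kind of at-a-glance context the reference point is meant to supply.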
Examples of risk data analyses

There are countless ways in which the risk picture may need to be presented. Four examples of common applications are:

1. Pipeline company management wants to see the 20 pipeline sections that present the most risk to the community. A list is generated, ranking all sections by their final relative risk number. A bar chart provides a graphic display of the 20 sections and their relative magnitude to each other.
2. Pipeline company management wants to see the 20 highest risk pipeline sections in natural gas service in the state of
Oklahoma. A rank-ordered list for natural gas lines in Oklahoma is generated.
3. The corrosion control department wants to see a rank ordering of all sections, ranked by corrosion index, lowest to highest. All pipeline sections are ranked strictly by corrosion index score.
4. A pipeline company wants to compare risks for LPG pipelines in Region 1 with crude oil pipelines in Region 2. Distributions of pertinent risk scores are generated. From the distributions, analysts see the relative average risks, the variability in risks, and the relative highest risks between the two pipeline types.
Application 2: compliance

Another anticipated application of this program is a comparison to determine compliance with local regulations or with company policy. In this case, a standard is developed based on the company's interpretation of government regulations and on the company policy for the operation of pipelines (if that differs from regulatory requirements). The computer program will most likely be called on to search the database for instances of noncompliance with the standard(s). To highlight these instances of noncompliance, the program must be able to make correct comparisons between standards and sections evaluated. Liquid lines must be compared with liquid regulations; Texas pipelines must be compared with Texas regulations; etc. If the governing policies are performance based (". . . corrosion must be prevented . . .," ". . . all design loadings anticipated and allowed for . . .," etc.), the standard may change with differing pipeline environments. It is a useful technique to predefine the pipeline company's interpretations of regulatory requirements and company policy. These definitions will be the prevention items in the risk evaluation. They can be used to have the computer program automatically create standards for each section based on that specific section's characteristics. Using the distinction between attributes and preventions, a floating standard can be developed. In the floating standard, the standard changes with changing attributes. The program is designed so that a pipeline section's attributes are identified and then preventions are assigned to those attributes based on company policies. The computer can thus generate standards based on the attributes of the section and the level of preventions required according to company interpretations. The standard changes, or floats, with changes in attributes or company policy.
Example 8.1: Compliance

A company has decided that an appropriate level of public education is mailouts, advertisements, and speaking engagements for urban areas, and mailouts with annual landowner/tenant visits for rural areas. With this definition, the computer program can assign a different level of preventions for the urban areas compared with the rural areas. The program generates these standards by simply identifying the population density value and assigning the points accordingly. By having the appropriate level of preventions pre-assigned in the computer, consistency is ensured. When policy is
changed, the standards can be easily updated. All comparisons between actual pipeline sections and standards will be instantly updated and, hence, based on the most current company policy. It is reasonable to assume that whenever an instance of noncompliance is found, a detailed explanation will be required. The program can be designed to retrieve the whole record and highlight the specific item(s) that caused the noncompliance. As policies and regulations change, it will be necessary to change the standards. Routines that allow easy changes will be useful.
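The floating standard of Application 2 and Example 8.1 might be sketched as follows; the policy table, the urban/rural test, and the record fields are all assumptions made for the illustration.

```python
# Hypothetical floating standard: the preventions required by company
# policy are assigned from each section's attributes, so the standard
# changes ("floats") as the attributes change.  Policy follows Example 8.1.
POLICY = {
    "urban": {"mailouts", "advertisements", "speaking engagements"},
    "rural": {"mailouts", "annual landowner/tenant visits"},
}

def standard_for(section):
    """Generate this section's standard from its attributes."""
    area = "urban" if section["population_density"] == "high" else "rural"
    return POLICY[area]

def noncompliance(section):
    """Return the required preventions this section is missing."""
    return standard_for(section) - set(section["preventions"])

section = {"population_density": "low",
           "preventions": ["mailouts"]}          # rural line, visits missed
print(noncompliance(section))   # {'annual landowner/tenant visits'}
```

When the policy table changes, every subsequent comparison uses the new standard automatically, which is the consistency benefit the text describes.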
Application 3: what-if trials

A useful feature in the computer program will undoubtedly be the ability to perform "what-if" trials. Here, the user can change items within each index to see the effect on the risk picture. For example, if air patrol frequency is increased, how much risk reduction is obtained? What if an internal inspection device is run in this section? If we change our public education program to include door-to-door visits, how does that influence the risk of third-party damage? It will be important to preserve the original data during the what-if trial. The trial will most likely need to be done outside the current database. A secondary database of proposed actions and the resulting risk ratings could be built and saved using the what-if trials. This second database might be seen as a target or goal database, and it could be used for planning purposes. The program should allow specific records to be retrieved as well as general groups of records. The whole record or group of records will need to be easily modified while preserving the original data. Comparisons or before-and-after studies will probably be desirable. Graphic displays will enhance these comparisons.

Application 4: spending prioritization

As an offshoot of the ranking list for relative risk assessment, it will most likely be desirable to create rank-order lists for prioritizing spending on pipeline maintenance and upgrades. The list of lowest scored sections from a corrosion risk standpoint should receive the largest share of the corrosion control budget, for instance. The spending priority lists will most likely be driven by the rank-ordered relative risk lists, but there may be a need for some flexibility. Spending priority lists for only natural gas pipelines may be needed, for example. The program could allow for the rearrangement of records to facilitate this. A special column, or field in the database, may be added to tabulate the projected and actual costs associated with each upgrade. Costs associated with a certain level of maintenance (prevention) activities could also be placed into this field. This will help establish the values of certain activities to further assist in decision making. The user may want to analyze spending for projects on specific pipeline sections. Alternatively, she may wish to perform cost/benefit analyses on the effects of certain programs across the whole pipeline system. For instance, if the third-party damage index is to be improved, the user may study the effects of increasing the patrol frequency across the whole system. The costs of the increased patrol could be weighed against the aggregate risk reduction, perhaps expressed as a percentage reduction in the sum or the average of all risk values, but this is better evaluated using the cumulative risk techniques discussed in Chapter 15. This could then be judged against the effects of spending the same amount of money on, say, close interval surveys or new operator training programs. The cost/benefit analyses will not initially produce absolute values because this risk assessment program yields only relative answers. For a given pipeline system, relative answers are usually adequate. The program should help the user decide where his dollar spent has the greatest impact on risk reduction. Where absolute levels of spending are to be calculated, techniques described in Chapters 14 and 15 will be needed.

Application 5: detailed comparisons

In some of the above applications, and as a stand-alone application, comparisons among records will almost always be requested. A user may wish to make a detailed comparison between a standard and a specific record. She may wish to see all risk variables that exceed the standard or all variables that are less than their corresponding standard value. Groups of records may also need to be compared. For example, the threat of damaging land movements for all Texas pipelines could be compared with all Louisiana pipelines, or the internal corrosion potential of natural gas pipelines could be compared with that for crude oil pipelines. Graphics would enhance the presentation of the comparisons.
Additional applications

Embedded or implied in some of the above applications are the following tasks, which may also need to be supported by risk management software:

- Due diligence: investigation and analysis of assets that might be acquired.
- Project approvals: as part of a regulatory or company-internal process, an examination of the levels of risk related to a proposed project and the judgment of the acceptability of those risks.
- Alternative route analyses: a comparison, on a risk basis, of alternative routes of a proposed pipeline.
- Budget setting: a determination of the value and optimum timing of a potential project or group of projects from a risk perspective.
- Risk communications: presenting risk results to a number of different audiences with different interests and levels of technical ability.
Properties of the software program

The risk assessment processes are often very dynamic. They must continuously respond to changing information if they are to play a significant role in all planning and decision making. The degree of use of this risk assessment is often directly related to the user friendliness and robustness of the software that supports it. Properties of a complete risk assessment model are discussed in Chapter 2, along with some simple tests that can be used as measures of completeness and utility. Those same tests can be used to assess the environment that houses the risk model.
If suitable off-the-shelf software is not available, custom software development is often an attractive alternative. Software design is a complex process, and many reference books discussing the issues are available. It is beyond the scope of this book to delve into the design process itself, but the following paragraphs offer some ideas to the designer or to the potential purchaser of software. Before risk data are collected or compiled, a computer programmer could be participating in the creation of design specifications. He must be given a good understanding of how the program is going to be used and by whom; software should always be designed with the user in mind. Programs often get used in ways slightly different from the original intentions. The most powerful software has successfully anticipated the user's needs, even if the user himself has not anticipated every need. Data input and the associated calculations are usually straightforward. Database searches, comparisons, and displays are highly use specific. The design process will benefit from an investment in planning and anticipating user needs. A complete risk management software package will be called on to support several general functions. These functions can be identified and supported in various ways. The following is an example of one possible grouping of functionality:

1. Risk algorithms
2. Preparation of pipeline data
3. Risk calculations
   a. Decide on and apply a segmenting scheme
   b. Run the risk assessment model against the data to calculate the risks for each segment
4. Risk management.
Risk management is supported by generating segment rank-order lists and running "what-if" scenarios to generate work plans. Many specific capabilities and characteristics of the best software environment can be listed. A restricted vocabulary will normally be useful to control data input. Error-checking routines at various points in the process will probably be desirable. There will most likely be several ways in which the data will have to be sorted and displayed, so reporting and charting capabilities will probably be desired. This is again dependent on the intended use. Data entry and extraction should be simple: required keystrokes should be minimized, use of menus and selection tools optimized, and the need for redundant operations eliminated. Some other important software capabilities are discussed below.
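The restricted-vocabulary and error-checking ideas above can be sketched in a few lines. The field names and allowed values here are hypothetical illustrations, not part of any particular risk model:

```python
# Hedged sketch of restricted-vocabulary input validation.
# The fields and allowed values are invented examples only.
ALLOWED_VALUES = {
    "product": {"natural gas", "crude oil", "gasoline"},
    "coating_condition": {"good", "fair", "poor"},
}

def validate_entry(field, value):
    """Reject any entry that is not in the controlled vocabulary."""
    allowed = ALLOWED_VALUES.get(field)
    if allowed is None:
        raise KeyError(f"unknown field: {field}")
    if value not in allowed:
        raise ValueError(
            f"{value!r} is not a valid {field}; choose from {sorted(allowed)}")
    return value
```

In practice the allowed values would be presented as a menu or selection list, so the user can only pick entries that will match the database exactly.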
Dynamic environment

Because the risk assessment tool is designed to be dynamic, changing with changing conditions and new information, the software program must easily facilitate these changes. New regulations may require corresponding changes to the model. Maintenance and upgrade activities will be continuously generating new data. Changes in operating philosophies or the use of new techniques will affect risk variables. New pipeline construction will require that new records be built. Increases in population densities and other receptors will affect consequence potential. The relative weighting of index variables might also be subject to change after regular reviews.
The ability to quickly and easily make changes will be a critical characteristic of the tool. As soon as updates are no longer being made, the tool loses its usefulness. For instance, suppose new data are received concerning the condition of coating for a section of a pipeline. The user should be able to input the data in one place and easily mark all records that are to be adjusted with the new information. With only one or two keystrokes, the marked records should be updated and recalculated. The date and author of the change should be noted somewhere in the database for documentation purposes. As noted in Chapter 2 and also in this chapter, segmentation strategies and issues can become challenging and software assistance is often essential. Issues include initial segmentation options, persistence of segments, calculating cumulative (length-sensitive) risks and risk values, and tracking risk performance in a dynamic segmentation environment.
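The one-place update with an audit trail might look like the following sketch. The record structure, field names, and function are hypothetical, shown only to illustrate stamping the date and author of each change:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SegmentRecord:
    """Hypothetical pipeline segment record with a change history."""
    segment_id: str
    coating_score: float
    history: list = field(default_factory=list)  # (date, author, old, new)

def apply_coating_update(records, segment_ids, new_score, author):
    """Apply new coating data to all marked segments in one operation,
    recording the date and author of the change for documentation."""
    for rec in records:
        if rec.segment_id in segment_ids:
            rec.history.append(
                (date.today().isoformat(), author,
                 rec.coating_score, new_score))
            rec.coating_score = new_score
```

After such an update, the affected segments would be recalculated; the history list provides the documentation trail the text describes.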
Searches

In most applications, it will be necessary to find specific records or groups of records. Most database routines make this easy. Normally the user specifies the characteristics of the record or records she is seeking. These characteristics are the search parameters the computer will use to find the record(s) of interest. User choices are made within fields or categories of the data. For instance, some fields that will be frequently used in database searches include:

Product type
Geographical area
Line size
Leak impact factor
Index values

When the user performs searches, he chooses specifics within each field. It is important to show what the possible choices are in each non-numeric field. The choices must usually be exact matches with the database entries. Menus and other selection tools are useful here. The user may also wish to do specific searches for a single item within an index, such as

Find (depth of cover scores) > 1.6
or

Find (public education programs) = 15 pts
It is useful if the user can specify ranges when she is searching for numerical values, for example, (hydrotest scores) from 5 to 15 points or (hydrotest scores) < 20 pts. Searches can become complex:

Find all records where (product) = "natural gas" AND (location) = "South Texas" AND [(pipe diameter) > 4 in. AND < 12 in. OR (construct date) < 1970] AND (corrosion index) < 50.
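A compound search like the one above can be expressed as a simple filter. The record keys used here are hypothetical stand-ins for database fields:

```python
def matches(rec):
    """Records where product = natural gas AND location = South Texas
    AND (diameter between 4 and 12 in. OR built before 1970)
    AND corrosion index < 50."""
    return (rec["product"] == "natural gas"
            and rec["location"] == "South Texas"
            and (4 < rec["diameter_in"] < 12 or rec["construct_year"] < 1970)
            and rec["corrosion_index"] < 50)

# Two invented sample records; only the first satisfies every condition.
records = [
    {"product": "natural gas", "location": "South Texas",
     "diameter_in": 6, "construct_year": 1985, "corrosion_index": 40},
    {"product": "natural gas", "location": "South Texas",
     "diameter_in": 16, "construct_year": 1985, "corrosion_index": 40},
]
hits = [r for r in records if matches(r)]
```

A real implementation would build such a filter from the user's menu selections rather than hard-coding it, but the Boolean structure is the same.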
8/188 Data Management and Analyses

Ideally, the user would be able to perform searches by defining search parameters in general fields, but still have the option of defining specific items. It would be cumbersome to prompt the user to specify a search criterion in every field prior to a search. He should be able to quickly bypass fields in which he is not interested. An acceptable solution would be to have more than one level of fields. An upper, general level would prompt the user to choose one or more search parameters, perhaps from the example list above. He may also then choose the next level of fields if he wishes to specify more detailed parameters.
Tracking

Users may want the program to be designed so that it can automatically track certain items. Overall changes in the risk picture, changes in indexes, or changes in the scoring of specific items may be of interest. Tracking of risk results over time shows deterioration or progress toward goals. Following and quantifying risk changes over time has special challenges in a dynamic segmentation environment. This is discussed in Chapters 2 and 15.
Graphics

Pictures reveal things about the data that may otherwise go unnoticed. Bar graphs, histograms, pie charts, correlations, and run charts illustrate and compare the data in different ways. Routines should be built to automatically produce these pictures. Graphics routines can also put information in geographically referenced format, such as a map overlay showing risk values or hazard zones in relation to streets, water bodies, populated areas, etc. Graphics are very powerful tools; they can and should be used for things like data analysis (trends, histograms, frequency distributions, etc.) and for presentations.

A distinction should be made between analytical graphics and presentation graphics. The former denotes a primary risk management tool, while the latter denotes a communication tool. Presentation graphics can and should be very impressive, incorporating map overlays, color-coded risk values, spill dispersion plumes spreading across the topography, colorful charts showing risk variables along the ROW, etc. These are effective communication tools but not normally effective analysis or management tools. It is usually impossible to manage risks from presentation graphics. A pipeline is a long, linear facility that cannot be shown with any resolution on a single picture. To manage risks, the user must be able to efficiently sort, filter, query, correlate, prioritize, and drill into the often enormous amount of data and risk results. That cannot realistically be done in a presentation environment where information is either very high level or spread across many drawing pages or many screen views. In simplistic terms, capabilities that involve charting and comparing data and results will be analysis tools. Capabilities that involve maps and alignment-sheet-style drawings will be presentation tools. Note that presentation tools often enhance the ability to investigate, research, and validate information. This is part of their role as communication tools.
The analysis tools will normally be used first in risk management. They will identify areas of special interest. Their use will lead to the subsequent use of the presentation tools to better assess or communicate the specific areas of interest identified. In evaluating or designing graphics capabilities in a software environment, the relative value of each type of graphics tool should be established. The inexperienced risk manager will be very attracted to presentation graphics and will be tempted to direct a lot of resources toward those. When this is done at the expense of analytical tools, the risk effort suffers.
Comparisons

Search capabilities (as previously described) facilitate comparisons by grouping records that support meaningful analysis. For example, when investigating internal corrosion, it is probably useful to examine records with similar pipeline products. In examining consequence potential, it might be useful to group records with similar receptor types. Comparisons between groups of records may require the program to calculate averages, sums, or standard deviations for records obtained by searches. Detailed comparisons, side-by-side comparisons of each risk variable or even of all underlying data, might also be needed. The program should be able to display two or more records or groups of records for direct comparison purposes. The program may be designed to highlight differences between records of certain magnitudes, for instance, highlighting a risk variable when it differs by more than 10% from some corresponding "standard" value. Records being compared will need to be accessible to the graphics routines, since a graph is often the most powerful method of illustrating the comparisons. A distribution of risk scores tells more about the nature of the risk of those pipeline segments than any one or even two statistics. Correlations, both graphic and quantitative, will be useful.
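The 10% highlighting rule described above can be sketched as follows; the function and variable names are hypothetical:

```python
def flag_deviations(record, standard, tolerance=0.10):
    """Return the variables in a record whose values differ from the
    corresponding 'standard' values by more than the given fraction
    (10% by default), mapped to (record value, standard value)."""
    flags = {}
    for var, std_val in standard.items():
        if var in record and std_val:
            if abs(record[var] - std_val) / abs(std_val) > tolerance:
                flags[var] = (record[var], std_val)
    return flags
```

The software would apply such a check across a group of records and visually highlight the flagged variables for side-by-side review.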
Accessibility and protection

The risk model and/or its results may need to be accessed by multiple users in different locations. Network or Internet deployment options are often a part of risk management software design. The database should be protected from tampering. Access to the data can generally be given to all potential users, while withholding change privileges. Because all users will normally be encouraged to understand and use the program, they must be allowed to manipulate data, but this should probably be done exclusive of the main database. An individual or department can be responsible for the main database. Changes to this main database should only be made by authorized personnel, perhaps through some type of formal change-order system. Modern software has many protection features available, requiring certain authorization privileges before certain operations can be completed.
Statistics

The ability to generate the general statistics discussed on pages 189-192 should be a part of the software features. Note that most risk management decision making will be supported by data analysis, usually involving statistical tools, rather than by graphical tools.
Documentation

If a commercial risk model is purchased, it is imperative that the full explanation of the risk model be obtained. Consistent with all engineering practice, the user will be responsible for the results of the risk assessment and should understand and agree with all underlying assumptions, calculations, and protocols employed. This book may provide some of the background documentation necessary for a software program that incorporates a model similar to the one described here. It contains explanations as to why and how certain variables are given more points than others and why certain variables are considered at all. Where the book may provide the rationale behind the risk assessment, the software documentation must additionally note the workings of all routines, the structure of the data, and all aspects of the program. A data dictionary is normally included in the software documentation.
IX. Data analysis

An earlier chapter made a connection between the quality process (total quality management, continuous improvement, etc.) and risk management. In striving to truly understand work processes, measurement becomes increasingly important. Once measurement is done, analysis of the resulting data is the next step. Here again, the connection between quality and risk is useful. Quality processes provide guidance on data analysis. This section presents some straightforward techniques to assist in interpreting and responding to the information that is contained in the risk assessment data.

In using any risk assessment technique, we must recognize that knowledge is incomplete. This was addressed in Chapter 1 in a discussion of rare-occurrence events and predictions of future events using historical data. Risk weightings, interactions, consequences, and scores are by necessity based on assumptions. Ideally, the assumptions are supported by sound engineering judgment and hundreds of person-years of pipeline experience. Yet in the final analysis, high levels of uncertainty will be present. Uncertainty is present to some degree in any measurement. Chapter 1 provides some guidance in minimizing the measurement inconsistencies. Recognizing and compensating for the uncertainty is critical in proper data analysis. The data set to be analyzed will normally represent only a small sample of the whole "population" of data in which we are really interested. If we think of the population of data as all risk scores, past, present, and future, then the data sample to be analyzed can be seen as a "snapshot." This snapshot is to be used to predict future occurrences and make resource allocation decisions accordingly. The objective of data analysis is to obtain and communicate information about the risk of a given pipeline. A certain disservice is done when a single risk score is offered as the answer.
A risk score is meaningful only in relation to other risk scores or to some correlated absolute risk value. Even if scores are closely correlated to historical accident data, the number only represents one possibility in the context of all other numbers representing slightly different conditions. This necessitates the use of multiple values to really understand the risk picture. The application of some simple graphical and statistical techniques changes columns and rows of numbers into trends, central tendencies, and action/decision points. More information is extracted from numbers by proper data analysis, and the common mistake of "imagining information when none exists" is avoided. Although very sophisticated analysis techniques are certainly available, the reader should consider the costs of such techniques, their applicability to this type of data, and the incremental benefit (if any) from their use. As with all aspects of risk management, the benefits of the data analysis must outweigh the costs of the analysis.

When presented with almost any set of numbers, the logical first step is to make a "picture" of the numbers. It is sometimes wise to do this even before summary statistics (average, standard deviation, etc.) are calculated. A single statistic, such as the average, is rarely enough to draw meaningful conclusions about a data set. At a minimum, a calculated measure of central tendency and a measure of variation are both required. On the other hand, a chart or graph can at a glance give the viewer a feel for how the numbers are "behaving." The use of graphs and charts to better understand data sets is discussed in a following section. To facilitate the discussion of graphs and statistics, a few simple statistical measures will be reviewed. To help analyze the data, two types of measurements will be of most use: measures of central tendency and measures of variation.
Measures of central tendency

This class of measurements tells us where the "center of the data" lies. The two most common measures are the average (or arithmetic mean, or simply mean) and the median. These are often confused. The average is the sum of all the values divided by the number of values in the data set. The mean is often used interchangeably with the average, but is better reserved for use when the entire population is being modeled. That is, the average is a calculated value from an actual data set, while the mean is the average for the entire population of data. Because we will rarely have perfect knowledge of a population, the population mean is usually estimated from the average of the sample data. There is a useful rule of thumb regarding the average and a histogram (histograms are discussed in a following section): the average will always be the balance point of a histogram. That is, if the x axis were a board and the frequency bars were stacks of bricks on the board, the point at which the board would balance horizontally is the average. The application of this relationship is discussed later.

The second common measure of central tendency, the median, is often used in data such as test scores, house prices, and salaries. The median yields important information, especially when used with the average. The median is the point at which there are just as many values above as below. Unlike the average, the median is insensitive to extreme values, either very high or very low numbers. The average of a data set can be dramatically affected by one or two values being very high or very low; the median will not be affected. A third, less commonly used measure of central tendency is the mode. The mode is simply the most frequently occurring value. From a practical viewpoint, the mode is often the best predictor of the value that may occur next. An important concept for beginners to remember is that these three values are not necessarily the same.
In a normal or bell-shaped distribution, possibly the most commonly seen distribution, they are all the same, but this is not the case for other common distributions. If all three are known, then the data set is already more interpretable than if only one or two are known.
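As a quick illustration, the three measures can be computed with Python's standard statistics module; the score set below is a small hypothetical example:

```python
import statistics

scores = [70, 85, 85, 90, 120]      # hypothetical section risk scores
avg = statistics.mean(scores)        # 90, pulled upward by the extreme 120
med = statistics.median(scores)      # 85, insensitive to the extreme value
mod = statistics.mode(scores)        # 85, the most frequently occurring score
```

Note how the single value of 120 raises the average above the median and mode, exactly the sensitivity to extremes described above.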
Measures of variation

Also called measures of dispersion, this class of measurements tells us how the data organize themselves in relation to a central point. Do they tend to clump together near a point of central tendency? Or do they spread uniformly in either direction from the central point? The simplest method to define variation is with a calculation of the range. The range is the difference between the largest and smallest values of the data set. Used extensively in the 1920s (calculations being done by hand) as an easy approximation for variation, the range is still widely used in creating statistical control charts. Another common measure is the standard deviation. This is a property of the data set that indicates, on average, how far away each data value is from the average of the data. Some subtleties are involved in standard deviation calculations, and some confusion is seen in the application of formulas to calculate standard deviations for data samples or estimate standard deviations for data populations. For the purposes of this text, it is important for the reader merely to understand the underlying concept of standard deviation. Study Figure 8.1, in which each dot represents a data value and the solid horizontal line represents the average of all of the data values. If the distances from each dot to the average line are measured, and these distances are then averaged, the result is the standard deviation: the average distance of the data points from the average (centerline) of the data set. Therefore, a standard deviation of 2.8 means that, on average, the data fall 2.8 units away from the average line. A higher standard deviation means that the data are more scattered, farther away from the center (average) line. A lower standard deviation would be indicated by data values "hugging" the center (average) line. The standard deviation is considered to be a more robust measure of dispersion than the range. This is because, in the range calculation, only two data points are used: the high and the low. No indication is given as to what is happening to the other points (although we know that they lie between the high and the low). The standard deviation, on the other hand, uses information from every data point in measuring the amount of variation in the data.

Figure 8.1 Concept of standard deviation: data points plotted around the average of the data points, with the distance from each data point to the average shown.

With calculated values indicating central tendency and variation, the data set is much more interpretable. These still do not, however, paint a complete picture of the data. For example, data symmetry is not considered. One can envision data sets with identical measures of central tendency and variation, but quite different shapes. While calculations for shape parameters such as skewness and kurtosis can be performed to better define aspects of the data set's shape, there is really no substitute for a picture of the data.
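As a sketch, the range and standard deviation can be computed with the standard statistics module. Note that the "average distance from the average" described above corresponds literally to the mean absolute deviation; the formal standard deviation averages squared distances before taking a square root, so the two differ slightly:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]      # hypothetical data values
rng = max(data) - min(data)           # range: uses only the high and low points
sd = statistics.pstdev(data)          # population standard deviation
avg = statistics.mean(data)
mad = statistics.mean(abs(x - avg) for x in data)  # mean absolute deviation
```

For this data set the range is 7, the standard deviation is 2.0, and the mean absolute deviation is 1.5; both dispersion measures use every data point, while the range uses only two.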
Graphs and charts

This section will highlight some common types of graphs and charts that help extract information from data sets. Experience will show what manner of picture is ultimately the most useful for a particular data set, but a good place to start is almost always the histogram.
Histograms

In the absence of other indications, the recommendation is to first create a histogram of the data. A histogram is a graph of the number of times certain values appear. It is often used as a surrogate for a frequency distribution. A histogram uses data intervals (called bins), usually on the horizontal x axis, and the number of data occurrences, usually on the vertical y axis (see Figure 8.2). By such an arrangement, the histogram shows the quantity of data contained in each bin. The supposition is that future data will distribute themselves in similar patterns.

Figure 8.2 Histogram of risk scores.

The histogram provides insight into the shape of the frequency distribution. The frequency distribution is the idealized histogram of the entire population of data, where, again, number of occurrences is replaced by frequency of occurrence, usually on the vertical axis. The frequency-versus-value relationship is shown as a single line, rather than bars. This represents the distribution of the entire population of data. The most common shape of frequency distributions is the normal or bell curve distribution (Figure 8.3). Many, many naturally occurring data sets form a normal distribution. If a graph is made of the weights of apples harvested from an orchard, the weights would be normally distributed. A graph of the heights of the apple trees would show a bell curve. Test scores and measures of human intelligence are usually normally distributed, as are vehicle speeds along an interstate, measurements of physical properties (temperature, weight, etc.), and so on. Much of the pipeline risk assessment data should be normally distributed. When a data set appears to be normally distributed, several things can be immediately and fairly reliably assumed about the data:

The data are symmetrical. There should always be about the same number of values above the average point as below that point.
The average point is equal to both the median and the mode. This means that the average represents a value that should occur more often than any other value. Values closer to the average occur more frequently; those farther away, less frequently.
Approximately 68% of the data will fall within one standard deviation either side of the average.
Approximately 99.7% of the data will fall within three standard deviations either side of the average.

Other possible shapes commonly seen with risk-related data include the uniform, exponential, and Poisson distributions. In the uniform (or rectangular) distribution (see Figure 8.3), the following can be assumed:

The data set is symmetrical. The average point is also the median point, but there is not a mode.
All values have an equal chance of occurring.

Exponential and Poisson distributions (see Figure 8.3), often seen in rare events, can have the following characteristics:

The data are nonsymmetrical. Data values below the average are more likely than those above the average. Often zero is the most likely value in this distribution.
The average, median, and mode are not the same. The relationship between these values provides information relating to the data.
Bimodal distribution (or trimodal, etc.)
When the histogram shows two or more peaks (see Figure 8.3), the data set has multiple modes. This is usually caused by two or more distinct populations in the data set, each corresponding to one of the peaks. For each peak there is a variable (or variables) unique to some of the data that causes that data to shift from the general distribution. A better analysis is probably done by separating the populations. In the case of the risk data, the first place to look for a variable causing the shift is in the leak impact factor. Because of its multiplying effect, slight differences in the LIF can easily cause differing clumping of data points. Look for variations in product characteristics, pipe size and pressure, population density, etc. A more subtle shift might be caused by any other risk variable. A caution regarding the use of histograms and most other graphical methods is in order. The shape of a graph can often be radically changed by the choice of axis scales. In the case of the histogram, part of the scaling is the choice of bin width. A width too wide conceals the actual data distribution. A width too narrow can show too much unimportant, random variation (noise).
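A minimal binning routine illustrates how the bin width choice shapes a histogram; this sketch is illustrative, not taken from any particular software package:

```python
def histogram(values, bin_width, low=0):
    """Count values into bins of the given width, keyed by bin start.
    A wide bin conceals the distribution; a narrow bin shows noise."""
    counts = {}
    for v in values:
        start = low + bin_width * int((v - low) // bin_width)
        counts[start] = counts.get(start, 0) + 1
    return counts

# Six hypothetical risk scores binned with a width of 10.
counts = histogram([12, 14, 27, 33, 35, 38], 10)
```

Re-running the same data with bin widths of, say, 5 and 50 quickly demonstrates the too-narrow and too-wide cases described above.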
Figure 8.3 Examples of distributions: normal (bell-shaped), uniform, Poisson, and bimodal.

Run charts
When a time series is involved, an obvious choice of graphing technique is the run chart. In this chart, the change in a value over time is shown. Trends can therefore be spotted; that is, "In which direction and by what magnitude are things changing over time?" Used in conjunction with the histogram, where the evaluator can see the shape of the data, information and patterns of behavior become more available.

Correlation charts
Of special interest to the risk manager are the relationships between risk variables. With risk variables including attributes, preventions, and costs, the interactions are many. A correlation chart (Figure 8.4) is one way to qualitatively analyze the extent of the interaction between two variables. Correlation can be quantified, but for a rough analysis, the two variables can be plotted as coordinates on an x,y set of axes. If the data are strongly related (highly correlated), a single line of plotted points is expected. In the highest correlation, for each value of x, there is one unique corresponding value of y. In such high-correlation situations, values of y can be accurately predicted from values of x. If the data are weakly correlated, scatter is seen in the plotted points. In this situation, there is not a unique y for every x. A given value of x might provide an indication for the corresponding y if there is some correlation present, but the predictive capability of the chart diminishes with increasing scatter of the data points. The degree of correlation can also be quantified with numerical techniques. There are many examples of expected high correlation: coating condition versus corrosion potential, activity level versus third-party damage, product hazard versus leak consequences, etc. Both the presence and absence of a correlation can be revealing.
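One common numerical technique for quantifying correlation is the Pearson correlation coefficient, sketched here from its textbook definition:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    sequences: +1 is perfect positive correlation, -1 perfect
    negative, and values near 0 indicate heavy scatter."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Applied to, say, coating condition scores and corrosion index scores, a coefficient near +1 or -1 confirms the tight line of points the text describes, while a value near zero confirms scatter.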
HLC charts

A charting technique borrowed from stock market analysis, the high-low-close (HLC) chart (Figure 8.5) is often used to show daily stock share price performance. For purposes of risk score analysis, the average will be substituted for the "close" value. This chart simultaneously displays a measure of central tendency and the variation. Because both central tendency and variation are best used together in data analysis, this chart provides a way to compare data sets at a glance. One way to group the data would be by system name, as shown in Figure 8.5. Each system name contains the scores of all pipeline sections within that system. Other grouping options include population density, product type, geographic area, or any other meaningful slicing of the data. These charts will visually call attention to central tendencies or variations that are not consistent with other data sets being compared. In Figure 8.5, the AB Pipeline system has a rather narrow range and a relatively high average. This is usually a good condition. The Frijole Pipeline has a large variation among its section scores, and the average seems to be relatively low. Because the average can be influenced by just one low score, an HLC chart using the median as the central tendency measure might also be useful. The observed averages and variations might be easily explained by consideration of product type, geographical area, or other causes. An important finding may occur when there is no easy explanation for an observation.
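The high-low-average summary behind such a chart is a simple grouping calculation; the system names and scores below are hypothetical:

```python
import statistics

def hlc_summary(scores_by_system):
    """Return (high, low, average) of section risk scores per system,
    the three values an HLC chart plots as tick marks on each bar."""
    return {name: (max(s), min(s), statistics.mean(s))
            for name, s in scores_by_system.items()}

summary = hlc_summary({
    "AB Pipeline": [100, 110, 120],       # narrow range, high average
    "Frijole Pipeline": [30, 90, 150],    # wide range, lower average
})
```

Substituting statistics.median for the mean gives the median-based variant suggested above for data sets where one low score could drag down the average.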
Examples

We now look at some examples of data analysis.
Figure 8.4 Correlation chart: risk score versus costs of operation (risk scores from 0 to 150 on the x axis; operating costs from $200,000 to $1,000,000 on the y axis).
Figure 8.5 HLC chart of risk scores by system name (AB Pipeline, CD Pipeline, Cisco Mainline, DF Pipeline, Frijole Pipeline, Standard, XY Pipeline), with tick marks showing the maximum, average, and minimum score for each system.
Example 8.2: Initial analysis

The pipeline system evaluated in this example was broken into 21 distinct sections as the initial analysis began. Each section was scored in each index and the corresponding LIF. The evaluator places the overall risk scores on a histogram as shown in Figure 8.6. Normally, it takes around 30 data points to define the histogram shape, so it is recognized that using only these 21 data points might present an incomplete picture of the actual shape. Nonetheless, the histogram reveals some interesting aspects of the data. The data appear to be bimodal, indicating two distinct groups of data. Each set of data might form a normal distribution (at least there is no strong indication that the data sets are not normally distributed). Rather than calculating summary statistics at this point, the evaluator chooses to investigate the cause of the bimodal distribution. Suspecting the LIF as a major source of the bimodal behavior, a histogram of LIF scores is created as shown in Figure 8.6. A quick check of the raw data shows that the difference in the LIF scores is indeed mostly due to two population densities existing in this system: Class 1 and Class 3 areas. This explains the bimodal behavior and prompts the analyst to examine the two distributions independently.

The data set is now broken into two parts for further analysis. The seven records for the Class 1 area are examined separately from the Class 3 records. Figure 8.7 shows an analysis by index of the risk scores for each data set. There do not appear to be any major differences in index values within a data set (an item-by-item comparison would be the most accurate way to verify this). Some quick calculations yield the following preliminary analysis: For this system, and similar systems yet to be evaluated, Class 1 area sections are expected to score between 70 and 140, with the average scores falling around 120. Class 3 area scores should range from 30 to 90, with the average scores falling around 60. In either case, every 10 points of risk reduction (index sum increase) will improve the overall safety picture by about 5%. From such a small overview data set, it is probably not yet appropriate to establish decision points or identify outliers.

Figure 8.6 Example 8.2 analysis: histograms of risk scores and LIF values.

Figure 8.7 Example 8.2 index comparison (third-party, corrosion, design, and incorrect operations indexes for Class 1 and Class 3 sections).
Example 8.3: Initial comparisons In this example, the evaluating company performed risk assessments on four different pipeline systems. Each system was sectioned into five or more sections. For an initial comparison ofthe risk scores,the evaluatorwants to compareboth central tendency and variation. The average and the range are chosen as summary statistics for each data set. Figure 8.8 shows a graphical representation of this information on a HLC chart. Each vertical bar represents the risk scores of a corresponding pipeline system. The top and bottom tick marks on the bar show the highest and lowest risk score; the middle tick mark shows the average risk score. Variability is highest in system 2. This would most likely indicate differences in the LIFwithin that set of records. Such differences are most commonly caused by changes in population density, but common explanations also include differences in operating pressures, environmental sensitivity, or spreadability. Index items such as pipe wall thickness, depth of cover, and coating condition also introduce variability, but unless such items are cumulative, they do not cause as much variability as LIF factors. The lowest overall average of risk scores occurs in system 4. Because scores are also fairly consistent (low variability)here, the lower scores are probably due to the LIF. A more hazardous product or a wider potential impact area (greater dispersion) would cause overall lower scores. In general, such an analysis provides some overall insight into the risk analysis. Pipeline system 4 appears to carry the highest risk. More risk reduction efforts should be directed there. Pipeline system 2 shows higher variability than other systems. This variability should be investigated because it may indicate some inconsistencies in operating discipline. As
Figure 8.8 Example 8.3 analysis (HLC chart of risk scores for the four pipeline systems).
always, when using summary scores like these, the evaluator must ensure that the individual index scores are appropriate.
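The min/average/max summary behind an HLC comparison like Figure 8.8 can be sketched in a few lines of Python. The scores below are hypothetical stand-ins chosen to mimic the example (remember that in this scheme higher scores mean greater safety, so the lowest average flags the highest-risk system):

```python
# Hypothetical section risk scores for four pipeline systems (not from the book).
scores = {
    "System 1": [55, 61, 58, 63, 60],
    "System 2": [40, 72, 55, 88, 61],   # widest spread -> highest variability
    "System 3": [62, 65, 60, 66, 63],
    "System 4": [45, 47, 44, 48, 46],   # low average, low variability
}

def hlc_summary(data):
    """Return (low, high, average) per system -- the three tick marks on an HLC bar."""
    return {
        name: (min(vals), max(vals), sum(vals) / len(vals))
        for name, vals in data.items()
    }

summary = hlc_summary(scores)
# The range (high minus low) flags the system whose sections differ most.
most_variable = max(summary, key=lambda s: summary[s][1] - summary[s][0])
# Higher points = safer, so the lowest average marks the highest-risk system.
lowest_average = min(summary, key=lambda s: summary[s][2])
print(most_variable, lowest_average)  # System 2 System 4
```

With these stand-in numbers, system 2 shows the widest range and system 4 the lowest average, mirroring the conclusions drawn in the example.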
Example 8.4: Verification of operating discipline In this example, the corrosion indexes of 32 records are extracted from the database. The evaluator hypothesizes that in pipeline sections where coating is known to be in poor condition, more corrosion preventive actions are being taken. To verify this hypothesis, a correlation chart is created that compares the coating condition score with the overall corrosion index score. Initially, this chart (Figure 8.9a) shows low correlation; that is, the data are scattered and a change in coating condition is not always mirrored by a corresponding change in corrosion index. To ensure that the correlation is being fairly represented, the evaluator looks for other variables that might introduce scatter into the chart. Attribute items such as product corrosivity, presence of AC power nearby, and atmospheric condition might be skewing the correlation data. Creating several histograms of these other corrosion index items yields more information. Seven of the records represent pipeline sections where internal corrosion is a significant potential problem. Two records have an unusually high risk from the presence of AC power lines nearby. Because internal corrosion potential and AC power influences are not of interest in this hypothesis test, these records are removed from the study set. This eliminates their influence on the correlation investigation and leaves 23 records that are thought to be fairly uniform. The resulting correlation of the 23 records is shown in Figure 8.9b. Figure 8.9b shows that a correlation does appear. However, there are two notable exceptions to the trend. In these cases, a poor coating condition score is not being offset by higher corrosion index scores. Further investigation shows that the two records in question do indeed have poor coating scores, but have not been recently surveyed by a close interval pipe-to-soil voltage test. The other sections are on a regular schedule for such surveys.
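The screening-then-correlating workflow of Example 8.4 can be sketched as follows; the record values and field names are hypothetical, invented only to show the mechanics:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: coating condition score, overall corrosion index,
# and flags for the two influences the evaluator screens out.
records = [
    {"coating": 4, "corr_index": 40, "internal_corr": False, "ac_power": False},
    {"coating": 8, "corr_index": 70, "internal_corr": False, "ac_power": False},
    {"coating": 6, "corr_index": 55, "internal_corr": False, "ac_power": False},
    {"coating": 7, "corr_index": 20, "internal_corr": True,  "ac_power": False},  # skews the data
    {"coating": 2, "corr_index": 30, "internal_corr": False, "ac_power": True},   # skews the data
]

# Keep only records that are uniform with respect to the hypothesis,
# as the evaluator does before drawing Figure 8.9b.
uniform = [rec for rec in records if not rec["internal_corr"] and not rec["ac_power"]]
r = pearson_r([rec["coating"] for rec in uniform],
              [rec["corr_index"] for rec in uniform])
print(round(r, 3))
```

Removing the records dominated by out-of-scope influences is what lets the underlying coating/corrosion-index relationship emerge, just as in the example.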
X. Risk model performance Given enough time and analyses, a given risk model can be validated by measuring predicted pipeline failures against actual. The current state-of-the-art does not allow such validation for
Figure 8.9 Example 8.4 analysis. (a) Corrosion index score vs. coating condition score, 32 records; (b) corrosion index score vs. coating condition score, 23 records.
reasons including: models have not existed long enough, data collection has not been consistent enough, and pipeline failures on any specific system are not frequent enough. In most cases, model validation is best done by ensuring that risk results are consistent with all available information (such as actual pipeline failures and near-failures) and consistent with the experiences and judgments of the most knowledgeable experts. The latter can be at least partially tested via structured model testing sessions and/or model sensitivity analyses (discussed later). Additionally, the output of a risk model can be carefully examined for the behavior of the risk values compared with our knowledge of the behavior of numbers in general. Therefore, part of data analysis should be to assess the capabilities of the risk model itself, in addition to the results produced from the risk model. A close examination of the risk results may provide insight into possible limitations of the risk model, including biases, inadequate discrimination, discontinuities, and imbalances. Some sophisticated routines can be used to evaluate algorithm outputs. A Monte Carlo simulation uses random numbers
to produce distributions of all possible outputs from a set of risk algorithms. The shape of the distribution might help evaluate the "fairness" of the algorithms. In many cases a normal, or bell-shaped, distribution would be expected, since this is a very common distribution of material properties and properties of engineered structures, as well as many naturally occurring characteristics (height and weight of populations, for instance). Alternative distributions are possible, but should be explainable. Excessive tails or gaps in the distributions might indicate discontinuities or biases in the scoring possibilities. Sensitivity analyses can be set up to measure the effect of changes in any variables on the changes in the risk results. This is akin to the signal-to-noise discussions from earlier chapters because we are evaluating how sensitive the results are to small changes in underlying data. Because some changes will be "noise" (uncertainty in the measurements), the sensitivity analysis will help us decide which changes might really be telling us there is a significant risk change and which might only be responding to natural variations in the overall system background noise.
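A minimal Monte Carlo check of this kind might look like the following sketch; the toy index-sum algorithm and its inputs are hypothetical, not the book's model:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def index_sum(third_party, corrosion, design, incorrect_ops):
    """Toy probability-of-failure score: the sum of four 0-100 index scores."""
    return third_party + corrosion + design + incorrect_ops

# Push random inputs through the algorithm to build an output distribution.
samples = [
    index_sum(*(random.uniform(0, 100) for _ in range(4)))
    for _ in range(20_000)
]

# A sum of several independent uniform inputs should look roughly bell shaped
# (central limit theorem); heavy tails or gaps in the histogram would suggest
# scoring discontinuities or bias in the algorithm.
mean = sum(samples) / len(samples)
lo, hi = min(samples), max(samples)
print(round(mean), round(lo), round(hi))
```

In practice one would histogram `samples` and inspect the shape; any distribution far from the expected one should be explainable before the model is trusted.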
Sensitivity analysis The overall algorithm that underlies a risk model must react appropriately, neither too much nor too little, to changes in any and all variables. In the absence of reliable data, this appropriate reaction is gauged to a large extent by expert judgment as to how the real-world risk is really impacted by a variable change. Sensitivity analysis generally refers to an evaluation of the relative change in results due to a change in inputs: the sensitivity of outputs to changes in inputs. Sensitivity analysis can be a very statistically rigorous process if advanced techniques such as ANOVA (analysis of variance), factorial design, or other statistical design-of-experiments techniques are used to quantify the influence of specific variables. However, some simple mathematical and logical techniques can alternatively be used to gauge the impact on results caused by changing certain inputs. Some of the previously discussed graphical tools can be useful here. For example, a correlation chart can help verify expected relationships among variables or alert the analyst to possible model weaknesses when expectations are not realized. From the mathematical formula behind the risk algorithm presented in Chapters 3 through 7, the effect of changes in any risk variable can be readily seen. Any percentage change in an index value represents a change in the probability of failure and, hence, the overall risk. For example, an increase (improvement)
in the corrosion index translates to some percentage reduction in the risk of that type of failure. This improvement could be achieved through changes in a risk activity or condition, such as in-line inspection, close-interval surveys, or coating condition, or through some combination of changes in multiple variables. Similarly, a change in the consequences (the leak impact factor, LIF) correlates to the same corresponding change in the overall risk score. Some variables, such as pressure and population density, impact both the probability and consequence sides of the risk algorithm. In these cases, the impact is not obvious. A spreadsheet can be developed to allow "what-if" comparisons and sensitivity analyses for specific changes in risk variables. An example of such comparisons for a specific risk model is shown in Table 8.3. The last column of this table indicates the impact of the change shown in the first column. For instance, the first row shows that this risk model predicts a 10% overall risk reduction for each 10% increase in pipe wall thickness, presumably in a linearly proportional fashion. (Note that any corrosion-related benefit from increased wall thickness is not captured in this model, since corrosion survivability is not being considered.) Table 8.3 reflects changes from a specific set of variables that represent a specific risk situation along the pipeline. Results for different sets of variables might be different. This type of "what-if" scenario generation also serves as a risk management tool.
Table 8.3 "What-if" comparisons and analyses of changes in risk variables

Change | Variables affected | Change in overall risk (%)
Increase pipe wall thickness by 10% | Pipe factor | -10
Reduce pipeline operating pressure by 10% | Pipe factor, leak size, MAOP potential, etc. | -2.3
Improve leak detection from 20 min to 10 min (including reaction) | Leak size (LIF) | -2.1
If population increases from density of 22 per mile to 33 per mile (50% increase) | LIF | +5.0
Increase air patrol frequency | Air patrol (third-party index) | Possibly -5, depending on initial and end states
Increase pipe diameter by 10% | Pipe factor, leak size (LIF) | +9.1
Improve depth-of-cover score by 10% | Cover (third-party index) | -0.6
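The mechanics behind such a "what-if" table can be sketched with a toy risk formula (a probability-side index sum scaled by a consequence factor). The formula and baseline numbers here are hypothetical illustrations, not the model that produced Table 8.3:

```python
def overall_risk(index_sum, leak_impact_factor):
    """Toy risk score: higher index sum = safer, so risk falls as indexes rise."""
    return leak_impact_factor / index_sum

def percent_change(base, changed):
    """Relative change in overall risk, in percent."""
    return 100.0 * (changed - base) / base

# Hypothetical baseline: index sum of 240 points, consequence factor of 50.
base = overall_risk(index_sum=240.0, leak_impact_factor=50.0)

# What if a thicker wall improves the index sum by 24 points (10%)?
thicker_wall = overall_risk(index_sum=264.0, leak_impact_factor=50.0)
print(round(percent_change(base, thicker_wall), 1))  # -9.1

# What if population growth raises the consequence factor by 10%?
more_people = overall_risk(index_sum=240.0, leak_impact_factor=55.0)
print(round(percent_change(base, more_people), 1))  # 10.0
```

Running many such single-variable perturbations, as a spreadsheet would, produces exactly the kind of "change in overall risk" column shown in Table 8.3, and makes visible which inputs the model is most sensitive to.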
Additional Risk Modules
This chapter offers some ideas for considering two additional topics in the basic risk assessment model:

- Stress and human errors: measurable variables that indicate a more stressful workplace, possibly leading to higher error rates
- Sabotage: variables to consider when the threat of intentional attacks against a pipeline facility is to be assessed.
Where either is seen to be a significant contributor to failure potential, inclusion of additional risk variables into the risk assessment might be warranted. However, for many pipelines, issues regarding operator stress levels and sabotage potential are either not significant or so uniform as to make distinctions impossible. So, either of these can be a part of the risk assessment, but should be added only when the evaluator judges that its benefit exceeds the cost of the complexity that is added by its inclusion.
1. Stress and human errors Background The incorrect operations index is largely a measure of the potential for human errors. When there is no knowledge
deficiency, human error is almost exclusively caused by distraction. That is, when a person knows what to do and how to do it but inadvertently does it incorrectly, that incorrect action is the result of at least a momentary loss of focus: a distraction. Stress is a known contributor to loss of focus. Many studies have explored the relationship between stress and accidents. A general consensus is that there is indeed a strong correlation between the two. Stress can also be a beneficial condition because it creates the desire to change something. Some experts therefore make a distinction between positive and negative stress. For purposes of this discussion, the focus will be on negative stress: that set of human reactions that has a potentially destructive effect on health and safety. Stress is a highly subjective phenomenon in that equal external conditions do not initiate equal stress states among all people. It is not the external condition that causes the stress; it is the manner in which the external condition is viewed by an individual that determines the reaction. More and more, stress is being viewed as a matter of personal choice, indicating that people can control their reactions to external stimuli to a greater degree than was previously thought. Nonetheless, experience shows that certain external stimuli can be consistently linked with higher stress states in many individuals.
Because the stress level in an individual is so subjective, it is nearly impossible to estimate the impact of a stressor (the external stimulus) on a person's job functioning ability. For example, the fear of job loss might be a significant cause of concern in one employee but have virtually no impact on another. The differences might be due to present financial condition, financial responsibilities, confidence in obtaining alternative employment, history of job losses, fear of rejection, presence of any stigmas attached to loss of employment, etc., all of which are highly subjective interpretations. It is beyond the scope of this text, and perhaps beyond present scientific capabilities, to accurately quantify the level of stress in a given work group and relate that to accident frequency. A thorough psychological screening of every individual in the workplace would be the most exacting method to identify the ability to handle stress and the ability to avoid focus errors. This might give a snapshot indication of the propensity for human errors in the work group. The benefits of such a study, however, given the associated high levels of uncertainty, may not outweigh the costs of the effort. For purposes of risk assessment, however, we can identify some common influences that historically have been linked to higher levels of stress, as well as some widespread stress reducers. This is useful in distinguishing groups that may be more prone to human error during a specified time interval. Adjustments to the risk score can be made when strong indications of higher or lower than normal stress levels exist.
Stressors
Physical stressors Noise, temperature, humidity, vibration, and other conditions of the immediate environment are physical contributors to stress. These are thought to be aggravating rather than initiating causes. These stimuli tend to cause an increase in arousal level and reduce the individual's ability to deal with other stresses. The time and intensity of exposure will play a role in the impact of physical stressors.
Job stressors

Working relationships Examples of these stressors include roles and responsibilities not clearly defined, personality conflicts, and poor supervisory skills.

Promotions Examples include no opportunity for advancement, poorly defined and executed promotion policies, and highly competitive work relationships.

Job security Indicators that this might be a stress issue include recent layoffs, rumors of takeovers, and/or workforce reductions.

Changes This is a potential problem in that there may be either too many changes (new technology, constantly changing policies, pressures to learn and adapt) or too few, leading to monotony and boredom.

Workload Again, either too much or too little can cause stress problems. Ideally, employees are challenged (beneficial stress) but not overstressed.
Office politics When favoritism is shown and there is poor policy definition or execution, people can sense a lack of fairness, and teamwork often breaks down, with resulting stress.

Organizational structure and culture Indicators of more stressful situations include the individual's inability to influence aspects of his or her job, the employee's lack of control, and lack of communication.

Perception of hazards associated with the job If a job is perceived to be dangerous, stress can increase. An irony here is that continued emphasis on the hazards and the need for safety might increase stress levels among employees performing the job.
Other common stressors

Shift work A nonroutine work schedule can lead to sleep disorders, biological and emotional changes, and social problems. Shift work schedules can be designed to minimize these effects.

Family relationships When the job requires time away from home, family stresses might be heightened. Family issues in general are occasional sources of stress.

Social demands Outside interests, church, school, community obligations, etc., can all be stress reducers or stress enhancers, depending on the individual.

Isolation Working alone when the individual's personality is not suited to this can be a stressor.

Undesirable living conditions Stress can increase when an individual or group is stationed at a facility, has undesirable housing accommodations near the work assignment, or lives in a geographical area that is not of their choosing.
Assessing stress levels Even if the evaluator is highly skilled in human psychology, it will be difficult to accurately quantify the stress level of a work group. A brief visit to a work group may not provide a representative view of actual, long-term conditions. On any given day or week, stress indicators might be higher or lower than normal. A certain amount of job dissatisfaction will sometimes be voiced even among the most stress-free group. Because this is a difficult area to quantify, point changes due to this factor must reflect the high amount of uncertainty. It is recommended that the evaluator accept the default value for a neutral condition, unless he finds strong indications that actual stress levels are indeed higher or lower than normal. In adjusting previously assigned risk assessment scores, it has been theorized that a very low stress level can bolster existing error-mitigation systems and lead to a better incorrect operations index score. A workforce free from distractions is better able to focus on tasks. Employees who feel satisfied in their jobs and are part of a team are normally more interested in their work, more conscientious, and less error prone. Therefore, when evidence supports a conclusion of "very low stress," additional points can be added. On the other hand, it is theorized that a high stress level or high level of distraction can undermine existing error-
mitigation systems and lead to increased chances of human error. A higher negative stress level leading to a shortened attention span can subvert many of the items in the incorrect operations index. Training, use of procedures, inspections, checklists, etc., all depend on the individual dedicating attention to the activity. Any loss of focus will reduce effectiveness. It will be nearly impossible to accurately assess the stress level during times of design and construction of older pipelines. Therefore, the assessments will generally apply to human error potential for operations and maintenance activities of existing pipelines and all aspects of planned pipelines. Stress levels can, of course, impact the potential of other failure modes, as can many aspects of the incorrect operations index. As a modeling convenience and consistent with the use of the incorrect operations index, only that index is adjusted by the human stress issue in this example risk model. Indications of higher stress and/or distraction levels can be identified and prioritized. The following list groups indicators into three categories, arranged in priority order. The first categories provide more compelling evidence of a potentially higher future error rate:

Category I Negative Indicators
- High current accident rate
- High current rate of errors
Category II Negative Indicators
- High substance abuse
- High absenteeism
- High rate of disciplinary actions

Category III Negative Indicators
- Low motivation, general dissatisfaction
- Low teamwork and cooperation (evidence of conspiracies, unhealthy competition, "politics")
- Much negativity in employee surveys or interviews
- High employee turnover
- Low degree of control and autonomy among most employees
- Low (or very negative) participation in suggestion systems.

Interpreting these signs is best done in the context of historical data collected from the workplace being evaluated and from other similar workplaces. The adjective high is, of course, relative. The evaluator will need some comparative measures, either from other work groups within the company or from published industry-wide or country-wide data, or perhaps even from experience in similar evaluations. Care should be exercised in accepting random opinions for these items. Although most of these indicators are selected partly because they are quantifiable measures, the data are not always readily available. In the absence of such data, it is suggested that no point adjustments be made. Where indications exist, a relative point or percentage adjustment scale for the incorrect operations index can be set up as shown in Table 9.1. In this example table, a previously calculated incorrect operations index score would be reduced by up to 20 points or 25% when significant indicators of negative stress exist.

Table 9.1 Example adjustment scale for the three negative indicator categories

Condition | Point change from previously calculated Inc Ops score | Percent change applied to previously calculated Inc Ops score
Presence of any Category I negative indicators | -12 | -15
Presence of any Category II negative indicators | -8 | -10
Presence of any two Category III negative indicators | -6 | -5
Combined maximum | -20 | -25

There is also the possibility that a workforce has unusually low stress levels, presumably leading to a low error rate. Indications of lower stress levels might be:

Category I Positive Indicators
- Low accident rate
- Low rate of errors

Category II Positive Indicators
- Low substance abuse
- Low absenteeism
- Low rate of disciplinary actions

Category III Positive Indicators
- High motivation, general satisfaction
- Strong sense of teamwork and cooperation
- Much positive feedback in employee surveys or interviews
- Low employee turnover
- High degree of control and autonomy among most employees
- High participation in suggestion systems.

As with the negative indicators, comparative data will be required and opinions should be only very carefully used. For instance, a low incidence of substance abuse should only warrant points if this was an unusual condition for this type of work group in this culture. Where indications exist, a relative point or percentage adjustment scale for the incorrect operations index can be set up as shown in Table 9.2.

Table 9.2 Example adjustments to incorrect operations index for the three positive indicator categories

Condition | Point change from previously calculated Inc Ops score | Percent change applied to previously calculated Inc Ops score
Presence of any Category I positive indicators | +12 | +15
Presence of any Category II positive indicators | +8 | +10
Presence of any two Category III positive indicators | +6 | +5
Combined maximum | +20 | +25

In the examples given in Tables 9.1 and 9.2, the results of the stress/distraction analysis would be as follows. When one or more of the indicators shows clear warning signals, the evaluator can reduce the overall incorrect operations index score by up to 20 points or 25%. When these signs are reversed and clearly show a better work environment than other similar operations, up to 20 points or 25% can be added to the incorrect operations index:

High stress: -20 pts or -25%
Neutral: 0 pts
Low stress: +20 pts or +25%

These adjustments are intended only to capture unusual situations. Points should be added or deducted only when strong indications of a unique situation are present.
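The adjustment logic of Tables 9.1 and 9.2 can be sketched as follows; the scoring function and its cap handling are an illustrative assumption, while the point values mirror the example tables:

```python
# Point values follow the example tables; the function itself is hypothetical.
NEGATIVE_POINTS = {"I": -12, "II": -8, "III": -6}   # Table 9.1
POSITIVE_POINTS = {"I": +12, "II": +8, "III": +6}   # Table 9.2
COMBINED_MAX = 20                                    # combined maximum, either direction

def adjust_inc_ops(score, negative_cats=(), positive_cats=()):
    """Apply stress-indicator adjustments to an incorrect operations index score."""
    delta = sum(NEGATIVE_POINTS[c] for c in negative_cats)
    delta += sum(POSITIVE_POINTS[c] for c in positive_cats)
    # Cap the net adjustment at the combined maximum of +/-20 points.
    delta = max(-COMBINED_MAX, min(COMBINED_MAX, delta))
    return score + delta

# All three negative categories present: -26 raw, capped to the -20 maximum.
print(adjust_inc_ops(80, negative_cats=("I", "II", "III")))  # 60
```

A real assessment would also need the evidence rules from the text (e.g., "any two Category III indicators") before a category counts; this sketch only shows the summing and capping.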
The following example scoring scenarios use the point adjustment option (rather than percentage adjustment) from the previous adjustment tables.
Example 9.1: Neutral stress conditions In the work environment being scored, the evaluator sees a few indications of overall high stress. Specifically, she observes an increase in the accident/error rate in the last 6 months, perhaps due to a recently high workload and the loss of some employees through termination. On the other hand, she observes a high sense of teamwork and cooperation, an overall high motivation level, and low absenteeism. Although the accident rate must be carefully monitored, the presence of positive as well as negative indicators does not support a situation unusual enough to warrant point adjustments for stress conditions.
Example 9.2: Higher stress conditions In this example, the evaluator assesses conditions at a major pumping station and control room. There are some indications that a higher than normal level of stress exists. In the last year, many organizational changes have occurred, including the dismissal of some employees. This is not a normal occurrence in this company. Job security concerns seem to be widespread, leading to some competitive pressures within work teams. Upper management reported many employee complaints regarding supervisors at these sites during the last 6 months. There is no formal suggestion system in place; employees have taken it on themselves to report dissatisfactions. In light of job security issues, the evaluator feels that this is an important fact. Records show that in the last 6 months, absenteeism has risen by 5% (even after adjusting for seasonality), a figure that, taken alone, is not statistically significant. The evaluator performs informal, random interviews of three employees. After allowing for an expected amount of negative feedback, along with a reluctance to "tell all" in such interviews, the evaluator nonetheless feels that an undercurrent of unusually high stress presently exists. Accident frequencies in the last year have not increased, however. The evaluator identifies no Category I items, possibly one Category II item (the uncertain absenteeism number), and two Category III items (general negativity, high complaints). He reduces the incorrect operations index by 7 points in consideration of these conditions.
Example 9.3: Lower stress conditions At this site, the evaluator finds an unusual openness and communication level among the employees. Reporting relationships seem to be informal and cordial. Almost everyone at a meeting participates enthusiastically; there seems to be no reluctance to speak freely. A strong sense of teamwork and cooperation is evidenced by posters, bulletin boards, and direct observation of employees. There appears to be a high level of expertise and professionalism at all levels, as shown in the audit for other risk items. Absenteeism is very low; the unit has been accident free for 9 years, a noteworthy achievement considering the amount of vehicle driving, hands-on maintenance, and other exposures of the work group. The evaluator identifies Category I, II, and III items, assesses this as an unusually low stress situation, and adds 18 points to the incorrect operations index. The full score of 20 points is not applied because the evaluator is not as familiar with the work group as she could be and therefore decides that an element of uncertainty exists.
II. Sabotage module The threat of vandalism, sabotage, and other wanton acts of mischief is addressed to a limited degree in various sections of this risk assessment, such as the third-party damage and design indexes. This potential threat may need to be more fully considered when the pipeline is in areas of political instability or public unrest. When more consideration is warranted, the results of this module can be incorporated into the risk assessment. For purposes here, the term sabotage will be used to encompass all intentional acts designed to upset the pipeline operation. Sabotage is primarily considered to be a direct attack against the pipeline owner. Because of the strategic value of pipelines and their vulnerable locations, however, pipelines are also attacked for other reasons. Secondary motivations may include pipeline sabotage as

- An indirect attack against a government that supports the pipeline
- A means of drawing attention to an unrelated cause
- A protest for political, social, or environmental reasons
- A way to demoralize the public by undermining public confidence in its government's ability to provide basic services and security.
It would be naive to rule out the possibility of attack completely in any part of the world. However, this module is designed to be used when the threat is more than merely a theoretical potential. Inclusion of this module should be prompted by any of the following conditions in the geographical area being evaluated:
- Previous acts directed against an owned facility have occurred
- Random acts impacting owned or similar facilities are occurring
- The company has knowledge of individuals or groups that have targeted it.

Because the kinds of conditions that promote sabotage can change quickly, the potential for future episodes is difficult to predict. For some applications, the evaluator may wish to always include the sabotage module for consistency reasons. An important first step in sabotage assessment is to understand the target opportunities from the attackers' point of view. It is useful to develop "what-if" scenarios of possible sabotage and terrorist attacks. A team of knowledgeable personnel can be assembled to develop the sabotage strategies that they would use, should they wish to cause maximum damage. The scenarios should be as specific as possible, noting all of the following aspects:

- What pipeline would be targeted?
- Where on the pipeline should the failure occur?
- What time of year, day of week, time of day?
- How would the failure be initiated?
- How would ignition be ensured, if ignition was part of the scenario?
- What would be the expected damages? Best case? Worst case?
- What would be the probability of each scenario?

As seen in the leak impact factor development discussion, the most damaging scenarios could involve unconfined vapor cloud explosions, toxic gases, or rapidly dispersed flammable liquids (via roadways, sewer systems, etc.), all in "target-rich" environments. Fortunately, these are also very rare scenarios. Even if a careful orchestration of such an event were attempted, the practical difficulties in optimizing the scenario for maximum impact would be challenging even for knowledgeable individuals. The threat assessment team should use these scenarios as part of a vulnerability assessment. Existing countermeasures and sequence-interruption opportunities should be identified. Additional prevention measures should be proposed and discussed.
Naturally, care should be exercised in documenting these exercises and in protecting such documentation. The nature of the sabotage threat is quite different from all threats previously considered. A focused human effort to cause a failure weighs more heavily on the risk picture than the basically random or slower acting forces of nature. Because any aspect of the pipeline operation is a potential target, all failure modes can theoretically be used to precipitate a failure, but the fast-acting failure mechanisms will logically be the saboteur's first choice. It must be conservatively assumed that a dedicated intruder will eventually find a way to cause harm to a facility. This implies
that, eventually, a pipeline failure will occur as long as the attacks continue. It is recommended that the sabotage threat be included as a stand-alone assessment. It represents a unique type of threat that is independent of and additive to other threats. To be consistent with other failure threat assessments (discussed in Chapters 3 through 6), a 100-point scale, with increasing points representing increasing safety, can be used in evaluations. Specific point values are not always suggested here because a sabotage threat can be so situation specific. The evaluator should review all of the variables suggested, add others as needed, and determine the initial weightings based on an appropriate balance among all variables. Variables with a higher potential impact on risk should have higher weightings. The overall potential for a sabotage event can first be assessed based on the current sociopolitical environment, where lower points reflect lower safety (greater threat levels). A score of 100 points indicates no threat of sabotage.
Attack Potential .......................................... 0-100 pts

Then points can be added to the "attack potential" score based on the presence of mitigating measures. In the sample list of considerations below, seven mitigating measures are assessed, as are portions of the previously discussed incorrect operations index:
A. Community Partnering
B. Intelligence
C. Security Forces
D. Resolve
E. Threat of Punishment
F. Industry Cooperation
G. Facility Accessibility (barrier preventions, detection preventions)

Incorrect Operations Index:
A. Design
B. Construction
C. Operations
D. Maintenance
Finally, some modifications to the Leak Impact Factor detailed in Chapter 7 might also be appropriate, as is discussed.
Attack potential Anticipation of attacks is the first line of defense. Indications that the potential for attack is significant include (in roughly priority order):
- A history of such attacks on this facility
- A history of attacks on similar facilities
- Presence of a group historically responsible for attacks
- High-tension situations involving conflict between the operating (or owner) company and other groups such as
  - Activists (political, environmental, labor, religious extremists, etc.)
  - Former employees
  - Hostile labor unions
  - Local residents.
9/202 Additional Risk Modules
In many cases, the threat from within the local community is greatest. An exception would be a more organized campaign that can direct its activities toward sites in different geographic areas. An organized guerrilla group is intuitively a more potent threat than individual actions.

An aspect of sabotage, probably better termed vandalism, includes wanton mischief by individuals who may damage facilities. Often an expression of frustration, these acts are generally spontaneous and directed toward targets of convenience. While not as serious a threat as genuine sabotage, vandalism can nonetheless be included in this assessment.

Experience in the geographic area is probably the best gauge to use in assessing the threat. If the area is new to the operator, intelligence can be gained via government agencies (state department, foreign affairs, embassies, etc.) and local government activities (city hall, town meetings, public hearings, etc.). The experience of other operators is valuable. Other operators are ideally other pipeline companies, but can also be operators of production facilities or other transportation modes such as railroad, truck, and marine.

To assess the attack potential, a point adjustment scale can be set up as follows:
Low attack probability (situation is very safe) . . . 50-80 pts
Although something has happened to warrant the inclusion of this module in the risk assessment, indications of impending threats are very minimal. The intent or resources of possible perpetrators are such that real damage to facilities is only a very remote possibility. No attacks other than random (not company or industry specific) mischief have occurred in recent history. Simple vandalism such as spray painting and occasional theft of non-strategic items (building materials, hand tools, chains, etc.) would score in this category.

Medium probability . . . 20-50 pts
This module is being included in the risk assessment because a real threat exists. Attacks on this company or similar operations have occurred in the past year and/or conditions exist that could cause a flare-up of attacks at any time. Attacks may tend to be propagated by individuals rather than organizations or otherwise lack the full measure of resources that a well-organized and resourced saboteur may have.

High probability (threat is significant) . . . 0-20 pts
Attacks are an ongoing concern. There is a clear and present danger to facilities or personnel. Conditions under which attacks occur continue to exist (no successful negotiations, no alleviation of grievances that are prompting the hostility). Attacks are seen to be the work of organized guerrilla groups or other well-organized, resourced, and experienced saboteurs.

Assigning of points between those shown is encouraged because actual situations will always be more complex than what is listed in these very generalized probability descriptions. A more rigorous assessment can be done by examining and scoring specific aspects of attack potential.
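The generalized bands above can be encoded as a simple lookup, with placement inside a band left to the evaluator's judgment. The band boundaries come from the text; the lookup structure and the interpolation parameter are illustrative assumptions of this sketch.

```python
# Point bands for attack potential (lower points = greater threat).
# Boundaries are from the text; treating each band as a closed
# interval is an assumption made for this sketch.
ATTACK_POTENTIAL_BANDS = {
    "low": (50, 80),     # situation is very safe; random mischief only
    "medium": (20, 50),  # real threat; attacks within the past year
    "high": (0, 20),     # clear and present danger; organized saboteurs
}

def attack_potential_points(band, position=0.5):
    """Pick a score inside a band.

    position (0..1) lets the evaluator place the situation within the
    band, since actual situations are always more complex than the
    three generalized descriptions.
    """
    lo, hi = ATTACK_POTENTIAL_BANDS[band]
    return lo + position * (hi - lo)
```

For example, a situation judged to sit between "medium" and "low" could be scored near the top of the medium band with `attack_potential_points("medium", 0.9)`.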
Sabotage mitigations

As the potential for an attack increases, preventive measures should escalate. However, any mitigating measure can be overcome by determined saboteurs. Therefore, the risk can only be reduced by a certain amount for each probability level. Awarding of points and/or weightings is difficult to generalize. Most anti-sabotage measures will be highly situation specific. The designer of the threat assessment model should assign weightings based on experience, judgment, and data, when available. Insisting that all weightings sum to 100 (representing 100% of the mitigation potential) helps in assigning weights and balancing the relative benefits of all measures. In a sense, evaluating the potential for sabotage also assesses the host country's ability to assist in preventing damage. The following sabotage threat reduction measures are generally available to the pipeline owner/operator in addition to any support provided by the host country.
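The suggestion that all mitigation weightings sum to 100 (100% of the mitigation potential) is easy to enforce mechanically. In the sketch below, the requirement that weights total 100 is from the text, but the individual weight values are purely hypothetical examples, not recommendations.

```python
# Hypothetical weightings (percent of total mitigation potential) for
# the seven measures.  The sum-to-100 rule is from the text; these
# particular values are illustrative only.
mitigation_weights = {
    "community_partnering": 40,   # often weighted heavily
    "intelligence": 15,
    "security_forces": 10,
    "resolve": 10,
    "threat_of_punishment": 5,
    "industry_cooperation": 5,
    "facility_accessibility": 15,
}

def check_weights(weights):
    """Insist that weightings represent 100% of mitigation potential."""
    total = sum(weights.values())
    if abs(total - 100) > 1e-9:
        raise ValueError("weightings sum to %s, not 100" % total)

check_weights(mitigation_weights)  # raises if the model is unbalanced
```

Forcing the sum to 100 makes trade-offs explicit: raising one measure's weight requires lowering another's, which is the balancing act the model designer must defend.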
A. Community partnering

One strategy for reducing the threat of sabotage and vandalism is to "make allies from adversaries." The possibility of attack is reduced when "neighbors" are supportive of the pipeline activities. This support is gained to some extent through general public education. People feel less threatened by things that they understand. Support of pipeline operations is best fostered, however, through the production of benefits to those neighbors. Benefits may include jobs for the community, delivery of needed products (an immediate consumable such as heating oil or gas for cooking is more important than intermediate products such as ethylene or crude oil), or the establishment of infrastructure by the company.

Threat of attack is reduced if pipeline operators establish themselves as contributing members of a community. In developing countries, this strategy has led to agricultural assistance, public health improvements, and the construction of roads, schools, hospitals, etc. Improvements of roads, telephone service, and other infrastructure not only improve the quality of life, they also have the secondary benefit of aiding in the prevention of and response to sabotage. An appreciative community will not only be less inclined to cause damage to the facilities of such a company, but will also tend to intervene to protect the company interests when those interests benefit the community.

Such a program should not be thought of (and definitely not be labeled) as a bribe or extortion payment by the operating company. In some cases, the program may be thought of as fair compensation for disrupting a community. In other cases, where the pipeline is merely used as a convenient target in a regional dispute that does not involve the operation at all, assistance programs can be seen as the cost of doing business or as an additional local tax to be paid.
Whatever the circumstances, a strategy of partnering with a community will be more effective if the strategy is packaged as the "right thing to do" rather than as a defensive measure. The way the program is presented internally will affect company employees and will consequently spill over into how the community views the actions. Employee interaction with the locals might be a critical aspect of how the program is received. If the pipeline company or sponsoring government is seen as corrupt or otherwise not legitimate, this assistance might be seen as a temporary payoff without long-term commitment and will not have the desired results. It might be a difficult task to create the proper alliances to win public support, and it will usually be a slow process. (See also the "Intelligence" section next.)

Community partnering can theoretically yield the most benefit as a risk mitigator because removal of the incentive to attack is the most effective way to protect the pipeline. When such a program is just beginning, its effectiveness will be hard to measure. For risk assessment purposes, the evaluator might assess the program initially and then modify the attack potential variable as evidence suggests that the program is achieving its intended outcome. Various elements of a community partnering program can be identified and valued in order to assess the benefits from the program:

- Significant, noticeable, positive impact of the program
- Regular meetings with community leaders to determine how and where money is best spent
- Good publicity as a community service.

These elements are listed in priority order, from most important to least, and can be additive: add points for all that are present, using a point assignment scale consistent with the perceived benefit of this mitigation. In many cases, this variable should command a relatively high percentage of possible mitigation benefits, perhaps 20-70%.
B. Intelligence

Forewarning of intended attacks is the next line of defense. Intelligence gathering can be as simple as overhearing conversations or as sophisticated as the use of high-resolution spy satellites, listening devices, and other espionage techniques. Close cooperation with local and national law enforcement may also provide access to vital intelligence. Local police forces are normally experienced in tracking subversives. They know the citizens, they are familiar with civilian leaders, they can have detailed information on criminals and subversive groups, and their support is important in an active anti-sabotage program. However, some local police groups may themselves be corrupt or less than effective. When the local police force is seen as a government protection arm (rather than protection for the people), a close alliance might be counterproductive and even impact the effectiveness of a damage prevention program [12].

The evaluator should be aware that effectiveness of intelligence gathering is difficult to gauge and can change quickly as fragile sources of information appear and disappear. Maximum value should be awarded when the company is able to reliably and regularly obtain information that is valuable in preventing or reducing acts of sabotage. As a rough way of scoring this item, a simple ratio can be used:

(number of acts thwarted through intelligence gathering efforts) / (number of acts attempted)

Hence, if it is believed that three acts were avoided (due to forewarning) and eight acts occurred (even if unsuccessful, they should be counted), then award 3/8 of the maximum point value.
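The thwarted-to-attempted ratio can be applied directly as a scoring fraction. A minimal sketch, following the arithmetic of the example above (the maximum point value is left as a parameter for the model designer, and the zero-attempts behavior is an assumption of this sketch):

```python
def intelligence_score(thwarted, attempted, max_points):
    """Score intelligence effectiveness as (acts thwarted) / (acts attempted).

    Unsuccessful attacks still count as attempts.  Per the example in
    the text, 3 acts avoided through forewarning against 8 acts that
    occurred earns 3/8 of the maximum point value.
    """
    if attempted == 0:
        # Assumption: with no attempts observed, award full value rather
        # than divide by zero; the evaluator may prefer another rule.
        return max_points
    return max_points * thwarted / attempted

intelligence_score(3, 8, max_points=16)  # 3/8 of 16 = 6.0
```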
C. Security forces

The effectiveness of a security force will be situation specific. Rarely can enough security personnel be deployed to protect the entire length of a pipeline. If security is provided by a government that is presently unpopular, the security forces themselves might be targets and bring the risk of damage closer to the pipeline. It is not uncommon in some areas for pipeline owners to deploy private security personnel. The evaluator should look for evidence of professionalism and effectiveness in such situations. Maximum value should be awarded when the security force presents a strong deterrent to sabotage.
D. Resolve

A well-publicized intention to protect the company's facilities is a deterrent in itself. When the company demonstrates unwavering resolve to defend facilities and prosecute perpetrators, the casual mischief-maker is often dissuaded. Such resolve can be partially shown by large, strongly worded warning signs. These warnings should be reinforced by decisive action should an attack occur. A high-visibility security force also demonstrates resolve. Maximum value should be awarded for a high-profile display that might include signs, guards, patrols, and publicized capture and prosecution of offenders.
E. Threat of punishment

Fear of punishment can be a deterrent to attacks, to some extent. A well-publicized policy and good success in prosecution of perpetrators is a line of defense. The assessed value of this aspect can be increased when the threat of punishment is thought to play a significant role. The evaluator should be aware that a government that is not seen as legitimate might be deemed hypocritical in punishing saboteurs harshly while its own affairs are not in order. In such cases, the deterrent effect of punishment might actually foster support for the saboteurs [12]. In many cases, threat of punishment (arguably) has a minimal impact on reducing attacks.
F. Industry cooperation

Sharing of intelligence, training employees to watch neighboring facilities (and, hence, multiplying the patrol effectiveness), sharing of special patrols or guards, sharing of detection devices, etc., are benefits derived from cooperation between companies. Particularly when the companies are engaged in similar operations, this cooperation can be inexpensive and effective. Maximum value should be awarded when a pipeline company's anti-sabotage efforts are truly expanded by these cooperative efforts.
G. Facility accessibility

Attacks will normally occur at the easiest (most vulnerable) targets and, as a secondary criterion, at those targets that will be the most aggravating to repair. Such sites include the remote, visible stations along the pipeline route (especially pump and compressor stations), the exposed piping on supports and bridges, and locations that will be difficult to repair (steep mountain terrain, swampland, heavy jungle, etc.).
The absence of such facilities is in itself a measure of protection and would be scored as the safest condition. The underlying premise is that a buried pipeline is not normally an attractive target to a would-be saboteur, due to the difficulty in access. Line markers might bring unwanted attention to the line location. Of course, this must be weighed against the benefits of reducing unintentional damage by having more signage. The evaluator may wish to score incidences of line markers or even cleared ROW as aspects of sabotage threat if deemed appropriate.

Where surface facilities do exist, points should be subtracted for each occurrence in the section evaluated. The magnitude of this point penalty should be determined based on how much such facilities are thought to increase the attack potential and vulnerability for the pipeline segment. Different facilities might warrant different penalties depending on their attractiveness to attackers. Surface facilities such as pump and compressor stations are often the most difficult and expensive portions of the pipeline system to repair. Use of more sophisticated and complex equipment often entails delays in obtaining replacement parts, skilled labor, and specialized equipment to effect repairs. This is further reason for a stronger defensive posture at these sites.

Preventive measures for unintentional third-party intrusions (scored in the third-party damage index) offer some overlap with mischief-preventing activities (fences around aboveground facilities, for example) and are sometimes reconsidered in this module. More points should be awarded for devices and installations that are not easily defeated. The presence of such items better discourages the casual intruder.
Preventive measures at each facility can bring the point level nearly to the point of having no such facilities, but not as high as the score for "no vulnerable facilities present." This is consistent with the idea that "no threat" (in this case, "no facility") will have less risk than "mitigated threat," regardless of the robustness of the mitigation measures. From a practical standpoint, this allows the pipeline owner to minimize the risk in a number of ways because several means are available to achieve the highest level of preventive measures to offset the point penalty for the surface facility. However, it also shows that even with many preventions in place, the hazard has not been removed.

Mitigations can be grouped into two categories: barrier-type preventions, where physical barriers protect the facility, and detection-type preventions, where detection and response are a deterrent. The "penalty" assigned for the presence of surface facilities can be reduced for all mitigative conditions at each facility within the pipeline section evaluated. Some common mitigation measures or conditions, in roughly priority order from most effective to least, are listed here:

Barrier-Type Preventions
- Electrified fence in proper working condition
- Strong fence/gate designed to prevent unauthorized entry by humans (barbed wire, anti-scaling attachments, heavy-gauge wire, thick wood, or other anti-penetration barrier)
- Normal fencing (chain link, etc.)
- Strong locks, not easily defeated
- Guards (professional, competent) or guard dogs (trained)
- Alarms, deterrent type, designed to drive away intruders with lights, sounds, etc.
- Staffing (value dependent on hours manned and number of personnel)
- High visibility (difficult to approach the site undetected; good possibility exists of "friendly eyes" observing an intrusion and taking intervening action)
- Barriers to prevent forcible entry by vehicles (These may be appropriate in extreme cases. Ditches and other terrain obstacles provide a measure of protection. Barricades that do not allow a direct route into the facility, but instead force a slow, twisting maneuver around the barricades, prevent rapid penetration by a vehicle.)
- Dense, thorny vegetation (This type of vegetation provides a barrier to unauthorized entry. On the other hand, it also provides cover for a perpetrator. Awarding of points is situation specific and should weigh the advantages and disadvantages of such vegetation.)

All detection-type preventions must be coupled with timely response unless the detection device is solely for purposes of later apprehension and prosecution of trespassers. Options, listed in roughly priority order (most valuable to least), are listed here:

Detection-Type Preventions
- Staffing (Give maximum value for full-time staffing with multiple personnel at all times.)
- Video surveillance, real-time monitoring and response
- Video surveillance, for recording purposes only
- Alarms, with timely response: motion detectors (infrared, trip beams, trip wires, pressure sensors on floor, etc.) and sound detectors (may not be feasible in a noisy station)
- Supervisory control and data acquisition (SCADA) system (Such a system can provide an indication of tampering with equipment because the signal to the control room should change as a transmitter or meter changes.)
- Satellite surveillance, with increasingly better resolution (Such an option is viable today for observing a pipeline and the surrounding area continuously or at any appropriate interval.)
- Explosive dye markers (These are devices that spray a dye on a perpetrator to facilitate apprehension and prosecution.)
Patrolling is already scored in the third-party damage index. Varying the patrol and inspection schedules enhances this as a sabotage prevention measure.

Any of the above measures can also be simulated rather than real. Examples of simulated measures include plastic that appears to be steel bars, fake cameras, and signs warning of measures that do not exist. While obviously not as effective as the genuine deterrents, these are still somewhat effective and some mitigation credit can be awarded.

Preventive measures are most effective in discouraging the casual mischief-maker. The more sophisticated aggressor who is intent on causing harm to a specific facility will most likely infiltrate the facility and defeat the detection devices, regardless of the measures employed. With more modern technology, attack is also possible from greater distances. Other equivalent prevention actions and devices can be similarly scored within the spirit of the ranking lists.

Note: In all awarding of values, the evaluator is cautioned to carefully study the real-world effectiveness of the anti-sabotage measure.
Factors such as training and professionalism of personnel, maintenance and sensitivity of devices, and response time to situations are all critical to the usefulness of the measure. As with the potential itself, scoring will necessarily be quite judgmental.

A basic assortment of protection measures such as fencing, locks, signs, and SCADA can be scored for each station so equipped. This package is a fairly normal arrangement for pipeline facilities when there is no special sabotage threat. Where a significant threat does exist, adding features such as guards and detection devices can add points up to the maximum allowed. A surface facility should never score as well as the absence of such a facility since its very existence creates a target for sabotage.
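The facility-accessibility logic above (subtract a penalty per surface facility, restore credit for mitigations, but never recover the full "no facility" score) can be sketched as follows. The penalty and credit values in the example are hypothetical; only the capping rule reflects the text.

```python
def facility_accessibility_points(no_facility_score, facility_penalty,
                                  mitigation_credit, residual=1.0):
    """Score a segment containing one surface facility.

    Starts from the "no vulnerable facilities present" score, subtracts
    a penalty for the facility, and restores credit for barrier- and
    detection-type preventions.  At least `residual` points of penalty
    always remain, so a mitigated facility never scores as well as no
    facility at all ("no threat" beats "mitigated threat").
    """
    restored = min(mitigation_credit, facility_penalty - residual)
    return no_facility_score - facility_penalty + max(0.0, restored)

# Hypothetical numbers: 20 pts for "no facilities", a 10-pt penalty
# for one station, and 12 pts of claimed mitigation credit:
facility_accessibility_points(20, 10, 12)  # capped at 19, not back to 20
```

Different facilities can be run through the same function with different penalties to reflect their relative attractiveness to attackers.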
Casing pipe

While a buried pipeline is relatively inaccessible, casings are possible exceptions. As a special case of surface facilities, sections of buried pipeline that are encased in a casing pipe can be more vulnerable than directly buried pipe. The vulnerability arises from the common use of vent pipes attached to the casing that provide a direct route to the carrier pipe. An explosive charge, dropped into a vent pipe, can then detonate against the carrier pipe. A simple prevention is to place bends in the vent pipe so that a dropped object no longer has direct access to the carrier pipe. If the bends are below ground level, would-be attackers may not know that there is no unrestricted path to the main line. Permanent screens or other barriers on the vent pipe entrance are also deterrents to this type of attack.
Incorrect operations index

In addition to the variables just discussed, other aspects of the pipeline's design, construction, and operation can be examined with the threat of sabotage considered. Since these aspects are also covered to some degree in a standard assessment of human error potential, as discussed in the Incorrect Operations Index (Chapter 6), the same categories can be used here.
A. Design

This is the first phase of a pipeline operation where attention can be focused on the threat of attack. Route selection should take into consideration all political and security factors associated with a proposed installation. Public relations will ideally begin in the design phase, long before construction begins. Even the presence of a survey crew can generate bad will and false rumors if neighbors are surprised by the activity. Project approval from national or regional government levels may not be enough if this government is unpopular with the local residents. Whereas local approval may not be feasible for a number of reasons, any progress toward local support is valuable. For purposes of this sabotage module, preparatory work done in the design phase can be scored as follows:

Level of Support for Project
Low: National support only; no attempts are made to communicate with regional or local residents.
Medium: Some attempts are made to communicate the purpose of the project; more generalized modes such as television, newspapers, and public postings are used; however, little feedback is received from residents.
High: Widespread communication and campaigning for the project are conducted using the most effective modes to reach the most people. This may entail visits to villages, town meetings, etc., to hold sessions (in the native language) to deliver information and address concerns.

When attacks can be expected, the design phase presents the opportunity to do a few things to minimize the impact of the attacks. Equipment can be selected that is more easily repaired (availability of spare parts, ease of assembly/disassembly, simple design, etc.); aboveground facilities can be located with defense in mind; and detection and prevention options can be included in initial designs. The degree of success and risk reduction in these efforts is covered (and scored) in previous variables.
B. Construction

Installation of new facilities or modification of existing facilities provides many opportunities for sabotage. Defects can be introduced and then concealed, counterfeit materials can be substituted, equipment can be stolen or sabotaged, etc. In today's construction environment, a great deal of inspection is often required to ensure that errors are not made and shortcuts are not taken by constructors working against deadlines and cost constraints. When the potential for intentional, malicious acts is introduced, the problem is vastly compounded. Inspection efforts must be greatly expanded in order to have a fair chance of preventing such acts. Security must be present even when work is not being performed in order to protect equipment and property. Points may be awarded based on the degree of security offered during the construction phase:

Low: No special security measures are taken.
Medium: The threat is acknowledged and planned for. Some steps to increase security during construction are taken. Materials and equipment are secured; extra inspection is employed.
High: Extraordinary steps are taken to protect company interests during construction. These include:
- 24-hour-per-day guarding and inspection
- Employment of several trained, trustworthy inspectors
- Screened, loyal workforce, perhaps brought in from another location
- System of checks for material handling
- Otherwise careful attention to security through thorough planning of all job aspects.
C. Operations

An opportunity to combat sabotage exists in the training of company employees. Alerting them to common sabotage methods, possible situations that can lead to attacks (disgruntled present and former employees, recruitment activities by saboteurs, etc.), and suspicious activities in general will improve vigilance. An aspect of sabotage potential is intentional attacks by company employees or those posing as company employees.
An employee with intent to do harm is usually in a better position to cause damage due to his likely superior knowledge of the process, equipment, and security obstacles, as well as his unquestioned access to sensitive areas. An employee with intent to do harm can be either "unintentionally acquired" or "created." One is acquired when saboteurs infiltrate the company through the normal employee hiring process or as emergency substitutes for regular employees. One is created usually through a revenge motive due to a perceived wrong done by the company or through recruitment of the employee by a saboteur organization. Recruitment is usually achieved by addressing the individual's psychological needs. Such needs include wealth, acceptance, love, guilt, and ideals. Some preventive measures are available to the operating company. Points should be awarded based on the number of obstacles to internal sabotage that exist. Common deterrents include:
- Thorough screening of new employees
- Limiting access to the most sensitive areas
- Identification badges
- Training of all employees to be alert to suspicious activities.
D. Maintenance

Opportunities for attacks during the maintenance phase are mostly already included in the operations and construction aspects of this index. Attention to maintenance requirements in the design phase, especially planning for repair and replacement, can help to minimize the impact of attacks. These factors can be somewhat addressed in the cost of service interruption. Variables that can also be considered in this module include some that are scored as part of the basic risk assessment. Their consideration here can duplicate the previous scoring or be modified at the modeler's discretion.

More Significant Items
- Patrolling: A high-visibility patrol may act as a deterrent to a casual aggressor; a low-visibility patrol might catch an act in progress.
- Station visits: Regular visits by employees who can quickly spot irregularities such as forced entry, tampering with equipment, etc., can be a deterrent. Varying the times of patrol and inspection can make observation more difficult to avoid.

Less Significant Items
- Depth of cover: Perhaps a deterrent in some cases, but a few more inches of cover will probably not dissuade a serious perpetrator.
- ROW condition: A clear ROW makes spotting of potential trouble easier, but also makes the pipeline a target that is easier to find and access.
Special emphasis on these variables may help offset a higher risk of attack. When evaluating a variable’s contribution to risk mitigation, a condition or activity that plays a more important role in the risk picture should have a greater impact on the overall point score.
Leak impact factor considerations

It would be somewhat comforting to think that most saboteurs are trying to send messages and cause a company unnecessary expense but do not necessarily want to harm innocent parties. Realistically, however, this idea should not be a source of complacency. A saboteur in an extreme case might seek to use the pipeline contents as a weapon to create far-reaching destruction. For example, a hydrocarbon vapor cloud, allowed to reach some optimum size and then ignited, might magnify the consequences of an "unassisted" pipeline leak. If the conditions are right, such an intentional ignition in suitable surroundings may create an unconfined vapor cloud explosion with the resulting damages from blast effects (overpressure) and fireball thermal effects. An attacker could similarly wait for weather conditions that would enhance the spread of a cloud of toxic gases from a pipeline release.

Regardless of the initial motivation for the attack, it is felt that the worst case consequences are comparable to those of an unintentional pipeline release. However, the probability of worst case consequences can be increased by an intentional release of pipeline contents. It must be conservatively assumed, then, that in the case of sabotage, there is a greater likelihood of the consequences being more severe. This leads to the inclusion of a factor to modify the leak impact factor (LIF) to reflect the influence of sabotage-caused leaks. Whenever this module is used in a risk assessment, the evaluator should consider increasing the LIF in consideration of worst case scenarios possibly occurring more frequently under the threat of sabotage. If this increase is applied uniformly, it will not affect the results of a relative risk assessment unless pipelines under a sabotage threat are compared against those without. The LIF increase will be apparent if the relative risk scores are correlated to some measure of absolute risk (see Chapter 14).
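The LIF adjustment described here amounts to multiplying the factor by a sabotage modifier of at least 1. The modifier value is a judgment call for the evaluator; the 1.15 below is only an illustration, echoing the 15% consequence increase used in Example 9.5 later in this chapter.

```python
def adjusted_lif(base_lif, sabotage_modifier):
    """Increase the leak impact factor when a sabotage threat is assessed.

    sabotage_modifier >= 1 reflects the conservative assumption that an
    intentional release makes worst-case consequences more likely.
    Applied uniformly to all segments, the modifier leaves relative
    rankings unchanged; it matters when comparing against segments with
    no sabotage threat, or when correlating scores to absolute risk.
    """
    if sabotage_modifier < 1:
        raise ValueError("sabotage should not reduce consequence potential")
    return base_lif * sabotage_modifier

adjusted_lif(100.0, 1.15)  # a 15% consequence increase
```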
In some cases, the LIF will include the consequences of service interruption, where longer periods of interruption increase consequences (plant shutdowns, lack of heating to homes and hospitals, etc.). Restoration priority can be established using concepts from the service interruption risk, discussed previously in this chapter. This would show the components of the system that would need to be repaired first, given that there are damages to several portions.
Example 9.4: Low threat of sabotage

The pipeline system for this example has experienced episodes of spray painting on facilities in urban areas and rifle shooting of pipeline markers in rural areas. The community in general seems to be accepting of, or at least indifferent to, the presence of the pipeline. There are no labor disputes or workforce reductions occurring in the company. There are no visible protests against the company in general or the pipeline facilities specifically. The evaluator sees no serious ongoing threat from sabotage or serious vandalism. The painting and shooting are seen as random acts, not targeted attempts to disrupt the pipeline. The evaluator elects not to include a special sabotage threat assessment in this risk assessment.
Example 9.5: Medium threat of sabotage

In the pipeline system considered here, the owner company has a history of violent labor disputes. Although there have not been any such disputes recently, altercations in the past have involved harassment of employees and sabotage of facilities. One such dispute coincides with the construction period of this section of pipeline. Similar forces seem to still be present and the current labor contract will be renegotiated within the year. The evaluator scores the potential risk as between "medium" and "low" based on the above information.

As negotiations begin, the company has made extra efforts to communicate to labor representatives its intention to protect facilities and prosecute to the fullest extent possible any attacks against facilities. This communication has been verbal, documented as meeting minutes, and in the form of posters in employee areas. The company has alerted local law enforcement of its concerns. The evaluator awards points for resolve and for fear of punishment. There are no cooperative efforts with neighboring industries. Points are also awarded for items in the operations aspect as follows: ID badges, employee screening, and controlled access.

In the section being evaluated, one aboveground metering/block valve station is present. It has a standard protection package that includes a chain-link fence with barbed wire on top, heavy chains and locks on gates and equipment, signs, and a SCADA system. By developing a point scale and applying a relative risk assessment to the situation, the overall risk of pipeline failure is judged to have increased by about 40% by including the threat of sabotage. This includes a 30% increase in failure probability coupled with a 15% increase in potential consequences, as measured by the evaluator's assessment model.
Example 9.6: High threat of sabotage
In this evaluation, the pipeline owner/operator has installed a pipeline in a developing country with a long history of political unrest. The routing of the line takes it close to rural villages whose inhabitants are openly anti-government and, because of the government's association with the company, anti-pipeline. In the past 2 years, pipeline service has been routinely disrupted by acts of sabotage on aboveground facilities and on cased installations below ground. The potential for attack is scored as high. In the last 6 months, the company has embarked on a community assistance program, spending funds to improve conditions in the villages along the pipeline route. There is evidence that these communities, while not tempering their hostility toward the government, are beginning to view the pipeline company as a potential ally instead of a partner of the government. Such evidence comes from informal interviews and recent interactions between pipeline employees and villagers. Company security officers have a close working relationship with government intelligence sources. These sources confirm that perceptions might be changing in the villages. There have been no attacks in the last 4 months (but it was not unusual for attacks to be spaced several months apart). Points are awarded for a community partnering program and intelligence gathering. Based on the recent intelligence and the observed trend in attacks, the evaluator may be able to score the attack potential as less than "high" at some point in the future. As more evidence continues to confirm the reduced potential, the scores will be reevaluated.
The company employs security managers and consultants but no guards or direct response personnel. Two points are awarded for "security force" for the use of the managers and consultants. Any efforts to publicize the company's intent to protect facilities and prosecute attackers are not thought to be effective. Government threats of apprehension and punishment are similarly not seen as a deterrent to the saboteurs. The section being evaluated has two surface facilities. These facilities are protected by electric fences (at least 75% reliability), remotely operated video surveillance cameras, SCADA, and trained guard dogs. All are judged to be effective anti-sabotage methods. The video surveillance or problems spotted with the SCADA prompt a quick response by local authorities or by a company helicopter. Points are awarded for these items. Where the pipeline route is not obscured by dense vegetation, digitized satellite views are transmitted to company headquarters twice a week. These views will detect movements of people, equipment, etc., within 1 mile either side of the pipeline. While not a continuous surveillance, these snapshots will alert the company to activity in the vicinity, perhaps spotting a staging area for attacks or the creation of an attack route to the line. The evaluator considers this to be an addition to the patrolling efforts and awards additional points for this effort. Additional points are awarded for other mitigation measures:

Design: A high level of support is sought for all future construction in this area. This company has much experience with the sabotage risk. A special anti-sabotage team assists in the design of new facilities and coordinates efforts to obtain support from pipeline neighbors.

Construction: Private guards are hired to protect job sites 24 hours per day. Construction inspectors are trained to spot evidence of sabotage and are experienced (and effective) in dealing with the local workforce and property owners. The inspection staff is increased so that at least two sets of eyes monitor all activities.

Operations: Operations mitigation measures include use of ID badges, employee screening, controlled access, and employee awareness training.

New scores are calculated based on a point system developed by the company. The high attack potential has been partially offset by the thorough mitigation efforts. Nonetheless, the results of the sabotage assessment, taken together with the basic risk assessment, imply that overall risk has more than tripled due to the high threat of sabotage. Including the threat of sabotage in the risk evaluation is done by considering this threat as an addition to the existing risk picture. As seen from the examples, inclusion of this special threat can have a tremendous impact on the risk picture, as is consistent with the reality of the situation. Before adding in the risk of sabotage, the threats to a pipeline are predominantly from slow-acting or rare forces of nature (corrosion, earth movements, fatigue, etc.) and random errors or omissions (outside damage, incorrect operations, etc.). The sabotage risk, on the other hand, represents a highly directed and specific force. Consequently, this can represent a greater risk to the pipeline than any other single factor. The increased risk is due primarily to the increased probability of a failure, and possibly a more likely higher consequence failure scenario.
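The arithmetic of such an adjustment can be sketched numerically. This is an illustrative Python sketch only: the additive handling of sabotage failure potential, the multiplicative consequence adjustment, and all names and numbers are assumptions chosen to show how a high sabotage threat can more than triple overall relative risk; they are not the book's actual point scheme.

```python
# Hypothetical sketch: fold a sabotage threat into a relative risk figure.
# Sabotage adds failure potential on top of the basic assessment, and can
# also raise the consequence side of the risk product.

def adjusted_relative_risk(base_failure_potential, base_impact,
                           sabotage_failure_potential, consequence_multiplier):
    """Return (base_risk, risk_with_sabotage) as relative numbers."""
    base_risk = base_failure_potential * base_impact
    risk_with_sabotage = ((base_failure_potential + sabotage_failure_potential)
                          * base_impact * consequence_multiplier)
    return base_risk, risk_with_sabotage

# Illustrative numbers: sabotage adds 1.5x the basic failure potential and
# raises potential consequences by 30%.
base, adjusted = adjusted_relative_risk(100, 2.0, 150, 1.3)
print(adjusted / base)  # 2.5 * 1.3 = 3.25, i.e., risk "more than tripled"
```

Any such structure should, of course, be replaced by the evaluator's own point system; the point is only that a directed threat enters both the probability and consequence sides of the risk product.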
10 Service Interruption Risk
1. Background

A service interruption is defined here as a deviation from product and/or delivery specifications for a sufficient duration to cause an impact on a customer. The definition implies the existence of a specification (an agreement as to what and how delivery is to occur), a time variable (duration of the deviation), and a customer. These will be discussed in more detail later. Terms and phrases such as specification violations, excursions, violations of delivery parameters, upsets, specification noncompliances, and off-spec will be used interchangeably with service interruption. Assessing the risk of service interruption is more complicated than assessing the risk of pipeline failure. This is because pipeline failure is only one of the ways in which a service interruption can occur. Service interruptions also have a time variable not present in the risk of pipeline failure. An event may or may not lead to a service interruption depending on how long the event lasts.
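The duration test implied by this definition can be stated as a one-line check. A minimal sketch with hypothetical names; the tolerance threshold would come from the customer's specifications:

```python
# An excursion becomes a service interruption only when it lasts longer
# than the customer's system can tolerate.

def is_service_interruption(excursion_hours, tolerable_hours):
    """True if a specification deviation outlasts the customer's tolerance."""
    return excursion_hours > tolerable_hours

print(is_service_interruption(0.5, 2.0))  # False: brief upset is absorbed
print(is_service_interruption(6.0, 2.0))  # True: customer is impacted
```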
Note that ensuring an uninterruptible supply often conflicts with ensuring a failure-proof system. The conflicts occur when erroneous valve closures or equipment failures cannot be tolerated and steps are taken to make shutdowns more difficult. In so doing, necessary, desirable shutdowns are also made more difficult. This often presents a design/philosophy challenge, especially when dealing with pipeline sections close to the customer where reaction times are minimal. This module is a parallel version of the overall risk assessment methodology. In fact, the basic risk assessment model is a part of the risk of service interruption. Because a pipeline failure as modeled by this technique almost certainly leads to a service interruption, the probability of pipeline failure itself is a component of the risk of service interruption. Added to this potential is the potential for events that cause a service interruption but do not cause a pipeline failure. Therefore, the sample point scale for the potential of service interruption (the equivalent of the index sum, 0-400 points, in the basic risk assessment model) is 540 points. An additional 140 points of "failure potential" variables has been added to the 400 points already assigned to measure the relative probability of failure due to excessive leakage. This sum is then modified by a consequence factor. As in the basic risk assessment model, the numerical range is not very important; numbers are most meaningful relative to other risk assessments.
II. The process

The overall process is generalized as follows:

1. Define service interruption. What must happen and for how long?
2. Identify occurrences that lead to service interruption. Weight these based on likelihood and severity.
3. Identify mitigating measures for these occurrences. Note that sometimes a mitigating measure can be taken far downstream of the excursion.
4. Define potential consequences of service interruption. These consequences are normally expressed as monetary costs. They represent a separate component of the leak impact factor.
Some sections of pipeline are more critical than others in terms of service interruption. In a distribution system, a service main failure will impact many end customers, whereas a service line failure will impact only a few. A transmission line failure might impact several entire distribution systems. A pipeline section very close to a customer, where early detection and notification of an excursion is not possible, will show a greater risk than a section on the same line far enough away from the customer where detection and notification and possibly avoidance of customer interruption are possible. Much of the potential for service interruption will be consistent along a pipeline because all upstream conditions must always be considered. The opportunity for reactionary preventions, however, will often change with proximity to the customer.
Figure 10.1 Service interruption risk module. [Flowchart: product specification deviation sources (product origin, production, pipeline dynamics, other; 0-20 pts each) and delivery parameter deviation sources (pipeline failures, scored as the index sum from the basic risk assessment model; blockages; equipment failures; operator error, 0-20 pts) combine into the upset score, subject to an intervention adjustment of 0-80% of the difference from maximum scores.]
Upset score
The definition for service interruption contains reference to a time factor. Time is often a necessary consideration in a specification noncompliance. A customer's system might be able to tolerate excursions for some amount of time before losses are incurred. When assessing customer sensitivity to specification deviations, the evaluator should compare tolerable excursion durations with probable durations. In the basic risk model, variable scoring is geared toward a pipeline failure, basically defined as leakage. Therefore, all previously scored items in the basic risk assessment model will be included in assessing the risk of service interruption. As previously noted, because a service interruption can occur for reasons other than a pipeline leak, some index items must be revisited. Considerations unique to service interruptions will be scored and added to the safety risk scores. When a pipeline failure will not necessarily lead to a service interruption, the assessment becomes more difficult. Once done, care should be exercised in making comparisons; it may not be appropriate to compare the basic risk assessment with an expanded assessment that includes service interruption risk. In keeping with the philosophy of the basic risk model, risk is calculated as the product of the interruption likelihood and consequences:

Service interruption risk = (upset score) x (impact factor)
The impact factor represents the magnitude of potential consequences arising from a service interruption. The upset score is the numerical score that combines all pertinent risk likelihood elements, both risk contributors and risk reducers. It encompasses the two types of service interruptions (excursions): (1) deviations from product specifications and (2) deviations from specified delivery parameters. The upset score also captures any intervention possibilities, in which an event occurs along the pipeline, but an intervention protects the customer from impact. We now look at the upset score in more detail:

Upset score = (PSD + DPD) + IA
where

PSD = product specification deviation: the potential for the product transported to be off-spec for some reason
DPD = delivery parameter deviation: the potential for some aspect of the delivery to be unacceptable
IA = intervention adjustment: the ability of the system to compensate for or react to an event before the customer is impacted. This is a percentage that applies to the difference between actual PSD and DPD scores and maximum possible PSD and DPD scores.

Here is a breakdown of the PSD, DPD, and IA categories:

A. Product Specification Deviation (PSD) 0-80 pts
   A1. Product Origin 20 pts
   A2. Product Equipment Malfunctions 20 pts
   A3. Pipeline Dynamics 20 pts
   A4. Other 20 pts
B. Delivery Parameter Deviation (DPD) 0-460 pts
   B1. Pipeline Failures 400 pts
   B2. Pipeline Blockages 20 pts
   B3. Equipment Failures 20 pts
   B4. Operator Error 20 pts
C. Intervention Adjustment (IA) Up to 80% of [(80 - PSD) + (460 - DPD)]
Total Upset Score 0-540 pts
Note: As with the basic risk assessment model, higher numbers indicate a safer (less risk) situation. Point values are based on perceived frequency and severity of the variables. They are not currently based on statistical evidence but rather on judgments of variable importance relative to other variables that contribute to risk. For example, in the sample point scheme shown above, the variable pipeline blockages plays approximately the same role in risk as does depth of cover (as an aspect of failure potential in the Third-party Index). Figure 10.1, shown earlier, illustrates the calculation of the service interruption risk.
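The sample point scheme can be sketched as a small Python function. The function and variable names are mine; the ranges (0-80 PSD, 0-460 DPD, up to 80% intervention adjustment, 540-point total) follow the breakdown given above, with higher scores indicating a safer situation:

```python
def upset_score(psd, dpd, intervention_pct):
    """Combine PSD (0-80 pts), DPD (0-460 pts), and an intervention
    adjustment (0 to 0.80) into a 0-540 upset score."""
    assert 0 <= psd <= 80 and 0 <= dpd <= 460
    assert 0.0 <= intervention_pct <= 0.80
    # IA recovers a fraction of the points *not* earned in PSD and DPD,
    # reflecting the system's ability to react before the customer is hit.
    ia = intervention_pct * ((80 - psd) + (460 - dpd))
    return (psd + dpd) + ia

def service_interruption_risk(upset, impact_factor):
    """Risk is the product of interruption likelihood and consequences."""
    return upset * impact_factor

# Example: mid-range deviation scores and a 50% intervention capability.
score = upset_score(psd=60, dpd=420, intervention_pct=0.50)
print(score)  # (60 + 420) + 0.5 * (20 + 40) = 510.0
```

A system with perfect scores and full intervention capability reaches the 540-point maximum, consistent with the total shown in the breakdown.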
III. Upset score

A. Product specification deviation (PSD)
Deliveries of products by pipeline are normally governed by contracts that include specifications. Most specifications will state the acceptable limits of product composition as well as the acceptable delivery parameters. Deviations from contract specifications can cause an interruption of service. When formal contracts do not exist, there is usually an implied contract that the product supplied will be in a condition that fits the customer's intended use. When a city resident orders a connection to the municipal gas distribution system, the implied contract is that gas, appropriate in composition, will be supplied at sufficient flow and pressure to work satisfactorily in the customer's heating and cooking systems. The product specification can be violated when the composition of the product changes. This will be termed contamination and will cover all episodes where significant amounts of unintended materials have been introduced into the pipeline product stream. Significant is defined in the specifications. Common contamination episodes in hydrocarbon pipelines involve changes in the following:

- Hydrocarbon composition (fractions of methane, ethane, butane, propane, etc.)
- Btu content
- Water content
- Hydrocarbon liquids
- CO2, H2S
- Solids (sand, rust, etc.).

Some of these contaminants are also agents that promote internal corrosion in steel lines. To assess the contamination potential, the evaluator should first study the sensitivity of the customers. The customer tolerance to hydrocarbon composition changes is the key to how critical this factor becomes in preventing service interruptions. The customer specifications should reflect the acceptable composition changes, although there is often a difference between what can actually be tolerated versus what contract specifications allow. If this becomes a critical issue, interviews with the customer process experts may be warranted.
When the customer is an unsophisticated user of the product, such as a typical residential customer who uses natural gas for cooking and home heating, the manufacturer of the customer's equipment (stove, heater, etc.) will be the more reliable information source for contaminant tolerances. The evaluator must assess potentials in all upstream sections when scoring the possibility of contamination in a given section. General sources are identified as
- Product origin
- Product equipment malfunctions
- Pipeline dynamics
- Other.
These sources are scored qualitatively here because general awarding of points for all possible scenarios is usually not practical. The evaluator is to judge, within guidelines set forth, the potential for excursions from a specific source. To accomplish this, the evaluator should have a clear understanding of the possible excursion episodes. A list can be developed, based on customer specifications, that shows critical contaminants. Along with each potential contaminant, specific contaminant sources can be identified. This list will serve as a prompter for the evaluator as assessments are made. An example is shown in Table 10.1. Optional columns such as detectability and sensitivity can be added to provide more guidance during the evaluation. This will also serve to better document the assessment.
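Such a prompter list can also be held as a small data structure so it can be sorted or filtered during the evaluation. A sketch with illustrative entries; the field names and helper function are mine, not part of the assessment method:

```python
# Hypothetical contaminant checklist entries, in the spirit of Table 10.1.
CONTAMINANT_CHECKLIST = [
    {"contaminant": "water",
     "source": "dehydrator malfunction at foreign pipeline facility",
     "detectable": "yes, detector at city gate",
     "sensitivity": "high"},
    {"contaminant": "CO2",
     "source": "scrubber or amine unit malfunction at processing plant",
     "detectable": "yes, at plant master meter station",
     "sensitivity": "high"},
    {"contaminant": "glycol",
     "source": "pipeline station dehydrator carry-over",
     "detectable": "no on-line detection",
     "sensitivity": "slight"},
]

def undetectable(checklist):
    """Entries with no on-line detection: candidates for extra scrutiny."""
    return [e["contaminant"] for e in checklist
            if e["detectable"].startswith("no")]

print(undetectable(CONTAMINANT_CHECKLIST))  # ['glycol']
```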
A1. Product origin

The possibility for contamination from the point of product origin, including the potential for malfunction in the sourcing equipment, is considered here. If the product source is wells, tanks, reservoirs, processing plants, or pipelines not directly under the control of the operator, the operator of the sourcing facility must be relied on in part to prevent contamination. One possible source of contamination in many hydrocarbon products would be any radical change in the product's hydrocarbon mix. Many product streams are composed of several hydrocarbons. Even a relatively pure natural gas stream will
often contain 5 to 10% molecules heavier than methane (such as ethane, propane, butane, pentane, usually in that order) and the balance as methane molecules. A change in the amount and/or the types of additional molecules in the methane could change the gas Btu content and hence its burning characteristics. The majority of users of natural gas burn the gas, but Btu changes will rarely be a problem for them. Electrical power generation plants often are more sensitive to Btu changes. Hydrocarbon mix changes are commonly seen when the gas source changes, perhaps from a different blending of pipeline supplies, different wells used, different gas compositions within a single well, or changes in the processing of the gas. Many pipeline product streams are blends of several different upstream product streams and hence are sensitive to the proportion mixture from the various streams. If the product source is a processing plant, the composition may depend on the processing variables and techniques. Temperature, pressure, or catalyst changes within the process will change the resulting stream to varying extents. Materials used to remove impurities from a product stream may themselves introduce a contamination. A carryover of glycol from a dehydration unit is one example; an over-injection of a corrosion inhibitor is another. Inadequate processing is another source of contamination. A CO2 scrubber in an LPG processing plant, for example, might occasionally allow an unacceptably high level of CO2 in the product stream to pass to the pipeline. Changes of products in storage facilities and pipeline change-in-service situations are potential sources of product contamination. A composition change may also affect the density, viscosity, and dew point of a gas stream. This can adversely impact processes that are intolerant to liquid formation. The evaluator can develop a qualitative scale as follows to assess the contamination potential from changes at product origin.

Table 10.1 Critical contaminants

Contaminant | Possible sources | Detectable? | Sensitivity
Water | Product origin: dehydrator malfunction at foreign pipeline facility; pipeline dynamics: sweep of free liquids | Yes, detector at city gate | High
CO2 | Product origin: scrubber or amine unit malfunction at processing plant (low flow condition prevents blending) | Yes, at plant master meter station | High
Glycol | Equipment: pipeline station glycol dehydrator carry-over | No on-line detection | Slight
Propane | Product origin: depropanizer malfunction at processing plant | Only if >10% change | High
Solids | Product origin: well sand bypassing separator at foreign well operation; equipment: on-line filter bank pass-through or accidental bypass; pipeline dynamics: pressure/flow changes loosen and carry pipe wall rust flakes | No on-line detection | Only if >20%

High 0 pts Excursions are happening or have happened recently. Customer impacts occur or are only narrowly avoided (near misses) by preventive actions.
Medium 10 pts Excursions have happened in the past in essentially the same system, but not recently; or, theoretically, a real possibility exists that a relatively simple (high-probability) event can precipitate an excursion. Preventive mechanisms minimize customer impacts.
Low 15 pts Rare excursions have happened under extreme conditions. Highly effective and reliable prevention mechanisms exist to correct these rare occurrences. Customer impacts are almost nonexistent.
None 20 pts System configuration virtually disallows contamination possibility. A customer impact has never occurred in the present system configuration. High-reliability, redundant measures are employed to virtually eliminate the possibility of customer impact.

Because products often originate at facilities not under the control of the pipeline operator, the operator can reduce the risk in only limited ways. Preventive actions for point-of-origin contamination episodes include

- Close working relationship with third-party suppliers (inspections, quality monitoring, control charts)
- Monitoring of all pipeline entry points (and possibly even upstream of the pipeline, in the supplier facility itself, for early warning) to detect contamination or potential contamination at the earliest opportunity
- Redundant decontamination equipment for increased reliability
- Arrangements for alternate supplies to shut off offending sources without disrupting pipeline supply
- Plans and practiced procedures to switch to alternate supplies to ensure quick, reliable moves to backup suppliers
- Automatic switching to alternate supplies for the quickest possible reaction to excursions
- Operator training in human error prevention techniques to support prompt and proper detection and reaction to excursions.

Any preventive actions should be factored into the assessment of contamination potential.
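The qualitative scale above maps directly to points. A minimal sketch of that lookup (the dictionary and function names are mine; the point values are from the sample scale, where higher points indicate less risk):

```python
# Point values from the sample qualitative scale; higher points = safer.
CONTAMINATION_POTENTIAL_POINTS = {
    "high": 0,     # excursions happening or recent; impacts occur or near misses
    "medium": 10,  # past excursions, or a simple event could cause one
    "low": 15,     # rare excursions under extreme conditions only
    "none": 20,    # configuration virtually disallows contamination
}

def score_contamination_potential(rating):
    """Translate a qualitative rating into its point value."""
    return CONTAMINATION_POTENTIAL_POINTS[rating.lower()]

print(score_contamination_potential("Medium"))  # 10
```

The same shape of lookup serves the equivalent scales in sections A2 through A4, which use the same High/Medium/Low/None point values.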
A2. Product equipment malfunctions

Pipeline equipment designed to remove impurities on-line can malfunction and allow contaminants to enter the product stream. Some on-line equipment such as dehydrators serves a dual role of protecting the pipeline from possible corrosion agents and eliminating product contamination. Hence, their reliability in preventing contamination will overlap previous analysis of their reliability in preventing internal corrosion. Equipment that is designed to introduce foreign substances into the product stream can also be a source of contamination. Normally, the foreign substances must be kept within a certain concentration range in order to perform their intended function without adversely affecting the product. Corrosion inhibitor liquids or flow-enhancing chemicals are two examples of injected substances. Equipment malfunction or flow regime changes may introduce a higher concentration of these products than was intended.
Offshore pipelines, in which combined streams of hydrocarbon gas, liquids, and water are simultaneously transported, often rely on onshore equipment to perform separation. The potential for contamination from on-line equipment malfunctions is as follows:
High 0 pts Excursions are happening or have happened recently. Customer impacts occur or are only narrowly avoided (near misses) by preventive actions.
Medium 10 pts Excursions have happened in the past in essentially the same system, but not recently; or, theoretically, a real possibility exists that a relatively simple (high-probability) event can precipitate an excursion. Preventive mechanisms minimize customer impacts.
Low 15 pts Rare excursions have happened under extreme conditions. Highly effective and reliable prevention mechanisms exist to correct these rare occurrences. Customer impacts are almost nonexistent.
None 20 pts System configuration virtually disallows contamination possibility. A customer impact has never occurred in the present system configuration. High-reliability, redundant measures are employed to virtually eliminate the possibility of customer impact. No processing equipment is in use.

The following prevention activities can be factored into the evaluation for excursions due to equipment malfunctions:

- Strong equipment maintenance practices to prevent malfunctions
- Redundancy of systems (backups) to increase the reliability of equipment or systems and reduce the probability of overall failures
- Early detection of malfunctions to allow action to be taken before damaging excursions occur.
A3. Pipeline dynamics

Another contamination source is liquids or solids introduced into a product stream by a change in pipeline system dynamics. A possible source of solids could be rust particles displaced from the pipe wall. To cause this, rust would have to be present initially. An accompanying event could be a significant disturbance to the pipe that displaces a large amount of rust at one time. Liquids are another possible contamination source. It is not uncommon for free liquids, both water and heavier hydrocarbons, to be present in low-lying areas of a pipeline. This often occurs in spite of precautionary measures to dry the gas prior to injection into the pipeline. Water and hydrocarbon liquids are often atomized and suspended in the gas stream. Changes in gas stream pressure, velocity, or temperature can cause droplets to form and condense in the pipe. As a liquid, the water and hydrocarbons will gravity flow to the low points of the pipeline. If gas stream velocity is later increased, the liquids may move as a slug, or liquid droplets will be picked up into the gas and carried along the line. It is conservative to always assume the presence of free liquids. Pigging or analysis during high-flow conditions often verifies this assumption.
Previous excursions, perhaps from the other sources listed above, may accumulate and later precipitate major events in this category. Note that pipeline dynamics can also precipitate a service interruption due to a delivery parameter not being met. Pressure surges or sudden changes in product flow may not create a contamination episode, but may interrupt service as a control device engages or the customer equipment is exposed to unfavorable conditions. Even though these are not contamination-related situations, they can be considered here for convenience. The potential for contamination from changes in pipeline dynamics is as follows:

High 0 pts Excursions are happening or have happened recently. Customer impacts occur or are only narrowly avoided (near misses) by preventive actions.
Medium 10 pts Excursions have happened in the past in essentially the same system, but not recently; or, theoretically, a real possibility exists that a relatively simple (high-probability) event can precipitate an excursion. Preventive mechanisms minimize customer impacts.
Low 15 pts Rare excursions have happened under extreme conditions. Highly effective and reliable prevention mechanisms exist to correct these rare occurrences. Customer impacts are almost nonexistent.
None 20 pts System configuration virtually disallows contamination possibility. A customer impact has never occurred in the present system configuration. High-reliability, redundant measures are employed to virtually eliminate the possibility of customer impact. No conceivable change in pipeline dynamics can precipitate an excursion.

These prevention activities can be factored into the assessment of contamination potential due to pipeline dynamics:

- Proven procedures are used for special tasks. Procedures should reflect knowledge and experience in performing pipeline pigging, cleaning, dehydration, etc., in manners that prevent later excursions.
- A "management of change" discipline establishes a protocol that requires many experts to review any planned changes in pipeline dynamics. Such reviews are designed to detect hidden problems that might trigger an otherwise unexpected event.
- Close monitoring/control of flow parameters is conducted to avoid abrupt, unexpected shocks to the system.
A4. Other

This category includes any other potential contamination sources. Examples include improper cleaning of the pipeline after maintenance or a change in service, or infiltration of groundwater into low-pressure distribution system piping. When such "other" events can be envisioned, they can be assessed with a qualitative scale. The potential for contamination from other sources is as follows:
High 0 pts Excursions are happening or have happened recently. Recent pipeline activities allow the possibility of excursions (recent maintenance work, change in service, etc.). Frequent changes in pipeline products are occurring. Customer impacts occur or are only narrowly avoided (near misses) by preventive actions.
Medium 10 pts Excursions have happened in the past in essentially the same system, but not recently; or, theoretically, there exists a real possibility of a relatively simple (high-probability) event precipitating an excursion; occasional changes in product transported occur.
Low 15 pts Rare excursions have happened under extreme conditions. Highly effective and reliable prevention mechanisms exist to correct these rare occurrences. Customer impacts are almost nonexistent.
None 20 pts System configuration virtually disallows contamination possibility. Very stable pipeline uses. A customer impact has never occurred in the present system configuration. High-reliability, redundant measures are employed to virtually eliminate the possibility of customer impact. No other possible contamination events can be envisioned.
B. Delivery parameter deviation (DPD)

The second possibility that must be included in assessing the risk of service interruption is the failure to meet acceptable delivery parameters. Delivery parameters or conditions normally include pressure and flow. Product state conditions (viscosity, density, purity, etc.) are usually covered in the product composition specifications discussed previously. Temperature may be included as either a delivery condition or as part of a product state requirement. General causes of delivery parameter deviations are

- Pipeline failures
- Pipeline blockages
- Equipment failures
- Operator error.

Conditions upstream of the section assessed must be included in the evaluation. As the assessment begins, a list should be developed, based on customer specifications, that shows critical delivery parameters. Along with each potential delivery requirement, specific mechanisms that could upset those parameters should be identified. This list will serve as a prompter for the evaluator as assessments are made. Table 10.2 is an example of such a table. The threat of sabotage will normally increase the risk of pipeline failure and equipment failure, so evaluators should include the sabotage module when this threat is significant.
B1. Pipeline failures

A pipeline failure will usually precipitate a delivery interruption. The possibility of this is scored by performing the basic risk assessment detailed in Chapters 3 through 6. The resulting index sum is a measure of the failure potential.
Table 10.2 Critical delivery parameters

Delivery parameter | Pipeline failure | Pipeline blockage | Equipment failure | Operator error
Flow | Any pipeline failure | Buildup (paraffin, polyethylene, etc.) on pipe walls | Valve closure; pump failure; relief valve opening; control valve malfunction; false signal | Miscalibration; improper procedure
Pressure | Same | Same | Same | Same
Temperature | Same | Same | Heat exchanger failure | Failure to adjust for decreased flow rate
B2. Pipeline blockages Mechanisms exist that can restrict or totally block flow in a pipeline but not lead to a failure of the pipe wall. Common blockages include paraffin or wax plugging as paraffinic hydrocarbons crystallize in the bulk fluid or on the pipe wall; hydrate formation as free water freezes in the flowing product stream; and scale deposits as salts, such as barium sulfate, crystallize on the pipe wall. These mechanisms depend on a host of variables such as chemical compositions, flowing conditions (pressure, temperature, velocity, etc.), and pipe wall condition. While complete flow blockage would usually interrupt pipeline service, partial blockages often cause pressure increases sufficiently high to increase operational costs or reduce flow rates to unacceptable levels. The rate of blockage formation may also be an important variable. A sample qualitative scale to evaluate the potential for blockage follows.
High 0 pts
Blockage will almost certainly occur if mitigating actions are not regularly taken. The formation of the block can occur relatively quickly.
Medium 10 pts
Conditions exist that may cause blockage. Contamination episodes can form blockages.
Low 15 pts
Remote possibility of conditions conducive to blockage formation. Blockage would be very slow in forming.
Impossible 20 pts
Even considering contamination potential, the product will not form blockages in the pipe.

Corrective actions taken include

Monitoring via pressure profile, internal inspection device, etc.
Cleaning (mechanical, chemical, or thermochemical) at frequencies consistent with buildup rates and the effectiveness of the cleaning process
Inhibitors to prevent or minimize buildup.

These should be considered in assessing the blockage potential.
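As an illustration, the qualitative blockage-potential scale above can be captured as a simple lookup. The category labels and the function name below are illustrative conveniences, not terms from the manual:

```python
# Point values for the qualitative blockage-potential scale above.
BLOCKAGE_POINTS = {
    "high": 0,        # blockage almost certain without regular mitigation
    "medium": 10,     # conditions exist that may cause blockage
    "low": 15,        # remote possibility; blockage very slow to form
    "impossible": 20, # product will not form blockages in this pipe
}

def blockage_score(category: str) -> int:
    """Return the blockage-potential points for a qualitative rating."""
    return BLOCKAGE_POINTS[category.lower()]
```

Higher points indicate lower blockage potential, consistent with the scoring convention used throughout this chapter.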
B3. Equipment failures
Any piece of equipment that could upset a delivery parameter should be examined as a potential cause of service interruption.
This includes many safety devices that, while protecting the system from overstressing, could also impact a delivery parameter. An "unwanted action" of such devices was not covered in the basic risk assessment model because such malfunctions do not usually lead to pipeline failure. Therefore, this additional risk item must be added when service interruption is being evaluated. Where redundant equipment or bypasses exist and can be activated in a timely manner, risk is reduced. Weather outages, or outages caused by natural events such as hurricanes, earthquakes, fires, and floods, are also considered here as a type of equipment failure. When such occurrences cause a pipeline failure, they are addressed in the basic risk model. When they cause a service interruption (without a pipeline failure), the probability of the event can be considered here. A common example is an offshore pipeline system that is intentionally shut down whenever large storms threaten.
Pressure and flow regulating equipment
Rotating equipment, such as the pumps and compressors used to maintain specified flows and pressures, is a potential source of specification violation. Because such complex equipment is more prone to failure, it is rare not to have allowances for outages. A whole host of relatively minor occurrences will stop these devices in the interest of safety and the prevention of serious equipment damage.
Flow stopping devices
Devices that will stop flow through a pipeline are potential causes of specification violations. Mainline block valves, including emergency shut-in, automatic, remote, and manual configurations, are included here. When the product source is a subterranean well or reservoir, any and all attached shut-in devices should be considered.

Safety services
Relief valves, rupture disks, and other automatic shutdowns will normally impact delivery parameters when they are tripped. Often, the more complicated the shutdown scheme, the greater the probability of unnecessary triggering of the system. A sophisticated SCADA system can provide quick detection of equipment failures and can be considered a potential prevention opportunity.
Equipment controlling other product properties
Where temperature or temperature-related properties such as density and viscosity are critical customer requirements, malfunctions in heat exchangers, coolers, heaters, etc., are sources of specification violation.
10/216 Service Interruption Risk
Prevention activities for service interruptions caused by equipment malfunctions include
Strong equipment maintenance practices
Regular and thorough inspections and calibrations, including all monitoring and transmitting devices
Redundancy, so that one erroneous signal will not unilaterally cause a shutdown.
The evaluator should consider the number and nature of devices that could malfunction and cause a delivery upset. Taken together with the system dynamics and the mechanisms that prevent equipment failure, the probability can be assessed. Potential for delivery parameter deviation due to equipment failure is as follows:

High 0 pts
Excursions are happening or have happened recently. Customer impacts occur or are only narrowly avoided (near misses) by preventive actions. Weather-related interruptions are common.
Medium 10 pts
Excursions have happened in the past in essentially the same system, but not recently; or, theoretically, a real possibility exists in that a relatively simple (high-probability) event can precipitate an excursion. Occasional weather-related interruptions. Preventive mechanisms (bypass, redundancy, etc.) minimize customer impacts.
Low 15 pts
Rare excursions have happened under extreme conditions. Highly effective and reliable prevention mechanisms exist to correct these rare occurrences. Customer impacts are almost nonexistent. The number of devices is few, and failure potential is extremely low.
None 20 pts
System configuration virtually disallows contamination possibility. A customer impact has never occurred in the present system configuration. Highly reliable, redundant measures are employed to virtually eliminate the possibility of customer impact. There is no equipment in the section.
Reference is made to the phrase "single point of failure." For purposes here, this will mean that one event is sufficient to cause the equipment to fail in a fashion that would precipitate a service interruption. Examples include failures of valve seats, pressure sensors, relief valve springs, relief valve pilots, instrument power supply, instrument supply lines, vent lines, and SCADA signal processing.
Example 10.1: Equipment failure potential
Single points of failure on a section of a high-pressure gas transmission system are identified as

Pressure controller at customer gate
Control valve at meter site (failure possibilities include miscalibration or failure of pressure sensor, loss of instrument power supply, fail-closed behavior, and incorrect signal from SCADA system)
Three automatic mainline block valves
Mainline compressor station where station bypass would not allow sufficient downstream pressure.
Five years of operation show no delivery parameter deviation due to equipment failure. Because many potential points of failure exist, the evaluator would normally score the potential as high. However, with a fairly long history of no excursions, the score is set at 8 points, closer to a "medium" potential. Note that none of the equipment failures in the above example would cause a pipeline failure, but a service interruption has a high chance of occurring.
B4. Operator error
As part of the risk of service interruption, the potential for human errors and omissions should be assessed. The incorrect operations index in the basic risk assessment addresses the human error potential in pipeline failure. An additional qualitative assessment is made here specifically to address the impact of errors on service interruption. While the potential for human error underlies this entire evaluation, one special circumstance has not yet been given enough consideration: the potential for an on-line operational error, such as an inadvertent valve closure, an instrument miscalibration, an unintentional trip of a pump or compressor, or other errors that do not endanger pipeline integrity but can temporarily interrupt pipeline operation. To be complete, errors during maintenance, calibration, and operation of the equipment must all be considered. The evaluator should identify the service interruption events of highest potential and examine them from a human error standpoint. Where a single error from a single operator can precipitate an excursion, the evaluator should examine the training and testing program for assurances that measures are in place to avoid such errors. Other error prevention activities include warning signs or signals, the use of checklists and procedures, and scenario designs that require a sequence of errors before an excursion is possible. A high possibility for human error should be reflected in scoring the potentials for contamination and delivery parameter violation. Sensitivity of operation to human error can be scored using a scale similar to the following:

High 0 pts
An error is easy to make and consequences could be severe. One or more single-point-of-failure opportunities exist. Very little or no checking is in place to catch carelessness.
Medium 10 pts
Relatively difficult for a single error to precipitate a service interruption. A good deal of checking (through teams or the control room) is done to prevent careless errors.
Low 15 pts
System or customer is relatively insensitive to possible single errors. High levels of redundancy exist, or this is an extremely stable system that can be disrupted only when highly unusual circumstances are allowed to continue for long periods of time.
None 20 pts
It is virtually impossible for even a combination of errors to cause a service interruption.
C. Intervention adjustment (IA)
In the basic risk assessment, the possibility for interventions to prevent pipeline failures is included in the index items that are scored. In the service interruption risk, interventions to prevent events that lead to service interruptions are also scored early in the assessment, but then another intervention possibility is factored in. This reflects the opportunity for intervention after an episode has occurred that would otherwise lead to a service interruption. In the risk numbers, this adjustment allows the section score to partially "recover" from low points in episode likelihood. In many pipeline systems for which an uninterruptible supply is critical, extra provisions have been made to ensure that supply. These provisions allow for reactions to events that, if not addressed, would cause service interruptions. Examples include halting the flow of an offending product stream and replacing it with an acceptable product stream, blending a contaminant to reduce concentration levels, treating a contaminant on-line, and notifying the customer so that alternate supplies can be arranged. The reactions can be assessed in terms of their effectiveness in preventing service interruptions after an event has occurred. Even a pipeline failure will not necessarily cause a service interruption. This would be the case if an alternative supply could be found to replace the lost supply or if the leak could be repaired with the pipeline in service. Note that in assessing the effectiveness of a reaction, a time variable may be important: a given reaction may prevent a service interruption for only a certain amount of time, beyond which the interruption will occur. Note that by use of this adjustment factor, a high-probability excursion that has a low probability of actually impacting the customer is recognized and scored differently from the same event that is more likely to impact the customer. Some interventions have already been included in assessing the upset score.
Reconsidering them here is acceptable as long as a consistent approach is used. The intervention adjustment is sensitive to the section being evaluated. System dynamics play a role in assessing interventions. Consideration should be given to systems that are more "forgiving" in that they are slower to react to an upset. An example is a high-pressure, large-volume gas system in which outflows will only slowly depressure the system upon temporary loss of inflows. Contrast this with a small-volume liquid system that is effectively "tight-lined" (inflows balance outflows, with no temporary imbalances tolerable). In this latter case, reaction times are more critical. To score the availability and reliability of interventions, add percentages for all the mitigating actions that are present and functioning. Note that these actions apply to any and all identified episodes of product specification deviation or delivery parameter deviation. If an action cannot reliably address excursions of every type, then intervention credit is awarded only to the benefiting excursion. For example, if an early detection system can find and allow reporting of a contamination episode, but there is no equivalent system to react quickly to a pipeline failure, then the intervention adjustment is applied only to the PSD. Therefore, these percentages will be used to adjust scores for PSD and DPD independently. The percentage will apply to the difference between the actual PSD or DPD score and the maximum possible score, up to 80%. This means that the PSD and/or DPD scores can recover from low-point conditions up to 80% of the maximum possible points. Increasing points in this fashion does not indicate a reduced probability of the event, only the reduced probability of the event causing customer upset. This is an important distinction; see the example at the end of this chapter. A qualitative scale to assign a value to the intervention adjustment follows.
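The adjustment arithmetic described above can be sketched as follows. The function name is a hypothetical convenience, and percentages are expressed as fractions:

```python
def apply_intervention_adjustment(score: float, max_score: float,
                                  adjustment: float) -> float:
    """Let a PSD or DPD score partially 'recover' toward its maximum.

    'adjustment' is the sum of awarded intervention percentages as a
    fraction (e.g., 0.48 for 48%), capped at the 0.80 maximum. The
    capped fraction is applied to the gap between the actual score
    and the maximum possible score.
    """
    fraction = min(adjustment, 0.80)
    return score + fraction * (max_score - score)
```

For instance, Example 10.2 later in this chapter applies a 48% adjustment to a 359-point upset score against a 540-point maximum, yielding roughly 446 points.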
Early detection is not always possible: no adjustment awarded
When the excursion is not detectable, reactionary intervention is not possible. When some of the possible excursions are detectable, score according to the next category.

Early detection/action: up to 30%
Maximum points are awarded when all excursion episodes are detected early enough to allow action to be taken to prevent or minimize customer interruption. This may be at a point where contaminated product is prevented from entering the pipeline, where product streams may be blended to reduce contaminant levels to appropriate concentrations, where alternate delivery equipment can be brought into play, or where alternate sources can be routed to the customer. The reliability of detection must be considered here. The time to detect and take action must include time to receive, interpret, and respond to the detection information. Indirect indications, such as a pressure drop after an accidental valve closure, serve as detection mechanisms. Note that unusual events will normally require more human analysis time before action is taken. Emergency drills can be a useful measure of detection/reaction times. Often a point on the pipeline near the customer may have a problem (such as a closed mainline valve) for which there would not be enough time to make a meaningful early detection and notification. When some excursion types can be detected and some may not be, or when detection is not reliable, no more than 10% should be awarded.
Customer warning is sufficient to prevent an outage for that customer: 50%
These percentage points are awarded only when there exists a unique situation in which, by the action of notifying the customer of a pending specification violation, that customer can always take action to prevent an outage. Coupled with a reliable early detection ability, this allows an 80% (30% + 50%) factor to reduce the service interruption potential. An example would be an industrial consumer with alternative supplies where, on notification, the customer can easily switch to an alternate supply.

Customer warning will minimize impact (but not always prevent an outage): 10%
When a customer early warning is useful but will not always prevent an outage, these percentage points are awarded. An example would be an industrial user who, on notification of a pending service interruption, can perform an orderly shutdown of an operation rather than an emergency shutdown, with its inherent safety and equipment damage issues. Almost every customer will benefit to some degree from early warning. Even residential gas users, given a few moments' notice before an outage, can make plans and adjustments to better respond to the service interruption. The customer's ability to react to the notification should be measured assuming the most likely detection/notification time period.
Redundant equipment/supply: 25%
Points are awarded here when more than one line of defense exists in preventing customer service interruption. For maximum points, there should be no single point of failure that would disable the system's ability to prevent an excursion. Credit can also be given for system configurations that allow rerouting of product to blend out a high contaminant concentration or otherwise keep the customer supplied with product that meets the specifications. The redundancy must be reliably available in a time frame that will prevent customer problems.
Percentage points, up to a maximum of 25%, should be awarded when the switching system has

Human intervention required: 0%
Automatic switching: 5%
Regular testing of switching to alternative sources: 6%
Highly reliable switching equipment: 10%
Knowledgeable personnel who are involved in switching operations: 12%
Contingency plans to handle possible problems during switching: 2%

Operator training/procedures: 15%
Points are awarded here when operator training plays a role in preventing or minimizing consequences of service interruption episodes. Training to reduce the likelihood of episodes was already covered in the Operator error section. Operator training is important in the calibration, maintenance, and servicing of detection and mitigation equipment, as well as in monitoring and taking action from a control room. The evaluator should look for active procedures and training programs that specifically address service interruption episodes. The availability of emergency checklists, the use of procedures (especially when procedures are automatically computer displayed), and the knowledge of operators are all indicators of the strength of this item.

Emergency/practice drills: 10%
Points are awarded here when drills can play a role in preventing or minimizing service interruptions. While drilling can be seen as a part of operator training, it is a critical factor in optimizing response time and is considered as a separate item to be scored here. Maximum points should be awarded where regular drills indicate a highly reliable system. Especially when human intervention is required, and especially where time is critical (as is usually the case), drilling should be regular enough that even unusual events will be handled with a minimum of reaction time. Again, these percentages, up to a maximum of 80%, apply to the differences between actual and maximum points in the PSD and the DPD.
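A minimal sketch of how an evaluator might accumulate the itemized switching credits, assuming each awarded item is recorded as a fraction. The 25% cap comes from the text; the function name and dictionary form are illustrative choices:

```python
def redundancy_credit(awarded: dict[str, float], cap: float = 0.25) -> float:
    """Sum the switching-related credits the evaluator has awarded
    (each as a fraction, e.g., 0.05 for 5%) and cap the total at the
    25% maximum for the redundant equipment/supply item."""
    return min(sum(awarded.values()), cap)
```

For example, a system with automatic switching (5%) and regular testing (6%) would earn an 11% credit toward this item.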
This reflects the belief that reliable intervention mechanisms can reduce the chance of a customer impact due to an excursion of either type.
Example 10.2: Service interruption potential
In this example, the XYZ natural gas transmission pipeline has been sectioned and evaluated using the basic risk assessment model. This pipeline supplies the distribution systems of several municipalities, two industrial complexes, and one electric
power generation plant. The most sensitive of the customers is usually the power generation plant. This is not always the case, because some of the municipalities could replace only about 70% of the loss of gas on service interruption during a cold weather period. Therefore, there are periods when the municipalities might be the critical customers. This is also the time when the supply to the power plant is most critical, so the scenarios are seen as equal. Notification to customers minimizes the impact of the interruption because alternate supplies are usually available at short notice. Early detection is possible for some excursion types, but for a block valve closure near the customer or for the sweeping of liquids into a customer service line, at most only a few minutes of advance warning can be assumed. There are no redundant supplies for this pipeline itself. The pipeline has been divided into sections for risk assessment. Section A is far enough away from the supplier that early detection and notification of an excursion are always possible. Section B, however, includes metering stations very close to the customer facilities. These stations contain equipment that could malfunction and not allow any time for detection and notification before the customer is impacted. Because each section includes conditions found in all upstream sections, many items will score the same for these two sections. The potential for service interruption for Section A and Section B is evaluated as follows:

Product specification deviation (PSD)

Product origin 15 pts
Only one source, comprising approximately 20% of the gas stream, is suspect, due to the gas arriving from offshore with entrained water. Onshore water removal facilities have occasionally failed to remove all liquids.

Equipment failure 20 pts
No gas treating equipment in this system.

Pipeline dynamics 11 pts
Past episodes of sweeping of fluids have occurred when gas velocity increases appreciably. This is linked to the occasional introduction of water into the pipeline by the offshore supplier mentioned previously.

Other 20 pts
No other potential sources identified.

Delivery parameter deviation (DPD)

Pipeline failure 242 pts
From previous basic risk assessment model.

Blockages 20 pts
No mechanisms to cause flow stream blockage.

Equipment 15 pts
Automatic valves set to close on a high rate of change in pressure have caused unintentional closures in the past. Installation of redundant instrumentation has theoretically minimized the potential for this event. However, the evaluator feels that the potential still exists. Both sections have equivalent equipment failure potential.

Operator error (Section A) 16 pts
Little chance for service interruption due to operator error. No automatic valves or rotating equipment. Manual block valves are locked shut. Control room interaction is always used.

Operator error (Section B) 12 pts
A higher chance for operator error due to the presence of automatic valves near customers and relief valves in this section.

Section A total = 15 + 20 + 11 + 20 + 242 + 20 + 15 + 16 = 359 points
Section B total = 355 points
Reactive interventions are next evaluated. For Section A, it is felt that system dynamics allow early detection and notification of any of the excursions that have been identified. The volume and pressure of the pipeline downstream of Section A would allow an adequate response time to even a pipeline failure or valve closure in Section A. Percentages are awarded for early detection (30%), notification where the customer impact is reduced (10%), and training (8%). These percentages apply to all excursion types and, hence, increase the overall score based on the difference between actual and maximum scores. Therefore, Section A scores 48% x (540 - 359) + 359 = 446 points in upset score. Early notification is not able to provide enough warning for every excursion case in Section B, however. Therefore, reactive interventions will apply only to those excursions that can be detected, namely, those occurring upstream of Section B. For the types of excursions that can be detected in a timely manner (product origin and pipeline dynamics problems), percentages are awarded for early detection (30%), notification where the customer impact is reduced (10%), and training (8%). Percentages are applied to the differences between actual and maximum scores. The potential for service interruption (upset potential) for Section B is therefore the point total at far right, 361. This analysis shows a much higher potential for service interruption for episodes occurring in Section B (361 points) as opposed to episodes in Section A (446 points). The impact factor would be calculated next. A direct comparison between the two sections for the overall risk of service interruption can then be made:
Excursion type | Intervention adjustment | Score
Product origin | 30 + 10 + 8 = 48% | 48% x (20 - 15) + 15 = 17
Product equipment | N/A | 20
Pipeline dynamics | 30 + 10 + 8 = 48% | 48% x (20 - 11) + 11 = 15
Other | N/A | 20
Pipeline failure | 0 | 242
Pipeline blockages | N/A | 20
Pipeline equipment | 0 | 15
Operator error | 0 | 12
Section B total | | 361
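The arithmetic of Example 10.2 can be reproduced directly from the values in the text. Rounding each adjusted item to the nearest point is an assumption made here to match the published totals:

```python
def adjusted(score, max_score, fraction):
    # Intervention adjustment applied to the gap below the maximum score.
    return score + fraction * (max_score - score)

# Section A: the 48% adjustment applies to the whole 359-point total
# against the 540-point maximum.
section_a_total = 15 + 20 + 11 + 20 + 242 + 20 + 15 + 16        # 359
section_a_upset = round(adjusted(section_a_total, 540, 0.48))   # 446

# Section B: only the detectable excursion types (product origin and
# pipeline dynamics) receive the 48% adjustment; the other items keep
# their raw scores, and Section B's operator error score is 12.
section_b_items = [
    adjusted(15, 20, 0.48),   # product origin
    20,                       # product equipment
    adjusted(11, 20, 0.48),   # pipeline dynamics
    20,                       # other
    242,                      # pipeline failure (no timely detection)
    20,                       # pipeline blockages
    15,                       # pipeline equipment
    12,                       # operator error (Section B)
]
section_b_upset = sum(round(x) for x in section_b_items)        # 361
```

This reproduces the 446-point (Section A) and 361-point (Section B) upset scores compared in the text.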
IV. Service interruption impact factor
One of the real consequences associated with a pipeline outage is the cost of the interruption in service. Such an interruption can occur through a pipeline leak, a product contamination episode, or a loss of delivery pressure due to a non-leak event. Because pipe failures are modeled as complete line ruptures in this assessment, most failures will lead to service interruptions (from the failed section, at least), but, as previously covered, not all service interruptions are due to pipeline failures. Costs associated with pipeline failure and spilled product are indirectly assessed in the basic risk model and the optional environmental
module. This is done through the determination of consequence severity based on the pipeline surroundings. Note that a high potential cost of a pipeline failure would be addressed in the assessment of the pipeline surroundings in the basic risk assessment model (leak impact factor). Those implied costs (damages, injuries, etc.) are not repeated in this module, even though they are legitimately an aspect of that particular type of service interruption. Some customers can incur large losses if interruption occurs for even an instant. An example is an electric power generation unit that uses natural gas to fire turbines. Upon interruption of fuel to the turbine, all power generation might stop. Restarting such an operation is often a hugely expensive undertaking because of the complexity of the process: many variables (temperature, pressure, flow rates, equipment speeds, etc.) must be simultaneously brought to acceptable points, computers must be reprogrammed, safety systems must be reset, and so on. A similar situation exists for a petrochemical processing plant. If the feedstock to the plant (perhaps ethane, propane, or crude oil) is interrupted for a long period, the plant must shut down. Again, costs to restart the operation are often enormous. Many operations that are this sensitive to service interruption will have redundant sources of product that reduce the possibilities of loss. In a residential situation, if the pipeline provides heating fuel under cold conditions, loss of service can cause or aggravate human health problems. Similarly, loss of power to critical operations such as hospitals, schools, and emergency service providers can have far-reaching repercussions. While electricity is the most common need at such facilities, pipelines often provide the fuel for the generation of that electricity. Some customers are impacted only if the interruption lasts for an extended period of time.
Perhaps an alternative source of product is available for a short time, after which consequences become more severe. The most obvious cost of service interruption is the loss of pipeline revenue due to curtailment of product sales. Other costs include

Legal action directed against the pipeline operation
Loss of some contract negotiating power
Loss of some market share to competitors
Loss of funding/support for future pipeline projects.

Legal action, for purposes of this module, can range from breach-of-contract action to compensation for customer losses. There is often a direct legal responsibility to compensate for specified customer losses. In addition, there is an implied legal responsibility that will no doubt be translated into compensation for damages not directly specified by contracts. The possibility and severity of legal action will depend on the legal system of the area and the degree of harm suffered by the customer. In certain cultures and societies, a real but not-so-obvious cost of service interruption exists. This can be termed the "sponsorship loss" of an interruption. Simply stated, the loss of service to certain customers can have more severe consequences than an equivalent loss to other, similar customers. The critical customer often has a degree of power or influence over the pipeline operation. If this customer becomes hostile toward the operation, consequences such as loss of funding or
dismissal of key personnel or loss of political support are possible in some cases. In some societies, the loss of service to a critical customer might have the opposite effect. In this case, the interruption of service might bring emphasis to a need for resources. If the critical customer has her attention brought to such a need, her power and influence might be favorably directed toward the acquisition of those resources. Where such situations exist, this additional risk may not be well publicized, but, in the interests of thoroughness, it should be considered in some fashion. Loss of credibility, loss of shareholder confidence, and the imposition of new laws and regulations are all considered political costs of pipeline failure. It is realistic to assume that, in most situations, regulatory burdens will increase with a higher incidence of pipeline accidents, and perhaps even as a result of severe service interruptions. These burdens might be limited to more regulatory inspection and oversight, or they might also include more requirements of the pipeline. Arguably, some regulatory reactions to incidents are somewhat exaggerated and politically motivated. This can be a reaction forced by an outraged public that insists on the most reliable pipeline operation. Regardless of the initiating mechanism, regulatory requirements represent a real cost to the pipeline operation. In a capitalist economy, loss of shareholder confidence can be reflected in a reduced stock price. This in turn might reduce the company's ability to carry on financial transactions that otherwise might have enhanced its operation. A lower stock price might also impact the company's operating costs if the "cost of money" is higher as a result of the stock price change. This in turn will affect the resources available for pipeline operations. Loss of credibility reduces the company's effectiveness in contract negotiations.
The ability to show a superior performance and reliability record commands a premium to some customers. In a competitive market, such a record is especially valuable because it sets one company apart from others. The common denominator in all of these aspects of cost of service interruption is the cost. This cost can generally be expressed in monetary terms. Even the cost of human safety can be expressed in monetary terms with some degree of success (see Chapter 14). Some aspects are easily quantifiable and, hence, easy to score in this risk assessment. Other aspects are indirect costs and are not easily scored. A weighting scheme is needed to place the various aspects in proper relation to one another. The evaluator is urged to carefully examine the model scheme presented here to see if this model is appropriate for the socioeconomic situation of the pipeline to be evaluated. Costs are relative and must be expressed as monetary amounts or as percentages of some other benchmark.
Revenues
Revenues from the section being evaluated are thought to be a reasonable measure of the value of that section. Note that a section's revenues must include revenues from all downstream sections. This automatically values a "header" or larger upstream section higher than a single-delivery downstream section. Comparing the revenues for the section evaluated with the total revenues provides the basis needed to score the consequence. Note that the total revenues can be for the pipeline company as a whole, for a specific region, or for specific products, depending on the type of comparisons desired. The revenue is intended to be a measure of the importance of the section from a business standpoint. It must be acknowledged that this is an imperfect measure, since complicated business arrangements can obscure the actual value of any specific pipeline section. Within a single pipeline section, there might be product destined for several markets at several prices. Product in the pipeline might be owned by other parties, with the pipeline operator obtaining revenues from the transportation service only. Sales should include all revenue generated by the pipeline section while in service. When only transportation fees are received, the annual sales should include those transportation fees plus a figure representing the value of the product itself.

Outage period
The costs associated with a service interruption will usually be related to the duration of the outage. For convenience, direct costs that are time dependent are normalized to monthly values. While any time frame could be used, a month is chosen because quarterly or annual figures might overshadow the one-time costs, and shorter periods might be inconvenient to quantify. Other outage periods may be more appropriate, depending on product value and the magnitude of one-time costs. While it is not anticipated that an outage will last for a month (most will last hours or days), this is a time frame that will serve to normalize the costs.

V. Scoring the cost of service interruption
The costs of a service interruption are grouped as direct costs and indirect costs.

Direct costs
Using the somewhat arbitrary outage period of 1 month, a worksheet can be developed to tabulate the direct costs (see Table 10.3).

Table 10.3 Cost of service interruption worksheet: direct costs

Monthly revenue from this pipeline segment: $ __ per month
Direct costs:
Loss of sales: $ __ per month
Value of product in section: $ __
Damages to be paid per contract: $ __
Probable additional damages to be paid: __ instances x $ __ average cost per incident
Costs of not receiving product into pipeline (interruption of a supplier): $ __ per month
Total direct costs: $ __ per month

It can be conservatively assumed that the event that caused the service interruption also caused the loss of the product contained in the pipeline section. The value of the product lost will then be part of the direct costs. This will obviously not hold true for most service interruption episodes, but always including it will ensure consistency in this evaluation.
Indirect costs

These costs are difficult to calculate and are very situation specific. When no better information is available, it is recommended that a default percentage of the direct costs be used to encompass the total indirect costs. Possible default values for such a qualitative assessment are as follows:

High (direct costs x 2.0): High-profile customers impacted. Large-volume single users; many individual customers. Notable or critical customers (hospital, school, strategic industry, etc.) impacted. Legal action is probable. Competitors will benefit. Public outrage is possible. A high degree of unfavorable publicity is possible. Additional impacts downstream of the customer being supplied. High political costs possible.

Neutral (direct costs x 1.0): No critical services are interrupted. Public concern would have to be addressed. Some negative publicity. Isolated episodes of legal action are anticipated.

Low (direct costs x 0.5): Little or no legal action anticipated. The competition factor is not relevant. No critical services are interrupted.

Note that the actual costs can be dramatically higher in a specific situation. Use of this default provides a convenient method to acknowledge the existence of indirect costs even when they cannot be accurately quantified. Because a relative assessment is being done, absolute accuracy is not critical. Alternatively, when indirect costs can be identified and quantified, a worksheet can be designed to tabulate these costs (see Table 10.4).

Table 10.4 Cost of service interruption worksheet: indirect costs

  Loss of future sales (includes any reduction in
    contract negotiating power)                       $ ____ per month
  Loss of financial/legislative support               $ ____ per month
  Cost of increased regulatory burden                 $ ____ per month
  Total indirect costs                                $ ____ per month

We then combine the two worksheets (Tables 10.3 and 10.4):

  Total costs (direct and indirect)                   $ ____
  Total costs / monthly pipeline revenues             ____

Quantifying costs versus revenues in this fashion automatically weights higher those pipeline sections that are more critical. A section of pipe near the termination of a major transmission line, for example, carries a high annual sales volume and will score a high cost of service interruption. The impact factor is then calculated based on the ratio of service interruption costs to total revenues (as defined earlier). This ratio is multiplied by 10 only to make the numbers easier to handle:

  Impact factor = (total costs / revenues) x 10

The impact factor should be constrained to never be less than 1.0. A higher number indicates a greater impact. The upset score is divided by the impact factor to arrive at the risk of service interruption.
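The impact factor and risk-of-service-interruption calculation described above can be sketched as a short function. This is an illustrative sketch, not code from the book; the function names are mine, and the only rules encoded are the ones stated in the text (ratio of costs to revenues, scaled by 10, constrained to a minimum of 1.0, then divided into the upset score).

```python
def impact_factor(total_costs, monthly_revenues):
    """Ratio of service interruption costs to revenues, scaled by 10.

    Constrained to never be less than 1.0, per the scheme in the text.
    A higher value indicates a greater impact.
    """
    return max(1.0, 10.0 * total_costs / monthly_revenues)

def risk_of_service_interruption(upset_score, total_costs, monthly_revenues):
    # The upset score is divided by the impact factor; lower results
    # indicate a higher (worse) risk of service interruption.
    return upset_score / impact_factor(total_costs, monthly_revenues)
```

A segment whose interruption costs are trivial relative to revenues simply retains its upset score, because the factor floors at 1.0.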
Example 10.3: Low indirect costs (Case A)

The section of pipeline being evaluated for this example is a gas transmission line serving a single user: a power cogeneration plant. This plant has no alternative supplies. The contract with the pipeline company specifies that the pipeline company is responsible for any and all damages resulting from a service interruption (unless that interruption is caused by force majeure: natural disaster, acts of God, war, etc.). Damages would include costs to the power plant itself (lost power and steam sales, cost of restarting) plus damages to the power and steam users. The service interruption potential (upset score) was previously scored as 484 points. Gas sales to the plant are valued at approximately $9,000 per month. Company-wide gas sales are approximately $7,200,000 per year. The volume of gas in this section of pressurized pipe is valued at approximately $11,000. Power plant restart costs are estimated to be $60,000 (including some possible equipment damage costs). Damages to power and steam users (customers of the power plant) are estimated to be $0.5 million per year. The costs of not getting contracted volumes of gas into the pipeline are estimated at $2,600 per month. Indirect costs are thought to be low because most costs are already covered in the direct costs (because they are specified in the contract). Also, the customers impacted are all industrial, with fewer anticipated repercussions (not already covered by the contract) from an interruption. Indirect costs are scored as 0.5 x direct costs.
Revenue loss = $9,000
Direct costs = 9,000 + 11,000 + 60,000 + 500,000/12 + 2,600 = ~$125,000
Indirect costs = 0.5 x 125,000 = ~$63,000
Total costs = ~$188,000
Total revenues (company-wide) = $7,200,000 per year = $600,000 per month
Impact factor = 10 x (188,000 / 600,000) = 3.1
Risk of service interruption = 484 / 3.1 = 156

This is seen as a critical pipeline section in terms of risk of service interruption, due to the relatively low score of 156.
Example 10.4: Low indirect costs (Case B)

The section being evaluated is a high-pressure liquefied petroleum gas (LPG; propane and ethane mixture) pipeline that serves an industrial complex. This line scored 391 points in a previous evaluation for potential for service interruption (upset score). The industrial plant has alternate sources of LPG from nearby storage facilities. The contract with the pipeline company allows for some service interruptions with only minor penalties. The value of product sold and transported via this section is approximately $2,000,000 per month. All pipeline LPG business in this company amounts to approximately $27,000,000
per month. The product contained in the section is valued at approximately $2,000. Service interruption penalties per contract are $3,000. No other direct costs are foreseen. Indirect costs are considered to be low.

Revenue loss = $2,000,000
Direct costs = 2,000,000 + 2,000 + 3,000 = $2,005,000
Indirect costs = 0.5 x 2,005,000 = ~$1,002,500
Total costs = ~$3.0 million
Impact factor = 10 x (3,007,500 / 27,000,000) = 1.1
Risk of service interruption = 391 / 1.1 = 355
Example 10.5: High indirect costs

This section of gas transmission pipeline supplies two municipal distribution systems, each of which has alternate supplies to provide approximately 50% of the peak winter load. Gas sales that would be lost on interruption of this section are estimated to be $18 million per month. Total company gas sales are approximately $60 million per month. The volume of the gas in the section is valued at $47,000. Costs for rerouting gas supplies to assist the alternate suppliers and the costs of fulfilling contractual obligations for gas purchases are estimated at $2.1 million per month. A previous analysis has scored the potential for service interruption (upset score) at 422 points.

Indirect costs are seen to be high. There would be a great deal of public discomfort and possibly related health problems associated with a winter outage. The present regulatory environment would probably overreact to any serious pipeline problem, due to loud public reaction as well as the fact that many legislators themselves would be impacted. Many businesses and light industrial users would experience business losses that might prompt legal action against the pipeline company. In the present competitive environment, it is believed that some amount of sales would be permanently lost due to an outage. The evaluator scores the indirect costs at a factor of 1.9. Had there been no redundant supplies at all, the factor would have been 2.0.

Revenue loss = $18 million
Direct costs = 18,000,000 + 47,000 + 2,100,000 = $20,147,000
Indirect costs = 20,147,000 x 1.9 = ~$38.2 million
Total costs = ~$58.3 million
Impact factor = 10 x (58.3 million / 60 million) = 9.7
Risk of service interruption = 422 / 9.7 = 43.5

This is also a critical pipeline section, due to the low score for service interruption.

Table 10.5 Comparison of service interruption examples

Example  Upset score  Impact factor  Risk of service interruption(a)  Notes
10.3     484          3.1            156    Least potential for service interruption (high upset score), with moderate impact
10.4     391          1.1            355    Lowest impact from a service interruption
10.5     422          9.7            43.5   Highest risk, due to high consequences if this line segment is out of service

(a) Higher numbers are safer (less risk).
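As an arithmetic check on Table 10.5, the upset scores and impact factors from the three examples can be combined in a few lines. The figures below are simply the tabulated values; rounding the impact factor to one decimal place before dividing reproduces the risk scores as the examples state them.

```python
# (upset score, impact factor) pairs taken from Table 10.5
table_10_5 = {
    "10.3": (484, 3.1),
    "10.4": (391, 1.1),
    "10.5": (422, 9.7),
}

for name, (upset, factor) in table_10_5.items():
    # Lower resulting scores indicate a higher risk of service interruption.
    risk = upset / factor
    print(f"Example {name}: risk of service interruption = {risk:.1f}")
```

Example 10.5 scores lowest (worst) even though its upset score falls between the other two, which is the point of the table: consequence weighting can dominate the result.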
Nonmonetary modeling

In some countries, an economic model that involves pipeline revenues, product values, transportation fees, business competition, and legal costs is not appropriate. Despite the lack of direct monetary relationships, certain customers or groups of customers can usually be identified as more critical than others in terms of service interruption; hospitals, schools, and certain industries are possible examples. In these cases, emphasis is placed on product uses that are viewed as more valuable, even if that value is not expressed in monetary terms. Scoring the risk of service interruption in such cases may be less complicated than it is for more directly business-driven pipeline operations. The evaluator can assign criticality values instead of monetary values. Qualitative values of high, medium, and low (or more categories if needed) would distinguish consequences of service interruption. A qualitative impact factor scale can then be used in combination with the service interruption potential (upset score) to score the risk.
Distribution Systems
I. Background

There are many similarities between transmission and distribution systems, but there are also critical differences from a risk standpoint. A transmission pipeline system is normally designed to transport product to large end-users such as a distribution system, which in turn delivers that product to all users in towns and cities (e.g., natural gas for cooking and heating, or water for multiple uses, is delivered to homes and other buildings by the distribution system within a municipality). The similarities between transmission and distribution systems arise because a pressurized pipeline installed underground will experience threats common to all such systems. Differences arise due to varying material types, pipe connection designs, interconnectivity of components, pressure ranges, leak tolerance, and other factors. Chapters 3 through 7 detail a risk assessment system to measure pipeline risks with a focus on transmission systems. Distribution systems present some different issues for the risk evaluator, as discussed in this chapter.

For purposes of this chapter, a distribution pipeline system will be considered to be the piping network that delivers product from the transmission pipeline to the final user (i.e., the consumer). This includes the low-pressure segments that operate at pressures close to those of the customers' appliances, as well as the higher pressure segments that require pressure regulation to control the pressure to the customer. The most common distribution systems transport water and natural gas, although steam and other product systems are also in use. An easy way to picture a distribution system is as a network or grid of mains, service lines, and connections to customers (see Figure 11.1). This grid can then be envisioned as overlaying
the other grids of streets, sewers, electricity lines, phone lines, and other utilities. This chapter offers ideas and guidance for risk assessments primarily for natural gas distribution systems and water transmission and distribution systems. Historically, operators of natural gas distribution systems have been more aggressive in applying risk management practices, specifically addressing repair-and-replace strategies for their more problematic components. These strategies incorporate many risk assessment and risk management issues, including the use of scoring models for screening and risk assessments. Many of these concepts will also generally apply to wastewater systems and any other pipeline operations in predominantly urban environments.
Comparisons

All pipeline systems share similar risk profiles. All are vulnerable, to varying degrees, to external loadings including third-party damage, corrosion, fatigue, overstressing (often due to high internal pressures), and human error. When the pipelines are in similar environments (buried versus aboveground, urban versus rural, etc.) and have common materials (steel, polyethylene, etc.), the similarities become even more pronounced. Similar risk mitigation techniques are commonly chosen to address similar risks. There are often only a few areas of the risk assessment technique that must be modified for either a distribution or a transmission system.

Unfortunately, safety data are limited for pipeline operations of all types. However, municipal distribution systems, both water and gas, usually have much more leak data available than hydrocarbon transmission systems. There appears to be a readily identifiable reason for this difference, as discussed later in the pipeline integrity section of this chapter. A common complaint among most distribution system operators is the incompleteness of general system data relating to material types, installation conditions, and general performance history. This situation seems to be changing among all operators, most likely driven by the increased availability and utility of computer systems to capture and maintain records, as well as the growing recognition of the value of such records. Despite companies' increased data availability, it is difficult to make meaningful correlations among all of the factors believed to play a significant role in accident frequency and consequence. These factors, however, can be identified and considered in a somewhat qualitative sense, pending the acquisition of more comprehensive data. For these reasons, and for the benefits of consistency, an indexing approach for distribution lines that parallels the basic pipeline risk analysis (transmission pipelines) is recommended. The primary differences, from a risk perspective, among pipeline systems include:

- Materials and components
- Pressure/stress levels
- Pipe installation techniques
- Leak tolerance
[Figure 11.1 Typical gas distribution system. The figure shows district regulators, customer meter and regulator sets, a large industrial customer, and low-pressure (0.5 psig) and medium-pressure (0.5-25 psig) piping.]
Distribution systems differ fundamentally from transmission systems by having a much larger number of end-users or consumers, requiring specific equipment to facilitate product delivery. This equipment includes branches, meters, pressure reduction facilities, etc., along with associated piping, fittings, and valves. Curb valves, curb cocks, or curb shutoffs are additional valves, usually placed at the property line, to shut off service to a building. A distribution, gas, or water main refers to a piece of pipe that has branches, typically called service lines, that deliver the product to the final end-user. A main, therefore, usually carries more product at higher pressure than a service line. Where required, a service regulator controls the pressure to the customer from the service line.

The operating environments of distribution systems are often materially different from those of transmission systems. Normally located in highly populated areas, distribution systems are generally operated at lower pressures, built from different materials, installed under and among other infrastructure components such as roadways, and transport less hazardous materials. (Although natural gas is a hazardous material due to its flammability, distribution systems do not normally transport the high-pressure, more hazardous toxic and flammable materials that are often seen in transmission lines.) Many distribution systems are much older than transmission lines and, hence, employ a myriad of design techniques and materials that were popular during various time periods. They also generally require fewer pieces of large equipment such as compressors (although water distribution systems usually require some amount of pumping). Operationally, significant differences from transmission lines include monitoring (SCADA, leak detection, etc.), right-of-way (ROW) control, and some aspects of corrosion control.
Because of the smaller pipe sizes and lower pressures, leak sizes are often not as big in distribution systems as they are in transmission systems; however, because of the environment (e.g., in towns and cities), the consequences of distribution pipe breaks can be quite severe. Also, the number of leaks seen in distribution systems is often higher. This higher frequency is due to a number of factors that will be discussed later in this chapter.
II. System integrity

Pipeline system integrity is often defined differently for hydrocarbon transmission versus distribution systems. In the former, the system must not allow any leakage (beyond the microscopic, virtually undetectable amounts), so integrity normally means "leak free." This intolerance of even the smallest leak is due to the potential consequences from leaks of any size. Many distribution systems, on the other hand, tolerate some amount of leakage; system integrity is considered compromised only when leakage becomes excessive.

The higher leak tolerance leads naturally to a greater incidence of leaks in a distribution system. These are often documented, monitored, and placed on "to be repaired" lists. Knowledge of leaks and breaks is often the main source of system integrity knowledge. It, rather than inspection information, is usually the first alert to issues of corrosion of steel, graphitization of cast iron, loss of joint integrity, and other signs of system deterioration. Consequently, risk modeling in urban distribution systems has historically been more focused on leak/break history. Coupled with the inability to inspect many portions of an urban distribution system, this makes data collection for leaks and breaks even more critical to those risk management programs. Several sections of this chapter and pages 301-302 of Chapter 14 further discuss the application of leak/break data to risk assessment and risk management. Table 14.13 shows an example of predicting leak/break probabilities based on relative risk assessment results.
System losses

Unaccounted-for gas and system losses are terms common to distribution operators. Normally expressed as a percentage of total system throughput, the terms relate to imbalances between the quantities of product received into the system and the quantities of product delivered out. In a modern, well-maintained system, system losses will be a small percentage of the total quantities moved and are often not a real loss of product; rather, they are caused in large part by the accuracy limitations of flow measurement devices. This is not surprising when it is noted that the measurement accuracy depends on the accuracy of several instruments, not just one. Gas flow measuring instruments include a volumetric meter, a pressure meter, sometimes a Btu meter, and possibly others. Other sources of unaccounted-for product include intentional and unintentional releases. Intentional releases of pipeline products are often necessary to perform a variety of maintenance and construction activities on a pipeline. Unintentional releases are, of course, primarily pipeline leaks. Although the system loss number depends on many factors such as system age, complexity, and operating practices, it can provide the risk evaluator with a general sense of how "tight" the distribution system is. The operator's understanding and use of these numbers to reach improvement goals may also provide insight into the company's philosophy and professionalism and, in that respect, might be as important as the numbers themselves.
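The unaccounted-for-product calculation described above is simple arithmetic, sketched below for clarity. The function name and the sample balance figures are illustrative, not from the book; real system-loss accounting would also net out measured intentional releases.

```python
def unaccounted_for_pct(received, delivered):
    """System losses ('unaccounted-for gas') as a percentage of total
    product received into the system over the same accounting period."""
    return 100.0 * (received - delivered) / received

# Hypothetical monthly balance (e.g., in MMcf): metering error and small
# leaks typically leave the two totals slightly out of balance.
loss_pct = unaccounted_for_pct(received=1_000.0, delivered=985.0)  # 1.5%
```

A persistently rising percentage, rather than any single month's figure, is what would suggest real leakage instead of measurement scatter.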
III. Risk modeling

Risk management efforts

As noted, risk management approaches for distribution systems seem to have been focused on pipeline break forecasting. Emphases have been on support for "repair versus replace" decision making and on strategies and models that estimate budgetary requirements for system maintenance in future periods. Some programs have been implemented as a result of dramatic increases in the number of breaks for a municipality. Studies are available that describe programs in many parts of the world, including Australia, Canada, and Europe (Italy, France, Switzerland, United Kingdom), as well as many U.S. cities. Consider these overall observations from a general literature review:

The pipe material cast iron features prominently in many studies, both for reasons of its common use during certain
installation periods and of a dramatically increasing failure rate observed in many locations.

Many investigators report that an exponential relationship between the passage of time and future leaks is the most appropriate forecasting model. That is, break rates increase exponentially with the passage of time. Other investigators report constant or decreasing break rates for specific groupings of pipes in certain cities [41].

One reference characterizes current statistical break prediction models into deterministic, probabilistic multivariate, and probabilistic single-variate models applied to grouped data. Reference [40] reports that a three-parameter Weibull curve is generally accepted as the best predictor of time to failure, given adequate failure history.

Investigators use a variety of variables to characterize breakage patterns. These variables tend to divide the population of all breaks into groups that experience similar break rates over time. The most widely reported variables influencing break rate seem to be:

- Pipe material
- Pipe diameter
- Soil temperature
- Soil moisture content
- Previous break count/rate
- Age of system

Additional variables that appear in some break forecasting models include:

- Soil resistivities
- Joint type
- Pressure
- Tree locations
- Traffic

In some models, variables are identified but not fully populated for the analysis. They therefore serve as input locations (placeholders) for information that may be gathered in the future.

Some investigators note that for cast iron, only a fraction of through-wall corrosion holes reveal themselves by becoming breaks [41]. The holes cause leakage below detection thresholds or within leak tolerance.

Many references report "as-new" conditions observed on pipelines, even those with more problematic materials, such as cast iron, that have been in service for many decades. Reference [40] uses a median of 220+ years for cast iron pipe failures and states that this is corroborated by inspection of some 75+-year-old cast iron pipe "that looks to be in factory-new condition." Metal porosity and excessively large graphite flakes are sources of weaknesses observed in gray cast iron pipe, especially in larger diameters [42].

Similar efforts (deterioration modeling and break forecasting) have been undertaken for sewer pipes.
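The two forecasting forms most often cited in the literature review above, an exponential break-rate trend and a Weibull time-to-failure curve, can be sketched as follows. This is a hedged illustration: the parameter values are placeholders I chose for the example, not figures from references [40] or [41], and a real model would be fitted to a utility's own leak/break history.

```python
import math

def breaks_per_mile_year(t_years, n0=0.02, growth=0.05):
    """Exponential break-rate model: break rates increase exponentially
    with time in service. n0 (initial rate) and growth (per-year rate
    coefficient) are illustrative placeholders, not fitted values."""
    return n0 * math.exp(growth * t_years)

def weibull_survival(t_years, shape, scale):
    """Two-parameter Weibull survival function (location parameter of the
    cited three-parameter form set to zero): fraction of a pipe group
    expected to remain failure-free at age t_years."""
    return math.exp(-((t_years / scale) ** shape))

# Hypothetical 40-year-old cast iron group:
rate_now = breaks_per_mile_year(40)            # expected breaks/mile-year
surviving = weibull_survival(50, 2.0, 100.0)   # fraction intact at age 50
```

A shape parameter above 1.0 encodes the wear-out behavior (increasing hazard with age) that most of the cited studies report; a fitted shape near 1.0 would instead indicate a roughly constant break rate.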
Data

Differences in leak tolerance and uses of inspection result in differences in information availability for many distribution systems. As noted elsewhere, leakage information in the distribution industry replaces inspection data in the hydrocarbon industry. More leak-tolerant systems generally have more leaks and can generate more extensive (and, hence, more statistically certain) information on leaks. This can be useful for failure prediction, where failure is defined as "excessive leakage." Given the leak tolerances, the risk assessments for lower pressure systems often make a distinction between leaks and breaks, where only the latter are considered to be failures.

Sectioning

It may not be practical to examine each piece of pipe in a distribution system, at least not for an initial risk assessment. It may be more important to examine the general portions of the system that are of relatively higher risk than other sections. In many cases, the higher risk areas are intuitively obvious. Areas with a history of leaks, materials more prone to leaks, and areas with higher population densities often already have more resources directed toward them. The more detailed risk assessment becomes useful when the risk picture is not so obvious. The subtle interactions between many risk variables will often point to areas that would not have otherwise been noticed as being high risk.

A geographical segmentation scheme might be appropriate in some applications. A segment could represent a page in a map book, a grid, a pressure zone, or some other convenient grouping. To optimize the sectioning of a distribution grid (see also the general sectioning discussion in Chapter 2), each section should exhibit similar characteristics within its boundaries, but have at least one differing characteristic compared to neighboring sections. This difference is the reason for the section boundary. A hierarchical list of sectioning characteristics can be created, as explained on page 26. For example, if the distribution system to be examined is composed of more than one material of construction, then "material type" could be the first characteristic to distinguish sections. As the second attribute, perhaps the pressure reduction points or pipe diameter changes provide a suitable break point. For instance, section 1A of Acme Distribution System might be all polyethylene (PE) pipe operated above 50 psig in the northeast quadrant of the city of Metropolis. Because steel distribution systems are often divided into electrically isolated sections for cathodic protection purposes, this corrosion-control sectioning might be followed for risk assessment purposes also.

In certain cases, it might be advantageous to create noncontiguous sections. In the preceding example, a section could include all steel pipe operated at less than 50 psig. Such a section would contain unconnected pieces of the distribution network. In this scheme, pipes of similar characteristics and environment are grouped, even if they are geographically separate.

IV. Assigning risk scores

As previously noted, a risk model similar to that described for transmission pipelines in Chapters 3 through 7 can be used to assess distribution systems. The following sections discuss similarities and differences and suggest changes to the assignment of points in the risk model.
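The hierarchical, possibly noncontiguous sectioning scheme discussed in this chapter (group pipe by material first, then by a pressure break point) can be sketched as a simple grouping of inventory records. The segment IDs, field layout, and 50-psig break point below are illustrative assumptions, not data from the book.

```python
from collections import defaultdict

# Hypothetical pipe inventory records: (segment id, material, pressure in psig)
pipes = [
    ("P-101", "PE",    55),
    ("P-102", "steel", 30),
    ("P-103", "steel", 60),
    ("P-104", "PE",    55),
    ("P-105", "steel", 25),
]

def section_key(material, pressure_psig):
    # First sectioning attribute: material type; second: pressure class
    # at an assumed 50-psig break point, per the hierarchical scheme.
    pressure_class = "above 50 psig" if pressure_psig > 50 else "50 psig or less"
    return (material, pressure_class)

sections = defaultdict(list)
for seg_id, material, pressure in pipes:
    sections[section_key(material, pressure)].append(seg_id)

# P-102 and P-105 fall into one noncontiguous low-pressure steel section,
# even though they may be geographically separate pieces of the grid.
```

Each resulting key identifies one section; geographically unconnected segments that share the same characteristics are deliberately assessed together, as the text describes.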
V. Third-party damage index

In many areas, third-party damage is one of, if not the, most common causes of distribution pipeline failure. With the proximity of distribution systems to population centers and the high congestion of other buried utilities, the exposure to potentially harmful excavation activities is high. Offsetting this to some degree is the fact that in these areas, excavators expect to find buried utilities, which may prompt better use of one-call systems or better cooperation with other utility owners and excavators. It is usually unclear exactly why a high incidence of third-party damage exists on any system. Possible factors that contribute to the damage incidence rate in urban areas include the following:

- Smaller contractors may be ignorant of permit processes.
- Excavators have no incentive to avoid damaging the lines when the repair cost (to the damaging party) is smaller than the avoidance cost.
- Use of inaccurate maps and records.
- Attempts at locating buried utilities by operators are imprecise.

A study into possible contributing symptoms can provide guidance on how avoidance of incidents is best achieved. In general, factors that define a pipeline's susceptibility to third-party damage failure can be identified, grouped, and evaluated in order to better understand the threat. Factors that logically impact any pipeline's vulnerability include:

- Depth and types of cover
- Exposure vulnerability (unburied portions of the system)
- ROW accessibility
- Accuracy, thoroughness, and timeliness of the locating process
- Accuracy of maps and records regarding the system's physical location and characteristics
- Patrol or other advance reporting of nearby activities

Factors thought to correlate with the potential for damaging activity near the pipeline include:

- Potential level of excavation or other threatening activity nearby
- Presence of other buried utilities
- Population density
- Pending use of the area (development in progress or planned)
- ROW condition and control
- Use of a one-call system or other indication of informed excavators
- Traffic (for exposure of unburied portions)

Given the presence of third-party activity, factors that make the pipe less susceptible to failure from third-party activities include:

- Material type
- Pipe wall thickness and toughness
- Stress level
These factors are needed to fully consider the probability of actual pipe failure from third-party damage, rather than just the probability of third-party damage. They are evaluated in the design index (Chapter 5), which includes many aspects of system strength. The specific variables and their relative weightings that can be used to evaluate third-party damage potential are very similar to those detailed in Chapter 3. The suggested weightings differ from those used in Chapter 3, as shown in Table 11.1. These are discussed in the following paragraphs.
Cover

Cover for a distribution system often includes pavement materials such as concrete and asphalt, as well as sub-base materials such as crushed stone and earth. These materials are more difficult to penetrate and offer more protection for a buried pipeline. Additionally, many municipalities control excavations through pavements. This control may offer another opportunity to avoid damage to a pipeline buried beneath a roadway, sidewalk, etc. Score this item as described beginning on page 46.
One-call systems

One-call systems are usually a very effective means of facilitating communication among affected parties. Score as shown on pages 51-53.
Activity level

While a high activity level nearby normally accompanies a distribution system, this is not always an automatic risk increaser. Sometimes a more sophisticated group of excavators works near distribution systems. These excavators have more experience working around buried utilities, expect to encounter more buried utilities, and are more likely to ensure that owners are notified of the activity (usually through a one-call system). Nonetheless, it is still more defensible to conservatively assume that more activity near the line offers more opportunity for unintentional damage to the pipeline. Score this item as shown on pages 48-50.
Aboveground facilities Surface facilities are susceptible to unique dangers such as traffic impact, loadings, and vandalism. Score this item as described on pages 50-51.

Table 11.1 Third-party damage index: possible variables and weights

Variable                              Weight
Cover                                     20
One-call systems                          10
Activity level                            15
Aboveground facilities                    10
Public education/locating process         20
ROW condition                             10
Patrol                                    15
Third-party index total                  100
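The weighted-sum logic behind Table 11.1 can be sketched in a few lines of code. This is a minimal illustration only, not the book's software; the variable keys and the capping behavior are assumptions for the sketch.

```python
# Suggested third-party damage index weights (maximum points per variable),
# per Table 11.1. Each variable is scored from 0 up to its weight, with
# safer conditions earning more points; the index is the sum (0-100).
THIRD_PARTY_WEIGHTS = {
    "cover": 20,
    "one_call_systems": 10,
    "activity_level": 15,
    "aboveground_facilities": 10,
    "public_education_locating": 20,
    "row_condition": 10,
    "patrol": 15,
}

def third_party_index(scores: dict) -> float:
    """Sum per-variable scores, capping each at its maximum weight.

    Missing variables default to 0 points (the conservative choice).
    """
    return sum(min(scores.get(name, 0), weight)
               for name, weight in THIRD_PARTY_WEIGHTS.items())
```

For example, a section scoring the maximum on every variable would receive the full 100 points, while an unassessed section defaults to 0.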
11/228 Distribution Systems
Public education/locating process The approach to public education in a distribution system might be slightly different from that of a transmission system. The higher concentration of people allows for the effective use of certain communications media. With a distribution system, most of the pipeline neighbors are also customers, and they are easily reached through information included in the customer bill. Other common methods of public education include newspaper ads and public service broadcasts (radio, TV). Points should be awarded based on how effective the program is. Effectiveness is best measured by results: the number of people near the pipeline who understand that the pipeline system exists, what constitutes a possible threat to pipe integrity, and appropriate reactions to threats or evidence of leaks. Especially in a high population density situation, knowledgeable and cooperative neighbors add a great deal to pipeline security from third-party damage. A significant number of serious accidents occur in service lines, those pieces of pipe between a distribution main and a building. These lines are not always owned by the distribution company; the service line may be owned and maintained by the building owner or property owner. From a risk standpoint, it is important that the maintainers know the safety issues involved. Depending on the boundaries of the risk assessment, the evaluator may check that reasonable steps are taken to ensure the proper maintenance of the distribution system that leads to the customer's equipment. Public education for water systems often focuses more on customer health issues such as contamination potential. Outside of the one-call system advertisements, public education is not commonly done as a means to protect water systems from third-party damage. In this case (and any case where public education scoring is not applicable), the evaluator can simply award no points for this variable.
Alternatively, he can change the risk model so that a replacement variable is used, or the public education points can be redistributed among the other variables. A candidate replacement variable might be locating process: an evaluation of the process for receiving notification of pending activity and the response to that notification, including marking, oversight, and follow-up. This would overlap the one-call variable to some extent. Modifications to the suggested point scale on page 53 can reflect the use of education practices for distribution systems.
ROW condition A distribution system ROW is usually quite different from a transmission line ROW. It is impractical to mark all locations of the distribution pipes because many are under pavement or on private property. Nonetheless, in some areas, markers and clear ROW postings are practical and useful in reducing incidences of third-party intrusions. Included in this item are inspection opportunities designed to assist in leak detection surveys. A qualitative scale can be devised to assign points to a section of distribution piping being evaluated:

Excellent ROW is clear and unencumbered. Signs are present wherever practical. Signs are clear in their warning and phone numbers are prominent. Leak detection survey points are regularly available along pipelines under pavement. Placement of pipelines is consistent relative to sidewalks, roadways, etc. Routings of service lines are uniform (standard design) and marked wherever practical.

Average ROW conditions are inconsistent. More markers are needed. More opportunities for leak detection are needed. Signs are not always in legible condition.

Poor No markers present anywhere. Placement of lines is inconsistent. Areas of vegetation are overgrown. Debris or structures cover the pipelines. Very difficult for anyone to know of the presence of a buried utility line.

See also page 54.

Patrol Formal patrols might not be part of a distribution system owner's normal operations. However, informal observations in the course of day-to-day activities are common and could be included in this evaluation, especially when such observations are made more formal. Much of an effective system patrol for a distribution system will have to occur at ground level. Company personnel regularly driving or walking the pipeline route can be effective in detecting and halting potentially damaging third-party activities. Training or other emphasis on the drive-by inspections could be done to heighten sensitivity among employees and contractors. Other patrolling concepts are discussed beginning on page 54. A point scale can be created to assess how much of the system is being examined and on what frequency. The following equation provides such a scale. Note that issues regarding patrol effectiveness should be incorporated into this score. That is, a less effective patrol performed more frequently is basically equivalent to a more effective but less frequent patrol.

(Number of weekly patrols / 5) x (% of system observed on each patrol) x 15 = point score (if 15 points is the maximum point level)

Using this equation, maximum points (15) are awarded for patrols occurring five times per week that observe 100% of the system on each patrol. Twice-per-week patrols that view 80% of the system would be equivalent to patrols four times per week seeing 40% of the system on each patrol (approximately 5 points).
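The patrol scoring equation can be expressed directly in code. A minimal sketch follows; the cap at five patrols per week is an assumption (the text awards maximum points at five weekly patrols, so extra patrols are assumed to earn no additional credit).

```python
def patrol_score(weekly_patrols: float, fraction_observed: float,
                 max_points: float = 15.0) -> float:
    """Patrol point score: (weekly patrols / 5) x (fraction observed) x max.

    weekly_patrols: number of patrols performed per week.
    fraction_observed: fraction (0-1) of the system seen on each patrol.
    The frequency term is capped at 1.0 (assumption: more than five
    patrols per week earns no credit beyond the maximum).
    """
    frequency_term = min(weekly_patrols / 5.0, 1.0)
    return frequency_term * fraction_observed * max_points
```

Checking against the text's examples: five patrols per week observing 100% yields the full 15 points, while twice-weekly patrols seeing 80% and four-times-weekly patrols seeing 40% both yield about 5 points.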
VI. Corrosion index Depending on the material being used, the same corrosion mechanisms are at work in a distribution system as are found in transmission pipelines. It is not unusual, however, to find older metallic distribution lines that have no coating or other means of corrosion prevention. In certain countries, and in certain time periods in most countries, corrosion prevention was not undertaken. As would be expected, corrosion leaks are seen more often in such pipes where no or few corrosion prevention steps are taken. The presence of unprotected iron pipe and noncathodically protected steel lines is statistically correlated with a higher incidence of leaks [51] and is a primary consideration in many "repair-and-replace" models.
Assigning risk scores 11/229
Corrosion is defined in the broadest sense here: any degradation of a material in its environment. This encompasses many possible mechanisms such as temperature degradation, graphitization, embrittlement, chemical deterioration of concrete, and other processes. As with other failure modes, evaluating the potential for corrosion follows logical steps, replicating the thought process that a corrosion control specialist would employ. This involves (1) identifying the types of corrosion possible: atmospheric, internal, subsurface; (2) identifying the vulnerability of the pipe material; and (3) evaluating the corrosion prevention measures used, at all locations. Corrosion mechanisms are among the most complex of the potential failure mechanisms. As such, many more pieces of information are efficiently utilized in assessing this threat. Because corrosion is often a highly localized phenomenon, and because indirect inspection provides only general information, uncertainty is usually high. With this difficulty in mind, the corrosion index reflects the potential for corrosion to occur, which may or may not mean that corrosion is actually taking place. The index is therefore not directly measuring the potential for failure from corrosion. That would require inclusion of additional variables such as pipe wall thickness and stress levels. This is further discussed later in this chapter (corrosion rate discussion) and again in Chapter 5. Three potential types of corrosion are commonly encountered in a pipeline system: atmospheric, internal, and subsurface (Table 11.2). Atmospheric is considered to be the least aggressive form of corrosion under normal conditions. Internal corrosion is a significant threat for unprotected water pipe, but less of a factor in most gas distribution systems. Subsurface corrosion is seen as the highest corrosion threat for most metallic pipelines.
The higher threat is a result of potentially very aggressive subsurface corrosion mechanisms, including various types of galvanic corrosion cells and interference potential from other buried structures, as well as the general inability to inspect and gain knowledge of actual corrosion on subsurface components. Background issues of all types of corrosion are discussed in Chapter 4.

Table 11.2 Corrosion index: possible variables and weights

Variable                    Weight
Atmospheric corrosion           10
Internal corrosion              10
Subsurface corrosion            80
Corrosion index total          100

The first step in assessing the corrosion potential involves evaluating the pipe's environment. This can be done most efficiently by a risk model that has been populated with pertinent information. The following discussion illustrates one approach to characterizing each pipe's environmental exposures (the threats to the pipe from its immediate environment). The computerized risk model first searches for indications of atmospheric exposure, including casings, tunnels, spans, valve vaults, manifolds, and meters. These occurrences are noted in the database and identify one of the potential threats as atmospheric corrosion. The model assumes that all portions of the system are exposed to the product being transported and, hence,
to any internal corrosion potential promulgated by that product. Therefore, all portions have exposure to internal corrosion. If the pipe is not exposed to the atmosphere, then the model assumes it is exposed to soil and is treated as being in a subsurface corrosive environment. For each exposure type (atmospheric, internal, subsurface), an assessment is made of the relative corrosivity of the environment. Each pipeline's immediate environment is characterized based on its relative corrosivity to the pipe material: steel, concrete, or plastic, for example. In the scoring system presented here, points are usually assigned to each condition independently and then summed together to represent the corrosion threat. This system adds points for safer conditions. For example, for the subsurface corrosion variable, three main aspects are examined: environment, coating, and cathodic protection. The best combination of environment (very benign), coating (very effective), and cathodic protection (also very effective) commands the highest points. An alternative approach (also described in Chapter 4) that may be more intuitive in some ways is to begin with an assessment of the threat level and then consider mitigation measures as adjustment factors. Here, the evaluator might wish to begin with a rating of environment: either atmosphere type, product corrosivity, or subsurface conditions. Then, multipliers are applied to account for mitigation effectiveness. For example, in a scheme where increasing points represent increasing risk, perhaps a subsurface environment of Louisiana swampland warrants a risk score of 90 (very corrosive), while a dry Arizona desert environment has an environmental rating of 20 (very low corrosion). Then, the best coating system decreases or offsets the environment by 50% and the best cathodic protection system offsets it by another 50%. So, the Louisiana situation with very robust corrosion prevention would be 90 x 50% x 50% = 22.5.
This is very close to the Arizona desert situation where no corrosion preventions are employed, but the environment is very benign. This is intuitive because a benign environment is really roughly equivalent to a corrosive environment with mitigation, from a corrosion rate perspective. Further discussion of scoring options can be found in Chapter 2. See also discussions regarding information degradation on pages 25-31. We now discuss the Chapter 4 corrosion variables as applied to distribution systems. See Chapter 4 for background discussions of all corrosion mechanisms noted here.
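The threat-times-mitigation scheme described above can be sketched as a simple multiplier calculation. This is an illustration of the alternative approach only; the function and parameter names are assumptions, and an evaluator would calibrate the environment ratings and offset fractions to the overall model.

```python
def subsurface_corrosion_score(environment: float,
                               coating_offset: float = 0.0,
                               cp_offset: float = 0.0) -> float:
    """Corrosion risk score where higher values mean more risk.

    environment: 0-100 rating of subsurface corrosivity (e.g., 90 for
    very corrosive swampland, 20 for a benign dry desert, per the text).
    coating_offset / cp_offset: fractional reductions (0-1) credited to
    the coating system and the cathodic protection system, respectively.
    """
    return environment * (1.0 - coating_offset) * (1.0 - cp_offset)
```

Using the text's example, a 90-point environment with the best coating (50% offset) and best cathodic protection (another 50% offset) scores 22.5, close to an unmitigated 20-point desert environment.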
Atmospheric corrosion Where pipe materials exposed to the atmosphere are not susceptible to any form of degradation, or where there are no atmospheric exposures, this variable can be scored as no risk from atmospheric corrosion. The evaluator is cautioned about discounting entirely the possibility of atmospheric corrosion. For example, while plastics are often viewed as corrosion proof, sunlight and airborne contaminants (perhaps from nearby industry) are two degradation initiators that can affect certain plastic materials. Note also that casings, tunnels, valve vaults, and other underground enclosures allow the possibility of atmospheric corrosion. Where there are many atmospheric exposures and the pipe material is susceptible to corrosion, the weighting of this variable may need to be increased.
Score the potential for atmospheric corrosion as shown in Chapter 4.
Internal corrosion Water is a pipelined product that presents special challenges in regard to internal corrosion prevention. Most metallic water pipes have internal linings (cement mortar lining is common) to protect them from the corrosive nature of the transported water. Raw or partially treated water systems for delivery to agricultural and/or landscaping applications are becoming more common. Water corrosivity might change depending on the treatment process and the quality of the transported water. With the lower pressures normally seen in distribution systems, infiltration can be a potential problem. Infiltration occurs when an outside material migrates into the pipeline. Most commonly, water is the substance that enters the pipe. While more common in gravity-flow water and sewer lines, a high water table can cause enough pressure to force water into even pressurized pipelines, including portions of gas distribution systems. Conduit pipe for fiber optic cable or other electronic transmission cables is also susceptible to infiltration and subsequent threats to system integrity. When foreign material enters the pipe, product contamination and internal corrosion are possible. Scoring the variables for internal corrosion, product corrosivity, and internal protection can be done as described and in consideration of additional corrosion scenarios as discussed above.
Subsurface corrosion In this section, the evaluator looks for evidence that corrosion can occur or is occurring in pipe buried underground and that proper actions are being directed to prevent that corrosion. A distinction is made between metal and nonmetal buried pipe. For nonmetal pipe, a subsequent section offers ideas on how to assess corrosion potential. Another section shows one methodology for combining subsurface corrosion assessments of metal and nonmetal pipe. Common industry practice is to employ a two-part defense against galvanic corrosion of a steel pipeline. One line of defense is a coating over the pipeline; the other line of defense is application of cathodic protection (CP). These are discussed in detail in Chapter 4 and can be generally assessed according to the protocols described there. Additional considerations for Chapter 4 variables are discussed below.
Subsurface environment Because a coating system is always considered to be an imperfect barrier, the soil is always assumed to be in contact with the pipe wall at some points. Soil corrosivity is primarily a measure of how well the soil can act as an electrolyte to promote galvanic corrosion on the pipe. Additionally, aspects of the soil that may otherwise directly or indirectly promote corrosion mechanisms should be considered. These include bacterial activity and the presence of corrosion-enhancing chemicals in the soil. The evaluator should be alert to instances where the soil conditions change rapidly. Certain road bed materials, past waste disposal sites, imported foreign materials, etc., can cause
highly localized corrosive conditions. In a city environment, the high number of construction projects leaves open the opportunity for many different materials to be used as fill, foundation, road base, etc. Some of these materials may promote corrosion by acting as a strong electrolyte, attacking the pipe coating, or harboring bacteria that add corrosion mechanisms. In the case of cast iron, a lower resistivity soil will promote graphitization of low ductility cast iron pipe as well as corrosion of carbon steel. Points should be reduced where soil conditions are unknown, known to be corrosion promoting, or where placement of nonnative material has added an unknown factor. Score this item as described.
Coating In general, the coating condition variables for subsurface metallic pipes can be scored as detailed in Chapter 4. Some different coating materials might be found in distribution systems compared with transmission pipelines (such as loose polyethylene bags surrounding cast iron pipes), but these are still appropriately evaluated in terms of their suitability, application, and the related maintenance practices.
Cathodic protection Modern metallic distribution systems (steel and ductile iron, mostly) are installed with coatings and/or cathodic protection when soil conditions warrant. However, in many older metal systems, little or no corrosion barriers were put into design considerations. Note that the absence of an anticorrosion coating, when one is warranted, scores no points (high risk of corrosion) under this evaluation system. Full points, however, can be awarded in both the cathodic protection and condition of coating variables when the criterion of "no corrosion possible" is met, even if an engineered corrosion prevention system does not exist. That is, if it can be demonstrated that corrosion will not occur in a certain area, credit for a cathodic protection system may be given. The evaluator should ensure that adequate tests of all possible corrosion-enhancing conditions at all times of the year have been made. In general, the cathodic protection variables for subsurface metallic pipes can be scored as detailed in Chapter 4, with special attention paid to the increased potential for interferences in a more urban environment. This and some other considerations are discussed below. Distribution systems are often divided into sections to optimize cathodic protection. Older, poorly coated steel sections will have quite different current requirements than will newer, well-coated steel lines. These systems must be well isolated (electrically) from each other to allow cathodic protection to be effective. Given the isolation of sections, the grid layout, and the often smaller diameters of distribution piping, a system of distributed anodes (strategically placed anodes) is sometimes more efficient than a rectifier impressed current system. Cathodic protection effectiveness Test leads. Where cathodic protection is needed but is not being used, this item should normally score 0 points.
While it can be argued that pipe-to-soil protection readings can be taken even in the absence of applied cathodic protection, this information may only provide an incomplete picture of corrosion mechanisms.
Pipe-to-soil protection readings can also be taken at other aboveground locations, such as meter risers. Credit may be given for these locations where meaningful information on corrosion control is regularly obtained and properly analyzed. To assess this item for distribution systems, pages 80-82 provide background information. A scale can be set up to assess the effectiveness of the test leads based on an estimation of how much piping is being monitored by test lead readings. As with transmission pipelines, we can assume that each test lead provides a reasonable measure of the pipe-to-soil potential for some distance along the pipe on either side of the test lead. As the distance from the test lead increases, uncertainty as to the actual pipe-to-soil potential increases. How quickly the uncertainty increases with distance from the test lead depends on soil conditions (electrolyte) and the presence of other buried metals (interference sources). Rather than a linear scale in miles of pipe between test leads, a percentage of pipe monitored might be more appropriate for a distribution piping grid. A distance can be assumed (perhaps a few hundred feet in relatively uncongested areas) and an approximation as to how much pipe is being protected can be made as follows:

Less than 30% of piping monitored: a high incidence of other unmonitored buried metals with potential interferences
Thirty percent to 70% of piping monitored: moderate incidence of other unmonitored buried metals
Greater than 70% of piping monitored: few incidences of other unmonitored buried metals.

The interval of monitoring at the test leads is critical, as is the interpretation of those readings.
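The three coverage tiers above can be sketched as a simple classifier. This is a hypothetical helper for illustration; the tier labels are paraphrased from the text and the boundary handling (30% falls in the middle tier) is an assumption.

```python
def lead_coverage_rating(fraction_monitored: float) -> str:
    """Classify test-lead coverage using the thresholds in the text.

    fraction_monitored: estimated fraction (0-1) of piping whose
    pipe-to-soil potential is reasonably represented by test-lead
    readings. Boundary values are assumed to fall in the higher tier.
    """
    if fraction_monitored < 0.30:
        return "low: high incidence of unmonitored buried metals"
    elif fraction_monitored <= 0.70:
        return "moderate: moderate incidence of unmonitored buried metals"
    else:
        return "high: few unmonitored buried metals"
```

An evaluator would then map each tier to a point award consistent with the rest of the corrosion index.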
Close interval surveys Although not as common as in transmission systems, the close interval or close-spaced survey (CIS) technique can be very important in a metallic-pipe distribution system. Many potential sources of interferences can often be detected by a CIS. A major obstacle is the prevalence of pavement over the pipelines, preventing access to the electrolyte. Score as detailed on pages 80-82. Cathodic protection interference Interferences are situations where shorting (unwanted electrical connectivity) occurs with other metals or shielding prevents adequate protective currents from reaching the pipe. Interference will hinder the proper functioning of cathodic protection currents and may lead to accelerated corrosion. A problem sometimes encountered in distribution systems is the use of the pipe as an electrical ground for a building's electric system. Although normally a violation of building codes (and other regulations), this situation is nevertheless seen. Unintentional shorting can occur across the electrical isolators normally placed near customer meters. This occurs if items such as a bicycle chain lock, garden tool, or metallic paint are placed in a way such that an electrical connection is made across the isolator. Some companies perform regular surveys to detect all such shorting situations. The evaluator should be alert to the problem and seek evidence that the operator is sensitive to such scenarios and their possible impact on cathodic protection, corrosion, spark generation, and other possible effects.
In this item, and also in the cathodic protection item, the evaluator should be alert to situations where piping of different ages and/or coating conditions is joined. Dissimilar metals, or even minor differences in chemistry along the same piece of steel pipe, can cause galvanic cells to operate and promote corrosion. Because distribution systems are often located in areas congested with other buried utilities, the evaluator should look for operator methods by which interference could be detected and prevented. Examples include strict construction control, strong programs to document locations of all buried utilities, close interval surveys, and extensive use of test leads and interference bonds. Score as described on pages 82-85. AC-induced current AC induction presents a potential problem in distribution systems as it does in transmission pipelines. Anytime high voltages are present, there exists a risk of a nearby buried metal conduit becoming charged. In a distribution system, the grid-type layout, increased sources of AC power, and the often-extensive presence of other buried utilities might complicate the analysis of this variable. Score as shown on pages 83-84.
Mechanical corrosion Score as shown on pages 77-78.
Subsurface corrosion of nonmetallic pipes An alternate methodology is needed to score the risk of buried pipe corrosion for nonmetallic materials, since coatings and cathodic protection are not normally corrosion control methods. For nonmetallic pipe materials, the corrosion mechanisms may be more commonly described as degradation mechanisms. Under the term corrosion, all such mechanisms that can reduce the structural integrity of the nonmetallic pipe should be examined. Because this section of the evaluation applies to all nonmetallic pipe materials, some generalized relationships between likelihood of corrosion and preventive measures exist and can be used to evaluate the threat. Corrosion mechanisms to consider include chemical degradation, ultraviolet degradation, temperature degradation, attack by soil organisms, attack by wildlife (such as rodents gnawing on pipe walls, considered here rather than as an external force), corrosion of a part of a composite material (such as the steel in reinforced concrete pipe), dissolution by water (some clay or wood pipes are susceptible), and general aging effects. Where cementing agents or adhesives are used (usually in the joining process), corrosive effects on these materials must also be considered. In the case of plastics, resistance to inorganic chemicals is usually high. Only very strong oxidizing or reducing agents will damage most modern plastic pipe materials. Organic chemicals, however, can damage plastics by solvation, the absorption of a foreign liquid such as a solvent, possibly resulting in swelling, softening, reduction in physical properties, or even dissolution of the material. Organic chemicals can also aggravate environmental stress corrosion cracking [2]. Aging of plastics is theoretically possible because chemical and physical changes result from oxidation, hydrolysis, absorption, or
biological impacts. In practice, most modern plastics are resistant to such factors [2]. This category of corrosion can be scored by assessing the material susceptibility in general and then looking at preventive measures and actual conditions. Note that a high susceptibility can be mostly, but not entirely, offset by preventions and the presence of rather benign conditions. Material susceptibility The pipe wall material susceptibility to any form of buried pipe external corrosion in a reasonably foreseeable environment should first be assessed. Where possible contact with corrosive substances would be rare, score the material as less susceptible. A qualitative rating scale can be set up to facilitate scoring.
High The pipe material is relatively incompatible with some environments that it can reasonably be expected to contact. In such incompatible environments, corrosion damage or leaks have occurred in the past. Damage can occur relatively quickly. Without preventive measures, pipe failures are common. Corrosive mechanisms might have the potential for highly localized, rapid damage.

Medium Some corrosion is expected, but serious damage is improbable. Perhaps the formation of a protective layer or film of corrosion by-products precludes continuation of the damage. Several potentially damaging reactions are possible, but damage would be slow and not severe. When the pipe is of the age where chemical or physical changes have caused a minor reduction in its structural properties, this score may be appropriate.

Low There is a remote chance of corrosion mechanisms under somewhat rare conditions. Perhaps rare weather conditions causing changes in the soil or a rare spill of chemicals occasionally seen in the area could promote damage. Only rare substances not normally found in the soil can corrode the pipe wall, or corrosion mechanisms might be so slow as to be virtually no threat.

None No known corrosive mechanisms exist for the pipe in any foreseeable environment.
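The qualitative scale above can be represented as a lookup in a scoring model. The numeric point values below are placeholders, not values from the text; an evaluator would calibrate them to the weighting of the overall corrosion index.

```python
# Hypothetical point values for the qualitative susceptibility ratings.
# The text defines the categories (High/Medium/Low/None) but not the
# numbers; higher points indicate safer conditions, consistent with the
# "more points = safer" convention used throughout the index.
SUSCEPTIBILITY_POINTS = {
    "high": 0,    # incompatible environments; rapid, localized damage possible
    "medium": 3,  # some corrosion expected, serious damage improbable
    "low": 7,     # remote chance under somewhat rare conditions
    "none": 10,   # no known corrosive mechanisms in any foreseeable environment
}

def susceptibility_score(rating: str) -> int:
    """Return the point award for a qualitative susceptibility rating."""
    return SUSCEPTIBILITY_POINTS[rating.lower()]
```

The ordering is what matters: a "none" rating must always outscore "high" so that more susceptible materials contribute more risk.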
Soil corrosivity When the material susceptibility variable identifies a potential for external corrosion, soil corrosivity should be scored to reflect the presence of conditions that enhance the threat. As the environment that is in direct contact with the pipe, soil characteristics that promote corrosion must be identified. The evaluator should list those characteristics and score the section appropriately. Minimum points are awarded when there is a high presence of potentially damaging characteristics in the soil. Maximum points would indicate a benign soil condition. See pages 76-78 for more discussion on soil corrosivity.

Preventive measures Where preventive measures are employed to eliminate or reduce a known threat of corrosion, those measures can be evaluated and scored based on how effective they are in reducing the potential damage. When more than one technique is used, points may be added up to the specified maximum. When preventive measures are absolutely unnecessary, this variable can receive maximum points. The following are examples of common preventive measures and some sample criteria for their assessment.

Monitoring. A program is in place to reliably detect (and take appropriate action for) all potentially harmful corrosion. The inspection might be based on statistically sampling sections of pipe. Full points should only be awarded when all pipe is examined or when the statistically driven program can be demonstrated to reduce the inspection error.

Testing. A program is in place to test buried pipe for corrosion damage. The rate of corrosion should be a factor in the program design. The test time interval should be specified so that all potentially harmful corrosion will be detected by the test before the line can fail in service.

Barrier-type protection. Some means of separating the pipe from a potentially harmful environment has been employed. The evaluator should award full points when she is ensured that the design, installation, and maintenance of such protection will indeed eliminate the corrosion potential. Ideally, a testing or monitoring program will verify the barrier effectiveness.

Mechanical corrosion This risk variable is more fully discussed in Chapter 4. Note that nonmetal materials are also susceptible to mechanical-corrosion mechanisms such as stress corrosion cracking (SCC). While the environmental parameters that promote SCC in nonmetals are different than in metals, there are some similarities. When a sensitizing agent is present on a sufficiently stressed pipe surface, the propagation of minute surface cracks accelerates. This mirrors the mechanism seen in metal pipe materials. For plastics, sensitizing agents can include detergents and alcohols. The evaluator should determine (perhaps from the material manufacturer) which agents may promote SCC. A high stress level coupled with a high presence of contributing soil characteristics would score the lowest point levels. Score this item as discussed on page 78 or by comparing the stress level in the pipe wall with the aggressiveness of the environment (as captured in variables such as the product corrosivity score and the soil corrosivity score). External erosion is also considered here as a potential corrosion mechanism. For instance, an exposed concrete pipe in a flowing stream can be subject to erosion as well as mechanical forces (assessed in the design index). See page 77 for more information on erosion potential. By this scoring, maximum points are awarded for the safest conditions, that is, when no external corrosion mechanisms are present. Increasing material susceptibility and/or more threatening conditions will lower the score.

Generalized subsurface corrosion potential Subsurface corrosivity is more problematic for a risk model assessing and comparing many different pipe materials. Each material might have different sensitivities to different soil characteristics. Soil resistivity is widely recognized as a variable that generally correlates with corrosion rate of a buried metal.
Additional soil characteristics that are thought to impact metallic and concrete pipes include pH, chlorides, sulfates, and moisture. Some publicly available soils databases (such as USGS STATSGO) have ratings of corrosivity of steel and corrosivity of concrete that can be used in a risk evaluation. A scoring protocol can be developed based on a basic understanding of material vulnerabilities. Table 11.3 illustrates a basic scoring philosophy for the subsurface environment variable. Factors thought to influence soil corrosivity are listed in the left column and their possible role in specific corrosion potential is shown in the right-most columns. Defaults can be used where no information is available and should be generally conservative (that is, biased toward over-predicting corrosivity). For practical reasons, this may need to be tempered when an extreme condition such as contamination is very unlikely for the vast majority of the pipeline.
System deterioration rate (corrosion rate)

Age is a factor in many leak/break models. While age might be a gross indicator of break likelihood given the presence of active corrosion mechanisms, it does not indicate the presence of corrosion. The recommendation here is to evaluate the actual mechanisms possibly at work, rather than using age as a surrogate. Age is not a relevant risk factor if no time-dependent failure mechanisms are active. The risk model described in this book is measuring the probability and relative aggressiveness of corrosion and other time-dependent mechanisms. To translate that into the probability of failure for a pipeline, additional factors such as the pipe wall thickness, corrosion rate, and age need to be considered. It is believed that the scores relate to corrosion rates; however, the actual relationship can only be determined by using actual measured corrosion rates in a variety of environments. Until the relationship between the corrosion index and corrosion rate can be established, a relationship can be theorized. For example, an equation similar to the following might be appropriate for some scenarios:

Corrosion rate (in./yr) = exp[-9 × (Corrosion Index)/100]
This equation was generated via a trial-and-error process using actual corrosion scores until the calculated corrosion rates at either end of the corrosion index scale seemed intuitively plausible. For example, some corrosion failures have occurred in pipelines after less than a year in service, so a very low corrosion index score should reflect this. Although arbitrary, this relationship is consistent, at least in general mathematical terms, with many researchers' conclusions that pipeline break rates increase exponentially with the passage of time, under the influence of corrosion. The above relationship produces the corrosion rates shown in Table 11.4.

Table 11.4 Theoretical corrosion rates (example only)

Corrosion index    Corrosion rate (in./yr)    Years to corrode
99                 0.0001                     1481
95                 0.0002                     1033
90                 0.0003                     659
80                 0.0007                     268
70                 0.0018                     109
60                 0.0045                     44
50                 0.0111                     18
40                 0.0273                     7
30                 0.0672                     3
20                 0.1653                     1
10                 0.4066                     0
0                  1.0000                     0

Given an initial wall thickness, the time to corrode through the pipe wall can be estimated. An arbitrary initial wall thickness of 0.2 in. is selected to show the years before through-wall corrosion would occur. That is not necessarily the time to failure, however, because even minor wall loss can contribute to a failure in a high-stress (usually from internal pressure) situation, and, at the other extreme, pinhole leaks through the pipe wall do not necessarily constitute failure under the "excessive leakage" definition proposed. The corrosion rates shown in Table 11.4 were theorized to apply to all pipe materials in a particular study. This is, of course, an oversimplification of the real-world processes, but is a modeling convenience that may not detract from the usefulness of the assessment. Table 11.4 reflects the belief that where corrosion mechanisms are not present or only minimally active, as indicated by higher corrosion index scores, corrosion is very slow. Examples include well-lined steel pipe in dry, sandy, benign soils; pipes well protected by coatings and cathodic protection;
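The theorized relationship can be sketched as a short calculation. The function and variable names below are illustrative, and the 0.2-in. wall thickness matches the arbitrary assumption used for Table 11.4:

```python
import math

def corrosion_rate(corrosion_index):
    """Theorized corrosion rate (in./yr) from a 0-100 corrosion index score."""
    return math.exp(-9.0 * corrosion_index / 100.0)

def years_to_corrode(corrosion_index, wall_thickness_in=0.2):
    """Estimated years before through-wall corrosion of the given wall."""
    return wall_thickness_in / corrosion_rate(corrosion_index)

# Reproduce one row of Table 11.4:
print(round(corrosion_rate(50), 4))   # ~0.0111 in./yr
print(round(years_to_corrode(50)))    # ~18 years
```

Note that this reproduces the table's pattern of very slow corrosion at high index scores and through-wall corrosion within a year at the lowest scores.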
Table 11.3 Scoring for subsurface environment

Soil corrosivity factor    Best (score = 1.0)    Worst (score = 0)    Default
Resistivity                > 100,000 ohm-cm      < 400 ohm-cm         0.3
Conductivity               Low                   High                 0.3
pH                         7-9                   > 9 or < 7           0.9
Chlorides                  Low                   High                 0.7
Sulfates                   Low                   High                 0.7
Interferences              None                  High                 0.5
Contamination              None                  High                 0.9
Moisture                   Low                   High                 0.3

The table also indicates, factor by factor, which of these are used in assessing corrosivity to metals, corrosivity to concrete, and corrosivity to plastics.
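One way to implement the Table 11.3 philosophy is to score each soil factor between 0 (worst) and 1 (best), substitute the table's conservative default when a factor is unmeasured, and average the factors judged relevant to the pipe material. This is only a sketch: the averaging rule and the list of factors assigned to metals below are assumptions, not the book's prescribed arithmetic.

```python
# Conservative defaults from Table 11.3, used when a factor is unmeasured
DEFAULTS = {
    "resistivity": 0.3, "conductivity": 0.3, "pH": 0.9, "chlorides": 0.7,
    "sulfates": 0.7, "interferences": 0.5, "contamination": 0.9, "moisture": 0.3,
}

def subsurface_score(observed, relevant_factors):
    """Average the 0 (worst) to 1 (best) scores of the factors judged
    relevant to this pipe material, using defaults for missing data."""
    scores = [observed.get(f, DEFAULTS[f]) for f in relevant_factors]
    return sum(scores) / len(scores)

# Hypothetical: steel pipe where only resistivity was field-scored (0.8);
# this factor list for metals is an assumption, not taken from the book.
metals = ["resistivity", "pH", "chlorides", "interferences",
          "contamination", "moisture"]
print(round(subsurface_score({"resistivity": 0.8}, metals), 2))  # → 0.68
```

The default-on-missing-data behavior keeps the score conservative where field data are sparse, consistent with the biasing guidance above.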
and concrete lines in dry, neutral pH soils. The very long time periods shown in this table for higher corrosion index scores may at first appear excessive. However, they are not inconsistent with previously cited research, including one study that uses 220+ years as a median life expectancy for the normally corrosion-vulnerable material of cast iron [2]. Also illustrated by Table 11.4 is the other extreme, where low corrosion index scores indicate aggressive corrosion conditions. Examples include acidic, contaminated soils; steel pipe with a high potential to become anodic to other buried structures; and concrete pipe in high chloride soils. In these cases, a high corrosion rate can lead to through-wall corrosion in a matter of months. In producing this table for a specific study, it was recognized that these hypothesized corrosion rates will not likely prove to be accurate in the real world, because they are not based on any empirical data. Nevertheless, an estimated relationship between the corrosion scores and corrosion rates may be useful when applied consistently in this relative model. As databases become more populated and engineers specifically seek data that demonstrate the relationship sought, the equations can be better established to increase the ability of the model to predict actual failure rates.
VIII. Design index

This index captures much of the system strength or failure-resistance considerations and is fully discussed in Chapter 5. The emphasis of the described assessment is to identify and rank the presence and severity of potential failure mechanisms. When failure resistance is coupled with the measurement of a failure mechanism's aggressiveness, time-to-failure estimates can be made. For example, a corrosion index score indicating aggressive corrosion, coupled with a design index indicating low pipe strength and higher stress states, suggests a short time to failure.
All of the pipe materials discussed here have viable applications, but not all materials will perform equally well in a given service. Some materials are better suited for postinstallation inspection. Although all pipelines can be inspected to some extent by direct observation and remotely controlled video cameras, steel lines benefit from maturing technologies employing magnetic flux and ultrasound inspection devices (see Chapter 5). Because there is no "miracle" material, the material selection step of the design process is partly a process of maximizing the desirable properties while minimizing the undesirable properties. The initial cost of the material is not an insignificant property to be considered. However, the long-term "cost of ownership" is a better view of the economics of a particular material selection. The cost of ownership would include ongoing maintenance costs and replacement costs after the design life has expired. This presents a more realistic measure with which to select a material and ultimately impacts the risk picture more directly. The evaluator should check that pipe designs include appropriate consideration of all loadings and correctly model pipe behavior under load. Design calculations must always allow for the pipe response in determining allowable stresses. Pipe materials can be placed into two general response classes: flexible and rigid. This distinction is a necessary one for purposes of design calculations because, in general, a rigid pipe requires more wall thickness to support a given load than a flexible pipe does. This is due to the ability of the flexible pipe to take advantage of the surrounding soil to help carry the load. A small deflection in a flexible pipe does not appreciably add to the pipe stress and allows the soil beneath and to the sides to carry some of the load. This pipe-soil structure is thus a system of high effective strength for flexible pipes [60] but less so for rigid pipes.
Pipe materials, joining, and rehabilitation

A basic understanding of common pipe materials is important in assessing the risks in this index. Although transmission pipelines are overwhelmingly constructed of carbon steel, distribution lines have historically been built from a variety of materials. Because a distribution system will often be a composite of different materials, it is useful to distinguish between materials that influence the risk picture differently. The material's behavior under stress is often critical to the evaluation. A more brittle material has less impact resistance. Impact resistance is particularly important in reducing the severity of outside force loadings. In regions of unstable ground, materials with higher toughness will better resist the stresses of earth movements. Traffic loads and pipe handling activities are other stress inducers that must be withstood by properties such as the pipe material's fatigue (cracking) and bending (tensile) strengths. Stresses resulting from earth movements and/or temperature changes may be more significant for certain pipe materials. In certain regions, a primary ground movement is caused by the seasonal freeze/thaw cycle. One study shows that in some pipe materials, as temperature decreases, pipe breaks tend to increase exponentially [51].

Some common pipe materials, often found in distribution systems, are discussed below.

Rigid pipe
Asbestos cement pipe is generally viewed as a rigid pipe although it does have a limited amount of flexibility. Because asbestos fibers and dust are hazardous to health, special care is warranted in working around this material if airborne particles are generated. This pipe has been used in both pressurized and gravity-flow systems. Clay pipe is a low-strength material historically used in nonpressure applications. The advantages of the material include high abrasion resistance and high resistance to corrosion. Concrete pipe includes several designs such as prestressed concrete cylinder pipe, reinforced concrete cylinder pipe, reinforced concrete noncylinder pipe, and pretensioned concrete cylinder pipe. These pipes are available in medium to large sizes and are typically used in nonpressure to moderately pressurized systems. In recent years, large leaks have resulted from failed concrete pipe where the steel reinforcement has corroded and the pipe has failed in a brittle fashion [60]. Cast iron pipe, also called gray cast iron, is a part of the pipeline infrastructure in many countries. The first gas distribution systems installed in the United States were almost entirely of cast iron pipe. More than 50,000 miles of cast iron pipe remain in the U.S. distribution systems [15]. Cast iron pipe is
relatively brittle and is subject to graphitization, a form of corrosion. Its brittle nature allows for more dramatic failure modes such as rapid crack propagation and circumferential breaks. Such failures are potentially much more severe than more ductile failure modes commonly seen in today's pipe materials. Smaller diameter cast iron pipes have reportedly been more prone to failure. There is also statistical evidence that cast iron installed after 1949 (18-ft segments) experiences a higher frequency of breaks than does pre-1949 (12-ft segments) cast iron [51]. Alternate pipe materials have more satisfactory properties. In many locations, active efforts are being made to replace all cast iron piping in gas service. A prioritization program to drive such replacements will often rate pipe sections based on their proximity to occupied buildings, susceptibility to earth movements, leak history, size, and operating pressure. In other areas, cast iron has been shown to provide centuries of good performance with no replacement programs planned. Today, rigid pipes are most commonly installed for low-pressure or gravity-flow water and wastewater applications.
Flexible pipe

Steel is a flexible material and is the most commonly used material for high-pressure hydrocarbon transmission pipelines and high-pressure applications in general. Steel is also a common material for lower pressure municipal applications. The higher strength steels (>35,000-psi yield stress) are less common in the lower pressure service seen in most distribution systems. When used as a gravity-flow conduit, steel pipe cross sections are frequently noncircular and have a corrugated wall for a better strength-to-wall-thickness relationship. Because carbon steel is susceptible to corrosion, coatings and linings of bitumen-type materials, Portland cement, and polymers are common. The use of galvanized or aluminized steel is also an anticorrosion option. Copper is sometimes used in lower pressure piping applications. Copper is susceptible to galvanic corrosion and is a very ductile material. It is normally used in small-diameter pipes. Ductile iron pipe is the more flexible iron pipe that has replaced cast iron. The addition of magnesium in the formation of the pipe has improved the material toughness. Ductile iron pipe, as its name implies, is more fracture resistant than cast iron pipe. Because both external and internal corrosion are potential problems, lining materials such as cement mortar and external wrappings such as polyethylene are used when soil conditions warrant. Occasionally, cathodic protection has been employed in preventing corrosion in buried ductile iron. Although ductile iron is found in gas distribution systems, today, it is mainly placed in water and wastewater service. Plastics are now a common material for pipe construction. Advantages cited include low cost, light weight, ease of installation, and low corrosion susceptibility.
Drawbacks include difficulties in line location after installation, susceptibility to damage (plastics generally are less strong than steels), some degree of gas permeability, and certain difficulties in the joining process. Also, the buildup of static electricity charges in plastic lines is a well-known phenomenon that requires special precautions to prevent possible sparking. Two categories of plastics are available: thermosets (or thermosetting plastics, FRP) and thermoplastics (PVC, PE, ABS). The thermoset is characterized by its inability to be melted or
reformed after it has been set. The set is the curing process of the plastic and usually occurs under application of heat or in the presence of certain chemical agents. A thermoplastic, on the other hand, can be repeatedly softened and rehardened by increases and decreases of temperature, respectively. The most common thermoplastic piping material is polyvinyl chloride (PVC). In the United States, PVC accounts for the vast majority of all plastic pressurized water pipe and sewer pipe. It came into widespread use in the 1960s, but was first used in Germany in the 1930s [60]. PVC is very durable, inert to water, corrosion resistant, and resistant to biological degradation. But it has less stiffness and impact resistance than some other pipe materials and can fail in a brittle fashion. Polyethylene pipe is another popular plastic pipe. In the United States, a majority of new and replacement distribution pipelines in recent years have been made from PE [21]. PE is available in several formulations, some of which may be more susceptible to environmental stress cracking. Stress corrosion cracking is a phenomenon seen in higher stress conditions if the pipe material is simultaneously weakened by its interaction with certain chemicals. PE is popular in gas distribution systems. Its flexibility offers a measure of protection against external forces caused by earth movements. It also allows the pipe to be crimped as a means to shut off flow. This weakens the pipe at the crimping location and generally requires a reinforcing sleeve when the line is placed back in service, but is nonetheless a valuable feature. A high-density PE formulation is available for higher pressure applications; a medium-density PE is normally used in low-pressure applications. A substantial material cost savings is often associated with lower density PE versus high density, but this of course has accompanying tradeoffs in desirable properties.
Acrylonitrile-butadiene-styrene (ABS) is a material seen primarily in nonpressure applications (vents, drains, small-diameter sewers). Polybutylene, cellulose acetate butyrate, and styrene rubber are other less common thermoplastic materials used in pipe manufacture. Among thermosets, fiberglass reinforced plastic (FRP) pipe employs a thermoset resin and fiberglass for reinforcing. It is used in both pressure and nonpressure applications, but is not as common as the thermoplastics. Unraveling is a common failure mode.
Joining

In any pipeline design, provisions must be made to join pieces of pipe. There is a myriad of joining methods available for the various pipe materials found in distribution systems. Welding, bell and spigot connections, couplings, fusions, flanges, and screwed connections can all be found in distribution piping. In many cases, the joint is structurally the weakest part of the pipeline. Joint type has been identified as a critical factor in pipeline susceptibility to seismically induced failures (see pages 112-113). Ensuring a continuous anticorrosion coating or lining across a joint is also a challenge. The number of joints in a pipeline design depends on the length of the pieces of pipe that are to be joined. Although there are practical considerations such as the length of pipe that can be economically produced, transported, and handled during installation, the number of joints is normally minimized in a good pipeline design. The evaluator should take note of the joining technique and its
susceptibility to failure, especially when joint failures are characterized by complete separation of the pipe sections. A rating scheme can be devised to assess pipelines with more problematic joints, that is, those that historically have failed more often or more catastrophically in certain environments. Joining designs and installation processes are also covered in the incorrect operations index.
Rehabilitated pipelines

In some portions of distribution systems, replacement of pipelines by conventional open-cut methods is impractical, extremely costly, and/or unacceptably disruptive to the public. Adverse environmental impact, road closures, traffic delays, site restorations, and other disruptions to the community are challenges to urban pipeline rehabilitation. Trenchless techniques are now often being used to minimize these impacts. Common trenchless pipe rehabilitation techniques involve the insertion of a liner of some type into an existing pipeline whose integrity has become compromised. Liner materials include synthetic fibers, polyurethane membranes, textile hose, and high-density polyethylene. Sometimes the liner is bound to the existing pipe wall with an adhesive; at other times a friction fit locks the two systems together. Sometimes, a smaller pipeline is merely inserted into the line to be rehabilitated, where the existing line becomes only a conduit for the new line. To compensate for the reduced diameter, the newer line can be designed for a higher operating pressure and/or have a lower resistance to flow. From a risk viewpoint, these composite material systems may require special consideration (see Chapter 5). Because some liner techniques are relatively new, in-service failure modes are not well defined. Possible gas migration through a liner (on a molecular level) can pressurize an annular space between the liner and the original pipe wall, which may not be intended to contain pressure. Composite systems also bring with them challenges for leak pinpointing, should the new liner develop a leak. The evaluator should incorporate failure experience into the evaluation of such systems as it becomes available. We now take a look at the Chapter 5 design index variables as they apply to distribution systems. Table 11.5 lists the variables and their possible weights for a distribution system risk assessment, which are discussed in the following subsections.
Safety factor

Table 11.5 Design index possible variables and weights

Variable                   Weight
Safety factor              30
Fatigue                    15
Surge potential            15
Integrity verifications    20
Land movements             20
Design index total         100

Pipeline strength is characterized in this part of the risk model. Pipe wall thickness, above what is needed for internal stresses
and known loadings, provides a margin of safety against unanticipated loads as well as an increased survival time when corrosion or fatigue mechanisms are active. If nonpipe components are in the section being evaluated, their strengths should also be considered in calculating safety margins. Inspection may reveal areas of wall loss, pinhole corrosion, graphitization (in the case of cast iron), and leaks. This information should be included in the model to adjust the estimated wall thickness. When actual wall thickness measurements are not available, the nominal wall thickness can be adjusted by an estimated corrosion rate or a conservative assumption based on material type, age, and suspected deterioration mechanisms. In scoring the safety factor, the evaluator should take into account material differences and other pipe design factors peculiar to distribution systems. This can be done by first scoring the variable as described on pages 94-102 and then adjusting this score by material considerations when it is deemed appropriate to do so. Table 7.3 shows the material toughness for some materials commonly seen in distribution piping. When the evaluator feels that the type of material limits its usefulness as extra pipe wall thickness, he can adjust the pipe safety factor accordingly. In deciding whether normal or maximum pressures are to be used in calculating safety margins, special attention should be given to the design of pressure regulation for the distribution system (see also page 94).
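The wall-thickness adjustment described above can be sketched as a simple calculation. The linear-deterioration assumption, the numbers, and the names below are illustrative only:

```python
def effective_wall(nominal_in, corr_rate_in_per_yr, age_yr):
    """Estimated remaining wall after deducting assumed linear
    corrosion loss; floored at zero."""
    return max(nominal_in - corr_rate_in_per_yr * age_yr, 0.0)

def safety_margin(remaining_in, required_in):
    """Ratio of remaining wall to the wall needed for internal
    stresses and known loadings."""
    return remaining_in / required_in

# Hypothetical section: 0.25-in. nominal wall, assumed 0.002 in./yr
# corrosion rate, 40 years in service, 0.10 in. required by design.
wall = effective_wall(0.25, 0.002, 40)      # about 0.17 in. remaining
print(round(safety_margin(wall, 0.10), 2))  # → 1.7
```

A margin near or below 1.0 would indicate little or no wall thickness beyond what the known loadings require, warranting a low score for this variable.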
Fatigue

Note that traffic loadings can be a significant source of fatigue on distribution system components. Score this item as described on pages 102-104.
Surge potential

Score as described on pages 104-105. Note that this item applies only to transported fluids that can generate surges. This usually excludes highly compressible fluids (gases).
Integrity verifications

In hydrocarbon transmission pipelines, inspection plays a large role in integrity management. For most hydrocarbon transmission (and increasingly for gathering systems also), it is imperative to ensure that the system integrity will not be compromised and to quickly detect any size leak should system integrity fail. As such, many inspection techniques have been developed to detect even the most minor flaw in continuously welded steel pipelines, by far the most prevalent type of high-pressure pipeline. The application of these techniques and the frequency of application play large roles in risk management and, in fact, are the basis of some regulatory initiatives. Distribution system integrity verification includes pressure testing, acoustic or electrical conductivity testing for reinforced concrete pipe materials, visual inspections, and others. Where inspection/monitoring techniques are used to verify distribution system integrity, risk reduction can be noted. However, inspection does not usually play a significant role in most nontransmission pipeline systems. Few in situ inspection techniques exist or are practical to accommodate the complicated configurations of branches, components, and
customer disruption potential, much less the wide variety of materials and joint types commonly seen in distribution systems. It has even been reported that certain physical inspections may actually increase leak rates in older, low-pressure pipelines: the act of temporarily removing earthen cover and side support can actually increase leak rates in certain situations [40]. As already noted, distribution system leakage is normally more tolerable, with some amount of leakage acceptable even for some newly installed systems. Leaks often replace inspection as the early warning system for distribution pipelines. It is normally conservatively assumed that some deterioration mechanisms are active in any pipeline (even though this is certainly not the case in many systems). As time passes, these mechanisms have an opportunity to reduce the pipe integrity. A good risk assessment model will show this possibility as increased failure probability over time. An assumed deterioration rate is confirmed by inspection in hydrocarbon transmission pipelines and often by the presence of leaks in other systems. An effective inspection has the effect of "resetting the clock" in terms of pipeline integrity since it can show that loss of integrity has indeed not occurred (or deficiencies can be cured when detected) and that it is appropriate to assume a certain level of system integrity or strength. Careful monitoring of leaks also confirms assumed deterioration in the case of some distribution systems. Integrity is often not thought to be compromised unless or until leaks are seen to be increasing over time in such systems. Only an unacceptably high and/or increasing leak rate, above permissible original installation leak rates, would be an indication of loss of integrity. So, leak detection surveys can be credited as a type of integrity verification when results are intelligently and appropriately used to assess integrity.
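As a simple illustration of using leak-survey results this way, annual leak counts for a section can be checked for an increasing trend, for example via a least-squares slope. The data, threshold, and names below are invented for illustration and are not a method from the book:

```python
def leak_trend_slope(annual_leaks):
    """Least-squares slope of leak count vs. survey year
    (leaks per year, per year)."""
    n = len(annual_leaks)
    x_mean = (n - 1) / 2
    y_mean = sum(annual_leaks) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(annual_leaks))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Five consecutive annual leak-survey counts for one section (invented):
counts = [4, 5, 7, 9, 12]
print(leak_trend_slope(counts) > 0)  # an increasing trend → True
```

A sustained positive slope, or counts exceeding the permissible original installation leak rate, could then reduce the integrity-verification credit for that section.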
Although visual inspections with cameras are sometimes used to inspect pipe interiors, and some tools exist to assess the integrity of steel reinforcements of some concrete pipes, the use of sophisticated internal inspection devices such as intelligent pigs is relatively rare in distribution systems. This variable will therefore not play a significant risk-mitigating role in most cases. If a distribution system does use these devices or other means for inspecting the pipe wall, the scoring can be kept consistent with the transmission pipeline model. Post-installation pressure testing can be assessed as an integrity verification technique as discussed in Chapter 5. The tracking and evaluation of leak rates can also be assessed as part of this variable scoring. Opportunities for direct assessment of excavated pipe can provide indications of current integrity and can be used with zones of influence (see Chapter 8) or statistical sampling thinking to credit these efforts as integrity verifications. Formal assessments of coating or pipe condition should be minimum requirements for awarding of point credits when scoring these activities. The evaluator may also choose to include the inspection information from other variables such as leak surveys, corrosion control surveys, and effectiveness of coating and cathodic protection systems.
Land movements

The risk variable of land movements assesses the potential for damaging geotechnical events. This includes seismic events such as fault movements and soil liquefaction in addition to
potentially damaging events of soil shrink-swell, subsidence, erosion, landslide, scour, and others as described in Chapter 5. Differences in pipe material properties will complicate the modeling of distribution system pipeline vulnerability to land movements. Larger diameter pipelines made from more flexible materials and joining processes that create a more continuous structure, such as a welded steel pipeline, have historically performed better in seismic events. In colder regions, failure considerations exist that are not present in more temperate climates. These are related to soil movements from frost action and subsurface temperature changes. Seasonal changes in moisture content and temperature effects have been correlated with both water and gas distribution system break rates in many studies. These are often shown to be at least partially related to soil movements and resulting changes in stresses on the buried pipe. Where such correlations are established, they can be used in risk assessment and break forecasting efforts as well as in comparative risk assessments between regions with differing climates. Score this variable as described on pages 105-110.
IX. Incorrect operations index

As noted in Chapter 6, human error potential is perhaps the most difficult failure mode to assess. An important point in assessing this is the supposition that small errors at any point in a process can leave the system vulnerable to failure at a later stage. With this in mind, the evaluator must assess the potential for human error in each of four phases in pipelining: design, construction, operation, and maintenance. A slight design error may not show up for years when it is suddenly the contributor to a failure. By viewing the entire process as a chain of interlinked steps, possible intervention points can be identified. These are opportunities where checks or inspections or special equipment can be inserted into the process in order to avoid a human-error-type failure. It is a valid observation that human error is also a factor in each of the other failure mechanisms. Partly as a modeling convenience, this index is designed to capture all types of human error potential in a single part of the model. This recognizes that the same variables would apply in most other failure modes and it makes sense to evaluate such variables in a single place in the model. This approach has the added benefit of facilitating more efficient mitigation since human error issues can be more readily assessed and addressed in a wholesale fashion. So, in this index, variables that are thought to increase or decrease the potential for human-error-precipitated failures are examined (Table 11.6).

Table 11.6 Incorrect operations index possible variables and weights

Variable                            Weight
Design                              30
Construction                        20
Operations                          35
Maintenance                         15
Incorrect operations index total    100
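An index such as this rolls its variable scores up into a weighted sum. The sketch below uses the Table 11.6 weights; the 0.0 (worst) to 1.0 (best) variable scores and the function are invented for illustration, not the book's point-assignment scheme:

```python
# Weights from Table 11.6 (they sum to 100)
WEIGHTS = {"design": 30, "construction": 20, "operations": 35, "maintenance": 15}

def index_score(variable_scores, weights):
    """Roll 0.0 (worst) to 1.0 (best) variable scores up into a
    0-100 index via a weighted sum."""
    return sum(weights[name] * variable_scores[name] for name in weights)

scores = {"design": 0.6, "construction": 0.5, "operations": 0.8, "maintenance": 0.4}
print(round(index_score(scores, WEIGHTS), 1))  # → 62.0
```

Scoring each variable on a common 0-1 scale before weighting keeps the weights themselves as the only place where relative importance is expressed.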
Design

In general, the potential for human error in the design phase can be evaluated as described in Chapter 6, with some additional considerations as discussed below. In addition to the previously noted definitions of failure, other failure modes, such as overpressure of the customer's facilities, infiltration of contaminants, service interruption, and the failure of a gas odorization system, can be especially important in a distribution system risk assessment. Because facilities designed to operate at various pressures are interconnected in most distribution systems, special attention should be paid to prevention of overpressure. This may include overpressure protection for systems downstream of the distribution pipes, if the evaluation considers such risks. A common design practice in distribution systems is the installation of redundant pressure control to protect downstream components from overpressure. This is accomplished via an internal fail-safe feature in one regulator or through the use of two regulators (or both). Installed in series, the second regulator is designed to control pressure should the first regulator fail. Detection of a failed primary pressure control should be part of a regular maintenance program. It is often (but not always) the responsibility of the distribution system to protect the customer from overpressure. When this is the case, the evaluator should examine the system capabilities and safety systems designed to prevent overpressure of downstream equipment. The practice of odorization of gas in distribution systems is a leak detection provision used to reduce the impact of a pipeline failure or to alert individuals of faulty or leaking equipment. As such, it is covered mostly in the leak impact factor section of the risk model (see Chapter 7).
Construction

Complete construction records of the distribution facilities are often unavailable due to the age of many systems, the construction philosophies of the past, and record-keeping practices. Evidence to score construction-related items might have to be accumulated from information such as leak/failure histories, visual inspections of the systems, and comparisons with similar systems in other areas. As previously discussed, protection of the pipeline from third-party damage is critical in most distribution systems. When part of the damage prevention program relies on accurate drawings and records, the evaluator should examine the error potential of the documentation program. This includes as-built construction documentation in particular. The potential for human error during the construction phase can be generally evaluated as detailed in Chapter 6.
Operations
The evaluation of operations-phase human error warrants some discussion specifically for distribution systems. This variable is best examined in several parts, as described below.
Procedures
Score as described in Chapter 6, with additional considerations as discussed below.
Locating processes (finding and marking buried utilities prior to excavation activities) are important for any subsurface system, but perhaps especially so for distribution systems, which often coexist with many other subsurface structures. These procedures may warrant additional attention in this evaluation. With the high activity level commonly seen around urban distribution systems, the operating company usually devotes a significant amount of resources to receiving notifications of digging activities and then marking owned facilities and communicating with the notifying party. Whereas the same evaluation technique used for transmission lines can be used for distribution lines, the evaluator of a distribution system should be alert to a heavy reliance on drawings and records to locate lines, and to the discipline of the line-locating program in general. Any history of line strikes (lines being struck by excavating equipment) after locating was done should be investigated.
SCADA/communications
As a means of early problem detection and human error reduction, the effectiveness of a SCADA system, control center, or communications protocols can be scored as shown in Chapter 6, with additional considerations as discussed below. In transmission pipelines, the use of SCADA systems and/or other systems of regular communications between field operations and a central control is a suggested intervention point for human error reduction. The nature of distribution systems, however, does not normally benefit to the same degree from this error avoidance measure. By their design, distribution systems operate at lower pressures and are intended to respond constantly to changing conditions as customers increase and decrease their product use. Point values for this variable should reflect the somewhat reduced role of SCADA and communications as a risk reducer in distribution systems.
Drug testing
Score this item as described in Chapter 6.
Safety programs
Score this item as detailed in Chapter 6.
Surveys/maps/records
The role of surveys, maps, and records as potential error reducers is discussed in Chapter 6. The evaluation suggested there applies for distribution systems as well. As a special type of survey in gas distribution systems, leak surveys are usually a normal undertaking and may warrant special attention in the evaluation, as discussed next.

Leak surveys
The first determination for the risk role of leakage surveys is whether they play a role mostly in terms of failure avoidance or consequence minimization. It can be argued that a leak detection survey should be scored in the leak impact factor because such a survey acts as a consequence-limiting activity: the leak has already occurred and, under special circumstances, early detection would reduce the potential consequences of the leak. This is the logic behind the discussion of leak detection in Chapter 7. However, the situation for distribution systems is thought to be different. Leakage is more routine (and even expected, for reasons previously noted), and leak detection and repair is a normal aspect of operations. Distribution systems tend to have a higher incidence of leaks compared to transmission systems, due to differences in age, materials, construction techniques, and operating environment between the two types of pipelines. With the increased opportunity for leaked products to accumulate beneath pavement, in buildings, and in other dangerous locations, and with the higher population densities seen in distribution systems, this higher leak propensity becomes more important, especially for gas distribution. Furthermore, leak rates often provide early warning of deteriorating system integrity. Therefore, attention to leaks should be a strong consideration in assessing the risks of distribution systems. Regular leakage surveys are routinely performed on gas distribution systems in many countries. Hand-carried or vehicle-mounted sensing equipment is available to detect trace amounts of leaking gas in the atmosphere near ground level. Flame ionization detectors (FID), thermal conductivity, and infrared detection are some of the technologies commonly used in leak detection equipment. The use of trained animals, usually dogs, to detect small leaks is a ground-level technique that has also been successful. One of the primary means of leak detection for gas distribution is the use of an odorant in the gas to allow people to smell the presence of the gas before flammable concentrations are reached. As a special type of leak detection, the use and potential failure of the odorization system can be covered in the leak impact factor. Other types of leak detection techniques include [6]:

- Subsurface detector survey, in which atmospheric sampling points are found (or created) near the pipe. Such sampling points include manways, sewers, vaults, other conduits, and holes excavated over the pipeline. This technique may be required when conditions do not allow an adequate surface survey (perhaps high wind or surface coverage by pavement or ice). A sampling pattern is usually designed to optimize this technique.
- Vegetation survey, which is also done on transmission lines as a part of routine air patrol. The observer seeks visual indications of a leak such as dying vegetation, bubbles in water, or sheens on the water or ground surface.
- Pressure loss test, in which an isolated section of pipeline is closely monitored for loss of pressure, indicating a leak.
- Ultrasonic leak detectors, in which instrumentation is used to detect the sonic energy from an escaping product.
- Bubble leakage test, used on exposed piping, in which a bubble-forming solution is applied and observed for evidence of gas leakage.
Other leak detection techniques more commonly seen in transmission systems are discussed beginning on page 160. It is beyond the scope of this text to offer specific guidance on the effectiveness of various leak surveying methods. The effectiveness of many leak surveys often depends on environmental factors such as wind, temperature, and the presence of other interfering fumes in the area. Therefore, specific survey conditions and the technology used will make many evaluations situation specific. An estimate of survey effectiveness (0-100%) can be made part of the risk assessment. A default for test effectiveness can be used when no further information is available; a value such as 70% might be an appropriate default. This can be combined with two more factors to score this variable: amount of system surveyed and time since last survey (see discussion of information decay, Chapter 2). A possible scoring algorithm could therefore be:

time factor (%) = 100% - (10% x years since last test)

Leak survey score = (maximum points) x (test effectiveness) x (amount of system tested) x (time factor)

For example, a test method deemed to be 80% effective and performed annually over 50% of the system would score 9 x (0.8) x (0.5) x (0.9) = 3.2 if the variable weighting is 9 points. The operator's use of established procedures to positively locate a leak can be included in this assessment. Follow-up actions, including the use of leak rates to assess system integrity and the criteria and procedures for leak repair, should also be considered. This variable can logically be weighted higher than suggested in Chapter 6 due to leak surveys' increased role in distribution systems. The risk model designer should determine the weighting based on consideration of all other failure variables.
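Expressed as code, the scoring algorithm and worked example above might look like the following sketch (the function and parameter names are ours, not from the text):

```python
def leak_survey_score(max_points, test_effectiveness, fraction_surveyed,
                      years_since_last):
    """Relative leak survey score (higher = lower risk)."""
    # Time factor: 100% minus 10 percentage points per year since the
    # last survey (information decay), floored at zero for very old data.
    time_factor = max(0.0, 1.0 - 0.10 * years_since_last)
    return max_points * test_effectiveness * fraction_surveyed * time_factor

# Worked example from the text: an 80% effective method, surveyed annually
# (1 year since last test) over 50% of the system, 9-point weighting.
score = leak_survey_score(9, 0.80, 0.50, 1)  # about 3.2 points
```

When nothing is known about test effectiveness, the suggested 70% default could be passed in its place.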
Training
Score this item as described in Chapter 6.
Mechanical error preventers
The role of error prevention devices can be evaluated as discussed in Chapter 6. As noted there, error prevention devices might include:

- Three-way valves with dual instrumentation
- Lock-out devices
- Key-lock-sequence programs
- Computer permissives
- Highlighting of critical instruments

Points are added for each application up to a maximum number of points. If a section that does not have any applications (and hence no opportunity for this type of error) is being evaluated, the maximum points are awarded. Note that in scoring a section for this item, upstream sections may need to be considered because the error can occur there and affect all downstream sections.
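The scoring logic just described (points per application, capped at a maximum, with full points awarded when no applications exist) can be sketched as follows; the function name and structure are illustrative assumptions:

```python
def error_preventer_score(device_points, max_points):
    """Score mechanical error preventers for a pipeline section.

    device_points: point credits for each device application found in this
    section (and in relevant upstream sections, since an error upstream
    can affect all downstream sections).
    """
    if not device_points:
        # No applications means no opportunity for this type of error,
        # so the section is awarded the maximum points.
        return max_points
    # Points accumulate per application, capped at the maximum.
    return min(sum(device_points), max_points)

# e.g., three devices worth 2 points each against a 5-point cap scores 5
```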
Maintenance
A low score in maintenance should cause doubts regarding the adequacy of any safety system that relies on equipment operation. Because overpressure protection is identified as a critical aspect in a distribution system, maintenance of regulators and
other pressure control devices is critical. The evaluator should seek evidence that regulator activity is monitored and periodic overhauls are conducted to ensure proper performance. Other pressure control devices should similarly be closely maintained. The care of an odorization system in a gas distribution system should also be included with maintenance procedures. Score the maintenance practices as described in Chapter 6.
X. Sabotage
The risk of sabotage is difficult to fully assess because such risks are so situation specific and subject to rapid change over time. The assessment would be subject to a great deal of uncertainty, and recommendations may therefore be problematic. Note, however, that many current risk variables and possible risk reduction measures overlap the variables and measures that are normally examined in dealing with sabotage threats. These include security measures, accessibility issues, training, safety systems, and patrol. The likelihood of a pipeline system becoming a target of sabotage is a function of many variables, including the relationship of the pipeline owner with the community and with its own employees or former employees. Vulnerability to attack is another aspect. In general, the pipeline system is not thought to be more vulnerable than other municipal systems. The motivation behind a potential sabotage episode would, to a great extent, determine whether or not this pipeline is targeted. Reaction to a specific threat would therefore be very situation specific. Guidance documents concerning vulnerability assessments for municipal water systems are available and provide some potential input to the current risk model. An effort could be undertaken to gather this information and incorporate sabotage and terrorism threats into the assessment, should that be desirable. See Chapter 9 for more discussion on sabotage issues and ideas for risk assessments.
XI. Leak impact factor
In general, the leak impact factor (LIF) for a distribution system can be scored in a manner similar to that described in Chapter 7. Some key points of consequence assessment are reiterated and some considerations specific to distribution systems are discussed below. As in the transmission model, both multiplication and addition operations can be used to better represent real-world relationships. For example, a top-level equation,

LIF = (product hazard) x (spill) x (dispersion) x (receptors + outage),
captures the idea that the overall consequences are proportional to the spill size and product hazard. If either variable is zero (no spill or no product hazard), then there are no consequences. It also shows that locations where both receptor damage and losses due to service interruption (outage) are high are the most consequential. As either or both of these are reduced, so too is overall consequence.
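A minimal sketch of the top-level equation, with argument names of our own choosing:

```python
def leak_impact_factor(product_hazard, spill, dispersion, receptors, outage):
    # Multiplicative terms: if there is no spill or no product hazard,
    # there are no consequences at all.
    # Additive terms: receptor damage and outage losses each raise the
    # overall consequence; reducing either reduces it.
    return product_hazard * spill * dispersion * (receptors + outage)
```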
Product hazard
Note that a chronic component of a product hazard is often enhanced where a leaking product can accumulate in buildings, beneath pavement, etc. This is generally considered when assigning RQ points to substances such as methane. The evaluator is encouraged to review pages 138-142 to ensure that the reasoning behind the RQ assignments is appropriate for the evaluation. In the case of water systems, the main product hazard will be related to the more mechanical effects of escaping water. This includes flooding, erosion, undermining of structures, and so on. The potential for people to drown as a result of escaping water is another consideration. The product hazard variable can be assessed as described on pages 136-142.
Spill size and dispersion
One of the chief concerns of gas distribution system operators is the potential for a hazardous material to enter a building intended for human occupancy. In a city environment, the potential is enhanced because gas can migrate for long distances under pavement, travel through adjacent conduits (sewer, water lines, etc.) or permeable soils, or find other pathways to enter buildings. For more catastrophic pipe break scenarios, and as a modeling simplification, spill size can be modeled as a function of only pipe diameter and pressure, as discussed on pages 142-143. The underlying assumption in most consequence assessments is that higher spill quantities result in higher potential damages. The drain volume and flow stoppage time (reaction time) are determining factors for total volume released on water systems. In simplest terms, low spots on large-diameter, high-flow-rate pipelines can be the sites of the largest potential spills, and larger diameter, higher pressure gas pipeline mains can generally cause greater releases. As discussed in Chapter 7, leak size is also a function of the failure mechanism and the material characteristics. Smaller leak rates tend to occur from corrosion (pinholes) or some design failure modes (mechanical connections). The most costly small leaks occur below detection levels for long periods of time. Larger leak rates tend to occur under catastrophic failures such as external force (equipment impact, earthquake, etc.), avalanche crack failures, and system shocks to graphitized cast iron pipes. In assessing potential hole sizes, the failure mechanism and pipe material properties would ideally be considered. As noted, a failure mechanism such as corrosion is characterized by a slow removal of metal and, hence, is generally prone to producing pinhole-type leaks rather than large openings. Outside forces, especially when cracking is precipitated, can cause much larger openings.
The final size of the opening is a function of many factors including stress levels and material properties such as toughness. Because so many permutations of factors are possible, hole sizes can be highly variable. The risk reduction benefits of a leak detection and response system can be captured in the spill score. The ability to reliably minimize the exposure time or area of exposure needs to be measured, at least in some general way, in order to score these aspects. The leak detection/reaction capabilities can be assessed at all points along the pipeline and are a function of
instrumentation, ability to stop flows, and abilities to mobilize and execute loss-minimizing reactions. Spreading and accumulation effects also determine consequences for spilled liquids. Depending on the receptor, damages from a water system might be greater from spill accumulation (deeper flood waters) or from surface flow rates (erosion effects or force of flowing water). A distinction between the two scenarios could be made in a risk model. Slope and land-use factors leading to an estimate of relative resistance to surface flow would logically be included in the evaluation.
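As one hedged illustration of the modeling simplification discussed above (spill size as a function of only pipe diameter and pressure), a relative release score for gas lines could be taken to scale with hole area (roughly diameter squared) and line pressure. The function below is our sketch, not a formula from the text:

```python
def relative_release_score(diameter_in, pressure_psig):
    # Modeling convenience: potential release is assumed to scale with
    # hole area (proportional to diameter squared) and with line pressure.
    # This is a relative ranking score, not an engineering flow calculation.
    return diameter_in ** 2 * pressure_psig
```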
Gas odorization
As a special leak detection and early warning system for most natural gas distribution systems, gas odorization warrants further discussion. An important component of the leak impact from natural gas distribution systems is the use of odorization. Methane has very little odor detectable to humans. Natural gas that is mostly methane will therefore be odorless unless an artificial odorant is introduced. It is common practice to inject an odorant at such levels that gas will be detected at levels far below the lower flammable limit of the gas in air (often one-fifth of the flammable limit, meaning that accumulations of 5 times the detection level are required before fire or explosion is possible). This allows early warning of a gas pipe leak anywhere in the system or in a customer's building and reduces the threat of human injury. Gas odorization can be a more powerful leak detection mechanism than many other techniques discussed. While it can be argued that many leak survey methods detect gas leaks at very low levels, proper gas odorization has the undeniable benefit of alerting the right people (those in most danger) at the right time.
Odorization system design
Aspects of optimum system design include selection of the proper odorant chemical, the proper dosage to ensure early detection, the proper equipment to inject the chemical, the proper injection location(s), and the ability to vary injection rates to compensate for varied gas flows. Ideally, the odorant will be persistent enough to maintain required concentrations in the gas even after leakage through soil, water, and other anticipated leak paths. The optimum design will consider gas flow rates and odorant absorption in some pipe materials (new steels) to ensure that gas at any point in the distribution piping is properly odorized.
System operation/maintenance
Odorant injection equipment is best inspected and maintained according to well-defined, thorough procedures. Trained personnel should oversee system operation and maintenance. Inspections should be designed to ensure that proper detection levels are seen at all points on the piping network. Provisions are needed to quickly detect and correct any odorization equipment malfunctions.
Performance
Evidence should confirm that odorant concentration is effective (provides early warning of potentially hazardous concentrations) at all points on the system. Odorant levels are often confirmed by tests using human subjects who have not been desensitized to the odor. When new piping is placed in service, attention should be given to possible odorant absorption by the pipe wall. "Over-odorizing" for a period of time is sometimes used to ensure adequate odorization. When gas flows change, odorant injection levels must be changed appropriately. Testing should verify odorization at the new flow rates. Odorant removal (de-odorization) possibilities should be minimized, even as gas permeates through soil or water. The role that a given gas odorization effort plays as a consequence reducer can then be scored as follows:
High-reliability odorization: consequence reduction
A modern, well-maintained, well-designed system exists. There is no evidence of system failures or inadequacies of any kind. Extra steps (above regulatory minimums) are taken to ensure system functioning. A consistent, naturally occurring odor that allows early detection of a hazardous gas can fall into this category if the odor is indeed a reliable, omnipresent factor.

Odorization: no point change
This is the neutral or default value. Where an odorization system exists and is minimally maintained (by minimum regulatory standards, perhaps) but the evaluator does not feel that enough extra steps have been taken to make this a high-reliability system, no change to the population score is made.

Questionable odorization system: consequence increase
A system exists; however, the evaluator has concerns over its reliability or effectiveness. Inadequate record keeping, inadequate maintenance, lack of knowledge among system operators, and inadequate inspections would all indicate this condition. A history of odorization system failures would be even stronger evidence.

No odorization effort: consequence increase
Despite its use in similar systems, the assessed distribution system does not use odorization and hence, potential consequences are higher compared to otherwise equivalent systems.
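In a scoring model, the four odorization categories above might map to consequence adjustments along these lines. The numeric multipliers below are purely illustrative assumptions (the text assigns no values); a model designer would choose and calibrate them:

```python
# Illustrative consequence-adjustment multipliers for the four odorization
# categories. The numbers are assumptions for demonstration only.
ODORIZATION_ADJUSTMENT = {
    "high_reliability": 0.8,  # consequence reduction
    "default": 1.0,           # odorization present, minimally maintained
    "questionable": 1.2,      # consequence increase
    "none": 1.4,              # no odorization effort; largest increase
}

def adjusted_consequence(base_consequence, category):
    """Scale a base consequence score by the odorization category multiplier."""
    return base_consequence * ODORIZATION_ADJUSTMENT[category]
```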
Receptors
For our purposes, the term receptor refers to any creature, structure, land area, etc., that could "receive" damage from a pipeline failure. The intent of a risk assessment is to capture vulnerabilities of various receptors, at least in a relative way, as discussed in Chapter 7. This vulnerability, coupled with other aspects of the spill scenario, will show locations of greater potential consequences. Receptors at risk from most distribution systems include the following:

- Safety: consequences involving human health issues:
  - Population density
    - Permanent population
    - Transitory/occasional population
    - Special population (restricted mobility)
  - Collateral safety
  - Contamination.
- Property damage: consequences involving property damages and losses:
  - Structure value
  - High-value areas
  - Contents
  - Landscape
  - Collateral.
- Environmental sensitivities: damages to areas that are especially vulnerable to damage, from an environmental viewpoint.
- Business impacts: consequences resulting from business interruptions in the immediate vicinity of the spill and as a direct consequence of spill effects. Damages related to service interruptions are captured in the "outage" aspect of this assessment since such damages are not necessarily limited to the immediate spill vicinity.

Property damage can be assessed through an examination of the following variables: population, property type (commercial, residential, industrial, etc.), property value, landscape value, roadway vulnerability, highway vulnerability, and other considerations.
The model weightings of various receptors should be based on the perceived vulnerability and consequence potential of each receptor. This includes direct damages and secondary effects such as public outrage. Valuing of receptors is discussed beginning on page 165.

Outage
Consequences of distribution system failures can also be categorized as "outage related." These include damages arising from interruption of product delivery, including the relative time of the interruption. (See also Chapter 10 for a detailed discussion of risk of service interruption assessment techniques.) Some customers are more damaged by loss of service than others. It might not be realistic to link specific customers or even customer counts to all potential spill locations. As a surrogate, the volume or pressure transported in any portion of the system could be assumed to be directly proportional to the criticality of that supply. Therefore, failure locations where higher
flow rates are potentially interrupted are modeled to also cause higher outage consequences. In addition, we can assume that the number of users potentially interrupted by a spill at a certain location is proportional to the nearby population. This is an assumption that will be incorrect in situations such as when a transmission line runs through a populated area but does not serve that area directly. Nevertheless, it is correct often enough and tends to overstate rather than understate the risk and, hence, is an appropriate modeling convenience. The interruption time is thought to be a function of ease of repair and response capabilities. Relative repair costs can capture the ease-of-repair aspect and could be measured as a function of variables such as these (underlying assumptions shown also):

- Diameter: larger diameters lead to more expensive repairs due to higher material costs, greater excavation requirements, increased repair challenges, and the need for larger equipment.
- Slope: steeper slopes lead to more expensive repairs due to difficulties in accessing and stabilizing the repair site, the possible need for more specialized equipment, and general increases in time needed to complete repairs.
- Repair readiness: a rating capturing the training and expertise of repair crews, the availability of equipment and replacement parts, and other factors influencing the efficiency with which repairs can be made.
- Surface type: post-excavation repair of concrete and asphalt surfaces is thought to be more expensive.
- Population: in general, increased population density leads to more expensive repairs due to the need for increased protection of the job site, traffic rerouting, avoidance of secondary damages during construction, etc.

Response capabilities can include leak detection capabilities, emergency response capabilities, and availability of make-up supply during an outage.
The latter, availability of make-up supply, can often require a complex network analysis with many assumptions and possible scenarios. As a modeling convenience, availability of make-up could be assumed to be proportional to the normal flow rate under the premise that the greater the flow rate that is interrupted, the more difficult will be the replacement of that supply.
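One hypothetical way to combine the repair-cost variables above into a relative rating; the 0-1 scales and the weights are our assumptions, not values from the text:

```python
def relative_repair_cost(diameter, slope, surface_hardness,
                         population_density, repair_readiness):
    """Relative repair-cost rating for an outage assessment.

    All inputs are 0-1 ratings. Higher values mean more costly conditions,
    except repair_readiness, where higher means better-prepared crews and
    therefore cheaper, faster repairs. Weights are illustrative only.
    """
    cost_drivers = (0.30 * diameter               # bigger pipe, bigger job
                    + 0.20 * slope                # harder site access
                    + 0.20 * surface_hardness     # concrete/asphalt restoration
                    + 0.30 * population_density)  # job-site protection, traffic
    # Good repair readiness offsets part of the cost drivers.
    return cost_drivers * (1.0 - 0.5 * repair_readiness)
```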
Offshore Pipeline Systems

Contents
I. Background 12/243
II. Third-party damage index 12/244
  A. Depth of cover 12/245
  B. Activity level 12/245
  C. Aboveground facilities 12/246
  D. Damage prevention 12/246
  E. Right-of-way condition 12/247
  F. Patrol frequency 12/247
III. Corrosion index 12/247
  A. Atmospheric corrosion 12/248
  B. Internal corrosion 12/248
  C. Submerged pipe corrosion 12/248
IV. Design index 12/249
  A. Safety factor 12/250
  B. Fatigue 12/250
  E. Stability 12/250
  Alternative scoring approach 12/252
V. Incorrect operations index
  A. Design 12/253
  B. Construction
  C. Operations
  D. Maintenance
VI. Emergency response 12/255

I. Background
Since offshore pipelines were first installed in shallow waters in the early 1950s, the technical difficulties of operating and maintaining lines in the subsea environment have challenged the industry. Today, these challenges are multiplied as pipelines coexist with busy harbors, industrial ports, commercial and recreational fishing areas, general recreational areas, environmentally sensitive areas, and other offshore facilities. Deep water had been defined as depths greater than 650 ft (the edge of the Outer Continental Shelf) but is now typically considered to be a depth greater than 1600 ft. Offshore pipelines are routinely installed in water depths of up to 7000 ft, as of this writing. Current technology is allowing installation at ever-increasing depths. In the Outer Continental Shelf waters of the United States, corrosion was the largest single cause of pipeline failures (50%) between 1967 and 1990, with maritime activities accounting for 14% and natural forces for 12% of the remaining known causes of 1047 recorded pipeline failures. Interestingly though, almost all of the deaths, injuries, damages, and pollution episodes were caused by damages from vessels [71]. Deaths and injuries are associated with gas pipelines, which, because of the highly compressed flammable gas, have higher explosive potential than most liquid lines. Even though corrosion caused a greater number of leaks, most of the pollution (in volume of spilled product) was caused by anchor damage [71]. In this data sample, therefore, the most prevalent cause was not the most consequential cause. When shallow water accidents are included in the analysis, it is thought that maritime activities (third-party damage) and natural forces play an even larger role. The dynamic nature of pipeline operations offshore often makes the risk picture more complex than onshore operations. Offshore facilities are normally built to facilitate the recovery
of suspected hydrocarbon fields whose exact location and extent are never precisely known. The costs to recover the hydrocarbons and their value on the world market are similarly estimated values only. Consequently, it is not unusual for a pipeline to be abandoned for long periods of time until economic conditions change to warrant its return to service or until technology overcomes some obstacle that may have idled the line. Many lines are ultimately placed in a service for which they were not originally designed. Pressures, flow rates, velocities, and the composition of the products transported change as new fields are added or existing fields cease production. Ownership of the pipelines can change as new operators feel that they can increase the profitability of an operation. Another aspect of offshore pipeline operations is the higher costs associated with most installation, operation, and maintenance activities. When pipelines are placed in an environment where man cannot live and work without special life-support systems, additional challenges are obvious. Inspection, maintenance, repair, and modification require boats, special equipment, and personnel with specialized skills. Such operations are usually more weather limited and proceed at a slower pace than similar onshore operations, again adding to the costs. Offshore systems are often more vulnerable to weather-related outages, even when no damage to equipment occurs. This is covered in the cost of service interruption assessment in Chapter 10. As with onshore lines, historical safety data of offshore pipeline performance are limited. We cannot currently make meaningful correlations among all of the factors believed to play a significant role in accident frequency and consequence. The factors can, however, be identified and considered in a more qualitative sense, pending the acquisition of more statistically significant data.
For these reasons, and for the sake of consistency, an indexing approach for offshore lines that parallels the onshore pipeline analysis is often the most useful risk assessment option. Offshore pipeline systems are either transmission pipelines (long, larger-diameter pipelines going to shore) or pipelines associated directly with production (flow lines, gathering lines). For purposes of this risk assessment, the two categories are treated the same. The scoring for the offshore risk model will parallel very closely the onshore model for transmission lines described in Chapters 3-7. Although this chapter is primarily aimed at ocean and sea environments, most concepts will apply to some degree to pipeline crossings of rivers, lakes, and marshes. After customization, the offshore risk model could have the following items and associated weightings:

Third-party Damage Index 100%
  A. Depth of Cover 20%
  B. Activity Level 25%
  C. Aboveground Facilities 10%
  D. Damage Prevention 20%
  E. Right-of-way Condition 5%
  F. Patrol Frequency 20%

Corrosion Index 100%
  A. Atmospheric Corrosion 10%
    A1. Atmospheric Exposures 5%
    A2. Atmospheric Type 2%
    A3. Atmospheric Coating 3%
  B. Internal Corrosion 20%
    B1. Product Corrosivity 10%
    B2. Internal Protection 10%
  C. Submerged Pipe Corrosion 70%
    C1. Submerged Pipe Environment 20%
    C2. Cathodic Protection 25%
    C3. Coating 25%

Design Index 100%
  A. Safety Factor 25%
  B. Fatigue 15%
  C. Surge Potential 10%
  D. Integrity Verification 25%
  E. Stability 25%

Incorrect Operations Index 100%
  A. Design 30%
  B. Construction 20%
  C. Operations 35%
  D. Maintenance 15%

Leak Impact Factor
  Product Hazard
  Dispersion
  Spill Score
  Receptors
Some modest changes to some risk variables should be made to account for differences between the onshore and offshore pipelines. Examples of differences include external forces related to sea bottom stability, inspection challenges, ROW issues, and potential consequences. However, most risk model variables will be identical. Sample weightings are shown in the variable descriptions in this chapter. These are determined as discussed in Chapter 2. Weightings should be carefully analyzed by the risk evaluator (or risk model designer) and changed when experience, judgment, or failure data suggest different values are more appropriate. Risers, commonly defined as the portion of the pipeline from the sea bottom up to the platform (sometimes including pig traps and valves on the platform), can be evaluated as part of the pipeline system or, alternatively, as part of a risk assessment for structures like platforms. Note that abandoned facilities may also be included in an assessment as a potential threat to public safety if consequences from the facility are identified (navigation hazard for surface facilities, threat of flotation, etc.). In that case, the assessment variables will need to be modified to reflect the probability and consequences of those particular hazards.
II. Third-party damage index Consistent with the definition in Chapter 3, the term third-party damage as it is used here refers to any accidental damages done to the pipe by the activities of personnel not employed by the pipeline operator. Intentional damages are covered in the sabotage module. Accidental damages done by pipeline personnel are usually covered in the incorrect operations index. In the case of offshore operations, external damage can be associated with personnel performing platform activities or working on other pipelines. Anchoring and dropped objects are examples of damage causes related to nearby work activities. Even though the offending personnel may be employed by the owner/operator company, and hence not technically be 'third parties,' this threat may be more efficiently addressed in this index. Although not the cause of the majority of offshore pipeline accidents, third-party damages appear to be the cause of most of the deaths, injuries, damages, and pollution [71]. Consequently, this is a critical aspect of the risk picture.
A. Depth of cover (weighting: 20%) Cover, as a means to reduce third-party damages, actually has two components in most offshore cases: water cover (depth) and sea bottom burial depth. Each can provide a measure of protection from third-party damage since increasing water depth usually limits the number of activities that could be harmful to the pipeline, and sea bottom cover provides a physical barrier against damage. When depth is sufficient to preclude anchoring, dredging, fishing, and other third-party activities as possible damage sources, failure potential is reduced. When a pipeline poses a known threat to navigation, there is effectively no cover and the threat of impact is usually high. Note that submerged pipelines also have a threat of damage from dropped objects (see discussion of activity level next), which is minimized by protective barriers. Accurate knowledge of the amount of cover is sometimes difficult to obtain. Profile surveys are necessary to monitor constantly changing seabeds. The frequency of surveys should be dependent on water conditions such as wave and current action, and on seabed and bank stability, as is evidenced by historical observation. In scoring the depth of cover, the evaluator must also judge the uncertainty of the knowledge. This uncertainty is dependent on the timing and accuracy of survey data. See the design index (Chapter 5) for a further discussion of survey techniques. Especially susceptible areas for damage are shore approaches and, to a lesser degree, platform approaches. A common practice is to protect the pipelines by trenching to a depth of 3 ft out to a distance of 200 to 500 ft from a platform. However, shore approach protection is inconsistent. Shore approaches are often the most hazardous section of the offshore pipeline. Long-term seabed stability is best when the shoreline is minimally disrupted.
Use of riprap, twin jetties, directional drilling, dredging, and backfilling are common techniques used near shorelines. In many modern installations, a shore approach is directionally drilled and placed well below any depth where normal activities or wave actions can affect the pipeline. The historical performance of a certain technique in a certain environment would be of value in future design efforts and in assessing the stability of the cover. Other types of barrier protection can serve the same purpose as depth of cover, and should be scored based on their effectiveness in preventing third-party damages. Certain barriers may also receive risk mitigation credit in reducing the threat from floating debris and current forces (see design index discussion). Examples of barriers include rock cover, concrete structures, and metal cages. Many offshore pipelines will have a 'weight coating' such as concrete to ensure negative buoyancy
(prevent flotation) and to protect the corrosion coating. This concrete coating provides a measure of protection against impacts and can be considered as a type of cover protection and scored as suggested.
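The negative-buoyancy requirement that motivates weight coating is a simple Archimedes balance: the coated pipe's weight per unit length must exceed the weight of seawater it displaces. The sketch below is illustrative only; the densities, dimensions, and function names are assumptions, and real designs apply specific-gravity margins from the applicable code.

```python
import math

# Hedged sketch of a negative-buoyancy check for a concrete weight coating.
# All densities and dimensions are illustrative assumptions.

STEEL = 7850.0     # kg/m3
CONCRETE = 3040.0  # kg/m3 (high-density weight coating, assumed)
SEAWATER = 1025.0  # kg/m3

def submerged_weight_per_m(d_steel_out, t_steel, t_concrete,
                           contents_density=0.0):
    """Net downward mass per metre (kg/m); a negative result means the
    pipe would float."""
    d_in = d_steel_out - 2.0 * t_steel
    d_out = d_steel_out + 2.0 * t_concrete
    a_steel = math.pi / 4.0 * (d_steel_out**2 - d_in**2)
    a_concrete = math.pi / 4.0 * (d_out**2 - d_steel_out**2)
    a_contents = math.pi / 4.0 * d_in**2
    weight = (a_steel * STEEL + a_concrete * CONCRETE
              + a_contents * contents_density)
    buoyancy = math.pi / 4.0 * d_out**2 * SEAWATER
    return weight - buoyancy

# Example: 0.508-m (20-in.) pipe, 12.7-mm wall, empty (gas-filled) line
bare = submerged_weight_per_m(0.508, 0.0127, 0.0)
coated = submerged_weight_per_m(0.508, 0.0127, 0.040)
```

In this example the bare, empty pipe is positively buoyant, while a 40-mm concrete coating makes it sink, which is the condition the weight coating is meant to guarantee.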
B. Activity level (weighting: 25%) In this variable, the evaluator assesses the probability of potentially damaging activities occurring near the pipeline. For simplicity and consistency, a list of activities or conditions can be generated to guide the assessment. Indications of high activity levels may include high vessel traffic, high density of other offshore structures (including other pipelines), and shoreline development activities. Any of these might increase the opportunity for pipeline damage. More specific activities that could be assessed include fishing, dredging, anchoring, construction, platform activities, excavation, underwater detonations, diving, salvage operations, and recreational boat traffic. Potential damage depends on characteristics of the striking object. Force, contact area, angle of attack, velocity, momentum, and rate of loading are among these characteristics. Potential consequences include damages to coating, weights, anodes, and pipe walls, possibly leading to rupture immediately or after some other contributing event. To better estimate possible loadings that could be placed on the pipeline, fishing and anchoring can be assessed based on the types of vessels, engine power, and type of anchors or fishing equipment. Although anchoring is usually forbidden directly over a pipeline, the setting of an anchor is imprecise. Anchoring areas near the pipeline should be considered to be threats. Fishing equipment and anchors that dig deep into the sea bottom or which can concentrate stress loadings (high force and sharp protrusions) present greater threats. Analyzing the nature of the threat will allow distinctions to be made involving types of anchored vessels or certain fishing techniques. Such distinctions, however, may not be necessary for a simple risk model that uses conservative assumptions. As another threat from third-party activities, dropped objects can strike the pipeline with sufficient force to cause damage. 
Objects can be dropped from some surface activity (construction, fishing, platform operations, mooring close to platforms, cargo shipping, pleasure boating, etc.) and, depending on conditions such as the object's weight in water, its shape, and water currents, the object will reach a terminal velocity. The impact stress on the pipe is partly dependent on this velocity. Shore approaches and harbors are often areas of higher activity. Beach activities, shoreline construction, and higher vessel traffic all contribute to the threat in an often unstable sea bottom area. External overpressure can occur from subsea detonations. An example is the common practice of clearing structural elements from abandoned platforms down to 15 ft below the mudline by detonating an explosive charge inside each of the hollow supporting members that penetrate the sea bottom (platform legs and well conductors). Possible unintended damage to nearby structures can result from the shock wave, specific impulse, and energy flux density associated with the event. The evaluator can create qualitative classifications by which the activity level can be scored. In concert with the categories shown in Chapter 3, a classification guide specifically for offshore lines could be similar to the following:
12/246 Offshore Pipeline Systems
High  Area has high vessel traffic and/or shore approaches with population nearby, or is a commonly dredged area. Normal anchoring nearby creates potential for damaging anchor loads. If a fishing area, the use of potentially damaging equipment is normal. Construction activity, third-party damage that has occurred in the past, and the presence of other offshore structures suggest a higher threat level.
Medium  Area has shore approaches with occasional human visitation and some vessel traffic; it may be a fishing area where mostly nonthreatening equipment is used, only an occasional anchoring area for higher anchor loads, or an anchoring area for smaller (low-damage-potential) vessels.
Low  Area has rare human visitation and, due to water depth or other factors, potentially damaging activities are possible but very rare. There is little or no vessel traffic, no anchoring, and no dredging.
None  This category is assigned where essentially no potentially damaging activity can occur. An example might be very deep water where no other activities (no anchoring, drilling, diving, cable or pipeline installations, etc.) are possible.
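The dropped-object threat discussed earlier depends partly on the object's terminal velocity through the water column. A minimal sketch of that estimate, assuming a simple drag balance (submerged weight equal to drag force) with illustrative values for drag coefficient, projected area, and in-water weight:

```python
import math

# Hedged sketch: terminal sinking velocity of a dropped object from a
# simple drag balance. The drag coefficient, projected area, and weight
# are illustrative assumptions; dropped-object studies use object-
# specific data and more detailed hydrodynamics.

def terminal_velocity(weight_in_water_n, drag_coeff, projected_area_m2,
                      water_density=1025.0):
    """Velocity (m/s) at which drag balances the object's weight in water:
    0.5 * rho * Cd * A * v^2 = W  ->  v = sqrt(2W / (rho * Cd * A))."""
    return math.sqrt(2.0 * weight_in_water_n /
                     (water_density * drag_coeff * projected_area_m2))

# Example: a 2000-N (in-water) piece of equipment, Cd ~ 1.0, 0.25 m2 area
v = terminal_velocity(2000.0, 1.0, 0.25)
```

The impact energy delivered to the pipe scales with the square of this velocity, which is why heavy, compact objects dominate dropped-object risk.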
As an alternative to the scoring approach above, individual contributors to activity level can be weighted, assessed, and combined into an activity score. For example, a possible list of contributors and weightings is shown below:

Foreign crossings (pipelines, cables, etc.) 10%
Fishing/crabbing area 15%
Recreation area 15%
Vessel traffic 15%
Distance from shore 10%
Dumping site 10%
Anchoring areas 15%
Water depth 10%

These factors make up 100% of the activity score in this example. Each would be assessed, assigned a point value, adjusted by its respective weighting, and summed with all other factors.

C. Aboveground facilities (weighting: 10%) As with its onshore counterpart, exposed facilities offshore can be a good or a bad thing, from a risk standpoint. Being in clear view, the facilities are less exposed to certain types of accidental damage, but they are more exposed to intentional damage or use for unintended purposes. Many offshore platforms are unmanned and visited infrequently. Platforms are susceptible to ship traffic impact and are sometimes convenient locations for temporary mooring of vessels, especially recreational fishing boats. Warning signs, lights, and on-site or remote monitoring (alarmed motion detectors, video surveillance, sound monitors, etc.) with adequate response offer a degree of protection. When considering third-party damage potential, submerged but unburied pipelines can be evaluated in the same way as surface facilities. Where valve assemblies are located on the seafloor, it is common practice to use subsea valve protectors (structures placed around the valves to protect them from outside forces). The protecting structure's frame geometry and embedment depth are significant factors in determining the possibility of fouling from fishing gear. In general, score the presence of surface facilities as 0 pts and then add points for all measures that would reduce the likelihood of third-party damage, up to a maximum of 10 pts (see Chapter 3).

D. Damage prevention (weighting: 20%) A damage prevention program for an offshore pipeline can have many of the same aspects as its onshore counterpart. Risk variables that can be evaluated in order to assess the quality and effectiveness of a damage prevention program include public education, notification systems, and patrol. The first two are discussed here; patrol is discussed as a separate variable.

Public education Public education is often an integral part of a damage prevention program (see Chapter 3). The public to be educated in this case includes boaters of all kinds, fishermen, offshore constructors, supply boats, recreational craft, and law enforcement. Pipeline route maps could be supplied and informal training given to groups to alert them to signs, such as bubbles or sheens, indicating possible pipeline damage. Training should emphasize the susceptibility to damage by anchors or dredging. There is often a misconception that a steel pipeline, especially when concrete coated, is unharmed by anchors and nets. The quality of the public education program can be assessed by evaluating components such as:

Mailouts Maximum points are appropriate for regular, effective mailed communications to people engaged in potentially harmful activities.

Presentations Maximum points can be awarded for quality programs targeting audiences that engage in potentially harmful activities.

Advertisements While not as specific as other measures, this may reach a wider audience. Maximum points are appropriate where there is evidence of advertisement effectiveness.

Route maps Maximum points can be awarded for high-quality, accurate route maps that are widely distributed and effective in reducing third-party intrusions.
Notification systems One-call systems are probably not meaningful in the offshore environment. An exception would be a program that duplicates the intent of the land-based one-call program. Such a program would require anyone performing potentially pipeline-damaging activities in the water to contact a central clearinghouse that would notify owners of facilities of the impending activity. To be effective, such a program must be regularly used by all parties concerned, contacts to the clearinghouse must indeed be made prior to any work, and
the clearinghouse must have current, complete locations of all facilities. See also Chapter 3 for more information.
E. Right-of-way condition (weighting: 5%) Along with a damage prevention program, marking of the pipeline route provides a measure of protection against unintentional damage by third parties. Buoys, floating markers, and shoreline signs are typical means of indicating a pipeline presence. On fixed-surface facilities such as platforms, signs are often used. When a jetty is used to protect a shore approach, markers can be placed. The use of lights, colors, and lettering enhances marker effectiveness. This item is normally only appropriate on shore approaches or shallow water where marking is more practical and third-party damage potential is higher. Note that in deeper water where this item will probably score low, the activity level item will often indicate a lower hazard potential. These will offset each other to some extent. A qualitative scoring scale can be devised similar to the following:
Excellent  At every practical opportunity, high-visibility signs and markers clearly indicate the presence of the pipeline and contact telephone numbers for the pipeline operator. All known hazards are clearly marked.
Fair  Some locations have signs and markers, not all of which are in good condition.
Poor  No attempt has been made to mark the pipeline location, even in areas where it would be practical to do so. Where marking is impractical everywhere, use this point level.
F. Patrol (weighting: 20%) As with the onshore case, pipeline patrolling is used to spot evidence of a pipeline leak, but it is often more useful as a proactive method to prevent third-party intrusions. A potential threat does not have to be in the immediate vicinity of the pipeline. An experienced observer may spot a dredge working miles away, the movements of an iceberg, or the activity of fishermen that may cause damage in the following weeks or that may have already caused unreported damage. The patrol might also note changes in the waterway or shoreline that may indicate a pipeline exposure due to shifting bottom conditions. A small amount of spilled hydrocarbon is not always easy to visually spot, especially from moving aircraft. A variety of sensing devices have been or are being investigated to facilitate spill detection. Detection methods proposed or in use include infrared, passive microwave, active microwave, laser-thermal propagation, and laser acoustic sensors [78]. As with the case onshore, offshore patrol effectiveness is a product of several factors including speed and altitude of aircraft, training and abilities of the observer, and effectiveness of any sensing devices used in the patrol. Scores should be awarded based on frequency and effectiveness of patrol on a point scale similar to that shown in Chapter 3.
III. Corrosion index Offshore pipelines are typically placed in service conditions that promote both external and internal corrosion. In considering external corrosion, steel is placed in a very strong electrolyte (seawater), which is a very aggressively corrosive environment. Because it must be recognized that no pipe coating is perfect, it must also be assumed that parts of the pipe steel are in direct contact with the electrolyte. Scoring for corrosion in offshore pipelines is similar to scoring for onshore lines. Additional factors for the offshore environment must often be considered, however. As with other failure modes, evaluating the potential for corrosion follows logical steps, replicating the thought process that a corrosion control specialist would employ. This involves (1) identifying the types of corrosion possible: atmospheric, internal, subsurface; (2) identifying the vulnerability of the pipe material; and (3) evaluating the corrosion prevention measures used at all locations. Corrosion mechanisms are among the most complex of the potential failure mechanisms. As such, many more pieces of information are efficiently utilized in assessing this threat.

A. Atmospheric Corrosion
   A1. Atmospheric Exposures 0-5 pts
   A2. Atmospheric Type 0-2 pts
   A3. Atmospheric Coating 0-3 pts
   Total 0-10 pts

B. Internal Corrosion
   B1. Product Corrosivity 0-10 pts
   B2. Internal Protection 0-10 pts
   Total 0-20 pts

C. Submerged Pipe Corrosion
   C1. Submerged Pipe Environment 0-20 pts
       Soil Corrosivity 0-15 pts
       Mechanical Corrosion 0-5 pts
   C2. Cathodic Protection 0-25 pts
       Effectiveness 0-15 pts
       Interference Potential 0-10 pts
   C3. Coating 0-25 pts
       Fitness 0-10 pts
       Condition 0-15 pts
The general balance of 10% atmospheric corrosion, 20% internal corrosion, and 70% submerged pipe corrosion will allow comparisons among pipelines that are at least partially exposed to these hazards. Where no system to be evaluated has any atmospheric exposure, for example, the evaluator may choose to eliminate this component and increase the other hazards by 5% each. When this is done, each item can be increased proportionately to preserve the weighting balances. If onshore and offshore pipelines are to be compared, scoring should be consistent. As noted in other chapters, the primary focus of this assessment is the potential for active corrosion rather than time-to-failure. In most cases, we are more interested in identifying locations where the mechanism is more aggressive than in predicting the length of time the mechanism must be active before failure occurs.
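The redistribution of weights described above, when a component such as atmospheric corrosion does not apply, can be sketched as follows. This version rescales the remaining components proportionately to preserve their balance (the simpler alternative of adding 5% to each remaining hazard gives a slightly different split); the dictionary keys are illustrative.

```python
# Sketch of reweighting when a corrosion component does not apply: the
# dropped component's weight is redistributed so the remaining components
# keep their relative balance. Key names are illustrative assumptions.

def renormalize(weights, drop):
    """Remove one component and scale the rest back up to 100%."""
    remaining = {k: v for k, v in weights.items() if k != drop}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

corrosion = {"atmospheric": 0.10, "internal": 0.20, "submerged": 0.70}
rebalanced = renormalize(corrosion, "atmospheric")
# internal becomes 0.20/0.90, submerged becomes 0.70/0.90
```

Proportional rescaling keeps the internal-to-submerged ratio at 2:7, whereas adding 5% to each would shift it to 25:75.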
In the scoring system presented here, points are usually assigned to conditions and then added to represent the corrosion threat. This system adds points for safer conditions. As noted in Chapter 4, an alternative scoring approach, which may be more intuitive in some ways, is to begin with an assessment of the threat level and then consider mitigation measures as adjustment factors. In this approach, the evaluator might wish to begin with a rating of the environment (either atmosphere type, product corrosivity, or subsurface conditions). Then, multipliers are applied to account for mitigation effectiveness.
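The alternative approach noted above, starting from a rating of the environment and applying multipliers for mitigation effectiveness, might be sketched like this. The 0-10 environment scale and the multiplier values are assumptions for illustration, not values from the text.

```python
# Hedged sketch of the multiplier-based alternative: start with a rating
# of the corrosive environment, then apply mitigation multipliers. The
# scale and factor values are illustrative assumptions.

def corrosion_threat(environment_rating, mitigation_factors):
    """environment_rating: 0 (benign) to 10 (most aggressive).
    mitigation_factors: iterable of values in [0, 1], where 1.0 means a
    measure removes none of the threat and 0.0 removes all of it.
    """
    threat = environment_rating
    for factor in mitigation_factors:
        threat *= factor
    return threat

# Example: aggressive seawater environment (9), good coating (0.4),
# adequate cathodic protection (0.5)
residual = corrosion_threat(9, [0.4, 0.5])
```

Note that the multipliers compound: two partially effective measures together reduce the threat more than either alone, which matches the defense-in-depth logic of coating plus cathodic protection.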
A. Atmospheric corrosion (weighting: 10%)
A1. Atmospheric exposures (weighting: 5%) Portions of offshore pipelines often are exposed to the atmosphere on platforms or onshore valve stations. Where such components exist in the section being evaluated, score this item as described in Chapter 4.
A2. Atmospheric type (weighting: 2%) The offshore environment is among the harshest in terms of corrosion to metal. Humid, salty, and often hot conditions promote the oxidation process. In addition, some platforms where pipeline components are exposed to the atmosphere produce additional chemicals that accelerate corrosion. Score as described in Chapter 4.
A3. Atmospheric coating (weighting: 3%) Coating is a most critical aspect of the atmospheric corrosion potential. Score this item as detailed in Chapter 4.
B. Internal corrosion (weighting: 20%) Internal corrosion, caused by corrosiveness of the product inside the pipeline, is a common threat in offshore hydrocarbon pipelines. Hydrocarbon production usually involves the production of several components such as oil, gas, water, and various impurities. While pure hydrocarbon compounds are not corrosive to steel, substances such as water, CO2, and H2S, which are intentionally or unintentionally transported, provide a corrosive environment inside the pipe. Until recently, separation of these components occurred offshore, where waste streams were easily (and in an environmentally unsound manner) disposed of. As such practices are discontinued, pipelines designed to transport a single-phase component (either oil or gas) after offshore product separation are now called on to transport unseparated product streams to shore, where separation and disposal are more economical. The increased chance for internal corrosion from the now common practice of transporting unseparated production as a multiphase mixture must be considered. It is not uncommon for an offshore line to experience a change in service as new wells are tied in to existing pipelines or the product experiences changes in composition or temperature. While an internal corrosive environment might have been stabilized under one set of flowing conditions, changes in those conditions may promote or aggravate corrosion. Liquids settle as transport velocity decreases. Cooling effects of deeper water
might cause condensation of entrained liquids, further adding to the amount of free, corrosive liquids. Liquids will gravity flow to the low points of the line, causing corrosion cells in low-lying collection points. Inhibitors are commonly used to minimize internal corrosion (see Chapter 4). Generally, it is difficult to completely eliminate corrosion through their use. Challenges are even more pronounced in two-phase or high-velocity flow regimes. Any change in operating conditions must entail careful evaluation of the impact on inhibitor effectiveness. Other preventive measures that can be credited in the assessment include the use of probes and coupons, scale analysis (product sampling), inhibitor residual measurements, dewpoint control, monitoring of critical points by ultrasonic wall thickness measurements, and various pigging programs. Score the product corrosivity and internal protection items as described in Chapter 4.
B1. Product corrosivity (weighting: 10%)
B2. Internal protection (weighting: 10%)

C. Submerged pipe corrosion (weighting: 70%) Offshore pipelines will be exposed to water, soil, or both. There are many parallels between this environment and the subsurface (soil) environment discussed in Chapter 4. The scoring for this portion of the corrosion index closely follows the onshore risk assessment model. The threat is evaluated by assessing the corrosivity of the pipeline's environment and then the effectiveness of the common mitigation measures: cathodic protection and coating.
C1. Submerged pipe environment (weighting: 20%) In this item, distinctions between the corrosive potential of various electrolytes can be considered. In the case of offshore systems, the electrolyte is usually a highly ionic water (saltwater or brackish water) that is very conducive to corrosion of metals. It is often appropriate to score all sections as low resistivity (high corrosion potential) as described in Chapter 4. From an electrolyte standpoint, differences between buried and unburied conditions might be minimal and quite changeable, because pipelines are often covered and uncovered periodically by shifting sea bottom conditions. It is also conservative to assume that burial soils will have a high ionic content because of the entrainment of saltwater. Differences between water conditions might also be minimal. However, changes in electrolyte oxygen content, temperature, and resistivity might be anticipated, with resulting changes in cathodic protection effectiveness and corrosion potential. When distinctions are appropriate, the evaluator can consider such factors to score different environments.

Mechanical corrosion As with onshore pipelines, the potential for corrosion that involves a mechanical component should be addressed in the risk assessment. Erosion is a potential problem in some production regimes. Production phenomena such as high velocities, two-phase flows, and the presence of sand and solids create the conditions necessary for damaging erosion. Stress corrosion cracking (SCC) can occur when stress
levels are high and a corrosive environment exists, either inside or outside the pipe wall. Note that seawater is a corrosive environment for metal and higher stress levels are common in offshore operations. Score this item as described in Chapter 4.
C2. Cathodic protection (weighting: 25%) On pages 74-76, we discuss some basic concepts of galvanic corrosion and common industry practices to address the corrosion potential. These apply equally to offshore pipelines. Because of the strong electrolytic characteristics of seawater (uniform conductivity), cathodic protection is often achieved by the direct attachment of anodes (sometimes called bracelet anodes) at regular spacing along the length of the pipeline. Impressed current, via current rectifiers, is sometimes used to supplement the natural electromotive forces. Attention should be paid to the design life of the anodes. Score this item as described in Chapter 4.

Test leads The effectiveness of the cathodic protection is often monitored by measuring the voltage of the pipe relative to a silver/silver chloride reference electrode in the water, in the same fashion as the copper/copper sulfate reference electrode is used in onshore analysis. The use of test lead readings to gauge cathodic protection effectiveness has some significant limitations since they are, in effect, only spot samples of the CP levels. Nonetheless, monitoring at test leads is the most commonly used method for inspecting adequacy of CP on onshore pipelines. The discussion of test leads for onshore lines (pages 79-82) applies in theory to offshore lines as well. Offshore lines normally provide few opportunities to install and later access useful test leads. Therefore, it is thought that this item does not play as significant a role as it does in the onshore case. When pipe-to-electrolyte readings are taken by divers or other means at locations along the pipeline, points may be awarded here or as a type of close interval survey.

Close interval survey A close interval survey (CIS) technique for offshore lines can involve towing an electrode through the water above the line and taking continuous voltage readings between the pipe and its surroundings. Another technique involves the use of remotely operated vehicles (ROVs) and/or divers to follow the pipeline and perform a visual inspection as well as pipe-to-electrolyte readings. Because the reference electrode must be electrically connected to the pipeline, limitations in the practical use of these techniques exist. When conditions allow, spot checking by divers can also provide information similar to the close interval survey. Score this item as described in Chapter 4.

Current flow to other buried metals When the density of foreign pipelines or other metallic structures is high, the potential for cathodic protection interference is correspondingly high. In scoring this item, the evaluator should note the isolation techniques used in separating piping from other pipelines, offshore platforms, or shore structures. When isolation is not provided, joint cathodic protection of the structure and the pipeline should be in place. Score this item as described in Chapter 4.

AC interference This variable will often not apply for offshore pipelines, except perhaps at shore approaches. The evaluation can be based on the same criteria as discussed in Chapter 4. Because AC interference is normally not an important risk indicator for offshore pipelines, those possible points can be distributed to other variables where there is a belief that other variables play a larger role in the offshore pipeline risk picture.
C3. Coating (weighting: 25%) As a primary defense against corrosion, the pipe coating is intended to provide a barrier between the pipe wall and the electrolyte. Because concrete is often placed over the anticorrosion coating for buoyancy control and/or mechanical protection, it can be evaluated as part of the coating system. The concrete should be compatible with the underlying coating during installation and long-term operation. Metal reinforcing within the concrete can interfere with the cathodic protection currents and should be designed for proper performance. Offshore coatings must often be designed to withstand more forces during installation, compared with onshore installations. Coating properties such as flexibility, moisture uptake, and adhesion may be more critical in the offshore installation. Some amount of coating degradation is to be expected with the aging of a pipeline. A pipeline operated at higher temperatures may cause more stress on the coating. Score this item as described in Chapter 4. Points can be awarded based on:
Quality of coating
Quality of application
Quality of inspection
Quality of defect corrections.
IV. Design index The design environment for an offshore pipeline is quite different from that of an onshore pipeline. The offshore line is subjected to external pressures and forces from the water/wave/current environment that are usually more dynamic and often more severe. As previously noted, the pipe is being placed in an environment where man cannot live and work without the aid of life-support systems. The difficulties in installation are numerous. Many of the risk-related differences between onshore and offshore pipeline systems will appear here in the design index. Related to this, see also the construction portion of the incorrect operations index. It should be assumed that the industry will continue to move into more challenging environments such as deeper water, more extreme temperatures, and arctic conditions. This presents new problems to overcome in design, construction, and integrity monitoring.
A. Safety factor (weighting: 25%) The safety factor is a risk "credit" for extra pipe wall thickness when this thickness is available for protection against impacts, corrosion, and other integrity threats. Required wall thickness must account for all anticipated internal and external loadings. Wall thickness in excess of this requirement is a risk "credit." From a cost of material and installation viewpoint, higher strength materials are often attractive. This is especially true in the challenging offshore environment. However, special welding considerations and strict quality control are often needed for the higher strength materials. Other desirable material properties such as ductility are sometimes sacrificed for the higher strength. Pipe installation procedures (techniques such as S-lay, J-lay, etc.) are another consideration. Anticipated stresses on the pipe during installation may be higher than operational stresses. The evaluator should seek evidence that installation stresses and potential for pipe damage during construction have been adequately addressed. Offshore pipelines can have a high external loading due to water pressure. This leads to increased chances of collapse from external force (buckle). Calculations can be done to estimate buckle initiation and buckle propagation pressures. It is usually appropriate to evaluate buckle potential when the pipeline is depressured and thereby most susceptible to a uniformly applied external force. This is the worst-case scenario and reasonable since a depressured state is certainly plausible if not routine. In cases of larger diameter, thin-walled pipe, buckle arrestors are sometimes used to prevent propagation of a buckle. Buoyancy effects must also be considered in the loading scenario. If the weight coating is partially lost for any reason, the pipe must be able to withstand the new stress situation including possible negative buoyancy.
Additional considerations for the offshore environment might include hydrodynamic forces (inertia, oscillations, lateral forces, debris loadings, etc.) caused by water movements and an often higher potential for pipe spans and/or partial support scenarios. With these considerations, this variable can be assessed as described on pages 94-102.
B. Fatigue (weighting: 15%)
As a very common cause of material failure, fatigue should be considered as part of any risk analysis. Fatigue, as discussed on pages 102-104, should therefore become a part of the offshore pipeline evaluation. In addition to fatigue initiators discussed in Chapter 5, an additional fatigue phenomenon is seen in submerged pipelines. A free-spanning (unsupported) length of pipe exposed to current flows can oscillate as vortex shedding creates alternating zones of high and low pressure. The extent of the oscillations depends on many factors, including pipe diameter and weight, current velocity, seabed velocity, and span length. The pipeline will tend to move in certain patterns of amplitude and speed according to its natural frequency. Such movements cause a fatigue loading on the pipe. There is evidence that fatigue loading conditions may be more critical than once thought, including “ripple loading” phenomena where relatively small amplitude load perturbations (ripple loads) cause fracture at lower stress intensity levels. This in turn requires more emphasis on crack propagation
and fracture mechanics in such dynamic, fatigue-inducing environments. Higher fracture toughness materials might even be warranted. Scoring the potential for this type of fatigue requires evaluating the potential for spans to exist and for water-current conditions to be of sufficient magnitude. Because both of these factors are covered in an evaluation of land movements (i.e., stability) (see page 110), wave-induced fatigue potential is also at least partially addressed in that variable. Score fatigue potential as described in Chapter 5, with the additional considerations discussed here.
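The span screening logic described above can be sketched as a simple comparison of vortex shedding frequency against the span's natural frequency. All numbers below (Strouhal number of ~0.2, pinned-pinned span assumption, bending stiffness, effective mass, the 30% lock-in band) are illustrative assumptions, not values from the text; a formal assessment would use a recognized methodology with added-mass and boundary-condition effects.

```python
import math

# Hedged sketch: screening a free span for vortex-induced oscillation.
# Assumed: Strouhal number ~0.2 and a pinned-pinned span; stiffness and
# mass values below are invented for illustration.

def shedding_frequency_hz(current_mps: float, od_m: float, strouhal: float = 0.2) -> float:
    """Vortex shedding frequency: f_s = St * V / D."""
    return strouhal * current_mps / od_m

def span_natural_frequency_hz(ei_nm2: float, mass_kgpm: float, span_m: float) -> float:
    """First natural frequency of a pinned-pinned span: f1 = (pi / (2 L^2)) * sqrt(EI / m)."""
    return (math.pi / (2.0 * span_m ** 2)) * math.sqrt(ei_nm2 / mass_kgpm)

def lock_in_risk(f_shed: float, f_nat: float, band: float = 0.3) -> bool:
    """Flag spans whose shedding frequency falls within ~30% of the natural frequency."""
    return abs(f_shed - f_nat) / f_nat < band

# Illustrative values (hypothetical 24-in line, not from the text):
ei = 2.7e8    # N*m^2, assumed bending stiffness
mass = 500.0  # kg/m, assumed effective mass (pipe + contents + added mass)
fs = shedding_frequency_hz(1.0, 0.61)  # 1 m/s current on 0.61-m OD pipe

for span in (40.0, 55.0):
    fn = span_natural_frequency_hz(ei, mass, span)
    print(f"span {span:.0f} m: f_n = {fn:.2f} Hz, f_s = {fs:.2f} Hz, lock-in risk: {lock_in_risk(fs, fn)}")
```

Under these assumed numbers, the longer span's natural frequency drops toward the shedding frequency, which is why span length figures so prominently in the fatigue discussion above.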
C. Surge potential (weighting: 10%)
Score this item as detailed on pages 104-105 and also see Appendix D.
D. Integrity verifications (weighting: 25%)
This variable normally includes an evaluation of pressure testing and in-line inspection (ILI) as methods to verify system integrity. The considerations in the offshore environment are the same but can also include inspection by side-scan sonar, ROV, or diver inspection for partial assurances of integrity (‘partial’ since visual inspections should not generate the same level of confidence as more robust integrity verifications). Score this variable as described on pages 105-110.
E. Stability (weighting: 25%)
The interaction between the pipeline and the seabed will frequently set the stage for external loadings. If a previously buried line is uncovered because of scour or erosion of the seabed, it becomes exposed to current loadings and impact loadings from floating debris and material being moved along the seabed. Upon further scour or erosion, the pipeline can become an unsupported span. As such, it is subjected to additional stresses due to gravity and wave/current action. If stresses become severe enough, possible consequences include damage to coatings and buckling or rupture of the pipe. On a longer term basis, cycling and fatigue loadings may eventually weaken the pipe to the point of yield. Fatigue and overstressing are amplified by larger span lengths. Such fatigue loadings can be caused by movements of a free-spanning pipeline which, given the right conditions, will reach a natural frequency of oscillations as previously discussed. Changes in bottom conditions also impact corrosion potential. As pipelines move from covered to uncovered states, the galvanic corrosion cell changes as the electrolyte changes from soil to seawater and back. CP currents must be sufficient for either electrolytic condition. The presence of “high-energy” areas, evidenced by conditions such as strong currents and tides, is a prime indication of instability. Sometimes, seabed morphology is constantly changing due to naturally occurring conditions (waves, currents, soil types, etc.). The wave zones and high steady current environments promote scour and vortex shedding. At other times, the pipeline itself causes seabed changes because of the current obstruction that has been introduced into the system. Periodic bottom-condition surveys and installation of span-correcting measures are common threat-reducing measures.
Span correction techniques include concrete mattresses, grout bags, mechanical supports, antiscour mats, and rock dumping. Different techniques are found to be effective in different regions. Some stabilization using the above methods is often done as part of initial construction. Naturally occurring external forces may need to be more fully investigated in the offshore environment. Uncertainty is usually high. Often, bottom conditions such as current and seabed morphology must be estimated from more available surface wind- and wave-induced current models. Even when more definitive surveys are done, actual conditions can often vary dramatically over time. This plays a critical role in the stress situation of the pipeline. Floating debris and material being moved along the seabed are potential sources of damage to an exposed pipeline. Such external forces can damage coatings, both concrete and anticorrosion types, and even damage the pipe steel with dents, gouges, or punctures. Special considerations for instability events also include hurricanes, tsunamis, and associated storm-related damages to platforms, changes in bottom topography, temporary currents, tidal effects, and ice/permafrost challenges. Potential damages can be caused by the presence and movements of ice, including ice scour (ice gouging), subscour soil deformation (even when the pipeline is below the maximum scour depth, a danger exists), icebergs, ice keels of pressure ridges, and ice islands. Note that there can be extensive differences in the presence of icebergs in a given region from season to season [71]. The stability variable can be scored as detailed on pages 110-115 with the additional considerations noted for offshore conditions. Points are awarded based on the potential for damaging stability events and mitigating measures. Potential can be scored as high, medium, low, or none, as discussed next.
Interpolation between these categories is appropriate and, as always, higher uncertainty should cause the risk model to show higher risk. They can be scored as follows:

High
Any of the following conditions is sufficient to score the potential as high: areas where damaging soil movements and/or water effects are common or can be quite severe; where a high-energy water zone (wave-induced currents, steady currents, scouring) is causing continuous, significant seabed morphology changes; where unsupported pipeline spans are present and changing relatively quickly; where water current action is sufficient to cause oscillations on free-spanning pipelines (fatigue loading potential is high) or impacts from floating or rolling materials; where regular fault movements, landslides, subsidence, creep, or other earth movements are seen; where ice movements are common and potentially damaging; or where the pipeline is or can easily be exposed to any of these conditions. Rigid pipelines, even under less severe conditions, should be included in this high potential category because of their diminished capacity to withstand certain external stresses.

Medium
Damaging soil movements are possible but unlikely to routinely affect the pipeline due to its depth or position. Unsupported pipeline spans might exist, but are relatively stable. Water energy is sometimes (but not continuously)
severe enough to cause oscillations or impact loads from floating or rolling debris. Rarely occurring events would have a high probability of causing damage should they occur. This includes hurricanes, severe storms, and rare ice movements.

Low
Evidence of soil movements or unsupported spanning is rare. The area is stable in terms of potentially damaging events and/or the pipeline is so well isolated from such events as to make the potential almost nonexistent. Rigid pipes may fall into this category even if the potential threat is seen as “none.”

None
No evidence of any potentially threatening soil, ice, earth, or water event is found.

Seabed profile surveys are a powerful method to gauge the stability of an area. (The effectiveness of the survey technique should be considered as discussed below.) When surveys are unavailable and anecdotal evidence (personal observations over the years) is minimal, the evaluator may score the area as relatively unstable in order to reflect the uncertainty of the situation. Of course, previous episodes of pipeline damages are a very strong indicator of potential. To the above scores, ‘credits’ can be awarded if actions such as the following are taken to reduce the potential damage:

- Regular monitoring and corrective actions, if needed, are done at least annually and in accordance with a well-designed survey program.
- Continuous monitoring and corrective actions are taken.
- Stress relieving.
Note that the use of mitigating measures should not increase the point score to the highest level (the level at which no threat exists, 20 points). This is in keeping with the philosophy used throughout this book. Note also that credit for extra strong pipe to withstand instability events is awarded in the safety factor item and should not earn credit here.
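The capping philosophy described in this note can be illustrated with a short sketch. The category base scores and credit values below are invented for illustration; only the structure, in which mitigation credits can never reach the 20-point "no threat" level, comes from the text.

```python
# Hedged sketch of the scoring philosophy: mitigation credits improve the
# score but can never raise it to the 20-point "no threat" level. The base
# scores and credit values are illustrative assumptions, not prescribed values.

BASE_SCORES = {"high": 0, "medium": 7, "low": 14, "none": 20}
CREDITS = {"regular_monitoring": 2, "continuous_monitoring": 3, "stress_relieving": 3}

def stability_score(potential: str, mitigations: list[str]) -> int:
    base = BASE_SCORES[potential]
    if potential == "none":
        return 20  # only the absence of any threat earns full points
    credit = sum(CREDITS[m] for m in mitigations)
    return min(base + credit, 19)  # cap: mitigation never fully erases the threat

print(stability_score("medium", ["regular_monitoring", "stress_relieving"]))  # 12
print(stability_score("low", ["continuous_monitoring", "stress_relieving"]))  # 19 (capped below 20)
```

The `min(..., 19)` cap encodes the rule that only "none" potential reaches the maximum score, however many mitigations are credited.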
Regular monitoring
Monitoring is achieved by a variety of survey methods for subsea pipelines. As an indirect preventive measure, an accurate survey will alert the operator to pipe sections more susceptible to external damage. Regular, appropriately scheduled surveys that yield verifiable information on pipeline location, depth of cover, and water depth should score the most points. Common survey techniques range from hands-on, where divers use their hands and probing rods to locate and record pipe location, to the use of manned or unmanned subsea vehicles (ROVs), to sophisticated instrumented surveys (sonar and/or signals impressed onto the pipe) that measure both seabed profiles and pipeline profiles. Side-scan sonar is one such instrumented survey that can detect free spans, debris on or near the pipeline, and seabed marks caused by anchors, fishing equipment, etc., and in general record the position of the pipeline. The evaluator should award points partly based on the reliability and accuracy of the technique. Repeatability (where multiple surveys of the same area with the same technique yield the same result) is often a good indicator of the usefulness of the technique.
Where movements of icebergs, ice keels, and ice islands are a threat, well-defined programs of monitoring and recording ice events can be awarded points, based on the program’s effectiveness in reducing pipeline risk. Scores should also be awarded based on timeliness of detection. Frequency of surveying should be based on historical seabed and bank stability, wave and current action, and perhaps risk factors of the pipeline section. The evaluator can review the basis for survey frequency (ideally, a written report with backup documentation justifying the frequency) to determine if adequate attention has been given to the issue of timeliness.
Continuous monitoring
This implies the existence of devices that will alert an operator of a significant change in stability conditions. Such devices might be direct indicators, such as strain gauges on the pipe wall itself, or indirect indicators, such as seabed or current monitors. In the case of indirect indicators, some follow-up inspection would be warranted. The advantage of continuous monitoring is, of course, that corrective actions can be applied immediately after the event; the uncertainty of scheduling surveys is removed. The evaluator should award maximum credit for mitigation only if the monitoring is extensive enough to reliably detect all damaging or potentially damaging conditions.
Stress relieving
Corrective actions, as a follow-up to monitoring, include pipe burial (or reburial), use of protective coverings, and the placement of support under a free-spanning pipe. These can be considered to be stress relieving actions since they are designed to ‘unload’ the pipe, reducing the stress levels. This is often accomplished by using concrete mattresses, grout bags, mechanical supports, antiscour mats, rock dumping, etc., to offset natural forces that would otherwise add stresses to the pipeline. Maximum credit can be awarded when the stress relieving is a proactive action or a design feature specifically put in place to mitigate the effects of a possible instability. An example would be supports beneath a pipeline where scour-induced free spans are a possibility but have not yet occurred. Another example is the excavation of a trench to prevent transmittal of soil movement forces onto the pipeline (perhaps only temporarily). Points are awarded when actions have been taken to substantially reduce the possibility of damages due to soil, ice, seismic, or water forces.
Example 12.1: Offshore earth movements
An offshore pipeline makes landfall in a sandy bay. The line was originally installed by trenching. While wave action is slight, tidal action has gradually uncovered portions of the line and left other portions with minimal cover. With no weight coating, calculations show that the line can become positively buoyant (float) if more than about 20 ft of pipe is uncovered. The potential for stability problems is therefore scored as somewhat worse than the “medium” potential classification. This shore approach is visually inspected at low-tide conditions at least weekly. Measurements are taken and observations are formally
recorded. The line was reburied using water jetting 8 years ago. With the strong inspection program and a history of corrective actions being taken, the evaluator adjusts the score upward to show less threat. This yields a score for the stability variable approximately equivalent to a “low” potential for damages due to stability problems.
Alternative scoring approach
One of the largest differences between the risk assessments for offshore and onshore environments appears in this variable of stability. This reflects the very dynamic nature of most offshore environments even under normal conditions and more so with storm events. Instead of evaluating this potential failure mechanism using the general, qualitative categories of threat discussed above, the evaluator might choose to use many subvariables that can be independently assessed and then combined into a stability score. Support and stability issues consider potentially damaging ground or water effects, primarily from a support and/or fatigue-loading viewpoint, and conservatively assume that increased instability of sea bottom conditions leads to increased potential for pipeline over-stressing and failure. Subsurface features that might indicate future instabilities are considered as part of the threat assessment. A segment of pipe is “penalized” when potentially damaging conditions are present, and then “rewarded” as mitigating actions or design considerations are employed. However, in keeping with the overall philosophy of the suggested risk assessment, the sum of all mitigating actions should never completely erase the penalty due to the conditions. Only the absence of any “potentially damaging conditions” results in the lowest risk. For new or proposed construction, the more threatening areas along the pipeline route are normally identified in the preconstruction phase design studies. Identified threats are usually fully addressed in the design process, and that process is in fact a risk management process. Therefore, the risk assessment of a new pipeline will generally reflect the mitigated threat. However, as evidence of past instabilities and/or indications of possible future instabilities, the potentially damaging conditions themselves can still be captured in the assessment, regardless of mitigation measures designed to offset their presence.
In general, situations or variables that contribute to a higher threat include regions of potential instability as indicated by

- Slope
- Sand ripples and waves
- Nearby depressions/slumping potential
- Liquefaction potential
- Highest water current actions
- Scour, erosion, or washout potential
- Known or suspected seismic activity or faults
- Mobile bedforms.

Loading and potential over-stressing situations more unique to the offshore environment include

- Pipe buckling potential (including both initiation and propagation points)
- Current forces (steady current, storm currents, etc.)
- Other hydrodynamic forces (debris impact and loading, oscillations, mobile bedforms, inertia, etc.)
- Sea ice scour potential.
A full evaluation of any of these issues requires an evaluation of many subvariables such as soil type, seismic event types, storm conditions, cover condition, water depth, etc. So, stability issues generally fall into one of two types: support and loadings. For purposes of risk understanding and this risk model design, some subcategories of stability variables can be created. Support or stability issues are perhaps most efficiently examined in four categories:

1. Fault movement
2. Liquefaction movement
3. Slope stability
4. Erosion potential.

Loadings are treated as a fifth element. These threats all impact the support condition and, potentially, the stress level of the pipeline. They are combined to arrive at a relative score for stability. In algorithm form, the relationships can be shown as follows:

Potential for damaging instabilities = f{fault movement; liquefaction; slope stability; erosion; loadings}
where

Fault movement damage potential = f{fault type; slip angle; pipeline angle; seismic event; pipe strength}
Liquefaction damage potential = f{seismic event; soil type; cover condition; pipe strength}
Slope stability = f{slope angle; soil type; rock falls; initiating event; angle of attack; landslide potential; pipe strength}
Erosion potential = f{current speed; bottom stability; pipe strength; coating strength}
Loadings = f{hydrodynamic forces; debris transport; current speed; water depth}
Most of the subvariables are also composed of several factors. For instance, bottom stability, a subvariable under the erosion threat, can be evaluated in terms of several factors that are commonly found in design documents or recent inspection surveys:

Bottom stability = f{observed mobile bedforms; megaripples; sand dunes; bottom current conditions}
These, in turn, can also be further subdivided. For example,

Bottom current conditions = f{speed; direction; duration; tsunami potential; tidal effects; storm conditions; river flow}
One possible mitigation to land movement threats is increased pipe strength, specifically the ability to resist external loads considering both stress and strain issues. Other mitigation measures include

- Inspection type and frequency
- Time since last inspection (linked to storms and seismic events)
- Pipeline stabilization (cover condition, anchors, piles, articulated mattresses, various support types, etc.)
- Frequency of sea bottom survey.
An example weighting scheme for pertinent variables and subvariables is shown in Table 12.1. In this scheme, each subvariable is to be scored and then combined with scores for other subvariables according to the following algorithm:

Potential for damaging ground movements = (erosion/support threats) + (seismic movements) + (liquefaction) + (slope stability) + (loadings) + (mitigations)
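The roll-up algorithm above amounts to a nested weighted sum: subvariable scores combine into variable scores, which combine into the stability total. A minimal sketch follows, with illustrative subvariable scores (0 = worst, 1 = best here) and erosion/support weights patterned after Table 12.1; all other weights and scores are invented for illustration.

```python
# Hedged sketch of the weighted roll-up described in the text. Scores and
# most weights below are illustrative assumptions, not prescribed values.

EROSION_WEIGHTS = {"current_speed": 0.20, "mobile_bedforms": 0.50,
                   "tsunami_vulnerability": 0.10, "megaripples": 0.20}

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(scores[k] * w for k, w in weights.items())

# Roll subvariables up into one variable score...
erosion = weighted_score({"current_speed": 0.6, "mobile_bedforms": 0.3,
                          "tsunami_vulnerability": 0.9, "megaripples": 0.5},
                         EROSION_WEIGHTS)

# ...then roll the variables up into the stability total.
variable_weights = {"erosion": 0.20, "slope": 0.20, "liquefaction": 0.20,
                    "seismic": 0.20, "loadings": 0.20}
variable_scores = {"erosion": erosion, "slope": 0.8, "liquefaction": 0.9,
                   "seismic": 0.7, "loadings": 0.6}
total = weighted_score(variable_scores, variable_weights)
print(f"erosion/support = {erosion:.2f}, stability total = {total:.2f}")
```

The internal check that weights total 100% mirrors the subtotal rows in Table 12.1; mitigations would enter as a further (capped) adjustment, as discussed earlier.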
V. Incorrect operations index
More than 80% of high-consequence offshore platform accidents can be attributed to human error, according to one source [78]. Although platforms normally have a higher density of components and a more complex design than pipelines, this statistic can also serve as a warning about the potential for human error in pipeline operations. As is the case for the basic risk assessment model, the incorrect operations index score will sometimes apply to a whole pipeline system. Many of the human error prevention factors represent a company-wide approach to work practices and operating discipline. Only a few risk items, such as MOP potential, safety systems, and SCADA, are more location specific.
A. Design (weighting: 30%)
The design considerations for offshore pipelines are sometimes radically different from those for onshore pipelines. Special design aspects must be included just for the installation process. From a human error standpoint, however, the same items can be scored for their roles in the risk picture. Score the design items as described on pages 119-124.
B. Construction (weighting: 20%)
Although the risk items to be scored here are identical to the onshore model, the evaluator should consider the unique offshore construction challenges. Installation of the pipeline usually occurs from the water surface. The pipe is welded on the construction barge and lowered into the water into a preexcavated trench or directly onto the sea bottom in a predetermined area. Sometimes, the pipeline lying on the seabed is later buried using pressure jetting or some other trenching technique. Handling of the pipe (which is already coated with a corrosion-barrier coating as well as a concrete weight coating) is critical during all phases of the process because certain configurations can overstress the coating or the pipe itself. A high amount of tensile stress is often placed on heavy pipe during installation, even when handling is done correctly. Buoyancy and external pressure effects (before and after filling of the line) must also be considered. The exact placement of the pipe on the seabed is also important. The seabed will rarely be uniform. Unsupported pipe spans are usually avoided altogether, but the pipe is often designed to safely handle some length of free span under certain wave loading conditions. A surveyed route that provides a correct pipeline profile is the target installation location. One of the challenges in the offshore environment is the inability to directly observe the pipeline being installed. This is sometimes overcome through the use of divers, cameras,
Table 12.1 Sample variable list for subsea stability assessment

Erosion/support threats (weight 20%)
  Current speed (20%): consider frequency, duration, direction
  Mobile bedforms (50%): function of current speed, soil type
  Tsunami erosion vulnerability (10%): event, maximum wave height, maximum scour potential
  Megaripples (20%): consider size, angle, and interpretation by specialist (might already be included in mobile bedforms)
  Subtotal 100%

Slope stability (weight 20%)
  Slope % (20%): if no slope present, other variables are scored as ‘no threat’
  Slope instability (30%)
  Landslide potential (20%): includes seismic-induced landslides, mudslides, etc.
  Rockfall potential (20%)
  Slope angle of attack (10%): in relation to pipeline configuration
  Subtotal 100%

Liquefaction (weight 20%)
  Liquefaction potential (20%): function of soil type, seismicity
  Axial strain, maximum tension liquefaction (30%): tension-dominant loading case; soil resistance is a key consideration, based on calculations of pipe reaction
  Axial strain, maximum compression liquefaction (30%): compression-dominant loading case; soil resistance is a key consideration, based on calculations of pipe reaction
  Liquefaction depth (20%): function of soil type, seismicity
  Subtotal 100%

Seismic ground movements (weight 20%)
  Event type (10%): assumed maximum dip angle of fault
  Axial strain, maximum tension faulting (40%): tension-dominant case, based on calculations of pipe reaction
  Axial strain, maximum compression faulting (40%): compression-dominant case, based on calculations of pipe reaction
  Fault type (10%): dip angle, pipeline angle of attack, assumed displacement
  Subtotal 100%

Loadings (weight 20%)
  Mobile bedforms (20%): function of current speed, soil type
  Hydrodynamic forces (30%): consider tsunami, current speed, debris transport
  Water depth (20%): add maximum wave height
  Current speed (10%): consider both steady-state and storm events
  Sea ice scour (0%): pertinent in colder regions
  Geohazard relative rating (20%): a general assessment variable from a previous study
  Subtotal 100%

Mitigations
  Inspection (40%): consider type, frequency, and follow-up (timely and appropriate span reductions, buckle repair, etc.)
  Pile stabilization (30%)
  Sea bottom cover (30%): reduces some loadings (debris impact, current action); adds to others (some seismic loadings)
  Subtotal 100%
sonar, and subsea vehicles, but even then, the observation is not equivalent to that for an onshore installation. The uncertainty caused by this situation should be considered in the assessment. An increased reliance on indirect observation methods increases the potential for errors at some point in the process. When the method requires interpretation, uncertainty is even higher. With these considerations in mind, score this item as described on pages 124-125.
C. Operations (weighting: 35%)
Because this phase of pipelining is considered to be “real time,” the possibilities for intervention are somewhat reduced. Error prevention, rather than error detection, is emphasized. Score this item as described on pages 125-132. Note the importance of survey techniques here, especially bottom condition and external pipe condition surveys. Internal inspections are discussed in the corrosion index material. Other survey techniques are discussed in other parts of the assessment also.
D. Maintenance (weighting: 15%)
As in the basic model, a low score in maintenance should cause doubts regarding the adequacy of any safety system that relies on equipment operation. Score this item as described on page 132.
VI. Leak impact factor
The type of product spilled, the distance to sensitive areas, and the ability to reduce spill damages will usually govern the leak impact for offshore lines. Spills of gases or highly volatile products offshore should be scored as they are in the onshore risk assessment model (see Chapter 7). This involves assessment and numerical scaling of product hazard, relative spill size, dispersion potential, and vulnerable receptors. More minor impacts seen in the offshore environment include the possible impact on marine life from pipeline noise during operations and the presence of the pipeline as a barrier to marine life movements. These can be addressed in an evaluation of receptor vulnerabilities.
Receptors
Unlike the onshore counterpart, population density might not be a dominant concern for offshore pipeline failures. The U.S. Department of Transportation regulations consider offshore pipelines to be class 1 (rural) areas. Proximity to recreational areas (beaches, fishing areas, etc.), harbors and docks, popular anchoring areas, ferry boat routes, commercial shipping lanes, commercial fishing and crabbing areas, etc., will often replace the onshore measures of population densities when considering the potential to impact human receptors. In many cases, the most significant impact from an offshore spill will be the effect on environmentally sensitive areas. Offshore liquid spills pose a unique set of challenges. A qualitative scale that can gauge the degree of dispersion based on wind and current actions and product miscibility can be developed. The sensitivity of environmental receptors is discussed in Chapter 7.
Spills and dispersion
For the more persistent liquid spills, especially oils, mixing and transport phenomena should be considered. Consider these examples: Heavy oils can submerge and experience overwashing. Such phenomena make spill detection and cleanup more difficult. Shorelines remain in danger because submerged oil can still migrate. Overwashing tendency and the resultant particle size and depth of submergence are related to the oil density, the density of the water, and the sea energy (wave height) [78]. Once spilled, heavy oil can theoretically increase in density due to evaporation. However, this increase is quite minor [78]. Sunlight-induced reactions can occur after initial evaporation of the volatile components. These reactions include photo-oxidation, photodecomposition, and polymerization. The effectiveness of the reactions depends on the type and composition of the oil as well as the sunlight intensity and duration. Some photo-oxidation products and effects can worsen the spill because toxicity, density, and emulsification tendency may increase [78]. Crude oil spilled in a marine environment can form a water-in-oil emulsion that has properties different from the original oil. Such emulsions can be persistent and can aggravate spill countermeasure techniques. The chemical composition of the oil is thought to determine the tendency to form emulsions [78]. A table of expected behavior for various spills on water is shown in Table 7.19. The potential range of a spill can be scored using Table 7.19 and the material’s properties, or using more qualitative descriptions as follows:
High
A highly miscible material has spilled into a fast current. Conditions are conducive to quick mixing of the product in the water and fast transport of the mixture away from the spill site. High-energy water conditions and wind-driven spreading promote wide dispersal of the spilled substance.

Medium
Some mixing is possible under most normal conditions or thorough mixing is possible under more unusual conditions. Travel of the mixture will occur, but relatively slowly or in a direction away from environmental receptors. Some water energy is present.

Low
An immiscible material is spilled into stagnant water. The spilled material will tend to stay separate from the water. Movements of spilled material will be very minor. Low-energy water conditions exist. The spill remains localized and is relatively easy to clean up.
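One way to operationalize these qualitative categories is a simple lookup keyed on product miscibility and water energy. The matrix values below are illustrative assumptions for one possible model design, not values prescribed by the text.

```python
# Hedged sketch: mapping the qualitative dispersion descriptions
# (miscibility x water energy) onto a relative 0-10 scale. The matrix
# values are illustrative assumptions.

DISPERSION = {  # (miscibility, water_energy) -> relative dispersion potential
    ("high", "high"): 10, ("high", "medium"): 7, ("high", "low"): 5,
    ("medium", "high"): 7, ("medium", "medium"): 5, ("medium", "low"): 3,
    ("low", "high"): 4, ("low", "medium"): 2, ("low", "low"): 1,
}

def dispersion_potential(miscibility: str, water_energy: str) -> int:
    return DISPERSION[(miscibility, water_energy)]

print(dispersion_potential("high", "high"))  # highly miscible spill into a fast current -> 10
print(dispersion_potential("low", "low"))    # immiscible spill into stagnant water -> 1
```

Interpolation between categories, as recommended earlier for stability, could be handled by scoring each axis on a continuous scale instead of discrete labels.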
Emergency response
Adjustments to the leak impact factor can be made when response activities are seen to reliably reduce the spill consequences by some set threshold (perhaps 50% or more). These activities are discussed in the onshore risk assessment model
(see page 159). Some additional considerations, specific to offshore spills, are as follows. The need for quick detection is important in most offshore environments because of the potential for contaminant spread coupled with the remote locations of many offshore installations. In situ burning of oil on water is often attractive as a means of mitigating a spill. The need for physical collection, transportation, storage, and disposal of spilled product is reduced. Drawbacks include the visible smoke plume containing soot and other combustion byproducts and the lack of knowledge about heat and radiation from large fires. Response plans should take full advantage of spill cleanup technology. Oil spill chemical treating agents include dispersants, emulsion breakers, beach cleanup agents, biodegradation agents, and surface-washing agents. Although these have been proven to be effective in specific cases, wide differences in oil type and composition complicate attempts to identify agents that are effective across a wide range of products [78]. Knowledge of available agents and their application to specific spills is required to make the best use of the agents. This is a relevant knowledge area for the evaluator to explore with the pipeline operator when assessing response capabilities.
Other spill-limiting conditions such as emergency block valves and secondary containment are covered in the basic model (see Chapter 7) and can apply to an offshore analysis as well.
Example 12.2: Leak impact factor
A pipeline transporting light crude oil is being evaluated. The product hazard is evaluated as described on pages 136-142. The worst-case section has strong subsurface currents and strong prevailing winds close to shore. These will tend to spread the product as it rises from the submerged pipe and again as it pools on the water surface. The range is therefore scored to be nearly the highest dispersion score. (The highest score for this company is reserved for spills into fast-flowing freshwater streams.) Receptors combine the population density (rural) and known recreational and commercial use areas with the environmental and high-value areas, of which there are none in this case except for near-shore approaches. Those areas indicate higher consequence potential and, hence, higher overall risk. Response capabilities, including a state-of-the-art SCADA-based leak detection system, are considered to be fairly robust and should facilitate minimization of consequences in all areas distant from the shore approach.
13 Stations and Surface Facilities

Contents
I. Background 13/257
II. Types of facilities 13/259
III. Station risk assessment 13/260
    Scope 13/260
    Sectioning 13/260
    Data requirements 13/261
    Model design 13/263
    Weightings 13/263
    Process 13/264
IV. Risk assessment model 13/264
    Risk model components 13/265
    Equivalent surface area 13/265
    External forces index 13/266
    Corrosion index 13/267
    Design index 13/268
    Incorrect operations index 13/268
    Leak impact factor 13/271
V. Modeling ideas I 13/275
VI. Modeling ideas II 13/277
VII. Modeling ideas III 13/278
IX. Example of risk management application 13/286
X. Comparing pipelines and stations 13/287
XI. Station risk variables 13/288
I. Background

Most pipelines will have surface (aboveground) facilities in addition to buried pipeline. These include pump and compressor stations, tank farms, and metering and valve locations. Such facilities differ from pipe-only portions of the pipeline in significant ways and yet must be included in most decisions regarding risk management. Typical operating and maintenance processes involve prioritizing work on tanks, pumps, and compressors along with ROW activities. Many modern risk assessments include surface facilities in a manner
that accounts for the differences in risk and still allows direct comparisons among various system components. This chapter outlines some techniques for such risk assessments. Many station facilities employ design techniques such as piping corrosion allowances, reliability-based equipment maintenance, and best preventive maintenance practices. Facilities often include pieces of large rotating equipment (e.g., compressors, pumps, motor-operated valves), as well as sophisticated electronic monitoring equipment (e.g., SCADA, programmable logic controllers, leak detection, on-site control centers).
13/258 Stations and Surface Facilities
Figure 13.1 Risk management system for stations. [Flowchart: identify risk assessment model structure; identify all probability and consequence variables; identify critical variables to be included in the model; establish scoring and auditing procedures; perform necessary supporting calculations (stresses, release parameters, etc.); gather data and apply the algorithm to pilot facilities; establish an ongoing risk management program. Outputs include the station facilities risk assessment model (document), the risk model (spreadsheet/database), risk assessment procedures (manual), and decision support tools such as a resource allocation model, prioritized maintenance planning, and administrative procedures.]
Because of increased property control and opportunities for observation, the cause, size, duration, and impact of leaks at stations are often smaller than those of a pipeline failure. Liquid facilities usually have spill containment berms and stormwater collection systems, as well as equipment leak detection and capture systems, so the potential for a product release to reach the surrounding environment is significantly mitigated compared with a release on the pipeline ROW. Stations handling gaseous products normally have vents, flares, and safety systems designed to minimize off-site excursions of product releases. Given the differences between pipeline ROW and the associated surface facilities, it is not surprising that leak experiences are also different. Figure 13.2 shows that liquid pipeline station facility leak volumes are approximately 35% of line pipe leak volumes, per
Figure 13.2 Liquid pipeline failure causes: line pipe versus station facilities. [Bar chart of failure percentages by cause: third-party damage dominates for line pipe (37%), while equipment failure dominates for station (tank/pump) facilities (38%).]
an ASME B31.4 Committee station study of U.S. reportable leak data. These data also highlight that equipment failures are the primary cause (38%) of station facility leaks, compared with third-party damage for line pipe (37%) [9a].

Surface facilities are sometimes subject to different regulatory requirements than pipeline operations on the ROW. The majority of the larger hazardous liquid pipeline station facilities in the United States comply with process safety management (PSM) regulations, mandated by OSHA in 1992, which require specific actions related to pre-startup safety reviews, process hazard analyses, creation of operating procedures, training, qualification of contractors, assurance of mechanical integrity, hot work permits, management of change, incident investigations, emergency planning, compliance with safety audits, and employee participation in safety programs. Most U.S. natural gas pipeline station facilities are exempt from compliance with PSM regulations, but many operators adopt at least portions of such regulations as part of prudent operating processes. Some special environmental regulations will also apply to any surface facility in the United States. In addition, the U.S. Department of Transportation (DOT) is in the process of promulgating various pipeline integrity management (PIM) regulations that require all jurisdictional hazardous liquid and gas pipeline facilities to perform a risk assessment as the basis for creating an integrity assessment plan. Several states, such as Texas, are also imposing PIM-related regulations for intrastate pipeline facilities.
II. Types of facilities

In this chapter, the term facility applies to a collection of equipment, whereas station refers to a tank farm, pumping station, or other well-defined collection of mostly aboveground facilities. All stations have facilities, even if only a single block valve. Facilities to be evaluated in a risk assessment might include:

Atmospheric storage tanks (AST)
Underground storage tanks (UST)
Sumps
Racks (loading and unloading; truck, rail, marine)
Additive systems
Piping and manifolds
Valves
Pumps
Compressors
Subsurface storage caverns.

Comparisons between and among facilities and stations are often desirable. Most pipeline liquid storage stations consist primarily of aboveground tanks and related facilities that receive and store products for reinjection and continued transportation by a pipeline, or for transfer to another mode of transportation such as truck, railcar, or vessel. Most storage tanks for products that are in a liquid state under standard conditions are designed to operate near atmospheric pressure, whereas pressurized vessels are used to store highly volatile liquids (HVLs). Liquid pipeline facilities include pumps, meters, piping, manifolds, instrumentation, overpressure protection devices and other safety systems, flow and pressure control valves, block valves, additive injection systems, and breakout tanks. Pipeline gaseous product storage facilities serve the same purpose as liquid tank farms, but include buried high-pressure bottle-type holders, aboveground low-pressure tanks, and/or underground caverns. Gas pipeline facilities used to manage
product flow through the mainline include compressors, meters, piping, manifolds, instrumentation, regulators, pressure relief devices and other safety systems, and block valves.

Smaller station facilities, such as block valves, manifolds, meters, and regulators, are often located within small, protected aboveground areas or inside buried vaults, often made of concrete. Larger pipeline stations, such as pump/compressor stations or tank farms, can cover many acres and be heavily secured. Because station facilities are generally more accessible than a buried pipeline, they typically have measures to prevent unauthorized access, such as fencing, locked gates, barbed wire, concrete barriers, berms, lighting, and security systems. Depending on the station's size and use, it may be manned continuously or visited periodically by operations or maintenance personnel. Station piping and equipment are sometimes built from different materials and operate at different pressures than the pipeline. Ancillary hazardous materials and processes can also be present at liquid stations, which adds to the level of risk and complexity.
Tanks

Product storage tanks might warrant their own rating system since they are often critical components with many risk considerations unique to each individual tank. A risk model can use industry standard inspection protocols such as API 653, which specify many variables that contribute to tank failure potential. Common variables seen in tank inspection criteria are:

Year tank was built
Previous inspection type, date, and results
Product
Changes in product service
Types of repairs and repair history
Internal corrosion potential and corrosion mitigation
Construction type
Shell design, materials, seam type
Roof design
Leak detection
Anodes under tank
If bottom was replaced: year bottom replaced, minimum bottom before repair, and minimum bottom after repair
Corrosion rate
Cycling frequency
Cathodic protection.
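As a toy illustration of how such variables could be rolled into a relative tank rating, consider the following sketch. The variable choice, weights, and scoring rules here are invented for illustration only; they are not taken from API 653 or from this text.

```python
# Toy sketch of a relative tank rating built from a few inspection
# variables. The variables, weights, and caps below are invented for
# illustration and are not taken from API 653.

def tank_score(age_years, years_since_inspection, corrosion_rate_mpy,
               has_leak_detection, has_cathodic_protection):
    """Higher score = higher relative failure concern (0-100 scale)."""
    score = 0.0
    score += min(age_years, 50) / 50 * 30               # age: up to 30 points
    score += min(years_since_inspection, 20) / 20 * 30  # inspection currency: up to 30
    score += min(corrosion_rate_mpy, 10) / 10 * 20      # corrosion rate: up to 20
    score += 0 if has_leak_detection else 10            # mitigation credits
    score += 0 if has_cathodic_protection else 10
    return round(score, 1)

# A 40-year-old tank, inspected 5 years ago, 2 mils/yr corrosion,
# with leak detection but no cathodic protection:
print(tank_score(40, 5, 2.0, True, False))
```

A scheme like this lets individual tanks be ranked against each other (for example, to set inspection frequencies), which is the kind of comparison the sectioning discussion later in the chapter anticipates.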
III. Station risk assessment

A station risk assessment model is just one of several important tools normally used within a pipeline operator's overall risk management program. Ideally, the station risk model would have a flexible user-defined structure and be modular, allowing the evaluator to scale the risk assessment to the needs of the analysis. For example, the user may decide to simply employ an index-based approach to prioritize higher risk pipeline facility sections as part of a corrosion prevention program, and not include all factors that could be considered to support a relative cost/benefit analysis for a comprehensive risk-based maintenance budget.

Evaluators can and should use the results from other risk analysis methods, such as matrix or process hazard analysis (PHA) techniques, to provide information supporting an index-based analysis (see Chapter 2). PHAs (e.g., HAZOP, "what-if" scenarios, FMEA) are sometimes completed every several years to meet PSM requirements, but they do not routinely gather and integrate large volumes of facility data as would a comprehensive risk model. Existing PHA action items can be evaluated for risk reduction effectiveness by developing a relative risk mitigation scenario (defined in risk model terms) and calculating a cost/benefit ratio (action cost/score reduction). This is discussed in Chapter 15.

Scope

As discussed in Chapter 2, the scope of a risk assessment should be established as part of the model design. This chapter assumes a risk assessment effort that focuses on risks to public safety, including environmental issues, and covers all failure modes except for sabotage. Sabotage can be thought of as intentional third-party damage. The risk of sabotage commands special consideration for surface facilities, which are more often targeted than buried pipelines. Sabotage often has complex sociopolitical underpinnings, so the likelihood of incidents is usually difficult to judge. Even under higher likelihood situations, mitigative actions, both direct and indirect, are possible. The potential for attack and an assessment of the preventive measures used are fully described in Chapter 9.

As noted in Chapter 1, reliability issues overlap risk issues in many regards. This is especially true in stations, where specialized and mission-critical equipment is often a part of the transportation, storage, and transfer operations. Those involved with station maintenance will often have long lists of variables that impact equipment reliability. Predictive-preventive maintenance (PPM) programs can be very data intensive, considering temperatures, vibrations, fuel consumption, filtering activity, etc., in very sophisticated statistical algorithms. When a risk assessment focuses solely on public safety, the emphasis is on failures that lead to loss of pipeline product. Since PPM variables measure all aspects of equipment availability, many are not pertinent to a risk assessment unless service interruption consequences are included in the assessment (see Chapter 10). Some PPM variables will of course apply to both types of consequence and are appropriately included in any form of risk assessment. See page 19 for discussions on reliability concepts.

Sectioning

For purposes of risk assessment, it may not be practical to assess a station facility's relative risks by examining each in-station section of piping, each valve, each tank, or each transfer pump. It is often useful instead to examine the general areas within a station that are of relatively higher risk than other areas. For example, due to the perceived increased hazard associated with the storage of large volumes of flammable liquids, one station section may consist of all components located in a bermed storage tank area, including tank (floor, walls, roof), transfer pump, piping, safety system, and secondary containment. This section would receive a risk score reflecting the risks specific to that portion of the station. The risk evaluations for each section can be combined for an overall station risk score or kept independent for comparisons with similar sections in other stations.

Often, a station's geographical layout provides a good opportunity for sectioning. There are usually discrete areas for pumps, manifolds, truck loading/unloading, additives, tanks, compressors, etc., that provide appropriate sections for risk assessment purposes. Further distinctions could be made to account for differences in tanks, pumps, compressors, etc., thereby creating smaller sections that have more similar characteristics. In certain cases, it might be advantageous to create contiguous or grouped station sections. In the above example, a section could then include all piping, independent of the tank, pump, or process facility to which it is connected. Another approach could be to include all liquid pipeline station tanks in one section, independent of their type, location, and service.

The sectioning strategy should take into account the types of comparisons that will be done for risk management. If individual tanks must be compared (perhaps to set specific inspection frequencies), then each tank should probably have its own evaluation. If all "compressor areas," from station to station, are to be compared, that should lead to an accommodating sectioning strategy. A sectioning strategy should also consider the need to produce cumulative, length-sensitive scores for comparison to pipeline lengths. This is discussed on page 287.
Data requirements

As noted in Chapter 1, a model is a simplified representation of the real world. The way to simplify real-world processes into an accurate facilities model is to first completely understand the real-world processes in their full complexity. Only then are we able to judge which variables are critical and how they can be logically combined into a valid model. The objective is not to simulate reality, but to model it accurately. The ideal station risk model must be able to withstand a critical engineering evaluation, in addition to its application in real-world risk management decision making.

As with line pipe, the quality and quantity of safety data are limited for pipeline station facilities. Therefore, few statistically based correlations can be drawn from all of the factors believed to play a significant role in failure frequency and consequence. The contributing factors, however, can be identified and considered in a more qualitative sense, pending the acquisition of more statistically significant data. Concepts from statistical failure analysis are useful and underlie portions of this station risk model. However, given the unavailability of data, the uncertainty associated with rare event data, and the complexities of even the simplest facility, a departure from strict statistical analysis is warranted. This departure requires the inclusion of experience and judgment, even when such judgment is only weakly supported by historical data. It is acknowledged and accepted that in using most risk assessment models, some realism is being sacrificed in the interest of understandability and usability. This is consistent with the intent of most models.

The ideal risk assessment methodology works well under conditions of "very little data" as well as conditions of "very extensive data." An overview assessment, where data are scarce, might base an assessment on only a few variables, such as:

Nearby population density
Presence of special environmental areas
Quantity of stored products
Type of products handled
Incident history at the facility
Date of last API 653 out-of-service inspection (for tanks)

In this case, the model would not provide much guidance on specific equipment or procedural changes for a specific tank. It could, however, point to areas where the greatest amounts of resources are best spent. A more detailed version of the methodology, designed to help in detailed decision making, might use a data set including all of the above as well as the following:

Tank surface area
Tank profile (height/width ratio)
Tank joint type (bolted, riveted, welded)
Tank year of construction
Tank foundation type
Tank level alarms
Tank level alarm actions (local, remote, automatic, etc.)
Tank corrosion rate
Staffing level
Traffic flow patterns
Traffic barriers
Security fences
Visitor control
Programmable logic controller (PLC) usage
Critical instrument program
Management of change program
Operator training specifics
Use of SCADA systems
UT inspection program
MFL inspection program
Pump type
Pump speed
Pump seal type
Pump seal secondary containment
Fatigue sources
Material toughness
Etc.

This list can easily extend into hundreds of variables, as shown at the end of this chapter. The risk assessment methodology should work for operators who wish to work with limited data as well as those with extensive, pre-existing databases that need to be incorporated. Figure 13.3 provides an example of an overall station risk model, showing some of the variables chosen for one of the facility modules.
Figure 13.3 Sample of station risk model structure. [Diagram: the station risk score combines a probability-of-failure risk model with a consequence-of-failure risk model for each facility module (e.g., a piping risk score). Consequence variables include product hazards (Hc, MW, vapor pressure, densities, boiling point, soil permeability, water miscibility), pressure, diameter, volume, population density, highly sensitive areas, high-value areas, and business risk. Probability variables include corrosion factors (age, soil, coatings, interference, inspections); external force factors (size, tank count, sympathetic failures, separations, barriers, initial failures, enclosures, traffic, weather, sabotage); design factors (potential preventions, pressure testing, design factors, stress levels, fatigue); and incorrect operations factors (loadings/unloadings, fill levels, training, procedures, substance-abuse testing, safety programs, safety systems, "susceptibility to error" factor, design and construction issues, maintenance programs).]
Model design
For those desiring to develop a custom station risk model, a database-structured approach to model development could be used. Here, a database of all possible variables is first created. Then, depending on the modeling needs, a specific risk model is created from a selection of appropriate variables. The comprehensive station risk variable database will identify the contribution of any and all possible risk variables. The user will then be able to quantify the relative risk benefit or penalty of an action, device, design specification, etc. However, more than 400 variables can be readily identified (see page 288) as possible contributors to station risk. Some add to the risk, others reduce it, and they do not impact the risk equally. One of the initial objectives of a model design should be to determine the critical variables to be considered, which is a function of the level of detail desired.

A cost/benefit balance will often need to be struck between a low- and high-level risk assessment. A comprehensive, high-resolution station facilities risk model will include all possible variables, rigorously defined to allow for consistent quantitative data gathering. A more manageable low-resolution (high-level, screening-only) station model will include only variables making a larger impact to risk. The large volume of detailed data necessary to support a detailed risk model often has initial and maintenance data-gathering costs that are many times the costs of gathering a moderate volume of general data that can be filtered from existing sources.

The risk variables database should be structured to allow sorting, filtering, and selection of variables based on any of the database fields to provide optimum flexibility. The evaluator can easily create multiple custom risk models, or continuously change a model, depending on requirements for level of detail, cost of evaluation, or changes in the perceived importance of specific variables. Within the context of overall risk assessment, making adjustments to the list of variables will not diminish the model's effectiveness. On the contrary, customizing for desired resolution and company-specific issues should improve the model's effectiveness.

To support this approach to model design, each potential model variable should be classified using several database fields to allow for sorting and filtering. The fields shown in Table 13.1 are examples, selected from many possible database fields, that can define each variable. For example, a variable such as pump motor type would be classified as a high-level-of-detail variable, applying to pumps, when consequences of business interruption are considered in the model, while a variable such as population density would be a low-level-of-detail variable that would probably be included in even the simplest risk model. Screening of the database for appropriate variables to include in the model is done using the fields shown in Table 13.1, perhaps beginning with the "Level of detail" field. This initial screening can assist the evaluator in identifying the appropriate number of variables to include in high-, medium-, or low-resolution models.

Table 13.1 Typical database fields for risk variables

Type of data (used to estimate the cost of modeling the variable):
    Engineering: data that are directly counted or measured with common measuring tools
    Frequency: measurable events that occur often enough to have predictive power
    Semiquantitative: combination of frequency data and forecasting (where frequency data are rare, but potential exists) and/or a judgment of quality
Type of failure mode:
    Third-party damage; Corrosion; Design; Incorrect operations
Type of impact:
    Health; Environmental; Business
Type of facility:
    Aboveground storage tanks; Underground storage tanks; Collection sumps; Transfer racks; Additive systems; Pumps; Compressors; Engines; Piping
Level of detail:
    High (use only for very detailed models); Medium (use for models of moderate complexity); Low (use for all models)

The grouping of variables by failure modes is done for two reasons:

1. Data handling, analysis, and reactions are enhanced because specific failure modes can be singled out for comparisons, deeper study, and detailed improvement projects.
2. The ability to compare modeling results is better preserved, even if the choice of variables changes from user to user or the model structure changes. For example, the relative risk of failure due to internal corrosion can be compared to assessments from other models or can be judged by an alternate selection of variables.
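The screening step described above can be sketched as a simple filter over a variable database. The field names, example variables, and classifications below are hypothetical, for illustration only:

```python
# Hypothetical sketch of screening a station risk variable database by
# the Table 13.1-style fields. Variable names and classifications are
# invented for illustration.

variables = [
    {"name": "population density", "impact": "health",
     "facility": "all", "level_of_detail": "low"},
    {"name": "pump motor type", "impact": "business",
     "facility": "pumps", "level_of_detail": "high"},
    {"name": "tank corrosion rate", "impact": "environmental",
     "facility": "aboveground storage tanks", "level_of_detail": "medium"},
]

def screen(variables, max_detail="medium", include_business=False):
    """Select variables for a model of the requested resolution."""
    rank = {"low": 0, "medium": 1, "high": 2}
    selected = []
    for v in variables:
        if rank[v["level_of_detail"]] > rank[max_detail]:
            continue  # too detailed for this model resolution
        if v["impact"] == "business" and not include_business:
            continue  # business-interruption consequences excluded
        selected.append(v["name"])
    return selected

# Medium-resolution, public-safety/environment-only model:
print(screen(variables))
# High-resolution model that also covers business interruption:
print(screen(variables, max_detail="high", include_business=True))
```

The same filter could be driven by any of the database fields (failure mode, facility type, etc.), which is what gives the database-structured approach its flexibility.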
Weightings

Each variable in the database should be assigned a weight based on its relative contribution to the risk. Whether the variable represents a potential condition/threat (risk-increasing factor) or a prevention/mitigation (risk-reduction factor), it can first be assessed based on a scale such as that shown in Table 13.2. The number of variables included in the model will determine each variable's influence within the model, since the total risk is distributed among all the variables. This raises a model resolution issue: the more variables included in the model, the smaller the role of each variable, because of a dilution effect if all weightings sum to 100%. Overall company risk management philosophy guidelines should be established to govern model-building decisions. Example guidelines on how risk uncertainty can be addressed include these:
1. Results from older surveys and inspections (e.g., tank inspections, CP readings) will have less impact on risk assessments. The "deterioration" of information value depends on many factors and is specific to the survey/inspection/equipment type (see Chapter 2).
2. Estimated data will have less impact on risk scores than data with a known level of accuracy (e.g., depth of cover, coating condition) (see Chapter 8).

Uncertainty is further discussed in Chapters 1 and 2.

When deciding on a particular risk model structure, many cost and effectiveness factors should be considered, such as minimizing duplication of existing databases, efficiently extracting information from multiple sources, capturing experts' knowledge, and periodically collecting critical data. All risk model data are best gathered based on data collection protocols (e.g., restricted vocabulary, unknown defaults, underlying assumptions) as discussed in earlier chapters. A lower level risk model should be structured to allow "drilling down" to assess individual equipment, whereas a high-level risk model may be structured to allow assessment at only the overall station level.

The following are general risk beliefs that, if accepted by the model designer, can be used to help structure the model:

1. A more complex facility will generally have a higher likelihood of failure. A facility with many tanks and piping will have a greater area of opportunity for something to go wrong, compared to one with fewer such facilities (if all other factors are the same). A way to evaluate this is described on pages 265-266.
2. A manned facility with no site-specific operating procedures and/or less training emphasis will have a greater incorrect operations-related likelihood of human error than one with an appropriate level of procedures and personnel training.
3. A facility handling a liquefied gas, which has the mechanical energy of compression as well as chemical energy and the ability to produce vapor cloud explosions, creates considerably more potential health and safety-related consequence than does a low vapor pressure liquid, which has no mechanical energy and is much harder to ignite. On the other hand, some nonvolatile liquids can create considerably more environmentally related consequences.
4. Volume of product stored, product hazards, prevention, and mitigation systems all drive the magnitude of consequences.

Table 13.2 Variable risk contribution weighting

Conditions/threats:
5: Variable can easily, independently cause failure (highest weight)
4: Variable can possibly independently cause failure
3: Variable is a significant contributor to failure scenarios
2: Variable, in concert with others, could cause failure
1: Variable plays a minor role in this failure mode (lowest weight)

Preventions/mitigations:
5: Variable can easily, independently prevent failure (highest weight)
4: Variable can possibly independently prevent failure
3: Variable is a significant obstacle to failure scenarios
2: Variable, in concert with others, could prevent failure
1: Variable plays a minor role in this failure mode (lowest weight)

Process

To outline a risk model based on the optimum number of variables from all of the possibilities shown in the database, the following procedure can be used:

1. Conceptualize a level of data collection effort that is acceptable, perhaps in terms of hours of data collection per station. This can be the criterion by which the final variable list is determined.
2. Begin with an extensive list of possible risk variables, since any variable could be critical in some scenario. See the sample variable list at the end of this chapter.
3. Filter out variables that apply to excluded types of threats: ones that will never be a consideration for the facilities assessed (e.g., if there is no volcano potential, then the volcano-related variables can be filtered out; similarly, threats from meteors, hurricanes, freezes, etc., might not be appropriate).
4. Examine the total variable count, estimated cost of data, and distribution of variables across the failure modes. If acceptable, exit this procedure, determine how best to combine the variables, and create data collection forms to populate a database.
5. To minimize the level of detail (and associated costs) of the model, examine the lower weighted variables and filter out variables that have minimal application. In effect, the model designer is beginning at the bottom of the list of critical variables and removing variables until the model becomes more manageable without sacrificing too much risk-distinguishing capability. This becomes increasingly subjective and use-specific.
At any time in this process, variables can be edited and new ones added. As implied in this procedure, care should be taken that certain failure modes are not over- or underweighted. This procedure can be applied to each failure mode independently to ensure that a fair balance occurs. Each failure mode could also have a preassigned weighting. Such weighting might be the result of company incident experience or industry experience. This should be done carefully, however, since drawing attention away from certain failure modes might eventually negatively change the incident frequency. Having determined the optimum level of detail and a corresponding list of critical variables, the model designer will now have to determine the way in which the variables relate to each other and combine to represent the complete risk picture. The following sections describe some overall model structures in order to give the designer ideas of how others have addressed the design issue. Most emphasis is placed on the first approach since it parallels Chapters 3 through 7 of this text.
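The trimming step of the procedure, with a check that no failure mode is inadvertently stripped of variables, can be sketched as follows. The variable names and weights are invented for illustration; the 1-5 weights follow the scale of Table 13.2:

```python
# Hypothetical sketch of trimming low-weight variables while watching
# the balance across failure modes. Variable names and weights are
# invented; weights use the 1-5 scale of Table 13.2.

variables = [
    ("internal corrosion potential", "corrosion", 5),
    ("coating condition", "corrosion", 4),
    ("traffic barriers", "external forces", 2),
    ("volcano proximity", "external forces", 1),
    ("training program quality", "incorrect operations", 4),
    ("substance-abuse testing", "incorrect operations", 1),
]

def trim(variables, min_weight):
    """Drop variables below min_weight; report per-failure-mode counts
    so over- or under-representation of a mode is visible."""
    kept = [v for v in variables if v[2] >= min_weight]
    balance = {}
    for _, mode, _ in kept:
        balance[mode] = balance.get(mode, 0) + 1
    return kept, balance

kept, balance = trim(variables, min_weight=2)
print([name for name, _, _ in kept])
print(balance)  # per-mode counts after trimming
```

If trimming leaves one failure mode with no (or very few) variables, the designer would revisit the cut rather than accept a model blind to that mode.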
IV. Risk assessment model

This approach suggests a methodology to generate risk assessments that are very similar to those generated for the pipe-only portions of a pipeline system. It is based on the evaluation system described in Chapters 3 through 7. For facilities that are for
the most part aboveground, such as terminals, tank farms, and pump stations, and are usually on property completely controlled by the owner, the approach described in those chapters should be somewhat modified. Some suggested modifications are designed to better capture the risks unique to surface facilities, while maintaining a direct comparability between these facilities and the pipe-only portions of the pipeline system. The basic components of the risk score for any station facility are showninTable 13.3.
Risk model components In the revised model, variables in the corrosion, design, and incorrect operations indexes are scored as described in Chapters 4 through 6 , respectively, with only minor modifications. The leak impactfactor (LIF) is similarly scored with only a slight possible modification, as described later. The main difference in the revised model entails the treatment of certain external forces. In Chapter 3, an index called the third-party damage index is used to assess the likelihood of unintentional outside forces damaging a buried pipeline or a small aboveground component such as a valve station. A different set of outside forces can impact a surface facility so this index title has been changed to External Forces for use in station assessments. Comparisons and references to the basic model are made in the descriptions of scorable items that follow. After customization, the risk model for pipeline station facilities could have the following items: External Forces Index Corrosion Index A. Atmospheric Corrosion B. Internal Corrosion C. Subsurface Corrosion Design Index A. Safety Factor B. Fatigue C. Surge Potential D. Integrity Verification E. Land Movements
Table 13.3  Basic components of a risk score for a station facility

Risk model component / Type of information needed

Probability
  Probability variables: Conditions and activities that are integrity threats; qualities of variables and weightings
  Area of opportunity: Physical equipment and material sizes; counts of more problematic components
Consequence
  Product hazard: Acute and chronic product hazards; stored energy quantities
  Spill size: Volumes stored; leak detection capabilities; secondary containment
  Receptors: Population, environmental receptors, high-value area considerations; rangeability; loss control systems

Risk score = probability x consequence = [Index Sum] / [LIF]
Incorrect Operations Index
  A. Design
  B. Construction
  C. Operations
  D. Maintenance
Leak Impact Factor
  Product Hazard
  Spill Size
  Dispersion
  Receptors

[Index Sum] = [External Forces] + [Corrosion] + [Design] + [Incorrect Operations]
[Relative Risk] = [Index Sum] / [LIF]
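The arithmetic of this scoring scheme can be sketched in a few lines. All index values below are illustrative examples, not values from the text:

```python
# Illustrative sketch of the relative risk calculation above. Each index
# is scored 0-100 (100 = safest), and the leak impact factor (LIF)
# grows with consequence severity. Example inputs are hypothetical.

def relative_risk(external_forces, corrosion, design, incorrect_ops, lif):
    """[Relative Risk] = [Index Sum] / [LIF]; a higher result is safer."""
    index_sum = external_forces + corrosion + design + incorrect_ops
    return index_sum / lif

score = relative_risk(external_forces=70, corrosion=65, design=80,
                      incorrect_ops=75, lif=10.0)
print(score)  # 290 / 10 = 29.0
```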
Given the many types of stations that might be evaluated with this model, an additional adjustment factor, to take into account the relative size and complexity of a station, is recommended. This is called the equivalent surface area, discussed next, and it is used to adjust the index sum.
Equivalent surface area

In this risk assessment approach, the failure probability of a station is thought to be directly proportional to the station's complexity and its density of more "problematic" components. The facility dimensions, adjusted for components that historically are more problematic, provide a relative "area of opportunity" for failures. Specifically, larger surface areas present more chances for corrosion, traffic impacts, fire impingement, projectile loadings, and wind loadings, and often more complexity, which can lead to human error. It is reasonable to believe that more tankage, more piping, more pumps, more vessels, etc., lead to more risk of failure. Under this premise, stations will show higher failure probabilities overall as they become larger and more complex, compared to cross-country pipe or smaller stations. This is consistent with commonly held beliefs and seems to be supported by many companies' incident databases.

A measuring scale can be developed to capture the relative complexity and nature of facilities. This scale is called the equivalent surface area. It selects a base case, such as 1 square foot of aboveground piping. All other station components are then related to this base case in terms of their relative propensity to initiate or exacerbate leaks and other failures.

The equivalent surface area measure first evaluates the physical area of the assessed facilities. Actual surface area is calculated from facility dimensions: the combined surface areas of all piping, tankage, compressors, etc. Adjustments are then made for higher leak-incident components by converting a count of such components into an equivalent surface area. Table 13.4 is a sample table of equivalencies for some commonly encountered station components. The relationships shown in Table 13.4 are established from any available published failure frequency data (in any industry) or, otherwise, from company experience and expert judgment.
Table 13.4 implies that, from a leak incident standpoint, 1000 ft2 of aboveground piping = 200 ft2 of tank bottom = 1/2 of a Dresser coupling = 5 other mechanical couplings = 20 tandem pump seals. This reflects a belief that couplings and tank bottoms cause more problems than aboveground piping.
Table 13.4  Components and their equivalent surface areas

Component                              Equivalent area (ft2)
Piping (aboveground)                   1
Tanks                                  2
Tank bottom                            5
Dresser coupling                       2000
Other mechanical coupling              200
Pump seal, tandem                      50
Pump seal, single                      100
Already corroded/damaged material      20
Atmospheric corrosion hot spots        5
Pump (per horsepower)                  10
Valves                                 10
Penalty for buried component           0.5
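As a sketch of how equivalency factors like those in Table 13.4 might be applied in practice, the routine below converts a station's component inventory into an equivalent surface area. The component keys and the sample inventory are hypothetical:

```python
# Equivalent-surface-area tally using the Table 13.4 factors. Quantities
# are actual ft2 for area-type entries (piping, tanks, tank bottoms,
# corroded material) and unit counts otherwise (pumps are per horsepower).

FACTORS = {
    "piping_ft2": 1, "tank_shell_ft2": 2, "tank_bottom_ft2": 5,
    "dresser_coupling": 2000, "mechanical_coupling": 200,
    "pump_seal_tandem": 50, "pump_seal_single": 100,
    "corroded_material_ft2": 20, "corrosion_hot_spot": 5,
    "pump_hp": 10, "valve": 10,
}

def equivalent_surface_area(inventory, buried_fraction=0.0):
    """Sum factor-weighted quantities, then apply the 50% penalty (the
    0.5 entry in Table 13.4) to the buried, hard-to-inspect fraction."""
    base = sum(FACTORS[item] * qty for item, qty in inventory.items())
    return base * (1 + 0.5 * buried_fraction)

# Hypothetical small pump station, fully aboveground
station = {"piping_ft2": 1000, "tank_bottom_ft2": 200,
           "dresser_coupling": 1, "pump_seal_tandem": 2, "valve": 10}
print(equivalent_surface_area(station))  # 1000+1000+2000+100+100 = 4200.0
```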
Table 13.4 also shows that the equivalency designers believe that buried components are more problematic than aboveground ones. A penalty is assigned for buried or otherwise difficult-to-inspect portions of the facility. While buried portions enjoy a reduced risk from external forces and fire, on balance it is felt that the inability to inspect, and the increased opportunity for more severe corrosion, warrant a penalty. This is contrary to the case of cross-country pipelines where, on balance, buried components are thought to present a reduced risk. In the example table above, the penalty assigned to buried station facilities increases the equivalent surface area by 50%.

A good way to develop these relationships in the absence of actual failure data is to collectively ask station maintenance experts questions such as "From a maintenance standpoint, how much piping would you rather have than one pump seal?" This puts the issue in perspective and allows the group to come up with the needed equivalencies. The scale should be flexible, since knowledge will change over time. Changes to the equivalencies can automatically convert into new risk scores if a robust computer model is used.

The equivalent surface area is numerically scaled from the highest to lowest among the stations and facilities to be assessed. That is, the largest equivalent-area station sets the high mark on the relative scale. The low mark can be taken at 0 or at the smallest station, depending on model resolution needs. The equivalent surface area factor, the ratio of the station's score to the highest score of any facility to be evaluated, is then used to adjust the index sum. So, if the index sums for two facilities turn out to be exactly equal, the one with the larger equivalent surface area will show a higher failure probability level. The exact amount of impact that the equivalent surface area has on the index sum is a matter of judgment.
Saying that the most complex station has a failure probability 50% higher than the least complex, or that its failure rate is 10 times higher than that of the least complex station, are both justifiable decisions, depending on the station types, operator experience, historical data, etc. The mathematics is therefore left to the evaluator to determine.
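One hypothetical way to carry out that judgment is shown below; the choice of a 50% maximum adjustment is purely an example of the kind of decision the evaluator must make:

```python
# Sketch: normalize each station's equivalent surface area (ESA) across
# all assessed stations, then shrink the index sum for larger, more
# complex stations (a lower score means higher failure probability).
# The max_penalty of 0.5 (a 50% reduction for the largest station) is
# one example of the evaluator's judgment call described in the text.

def area_factor(esa, esa_min, esa_max):
    """Scale ESA to 0..1 across the population of assessed stations."""
    return (esa - esa_min) / (esa_max - esa_min)

def adjusted_index_sum(index_sum, esa, esa_min, esa_max, max_penalty=0.5):
    return index_sum * (1 - max_penalty * area_factor(esa, esa_min, esa_max))

# Two stations with identical raw index sums but different complexity
print(adjusted_index_sum(290, esa=4200, esa_min=500, esa_max=9000))
print(adjusted_index_sum(290, esa=9000, esa_min=500, esa_max=9000))  # 145.0
```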
External forces index

For surface facilities, the third-party damage index can be replaced by the external forces index. This index is more fully explained here. Based on 100 points maximum (safest situation = 100 points), as with the other indexes, the external forces index assesses risks from possible outside forces related to:

Traffic
Weather
Successive reactions.
Traffic

The potential for damage by outside force increases with increasing activity levels, which include the type, frequency, intensity, complexity, and urgency of station activities. Also relevant are the qualifications of personnel who are active in the station, weather conditions, lighting, third-party access, traffic barriers, security, and any third-party awareness/damage prevention program.

Vehicle impact against some facility component is a threat. The type of vehicular traffic, the frequency, and the speed of those vehicles determine the level of threat. Vehicle movements inside and near the station should be considered, including:

Aircraft
Trucks
Rail traffic
Marine traffic
Passenger vehicles
Maintenance vehicles (lawn mowers, etc.).

Vehicles might be engaged in loading/unloading operations or station maintenance, or may simply be operating nearby. Traffic flow patterns within the station can be considered: Is the layout designed to reduce chances of impact to equipment? Use of signs, curbs, barriers, supervising personnel, operations by personnel unfamiliar with the station (perhaps remote access by nonemployee truckers), lighting, and turn radii are all considerations. With closer facility spacing, larger surface areas, and poor traffic control, the potential for damage increases.

Type and speed of vehicles can be assessed as a momentum factor, where momentum is defined in the classic physics sense of vehicle speed multiplied by vehicle mass (weight). Momentum can be assessed in a quantitative or qualitative sense, with a qualitative approach requiring only the assignment of relative categories such as high, medium, and low momentum. The frequency can be similarly judged in a relative sense. Note that relative frequency scales can and should be different for different vehicle types. For example, a high frequency of aircraft might be two or three planes per hour, whereas a high frequency for trucks might be several hundred per hour (on a busy highway).
For each type of vehicle, the frequency can be combined with the momentum to yield a point score. Where the potential for more than one type of vehicle impact exists, the points are additive. Where protective measures such as barrier walls or protective railings have been installed, the momentum component for the respective vehicle can be reduced. Similarly, natural barriers such as distance, ditches, and trees can be included here. This is consistent with the physical reality of the situation, since the barrier will indeed reduce the momentum before the impact to the facilities occurs.
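A simple qualitative scoring of this momentum-and-frequency combination might look like the following; the category weights and the sample traffic mix are hypothetical:

```python
# Traffic-threat sketch: each vehicle type contributes (momentum x
# frequency) points, with barriers reducing the effective momentum as
# described above. Here a higher total means a higher threat; a model
# keyed to "100 = safest" would subtract these points from a maximum.

MOMENTUM = {"low": 1, "medium": 2, "high": 3}
FREQUENCY = {"low": 1, "medium": 2, "high": 3}

def traffic_points(vehicles):
    """vehicles: iterable of (momentum, frequency, barrier) tuples,
    where barrier is 0..1 (1 = momentum fully absorbed before impact)."""
    return sum(MOMENTUM[m] * (1 - barrier) * FREQUENCY[f]
               for m, f, barrier in vehicles)

# Trucks behind a guard rail, frequent mowers, occasional rail traffic
mix = [("high", "medium", 0.5),  # trucks, partially shielded
       ("low", "high", 0.0),     # maintenance vehicles
       ("high", "low", 0.0)]     # rail
print(traffic_points(mix))  # 3.0 + 3.0 + 3.0 = 9.0
```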
Weather

The threat associated with meteorological events can be assessed here. Events such as wind storms, tornadoes, hurricanes, lightning, freezing, hail, wave action, snow, and ice loadings should be considered. (Note that earth movements such as earthquakes and landslides are considered in the design index.) A relative, qualitative scale can be used to judge the frequency of occurrence for each possible event and the potential damages resulting from any and all events. In areas where multiple damaging events are possible, the score should reflect the higher potential threats. Mitigation measures can reduce threat levels.

Successive reactions

The threat associated with one portion of the facility (or a neighboring facility) causing damage to another portion of the facility is assessed here. Examples include vessels containing flammable materials that, on accidental release and ignition, can cause flame impingement or explosion overpressure damages (including projectile damages) to adjacent components of the facility. Therefore, portions of a facility that are more susceptible to such secondary accident effects will show a higher risk. The threat value associated with this external force is logically lower, since another event must first occur before this event becomes a threat; this reduces the probability of the successive reaction event.

A qualitative scale can be used to judge this risk level, including the damage potential of the causal event. The type and quantity of the material stored determine the damage potential. A calculation of the overpressure (blast wave) effects from an explosion scenario is a valid measure of this potential (see Chapters 7 and 14). Where such calculations are not performed, an approximation can be made based on the type, quantity, and distance of the nearby flammables. Points are assigned based on the vulnerability of nearby facilities. Where protective shields, barriers, or distance reduce the likelihood of damage from the causal event, the threat is reduced and point assignments should reflect the lower potential. Protective barriers and shields should be assessed for their realistic ability to protect adjacent components from thermal and blast effects. Note that, for simplicity, the likelihood of failure of the causal event is usually not considered, since such consideration involves another complete risk assessment. This additional assessment might not be possible if the causal event can occur from a neighboring facility that is not under company control.

Corrosion index

Depending on the materials being used, the same corrosion mechanisms are at work on pipeline station facilities as are found in buried or aboveground pipe on the ROW. However, it is not unusual to find station piping that has little or no coating, or other means of corrosion prevention, and is therefore more susceptible to corrosion. As in the basic line pipe model, corrosion potential is assessed in the three categories of atmospheric, internal, and subsurface corrosion.

A. Atmospheric corrosion

Atmospheric corrosion potential is a function of facility design, environment, coating systems, and preventive maintenance practices. There are many opportunities for "hot spots" as described in Chapter 4. Many station facilities are located in heavy industrial areas or near waterways to allow for vessel transfers. Industrial and marine environments are considered to be the most severe for atmospheric corrosion, whereas inland dry climates are often the least severe. Score the potential for atmospheric corrosion as shown in Chapter 4.
B. Internal corrosion

During normal operations, station facilities are generally exposed to the same internal corrosion potential as described in Chapter 4. However, certain facilities can be exposed to corrosive materials in higher concentrations and for longer durations. Sections of station piping, equipment, and vessels can be isolated as "dead legs" for weeks or even years. The lack of product flow through these isolated sections can allow internal corrosion cells to remain active. In addition, certain product additive and waste collection systems can concentrate corrosion-promoting compounds in station systems designed to transport products within line pipe specifications. Score the items for internal corrosion, product corrosivity, and internal protection as described elsewhere in this text.
C. Subsurface corrosion

In some older buried metal station facility designs, little or no corrosion prevention provisions were included. If the station facilities were constructed during a time when corrosion prevention was not undertaken, or it was added only after several years, then one would expect a history of corrosion-caused leaks. Lack of initial cathodic protection was fairly common for buried station piping constructed prior to 1975. If it can be demonstrated that corrosion will not occur in a certain area due to unsupportive soil conditions, CP might not be required. The evaluator should ensure that adequate tests of each possible corrosion-enhancing condition, at various soil moisture levels during a year, have been made before subsurface corrosion is dismissed as a failure mechanism.

Modern stations employ the standard two-part defense of coatings and cathodic protection detailed in Chapter 4. Subsurface corrosion potential can be evaluated as described in that chapter, with consideration for some station-specific issues. Older, poorly coated, buried steel facilities will have quite different CP current requirements than newer, well-coated steel lines. These sections must often be well isolated (electrically) from each other to allow cathodic protection to be effective. Given the isolation of buried piping and vessels, a system of strategically placed anodes is often more efficient than a rectifier impressed-current system at pipeline stations. It is common to experience electrical interference among buried station facilities, where shorting (unwanted electrical connectivity) of protective current occurs with other metals and may lead to accelerated corrosion.

Even within a given pipeline station, soil conditions can change. For instance, tank farm operators once disposed of tank bottom sludges and other chemical wastes on site, which can
cause highly localized and variable corrosive conditions. In addition, some older tank bottoms have a history of leaking products over a long period of time into the surrounding soils and into shallow groundwater tables. Some materials may promote corrosion by acting as a strong electrolyte, attacking the pipe coating, or harboring bacteria that add corrosion mechanisms. Station soil conditions should ideally be tested to identify placement of non-native material and soils known to be corrosion promoting.

Station piping of different ages and/or coating conditions may be joined. Dissimilar metals can create galvanic cells and promote corrosion in such piping connections. Pipeline stations sometimes use facilities as an electrical ground for a control building's electrical system, which can possibly impact the cathodic protection system, corrosion rates, and spark generation.

AC induction is a potential problem in station facilities anytime high voltages are present. Large compressor and pump stations, as well as tank farms, normally carry high-voltage and high-current electrical loads. Therefore, nearby buried metal can act as a conduit, becoming charged with AC current. Although AC induction is primarily a worker safety hazard, it has also been shown to be disruptive to the station's protective DC current and a direct cause of metal loss.
Design index

As detailed in Chapter 5, the design index is a collection of failure mechanisms and mitigations related to original design conditions. The main variables described there are also appropriate for a station risk model. Those factors are:

A. Safety Factor
B. Fatigue
C. Surge Potential
D. Integrity Verification
E. Land Movements

Some additional issues arise regarding the safety factor and fatigue assessments, as discussed here.
A. Safety factor

Although pipeline station facilities are typically constructed of carbon steel, other construction materials are also used. Because station equipment can be made of a composite of different materials, it can be useful to distinguish between materials that influence the risk picture differently. In scoring the safety factor, the evaluator should take into account material differences and other design factors peculiar to station facilities.

The stress level of a component, measured as a percentage of maximum allowable stress or pressure, shows how much margin exists between normal operating levels and component maximum stress levels. At stress levels close to absolute tolerances, unknown material defects or unanticipated additional stresses can easily result in component failure. Systems that are being operated at levels far below their design levels have a safety margin or safety factor. Many pressure vessels and pipe components have safety factors of 1.5 to 2.0. When the safety factor is close to 1.0, there is little or no margin for error or to
handle unanticipated stresses. Stress levels in components with complex shapes are often difficult to calculate; manufacturer information is often used in those cases. Either normal operating pressures or maximum operating pressures can be used in calculating stress levels, as long as one or the other is consistently applied. Adjustments for joint efficiencies in tanks and piping might also be appropriate.

Materials with a lack of ductility also have reduced toughness. This makes the material more prone to fatigue-type failures and temperature-related failures and also increases the chances for brittle failures. Brittle failures are often much more consequential than ductile failures, since the potential exists for larger product releases and increased projectile loadings. The potential for catastrophic tank failure should be considered, perhaps measured by shell and seam construction and membrane stress levels for susceptibility to brittle fracture.
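For a simple cylindrical component, the margin can be illustrated with Barlow's hoop-stress formula; the pipe dimensions and allowable stress below are hypothetical examples:

```python
# Safety-factor sketch using Barlow's formula for hoop stress in pipe,
# S = P * D / (2 * t). Components with complex shapes, as noted above,
# usually require manufacturer data instead of a hand calculation.

def hoop_stress_psi(pressure_psi, od_in, wall_in):
    return pressure_psi * od_in / (2 * wall_in)

def safety_factor(allowable_stress_psi, operating_stress_psi):
    """Near 1.0 = little margin; 1.5-2.0 is typical for many vessels."""
    return allowable_stress_psi / operating_stress_psi

# Hypothetical 12.75-in OD, 0.250-in wall pipe at 720 psig
stress = hoop_stress_psi(pressure_psi=720, od_in=12.75, wall_in=0.250)
print(round(stress))  # 18360 psi
print(round(safety_factor(35000, stress), 2))  # margin vs. allowable stress
```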
B. Fatigue

As one of the most common failure mechanisms in steel, fatigue potential is assessed as discussed on pages 000-000. Instances of high stress levels at very rapid loading and unloading (high frequency of stress cycles) are the most damaging scenario. The threat is reduced as cycle frequency or magnitude is reduced. It is common practice to put extra-strength components with very high ductility into applications where high fatigue loadings are anticipated.

Common causes of fatigue on buried components and aboveground connections to equipment include loading cycles from traffic, wind loadings, water impingements, harmonics in piping, rotating equipment, pressure cycles, temperature cycles, and ground freezing/thawing cycles. Mitigation options include the removal or reduction of the cycles or, as previously mentioned, the use of special materials.

Vibration monitoring

As a further measure of potential fatigue loadings, sources of vibration can be assessed. As a prime contributor to vibration effects, rotating equipment vibrations can be directly measured or inferred from evidence such as action type (piston versus centrifugal, for example), speed, operating efficiency point, and cavitation potential. Common practices to minimize vibration effects include careful attention to equipment supports, PPM practices, pulsation dampers, and the use of high-ductility materials operating far from their maximum stress levels.
Incorrect operations index

Human error is a significant factor to consider when scoring risk at a pipeline station. Human error is often the true root cause of facility failures when one considers that proper design, construction, testing, operations, inspection, and maintenance should prevent almost all equipment and product containment integrity failures. A station environment provides many more opportunities for human error, but it also provides more chances to interrupt an accident sequence through mitigation measures that avoid human error. This part of the assessment builds on Chapter 6. Several previously described risk variables that are specific to the station environment are discussed here.
A. Design

Overpressure potential

A measure of the susceptibility of the facility to overstressing is a valid risk variable. The safest condition is when no pressure source exists that can generate sufficient pressure to exceed allowable stresses. Where pressure sources can overstress systems and safety systems are needed to protect the facility, risk increases. This includes consideration of the pumping head, which can overfill a tank. It also includes consideration of changing allowable stresses due to changes in temperature. Note that the adequacy of safety systems and the potential for specialized stresses such as surges and fatigue are examined elsewhere in this model. It is common in the industry for systems to contain pressure sources that can far exceed allowable stresses.

Overpressure of customer facilities should also be considered. It is primarily the responsibility of the customer to protect their facilities downstream of a custody transfer station from an overpressure event. When in-station piping directly supplies adjacent customer stations, or when it laterals off a mainline pipe end at a custody transfer station (e.g., block valve, manifold, regulators, meter set), the customer's downstream overpressure protection scheme should be examined to confirm that their safety system capabilities are designed to prevent overpressure of downstream equipment and piping.

In general, score these items for design, hazard ID, MAOP potential, safety systems, material selection, and checks as described on pages 119-124.

Safety systems

Risk is reduced as safety systems are able to reliably take independent action, without human intervention, to prevent or minimize releases. Although there is no real standard in the industry, most agree that if false alarms can be minimized, then safety systems that close valves, stop pumps, and/or isolate equipment automatically in extreme conditions are very valuable. Early warning alarms and status alerts when actions are taken should ideally be sent to a monitored control center. Also valuable is the ability of a manned control center to remotely activate isolation and shutdowns to minimize damages. Not as valuable, especially for unmanned, infrequently visited sites, are safety systems that merely produce a local indication of abnormal conditions. Safety system actions that provide increasing station facility overpressure protection include equipment shutdown, equipment isolation, equipment lock-out, station isolation, station lock-out, and full-capacity relief. Lock-out typically requires a person to inspect the station conditions prior to resetting trips and restarting systems.

Safety systems evaluation

To ensure the adequacy of safety systems, periodic reviews are valuable. Such reviews should also be triggered by formal management of change policies or anytime a change is made in a facility. HAZOPs or other hazard evaluation techniques are commonly used to first assess the need for and/or adequacy of safety systems. This is often followed by a review of the design calculations and supporting assumptions used in specifying the type and actions of the device. The most successful program will have responsibilities, frequencies, and personnel qualifications clearly spelled out. DOT requires or implies an annual review frequency for overpressure safety devices.

B. Construction

Because of the age of many station facilities and the construction philosophies of the past, complete construction and test records of the facilities are typically not available. Evidence to score construction-related items might have to be accumulated from information such as leak/failure histories, visual inspections of the systems, and comparisons with similar systems in other areas. Score these items for inspection, materials, joining, backfill, handling, and coatings as described on pages 124-125.
C. Operations

Station operations typically present more opportunities for errors, such as overpressure due to inadvertent valve closures, or incorrect product transfers that route product to the wrong tank or overfill a tank. Some changes are made from the basic risk assessment model for scoring items in this part of the incorrect operations index, as discussed next.
C1. Procedures

Score as described on pages 125-126, with the following additional considerations. A comprehensive and effective "procedures program" effort should capture all current station facility design, construction, maintenance, operations, testing, emergency response, and management-related procedures. Current station procedures that are considered important or required to adequately operate the station should be available at each station or easily accessible to station personnel. Key station-related activity procedures should allow for the recording of data on procedure forms (records) for personnel review and future use. There should be no recent history of station procedure-related problems.

All procedures should be appropriate for the necessary type (design, operations, maintenance, etc.), conditions (location, personnel skills, systems complexity, etc.), best practices (industry, company, etc.), communications method (written, verbal, video), and needs (job safety analysis, job task analysis, job needs analysis). Several layers of procedures should be in place, ranging from general corporate policies (i.e., 10 principles of conduct) to guideline standard practices (i.e., a damage prevention program) to station-specific procedures (i.e., abnormal operations procedures) to detailed job task recommended practices (i.e., valve manufacturer maintenance procedures). Many technical writing "best practices" could be listed to provide guidelines for "what makes an excellent procedure," but this is outside the scope of this text.

Management of change

A formal management of change (MOC) process should be in place that identifies facility procedure-related changes that may affect the procedures program and provides adequacy review guidelines (see below).
A formal written process should exist that provides best practices for field personnel’s modification of company procedures, including communication of changes, procedure revision, and change distribution and implementation. Recent procedure changes should be incorporated into company standards, recommended practices, and local procedures for daily use by station personnel. Procedure changes that are more than 3 months old should be reflected in newly issued procedures accompanied by a change log.
C2. SCADA/communications

A SCADA system allows remote monitoring and some control functions, normally from a central location such as a control center. Standard industry practice seems to be 24-hours-per-day monitoring of "real-time" critical data, with audible and visible indicators (alarms) set for abnormal conditions. At a minimum, control center operators should have the ability to safely shut down critical equipment remotely when abnormal conditions are seen. Modern communication pathways and scan rates should bring in fresh data every few seconds with 99.9%+ reliability, and with redundant pathways (often manually implemented dial-up telephone lines) in case of extreme pathway interruptions. Protocols that require field personnel to coordinate all station activities with a control room offer an opportunity for a second set of eyes to interrupt an error sequence. Critical stations should be identified and must be physically occupied if SCADA communications are interrupted for specified periods of time. Proven, reliable voice communications between the control center and field should be present. When a host computer provides calculations and control functions in addition to local station logic, all control and alarm functions should be routinely tested from the data source all the way through final actions.

As a means of reducing human errors, the use of supervisory control and data acquisition (SCADA) systems and/or other safety-related systems that provide for regular communications between field operations and a central control is normally scored as an error reducer in the basic risk model. As a means of early problem detection and human error reduction, the presence of a SCADA system and a control center that monitors in-station transfer systems can be similarly scored, as shown on pages 126-128.
C3. Drug testing

Score this item as described on page 128.

C4. Safety programs

Score this item as described on page 128. Good "housekeeping" practices can be included under this risk variable. Housekeeping can include treatment of critical equipment and materials so they are easily identifiable (using, for instance, a high-contrast or multiple-color scheme), easily accessible (next to a work area or central storage building), clearly identified (signs, markings, ID tags), and clean (washed, painted, repaired). Housekeeping also includes general grounds maintenance so that tools, equipment, or debris are not left unattended or equipment left disassembled. All safety-related materials and equipment should be maintained in good working order and replaced as recommended by the manufacturer. Station log and reference materials and drawings should be current and easily accessible.

C5. Surveys/maps/records

Score this item as detailed on pages 128-129. For maximum risk-reduction credit under this evaluation, a comprehensive and effective "documentation program" effort should have captured all current station facility design, construction, testing, maintenance, and operations-related data and drawings. Current, or as-built, station data and drawings that are considered important or required to adequately operate the station should be available at each station or easily accessible to station personnel. Key station activities and conditions data should be recorded electronically (database) or on forms (records) for personnel review and future use. There should be no recent history of station documentation-related problems. All as-built station data and drawings should accurately reflect the current facility conditions.

A formal MOC process should be in place that identifies facility activity or condition changes that may affect the documentation program and provides adequacy review guidelines (see below). A formal written process should exist for the modification of station facility data and drawings (records, procedures, maps, schematics, alignment sheets, plot plans, etc.) that provides standard practices for field personnel modification of records/drawings, communication of information, database/drawing revision, and change distribution and use. Recent facility modifications should be noted on station drawings for daily use by station personnel. Station modifications more than 3 months old should be reflected on newly issued station drawings, records, and procedures (including equipment labeling) and noted in a change log.

Vibration monitoring program

As a component of maintenance or as a type of survey, a vibration monitoring program might be appropriate in many stations. The details of a successful vibration monitoring program are highly situation specific. PPM practices should define requirements to prevent excessive vibrations that might shorten the service life of equipment and endanger components subject to increased fatigue loading. Industry practices are based on equipment types, specific equipment vibration history, and general experience. The PPM program should consider susceptibility of equipment and exposed components and specify frequency of monitoring, type of monitoring, types of acceptable corrective actions, types of early warning indicators, etc.
C6. Training

Score this item as described on pages 129-131, with additional considerations as discussed below. For full risk-reduction credit under this variable, a comprehensive and effective job needs analysis (JNA), job task analysis (JTA), or job safety analysis (JSA) effort should document all current station personnel tasks related to design, construction, maintenance, operations, testing, emergency response, and management activities (including contract positions). Current employee skills, tasks, or knowledge that are considered important or required to safely and adequately operate the station should be identified for each task/position and used as the basis for qualification of personnel on each task/position-specific requirement. Key position requirements are outlined and described in a JNA, which is the basis for creating position descriptions. Position descriptions outline primary responsibilities, tasks, authority, communications, training and testing levels, etc. Key job task requirements are outlined and described and can form the basis for creating task-based procedures. Key job safety requirements can be outlined and described as the basis for creating safety-based procedures. There should be no recent history of station position-related problems. All training should be appropriate for the position type (design, operations, maintenance, etc.), effectiveness (completeness, appropriateness, retention, detail, etc.), best practices (industry, company, etc.), method (written, verbal, video, simulator, CBT [computer-based training], OJT [on-the-job training], etc.), and needs. All testing should be consistent with
Risk assessment model 13/271
the training being conducted, and clear task/position qualification objectives, testing methods, minimum requirements, and refresher requirements should be documented as part of an overall company personnel qualification program. Several layers of training and testing may need to be in place to cover general corporate policies, standard practices, station-specific procedures, and detailed job task recommended practices. Many personnel training and testing details could be listed to provide guidelines for "what makes an excellent qualifications program," but this is outside the scope of this book. A formal MOC process should be in place that identifies personnel qualification-related changes that may affect the qualifications program and provides adequacy review guidelines (see below). A formal written process should exist that provides best practices for field personnel's modification of local qualification requirements, including task/position changes, communication of changes, and change distribution and implementation. Recent program changes should be incorporated into company practices, procedures, and documents for daily use by station personnel. Program changes more than 3 months old should be reflected in newly issued program documents accompanied by a change log.

C7. Mechanical error preventers

This variable is fully described on pages 131-132. As a means of reducing human error potential and enhancing operations control, computer permissives are routines established in local logic controllers (field computers) or central host computers (see the earlier discussion of SCADA systems). These routines help to ensure that unsafe or improper actions, including improper sequencing of actions, cannot be performed. They are most often employed in complicated, multistep procedures such as station starts and stops and pump line-ups.
Also in this category are control functions that cover more complex routines to interpret raw data and take actions when preset tolerances are exceeded. Examples of computer permissives include routines that prevent a pump from starting when the discharge valve is closed, delay a pump shutdown until a control valve has reached a certain position, open a bypass valve when a surge is detected, and automatically start or stop additional pumps when flow and pressure conditions are correct.
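The permissive concept can be illustrated with a short sketch. This is not from the book; the tag names, limits, and structure below are hypothetical, and real permissives are implemented in the station's local logic controllers or SCADA host rather than in Python.

```python
# Hypothetical sketch of a computer "permissive" for a pump start. Tag names
# and limits are invented for illustration only.

def pump_start_permitted(status: dict) -> tuple:
    """Return (permitted, blocking_reasons) for a pump start request."""
    blocks = []
    if not status.get("discharge_valve_open", False):
        blocks.append("discharge valve closed")    # avoid starting against a closed discharge
    if status.get("suction_pressure_psi", 0.0) < 20.0:
        blocks.append("suction pressure too low")  # assumed cavitation limit
    if status.get("esd_active", True):
        blocks.append("emergency shutdown active") # fail-safe default: assume ESD until told otherwise
    return (len(blocks) == 0, blocks)

# A start request with the discharge valve closed is refused:
ok, why = pump_start_permitted(
    {"discharge_valve_open": False, "suction_pressure_psi": 45.0, "esd_active": False}
)
```

A real system would also sequence actions (for example, delaying a shutdown until a control valve reaches position), but the pattern of checking every condition and reporting every block is the same.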
D. Maintenance

As in the pipe-only assessment, a low score in maintenance should cast doubt on the adequacy of any safety system that relies on equipment operation. Because features such as overpressure protection and tank overfill protection are critical aspects of a station facility, maintenance of pressure control devices and safety systems is critical. Score the maintenance practices for documentation, schedule, and procedures as described on page 132. Whereas some regulations mandate inspection and calibration frequencies for certain safety devices, it is common industry practice to perform regular PPM activities on all "critical instruments." The term critical instrument should be defined, and all devices so labeled should be identified and placed on a special, formal PPM program. Commonly, pressure relief valves, rupture disks, and certain pressure, temperature, and flow sensors and switches are considered critical devices,
depending on the consequences of their failure to perform as designed. Where reliance is placed on another company's safety system, risk is increased. The extra risk can be partially reduced to the extent that witnessing of the other company's PPM activities takes place.

Antifreeze program

In many regions, freeze prevention is a critical part of failure avoidance. This can be added to the risk assessment when appropriate. For maximum risk-reduction credit, each potential "dead space" that can be exposed to product and subzero ambient temperatures should be on a seasonal or annual "antifreeze" maintenance program that includes identifying all potential equipment, component, piping, tubing, or sump areas where water can collect and freeze, causing material stresses, cracks, or failures. Examples of practices to prevent freeze problems include the following:
To protect station sensing tubing/pots, an appropriate antifreeze solution is injected every fall where facilities are vulnerable.
To protect station piping, low spots are removed or pigged and dead legs are flushed periodically during cold weather.
Station valve stems and lubrication tubing are injected with low-temperature grease each fall.
Pump drains and sumps are periodically flushed during cold weather, heat traced where aboveground, or buried below grade.

The risk evaluator should look for a comprehensive and effective "antifreeze" effort that is incorporated into the station PPM program. Specific facility design, maintenance, and operations procedures should also exist and be maintained to cover all program requirements. A formal MOC process should be in place that identifies facility conditions or design-related changes that may affect the antifreeze program and provides adequacy review guidelines (see below). There should be no recent history of equipment/material freeze-related problems.
Leak impact factor

The potential consequences of a station spill or release can be assessed in the general way described in Chapter 7. This involves assessment of the following consequence components:

Product hazard
Spill size
Dispersion
Receptors

Where special considerations for stations are warranted, they are discussed here. In most modern hydrocarbon pipeline stations, a leak of any significant size would be cause for immediate action. Gaseous product pipeline stations typically control compressor or pressure relief discharges by venting the gas through a vent stack within the station. In the case of high-pressure/volume releases, large-diameter flare stacks (with a piloted ignition flame) combust vented gases into the atmosphere. Gas facilities are normally leak checked periodically and remotely monitored for equipment or piping leaks.
13/272 Stations and Surface Facilities
Liquid stations often have several levels of leak monitoring systems (e.g., relief device, tank overfill, tank bottom, seal piping, and sump float sensors/alarms), operations systems (e.g., SCADA, flow-balancing algorithms), secondary containment (e.g., seal leak piping, collection sumps, equipment pad drains, tank berms, stormwater controls), and emergency response actions. Therefore, small liquid station equipment-related leaks are normally detected, and corrective actions taken, before they can progress into large leaks. If redundant safety systems fail, larger incorrect-operations-related spills are typically detected quickly and contained within station berms. In some cases, stormwater is gathered and sampled for hydrocarbon contamination prior to discharge. Note that the chronic component of the product hazard is often enhanced where a leaking liquid can accumulate under station facilities.
Product hazard

As with a pipeline failure on the ROW, a station product release can present several hazards. The fire hazard scenarios of concern for all hydrocarbon product types at station facilities include the following:

Fireball: where a gaseous fluid is released from a high-pressure vessel, usually engulfed in flames, and violently explodes, creating a large fireball with the generation of intense radiant heat. Also referred to as a boiling liquid expanding vapor explosion (BLEVE) episode.
Liquid pool fire: where a pool of product (HVLs and liquids) forms, ignites, and creates a direct and radiant heat hazard.
Vapor cloud fire/explosion: where a product (gases, liquefied gases, and HVLs) vapor cloud encounters an ignition source, causing the entire cloud to combust as air and fuel are drawn together in a flash fire. This is not an expected fire scenario for crude oil and most refined products that remain in a liquid state.
Flame jet: where an ignited stream of product (gases, liquefied gases, HVLs, and liquids) leaving a pressurized vessel or pipe creates a long horizontal-to-vertical flame jet with associated radiant heat hazards and the possibility of direct impingement of flame on other nearby equipment.
Contamination: can cause soil, groundwater, surface water, and environmental damage due to spilled product.

As a measure of increased exposure due to increased quantities of flammable or unstable materials, an energy factor can be included as part of the product hazard or the potential spill size. This will distinguish facilities that are storing volumes of higher-energy products that could lead to more extensive damages. The heat of combustion, Hc (Btu/lb), is a candidate measure of energy content. Another product characteristic that can be used to measure energy content is the boiling point. The boiling point is a readily available property that correlates reasonably well with specific heat ratios and hence burning velocity.
This allows relative consequence comparisons, since burning velocity is related to fire size, duration, and radiant heat levels (emissive power) for both pool fires and torches. The energy factor can be multiplied by the pounds of product contained to set up an energy-content adjustment scale to modify the LIF.
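As a hedged illustration of the energy-factor idea, the sketch below multiplies an assumed heat of combustion by the pounds of product contained. The Hc values are rough, round figures chosen only for this sketch; an actual assessment would use the evaluator's own product data.

```python
# Illustrative energy-content adjustment: heat of combustion (Btu/lb) times
# pounds of product contained gives a relative energy score that can scale
# the leak impact factor. Values below are assumed approximations.

HC_BTU_PER_LB = {
    "propane": 19900,    # assumed approximate value
    "gasoline": 18700,   # assumed approximate value
    "crude_oil": 18300,  # assumed approximate value
}

def energy_factor(product: str, pounds_contained: float) -> float:
    """Total contained combustion energy (Btu), a relative exposure measure."""
    return HC_BTU_PER_LB[product] * pounds_contained

# A vessel holding 100,000 lb of propane versus the same mass of crude oil:
propane_energy = energy_factor("propane", 100_000)
crude_energy = energy_factor("crude_oil", 100_000)
```

Because the comparison is relative, only the ratio between facilities matters, which is consistent with the scoring approach used throughout this assessment.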
Spill size

A spill or leak size in any scenario is a function of many factors, such as the failure mechanism, facility design, product characteristics, and surrounding environment. Smaller leak rates tend to occur due to corrosion (pinholes) or design (mechanical connections) failure modes. The most damaging leaks at station facilities may be small leaks persisting below detection levels for long periods of time. Larger leak rates tend to occur under catastrophic failures such as external force (e.g., equipment impact, ground movement) and avalanche crack failures. There may be little advantage in directly correlating a wide range of possible leak sizes with specific failure modes in a risk assessment. Up to the maximum station facility volume, almost any size leak is possible in any facility component. The potential leak volume and leak rate must both be considered in modeling potential spill size. Certain station spill sizes are volume dependent, more so than leak rate dependent. Spills from catastrophic vessel failures, or failures of any isolated station component, such as failure of an overfilled liquid storage tank, reach a size dependent on the volume of product contained in the vessel or component. Such spill events are not appropriately measured by leak rates, because the entire volume of a vessel can be released within seconds. Human error spills can often involve immediate loss of limited volumes of product. Leak rate is important since higher rates of release can cause more spread of hazardous product (more acute impacts), whereas lower rates are influenced by detectability (more chronic impacts). Leaked volume, as a function of leak rate, leak detection, reaction time, and facility capacity, adds to the vulnerability of receptors through normally wider spreading and increases the associated costs. Two effective spill volumes therefore come into consideration.
The first is the facility's capacity-dependent leak volume, which represents the catastrophic station spill scenario (Vc). The second is the leak-rate-dependent volume (Vr), which is based on the area under the "leak rate versus time to detect" curve (Figure 7.7). In this graph, "time to detect" includes identification, recognition, reaction, and isolation times. As shown in Figure 7.7, depending on the equation of the curve, volume Vc can quickly become the dominant consideration as product containment size increases, but volume Vr becomes dominant as smaller leaks continue for long periods. The shape of this curve is logically asymptotic to each axis, since some leak rate level is never detectable and an instant release of a large volume approaches an infinite leak rate. Because leak detection is equally valuable in smaller facility containment volumes as in larger ones, it is not practical to directly combine Vc with Vr for a station risk assessment. A simple combination will always point to higher-volume containment as warranting more risk mitigation than smaller containments, a premise that is not always correct. Some mathematical relationship can instead be used to amplify the leak-rate-dependent volume to provide the desired sensitivity and balance. The amplification factor inflates the influence of small-leak detection, since smaller leaks tend to be more prevalent and can also be very consequential. With this provision, the model can more realistically represent the negative impact of such leaks, which far exceeds the impact predicted by a simple proportion to leak rate. For example, a 1 gal/day leak detected after 100 days is often far worse than a 100 gal/day leak rate
detected in 1 day, even though the same amount of product is spilled in either case. Unknown and complex interactions between small spills, subsurface transport, and groundwater contamination, as well as the increased ground transport opportunity, account for the increased chronic hazard. One application of such an amplification factor established an equivalency by saying that a 200,000-barrel (bbl) containment area with very good leak detection capabilities is roughly equivalent, from a risk perspective, to a 500-bbl containment area with very poor leak detection capabilities. The larger containment area has a greater potential leak volume due to its larger stored volume, but either can produce a smaller, yet consequential, leak. Making these two scenarios equivalent emphasizes the importance of leak detection capabilities and limits the "penalty" associated with higher storage volumes. This equivalency seems reasonable, although any ratio will suit the purposes of a relative assessment. With a desired amplification factor fixed, various combinations of containment volume and leak detection capabilities can be assessed, used to produce spill scores, and then compared on a relative basis. Improvements to the spill score are made by reducing the product containment volume in the case of volume-dependent spills, and by reducing the source (e.g., pressure, density, head, hole size, time to detect) in the case of rate-dependent spills. Note that improvements in leak detection also effectively reduce the source in the leak-rate-dependent case. In assessing station leak detection capabilities, all opportunities to detect can be considered.
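One way to sketch the amplification idea is shown below. The power-law form and the exponent are assumptions invented for illustration; as the text notes, any mathematical relationship that provides the desired sensitivity and balance can serve in a relative assessment.

```python
# Sketch of an amplified, leak-rate-dependent spill volume. The functional
# form and the default exponent are illustrative assumptions only.

def rate_dependent_volume(leak_rate_bbl_day: float, days_to_detect: float,
                          amplification: float = 1.5) -> float:
    """Leak-rate-dependent volume, inflated so that small, long-lived leaks
    score worse than simple (rate x time) proportionality would suggest."""
    spilled = leak_rate_bbl_day * days_to_detect
    # extra weight grows with time-to-detect, amplifying chronic small leaks
    return spilled * (days_to_detect ** (amplification - 1.0))

# Both cases spill 100 bbl, but the slow leak scores far worse:
slow = rate_dependent_volume(1, 100)   # 1 bbl/day found after 100 days
fast = rate_dependent_volume(100, 1)   # 100 bbl/day found in 1 day
```

Tuning the exponent is how an evaluator would fix an equivalency such as the 200,000-bbl versus 500-bbl example discussed above.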
Therefore, leak detection systems that can be evaluated are shown in Table 13.5. The times to detect various leak volumes (T1 through T1000 in Table 13.5, representing volumes from 1 bbl to 1000 bbl of spilled product, and defined in Table 7.13) can be estimated to produce a leak detection curve similar to Figure 7.7 for each type of leak detection, as well as for the combined capabilities at the station. The second column, reaction time, is an estimate of how long it would take to isolate and contain the leak after detection. This recognizes that some leak detection opportunities, such as 24-7 staffing of a station, provide for more immediate reactions compared to patrol or off-site SCADA monitoring. This can be factored into assessments that place values on various leak detection methodologies.
Station staffing

As an opportunity to detect and react to a leak, the staffing level of a facility can be evaluated by the following relationship:

Opportunity to detect = (inspection hours) + (happenstance detection)

where

Inspection hour = an inspection that occurs within each hour
Happenstance detection = 50% of manned time per week
In this relationship, it is assumed that station personnel would have a 50% chance of detecting any size leak while they were on site. This is of course a simplification, since some leaks would not be detectable and others (larger in size) would be 100% detectable by sound, sight, or odor. Additional factors ignored in the interest of simplicity include training, thoroughness of inspection, and product characteristics that assist in detectability. An alternate approach to evaluating the staffing level as it relates to detection is to consider the maximum interval in which the station is unmanned:

Worst case = maximum interval unobserved
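The two staffing measures can be sketched as follows, using the stated assumptions (happenstance detection credited at 50% of manned time per week). The function names are illustrative.

```python
# Sketch of the two staffing measures: "opportunity to detect" and the merged
# ratio method. Assumptions follow the text; names are illustrative.

def opportunity_hours(inspection_hr_per_wk: float, manned_hr_per_wk: float) -> float:
    """'Opportunity to detect' measure, in opportunity-hours per week:
    inspection hours plus happenstance detection at 50% of manned time."""
    return inspection_hr_per_wk + 0.5 * manned_hr_per_wk

def detection_ratio(max_unobserved_hr: float, opportunity_hr: float) -> float:
    """Merged measure: (maximum unobserved interval) / (opportunity)."""
    return max_unobserved_hr / opportunity_hr

# 7 x 24 staffing with rounds every 2 hr: 84 inspection hours, 168 manned hours
full_time = opportunity_hours(84, 168)   # 168 opportunity-hours
# A single weekly 2-hr visit spent inspecting:
weekly = opportunity_hours(2, 2)         # 3 opportunity-hours
```

These are the same figures that appear in the worked staffing tables in this section, so the sketch can be checked against them directly.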
Examples of evaluating various staffing protocols using the two techniques are shown in Table 13.6. The last column shows the results of a "maximum interval unobserved" calculation, while the next-to-last column shows the "opportunity to detect" calculation. The maximum unobserved interval method is simple, but it appears worthwhile to also consider the slightly more complicated "opportunity" method, since the "max interval" method ignores the benefit of actions taken while a station is manned, that is, while performing formal inspections of station equipment (rounds). The "opportunity" method, while providing similar relative scores, also shows benefits that more closely agree with the belief that directed attention during episodes of occupancy (performing inspection rounds) is valuable.
Table 13.5 Leak detection opportunities

(Columns: Leak detection system | Reaction time | T1 | T10 | T100 | T1000. The time-to-detect and reaction-time cells are estimated by the evaluator for each system.)

7 x 24 staffing with formal, scheduled "rounds"
5 x 8 staffing with formal, scheduled rounds
7 x 24 staffing, no formal rounds
5 x 8 staffing, no formal rounds
Other staffing combinations
Occasional site visits (weekly)
Mass balance for facility
Mass balance for station
Pressure point analysis
Acoustic monitoring
SCADA real-time monitoring
Groundwater monitoring
Surface drain system (monitored)
Soil vapor monitoring
Passerby reporting
Table 13.6 Station staffing for leak detection

Field operations and maintenance staffing | Hours per week on site | Inspection hours per week | Happenstance hours | Opportunity hours | Max interval hours
7 days per week x 24 hours per day, with rounds (every 2 hr) | 168 | 84 | 50% x 168 | 168 | 2
5 x 8, with rounds (2 hr) | 40 | 20 | 50% x 40 | 40 | 60
7 x 24, no rounds | 168 | 0 | 50% x 168 | 84 | 10 (est.)
5 x 8, no rounds | 40 | 0 | 50% x 40 | 20 | 60
Once/week, 2 hr on site | 2 | 2 | 50% x 2 | 3 | 166

Note: Partial credit for remote surveillance can also be included in this scheme.
A drawback of the "opportunity" scheme is its inability to show a preference for a 1 hour per day x 5 days per week staffing protocol over a 5 hours x 1 day per week protocol, even though most would intuitively believe the former to be more effective. To obtain the best results, the two methods are merged through the use of a ratio, (maximum unobserved interval) / (opportunity), which is applied to detection sensitivities expressed in opportunity-hours. Staffing levels from Table 13.6 are converted to leak detection capabilities (scores) using detection sensitivity and opportunity assumptions and are shown in Table 13.7. Detection sensitivity assumptions are as follows:

1. A leak rate of 1000 gal/day is detected on the first opportunity-hour (immediately).
2. A leak rate of 100 gal/day is detected on the 10th opportunity-hour (100 gal/day leak rates have a 10% probability of detection during any hour).
3. A leak rate of 10 gal/day is detected on the 50th opportunity-hour (a 2% chance of detection during any hour).
4. A leak rate of 1 gal/day is detected on the 100th opportunity-hour (a 1% probability of detection during any hour).

In the example shown in Table 13.7, a leak detection score for each spill volume is calculated for various staffing scenarios. Higher numbers represent longer relative times to detect the spill volume indicated. A 7 x 24 staffing arrangement, with formal inspection rounds, has leak detection capabilities several orders of magnitude better than a weekly station visit in this example. The important message from this exercise is that various station staffing scenarios can be evaluated in terms of their leak detection contributions, and those contributions can be part of the overall risk assessment. Staffing, as a means of leak detection, is seen to supplement and partially overlap any other means of leak detection that might be present. As such, the staffing-level leak detection can be combined with other types of leak detection. The combination is not a straight summation, because the benefit is normally more a redundancy than an increased sensitivity. For example, the combination can be done by taking the best value (the smallest leak quantity, as set by the best leak detection system) from among the parallel leak detection systems, improving that number by 50% of the next best value, and then adding back in the difference between the two. This recognizes the benefit of a secondary system that is as good or almost as good as the first line of defense, with diminishing benefit as the secondary system is less effective. No credit is given for additional parallel systems beyond the second level, and the primary spill score is never worsened by this calculation. For example, a leak detection system with a spill quantity of 3000 bbl is supplemented by a staffing level that equates to a leak detection capability of 2000 bbl. When
Table 13.7 Example station staffing leak detection capabilities

Leak rate detection scores are calculated as (ratio) x (assumed detection sensitivity, in opportunity-hours before detection): 1000 gal/day = 1; 100 gal/day = 10; 10 gal/day = 50; 1 gal/day = 100.

Staffing scenario | Opportunity* (hr) | Maximum unobserved time* (hr) | Ratio | 1 gal/day (100) | 10 gal/day (50) | 100 gal/day (10) | 1000 gal/day (1)
7 x 24, with rounds | 168 | 2 | 0.01 | 1 | 0.5 | 0.1 | 0.01
7 x 24 | 84 | 10 | 0.11 | 11 | 6 | 1.1 | 0.11
5 x 8, with rounds | 40 | 60 | 1.5 | 150 | 75 | 15 | 1.5
5 x 8 | 20 | 60 | 3.0 | 300 | 150 | 30 | 3.0
Weekly | 3 | 166 | 55.3 | 5530 | 2765 | 553 | 55.3

*See Table 13.6.
both of these "systems" are employed, the spill quantity to be used in the model is 2000 bbl - [1/2 x (3000 bbl)] + (3000 - 2000) = 1500 bbl. If the first spill volume is 4500 bbl, then the model value is 2000 - [1/2 x (4500 bbl)] + (4500 - 2000) = 2000 bbl; the unadjusted result of 2250 bbl would worsen the primary score, and the primary score should not be worsened by this exercise. The value of 50% is rather arbitrary, as is the mathematical relationship used, and can be replaced by any value or scoring approach more suitable to the evaluator. Consistency is more critical than the absolute value in this case. Recall that "penalties," in the form of increased surface area, are also assigned to portions of the facility that are hidden from view (buried) and therefore offer less opportunity for leak detection by some methods.

Added to the detection time is the reaction time, generally defined as the amount of additional time that will probably elapse between a strong leak indication and the isolation of the leaking facility (including drain-down time). Here, consideration is given to automatic operations, remote operations, proximity of shutdown devices, etc. As a simple way to account for various reaction times in the aforementioned scoring protocols, the following rationale can be used: a spill volume equal to (a leak rate of 1000 gal/day) x (the most probable reaction time) is added to the original spill volume. Benefits of remote and automatic operations, as well as staffing levels, are captured here. This is thought to fairly represent the value of reaction time. Of course, for a large leak this value is probably understated, and for a small leak it is probably overstated; but over the range of model uses, and for a relative assessment, this approach might be appropriate.
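The parallel-system combination rule worked in the examples above can be sketched as follows. The function name is illustrative, and the 50% factor is the admittedly arbitrary value the text describes.

```python
# Sketch of the parallel leak detection combination rule: take the best
# (smallest detectable spill quantity), improve it by 50% of the next best
# value, add back the difference between the two, and never let the result
# exceed the primary value.

def combine_leak_detection(system_a_bbl: float, system_b_bbl: float) -> float:
    best, second = sorted([system_a_bbl, system_b_bbl])
    combined = best - 0.5 * second + (second - best)
    return min(best, combined)   # the primary score is never worsened

first = combine_leak_detection(3000, 2000)   # the 1500-bbl case above
second = combine_leak_detection(4500, 2000)  # capped at the primary 2000 bbl
```

As the text emphasizes, consistency matters more than the particular constant, so an evaluator could swap in a different factor or formula without changing the relative nature of the assessment.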
In one application of a methodology similar to the one outlined here, a sensitivity analysis showed that changing leak detection and reaction capabilities from 5,000 to 10,000 gallons changed the spill score, and also the overall risk, by 2 to 3%. This seemed reasonable for the resolution level of that risk assessment. In a situation where the spill score is less dominated by the leak-volume component of the calculation, and/or where the range of the spill calculation is smaller, the impact on the spill score and the risk would be greater.

Secondary containment

With any spill size scenario, the presence of secondary containment can be considered an opportunity to reduce (or eliminate) the "area of opportunity" for consequences to occur. Secondary containment must be evaluated in terms of its ability to:
Contain the majority of all foreseeable spills.
Contain 100% of a potential spill plus firewater, debris, or other volume reducers that might compete for containment space; the largest tank's contents plus 30 minutes of maximum firewater flow is sometimes used [26].
Contain spilled volumes safely, without exposing additional equipment to hazards.
Contain spills until removal can be effected, with no leaks.
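Where the containment credit is expressed numerically, as in the percentage-based approach and the sample adjustment factors of Tables 13.8 and 13.9, the calculation can be sketched as below. The dictionary keys, the clamp to zero, and the 90% cap (taken from the application cited in this section) are illustrative assumptions.

```python
# Sketch of a percentage-based secondary containment credit: spill-size
# reduction = (containment %) minus applicable adjustment factors, capped
# at 90%. Names and clamping behavior are illustrative assumptions.

ADJUSTMENTS_PCT = {
    "impervious_liner": 15,
    "semipervious_liner": 40,
    "no_fill_indication": 5,
    "no_overflow_alarms": 5,
    "extra_equipment_exposed": 10,
}

def spill_size_reduction(containment_pct: float, conditions: list,
                         cap: float = 90.0) -> float:
    """Percentage by which the modeled spill size can be reduced."""
    reduction = containment_pct - sum(ADJUSTMENTS_PCT[c] for c in conditions)
    return max(0.0, min(cap, reduction))

# 125% containment with an impervious dike but no fill/overflow alarms:
# 125 - (15 + 5 + 5) = 100, then the 90% cap applies.
credit = spill_size_reduction(
    125, ["impervious_liner", "no_fill_indication", "no_overflow_alarms"]
)
```

The cap prevents the model from ever treating any containment as eliminating spill consequences entirely.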
Note that ease of cleanup of the containment area is a secondary consideration (business risk). Risk is reduced as secondary containment improves. The risk "credit" can be in the form of a reduced spill size rating, or, when secondary containment is present, it can be evaluated as an independent variable assessing the dispersion potential. In the case of the former, the greater the protection offered by secondary containment, the smaller the spill size to be used in modeling spill consequences:

Spill size reduction percentage = (secondary containment %) - (adjustment factor)

where

Secondary containment % = portion of total facility volume that can be held
Adjustment factor = the sum of all conditions in Table 13.8 that apply to the secondary containment, up to the value of the secondary containment %. In this table, all items except the first are detractors from secondary containment effectiveness.

Limited secondary containments, such as pump seal vessels and sumps, are designed to capture specific leaks. As such, they provide risk reduction for only a limited range of scenarios. Risk reduction credit can be given for secondary containment in proportion to the size of the effective area it protects. Using this approach in one recent application, the credit was capped at a maximum of 90%, regardless of the mathematical results, as shown in Table 13.9.

Table 13.8 Secondary containment sample adjustment factors

Condition | Adjustment factor (%)
Impervious liner | 15
Semipervious liner | 40
No immediate fill indication | 5
No overflow alarms | 5
Additional equipment exposed to spilled product | 10

V. Modeling ideas I

Dow Chemical Company's Fire and Explosion Index [26] is a well-regarded loss estimation system for process plants. It is an indexing-type assessment used for estimating the damage that would probably result from an incident in a chemical process plant. The F&EI system is not designed for public safety evaluations or environmental risk assessments, but it provides some useful concepts that can be used in such assessments. The process plant incidents addressed in this evaluation include:

A blast wave or deflagration
Fire exposure
Missile impact
Other releases as secondary events

The secondary events become more significant as the type and storage amounts of other hazardous materials increase. The F&EI is directly related to the area of exposure. In performing F&EI calculations, the nature of the threat is assessed by examining three components: a material safety
Table 13.9 Secondary containment sample credit

Type of secondary containment | Facility coverage (%) | Adjustments | Spill size reduction (%)
125% facility containment (containment holds 25% more volume than tank volume); impervious dike for single tank | 125 | 15 impervious liner + 10 no fill or overflow alarms | 125 - 25 = 100; 90% cap applies
Double-walled tank, with alarms | 100 | 15 | 100 - 15 = 85
100% facility containment; impervious dike; alarms | 100 | 15 | 100 - 15 = 85
75% facility containment; impervious dike; alarms | 75 | 15 | 75 - 15 = 60
100% facility containment; semipervious dike, shared with other tanks | 100 | 40 liner + 10 additional exposures + 10 no alarms | 100 - 60 = 40
Pump sump, 50% of facility volume | 50 | 10 + 10 | 50 - 20 = 30
Pump seal vessel, leak detection alarm via SCADA, effective surface area ratio = 100/1000 ft2 | 10% of effective area | NA | 10
None | 0 | NA | 0
factor, general process hazards, and special process hazards. A material safety factor is first calculated as a measure of the "intrinsic rate of potential energy release from fire or explosion produced by combustion or other chemical reaction." It uses the same NFPA factors for flammability (Nf) and reactivity (Nr) that are used in the relative risk model and described in Chapter 7. The general process hazards are aspects thought to play a significant role in the potential magnitude of a loss.

General Process Hazards
Exothermic chemical reactions.
Endothermic processes.
Materials handling and transfer. Adds risk factors for loading, unloading, and warehousing of materials.
Enclosed or indoor process units. Adds risk factors for enclosed or partially enclosed processes, since the lack of free ventilation can increase damage potential. Credit for effective mechanical ventilation is provided.
Access. Consideration is given to ease of access to the process unit by emergency equipment.
Drainage and spill control. Adds risk factors for situations where large spills could be contained around process equipment instead of being safely drained away. This factor requires calculation of process capacity and containment capacity. For highly volatile materials such as those considered in this study, this factor is not significant.

The special process hazards are thought to play a significant role in the probability of a loss.
Special Process Hazards
Toxic materials. Insofar as toxic materials can complicate an emergency response, their presence, based on the NFPA Nh factor, is considered here.
Subatmospheric pressure. Adds risk factors when the introduction of air into a process is a hazard possibility.
Operation in/near flammable range. Adds risk factors when air can be introduced into the process to create a mixture in a flammable range. Considers the ease with which the flammable mixture is achieved.
Dust explosion.
Relief pressure. Adds risk factors dependent on the pressure level of the process. Equipment maintenance and design become more critical at elevated pressures, because spill potential greatly increases in such a situation.
Low temperature. Adds risk factors when temperature-related brittleness of materials is a potential concern.
Quantity of flammable materials. Adds risk factors based on the quantities of materials in the process, in storage outside the process area, and combustible solids in the process.
Corrosion and erosion. Considers the corrosion rate as the sum of external and internal corrosion.
Leakage. Adds risk factors where minor leaks around joints, packing, glands, etc., can present an increased hazard. Considers thermal cycling as a factor.
Use of fired heaters. Historically problematic equipment.
Hot oil exchange systems. Historically problematic equipment.
Hot rotating equipment. Historically problematic equipment. Adds risk factors for rotating equipment, contingent on the horsepower.

The general process and special process hazards are combined with the material safety factor to generate the F&EI score. The F&EI score can then be used to estimate hazard areas and magnitudes of loss. In making such estimates, the evaluator takes credit for any plant features that would reasonably be expected to reduce the loss. Loss reduction can be accomplished by either reducing or controlling the potential consequences. These loss control credit factors are selected based on the contribution they are thought to actually make in a loss episode. The three categories of loss control credit factors are (1) process control, (2) material isolation, and (3) fire protection. In Table 13.10, the items evaluated within each category are listed along with some possible "credit percentages" that could be used to reduce the potential loss amount.
This table suggests that these factors, if all applied together, can reduce the maximum probable damage by a large amount. The loss control credit factors do not impact the F&EI score; they only impact the estimated losses arising from an episode.
Table 13.10 Maximum probable property damage reduction factors

Property damage reduction factor                      Credit multiplier

Process Control Factors
Emergency power                                       0.98
Cooling                                               0.97
Explosion control                                     0.84
Emergency shutdown                                    0.96
Computer control                                      0.94
Inert gas                                             0.91
Operating instructions/procedures                     0.91
Reactive chemical review
  (can substitute "risk management program")          0.91
TOTAL impact of process control factors               54%

Material Isolation
Remote control valves                                 0.96
Dump/blowdown                                         0.96
Drainage                                              0.91
Interlock                                             0.98
TOTAL impact of material isolation factors            82%

Fire Protection
Leak detection                                        0.94
Structural steel                                      0.95
Buried and double-walled tanks                        0.84
Water supply                                          0.94
Special systems                                       0.91
Sprinkler systems                                     0.74
Water curtains                                        0.97
Foam                                                  0.92
Hand extinguishers                                    0.95
Cable protection                                      0.94
TOTAL impact of fire protection factors               38%
Using the maximum credit for every item would reduce the loss to 17% of an uncredited amount (an 83% reduction in potential damages). Of course, to achieve the maximum credit, many expensive systems would need to be installed, including foam systems, water curtains, leak detection, dump/blowdowns, and double-walled tanks. The loss control credits, as originally intended, do not account for secondary containment. The loss control variables shown here are generally applied to spill volumes that have escaped both primary and secondary containment. They can also be applied when they minimize the product hazard during secondary containment (before cleanup). Table 13.10 is for illustration of the approach only. The evaluator would need to define the parameters under which credit could be awarded for each of these, and the percentage loss reduction may not be appropriate in all cases. Within station limits, the drainage of spills away from other equipment is important. A slope of at least 2% (1% on hard surfaces) to a safe impoundment area of sufficient volume is seen as adequate. Details regarding other factors can be found in Ref. [26].
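The credit arithmetic can be checked directly: each category total in Table 13.10 is the product of its individual multipliers, and the three category totals multiply together to give the overall loss fraction. A minimal sketch, assuming simple multiplicative combination of the credits shown in the table:

```python
from math import prod

# Credit multipliers from Table 13.10 (maximum credit taken for every item)
process_control = [0.98, 0.97, 0.84, 0.96, 0.94, 0.91, 0.91, 0.91]
material_isolation = [0.96, 0.96, 0.91, 0.98]
fire_protection = [0.94, 0.95, 0.84, 0.94, 0.91, 0.74, 0.97, 0.92, 0.95, 0.94]

process = prod(process_control)       # ~0.54 -> the 54% category total
isolation = prod(material_isolation)  # ~0.82 -> the 82% category total
fire = prod(fire_protection)          # ~0.38 -> the 38% category total

overall = process * isolation * fire  # ~0.17 -> loss reduced to 17%
```

Multiplying the three category totals reproduces the 17% figure cited in the text (an 83% reduction in potential damages).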
VI. Modeling ideas II

Another possible scoring algorithm that has been recommended by an operator of natural gas station facilities is shown below. This shows factors, called risk drivers, that were determined to be critical risk indicators. The relative weightings of the probability and consequence categories are also shown.

[0.27Peq + 0.22Pdd + 0.19Ppc + 0.15Pnc + 0.17Ptp] x [0.4Clp + 0.0Cenv + 0.6Cbus] = total station risk
where:
Peq = Probability of an equipment-related event
Pdd = Probability of a design deficiency-related event
Ppc = Probability of a pipeline contamination-related event
Pnc = Probability of an event related to natural causes
Ptp = Probability of damage by a third party
Clp = Consequence to life or property
Cenv = Consequence to the environment
Cbus = Consequence to business.

This algorithm contains weightings for both probability and consequence factors. For instance, the designer shows that "natural causes" constitutes 15% of the total probability of failure and that 60% of potential consequences are business related. Environmental consequences are assigned a 0 weighting.

The failure probability categories are composed of factors as follows:

Equipment issues: A failure due to the malfunction of a piece of station equipment.

Risk Drivers
Obsolete equipment
Antiquated equipment
Equipment complexity

Design deficiencies: A failure due to a deficiency in design. The deficiency is either a result of improper design or of changes in the operation of the station after construction.

Risk Drivers
Improper capacity
Velocity > 100 fps
Adequacy of filtration
Control loops
Equipment separation
Vaults and lids
Valves
Venting
Manufacturer flaws

Pipeline contaminants: A failure caused by contaminants in the gas stream.

Risk Drivers
Pipeline liquids
Construction debris
Rust scale and sand
Valve grease
Bacteria (internal corrosion)

Employee safety: An injury or accident involving an employee. Note that this factor is not used in the preceding algorithm.
13/278 Stations and Surface Facilities
Risk Drivers
Neighborhood
Ergonomics (workspace, equipment access)
Exposure to hazard (confined space, traffic, environmental exposure)

Natural causes: A failure caused by the forces of nature.

Risk Drivers
Earthquake
Landslide
Stream erosion
Floods
Groundwater
Atmospheric corrosion
Fire
Damage by a third party: A failure caused by damage from third parties.

Risk Drivers
Traffic hazard
Railway hazard
Vandalism
AC electric impacts

Operator error: A failure due to operator error. Note that this factor is not used in the preceding algorithm.

Risk Drivers
Equipment tagging
Station drawings
Clearance procedures
Maintenance instructions
Employee competence
Incident record
Quality of response plans
It appears that this algorithm was designed for future expansion. Several variables are identified and included as "place-holders" in the model but are not yet used in the risk calculations.
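The weighted-sum algorithm above is straightforward to compute. A minimal sketch, assuming each probability and consequence factor is scored on a common 0-to-1 scale (the dictionary key names are illustrative, not from the source):

```python
def total_station_risk(p, c):
    """Weighted station risk per the algorithm above.

    p: probability factor scores (0-1); c: consequence factor scores (0-1).
    """
    probability = (0.27 * p["equipment"]
                   + 0.22 * p["design_deficiency"]
                   + 0.19 * p["pipeline_contamination"]
                   + 0.15 * p["natural_causes"]
                   + 0.17 * p["third_party"])
    consequence = (0.4 * c["life_property"]
                   + 0.0 * c["environment"]   # environment is weighted zero
                   + 0.6 * c["business"])
    return probability * consequence

# Both weight sets sum to 1.0, so a worst-case station (all factors
# at 1.0) scores 1.0.
worst = total_station_risk(
    {"equipment": 1.0, "design_deficiency": 1.0,
     "pipeline_contamination": 1.0, "natural_causes": 1.0,
     "third_party": 1.0},
    {"life_property": 1.0, "environment": 1.0, "business": 1.0})
```

Because the probability weights sum to 1.00 and the consequence weights to 1.0, the result stays on the same 0-to-1 scale as the inputs.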
VII. Modeling ideas III

Here we look at another example of an assessment system for probability of failure within station facilities. In this scheme, higher points mean higher risk, and scores assigned to variables are summed to get category weights. The scoring protocols were unfinished in this example, so weightings do not always sum to 100%. Some variables are left in their abbreviated form, but their meanings should be apparent to the reader experienced with pipeline station facilities.
Design and materials algorithm variables

Table 13.11 lists the design and materials algorithm variables, grouped by category with their relative weightings. Examples of scoring scales for some of the variables in Table 13.11 are provided next.

Table 13.11 Design/materials algorithm variables

Atm-Corr
Atm-Corrosion-Control-Program        40%
Atm-Coating-Adequacy                 30%
Corrosive-Atmospheric-Conditions     30%

Soil-Side-Corr
Facility-Age                         10%
Soil-Agressive                       15%
Corr-Hot-Spot                        20%
Coating                              25%
CP-Syst-Perform                      30%
NDE-Metal-Loss-Insp                  adj

Internal-Corr
Facility-Age                         10%
Internal-Corr-Control-Prog           25%
Product-Corr                         20%
Internal-Coating                     15%
Internal-CP                          10%
NDE-Metal-Loss-Insp                  adj
Static-Liquid-Conditions             20%

Design
Safety-Syst-Adequ-Review             15%
Safety-Syst-PPM                      15%
Material-Cyclic-Stress               10%
Pressure-Test-Stress                 10%
Pressure-Test-Year                   10%
Vibration-Monitoring                 10%
Safety-System-Exceedance             15%
Safety-Syst-Actions                  15%

Human-Error
Housekeeping                         10%
Anti-Freeze-Program                  10%
SCADA-System                         20%
Documentation-Prog                   20%
Critical-Equip-Security              20%
Computer-Permissives                 20%

Outside-Force
Security-Detection-Systems           15%
Lighting-Systems                      5%
Protective-Barriers                  20%
Severe-Weather                       15%
Ground-Movements                     15%
Traffic-Damage                       15%
Station-Activity-Level               15%
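The source does not spell out how the Table 13.11 weights combine with the 0-10 variable scores; a natural reading is a weighted sum within each category. A sketch under that assumption, using the Atm-Corr weights shown in the table (the score values in the example are arbitrary):

```python
def category_score(scores, weights):
    """Weighted 0-10 category score.

    Assumes the percentage weights within a category act as a weighted
    average of the variable point scores -- an interpretation, since the
    source leaves the combination rule unstated.
    """
    return sum(scores[v] * weights[v] for v in weights)

atm_corr_weights = {
    "Atm-Corrosion-Control-Program": 0.40,
    "Atm-Coating-Adequacy": 0.30,
    "Corrosive-Atmospheric-Conditions": 0.30,
}
atm_corr = category_score(
    {"Atm-Corrosion-Control-Program": 2.0,       # strong program
     "Atm-Coating-Adequacy": 6.0,                # moderate coating
     "Corrosive-Atmospheric-Conditions": 10.0},  # severe atmosphere
    atm_corr_weights)
# 0.40*2.0 + 0.30*6.0 + 0.30*10.0 = 5.6
```

Because the weights sum to 100%, the category score stays on the same 0-10 scale as the individual variables.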
Material susceptibility

[Material Operating Stress]- Evaluation of various in-service material stress levels by comparing the maximum operating pressure (MOP) to the maximum design pressure (MDP). Expressed as a percentage: (MOP/MDP x 100%).
0.0 pts [Not Applicable]
2.0 pts [MOP <24% of SMYS]-Low operating stress level
4.0 pts [MOP 24% to 48% of SMYS]-Moderate operating stress level
6.0 pts [MOP 48% to 72% of SMYS]-High operating stress level
10.0 pts [MOP >72% of SMYS]-Very high operating stress level
5.0 pts [Unknown Operating Stress]

[Material Ductility]- Evaluation of various in-service materials' ductile properties.

0.0 pts [Not Applicable]
2.0 pts [High Ductility]-Material ductility is >=32 ft-lb
4.0 pts [Moderate Ductility]-Material ductility is 10-32 ft-lb
10.0 pts [Low Ductility]-Material ductility is <10 ft-lb
5.0 pts [Unknown Ductility]
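Scales like the operating-stress scale above translate directly into lookup logic. A sketch (the band boundaries come from the point list; the scale states the bands as "<24", "24 to 48", "48 to 72", and ">72", so the shared endpoint at 48 is assigned to the higher band here as an assumption):

```python
def operating_stress_points(mop_pct_smys=None):
    """Points for [Material Operating Stress], given MOP as a % of SMYS.

    None means the operating stress is unknown.  The endpoint 48 is
    assigned to the high band (the source scale overlaps at 48).
    """
    if mop_pct_smys is None:
        return 5.0           # [Unknown Operating Stress]
    if mop_pct_smys < 24:
        return 2.0           # low operating stress level
    if mop_pct_smys < 48:
        return 4.0           # moderate operating stress level
    if mop_pct_smys <= 72:
        return 6.0           # high operating stress level
    return 10.0              # very high operating stress level
```

The same structure works for the ductility and pressure-test scales, with different thresholds and point values.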
[Material Cyclic Stress]- Evaluation of various in-service materials' frequency, duration, level, and location of cyclic stresses, including severe pump starts/stops, pressure cycles, fill cycles, traffic loadings, etc.

0.0 pts [Not Applicable]
2.0 pts [Low Cyclic Stress]-Material is subjected to low cycle frequency (number of events/time), short cycle duration (time), low cycle magnitude (condition change amount), and/or distant proximity from source (feet)
4.0 pts [Moderate Cyclic Stress]-Material is subjected to moderate cycle frequency, moderate cycle duration, moderate cycle magnitude, and/or moderate proximity from source
10.0 pts [High Cyclic Stress]-Material is subjected to high cycle frequency, long cycle duration, high cycle magnitude, and/or close proximity from source
5.0 pts [Unknown Cyclic Stress]

[Material Vibration]- Evaluation of various in-service equipment/materials' frequency, duration, level, and location of vibration stresses from various sources, including pumps, rotating equipment, wind, throttling valves, surges, temperature changes, ground movements, traffic, etc.

0.0 pts [Not Applicable]
2.0 pts [Low Vibration Stress]-Material is subjected to low vibration frequency (# events/time), short vibration duration (time), low vibration magnitude (condition change amount), and/or distant proximity from vibration source (feet)
4.0 pts [Moderate Vibration Stress]-Material is subjected to moderate vibration frequency, moderate vibration duration, moderate vibration magnitude, and/or moderate proximity from vibration source
10.0 pts [High Vibration Stress]-Material is subjected to high vibration frequency, long vibration duration, high vibration magnitude, and/or close proximity to vibration source
5.0 pts [Unknown Vibration Stress]

[Pressure Test Stress]- Evaluation of pipe, vessel, and component pressure test levels by comparing the minimum test pressure (MTP) and SMYS. Expressed as a percentage: (MTP/SMYS x 100%).
0.0 pts [Not Applicable]
2.0 pts [MTP >100% SMYS]-High test pressure level
5.0 pts [MTP 80% to 100% SMYS]-Moderate test pressure level
10.0 pts [MTP <80% SMYS]-Low test pressure level
5.0 pts [Unknown Test Stress]

[Pressure Test Age]- Evaluation of pipe, vessel, and component pressure test ages by recording the time since the last appropriate facility test.

0.0 pts [Not Applicable]
2.0 pts [<5 Yrs Old]
10.0 pts [>5 Yrs Old]
5.0 pts [Unknown Test]
[Vibration Monitoring]- Monitoring of in-service equipment/materials' frequency, duration, level, and location of vibration stresses from various sources, including pumps, rotating equipment, wind, throttling valves, surges, temperature changes, ground movements, traffic, etc.
0.0 pts [Not Applicable]
2.0 pts [No Vibration Monitoring Needed]-Equipment/material is subjected to low or no vibration so does not require monitoring
4.0 pts [Continuous Vibration Monitoring w/Shutdown]-Equipment/material is monitored for vibration frequency, vibration duration, and vibration magnitude and/or proximity from vibration source, which shuts down equipment on vibration limit exceedance
6.0 pts [Continuous Vibration Monitoring w/Alarm]-Equipment/material is monitored for vibration frequency, vibration duration, and vibration magnitude and/or proximity from vibration source, which alarms locally/remotely on vibration limit exceedance
8.0 pts [Manual Vibration Monitoring]-Equipment/material is monitored for vibration frequency, vibration duration, and vibration magnitude and/or proximity from vibration source manually on a periodic basis (less than one time per year)
10.0 pts [No Vibration Monitoring]-Equipment/material is not monitored for vibration
5.0 pts [Unknown Vibration Monitoring]

[Safety Systems Exceedance Overstress Potential]- Evaluation of the potential to exceed any level, pressure, temperature, or flow safe operating limits based on maximum system operating conditions, equipment design limits, and safety system limitations.

0.0 pts [No Exceedance Potential or Not Applicable]-Maximum system operating conditions cannot exceed equipment design or safety system limits
2.0 pts [Low Exceedance Potential]-Maximum system operating conditions occasionally exceed equipment safety system limits but not design limits
4.0 pts [Moderate Exceedance Potential]-Maximum system operating conditions routinely exceed equipment safety system limits but not design limits
10.0 pts [High Exceedance Potential]-Maximum system operating conditions routinely exceed equipment design limits and safety system limits
5.0 pts [Unknown Exceedance Potential]

[Safety Systems Actions]- Evaluation of the various actions that initiate, or are initiated by, station safety systems involving changing level, flow, temperature, and pressure conditions.

0.0 pts [Not Applicable]
2.0 pts [Automatic Equipment/Station Shutdown]-Condition-sensing device or permissive limit exceedances automatically initiate a full, or partial, shutdown of affected station equipment, with an alarm to remote/local personnel
4.0 pts [Remote Equipment/Station Shutdown]-Condition-sensing device or permissive limit exceedances alarm at a continuously manned location and require operators to evaluate the conditions and remotely initiate a full, or partial, shutdown of affected station equipment
6.0 pts [Remote Monitoring Only]-Condition-sensing device or permissive limit exceedances alarm at a continuously manned location and require operators to evaluate the conditions and manually initiate, on site, a full, or partial, shutdown of affected station equipment
8.0 pts [Local Alarms Only]-Condition-sensing device or permissive limit exceedances alarm at a non-continuously manned location and require operators to evaluate the conditions and manually initiate, on site, a full, or partial, shutdown of affected station equipment
10.0 pts [No Safety Systems]-No safety systems present, including condition sensing, permissives, alarms, or other devices
5.0 pts [Unknown Safety Systems]
Human error algorithm variables
[Safety Systems Adequacy Review Program]- Evaluation of the adequacy of various station safety systems, including associated sensing, measurement, and control devices.

0.0 pts [Not Applicable]
1.0 pts [Excellent Adequacy Review Program]-A formal program exists that exceeds all company and industry minimum recommended or required safety system design and "adequacy for service" review practices
4.0 pts [Adequate Adequacy Review Program]-A semiformal program exists that meets all company and industry minimum recommended or required safety system design and "adequacy for service" review practices
8.0 pts [Inadequate Adequacy Review Program]-An informal program exists that does not meet all company and industry minimum recommended or required safety system design and "adequacy for service" review practices
10.0 pts [No Adequacy Review Program]-No known program exists and few company and industry minimum recommended or required safety system design and "adequacy for service" review practices are met
5.0 pts [Unknown Adequacy Review]

[Anti-Freeze Program]- Evaluation of the antifreeze program for all facilities, including water drains, control valves, and instrumentation.

0.0 pts [Not Applicable]
1.0 pts [Excellent Anti-Freeze Program]-A formal program exists that exceeds all company and industry minimum recommended or required antifreeze practices
3.0 pts [Adequate Anti-Freeze Program]-A semiformal program exists that meets all company and industry minimum recommended or required antifreeze practices
8.0 pts [Inadequate Anti-Freeze Program]-An informal program exists that does not meet all company and industry minimum recommended or required antifreeze practices
10.0 pts [No Anti-Freeze Program]-No known program exists and few company and industry minimum recommended or required antifreeze practices are met
5.0 pts [Unknown Anti-Freeze Program]
(Safety S’stems PPM]- Evaluation of various station safety system’s “predictive and preventative maintenance” (PPM) programs, including equipmentkomponent inspections, monitoring, cleaning, testing, calibration, measurements, repair, modifications and replacements.
0.0 pts [Not Applicable] 1.0 pts [Excellent PPM Program-A formal program exists which exceeds all company and industry minimum recommended or required PPM practices. 4.0 pts [Adequate PPM Program-A semi-formal program exists which meets all company and industry minimum recommended or required PPM practices. 8.0 pts [Inadequate PPM Program]-An informal program exists which does not meet all company and industry minimum recommended or required PPM practices. 10.0 pts P o PPM Program]-No known program exists and few company and industry minimum recommended or required PPM practices are met. 5.0 pts [Unknown PPM Program]
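Most of the program-adequacy variables in this section follow the same five-level pattern (formal program that exceeds practices, semiformal that meets them, informal that falls short, no program, unknown). That pattern reduces to a simple rating-to-points lookup; a sketch using the PPM point values (the short rating labels are shorthand, not from the source):

```python
# Point values for [Safety Systems PPM]; 0.0 covers "not applicable".
PPM_POINTS = {
    "not_applicable": 0.0,
    "excellent": 1.0,    # formal program, exceeds minimum practices
    "adequate": 4.0,     # semiformal program, meets minimum practices
    "inadequate": 8.0,   # informal program, falls short
    "none": 10.0,        # no known program
    "unknown": 5.0,
}

def ppm_points(rating):
    """Look up the [Safety Systems PPM] score for a rating label."""
    return PPM_POINTS[rating]
```

Other program variables in this scheme differ only in individual point values (several use 3.0 rather than 4.0 for "adequate"), so each would carry its own small table.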
[Housekeeping]- Evaluation of facility equipment/materials organization and overall maintenance.

0.0 pts [Not Applicable]
1.0 pts [Excellent Housekeeping]-All equipment and materials are well marked, accessible, maintained, and exceed industry and company best practices
3.0 pts [Adequate Housekeeping]-All equipment and materials are marked, accessible, and maintained per industry and company best practices
10.0 pts [Inadequate Housekeeping]-Equipment and materials are not well marked, accessible, and/or maintained per industry and company best practices
5.0 pts [Unknown Housekeeping]
[Computer Permissives Program]- Evaluation of a computer permissives program for all facilities, including PLC, PLCC, SCADA, and other logic-based application programs. Permissive programs that control safe operations of valve alignments, pressures, flows, and temperatures are considered.
0.0 pts [Not Applicable]
1.0 pts [Excellent Permissives Program]-A comprehensive computer-based program exists that exceeds all company and industry minimum recommended or required permissive practices
3.0 pts [Adequate Permissives Program]-A semiformal computer-based program exists that meets all company and industry minimum recommended or required permissive practices
8.0 pts [Inadequate Permissives Program]-An informal computer-based program exists that does not meet all company and industry minimum recommended or required permissive practices
10.0 pts [No Permissives Program]-No known computer-based program exists and few company and industry minimum recommended or required permissive practices are met
5.0 pts [Unknown Permissives Program]
[SCADA System]- Evaluation of a centralized SCADA system for all facilities, including RTU, PLC, and PLCC-based application programs, condition monitoring, remote control capabilities, automatic alarm/shutdown capabilities, protocols, and communication systems.

0.0 pts [Not Applicable]
1.0 pts [Excellent SCADA System]-A comprehensive SCADA system exists that exceeds all company and industry minimum recommended or required system monitoring and control practices
3.0 pts [Adequate SCADA System]-A semiformal SCADA system exists that meets all company and industry minimum recommended or required system monitoring and control practices
8.0 pts [Inadequate SCADA System]-An informal SCADA system exists that does not meet all company and industry minimum recommended or required system monitoring and control practices
10.0 pts [No SCADA System]-No known SCADA system exists and few company and industry minimum recommended or required system monitoring and control practices are met
5.0 pts [Unknown SCADA System]

[Documentation Program]- Evaluation of various forms of documenting current facility conditions and activities, including maps, drawings, records, electronic data, etc.
0.0 pts [Not Applicable]
1.0 pts [Excellent Documentation Program]-A formal program exists that exceeds all company and industry minimum recommended or required documentation practices
3.0 pts [Adequate Documentation Program]-A semiformal program exists that meets all company and industry minimum recommended or required documentation practices
8.0 pts [Inadequate Documentation Program]-An informal program exists that does not meet all company and industry minimum recommended or required documentation practices
10.0 pts [No Documentation Program]-No known program exists and few company and industry minimum recommended or required documentation practices are met
5.0 pts [Unknown Documentation Program]

[Procedures Program]- Evaluation of the types, overall condition, adequacy, and appropriateness of various operations, maintenance, engineering, construction, testing, and management procedures.

0.0 pts [Not Applicable]
1.0 pts [Excellent Procedures Program]-A formal program exists that exceeds all company and industry minimum recommended or required procedure best practices
3.0 pts [Adequate Procedures Program]-A semiformal program exists that meets all company and industry minimum recommended or required procedure best practices
8.0 pts [Inadequate Procedures Program]-An informal program exists that does not meet all company and industry minimum recommended or required procedure best practices
10.0 pts [No Procedures Program]-No known program exists and few company and industry minimum recommended or required procedure best practices are met
5.0 pts [Unknown Procedures Program]

[Personnel Qualifications Program]- Evaluation of the types of training and testing methods, overall effectiveness, adequacy, and appropriateness of operations, maintenance, engineering, construction, testing, and management personnel's qualifications for performing position requirements.

0.0 pts [Not Applicable]
1.0 pts [Excellent Qualifications Program]-A formal program exists that exceeds all company and industry minimum recommended or required personnel qualification best practices
3.0 pts [Adequate Qualifications Program]-A semiformal program exists that meets all company and industry minimum recommended or required personnel qualification best practices
8.0 pts [Inadequate Qualifications Program]-An informal program exists that does not meet all company and industry minimum recommended or required personnel qualification best practices
10.0 pts [No Qualifications Program]-No known program exists and few company and industry minimum recommended or required personnel qualification best practices are met
5.0 pts [Unknown Qualifications Program]

[Position Analysis]- Evaluation of the analysis that went into defining position responsibilities, tasks, authority, communications, training and testing levels, safety, etc. Includes maintenance, engineering, construction, testing, and management positions.

0.0 pts [Not Applicable]
1.0 pts [Excellent Position Analysis]-A formal analysis exists that exceeds all company and industry minimum recommended or required position analysis best practices
3.0 pts [Adequate Position Analysis]-A semiformal analysis exists that meets all company and industry minimum recommended or required position analysis best practices
8.0 pts [Inadequate Position Analysis]-An informal analysis exists that does not meet all company and industry minimum recommended or required position analysis best practices
10.0 pts [No Position Analysis]-No known analysis exists and few company and industry minimum recommended or required position analysis best practices are met
5.0 pts [Unknown Position Analysis]

[Hazard Analyses]- Evaluation of the historical hazard analyses conducted for station facilities, including HAZOP, "what-if" scenarios, fault trees, and relative risk assessment, as part of failure investigations or an overall company risk management program. Analyses should be appropriate, comprehensive, and recent, with follow-up of risk reduction recommendations.

0.0 pts [Not Applicable]
1.0 pts [Excellent Hazard Analyses]-Formal analyses exist that exceed all company and industry minimum recommended or required hazard analysis best practices
3.0 pts [Adequate Hazard Analyses]-Semiformal analyses exist that meet all company and industry minimum recommended or required hazard analysis best practices
8.0 pts [Inadequate Hazard Analyses]-Informal analyses exist that do not meet all company and industry minimum recommended or required hazard analysis best practices
10.0 pts [No Hazard Analyses]-No known analyses exist and few company and industry minimum recommended or required hazard analysis best practices are met
5.0 pts [Unknown Hazard Analyses]
[Critical Equipment Security]- Evaluation of security for critical or key facility equipment and systems access, including building locks, locks, keys, chains, protocols, etc.

0.0 pts [Not Applicable]
1.0 pts [Excellent Equipment Security]-All critical equipment is well secured, marked, and maintained in a manner exceeding industry and company best practices (or security is not needed)
3.0 pts [Adequate Equipment Security]-All critical equipment is secured, marked, and maintained to meet industry and company best practices
10.0 pts [Inadequate Equipment Security]-Equipment and materials are not well secured, marked, and/or maintained to meet industry and company best practices
5.0 pts [Unknown Equipment Security]
Outside force algorithm variables

Site security mitigation

[Security Detection Systems]- Evaluation of various station security detection systems and equipment, including gas/flame detectors, motion detectors, audio/video surveillance, etc. Security system appropriateness, adequacy for service conditions, coverage completeness, and PPM are evaluated.

0.0 pts [Not Applicable]
1.0 pts [Excellent Security Detection Systems]-Systems are very effective and exceed industry and company required or recommended security detection systems best practices (or are not needed)
3.0 pts [Adequate Security Detection Systems]-Systems are effective and meet industry and company required or recommended security detection systems best practices
8.0 pts [Inadequate Security Detection Systems]-Systems are not effective and do not meet industry and company required or recommended security detection systems best practices
10.0 pts [No Security Detection Systems]-No systems exist
5.0 pts [Unknown Security Detection Systems]
[Lighting Systems]- Evaluation of various station lighting systems, including security and perimeter systems and equipment and working areas. System appropriateness, adequacy for service conditions, coverage completeness, and PPM are evaluated.

0.0 pts [Not Applicable]
1.0 pts [Excellent Lighting System]-System is very effective and exceeds industry and company required or recommended lighting system best practices (or is not needed)
3.0 pts [Adequate Lighting System]-System is effective and meets industry and company required or recommended lighting system best practices
8.0 pts [Inadequate Lighting System]-System is not effective and does not meet industry and company required or recommended lighting system best practices
10.0 pts [No Lighting System]-No system exists
5.0 pts [Unknown Lighting System]
[Protective Barriers]- Evaluation of various station third-party and vehicle access barriers, including railings, 6-ft chain-link fence, barbed wire, walls, ditches, chains, and locks. Barrier appropriateness, adequacy for conditions, strength, coverage completeness, and PPM are evaluated.

0.0 pts [Not Applicable]
1.0 pts [Excellent Protective Barriers]-Barriers are very effective and exceed industry and company required or recommended best practices (or are not necessary)
3.0 pts [Adequate Protective Barriers]-Barriers are effective and meet industry and company required or recommended best practices
8.0 pts [Inadequate Protective Barriers]-Barriers are not effective and do not meet industry and company required or recommended best practices
10.0 pts [No Protective Barriers]-No barriers exist
5.0 pts [Unknown Protective Barriers]
Outside force susceptibility

[Severe Weather]- Evaluation of various hazardous weather events, including extreme rainfall, floods, freezing, hail, ice, snow, lightning, and/or winds. The hazardous event potential is determined by historical frequency, severity, duration, and damage caused.

0.0 pts [Not Applicable]
2.0 pts [Low Severe Weather Potential]-Low potential of one or more severe weather events occurring during an average year with the potential to cause significant facility damage
5.0 pts [Moderate Severe Weather Potential]-Moderate potential of one or more severe weather events occurring during an average year with the potential to cause significant facility damage
10.0 pts [High Severe Weather Potential]-High potential of one or more severe weather events occurring during an average year with the potential to cause significant facility damage
5.0 pts [Unknown Severe Weather Potential]
[Ground Movement]- Evaluation of various hazardous ground movement events, including severe earthquakes, erosion, washouts, expansive soil movement, frost heave, landslide, subsidence, or blasting. The hazardous event potential is determined by historical frequency, severity, duration, and damage caused.

0.0 pts [Not Applicable]
2.0 pts [Low Ground Movement Potential]-Low potential of one or more severe ground movement events occurring during an average year with the potential to cause significant facility damage
5.0 pts [Moderate Ground Movement Potential]-Moderate potential of one or more severe ground movement events occurring during an average year with the potential to cause significant facility damage
10.0 pts [High Ground Movement Potential]-High potential of one or more severe ground movement events occurring during an average year with the potential to cause significant facility damage
5.0 pts [Unknown Ground Movement Potential]

[Traffic Damage]- Evaluation of various hazardous traffic events, including moving object congestion, frequency, duration, direction, mass, speed, and distance to facilities. The hazardous event potential is determined by historical accident frequency, severity, and damage caused by car, truck, rail car, vessel, and/or plane impacts from within and outside the station.

0.0 pts [Not Applicable]
2.0 pts [Low Traffic Damage Potential]-Low potential of one or more hazardous traffic events occurring during an average year with the potential to cause significant facility damage
5.0 pts [Moderate Traffic Damage Potential]-Moderate potential of one or more hazardous traffic events occurring during an average year with the potential to cause significant facility damage
10.0 pts [High Traffic Damage Potential]-High potential of one or more hazardous traffic events occurring during an average year with the potential to cause significant facility damage
5.0 pts [Unknown Traffic Damage Potential]

[Activity Level]- Evaluation of the overall station activity levels, including the frequency and duration of in-station excavations, facility modifications, and vehicle traffic. Controlled access, third-party facilities present, and continuous work inspection are also evaluated.
0.0 pts [Not Applicable]
2.0 pts [Low Activity Level]-Annual (average of 1/yr) hazardous activities occur during an average year with the potential to cause significant facility damage
4.0 pts [Moderate Activity Level]-Monthly (average 1/month) hazardous activities occur during an average year with the potential to cause significant facility damage
7.0 pts [High Activity Level]-Weekly (average 1/wk) hazardous activities occur during an average year with the potential to cause significant facility damage
10.0 pts [Very High Activity Level]-Daily (average 1/day) hazardous activities occur during an average year with the potential to cause significant facility damage
5.0 pts [Unknown Activity Level]
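The activity-level scale keys points to event frequency (annual, monthly, weekly, daily). A sketch that maps an estimated count of hazardous activities per year onto those bands (the numeric cutoffs are an interpretation of the frequency labels, not stated in the source):

```python
def activity_level_points(events_per_year=None):
    """Points for [Activity Level]; None means the level is unknown."""
    if events_per_year is None:
        return 5.0           # [Unknown Activity Level]
    if events_per_year >= 365:
        return 10.0          # ~daily: very high activity level
    if events_per_year >= 52:
        return 7.0           # ~weekly: high activity level
    if events_per_year >= 12:
        return 4.0           # ~monthly: moderate activity level
    return 2.0               # ~annual or less: low activity level
```

The same frequency-band approach fits the severe-weather, ground-movement, and traffic-damage scales, which grade event potential per average year.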
Corrosion algorithm variables
2.0 pts [Mild Atmospheric Conditions]-Mild corrosive atmospheric conditions exist 6.0 pts [ModerateAtmospheric Conditions]-Moderate corrosive atmospheric conditions exist 10.0 pts [Severe Atmospheric Conditions]-Severe corrosive atmospheric conditions exist
External corrosion susceptibility [Facility Age]- Evaluation of station facilities (pumps, piping, vessels, equipment, and components) ages by recording the last facility installation or replacement date.
0.0 pts [
0.0 pts [No Ground Conditions]-No corrosive ground conditions exist 2.0 pts [Mild Ground Conditions]-Mild corrosive ground conditions exist 6.0 pts [Moderate Ground Conditions]-Moderate corrosive ground conditions exist 10.0 pts [Severe Ground Conditions]-Severe corrosive ground conditions exist [Corrosive “Hot Spot” Conditions]- Evaluation of the presence and significance of CP interferences, including casings, bundled piping, foreign line crossings, interference crossings, DCiAC stray currents.
0.0 pts [No “Hot Spot” Conditions]-No structures or stray currents are present that could interfere with CP systems, or create a corrosion hot spot, causing significant buried metal loss 5.0 pts [Single “Hot Spot” Condition]-One type of structure or stray current is present that could interfere with CP systems, or create a corrosion hot spot, causing significant buried metal loss 10.0 pts [Multiple “Hot Spot” Conditions]-Multiple structures or stray currents are present that could interfere with CP systems, or create corrosion hot spots, causing significant buried metal loss
Internal corrosion susceptibility
Atmospheric corrosion susceptibility [Corrosive Atmospheric Conditions]- Evaluation of various atmospheric corrosivity conditions, including contaminants, temperature, moisture, chemicals, splash zones, etc.
[Static Liquids Condition]- Evaluation of piping facilities where liquids are static, including piping dead-legs, pig traps and out-of-service equipment. Also evaluate frequency, duration, and product movements related to static conditions.
0.0 pts [No Atmospheric Conditions]-No corrosive atmospheric conditions exist
0.0 pts [No Static Conditions]-No corrosive static conditions exist
13/284 Stations and Surface Facilities
2.0 pts [Mild Static Conditions]-Mild corrosive static conditions exist
6.0 pts [Moderate Static Conditions]-Moderate corrosive static conditions exist
10.0 pts [Severe Static Conditions]-Severe corrosive static conditions exist

[Product Corrosivity]- Evaluation of various product (crude oil and refined products) corrosivity conditions, including chlorides, pH, temperature, bacteria, moisture, dissolved gases (CO2, O2, H2S), and contaminant levels.
0.0 pts [No Product Conditions]-No corrosive product conditions exist 2.0 pts [Mild Product Conditions]-Mild corrosive product conditions exist 6.0 pts [Moderate Product Conditions]-Moderate corrosive product conditions exist 10.0 pts [Severe Product Conditions]-Severe corrosive product conditions exist
Atmospheric corrosion mitigation

[Atmospheric Corrosion Control Program]- Evaluation of various atmospheric corrosion controls, including coating appropriateness, adequacy for conditions, coverage completeness, installation, and PPM. Also includes API 653 inspection on tanks and hot-spot protection as part of the PPM program.
0.0 pts [Excellent Atmospheric Corrosion Program]-A formal program exists that exceeds all company and industry minimum recommended or required atmospheric corrosion control best practices 1.0 pts [Adequate Atmospheric Corrosion Program]-A semiformal program exists that meets all company and industry minimum recommended or required atmospheric corrosion control best practices 5.0 pts [Inadequate Atmospheric Corrosion Program]-An informal program exists that does not meet all company and industry minimum recommended or required atmospheric corrosion control best practices 10.0 pts [No Atmospheric Corrosion Program]-No known program exists and few company and industry minimum recommended or required atmospheric corrosion control best practices are met
External corrosion mitigation

[Buried Metal Coating Adequacy/Type]- Evaluation of various buried metal coatings, including appropriateness, adequacy for conditions, coverage completeness, installation, and PPM. PPM includes API 653 inspections and bellhole inspections.
0.0 pts [Excellent Buried Metal Coating]-Coating exists that exceeds all company and industry minimum recommended or required external coating best practices [Excellent coatings typically include fusion bonded epoxy (FBE) types in good condition.]
2.0 pts [Adequate Buried Metal Coating]-Coating exists that meets all company and industry minimum recommended or required external coating best practices [Adequate coatings typically include somastic, asphaltic, coal tar, poly jacket, and tar/glass/felt (TGF) types in good condition.]
7.0 pts [Inadequate Buried Metal Coating]-Coating exists that does not meet all company and industry minimum recommended or required external coating best practices [Inadequate coatings typically include any disbonded, improperly installed, or damaged coating.]
10.0 pts [No Buried Metal Coating]-No known coating exists and few company and industry minimum recommended or required external coating best practices are met. [No coatings typically exist on carrier pipe within a casing or on old pipe.]

[Buried Metal Corrosion Control Program]- Evaluation of various buried metal corrosion control measures, including program appropriateness and system adequacy for conditions, coverage completeness, and PPM. The PPM program includes API 653 inspection on tanks and hot-spot protection.
0.0 pts [Excellent Buried Metal Corrosion Program]-A formal program exists that exceeds all company and industry minimum recommended or required buried metal corrosion control best practices
2.0 pts [Adequate Buried Metal Corrosion Program]-A semiformal program exists that meets all company and industry minimum recommended or required buried metal corrosion control best practices
7.0 pts [Inadequate Buried Metal Corrosion Program]-An informal program exists that does not meet all company and industry minimum recommended or required buried metal corrosion control best practices
10.0 pts [No Buried Metal Corrosion Program]-No known program exists and few company and industry minimum recommended or required buried metal corrosion best practices are met

[CP System Performance]- Evaluation of CP system effectiveness in controlling external corrosion based on the CP performance criteria used, number and location of test points (coverage), frequency of test point readings, variance of readings from criteria, corrective actions and timeliness, data and activities documentation, system equipment PPM, etc.
0.0 pts [Excellent CP System]-An effective CP system exists that exceeds all company and industry minimum recommended or required buried metal corrosion control best practices
2.0 pts [Adequate CP System]-A CP system exists that meets all company and industry minimum recommended or required buried metal corrosion control best practices
7.0 pts [Inadequate CP System]-A CP system exists that does not meet all company and industry minimum recommended or required buried metal corrosion control best practices
10.0 pts [No CP System]-No known CP system exists and few company and industry minimum recommended or required buried metal corrosion best practices are met
[CIS Performance]- Evaluation of the effectiveness of a close interval survey (CIS) for identifying CP system problems and external corrosion hot spots based on the CP performance
criteria used, number and location of test points (coverage), frequency of test point readings, variance of readings from criteria, corrective actions and timeliness, data and activities documentation, equipment used and its PPM, etc.
0.0 pts [Excellent CIS Performance]-An effective CIS was conducted that exceeds all company and industry minimum recommended or required CIS corrosion control best practices
2.0 pts [Adequate CIS Performance]-A CIS was conducted that meets all company and industry minimum recommended or required CIS corrosion control best practices
7.0 pts [Inadequate CIS Performance]-A CIS was conducted that does not meet all company and industry minimum recommended or required CIS corrosion control best practices
10.0 pts [No CIS Conducted]-No known CIS was conducted and few company and industry minimum recommended or required CIS corrosion best practices are met
[NDE Performance]- Evaluation of the effectiveness of a nondestructive examination (NDE) for identifying system metal loss problems and external corrosion hot spots based on the NDE performance criteria used, number and location of inspection points (coverage), frequency of inspection point readings, variance of readings from criteria, corrective actions and timeliness, data and activities documentation, equipment used and its PPM, etc.
0.0 pts [Excellent NDE Performance]-An effective NDE was conducted that exceeds all company and industry minimum recommended or required NDE corrosion control best practices
2.0 pts [Adequate NDE Performance]-An NDE was conducted that meets all company and industry minimum recommended or required NDE corrosion control best practices
7.0 pts [Inadequate NDE Performance]-An NDE was conducted that does not meet all company and industry minimum recommended or required NDE corrosion control best practices
10.0 pts [No NDE Conducted]-No known NDE inspections were conducted and few company and industry minimum recommended or required NDE corrosion best practices are met
Internal corrosion mitigation

[Internal Coating Adequacy]- Evaluation of various internal metal coatings, including coating appropriateness, adequacy for conditions, coverage completeness, installation, and PPM.
0.0 pts [Excellent Internal Metal Coating]-Coating exists that exceeds all company and industry minimum recommended or required internal coating best practices 2.0 pts [Adequate Internal Metal Coating]-Coating exists that meets all company and industry minimum recommended or required internal coating best practices 7.0 pts [Inadequate Internal Metal Coating]-Coating exists that does not meet all company and industry minimum recommended or required internal coating best practices 10.0 pts [No Internal Metal Coating]-No known coating exists and few company and industry minimum recommended or required internal coating best practices are met
[Internal Corrosion Control Program]- Evaluation of various internal corrosion control measures including performance criteria appropriateness, inhibitor and metal loss measurement frequency and adequacy for conditions, inhibitor coverage completeness, and system installation and PPM. Also includes API 653 inspection on tanks and hot-spot protection as part of the PPM program.
0.0 pts [Excellent Internal Corrosion Program]-A formal program exists that exceeds all company and industry minimum recommended or required internal corrosion control best practices
2.0 pts [Adequate Internal Corrosion Program]-A semiformal program exists that meets all company and industry minimum recommended or required internal corrosion control best practices
7.0 pts [Inadequate Internal Corrosion Program]-An informal program exists that does not meet all company and industry minimum recommended or required internal corrosion control best practices
10.0 pts [No Internal Corrosion Program]-No known program exists and few company and industry minimum recommended or required internal corrosion best practices are met
[NDE Performance]- Evaluation of the effectiveness of NDE for identifying system metal loss problems and internal corrosion hot spots based on the NDE performance criteria used, number and location of inspection points (coverage), frequency of inspection point readings, variance of readings from criteria, corrective actions and timeliness, data and activities documentation, equipment used and its PPM, etc.
0.0 pts [Excellent NDE Performance]-An effective NDE was conducted that exceeds all company and industry minimum recommended or required NDE corrosion control best practices
2.0 pts [Adequate NDE Performance]-An NDE was conducted that meets all company and industry minimum recommended or required NDE corrosion control best practices
7.0 pts [Inadequate NDE Performance]-An NDE was conducted that does not meet all company and industry minimum recommended or required NDE corrosion control best practices
10.0 pts [No NDE Conducted]-No known NDE inspections were conducted and few company and industry minimum recommended or required NDE corrosion best practices are met

[Tank Mixer Adequacy]- Evaluation of tank mixers, including appropriateness and adequacy for conditions, effective coverage, and PPM.
0.0 pts [Excellent Mixing]-One or more mixers exist that exceed all company and industry minimum recommended or required tank mixing best practices (or mixers not needed)
2.0 pts [Adequate Mixing]-One or more mixers exist that meet all company and industry minimum recommended or required tank mixing best practices
7.0 pts [Inadequate Mixing]-One or more mixers exist that do not meet all company and industry minimum recommended or required mixing best practices
10.0 pts [No Mixers]-No known mixing is performed
IX. Example of risk management application

Tank farm operator AST Inc. has performed a basic risk assessment for all of their facilities. They now have risk scores for each station and for each tank within each station. They also have risk numbers for sumps, pumps, piping, loading/unloading facilities, and other equipment groupings. The risk scores represent all available information regarding the facility to which each applies. They are readily compared to a statistic or some measure of acceptability, as shown in Table 13.12 for a sample of their data from the Metropolis Station. The risk score is a summary number that can be broken into failure categories of external forces, corrosion, human error, and design issues as well as a "consequence-of-failure" value (Table 13.13).

Risk score = (likelihood) x (consequence)
Likelihood = P1 + P2 + P3 + P4

and, for example,

P2 = f(product corrosivity, atmospheric conditions, soil resistivity, moisture content, pipe-to-soil voltages, inspection procedures, liners, coatings, interference potential, inhibitors, anodes, etc.)

AST Inc. has evaluated their data carefully. They determine which tanks pose the greatest risks, which tanks have the greater likelihood of failure, and which have the greater consequences, should failure occur. They analyze their data by failure mode and determine that some resources should be allocated to certain risk reduction actions. Specifically, they want to reduce the risk of "human error" and "design issues" type failures on three tanks at the Metropolis Station (see Table 13.13). This will target "overfill" scenarios and other possible failures that involve aspects of human error and design issues.

Operator AST Inc. drills deeper into their risk data to see why risks are greater for these tanks and to see where their risk mitigation efforts could be best applied. Because each failure category is comprised of many risk variables, they can retrieve those variables to see why the risk level is too high. The risk variables listed in Table 13.14 are seen to be weak, relative to other tanks and company standards.

Operator AST Inc. can view the risk components of likelihood and consequences separately. They see that there are more (and cheaper) possible actions to prevent, or reduce the likelihood of, an event compared with impacting the consequences. Most of their alternatives for better controlling an event after it occurs (consequence reducers) are very expensive. Some immediately rejected consequence-limiting actions include the following:

- Changing product type (less flammable, less persistent in the environment, lower energy content, less toxic, etc.)
- Changing the receptors (move the station, move the nearby town, etc.)

Other consequence-reduction possibilities that are more practical include emergency response, increased leak detection capabilities, fire suppression systems, better secondary containment, and others. Whereas all options can be investigated, AST Inc. chooses to concentrate for now only on the secondary containment alternative for consequence reduction.

Noting which risk variables are relatively weak also points directly to what corrective actions can be applied. From preestablished project lists and cost data, the operator assesses the costs of several mitigative actions. They compare these costs with the benefit, the risk reduction, predicted by their model,

Table 13.12  Summary of relative risk assessment results

Equipment tag   Risk score   Deviation from average (or "acceptable") (%)
Tank 101        154          -14.0
Tank 315        146          -12.5
Tank 655        235          -28.1

Table 13.13  Breakdown of summary risk scores

Equipment tag   Risk score   Likelihood   Consequence   P1-External forces   P2-Corrosion   P3-Design issues   P4-Human error
Tank 101        154          77           2.0           22                   25             9                  2
Tank 315        146          76           1.9           19                   22             17                 21
Tank 655        235          49           4.8           24                   11             16                 14

Table 13.14  Evaluation of risk variables

Risk variable                               Deviation from average (or "acceptable" risk) (%)   Notes
Consequence receptors (for Tank 655 only)   -32    Higher risks due to proximity to population center, water intakes, and predicted rangeability of spill (flowing river nearby)
Tank level alarms                           -8     HHA (high-high alarm) only alarms locally (panel light in office flashes)
Staffing levels                             -2     Once per week visits currently
Personnel training                          -4     No formal training for loaders (pamphlet only)
Secondary containment                       -11    Dikes in need of repair, too permeable, not sufficient volume for large releases
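The scoring relationships used in this example (Risk score = likelihood x consequence, with Likelihood = P1 + P2 + P3 + P4) can be sketched in a few lines. The function names are illustrative only, not part of the manual's software:

```python
def likelihood(p1_external, p2_corrosion, p3_design, p4_human):
    # Likelihood = P1 + P2 + P3 + P4 (index points; higher = more failure potential)
    return p1_external + p2_corrosion + p3_design + p4_human

def risk_score(likelihood_pts, consequence_factor):
    # Risk score = (likelihood) x (consequence)
    return likelihood_pts * consequence_factor

# Tank 101 in this example has a likelihood of 77 and a consequence factor of 2.0
print(risk_score(77, 2.0))  # 154.0
```

Because the score is a product, a given tank's risk can be reduced by working on either factor; the example's point is that likelihood reducers are usually the cheaper lever.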
as shown in Table 13.15. From this table, a cost/benefit ratio is easily calculated. This ratio is used to help prioritize maintenance and capital expenditures for the next period. AST Inc. utilizes all of their lower cost (high benefit-to-cost ratio) options first. They do this with the confidence that their process is automatically compensating for risk reduction benefits; that is, because all risk points are of the same magnitude, it makes sense to first exhaust the low-cost alternatives to improve risk. Then, the more expensive alternatives can be explored if risks are still seen as being unacceptable. This yields the greatest amount of benefits because resources are most efficiently utilized.

AST Inc. decides to focus improvement efforts on Tank 655 first, given the higher risks (higher consequences) seen there. They further decide to budget additional resources toward improved secondary containment only for Tank 655. This partially offsets the higher receptor risk in that area. The other tanks, having lower risks, will have their secondary containment improvements prioritized among all alternative uses of resources. The projected impact on risk is demonstrated in Table 13.16.

Note from Table 13.16 that AST Inc. controls both the risk level and the rate of change in this risk management process. They decide whether to systematically and slowly improve their entire tank population or rather to target identified hot spots for immediate improvements. (Note that the numbers used in this example merely provide the reader with a sense of the methodology; they are not necessarily in correct proportion, mathematically correct, or representative of actual data.)
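The "exhaust the low-cost alternatives first" logic reduces to ranking projects by cost per unit of risk improvement. A short sketch using a subset of the project data from Table 13.15 (the code and variable names are illustrative, not AST Inc.'s actual tooling):

```python
# (project, predicted risk improvement in %, cost in $) -- values from Table 13.15
projects = [
    ("Communicate HHA to central control room", 8, 4_000),
    ("Patch damaged dike areas", 18, 140_000),
    ("Increase secondary containment volume by raising dike level", 26, 620_000),
]

# Rank cheapest-per-point first: since all risk points are of the same
# magnitude, dollars per point of improvement is a direct priority measure.
ranked = sorted(projects, key=lambda p: p[2] / p[1])
for name, improvement, cost in ranked:
    print(f"{name}: ${cost / improvement:,.0f} per % risk improvement")
```

Here the control-room alarm tie-in ($500 per point) clearly outranks the dike projects, matching the narrative's preference for likelihood reducers over expensive consequence reducers.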
X. Comparing pipelines and stations

Operators often want to compare pipeline segments with stations or parts of stations (facilities within stations). This might be for reasons of project prioritization or to assist in design decisions such as pipeline loops versus more pump stations.

The pipeline relative risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings. The scores are usually insensitive to length. If two pipeline segments, 100 and 2600 ft, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length because it has many more risk-producing points. A cumulative risk calculation as described in Chapter 14 adds the length aspect to a risk score so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score.

A direct and intuitive way to make comparisons with station facilities is to recognize that, just as with the pipeline risk scores, each point within the station (or station section, if several portions of a station are scored separately) presents a certain risk to its surroundings. To quantify the total risk introduced by the station, we can use the station's length and width summed together, just as we use the pipeline segment's length to get a cumulative risk score for the pipeline. So, a station that is 50 ft wide by 100 ft long has a risk score that applies to 150 ft. It has the same cumulative risk as a 150-ft-long pipeline with the same risk score, or as a pipeline segment that is 300 ft long with half the risk score, and so forth. With this simple approach, all station scores can be compared to pipeline segments using the cumulative risk relationship. Alternatively, the risk evaluator may choose to use a perimeter, or 2 times the width + length, as a better basis for comparison with pipeline ROW lengths.

Where available, failure rates in stations and pipeline ROW, respectively, can be used to help establish a size-equivalency relationship. This approach is consistent with the use of release volume calculations and implied hazard zones for releases. A station with
Table 13.15  Cost/benefit analysis of risk mitigation

Project                                                              Risk improvement (%)   Cost* ($)   Notes
Communicate HHA (tank level alarm) to central control room           8                      4K
HHA to central control room plus automatic tank isolation valves     19                     21K
Upgrade tank level gauge to laser model                              3                      9K          Replaces 12-yr-old mechanical model
Increase station visits (with formal "rounds") by 5 hours per week   1                      16K         Improves several risk variables
Require orientation course for all station visitors                  3                      7K
Annual refresher training for employees                              4                      18K
Add impermeable liner to secondary containment                       12                     430K        Improves "consequence" side of equation
Increase secondary containment volume by raising dike level          26                     620K        Improves "consequence" side of equation
Patch damaged dike areas                                             18                     140K

* 10-yr NPV or equivalent calculation.

Table 13.16  AST Inc.'s risk improvement plan
                Risk scores
Equipment tag   Current   Next year plan   Five-year target   Notes
Tank 655        235       178              140                Projects 43C, 22, 16 next quarter; projects 18, 14D in subsequent years
Tank 101        154       151              130                Project 22 next quarter; project 15 in 2 years
Tank 315        146       130              130                Projects 18, 22 next quarter; then maintain risk level
large stored volumes and/or a high density of complex equipment in a small area will have a cumulative risk score that suggests a more concentrated risk than one with a larger "footprint." A station blends different types of release points for modeling convenience; a release point on a tank shell is obviously different from a release point on a 4-in. pipe. The risk score only "knows" that there are release points within that station that present a certain level of risk, even if all possible release points are not equal, so worst case points will govern. If this is not acceptable, a different station segmenting strategy can be employed.

Of course, this approach has many assumptions. It also allows for the possibility of sectioning strategies designed to present less risk. This may be possible by manipulation of section boundaries (geographic areas) around equipment with varying release potentials in order to optimize the cumulative risk scores. Nevertheless, this approach can be a simple way to establish equivalencies among risk scores for different facility types, at least until more definitive relationships can be developed.
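The length-equivalency idea above reduces to multiplying a per-point relative score by an effective length: the segment length for a pipeline, or length + width for a station. A small sketch with hypothetical scores (the per-point values are made up for illustration):

```python
def cumulative_risk(score_per_point, effective_length_ft):
    # Cumulative risk = relative score applied over every risk-producing point.
    # For a station, effective length is taken as (length + width) in feet.
    return score_per_point * effective_length_ft

# A 50-ft x 100-ft station: score applies over (50 + 100) = 150 ft of "points"
station = cumulative_risk(score_per_point=100, effective_length_ft=50 + 100)

# A 300-ft pipeline segment with half the per-point score carries the same total
pipeline = cumulative_risk(score_per_point=50, effective_length_ft=300)

print(station == pipeline)  # True
```

Swapping in a perimeter (2 x (length + width)) as the effective length only changes the second argument; the comparison logic is unchanged.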
XI. Station risk variables

Table 13.17 provides an extensive list of station risk variables that can be used to determine risk using the approaches outlined in this chapter. Note that these variables will vary in their impact on risk. The choice of risk variables in designing a risk assessment model is discussed elsewhere.
Table 13.17  Station risk variables
aboveground coatings; access (for emergency equipment); activity level/shared stations; additive system pressure; additive system volume; adequacy of coating-external; adequacy of coating-internal; adequacy of procedures; adequacy of training; anticorrosion effectiveness-visual; antifreeze actions taken; area gas detectors; area motion detectors; area video/audio surveillance; atmosphere moisture content; atmosphere temperature; atmospheric coating damage (UV, freeze, ice, movements, etc.); atmospheric corrosion hot spots; atmospheric corrosion potential-overall; atmospheric corrosive contaminants; auto block valves; availability of outside emergency responders; average operating pressures; atmospheric corrosion control program; traffic barrier effectiveness (strength/design); block valves; booms, absorbents; building design; business loss to competition; canned pumps; check valves; cleanup costs; cleanup equipment/supplies availability; company image damaged; computer permissives; computer permissives for critical procedures; congestion; construction phase error reductions; construction year; control room/SCADA protocols; corrosion rates-external; corrosion rates-internal; cost of product; cost of service interruption; CP survey-CIS; CP survey-coating; CP survey-DCVG; CP survey-metal-soil test lead
critical instruments program; depth of cover; design phase error reductions; design verifications/checks; design-use of extra heavy pipe and fittings; diameter; dike condition; dike liner type; dike volume; dike wall materials; dissimilar metals; drainage and spill control; dust generation; earth movements-earthquake; earth movements-erosion/washout; earth movements-expansive soils; earth movements-frost heave; earth movements-landslide; earth movements-monitoring; earth movements-overall susceptibility; earth movements-preventions; earth movements-stress relief; earth movements-subsidence; earth movements-volcano; electrical area classifications; electrical cable protection; electrical equipment areas-locks and fences; electrical grounding; electrical power lines in area; electrical-static charges potential; emergency drills; emergency medical treatment; emergency response capabilities, outside assistance; emergency response capabilities, in-house; emergency shutdown systems; endangered species nearby; engine mechanical alarms; equipment profile (height/width total and ratio); facility lighting; fatigue-high frequency, high stress; fatigue-high frequency, low stress; fatigue-low frequency, high stress; fatigue-material susceptibility; fences-6-ft chain link or equivalent; fences-4-ft chain link or equivalent; fences-6-ft chain link plus barbed wire; fines/penalties from regulatory agencies; fire suppression systems-deluge/sprinklers
fire suppression systems-auto/manual; fire suppression systems-delivery rate/coverage; fire suppression systems-foam system; fire suppression systems-hand extinguishers, maintenance, records, marking; fire suppression systems-hand extinguishers, number/location; fire suppression systems-hand extinguishers, type; fire suppression systems-monitor guns; fire suppression systems-overall effectiveness; fire suppression systems-redundancy; fire suppression systems-volume available; fire suppression systems-water curtains; flame arrestors; flammable, combustible materials nearby; flexible connections; foundations; fugitive emissions; gaskets, joint seals, packing; groundwater depth; hazard identification program; high-value areas nearby; historic sites nearby; housekeeping; incident follow-ups; incident history; incident history (type, frequency, preventions); incident investigations; increased regulatory oversight; inert gas padding of flammable vapors; in-line pipe inspection; inspections for atmospheric corrosion; inspections for buried metal corrosion; inspections for internal corrosion; inspections, overall; inspector qualifications; interference corrosion potential-AC; interference corrosion potential-buried metals; interference corrosion potential-other stray currents; interior vessel inspection-dye penetrant; interior vessel inspection-ultrasound; interior vessel inspection-visual; key-lock sequencing for critical procedures; known structural flaws; language/comprehension issues; leak detection-acoustic/ultrasonic/infrared; leak detection-coverage: additive systems; leak detection-coverage: pump seals; leak detection-coverage: racks; leak detection-coverage: USTs; leak detection-ground monitoring well; leak detection-instrumented ground patrol; leak detection-mass balance (station); leak detection-mass balance all mainlines; leak detection-reaction time; leak detection-real-time model; leak detection-soil/vapor monitoring; leak detection-visual; length of dead-leg piping; length of dead-leg piping-above ground; length of dead-leg piping-buried; liner effectiveness; loading/unloading automation; loading/unloading operations; loading/unloading rack safety systems; loading/unloading system complexity; loading/unloading system pressure; loading/unloading system volume; loading/unloading-number of systems
locks; maintenance program; management of change protocols; maps, records, drawings; marking of critical equipment/instrumentation; material strength; material stress levels (MAOP vs. NOP); material toughness; material transition temperature; meteor events; monitoring by station personnel; nonfailure causes of service interruption; number of additive systems; number of ASTs; number of loading/unloading systems; number of USTs, sumps; overpressure relief devices; overpressure source strength/potential; pathways to receptors (slopes, ravines, surface roughness, etc.); patrols-air (altitude, speed, observer, etc.); patrols-ground; personal protective equipment; personnel occupancy; pipe joint count by type; pipe joint type; piping flange gasket type; piping joints-butt weld; piping joints-Dresser couplings; piping joints-flanges; piping joints-welded; piping-coating; piping-location, marking (subsurface); piping-protection from external force; piping-seismic design; population density; population type (commercial, residential, etc.); power backup systems; pressure testing-stress levels; pressure testing-time since last; pressure testing-which facilities; procedures-draining; procedures-enforcement; procedures-loading/unloading; procedures-lock out/tag out; procedures-number/complexity of actions during routine operations; procedures-overall adequacy; procedures-pump seal installation/maintenance; procedures-pump start/stop; procedures-qualification to; procedures-review/testing frequencies; procedures-training effectiveness; procedures-use of checklists in field; procedures-written content; product CO2, H2S content; product contamination; product corrosivity; product flammability; product gravity; product MIC potential; product moisture content; product persistence; product reactivity; product temperature; product toxicity; product velocity/intermittent flows; product viscosity; product-acute hazards; product-boiling point
product-chemical additives used; product-chronic hazards; product-conductivity (dissolved solids); product-contamination; product-densities; product-dissolved gases (CO2, H2S, O2); product-Hc (heat of combustion); product-NFPA product ratings; product-pH; product-RQ ratings; product-suspended solids; product-type; product-vapor pressure; public awareness-advertisement impact; public awareness-door-to-door; public awareness-mailouts; public awareness-meet with local responders; public awareness-overall effectiveness; public awareness-public forums effectiveness; public awareness-public officials meetings; pump bearing temp shutdown; pump criticality for product movement; pump high flow shutdown; pump inspection frequency and effectiveness; pump mechanical safety systems; pump motor type; pump overpressure shutdown; pump overvoltage/current shutdowns; pump pressure/volume; pump product temperature shutdown; pump pulsation dampers; pump seal flush lines; pump seal leak potential; pump seal secondary containment; pump seal secondary seal; pump type; pump vibration monitoring; rails-strong barriers against traffic; rails-weak; relief valves-design verifications; restricted access provisions; risk management program/PHA; sabotage potential; sabotage susceptibility; sabotage-physical preventions; sabotage-prior attacks or threats; sabotage-regional instability; sabotage-social preventions; safety factors; safety program; safety systems-adequacy; safety systems-calibration/maintenance; safety systems-fail-safe strategies; safety systems-inspection/calibration/maintenance; safety systems-levels of redundancy; safety systems-ownership; satellite imaging; SCADA control; SCADA monitoring; SCC; secondary containment; security for critical equipment (locks, chains, etc.); security measures, miscellaneous; shorelines/riverbanks nearby; signs; smoke detection
soil aggressiveness
soil chlorides
soil corrosivity
soil MIC potential
soil moisture content
soil permeability
soil pH
soil sulfates
soil type
spark generation potential
SRB-induced corrosion-external
SRB-induced corrosion-internal
station effective surface area
station equipment count/sizes/hp/complexity
station personnel monitoring frequency and duration
station piping count/volume
station size (area, volume)
station staffing level
station subsurface drain with skimmer-alarmed
station subsurface drain with skimmer-monitored
station-number of block valves
station-number of buried mechanical connectors
substance abuse program
successive reaction potential
successive reactions-barriers
successive reactions-overall susceptibility
successive reactions-potential force
successive reactions-probability
successive reactions-separation distances
sump safety systems
sumps-product retention time
supervision of excavation sites
surface runoff retention and analysis
surge potential
surge preventions
tank capacities
tank condition (dents, buckling, level, thinning, etc.)
tank design-API 650, 653
tank design-bolted
tank design-proper venting, API 650
tank design-rivets
tank design-roof supports
tank design-vacuum collapse potential
tank design-welded
tank fill levels
tank foundation-asphalt ring
tank foundation-concrete pad
tank foundation-concrete ring
tank foundation-gravel
tank foundation-sand
tank mixer-fixed angle
tank mixer-variable angle
tank-age
tank-API 653 inspection
tank-bottom external corrosion prevention
tank-bottom inspection, floor scan
tank-bottom inspection, mag, dye
tank-bottom inspection, vacuum weld test
tank-bottom inspection, visual
tank-bottom liner age/condition
tank-bottom liner design/installation
tank-bottom liner type (thick or thin)
tank-bottom wall thickness
tank-bottom corrosion monitoring central point
tank-bottom corrosion monitoring perimeter points
tank-cracking inspections
tank-cracking visual inspection
tank-depth of water bottoms
tank-diameter
tank-double bottom
tank-external liner type
tank-external loads considered/documented
tank-foundation condition
tank-foundation inspection
tank-foundation risk factors present
tank-height
tank-inspection for external atmospheric corrosion
tank-inspection frequencies
tank-internal anode distribution
tank-level alarm (H only)
tank-level alarm actions-alarm only
tank-level alarm actions-station shutdown
tank-level alarm actions-tank isolation
tank-level alarm types
tank-level alarms (H and HH)
tank-level alarms test frequency
tank-metal loss inspections
tank-mixers (erosion-corrosion)
tank-pressure
tank-repair history
tank-roof type, cone
tank-roof type, internal floating
tank-roof type, external floating
tank-seam condition
tank-settlement inspections/history
tank-stairway fencing and locks
tank-turnover frequency/fill cycles
tank-under-tank monitor
tank-visual external inspection
tank-visual external inspection frequency
tank-volume
tank-wall inspection for internal corrosion
tank-wall temperature > 60°F
tank-wall thickness > 0.5 in.
thermal relief devices
thermal relief valves-inspection/maintenance
torque specs/torque inspections
traffic exposures-air/marine
traffic exposures-ground outside station
traffic exposures-overall susceptibility
traffic exposures-preventions
traffic exposures-ground within station
traffic patterns/routing/flow
training-completeness of subject matter
training-job needs analysis
training-testing, certification, and retesting
use of colors/signs/locks/"idiot-proofing"
use of temporary workers
UST-material of construction
UST pressure
UST volume
UST-number of independent walls
vacuum truck(s)
vessel level safety systems
vibration
vibration-antivibration actions
wall thickness
walls < 6 ft high
walls > 6 ft high
water bodies nearby
water body type (river, stream, creek, lake, etc.)
water intakes nearby
weather events-floods
weather events-freeze
weather events-hail/ice/snow loading
weather events-lightning
weather events-potential
weather events-windstorm
wetlands nearby
workplace ergonomics
workplace human stress environment
Absolute Risk Estimates
I. Introduction 14/293
II. Absolute risks 14/294
III. Failure rates 14/295
    General failure data 14/295
    Additional failure data 14/297
IV. Relative to absolute risk 14/298
V. Index sums versus failure probability scores 14/299
… 14/301
… 14/3…
… 14/304
IX. Receptor vulnerabilities 14/305
    Population 14/305
    Generalized damage states
I. Introduction

As noted in Chapter 1, risks can be expressed in absolute terms, for example, "number of fatalities per mile-year for permanent residents within one-half mile of pipeline. . ." Also common is the use of relative risk measures, whereby hazards are prioritized such that the examiner can distinguish which aspects of the facilities pose more risk than others. The former is a frequency-based measure that estimates the probability of a specific type of failure consequence. The latter is a comparative measure of current risks, in terms of both failure likelihood and consequence. A criticism of the relative scale is its inability to compare risks from dissimilar systems (pipelines versus highway transportation, for example) and its inability to provide direct failure predictions. Criticisms of the absolute scale include its heavy reliance on historical data, particularly for rare events that are extremely difficult to quantify, and the unwieldy numbers that often generate a negative reaction from the public. The absolute scale
also often implies a precision that is usually not available to any risk assessment method. So, the "absolute scale" offers the benefit of comparability with other types of risks, whereas the "relative scale" offers the advantage of ease of use and customization to the specific risk being studied. Note that the two scales are not mutually exclusive. A relative risk ranking can be converted into an absolute scale by equating previous accident histories with their respective relative risk values. This conversion is discussed in section IV on page 298. Absolute risk estimates can be converted into relative numbers by simple mathematical relationships. Each scale has advantages, and a risk analysis that marries the two approaches may be the best approach. A relative assessment of the probability of failure can efficiently capture the many details that impact this probability. That estimate can then be used in post-failure event sequences that determine absolute risk values. (Also see Chapter 1 for discussion of issues such as objectivity and qualitative versus quantitative risk models.)
Although risk management can be efficiently practiced exclusively on the basis of relative risks, occasionally it becomes desirable to deal in absolute risks. This chapter provides some guidance and examples for risk assessments requiring absolute results (risk estimates expressed in fatalities, injuries, property damages, or some other measure of damage, in a certain time period) rather than relative results. This requires concepts commonly seen in probabilistic risk assessments (PRAs), also called numerical risk assessments (NRAs) or quantitative risk assessments (QRAs). These techniques have their strengths and weaknesses, as discussed on pages 23-25, and they are heavily dependent on historical failure frequencies. Several sources of failure data are cited and their data presented in this chapter. In most instances, details of the assumptions employed and the calculation procedures used to generate these data are not provided. Therefore, it is imperative that data tables not be used for specific applications unless the user has determined that such data appropriately reflect that application. The user must decide what information may be appropriate to use in any particular risk assessment. Case studies are also presented to further illustrate possible approaches to the generation of absolute risk values. This chapter is therefore a compilation of ideas and data that might be helpful in producing risk estimates in absolute terms. The careful reader may conclude several things about the generation of absolute risk values for pipelines:

- Results are very sensitive to data interpretation.
- Results are very sensitive to assumptions.
- Much variation is seen in the level of detail of analyses.
- A consistency of approach is important for a given level of detail of analysis.
II. Absolute risks

As noted in Chapter 1, any good risk evaluation will require the generation of scenarios to represent all possible event sequences that lead to all possible damage states (consequences). To estimate the probability of any particular damage state, each event in the sequence is assigned a probability. The probabilities can be assigned either in absolute terms or, in the case of a relative risk assessment, in relative terms, showing which events happen relatively more often than others. In either case, the probability assigned should be based on all available information. In a relative assessment, these event trees are examined and critical variables with their relative weightings (based on probabilities) are extracted as part of the model design. In a risk assessment expressing results in absolute numbers, the probabilities are assigned as part of the evaluation process. Absolute risk estimates require the predetermination of a damage state or consequence level of interest. Most common is the use of human fatalities as the consequence measure. Most risk criteria are also based on fatalities (see page 305) and are often shown on FN curves (see Figure 14.1 and Figure 15.1), where the relationship between event frequency and severity (measured by number of fatalities) is shown. Other options for consequence measures include
- Human injuries
- Environmental damages
- Property damages
- Thermal radiation levels
- Overpressure levels from explosions
- Total consequences expressed in dollars

If the damage state of interest is more than a "stress" level such as a thermal radiation level or blast overpressure level, then a hazard area or hazard zone will also need to be defined. The hazard area is an estimate of the physical distances from the pipeline release that are potentially exposed to the threat. Hazard areas are often based on the "stress" levels just noted and will vary in size depending on the scenario (product type, hole size, pressure, etc.) and the assumptions (wind, temperature, topography, soil infiltration, etc.). Hazard areas are discussed later in this chapter and also in Chapter 7. Receptors within the defined hazard area must be characterized. All exposure pathways to potential receptors, as discussed in Chapter 7, should be considered. Population densities, both permanent and transient (vehicle traffic, time-of-day, day-of-week, and seasonal considerations, etc.); environmental sensitivities; property types; land use; and groundwater are some of the receptors typically characterized. A receptor's vulnerability will often be a function of exposure time, which is in turn a function of the receptor's mobility, that is, its ability to escape the area. The event sequences are generated for all permutations of many parameters. For a hazardous substance pipeline, important parameters will generally involve:

- Chance of failure
- Chance of failure hole size
- Spill size (considering leak detection and reaction scenarios)
- Chance of immediate ignition
- Spill dispersion
- Chance of delayed ignition
- Hazard area size (for each scenario)
- Chance of receptor(s) being in hazard area
- Chance of various damage states to various receptors

A frequency of occurrence must be assigned to the selected damage state: how often might this potential consequence occur? This frequency involves first an estimate of the probability of failure of the pipeline.
This is most often derived in part from historical data, as discussed below. Then, given that failure has occurred, the probability of subsequent, consequence-influencing events is assessed. This often provides a logical breakpoint where the risk analysis can be enhanced by combining a detail-oriented assessment of the relative probability of failure with an absolute-type consequence assessment that is sensitive to the potential chains of events.
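For any single scenario, the parameter chain just described reduces to a product of a failure frequency and a string of conditional probabilities. The sketch below illustrates this; every numeric value is an illustrative assumption for demonstration, not a figure from this chapter's tables.

```python
# Illustrative event-sequence (event-tree) calculation for ONE scenario.
# All numeric values below are assumptions for demonstration only.

failure_rate = 8.9e-4            # failures per mile-year (assumed)
segment_length = 5.0             # miles of pipeline being assessed (assumed)
p_rupture_hole = 0.13            # chance the failure is a full-bore rupture (assumed)
p_immediate_ignition = 0.10      # chance of immediate ignition (assumed)
p_receptor_present = 0.25        # chance a person is in the hazard area (assumed)
p_fatality_given_exposure = 0.50 # receptor vulnerability given exposure (assumed)

# Annual frequency of the selected damage state (a fatality) for this scenario:
scenario_frequency = (failure_rate * segment_length
                      * p_rupture_hole
                      * p_immediate_ignition
                      * p_receptor_present
                      * p_fatality_given_exposure)

print(f"{scenario_frequency:.2e} fatalities/year for this scenario")
```

A full assessment repeats this multiplication for every permutation of hole size, ignition timing, weather, and receptor scenario, then sums the scenario frequencies by damage state.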
III. Failure rates

Pipeline failure rates are required starting points for determining absolute risk values. Past failures on the pipeline of interest are naturally pertinent. Beyond that, representative data from other pipelines are sought. Failure rates are commonly derived from historical failure rates of similar pipelines in similar environments. That derivation is by no means a straightforward exercise. In most cases, the evaluator must first find a general pipeline failure database and then make assumptions
Figure 14.1 FN curve for risk characterization (event frequency, 1.00E-07 to 1.00E-02, versus number of fatalities, N; log-log scale).
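An FN curve plots, for each severity N, the cumulative annual frequency of events causing N or more fatalities. A minimal sketch of how such a curve is tabulated; the scenario list is invented for illustration:

```python
# Tabulate F(N) = annual frequency of events with N or more fatalities.
# The (frequency, fatalities) pairs below are illustrative assumptions.
scenarios = [
    (1e-3, 1),
    (1e-4, 5),
    (1e-5, 20),
    (1e-6, 100),
]

def fn_points(scenarios):
    """Return (N, F(N)) pairs suitable for plotting on log-log axes."""
    ns = sorted({n for _, n in scenarios})
    return [(n, sum(f for f, m in scenarios if m >= n)) for n in ns]

for n, freq in fn_points(scenarios):
    print(n, freq)
```

By construction F(N) is non-increasing in N, which is why FN curves slope downward to the right.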
regarding the best "slice" of data to use. This involves attempts to extract from an existing database of pipeline failures a subset that approximates the characteristics of the pipeline being evaluated. Ideally, the evaluator desires a subset of pipelines with similar products, pressures, diameters, wall thicknesses, environments, ages, operations and maintenance protocols, etc. It is very rare to find enough historical data on pipelines with enough similarities to provide data that can lead to confident estimates of future performance for a particular pipeline type. Even if such data are found, estimating the performance of the individual from the performance of the group presents another difficulty. In many cases, the results of the historical data analysis will only provide starting points or comparison points for the "best" estimates of future failure frequency. The evaluator will usually make adjustments to the historical failure frequencies in order to more appropriately capture a specific situation. The assumptions and adjustments required often put this risk assessment methodology on par with a relative risk assessment in terms of accuracy and predictive capability. This underlies the belief that, given some work in correlating the two scales, absolute and relative risks can be related and used interchangeably. This is discussed below.
General failure data

As a common damage state of interest, fatality rates are a subset of pipeline failure rates. Very few failures result in a fatality. A rudimentary frequency-based assessment will simply identify the number of fatalities or injuries per incident and use this ratio to predict future human effects. For example, even in a database with much missing detail (as is typically the case in pipeline failure databases), one can extract an overall failure rate and the number of fatalities per length-time (i.e., mile-year or km-year). From this, a "fatalities per failure" ratio can be calculated. These values can then be scaled to the length and design life of the subject pipeline to obtain some very high-level risk estimates for that pipeline. A sample of high-level data that might be useful in frequency estimates for failure and fatality rates is given in Tables 14.1 through 14.4. A recent study [67] of pipeline risk assessment methodologies in Australia recommends that the generic failure rates shown in Table 14.5 be used. These are based on U.S., European, and Australian gas pipeline failure rates and are presumably recommended for gas transmission pipelines (although the report addresses both gas and liquid pipelines). Using the rates from Table 14.5 and additional assumptions, this study produces the more detailed Table 14.6, a table of failure rates related to hole size and wall thickness. (Note: Table 14.6 is also a basis for the results shown later in this chapter for Case Study B.) As discussed in earlier chapters, there is a difference between "frequency" and "probability," even though in some uses they are somewhat interchangeable. At very low frequencies of occurrence, the probability of failure will be numerically equal to the frequency of failure. However, the actual relationship between failure frequency and failure probability is often
Table 14.1 Compilation of pipeline failure data for frequency estimates

Location | Type | Period | Length | Failure rate | Fatality rate (no. per failure) | Ref.
Canada | Oil/gas | 1989-92 | 294,030 km | 0.16/1000 km-year | 0.025 | 95
USA | Oil/gas | 1987-91 | 1,725,156 km | 0.25/1000 km-year | 0.043 | 95
USA | Oil | 1982-91 | 344,649 km | 0.55/1000 km-year | 0.01 | 95
USA | Gas | 1987-91 | 1,382,105 km | 0.17/1000 km-year | 0.07 | 95
USA | Gas transmission | 1986-2002 | 300,000 miles | 0.267 failures/1000 mile-year | — | —
USA | Refined products | 1975-1999 | — | 0.68/1000 mile-year | 0.0086 | 86
USA | Hazardous liquids | 1975-1999 | — | 0.89/1000 mile-year | 0.0049 | 86
USA | Crude oil | 1975-1999 | — | 1.1/1000 mile-year | 0.0024 | 86
USA | Hazardous liquid | 1986-2002 | — | — | — | —
Western Europe | Gas | — | 1.2 million mile-years | 0.29/1000 mile-year | — | 44
Table 14.2 U.S. national hazardous liquids spill data (1975-1999)

Event category | Units | Crude oil reportable rate | Refined products reportable rate | Crude oil + refined products reportable rate
Spill frequency | Spills/year/mile | 1.1 x 10^-3 | 6.8 x 10^-4 | 8.9 x 10^-4
Deaths | Deaths/incident | 2.4 x 10^-3 | 8.6 x 10^-3 | 4.9 x 10^-3
Injuries | Injuries/incident | 2.0 x 10^-2 | 6.1 x 10^-2 | 3.6 x 10^-2

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.
modeled by assuming a Poisson distribution of actual frequencies. The Poisson equation relating spill probability and frequency for a pipeline segment is

P(X)SPILL = [(f x t)^X / X!] x exp(-f x t)

where

P(X)SPILL = probability of exactly X spills
f = the average spill frequency for the segment of interest (spills/year)
t = the time period for which the probability is sought (years)
X = the number of spills for which the probability is sought, in the pipeline segment of interest.

The probability of one or more spills is evaluated as follows:

P(one or more)SPILL = 1 - P(X)SPILL, where X = 0; since (f x t)^0 / 0! = 1, this reduces to 1 - exp(-f x t).
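The Poisson relationships above can be implemented directly. A minimal sketch; the segment length, spill frequency, and time period in the example are illustrative assumptions:

```python
import math

def prob_exactly(x, f, t):
    """P(exactly x spills) in t years, for average spill frequency f (spills/year)."""
    return (f * t) ** x / math.factorial(x) * math.exp(-f * t)

def prob_one_or_more(f, t):
    """P(one or more spills) = 1 - P(0) = 1 - exp(-f*t)."""
    return 1.0 - prob_exactly(0, f, t)

# Example (assumed values): a 10-mile segment at 0.89 spills/1000 mile-year
# gives f = 0.0089 spills/year; over a 20-year period:
p = prob_one_or_more(0.0089, 20)   # about 0.163
```

Note that for very small f x t, 1 - exp(-f x t) is approximately f x t, which is why frequency and probability are numerically interchangeable at very low occurrence rates, as the text observes.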
Table 14.4 Comparison of common failure causes for U.S. hazardous liquid pipelines

Cause | Percent of total
Outside forces | 25
Corrosion | 25
Equipment failure (metal fatigue, seal, gasket, age) | 6
Weld failure (all welds except longitudinal seam welds) | 5
Incorrect operation | 7
Unknown | 14
Repair/install | 7
Other | 6
Seam split | 5
Total | 100

Table 14.3 Average U.S. national hazardous liquid spill volumes and frequencies (1990-1997)

U.S. national average:
Pipe spill frequency | 0.86 spills/year/1000 miles
Pipe spill volume | 0.70 bbl/year/mile
Pipe and station spill frequency | 1.3 spills/year/1000 miles
Pipe and station spill volume | 0.94 bbl/year/mile

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.

Table 14.5 Generic failure rates recommended in Australia

Cause of failure | Failure rate (per km-year)
External force | 3.00E-4
Corrosion | 1.00E-4
Material defect | 1.00E-4
Other | 5.00E-5
Total | 5.50E-4

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.
Table 14.6 Failure rates related to hole size and wall thickness

[Table body not legibly recoverable: for each hole size (5, 25, 70, 100, and 150 mm) and wall thickness band (<6, 6-10, and >10 mm), the table applies the wall-thickness impact and corrosion factors of Table 14.8 to apportion failure rates (per km-year) among external force, corrosion, material defect, and other causes. The generic failure rates being apportioned (overall = 5.50E-4 per km-year) are: external force 3.00E-4, corrosion 1.0E-4, material defect 1.0E-4, other 5.0E-5; these are the study-recommended generic failure rates to use for QRA in Australia (see Table 14.5).]

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.
Additional failure data

A limited amount of data is also available to help make distinctions for pipeline characteristics such as wall thickness, diameter, depth of cover, and potential failure hole size. Several studies estimate the benefits of particular mitigation measures or design characteristics. These estimates are based on statistical analyses in some cases; often they are merely the historical failure rate of pipelines with a particular characteristic, such as a particular wall thickness, diameter, or depth of cover. This type of analysis must isolate the factor from other confounding factors and should also produce a rationale for the observation. For example, if data suggest that a larger diameter pipe ruptures less often on a per-length, per-year basis, is there a plausible explanation? In that particular case, higher strength due to geometrical factors, better quality control, and a higher level of attention by operators are plausible explanations, so the premise could be tentatively accepted. In other cases, the benefit from a mitigation is derived from engineering models or simply from logical analysis with assumptions. Some observations from various studies are discussed next. The European Gas Pipeline Incident Group database (representing nine Western European countries and 1.2 million mile-years of operations as of this writing) gives the relative
Table 14.8 Suggested wall thickness adjustments

Wall thickness (mm) | External force coefficient | Corrosion coefficient
<6 | 1.3 | 2
6-10 | 0.36 | 0.95
>10 | 0.04 | 0

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.
Table 14.7 European Gas Pipeline Incident Group database relative frequency of failure data

Cause | Failure rate (mile-year)^-1 | Percent of total failure rate | Percent <2 cm hole | Percent 2 cm-FBR | Percent FBR (full bore rupture) | Dependence on wall thickness
Third-party interference | 1.50E-04 | 50 | 25 | 56 | 19 | Yes
Construction defect | 5.30E-05 | 18 | 69 | 25 | 6 | Potential
Corrosion | 4.40E-05 | 15 | 97 | 3 | 0 | Yes
Land movement | 1.80E-05 | 6 | 29 | 31 | 40 | Potential
Other/unknown | 3.20E-05 | 11 | 74 | 25 | 1 | Yes
Total | 2.90E-04 | 100 | 45 | 39 | 13 | —

Source: Pluss, C., Niederbaumer, G., and Sagesser, R., "Risk Assessment of the Transitgas Pipeline," Journal of Pipeline Integrity, September 2002.
Table 14.9 Relationship between wall thickness and failure rate

Wall thickness (mm) | Failure rate (per 1000 km-year)
0-5 | 0.750
5-10 | 0.220
10-15 | 0.025

Source: European Gas Pipeline Incident Data Group (EGIG) 1993 report.

frequency with which certain hole sizes have been observed for various failure modes (see Table 14.7). Reference [67] suggests that some adjustments should be applied to the recommended failure rates (shown in Table 14.5) in order to account for observed reductions in failure rates with increasing wall thickness. These adjustments are shown in Table 14.8 and were used in Table 14.6. A relationship between wall thickness and failure rate due to external forces is also given in Table 14.9. Similarly, a relationship to depth of cover is offered in Table 14.10. Potential risk reduction benefits from several mitigation measures, as suggested by various references, have been compiled in Table 14.11.
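Applying wall-thickness adjustments to generic failure rates is a simple multiplication. The sketch below assumes the Table 14.5 generic rates and assumes external force coefficients of 1.3/0.36/0.04 and corrosion coefficients of 2/0.95/0 for the three wall-thickness bands, per the reading of Table 14.8 used here:

```python
# Adjust generic failure rates (per km-year, Table 14.5) by wall-thickness
# coefficients (assumed per Table 14.8).
GENERIC = {"external force": 3.00e-4, "corrosion": 1.00e-4,
           "material defect": 1.00e-4, "other": 5.00e-5}

# (external force coefficient, corrosion coefficient) by wall thickness band
COEFF = {"<6 mm": (1.3, 2.0), "6-10 mm": (0.36, 0.95), ">10 mm": (0.04, 0.0)}

def adjusted_rates(band):
    """Return per-cause failure rates with wall-thickness adjustments applied."""
    ef_c, cor_c = COEFF[band]
    rates = dict(GENERIC)
    rates["external force"] *= ef_c
    rates["corrosion"] *= cor_c
    return rates

total = sum(adjusted_rates("6-10 mm").values())  # per km-year for that band
```

Material defect and other causes are left unadjusted, reflecting the observation that only external force and corrosion show a clear wall-thickness dependence.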
IV. Relative to absolute risk

Table 14.10 Summary of failure frequencies (per 1000 km-years) by depth of burial

Cause | Normal 0.9 m | 1.5 m | 2 m | 3 m
Mechanical failure | 0.143 | 0.143 | 0.143 | 0.143
Operational | 0.047 | 0.047 | 0.047 | 0.047
Corrosion | 0.085 | 0.085 | 0.085 | 0.085
Natural | 0.013 | 0.013 | 0.013 | 0.013
External impact | 0.132 | 0.099 | 0.066 | 0.0013
Total | 0.42 | 0.387 | 0.354 | 0.289

Source: Morgan, B., et al., "An Approach to the Risk Assessment of Gasoline Pipelines," presented at Pipeline Reliability Conference, Houston, TX, November 1996.

As previously noted, it may be advantageous to marry a relative assessment of the probability of failure with an absolute failure frequency and then with various consequence scenarios. A relative risk model serves an operator especially well when it provides guidance and decision support for resource allocations. It shows the system vulnerabilities and points to mitigation measures to remedy them. The consequence assessment is mostly there to indicate the priorities and perhaps suggest the appropriate level of mitigation. There are normally few opportunities to significantly change the consequences directly (see Chapter 7). Consequences are more critical in risk communications and regulatory decision making, often leading to the need for absolute risk values. This makes a study and quantification of incident event sequences more necessary. Many of the events in the sequences studied will be related to a particular damage state. The sequence begins with a failure probability but then follows paths that ultimately measure the likelihood of various consequence scenarios: Is there immediate ignition or delayed ignition? How big a cloud may form? What are the likely temperature and wind conditions? What if an explosion occurs? How far are the vulnerable receptors? The overall likelihood of failure of the pipeline, often the starting point for the event sequence, is a function of all of the variables discussed in Chapters 3 through 6 of this book. Most risk assessment efforts similarly focus on the probability of failure. This is not only because failure frequency reduction is usually the best way to reduce risks, but also because so many variables impact failure frequency that a model is needed to properly consider all of the important factors. Inferring a failure frequency from a relative risk score is illustrated later in Case Study C. The concept could also be applied to the other case studies. The process involves a correlation between a failure frequency curve and relative risk scores.
Ideally, this would be established by many data points, demonstrating that certain failure frequencies are to be expected with risk scores produced by a certain model. As more and more companies practice formal risk management and gather data over several years for many miles of pipe, this relationship will solidify. Case Study C is forced to make the linkage with only one data point and the end points of the risk score scale-the minimum three points required to define a curve. Case Study C takes advantage of the fact that the end points of a relative risk assessment scale also have meaning. A good scoring model should show that one end of the scale represents
Table 14.11 Some reported mitigation benefits

Mitigation | Impact on risk | Reference
Increase soil cover | 56% reduction in mechanical damage when soil cover increased from 1.0 to 1.5 m | 70
Deeper burial | 25% reduction in impact failure frequency for burial at 1.5 m; 50% reduction for 2 m; 99% for 3 m | 58
Increased wall thickness | 90% reduction in impact frequency for >11.9-mm wall or >9.1-mm wall with 0.3 safety factor | 58
Concrete slab | Same effect as pipe wall thickness increase | 58
Concrete slab | Reduces risk of mechanical damage to "negligible" | 70
Underground tape marker | 60% reduction in mechanical damage | 70
Additional signage | 40% reduction in mechanical damage | 70
Increased one-call awareness and response | 50% reduction in mechanical damage | 70
Increased ROW patrol | 30% reduction in mechanical damage | 86
Increased ROW patrol | 30% reduction in heavy equipment-related damages; 20% in ranch/farm activities; 10% in homeowner activities | —
Improved ROW, signage, public education | 5-15% reduction in third-party damages | 86
a pipeline with no safety provisions, operated in the most hostile environment; consequently, failure is imminent. On the other end of the scale is the hypothetical "bullet-proof" pipeline, one that is buried 20 ft deep, has a quadruple heavy wall, is fracture resistant, uses corrosion-proof metal with secondary containment, has a fenced ROW guarded 24 hours with daily integrity verification, etc., and has virtually no chance of failure. A whole family of curves can be defined to pass through the three points defined in the case study. But as long as the relationship is isotonic (does not fold back on itself), the curves are bounded. By picking the most conservative of all possible curves that can pass through these three points, a tentative relationship can be established, at least until better information becomes available. A curve with an initial shape that is either steep or shallow has a reasonable logical basis. A curve asymptotic to the y axis (steep, Curve A of Figure 14.2) suggests "immediate and dramatic gains" from the first mitigation measures. A flatter initial curve (Curve B of Figure 14.2) represents a "critical mass" scenario: until enough mitigation measures are employed, risk reductions are minimal. Because the initial regions of the curve represent unrealistically high
failure rates, they are of less interest. The middle and end regions are the most critical because that is where most real pipelines will operate.
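One way to realize the bounded, monotonic correlation described above is piecewise log-linear interpolation through the three defining points. All three calibration points below are hypothetical placeholders (worst-possible score, one historical datum, best-possible score), not values from the text:

```python
import math

# Hypothetical calibration points: (relative risk score, failures per mile-year).
# Score 0 = no safety provisions, score 100 = "bullet-proof"; the middle point
# stands in for one observed historical datum. All values are assumptions.
POINTS = [(0.0, 1e-1), (60.0, 2.9e-4), (100.0, 1e-7)]

def failure_frequency(score):
    """Piecewise log-linear interpolation of failure frequency vs. relative score."""
    for (s1, f1), (s2, f2) in zip(POINTS, POINTS[1:]):
        if s1 <= score <= s2:
            frac = (score - s1) / (s2 - s1)
            return 10 ** (math.log10(f1) + frac * (math.log10(f2) - math.log10(f1)))
    raise ValueError("score outside calibrated range")
```

As more calibration points accumulate from industry data, the interpolation would be replaced by a fitted curve; until then, choosing conservative anchor frequencies keeps the inferred absolute rates on the safe side.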
V. Index sums versus failure probability scores

Simple conversion

Index sums, the result of risk scoring as shown in Chapters 3 through 6, involve a simple summation of the relative scores of the four failure modes. The index sum is a measure of the overall failure probability, in a relative sense. A caution in the use of this final value, however, is that each index should be checked independently to ensure that a deficiency in one index is not being masked by an excess in others. In other words, the user should ensure that the impact of the worst case index score is not overshadowed by a high index sum value. For instance, a relatively high index sum can be achieved through an abundance of mitigation in areas of third-party damage potential, human error avoidance, and design issues while completely ignoring
Figure 14.2 Bounding curves correlating risk scores with failure probability.
the corrosion threat. This unmanaged risk situation might not be apparent unless individual index scores are inspected. This is also an important determination if index sums are to be used to infer actual failure probabilities. The conversion of index sum scores into what will be called a "failure probability score" eliminates the need for any extra examination of individual index scores. The failure probability scores are calculated from index sum scores by using simple probability theory. The first step is to assume that each index score represents a survival probability, expressed as a percentage chance that the segment will survive for some predetermined time period. For example, a corrosion index of 65 indicates that the segment has a 65% chance of survival and a 35% chance of failure in some time period and by some definition of "failure." These percentages are not exact because the index sum is a relative indicator. However, higher index sums do indicate lower threats and accompanying higher survivability, so some proportionality does exist. Using the simple percentage relationship serves the purpose here. The failure probability score is obtained by calculating the probability that the pipeline section will survive all four failure modes. Subtracting this probability from 1.0 results in the relative chance that the pipeline section will fail by any one of the failure modes. This process is illustrated by the following formula:

Failure probability score = 1 - (I1/100 x I2/100 x I3/100 x I4/100)

where I1 through I4 are the four indexes representing the failure mechanisms measured in this risk model. This relationship captures the effects of serious deficiencies in any one index, representing a very active failure mechanism, even if the other indexes present a relatively favorable risk picture.
For example, the two scenarios shown in Table 14.12 have equal index sum scores but very different failure probability scores, due to the influence of one "bad" index. The probability-of-failure scores highlight this difference.

Table 14.12 Calculating a "failure probability score" from an index sum

Index                              Scenario 1 score(a)   Scenario 2 score(a)
Third-party damage                 60                    90
Corrosion                          70                    90
Design                             80                    10
Operations                         70                    90
Index sum                          280                   280
Probability of failure score (%)   76.48(b)              92.71(c)

(a) Assumed to be survival probability, in percent.
(b) 1.0 - (0.6 x 0.7 x 0.8 x 0.7) = 76.48%.
(c) 1.0 - (0.9 x 0.9 x 0.1 x 0.9) = 92.71%.

The relationship underlying the failure probability score can be visualized by recognizing that a segment survives only if failure does not occur via any of the failure mechanisms. So, the probability of surviving is (third-party damage survival) AND (corrosion survival) AND (design survival) AND (incorrect operations survival). Replacing the ANDs with multiplication signs provides the relationship for probability of survival. Subtracting this resulting product from one (1.0) gives the probability of failure. Because the index scores are not calibrated to actual survival rates, the probability of failure is a relative score rather than an actual probability. This does not detract from its usefulness as a measurement for decision support in matters of relative risk. The conversion to a failure probability score as described here might be appropriate if a correlation to actual failure rates is sought, or just as a convenience in using relative risk assessment results with a scale of "increasing points = increasing safety." See Chapter 2 for a discussion of the relative merits of either scaling choice.
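The index-combination arithmetic above is simple enough to sketch directly. The following fragment is an illustrative sketch (not part of any referenced risk model software) that reproduces the two Table 14.12 scenarios:

```python
def failure_probability_score(i1, i2, i3, i4):
    # Each index is treated as a survival probability on a 0-100 scale.
    return 1.0 - (i1 / 100) * (i2 / 100) * (i3 / 100) * (i4 / 100)

# The two Table 14.12 scenarios, each with an index sum of 280:
balanced = failure_probability_score(60, 70, 80, 70)  # no dominant threat
one_bad = failure_probability_score(90, 90, 10, 90)   # one "bad" index

print(f"{balanced:.2%} vs {one_bad:.2%}")  # -> 76.48% vs 92.71%
```

Note how the multiplication makes the single low design index (10) dominate the second result even though the other three indexes are favorable.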
More complex relationships

The probability of failure tends toward zero when any of four possible scenarios exist:

- No failure mechanisms exist.
- Failure mechanisms are mitigated: a threat exists but is prevented from acting on the system. For instance, a high depth of cover or a concrete slab protects a pipeline from third-party damage.
- The system is designed for the threat: a failure mechanism acts on but cannot harm the system. For example, a pipeline with sufficient structural strength to resist a slow-acting land movement.
- The time-to-failure from a failure mechanism is always greater than the time to detect and correct system weaknesses. For instance, cracks and corrosion flaws can be detected and removed while they are still of a size to have no impact on pipeline integrity.

As noted in Chapters 3 through 6, the index sum measures the aggressiveness of potential failure mechanisms and the effectiveness of mitigation measures and design features, rather than the failure potential. These are assumed to be closely related: higher failure potential is associated with more aggressive failure mechanisms and/or less mitigation and design allowances. The absence of a time period is a modeling convenience and reflects the conservative belief that an aggressive and unmitigated failure mechanism will eventually lead to a failure. However, the absence of a time-to-failure aspect makes a subtle difference in failure potential estimates for time-dependent failure mechanisms such as corrosion and fatigue, since there is an opportunity to find and remove developing weaknesses from such mechanisms. The design index captures the critical time-to-failure considerations in variables measuring system strength (safety factor) and integrity verifications. For purposes of a rigorous conversion of index sums to failure probability estimates, these variables might also need to be used outside of the design index to more directly show the effect of a safety margin and a program to periodically remove weaknesses on failure probability in any specific time period.
A more complex relationship might also model the benefits of integrity verifications as a function of failure mechanism aggressiveness. This reflects the belief that a very short integrity re-verification interval can mitigate even the most severe time-dependent failure mechanism.
Another way to view this is that unanticipated (undesigned-for) stresses to a pipeline system are being evaluated in the third-party and incorrect operations indexes, and also in the design index variables of surge and land movements. Unanticipated weaknesses are being evaluated in the corrosion index and in the design index variable of fatigue. The design index variables of safety factor and integrity verification are mitigation measures to address all unanticipated stresses and weaknesses. In a conversion to absolute failure probability estimates, the role of the mitigation measures in all failure mechanisms might need to be more directly measured than is shown in the design index. See the discussion of load and resistance curves in Chapter 5.
VI. Failure prediction

A good risk assessment always produces some estimate of failure probability. Given the relationship between probability and frequency of future events, this estimate can also be seen as a predictive tool. As failure probability changes over time, so would a predicted leak/break rate. In most transmission pipelines, insufficient system-specific information exists to build a meaningful leak/break prediction model; events are so rare that any such prediction will have very large uncertainty bounds. Possible exceptions include situations in which time-dependent failure mechanisms can be more reliably tracked and behave in more predictable fashions. Distribution systems, where leaks are precursors to "failures," are often more viable candidates for prediction models. Where the evaluator believes that leak/break predictions of useful accuracy are possible, she may wish to incorporate results from the risk assessment as discussed below. The following paragraphs generally describe the philosophy for how the relative risk scores can support a future leak prediction model.

A leak/break rate assessment should capture both time-dependent failure mechanisms, such as corrosion and fatigue, and more random failure mechanisms, such as third-party damages and seismic events. The random events will normally occur at a relatively constant rate over time for a constant set of conditions. See the classic bathtub curve shape discussed in an earlier chapter.

A current risk state for each length of pipe is determined via the risk model. This current state is based on all available information. The risk scores can also represent a theoretical leak/break rate for the pipe. This rate is called a "deterioration" rate by some, but that phrase seems to be best applied to time-dependent failure mechanisms only (corrosion and fatigue). This linkage between risk scores and a predicted leak/break rate is logical because both are appropriately based on all known conditions.
These conditions include recent inspection results (if available), design, operations, maintenance, and environmental considerations. Both are assessed from indirect evidence unless and until a physical inspection is performed to better establish actual conditions. The physical inspection then supersedes the previous estimates, and the risk scores are adjusted in light of the new information. A series of inspections and changing risk scores at the same location over a period of time verifies or corrects estimated deterioration predictions.

Even though they are expressed as a single value, each failure probability estimate or score really represents a failure distribution (number of failures versus time) with an average, median, and standard deviation. This distribution describes the likely failures that would accompany any pipeline section with that particular score. For third-party damage and incorrect operations, the relationship can perhaps be assumed to be a direct proportionality: failures increase proportionally with the index scores and are constant over time for a fixed index score. Therefore, if the conditions remain constant, as evidenced by a constant index score, then the failure rate will be constant. This is supported by the belief that the underlying causes for third-party damage and human error are largely random (assuming that all other factors are equal).

A corrosion failure probability score may be theorized to be related to corrosion rate by an exponential relationship. Note that the corrosion index measures the potential for and aggressiveness of corrosion, but not the time to failure from corrosion. The latter requires incorporation of pipe wall thickness, pipe stress levels, age, and other considerations. In the design index, both random forces (earth movements) and some time-dependent mechanisms (fatigue) are at work. Therefore, this index could also be representing a non-constant failure rate over time. The design index also plays a large role in determining the time to failure since it measures remaining wall thickness and pipe strength.

To begin building a predictive model, a baseline deterioration rate can be established for time-dependent failure mechanisms that assumes two end conditions: (1) there is no probability of failure immediately after installation, and (2) there is a 100% probability of failure after some selected time in service. For example, a service life on the order of 200 years might be selected. It is well understood that a pipe can fail immediately after installation; many pipes will fail long before 200 years have passed; and some pipes may survive beyond 200 years.
Any service life can be selected as long as it has some plausibility. A straight-line deterioration rate of time-dependent index risk scores (such as corrosion) can initially be assumed for simplicity: change in risk score is directly (or inversely, depending on scoring regime) proportional to changes in leak/break rate for time-dependent mechanisms. Note that more complex relationships, such as a theorized exponential increase in leaks over time, can also be used. The assumptions are initially for modeling convenience only. They can be readily changed when better information becomes available or should abandonment of the simplifications be warranted.

Table 14.13 is a very generalized example of the type of analysis described. In this example, a risk model as described in Chapters 3 through 6 has been used to create index scores. An index score of 100 is theorized to mean a 100% chance of survival for 200 years; a score of 40 indicates a 40% chance of survival for 200 years, etc. Theoretically, information flow is continuous and complete so that the index scores are continuously changing with changing conditions. The failure mechanisms of third-party damage and incorrect operations (human error) are assumed to be constant. The mechanisms of corrosion and design (due to the fatigue component and increased uncertainty of pipe integrity over time) are assumed to cause deterioration (at a straight-line rate) in the index scores representing those failure mechanisms. On initial installation (day 1), index sum scores in this example total almost 400 on a scale where 400 represents the highest
Table 14.13 Example of predicting leak/break probabilities over time based on relative risk scores

                                    Day 1    Year 1   Year 10   Year 20   Year 30   Year 40
Third-party damage index            100      70       70        70        70        70
Corrosion index                     100      90       85.5      81        76.5      72
Design index                        100      90       85.5      81        76.5      72
Incorrect operations index          99       60       60        60        60        60
Index sum                           399      310      301       292       283       274
Relative failure probability score  1.00%    65.98%   69.30%    72.44%    75.42%    78.23%
safety level, indicating almost no chance of failure. In other words, a brand new system, successfully pressure tested and placed into operation, will probably not immediately fail. At year 1, the chances for something to go wrong have increased, and failure probability is dominated by third-party damage potential and human error, neither of which is sensitive to the age of the system. Gradually, the time-dependent mechanisms become dominant, until at year 200 there is nearly a 100% chance that the segment has had at least one failure since installation. This assumes there is no integrity verification, inspection, or other means of "resetting" the clock to demonstrate that deterioration is not actually occurring. So, in year 100, there is still a 70% chance of survival of third-party related failures according to the third-party index, but due to assumed deterioration impacts over time, the survival probability from corrosion effects is only 45%. The relative failure probability scores are based on the statistical combination of the four failure probabilities (this calculation is discussed on page 300). They can be converted into leak/break rates with the proper establishment of space and time parameters, determining that the relative probabilities relate to the chances of one or more failures per X feet per Y years. Table 14.13 shows a very rudimentary and perhaps overly simplistic example with some very generalized assumptions, such as assuming that each index score is actually a survival probability, an assumption that is potentially disputed given the complex relationships between risk variables and the use of a simplifying scoring regime. This table is provided only to illustrate one possible approach for leak/break prediction using risk scores. An actual forecasting tool should be calibrated by historical failure rate data and careful scrutiny of the model to ensure that key survivability factors such as wall thickness are appropriately considered in the index scores.
The leak/break rate should be continuously adjusted by risk factors and/or inspection results to arrive at an assessment for all lengths of pipe. Historical breaks and leaks would normally be considered to be evidence that overrides previous estimates of failure probability. Converting corrosion scores to corrosion rates might be especially important when making repair-versus-replace decisions. Such decisions are typical in distribution systems, where leak indications are often the main source of integrity information (see Chapter 11). A discussion of using corrosion scores to estimate deterioration rates (corrosion rates) can also be found in Chapter 11.
Table 14.13 (continued)

                                    Year 50   Year 60   Year 100   Year 150   Year 199
Third-party damage index            70        70        70         70         70
Corrosion index                     67.5      63        45         22.5       0.45
Design index                        67.5      63        45         22.5       0.45
Incorrect operations index          60        60        60         60         60
Index sum                           265       256       220        175        130.9
Relative failure probability score  80.86%    83.33%    91.50%     97.87%     100.00%
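The straight-line scheme behind Table 14.13 can be sketched as follows. The 200-year service life, the starting index values, and the per-index decay are the illustrative assumptions stated above, not calibrated values:

```python
def failure_probability_score(indexes):
    # Treat each index (0-100) as a percent survival probability.
    p_survive = 1.0
    for score in indexes:
        p_survive *= score / 100.0
    return 1.0 - p_survive

def indexes_at_year(t, service_life=200.0):
    # Third-party damage and incorrect operations: age-independent (random).
    third_party, operations = 70.0, 60.0
    # Corrosion and design: straight-line decay to zero at end of life.
    corrosion = design = 90.0 * (1.0 - t / service_life)
    return (third_party, corrosion, design, operations)

for year in (10, 20, 40):
    score = failure_probability_score(indexes_at_year(year))
    print(f"year {year}: {score:.2%}")  # matches the Table 14.13 row
```

This reproduces the 69.30%, 72.44%, and 78.23% entries of the table; calibrating the decay rates and service life against historical data is the step the text cautions is still required.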
VII. Ignition probabilities

The possibility of ignition of a flammable pipeline product is a part of most hazard scenarios for hydrocarbon pipelines. Ignition is usually thought to increase consequences, but can also theoretically reduce them. A scenario where immediate ignition causes no damage to receptors but eliminates a contamination potential (preventing groundwater contamination or shoreline damage from an offshore spill, for example) is such a case. Ignition probability is, of course, very situation specific. Countless scenarios are possible for most pipelines.

Ignition of a flammable gas release can occur at either the source or a location some distance away. A buoyant gas such as hydrogen or natural gas will rise rapidly on release, limiting the formation of a flammable gas cloud in open space. With the assumption that most ignition sources are at or near ground level, this reduces the probability of remote ignition for these lighter gases. In some cases, the source of ignition is related to the loss of containment itself, such as sparks generated by impact from machinery or heat generated by the release process itself (including static electricity or sparks from flying debris collisions). Other sources of ignition include:

- Vehicles or equipment operating nearby
- Grinding and welding
- Residential pilot lights or other open flames
- External lighting or decorative fixtures (gas or electric).

It is not uncommon during gaseous product release events for the ignition source to be created by the release of energy, including static electricity arcing (created from high dry-gas velocities), contact sparking (e.g., metal to metal, rock to rock, rock to metal), or electric shorts (e.g., movement of overhead power lines). Estimates of ignition probabilities can be generated from company experience or pipeline failure databases, or obtained via literature searches.
The following empirical formula is recommended for use in quantitative risk assessments for gas pipelines in Australia [67]:

Ignition probability = 0.0156 x (release rate in kg/s)^0.642
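As a sketch of evaluating this power-law fit (note that the exponent is partly illegible in this printing, so the 0.642 value used here is an assumed reading that should be verified against Ref. [67]):

```python
def ignition_probability(release_rate_kg_s, coeff=0.0156, exponent=0.642):
    # Empirical power-law fit from the Australian QRA guidance [67].
    # The exponent value here is an assumption, not a confirmed reading.
    return coeff * release_rate_kg_s ** exponent

# At 1 kg/s the exponent drops out, leaving only the 0.0156 coefficient:
print(ignition_probability(1.0))  # -> 0.0156
```

Whatever the exact exponent, the form implies that ignition probability grows sublinearly with release rate.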
Some other examples of ignition probability estimates are shown in Tables 14.14 through 14.18.
Table 14.14 Estimates of ignition probabilities for natural gas, based on offshore data

Release                    Ignition probability
Minor (<1 kg/sec)          0.01
Major (1-50 kg/sec)        0.07
Massive (>50 kg/sec)       0.3

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.

Table 14.15 Estimates of ignition probabilities of natural gas for a range of hole sizes (European onshore pipelines)

Failure mode                         Ignition probability
Pinhole/crack (diam. < 20 mm)        0.027
Hole (20 mm < diam. < 200 mm)        0.019
Rupture (200 mm < diam. < 400 mm)    0.099
Rupture (diam. > 400 mm)             0.235

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002. Derived from the European Gas pipeline Incident data Group (EGIG) for onshore pipelines from 1970 to 1992. Note that these findings are based on hole size and not on release rate, which will vary with pipeline pressure.

Table 14.16 Estimates of ignition probabilities for various products

Product                    Ignition probability (%)
Gasoline                   4-6
Gasoline and crude oil     3

Source: Table created from statements in Ref. [86], which cites various sources for these probabilities.

Table 14.17 Estimates of ignition probabilities for various products above and below grade

                       Ignition probability (%)
Product                Above and below ground   Below ground only
Crude oil              3.1                      1.5
Diesel oil             1.8                      0
Fuel oil               2                        3.1
Gasoline               6                        0
Kerosene               0                        3.8
Jet fuel               4.5                      0
Oil and gasoline       3.4                      2.1
All                    3.6                      2

One study uses 12% as the ignition probability of NGL (natural gas liquids, referring to highly volatile liquids such as propane), based on U.S. data [43]. Another study concludes that the overall ignition probability for natural gas pipeline accidents is about 3.2% [95]. A more extensive model of natural gas risk assessment, the GRI (Gas Research Institute) PIMOS model [33], estimates ignition probabilities for natural gas leaks and ruptures under various conditions. This model is discussed in the following paragraphs.

In the GRI model, the nominal natural gas leak ignition probabilities range from 3.1 to 7.2%, depending on accumulation potential and proximity to structures (confinement). The higher value occurs for accumulations in or near buildings. There is a 30% chance of accumulation following a leak; a 30% chance of that accumulation being in or near a building, given that accumulation has occurred; and an 80% chance of ignition when near or in a building, given an accumulation. Hence, that scenario leads to a 7.2% chance of ignition (30% x 30% x 80% = 7.2%). The other extreme scenario is (30% chance of accu-
mulation) x (70% chance of not being near a building) x (15% chance of ignition when not near a building) = 3.1%. For ruptures, the ignition probabilities nominally range from about 4 to 15%, with the higher probability occurring when ignition occurs immediately at the rupture location. Given a rupture, the probability of subsequent ignition at the rupture location is given a value of 15%. If ignition does not occur at the rupture (85% chance of no ignition at rupture), then the probability of subsequent ignition is 5%. So, the latter leads to a probability estimate of 85% x 5% = 4.3%. In both the leak and rupture scenarios, these estimates are referred to as base case probabilities. They can be subsequently adjusted by the factors shown in Tables 14.19 and 14.20. These probabilities are reportedly derived from U.S. gas transmission pipeline incident rates (U.S. Department of Transportation,
Table 14.18 Estimates of ignition probabilities for below-grade gasoline pipelines

                         Ignition probability (%)
Failure mode   Location   Overall   Immediate   Delayed
Rupture        Rural      3.1       1.55        1.55
Rupture        Urban      6.2       3.1         3.1
Hole           Rural      3.1       1.55        1.55
Hole           Urban      6.2       3.1         3.1
Leak           Rural      0.62      0.31        0.31
Leak           Urban      1.24      0.62        0.62

Source: Morgan, B., et al., "An Approach to the Risk Assessment of Gasoline Pipelines," presented at Pipeline Reliability Conference, Houston, TX, November 1996.
Notes: U.S. experience is approximately 1.5 times higher than CONCAWE (data shown above are from CONCAWE). Assumes that urban is 2x base rates and that base rates reflect mostly rural experience. Leak ignition probability is 20% of that for ruptures or holes. Immediate and delayed ignitions occur with equal likelihood. Rupture is defined as 0.5 diameter or larger. Hole is > 10 mm but less than the rupture. Leak is < 10 mm.
Table 14.19 Adjustments affecting the probability of ignition for leaks

Factor                          Adjustment                    Percent change
Accumulation potential
  Topography                    Conducive to accumulation     10
                                Not conducive                 -10
  Gas composition               Heavy components              10
                                No heavy components           -10
  Gas flow rate                 High                          -10
                                Medium                        0
                                Low                           10
  Class location                1                             -10
                                2                             0
                                3                             100
                                4                             200
Ignition in or near building
  Land use                      Industrial                    25
                                Nonindustrial                 0
  Class location                1                             0
                                2                             50
                                3                             50
                                4                             400
Ignition other location
  Proximity to ignition source  Near                          0
                                Not near                      -10

Source: Gas Research Institute, "Pipeline Inspection and Maintenance Optimization Program, PIMOS," Final Report, prepared by Woodward-Clyde Consultants, February 1998.

Table 14.20 Adjustments affecting the probability of ignition for ruptures
Factor                          Adjustment                    Percent change
Ignition at rupture
  Cause of failure              Third-party damage            400
                                Other                         0
  Gas composition               Heavy components              10
                                No heavy components           -10
Ignition away from rupture
  Class location                1                             0
                                2                             10
                                3                             200
                                4                             300
  Proximity to ignition source  Near                          0
                                Not near                      -10
Source: Gas Research Institute, "Pipeline Inspection and Maintenance Optimization Program, PIMOS," Final Report, prepared by Woodward-Clyde Consultants, February 1998.

1970 to 1984) where possible, but it is acknowledged that few are estimated directly from the database. The last columns in these two tables indicate the magnitude of the adjustment. For example, a class 4 area (high population density) increases the probability of ignition away from a rupture by 300%. Similarly, a third-party damage incident is thought to increase the ignition-at-rupture-site probability by 400%. As another example, a high gas flow rate decreases by 10% the probability of an accumulation of gas in a leak scenario (changing that probability from 30 to 27% and hence the base case from 7.2 to 27% x 30% x 80% = 6.5%) [33]. The adjustments in Tables 14.19 and 14.20 make intuitive sense and illustrate (at least apparently) the use of normalized frequency-based probability estimates: the use of judgment when observed frequencies alone do not correctly represent the situation. For instance, it is logical that the ignition probability is sensitive to the availability of ignition sources, which in turn is logically a function of population density and industrialization. Chapter 7 discusses the role of gas density in vapor cloud
formation and supports the presumption that a heavier gas leads to a more cohesive cloud (less dispersion), leading to a higher ignition probability. Confinement of a vapor cloud (by topography and proximity to buildings) also leads to less dispersion and greater opportunity for accumulations in the flammability range, also implying higher ignition probabilities.
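The base-case arithmetic described above is a simple event-tree chain rule. A sketch makes the adjustable pieces explicit; the branch probabilities are those quoted from the GRI model in the text, and the variable names are illustrative:

```python
# Leak branch of the GRI PIMOS event tree, as described in the text.
p_accumulation = 0.30   # accumulation occurs following a leak
p_near_building = 0.30  # accumulation is in or near a building
p_ignite_near = 0.80    # ignition given accumulation near a building
p_ignite_far = 0.15     # ignition given accumulation away from buildings

leak_high = p_accumulation * p_near_building * p_ignite_near
leak_low = p_accumulation * (1 - p_near_building) * p_ignite_far

# Rupture branch: ignition at the rupture site, else possibly elsewhere.
p_ignite_at_rupture = 0.15
rupture_away = (1 - p_ignite_at_rupture) * 0.05

print(leak_high, leak_low, rupture_away)
```

These evaluate to 7.2%, about 3.1%, and about 4.3%, matching the base cases in the text; applying a Table 14.19 or 14.20 adjustment amounts to scaling one branch probability before multiplying.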
VIII. Confidence limits

Confidence limits or intervals are commonly used in association with statistical calculations. Available data are normally considered to be a sample that is used to estimate characteristics of the overall population. The population is all possible data, including possible future measurements. The sample data can be used to calculate a point estimate, such as an average leak rate in leaks per mile per year. A point estimate approximates the value for the entire population of data, termed the "true" value. However, this approximation is affected by the uncertainty in the sample data set. A confidence interval bounds the uncertainty associated with the point estimate. For example, a leak rate estimated to a 95% confidence level has a 95% probability of including the true leak rate. When the number of data points available is small, the confidence limits are wide, indicating that not enough information is available to be confident that all future data will be close to the small data set already obtained.

Data on pipeline failure rates are limited. The use of upper limits of statistical confidence intervals, especially at a high 95% confidence level, would not present meaningful representations of true failure potential. It would present unrealistically large predictions, strictly as a result of the small number of data points available. Such uncertainty-adjusted predictions do not represent best estimates of failures. It may be theoretically correct to say, for example, that "one can be ninety-five percent confident that there is no more than a one in ten chance of a spill in this area" as a result of a statistical confidence calculation on limited spill data. However, the best estimate of spill probability might be only one chance in ten thousand.

An alternative to the normal calculation of confidence intervals or bounds about the mean leak frequency is available for instances where the data set is very small. The confidence intervals can be calculated using methods that assume a Poisson distribution of the leak frequency data [86]. The use of confidence intervals in risk communications is discussed in Chapter 15.
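One common small-data calculation of this kind (a sketch consistent with, but not taken from, Ref. [86]) puts an upper confidence bound on a Poisson mean by solving for the event rate that would make the observed count improbable:

```python
import math

def poisson_upper_limit(n_events, confidence=0.95, hi=1000.0):
    """Upper confidence bound on a Poisson mean given n observed events,
    found by bisection on the cumulative Poisson probability."""
    target = 1.0 - confidence

    def cdf(lam):
        return sum(math.exp(-lam) * lam**k / math.factorial(k)
                   for k in range(n_events + 1))

    lo = 0.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if cdf(mid) > target:   # rate still plausible; push the bound up
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Even zero observed leaks implies a nonzero 95% upper bound
# (the classical "rule of three"):
print(round(poisson_upper_limit(0), 2))  # -> 3.0
```

This illustrates the point in the text: with few data points the upper bound is far above the best estimate, so quoting it as "the" failure rate overstates the hazard.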
IX. Receptor vulnerabilities
An "estimate of risk expressed in absolute terms" modeling approach requires identification of a hazard zone and a characterization of receptors within that zone. A dose-response type assessment, as is often seen in medical or epidemiological studies, may be necessary for certain receptors and certain threats. Focusing on possible acute damages to humans, property, and the environment, some simplifying assumptions can be made, as discussed below and as seen in the case studies in this chapter. As noted in Chapter 7, a robust consequence assessment sequence might follow these steps:

1. Determine damage states of interest (see discussions in this chapter)
2. Calculate hazard distances associated with damage states of interest
3. Estimate hazard areas based on hazard distances and source (burning pools, vapor cloud centroid, etc.) location (see particle trace element in Table 7.6)
4. Characterize receptor vulnerabilities within the hazard areas

This process is rather essential to absolute risk calculations.

Environmental damages are often very situation dependent, given the wide array of possible biota that can be present and exposed for varying times under various scenarios. Thermal radiation levels for non-piloted ignition of wood products can be used as one measure of an acute damage state. A drawback might be the uncertainty surrounding the spread of a fire, once ignition in some portion of the environment has occurred. One Canadian study concludes that there are on average about two pipeline-related fires in Canada each year, compared to 70,000 other fires and 9,000 forest fires. Its conclusion is that gas pipelines generally pose little threat to the environment, based on the low incidence of fires initiated by gas pipelines [95]. Threats from more persistent pipeline releases include contamination scenarios, as discussed in Chapter 7. Case Study C presents a case where damage states for various environmental receptors were defined.

Population

Many consequence assessments focus on threats to humans. To estimate potential injury and fatality counts, the population in the hazard zone must be characterized. This includes exposure times, which can be estimated by characterizing population densities at any point in time. This includes estimating:

- Permanent population
- Transitory/occasional population
- Special population (restricted mobility).

A thorough analysis would necessarily require estimates of people density (instead of house density), people's away-from-home patterns, nearby road traffic, evacuation opportunities, time of day, day of week, and a host of other factors. Several methods can be devised to incorporate at least some of these considerations. An example methodology, from Ref. [67], is discussed next. According to this reference, average population densities per hectare can be determined for a particular land use by applying the following formula:
Population per hectare = [10,000 / (area per person)] x (% area utilized) x (% presence)
This reference describes the process of population density estimation as follows (excerpt, but not a direct quote): Indoor population densities have been based on the number of square meters required per person according to the local building code. Residential dwellings are not covered in this building code, but have been assigned a value of 100 m² per person, on the basis of a typical suburban density of 30 persons per hectare and one-third actual dwelling area. For nonresidential use, available floor space has been set at 75% of the actual area, to allow for spaces set aside for elevators, corridors, etc. Based on the above, the indoor populations shown in Table 14.21 have been estimated.

For rural and semirural areas, the outdoor population is generally expected to be greatest on major roads (excluding commercial areas). If an appropriate value for vehicular populations can be determined, then this can be conservatively applied to all outdoor areas. Assuming that a major rural road is 10 m wide, 1 hectare covers a total length of 1 km. For rural areas, an average car speed of 100 km/hr and an average rate of 1 car per minute has been assumed. Based on this and an average of 1.2 persons per car, an outdoor population density of 1 person per hectare has been determined. Using 60 km/hr and a 30-second average separation, a population density of 4 people per hectare is applied to semirural areas. For rural commercial outdoor areas and urban/suburban outdoor areas, the population values shown in Table 14.22 are suggested.
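The formula above amounts to a one-liner. In this sketch of Ref. [67]'s method (not its actual implementation), the percent presence is taken as occupied hours out of a 168-hour week, which reproduces the Table 14.21 entries:

```python
def people_per_hectare(area_per_person_m2, fraction_area_utilized,
                       hours_per_week_occupied):
    # 10,000 m^2 per hectare; presence = occupied hours / 168-hour week.
    presence = hours_per_week_occupied / 168.0
    return (10_000 / area_per_person_m2) * fraction_area_utilized * presence

# Office: 10 m^2 per person, 75% usable floor area, occupied 50 h/week.
print(round(people_per_hectare(10, 0.75, 50)))    # -> 223
# Residential: 100 m^2 per person, all floor area, occupied 112 h/week.
print(round(people_per_hectare(100, 1.00, 112)))  # -> 67
```

Both results match the corresponding "people per hectare" column of Table 14.21.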
Other typical population densities from another source (Ref. [43]) are shown in Table 14.23. (Discussions regarding valuations placed on human life can be found in Chapter 15.)
Table 14.21 Indoor population densities

  Use                     Percent floor    Area per       Hours per week   People per
                          area occupied    person (m²)    occupation       hectare
  Residential             100              100            112              67
  Office                  75               10             50               223
  Retail (ground level)   75               3              112              1667
  Retail (other)          75               5              112              1000
  Hotel/motel             75               15             84               250
  School classroom        75               2              30               670

Source: Jaques, S., "NEB Risk Analysis Study, Development of Risk Estimation Method," National Energy Board of Canada report, April 1992.

Table 14.22 Outdoor population densities

  Use                              Area per      Hours per week   Percent time   People per
                                   person (m²)   occupation       occupied       hectare
  Commercial outdoor (rural)       500           60               35.71          7
  Commercial outdoor (semirural)   200           60               35.71          18
  Outdoor (suburban)               50            60               35.71          71
  Outdoor (urban)                  20            60               35.71          179

Source: Jaques, S., "NEB Risk Analysis Study, Development of Risk Estimation Method," National Energy Board of Canada report, April 1992.

Table 14.23 Typical population densities

  Designation    Dwelling units per hectare
  High urban     30
  Low urban      5
  High rural     1.67
  Low rural      0.17
  Agricultural   0.03

Source: Jaques, S., "NEB Risk Analysis Study, Development of Risk Estimation Method," National Energy Board of Canada report, April 1992.
Note: Three persons per dwelling unit are assumed.

An examination of the implied and stated probabilities behind the GEU PIMOS program [33] yields the probability estimates for various damage states given in Table 14.24. From this table, we can see that 30% of all leak scenarios are thought to result in some damage state, including a "no ignition" scenario where some property damage is possible. All (100%) of the rupture scenarios are thought to result in some damage. Note that these are "base case" probabilities that can be adjusted by the factors shown in Tables 14.19 and 14.20 and by additional factors thought to affect damage potential, including fracture toughness, land use, and population density.
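The tabulated densities in Tables 14.21 and 14.22 follow from a simple occupancy-weighted calculation: the peak density over the usable portion of a hectare (10,000 m²), scaled by the fraction of the 168-hour week the use is occupied. A minimal sketch that reproduces the table entries (the function name is illustrative, not from the reference):

```python
def people_per_hectare(area_per_person_m2, hours_per_week, pct_floor_occupied=100.0):
    """Occupancy-weighted population density (1 hectare = 10,000 m^2)."""
    usable_m2 = 10_000.0 * pct_floor_occupied / 100.0
    peak_density = usable_m2 / area_per_person_m2    # people present at full occupancy
    return peak_density * hours_per_week / 168.0     # scale by fraction of week occupied

# Reproducing entries from Tables 14.21 and 14.22:
print(round(people_per_hectare(100, 112)))       # residential -> 67
print(round(people_per_hectare(10, 50, 75)))     # office -> 223
print(round(people_per_hectare(500, 60)))        # commercial outdoor (rural) -> 7
```

Note that the outdoor entries carry no floor-occupancy reduction, which is why 60 hours per week (35.71% of the time) applied to 500 m² per person yields the 7 people per hectare shown in Table 14.22.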
X. Hazard zone calculations

Generalized damage states

Examples of specific damage estimates to receptors can be found in the case studies presented later in this chapter. Simplifying assumptions are made in those studies, as is required in nearly any such analysis. More general assumptions can be used to set overall damage states. For example, a study of natural gas releases uses an approximate exposure time of 30 seconds and several other assumptions to set a suggested damage threshold at a thermal radiation level (from an ignited natural gas release) of 5,000 Btu/hr-ft², as discussed later in this chapter. Another study of thermal radiation impacts from ignited pools of gasoline assumes the following: there is a 100% chance of fatality in pools of diameter greater than 5 m, and the fatality rate falls linearly to 0% at a thermal radiation level of 10 kW/m² [59].

As noted earlier, a hazard zone must be established in order to characterize the receptors that might be vulnerable to a pipeline release. Hazard zone distance estimates using modeling shortcuts are discussed in Chapter 7. In this chapter, more information is provided on the damage levels that define the hazard zone. Hazard zone calculations normally focus on acute threats: thermal and blast (overpressure) impacts. Thermal radiation is generated from flame jets (or torch fires), fireballs, or pools of burning liquids. Overpressure events are generated if a flammable vapor cloud is detonated. The scenarios of concern include the following:

Flame jets: an ignited stream of material leaving a pressurized vessel creates a long flame jet with associated radiant heat hazards and the possibility of direct impingement of flame on other nearby equipment.
Table 14.24 Probabilities of various damage states

Scenario
Leak; accumulation in or near building; ignition
Leak; accumulation in or near building; no ignition
Leak; accumulation not in or near building; ignition
Leak; accumulation not in or near building; no ignition
Leak scenario totals
Rupture; ignition at rupture
Rupture; no ignition at rupture; ignition away from rupture
Rupture; no ignition at rupture; no ignition away from rupture
Rupture scenario totals
Injury/fatality
Property damage only
No damage
0.072 0.018 0.0315 0.1785 0.3 0.15
0.3
0.7 0.5 0.8 0.15
0.85
0.3
0.45
0.0425
0.1 0.01
0.7
0.25 0.2
0.45
0.54
Scenario probability
0.8075 1
0.2
0.5
Vapor cloud fire: a cloud encounters an ignition source and the entire cloud combusts as air and fuel are drawn together in a flash fire situation.

Liquid pool fires: a liquid pool of flammable material could form and create radiant heat hazards.

Fireballs: not thought to be a potential for subsurface pipeline facilities, this event is normally caused by boiling liquid expanding vapor explosion (BLEVE) episodes in which a vessel, usually engulfed in flames, violently explodes, creating a large fireball (but not the blast effects of other types of explosions) with the generation of intense radiant heat.

Vapor cloud explosion: potentially occurs as a vapor cloud combusts in such a rapid manner that a blast wave is generated. The transition from normal burning in a cloud to a rapid, explosive event is not fully understood. Deflagration is the more common event. A confined vapor cloud explosion is more common than an unconfined one, but note that even in an atmospheric release, trees, buildings, terrain, etc., can create partial confinement conditions.

Any explosive event can also have associated missiles and high-velocity debris whose damage potentials have been dramatically demonstrated, but are very difficult to accurately model.

The hazard scenario is dependent on the pipeline's product, as noted in Table 14.25. Most damage state or hazard zone calculations result in an estimated threat distance from a source, such as a burning liquid pool or a vapor cloud centroid. It is important to recognize that the source might not be at the pipeline failure location. The source can actually be some distance from the leak site, and this must be considered when assessing potential receptor impacts. Note also that a receptor can be very close to a leak site and not suffer any damages, depending on variables such as wind direction, topography, or the presence of barriers.

Another potential hazard for pipelines containing HVLs is a BLEVE episode, described earlier. This is a rare phenomenon for most buried pipelines. For surface facilities, where a vessel can become engulfed in flames, the BLEVE scenario should be evaluated.
Table 14.25 Pipeline products and potential hazard scenarios

  Product                             Hazard type         Hazard nature               Dominant hazard model
  Flammable gas (methane, etc.)       Acute               Thermal                     Flame jet; fireball
  Toxic gas (chlorine, H2S, etc.)     Acute               Toxicity                    Dispersion modeling
  HVL (propane, butane,               Acute               Thermal and blast           Dispersion modeling; flame jet;
    ethylene, etc.)                                                                     fireball; overpressure (blast) event
  Flammable liquid (gasoline, etc.)   Acute and chronic   Thermal and contamination   Pool fire; contamination
  Relatively nonflammable liquid      Chronic             Contamination               Contamination
    (diesel, fuel oil, etc.)
Thermal radiation damage levels

Thermal radiation levels are typically measured in units of kW/m² or Btu/hr-ft². Thresholds of thermal radiation can be chosen to represent specific potential damages that are of interest. These can then be used to calculate distances from the pipeline at which that level of thermal radiation would be expected. Recognized "thermal load versus effect" models estimate that a burn injury will occur within 30 seconds of exposure at a heat flux of 1,600 to 2,000 Btu/hr-ft² (5.0 to 6.3 kW/m²). At a radiant heat intensity of 5,000 Btu/hr-ft² (15.8 kW/m²), the likelihood of a fatal burn injury within this exposure period becomes significant (1%): 1 in 100 people exposed would not survive. Various wood ignition models have been used to estimate the steady-state effects of thermal radiation on property based on the duration of exposure required to cause piloted and spontaneous ignition. These models conservatively establish a radiant heat intensity threshold of 4,000 Btu/hr-ft² (12.6 kW/m²) for piloted wood ignition and a 10,000 Btu/hr-ft² (31.6 kW/m²) threshold for spontaneous wood ignition. At 8,000 Btu/hr-ft² (25.2 kW/m²), spontaneous ignition is very unlikely, but piloted wood ignition will occur after 38 seconds in the presence of a pilot source [83]. Some representative thermal radiation levels of interest are shown in Tables 14.26 through 14.28.

The U.S. Department of Housing and Urban Development (HUD) published a guidebook in 1987 titled Siting of HUD-Assisted Projects Near Hazardous Facilities: Acceptable Separation Distances from Explosive and Flammable Hazards. The guidebook was developed specifically for implementing the technical requirements of 24 CFR Part 51, Subpart C, of the Code of Federal Regulations. The guidebook presents a method for calculating a level-ground acceptable separation distance (ASD) from pool fires that is based on simplified radiation heat flux modeling. The ASD is determined using nomographs relating the area of the fire to the following levels of thermal radiation flux:

Thermal radiation, buildings: The standard of 10,000 Btu/hr-ft² is based on the thermal radiation flux required to ignite a wooden structure after an exposure of approximately 15 to 20 minutes, which is assumed to be the fire department response time in an urban area.

Thermal radiation, people: The standard of 450 Btu/hr-ft² for people in unprotected outdoor areas such as parks is based on the level of exposure that can be sustained for a long period of time.
Table 14.26 Representative thermal radiation levels

  Thermal radiation level (Btu/hr-ft²)   Description
  12,000                                 100% mortality in ~30 sec
  5,000                                  1% mortality in ~30 sec
  4,000                                  Eventual wood ignition
  1,600                                  Onset of injury after ~30 sec

Source: Stephens, M. J., "A Model for Sizing High Consequence Areas Associated with Natural Gas Pipelines," C-FER Topical Report 99068, prepared for Gas Research Institute, Contract 8174, October 2000.
Table 14.27 More sample thermal radiation levels

  Thermal radiation (kW/m²)   Description
  1.2     Received from the sun at noon in summer
  2.1     Minimum to cause pain after 1 minute
  4.7     Will cause pain in 15-20 seconds and injury (at least second-degree burns) after 30 seconds of exposure; intensity in areas where emergency actions lasting up to several minutes may be required without shielding but with protective clothing
  6.3     Intensity in areas where emergency actions lasting up to 1 minute may be required without shielding but with protective clothing
  9.5     Intensity at design flare release at locations to which people have access and where exposure would be limited to a few seconds for escape
  12.6    Significant chance of fatality for extended exposure; high chance of injury; heats wood such that it will ignite with a naked flame (piloted ignition of wood)
  15.6    Intensity on structures where operators are unlikely to be performing and where shelter is available
  23      Likely chance of fatality for extended exposure and chance of fatality for instantaneous exposure; spontaneous ignition of wood and failure of unprotected steel after long exposure
  35      Cellulosic material will pilot ignite within 1 minute of exposure; significant chance of fatality for people exposed instantaneously
  37.5    Intensity at which damage is caused to process equipment

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.

Table 14.28 Fatality probability at various thermal radiation levels

  Heat flux (kW/m²)   Percent fatalities outdoors   Percent fatalities indoors
  6.3                 3                             0
  8                   11                            0
  9.5                 21                            0
  12.6                49                            0
  15.6                70                            3
  19                  85                            11
  24                  95                            21
  31.5                100                           49
  39                  100                           70
  47.5                100                           85
  60                  100                           95

Source: Office of Gas Safety, "Guide to Quantitative Risk Assessment (QRA)," Standards Australia ME-038-01 (Committee on Pipelines: Gas and Liquid Petroleum), Risk and Reliability Associates Pty Ltd., April 2002.

Jet fire modeling

The following is based on work presented in Ref. [83]. Models are available to characterize the heat intensity associated with ignited gas releases from high-pressure natural gas pipelines. Escaping gas is assumed to feed a fire that ignites shortly after pipe failure. The affected ground area can be estimated by quantifying the radiant heat intensity associated with a sustained jet fire. The relationship presented below uses a conservative and simple equation that calculates the size of the affected worst case failure release area based on the pipeline's diameter and operating pressure. This release impact model includes the following elements:

1. Fire model. The fire model relates the rate of gas release to the heat intensity of the fire. This approach conservatively models releases as vertically oriented jet flame or trench fire impact areas. The conservatism compensates for the possibility of a laterally oriented jet, a delayed-ignition fireball, and/or the potential wind effect on actual fire position. Additional conservatism is employed because a significant portion of the radiant heat energy will actually be absorbed by the atmosphere.
2. Release model. The release model assumes that the gas peak effective release rate feeds a steady-state fire, even though the rate of gas released will immediately drop to a fraction of the initial peak rate. Therefore, the release model's calculated effective release rate is a maximum value that overestimates the actual rate for the full release duration of a typical gas pipeline rupture fire.

3. Heat intensity threshold. A heat intensity threshold establishes the sustained radiant heat intensity level above which the effects on people and property would be considered significant. The degree of harm to people caused by thermal radiation exposure is estimated by using an equation that relates the chance of burn injury or fatality to the thermal load received. The degree of damage to wooden structures through piloted ignition and spontaneous ignition can also be estimated as a function of the thermal load received.

Combining the model's effective release rate equation with the radiant intensity versus distance equation gives a hazard area equation of

r = sqrt(2348 × p × d² / I)

where
r = radius from pipe release point for a given radiant heat intensity (ft)
I = radiant heat intensity (Btu/hr-ft²)
p = maximum pipeline pressure (psi)
d = pipeline diameter (inches).
Reference [83] recommends the use of 5,000 Btu/hr-ft² as a heat intensity threshold for defining a "high consequence area." This heat intensity corresponds to a predicted 1% mortality rate for people, assuming they are exposed for 30 seconds while seeking shelter after the rupture, and a level where no nonpiloted ignition of wooden structures would occur, regardless of the exposure time. It is chosen because it corresponds to a level below which:

Property, as represented by a typical wooden structure, would not be expected to burn
People located indoors at the time of failure would likely be afforded indefinite protection, and
People located outdoors at the time of failure would be exposed to a finite but low chance of fatality [83].

If 5,000 Btu/hr-ft² is used, then the previous equation (for methane) simplifies to

r = 0.685 × sqrt(p × d²)

where
r = radius from pipe release point for the given radiant heat intensity (ft)
p = maximum pipeline pressure (psi)
d = pipeline diameter (inches).

Note that thermal radiation intensity levels only imply damage states. Actual damages are dependent on the quantity and types of receptors that are potentially exposed to these levels. A preliminary assessment of structures has been performed, identifying the types of buildings and distances from the pipeline.
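Reading the hazard area relationship as r = sqrt(2348 × p × d²/I), the simplified high consequence area form follows directly, since 0.685 is just the rounded value of sqrt(2348/5000). The sketch below checks the two forms against each other and applies linear interpolation to the Table 14.28 dose-response points; the pipeline parameters and the interpolation itself are illustrative assumptions, not values from the references:

```python
import math

# Table 14.28 dose-response points: heat flux (kW/m^2) vs. percent fatalities
FLUX     = [6.3, 8, 9.5, 12.6, 15.6, 19, 24, 31.5, 39, 47.5, 60]
OUTDOORS = [3, 11, 21, 49, 70, 85, 95, 100, 100, 100, 100]
INDOORS  = [0, 0, 0, 0, 3, 11, 21, 49, 70, 85, 95]

def jet_fire_radius_ft(p_psi, d_in, i_btu_hr_ft2):
    """Radius (ft) receiving intensity I: r = sqrt(2348 * p * d^2 / I)."""
    return math.sqrt(2348.0 * p_psi * d_in ** 2 / i_btu_hr_ft2)

def hca_radius_ft(p_psi, d_in):
    """Simplified form at the 5,000 Btu/hr-ft2 threshold (0.685 ~ sqrt(2348/5000))."""
    return 0.685 * math.sqrt(p_psi * d_in ** 2)

def percent_fatalities(flux_kw_m2, table=OUTDOORS):
    """Linearly interpolate between the tabulated Table 14.28 points (an assumption)."""
    if flux_kw_m2 < FLUX[0]:
        return 0.0                       # below tabulated range (assumption)
    if flux_kw_m2 >= FLUX[-1]:
        return float(table[-1])
    for x0, x1, y0, y1 in zip(FLUX, FLUX[1:], table, table[1:]):
        if x0 <= flux_kw_m2 <= x1:
            return y0 + (y1 - y0) * (flux_kw_m2 - x0) / (x1 - x0)

# Illustrative 30-in. line at 1,000 psi: the two radius forms agree (~650 ft)
r_full = jet_fire_radius_ft(1000, 30, 5000)
r_simple = hca_radius_ft(1000, 30)

# 5,000 Btu/hr-ft2 is ~15.8 kW/m2; Table 14.28 then implies ~71% outdoor fatalities
# for extended exposure, illustrating how strongly the 1% figure depends on the
# assumed 30-second exposure while seeking shelter.
pct = percent_fatalities(15.8)
```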
Pool fire damages

As an example of some hazard zone distances for a delayed-ignition, zero-wind gasoline pool fire event, the information in Table 14.29 was extracted from published work done in the United Kingdom [58]. Details of the calculation procedure were not presented in this reference. Note that the pool diameter is the most critical factor in most calculation procedures (see Chapter 7). Therefore, factors such as release rate, topography, and soil permeability are needed to estimate pool size. Table 14.30 is another example of gasoline pool hazards. This table also shows hazards from oil pools for comparison. While hazard distances are similar for oil and gasoline, note the significant differences in ignition probabilities between the products. Other examples of hazard zone distances can be found in the case studies later in this chapter and in examples of spill scoring shown in Chapter 7.
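Since pool diameter drives these calculations, an estimated spill footprint is often converted to an equivalent circular pool diameter before applying a pool fire model. A minimal sketch, assuming the circular-pool idealization (the function name is illustrative):

```python
import math

def equivalent_pool_diameter(spill_area):
    """Diameter of a circular pool having the given surface area (same length units)."""
    return 2.0 * math.sqrt(spill_area / math.pi)

# e.g., the 4,140-ft2 spill footprint listed in Table 14.30 corresponds to a
# roughly 73-ft-diameter circular pool.
d = equivalent_pool_diameter(4140.0)
```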
Vapor dispersion

As discussed in Chapter 7, the release of a gaseous pipeline product creates a vapor cloud, the extent of which can be defined by some concentration of the vapor mixed with air. A flammable limit is often chosen for hydrocarbon gases. The use of the lower flammability limit (LFL), the minimum concentration of gas that will support combustion, is the most common cloud boundary. It conservatively represents the maximum distance from the leak site where ignition could occur. Sometimes 1/2 of the LFL is used to allow for uneven mixing and the effects of random cloud movements. This lower concentration creates a larger cloud. In the case of a toxic gas, the cloud boundary must be defined in terms of toxic concentrations. Note that unignited sour gas (hydrogen sulfide, H2S) releases have been estimated to cause potential hazard zones 4 to 17 times greater than those from an ignited release [95].

The extent and cohesiveness of a vapor cloud are critical parameters in determining possible threats from that cloud, as discussed in Chapter 7. Meteorological conditions have a large influence on these parameters. In most dispersion modeling, an atmospheric stability class is assumed as part of the model requirements. This can be based on analyses of weather patterns in the area of interest or simply defaulted to conservative (worst case) conditions. Often, atmospheric class F (moderately stable) is chosen because it results in larger cloud sizes compared to conditions involving more wind or sunlight effects. The information shown in Table 14.31 is often used in determining the atmospheric stability class.
Vapor cloud explosion

The mechanisms thought to underlie the detonation of a vapor cloud are generally discussed in Chapter 7. This event potentially occurs as a vapor cloud combusts in such a rapid manner that a blast wave is generated. A confined vapor cloud explosion is more common than an unconfined one, but note that even in an atmospheric release, trees, buildings, terrain, etc., can create partial confinement conditions. Any explosive event can have associated missiles and high-velocity debris whose damage potentials have been dramatically demonstrated but are very difficult to accurately model. The explosive potential of a vapor cloud is related to its mass, heat of combustion, and the amount of total energy that would contribute to the explosive event: the yield factor. Yield factors are critical for the calculation, but are the least precisely known variable in the calculation. They generally range from 2 to 20%, and some representative values are shown in Table 14.32.
Table 14.29 Sample hazard zone distances
Soil type
Release rate (kg/s)
16 in. (406 mm)
Average Clay Average Average Clay Average Average Clay
205
100
78
126
164
100 85 100 46 19 13
78 70 78
126
12.75 in. (324 mm) 8.625 in. (219 mm) 6.625 in. (168 mm) 10-mm leak
100 100 30 5.3 5.3
Pool diameter (m)
Radial distance to 10 kW/m² (m)
Hole diameter
Flame length (m)
46
25 63
110
126 65 30 96
Source: Morgan, E., "The Importance of Realistic Representation of Design Features in the Risk Assessment of High-Pressure Gas Pipelines," presented at Pipeline Reliability Conference, Houston, TX, September 1995.
Note: Diameters shown are maximum spreading pool diameters reached before ignition. The diameters have been limited to 100 m maximum.
Table 14.30 Modeling results

  Maximum impact distance from pool centroid (ft)

  Release    Release        Spill surface   Flash fire         Pool fire, oil       Pool fire, gasoline
  location   volume (bbl)   area (ft²)      Gasoline   Oil     1 kW/m²   4 kW/m²    1 kW/m²   4 kW/m²
  A          50             4,140           165        36      396       207        476       246
  A          500            12,800          482        60      685       353        819       415
  A          1,269          20,300          514        73      858       439        1,019     511
  B          50             4,140           165        36      396       207        476       246
  B          500            12,800          482        60      685       353        819       415
  B          1,500          22,100          491        76      894       458        1,061     530
  B          4,777          39,000          603        96      1,178     598        1,390     686
  C          50             4,140           165        36      396       207        476       246
  C          500            12,800          482        60      685       353        819       415
  C          1,500          22,100          491        76      894       458        1,061     530
  C          3,448          33,200          613        90      1,089     555        1,282     633
  D          50             4,140           165        36      396       207        476       246
  D          500            12,800          482        60      685       353        819       415
  D          1,500          22,100          491        76      894       458        1,061     530
  D          6,251          44,500          489        101     1,255     636        1,485     734

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.
Note: Maximum distances (ft) measured from pool centroid.
Blast effects from an explosion are typically measured as overpressure, in units of pressure such as psi. Expected damages from various levels of overpressure are shown in Table 14.33. Note that an explosion can originate anywhere within the flammable limits, so the distances calculated for overpressure are additive to the flammability distances. Some modelers assume that the explosive epicenter occurs midway between the calculated flammability limit distances, whereas others more conservatively double the distance to the lower flammability limit (LFL) (to account for possible pockets of gas dispersing farther) and then assume that the explosive epicenter occurs at this 2x calculated LFL distance. Regardless of the assumptions, the cumulative distances will often far exceed hazard zones due to thermal effects alone.
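The additive treatment of epicenter location and blast standoff can be sketched as follows; approximating the "midway between flammability limit distances" case as simply 1x the LFL distance is an assumption made here for illustration:

```python
def blast_hazard_extent(d_lfl, d_overpressure, conservative=True):
    """Distance from the release point to an overpressure damage contour.

    conservative=True doubles the LFL distance before adding the blast
    standoff (epicenter assumed at 2x LFL, allowing for detached gas pockets);
    otherwise the epicenter is approximated at the LFL distance itself.
    Units are whatever d_lfl and d_overpressure are given in.
    """
    epicenter = 2.0 * d_lfl if conservative else d_lfl
    return epicenter + d_overpressure

# e.g., a 500-ft LFL distance plus a 300-ft overpressure contour -> 1,300 ft
extent = blast_hazard_extent(500, 300)
```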
Table 14.31 Atmospheric stability classes

  A  Extremely unstable conditions
  B  Moderately unstable conditions
  C  Slightly unstable conditions
  D  Neutral conditions (a)
  E  Slightly stable conditions
  F  Moderately stable conditions

  Surface wind   Daytime conditions:              Nighttime conditions:
  speed (mph)    strength of sunlight             cloud cover
                 Strong   Moderate   Slight       Thin overcast or        <=3/8
                                                  >=4/8 cloudiness (b)    cloudiness
  <4.5           A        A-B        B            E                       F
  4.5-6.7        A-B      B          C            E                       F
  6.7-11.2       B        B-C        C            D                       E
  11.2-13.4      C        C-D        D            D                       D
  >13.4          C        D          D            D                       D

Source: "ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation)," prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for Handbook of Chemical Hazard Analysis Procedures (approximate date 1989) and software for dispersion modeling, thermal, and overpressure impacts.
(a) Applicable to heavy overcast conditions, day or night.
(b) Degree of cloudiness = fraction of sky above the horizon covered by clouds.
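Table 14.31 follows the standard Pasquill-Gifford classification, which lends itself to a simple lookup. The sketch below assumes strict upper bounds on the wind-speed bands and the standard scheme's class assignments (the function and constants are illustrative, not from ARCHIE itself):

```python
# Wind-speed bands (mph, upper bounds) mapped to stability classes.
# Daytime columns: strong, moderate, slight sunlight.
# Nighttime columns: thin overcast or >=4/8 clouds, <=3/8 clouds.
DAY = [
    (4.5,  ("A", "A-B", "B")),
    (6.7,  ("A-B", "B", "C")),
    (11.2, ("B", "B-C", "C")),
    (13.4, ("C", "C-D", "D")),
    (float("inf"), ("C", "D", "D")),
]
NIGHT = [
    (4.5,  ("E", "F")),
    (6.7,  ("E", "F")),
    (11.2, ("D", "E")),
    (13.4, ("D", "D")),
    (float("inf"), ("D", "D")),
]

def stability_class(wind_mph, daytime, column):
    """column: 0/1/2 for day (strong/moderate/slight sunlight);
    0/1 for night (cloudy/clear)."""
    for upper, classes in (DAY if daytime else NIGHT):
        if wind_mph < upper:
            return classes[column]

print(stability_class(3, True, 0))    # light wind, strong sun -> "A"
print(stability_class(3, False, 1))   # light wind, clear night -> "F" (class often defaulted for worst case)
```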
Table 14.32 Representative yield factors

  Substance          Yield factor
  Butadiene          0.03
  Carbon monoxide    0.03
  Ethane             0.03
  Hydrogen           0.03
  Methane            0.03
  Methanol           0.03
  N-Butane           0.03
  Propane            0.03
  Styrene            0.03
  Toluene            0.03
  Vinyl chloride     0.03
  Ethylene           0.06
  Propylene oxide    0.06
  Acetylene          0.19
  Methyl acetylene   0.19
  Vinyl acetylene    0.19

Source: "ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation)," prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for Handbook of Chemical Hazard Analysis Procedures (approximate date 1989) and software for dispersion modeling, thermal, and overpressure impacts.
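One common way such yield factors are used, though not prescribed by this text, is a standard TNT-equivalence calculation: the cloud mass and heat of combustion are converted to an equivalent mass of TNT before applying blast scaling. A sketch, in which the TNT blast energy and propane heating value are typical handbook figures assumed for illustration:

```python
H_TNT_J_PER_KG = 4.68e6   # assumed TNT blast energy (typical handbook value)

def tnt_equivalent_kg(cloud_mass_kg, heat_of_combustion_j_per_kg, yield_factor):
    """TNT-equivalence: W_TNT = yield * mass * Hc / H_TNT."""
    return yield_factor * cloud_mass_kg * heat_of_combustion_j_per_kg / H_TNT_J_PER_KG

# A 1,000-kg propane cloud (Hc ~46.3 MJ/kg, assumed) at the tabulated 0.03 yield
# corresponds to roughly 300 kg of TNT.
w = tnt_equivalent_kg(1000, 46.3e6, 0.03)
```

The yield factor enters linearly, which is why its order-of-magnitude uncertainty (2 to 20%) dominates the uncertainty in predicted overpressure distances.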
Highly volatile liquids

HVL releases are complex, nonlinear processes, as discussed in Chapter 7. Hazards associated with the release of an HVL include several flammability scenarios, an explosion potential, and the more minor hazard of spilled material displacing air and asphyxiating creatures in the oxygen-free space created. The flammability scenarios of concern (previously described) include the following:

Flame jets.
Vapor cloud fire and/or fireball: a cloud encounters an ignition source and the entire cloud combusts as air and fuel are drawn together in a flash fire situation.
Liquid pool fires: not thought to be a very high potential for HVL releases unless ambient conditions are cold; a liquid pool of flammable material could form and create radiant heat hazards.
Vapor cloud explosion.

Because precise modeling is so difficult, many assumptions are often employed. Use of conservative assumptions helps to avoid unpleasant surprises and to ensure acceptability of the calculations, should they come under outside scrutiny. Some sources of conservatism that can be introduced into HVL hazard zone calculations include:

Overestimation of probable pipe hole size (can use full-bore rupture as an unlikely, but worst case, release)
Overestimation of probable pipeline pressure at release (assume maximum pressures)
Stable atmospheric weather conditions at time of release
Ground-level release event
Maximum cloud size occurring prior to ignition
Extremely rare unconfined vapor cloud explosion scenario with overpressure limits set at minimal damage levels
Overpressure effects distance added to ignition distance (assume explosion epicenter is at the farthest point from release).

These conservative parameters would ensure that actual damage areas are well within the hazard zones for the vast majority of pipeline release scenarios. Additional parameters that could be adjusted in terms of conservatism include the mass of cloud involved in the explosion event, overpressure damage thresholds, effects of mixing on LFL distance, weather parameters that might promote more cohesive cloud conditions and/or cloud drift, release scenarios that do not rapidly depressurize the pipeline, the possibility of sympathetic failures of adjacent pipelines or plant facilities, ground-level versus atmospheric events, and the potential for a high-velocity jet release of vapor and liquid in a downwind direction.

Table 14.33 Expected damage for various levels of overpressure

  Peak overpressure (psig)   Expected damage
  0.03                       Occasional breakage of large windows under stress
  0.3                        Some damage to home ceilings; 10% window breakage
  0.5-1.0                    Windows usually shattered; some frame damage
  1.0                        Partial demolition of homes; made uninhabitable
  2.0                        Partial collapse of home walls/roofs
  2.0-3.0                    Nonreinforced concrete/cinderblock walls shattered
  2.5                        50% destruction of home brickwork
  3.0-4.0                    Frameless steel panel buildings ruined
  5.0                        Wooden utility poles snapped
  5.0-7.0                    Nearly complete destruction of houses
  10                         Probable total building destruction
  14.5-29.0                  Range for 1-99% fatalities among exposed populations due to direct blast effects

Source: "ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation)," prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for Handbook of Chemical Hazard Analysis Procedures (approximate date 1989) and software for dispersion modeling, thermal, and overpressure impacts.
Hazard zone defaults In the absence of detailed hazard zone calculations, some default distances can be set based on regulatory requirements or conservative fixed distances. For example, a type of hazard zone for a natural gas pipeline could be based on generalized distances from specific receptors such as those given in Table 14.34. These are actually "distances of concern," rather than hazard zones, since they are based on receptor vulnerability rather than damage distances from a pipeline release. Case Study C uses a default 1250-ft radius around an 18-in. gasoline pipeline as a hazard zone, but allows for farther distances where modeling around specific receptors has shown that the topography supports a larger potential spill-impact radius.
Table 14.34 Sample "distances of concern" for natural gas pipelines

  Characteristic                                                       Distance (ft)
  Population class 3 or 4                                              660
  Hard-to-evacuate facilities (schools, day cares, prisons,
    elder care, rehabilitation clinics, etc.)                          800
  Hard-to-evacuate facilities, pipe diameter > 30 in.,
    and pressures > 1000 psig                                          1000
  Areas of public assembly                                             660
  Areas of public assembly, pipe diameter > 30 in.,
    and pressures > 1000 psig                                          1000
In cases of HVL pipeline modeling, default distances of 1000 to 1500 ft are commonly seen, depending on pipeline diameter, pressure, and product characteristics. HVL release cases are very sensitive to weather conditions and carry the potential for unconfined vapor cloud explosions, each of which can greatly extend impact zones to more than a mile. (See also the discussion of land-use issues in a following section for thoughts on setback distances that are logically related to hazard zones.)

A draft Michigan regulatory document suggests setback distances for buried high-pressure gas pipelines based on the HUD guideline thermal radiation criteria. The proposed setback distances are tabulated for pipeline diameters (from 4 to 26 in.) and pressures (from 400 to 1800 psig in 100-psig increments). The end points of the various tables are shown in Table 14.35. It is not known if these distances will be codified into regulations. In some cases, the larger distances might cause repercussions regarding alternative land uses for existing pipelines. Land-use regulations can have significant social, political, and economic ramifications, as discussed in Chapter 15.

The U.S. Coast Guard (USCG) provides guidance on the safe distance for people and wooden buildings from the edge of a burning spill in its Hazard Assessment Handbook, Commandant Instruction Manual M16465.13. Safe distances range widely depending on the size of the burning area, which is assumed to be on open water. For people, the distances vary from 150 to 10,100 ft, whereas for buildings the distances vary from 32 to 1,900 ft for the same size spill. The spill radii for these distances range between 10 and 2,000 ft [35]. A summary of setback distances was published in a consultant report and is shown in Table 14.36.
Table 14.35 Sample proposed setback distances
Minimum setback (ft)
Facility
Multifamily developments (10,000 Btu/hr-ft² criteria) Elderly and handicapped units Unprotected areas of congregation (450 Btu/hr-ft² criteria) Primary egress
4-in. pipeline at 400 psig
26-in. pipeline at 1800 psig
Table 14.36 Summary of setback requirements in codes, standards, and other guides

  Code, standard, guide              Setback requirement for     Variables
                                     tank from public (ft)
  IFC 2000 (adopted in Alaska and    5-175                       Tank size and type of adjacent use
    proposed in municipality of
    Anchorage)
  UFC 2000 (pre-2001 in Alaska)      5-175                       Tank size and type of adjacent use
  UFC 1997                           50-75                       Type of adjacent use
  APA                                Performance standard        Site specific and process driven
  HUD                                Buildings: 130-155;         Product and tank size
                                     People: 650-775
  USCG (open-water fire)             150->10,000                 Diameter of spill

Source: Golder and Associates, "Report on Hazard Study for the Bulk POL Facilities in the POA Area," prepared for Municipality of Anchorage POL Task Force, August 9, 2002.
Notes: APA, American Planning Association; USCG, U.S. Coast Guard; HUD, Department of Housing and Urban Development. The National Fire Protection Association (NFPA) publishes NFPA Code 30, Flammable and Combustible Liquids Code, 2000 Edition. The International Code Council publishes the International Fire Code 2000 (IFC). The Western Fire Chiefs Association publishes the Uniform Fire Code 2000 Edition (UFC).
Any time default hazard zone distances replace situation-specific calculations, the defaults should be validated by actual calculations to ensure that they encompass most, if not all, possible release scenarios for the pipeline systems being evaluated.
XI. Case studies

The following case studies illustrate some techniques that are more numerically rigorous in producing absolute risk estimates. These are all extracted from public domain documents readily obtained from Internet sources and/or proceedings from regulatory approval processes. Company names and locations have been changed since the focus here is solely on illustrating the technique. Other minor modifications to the extracted materials include the changing of table, figure, and reference numbering to correspond to the sequencing in this book.
Case Study A: natural gas

Quantitative risk calculations for XYZ pipeline
67
318 772
147
3164
40
1489
40
The following case study illustrates the estimation of risk using calculated hazard zones and frequency-based failure frequencies for a natural gas pipeline. Portions of this discussion were extracted from or are based on Ref. [18], in which a proposed high-pressure gas pipeline, having both onshore and offshore components, was being evaluated. For this example, the proposed
pipeline name is XYZ and the owner/operator company will be called ACME. In this case, a relative risk assessment has been performed, but it is to be supplemented by an evaluation of risks presented in absolute terms. The introduction very briefly describes the purpose and scope of the analysis:

This document presents preliminary estimates of risks to the public that might be created by the proposed operation of the XYZ pipeline. The additional risk calculations build on the worst case estimates already provided in the regulatory application and will be used for emergency response planning. This analysis is preliminary and requires verification and review before use in connection with emergency planning.
A frequency of failures, fatalities, and injuries is estimated based on available data sets. As used here, "failure" refers to an incident that triggers the necessity of filing a report to the governing regulatory agency, so failure counts are counts of "reportable incidents." The failure frequency estimates are also later used with hazard area calculations.
Normalized frequency-based probabilistic risk estimates
Risk is examined in two parts: probability of a pipeline failure and consequences of a failure. In order to produce failure probabilities for a specific pipeline that is not yet operational, a failure frequency estimate based on other pipeline experience is required. Four sets of calculations, each based on a different underlying failure frequency, have been performed to produce four risk estimates for the proposed XYZ pipeline. The estimates rely on frequencies of reportable incidents, fatalities, and injuries as recorded in the referenced databases. The incident rate is used to calculate the probability of failure and the fatality/injury rates are used to estimate consequences. The frequency estimates that underlie each of the four cases are generally described as follows:
Case 1. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" Acme-owned (ACME) gas transmission pipeline. For this case, ACME system leak experiences are used to predict future performance of the subject pipeline.
Case 2. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" Canadian gas transmission pipeline.
In this case, the Canadian Transportation Safety Board historical leak frequency is used to predict future performance of the subject pipeline.
Case 3. The subject pipeline is assumed to behave exactly like a hypothetical, statistically "average" U.S. gas transmission pipeline. In this case, the U.S. historical leak frequency is used to predict future performance of the subject pipeline.
Case 4. The subject pipeline is assumed to behave like some U.S. gas transmission pipelines; in particular, those with similar diameter, age, stress level, burial depth, and integrity verification protocols. In this case, the U.S. historical leak frequency is used as a starting point to predict future performance of the subject pipeline.
In all cases, failures are as defined by the respective regulations ("reportable accidents") using regulatory criteria for reportable incidents. The calculation results for the four cases applied to the proposed 37.3 miles (60.0 km) of XYZ pipeline are shown in Table 14.37. The preceding part of this analysis illustrates a chief issue regarding the use of historical incident frequencies: in order for past frequencies to appropriately represent future frequencies, the past frequencies must be from a population of pipelines that is similar to the subject pipeline. As is seen in the table, basing the future fatality and injury rate on the experiences of the first two populations of pipelines results in an estimate of zero future such events, since none have occurred in the past. The last column presents annual probability numbers for individuals. Such numbers are often desired so that risks can be compared to other risks to which an individual might be exposed. In this application, the individual risk was assumed to be the risks from 2000 ft of pipeline, 1000 ft either side of a hypothetical leak location.
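The individual-risk numbers in the last column of Table 14.37 can be approximated from the whole-line frequencies and the stated 2000-ft exposure assumption. A minimal sketch (the function name and variable names are illustrative, not from the source):

```python
# Hypothetical sketch: convert a pipeline-wide fatality frequency into an
# annual individual risk, using the stated assumption that an individual is
# exposed to 2000 ft of pipeline (1000 ft either side of a leak location).

PIPELINE_MILES = 37.3    # proposed XYZ pipeline length
EXPOSED_FEET = 2000.0    # assumed exposure length per individual
FEET_PER_MILE = 5280.0

def individual_annual_risk(events_per_year: float) -> float:
    """Scale a whole-line event frequency down to the 2000-ft exposure length."""
    per_mile_year = events_per_year / PIPELINE_MILES
    return per_mile_year * (EXPOSED_FEET / FEET_PER_MILE)

# Case 3 (U.S. data): 0.00044 fatalities/year over 37.3 miles
risk = individual_annual_risk(0.00044)
print(f"{risk:.1e}")  # ~4.5e-06; Table 14.37 lists 4.8E-06
```

The small difference from the tabulated 4.8E-06 presumably reflects rounding in the source, but the order of magnitude is reproduced.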
Case 4 discussion
Case 4 produces the best point estimate of risk for the XYZ pipeline. Note that all estimates suggest that the XYZ pipeline will experience no reportable failures during its design life. Probabilities of injuries and/or fatalities are extremely low in all cases. The U.S. DOT database of pipeline failures provides the best set of pertinent data from which to infer a failure frequency. It is used to support calculations for Cases 3 and 4 above. Primarily basing failure calculations on U.S. statistics, rather than Canadian, is appropriate because:
Table 14.37 Calculations for Cases 1 through 4

Comparison criteria | Failures per year | Injuries per year | Fatalities per year | Years to failure | Years to injury | Years to fatality | Annual probability of an individual fatality (5)
Case 1: ACME (1) | 0.01055 | 0 | 0 | 100.4 | Never | Never | 0
Case 2: Canada (2) | 0.01200 | 0 | 0 | 83.3 | Never | Never | 0
Case 3: U.S. (3) | 0.01015 | 0.00167 | 0.00044 | 98.6 | 600.2 | 2278.8 | 4.8E-06
U.S. liquid (3) | 0.04344 | 0.00348 | 0.00050 | 23.0 | 287.4 | 1987.6 | 4.7E-06
Case 4: U.S. adjusted (4) | 0.00507 | 0.00084 | 0.00022 | 197.26 | 1,200.4 | 4557.6 | 2.4E-06

Notes:
1. ACME, all Acme gas transmission systems, 1986-2000.
2. TSB, Canadian gas transmission pipelines, 1994-1998; only one fatality (in 1985, third-party excavation) reported for NEB jurisdictional pipelines since 1959; a significant change in the definition of reportable incidents occurred in 1989.
3. OPS, U.S. gas transmission pipelines, 1986-2002.
4. Adjusted by assuming the failure rate of the subject pipeline is ~50% of the U.S. gas transmission average, by the rationale discussed.
5. Assumes an individual is threatened by 2000 ft of pipe (directly over pipeline, 1000 ft either side, 24/7 exposure); 2000 ft is chosen as a conservative length based on hazard zone calculations.
14/314 Absolute Risk Estimates

- More complete data are available (larger historical failure database and data are better characterized).
- Strong influence by a major U.S. operator on design, operations, and maintenance.
- Similar regulatory codes, pipeline environments, and failure experiences.
- Apparently similar failure experience between the countries.
Since the combined experience of all U.S. pipelines cannot realistically represent this pipeline's future performance (it may "encompass" this pipeline, but not represent it), a suitable comparison subset of the data is desired. Variables that tend to influence failure rates, and hence are candidates for criteria by which to divide the data, include time period, location, age, diameter, stress level, wall thickness, product type, depth of cover, etc. Unfortunately, no database can be found that is complete enough to allow such characterization of a subset. Therefore, it is reasonable to supplement the statistical data with adjustment factors to account for the more significant differences between the subject pipeline and the population of pipelines from which the statistics arise. Rationale supporting the adjustment factors is as follows:

- Larger diameter: <10% of failures in the complete database involve larger diameters (a 90+% benefit from larger diameter is implied by the database, but only a 25% reduction in failures is assumed).
- Lower stress decreases failure rate by 10% (assumption based on the role of stress in many failure mechanisms).
- New coating decreases failure rate by 5% (assumption; note the well-documented problem with PE tape coatings in Canada).
- New IMP (integrity management program) procedures decrease failure rate by 10% (assumption based on judgment of the ability of an IMP to interrupt an incident event sequence).
- Deeper cover decreases failure rate by 10% (2 ft of additional depth is estimated to be worth a 30% reduction in third-party damages according to one European study, so a 10% reduction in overall failures is assumed).
- More challenging offshore environment leads to a 10% increase in failures (somewhat arbitrary assumption; conservative since there are no known unresolved offshore design issues).
Combining these factors leads to the use of a ~50% reduction from the average U.S. gas transmission failure rate. This is conservative, accepting a bias on the side of over-predicting the failure frequency. Additional conservatism comes from the omission of other factors that logically would suggest lower failure frequencies. Such factors include:

- Initial failure frequency is derived from pipelines that are predominantly pre-1970 construction; there are more stringent practices in current pipe and coating manufacture and pipeline construction.
- Better one-call systems (more often mandated, better publicized, in more common use).
- Better continuing public education.
- Designed and mostly operated to Class 3 requirements, where Class 3 pipelines have lower failure rates compared to other classes from which baseline failure rates have been derived.
- Leaks versus ruptures (leaks are less damaging, but are counted if reporting criteria are triggered).
- Company employee fatalities are included in frequency data, even though general public fatalities/injuries are being estimated.
- Knowledge that frequency data do not represent the event of "one or more fatalities," even though that is the event being estimated.
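The stated factors can be combined in more than one way; the source does not show the arithmetic behind its ~50% figure. A simple additive sum of the listed percentages gives exactly 50%, while treating them as independent multiplicative effects gives roughly 43%. A sketch (labels and structure are mine):

```python
# Illustrative sketch of combining the stated adjustment factors.
# The source quotes a resulting ~50% reduction from the average U.S. rate.

adjustments = {
    "larger diameter": -0.25,
    "lower stress": -0.10,
    "new coating": -0.05,
    "new IMP procedures": -0.10,
    "deeper cover": -0.10,
    "offshore environment": +0.10,
}

# Additive: simply sum the percentage changes.
additive_reduction = -sum(adjustments.values())  # 0.50

# Multiplicative: treat each factor as an independent scaling of the rate.
multiplicative = 1.0
for delta in adjustments.values():
    multiplicative *= (1.0 + delta)
multiplicative_reduction = 1.0 - multiplicative  # ~0.43

print(f"additive: {additive_reduction:.0%}, "
      f"multiplicative: {multiplicative_reduction:.0%}")
```

Either way the result is in the neighborhood of the ~50% reduction the text adopts, which is itself conservative relative to the 90+% benefit the database implies for diameter alone.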
Model-based failure consequence estimates
An analysis of consequence, beyond the use of the historical fatality/injury rate described above, has also been undertaken. The
severity of consequences (solely from a public safety perspective) associated with a pipeline’s failure depends on the extent of the product release, thermal effects from potential ignition of the released product, and the nature of any damage receptors within the affected area. The area affected is primarily a function of the pipeline’s diameter, pressure, and weather conditions at the time of the event. Secondary considerations include characteristics of the area including topography, terrain, vegetation, and structures.
Failure discussion
The potential consequences from a pipeline release will depend on the failure mode (e.g., leak versus rupture), discharge configuration (e.g., vertical versus inclined jet, obstructed versus unobstructed), and the time to ignite (e.g., immediate versus delayed). For natural gas pipelines, the possibility of a significant flash fire or vapor cloud explosion resulting from delayed remote ignition is extremely low due to the buoyant nature of gas, which prevents the formation of a persistent flammable vapor cloud near common ignition sources. ACME applied a "Model of Sizing High Consequence Areas (HCAs) Associated with Natural Gas Pipelines" [83] to determine the potential worst case ACME Pipeline failure impacts on surrounding people and property. The Gas Research Institute (GRI) funded the development of this model for U.S. gas transmission lines in 2000, in association with the U.S. Office of Pipeline Safety (OPS), to help define and size HCAs as part of new integrity management regulations. This model uses a conservative and simple equation that calculates the size of the affected worst case failure release area based on the pipeline's diameter and operating pressure.
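The GRI/C-FER relationship referenced here is commonly written as r = 0.69 d √p, with r the "potential impact radius" in feet, d the pipe diameter in inches, and p the maximum operating pressure in psig; the coefficient corresponds to the 5000 Btu/hr-ft² heat-intensity threshold. A sketch (function name is mine):

```python
# Sketch of the commonly cited potential impact radius (PIR) equation for
# natural gas transmission lines: r = 0.69 * d * sqrt(p).
import math

def potential_impact_radius_ft(diameter_in: float, pressure_psig: float) -> float:
    """Radius (ft) to the 5000 Btu/hr-ft^2 heat intensity threshold."""
    return 0.69 * diameter_in * math.sqrt(pressure_psig)

# A 16-in. line at the 2220-psig MAOP used in this case study:
print(round(potential_impact_radius_ft(16, 2220)))  # ~520 ft
```

This is consistent with the ~516-ft 5000 Btu/hr-ft² distance later shown for the 16-in., 2220-psig scenario in Table 14.41 (the small difference reflects rounding of the coefficient).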
Failure scenarios
There are an infinite number of possible failure scenarios encompassing all possible combinations of failure parameters. For evaluation purposes, nine different scenarios are examined, involving permutations of three failure (hole) sizes and three possible pressures at the time of failure. These are used to represent the complete range of possibilities so that all probabilities sum to 100%. Probabilities of each hole size and pressure are assigned, as are probabilities for ignition in each case. For each of the nine cases, four possible damage ranges (resulting from thermal effects) are calculated. Parameters used in the nine failure scenarios are shown in Table 14.38.
Table 14.38 Parameters for the nine failure scenarios under discussion

Hole size (in.) | Probability of occurrence (%) | Comments
50% to full-bore rupture (8-16) | 20 | Possible result of third-party damage or land movement
0.5-8 | 40 | Corrosion or material defect related
<0.5 | 40 |

Pressure (psig) | Probability of occurrence (%) | Comments
1800-2220 (2220 psig is used) | 20 | The contract delivery pressure is 1800 psig; current connection pressures normally are ~800 psig; >1800 psig would not be normal.
1500-1800 (1800 psig is used) | 70 |
<1500 (1500 psig is used) | 10 |
For ACME Pipeline release modeling, a worst case rupture is assumed to be a guillotine-type failure, in which the hole size is equal to the pipe diameter, at the pipeline's 15,305-kPa (2220-psig) maximum allowable operating pressure (MAOP). This worst case rupture is further assumed to include a double-ended gas release that is almost immediately ignited and becomes a trench fire. Note that the majority of the ACME Pipeline will normally operate well below its post-installation, pressure-tested MAOP in Canada. Anticipated normal operating pressures in Canada are in the range of 800 to 1100 psig, even though this pressure range is given only a 10% probability and all other scenarios conservatively involve higher pressures. Therefore the worst case release modeling assumptions are very conservative and cover all operational scenarios up to the 15,305-kPa (2220-psig) MAOP at any point along the pipeline. Other parameters used in the failure scenario cases are ignition probability and thermal radiation intensity (Table 14.39). Ignition probability estimates usually fall in the range of 5 to 12% based on pipeline industry experience; 65% is conservatively used in this analysis. The four potential damage ranges that are calculated for each of the nine failure scenarios are a function of thermal radiation intensity. The thresholds were chosen to represent specific potential damages that are of interest and are described generally in Table 14.40.
Reference [83] recommends the use of 5000 Btu/hr-ft² as a heat intensity threshold for defining a "high consequence area." It is chosen because it corresponds to a level below which:
- Property, as represented by a typical wooden structure, would not be expected to burn;
- People located indoors at the time of failure would likely be afforded indefinite protection; and
- People located outdoors at the time of failure would be exposed to a finite but low chance of fatality.
Note that these thermal radiation intensity levels only imply damage states. Actual damages are dependent on the quantity and types of receptors that are potentially exposed to these levels. A preliminary assessment of structures has been performed, identifying the types of buildings and distances from the pipeline. This information is not yet included in these calculations but will be used in emergency planning.
Table 14.40 Four potential damage ranges for each of the nine failure scenarios under discussion

Thermal radiation level (Btu/hr-ft²) | Description
12,000 | 100% mortality in ~30 sec
5,000 | 1% mortality in ~30 sec
4,000 | Eventual wood ignition
1,600 | Onset of injury in ~30 sec
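For reference, the Btu/hr-ft² thresholds above can be converted to the kW/m² units used later in Case Study B's Table 14.42 (1 Btu/hr-ft² = 1055.06 J per 3600 s per 0.0929 m²):

```python
# Unit conversion between heat-flux thresholds: Btu/hr-ft^2 to kW/m^2.
# 1 Btu = 1055.06 J; 1 ft^2 = 0.09290304 m^2; 1 hr = 3600 s.
BTU_HR_FT2_TO_KW_M2 = 1055.06 / 3600.0 / 0.09290304 / 1000.0  # ~0.0031546

for level in (12000, 5000, 4000, 1600):
    print(level, "Btu/hr-ft^2 ~", round(level * BTU_HR_FT2_TO_KW_M2, 1), "kW/m^2")
```

The 5000 Btu/hr-ft² HCA threshold works out to about 15.8 kW/m², close to the 15.6 kW/m² damage state used by the investigators in Case Study B.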
Role of leak detection in consequence reduction
The nine failure scenarios analyzed represent the vast majority of all possible failure scenarios. Leak detection plays a relatively minor role in minimizing hazards to the public in most of these possible scenarios. Therefore, the analysis presented is not significantly impacted by any assumptions relative to leak detection capabilities. This is especially true since the damage states use an exposure time of ~30 seconds in the analysis.

Table 14.39 Additional parameters for the nine failure scenarios under discussion

Hole size (in.) | Ignition probability, given failure has occurred (%) | Comments
50% to full-bore rupture (8-16) | 40 | Larger release rates, as driven by larger hole diameters, may find more ignition sources due to the more violent nature of rupture and larger volumes of gas.
0.5-8 | 20 |
<0.5 | 5 |

Results
Results of calculations involving nine failure scenarios and four damage (consequence) states, as measured by potential thermal radiation intensity, are shown in Table 14.41. The nine cases are shown graphically in Figure 14.3. The rightmost end of each bar represents the total distance of any consequence type. The farthest extent of each damage type is shown by the right-most end point of the consequence type's color. These nine cases can also be grouped into three categories as shown in Figure 14.4, which illustrates that 11% of all possible failure scenarios would not have any of the specified damages beyond 29 ft from the failure point. Of all possible failure scenarios, 55% (44% + 11%) would not have any specified damages beyond 457 ft. No failure scenario is envisioned that would produce the assessed damage states beyond 913 ft. In these groupings, the worst case (largest distance) is displayed. For example, the specific damage types can be interpreted from the chart as follows: Given a pipeline failure, 100% (~44% + ~44% + ~11%) of the possible damage scenarios have a fatality range of 333 ft or less (the longest bar). There is also a 56% chance that, given a pipeline failure, the fatality range would be 167 ft or less (the second longest bar).

Case Study B: natural gas
Table 14.42 shows results of modeling as described in Ref. [67]. The analyses were performed on a 150-mm-diameter natural gas pipeline using various pressures and hole sizes with corresponding release rates and ignition probabilities. Two damage states, based on thermal radiation levels, were of interest to these investigators. Failure probabilities are based on European gas data with adjustment factors as shown in Table 14.7.

Case Study C: gasoline
This case study is extracted from Appendix 9B of Ref. [86], which is an environmental assessment (EA) of a proposed ~700-mile-long gasoline pipeline, called LPP, from Houston to El Paso in the state of Texas. Portions of this pipeline existed and new portions were to be constructed. Existing portions were in crude oil service under the former ownership of a company herein referred to as EPC. MTBE refers to a gasoline additive that was being contemplated. This additive makes the gasoline more environmentally persistent and hence increases the chronic product hazard. This EA
Table 14.41 Nine failure scenario results

Impact distances (ft) are those affected by the specified Btu/hr-ft² thermal intensity.

Scenario | Hole (in.) | Pressure (psig) | 12,000 | 5,000 | 4,000 | 1,600 | Hole (%) | Pressure (%) | Ignition (%) | Damage state if failure (%) | Damage state over project | Annual probability of an individual experiencing a damage state
1 | 16 | 1800 | 300 | 465 | 520 | 822 | 20 | 70 | 40 | 31 | 3.23E-04 | 7.65E-06
2 | 8 | 1800 | 150 | 232 | 260 | 411 | 40 | 70 | 20 | 31 | 3.23E-04 | 1.53E-05
3 | 16 | 2220 | 333 | 516 | 578 | 913 | 20 | 20 | 40 | 9 | 9.24E-05 | 2.19E-06
4 | 8 | 2220 | 167 | 258 | 289 | 457 | 40 | 20 | 20 | 9 | 9.24E-05 | 4.37E-06
5 | 0.5 | 1800 | 9 | 15 | 16 | 26 | 40 | 70 | 5 | 8 | 8.08E-05 | 1.53E-05
6 | 16 | 1500 | 274 | 424 | 475 | 751 | 20 | 10 | 40 | 4 | 4.62E-05 | 1.09E-06
7 | 8 | 1500 | 137 | 212 | 237 | 375 | 40 | 10 | 20 | 4 | 4.62E-05 | 2.19E-06
8 | 0.5 | 2220 | 10 | 16 | 18 | 29 | 40 | 20 | 5 | 2 | 2.31E-05 | 4.37E-06
9 | 0.5 | 1500 | 9 | 13 | 15 | 23 | 40 | 10 | 5 | 1 | 1.15E-05 | 2.19E-06
Total | | | | | | | | | | 100 | 1.04E-03 |

Notes: Failure rate used is 0.0005 failures per mile-year as calculated in Case 4 of the normalized, frequency-based probabilistic calculations. Probability of one or more damage states over the life of the project is 1.04E-03. This calculation uses failure frequency for 2000 ft of pipe and assumes an individual is directly over the pipeline continuously (24/7) and therefore continuously exposed to the potential damage states for 40 years.
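The "damage state if failure" column of Table 14.41 can be reproduced by weighting each scenario by hole probability × pressure probability × ignition probability and normalizing over the nine ignited scenarios. A sketch (scenario tuples transcribed from Tables 14.38, 14.39, and 14.41; variable names are mine):

```python
# Reproduce the "damage state if failure (%)" column of Table 14.41.
# Each tuple is (hole prob, pressure prob, ignition prob) for scenarios 1-9.
scenarios = [
    (0.20, 0.70, 0.40),  # 1: 16 in., 1800 psig
    (0.40, 0.70, 0.20),  # 2:  8 in., 1800 psig
    (0.20, 0.20, 0.40),  # 3: 16 in., 2220 psig
    (0.40, 0.20, 0.20),  # 4:  8 in., 2220 psig
    (0.40, 0.70, 0.05),  # 5: 0.5 in., 1800 psig
    (0.20, 0.10, 0.40),  # 6: 16 in., 1500 psig
    (0.40, 0.10, 0.20),  # 7:  8 in., 1500 psig
    (0.40, 0.20, 0.05),  # 8: 0.5 in., 2220 psig
    (0.40, 0.10, 0.05),  # 9: 0.5 in., 1500 psig
]

weights = [h * p * i for h, p, i in scenarios]
total = sum(weights)
percentages = [round(100 * w / total) for w in weights]
print(percentages)  # [31, 31, 9, 9, 8, 4, 4, 2, 1], matching the table
```

This confirms that the tabulated conditional probabilities are simply the joint hole/pressure/ignition probabilities renormalized to sum to 100%.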
received unprecedented attention due to several factors, including the very environmentally sensitive areas that would be crossed, the age of the existing portions, and economic/competitive issues related to the intended product transport. The "LMP" refers to a document specifying mitigation measures to be taken to reduce risks.
Appendix F and Chapter 7 also contain information from this EA [86] and have been referenced in the following excerpt.
Executive Summary
This report presents estimated impact frequencies and probabilities of nine different potential impacts along the LPP pipeline. The potential
Figure 14.3 Relative probabilities of the nine possible hazard zones. [Bar chart titled "Damage Cases If Failure Occurs": for each scenario probability (31%, 31%, 9%, 9%, 8%, 4%, 4%, 2%, 1%), the distance from source (0-1000 ft) reached by each damage state: fatality, 5000 Btu/hr-ft², eventual fire, and injury.]
Figure 14.4 Relative probabilities of three possible hazard zones. [Bar chart titled "Damage Cases If Failure Occurs": distance from source (0-1000 ft) for the damage states fatality, 5000 Btu/hr-ft², eventual fire, and injury, with the nine scenarios grouped into three categories.]
Table 14.42 Results of modeling for Case Study B

Pressure (MPa) | Hole size (mm) | Release rate (kg/s) | Ignition probability | Downwind distance to 6.3 kW/m² (m) | Downwind distance to 15.6 kW/m² (m)
3 | 5 | 0.07 | 0.0029 | 0.3 | N.A.
3 | 25 | 1.83 | 0.023 | 10 | 1.3
3 | 70 | 14.32 | 0.0861 | 32.2 | 14.9
3 | 100 | 29.23 | 0.1362 | 46.4 | 22.5
3 | 150 | 65.78 | 0.2293 | 68.8 | 34.7
5.6 | 5 | 0.13 | 0.0043 | 2.1 | N.A.
5.6 | 25 | 3.35 | 0.0339 | 14.3 | 4.8
5.6 | 70 | 26.16 | 0.1268 | 43.5 | 21
5.6 | 100 | 53.39 | 0.2005 | 61.5 | 30.8
5.6 | 150 | 120.12 | 0.3 | 91.8 | 46.8
7 | 5 | 0.17 | 0.0049 | 1 | N.A.
7 | 25 | 4.16 | 0.039 | 16.1 | 6
7 | 70 | 32.53 | 0.1459 | 48.3 | 23.6
7 | 100 | 66.39 | 0.2306 | 68.4 | 34.4
7 | 150 | 149.38 | 0.3 | 101.6 | 52.2
8.5 | 5 | 0.2 | 0.0056 | 1.6 | N.A.
8.5 | 25 | 5.04 | 0.0441 | 18.1 | 7.1
8.5 | 70 | 39.36 | 0.1649 | 52.7 | 26.1
8.5 | 100 | 80.33 | 0.2606 | 75.2 | 37.8
8.5 | 150 | 180.73 | 0.3 | 111.3 | 57.6
15 | 5 | 0.35 | 0.008 | 2.9 | N.A.
15 | 25 | 8.82 | 0.0631 | 24.4 | 10.7
15 | 70 | 68.94 | 0.2363 | 69.3 | 34.9
15 | 100 | 140.7 | 0.3 | 97.7 | 50.3
15 | 150 | 316.58 | 0.3 | 144.5 | 76.2
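The ignition probabilities in Table 14.42 grow with release rate and saturate at 0.3. They are well described by a simple power-law correlation; the constants below were inferred by fitting the tabulated values and are not stated in Ref. [67], so treat this as a descriptive sketch only:

```python
# Inferred (not from the source) power-law fit to Table 14.42's ignition
# probabilities as a function of release rate Q in kg/s, capped at 0.3:
#     P(ignition) ~ min(0.3, 0.0156 * Q**0.642)

def ignition_probability(q_kg_s: float) -> float:
    return min(0.3, 0.0156 * q_kg_s ** 0.642)

# Spot checks against the tabulated (release rate, ignition probability) pairs:
for q, tabulated in [(1.83, 0.023), (14.32, 0.0861), (65.78, 0.2293), (316.58, 0.3)]:
    print(q, round(ignition_probability(q), 4), tabulated)
```

The close agreement suggests the investigators used a release-rate-dependent ignition model of roughly this form, with an upper bound of 30%.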
impacts are those associated with the proposed project, transporting refined products from Houston to El Paso at a maximum rate of 225,000 barrels per day (bpd). Impact frequencies are calculated for several scenarios involving various combinations of leak frequencies, spill sizes, and receptor vulnerabilities. Selected scenarios are also presented as leak probabilities. The calculations in this report offer some quantitative support to the findings of the EA, but, due to the uncertainties involved in such calculations, they are not the primary basis of the EA findings. Post-mitigation impact frequencies (Case 4 as described below) are calculated to be 10 to 30 times lower than pre-mitigation and industry average frequencies. Estimated post-mitigation leak frequencies for the modeled potential impacts are tabulated [in Table 1]. The frequencies shown in Table 1 are converted to probabilities and shown in Table 2. These estimates are supported by a combination of quantitative and qualitative information as described in this report. Nevertheless, there is a high level of uncertainty associated with these estimates, primarily due to the limited amount of data available.

Introduction
This report presents results of calculations that estimate frequencies of nine different potential impacts along the LPP pipeline. Impact frequencies are calculated for scenarios involving various combinations of leak frequencies, spill sizes, and receptor vulnerabilities. Selected probabilities were also calculated, using the frequencies and assuming a Poisson distribution of events. The calculations in this report offer some quantitative support to the findings of the EA, but, due to the uncertainties involved in such calculations, they are not the primary basis of the EA findings. For the purposes of this report, "overall risk" is defined as the risks to receptors along the entire pipeline length over a period of 50 years. "Segment-specific risk" is defined as the risk to a point receptor that is presented by 2,500 ft of the pipeline, over a period of 50 years. In this usage, the pipeline segment-specific risk is essentially the overall risk normalized to a length of 2,500 ft. Except in special circumstances, a point receptor is exposed to risks from leaks occurring along a maximum pipeline length of 2,500 ft. The basis for this "impact zone" is described in the EA. Longer receptors such as aquifers are exposed to multiples of the segment-specific risks, in proportion to their lengths. It is useful to examine a shorter length of pipeline in order to show risks that are more representative of individual receptor risks and are more comparable to other published risk criteria. This report uses some special terminology that is defined as follows: "Reportable" refers to 49 CFR Part 195 criteria for formal reporting of accidents. A spill size of 50 barrels (bbl) is one of the triggers requiring the accident to be reported. Therefore, most OPS spill data contain spills of 50 bbl or greater, although there are some cases where a different criterion has mandated the reporting of an incident. A number of spills with volumes of less than 50 barrels are reported even though such reporting is not apparently required. Because of the uncertainties associated with the reported spills of less than 50 barrels, "Reportable" is considered to include only spills of 50 barrels or more. "Index sum" refers to the EA relative risk model's measure of relative probability of failure. "Post-mitigation" means the condition of and risks to the pipeline after full and complete achievement of all aspects of the mitigation plan (LMP). This includes the establishment of specified ongoing

Table 1 Calculated Post-Mitigation Frequency of Selected Impacts
Average mitigated leak rate per mile-year: 0.00007
Predicted leak count for 700 miles and 50 years: 2.6

Potential Impact | Overall risk: frequency over life of project | Overall risk: annual frequency | Segment-specific risk: frequency over life of project | Segment-specific risk: annual frequency
Drinking water contamination | 0.005 | 0.00010 | 0.00000346 | 0.0000000692
Drinking water contamination, no MTBE | 0.003 | 0.000051 | 0.00000173 | 0.0000000346
Fatality | 0.005 | 0.00011 | 0.00000356 | 0.0000000712
Injury | 0.024 | 0.00047 | 0.00001600 | 0.00000032
Recreational water contamination | 0.087 | 0.00174 | 0.0000588 | 0.00000118
Prime agricultural land contamination | 0.035 | 0.00070 | 0.0000238 | 0.00000048
Wetlands contamination | 0.051 | 0.00101 | 0.0000462 | 0.00000092
Lake Travis drinking water supply contamination | 0.00019 | 0.0000038 | 0.00000013 | 0.0000000026
Edwards Aquifer water contamination | 0.00019 | 0.0000039 | 0.00000013 | 0.0000000026
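The segment-specific columns follow from the normalization described in the Introduction: the overall (700-mile, 50-year) risk is scaled to a 2,500-ft segment. A sketch (variable names are mine; the ~2% difference from the tabulated value presumably reflects rounding in the source):

```python
# Sketch of the stated overall-to-segment-specific normalization for Table 1.
PIPELINE_MILES = 700.0
SEGMENT_FEET = 2500.0
FEET_PER_MILE = 5280.0
PROJECT_YEARS = 50.0

overall_life = 0.005  # drinking water contamination, frequency over project life
segment_life = overall_life * SEGMENT_FEET / (PIPELINE_MILES * FEET_PER_MILE)
annual = segment_life / PROJECT_YEARS

print(f"{segment_life:.2e}")  # ~3.4e-06 vs. 3.46e-06 tabulated
print(f"{annual:.2e}")        # ~6.8e-08 vs. 6.92e-08 tabulated
```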
Table 2 Calculated Post-Mitigation Probabilities of Selected Impacts

Average leak rate per mile-year: 0.00007
Predicted leak count for 700 miles and 50 years: 2.6

Potential Impact | Overall risk: probability of one or more events over life of project (%) | Overall risk: annual probability of one or more events (%) | Segment-specific risk: probability of one or more events over life of project (%) | Segment-specific risk: annual probability of one or more events (%)
Drinking water contamination | 0.5 | 0.010 | 0.00035 | 0.00001
Drinking water, no MTBE | 0.3 | 0.005 | 0.00017 | 0.000004
Fatality | 0.5 | 0.011 | 0.00036 | 0.00001
Injury | 2.3 | 0.047 | 0.00160 | 0.00003
Recreational water contamination | 8.3 | 0.17 | 0.006 | 0.00012
Prime agricultural contamination | 3.5 | 0.070 | 0.002 | 0.00005
Wetlands contamination | 4.9 | 0.10 | 0.005 | 0.00009
Lake Travis drinking water supply | 0.02 | 0.0004 | 0.000013 | 0.00000026
Edwards Aquifer water contamination | 0.02 | 0.0004 | 0.000013 | 0.00000026
operation and maintenance activities. “Receptors” refer to the sites or organisms that are threatened by a spill of refined products. Receptors in this report include people, drinking water supplies, and wetlands. Each impact potentially damages one or more receptors.
Leak Frequencies
Pipeline leak frequencies are estimated from several data sources. Four (4) "frequency of leak" cases are examined in this report. Each case represents a different estimated incident rate and is used independently to perform an impacts assessment. Three cases use only historical data with no consideration given to possible benefits of mitigation. These are included for reference and represent impact frequencies that might be seen on an unmitigated LPP pipeline and on a typical US hazardous liquid pipeline. The fourth case considers the effects of mitigation. The four leak frequencies are generally described as follows:
Case 1 (all U.S. hazardous liquid pipeline leak rate): The average leak incident rate for reportable accidents on US hazardous liquids pipelines, from 1968-1999 (DOT, 1999 and in EA Chapter 5).
Case 2 (former EPC pipeline, reportable leak rate): The reportable incident (i.e., accidents in which spill volumes were 50 barrels or more) rate for 450 miles of this pipeline under EPC operation in 29 years. (Incident rate) = (10 leaks) / (450 miles × 29 years).
Case 3 (former EPC pipeline, overall leak rate): The overall incident rate, regardless of spill size, for 450 miles of pipeline (not including pump stations) under EPC operations in 29 years. (Incident rate) = (26 leaks) / (450 miles × 29 years).
Case 4 (uses an estimate of mitigation effects plus historical data): Cases 1-3 use leak frequencies that do not consider index sums and hence do not consider effects of mitigation. In Case 4, distinctions are made regarding the impacts of mitigation for the various tier categories or for a specific geographic area. The corresponding index sum is used to estimate a leak frequency. The leak frequency is therefore estimated by correlating the index sum scale to an absolute leak frequency. The correlating equation used represents the curve that best fits the following points:
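The Case 2 and Case 3 rates are straightforward arithmetic from the stated leak counts and exposure; a quick check (variable names are mine):

```python
# Arithmetic behind the Case 2 and Case 3 incident rates (leaks per mile-year).
miles, years = 450, 29

case2 = 10 / (miles * years)  # reportable (>= 50 bbl) leaks under EPC operation
case3 = 26 / (miles * years)  # all leaks, regardless of spill size

print(f"Case 2: {case2:.5f}  Case 3: {case3:.5f}")
# Case 3 reproduces the 0.00199 historical EPC rate used in the
# index-sum correlation that follows.
```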
Index Sum | Probability of Leak (estimated by frequency in units of leaks per mile-year)
0 | 1.0 (100 percent chance of a leak)
189 | 0.00199 (historical EPC leak rate on this pipeline)
400 | 0 (virtually no chance of a leak)
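One plausible functional form for such a correlation is an exponential decay anchored on the first two points; the EA's actual best-fit curve is not given in this excerpt, so the following is an illustration only:

```python
# Illustrative (not the EA's actual) index-sum-to-frequency correlation:
# an exponential decay exp(-k * index_sum) chosen so that leak_rate(189)
# equals the 0.00199 historical EPC rate.
import math

k = math.log(1.0 / 0.00199) / 189.0  # ~0.0329

def leak_rate(index_sum: float) -> float:
    """Leaks per mile-year as a function of relative-risk index sum."""
    return math.exp(-k * index_sum)

print(leak_rate(0))    # 1.0 (100 percent chance of a leak)
print(leak_rate(189))  # ~0.00199 (historical EPC rate)
print(leak_rate(400))  # ~2e-06, i.e., virtually no chance of a leak
```

Note that this simple form passes exactly through the first two anchor points, whereas the text below explains that the EA's best-fit curve does not pass exactly through all three.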
Note that this exercise does not create a curve that passes exactly through each of these points. In fact, the curve that best fits all points actually passes through a point that represents a mitigation-effect level of 90 to 95 percent. For Cases 3 and 4, leak probabilities are calculated in addition to leak frequencies. These are obtained by calculating the Poisson probability estimate of "one or more" leaks over the life of the project, as shown below. The probability of no spills is calculated from:
P(X)_SPILL = [(f × t)^X / X!] × exp(−f × t)

where
P(X)_SPILL = probability of exactly X spills
f = the average spill frequency for the segment of interest (spills/year)
t = the time period for which the probability is sought (years)
X = the number of spills for which the probability is sought in the pipeline segment of interest.

The probability of one or more spills is evaluated as follows:

P(one or more)_SPILL = 1 − probability of no spills = 1 − P(X)_SPILL, where X = 0.

The results of these calculations are shown in Table 2 (Executive Summary) and in Tables 5 through 8 [Tables 7 and 8 not included in this book]. The leak frequency estimates have a high degree of uncertainty, primarily due to the limited amount of data available. No data
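The Poisson relationships above can be sketched and checked against Table 2 (function names are mine; for X = 0 the "one or more" probability reduces to 1 − exp(−f·t)):

```python
# Poisson probability of "one or more" events, as described in the text.
import math

def prob_exactly(x: int, f: float, t: float) -> float:
    """P(X) = ((f*t)**X / X!) * exp(-f*t)"""
    return (f * t) ** x / math.factorial(x) * math.exp(-f * t)

def prob_one_or_more(f: float, t: float) -> float:
    return 1.0 - prob_exactly(0, f, t)

# Table 1 frequencies over the 50-year project life, expressed directly as f*t:
for name, ft in [("drinking water", 0.005), ("recreational water", 0.087)]:
    print(name, f"{100 * prob_one_or_more(ft, 1.0):.1f}%")
```

The drinking-water and recreational-water frequencies of 0.005 and 0.087 over the project life give probabilities of about 0.5% and 8.3%, matching Table 2; other rows differ only in the last rounded digit.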
that would better refine these estimates have been available. It is also important to note that frequencies and probabilities like these represent averages expected only over long periods of time. Short time periods can have different experience and still be appropriately represented by these frequencies. Therefore, the predictive power of these probabilities is limited. As an additional evaluation step, the plausibility of the estimated post-mitigation leak frequency was examined qualitatively. The estimate is generally supported by this qualitative analysis, summarized as follows:
1. Low leak frequencies over long periods of time are being experienced by US pipeline operators on hazardous liquid pipelines of similar length to the LPP pipeline, but without the extraordinary level of mitigations as proposed in the LMP. This is indicated by informal interviews with pipeline operators and with searches and analyses of OPS accident data. Analyses of these latter data are discussed in Attachment E [not included in this book].
2. Results of summary analyses of DOT and other data are provided. These data and analyses suggest that the estimated leak frequency is possible, especially with increased mitigation.
3. The correlation as described in Attachment A [see page 298], although weak in terms of statistically valid data quantity and quality, nonetheless offers a semi-quantitative linkage that supports the estimate.
4. Appendix T [not included in this book] shows leak rate estimates for approximately 60 U.S. hydrocarbon liquid pipeline operators. These leak rates, presumably achieved under typical industry mitigation levels, show the range of different leak rates that are possible. This includes company-wide leak rates that are approaching the estimated post-mitigation leak frequency estimates for the LPP pipeline.
5. The scenario-based analyses detailed in Attachment B [this excerpt can be found in Chapter 3, see pages 000-000] suggest that the estimated leak rate reductions can be achieved with rather modest assumptions regarding mitigation effectiveness, even for the more problematic challenge of reducing third-party damage.
6. An alternative approach to estimating failure probabilities from several common pipeline failure mechanisms has produced very similar results. This alternative approach, shown in the preliminary ORA [ORA = Operational Reliability Assessment, not included in this book], uses concepts from fracture mechanics, materials science, historical data, and statistics to calculate failure rates and probabilities.
The fact that two separate approaches to failure probability estimation arrived at similar conclusions provides support for both calculations. In the experience of the EA authors, the LMP reflects levels of mitigation unprecedented in the industry. This suggests that high levels of leak rate reduction are possible, even if not commonly observed.
In addition to overall leak frequencies, spill size frequency also plays a role in many of the impacts. A spill size distribution for spills larger than 50 bbl was derived from DOT hazardous liquid pipeline reportable spills from 1975 to early 2000. The fraction of spills smaller than 50 bbl was estimated from the 29-year EPC leak experience on the 450-mile segment from Valve J-1 to Crane. EPC leak experience contains too few larger-sized spills to create a meaningful profile. Embedded in this approach is the assumption that the national spill size distribution (DOT data) is representative of the LPP's future spill size distribution. This implies that the following variables are also representative: topography; failure mechanisms that determine hole size; leak detection capabilities; and leak reaction capabilities.
Since the national pipeline system is not characterized in these terms, the similarities cannot be confirmed. However, since the LMP specifies several state-of-the-art spill size reduction measures not typically seen in other pipelines, it is reasonable to assume that the national data will not underestimate the spill size potential and very probably will overestimate the potential. A second assumption is that the <50 bbl spill size fraction seen under EPC operations is representative of LPP's future spill size distribution. Since the <50 bbl size triggers few impacts and since the >50 bbl spill fraction can be separated from the "all size" distribution, the absolute validity of this assumption is not critical to this analysis. An additional underlying assumption in these estimates is that the relative probability of failure remains fairly constant over the life of the project. This is accomplished by LPP reacting appropriately to changing conditions along the line, as is specified in the LMP. It also requires that the integrity verifications, as scheduled by ORA calculations, ensure that the probability of failure does not exceed the projected leak probabilities between integrity verifications. This is discussed in Appendix 9D [not included in this book].
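The blending of the two spill-size data sources described in this section can be sketched as follows. The <50 bbl fraction and the size breakdown of reportable spills used below are illustrative placeholders only, not figures from the EPC or DOT datasets:

```python
# Sketch: composite spill-size distribution from two sources.
# The fraction of spills below 50 bbl comes from operator (EPC) experience;
# the size breakdown of spills at or above 50 bbl comes from DOT reportable
# spill data. All numeric fractions below are illustrative assumptions.

frac_below_50 = 0.6  # assumed share of all spills under 50 bbl (EPC-derived in the report)

# Assumed conditional distribution of reportable (>=50 bbl) spill sizes
# (DOT-derived in the report)
dot_reportable = {"50-500 bbl": 0.55, "500-1500 bbl": 0.25, ">1500 bbl": 0.20}

# Composite distribution over all spill sizes
composite = {"<50 bbl": frac_below_50}
for size_bin, frac in dot_reportable.items():
    composite[size_bin] = (1.0 - frac_below_50) * frac

assert abs(sum(composite.values()) - 1.0) < 1e-9
print(composite)
```

The key point is that the reportable-spill shape and the sub-50-bbl fraction are estimated independently and only combined at the end, which is why the absolute validity of the <50 bbl fraction is not critical to the impact estimates.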
Description of potential impacts
Nine distinct potential impacts are studied in this report. Impacts are site specific and sensitive to many variables, and therefore must be somewhat generalized to present a risk picture of the entire line. For modeling purposes, the frequency of each impact is potentially affected by these variables: index sum, representing the probability of pipeline failure; spill size; and tier designation, representing receptor vulnerability and sensitivity (e.g., Tier 3 is hypersensitive). However, not all impacts are modeled as being sensitive to all of these, due to data availability limitations. Below is a general description of the impacts modeled. These descriptions offer the reader a general sense of the rationale behind the calculations, but note that the actual results are based on more than a hundred calculated scenarios. More detailed descriptions can be found in Attachment C [Appendix F of this book].
Fatalities and injuries
While it is common to express risks of injuries and fatalities as a function of "hours exposed," this analysis uses only a calculation of fatalities and injuries per reportable leak. All distinctions of rural versus urban, permanent residents versus temporary exposures, distances to leaks, ignition probabilities, etc., are therefore aggregated in these ratios. This implies that the LPP system is similar to the national data in terms of these variables. The national pipeline system is not characterized to the extent that such similarities can be confirmed. However, no compelling reasons are found to suggest that LPP is not similar with regard to the distinctions previously noted. Therefore, for the purposes of the overall impact estimations, the national data (DOT) are assumed to be representative of LPP's future risks for this impact. An example of fatalities and injuries is Case 1 shown in Table 3. It can be described in general terms as follows: statistically, one fatality is expected to occur for every 217 reportable leaks and an injury is expected to occur for every 48 reportable leaks. The industry average leak rate applied to this pipeline results in an estimate of 35 leaks over 50 years and, hence, predicted fatalities and injuries of 0.16 and 0.72, respectively.
Case studies 14/321
This impact is modeled with no sensitivity to actual population density differences or index sum differences along the line. A threshold spill size of 50 bbl is assumed, below which frequencies of fatality or injury are assumed to be zero. Further discussion of the fatality and injury rates used can be found in Attachment C of this report [Appendix F of this book].
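The Case 1 fatality and injury arithmetic reduces to two divisions; a minimal check using the per-leak rates quoted in the text:

```python
# Case 1 fatality/injury arithmetic from the text:
# one fatality per 217 reportable leaks, one injury per 48 reportable leaks,
# applied to an estimated 35 leaks over 700 miles and 50 years.

leaks = 35                # estimated reportable leaks (industry average rate)
fatalities = leaks / 217  # expected fatalities over the 50-year life
injuries = leaks / 48     # expected injuries over the 50-year life

print(round(fatalities, 2), round(injuries, 2))
```

This reproduces the 0.16 expected fatalities directly; the injury figure computes to about 0.73, which matches the 0.72 in Table 3 to within rounding of the underlying per-leak rate.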
Drinking water contamination
Drinking water contamination is defined as a potential level of contamination which: causes an exceedance of Texas drinking water standards, or causes an exceedance of proposed Texas ground water contamination limits; and can potentially impact a public drinking water supply for a period of time exceeding normal system storage capacity (estimated at about 24 hours). The drinking water probability is a sum of the probability of impacting ground water resources used for public drinking water supplies and the probability of impacting surface water resources used for drinking water. There are 29 miles along the pipeline rated sensitive or hypersensitive for potential surface water drinking water quality. Based on surface water modeling performed at the most hypersensitive locations along the pipeline, a threshold spill size of 1,500 bbl was set for surface water drinking water impacts. A spill smaller than this would not pose a drinking water quality impact (because of losses of water contaminants through natural processes such as volatilization), even under adverse climate (rainfall, evaporation) conditions. There are 66 miles rated sensitive or hypersensitive for potential ground water drinking water impacts. (Note: surface water and ground water sensitive areas are not necessarily mutually exclusive.) Based on the potential for various factors to retard transport of contaminants to an aquifer, two separate threshold levels are set:

- Over porous media aquifers, confined or unconfined, a threshold of 1,500 bbl reflects the potential for soil to absorb contaminants, and for conventional ground water remediation technologies such as pump-and-treat to control contaminants from reaching sensitive receptors.
- Over hypersensitive karst aquifers, a lower threshold of 500 bbl reflects the potential for adsorption on the thinner soil layers overlying karst, and the rapid transport in karst aquifers, which can limit remediation effectiveness. Rose (Rose, 1986) [this reference not included in this book] estimated this threshold at 1,000 bbl, and a figure one-half that estimate was used to add a factor of conservatism.

This impact is modeled as being sensitive to tier location, index sum, and spill volume. Since the tier designations consider vulnerability of drinking water sources, a "probability of contamination" is assigned for each tier. Depending on the vulnerability of a given resource, a threshold spill size is assumed before any impact is possible. Above that threshold, impacts are judged to be equally likely, regardless of spilled volume. This is conservative, since even spill volumes close to the threshold are modeled as being as harmful as the largest spill volumes. Index sum averages for each tier are used to estimate leak incident rates in Case 4.

An example of this impact is Case 1 shown in Table 3 and can be generally described as follows:

1. About 16 percent of reportable leaks are of a size to pose a threat to a drinking water supply.
2. Of those leaks, 50 percent would contaminate a surface water supply in Tier 3, 10 percent in Tier 2. Additionally, 75 percent would contaminate a ground water supply in Tier 3, 25 percent in Tier 2. Using the tier miles, these aggregate to a 100 percent chance for about 31 miles, or about 4 percent for the overall pipeline.
3. The industry average leak rate applied to this pipeline predicts 35 leaks and, hence, about 6 spills (16 percent of 35) would be of sufficient volume to contaminate a drinking water supply, and 0.2 spills would occur at a location that contaminates a drinking water supply. This is equivalent to saying one contamination episode occurs every five pipeline lifetimes, or 250 years, since the 0.2 is based on a 50-year period.

Further discussion of how this receptor is modeled can be found in Attachment C of this report [Appendix F of this book].

Drinking water contamination, no MTBE
The previous impact assumes 15 percent MTBE is transported in the pipeline. If no MTBE is present, the potential for impacts is assumed to be one-half of the previous case. Rationale for this is presented in Attachment C of this report [Appendix F of this book].

Edwards Aquifer contamination
This is a special case of "ground water drinking water contamination," focused specifically on the three miles between Milepost (MP) 170.5 and MP 173.5 (all new pipe as proposed in the LMP). Because of the documented pathways for rapid contamination of drinking water wells in Sunset Valley, this represents the "worst case" probability for ground water contamination. This case has the following assumptions in addition to the general drinking water impacts:

- Since this area is over known hypersensitive karst, the spill size threshold is set at 500 bbl. Spills of this size and larger are assumed to be equally harmful.
- In the mitigated case, the enhanced leak detection system in this area is credited with reducing the frequency of larger sized spills. Specifically, the types of potential large spills reduced are those created by a slow leak, below the detection capabilities of normal leak detection, continuing for long periods of time.

The index sum represents the additional leak prevention measures proposed in these three miles. Further discussion of how this receptor is modeled can be found in Attachment C of this report [Appendix F of this book].

Lake Travis drinking water contamination
This is a special case of "surface water drinking water contamination," which focuses on spills in the Pedernales watershed that could impact drinking water supplies drawn from Lake Travis. The potential for contamination of Lake Travis was analyzed in detail because of the large number of people served by this reservoir (up to a million) and the duration for which contaminant levels could exceed drinking water criteria or advisory levels (on the order of 1 to 2 months for any lake water users, including the City of Austin). The analysis involves 1.54 miles of pipeline located in Tier 2 areas and 2.74 miles in Tier 3. This represents the worst case probability for contamination of surface water used as a drinking water supply. The spill size threshold is set at 1,500 bbl. Spills of this size and larger are assumed to be equally harmful, and spills below this threshold would not cause the impact. Further discussion of how this receptor is modeled can be found in Attachment C of this report [Appendix F of this book].
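The general drinking water example (Case 1 in Table 3) chains three factors: the predicted leak count, the fraction of leaks above the size threshold, and the aggregate location probability. A sketch using the figures quoted in this section:

```python
# Drinking water contamination, Case 1 (figures from the text):
# 35 predicted leaks; ~16% exceed the size threshold; ~4% of pipeline
# length is at a location where a drinking water supply is contaminated.

leaks = 35
size_fraction = 0.16      # spills large enough to threaten a drinking water supply
location_fraction = 0.04  # aggregate probability of a contaminating location

large_spills = leaks * size_fraction               # ~6 spills of sufficient volume
contaminations = large_spills * location_fraction  # ~0.2 contaminating spills in 50 years

years_per_episode = 50 / round(contaminations, 1)  # one episode per ~250 years
print(round(large_spills, 1), round(contaminations, 1), years_per_episode)
```

The product works out to about 5.6 sufficiently large spills and about 0.2 contaminating spills per 50-year project life, i.e., one contamination episode roughly every five pipeline lifetimes.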
Table 3  Overall risks

Columns for each case: impact; frequency of impact over life of project; annual frequency (x 1000) for impact.

Case 1. Industry average reportable leak rate applies (average leak rate 0.001 per mile-year; estimated leak count for 700 miles and 50 years = 35):
  Drinking water contamination              0.27     5.35
  Fatality (note 4)                         0.16     3.21
  Injury (note 4)                           0.72     14.42
  Recreational water contamination          2.80     55.96
  Prime agricultural land contamination     1.06     21.14
  Wetlands contamination                    1.65     32.92

Case 2. Pre-mitigation reportable leak rate continues (average leak rate 0.00077 per mile-year, note 1; estimated leak count = 26.8):
  Drinking water contamination              0.20     4.10
  Fatality (note 4)                         0.12     2.46
  Injury (note 4)                           0.553    11.05
  Recreational water contamination          2.14     42.88
  Prime agricultural land contamination     0.81     16.20
  Wetlands contamination                    1.26     25.22

Case 3. Pre-mitigation leak rate continues (average leak rate 0.001992 per mile-year, note 2; estimated leak count = 69.1):
  Drinking water contamination              0.23     4.69
  Fatality (note 4)                         0.14     2.82
  Injury (note 4)                           0.63     12.65
  Recreational water contamination          2.45     49.06
  Prime agricultural land contamination     0.93     18.53
  Wetlands contamination                    1.44     28.86

Case 4. Post-mitigation leak rate estimate (average leak rate 0.000073 per mile-year, note 3; estimated leak count = 2.6):
  Drinking water contamination              0.005    0.10
  Drinking water contamination, no MTBE     0.003    0.051
  Fatality (note 4)                         0.005    0.11
  Injury (note 4)                           0.024    0.47
  Recreational water contamination          0.087    1.14
  Prime agricultural land contamination     0.035    0.70
  Wetlands contamination                    0.051    1.01
  Lake Travis water supply contamination    0.00019  0.004   (Pedernales watershed)
  Edwards Aquifer water contamination       0.00019  0.004

Notes: 1. 10 reportable (>=50 bbl) leaks over 450 miles in 29 years. 2. 26 leaks (some less than 50 bbl) over 450 miles in 29 years. 3. Leak estimate is for any leak, including <50 bbl; approximate leak count for >=50 bbl (reportable) leaks = 1.1 in 50 years. 4. Fatality and injury rates are based on DOT fatality and injury rates per reportable leak, applied to 700 miles.
Table 4  Segment-specific risks (2,500 ft of pipeline)

Columns for each case: impact; frequency (x 10^6) of impact over life of project; annual frequency (x 10^6) for impact.

Case 1. Industry average reportable leak rate applies (average leak rate 0.001 per mile-year; estimated leak count for 700 miles and 50 years = 35):
  Drinking water contamination              181      3.62
  Fatality (note 4)                         109      2.17
  Injury (note 4)                           488      9.76
  Recreational water contamination          1893     37.85
  Prime agricultural land contamination     715      14.30
  Wetlands contamination                    1502     30.03

Case 2. Pre-mitigation reportable leak rate continues (average leak rate 0.000771 per mile-year, note 1; estimated leak count = 26.8):
  Drinking water contamination              139      2.77
  Fatality (note 4)                         83       1.66
  Injury (note 4)                           374      7.48
  Recreational water contamination          1450     29.01
  Prime agricultural land contamination     548      10.96
  Wetlands contamination                    1151     23.01

Case 3. Pre-mitigation leak rate continues (average leak rate 0.001992 per mile-year, note 2; estimated leak count = 69.1):
  Drinking water contamination              159      3.17
  Fatality (note 4)                         95       1.90
  Injury (note 4)                           428      8.55
  Recreational water contamination          1659     33.18
  Prime agricultural land contamination     627      12.54
  Wetlands contamination                    1316     26.33

Case 4. Post-mitigation leak rate estimate (average leak rate 0.000073 per mile-year, note 3; estimated leak count = 2.6):
  Drinking water contamination              3.5      0.069
  Drinking water contamination, no MTBE     1.7      0.035
  Fatality (note 4)                         3.6      0.071
  Injury (note 4)                           16.0     0.320
  Recreational water contamination          58.8     1.175
  Prime agricultural land contamination     23.8     0.475
  Wetlands contamination                    46.2     0.920
  Lake Travis water supply contamination    0.13     0.003   (Pedernales watershed)
  Edwards Aquifer water contamination       0.132    0.003

Notes: 1. 10 reportable (>=50 bbl) leaks over 450 miles in 29 years. 2. 26 leaks (some less than 50 bbl) over 450 miles in 29 years. 3. Leak estimate is for any leak, including <50 bbl; approximate leak count for >=50 bbl (reportable) leaks = 1.1 in 50 years. 4. Fatality and injury rates are based on DOT fatality and injury rates per reportable leak, applied to 700 miles. Special receptor lengths of 3,372 ft and 3,312 ft apply to certain receptors in place of the 2,500-ft segment length.
Recreational water contamination
Recreational water contamination is defined as levels of contamination which could cause violation of the Clean Water Act through creation of a visible petroleum sheen on any surface waters, or through impacts to fish populations (including levels of dissolved oxygen and toxic constituents in the water). No potential concentration levels were analyzed for recreational water contamination, and it is possible that contaminant levels in excess of those which may result from a pipeline release already exist in watersheds from urban runoff and usage of recreational watercraft. Threshold spill sizes applied for certain portions of the pipeline represent the size of spill which would need to occur before a spill could reach a surface water body.
This impact is modeled as being sensitive to tier location, specifics within the tier, and spill volumes. An example of this impact is Case 1 shown in Table 3 and can be generally described as follows:
Table 5  Overall impact probabilities for Cases 3 and 4

Overall impact probability is the probability of one or more events in 50 years over 700 miles; the annual value is the probability of one or more events in 1 year over 700 miles. Columns for each case: impact; probability of one or more impacts over life of project; annual probability; chances in a thousand; annual chances in a thousand.

Case 3. Pre-mitigation leak rate estimate (average leak rate 0.001991 per mile-year, note 1; estimated leak count for 700 miles and 50 years = 69.1):
  Drinking water contamination              20.9%    0.47%    209     4.68
  Fatality (note 2)                         13.1%    0.28%    131     2.81
  Injury (note 2)                           46.9%    1.26%    469     12.6
  Recreational water contamination          91.4%    4.79%    914     47.9
  Prime agricultural land contamination     60.4%    1.8%     604     18.36
  Wetlands contamination                    76.4%    2.84%    764     28.4

Case 4. Post-mitigation leak rate estimate (average leak rate 0.000073 per mile-year, note 3; estimated leak count = 2.6):
  Drinking water contamination              0.5%     0.010%   5.10    0.102
  Drinking water contamination, no MTBE     0.3%     0.005%   2.55    0.051
  Fatality (note 2)                         0.5%     0.011%   5.25    0.105
  Injury (note 2)                           2.3%     0.047%   23.38   0.473
  Recreational water contamination          8.3%     0.17%    83.20   1.736
  Prime agricultural land contamination     3.5%     0.070%   34.50   0.702
  Wetlands contamination                    4.9%     0.10%    49.42   1.013
  Lake Travis water supply contamination    0.02%    0.0004%  0.19    0.004   (note 4)
  Edwards Aquifer water contamination       0.02%    0.0004%  0.19    0.004

Notes: 1. 26 leaks (some less than 50 bbl) over 450 miles in 29 years. 2. Fatality and injury rates are based on DOT fatality and injury rates per reportable leak, applied to 700 miles. 3. Leak estimate is for any leak, including <50 bbl; approximate leak count for >=50 bbl (reportable) leaks = 1 in 50 years. 4. Pedernales watershed.
Table 6  Segment-specific impact probabilities for Cases 3 and 4

Impact probability for specific locations is the probability of one or more events in 50 years per 2,500 ft; the annual value is the probability of one or more events in 1 year per 2,500 ft. Columns for each case: impact; probability of one or more impacts over life of project; annual probability; chances in a million; annual chances in a million.

Case 3. Pre-mitigation leak rate estimate (average leak rate 0.001991 per mile-year, note 1; estimated leak count for 700 miles and 50 years = 69.1):
  Drinking water contamination              0.0159%     0.000317%    159     3.17
  Fatality (note 2)                         0.0095%     0.000190%    95      1.90
  Injury (note 2)                           0.0428%     0.000855%    428     8.55
  Recreational water contamination          0.166%      0.00332%     1658    33.2
  Prime agricultural land contamination     0.0627%     0.001254%    627     12.54
  Wetlands contamination                    0.132%      0.00263%     1315    26.3

Case 4. Post-mitigation leak rate estimate (average leak rate 0.000073 per mile-year, note 3; estimated leak count = 2.6):
  Drinking water contamination              0.00035%    0.00001%     3.5     0.069
  Drinking water contamination, no MTBE     0.00017%    0.0000035%   1.7     0.035
  Fatality (note 2)                         0.00036%    0.00003%     3.6     0.071
  Injury (note 2)                           0.00160%    0.00003%     16.0    0.320
  Recreational water contamination          0.006%      0.00012%     58.8    1.175
  Prime agricultural land contamination     0.002%      0.00005%     23.8    0.475
  Wetlands contamination                    0.005%      0.00009%     46.2    0.925
  Lake Travis water supply contamination    0.000013%   0.00000026%  0.13    0.003   (note 4)
  Edwards Aquifer water contamination       0.000013%   0.00000026%  0.13    0.003

Notes: 1. 26 leaks (some less than 50 bbl) over 450 miles in 29 years. 2. Fatality and injury rates are based on DOT fatality and injury rates per reportable leak, applied to 700 miles. 3. Leak estimate is for any leak, including <50 bbl; approximate leak count for >=50 bbl (reportable) leaks = 2 in 50 years. 4. Pedernales watershed.
1. About 38 percent of reportable leaks are of a size to pose a threat to a recreational water supply.
2. Of those leaks, about 25 percent would contaminate the receptor. This is determined by characterizing the various lengths of such receptors present within each tier. Each length within each tier is assigned a probability, indicating that length's vulnerability. In aggregate, these compute to be the equivalent of about a 25 percent probability all along the pipeline.
3. The industry average leak rate applied to this pipeline predicts 35 leaks and, hence, about 13 (38 percent of 35) would be of sufficient volume, and about 2.8 would occur at the right location to contaminate one of these receptors.

Further discussion of how this receptor is modeled can be found in Attachment C of this report [Appendix F of this book].
Prime agricultural land contamination
A spill size of 500 bbl over prime agricultural land is viewed as impacting agricultural lands, based on the potential for the spread of a rapid release to impact 1/4 acre of agricultural land. Further discussion of how this receptor is modeled can be found in Attachment C of this report [Appendix F of this book].
Wetlands contamination
A spill size of 500 bbl over wetlands is viewed as impacting the wetlands. This threshold is set at a level which would potentially overcome the natural processes of volatilization and adsorption and cause serious degradation of high-quality wetlands. Discussion of how this receptor is modeled can be found in Attachment C of this report [Appendix F of this book].
Summary of results
Post-mitigation impact frequencies are calculated to be 10 to 30 times lower than pre-mitigation and industry average frequencies. The frequency reduction is not constant, since different permutations of leak frequencies, spill size frequencies, and lengths impacted are combined. The following tables show the results of all frequency estimates for all impacts. Case 4 in all tables shows the estimate for post-mitigation results. Other cases are included for comparison. Table 3 shows overall frequencies for all cases and Table 4 shows segment-specific frequencies for all cases. Tables 5 and 6 focus on Cases 3 and 4 and present probabilities of impacts (in slightly different formats than Tables 3 and 4).
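The stated reduction can be checked directly against the Table 3 frequencies; a sketch comparing the Case 3 (pre-mitigation) and Case 4 (post-mitigation) values over the project life:

```python
# Ratio of pre-mitigation (Case 3) to post-mitigation (Case 4) impact
# frequencies over the project life, using values from Table 3.

case3 = {"fatality": 0.14, "injury": 0.63, "recreational": 2.45,
         "agricultural": 0.93, "wetlands": 1.44}
case4 = {"fatality": 0.005, "injury": 0.024, "recreational": 0.087,
         "agricultural": 0.035, "wetlands": 0.051}

ratios = {k: case3[k] / case4[k] for k in case3}
print({k: round(v, 1) for k, v in ratios.items()})
```

For these five impacts the ratio is roughly 26 to 28, within the stated 10-to-30-fold range; the drinking water rows (0.23 versus 0.005) show a larger reduction, reflecting the non-constant factor noted above.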
Case Study D: highly volatile liquids
This case study is the example presented in Ref. [43]. That reference describes a recommended methodology for calculating hazard zones for highly volatile liquids (HVLs). The report appears to have been produced for the National Energy Board of Canada. The example is a 25-km-long propane pipeline with 0.1589-m inner diameter, operated at 9928 kPa, in a population class 1 area (rural, 5 dwelling units). The scenario is a full-rupture event with the following assumptions: a frequency of failure of 2.0E-03, wind speed of 4 m/s (probability of the population being exposed based on wind direction), probability of ignition = 12%, probability of exposure = 11% (taking into account moving populations), and mortality rate = 0.5 fatalities per event.
From these inputs and the equations and assumptions shown in Ref. [43], the calculations shown in Table 14.43 were made.
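The note to Table 14.43 cautions that the intermediate dispersion results could not be replicated, but the final combination rows do follow directly from the tabulated inputs; a sketch of those last steps, with the 64-ha tributary area taken from the table's population-density note:

```python
# Final combination steps from Table 14.43 (values as tabulated).

# Population density: five dwelling units x 3 persons each over 64 ha
density = 5 * 3 / 64.0             # persons per hectare, ~0.23
consequences = 0.10                # fatalities per event (as tabulated)
probability = 4.0e-05              # events per year (as tabulated)
risk = consequences * probability  # fatalities per year

print(round(density, 2), f"{risk:.1e}")
```

This reproduces the 0.23 persons/ha and the 4.0E-06 fatalities-per-year risk figure; the upstream dispersion and explosion quantities are taken at face value from the table, per its own caveat.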
Case Study E: sour gas
The following is an excerpt from Ref. [22], a quantitative risk analysis of a natural gas gathering system located in southwestern Wyoming. This describes an analysis approach and some assumptions used in assessing risks from a toxic gas system and expressing those risks in absolute terms. The objective of the analysis was to determine the risk the pipeline and associated wells pose to the public population along the pipeline route. This required the completion of four major tasks. Task 1: Determine potential pipeline and wellhead accidents that could create life-threatening hazards to persons located near the pipeline or well sites. Task 2: Derive the frequency of occurrence (probability) of each accident identified in the first task. Task 3: Determine the consequences of each accident identified in the first task. Task 4: Combine the consequences and the probability of occurrence of each accident to arrive at a measure of public risk created by the pipeline and well network. The natural gas being produced and transported through the network varies in composition from one section of pipeline to another, according to the gas produced from each well. However, all pipeline sections transport natural gas containing some hydrogen sulfide. The pipeline creates no hazards for persons near the pipeline or well sites as long as the sour natural gas is contained within the pipeline. Accidental releases of sour natural gas from the well/pipeline network could create potentially life-threatening hazards to persons near the location of the release. Due to the presence of hydrogen sulfide in the natural gas, the vapor cloud created by a release of gas to the atmosphere would be toxic as well as flammable. Persons inhaling air containing toxic hydrogen sulfide vapor could be fatally injured if the combination of hydrogen sulfide concentration and time of exposure exceeds the lethality threshold.
If the cloud is ignited, persons in or very near the flammable vapor cloud could be fatally injured by the heat energy released by the fire. The frequency of occurrence of each potential pipeline accident identified in Task 1 was estimated from historical pipeline failure rate data gathered by the U.S. Department of Transportation. Event trees were then used to estimate the percentage of releases of various sizes that would create a toxic or fire hazard. For example, it was estimated that 50 percent of moderate-sized releases of sour natural gas from the pipeline do not ignite but do create a toxic cloud; 10 percent ignite immediately on release and create a torch fire; and 40 percent ignite after some delay, thus creating a toxic cloud followed by a torch fire. The frequency of sour gas well blowouts was derived from sour gas well historical data. The largest documented database covers wells in the Province of Alberta, Canada. According to the data, an uncontrolled sour gas well blowout would occur with a frequency of 3.55E-06 blowouts per well per year. This failure rate is for wells equipped with subsurface safety valves (SSSVs). All the wells in the Wahsatch network will be equipped with SSSVs. Computerized consequence models were used to calculate the extent of potentially lethal hazard zones for toxic vapor clouds
and/or gas fires created by each potential accident identified in Task 1. Calculations were repeated for numerous combinations of wind speed and atmospheric stability conditions in order to account for the effects of local weather data. When making these calculations, it was assumed that large releases of gas (ruptures and punctures) from underground pipelines were capable of blowing away the soil overburden because of the pipeline's high operating pressure. As a result, the released gas enters the atmosphere with high velocity, resulting in rapid mixing with air near the point of release. For corrosion holes, it was assumed that the gas being released from an underground pipeline was incapable of blowing away the soil overburden. As a result, the released gas enters the atmosphere with little momentum after passing through the soil above the pipeline. The number of persons expected to receive fatal injuries due to exposure to each of the toxic or fire hazard zones was determined as a function of wind direction. The risk was then calculated by summing the potential exposures to each of the hazards for all accidents identified in Task 1, and modifying the exposures to each potential hazard zone by its probability of occurrence. For example, the probability of a specific flash fire is the product of the following probabilities: the probability of the accident that releases sour natural gas; the probability that the release creates a flammable vapor cloud under a unique combination of wind speed, wind direction, and atmospheric stability conditions; and the probability that the flammable vapor cloud is not ignited immediately but is ignited after some delay. The number of persons potentially exposed to a specific hazard zone is a function of the population density and distribution near the accident location.
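The event-tree split and the flash-fire probability product described in this case study can be sketched as follows. The outcome fractions come from the text; the base release frequency and the weather-dependent cloud probability are illustrative placeholders, not values from Ref. [22]:

```python
# Event-tree allocation of ignition outcomes for a moderate-sized sour gas
# release (outcome fractions from the text), followed by the flash-fire
# probability product. The base release frequency and cloud probability
# below are illustrative assumptions only.

release_freq = 1.0e-04  # assumed releases per mile-year (placeholder)
outcomes = {
    "toxic cloud only (no ignition)": 0.50,
    "immediate ignition, torch fire": 0.10,
    "delayed ignition, toxic cloud then torch fire": 0.40,
}
assert abs(sum(outcomes.values()) - 1.0) < 1e-9

# Frequency of each end state of the event tree
scenario_freq = {name: release_freq * frac for name, frac in outcomes.items()}

# Flash-fire probability product for one weather combination:
p_accident = release_freq    # accident releasing sour gas
p_cloud = 0.05               # assumed p(flammable cloud under this wind/stability combination)
p_delayed_ignition = 0.40    # from the event tree above
p_flash_fire = p_accident * p_cloud * p_delayed_ignition
print(scenario_freq, p_flash_fire)
```

Each end-state frequency is simply the release frequency weighted down each branch, which is why the branch fractions must sum to one.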
The population density varies along each pipeline section, and many of the sections do not have any permanent dwellings or population close enough to the pipeline to be affected by a pipeline release. In addition, some of the physical aspects of the pipeline (e.g., pipe diameter and operating pressure) and the composition of the gas in the pipeline also vary with location. Therefore, the pipeline/well network was divided into twelve pipeline sections and six well sites on the basis of pipeline diameter, operating pressure, and local population density. Calculations of expected failure rates and exposures were
performed for each of the twelve pipeline sections and six well locations. For each pipeline section or well site, one particular accident will create the largest potentially lethal hazard zone for that section. As an example, one accident is a full rupture of the pipeline without ignition of the flammable cloud, thus resulting in a possible toxic exposure downwind of the release. Under worst case atmospheric conditions, the toxic hazard zone extends 2,600 feet from the point of release. Under the worst case conditions, it takes about 11 minutes for the cloud to reach its maximum extent. The hazard "footprint" associated with this event is illustrated in two ways. One method presents the footprint as a "hazard corridor" that extends 2,600 feet on both sides of the pipeline for the entire length. This presentation is misleading, since everyone within this corridor cannot be simultaneously exposed to potentially lethal hazards from any single accident. A more realistic illustration of the maximum potential hazard zone along the pipeline is the hazard footprint that would be expected IF a full rupture of the pipeline were to occur, AND the wind is blowing perpendicular to the pipeline at a low speed, AND "worst case" atmospheric conditions exist, AND the vapor cloud does not ignite. The probability of the simultaneous occurrence of these conditions is about 1.87E-07 occurrences per pipeline mile-year, or approximately once in 5,330,000 years. The highest risk along this section of the pipeline network is to persons located immediately above the pipeline. The maximum risk posed by this portion of pipeline is about 5.0E-06 chances of fatality per year. This is for an individual located directly above the pipeline 24 hours per day for 365 days.
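The "once in N years" and "one chance in N" figures quoted in this case study are simple reciprocals of the annual frequencies:

```python
# Converting annual frequencies to return periods / "one chance in N"
# figures quoted in this case study.

worst_case_freq = 1.87e-07  # occurrences per pipeline mile-year
return_period = 1.0 / worst_case_freq  # ~5.35 million years; the source rounds to 5,330,000
print(f"once in {return_period:,.0f} years")

max_individual_risk = 5.0e-06  # chances of fatality per year, directly above the pipeline
print(f"one chance in {1.0 / max_individual_risk:,.0f}")
```

The reciprocal of 5.0E-06 per year gives the "one chance in 200,000" phrasing used in the text.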
In other words, an individual in this area of the pipeline network would have one chance in 200,000 of being fatally injured by some release from the pipeline over an entire year, if this individual remained directly above the pipeline for that entire year. An individual in this same area, but located 50 meters from the pipeline, would have about one chance in one million of being fatally injured by a release from the pipeline, if the individual were present at that location for the entire year. The risk posed to the population within the appropriate "hazard corridor" for the pipeline/well network can also be presented in the form of f/N curves. This type of risk presentation, often called societal risk, is a plot of the frequency, f, at which N or more persons are expected to be fatally injured. The f/N curve shows that the frequency
Table 14.43 Case Study D calculations

Key parameter                         Value                       Notes
Initial mass release rate             562 kg/sec                  Adiabatic choked flow theory
Total mass of NGL available           119,657 kg                  Assumes a time to close emergency valves
Event mass release rate               109 kg/sec                  Accounts for reductions in release rate over 360-second release episode
Distances to LFL/UFL                  84 m / 41 m                 Uses assumed meteorological stability conditions
Distance to explosion epicenter       63 m                        Assumed to be midway between LFL and UFL
NGL mass within flammability limits   723 kg                      Uses 60% factor
TNT equivalent of explosion           7 tons                      20% of mass assumed involved
Radius of overpressure                12 m                        To 138 kPa level, 50% mortality
Radius of hazard area                 75 m                        Epicenter distance + overpressure radius
Population density                    0.23 persons/ha             Five dwelling units x 3 persons per dwelling unit / 64 ha
Consequences                          0.10 fatalities per event   Population density x area x mortality rate
Probability                           4.0E-05 events/year         Failure frequency x length x p(wind) x p(ignition) x p(exposure)
Risk                                  4.0E-06                     Consequences x probability
Note: These calculation results could not be replicated using the formulas and assumptions included in the examples of Ref. [43]. Note also that some assumptions are not necessarily conservative: the epicenter of the explosion could be at LFL rather than the assumed midpoint between LFL and UFL, LFL could be farther given inconsistent mixing, and the overpressure criterion is very high. It is not known if errata to this reference are available or if the document is in current use in the form it was obtained. Results shown above are for illustration of the thought process only and should not be relied on without further validation.
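The last rows of Table 14.43 chain together as shown below. This is a sketch of the arithmetic only, using the tabulated values; as the note states, the intermediate quantities could not be independently replicated.

```python
# Bottom of the Table 14.43 calculation chain (values from the table).
dwellings, persons_per_dwelling, area_ha = 5, 3, 64
population_density = dwellings * persons_per_dwelling / area_ha  # ~0.23 persons/ha

consequences = 0.10   # fatalities per event (density x area x mortality rate)
probability = 4.0e-5  # events/year (failure freq x length x p(wind) x p(ignition) x p(exposure))

risk = consequences * probability  # 4.0e-6 fatalities/year
```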
of accidents that would affect one or more persons, on average, is less than 6.0E-07 chances/year, or one chance in 1.7 million.

Historical data on fatal accidents involving natural gas gathering and transmission pipelines have been compiled by the Department of Transportation (DOT). During the 14.5 years for which summary data are available, the maximum number of fatalities due to any single accident was six, and only two accidents caused six fatalities. For the pipeline/well network involved in this study, the maximum expected number of fatalities for any single accident is five, on average.

To put this type of evaluation into perspective, it is instructive to look at the types of risks people are ordinarily exposed to during day-to-day life. There are voluntary activities (driving a car) and involuntary activities (being hit by lightning) that involve risks higher than those due to this pipeline/well network.
XII. QRA quality

The above case studies are based on numerical analysis techniques, making them at least akin to QRA-type evaluations. It is useful to examine characteristics that might make one QRA-type approach generally more complete or more useful as a valid measure of risk. The following lists are extracted from checklists offered by a well-known practitioner of QRA, as a means of evaluating the QRA study itself. Reference [91] states that from an analyst's point of view, a QRA study can be thought of as having three general stages:

TOP
- Establishing the objectives and scope of the study
- Collecting all relevant information
- Identifying what can go wrong

MIDDLE
- Estimating event frequencies
- Performing consequence modeling
- Calculating risk results

TAIL
- Investigating risk reduction measures
- Developing cost-effective solutions
- Communicating the results

From a client's point of view, many QRAs seem to place too much emphasis on the technical details in the middle, at the expense of the top and, more notably, the tail. This is perhaps not unexpected, since the middle part is the most technically demanding. However, the tail part is arguably the most valuable to the client, since it is usually a critical aspect of decision making.

How to evaluate the quality of a QRA

The following are common errors seen in QRA studies. A check for these can be used as a simple audit of a QRA report.

- Failure to define clearly the scope and boundaries of the study
- Failure to cover all relevant hazards
- Insufficient failure cases
- Screening of failure data, optimistic assumptions, and other biases tending to produce low-risk results
- Concentration of modeling and risk reduction effort on hazards that do not dominate the risks
- Lack of attention to escalation of hydrocarbon events
- Use of only one risk measure
- Failure to define individual risk
- Use of assumptions even though data are available
- Failure to provide references (or an auditable internal referencing system) for quoted frequency and probability data
- Use of insufficient sources for frequency and probability data
- Lack of attention to risk reduction measures.

The following are features that may indicate a high-quality QRA, equal to the best QRAs currently being performed:

- Use of a formal hazard identification procedure linked to failure case generation
- Use of intelligent failure case and accident scenario selection
- Use of validated software for modeling
- Use of audited software for risk summation
- Documentation of all input data and modeling assumptions
- Traceability of risk results through intermediate results to input data
- Quantitative uncertainty analysis, including identification of the most critical assumptions and exploration of the effects of alternatives (Note that the existence and application of an uncertainty analysis is a better indicator of quality than the degree of uncertainty that is estimated.)
- A smooth FN curve. Experience with FN curves has suggested that more detailed analyses, especially those with intelligently selected failure cases, tend to produce smoothly rounded FN curves, whereas less detailed studies produce FN curves with large discontinuities (unless some additional smoothing is used).
- Use of actual accident experience in developing accident scenarios and validating risk results [91].

However, many of these aspects might not be of high value to the user of the analysis, so not all of these ingredients are necessarily appropriate for every study.
Risk Management
Contents
Introduction 15/331
Applications 15/332
Measurement tool 15/333
Cumulative risk 15/333
Changes over time 15/334
Cumulative risk example 15/334
Acceptable risk 15/334
Societal versus individual risk 15/335
Risk criteria 15/336
Qualitative criteria 15/337
ALARP principle 15/337
Examples of numerical criteria 15/339
Studies 15/340
"One in a million" as acceptance criteria 15/341
Decision points 15/341
Numerical criteria 15/341
Data-based criteria 15/341
Precedent-based criteria 15/342
Continuous improvement 15/342
Risk mitigation 15/342
Where to start 15/342
Prioritization 15/343
Resource allocation 15/343
Mitigation options 15/343
Consequence-dominated risks 15/343
Cost of mitigation 15/344
Land-use issues 15/344
X. Costs 15/344
Cost/risk relationships
Estimating costs of mitigation
Cost/risk curve 15/345
Cost/benefit of route alternatives 15/345
Efficiencies 15/346
Cost/risk modeling 15/346
Cost of accidents 15/347
Value of human life 15/347
Rate of spending 15/347
XI. Program administration 15/348
Organization 15/348
Control documents 15/349
Program elements
Process elements 15/350
Scope of risk analysis 15/350
Definitions of the RMP procedure 15/350
Risk assessment model 15/350
Risk control and decision support 15/350
Related procedures 15/352
XII. Risk communications 15/352
Communications benefits 15/352
The communicator 15/352
Audience considerations 15/353
Risk comparisons 15/355
Comparisons with other modes of transportation and risks 15/355

I. Introduction

Once a risk assessment has been completed and the results analyzed (see Chapter 8), the natural next step is risk management: "What, if anything, should be done about this risk picture that has now been painted?" This chapter explores some techniques and issues regarding the management of pipeline risks.

Risk management implies the need for judgment of risk levels, perhaps by the establishment of "acceptable" or "tolerable" risk levels. This is an enormously complex issue. This chapter discusses the issue but will not resolve it, nor do justice to the many high-quality books and countless articles written on the subject. Nor will the following paragraphs fully explore the many and often subtle socioeconomic, political, and philosophical aspects of decisions regarding risks. Rather, the intention here
is to equip the practicing pipeline risk manager with concepts that will help him better understand risk issues and interact more effectively with risk managers from other industries. Many challenging questions are implied in risk management:
- Where and when should resources be applied?
- How much urgency should be attached to any specific risk mitigation?
- Should only the worst segments be addressed first?
- Should resources be diverted from less risky segments in order to better mitigate risks in higher risk areas?
- How much will risk change if we do nothing differently?
It must be recognized that a finite amount of resources can be spent on pipeline risk reduction. Beyond some point, expenditures no longer make sense from a business or societal viewpoint. An irony of the situation is that public safety can actually be threatened by spending too much on pipeline safety! This occurs when costs of pipeline transportation are driven so high that business is diverted to less safe modes of transportation.

In some sense, we have nearly complete control of the risk. We can spend nothing on preventing accidents, or we can spend enormous sums of money over-designing facilities, employing an army of inspectors, and routinely shutting down lines for preventive maintenance and replacement. Pragmatically, operators spending too little on preventing accidents will be put out of business by regulatory intervention or by the cost of accidents. On the other hand, if an operator spends too much on accident prevention, that company can be driven out of business by the competition, even if, perhaps, that competition has more accidents!

Risk management, to a large extent, revolves around the central process of making choices in the design and day-to-day operations of a pipeline system. Many of these choices are mandated by regulations, whereas others are economically (budget) constrained. By assigning a cost to pipeline accidents (a sometimes difficult and controversial thing to do) and including this in the cost of operations, the optimum balance point is the lowest cost of operations.

No operator will ever have all of the relevant information needed to guarantee safe operations. There will always be an element of the unknown. Managers must control the "right" risks with limited resources, because there will always be limits on the amount of time, manpower, or money available to apply to a risk situation. Managers must weigh their decisions carefully in light of what is known and unknown.
The deliverable most requested after risk assessment is therefore a resource allocation model. In such a model, the output of the risk assessment would play a key role in evaluating the benefits of any project or activity. The user would in essence be performing "what-if" scenarios to see the risk level that results after any proposed action.
II. Applications

Applications of risk management techniques range from simple "interesting to know" comparisons to the full basis of budgets and operating disciplines. The latter may drive design, construction, operating, maintenance, and emergency response decision making and associated resource allocation. The most common applications of a pipeline risk management program typically include the following:

1. Identification of risks. This is simply the acquisition of knowledge, such as levels of risk and changes in risk over time, as a first step toward applying the knowledge to improve pipeline safety.
2. Reduction of risks. Establishing baseline risk levels and risk significance thresholds provides system parameters for evaluating multiple risk reduction projects.
3. Reduction of liability. Having a comprehensive and effective risk management program in place should reduce the number, frequency, and severity of failures, as well as the severity of failure consequences. In addition to operating cost savings, the company would expect to see a long-term reduction of indirect liability-related costs, including insurance costs, third-party legal actions, government agency enforcement actions, special interest group actions, etc.
4. Resource allocations. This entails optimizing, from a risk perspective, the choices of day-to-day expenditures of manpower and dollars. A risk-based maintenance or capital budget cost-effectively allocates limited funds to risk mitigation measures with the greatest risk reduction per unit cost. The optimum allocation of resources is one of, if not the, most pressing challenges to managers of pipeline operations.
5. Project approvals. As part of a regulatory process or company internal process, this involves an examination of the levels of risk related to a proposed project and the judgment of the acceptability of those risks.
6. Budget setting. Budgets are used to determine the value and optimum timing of a potential activity, project, or group of projects from a risk perspective.
7. Due diligence. Due diligence is required to investigate and evaluate, from a risk perspective, assets that might be acquired, leased, abandoned, or sold. An acquiring company can evaluate data from a pipeline of interest in its risk model and compare the level of risk to its existing system, thereby identifying the potential cost and budget priority to meet established significance thresholds.
8. Risk communications. This involves presenting risk information to a number of different audiences with different interests and levels of technical ability. These audiences can include new pipeline ROW-affected parties, existing pipeline ROW-affected parties, corporate stakeholders, employees, customers, the general public, special interest groups, local emergency response parties, and local/state/federal governmental agencies.

The risk results can also be used to support specific tasks in risk management, including

- Design an operating discipline
- Assist in route selection
- Optimize spending
- Strengthen project evaluation
- Determine project prioritization
- Determine resource allocation
- Ensure regulatory compliance

and so on.
III. Measurement tool

The risk assessment model described in Chapters 3 through 7 produces relative risk scores by combining the possible failure mode scores and dividing this sum by the potential consequences score:

Relative risk = [index sum] / [leak impact factor]
Index sum = [third party] + [corrosion] + [design] + [incorrect operations]

Having built a formal risk assessment system, it is useful to step back and assess what information is now available. Recall that the final risk numbers should be meaningful in a practical, real-world sense. They should represent everything that is known about a specific piece of pipe: the collective intelligence of the whole company, including all knowledge gained over years of operating experience, all of the statistical data that can be gathered, all intuitive beliefs, and all engineering calculations. If the model has not captured all of this, then there is room for improvement. If any personnel are more knowledgeable in any risk area than the model is, there is still work to be done. If the risk assessment results are not believable, then something is wrong with either the model or the perceptions of the disbelievers. In either case, the disconnect can be identified and resolved. When, after careful evaluation and much experience, the results are believable and trusted, the user will find many ways to use the numbers that he perhaps did not foresee.

In creating a risk assessment system, a measurement tool has been created. As with any measurement tool, it must have a suitable "signal-to-noise ratio" if it is to provide useful results. This means that the "noise," the amount of background variability in the measurement (due to numerous causes), must be low enough that the "signal," the risk value of interest, can be read. Every system, especially a complex system such as a pipeline existing in a natural environment, will show a great deal of variation in many characteristics. Some of this natural variation will be of interest (a signal) since it changes our risk perceptions. Some of the variation, however, will only be "background noise": not of real risk interest and perhaps obscuring the real signals. In the case of pipeline risk, some sources of variation that must be filtered for signals include
- Varying static conditions along a pipeline and between compared pipelines: different soils, vegetation, temperatures, pipe materials, pipe sizes, operating practices, etc.
- Varying dynamic conditions: activities of people, presence of people, weather events, stress conditions, soil moisture content, etc.
- The high level of uncertainty associated with the modeling of phenomena such as dispersion and explosion
- Small amounts of statistical data from which to predict event frequencies
- Large numbers of variables that can contribute to risk changes and which are often confounded with each other.
A highly variable system limits the ability of the risk assessment tool to distinguish real changes in risk level from changes that do not necessarily contribute to risk. We should be careful
not to think we can find a 2% change in risk with a tool that is only sensitive to ±10% changes. This is similar to the "accuracy" of the model, but involves additional considerations that surround the high level of uncertainty associated with risk management. However, it would not be reasonable to assume that this tool cannot be continuously improved. Improvement opportunities should be constantly sought. See also Chapter 8 for some simple statistical and graphical tools that can be used to further explore a risk model's capabilities.
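The scoring arithmetic from the start of this section can be sketched as follows; the index weights and example scores here are hypothetical illustrations, not values taken from Chapters 3 through 7.

```python
def relative_risk(third_party, corrosion, design, incorrect_ops, leak_impact_factor):
    """Relative risk per the model of Chapters 3-7: higher index scores mean
    MORE safety, and a higher leak impact factor means WORSE consequences,
    so a higher result indicates a safer (lower-risk) segment."""
    index_sum = third_party + corrosion + design + incorrect_ops
    return index_sum / leak_impact_factor

# Hypothetical segment: strong index scores but high-consequence surroundings.
score = relative_risk(70, 55, 80, 60, leak_impact_factor=5.0)  # (70+55+80+60)/5 = 53.0
```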
IV. Cumulative risk

Cumulative risk is a metric used to gauge the risk posed by any length of pipeline. Because risk values are very location specific along the pipeline, a method of rolling up all of the risks for a certain stretch of pipeline is important. As noted in Chapter 2, the pipeline risk scores represent the relative level of risk that each point along the pipeline presents to its surroundings. The score is insensitive to length. If two pipeline segments, 100 and 2600 ft, respectively, have the same risk score, then each point along the 100-ft segment presents the same risk as does each point along the 2600-ft length. Of course, the 2600-ft length presents more overall risk than does the 100-ft length, because it has many more risk-producing points. A technique is needed to add the length aspect so that a 100-ft length of pipeline with one risk score can be compared against a 2600-ft length with a different risk score. This issue is also discussed in terms of individual and societal risks in Chapter 14.

Many pipelines will have short lengths of relatively higher risk among long lengths of lower risk. In summarizing the risk for the entire pipeline, a simple average or median will hide the shorter, higher risk sections. A cumulative risk (all of the higher and lower portions, with their respective lengths, compiled into a summary number) will produce the most meaningful measure. The cumulative risk characteristic is measured in order to track risk changes over time, compare widely different types of projects, and equate relative risks to absolute risks if, for example, we want to compare the risk benefit of clearing 20 miles of pipeline ROW and installing new signs to the value of lowering and recoating 100 feet of pipeline. On one hand, the failure potential is being reduced significantly along a short stretch of ROW. On the other hand, a more widespread mitigation is being broadcast over a long length.
Even with relative risk scores at each location, the comparison is not intuitive unless a method of equivalency is established. This measure can be called cumulative risk (CR). With a relative risk scale like the one presented in Chapters 3 through 7, a simple formula can be used to calculate cumulative risk:

CR = (1 / risk score) × (length)
The reciprocal of risk score is used because the “risk” score is really a “safety” score-higher points mean more safety-in the model shown in Chapters 3 through 7. Each pipeline and segment of pipeline has a CR value. Longer lines have higher CR values, reflecting higher risk. This is appropriate since a longer line logically has a higher
probability of failure and generally exposes more receptors to consequences. Projects such as ILI, public education, ROW maintenance, and patrol can impact many miles of pipe and hence often have a large impact on CR.
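A minimal sketch of the CR calculation, using the Line ABC segment data from Table 15.1 (the reciprocal of the risk score is used because higher scores mean more safety in this relative model):

```python
def cumulative_risk(segments):
    """CR = sum of (1 / risk score) x length over all segments.
    segments: list of (length_ft, risk_score) tuples."""
    return sum(length / score for length, score in segments)

# Line ABC segment data from Table 15.1: (length in ft, risk score).
baseline = [(4000, 50), (2000, 100), (1000, 80), (100, 20)]
replace_i244 = [(4000, 50), (2000, 100), (1000, 80), (100, 80)]  # I-244 score 20 -> 80
training = [(l, s + 5) for l, s in baseline]                     # +5 safety points everywhere

cr_base = cumulative_risk(baseline)       # 117.5
cr_repl = cumulative_risk(replace_i244)   # 113.75 (~3% improvement)
cr_train = cumulative_risk(training)      # ~107.5 (~8% improvement)
```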
Changes over time

Note that the CR can also demonstrate the natural risk increase over time. The risk model measures the relative risk at all points along a pipeline at a specific point in time. The risk numbers are therefore a snapshot: they represent all conditions and activities at the time of the snapshot. If inspections and maintenance are not done, the CR degrades. The most meaningful measure of changes in the risk situation will be how the risk score for the length of interest changes over time. So, changes in risk are easily tracked by comparing risk snapshots. This can be done for a specific point on a pipeline, an entire pipeline, or any stretch of any pipeline. It can also be done for any set of pipelines, such as "all pipelines in Texas," "all propane lines," "all mainlines," "all lines older than 20 years," and so on. The cumulative risk calculation also remedies the difficulties possibly encountered in tracking risk changes when segment boundaries change after every assessment. The CR can be calculated for any length of pipe, regardless of segment boundaries.

Cumulative risk example

As an example of the use of a CR calculation, Line ABC in Alabama at Hwy I-244 has a risk score of about 20. If a proposed 100-ft line replacement occurs, risk will be reduced: the risk score for that location will be improved about 400%, from 20 to 80. All of Line ABC in Alabama has a CR of about 117.5. The I-244 project improves about 100 ft of Line ABC in Alabama and improves the overall risks of Line ABC Alabama (as measured by CR) by about 3%. As is expected, a project can have a very dramatic impact on the local risk; its impact is lessened from a broader view. If an employee training and public education program is expanded and enhanced, the expected benefit will apply to all pipelines in Alabama. These efforts would change the CR score for Alabama from 117.5 to 107.5, an improvement of about 8%. A comparison of the two projects is shown in Table 15.1. A large risk change in a very short distance seems to do less for the overall risk in Alabama than does a small change to all pipelines in the state. Of course, the localized benefit to receptors near Hwy I-244 is dramatic.

Table 15.1 Comparison of two pipeline projects

                                Baseline                     Replace I-244                Training and public education
Location        Distance (ft)   Risk score  Cumulative risk  New risk score  Cumulative   New risk score  Cumulative
                                                                             risk                         risk
0 to 41+67      4000            50          80               50              80           55              72.7
41+67 to 59+20  2000            100         20               100             20           105             19.0
62+76 to 71+99  1000            80          12.5             80              12.5         85              11.8
I-244           100             20          5                80              1.25         25              4.0
Total           7100            250         117.5            (3%)            113.75       (8%)            107.5

V. Acceptable risk

Adjectives are often added to the word risk when discussing how much risk we "should" be exposed to. Examples include

- Acceptable risk levels
- Tolerable risk levels
- Justifiable risk levels
- Negligible risks
- Trivial risks.

There do not seem to be any universally accepted definitions of these phrases, and many are used interchangeably. Acceptable risk is seen more often in regulatory decision making and sometimes implies a negligible risk. On the other hand, to tolerate a risk does not necessarily mean it is regarded as negligible. Tolerability seems to refer to a willingness to live with a risk in order to secure certain benefits, perhaps in the confidence that the risk is being properly controlled. Risk tolerance/acceptability/justification is a complex topic with social and psychological implications. This book uses the term acceptable risk with the understanding that it includes implications of the other terms.

In general, society decides what is an acceptable level of risk for any particular endeavor. That level changes depending on the activity. What is acceptable for highway traffic deaths is generally not acceptable for pipeline accident deaths, for instance. Many social and economic considerations are thought to influence human risk tolerance. These are beyond the scope of this text. A main principle, however, is that risk reduction is a cost to society. Society weighs the costs of improved safety in a specific situation against alternate expenditures. Do we spend an extra dollar to spare one traffic fatality every 10 years? Or do we spend that dollar to feed a hungry child for 2 days? These types of value judgments help determine the acceptable risk.

Most determinations of acceptable risk level are made in comparison with other risks. Unless a risk is very high, it is usually not interpreted as a standalone value. It may also be examined with regard to how much risk it adds to an individual's other exposures. A public interest level will logically be involved in many risk decisions. The criterion of "one chance in a million" is familiar
to many. Many established criteria are set at or near this value for the consequence of "increased chance of fatality." One study suggests that North American society is not particularly concerned with risks that fall below this level [95]. So, "one chance in a million" can perhaps be seen as a "level of interest" indicator. This might be a valid basis on which to establish some risk criteria. See page 341 for more discussion of the "one chance in a million" criterion.

In the end, risk acceptability is a very personal judgment. No one wants to accept any risk without some accompanying benefit. The value of the benefit is very subjective, as is the perception of the level of risk, regardless of how many risk measurements are presented. There are many relatively trivial risks that disturb us greatly, while more threatening risks are of relatively little concern. For example, every year, thousands are accidentally electrocuted, yet there are no mass demonstrations against electricity or demands that distribution voltages be reduced [57]. Decisions about risk are made in many dimensions. The use of risk analysis requires interpretation, context, and an understanding of the analysis itself. When a regulatory body has to determine the "acceptability" of a risk, the determination is normally based on many things, such as the number of people exposed, societal benefits derived from the activity, precedents set by other approved activities, the degree of control over exposure, and many other factors.

An interesting aspect of risk acceptability is that, whether or not criteria are quantified, a risk tolerance level can be inferred from regulations or industry actions. The acceptable risk levels implied by regulations can be quantified with the assumption that currently observed accident frequencies are the result of adherence to minimum requirements.
This is somewhat complicated when regulations are performance based rather than prescriptive, thus requiring "enough" mitigation to offset threats rather than prescribing exactly which specific actions and what frequency of action are required. In that case, common industry practices will normally arise from such performance-based regulations, and those can be used to infer the current acceptable risk levels. Similarly, an individual company's level of "acceptable risk" can be back-calculated from its actions, even when no such quantification is offered by the company. For example, if a company performs actions based on strict adherence to minimum regulatory requirements and has systems that are similar to most other systems, then its risk levels should be similar to those of all other companies following similar protocols and operating in similar environments. The performance record of the entire population of pipeline systems has therefore been implicitly judged to be acceptable by the company. Such estimations will, of course, be very uncertain, because many assumptions of similarity must be made and, even then, finding comparative failure rate data in sufficient quantity to draw meaningful conclusions will be difficult. Nevertheless, the idea that acceptable risk levels can be inferred from actions themselves is an interesting concept: intuitive on some level, but with subtle implications.

As previously noted, an ironic phenomenon may occur in the quest for risk reduction in pipelining. Because most activities are cost driven, money spent in the name of safety may actually increase the overall risks. For example, if safety-enhancing spending is mandated for pipelines, the increased costs may drive more freight to alternate transportation modes. If these
alternate modes are less safe than pipelines, society's risk exposure has actually increased. Similarly, if an individual pipeline operator determines that current, regulatory-implied levels of risk are not acceptable, and it chooses to spend resources to reduce its risk levels, then it may incur some economic consequences from competitors who choose to accept higher risk levels. The higher-risk-tolerance operators may or may not incur additional costs as a result of tolerating the higher risk levels.

It has been expressed in several studies that societal risk aversion is inversely related to the number of potential fatalities: 1/n and 1/n², where n = number of fatalities, have each been proposed as "risk aversion functions" [95]. Such functions can be used to help quantify differences in risk or risk perception as population densities change.
VI. Societal versus individual risk

A distinction is often made between individual risk and societal risk. Individual risk provides an estimate of the risk to an individual at a specific location for a specified period of time. In many applications, individual risk is equivalent to the risk to "one or more individuals." An individual risk for a pipeline, with potential consequences expressed in terms of fatalities, might be expressed like this: "This pipeline presents a risk of 1.0E-06 chances of fatality per year." This is normally equivalent to saying "This pipeline has a one in a million chance of causing one or more fatalities per year." The individual risk is insensitive to the number of individuals present, but the time of exposure for an individual can be considered.

Societal risk is usually taken to mean the relationship between the frequency and the number of individuals that could suffer a specified harm, for instance, the annual risk of death of a large number of people in one pipeline incident. It considers the number of individuals exposed as well as their times of exposure. Because societal risk must aggregate many possible scenarios (such as various fatality count scenarios), FN curves such as those shown in Figure 15.1 are often used to display risks. (FN curves are also discussed in Chapter 14.)

An individual is obviously not exposed to the threat from the entire length of a multiple-mile pipeline simultaneously. Her maximum exposure occurs if she is very near (perhaps directly over) the pipeline 24 hours of every day. She is also exposed to pipeline failures some distance along the pipeline to either side. If she moves perpendicularly away from the line, her risk decreases, because she is exposed to less pipeline, based on simple geometry. So, under one approach, the risk per unit length of a pipeline can be used to estimate individual risk by determining the length of pipe that can affect a single point.
Logically, this length would be determined by using hazard zone calculations. A probabilistic risk assessment (PRA) is traditionally applied to process industry scenarios where the bounds of a perceived threat can be clearly defined. Applications to pipelines become more problematic, especially when comparisons are to be made with nonlinear facilities. For example, a 700-mile pipeline will have a societal risk proportional to 700 times its 1-mile risk, or 3696 times its 1000-ft risk, and so on, if unit-length risks are extrapolated over the full length. Compared to a chemical plant that can only impact a limited geographical area, the pipeline will appear to present a greater risk.
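The geometry described above can be sketched in a few lines. This is a simplified illustration, not the book's model: the hazard radius, failure rate, and conditional fatality probability used below are placeholder values, and a constant hazard radius along the line is assumed.

```python
import math

def interaction_length(hazard_radius_ft: float, offset_ft: float) -> float:
    """Length of pipeline that can affect a receptor located offset_ft
    (perpendicular distance) from the line, by simple chord geometry."""
    if offset_ft >= hazard_radius_ft:
        return 0.0
    return 2.0 * math.sqrt(hazard_radius_ft**2 - offset_ft**2)

def individual_risk(failure_rate_per_mile_yr, hazard_radius_ft, offset_ft,
                    p_fatality_given_failure, fraction_of_time_present=1.0):
    """Annual individual fatality risk estimated from a unit-length
    failure rate and the length of pipe that can reach the receptor."""
    exposed_miles = interaction_length(hazard_radius_ft, offset_ft) / 5280.0
    return (failure_rate_per_mile_yr * exposed_miles *
            p_fatality_given_failure * fraction_of_time_present)

# Receptor directly over the line, with an assumed 1250-ft hazard radius:
# the exposed length is 2 * 1250 = 2500 ft of pipeline.
ir = individual_risk(1.0e-4, 1250.0, 0.0, 0.1)
```

Moving the receptor away from the line shrinks the chord and therefore the risk, reproducing the "less pipeline by simple geometry" argument in the text.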
15/336 Risk Management
Figure 15.1 FN curve. (Log-log plot of the frequency of N or more fatalities per year, from 1.0E-08 to 1.0E-02, versus the minimum number of fatalities, from 1 to 1000; bands mark the ALARP region and the broadly acceptable region.)
Both societal and individual risks, when derived from a unit-length risk value, will be highly sensitive to the length of interest. Doubling the length will double the cumulative risks, if all other factors are constant. One study [67] recommends that standardized lengths of pipeline, predetermined based on population density, be used in risk calculations: 150 m for the highest population density and 100 m for suburban areas. Case Study C in Chapter 14 avoids the use of terms such as individual risk and societal risk and substitutes segment-specific risk and overall risk. The segment-specific risk is based on 2500 ft, which is derived from calculations that estimate a hazard radius from a pipeline failure to be about 1250 ft.
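An FN curve of the kind shown in Figure 15.1 plots the cumulative annual frequency of events causing N or more fatalities. A minimal sketch of that aggregation, using invented scenario numbers:

```python
def fn_curve(scenarios):
    """Build FN-curve points from (annual_frequency, fatalities) scenarios.

    Returns (N, cumulative annual frequency of N or more fatalities),
    sorted by N. Plotted on log-log axes this gives a curve like Figure 15.1.
    """
    ns = sorted({n for _, n in scenarios})
    return [(n, sum(f for f, m in scenarios if m >= n)) for n in ns]

# Hypothetical scenario set for one pipeline segment:
curve = fn_curve([(1e-3, 1), (1e-4, 5), (1e-5, 20), (1e-6, 100)])
```

The curve is non-increasing by construction: the frequency of "20 or more fatalities" can never exceed the frequency of "5 or more."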
VII. Risk criteria

Establishment of risk criteria is the common method by which risk acceptability is expressed. Setting and communicating risk criteria are obviously challenging and controversial (see risk communications, page 352). For example, documenting that, as a by-product of a certain activity, a certain number of fatalities per year are acceptable would be a part of such criteria but is controversial for obvious reasons. As previously noted, the notion of acceptable risk is central to risk management. Because acceptability is often linked to numerical criteria, the use of risk values expressed in absolute terms is often required. However, more qualitative criteria are also commonly used, as illustrated later. Areas where absolute risk acceptability criteria are more commonly seen include:

Regulatory approvals and standards
New designs deviating from accepted practices
Instances where conventional mitigation does not "appear to be" adequate.

Risk criteria bridge the gap between numerical risk estimates and decision criteria such as "insignificant risk" or "acceptable risk," which incorporate a value judgment. Some countries and local governing agencies have established numerical risk criteria while others have avoided them. As previously discussed, an argument can be made that, even in the absence of numerical criteria, acceptable risk levels are still established implicitly via regulations that dictate design, operations, and maintenance activities. When a regulatory authority establishes risk criteria or design and operations requirements, it is making a social and political decision, one that can be guided but not replaced by technical advice. It is impossible for risk criteria to represent with precision what is or is not acceptable to the public. Such value judgments vary within and between societies, and alter with time, recent accident experience, and changing societal values. Risk criteria can be established for risk as a whole or for its components of probability and consequence. In some cases, it is more appropriate to focus on a component. For example, if there is no opportunity to change the consequence portion of the risk, a probability-only criterion might be more useful.
In addition to the complex socioeconomic considerations, establishing pipeline risk criteria is an exercise that may draw from:

Pipeline risk assessments
Analyses of historical pipeline failure and consequence rates
Comparisons of risks from other similar and/or common activities
Comparisons with existing criteria in other areas (different countries, regions, etc.)
Comparisons with existing criteria for other structures or other industries.
Criteria can be established on the basis of human life safety, potential environmental damages, economic considerations (including the costs of failure), or other factors. Some factors can be seen to dominate certain types of pipelines. For example, a natural gas pipeline is perhaps best judged on the basis of life safety, whereas a crude oil pipeline is perhaps better held to an environmental damages criterion. Because so many nontechnical issues are embedded in risk criteria and no clear-cut guidance can be given from a purely technical standpoint, it is useful to review some existing criteria.
Qualitative criteria

Some practitioners have developed qualitative matrices to help decision makers evaluate risks. An example is shown in Figure 15.2. While such charts have their limitations, they do provide a framework from which decision makers can agree on terminology and assign relative risk levels. They therefore offer a tool to remove at least some subjectivity from the process of risk evaluation. Sometimes risk levels are classified using qualitative terms. For example:
Negligible: the occurrence of the event is very improbable and the consequence minor. No further action is required for this level of risk beyond regular reviews.
Low: the risks are considered manageable through appropriate mitigation measures that are in place to keep the risk at this level.
Intermediate: the risks are higher than desired and actions are required to reduce the risk to low, negligible, or ALARP.
High: risks that are considered intolerable and must be reduced to intermediate or lower.
Additional qualitative terms include tolerable, intolerable (or tolerable if ALARP), and broadly acceptable. Sometimes terms like these are also coupled with numeric risk criteria (see Figure 15.1) to define the boundaries between these regions. When this is done, it seems reasonable to assume that little further attention is paid to the qualitative terms. The concept of "as low as reasonably practical" (ALARP) is widely used throughout risk assessment and management. Safety regulators worldwide require hazardous industries to evaluate the risks associated with their manufacturing plants or processes. Generally, the philosophy is that the risks should be minimized wherever possible. Another approach to qualitative criteria attempts to avoid criteria altogether. That philosophy de-emphasizes risk criteria in favor of a "continuous improvement" approach. Under this philosophy, the risk manager is continuously evaluating and ranking risks and working to improve risks according to some predetermined strategy. There are no "pass/fail" criteria. No risk will ever be acceptable because attempts to improve are ongoing; lower risk portions will simply get less attention. This approach is appealing in many ways, particularly in that it avoids some "tough" decisions, and can indeed be useful in budget setting and other company-internal efforts. However, it will not support certain decisions and will not shield the practitioner from the fact that his actions can be used to infer a risk tolerance expressed in absolute terms (see page 335).
ALARP principle

The ALARP (as low as reasonably practical) principle is derived from the U.K. Health and Safety at Work etc. Act of 1974, which requires "every employer to ensure, so far as is reasonably practicable, the health, safety and welfare of all his employees." This is interpreted as requiring employers to adopt safety measures unless their cost is grossly disproportionate to the risk reduction [91].
Figure 15.2 Example of a qualitative risk criteria matrix. The matrix plots consequence severity, stated per receptor (people / assets / environment / reputation), against probability.

Probability (columns): Never heard of in industry | Has occurred in industry | Has occurred in company | Happens several times per year in industry | Happens several times per year in company

Consequence (rows, by receptor):
No health effect / No damage / No effect / No impact
Minor injury potential / Minor damage / Minor effect / Limited impact
Major injury potential / Localized damage / Localized effect / Considerable impact
Fatality / Major damage / Major effect / National impact
Multiple fatalities / Extensive damage / Massive effect / International impact
The ALARP principle recognizes that no industrial activity is entirely free from risk. It attempts to gauge the point where risk reduction has gone far enough. Achieving ALARP levels involves balancing reductions in risk against the time, trouble, difficulty, and cost of achieving them. The point at which the time, trouble, difficulty, and cost of further reduction measures become unreasonably disproportionate to the additional risk reduction achieved is ALARP. A widely referenced ALARP framework for risk acceptability is from the Health and Safety Executive (HSE) in the United Kingdom, which divides risks into three bands (Figure 15.3):

1. An unacceptable region, where risks are intolerable except in extraordinary circumstances, and risk reduction measures are essential.
2. A middle band, where risk reduction measures are desirable but may not be implemented if a cost/benefit analysis shows that their cost is disproportionate to the benefit achieved. In the United Kingdom this is known as the ALARP region, and risks are considered tolerable providing they have been made "as low as reasonably practicable." This ALARP concept can be seen in many other regulatory risk criteria.
3. A negligible region, within which the risk is tolerable and no risk reduction measures are needed. In the United Kingdom this is known as the broadly acceptable region, suggesting that the activity would be acceptable to a broad majority of the public; the term negligible is reserved for still lower risks.
To define the transitions between the zones, two levels of criteria are set:

1. A maximum tolerable criterion (or intolerable level), above which the risk is intolerable
2. A negligible criterion (or broadly acceptable level), below which the risk is insignificant.

The phrase acceptable risk has been defined as "a risk which has been evaluated in accordance with accepted practices and for which an informed decision to accept the frequency and consequence that comprise that risk has been made and documented" [91]. This is obviously a difficult definition to apply. In the United Kingdom, the procedure to show whether the risks on an installation are ALARP is as follows:

1. Estimate the risks and compare them with appropriate risk criteria. If they exceed the maximum tolerable criterion, then measures must be taken to make them tolerable; otherwise operations must cease. If they are broadly acceptable, the risks are ALARP and no further risk reduction measures need be considered, provided appropriate diligence is applied to maintain risks in this region. If they are in the ALARP region, continue as follows.
2. Identify a complete range of practicable risk reduction measures, based on best modern practice, focusing primarily on large risk contributors. Assess the feasibility and cost of the risk reduction measures.
When the cost and effort of mitigation measures are not disproportionate to the benefits, the measures should be considered practicable and should be put into effect. When it can be demonstrated that the cost and effort of implementation substantially outweigh the benefit of the risk reduction, then the measures are not considered reasonably practicable to implement. This demonstration should be sensitive to uncertainties in the risk estimates and in the treatment of aversion to high-fatality accidents. Once all measures have either been implemented (or the company is committed to implementing them) or been demonstrated to be not reasonably practicable, the risks are ALARP. HSE considers that following engineering codes and good safety management practices will in general produce an installation whose risks are tolerable, but further consideration of practicable risk reduction measures is also necessary to show whether risks are ALARP [91].
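The "grossly disproportionate" test can be sketched as a simple cost-benefit screen. All numbers below are placeholders: the value assigned to preventing a fatality and the disproportion factor vary by jurisdiction and by how close the risk sits to the intolerable line, and a real ALARP demonstration weighs far more than this single inequality.

```python
def alarp_test(annual_risk_reduction_per_person, people_exposed, years,
               measure_cost, value_of_preventing_fatality=2.0e6,
               disproportion_factor=3.0):
    """Sketch of an ALARP cost-benefit screen.

    A measure is treated as 'reasonably practicable' unless its cost is
    grossly disproportionate to the monetized benefit, i.e. unless
    cost > factor * benefit. Both the fatality valuation and the factor
    are assumed, illustrative inputs.
    """
    statistical_lives_saved = (annual_risk_reduction_per_person *
                               people_exposed * years)
    benefit = statistical_lives_saved * value_of_preventing_fatality
    return measure_cost <= disproportion_factor * benefit

# A $100,000 measure reducing risk by 1e-5/yr for 200 people over 30 years:
practicable = alarp_test(1e-5, 200, 30, 100_000)
```

Here the measure saves 0.06 statistical lives (a $120,000 benefit at the assumed valuation), so a $100,000 cost is well within the assumed disproportion threshold and the measure would be implemented.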
Table 15.3 Acceptable risk thresholds

Annual chance of fatality | Criteria
< 1.0 × 10⁻⁶ | Insignificant, no action justifiable
10⁻⁶ to 10⁻⁴ | Action to reduce risk may be warranted but should be justified on a cost/benefit basis
> 1.0 × 10⁻⁴ | Unacceptable, action to reduce risk mandatory

Source: Jaques, S., "NEB Risk Analysis Study, Development of Risk Estimation Method," National Energy Board of Canada report, April 1992.
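Thresholds like those in Table 15.3 translate directly into a screening function. The band labels below are paraphrases of the table, not official NEB wording:

```python
def neb_individual_risk_category(annual_fatality_risk: float) -> str:
    """Classify an individual annual fatality risk against the
    NEB-style thresholds of Table 15.3 (boundaries at 1e-6 and 1e-4)."""
    if annual_fatality_risk < 1.0e-6:
        return "insignificant: no action justifiable"
    if annual_fatality_risk > 1.0e-4:
        return "unacceptable: action to reduce risk mandatory"
    return "ALARP band: reduction justified on a cost/benefit basis"

category = neb_individual_risk_category(5.0e-5)  # falls in the middle band
```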
Although some examples can be found specifically for pipelines, it is also useful to examine numerical acceptable risk criteria established for land-use planning, worker safety, and other industries. Table 15.2, based on Ref. [37], compares governmental risk criteria from various sources.

Examples of numerical criteria

Canada

In one Canadian NEB study, acceptable risk thresholds for individuals are defined as shown in Table 15.3. Canadian design guidelines for pipelines state that accidental loadings occurring less frequently than 10⁻⁴ per year do not have to be considered. Impacts from machinery onshore and from vessels, anchors, trawl boards, and dropped objects are noted as examples of accidental loadings. In a sense, this too is a criterion because it implies that, from a design standpoint, occurrences (at least of the accidental type) less frequent than 10⁻⁴ annually are not significant enough to be incorporated into a design.

The Major Industrial Accidents Council of Canada has recommended criteria for land-use planning around hazardous industries, based on increased fatality rates, as follows:

Residential and institutional: 1 × 10⁻⁶ per year
Commercial and low-density residential: 1 × 10⁻⁵ per year
Industrial and active open space: 1 × 10⁻⁴ per year.

These criteria apply to new land uses and assume that an active emergency plan is in effect. If there is no emergency plan, the criteria are reduced by a factor of 10 [91].

U.K. Health and Safety Executive

U.K. HSE documents are frequently quoted in risk reports and studies. Individual risk of fatality criteria are:

Maximum tolerable risk for workers: 10⁻³ per year
Maximum tolerable risk for the public: 10⁻⁴ per year
Broadly acceptable risk: 10⁻⁶ per year.

These criteria were originally published for nuclear power stations, but subsequent documents have indicated that they should apply to any large industrial plant in any industry [91].
Table 15.2 Examples of individual risk criteria for the public

Authority | Application | Maximum tolerable risk (per year) | Negligible risk (per year)
Ministry of Housing, Physical Planning, and Environment (VROM), The Netherlands | New plants | 10⁻⁶ | Not used
VROM, The Netherlands | Existing or combined plants | 10⁻⁵ | Not used
VROM, The Netherlands | Transport | 10⁻⁴ | Not used
HSE, United Kingdom | Existing hazardous industry | 10⁻⁴ | 10⁻⁶
HSE, United Kingdom | New nuclear power station | 10⁻⁵ | 10⁻⁶
Advisory Committee on Dangerous Substances, United Kingdom | Existing dangerous substances transport | 10⁻⁴ | 10⁻⁶
HSE, United Kingdom | New housing near existing plants | 10⁻⁵ | 10⁻⁶
Hong Kong Government | New plants | 10⁻⁵ | Not used
Department of Planning, New South Wales | New plants and housing | 10⁻⁶ | Not used
Environmental Protection Authority, Western Australia | New plants | 10⁻⁶ | Not used
Santa Barbara, California, USA | New plants | 10⁻⁵ | 10⁻⁷
HSE risk criteria for land-use planning

The individual risk criteria are based on the concept of a "dangerous dose," rather than risk of death. The criteria for a housing development of 10 houses (25 people) are:

Substantial risk (HSE advises against): 10⁻⁵ per year
Negligible risk (HSE does not object): 10⁻⁶ per year

For developments with individual risk between these limits, HSE's advice depends on the size and vulnerability of the development. For a development of 30 houses (75 people), 10⁻⁶ per year is taken as the "substantial risk" level. For highly vulnerable or very large facilities, a criterion of 3 × 10⁻⁷ per year is used. The criterion of 10⁻⁵ per year for a dangerous dose corresponds to an average risk of fatality of about 3 × 10⁻⁶ per year; 10⁻⁶ per year therefore translates to a fatality risk of 3 × 10⁻⁷ per year. A widely referenced HSE risk criteria chart is the modified FN curve shown in Figure 15.3.
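The dose-to-fatality conversion implied by the numbers above (a dangerous-dose risk of 10⁻⁶ per year corresponding to a fatality risk of about 3 × 10⁻⁷ per year) amounts to a constant factor of roughly 0.3. A one-line sketch, with the factor treated as an assumption inferred from this text rather than an HSE-published constant:

```python
# Assumed factor: the text's 1e-6/yr dangerous-dose risk maps to ~3e-7/yr
# fatality risk, i.e. roughly a 30% chance that a dangerous dose is fatal.
DOSE_TO_FATALITY = 0.3

def fatality_risk_from_dose_risk(dangerous_dose_risk_per_yr: float) -> float:
    """Convert an annual 'dangerous dose' risk to an annual fatality risk."""
    return DOSE_TO_FATALITY * dangerous_dose_risk_per_yr
```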
Studies

Many studies have examined existing criteria, both those established directly by regulatory bodies and those implied by regulations and design practices. Some excerpts from those studies are examined here. In examining existing criteria, previous studies, failure rates, and safety factors for limit state design criteria, preliminary work performed for Canadian pipeline regulations established some suggested "target failure probabilities" for pipeline design [95]. These suggestions and conclusions include the following:

Human life safety considerations govern the cases of gas and highly volatile liquid (HVL) pipelines and lead to a target failure probability of 10⁻⁶ per kilometer per year for such pipelines. This corresponds to a societal life risk of less than 10⁻⁷ per person per year.
Environmental damage potential governs the case of low vapor pressure liquids and also leads to a target failure probability of 10⁻⁶ per kilometer per year for these pipelines.
Sour gas pipelines near populated areas should have a target failure probability set an order of magnitude lower than that for other gas pipelines.
Figure 15.3 Framework for risk criteria. (The diagram shows three bands. Unacceptable region: risk cannot be justified except in extraordinary circumstances. ALARP region (risk is undertaken only if a benefit is desired): tolerable only if risk reduction is impracticable or if its cost is grossly disproportionate to the improvement gained; tolerable if the cost of reduction would exceed the improvement. Broadly acceptable region (no need for detailed working to demonstrate ALARP): necessary to maintain assurance that risk remains at this level. Below this lies negligible risk.)
Other structures

Because a pipeline is an engineered structure placed in public areas, it is also useful to examine risk criteria established for other structures. Building codes imply a level of acceptable risk (see Tables 15.4 and 15.5). These relate to hazards in structural design and do not include the probability of failure due to human error or material degradation [95].

Table 15.4 Reliability levels for buildings (CSA-S408, 1981)

Consequences | Annual target reliability, gradual failure | Annual target reliability, sudden failure
Very serious | 10⁻⁵ | 10⁻⁷
Serious | 10⁻⁴ | 10⁻⁶
Not serious | 10⁻⁴ | 10⁻⁵
Serviceability | 10⁻¹ to 10⁻²

Source: Zimmerman, T., Chen, Q., and Pandey, M., "Target Reliability Levels for Pipeline Limit States Design," presented at ASME International Pipeline Conference, 1996.

Table 15.5 Reliability levels for offshore structures (CAN/CSA-S471-92, Appendix A)

Safety class | Consequences | Annual target reliability
1 | Great risk to life or high potential for environmental pollution or damage | 10⁻⁵
2 | Small risk to life and low potential for environmental pollution or damage | 10⁻³
Serviceability | Impaired function | 10⁻¹

Source: Zimmerman, T., Chen, Q., and Pandey, M., "Target Reliability Levels for Pipeline Limit States Design," presented at ASME International Pipeline Conference, 1996.

"One in a million" as acceptance criteria

One of the most prevalent absolute risk criteria in common use today is 10⁻⁶, or one chance in a million. This number can be found in many applications, such as regulations for pesticides and food additives, environmental contamination limits for groundwater and air quality, incremental increases in cancer deaths, and even some pipeline risk guidelines in several countries. Reference [46], entitled "The Myth of 10⁻⁶ as a Definition of Acceptance Criteria," argues that there is no sound scientific, social, economic, or other basis for the selection of 10⁻⁶ as a criterion for an acceptable level of risk, and that the number has never received widespread debate or even thorough regulatory or scientific review. The origins of the value were traced back to a 1961 article in which two researchers had chosen this value (arbitrarily, they later said) as a definition of "safety" for use in their research on animal studies and cancer-causing substances. Regulatory agencies apparently later adopted this number as the "maximum lifetime risk that is essentially zero" from a regulatory standpoint. This in turn seems to have evolved into an acceptable level of risk for a number of applications.

VIII. Decision points

Numerical criteria

A numerical risk criterion provides one clear decision point for risk management, as discussed in this chapter. However, given the uncertainty in risk estimates and the compromises inherent in any numerical risk criteria, it is usually only one, very high level consideration in risk management. It might be a starting point from which detailed risk management can begin. For example, a numerical risk assessment might demonstrate that the entire project is within an "acceptable" or "ALARP" zone. This may only suggest that the project is viable or can be made viable from a regulatory perspective. It might also suggest a level of effort that might be appropriate in examining specific portions of the pipeline for mitigation opportunities. More effort should obviously be expended when the risk estimates are closer to the "intolerable" zone. This high-level decision point does not, however, provide much guidance on the many risk-impacting decisions an operator routinely makes.
Data-based criteria

The analysis of scores from a relative risk assessment can lead to the establishment of action triggers. Chapter 8 discusses some data analysis techniques that might be useful in using risk assessment data to make risk management decisions. In the discussion of frequency distributions, it was noted that most measurable events do not form haphazard distribution shapes. They tend to follow distinct, characteristic patterns. Some patterns have better predictive capabilities than others. The ability to reasonably assume these patterns led to the practice of establishing decision points. The use of decision points is a disciplined methodology to distinguish "signals" from "noise" in data. A decision point is a value beyond which a data point is thought to be an outlier: a data point that is not the same as the other data points. Within the boundaries of the decision points, data values are thought to be alike; that is, they all have the same forces acting on them. Differences in data values within the decision points are attributed to noise: measurement errors (see Chapter 1) and common, random forces acting on the data. It is not productive to single out a data point in this region for further study because all points are thought to be essentially equal products of the overall system. On the other hand, an outlier should be investigated to determine the nonrandom, non-common causes that forced this data point to fall outside the decision region. Depending on the shape of the data distribution, other decision criteria can be established within the boundaries of the decision points already set. For example, in any symmetrical distribution, it is expected that 50% of the data will fall on either side of the average line. The possibility of obtaining a long string of consecutive values always on one side of the average becomes increasingly remote as the string gets longer. At some point, perhaps after seven or eight consecutive points, it should be assumed that some non-common causes are at work. That is, some new "force" has been introduced into the system and should be investigated. The following four example cases illustrate the response to some possible data distributions (often viewed as a histogram of risk scores or index sum scores). In these examples, lower scores represent lower safety and, hence, higher risks, the same scoring protocol used in Chapters 3 through 6 of this book.

Case 1: Extreme outliers. Description: Some low scores are more than 3 standard deviations away from the overall average. The "3 standard deviations" threshold is a common decision point based on statistical analysis: measurements that far from the average of a normal distribution have a high probability of being influenced by nonrandom effects. In other words, these measurements are statistically different from other measurements and probably warrant special attention. Response: Implement immediate mitigation measures to bring higher risk scores to within 3 standard deviations within 6 months.

Case 2: Frequent outliers. Description: Lowest scores are more than 2.5 standard deviations away from average. Response: Implement mitigation to bring these scores to within 2.5 standard deviations within 1 year.

Case 3: Infrequent outliers. Description: Low scores do not meet Case 1 or 2 above but are still distinct outliers, based on visual examination of graphs. Response: Implement mitigation to bring these scores to "the edge" of the main population within 1 year.

Case 4: Uniform. Description: Tight range of risk scores (no apparent outliers). Response: Perform preferential mitigation with the percentage of total (mileage-normalized) mitigation proportional to the distance from the median. Pipeline segments farther from the population average will receive proportionally higher levels of mitigation.
A formula can be developed to dictate precisely the level of mitigation for each section, even to the extent of reducing mitigation on the safest sections in order to redirect resources to higher risk sections.
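The cases above can be sketched as a score classifier plus one possible Case 4 allocation formula. The standard-deviation thresholds follow the text; the sample scores, the use of the sample standard deviation, and the specific proportional formula are illustrative assumptions.

```python
from statistics import mean, median, stdev

def classify_scores(scores):
    """Cases 1-2 sketch: flag low scores by how many (sample) standard
    deviations they fall below the average; low score = higher risk."""
    mu, sigma = mean(scores), stdev(scores)
    labels = {}
    for s in set(scores):
        z = (mu - s) / sigma  # positive z means below average (riskier)
        if z > 3.0:
            labels[s] = "extreme outlier: mitigate within 6 months"
        elif z > 2.5:
            labels[s] = "frequent outlier: mitigate within 1 year"
        else:
            labels[s] = "within decision points"
    return labels

def mitigation_shares(scores):
    """Case 4 sketch: share of mitigation effort proportional to each
    section's distance below the median score."""
    med = median(scores)
    gaps = [max(med - s, 0.0) for s in scores]
    total = sum(gaps) or 1.0
    return [g / total for g in gaps]

labels = classify_scores([50] * 19 + [10])  # the score of 10 is > 3 sigma low
shares = mitigation_shares([40, 50, 60])    # all effort goes to the low section
```

Note that with small samples a single extreme value inflates the standard deviation itself, so these thresholds behave sensibly only when the main population is reasonably large.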
Precedent-based criteria

Even without a formal risk management system, certain levels of risk have always triggered immediate action. A trigger or action point can be seen as the risk level that is not tolerable, not even for a short time. One trigger point for many operators is an active leak of such magnitude that damages could occur. This trigger point is obviously a reaction to a failure that has already happened. Nevertheless, it can also be viewed as a risk that is no longer acceptable.
Continuous improvement

Establishing an absolute level of acceptable risk is often not the best approach to risk management. The first problem with an absolute level is the inherent inaccuracy of failure probability and consequence calculations. Addressing risks above a certain threshold and ignoring any risks below that threshold requires more confidence in the risk assessment's accuracy than is probably prudent. Another problem is the realistic premise that risk tolerance is not a fixed value in any company. In the complexities of the business world, it is influenced by economic conditions, public perception, and political conditions. A further complication is the need for a time factor in setting a risk tolerance. A certain level of risk is tolerable for some period of time, until the situation can be efficiently addressed. For instance, shallow cover may not require immediate attention and can be addressed in conjunction with other work planned in the area. At some level, however, the risk is seen to be so unacceptable that immediate action, even the shutdown of the pipeline, may be warranted. Rather than setting a full range of definitive action points, the more prudent approach to risk management is thought to lie in prioritizing. The choice of a relative risk assessment system reflects this. In a prioritization approach, the operator is always ranking portions of the system based on the level of risk. This ranking in turn generates a list of possible projects to reduce the risk level. More resources may then be allocated toward changing the risk level of the worst sections first, and then progressing down the list. In most cases, the amount of available resources will then set the de facto level of acceptable risk, since money usually runs out before the list of "things to do" is exhausted.
IX. Risk mitigation

Where to start

As discussed in previous sections, identifying the need for, and the appropriate aggressiveness of, risk reduction efforts can be a very complex process. In the face of generous amounts of new information, the new risk manager might well feel overwhelmed and need to ask, "Where do I begin?" The experienced risk manager will usually immediately see a host of things she can now do more quantitatively where previously she was equipped mostly with opinion. She will see the advantages stemming from standardized valuations of risk conditions and mitigations, avoiding competition for resources and much uncertainty in decision making. Under the continuous improvement philosophy noted earlier, a fundamental premise is that the risk management process will not reach a conclusion. That is, there is no threshold level of risk that, once attained, will result in the end of the program. Rather, it is assumed that some amount of resources will always be applied to risk reduction and that a universally acceptable risk level will most likely never be attained. Issues of risk tolerance for individuals and the public in general can be examined in this regard. The notion of a continuous effort of risk reduction is also a realistic premise, since the risk level tends to naturally increase with time. The aging of the infrastructure, time-dependent failure mechanisms, and encroaching population density are mechanisms that help to increase risks over time. Increasing competition and regulation are two mechanisms that tend to reduce the amount of available resources. A simple and entirely appropriate initial approach is to create lists of system components rank ordered by overall risk and/or specific failure modes, and then focus attention and discretionary resources on the segments showing relatively higher risks. This focus, coupled with the increased understanding of the underlying risk issues, will usually lead to more detailed strategies for overall risk management.
Prioritization

The scores from the risk assessment should represent the best information available on each pipeline section. Basing prioritization and resource allocation decisions on the risk scores is therefore a defensible, traceable process. The highest and lowest rated pipeline sections from a prioritized risk ranking are significant to risk management. Because the lowest scores show the lowest safety (highest risk), a disproportionate amount of resources is justifiably spent on them. Recognizing that the amount of resources is limited, reducing the spending on the safest sections in order to improve the safety of the highest risk sections may similarly be justified. This is especially true for discretionary spending: the portion of the budget that can be allocated on a basis other than regulated activities and direct revenue-generating activities. In prioritizing segments, it is important to look beyond summary numbers such as the risk and index sum scores used in this book. To ensure that a deficiency in one index is not being masked by an excess in others, it is important to examine each index independently, in addition to the overall index sum. This can also be done by converting the index sum scores into failure probability scores, to better capture the worst case index score as discussed on page 300. In either case, this examination can be combined with the consequence (LIF) prioritization to develop risk management strategies for each failure mode.
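The masking problem described above can be checked mechanically. In this sketch the index names, scores, and the floor of 15 points are all hypothetical; the point is only that two segments with identical index sums can differ sharply in a single index.

```python
def masked_deficiencies(segment_indexes, floor=15.0):
    """Flag segments whose overall index sum may look fine but which have
    a single weak index hidden by strong scores elsewhere.

    segment_indexes: {segment_id: {index_name: score}}; 'floor' is an
    assumed minimum acceptable score for any single index.
    """
    flags = {}
    for seg, idx in segment_indexes.items():
        weak = [name for name, score in idx.items() if score < floor]
        if weak:
            flags[seg] = weak
    return flags

segments = {
    "A": {"third_party": 80, "corrosion": 10, "design": 75, "incorrect_ops": 70},
    "B": {"third_party": 55, "corrosion": 60, "design": 58, "incorrect_ops": 62},
}
# Segments A and B have identical index sums (235 points each),
# but A hides a serious corrosion deficiency behind strong indexes.
flagged = masked_deficiencies(segments)
```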
Resource allocation

At the center of risk management is the need to properly allocate scarce resources. Managers strive to control the "right" risks with limited resources; there will always be limits on the amount of time, manpower, or money to apply. Risk can be reduced through the allocation of new resources or through redistribution of the existing level of available resources. This must be done with consideration given to often-conflicting factors such as:

Availability of resources (money) to address the needs
Relative risks that currently exist within the system
The costs and benefits of various operations, maintenance, and capital expenditure choices
The rate at which improvement needs to progress.

The rate of spending is a key issue in resource allocation. As spending is thought to improve the risk situation, questions arise as to how fast improvements should be made and, if resources are being reallocated, how much increased risk on the currently safer sections can be tolerated. A resource allocation strategy should be developed with the risk assessment results serving as a key measurement of the effectiveness of the strategy.
Mitigation options

In general, it is preferable to reduce risk by reducing the probability of failure, that is, the variables in the index sum. Reducing or minimizing potential consequences is usually more difficult because it would involve changing some aspect of the product stream and/or the pipeline's surroundings to effect the greatest change. Emergency response and leak detection are, however, very realistic opportunities to reduce consequences.

By its nature, the risk assessment model points to risk mitigation opportunities. Each variable that is measured as a risk contributor can, at least theoretically, be changed to effect a reduction in risk. In practical terms, changing certain items is of course much more attractive than changing others. For example, when pipe replacement is considered as a risk reduction option, preference should obviously be given to the higher risk sections. If increased cathodic protection is considered as an option to prevent corrosion damage, preference should be given to higher risk sections, possibly even to the point of reducing or temporarily eliminating activities in low-risk areas. While intentionally increasing the risk in an otherwise safe area should only be done after careful and thoughtful analysis, it must be recognized that, when additional resources are not available, redistribution of existing resources may be prudent.
Consequence-dominated risks

As noted before, it is usually preferable to reduce risk by decreasing failure potential. Options for reducing potential consequences are normally fewer and more problematic. Examples of consequence reduction measures include changing the product type or pressure, installing secondary containment, relocating the pipeline or removing receptors, reducing the pipe diameter or flowrate, and improving leak detection and emergency response. A pressure reduction somewhere on the system, for instance, would require significant changes elsewhere to ensure adequate product deliveries.

It is recognized that, occasionally, the options selected for reducing failure probability are not enough to bring the risk level to an acceptable level (by whatever acceptability criteria are chosen). If this occurs, the suggested approach is as follows:

1. Determine to what level the index sum would need to be increased in order for this risk to be brought in line with "normal" risk exposures.
2. Determine whether this score level is possible.
3. Determine whether this score level is feasible, with spending constraints considered.

If it is determined that a high enough index sum is not possible, investigate possible changes to the LIF. Changing the LIF means changing the potential consequences of a failure. The answers to the following questions lead to possible LIF changes:

- Can the pressure be reduced?
- Can the pipeline be rerouted?
- Can the potential spill size be reduced?
- Can emergency response actions be upgraded to reliably reduce consequence potential?
As part of investigating more extreme measures, it is sometimes useful to convert the relative risk value to an absolute risk value. This is discussed in Chapter 14.
Cost of mitigation

From an economic perspective, the lowest cost risk reduction options can (and probably "should") be exhausted before the more expensive options are considered. A risk assessment model values activities based on their risk-reducing benefit, with no consideration given to the cost of the activity. Therefore, the least expensive risk points can be sought when spending is directed to a specific index. It is a simple matter to establish cost/benefit ratios for possible actions and use them as at least a partial basis to prioritize or fund projects. Projects with lower costs (both initial and ongoing) and larger impact on risks are obviously more desirable. An example analysis is shown in Table 15.6.
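A cost/benefit screen of this kind is straightforward to compute. The project names and values below are hypothetical, in the spirit of Table 15.6; the "benefit" here is taken to be the cumulative (system-wide) risk reduction, and projects are ranked by cost per percentage point of risk reduced.

```python
# Hedged sketch of a cost/benefit project screen. Values are hypothetical
# illustrations in the spirit of Table 15.6, not figures from the book.

projects = [
    # (name, NPV cost in $K, cumulative risk reduction in %)
    ("Pipe replacement", 82, 0.2),
    ("Increased training/procedures", 25, 4.2),
    ("Upgrade cathodic protection", 46, 0.07),
    ("Recoat 400 ft", 2, 0.8),
]

def cost_per_point(cost_k, reduction_pct):
    # $K spent per percentage point of system-wide risk reduced;
    # lower ratios identify the "least expensive risk points."
    return cost_k / reduction_pct

ranked = sorted(projects, key=lambda p: cost_per_point(p[1], p[2]))
for name, cost, red in ranked:
    print(f"{name}: ${cost}K for {red}% -> {cost_per_point(cost, red):.1f} $K per %")
```

With these illustrative numbers, the inexpensive recoating job buys risk reduction far more cheaply than the capital-intensive options, which is exactly the pattern such a screen is meant to expose.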
Land-use issues

Often in the course of a risk evaluation, someone asks, "Why not just locate the pipeline away from sensitive receptors?" This could be desirable, of course, and is usually one of the first mitigations explored. However, it normally involves trade-offs such as tremendous expense, and it often just moves the threat to a different location and exposes different receptors. Some communities (and countries) have enacted building setback distances and zoning requirements to keep a separation between pipelines and the public. Such regulations are very challenging since they trigger many sociopolitical issues: What about pipelines that were there first? Is it really in the communities' best interest to restrict development on large tracts of land in order to avoid very low probability events? What about smaller, low-pressure pipelines? (Note that distribution and service gas and propane lines are commonly needed in very congested urban areas.)

Some hypothetical projects, with example cost/benefit values, are shown in Table 15.6. Columns 4 and 5 show the relative change in failure probability and risk for the segment where the work is to be performed. Column 6 shows what impact the project has on overall, system-wide risk. Note that some actions have a very location-specific impact (high numbers in column 4) but not necessarily a large system-wide impact (column 6). Column 5 values represent the location-specific change in overall risk and are often the change seen in column 4 divided by the LIF (unless the project changes potential consequences, as in the last action). Other options, such as action 2, have a system-wide impact, as is shown in column 6. See the discussion on cumulative risk calculations earlier in this chapter.
X. Costs
An American physicist wrote in 1990, “One out of five American deaths is from smoking. . . . If we spent as much per untimely death caused by smoking as we do on coal mine safety, there would be no money left in the United States for any other purpose-it would require the entire gross national product” [57]. Risk perception and decisions regarding acceptable risks are not always logical and fact based. This was addressed earlier in this chapter and will be discussed further in the Risk communications section later in this chapter.
Cost/risk relationships

The costs associated with pipeline safety cannot realistically be ignored when practicing risk management. Collecting the costs and linking them with specific risk activities is a step that allows decision makers to allocate resources optimally. An operating discipline that documents all aspects of the operation can then be built. Each day, a pipeline operating group performs a variety of activities that are driven by initiatives such as

- Ensuring that transportation obligations are met
- Compliance with government regulations
- Conformance with industry standards
- Continuance of previous operating habits.

Each of these activities has an associated cost. There is also an opportunity cost, since the resources could be used in alternative ways.
Table 15.6 Sample mitigation project cost/benefit analysis. Column headings: (1) Action; (2) Cost NPV ($K); (3) Failure mechanism impacted; (4) Reduction in relative failure probability in segment (%); (5) Risk reduction in segment (%); (6) Cumulative risk reduction, pipeline wide (%).

(1) Action                                    (2)   (3)                                 (4)   (5)   (6)
1000-ft pipe replacement                      82    All                                 76    14    0.2
Increased training/procedures                 25    Incorrect operations                17    4.2   4.2
Upgrade cathodic protection                   46    Corrosion                           19    3.5   0.07
Maps/records improvements                     33    Third party; incorrect operations   8     2.1   2.1
Information management system improvements    22    All                                 5     0.5   0.5
Recoat 400 ft                                 2     Corrosion                           8     6     0.8
Managing risk necessarily means managing costs. “Intelligent spending” practices are needed: not spending too much, yet spending enough to minimize the risk. Tracking the costs of risk elements, especially risk-reducing activities, must therefore become part of the management process. The operator destined to remain in business will inevitably weigh the costs of risk avoidance against economic returns from improved system productivity. The operator’s goal is ultimately to achieve a judicious balance between the risk of pipeline failure and financial gains. By assigning a cost to pipeline accidents (a sometimes difficult and controversial thing to do) and including this in the cost of operations, the optimum balance point is the lowest cost of operations.
Estimating costs of mitigation

The total cost of a risk reduction measure includes

- Costs of capital investment (e.g., purchase and installation of new safety hardware), written off over an assumed working lifetime of the measure at an appropriate discount rate
- Operating expenditure (e.g., on annual safety training, extra staff, maintenance, etc.)
- Lost profits if the measure involves withdrawing from an activity altogether.

Extra operating costs from safer working practices are not normally included, because they are assumed to be balanced by cost savings from the generally more efficient operation.

A detailed cost evaluation for each activity can be very useful data in risk management. When options are considered, it is important to capture not only the initial cost but also the ongoing cost of ownership. Some simplifying assumptions can be made that are not thought to materially diminish the accuracy of the conclusions. The added complexity of a net present value (NPV) calculation is, however, probably important enough to include in this analysis. A "higher initial cost/lower ongoing cost" scenario must often be compared with a "lower initial cost/higher ongoing cost" alternative. NPV provides a method for doing this. The NPV calculation will require the use of a percentage rate designed to capture the "value of money" in the current and foreseeable economic conditions.

Spreadsheets can be readily developed to estimate costs associated with various pipeline activities. Initial costs and recurring costs over 10 years or some other time period can be combined in an NPV calculation. Cost estimates should be derived from data collected on recent projects and from comparable costs seen elsewhere. As better cost data become available, they are easily added to a properly designed spreadsheet. For example, by simply modifying the unit cost value, the spreadsheet should automatically correct all related calculations. A fixed percentage overhead charge can be added to all estimates. 
This is added to include normal overhead costs associated with buildings, support staff, equipment, consumable supplies, taxes, etc.
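The NPV comparison described above can be sketched directly. The discount rate, time horizon, and the two options' costs below are hypothetical assumptions, inserted only to show the mechanics of comparing a "higher initial cost/lower ongoing cost" scenario against its opposite.

```python
# Sketch of the NPV comparison described in the text. All dollar figures,
# the 8% discount rate, and the 10-year horizon are hypothetical.

def npv(initial, annual, rate=0.08, years=10):
    """Net present value of an initial cost plus recurring annual costs."""
    return initial + sum(annual / (1 + rate) ** t for t in range(1, years + 1))

option_a = npv(initial=100_000, annual=5_000)    # high up-front, cheap to run
option_b = npv(initial=20_000, annual=18_000)    # cheap up-front, costly to run

print(f"Option A NPV: ${option_a:,.0f}")
print(f"Option B NPV: ${option_b:,.0f}")
# Under these assumptions, option B's recurring costs outweigh its
# lower initial cost, so option A is the cheaper choice on an NPV basis.
```

As the text notes, updating a single input (here, a rate or an annual cost) immediately propagates through the comparison, which is the point of building the calculation into a spreadsheet or script.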
Cost/risk curve

While definitive, quantitative cost/risk relationships are not presently within our grasp, we can note some useful qualitative relationships. Intuitively, we imagine a curve (Figure 15.4) showing the relationship between pipeline operation risk and costs. Significant risk reduction can be achieved, up to a point, by increasing expenditures. Diminishing returns occur as further risk reductions are only available at higher and higher costs. In zone 1 of this curve, not enough is being done in the interest of pipeline safety: little money is being spent and risks are high. In zone 3, possibly too much is being spent. Each increment of risk reduction is achieved at an increasingly higher cost; increasingly large expenditures are required for even modest risk improvements. This probably indicates that the point of diminishing returns has been passed. Zone 2 is the idealized part of the curve, where expenditures on pipeline risk reduction are better balanced with actual risk reduction. Within zone 2, the operator still has many options in selecting the optimum position on the curve.

Note that pipeline operators have always positioned themselves on such a curve. Only recently, however, has it become more common to measure or document that position. This is necessary to ensure a disciplined approach and operational consistency. Such documentation also provides a defensible record for the sometimes difficult choices that are often involved in managing an operation.

Plotting different pipelines, or even different sections of the same pipeline, produces a family of curves (Figure 15.5). Curve A may represent a pipeline with higher risk than curve B, due to a greater probability of failure or greater consequences should a failure occur. Alternatively, curve A may represent a pipeline with the same risk as curve B, but with greater associated costs. Higher costs may reflect higher labor or material costs (perhaps geographic differences), or they may reflect the operational choices made by the operator. Adding or deleting risk-reducing activities changes the position along the curve. Doing the same activities more or less efficiently shifts the curve itself (to the left when the activities are done at a lower cost).

Figure 15.4 Idealized risk/cost relationships (risk vs. cost, with zones 1 through 3 along the curve).

Figure 15.5 Risk/cost curves for two pipeline sections.

Cost/benefit of route alternatives

It is usually not practical to assign a cost to an unchangeable condition along the pipeline. Examples include soil conditions, nearby population density, potential for earth movements, and nearby activity level. The exception might be when alternate
routes are considered. In this case, a less expensive route alternative may be assigned a “route penalty,” expressed in risk points, as an offset to the cost savings. This in effect assigns a cost to the condition(s) causing the increased risk. For example, pipeline route A might be shorter than pipeline alternate route B. The shorter distance results in a savings of $265,000 in materials and installation costs. However, route A contains incidences of AC powerline presence, swampy (more corrosive) soils, the presence of more buried foreign pipelines, and a higher potential incident rate of third-party damage. Even after mitigating measures, these additional hazards cause the risk score for route choice A to be lower by 14 points (more risk than route B). In effect then, those risk points are worth $265,000/14 = $19,000 each. A difference in pipeline routes involving differing population densities would result in even more pronounced impacts on risk score.
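The route-penalty arithmetic above is simple enough to verify directly; this sketch reproduces the example's own numbers.

```python
# Reproducing the route-penalty arithmetic from the text: the cheaper
# route's additional risk points are priced against its cost savings.

savings = 265_000          # $ saved by choosing shorter route A
risk_point_penalty = 14    # route A scores 14 points worse than route B

dollars_per_point = savings / risk_point_penalty
print(f"${dollars_per_point:,.0f} per risk point")
# The exact figure is $18,929 per point; the text rounds to about $19,000.
```
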
Efficiencies

Each segment of pipeline analyzed will have a signature cost/risk curve (see Figure 15.5). The shape and position of the curve are determined by all aspects of the pipeline's operation and environment. Note the relationship between total quality management (TQM) and risk management: in TQM, efforts are made to shift the entire curve up and to the left (getting more for less cost); in risk management, efforts are directed at moving along the curve (positioning to an "acceptable level of risk").

For convenience, it may be desirable to assume that activity costs have been optimized. That is, the costs represent the activity being done in the most efficient, cost-effective manner. This is rarely true in practice. In a search for additional resources, the very first place that should be examined is the amount of waste in current practices. Nonetheless, optimized costs are a useful simplification when first setting up cost/benefit relationships.

Cost/risk modeling

A general approach to cost analysis might be to first determine the general trends of cost versus the risk score of specific variables. A cost/risk mathematical relationship can be theorized for each mitigation activity. For most management decisions, this relationship need not be highly precise. The user is usually interested in comparing the cost of equivalent risk-reducing alternatives. When costs of alternatives are close, further refinement of costs may be required; however, the costs of many of the possible options will be orders of magnitude different. Three or more general relationships can be imagined, as shown in Figure 15.6. In curve A, the relationship is assumed to be linear, so that for every increase in risk-reducing activity there is a proportional cost increase. For example, doubling the "inspection of rectifiers" score (a corrosion index item) would double the cost of those inspections if this item follows a linear relationship. In curve B, the cost/risk relationship is assumed to be exponentially increasing. In this case, risk points become more expensive: every gain in risk score is accompanied by a cost that increases faster than the risk score does. For example, in a certain area, increasing the depth of cover score might be relatively inexpensive as the first few inches of earth are added over the line. However, for additional cover requirements, reburial of the line becomes necessary, with correspondingly higher costs for each inch of depth added, perhaps due to rocky ground or the need for increased workspace. In curve C, the cost/risk relationship is assumed to be exponentially decreasing. Here, risk points become cheaper: incremental costs of a higher risk score are lower. For example, in a contract for air patrol service, a fixed amount might be spent on securing the service, with a variable amount per patrol (perhaps per mile, per flight, or per hour). As more patrolling is done, the cost per risk point decreases.

Figure 15.6 Cost/risk relationships for specific activities (cost vs. risk points, curves A through C).

Other examples might include cases in which an initial capital expenditure is required, perhaps for an expensive piece of equipment, but thereafter incremental costs for use of the equipment are low. In modeling approximate costs in this manner, the risk manager can set up automatic costing of "what-ifs" and is better able to decide among risk reduction options. Obtaining the optimum cost/risk relationship can also be visualized as shown in Figure 15.7. Here, failure costs are included in the analysis to determine the lowest total cost.
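The three cost/risk shapes (curves A, B, and C) can be sketched as simple cost functions. The coefficients below are arbitrary assumptions; only the shape of each relationship (constant, rising, or falling marginal cost per risk point) is the point of the sketch.

```python
# Sketch of the three cost/risk shapes described for Figure 15.6.
# All coefficients are hypothetical; only the curve shapes matter.
import math

def cost_linear(points, unit_cost=100.0):
    # Curve A: every risk point costs the same (e.g., rectifier inspections).
    return unit_cost * points

def cost_increasing(points, base=100.0, growth=0.15):
    # Curve B: each additional point is more expensive
    # (e.g., depth of cover once reburial becomes necessary).
    return base * (math.exp(growth * points) - 1)

def cost_decreasing(points, fixed=5_000.0, unit_cost=100.0, decay=0.1):
    # Curve C: a fixed setup cost, then cheaper incremental points
    # (e.g., an air patrol contract with a fixed retainer).
    return fixed + unit_cost * (1 - math.exp(-decay * points)) / decay

for pts in (5, 10, 20):
    print(pts, cost_linear(pts), round(cost_increasing(pts)), round(cost_decreasing(pts)))
```

Having the curves as functions makes the "what-if" costing mentioned above mechanical: the cost of moving a variable from one score to another is just the difference of two function evaluations.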
Figure 15.7 Theoretical cost optimization relationships: the minimum expected (total) cost occurs at the optimal risk level. (From Stephens, M., and Nessim, M., "Pipeline Integrity Maintenance Optimization-A Quantitative Risk-Based Approach," presented at API Pipeline Conference, Dallas, 1995.)

Cost of accidents

The primary benefit of risk mitigation is the avoided cost of accidents. This avoided cost is the "benefit" side of the cost/benefit analysis and includes

- The value of human life (see Chapter 14)
- The cost of hospital treatment, lost production, and human costs to people injured. Based on a willingness-to-pay study of road accidents, costs of serious and slight injuries are approximately 10 and 0.8% of the cost of a life, respectively [91].
- The cost of damage to property
- The business interruption costs, including lost production, customer damages, contract penalties, and the damage to company reputation.

These costs may not include indirect costs such as customer dissatisfaction, political and legal ramifications, contract violations, loss of customer confidence, and other considerations. When deemed prudent, adjustments to costs can be made to capture some of these additional costs. One approach is to quantify indirect costs as a percentage of direct costs, depending on factors such as quantity of product delivered, type of consumer, and location of pipeline. Indirect costs arising from a system failure are not quantified here, but are discussed in Chapter 10.

Value of human life

Historically, there have been two primary methods used for determining the economic value of a human life. We should point out that this is a "statistical life," not an identified individual. Society has always been willing to spend much more to save an individual in a specific situation, such as a trapped coal miner. The statistical life reflects the amount that society is willing to spend to reduce the statistical risk of death by one. One method is the human capital approach, in which the value is based on the economic loss of future contributions to society by an individual. The other approach, willingness to pay, looks at how much an individual is willing to pay (in terms of other goods and services given up) to gain a reduction in the probability of accidental death (again, a statistical death, not the individual's). Each method has its drawbacks and benefits.

- Human capital approaches. These estimate the value of life in terms of the future economic output that is lost when a person is killed. This may be in terms of gross output (in effect, the lifetime salary) or net output (in effect, the lifetime tax payments). This narrow economic approach is now largely discredited, because it is recognized that people value life for its own sake rather than for its capacity to maintain economic output.
- Willingness-to-pay approaches. These estimate the amount that people in society would be prepared to pay to avoid a statistical fatality, using their observed behavior in the past or their expressed opinions on hypothetical situations in questionnaires. This is generally considered to be the most credible approach, although estimates are very variable.
As of this writing, valuations seem to range from about $1.5 million up to about $15 million [91]. For those wishing to use a single estimate without researching the rationale behind the many valuations used for many different purposes, a value of about $3 million is commonly seen and might be appropriate.
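Combining the figures above gives a simple way to monetize a potential accident for the "benefit" side of a cost/benefit analysis. The casualty counts, property damage, and interruption costs below are hypothetical; the value of a statistical life (~$3 million) and the injury fractions (~10% serious, ~0.8% slight) are the approximate figures cited in the text.

```python
# Sketch of monetizing accident consequences. VSL and injury fractions
# follow the approximate figures cited in the text; the accident's
# casualty counts and other costs are hypothetical.

VSL = 3_000_000                   # value of a statistical life, ~$3M
SERIOUS_INJURY = 0.10 * VSL       # ~10% of VSL per serious injury
SLIGHT_INJURY = 0.008 * VSL       # ~0.8% of VSL per slight injury

def accident_cost(fatalities, serious, slight, property_damage, interruption):
    """Avoided cost ('benefit') of preventing one such accident."""
    human = fatalities * VSL + serious * SERIOUS_INJURY + slight * SLIGHT_INJURY
    return human + property_damage + interruption

cost = accident_cost(fatalities=1, serious=2, slight=10,
                     property_damage=500_000, interruption=750_000)
print(f"${cost:,.0f}")
```

Indirect costs (reputation, legal and political ramifications) could be layered on as a percentage of this direct total, as the text suggests.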
Rate of spending

An often critical risk management decision is the question of how quickly a risk situation should be improved. The rate of spending is normally influenced by one or more incentives such as

- Complying with regulations and/or customer concerns
- Reducing the rate of system deterioration (to avoid future losses)
- Halting unacceptable current losses.
If spending is driven for long periods of time by requirements other than these, such as artificial budgets set without consideration for risks, long-term costs might rise. This includes increasing direct losses from incidents and associated indirect costs such as fines and loss of customer confidence. After regulatory and minimum customer obligations are met, the amount of spending on any pipeline section can be governed by a target risk score for an index or by a preset maximum spending level. One spending strategy could immediately move the "worst case" sections to "midpack" cases, at which point a new set of worst case sections appears on the list. Another spending strategy could gradually improve all sections, with the worst case sections improving at a faster rate than the balance of sections. Either spending strategy might be appropriate, depending on the nature of the risks seen in the worst case sections. For example, several "risk situations" for any portion of the system can be identified. For each situation, a spending strategy may be readily apparent. This approach to rate of spending can be driven by data-based risk criteria as described previously. A primary deterioration mechanism in some systems appears to be the corrosion of metallic pipe. Given accurate corrosion rate information, a pipe rehabilitation rate (miles per
year) that balances the rate of deterioration could be calculated. If this number can be calculated with relatively high confidence, it can be used to establish portions of the budgets. An example of the practical application of risk management follows.
Example 15.1: Cost/Risk analysis

Note: The information contained in this example is a hypothetical situation created only to illustrate the reasoning process. One should not assume that the hypothetical decisions shown here are appropriate for a specific real-world situation.

XYZ Pipeline Company determined that their LPG pipelines running through high-population areas score an average of 210 points on their 400-point risk scale. XYZ has been tracking incident rate (not necessarily reportable accidents as defined by DOT) for several years. XYZ's incident frequency for the last 10 years has been 0.0012 per mile per year, or about one incident per year for every 1000 miles of pipeline. Therefore, a risk score of 210 points equates to an incident frequency of 0.0012/mile/year for these pipelines, assuming that the incident rate fairly represents the portions of the system in high-population areas.

In the present economic climate, XYZ decides that it cannot increase its spending toward risk reduction. The present risk score and, hence, the present incident frequency are therefore deemed to be the risk target. Even though next year's score will be lower due to aging effects, additional spending is prohibited. With the exception of normal salary increases and the rare renegotiated contract for services, costs are to remain fixed in the coming time period. The challenge to the pipeline operator is to either maintain or reduce the current risk level at present costs.

As already noted, the current risk level is slowly but constantly increasing. Points have been lost due to the length of time since the last integrity verification and the last close-interval survey, and due to the condition of the right-of-way. So even if the operator performs exactly the same activities as last year, the risk level will have worsened by 5 points. The operating team must choose the highest value activities and perhaps reduce or eliminate some lower value activities. They created Table 15.7 to help make these choices (note that higher point scores mean less risk). 
From such a table, the team derives the cost and risk impact of various activity choices. They choose four of these options:

1. Changing the public education program from a door-to-door visit every year to a visit every other year saves $8000 annually. However, this slightly increases third-party damage risk by 2 points. This is seen to be an acceptable trade-off.
2. For a minimum cost, they increase ground patrol (mostly while technicians are already performing other duties) to gain 2 risk reduction points (reducing the risk of third-party damage).
3. For a cost of $8,000, they choose to do a close interval survey to increase the corrosion index (reducing the corrosion risk) by 8 points.
4. Finally, witnessing the testing of safety valves and critical switches that impact their pipeline system yields another risk reduction point (reducing the threat of incorrect operations) for virtually no additional operating costs.
The four activity changes add 9 points of risk reduction to the risk score for virtually no increase in operating costs. This offsets the 5 points of natural decay (increase in risk) and nets a safety increase of (9 - 5)/210 = 2%, based on the assumptions in this methodology. Note that the four activities impacted three indexes: the third-party damage, design, and corrosion indexes. A quick check confirms that the new risk level in each index is still in an acceptable range. See also Chapter 13 for more examples of risk management for distribution systems.
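The point accounting in Example 15.1 can be reproduced in a few lines, using the values from the example itself.

```python
# Re-computing the arithmetic of Example 15.1 (values from the example).

natural_decay = -5   # points lost per year if activities are unchanged
changes = {
    "reduce public education frequency": -2,
    "increase ground patrol": +2,
    "close interval survey": +8,
    "witness safety-device tests": +1,
}

net_points = sum(changes.values()) + natural_decay   # 9 - 5 = 4 points
baseline_score = 210
safety_change = net_points / baseline_score          # ~1.9%, rounded to 2% in the text

print(f"net change: {net_points} points ({safety_change:.1%})")
```
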
XI. Program administration

To ensure proper use and ongoing utility of the risk management program (RMP), control documents and administrative processes should be established. The processes must, of course, be under the control of an individual or group. This section discusses issues related to how a pipeline company can effectively administer an RMP.
Organization If formal pipeline risk management is new to a pipeline company, a decision regarding the part of the organization that will be responsible for the program must be made. Experience shows that strong support is often required for any new effort to take root in an organization, especially when the effort involves changes to how work processes have historically been accomplished. Even though the RMP is mostly a formalization of current practices and knowledge, it involves enough new processes and disciplines that some resistance may be encountered. This is especially true when manpower is limited and new programs are seen as merely adding to the existing workload. Because a full RMP touches on many different parts of a pipeline organization, strong leadership and coordination may
Table 15.7 Example cost/benefit analysis

Activity                                Present cost   Proposed change                     Cost impact   Risk score change
Public education                        $39K           Biannual door-to-door visits        -$8,000       -2
Patrol ROW                              $94K           Increase to 2/week ground patrol    Minimum       +2
Witness third-party test of safeties    0              Begin doing this                    Minimum       +1
Painting                                $2K            New program                         +$6,000       +2
Close interval survey                   0              Redo                                +$8,000       +8
Hydrotest                               0              Redo                                +$15,000      +10
Procedures                              $2K            Increase training                   +$8,000       +2
be essential in overcoming resistance from any individual or group and in getting all parties to fully participate and contribute to the effort. A certain amount of momentum is required to entrench a new philosophy into an organization. The keys to obtaining the momentum for a successful implementation usually involve strong upper management support, some amount of training and communications across the entire workforce, and a "champion" individual or group who can overcome the commonly encountered obstacles.

The roles and responsibilities of the department responsible for the RMP generally involve the following:

- Ensure that the RMP is a useful and efficient program.
- Develop and maintain data collection processes.
- Develop and maintain the risk assessment algorithm and software tools.
- Provide decision-support information to all parties.
- Conduct ongoing evaluation and improvement of the RMP.
- Enhance communications, both internal and external, of the program.
- Ensure that the appropriate RMP knowledge and training levels exist in all departments with responsibilities under this document.

The RMP could logically reside in any of several departments within a typical pipeline organization. Although the leadership abilities of the specific individual or group responsible for the effort are the largest determining factor of success, some general advantages and disadvantages can be identified for each group. A group that routinely acquires and handles a lot of the information required for the risk model is a candidate for ownership of the RMP. This might point to a corrosion, engineering, or information technology (IT) department. On the other hand, a group that will make most use of the RMP results might be a good choice. This might point to a planning, projects, or operations/maintenance group. Especially in the early stages, the RMP will need to be set up and run with project management controls similar to a construction project, as discussed beginning on page 38. 
This implies that a group with experience in managing projects would be a likely candidate. Regardless of who “owns” the RMP, many different people in any organization will hold key information and knowledge on which the RMP is very dependent. The RMP owners’ ability to efficiently coordinate with many different groups within the company will often be essential to the success of the program. Technical and information management expertise are also essential but do not necessarily have to reside with the program-owner as long as such expertise is made available to the program-owner.
Control documents

The format and hierarchy of RMP control documents can take many forms. If a company has already established a preferred system of such documents for controlling procedures, processes, roles, responsibilities, training requirements, etc., then the RMP control documents can be incorporated into that existing structure. The following procedures present a possible outline for documentation to administer the risk management program. They are organized as recommended by U.S. DOT Pipeline Risk
Management Program Standard documents, which recommend that administrative aspects be established for program elements as well as forprocess elements, as described below.
Program elements

This section describes the organizational infrastructure and activities that support and ensure the viability of the risk management process. The technical aspects of the process are covered in the next section on process elements. The following are section or document titles that should be a part of the control documentation for the risk management program. A brief description or sample content follows each title. These are samples only and not necessarily recommendations for specific position titles and roles.
Administrative procedure This document serves as the overall control document for the RMP. It states the purpose and background of the program and identifies the other documents that control the entire program.
Roles and responsibilities Specific job titles and general job duties are described here. Note the potential training and knowledge benefits of making some tasks a rotating job responsibility. Some example content is shown below.
RMP manager

This manager is charged with responsibility for and overall management of the RMP, including:

- Efficient use of risk concepts within the organization
- Application of RMP outputs to day-to-day activities
- Communications, both internal and external, of the program (see paragraph below)
- Ongoing evaluation and improvement of the RMP
- Support and direction of the RMP coordinator.
RMP coordinator

It is the responsibility of this position to oversee all technical aspects of the RMP, including:

- Maintain risk assessment algorithms; conduct annual algorithm tuning sessions.
- Maintain software tools.
- Maintain current section risk rankings and project cost/benefit rankings.
- Maintain all database structures and report deficiencies promptly to the department responsible for the affected data.
- Ensure that costs are appropriately captured.
- Ensure that risk reduction activities are accurately modeled in appropriate spreadsheets.
- Submit to manager(s) RMP-related reports and other requested documentation for decision support.
- Ensure that the appropriate RMP knowledge and training levels exist in all departments with responsibilities under this document.
- Provide tools and procedures to area supervisors to collect necessary information.
15/350 Risk Management
As the technical lead and hands-on position for the program, the RMP coordinator position could involve more than one individual. It is often advantageous to assign risk management duties as portions of several employees’ job functions. This provides training opportunities and backups for key personnel. A sample staffing scenario could involve the following employees: associate engineer, assistant engineer, and senior drafting aide. A drafting aide could be responsible for collecting, analyzing, and inputting new data into the model. The results of the model could be computed and analyzed by the assistant engineer as a part-time function. The associate engineer could oversee the program and communicate the results to the RMP manager. Examples of additional roles, responsibilities and procedures are as follows:
Area supervisors/managers

The supervisors responsible for system operations and maintenance will maintain current information regarding all conditions and activities in their respective districts. This information will be delivered to the RMP coordinator per the appropriate procedure(s).

Corrosion control

This department will maintain the corrosion control database per procedure XYZ. This department will respond to RMP coordinator requests for updated information. This department will also maintain a current listing of system materials and their condition, including coating (if applicable), for all pipe and components of the pipeline system. This department will provide the RMP coordinator with updated facilities information as it becomes available.

GIS, mapping, and records

This department will ensure that current information is available on maps, drawings, and GIS databases. Policies and procedures will be created in concert with the RMP coordinator and will be maintained by this group.

Personnel qualifications

Personnel involved in the execution of this program must be adequately qualified to perform the specified duties. This section lists qualifications and training requirements for all positions. Specific training is outlined in Procedure ABC.

Management of change

The risk management program shall be a part of the management of change process as detailed in procedure MOCI. Most changes that would trigger the management of change procedure would logically have an impact on risk.

Communications

Internal: Reports for internal communications will be prepared by the RMP coordinator at least once each year. These reports shall present information concerning the goals and implementation of the risk management program, the relevant input data, and the current results being acted on. These status reports shall be posted and/or distributed so that any interested employee can view them.

External: Reports for external communications will be prepared by the RMP coordinator at least once each year. These reports shall present information concerning the goals and implementation of the risk management program, the relevant input data, and the current results being acted on. These status reports shall be available for external parties such as federal, state, and local regulators, the public, operators of adjacent facilities, and other stakeholders. Communications for specific purposes will be custom developed and are at all times under the direct control of the RMP manager.

Documentation

All documents produced under this program are to be maintained by the RMP coordinator under the current record retention protocols. Backups of electronic data are to be kept per company policy AI.

Evaluation and improvement

At least once each year, representatives from the following departments shall meet with the RMP manager to review the past period's RMP results and to discuss possible changes to the program:
- Planning
- Maintenance
- Quality control
- Corrosion control
- Mapping and records
- Field operations
- Control center operations.

The evaluation shall include:

- The quality and effectiveness of the administration, communications, and documentation program elements
- The quality and effectiveness of the analytical processes used to assess risks, identify possible ways to control these risks, allocate resources to control risks in the most cost-effective way, and monitor performance
- The impacts of risk management decisions on the choice of performance measures
- The conclusions and reassessments about program effectiveness resulting from performance monitoring and feedback.
The RMP manager has ultimate responsibility for implementing any changes.
Process elements

This section of the control document set should describe the technical and analytical activities comprising the risk management process. These documents serve as the how-to manuals for performing risk assessments and using the information in risk management decision making. Some sample content is shown under each section/document title.
Pipeline system covered

This program covers {fill in description(s) of system(s) included in the RMP}. Valve stations, regulator stations, and direct appurtenances of the pipeline are also covered. The {note any portions of systems or facilities not included in RMP} are not currently included in this program.
Portions of the system life cycle to be considered

Elements of pipeline risk, from the system's original design and construction through present-day operations, will be considered; however, the emphasis of this program is directed toward current operations and maintenance activities that can impact the risk picture.
Scope of hazard (types of risks)

This program addresses the hazards to the public associated with the transport and delivery of {fill in products covered in risk assessment and risk management}. The release of large quantities of {product name} poses threats to life and property in areas that could be impacted by the event. Direct damage effects and secondary effects such as business losses are thought to be the main hazards. Larger diameter, higher pressure segments of pipeline, located in more densely populated areas, pose the greatest hazards. See {cross-reference to technical documents} for further analysis of the hazards. Based on industry accident rates, life-threatening pipeline accidents are very rare events. In this system, corrosion and third-party damages are the leading causes of failures and normally result in relatively minor leaks. The scope of hazards will be revisited as part of the annual maintenance of the RMP. Additionally, the consequences of interruption of delivery capability are included in this assessment.

Scope of risk analysis

All pipeline failure modes will be considered. Failure types are grouped into four categories:

- Third-party damage
- Corrosion
- Design
- Incorrect operations

The threat of sabotage is considered in a separate module.

Definitions of the RMP procedure

The following definitions are offered to assist the reader in using this procedure. Terms are defined specifically as they are used in this document and may deviate slightly from more generic definitions.

Risk: A combination of the likelihood and magnitude of an event that can harm life, property, and the environment.

Risk assessment: A formal evaluation of the amount of risk associated with a facility and how it is being operated.

Risk management: A formal methodology to understand and manage the amount of risk associated with a facility and its operating discipline.

Index: A category of the risk assessment model that corresponds to a failure mode. There are four indexes corresponding to the failure mode categories of third-party damage, corrosion, design, and incorrect operations. Each index has a maximum point level of 100 points, where higher points indicate safer situations.

Leak impact factor (LIF): The consequence number of the risk assessment model. Taking into account elements such as maximum leak size and immediate surroundings, a numerical score is assigned to each pipeline section. A higher LIF indicates a higher potential leak consequence.

Net present value (NPV): A measure of the value of a stream of future costs based on an assumed interest rate. This represents the cost, in today's dollars, of future expenditures.

Hazard: The threat that a facility or activity presents.
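The NPV definition lends itself to a short illustration. The sketch below assumes simple end-of-year cash flows and an example 10% discount rate; neither figure comes from this manual:

```python
def npv(future_costs, rate):
    """Present value of a stream of future costs.

    future_costs[t] is the cost incurred at the end of year t + 1;
    rate is the assumed annual interest (discount) rate.
    """
    return sum(cost / (1 + rate) ** (t + 1) for t, cost in enumerate(future_costs))

# A $110 cost one year from now, discounted at 10%, is worth about $100 today.
```

In risk management spending decisions, this allows multi-year mitigation programs with different spending profiles to be compared on a common, today's-dollars basis.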
Risk assessment model

(Details of this model are described in the data dictionary document {add document name}.)

The risk assessment model of this program is an indexing model. It is designed to capture all relevant risk information, both statistically verifiable data and information based on operator and industry experience. The results of an assessment using this model provide a relative ranking of risks between areas of the pipeline system. Therefore, each area of the pipeline system can be compared to every other area, from a risk standpoint.

In general, the possible pipeline failure modes are identified and categorized into one of four indexes: third-party damage, corrosion, design, and incorrect operations. Within each of these indexes, all conditions and activities that impact the risk, either adding more risk or reducing the risk, are scored. More important items command a higher point value by having a higher "weighting." Each index can score a maximum of 100 points. These index scores together are called the "index sum" and represent the likelihood of a pipeline failure. Possible index sums range from the highest risk scenario of 0 points to the safest condition of 400 points. The index sum and each individual index value represent the likelihood of a failure overall and within each failure mode, respectively. Statistical data and, when no data are available, expert judgments are used to set the weightings, which in turn set the relative likelihood of failure.

The index sum is divided by the leak impact factor (LIF), which is a measure of the potential consequences of a pipeline failure. The LIF considers system aspects such as pressure, pipe size, and nearby population density. The relative risk score (index sum divided by LIF) of each section of the system is expected to range between 0 and 200 points, with higher scores representing safer (less risk) situations.
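The index-sum arithmetic described above can be sketched in a few lines. This is an illustration only: the index values and LIF passed in are assumed inputs, while the real model derives them from detailed weighting schedules and the data dictionary.

```python
def relative_risk_score(third_party, corrosion, design, incorrect_ops, lif):
    """Index sum (0-400 points, higher = safer) divided by the leak impact factor.

    Each of the four indexes is scored from 0 to 100 points; a higher LIF
    means a higher potential leak consequence, so it lowers the final score.
    """
    for index in (third_party, corrosion, design, incorrect_ops):
        if not 0 <= index <= 100:
            raise ValueError("each index is scored from 0 to 100 points")
    index_sum = third_party + corrosion + design + incorrect_ops
    return index_sum / lif
```

For example, a section scoring 70, 60, 80, and 55 on the four indexes with an LIF of 2.0 yields a relative risk score of 132.5; because the result is relative, it is only meaningful in comparison with other sections scored by the same model.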
Risk control and decision support

Inherent in this program is the belief that risks are controllable to a large degree. It is more practical and prudent to reduce the "likelihood" component of the risk rather than the "consequence" portion. This is done through the directed allocation of resources (money, manpower, equipment, etc.) toward risk-reducing activities. Many of these activities are mandated by DOT regulations. Beyond those minimum requirements, optional activities are chosen based on their relative costs and value in reducing risk. Costs and benefits are then collected and analyzed in the risk management program.
The following sections or independent documents could fall under this portion of the program.
Preliminary risk management

Until a preliminary risk assessment has been performed on all sections of the system, more attention will be directed toward sections that:

- Have higher incident frequency
- Carry more product
- Are in higher population areas.

Use of risk assessment results

- Action points
- Identification of risk drivers
- Prioritized ranking of system components
- Project cost estimation
- Project evaluation
- Rate of spending.
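The interim prioritization above amounts to a simple multi-key ranking. The sketch below is hypothetical: the section records, attribute names, and figures are invented for illustration and are not part of the manual's procedure.

```python
# Hypothetical section attributes; field names and values are illustrative only.
sections = [
    {"id": "A", "incidents_per_mile_yr": 0.02, "throughput_bpd": 50_000, "pop_density": 30},
    {"id": "B", "incidents_per_mile_yr": 0.05, "throughput_bpd": 20_000, "pop_density": 200},
    {"id": "C", "incidents_per_mile_yr": 0.01, "throughput_bpd": 80_000, "pop_density": 5},
]

def interim_priority(section):
    # Higher incident frequency, more product carried, and denser population
    # all raise a section's priority; ties on one key fall through to the next.
    return (
        section["incidents_per_mile_yr"],
        section["throughput_bpd"],
        section["pop_density"],
    )

# Highest-priority sections first, until formal risk scores are available.
ranked = sorted(sections, key=interim_priority, reverse=True)
```

Once the preliminary risk assessment is complete, this stopgap ranking would be replaced by the model's relative risk scores.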
Performance monitoring and feedback

Through this program, improvements are expected in decision consistency, overall risk levels, and control of spending. Verification of these expected improvements is achieved through tracking of:

- Pipeline leaks/breaks
- System outages
- Other failures
- Incident reports
- Risk scores
- Repair costs
- Spending levels.
Related procedures

The following are possible procedure titles that should be developed in support of the overall risk management program. These procedures would address the specifics of items noted in the more general, high-level procedures for which samples are shown above.

1.1 Data Collection and Maintenance Procedure Set
    1.1.1 Management of Change (triggers, roles, responsibilities, processes, etc.)
    1.1.2 Repairs (including documentation and data collection)
    1.1.3 Leaks/Breaks (investigation and data collection)
    1.1.4 Claims (data collection and feedback loops)
    1.1.5 Corrosion Control Data
1.2 Risk Assessments Procedure Set
    1.2.1 Pipeline
    1.2.2 Pump Stations
    1.2.3 Tank Farms
    1.2.4 Processing Plants
    1.2.5 Other
1.3 Risk Management Procedure Set
    1.3.1 Analyses of Risk Assessment Results
        1.3.1.1 Tests for Model Bias (histogram analyses)
        1.3.1.2 Interpretation of Results
    1.3.2 Risk Management Strategy
    1.3.3 Project Prioritization
    1.3.4 Risk Management in Design Process
    1.3.5 Performance Tracking
XII. Risk communications As with the concept of “acceptable risks,” the multifaceted topic of risk communications is not fully explored in this text. Many fascinating reports and psychological, sociological, and anthropological studies dealing with risk perceptions, behavior, and communications can be found. The intent here is to equip the practicing risk manager with some basic concepts underlying this issue so that he or she might be more effective.
Communications benefits

Some risk assessments are done for the express purpose of communicating risks to very specific audiences. A common example is a regulatory approval process involving public hearings. Beyond such special applications, having effective communications abilities in an organization could result in the following benefits to a company:

- Improved community perception and understanding of potential risks
- Improved community understanding and support of emergency preparation activities
- Improved ability of the community to act on requests for emergency actions (shelter-in-place, evacuation)
- Reduced impact in the event of an emergency or disaster
- Decreased potential for legal action by the community to protest what it may consider to be an inequitable risk balance.

It seems reasonable that, if a risk to the community exists, the community deserves to be informed and consulted. This might involve presenting a fair and balanced assessment: informing only. It might also involve informing with a specific objective in mind, such as to alert (that is, to prepare an audience to take action) or to reassure (that is, to reduce anxiety about very unlikely events). A fundamental distinction in risk communication is deciding whether people are likely to be more concerned than is considered appropriate (overreact) or less concerned than is considered appropriate (underreact). Generally, the public will tend to overreact, especially to unfamiliar threats. A company's communications objectives will change depending on the stage of the event. These stages are listed in Table 15.8.
The communicator

Before the issue of public communication can be explored, it is important that the true nature of the risk analysis be very clear in the communicator's mind. In the construction and application of models to estimate risk, it is easy to lose sight of the inherent uncertainties involved. Particularly when results are expressed in numbers that appear to be "scientific" (e.g., 3.42 × 10^-6 fatalities per mile-year for permanent residents within 200 ft of the pipeline), the impression of accuracy and precision is given. In reality, estimates that span several orders of magnitude are often accepted as being as precise as practical.

It is also easy to lose sight of what the results of a statistical analysis are really telling us. The numbers usually represent our belief as to how a large number of pipelines (or pipeline segments), or a specific pipeline operating over a very long period of time, will tend to behave. The inferences to a specific system or specific time period will be very uncertain, as is always the case when trying to predict the behavior of the individual from the behavior of the group.

Challenges to failure estimates should be expected and must be addressed. It is the position of this text that historical leaks of any pipeline or group of pipelines provide only a very limited estimate of future failure potential for a specific pipeline. They can easily under- or overestimate future leak potentials because conditions have changed and will change from the previous operations. Therefore, leak history should not be used in isolation for judgments of failure probability. It is used as evidence of certain conditions that might exist, and this evidence, along with all other information that can be obtained, is used in a relative risk assessment to present a more realistic view of the risk. The communicator of risks must understand these and other limitations of the assessment being presented. Preparations for both technical and emotional challenges to any information presented to an audience are warranted.

Table 15.8  Varying objectives at various stages of risk events

Stage     Objective
Before    Reduce the anxiety about potential emergencies that the agency considers unlikely.
During    Prevent panic in mid-crisis.
After     Prevent or reduce outrage about actions (or inaction) taken.

Source: Davis, G., and Jones, D., "Risk Communication Guide for State and Local Agencies," California Governor's Office of Emergency Services, October 2001.
Audience considerations
Risk perception

Risk judgments are heavily influenced by how memorable past events are and by the public's ability to imagine future events. Recent events, especially when covered extensively by media reporting, can seriously distort perception. People tend to overestimate risks from such events (such as homicides or pipeline explosions), while underestimating less dramatic risks (such as diabetes or fatal falls) [33]. Knowledge of recent events or community-memorable events will be useful in predicting the temperament of an audience.

It has been shown that people try to reduce the anxiety generated in the face of uncertainty by discounting that uncertainty [33]. Therefore, the tendency might be to hear or retain only the consequence potential and not the more difficult-to-appreciate improbability that is being communicated. This is especially true when a worst case scenario, though highly improbable, evokes a dramatic and frightening mental picture in the audience.

Other interesting aspects of risk perception involve the familiarity of the system and the concept of "signal potential" from an event. Society tends to perceive as a greater risk one that is new or unusual or can be viewed as a harbinger of more to come [33]. For example, a train wreck claiming many lives might not generate as much social response (being a more familiar and well-understood system) as a small pipeline accident, in which the system is less understood and the consequences are perceived to be less controllable and more catastrophic; hence, the more widespread public outcry from a pipeline accident ("These lines are everywhere, they are old and aging, and could be routinely failing!").
Presenting risk estimates

Risk is a very complex subject not fully understood by technical managers, much less most public audiences. Differences in terminology and practices among the "experts" only increase the difficulties surrounding risk communications. Therefore, it seems prudent to first solidify the communicator's understanding of the risk analysis. Then, considerations must be extended to the perceptions, concerns, and potential comprehension limitations of the audience.

In Chapter 1, we noted that a scientific theory must be falsifiable in order to be considered a "theory" rather than, for example, an act of faith. In presenting risk estimates to the public, the presenter probably believes that these estimates are based on scientific methods, are indeed bona fide theories, and can be proven incorrect, or at least demonstrated to be largely inaccurate, over time. However, for relatively rare events like pipeline failures, it might take long periods of time to amass enough contradictory evidence to state that the presented risk assessment was incorrect. Therefore, an audience hearing risk predictions regarding a system of concern may appreciate that, from a practical standpoint, they are being asked to take the numbers "on faith." The communicator should be sensitive to this.

Worst case scenarios are commonly used in a risk analysis, sometimes to demonstrate that such scenarios are almost unimaginably improbable or that a system is relatively safe even if the lowest probability event were to occur. Of course, oftentimes the worst case scenario is catastrophic. Such scenarios must often be presented if for no other reason than to demonstrate that the communicator fully understands the risk and is not hiding critical information. It is "full disclosure" and is needed for credibility. However, such scenarios will often work against the effective communication of risks, since they are often the only fact retained from the communication.

Risk communications always involve the quantification of complex issues. This requires that numbers be shown. At first glance, numbers appear to speak for themselves: they appear to be clean of biases. A false precision is often assigned to an analysis yielding a specific number, especially when such a number appears to be the result of extensive calculations. In reality, uncertainty and some bias will always be present in risk estimates, as is discussed in Chapter 1. High variability is a characteristic of most pipelines, and we add to that inherent uncertainty when we measure risk with our imprecise tools.
If risk estimates are expressed as a single number, a point estimate, that number is normally an average. An average, of course, means there are many probable outcomes both higher and lower than this estimate. As a matter of fact, the average might not be a likely outcome at all, if the underlying distribution of all risk values is other than a normal, bell-shaped distribution.

If estimates are presented as a range, then the implication is that the presenter has some degree of confidence that the "true" value will fall within that range. In many statistical applications, the degree of confidence is often 90 or 95%, based on calculations from observed or assumed frequency distributions. The range created by the degree of confidence is very sensitive to the variability of the underlying data. If one is asked "What is the failure probability of Pipeline XYZ in the next five years, to a 95% confidence level?" the answer implies that a range should be offered. If we have a few leak rate data points that we feel represent Pipeline XYZ's future failure potential, we can, with some assumptions, calculate the range using accepted statistical concepts. If, for example, the data show the vast majority of years with no failures but one aberrant year in which two failures occurred, then the correct answer might well be "With a 95% confidence level, we believe the number of failures will range from no failures to 5 failures in the next five years." This might be a statistically correct response, derived from specific calculations, even when the point estimate (our best estimate of future failure potential) suggests that, statistically, a failure will occur only once every 50 years, for example. The 95% confidence level requires that almost all possibilities, no matter how remote, are included in the range. On hearing the response, many audiences will in fact hear "He believes the pipeline will fail several times in the next five years" (see the earlier risk perception discussion), even though the presenter believes the most likely outcome is no failures for 50 years. See also the discussion of confidence limits in Chapter 14.

There is widespread agreement among communications specialists that expressing individual risks in such terms is not helpful to most people. Expressions involving relative or comparative values are often suggested as alternatives: "about equal to the chance of being struck by lightning." Combining comparative events is another suggestion: "less than the chance of simultaneously being struck by lightning and a meteorite." There is also a suggestion of risk presentations in multiple formats. However, there is no widespread agreement. Each of the above suggestions is sometimes criticized due to an audience's unfamiliarity with concepts, ease of confusion, or its tendency to readily under- or overestimate the likelihood of some comparative risks.

Table 15.9  Risks associated with common activities and natural phenomena

Activity or event                            Risk (fatalities per exposed person per year)
Smoking (20 cigarettes/day)                  5.0 × 10^-3
Mountaineering                               2.0 × 10^-3
All accidents                                5.0 × 10^-4
Motor vehicle accidents                      2.5 × 10^-4
All industrial accidents                     1.7 × 10^-4
Unacceptable risk threshold (>1.0 × 10^-4)
Falls                                        7.2 × 10^-5
Drowning                                     5.0 × 10^-5
Fires                                        3.1 × 10^-5
Air travel                                   7.0 × 10^-6
Railway travel                               2.0 × 10^-6
Acceptable risk threshold (<1.0 × 10^-6)
Lightning                                    8.0 × 10^-7
Meteorites                                   6.0 × 10^-11

Source: Jaques, S., "NEB Risk Analysis Study, Development of Risk Estimation Method," National Energy Board of Canada report, April 1992.

Table 15.10  Fatality risk comparisons

Event                                            Chance for one individual in a 50-year period
Motor vehicle injury                             1 in 2
Cancer fatality                                  1 in 10
Motor vehicle fatality                           1 in 123
Fatality by fall (all locations)                 1 in 380
Pedestrian fatality (by motor vehicle accident)  1 in 870
Fatality by fall (public places)                 1 in 1,000
Recreational boating fatality                    1 in 1,840
Fatality from firearms in public places          1 in 10,600

Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.

Table 15.11  More fatality risk comparisons

Cause of fatality                             Probability of fatality (per 100,000 people per year)
All diseases                                  830
Heart disease                                 320
Cancer                                        190
Cerebrovascular disease                       64
Pneumonia                                     28.3
Diabetes                                      15.1
All accidents                                 39
Motor vehicles                                19
Falls                                         5
Drowning                                      2.2
Fires, burns                                  2.1
Natural hazards and environmental factors     0.8
Cataclysm (tornado, flood, earthquake, etc.)  0.09
Excessive heat                                0.09
Excessive cold                                0.40
Lightning                                     0.04

Source: "ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation)," prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for Handbook of Chemical Hazard Analysis Procedures (approximate date 1989) and software for dispersion modeling, thermal, and overpressure impacts.
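The kind of range calculation discussed above (a 95% range for the number of failures over five years) can be sketched by treating leaks as a Poisson process. This is a simplified illustration, not the book's method: the observed history used here (2 failures in 20 years) is an invented example, and real analyses would also weigh how representative that history is.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson random variable with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def failure_range(failures, years_observed, horizon_years, conf=0.95):
    """Central interval for the failure count over the horizon.

    The historical rate (failures / years_observed) is taken as the
    Poisson mean per year; each tail outside the interval holds at
    most (1 - conf) / 2 of the probability mass.
    """
    lam = failures / years_observed * horizon_years  # expected failures
    tail = (1 - conf) / 2
    lo = 0
    while poisson_cdf(lo, lam) < tail:
        lo += 1
    hi = lo
    while poisson_cdf(hi, lam) < 1 - tail:
        hi += 1
    return lo, hi

# 2 failures observed in 20 years -> expected 0.5 failures in 5 years,
# yet the 95% range still spans several possible outcomes.
```

Note how the interval communicates the same tension the text describes: the point estimate is well under one failure, but a defensible 95% range must still admit multiple failures.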
Risk comparisons

Comparisons are often made among voluntary risks and among involuntary risks. Individuals have different risk tolerances when it comes to chosen risks: witness mountain climbers, parachutists, and even driving habits. It is often appropriate to compare the risks of a pipeline introduced into a community with other involuntary risks to which that community might be exposed. Table 15.9 is extracted from a Canadian NEB study [43] and shows some common risks expressed in fatalities per year. Tables 15.10, 15.11, and 15.12 provide additional risk comparisons.

Based on historical accident rates, pipelines have far fewer safety incidents than other transportation modes such as truck or rail. Truck transportation has a fire and explosion incident rate approximately 35 times higher, and rail transportation 8.5 times higher, than pipeline transportation accident rates. Fatality rates are correspondingly 85 and 2.5 times higher, respectively, and injury rates are 2 and 0.5 times higher [86]. In terms of incidents per barrel or pound of product transported, pipelines have been safer than any other transportation mode, including marine, barge, or air transport.
Table 15.12 Summary of common individual risks
Event
Chancefor one individual in a 50-yearperiop
Motor vehicle deaths Motor vehicle injuries
1 in 123
Pedestrian deaths (by motor vehicle accident) Falling deaths (public places) Falling deaths (all locations)
1 in 870
Deaths from firearms in public places Recreational boating deaths
1 in 10,600
Tornado deaths (1999, states with reported tornado deaths) Tornado deaths (1999, entire US)
I in 16,600
Lightning deaths
1 in 119,000
Cancer deaths
1 in I O
Cancer deaths in males Cancer deaths in females
1 in9
1 m2
I in 1000 1 in 380
1 in 1,840
1 in 58,000
1 in 14
Source/basis of estimate:

Accident Facts, 1997, p. 78. Estimated based on reported death rate of 16.3 deaths/year per 100,000 persons for 1996.

Accident Facts, 1997, p. 78. Estimated based on reported total injuries of 2,600,000 for 1996 and a 1996 U.S. population of 265,229,000 persons. Assumes total population exposed each year and constant population.

Accident Facts, 1997, p. 100. Estimated based on reported total deaths of 6,100 for 1996 and a 1996 U.S. population of 265,229,000 persons. Assumes the entire population has the potential to be a pedestrian.

Accident Facts, 1997, p. 100. Estimated based on 5,300 reported deaths for 1996 and a 1996 U.S. population of 265,229,000 persons. Excludes fall-related deaths at home and work.

Accident Facts, 1997, p. 8. Estimated based on 1996 death rate of 5.3 deaths/year per 100,000 population. Includes unintentional fall-related deaths in all locations (public, home, and work).

Accident Facts, 1997, p. 117. Estimated based on 500 reported deaths for 1996 and a 1996 U.S. population of 265,229,000 persons. Excludes firearm-related deaths at home and work. Unintentional deaths only; homicides/suicides excluded.

Based on report from National Association of State Boating Law Administrators, "Factors Related to Recreational Boating Participation in the United States: A Review of the Literature," August 17, 2000, pp. 5, 62. Total of 815 deaths in 1998. Recreational boating participants of 74,847,000 in 1998 (approx. 29% of the total U.S. population).

National Climatic Data Center website. Based on tornado data from 1999: 94 tornado deaths in 13 states. Total population of these 13 states of 78,000,000 (29 percent of the U.S. population) was taken from the U.S. Census Bureau website for 1999.

National Climatic Data Center website. Tornado data from 1999. Total of 94 tornado deaths in 13 states. Total U.S. population of 272,690,813 was taken from the Census Bureau website for 1999.

National Climatic Data Center website. Based on 46 lightning deaths in 1999. Total 1999 U.S. population of 272,690,813 taken from Census Bureau website for 1999.

Statistics taken from American Cancer Society website. Expected cancer death rate in 1999 of 563,100. Risk based on total 1999 U.S. population.

American Cancer Society, Cancer Facts and Figures 1997, from the ACS website. Male: 219 deaths/year per 100,000 population.

American Cancer Society, Cancer Facts and Figures 1997, from the ACS website. Female: 142 deaths/year per 100,000 population.
Source: URS Radian Corporation, "Environmental Assessment of Longhorn Partners Pipeline," report prepared for U.S. EPA and DOT, September 2000.
a Chance for one individual in a 50-year period was calculated by multiplying the risk in 1 year by 50. For example, if the risk is 1 death/year per 100,000 population, then the risk for 50 years is 50 times the 1-year risk, or 50 deaths per 100,000 population (i.e., a 1 in 2,000 chance over a 50-year period).
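The 50-year scaling described in note a can be sketched as follows; the function name and example inputs are illustrative only:

```python
# Sketch of the 50-year individual-risk scaling described in note a above.
# The example input is the one given in the note (1 death/year per 100,000).

def fifty_year_odds(deaths_per_year, exposed_population, years=50):
    """Return '1 in N' odds for one individual over the given period."""
    annual_risk = deaths_per_year / exposed_population
    period_risk = annual_risk * years  # simple linear scaling, as in the note
    return round(1 / period_risk)

print(fifty_year_odds(1, 100_000))  # -> 2000, i.e., "1 in 2,000"
```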
Appendix A
Typical Pipeline Products
This appendix shows some physical and chemical properties of common pipeline products. These properties are often useful in risk assessments, especially when evaluating
product hazards, spill sizes, and hazard zones as portions of the potential consequence assessment.
[Table: Hazard ratings and reportable-quantity points for common pipeline products. Columns: Product; Boiling Pt (°F); Nh, Nf, Nr (NFPA-style health, flammability, and reactivity ratings); RQ pointsa. Products listed: Benzene; 1,3-Butadiene; Butane; Carbon monoxide; Chlorine; Ethane; Ethyl alcohol; Ethylbenzene; Ethylene; Ethylene glycol; Fuel oil (#1-#6); Gasoline; Hydrogen; Hydrogen sulfide; Isobutane; Isopentane; Jet Fuel A & A1; Kerosene; Methane; Mineral oil; Naphthalene; Nitrogen; Petroleum (crude); Propane; Propylene; Toluene; Vinyl Chloride; Water. Individual column values are not recoverable from this copy.]
Source: Dow Chemical, Fire and Explosion Index Hazard Classification Guide, 6th ed., Dow Chemical Co., May 1987.
a Based on 1991 CERCLA reportable quantities (RQ) and Figure 7.4, with the following point assignments:

RQ (lb):   None   5000   1000   100   10   1
Points:       0      2      4     6    8  10
b When at temperatures higher than the boiling point.
[Table: Physical properties of common compounds. Columns: Compound; Formula; MW; Boiling point (°F); Vapor pressure (psia, 100°F); Liquid specific gravity @ 1 atm and 60°F; Gas specific gravity @ 1 atm and 60°F; Lower flammability limit (vol%); Upper flammability limit (vol%). The molecular weight and boiling point columns are recoverable from this copy; the remaining column values are not.]

Compound                    MW        Boiling point (°F)
Air                         28.966    -317.7
Ammonia                     17.032    -28.1
Benzene                     78.108    176.185
1,2-Butadiene               54.088    50.5
1,3-Butadiene               54.088    24.06
1-Butene                    56.104    20.73
n-Butane                    58.12     31.1
Carbon dioxide              44.01     -109.3
Carbon monoxide             28.01     -313.6
Chlorine                    70.914    -30.3
Ethane                      30.068    -127.53
Ethene (ethylene)           28.052    -154.68
Ethyl alcohol               46.069    173.3
Ethylbenzene                106.16    277.137
Ethyne (acetylene)          26.036    -119
Hydrogen                    2.016     -422.9
Hydrogen chloride           36.465    -121
Hydrogen sulfide            34.076    -76.5
Methane                     16.042    -258.68
Methyl alcohol              32.042    148.1
Nitrogen                    28.016    -320.4
Oxygen                      32        -297.4
Propane                     44.094    -43.73
Propene                     42.078    -53.86
Styrene (phenyl ethylene)   104.144   293.4
Toluene                     92.134    231.121
Water                       18.016    212

Source: McAllister, E., Pipeline Rules of Thumb, 5th ed., Houston, TX: Gulf Publishing Co., 1998.
Some properties of crude oil fractions

Name                    Carbon chain length   Boiling range (°C)
Petroleum gases         1-4                   <5
Naphtha                 5-9                   20-180
Gasoline                5-10                  20-200
Kerosine                10-16                 180-260
Gas oil (diesel oil)    14-20                 260-340
Lubricating oil         20-50                 370-600
Fuel oil                20-70                 330 upwards
Residue                 >70                   Nondistillable

Source: http://www.schoolscience.co.uk/conten~4/chemistry/fossils/p8.html
Substance           Closed-cup flash    Lower flammability   Upper flammability   Autoignition
                    point (°F)          limit (%)            limit (%)            temperaturea (°F)
Propane             Very low            2.1                  9.5                  842
Gasoline            -45 to -36          1.4-1.5              7.4-7.6              -
Acetone             -4                  2.5                  13                   869
Isopropyl alcohol   53                  2.0                  12.7 @ 200°F         750
Turpentine          95                  0.8                  -                    488
Fuel oil no. 2      126-600             -                    -                    494
Motor oil           275-400             -                    -                    325-425
Peanut oil          540                 -                    -                    833
Source: "ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation)," prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for Handbook of Chemical Hazard Analysis Procedures (approximate date 1989), and software for dispersion modeling and thermal and overpressure impacts.
a Autoignition temperature is the minimum temperature necessary to initiate or cause self-sustaining combustion in the absence of a flame or spark.
Typical heats of combustion (Hc) (Btu/lb): Gasolines and fuel oils = 18,700; jet fuels = 21,700; ammonia = 8,000; most fuel gases, 20,000 Btu/lb or 50 MJ/kg.
Appendix B
Leak Rate Determination
Fluid flow through pipelines is a complex and not completely understood problem. It is the subject of continuing research by engineers, physicists, and, more recently, those studying nonlinear dynamic systems, popularly called the science of chaos. In a relative risk assessment, we are less concerned with exact numerical solutions and more interested in comparative values.

In general, fluid flow in pipes is assigned to one of two flow regimes: turbulent or laminar. Some make distinctions between rough turbulent and smooth turbulent, and a region termed the transition zone is also recognized. In simplest terms, however, the flow pattern will be characterized by uniform, parallel velocities of fluid particles (laminar flow), by turbulent eddies and circular patterns of fluid particle velocities (turbulent flow), or by some pattern that is a combination of the two. The flow pattern depends on the fluid average velocity, the fluid kinematic viscosity, the pipe diameter, and the roughness of the inside wall of the pipe.

Several formulas that relate these parameters to fluid density and pressure drop offer approximate solutions for each flow regime. These formulas make a distinction between compressible and non-compressible fluids. Liquids such as crude oil, gasoline, and water are considered to be non-compressible, whereas gases such as methane, nitrogen, and oxygen are considered to be compressible. Highly volatile products such as ethylene, propane, and propylene are generally transported as dense gases: they are compressed in the pipeline until their properties resemble those of a liquid, but they immediately return to a gaseous state on release of the pressure.

For purposes of a relative risk assessment, any consistent method of flow calculation can be used. Because the primary intent here is not to perform flow calculations but rather to quickly determine relative leak quantities, some simplifying parameters are in order.
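The flow-regime dependence described above (average velocity, kinematic viscosity, and pipe diameter) is conventionally captured by the Reynolds number. A minimal sketch follows, using the common textbook thresholds of 2,000 and 4,000, which are conventional values and are not taken from this manual:

```python
# Illustrative flow-regime check using the Reynolds number, Re = V*D/nu.
# The 2000/4000 thresholds are conventional textbook values (an assumption,
# not from this manual); units must be consistent (ft/sec, ft, ft^2/sec).

def flow_regime(velocity_ft_s, diameter_ft, kinematic_viscosity_ft2_s):
    re = velocity_ft_s * diameter_ft / kinematic_viscosity_ft2_s
    if re < 2000:
        return "laminar"
    if re > 4000:
        return "turbulent"
    return "transition"

# Water-like fluid (nu ~ 1.2e-5 ft^2/sec) at 5 ft/sec in a 1-ft pipe:
print(flow_regime(5.0, 1.0, 1.2e-5))  # -> turbulent
```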
Original (first two editions of this book) suggestions for calculations of a Leak Impact Factor (Chapter 7) used the following modeling simplifications:
- Release duration is arbitrarily chosen at 10 minutes for a gas and 60 minutes for a liquid.
- Complete line rupture (guillotine-type failure) is used.*
- Operation at MAOP is taken as the initial condition.†
- Initial conditions are assumed to continue for the entire release duration (except for flashing fluids). Depressurization, flow reductions, etc., which occur during the release scenario, are generally ignored.
- An arbitrary transition point from liquid to gas is chosen for flashing fluids. Pooling of liquids and vapor generation from those pools is ignored.
- Temperature effects are ignored in the equations but should be considered in choosing the liquid calculation versus the gas calculation. The evaluator should assume the worst case, for example, a butane release on a cold day versus a hot day.
- Pressure due to elevation effects is considered to be a part of MAOP.
These are often conservative and appropriate assumptions for risk modeling. However, the use of simplifying parameters must not mask a worst case scenario. The parameters are selected to usually reflect conservative, worst case scenarios; the evaluator must affirm that none of the above parameters actually reflects a less severe scenario. Again, almost any consistent modeling of a leak quantity will serve the purpose of a relative risk assessment. Consistency is absolutely critical, however. One approach that is currently in use involves the above parameters and models releases as follows:

- Gas: the quantity of gas released from a full-bore line rupture at MAOP (or normal operating pressure) for 10 minutes.
*Reasoning behind selection of this parameter is provided in Chapter 7.
†As an alternative, the evaluator can use a pressure profile to determine maximum expected pressure.
- Liquid: the quantity of liquid released from a full-bore line rupture at MAOP (or normal operating pressure) for 1 hour (60 minutes). An alternative approach is to model the spill volume as the maximum pumped flow rate for a fixed time period (perhaps based on estimated reaction times) plus the volume that would be drained to the spill location.
- Flashing fluid: the quantity of liquid released from a full-bore line rupture at MAOP (or normal operating pressure) for 3 minutes plus the quantity of gas released from a full-bore line rupture at the product's vapor pressure for 7 minutes (see Figure B.1).
Gas flow

For compressible fluids, a calculation for flow through an orifice can be used to approximate the flow rate escaping the pipeline [23]:

q = Y × C × A × sqrt[ (2g × 144 × ΔP) / ρ ]

where
Y = expansion factor (usually between 0.65 and 0.95)
C = flow coefficient (usually between 0.9 and 1.2)
A = cross-sectional area of the pipe (ft²)
g = acceleration of gravity (32.2 ft/sec²)
ΔP = change in pressure across the orifice (psi)
ρ = weight density of fluid (lb/ft³)
q = flow rate (ft³/sec).

In the case of a discharge of the fluid to atmosphere (or other low-pressure environment), Y can be taken at its minimum value, and the weight density of the fluid should be taken at the upstream condition. Sonic velocity is a limiting factor for gas flow through an orifice.

Liquid flow

For incompressible fluids, the equation of flow through an orifice is essentially the same, with the exception of the expansion factor, Y, which is not needed for the case of incompressible fluids [23]:

q = C × A × sqrt[ (2g × 144 × ΔP) / ρ ]

where
A = cross-sectional area of the pipe (ft²)
C = flow coefficient (usually between 0.9 and 1.2)
g = acceleration of gravity (32.2 ft/sec²)
ΔP = change in pressure across the orifice (psi)
ρ = weight density of fluid (lb/ft³)
q = flow rate (ft³/sec).

Figure B.1 Spill quantity model for a flashing fluid. Pressure falls from MAOP (or operating pressure) to the product's vapor pressure at 3 minutes; mass of liquid is released for the first 3 minutes and mass of gas from 3 to 10 minutes. Spill Quantity = (Mass of Liquid) + (Mass of Gas).
Alternately, other common liquid flow equations, such as the Darcy equation, can be used to calculate this flow. A consistent approach is the important thing. Note that continued pumping rate and drain volumes are often the determining factors of a liquid pipeline spill. Those calculations might be more appropriate than the orifice flow calculation for liquid pipelines. Drain calculations may take into account siphoning possibilities, but that might also be an unnecessary modeling complication. Crane Valve [23] should be consulted for a complete discussion of these flow equations.
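The two orifice equations above can be sketched as a single routine. The coefficient values and the example pipe data below are assumptions for illustration, not recommendations:

```python
import math

# Sketch of the orifice-flow approximations above. The C and Y values used in
# the example are assumed values within the ranges given in the text.

G = 32.2  # acceleration of gravity (ft/sec^2)

def orifice_flow(area_ft2, dp_psi, density_lb_ft3, c=1.0, y=1.0):
    """q = Y*C*A*sqrt(2g * 144 * dP / rho), in ft^3/sec.

    For liquids (incompressible), use y=1.0 (no expansion factor).
    For gases, pass y in roughly 0.65-0.95 per the text.
    """
    return y * c * area_ft2 * math.sqrt(2 * G * 144 * dp_psi / density_lb_ft3)

# Example: full-bore opening of a 12-in. (1-ft) line at 700 psi differential.
area = math.pi * 0.5 ** 2                                 # ft^2
q_liquid = orifice_flow(area, 700.0, 55.0, c=1.05)        # crude-oil-like density (assumed)
q_gas = orifice_flow(area, 700.0, 2.5, c=1.05, y=0.8)     # dense-gas density (assumed)
```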
Flashing fluids/highly volatile liquids

Fluids that flash, that is, that transform from a liquid to a gaseous state on release from the pipeline, pose a complicated problem for leak rate calculation. Initially, droplets of liquid, gas, and aerosol mists will be generated in some combination. These may form liquid pools that continue to generate vapors. The vapor generation is dependent on temperature, soil heat transfer, and atmospheric conditions. It is a nonlinear problem that is not readily solvable. Eventually, if the conditions are right, the liquid will all flash or vaporize and the flow will be purely gaseous.

To simplify this problem, an arbitrary scenario is chosen to simulate this complex flow. Three minutes of liquid flow at MAOP is added to 7 minutes of gas flow at the product's vapor pressure to arrive at the total release quantity after 10 minutes. This conservatively simulates a situation in which, on pipeline rupture, pure liquid is released until the nearby pipeline contents are depressured from the rupture pressure to the product's vapor pressure. Three minutes at the higher pressure (the initial pressure, MAOP) simulates this. Then, when the nearby pipe contents have reached the product's vapor pressure, any liquid remaining in the line will vaporize. This vapor generation is simulated by 7 minutes of gas flow at the vapor pressure of the pipeline contents. Figure B.1 illustrates this concept.

This is, of course, a gross oversimplification of the actual process. For this application, however, the scenario, if applied consistently, should provide results that make adequate distinctions in leak rates between pipelines of different products, sizes, and pressures.
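The 3-minute/7-minute split of Figure B.1 can be sketched as follows, again using the orifice approximation. The propane-like properties and pipe data in the example are assumed values:

```python
import math

# Sketch of the flashing-fluid spill-quantity model of Figure B.1: 3 minutes
# of liquid flow at MAOP plus 7 minutes of gas flow at the product's vapor
# pressure. The example numbers (propane-like densities, line size) are
# assumptions for illustration.

G = 32.2  # ft/sec^2

def orifice_mass_rate(area_ft2, dp_psi, rho_lb_ft3, c=1.0, y=1.0):
    """Mass rate (lb/sec) = rho * q, with q from the orifice equation."""
    q = y * c * area_ft2 * math.sqrt(2 * G * 144 * dp_psi / rho_lb_ft3)
    return rho_lb_ft3 * q

def flashing_spill_mass(area_ft2, maop_psi, vapor_p_psi, rho_liq, rho_gas):
    liquid_mass = orifice_mass_rate(area_ft2, maop_psi, rho_liq) * 3 * 60
    gas_mass = orifice_mass_rate(area_ft2, vapor_p_psi, rho_gas, y=0.8) * 7 * 60
    return liquid_mass + gas_mass  # total release after 10 minutes

# Example: 8-in. propane line at 1,000 psig MAOP, ~170 psi vapor pressure.
area = math.pi * (8 / 12 / 2) ** 2
total = flashing_spill_mass(area, 1000.0, 170.0, rho_liq=31.0, rho_gas=1.9)
```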
Appendix C
Pipe Strength Determination
Some equations and design concepts are presented in this section to give the evaluator who is not already familiar with pipeline design methods a feel for some of the commonly used formulas. This section is not intended to replace a design manual or design methodology. Used with the corresponding risk evaluation sections, this appendix can assist the nonengineer in understanding design aspects of the pipeline being examined.
Stresses

Minimum pipeline wall thicknesses are determined based on the amount of stress that the pipe must withstand. Design stresses are determined by careful consideration of all loadings to which the pipeline will be subjected. Loadings are not limited to physical weights such as soil and traffic over the line. A typical analysis of anticipated loads for a buried pipeline would include allowances for:

- Internal pressure
- Surge pressures
- Soil loadings (including soil movements)
- Traffic loadings.

Additional criteria are considered for special installation circumstances such as drilled crossings and overhead spans. These criteria include:

- Bending stresses (overhead crossings and drilled crossings)
- Tensile loads (drilled crossings)
- Buoyancy.
For each of these loadings, failure must be defined and all failure modes must be identified. Failure is often defined as permanent deformation of the pipe. After permanent deformation, the pipe may no longer be suitable for the service intended. Permanent deformation occurs through failure modes such as bending, buckling, crushing, rupture, bulging, and tearing. In engineering terms, these relate to stresses of shear, compression, torsion, and tension. These stresses are further defined by the directions in which they act; axial, radial, circumferential, tangential, hoop, and longitudinal are common terms used to refer to stress direction. Some of these stress direction terms are used interchangeably.

Pipe materials have different properties. Ductility, tensile strength, impact toughness, and a host of other material properties will determine the weakest aspect of the material. If the pipe is considered to be flexible (will deflect at least 2% without excessive stress), the failure mode will likely be different from that of a rigid pipe. The highest level of stress directed in the pipe material's weakest direction will normally be the critical failure mode. The exception may be buckling, which is more dependent on the geometry of the pipe and the forces applied. Another way to say this is that the critical failure mode for each loading will be the one that fails under the lowest stress level (and, hence, requires the greatest wall thickness to resist the failure). Overall, then, the wall thickness will be determined based on the critical failure mode of the worst case loading scenario.
Internal loadings

Internal pressure is often the governing design consideration for pressurized pipelines. The magnitude of the internal pressure along with the pipe characteristics determines the magnitude of stress in the pipe wall (due to internal pressure alone), which in turn determines the required wall thickness. This stress (or the associated wall thickness) is calculated using an equation called the Barlow formula:

Smax = (Pi × D) / (2 × t)

where
Smax = maximum stress (psi)
Pi = internal pressure (psig)
D = outside diameter (in.)
t = wall thickness (in.).

This equation specifically calculates the tangential or hoop stress of a thin-walled cylinder of infinite length (Figure C.1). It assumes that the wall thickness is negligible compared to the diameter. Normally the outside diameter is used in the equation (rather than the average diameter) to be slightly more conservative. An exception is concrete pipe, in which the internal diameter is used in the calculation [55]. This allows for concrete's minimal tensile strength. Barlow's formula is not theoretically exact, but yields results within a few percent of actual, depending on the D/t ratio. (Higher D/t yields more accurate results; lower yields more conservative results; see [55].)

Many plastic pipe manufacturers refer to a standard dimension ratio:

SDR = Do / t

Thus we have

Smax = (Pi × SDR) / 2

where
SDR = standard dimension ratio
Do = outside diameter (in.)
t = pipe wall thickness (in.)
Smax = maximum stress (psi)
Pi = internal pressure (psig).
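The Barlow formula and its SDR form can be sketched as follows; the example pipe dimensions and pressure are illustrative only:

```python
# Sketch of the Barlow formula and its SDR form from the text. The example
# pipe dimensions and pressure are assumed values for illustration.

def barlow_hoop_stress(p_internal_psig, od_in, wall_in):
    """Hoop stress (psi): S = P * D / (2 * t)."""
    return p_internal_psig * od_in / (2 * wall_in)

def barlow_wall_thickness(p_internal_psig, od_in, allowable_stress_psi):
    """Minimum wall thickness (in.) for a given allowable hoop stress."""
    return p_internal_psig * od_in / (2 * allowable_stress_psi)

def sdr_hoop_stress(p_internal_psig, sdr):
    """Equivalent SDR form: S = P * SDR / 2."""
    return p_internal_psig * sdr / 2

# Example: 12.75-in. OD, 0.250-in. wall line at 800 psig.
s = barlow_hoop_stress(800, 12.75, 0.250)  # -> 20400.0 psi
# The SDR form gives the same answer with SDR = Do/t:
assert abs(s - sdr_hoop_stress(800, 12.75 / 0.250)) < 1e-9
```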
External loadings

External forces require complex calculations, both in determining actual loadings and the pipe responses to those loadings. Soil loads, traffic loadings, buoyancy, and the pipe weight are typical loadings. For offshore and submerged pipelines, the effects of water pressure, currents, floating debris (producing impact loadings), and changing bottom conditions must also be considered. An equation given to calculate the required wall thickness to resist buckling due to a static uniform external pressure is [55]:

t = D × [p / (2 × E)]^(1/3)

where
t = wall thickness (in.)
D = diameter (in.)
p = uniform external pressure (psi)
E = pipe modulus of elasticity (psi).
This equation does not consider the soil-pipe interaction that is a critical part of the buried pipeline system. A rigid pipe must directly withstand the external loads applied. On overstressing, typical failure modes are shear and crushing. A flexible pipe, however, deflects under load, allowing the surrounding soil to assist in the support of the load. If this deflection or bending becomes excessive, ring deflection may be the failure mode, causing buckling of the flexible pipe.

If the external load has a velocity component associated with it, this must also be considered. Highway traffic, rail traffic, and aircraft landings are examples of moving or live loads that, in addition to their static weight, carry an impact factor due to their movement. This impact factor can magnify the static effect of the vehicles' weight. Design formulas to calculate loadings from moving vehicles can be found in pipeline design manuals. Calculations can be done to estimate buckle initiation and buckle propagation pressures for subsea pipelines. It is usually appropriate to evaluate buckle potential when the pipeline is in the depressured state and thereby most susceptible to a uniformly applied external force.
Longitudinal stresses
Figure C.1 Barlow's formula for internal pressure stress: hoop stress = PD/2t, acting on a cylinder of diameter D.
While the primary stress caused by internal pressure is hoop stress, stresses are also produced in other directions. The longitudinal stress produced by internal pressure can be significant in some pipe materials. The amount of restraint on the pipeline in the longitudinal direction will impact the amount of longitudinal stress generated in the pipe. If the pipe is considered to be completely restrained longitudinally, the magnitude of the longitudinal stress is directly proportional to the hoop stress. The proportionality factor is called Poisson's coefficient or ratio. Some values of Poisson's ratio are:

Steel          0.30
Ductile iron   0.28
PVC            0.45
Aluminum       0.33
If the pipe is considered to be unrestrained longitudinally, the longitudinal stress is numerically equal to about one-half of the hoop stress. In most cases, the actual stress situation is somewhere between the totally restrained and totally unrestrained conditions. A rule of thumb for buried steel pipelines shows that the longitudinal stress generated by internal pressure can be approximated by [90]:

SL = 0.45 × St

where SL = longitudinal stress and St = tangential (hoop) stress.

Longitudinal stresses also occur as a result of differential temperatures. These stresses can be calculated from:

σtemp = -α × (ΔT) × E

where
σtemp = temperature-induced longitudinal stress
α = linear coefficient of expansion
ΔT = temperature change
E = modulus of elasticity of pipe material.
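The rule of thumb and the thermal-stress equation above can be sketched as follows; the steel properties in the example are typical assumed values:

```python
# Sketch of the longitudinal-stress relations above: the 0.45 rule of thumb
# for buried steel pipe and the restrained thermal-stress equation. The
# example material properties are typical assumed values for steel.

def longitudinal_from_hoop(hoop_stress_psi, factor=0.45):
    """Rule of thumb for buried steel pipe: S_L = 0.45 * S_t."""
    return factor * hoop_stress_psi

def thermal_longitudinal_stress(alpha_per_degF, delta_t_degF, e_psi):
    """Restrained thermal stress: sigma = -alpha * dT * E.
    Negative (compressive) for a temperature increase."""
    return -alpha_per_degF * delta_t_degF * e_psi

# Steel: alpha ~ 6.5e-6 /degF, E ~ 29.5e6 psi (assumed), 40 degF warm-up.
s_temp = thermal_longitudinal_stress(6.5e-6, 40.0, 29.5e6)  # compressive
s_long = longitudinal_from_hoop(20000.0)                    # -> 9000.0 psi
```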
Bending stresses are caused by deflection of the pipe. Inadequate lateral support of the pipeline can therefore allow axial bending and hence longitudinal stress (Figure C.2). Inadequatesupport can be caused by Uneven excavation during initial construction Underminingdue to subsurfacewater movements Varying soil conditions that allow the differentialsettling. Intentional bending during construction for directional changes, either laterally or vertically.
Figure C.2 Bending stresses.
Bending stresses are caused by deflection of the pipe. Inadequate lateral support of the pipeline can therefore allow axial bending and hence longitudinal stress (Figure C.2). Inadequate support can be caused by:

- Uneven excavation during initial construction
- Undermining due to subsurface water movements
- Varying soil conditions that allow differential settling
- Intentional bending during construction for directional changes, either laterally or vertically.
Other considerations Depending on the pipe material, other criteria may govern wall thickness calculations. Buckling, cracking, deflection, shear, crushing, vacuum collapse, fatigue, etc., may ultimately determine the wall thickness requirements. More specific formulas are available for detailed analyses of loadings associated with these failure modes.
In all pipe materials, special allowances must be made for "stress risers." Notches, cracks, or any abrupt changes in wall thickness or shape can amplify the stress level in the pipe wall. See further discussions of fracture toughness in Chapter 5.

Anomalies and defects

A defect in a pipe wall is considered to be any undesirable pipe anomaly, such as a crack, gouge, dent, or metal loss, that reduces pipe strength or could later lead to a leak or spill. Note that not all anomalies are defects. Some dents, gouges, metal loss, and even cracks will not affect the service life of a pipeline. Possible defects include seam weaknesses associated with low-frequency ERW and electric flash welded pipe, dents or gouges from past excavation damage or other external forces, external corrosion wall losses, internal corrosion wall losses, laminations, pipe body cracks, and circumferential weld defects and hard spots. Damages can be detected by visual inspection or through integrity verification techniques.

Until an evaluation has shown that an indication detected on a pipe wall is potentially serious, it is normally called an anomaly. It is only a defect if it reduces pipe strength significantly, impairing the pipe's ability to be used as intended. Many anomalies will be of a size that does not require repair because they have not reduced the pipe strength from required levels. However, a risk assessment that examines available pipe strength should probably treat anomalies as evidence of reduced strength and possible active failure mechanisms. A complete assessment of remaining pipe strength in consideration of an anomaly requires accurate characterization of the anomaly: its dimensions and shape. In the absence of detailed remaining strength calculations, the evaluator can reduce pipe strength by a percentage based on the severity of the anomaly.

Higher priority anomalies (those with a very high chance of being defects) include:

- Areas of metal loss greater than 80% of nominal wall, regardless of dimensions
- Areas where predicted burst pressure is less than the maximum operating pressure at the location of the anomaly
- Any dent on the top of the pipeline (above the 4 and 8 o'clock positions), with or without any indicated metal loss.

Important anomalies (probable defects) might include:

- Any dents with metal loss or dents that affect pipe curvature at a girth or seam weld
- Any dents with reported depths greater than 6% of the pipe diameter
- Areas where the calculated remaining strength of the pipe results in a safe operating pressure that is less than the current established MOP at the location
- Areas of general corrosion with a predicted metal loss of more than 50% of nominal wall
- Areas with a predicted metal loss of more than 50% of nominal wall at crossings of another pipeline
- Weld anomalies with a predicted metal loss of more than 50% of nominal wall
- Potential crack indications that, when excavated, are determined to be cracks
- Corrosion of or along seam welds
- Gouges or grooves greater than 12.5% of nominal wall.
Additional anomalies that might warrant attention in the risk evaluation include:

- Any area where the data reflect a change since the prior assessment
- Any area where the data indicate mechanical damage that is located on the top half of the pipe
- Any area where the data indicate anomalies that are abrupt in nature
- Any area where the data indicate anomalies that are longitudinal in orientation
- Any area where the data indicate anomalies that are occurring over a large area
- Any area with anomalies located in or near casings, crossings of another pipeline, and areas with suspect cathodic protection.
There are several industry-accepted methods for determining corrosion-flaw severity and for evaluating the remaining strength in corroded pipe. ASME B31G, ASME B31G Modified, and RSTRENG are examples of available methodologies. Several proprietary calculation methodologies are also used by pipeline companies. These calculation routines require measurements of the depth, geometry, and configuration of corroded areas. Depending on the depths and proximity to one another, some areas will have sufficient remaining strength despite the corrosion damage. The calculation determines whether the area must be repaired. For crack-like defects, fracture mechanics and estimates of stress cycles (frequency and magnitude) are required to determine this. For metal loss from corrosion, the failure size for purposes of probability calculations can be determined by two criteria: (1) the depth of the anomaly and (2) a calculated remaining pressure-containing capacity of the defect configuration. Two criteria are advisable since the accepted calculations for remaining strength are not considered as reliable when anomaly depths exceed 80% of the wall thickness. Likewise, depth alone is not a good indicator of failure potential because stress level and defect configuration are also important variables [86].
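As one illustration of the remaining-strength methods named above, a sketch in the style of the original ASME B31G Level 1 calculation follows. The flow-stress definition (1.1 × SMYS) and the long-flaw cutoff at A = 4 follow the original B31G convention; the example pipe and flaw values are assumptions, and the governing code documents, not this sketch, should be used for any real evaluation:

```python
import math

# B31G-style remaining-strength estimate for a corrosion flaw (original
# Level 1 form). Treat as an illustration only; consult ASME B31G itself
# for the authoritative equations and limits of applicability.

def b31g_failure_pressure(smys_psi, od_in, wall_in, depth_in, length_in):
    flow_stress = 1.1 * smys_psi
    a = 0.893 * length_in / math.sqrt(od_in * wall_in)
    d_t = depth_in / wall_in
    if a <= 4.0:
        m = math.sqrt(a * a + 1.0)  # Folias bulging factor
        sf = flow_stress * (1 - (2 / 3) * d_t) / (1 - (2 / 3) * d_t / m)
    else:
        sf = flow_stress * (1 - d_t)  # long-flaw limit
    return 2 * sf * wall_in / od_in  # Barlow, back to a failure pressure

# Example (assumed): 20-in. x 0.344-in. X52 pipe, 40% deep, 3-in. long flaw.
pf = b31g_failure_pressure(52000, 20.0, 0.344, 0.4 * 0.344, 3.0)
```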
Appendix D
Surge Pressure Calculations
Surge pressures, often called waterhammer, are caused when a moving fluid is suddenly brought to a stop. The resulting translation of kinetic (moving) energy to potential energy causes an increase in the internal pressure: the creation of a pressure wave. An associated positive and negative pressure wave will travel in both directions along the pipe, reflecting and overlapping, depending on the system configuration. The magnitude of the pressure increase is found with the following equation [55]. Surge pressure in feet of water is readily converted to psig by multiplying by 0.43 psig/foot of water:

ΔH = (a / g) × ΔV

where
ΔH = surge pressure (feet of water)
a = velocity of the pressure wave (ft/sec)
g = acceleration due to gravity (32 ft/sec²)
ΔV = change in velocity of fluid (ft/sec).
We can see from this equation that the magnitude of the pressure surge is directly related to the speed of the pressure wave and the fluid velocity change. To calculate the speed of the pressure wave in the pipe, we can use the following equation [55]:

a = 12 × sqrt[ (K / ρ) / (1 + (K × D × C1) / (E × t)) ]

where
a = pressure wave velocity (ft/sec)
K = bulk modulus of the fluid (lb/in.²)
ρ = density of the liquid (slugs/ft³)
D = internal diameter of pipe (in.)
t = pipe wall thickness (in.)
E = modulus of elasticity of pipe material (lb/in.²)
C1 = constant dependent on pipe constraints.

We can see from this equation that pressure wave speed is dependent on pipe properties (diameter, thickness, modulus of elasticity) as well as fluid properties (bulk modulus, density). This means that the pressure wave will travel at different speeds depending not only on the product, but also on the pipeline itself. A more elastic pipe material slows down the pressure wave. As the diameter-to-wall thickness ratio increases, the wave speed decreases. Because fluid compressibility is dependent on density and bulk modulus, we can see that the pressure wave speed varies inversely with the compressibility. Fairly incompressible fluids will support faster pressure waves and, hence, greater surge potentials. Note that hydrocarbons are far more compressible than water.

Another component of the pressure surge calculations should be the wave attenuation. Due to friction losses in the pipeline, the pressure wave will be dampened as it travels. This reduction in pressure magnitude with distance traveled can be calculated and becomes a consideration in pipeline design.

The above equations assume instantaneous fluid velocity changes. If the abruptness of the velocity change is controlled, the maximum surge pressure is also controlled. A common example is the rate of closure of a valve. Slamming a valve shut effectively brings the velocity to zero instantly. A gradual closure causes small, incremental velocity changes with corresponding small surges. How fast is too fast? The following equation allows a critical time to be calculated [55]:

Tc = 2 × L / a

where Tc = critical time (sec), L = distance of pressure wave travel before reflection (ft), and a = velocity of pressure wave (ft/sec).
The critical time is the maximum flow stoppage time that will still allow the maximum surge pressure. Flow stoppage times that are longer than this value produce smaller surge pressures. This critical time is dependent on the piping configuration because the reflection time from the initiating event governs the calculation. It is important to note that, in the case of valve closures, the flow stoppage is not necessarily proportional to the actual amount of closure. A gate valve, for instance, may cause 90% of the flow stoppage within the last 10 to 50% of the gate travel.
The designer must consider the effective closure time as opposed to the actual closure time. Pressure surges may be caused by valve closures, pumps starting or stopping, the sudden meeting of fluid columns moving at different velocities, or other phenomena that abruptly change the velocity of the pipe fluid. Many design options are available that allow for pressure surge effects including relief valves, surge tanks, valve closure controls, pump bypasses, and use of stronger pipe at critical sections.
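The two relationships above can be sketched in a few lines of code. This is an illustrative implementation, assuming the classic water-hammer form of the wave speed equation, a = sqrt(K/ρ) / sqrt(1 + (K/E)(D/t)); the function names and example values are hypothetical, and all inputs must be in one consistent unit system (here, slugs/ft³ and lb/ft²):

```python
import math

def wave_speed(K, rho, D, t, E):
    """Pressure wave speed in a thin-walled, fluid-filled pipe.

    K = fluid bulk modulus, rho = fluid density, D = internal diameter,
    t = wall thickness, E = pipe modulus of elasticity (consistent units).
    A stiffer (higher E) or thicker-walled pipe supports a faster wave.
    """
    return math.sqrt(K / rho) / math.sqrt(1 + (K / E) * (D / t))

def critical_time(L, a):
    """T_c = 2L/a: the longest flow-stoppage time that still produces
    the full surge pressure (L = distance to the reflection point)."""
    return 2.0 * L / a

# Water in a steel pipe, ft-slug-sec units (illustrative values):
a = wave_speed(K=4.32e7, rho=1.94, D=12.0, t=0.5, E=4.32e9)
Tc = critical_time(L=50000.0, a=a)
```

Note that for a rigid pipe (very large E), the wave speed approaches the sonic velocity in the fluid alone, sqrt(K/ρ); pipe elasticity always reduces it below that limit.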
Appendix E
Sample Pipeline Risk Assessment Algorithms
The intent of this appendix is not to provide complete risk algorithms to the reader, but rather to convey a sense of how modeling has been done in the past. Presentation of complete models would necessitate the inclusion of full documentation of all theory and rationale implicit in the model. That would take a book of this size for each comprehensive model evaluated. By presenting examples, it is hoped that the reader will gain confidence in setting up his or her own models. This confidence is gained through the knowledge that there is no "magic approach" that guarantees better results than any other. A good risk model will be firmly rooted in engineering concepts and consistent with experience and intuition. That is why there are so many similarities in the efforts of many different modelers examining many different systems at many different times for differing objectives. Beyond compatibility with engineering and experience, a model can take many forms, especially in differing levels of detail and complexity.
GRI model reviews

The following discussions of two published pipeline risk models are extracted from preliminary work performed by Kiefner and Associates, Inc., on behalf of the Gas Research Institute (GRI). Many other references on these models are available in the technical literature. These two were chosen as being fairly representative of many systems developed by consultants and by operating companies themselves.

Model 1

Kiefner, J. F., Vieth, P. H., Orban, J. E., and Feder, P. I., "Methods for Prioritizing Pipeline Maintenance and Rehabilitation," a report to the American Gas Association, September 28, 1990.

This report presents the development of PIMAR, the A.G.A./PRC-sponsored ranking algorithm for pipeline maintenance. One focus of this paper is the development of the risk assessment algorithm, PIMAR, for prioritizing pipeline maintenance and rehabilitation. Each of the contributing factors associated with probability of failure and the consequences is defined. The parameters chosen for the algorithm in terms of probability of failure were categorized into eight different groups: type of pipe, soil stability, coating integrity, cathodic protection, damage susceptibility, hydrostatic test history, leak/rupture history, and pipeline condition. Those related to consequences of failure were class location, security of throughput, product type, propensity for ductile fracture propagation, and transition temperature. The rationales for the selection of these parameters were provided. The end result of each of these contributing factors was an algorithm for probability of failure, given below:

PF = Pt (JtSS + CaP*SCA + DS + HT + LR + PiCo)
where
Pt = risk as a function of the type of pipe
JtSS = risk associated with longitudinal stresses caused by soil-induced forces and older joining methods
CaP*SCA = risk associated with corrosion susceptibility
DS = damage susceptibility
HT = hydrostatic test history
LR = service leak/rupture history
PiCo = pipe condition.
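As an illustration, the probability-of-failure scoring above can be applied and used to rank segments in a few lines of code. This is a sketch only: the factor scores below are hypothetical, and the paper's own sub-equations for each factor are not reproduced here.

```python
def probability_of_failure(f):
    """PIMAR-style probability-of-failure score:
    PF = Pt * (JtSS + CaP*SCA + DS + HT + LR + PiCo).
    The dictionary f maps factor name -> score for one segment."""
    return f["Pt"] * (f["JtSS"] + f["CaP_SCA"] + f["DS"] +
                      f["HT"] + f["LR"] + f["PiCo"])

# Hypothetical factor scores for two segments:
segments = {
    "seg A": {"Pt": 1.2, "JtSS": 2.5, "CaP_SCA": 4.0, "DS": 1.5,
              "HT": 2.0, "LR": 0.5, "PiCo": 1.0},
    "seg B": {"Pt": 1.0, "JtSS": 1.0, "CaP_SCA": 2.0, "DS": 3.5,
              "HT": 0.5, "LR": 2.5, "PiCo": 2.0},
}

# Rank segments from highest to lowest probability score.
ranked = sorted(segments, key=lambda s: probability_of_failure(segments[s]),
                reverse=True)
```

Because the output is only a ranking score, the absolute magnitudes have no meaning outside the comparison, which is exactly the limitation the paper itself notes.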
Within the paper, each of the above parameters is defined, and their respective equations provided. The resulting equation for the consequences of failure is as follows:

CF = [20(DS1) + ST + 5(OS*PD) + 5(TT)]PR + 40(1 - PR)(D/8)^1/2

where
20(DS1) = class location effect
ST = security of throughput
5(OS*PD) = ductile fracture propagation
5(TT) = transition temperature of the material
PR = multiplier reflecting product type.

Again, a detailed description of each of the above is provided. The end result for calculating the relative risk combines the above factors as

RR = PF * CF

This equation can be used to prioritize pipelines for maintenance, in-line inspection, hydrostatic retesting, or rehabilitation. Parameters can also be omitted to suit special situations. The paper also provides a sensitivity analysis to justify the selection of the coefficients in the equations and 10 example problems where the algorithm is applied to various pipeline situations. Finally, the limitations of the algorithm are discussed: mainly, that it is only a ranking mechanism, and no number calculated should be thought of as an absolute risk.

Model 2

Kirkwood, M. G., and Kamm, M., "A Scheme for Setting Pipeline Repair, Maintenance and Inspection Priorities," presented at Pipeline Risk Assessment, Rehabilitation and Repair Conference, September 12-15, 1994.

The focus of this paper is to provide the reader with a strategy to maintain and repair a pipeline using relative risk assessment. Risk is defined as the combination of the probability of occurrence of a hazard and the magnitude of the consequences of the failure. Both quantitative and qualitative risk are defined, and the paper provides a method that utilizes qualitative data, thus producing risk within pipeline segments relative to one another. Using quantitative data produces absolute risk, rather than relative risk, but oftentimes not enough statistical data exist to properly determine the risk. The relative risk method uses engineering knowledge, experience, and awareness. The next portion of the paper provides a detailed description of the pipeline hazards used in the risk assessment. These hazards include internal corrosion, external corrosion, fatigue, stress-corrosion cracking, mechanical damage, third-party intervention, and loss of ground support. This is not an exhaustive list, but the examples are characteristic of the considerations required. The total probability of failure (PF) is given as the sum of each of the individual probability factors:

PF = Σ iPF

where iPF is the probability factor due to a particular failure mode, i. Breaking down the individual failure modes allows one to identify the influence of each mode on the entire pipeline. The authors also include an equation that takes the probability of a particular failure mode and further breaks it down into specific risk factors:

iPF = iSSF x iSVF

where
iPF = probability factor due to failure mode i
iSSF = susceptibility factor due to failure mode i
iSVF = severity factor due to failure mode i.

The next section of the paper discusses the consequences of failure and the consequence factors. The consequence of failure is the damage or cost incurred when a pipeline fails, defined as the sum of all the feasible consequence factors:

CF = Σ jCF

The authors give the consequence factors as risk to life, damage to property, loss of service, cost of failure, and environmental effects. These factors are not weighted against each other; rather, weighting is decided for each factor by the pipeline operator. These equations combine to form the full relative risk equation as follows:

RR = (1/7) Σ iPF x (1/5) Σ jCF

or

RR = (1/7) Σ (iSSF x iSVF) x (1/5) Σ jCF

The 7 (the number of probability factors) and the 5 (the number of consequence factors) in the denominators are used to average the total probabilities of each. The authors also suggest that this scheme can be used to examine the risk of failure due to a specific factor. For example, the risk of failure from corrosion can be determined from:

CIR = (1/2)(ICPF + ECPF) x (1/5) Σ jCF

where
CIR = a corrosion inspection rating
ICPF = probability factor for internal corrosion
ECPF = probability factor for external corrosion.

The parameters used in the priority rating are presented in a list as questions for the operator to answer. A value is then calculated for each of the risk factors, taking into account all of the parameters that have an effect on the pipeline. The parameter list was compiled using references, a review of pipeline failure data, and expert opinion. These input parameters are normally assigned a value in the range of 0.0 to 1.0. An example is provided showing this system, calculating the risk factor for fatigue (FSSF). Once the relative ranks are calculated, the given scheme can be used in two ways:

1. Pipeline rating: prioritizing pipelines for a particular maintenance action
2. Failure-mode rating: applying the scheme to one pipeline, determining and prioritizing the most likely causes of failure, and implementing a maintenance program to prevent these failure causes.

Due to the lengthy calculations involved, a Microsoft Windows PC program was developed to compute the relative risk values. Fifty-nine questions are asked of the user that relate to the design, condition, history, and environment of each pipeline segment. Input and output screens are shown in the paper.
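The Model 2 combination rule can be sketched in a few lines, assuming seven failure modes and five consequence factors as described above; all input scores below are hypothetical values in the paper's 0.0 to 1.0 range.

```python
def relative_risk(modes, consequences):
    """RR = (1/7) * sum of (iSSF * iSVF) * (1/5) * sum of jCF.

    modes: list of (susceptibility, severity) pairs, one per failure mode;
    consequences: list of consequence factor scores.
    The divisors average each part over its number of factors."""
    pf = sum(ssf * svf for ssf, svf in modes)
    cf = sum(consequences)
    return (pf / 7.0) * (cf / 5.0)

# Hypothetical (susceptibility, severity) pairs for the seven modes:
modes = [(0.2, 0.5), (0.4, 0.9), (0.1, 0.3), (0.7, 0.6),
         (0.3, 0.2), (0.5, 0.5), (0.1, 0.8)]
# Hypothetical scores for the five consequence factors:
consequences = [0.9, 0.4, 0.2, 0.6, 0.3]

rr = relative_risk(modes, consequences)
```

With inputs bounded to 0.0 to 1.0, the averaging keeps RR itself within that same range, which is convenient for comparing segments.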
Sample algorithms

The following are examples of algorithms used to assess risk variables such as the probability of damage from third-party damages, the role of pipe strength, the potential for internal corrosion, and potential consequences (LIF). Words in brackets represent risk variables. Brackets identify variables for some versions of SQL software. Variables are multiplied by factors, normally to weight their importance or to convert some measurement (such as depth of cover) to a point score. For example, the equation ([depth-cover] = [cover] / 3 * 10) means that a variable called "cover" (a measure of depth of cover) is divided by 3 and multiplied by a weighting factor of 10 to arrive at a point score for the calculated variable called "depth-cover." In this case, [cover] represents the actual measurement of cover in inches and [depth-cover] is the risk variable created from that measurement. Variables are left in their abbreviated form, but should be readily recognizable by experienced pipeline personnel.
Sample Third-party Damage Algorithm

ThdPtySum = (([depth-cover] + [activity] + [exposed-facilities] + [one-call] + [patrol] + [public-edn] + [ROW-cond]) * (IIf([leak]=1, 0.9, IIf([leak]=5, 0.9, 1))) * [repair-thd-pty])
  Sum of all subvariables adjusted by leak history.

depth-cover = ([earth cover] x [earth type]) + [pavement]

activity = (1/([utilities] + [one-calls] + [pop]*2)*12 + [prev activity level]/2)
  First, as the number of foreign utilities, one-call reports, and/or population density increases, the activity score decreases (worsens). Population density is the most important indicator, so it is doubled in this first part of the calculation. This is then multiplied by 12, a scaling factor, and added to a previous assessment of activity level, divided by 2 to reduce its impact (since it is older information).

exposed-facilities = ([abv_grnd]/2 + IIf([cover]=0, 0, 5)) x [vulnerability]
  A vulnerability assessment (done elsewhere) is multiplied by the type of exposure. If an aboveground component has been previously identified, it will appear as a nonzero value in the [abv_grnd] variable. As another check, if depth of cover is zero, then that is also considered an exposure. If no exposures are found, maximum points (5 pts) are awarded.

one-call = ([mandated] + [effectiveness] + [use] + [response])
  Combines variables that evaluate properties of a one-call system and the company's reaction to one-call reports.

public-edn = ((2 x [door-to-door]) + [mail out] + [advertisement]) x [pub ed freq]
  Combines variables that evaluate aspects of a public education system.

ROW-cond = ([undergrowth] + [overgrowth] + [signs/markers])
  Combines variables that evaluate aspects of ROW condition.

patrol = ([air_patrl_freq] * IIf([ROW]<3, 0.5, [air_patrl_eff]) * 2)
  Combines variables that evaluate properties of patrol frequency and effectiveness.

Miscellaneous Algorithm Variables

pipe-fctr = (IIf([pipe-maxpress]/[MOP] < 1, 0, IIf(([pipe-maxpress]/[MOP]) > 2, 20, ([pipe-maxpress]/[MOP] - 1)*20)) * [ILI-design-flaw])
  As a measure of pipe strength, the ratio of available strength versus operating pressure is scored (max 20 pts) and adjusted by the results of the most recent search for flaws by in-line inspection.

pipe-maxpress = ([pipe-barlow] * (IIf([yr]<72, (IIf([pipe_seam]=1, (IIf([integrity-test]<5, 0.8, 0.95)), 0.95)), 0.95)))
  The pressure calculation per Barlow's formula is adjusted if low-frequency ERW pipe is present (pre-1970 and ERW seam type) and further adjusted by the score for the most recent integrity test (pressure test or in-line inspection).

internal-corr = (([prod-corr] + IIf([drain]=1, 0, 5)) * (IIf([yr]<94, 0.5, 1)))
  A product corrosivity score added to a "low spot" score (if there is a low spot, as identified by the [drain] variable, then 0 pts are assigned), multiplied by an age factor, where a segment older than 1994 receives a 50% penalty.

Sample Third-party Damage Algorithm 2

external force = [pipe geometry] x [material factor] x [depth cover] x [exposure] x [damage prevention]
  Multiply all variables together to arrive at a score for "external force"; variables are defined below. Higher numbers reflect higher risk levels (Ref. [48]).

pipe geometry = 1/[pipe wall]^2 + 12/[pipe diameter]
  Both in inches.

material factor = 100000/[pipe SMYS] + (10/[pipe Charpy Vnotch]) + [temp vs transition temp]
  SMYS is specified minimum yield strength in psi; [pipe Charpy Vnotch] is the Charpy V-notch upper shelf energy in ft-lb; and [temp vs transition temp] is 0 if the pipe is never exposed to a temperature below its ductile-to-brittle transition temperature, 1 if the pipe is sometimes exposed to temperatures up to 60°F below its transition temperature, and 10 if the pipe is frequently exposed to temperatures of 60°F or more below its transition temperature or if its transition temperature is unknown.

depth cover = (3/[cover])^1.5
  [cover] = depth of cover in feet.

exposure = 1 + 0.1 x ([foreign line xings] + [road xings] + [river xings]) + [activity level] + 100 x [incident rate]
  The crossing counts are the number of foreign line crossings, road crossings, and river crossings in the segment; [activity level] is a number ranging from 1 to 5 depending on the perceived construction activity; and [incident rate] is the number of outside force incidents per mile per year on the segment.

damage prevention = 1 + 5 x [one-call] + 5 x [patrol] + [landowner communications] + [ROW]
  [one-call] ranges from 0 to 1 and represents the one-call response, i.e., 1 for locating and marking the pipeline, 0.5 for locating, marking, and providing an on-site representative, and 0.1 if the operator exposes the line for the excavating contractor; [patrol] ranges from 0 to 1 and represents patrolling, i.e., 1.0 for biweekly overflights, 0.5 for 2 overflights per week, 0.1 for 3 or more overflights per week; [landowner communications] ranges from 0 to 1 and represents communication with landowners, i.e., 1 for none, 0.8 for periodic contacts by mail, 0.5 for periodic face-to-face meetings; [ROW] represents right-of-way maintenance and marking and ranges from 0.5 to 1, i.e., 0.5 for a robust program, 1.0 for a normal program.
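To illustrate how entries like these translate into executable logic, the following sketch implements the second third-party damage algorithm and the internal-corr variable, with the SQL-style IIf() rendered as a small Python helper. All demo input values are hypothetical.

```python
def iif(cond, if_true, if_false):
    """Python stand-in for the SQL-style IIf() used in these algorithms."""
    return if_true if cond else if_false

def external_force(pipe_wall, pipe_dia, smys, charpy, temp_flag,
                   cover_ft, xings, activity_level, incident_rate,
                   one_call, patrol, landowner, row):
    """Sample Third-party Damage Algorithm 2: higher score = higher risk."""
    pipe_geometry = 1 / pipe_wall**2 + 12 / pipe_dia
    material_factor = 100000 / smys + 10 / charpy + temp_flag
    depth_cover = (3 / cover_ft) ** 1.5
    exposure = 1 + 0.1 * xings + activity_level + 100 * incident_rate
    damage_prevention = 1 + 5 * one_call + 5 * patrol + landowner + row
    return (pipe_geometry * material_factor * depth_cover *
            exposure * damage_prevention)

def internal_corr(prod_corr, drain, yr):
    """internal-corr: corrosivity plus low-spot score, times an age factor."""
    return (prod_corr + iif(drain == 1, 0, 5)) * iif(yr < 94, 0.5, 1)

# Hypothetical segment: 12-in. X52 pipe, 0.25-in. wall, 3 ft of cover.
score = external_force(pipe_wall=0.25, pipe_dia=12.0, smys=52000,
                       charpy=50, temp_flag=0, cover_ft=3.0, xings=2,
                       activity_level=2, incident_rate=0.001,
                       one_call=1.0, patrol=0.5, landowner=0.8, row=1.0)
```

Because every factor is a multiplier here, a single bad input (e.g., shallow cover) scales the whole score up, unlike the additive first algorithm where one factor can only shift the total by its own weight.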
Sample LIF Algorithm

The following flowchart/outline shows variables that support an overall consequence assessment of LIF = ([prod-haz] * [spill] * [disp] * [receptors]). This type of assessment is fully discussed in Chapter 7.

LIF (as defined in Chapter 7)
  prod-haz
    acute: Nf, Nr, Nh (as defined previously)
    chronic: RQ
  spill
    spill-vol: max flowrate, leak detect capability
    drain-vol: pipe-dia, drain length
      Note: Analysis determines the length of pipe that will contribute to drain at the leak point. Pipe diameter is then used to calculate the drain volume.
  dispersion
    haz zone: prod-type, spill-vol, pipe-dia, MOP, therm factor
      Note: These variables are part of the hazard zone calculation: iif([prod-type]=0, iif([acute]<2, [contam area], LOG([spill-vol]/2)), ([pipe-dia]^2 * [MOP]) * [therm factor]) (see Chapter 7).
    therm factor
      Note: This variable establishes an equivalency between different types of thermal effects (liquid pool fire vs. flame jet vs. fireball) as discussed in Chapter 7.
    acute: prod-type, prod-boilpt, prod-MW, [ignition prob]
      Note: Algorithm equation is: iif([prod_type]=0, iif([acute]<2, 1, SQRT(prod_boilpt)*[constant]), prod-MW). Ignition probability is a function of product ignitability and the presence of ignition sources.
    contam area
      Note: If there are no thermal or overpressure effects and the product is a liquid, then the hazard zone is based on the potential contamination area, which is a function of spill volume.
    dist from source: source location, leak location
      Note: This variable is used to capture a situation where the source of a thermal or overpressure effect is not at the leak location; for example, a liquid pool that forms some distance from the leak and then ignites.
    disp issue: disp-feature, disp-effect
      Note: This is a flag to alert to special dispersion issues that may significantly increase or decrease exposed receptors; examples include special topography, presence of sewers, and prevailing wind considerations.
  receptors
    pop: pop-density, pop-special-dens, pop-special-non-mobile
      Note: Variables to capture all aspects of human population vulnerabilities.
    environmental: environ-drink-water, environ-T&E, environ-rec-water (recreational waters), environ-prime-ag, environ-aquifer, environ-wetlands, environ-shoreline, environ-sensitive (other special sensitivities)
      Note: Variables to capture all aspects of environmental vulnerabilities.
    HVA
      Note: High value areas; as defined previously.
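The top-level combination is a simple product of the four component scores. A minimal sketch, assuming each branch of the outline above has already been reduced to a normalized score per the Chapter 7 methods (the component values here are hypothetical):

```python
def lif_score(prod_haz, spill, disp, receptors):
    """Leak impact factor as the product of its four components:
    LIF = [prod-haz] * [spill] * [disp] * [receptors]."""
    return prod_haz * spill * disp * receptors

# Hypothetical normalized component scores for one segment; the real
# sub-calculations (hazard zones, drain volumes, receptor counts) are
# developed in Chapter 7 and the outline above.
value = lif_score(prod_haz=0.6, spill=0.8, disp=0.5, receptors=0.8)
```

The multiplicative form means a near-zero score in any one component (for example, a remote segment with almost no receptors) drives the overall consequence score toward zero, regardless of the other components.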
Appendix F
Receptor Risk Evaluation
The following is extracted from Attachment C of Appendix 9B of Ref. [86]. This reference is an environmental assessment (EA) of a proposed 700-mile gasoline pipeline across the state of Texas. This extract illustrates the use of specific consequence determinations in a QRA. The references to Cases 1 through 4 in the text below refer to four different pipeline failure probabilities that were used in this EA [86]. These failure probability cases are described in Chapter 14's Case Study D (we apologize for the confusing use of "cases" and "Case Studies"). That Case Study also originates from Ref. [86] and shows how the assumptions and calculations shown below were used to produce probabilities of specific damage states. Tables referenced below can be found in Case Study D, and references noted can be found in Ref. [86]. This extract is included in this book because it details the considerations that went into assessing various receptor vulnerabilities. These can generate ideas and concepts useful to the designer of any risk assessment methodology.
Details of assumptions and calculations used in QRA for environmental assessment of proposed LPP pipeline

Fatalities and injuries

The fatality and injury rates for the 4 cases described in Tables 3 and 4 of this appendix were calculated from data in the DOT Database (DOT, 1999). The fatality and injury rates for the period 1975-1999 were derived from the total number of fatalities and injuries associated with pipelines carrying refined products and crude oil during this period. These rates, expressed as fatalities/injuries per reportable spill, are calculated as the total number of fatalities or injuries divided by the total reportable spills (spill volumes ≥ 50 barrels, mostly) in the period 1975-1999. There were 11 fatalities and 57 injuries associated with 2,395 reportable spills during this 25-year period. The fatality rate is calculated as:

Fatality rate = 11 fatalities / 2,395 reportable spills
             = 0.00459 fatalities per reportable spill
The injury rate was calculated in a similar manner. The fatality and injury rates were 0.00459 and 0.0238 per spill, respectively. This approach assumes that there is no more than one fatality/injury per reportable spill even though this is not the case. This assumption introduces conservatism into the fatality/injury rate estimates since the "fatalities/injuries per reportable spill" rates overstate the rate that is really sought: the frequency of "one-or-more fatalities/injuries per reportable spill." These two rates are referred to interchangeably in EA discussions, but are always based on the conservative calculations described here. The overall risks of fatalities and injuries from pipeline spills were determined from the overall leak rate expressed as leaks per mile per year. For example, the estimated average number of LPP pipeline leaks predicted over the next 50 years, using industry average reportable leak rates as a basis, is 35. The equivalent number of fatalities that can be expected for this same length of pipeline over 50 years is 0.00459 (fatalities per spill) x 35 (spills per 50 years) = 0.16. The annual frequency is calculated as the project-life frequency divided by the project life of 50 years. The fatality and injury rates for Case 2 were calculated in a similar manner, using the estimated leak count of 26.8 determined from the pre-mitigation reportable leak rate of 0.00077 leaks per mile per year (10 leaks in 450 miles over 29 years). The average leak rate of 0.00199 leaks per mile used in Case 3 includes all leaks: those less than 50 barrels in volume in addition to reportable leaks. In estimating the fatality and injury rates, it was assumed that there were no injuries or
fatalities associated with leaks of less than 50 barrels. Since the estimated leak counts included leaks of less than 50 barrels, the estimated leak rates were reduced by the ratio of reportable to total leaks. Approximately 56% of the total leaks are below 50 barrels in size. Thus, the leak rates were multiplied by 0.44 to obtain the estimated fatality and injury rates. For example, the fatality rate for Case 3 was calculated in the following manner: Fatalities = 0.00459 x 69.7 x 0.44 = 0.14 fatalities over the project life. The fatality and injury rates for Case 4 were calculated in a similar manner. The average leak rate for Case 4 was determined as described elsewhere in this appendix. The segment-specific fatality and injury frequencies shown in Table 4 were calculated in much the same manner as those given in Table 3. The frequencies for the 2,500-ft segments were produced by reducing the frequencies for the entire pipeline by the ratio of 2,500 ft to 700 miles. For example, the fatality frequency for Case 1 was calculated as follows:

Fatality frequency = (0.00459 x 35 x 2500) / (700 x 5280) = 1.09 x 10^-4
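The rate arithmetic above can be reproduced in a few lines (the 35-spill project-life estimate and the segment dimensions are taken directly from the text):

```python
# Fatality rate from the 1975-1999 DOT data: 11 fatalities, 2,395 spills.
fatalities, spills = 11, 2395
fatality_rate = fatalities / spills          # per reportable spill

# Project-life expectation: 35 spills predicted over 50 years.
expected_spills_50yr = 35
expected_fatalities = fatality_rate * expected_spills_50yr

# Segment-specific frequency: scale the whole-pipeline figure down by
# the ratio of one 2,500-ft segment to the 700-mile (700 x 5280 ft) line.
segment_freq = expected_fatalities * 2500 / (700 * 5280)
```

Note that this segment frequency is still a project-life (50-year) figure; the EA's annual frequency is this value divided by the 50-year project life.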
Drinking water contamination

Contamination of public drinking water resources may occur either from contamination of sensitive ground water or surface water supplies. Tier 2 and Tier 3 areas for potential drinking water contamination were defined by the sensitive and hypersensitive designations in Chapter 7 [of the EA, Ref. [86]]. The mileage of Tier 2 and Tier 3 areas for ground water and surface water was therefore derived directly from Tables 7.1 and 7.2 [not included in this book, but Chapter 7 of this book discusses the tier designations]. Note that sensitive and hypersensitive areas for ground water and surface water are not mutually exclusive, and therefore some overestimation of overall probability will result. The assignment of sensitive and hypersensitive areas is based on hydrological and hydrogeological evaluation of the characteristics of surface water streams and aquifers that could be impacted by the pipeline. The designation of sensitive was intended to indicate those areas where it is deemed possible for damages to occur to a drinking water supply resulting from a release. The designation of an area as hypersensitive suggests that there is a higher probability of an impact within these areas. A release to either a sensitive or hypersensitive area does not guarantee an impact. There are various location- and time-specific determining factors, such as distance to surface water or karst feature, flow rate in a receiving stream, saturation of soils, temperature, wind speed, and the nature of the event causing the release. Based on an overview of these factors, the probabilities of contaminating drinking water supplies as a result of a major release along the pipeline were set conservatively at the rates shown in this report.
Fifty percent potential contamination for surface water/drinking water contamination was set after reviewing modeling results of the most sensitive crossing with respect to significant drinking water contamination along the pipeline: the crossing of the Pedernales River upstream from Lake Travis. Modeling exercises conducted to date show that during mean flow conditions on the Pedernales, a worst case
spill at this location would have no significant impacts on drinking water quality. Therefore, under at least 50 percent of the flow conditions in the river, there would be no impact. The 50 percent number is also conservative with respect to the worst case crossings at Flat Creek and the Pedernales. The 50 percent estimate is also thought to be very conservative in light of other areas which are currently designated hypersensitive, but for which more recent modeling suggests that a sensitive/Tier 2 designation would be more appropriate. Surface water drinking supplies in Tier 2 areas are less vulnerable than those in Tier 3 areas. For surface water contamination in a Tier 2 area to impact a public drinking water supply, very improbable stream flow, soil, and water use conditions (such as drought stage water needs) would need to occur simultaneously. These conditions exist at a lower frequency than is represented by the 10 percent probability number assigned for Tier 2 areas. For ground water, a higher probability (relative to the surface water case) is assigned to Tier 2 sensitive and Tier 3 areas, in order to account for a number of factors. These include the uncertainty about localized ground water flows at every point along the pipeline, the potential presence of private drinking water wells which may be impacted, the distance to karst recharge features, the extent of time for which contaminants could remain in ground water at significant concentrations, and the variations in ground water flux due to aquifer level and rainfall conditions. However, a major spill in a hypersensitive area does not guarantee impacts to drinking water quality within the associated aquifer. Factors such as uptake by the soil, runoff, and volatilization from the surface can reduce much of the volume of the product which reaches the aquifer. Additional modeling assumes a case where MTBE is removed from the gasoline, and that benzene is the primary constituent of concern.
This modeling indicates that the potential for significant impacts to drinking water use when MTBE is removed is far less than one-half the potential for spills containing MTBE. In order to be conservative, the impact was set at one-half of the potential with MTBE. [MTBE refers to a gasoline additive that was being contemplated. This additive makes the gasoline more environmentally persistent and hence increases the chronic product hazard.]
Edwards aquifer contamination

The three miles of pipeline crossing hypersensitive recharge formations in the Edwards Aquifer/Balcones Fault Zone were concluded to represent worst case ground water impacts. As explained generally in LMC 33, and specifically in the Phase II BA [biological assessment], LPP will investigate and seal off any recharge features within the pipeline ROW while laying new pipe. This should reduce pathways for spilled product to impact the aquifer by percolating through surface soils to a subsurface recharge feature or flowing overland to a recharge feature. It is assumed that soils will readily absorb between 500 and 1,500 bbl of a spill; the lower level (500 bbl) is set as the minimum spill of consequence. The probability of any spill greater than 500 bbl impacting ground water is set at 75 percent, to reflect the large number of recharge features in the zone. It is assumed that any contamination of the aquifer will in turn impact drinking water supplies in Sunset Valley.
Lake Travis drinking water contamination (Pedernales Watershed)

A number of river and stream crossings in the Pedernales watershed were rated as hypersensitive for potential drinking water quality impacts to Lake Travis. Additional creeks, as well as some dry channels identified as potential overland flow paths of concern, were identified as sensitive. The total mileage of these sensitive (Tier 2) and hypersensitive (Tier 3) stretches along the pipeline was factored in as locations which could impact Lake Travis water quality, using the factors described for "Drinking Water Contamination" in this section.
Recreational water contamination

The potential for recreational waterways contamination is based on the idea that any product spill which reaches a waterway has the potential for negatively impacting recreational uses. This may be a result of short-term impacts to surface water quality which limit contact recreation, and fish kills or contamination which may limit recreational fishing. Two thresholds of spill size were used in determining whether a surface water body would potentially be affected by a spill. For portions of the pipeline where it is more likely that a spill would impact a surface water body, a threshold of 500 bbls was used. For those portions of the pipeline that were either very remote from the potentially threatened surface water body, or which were in an area of very flat topography, a threshold of 1,500 bbls was used as a minimum spill size. It should be noted that most of the streams that are crossed by the pipeline are small, and in many cases are seasonal. A product release may therefore result in a large portion of the total stream flow consisting of product contaminants, for some distance downstream from the point of release. Therefore, a probability of 100 percent for contamination was set for any 100-meter segment along the pipeline containing a river or stream crossing, as well as for each of the adjoining 100-meter segments, in order to account for the close overland pathways which could impact a stream. In addition, some probability exists that a release at additional points in the watershed may impact the surface water quality. Since overland flow modeling was performed to identify the flow pathways from points along the pipeline, the characteristics of these flow pathways were used to establish for each pathway a probability of impacting the surface water stream during a major release.
These characteristics included distance from the pipeline along the pathway to the surface water body, slope of the pathway, terrain type (urban, agricultural, forested range land) as an indicator of ground cover which could promote or retard overland flow, and soil permeability. These characteristics are used to generate a composite number for each flow pathway. Those pathways which were not within a 300-meter band across each stream crossing, but which had a score equal to or higher than the 300-meter band, were assigned a probability of impact of 90 percent. Areas of lower scores were rated incrementally with probabilities of 70 percent and 40 percent. The final two sets of pathways were scored at 10 percent and 0 percent probability. Pathways that are assigned a 0 percent probability largely represent points along the pipeline over
flat, high permeability range lands in the western portion of the pipeline.
Prime agricultural land contamination

A spill volume of 500 bbl is set as the threshold for impacts to agricultural lands. A spill this size resulting from a rupture could be expected to contaminate about 1/4 of an acre of soil. Impacts to agriculture were evaluated by reviewing soils data from U.S. Department of Agriculture databases. Prime agricultural land was identified as those farmlands having the following soil types: BaA, BaB, BeB, Bo, BuB, HeB, HoB, KrA, Nd, No, RoB, Sa, Sg, Sm, and Tr. The distance of these types of soils crossed by the pipeline was measured with the supposition that any prime farmland along the pipeline could be impacted from a pipeline accident up to a distance of 1,250 ft from the point of release. Therefore, the band of impact along the pipeline for evaluating any point was 2,500 ft. In most cases, overland spread would cause impacts of two to three acres from any individual spill event. Although localized channels, ditches, or roadways may provide a conduit for product to avoid major contamination of farmland in general, it is assumed that any release over farmland will have an impact on that farmland. Therefore, a probability of 100 percent for impacts to agriculture is associated with any release over prime farmland. For most of the pipeline, it was assumed that prime farmland was over Tier 1 areas. However, in Bastrop County, where a major portion of the pipeline is rated as sensitive for potential contamination of ground water resources, the distances of agricultural lands covered by Tier 1 and Tier 2 portions of the pipeline were tabulated separately. The average farmland crossing distance was 872 ft, and the median 94 ft.
Wetlands contamination
A spill volume of 500 bbl is set as the threshold for impacts to wetlands. Two separate types of wetlands crossings are noted along the pipeline right-of-way: palustrine and riverine. A total of 967 wetland areas were identified within the pipeline corridor, with a total of 159.7 miles of pipeline crossing or adjacent to wetlands. These figures were tabulated by comparing the pipeline right-of-way with national wetlands inventory maps. Of the wetlands types, there were 857 palustrine wetlands that could be potentially impacted, consisting mainly of small ponds within the 2,500-foot (ft) corridor. The average linear distance of the palustrine wetlands is 711 ft. The average linear distance of the 110 riverine wetlands is 2,127 ft, with a median distance of 1,339 ft. Therefore, the potential for impact to any wetland resource is represented by the distance across the wetland plus 1,250 ft to either side along the pipeline. A length of analysis for impacts to individual wetlands is set at 3,372 ft in order to encompass the average wetland crossing plus the 1,250 ft to either side that could impact the wetland during a spill. The probability of impact from a spill into or proximal to the wetland is set at 100 percent.
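The 3,372-ft analysis length can be reconstructed from the figures above, assuming (as the numbers suggest) that the "average wetland crossing" is the count-weighted average of the palustrine and riverine crossing distances:

```python
# Check of the wetlands analysis length: the overall average wetland
# crossing distance plus 1,250 ft of pipeline on either side.
PALUSTRINE_COUNT, PALUSTRINE_AVG_FT = 857, 711
RIVERINE_COUNT, RIVERINE_AVG_FT = 110, 2_127
REACH_FT = 1_250

total = PALUSTRINE_COUNT + RIVERINE_COUNT            # 967 wetlands
avg_crossing = (PALUSTRINE_COUNT * PALUSTRINE_AVG_FT
                + RIVERINE_COUNT * RIVERINE_AVG_FT) / total
print(round(avg_crossing))                 # → 872 (ft, overall average)
print(round(avg_crossing) + 2 * REACH_FT)  # → 3372 (ft analysis length)
```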
Appendix G
Examples of Common Pipeline Inspection and Survey Techniques
Inspection or test type: Pressure testing (hydrostatic pressure testing)
Purpose: Commonly used as a pre-service integrity validation of the pipe and components by pressurizing to a level above the intended operating pressure. Destructively "detects" defects so they do not subsequently fail while in service. A regulatory requirement for new pipe sections, for uprating existing pipe sections, and for conversion from vapor to liquid service. Allows the establishment of the real minimum strength of the pipeline and components, as opposed to the mill tensile test, which is based on a sample of pipe.
Attributes: Done by sectioning the line according to terrain elevations. One or more valve sections can be included within a test segment. Requirements and procedures are defined in regulations and industry standards. Hydrotesting service is usually provided by a specialty contractor overseen by operating company staff.

Inspection or test type: Cathodic protection (CP) inspections and surveys
Purpose: Determines the adequacy of cathodic protection voltages and currents for protecting the pipeline against corrosion and to detect areas of potentially defective coating.
Attributes: Rectifier inspections are done to ensure that the rectifiers are in service and providing the required impressed current for cathodic protection. Station tests or surveys are done to measure CP voltages at test station locations. This also includes readings taken at pipe casings under roads and railway crossings. Various forms of close interval surveys (CIS) are taken at intervals of 2 to 10 ft along a pipeline to provide a profile along the line at greater resolution than can be obtained with a station survey.

Inspection or test type: In-line inspection (ILI)
Purpose: Detects areas of anomalies, such as metal loss, deformations, cracks, dents, gouges, and laminations.
Attributes: Automated internal inspection tools or "smart pigs" vary in the anomaly types that can be detected and in degree of resolution. Services are provided by a specialty pigging contractor. Common technologies are ultrasonic and magnetic flux leakage (see Chapter 5). Results often require expertise in interpreting data. Either part or all of a pipeline is inspected, depending on the location of pig launching and receiving equipment and the size and geometry of the pipeline system.
Inspection or test type: Manual ultrasonic wall thickness measurement
Purpose: Determines wall thickness and identifies areas of metal loss by direct measurement of pipe wall thickness.
Attributes: Manually held instrument used in conjunction with exposed pipe inspections. Requires coating removal.

Inspection or test type: Leak surveys
Purpose: Finds active leaks over a hydrocarbon pipeline.
Attributes: Usually includes handheld instrumentation to detect hydrocarbon vapors.

Inspection or test type: Water crossing surveys
Purpose: Determines elevation profile and depth of cover beneath waterway.
Attributes: May require divers to probe or use instrumentation to locate depth of pipe below stream or lake bottom.

Inspection or test type: Depth of cover surveys
Purpose: Determines actual amount of cover over pipeline.
Attributes: Depth is measured by instrument or by physical "probing" of the pipeline.

Inspection or test type: Visual surveys
Purpose: Identifies any adverse conditions associated with coating or pipe, such as corrosion, dents, scrapes, gouges, or deteriorating or damaged coating.
Attributes: Done in conjunction with finding exposed pipe or exposing pipe for inspection by digging at various pipe locations. The bare pipe can only be examined when the coating is removed.

Inspection or test type: Acoustic monitoring
Purpose: Finds discontinuities in metallic pathways; used in reinforced concrete pipe.
Attributes: Impressed sound waves are analyzed for discontinuities or active failure.

Inspection or test type: Subsurface coating condition surveys
Purpose: Evaluates coating condition of buried steel pipeline.
Attributes: Impresses a signal onto a pipeline and measures attenuation of the signal to determine signal leakage through the coating.

Inspection or test type: Nondestructive testing
Purpose: Uses techniques such as ultrasonic, magnetic particle, dye penetrant, etc., to find pipe wall flaws that are hard to detect with the naked eye. Can also include testing of coating properties such as thickness and number of holidays.
Attributes: Usually done in conjunction with visual inspection; can be done on coating or on pipe wall.

Inspection or test type: Ground patrols and aerial surveys
Purpose: Identifies external conditions that might adversely affect the pipeline, such as third-party activity and ROW encroachments. Also used as a means of detecting leaks.
Attributes: These apply more to the effects of external factors on the pipeline and the detection of leaks than to factors associated with the condition of the pipe itself. They complement visual surveys.
Glossary
This glossary defines terms as they are used in this text. In some cases, the definitions may differ slightly from strict dictionary definitions.
Acute hazard. A potential threat whose consequences occur immediately after initiation of an event. Examples include fire, explosion, and contact toxicity.
Anode. A component of a corrosion cell, the anode is the metal that gives up ions and loses mass during the corrosion process. It is designed to sacrificially deteriorate in order to prevent the protected structure (the cathode) from corroding.
Anode bed. Generally comprised of many individual anodes: bars or packets of materials that are naturally anodic compared to steel. These individual anodes are electrically connected and installed together underground in a "bed."
Anomaly. An indication of a potential pipe wall flaw or defect discovered during an in-line inspection. It is often not known if an anomaly is really a flaw until it is investigated.
Automatic valve (also called automatic block valve). A mechanical device that prevents flow in a pipeline and is designed to operate when it receives a predetermined signal. The signal is transmitted without human action. See also Remotely operated valve.
Backfill. The soil that is placed over a pipe as one of the final steps in pipeline installation. Sand is often used as a backfill material because of the uniform support it provides and because it does not damage the pipe coating during installation.
Block valves. Block valves are designed primarily to function in either the fully closed or fully opened position. They are used to stop flow completely, rather than throttling or controlling the flow rate. Block valves can be closed automatically, remotely, or manually depending on the closure mechanism. Block valve designs include gate-, plug-, and ball-type configurations.
Casing. A pipe completely surrounding the pipeline to provide protection and act as a conduit for potential product leaks. These have historically been used primarily where a pipeline crosses under a road or railroad.
Cathode. A component of a corrosion cell, the cathode is the metal that attracts ions and gains mass through the corrosion process.
Cathodic protection (CP). Method of protection against galvanic corrosion of a buried or submerged pipeline. In the CP method, a low-voltage charge is impressed on a metal in order to protect it from corrosion. Essentially, the pipeline is turned into a cathode by application of protective currents, which prevents the loss of metal. The CP system is generally comprised of anodes, rectifiers (when needed), electrical connections, and monitoring points.
Check valve. A special type of valve or flow-restricting device designed to allow flow in one direction only and prevent flow in the other.
Chronic hazard. A potential threat that can continue to cause harm long after the initial event. Examples include carcinogenicity, groundwater contamination, and long-term health effects.
Cleaning pigs. A device transported through the pipeline to remove scale and sediment.
Close interval survey. In this technique, pipe-to-soil voltage measurements are performed every 2 to 15 feet along an entire pipeline to test if CP levels are adequate. Ideally, such a profile of pipe-to-soil potential readings will indicate areas of interference with other pipelines or casings, areas of inadequate CP, and even areas where pipe coating is defective.
Coating. A material that is placed around and adheres to a pipeline component to protect that component from contact with a potentially harmful substance.
Control valves. These valves are designed to operate in the full range of positions from closed to fully open. The function of control valves is to control fluid flow rates by operating in partially open positions.
Corrosion. The wearing away of a material, usually by a chemical reaction.
DOT. Department of Transportation. The regulatory agency of the U.S. government that is charged with regulating aspects of pipeline design, construction, and operation. The Office of Pipeline Safety (OPS) is the department within DOT charged with ensuring pipeline safety.
Draindown. Quantity of product that will flow by gravity (drain) to the leak site after a pipeline rupture, based on topography, pipeline diameter, pressure, valve location, and response time.
EPA. Environmental Protection Agency. The regulatory agency of the U.S. government that is charged with regulating activities that may be harmful to the environment.
ERW. Electric resistance welding is a manufacturing process for pipe. Although modern ERW pipe is considered to be high quality, a low-frequency ERW process common before about 1970 produced a longitudinal weld seam that is more susceptible to certain failure mechanisms.
Failure. The point at which a structure is no longer capable of serving its intended purpose. Although a pipeline that is actually leaking product is the most obvious indication of failure, failure is often also defined as the point at which the material is stressed beyond its elastic or yield point, so that it does not return to its original shape.
Fatigue. The process of repeated application and removal of stress. Because fatigue can cause a failure to occur at a relatively low stress level, materials that must resist such cycles of stress must be specially designed for this service.
Flaw. A defect in the pipe wall that could be a threat to pipeline integrity. Examples include cracks, gouges, and metal loss.
Fracture toughness. The ability of a material to resist cracking. Materials that are more ductile can absorb larger amounts of energy before cracks spread. Lead has high fracture toughness; glass has low fracture toughness.
Girth weld. Welds of the circumferential seams where the ends of two sections of pipe are joined.
HAZ. Heat-affected zone. The area of metal around a weld that has been metallurgically altered by the heat of the welding process.
This area is often more susceptible to cracking than the parent metal.
Hazard. A potential event that can lead to a loss of life, property, income, etc.
Hydrostatic pressure test (hydrotest). An integrity verification test involving the pressurization of the pipeline system with water to a level higher than its intended operating pressure in order to prove the system's strength. The test pressure is held for several hours and carefully monitored to ensure that even very minor leaks are detected.
Index. One of four general categories to which pipeline accidents can be attributed. Aspects of pipeline design, operation, and environment are scored to arrive at numerical values for the third-party index, corrosion index, design index, and incorrect operations index.
Index sum. A summary number from the risk model (Chapters 3 through 6) that represents an assessment of all variables that affect spill probability. Index sums vary between a theoretical low of zero (extremely high probability of failure) and a theoretical high of 400 (virtually no chance of failure).
In-line inspection (ILI). The use of an electronically instrumented device, traveling inside the pipeline, to measure characteristics of a pipe wall, especially the detection of anomalies such as metal loss due to corrosion, dents, gouges, and cracks. Several ILI tool technologies are available, each with relative strengths in terms of types of anomalies detected, ability to characterize the anomaly, and accuracy.
Internal corrosion. Any form of corrosion that occurs on the inside wall of the pipe or internal surfaces of any pipeline component.
Landslides. The moderately rapid to rapid (on the order of 1 foot per year or greater) downslope movement of earth by means of gravitational body stresses.
Leak. Loss of containment from a pipeline component; the unintentional release of product from the pipeline. Although the terms leak and spill are used interchangeably in this text, a distinction could be that a leak is any amount of product escaping the pipeline, whereas a spill refers to the results of a leak, such as the final leaked volume and accumulation point.
Leak impact factor. A number that represents the overall consequence of a pipeline failure in the risk assessment methodology presented in this book. This factor is a score based on the product hazard and the dispersion factor. The leak impact factor is divided into the sum of the four index values to arrive at the relative risk score.
MAOP. Maximum allowable operating pressure; also called MAWP for maximum allowable working pressure. The highest internal pressure to which the pipeline may be subjected based on engineering calculations, proven material properties, and governing regulations.
Palustrine. The palustrine system was developed to group the vegetated wetlands traditionally called by such names as marsh, swamp, bog, fen, and prairie, which are found throughout the United States. It also includes the small, shallow, permanent, or intermittent water bodies often called ponds. Palustrine wetlands may be situated shoreward of lakes, river channels, or estuaries; on river flood plains; in isolated catchments; or on slopes. They may also occur as islands in lakes or rivers.
Peak ground acceleration. The force related to the ground acceleration during a seismic event, expressed as a percent of one gravity. The peak acceleration is the maximum acceleration experienced by a particle during the course of an earthquake motion and is used as a measure of seismic damage potential.
Pig. A device designed to move through a pipeline for purposes of cleaning, product separation, or information gathering. A pig is usually propelled by gas or liquid pressure behind the pig. The term pig is said to have originated from the sound the device makes as it moves through the pipeline.
Pressure relief valve. Also called a pop valve or a safety valve, this class of mechanical safety device is designed to operate at a predetermined pressure to reduce the internal
pressure of a vessel. The valve is often designed to close again when the vessel pressure is again below the set point.
Product hazard. A numerical score that reflects the relative danger of the material being transported through the pipeline. This relative ranking of the product characteristics considers acute and chronic hazards such as flammability, toxicity, and carcinogenicity.
psi, psig, psia. Pounds per square inch, pounds per square inch gauge, or pounds per square inch absolute. This is the normal unit of pressure measurement in the United States. The gauge pressure, psig, is the reading seen on a pressure gauge calibrated to read zero under atmospheric pressure; psig therefore excludes atmospheric pressure from the reading seen on the gauge. Zero psig is equal to about 14.7 psia, depending on the exact atmospheric pressure of the area.
Public education. The program sponsored by pipeline companies to teach the general public about the pipeline industry. The emphasis is usually on how to avoid and report threats to the pipeline and what precautions to take should a leak be observed.
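The psig-to-psia relationship described in the psi entry can be illustrated numerically. The 14.7-psi figure assumes a standard atmosphere; actual atmospheric pressure varies with location and weather.

```python
# Convert gauge pressure (psig) to absolute pressure (psia),
# assuming a standard atmosphere of 14.7 psi.
ATM_PSI = 14.7

def psig_to_psia(psig):
    """Absolute pressure is gauge pressure plus atmospheric pressure."""
    return psig + ATM_PSI

print(psig_to_psia(0))    # → 14.7 (zero gauge = one atmosphere absolute)
print(psig_to_psia(100))
```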
Rectifier. A device that converts AC electricity into DC electricity and delivers the current onto the pipeline for purposes of cathodic protection.
Relative risk value. Also relative risk rating or score. This number represents the relative risk of a section of pipeline in the environment and operating climate considered during the evaluation.
Release quantity. The quantity of spilled material that will trigger an EPA investigation. Possible categories are 1-, 10-, 100-, 1,000-, and 5,000-pound spills. More hazardous substances trigger investigations at lower release amounts. For this risk assessment model, release quantities have been assigned to substances not normally regulated by the EPA.
Remotely operated valve. A mechanical device that prevents flow in a pipeline and is designed to operate on receipt of a signal transmitted from another location.
Risk. The probability and consequences of a damaging event.
Riverine. The riverine system includes all wetlands and deepwater habitats contained in natural or artificial channels periodically or continuously containing flowing water, or which form a connecting link between two bodies of standing water. Upland islands or palustrine wetlands may occur in the channel, but they are not part of the riverine system.
ROW. Right of way. The land above the buried pipeline (or below the aboveground pipeline) that is under the control of the pipeline owner. This is usually a strip of land several yards wide that has been leased or purchased by the pipeline company.
Safety device. A pneumatic, mechanical, or electrical device that is designed to prevent a hazard from occurring or to reduce the consequences of the hazard. Examples include pressure relief valves, pressure switches, automatic valves, and automatic pump shutdown devices.
SCADA. Supervisory control and data acquisition. A SCADA system allows conditions along the pipeline to be monitored and certain types of equipment to be controlled from a central location. The system gathers information such as pressures and flows from remote field locations and regularly transmits it to a central facility where the data can be monitored and analyzed. Through this same system, the central facility can often issue commands to the remote sites for actions such as opening and closing valves and starting and stopping pumps.
SCC. Stress corrosion cracking. This is a potential failure mechanism that is a combination of mechanical loadings (stress) and corrosion. It is often an initiating or contributing factor in fatigue failures.
Seam weld. Generally refers to the welds of longitudinal seams of the pipe produced during certain pipe manufacturing processes such as ERW.
Secondary containment. Any system designed to catch and retain escaping product if a portion of the pipeline system fails. Levees and impermeable liners around tanks serve as secondary containment.
Seismic event. A sudden motion or trembling of the earth caused by the abrupt release of slowly accumulated strain in the earth's crust related to faulting or volcanism; also known as an earthquake.
SMYS. Specified minimum yield strength. The amount of stress a material can withstand before permanent deformation (yielding) occurs. This value is obtained from the manufacturer of the material.
Stress. The internal forces acting on the smallest unit of a material, normally expressed in psi (in the United States). When an external loading such as a heavy weight is placed on a material, a level of stress is created in the material as it resists deformation from the load.
Surge pressure. Also referred to as waterhammer, this is a phenomenon in pipeline operations characterized by a sudden increase in internal pressure. This surge is often caused by the transformation of kinetic energy to potential energy as a stream of fluid is suddenly stopped.
Third party. Any individual or group not employed by the pipeline owner or contracting with the pipeline owner. Third-party damages occur when an individual not associated with the pipeline in any way accidentally strikes the pipeline while performing some unrelated activity.
Wall thickness. The dimension measured between a point on the inside surface of the pipe and the closest point on the outside surface of the pipe. This is the thickness of the pipe material.
Yield point. In general, this is the point, defined in terms of an amount of stress, at which inelastic deformation takes place. Up to this point, the material will return to its original shape when the stress is removed; past this point, the stress has permanently deformed the material.
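Pulling together the Index, Index sum, Leak impact factor, and Relative risk value entries above, the overall calculation can be sketched as follows. The numeric inputs are illustrative only; the glossary gives the structure (four indexes summed, then the leak impact factor divided into that sum), not these values.

```python
# Sketch of the relative risk calculation implied by the glossary:
# the four index scores (third-party, corrosion, design, incorrect
# operations) are summed, and the leak impact factor is divided into
# that sum to yield the relative risk score (higher = safer).

def relative_risk(third_party, corrosion, design, incorrect_ops,
                  leak_impact_factor):
    index_sum = third_party + corrosion + design + incorrect_ops
    assert 0 <= index_sum <= 400, "index sum ranges from 0 to 400"
    return index_sum / leak_impact_factor

print(relative_risk(70, 65, 80, 60, 5.5))  # → 50.0
```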
W., Evaluating the Measurement Process, 2nd ed., Knoxville, TN: SPC Press, 1989. 93. Williams, P. J., Pipelines and Permafrost: Ph.vsica1 Geography and Development in the Circumpolar North, Reading, MA: Longman, 1979. 94. Wright, T., Colonial Pipeline Company, Atlanta. GA, personal communications. 95. Zimmerman, T., Chen, Q., and Pandey, M, “Target Reliability Levels for Pipeline Limit States Design,” presented at ASME International Pipeline Conference, 1996.
Index

Aboveground facilities 44, 50-1, 227, 246
Abrasion, coating 89
Absolute risk estimates 15, 293-330
AC interference 83
Acceptable risk 334
Activity level 28, 44, 48-50, 227, 245
Acute hazards 136, 381
Adhesion 89
Administrative processes 348-52
Age 233
  facility 283
  inspection (see also Information degradation) 31
  pipeline 26, 30-1
  system 30-1
  of verification 105
ALARP (as low as reasonably practical) 337-40
Algorithms (see Risk)
Anchoring 48, 245
Animal attack 48-9
Anomaly 99, 107, 366, 381
Antifreeze 271, 280
Area of operation 163
As low as reasonably practical (see ALARP)
Aseismic faulting 113
Assessment (see Risk)
Atmospheric corrosion (see Corrosion)
Atmospheric stability classes 150, 309-10
Attack potential (see also Sabotage) 201-2
Attributes 29-30
Automatic valve 162
Avalanche failure mode 143
B31G 366
Backfill 124-5, 381
Bacteria 12, 77
Barlow's formula 94, 97, 364
Barriers 51, 204
Bathtub curve 6
Beliefs 12
Bias 31-2
Bimodal distribution 191
Biodegradation 140-2
Blast effects (see Overpressure effects)
BLEVE (boiling liquid expanding vapor explosion; see also Vapor cloud) 135, 272, 307
Blockades 163
Blockages 214-5
Boiling liquid expanding vapor explosion (see BLEVE)
Boiling point 153, 155, 358-9
Bounding curve 299
Brittle fracture (see Fatigue) 143
Buckling 96, 250, 364
Buoyancy 250, 364
Burn radius (see also Fire, Thermal radiation)
  default 311
Business interruption (see Service)
CAA (Clean Air Act) 138
Caliper pig (see Pigging)
Carbon dioxide (see CO2)
Carbon steel (see Steel)
Carcinogenicity 145
Casings 65-7, 70, 84-5, 96, 205, 381
Cast iron 226, 234
Cathode (see CP)
Cathodic protection (see CP)
CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) 138
Charpy V-notch tests (see Tests) 143
Check valve 160, 163, 381
Checklist, model design 11
Chronic hazards 136-8, 381
CIS (close interval survey; see also Pipe-to-soil potential, DCVG) 81, 128, 231, 381
Class location (see Third party)
Clean Air Act (see CAA)
Clean Water Act (see CWA)
Close interval survey (see CIS)
Cloud dispersion/size of 149-59, 309, 328
  vapor (see also Vapor) 135
CO2 72, 211-2
Coatings 36, 85-90, 125, 381
  application of 87
  for atmosphere 68-70
  concrete 234
  conditions 26, 230
  defects (see also Cracks, Fatigue) 89, 366
  inspection (see also Inspection) 89
  internal 73
  offshore 249
Combustible 137, 155, 359
Commissioning 38
Communications
  of risk 352-5
  SCADA (see SCADA)
Community partnering 201-3
Comparing pipelines and stations 287-8
Composite pipelines 234
Comprehensive Environmental Response, Compensation and Liability Act (see CERCLA)
Compressor, sabotage (see Sabotage)
Computer 37-8, 183-6
  environments 183-94
  permissive 280
  programs 38
  software 186
  use in risk program 37-8, 185
CONCAWE 303
Concrete pipe 234
Concrete slab, failure probability 298
Conductivity (see Terrain)
Confidence limits 304-5
Consequences 6
Construction 38, 124-5
  distribution systems 234-7
  facilities 269
  issues 99
  offshore 253-4
Consultants, obtaining services from (see Proposals)
Containment 164
Contamination 135, 153, 155, 211, 272, 321, 376
Continuous improvement (see also Quality) 18, 342
Control documents (see Documents)
Correlation 192
Corrosion 40, 44, 61-90, 99, 109, 267-8, 381
  atmospheric 62-71, 229, 248, 267, 284
  buried metal 284
  crevice (see also ERW pipe) 98
  distribution systems 228-33
  facilities 267-8, 283-6
  galvanic 75
  hydrogen stress corrosion cracking (see HSCC)
  internal 71-4, 229-30, 248, 267, 283, 382
  offshore 247-9
  product 72
  rate 64, 233
  selective seam 98
  subsurface 74-90, 230, 248, 267
Costs
  direct 220-1
  indirect 221-2
  risk management 344-8
Countable events 181
Counterfeit materials 124
Cover (see Depth of cover)
CP (cathodic protection) 75, 230-1, 249, 284, 379, 381
  corrosion threat 78-85
  surveys 78-82
CR (cumulative risk) 333-4
Crack arrestors 144-5
Cracks (see also HIC, HSCC, SCC, Fatigue, Fractures) 102-4, 110, 144-5, 234, 366
Critical instruments 131
Cumulative risk (see CR)
Current 168, 250-3
Customers 18, 210, 217
CWA (Clean Water Act) 138
Damage (see also Third-party damage) 317
  prevention 246
  states 306, 314-6, 319-27
Data collection 10-11, 179
Data management and analyses 177-96, 226
DC interference 84
DCVG (direct current voltage gradient; see CIS, Pipe-to-soil potential)
Deductive reasoning 2-3
Defect (see also Anomaly, Crack, Fracture) 366
Degradation
  inspection (see Information)
  information (see Information)
Delivery parameters deviation (see DPD)
Dent 366
Department of Transportation (see DOT)
Depth of cover 178
  distribution 227
  failure probability 298
  offshore 245
  survey 128, 380
  third party 46-8
Design 38, 40, 44
  distribution systems 234-7
  facilities 268-9
  human errors 119
  index 91-115, 118
  offshore 249-52, 253
  pressure 94
Detection opportunity 56
Deterioration (see Corrosion)
Detonation (see Vapor cloud)
Direct current voltage gradient (see DCVG)
Direct evidence 34, 90, 105-10
Dispersion 148-9, 156, 190
Distribution systems 223-42
Documents 29, 129, 132, 281
  computer 188-9
  control 349, 352
Dosage (see Toxicity)
Dose-response assessment 140-2
DOT (Department of Transportation) 165, 259, 328, 382
DPD (delivery parameters deviation) 214-6
Drain volumes 142, 147-8, 382
Drinking water contamination 321
Drug test 128, 238
Ductile iron 234
Ductility 143
Dynamic segmentation 26, 181-2
EA (environmental assessment) 170
Earthquake (see Seismic)
Education (see Public)
Edwards Aquifer 324, 376
EGIG (European Gas Pipeline Incident Group) 298, 303
Electric resistance welding pipe (see ERW)
Electrolyte (see Soil corrosivity)
Electromagnetic surveys 53
Emergency drill 218
Emergency response 162, 255
Employee stresses (see Stressors)
Entropy 1
Environmental
  assessment (see EA)
  hazards (see Hazard)
  module 166-9
  not involving pipe spills 167
  persistence (see Biodegradation)
  sensitive areas 167-8
  shoreline 168-9
Equivalent surface area 265
Erosion (see also Land movement, Soil) 113, 254
ERW pipe (electric resistance welding pipe) 98, 366, 382
ESI (Environmental Sensitivity Index) 168
Estuaries 168
Evacuation 163
Event tree 14, 25
Events (see Risk variables)
Evidence
  direct 34, 90, 105-10
  unquantifiable 16
Expert judgment 8
Explosion (see Overpressure)
Explosion limit (see also LFL) 149
Exposure 45, 66
Exposure pathways (see Toxicity)
External loadings 94, 96, 97, 266, 364
Facilities (see also Aboveground) 100, 257-92
Failure 4, 301, 314, 382
Failure modes (see also FMEA) 99
Failure modes and effects analysis (see FMEA)
Failure probability (see also Failure rate) 299-302, 314, 320
Failure rate 5-7, 294-8, 312-6, 319-20
Failure investigation (see Inspection)
Fatalities (see also Value of human life) 295-6, 305-6, 321, 355, 375
Fatigue 102-4, 143, 234, 236, 250, 268, 382
Fault tree analysis 14, 25
FBE (fusion bonded epoxy) 78
Fences (see Barriers)
Fire/ignition scenarios (see also Thermal radiation) 135, 149, 272, 309
  probability 302-4
Fixed length segmentation (see also Sectioning) 26, 181
Flammability limits (see also Ignition) 359
Flange 101
Flash point 137
Flashing fluids (see also HVL) 361-2
Flexible pipe 235
Flow path modeling 148
Fluid modulus 367
FMEA (failure modes and effects analysis) 14
Fracture mechanics 143-6, 366
Fracture toughness 102-4, 143-6, 382
Fractures (see also Cracks) 102-4, 110, 143
Frequency 5, 32, 319
Frost heave 112
Fusion bonded epoxy (see FBE)
Fuzzy logic 3
Galvanic corrosion (see Corrosion)
Gas release (see Release)
Gas Research Institute (see GRI)
Gas spill (see Spill)
Geographic information system (see GIS)
GIS (geographic information system) 179
Global positioning system (see GPS)
GPS (global positioning system) 53, 179
Gravity flow pipe (see Concrete pipe)
GRI (Gas Research Institute) 45, 306, 369
Ground/air interface 66
Ground-penetrating radar 53
Groundwater (see Contamination)
Handling, during installation 125
HAZ (heat affected zone) 103-4, 382
Hazard 110-5, 136-42, 281, 382
  definition 3
  identification 119
  natural 110-112, 250-3
  zone 172-4, 306-16
Hazard and operability study (see HAZOP)
Hazard ranking system (see HRS)
HAZOP (hazard and operability study) 14, 25
HCA (high consequence area) 166
HCL (high-low-close) chart 192
Heat affected zone (see HAZ)
Heat of combustion 155, 272, 359
Heat flux (see Thermal radiation)
HIC (hydrogen induced cracking) 78, 103
High consequence area (see HCA)
High population area (see HPA)
High value area (see HVA)
High-low-close (see HCL)
Highly volatile liquid (see HVL)
Histogram (see also Frequency) 190
Historic areas 170
Hole size (see also Materials, Fracture mechanics, Charpy test, Spill size, Rupture, Cracks, Stress) 142-6, 303, 314-5
Holiday (see Coating defect)
Holiday detection (see Coatings, Inspection)
Housekeeping and human error 128
HPA (high population area) 166
HRS (hazard ranking system) 152
HSCC (hydrogen stress corrosion cracking) 77-8
Heuristics 31-2
Human error (see also Procedures for prevention, Incorrect operations) 117-8, 197-200, 265, 278, 280-2
Human life, value of (see Value of human life, Fatalities)
Humidity
HVA (high value area) 168
HVL (highly volatile liquid) 147-8, 259, 272, 311, 340
Hydrogen embrittlement (see HIC, HSCC)
Hydrogen stress corrosion cracking (see HSCC)
Hydrostatic pressure test (see Test)
IA (intervention adjustment) 216-9
Ice, scour from 254
Ignition (see also Fire) 302-4
ILI (in-line inspection; see also Inspections) 34-5, 90, 379, 382
Impact resistance (see Coating)
Impressed current (see CP)
Incorrect operations (see also Human error) 40, 44, 102, 117-32
  distribution systems 237
  facilities 268-71
  index 117-132, 205, 237
  offshore systems 253-5
  sabotage 205
Index sum 40, 44, 240, 299-300, 382
Induced current (AC) 83
Inductive reasoning 2-3
Information degradation 31, 64
Inhibitor (see Internal corrosion)
Injuries (see Fatalities)
In-line inspection (see ILI; see also Inspections)
Inspection (see also Survey) 124
  age 31, 105
  construction degradation (see Information degradation)
  internal (see also Pigging) 107
  sabotage potential (see Sabotage)
  techniques 379
  visual 36, 380
Integrity
  assessments 29, 109-10
  verification 100, 105-10, 236-7, 250, 268
Insulation 66
Intelligence gathering (see Data)
Intelligent pigging (see Pigging)
Interference currents 82-5
Internal corrosion (see Corrosion)
Internal inspection (see Inspection)
Internal inspection tool (see Inspection)
Internal pressure 94
Intervention adjustment (see IA)
Interview data 31
IR drop (see also Corrosion) 79-80
Jet fire (see also Fire/ignition scenarios) 149, 308
J-lay offshore pipe installation technique
JNA (job needs analysis) 270
Joining of materials 124
Joints (joining) 124, 234-5, 382, 383
JSA (job safety analysis) 270
JTA (job training analysis) 270
Key-lock sequence programs 131
Lacustrine regions 168
Lake Travis 327, 377
Laminations 98
Land movements 110-5, 237, 252, 268, 282, 382
Land use issues (see also Set-back distances) 344
Landslides 93, 110-5
Leak detection 159-62, 315
  capabilities of 159-63
  by odorization 241
  staffing levels 272-5
  at stations 272-5
  techniques 159-63
Leak history 35
Leak impact factor (see LIF)
Leak rate 361-2
Leak volume 142
Level of proof 118
LFL (lower flammability limit) 149
LIF (leak impact factor) 40, 44, 99, 104, 133-75, 191-4, 382
  distribution systems 240-1
  environmental module
  facilities 265, 271-2
  formula 133
  offshore 255
  sabotage (see also Sabotage) 206-7
Line locating 51-3
Line marking 52
Liquid release (see Release, Spill)
Load 92, 254
Locating (see Line, Pipeline)
Lock-out devices 131
Logic
  deductive (see Deductive reasoning)
  inductive (see Inductive reasoning)
  ladders 131
Loss limiting actions 164
Lower flammability limit (see LFL)
Magnetic flux (see ILI)
Magnetic methods 53
Maintenance 239-40, 255
  facilities 271
  human error 132
  prioritization 343
  reports 29
  schedule 132
Management of change (see MOC)
Manufacturing, pipe 98, 179
MAOP (maximum allowable operating pressure) 94-110, 362, 382
Maps and records 129
Marking (see Line marking)
Materials 38
  selection 123-4
  stress (see Stress)
  strength 97
  toughness 143
Matrix 25
Maximum allowable operating pressure (see MAOP)
Maximum operating pressure (see MOP)
Maximum permissible pressures (see also MAOP) 94
Mean 189-90
Measurements 190
Mechanical effects 135, 231-2
Mechanical error preventers 131-2, 239, 271
Median 189-90
Metallurgy (see also Toughness, Fracture mechanics) 143
Meter stations 259
MIC (microbially induced corrosion) 77
Microbially induced corrosion (see MIC)
Microorganisms (see MIC)
Mill certifications (see also Pipe strength) 99
Minimum test pressure (see MTP)
Mitigations 41, 57-60, 114-5, 154, 202, 254, 298, 342-4
MOC (management of change) 269-70
Model
  calculating variables 34
  choices 25
  design checklist 11
  examples 369-73
  facility 275-86
  indexing 24-5
  matrix 23
  modeling 3, 9-10
  performance test 17-8, 194
  probabilistic 23-4
  qualitative 16
  quantitative 16
  release 146-8
  risk 11, 14, 22-33, 39, 225, 264, 333, 350
  scope and resolution 30
Molecular weight 151, 155, 157, 158
Monte Carlo simulation (see Sensitivity analysis)
MOP (maximum operating pressure) 94, 119-20, 363
MTP (minimum test pressure) 279
National Fire Protection Association (see NFPA)
National Oceanic and Atmospheric Administration (see NOAA)
Natural hazards (see also Hazards) 110-112, 250-3
NDE (nondestructive evaluation; see also NDT) 285
NDT (nondestructive testing; see also NDE) 89, 380
Negligible risk (see Acceptable risk)
Network (see Computer)
NFPA (National Fire Protection Association) 136, 216
NOAA (National Oceanic and Atmospheric Administration) 168
Nondestructive evaluation (see NDE)
Nondestructive testing (see NDT)
NRA (numerical risk assessment) 23, 294
Numerical risk assessment (see NRA)
Objective risk assessment 16
Odorizations 241
Offshore (see also Platform) 243-56
One-call systems 44, 51-3, 227
  hit rate 45
  reports 28
One-in-a-million chance 341
OPA (other populated area) 166
Operators (see also Training) 22, 29
Operational reliability assessment (see ORA)
Operations (see also Incorrect) 125, 216
  data 29
  distribution 238
  facilities 269
  offshore 254
ORA (operational reliability assessment) 320
Organic 37
Oscillations (see Vortex, Wave)
Other populated area (see OPA)
Outage period 220
Outside force (see Third-party damage)
Overpressure (blast) effects 150, 173, 306, 311
Painting (see Coatings, Atmospheric)
Palustrine regions 168, 382
Particle trace analysis 148
Patrol 30, 54-7, 247, 380
PE (polyethylene pipe) 234
Performance tests, model (see Model)
Permeability 26, 157
pH, soil 77, 78
Photolysis (see Biodegradation)
Pigging (see also Inspection) 74, 107, 128, 382
PIM (pipeline integrity management) 259
Pinhole leak (see Hole size)
Pipe strength 92-101, 143-6, 363-6
Pipeline
  construction 93-4, 99
  depth 298
  dynamics 213
  installation (see J-lay, S-lay)
  integrity management (see PIM)
  locating (see Line locating)
  operators (see Operators)
  products 357-9
  properties (characteristics) 112
  seam (see also ERW) 98
  strength (see also Materials) 363-8
  wall flaws 110
Pipe-to-soil potential (see also CIS, DCVG) 81-2
Plastic 234-6
Platform (see Offshore)
Point event 179-80
Polarization (see Surveys)
Political instability (see Sabotage)
Polyethylene pipe (see PE)
Polyvinyl chloride pipe (see PVC)
Pool size, liquid spill 151-3
Population density 26, 28, 30, 128, 165-6, 263, 305
Potential (see Surge)
Potential damage (see also Sabotage, Third-party damage) 99
Potential threats 56
Potential upset (see Upset)
PPA (pressure point analysis; see Leak detection)
PPM (predictive preventive maintenance) 132, 280
PRA (probabilistic risk assessment) 23-5, 294
Predictive preventive maintenance (see PPM)
Presentation graphics 188
Pressure, maximum (see also MOP) 94
Pressure point analysis (see PPA)
Pressure switch (see Safety)
Pressure test (see Test)
Pressure vessel 101
Preventions 29, 73
Prioritization, of mitigation 343
Probabilistic risk assessment (see PRA)
Probability (see also Failure probability) 4-5, 104
  of exceedance 113
Procedures
  for human error prevention (see also Human error) 132, 281
  for internal corrosion (see Corrosion)
  maintenance (see Maintenance)
  for risk program administration 352
  for surge (see Surge)
Process safety management (see PSM)
Product
  characteristics (see Product hazard)
  contamination (see Contamination)
  corrosivity (see Corrosion)
  hazard 136-42, 272, 383
  specifications deviation (see PSD)
Programs (see Computers)
Property, high value (see HVA, Land use, Set-back distances)
PSD (product specification deviation) 211-4
PSM (process safety management) 259
Public education 44, 53, 228, 246, 383
Pumps, sabotage (see Sabotage)
PVC (polyvinyl chloride pipe)
QRA (quantitative risk assessment) 23, 294, 329-30, 337
Quality 18, 32, 182, 346
  assurance (QA) and control (QC) 182-3
  data 181-2
Qualitative model (see Model)
Quantitative risk assessment (see QRA)
Quantitative model (see Model)
Radar (see Ground-penetrating)
Radiant heat (see Thermal radiation)
Radiation, thermal (see Thermal)
Radio frequency detection (see RF detection)
Range (see Dispersion)
Rangeability (see Dispersion)
Rate (see Corrosion)
RCRA (Resource Conservation and Recovery Act) 138
Reaction times 163
Reactivity 137-8
Receptor (see also LIF) 165, 170-4, 241-2, 255, 305-6, 375-7
Rectifier 80, 82, 383
Rehabilitation 236
Release (see also Leak, Spill) 147, 383
Reliability 19
Relief valves (see Safety system)
Remote terminal units (see RTU)
Remote valve 160, 163
Remotely operated vehicle (see ROV)
Reportable quantity (see RQ)
Request for proposal (RFP; see Proposals)
Resistivity (see Soil)
Resource allocation 343
Resource Conservation and Recovery Act (see RCRA)
Revenues from pipeline 220
RF detection (radio frequency) 52-3
RFP (request for proposal; see Proposals)
Rigid pipe 234
Risk
  absolute 15
  acceptable 334
  algorithms 278, 369-73
  assessment methods 7-8, 10, 22, 36
  communications 352
  comparisons 355
  criteria 336-42
  cumulative 333
  decision points
  definition 4
  of environmental damage (see Environmental)
  factors 29
  individual 335
  management 7, 10, 13, 18, 22, 178, 225, 331-55
  management program (see RMP)
  model (see Model)
  other 355
  process 9-10
  program administration 348-52
  relative 15, 44
  roll-ups (see CR)
  of sabotage (see Sabotage)
  scope 30
  scoring (see Scoring)
  societal 335
  variables 22-4, 45, 94-110, 178, 263
Riverine regions 168, 383
River crossing survey 47
RMP (risk management program) 348
Root cause analysis 36
ROV (remotely operated vehicle) 251
ROW (right of way) 29, 38, 383
  condition 44, 54, 228
  distribution systems 228
  facilities 257-9
  offshore 247
RQ (reportable quantity; see also Chronic hazards) 138
RSTRENG 366
RTU (remote terminal unit) 126
Rupture (see Hole size)
Sabotage (see also Incorrect operations) 200-6
  attack potential 201
  distribution systems 240
  mitigations (see also Mitigations) 202
  potential for (see Potential threats)
Safety
  facilities 268
  factor 94-102, 236, 250
  programs 128
  systems 119-23, 238, 279, 383
SCADA (supervisory control and data acquisition) 120, 126-8, 238, 257, 270, 281, 383
SCC (stress corrosion cracking) 78, 103, 383
Scientific method 2
Scope 13
Scoring 182
  algorithms 39
  corrosion 65-90
  data 182
  releases 154-65
  risks 33, 94-110
  service interruption 220
  surge potential 104
Scour 113
Screening analysis 15
Seabed stability 252
Secondary containment 142, 148, 154, 272, 383
Sectioning (see Segmenting)
Security forces 282
Segmenting 10, 22, 26, 102, 178, 226, 260-1
  data 181
  distribution systems 226
  manually establishing 26
  risk 323
Seismic 112-3, 254, 383
Seismograph activity 48-9
Sensing devices 163
Sensitivity analysis 195-6
Service interruption 209-22, 263
Set-back distances 311-12
Shear 363
Signal-to-noise ratio 333
Signs 51, 54, 245, 247
S-lay offshore pipe installation technique 253
Smart pig (see Pigging)
SMYS (specified minimum yield strength) 78, 364, 383
Sniffers 160
Societal versus individual risk 335-6
Software (see Computer)
Soil
  conditions 26, 30, 36
  conductivity 53
  corrosivity 76-8
  movement (see Land movement)
  permeability 26, 157
  pH 77, 78
  resistivity 77, 85
  settling 111
  shrinking 111
  spill penetration 147
  swell 111
Sour gas 328-9
Spans 96
Spatial analyses 181
Special loadings 94
Specifications 211-3
  pipe (see SMYS)
Specified minimum yield strength (see SMYS)
Spending prioritization (see also Cost) 343
Spill (see also Leak, Release) 142, 147
  adjustments to size 159-65
  migration 153-4
  offshore 255
  pool size 151-3
  score 104, 146
  size 148-59, 240-1, 272, 312-29
Spill limiting actions (see also Spill)
  offshore 255
SQL (structured query language) 179
SSCC (sulfide stress corrosion cracking) 103
Staffing levels and leak detection 272
Standard deviation 190
State soil geographic (see STATSGO)
Stations (see Facilities)
Statistics 5, 189-92
STATSGO (state soil geographic) 77, 78
Steel, carbon (see also Fatigue) 234
Steel mills 99
Strain gauge (see also Land movement) 114-5
Stress 144-5, 197-200, 363, 383
  concentrations 366
  corrosion cracking (see SCC)
  human errors (see also Human errors) 197-8
  hydrogen stress corrosion cracking (see HSCC)
  hydrostatic test (see Test)
  levels and fatigue (see Fatigue)
  longitudinal 364
  MAOP (see MAOP)
  materials (see Materials)
  riser 366
  soil movement (see Soil, Land movement)
  temperature 211, 365
  tensile 364
  wall thickness calculations 99
Stress corrosion cracking (see SCC)
Stressors, workplace 197-8
Structured query language (see SQL)
Subjective risk assessment 16
Subsurface corrosion (see Corrosion)
Successive reactions 267
Sulfates 72, 211-2
Sulfide stress corrosion cracking (see SSCC)
Superfund (see CERCLA)
Supervisory control and data acquisition (see SCADA)
Supports 66
Surge 250, 383
  potential 30, 104-5, 236, 250, 268
  pressure calculations 367-8
  pressures (pressure spike) 104, 367
  water hammer (pressure spike) 104, 367
Surveillance (see Patrol)
Surveys (see also CIS, DCVG, Inspections) 29-30, 128-9
  air patrol 128
  close interval (see CIS)
  coating condition 128
  holiday detection (coatings)
  leak (see also Leak) 380
  line locating (see Line locating)
  pigging (see also Pigging) 128
  polarization 81-2
  population density (see also Population density) 128
  route 93
  soil movements (see Soil, Land movements)
  subsea (sonar) profile 128, 251
  thermographic 128
  water crossings 128, 380
System integrity, distribution system 225
System losses, distribution system 225
System safety factor (see also Safety) 120
System strength 94
Tanks 260, 285
Technique, choosing 16
Temperature (see
Vacuum extraction 53
Value
  added work 18
  of human life (see also Fatalities) 347
  of mitigation (see Mitigation)
Valves
  automatic (see also Automatic) 162
  causing surges (see Surge)
  check valve (see Check)
  relief valves (see Relief)
  remote valves (see Remote)
  spacing of 163
  three-way 131
Vandalism (see Sabotage)
Vapor
  clouds (see also Clouds) 135, 149, 307, 309
  dispersion 309
  releases 146
  toxic 142, 150-1
Variability/variation 8, 10-1, 190
Variables (see Risk)
Vehicles (see Traffic)
Vibration monitoring 276, 279
Visual (see Inspections)
Volumes (see Spill size)
Vortex shedding 250-1
Wall thickness 94, 98, 383
  failure probability 298
Warning tape, mesh 46-7
Waste (see Quality)
Water crossing surveys (see Surveys)
Water hammer (see Surge)
Wave action 168, 250-2
Weather 267, 282
Weighting 25, 32, 33
Welding (see Joining)
Wetlands 168, 327, 377
What-if trials 186, 196
Wildlife (see Animal attack)
Workplace stressors (see Stressors)
X-ray (see Inspection)
Zone of influence 35, 180-1
W. Kent Muhlbauer, WKM Consultancy, Texas, USA

The new edition of this book continues the tradition set by previous editions:
The only comprehensive treatment of pipeline risk management
The book that has played a key role in establishing pipeline risk management techniques around the globe
An indispensable decision support tool, helping decision-makers make the best choices in pipeline design, operations, and maintenance
Set the standards for global pipeline risk management

Pipeline Risk Management Manual, Third Edition is the essential tool for all practitioners and for anyone seeking a deeper understanding of the risk issues surrounding these critical components of infrastructure. If you're looking for a flexible, straightforward analysis system for your everyday design and operations decisions, this indispensable guidebook will lead you to immediate improvements in the understanding, quantification, and management of risks involved in all types of pipeline operations.
Now completely revised, updated, and expanded, this widely accepted standard reference builds upon the wealth of information accumulated in previous editions. New, updated material on many critical issues is presented, including:
Actual case studies from liquid and gas risk assessments
Surface facilities in an assessment
Alternate modeling approaches
Hazard zone calculations
"Absolute" versus "relative" probabilistic assessment techniques
Failure rate estimates
Acceptable risk criteria The latest regulatory developments Data integration And much more!
W. Kent Muhlbauer, PE, is President of WKM Consultancy in Austin, Texas, a company offering specialized engineering and management services to the pipeline industry. Mr. Muhlbauer is an internationally recognized authority on pipeline risk management and an author, lecturer, consultant, and software developer. He is an advisor to private industry, government agencies, and academia, as well as a frequently invited speaker at industry conferences worldwide. The author also has an extensive background in pipeline design, operations, and maintenance, having held technical and management positions in a pipeline operating company for over 13 years prior to becoming a full-time pipeline risk management consultant.
Related Titles:
Pipeline Rules of Thumb Handbook, Fifth Edition
McAllister
Paperback, ISBN 0750674717

Pipeline Rules of Thumb Handbook, CD Version 2.0
McAllister, Muhlbauer
CD, ISBN 0750675209

Pipe Line Corrosion and Cathodic Protection, Third Edition
Parker, Peattie
Paperback, ISBN 0872011496
Front cover image by Mieko Mahi energyimages.com
ISBN: 0-7506-7579-9
Gulf Professional Publishing, an imprint of Elsevier
www.gulfpp.com