RELIABILITY AND RISK MODELS
Reliability and Risk Models: Setting Reliability Requirements. by M.T. Todinov Copyright 2005 John Wiley & Sons, Inc. ISBN 0-470-09488-5 (HB)
RELIABILITY AND RISK MODELS SETTING RELIABILITY REQUIREMENTS
M.T. Todinov Cranfield University, UK
Copyright Ó 2005
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England Telephone
(þ44) 1243 779777
Email (for orders and customer service enquiries):
[email protected] Visit our Home Page on www.wiley.com All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to
[email protected], or faxed to (þ44) 1243 770620. Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought. Other Wiley Editorial Offices John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809 John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1 Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Library of Congress Cataloging in Publication Data Todinov, M.T. Reliability and risk models : setting reliability requirements / M.T. Todinov. p. cm. ISBN 0-470-09488-5 1. Reliability (Engineering)—Mathematical models. 2. Risk assessment—Mathematical models. I. Title. TA169.T65 2005 6200 .004520 015118—dc22 2004026795 British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library ISBN 0-470-09488-5 (HB) Typeset in 10/12pt Times by Integra Software Services Pvt. Ltd, Pondicherry, India Printed and bound in Great Britain by TJ International, Padstow, Cornwall This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
To Polly and Marin
Contents
PREFACE 1
SOME BASIC RELIABILITY CONCEPTS 1.1 1.2
1.3
1.4 2
Reliability (survival) Function, Cumulative Distribution and Probability Density Function of the Times to Failure Random Events in Reliability and Risk Modelling 1.2.1 Reliability of a System with Components Logically Arranged in Series 1.2.2 Reliability of a System with Components Logically Arranged in Parallel 1.2.3 Reliability of a System with Components Logically Arranged in Series and Parallel On Some Applications of the Total Probability Theorem and the Bayes Transform 1.3.1 Total Probability Theorem. Applications 1.3.2 Bayesian Transform. Applications Physical and Logical Arrangement of Components
COMMON RELIABILITY AND RISK MODELS AND THEIR APPLICATIONS 2.1 2.2 2.3 2.4
General Framework for Reliability and Risk Analysis Based on Controlling Random Variables Binomial Model Homogeneous Poisson Process and Poisson Distribution Negative Exponential Distribution 2.4.1 Memoryless Property of the Negative Exponential Distribution
xv 1 1 3 5 8 10 12 12 16 18
19 19 20 24 26 28
viii
Contents
2.4.2
2.5
2.6 2.7 2.8 2.9 2.10 2.11 2.12 2.13 2.14 2.15 3
4
Link Between the Poisson Distribution and the Negative Exponential Distribution 2.4.3 Reliability of a Series Arrangement, Including Components with Constant Hazard Rates Hazard Rate 2.5.1 Difference between a Failure Density and Hazard Rate Mean Time to Failure (MTTF) Gamma Distribution Uncertainty Associated with the Mean Time to Failure Mean Time Between Failures (MTBF) Uniform Distribution Model Gaussian (Normal) Model Log-Normal Model The Weibull Model Reliability Bathtub Curve for Non-Repairable Components/Systems Extreme Value Models
28 29 30 32 33 34 37 39 40 41 45 46 49 50
RELIABILITY AND RISK MODELS BASED ON MIXTURE DISTRIBUTIONS
53
3.1 3.2 3.3
Distribution of a Property from Multiple Sources Variance of a Property from Multiple Sources Variance Upper Bound Theorem 3.3.1 Determining the Source Whose Removal Results in the Largest Decrease of the Variance Upper Bound 3.3.2 Modelling the Uncertainty of the Charpy Impact Energy at a Specified Test Temperature 3.4 Variation and Uncertainty Associated with the Charpy Impact Energy at a Specified Test Temperature Appendix 3.1 Appendix 3.2 An Algorithm for Determining the Upper Bound of the Variance of Properties from Sampling from Multiple Sources
53 55 59
BUILDING RELIABILITY AND RISK MODELS
69
4.1 4.2
69 72
General Rules for Reliability Data Analysis Probability Plotting 4.2.1 Testing for Consistency with the Uniform Distribution Model 4.2.2 Testing for Consistency with the Exponential Model 4.2.3 Testing for Consistency with the Weibull Distribution
60 61 62 64
67
75 75 76
ix
Contents
4.2.4
4.3 4.4
5
77 77 78 80 82
LOAD–STRENGTH (DEMAND–CAPACITY) MODELS
85
5.1 5.2 5.3 5.4
85 86 88
5.5 5.6
6
Testing for Consistency with the Type I Extreme Value Model 4.2.5 Testing for Consistency with the Normal Distribution Estimating Model Parameters Using the Method of Maximum Likelihood Estimating the Parameters of a Three-Parameter Power Law 4.4.1 Some Applications of the Three-Parameter Power Law
A General Reliability Model The Load–Strength Interference Model Load–Strength (Demand Capacity) Integrals Calculating the Load–Strength Integral Using Numerical Methods Normally Distributed and Statistically Independent Load and Strength Reliability and Risk Analysis Based on the Load–Strength Interference Approach 5.6.1 Influence of Strength Variability on Reliability 5.6.2 The Importance of the Shape of the Lower Tail of the Strength Distribution and the Upper Tail of the Load Distribution
90 91 95 95
99
SOLVING RELIABILITY AND RISK MODELS USING A MONTE CARLO SIMULATION
105
6.1 6.2
105 108
Monte Carlo Simulation Algorithms Simulation of Random Variables 6.2.1 Simulation of a Uniformly Distributed Random Variable 6.2.2 Generation of a Random Subset 6.2.3 Inverse Transformation Method for Simulation of Continuous Random Variables 6.2.4 Simulation of a Random Variable Following the Negative Exponential Distribution 6.2.5 Simulation of a Random Variable Following the Gamma Distribution 6.2.6 Simulation of a Random Variable Following a Homogeneous Poisson Process in a Finite Interval 6.2.7 Simulation of a Discrete Random Variable with a Specified Distribution 6.2.8 Selection of a Point at Random in Three-dimensional Space Region
108 109 110 111 112 112 113 114
x
Contents
6.2.9
Simulation of Random Locations Following a Homogeneous Poisson Process in a Finite Domain 6.2.10 Simulation of a Random Direction in Space 6.2.11 Simulation of a Random Variable Following the Three-parameter Weibull Distribution 6.2.12 Simulation of a random variable following the maximum extreme value distribution 6.2.13 Simulation of a Gaussian Random Variable 6.2.14 Simulation of a Log-normal Random Variable 6.2.15 Conditional Probability Technique for Bivariate Sampling 6.2.16 Von Neumann’s Method for Modelling Continuous Random Variables 6.2.17 Random sampling from a mixture distribution 6.3 Monte Carlo Simulation Algorithms for Solving Reliability and Risk Models 6.3.1 An Algorithm for Solving the General Reliability Model 6.3.2 Monte Carlo Evaluation of the Reliability on Demand during a Load–Strength Interference 6.4 Monte Carlo Simulation of the Lower Tail of the Strength Distribution for Materials Containing Flaws Appendix 6.1 7
ANALYSIS OF THE PROPERTIES OF INHOMOGENEOUS MEDIA USING MONTE CARLO SIMULATIONS 7.1 7.2 7.3 7.4
8
Analysis of Inhomogeneous Microstructures Using Random Transects Empirical Cumulative Distribution of the Intercepts Intercept Variance Lower Tail of Material Properties, Monotonically Dependent on the Amount of the Intercepted Fraction
115 116 117 118 118 119 120 121 123 123 123 124 127 130
133 134 135 139 142
MECHANISMS OF FAILURE
145
8.1
145 145 147 147 149 149 149 151
8.2
Overstress Failures 8.1.1 Brittle Fracture 8.1.2 Ductile Fracture 8.1.3 Ductile-to-brittle Transition Region 8.1.4 Yielding Wear-out Failures 8.2.1 Fatigue Failures 8.2.2 Failures due to Corrosion and Erosion
xi
Contents
8.3
9
OVERSTRESS RELIABILITY INTEGRAL AND DAMAGE FACTORISATION LAW 9.1 9.2
10
10.3 10.4 10.5
10.6
Background General Equation Related to the Probability of Failure of a Stressed Component with Internal Flaws Determining the Individual Probability of Triggering Failure, Characterising a Single Flaw Parameters Related to the Statistics of Fracture Triggered by Flaws Upper Bounds of the Flaw Number Density and the Stressed Volume Guaranteeing Probability of Failure below a Maximum Acceptable Level A Stochastic Model Related to the Fatigue Life Distribution of a Component Containing Defects
UNCERTAINTY ASSOCIATED WITH THE LOCATION OF THE DUCTILE-TO-BRITTLE TRANSITION REGION OF MULTI-RUN WELDS 11.1 11.2 11.3
12
Reliability Associated with Overstress Failure Mechanisms Damage Factorisation Law
DETERMINING THE PROBABILITY OF FAILURE FOR COMPONENTS CONTAINING FLAWS 10.1 10.2
11
Early-life Failures 8.3.1 Influence of the Design on Early-life Failures 8.3.2 Influence of the Variability of Critical Design Parameters on Early-life Failures
Modelling the Systematic and Random Component of the Charpy Impact Energy Determining the Uncertainty Associated with the Location of the Ductile-to-brittle Transition Region Risk Assessment Associated with the Location of the Transition Region
MODELLING THE KINETICS OF DETERIORATION OF PROTECTIVE COATINGS DUE TO CORROSION 12.1 12.2 12.3
Statement of the Problem Modelling the Quantity of Corroded Protective Coating with Time An Illustrative Example
153 153 154
159 159 162
165 165 168 170 173
174 176
179 179 181 186
191 191 192 194
xii 13
14
Contents
MINIMISING THE PROBABILITY OF FAILURE OF AUTOMOTIVE SUSPENSION SPRINGS BY DELAYING THE FATIGUE FAILURE MODE
199
RELIABILITY GOVERNED BY THE RELATIVE LOCATIONS OF RANDOM VARIABLES IN A FINITE DOMAIN
205
14.1
Reliability Dependent on the Relative Configurations of Random Variables 14.2 A Generic Equation Related to Reliability Dependent on the Relative Locations of a Fixed Number of Random Variables 14.2.1 An Illustrative Example. Probability of Clustering of Random Demands within a Critical Interval 14.3 A Given Number of Uniformly Distributed Random Variables in a Finite Interval (Conditional Case) 14.4 Applications 14.5 Reliability Governed by the Relative Locations of Random Variables Following a Homogeneous Poisson Process in a Finite Domain Appendix 14.1 15
RELIABILITY DEPENDENT ON THE EXISTENCE OF MINIMUM CRITICAL DISTANCES BETWEEN THE LOCATIONS OF RANDOM VARIABLES IN A FINITE INTERVAL 15.1
15.2 15.3
15.4
Problems Requiring Reliability Measures Based on Minimum Critical Intervals (MCI) and Minimum Failure-free Operating Periods (MFFOP) The MCI and MFFOP Reliability Measures General Equations Related to Random Variables Following a Homogeneous Poisson Process in a Finite Interval Application Examples 15.4.1 Setting Reliability Requirements to Guarantee a Specified MFFOP 15.4.2 Reliability Assurance that a Specified MFFOP has been met 15.4.3 Specifying a Number Density Envelope to Guarantee Probability of Clustering below a Maximum Acceptable Level
205
206 209 210 212
216 217
221
221 224
225 227 227 228
230
xiii
Contents
15.5
15.6 15.7
16
Setting Reliability Requirements to Guarantee a Minimum Failure-free Operating Period before Failures Followed by a Downtime 15.5.1 A Monte Carlo Simulation Algorithm for Evaluating the Probability of Existence of an MFFOP of Specified Length A Numerical Example Setting Reliability Requirements to Guarantee an Availability Target 15.7.1 Monte Carlo Evaluation of the Average Availability Associated with a Finite Time Interval 15.7.2 A Numerical Example
RELIABILITY ANALYSIS AND SETTING RELIABILITY REQUIREMENTS BASED ON THE COST OF FAILURE 16.1
16.2 16.3 16.4
16.5 16.6 16.7 16.8
16.9
General Models for Setting Cost-of-failure-driven Reliability Requirements for Non-repairable Components/Systems 16.1.1 A Constant Cost of Failure Cost of Failure from Mutually Exclusive Failure Modes Some Important Counterexamples Time-dependent Cost of Failure 16.4.1 Calculating the Risk of Failure in case of Cost of Failure Dependent on Time Risk of Premature Failure if the Cost of Failure Depends on Time Setting Reliability Requirements to Minimise the Total Cost. Value from the Reliability Investment Guaranteeing Multiple Reliability Requirements for Systems with Components Logically Arranged in Series Risk of Premature Failure for Systems whose Components are not Logically Arranged in Series 16.8.1 Expected Losses from Failures for Repairable Systems whose Components are not Arranged in Series Reliability Allocation to Minimise the Total Losses
231
233 234 235 236 237
239
240 240 242 246 249 249 251 252 259 261
264 264
APPENDIX A
267
1. 2. 3.
267 270 270
Random Events Union of Events Intersection of Events
xiv
Contents
4. Probability 5. Probability of a Union and Intersection of Mutually Exclusive Events 6. Conditional Probability 7. Probability of a Union of Non-disjoint Events 8. Statistically Dependent Events 9. Statistically Independent Events 10. Probability of a Union of Independent Events 11. Boolean Variables and Boolean Algebra
275 276 279 281 281 282 282
APPENDIX B
289
1. 2. 3. 4. 5. 6. 7. 8. 9.
Random Variables. Basic Properties Boolean Random Variables Continuous Random Variables Probability Density Function Cumulative Distribution Function Joint Distribution of Continuous Random Variables Correlated Random Variables Statistically Independent Random Variables Properties of the Expectations and Variances of Random Variables 10. Important Theoretical Results Regarding the Sample Mean APPENDIX C
CUMULATIVE DISTRIBUTION FUNCTION OF THE STANDARD NORMAL DISTRIBUTION
274
289 290 291 291 292 292 293 294 295 297
299
APPENDIX D c2 -DISTRIBUTION
301
REFERENCES
307
INDEX
313
Preface
Reliability engineering is neither reliability statistics nor solely engineering knowledge related to a particular equipment. Rather, it is an amalgam of theoretical principles, reliability analysis techniques and sound understanding of the physics of the component or system under consideration. Accordingly, most of the reliability and risk analysis models and techniques in this book have been related to practical engineering problems and applications. The intention was to obtain some balance between theory and application. Common, as well as little known models and their applications are discussed. Thus, a powerful generic equation is introduced for determining the probability of safe/failure states dependent on the relative configuration of random variables, following a homogeneous Poisson process in a finite domain. Seemingly intractable reliability problems can be solved easily using this equation which reduces a complex reliability problem to a simpler problem. The equation provides a basis for the new reliability measure introduced in Chapter 15, which consists of a combination of a set of specified minimum free distances before/between random variables in a finite interval, and a minimum specified probability with which they must exist. The new reliability measure is at the heart of a technology for setting quantitative reliability requirements based on minimum failure-free operating periods (MFFOP). A number of important applications of the new reliability measure are also considered such as: probability of a collision of demands from customers using a particular equipment for a specified time; probability of overloading of supply systems from consumers connecting independently and randomly; determining the hazard rate which guarantees with a minimum specified probability a minimum failure-free operating period before each random failure in a finite time interval. Problems related to probability of clustering are also discussed. It is demonstrated that even for a small number of random variables in a finite interval, the probability of clustering of two or more random variables within a critical distance is surprisingly high, and should always be accounted for in risk assessments.
xvi
Preface
Substantial space has been allocated for load–strength (demand–capacity) models and their applications. Common problems can easily be formulated and solved using the load–strength interference concept. On the basis of counterexamples, a point is made that for non-normally distributed load and strength, the reliability measures ‘reliability index’ and ‘loading roughness’ can be completely misleading. In Chapter 9, the load–strength interference model has been generalised, with the time included as a variable. The derived equation is in effect a new integral for determining reliability associated with an overstress failure mechanism. Models are also discussed related to building the lower tail of the strength distribution for materials with flaws. In Chapter 10, this has been done on the basis of a model for determining the probability of failure of a component with arbitrary shape and loading. The model can also be used to test vulnerability of designs to the presence of flaws and to develop optimised designs and loading, characterised by a low probability of failure. For industries with a high cost of failure, reliability requirements should be driven by the cost of failure. Setting reliability requirements based solely on a high availability target does not necessarily limit the risk of premature failure. Even at a high availability level, the probability of premature failure can still be unacceptably large. Reliability requirements must guarantee not only the required minimum availability but they must also reduce the risk of failure below a maximum acceptable level. Accordingly, in Chapter 16 a new methodology and models are proposed for reliability analysis and setting reliability requirements based on the cost of failure. Models and algorithms are introduced for determining the value from the reliability investment, the risk of premature failure, optimisation models for minimising the total losses, models for limiting the risk of failure below a maximum acceptable level and for guaranteeing a minimum availability level. It is proved that the expected losses from failures of a repairable system in a specified time interval are equal to the expected number of failures times the expected cost given failure. The models related to the value from the reliability investment can be used to quantify the effect from reducing early-life failures on the financial revenue and for selecting between alternative solutions. Setting reliability requirements at a system level has been reduced to determining the intersection of the hazard rate upper bounds which deliver the separate requirements. Furthermore, in Chapter 16 the conventional reliability analysis is challenged. Using counterexamples it is demonstrated that maximising the reliability of the system does not necessarily mean minimum expected losses from failures. Altering the hazard rates of the components may increase the reliability of the system and simultaneously increase the expected losses from failures! This counterintuitive result shows that the cost-of-failure reliability analysis requires a new generation of reliability tools, different from the conventional tools.
Preface
xvii
Uncertainties associated with model predictions are discussed on the basis of the location of the ductile-to-brittle transition region and the Charpy impact energy of multi-run welds at a specified test temperature. The large variation of the Charpy impact energy from sampling inhomogeneous microstructure, combined with a small number of test temperatures, propagates into a large uncertainty in the location of the ductile-to-brittle transition region. On the basis of the uncertainty model, a risk assessment model is presented for detecting fracture toughness degradation from a single data set. The assessment of the uncertainty associated with Charpy impact energy has been based upon an important result introduced rigorously in Chapter 3 referred to as ‘upper bound variance theorem’. The exact upper bound of the variance of properties from multiple sources is attained from sampling not more than two sources. Various applications of the theorem are presented. Methods related to assessing the consistency of a conjectured model with a data set and estimating the model parameters are also discussed. In this respect, a little known method for producing unbiased and precise estimates of the parameters in the three-parameter power law in Chapter 4 is introduced. All algorithms are presented in pseudocode which can be easily transformed into a programming code in any programming language. A whole chapter has been devoted to Monte Carlo simulation techniques and algorithms which are subsequently used to build algorithms for solving reliability and risk analysis problems. Basic mechanisms of failure and physics of failure models are discussed in Chapter 8, as well as the important topic related to early-life failures. Physicsof-failure concepts which help improve the reliability of automotive suspension springs by delaying the fatigue failure mode are the subject of Chapter 13. The conditions for the validity of common models have also been presented. A good example is the Palmgren–Miner rule. This is a very popular model in fatigue life predictions, yet no comments are made in the reliability literature in which cases this rule is applicable. Consequently, in Chapter 9, a substantial space has been allocated for a discussion related to the conditions under which the empirical Palmgren–Miner rule can be applied to predict fatigue life. In Chapter 12, seemingly abstract models related to coverage of surface by circular objects find an important application in modelling the corrosion kinetics of protective coatings. By trying to find the balanced mix between a theory, physics and application, my desire was to make the book useful to researchers, consultants, students and practising engineers. This text assumes limited familiarity with probability and statistics. Most of the basic probabilistic concepts have been summarised in Appendices A and B. Other concepts have been developed in the text, where necessary. In conclusion, I acknowledge financial support from British Petroleum, the editing and production staff at John Wiley & Sons, Ltd for their excellent work
xviii
Preface
and in particular, the cooperation and help of Wendy Hunter and Philip Tye. Thanks also go to many colleagues from universities and industry for their useful suggestions and comments. Finally, I acknowledge the help and support of my wife during the preparation of the manuscript. Michael T. Todinov
1 Some Basic Reliability Concepts
1.1
RELIABILITY (SURVIVAL) FUNCTION, CUMULATIVE DISTRIBUTION AND PROBABILITY DENSITY FUNCTION OF THE TIMES TO FAILURE
According to commonly accepted definitions (IEC, 50 (191), 1991) reliability is ‘the ability of an entity to perform a required function under given conditions for a given time interval’ and failure is the termination of the ability to perform the required function. In the mathematical sense, reliability is measured by the probability that a system or a component will work without failure during a specified time interval (0, t) under given operating conditions and environment (Figure 1.1). The probability P(T > t) that the time to failure T will be greater than a specified time t is given by the reliability function R(t) ¼ P(T > t), also referred to as the survival function. The reliability function is a monotonic non-increasing function, always unity at the start of life (R(0) ¼ 1, R(1) ¼ 0). It is linked with the cumulative distribution function F(t) of the time to failure by R(t) ¼ 1 F(t): Reliability ¼ 1 Probability of failure. If T is the time to failure, F(t) gives the probability P(T t) that the time to failure T will be smaller than the specified time t, or in other words, the probability that the system will fail before time t. The probability density function of the time to failure is denoted by f(t). It describes how the failure probability is spread over time. In the infinitesimal interval t, t þ dt, the probability of failure is f(t)dt. The probability of failure in any specified time interval t1 T t2 is
Reliability and Risk Models: Setting Reliability Requirements. by M.T. Todinov Copyright 2005 John Wiley & Sons, Inc. ISBN 0-470-09488-5 (HB)
2
Reliability and Risk Models
0
Specified time interval
Failure
t
Time
Time to failure
Figure 1.1 Reliability is measured by the probability that the time to failure will be greater than a specified time t
Pðt1 T t2 Þ ¼
Z
t2
f ðtÞdt
ð1:1Þ
t1
Basic properties of the probability density of the time to failure are (i) f(t) is non-negative and (ii) the total area beneath f(t) is always equal to one: Ralways 1 f (t)dt ¼ 1. This is because f(t) is a probability distribution, i.e. the prob0 abilities of all possible outcomes for the time to failure must add up to unity. The cumulative distribution function of the time to failure is related to the failure density function by f ðtÞ ¼ dFðtÞ=dt
ð1:2Þ
From expression (1.2), the probability that the time to failure will be smaller than a specified value t is FðtÞ ¼ PðT tÞ ¼ FðtÞ ¼
Z
t
f ðÞd
ð1:3Þ
0
R1 where is a dummy integration variable; F(1) ¼ 0 f ()d ¼ 1, F(0) ¼ 0. Because f(t) is non-negative, its integral F(t) is Ra monotonic non-decreasing t function of t (Figure 1.2). The value F(t ) ¼ 0 f ()d of the cumulative distribution function at time t* gives the area beneath the probability density function f(t) until time t* (Figure 1.2). The link between the reliability function R(t), cumulative distribution function F(t) and probability density function f(t) is illustrated in Figure 1.2. P(t1 < T t2) is the probability of failure between times t1 and t2: Pðt1 < T t2 Þ ¼
Z
t2
f ðÞd ¼ Fðt2 Þ Fðt1 Þ
ð1:4Þ
t1
The hatched area in Figure 1.3 is equal to the difference F(t2) F(t1) and gives the probability that the time to failure T will be between t1 and t2. A comprehensive discussion related to the basic reliability functions has been provided by Grosh (1989).
3
Some Basic Reliability Concepts Probability, probability density R (t ) = 1– F (t )
F (t )
1 F (t ∗)
Area = F (t ∗)
f (t ∗) R (t ∗)
f (t ) 0 t∗
Time, t
Figure 1.2 Reliability function, cumulative distribution function of the time to failure and failure density function
Probability, probability density 1 F (t2)
F (t )
F (t1) f (t ) 0 t1
t2
Time to failure, T
Figure 1.3 Cumulative distribution and probability density function of the time to failure
1.2
RANDOM EVENTS IN RELIABILITY AND RISK MODELLING
Two common approaches in reliability and risk modelling are based on: (i) random events and (ii) random variables. Various properties associated with operations with random events are discussed in Appendix A. Consider the following common example: Example 1.1. Suppose that two pumps (an old one and a new one) have been installed and work independently from each other as part of a fluid supply system. The reliability of the old pump associated with a specified time interval
4
Reliability and Risk Models
is 0.6. The reliability of the new pump associated with the same time interval is 0.8. The probabilities of the following events are required: (a) (b) (d) (g)
There will exist a full fluid supply at the end of the specified time interval; There will exist a fluid supply at the end of the specified time interval; There will be no fluid supply at the end of the time interval; There will be insufficient fluid supply at the end of the time interval.
This basic problem can be formulated naturally and solved using random events. Let A denote the event the old pump will be working at the end of the time interval and B denote the event the new pump will be working at the end of the time interval. The probability that there will be a full fluid supply at the end of the time interval is equal to the probability of the event A\B that both pumps will be working. Because events A and B are statistically independent, for the probability of their intersection A\B (see Appendix A) we have PðA \ BÞ ¼ PðAÞ PðBÞ ¼ 0:6 0:8 ¼ 0:48
ð1:5Þ
The probability that there will be a fluid supply at the end of the specified time interval is equal to the probability of the event A [ B that at least one pump will be working. The probability of the union A [ B of the statistically independent events A and B is (see Appendix A) PðA [ BÞ ¼ PðAÞ þ PðBÞ PðAÞPðBÞ ¼ 0:6 þ 0:8 0:6 0:8 ¼ 0:92
ð1:6Þ
The probability that no fluid supply will exist at the end of the specified time interval is equal to the probability of the compound event A \ B: the old pump will not be working and the new pump will not be working, whose probability is PðA \ BÞ ¼ PðAÞ PðBÞ ¼ ð1 0:6Þ ð1 0:8Þ ¼ 0:08
ð1:7Þ
The event insufficient fluid supply is equivalent to the event exactly one pump will be working. This event can be presented as a union (A \ B) [ (A \ B) of the following mutually exclusive events: A \ B – the old pump will be working and the new pump will not be working and A \ B – the new pump will be working and the old pump will not be working. According to the formula related to a probability of a union of mutually exclusive events (see Appendix A): PðA \ B [ A \ BÞ ¼ PðA \ BÞ þ PðA \ BÞ ¼ PðAÞPðBÞ þ PðAÞPðBÞ ¼ 0:44
ð1:8Þ
5
Some Basic Reliability Concepts
1.2.1
RELIABILITY OF A SYSTEM WITH COMPONENTS LOGICALLY ARRANGED IN SERIES
Another important application of random events is the practically important case of a system composed of statistically independent components, arranged logically in series (Figure 1.4). Let S denote the event the system will be working and Ck denote the event the kth component will be working. For the series arrangement in Figure 1.4, event S is an intersection of all events Ck, k ¼ 1, 2, . . . , n, because the system will be working only if all of the components work. Consequently, S ¼ C1 \ C2 \ . . . \ Cn
ð1:9Þ
According to the formula related to a probability of an intersection of statistically independent events (Appendix A), the probability that the system will be working is PðSÞ ¼ PðC1 ÞPðC2 ÞPðC3 Þ . . . PðCn Þ
ð1:10Þ
the product of the probabilities that the separate components will be working. Since R ¼ P(S) and Rk ¼ P(Ck), where R is the reliability of the system and Rk is the reliability of the kth component, the reliability of a series arrangement is (Bazovsky, 1961) R ¼ R1 R2 Rn
ð1:11Þ
Two important conclusions can be made from this expression. The larger the number of components is, the lower is the reliability of the arrangement. Indeed, if an extra component Cnþ1 with reliability Rnþ1 is added as shown in Figure 1.5, the reliability of the arrangement is R1 R2 Rn Rnþ1 and since Rnþ1 < 1, R1 R2 Rn > ðR1 R2 Rn Þ Rnþ1
C1
C2
Cn
Figure 1.4 A system with components logically arranged in series
C1
C2
Cn
Cn+1
Figure 1.5 An extra component added to a series arrangement
ð1:12Þ
6
Reliability and Risk Models
Another important observation is that the reliability of a series arrangement is smaller than the reliability Rk of the least reliable component k: R1 R2 Rn < Rk
ð1:13Þ
This fact has important practical implications. It means that the reliability of a series arrangement cannot be improved unless the reliability of the least reliable component is improved. If a reliability improvement on a system level is to be made, the reliability efforts should be focused on improving the reliability of the least reliable component first, not on improving the reliability of the components with already high reliability. Without loss of generality, assume that the reliabilities of the components have been arranged in descending order: R1 > R2 > . . . > Rn1 > Rn. Suppose that the reliability of the least reliable nth component has been increased to the reliability of the most reliable component (Rn ¼ R1). Now, the reliability of the system is smaller than the reliability of its least reliable component Rn1: R1 R2 Rn < Rn1. In order to improve the reliability of the system, the reliability of this component must be increased. Suppose that it has also been increased to the reliability of the most reliable component: Rn1 ¼ R1. Continuing this with the rest of the components, finally reliability of the system will become: R ¼ Rn1 . Consider now a common practical example related to two very reliable components with high reliabilities Rc 1, connected with an interface of relatively low reliability r Rc (Figure 1.6). According to equation (1.11), the reliability of the arrangement R ¼ Rc r Rc r
ð1:14Þ
is approximately equal to the reliability of its weakest link, the interface r. In order to improve the reliability of the arrangement, the reliability of the interface must be increased. One of the reasons why so many failures occur at interfaces, despite the fact that the separate components are very reliable, is the fact that often interfaces are not manufactured to match the reliability of the corresponding components. An alternative presentation of the reliability of n identical, statistically independent components arranged logically in series is R ¼ ð1 pÞn
Rc
r
ð1:15Þ
Rc
Figure 1.6 Two very reliable components with reliabilities Rc 1 connected with an unreliable interface with reliability r
7
Some Basic Reliability Concepts
where p is the probability of failure of a single component. This equation provides an insight into the link between the complexity of a system and its reliability. Indeed, let us present equation (1.15) as R ¼ exp [nln(1 p)]. For small probabilities of failure p 1, ln(1 p) p and the reliability of the arrangement becomes R ¼ expðpnÞ
ð1:16Þ
As can be verified from the equation, if the number of components n in the system is increased by a factor of k, in order to maintain the reliability on a system level, the probability of failure p of a single component must be decreased by the same factor k. It must be pointed out that, for a relatively large number of components logically arranged in series, the error associated with reliability predictions based on equation (1.15) can be significant. Indeed, suppose that a large number n of identical components have been connected in series. Each component has a reliability r. Since the reliability of the system is R ¼ rn, a small relative error r/r in estimating the reliability r of the individual components gives rise to a large relative error R/R nr/r in the predicted reliability of the system. We will illustrate this point by a simple numerical example. Let us assume that the reliability estimate related to an individual component varies in the relatively small range 0.96 r 0.98. If r ¼ 0.96 is taken as a basis for predicting the reliability of a system including 35 components logically arranged in series, the calculated reliability is R ¼ 0.9635 0.24. If r ¼ 0.98 is taken as a basis for the reliability prediction, the calculated reliability is R ¼ 0.9835 0.49, more than twice compared to the previous estimate. This example can also be interpreted in the following way. If 35 identical components with reliabilities 0.96 are logically arranged in series, a relatively small reliability increase associated with the individual components yields a large system reliability increase. The application of the formula related to a series arrangement is not restricted to hardware components only. Assume that the successful accomplishment of a project depends on the successful accomplishment of 35 identical tasks. If a person accomplishes successfully a separate task with probability 0.96, an investment in additional training which increases this probability to only 0.98 will make a huge impact on the probability of accomplishing the project. In another example, suppose that a single component can fail due to n statistically independent failure modes. The event S: the component will survive time t, can be presented as an intersection of the events Si: the component will survive the ith failure mode, S ¼ S1 \ S2 \ . . . \ Sn. The probability P(S) of the event S, is the reliability R(t) of the component associated with the time interval (0, t), which is given by the product (Ebeling, 1997)
8
Reliability and Risk Models
RðtÞ ¼ PðS1 Þ \ PðS2 Þ \ . . . \ PðSn Þ ¼ R1 ðtÞR2 ðtÞ . . . Rn ðtÞ
ð1:17Þ
where Ri (t), i ¼ 1, 2, . . . , n, is the probability that the component will survive the ith failure mode. Since the probabilities of failure before time t, associated with the separate failure modes, are given by Fi(t) ¼ 1 Ri(t), the probability of failure of the component before time t is FðtÞ ¼ 1 RðtÞ ¼ 1
n Y
½1 Fk ðtÞ
ð1:18Þ
k¼1
A review of the theory of independent competing risks can be found in Bedford and Cooke (2001). If the failure modes are not statistically independent, the reliability is equal to the product RðtÞ ¼ PðS1 Þ PðS2 jS1 Þ PðS3 jS1 S2 Þ PðSn jS1 S2 . . . Sn1 Þ
ð1:19Þ
where P(Sk|S1S2 . . . Sk1) is the probability of surviving the kth failure mode given that no failure has been initiated by the first (S1), the second (S2), . . . , and the (k 1)th failure mode (Sk1). An application of this formula to determining the reliability in case of two statistically dependent failure modes: ‘failure initiated by individual flaws’ and ‘failure caused by clustering of flaws within a small critical distance’ can be found in Todinov (2005).
1.2.2
RELIABILITY OF A SYSTEM WITH COMPONENTS LOGICALLY ARRANGED IN PARALLEL
For the parallel logical arrangement in Figure 1.7, the event S (the system will be working) is a union of the events Ck: the kth component will be working,
C1
C2
...
Cn
Figure 1.7 Components logically arranged in parallel
9
Some Basic Reliability Concepts
k ¼ 1, 2, . . . , n, because the system will be working only if at least one component works. Consequently, event S can be presented as the union of events Ck: S ¼ C1 [ C2 [ . . . [ Cn
ð1:20Þ
Simpler expressions are obtained if the reasoning is in terms of system failure (S) rather than system success (S). For a parallel logical arrangement, the event S (system failure) is an intersection of events Ck , k ¼ 1, 2, . . . , n, because the system will fail only if all of the components fail: S ¼ C1 \ C2 \ . . . \ Cn
ð1:21Þ
The probability of system failure is P(S) ¼ P(C1 ) P(C2 ) P(Cn ). We notice here that while the reliability of a series arrangement is a product of the reliabilities of the components, the probability of failure of a parallel arrangement is a product of the probabilities of failure of the components. Since the reliability of the system is R ¼ 1 P(S) and the reliabilities of the components are Ri, i ¼ 1, 2, . . . , n, the reliability of the parallel arrangement (Bazovsky, 1961) is R ¼ 1 ð1 R1 Þ ð1 R2 Þ ð1 Rn Þ
ð1:22Þ
Two important conclusions can be made from this expression. The larger the number of components in parallel, the larger the reliability of the system. Indeed, if an extra component Cnþ1 with reliability Rnþ1 is added as shown in Figure 1.8, the reliability of the arrangement is R ¼ 1 ð1 R1 Þ ð1 Rn Þ ð1 Rnþ1 Þ and since Rnþ1 < 1,
C1
C2
... Cn Cn+1
Figure 1.8 An extra component added to a parallel arrangement
10
Reliability and Risk Models
1 ð1 R1 Þ ð1 Rn Þ < 1 ð1 R1 Þ ð1 Rn Þ ð1 Rnþ1 Þ
ð1:23Þ
The second conclusion is that the reliability of the parallel arrangement is larger than the reliability of its most reliable component. In other words, for a parallel arrangement, the relationship 1 ð1 R1 Þ ð1 Ri Þ ð1 Rn Þ > Ri
ð1:24Þ
holds. Indeed, since ð1 Ri Þ½1 ð1 R1 Þ ð1 Ri1 Þ ð1 Riþ1 Þ ð1 Rn Þ > 0 holds, relationship (1.24) follows immediately.
1.2.3
RELIABILITY OF A SYSTEM WITH COMPONENTS LOGICALLY ARRANGED IN SERIES AND PARALLEL
A system with components logically arranged in series and parallel can be reduced to a simple series or a parallel system, in stages, as shown in Figure 1.9. In the first stage, the components in parallel with reliabilities R1, R2 and R3 are reduced to a component with reliability R123 ¼ 1 (1 R1)(1 R2)(1 R3) (Figure 1.9A). Next, the components in parallel, with reliabilities R4 and R5, are reduced to a component with reliability R45 ¼ 1 (1 R4)(1 R5) and the components in series with reliabilities R6, R7 and R8 are reduced to a component with reliability R678 ¼ R6 R7 R8 (Figure 1.9B). As a result, in the second stage, the equivalent reliability block diagram B is obtained (Figure 1.9B). Next, the reliability block diagram B is further simplified by reducing components with reliabilities R123 and R45 to a single component with reliability R12345 ¼ R123 R45. The final result is the equivalent reliability block diagram C including only two elements arranged in parallel, whose reliability is R ¼ 1 (1 R12345)(1 R678). Now let us assume that a system consists of only two components, arranged in series, such as shown in Figure 1.10a. Two possible ways of increasing the reliability of the system are (i) by including an active redundancy at a system level (Figure 1.10b) and (ii) by including active redundancies at a component level (Figure 1.10c). Let us compare the reliabilities of the two arrangements. Arrangement (b) is characterised by reliability R1 ¼ m2 ð2 m2 Þ
ð1:25Þ
11
Some Basic Reliability Concepts
R1 R4 R2 R5 A
R3
R6
R7
R8
R123
B
R45 R678
R12345 C
R678
Figure 1.9 Reducing the complexity of a system including components logically arranged in series and parallel a
m
m
m
m
m
m
m
m
m
m
b
c
Figure 1.10 (a) A simple series arrangement and two ways (b and c) of increasing its reliability
while arrangement (c) is characterised by reliability R2 ¼ m2 ð2 mÞ2
ð1:26Þ
Since R2 R1 ¼ 2m2(m 1)2 > 0, the arrangement in Figure 1.10(b) has a lower reliability than that in Figure 1.10(c). This example demonstrates the
12
Reliability and Risk Models
well-known principle that redundancy at a component level is more effective than redundancy at a system level (Barlow and Proschan, 1965).
1.3
1.3.1
ON SOME APPLICATIONS OF THE TOTAL PROBABILITY THEOREM AND THE BAYES TRANSFORM TOTAL PROBABILITY THEOREM. APPLICATIONS
Consider the following common engineering problem. Example 1.2. Components are delivered by n suppliers A1, A2, .P . . , and An. The market shares of the suppliers are p1, p2, . . . , pn, respectively, ni¼1 pi ¼ 1. The probabilities characterising the separate suppliers, that the strength of their components will be greater than a minimum required strength of Y MPa, are y1, y2, . . . , yn, respectively. What is the probability that a purchased component will have strength larger than Y MPa? This problem can be solved easily using the total probability theorem. Let B denote the event the purchased component will have strength greater than Y MPa and A1, A2, . . . , An denote the events the component comes from the first, the second,. . ., or the nth supplier, respectively. Event B occurs whenever any of the mutually exclusive and exhaustive events Ai occurs. Events Ai form a partition of the sample space (Figure 1.11): Ai \ Aj ¼ ;
if i 6¼ j;
A1 [ A2 [ . . . [ An ¼ PðA1 Þ þ PðA2 Þ þ þ PðAn Þ ¼ 1
Ω Ai A1 An B A2
Figure 1.11 Venn diagram representing events related to the strength of components from n suppliers
13
Some Basic Reliability Concepts
Since B ¼ (A1 \ B) [ (A2 \ B) [ . . . [ (An \ B), the probability of event B is given by the total probability formula PðBÞ ¼ PðA1 ÞPðBjA1 Þ þ PðA2 ÞPðBjA2 Þ þ þ PðAn ÞPðBjAn Þ
ð1:27Þ
The probabilities P(Ai) are equal to the market shares pi of the individual suppliers, i ¼ 1, 2, . . . , n. Since the conditional probabilities P(B|A1) ¼ y1, P(B|A2)y2, . . . , and P(B|An) ¼ yn are known, according to the total probability formula (1.27), the probability that a purchased component will have strength greater than Y MPa is PðBÞ ¼ p1 y1 þ p2 y2 þ þ pn yn
ð1:28Þ
From the total probability theorem, an important special case can be derived if the probability space is partitioned into two complementary events only. Let events A and A be complementary: A \ A ¼ 0, A [ A ¼ , P(A) þ P(A) ¼ 1. Since these events partition the probability space, if B occurs then either A occurs and B occurs or A does not occur and B occurs. The probability of event B according to the total probability theorem is PðBÞ ¼ PðBjAÞPðAÞ þ PðBjAÞPðAÞ
ð1:29Þ
Equation (1.29) forms the basis of the decomposition method for solving reliability problems (Ramakumar, 1993) which we will illustrate by a generic engineering application. Example 1.3. A comparator includes four identical measuring devices (thermocouples, manometers, voltmeters, etc.) which measure a particular quantity (temperature, pressure, voltage) in two separate zones (A and B) of a component (Figure 1.12). There are also two identical control devices which compare Control device
Data transmitter
Zone A
Zone B
Measuring devices
Control device
Figure 1.12 A functional diagram of the generic comparator
14
Reliability and Risk Models
the readings from the measuring devices and send a signal when a critical difference in the measured quantity is registered between the two zones. Only one of the control devices is required to register a critical difference in the measured quantity from the two zones. Once a critical difference is registered a signal is sent. A data transmitter is also used to transfer data between the control devices. The reliability on demand of each measuring device is m and the reliability on demand of the data transmitter is c. What is the probability that a signal will be sent in case of a critical difference in the measured quantity from zones A and B? Such a generic comparator has a number of applications. If, for example, the measurements indicate a critical concentration gradient between the two zones, the signal may operate a device which eliminates the gradient. In case of a critical differential pressure for example, the signal may be needed to open a valve which will equalise the pressure. In case of a critical temperature gradient measured by thermocouples in two zones of the same component, the signal may be needed to interrupt heating/cooling in order to limit the magnitude of the thermal stresses induced by the thermal gradient. In case of a critical potential difference measured in two zones of a circuit, the signal may activate a switch protecting the circuit. The logical arrangement of the components in the generic comparator can be represented by the reliability block diagram shown in Figure 1.13. In the figure, letters m denote the reliabilities of the measuring devices and a letter c denotes the reliability of the data transmitter. Clearly, the probability that the comparator will send a signal if a critical difference in the measured quantity exists is equal to the probability of existence of a path through working components between nodes A and B (Figure 1.13). Let event S denote the event the comparator is working on demand, C is the event the data transmitter is working on demand and C is the event the data transmitter is not working on demand. Depending on whether the data transmitter is working, the initial reliability block diagram in Figure 1.13 decomposes into two reliability block diagrams (Figure 1.14).
m
m c
A
m
B
m
Figure 1.13 A reliability block diagram of the generic comparator
15
Some Basic Reliability Concepts
m
m c
A m
(a)
B m
Transmitter working
(b) m
m c
A
Transmitter not working
m
m
B B
A m
m m
m
Figure 1.14 Depending on whether the transmitter is working, the initial reliability block diagram of the comparator decomposes into two reliability block diagrams (a) and (b)
According to the decomposition method, the probability P(S) that the comparator will be working on demand is equal to the sum of the probabilities that the comparator will be working, given that the transmitter is working and the probability that the comparator will be working given that the transmitter is not working: PðSÞ ¼ PðSjCÞPðCÞ þ PðSjCÞPðCÞ
ð1:30Þ
Figures 1.14(a) and (b) give the reliability block diagrams corresponding to events C – the transmitter is working on demand and C – the transmitter is not working on demand. According to equations (1.26) and (1.25), P(S|C) ¼ m2(2 m)2, P(SjC) ¼ m2 (2 m2 ), and the probability of sending a signal becomes PðSÞ ¼ c½m2 ð2 mÞ2 þ ð1 cÞ½m2 ð2 m2 Þ
ð1:31Þ
which is linear with respect to the reliability c of the transmitter. Depending on the reliability of the transmitter, the reliability of the comparator decreases linearly from P(S|C) ¼ m2 (2 m)2 when c ¼ 1, to P(SjC) ¼ m2 (2 m2 ) when c ¼ 0. A value c ¼ 1 corresponds to a perfect transmitter, which is approximated well if the transmitter is simply a cable. A value c ¼ 0 corresponds to a disconnected transmitter, with the reliability block diagram in Figure 1.14(b).
16
Reliability and Risk Models
For all intermediate values for the reliability c of the transmitter, the reliability of the comparator is between the reliabilities of arrangements (1.14a) and (1.14b).
1.3.2
BAYESIAN TRANSFORM. APPLICATIONS
Suppose now that Ai, i ¼ 1, . . . , n are n mutually exclusive and exhaustive events, where P(Ai) are the prior probabilities of Ai before testing. B is an observation characterised by a probability P(B). Let P(B|Ai) denote the probability of the observation B, given that event Ai has occurred. From the definition of conditional probability (see Appendix A): PðAi jBÞ ¼
PðAi \ BÞ PðAi ÞPðBjAi Þ ¼ PðBÞ PðBÞ
ð1:32Þ
P Since P(B) ¼ ni¼1 P(Ai )P(BjAi ), the Bayes’ formula (Bayes’ transform) is obtained (DeGroot, 1989): PðAi ÞPðBjAi Þ PðAi jBÞ ¼ Pn i¼1 PðAi ÞPðBjAi Þ
ð1:33Þ
The probability P(B|Ai) is usually easier to calculate than P(Ai|B). The application will be illustrated by two basic examples. The first example is similar to an example related to diagnostic tests discussed by Parzen (1960). Example 1.4. Suppose that the probability of contamination with a particular harmful agent is 0.01. A laboratory test developed for identifying the agent. It is known that 90% of the contaminated samples will test positively and 10% of the non-contaminated samples will also test positively. What is the probability that a particular sample will be contaminated given that the test has been positive? Let B denote the event the test is positive and A the event the sample has been contaminated. Because events A and A are complementary, they partition the probability space, P(A) þ P(A) ¼ 1 and P(A \ A) ¼ 0. According to the total probability theorem PðBÞ ¼ PðBjAÞPðAÞ þ PðBjAÞPðAÞ From the Bayes’s formula PðAjBÞ ¼
PðBjAÞPðAÞ PðBjAÞPðAÞ ¼ PðBÞ PðBjAÞPðAÞ þ PðBjAÞPðAÞ
ð1:34Þ
17
Some Basic Reliability Concepts
After substituting the numerical values the probability PðAjBÞ ¼
0:90 0:01 ¼ 0:083 0:90 0:01 þ 0:10 0:99
is obtained. Note that only about 8% of the samples which test positively will indeed be contaminated! The second example deals with components which come from two suppliers only. Example 1.5. Supplier A produces high-strength components while supplier B produces low-strength components. A component from supplier A survives a mechanical test with probability 0.9 while a component from supplier B survives the test with probability 0.6. If the suppliers have equal market shares, what is the probability that a particular component will be a high-strength component (from supplier A), given that it has survived the test? Since the market shares of the suppliers are equal, the probabilities of the events A and B that the component comes from supplier A or B are P(A ) ¼ 0:5 and P(B ) ¼ 0:5, respectively. Let denote the event the component has survived the test. Now, the probabilities P(i ) can be modified formally through the Bayes’ theorem, in the light of the outcome from the mechanical test. Since the total probability of surviving the test is PðÞ ¼ PðjA ÞPðA Þ þ PðjB ÞPðB Þ ¼ 0:9 0:5 þ 0:6 0:5 ¼ 0:75 the probability P(A j) that the component comes from the first supplier given that it has survived the test is PðA jÞ ¼
Primary seal (PS)
PðjA ÞPðA Þ 0:9 0:5 ¼ 0:6 PðÞ 0:75 Secondary seal (SS) Physical arrangement PS
SS
Logical arrangement PS SS
Figure 1.15 Seals that are physically arranged in series but logically in parallel
18
Reliability and Risk Models Functional diagram
Physical arrangement C1 C2 C3
Logical arrangement C1
Figure 1.16
1.4
C2
C3
The seals are physically arranged in parallel but logically in series
PHYSICAL AND LOGICAL ARRANGEMENT OF COMPONENTS
It must be pointed out that there exists a difference between a physical (functional) arrangement and a logical arrangement. This is illustrated in Figure 1.15 for a system of seals. Although the physical arrangement of the seals is in series, their logical arrangement with respect to the failure mode ‘leakage in the environment’ is in parallel. Indeed, leakage in the environment is present only if both seals fail. Conversely, components can be physically arranged in parallel but their logical arrangement is in series. This is illustrated by the seals in Figure 1.16. Although the physical arrangement of the seals is in parallel, their logical arrangement with respect to the failure mode leakage in the environment is in series. Leakage in the environment is present if at least one seal is leaking. Various examples from the electronics regarding the difference between a physical and logical arrangement have been discussed by Amstadter (1971).
2 Common Reliability and Risk Models and Their Applications
2.1
GENERAL FRAMEWORK FOR RELIABILITY AND RISK ANALYSIS BASED ON CONTROLLING RANDOM VARIABLES
Some of the random factors controlling reliability are material strength, operating loads, dimensional design parameters, distributions of defects, residual stresses, service conditions (e.g. extremes in temperature) and environmental effects (e.g. corrosion). Each random factor controlling reliability can be modelled by a discrete or a continuous random variable which will be referred to as controlling random variable. The controlling random variables can in turn be functions of other random variables. Strength for example, is a controlling random variable which is a function of material properties, design configuration and dimensions: Strength ¼ Fðmaterial properties, design configuration, dimensionsÞ Modelling based on random variables is a powerful technique in reliability engineering. Some of the properties of random variables and operations with random variables are discussed in Appendix B. The general algorithmic framework for reliability and risk analysis, based on random variables, can be summarised in the following steps: . .
Identifying all basic factors controlling reliability and risk. Defining controlling random variables corresponding to the basic factors.
Reliability and Risk Models: Setting Reliability Requirements. by M.T. Todinov Copyright 2005 John Wiley & Sons, Inc. ISBN 0-470-09488-5 (HB)
20 . . . . . .
Reliability and Risk Models
Selecting appropriate statistical models for the controlling random variables. Updating the model parameters in the light of new observations. Building a reliability and risk model incorporating the statistical models of the controlling random variables. Analysis of the quality of the model (e.g. its robustness). Solving the model using analytical or numerical techniques. Generating uncertainty bounds of the results predicted from the model, for example, by a Monte Carlo simulation or probability calculus.
2.2
BINOMIAL MODEL
Consider the following engineering example. Example 2.1. A fluid supply system consists of five pumps connected in parallel as shown in Figure 2.1. The pumps work and fail independently from one another. The capacity of each pump is 10 litres per second, and the reliability of the pumps for one year’s continuous operation is 0.85. What is the probability that at the end of one year’s continuous operation, the total output X from the pumps will be at least 30 litres per second (X 30 ls1)? This example can be generalised in the following way. Example 2.2. A system is composed of n identical components which work independently from one another (Figure 2.2). The system performs its function successfully, only if at least m components work. What is the probability that the system will perform successfully? Another common example is present when n independently working components fail with probability p. The question of interest is the probability of obtaining fewer than r failures among the n components. This is illustrated by the next example.
Figure 2.1 Functional diagram of the fluid supply system (P(X ≥ 30) = ?)
Figure 2.2 A system which performs successfully if at least m components work
Example 2.3. Suppose that a detection system for a particular harmful chemical substance is composed of n identical devices, each detecting a chemical release with probability p, independently of one another. In order to avoid false alarms, at least m devices (usually m = 2) must detect the chemical release in order to activate a shut-down system. What is the probability that the shut-down system will be activated in case of a chemical release? A common feature of these problems is the fixed number of identical trials, each of which results either in success (the component is working) or failure (the component is not working). Furthermore, the trials are statistically independent, i.e. the probability that a component will be working does not depend on the state (working or failed) of the other components. The probability of success in each trial (the probability that a component will be working) is the same. These common features define the so-called binomial experiment. The number of successful outcomes from a binomial experiment is given by the binomial distribution, a probability distribution of fundamental importance. In mathematical terms, if X is a discrete random variable denoting the number of successes during n trials, its distribution is given by

f(X = x) = \frac{n!}{x!(n-x)!} p^x (1-p)^{n-x}, \quad x = 0, 1, 2, \ldots, n   (2.1)
Indeed, the probability of any particular sequence of x successes and n − x failures is the product p^x (1−p)^{n−x} of probabilities characterising n statistically independent events: x successes, each characterised by a probability p, combined with n − x failures, each characterised by a probability 1 − p. There are n!/(x!(n−x)!) different sequences yielding x successes out of n trials. The events characterising the realisation of the separate sequences are mutually exclusive, i.e. only one particular sequence can exist at the end of the n trials. The sum of the probabilities characterising these mutually exclusive sequences is the probability given by equation (2.1).
In Figure 2.3, the binomial distribution is illustrated by three experiments, each involving ten trials, characterised by probabilities of success in each trial p = 0.1, p = 0.5 and p = 0.8, respectively. The probability of obtaining a number of successes greater than or equal to a particular number m is

P(X \ge m) = \sum_{x=m}^{n} \frac{n!}{x!(n-x)!} p^x (1-p)^{n-x}   (2.2)
Equation (2.2) gives in fact the sum of the probabilities of the following mutually exclusive events: exactly m successes at the end of the n trials, whose probability is

\frac{n!}{m!(n-m)!} p^m (1-p)^{n-m};

exactly m + 1 successes at the end of the n trials, whose probability is

\frac{n!}{(m+1)![n-(m+1)]!} p^{m+1} (1-p)^{n-(m+1)}; \quad \ldots

exactly n successes at the end of the n trials, whose probability is

\frac{n!}{n!(n-n)!} p^n (1-p)^{n-n} = p^n

(note that 0! = 1). Going back to the engineering problem defined at the start of this chapter, the probability that the total output will be at least 30 l/s is equal to the probability that at least three pumps will be working at the end of the year. Substituting the numerical values n = 5, m = 3 and p = 0.85 in equation (2.2) results in

P(X \ge 3) = \sum_{x=3}^{5} \frac{5!}{x!(5-x)!} 0.85^x (1-0.85)^{5-x} \approx 0.973
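The last sum is easy to reproduce. The short Python sketch below evaluates equations (2.1) and (2.2) for Example 2.1; the helper name binomial_pmf is ours.

```python
from math import comb

def binomial_pmf(x, n, p):
    # equation (2.1): probability of exactly x successes in n trials
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example 2.1: five pumps, each with reliability 0.85; at least three must work
p_at_least_three = sum(binomial_pmf(x, 5, 0.85) for x in range(3, 6))
print(round(p_at_least_three, 3))   # 0.973
```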
The probability that the number of successes X will be smaller than or equal to a specified number r is given by the binomial cumulative distribution function

P(X \le r) = \sum_{x=0}^{r} \frac{n!}{x!(n-x)!} p^x (1-p)^{n-x}   (2.3)
Figure 2.3 Binomial probability distribution associated with 10 trials, characterised by a probability of success (a) p = 0.1; (b) p = 0.5 and (c) p = 0.8 in each trial
Equation (2.3) is in fact a sum of the probabilities of the following mutually exclusive events at the end of the n trials: zero successes, characterised by a probability

\frac{n!}{0!(n-0)!} p^0 (1-p)^{n-0} = (1-p)^n

exactly one success, characterised by a probability

\frac{n!}{1!(n-1)!} p^1 (1-p)^{n-1}; \quad \ldots

exactly r successes, characterised by a probability

\frac{n!}{r!(n-r)!} p^r (1-p)^{n-r}

The mean of a random variable X which follows a binomial distribution with number of trials n and probability of success in each trial p is E(X) = np, and the variance is V(X) = np(1 − p) (Miller and Miller, 1999).
2.3 HOMOGENEOUS POISSON PROCESS AND POISSON DISTRIBUTION
A binomial experiment is considered, where the number of trials n tends to infinity and the probability of success p in each trial tends to zero in such a way that the mean np of the binomial distribution remains finitely large. Assume that the trials in the binomial experiment are performed within n infinitesimally small time intervals with length Δ. A single experiment in each interval is performed which results in either success or failure (Figure 2.4). Let λt = np be the mean number of successes in the finite time interval t = nΔ. Here λ is the number density of the occurrences (number of occurrences per unit interval).

Figure 2.4 Trials of a binomial experiment performed within small time intervals with lengths Δ

The probability density function of the binomial distribution can be presented as

f(X = x) = \frac{n!}{x!(n-x)!} \left(\frac{\lambda t}{n}\right)^x (1-p)^{n-x} = \frac{n(n-1)\cdots(n-x+1)}{n^x} \, \frac{(\lambda t)^x}{x!} \, \frac{(1-p)^n}{(1-p)^x}

Since n is large and p is small, (1−p)^n ≈ exp(−np) = exp(−λt). The number of successes x is finitely large, therefore (1−p)^x ≈ exp(−xp) ≈ exp(0) = 1. Since

\lim_{n \to \infty} \frac{n(n-1)\cdots(n-x+1)}{n^x} = \lim_{n \to \infty} \left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots\left(1 - \frac{x-1}{n}\right) = 1

finally

f(X = x) = \frac{(\lambda t)^x \exp(-\lambda t)}{x!}, \quad x = 0, 1, 2, \ldots   (2.5)
is obtained. This is the probability mass function for the Poisson distribution, describing the distribution of the number of occurrences from a homogeneous Poisson process, which is a limiting case of a binomial experiment with parameters n and p = λt/n when n → ∞. In other words, a binomial experiment with a large number of trials n and a small probability p in each trial can be approximated reasonably well by a homogeneous Poisson process with intensity λ = np/t. The homogeneous Poisson process is an important model for random events. It exists whenever the following conditions are fulfilled:

• The numbers of occurrences in non-overlapping intervals are statistically independent.
• The probability of an occurrence in intervals of the same length is the same and depends only on the length of the interval, not on its location.
• The probability of more than one occurrence in a vanishingly small interval is negligible.

For a homogeneous Poisson process, the intensity is constant (λ = const.) and the mean number of occurrences in the interval (0, t) is λt. Since the Poisson distribution is a limiting case of the binomial distribution, its mean is E(X) = np = λt, and the variance has the same value V(X) = λt, because if n → ∞, p → 0 and np = λt, then np(1 − p) → λt. If a homogeneous Poisson process with intensity λ is present, the distribution of the number of occurrences in the time interval (0, t) is given by the Poisson distribution (2.5). The probability of r or fewer occurrences in the finite time interval (0, t) is given by the cumulative Poisson distribution:

P(X \le r) \equiv F(r) = \sum_{x=0}^{r} \frac{(\lambda t)^x}{x!} \exp(-\lambda t)   (2.6)
As statistical models for random failures, the Poisson process and the Poisson distribution are used frequently (Thompson, 1988). The homogeneous Poisson process, for example, can also be used as a model of randomly distributed defects in a spatial domain. In this case, the random occurrences are locations of defects.

Example 2.4. Failures of a repairable system follow a homogeneous Poisson process with an average of five failures per year (365 days). (a) What is the probability that the system will fail twice during three months of operation? (b) What is the probability that the system will not fail in the next 24 hours?

Solution: (a) The mean number of failures per three months is (5/12) × 3 = 1.25. According to equation (2.5), the probability that the system will fail exactly twice during three months of operation is

f(2) = \frac{1.25^2 \exp(-1.25)}{2!} \approx 0.22

(b) The mean number of failures for 24 hours is (5/365) × 1 = 0.0137 failures. According to equation (2.5), the probability that the system will not fail in the next 24 hours is

f(0) = \frac{0.0137^0 \exp(-0.0137)}{0!} = 0.986
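Both answers can be checked with a few lines of Python implementing equation (2.5); the helper name poisson_pmf is ours.

```python
from math import exp, factorial

def poisson_pmf(x, mean_occurrences):
    # equation (2.5) with mean_occurrences = lambda * t
    return mean_occurrences**x * exp(-mean_occurrences) / factorial(x)

print(round(poisson_pmf(2, 5 / 12 * 3), 2))   # (a) two failures in 3 months: 0.22
print(round(poisson_pmf(0, 5 / 365), 3))      # (b) no failure in 24 hours: 0.986
```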
2.4 NEGATIVE EXPONENTIAL DISTRIBUTION
Suppose that the probability that a component/system will fail within a short time interval is constant. In other words, the probability that a component which has survived time t will fail in the small time interval t, t + Δt is a constant λΔt which does not depend on the age t of the component. Under this assumption we will show that the life distribution of the component is described by the negative exponential distribution

F(t) = 1 - \exp(-\lambda t)   (2.7)
Indeed, let the time interval (0, t) be divided into a large number n of small time intervals Δt = t/n (Figure 2.5). The probability R(t) that the component/system will survive n time intervals with lengths Δt is

R(t) = (1 - \lambda \Delta t)^n   (2.8)

Since (1 − λΔt)^n can be presented as exp(n ln[1 − λΔt]), considering that for Δt → 0, ln[1 − λΔt] ≈ −λΔt and t = nΔt, equation (2.8) becomes

R(t) = \exp(-\lambda t)   (2.9)

Since F(t) = 1 − R(t), for the probability of failure before time t, the negative exponential distribution (2.7) is obtained. The probability density function of the time to failure (Figure 2.6) is obtained by differentiating the cumulative distribution function (2.7) with respect to t (f(t) = dF(t)/dt):

f(t) = \lambda \exp(-\lambda t)   (2.10)
The negative exponential distribution applies whenever the probability of failure in a small time interval practically does not depend on the age of the component. It describes the distribution of failures characterised by a constant hazard rate λ.

Figure 2.5 A time interval (0, t) divided into n small intervals with lengths Δt

Figure 2.6 Probability density function of the negative exponential distribution, f(t) = λ exp(−λt)
2.4.1 MEMORYLESS PROPERTY OF THE NEGATIVE EXPONENTIAL DISTRIBUTION
If the life of a component is exponentially distributed, the probability of failure in a specified time interval does not depend on the age of the component. The probability that the component will fail within a specified time interval is the same, irrespective of whether the component has been used for some time or has just been placed in use. In other words, the probability that the life will be greater than t + Δt, given that the component has survived time t, does not depend on t. The component is as good as new. Indeed, let A denote the event 'the component will survive time t + Δt' and let B denote the event 'the component has survived time t'. The probability P(A|B) that the component will survive time t + Δt given that it has survived time t can be determined from the conditional probability formula

P(T > t + \Delta t \mid T > t) = \frac{R(t + \Delta t)}{R(t)} = \frac{\exp[-\lambda(t + \Delta t)]}{\exp(-\lambda t)} = \exp(-\lambda \Delta t)   (2.11)
where R(t) = exp(−λt) is the probability that the component will survive time t. From equation (2.11) it follows that the probability that the component will survive a further time interval Δt, given that it has survived time t, is always equal to the probability that the component will survive the time interval Δt, irrespective of its age t. This is the so-called memoryless property of the negative exponential distribution. It makes the exponential model suitable for components whose conditional probability of failure within a specified time interval practically does not depend on age. This condition is approximately fulfilled for components which practically do not degrade or wear out with time (certain electrical components, protected structures, static mechanical components, etc.).
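The memoryless property lends itself to a simple empirical check: simulate a large number of exponential lifetimes and compare the conditional survival probability with the unconditional one. The parameter values in the sketch below are arbitrary.

```python
import random
from math import exp

# Empirical check of equation (2.11): among simulated exponential lifetimes
# that exceed t, the fraction also exceeding t + dt should match exp(-lam*dt).
lam, t, dt = 0.2, 5.0, 3.0            # arbitrary illustrative values

lifetimes = [random.expovariate(lam) for _ in range(1_000_000)]
survivors = [x for x in lifetimes if x > t]
conditional = sum(x > t + dt for x in survivors) / len(survivors)
print(conditional, exp(-lam * dt))    # the two values should be close
```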
2.4.2 LINK BETWEEN THE POISSON DISTRIBUTION AND THE NEGATIVE EXPONENTIAL DISTRIBUTION
Suppose that the occurrences of the random events in any specified time interval with length t follow a homogeneous Poisson process with parameter λt, where λ is the number density of occurrences. The probability f(x) of x occurrences in the interval (0, t) is then given by the Poisson distribution (2.5). From equation (2.5), the probability that the time to the first occurrence will be larger than a specified time t can be obtained directly, by setting x = 0 (zero number of occurrences):

P(\text{no occurrences}) = e^{-\lambda t}   (2.12)
Figure 2.7 Times of successive failures in a finite time interval with length a
Consequently, the probability that the time T to the first occurrence will be equal to or smaller than t is given by P(T ≤ t) = 1 − e^{−λt}, which is the negative exponential distribution (2.7). If the times between the occurrences are exponentially distributed, the number of occurrences follows a homogeneous Poisson process, and vice versa. This link between the negative exponential distribution and the homogeneous Poisson process will be illustrated by the following important example. Suppose that a component/system is characterised by an exponential life distribution F(t) = 1 − exp(−λt). After each failure at times tᵢ, a replacement/repair is initiated which brings the component/system to as-good-as-new condition. Under these assumptions, the successive failures at times t₁, t₂, ... in the finite time interval with length a (Figure 2.7) follow a homogeneous Poisson process with intensity λ. The mean number of failures in the interval (0, a) is given by λa. Another important application of the negative exponential distribution is for modelling the lifetime of components and systems which fail whenever the load exceeds the strength of the component/system. Suppose that the load applications which exceed the strength of our component/system follow a homogeneous Poisson process with intensity λ. The reliability R associated with a finite time interval with length t is then equal to the probability R = exp(−λt) that there will be no such load application within the specified time t. Consequently, the probability of failure is given by the negative exponential distribution (2.7). Another reason for the importance of the negative exponential distribution is that it is an approximate limit failure law for complex systems containing a large number of components which fail independently and whose failures lead to a system failure (Drenick, 1960).
2.4.3 RELIABILITY OF A SERIES ARRANGEMENT, INCLUDING COMPONENTS WITH CONSTANT HAZARD RATES
The reliability of a system logically arranged in series is R = R₁R₂⋯Rₙ, where R₁ = exp(−λ₁t), ..., Rₙ = exp(−λₙt) are the reliabilities of n components with constant hazard rates λ₁, λ₂, ..., λₙ. The failures of the components are statistically independent. Substituting the component reliabilities in the system reliability formula results in

R = \exp(-\lambda_1 t)\exp(-\lambda_2 t)\cdots\exp(-\lambda_n t) = \exp(-\lambda t)   (2.13)
where λ = \sum_{i=1}^{n} \lambda_i. As a result, the hazard rate of the system is a sum of the hazard rates of the components. The times to failure of such a system follow a homogeneous Poisson process with intensity λ = \sum_{i=1}^{n} \lambda_i. This additive property is a theoretical basis for the widely used parts count method for predicting system reliability (Bazovsky, 1961; MIL-HDBK-217F, 1991). The method is suitable for systems including independently working components, logically arranged in series, where failure of any component causes a system failure. If the components are not logically arranged in series, the system hazard rate λ = \sum_{i=1}^{n} \lambda_i calculated on the basis of the parts count method is an upper bound of the real system hazard rate. One downside of this approach is that the reliability predictions are too conservative.
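A minimal sketch of a parts count calculation under the stated assumptions (independent components, series logical arrangement, constant hazard rates); the component hazard rates below are hypothetical values, not handbook data.

```python
from math import exp

# Parts count sketch: independent components, logically in series, with
# constant hazard rates (hypothetical values in failures per hour).
hazard_rates = [2e-6, 5e-6, 1e-6, 3e-6]

lam_system = sum(hazard_rates)        # additive property, equation (2.13)
t = 10_000.0                          # mission time, hours
print(lam_system, exp(-lam_system * t))   # system hazard rate and reliability
```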
2.5 HAZARD RATE
Suppose that the probability of failure in the small time interval t, t + Δt depends on the age of the component/system and is given by h(t)Δt, where h(t) will be referred to as the hazard rate. Again, let the time interval t be divided into a large number n of small intervals with lengths Δt = t/n, as shown in Figure 2.5. The probability R(t) that the component/system will survive all time intervals is

R(t) = (1 - h_1\Delta t)(1 - h_2\Delta t)\cdots(1 - h_n\Delta t)   (2.14)
where h₁, ..., hₙ approximate the hazard rate function h(t) in the separate small time intervals with lengths Δt. Equation (2.14) can also be presented as

R(t) = \exp\{\ln[(1 - h_1\Delta t)(1 - h_2\Delta t)\cdots(1 - h_n\Delta t)]\} = \exp\left(\sum_{i=1}^{n} \ln(1 - h_i\Delta t)\right)

Since ln[1 − hᵢΔt] ≈ −hᵢΔt for Δt → 0, and considering that \sum_{i=1}^{n} h_i \Delta t \to \int_0^t h(\tau)\,d\tau, equation (2.14) becomes

R(t) = \exp\left(-\int_0^t h(\tau)\,d\tau\right)   (2.15)

where τ is a dummy integration variable. The integral H(t) = \int_0^t h(\tau)\,d\tau in equation (2.15) is also known as the cumulative hazard rate. Using the cumulative hazard rate, reliability can be presented as (Barlow and Proschan, 1975):

R(t) = \exp(-H(t))   (2.16)
Reliability R(t) can be increased by decreasing the hazard rate h(t), which decreases the value of the cumulative hazard rate H(t). Correspondingly, the cumulative distribution of the time to failure is

F(t) = 1 - \exp(-H(t))   (2.17)
If the hazard rate h(t) increases with age, the cumulative distribution of the time to failure is known as an increasing failure rate (IFR) distribution. Alternatively, if the hazard rate h(t) is a decreasing function of t, the cumulative distribution of the time to failure is known as a decreasing failure rate (DFR) distribution. If the hazard rate is constant, the negative exponential distribution is obtained from equation (2.17). The negative exponential distribution is a constant failure rate (CFR) distribution. Indeed, for h(t) = λ = const., the cumulative hazard rate becomes

H(t) = \int_0^t \lambda \, d\tau = \lambda t   (2.18)
and the cumulative distribution function of the time to failure is given by the negative exponential distribution (2.7). The hazard rate can also be presented as a function of the probability density f(t) of the time to failure. Indeed, the probability of failure in the time interval t, t + Δt is given by f(t)Δt, which is equal to the probability R(t)h(t)Δt of the compound event that the component will survive time t and after that will fail in the small time interval t, t + Δt (Figure 2.8). Equating the two probabilities results in

f(t) = R(t)h(t)   (2.19)

from which

h(t) = f(t)/R(t)   (2.20)
Figure 2.8 Time to failure in the small time interval t, t + Δt
Since f(t) = −R′(t), where R(t) is the reliability function, equation (2.20) can also be presented as −R′(t)/R(t) = h(t). Integrating both sides with the initial condition R(0) = 1 gives expression (2.15), which relates the reliability associated with the time interval (0, t) to the hazard rate function.

Example 2.5. What is the expected number of failures among 100 structural elements during 1000 hours of operation, if each element has a linear hazard rate function h(t) = 10⁻⁶t? According to equation (2.15),

R(t) = \exp\left(-\int_0^t h(v)\,dv\right) = \exp\left(-10^{-6}\int_0^t v\,dv\right) = \exp\left(-10^{-6}\frac{t^2}{2}\right)

For t = 1000 hours, the reliability is R(1000) = exp(−1/2) ≈ 0.61. Since the probability of failure is F(1000) = 1 − R(1000) = 0.39, the expected number of failures among the 100 elements is 100 × 0.39 = 39.
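The example can be cross-checked numerically: the sketch below evaluates the cumulative hazard of equation (2.15) by crude midpoint integration, which for a linear hazard rate reproduces the closed-form result exactly.

```python
from math import exp

def h(t):
    return 1e-6 * t                   # linear hazard rate from Example 2.5

def cumulative_hazard(t, steps=10_000):
    # midpoint-rule integration of h over (0, t), cf. equation (2.15)
    dt = t / steps
    return sum(h((i + 0.5) * dt) * dt for i in range(steps))

t = 1000.0
R = exp(-cumulative_hazard(t))        # equals exp(-1e-6 * t**2 / 2) here
print(R, 100 * (1 - R))               # ~0.61 and ~39 expected failures of 100
```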
2.5.1 DIFFERENCE BETWEEN A FAILURE DENSITY AND HAZARD RATE
There exists a fundamental difference between the failure density f(t) and the hazard rate h(t). Consider an initial population of N₀ components. The proportion Δn/N₀ of components from the initial number N₀ that fail within the time interval t, t + Δt is given by f(t)Δt. In short, the failure density f(t) = Δn/(N₀Δt) gives the percentage of the initial number of items that fail per unit interval. Conversely, the proportion Δn/N(t) of items in service that will fail in the time interval t, t + Δt is given by h(t)Δt. In short, the hazard rate h(t) = Δn/(N(t)Δt) is the proportion of items in service that fail per unit interval. If age has no effect on the probability of failure, the hazard function h(t) will be constant (h(t) = λ = const.) and the same proportion −dn(t)/n(t) of items in service is likely to fail within the infinitesimal interval t, t + dt. Since this proportion is also equal to λdt, we can write

-\frac{dn(t)}{n(t)} = \lambda\,dt   (2.21)

which is a separable differential equation. After rearranging to d ln[n(t)] = −λdt and integrating within the time interval (0, t), we get

n(t) = n_0 \exp(-\lambda t)   (2.22)
where n₀ is the initial number of items. The probability of surviving time t is therefore given by n(t)/n₀ = R(t) = exp(−λt). Consequently, the probability of failure is obtained from F(t) = 1 − n(t)/n₀ = 1 − exp(−λt), which is the negative exponential distribution.
2.6 MEAN TIME TO FAILURE (MTTF)
An important reliability measure is the mean time to failure (MTTF), which is the average time to the first failure. It can be obtained from the mean of the probability density of the time to failure f(t):

MTTF = \int_0^{\infty} t f(t)\,dt   (2.23)
If R(t) is the reliability function, the integral in equation (2.23) becomes MTTF = -\int_0^{\infty} t \frac{dR(t)}{dt}\,dt, which, after integrating by parts (Grosh, 1989), gives

MTTF = \int_0^{\infty} R(t)\,dt   (2.24)
For a constant hazard rate λ = const.,

MTTF = \int_0^{\infty} \exp(-\lambda t)\,dt = 1/\lambda   (2.25)
i.e. in this case the mean time to failure is the reciprocal of the hazard rate. Equation (2.25) is valid only for failures characterised by a constant hazard rate. In this case, the probability that a failure will occur earlier than the MTTF is approximately 63%. Indeed,

P(T \le MTTF) = 1 - \exp[-\lambda \cdot MTTF] = 1 - \exp(-1) \approx 0.63

Example 2.6. The mean time to failure of a component characterised by a constant hazard rate is MTTF = 50 000 h. Calculate the probabilities of the following events: (i) the component will survive continuous service for one year; (ii) the component will fail between the fifth and the sixth year; (iii) the component will fail within a year given that it has survived the end of the fifth year. Compare this probability with the probability that the component will fail within a year given that it has survived the end of the tenth year.
Solution: (i) Since MTTF = 50 000 h ≈ 5.7 years, the hazard rate of the component is λ = 1/5.7 year⁻¹. Reliability is determined from R(t) = exp(−λt) and the probability of surviving one year is

R(1) = P(T > 1) = e^{-1/5.7} \approx 0.84

(ii) The probability that the component will fail between the end of the fifth and the end of the sixth year can be obtained from the cumulative distribution function of the negative exponential distribution:

P(5 < T \le 6) = F(6) - F(5) = \exp(-5/5.7) - \exp(-6/5.7) \approx 0.07

(iii) Because of the memoryless property of the negative exponential distribution, the probability that the component will fail within a year, given that it has survived the end of the fifth year, is equal to the probability that the component will fail within a year after having been put in use:

P(5 < T \le 6 \mid T > 5) = P(0 < T \le 1) = 1 - \exp(-1/5.7) \approx 0.16

Similarly, the probability that the component will fail within a year given that it has survived the end of the tenth year is obtained from

P(10 < T \le 11 \mid T > 10) = P(0 < T \le 1) = 1 - \exp(-1/5.7) \approx 0.16

This probability is equal to the previous one (as it should be) because of the memoryless property of the negative exponential distribution.
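For completeness, the three answers can be reproduced in a few lines; the only assumption is the conversion factor of 8760 hours per year.

```python
from math import exp

lam = 1 / (50_000 / 8_760)            # failures per year; 8760 h per year

def R(t):
    return exp(-lam * t)              # reliability for t years

print(R(1))                           # (i)   ~0.84
print(R(5) - R(6))                    # (ii)  ~0.07
print(1 - R(1))                       # (iii) ~0.16, regardless of age
```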
2.7 GAMMA DISTRIBUTION
Consider k components whose failures follow a homogeneous Poisson process with intensity λ, and a system built with these components which fails when all components fail. Such is the k-fold standby system in Figure 2.9, consisting of k components with identical negative exponential life distributions F(t) = 1 − exp(−λt). Component cᵢ is switched on immediately after the failure of component c_{i−1}. The system fails when all components fail. The distribution of the time to k failures can be derived using the following probabilistic argument. The probability F(t) that there will be at least k failures before time t is equal to 1 − R(t), where R(t) is the probability that there will be fewer than k failures. The compound event 'fewer than k failures before time t' is composed of the following mutually exclusive events: no failures before time t,
Figure 2.9 A k-fold standby system
exactly one failure before time t, ..., exactly k − 1 failures before time t. The probability R(t) is then a sum of the probabilities of these events given by the Poisson distribution (Tuckwell, 1988):

R(t) = \exp(-\lambda t) + \frac{(\lambda t)^1}{1!}\exp(-\lambda t) + \frac{(\lambda t)^2}{2!}\exp(-\lambda t) + \cdots + \frac{(\lambda t)^{k-1}}{(k-1)!}\exp(-\lambda t)   (2.26)
The probability of at least k failures before time t is given by F(t) = 1 − R(t):

F(t) = 1 - R(t) = \frac{(\lambda t)^k}{k!}\exp(-\lambda t) + \frac{(\lambda t)^{k+1}}{(k+1)!}\exp(-\lambda t) + \cdots   (2.27)
Differentiating F(t) in equation (2.27) with respect to t gives the probability density function of the time to the kth failure:

f(t) = \frac{dF}{dt} = \frac{\lambda(\lambda t)^{k-1}\exp(-\lambda t)}{(k-1)!}   (2.28)
Denoting the mean time to failure by θ = 1/λ, equation (2.28) becomes

f(t) = \frac{t^{k-1}\exp(-t/\theta)}{\theta^k\,(k-1)!}   (2.29)
which is the gamma density function G(α, θ):

G(\alpha, \theta) = \frac{t^{\alpha-1}\exp(-t/\theta)}{\theta^{\alpha}\,\Gamma(\alpha)}   (2.30)
with parameters α = k and θ (Γ(k) = (k − 1)!) (Abramowitz and Stegun, 1972). As a result, the sum of k statistically independent random variables following the negative exponential distribution with parameter λ follows a gamma distribution G(k, 1/λ) with parameters k and 1/λ. The mean and the variance of a random variable following the gamma distribution G(k, 1/λ) are E(X) = k/λ and V(X) = k/λ², respectively. If the time between failures of a repairable device follows the negative exponential distribution with mean time to failure θ = 1/λ, the probability density of the time to the kth failure is given by the gamma distribution G(k, θ) (equation 2.29). Here is an alternative formulation: if k components with identically distributed lifetimes following a negative exponential distribution are characterised by a mean time to failure θ, the sum of the times to failure of the components, T = t₁ + t₂ + ⋯ + tₖ, follows a gamma distribution G(k, θ) with parameters k and θ. The negative exponential distribution is a special case of a gamma distribution in which k = 1. Another important special case of a gamma distribution, with parameters k and θ = 2, is the χ²-distribution G(k, 2) with n = 2k degrees of freedom. Values of the χ² statistic for different degrees of freedom can be found in the χ²-distribution table in Appendix D. In the table, the area α cut off to the right of the abscissa (see Figure 2.10) is given with the relevant degrees of freedom n. Gamma distributions have an additivity property: the sum of two random variables following gamma distributions G(k₁, θ) and G(k₂, θ) is a random variable following a gamma distribution G(k₁ + k₂, θ) with parameters k₁ + k₂ and θ:

G(k_1, \theta) + G(k_2, \theta) = G(k_1 + k_2, \theta)   (2.31)
An important property of the gamma distribution is that if a random variable X follows a gamma distribution G(k, θ), then CX follows the gamma distribution G(k, Cθ) (Grosh, 1989). According to this property, since the sum of k times to failure T = t₁ + t₂ + ⋯ + tₖ follows a gamma distribution G(k, θ), the distribution of the estimated mean time to failure θ̂ = T/k = (t₁ + t₂ + ⋯ + tₖ)/k will follow the gamma distribution G(k, θ/k). Therefore, the quantity 2kθ̂/θ will follow a χ²-distribution G(k, 2) with n = 2k degrees of freedom. An important application of the gamma distribution is for modelling the distribution of the time to failure of a component subjected to shocks whose arrivals follow a homogeneous Poisson process with intensity λ. If the component is subjected to partial damage or degradation by each shock and fails completely at the kth shock, the distribution of the time to failure of the component is given by the gamma distribution G(k, 1/λ).
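Assuming SciPy is available, the shock model can be explored directly; the shock intensity and the number of shocks to failure below are illustrative values only.

```python
from scipy.stats import gamma

# Shock model sketch: failure at the k-th shock of a homogeneous Poisson
# shock process with intensity lam; time to failure ~ G(k, 1/lam).
# The values of k and lam are illustrative assumptions.
k, lam = 4, 2.0                        # 4 shocks to failure, 2 shocks/year

time_to_failure = gamma(a=k, scale=1 / lam)
print(time_to_failure.mean())          # k/lam = 2.0 years
print(time_to_failure.var())           # k/lam**2 = 1.0 year**2
print(time_to_failure.cdf(3.0))        # probability of failure within 3 years
```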
Another application of the gamma distribution, discussed in the next section, is determining the uncertainty associated with the mean time to failure estimated from a limited number of failure times.
2.8 UNCERTAINTY ASSOCIATED WITH THE MEAN TIME TO FAILURE
Suppose that components characterised by a constant hazard rate have been tested for failures. After failure, the components are not replaced, and the test is truncated on the occurrence of the kth failure, at which point T component-hours have been accumulated. In other words, the total accumulated operational time T includes the sum of the times to failure of all k components. The MTTF can be estimated by dividing the total accumulated operational time T (the sum of all operational times) by the number of failures k:

\hat{\theta} = \frac{T}{k}   (2.32)
where θ̂ is the estimator of the unknown MTTF. If θ denotes the true value of the MTTF, according to the previous section the expression 2kθ̂/θ follows a χ²-distribution with 2k degrees of freedom, where the end of the observation time is at the kth failure. In other words, the expression 2T/θ follows a χ²-distribution. This property can be used to determine a lower limit for the MTTF which is guaranteed with a specified probability (confidence level). Assume that we would like to find a bound θ_α about which a statement could be made that the true MTTF is greater than the bound with probability p. The required bound is obtained by finding the value χ²_{α,2k} of the χ² statistic which cuts off an area α from the right tail of the χ²-distribution (Figure 2.10).

Figure 2.10 Probability density function of the χ²-distribution

The required bound is obtained from θ_α = 2T/χ²_{α,2k}, where α = 1 − p and χ²_{α,2k} is the value of the χ² statistic for the selected confidence level p = 1 − α and degrees of freedom n = 2k. The probability that the MTTF will be greater than the lower bound θ_α is equal to the probability that the χ² statistic will be smaller than χ²_{α,2k}:

P(\theta \ge \theta_\alpha) = P(\chi^2 \le \chi^2_{\alpha,2k}) = 1 - \alpha = p   (2.33)
Suppose now that χ²_{α₁,2k} and χ²_{α₂,2k} correspond to two specified bounds θ₁ and θ₂ (θ₁ < θ₂). The probability P(θ₁ ≤ θ ≤ θ₂) that the true MTTF will be between the specified bounds θ₁ and θ₂ is equal to the probability that the χ² statistic will be between χ²_{α₂,2k} and χ²_{α₁,2k} (Figure 2.11):

P(\theta_1 \le \theta \le \theta_2) = P(\chi^2_{\alpha_2,2k} \le \chi^2 \le \chi^2_{\alpha_1,2k}) = \alpha_2 - \alpha_1   (2.34)
Another question is related to estimating the confidence level which applies to the estimate of the MTTF given by equation (2.32). In other words, the probability with which the true MTTF value will be greater than the estimate θ̂ = T/k is required. This probability can be determined by first calculating a value of the χ² statistic from χ² = 2T/θ̂. Next, for n = 2k degrees of freedom, it is determined from the table in Appendix D what value of α gives χ²_{α,2k} approximately equal to the calculated χ² = 2T/θ̂. Then 1 − α is the confidence level with which it can be stated that the true MTTF is greater than the estimate θ̂ (Smith, 1972).

Figure 2.11 The hatched area α₂ − α₁ gives the probability that the χ² statistic will be between χ²_{α₂,2k} and χ²_{α₁,2k}

In cases where two bounds are required, between which the MTTF lies with a specified probability p, two values of the χ² statistic are needed. The lower limit χ²_{1−α/2,2k} of χ² (Figure 2.12) is used to determine the upper confidence limit θ_U = 2T/χ²_{1−α/2,2k} of the confidence interval for the MTTF, while the upper limit χ²_{α/2,2k} (Figure 2.12) is used to determine the lower bound θ_L = 2T/χ²_{α/2,2k}. The probability that the true MTTF will be between θ_L and θ_U is equal to the probability that χ² will be between χ²_{1−α/2,2k} and χ²_{α/2,2k}:

P(\theta_L \le \theta \le \theta_U) = P(\chi^2_{1-\alpha/2,2k} \le \chi^2 \le \chi^2_{\alpha/2,2k})

Figure 2.12 Two limits of the χ² statistic necessary to determine a confidence interval for the MTTF
This probability is equal to the specified confidence level p = 1 − (α/2 + α/2) = 1 − α.

It must be pointed out that for failures characterised by a non-constant hazard rate, the MTTF reliability measure can be misleading, because a large MTTF can be obtained from failure data where an increased frequency of failure (low reliability) at the start of life is followed by very large periods of failure-free operation (high reliability). This will be illustrated by the following numerical example.

Example 2.7. Given two components of the same type, with different MTTFs:

Data set 1 (times to failure, days): 5, 21, 52, 4131, 8032, 12170, 16209

MTTF_1 = \frac{5 + 21 + 52 + 4131 + 8032 + 12170 + 16209}{7} \approx 5803 \text{ days}

Data set 2 (times to failure, days): 412, 608, 823, 1105, 1291, 1477

MTTF_2 = \frac{412 + 608 + 823 + 1105 + 1291 + 1477}{6} \approx 953 \text{ days}
Suppose that reliable work is required during the first year (Figure 2.13). If the component is selected solely on the basis of its MTTF, the component with the smaller reliability in the first year will be selected!

Figure 2.13 Two components of the same type with different MTTFs
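The confidence bounds of this section are straightforward to compute with SciPy, whose chi2.ppf supplies the quantiles that the Appendix D table would otherwise provide; the accumulated time T and failure count k below are hypothetical test results.

```python
from scipy.stats import chi2

# Confidence bounds for the MTTF (section 2.8): T component-hours were
# accumulated up to the k-th failure, so 2T/theta follows a chi-square
# distribution with 2k degrees of freedom. T and k are hypothetical data.
T, k, alpha = 50_000.0, 5, 0.10        # 90% two-sided confidence interval

theta_hat = T / k                                     # equation (2.32)
theta_low = 2 * T / chi2.ppf(1 - alpha / 2, 2 * k)    # cuts off alpha/2 right tail
theta_up = 2 * T / chi2.ppf(alpha / 2, 2 * k)         # cuts off alpha/2 left tail
print(theta_hat, theta_low, theta_up)
```

The sketch follows the failure-truncated case described in this section; for a test truncated at a fixed time rather than at the kth failure, the lower bound is conventionally computed with 2k + 2 degrees of freedom.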
2.9 MEAN TIME BETWEEN FAILURES (MTBF)
The mean time between failures (MTBF) reliability measure is defined for repairable systems. Assume that the failed component is restored to as-good-as-new condition. Let Uᵢ be the duration of the ith operational period (uptime) and Dᵢ be the downtime needed to restore the system after failure to as-good-as-new condition. It is assumed that all Uᵢ come from a common parent distribution and all Dᵢ come from another common parent distribution. It is also assumed that the sums Xᵢ = Uᵢ + Dᵢ are statistically independent. The MTBF is the expected (mean) value of the sum Xᵢ = Uᵢ + Dᵢ (Trivedi, 2002):

MTBF = E(U_i + D_i) = E(U_i) + E(D_i)   (2.36)
where E(Uᵢ) and E(Dᵢ) are the expected values of the uptimes and the downtimes, correspondingly. Since MTTF = E(Uᵢ) and MTTR = E(Dᵢ), where MTTR is the mean time to repair, the mean time between failures becomes

MTBF = MTTF + MTTR   (2.37)
Equation (2.37) illustrates the difference between MTTF and MTBF.
2.10 UNIFORM DISTRIBUTION MODEL
Often the only information available about a parameter X is that it varies between limits a and b. In this case, the uniform distribution is useful for modelling the uncertainty associated with the parameter X. A random variable X following a uniform distribution in the interval (a, b) is characterised by a probability density function

f(x) = \frac{1}{b - a}   (2.38)
for a ≤ x ≤ b, and f(x) = 0 elsewhere (Figure 2.14). The cumulative distribution function of the uniform distribution is

F(x) = \frac{x - a}{b - a}   (2.39)
Figure 2.14 Probability density and cumulative distribution function of the uniform distribution
for a < x ≤ b, with F(x) = 0 for x ≤ a and F(x) = 1 for x ≥ b (Figure 2.14). The mean of a uniformly distributed random variable X is E(X) = (a + b)/2 and the variance is V(X) = (b − a)²/12. A uniform distribution for the coordinates is often used to guarantee unbiased sampling during Monte Carlo simulations or to select a random location. For example, a random path of length a on a plane can be simulated if the coordinates of the starting point of the path and the direction angle are uniformly distributed. Sampling from the uniform distribution can also be used to generate times to repair. The homogeneous Poisson process and the uniform distribution are closely related. An important property of the homogeneous Poisson process, well documented in books on probabilistic modelling (e.g. Ross, 2000), states: 'Given that n random variables following a homogeneous Poisson process are present in the finite interval a, the coordinates of the random variables are distributed uniformly in the interval a.' As a result, in cases where the number of failures following a homogeneous Poisson process in a finite time interval (0, a) is known, the successive failure times are uniformly distributed within the interval of length a.
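The quoted property suggests a simple way of generating failure times conditional on their number, sketched below with an arbitrary interval length and failure count.

```python
import random

# Conditional-uniform property: given n failures of a homogeneous Poisson
# process in (0, a), their times are distributed as n sorted uniform draws.
a, n_failures = 1000.0, 7              # interval length and count (assumed)

failure_times = sorted(random.uniform(0, a) for _ in range(n_failures))
print(failure_times)
```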
2.11 GAUSSIAN (NORMAL) MODEL
Often the random variable of interest is a sum of a large number of random variables none of which dominates the distribution of the sum. For example, the distribution of a geometrical design parameter (e.g. length) incorporates the additive effects of a large number of factors: temperature variation, cutting tool wear, variations in the parameters of the control system, etc. If the number of separate contributions (additive terms) is relatively large, and if none of the separate contributions dominates the distribution of their sum, the distribution of the design parameter d (Figure 2.15) can be approximated by a normal distribution with mean equal to the sum of the means and
Figure 2.15 A design parameter d equal to the sum of n design parameters dᵢ, i = 1, 2, ..., n
variance equal to the sum of the variances of the separate contributions. The variation of a quality parameter in manufacturing often complies well with the normal distribution because the variation is usually a result of the additive effects of multiple factors, none of which is dominant. Formulated for a sum of statistically independent random variables, the central limit theorem states: the distribution of the sum X = \sum_{i=1}^{n} X_i of a large number n of statistically independent random variables X₁, X₂, ..., Xₙ approaches a Gaussian (normal) distribution with increasing number of random variables, if none of the random variables dominates the distribution of the sum. The sum X has mean \mu = \sum_{i=1}^{n} \mu_i and variance V = \sum_{i=1}^{n} V_i, where μᵢ and Vᵢ are the means and the variances of the separate random variables. Suppose that a system of collinear forces is present. According to the central limit theorem, for a sufficiently large number of forces, if none of the forces dominates the distribution of the sum, the total force is approximately normally distributed, irrespective of the individual distributions of the forces. Formulated for sampling from the same distribution, the central limit theorem states: if the independent random variables are identically distributed and have finite variances, the sum of the random variables approaches a normal distribution as their number increases (Gnedenko, 1962). This can be illustrated by an example where the average weight of n items, selected randomly from a batch, is recorded. The population of all items is characterised by a mean weight μ and standard deviation σ. For a sufficiently large number n of selected items, the sample mean (the average weight of the selected items) will approximately follow a normal distribution with mean μ and standard deviation σ/√n.

A random variable X characterised by a probability density function

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)   (2.40)

where −∞ < x < ∞, is said to be Gaussian (normally) distributed. The normal distribution of a random variable X is characterised by two parameters: the mean E(X) = μ and the variance V(X) = σ². After changing the variable by z = (x − μ)/σ, the Gaussian model transforms into

f(z) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{z^2}{2}\right)   (2.41)

which is the probability density function of the standard normal distribution. The new variable z = (x − μ)/σ is normally distributed, with mean

E(z) = \frac{1}{\sigma}[E(x) - \mu] = 0

and variance

V(z) = \frac{1}{\sigma^2}[V(x) - 0] = \frac{\sigma^2}{\sigma^2} = 1

Equation (2.42) gives the cumulative distribution function of the standard normal distribution:

\Phi(z) = P(Z \le z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du   (2.42)
where u is a dummy integration variable. If the probability that X will be smaller than b is required, it can be determined from P(X ≤ b) = Φ(z₂), where z₂ = (b − μ)/σ. The probability that X will take on values from the interval [a, b] can be determined from P(a ≤ X ≤ b) = Φ(z₂) − Φ(z₁), where z₁ = (a − μ)/σ and z₂ = (b − μ)/σ. The probability density function and the cumulative distribution function of the standard normal distribution are given in Figure 2.16, where the probability P(Z ≤ z = 1) ≈ 0.8413 has been determined (the hatched area beneath the probability density function f(z)) using the statistical table in Appendix C. Although the table lists Φ(z) for non-negative values z ≥ 0, it can also be used for determining probabilities P(Z ≤ −|z|) associated with negative values −|z|. Indeed, considering the symmetry of the standard normal curve, it can be verified that P(Z ≤ −|z|) = 1 − P(Z ≤ |z|).
Figure 2.16 Probability density function of the standard normal distribution
Example 2.8. The strength X of an element built in a device is normally distributed, with mean μ = 75 MPa and a standard deviation σ = 10 MPa.

(i) Calculate the probability that the strength X will be smaller than 65 MPa. Solution:

P(X \le 65) = \Phi\left(\frac{65 - 75}{10}\right) = \Phi(-1) = 1 - \Phi(1) \approx 0.158

(ii) Calculate the probability P(55 ≤ X ≤ 65) that the strength X will be between 55 and 65 MPa. Solution:

\frac{55 - 75}{10} = -2, \qquad \frac{65 - 75}{10} = -1

P(55 \le X \le 65) = \Phi(-1) - \Phi(-2) = [1 - \Phi(1)] - [1 - \Phi(2)] = \Phi(2) - \Phi(1) = 0.9772 - 0.8413 = 0.1359

An important property of the normal distribution holds for a sum of statistically independent, normally distributed random variables: the distribution of the sum X = \sum_{i=1}^{n} X_i of n statistically independent, normally distributed random variables X₁, X₂, ..., Xₙ is a normally distributed random variable. The sum X has mean \mu = \sum_{i=1}^{n} \mu_i equal to the sum of the means μᵢ of the random variables and variance \sigma^2 = \sum_{i=1}^{n} \sigma_i^2 equal to the sum of the variances σᵢ² of the separate random variables.
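Both parts of Example 2.8 can be reproduced without statistical tables; in the sketch below, Python's statistics.NormalDist plays the role of the Appendix C table.

```python
from statistics import NormalDist

strength = NormalDist(mu=75, sigma=10)         # Example 2.8, MPa

print(strength.cdf(65))                        # (i)  P(X <= 65)  ~ 0.1587
print(strength.cdf(65) - strength.cdf(55))     # (ii) P(55 <= X <= 65) ~ 0.1359
```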
Figure 2.17 A load which consists of three normally distributed collinear loads
For collinear loads (Figure 2.17) whose magnitudes are normally distributed, the resultant load is always normally distributed, for any number of loads and any magnitudes characterising the separate loads. The difference X = X₁ − X₂ of two normally distributed random variables with means μ₁ and μ₂ and variances σ₁² and σ₂² is a normally distributed random variable with mean μ = μ₁ − μ₂ and variance σ² = σ₁² + σ₂².

2.12 LOG-NORMAL MODEL
A random variable X is log-normally distributed if its logarithm ln X is normally distributed. Suppose that a quantity X is a product X = \prod_{i=1}^{n} Y_i of a large number of statistically independent quantities Yᵢ, none of which dominates the distribution of the product. A good example is the common model X = Yⁿ, where Y is a random variable. According to the central limit theorem, for a large number n, the logarithm of the product, ln X = \sum_{i=1}^{n} \ln Y_i, will be approximately normally distributed regardless of the probability distributions of the Yᵢ. Consequently, if ln X is normally distributed, according to the definition of the log-normal distribution, the random variable X is log-normally distributed. Here is a multiplicative version of the central limit theorem: the distribution of a product of random variables, none of which dominates the distribution of the product, approaches a log-normal distribution as the number of random variables increases. A basic application of the log-normal model is the case where (i) a multiplicative effect of a number of factors controlling reliability is present; (ii) the controlling factors are statistically independent; and (iii) their number is relatively large. The log-normal model is characterised by a probability density function (Figure 2.18)

f(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad x > 0   (2.43)

where μ and σ are the mean and the standard deviation of the ln x data.
Figure 2.18 Probability density function of the log-normal distribution
An important property of the log-normal model is its reproductive property: a product of n log-normal random variables with means μᵢ and standard deviations σᵢ is a log-normal random variable. Indeed, from X = \prod_{i=1}^{n} Y_i, the logarithm of X, ln X = \sum_{i=1}^{n} \ln Y_i, is normally distributed because it is a sum of normally distributed random variables. Because ln X is normally distributed, according to the definition of the log-normal distribution, X is log-normally distributed. Often, the log-normal distribution is appropriate for modelling material strength. With increasing log-normal mean and decreasing standard deviation, the log-normal distribution transforms gradually into a normal distribution. The hazard rate of the log-normal distribution increases at first and then decreases, which makes it unsuitable for modelling times to failure of components. However, the log-normal distribution can often be used for describing the length of time to repair (Barlow and Proschan, 1965). The repair time distribution is usually skewed, with a long upper tail, which is explained by some problem repairs taking a long time.
2.13 THE WEIBULL MODEL
A universal model for the times to failure of structural components or systems which fail when the weakest component in the system fails is the Weibull model (Weibull, 1951). This model is also popular for the strength distribution of brittle materials. One of the reasons is that the fracture stress of a system composed of many links equals the smallest of the fracture stresses of the links, i.e. equals the fracture stress of the weakest link. The cumulative distribution function of the Weibull distribution is

F(\sigma) = 1 - \exp\left[-\left(\frac{\sigma - \sigma_0}{\eta}\right)^m\right]   (2.44)
where F(σ) is the probability of failure at a loading stress σ, σ₀ is the minimum stress below which the probability of failure F(σ) is zero, η is a scale parameter and m is a shape parameter. R(σ) = 1 − F(σ) is the probability of surviving a loading stress σ. If time t instead of stress is used, equation (2.45) is obtained:

F(t) = 1 - \exp\left[-\left(\frac{t - t_0}{\eta}\right)^m\right]   (2.45)
which describes a distribution of the time to failure. In equation (2.45), F(t) is the probability that failure will occur before time t, t₀ is a location parameter or minimum life, η is the characteristic lifetime and m is a shape parameter. In many cases, the minimum life t₀ is assumed to be zero and the three-parameter Weibull distribution transforms into a two-parameter Weibull distribution:

F(t) = 1 - \exp[-(t/\eta)^m]   (2.46)
Setting t = η in equation (2.46) gives F(t) = 1 − e⁻¹ ≈ 0.632. In other words, the probability of failure before time t = η, referred to as the characteristic life, is 63.2%. Alternatively, the characteristic life corresponds to a time at which 63.2% of the initial population of items has failed. If m = 1, the Weibull distribution transforms into the negative exponential distribution F(t) = 1 − exp(−λt) with parameter λ = 1/η. Differentiating equation (2.46) with respect to t gives the probability density function of the Weibull distribution:

f(t) = \frac{m}{\eta}\left(\frac{t}{\eta}\right)^{m-1} \exp[-(t/\eta)^m]   (2.47)
which has been plotted in Figure 2.19 for different values of the shape parameter m. As can be verified, the Weibull distribution is very flexible: by selecting different values of the shape parameter m and by varying the scale parameter η, a variety of shapes can be obtained to fit experimental data. Since the hazard rate is defined by h(t) = f(t)/R(t), where f(t) = (m/η)(t/η)^{m−1} exp[−(t/η)^m] and the reliability function is R(t) = exp[−(t/η)^m], the Weibull hazard rate function becomes

h(t) = \frac{m}{\eta}\left(\frac{t}{\eta}\right)^{m-1}   (2.48)
Figure 2.19 Two-parameter Weibull probability density function for different values of the shape parameter m
As can be verified from equation (2.48), for m < 1 the hazard rate is decreasing, for m > 1 the hazard rate is increasing, and m = 1 corresponds to a constant hazard rate. Weibull hazard rate functions for different values of the Weibull exponent m and for η = 1 have been plotted in Figure 2.20.

Figure 2.20 Weibull hazard rate, h(t) = (m/η)(t/η)^{m−1}, for different values of the shape parameter m
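A small sketch of the two-parameter Weibull model of equations (2.46) and (2.48); the values of η and m are illustrative, and the 63.2% check at t = η follows from the definition of the characteristic life.

```python
from math import exp

def weibull_cdf(t, eta, m):
    return 1 - exp(-(t / eta) ** m)            # equation (2.46)

def weibull_hazard(t, eta, m):
    return (m / eta) * (t / eta) ** (m - 1)    # equation (2.48)

eta, m = 2000.0, 2.5                           # illustrative values, 1 < m <= 4
print(weibull_cdf(eta, eta, m))                # 0.632 at t = eta by definition
print(weibull_hazard(500.0, eta, m) < weibull_hazard(1500.0, eta, m))  # True: m > 1
```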
2.14 RELIABILITY BATHTUB CURVE FOR NON-REPAIRABLE COMPONENTS/SYSTEMS
The hazard rate of non-repairable components and systems follows a curve with a bathtub shape, characterised by three distinct regions (Figure 2.21). The first region, referred to as the infant mortality region, comprises the start of life and is characterised by an initially high hazard rate which decreases with time. Most of the failures in the infant mortality region are quality related and result from inherent defects due to poor design, manufacturing and assembly. A substantial proportion of failures can also be attributed to human error during installation and operation. Since most substandard components fail during the infant mortality period and the experience of the personnel operating the equipment increases, the initially high hazard rate gradually decreases. The cumulative distribution of the time to failure is a decreasing failure rate (DFR) distribution. The second region of the bathtub curve, referred to as the useful life region, is characterised by an approximately constant hazard rate. This is why the negative exponential distribution, which is a constant failure rate (CFR) distribution, is the model of the times to failure in this region. Failures in this region are not due to age, wear-out or degradation, and preventive maintenance does not affect the hazard rate. The third region, referred to as the wear-out region, is characterised by a hazard rate which increases with age due to accumulated wear and degradation of properties (e.g. wear, erosion, corrosion, fatigue, creep). The corresponding cumulative distribution of the times to failure is an increasing failure rate (IFR) distribution.

Figure 2.21 Reliability bathtub curve

In the infant mortality region, the hazard rate can be decreased (curve 2 in Figure 2.22) and reliability increased by better design, materials, manufacturing, inspection and assembly. A significant reserve in decreasing the hazard rate at the start of life is decreasing the variability of material properties and other design parameters. Another substantial reserve is decreasing the uncertainty associated with the actual loads experienced during service. A method for quantitative assessment of the impact of the decreased hazard rate on reliability will be provided in Chapter 16. In the wear-out region, reliability can be increased significantly by preventive maintenance consisting of replacing old components. This delays the wear-out phase and, as a result, reliability is increased (Figure 2.22).

Figure 2.22 Decreasing the hazard rate by eliminating early-life failures and delaying the wear-out phase by preventive maintenance

For a shape parameter m = 1, the Weibull distribution transforms into the negative exponential distribution and describes the useful life region of the bathtub curve, where the probability of failure within a specified time interval practically does not depend on age. For components for which early-life failures have been eliminated and preventive maintenance has been conducted to replace worn parts before they fail, the hazard rate tends to remain constant (Villemeur, 1992). A value of the shape parameter smaller than 1 (m < 1) corresponds to a decreasing hazard rate and indicates infant mortality failures. A value of the shape parameter greater than 1 (m > 1) corresponds to an increasing hazard rate and indicates wear-out failures. Values in the interval 1 < m ≤ 4 indicate early wear-out failures, caused for example by low-cycle fatigue, corrosion or erosion. Values of the shape parameter greater than 4 (m > 4) indicate old-age wear-out (Abernethy, 1994). Most steep Weibull distributions have a safe period within which the probability of failure is negligible. The larger the parameter m, the smaller the variation of the time to failure. An almost vertical Weibull plot, with very large m, implies perfect design, quality control and production (Abernethy, 1994). Components made from purer materials have larger Weibull exponents m compared with components made from materials containing impurities.
2.15 EXTREME VALUE MODELS
Suppose that X₁, ..., Xₙ are n independent observations of a random variable (e.g. load, strength). Let X = max{X₁, X₂, ..., Xₙ} denote the maximum value among these observations. Provided that the right tail of the random variable distribution decreases at least as fast as that of the negative exponential distribution, the asymptotic distribution of X for large values of n is the type I distribution of extreme values (Gumbel, 1958):

F(x) = \exp\left[-\exp\left(-\frac{x - \nu}{\theta}\right)\right]   (2.49)

This condition is satisfied by most of the reliability distributions: the normal, log-normal and negative exponential distributions. The maximum extreme value distribution model (Figure 2.23) is often appropriate in cases where the maximum load determines component failures. The type I extreme value model has a scale parameter θ and a mode ν. The mean of the distribution is ν + 0.57722θ and the standard deviation is σ = 1.28255θ (Metcalfe, 1994). Suppose that n statistically independent loads X₁, X₂, ..., Xₙ following the maximum extreme value distribution F(x) have been applied consecutively to a component with strength x. The probability that the component will survive all n loads is

P(X_1 \le x)\,P(X_2 \le x)\cdots P(X_n \le x) = [F(x)]^n = \exp(-n\exp[-(x - \nu)/\theta])

Since exp(−n exp[−(x − ν)/θ]) = exp(−exp[−(x − (ν + θ ln n))/θ]), the distribution of the maximum load also follows an extreme value distribution

F(x) = \exp[-\exp(-(x - \nu')/\theta)]   (2.50)

with a scale parameter θ and a new displacement parameter (mode) ν' = ν + θ ln n. Thus, the maximum value from sampling an extreme value distribution also follows an extreme value distribution.

Figure 2.23 Probability density function of the maximum extreme value model
Figure 2.24 Probability density function of the minimum extreme value distribution
Suppose now that X₁, ..., Xₙ are n independent observations of a random variable (e.g. load, strength). Let X = min{X₁, X₂, ..., Xₙ} denote the minimum value among these observations. With increasing number of observations n, the asymptotic distribution of the minimum value follows the minimum extreme value distribution (Gumbel, 1958):

F(x) = 1 - \exp\left[-\exp\left(\frac{x - \nu}{\theta}\right)\right]   (2.51)
The minimum extreme value distribution (Figure 2.24) can, for example, be used for describing the distribution of the lowest temperature or the smallest strength. The Weibull distribution is related to the minimum extreme value distribution as the log-normal distribution is related to the normal distribution: if a variable is Weibull-distributed, its logarithm follows the minimum extreme value distribution. Indeed, it can be verified that if the transformation x = ln z is made in equation (2.51), the new variable z follows the Weibull distribution. References and discussions related to other statistical models used in reliability and risk analysis can be found in Trivedi (2002) and Bury (1975).
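The stability property of equation (2.50) can be checked by simulation. The sketch below draws maxima of n Gumbel-distributed loads via inverse-CDF sampling; the load parameters are arbitrary illustrations.

```python
import random
from math import log

def gumbel_sample(nu, theta):
    # inverse-CDF sampling from F(x) = exp(-exp(-(x - nu)/theta))
    return nu - theta * log(-log(random.random()))

nu, theta, n = 100.0, 10.0, 50          # illustrative load parameters
maxima = [max(gumbel_sample(nu, theta) for _ in range(n))
          for _ in range(20_000)]

# The mean of the maxima should sit near (nu + theta*ln n) + 0.57722*theta
mean_of_maxima = sum(maxima) / len(maxima)
print(mean_of_maxima, nu + theta * log(n) + 0.57722 * theta)
```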
3 Reliability and Risk Models Based on Mixture Distributions
3.1 DISTRIBUTION OF A PROPERTY FROM MULTIPLE SOURCES
Suppose that items arrive from M different sources in proportions p₁, p₂, ..., p_M, \sum_{k=1}^{M} p_k = 1. A particular property of the items from each source k is characterised by a mean μₖ and variance Vₖ. Often, of significant practical interest is the variance V of the property for items collected from all sources. In another example, small samples are taken randomly from a three-component inhomogeneous structure (components A, B and C, Figure 3.1). The probabilities p₁, p₂ and p₃ of sampling the structural constituents A, B and C are equal to their volume fractions ξ_A, ξ_B and ξ_C (p₁ = ξ_A, p₂ = ξ_B, p₃ = ξ_C). Suppose that the three structural constituents A, B and C have volume fractions ξ_A = 0.55, ξ_B = 0.35 and ξ_C = 0.1. The mean yield strengths of the constituents are μ_A = 800 MPa, μ_B = 600 MPa and μ_C = 900 MPa, and the standard deviations are σ_A = 20 MPa, σ_B = 25 MPa and σ_C = 10 MPa, correspondingly. What is the variance of the yield strength from random sampling of the inhomogeneous structure? It can be demonstrated that in these cases the property of interest from the different sources can be modelled by a distribution mixture.
Figure 3.1 Sampling of an inhomogeneous microstructure composed of three structural constituents A, B and C
Figure 3.2 The probability of event B is a sum of the probabilities of the mutually exclusive events Ai ∩ B
Thus, the probability F(x) ≡ P(X ≤ x) of the event B (Figure 3.2) that the property X will be smaller than or equal to a specified value x can be presented as a union of the following mutually exclusive and exhaustive events: A1 ∩ B, the first source is sampled (event A1) and the property is smaller than x (the probability of this compound event is p1F1(x)); A2 ∩ B, the second source is sampled (event A2) and the property is smaller than x (the probability of this compound event is p2F2(x)); ...; AM ∩ B, the Mth source is sampled and the property is smaller than x (the probability of this compound event is pMFM(x)). According to the total probability theorem, the probability that the property X will be smaller than or equal to a specified value x is

F(x) = Σ_{k=1}^{M} pk Fk(x)    (3.1)
which is the cumulative distribution of the property from all sources. F(x) is a mixture of the probability distribution functions Fk(x) characterising the individual sources, scaled by the probabilities pk, k = 1, ..., M, with which they are sampled. After differentiating equation (3.1), a relationship between the probability densities is obtained:

f(x) = Σ_{k=1}^{M} pk fk(x)    (3.2)
Multiplying both sides of equation (3.2) by x and integrating,

∫_{−∞}^{+∞} x f(x) dx = Σ_{k=1}^{M} pk ∫_{−∞}^{+∞} x fk(x) dx

gives

μ = Σ_{k=1}^{M} pk μk    (3.3)
for the mean value of a property from M different sources (Everitt and Hand, 1981).
3.2 VARIANCE OF A PROPERTY FROM MULTIPLE SOURCES
The variance V of the mixture distribution (3.1) for continuous probability density functions fk(x) characterising the existing microstructural constituents can be derived as follows:

V = ∫ (x − μ)² f(x) dx = Σ_{k=1}^{M} pk ∫ (x − μk + μk − μ)² fk(x) dx
  = Σ_{k=1}^{M} pk [ ∫ (x − μk)² fk(x) dx + ∫ 2(x − μk)(μk − μ) fk(x) dx + ∫ (μk − μ)² fk(x) dx ]

Because the middle integral in the expansion is zero, ∫ 2(x − μk)(μk − μ) fk(x) dx = 0, the expression for the variance becomes (Todinov, 2002a)

V = Σ_{k=1}^{M} pk [Vk + (μk − μ)²]    (3.4)
where Vk, k = 1, ..., M, are the variances characterising the M individual distributions. Although equation (3.4) has a simple form, the grand mean μ of the distribution mixture, given by equation (3.3), is a function of the means μk of the individual distributions. An expression for the variance can also be derived as a function only of the pairwise distances between the means μk of the individual distributions. Indeed, substituting expression (3.3) for the mean of a distribution mixture into the term Σ_{k=1}^{M} pk(μk − μ)² of equation (3.4) gives

Σ_{k=1}^{M} pk (μk − μ)² = Σ_{k=1}^{M} pk μk² − ( Σ_{k=1}^{M} pk μk )²    (3.5)
The variance of the distribution mixture can now be expressed only in terms of the pairwise differences between the means of the individual distributions. Expanding the right-hand part of equation (3.5) results in

Σ_{k=1}^{M} pk μk² − (Σ_{k=1}^{M} pk μk)²
  = p1μ1² + p2μ2² + ... + pMμM² − p1²μ1² − p2²μ2² − ... − pM²μM² − 2 Σ_{i=2}^{M} Σ_{j=1}^{i−1} pi pj μi μj
  = p1(p2 + p3 + ... + pM)μ1² + p2(p1 + p3 + ... + pM)μ2² + ... + pM(p1 + p2 + ... + pM−1)μM² − 2 Σ_{i<j} pi pj μi μj
  = Σ_{i<j} pi pj (μi − μj)²
because 1 − p1 = p2 + p3 + ... + pM; 1 − p2 = p1 + p3 + ... + pM; etc. Finally, the variance of the distribution mixture (3.4) becomes

V = Σ_{k=1}^{M} pk [Vk + (μk − μ)²] = Σ_{k=1}^{M} pk Vk + Σ_{i<j} pi pj (μi − μj)²    (3.6)
The expansion of Σ_{i<j} pi pj (μi − μj)² has M(M − 1)/2 terms, equal to the number of different pairs (combinations) of indices among M indices. For M = 2 individual distributions (sources), equation (3.6) becomes
V = pV1 + (1 − p)V2 + p(1 − p)(μ1 − μ2)²    (3.7)
For three sources (M = 3, Figure 3.3), equation (3.6) becomes

V = p1V1 + p2V2 + p3V3 + p1p2(μ1 − μ2)² + p2p3(μ2 − μ3)² + p1p3(μ1 − μ3)²    (3.8)
Going back to the problem at the beginning of this chapter, according to equation (3.3) the mean yield strength of the samples from all microstructural zones is

μ = pAμA + pBμB + pCμC = 0.55 × 800 + 0.35 × 600 + 0.10 × 900 = 740 MPa

Considering that the variances Vk in equation (3.8) are the squares of the standard deviations σA, σB and σC characterising the yield strength of the separate microstructural zones, the standard deviation of the yield strength from sampling all microstructural zones becomes

σ = [pAσA² + pBσB² + pCσC² + pApB(μA − μB)² + pBpC(μB − μC)² + pApC(μA − μC)²]^{1/2}    (3.9)
After substituting the numerical values in equation (3.9), the value σ = 108.85 MPa is obtained for the standard deviation characterising sampling from all microstructural constituents. As can be verified, the value 108.85 MPa is significantly larger than the standard deviations σA = 20 MPa, σB = 25 MPa and σC = 10 MPa characterising sampling from the individual structural constituents.
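The calculation can be replicated with a few lines of code (an illustrative sketch that simply re-evaluates equations (3.3) and (3.9) with the values above; the variable names are mine):

import numpy as np

p  = np.array([0.55, 0.35, 0.10])       # volume fractions = sampling probabilities
mu = np.array([800.0, 600.0, 900.0])    # mean yield strengths of A, B, C (MPa)
sd = np.array([20.0, 25.0, 10.0])       # standard deviations of A, B, C (MPa)

grand_mean = p @ mu                     # equation (3.3): 740 MPa
within  = p @ sd**2                     # within-sources component of equation (3.6)
between = sum(p[i] * p[j] * (mu[i] - mu[j])**2
              for i in range(3) for j in range(i + 1, 3))
print(grand_mean, np.sqrt(within + between))   # 740.0 and ~108.85 MPa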
Figure 3.3 Distributions of properties from three different sources
If we examine equation (3.6) closely, the reason for the large variance becomes clear. In equation (3.6), the variance of the distribution mixture has been decomposed into two major components. The first component, Σ_{k=1}^{M} pk Vk, comprising the terms pkVk, characterises only the variation of properties within the separate sources (individual distributions). The second component is the sum Σ_{i<j} pi pj (μi − μj)² and characterises the variation of properties between the separate sources (individual distributions). Assuming that all individual distributions have the same mean (μi = μj = μ), the terms pi pj (μi − μj)² in equation (3.6) become zero and the total variance becomes V = Σ_{k=1}^{M} pk Vk. In other words, the total variation of the property is entirely a within-sources variation (Figure 3.4). From equation (3.8), for three sources with the same mean, the variance becomes

V = p1V1 + p2V2 + p3V3    (3.10)
Figure 3.4 Distribution of properties from three different sources with the same mean

Figure 3.5 Distribution of properties from three different sources with very small variances

Now let us assume that all sources (individual distributions) are characterised by very small variances, Vk ≈ 0 (Figure 3.5). In this case, the within-sources
variance can be neglected, Σ_{k=1}^{M} pk Vk ≈ 0, and the total variance becomes V = Σ_{i<j} pi pj (μi − μj)². In other words, the total variation of the property is entirely a between-sources variation. From equation (3.8), for the three sources in Figure 3.5 characterised by negligible variances (V1 ≈ 0, V2 ≈ 0 and V3 ≈ 0), the total variance from sampling all sources becomes

V ≈ p1p2(μ1 − μ2)² + p2p3(μ2 − μ3)² + p3p1(μ3 − μ1)²    (3.11)

3.3 VARIANCE UPPER BOUND THEOREM
If the mixing proportions pk are unknown, the variance V in equation (3.4) cannot be calculated. Depending on the actual mixing proportions pk, the variance V may vary from the smallest variance Vk characterising one of the sources up to the largest possible variance obtained from sampling a particular combination of sources with appropriate probabilities pi. A central question is to establish an exact upper bound for the variance of properties from multiple sources, irrespective of the mixing proportions pk with which the sources are sampled. This upper bound can be obtained using the simple numerical algorithm in Appendix 3.2, which is based on an important result derived rigorously in Appendix 3.1.

Variance upper bound theorem (Todinov, 2003b). The exact upper bound of the variance of properties from sampling multiple sources is obtained from sampling not more than two sources. Mathematically, the variance upper bound theorem can be expressed as

Vmax = pmax Vk + (1 − pmax)Vs + pmax(1 − pmax)(μk − μs)²    (3.12)
where k and s are the indices of the sources for which the upper bound of the variance is obtained, and pmax (0 ≤ pmax ≤ 1) and 1 − pmax are the probabilities of sampling the two sources (Figure 3.6). If pmax = 1, the maximum variance is obtained from sampling a single source (the kth source) only.

For a large M, determining the variance upper bound by directly finding the global maximum of expression (3.6) with respect to the probabilities pk is a difficult task, which can be made easy using the variance upper bound theorem. The algorithm for finding the maximum variance of the properties in Appendix 3.2 is based on the variance upper bound theorem and consists of checking the variances of all individual sources and the variances from sampling all possible pairs of sources. Finding the variance upper bound of the properties from M sources involves only M(M + 1)/2 checks.

Figure 3.6 Upper bound of the variance from sampling two sources with indices k and s
3.3.1 DETERMINING THE SOURCE WHOSE REMOVAL RESULTS IN THE LARGEST DECREASE OF THE VARIANCE UPPER BOUND
The variance upper bound theorem can be applied to identify the source whose removal yields the smallest value of the variance upper bound. The algorithm consists of finding the source (or pair of sources) yielding the largest variance. If sampling from a single source yields the largest variance, removing this source yields the largest decrease in the variance upper bound. The algorithm will be illustrated by the following numerical example.

Suppose that the properties of items from five sources are characterised by individual distributions with variances V1 = 208, V2 = 240, V3 = 108, V4 = 102, V5 = 90 and means μ1 = 39, μ2 = 43, μ3 = 45, μ4 = 56 and μ5 = 65, correspondingly. Removal of which source yields the largest decrease in the variance upper bound?

The global maximum of the variance of properties from these sources is Vmax ≈ 323.15, attained from sampling the fifth source with probability pmax ≈ 0.41 and the first source with probability 1 − pmax ≈ 0.59. Removing the fifth source yields the largest reduction of the variance upper bound. Indeed, the calculations show that after removing the fifth source, the variance upper bound Vmax ≈ 241.4 is attained from sampling the fourth source with probability pmax ≈ 0.09 and the second source with probability 1 − pmax ≈ 0.91. Removal of the fifth source yields a value of the variance upper bound which cannot be improved (decreased further) by the removal of any other source instead. This result is particularly useful in cases where the mixing proportions from the sources are unknown and a tight upper bound for the variance is necessary for a conservative estimate of the uncertainty associated with the properties from the sources.
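The M(M + 1)/2 checks of Appendix 3.2 can be sketched compactly in code (an illustrative sketch reproducing the example above; the function name and structure are mine):

from itertools import combinations

def variance_upper_bound(V, mu):
    # Best single source first (pmax = 1)
    k = max(range(len(V)), key=lambda i: V[i])
    vmax, pair, p = V[k], (k, k), 1.0
    # Then all pairs of sources (equations (3A.6)-(3A.8))
    for i, j in combinations(range(len(V)), 2):
        d2 = (mu[i] - mu[j])**2
        if abs(V[i] - V[j]) < d2:        # an interior local maximum exists
            cand = (V[i] + V[j])/2 + (V[i] - V[j])**2/(4*d2) + d2/4
            if cand > vmax:
                vmax, pair = cand, (i, j)
                p = 0.5 + (V[i] - V[j])/(2*d2)   # probability of sampling source i
    return vmax, pair, p

V  = [208, 240, 108, 102, 90]
mu = [39, 43, 45, 56, 65]
print(variance_upper_bound(V, mu))          # ~323.15, from sources 1 and 5
print(variance_upper_bound(V[:4], mu[:4]))  # ~241.4 after removing the fifth source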
3.3.2 MODELLING THE UNCERTAINTY OF THE CHARPY IMPACT ENERGY AT A SPECIFIED TEST TEMPERATURE
The variance upper bound theorem can also be applied for calculating the uncertainty associated with the Charpy impact energy from sampling inhomogeneous C–Mn welds. Multi-run welds are characterised by a large variation of the impact toughness. Determining the variance of the distribution mixture from sampling different microstructural zones is closely related to the probability bounds of the mechanical properties associated with these zones, which is an important part of a structural reliability assessment.

The microstructure of C–Mn multi-run welds (Figure 3.7) is markedly inhomogeneous, consisting of a central zone characterised by poor impact toughness, a reheated zone characterised by relatively high impact toughness, and an intermediate zone characterised by intermediate impact toughness (see Figure 3.8). The microstructure consists roughly of as-deposited metal (the dark zones in Figure 3.7) and reheated zones (the lighter zones) representing recrystallised weld metal. Each subsequent weld bead grain-refines (normalises) part of the previous weld metal underneath, and the refinements in microstructure result in improved toughness compared to the microstructure not affected by reheating (Easterling, 1983).

Figure 3.7 A typical inhomogeneous microstructure characterising C–Mn multi-run welds: (a) intermediate, central and reheated zones; (b) position of the Charpy V-notch

The microstructural zone in which the V-notch is cut (Figure 3.7b) has a very strong influence on the distribution of the impact energy in the ductile-to-brittle transition region (Todinov et al., 2000). This is illustrated by the well-separated distributions of the impact energy from the microstructural zones (Figure 3.8). Distribution mixtures arising from sampling inhomogeneous materials are characterised by large variances, and this is also illustrated by the box plots in Figure 3.8, representing the scatter of the Charpy impact energy from sampling the individual microstructural zones and from sampling all zones.

Figure 3.8 Scatter of the Charpy impact energy characterising sampling from the individual microstructural zones of a C–Mn multi-run weld and from all zones
3.4 VARIATION AND UNCERTAINTY ASSOCIATED WITH THE CHARPY IMPACT ENERGY AT A SPECIFIED TEST TEMPERATURE
Determining the variation of the Charpy impact energy at a specified test temperature is based on experimental data published in earlier work (Todinov et al., 2000) and on equation (3.8) related to the variance of a mixture distribution from three sources. The experimental data characterise the Charpy impact
energy from the central, intermediate and reheated zones of multi-run C–Mn welds at 0 °C and include Charpy impact energy measurements from 117 impact tests with a V-notch placed in different microstructural zones. The Charpy impact energy distribution of the central zone is characterised by a mean μ1 = 62.32 J and standard deviation σ1 = 10.49 J, calculated on the basis of 34 measurements with a V-notch placed in the central zone only. Correspondingly, the impact energy of the intermediate zone is characterised by a mean μ2 = 92.22 J and standard deviation σ2 = 16.02 J, calculated on the basis of 54 measurements with a V-notch placed in the intermediate zone only, and the Charpy impact energy of the reheated zone is characterised by an empirical distribution with mean μ3 = 106.22 J and standard deviation σ3 = 13.63 J, calculated on the basis of 29 measurements with a V-notch placed in the reheated zone only.

For M = 3 microstructural zones (central, intermediate and reheated), equation (3.1) yields a model for the distribution of the Charpy impact energy:

F(x) = p1F1(x) + p2F2(x) + p3F3(x)    (3.13)
where p1, p2 and p3 are the probabilities of sampling the central (1), intermediate (2) and reheated (3) zones and F1(x), F2(x) and F3(x) are the cumulative distribution functions characterising the Charpy impact energy from the three microstructural zones. According to equation (3.3), the mean of the Charpy impact energy at 0 °C is

μ0 = p1μ1 + p2μ2 + p3μ3    (3.14)
where μ1 = 62.32 J, μ2 = 92.22 J and μ3 = 106.22 J are the means of the Charpy impact energy characterising the separate microstructural zones. According to equation (3.8), the variance of the Charpy impact energy at 0 °C is

V0 = p1σ1² + p2σ2² + p3σ3² + p1p2(μ1 − μ2)² + p2p3(μ2 − μ3)² + p1p3(μ1 − μ3)²    (3.15)
where σ1 = 10.49 J, σ2 = 16.02 J and σ3 = 13.63 J are the standard deviations of the Charpy impact energy associated with the separate microstructural zones. Since the probabilities p1, p2 and p3 of sampling the individual microstructural zones are unknown, the variance V0 in equation (3.15) cannot be determined in general. In order to obtain a conservative estimate for the variance, its exact upper bound with respect to the probabilities of sampling pi needs to be determined. The global maximum of the variance of a distribution mixture composed of the three individual distributions is Vmax ≈ 492.109. This upper
bound has been determined by maximising V0 in equation (3.15) with respect to the sampling probabilities p1, p2 and p3, using the numerical procedure in Appendix 3.2. According to the variance upper bound theorem, the upper bound of the variance of properties from sampling the inhomogeneous multi-run weld is attained from sampling not more than two microstructural zones. In our case the upper bound was attained from sampling the central and the reheated zones with probabilities p1 ≈ 0.5 and p3 ≈ 0.5. The mean of the distribution mixture which corresponds to the upper bound of the variance is μ = 0.5μ1 + 0.5μ3 = 84.27 J. In this way, a conservative estimate of the variance of the Charpy impact energy at a specified test temperature has been obtained, which will be used in Chapter 11 to determine the uncertainty associated with the location of the ductile-to-brittle transition region.

As can be verified from equation (3.15), the total uncertainty associated with the Charpy impact energy at a specified test temperature is caused by the natural variability of the impact energy from sampling different microstructural zones (aleatory uncertainty) and by the lack of knowledge of the probabilities pk with which the different microstructural zones are sampled (epistemic uncertainty). According to equation (3.15), the total uncertainty can be reduced significantly if a single microstructural zone (for example the central zone) is sampled. The probabilities of sampling then become p1 = 1, p2 = 0 and p3 = 0, and equation (3.15) transforms into V0 = σ1². In this case, the total uncertainty in the impact energy is caused only by the variability of the Charpy impact energy of the central zone. By sampling from a single microstructural zone, the variability associated with the Charpy impact energy is decreased substantially, because the sum of the squared differences Σ_{i<j} pi pj (μi − μj)² in equation (3.15) becomes zero.
APPENDIX 3.1

The upper bound Vmax of the variance V of a distribution mixture is obtained by maximising the general equation (3.6) with respect to pk, k = 1, ..., M:

Vmax = max_{p1,...,pM} [ Σ_{k=1}^{M} pk Vk + Σ_{i<j} pi pj (μi − μj)² ]    (3A.1)

The local extrema of expression (3A.1) can be determined using Lagrange multipliers, because of the equality constraint g(p1, ..., pM) ≡ (Σ_{k=1}^{M} pk) − 1 = 0. The necessary condition for an extremum of the variance given by expression (3.6) is

∂V/∂pk + λ ∂g/∂pk = 0,  k = 1, 2, ..., M    (3A.2)
where λ is the Lagrange multiplier. These M equations, together with the constraint Σ_{k=1}^{M} pk − 1 = 0, form a system of M + 1 linear equations in the M + 1 unknowns p1, ..., pM and λ:

p1 + p2 + ... + pM = 1

λ + p1(μk − μ1)² + p2(μk − μ2)² + ... + pk−1(μk − μk−1)² + pk+1(μk − μk+1)² + ... + pM(μk − μM)² = Vk

where k = 1, ..., M. This linear system can also be presented in vector form:

A p = V    (3A.3)

where

A = | 0    1            1            ...  1            |
    | 1    0            (μ1 − μ2)²   ...  (μ1 − μM)²   |
    | 1    (μ2 − μ1)²   0            ...  (μ2 − μM)²   |
    | ...  ...          ...          ...  ...          |
    | 1    (μM − μ1)²   (μM − μ2)²   ...  0            |

p = (λ, p1, p2, ..., pM)ᵀ  and  V = (1, V1, V2, ..., VM)ᵀ    (3A.4)
Matrix A is a Cayley–Menger matrix (Glitzmann and Klee, 1994). The determinant of this matrix is at the basis of a method for calculating the volume V of an N-simplex (in N-dimensional space). Suppose that an N-simplex has vertices v1, v2, ..., vN+1 and that d²km is the squared distance between vertices vk and vm. Let the matrix D be defined as

D = | 0    1           ...  1           |
    | 1    d²11        ...  d²1,N+1     |
    | ...  ...         ...  ...         |
    | 1    d²N+1,1     ...  d²N+1,N+1   |
The equation

V² = [(−1)^{N+1} / (2^N (N!)²)] det(D)
then gives the volume V of the N-simplex (Glitzmann and Klee, 1994; Sommerville, 1958). From this equation it is clear that if the Cayley–Menger determinant det(D) is zero, the volume of the simplex is also zero. The converse is also true: if the volume of the simplex is zero, the Cayley–Menger determinant is necessarily zero. As can be verified from our matrix A, given by (3A.4), the means μi can be considered as first-axis coordinates of the vertices of a simplex in an (M − 1)-dimensional space, with the other coordinates set to zero. The 'volume' of this simplex is clearly zero except in the one-dimensional case (M − 1 = 1), where the 'volume' of the simplex is simply the distance |μ1 − μ2| between the two means of the individual distributions (sources). As a consequence, the determinant of matrix A is always zero for M > 2 and the linear system (3A.3) has no solution.

We will now prove that no local maximum for the variance exists if exactly k > 2 individual distributions (sources) are sampled from a mixture distribution composed of M components (individual distributions). If k > 2 individual distributions are sampled, the sampling probabilities of these individual distributions must all be different from zero. Without loss of generality, let us assume that only the first k individual distributions are sampled, with probabilities p1 ≠ 0, p2 ≠ 0, ..., pk ≠ 0, Σ_{i=1}^{k} pi = 1 (pk+1 = 0, ..., pM = 0). Since the k individual distributions also form a mixture distribution and k > 2, the linear system (3A.3) has no solution, and therefore no local maximum exists. This means that the global maximum is attained somewhere on the boundary of the domain 0 ≤ p1 ≤ 1, ..., 0 ≤ pk ≤ 1, either for some ps = 0 or for some pt = 1. The case pt = 1, however, means that the rest of the sampling probabilities must be zero (pi = 0 for i ≠ t). In both cases, at least one of the sampling probabilities p1, p2, ..., pk must be zero. Without loss of generality, suppose that pk = 0. The same reasoning can be applied to the remaining k − 1 sources. If k − 1 > 2, some of the p1, p2, ..., pk−1 must be zero, and so on. If k = 2 (one-dimensional simplex), the matrix equation (3A.3) becomes

| 0  1            1            |   | λ  |   | 1  |
| 1  0            (μ1 − μ2)²   | × | p1 | = | V1 |    (3A.5)
| 1  (μ1 − μ2)²   0            |   | p2 |   | V2 |

Because in this case the Cayley–Menger determinant is equal to 2(μ1 − μ2)², a solution of the linear system (3A.5) now exists and a local maximum may be present. The solution of the linear system (3A.5) is

p1 = 0.5 + (V1 − V2) / [2(μ1 − μ2)²]    (3A.6)

p2 = 0.5 − (V1 − V2) / [2(μ1 − μ2)²]    (3A.7)
These values of p1 and p2 correspond to a local maximum in the domain 0 ≤ p1 ≤ 1, 0 ≤ p2 ≤ 1, only if |V1 − V2| < (μ1 − μ2)². If |V1 − V2| ≥ (μ1 − μ2)², the maximum is attained at the boundary of the domain, either for p1 = 1, p2 = 0 (Vmax = V1) or for p1 = 0, p2 = 1 (Vmax = V2), whichever is greater (Vmax = max{V1, V2}). If |V1 − V2| < (μ1 − μ2)², a local maximum of the variance exists for p1 and p2 given by relationships (3A.6) and (3A.7). The maximum value of the variance corresponding to these values is

Vmax = (V1 + V2)/2 + (V1 − V2)²/[4(μ1 − μ2)²] + (μ1 − μ2)²/4    (3A.8)
In short, the global maximum of the right-hand side of equation (3.6) is attained either from sampling a single source (individual distribution), in which case one of the sampling probabilities pi is unity and the rest are zero (pi = 1; pj = 0 for j ≠ i), or from sampling only two individual distributions k and m among all individual distributions composing the mixture distribution. In the latter case, pk ≠ 0 and pm ≠ 0 and the rest of the pi are zero (pi = 0 for i ≠ k and i ≠ m). If Vmax,k,m denotes the local maximum of the variance from sampling sources (individual distributions) k and m (k ≠ m), the global maximum Vmax of the right-hand side of equation (3.6) can be found from Vmax = max{V1, V2, ..., VM, Vmax,k,m}, where k = 2, ..., M and m = 1, ..., k − 1. Since there exist M(M − 1)/2 terms Vmax,k,m, the global maximum is determined after M + M(M − 1)/2 = M(M + 1)/2 checks. As can be verified from the algorithm presented in Appendix 3.2, the maximum of the variance is determined by two nested loops. The control variable i of the external loop takes on values from 2 to M (the number of sources), while the control variable j of the internal loop takes on values from 1 to i − 1.
APPENDIX 3.2 AN ALGORITHM FOR DETERMINING THE UPPER BOUND OF THE VARIANCE OF PROPERTIES FROM SAMPLING MULTIPLE SOURCES

The algorithm is in pseudo-code, where the statements in braces {op1; op2; op3; ...} are executed as a single block. A detailed description of the various control structures is given in Chapter 6. The variable max contains the largest variance at the end of the calculations; the constant M is the number of components (sources) composing the mixture distribution. Variables k_max and m_max contain the indices of the sources sampling from which yields the largest variance. If the maximum variance is attained from sampling a single source, k_max and m_max will both contain the index of this source.
Algorithm.

max = V[1]; k_max = 1; m_max = 1;
for i from 2 to M do
{
   if (max < V[i]) then do
   {
      max = V[i]; k_max = i; m_max = i;
      pk_max = 1; pm_max = 1;
   }
   for j from 1 to i-1 do
   {
      if |V[i] - V[j]| < (μ[i] - μ[j])^2 then do
      {
         candidate_max = V[i]/2 + V[j]/2 + (V[i] - V[j])^2 / (4(μ[i] - μ[j])^2)
                         + (μ[i] - μ[j])^2 / 4;
         if (max < candidate_max) then do
         {
            max = candidate_max; k_max = i; m_max = j;
            pk_max = 0.5 + (V[i] - V[j]) / (2(μ[i] - μ[j])^2);
            pm_max = 1 - pk_max;
         }
      }
   }
}
Variables pk_max and pm_max contain the probabilities of sampling the two sources which yield the largest variance. As can be verified, the statements in the internal loop are executed only if the condition |V[i] − V[j]| < (μ[i] − μ[j])² is fulfilled, which indicates a local maximum.
4 Building Reliability and Risk Models

Failure rate and failure mode data are of vital importance for reliability analyses. For the purposes of reliability predictions, failure rates should be quoted for specific failure modes. A reliability data record usually includes the following basic components (fields):

• Inventory field: gives information on the particular component or subsystem.
• Failure mode field: describes the particular failure event, including its effect on the operational state of the system.
• Failure time field: includes time to failure, fraction of failed items per unit time, number of demands and number of failures on demand, etc.
• Operational and environmental conditions.
Industry-specific data are related to a particular industry. An example of an industry-specific data source is the OREDA database (Offshore REliability DAta) (OREDA-1992) which contains field data. The database is a collection of offshore failure rate and failure mode data. Failure rates are listed for a wide range of offshore hardware components: electromechanical, mechanical and hydraulic components, and for a number of environmental applications.
4.1 GENERAL RULES FOR RELIABILITY DATA ANALYSIS
A basic initial step is to verify whether the data have been collected correctly, since the quality of field data varies between misleading and useful
data. Without good-quality data, however, there is no quality prediction, no matter how sophisticated the mathematics involved. Proper data collection should include data quality assurance. This involves auditing the source of data and specifying the procedures to be followed during data collection, recording and processing.

After data collection, exploratory data analysis is the next step. It involves summarising and examining the data in an exploratory way using plots, charts and histograms. The results from the preliminary data analysis are helpful for the next step, where an appropriate reliability and risk model is formulated. Let us examine the data summary of lifetimes in Figure 4.1, which clearly indicates bimodality. There is a well-defined distribution of the lives of the weak subpopulation and another distribution characterising the lives of the normal population. Consequently, none of the classical unimodal distributions will be appropriate for this particular data set.

Figure 4.1 Bimodality in life data

Another example is presented in Figure 4.2, where the dependence of the Charpy impact energy on the percentage of ductile fracture over the fracture surface of a Charpy specimen was studied using a scatter plot. A strong positive correlation exists between the absorbed impact energy and the percentage of ductile fracture, which suggests a possible connection between the two (Todinov et al., 2000). Indeed, ductile fracture is associated with a large amount of absorbed impact energy, as opposed to brittle fracture, which is associated with a small amount of impact energy.

Figure 4.2 Scatter plot revealing a strong positive correlation between the Charpy impact energy and the percentage of ductile fracture (Todinov et al., 2000)

Thorough understanding of the failure mechanisms and the factors which control failure is required by the next step: formulation of a reliability (risk) model. It is particularly important to verify whether the selected random variables indeed control reliability (risk). If a well-established model already exists, the model formulation may be reduced to assessing whether the new data conform to the existing model. In some cases models are obvious: for example, the homogeneous Poisson process for random arrivals. If a suitable model is not obvious, data can be presented in various plots which often suggest a suitable model. Another approach is to use a general model that is
likely to include a suitable one as a special case. Such is, for example, the Weibull distribution, which includes the negative exponential distribution as a special case.

Once the model has been formulated, it usually involves a number of parameters which need to be estimated from data. Figure 4.3 gives the basic types of probability density distributions of the estimates â of a particular parameter a. The quality of the estimates varies widely, from unbiased and efficient, characterised by a small error, to biased and inefficient, characterised by a large error (Figure 4.3).

Figure 4.3 Basic types of parameter estimates

Here are some of the objectives in building a statistical model. Following Chatfield (1998), a good statistical model should be:

• Physically based: the model should provide insight into the underlying physical mechanism of the modelled random factor.
• Parsimonious: the model should involve the smallest number of parameters necessary to describe the data set.
• Robust: small variations in the input data or in the estimates of the model parameters should lead to small variations in the model predictions.
• Capable of correct predictions outside the data range, while providing a good fit within the data range.

Model building is not about getting the best fit to the observed data. It is about constructing a model which is consistent with the underlying physical
mechanism of failure under consideration. The ability of a model to give a very good fit to a single data set may indicate little, because a good fit can be achieved simply by making the model more complicated (Chatfield, 1998), for example by over-parameterising. As a rule, over-parameterised models have poor predictive capability. The robustness of a statistical model means that the output of the model is relatively insensitive to small variations in the input data, for example a system reliability insensitive to small variations in the reliabilities of the components. According to the example in Chapter 1, in the case of a series system involving a very large number of identical components, small errors in the reliability estimate for the individual component give rise to a large error in the system reliability estimate. A basic component of the final step of model building is model validation. It involves comparing the predictions of the model, under the range of conditions for which it holds, with further experimental observations (field data).
4.2 PROBABILITY PLOTTING
Probability plotting provides quick goodness-of-fit evidence of whether a data set is consistent with the conjectured model. To construct a probability plot, the observations are first ranked in ascending order: x(1), x(2), ..., x(n). This can be done by sorting the data values x1, x2, ..., xn. If the number n of data values is
large, sorting can be performed by some of the well-known methods (Quicksort, Heapsort, Bubble sort, Insertion, Shell, etc.) covered in the literature dealing with computer algorithms (Horowitz and Sahni, 1997). Next, an approximation F̂i of the cumulative distribution function F(t) is made. The plotting positions F̂i = (i − 0.5)/n, where i are the indices of the ordered observations, provide a good choice for most applications. If the probability plot is subsequently used for estimating the parameters of the conjectured distribution, the best plotting positions will depend on the assumed model (Meeker and Escobar, 1998).

A basic result in probability theory states that the empirical cumulative distribution function F̂(x) converges in a probabilistic sense, as n grows, to the true cumulative distribution function F(x) (DeGroot, 1989). In other words, for a large sample the empirical cumulative distribution is close to the true cumulative distribution with high probability. At each value xi, the empirical cumulative distribution function has a jump of height 1/n, where n is the number of experimental observations (Figure 4.4).

Figure 4.4 Building the empirical cumulative distribution of a random variable

An empirical cumulative distribution can be built, for example, for the Charpy impact energy from the microstructural zones characterising multi-pass C–Mn welds (Figure 4.5). Clearly, the empirical distributions in Figure 4.5 provide strong evidence that the separate microstructural zones in multi-run C–Mn welds are characterised by distinct distributions of the impact energy.

Figure 4.5 Cumulative empirical distribution of the Charpy impact energy characterising (a) sampling from the separate microstructural zones of C–Mn multi-run welds; (b) sampling from all microstructural zones (Todinov et al., 2000)

The cumulative distribution function F(t) can be linearised by applying an appropriate transformation of the coordinate axes t and F(t). Let the linear transformation be defined by z = kx + c, where z = ψ(F(t)) and x = φ(t) are functions of F(t) and t, respectively. If zi = ψ(F̂(ti)) are now plotted against xi = φ(ti), the plotted points will fall approximately along a straight line if the conjectured statistical distribution F(t) does indeed describe the data set. If the plotted points deviate systematically from a straight line, the conjectured distribution is not consistent with the observations. A linear transformation is used because deviations from a straight
line, indicating a departure from the conjectured model, are easily identifiable. Fitting a straight line to the transformed data can be done using linear regression. Once a candidate distribution has been identified, the next step is to estimate its parameters. Usually, the parameter estimates are obtained using the method of maximum likelihood. Probability plots, however, can also be used for estimating model parameters.
4.2.1 TESTING FOR CONSISTENCY WITH THE UNIFORM DISTRIBUTION MODEL
Let the ordered sample values x(1), x(2), ..., x(n) come from a uniform distribution with probability density function defined by 1/(b − a) in the interval [a, b] and zero elsewhere. The cumulative distribution function F(x) = (x − a)/(b − a) can be presented as z = kx + c, where z = F(x), k = 1/(b − a) and c = −a/(b − a). After the cumulative distribution function has been approximated by F̂i = i/(n + 1), where n is the number of observations, the values zi = i/(n + 1) are plotted against the ordered observations x(1), x(2), ..., x(n). If the values do indeed come from a uniform distribution, the points will fall approximately along a straight line. Estimates â = −c*/k* and b̂ = (1 − c*)/k* of the true parameters a and b can be obtained from the slope k* and the intercept c* of the best-fit straight line. The best-fit line is usually determined using the method of least squares.
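A minimal sketch of this check (synthetic data; all names and parameter values are assumed):

import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(3.0, 9.0, size=200))   # ordered sample; true a = 3, b = 9
n = len(x)
z = np.arange(1, n + 1) / (n + 1)              # plotting positions i/(n + 1)

k, c = np.polyfit(x, z, 1)                     # slope k* and intercept c* of the best-fit line
print(-c / k, (1 - c) / k)                     # estimates of a and b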
4.2.2 TESTING FOR CONSISTENCY WITH THE EXPONENTIAL MODEL
In order to check whether a set of observations is consistent with the negative exponential distribution, an exponential probability plot can be made using an appropriate linear transformation of the axes. Since the cumulative distribution function of the exponential distribution is F(t) = 1 − exp(−λt), by taking a logarithm the negative exponential distribution can be transformed into an equation of a straight line:

z = −ln(1 − F(t)) = λt    (4.1)
Next, the data points are ranked in ascending order, t(1), t(2), ..., t(n), and the plotting positions are obtained from F̂i = i/(n + 1), which estimates the probability that t will be smaller than t(i). If the data set does come from an exponential distribution, the values zi = −ln(1 − F̂i) plotted against t(i), i = 1, 2, ..., n, should be close to a straight line. The slope λ̂ of the best-fit straight line is an estimate of the unknown parameter λ in the negative exponential distribution.
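A minimal sketch of the exponential probability plot (synthetic data; names and values assumed):

import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.exponential(scale=1/0.2, size=300))   # sample with true lambda = 0.2
n = len(t)
Fi = np.arange(1, n + 1) / (n + 1)                    # plotting positions i/(n + 1)
z = -np.log(1 - Fi)                                   # equation (4.1)

lam_hat = np.polyfit(t, z, 1)[0]                      # slope of z versus t(i)
print(lam_hat)                                        # close to 0.2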
4.2.3 TESTING FOR CONSISTENCY WITH THE WEIBULL DISTRIBUTION
Weibull analysis may be carried out using a very small number of observations (Abernethy, 1994; Dodson, 1994). The Weibull distribution F(t) = 1 − exp[−(t/η)^m] is linearised first, by taking a double logarithm:

ln ln[1/(1 − F(t))] = m ln t − m ln η    (4.2)

This is an equation of the straight line z = mx + c, where

z = ln ln[1/(1 − F(t))],  x = ln(t)  and  c = −m ln η

The ordered observations are t(1) < t(2) < ... < t(n), and the median rank approximations for the Weibull cumulative distribution function are F̂i = (i − 0.3)/(n + 0.4), where i = 1, 2, ..., n are the indices of the ordered observations and n is the number of observations. From the estimated F̂i values, the values zi = ln[ln(1/(1 − F̂i))] are determined. Using the method of least squares, the slope m̂ and the intercept ĉ of the best-fit straight line (Figure 4.6) are obtained from

m̂ = Σ_{i=1}^{n} zi (xi − x̄) / Σ_{i=1}^{n} (xi − x̄)²    (4.3)

and

ĉ = z̄ − m̂ x̄    (4.4)
Figure 4.6 A probability plot for a two-parameter Weibull distribution
where x̄ = (1/n) Σ_{i=1}^{n} ln ti, z̄ = (1/n) Σ_{i=1}^{n} zi and xi = ln(ti) (Draper and Smith, 1981). The value m̂ is an estimate of the exponent m in the Weibull distribution, and η̂ = exp(−ĉ/m̂) is an estimate of the characteristic life η.
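The same procedure in code (a sketch with synthetic data; names and values assumed):

import numpy as np

rng = np.random.default_rng(4)
t = np.sort(40.0 * rng.weibull(2.5, size=100))   # true m = 2.5, eta = 40
n = len(t)
Fi = (np.arange(1, n + 1) - 0.3) / (n + 0.4)     # median rank approximations

x = np.log(t)
z = np.log(np.log(1.0 / (1.0 - Fi)))
m_hat, c_hat = np.polyfit(x, z, 1)               # equations (4.3) and (4.4)
print(m_hat, np.exp(-c_hat / m_hat))             # estimates of m and eta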
4.2.4 TESTING FOR CONSISTENCY WITH THE TYPE I EXTREME VALUE MODEL
Suppose that the ordered observations are x(1) < x(2) < x(3) < ... < x(n). The median rank approximations of the cumulative distribution function of the type I extreme value distribution are F̂i = (i − 0.4)/(n + 0.2). If the extreme value distribution F(x) = exp[−exp(−(x − δ)/θ)] is approximated by F̂i = exp(−exp(−(x(i) − δ)/θ)), taking natural logarithms twice gives

x(i) = δ + θ[−ln(−ln F̂i)]    (4.5)
After determining zi = −ln(−ln F̂i), the values x(i) are plotted against zi. If the plotted points fall approximately along a straight line, evidence is obtained in support of the conjecture that the observations are consistent with the maximum extreme value model. In the case of a good fit, the slope and the intercept of the best-fit line are estimates of the unknown parameters θ and δ, respectively.
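A corresponding sketch for the type I extreme value plot (synthetic data; names and values assumed):

import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.gumbel(loc=60.0, scale=5.0, size=150))   # true delta = 60, theta = 5
n = len(x)
Fi = (np.arange(1, n + 1) - 0.4) / (n + 0.2)             # median rank approximations
z = -np.log(-np.log(Fi))                                 # equation (4.5)

theta_hat, delta_hat = np.polyfit(z, x, 1)               # x(i) = delta + theta*z(i)
print(theta_hat, delta_hat)                              # close to 5 and 60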
4.2.5 TESTING FOR CONSISTENCY WITH THE NORMAL DISTRIBUTION
Let the ordered sample values be x(1), x(2), ..., x(n). If the sample x(1), x(2), ..., x(n) comes from a normal distribution with mean μ and standard deviation σ, zi = (x(i) − μ)/σ will follow the standard normal distribution. Estimates of zi for n observations can be obtained from the standardised normal scores, which satisfy

(i − 0.5)/n = P(Z ≤ zi) = Φ(zi)    (4.6)

where Φ(·) is the cumulative distribution function of the standard normal distribution. The normal scores zi can be determined from zi = Φ⁻¹((i − 0.5)/n), where Φ⁻¹(·) is the inverse of Φ(·). Values of Φ⁻¹(·) are tabulated or obtained using numerical methods. The normal probability plot is constructed by plotting the standardised normal scores zi against x(i). If the data x(i) do come from a normal distribution, plotting x(i) versus the standard normal scores zi = Φ⁻¹((i − 0.5)/n) produces points falling
approximately along the straight line x(i) = σzi + μ, with slope σ and intercept μ. In the case of a good linear fit, which indicates that the data are consistent with a normal distribution, the standard deviation and mean can be estimated from the slope and the intercept of the best-fit straight line.

In Figure 4.7, the normal probability plotting technique is illustrated with Charpy impact energy data related to the case where the Charpy V-notch has sampled every single microstructural zone of a multi-run weld (mixed sampling). The systematic deviations of the plotted points from a straight line indicate that the normal distribution is not an appropriate model for the Charpy impact energy of inhomogeneous multi-run welds (Todinov et al., 2000).

Figure 4.7 Normal plots of the Charpy impact energy of C–Mn multi-run welds at different test temperatures: (a) 50 °C; (b) −20 °C; (c) 0 °C; (d) −40 °C; mixed sampling (Todinov et al., 2000)
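A sketch of the normal probability plot calculation (synthetic data; scipy is used only for the inverse standard normal Φ⁻¹):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.sort(rng.normal(loc=80.0, scale=12.0, size=120))  # true mu = 80, sigma = 12
n = len(x)
z = norm.ppf((np.arange(1, n + 1) - 0.5) / n)            # standardised normal scores

sigma_hat, mu_hat = np.polyfit(z, x, 1)                  # x(i) = sigma*z(i) + mu
print(sigma_hat, mu_hat)                                 # close to 12 and 80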
4.3 ESTIMATING MODEL PARAMETERS USING THE METHOD OF MAXIMUM LIKELIHOOD
Suppose that the conjectured model (statistical distribution) f(x, a) depends on a single parameter a. If the likelihood function L(x1, x2, ..., xn; a) = Π_{i=1}^{n} f(xi, a) has been defined, the maximum likelihood estimate â of the true parameter a is the value for which the likelihood function attains a maximum:

L(x1, x2, ..., xn; â) = max_a [ Π_{i=1}^{n} f(xi, a) ]    (4.7)
Maximising the logarithm of the likelihood function yields the same maximum likelihood estimate â:

max_a L(x1, x2, ..., xn; a) = max_a ln[L(x1, x2, ..., xn; a)]

Setting the log-likelihood derivative to zero,

d ln L(x1, x2, ..., xn; a)/da |_{a=â} = 0

and solving the equation analytically with respect to a to find a local maximum, or maximising the log-likelihood function numerically, yields the estimate â. The method of maximum likelihood will be illustrated by estimating the parameter λ of the negative exponential distribution. Suppose that probability plotting has indicated that the data sample (x1, x2, ..., xn) is compatible with the negative exponential distribution λ exp(−λx). In order to obtain a maximum likelihood estimate of the only model parameter λ, the log-likelihood function is constructed first:

ln L = Σ_{i=1}^{n} ln(λ exp(−λxi)) = n ln λ − λ Σ_{i=1}^{n} xi    (4.8)
Differentiating the log-likelihood function with respect to λ gives d ln L/dλ = n/λ − Σ_{i=1}^{n} xi. Setting the derivative to zero and solving d ln L/dλ = n/λ − Σ_{i=1}^{n} xi = 0 with respect to λ results in

λ̂ = n / Σ_{i=1}^{n} xi    (4.9)

For this value, the second derivative d² ln L/dλ² = −n/λ² is negative; therefore λ̂ corresponds to a local maximum, which is also the global maximum. The value λ̂ given by equation (4.9) is the maximum likelihood estimate of the unknown parameter λ. Estimating the parameters of the models
discussed earlier can also be done using the method of maximum likelihood. Usually, the log-likelihood function cannot be maximised analytically, except in cases involving a very small number of model parameters. Numerical optimisation methods are widely used as an alternative.
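As an illustration (synthetic data; all names and values assumed), the analytic estimate of equation (4.9) can be compared with a direct numerical maximisation of the log-likelihood (4.8):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.exponential(scale=1/0.5, size=200)     # sample with true lambda = 0.5

def neg_log_lik(lam):
    # negative of equation (4.8): -(n*ln(lambda) - lambda*sum(x))
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method='bounded')
print(len(x) / x.sum(), res.x)                 # equation (4.9) versus the numerical optimum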
4.4 ESTIMATING THE PARAMETERS OF A THREE-PARAMETER POWER LAW
From a set of measurements yi at the values xi, i = 1, 2, ..., N, of the controlled variable, we would like to obtain unbiased and efficient estimates of the parameters of the power law

y = k(x − x0)^m    (4.10)
A method based on linear correlation has been proposed in Todinov (2001a). An estimate of the unknown exponent m in equation (4.10) is obtained by first transforming equation (4.10) into

y^{1/m} = k^{1/m}(x − x0)    (4.11)
Let us denote z = y^{1/m}. The values of the unknown exponent m are varied in the interval 0 < m ≤ m_max, where m_max is an appropriately selected upper bound. Denoting zi = yi^{1/m}, the values zi are plotted versus the observations xi and, for each value of m, the correlation coefficient

ρ(m) = Σ_{i=1}^{N} (zi − z̄)(xi − x̄) / (sz sx) = Σ_{i=1}^{N} zi (xi − x̄) / (sz sx)    (4.12)

is determined, where

sz = [ Σ_{i=1}^{N} (zi − z̄)² ]^{1/2}    (4.13)

and

sx = [ Σ_{i=1}^{N} (xi − x̄)² ]^{1/2}    (4.14)

The value m̂ for which the maximum value of the correlation coefficient of zi versus xi is attained is an estimate of the unknown exponent m.
Next, the values zi corresponding to the best estimate m̂ are determined from zi = yi^{1/m̂}, i = 1, ..., N, and the slope

p = Σ_{i=1}^{N} zi (xi − x̄) / Σ_{i=1}^{N} (xi − x̄)²    (4.15)

and the intercept

c = z̄ − p x̄    (4.16)

of the best-fit straight line are finally determined. The parameters k and x0 in equation (4.10) are estimated from the estimated slope p = k^{1/m̂} and intercept c = −k^{1/m̂} x0 of the best-fit straight line:

k̂ = p^{m̂}    (4.17)

and

x̂0 = −(c/p)    (4.18)
and
It is important to point out that the true value of the location parameter x0 ^ of the shape parameter m. Indeed, let xi be does not affect the estimate m presented as xi ¼ x0 þ xi where xi is xi and x0. xi Pthe distance between P does not depend on x0. The expressions i (xi x) and i (xi x)2 entering in equation (4.12) for the correlation coefficient then become X ðxi xÞ and i
X X ðxi xÞ2 , x ¼ ð1=NÞ xi i
!
i
Since neither of these depends on x0, the value of the location parameter x0 does not affect the estimate of the unknown exponent m. This constitutes the strength of the proposed method: estimating the shape of the power curve defined by equation (4.10) is separated from estimating its location along the x-axis. The method yields practically unbiased estimates for the parameters (Todinov, 2001a). Moreover, the precision with which the parameters are estimated is very good even in the case of large scatter of the yi values for the values xi of the argument.

Using appropriate transformations, a number of three-parameter models can be reduced to the power law (4.10) and the unknown parameters estimated using the proposed technique. Such is, for example, the important three-parameter model

F(x) = 1 − exp[−k(x − x0)^m]    (4.19)

which can be reduced to the three-parameter power law (4.10) by presenting it as exp[−k(x − x0)^m] = 1 − F(x) and taking a logarithm of both sides. As a result, equation (4.10) is obtained, where y = −ln[1 − F(x)].
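A compact sketch of the correlation method (synthetic data; the grid of m values and all names are assumptions of this illustration):

import numpy as np

rng = np.random.default_rng(8)
k_true, x0_true, m_true = 2.0, 5.0, 1.7
x = np.linspace(6.0, 20.0, 60)
y = k_true * (x - x0_true)**m_true * rng.lognormal(0.0, 0.05, x.size)  # noisy data

def corr(z, x):
    zc, xc = z - z.mean(), x - x.mean()
    return (zc @ xc) / np.sqrt((zc @ zc) * (xc @ xc))    # equation (4.12)

m_grid = np.linspace(0.1, 5.0, 2000)                     # 0 < m <= m_max
m_hat = max(m_grid, key=lambda m: corr(y**(1.0/m), x))   # maximise rho(m)

p, c = np.polyfit(x, y**(1.0/m_hat), 1)                  # slope (4.15), intercept (4.16)
print(m_hat, p**m_hat, -c/p)                             # estimates of m, k and x0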
4.4.1 SOME APPLICATIONS OF THE THREE-PARAMETER POWER LAW
The three-parameter power law can, for example, be used to describe the corrosion kinetics of structural components. Thus, the model

d = k(t − t0)^m    (4.20)
has been adopted by Dianqing et al. (2004) to describe the corrosion of ship structures. In this model, d is the corroded thickness, t is time, t0 is the coating life, and k and m are constants. If N measurements di of the corroded thickness have been collected at N different times ti, i = 1, 2, ..., N, the unknown parameters in equation (4.20) can be estimated using the technique from the previous section.

The model (4.10) has also been used to fit the systematic variation of the ductile-to-brittle transition temperature of steels (Todinov, 2001a). The systematic variation E(x) of the Charpy impact energy (Figure 4.8) was modelled by

E(x) = EL + (EU − EL)F(x)    (4.21)
where x denotes temperature, EL and EU are the lower- and upper-shelf Charpy impact energies (estimated from experimental data) and F(x) is the normalised Charpy impact energy [E(x) − EL]/(EU − EL), modelled by equation (4.19).

Figure 4.8 Systematic variation of the Charpy impact energy with temperature
If x in equation (4.19) denotes 'time', the equation can be used for modelling the time evolution of a transformed phase produced by nucleation and growth. For example, the quantity of transformed phase from a phase transformation with constant nucleation rate and radial growth rate is described by the Kolmogorov–Johnson–Mehl–Avrami (KJMA) equation (Christian, 1965), whose functional form is again equation (4.19). The equation can also be used for modelling the time evolution of sticking/jamming forces between sliding surfaces. Pilot experiments have indicated that equation (4.21) describes well the time evolution of the force necessary to separate by shear two jammed sliding surfaces. In this case, EL in equation (4.21) corresponds to the minimum level of the jamming force and EU corresponds to 100% jamming, attained after a significant amount of time; F(x) is given by equation (4.19).
5 Load–Strength (Demand–Capacity) Models
5.1 A GENERAL RELIABILITY MODEL
Reliability is often derived from the probability of violating a limit (failure) state. A limited number of random variables X1, X2, ..., Xn usually control the reliability of the system. Such are, for example, the critical design parameters: material properties, dimensions and loads. Particular examples of random variables controlling reliability are the fracture toughness, the number density and size of the flaws, and the stress and strain magnitudes. The random variables may not necessarily be statistically independent.

Any set x1, x2, ..., xn of values of the controlling random variables can be presented as a point in the n-dimensional variable space. The failure region F contains all realisations (combinations of values of the controlling variables) that result in failure (failure states), as opposed to the safe region S, which contains all realisations that do not result in failure. The surface L(x1, x2, ..., xn) = 0 which divides the variable space into a failure region and a safe region is referred to as the failure surface. The points on the failure surface are also failure states. Suppose that the controlling random variables are characterised by a joint probability density function f(x1, x2, ..., xn). The reliability on demand is then given by the integral

R = ∫∫...∫_{x1,...,xn ∈ S} f(x1, x2, ..., xn) dx1 dx2 ... dxn    (5.1)
where the integration is carried out only over the safe region S (Melchers, 1999). In other words, the integration of the probability density function is performed only at points where no failure can occur. As a result, the sum of the probabilities over the safe region must necessarily equal the reliability on demand. If the controlling random variables are statistically independent, the joint probability density function f(x1, x2, ..., xn) can be factorised:

f(x1, x2, ..., xn) = f(x1) f(x2) ... f(xn)

and the reliability integral (5.1) becomes

R = ∫∫...∫_{x1,...,xn ∈ S} f(x1) f(x2) ... f(xn) dx1 dx2 ... dxn    (5.2)
where f(x1), f(x2), ..., f(xn) are the marginal probability densities of the controlling random variables. Significant difficulties arise in solving the general reliability models (5.1)-(5.2), some of which can be summarised as follows:

1. An insufficient amount of data to define the joint probability distribution of the controlling random variables.
2. Difficulties in defining the integration domain (the safe region) S.
3. The multidimensional numerical integration is extremely time-consuming.

These difficulties no longer exist if Monte Carlo simulation is employed to evaluate the reliability integral, as the sketch below illustrates.
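A minimal Monte Carlo sketch (the limit state and the distributions are assumed for illustration and are not taken from the text):

import numpy as np

rng = np.random.default_rng(9)
N = 1_000_000

# Assumed example: strength X1 ~ Normal(500, 40) and load X2 ~ Normal(350, 60),
# statistically independent; the safe region S is X1 - X2 > 0
x1 = rng.normal(500.0, 40.0, N)
x2 = rng.normal(350.0, 60.0, N)

R = np.mean(x1 - x2 > 0.0)    # fraction of realisations falling in the safe region
print(R)                      # Monte Carlo estimate of the reliability on demand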
5.2 THE LOAD–STRENGTH INTERFERENCE MODEL
A common framework for predicting structural/mechanical reliability at a component level is the load–strength interference model (Freudenthal, 1954), which is related to the interaction of the load distribution and the strength distribution. If strength and load were constant, as shown in Figure 5.1, no failure would occur (O'Connor, 2003). Because of the variability of the load and strength, and the interference (overlap) between the load distribution and the strength distribution, the reliability on demand is smaller than 100%. The reliability on demand is determined by the probability of a relative configuration of the load and strength in which the load is smaller than the strength (L < S). Reliability on demand is thus controlled by two random variables, load (L) and strength (S), characterised by distinct distributions (Freudenthal, 1954).

Figure 5.1 Constant values for the load and strength

Numerous failure modes can be described conveniently in the context of the load–strength interference (see, for example, Carter, 1986). Such are, for example,
all overstress failures, occurring whenever load exceeds resistance: (i) damage exceeding damage resistance; (ii) demand exceeding supply; (iii) loading stress exceeding yield strength; (iv) local stress exceeding local fracture toughness; (v) jamming (sticking) force exceeding applied force; (vi) corrosion damage exceeding corrosion resistance, etc. (Figure 5.2).

Figure 5.2 The universal nature of the load–strength interference

The load–strength interference model is a special case of the general reliability model in which only two controlling random variables are present: X1 (strength) and X2 (load). In this case, the failure surface is L(X1, X2) ≡ X1 − X2 = 0. Suppose that the load–strength joint probability density function is f(x1, x2) and the strength and the load are distributed in the domain defined by x1,min ≤ x1 ≤ x1,max and x2,min ≤ x2 ≤ x2,max, respectively (Figure 5.3). The values x1,min and x1,max are the lower and upper limits of the strength, while x2,min and x2,max are the lower and upper limits of the load. According to the general reliability model (5.1), the reliability on demand is given by the integral

R = ∫∫_{x1,x2 ∈ S} f(x1, x2) dx1 dx2    (5.3)
over the safe region S of the rectangular area in Figure 5.3 where X1 − X2 > 0. If strength and load are statistically independent random variables, the reliability integral becomes

R = ∫∫_{x1,x2 ∈ S} f(x1) f(x2) dx1 dx2    (5.4)

Figure 5.3 Failure region, safe region and failure surface in load–strength interference
5.3 LOAD–STRENGTH (DEMAND–CAPACITY) INTEGRALS
In the derivations to follow, load and strength will be denoted by the indices L and S. Their probability density functions will be denoted by fL(x) and fS(x) and their cumulative distribution functions by FL(x) and FS(x), respectively. If a load–strength interference is present, the reliability on demand R equals the probability that the strength will be greater than the load. The probability that the strength will be in the infinitesimal interval x, x + dx is fS(x) dx. The probability of the compound event, that the strength will be in the infinitesimal interval x, x + dx and the load will not be greater than x, is FL(x) fS(x) dx. Because the strength can be in any infinitesimally small interval x, x + dx between the lower bound Smin and the upper bound Smax (Figure 5.4a), according to the total probability theorem, integrating FL(x) fS(x) dx gives the reliability on demand R:

R = ∫_{Smin}^{Smax} fS(x) FL(x) dx    (5.5)
For the lower and the upper bounds $S_{\min}$ and $S_{\max}$ of the strength, $f_S(x) = 0$ holds for $x \le S_{\min}$ and $x \ge S_{\max}$.
Figure 5.4 Possible safe configurations of the load and strength. Integration is performed between (a) Smin and Smax – the lower and the upper bound of the strength; (b) Lmin and Lmax – the lower and the upper bound of the load
An alternative form of the load–strength interference integral can be derived if the integration is performed between the lower and the upper bound of the load (Figure 5.4b). The probability that the load will be in the infinitesimal interval $(x, x + dx)$ is $f_L(x)\, dx$. The probability of the compound event that the load will be in the infinitesimal interval $(x, x + dx)$ and the strength will be greater than $x + dx$ is $[1 - F_S(x)] f_L(x)\, dx$. Because the load can be in any infinitesimal interval $(x, x + dx)$ between its lower bound $L_{\min}$ and upper bound $L_{\max}$, according to the total probability theorem, integrating $[1 - F_S(x)] f_L(x)\, dx$ gives the reliability on demand:

$$R = \int_{L_{\min}}^{L_{\max}} f_L(x)\,[1 - F_S(x)]\, dx = 1 - \int_{L_{\min}}^{L_{\max}} f_L(x)\, F_S(x)\, dx \qquad (5.6)$$
For the lower and upper bounds $L_{\min}$ and $L_{\max}$ of the load, $f_L(x) = 0$ holds for $x \le L_{\min}$ and $x \ge L_{\max}$.

Suppose that the component is subjected to statistically independent and identically distributed loads. For n load applications, the probability that the loads $L_1, L_2, \ldots, L_n$ will all be smaller than a particular value x of the strength is

$$P(L_1 \le x \text{ and } L_2 \le x \ldots \text{ and } L_n \le x) = F_L^n(x) \qquad (5.7)$$

where $F_L(x)$ is the cumulative distribution of the load. The probability of the compound event that the strength will be in the infinitesimal interval $(x, x + dx)$ and the loads from all load applications will be smaller than x is $F_L^n(x) f_S(x)\, dx$. Because the strength can be in any infinitesimally small interval $(x, x + dx)$ between $S_{\min}$ and $S_{\max}$, according to the total probability theorem, integrating $F_L^n(x) f_S(x)\, dx$ gives

$$R = \int_{S_{\min}}^{S_{\max}} f_S(x)\, F_L^n(x)\, dx \qquad (5.8)$$
Figure 5.5 Relative configurations of the supply and demand which guarantee that supply exceeds demand by a quantity larger than a
for the reliability on demand associated with n load applications.

Often, particularly in problems related to demand and supply, the question of interest is the probability that supply exceeds demand by a value greater than a (Figure 5.5). In what follows, supply and demand will be denoted by the indices S and D. Their probability density functions will be denoted by $f_S(x)$ and $f_D(x)$ and their cumulative distribution functions by $F_S(x)$ and $F_D(x)$, respectively. The probability of the compound event that the supply will be in the infinitesimal interval $(x, x + dx)$ and the demand will be smaller than or equal to $x - a$ is $F_D(x - a) f_S(x)\, dx$. According to the total probability theorem, integrating $F_D(x - a) f_S(x)\, dx$ yields

$$R = \int_{S_{\min}}^{S_{\max}} f_S(x)\, F_D(x - a)\, dx \qquad (5.9)$$
for the probability that supply will exceed demand by a quantity larger than a. $S_{\min}$ and $S_{\max}$ are the lower and upper bounds of the supply, where $f_S(x) = 0$ for $x \le S_{\min}$ and $x \ge S_{\max}$.
5.4 CALCULATING THE LOAD–STRENGTH INTEGRAL USING NUMERICAL METHODS
The maximum extreme value distribution is often a good model for the maximum load. If the cumulative distribution function of the loading stress is given by

$$F_L(x) = \exp\left[-\exp\left(-\frac{x - \mu}{\sigma}\right)\right] \qquad (5.10)$$

and the probability density function of the strength by the three-parameter Weibull distribution

$$f_S(x) = \frac{m}{\eta}\left(\frac{x - x_0}{\eta}\right)^{m-1} \exp\left[-\left(\frac{x - x_0}{\eta}\right)^m\right] \qquad (5.11)$$
the reliability on demand can be determined from

$$R = \int_{S_{\min}}^{S_{\max}} F_L(x)\, f_S(x)\, dx \qquad (5.12)$$
This integral can be solved numerically using Simpson's method (Cheney and Kincaid, 1999). This approach will be illustrated by a numerical example related to calculating the risk of failure of a critical component.

Example 5.1. A data set is given regarding the strength of a component (yield strength, fracture toughness, bending strength, fatigue strength, etc.). The appropriate model for the strength was found to be the three-parameter Weibull distribution (5.11) with parameters

m = 3.9, η = 297.3 and x0 = 200 MPa

A data set is also given regarding the maximum load over a number of consecutive time intervals (days, months, years). The measurements have been transformed into a set of calculated maximum loading stresses over the specified time intervals. The appropriate model for the load was found to be the maximum extreme value distribution (5.10) with parameters

μ = 119.0 and σ = 73.64

Find the probability of failure of the critical component.

Solution: Using the numerical values of the parameters, and a sufficiently large value $S_{\max} = 3000$ MPa for the upper integration limit of the strength, the value 0.985 was calculated for the reliability on demand:

$$R = \int_{200}^{3000} \exp\left[-\exp\left(-\frac{x - 119}{73.64}\right)\right] \frac{3.9}{297.3}\left(\frac{x - 200}{297.3}\right)^{2.9} \exp\left[-\left(\frac{x - 200}{297.3}\right)^{3.9}\right] dx = 0.985$$
The required probability of failure is 1 − 0.985 = 0.015. In Chapter 6, this probability will be confirmed by a Monte Carlo simulation.
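As an illustration, a minimal self-contained C++ sketch of this calculation is given below, using a composite Simpson's rule with the integrand built from equations (5.10) and (5.11). The function names and the number of subintervals are choices made here, not part of the original text.

#include <cmath>
#include <cstdio>

// Cumulative distribution of the maximum load, equation (5.10)
double F_L(double x) { return exp(-exp(-(x - 119.0) / 73.64)); }

// Three-parameter Weibull probability density of the strength, equation (5.11)
double f_S(double x) {
    const double m = 3.9, eta = 297.3, x0 = 200.0;
    if (x <= x0) return 0.0;
    double t = (x - x0) / eta;
    return (m / eta) * pow(t, m - 1.0) * exp(-pow(t, m));
}

// Integrand of the reliability integral (5.12)
double integrand(double x) { return F_L(x) * f_S(x); }

int main() {
    const double a = 200.0, b = 3000.0; // integration limits Smin, Smax
    const int n = 2000;                 // number of subintervals (even)
    double h = (b - a) / n, sum = integrand(a) + integrand(b);
    for (int i = 1; i < n; i++)
        sum += integrand(a + i * h) * ((i % 2 == 1) ? 4.0 : 2.0);
    double R = sum * h / 3.0;           // composite Simpson's rule
    printf("Reliability on demand R = %.3f\n", R); // approx. 0.985 (cf. Example 5.1)
    return 0;
}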
5.5 NORMALLY DISTRIBUTED AND STATISTICALLY INDEPENDENT LOAD AND STRENGTH
Load (L) and strength (S) are assumed to be statistically independent, normally distributed random variables $N(\mu_L, \sigma_L^2)$ and $N(\mu_S, \sigma_S^2)$, where $\mu_L$ and $\mu_S$ are the means and $\sigma_L$ and $\sigma_S$ are the standard deviations of the load and strength distributions, respectively. The random variable $y = S - L$ is normally distributed because y is a sum of normally distributed random variables. Its expected value is

$$\mu_y = E(y) = E(S) - E(L) = \mu_S - \mu_L \qquad (5.13)$$

and its variance is

$$V(y) = \sigma_y^2 = V(S) + (-1)^2 V(L) = \sigma_S^2 + \sigma_L^2 \qquad (5.14)$$

Because y is normally distributed, with mean $\mu_y$ and standard deviation $\sigma_y$ (Figure 5.6), the probability that $y \le 0$ (the probability of failure) can be found using the linear transformation $z = (0 - \mu_y)/\sigma_y$. This is needed for calculating the probability $P(y \le 0)$ using the standard normal distribution: $P(y \le 0) = \Phi(z)$. Setting

$$\beta = \mu_y/\sigma_y = \frac{\mu_S - \mu_L}{\sqrt{\sigma_S^2 + \sigma_L^2}} \qquad (5.15)$$

gives $P(y \le 0) = \Phi(-\beta)$ for the probability of failure, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. In equation (5.15), the quantity $\beta$, also known as the reliability index or safety margin, is an important reliability parameter which measures the relative separation of load and strength. The reliability on demand is determined from

$$R = 1 - \Phi(-\beta) = 1 - [1 - \Phi(\beta)] = \Phi(\beta) \qquad (5.16)$$
Figure 5.6 A normal probability density distribution of the difference y = S − L
A table containing the area under the standard normal probability density function is given in Appendix C. We must point out immediately, however, that the safety margin has a meaning only for normally distributed load and strength. Later, we shall demonstrate that in the general case, a low safety margin does not necessarily mean low reliability and vice versa.

Example 5.2. The strength of a structural component is normally distributed with mean 800 MPa and standard deviation 40 MPa. The load is also normally distributed with mean 700 MPa and standard deviation 30 MPa. Calculate the reliability on demand of the component if the load and strength are statistically independent.

Solution: For statistically independent load and strength, the reliability index is

$$\beta = \frac{\mu_S - \mu_L}{\sqrt{\sigma_S^2 + \sigma_L^2}} = \frac{800 - 700}{\sqrt{40^2 + 30^2}} = 2$$

According to equation (5.16), the reliability on demand is $R = \Phi(2) = 0.977$.
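A minimal C++ sketch of this calculation is shown below; $\Phi$ is evaluated through the standard library function erfc, and the function names are illustrative choices made here.

#include <cmath>
#include <cstdio>

// Standard normal cumulative distribution function
double Phi(double z) { return 0.5 * erfc(-z / sqrt(2.0)); }

// Reliability on demand for independent, normally distributed load and
// strength, equations (5.15) and (5.16)
double reliability(double muS, double sigmaS, double muL, double sigmaL) {
    double beta = (muS - muL) / sqrt(sigmaS * sigmaS + sigmaL * sigmaL);
    return Phi(beta);
}

int main() {
    // Example 5.2: muS = 800, sigmaS = 40, muL = 700, sigmaL = 30
    printf("R = %.3f\n", reliability(800, 40, 700, 30)); // prints R = 0.977
    return 0;
}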
Equation (5.16) can also be applied to normally distributed supply and demand. Demand (D) and supply (S) are assumed to be statistically independent, normally distributed random variables $N(\mu_D, \sigma_D^2)$ and $N(\mu_S, \sigma_S^2)$, where $\mu_D$, $\mu_S$ are the means and $\sigma_D$, $\sigma_S$ are the standard deviations, respectively. The probability that supply will exceed demand by an amount greater than a, called the reserve (Figure 5.7), is equal to the probability that $y = S - D > a$ will be fulfilled.

Figure 5.7 Supply, demand and reserve

Considering that the random variable $y = S - D - a$ is normally distributed, the load–strength interference formula can be applied. Using the load–strength interference formula, the probability that after the demand there will remain a quantity greater than or equal to a ($S - D \ge a$) can be calculated, noticing that $P(S - D \ge a) = P(S \ge D + a)$. $D + a$ is also a normally distributed random variable, with mean $\mu_D + a$ and standard deviation $\sigma_D$. In other words, by increasing the demand by the amount of the required reserve a, the problem is reduced to the familiar load–strength interference problem. Consequently, the probability that supply will exceed demand by at least a is

$$P(S - D \ge a) = \Phi\left(\frac{\mu_S - (\mu_D + a)}{\sqrt{\sigma_S^2 + \sigma_D^2}}\right) \qquad (5.17)$$
Assume that the supply is composed of contributions from M suppliers and the demand is composed of the consumption of N consumers. All supplied and consumed quantities are assumed to be normally distributed or constants. A constant supply or demand of magnitude c can formally be interpreted as 'normally distributed', with mean c and standard deviation zero. The total supply in this case is also normally distributed, with mean $\mu_S = \sum_{i=1}^{M} \mu_{Si}$ and variance $\sigma_S^2 = \sum_{i=1}^{M} \sigma_{Si}^2$, where $\mu_{Si}$ and $\sigma_{Si}$ are the mean and the standard deviation associated with the ith supplier. The total demand is also normally distributed, with mean $\mu_D = \sum_{i=1}^{N} \mu_{Di}$ and variance $\sigma_D^2 = \sum_{i=1}^{N} \sigma_{Di}^2$, where $\mu_{Di}$ and $\sigma_{Di}$ are the mean and the standard deviation of the ith consumer. Substituting these in equation (5.17) gives

$$P(S - D \ge a) = \Phi\left(\frac{\sum_{i=1}^{M} \mu_{Si} - \sum_{i=1}^{N} \mu_{Di} - a}{\sqrt{\sum_{i=1}^{M} \sigma_{Si}^2 + \sum_{i=1}^{N} \sigma_{Di}^2}}\right) \qquad (5.18)$$

Supply–demand problems can also be illustrated by the following examples.

Example 5.3. Fuel is delivered to a tank which is empty at the start of a six-month period. The amounts of fuel delivered within the six-month period by suppliers A and B are normally distributed, with means $\mu_A = 60$ t, $\mu_B = 20$ t and standard deviations $\sigma_A = 6$ t and $\sigma_B = 4.5$ t. The amount of fuel taken from the tank during the six-month period is also normally distributed, with mean $\mu_D = 65$ t and standard deviation $\sigma_D = 5$ t. What is the probability that at the end of the six-month period there will be a reserve of at least a = 10 t of fuel in the tank?

Solution: The required probability can be calculated directly from the supply–demand formula:

$$P(S - D \ge a) = \Phi\left(\frac{\mu_A + \mu_B - \mu_D - a}{\sqrt{\sigma_A^2 + \sigma_B^2 + \sigma_D^2}}\right) = \Phi\left(\frac{60 + 20 - 65 - 10}{\sqrt{6^2 + 4.5^2 + 5^2}}\right) \approx 0.71$$
Example 5.4. Spare parts are delivered to a warehouse. The number of spare parts delivered in one year is normally distributed, with mean $\mu_S = 2000$ and standard deviation $\sigma_S = 150$. The number of spare parts sold by the warehouse during the year is also normally distributed, with mean $\mu_C = 500$ and standard deviation $\sigma_C = 90$. Calculate the probability that at the end of the year there will be a sufficient number of spare parts left in the warehouse to satisfy a demand from a plant. The number of spare parts required by the plant is normally distributed, with mean $\mu_R = 1200$ and standard deviation $\sigma_R = 200$.

Solution: Using the load–strength interference formula, the required probability can be calculated directly from

$$P(S \ge D) = \Phi\left(\frac{\mu_S - \mu_C - \mu_R}{\sqrt{\sigma_S^2 + \sigma_C^2 + \sigma_R^2}}\right) = \Phi\left(\frac{2000 - 500 - 1200}{\sqrt{150^2 + 90^2 + 200^2}}\right) \approx 0.87$$

The supply–demand formulae can also be applied to other types of problems, such as the next example.

Example 5.5. The time for accomplishing a particular task by operator A is normally distributed, with mean 40 minutes and standard deviation 8 minutes. The time for accomplishing the same task by operator B is also normally distributed, with mean 60 minutes and standard deviation 12 minutes. If the two operators start the task simultaneously and independently of each other, calculate the probability that operator A will accomplish the task at least 10 minutes before operator B.

Solution: The probability that operator A will accomplish the task at least 10 minutes before operator B equals the probability that the time $t_B$ of operator B will be equal to or greater than the time $t_A$ of operator A plus 10 minutes. Using the load–strength interference formula, where $\mu_A$, $\mu_B$ are the means and $\sigma_A$, $\sigma_B$ are the standard deviations of the times for operators A and B, correspondingly, a probability of approximately 75% is obtained:
$$P(t_B \ge t_A + 10) = \Phi\left(\frac{\mu_B - (\mu_A + 10)}{\sqrt{\sigma_B^2 + \sigma_A^2}}\right) = \Phi\left(\frac{60 - 40 - 10}{\sqrt{12^2 + 8^2}}\right) = \Phi(0.69) \approx 0.75$$
5.6 RELIABILITY AND RISK ANALYSIS BASED ON THE LOAD–STRENGTH INTERFERENCE APPROACH

5.6.1 INFLUENCE OF STRENGTH VARIABILITY ON RELIABILITY
Figure 5.8 illustrates a case where low reliability is a result of large variability of the strength. Large variability of strength is caused by the presence of weak (substandard) items due to poor material properties, manufacturing, assembly and quality control. The large variability of strength leads to a large overlap of the lower tail of the strength distribution and the upper tail of the load distribution. A large overlap causes low reliability and can be decreased by a high-stress burn-in or proof-testing, which cause weak items to fail. The resultant distributions (Figure 5.9) are characterised by a small or no overlap (Carter, 1986; O'Connor, 2003).

Figure 5.8 Low reliability caused by a large variability of the strength
Figure 5.9 Increased reliability after a burn-in

Strength variability caused by variability of material properties is one of the major reasons for an increased interference with the load distribution, which results in an increased probability of failure (Figure 5.10). Here this point is discussed in some detail. Assume for simplicity that the load and strength are normally distributed. Since the reliability on demand is

$$R_0 = \Phi\left(\frac{\mu_S - \mu_L}{\sqrt{\sigma_{S0}^2 + \sigma_L^2}}\right)$$

decreasing the variability of the strength to $\sigma_{S1} < \sigma_{S0}$ (Figure 5.10) increases the reliability on demand to

$$R_1 = \Phi\left(\frac{\mu_S - \mu_L}{\sqrt{\sigma_{S1}^2 + \sigma_L^2}}\right) > R_0$$
Figure 5.10 The importance of the strength variation ($\sigma_{S0} > \sigma_{S1}$) to the probability of failure
If, for example, variability of strength is due to sampling from multiple sources, it can be decreased by sampling from a single source – the source characterised by the smallest variance. It must be pointed out that strength variability also depends on the particular design solution. A particular material, for example, may have a low resistance to thermal fatigue. If the design solution, however, eliminates start–stop regimes that lead to substantial temperature variations, thermal fatigue will no longer be a problem. Another example is fatigue failure of suspension springs. It is a well-established fact that the fatigue strength of suspension springs is very sensitive to the state of the spring wire surface. If the spring wire is shot-peened, however, compressive residual stresses are introduced into the surface layers. As a result, the effective amplitude of the loading stress on the spring surface decreases and fatigue crack initiation and propagation are delayed. The overall effect is a significant increase of the fatigue life. Low reliability due to increased strength variability is often caused by ageing and the associated material degradation. Material degradation can often be induced by the environment, for example by corrosion and irradiation. A typical feature of strength degradation is an increase of the variance and a decrease of the mean of the strength distribution (Figure 5.11).
Figure 5.11 Decreased reliability due to a strength degradation
Figure 5.12 Increased load–strength interference due to a large variability of the load (rough loading)
Low reliability is often due to excessive variability of the load. If the variability of the load is large (rough loading), the probability of an overstress failure is significant (Figure 5.12). Mechanical equipment is usually characterised by rough loading, as opposed to electronic equipment, which is characterised by smooth loading (Figure 5.13). A common example of smooth loading is the power supply of electronic equipment through an anti-surge protector. Here are some possible options for increasing the reliability on demand: (i) decreasing the strength variability; (ii) reducing the lower tail of the strength distribution; (iii) increasing the mean strength; (iv) decreasing the mean load; (v) decreasing the variability of the load and obtaining a smooth loading; (vi) truncating the upper tail of the load distribution using stress limiters. Altering the upper tail of the load distribution using stress limiters is equivalent to concentrating the probability mass beneath the upper tail of the load distribution (the area marked U in Figure 5.14) into the truncation point A. Typical examples of stress limiters are safety pressure valves, fuses and overload trips, activated when pressure or current reaches critical values.
Figure 5.13 High reliability achieved by smooth loading
Figure 5.14 High reliability achieved by altering the upper tail of the load distribution using a stress limiter

5.6.2 THE IMPORTANCE OF THE SHAPE OF THE LOWER TAIL OF THE STRENGTH DISTRIBUTION AND THE UPPER TAIL OF THE LOAD DISTRIBUTION
The most important aspect of the load–strength interaction is the interaction of the upper tail of the load distribution and the lower tail of the strength distribution (Figure 5.15). Figure 5.16 shows a favourable shape of the tails, yielding intrinsically high reliability. The values from the lower tail of the strength distribution control reliability, not the high or central values (Figure 5.17). Consequently, an adequate model of the strength distribution should faithfully represent its lower tail. The normal distribution, for example, may not describe satisfactorily the strength variation in the distribution tails, mainly because the strength distribution is usually asymmetric, bounded on the left. In a large number of cases, the Weibull model and the log-normal model are suitable models for the variation of material properties, but in some cases the strength distribution is in fact a distribution mixture.

Figure 5.15 Reliability is determined by the interaction of the upper tail of the load distribution and the lower tail of the strength distribution
Figure 5.16 Tails of the load and strength distribution resulting in intrinsically high reliability
Figure 5.17 The lower tail of the strength distribution controls reliability
Here we must point out that for non-normally distributed load and strength, the traditional reliability measures safety margin and loading roughness can be misleading. Let us have a look at Figure 5.18(a). The figure depicts a case where a low safety margin $\beta = (\mu_S - \mu_L)/\sqrt{\sigma_S^2 + \sigma_L^2}$ exists ($\mu_S - \mu_L$ is small and $\sigma_S^2 + \sigma_L^2$ is large), yet reliability is high. In Figure 5.18, $\mu_L$ and $\mu_S$ are the mean values of the load and strength and $\sigma_L$ and $\sigma_S$ are the corresponding standard deviations. Now let us have a look at Figure 5.18(b), which has been obtained by reflecting symmetrically the distributions from Figure 5.18(a). Since the reflection does not change the variances of the distributions, the new feature is the larger difference of the means: $\mu'_S - \mu'_L > \mu_S - \mu_L$. Despite the larger new safety margin, however,

$$\beta' = \frac{\mu'_S - \mu'_L}{\sqrt{\sigma_S^2 + \sigma_L^2}} > \beta = \frac{\mu_S - \mu_L}{\sqrt{\sigma_S^2 + \sigma_L^2}}$$
Figure 5.18 A counterexample showing that for skewed load and strength distributions the traditional measures 'reliability index' and 'loading roughness' are misleading: (a) original load and strength distributions; (b) the distributions reflected symmetrically; (c) only the load reflected symmetrically about its mean
the reliability on demand is smaller compared to the previous case. Clearly, the safety margin concept applied without considering the shape of the interacting distribution tails can be misleading. Similar considerations are valid regarding the parameter loading roughness $\sigma_L/\sqrt{\sigma_L^2 + \sigma_S^2}$ introduced by Carter (1986, 1997). If only the load in Figure 5.18(a) is reflected symmetrically about the mean $\mu_L$, the loading in Figure 5.18(c) is obtained. Since the standard deviation $\sigma_L$ of the load has not been affected by the reflection, the loading roughness $\sigma_L/\sqrt{\sigma_L^2 + \sigma_S^2}$ in Figure 5.18(c) is the same as in Figure 5.18(a), despite the completely different (much more severe) type of loading.
Figure 5.19 Deriving the reliability on demand by integrating within the interval (Smin, Lmax), including only the upper tail of the load distribution and the lower tail of the strength distribution
These problems do not exist if, for load and strength which do not follow a normal distribution, a numerical integration, instead of the reliability index, is used to quantify the interaction between the lower tail of the strength distribution and the upper tail of the load distribution. Furthermore, only information related to the lower tail of the strength distribution and the upper tail of the load distribution is necessary. Let us start with the load–strength integral which gives the probability of failure $p_f$ for a single load application:

$$p_f = \int_{S_{\min}}^{S_{\max}} [1 - F_L(x)]\, f_S(x)\, dx \qquad (5.19)$$
where $F_L(x)$ is the cumulative distribution of the load and $f_S(x)$ is the probability density of the strength. Suppose that $S_{\min}$ and $S_{\max}$ in Figure 5.19 correspond to stress levels for which $f_S(x) = 0$ if $x \le S_{\min}$ or $x \ge S_{\max}$. The integral in equation (5.19) can also be presented as

$$p_f = \int_{S_{\min}}^{L_{\max}} [1 - F_L(x)]\, f_S(x)\, dx + \int_{L_{\max}}^{S_{\max}} [1 - F_L(x)]\, f_S(x)\, dx \qquad (5.20)$$
For $x > L_{\max}$, $F_L(x) \equiv 1$ holds for the cumulative distribution of the load (Figure 5.19) and the second integral in equation (5.20) becomes zero ($\int_{L_{\max}}^{S_{\max}} [1 - F_L(x)]\, f_S(x)\, dx = 0$). Consequently, the probability of failure becomes

$$p_f = \int_{S_{\min}}^{L_{\max}} [1 - F_L(x)]\, f_S(x)\, dx \qquad (5.21)$$
Finally, for the reliability on demand we get

$$R = 1 - \int_{S_{\min}}^{L_{\max}} [1 - F_L(x)]\, f_S(x)\, dx \qquad (5.22)$$
The advantage of the reliability integral (5.22) is that in order to derive the reliability on demand, data covering the lower tail of the load distribution and the upper tail of the strength distribution are no longer necessary.
6 Solving Reliability and Risk Models Using a Monte Carlo Simulation
6.1 MONTE CARLO SIMULATION ALGORITHMS
Consider an experiment involving n trials to determine the mean $\mu$ of a random variable characterised by a probability distribution f(x) and standard deviation $\sigma$. According to the central limit theorem, with increasing number n of trials, the distribution of the mean $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$ approaches a normal distribution with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$. The probability that $|\bar{x} - \mu| > k\sigma/\sqrt{n}$ equals $2\Phi(-k)$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. For a fixed k, the equation

$$P\left(\left|\frac{1}{n}\sum_{i=1}^{n} x_i - \mu\right| \le \frac{k\sigma}{\sqrt{n}}\right) = 1 - 2\Phi(-k) \qquad (6.1)$$

holds (Sobol, 1994). The error $\varepsilon = |(1/n)\sum_{i=1}^{n} x_i - \mu|$ is inversely proportional to the square root of the number of trials ($\varepsilon \propto 1/\sqrt{n}$) and approaches zero as n increases. From equation (6.1) it follows that reducing the error m times requires increasing the number of Monte Carlo trials by a factor of $m^2$. In order to improve the efficiency of the Monte Carlo simulation by reducing the variance of the Monte Carlo estimates, a number of techniques such as stratified sampling and importance sampling can be employed (see for example
Ross, 1997; Rubinstein, 1981). The efficiency of the Monte Carlo simulations can also be increased by a better reproduction of the input distributions using Latin hypercube sampling (Vose, 2002).

In describing some basic Monte Carlo simulation algorithms below, a number of conventions are used. The statements in braces {Statement 1; Statement 2; Statement 3; . . .} separated by semicolons are executed as a single block. The construct

For i = 1 to Number_of_trials do
{
  ....
}

is a loop with a control variable i, accepting successive values from one to the number of Monte Carlo trials (Number_of_trials). In some cases it is necessary that the control variable i accepts successive decreasing values from Number_of_trials to one. The corresponding construct is

For i = Number_of_trials downto 1 do
{
  ....
}

The loops execute the block of statements in the braces Number_of_trials times. If a statement break is encountered in the body of a loop, the execution continues with the next statement immediately after the loop (Statement n + 1 in the next example), skipping all statements between the statement break and the end of the loop:

For i = 1 to Number_of_trials do
{
  Statement 1;
  .....
  break;
  .....
  Statement n − 1;
  Statement n;
}
Statement n + 1;

The construct

While (Condition) do {Statement 1; . . . ; Statement n;}

is a loop which executes the block of statements repeatedly as long as the specified condition is true. If the variable 'Condition' is false before entering
the loop, the block of statements is not executed at all. A similar construct is the loop

Repeat
  Statement 1;
  ....
  Statement n;
Until (Condition);

which repeats the execution of all statements between Repeat and Until until the specified condition becomes true. Unlike the While-do loop, the statements of the Repeat-Until loop are executed at least once. In the conditional statement below, the block of statements in the braces is executed only if the specified condition is true.

If (Condition) then {Statement 1; . . . ; Statement n;}

A procedure is a self-contained section performing a certain task. The procedure is called by including its name ('proc' in the next example) in other parts of the algorithm.

procedure proc()
{
  Statement 1;
  ...
  Statement n;
}

A function is also a self-contained section which returns a value and which is called by including its name in other parts of the algorithm. Before returning to the point of the function call, a particular value p is assigned to the function name ('fn' in the next example) with the statement return.

function fn()
{
  Statement 1;
  ...
  Statement n;
  return p;
}

Text between the symbols '/*' and '*/' is a comment.
6.2 SIMULATION OF RANDOM VARIABLES

6.2.1 SIMULATION OF A UNIFORMLY DISTRIBUTED RANDOM VARIABLE
An efficient algorithm for generating uniformly distributed pseudo-random numbers is the congruential multiplicative pseudo-random number generator suggested by Lehmer (1951). If an initial value X0, called the seed, is specified, the random numbers $X_{i+1}$ in the random sequence with seed X0 are calculated from the previous value $X_i$ using the formula

$$X_{i+1} = A X_i \bmod M \qquad (6.2)$$

where the multiplier A and the modulus M are positive integers. $X_{i+1}$ is the remainder left when $A X_i$ is divided by M. For a different seed $X'_0$, a different random sequence is obtained. After at most M generated values, the random sequence will repeat itself. Because $0 < X_i < M$, uniformly distributed pseudo-random numbers in the interval (0,1) are obtained from

$$u_i = \frac{X_i}{M} \qquad (6.3)$$
Comprehensive discussion on generating random numbers, and tests for statistical independence of the generated random numbers, is provided in Knuth (1997), Rubinstein (1981) and Tuckwell (1988). A random sequence with good properties is obtained if A = 16 807 and M = 2 147 483 647 are selected for the values of the constants in the recurrent formula (6.2) defining the pseudo-random generator (Park and Miller, 1988). The algorithm in pseudocode for simulating a random variable following a uniform distribution in the interval (0,1) can be presented as follows.

Algorithm 6.1.

function u_random()
{
  t = A*Seed;
  Seed = mod(t, M);
  u = Seed/M; /* Generating a random number in the interval (0,1) */
  return u;
}

The function mod returns the remainder from the division of t by M. Before the first call of the function u_random(), the variable Seed is initialised with any number in the range 1, ..., M − 1. Subsequently this value is altered in the statement Seed = mod(t, M).
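For readers who prefer a compiled language, a minimal C++ sketch of this generator is given below; note that the product A*Seed can exceed 32 bits, so a 64-bit intermediate is used (an implementation detail not discussed in the pseudocode above).

#include <cstdint>
#include <cstdio>

static uint64_t seed = 1; // any value in 1 .. M-1

// Park-Miller multiplicative congruential generator, equation (6.2)
double u_random() {
    const uint64_t A = 16807, M = 2147483647;
    seed = (A * seed) % M;            // 64-bit arithmetic avoids overflow
    return (double)seed / (double)M;  // uniform value in (0,1), equation (6.3)
}

int main() {
    for (int i = 0; i < 5; i++) printf("%f\n", u_random());
    return 0;
}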
Using the linear transformation $x_i = a + (b - a)u_i$, where $u_i$ is a random number uniformly distributed in the interval (0,1), a uniformly distributed random value $x_i$ in any specified interval (a, b) can be generated. Uniformly distributed integer numbers in the range (0, n − 1), with equal probability of generating any of the numbers 0, 1, 2, . . . , n − 1, can be obtained using the expression

$$x_i = [n u_i] \qquad (6.4)$$

where $[n u_i]$ denotes the greatest integer which does not exceed $n u_i$. Consequently, the formula

$$x_i = [n u_i] + 1 \qquad (6.5)$$

will generate with equal probability the integer numbers 1, 2, . . . , n. On the basis of equation (6.5), a function Rand(k) can be constructed which selects with the same probability 1/k one object out of k objects. The algorithm in pseudocode is straightforward.

Algorithm 6.2.

function Rand(k)
{
  u = u_random(); /* Generates a uniformly distributed random value in the interval (0,1) */
  x = Int(k*u) + 1; /* Generates a uniformly distributed integer value x in the interval 1, ..., k */
  return x;
}

Function Int(k*u) returns the greatest integer which does not exceed the product k*u.
6.2.2 GENERATION OF A RANDOM SUBSET
In many applications, it is important to generate a random subset of size k out of n objects. The subset must be generated in such a way that no object in the subset appears twice and every object has an equal chance of being included in the subset. Some of the applications include: (i) selecting randomly a set of k test specimens from a batch of n specimens; (ii) selecting randomly a group of k people out of n people; (iii) selecting randomly k components for inspection, from a batch containing n components; (iv) a random assignment of n specimens to n treatments. The list can be continued.
A random selection of k objects out of n objects can be made by making a random selection of one object out of n objects, then another random selection of one object out of the remaining n − 1 objects, etc., until all k objects are selected. Let the n objects be indexed by 1, 2, . . . , n and stored in an array a[n] of size n. Calling the function Rand(n) will select with equal probability 1/n a random index r associated with an object a[r] stored in the array. In this way, a random selection of one out of the n objects is made. The remaining n − 1 objects which have not been selected can be stored in the first n − 1 positions of the array in the following way. The selected object a[r] and the last object a[n] are swapped. This means that the object initially stored in the nth cell of the array is now stored in the rth cell and the object from the rth cell has been moved into the nth cell. As a result, all of the objects which have not been selected are now in the first n − 1 cells of the array. The process continues with selecting a random object from the first n − 1 cells of the array by calling Rand(n − 1), etc., until exactly k objects have been selected. The algorithm in pseudocode is as follows.

Algorithm 6.3.

for i = n downto n − k + 1
{
  x = Rand(i);
  tmp = a[i]; a[i] = a[x]; a[x] = tmp; /* swaps cells a[i] and a[x] */
}

After executing the procedure, all k selected objects are stored in the last k cells of the array a[n] (from a[n − k + 1] to a[n]). If k has been specified to be equal to n, the algorithm generates a random permutation of the elements of the array or, in other words, it scrambles the array in a random fashion.
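A minimal C++ sketch of Algorithm 6.3 is shown below (essentially a partial Fisher–Yates shuffle); the use of std::mt19937 instead of the Lehmer generator above is a substitution made here for brevity.

#include <random>
#include <vector>
#include <cstdio>

// Randomly select k of the n objects in a; they end up in the last k cells.
void random_subset(std::vector<int>& a, int k, std::mt19937& gen) {
    int n = (int)a.size();
    for (int i = n; i >= n - k + 1; i--) {
        std::uniform_int_distribution<int> d(1, i); // Rand(i): index in 1..i
        int x = d(gen);
        std::swap(a[i - 1], a[x - 1]); // 0-based storage of 1-based indices
    }
}

int main() {
    std::mt19937 gen(12345);
    std::vector<int> a{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    random_subset(a, 3, gen); // the selected objects are a[7], a[8], a[9]
    for (int i = 7; i < 10; i++) printf("%d ", a[i]);
    return 0;
}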
6.2.3 INVERSE TRANSFORMATION METHOD FOR SIMULATION OF CONTINUOUS RANDOM VARIABLES
Let U be a random variable following a uniform distribution in the interval (0,1) (Figure 6.1). For any continuous distribution function F(x), if a random variable X is defined by $X = F^{-1}(U)$, where $F^{-1}$ denotes the inverse function of F(x), the random variable X has a cumulative distribution function F(x). Indeed, because the cumulative distribution function is monotonically increasing (Figure 6.1), the following chain of equalities holds:

$$P(X \le a) = P(F^{-1}(U) \le a) = P(U \le F(a)) = F(a) \qquad (6.6)$$
Figure 6.1 Inverse transformation method for generating random numbers
From the first and the last equality it follows that $P(X \le a) = F(a)$, which means that the random variable X has a cumulative distribution function F(x).
6.2.4 SIMULATION OF A RANDOM VARIABLE FOLLOWING THE NEGATIVE EXPONENTIAL DISTRIBUTION
The cumulative distribution function of the negative exponential distribution is $F(x) = 1 - \exp(-\lambda x)$, whose inverse is $x = -(1/\lambda)\ln(1 - F)$. Replacing F(x) with U, a uniformly distributed random variable in the interval (0,1), gives

$$x = -(1/\lambda)\ln(1 - U) \qquad (6.7)$$

which follows the negative exponential distribution. A small improvement of the efficiency of the algorithm can be obtained by noticing that 1 − U is also a uniformly distributed random variable in the interval (0,1), and therefore $x = -(1/\lambda)\ln(1 - U)$ has the same distribution as $x = -(1/\lambda)\ln U$. Finally, generating a uniformly distributed random variable $u_i$ in the interval (0,1) and substituting it in

$$x_i = -\frac{1}{\lambda}\ln(u_i) \qquad (6.8)$$

results in a random variable $x_i$ following the negative exponential distribution. During the simulation, the uniformly distributed random values $u_i$ are obtained either from a standard built-in or from a specifically designed pseudo-random number generator.
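A one-line C++ sketch of equation (6.8) is given below; u01 is an assumed source of uniform (0,1) values, for example the u_random() sketch from section 6.2.1.

#include <cmath>

// Inverse transformation sampling of the negative exponential distribution,
// equation (6.8); u01() is assumed to return a uniform random number in (0,1).
double exponential_rv(double lambda, double (*u01)()) {
    return -std::log(u01()) / lambda;
}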
6.2.5 SIMULATION OF A RANDOM VARIABLE FOLLOWING THE GAMMA DISTRIBUTION
It is not possible to give a closed-form expression for the inverse of the gamma cumulative distribution function. Generating a gamma random variable, however, can be done using the property that the gamma random variable $G(n, 1/\lambda)$ can be presented as a sum of n random variables following the negative exponential distribution with parameter $\lambda$. Each negative exponential random variable is generated from $-(1/\lambda)\ln(u_i)$, i = 1, 2, . . . , n, where $u_i$ are statistically independent, uniformly distributed random numbers in the interval (0,1). As a result, the gamma random variable $G(n, 1/\lambda)$ can be generated with the sum

$$G(n, 1/\lambda) = -\frac{1}{\lambda}\ln(u_1) - \frac{1}{\lambda}\ln(u_2) - \cdots - \frac{1}{\lambda}\ln(u_n) \qquad (6.9)$$

Equation (6.9) can be reduced to

$$G(n, 1/\lambda) = -\frac{1}{\lambda}\ln(u_1 u_2 \ldots u_n) \qquad (6.10)$$
which is computationally more efficient due to the single logarithm.
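A C++ sketch of equation (6.10), valid for an integer shape parameter n (the Erlang case), is given below; u01 is again an assumed uniform (0,1) source.

#include <cmath>

// Gamma (Erlang) random variable G(n, 1/lambda) as a sum of n exponentials,
// equation (6.10); u01() is assumed to return a uniform random number in (0,1).
double gamma_rv(int n, double lambda, double (*u01)()) {
    double prod = 1.0;
    for (int i = 0; i < n; i++) prod *= u01(); // product u1*u2*...*un
    // note: for very large n, summing the logarithms avoids underflow
    return -std::log(prod) / lambda;           // single logarithm
}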
6.2.6 SIMULATION OF A RANDOM VARIABLE FOLLOWING A HOMOGENEOUS POISSON PROCESS IN A FINITE INTERVAL
Random variables following a homogeneous Poisson process in a finite interval with length a can be generated in the following way. Successive, exponentially distributed random numbers $x_i = -(1/\lambda)\ln(u_i)$ are generated according to the inverse transformation method, where $u_i$ are uniformly distributed random numbers in the interval (0,1). Subsequent realisations $t_i$ following a homogeneous Poisson process with intensity $\lambda$ can be obtained from

$$t_1 = x_1, \quad t_2 = t_1 + x_2, \; \ldots, \; t_n = t_{n-1} + x_n \quad (t_n \le a)$$

The number of variables n following a homogeneous Poisson process in the finite interval equals the number of generated values $t_i$ smaller than the measure a of the interval. The nth generated value

$$t_n = -(1/\lambda)\ln(u_1) - (1/\lambda)\ln(u_2) - \cdots - (1/\lambda)\ln(u_n)$$
can also be presented as $t_n = -(1/\lambda)\ln(u_1 u_2 \ldots u_n)$. Generating uniformly distributed random numbers $u_1, \ldots, u_i$ continues while $t_i = -(1/\lambda)\ln(u_1 u_2 \ldots u_i) \le a$ and stops immediately if $t_i > a$. Because the condition $-(1/\lambda)\ln(u_1 u_2 \ldots u_i) \le a$ is equivalent to the condition

$$u_1 u_2 \ldots u_i \ge \exp(-\lambda a) \qquad (6.11)$$

the algorithm in pseudocode for simulating a variable following a homogeneous Poisson process in a finite interval a becomes the following.

Algorithm 6.4.

Limit = exp(−λa);
S = u_random();
k = 0;
While (S ≥ Limit) do {S = S*u_random(); k = k + 1;}

At the end, the generated random variable following a homogeneous Poisson process remains in the variable k. Simulating a number of random failures characterised by a constant hazard rate in a finite time interval with length a can also be done by the described algorithm.
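A C++ sketch of Algorithm 6.4; u01 is an assumed uniform (0,1) source.

#include <cmath>

// Number of events of a homogeneous Poisson process with intensity lambda
// in an interval of length a (Algorithm 6.4); u01() returns uniform (0,1) values.
int poisson_count(double lambda, double a, double (*u01)()) {
    double limit = std::exp(-lambda * a);
    double s = u01();
    int k = 0;
    while (s >= limit) { s *= u01(); k++; }
    return k;
}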
6.2.7 SIMULATION OF A DISCRETE RANDOM VARIABLE WITH A SPECIFIED DISTRIBUTION
A discrete random variable X takes on only the discrete values $x_1, x_2, \ldots, x_n$, with probabilities $p_1 = f(x_1)$, $p_2 = f(x_2)$, . . . , $p_n = f(x_n)$, and no other value:

X:         x1      x2      . . .   xn
P(X = x):  f(x1)   f(x2)   . . .   f(xn)

where $f(x) = P(X = x)$ is the probability (mass) function of the random variable, $\sum_{i=1}^{n} f(x_i) = 1$. The algorithm for generating a random variable with the specified distribution consists of the following steps.

Algorithm 6.5.

1. Construct the cumulative distribution $P(X \le x_k) \equiv F(x_k) = \sum_{i \le k} f(x_i)$ of the random variable.
2. Generate a uniformly distributed random number r in the interval [0,1].
3. If $r \le F(x_1)$, the simulated random value is $x_1$; else if $F(x_{k-1}) < r \le F(x_k)$, the simulated random value is $x_k$ (Figure 6.2).
Figure 6.2 Simulating a random variable with a specified discrete distribution
A binomial experiment, involving n statistically independent trials with probability of success p in each trial, can be simulated by generating n random numbers $u_i$, uniformly distributed in the interval (0,1). If the number of successes X is set equal to the number of trials in which $u_i \le p$, the distribution of the number of successes X follows a binomial distribution with parameters n and p.
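A C++ sketch covering both Algorithm 6.5 and the binomial experiment is given below; the helper names are choices made here.

#include <vector>

// Algorithm 6.5: sample an index k with probability p[k] (the p[] sum to 1);
// u01() is assumed to return a uniform random number in (0,1).
int discrete_rv(const std::vector<double>& p, double (*u01)()) {
    double r = u01(), F = 0.0;
    for (size_t k = 0; k < p.size(); k++) {
        F += p[k];               // cumulative distribution F(x_k)
        if (r <= F) return (int)k;
    }
    return (int)p.size() - 1;    // guard against rounding of the cumulative sum
}

// Binomial experiment: number of successes in n trials with success probability p
int binomial_rv(int n, double p, double (*u01)()) {
    int x = 0;
    for (int i = 0; i < n; i++) if (u01() <= p) x++;
    return x;
}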
6.2.8 SELECTION OF A POINT AT RANDOM IN A THREE-DIMENSIONAL SPACE REGION
In order to select a random point from a bounded three-dimensional region R (Figure 6.3), a rejection method can be used. The region R is first surmounted by a parallelepiped with sides a, b and c. A random point with coordinates $(au_1, bu_2, cu_3)$ is generated in the parallelepiped using three statistically independent random numbers $u_1$, $u_2$ and $u_3$, uniformly distributed in the interval (0,1). If the generated point belongs to the region R, it is accepted. Otherwise, the point is rejected and a new random point is generated in the parallelepiped and checked for whether it belongs to R. The first accepted point is a point randomly selected in the region R.

Figure 6.3 Selecting a random point in the region R
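A C++ sketch of this rejection method is given below, with a unit ball centred at the origin as an assumed example region; the bounding box and the membership test would change with the region.

#include <random>
#include <cstdio>

struct Point { double x, y, z; };

// Rejection method: generate a uniform point in an example region
// (here, assumed to be the unit ball x*x + y*y + z*z <= 1) via a bounding cube.
Point random_point_in_ball(std::mt19937& gen) {
    std::uniform_real_distribution<double> u(-1.0, 1.0); // bounding cube
    for (;;) {
        Point p{u(gen), u(gen), u(gen)};
        if (p.x * p.x + p.y * p.y + p.z * p.z <= 1.0) return p; // accept
        // otherwise reject and generate a new point
    }
}

int main() {
    std::mt19937 gen(42);
    Point p = random_point_in_ball(gen);
    printf("%f %f %f\n", p.x, p.y, p.z);
    return 0;
}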
6.2.9 SIMULATION OF RANDOM LOCATIONS FOLLOWING A HOMOGENEOUS POISSON PROCESS IN A FINITE DOMAIN
Random locations following a homogeneous Poisson process with a constant intensity $\lambda$ in a finite domain (volume, area, interval) can be simulated in two steps. In the first step, using Algorithm 6.4, a number n of random variables is generated following a homogeneous Poisson process with intensity $\lambda$. In the second step, the n generated variables are uniformly distributed in the finite domain. The method will be illustrated by an algorithm for generating random locations which follow a homogeneous Poisson process with intensity $\lambda$ in a cylindrical domain with area of the base S and height H (Figure 6.4). First, the number of locations k in the cylinder is generated according to Algorithm 6.4, where a = SH is the volume of the cylinder. Next, k locations are generated, uniformly distributed in the volume of the cylinder. The second step can be accomplished by generating a uniformly distributed point with coordinates (x, y) on the base of the cylinder, followed by generating the z-coordinate of the location, uniformly distributed along the height H. A uniformly distributed point in the area of the base is generated by using the rejection method described in section 6.2.8: generating uniformly distributed points in the circumscribed rectangle containing the base, ignoring the points outside the area of the base and accepting the first point inside (Figure 6.4). The process of generating uniformly distributed points with coordinates (x, y, z) continues until the total number of k locations has been generated.

Figure 6.4 Generating uniformly distributed locations in the cylinder V
6.2.10 SIMULATION OF A RANDOM DIRECTION IN SPACE
Suppose that a random direction needs to be selected from the origin of the coordinate system with axes $\sigma_1$, $\sigma_2$, $\sigma_3$, as shown in Figure 6.5. This problem is common in the applications, and discussion related to it exists even in introductory texts on Monte Carlo simulation (see for example Sobol, 1994). Such a problem is present during a simulation of brittle fracture triggered by penny-shaped cracks. The orientation of the crack with regard to the principal stresses $\sigma_1$, $\sigma_2$ and $\sigma_3$ is specified by the unit normal vector n to the crack plane (Figure 6.5).

Figure 6.5 A random direction n in space defined by angles $0 \le \varphi \le \pi$ and $0 \le \psi \le 2\pi$
Figure 6.6 Unit sphere and a random direction in space

A random direction in space means that the endpoint of the random direction is uniformly distributed on the surface of the unit sphere in Figure 6.6. The random direction is determined by the two angles $0 \le \psi \le 2\pi$ and $0 \le \varphi \le \pi$ (Figure 6.6). Let us divide the surface of the sphere, by planes perpendicular to one of the axes, into infinitesimally small surface elements ds. Because the endpoints are uniformly distributed on the surface of the sphere, the probability of selecting a particular element ds to which corresponds angle $\varphi$ is

$$ds/(4\pi) = 2\pi\sin(\varphi)\, d\varphi/(4\pi) = \frac{1}{2}\sin(\varphi)\, d\varphi$$

The latter expression defines the probability density $(1/2)\sin(\varphi)$ characterising the angle $\varphi$. Integrating this probability density gives the cumulative distribution of the angle

$$F(\varphi) = \int_0^{\varphi} \frac{1}{2}\sin(x)\, dx = \frac{1}{2}[1 - \cos(\varphi)]$$

Applying the inverse transformation method, values of the cosine of the angle $\varphi$ are generated from
$$\cos(\varphi_i) = 1 - 2u_i \qquad (6.12)$$

where $u_i$ are uniformly distributed numbers in the interval (0,1). Considering the axial symmetry with respect to the selected z-axis, the second angle is uniformly distributed in the interval $(0, 2\pi)$, and values for this angle are generated from

$$\psi_i = 2\pi s_i \qquad (6.13)$$

where $s_i$ are uniformly distributed numbers in the interval (0,1), statistically independent from the random numbers $u_i$ used for generating the values $\cos(\varphi_i)$.
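A C++ sketch of equations (6.12) and (6.13) is given below; it returns the Cartesian components of a random unit vector and assumes a uniform (0,1) source u01.

#include <cmath>

struct Direction { double x, y, z; };

// Random direction in space, equations (6.12) and (6.13);
// u01() is assumed to return independent uniform (0,1) values.
Direction random_direction(double (*u01)()) {
    const double PI = 3.14159265358979323846;
    double cos_phi = 1.0 - 2.0 * u01();           // equation (6.12)
    double sin_phi = std::sqrt(1.0 - cos_phi * cos_phi);
    double psi = 2.0 * PI * u01();                // equation (6.13)
    return {sin_phi * std::cos(psi), sin_phi * std::sin(psi), cos_phi};
}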
6.2.11 SIMULATION OF A RANDOM VARIABLE FOLLOWING THE THREE-PARAMETER WEIBULL DISTRIBUTION
Since the Weibull cumulative distribution function is

$$F(x) = 1 - \exp\left[-\left(\frac{x - x_0}{\eta}\right)^m\right]$$

the first step is to construct its inverse

$$x = x_0 + \eta\left[\ln\frac{1}{1 - F(x)}\right]^{1/m}$$

Next, F(x) is replaced with U, a uniformly distributed random variable in the interval (0,1). As a result, the expression

$$x = x_0 + \eta\left[\ln\frac{1}{1 - U}\right]^{1/m}$$

is obtained. Generating uniformly distributed random values $u_i$ in the interval (0,1) and substituting them in

$$x_i = x_0 + \eta\,[-\ln(u_i)]^{1/m} \qquad (6.14)$$

yields random values $x_i$ following a three-parameter Weibull distribution.
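A one-line C++ sketch of equation (6.14):

#include <cmath>

// Three-parameter Weibull random variable, equation (6.14);
// u01() is assumed to return a uniform random number in (0,1).
double weibull_rv(double x0, double eta, double m, double (*u01)()) {
    return x0 + eta * std::pow(-std::log(u01()), 1.0 / m);
}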
6.2.12 SIMULATION OF A RANDOM VARIABLE FOLLOWING THE MAXIMUM EXTREME VALUE DISTRIBUTION
From the cumulative distribution function of the maximum extreme value distribution

$$F(x) = \exp\left[-\exp\left(-\frac{x - x_0}{\sigma}\right)\right]$$

the inverse $x = x_0 - \sigma\ln[-\ln F(x)]$ is determined. Replacing F(x) with U, a uniformly distributed random variable in the interval (0,1), results in $x = x_0 - \sigma\ln(-\ln U)$. Generating uniformly distributed random variables $u_i$ in the interval (0,1) and substituting them in

$$x_i = x_0 - \sigma\ln(-\ln u_i) \qquad (6.15)$$

produces values $x_i$ following the maximum extreme value distribution.
6.2.13 SIMULATION OF A GAUSSIAN RANDOM VARIABLE
A standard normal variable can be generated easily using the central limit theorem applied to a sum X of n random variables $U_i$, uniformly distributed in the interval (0,1). According to the central limit theorem, with increasing n, the sum $X = U_1 + U_2 + \cdots + U_n$ approaches a normal distribution with mean

$$E(X) = E(U_1) + \cdots + E(U_n) = n/2$$

and variance

$$V(X) = V(U_1) + \cdots + V(U_n) = n \times (1/12)$$
Selecting n = 12 uniformly distributed random variables $U_i$ gives a reasonably good approximation for many practical applications. Thus, the random variable

$$X = U_1 + U_2 + \cdots + U_{12} - 6 \qquad (6.16)$$

is approximately normally distributed with mean $E(X) = 12 \times (1/2) - 6 = 0$ and variance $V(X) = 12 \times (1/12) = 1$ or, in other words, the random variable X follows the standard normal distribution (Rubinstein, 1981).

Another method for generating a standard normal variable is the Box–Muller method (Box and Muller, 1958). A pair of statistically independent standard normal variables x and y are generated from a pair $u_1$, $u_2$ of statistically independent, uniformly distributed random numbers in the interval (0,1). Random variables following the standard normal distribution are obtained from

$$x = \sqrt{-2\ln u_1}\,\cos(2\pi u_2) \qquad (6.17)$$
$$y = \sqrt{-2\ln u_1}\,\sin(2\pi u_2) \qquad (6.18)$$

the derivation of which is given in Appendix 6.1. From the generated standard normal variable N(0, 1), with mean zero and standard deviation unity, a normally distributed random variable $N(\mu, \sigma)$ with mean $\mu$ and standard deviation $\sigma$ can be obtained by applying the linear transformation

$$N(\mu, \sigma) = \sigma N(0, 1) + \mu \qquad (6.19)$$
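A C++ sketch of the Box–Muller transformation (6.17) and the linear transformation (6.19); for simplicity, only the cosine counterpart is used, so one standard normal value is returned per call.

#include <cmath>

// Box-Muller method, equation (6.17); u01() is an assumed uniform (0,1) source.
double std_normal_rv(double (*u01)()) {
    const double PI = 3.14159265358979323846;
    return std::sqrt(-2.0 * std::log(u01())) * std::cos(2.0 * PI * u01());
}

// Linear transformation (6.19): normal variable with mean mu and standard deviation sigma
double normal_rv(double mu, double sigma, double (*u01)()) {
    return sigma * std_normal_rv(u01) + mu;
}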
6.2.14 SIMULATION OF A LOG-NORMAL RANDOM VARIABLE
A random variable follows a log-normal distribution if its logarithm follows a normal distribution. Suppose that the mean and the standard deviation of the logarithms of a log-normal variable X are $\mu_{\ln}$ and $\sigma_{\ln}$, correspondingly. A log-normal random variable can be generated by first generating a normally distributed random variable Y with mean $\mu_{\ln}$ and standard deviation $\sigma_{\ln}$ using

$$Y = \sigma_{\ln} N(0, 1) + \mu_{\ln} \qquad (6.20)$$

where N(0, 1) is a generated standard normal variable (see equation (6.19)). The log-normal variable X is generated by exponentiating the normal random variable Y:

$$X = e^Y \qquad (6.21)$$
6.2.15 CONDITIONAL PROBABILITY TECHNIQUE FOR BIVARIATE SAMPLING
This technique is based on presenting the joint probability density function p(x, y) as a product

$$p(x, y) = p(x)\, p(y|x) \qquad (6.22)$$

of p(x), the marginal distribution of X, and p(y|x), the conditional distribution of Y given that X = x. Simulating a random number with distribution p(x, y) involves two steps: (i) generating a random number x with distribution p(x) and (ii) for the generated value X = x, generating a second random number y with conditional probability density p(y|x). The obtained pairs (x, y) have a joint distribution p(x, y). A common application of this technique is random sampling from a bivariate normal distribution with joint probability density function

$$f(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}} \exp\left\{-\frac{1}{2(1 - \rho^2)}\left[\left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x - \mu_X}{\sigma_X}\right)\left(\frac{y - \mu_Y}{\sigma_Y}\right) + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2\right]\right\} \qquad (6.23)$$
where $\mu_X$, $\mu_Y$ denote the means and $\sigma_X$, $\sigma_Y$ denote the standard deviations of the random variables X and Y, respectively. The parameter $\rho$ is the linear correlation coefficient between X and Y, defined by

$$\rho = E\left[\left(\frac{X - \mu_X}{\sigma_X}\right)\left(\frac{Y - \mu_Y}{\sigma_Y}\right)\right] \qquad (6.24)$$
An important feature is that the bivariate normal distribution is a natural extension of the normal distribution in two-dimensional space. If the pairs (X, Y) have a bivariate normal distribution, the variables W and Z defined by $W = (X - \mu_X)/\sigma_X$ and $Z = (Y - \mu_Y)/\sigma_Y$ have a standardised bivariate normal distribution with probability density function

$$f(w, z) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left[-\frac{w^2 - 2\rho w z + z^2}{2(1 - \rho^2)}\right] \qquad (6.25)$$

Given that X = x, the conditional distribution of Y is normal, with mean $\mu_Y + \rho(\sigma_Y/\sigma_X)(x - \mu_X)$ and standard deviation $\sigma_Y\sqrt{1 - \rho^2}$ (Miller and Miller, 1999).
A procedure for sampling from the standardised bivariate normal distribution consists of generating two random numbers w and z from the standard normal distribution N(0, 1). The random variates x and y following a bivariate normal distribution with specified means $\mu_X$, $\mu_Y$, standard deviations $\sigma_X$, $\sigma_Y$ and a correlation coefficient $\rho$ are obtained from

$$x = \mu_X + \sigma_X w \qquad (6.26)$$
$$y = \mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X) + \sigma_Y\sqrt{1 - \rho^2}\, z \qquad (6.27)$$
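A C++ sketch of equations (6.26) and (6.27), reusing the std_normal_rv sketch from section 6.2.13; the function signature is a choice made here.

#include <cmath>

// Bivariate normal sampling by the conditional probability technique,
// equations (6.26) and (6.27); std_normal_rv() is the N(0,1) sketch above.
void bivariate_normal_rv(double muX, double muY, double sX, double sY,
                         double rho, double (*u01)(), double& x, double& y) {
    double w = std_normal_rv(u01);
    double z = std_normal_rv(u01);
    x = muX + sX * w;                                   // equation (6.26)
    y = muY + rho * (sY / sX) * (x - muX)               // conditional mean
          + sY * std::sqrt(1.0 - rho * rho) * z;        // equation (6.27)
}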
6.2.16 VON NEUMANN'S METHOD FOR MODELLING CONTINUOUS RANDOM VARIABLES
This method is convenient in cases where the inverse function of the cumulative distribution function F(x) cannot be expressed in terms of elementary functions, or in cases where the probability density function has been specified empirically (e.g. by a histogram). Suppose that the random variable is defined in the interval (a, b) and the probability density function f(x) is bounded: $f(x) \le M$ (Figure 6.7). A value following the specified probability density function f(x) can be generated using the following steps.

Figure 6.7 Rejection method for generating a continuous random variable with a specified probability density function f(x)
1. A uniformly distributed random value $x = a + (b - a)u_1$ in the interval (a, b) is generated first, where $u_1$ is a uniformly distributed random number in the interval (0,1).
2. A random value $y = M u_2$ is also generated, uniformly distributed in the interval (0, M).
3. If $y \le f(x)$, the random value x generated in step (1) is accepted. Otherwise the random value is rejected and the process continues with steps (1) and (2) until a generated value x is accepted.

The algorithm in pseudocode can be presented as follows.

Algorithm 6.6.

Function_von_Neumann()
{
  Repeat
    x = a + (b − a)*u_random();
    y = M*u_random();
  until (y ≤ f(x));
  return x;
}

Indeed, the probability that a generated value $x_i$ will belong to the interval $(x_0, x_0 + dx)$ is a product of the probability $dx/(b - a)$ that the random value $x_i$ will be generated in the interval $(x_0, x_0 + dx)$ and the probability $f(x_0)/M$ that it will be accepted. As a result, the probability that a generated value will belong to the interval $(x_0, x_0 + dx)$ and will be accepted becomes $f(x_0)\,dx/[(b - a)M]$. According to the total probability theorem, the probability of accepting a value is

$$\int_a^b \frac{f(x)\,dx}{(b - a)M} = \frac{1}{(b - a)M}$$

because $\int_a^b f(x)\,dx = 1$. Finally, the conditional probability that a value will belong to the interval $(x_0, x_0 + dx)$, given that it has been accepted, is

$$\frac{f(x_0)\,dx}{(b - a)M} \bigg/ \frac{1}{(b - a)M} = f(x_0)\,dx$$

which means that the accepted values do follow the specified distribution f(x).
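A C++ sketch of Algorithm 6.6; f is the target density on (a, b) and M its assumed upper bound.

#include <cmath>

// Von Neumann (rejection) method, Algorithm 6.6: sample from a bounded
// density f on (a, b) with f(x) <= M; u01() returns uniform (0,1) values.
double von_neumann_rv(double (*f)(double), double a, double b, double M,
                      double (*u01)()) {
    double x, y;
    do {
        x = a + (b - a) * u01(); // candidate value, step 1
        y = M * u01();           // vertical coordinate, step 2
    } while (y > f(x));          // accept when y <= f(x), step 3
    return x;
}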
6.2.17 RANDOM SAMPLING FROM A MIXTURE DISTRIBUTION
Suppose that a sampling from the distribution mixture

$$F(x) = p_1 F_1(x) + \cdots + p_M F_M(x) \qquad (6.28)$$

is required, where $p_i$, $\sum_{i=1}^{M} p_i = 1$, are the shares of the separate individual distributions $F_i(x)$ in the mixture. Sampling the distribution mixture (6.28) involves two basic steps:

• Random selection of an individual distribution to be sampled.
• Random sampling from the selected distribution.

Random selection of an individual distribution can be done using Algorithm 6.5 for sampling from a discrete distribution with mass function

X:         1    2    . . .   M
P(X = x):  p1   p2   . . .   pM
6.3
6.3 MONTE CARLO SIMULATION ALGORITHMS FOR SOLVING RELIABILITY AND RISK MODELS

6.3.1 AN ALGORITHM FOR SOLVING THE GENERAL RELIABILITY MODEL
R¼
Z Z
Z ...
f1 ðx1 Þ f2 ðx2 Þ fn ðxn Þdx1 dx2 . . . dxn
x1 ;...;xn 2S
where the integration is performed within the safe domain S. X1, X2, . . . , Xn are statistically independent controlling random variables where f1(x1), f2(x2), . . . , fn(xn) are their marginal probability densities. The Monte Carlo algorithm for performing the integration is as follows. Algorithm 6.7. x[n]: /* Global array containing the current values of the n random variables */
124
Reliability and Risk Models
procedure Generate_random_variable (j ) { /* Generates a realisation (value) of the j-th controlling random variable x[j] */ } function Limit_state( ) { /* For a particular combination of values of the random variables x[1], . . . , x[n] returns 1 or 0 depending on whether failure state is present or not */ } /* Main algorithm */ No_failure_counter ¼ 0; For i ¼ 1 to Number_of_trials do { /* Generate the i-th set of n controlling random variables */ For j ¼ 1 to n do Generate_random_variable( j ); Failure ¼ Limit_state(); /* Checks for a limit (failure) state using the current values of the random variables in the array x[n] */ If (not Failure) then No_failure_counter ¼ No_failure_counter þ 1; } Reliability_on_demand ¼ No_failure_counter / Number_of_trials; In the simulation loop controlled by variable i, a second nested loop has been defined, controlled by variable j whose purpose is generating instances of all controlling random variables. After obtaining a set of values for the random variables, the function Limit_state() is called to check whether the set of values for the random variables defines a point in the failure region. If the set of values defines a point in the safe region, the ‘no-failure’ counter is incremented. After the end of the Monte Carlo trials, reliability on demand is obtained as a ratio of the value stored in the ‘no-failure’ counter and the total number of Monte Carlo trials.
6.3.2
MONTE CARLO EVALUATION OF THE RELIABILITY ON DEMAND DURING A LOAD–STRENGTH INTERFERENCE
Main components of the load–strength interference model are: (i) a model for the strength distribution; (ii) a model for the load distribution and (iii) the load–strength interference integral. A Monte Carlo simulation approach for solving the load–strength reliability integral will be illustrated by an example related to calculating the risk of failure of a critical component. For the purposes
Solving Reliability and Risk Models Using a Monte Carlo Simulation
125
of the comparison with the numerical solution presented in section 5.4, the distributions and their parameters are the same as in Example 5.1. Accordingly, the distribution of strength is specified by the three-parameter Weibull distribution x x0 m FS ðxÞ ¼ 1 exp with parameters m ¼ 3.9, ¼ 297:3 and x0 ¼ 200 MPa and the load distribution is specified by the maximum extreme value distribution x FL ðxÞ ¼ exp exp with parameters ¼ 119 and ¼ 73:64. An algorithm in pseudocode which evaluates the reliability on demand using Monte Carlo simulation is presented below. Algorithm 6.8 Monte Carlo evaluation of the load – strength interference integral. function Weibull_rv( ) { /* Generates a Weibull-distributed random variable */} function Max_extreme_value_rv( ) { /* Generates an Extreme value random variable */} No_failure_counter ¼ 0; For i ¼ 1 to Number_of_trials do { /* Generates the i-th pair of load and strength */ Strength ¼ Weibull_rv( ); Load ¼ Max_extreme_value_rv( ); If (Strength > Load) then No_failure_counter ¼ No_failure_counter þ 1; } Reliability_on_demand ¼ No_failure_counter / Number_of_trials;. In the Monte Carlo simulation loop controlled by the variable i, instances of a random strength and a random load are generated in each trial. Their values are subsequently compared in order to check for a safe state. If a safe state is present (strength greater than load), the ‘no failure’ counter is incremented. Similar to the previous algorithm, after the end of the Monte Carlo trials, reliability on demand is obtained as a ratio of the no-failure counter value and the total number of Monte Carlo trials. The algorithms of the functions returning random strength following the Weibull distribution and random load following the maximum extreme value distribution are given in sections 6.2.11 and 6.2.12.
For the reliability on demand, 100 000 Monte Carlo trials (Number_of_trials = 100000) yielded 0.985, which coincides with the result from Example 5.1 obtained from direct integration.

In the case of multiple load applications, whose number is a random variable, the algorithm related to a single load application can be extended. Suppose that the load and strength distributions and the parameter values are as in the previous example. Suppose also that the finite time interval has a length a = 100 months and the number density of the load applications is 0.5 month⁻¹. An algorithm in pseudocode for Monte Carlo evaluation of the reliability in the specified finite time interval can be constructed as follows.

Algorithm 6.9

function Weibull_rv()
{ /* Generates a random variable following a Weibull distribution with specified parameters */ }
function Max_extreme_value_rv()
{ /* Generates a random variable following the maximum extreme value distribution with specified parameters */ }
function Generate_number_load_applications()
{ /* Generates a number of load applications following a homogeneous Poisson process with the specified number density, in the finite time interval (0, a) */ }

Failure_counter = 0;
For i = 1 to Number_of_trials do
{
  /* Generates a random strength */
  Strength = Weibull_rv();
  /* Generates the number of load applications */
  Num_load_applications = Generate_number_load_applications();
  /* Generates the load magnitudes and compares each load with the random strength */
  For k = 1 to Num_load_applications do
  {
    Load = Max_extreme_value_rv();
    If (Strength < Load) then { Failure_counter = Failure_counter + 1; break; }
  }
}
Reliability = 1 - Failure_counter / Number_of_trials;

A characteristic feature distinguishing this algorithm from the previous algorithm, dealing with reliability on demand, is the nested loop with control
variable k, accepting values from one to Num_load_applications (the number of load applications). A random strength is generated before entering the inner loop (the loop with control variable k) because the strength is the same for all load applications. For each load application, a random load is generated and subsequently compared with the random strength. If the strength is smaller than any of the applied loads, failure is registered by incrementing the failure counter (Failure_counter), after which the inner loop is exited immediately by the break statement. Dividing the content of the failure counter by the total number of trials gives the probability of failure, and subtracting this from unity gives the reliability associated with the time interval of length a = 100 months. For the reliability associated with the finite time interval of 100 months, a computer program in C++ based on Algorithm 6.9 yielded R(t = 100) = 0.60.
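Extending the earlier sketch to Algorithm 6.9 requires only a Poisson-distributed number of load applications per trial, with mean equal to (number density) × (interval length) = 0.5 × 100 = 50. Again this is an illustrative sketch, not the book's C++ program:

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double m = 3.9, eta = 297.3, x0 = 200.0;  // Weibull strength (assumed symbols)
    const double mu = 119.0, theta = 73.64;         // max extreme value load (assumed symbols)
    const long number_of_trials = 100000;

    std::mt19937 gen(2005);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::poisson_distribution<int> num_loads(0.5 * 100.0);  // homogeneous Poisson process in (0, a)

    long failure_counter = 0;
    for (long i = 0; i < number_of_trials; ++i) {
        // One strength per trial: the strength is the same for all load applications
        double strength = x0 + eta * std::pow(-std::log(1.0 - u(gen)), 1.0 / m);
        int n = num_loads(gen);
        for (int k = 0; k < n; ++k) {
            double load = mu - theta * std::log(-std::log(1.0 - u(gen)));
            if (strength < load) { ++failure_counter; break; }  // first overload fails the trial
        }
    }
    std::printf("R(t = 100) = %.2f\n",
                1.0 - (double)failure_counter / number_of_trials);
    return 0;
}

Up to sampling error, the printed value should be close to the R(t = 100) = 0.60 quoted above.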
6.4
MONTE CARLO SIMULATION OF THE LOWER TAIL OF THE STRENGTH DISTRIBUTION FOR MATERIALS CONTAINING FLAWS
In order to illustrate some of the Monte Carlo simulation techniques described earlier, we will demonstrate an algorithm for Monte Carlo simulation of fracture of a component with internal crack-like defects. As mentioned earlier, determining the probabilities from the lower tail of the strength distribution is of paramount importance for calculating the reliability on demand during load–strength interference. For materials with flaws, the lower tail of the strength distribution is affected significantly by the flaws. Many materials contain small cracks or crack-like flaws and fail prematurely because of a microcrack becoming unstable during loading. Consider a simple loading where a component is loaded by a force F. The force induces stresses in the component, and failure is caused by an unfavourable combination of flaw location, flaw size, flaw orientation and local stress state. For any specified value F* of the loading force, the probability of fracture is calculated following the algorithm described below. Randomly oriented cracks with a specified number density are generated in the volume of the component. As a first approximation, penny-shaped cracks can be assumed, whose size is defined solely by their radius. Before entering the Monte Carlo simulation, the distribution of the stress in the component is calculated using analytical techniques (Gere and Timoshenko, 1999) or numerical techniques (using, for example, finite elements). The number of cracks, following a homogeneous Poisson process in the volume of the component, is generated first using Algorithm 6.4. The locations of the cracks are generated as uniformly distributed points in the volume of the component using the method from section 6.2.8. The sizes of the cracks are generated by sampling the crack size distribution using, for example, the inverse
transformation method. Each crack is randomly oriented in space using the method from section 6.2.10. Subsequently, the local stress tensor at the crack location is calculated and, using fracture mechanics criteria relevant to mixed-mode fracture, it is determined whether the generated crack will be unstable (will cause failure). If the crack is unstable, the inner loop is exited immediately with a break statement, in order to avoid unnecessary checks for the rest of the cracks, because the component fails when at least one of the cracks becomes unstable. To determine the probability of fracture, the number of trials associated with fracture is divided by the total number of trials. Using this procedure, the lower tail of the strength distribution of any component with internal flaws can be built.

Algorithm 6.10 Direct Monte Carlo evaluation of the probability of overstress fracture of a loaded component with internal flaws.

procedure Calculate_stress_distribution()
{ /* Calculates the distribution of stresses in a loaded component using an analytical solution or an FEM solution */ }
procedure Calculate_principal_stresses(x, y, z)
{ /* Calculates the magnitude and the direction of the principal stresses at point (x, y, z) in the component */ }
function Generate_random_crack_size()
{ /* Samples the size distribution of the flaws and returns a random size of the crack at the selected location (x, y, z) */ }
procedure Random_orientation()
{ /* Generates the direction cosines of a randomly oriented penny-shaped crack with respect to the directions of the principal normal stresses, according to the method in section 6.2.10 */ }
function Poisson_variable()
{ /* Returns a random number of cracks in the volume of the component, according to Algorithm 6.4 in section 6.2.6. The cracks have a specified number density */ }
procedure Generate_Random_Crack_Location()
{ /* Generates a point with uniformly distributed coordinates in the volume of the component, according to the rejection method in section 6.2.8 */ }
function Is_crack_unstable()
{ /* Uses mixed-mode fracture criteria to check whether the crack is unstable and returns TRUE if the crack with the generated location, size and orientation initiates fracture */ }
Failure_counter = 0;
Calculate_stress_distribution();
For i = 1 to Number_of_trials do
{
  M = Poisson_variable(); /* Contains the random number of cracks for the current trial */
  For j = 1 to M do
  {
    Generate_Random_Crack_Location(); /* Generates the coordinates x, y, z of a point defining a random location */
    Calculate_principal_stresses(x, y, z); /* Calculates the normal principal stresses acting at the crack location x, y, z */
    Generate_random_crack_size(); /* Samples the specified size distribution of the flaws and returns a random size */
    Random_orientation(); /* Generates direction cosines defining a random crack orientation with respect to the principal normal stresses */
    Unstable = Is_crack_unstable(); /* Checks whether the crack is unstable and, if so, returns TRUE (1) in the variable Unstable; otherwise it returns 0 */
    If (Unstable) then { Failure_counter = Failure_counter + 1; break; }
  }
}
Probability_of_fracture = Failure_counter / Number_of_trials;

The probability of fracture can be calculated for different magnitudes of the load F, and in this way the entire lower tail of the strength distribution can be constructed. The lower tail of the strength distribution can subsequently be used to calculate the probability of early-life failure or the reliability associated with a specified time interval. Parametric studies related to how critical design parameters affect the probability of failure and the lower tail of the strength distribution can also be conducted using this procedure. Varying the design parameters and building the lower tail of the strength distribution provides a powerful tool for producing optimised designs with increased reliability. The proposed Monte Carlo simulation technique can also be used for producing robust designs, insensitive to small variations of the critical parameters. In Chapter 10 we will demonstrate a much more efficient method than the direct Monte Carlo simulation described here. The method is based on a result linking the probability of fracture characterising a single random flaw to the probability of fracture characterising a population of random flaws.
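For completeness, here is a compact C++ skeleton of Algorithm 6.10 (illustrative only). All fracture mechanics content — the stress solution, the principal stresses and the mixed-mode instability criterion — is stubbed out, since it depends on the particular component; crack_is_unstable() and generate_random_crack() are hypothetical placeholders:

#include <cstdio>
#include <random>

struct Crack { double x, y, z, radius; /* plus orientation */ };

// Placeholder for the mixed-mode fracture criterion of the text
bool crack_is_unstable(const Crack&) { return false; }

// Placeholder: uniform location in a unit cube, simple size distribution
Crack generate_random_crack(std::mt19937& gen) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return { u(gen), u(gen), u(gen), 0.001 + 0.01 * u(gen) };
}

int main() {
    const long number_of_trials = 100000;
    const double mean_num_cracks = 5.0;  // (number density) x (component volume), assumed
    std::mt19937 gen(2005);
    std::poisson_distribution<int> num_cracks(mean_num_cracks);

    long failure_counter = 0;
    for (long i = 0; i < number_of_trials; ++i) {
        int m = num_cracks(gen);              // random number of cracks in this trial
        for (int j = 0; j < m; ++j)
            if (crack_is_unstable(generate_random_crack(gen))) {
                ++failure_counter;            // one unstable crack fails the component
                break;                        // skip the remaining cracks
            }
    }
    std::printf("Probability of fracture = %.4f\n",
                (double)failure_counter / number_of_trials);
    return 0;
}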
APPENDIX 6.1

Indeed, if we denote $r = \sqrt{-2\ln u_1}$ and $\theta = 2\pi u_2$, these can be regarded as polar coordinates $(r, \theta)$ (Figure 6.8). The polar angle $\theta$ follows a uniform distribution with density $1/(2\pi)$ in the interval $(0, 2\pi)$. According to the algorithm related to generating a random variable with a negative exponential distribution (see equation 6.8), the square $d = r^2$ of the polar radius follows an exponential distribution $(1/2)\exp(-d/2)$ with parameter $\lambda = 1/2$. Because $u_1$ and $u_2$ are statistically independent, the polar angle and the polar radius, which are functions of $u_1$ and $u_2$, are also statistically independent (see Appendix B).

$$d = x^2 + y^2 \qquad (6A.1)$$

$$\theta = \arctan(y/x) \qquad (6A.2)$$

The joint distribution of $d = r^2$ and $\theta$ is therefore a product of the marginal density distributions of $d = r^2$ and $\theta$: $[1/(2\pi)] \times (1/2)\exp(-d/2)$ (Ross, 1997). The joint distribution $f(x, y)$ of $x$ and $y$ equals the joint distribution of $d$ and $\theta$ multiplied by the absolute value of the Jacobian of the transformation (DeGroot, 1989):

$$|J| = \left| \det \begin{pmatrix} \partial d/\partial x & \partial d/\partial y \\ \partial \theta/\partial x & \partial \theta/\partial y \end{pmatrix} \right| = 2$$

Finally, for the joint distribution of $x$ and $y$ we obtain

$$f(x, y) = \frac{1}{2\pi} \exp[-(x^2 + y^2)/2]$$
Figure 6.8 Polar coordinates r and θ of the point (x, y)
This joint distribution can be factorised as

$$f(x, y) = g(x)\, g(y) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2) \cdot \frac{1}{\sqrt{2\pi}} \exp(-y^2/2)$$

where

$$g(x) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2) \quad \text{and} \quad g(y) = \frac{1}{\sqrt{2\pi}} \exp(-y^2/2)$$
are the probability density functions of two statistically independent standard normal random variables X and Y.
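This is the justification of the Box–Muller method for generating standard normal variates; a minimal C++ sketch (illustrative, not the book's code) is:

#include <cmath>
#include <random>
#include <utility>

// Box-Muller transform (sketch): maps two independent uniform(0,1) variates
// u1, u2 into two independent standard normal variates via the polar
// coordinates r = sqrt(-2 ln u1), theta = 2*pi*u2 derived above.
std::pair<double, double> box_muller(std::mt19937& gen) {
    const double pi = 3.14159265358979323846;
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double u1 = 1.0 - u01(gen);                 // in (0,1]; avoids log(0)
    double u2 = u01(gen);
    double r = std::sqrt(-2.0 * std::log(u1));  // polar radius
    double theta = 2.0 * pi * u2;               // polar angle
    return { r * std::cos(theta), r * std::sin(theta) };
}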
7
Analysis of the Properties of Inhomogeneous Media Using Monte Carlo Simulations
Material properties are often characterised by intrinsic variability which cannot be eliminated. Variability associated with properties is often attributable to the structural inhomogeneity. Many materials are inhomogeneous, for example ferrous and non-ferrous alloys, composites, ceramics and plastics. Determining probability bounds on the variation of properties due to inhomogeneity is therefore an important issue related to reliability of materials. Variability of properties associated with inhomogeneous structures is intrinsic. It is not due for example to measurement imprecision or inability to control the experiment and therefore cannot be reduced or eliminated. The influence of microstructural inhomogeneity is particularly strong for fracture properties compared to other properties (e.g. the material’s moduli). The reason is that fracture is particularly sensitive to microstructural heterogeneities associated with local zones of weak resistance to crack extension. Often, ahead of crack fronts in composite materials or in materials containing defects, the number density of the defects varies locally, which is another manifestation of the microstructural inhomogeneity. Reliability of materials is an interplay of materials science and applied statistics. The intersections defined by the keywords ‘statistics’, ‘structure’ and ‘properties’, determine key research areas (Figure 7.1). Important topics from the intersection ‘statistics and structure’ for example, are the spatial statistics of flaws and the spatial statistics of duplex structures.
Figure 7.1 The intersection of 'statistics', 'structure' and 'properties' defines key research areas in reliability of materials
Consider a case where the property y under consideration (e.g. yield stress, toughness, density) is solely determined by the quantity of one of the structural constituents in a sampled volume from a duplex structure. In this case, the property value y can be expressed by a functional relationship y(x) between the property value y and the quantity x of one of the structural constituents in the sampled volume. A significant predicament is that the functional relationship y(x) is usually unknown. In the next section, we will show that despite this difficulty, under certain conditions, the probabilities from the lower tail of the distribution of properties can still be determined.
7.1
ANALYSIS OF INHOMOGENEOUS MICROSTRUCTURES USING RANDOM TRANSECTS
A powerful method for investigating inhomogeneous structures is by random transects: linear, areal or volume transects. Random transects have been used in quantitative microscopy and stereology for estimating the volume fraction and surface area of various microstructural constituents (Weibel, 1980; Underwood, 1969). A line transect, for example, can be defined as a segment AB of given length L 'cast' in a random fashion into the inhomogeneous medium. A realisation of a random placement of the transect can be obtained with a uniform distribution of the orientation angle and of the coordinates of the midpoint M over a microstructural image (Figure 7.2). The ratio Lβ/L of the length Lβ of the transect lying in one of the microstructural constituents (e.g. β) to the entire length L of the transect will be referred to as the intercept. For a three-dimensional transect, the intercept is defined by the ratio Vβ/VT of the volume Vβ intercepted from one of the microstructural constituents (e.g. β) to the entire volume VT of the transect (Figure 7.2). An important characteristic of duplex structures is the distribution of the intercepts.
Figure 7.2 Analysis of an inhomogeneous medium using random transects (a linear transect AB of length L with midpoint M, intercepting a length Lβ from the β-constituent; VT denotes the volume of a volume transect)
For a duplex material containing two phases characterised by different densities, for example, the density distribution of a 3D sample taken from the material will mirror the distribution of the intercepts from the microstructural constituents. A Charpy V-notch test of multi-run C–Mn welds also illustrates the importance of the transect concept. The type of microstructure in which the Charpy V-notch (transect) is located plays a crucial role in the distribution of the impact energy (see Figure 4.5a).
7.2
EMPIRICAL CUMULATIVE DISTRIBUTION OF THE INTERCEPTS
The random variable X, referred to as the intercepted fraction or simply the intercept, which takes values x = Lc/L, where the index c stands for one of the existing structural constituents (α or β), is an important tool for analysing inhomogeneous media. While the expected value E(X) of the intercept X does not depend on the size of the transect (length, area, volume), the variance of the intercept is a function of the transect size. The mean of the intercepted fraction X from one of the structural constituents, for example β, associated with transects of different size L, is an unbiased estimate of the areal/volume fraction ξβ of the β-constituent: E(X) = ξβ; ξα + ξβ = 1. Indeed, let the transect of size L be divided into equal, very small elements ΔLj (ΔL1 = ΔL2 = ... = ΔL), so small that it can safely be assumed that the probability of a particular ΔLj sampling the β-constituent is P(ΔLj ∈ β) = ξβ (the probability that ΔLj will not sample β is 1 − ξβ = ξα). Thus, each small element ΔLj is associated with a random variable Uj indicating whether the element samples the β-constituent or not. Consequently, the variables Uj have a Bernoulli distribution defined as follows: Uj takes the value '1' with probability ξβ and
'0' with probability ξα. Any line transect of arbitrary size L can be divided into q sufficiently small segments with lengths ΔL, where q = L/ΔL. Then, for a random placement of the transect in the microstructure, the random variable 'intercepted fraction X' can be presented as

$$X = \frac{U_1 \Delta L + U_2 \Delta L + \cdots + U_q \Delta L}{q\,\Delta L} = (1/q) \sum_{j=1}^{q} U_j \qquad (7.1)$$

According to a well-known result from the theory of probability (see Appendix B), the expected value of a sum of random variables is equal to the sum of the expected values of the random variables, irrespective of whether the variables are statistically independent or not. In fact, the $U_j$ in equation (7.1) are correlated random variables because, for any pair ΔLj and ΔLj+1 of adjacent small elements, if ΔLj samples β, ΔLj+1 is also likely to sample β. Since the expected value of any $U_j$ is

$$E(U_j) = 1 \times \xi_\beta + 0 \times (1 - \xi_\beta) = \xi_\beta$$

for the expected value E(X) of the intercepted fraction, the expression

$$E(X) = (1/q) \sum_{j=1}^{q} E(U_j) = \frac{q\,\xi_\beta}{q} = \xi_\beta \qquad (7.2)$$
is obtained. Clearly, the expected value of the intercepted fraction is equal to the areal/volume fraction of the β-constituent and does not depend on the length L of the transect. The cumulative distribution function F(x, L) of the intercepted fraction and the corresponding probability density function f(x, L) = ∂F(x, L)/∂x, corresponding to a specified size L of the transect, are important characteristics of inhomogeneous media sampled by transects. Both functions depend on the size L of the transect. Using the probability density function f(x, L), equation (7.2) can be presented as

$$E(X) = \int_0^1 x f(x, L)\, dx = \xi_\beta \qquad (7.3)$$
Only in trivial cases (e.g. randomly distributed detached spheres in a matrix) can the distribution functions F(x, L) and f(x, L) be obtained analytically (Kendall and Moran, 1963). In the general case of microstructural constituents with irregular shapes, the distributions of the intercepts can only be obtained by a Monte Carlo simulation.
Figure 7.3 Inhomogeneous lamellar eutectic composed of β-phase (dark zones) and α-phase (white zones)
This consists of sampling images of the microstructure and registering the intercepted fractions. Figure 7.3, for example, is a scanned image of a cast microstructure (ASM, 1985a). The lamellar eutectic contains dark zones (β-phase, ξβ = 0.4) and white zones (α-phase, ξα = 0.6). This microstructural image has been scanned as a bitmap file and read by a computer simulation procedure (Kay, 1995; Rimmer, 1993). The longer side of the microstructural section used in the simulation contains 640 length units. A random placement of the transect is defined by three random numbers: two random coordinates of the middle point M and a random angle which the transect subtends with the horizontal axis (Figure 7.2). The pixels along the line transect were scanned using the Bresenham algorithm (Hearn and Baker, 1997; Foley et al., 1996) and the number of pixels belonging to β was counted. Dividing this number by the total number of pixels composing the transect gives the intercept from β. Empirical cumulative distributions of the intercepts from β, for different lengths of the line transect (L = 80, L = 20 and L = 8 units), are given in Figure 7.4. With increasing size of the transect, the probability of intercepting almost entirely β or almost entirely α decreases. With increasing transect size, the probability of intercepting fractions far from the mean areal/volume fraction decreases, while the probability of intercepting fractions close to the mean areal/volume fraction increases. In other words, for large transects the probability density function of the intercepts from β peaks at ξβ, and large-size transects are associated with large values of the probability density function at ξβ. This has been illustrated in Figure 7.5.
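A sketch of this sampling procedure in C++ (illustrative; the image representation and the pixel stepping are simplified stand-ins for the bitmap reading and the Bresenham algorithm mentioned above):

#include <cmath>
#include <random>
#include <vector>

// Casts one random line transect of length L over a binary image
// (true = pixel belongs to the beta-constituent) and returns the
// intercepted fraction from beta.
double random_intercept(const std::vector<std::vector<bool>>& image,
                        double L, std::mt19937& gen) {
    const double pi = 3.14159265358979323846;
    const int H = (int)image.size(), W = (int)image[0].size();
    std::uniform_real_distribution<double> ux(0.0, W), uy(0.0, H), ua(0.0, pi);

    // Random placement: midpoint (mx, my) and orientation angle
    const double mx = ux(gen), my = uy(gen), angle = ua(gen);
    const double dx = std::cos(angle), dy = std::sin(angle);

    int n = (int)std::ceil(L), beta_pixels = 0, total = 0;
    for (int k = 0; k <= n; ++k) {                  // step along the segment AB
        double t = -L / 2.0 + (L * k) / n;
        int x = (int)(mx + t * dx), y = (int)(my + t * dy);
        if (x < 0 || x >= W || y < 0 || y >= H) continue;  // outside the image
        ++total;
        if (image[y][x]) ++beta_pixels;
    }
    return total > 0 ? (double)beta_pixels / total : 0.0;
}

Repeating random_intercept() many times for a fixed L and accumulating the results gives the empirical cumulative distribution of the intercepts for that transect length.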
Figure 7.4 Empirical cumulative distributions of the intercepts from β (x = Lβ/L) for different lengths of the line transect (L = 8, L = 20 and L = 80); the critical intercepted fractions x1 and x2 discussed in the text are marked on the horizontal axis
Figure 7.5 With increasing size of the transect (L4 > L3 > L2 > L1), the probability density distribution of the intercept Lβ/L from the β-constituent peaks at ξβ
Suppose, for example, that intercepting a critical amount x1 of β by the transect compromises (causes a significant decrease in) the property value. Such is the case with the toughness of a specimen cut from a duplex material containing brittle particles which initiate cleavage fracture easily. The computer simulation result in Figure 7.4 shows that although the probability that a small transect will sample a large amount of β is significant, small transects are associated with a smaller probability of 'compromised toughness' because the probability P(x ≤ x1) is larger (Figure 7.4). Next, suppose that the property value is compromised substantially only if the transect intercepts a relatively large critical amount x2 (Figure 7.4). This is, for example, the case where the yield strength of a specimen cut from a duplex material containing hard and soft constituents is compromised if too much soft constituent has been sampled. As can be verified from Figure 7.4, the small-size transects are now associated with a larger probability of intercepting large fractions of β and a compromised yield strength. Since the distribution of properties is significantly affected by the distribution of intercepts, the empirical cumulative distribution of the intercepts is an important fingerprint of inhomogeneous structures and contains valuable information regarding the risk of poor properties.
7.3
INTERCEPT VARIANCE
Another important characteristic of inhomogeneous media is the variance of the intercepted fraction, denoted by Vc(X, L), c ∈ {α, β}. The variance is a function of the size of the transect and, for a transect of particular size L, it is defined by the integral

$$V_c(X, L) = \int_0^1 f(x, L)\,[x - \xi_c]^2\, dx = \int_0^1 x^2 f(x, L)\, dx - \xi_c^2 \qquad (7.4)$$

obtained using the well-known formula for the variance of a random variable X: V(X) = E(X²) − [E(X)]². The integral on the right-hand side of equation (7.4) is the expected value of the squared intercept: $\int_0^1 x^2 f(x, L)\, dx = E(X^2)$. For a transect of zero size (L = 0), which consists of a single point, the probability density distribution f(x, 0) of the intercept X from the β-phase, for example, is the discrete Bernoulli distribution with parameter ξβ, i.e. the intercepted fraction is '1' with probability ξβ and '0' with probability 1 − ξβ = ξα. The variance of the intercepted fraction X from β then becomes

$$V_\beta(X) = E(X^2) - [E(X)]^2 = \xi_\beta - \xi_\beta^2 = \xi_\alpha \xi_\beta \qquad (7.5)$$

because E(X) = 1 × ξβ + 0 × ξα = ξβ and E(X²) = 1² × ξβ + 0² × ξα = ξβ.
For an infinitely large transect L and a random microstructure with no anisotropy, the transect intercepts a constant fraction x = ξc and the variance Vc(X, L) in equation (7.4) becomes zero. The relationship between the variance of the intercepted fraction and the size L of the transect (Figure 7.6) is an important fingerprint of inhomogeneous microstructures. If, for example, a variance of the intercept smaller than a particular threshold (e.g. Vth = 0.04 in Figure 7.6) causes a negligible variation in the property value, the material behaves as if quasi-homogeneous. Whether the material behaves as quasi-homogeneous or inhomogeneous depends on the size of the transect. The intersection of the horizontal line corresponding to the variance threshold Vth with the graph of the intercept variance defines a limiting transect size Lth (Figure 7.6). Accordingly, the same medium sampled with transects larger or smaller than Lth may behave as 'quasi-homogeneous' or 'inhomogeneous'. Thus, the variances of the intercepts characterising the different transect lengths (L = 80, L = 20 and L = 8) in Figure 7.4 are 0.029, 0.073 and 0.144, respectively. They were calculated from

$$V = (1/N) \sum_{i=1}^{N} (x_i - \xi_\beta)^2$$

where N is the number of trials (6000 in the simulation), xi is the intercept from β in the ith random placement of the transect, and ξβ = 0.4 is the areal/volume fraction of the β-constituent, which is also the mean of all intercepts xi. An important characteristic of the intercept variance is the ratio ΔV/ΔL (Figure 7.6), which measures the change of the intercept variance per specified change in transect size. The ratio is negative because the intercept variance decreases monotonically with increasing size of the transect.
Figure 7.6 Dependence of the intercept variance on the size of the transect (Todinov, 2002b); x-axis: transect size L, units; y-axis: intercept variance V. The threshold Vth = 0.04 separates the inhomogeneous domain from the quasi-homogeneous domain and defines the limiting transect size Lth
If the intercept variance decreases quickly with increasing transect size, relatively small transect sizes are sufficient to 'stabilise' the intercept variance at a low level, which guarantees consistent intercept and property values. Such a variation of the intercept variance is typical for fine-grained duplex structures (Figure 7.7). Conversely, if the intercept variance decreases slowly with increasing size of the transect, large transects are necessary to stabilise the intercept variance at a low level and attain stable property values. Such a variation of the intercept variance is typical for coarse-grained duplex structures. Consequently, the graph of the intercept variance has an important application: determining the minimum size of the sampler which stabilises the variation of the intercept (and the associated property) at low values. The intercept variance from sampling any inhomogeneous microstructure varies between the intercept variance of an ideal fine-dispersed structure (Figure 7.7, the vertical line A) and that of an ideal coarse-grained structure (Figure 7.7, the horizontal line B). Intercept variance curves of fine-grained structures are shifted towards the vertical line A, where very small transect sizes stabilise the intercept variance at a low level. Conversely, intercept variance curves characterising coarse-grained structures are shifted towards the horizontal line B, where relatively large transect sizes are needed to stabilise the intercept variance at a low level. With increasing size of the transect, the intercept variance decreases monotonically from ξα ξβ to 0, because larger transects are characterised by a smaller variance of the intercepted fraction.
Figure 7.7 Variation of the intercept variance with the transect size for fine and coarse duplex structures; the vertical line A corresponds to an ideal fine-dispersed structure, the horizontal line B to an ideal coarse-grained structure, and the variance at zero transect size is ξα ξβ ≤ Vmax = 0.25
The intercept variance can never exceed the value ξα ξβ and, since the maximum of ξα ξβ = ξβ(1 − ξβ) is attained for ξβ = 0.5, for any type of duplex structure the intercept variance cannot exceed the absolute upper bound Vmax = 0.25 (Figure 7.7). This bound is attained for a transect of zero length sampling a duplex microstructure for which ξα = ξβ = 0.5 (Todinov, 2002b). Although all transect sizes yield unbiased estimates of the volume fraction of the microstructural constituents, the estimates from larger transects are characterised by smaller errors. Consequently, the common division into 'quasi-homogeneous' and 'inhomogeneous' structures is conditional. For a small size of the transect, a 'fine-grained' structure can be characterised by a scatter typical of a coarse-grained structure. Conversely, a coarse inhomogeneous structure, if sampled by a large transect, can be characterised by a very small scatter of properties, typical of fine-grained inhomogeneous structures. An important potential application of the concept of 'intercept variance' is the topological optimisation of the microstructure, where the distribution and the shape of the second microstructural constituent are varied until the intercept variance is minimised. In another example of topological optimisation, a particular distribution of the second microstructural constituent may be sought which minimises the probability that a random transect of specified size will sample more than a certain fraction from the weaker constituent.
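The intercept-variance curve of Figure 7.6 can be estimated with the same sampling machinery; a short sketch reusing the hypothetical random_intercept() function from the example in section 7.2:

#include <cstdio>
#include <random>
#include <vector>

double random_intercept(const std::vector<std::vector<bool>>&, double,
                        std::mt19937&);  // defined in the earlier sketch

// Estimates V(X, L) = (1/N) * sum (x_i - xi_beta)^2 for several transect sizes
void variance_curve(const std::vector<std::vector<bool>>& image,
                    double xi_beta /* known areal fraction, e.g. 0.4 */) {
    std::mt19937 gen(2005);
    const int N = 6000;                      // trials, as in the simulation above
    for (double L : {8.0, 20.0, 80.0}) {
        double v = 0.0;
        for (int i = 0; i < N; ++i) {
            double x = random_intercept(image, L, gen);
            v += (x - xi_beta) * (x - xi_beta);
        }
        std::printf("L = %4.0f   V = %.3f\n", L, v / N);
    }
}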
7.4
LOWER TAIL OF MATERIAL PROPERTIES, MONOTONICALLY DEPENDENT ON THE AMOUNT OF THE INTERCEPTED FRACTION
Efficient probability bounds on the property values can be obtained if the property y (e.g. density) is solely determined by the amount x of the intercepted quantity from one of the constituents. In this case, the property value y can be expressed by a functional relationship y(x). Although this functional relationship is usually unknown, it can be shown that, provided y(x) is monotonic, efficient probability bounds regarding the property can be constructed under certain conditions. The probability bounds are determined using estimates of the expected order statistics of the intercepts and estimates of the expected order statistics of the property, obtained from a relatively small number of data sets. The expected order statistics of the intercepts can, for example, be obtained by sampling images of the microstructure. Assume that estimates $\bar{y}_1 < \bar{y}_2 < \ldots < \bar{y}_n$ of n expected order statistics of the property are available from m ordered data sets, each consisting of n measurements/samples. Each value $\bar{y}_r$ is an average $\bar{y}_r = (1/m)\sum_{j=1}^{m} y_{j,r}$, $r = 1, \ldots, n$, of the values with the same rank in the m data sets:
$$y_{1,1} < y_{1,2} < \ldots < y_{1,n}$$
$$y_{2,1} < y_{2,2} < \ldots < y_{2,n}$$
$$\cdots$$
$$y_{m,1} < y_{m,2} < \ldots < y_{m,n}$$

The estimates of the expected order statistics $\bar{x}_1 < \bar{x}_2 < \ldots < \bar{x}_n$ of the intercepts are formed in a similar fashion: each value $\bar{x}_r$ is an average $\bar{x}_r = (1/Q)\sum_{j=1}^{Q} x_{j,r}$, $r = 1, \ldots, n$, of the values with the same rank in Q ordered sets of intercepts (Q ≥ m):

$$x_{1,1} < x_{1,2} < \ldots < x_{1,n}$$
$$x_{2,1} < x_{2,2} < \ldots < x_{2,n}$$
$$\cdots$$
$$x_{Q,1} < x_{Q,2} < \ldots < x_{Q,n}$$

Each data set of intercepts consists of n random placements of the transect in the inhomogeneous medium. It can be shown that if the variance of the order statistic $\bar{x}_r$ of the intercepts is relatively small, the probability that the property Y will be smaller than the corresponding expected order statistic $\bar{y}_r$ is approximately equal to the probability that the intercept X will be smaller than the expected order statistic $\bar{x}_r$ of the intercepts, i.e.

$$P(Y \le \bar{y}_r) \approx P(X \le \bar{x}_r) \qquad (7.6)$$

and also

$$P(\bar{y}_r \le Y \le \bar{y}_s) \approx P(\bar{x}_r \le X \le \bar{x}_s) \qquad (7.7)$$
where r and s stand for the indices of any two order statistics with small intercept variances (Figure 7.8) (Todinov, 2002b). If the variance of the expected order statistic $\bar{x}_r$ of the intercepts is large, however, relationship (7.6) is no longer valid. The expected order statistics of the intercepts can be obtained directly from Monte Carlo simulations. The first step is to ensure by a Monte Carlo simulation that the selected expected order statistic $\bar{x}_r$ of the intercepts is characterised by a small variance. If this is the case, relationship (7.6) is valid. Accordingly, the probability that the property will lie between two expected order statistics $\bar{y}_r$ and $\bar{y}_s$, whose corresponding expected intercept order statistics $\bar{x}_r$ and $\bar{x}_s$ are characterised by small variances, can be determined from relationship (7.7) (Figure 7.8).
Figure 7.8 Distributions of the order statistics of properties and the order statistics of intercepts for a monotonically increasing function y(x)
One of the advantages of the proposed approach is that the probability bounds on the property are determined without prior knowledge of the functional relationship between the property and the intercepts from the structural constituents. A large number of properties vary monotonically with the intercept from one of the structural constituents. The proposed model can be applied whenever the investigated property depends monotonically on the intercept from one of the structural constituents and the expected order statistic of the intercepts is characterised by a small variance. No assumption of a probability distribution function regarding the property is needed, which is a significant advantage compared to other methods. The method can also be applied to single-phase microstructures whose grain size varies substantially. Suppose that the investigated property depends monotonically on the number W of grains intercepted by the transect. Then, if the estimates $\bar{w}_1 < \bar{w}_2 < \ldots$ of the expected order statistics of the number of intercepted grains are built, and also the corresponding estimates $\bar{y}_1 < \bar{y}_2 < \ldots$ of the expected order statistics of the investigated property, equations (7.6) and (7.7) will hold if the expected order statistics of the number of intercepted grains are characterised by small variances. The probability $P(Y \le \bar{y}_r)$ that the property will be smaller than its rth expected order statistic $\bar{y}_r$ is approximately equal to the probability that the number of intercepted grains W will be smaller than its rth expected order statistic $\bar{w}_r$: $P(Y \le \bar{y}_r) \approx P(W \le \bar{w}_r)$.
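The averaging of equal-rank values used above is straightforward to implement; a small illustrative C++ helper (hypothetical, not from the book):

#include <algorithm>
#include <vector>

// Sorts each of the m data sets and averages the values of equal rank
// across the sets, giving estimates of the n expected order statistics.
std::vector<double> expected_order_statistics(
        std::vector<std::vector<double>> data_sets) {  // m sets of n values
    const std::size_t m = data_sets.size();
    const std::size_t n = data_sets.front().size();
    for (auto& set : data_sets) std::sort(set.begin(), set.end());
    std::vector<double> mean_rank(n, 0.0);
    for (std::size_t r = 0; r < n; ++r) {
        for (std::size_t j = 0; j < m; ++j) mean_rank[r] += data_sets[j][r];
        mean_rank[r] /= m;  // estimate of the r-th expected order statistic
    }
    return mean_rank;
}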
8
Mechanisms of Failure

Following Dasgupta and Pecht (1991), the mechanisms of failure can be divided broadly into two categories:

1. Overstress failures: (i) brittle fracture, (ii) ductile fracture, (iii) yield, (iv) buckling, etc.
2. Wear-out failures: (i) fatigue, (ii) corrosion, (iii) stress-corrosion cracking, (iv) wear, (v) creep, etc.

Overstress failures occur when the load exceeds the strength. If the load is smaller than the strength, the load has no permanent effect on the component. Conversely, wear-out failures are characterised by damage which accumulates irreversibly and does not disappear when the load is removed. Once the damage tolerance limit is reached, the component fails (Blischke and Murthy, 2000). Here are some of the most important failure mechanisms.
8.1
OVERSTRESS FAILURES

8.1.1
BRITTLE FRACTURE
Brittle fracture is associated with the initiation of an unstable crack by cracking or decohesion of particles. Some of these are inclusions, but in other cases they may be an integral part of the microstructure. Under overstress, a high stress concentration can occur at these microflaws, which results in nucleation and sudden propagation of cracks. Cleavage fracture is the most common type of brittle fracture. It is associated with a relatively small plastic zone ahead of the crack tip and, consequently, with a relatively small amount of energy needed
for crack propagation. The very small amount of plastic deformation is indicated by the featureless, flat fracture surface. Due to the lack of plastic deformation, no blunting of the sharp cracks occurs and the local stress ahead of the crack tip can reach very high values which are sufficient to break apart the interatomic bonds (Ashby and Jones, 2000). As a result, the crack spreads between a pair of atomic planes, yielding a flat cleavage surface. The stress intensity factor is an important measure of the magnitude of the crack-tip stress field. It depends on the geometry of the cracked component and the load configuration. For a tensile mode of loading (mode I) of a surface crack, where the crack surfaces move directly apart (Dowling, 1999), the stress intensity factor is

$$K_I = Y \sigma \sqrt{\pi a} \qquad (8.1)$$
where Y ≈ 1.1 is a calibration factor, σ is the tensile stress far from the crack and a is the crack length. The critical value KIc of the stress intensity factor that would cause fracture is the fracture toughness of the material. Fracture toughness is to the stress intensity factor what strength is to stress (Hertzberg, 1996). Given that a surface crack of length a is present, in order to prevent failure the design stress must be smaller than $K_{Ic}/(Y\sqrt{\pi a})$. If the material has already been selected, for example because of its high strength and light weight, and the stress is fixed at a high level, then in order to prevent fracture the maximum allowable crack size must be $a = (1/\pi)(K_{Ic}/Y\sigma)^2$ (Hertzberg, 1996); a brief worked example is given at the end of this subsection. When the crack advances, it produces an increase of the surface area of the sides of the crack, a process which requires energy to break apart the interatomic bonds. The source of this energy is the elastic strain energy released as the crack propagates. Griffith (1920) first established a criterion for crack propagation, which states that the crack will propagate when the decrease in elastic strain energy is greater than the energy required to create the new crack surface (Hertzberg, 1996; Ashby and Jones, 2000). Typically, brittle fracture occurs in a transgranular manner. In cases where the grain boundaries are weak, however (containing, for example, a brittle film of segregated elements), the brittle crack can also propagate in an intergranular manner. Fractographic studies suggest that cleavage in steels usually propagates from cracked inclusions (Rosenfield, 1997; McMahon and Cohen, 1965). Cleavage usually involves plastic deformation to produce dislocation pile-ups and crack initiation from a particle which has cracked during the plastic deformation. The type of particles has a strong influence on the probability of brittle fracture initiation. Due to tensile tessellation stresses, for example, alumina or silicon-based inclusions are more likely to become initiators of brittle fracture or fatigue cracks, compared to sulphide inclusions of the same diameter and number (Murakami et al., 1989; Murakami and Usuki, 1989). Brooksbank and Andrews (1972) pointed out that the stresses in the matrix are considerably
lower around duplex oxide–sulphide inclusions compared to single-phase oxide inclusions. The reason is the sulphide cover of the inclusions which provides a compensating layer during thermal contraction. Consequently, the association of sulphides with oxides reduces the probability of cleavage crack initiation. One of the criteria that must be satisfied for a cleavage to be triggered is a sufficiently large local peak of the tensile stress (Wallin et al., 1984). The volume where triggering of cleavage is possible is relatively small; it samples a small number of particles and, as pointed out by Rosenfield (1997), statistical methods are needed to describe the initiation of brittle fracture.
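As a brief worked example of the allowable-crack-size relation quoted above (with assumed, purely illustrative values): for a steel with fracture toughness $K_{Ic} = 50\ \mathrm{MPa\sqrt{m}}$, a design stress $\sigma = 300$ MPa and $Y = 1.1$,

$$a_{max} = \frac{1}{\pi}\left(\frac{K_{Ic}}{Y\sigma}\right)^2 = \frac{1}{\pi}\left(\frac{50}{1.1 \times 300}\right)^2 \approx 7.3\ \mathrm{mm}$$

so surface cracks longer than about 7 mm would have to be screened out by inspection.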
8.1.2
DUCTILE FRACTURE
Unlike brittle fracture, ductile fracture is accompanied by extensive plastic deformation. Ductile fracture in tension is usually preceded by a localised reduction in diameter called necking. Very ductile metals may draw down to a point before separation. Large plastic deformation is also associated with the region of the crack tip. Indeed, the crack has the effect of concentrating the local stress. At a certain distance from the crack tip, the stress reaches the yield strength of the material and plastic flow occurs which forms a plastic zone. As the yield stress of the material increases, the size of the plastic zone decreases. Consequently, cracks in soft metals have large plastic zones as opposed to cracks in hard metals or ceramics whose plastic zones are small. Ductile fracture occurs by nucleation and growth of microvoids ahead of the crack tip. These nucleate heterogeneously, at sites where compatibility of deformation is difficult. The preferred sites for void formation are inclusions or second-phase particles. In high-purity metals, voids can form at grain boundary triple points. As the microvoids link up, the crack advances by ductile tearing. A crack growth occurring by ductile tearing consumes a significant amount of energy by plastic flow, and ductile fracture is associated with a considerable amount of absorbed energy. A bigger plastic zone is associated with a larger amount of absorbed energy and plastic deformation during crack extension. The significant amount of plastic deformation during crack extension is indicated by a rough fracture surface.
8.1.3
DUCTILE-TO-BRITTLE TRANSITION REGION
Several factors contribute to a cleavage-type fracture: (i) low temperature, (ii) a triaxial state of stress and (iii) a rapid rate of loading. The combination of a low temperature and a triaxial stress state in the vicinity of a stress raiser is responsible for a large number of service failures. Since the effect of the low
148
Reliability and Risk Models
temperature and the stress raiser is increased by a high rate of loading, mechanical tests such as the Charpy V-notch impact test have been devised to determine the susceptibility of materials to cleavage fracture. Charpy specimens have a square cross-section (10 × 10 mm) and contain a 45° notch, 2 mm deep, with a 0.25 mm root radius. A Charpy specimen is supported as a beam in a horizontal position and loaded behind the notch by the impact of a swinging pendulum. As indicated by the experimental scatter plot in Figure 4.2, there exists a strong positive correlation between the absorbed impact energy and the percentage of ductile fracture on the fracture surface of specimens taken from C–Mn welds. A large number of metals and alloys (e.g. weld steels) have an impact toughness which varies over a range of temperatures known as the ductile-to-brittle transition region (Hertzberg, 1996; ASM, 1985b). The transition region is the controlling factor in selecting a material with a low tendency to brittle failure. Accordingly, materials with a low transition temperature are to be preferred. With decreasing temperature, the absorbed impact energy varies in a sigmoidal fashion, starting from a relatively constant large value (upper shelf), decreasing sharply in the transition region and levelling off at a relatively constant small value (lower shelf) (Figure 8.1). As a result, even a normally ductile mild steel can become brittle.
Figure 8.1 Ductile-to-brittle transition region for multi-run C–Mn welds (Todinov et al., 2000): Charpy impact energy (J) versus temperature (K), with an upper shelf (ductile fracture), a transition region (cleavage + ductile fracture) and a lower shelf (brittle fracture)
Clearly, impact energy variation is due to the progressive change in the microscopic fracture mechanism from void nucleation, growth and coalescence, absorbing a large amount of energy, to cleavage fracture which absorbs a small amount of energy.
8.1.4
YIELDING
The yield strength is the stress required to produce a small specified amount of plastic deformation (Dieter, 1986). Stressing a component beyond its yield strength results in a permanent plastic deformation which, in most cases, constitutes failure. This is the reason why the yield strength is an important design parameter. Plastic deformation is closely associated with the shear stresses. For a triaxial stress state, the von Mises theory states that yielding occurs when the equivalent stress σe, given by

$$\sigma_e = \frac{\sqrt{2}}{2}\left[(\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2\right]^{1/2} \qquad (8.2)$$
exceeds the yield strength from a uniaxial tensile test. In equation (8.2), σ1, σ2 and σ3 are the principal normal stresses. The von Mises yield criterion implies that yielding is a function of all three values of the principal shear stresses and is independent of the hydrostatic component of the stress. The Tresca theory states that yielding takes place when the maximum shear stress

$$\tau_{max} = \frac{\sigma_1 - \sigma_3}{2} \qquad (8.3)$$
where σ1 and σ3 are the largest and the smallest normal principal stresses, exceeds the shear strength associated with yielding in the uniaxial tension test. Both criteria are widely used to predict yielding of ductile materials, and both of them indicate that hydrostatic stress does not affect yielding. For various stress states, the differences in the predictions from the two criteria do not exceed about 15% (Dowling, 1999).
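A short C++ check of both criteria for a given principal stress state (the stress values and the yield strength are assumed example values):

#include <cmath>
#include <cstdio>

// Illustrative check of the von Mises (8.2) and Tresca (8.3) criteria for a
// principal stress state s1 >= s2 >= s3 (values in MPa are assumed examples).
int main() {
    double s1 = 250.0, s2 = 100.0, s3 = -50.0;   // principal normal stresses
    double yield_strength = 300.0;               // from a uniaxial tensile test

    double von_mises = std::sqrt(0.5 * ((s1 - s2) * (s1 - s2) +
                                        (s2 - s3) * (s2 - s3) +
                                        (s3 - s1) * (s3 - s1)));
    double tresca_equivalent = s1 - s3;          // 2 * tau_max, compared with the yield strength

    std::printf("von Mises: %.1f MPa, yields: %s\n", von_mises,
                von_mises > yield_strength ? "yes" : "no");
    std::printf("Tresca:    %.1f MPa, yields: %s\n", tresca_equivalent,
                tresca_equivalent > yield_strength ? "yes" : "no");
    return 0;
}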
8.2
WEAR-OUT FAILURES

8.2.1
FATIGUE FAILURES
Fatigue failures are often associated with components experiencing cyclic stresses or strains which produce permanent damage. This accumulates until
it develops into a crack which propagates and leads to failure. The process of damage accumulation and failure due to cyclic loading is called fatigue. A comprehensive discussion of the different factors affecting the fatigue life of components can be found in Hertzberg (1996). Usually, the fatigue life of machine components is a sum of a fatigue crack initiation life and a fatigue crack propagation life. Various probabilistic models have been devised to describe these stages (Bogdanoff and Kozin, 1985; Tryon and Cruse, 1997; Ortiz and Kiremidjian, 1988; Cox, 1989). The fatigue life of cast aluminium components and powder metallurgy alloys, for example, is strongly dependent on the initiating defects (e.g. pores, pits, cavities, inclusions, oxide films). Large pores can be regarded as ready fatigue cracks (Murakami et al., 1989; Murakami and Usuki, 1989). It has been reported (Ting and Lawrence, 1993) that in a cast aluminium alloy the dominant fatigue cracks (the cracks which caused fatigue failure) initiated from near-surface casting pores in polished specimens or from cast-surface texture discontinuities in as-cast specimens. For fatigue cracks emanating from casting pores, the nucleation life was almost non-existent (Ting and Lawrence, 1993). The crack growth rate da/dN is commonly estimated from the Paris–Erdogan power law (Hertzberg, 1996):

$$da/dN = C (\Delta K)^m \qquad (8.4)$$

From equation (8.4), the fatigue life N can be estimated from the integral

$$N = \int_{a_i}^{a_f} \frac{da}{C(\Delta K)^m} \qquad (8.5)$$
where C and m are material constants and ΔK is the stress intensity factor range. The integration limits ai and af are the initial and final fatigue crack lengths. The fatigue crack growth rate increases with increasing crack length. Most of the loading cycles are expended on the early stage of crack extension, when the crack is small. During the late stages of fatigue crack propagation, a relatively small number of cycles is sufficient to extend the crack until failure. This is the reason why fatigue life is so sensitive to the initial length ai of the initiating flaws (a numerical illustration of integral (8.5) is given at the end of this subsection). During fatigue life predictions, the distribution of the initial lengths of the fatigue cracks is commonly assumed to be the size distribution of the surface (subsurface) discontinuities and pores. It is implicitly assumed that the probability of fatigue crack initiation at a particular defect is equal to the probability of its existence in the stressed volume. This assumption, however, leads to overly conservative predictions for the fatigue life. A more refined approach
should include an assessment of the probability with which the defects in the material initiate fatigue cracks (Todinov, 2001c). Sharp notches in components result in a high stress concentration which reduces the fatigue life and promotes early fatigue failures. Such are sharp corners, keyways, holes, abrupt changes in cross-section, etc. Fatigue cracks on rotating shafts often originate in badly machined fillet radii. Fillet radii may seem insignificant, but they are critical reliability elements and their manufacturing quality should always be checked (Thompson, 1999). Fatigue crack initiation is also promoted at the grooves and microcrevices of rough surfaces. These can be removed if appropriate treatment (grinding, honing and polishing) is prescribed (Ohring, 1995). Fatigue failures can be significantly reduced if the development of a fatigue crack is delayed by introducing compressive residual stresses at the surface (see Chapter 13). Such a compressive stress causes crack closure, which substantially delays crack initiation and decreases the rate of crack propagation. One of the reasons is that the compressive stress subtracts from the loading stress and results in a smaller effective stress. Eliminating low-strength surfaces can significantly reduce early-life failures due to rapid fatigue or wear. This can be achieved by:
• eliminating the soft decarburised surface after austenitisation of steels;
• eliminating surface discontinuities, folds and pores;
• eliminating structure with large grains at the surface;
• strengthening the surface layers by surface hardening, carburising, nitriding and deposition of hard coatings (Budinski, 1996). For example, TiC, TiN and Al2O3 coatings substantially delay tool wear.

Early-life failures due to rapid wear can be substantially reduced by specifying appropriate lubricants. These create interfacial incompressible films that keep the surfaces from contacting.
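As a brief illustration of the fatigue-life integral (8.5), the following C++ sketch integrates it numerically under the assumption ΔK = Δσ√(πa) (geometry factor Y = 1); the constants C and m, the stress range and the crack lengths are purely illustrative:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double C = 1.0e-11, m = 3.0;      // Paris-Erdogan constants (assumed)
    const double delta_sigma = 100.0;       // stress range, MPa (assumed)
    const double a_i = 0.001, a_f = 0.025;  // initial/final crack lengths, m

    // Trapezoidal integration of da / (C * (delta_K)^m)
    const int steps = 100000;
    double h = (a_f - a_i) / steps, N = 0.0;
    for (int k = 0; k <= steps; ++k) {
        double a = a_i + k * h;
        double dK = delta_sigma * std::sqrt(pi * a);
        double f = 1.0 / (C * std::pow(dK, m));
        N += (k == 0 || k == steps) ? 0.5 * f : f;
    }
    N *= h;
    std::printf("Estimated fatigue life: %.0f cycles\n", N);
    return 0;
}

Because da/dN grows with crack length, most of the computed life is consumed while the crack is still close to its initial length a_i, in line with the sensitivity to initiating flaws discussed above.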
8.2.2
FAILURES DUE TO CORROSION AND EROSION
A material with low corrosion or erosion resistance, selected for equipment working in aggressive environments (large environmental stresses), will promote early wear-out failures. Interfaces operating in sea water that include incompatible metals far apart in the galvanic series can promote rapid galvanic corrosion. In many cases pitting occurs, which is a particularly insidious form of corrosion: the component may be lost due to corrosion of a small amount of metal causing perforation through the thickness of the component (Budinski, 1996). Austenitic stainless steels, for example, are particularly prone to pitting attack in salt water. Crevice corrosion may occur in poorly gasketed pipe flanges and under bolt heads. Similar to pitting, crevice corrosion can be very destructive because of the localised damage.
Stress corrosion cracking (SCC) and hydrogen embrittlement (HE), collectively referred to as environment-assisted cracking (EAC), are responsible for a substantial number of field failures. Stress corrosion cracking, for example, is a spontaneous corrosion-induced cracking under static stress. Some materials which are practically inert in a particular corrosive environment become susceptible to corrosion cracking when stress is applied. The stress promoting stress corrosion cracking need not originate from external loading; it can also be due to the combined action of residual or thermal stresses and the corrosive environment. Aluminium alloys, for example, are prone to stress corrosion cracking in chloride solutions. Hydrogen embrittlement refers to the phenomenon where certain metal alloys experience a significant reduction in ductility when atomic hydrogen penetrates into the material. During welding, hydrogen can diffuse into the base plate while it is hot. Subsequently, embrittlement may occur upon cooling, by a process referred to as cold cracking in the heat-affected zone of the weld (Hertzberg, 1996). Hydrogen can also be generated by localised galvanic cells where a sacrificial coating corrodes, during pickling of steels in sulphuric acid and during electroplating. The atomic hydrogen diffuses into the metal substrate and causes embrittlement. Increasing strength tends to enhance a material's susceptibility to hydrogen embrittlement. High-strength steels, for example, are particularly susceptible to hydrogen embrittlement. Poor design of the flow paths of fluids containing abrasive material also promotes rapid erosion and failures. Proper material selection and design can significantly minimise the effect of corrosion. Structural design features promoting rapid corrosion and erosion should be avoided (Mattson, 1989). Factors which promote failures due to rapid corrosion are, for example: lack of crevice control (e.g. joints or gaskets that can accommodate the corrodent); creating interfaces working in aggressive environments with metals far apart in the galvanic series; and lack of cathodic protection and corrosion allowance. A design based on a temperature lower than the actual one may promote rapid anode wastage, decreased cathodic protection and early wear-out failure due to rapid corrosion. Often, material processing during manufacturing and assembly decreases corrosion resistance. For example, the corrosion resistance of welded duplex or super duplex steels decreases significantly in the area of the heat-affected zone of the weld. Designs associated with high static tensile stresses promote stress corrosion cracking. An efficient prevention technique is consulting corrosion databases in order to avoid environment–material combinations promoting rapid corrosion. Proper protective coatings, heat treatment, diffusion treatment and surface finish can substantially decrease the corrosion rate. A number of design solutions preventing corrosion are discussed in Budinski (1996).
8.3
EARLY-LIFE FAILURES
Early-life failures occur within a very short period from the start of the design life of the installed equipment. Because they usually occur during the payback period and are also associated with substantial losses due to warranty payments, they have a strong negative impact on the financial revenue. Early-life failures also result in loss of reputation and market share. Early-life failures are usually caused by:

• Poor design
• Defects in the material from processing
• Defects from manufacturing
• Inadequate materials
• Poor inspection and quality control
• Misassembly, poor workmanship and mishandling
• Human errors.
Most of the early-life failures are overstress failures which occur during the infant mortality region of the bathtub curve. They are often caused by inherent defects in the system due to poor design, manufacturing and assembly. Human errors during design, manufacturing, installation and operation account for a significant number of early-life failures. Some of the causes of these errors are: (i) lack of training and experience; (ii) time pressure and stress; (iii) overload and fatigue; (iv) unfamiliarity with the equipment and the necessary procedures; (v) poor discipline, housekeeping and safety culture; (vi) poor communication between designers, manufacturers and installers, and lack of feedback and interaction between teams and individuals; (vii) poor management of the interaction between teams working on the same project; (viii) poor specifications, documentation and procedures.
8.3.1
INFLUENCE OF THE DESIGN ON EARLY-LIFE FAILURES
Inadequate design is one of the most common reasons for early-life failures. Poor knowledge and underestimation of the dynamic environmental loads the equipment is likely to experience and the associated stress magnitudes result in designs prone to early-life failures. A common design error is the underestimation of the actual load/stress magnitude the equipment is likely to experience in service. For example, underestimating the working pressure and temperature may cause an early-life failure of a seal and a release of harmful chemicals into the environment. An early-life failure of a critical component may also be caused by unanticipated eccentric loads due to weight of additional components or by high-amplitude thermal stresses during start-up regimes. Early-life failures are also promoted by failure to account in the design for the extra loads
during assembly. Installation loads are often the largest loads a component is ever likely to experience. Triaxial tensile loading stress states contribute to early-life failures. One of the reasons is that a triaxial tensile stress state is associated with large normal opening tensile stresses on the crack flanks, the magnitude of which is large irrespective of the crack orientation. For loading associated with one or more normal principal compressive stresses, by contrast, only for particular orientations will a crack be unstable. Loading leading to stress tensors with three principal tensile stresses should be avoided in design where possible. The volumes where such stress tensors are present should be reduced, in order to reduce the probability that a crack-like flaw will reside inside. Loading characterised by a stress tensor with one tensile and two compressive principal stresses, for example, is preferable to loading characterised by two tensile and one compressive principal stress. In order to improve the reliability of products, design analysis methods such as FMEA (failure mode and effects analysis) (MIL-STD-1629A, 1977) and its extension FMECA (failure modes, effects and criticality analysis), which includes a criticality analysis, can be used (Andrews and Moss, 2002). These ensure that as many potential failure modes as possible have been identified and their effect on the system performance assessed. The objective is to identify critical areas where design modifications can reduce the consequences of failure (Thompson, 1999). Probabilistic design methods (Haugen, 1980; Christensen and Baker, 1982; Ang and Tang, 1975), based on the distributions of critical design parameters, can also be used to assess the probability of unfavourable combinations of parameter values which will result in early-life failures. Designs should be robust. This means that the performance characteristics should be insensitive to variations in the manufacturing process, operating conditions and environment (Lewis, 1996). Various examples of designs insensitive to variations of the critical design parameters are discussed in Ross (1988). Operation outside the design specifications is also a major contributor to early-life failures. Early-life failures caused by inadequate operating procedures can be reduced significantly by implementing mistake-proofing (Poka Yoke) devices into the systems, which eliminate the possibility of violating the correct procedures, especially during assembly.
8.3.2 INFLUENCE OF THE VARIABILITY OF CRITICAL DESIGN PARAMETERS ON EARLY-LIFE FAILURES
An important factor promoting early-life failures is the variability associated with critical design parameters (e.g. material properties and dimensions) which leads to variability associated with the strength. Material properties such as
(i) yield stress; (ii) static fracture toughness; (iii) fatigue resistance; (iv) modulus of elasticity and elastic limit; (v) shear modulus; (vi) percentage elongation; and (vii) density are often critical design parameters. Which of the material properties are relevant depends on the failure mode. Material properties depend on material processing and manufacturing and are characterised by a large variability. Often, defects and unwanted inhomogeneity are the source of this variability. Residual stress magnitudes are also characterised by a large variation.

Strength variability caused by production variability and by variability of the material properties is one of the major reasons for an increased interference of the strength distribution and the load distribution, which promotes overstress early-life failures. A heavy lower tail of the mechanical property distributions usually yields a heavy lower tail of the strength distribution, thus promoting early-life failures; low values of the material properties exert a stronger influence on reliability than high or intermediate values. An important way of reducing the lower tail of the material property distribution is high-stress burn-in. The result is a substantial decrease of the strength variability and an increased reliability on demand, due to a smaller interference of the strength distribution and the load distribution.

Defects like shrinkage pores, sand particles and entrained oxides from casting, microcracks from heat treatment and oxide inclusions from material processing are preferred sites for early fatigue crack initiation leading to early fatigue failure. These flaws are also preferred sites for initiating fracture during an overstress loading. Segregation of impurities along the grain boundaries significantly reduces the local fracture toughness and promotes intergranular brittle fracture. Impurities like sulphide stringers, for example, cause a reduced corrosion resistance and anisotropy in the fracture toughness and ductility. Early-life failures caused by lamellar tearing beneath welds, longitudinal splitting of wire and increased susceptibility to pitting corrosion can all be attributed to anisotropy. Quenching and tempering which result in tensile residual stresses at the surface also promote early fatigue failures.

Most of the failures occurring early in life are quality failures due to the presence of substandard items which find their way into the final products because of deficient quality assurance procedures. Production variabilities during manufacturing, which fail to guarantee the specified tolerances or introduce flaws into the manufactured product, lead to early-life failures. Depending on the supplier, the same component made of the same material to the same specification is usually characterised by different properties: a between-suppliers variation exists even if the variation of the property values characterising each individual supplier is small (see Chapter 3). A possible way of reducing the 'between-suppliers variation' is to use only the supplier producing items with the smallest variation of properties. Furthermore, due to the inherent variability of the manufacturing process, even items produced by the same manufacturer can be characterised by
different properties. The 'within-supplier variation' can be reduced significantly by better control of the manufacturing process, more precise tools, production and control equipment, specifications, instructions, inspection and quality control procedures. The manufacturing process, if not tightly controlled, can be the largest contributor to early-life failures. Because of the natural variation of critical design parameters, early-life failures are often due to unfavourable combinations of values (e.g. worst-case tolerance stacks) rather than to particular production defects. A comprehensive discussion of the effect of dimensional variability on the reliability of products can be found in Booker et al. (2001) and Haugen (1980).

Since variability is a source of unreliability (Carter, 1997), a particularly important factor in reducing early-life failures is manufacturing process control. Process control based on computerised manufacturing processes significantly reduces the variation of properties. Process control charts monitoring the variations of the output parameters, statistical quality control and statistical techniques are important tools for reducing the number of defective components (Montgomery et al., 2001). Another important way of decreasing early-life failures is adopting the six-sigma quality philosophy (Harry and Lawson, 1992), based on production with a very small number of defective items (near-zero defect levels).

Modern electronic systems, in particular, include a large number of components. For the sake of simplicity, suppose that a complex system is composed of N identical components, logically arranged in series, so that the system reliability is R_s = R_0^N, where R_0 is the reliability of a single component. To guarantee a required system reliability R_s, each component must therefore have reliability R_0 = R_s^{1/N}. Clearly, with an increasing number of components N, the reliability R_0 required from the separate components approaches unity. In other words, in order to guarantee the required reliability R_s for a complex system, the number of defective components must be very small. Adopting a six-sigma process guarantees no more than two defective components out of a billion manufactured, and this philosophy is important for eliminating early-life failures in complex systems.

A substantial number of early-life failures are due to misassembly, which often introduces additional stresses not considered during design. Misalignment of components creates extra loads, susceptibility to vibrations, excessive centrifugal forces on rotating shafts, and larger stress amplitudes leading to early fatigue. Misassembly and mishandling often cause excessive yielding and deformation or a breakdown of protective coatings, which promotes rapid corrosion. Furthermore, interfaces are rarely manufactured to the same standards as the components they join. As a result, interfaces are often the weak links in the chain and their reliability limits the overall reliability of the assembly. Inspection and quality control techniques are important means of weeding out substandard components before they can initiate an early-life failure.
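To make the series-system argument above concrete, the short sketch below (a hedged illustration with arbitrary example values) computes the component reliability R_0 = R_s^{1/N} required to meet a specified system reliability R_s:

def required_component_reliability(Rs, N):
    # Rs = R0**N for N identical components logically arranged in series
    return Rs ** (1.0 / N)

for N in (10, 100, 1000, 10000):
    print(N, required_component_reliability(0.95, N))
# As N grows, the required R0 approaches unity:
# N = 10 -> R0 ~ 0.99488; N = 10000 -> R0 ~ 0.9999949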
Examples of inspection and quality control activities which help reduce early-life failures are:

- Checking the integrity of protective coatings and whether the corrosion protection provided by the cathodic potential is adequate.
- Using non-destructive inspection techniques, such as ultrasonic inspection, for testing components and welds for the presence of cracks and other flaws.
- Inspection for water ingress in underwater installations.
- Inspection for excessive elastic and plastic deformation.

Finally, early-life failures can also be reduced by applying highly accelerated stress screens (HASS), which reduce the infant-mortality hazard rate by removing faulty items (Hobbs, 2000).
9 Overstress Reliability Integral and Damage Factorisation Law
9.1 RELIABILITY ASSOCIATED WITH OVERSTRESS FAILURE MECHANISMS
According to the discussion in Chapter 8, overstress failures occur if the load exceeds the strength. If the load is smaller than the strength, the load has no permanent effect on the component. In this section, an integral related to the reliability associated with all overstress failure mechanisms will be presented. Suppose that a random load characterised by a cumulative distribution function F_L(x) has been applied a number of times during a finite time interval of length t, and that the times of the load applications follow a homogeneous Poisson process with intensity λ. If the strength is described by a probability density distribution f_S(x), the probability R of surviving n load applications is given by the classical load–strength interference model:

R = \int_{S_{min}}^{S_{max}} F_L^n(x) f_S(x) \, dx      (9.1)
where Smin and Smax are the lower and the upper limit of strength. The probability of no failure (reliability) associated with the finite time interval with length t can be calculated from the following probabilistic argument. According to the total probability theorem, the probability of no failure is a
sum of the probabilities of the following mutually exclusive and exhaustive events: the probability of no failure if no load has been applied during the time interval of length t, the probability of no failure associated with exactly one, two, three, . . ., k load applications, etc. Because the probability of k load applications during the time interval (0, t) is given by the Poisson distribution f(k) = (λt)^k exp(−λt)/k!, where λt is the mean number of load applications in the finite time interval of length t, the probability of no failure, or the reliability associated with time t, is

R(t) = \exp(-\lambda t) \left[ 1 + \lambda t \int_{S_{min}}^{S_{max}} F_L(x) f_S(x)\,dx + \frac{(\lambda t)^2}{2!} \int_{S_{min}}^{S_{max}} F_L^2(x) f_S(x)\,dx + \cdots \right]      (9.2)
Equation (9.2) can be presented as

R(t) = \exp(-\lambda t) \int_{S_{min}}^{S_{max}} f_S(x) \left[ (\lambda t F_L(x))^0 + (\lambda t F_L(x))^1 / 1! + (\lambda t F_L(x))^2 / 2! + \cdots \right] dx      (9.3)
which simplifies to

R(t) = \exp(-\lambda t) \int_{S_{min}}^{S_{max}} f_S(x) \exp[\lambda t F_L(x)] \, dx      (9.4)
Finally,

R(t) = \int_{S_{min}}^{S_{max}} \exp[-\lambda t (1 - F_L(x))] \, f_S(x) \, dx      (9.5)
Equation (9.5) is a reliability integral associated with an overstress failure mechanism (Todinov, 2004d). The term exp[−λt(1 − F_L(x))] in the overstress reliability integral (9.5) gives the probability that none of the random loads in the time interval (0, t) will exceed the strength x, while the term f_S(x)dx gives the probability that the strength will be in the interval (x, x + dx). Finally, the product exp[−λt(1 − F_L(x))] f_S(x)dx gives the probability of the compound event that the strength will be in the interval (x, x + dx) and none of the random loads will exceed it. When this product is integrated over all possible values x of the strength, the reliability associated with the time interval (0, t) is obtained. The overstress reliability integral can be
generalised for any distribution of the load applications. Thus, if P_0^L(x) is the probability that none of the random loads in the time interval (0, t) will exceed the strength x, the reliability associated with the time interval (0, t) is given by

R(t) = \int_{S_{min}}^{S_{max}} P_0^L(x) \, f_S(x) \, dx
The big advantage of the overstress reliability integral (9.5) is that it incorporates time, unlike the load–strength integral (9.1), which describes reliability on demand at time zero. The overstress reliability integral (9.5) has been verified using Monte Carlo simulations. Thus, for uniformly distributed load and strength in the interval (S_min, S_max), the overstress reliability integral (9.5) evaluates to

R(t) = [1 - \exp(-\lambda t)] / (\lambda t)      (9.6)

which for t = 100 months and λ = 0.017 months⁻¹ gives R(t) = 0.48. For the same parameters t = 100 months and λ = 0.017 months⁻¹, Monte Carlo simulations of uniformly distributed load and strength in the interval S_min = 20, S_max = 90, for example, yield the empirical probability R(t) ≈ 0.48.

Next, a finite time interval t = 100 months and a load number density λ = 0.5 months⁻¹ were assumed. The stress was assumed to follow the maximum extreme value distribution

F_L(x) = \exp\{-\exp[-(x - \xi)/\theta]\}

with parameters ξ = 119 and θ = 73.64, and the strength, the Weibull distribution

f_S(x) = \frac{m}{\eta} \left( \frac{x - x_0}{\eta} \right)^{m-1} \exp\left[ -\left( \frac{x - x_0}{\eta} \right)^m \right]

with parameters x_0 = 200, η = 297.6 and m = 3.9. Numerical integration of the right-hand side of equation (9.5), within integration limits S_min = x_0 = 200 and S_max = 2000, yielded R(t) = 0.60 for the reliability. A Monte Carlo simulation of the load–strength interference model with multiple load applications, based on the same parameters (see Algorithm 6.9), yielded an empirical reliability R(t) ≈ 0.60, which illustrates the validity of the overstress reliability integral (9.5).

One of the advantages of the overstress reliability integral is its validity for non-normally distributed load and strength. As shown on the basis of counterexamples in Chapter 5, for a load and strength not following a normal distribution, the standard reliability measures 'reliability index' and 'loading roughness' are misleading.
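The worked example above can be reproduced with a short numerical sketch. The symbols xi, theta (Gumbel load) and eta (Weibull strength scale) below are this sketch's own labels for the quoted parameter values, not notation from the source:

import numpy as np
from scipy import integrate

lam, t = 0.5, 100.0             # loads per month, interval length
xi, theta = 119.0, 73.64        # maximum extreme value (Gumbel) load
x0, eta, m = 200.0, 297.6, 3.9  # three-parameter Weibull strength

def F_L(x):                     # load cumulative distribution function
    return np.exp(-np.exp(-(x - xi) / theta))

def f_S(x):                     # strength probability density
    z = (x - x0) / eta
    return (m / eta) * z**(m - 1) * np.exp(-z**m)

# numerical integration of the overstress reliability integral (9.5)
R, _ = integrate.quad(lambda x: np.exp(-lam * t * (1 - F_L(x))) * f_S(x),
                      x0, 2000.0)
print(R)                        # ~0.60, as quoted in the text

# Monte Carlo cross-check: Poisson number of loads; the component
# survives if the largest load stays below the sampled strength
rng = np.random.default_rng(1)
n = 200_000
strength = x0 + eta * rng.weibull(m, n)
k = rng.poisson(lam * t, n)     # number of load applications
survive = np.ones(n, dtype=bool)
pos = k > 0
u = rng.random(pos.sum())
# max of k iid Gumbel loads, sampled by inverting F_L(x)**k = u
max_load = xi - theta * np.log(-np.log(u ** (1.0 / k[pos])))
survive[pos] = max_load < strength[pos]
print(survive.mean())           # ~0.60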
9.2 DAMAGE FACTORISATION LAW
Suppose that damage due to fatigue, corrosion or any other type of deterioration is a function of time and of a particular controlling factor p. In the case of fatigue, for example, the controlling factor can be the stress or strain amplitude. Suppose that a particular component accumulates damage at M different intensity levels p_1, . . ., p_M of the controlling factor p, and that at each intensity level p_i the component has been exposed to damage for a time Δt_i. Let t_i denote the time for attaining a critical level of damage a_c at a constant intensity level p_i of the controlling factor p, after which the component is considered to have failed (Figure 9.1). It is also assumed that the sequence in which the various levels of the factor p are imposed does not affect the component's life.

The damage factorisation law states that if, for a constant level p of the controlling factor, the rate of damage development can be factorised into a function of the current damage a and a function of the factor level p,

da/dt = F(a) G(p)      (9.7)

the critical level of damage a_c at different levels of the controlling factor will be attained when the sum

\Delta t_1 / t_1 + \Delta t_2 / t_2 + \cdots      (9.8)
[Figure 9.1 Exposure for times Δt_i at different intensity levels p_i of the controlling factor p; each level p_i is associated with a time t_i for attaining the critical damage a_c.]
becomes unity for some k:

\Delta t_1 / t_1 + \Delta t_2 / t_2 + \cdots + \Delta t_k / t_k = 1      (9.9)
The time t_c to attain the critical level of damage a_c is then equal to

t_c = \Delta t_1 + \Delta t_2 + \cdots + \Delta t_k      (9.10)
Conversely, if the time for attaining the critical level of damage a_c can be determined using the additivity rule (9.9), the factorisation (9.7) must necessarily hold. In other words, the damage factorisation law (9.7) is a necessary and sufficient condition for the additivity rule (9.9) (Todinov, 2001b). This means that if the rate-of-damage law cannot be factorised, the additivity rule (9.9) is not valid and should not be used. An alternative formulation of the damage factorisation law states (Todinov, 2001b) that if, for a constant level p of the controlling factor, the time t for attaining a particular level of damage a can be factorised into a function of the damage and a function of the factor level p,

t = R(a) S(p)      (9.11)
the time to reach a critical level of damage a_c for different levels of the controlling factor can be determined using the additivity rule (9.9). Essentially, equation (9.9) is an additivity rule according to which the total time t_c required to attain a specified level of damage a_c is obtained by adding the absolute durations Δt_i spent at each intensity level i of the factor p until the sum of the relative durations Δt_i/t_i becomes unity. The fraction of damage accumulated at a particular intensity level p_i of the controlling factor p is the ratio of the time spent at level p_i and the total time at level p_i needed to attain the specified level a_c of damage (from initial damage zero).

An important application of the additivity rule is the case where damage is caused by fatigue. In this case the measure of damage is the length a of the fatigue crack. The additivity rule (9.9), also known as the Palmgren–Miner rule, was proposed as an empirical rule for damage due to fatigue controlled by crack propagation (Miner, 1945). The rule states that in a fatigue test at a constant stress amplitude σ_i, damage can be considered to accumulate linearly with the number of cycles. Accordingly, if at a stress amplitude σ_1 the component has a life of n_1 cycles, corresponding to an amount of damage a_c, then after Δn_1 cycles at stress amplitude σ_1 the amount of damage will be (Δn_1/n_1)a_c. After Δn_2 stress cycles spent at a stress amplitude σ_2, characterised by a total life of n_2 cycles, the amount of damage will be
(Δn_2/n_2)a_c, etc. Failure occurs when, at a certain stress amplitude σ_M, the sum of the partial amounts of damage attains the amount a_c, i.e. when

(\Delta n_1 / n_1) a_c + (\Delta n_2 / n_2) a_c + \cdots + (\Delta n_M / n_M) a_c = a_c      (9.12)
is fulfilled. As a result, the analytical expression of the Palmgren–Miner rule becomes

\sum_{i=1}^{M} \Delta n_i / n_i = 1      (9.13)
where n_i is the number of cycles needed to attain the specified amount of damage a_c at a constant stress amplitude σ_i. The Palmgren–Miner rule is central to reliability calculations, yet little attention is usually paid to whether it is compatible with the damage development laws characterising the different stages of fatigue crack growth. The necessary and sufficient condition for the validity of the empirical Palmgren–Miner rule is the possibility of factorising the rate of damage into a function of the amount of accumulated damage a and a function of the stress or strain amplitude p:

da/dn = F(a) G(p)      (9.14)
The theoretical derivation of the Palmgren–Miner rule can be found in Todinov (2001b). A widely used fatigue crack growth model is the Paris–Erdogan power law (Paris and Erdogan, 1963):

da/dn = C (\Delta K)^m      (9.15)
where \Delta K = Y \Delta\sigma \sqrt{\pi a} is the stress intensity factor range, C and m are material constants, and Y is a parameter which can be presented as a function of the amount of damage a. Clearly, the Paris–Erdogan fatigue crack growth law can be factorised as in (9.14) and is therefore compatible with the Palmgren–Miner rule. In cases where this factorisation is impossible, the Palmgren–Miner rule does not hold. Such is, for example, the fatigue crack growth law

da/dn = B (\Delta\gamma)^\beta a - D      (9.16)

discussed in Miller (1993), which characterises physically small cracks. In equation (9.16), B and β are material constants, Δγ is the applied shear strain range, a is the crack length and D is a threshold value. Because of the threshold term D, the right-hand side of (9.16) cannot be factorised into a product F(a)G(p), and the additivity rule (9.9) therefore does not apply.
10 Determining the Probability of Failure for Components Containing Flaws
10.1 BACKGROUND

A strong emphasis is increasingly placed on reliability requirements which eliminate early-life failures (e.g. in the offshore, aerospace and automotive industries). Early-life failures are often the result of poor manufacturing and inadequate design. A substantial proportion of early-life failures are also due to variation of strength, which is a complex function of material properties, design configuration and dimensions.

Suppose that a component with volume V is subjected to uniaxial tension (Figure 10.1a) and that in the volume V there exist N_σ critical flaws which cause failure at the loading stress σ. Suppose also that the volume V has been divided into M small zones of volume ΔV (Figure 10.1a). The probability that a given small volume ΔV will not contain a critical flaw is

(1 - \Delta V / V)^{N_\sigma} \approx 1 - N_\sigma \Delta V / V = 1 - n_\sigma \Delta V
[Figure 10.1 Uniaxial tensile loading: (a) a stressed volume V = MΔV divided into M small zones ΔV; (b) a thin strip containing through cuts A, B and C at different orientations, with critical projection s_σ.]
where n_σ = N_σ/V is the number density of the critical flaws. The probability p_0 that the entire volume V will survive the loading stress σ with no failure equals the probability that all small volumes ΔV will survive the stress σ:

p_0 = (1 - n_\sigma \Delta V)^M = \exp(M \ln[1 - n_\sigma \Delta V]) \approx \exp(-n_\sigma V)      (10.1)

because for ΔV → 0, ln[1 − n_σΔV] ≈ −n_σΔV, and V = MΔV. In order to use equation (10.1), an expression for n_σ is required. Weibull (1951) proposed the empirical relationship

n_\sigma V_0 = (\sigma / \sigma_0)^m      (10.2)

where V_0, σ_0 and m are constants, to which experimental data related to the failure of brittle materials conformed well (Hull and Clyne, 1996). Making this assumption, for the probability of failure p of the stressed volume V subjected to a uniaxial stress of magnitude σ, the Weibull distribution

p = 1 - \exp\left[ -\frac{V}{V_0} \left( \frac{\sigma}{\sigma_0} \right)^m \right]      (10.3)

is obtained. This distribution is common for describing the strength distribution and reliability of materials (Jayatilaka and Trustrum, 1977; Bergman, 1985).

An important factor affecting the strength of components is the presence of flaws due to processing or manufacturing. Currently, most of the existing models relate the probability of fracture triggered by defects to the probability of finding a defect of a particular size in the stressed volume. Thus Curry and Knott (1979) and Wallin et al. (1984), in their statistical models for carbide-induced brittle fracture in steels, related the probability of brittle fracture to the probability that ahead of the crack tip a carbide will exist which has a radius greater than some critical value, specified by Griffith's crack advancement criterion. Relating the probability of existence of a flaw with critical size in a particular space region to the probability of fracture is justified only if the flaws are very
weak and initiate fracture easily. In the general case, only a small number of inclusions of any particular size are liable to crack, even when subjected to high matrix strains. Hahn (1984) pointed out that crack nucleation in hard particles is assisted by plastic deformation of the surrounding matrix but requires an additional stress raiser or a defect in the particles. Furthermore, to be 'eligible', the particle should have an orientation favourable for nucleating a crack, and the misorientations at the particle boundary should produce a low value of the local fracture toughness. All of these requirements are satisfied with certain probability.

It is necessary to point out that the probability of triggering fracture is not solely a function of the size of the flaws. Consider, for example, the thin strip in Figure 10.1(b), which contains through cuts oriented at different angles. If the projection of a cut is greater than the critical value s_σ, the cut will initiate failure at a loading stress σ. Suppose that the strip has been loaded to a stress level σ. Clearly, neither of the large cuts A and C will trigger failure. Failure will be triggered by the smaller cut B in the middle, which has a critical projection on a plane perpendicular to the direction of the tensile stress.

Batdorf and Crose (1974) proposed a statistical model for fracture of brittle materials containing randomly oriented microcracks. They demonstrated that for uniaxial tension their theory was equivalent to Weibull's. Weakest-link theories pertinent to the fracture of brittle materials (Evans, 1978) yield a probability of failure given by

p_f(S, V) = 1 - \exp\left[ -\int_V dV \int_0^S g(S) \, dS \right]      (10.4)

where S is the fracture strength, V is a sample volume and g(S)dS is the number of flaws per unit volume with strength between S and S + dS. For the probability of fracture in a volume V stressed to a level σ, Danzer and Lube (1996) proposed the equation

p = 1 - \exp(-N_c)      (10.5)

where N_c is the expected number of defects with critical size in the stressed volume. Equations (10.4) and (10.5), however, are based on the number density of the critical flaws in the stressed volume, which is not a measurable quantity. Using the concept of an individual probability F(σ) of triggering fracture by the flaws, the probability of failure of a component loaded at a constant stress level σ was found to be (Todinov, 2000b):

p = 1 - \exp[-\lambda V F(\sigma)]      (10.6)
In equation (10.6), λ is the number density of all flaws in the stressed volume V. It is assumed that the locations of the random flaws follow a homogeneous Poisson process
with constant density λ = const in the stressed volume V. The type of the flaws has a strong influence on the probability of failure. Due to tensile tessellation stresses, for example, alumina or silicon-based inclusions in steel wire are more likely to become initiators of failure than sulphide inclusions of the same diameter and number. In another example, sharp crack-like defects are characterised by a larger probability of triggering fracture than blunt defects. Furthermore, crack-like defects with a crack plane perpendicular to the direction of the uniaxial tensile stress are more likely to initiate failure than cracks oriented along the direction of the tensile stress.

Equation (10.6) includes the number density of all flaws, which is a measurable quantity. If all flaws are critical (initiate failure at a stress level σ), F(σ) = 1 and the probability that at least one flaw will be located in the stressed volume V is

p = 1 - e^{-\lambda V}      (10.7)
which also gives the probability of failure. Since λV is the expected number of critical flaws in the volume V, equation (10.7) is equivalent to equations (10.5) and (10.4). Next, we will show that equation (10.6) is valid not only for a simple uniaxial stress state: the equation can easily be generalised for an arbitrarily loaded component with complex shape.
10.2 GENERAL EQUATION RELATED TO THE PROBABILITY OF FAILURE OF A STRESSED COMPONENT WITH INTERNAL FLAWS

Suppose that a component with complex shape, containing crack-like flaws with number density λ, has been loaded in an arbitrary fashion (Figure 10.2). It is assumed that the flaws are of a single type and that their locations follow a homogeneous Poisson process in the volume V of the component.
[Figure 10.2 A component with complex shape, containing random flaws in its volume V, loaded with arbitrary forces S_i.]
Each flaw is characterised by a probability F_c of initiating failure in the component. The probability of initiating failure in the component can be determined by subtracting from unity the probability of the complementary event that none of the flaws will initiate failure. The probability p_0(r) of the compound event 'exactly r flaws exist in the volume V of the component and none of them will initiate failure' can be succinctly presented as

p_0(r) = P(r flaws in V) × P(none of the flaws will initiate failure | r flaws)

This probability is the product

p_0(r) = \frac{(\lambda V)^r e^{-\lambda V}}{r!} [1 - F_c]^r      (10.8)

of the probabilities of two statistically independent events: (i) exactly r flaws reside in the volume V, the probability of which is given by the Poisson distribution

P(r flaws in V) = \frac{(\lambda V)^r e^{-\lambda V}}{r!}

and (ii) none of the r flaws will initiate failure, the probability of which is

P(none of the flaws initiate failure | r flaws) = [1 - F_c]^r

The event 'no failure will be initiated in the volume V' is the union of the disjoint events characterised by the probabilities p_0(r), and its probability p_0, according to the total probability theorem, is

p_0 = \sum_r P(r flaws in V) × P(none of the flaws will initiate fracture | r flaws in V)

from which

p_0 = \sum_{r=0}^{\infty} p_0(r) = \sum_{r=0}^{\infty} \frac{(\lambda V)^r e^{-\lambda V}}{r!} [1 - F_c]^r      (10.9)

Equation (10.9) can be simplified to

p_0 = e^{-\lambda V} \sum_{r=0}^{\infty} \frac{[\lambda V (1 - F_c)]^r}{r!} = e^{-\lambda V} e^{\lambda V [1 - F_c]} = \exp(-\lambda V F_c)      (10.10)
and the probability p_f of failure for the component with volume V becomes

p_f = 1 - \exp[-\lambda V F_c]      (10.11)
An upper bound of this probability can be produced if a weak-flaws assumption (F_c = 1) is made. This is a very conservative assumption, suitable in cases where an upper bound of the probability of failure is required. Equation (10.11) can be generalised for multiple types of flaws. Thus, if M types of flaws are present, the probability that no failure will be initiated is

p_0 = \exp(-\lambda_1 V F_{1c}) \cdots \exp(-\lambda_M V F_{Mc}) = \exp\left( -V \sum_{i=1}^{M} \lambda_i F_{ic} \right)      (10.12)

where λ_i and F_{ic} are the flaw number density and the probability of initiating failure for the ith type of flaws. Equation (10.12) gives the probability that no failure will be initiated by the first, second, . . ., and the Mth type of flaws. The probability of failure is then

p_f = 1 - \exp\left( -V \sum_{i=1}^{M} \lambda_i F_{ic} \right)      (10.13)
In order to distinguish a complex stress state from a uniaxial stress state, for a volume V subjected to uniaxial stress σ (Figure 10.1a), the probability F_c in equation (10.11) will be denoted by F(σ).
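Equation (10.11) is easy to check numerically. The sketch below (with illustrative parameter values only) simulates a Poisson number of flaws in the volume V, lets each flaw independently initiate failure with probability F_c, and compares the empirical failure frequency with 1 − exp(−λVF_c):

import numpy as np

rng = np.random.default_rng(3)
lam, V, Fc = 0.05, 60.0, 0.2      # flaws/cm^3, cm^3, per-flaw probability
trials = 1_000_000
r = rng.poisson(lam * V, trials)  # number of flaws in each trial
failed = rng.binomial(r, Fc) > 0  # at least one flaw initiates failure
print(failed.mean(), 1.0 - np.exp(-lam * V * Fc))  # both ~0.451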
10.3 DETERMINING THE INDIVIDUAL PROBABILITY OF TRIGGERING FAILURE, CHARACTERISING A SINGLE FLAW

The individual probability F_c of triggering failure, characterising a single flaw, can be determined using a Monte Carlo simulation. Random locations for the flaw are generated in the component with volume V (Figure 10.2). For each random location, a random orientation and a random size are selected for the flaw. Given the specified location, orientation and size, a mixed-mode fracture criterion is applied to check whether the flaw will trigger fracture. The individual probability F_c of triggering fracture, characterising an individual flaw, is estimated by dividing the number of simulations in which failure has been initiated by the total number of Monte Carlo trials. Finally, substituting the estimate F_c in equation (10.11) gives the probability of failure of the stressed component, irrespective of its geometry and loading. The algorithm in pseudocode is as follows.

Algorithm. Monte Carlo evaluation of the probability of failure of a loaded component with internal flaws
procedure Calculate_stress_distribution()
{/* Calculates the distribution of the stresses in the loaded component using an analytical solution or a finite-element solution */}

procedure Calculate_principal_stresses()
{/* Calculates the magnitude and the direction of the principal normal stresses at the crack location */}

function Generate_random_crack_size()
{/* Samples the specified size distribution of the flaws and returns a random crack size */}

procedure Random_orientation()
{/* Generates the direction cosines of a randomly oriented penny-shaped crack in space, with respect to the directions of the principal normal stresses, according to the method in Section 6.2.10 of Chapter 6 */}

procedure Generate_random_crack_location()
{/* Generates a point with uniformly distributed coordinates (x, y, z) in the volume of the component, according to Algorithm 6.2.8 of Chapter 6 */}

function Check_for_fracture_initiation()
{/* Uses a mixed-mode fracture criterion to check whether the crack is unstable and returns TRUE if the crack with the generated location, size and orientation initiates fracture */}

Failure_counter = 0;
Calculate_stress_distribution();
For i = 1 to Number_of_trials do
{
  Generate_random_crack_location();
  Calculate_principal_stresses();
  Generate_random_crack_size();
  Random_orientation();
  Unstable = Check_for_fracture_initiation();
  /* If the crack is unstable, Unstable holds TRUE, otherwise FALSE */
  If (Unstable) then { Failure_counter = Failure_counter + 1; }
}
Fc = Failure_counter / Number_of_trials;
Probability_of_component_failure = 1 − exp(−λ V Fc);

Using this algorithm, for different stress levels, the lower tail of the strength distribution of any loaded component with internal flaws can be constructed. For any specified time interval, plugging the strength
distribution into the load–strength reliability integral yields the reliability of the component with internal flaws. Compared to the direct Monte Carlo simulation, where the whole population of flaws is generated at each simulation trial (see Algorithm 6.10), the described algorithm is much more efficient because the inner loop over the total number of flaws is no longer present and the complexity of the algorithm is now linear.

The efficiency of the algorithm can be increased further if the loaded component is divided into N sub-volumes, within each of which the magnitudes of the principal stresses are assumed not to vary. Instead of generating a random location, a random sub-volume is selected using the method described in Section 6.2.7. The discrete distribution is defined by
X           1        2        . . .   N
P(X = x)    V_1/V    V_2/V    . . .   V_N/V
where X = 1, 2, . . ., N is the index of the sub-volume, V_i is its volume and V is the total volume of the component. The probability with which the ith sub-volume is selected is proportional to its volume fraction V_i/V. Once a sub-volume is selected, no location needs to be generated because, due to the uniform stress state, all locations inside it are equivalent.

For a simple loading (e.g. tension), the probability F_c can even be determined empirically, from tests at a specified stress level σ. Indeed, suppose that N tests have been conducted at a constant stress level σ involving N test pieces, for example lengths cut from wire/fibre with a known flaw number density λ. If N_f is the number of failures initiated by flaws, the probability F(σ) of failure at a stress level σ characterising a single flaw can be estimated by solving

N_f / N = 1 - \exp[-\lambda L F(\sigma)]

with respect to F(σ), where L is the length of the specimens.

It must be pointed out that although equation (10.11) gives the probability of failure for the component, it does not say anything about the distribution of the locations where failure will be initiated most frequently. Failure will be initiated most frequently in the highest-stressed regions, where the conditions for crack instability are met first during loading. Clearly, if failure is initiated first in the highest-stressed region, it cannot possibly be initiated in regions with a lower stress, despite the fact that the condition for crack instability may be fulfilled there too. If in the highest-stressed region no flaw with appropriate orientation and size for initiating failure is present, failure will be initiated in a region with lower stress, where an appropriate combination of stress, flaw orientation and flaw size exists.
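The empirical estimate of F(σ) described above amounts to inverting the last equation. A minimal sketch, with hypothetical test numbers:

import math

def F_sigma_from_tests(Nf, N, lam, L):
    # Solves Nf/N = 1 - exp(-lam*L*F) for F; lam is the flaw number
    # density per unit length and L is the specimen length.
    return -math.log(1.0 - Nf / N) / (lam * L)

# e.g. 23 of 100 one-metre wire specimens failed at stress level sigma,
# with an average of 5 flaws per metre (hypothetical values)
print(F_sigma_from_tests(23, 100, lam=5.0, L=1.0))  # ~0.052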
10.4 PARAMETERS RELATED TO THE STATISTICS OF FRACTURE TRIGGERED BY FLAWS
The product λF_c in equation (10.11), which we refer to as the detrimental factor, is an important parameter associated with components containing internal flaws. Consider, for example, two components with identical material and geometry. One of the components has a high density of flaws which initiate failure with small probability, and the other has a low density of flaws which initiate failure with large probability. If both components are characterised by similar detrimental factors, they will exhibit similar probabilities of failure. Equation (10.13) shows that the most detrimental type of flaws is the type characterised by the largest factor λ_i F_{ic}. Consequently, efforts to eliminate flaws from the material should concentrate on the types of flaws with large detrimental factors.

For uniaxial stress and very weak flaws which initiate fracture easily, the probability of triggering failure can be assumed to be unity, F(σ) = 1. In this case, the probability of failure p_f = 1 − exp(−λV) of the stressed volume V is equal to the probability that at least one weak flaw will be present in it. In the general case, however, the probability F_c of initiating failure characterising the individual flaws will be a number between zero and unity. From equation (10.11), it follows that the smaller the stressed volume V, the smaller the probability of failure, and this is one of the reasons why, of two similar items made from the same material but of different size, the bigger item is weaker.

The Weibull distribution (10.3) can be obtained as a special case of equation (10.11). Indeed, if the probability of triggering failure at the stress level σ can be approximated by the power law

F(\sigma) = (\sigma / \sigma_0)^m      (10.14)

substituting in equation (10.11) gives the Weibull distribution (10.3). For a material with flaws whose individual probabilities of initiating fracture vary with the applied uniaxial stress according to the power law (10.14), the probability of component failure is described by the Weibull distribution. For a material with flaws characterised by a different type of dependence F(σ), however, a distribution function different from the Weibull distribution will be obtained after the substitution in equation (10.11). Indeed, suppose that F(σ) can be approximated by the monotonically increasing sigmoidal dependence F(σ) = 1 − exp(−kσ^r). After the substitution in equation (10.11), the probability of failure becomes

p = 1 - \exp(-\lambda V) \exp[\lambda V \exp(-k \sigma^r)]

which is not the Weibull distribution.
10.5 UPPER BOUNDS OF THE FLAW NUMBER DENSITY AND THE STRESSED VOLUME GUARANTEEING A PROBABILITY OF FAILURE BELOW A MAXIMUM ACCEPTABLE LEVEL

By solving equation (10.11) with respect to λ (given a specified maximum acceptable probability of failure p_fmax), the flaw number density upper bound λ_u can be determined:

\lambda_u = -\frac{1}{V F_c} \ln(1 - p_{fmax})      (10.15)

The upper bound guarantees that whenever the flaw number density satisfies λ ≤ λ_u, the probability of failure of the component will be smaller than p_fmax. Figure 10.3 represents the dependence of the flaw number density upper bound λ_u on p_fmax for different values of the stressed volume V in the case of very weak flaws (F_c = 1). In the vicinity of p_fmax = 0, the curves can be approximated by straight lines with slopes equal to 1/V.

Consider a component with volume V which has been cut from material with a flaw number density λ and subjected to uniaxial stress σ (Figure 10.1a). It is assumed that the flaws are of a single type, with number density λ, and that their locations follow a homogeneous Poisson process in the stressed volume V. Suppose that fracture is controlled solely by the size of the flaws in the material
[Figure 10.3 Dependence of the flaw number density upper bound λ_u (cm⁻³) on the maximum acceptable probability of failure p_fmax, for stressed volumes V = 30, 60 and 120 cm³.]
and does not depend on their orientation and shape. The size distribution G(d) of the flaws determines the probability G(d) = P(D ≤ d) that the size D of a flaw will not be greater than a specified value d. Let d_σ denote the critical flaw size for the stress level σ: a flaw with size greater than the critical size d_σ will initiate fracture at a stress level σ. Given the size distribution of the flaws, it is important to determine the maximum acceptable value of the stressed volume V which limits the probability of failure below a maximum acceptable level.

In the case of fracture controlled solely by the size of the flaws, F(σ) in equation (10.6) becomes 1 − G(d_σ), which is the probability that a flaw will initiate fracture at the stress level σ. Substituting F(σ) = 1 − G(d_σ) in equation (10.6) gives

p_\sigma = 1 - \exp\{-\lambda V [1 - G(d_\sigma)]\}      (10.16)

for the probability p_σ of initiating fracture at a stress level σ. Limiting the size of the stressed volume limits the probability of failure initiated by flaws, which is of significant importance to design for reliability. By solving equation (10.16) with respect to V (given a specified maximum acceptable probability of failure p_σmax at a stress level σ), an upper bound V* for the stressed volume can be determined:

V^* = -\frac{1}{\lambda [1 - G(d_\sigma)]} \ln(1 - p_{\sigma max})      (10.17)
The upper bound guarantees that if the stressed volume satisfies V ≤ V*, the probability of failure is smaller than the maximum acceptable level p_σmax. Equation (10.17) can be used for calculating the probabilities of failure from the lower tail of the strength distribution in the case of failure controlled by the size of the flaws in the material.

Equation (10.11) is valid for an arbitrarily loaded component with complex shape. The power of the equation lies in relating, in a simple fashion, the individual probability of failure F_c characterising a single flaw to the probability of failure characterising a population of flaws. If a direct Monte Carlo simulation were used to determine the probability of failure of the component, a large number of checks would need to be done at each simulation trial to verify whether there is an unstable flaw which will initiate fracture (see Algorithm 6.10). If equation (10.11) is used to determine the probability of failure of the component, only a single flaw needs to be generated at each simulation trial in order to estimate the probability F_c. Once F_c has been estimated, it is simply substituted in equation (10.11) to determine the probability of failure of the component. Using this procedure, for different stress levels, the lower tail of the strength distribution of any loaded component with internal flaws can be constructed. For any specified time interval, plugging the strength distribution into the load–strength reliability integral (see Chapter 9) yields the reliability of the component with internal flaws.
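Both bounds are one-line computations. The sketch below evaluates equations (10.15) and (10.17) for illustrative (hypothetical) inputs:

import math

def flaw_density_upper_bound(pf_max, V, Fc):
    # equation (10.15)
    return -math.log(1.0 - pf_max) / (V * Fc)

def stressed_volume_upper_bound(p_max, lam, G_d):
    # equation (10.17); G_d = G(d_sigma), the probability that a flaw
    # is smaller than the critical size for the stress level sigma
    return -math.log(1.0 - p_max) / (lam * (1.0 - G_d))

print(flaw_density_upper_bound(0.05, 60.0, 1.0))     # ~8.5e-4 cm^-3
print(stressed_volume_upper_bound(0.05, 0.01, 0.9))  # ~51.3 cm^3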
In effect, equation (10.11) constitutes the core of a new statistical theory of fracture for components with internal flaws. An important application area of the derived equation is assessing the vulnerability of designs to the presence of internal flaws. Parametric studies based on equation (10.11) may also be used to determine which design parameters and which types of loading are associated with the largest probability of failure, and to modify the design and loading accordingly in order to diminish it.
10.6 A STOCHASTIC MODEL RELATED TO THE FATIGUE LIFE DISTRIBUTION OF A COMPONENT CONTAINING DEFECTS

Similar to equation (10.11), a stochastic model can be developed for determining the fatigue life distribution of a loaded component whose surface contains manufacturing or mechanical-damage defects with a specified number density, geometry and size distribution. The model is based on: (i) the concept of the probability that the fatigue life associated with a single defect will be smaller than a specified value; (ii) a model relating this probability to the probability that the fatigue life of a component containing a population of defects will be smaller than a specified value; and (iii) a solution for the stress field of the loaded component, determined by an analytical or numerical method.

Let G(x) denote the probability that the fatigue life characterising a single defect located randomly on the component's surface will be smaller than x cycles. The probability p_0(r) of the compound event 'exactly r defects exist on the surface of the component with area S and its fatigue life will be greater than x cycles' can be presented as the product
p_0(r) = \frac{(\lambda S)^r e^{-\lambda S}}{r!} [1 - G(x)]^r      (10.18)
of the probabilities of two statistically independent events: (i) exactly r defects reside on the surface S, the probability of which is (λS)^r e^{−λS}/r!, and (ii) none of the fatigue lives associated with the defects will be smaller than x, the probability of which is [1 − G(x)]^r. The event 'the component's fatigue life will be greater than x cycles' is the union of the disjoint events characterised by the probabilities p_0(r), and its probability p_0, according to the total probability theorem, is
p_0 = \sum_{r=0}^{\infty} \frac{(\lambda S)^r e^{-\lambda S}}{r!} [1 - G(x)]^r      (10.19)
which can be simplified to

p_0 = \exp[-\lambda S G(x)]      (10.20)
The probability F(x) that the fatigue life of the component will be smaller than x cycles is equal to the probability that on the component surface with area S there will be at least one defect with a fatigue life smaller than x cycles. Accordingly,

F(x) = 1 - \exp[-\lambda S G(x)]      (10.21)
The probability G(x) related to a single defect can be estimated using a Monte Carlo simulation, similar to estimating the probability F_c in equation (10.11). A single defect with random orientation, size and location is generated on the surface S. For each Monte Carlo simulation trial, the fatigue life associated with the defect is calculated. G(x) is obtained as the ratio of the number of trials in which the fatigue life is smaller than x cycles to the total number of trials. Calculating the fatigue life for a particular combination of random defect size, random orientation and random location, characterised by a particular stress state, incorporates models and experimental data related to the micromechanics of initiating fatigue cracks from defects.

Parametric studies based on the stochastic model (10.21) can be conducted to explore the influence of the uncertainty associated with factors such as the size distribution and number density of the defects on the confidence levels of the fatigue life predictions. Equation (10.21) can be a good basis for specifying the maximum acceptable defect number density which guarantees that the risk of fatigue failure will remain below a maximum acceptable level. The probability G(x) in equation (10.21) also naturally incorporates the fact that not all defects will initiate fatigue cracks: fatigue crack initiation is associated with a certain probability.
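The estimation of G(x) and the substitution in equation (10.21) can be sketched as follows. The single-defect fatigue life formula used here is deliberately a toy model (a decreasing function of defect size and local stress); in a real application it would be replaced by a micromechanical crack-initiation model, and the densities and distributions below are hypothetical:

import numpy as np

rng = np.random.default_rng(7)
lam, S = 0.2, 50.0                            # defects/cm^2, surface cm^2
trials = 100_000
sizes = rng.lognormal(-2.0, 0.5, trials)      # defect sizes, cm
stresses = rng.uniform(100.0, 300.0, trials)  # local stress amplitudes
lives = 1e12 / (stresses**3 * np.sqrt(sizes)) # toy single-defect lives

def fatigue_life_cdf(x):
    G = (lives < x).mean()                 # Monte Carlo estimate of G(x)
    return 1.0 - np.exp(-lam * S * G)      # equation (10.21)

print(fatigue_life_cdf(1e6), fatigue_life_cdf(1e8))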
11 Uncertainty Associated with the Location of the Ductile-to-brittle Transition Region of Multi-run Welds
11.1 MODELLING THE SYSTEMATIC AND RANDOM COMPONENT OF THE CHARPY IMPACT ENERGY
Multi-run welds are characterised by a large variation of the impact toughness. A poor estimate, or no estimate, of the variation and uncertainty associated with the impact toughness promotes designs with low reliability and a high risk of failure. Failure to indicate impact toughness degradation may lead to costly failures entailing loss of human lives, damage to the environment and huge financial losses. Other consequences are incorrect and costly decisions regarding the length of exploitation of components and structures and the exact time for a plant shutdown. The conservative estimates of the Charpy impact energy variance at fixed test temperatures from Chapter 3 can be used as a basis for conservative estimates of the location of the ductile-to-brittle transition region. Accordingly, the model for determining the uncertainty associated with the Charpy impact energy of C–Mn multi-run welds integrates the following basic components:
- A model of the systematic component of the Charpy impact energy variation.
- A model for obtaining unbiased and precise parameter estimates of the systematic component.
- A model for determining the uncertainty associated with a Charpy impact energy at a specified test temperature.
- A simulation model for determining the uncertainty associated with the location of the transition region.
The general statistical model regarding the variation of the Charpy impact energy IE(T) can be summarised by

IE(T) = E(T) + \varepsilon(T)      (11.1)
where E(T) is the systematic component of the impact energy variation and ε(T) is the random component (Windle et al., 1996) (Figure 11.1). The model of the systematic variation E(T) of the Charpy impact energy (Figure 11.1) involves separate treatment of the shelf regions and the ductile-to-brittle transition region. Indeed, in the upper shelf region the microscopic fracture mechanism is void nucleation, growth and coalescence, associated with a large amount of absorbed impact energy. With decreasing test temperature, zones fracturing by a distinct cleavage mechanism appear. The latter is associated with brittle fracture, which absorbs a relatively small amount of impact energy. Because of the different fracture mechanisms associated with the upper and lower shelves, the impact energies in the shelf regions do not depend on each other or on the impact energy in the transition region. Consequently, the impact energy levels of the lower and upper shelves are fitted independently of each other. The systematic variation of the Charpy impact energy is modelled by
[Figure 11.1 Systematic component E(T) and random component ε(T) of the Charpy impact energy IE(T) in the ductile-to-brittle transition region, between the lower shelf E_L and the upper shelf E_U.]
E(T) = E_L + (E_U - E_L) F(T)      (11.2)
where E_L and E_U are the lower- and upper-shelf Charpy impact energies (Figure 11.1), estimated as mean values from experimental data, and

F(T) = 1 - \exp[-k (T - T_0)^m]      (11.3)
is the normalised Charpy impact energy (E(T) − E_L)/(E_U − E_L) (Todinov et al., 2000). The parameters k, T_0 and m are estimated from experimental data according to the method from Section 4.4. The estimation method yields unbiased and precise estimates for the parameters k, T_0 and m.

An assumption of a normal distribution for the random component of the impact energy at a specified test temperature can be justified only in cases where the Charpy V-notch samples the same, relatively homogeneous, microstructural zone. The normal probability plots in Figure 4.7 of real Charpy data, in the case where the Charpy V-notch samples different microstructural zones, show that the impact energy distribution at a specified test temperature cannot be approximated by a normal distribution. The appropriate model for the random component of the Charpy impact energy in this case is a distribution mixture, discussed comprehensively in Chapter 3.
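Under the stated model, the parameters k, T_0 and m can also be estimated by ordinary non-linear least squares. The sketch below (with hypothetical test temperatures and impact energies; E_L and E_U are assumed already estimated from the shelf data) is one simple way to do this; the book's own estimation method is the one described in Section 4.4:

import numpy as np
from scipy.optimize import curve_fit

EL, EU = 10.0, 140.0   # lower/upper shelf impact energies, J (assumed)

def systematic_energy(T, k, T0, m):
    dT = np.maximum(T - T0, 0.0)           # F(T) = 0 below T0
    return EL + (EU - EL) * (1.0 - np.exp(-k * dT**m))

T_data = np.array([203., 233., 253., 263., 273., 283., 293., 323.])
E_data = np.array([12., 25., 52., 68., 90., 105., 120., 138.])

(k, T0, m), _ = curve_fit(systematic_energy, T_data, E_data,
                          p0=[1e-3, 195.0, 2.0], maxfev=20000)
print(k, T0, m)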
11.2 DETERMINING THE UNCERTAINTY ASSOCIATED WITH THE LOCATION OF THE DUCTILE-TO-BRITTLE TRANSITION REGION
The method used for determining the uncertainty in the location of the ductile-to-brittle transition region is in fact a method for generating the distribution of the temperature corresponding to a specified level of Charpy impact energy, selected to mark the location of the transition region. In the simulations below, the temperature corresponding to a 40 J Charpy impact energy (the T_40J temperature) was selected to mark the location of the ductile-to-brittle transition region (Windle et al., 1996). A uniform distribution of the Charpy impact energy at a specified test temperature was assumed; this is a conservative assumption supported by the actual shape of the empirical distribution of the Charpy impact energy from mixed sampling (see Figure 4.5). The uncertainty in the T_40J transition temperature increases with increasing variance of the impact energy at the test temperatures.

In order to produce an estimate of the uncertainty in the location of the ductile-to-brittle transition region along the temperature axis, at each test temperature T_i the individual distributions of the distribution mixture representing the distribution of the impact energy are sampled with probabilities which yield the largest variance of the impact energy. It is assumed that the parameters characterising the Charpy impact energy of the separate
microstructural zones (mean μ_i and standard deviation σ_i) are known at the test temperatures. Assuming a uniform distribution for the Charpy impact energy at a given test temperature (a conservative assumption), the length of the uncertainty interval can be obtained from the upper bound V_max^i of the variance of the Charpy impact energy at a test temperature T_i. Let us recall from Section 2.10 that the variance of the uniform distribution is L_i²/12, where L_i is the length of the uncertainty interval. Accordingly, from V_max^i = L_i²/12, L_i = 2√(3V_max^i) is obtained for the length of the uncertainty interval characterising the ith test temperature, centred at the mean impact energy at that temperature.

The Monte Carlo simulation algorithm for determining the uncertainty in the location of the transition region as a function of the selected test temperatures works as follows. Sequentially, the selected input test temperatures are scanned and, at each test temperature, a random value of the Charpy impact energy is generated, uniformly distributed over the length L_i of the uncertainty interval for the impact energy at that test temperature (Figure 11.2). In this way, by a single scan of all test temperatures, a random sparse data set is generated, containing a single impact energy at each test temperature. The obtained sparse data set is fitted according to the method from Section 4.4. Next, an estimate T̂_40J of the T_40J transition temperature is determined from the fit, after which another scan of the test temperatures is performed, resulting in a new sparse data set. The subsequent fitting of the new sparse data set results in another estimate of the T_40J temperature. Thus, the Monte Carlo trials continue with generating and fitting sparse data sets and determining
[Figure 11.2 Determining the uncertainty in the location of the ductile-to-brittle transition region. The distribution of the temperature corresponding to a 40 J Charpy impact energy is represented by the continuous line f(T̂_40J).]
estimates of the T_40J temperature until a sufficient number of estimates are collected. These form an empirical distribution f(T̂_40J) (Figure 11.2) which is a measure of the uncertainty associated with the T_40J temperature.

Consequently, a Monte Carlo simulation routine can be developed for investigating the uncertainty associated with the location of the ductile-to-brittle transition region as a function of the number of test temperatures, the choice of the test temperatures and the variance of the impact energy at the test temperatures. The routine reads the Charpy impact energy parameters (the mean and the standard deviation) characterising the existing microstructural constituents (e.g. central, intermediate and reheated zone) at several input test temperatures and calculates the corresponding uncertainty intervals. For other selected test temperatures, the routine calculates the associated uncertainty intervals by interpolation (Todinov, 2004a).

The parametric study to follow is based on real Charpy impact data at eight test temperatures (−70, −40, −20, −10, 0, 10, 20 and 50 °C) (Todinov et al., 2000). The uncertainty associated with the T_40J transition temperature from sparse data sets generated at the eight test temperatures is presented by the histogram in Figure 11.3(a). The interval T̂_40J ± 20 K, where T̂_40J is the T_40J transition temperature estimated from any particular sparse data set, contains the true T_40J temperature (T_40J = 233 K) with probability 93.6%: P(|T̂_40J − T_40J| ≤ 20 K) = 0.936. The simulation shows that, for a transition region estimated on the basis of sparse data sets each including eight test values, there is a 6.4% probability that the estimated location of the transition region will be more than 20 K away from the true location.

The histogram in Figure 11.3(b) represents the uncertainty in the location of the transition region associated with the same eight test temperatures, but with standard deviations of the Charpy impact energy which are twice as small. As can be verified from the histogram, decreasing the variation of the Charpy impact energy at the test temperatures significantly diminishes the uncertainty associated with the location of the transition region. Increasing the number of test temperatures has an equally strong effect. This can be verified from the histogram in Figure 11.4(a), which represents the uncertainty in the location of the transition region associated with 16 test temperatures. The simulation shows that the location of the ductile-to-brittle transition region estimated on the basis of sparse data sets at 16 test temperatures is associated with a relatively small uncertainty: there is only about a 10% probability that the estimated location of the transition region will be more than 10 K away from the true value (P(|T̂_40J − T_40J| > 10 K) = 0.096).

A very small number of test temperatures results in a large uncertainty in the location of the transition region. This is illustrated by the histogram in Figure 11.4(b), which represents the uncertainty in the T_40J transition temperature estimated from sparse data sets, each including four test temperatures selected to cover the entire transition region.
[Figure 11.3 (a) Uncertainty in the location of the transition region determined from sparse data sets based on eight test temperatures (T1 = 203 K, T2 = 233 K, T3 = 253 K, T4 = 263 K, T5 = 273 K, T6 = 283 K, T7 = 293 K, T8 = 323 K); (b) the same test temperatures as in (a), but with standard deviations of the Charpy impact energy which are twice as small (Todinov, 2004a). Both panels show probability density against the transition temperature T_40J (K).]
The Monte Carlo simulation resulted in a 24% probability that the estimated location of the transition region will be more than 20 K away from the true value: $P(|\hat{T}_{40\mathrm{J}} - T_{40\mathrm{J}}| > 20\ \mathrm{K}) = 0.24$. Clearly, the uncertainty in the location of the ductile-to-brittle transition region depends on the variation associated with the impact energy at the test temperatures. A large variation of the impact energy at the test temperatures propagates into a large uncertainty in the location of the transition region.
[Figure 11.4 Probability density of the estimated transition temperature $\hat{T}_{40\mathrm{J}}$ (K): uncertainty associated with the location of the ductile-to-brittle transition region determined from sparse data sets based on (a) 16 test temperatures and (b) four test temperatures (T1 = 233 K, T2 = 263 K, T3 = 283 K, T4 = 323 K) (Todinov, 2004a)]
In general, the variance of the Charpy impact energy at a specified test temperature is larger if more than one microstructural zone is sampled, and can be decreased if a single microstructural zone is sampled (e.g. the central zone). Since the probabilities of sampling the separate microstructural zones are not known a priori, the length of the uncertainty interval should be based on the largest possible variance of the Charpy impact energy at the specified test temperature. Otherwise, the estimate of the uncertainty in the location of the transition region will not be conservative.

The uncertainty in the location of the ductile-to-brittle transition region is particularly large if the transition region is fitted from data sets containing a small number of measurements (tests). A significant decrease of the uncertainty can be achieved from sparse data sets containing a sufficient number of measurements (test temperatures). The simulations also illustrate the important difference between the uncertainty associated with the location of the ductile-to-brittle transition region and the variability of the Charpy impact energy at a specified test temperature. The variability of the Charpy impact energy at a particular test temperature is natural and cannot be decreased by increasing the number of Charpy tests. Conversely, as the number of test temperatures increases, the uncertainty associated with the location of the ductile-to-brittle transition region decreases significantly. For a large number of test temperatures, the uncertainty in the location of the ductile-to-brittle transition region is very small, despite the fact that the variability of the Charpy impact energy at the test temperatures is not affected and remains large. The reason is that the uncertainty in the location of the ductile-to-brittle transition region is determined by the systematic component of the Charpy impact energy, whose variation diminishes with an increasing number of test temperatures in the sparse data set, similar to the variation of the mean of a large number of measurements (see Appendix B).

If the placement of the Charpy V-notch can be controlled, the uncertainty in the location of the transition region can be diminished by sampling from the central zone only (Figure 3.7b). This minimises the variance of the impact energy. Furthermore, the central zone is characterised by the worst Charpy impact energy compared to the other microstructural zones (Figure 4.5a). Consequently, fitting the transition region based on sparse data from the central zone results in a conservative location of the transition region, towards higher temperatures.
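The core of such a simulation routine is compact enough to sketch. In the Python sketch below, the tanh-shaped mean-energy curve, the shelf energies, the single scatter value `sigma` and the interpolation-based location of the 40 J crossing are all illustrative assumptions; they stand in for the measured Charpy parameters and the fitting procedure of the actual study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean Charpy energy curve (J) versus temperature (K):
# a smooth tanh transition between assumed lower and upper shelves.
def mean_energy(T, lower=5.0, upper=120.0, T0=253.0, width=25.0):
    return lower + 0.5 * (upper - lower) * (1.0 + np.tanh((T - T0) / width))

def t40j_estimate(temps, energies, n_grid=2000):
    """Estimate the T40J temperature: interpolate the impact energies
    over a fine temperature grid and locate the first 40 J crossing."""
    grid = np.linspace(temps.min(), temps.max(), n_grid)
    fitted = np.interp(grid, temps, energies)
    above = np.nonzero(fitted >= 40.0)[0]
    return grid[above[0]] if above.size else np.nan

temps = np.array([203.0, 233.0, 253.0, 263.0, 273.0, 283.0, 293.0, 323.0])
true_t40j = t40j_estimate(temps, mean_energy(temps))  # noise-free reference

sigma = 10.0  # assumed standard deviation of the impact energy, J
estimates = []
for _ in range(10_000):  # one sparse data set per Monte Carlo trial
    energies = mean_energy(temps) + rng.normal(0.0, sigma, temps.size)
    estimates.append(t40j_estimate(temps, energies))
estimates = np.array(estimates)
estimates = estimates[~np.isnan(estimates)]

print(f"P(|T40J_hat - T40J| <= 20 K) = "
      f"{np.mean(np.abs(estimates - true_t40j) <= 20.0):.3f}")
```

Halving `sigma`, or enlarging `temps` to 16 test temperatures, narrows the empirical distribution of the estimates markedly, reproducing the qualitative trends of Figures 11.3 and 11.4.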
11.3 RISK ASSESSMENT ASSOCIATED WITH THE LOCATION OF THE TRANSITION REGION

The degradation of C–Mn multi-run welds due, for example, to irradiation, plastic straining or corrosion, moves the ductile-to-brittle transition region towards higher temperatures. The magnitude of the change in the temperature selected to mark the location of the transition region is an indication of the degree of fracture toughness degradation, also known as weld metal embrittlement. Often, in the nuclear industry in particular, only sparse data are available for establishing whether there has been fracture toughness degradation due to irradiation (Windle et al., 1996). The 'uncertainty model' presented in the previous section can be used for establishing from a single sparse data set whether the change in the location of the transition region towards higher temperatures is due to weld metal embrittlement. Although the presence of a shift in the location of the transition region towards higher temperatures could be an indication that the weld metal has degraded because of embrittlement, it could simply be a result of the uncertainty associated with the location of the transition region.

The model presented here involves two basic components: (i) an estimate $T^{d}_{40\mathrm{J}}$ of the location of the transition region from a sparse data set taken from a weld exposed to embrittlement, and (ii) a Monte Carlo estimate of the uncertainty in the location of the transition region for a non-degraded (non-exposed) weld. During the Monte Carlo simulation, the sparse data sets needed to determine the uncertainty in the location of the transition region of the non-exposed weld are generated at the same temperatures as the test temperatures in the sparse data set from the exposed weld. During the Monte Carlo trials, the empirical probability $P(\hat{T}_{40\mathrm{J}} \ge T^{d}_{40\mathrm{J}})$ is determined that the estimate $\hat{T}_{40\mathrm{J}}$ of the location of the transition region for the non-exposed weld will be as extreme as, or more extreme than, the estimate $T^{d}_{40\mathrm{J}}$ obtained from the sparse data set characterising the exposed weld. A small probability of producing a more extreme estimate $\hat{T}_{40\mathrm{J}}$ (e.g. 5% in Figure 11.5) indicates weld metal embrittlement with a high degree of confidence.
[Figure 11.5 Charpy impact energy (J) versus temperature T (K), showing the upper and lower shelf energies $E_U$ and $E_L$, the 40 J level, the estimate $T^{d}_{40\mathrm{J}}$ and the p.d.f. of $\hat{T}_{40\mathrm{J}}$ with its upper 5% tail. A statistical test for fracture toughness degradation: a small probability (e.g. 5%) of generating a more extreme estimate $\hat{T}_{40\mathrm{J}}$ for the start of the transition region than the estimate $T^{d}_{40\mathrm{J}}$ from an experimental sparse data set indicates weld metal embrittlement with a high degree of confidence]
Indeed, if the probability $P(\hat{T}_{40\mathrm{J}} \ge T^{d}_{40\mathrm{J}})$ is small, it is unlikely that the estimate $T^{d}_{40\mathrm{J}}$ is due to uncertainty associated with the location of the transition region. According to the histograms in Figure 11.4, with an increasing number of test temperatures in the exposed sparse data set, the width of the uncertainty interval decreases, and our confidence that a relatively large value $T^{d}_{40\mathrm{J}}$ is due to fracture toughness degradation rather than uncertainty increases.

This will be illustrated by a numerical example. Suppose that, after fitting the sparse data set related to material exposed to embrittlement, $T^{d}_{40\mathrm{J}} = 245$ K has been estimated for the location of the transition region. Suppose also that the $T^{d}_{40\mathrm{J}}$ temperature has been obtained from fitting a sparse data set involving only the four test temperatures from Figure 11.4(b). The Monte Carlo simulation at these four test temperatures (the result is represented by the histogram in Figure 11.4(b)) results in a probability $P(\hat{T}_{40\mathrm{J}} > T^{d}_{40\mathrm{J}}) \approx 0.25$. Because of the large probability (25%) of obtaining a more extreme $\hat{T}_{40\mathrm{J}}$ estimate, no conclusive statistical evidence exists that the sparse data set from which the estimate $T^{d}_{40\mathrm{J}}$ was obtained indicates impact toughness degradation. Owing to the large uncertainty associated with the location of the transition region determined on the basis of a small number of tests, the sparse data set does not provide conclusive statistical evidence that the value $T^{d}_{40\mathrm{J}} = 245$ K obtained for the start of the transition region is due to fracture toughness degradation rather than uncertainty.

Suppose now that the $T^{d}_{40\mathrm{J}}$ value for the exposed material has been obtained from fitting a data set based on 16 test temperatures (Figure 11.4a). The Monte Carlo simulation at the 16 test temperatures (the result is represented by the histogram in Figure 11.4a) now yields a probability $P(\hat{T}_{40\mathrm{J}} > T^{d}_{40\mathrm{J}}) \approx 0.02$. This result provides strong statistical evidence in support of the hypothesis of impact toughness degradation. There is a very small chance (only 2%) of obtaining a more extreme $\hat{T}_{40\mathrm{J}}$ estimate than $T^{d}_{40\mathrm{J}} = 245$ K. It is then highly unlikely that the estimate $T^{d}_{40\mathrm{J}}$ is a result of uncertainty in the location of the ductile-to-brittle transition region.

Clearly, a way of providing conclusive statistical evidence indicating fracture toughness degradation is to use a sparse data set containing a sufficient number of values (test temperatures). Monte Carlo simulations involving various numbers of test temperatures can be used to determine the minimum number of test temperatures which provides conclusive statistical evidence at the selected confidence level. For the appropriate number of test temperatures, the width of the uncertainty interval will be sufficiently small. In this case, even small changes in the location of the transition region can unmistakably be attributed to fracture toughness degradation caused by metal embrittlement. A specified magnitude of the change in the $T_{40\mathrm{J}}$ transition temperature, which indicates a certain degree of impact toughness degradation, can be used to make a decision regarding the length of safe use of a component/structure and a plant shutdown.
The precision with which the magnitude of the change is estimated must be high: while early decommissioning and plant shutdown are associated with loss of production, late decommissioning is associated with significant risks of structural failure and severe accidents.
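A minimal sketch of the statistical test follows. Rather than regenerating sparse data sets, it uses a normal distribution as a stand-in for the simulated histograms of Figure 11.4, with hypothetical spreads chosen so that the resulting p-values are consistent with the 25% and 2% figures quoted above; in practice the histogram itself would be produced by the simulation routine of Section 11.2:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the Monte Carlo histograms of Figure 11.4: assume the
# estimates T40J_hat for the non-exposed weld scatter normally around the
# true transition temperature; the spreads below are assumptions.
TRUE_T40J = 233.0                  # true T40J of the non-exposed weld, K
SPREAD = {4: 18.0, 16: 6.0}        # hypothetical std. dev. of T40J_hat, K

def p_more_extreme(t40j_exposed, n_temps, n_trials=100_000):
    """Empirical probability P(T40J_hat >= t40j_exposed) for a
    non-degraded weld, i.e. the p-value of the embrittlement test."""
    t_hat = rng.normal(TRUE_T40J, SPREAD[n_temps], n_trials)
    return np.mean(t_hat >= t40j_exposed)

for n in (4, 16):
    print(f"{n:2d} test temperatures: "
          f"P(T40J_hat >= 245 K) = {p_more_extreme(245.0, n):.3f}")
```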
12 Modelling the Kinetics of Deterioration of Protective Coatings due to Corrosion
12.1 STATEMENT OF THE PROBLEM
The next problem is related to modelling the deterioration of coating systems protecting steel structures from corrosion. Often, steel structures are covered with a zinc coating that acts as a sacrificial anode. In the atmosphere and most aqueous environments, zinc is anodic to steel and protects it. In order to prevent corrosion of the zinc layer, an organic coating often protects the zinc coating. With time, deteriorated spots arise at random locations in the coating layer and expand by corroding it. Here, a physics-of-failure model is developed, describing the quantity of the corroded fraction of the coating, assuming that the radial corrosion rate k at which a corroded spot expands is constant and isotropic in all directions. A typical feature of this model is that the corroded area is formed by corrosion spots (β) arising and growing in a matrix (α) of non-corroded material. The nucleation and growth are contained in a system with finite area S (see Figure 12.1). Since the growth rate is isotropic (constant in any direction), the corrosion spots grow as circles and the growth stops at the points of contact of two growing spots (growth with impingement). Topologically, the deteriorating coating with finite area S is composed of interpenetrating circular corroded
spots β, uniformly distributed in the finite area S. It is assumed that the spots nucleate in the non-corroded regions of the finite area S at a time-dependent rate I(ν). The number dN of newly formed spots in the infinitesimal time interval ν, ν + dν is proportional to the area $S_\alpha$ of the remaining non-corroded α-regions: $dN = I(\nu) S_\alpha \, d\nu$. There exists an equivalence between the kinetics of a phase transformation of the nucleation-and-growth type with a constant radial rate (Christian, 1965) and the process of continuous coverage by overlapping circles growing at a constant radial rate (Todinov, 1996). This allows us, instead of simulating the nucleation in the non-corroded α-region, whose areal fraction changes constantly, to simulate nucleation in the total area S, which is constant. Corrosion spots which 'arise' in the already corroded region are 'imaginary' and do not contribute to the quantity of corroded area because, due to the constant growth rate, their boundary can never cross the boundary of the corrosion spot into which they 'nucleated'. As a result, the number of real corrosion spots (those which nucleate in α) is, at any time, proportional to the area of the remaining α-region. Consequently, nucleation and growth of corrosion spots with a constant radial rate are mathematically equivalent to coverage of a surface by overlapping circles growing at a constant radial rate. The number of circular nuclei appearing in the infinitesimal interval ν, ν + dν is

$$dN = I(\nu) S \, d\nu \qquad (12.1)$$
12.2 MODELLING THE QUANTITY OF CORRODED PROTECTIVE COATING WITH TIME

The areal fraction of non-corroded area at time τ is equal to the probability that a randomly selected point A will sample the α-region only (Figure 12.1). This probability in turn can be expressed by the probability that none of the corrosion spots that appear in the time interval (0, τ) will cover the selected point.

[Figure 12.1 Nucleation and growth of corrosion spots (β) within the time interval (0, τ) on a surface with finite area S; the randomly selected point A lies in the non-corroded α-region]
Suppose that the time at which the coating is being analysed is τ. The probability that none of the nuclei with radii k(τ − ν) that have appeared in the infinitesimal time interval ν, ν + dν will cover the point A is

$$\left[ 1 - \pi k^2 (\tau - \nu)^2 / S \right]^{S I_\nu \, d\nu} = \exp\left\{ S I_\nu \, d\nu \, \ln\left[ 1 - \pi k^2 (\tau - \nu)^2 / S \right] \right\} \qquad (12.2)$$
where $I_\nu$ is the nucleation rate at the instant ν and $k(\tau - \nu)$ are the radii (at time τ) of the corroded spots that have nucleated at the instant ν (τ − ν is a time increment). This is in effect the probability that all corrosion spots which have appeared in the time interval ν, ν + dν will lie outside the exclusion zone $S_0(\tau - \nu)$ (Figure 12.2). For another time increment τ − ν₁, another exclusion zone, $S_0(\tau - \nu_1)$ in Figure 12.2, is obtained. The probability that none of the corroded spots that have appeared in the time interval (0, τ) will cover the selected point A is a product of the probabilities of 'non-coverage' characterising all portions of corrosion spots that have appeared in the time interval (0, τ). Hence, the probability $\alpha_a$ that the point A will sample the non-corroded α-area only is

$$\alpha_a = \exp\left\{ \int_0^\tau S I_\nu \ln\left[ 1 - \pi k^2 (\tau - \nu)^2 / S \right] d\nu \right\} \qquad (12.3)$$
where S is the area of the finite system in which the nucleation takes place. Since $\alpha_a + \beta = 1$, the areal fraction of corroded area is

$$\beta = 1 - \exp\left\{ \int_0^\tau S I_\nu \ln\left[ 1 - \pi k^2 (\tau - \nu)^2 / S \right] d\nu \right\} \qquad (12.4)$$
[Figure 12.2 Exclusion zones of different size, $S_0(\tau - \nu)$ and $S_0(\tau - \nu_1)$, for corrosion spots which have nucleated at times ν and ν₁]
For a system of finite unit area (S = 1) and a constant nucleation rate ($I_\nu \equiv I = \mathrm{const}$), equation (12.4) transforms into

$$\beta = 1 - \exp\left\{ I \int_0^\tau \ln\left[ 1 - \pi k^2 (\tau - \nu)^2 \right] d\nu \right\} \qquad (12.5)$$
which describes the extension of the corroded area with time (Todinov, 1996). For a relatively large number of corrosion spots on the surface S, the size of the separate corrosion spots is small, because corrosion terminates (β approaches unity) for relatively small areas $\pi k^2 (\tau - \nu)^2$ of the corroded spots. As a result, the ratio $\psi(\nu) = \pi k^2 (\tau - \nu)^2 / S$ remains small and the approximation $\ln[1 - \pi k^2 (\tau - \nu)^2] \approx -\pi k^2 (\tau - \nu)^2$ is possible (Abramowitz and Stegun, 1972). After the integration,

$$\beta(\tau) = 1 - \exp\left( -\frac{\pi I k^2 \tau^3}{3} \right) \qquad (12.6)$$

is obtained for the areal fraction of corroded area at time τ. It must be pointed out, however, that equation (12.6) is valid only for a relatively large number of corrosion spots in the finite area S. If the number of corrosion spots is small, the predictions from equation (12.6) may deviate substantially from the true values. If the number of nucleation spots followed a homogeneous Poisson process in the finite area S, equation (12.6), instead of equation (12.5), would be the correct equation. The nucleation of corrosion spots in S, however, is not a homogeneous Poisson process. The reason is that it is already known with certainty that all corrosion spots will nucleate in S. The corrosion spots follow a uniform distribution in the finite area S, and there is no possibility that there will be no corrosion spots in S. Therefore, equation (12.5) should be applied to model the fraction of corroded area. Since this error is very common (see, for example, the comments on the Kolmogorov–Johnson–Mehl–Avrami equation in Todinov, 2000a), this point is further clarified in the next section on the basis of an illustrative example.
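The difference between the exact kinetics (12.5) and the approximation (12.6) can be checked numerically. In the sketch below, the nucleation rate, growth rate and time instants are arbitrary illustrative values, deliberately chosen so that only a few corrosion spots are expected and the discrepancy becomes visible:

```python
import numpy as np

# Illustrative parameter values (assumptions, not taken from the text).
I = 0.2    # constant nucleation rate per unit area per unit time
k = 0.05   # radial growth rate of a corrosion spot
S = 1.0    # finite system area (unit area, as in equation (12.5))

def beta_exact(tau, n_points=10_000):
    """Corroded areal fraction from equation (12.5); the integral over
    nu is evaluated by a simple rectangle-rule quadrature."""
    nu = np.linspace(0.0, tau, n_points)
    integrand = np.log(1.0 - np.pi * k**2 * (tau - nu)**2 / S)
    return 1.0 - np.exp(I * integrand.mean() * tau)

def beta_approx(tau):
    """Corroded areal fraction from the approximation (12.6)."""
    return 1.0 - np.exp(-np.pi * I * k**2 * tau**3 / 3.0)

for tau in (2.0, 5.0, 10.0):
    print(f"tau = {tau:5.1f}: eq.(12.5) -> {beta_exact(tau):.4f}, "
          f"eq.(12.6) -> {beta_approx(tau):.4f}")
```

With the expected number of spots Iτ of the order of unity, the two predictions separate visibly; a larger expected number of spots brings them back together, in line with the discussion above.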
12.3 AN ILLUSTRATIVE EXAMPLE

For the simplified case where n corroded spots start their growth at the beginning of the time interval (0, τ), and no other spots arise subsequently, the corroded fraction β(τ) at any time τ is equal to the probability that, for any fixed point A, at least a single corroded spot has nucleated inside the circle with radius kτ centred on the point A (Figure 12.3).
[Figure 12.3 A simplified case where n random corroded spots start their growth at the beginning of the time interval (0, τ) and no other spots nucleate subsequently; a spot covers the fixed point A only if it has nucleated inside the exclusion circle $S_0(\tau)$ of radius kτ centred on A]
The probability β(τ) is given by

$$\beta(\tau) = 1 - \left( \frac{S - \pi k^2 \tau^2}{S} \right)^n = 1 - \exp\left\{ n \ln\left[ 1 - \frac{\pi k^2 \tau^2}{S} \right] \right\} \qquad (12.7)$$
which can also be presented as

$$\beta(\tau) = 1 - \exp\{ n \ln[1 - \psi(\tau)] \} \qquad (12.8)$$
where $\psi(\tau) = \pi k^2 \tau^2 / S$. Equation (12.8) is an analogue of equation (12.5). It also describes the fraction covered by n overlapping corroded spots of equal areal ratios ψ(τ). Note that, for random coverage with overlapping objects, the covered fraction β(τ) in equation (12.8) depends only on the ratio ψ(τ) and does not depend on the shape of the covering objects. In the case of a small areal ratio ψ(τ), the approximation

$$\ln[1 - \psi(\tau)] \approx -\psi(\tau) \qquad (12.9)$$
is possible and equation (12.8) transforms into

$$\beta(\tau) = 1 - \exp[-n \psi(\tau)] \qquad (12.10)$$
If the number of nuclei n in S is large, β approaches unity at relatively small corrosion spot sizes $\pi k^2 \tau^2$. As a result, the ratio $\psi(\tau) = \pi k^2 \tau^2 / S$ remains small and approximation (12.10) is possible. However, if the number of corroded spots in S is small, the areal ratio $\psi(\tau) = \pi k^2 \tau^2 / S$ of the corroded spots is no longer small. Because the corroded spots are sparse, at some instant τ the area $\pi k^2 \tau^2$ of the growing corrosion spots will become comparable with S. Consequently, approximation (12.9) is no longer possible and equation (12.8), instead of equation (12.10), should be used.
[Figure 12.4 Corroded fraction plotted against the areal ratio ψ: comparison between equations (12.8) and (12.10) for different numbers of corroded spots: (a) n = 3; (b) n = 7 and (c) n = 13]
The difference between equations (12.8) and (12.10) (and likewise between equations (12.5) and (12.6)) can be illustrated by comparing their graphs for n = 3 (Figure 12.4a), n = 7 (Figure 12.4b) and n = 13 (Figure 12.4c). In all three graphs, the corroded fraction is plotted against the areal ratio of the corroded spots. As can be verified, the difference between the results from equations (12.8) and (12.10) diminishes substantially for small areal ratios and also for a large number of corrosion spots. Indeed, the difference between the right-hand sides of equations (12.8) and (12.10) is

$$\Delta = \exp[ n \ln( 1 - \psi(\tau) ) ] - \exp[ -n \psi(\tau) ]$$

For small ψ(τ), $\exp[n \ln(1 - \psi(\tau))] \approx \exp[-n\psi(\tau)]$ and $\Delta \approx 0$. For large n, both terms in the difference are very small ($\exp[n \ln(1 - \psi(\tau))] \approx 0$ and $\exp[-n\psi(\tau)] \approx 0$) and again $\Delta \approx 0$. For a small number of corrosion spots, however (e.g. n = 3), the differences are substantial. It must be pointed out that the discrepancies in the predictions from the two equations depend on the number of growing corrosion spots and do not depend directly on their growth rate. Thus, for fixed nucleation and growth rates, if the area S contains a small number of growing spots, discrepancies will exist. If the area S is expanded to contain a large number of corrosion spots, the discrepancies will be negligible.
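The curves of Figure 12.4 can be reproduced in a few lines; the grid of areal ratios below simply mirrors the axes of the figure:

```python
import numpy as np

def beta_finite(psi, n):
    """Covered fraction from equation (12.8): a known, finite number of
    spots uniformly distributed in the finite area."""
    return 1.0 - np.exp(n * np.log(1.0 - psi))

def beta_poisson(psi, n):
    """Covered fraction from the approximation (12.10), appropriate for
    a homogeneous Poisson number of spots."""
    return 1.0 - np.exp(-n * psi)

psi = np.linspace(0.0, 0.99, 100)
for n in (3, 7, 13):
    delta = np.max(np.abs(beta_finite(psi, n) - beta_poisson(psi, n)))
    print(f"n = {n:2d}: max |eq.(12.8) - eq.(12.10)| = {delta:.3f}")
```

The maximum discrepancy shrinks steadily as n grows, confirming that the difference is controlled by the number of growing spots rather than by their growth rate.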
13 Minimising the Probability of Failure of Automotive Suspension Springs by Delaying the Fatigue Failure Mode

The general approach to minimising the risk of premature failure is to remove a failure mode, reduce its likelihood or reduce the consequences associated with it. Usually, however, no control over the cost of failure is possible, nor can a failure mode be removed. Risk reduction can still be achieved by delaying the activation of a failure mode over the desired time interval (usually the design life). An example of risk reduction achieved by delaying the activation of a failure mode can be given with automotive suspension springs. Typically, they are manufactured by hot winding. The cut-to-length cold-drawn spring rods are austenitised, wound into springs, quenched and tempered. This is followed by warm presetting, shot peening, cold presetting and painting (Heitmann et al., 1996). The only failure mode is fatigue and, since it cannot be removed, an important way to improve reliability is to delay the onset of fatigue. One way to achieve this is by introducing favourable compressive residual stresses at the surface of the spring wire. Residual stresses at the surface of the spring exert a strong influence on its fatigue performance. Conventional processing at the key stages of spring manufacturing (quenching, tempering and shot peening) often results in undesirable residual stresses at the surface. Residual stress measurements in as-quenched silicon–manganese (0.6C–Si2–Mn1 spring steel) springs have been reported (Todinov, 2000c).
[Figure 13.1 Residual stress (MPa) versus depth (µm) near the surface of oil-quenched decarburised silicon–manganese spring wire]
The specimens were taken from different turns of quenched-in-oil springs. All investigated springs exhibited tensile residual stresses at the surface (Figure 13.1), with a typical (modal) value of +140 MPa. Residual stress measurements were also performed on 150 mm long specimens taken from cold-drawn spring wire of diameter 12 mm. In order to remove the decarburisation during austenitisation, half of the specimens were machined and sealed in quartz tubes filled with argon. A completely decarburised surface layer of thickness 30–50 µm was measured in the decarburised specimens. The residual stress measurements were performed on an X-ray stress diffractometer (Rigaku Corporation, 1994) using the standard sin²ψ method (Cullity, 1978; SAE, 1965) based on four inclined measurements at 0°, 15°, 30° and 45°. The residual stress distribution with depth was produced by X-ray measurements followed by dissolving the surface layers chemically, with a solution of nitric acid applied gradually on top of the spring wire. Using a micrometer, the thickness of the removed layer was measured at a constant location of the irradiated spot on the spring wire surface. Successively measured stresses at the surface were also corrected by a small amount because of the stress relaxation created by the removed material (SAE, 1965). Oil quenching of non-decarburised spring wire of diameter 12 mm did not result in residual stresses, because of the small thermal gradients produced by oil quenching, which were unable to induce significant plastic flow during thermal contraction and transformation expansion (Todinov, 1999a).
Unlike the non-decarburised specimens, after oil quenching the decarburised specimens exhibited tensile residual stresses at the surface (Figure 13.1), which decreased but did not disappear after tempering. The reason for the tensile residual stresses is the decarburised layer, which is associated with an increased coefficient of thermal expansion. During quenching, the decarburised layer contracts more vigorously than the non-decarburised inner layers. Another reason is the volumetric effect due to the different microstructure at the surface and beneath the surface. The tensile residual stresses at the spring surface increase the effective net stress range and the mean stress during fatigue loading, which shortens the fatigue crack initiation life and increases the fatigue crack propagation rate. In order to improve fatigue resistance, shot peening has been used as an important element of spring manufacturing technology (Niku-Lari, 1981; Bird and Saynor, 1984). Compressive residual stresses from shot peening increase fatigue life by delaying the initiation and inhibiting the propagation of fatigue cracks. Shot-peened helical springs can be used at 65–70% higher stress levels than unpeened springs. Shot-peened leaf springs show no failures at over 1 000 000 cycles, compared with unpeened springs whose fatigue life is about 100 000 cycles (Burrell, 1985).

The tensile residual stresses from quenching and tempering superpose with the compressive residual stresses from shot peening and form the net residual stresses. If the intensity of shot peening is small, the net residual stresses at the surface may even be dominated by a tensile component. Experimental measurements (Todinov, 2000c) identified zones of small compressive residual stresses, or even tensile residual stresses, beneath a spring surface covered by shot-peening craters. These findings indicated that at some regions over the spring turns the shot-peening intensity was not sufficient to create a large and uniform compressive residual stress. The orientation of the spring turns with regard to the trajectory of the peening shot is shown in Figure 13.2. In Figure 13.2, α is the shot impact angle which the normal n to the spring surface subtends with the incident trajectory of the peening shot, and φ is the circumference angle; φ = 0° corresponds to the outside (o) and φ = 180° to the inside (i) zone of the spring turn, while φ = ±90° correspond to the top (t) and bottom (b) zones of the spring turn, respectively. Typical residual stress distributions after shot peening are characterised by a peak of the compressive residual stress at some distance beneath the surface. On the outer surface of the spring helix, which is the best-peened part, corresponding to a circumference angle φ = 0° and shot impact angle α = 0°, the peak of the compressive residual stress was often in the interval −600 to −700 MPa (Figure 13.3). Figure 13.3 depicts residual stresses characterising a relatively uniformly shot-peened spring. Few of the investigated springs exhibited such uniformity of the compressive residual stress. The residual stress distributions were usually compressive, but with large variation in the magnitudes, which indicated uneven shot-peening intensity.
[Figure 13.2 Orientation of the shot-peened spring turns with regard to the trajectory of the peening shot, showing the shot impact angle α, the circumference angle φ, and the outside (o), top (t), bottom (b) and inside (i) zones of a spring turn]
[Figure 13.3 Residual stress (MPa) versus depth (µm) measured on the top (φ = +90°), bottom (φ = −90°) and outermost part of the helix (φ = 0°) of a relatively uniformly shot-peened silicon–manganese spring]
On some of the springs, tensile residual stresses of relatively large magnitude were measured near the surface, despite the fact that the entire circumference of the spring wire had been shot peened. In a number of instances, compressive and tensile residual stresses were measured on the same spring turn (Figure 13.4). Tensile residual stresses at the surface were more frequently detected on springs tempered to a higher hardness level. The non-uniform residual stress from shot peening over the spring wire circumference was caused by the relative orientation of the trajectory of the peening shot and the spring coil (Figure 13.2). Shot-peening intensity depends on the shot impact angle α: the larger the angle, the smaller the transmitted impact energy and the smaller the compressive residual stress. The outermost part of the spring helix (zone 'o' in Figure 13.2, α ≈ 0°, φ ≈ 0°) receives the largest compressive stress from shot peening, while the top and bottom zones, which are characterised by the largest impact angle (zones 't' and 'b' in Figure 13.2, α ≈ 90°, φ ≈ ±90°), receive the smallest compressive residual stress. Often the shot-peening intensity on the top and bottom zones of the spring turns was not sufficient to mask the tensile residual stress after oil quenching and tempering. Instead, a tensile net residual stress, or a compressive net residual stress of small magnitude, was obtained. In order to avoid such unfavourable residual stresses at the spring surface, the latter was subsequently peened at small impact angles, which guaranteed a large and uniform compressive residual stress over the spring turns.
[Figure 13.4 Residual stress (MPa) versus depth (µm): tensile and compressive residual stresses on the opposite top (φ = +90°) and bottom (φ = −90°) regions of the same spring turn, cut from a non-uniformly peened silicon–manganese suspension spring tempered to a hardness level of 54 HRC]
This resulted in a significant delay of the onset of fatigue failure, indicated by the significant improvement in fatigue performance. Another important measure for avoiding unfavourable residual stresses at the spring surface is preventing decarburisation during austenitisation. Decarburisation promotes tensile residual stresses at the surface after quenching and diminishes the fatigue strength (Gildersleeve, 1991). It limits the maximum magnitude of the compressive residual stresses from shot peening, decreases their penetration depth and increases the surface roughness. Decarburisation also diminishes the fatigue resistance of suspension springs by: (i) diminishing the local fatigue strength, due to the decreased density of the surface layer, increased grain size and diminished fracture toughness and yield strength (Todinov, 1999a; Chernykh, 1991); and (ii) creating low-cycle fatigue conditions at the spring surface. These factors promote early fatigue crack initiation and premature fatigue failure. Consequently, in order to delay the onset of fatigue failure, decarburisation and excessive grain growth should be avoided during the austenitisation of the spring wire. Delaying the onset of fatigue failure during the design life of the springs also requires: (i) a small susceptibility of the spring steel to surface decarburisation; (ii) improved quenching to remove the tensile residual stresses at the spring surface (Todinov, 1998); (iii) improved tempering to achieve the optimal hardness corresponding to maximum fatigue resistance; (iv) selecting cleaner steel with a smaller number of oxide inclusions, which serve as ready fatigue crack initiation sites; and (v) a smaller number density of sulphide inclusions, which reduce the spring wire toughness and promote anisotropy. These measures increase the time for fatigue crack initiation, slow down the rate of fatigue crack propagation and, as a result, significantly delay fatigue failure.
14 Reliability Governed by the Relative Locations of Random Variables in a Finite Domain
14.1 RELIABILITY DEPENDENT ON THE RELATIVE CONFIGURATIONS OF RANDOM VARIABLES

There exist numerous examples of reliability governed by the relative locations of a number of random variables uniformly distributed in a finite domain. A commonly encountered problem is presented in Figure 14.1(a). During a finite time interval of length a, exactly n consumers connect to a supply system independently and randomly. Each connection is associated with a shock (increased demand) to the supply system, which needs a minimum time interval s to recover and stabilise after a connection (demand). The supply system is overloaded if two or more successive demands follow within a critical time interval s (Figure 14.1a). The probability of overloading is equal to the probability that two or more random demands will cluster within the critical interval of length s. Another common case is where n users arrive randomly during a finite time interval of length a, and use a particular piece of equipment for a fixed time s. The problem is to calculate the probability of a collision, which occurs if two or more users arrive within a time interval s (Figure 14.1a). A mechanical problem of a similar nature is presented in Figure 14.1(b). A number of disks/wheels are attached to a common shaft, but independently from one another.
[Figure 14.1 (a) Overloading of a supply system due to clustering of a fixed number of random demands within a critical interval s of the finite interval (0, a); (b) safe and failure configurations of the centrifugal forces F1, F2 and F3 on a common shaft]
Each disk is associated with an eccentricity which creates a centrifugal force Fi, i = 1, 2, 3, on the shaft rotating at high speed, as shown in Figure 14.1(b). If the forces cluster within a critically small angle s, the shaft will be subjected to excessive deformation during rotation at high speed. The problem is to calculate the probability of clustering of the centrifugal forces within a critical angle s.
14.2 A GENERIC EQUATION RELATED TO RELIABILITY DEPENDENT ON THE RELATIVE LOCATIONS OF A FIXED NUMBER OF RANDOM VARIABLES

Only configurations of uniformly distributed random variables in a common domain are considered, where the safe/failure state depends only on the relative locations of the random variables and not on their absolute locations. The location of each random variable can be represented as a point in the common domain, and each point in the common domain corresponds to a possible location of a random variable; see Figure 14.2(a). The random variables are not necessarily identical; only their distribution in the finite domain is uniform. Suppose that a domain with measure ν has been defined, as in Figure 14.2(a), in which n uniformly distributed random variables form a safe/failure configuration with probability p. We assume that the probability that two or more random variables will reside simultaneously in a very small domain increment is negligible.
[Figure 14.2 (a) The probability of a safe/failure configuration of n uniformly distributed random variables in a domain with measure ν is given by the generic equation (14.1), $p\nu^n = C + \int \nu^{n-1} \sum_{i=1}^{n} p_i^*(\nu)\, d\nu$; (b) two uniformly distributed random shocks of different types A and B with minimum recovery times s1 and s2 in the finite time interval a; (c) total clustering within an interval s of the demands from a given number of consumers connecting independently to a supply system, for which $p^* = (s/a)^{n-1}$; (d) a set of specified minimum intervals $s_{12}, s_{23}, \ldots, s_{n-1,n}$ and the actual intervals $S_{12}, S_{23}, \ldots, S_{n-1,n}$ between random variables, whose total number is known, in a finite interval a]
If $p_i^*$ is the probability of a safe/failure configuration when the ith random variable is located at the boundary of the common domain, the link between the probability p and the probabilities $p_i^*$ is given by

$$p \nu^n = C + \int \nu^{n-1} \sum_{i=1}^{n} p_i^*(\nu) \, d\nu \qquad (14.1)$$
where C is an integration constant determined from the boundary conditions. In the case where all of the random variables are identical, all probabilities $p_i^*$ are equal ($p_1^* = p_2^* = \ldots = p^*$), and equation (14.1) transforms into

$$p \nu^n = C + \int \nu^{n-1} n p^*(\nu) \, d\nu \qquad (14.2)$$
The derivation of equation (14.1) is presented in Appendix 14.1. In the case of distinct (non-identical) random variables, the probabilities $p_i^*$ in equation (14.1) are in general different, and determining the integration constant C is more complicated. Thus, in a finite time domain a, depending on which random variable (event) appears first, different boundary conditions may be present. Consider a simplified illustrative example of a system subjected to exactly two uniformly distributed consecutive shocks of different types, A and B (Figure 14.2b). Suppose that, after a shock of type A, the system needs a minimum time s1 to recover before the shock of type B can arrive safely. Suppose also that, if a shock of type B arrives first, the minimum recovery time is s2 before a shock of type A can arrive safely. The probability p of 'smooth' operation, with no overloading of the system, can be presented as a sum of the probabilities p1 and p2 of smooth operation associated with the mutually exclusive and exhaustive events where either the shock of type A or the shock of type B arrives first. Because equation (14.1) can also be applied to the case where one of the shocks (e.g. the shock of type A) always appears first, the probability p1 of smooth operation in this case can be calculated from

$$p_1 a^n = C_1 + \int a^{n-1} p_1^*(a) \, da \qquad (14.3)$$
where a is the length of the time interval. If the shock of type A is fixed at the beginning '0' of the finite time interval, the probability of a safe configuration is $p_1^* = (a - s_1)/a$ (Figure 14.2b). The term $p_2^*$ is missing because only configurations where the shock A arrives first are considered. Similarly, if a shock B arrives first, equation (14.1) gives

$$p_2 a^n = C_2 + \int a^{n-1} p_2^*(a) \, da \qquad (14.4)$$

for the probability p2 of a 'safe' configuration if B arrives first. If the shock B is fixed at the beginning '0' of the time interval, the probability of a safe configuration is $p_2^* = (a - s_2)/a$. The integration constants C1 and C2, and the probabilities p1 and p2 in equations (14.3) and (14.4), are associated with the cases where either the shock of type A or the shock of type B arrives first. Integrating equations (14.3) and (14.4) over the finite time interval a gives
$$p_1 a^2 = C_1 + \int a \left[ (a - s_1)/a \right] da = C_1 + a^2/2 - s_1 a \qquad (14.5)$$

$$p_2 a^2 = C_2 + \int a \left[ (a - s_2)/a \right] da = C_2 + a^2/2 - s_2 a \qquad (14.6)$$
From the boundary conditions p1 = 0 if a = s1, and p2 = 0 if a = s2, the integration constants $C_1 = s_1^2/2$ and $C_2 = s_2^2/2$ are determined. Adding the probabilities p1 and p2 as probabilities of mutually exclusive events, and dividing by a², results in

$$p = 1 - (s_1 + s_2)/a + (s_1^2 + s_2^2)/(2a^2) \qquad (14.7)$$

for the probability $p = p_1 + p_2$ of smooth operation irrespective of which type of shock arrives first. Equation (14.7) has been verified by a Monte Carlo simulation. Thus, for a = 1, s1 = 0.1 and s2 = 0.05, the equation yields a probability p = 0.856, which is close to the empirical probability p = 0.8558 obtained from the simulation.
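Such a Monte Carlo verification takes only a few lines; the event logic (which shock arrives first, and whether the corresponding recovery time is respected) is all that needs to be coded:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_smooth_mc(a, s1, s2, n_trials=1_000_000):
    """Monte Carlo estimate of the probability of smooth operation for two
    uniformly distributed shocks A and B in (0, a): a minimum recovery
    time s1 is needed after A if A arrives first, and s2 after B if B
    arrives first."""
    t_a = rng.uniform(0.0, a, n_trials)
    t_b = rng.uniform(0.0, a, n_trials)
    ok_a_first = (t_a <= t_b) & (t_b - t_a >= s1)
    ok_b_first = (t_b < t_a) & (t_a - t_b >= s2)
    return np.mean(ok_a_first | ok_b_first)

a, s1, s2 = 1.0, 0.1, 0.05
print("Monte Carlo     :", p_smooth_mc(a, s1, s2))
print("Equation (14.7) :", 1 - (s1 + s2)/a + (s1**2 + s2**2)/(2*a**2))
```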
14.2.1 AN ILLUSTRATIVE EXAMPLE. PROBABILITY OF CLUSTERING OF RANDOM DEMANDS WITHIN A CRITICAL INTERVAL

The application of equation (14.1) can be illustrated by a simplified problem relating to the probability of overloading a supply system, as in Figure 14.2(c), which occurs only if all of the n uniformly distributed demands due to the connecting of n consumers are concentrated within a small time interval s anywhere within the finite time interval a. In this case, the critical configuration of the random variables is a 'total clustering' within a distance s. Indeed, because $p_1^* = p_2^* = \ldots = p_n^* = p^*$, and $p^* = (s/a)^{n-1}$ if one of the demands is fixed at the beginning of the length a (point '0' in Figure 14.2c), according to equation (14.2) the probability of total clustering satisfies

$$p a^n = C + \int a^{n-1} n (s/a)^{n-1} \, da = C + n s^{n-1} a \qquad (14.8)$$
For a = s, the probability of total clustering becomes p = 1. Substituting a = s in equation (14.8) then yields $s^n = C + n s^n$, from which $C = -(n-1)s^n$. Finally, the probability of overloading the supply system becomes

$$p_f = n(s/a)^{n-1} - (n-1)(s/a)^n \qquad (14.9)$$
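Equation (14.9) is in fact the cumulative distribution of the range of n uniform random variables, and can be verified directly by simulation; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_total_cluster_mc(n, s, a, n_trials=500_000):
    """Monte Carlo estimate of the probability that all n uniformly
    distributed demands in (0, a) fall within some interval of length s,
    i.e. that their range (max - min) does not exceed s."""
    t = rng.uniform(0.0, a, (n_trials, n))
    return np.mean(t.max(axis=1) - t.min(axis=1) <= s)

n, s, a = 5, 0.2, 1.0
print("Monte Carlo     :", p_total_cluster_mc(n, s, a))
print("Equation (14.9) :", n*(s/a)**(n-1) - (n-1)*(s/a)**n)
```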
14.3 A GIVEN NUMBER OF UNIFORMLY DISTRIBUTED RANDOM VARIABLES IN A FINITE INTERVAL (CONDITIONAL CASE)

If the probabilities $p_i^*$ in equation (14.1) cannot be calculated easily, the problem can be solved by reducing its complexity and solving a series of problems involving a smaller number of random variables. This method will be illustrated by finding the probability that the actual distances $S_{12}, S_{23}, \ldots, S_{n-1,n}$ between the locations of a given number n of uniformly distributed random variables in a finite interval a will be greater than the corresponding specified minimum distances $s_{12}, s_{23}, \ldots, s_{n-1,n}$ ($\sum_{i=1}^{n-1} s_{i,i+1} \le a$), where $s_{i,i+1}$ is the specified minimum distance between adjacent random variables with indices i and i + 1 (Figure 14.2d). Suppose that $p_n$ is the probability of existence of the specified minimum distances between the n random variables. Then, from equation (14.2), it follows that

$$p_n a^n = C + \int a^{n-1} n \left( \frac{a - s_{12}}{a} \right)^{n-1} p_{n-1} \, da \qquad (14.10)$$

In this way, the probability $p_n$ that the actual distances $S_{12}, S_{23}, \ldots, S_{n-1,n}$ between the random variables will be greater than the specified minimum distances $s_{12}, s_{23}, \ldots, s_{n-1,n}$ has been expressed by the probability $p_{n-1}$ related to n − 1 variables. This is the probability that the actual distances between n − 1 adjacent random variables will be greater than the specified minimum distances $s_{23}, \ldots, s_{n-1,n}$, if one of the random variables is 'fixed' at the beginning of the finite interval. The complexity of the initial problem has been reduced. The complexity of the simpler problem can also be reduced, and finally a problem with a trivial solution will be obtained. Starting from the trivial solution, all necessary intermediate solutions can be produced, as well as the solution of the initial complex problem. Indeed, following equation (14.10):

$$p_{12} a^2 = C + \int 2 a \, p_{12}^* \, da = C + (a - s_{12})^2 \qquad (14.11)$$
where $p_{12}^*$ is the probability of existence of a specified minimum distance $s_{12}$ for two random variables when one of them is fixed at the beginning of the interval. Because $p_{12}^* = (a - s_{12})/a$, and because for $a = s_{12}$, $p_{12} = 0$ and C = 0, it follows from equation (14.11) that $p_{12} = (1 - s_{12}/a)^2$. Similarly, for three random variables, the probability $p_{123}$ of the existence of intervals greater than $s_{12}$ and $s_{23}$ is

$$p_{123} a^3 = C + \int 3a^2 \left( \frac{a - s_{12}}{a} \right)^2 \left( 1 - \frac{s_{23}}{a - s_{12}} \right)^2 da = C + (a - s_{12} - s_{23})^3$$

from which $p_{123} = [1 - (s_{12} + s_{23})/a]^3$, because for $a = s_{12} + s_{23}$, $p_{123} = 0$ and C = 0. In a similar fashion,

$$p_{12 \ldots n} = P(S_{12} \ge s_{12} \cap S_{23} \ge s_{23} \cap \ldots \cap S_{n-1,n} \ge s_{n-1,n}) = \left[ 1 - (s_{12} + s_{23} + \cdots + s_{n-1,n})/a \right]^n \qquad (14.12)$$
is obtained for the probability of existence of the specified minimum distances. Next, we will obtain an equation giving the probability p that the actual distances between n adjacent random variables will be at least $s_{12}, s_{23}, \ldots, s_{n-1,n}$, and that the actual distance $S_{01}$ of the first random variable location from the start of the interval a will be at least $s_{01}$ ($S_{01} \ge s_{01}$). The probability p is a product of the probability $[(a - s_{01})/a]^n$ that all random variable locations will be at a distance larger than $s_{01}$ from the beginning of the finite interval a, and the probability given by equation (14.12). As a result,

$$p \equiv P(S_{01} \ge s_{01} \cap S_{12} \ge s_{12} \cap \ldots \cap S_{n-1,n} \ge s_{n-1,n}) = \left[ \frac{a - s_{01}}{a} \right]^n \left[ 1 - \frac{s_{12} + s_{23} + \cdots + s_{n-1,n}}{a - s_{01}} \right]^n$$

is obtained which, after simplifying, becomes

$$p \equiv P(S_{01} > s_{01} \cap S_{12} > s_{12} \cap \ldots \cap S_{n-1,n} > s_{n-1,n}) = \left[ 1 - (s_{01} + s_{12} + s_{23} + \cdots + s_{n-1,n})/a \right]^n \qquad (14.13)$$
where $\sum_{i=0}^{n-1} s_{i,i+1} < a$. For the probability that the distances related to six random variable locations in an interval of length a = 100 units will be at least $s_{01} = 15$, $s_{12} = 7$ and $s_{45} = 20$, a Monte Carlo simulation yields the empirical probability 0.038. This result is confirmed by the theoretical probability calculated from equation (14.13):

$$p = \left[ 1 - (s_{01} + s_{12} + s_{45})/a \right]^6 = 0.038$$

By setting $s_{01} = 0$ and $s_{12} = s_{23} = \ldots = s_{n-1,n} = s$ in equation (14.13), the probability

$$p_f = 1 - \left[ 1 - (n-1)s/a \right]^n \qquad (14.14)$$

of a cluster of two or more random variables within a distance s is obtained.
Reliability and Risk Models
14.4 APPLICATIONS Now let us return to the problem related to a given number of users arriving randomly within a finite time interval with length a, and using the same piece of equipment for a fixed time s. The graphs of equation (14.14) in Figure 14.3 give the probability of existence of a cluster of two or more demands within three different fixed user times s ¼ 0.1 h, s ¼ 0.05 h and s ¼ 0.01 h. The length of the finite time interval was assumed to be a ¼ 3 h. Clearly, the probability of clustering (collision) of two or more demands during the user time s ¼ 0.05 h (the middle curve) increases rapidly with increasing the number of users. Even for a small number of users n ¼ 3, arriving randomly within a time interval of 3 h, the probability of collision is substantial (0.1). For n ¼ 10 users, the probability of collision is approximately 0.80, as shown in Figure 14.3. In this sense, the problem can be regarded as a continuous analogue of the well-documented birthday problem (DeGroot, 1989). With decreasing the number of users and with increasing the time interval, the probability of collision decreases. Thus, for a time interval of 3 h, containing 20 users, and for user time s ¼ 0.05 h, the probability of collision approaches unity (Figure 14.4). This probability decreases if the number of users decreases, as shown in Figure 14.4, or if the finite time interval is increased, as shown in Figure 14.5. Probability of collision (clustering of the demands) 1.0 s = 0.1 h
s = 0.05 h
0.8
0.6 s = 0.01 h 0.4
0.2
0.0
5
10
15
20
25
30
35
Number of users
Figure 14.3 Probability of clustering uniformly distributed random demands from a given number of users
213
Reliability Governed by the Relative Locations of Random Variables
Probability of collision (clustering of the demands) 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2
User time: s = 0.05 h a=3h
0.1 0.0 20 19 18 17 16 15 14 13 12 11 10 9
8
7
6
5
4
3
2
1
Number of users Figure 14.4 Probability of clustering random demands from different number of users
Decreasing the number of users while keeping the finite time interval constant, as shown in Figure 14.4, leads to a much faster decrease of the probability of collision compared to increasing the time interval a while keeping the number of users constant, as in Figure 14.5. A simple calculation using equation (14.14) shows that for a ¼ 1000 h, the probability of collision is still substantial (approximately 2%). Even after increasing the time interval to a ¼ 10 000 h, there still exists a 0.2% chance of collision. According to equation (14.13), the probability that before each random variable location there will be a distance greater than s is nsn p¼ 1 a
ð14:15Þ
Now the solution of the problem related to the clustering of centrifugal forces due to eccentricity from a given number of disks on a rotating shaft (Figure 14.1b) follows from equations (14.9) and (14.12). Let us present the full 360 angle as a segment with length a ¼ 2. The probability that all n centrifugal forces will be within a critical angle s (s < a/2 ¼ ) is the sum of the probability p1 that the clustering angle will be smaller than s given that it does not include the
214
Reliability and Risk Models Probability of collision (clustering of the demands) 1.0 0.9 n = 20 users
0.8
User time: s = 0.05 h
0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 0
100
200
300
400
500
600
700
800
900
1000
Finite time interval (h)
Figure 14.5 Variation of the probability of clustering random demands with increasing the length of the finite time interval
point 0 (2) and the probability p2 that the clustering angle will be smaller than s given that it includes the point 0 (2). The first probability is given by equation (14.9): p1 ¼ nðs=aÞn1 ðn 1Þðs=aÞn and this is in fact the probability that the smallest angle bounding all centrifugal forces will be smaller than the critical angle s. The probability that clustering within angle s will include the point 0 (2) is equal to the probability of the existence of a gap of length greater than a s between two adjacent centrifugal forces (without specifying between which two) on the segment with length 2. Because only a single gap of length a s can be accommodated between the centrifugal forces, s < a/2 and a s > a/2, the gap can only be in n 1 possible positions relative to the n forces. Using equation (14.12) and the formula for a sum of probabilities of mutually exclusive events, the probability of a gap of length at least a s between two adjacent centrifugal forces, without specifying between which two, is a sn p2 ¼ ðn 1Þ 1 ¼ ðn 1Þ ðs=aÞn a
Reliability Governed by the Relative Locations of Random Variables
215
Finally, the probability that all n forces will cluster within the critical angle s becomes p ¼ p1 þ p2 ¼ nðs=aÞn1 ¼ n
s n1 2
ð14:16Þ
For the probability that all five centrifugal forces will be concentrated within a critical angle s ¼ 3/5 (Figure 14.1b), Equation (14.16) yields p ¼ 5 0.34 ¼ 0.0405. This result is in agreement with the empirical probability 0.0406 obtained from a Monte Carlo simulation. According to equation (14.13), for a given number of random variables in a finite time interval, the probability that at least one of the specified minimum gaps will be violated is p ¼ PðS01 s01 [ S12 s12 [ S23 s23 [ . . . [ Sn1;n sn1;n Þ ¼ 1 ½1 ðs01 þ s12 þ s23 þ þ sn1;n Þ=an where s01 0, s12 0, . . . , sn1,n 0. If in this equation n 1 specified minimum intervals are set to zero, the cumulative distribution F(s) of the gap s between the locations of any two adjacent random variables is obtained: PðS sÞ ¼ FðsÞ ¼ 1 ð1 s=aÞn
ð14:17Þ
which is discussed for example in Blake (1979) and Todinov (2003c). Equation (14.1) is generic and gives the probability of a safe/failure configuration governed by the relative configuration of a given number random variables uniformly distributed in a finite domain. Many intractable reliability problems can be easily solved using equation (14.1) by reducing them to problems with trivial solutions. Indeed, equation (14.1) links the probability of a safe/failure configuration for arbitrary locations of the random variables in their domain with the probability of a safe/failure configuration, where one of the random variables is ‘fixed’ at the boundary of the domain. As a result, the initial complex problem is reduced to a simpler problem, which is in turn can be simplified until problems with trivial solutions are obtained. The significance of equation (14.1) stems also from the fact that potential problems are not restricted to one-dimensional problems only, or to a simple function of the relative distances dij between the locations of random variables. The probability of a safe/failure configuration may also depend on complicated functions y ¼ f(dij) of the relative distances between locations uniformly distributed in a finite domain. As long as the function y depends only on the relative configuration of the random variables, not on their absolute locations, equation (14.1) can be applied.
216
Reliability and Risk Models
Equation (14.12) gives the probability of gaps of specified minimum lengths, between a given number of uniformly distributed random variables in a finite interval (conditional case). According to equation (14.12), the probability of a specified set of minimum intervals between adjacent random variables is equal to the fraction of the non-covered length raised to a power equal to the number of random variables. Interestingly, according to equation (14.12), the probability of existence of any specified set of free intervals between any selected pairs of adjacent random variables is the same, as long as the sum of the intervals is the same. It must be pointed out that equations (14.12) and (14.13) can only be applied in cases where the number of random variables in the finite time interval is known and guaranteed to exist. In this context, equation (14.13) also appears to be useful for making inferences when only the number of random failures following a homogeneous Poisson process is known, but not the actual failure times. This is the case where an inspection at time a has identified a certain number of failures, whose actual times had not been recorded. Indeed, suppose that n random failures following a homogeneous Poisson process have been registered by an inspection at time a. Equation (14.13), can then be used to calculate: the probability p ¼ 1 (1 s/a)n that the first failure had occurred before time s; (ii) the probability p ¼ (1 s/a)n of existence of a continuous failure-free operation interval with length at least s until the first failure; or (iii) the probability p ¼ (1 (s0 þ sn)/a)n that the first failure had occurred after a time s0, and the last failure had occurred before time sn.
(i)
14.5 RELIABILITY GOVERNED BY THE RELATIVE LOCATIONS OF RANDOM VARIABLES FOLLOWING A HOMOGENEOUS POISSON PROCESS IN A FINITE DOMAIN An important, commonly encountered case is where the random variables follow a homogeneous Poisson process in a finite interval. The homogeneous Poisson process and the uniform distribution are closely related. A basic property of the homogeneous Poisson process, well documented in books on probabilistic modelling (Ross, 2000), states: Given that n random variables following a homogeneous Poisson process are present in the finite interval a, the coordinates of the random variables are distributed uniformly over the interval a. For example, when a calibration length a is cut from wire containing flaws following a homogeneous Poisson process and the number of flaws n is known (given) in the length a, the successive coordinates of the n flaws are jointly distributed as the order statistics in a sample of size n from the uniform distribution in the length a. Assume that the random variables (not necessarily identical) follow a homogeneous Poisson process in the finite domain with measure (volume, area or
Reliability Governed by the Relative Locations of Random Variables
217
length). Then, the probability of k random variables in the domain is given by the Poisson distribution ()k e /k!, k ¼ 0, 1, 2; . . . , where is the mean number of variables in the finite domain , and is the number density of the random variables. The safe/failure states depend only on the relative configurations of the random variables in the common domain and not on their absolute locations. According to the total probability theorem, the probability of a safe/failure configuration is a sum of the probabilities of the mutually exclusive events involving all possible numbers of random variables in the finite domain . These mutually exclusive events are as follows: k random variables reside in the finite domain with measure , and the variables form a safe/failure configuration (k ¼ 0, 1, 2, . . .). The probability p(S) of a safe/failure configuration is then given by pðSÞ ¼
1 X ðÞk e k¼0
k!
pðSjkÞ
ð14:18Þ
where p(S|k) is the conditional probability of a safe/failure configuration, given that k random variables reside in the domain with measure . p(S|k) can be determined by considering that if the homogeneous Poisson process is conditioned on the number of random variables, the random variables will be uniformly distributed in the domain. In case of reliability dependent only on the relative configuration of the random variables, according to equation (14.1): " # Z k X k k1 pðSjkÞ ¼ ð1= Þ Ck þ pi ðÞd ð14:19Þ i¼1
where the Ck are integration constants determined from the boundary conditions. If all of the random variables are identical, all probabilities pi are equal (p1 ¼ p2 ¼ . . . ¼ p ), and equation (14.19) transforms into Z pðSjkÞ ¼ ð1= k Þ Ck þ k1 kp ðÞd ð14:20Þ Equation (14.18) is generic and can be applied to calculate the probability of a safe/failure configuration governed by the relative configuration of random variables following a Poisson process in a finite domain. In this case, the number of random variables in the finite domain is itself a random variable.
APPENDIX 14.1

Suppose that the common domain Ω is incremented by a very small amount ΔΩ. Then the random variables may all reside in Ω (event A0), and form the safe/failure configuration with probability p. Alternatively, the ith random variable
may reside in ΔΩ and the rest of the random variables in Ω (events Ai, i = 1, ..., n), forming safe/failure configurations with probabilities pi, depending on which random variable i is in the small domain increment ΔΩ. As a result, the events Ai, i = 0, ..., n form a set of mutually exclusive and exhaustive events partitioning the probability space:

$$P(A_i \cap A_j) = 0 \ \text{ if } i \neq j \qquad \text{and} \qquad \sum_{i=0}^{n} P(A_i) = 1$$
Let B be the event 'a safe/failure configuration of the random variables is formed'. Because event B may occur with any of the events Ai, according to the total probability theorem the probability of event B is
$$P(B) = P(B|A_0)P(A_0) + \sum_{i=1}^{n} P(B|A_i)P(A_i) \qquad (14A.1)$$
where P(B|Ai) denotes the probability of B given Ai. Denoting p = P(B|A0), for a zero domain increment (ΔΩ = 0), P(B) = P(B|A0) = p, because in this case P(A0) = 1. Accordingly, a small increment ΔΩ of the domain will cause a small increment Δp of the probability P(B): P(B) = p + Δp. If a small domain increment is present, $P(A_0) = [\Omega/(\Omega + \Delta\Omega)]^n$ (all n random variables reside in Ω only); $P(A_i) = \Delta\Omega/\Omega$, i = 1, 2, ..., n are the probabilities that the ith random variable will reside in the small domain increment, and $P(B|A_i) = p_i$ are the probabilities of a safe/failure configuration given that the ith random variable resides in ΔΩ. Substituting these values in equation (14A.1) gives
$$p + \Delta p = p\left(\frac{\Omega}{\Omega + \Delta\Omega}\right)^{n} + \frac{\Delta\Omega}{\Omega}\sum_{i=1}^{n} p_i \qquad (14A.2)$$
For a small ΔΩ,

$$\left(\frac{\Omega}{\Omega + \Delta\Omega}\right)^{n} = \left(1 + \frac{\Delta\Omega}{\Omega}\right)^{-n} \approx 1 - \frac{n\,\Delta\Omega}{\Omega}$$
and equation (14A.2) becomes

$$\Delta p = -\frac{p\,n\,\Delta\Omega}{\Omega} + \frac{\Delta\Omega}{\Omega}\sum_{i=1}^{n} p_i \qquad (14A.3)$$
For an infinitesimal domain increment ΔΩ → 0, equation (14A.3) yields the linear differential equation

$$\frac{dp}{d\Omega} + \frac{np}{\Omega} = \frac{1}{\Omega}\sum_{i=1}^{n} p_i \qquad (14A.4)$$

The $p_i$ are easier to calculate, because they correspond to calculating the probability of a safe/failure configuration when one of the random variables is fixed at the domain boundary. Generally, the $p_i$ are functions of the measure Ω of the finite domain. The integrating factor of equation (14A.4) is $\mu = \exp[\int (n/\Omega)\, d\Omega] = \Omega^n$, and the general solution of differential equation (14A.4) is given by equation (14.1).
15 Reliability Dependent on the Existence of Minimum Critical Distances between the Locations of Random Variables in a Finite Interval
15.1 PROBLEMS REQUIRING RELIABILITY MEASURES BASED ON MINIMUM CRITICAL INTERVALS (MCI) AND MINIMUM FAILURE-FREE OPERATING PERIODS (MFFOP)
Reliability often depends on the existence of minimum critical distances between or before the locations of random variables. Commonly encountered examples where reliability depends on the existence of minimum critical distances between the locations of adjacent random variables are presented in Figure 15.1. Consider a finite time interval of length a during which consumers connect to a supply system independently and at random (Figure 15.1a). Suppose that the connection times follow a homogeneous Poisson process and that each connection is associated with a shock (increased demand) to the supply system, which needs a minimum time interval s to recover and stabilise after a connection (demand). The supply system is overloaded if two or more demands follow (cluster) within a critical time interval s (Figure 15.1a).

Figure 15.1 Common examples where reliability depends on the existence of minimum critical distances between the locations of random variables: (a) random demands, (b) random failures, (c) random forces and (d) random flaws in a finite interval a

Here are further application examples:
- Forces acting on a loaded component which fails if two or more forces cluster within a critically small distance (Figure 15.1c).
- Supply systems which accumulate the supplied resource before it is dispatched for consumption (compressed gaseous substances, for example). Suppose that after a failure followed by repair, the system needs a minimum failure-free operating period of specified length to restore the amount of supplied resource to the level existing before the failure. In this case, the probability of disrupting the supply equals the probability of clustering of two or more failures within the critical recovery period (Figure 15.1b).
- Limited available resources combined with repairs consuming a substantial amount of resources. In this case, it is important to guarantee that failures will be separated by distances greater than a specified minimum, so that the resources for repair are never exhausted.
- Failures associated with pollution of the environment (e.g. a leak of chemicals from a valve). If such a failure is followed by another failure before the critical time interval needed for recovery from the pollution has elapsed, irreparable damage to the environment could be inflicted. Such is, for example, the case where a failure associated with a release of chemicals into the sea water is followed by another such failure before the environment has fully recovered from the first failure. The result could be a dangerously
high concentration of chemicals or increased acidity which will destroy marine life.
- Clustering of two or more random flaws within a small critical distance s (Figure 15.1d) dangerously decreases the load-carrying capacity of thin fibres and wires. As a result, a configuration where two or more flaws are closer than a critical distance s cannot be tolerated during loading. Reliability in this case is governed by the probability of clustering of the random flaws.

Often, it is essential that before the first random failure and after each subsequent failure throughout the design life of a system, a minimum operating interval of length s exists with high probability (Figure 15.2a). Here are some examples:
- Specified rolling warranty periods required before all failures (Figure 15.2a) or before specified failures (Figure 15.2b) in a finite time interval. The violation of any of the specified rolling warranty periods is associated with warranty payments.
- Systems where a failure within the critical start-up period (of length s) is associated with severe consequences (Figure 15.2a).
- Failure of a non-repairable device before its pay-off period of length s (Figure 15.2a).
- Deep-water oil and gas production, where the downtimes after failure can be significant (Figure 15.2c) and it is important to guarantee with a large probability failure-free operation before failure and replacement of a component. A failure-free operating interval guaranteed with large probability is also dictated by the high costs of the consequences of failure and of the intervention for repair.

Recently, some industries (e.g. the aerospace industry) have shifted the emphasis towards guaranteeing a maintenance-free operating period (MFOP) (Relf, 1998; Kumar et al., 1999): a period free from intervention for unscheduled maintenance.

Figure 15.2 Common examples where reliability depends on the existence of minimum critical distances before the locations of random variables: (a) specified minimum failure-free operating periods of length s before each random failure in a finite time interval a; (b) specified minimum failure-free operating periods s01, s12, s23 and the actual intervals S01, S12, S23 before specified random failures; (c) random failures followed by downtimes; (d) the time to a single random failure
15.2 THE MCI AND MFFOP RELIABILITY MEASURES

All of the cases discussed in the previous section require a new reliability measure which can broadly be defined as minimum critical intervals (MCI) before or between random variables in a finite time interval, whose existence is guaranteed with a minimum probability pMCI = P(all Sij ≥ sij) (Figure 15.2b) (Todinov, 2004c). The translation of the MCI reliability measure to random failures in a finite time interval is: specified minimum failure-free operating periods (MFFOP) sij before or between random failures in a finite time interval (Figure 15.2), whose existence is guaranteed with minimum probability pMFFOP. Equivalently, the MFFOP reliability measure can be formulated as specified minimum failure-free operating periods sij before or between random failures in a finite time interval and a maximum acceptable probability pf,max = P(at least one Sij < sij) of violating at least one of them (pf,max = 1 − pMFFOP). A violation of an MFFOP interval is present when the actual interval Sij before failure is smaller than the corresponding specified MFFOP sij (Figure 15.2b).

We must point out that for a single specified MFFOP interval, the definition of the MFFOP reliability measure coincides with the definition of reliability associated with this interval (Figure 15.2d): the probability of surviving a specified minimum time interval of length s. While for a non-constant hazard rate the classical reliability measure mean time to failure (MTTF) can be misleading (see Chapter 2), the MFFOP reliability measure remains valid. Yet another reason for the importance of the MFFOP reliability measure is the possibility of linking reliability naturally with the cost of failure (see Chapter 16).

The homogeneous Poisson process is an important model for component/system failures because the useful life of components and systems (the flat region of the bathtub curve) can be approximated well by a constant hazard rate.
15.3 GENERAL EQUATIONS RELATED TO RANDOM VARIABLES FOLLOWING A HOMOGENEOUS POISSON PROCESS IN A FINITE INTERVAL
Random failures following a homogeneous Poisson process in a finite time interval of length a are considered. The number of failures in the finite time interval a is a random variable. It is assumed that after each failure, the component/system is brought by replacement to as-new condition. Failure is understood to be a critical event leading to a system halt or to degradation of the required function below an acceptable level. In this sense, failure requires immediate intervention. Specifying multiple MFFOPs is relevant to the case where it is important that before each random failure in a finite time interval there should exist, with a specified probability, a minimum period s of failure-free operation (free from intervention for unscheduled maintenance). If a rolling warranty period of length at least s is required before each failure during a finite time period of length a (Figure 15.2a), the MFFOP reliability measure consists of a minimum failure-free operating (MFFOP) interval of length s and a minimum probability pMFFOP with which this interval is guaranteed. The maximum number of failure-free gaps of length s which can fit into the finite time interval of length a is r = [a/s], where [a/s] denotes the greatest integer not exceeding the ratio a/s. The probability that before each of k random failures in a finite interval of length a there will be a gap greater than a specified minimum distance s is given by equation (14.15). According to equation (14.18), the probability of existence of a minimum gap of length at least s before each random failure is
$$p(S) \equiv p_{MFFOP} = \sum_{k=0}^{r} \frac{(\lambda a)^k e^{-\lambda a}}{k!}\left(1 - \frac{ks}{a}\right)^k + \sum_{k=r+1}^{\infty} \frac{(\lambda a)^k e^{-\lambda a}}{k!} \times 0 \qquad (15.1)$$

In equation (15.1), $(\lambda a)^k e^{-\lambda a}/k!$ is the probability of exactly k failures in the finite time interval a. According to equation (14.15), $p(S|k) = (1 - ks/a)^k$ is the conditional probability that, given k random failures, before each failure there will be a failure-free gap of length at least s. Expanding the sum in equation (15.1) results in

$$p_{MFFOP} = e^{-\lambda a}\left(1 + \lambda(a - s) + \frac{\lambda^2 (a - 2s)^2}{2!} + \cdots + \frac{\lambda^r (a - rs)^r}{r!}\right) \qquad (15.2)$$
for the probability pMFFOP that before each random failure in the finite interval a there will exist a failure-free interval greater than s.

If only minimum critical intervals (MCI) between adjacent random variables are considered, without specifying a minimum critical distance s01 from the beginning of the finite interval (s01 = 0), the equation

$$p(S) \equiv p_{MCI} = \sum_{k=0}^{r} \frac{(\lambda a)^k e^{-\lambda a}}{k!}\left(1 - \frac{(k-1)s}{a}\right)^k + \sum_{k=r+1}^{\infty} \frac{(\lambda a)^k e^{-\lambda a}}{k!} \times 0 \qquad (15.3)$$

is obtained from the general equation (14.18), where r = [a/s] + 1. In equation (15.3), $(\lambda a)^k e^{-\lambda a}/k!$ is the probability of exactly k random variables in the finite interval. According to equation (14.12), $p(S|k) = (1 - (k-1)s/a)^k$ is the conditional probability that, given k random variables in the finite interval of length a, between any two adjacent random variables there will be a gap of length at least s. Expanding the sum in equation (15.3) results in

$$p_{MCI} = e^{-\lambda a}\left(1 + \lambda a + \frac{\lambda^2 (a - s)^2}{2!} + \cdots + \frac{\lambda^r [a - (r-1)s]^r}{r!}\right) \qquad (15.4)$$

for the probability pMCI that the distance between any two adjacent random variables will be greater than the specified critical distance s. The probability pc that two or more random variables will cluster within the critical distance s is

$$p_c = 1 - e^{-\lambda a}\left(1 + \lambda a + \frac{\lambda^2 (a - s)^2}{2!} + \cdots + \frac{\lambda^r [a - (r-1)s]^r}{r!}\right) \qquad (15.5)$$
Equations (15.2) and (15.4) can be used for setting reliability requirements in cases where the random variables follow a homogeneous Poisson process. For any specified MFFOP interval and a minimum probability pMFFOP with which it must exist, solving the equations with respect to λ yields an upper bound λ* (an envelope) for the hazard rate. The hazard rate envelope guarantees that if the actual hazard rate lies within it (λ ≤ λ*), the specified MFFOP will exist with probability equal to or larger than the specified minimum probability pMFFOP. It is important to point out that solving the exponential equation pMFFOP = exp(−λs), in order to specify the hazard rate guaranteeing a minimum failure-free operating period of length at least s until the first failure, does not mean that this period will exist before each subsequent random failure in the finite time interval. To calculate the necessary hazard rate for an MFFOP before each random failure, equation (15.2) must be used.
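Equations (15.2), (15.4) and (15.5) lend themselves to direct numerical implementation. The following sketch (Python; the function names are illustrative assumptions, not the author's code) evaluates the truncated sums and reproduces the numerical values quoted in the application examples of section 15.4.

```python
from math import exp, factorial

def p_mffop(lam, a, s):
    """Equation (15.2): probability of a failure-free gap of length >= s
    before each random failure in the interval (0, a); r = [a/s]."""
    r = int(a // s)
    return exp(-lam * a) * sum(lam**k * (a - k * s)**k / factorial(k)
                               for k in range(r + 1))

def p_mci(lam, a, s):
    """Equation (15.4): probability that any two adjacent random variables
    are separated by more than the critical distance s; r = [a/s] + 1."""
    r = int(a // s) + 1
    return exp(-lam * a) * sum(lam**k * (a - (k - 1) * s)**k / factorial(k)
                               for k in range(r + 1))

def p_clustering(lam, a, s):
    """Equation (15.5): probability that two or more random variables
    cluster within the critical distance s."""
    return 1.0 - p_mci(lam, a, s)

print(p_mffop(0.00614, a=100.0, s=30.0))     # ~ 0.79 (section 15.4.1)
print(p_clustering(0.0467, a=100.0, s=0.5))  # ~ 0.1  (section 15.4.3)
```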
Given a maximum acceptable probability of clustering pc, an upper bound λ* for the number density of the random variables (a number density envelope) can be determined by solving equation (15.5) numerically with respect to λ. This guarantees that whenever λ ≤ λ* is fulfilled for the number density λ, the specified minimum critical interval of length at least s will exist with minimum probability p = 1 − pc. In other words, the probability of clustering will be smaller than pc. Note that the MFFOP required before each random failure must be selected to be smaller than the smallest start-of-wear-out time of the components which can cause a critical failure of the system.

The general equation (14.18) can also be used to derive the probability p(S) ≡ pMFFOP of a specified set of minimum failure-free operating periods (MFFOP) before specified random failures (not necessarily before all of them). Further discussion related to calculating the probability of existence of the specified intervals, and to determining the hazard rate envelope which guarantees them with the specified minimum probability, can be found in Todinov (2004c).
15.4 APPLICATION EXAMPLES

15.4.1 SETTING RELIABILITY REQUIREMENTS TO GUARANTEE A SPECIFIED MFFOP
Equation (15.2) has been used for setting MFFOP reliability requirements which guarantee a specified minimum failure-free operating interval of length at least s = 30 months before each failure in a finite time interval of length a = 100 months. A maximum acceptable probability pf,max = 0.21 of violating at least one MFFOP interval has been specified. Equation (15.2) was solved numerically with respect to λ, where pMFFOP = 1 − pf,max = 0.79. The numerical routine yielded a value λ* = 0.00614 month⁻¹ for the upper bound of the hazard rate which guarantees an MFFOP of length greater than s = 30 months before each random failure, with probability equal to or greater than pMFFOP = 0.79. This result has been verified by a Monte Carlo simulation: given a hazard rate λ = 0.00614 month⁻¹, the Monte Carlo simulation yielded pMFFOP ≈ 0.79 for the probability that before each random failure there will exist a failure-free interval greater than s = 30 months. If the negative exponential distribution pMFFOP = exp(−λs) were used to calculate the hazard rate guaranteeing a failure-free interval of length greater than s = 30 months before each random failure, it would have yielded the incorrect value λ = −(1/s) ln(pMFFOP) ≈ 0.0079 month⁻¹. The value λ = 0.0079 obtained from the exponential equation guarantees a single MFFOP interval only, until the first failure. Larger discrepancies are
obtained if the length of the specified MFFOP interval before each failure is reduced. Thus, the maximum hazard rate which guarantees with minimum probability pMFFOP = 0.79 an MFFOP of length larger than s = 2 months before each random failure is λ* = 0.031 month⁻¹. The value λ = 0.118 month⁻¹ obtained from solving the exponential equation pMFFOP = exp(−λs) guarantees with probability pMFFOP = 0.79 a single MFFOP interval only, until the first failure. In order to guarantee a failure-free interval larger than s = 2 months before each failure, the hazard rate needs to be decreased to the value λ* = 0.031 month⁻¹. These examples show that if a specified MFFOP interval (a rolling warranty period) is required before each random failure following a homogeneous Poisson process in a finite interval, equation (15.2), not the negative exponential distribution, must be used to calculate the necessary hazard rate.
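The hazard rate upper bounds quoted above can be reproduced by a simple bisection on equation (15.2). The sketch below reuses the p_mffop function from the previous listing; the routine itself is an illustration, not the author's original implementation.

```python
def hazard_rate_envelope(p_target, a, s, lam_max=1.0, eps=1e-9):
    # p_mffop decreases monotonically with the hazard rate, so bisect
    # for the hazard rate at which p_mffop equals the target probability
    left, right = 0.0, lam_max
    while right - left > eps:
        mid = (left + right) / 2.0
        if p_mffop(mid, a, s) < p_target:
            right = mid
        else:
            left = mid
    return 0.5 * (left + right)

print(hazard_rate_envelope(0.79, a=100.0, s=30.0))  # ~ 0.00614 month^-1
print(hazard_rate_envelope(0.79, a=100.0, s=2.0))   # ~ 0.031 month^-1
```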
15.4.2 RELIABILITY ASSURANCE THAT A SPECIFIED MFFOP HAS BEEN MET
Suppose that a number of tests have been performed from which a constant hazard rate has been estimated in the way discussed in Chapter 2. On the basis of this estimate, a reliability assurance is required about the existence of a minimum failure-free operating period before each random failure in a finite time interval of length a. According to the discussion in Chapter 2, the MTTF estimated from data follows a distribution. The probability P(θ1 ≤ θ ≤ θ2) that the true MTTF θ will lie between two specified bounds θ1 and θ2 is given by equation (2.34). Since the probability distribution of the MTTF can always be determined given the total operational time T and the number of failures k, let us assume for the sake of simplicity that f(θ) is the probability distribution of the true MTTF for a given number of failures and total operational time T. Given that the true MTTF is in the infinitesimal interval θ, θ + dθ, the probability that the actual failure-free operating intervals Si,i+1 (i = 0, 1, ...) before all random failures will be larger than the specified minimum failure-free operating interval s is
$$P(\text{all } S_{i,i+1} > s \mid \theta) = e^{-a/\theta}\left(1 + \frac{a-s}{\theta} + \frac{(a-2s)^2}{2!\,\theta^2} + \cdots + \frac{(a-rs)^r}{r!\,\theta^r}\right) \qquad (15.6)$$

where r = [a/s]. The probability of the compound event that the true MTTF will be in the interval θ, θ + dθ and, given this, the actual failure-free operating intervals before the random failures will be greater than the specified MFFOP of length s is equal to $P(\text{all } S_{i,i+1} > s \mid \theta)\, f(\theta)\, d\theta$. According to the total probability
theorem, the probability that all of the actual failure-free operating intervals before failures will be larger than the specified MFFOP of length s becomes

$$P(\text{all } S_{i,i+1} > s) = \int_{\theta_{min}}^{\theta_{max}} P(\text{all } S_{i,i+1} > s \mid \theta)\, f(\theta)\, d\theta$$

or, after the substitution,

$$P(\text{all } S_{i,i+1} > s) = \int_{\theta_{min}}^{\theta_{max}} f(\theta)\, e^{-a/\theta}\left(1 + \frac{a-s}{\theta} + \frac{(a-2s)^2}{2!\,\theta^2} + \cdots + \frac{(a-rs)^r}{r!\,\theta^r}\right) d\theta \qquad (15.7)$$

where θmin and θmax are the lower and upper bounds for θ, for which f(θ) ≡ 0 if θ < θmin or θ > θmax. For a single specified MFFOP of length s before the first random failure only, the conditional probability of an MFFOP of length greater than s is obtained on the basis of the negative exponential distribution:

$$P(S_{01} > s \mid \theta) = \exp(-s/\theta) \qquad (15.8)$$

The probability P(S01 > s) that the actual failure-free operating interval will be larger than the specified MFFOP of length s is

$$P(S_{01} > s) = \int_{\theta_{min}}^{\theta_{max}} f(\theta)\, \exp(-s/\theta)\, d\theta \qquad (15.9)$$
Equations (15.7) and (15.9) can be used for providing reliability assurance that the specified MFFOP interval of length s has been met. Increasing the number of tests alters the distribution f(θ) of the MTTF and thereby the probability P(all Si,i+1 > s). In order to minimise the number of tests needed to provide reliability assurance that the specified MFFOP is guaranteed with minimum probability pMFFOP, the following loop can be constructed: (i) increase stepwise the number of tests; (ii) determine the distribution f(θ) of the true MTTF; (iii) calculate the probability P(all Si,i+1 > s) from equation (15.7) or (15.9). This procedure can be used as a basis for determining the minimum number of tests which guarantees the specified MFFOP with a specified minimum probability pMFFOP ≤ P(all Si,i+1 > s).
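Once f(θ) is specified, the integrals in equations (15.7) and (15.9) can be evaluated by elementary numerical quadrature. The sketch below uses a midpoint rule and, purely for illustration, assumes a uniform distribution of the true MTTF between θmin and θmax; it reuses p_mffop from the listing in section 15.3, with λ = 1/θ.

```python
def p_all_gaps_exceed_s(f_theta, theta_min, theta_max, a, s, n=2000):
    # Equation (15.7): integrate P(all S > s | theta) * f(theta) over
    # (theta_min, theta_max); the conditional probability (15.6) equals
    # equation (15.2) evaluated at hazard rate 1/theta
    h = (theta_max - theta_min) / n
    return sum(p_mffop(1.0 / (theta_min + (i + 0.5) * h), a, s)
               * f_theta(theta_min + (i + 0.5) * h) * h
               for i in range(n))

# Illustrative assumption: true MTTF uniform on (100, 200) months
theta_min, theta_max = 100.0, 200.0
f_uniform = lambda theta: 1.0 / (theta_max - theta_min)
print(p_all_gaps_exceed_s(f_uniform, theta_min, theta_max, a=100.0, s=30.0))
```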
15.4.3 SPECIFYING A NUMBER DENSITY ENVELOPE TO GUARANTEE A PROBABILITY OF CLUSTERING BELOW A MAXIMUM ACCEPTABLE LEVEL
In an illustrative example, the number density envelope of random demands will be determined which guarantees no clustering of demands within a critical interval of 0.5 h, necessary for a supply system to recover. Demands follow a homogeneous Poisson process in a finite time interval of 100 h, and if two or more demands follow within the critical interval of 0.5 h, the system is overloaded. The maximum acceptable probability of overloading the supply system has been specified to be pc = 0.1. An upper bound λ* = 0.0467 h⁻¹ for the number density of the demands was obtained by solving equation (15.4) with respect to λ, where pMCI = 1 − pc = 0.9. Whenever λ ≤ λ* = 0.0467 is fulfilled for the number density λ of the demands, the probability of overloading the supply system is smaller than 0.1. Monte Carlo simulations (1 000 000 trials) of a homogeneous Poisson process with density λ = 0.0467 yielded ≈ 0.1 for the probability of clustering of two or more demands within the critical interval of 0.5 h, which confirms the result from equation (15.4). Thus, for approximately five demands in 100 h, the probability of clustering within an interval of 0.5 h is substantial (≈ 0.1). Even for the small mean number density of two demands per 100 h, the calculation from equation (15.4) shows that there is still approximately a 2% chance of clustering two or more demands within 0.5 h.

Figure 15.3 gives the dependence of the probability of clustering of random demands (overloading the supply system) within a critical interval of s = 1 h (a = 100 h) on the number density of the demands. Figure 15.3 shows that for a mean number of 14 demands per 100 h there is already an 80% probability of clustering within a critical interval of 1 h. Clearly, the probability of clustering is substantial and should always be taken into consideration in risk assessments.

Figure 15.3 Probability of clustering within a critically small interval of 1 h, for random demands following a homogeneous Poisson process in a finite time interval of 100 h

Equations (15.2)–(15.5) are relevant to a wide class of reliability problems. Equation (15.4) can also be used to verify whether clustering of random failures is a 'random fluctuation' or not. Equation (15.5) can also be used to determine the probability of clustering of two or more flaws in fibres or wires within a critical distance s, given that the flaw number density is λ. It is assumed that the locations of the flaws follow a homogeneous Poisson process in the finite length a. Solving equation (15.5) with respect to the flaw number density defines an upper bound for the flaw number density which guarantees, with a specified minimum probability, no clustering of flaws within a small critical distance. This is important in cases where the probability of failure during loading is strongly correlated with the probability of clustering of flaws. Solving equation (15.4) with respect to the flaw number density in fact specifies requirements regarding the maximum acceptable flaw content of the material, in order to reduce the probability of early-life failures caused by clustering of flaws.

Equation (15.5) can also be used to determine the probability of collision of demands from users of the same equipment. Unlike the problem solved in Chapter 14, where the number of users was fixed, here the number of users is a random variable. If the users' demands follow a homogeneous Poisson process in the finite time interval a, solving equation (15.4) with respect to λ results in an upper bound λ* for the number density of users. This guarantees that if the actual number density of users lies within the calculated number density envelope, the probability of a collision between any two users will be smaller than the maximum acceptable value pc,max = 1 − pMCI.

The proposed models and algorithms form the core of a methodology for reliability analysis and setting reliability requirements based on minimum critical distances between random variables in a finite interval.
15.5 SETTING RELIABILITY REQUIREMENTS TO GUARANTEE A MINIMUM FAILURE-FREE OPERATING PERIOD BEFORE FAILURES FOLLOWED BY A DOWNTIME
Random failures are usually followed by downtimes which, in some cases, can be significant. For subsea oil and gas production for example, the downtimes include the time for locating the failure, the time for mobilisation of resources
and the time needed for intervention and repair/replacement. Downtimes can vary from several days to several months. Suppose that the distribution of downtimes is known, and a minimum probability pMFFOP has been specified with which a minimum failure-free operating period of length s before each random failure is guaranteed. A hazard rate envelope can then be determined which guarantees that if the system hazard rate lies within the envelope, the probability of existence of the minimum failure-free operating interval will be greater than pMFFOP.

In the Monte Carlo simulation model of random failures with downtimes presented here, it is assumed that the random failures are characterised by a constant hazard rate; hence the failure times are produced by sampling from the exponential distribution. The downtimes are simulated by sampling from the distribution representing them (an empirical, uniform or log-normal distribution, etc.). The algorithm consists of bracketing the hazard rate and subsequently checking the lengths of the failure-free intervals before random failures until the target probability pMFFOP is attained. The initial interval for the hazard rate is (λa = 0, λb = λmax). The probability p of the specified MFFOP interval associated with the hazard rate λ1 = (λa + λb)/2 is calculated first, by dividing the number of trials in which the specified MFFOP interval exists before each random failure by the total number of trials. If the calculated probability p associated with the hazard rate λ1 is smaller than or equal to the specified target pMFFOP (p ≤ pMFFOP), the initial hazard rate interval (λa, λb) is truncated to the interval (λa, λ1), whose length is half as large. A new probability p is then calculated for the hazard rate λ2 = (λa + λ1)/2, which is in the middle of the truncated hazard rate interval (λa, λ1) (Figure 15.4). If the probability p associated with the value λ1 is greater than the specified target (p > pMFFOP), the initial interval (λa, λb) is truncated to (λ1, λb) and a new probability p is calculated for the hazard rate λ2 = (λ1 + λb)/2, in the middle of the truncated interval (Figure 15.4). Truncating intervals and calculating the probability for the hazard rate in the middle of each interval continues until the length of the last truncated interval becomes smaller than the required precision ε. Since at each step the current interval for the hazard rate is halved, the total number of calculations is log2[λmax/ε]. The algorithm in pseudocode is as follows:
Figure 15.4 An illustration of the algorithm for guaranteeing with a minimum probability pMFFOP an MFFOP of specified length
function Calc_Pmffop (lambda)
{ /* returns the probability with which the specified MFFOP of length s exists
     for a hazard rate lambda and the specified distribution of downtimes */ }

left = lambda_min; right = lambda_max;
while (|right - left| > eps) do
{
  mid = (left + right) / 2;
  p_mid = Calc_Pmffop (mid);
  if (p_mid < pMFFOP) then right = mid;
  else left = mid;
}

The variable eps contains the desired precision. At the end of the calculations, the hazard rate which guarantees the specified MFFOP of length s with minimum probability pMFFOP remains in the variable mid. The function Calc_Pmffop() returns the probability with which the specified MFFOP of length s exists before each random failure, given the specified hazard rate and the distribution of downtimes. Its algorithm is given in the next section.
15.5.1 A MONTE CARLO SIMULATION ALGORITHM FOR EVALUATING THE PROBABILITY OF EXISTENCE OF AN MFFOP OF SPECIFIED LENGTH
The algorithm of the routine Calc_Pmffop() in pseudocode is as follows:

function Calc_Pmffop (lambda)
{
  function Lognormal_down_time ()
  { /* Generates a log-normally distributed downtime with specified parameters */ }
  function Exponential_uptime ()
  { /* Generates an exponentially distributed uptime with the specified hazard rate */ }

  clustering = 0; /* Initialising the counter of the number of trials
                     in which the MFFOP of length s has been violated */
  for i = 1 to Number_of_trials do
  {
    t_cumul = 0; /* where the subsequent uptimes and downtimes are accumulated */
    Repeat
      /* Generate an exponential uptime */
      exp_time = Exponential_uptime();
      t_cumul = t_cumul + exp_time;
      if (t_cumul > a) then break;
      else if (exp_time < s) then { clustering = clustering + 1; break; }
      /* Generate a log-normal downtime */
      Cur_downtime = Lognormal_down_time();
      t_cumul = t_cumul + Cur_downtime;
      if (t_cumul > a) then break;
    Until 'break' is executed anywhere in the loop;
  }
  Pmffop = 1 - clustering / Number_of_trials; /* Probability of existence of the
                                                 MFFOP of length s before each failure */
  return Pmffop;
}

Central to the routine is the statement

else if (exp_time < s) then { clustering = clustering + 1; break; }

where a check is performed as to whether the currently generated time to the next failure is smaller than the MFFOP of length s. If a violation of the specified MFFOP is present, the counter 'clustering' is incremented and the Repeat–Until loop is exited immediately, continuing with the next Monte Carlo trial. The probability of violating the specified MFFOP interval of length s is calculated by dividing the variable 'clustering', which contains the number of Monte Carlo trials in which a violation of the specified MFFOP has been registered, by the total number of Monte Carlo trials. Subtracting this from unity gives the probability Pmffop of the specified MFFOP interval.

15.6 A NUMERICAL EXAMPLE

This example involves log-normally distributed downtimes. It is assumed that the natural logarithms of the downtimes (in days) are normally distributed with mean 3.5 and standard deviation 0.5. For the length of the specified time interval and the length of the specified MFFOP, a = 60 months and s = 6 months have been assumed, respectively. A minimum probability pMFFOP = 0.90 of existence of the MFFOP of 6 months was also specified. Using
the algorithm in section 15.5.1, the hazard rate envelope which guarantees this probability was determined to be λs ≈ 0.0115 month⁻¹. In other words, whenever the system hazard rate is smaller than λs = 0.0115 month⁻¹, the failure-free interval before each random failure will be greater than the specified MFFOP s = 6 months (Figure 15.2a) with probability greater than or equal to pMFFOP = 0.90.
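A runnable counterpart of the bisection loop of section 15.5 combined with the Monte Carlo routine of section 15.5.1 might look as follows in Python. This is a sketch, not the author's original program; the downtimes are converted from days to months at an assumed 30 days per month, and the number of trials may need to be increased for a stable estimate.

```python
import math
import random

def calc_pmffop(lam, a, s, trials=20_000):
    """Monte Carlo estimate of the probability that a failure-free interval
    of length >= s precedes every failure in (0, a); exponential uptimes
    with hazard rate lam, log-normal downtimes (ln days ~ N(3.5, 0.5))."""
    violations = 0
    for _ in range(trials):
        t = 0.0
        while True:
            up = random.expovariate(lam)            # exponential uptime
            t += up
            if t > a:
                break                               # no further failure in (0, a)
            if up < s:                              # specified MFFOP violated
                violations += 1
                break
            t += math.exp(random.gauss(3.5, 0.5)) / 30.0  # downtime in months
            if t > a:
                break
    return 1.0 - violations / trials

def hazard_envelope(p_target, a, s, lam_max=0.1, eps=1e-4):
    left, right = 0.0, lam_max
    while right - left > eps:
        mid = (left + right) / 2.0
        if calc_pmffop(mid, a, s) < p_target:
            right = mid
        else:
            left = mid
    return mid

print(hazard_envelope(0.90, a=60.0, s=6.0))  # ~ 0.0115 month^-1 (cf. text)
```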
15.7 SETTING RELIABILITY REQUIREMENTS TO GUARANTEE AN AVAILABILITY TARGET
The current practice for setting reliability requirements in subsea oil and gas production is based on specifying a high availability target, because availability provides a direct link with the cash flow. Given a particular distribution of the downtimes and a specified minimum availability, reliability requirements can be set to guarantee that the availability will be greater than the specified target. Again, the algorithm is based on bracketing the hazard rate and subsequently calculating the availability until the target availability is attained with the desired precision. It is assumed that failures follow a homogeneous Poisson process and that after each failure the system is brought by repair/replacement to as-new condition. The initial interval for the hazard rate is (λa = 0, λb = λmax), and the availability associated with the middle of the interval, λ1 = (λa + λb)/2, is calculated first. If the availability A associated with the value λ1 is smaller than the specified target AT (A < AT), the hazard rate interval is truncated to (λa, λ1) and a new availability value is calculated at λ2 = (λa + λ1)/2 (Figure 15.5). If the availability associated with the value λ1 is greater than the specified target (A > AT), the hazard rate interval is truncated to (λ1, λb) and the new availability value is calculated at λ2 = (λ1 + λb)/2 (Figure 15.5). The calculations, which are similar to finding a root of a nonlinear equation using the bisection method, continue until the final truncated interval containing the last calculated approximation λn of the hazard rate becomes smaller than the desired precision ε. If N = [λmax/ε] denotes the integer part of the ratio λmax/ε, the desired precision will be attained after log2 N calculations. For example, even for [λmax/ε] = 2^100, the solution is attained after only 100 calculations. The algorithm in pseudocode is as follows:
Figure 15.5 An illustration of the algorithm for guaranteeing a minimum availability AT
function Simulate_availability (lambda)
{ /* returns the average availability associated with hazard rate lambda */ }

left = lambda_min; right = lambda_max;
while (|right - left| > eps) do
{
  mid = (left + right) / 2;
  a_mid = Simulate_availability (mid);
  if (a_mid < AT) then right = mid;
  else left = mid;
}

The variable eps contains the desired precision. At the end of the calculations, the hazard rate which guarantees the specified availability target AT remains in the variable 'mid'. The function Simulate_availability(), whose algorithm is presented in the next section, returns the average availability over the finite time interval for the specified distribution of the downtimes and the hazard rate λ. The failure times are produced by a Monte Carlo simulation involving sampling from the negative exponential distribution, while the downtimes are produced by sampling from the distribution representing the downtimes (empirical, uniform, log-normal, etc.).
15.7.1 MONTE CARLO EVALUATION OF THE AVERAGE AVAILABILITY ASSOCIATED WITH A FINITE TIME INTERVAL
Here is the computer simulation algorithm, in pseudocode, for determining the average availability in a finite time interval:

Availability[Number_of_trials]; /* Array containing the availability values
                                   calculated from each Monte Carlo trial */
function Lognormal_down_time ()
{ /* Generates a log-normally distributed downtime with specified parameters */ }
function Exponential_uptime ()
{ /* Generates an exponentially distributed uptime with a specified hazard rate */ }

function Simulate_availability ()
{
  Sum = 0;
  For i = 1 to Number_of_trials do
  {
    a_remaining = a; Total_uptime = 0;
    Repeat
      /* Generate an exponential uptime */
      Cur_uptime = Exponential_uptime();
      If (Cur_uptime > a_remaining) then
        { Total_uptime = Total_uptime + a_remaining; break; }
      Total_uptime = Total_uptime + Cur_uptime;
      a_remaining = a_remaining - Cur_uptime;
      /* Generate a log-normal downtime */
      Cur_downtime = Lognormal_down_time();
      If (Cur_downtime > a_remaining) then break;
      a_remaining = a_remaining - Cur_downtime;
    Until 'break' is executed anywhere in the loop;
    Availability[i] = Total_uptime / a;
    Sum = Sum + Availability[i];
  }
  Mean_availability = Sum / Number_of_trials;
  return Mean_availability;
}

The algorithm consists of alternately generating an uptime by sampling the negative exponential distribution, followed by a downtime obtained by sampling the log-normal distribution. This process continues until the finite time interval of length a is exceeded. The total operating time is accumulated in the variable 'Total_uptime'. In the indexed variable Availability[i], the availability characterising the ith Monte Carlo trial is obtained by dividing the total uptime by the length a of the finite time interval. The values stored in the Availability array can subsequently be used to plot the empirical distribution of the availability. Averaging the values stored in the Availability array gives the mean availability associated with the finite time interval a.
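A Python counterpart of the availability routine might look as follows (again a sketch, with the same assumed 30 days per month conversion for the log-normal downtimes).

```python
import math
import random

def simulate_availability(lam, a, trials=20_000):
    """Monte Carlo estimate of the mean availability over (0, a):
    alternate exponential uptimes and log-normal downtimes until the
    finite time interval of length a is exhausted."""
    total = 0.0
    for _ in range(trials):
        remaining, uptime = a, 0.0
        while True:
            up = random.expovariate(lam)
            if up > remaining:                     # uptime runs past the end
                uptime += remaining
                break
            uptime += up
            remaining -= up
            down = math.exp(random.gauss(3.5, 0.5)) / 30.0  # months
            if down > remaining:                   # downtime runs past the end
                break
            remaining -= down
        total += uptime / a
    return total / trials

print(simulate_availability(0.043, a=60.0))  # ~ 0.95 (cf. section 15.7.2)
```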
15.7.2 A NUMERICAL EXAMPLE
This numerical example is also based on log-normally distributed downtimes. As in the example in section 15.6, it is assumed that the natural logarithms of the downtimes (in days) are normally distributed with mean 3.5 and standard deviation 0.5. The length of the specified time interval is assumed to be a = 60 months. An average availability target AT = 95% has been specified. A hazard rate λA(0.95) ≈ 0.043 month⁻¹, which guarantees the availability target AT = 95%,
was determined by a numerical procedure implemented in C++. In other words, whenever the system hazard rate is smaller than λA(0.95), the average system availability A is greater than the target value AT = 95%. Surprisingly, the probability of premature failure before time a = 60 months for a constant hazard rate λA(0.95) ≈ 0.043 month⁻¹ is pf = 1 − exp(−0.043 × 60) ≈ 0.92. This result means that even for such a high availability target, it is highly likely that there will be a failure within 60 months. Consequently, setting a high availability target does not exclude a substantial probability of premature failure. In subsea deep-water production, for example, the cost of intervention to fix failures is high; therefore the risk (the expected losses) associated with failure is also high.
16 Reliability Analysis and Setting Reliability Requirements Based on the Cost of Failure
Critical failures in some industries (e.g. in the nuclear or deep-water oil and gas industry) can have disastrous environmental and health consequences. Such failures entail loss of production for very long periods of time and extremely high costs of the intervention for repair. Consequently, for deep-water oil and gas production, which is characterised by a high cost of failure, setting quantitative reliability requirements should be driven by the cost of failure. According to a commonly accepted definition, the risk of failure is a product of the probability of failure and the cost of failure (Henley and Kumamoto, 1981). The traditional approach to the reliability analysis of engineering systems, however, does not allow the setting of reliability requirements which limit the risk of failure below a maximum acceptable level, because it is disconnected from the cost of failure. Incorporating the technical as well as the social components of failures is also vital in bridging the gap between the traditional engineering approach to setting reliability targets and society's perception of risk (RAE, 2002).

Let us consider two typical problems facing the mechanical engineer-designer. A component is part of a control device. Tests of a number of components revealed that the fracture stress of the component is well described by the two-parameter Weibull model F(σ) = 1 − exp[−(σ/η)^m], where the numerical values of the parameters η and m are known. Fracture of the component results in cost C. What should be the maximum working stress for
the component which guarantees that the risk of premature failure is below Kmax? Another problem of similar nature is the following. Suppose that a failurefree service for a time interval of length at least a is required from an electrical connector. A premature failure of the connector (before time a) entails a loss of an expensive processing unit and the warranty costs C are significant. Because the cost of failure C is significant, the designer wants to limit the expected losses from warranty payments within the warranty budget Kmax per electrical connector. What should be the MTTF characterising the connector so that the expected losses from premature failure remain within the warranty budget? In the following sections a new generation of models will be presented which form the basis of the cost-of-failure reliability analysis and setting reliability requirements. Two main concepts related to the expected losses will be used: (i) a risk of premature failure before a specified time and (ii) expected losses from multiple failures in a specified time interval. The concept ‘risk of premature failure’ applies to both non-repairable and repairable systems, while the concept ‘expected losses from failures’ applies only to repairable systems.
16.1 GENERAL MODELS FOR SETTING COST-OF-FAILURE-DRIVEN RELIABILITY REQUIREMENTS FOR NON-REPAIRABLE COMPONENTS/SYSTEMS

16.1.1 A CONSTANT COST OF FAILURE
The outlined problems can be easily solved using a cost-of-failure approach to reliability analysis and setting reliability requirements. Indeed, the risk of failure K is given by the relationship (Henley and Kumamoto, 1981)

$$K = p_f\, C \qquad (16.1)$$

where pf is the probability of failure and C is a constant cost of failure, independent of the time to failure or the failure mode. The cost of failure C, for example, could be a warranty payment if failure occurs before time a. Assume that Kmax = pf,max C is the maximum acceptable risk of failure, where pf,max is the maximum acceptable probability of failure. This equation can also be presented as

$$p_{f,max} = K_{max}/C \qquad (16.2)$$

If equation (16.1) is rewritten as pf = K/C, then limiting the risk of failure K below Kmax is equivalent to limiting the probability of failure pf below the maximum acceptable level pf,max (pf ≤ pf,max). This leads to the cost-of-failure
concept for setting reliability requirements limiting the risk of premature failure (Todinov, 2003b, 2004b):

$$p_f \leq K_{max}/C \qquad (16.3)$$

As a result, whenever pf ≤ pf,max = Kmax/C, the relationship K ≤ Kmax is fulfilled. Denoting the ratio rmax = Kmax/C (0 ≤ rmax ≤ 1) as the maximum acceptable fraction of the cost of failure, the cost-of-failure concept for setting reliability requirements limiting the risk of failure becomes

$$p_f \leq r_{max} \qquad (16.4)$$

The maximum acceptable risk of premature failure Kmax can conveniently be assumed to be the maximum budget for warranty payments due to a failure. Using equation (16.4), reliability requirements can be set without knowing the absolute value of the cost of failure. In this case, the maximum acceptable probability of premature failure can be interpreted conveniently as the maximum acceptable percentage of the cost of failure, whatever the cost of failure may be. The ratio can also be interpreted as the maximum fraction of the cost of failure which the customer is prepared to pay.

Now the solution of the first problem from the introduction can be outlined. The maximum acceptable risk Kmax, the cost of failure C and the probability of failure pf ≡ F(σ) = 1 − exp[−(σ/η)^m] are substituted in inequality (16.3). Solving inequality (16.3) with respect to σ yields

$$\sigma_{max} = \eta\left(-\ln[1 - K_{max}/C]\right)^{1/m} \qquad (16.5)$$

for the upper bound of the working stress of the component. A working stress in the range 0 ≤ σ ≤ σmax limits the risk of failure below the maximum acceptable level Kmax.

An important application of relationship (16.3) can be obtained immediately for a system characterised by a constant hazard rate λ. Such is, for example, a system composed of components logically arranged in series and characterised by constant hazard rates λ1, λ2, ..., λn. Because the system fails whenever any of the components fails, its hazard rate is λ = λ1 + λ2 + ··· + λn. For a maximum acceptable risk Kmax of premature failure (related to a finite time interval of length a) and a constant hazard rate λ, inequality (16.3) becomes pf(λ) ≡ 1 − exp(−λa) ≤ Kmax/C, from which

$$\lambda^* = -(1/a)\ln[1 - K_{max}/C] \qquad (16.6)$$
is obtained for the system hazard rate envelope which limits the risk of failure below Kmax (Todinov, 2003b, 2004b). In other words, whenever the system hazard rate lies within the as-determined envelope (λ ≤ λ*), the risk of failure before time a remains below the maximum acceptable level Kmax (K ≤ Kmax). Equation (16.6) links reliability with the cost of failure and provides a solution to the second problem posed at the beginning of this chapter. An electrical connector with MTTF greater than or equal to MTTF* = 1/λ* guarantees that the risk of a warranty payment C remains below Kmax. The hazard rate envelope λ* = −(1/a) ln[1 − rmax], where rmax = Kmax/C, guarantees that the probability of premature failure will be smaller than the specified maximum acceptable fraction rmax of the cost of premature failure, whatever this cost may be. Substituting, for example, a = 2 years and rmax = Kmax/C = 0.1 yields a hazard rate envelope λ* ≈ 0.05 year⁻¹. An electrical connector with a hazard rate smaller than 0.05 year⁻¹ limits the risk of premature failure before 2 years below 10% of the cost of failure. Equation (16.6) provides the lower reliability bound (the minimum necessary reliability), which does not depend on whether the component is part of a system or not. In this respect, it is important to emphasise that identical types of components whose failures are associated with different costs C1 and C2 are characterised by different upper bounds of the hazard rates. Indeed, for the same level of the maximum acceptable risk Kmax, from equation (16.2) it follows that the ratio of the maximum acceptable probabilities of premature failure is pf,max,1/pf,max,2 = C2/C1.
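Equation (16.6) can be evaluated directly; the short sketch below (illustrative Python) reproduces the electrical connector example (the exact value, ≈ 0.0527 year⁻¹, is rounded to 0.05 in the text).

```python
from math import exp, log

def hazard_rate_envelope_for_risk(a, r_max):
    # Equation (16.6): lam* = -(1/a) ln(1 - Kmax/C), with r_max = Kmax/C
    return -log(1.0 - r_max) / a

lam_star = hazard_rate_envelope_for_risk(a=2.0, r_max=0.10)
print(lam_star)                    # ~ 0.0527 year^-1
print(1.0 - exp(-lam_star * 2.0))  # probability of premature failure: 0.10
```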
16.2 COST OF FAILURE FROM MUTUALLY EXCLUSIVE FAILURE MODES

The basic cost-of-failure concept for setting reliability requirements given by equation (16.3) can also be used to determine the hazard rates of the components of a system which guarantee a risk of premature failure below a maximum acceptable level. Indeed, because the probability of premature failure pf and the cost of failure are functions of the hazard rate vector λ = {λ1, λ2, ..., λn}, whose components are the hazard rates of the components building the system, inequality (16.3) becomes

$$p_f(\lambda_1, \lambda_2, \ldots, \lambda_n)\, C(\lambda_1, \ldots, \lambda_n) \leq K_{max} \qquad (16.7)$$

where C(λ1, ..., λn) is the cost of failure before a specified time interval of length a (Figure 16.1). From inequality (16.7), a domain D (an envelope) for the hazard rate vector can be determined which guarantees that if λ ∈ D, inequality (16.7) will be fulfilled.
Figure 16.1 The cost of premature failure C if failure occurs before time a
Usually, the cost of failure depends on the failure mode and varies significantly depending on which failure mode has been triggered or which component has failed. For example, the cost of failure if a pipeline has been blocked is different from the cost of failure if corrosion of the pipeline has led to a loss of containment and a release of chemicals into the environment. Suppose that M mutually exclusive failure modes are present and p_{k|f} are the conditional probabilities (the relative frequencies) of these failure modes, given that failure has occurred (k = 1, ..., M). Because, given failure, there is no alternative to one of the failure modes having been activated, the conditional probabilities satisfy $\sum_{k=1}^{M} p_{k|f} = 1$. A single failure mode is activated each time before failure occurs.

A failure mode related to a loss of containment, for example a leak from a valve or a seal, can be associated with different magnitudes of the cost, depending on the severity of the leak. Accordingly, the cost Xk associated with the kth failure mode can be modelled with a distribution function Ck(x|f) ≡ P(Xk ≤ x|f). The probability C(x) ≡ P(X ≤ x) that the cost of failure X will be smaller than a specified value x can be presented as the probability of a union of the following mutually exclusive and exhaustive events: (i) the first failure mode is activated first and the cost of failure X is smaller than x (the probability of this compound event is p1|f C1(x|f)); (ii) the second failure mode is activated first and the cost of failure X is smaller than x (probability p2|f C2(x|f)); ...; (M) the Mth failure mode is activated first and the cost of failure X is smaller than x (probability pM|f CM(x|f)). The probability of a union of mutually exclusive events equals the sum of the probabilities of the separate events. As a result, the distribution of the cost of failure is described by the mixture distribution

$$C(x) = \sum_{k=1}^{M} p_{k|f}\, C_k(x|f) \qquad (16.8)$$

The probability distribution C(x) of the cost of failure is a mixture of the probability distribution functions Ck(x|f) characterising the individual failure modes, scaled by the probabilities p_{k|f} with which they occur first, given failure. The probabilities p_{k|f} can be determined from the hazard rates associated with the separate, statistically independent failure modes. Indeed, suppose that the
individual failure modes are characterised by constant hazard rates λ1, λ2, ..., λM, where M is the total number of failure modes. The times to failure associated with the individual failure modes then follow negative exponential distributions with densities λi exp(−λi t), i = 1, ..., M. Using the probabilistic argument that follows, the probability that, given failure, the first failure mode has initiated it can be determined. Let us calculate the probability that the first failure mode initiates failure in the infinitesimal time interval (t, t + dt). This probability can be expressed as a product of the probabilities of the following statistically independent events: (i) the first failure mode initiates failure in the interval (t, t + dt), the probability of which is λ1 e^{−λ1 t} dt, and (ii) the other failure modes have not initiated failure before time t, the probability of which is exp(−λ2 t) ··· exp(−λM t). In other words, the probability that the first failure mode initiates failure in the time interval (t, t + dt) is λ1 exp[−(λ1 + λ2 + ··· + λM)t] dt. According to the total probability theorem, the total probability p1|f that the first failure mode initiates failure is
Z
1
1 exp½ð1 þ 2 þ þ M Þdt ¼
0
1 1 þ 2 þ þ M
As a result, pkjf ¼ k /(1 þ 2 þ þ M ) and equation (16.8) becomes
CðxÞ ¼
M X
M X
k Ck ðxjf Þ þ þ þ M 1 2 k ¼1
pkjf ¼
k ¼1
ð16:9Þ
M X
k ¼1 þ þ þ M 2 k ¼1 1
The expected cost of failure $\bar{C}$ from all failure modes is given by

$$\bar{C} = \sum_{k=1}^{M} \frac{\lambda_k}{\lambda_1 + \cdots + \lambda_M}\, \bar{C}_k$$

where $\bar{C}_k$ are the expected costs characterising the individual failure modes. The last equation gives the mean of a mixture distribution. Considering that the probability of failure before time a is given by

$$p_f = 1 - \exp[-(\lambda_1 + \lambda_2 + \cdots + \lambda_M)a]$$
substituting in the risk equation (16.1) results in

$$K = \left(1 - \exp[-(\lambda_1 + \cdots + \lambda_M)a]\right)\sum_{k=1}^{M} \frac{\lambda_k}{\lambda_1 + \cdots + \lambda_M}\, \bar{C}_k \qquad (16.10)$$
for the risk of premature failure of a non-repairable system with multiple failure modes. Suppose that a system contains a number of critical components. If failure of any of these critical components leads to a system failure, the risk of premature failure can be modelled by equation (16.10). The equation is also valid for a system composed of components logically arranged in series, where failure of any component causes system failure, and is part of the basis of a new technology for cost-of-failure reliability analysis and setting reliability requirements.

The use of equation (16.10) can be illustrated by a numerical example involving a device (Figure 16.2a) composed of (i) a power block (PB) characterised by a hazard rate λ1 = 0.0043 month⁻¹ and cost of replacement C1 = 350; (ii) an electronic control module (ECM) characterised by a hazard rate λ2 = 0.0014 month⁻¹ and cost of replacement C2 = 850; and (iii) a mechanical device (MD) characterised by a hazard rate λ3 = 0.016 month⁻¹ and cost of replacement C3 = 2300. The logical arrangement in series (Figure 16.2a) means that failure of any block causes a system failure. For a warranty period of a = 12 months, substituting the numerical values in equation (16.10) yields
$$K = \left(1 - \exp[-(\lambda_1 + \lambda_2 + \lambda_3)a]\right)\sum_{i=1}^{3} \frac{\lambda_i}{\lambda_1 + \lambda_2 + \lambda_3}\, C_i \approx 417$$

for the risk of premature failure.
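The device example follows directly from equation (16.10). The sketch below (illustrative Python, not the author's code) reproduces the value K ≈ 417.

```python
from math import exp

def risk_multiple_failure_modes(lams, costs, a):
    """Equation (16.10): risk of premature failure before time a from
    statistically independent failure modes with constant hazard rates
    lams and expected costs of failure costs."""
    lam_total = sum(lams)
    p_fail = 1.0 - exp(-lam_total * a)
    expected_cost = sum(l * c for l, c in zip(lams, costs)) / lam_total
    return p_fail * expected_cost

# Power block, electronic control module and mechanical device
print(risk_multiple_failure_modes([0.0043, 0.0014, 0.016],
                                  [350.0, 850.0, 2300.0], a=12.0))  # ~ 417
```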
Figure 16.2 (a) A device composed of a power block (PB), an electronic control module (ECM) and a mechanical device (MD), logically arranged in series. (b) Two components logically arranged in series; the cost of failure C1 associated with the first component is much greater than the cost of failure C2 associated with the second component
In order to limit the risk of premature failure below a maximum acceptable level Kmax, the hazard rates of the components must satisfy

$$K = \left(1 - \exp[-(\lambda_1 + \cdots + \lambda_M)a]\right)\left(\frac{\lambda_1}{\lambda_1 + \cdots + \lambda_M}\, \bar{C}_1 + \cdots + \frac{\lambda_M}{\lambda_1 + \cdots + \lambda_M}\, \bar{C}_M\right) \leq K_{max} \qquad (16.11)$$

Unlike the case of a single component/system, for which the equality K = Kmax has a unique solution given by equation (16.6), for more than one component the equality K = Kmax in equation (16.11) is satisfied by an infinite number of combinations of values λ1, ..., λM of the hazard rates. In cases where the cost of failure is proportional to the time for intervention and repair of the failed component, Ci in equation (16.11) can be the time for intervention and repair of the ith failed component.

The variance VC of the cost of failure is given by the formula for the variance of a distribution mixture (see Chapter 3):

$$V_C = \sum_{k=1}^{M} p_{k|f}\, V_k + \sum_{i<j} p_{i|f}\, p_{j|f}\, (\bar{C}_i - \bar{C}_j)^2 \qquad (16.12)$$

where Vk are the variances of the cost of failure associated with the kth failure mode and

$$p_{k|f} = \frac{\lambda_k}{\lambda_1 + \lambda_2 + \cdots + \lambda_M}, \qquad k = 1, 2, \ldots, M$$

For three failure modes, for example, equation (16.12) becomes

$$V_C = p_{1|f} V_1 + p_{2|f} V_2 + p_{3|f} V_3 + p_{1|f}\, p_{2|f} (\bar{C}_1 - \bar{C}_2)^2 + p_{2|f}\, p_{3|f} (\bar{C}_2 - \bar{C}_3)^2 + p_{3|f}\, p_{1|f} (\bar{C}_3 - \bar{C}_1)^2 \qquad (16.13)$$

where $p_{k|f} = \lambda_k/(\lambda_1 + \lambda_2 + \lambda_3)$, k = 1, 2, 3. The first component $\sum_{k=1}^{M} p_{k|f} V_k$ of the variance of the cost of failure reflects the variability of the cost within the individual failure modes, whereas the second component $\sum_{i<j} p_{i|f}\, p_{j|f} (\bar{C}_i - \bar{C}_j)^2$ reflects the variability of the cost between the separate failure modes.
16.3 SOME IMPORTANT COUNTEREXAMPLES
The fact that increasing the reliability of the system does not necessarily mean decreasing the risk of premature failure can be demonstrated on a very simple
system containing only two components logically arranged in series (Figure 16.2b). Let the two components be characterised by hazard rates λ1 and λ2, respectively, and suppose that the cost of failure C1 of the first component is much greater than the cost of failure C2 of the second component: C1 ≫ C2. The reliability of the system (the probability of surviving time t) is R(t) = exp[−(λ1 + λ2)t]. Increasing the hazard rate of the first component by Δ (to λ1 + Δ) and decreasing the hazard rate of the second component by the same amount (to λ2 − Δ) does not alter the reliability of the system, which remains R(t) = exp[−(λ1 + λ2)t]. The expected losses from premature failure, however, increase, because the hazard rate of the first component, characterised by the much higher cost of failure C1, has increased. Indeed, the cost of failure before altering the hazard rates is

$$\bar{C} = \frac{\lambda_1}{\lambda_1 + \lambda_2}\, C_1 + \frac{\lambda_2}{\lambda_1 + \lambda_2}\, C_2$$

After altering the hazard rates, the probability of premature failure pf = 1 − exp[−(λ1 + λ2)t] remains the same, but the cost of failure increases to

$$\bar{C}' = \frac{\lambda_1 + \Delta}{\lambda_1 + \lambda_2}\, C_1 + \frac{\lambda_2 - \Delta}{\lambda_1 + \lambda_2}\, C_2$$

As a result, the risk of failure increases to K' = pf C̄'. Moreover, if the hazard rate of the first component is increased by Δ and the hazard rate of the second component is decreased by 2Δ, the probability of premature failure decreases to p'f = 1 − exp[−(λ1 + λ2 − Δ)t], while the cost of failure increases to

$$\bar{C}'' = \frac{\lambda_1 + \Delta}{\lambda_1 + \lambda_2 - \Delta}\, C_1 + \frac{\lambda_2 - 2\Delta}{\lambda_1 + \lambda_2 - \Delta}\, C_2$$

In some cases, the risk of failure may increase while the probability of premature failure has actually decreased! Indeed, let λ1 = λ2 = 0.012 month⁻¹, t = 60 months, Δ = 0.004, C1 = 1000 and C2 = 100. The probability of failure is pf = 1 − exp[−(λ1 + λ2)t] ≈ 0.763 and the risk of failure, according to equation (16.10), is

$$K = \left(1 - \exp[-(\lambda_1 + \lambda_2)t]\right)\left(\frac{\lambda_1}{\lambda_1 + \lambda_2}\, C_1 + \frac{\lambda_2}{\lambda_1 + \lambda_2}\, C_2\right) \approx 419.6$$
After increasing the hazard rate of the first component by Δ and decreasing the hazard rate of the second component by 2Δ, the probability of failure is

p′f = 1 − exp[−(λ1 + λ2 − Δ)t] ≈ 0.699

and the risk of failure according to equation (16.10) is

K′ = (1 − exp[−(λ1 + λ2 − Δ)t]) [((λ1 + Δ)/(λ1 + λ2 − Δ)) C1 + ((λ2 − 2Δ)/(λ1 + λ2 − Δ)) C2] ≈ 573

As can be verified from the numerical example, while the reliability of the system has increased (p′f < pf), the risk of premature failure has also increased (K′ > K)!
Suppose now that the hazard rate of the first component has been decreased by an amount Δ, while the hazard rate of the second component has been increased by 2Δ. In this case, the probability of premature failure increases,

p″f = 1 − exp[−(λ1 + λ2 + Δ)t] ≈ 0.81

while the risk of failure decreases:

K″ = (1 − exp[−(λ1 + λ2 + Δ)t]) [((λ1 − Δ)/(λ1 + λ2 + Δ)) C1 + ((λ2 + 2Δ)/(λ1 + λ2 + Δ)) C2] ≈ 290.6
The counterexamples demonstrate that increasing the reliability of the system does not necessarily mean a decrease of the risk of premature failure, nor does decreasing the reliability of the system always mean increased risk of premature failure. Increasing the reliability of the system inappropriately may increase the risk (expected losses) of failure instead of decreasing it. In decreasing the risk of premature failure, the reduction of the hazard rates should start from the components associated with the highest risk of failure. The counterexamples also demonstrate that the cost-of-failure reliability analysis requires new reliability concepts, models and software tools, linking reliability with cost of failure.
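The three numerical cases of the counterexample can be reproduced with the short Python sketch below; the function name risk() is chosen here for illustration only.

import math

lam1 = lam2 = 0.012    # months^-1
t, d = 60.0, 0.004     # time interval and hazard-rate shift Delta
C1, C2 = 1000.0, 100.0

def risk(l1, l2):
    # risk of premature failure, equation (16.10), for two failure modes
    p_f = 1.0 - math.exp(-(l1 + l2) * t)
    return p_f * (l1 * C1 + l2 * C2) / (l1 + l2)

print(risk(lam1, lam2))                 # K   ~ 419.6
print(risk(lam1 + d, lam2 - 2 * d))     # K'  ~ 573   (reliability up, risk up)
print(risk(lam1 - d, lam2 + 2 * d))     # K'' ~ 290.6 (reliability down, risk down)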
16.4 TIME-DEPENDENT COST OF FAILURE

16.4.1 CALCULATING THE RISK OF FAILURE IN CASE OF COST OF FAILURE DEPENDENT ON TIME
The cost of failure C(t) may also be specified as a discrete function accepting constant values C1, C2, ..., CN in N subintervals (s0 = 0, s1), (s1, s2), ..., (s_{N−1}, sN = a) dividing the total time interval a (Figure 16.3a). Often, the cost of premature failure at the end of the design life a is smaller than the cost of premature failure at the start of life. Suppose that a component/system is characterised by a non-constant hazard rate h(t) over the finite time interval (0, a), which has been approximated by constant hazard rates h1, ..., hN as shown in Figure 16.3(b). The equation

K = Σ_{i=1}^{N} Ci [exp(−H_{i−1}) − exp(−Hi)]    (16.14)

where H_{i−1} = Σ_{k=1}^{i−1} hk ak (H0 = 0) and Hi = Σ_{k=1}^{i} hk ak are the cumulative hazard functions at the beginning and at the end of the ith time subinterval with length ai = si − s_{i−1}, gives the risk of premature failure in the interval (0, a).
Figure 16.3 (a) Time-dependent cost of failure over a finite time interval with length a; (b) approximation of the hazard rate over the finite time interval a; (c) different values of the cost of failure in a finite time interval with length 100 months: C1 = 300 in (0, 40), C2 = 265 in (40, 70) and C3 = 170 in (70, 100)
The probability that a premature failure will occur in the ith time subinterval is exp(−H_{i−1}) − exp(−Hi), which is the basis of equation (16.14). If the hazard rate is constant, h(t) = λ = const., equation (16.14) transforms into

K = Σ_{i=1}^{N} Ci [exp(−λ s_{i−1}) − exp(−λ si)]    (16.15)
which is the risk of premature failure in the time interval (0, a).
Suppose that the maximum acceptable risk of premature failure in the interval (0, a) is Kmax. Using equations (16.14) and (16.15), the difference

Kmax − Σ_{i=1}^{N} Ci [exp(−H_{i−1}) − exp(−Hi)]    (16.16)

characterising a non-constant hazard rate, and the difference

Kmax − Σ_{i=1}^{N} Ci [exp(−λ s_{i−1}) − exp(−λ si)]    (16.17)

characterising a constant hazard rate, can be formed. If the difference is negative, the existing hazard rate is associated with a risk of premature failure greater than the maximum acceptable risk Kmax. If the difference is non-negative, the risk of premature failure is below the maximum acceptable level Kmax. A very important application of equations (16.16) and (16.17) is in determining whether the hazard rates characterising operating equipment are small enough to guarantee a risk of premature failure below the maximum acceptable level. The general equation

Kmax − Σ_{i=1}^{N} Ci pi(λ) = 0    (16.18)
implicitly specifies a domain D for the hazard rates which limits the risk of failure below the maximum acceptable level Kmax. Whenever the hazard rates belong to the domain D, the risk of premature failure K in the finite time interval (0, a) is below the maximum acceptable level Kmax. In equation (16.18), N is the number of time subintervals, λ ≡ {λ1, λ2, ..., λM} is the hazard rate vector, pi(λ) is the probability of premature failure in the ith time subinterval (between times s_{i−1} and si) and Ci is the expected value of the cost of premature failure in the ith time subinterval. For the sake of simplicity, it is assumed that the cost of failure Ci in the ith time subinterval is the same, irrespective of which component has failed or which failure mode has caused the failure. For the
practically important special case of M components arranged logically in series, λ = λ1 + λ2 + ... + λM and pi(λ) = exp(−λ s_{i−1}) − exp(−λ si), and equation (16.18) transforms into

Kmax − Σ_{i=1}^{N} Ci [exp(−λ* s_{i−1}) − exp(−λ* si)] = 0    (16.19)

The hazard rate envelope λ* which limits the risk of premature failure below the maximum acceptable level Kmax can then be obtained as a numerical solution of equation (16.19) with respect to λ*. Whenever the system hazard rate λ is in the specified envelope (λ ≤ λ*), the risk of failure is below the maximum acceptable limit Kmax (K ≤ Kmax).
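Equation (16.19) can be solved numerically, for example by bisection, since the risk grows monotonically with λ for a decreasing cost profile. The Python sketch below uses the cost profile of Figure 16.3(c); the value of Kmax is an assumed example figure.

import math

C = [300.0, 265.0, 170.0]      # costs of failure in the three subintervals
s = [0.0, 40.0, 70.0, 100.0]   # subinterval boundaries, months
K_max = 100.0                  # assumed maximum acceptable risk

def risk(lam):
    # right-hand side sum of equation (16.15) for a constant hazard rate
    return sum(Ci * (math.exp(-lam * s[i]) - math.exp(-lam * s[i + 1]))
               for i, Ci in enumerate(C))

# bisection on equation (16.19); risk(lam) increases with lam here
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if risk(mid) < K_max:
        lo = mid
    else:
        hi = mid
print(lo)   # hazard rate envelope lambda*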
16.5 RISK OF PREMATURE FAILURE IF THE COST OF FAILURE DEPENDS ON TIME
Suppose that the component or system fails due to M mutually exclusive failure modes, or due to a failure of any of M components logically arranged in series, characterised by constant hazard rates λ1, λ2, ..., λM. The cost of premature failure has been specified by the matrix

C = | C11 C12 ... C1N |
    | C21 C22 ... C2N |
    | ...  ...  ... ... |
    | CM1 CM2 ... CMN |

as a discrete function accepting constant values in N time subintervals (s0 = 0, s1), (s1, s2), ..., (s_{N−1}, sN = a) dividing the total time interval with length a (Figure 16.3a), where each element Cki of the matrix gives the cost of failure from the kth failure mode in the ith time subinterval. For the ith time subinterval (s_{i−1}, si), the risk of failure Ki is

Ki = [exp(−λ s_{i−1}) − exp(−λ si)] Σ_{k=1}^{M} (λk/λ) Cki    (16.20)
where λ = λ1 + λ2 + ... + λM. The risk Ki is a product of the probability of failure exp(−λ s_{i−1}) − exp(−λ si) and the cost of failure Σ_{k=1}^{M} (λk/λ) Cki in the same time subinterval. For the total time interval (0, a), the risk of failure is

K = Σ_{i=1}^{N} [exp(−λ s_{i−1}) − exp(−λ si)] (Σ_{k=1}^{M} (λk/λ) Cki)    (16.21)
In an example, assume N = 2 time subintervals (s0 = 0, s1 = 25, s2 = a = 60 months), λ1 = λ2 = 0.012 months⁻¹ and M = 2 failure modes, associated with costs C11 = 1000 and C21 = 650 in the first time subinterval and C12 = 430 and C22 = 250 in the second time subinterval. For the specified numerical values, a Monte Carlo simulation yields K ≈ 478 for the risk of failure, which is confirmed by the direct calculation using equation (16.21) after substituting the numerical values of the parameters.
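The direct calculation with equation (16.21) can be written as a short Python sketch; the variable names are illustrative only.

import math

lam = [0.012, 0.012]                  # hazard rates of the two failure modes
s = [0.0, 25.0, 60.0]                 # subinterval boundaries, months
C = [[1000.0, 430.0],                 # C[k][i]: cost of mode k+1 in subinterval i+1
     [650.0, 250.0]]

lam_sum = sum(lam)
K = 0.0
for i in range(len(s) - 1):
    p_i = math.exp(-lam_sum * s[i]) - math.exp(-lam_sum * s[i + 1])
    c_i = sum((lam[k] / lam_sum) * C[k][i] for k in range(len(lam)))
    K += p_i * c_i                    # equation (16.21)
print(round(K))                       # 478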
16.6 SETTING RELIABILITY REQUIREMENTS TO MINIMISE THE TOTAL COST. VALUE FROM THE RELIABILITY INVESTMENT

Decreasing the probability of premature failure pf of a component or system can only be achieved by increasing its reliability. Increasing reliability, however, requires resources, and an optimisation procedure is necessary to minimise the sum of the expected losses due to premature failure and the cost of the resources invested in reliability improvement. The total costs (losses) can be presented as

L(λ − x) = Q(x) + K(λ − x)    (16.22)

where L(λ − x) are the total losses after decreasing the current hazard rate λ by x, Q(x) is the cost associated with decreasing the current hazard rate by x (Q(0) = 0) and K(λ − x) is the risk of premature failure after decreasing the current hazard rate by x. The difference

ΔL = L(λ − x) − L(λ) = Q(x) + K(λ − x) − K(λ)

gives the relative losses from decreasing the hazard rate by a value x. The value of the hazard rate which yields the smallest total losses L(λ − x) can be obtained by minimising L(λ − x) in equation (16.22) with respect to x in the interval (0, xmax) (0 ≤ x ≤ xmax < λ). If x* is the value minimising L(λ − x), λopt = λ − x* is the optimal value for the hazard rate, which minimises the total losses. The difference ΔL taken with a negative sign will be referred to as the value from the reliability investment:

V = −ΔL = L(λ) − L(λ − x) = K(λ) − K(λ − x) − Q(x)    (16.23)

According to this definition, the optimal hazard rate λopt = λ − x*, which minimises the total losses, also maximises the value from the reliability investment.
A positive sign of V in equation (16.23) indicates that the reliability investment Q(x) creates value: the risk reduction exceeds the cost of achieving this reduction. The larger the value of V, the bigger the value from the reliability investment. A negative sign of V indicates that the amount Q(x) spent on reliability improvement is not justified by the risk reduction.
Often a decision whether to replace existing equipment with more reliable but also more expensive equipment is required. Assume that the existing equipment costs Q0 and the expected losses from failures in a specified time interval with length a are K0. The alternative equipment costs Q1 and the expected losses from failures in the specified time interval with length a are K1. For the specified time interval, the total losses associated with the existing and the alternative equipment are Q0 + K0 and Q1 + K1, respectively. The difference V = Q0 + K0 − (Q1 + K1), which can also be presented as

V = (K0 − K1) − (Q1 − Q0)    (16.24)

will be referred to as the value of the alternative solution. Clearly, K0 − K1 in equation (16.24) is the risk reduction associated with implementing the alternative solution, and Q1 − Q0 is the cost at which this risk reduction is achieved. Assuming that the costs of the alternative solutions are known, in order to determine the value, the expected losses from failures during the entire time interval with length a must be determined. If the cost of failure C(t) is a discrete function accepting constant values C1, C2, ..., CN in N time subintervals, equation (16.22) regarding the total losses from premature failure becomes

L(λ − x) = Q(x) + Σ_{i=1}^{N} Ci {exp[−(λ − x)s_{i−1}] − exp[−(λ − x)si]}    (16.25)
The right-hand side of equation (16.25) can be minimised numerically with respect to x. If x* is the value minimising L(λ − x), λopt = λ − x* is the optimal value for the hazard rate. This is a one-dimensional nonlinear optimisation problem and can, for example, be solved using standard numerical methods (Atkinson, 1989).
It is possible that a decrease in L(λ − x) may follow only after some initial increase. In other words, the decrease of the current hazard rate must go beyond a certain value before a decrease in the total losses L(λ − x) can be expected. This is, for example, the case where a certain amount of resources is invested in a more reliable design. If the investment does not continue with implementing the design, however, no reduction of the total losses will be present despite the initial investment.
Often, only information regarding the costs Q(xi) of N alternatives (i = 1, ..., N) is available. In this case, the losses L(λ − xi) characterising each
alternative can be compared and the alternative k (1 ≤ k ≤ N) characterised by the smallest total losses L(λ − xk) selected. This can be illustrated by a numerical example. Suppose that for a finite time interval of 100 months, the consequences of premature failure are as follows: C1 = 300 in the time interval (0, 40 months); C2 = 265 in the time interval (40, 70 months); and C3 = 170 in the time interval (70, 100 months) (Figure 16.3c). The hazard rate which minimises the total losses is sought. Suppose that four alternatives with costs 20, 30, 45 and 120 are available, which reduce the current hazard rate λ = 0.016 months⁻¹ by 10, 30, 65 and 80%, respectively. The numerical comparison, sketched below, shows that the third alternative, with cost 45, which reduces the hazard rate to λ = 0.0056 months⁻¹, is optimal. The losses from failure without optimisation are 216.2; the total losses after optimisation are 155.7. The absolute and relative values from the reliability investment are 60.5 and 27.9%, respectively.
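The comparison of the alternatives can be sketched in Python as follows; the "do nothing" baseline entry (cost 0, reduction 0) is added here for convenience and is not one of the four alternatives.

import math

s = [0.0, 40.0, 70.0, 100.0]
C = [300.0, 265.0, 170.0]
lam0 = 0.016                               # current hazard rate, months^-1
# (cost of alternative, fraction by which it reduces the hazard rate)
alternatives = [(0.0, 0.0), (20.0, 0.10), (30.0, 0.30), (45.0, 0.65), (120.0, 0.80)]

def losses_from_failure(lam):
    # risk of premature failure, equation (16.15)
    return sum(Ci * (math.exp(-lam * s[i]) - math.exp(-lam * s[i + 1]))
               for i, Ci in enumerate(C))

for q, r in alternatives:
    lam = lam0 * (1.0 - r)
    print(q, round(q + losses_from_failure(lam), 1))   # total losses L = Q + K

The baseline gives losses of about 216.2, and the alternative with cost 45 yields the smallest total losses, about 155.8, in agreement with the figures quoted above (small differences are due to rounding).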
An equation for the total losses can also be constructed for a system characterised by multiple components (M) logically arranged in series. Assume that for each component, R alternatives exist, characterised by hazard rates λij and costs Qij, where the index i = 1, ..., M stands for the component number and j = 1, ..., R stands for the number of the selected alternative. For any selected vector of components with hazard rates λ = {λ1, ..., λM}, to which corresponds a vector of costs Q(λ) = {Q1, ..., QM}, the total losses are

L(λ) = Σ_{k=1}^{M} Qk + Σ_{i=1}^{N} [exp(−λ s_{i−1}) − exp(−λ si)] (Σ_{k=1}^{M} (λk/λ) Cki)    (16.26)
where λ = λ1 + ... + λM is the sum of the hazard rates of the selected components and N is the number of time subintervals. By minimising the right-hand side of equation (16.26) numerically with respect to λ = {λ1, ..., λM}, an optimal vector λ* = {λ1*, ..., λM*} of alternatives can be found which minimises the total losses and maximises the value from the reliability investment. This is in effect a reliability allocation based on minimising the total losses.
An equation similar to equation (16.22) can be applied to determine the value from the reliability investment for a repairable system. It is assumed that after each failure, the repair/replacement restores the system to as-good-as-new condition. It is also assumed that the non-constant hazard rate h(t) over the time interval (0, a) is reduced by a non-negative function x(t), resulting in a new hazard rate h(t) − x(t) (Figure 16.4a). This is, for example, the case where the hazard rate characterising the start of life has been decreased by measures aimed at reducing early-life failures: e.g. better design, assembly, materials, quality control and inspection, corrosion protection, etc. As a result of these measures, a smaller hazard rate h(t) − x(t) is obtained.
Figure 16.4 (a) Decreasing the non-constant hazard rate characterising the early-life failure region from h(t) to h(t) − x(t); (b) decreasing a constant hazard rate from a value λ to a value λ − x. In both cases S denotes the area between the two hazard rate curves over the interval (0, a)
K(h(t) − x(t)) now denotes the expected losses associated with all failures in the finite time interval (0, a), not the risk associated with a premature failure. If the expected cost given failure is C̄, the expected losses K(h(t)) from all failures associated with a hazard rate h(t) can be determined using the following probabilistic argument. Configurations including exactly one, two, ..., k failures in the time interval (0, a) can be regarded as mutually exclusive events associated with probabilities p(k) and costs C̄(k). The expected losses from all failures are

K = Σ_{k=1}^{∞} p(k) C̄(k)    (16.27)
For a non-homogeneous Poisson process (Gross and Harris, 1985), the probability p(k) of k failures in the finite time interval 0, a is given by the Poisson distribution
pðkÞ ¼
1 k!
Z
k
a
hðtÞdt 0
Z exp
a
hðtÞdt 0
The expected cost given k failures C̄(k) in the interval (0, a) is equal to k times the expected cost given a single failure. Substituting the expression C̄(k) = kC̄ for the expected cost given k failures, equation (16.27) results in

K = C̄ exp(−∫_0^a h(t)dt) (∫_0^a h(t)dt) [1 + ∫_0^a h(t)dt + (1/2!)(∫_0^a h(t)dt)² + ...] = C̄ ∫_0^a h(t)dt    (16.28)
The expression for the total losses then becomes

L[h(t) − x(t)] = Q(x(t)) + C̄ ∫_0^a [h(t) − x(t)]dt    (16.29)
and the value from the reliability investment becomes

V = −ΔL = L[h(t)] − L[h(t) − x(t)] = C̄ ∫_0^a x(t)dt − Q(x)    (16.30)
Because the integral S = ∫_0^a x(t)dt in equation (16.30) gives the area between the initial hazard rate h(t) and the hazard rate h(t) − x(t) after the risk reduction (Figure 16.4a), equation (16.30) can also be succinctly rewritten as

V = C̄S − Q    (16.31)
Suppose that a constant hazard rate h(t) = λ = const. has been decreased by a value x. Equation (16.29) then becomes

L(λ − x) = Q(x) + C̄a(λ − x)    (16.32)
and equation (16.30) regarding the value from the reliability investment becomes

V = −ΔL = C̄ax − Q(x)    (16.33)
Because the product ax in equation (16.33) is the area between the initial hazard rate λ and the hazard rate λ − x after the risk reduction (Figure 16.4b), equation (16.33) can also be succinctly presented in the form of equation (16.31), where S = ax. The models for determining the value from the reliability investment relate to both repairable and non-repairable components/systems. An
important application of the models for cost-of-failure-based reliability analyses is in investigating the effect of eliminating early-life failures on the financial revenue. The area S in Figures 16.4(a) and (b) can be interpreted as the expected number of failures prevented by the reliability improvement. Equation (16.30) shows that the reliability investment creates value if the expected losses from the prevented failures exceed the cost Q(x) of this prevention. Equations (16.29) and (16.32) can be used for making a choice between two alternative solutions characterised by different costs and different expected losses from failure. In cases where the cost of system failure is proportional to the system's downtime, C̄ in equation (16.29) can be the system's expected downtime.
Consider now the common case of a repairable system with components logically arranged in series, characterised by constant hazard rates. Suppose that the individual components are characterised by constant hazard rates λ1, λ2, ..., λM, where M is the total number of components. The expected losses C̄ given a system failure are

C̄ = Σ_{k=1}^{M} (λk/(λ1 + ... + λM)) C̄k    (16.34)
where C̄k is the expected cost given failure of the kth component. The expression for the expected losses K from failures in the finite time interval (0, a) is K = Σ_{k=1}^{∞} p(k) k C̄, where

p(k) = (1/k!) (λa)^k exp(−λa),  λ = λ1 + λ2 + ... + λM
is the probability of k failures in the finite time interval (0, a). Substituting expression (16.34) for the expected value of the cost given failure results in

K = λa Σ_{k=1}^{M} (λk/λ) C̄k = a Σ_{k=1}^{M} λk C̄k    (16.35)
for the expected losses from failures in the time interval (0, a) of a repairable system composed of M components logically arranged in series. According to equation (16.35), the expected losses from failures of such a system are a sum of products of the expected number of failures aλk characterising each component and the expected cost given failure of that component. The expression c̄ = Σ_{k=1}^{M} λk C̄k can be interpreted as the expected losses from failures per unit time. It is independent of the length a of the finite time interval. Once c̄ has been determined for a particular system, the expression K = ac̄ can be used to calculate the expected losses from failures for different lengths a of the finite time interval.
For the generic device in Figure 16.2(a), regarding the expected losses from failures in a finite time interval (e.g. design life) a = 60 months, equation (16.35) gives K = a Σ_{k=1}^{3} λk C̄k = 2369.7. If the budget allocated for failures is Kmax, and if each component is associated with the same cost of failure C̄, solving equation (16.35) with respect to the system hazard rate gives

λ* = Kmax/(C̄a)    (16.36)
for the upper bound of the hazard rate which guarantees that, if λ ≤ λ* is fulfilled for the system hazard rate, the expected losses from failures K will be within the allocated budget for failures Kmax.
In the case of a time-dependent cost of failure, Monte Carlo simulations can be employed to determine the expected losses from failures K(λ − x) for a repairable system. The simulations consist of generating random failures in the finite interval (0, a) according to the current hazard rate and calculating the costs associated with each failure. Adding up all costs gives the cost of failures per simulation. Finding the average cost of failures over all simulations yields the expected losses K(λ − x) from failures corresponding to a hazard rate λ − x. The algorithm consists of (i) varying (in a loop) the hazard rate by selecting different values x; and (ii) for each x, performing, in an inner loop, a fixed number of Monte Carlo simulations, each of which generates random failures according to the current hazard rate λ − x; for each hazard rate λ − x, the expected losses K(λ − x) are calculated as the average value of the cost of failures from all Monte Carlo simulations. The calculations continue until a value x* is obtained which minimises the sum L(λ − x) = Q(x) + K(λ − x), where Q(x) is the cost of reducing the hazard rate by x. A sketch of the inner loop is given below.
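The sketch below implements the inner Monte Carlo loop for a constant hazard rate and the time-dependent cost profile of Figure 16.3(c); the hazard rate and the number of trials are assumed values for illustration.

import math, random

random.seed(1)
a = 100.0

def cost_at(t):
    # time-dependent cost of failure, Figure 16.3(c)
    if t < 40.0: return 300.0
    if t < 70.0: return 265.0
    return 170.0

def expected_losses(lam, trials=20000):
    total = 0.0
    for _ in range(trials):
        t = random.expovariate(lam)      # successive failure times of a repairable
        while t < a:                     # (as-good-as-new after repair) system
            total += cost_at(t)
            t += random.expovariate(lam)
    return total / trials

# for a constant hazard rate the analytical value is lam * integral of C(t),
# here 0.016 * 25050 = 400.8
print(expected_losses(0.016))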
An expression can also be derived for the expected losses from failures of a repairable system composed of M components logically arranged in series, where the separate components are characterised by arbitrary cumulative distribution functions Fk(t), k = 1, 2, ..., M, of the time to failure and probability density functions fk(t) = ∂Fk(t)/∂t. The probability that, given failure, the first component has initiated it in the elementary time interval (t, t + dt) can be expressed by f1(t)[1 − F2(t)] ... [1 − FM(t)]dt, which is a product of the probabilities of the following statistically independent events: (i) the first component has initiated failure in the interval (t, t + dt), the probability of which is given by f1(t)dt; and (ii) the other components have not initiated failure before time t, the probability of which is [1 − F2(t)] ... [1 − FM(t)]. According to the total probability theorem, the probability p1|f that, given failure, the first component has initiated it is

p1|f = ∫_0^a f1(t)[1 − F2(t)] ... [1 − FM(t)]dt
Similarly, the probability that, given failure, the kth component has initiated it is

pk|f = ∫_0^a fk(t)[1 − F1(t)] ... [1 − F_{k−1}(t)][1 − F_{k+1}(t)] ... [1 − FM(t)]dt    (16.37)

The expected cost given failure is C̄ = Σ_{k=1}^{M} pk|f C̄k, where C̄k are the expected costs given failure characterising the individual components. The risk of premature failure before time a for the system is

K = [1 − exp(−∫_0^a h(t)dt)] Σ_{k=1}^{M} pk|f C̄k    (16.38)
where h(t) is the system hazard rate and 1 − exp(−∫_0^a h(t)dt) is the probability that failure will occur before time a. Considering that the expected number of failures in the interval (0, a) is ∫_0^a h(t)dt, where h(t) is the hazard rate of the repairable system, the expected losses from failures are

K = (∫_0^a h(t)dt) Σ_{k=1}^{M} pk|f C̄k    (16.39)
where pk|f are given by equation (16.37). Summarising: the expected losses from failures of a repairable system in a specified time interval are equal to the expected number of failures E(N) times the expected cost C̄ given failure:

K = E(N) C̄    (16.40)
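Equations (16.37)–(16.39) can be evaluated numerically. The Python sketch below assumes a two-component series system with Weibull times to failure; the Weibull parameters and component costs are invented here purely for illustration, and the mode probabilities are normalised so that they condition on a failure having occurred.

import math

a = 60.0
eta  = [120.0, 90.0]   # assumed Weibull scale parameters, months
beta = [1.3, 1.1]      # assumed Weibull shape parameters
C    = [500.0, 1500.0] # assumed expected costs given failure of each component

def F(t, k): return 1.0 - math.exp(-(t / eta[k]) ** beta[k])
def f(t, k):
    return (beta[k] / eta[k]) * (t / eta[k]) ** (beta[k] - 1.0) \
           * math.exp(-(t / eta[k]) ** beta[k])

def p_init(k, n=20000):
    # integral (16.37) by the midpoint rule
    dt = a / n
    other = [j for j in range(len(eta)) if j != k]
    return sum(f((i + 0.5) * dt, k)
               * math.prod(1.0 - F((i + 0.5) * dt, j) for j in other) * dt
               for i in range(n))

p = [p_init(k) for k in range(2)]
p = [pk / sum(p) for pk in p]                         # conditional mode shares
H = sum((a / eta[k]) ** beta[k] for k in range(2))    # expected number of failures
K = H * sum(pk * ck for pk, ck in zip(p, C))          # equation (16.39)
print(p, K)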
16.7 GUARANTEEING MULTIPLE RELIABILITY REQUIREMENTS FOR SYSTEMS WITH COMPONENTS LOGICALLY ARRANGED IN SERIES
The hazard rate envelope which guarantees more than a single reliability requirement can be found as an intersection of the hazard rate envelopes guaranteeing the separate reliability requirements. Suppose that a hazard rate envelope is required to guarantee, for a repairable system with components logically arranged in series, expected losses from failures below the maximum acceptable limit Kmax. The hazard rate envelope λR* guaranteeing that the expected losses from failures will be below the limit Kmax is obtained from equation (16.36). If a minimum availability target and an MFFOP have been
specified (i.e. availability at least AT and an MFFOP of length at least s before each failure), hazard rate envelopes λA* and λs* can be determined, given the distribution of downtimes, according to the methods from Chapter 15. These hazard rate envelopes guarantee that whenever λ ≤ λA* is fulfilled, the availability will be greater than the specified minimum target value (A ≥ AT), and whenever λ ≤ λs* is fulfilled, an MFFOP with length at least s will exist before each random failure, with the specified minimum probability pMFFOP. The intersection of all hazard rate envelopes is a hazard rate envelope λ* = min{λR*, λA*, λs*} which guarantees that all reliability requirements will be met if the relationship λ ≤ λ* is fulfilled for the system hazard rate (Figure 16.5). If λopt ≤ λ* is fulfilled for the optimum hazard rate λopt minimising the total losses, λopt will not only minimise the total losses but will also guarantee a minimum availability target, expected losses from failures below the maximum acceptable level and a minimum failure-free operating interval.
A significant drawback of the current practice for setting quantitative reliability requirements in subsea oil and gas production, for example, is that they are based solely on specifying a minimum availability target. As was shown in Chapter 15, specifying a high availability target does not necessarily guarantee a low risk of premature failure. It must also be pointed out that a high availability target does not necessarily account for the cost of failure. Although availability does account for the cost of lost production, which is proportional to the downtime, it does not account for the cost of the resources for repair and the cost of intervention, which for deep-water oil and gas production can be larger than the cost of deferred production. Setting reliability requirements should be done by finding the intersection of the hazard rate envelopes which deliver low expected losses from failures, minimum total losses and an average availability larger than the target value. A set of quantitative reliability requirements in deep-water oil and gas production, for example, should contain several basic components: (i) a specified minimum availability target (asking for availability); (ii) a specified maximum acceptable level of the expected losses from failures (asking for reliability); and (iii) a requirement for minimum total losses (asking for a maximum value from the reliability investment). The intersection of the hazard rate envelopes delivering the individual requirements is a hazard rate envelope which delivers all of the requirements.
Figure 16.5 Setting reliability requirements as an intersection of hazard rate envelopes: a minimum availability AT, an MFFOP of length at least s and expected losses below Kmax correspond to the envelopes λR*, λs* and λA* on the hazard rate axis λ
The input data for setting basic reliability requirements at a system level which limit the expected losses from failures and deliver a required minimum availability target are: the distribution of the downtimes due to a critical failure; the average cost of lost production per unit downtime; and the distribution of the cost of intervention for repair (including the cost of locating the failure, the cost of mobilisation of resources, the cost of intervention and the cost of repair/replacement). Other components of the input data are the required minimum availability and the budget allocated for fixing failures. Equation (16.6), for example, can be applied to determine the maximum hazard rate that guarantees that the losses from warranty payments due to failure within an agreed warranty period (0, a) are smaller than the warranty budget Kmax. It must be pointed out that cost-of-failure-driven reliability requirements do not necessarily entail unrealistic hazard rates that are difficult for suppliers to achieve. Indeed, according to equation (16.6), the hazard rate envelope is small when the maximum acceptable risk Kmax is small, and can be very large if the maximum acceptable risk Kmax is close to the cost of failure C. An advantage of the proposed approach is that setting reliability requirements does not require historical failure data, which for new designs are non-existent.
16.8 RISK OF PREMATURE FAILURE FOR SYSTEMS WHOSE COMPONENTS ARE NOT LOGICALLY ARRANGED IN SERIES
For a system whose components are not necessarily arranged logically in series, the risk of premature failure and the losses from failures during the entire design life can be determined using a Monte Carlo simulation. Consider, for example, the generic comparator in Figure 1.12, whose reliability block diagram is given in Figure 16.6. The reliabilities of the measuring
Figure 16.6 A reliability block diagram of the generic comparator from Figure 1.12, with measuring devices m between nodes 1–2, 1–3, 2–4 and 3–4 and a data transmitter c between nodes 2 and 3. The nodes have been marked by black circles
devices in Figure 16.6 for a finite time interval of a = 24 months are m = 0.9, and the reliability of the data transmitter is c = 0.4. The cost of replacement of a failed measuring device is 200 units, while the cost of replacement of the data transmitter is 2000 units. Components are replaced only in case of a system failure before time a = 24 months. The nodes have been represented by solid circles and numbered from 1 to 4. The system works only if there exists a path through working components between the start node '1' and the end node '4' (Figure 16.6). The reliability block diagram can be presented by the adjacency matrix

A = | 0 1 1 0 |
    | 1 0 1 1 |
    | 1 1 0 1 |
    | 0 1 1 0 |
for which aij = 1 if nodes i and j are connected and aij = 0 otherwise (i ≠ j). An algorithm in pseudocode, valid for reliability block diagrams with any number of nodes, is presented below.

Algorithm.

function component_failure(i)
{ /* Returns 1 if the component with index i has failed and 0 otherwise */ }

function path()
{ /* Returns 1 if there is a path between the start and the end node and 0 otherwise */
  stack[];      /* An empty stack */
  sp;           /* Stack pointer */
  Num_nodes;    /* Number of nodes */
  marked[];     /* An array containing information about which nodes have been visited */
  /* Mark all nodes as 'not visited' */
  For i = 1 to Num_nodes do marked[i] = 0;
  sp = 1; stack[sp] = 1;    /* Add the first node to the stack */
  While (sp > 0) do         /* while the stack is not empty do the following */
  {
    r_node = stack[sp];     /* Take a node from the top of the stack */
    marked[r_node] = 1;     /* Mark the node as 'visited' */
    sp = sp - 1;            /* Remove the visited node from the stack */
    /* Find all unmarked nodes adjacent to the removed node r_node */
    For i = 1 to Num_nodes do
      if (marked[i] = 0) then   /* if node 'i' is not marked */
        if (node i is adjacent to r_node) then
        {
          if (node i is the end node) then return 1;   /* a path has been found */
          else { sp = sp + 1; stack[sp] = i; }          /* put the ith node into the stack */
        }
  }
  return 0;   /* a path has not been found between the start and the end node */
}

Cum_losses = 0;
For i = 1 to Number_of_trials do
{
  Cost_fc = 0;
  Make a copy of the adjacency matrix;
  For j = 1 to Number_of_components do
  {
    simulate failure for the jth component;
    If (component_failure(j) = 1) then
    {
      Cost_fc = Cost_fc + Cost[j];
      Alter the copy of the adjacency matrix to reflect that component j has failed;
    }
  }
  If (path() = 0) then Cum_losses = Cum_losses + Cost_fc;
  /* In case of system failure, accumulate the losses in Cum_losses */
}
/* Calculate the risk of system failure */
Expected_losses = Cum_losses / Number_of_trials;

The function path() checks whether there exists a path through working components from the start to the end node. A stack is created where, initially, only the start node resides. Then, while there exists at least a single node in the stack, the node from the top of the stack is removed and marked as 'visited'. A check is then conducted whether the end node is among the adjacent non-marked (non-visited) nodes of the removed node r_node. If the end node is among them, the function path() immediately returns true (1). Otherwise, all non-marked nodes adjacent to the removed node are stored in the stack. If node i is adjacent to the removed node r_node, this is indicated by a greater-than-zero element A[r_node][i] of the adjacency matrix. The algorithm then continues by removing another node from the top of the stack. The cost of failure of the jth component is given by Cost[j].
Using the algorithm, the risk of system failure within a = 24 months has been calculated. The computer simulation based on 100 000 trials yielded 55 units for the risk of premature failure before a = 24 months. After disconnecting the data transmitter, which is reflected by setting a23 = a32 = 0 in the adjacency matrix A, the Monte Carlo simulation yielded 15 units for the risk of premature failure. Thus, despite the fact that the reliability of the system has been decreased by disconnecting the data transmitter,
the risk of premature failure has also decreased. This result also confirms that decreasing the reliability of the system does not necessarily mean increasing the risk of premature failure. The above algorithm can be modified to give the expected losses from multiple system failures (not only the risk of premature failure) in a specified time interval. The expected losses from failures of two alternative solutions can then be compared and the solution with the smaller expected losses selected.
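A compact Python rendering of the algorithm above is sketched below; it uses a depth-first search in place of the explicit stack bookkeeping, and the edge list and function names are chosen here for illustration. Being Monte Carlo estimates, the printed values only approximate the figures of about 55 and 15 units quoted above.

import random

random.seed(0)
# adjacency of the working block diagram (nodes 1..4 mapped to indices 0..3)
ADJ = [[0,1,1,0],[1,0,1,1],[1,1,0,1],[0,1,1,0]]
# components as edges: (node_i, node_j, reliability over 24 months, cost)
COMPONENTS = [(0,1,0.9,200.0), (0,2,0.9,200.0), (1,2,0.4,2000.0),
              (1,3,0.9,200.0), (2,3,0.9,200.0)]   # (1,2) is the data transmitter

def path_exists(adj):
    # depth-first search for a path of working components from node 0 to node 3
    stack, seen = [0], {0}
    while stack:
        u = stack.pop()
        if u == 3:
            return True
        for v in range(4):
            if adj[u][v] and v not in seen:
                seen.add(v); stack.append(v)
    return False

def risk(trials=100000, include_transmitter=True):
    cum = 0.0
    for _ in range(trials):
        adj = [row[:] for row in ADJ]
        cost_fc = 0.0
        for (i, j, r, c) in COMPONENTS:
            if not include_transmitter and (i, j) == (1, 2):
                adj[i][j] = adj[j][i] = 0       # transmitter disconnected
                continue
            if random.random() > r:             # component has failed
                cost_fc += c
                adj[i][j] = adj[j][i] = 0
        if not path_exists(adj):
            cum += cost_fc
    return cum / trials

print(risk())                            # roughly 55 units
print(risk(include_transmitter=False))   # roughly 15 units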
16.8.1 EXPECTED LOSSES FROM FAILURES FOR REPAIRABLE SYSTEMS WHOSE COMPONENTS ARE NOT ARRANGED IN SERIES
Assume a repairable system which after each failure is replaced/repaired to as-good-as-new condition. The existing repairable system costs Q0 and its expected cost of failure is C̄0. According to equation (16.28), the expected losses from failures of the repairable system in the finite time interval (0, a) are given by K0 = C̄0 ∫_0^a h0(t)dt = C̄0 H0(a), where h0(t) and H0(t) are the hazard rate and the cumulative hazard rate characterising the system. An alternative system is considered to replace the existing system. Suppose that the alternative system costs Q1 and, similarly to the existing system, the expected losses from failures in the specified time interval with length a are K1 = C̄1 ∫_0^a h1(t)dt = C̄1 H1(a). According to equation (16.24), the difference

V = C̄0 H0(a) − C̄1 H1(a) − (Q1 − Q0)    (16.41)
yields the value of the alternative solution. The replacement is worthwhile if V > 0 and has a relatively large magnitude.
16.9 RELIABILITY ALLOCATION TO MINIMISE THE TOTAL LOSSES

The strategy for achieving a cost-of-failure-based reliability allocation at a component level which minimises the total losses can be outlined as follows. Minimising the total losses involves minimising the sum of the cost of the reliability investment Q(λ) and the expected losses from failures K(λ) within a specified time interval: L(λ) = Q(λ) + K(λ), where λ = {λ1, λ2, ..., λM} is a vector containing the hazard rates λi of the components. The optimal value λ* = {λ1*, λ2*, ..., λM*} yielding the smallest total losses L(λ) can be obtained by minimising L(λ) with respect to λ under the constraints imposed on the hazard rates of the components.
In the discrete case, suppose that for each component in a system composed of M components there are N available alternatives, each with a different hazard
rate λij and cost cij. The index i stands for the ith component (i = 1, 2, ..., M) and the index j stands for the jth alternative of the ith component. The available hazard rates (alternatives) are specified by the matrix

Λ = | λ11 λ12 ... λ1N |
    | λ21 λ22 ... λ2N |
    | ...  ...  ... ... |
    | λM1 λM2 ... λMN |    (16.42)
Correspondingly, the costs of the different alternatives are specified by the matrix

c = | c11 c12 ... c1N |
    | c21 c22 ... c2N |
    | ...  ...  ... ... |
    | cM1 cM2 ... cMN |    (16.43)
For a particular system configuration, the problem reduces to selecting M hazard rates (alternatives), one from each row of matrix (16.42) (λ1,a1, λ2,a2, ..., λM,aM), such that the total losses

L = Σ_{i=1}^{M} ci,ai + K(λ1,a1, λ2,a2, ..., λM,aM)    (16.44)
are minimised, where Σ_{i=1}^{M} ci,ai is the cost of the selected alternatives and K(λ1,a1, λ2,a2, ..., λM,aM) is the expected losses from failures associated with the selected alternatives. The problem can be solved numerically, using methods for discrete numerical optimisation; a brute-force sketch is given below. The general inequality (16.7) can also be used to obtain a domain for the hazard rates which limits the expected losses from failures below a maximum acceptable level Kmax.
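For a small number of components and alternatives, the minimisation of equation (16.44) can be done by exhaustive enumeration. In the Python sketch below, the alternative matrices, the time interval and the common cost per system failure are assumed values, and the expected losses are modelled with the constant-hazard-rate expression (16.35).

import itertools

a = 60.0
C_given_failure = 400.0    # assumed common expected cost per system failure
# rows: components; columns: alternatives as (hazard rate, cost) — assumed values
options = [
    [(0.010, 50.0), (0.006, 90.0), (0.003, 160.0)],
    [(0.020, 30.0), (0.012, 70.0), (0.005, 150.0)],
]

best = None
for pick in itertools.product(*options):
    lam = sum(l for l, _ in pick)
    q = sum(c for _, c in pick)
    K = C_given_failure * lam * a      # expected losses, equation (16.35)
    L = q + K                          # total losses, equation (16.44)
    if best is None or L < best[0]:
        best = (L, pick)
print(best)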
Appendix A
1. RANDOM EVENTS

Sample space
The set of all outcomes in an experiment: Ω.

Example. If the experiment is a toss of two independently rolled dice, the sample space has 36 equally likely outcomes (elements), each of probability 1/36:

Ω:  (1,1) (1,2) ... (1,6)
    (2,1) (2,2) ... (2,6)
    ...
    (6,1) (6,2) ... (6,6)
The sample space from the toss of three coins has 2 × 2 × 2 = 8 equally likely outcomes (H(eads) or T(ails)):

Ω: HHH, HHT, HTH, THH, HTT, THT, TTH, TTT

The sample space of the states of a system containing three components A, B and C, each of which can be in a 'working' (e.g. A) or 'failed' (e.g. Ā) state:

Ω: ABC, ABC̄, AB̄C, ĀBC, AB̄C̄, ĀBC̄, ĀB̄C, ĀB̄C̄

Generally, a system containing n components, each characterised by two distinct states, has a total of 2^n different states.
Event
A subset of the sample space (of all outcomes) (the event A in the Venn diagram, A ⊂ Ω).
Venn diagram
Pictorial representation of subsets in the sample space: A ⊂ Ω (A is a subset of the sample space).
Certain event
The sample space Ω. Contains all possible outcomes.
Impossible event (null event)
∅ does not contain any outcomes. An empty set.
An elementary event
Consists of a single element (outcome).
Disjoint (mutually exclusive) events
Cannot occur simultaneously (e.g. the events denoting the two possible states of a system or component: working or failed): A ∩ B = ∅.
Complementary events
Whenever one does not occur, the other does. The complement of event A is denoted by Ā. Ā includes all outcomes (elementary events) x which do not belong to A:

Ā = {x | x ∉ A}

Note: the notation {x | P} means all outcomes x with property P; {x | x ∉ A} means all x with the property x ∉ A. The null event and the certain event are complementary events: ∅̄ = Ω; Ω̄ = ∅.
Two events are equivalent, and we write A = B, if A and B have the same outcomes x. A = B is fulfilled if whenever x ∈ A then x ∈ B, and whenever x ∈ B then x ∈ A. Suppose that A and B are events. If every element of B is an element of A, we say that the outcomes of B are a subset of the outcomes of A. In other words, whenever we have a realisation of the event B, we automatically have a realisation of the event A:
B ⊂ A (B is a subset of A)
x ∈ B → x ∈ A (from x ∈ B it follows ('→') that x ∈ A)
2. UNION OF EVENTS

The union A ∪ B of two events A and B is the event consisting of outcomes x belonging to A or B or both:

A ∪ B = {x | x ∈ A or x ∈ B}

The union ∪_{i=1}^{n} Ai = A1 ∪ A2 ∪ ... ∪ An of n events is the event which contains all outcomes x belonging to at least one of the events Ai:

∪_{i=1}^{n} Ai = A1 ∪ A2 ∪ ... ∪ An = {x | x ∈ A1 or x ∈ A2 or ... or x ∈ An}
3. INTERSECTION OF EVENTS

The intersection A ∩ B of two events A and B is the event consisting of outcomes x common to both A and B:

A ∩ B = {x | x ∈ A and x ∈ B}

The intersection ∩_{i=1}^{n} Ai = A1 ∩ A2 ∩ ... ∩ An of n events is the event consisting of outcomes x common to all events Ai:

∩_{i=1}^{n} Ai = A1 ∩ A2 ∩ ... ∩ An = {x | x ∈ A1 and x ∈ A2 and ... and x ∈ An}
Example. Sample space: the outcomes from a toss of a die. Event A: the result is a number greater than 3; A = {4, 5, 6}. Event B: the result is an odd number; B = {1, 3, 5}. Then

A ∪ B = {1, 3, 4, 5, 6};  A ∩ B = {5}

Let Ω be the sample space and let A, B and C be events (subsets) in Ω. The following laws hold:
Associative laws
(A ∪ B) ∪ C = A ∪ (B ∪ C)
(A ∩ B) ∩ C = A ∩ (B ∩ C)

Commutative laws
A ∪ B = B ∪ A
A ∩ B = B ∩ A

Distributive laws
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)

Identity laws
A ∪ ∅ = A
A ∩ Ω = A

Complement laws
A ∪ Ā = Ω
A ∩ Ā = ∅

Idempotent laws
A ∪ A = A
A ∩ A = A

Bound laws
A ∪ Ω = Ω
A ∩ ∅ = ∅

Absorption laws
A ∪ (A ∩ B) = A
A ∩ (A ∪ B) = A

Involution law
The complement of Ā is A.

0/1 laws
∅̄ = Ω
Ω̄ = ∅

De Morgan's laws for sets
(A ∪ B)‾ = Ā ∩ B̄
(A ∩ B)‾ = Ā ∪ B̄

Principle of duality
If any statement involving '∪', '∩' and '‾' is true for all sets, then the dual statement, obtained by replacing '∪' by '∩', '∩' by '∪', ∅ by Ω and Ω by ∅, is also true for all sets.
Partition of the sample space
A collection of events {Ai} is said to be a partition of the sample space Ω if every element of Ω belongs to exactly one event Ak. In other words, a partition of Ω divides Ω into non-overlapping subsets: the Ai are pairwise disjoint and their union is the sample space Ω:

Ai ∩ Aj = ∅ if i ≠ j;  ∪_i Ai = Ω
4. PROBABILITY
Classical approach to defining probability
The ratio of the number of favourable outcomes to the total number of outcomes:

P(A) = (Number of outcomes leading to event A)/(Total number of possible outcomes)

This approach can only be used when symmetry is present, i.e. if all outcomes are equally likely (the outcomes are interchangeable).

Example. What is the probability P(A) of the event A that the sum of the faces of two independently thrown dice will be equal to 5? Since there exist only four favourable (successful) outcomes leading to event A (1 + 4, 2 + 3, 3 + 2, 4 + 1) in the total sample space of 6 × 6 = 36 possible symmetrical (equally likely) outcomes, the probability of event A is

P(A) = 4/36 = 1/9
Empirical definition of probability
Suppose that an experiment is performed in which a large number of components are tested under the same conditions. If N is the number of components tested and n is the number of failures, the probability of failure (event A) can be defined formally as

P(A) = lim_{N→∞} n/N

According to the empirical definition, probability is defined as a limit of the ratio of occurrences from a large number of trials. Usually, a relatively small number of trials N gives a sufficiently accurate estimate of the true probability: P(A) ≈ n/N.
Axiomatic approach
Probability can also be defined using Kolmogorov's axioms:

Axiom 1: 0 ≤ P(A) ≤ 1.
Axiom 2: P(Ω) = 1.
Axiom 3: This axiom is related to an infinite number of mutually exclusive events A1, A2, ... (Ai ∩ Aj = ∅ when i ≠ j):

P(A1 ∪ A2 ∪ ...) = P(A1) + P(A2) + ...

The probability of the certain event is unity (Axiom 2): P(Ω) = 1. The probability of the null event is zero: P(∅) = 0. According to the first axiom, the probability of events is measured on a scale from 0 to 1, with '0' being impossibility and '1' being certainty.
5. PROBABILITY OF A UNION AND INTERSECTION OF MUTUALLY EXCLUSIVE EVENTS

From the third Kolmogorov axiom it follows that if A and B are mutually exclusive (disjoint) events (A ∩ B = ∅), then

P(A ∪ B) = P(A) + P(B)
Example. Event A: 'the number of points from rolling a die is 5'; event B: 'the number of points from rolling a die is 4'. The probability of obtaining 5 or 4 points is

P(A ∪ B) = P(A) + P(B) = 1/6 + 1/6 = 1/3
Probability of complementary events

P(Ā) = 1 − P(A)

Indeed: A ∪ Ā = Ω and A ∩ Ā = ∅, hence P(A) + P(Ā) = P(Ω) = 1 and therefore P(Ā) = 1 − P(A).
An important application
The probability that a measuring device will fail during operation is p. If three devices are present, the probability that at least one device (one, two or three) will fail can be determined using the probability of complementary events. The probability of the event 'at least one of the devices will fail' is equal to one minus the probability of the event 'none of the devices will fail', because the two events are complementary:

P(at least one device will fail) = 1 − P(none of the devices will fail)

Example. For an arrangement of five water pumps, the probability that at least three pumps will work can be expressed as follows:

P(at least three will work) = 1 − P(two or fewer will work)
6. CONDITIONAL PROBABILITY
Probability of intersection of events
The number of ways A and B can occur equals the number of ways B can occur given that A has occurred:

N_{A∩B} = N_{B|A}

The last expression can also be presented as

N_{A∩B} = N_A (N_{B|A}/N_A)

where N_A is the number of ways A can occur. Dividing the two sides by the total number of outcomes N results in

N_{A∩B}/N = (N_A/N)(N_{B|A}/N_A)
Since, according to the classical definition of probability,

P(A ∩ B) = N_{A∩B}/N,  P(A) = N_A/N  and  P(B|A) = N_{B|A}/N_A

finally: P(A ∩ B) = P(A)P(B|A), where P(B|A) is the probability of B given that A has occurred (conditional probability). Alternatively, the number of ways A and B can occur equals the number of ways A can occur given that B has occurred. Therefore, the corresponding probability expression becomes

P(A ∩ B) = P(B)P(A|B)

where P(A|B) is the probability of A given that B has occurred (conditional probability).

Example. In a batch of 10 bearings, four are defective. Two bearings are selected randomly from the batch and installed in a pump. What is the probability that the pump will work properly? The pump will work properly only if both bearings are non-defective. Let A denote the event 'the first bearing is non-defective' and B denote the event 'the second bearing is non-defective'. Then

P(A ∩ B) = P(A)P(B|A)

Since P(A) = 6/10 = 3/5 and P(B|A) = 5/9,

P(A ∩ B) = P(A)P(B|A) = (3/5) × (5/9) = 1/3
Probability of intersection of three events

P(A1 ∩ A2 ∩ A3) = P(A1)P(A2|A1)P(A3|A1A2)

Example. A component is put into service. The probability that the component will be defective is 0.03. If the component is defective, with probability
0.4 the defect promotes an increased load for the component. The increased load causes 13% of the defective components to fail immediately after being put into service, as opposed to non-defective components. Find the probability that a particular component will fail immediately after being put into service. Let A denote the event 'the component is defective', B denote the event 'the defect causes increased load' and C denote the event 'the component will fail'. Then

P(A ∩ B ∩ C) = P(A)P(B|A)P(C|AB)

Since P(A) = 0.03, P(B|A) = 0.4 and P(C|AB) = 0.13,

P(A ∩ B ∩ C) = 0.03 × 0.4 × 0.13 = 0.00156
Probability of intersection of n events
The formula can be generalised for n events. The probability of intersection of n events Ai, i = 1, ..., n, is

P(A1 ∩ A2 ∩ ... ∩ An) = P(A1)P(A2|A1)P(A3|A1A2) ... P(An|A1A2 ... A_{n−1})

The probabilities P(A|B) of the event A given that B has occurred and P(B|A) of the event B given that A has occurred can be determined as follows:

P(A|B) = (Number of ways A and B can occur)/(Number of ways B can occur)
P(A|B) = P(A ∩ B)/P(B)
P(B|A) = P(A ∩ B)/P(A)
From the last two equations it follows that

P(A|B)P(B) = P(B|A)P(A)
Example. It has been observed that 3% of the components arriving on an assembly line are both defective and from supplier X. If 30% of the components come from supplier X, find the probability that a purchased component will be defective given that it comes from supplier X. Let A denote the event 'the component comes from supplier X' and B denote the event 'the component is defective'. Then

P(B|A) = P(A ∩ B)/P(A) = 0.03/0.3 = 0.1

Thus, 10% of the components from supplier X are likely to be defective.
7. PROBABILITY OF A UNION OF NON-DISJOINT EVENTS

Non-disjoint events: A ∩ B ≠ ∅.

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Indeed,

A = (A ∩ B̄) ∪ (A ∩ B)
B = (Ā ∩ B) ∪ (A ∩ B)
A ∪ B = (A ∩ B̄) ∪ (Ā ∩ B) ∪ (A ∩ B)
P(A) + P(B) = P(A ∩ B̄) + P(Ā ∩ B) + 2P(A ∩ B)
P(A ∪ B) = P(A ∩ B̄) + P(Ā ∩ B) + P(A ∩ B)

and therefore P(A ∪ B) = P(A) + P(B) − P(A ∩ B). Similarly, the probability of a union of three non-disjoint events can be calculated:
P(A1 ∪ A2 ∪ A3) = P(A1) + P(A2) + P(A3) − P(A1 ∩ A2) − P(A2 ∩ A3) − P(A1 ∩ A3) + P(A1 ∩ A2 ∩ A3)

The expression regarding the probability of a union of non-disjoint events can easily be generalised for n events:
P(∪_{i=1}^{n} Ai) = Σ_{i=1}^{n} P(Ai) − ΣΣ_{i<j} P(Ai ∩ Aj) + ΣΣΣ_{i<j<k} P(Ai ∩ Aj ∩ Ak) − ... + (−1)^{n+1} P(A1 ∩ A2 ∩ ... ∩ An)

This expression is also known as the inclusion–exclusion expansion. The rule for the expansion can be summarised by the following steps (Sherwin and Bossche, 1993); a small computational sketch is given after this list.

1. Add the probabilities of all single events. This means that the probability of the intersection of any pair of events is added twice and should be subtracted.
2. Subtract the probabilities of all intersections of two events from the previous result. Since the contribution of the intersection of any three events has been added three times through the single events and subsequently has been subtracted three times through the twofold intersections, the probabilities of all threefold intersections must be added.
3. For higher-order intersections, terms with an odd number of events are added, while terms with an even number of events are subtracted from the sum.
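The expansion can be verified numerically. The Python sketch below uses three assumed events over a small equally likely sample space and compares the inclusion–exclusion sum with the directly computed probability of the union.

from itertools import combinations

omega = set(range(10))                      # assumed equally likely sample space
A = [{0, 1, 2, 3}, {2, 3, 4, 5}, {0, 5, 6}] # assumed example events
P = lambda e: len(e) / len(omega)

def union_prob(events):
    # inclusion-exclusion expansion
    total, n = 0.0, len(events)
    for r in range(1, n + 1):
        for combo in combinations(events, r):
            total += (-1) ** (r + 1) * P(set.intersection(*combo))
    return total

print(union_prob(A), P(set.union(*A)))      # both give 0.7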
8. STATISTICALLY DEPENDENT EVENTS
Two events are statistically dependent if the outcome of one of the events affects the probability of occurrence of the other event:

P(A ∩ B) = P(A)P(B|A)

Example. In a batch of 10 devices, four are defective. Two devices are selected randomly from the batch. The probability of the event 'the second selected device will be non-defective' depends on the outcome of the first event. Indeed, if the first selected device is non-defective (event A), the probability that the second device will be non-defective (event B) is

P(B|A) = 5/9

Conversely, if the first selected device is defective (event Ā), the probability that the second device will be non-defective (event B) is

P(B|Ā) = 2/3

As a result, the probability of event B depends on the outcome of event A.
9. STATISTICALLY INDEPENDENT EVENTS
Two events are statistically independent if the probability of occurrence of one of the events is not influenced by the outcome of the other event.

Example. In an experiment of flipping two different coins, the events 'head on the first coin' and 'tail on the second coin' are statistically independent.

Example. For two components coming from separate suppliers, the events 'the first component is non-defective' and 'the second component is non-defective' are statistically independent. In this case

P(A|B) = P(A) and P(B|A) = P(B)

For statistically independent events, the probability that they will both occur simultaneously is

P(A ∩ B) = P(A)P(B)
Indeed, this follows immediately from P(A ∩ B) = P(A)P(B|A) and P(B|A) = P(B). If there are n statistically independent events, the probability of all of them occurring simultaneously is

P(A1 ∩ A2 ∩ ... ∩ An) = Π_{i=1}^{n} P(Ai)
10. PROBABILITY OF A UNION OF INDEPENDENT EVENTS

For two statistically independent events, the probability that at least one will occur is

P(A ∪ B) = P(A) + P(B) − P(A)P(B)

Indeed, the above follows from P(A ∪ B) = P(A) + P(B) − P(A ∩ B) and P(A ∩ B) = P(A)P(B), valid for statistically independent events.
11. BOOLEAN VARIABLES AND BOOLEAN ALGEBRA

Boolean variables are indicator variables for events. Boolean variables represent the two states of an event, occurrence and non-occurrence, and this defines the link between events and Boolean variables. Thus, a Boolean indicator variable can only take the values '0' or '1' and is defined in the following way:

a = 0 if event A does not occur
a = 1 if event A occurs
Boolean variables are often used in risk analysis to represent the two states of a system or component. (Let A be the event 'the system works'. Then a = 1 corresponds to the event A, 'the system works', and a = 0 corresponds to the event Ā, 'the system is in a failed state'.) For example, the state of components arranged in series or in parallel can be presented with Boolean variables.
The series arrangement fails if and only if at least one component fails:

s = c1 c2 ... cN  (s = 0 if any ck = 0)

The system logically arranged in series works if and only if all of the components work:

s = c1 c2 ... cN  (s = 1 if all ck = 1)
The system in parallel fails if and only if all of the components fail:

s = c1 + c2 + ... + cN  (s = 0 if all ck = 0)

The system in parallel works if and only if at least one of the components works:

s = c1 + c2 + ... + cN  (s = 1 if at least one ck = 1)
Boolean algebra defines the operations with Boolean variables and is used in risk analysis to translate the state of a system into Boolean expressions. Boolean expressions represent the structure function of the system. It can be shown that the expected value of the structure function is the reliability of the system (Barlow and Proschan, 1975). Boolean expressions are also a useful tool for representing the structure of fault trees and reliability block diagrams (Hoyland and Rausand, 1994). The basic gate types used in a fault tree (AND gate, OR gate and NOT gate) correspond one-to-one to the basic Boolean operations.
BOOLEAN OPERATIONS

Boolean algebra consists of Boolean variables and logical operations defined over them. The operations AND, OR and NOT in Boolean algebra are analogous to the operations intersection, union and complementation in set theory. The Boolean operations will be illustrated by the event 'fluid supply' in a case where the fluid is delivered by two pumps A and B working independently. The values a = 1 (b = 1) correspond to the events 'pump A (B) in working state', whereas the values a = 0 (b = 0) correspond to the events 'pump A (B) in failed state'.
1. Logical AND, '·'

Truth table:

a  b  ab
1  1  1
1  0  0
0  1  0
0  0  0

Example. ab = 1: Full-capacity fluid supply is present (both pumps A and B work).
An AND-gate, with output event A ∩ B and input events A and B, is a basic building block of fault trees. The output event of an AND-gate occurs if all input events occur simultaneously.
2. Logical OR, '+'

Truth table:

a  b  a + b
1  1  1
1  0  1
0  1  1
0  0  0
Example. a + b = 1: Fluid supply is present (at least one pump, A or B, is working).

An OR-gate, with output event A ∪ B and input events A and B, is a basic building block of fault trees. The output event of an OR-gate occurs if at least one of the input events occurs.
3. Logical NOT, '‾'

Associated with the complementary event of A. If a = 1 is the event 'the pump works', a = 0 is the event 'the pump does not work'.

Truth table:

a  ā
1  0
0  1
LAWS OF BOOLEAN ALGEBRA

In Boolean expressions, OR is symbolised by '+', but note that the sign '+' does not imply algebraic addition. The sign for AND is omitted.
1. Associative laws
(a + b) + c = a + (b + c)
(ab)c = a(bc)

2. Commutative laws
a + b = b + a
ab = ba

3. Distributive laws
a(b + c) = ab + ac
a + bc = (a + b)(a + c)

(Note that the last rule is not analogous to the distributive law in ordinary algebra!)

4. Identity laws
a + 0 = a
a · 1 = a

5. Complementary laws
a + ā = 1
a ā = 0

6. Idempotent laws
a + a = a
aa = a

7. Bound laws
a + 1 = 1
a · 0 = 0

8. Absorption laws
a + ab = a
a(a + b) = a

9. Involution law
The complement of ā is a.

10. 0/1 laws
0̄ = 1
1̄ = 0

11. De Morgan's laws
(a + b)‾ = ā b̄
(ab)‾ = ā + b̄

The following relationships help remove redundancies in Boolean expressions:

a + ab = a
a(a + b) = a
a + āb = a + b
All laws can be proved using truth tables. Proof of the distributive law a + bc = (a + b)(a + c):

a  b  c  a + bc  (a + b)(a + c)
1  1  1    1          1
1  1  0    1          1
1  0  1    1          1
1  0  0    1          1
0  1  1    1          1
0  1  0    0          0
0  0  1    0          0
0  0  0    0          0
which shows that the two Boolean expressions a + bc and (a + b)(a + c) are logically equivalent. Proof of De Morgan's law (ab)‾ = ā + b̄:
a
a
b
b
ab
ab
a þ b
1 1 0 0
0 0 1 1
1 0 1 0
0 1 0 1
1 0 0 0
0 1 1 1
0 1 1 1
Clearly, some of the Boolean operations resemble the operations in ordinary (numerical) algebra. However, in numerical algebra there is no unary operation equivalent to complementation, and the idempotency laws do not hold. Furthermore, although the union is distributive over the intersection,

A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)

addition in ordinary algebra is not distributive over multiplication: a + bc ≠ (a + b)(a + c). Proof of the relationship a + āb = a + b:

a + āb = (a + ā)(a + b) = a + b
SIMPLIFYING BOOLEAN EXPRESSIONS

Reducing Boolean expressions to a sum-of-products form

Boolean algebra rules are mainly used to reduce complex Boolean expressions to their simplest sum-of-products form.

Example.

(ab + c)(āb + c)(ab + c̄) = (abc + ābc + c)(ab + c̄) = abc

Example.

(ab + c)(āb + c)(ab + c̄) + (abc)‾(ab + bc + ac)
= abc + (ā + b̄ + c̄)(ab + bc + ac)
= abc + ābc + ab̄c + abc̄
= bc(a + ā) + ab̄c + abc̄
= bc + ab̄c + abc̄
= c(b + ab̄) + abc̄
= c(b + a) + abc̄
= cb + ca + abc̄
= cb + a(c + bc̄)
= cb + a(c + b)
= ab + bc + ac
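Such reductions can also be checked mechanically. Here is a minimal sketch using the sympy library (an assumption; any Boolean-algebra package would do) to bring the first example to sum-of-products (disjunctive normal) form:

```python
from sympy import symbols
from sympy.logic.boolalg import to_dnf

a, b, c = symbols('a b c')

# (ab + c)(a'b + c)(ab + c'): '~' is NOT, '&' is AND, '|' is OR
expr = ((a & b) | c) & ((~a & b) | c) & ((a & b) | ~c)

print(to_dnf(expr, simplify=True))   # a & b & c
```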
Appendix B
1. RANDOM VARIABLES. BASIC PROPERTIES

Random variables can be discrete or continuous. A discrete random variable X takes only discrete values X = x1, x2, ..., xn, with probabilities f(x1), f(x2), ..., f(xn), and no other value.
X          x1      x2      ...    xn
P(X = x)   f(x1)   f(x2)   ...    f(xn)

where f(x) = P(X = x) is the probability (mass) function of the random variable:

\[ \sum_{i=1}^{n} f(x_i) = 1 \]
Example. The probability distribution of the score X from throwing a perfect die:

x          1     2     3     4     5     6
P(X = x)   1/6   1/6   1/6   1/6   1/6   1/6
Distribution (cumulative distribution) function of discrete random variables

\[ F(x) = P(X \le x) = \sum_{x_i \le x} f(x_i) \]
Expected value (mean)

\[ E(X) = \mu = \sum_{i=1}^{n} x_i f(x_i) \]
Variance

\[ V(X) = E[(X - \mu)^2] = \sum_{i=1}^{n} (x_i - \mu)^2 f(x_i) \]
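As a quick numeric illustration (not from the book), the mean and variance of the die score from the example above follow directly from these two formulas:

```python
# Die score: values 1..6, each with probability 1/6
values = range(1, 7)
f = 1 / 6

mean = sum(x * f for x in values)                    # E(X) = 3.5
variance = sum((x - mean) ** 2 * f for x in values)  # V(X) = 35/12 ~ 2.9167

print(mean, variance)
```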
Examples (discrete random variables):

• The state of the system (working or failed) at a particular time.
• The number of failures of a repairable component (system) in a finite time interval.
• The number of failed structural elements during a test.
• The number of defects in a structural element after manufacturing.
2. BOOLEAN RANDOM VARIABLES

An important class of discrete random variables is the class of Boolean random variables. A Boolean random variable X takes on only two values, 'X = 1' (true) and 'X = 0' (false), with specified probabilities: '1' (occurrence) and '0' (non-occurrence). When describing the state of a component or a system, '1' corresponds to working and '0' to failed. The state of a system can thus be defined by the discrete distribution of the Boolean variable X, where p is the probability that the system will be working:
State    Working    Failed
X           1          0
f(x)        p        1 − p
3. CONTINUOUS RANDOM VARIABLES

Continuous random variables take on continuous values from a specified interval. The probability that the continuous random variable X will take on values from the interval [a, b] is

\[ P(a \le X \le b) = \int_a^b f(x)\,dx \]

where f(x) is the probability density function (p.d.f.) of X, f(x) ≥ 0 for all x, and \( \int_{-\infty}^{\infty} f(x)\,dx = 1 \).

Examples:

• The value of a design parameter (e.g. length, yield strength).
• The operating load acting on a structural component.
• Time to failure, time between failures.
• The magnitude of the residual stress at the surface of a component.
4. PROBABILITY DENSITY FUNCTION

The distribution of the random variable X is specified by its probability density function (p.d.f.) f(x). The p.d.f. describes how the probability of obtaining particular values of the random variable is spread over the range of all possible values. It is important to understand that the function f(x) is not itself a probability but a probability density (probability per unit value). Thus, the probability of a value in the infinitesimal interval x, x + dx is f(x)dx. The probability of obtaining a value in any specified interval x1 ≤ X ≤ x2 is

\[ P(x_1 \le X \le x_2) = \int_{x_1}^{x_2} f(x)\,dx \]
[Figure: probability density f(x) against the value x; the area under f(x) between x1 and x2 equals P(x1 ≤ X ≤ x2)]
Basic property of the probability density function

The total area beneath the probability density function f(x) must always equal 1:

\[ \int_{-\infty}^{\infty} f(x)\,dx = 1 \]

The integral expresses the fact that f(x) is a probability distribution, i.e. the probabilities of all outcomes must add up to unity.
5. CUMULATIVE DISTRIBUTION FUNCTION

Let F(x) denote the (cumulative) distribution function (c.d.f.) of the random variable X. F(x) gives the probability P(X ≤ x) that the random variable X will be smaller than or equal to a specified value x:

\[ F(x) = P(X \le x) = \int_{-\infty}^{x} f(\xi)\,d\xi \]

where f(x) is the probability density function. Accordingly, F(x) gives the area beneath the probability density function f(x) up to the value x. The cumulative distribution function is related to the probability density function by f(x) = dF(x)/dx; F(x) is a monotone non-decreasing function of x. The probability that the random variable will take values from the interval (x1, x2] is

\[ P(x_1 < X \le x_2) = \int_{x_1}^{x_2} f(\xi)\,d\xi = F(x_2) - F(x_1) \]
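A small numeric sketch of these relationships (an illustration, not from the book), using the negative exponential distribution with an assumed hazard rate λ = 0.5:

```python
from scipy.stats import expon
from scipy.integrate import quad

lam = 0.5                    # assumed hazard rate
dist = expon(scale=1 / lam)  # negative exponential distribution

# Total area under the p.d.f. equals 1
area, _ = quad(dist.pdf, 0, float('inf'))
print(area)                  # ~1.0

# P(x1 < X <= x2) computed two ways: integrating f(x), and as F(x2) - F(x1)
x1, x2 = 1.0, 3.0
p_int, _ = quad(dist.pdf, x1, x2)
p_cdf = dist.cdf(x2) - dist.cdf(x1)
print(p_int, p_cdf)          # the two values agree
```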
6. JOINT DISTRIBUTION OF CONTINUOUS RANDOM VARIABLES

Joint probability density function

f(x, y) is a joint probability density function of the two random variables X and Y if

\[ f(x, y) \ge 0 \quad \text{for } -\infty < x < \infty \text{ and } -\infty < y < \infty \]

and

\[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\,dx\,dy = 1 \]
Joint distribution function

\[ F(x, y) = P(X \le x \text{ and } Y \le y), \qquad F(x, y) = \int_{-\infty}^{y} \int_{-\infty}^{x} f(r, s)\,dr\,ds \]

\[ f(x, y) = \frac{\partial^2 F(x, y)}{\partial x\,\partial y} \]
Marginal probability density functions

\[ f_1(x) = \int_{-\infty}^{\infty} f(x, y)\,dy \qquad f_2(y) = \int_{-\infty}^{\infty} f(x, y)\,dx \]
Marginal distribution functions

\[ F_1(x) = P(X \le x) = \lim_{y \to \infty} F(x, y) \qquad F_2(y) = P(Y \le y) = \lim_{x \to \infty} F(x, y) \]
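For instance (a sketch, not from the book), with the assumed joint density f(x, y) = x + y on the unit square, the marginal p.d.f. of X defined above can be obtained by numerical integration:

```python
from scipy.integrate import quad

def f(x, y):
    # Assumed joint p.d.f. on 0 <= x, y <= 1 (it integrates to 1)
    return x + y

def f1(x):
    # Marginal p.d.f. of X: integrate the joint density over y
    value, _ = quad(lambda y: f(x, y), 0, 1)
    return value

print(f1(0.3))   # 0.8, i.e. x + 1/2 evaluated at x = 0.3
```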
7. CORRELATED RANDOM VARIABLES

Suppose that X1 and X2 are random variables with expected (mean) values E(X1) = μ1, E(X2) = μ2 and variances V(X1) = σ1² and V(X2) = σ2². The covariance of X1 and X2 is defined as

\[ \mathrm{Cov}(X_1, X_2) = E[(X_1 - \mu_1)(X_2 - \mu_2)] = E(X_1 X_2) - E(X_1)E(X_2) \]

The correlation coefficient ρ1,2 characterising X1 and X2, defined as

\[ \rho_{1,2} = \frac{\mathrm{Cov}(X_1, X_2)}{\sigma_1 \sigma_2}, \qquad 0 \le |\rho_{1,2}| \le 1 \]

measures the degree of linear association between X1 and X2. The random variables are not correlated if ρ1,2 = 0.
Properties of the covariance

\[ \mathrm{Cov}(X_1, X_2) = \mathrm{Cov}(X_2, X_1) \qquad \mathrm{Cov}(X, X) = V(X) \]

\[ V\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} V(X_i) + 2\sum_{i<j} \mathrm{Cov}(X_i, X_j) \]

Covariance matrix

\[ C = \begin{pmatrix} V(X_1) & \mathrm{Cov}(X_1, X_2) & \ldots & \mathrm{Cov}(X_1, X_n) \\ \ldots & \ldots & \ldots & \ldots \\ \mathrm{Cov}(X_n, X_1) & \mathrm{Cov}(X_n, X_2) & \ldots & V(X_n) \end{pmatrix} \]
If the random variables are statistically independent, Cov(Xi, Xj) = 0 for all i ≠ j. Any pair of statistically independent random variables Xi, Xj are not correlated (Cov(Xi, Xj) = 0), but the converse is not necessarily true: the random variables may not be correlated and still be statistically dependent! For statistically independent random variables,

\[ V\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} V(X_i) \]
Examples:

• Correlated random variables related to steels: X1 (fatigue strength), X2 (yield strength), X3 (hardness).
• Strongly correlated random variables: X1 (fatigue strength), X2 (residual stress at the surface).
• Non-correlated random variables: X1 (service load), X2 (density of the material).
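Sample estimates of the covariance and the correlation coefficient are readily obtained with numpy. A minimal sketch on made-up data for two positively associated variables:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(500, 40, size=1000)            # made-up 'strength-like' data
x2 = 0.5 * x1 + rng.normal(0, 15, size=1000)   # variable linearly related to x1

print(np.cov(x1, x2))       # 2x2 covariance matrix: V(X1), Cov; Cov, V(X2)
print(np.corrcoef(x1, x2))  # correlation matrix; off-diagonal entry is rho_1,2
```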
8. STATISTICALLY INDEPENDENT RANDOM VARIABLES

1. Random variables X and Y are statistically independent if the values taken by X do not depend on the values taken by Y and vice versa. Let us define events A and B as follows: A: a ≤ X ≤ b and B: c ≤ Y ≤ d. Random variables X and Y are statistically independent if and only if A and B are statistically independent events (P(A ∩ B) = P(A)P(B)) for any choice of real numbers a, b, c and d.
2. Two random variables X and Y are statistically independent if and only if the factorisation P(X ≤ x and Y ≤ y) = P(X ≤ x)P(Y ≤ y), or F(x, y) = F1(x)F2(y), is satisfied for all values x and y, where F(x, y) is the joint distribution function (d.f.) of X and Y, F1(x) is the marginal d.f. of X and F2(y) is the marginal d.f. of Y.
3. Two random variables X and Y are statistically independent if and only if the factorisation f(x, y) = f1(x)f2(y) is possible, where f(x, y) is the joint p.d.f. of X and Y, f1(x) is the marginal p.d.f. of X and f2(y) is the marginal p.d.f. of Y.
4. Two random variables X and Y are statistically independent if and only if f(x, y) = g(x)h(y), where g(x) and h(y) are non-negative functions of x and y, f(x, y) ≥ 0 in a ≤ x ≤ b, c ≤ y ≤ d, and f(x, y) = 0 elsewhere.

If the random variables X and Y are statistically independent, then any two functions h(X) and g(Y) are also statistically independent.

Examples of statistically independent random variables:

• X1 (service load), X2 (strength), in cases where the service load does not cause strength degradation.
• X1 (material property), X2 (dimensions).

Question. Is the strength of a load-carrying metal rope statistically independent of its length?
9. PROPERTIES OF THE EXPECTATIONS AND VARIANCES OF RANDOM VARIABLES

Suppose that X is a random variable with p.d.f. f(x). The expected value of a function g(X) of the random variable is

\[ E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx \]
Expected value of a linear function:

E[aX + b] = aE(X) + b

Expected value of a sum of random variables:

E[X1 + X2 + ··· + Xn] = E(X1) + E(X2) + ··· + E(Xn)
This formula is valid irrespective of whether the random variables are statistically independent or not. Example: the mean load from several collinear loads is a sum of the means of the separate loads, irrespective of whether the loads are statistically independent or not.

Alternative formula for the variance of a random variable:

V(X) = E(X²) − [E(X)]²

Variance of a random variable added to a constant:

V(a + X) = V(X)

Variance of a random variable multiplied by a constant:

V(aX) = a²V(X)

Expected value of a product of statistically independent random variables:

E(XY) = E(X)E(Y)
Important property

The variance of a sum of statistically independent random variables is equal to the sum of the variances of the random variables:

V(X1 + X2 + ··· + Xn) = V(X1) + V(X2) + ··· + V(Xn)

Example. Calculate the variance of the design parameter d from the machine part in the figure if the variances of the lengths dA, dB and dC of parts A, B and C are σA², σB² and σC², respectively. Parts A and B are aligned with the edges of part C.

[Figure: part C of length dC; parts A (length dA) and B (length dB) sit inside it, aligned with its edges, with the gap d between them]
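A sketch of the answer (assuming the lengths dA, dB and dC are statistically independent): from the figure, d = dC − dA − dB. Using V(aX) = a²V(X) and the additivity of variances of statistically independent random variables, V(d) = V(dC) + (−1)²V(dA) + (−1)²V(dB) = σC² + σA² + σB².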
10. IMPORTANT THEORETICAL RESULTS REGARDING THE SAMPLE MEAN

The strong law of large numbers

The mean of a sum of identically distributed and statistically independent random variables will, with probability one, converge to the mean of the distribution characterising the random variables when the number of random variables approaches infinity. Let X1, X2, ..., Xn be n statistically independent and identically distributed random variables, where E(Xi) = μ is the mean of the common distribution. Then, with probability one,

\[ \frac{X_1 + X_2 + \cdots + X_n}{n} \to \mu \quad \text{as } n \to \infty \]
Expected value and variance of a sample mean

Random samples x1, x2, ..., xn of size n are taken from a statistical distribution with mean μ and variance σ²: E(xi) = μ, V(xi) = σ². If \( \bar{x} = (1/n)\sum_{i=1}^{n} x_i \) is the sample mean, then

\[ E(\bar{x}) = \frac{1}{n}\sum_{i=1}^{n} E(x_i) = \frac{n\mu}{n} = \mu \]

and

\[ V(\bar{x}) = \frac{1}{n^2}\sum_{i=1}^{n} V(x_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n} \]

The variance σ²/n of the sample mean of n measurements is always smaller than the variance σ² of a single measurement.
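A quick Monte Carlo check of V(x̄) = σ²/n (an illustration, not from the book; the distribution and sample size are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, repeats = 2.0, 25, 20_000

# Draw many samples of size n and look at the spread of their sample means
means = rng.normal(10.0, sigma, size=(repeats, n)).mean(axis=1)

print(means.var())     # close to...
print(sigma**2 / n)    # 0.16
```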
Appendix C

Cumulative distribution function of the standard normal distribution

The tabulated value is Φ(z) = P(Z ≤ z), where z is the row value plus the column heading.

z     0.00   0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.08   0.09
0.0  0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1  0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2  0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3  0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4  0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5  0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6  0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7  0.7580 0.7611 0.7642 0.7673 0.7703 0.7734 0.7764 0.7793 0.7823 0.7852
0.8  0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9  0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0  0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1  0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2  0.8849 0.8869 0.8888 0.8906 0.8925 0.8943 0.8962 0.8980 0.8997 0.9015
1.3  0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4  0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5  0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6  0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7  0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8  0.9641 0.9648 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9  0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0  0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1  0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2  0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3  0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4  0.9918 0.9920 0.9922 0.9924 0.9927 0.9929 0.9930 0.9932 0.9934 0.9936
2.5  0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6  0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7  0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8  0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9  0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0  0.9986 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1  0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2  0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3  0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996
3.4  0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998
3.5  0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998
3.6  0.9998 0.9998 0.9998 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
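The entries can be reproduced (and extended to any z) with scipy, assuming it is available:

```python
from scipy.stats import norm

print(norm.cdf(1.96))   # 0.9750..., the table entry in row 1.9, column 0.06
```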
Appendix D

χ²-distribution

Entries are the values exceeded with probability α by a χ²-distributed random variable with n degrees of freedom; α is the probability at the head of each column. Entries such as 0.0⁶393 stand for 0.000000393.
n      0.9995   0.999    0.995    0.990    0.975    0.95     0.90    0.80    0.70    0.60
1    0.0⁶393  0.0⁵157  0.0⁴393  0.0³157  0.0³982  0.0²393  0.0158  0.0642  0.148   0.275
2    0.0²100  0.0²200  0.0100   0.0201   0.0506   0.103    0.211   0.446   0.713   1.02
3    0.0153   0.0243   0.0717   0.115    0.216    0.352    0.584   1.00    1.42    1.87
4    0.0639   0.0908   0.207    0.297    0.484    0.711    1.06    1.65    2.19    2.75
5    0.158    0.210    0.412    0.554    0.831    1.15     1.61    2.34    3.00    3.66
6    0.299    0.381    0.676    0.872    1.24     1.64     2.20    3.07    3.83    4.57
7    0.485    0.598    0.989    1.24     1.69     2.17     2.83    3.82    4.67    5.49
8    0.710    0.857    1.34     1.65     2.18     2.73     3.49    4.59    5.53    6.42
9    0.972    1.15     1.73     2.09     2.70     3.33     4.17    5.38    6.39    7.36
10   1.26     1.48     2.16     2.56     3.25     3.94     4.87    6.18    7.27    8.30
11   1.59     1.83     2.60     3.05     3.82     4.57     5.58    6.99    8.15    9.24
12   1.93     2.21     3.07     3.57     4.40     5.23     6.30    7.81    9.03    10.2
13   2.31     2.62     3.57     4.11     5.01     5.89     7.04    8.63    9.93    11.1
14   2.70     3.04     4.07     4.66     5.63     6.57     7.79    9.47    10.8    12.1
15   3.11     3.48     4.60     5.23     6.26     7.26     8.55    10.3    11.7    13.0
16   3.54     3.94     5.14     5.81     6.91     7.96     9.31    11.2    12.6    14.0
17   3.98     4.42     5.70     6.41     7.56     8.67     10.1    12.0    13.5    14.9
18   4.44     4.90     6.26     7.01     8.23     9.39     10.9    12.9    14.4    15.9
19   4.91     5.41     6.84     7.63     8.91     10.1     11.7    13.7    15.4    16.9
20   5.40     5.92     7.43     8.26     9.59     10.9     12.4    14.6    16.3    17.8
21   5.90     6.45     8.03     8.90     10.3     11.6     13.2    15.4    17.2    18.8
22   6.40     6.98     8.64     9.54     11.0     12.3     14.0    16.3    18.1    19.7
23   6.92     7.53     9.26     10.2     11.7     13.1     14.8    17.2    19.0    20.7
24   7.45     8.08     9.89     10.9     12.4     13.8     15.7    18.1    19.9    21.7
25   7.99     8.65     10.5     11.5     13.1     14.6     16.5    18.9    20.9    22.6
26   8.54     9.22     11.2     12.2     13.8     15.4     17.3    19.8    21.8    23.6
27   9.09     9.80     11.8     12.9     14.6     16.2     18.1    20.7    22.7    24.5
28   9.66     10.4     12.5     13.6     15.3     16.9     18.9    21.6    23.6    25.5
29   10.2     11.0     13.1     14.3     16.0     17.7     19.8    22.5    24.6    26.5
30   10.8     11.6     13.8     15.0     16.8     18.5     20.6    23.4    25.5    27.4
31   11.4     12.2     14.5     15.7     17.5     19.3     21.4    24.3    26.4    28.4
32   12.0     12.8     15.1     16.4     18.3     20.1     22.3    25.1    27.4    29.4
33   12.6     13.4     15.8     17.1     19.0     20.9     23.1    26.0    28.3    30.3
34   13.2     14.1     16.5     17.8     19.8     21.7     24.0    26.9    29.2    31.3
35   13.8     14.7     17.2     18.5     20.6     22.5     24.8    27.8    30.2    32.3
36   14.4     15.3     17.9     19.2     21.3     23.3     25.6    28.7    31.1    33.3
37   15.0     16.0     18.6     20.0     22.1     24.1     26.5    29.6    32.1    34.2
38   15.6     16.6     19.3     20.7     22.9     24.9     27.3    30.5    33.0    35.2
39   16.3     17.3     20.0     21.4     23.7     25.7     28.2    31.4    33.9    36.2
40   16.9     17.9     20.7     22.2     24.4     26.5     29.1    32.3    34.9    37.1
41   17.5     18.6     21.4     22.9     25.2     27.3     29.9    33.3    35.8    38.1
42   18.2     19.2     22.1     23.7     26.0     28.1     30.8    34.2    36.8    39.1
43   18.8     19.9     22.9     24.4     26.8     29.0     31.6    35.1    37.7    40.0
44   19.5     20.6     23.6     25.1     27.6     29.8     32.5    36.0    38.6    41.0
45   20.1     21.3     24.3     25.9     28.4     30.6     33.4    36.9    39.6    42.0
46   20.8     21.9     25.0     26.7     29.1     31.4     34.2    37.8    40.5    43.0
47   21.5     22.6     25.8     27.4     30.0     32.3     35.1    38.7    41.5    43.9
48   22.1     23.3     26.5     28.2     30.8     33.1     35.9    39.6    42.4    44.9
49   22.8     24.0     27.2     28.9     31.6     33.9     36.8    40.5    43.4    45.9
50   23.5     24.7     28.0     29.7     32.4     34.8     37.7    41.4    44.3    46.9
51   24.1     25.4     28.7     30.5     33.2     35.6     38.6    42.4    45.3    47.8
52   24.8     26.1     29.5     31.2     34.0     36.4     39.4    43.3    46.2    48.8
53   25.5     26.8     30.2     32.0     34.8     37.3     40.3    44.2    47.2    49.8
54   26.2     27.5     31.0     32.8     35.6     38.1     41.2    45.1    48.1    50.8
55   26.9     28.2     31.7     33.6     36.4     39.0     42.1    46.0    49.1    51.7
56   27.6     28.9     32.5     34.3     37.2     39.8     42.9    47.0    50.0    52.7
57   28.2     29.6     33.2     35.1     38.0     40.6     43.8    47.9    51.0    53.7
58   28.9     30.3     34.0     35.9     38.8     41.5     44.7    48.8    51.9    54.7
59   29.6     31.0     34.8     36.7     39.7     42.3     45.6    49.7    52.9    55.6
60   30.3     31.7     35.5     37.5     40.5     43.2     46.5    50.6    53.8    56.6
61   31.0     32.5     36.3     38.3     41.3     44.0     47.3    51.6    54.8    57.6
62   31.7     33.2     37.1     39.1     42.1     44.9     48.2    52.5    55.7    58.6
63   32.5     33.9     37.8     39.9     43.0     45.7     49.1    53.5    56.7    59.6
64   33.2     34.6     38.6     40.6     43.8     46.6     50.0    54.3    57.6    60.5
65   33.9     35.4     39.4     41.4     44.6     47.4     50.9    55.3    58.6    61.5
66   34.6     36.1     40.2     42.2     45.4     48.3     51.8    56.2    59.5    62.5
67   35.3     36.8     40.9     43.0     46.3     49.2     52.7    57.1    60.5    63.5
68   36.0     37.6     41.7     43.8     47.1     50.0     53.5    58.0    61.4    64.4
69   36.7     38.3     42.5     44.6     47.9     50.9     54.4    59.0    62.4    65.4
70   37.5     39.0     43.3     45.4     48.8     51.7     55.3    59.9    63.3    66.4
71   38.2     39.8     44.1     46.2     49.6     52.6     56.2    60.8    64.3    67.4
72   38.9     40.5     44.8     47.1     50.4     53.5     57.1    61.8    65.3    68.4
73   39.6     41.3     45.6     47.9     51.3     54.3     58.0    62.7    66.2    69.3
74   40.4     42.0     46.4     48.7     52.1     55.2     58.9    63.6    67.2    70.3
75   41.1     42.8     47.2     49.5     52.9     56.1     59.8    64.5    68.1    71.3
76   41.8     43.5     48.0     50.3     53.8     56.9     60.7    65.5    69.1    72.3
77   42.6     44.3     48.8     51.1     54.6     57.8     61.6    66.4    70.0    73.2
78   43.3     45.0     49.6     51.9     55.5     58.7     62.5    67.3    71.0    74.2
79   44.1     45.8     50.4     52.7     56.3     59.5     63.4    68.3    72.0    75.2
80   44.8     46.5     51.2     53.5     57.2     60.4     64.3    69.2    72.9    76.2
81   45.5     47.3     52.0     54.4     58.0     61.3     65.2    70.1    73.9    77.2
82   46.3     48.0     52.8     55.2     58.8     62.1     66.1    71.1    74.8    78.1
83   47.0     48.8     53.6     56.0     59.7     63.0     67.0    72.0    75.8    79.1
84   47.8     49.6     54.4     56.8     60.5     63.9     67.9    72.9    76.8    80.1
85   48.5     50.3     55.2     57.6     61.4     64.7     68.8    73.9    77.7    81.1
86   49.3     51.1     56.0     58.5     62.2     65.6     69.7    74.8    78.7    82.1
87   50.0     51.9     56.8     59.3     63.1     66.5     70.6    75.7    79.6    83.0
88   50.8     52.6     57.6     60.1     63.9     67.4     71.5    76.7    80.6    84.0
89   51.5     53.4     58.4     60.9     64.8     68.2     72.4    77.6    81.6    85.0
90   52.3     54.2     59.2     61.8     65.6     69.1     73.3    78.6    82.5    86.0
91   53.0     54.9     60.0     62.6     66.5     70.0     74.2    79.5    83.5    87.0
92   53.8     55.7     60.8     63.4     67.4     70.9     75.1    80.4    84.4    88.0
93   54.5     56.5     61.6     64.2     68.2     71.8     76.0    81.4    85.5    88.9
94   55.3     57.2     62.4     65.1     69.1     72.6     76.9    82.3    86.4    89.9
95   56.1     58.0     63.2     65.9     69.9     73.5     77.8    83.2    87.3    90.9
96   56.8     58.8     64.1     66.7     70.8     74.4     78.7    84.2    88.3    91.9
97   57.6     59.6     64.9     67.6     71.6     75.3     79.6    85.1    89.2    92.9
98   58.4     60.4     65.7     68.4     72.5     76.2     80.5    86.1    90.2    93.8
99   59.1     61.1     66.5     69.2     73.4     77.0     81.4    87.0    91.2    94.8
100  59.9     61.9     67.3     70.1     74.2     77.9     82.4    87.9    92.1    95.8
(Continued)

n     0.50    0.40    0.30    0.20    0.10    0.05    0.025   0.01    0.005   0.001
1    0.455   0.708   1.07    1.64    2.71    3.84    5.02    6.63    7.88    10.8
2    1.39    1.83    2.41    3.22    4.61    5.99    7.38    9.21    10.6    13.8
3    2.37    2.95    3.67    4.64    6.25    7.81    9.35    11.3    12.8    16.3
4    3.36    4.04    4.88    5.99    7.78    9.49    11.1    13.3    14.9    18.5
5    4.35    5.13    6.06    7.29    9.24    11.1    12.8    15.1    16.7    20.5
6    5.35    6.21    7.23    8.56    10.6    12.6    14.4    16.8    18.5    22.5
7    6.35    7.28    8.38    9.80    12.0    14.1    16.0    18.5    20.3    24.3
8    7.34    8.35    9.52    11.0    13.4    15.5    17.5    20.1    22.0    26.1
9    8.34    9.41    10.7    12.2    14.7    16.9    19.0    21.7    23.6    27.9
10   9.34    10.5    11.8    13.4    16.0    18.3    20.5    23.2    25.2    29.6
11   10.3    11.5    12.9    14.6    17.3    19.7    21.9    24.7    26.8    31.3
12   11.3    12.6    14.0    15.8    18.5    21.0    23.3    26.2    28.3    32.9
13   12.3    13.6    15.1    17.0    19.8    22.4    24.7    27.7    29.8    34.5
14   13.3    14.7    16.2    18.2    21.1    23.7    26.1    29.1    31.3    36.1
15   14.3    15.7    17.3    19.3    22.3    25.0    27.5    30.6    32.8    37.7
16   15.3    16.8    18.4    20.5    23.5    26.3    28.8    32.0    34.3    39.3
17   16.3    17.8    19.5    21.6    24.8    27.6    30.2    33.4    35.7    40.8
18   17.3    18.9    20.6    22.8    26.0    28.9    31.5    34.8    37.2    42.3
19   18.3    19.9    21.7    23.9    27.2    30.1    32.9    36.2    38.6    43.8
20   19.3    21.0    22.8    25.0    28.4    31.4    34.2    37.6    40.0    45.3
21   20.3    22.0    23.9    26.2    29.6    32.7    35.5    38.9    41.4    46.8
22   21.3    23.0    24.9    27.3    30.8    33.9    36.8    40.3    42.8    48.3
23   22.3    24.1    26.0    28.4    32.0    35.2    38.1    41.6    44.2    49.7
24   23.3    25.1    27.1    29.6    33.2    36.4    39.4    43.0    45.6    51.2
25   24.3    26.1    28.2    30.7    34.4    37.7    40.6    44.3    46.9    52.6
26   25.3    27.2    29.2    31.8    35.6    38.9    41.9    45.6    48.3    54.1
27   26.3    28.2    30.3    32.9    36.7    40.1    43.2    47.0    49.6    55.5
28   27.3    29.2    31.4    34.0    37.9    41.3    44.5    48.3    51.0    56.9
29   28.3    30.3    32.5    35.1    39.1    42.6    45.7    49.6    52.3    58.3
30   29.3    31.3    33.5    36.3    40.3    43.8    47.0    50.9    53.7    59.7
31   30.3    32.3    34.6    37.4    41.4    45.0    48.2    52.2    55.0    61.1
32   31.3    33.4    35.7    38.5    42.6    46.2    49.5    53.5    56.3    62.5
33   32.3    34.4    36.7    39.6    43.7    47.4    50.7    54.8    57.6    63.9
34   33.3    35.4    37.8    40.7    44.9    48.6    52.0    56.1    59.0    65.2
35   34.3    36.5    38.9    41.8    46.1    49.8    53.2    57.3    60.3    66.6
36   35.3    37.5    39.9    42.9    47.2    51.0    54.4    58.6    61.6    68.0
37   36.3    38.5    41.0    44.0    48.4    52.2    55.7    59.9    62.9    69.3
38   37.3    39.6    42.0    45.1    49.5    53.4    56.9    61.2    64.2    70.7
39   38.3    40.6    43.1    46.2    50.7    54.6    58.1    62.4    65.5    72.1
40   39.3    41.6    44.2    47.3    51.8    55.8    59.3    63.7    66.8    73.4
41   40.3    42.7    45.2    48.4    52.9    56.9    60.6    65.0    68.1    74.7
42   41.3    43.7    46.3    49.5    54.1    58.1    61.8    66.2    69.3    76.1
43   42.3    44.7    47.3    50.5    55.2    59.3    63.0    67.5    70.6    77.4
44   43.3    45.7    48.4    51.6    56.4    60.5    64.2    68.7    71.9    78.7
45   44.3    46.8    49.5    52.7    57.5    61.7    65.4    70.0    73.2    80.1
46   45.3    47.8    50.5    53.8    58.6    62.8    66.6    71.2    74.4    81.4
47   46.3    48.8    51.6    54.9    59.8    64.0    67.8    72.4    75.7    82.7
48   47.3    49.8    52.6    56.0    60.9    65.2    69.0    73.7    77.0    84.0
49   48.3    50.9    53.7    57.1    62.0    66.3    70.2    74.9    78.2    85.4
50   49.3    51.9    54.7    58.2    63.2    67.5    71.4    76.2    79.5    86.7
51   50.3    52.9    55.8    59.2    64.3    68.7    72.6    77.4    80.7    88.0
52   51.3    53.9    56.8    60.3    65.4    69.8    73.8    78.6    82.0    89.3
53   52.3    55.0    57.9    61.4    66.5    71.0    75.0    79.8    83.3    90.6
54   53.3    56.0    58.9    62.5    67.7    72.2    76.2    81.1    84.5    91.9
55   54.3    57.0    60.0    63.6    68.8    73.3    77.4    82.3    85.7    93.2
56   55.3    58.0    61.0    64.7    69.9    74.5    78.6    83.5    87.0    94.5
57   56.3    59.1    62.1    65.7    71.0    75.6    79.8    84.7    88.2    95.8
58   57.3    60.1    63.1    66.8    72.2    76.8    80.9    86.0    89.5    97.0
59   58.3    61.1    64.1    67.9    73.3    77.9    82.1    87.2    90.7    98.3
60   59.3    62.1    65.2    69.0    74.4    79.1    83.3    88.4    92.0    99.6
61   60.3    63.2    66.3    70.0    75.5    80.2    84.5    89.6    93.2    100.9
62   61.3    64.2    67.3    71.1    76.6    81.4    85.7    90.8    94.4    102.2
63   62.3    65.2    68.4    72.2    77.7    82.5    86.8    92.0    95.6    103.4
64   63.3    66.2    69.4    73.3    78.9    83.7    88.0    93.2    96.9    104.7
65   64.3    67.2    70.5    74.4    80.0    84.8    89.2    94.4    98.1    106.0
66   65.3    68.3    71.5    75.4    81.1    86.0    90.3    95.6    99.3    107.3
67   66.3    69.3    72.6    76.5    82.2    87.1    91.5    96.8    100.6   108.5
68   67.3    70.3    73.6    77.6    83.3    88.3    92.7    98.0    101.8   109.8
69   68.3    71.3    74.6    78.6    84.4    89.4    93.9    99.2    103.0   111.1
70   69.3    72.4    75.7    79.7    85.5    90.5    95.0    100.4   104.2   112.3
71   70.3    73.4    76.7    80.8    86.6    91.7    96.2    101.6   105.4   113.6
72   71.3    74.4    77.8    81.9    87.7    92.8    97.4    102.8   106.6   114.8
73   72.3    75.4    78.8    82.9    88.8    93.9    98.5    104.0   107.9   116.1
74   73.3    76.4    79.9    84.0    90.0    95.1    99.7    105.2   109.1   117.3
75   74.3    77.5    80.9    85.1    91.1    96.2    100.8   106.4   110.3   118.6
76   75.3    78.5    82.0    86.1    92.2    97.4    102.0   107.6   111.5   119.9
77   76.3    79.5    83.0    87.2    93.3    98.5    103.2   108.8   112.7   121.1
78   77.3    80.5    84.0    88.3    94.4    99.6    104.3   110.0   113.9   122.3
79   78.3    81.5    85.1    89.3    95.5    100.7   105.5   111.1   115.1   123.6
80   79.3    82.6    86.1    90.4    96.6    101.9   106.6   112.3   116.3   124.8
81   80.3    83.6    87.2    91.5    97.7    103.0   107.8   113.5   117.5   126.1
82   81.3    84.6    88.2    92.5    98.8    104.1   108.9   114.7   118.7   127.3
83   82.3    85.6    89.2    93.6    99.9    105.3   110.1   115.9   119.9   128.6
84   83.3    86.6    90.3    94.7    101.0   106.4   111.2   117.1   121.1   129.8
85   84.3    87.7    91.3    95.7    102.1   107.5   112.4   118.2   122.3   131.0
86   85.3    88.7    92.4    96.8    103.2   108.6   113.5   119.4   123.5   132.3
87   86.3    89.7    93.4    97.9    104.3   109.8   114.7   120.6   124.7   133.5
88   87.3    90.7    94.4    98.9    105.4   110.9   115.8   121.8   125.9   134.7
89   88.3    91.7    95.5    100.0   106.5   112.0   117.0   122.9   127.1   136.0
90   89.3    92.8    96.5    101.1   107.6   113.1   118.1   124.1   128.3   137.2
91   90.3    93.8    97.6    102.1   108.7   114.3   119.3   125.3   129.5   138.4
92   91.3    94.8    98.6    103.2   109.8   115.4   120.4   126.5   130.7   139.7
93   92.3    95.8    99.6    104.2   110.9   116.5   121.6   127.6   131.9   140.9
94   93.3    96.8    100.7   105.3   111.9   117.6   122.7   128.8   133.1   142.1
95   94.3    97.9    101.7   106.4   113.0   118.8   123.9   130.0   134.2   143.3
96   95.3    98.9    102.8   107.4   114.1   119.9   125.0   131.1   135.4   144.6
97   96.3    99.9    103.8   108.5   115.2   121.0   126.1   132.3   136.6   145.8
98   97.3    100.9   104.8   109.5   116.3   122.1   127.3   133.5   137.8   147.0
99   98.3    101.9   105.9   110.6   117.4   123.2   128.4   134.6   139.0   148.2
100  99.3    102.9   106.9   111.7   118.5   124.3   129.6   135.8   140.2   149.4
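The critical values can be reproduced with scipy, assuming it is available:

```python
from scipy.stats import chi2

# Upper-tail critical value: the x for which P(X > x) = alpha, X ~ chi-square(n)
alpha, n = 0.05, 10
print(chi2.ppf(1 - alpha, df=n))   # 18.307..., the table entry for n = 10, alpha = 0.05
```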
References

Abernethy R.B. (1994) The New Weibull Handbook, Robert B. Abernethy, North Palm Beach.
Abramowitz M. and Stegun I.A. (eds) (1972) Handbook of Mathematical Functions, with Formulas, Graphs, and Mathematical Tables, Dover Publications.
Amstadter B.L. (1971) Reliability Mathematics: Fundamentals, Practices, Procedures, McGraw-Hill Book Company.
Andrews J.D. and Moss T.R. (2002) Reliability and Risk Assessment, Professional Engineering Publishing.
Ang A.H.S. and Tang W.H. (1975) Probability Concepts in Engineering Planning and Design, vol. 1, Basic Principles, John Wiley & Sons, Ltd.
Ashby M.F. and Jones D.R.H. (2000) Engineering Materials, vol. 1: An Introduction to their Properties and Applications, Butterworth-Heinemann.
ASM (1985a) American Society for Metals, Metals Handbook, vol. 9: Metallography and Microstructures.
ASM (1985b) American Society for Metals, Metals Handbook, 9th edn, vol. 8: Mechanical Testing, Metals Park, Ohio.
Atkinson K.E. (1989) An Introduction to Numerical Analysis, 2nd edn, John Wiley & Sons, Ltd.
Barlow R.E. and Proschan F. (1965) Mathematical Theory of Reliability, John Wiley & Sons, Inc.
Barlow R.E. and Proschan F. (1975) Statistical Theory of Reliability and Life Testing, Rinehart and Winston, Inc.
Batdorf S.B. and Crose J.G. (1974) 'A statistical theory for the fracture of brittle structures subjected to non-uniform polyaxial stresses', Journal of Applied Mechanics, 41, 459–464.
Bazovsky I. (1961) Reliability Theory and Practice, Prentice-Hall, Inc.
Bedford T. and Cooke R. (2001) Probabilistic Risk Analysis, Foundations and Methods, Cambridge University Press.
Bergman B. (1985) Journal of Materials Science Letters, 4, 1143–1146.
Bird G.C. and Saynor D. (1984) 'The effect of peening shot size on the performance of carbon-steel springs', Journal of Mechanical Working Technology, 10 (2), 175.
Blake I.F. (1979) An Introduction to Applied Probability, John Wiley & Sons, Ltd.
Blischke W.R. and Murthy D.N. (2000) Reliability: Modelling, Prediction, and Optimisation, John Wiley & Sons, Ltd.
Bogdanoff J.L. and Kozin F. (1985) Probabilistic Models of Cumulative Damage, John Wiley & Sons, Ltd, New York.
Booker J.D., Raines M. and Swift K.G. (2001) Designing Capable and Reliable Products, Butterworth.
Box G.E.P. and Muller M.E. (1958) 'A note on the generation of random normal deviates', Annals of Mathematical Statistics, 28, 610–611.
Brooksbank D. and Andrews K.W. (1972) 'Stress fields around inclusions and their relation to mechanical properties', Journal of the Iron and Steel Institute, 4, 246–255.
Budinski K.G. (1996) Engineering Materials: Properties and Selection, 5th edn, Prentice-Hall, Inc.
Burrell N.K. (1985) 'Controlled shot peening of automotive components', SAE Transactions, Section 3, 94 (850365).
Bury K.V. (1975) Statistical Models in Applied Science, John Wiley & Sons, Ltd.
Carter A.D.S. (1986) Mechanical Reliability, Macmillan Education Ltd.
Carter A.D.S. (1997) Mechanical Reliability and Design, Macmillan Press Ltd.
Chatfield C. (1998) Problem Solving: A Statistician's Guide, Chapman & Hall.
Cheney W. and Kincaid D. (1999) Numerical Mathematics and Computing, 4th edn, Brooks/Cole Publishing Company.
Chernykh Yu.M. (1991) 'Determination of the depth of the defective surface layer of silicon steel springs', Metal Science and Heat Treatment, 33 (9–10), 792–793.
Christensen P. and Baker M. (1982) Structural Reliability Theory and its Applications, Springer-Verlag.
Christian J.W. (1965) The Theory of Transformations in Metals and Alloys, Pergamon Press.
Cox B.N. (1989) 'Inductions from Monte-Carlo simulations of small fatigue cracks', Engineering Fracture Mechanics, 33 (4), 655–670.
Cullity B.D. (1978) Elements of X-ray Diffraction, 2nd edn, Addison-Wesley.
Curry D.A. and Knott J.F. (1979) 'Effect of microstructure on cleavage fracture toughness of quenched and tempered steels', Metal Science, 7, 341–345.
Danzer R. and Lube T. (1996) 'New fracture statistics for brittle materials', Fracture Mechanics of Ceramics, 11, 425–439.
Dasgupta A. and Pecht M. (1991) 'Material failure mechanisms and damage models', IEEE Transactions on Reliability, 40 (5), 531–536.
DeGroot M. (1989) Probability and Statistics, Addison-Wesley.
Dianqing L., Shengkun Z. and Wenyong T. (2004) 'Risk based inspection planning for ship structures subjected to corrosion deterioration', Proceedings of the Probabilistic Safety Assessment and Management Conference PSAM7-ESREL'04, Berlin, 6, 3336–3342.
Dieter G.E. (1986) Mechanical Metallurgy, McGraw-Hill.
Dodson B. (1994) Weibull Analysis, ASQC Quality Press.
Dowling N.E. (1999) Mechanical Behaviour of Materials, Prentice Hall.
Draper N.R. and Smith H. (1981) Applied Regression Analysis, 2nd edn, John Wiley & Sons, Ltd.
Drenick R.F. (1960) 'The failure law of complex equipment', Journal of the Society for Industrial and Applied Mathematics, 8, 680–690.
Easterling E. (1983) Introduction to the Physical Metallurgy of Welding, Butterworths.
Ebeling C.E. (1997) An Introduction to Reliability and Maintainability Engineering, McGraw-Hill.
Evans A.G. (1978) 'A general approach for the statistical analysis of multiaxial fracture', Journal of the American Ceramic Society, 61, 302–308.
Everitt B.S. and Hand D.J. (1981) Finite Mixture Distributions, Chapman & Hall.
Foley J.D., van Dam A., Feiner S.K. and Hughes J.F. (1996) Computer Graphics: Principles and Practice, Addison-Wesley Publishing Company.
Freudenthal A.M. (1954) 'Safety and the probability of structural failure', American Society of Civil Engineers Transactions, No. 2843, 1337–1397.
Gere J. and Timoshenko S.P. (1999) Mechanics of Materials, 4th edn, Stanley Thornes Ltd.
Gildersleeve M.J. (1991) 'Relationship between decarburisation and fatigue strength of through hardened and carburising steels', Materials Science and Technology, 7, 307–310.
Glitzmann P. and Klee V. (1994) 'Polytopes: On some complexity of some basic problems in computational convexity II. Volume and mixed volumes'. In Polytopes: Abstract, Convex and Computational, eds T. Bisztriczky et al., Kluwer, Dordrecht.
Gnedenko B.V. (1962) The Theory of Probability, Chelsea Publishing Company.
Griffith A.A. (1920) Philosophical Transactions of the Royal Society, London, Series A, 221, 163.
Grosh D.L. (1989) A Primer of Reliability Theory, John Wiley & Sons, Ltd.
Gross D. and Harris C.M. (1985) Fundamentals of Queuing Theory, 2nd edn, John Wiley & Sons, Ltd.
Gumbel E.J. (1958) Statistics of Extremes, Columbia University Press.
Hahn G.T. (1984) Metallurgical Transactions A, 15A, 947–959.
Harry M.J. and Lawson J.R. (1992) Six Sigma Producibility Analysis and Process Characterisation, Addison-Wesley.
Haugen E.B. (1980) Probabilistic Mechanical Design, John Wiley & Sons, Ltd.
Hearn D. and Baker M.P. (1997) Computer Graphics: C Version, Prentice Hall Inc.
Heitmann W.E., Oakwood T.G. and Krauss G. (1996) 'Continuous heat treatment of automotive suspension spring steels'. In Fundamentals and Applications of Microalloying Forging Steels, Proceedings of a symposium sponsored by the Ferrous Metallurgy Committee of TMS, eds Chester J. Van Tyne et al.
Henley E.J. and Kumamoto H. (1981) Reliability Engineering and Risk Assessment, Prentice-Hall Inc.
Hertzberg R.W. (1996) Deformation and Fracture Mechanics of Engineering Materials, 4th edn, John Wiley & Sons, Inc.
Hobbs G.K. (2000) Accelerated Reliability Engineering, HALT and HASS, John Wiley & Sons, Ltd.
Horowitz E. and Sahni S. (1997) Computer Algorithms, Computer Science Press.
Hoyland A. and Rausand M. (1994) System Reliability Theory, John Wiley & Sons, Ltd.
Hull D. and Clyne T.W. (1996) An Introduction to Composite Materials, 2nd edn, Cambridge University Press.
IEC (International Electrotechnical Commission) (1991) International Vocabulary, Chapter 191: Dependability and Quality of Service, IEC 50 (191).
Jayatilaka A. and Trustrum K. (1977) Journal of Material Science, 12, 1426.
Kay D.C. (1995) Graphics File Formats, 2nd edn, Windcrest/McGraw-Hill.
Kendall M.G. and Moran P.A.P. (1963) Geometrical Probability, Charles Griffin & Company Ltd.
Knuth D.E. (1997) The Art of Computer Programming, vol. 2, Seminumerical Algorithms, 3rd edn, Addison-Wesley.
Kumar U., Knezevic J. and Crocker J. (1999) 'Maintenance free operating period – an alternative measure to MTBF and failure rate for specifying reliability?', Reliability Engineering and System Safety, 64, 127–131.
Lehmer D.H. (1951) 'Mathematical methods in large-scale computing units'. In Proceedings of the 2nd Annual Symposium on Large-scale Digital Computing Machinery, Harvard University Press, 141–145.
Lewis E.E. (1996) Introduction to Reliability Engineering, 2nd edn, John Wiley & Sons Inc.
McMahon C.J. and Cohen M. (1965) Acta Metallurgica, 13, 591.
Mattson E. (1989) Basic Corrosion Technology for Scientists and Engineers, John Wiley & Sons, Ltd (Halsted Press).
Meeker W.Q. and Escobar L.A. (1998) Statistical Methods for Reliability Data, John Wiley & Sons, Inc.
Melchers R.E. (1999) Structural Reliability Analysis and Prediction, 2nd edn, John Wiley & Sons, Ltd.
Metcalfe A.V. (1994) Statistics in Engineering, a Practical Approach, Chapman & Hall.
Miller I. and Miller M. (1999) John E. Freund's Mathematical Statistics, 6th edn, Prentice Hall International, Inc.
Miller K.J. (1993) Materials Science and Technology, 9, 453–462.
MIL-HDBK-217F (1991) Reliability Prediction of Electronic Equipment, US Department of Defence, Washington, DC.
MIL-STD-1629A (1977) US Department of Defence Procedure for Performing a Failure Mode and Effects Analysis.
Miner M.A. (1945) Journal of Applied Mechanics, 12, 159–164.
Montgomery D.C., Runger G.C. and Hubele N.F. (2001) Engineering Statistics, 2nd edn, John Wiley & Sons, Ltd.
Murakami Y., Kodama S. and Konuma S. (1989) 'Quantitative evaluation of effects of non-metallic inclusions on fatigue strength of high strength steels. I: Basic fatigue mechanism and evaluation of correlation between the fatigue fracture stress and the size and location of non-metallic inclusions', International Journal of Fatigue, 11 (5), 291–298.
Murakami Y. and Usuki H. (1989) 'Quantitative evaluation of effects of non-metallic inclusions on fatigue strength of high strength steels. II: Fatigue limit evaluation based on statistics for extreme values of inclusion size', International Journal of Fatigue, 11 (5), 299–307.
Nakov P. and Dobrikov P. (2003) Programming = ++Algorithms, 2nd edn, Top Team.
Niku-Lari A. (1981) 'Shot-peening'. In the First International Conference on Shot Peening, Paris, 14–17 September, p. 1, Pergamon Press.
O'Connor P.D.T. (2003) Practical Reliability Engineering, 4th edn, John Wiley & Sons, Ltd.
Ohring M. (1995) Engineering Materials Science, Academic Press, Inc.
OREDA-1992 (Offshore Reliability Data) (1992) DNV Technica.
Ortiz K. and Kiremidjian A. (1988) 'Stochastic modelling of fatigue crack growth', Engineering Fracture Mechanics, 29 (3), 317–334.
Paris P.C. and Erdogan F. (1963) Journal of Basic Engineering, 85, 528–534.
Park S.K. and Miller K.W. (1988) 'Random number generators: good ones are hard to find', Communications of the ACM, 31 (10), 1192–1201.
Parzen E. (1960) Modern Theory of Probability and its Applications, John Wiley & Sons, Inc., New York.
RAE (2002) 'The societal aspects of risk', Report produced by working groups of the Royal Academy of Engineering.
Ramakumar R. (1993) Engineering Reliability, Fundamentals and Applications, Prentice Hall.
Relf M.N. (1998) 'Maintenance-free operating periods – the designer's challenge'. In Proceedings of the 13th Advances in Reliability Technology Symposium, 21–23 April, Manchester.
Rigaku Corporation (1994) Instruction Manual for Strainflex MSF-2M/PSF-2M.
Rimmer S. (1993) Bit-Mapped Graphics, 2nd edn, Windcrest/McGraw-Hill.
Rosenfield A.R. (1997) In George R. Irwin Symposium on Cleavage Fracture (ed. Kwai S. Chan), The Minerals, Metals and Materials Society, Materials Park, Ohio, 229–236.
Ross P.J. (1988) Taguchi Techniques for Quality Engineering, McGraw-Hill.
Ross S.M. (1997) Simulation, 2nd edn, Harcourt Academic Press.
Ross S.M. (2000) Introduction to Probability Models, 7th edn, A Harcourt Science and Technology Company.
Rubinstein R.Y. (1981) Simulation and the Monte-Carlo Method, John Wiley & Sons, Ltd, New York.
SAE J936 (1965) Report of Iron and Steel Technical Committee.
Sherwin D.J. and Bossche A. (1993) The Reliability, Availability and Productiveness of Systems, Chapman & Hall.
Smith D.J. (1972) Reliability Engineering, Putman Publishing.
Sobol I.M. (1994) A Primer for the Monte-Carlo Method, CRC Press.
Sommerville D.M.Y. (1958) An Introduction to the Geometry of N Dimensions, Dover.
Thompson G. (1999) Improving Maintainability and Reliability through Design, Professional Engineering Publishing Ltd.
Thompson W.A. (1988) Point Process Models with Applications to Safety and Reliability, Chapman & Hall.
Ting J.C. and Lawrence F.V. (1993) 'Modelling the long-life fatigue behaviour of a cast aluminium alloy', Fatigue and Fracture of Engineering Materials and Structures, 16 (6), 631–647.
Todinov M.T. (1996) 'A new approach to the kinetics of a phase transformation with constant radial growth rate', Acta Materialia, 44, 4697–4703.
Todinov M.T. (1998) 'Mechanism for the formation of the residual stresses from quenching', Modelling and Simulation in Materials Science and Engineering, 6, 273–291.
Todinov M.T. (1999a) 'Influence of some parameters on the residual stresses from quenching', Modelling and Simulation in Materials Science and Engineering, 7, 25–41.
Todinov M.T. (1999b) 'Maximum principal tensile stress and fatigue crack origin for compression springs', International Journal of Mechanical Sciences, 41, 357–370.
Todinov M.T. (2000a) 'On some limitations of the Johnson–Mehl–Avrami–Kolmogorov equation', Acta Materialia, 48, 4217–4224.
Todinov M.T. (2000b) 'Probability of fracture initiated by defects', Materials Science and Engineering A, A276, 39–47.
Todinov M.T. (2000c) 'Residual stresses at the surface of automotive suspension springs', Journal of Materials Science, 35, 3313–3320.
Todinov M.T. (2001a) 'An efficient method for estimating from sparse data the parameters of the impact energy variation in the ductile-to-brittle transition region', International Journal of Fracture, 111, 131–150.
Todinov M.T. (2001b) 'Necessary and sufficient condition for additivity in the sense of the Palmgren–Miner rule', Computational Materials Science, 21, 101–110.
Todinov M.T. (2001c) 'Probability distribution of fatigue life controlled by defects', Computers and Structures, 79, 313–318.
Todinov M.T. (2002a) 'Distribution mixtures from sampling of inhomogeneous microstructures: variance and probability bounds of the properties', Nuclear Engineering and Design, 214, 195–204.
Todinov M.T. (2002b) 'Distribution of properties from sampling inhomogeneous materials by line transects', Probabilistic Engineering Mechanics, 17, 131–141.
Todinov M.T. (2003a) 'Statistics of inhomogeneous media formed by nucleation and growth', Probabilistic Engineering Mechanics, 18, 139–149.
Todinov M.T. (2003b) 'Modelling consequences from failure and material properties by distribution mixtures', Nuclear Engineering and Design, 224, 233–244.
Todinov M.T. (2003c) 'Minimum failure-free operating intervals associated with random failures of non-repairable components', Computers and Industrial Engineering, 45, 475–491.
Todinov M.T. (2004a) 'Uncertainty and risk associated with the Charpy impact energy of multi-run welds', Nuclear Engineering and Design, 231, 27–38.
Todinov M.T. (2004b) 'Setting reliability requirements based on minimum failure-free operating periods', Quality and Reliability Engineering International, 20, 273–287.
Todinov M.T. (2004c) 'A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval', Reliability Engineering and System Safety, 86, 95–103.
Todinov M.T. (2005) 'Limiting the probability of failure for components containing flaws', Computational Materials Science, 32, 156–166.
Todinov M.T., Novovic M., Bowen P. and Knott J.F. (2000) 'Modelling the impact energy in the ductile/brittle transition region of C-Mn multi-run welds', Materials Science and Engineering A, A287, 116–124.
Trivedi K.S. (2002) Probability and Statistics with Reliability, Queuing and Computer Science Applications, 2nd edn, John Wiley & Sons, Ltd.
Tryon R.G. and Cruse T.A. (1997) 'Probabilistic mesomechanical fatigue crack nucleation model', Journal of Engineering Materials and Technology, 119, 65–70.
Tuckwell H.C. (1988) Elementary Applications of Probability Theory, Chapman & Hall.
Underwood E.E. (1969) Quantitative Stereology, Addison-Wesley.
Villemeur A. (1992) Reliability, Availability, Maintainability and Safety Assessment, vol. 2, John Wiley & Sons, Ltd.
Vose D. (2002) Risk Analysis, a Quantitative Guide, 2nd edn, John Wiley & Sons, Ltd.
Wallin K., Saario T. and Torronen K. (1984) Metal Science, 18, 13–16.
Weibel E.R. (1980) Stereological Methods: Theoretical Foundations (vol. 2), Academic Press, London.
Weibull W. (1951) 'A statistical distribution of wide applicability', Journal of Applied Mechanics, 18, 293–297.
Windle P.L., Crowder M. and Moskovic R. (1996) 'A statistical model for the analysis and prediction of the effect of neutron irradiation on Charpy impact energy curves', Nuclear Engineering and Design, 165, 43–56.
Index
Absorption laws, 272, 286
Additive terms, 41
Additivity rule, 163
AND-gates, 284
Associative laws, 271, 285
Austenitisation, decarburisation, 204
Automotive suspension springs, probability of failure, 199–204
Bathtub curve, 49
Bayes' theorem, 17
Bernoulli distribution, 135–136
Best-fit straight line, 81
Bimodality in life data, 70
Binomial cumulative distribution function, 22–24
Binomial distribution, 21–22, 114
Binomial experiment, 21
Binomial model, 20–24: engineering examples, 20
Binomial probability density distribution, 23
Bivariate normal distribution, 120
Bivariate sampling, conditional probability technique, 120–121
Boolean algebra, 282–288: laws of, 285–287
Boolean expressions, simplifying, 288
Boolean operations, 283
Boolean random variables, 290
Boolean variables, 282–288
Bound laws, 272, 286
Box–Muller method, 119
Bresenham algorithm, 137
Brittle failure, 166
Brittle fracture, 145–147, 166–167
Carbide-induced brittle fracture, 166
Cathodic protection, 152
Cayley–Menger determinant, 66
Cayley–Menger matrix, 65
Central limit theorem, 42, 105
Certain event, 268
Charpy impact energy: C–Mn multi-run welds, 78; cumulative empirical distribution, 74; empirical distribution, 181; general statistical model, 180; modelling systematic and random component, 179–181; parameters characterising microstructural zones, 181–183; and percentage of ductile fracture, 70; at specified test temperature, uncertainty associated with, 62–64; systematic variation with temperature, 82; uncertainty associated with, 61–62; variance at specified test temperature, 185–186
Charpy V-notch impact test, 135, 148
χ²-distribution, 36–37, 301–306
χ²-statistics, 36, 38–39
Cleavage fracture, 145–146
Clustering: of random demands, probability of, 209; of two or more random flaws, 223
C–Mn multi-run welds: Charpy impact energy, 78; ductile-to-brittle transition region, 148, 179–189; impact toughness, 61–62; microstructure characterising, 61–62; risk assessment associated with location of ductile-to-brittle transition region, 186–189; uncertainty associated with location of ductile-to-brittle transition region, 179–189; method of determining, 181–186
Collinear forces, 42
Collinear loads, 45
Commutative laws, 272, 285
Complement laws, 272
Complementary events, 269–270
Complementary laws, 285
Complex stress state, 170
Compressive residual stresses, 203–204: from shot peening, 201
Compressive stress, 151
Conditional distribution, 120
Conditional probability, 16, 276–279: formula, 28; technique for bivariate sampling, 120–121
Confidence interval for MTTF, 39
Constant failure rate (CFR) distribution, 31, 49
Constant hazard rate, 29–30, 33, 48
Continuous distribution function, 110
Continuous random variables, 291: joint distribution, 292–293; von Neumann's method, 121–122; with specified probability density function, 121
Controlling random variables, 19–20
Correlation coefficient, 80–81, 293
Corrosion: failures due to, 151–152; modelling deterioration of protective coatings due to, 191–197; nucleation and growth, 192
Corrosion allowance, 152
Cost of failure, 239–265: constant, 240–242; counter examples, 246–248; from mutually exclusive failure modes, 242–246; maximum acceptable fraction, 241; models for setting reliability requirements, 240–242; premature failure, 251; probability distribution, 243; time-dependent, 249–252, 258; variances of, 246
Covariance, 293: properties of, 294
Covariance matrix, 294
Crack growth rate, 150
Crack-like flaws, 127–129
Crack propagation, 146, 163
Crevice control, 152
Crevice corrosion, 151
Critical failure, 239, 245
Critical level of damage, 162–163
Critical start-up period, failure within, 223
Cumulative distribution, 54, 89, 102, 105: of time to failure, 31
Cumulative distribution function, 1–2, 40, 43, 46, 73, 90, 110–111, 159, 292: discrete random variables, 290; intercepted fraction, 136; standard normal distribution, 299–300; time to failure, 3, 31
Cumulative hazard rate, 30–31
Cumulative Poisson distribution, 26
Cyclic stresses, 149–150
Damage factorisation law, 159–164
Data collection, 70
Data quality assurance, 70
Data transmitter, 14–16
De Morgan's laws, 273, 286–287
Decarburisation during austenitisation, 204
Decomposition method, 13
Decreasing failure rate (DFR), 31, 49
Demand–capacity, see Load–strength (demand–capacity) integral
Design parameter, 42
Detrimental factor, 173
Discrete random variable, 289: cumulative distribution function, 290; with specified distribution, 113–114
Disjoint events, 269, see also Mutually exclusive events
Distribution mixture, 62: upper bound of variance of, 64–67; variance of, 56
Distribution of properties from three different sources: with same mean, 58; with very small variances, 58
Distributive laws, 272, 285, 287
Ductile fracture, 70, 147
Ductile-to-brittle transition region: of multi-run welds, 147–149; risk assessment associated with location of, 186–189; uncertainty associated with location of, 179–189; method of determining, 181–186
Duplex material, 139
Duplex oxide–sulphide inclusions, 147
Duplex structures, 133–135
Early-life failures, 151, 153–157, 165, 254–255: causes, 153; due to misassembly, 156; influence of design, 153–154; influence of variability of critical design parameters, 154–157
Early wear-out failures, 50
Elementary event, 269
Empirical cumulative distribution, 73–74
Environment-assisted cracking (EAC), 152
Environmental conditions, 69
Environmental pollution, 222
Erosion, failures due to, 151–152
Events, 268–270: intersection of, 270–273; union of, 270
Exploratory data analysis, 70
Exponential model, testing for consistency, 75
Extreme value distribution, 51
Extreme value models, 50–52
Failure: definition, 1; mechanisms, 145–157; see also Cost of failure; Early-life failures
Failure density and hazard rate, 32
Failure density function, 3
Failure-free operating interval guaranteed with large probability, 224
Failure mechanisms, 70: categories, 145
Failure mode, 7–8, 86–87: and cost of failure, 242–246
Failure mode data, 69
Failure mode field, 69
Failure probability, 1, 7, 9, 27, 30–31, 47, 91, 102–103, 244: automotive suspension springs, 199–204; components containing flaws, 165–177; importance of strength variation, 97; initiated by flaws, 175; loaded component with internal flaws, 170–172; premature failure, 242, 248, 250; stressed component with internal flaws, 168–170
Failure rate data, 69
Failure region, 85, 88
Failure states, 85
Failure surface, 85, 88
Fatigue crack: growth model, 164; growth rate, 150; initiation, 150; propagation, 150
Fatigue damage, 162
Fatigue failure, 149–151
Fatigue failure model, delaying, 199–204
Fatigue life, 150, 176–177: predictions, 150; and sharp notches, 151
Fatigue life distribution, stochastic model related to, 176–177
Flaw number density, 170: upper bound, 174–176
Flaw orientation, 172
Flaw size, 172
Flawed components, fracture theory of, 176
Flawed material, 173
Index random arrivals, 70 random failures following, 225–227 random locations following, 115 random variable following, 112–113 Hydrogen embrittlement (HE), 152 Idempotent laws, 272, 286 Identity laws, 272, 285 Importance sampling, 105 Impossible event (null event), 268 Increasing failure rate (IFR), 31, 49 Indicator variables, 282 Industry-specific data, 69 Infant mortality region, 49 Inhomogeneous lamellar eutectic, 137 Inhomogeneous media, analysis of properties, 133–144 Inhomogeneous microstructures, analysis using random transects, 134–135 Inhomogeneous structures, variability of properties associated with, 133 Inspection in early-life failures, 156–157 Intercept variance, 139–142 Intercepted fraction, 135–137 lower tail of material properties monotonically dependent on, 142–144 variance of, 139–142 Intercepts definition, 134 distribution of, 139 empirical cumulative distribution, 135–139 order statistics, 144 probability density distribution, 138 Interfaces in early-life failures, 156 Intergranular brittle fracture, 155 Intersection of events, 270–273 Inventory field, 69 Inverse transformation method, 110–111, 116 Involution law, 273, 286 Joint distribution, 130 Joint distribution function, 293 Joint probability density function, 292–293 k-fold standby system, 34 Kolmogorov–Johnson–Mehl–Avrami (KJMA) equation, 83 Kolmogorov’s axioms, 274–275
Index Lagrange multipliers, 64–65 Lamellar tearing, 155 Latin hypercube sampling, 106 Likelihood function, 78–79 Load, 86–87 normally distributed and statistically independent, 91–95 Loading roughness, 101, 161 Load–strength (demand–capacity) integral, 88–91 numerical methods, 90–91 Load–strength (demand–capacity) models, 85–103 Load–strength interaction, importance of shape of lower tail of strength distribution and upper tail of load distribution, 99–103 Load–strength interference approach influence of strength variability on reliability, 95–98 reliability and risk analysis based on, 95–103 reliability on demand, 124–127 with large variability of load, 98 Load–strength interference formula, 93, 95 Load–strength interference model, 86–88, 159, 161 Load–strength reliability integral, 175 Monte Carlo simulation, 124–127 Location parameter, 81 Log-likelihood derivative, 79 Log-likelihood function, 79–80 Log-normal distribution, 46 Log-normal model, 45–46 Log-normal random variable, simulation, 119 Log-normally distributed downtimes, 234, 237–238 Logical AND, 283–284 Logical arrangement of components, 18 Logical NOT, 284 Logical OR, 284 Longitudinal splitting, 155 Marginal distribution functions, 293 Marginal probability density functions, 293 Maximum acceptable fraction of cost of failure, 241 Maximum acceptable risk of premature failure, 241, 250
317 Maximum extreme value distribution, 161 random variable following, 118 Maximum extreme value model, 51 Maximum likelihood method, estimating model parameters, 78–80 Mean availability associated with finite time interval, Monte Carlo simulation, 236–237 Mean time between failures, see MTBF Mean time to failure, see MTTF Memoryless property of negative exponential distribution, 28, 34 MFFOP, 224, 226–227, 232, 260 application examples, 227–231 Monte Carlo simulation, 233–234 numerical example, 234–235 reliability assurance of meeting specification, 228–229 reliability measures based on, 221, 225 setting reliability requirements to guarantee, 227–228 Microstructure characterising C–Mn multi-run welds, 61–62 inhomogeneity, 133 sampling, 54 single-phase, 144 Microvoids, 147 Minimum critical distances before locations of random variables, 223 between locations of random variables, 222 Minimum critical intervals (MCI), 224 between adjacent random variables, 226 reliability measures based on, 221 Minimum extreme value distribution, 52 Minimum failure-free operating periods, see MFFOP Minimum operating intervals, 223 Mixture distributions, 53–68 random sampling from, 123 Model validation, 72 Monte Carlo simulation, 41, 86, 105–131, 143, 161, 170, 172, 175, 177, 182–184, 187–188, 215, 232, 258, 261–264 algorithms, 105–107, 123–127 analysis of properties of inhomogeneous media, 133–144 efficiency, 106
318 Monte Carlo simulation (Continued ) load–strength reliability integral, 124–127 lower tail of strength distribution for materials containing flaws, 127–129 mean availability associated with finite time interval, 236–237 MFFOP, 233–234 MTBF, 39–40 and MTTF, 40 MTTF, 33–35, 228–229, 239, 242 confidence interval, 39 and MTBF, 40 uncertainty associated with, 37–39 Multi-run welds, see C–Mn multi-run welds Multiple sources property distribution from, 53–55 upper bound of variance of properties from sampling from, 67–68 variance of property from, 55–59 Multiple type of flaws, 170 Mutually exclusive events, 269 probability of union and intersection, 275–276 Necking, 147 Negative exponential distribution, 26–27, 31, 50, 237 memoryless property of, 28, 34 and Poisson distribution, 28–29 probability density function, 27 random variable following, 111 Non-disjoint events, 279–280 Non-homogeneous Poisson process, 255 Non-repairable components/systems, 49–50 Non-repairable device, failure before pay-off period, 224 Normal distribution, 41–45 testing for consistency, 77–78 Null event (impossible event), 268 Number density envelope, 230–231 Offshore failure rate and failure mode data, 69 Old age wear-out, 50 Operational conditions, 69 OREDA database, 69
OR-gate, 284
Overstress failures, 145–149
  mechanisms, 159–160
Overstress reliability integral, 160–161
Palmgren–Miner rule, 163–164
Parallel arrangement, 8–10, 18
Parameter estimation, 71
  three-parameter power law, 80
Paris–Erdogan power law, 150, 164
Partition of sample space, 273
Parts count method, 30
Phase transformation, kinetics of, 192
Physical arrangement of components, 18
Physics-of-failure model, 191
Pitting corrosion, 155
Plastic deformation, 149
Poisson distribution, 24–26, 160, 255
  and negative exponential distribution, 28–29
Poisson process, see Homogeneous Poisson process; Non-homogeneous Poisson process
Polar coordinates, 130
Power curve, estimating shape of, 81
Power law, 173
  applications, 82
  parameter estimation, 80
Principle of duality, 273
Probabilistic design methods, 154
Probability, 1, 22, 274–275
  axiomatic approach, 274–275
  classical approach to defining, 274
  clustering of random demands, 209
  clustering of uniformly distributed random demands, 212–216
  complementary events, 275
  contamination, 16
  empirical definition, 274
  event as sum of probabilities of mutually exclusive events, 54
  events, 4
  failure, see Failure probability
  fracture, 129
  intersection of events, 276
  intersection of n events, 278–279
  intersection of three events, 277–278
  safe/failure configuration, 217
    for arbitrary locations of random variables, 215
  success, 23–24
  triggering failure characterising a single flaw, 170–172
  union and intersection of mutually exclusive events, 275–276
  union of independent events, 282
  union of non-disjoint events, 279–280
Probability density, 116
  time to failure, 31, 33
Probability density distribution, 71, 90
  intercept, 138
Probability density function, 2, 25, 40, 42–44, 46, 51–52, 86, 120, 131, 136, 291–292
  χ²-distribution, 37
  basic property, 292
  negative exponential distribution, 27
  time to failure, 1–2, 27, 35
Probability distribution, 105
  of cost of failure, 243
Probability distribution functions, 54
Probability mass function, 25
Probability plot for two-parameter Weibull distribution, 76
Probability plotting, 72–78
Probability theory, 73
Process control charts, 156
Property distribution from multiple sources, 53–55
Protective coatings, 157
  modelling kinetics of deterioration, 191–197
  modelling quantity of corrosion with time, 192–194
Quality control in early-life failures, 156–157
Random demands, probability of clustering, 209
Random direction in space, 116–117
Random events, 3, 267–270
Random failures followed by downtimes, 231–234
Random locations following homogeneous Poisson process, 115
Random point selection in n-dimensional space region, 114–115
Random sampling from mixture distribution, 123
Random subset, generation of, 109–110
Random transects, analysis of inhomogeneous microstructures, 134–135
Random variables, 3, 41–42, 44–46, 85, 91–93
  basic properties, 289–290
  continuous, 110–111
  correlated, 293–294
  following gamma distribution, 112
  following homogeneous Poisson process, 112–113
    in finite domain, reliability governed by relative locations of, 216–217
    in finite interval, 225–227
  following maximum extreme value distribution, 118
  following negative exponential distribution, 111
  following three-parameter Weibull distribution, 117–118
  locations in finite intervals, 221–238
  minimum critical distances before locations of, 222–223
  probability of safe/failure configuration for arbitrary locations of, 215
  properties of expectations and variances, 295–296
  reliability dependent on relative configurations of, 205–206
  reliability dependent on relative locations of fixed number, generic equation, 206–209
  safe/failure configuration, 218
  simulation, 108–123
  statistically independent, 294–295
  uniformly distributed, 108, 118–119
    in finite interval (conditional case), 210–211
  see also Continuous random variables; Discrete random variable
Rejection method, 114
Reliability
  allocation to minimise total losses, 264–265
  basic concepts, 1–18
  bathtub curve, 49–50
  block diagram, 14–15, 262
  data analysis, general rules, 69–72
  data record, basic components, 69
  definition, 1
  dependent on minimum critical distances before locations of random variables in finite interval, 221–238
  dependent on relative configurations of random variables, 205–206
  dependent on relative locations of fixed number of random variables, generic equation, 206–209
  governed by relative locations of random variables following homogeneous Poisson process in finite domain, 216–217
  governed by relative locations of random variables in finite domain, 205–219
  on demand, 85–86, 91–92, 96, 102–103
    during load–strength interference, 124–127
  series arrangement, including components with constant hazard rates, 29–30
  system with components logically arranged in parallel, 8–10
  system with components logically arranged in series, 5–8
  system with components logically arranged in series and parallel, 10–12
Reliability analysis based on cost of failure, 239–265
Reliability function, 1, 33, 47
Reliability index, 92, 101, 161
Reliability integral, 86, 123
Reliability investment for repairable system, 254
Reliability measures, 1, 224
  based on minimum critical intervals (MCI) and minimum failure-free operating periods (MFFOP), 221–224
Reliability models
  based on mixture distributions, 53–68
  building, 69–83
Reliability requirements
  based on cost of failure, 239–265
  guaranteeing, 259–261
  multiple, 259–261
  quantitative, 260
  to guarantee availability target, 231–238
  to guarantee minimum failure-free operating period before failures followed by downtime, 231–234
  to minimise total cost, 252–259
Repairable systems, 39–40
  components not arranged in series, 264
Residual stress, 199
  from shot peening, 201, 203
  measurement in as-quenched silicon–manganese springs, 199–200
Resource limitations, 222
Risk assessment associated with location of ductile-to-brittle transition region of C–Mn multi-run welds, 186–189
Risk models
  based on mixture distributions, 53–68
  building, 69–83
Risk of failure, 248
  for cost of failure dependent on time, 249–252
Risk of premature failure, 239–242, 245–246, 248, 259–264
  components not logically arranged in series, 261–264
  cost of failure dependent on time, 251–252
Risk of system failure, 263
Risk reduction by delaying activation of failure mode, 199
Rolling warranty periods, 223
Safe/failure configuration of random variables, 218
Safe region, 85–86, 88
Safety margin, 92, 100
Sample mean
  expected value and variance, 297
  theoretical results, 297
Sample space, 267–268
  partition of, 273
Sampling
  from multiple sources, upper bound of variance of properties from, 67–68
  microstructure, 54
Scale parameter, 51
Scatter plot, 70
Series and parallel arrangement, 10–12
Series arrangement, 5–8, 10–12, 18, 29–30, 261–264
Shape parameter, 48, 50
Sharp notches and fatigue life, 151
Shot peening
  compressive residual stresses from, 201
  residual stress distributions after, 201
  residual stress from, 203
Silicon–manganese spring wire, 199–200
Six-sigma quality philosophy, 156
Spatial statistics, 133
Standard deviation, 45
Standard normal distribution, 43–44, 105, 119
  cumulative distribution function, 299–300
Statistical models
  objectives in building, 71
  robustness, 72
Statistically dependent events, 281
Statistically independent events, 281–282
Statistically independent random variables, 36
Statistics, structure and properties intersection, 133
Stochastic model related to fatigue life distribution, 176–177
Stratified sampling, 105
Strength, 86–87
  normally distributed and statistically independent, 91–95
Stress corrosion cracking (SCC), 152
Stress intensity factor, 146, 150, 164
Stress limiter, 99
Stress raiser, 147–148
Stressed volume guaranteeing probability of failure below an acceptable level, 174–176
Strong law of large numbers, 297
Sulphide inclusions, 168
Sulphide stringers, 155
Supply–demand formulae, 94–95
Supply–demand problems, 94
Supply systems, probability of disrupting, 222
Survival function, 1
System reliability, 156
Tensile residual stresses, 203–204
  at spring surface, 201
Tensile tessellation stresses, 146, 168
Testing for consistency
  exponential model, 75
  normal distribution, 77–78
  type I extreme value model, 77
  uniform distribution model, 75
  Weibull distribution, 76–77
Theory of independent competing risks, 8
Three-parameter power law
  applications, 82–83
  parameter estimation, 80
Three-parameter Weibull distribution, random variable following, 117–118
Time interval, 27
Total probability formula, 13
Total probability theorem, 88, 159, 176–177, 244
  applications, 12–16
Tresca theory, 149
Triaxial stress, 147, 149
Triaxial tensile loading stress, 154
Truth table, 283
Two-parameter Weibull model, 239
Type I extreme value model, testing for consistency, 77
Ultrasonic inspection technique, 157
Uncertainty
  associated with Charpy impact energy, 61–62
    at specified test temperature, 62–64
  associated with location of ductile-to-brittle transition region of multi-run welds, 179–189
    method of determining, 181–186
  associated with MTTF, 37–39
Uniaxial stress, 170, 173
Uniaxial tension loading, 165–166
Uniform distribution model, 40–41
  testing for consistency, 75
Uniform distribution, probability density and cumulative distribution function, 41
Union of events, 270
Union of independent events, probability of, 282
Unit sphere and random direction in space, 117
Unreliability and variability, 156
Upper bound of variance
  of distribution mixture, 64–67
  of properties from sampling from multiple sources, 67–68
Useful life region, 49
Value from reliability investment, 252–259
Variability
  of properties associated with inhomogeneous structures, 133
  and unreliability, 156
Variance, 92
  cost of failure, 246
  distribution mixture, 56
    upper bound, 64–67
  intercepted fraction, 139–142
  property from multiple sources, 55–59
Variance upper bound theorem, 59–62
  application, 60–62
Venn diagram, 12, 268
von Mises yield criterion, 149
von Neumann's method (rejection method), 121–122
Weak flaws assumption, 170
Weakest-link theories, 167
Wear-out failures, 145, 149–152
Wear-out region, 49–50
Weibull analysis, 76
Weibull cumulative distribution function, 117
Weibull distribution, 50, 52, 71, 90, 117–118, 161, 166, 173
  testing for consistency, 76–77
Weibull empirical relationship, 166
Weibull hazard rate function, 47
Weibull model, 46–48
Weibull probability density function, 48
Weld metal embrittlement, 186–187
Yield strength, 149
Yielding, 149
Zero defect levels, 156
0/1 laws, 273, 286
Index compiled by Geoffrey Jones