GEOTECHNICAL RISK AND SAFETY
PROCEEDINGS OF THE 2ND INTERNATIONAL SYMPOSIUM ON GEOTECHNICAL SAFETY & RISK, GIFU, JAPAN, 11–12 JUNE, 2009
Geotechnical Risk and Safety

Editors

Y. Honjo Department of Civil Engineering, Gifu University, Gifu, Japan
M. Suzuki Center for Structural Safety and Reliability, Institute of Technology, Shimizu Corporation, Tokyo, Japan
T. Hara Department of Civil Engineering, Gifu University, Gifu, Japan
F. Zhang Department of Civil Engineering, Nagoya Institute of Technology, Nagoya, Japan
Cover photo: Traditional cormorant fishing at the Nagara River, Gifu City, Japan Courtesy of Gifu Sightseeing Association
Taylor & Francis is an imprint of the Taylor & Francis Group, an informa business © 2009 Taylor & Francis Group, London, UK Typeset by Charon Tec Ltd (A Macmillan Company), Chennai, India Printed and bound in Great Britain by Antony Rowe (A CPI-group Company), Chippenham, Wiltshire All rights reserved. No part of this publication or the information contained herein may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by photocopying, recording or otherwise, without written prior permission from the publisher. Although all care is taken to ensure integrity and the quality of this publication and the information herein, no responsibility is assumed by the publishers nor the author for any damage to the property or persons as a result of operation or use of this publication and/or the information contained herein. Published by:
CRC Press/Balkema P.O. Box 447, 2300 AK Leiden, The Netherlands e-mail:
[email protected] www.crcpress.com – www.taylorandfrancis.co.uk – www.balkema.nl
ISBN: 978-0-415-49874-6 (Hbk) ISBN: 978-0-203-86731-0 (eBook)
Table of Contents
Preface
IX
Organization
XI
Sponsors
XIII
Wilson Tang lecture

Reliability of geotechnical predictions T.H. Wu
3
Keynote lectures

Risk assessment and management for geohazards F. Nadim
13
Risk management and its application in mountainous highway construction H.W. Huang, Y.D. Xue & Y.Y. Yang
27
Recent revision of Japanese Technical Standard for Port and Harbor Facilities based on a performance based design concept T. Nagao, Y. Watabe, Y. Kikuchi & Y. Honjo
39
Special lecture

Interaction between Eurocode 7 – Geotechnical design and Eurocode 8 – Design for earthquake resistance of geotechnical structures P.S. Sêco e Pinto
51
Special sessions

Reliability benchmarking

Reliability analysis of a benchmark problem for 1-D consolidation J.Y. Ching, K.-K. Phoon & Y.-H. Hsieh
69
Study on determination of partial factors for geotechnical structure design T.C. Kieu Le & Y. Honjo
75
Reliability analyses of rock slope stability C. Cherubini & G. Vessia
83
Reliability analysis of a benchmark problem for slope stability Y. Wang, Z.J. Cao, S.K. Au & Q. Wang
89
Geotechnical code drafting based on limit state design and performance based design concepts

Developing LRFD design specifications for bridge shallow foundations S.G. Paikowsky, S. Amatya, K. Lesny & A. Kisse
97
Limit States Design concepts for reinforced soil walls in North America R.J. Bathurst, B. Huang & T.M. Allen
103
Loss of static equilibrium of a structure – Definition and verification of limit state EQU B. Schuppener, B. Simpson, T.L.L. Orr, R. Frank & A.J. Bond
111
Geotechnical criteria for serviceability limit state of horizontally loaded deep foundations M. Shirato, T. Kohno & S. Nakatani
119
Reliability-based code calibration of piles based on incomplete proof load tests J.Y. Ching, H.-D. Lin & M.-T. Yen
127
Sensitivity analysis of design variables for caisson type quay wall G.L. Yoon, H.Y. Kim, Y.W. Yoon & K.H. Lee
135
Evaluating the reliability of a levee against seepage flow Y. Shimizu, Y. Yoshinami, M. Suzuki, T. Nakayama & H. Ichikawa
141
Determination of partial factors for the verification of the bearing capacity of shallow foundations under open channels A. Murakami, S. Nishimura, M. Suzuki, M. Mori, T. Kurata & T. Fujimura
147
Application of concept in ‘Geo-code21’ to earth structures M. Honda, Y. Kikuchi & Y. Honjo
155
Limit state design example – Cut slope design W.K. Lin & L.M. Zhang
159
Probabilistic charts for shallow foundation settlements on granular soil C. Cherubini & G. Vessia
165
Resistance factor calibration based on FORM for driven steel pipe piles in Korea J.H. Park, J.H. Lee, M. Chung, K. Kwak & J. Huh
173
An evaluation of the reliability of vertically loaded shallow foundations and grouped-pile foundations T. Kohno, T. Nakaura, M. Shirato & S. Nakatani
177
Study on rational ground parameter evaluation methods for the design and construction of large earth-retaining wall Y. Yamamoto, T. Hirose, M. Hagiwara, Y. Maeda, J. Koseki, J. Fukui & T. Oishi
185
System reliability of slopes for circular slip surfaces J.-Y. Ching, Y.-G. Hu & K.-K. Phoon
193
Correlation of horizontal subgrade reaction models for estimating resistance of piles perpendicular to pile axis Y. Kikuchi & M. Suzuki
201
Risk management in geotechnical engineering

The long and big tunnel fire evacuation simulation based on an acceptable level of risk and EXODUS software S.-Q. Hao, H.-W. Huang & Y. Yuan
211
Risk based decision support system for the pumping process in contaminated groundwater remediation T. Hata & Y. Miyata
217
A risk evaluation method of countermeasure for slope failure and rockfall with account of initial investment T. Yuasa, K. Maeda & A. Waku
221
Risk assessment on the construction of interchange station of Shanghai metro system Z.W. Ning, X.Y. Xie & H.W. Huang
229
Challenges in multi-hazard risk assessment and management: Geohazard chain in Beichuan Town caused by Great Wenchuan earthquake L.M. Zhang
237
General sessions

Design method (1)

A study of the new design method of irrigation ponds using sheet materials M. Mukaitani, R. Yamamoto, Y. Okazaki & K. Tanaka
247
Research on key technique of double-arch tunnel passing through water-eroded groove Y. Chen & X. Liu
251
Safety measures by utilizing the old ridge road and potential risks of land near the old river alignments M. Okuda, Y. Nakane, Y. Kani & K. Hayakawa
257
Bearing capacity of rigid strip footings on frictional soils under eccentric and inclined loads K. Yamamoto & M. Hira
265
Uncertainty

Reliability analysis of slope stability by advanced simulation with spreadsheet S.K. Au, Y. Wang & Z.J. Cao
275
Optimal moving window width in conjunction with intraclass correlation coefficient for identification of soil layer boundaries J.K. Lim, S.F. Ng, M.R. Selamat & E.K.H. Goh
281
Soil variability calculated from CPT data T. Oka & H. Tanaka
287
Reducing uncertainties in undrained shear strengths J.Y. Ching, Y.-C. Chen & K.-K. Phoon
293
A case study on settlement prediction by spatial-temporal random process P. Rungbanaphan, Y. Honjo & I. Yoshida
301
Construction risk management

Reliability analysis of a hydraulic fill slope with respect to liquefaction and breaching T. Schweckendiek, G.A. van den Ham, M.B. de Groot, J.G. de Gijt, H. Brassinga & P. Hudig
311
A case study of the geological risk management in mountain tunneling T. Ikuma
319
Guideline for monitoring and quality control at deep excavations T.J. Bles, A. Verweij, J.W.M. Salemans, M. Korff, O. Oung, H.E. Brassinga & T.J.M. de Wit
327
A study on the empirical determination procedure of ground strength for seismic performance evaluation of road embankments K. Ichii & Y. Hata
333
Geo Risk Scan – Getting grips on geotechnical risks T.J. Bles, M.Th. van Staveren, P.P.T. Litjens & P.M.C.B.M. Cools
339
Reduction of landslide risk in substituting road of Germi-Chay dam H.F. Aghajani & H. Soltani-Jigheh
347
Risk assessment

Probabilistic risk estimation for geohazards: A simulation approach M. Uzielli, S. Lacasse & F. Nadim
355
A research project for deterministic landslide risk assessment in Southern Italy: Methodological approach and preliminary results F. Cotecchia, P. Lollino, F. Santaloia, C. Vitone & G. Mitaritonna
363
Reliability-based performance evaluation for reinforced railway embankments in the static loading condition M. Ishizuka, M. Shinoda & Y. Miyata
371
Maximum likelihood analysis of case histories for probability of liquefaction J.Y. Ching, C. Hsein Juang & Y.-H. Hsieh
379
Suggestions for implementing geotechnical risk management M.Th. van Staveren
387
Design method (2)

Framework for evaluation of probability of failure of soil nail system I.S.H. Harahap & W.P. Nanak
397
Reliability analysis of embankment dams using Bayesian network D.Q. Li, H.H. Liu & S.B. Wu
405
Identification and characterization of liquefaction risks for high-speed railways in Portugal P.A.L.F. Coelho & A.L.D. Costa
411
Field characterization of patterns of random crack networks on vertical and horizontal soil surfaces J.H. Li & L.M. Zhang
419
Stochastic methods for safety assessment of a European pilot site: Scheldt M. Rajabalinejad, P.H.A.J.M. van Gelder & J.K. Vrijling
425
A report by JGS chapter of TC23 limit state design in geotechnical engineering practice

Code calibration in reliability based design level I verification format for geotechnical structures Y. Honjo, T.C. Kieu Le, T. Hara, M. Shirato, M. Suzuki & Y. Kikuchi
435
Author index
453
Preface
IS Gifu (the 2nd International Symposium on Geotechnical Risk and Safety), held on 11 and 12 June 2009 at the Nagara International Convention Center in Gifu, Japan, is part of a series of conferences organized by a group of people interested in geotechnical risk and safety. These conferences include LSD2000 (November 2000, Melbourne, Australia), IWS Kamakura (April 2002, Tokyo and Kamakura, Japan), LSD2003 (June 2003, Cambridge, USA), Georisk 2004 (November 2004, Bangalore, India), Taipei2006 (November 2006, Taipei), and the 1st International Symposium on Geotechnical Risk and Safety (1st ISGSR, October 2007, Shanghai). Besides these events, this group has organized technical sessions in many international and regional conferences from time to time. The major themes of this symposium are:
• Evaluation and control of uncertainties concerning geotechnical structures.
• Performance based specifications, RBD and LSD of geotechnical structures, and design code developments.
• Risk assessment and management of geo-hazards.
• Risk management issues concerning large geotechnical construction projects.
• Repair and maintenance strategies of geotechnical structures.
IS Gifu is sponsored by ISSMGE, JGS and GEOSNet. Two technical committees now working in ISSMGE are taking the lead in this symposium, namely TC23 'Limit state design in geotechnical engineering practice' (chair Y. Honjo) and TC32 'Risk assessment and management in geotechnical engineering practice' (chair F. Nadim). The organizers greatly appreciate the support provided by the Japanese Geotechnical Society (JGS) for this symposium. The ASCE Geo-Institute RAM (Risk Assessment and Management) Committee has also been involved in promoting this symposium.

GEOSNet (Geotechnical Safety Network) is a topic-specific international platform to facilitate and promote active interaction on topics related to geotechnical safety and risk among its members, particularly between researchers and practitioners. GEOSNet was formed at Taipei2006 in view of the increasing interest and momentum to rationalize risks in new design codes using reliability and other methods. GEOSNet is expected to take over this activity and become a permanent body to organize this series of ISGSR conferences. For this reason, we also call IS Gifu the 2nd International Symposium on Geotechnical Safety and Risk (2nd ISGSR).

One of the important events related to GEOSNet in this symposium is the initiation of the Wilson Tang Lecture series. The lecture is named to recognize and honour the seminal contributions of Professor Wilson Tang, who is one of the founding researchers in geotechnical reliability and risk. GEOSNet plans to host the Wilson Tang Lecture as the key presentation in future ISGSR events to honour distinguished peers and their achievements. The first Wilson Tang Lecture is delivered by Professor T.H. Wu of Ohio State University, who is also one of the founding researchers in this domain.

Finally, the organizers are grateful to all those who have helped and contributed to the organization of this event. A large part of the credit for the proceedings goes to the authors and reviewers. The publication cost of the proceedings is supported by a Grant-in-Aid for Scientific Research (A) entitled "Development and promotion of performance based design and reliability based design of geotechnical structures" (Grant No. 19206051, Y. Honjo as the representative researcher). The organizers are deeply indebted for this financial support.

Yusuke Honjo
Makoto Suzuki
Takashi Hara
Feng Zhang
June 2009, Gifu, Japan
Organization
ISSMGE TC23 Limit State Design in Geotechnical Engineering Practice
Honjo, Y. (Chair); Zhang, L.M. (Secretary); Core members: Becker, D.E., Matsui, K., Paikowsky, S., Phoon, K.K., Schuppener, B., Simpson, B., Steenfelt, J.
ISSMGE TC32 Engineering Practice of Risk Assessment and Management
Nadim, F. (Chair); Fenton, G.A. (Secretary); Core members: Bekkouche, A., Bolle, A., Ho, K., Jaksa, M., Leroi, E., Nussbaumer, M., Pacheco, M., Phoon, K.K., Roberds, B.
GEOSNet Board Members
Phoon, K.K. (Chair); Becker, D.E.; Chin, C.T.; Faber, M.H.; Honjo, Y.; Horikoshi, K.; Huang, H.W.; Simpson, B.
Organizing Committee
Honjo, Y. (Chair), Hara, T., Honda, M., Horikoshi, K., Kikuchi, Y., Kimura, T., Kobayashi, K., Kobayashi, S., Kusakabe, O., Maeda, Y., Matsui, K., Mizuno, H., Murakami, A., Nishida, H., Nishimura, S., Ogura, H., Oishi, M., Okumura, F., Ohtsu, H., Rito, F., Satake, M., Shirato, M., Suzuki, H., Suzuki, M., Ueno, M., Yamamoto, K., Yamamoto, S.
Scientific Committee
Suzuki, M. (Chair), Hara, T., Kieu Le, T.C., Zhang, F.
Local Advisory Committee
Asaoka, A., Daito, K., Hara, T., Hinokio, M., Honjo, Y., Itabashi, K., Kamiya, K., Kodaka, T., Kojima, S., Ma, G., Maeda, K., Nakai, T., Nakano, M., Narita, K., Noda, T., Nojima, N., Ohtani, T., Okumura, T., Rokugo, K., Sato, T., Sawada, K., Shibuki, M., Sugii, T., Sugito, M., Tsuji, S., Yamada, K., Yashima, A., Yasuda, T., Yoshimura, Y., Yoshio, O., Zhang, F.

International Review Panel
Akutagawa, S., Becker, D.E., Calle, E., Ching, J.Y., Coelho, P., Cotecchia, F., Fujita, M., Furuta, H., Han, J., Hara, T., Harahap, I.S.H., Heidari, S., Honda, M. (Makoto), Honda, M. (Michinori), Horikoshi, K., Huang, H., Ichii, K., Karlsrud, K., Katsuki, S., Kikuchi, Y., Kitahara, T., Kobayashi, A., Kobayashi, S., Kojima, K., Kusakabe, O., Lee, S.R., Li, D., Li, X.Z., Lo, R., Maeda, K., Maruyama, O., Miyata, Y., Mori, Y., Moriguchi, S., Mukaitani, M., Murakami, A., Nadim, F., Nagao, T., Nishida, H., Nishimura, S., Notake, H., Ohdo, K., Orr, T., Otani, J., Paikowsky, S., Qunfang, H., Rajabalinejad, M., Saito, T., Scarpelli, G., Schuppener, B., Schweckendiek, T., Shirato, M., Staveren, M., Sutoh, A., Suzuki, H., Tafazzoli, N., Taghavi, A., Takada, T., Thomson, R., Uzielli, M., Vessia, G., Wakai, A., Wang, Y., Yamaguchi, Y., Yamamoto, S., Yoon, G.L., Yoshida, I., Yoshinami, Y., Zhang, J., Zhang, L.
Sponsors
organized by the Japanese Geotechnical Society
under the auspices of the International Society for Soil Mechanics and Geotechnical Engineering
with the support of the Geotechnical Safety Network (GEOSNet)
Wilson Tang lecture
Reliability of geotechnical predictions Tien H. Wu The Ohio State University, Columbus, Ohio, USA
ABSTRACT: This paper reviews the use of probabilistic and statistical methods in reliability analysis. Results from reliability estimates are compared with observed performance to provide a measure of the reliability of the methods and improve understanding of the parameters. Case histories of failures in clays are examined in detail.
1 RELIABILITY AND GEOTECHNICAL DESIGN
The concepts of risk and uncertainty are familiar to geotechnical engineers. Since predictions are not perfect, there is always the possibility of failure or unsatisfactory performance. Structures are designed so that the risk of failure is acceptably small. The two principal components of geotechnical design are the "calculated risk" (Casagrande 1965) and the observational method. The first is described by Casagrande as (1) "the use of imperfect knowledge to estimate the possible ranges for all pertinent quantities that enter into a solution…" and (2) "the decision on an appropriate margin of safety, or degree of risk, taking into consideration …losses that would result from failure". Where the consequence of failure is large and a conservative design is expensive, Terzaghi proposed the use of "the observational method", which is described by Peck (1969) as: "Base the design on whatever information can be secured. Make a detailed inventory of all the possible differences between reality and the assumptions. Then, compute on the basis of original assumptions, various quantities that can be measured in the field. …On the basis of the results of such measurements, gradually close the gaps in knowledge and, if necessary, modify the design during construction."
These concepts can be matched with well known relations in reliability and decision making, as illustrated in Figures 1 and 2. In Figure 1, Part (1), "estimate the possible range…", is represented by the probability density function f(s), where s = the shear strength. In Part (2), the "decision" is based on minimizing the expected cost E(C), given as Eq. (1). In Part (3), "an appropriate margin of safety" corresponds to Pf, and "losses that would result from failure" to Cf in Part (4). Figure 2 shows the relationship between the observational method and Bayes' theorem, Eq. (2). Element (1), "Compute on the basis of original assumptions, various quantities that can be measured in the field", is analogous to P[zi | xj], where z = performance and x = soil property.
Figure 1. The calculated risk.
In Element (2), "the original assumptions" is analogous to P[xj], the prior probability. Element (3), "on the basis of the results of such measurements, gradually close the gaps in knowledge", corresponds to P[xj | zi], the posterior probability or updated information. Element (4), "if necessary, modify the design", is a decision process as described in Fig. 1. The following sections review methods for estimating the failure probability and updating by Bayes' theorem.
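In the notation above, Eqs. (1) and (2) take the following standard forms (a sketch consistent with the definitions in the text; the initial cost Ci is introduced here for completeness and is an assumption):

\[ E(C) = C_i + P_f\,C_f \qquad (1) \]

\[ P[x_j \mid z_i] = \frac{P[z_i \mid x_j]\,P[x_j]}{\sum_k P[z_i \mid x_k]\,P[x_k]} \qquad (2) \]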
2 UNCERTAINTIES IN GEOTECHNICAL DESIGN
Uncertainties in design parameters and methods lead to errors, which may be random or systematic. Soil variability is the most important source of random error, and inaccuracies or simplifications in material and analytical models are the common sources of systematic errors. The result is errors in predictions, represented by the bias N, which is the ratio of the correct answer to the prediction. N has mean N̄, standard deviation σ[N] and coefficient of variation Ω[N]. Soil variability due to spatial variations is represented by Ω(x), where x = property. There is a wide range in Ω(x), from Ω(su) = 0.10–0.50 for the undrained shear strength to Ω(K) = 2.0–3.0 for permeability (Lumb, 1974). The variability of the permeability coefficient is the largest and can range over more than one order of magnitude within a soil deposit.
Figure 2. The observational method.
However, in geotechnical engineering practice, uncertainty about the subsoil stratification is often the major issue. Terzaghi's writings (e.g. Terzaghi 1929) are full of warnings about complications in site conditions. A well known example is the soil profile at Chicopee Dam, MA, shown in Fig. 3a. Much less known is Fig. 3b, which shows the data Terzaghi used to construct Fig. 3a. The uncertainties involved in the extrapolation are obvious. Terzaghi recognized the difficulty of predicting the in-situ permeability at Chicopee Dam and, in an early example of the observational method, provided specific instructions on measurement of the seepage from the dam. Errors in stratification inferred from site exploration data can be considered as mapping and classification problems (Baecher and Christian 2003). The major obstacle is the difficulty of obtaining the necessary geologic parameters.
A soil property model is often used to transform results of laboratory or in-situ tests to the property used for design to represent the in-situ behavior of the structure. Transformation models, or material models, are approximate and contain errors. Analytical models, which include limit equilibrium methods and finite element methods (FEM), also contain errors because of simplifications. Both are model errors and are systematic.
Reliability analysis provides a rational method of evaluating the safety of a structure that accounts for all the uncertainties. The origin of reliability-based design can be traced to partial safety factors. Taylor (1948) explained the need to account for different uncertainties about the cohesional and frictional components of soil strength, and Lumb (1970) used the standard deviation to represent the uncertainty and expressed the partial safety factors in terms of the standard deviations. In reliability analysis the failure probability is Pf = P[Fs < 1]. Freudenthal (1947) may be the first to provide a comprehensive formulation of Pf for structures. Since the functions are complex, numerical integration is required. The use of the first-order second-moment (FOSM) method is more convenient. An improvement over FOSM is the use of the response surface, or the first-order reliability method (FORM). More elaborate are simulation and stochastic methods. Simulation has been done with stability analysis and FEM, and with stochastic FEM. It is instructive to compare results of reliability analysis with observed performance. This is presented in the following sections for failure in clays.
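As a minimal sketch of how such a failure probability is computed in practice (the numbers are hypothetical, not a calculation from the paper), the FOSM estimate for a factor of safety treated as normal, and the corresponding lognormal variant, can be written in a few lines:

    # FOSM sketch: Pf for a slope whose factor of safety Fs has an
    # assumed mean and COV; failure is the event Fs < 1.
    import math

    def phi(x: float) -> float:
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    Fs_mean = 1.5     # assumed mean factor of safety
    cov_Fs = 0.20     # assumed combined COV, Omega(Fs)

    # Normal assumption for Fs:
    beta = (Fs_mean - 1.0) / (cov_Fs * Fs_mean)   # reliability index
    print(f"normal:    beta = {beta:.2f}, Pf = {phi(-beta):.3f}")

    # Lognormal assumption (often preferred, since Fs > 0):
    s2 = math.log(1.0 + cov_Fs**2)
    beta_ln = (math.log(Fs_mean) - 0.5 * s2) / math.sqrt(s2)
    print(f"lognormal: beta = {beta_ln:.2f}, Pf = {phi(-beta_ln):.4f}")

The two distributional assumptions bracket the usual design range; FORM refines such estimates by locating the most probable failure point on the limit state surface.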
3 FAILURE IN CLAYS
Stability of slopes on soft to medium clay under the undrained condition is a widely studied problem, because it is simple and also because slope failures in clay are perhaps the most common among failures in geotechnical construction. Many case histories are available and various reliability analyses have been made.

3.1 Soil Variability

Models for soil variability range from simple parameters to stochastic fields. A common model represents the mean by a trend, usually taken as a function of depth z, and random departures from the trend by a standard deviation, σ(s), or a coefficient of variation (COV), Ω(s). Soil variability is spatially correlated, with correlation distances δx, δz for the horizontal and vertical directions. To account for spatial correlation, a variance reduction Γ_A is used, and the average of s over a region A has mean and COV of s̄A = s̄ and Ω(sA) = Γ_A Ω(s), respectively (Vanmarcke 1977). For generic soil types, Rojiani et al (1991) and Phoon and Kulhawy (1999a) have summarized known data on Ω(s) and δx, δz. There is ample evidence (Wu 2003, Wang and Chiasson 2006) that the trend is not constant even within a small region. Examples of soil variability are given in Table 1. There is a large range in δ. One should note that data on δx and δz were very limited until the 1980's, and Tang et al's (1976) estimate was based on results from block samples. If δ = 2 m (Phoon and Kulhawy 1999a), then Ω(sA) ≈ 0.10. Also, because of the small number of samples, there is an uncertainty about the mean value, given as Ω0. The large δ for the James Bay site is to account for the large segment of the slip surface that passes through a layer of lacustrine clay. This illustrates the significance of δ.
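As a hedged numerical illustration of the variance reduction (using Vanmarcke's approximation for an averaging length L much larger than δ; the values Ω(s) = 0.3 and L = 20 m below are assumed for illustration, not taken from Table 1):

\[ \Omega(s_A) = \Gamma_A\,\Omega(s), \qquad \Gamma_A^2 \approx \delta / L \quad (L \gg \delta) \]

\[ \delta = 2\ \mathrm{m},\ L = 20\ \mathrm{m},\ \Omega(s) = 0.3 \;\Rightarrow\; \Omega(s_A) \approx \sqrt{0.1} \times 0.3 \approx 0.10, \]

which matches the Ω(sA) ≈ 0.10 quoted above.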
Also shown in Table 1 are the values from Phoon and Kulhawy (1999a) for clays in general. Uncertainty about soil stratification, or the subsoil model, is not included because of inadequate data.

3.2 Model Errors

Model errors are systematic and include errors in the soil property model and the analytical model. Bias in the property model has been investigated by laboratory and in-situ tests that compared the properties measured by different tests. Examples include the estimate of su from results of unconfined compression tests, vane shear tests or cone penetration tests. Transformation models and their reliabilities are summarized in Phoon and Kulhawy (1999b). The engineer must evaluate the results and exercise judgment in estimating the mean and COV of the model error. It is subjective; the true model error is of course unknown. Table 2 gives some examples of estimated errors of strength models. During earlier times the unconfined compression test was the common test used to measure the undrained shear strength of soft to medium clays. The two estimates in Table 2 differ in the factors considered and in the bias N̄m and Ω(Nm). The estimate for the vane shear test was based on the laboratory studies by Lefebvre et al (1988) on the same soil.
Bias in analytical models has been investigated by comparison of results of simple models used in practice with those of more sophisticated ones. For limit equilibrium analysis of slope stability, the simple models range from the Fellenius method to the Morgenstern-Price method. Sophisticated methods include FEM (Zou et al 1995) and limit analysis (Yu et al 1998). Examples of estimated model errors for four sites are given in Table 3. The estimated error for limit equilibrium analysis is N̄a ≈ 1.0, Ω(Na) ≈ 0.05. If the Fellenius method is included, N̄a ≈ 1.14, Ω(Na) ≈ 0.05. Another factor is the difference between plane-strain analysis and three-dimensional analysis. The first two cases in Table 3 have slide masses of limited width, where the 3-D effect is more important, while the embankment at James Bay is long and a plane-strain failure is likely. The landfill at Kettleman Hills has a complicated geometry. A comparison for an embankment on Bangkok Clay is given in Case 3, Table 4. Despite the different site conditions and the time span between the estimates, the estimated model errors are not very different. There is very little difference between limit equilibrium and limit analyses for example slopes with 30° < α < 45°.
Figure 3. Chicopee Dam, MA: (a) permeability profile at Chicopee Dam (Terzaghi and Peck 1948; reprinted with permission from John Wiley and Sons); (b) data used by Terzaghi to construct (a).

3.3 Combined Uncertainty

The combined uncertainty represents the error of the prediction model.
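A standard first-order combination of the component errors, written here as a sketch rather than as the paper's own equation, is

\[ \bar N = \bar N_m\,\bar N_a, \qquad \Omega^2(N) \approx \Omega^2(s_A) + \Omega^2(N_m) + \Omega^2(N_a), \]

which, when the limits of Tables 1–3 are substituted, is broadly consistent with the range quoted below.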
Table 1. Uncertainties due to Data Scatter.

Site | Soil | s̄ (kPa) | Ω(s) | Ω0 | δ (m) | Ω(sA) | Ref.
Chicago | upper clay | 51.0 | 0.51 | 0.083 | 0.2 | 0.096 | Tang et al. 1976
Chicago | middle clay | 30.0 | 0.26 | 0.035 | 0.2 | 0.035 | Tang et al. 1976
Chicago | lower clay | 37.0 | 0.32 | 0.056 | 0.2 | 0.029 | Tang et al. 1976
James Bay | lacustrine clay | 31.2 | 0.27 | 0.045 | 40* | – | deGroot and Baecher 1993; Christian et al. 1994
James Bay | slip surface** | – | 0.14 | 0.063 | 24 | – | Christian et al. 1994
Generic | clay | – | 0.10–0.55 | – | 1–6 | – | Phoon and Kulhawy 1999

* δx; ** for circular arc slip surface and H = 12 m.
Table 2. Uncertainties due to Errors in Strength Model.

Errors N̄mi, Ω(Nmi) by source:
Site | Test | stress state | sample disturbance | anisotropy | sample size | strain rate | progressive failure | Total N̄m, Ω(Nm) | Ref.
Detroit clay | unconf. comp. | 1.15, 0.08 | X | 1.00, 0.03 | 0.86, 0.09 | X | X | 0.99, 0.12 | Wu & Kraft 1970
Chicago upper clay | unconf. comp. | a) 1.38, 0.024; b) 1.05, 0.02 | – | 1.05, 0.03 | 1.00, 0.03 | 0.75, 0.09 | 0.80, 0.14 | 0.93, 0.03 | Tang et al. 1976
Chicago middle clay | unconf. comp. | a) 1.38, 0.024; b) 1.05, 0.02 | – | 1.05, 0.03 | 1.00, 0.03 | 0.93, 0.05 | 0.80, 0.14 | 0.97, 0.03 | Tang et al. 1976
Chicago lower clay | unconf. comp. | a) 1.38, 0.024; b) 1.05, 0.02 | – | 1.05, 0.03 | 1.00, 0.03 | 0.93, 0.05 | 0.80, 0.14 | 0.97, 0.03 | Tang et al. 1976
Labrador clay | field vane | X | X | 1.00, 0.15 | X | X | 1.00, 0.15 | 1.05, 0.17 | Christian et al. 1994

a) stress change; b) mechanical disturbance.
Table 3. Errors in Analytical Model.

Model error N̄ai, Ω(Nai):
Site | Analysis | 3-D | Slip surf. | Numerical | Total N̄a, Ω(Na) | Ref.
Detroit, cut | φ = 0 | 1.05, 0.03 | 0.95, 0.06 | X | 1.0, 0.067 | Wu and Kraft 1970
Chicago, cut | φ = 0 | X | X | X | 0.98, 0.087 | Tang et al. 1976
James Bay, embankment | φ = 0 | 1.1, 0.05 | 1.0, 0.05 | 1.0, 0.02 | 1.0, 0.07 | Christian et al. 1994
Kettleman Hills, landfill | c′, φ′ | 1.1, 0.16 | X | X | 1.1, 0.16 | Gilbert et al. 1998
Example | φ = 0 | limit eq. | limit analysis | 0.9, <0.01 | – | Yu et al. 1998
Using the upper and lower limits of the values in Tables 1–3, the combined prediction error is estimated to be N̄ ≈ 1.0 and Ω(N) = 0.13–0.24 for limit equilibrium analysis with a circular arc slip surface. For total stress analysis with su, Ω(Fs) = Ω(N). Note also that soil variability is not the only source of uncertainty.

4 CASE HISTORIES OF FAILURES

Case histories of failures in clay are used to compare the observed performance with reliability predictions. This provides some measure of the reliability of geotechnical predictions.

4.1 Reliability analysis

Table 4 and Fig. 4 show the results for a number of well documented case histories. Consider first the analyses by FOSM and FORM with limit equilibrium analysis and with FEM, Cases 1, 5, 6 and 7. Almost all analyses, except Case 7, consider only soil variability. The differences in Pf are minor. When simulation was done with a random trend, Case 8, slightly higher failure probabilities are obtained. Note that the author's calculations are approximate because of various simplifications of details.
The results of FEM simulation are less conclusive. The Pf for St. Hilaire, calculated with the charts of Griffiths and Fenton (2004), lies between 0.02 and 0.05, depending on the spatial correlation model. This is considerably smaller than those from simpler models. For the James Bay Dike, the FEM, which does not include spatial correlation between elements, gives Pf = 0.07. This is much larger than that obtained by FOSM and FORM.
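A Monte Carlo version of such an estimate, in the spirit of the simulation entries in Table 4 below, can be sketched as follows. The strength, bias and geometry values are hypothetical, and the stability model is the simple undrained relation Fs = Nc su/(γH), not any specific analysis cited here:

    # Monte Carlo sketch of Pf for undrained slope stability (assumed values).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    su_mean, cov_suA = 40.0, 0.14   # kPa; COV of the spatially averaged strength
    N_mean, cov_N = 1.0, 0.07       # multiplicative model bias (assumed)

    def lognormal(mean, cov, size):
        s2 = np.log(1.0 + cov**2)
        return rng.lognormal(np.log(mean) - 0.5 * s2, np.sqrt(s2), size)

    s_A = lognormal(su_mean, cov_suA, n)   # averaged undrained strength
    N = lognormal(N_mean, cov_N, n)        # model bias
    Fs = N * 5.5 * s_A / (18.0 * 10.0)     # Nc = 5.5, gamma = 18 kN/m3, H = 10 m

    print(f"Pf ~ {np.mean(Fs < 1.0):.3f}")

Sampling a random trend for the strength profile, as in Case 8, would simply replace the single averaged variable s_A by a simulated field.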
Table 4. Safety Factors and Failure Probabilities.

Site | Fs | β* | Pf | Method | Reference

Reinforced embankments:
1 Muar Clay, stable | 1.22; 1.29 | 1.61; 1.84 | 0.05; 0.03 | limit eq.; FOSMc; FORM | Chai & Bergado 1993; Low & Tang 1997
2 Bangkok Clay, failure | 1.29; 1.00 | – | – | limit eq. | Bergado et al 1994a; Low & Tang 1997
3 Bangkok Clay, failure | 1.00 | – | – | limit eq. | Bergado et al 1994a

Embankments:
4 Bangkok Clay, Nong Ngoo Hao, failure | 1.03; 1.07; 0.98; 1.06 | – | 0.6–0.8 | limit eq., circular, ps and 3-d; limit eq., non-circ., ps and 3-d; PROBISH (FOSM?), depends on δx | Bergado et al 1994b
5 James Bay, stable | 1.53; 1.45; 1.49; 1.46; 1.28 | 2.60; 2.66; 2.71; 1.18 | 0.002; 0.009; 0.003; 0.07 | limit eq.; FEM + limit eq.; FOSM, Ω(µ) = 0 and Ω(µ) = 0.07; FEM, FORM | Christian et al 1994; Zou et al 1995; Low & Tang 1997; Xu & Low 2006

Cut slopes:
6 Chicago, observed failure; redesigned slope | 0.96; 1.00; 0.99 | 0.9 | 0.12; 0.005 | limit eq., FOSM | Tang et al 1976
7 St. Hilaire, failure | 1.18; 1.29 | 0.50c; 0.78c | 0.31c; 0.22c; 0.58; 0.44 | limit eq., µ = 0.85e, FOSM; stoch. limit eq., δx < 10 m | Lafleur et al 1988; Wang & Chiasson 2006
8 Example | 1.15a; 1.30b; 1.05a; 1.14b; ≈1.2c | 1.86; 1.96 | 0.20; 0.05; 0.03; 0.025; 0.02–0.05c (δ = 10–20 m)d | limit eq., FORM, circular and non-circ.; FEM, FORM; limit eq. + simulation; stoch. FEM | Low 2008; El-Ramly et al 2002; Griffiths & Fenton 2004

* β = reliability index; a α = 45°; b α = 34°; c calculations by the author; d corresponds to δ = 20 m; e µ = correction for vane shear test.
Also, the slip surface departs considerably from the circular arc used in FOSM and FORM, and is closer to the non-circular slip surface analyzed by Spencer's method, which has Fs = 1.18. The estimated Fs for the FEM solution is 1.3 (Fig. 4). In both cases, one would expect the spatial correlation distance, especially in the horizontal direction, to be an important factor. At James Bay a large segment of the slip surface passes through the comparatively weaker lacustrine clay layer; this is a good illustration of the influence of subsoil stratification.
It should be noted that the above cases all have fairly well defined soil strata. The studies concentrated on the random departures from the trend. In some cases, uncertainties about the layer thickness were considered. Fig. 4 shows that generally Pf > 0.10 for failures. The uncertainty about soil strength, Ω(sA), for these sites is approximately 0.01. The largest combined uncertainty is Ω(N) = 0.24. When these are used in the FOSM, the calculated range in Pf versus Fs is shown by two curves, which include most of the case histories. The data in Figure 4 show that the US Army Corps of Engineers' (1995) design requirement that Pf < 3 × 10−5 should be adequate. Another source of empirical data is the estimate of failure probability based on historic records. Meyerhof (1970) estimated a range from Pf = 0.05 for Ω(Fs) = 0.3 to Pf < 10−3 for Ω(Fs) = 0.1. This includes a wide range of soils and designs by unknown methods, with largely unknown safety factors. If one assumes that the range of the safety factor is 1.3–1.5 for all the structures, the estimated Pf is as shown in Fig. 4. The relationship between Fs and the annual failure rate has been estimated from performance records and expert judgment (Silva et al 2008). The area shown in Fig. 4 is for "Category III and IV", which includes "facilities without site-specific design" and with "little or no design". The historic records include all types of failures and are not the same as the cases in Table 4, which pertain to failure in the undrained state, immediately or shortly after construction.

Figure 4. Pf versus Fs; solid dots denote failures.

4.2 Prediction of failures

There are many well-documented case histories of failures, especially of slopes on clays. The observed factor of safety at failure is 1. The ratio of observed to calculated factors of safety provides an empirical measure of the prediction bias N̄, and Ω(N) is a measure of uncertainty. A distinction should be made on the type of "prediction", as defined by Lambe (1973): Type A predictions are those made before observations, Type B predictions are those made during observations, and Type C predictions are those made after observations.
Consider first Type C predictions. Bishop and Bjerrum (1960) summarized 27 slope and bearing capacity failures in clays and the values of Fs calculated with su as measured by the unconfined compression test. The prediction error, or bias, N = 1/Fs, has mean and COV of N̄ = 1.01 and Ω(N) = 0.06, respectively. The prediction error Ω(N) is much smaller than the range given in Sec. 3.3. Tavenas and Leroueil (1980) summarized almost 60 slope failures and the Fs calculated with su measured by the vane shear test. Their results give N̄ = 1.03, Ω(N) = 0.17. These are closer to the range given in Sec. 3.3.
Good examples of the reliability of Type A predictions are the symposia at the Massachusetts Institute of Technology (1975) and at Kuala Lumpur, Malaysia (Malaysian Highway Authority 1989), 14 years later. In each case, well respected predictors were asked to predict the height Hp at failure of a test embankment on clay. The actual height at failure, Hf, was unknown to the predictors. The observed safety factor is 1.0 and the model bias is N = Hf/Hp. The values of N̄ and Ω(N) for the two symposia are given in Table 5.

Table 5. Type A Predictions.

Site | Hf | mean Hp | Ω(Hp) | N̄ | Ω(N)
MIT | 71.7 ft | 70.6 ft | 0.10 | 1.03 | 0.08
Malaysia | 5.4 m | 4.0 m | 0.14 | 1.12 | 0.07

There are differences between the information given to the predictors at the two symposia that are worth noting. At the MIT symposium, the observed embankment performance during Stages 1 and 2 was known, and predictors used it to calibrate their models before predicting the additional height required to produce failure. However, the shear strength at the end of Stage 2 was not known and had to be estimated. Table 5 shows the observed and predicted embankment heights at failure. Only the seven predictions that used rational models to estimate the shear strength are included. Most predictors used limit equilibrium analysis and a few used FEM. Despite different prediction models and assumptions, the results are very consistent, with N̄ = 1.03 and Ω(N) = 0.08. However, although the site conditions are considered to be well defined, Leonards (1982) suggested the probable presence of a weak layer. Depending on the strength of this layer and the failure surface, the safety factor could be between 0.35–1.20, which would imply that N̄ ≈ 1.3 and Ω(N) ≈ 0.25. This is one more example of the importance of the subsoil stratification and its influence on the failure mode.
In the Malaysian symposium, the embankment construction took 100 days, and this was not known to the predictors. All predictors used the initial undrained shear strengths. The five invited predictors all predicted heights smaller than the observed, with N̄ = 1.12. The underestimate of Hf is likely because the predictors did not account for the strength gain during construction. However, Ω(N) = 0.07, which is small considering the different methods and assumptions used. These numbers may serve as a measure of the uncertainty under the most favorable design scenario. The prediction errors are not much larger than those for Type C predictions and those estimated in Table 3.
The above results show that, for the case histories reviewed, the simple methods used in design and reliability analysis are generally satisfactory. However, it must be emphasized that the cases represent a very small spectrum of those in geotechnical practice and that results from simple cases cannot always be generalized. As an example, if one extends the problem to include long-term stability, then the pore pressure must be estimated. For slopes on unsaturated soils, the suction is strongly dependent on the soil-water characteristics (SWC). Studies by Chong et al (2000) and Zhang et al (2005) show that uncertainties about the SWC alone can result in an Ω(Fs) as large as 0.2. This is about equal to the combined uncertainty given in Sec. 3.3.
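The bias statistics in Table 5 are simple sample statistics of N = Hf/Hp over the submitted predictions; a sketch with made-up numbers (not the symposium data) is:

    # Compute prediction bias N = Hf/Hp, its mean and COV (hypothetical data).
    import numpy as np

    Hf = 5.4                                  # observed failure height (m)
    Hp = np.array([4.6, 5.0, 4.8, 5.2, 4.5])  # hypothetical predicted heights (m)

    N = Hf / Hp                               # bias of each prediction
    N_bar = N.mean()
    omega_N = N.std(ddof=1) / N_bar           # COV of the bias

    print(f"N_bar = {N_bar:.2f}, Omega(N) = {omega_N:.2f}")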
5 COMPLEX PROBLEMS

Most design problems are far more complex than those examined in Secs. 3 and 4. Besides the subsoil stratification model, boundary conditions are usually difficult to determine. For slopes on unsaturated soils, surface infiltration may control stability, and the change in pore pressure depends not only on the SWC but also on rainfall characteristics. Another complex boundary condition is the displacement of the support system in braced excavations. FEM analyses can predict the pressure distribution well under idealized conditions, but details of construction processes, which are difficult to predict, can have an important effect on displacements and stresses. Other complex boundary conditions include wave loadings on offshore structures and ground motion from earthquakes. Evaluation of errors in many of the input data is beyond the scope of soil mechanics. Where the uncertainty predicted by reliability methods may be too large to be useful for design decisions, the observational method provides an attractive alternative to evaluate the safety. There have been many examples of the successful use of the observational method since the Chicopee Dam.

6 BAYESIAN UPDATING

The important role of the observational method in solving complex design problems has already been mentioned. Bayesian updating provides a valuable model to evaluate observed performances. It can combine information from different sources, including site investigation data, observations, and analytical results. Early examples include Tang (1971) and Matsuo and Asaoka (1978). Eq. (2) can be extended to solve problems with several performance modes (z) and properties (x) (Wu et al 2007). Two recent applications serve to illustrate the potential of this approach to complex design problems.
Gilbert et al (1998) used updating to evaluate the mobilized shear strength of the clay-geosynthetic interface at the Kettleman Hills landfill. The probability that the strength is xi, given that the slope has failed, is given by P[xi | zj] in Eq. (2), Fig. 2, where P[xi] = prior probability that the strength is xi, P[zj | xi] = probability that the slope fails given that the strength is xi, and P[zj] = probability that the slope fails. The evaluation accounts for various uncertainties in laboratory tests that contribute to P[xi] and inaccuracies in the stability analysis that contribute to P[zj | xi].
Regional records of slope performance, when related to some significant parameter, such as rainfall intensity, are valuable as an initial estimate of failure probability. Cheung and Tang (2005) collected data on the failure probability of slopes in Hong Kong as a function of age and rainfall. This was used as prior probability in Bayesian updating to estimate the failure probability for a specific slope. The failure probability based on age is P[xi], where i = 0 = stable slope and i = 1 = failure, P[zj | xi] = pdf of β as determined from investigations for failed slopes, and β = reliability index for the specific slope under investigation. To apply Bayesian updating, Eq. (2) is rewritten in terms of these quantities. The results are used to formulate a slope maintenance strategy for Hong Kong.

7 SUMMARY AND CONCLUSIONS

The review of reliability methods emphasizes the comparison of reliability analysis with observed performance. Failure in soft to medium clays is examined in detail with well documented case histories. The calculated failure probabilities of slopes on clay with simple subsoil profiles are in good agreement with observed performance and provide some confidence in the methods. It is also clear that for more complex subsoil stratifications, modeling the failure mechanism is critical. Complex boundary conditions where statistical data are often insufficient will require more input in the form of subjective probability based on judgment. For a comprehensive review of this topic, see Baecher and Christian (2003). For very complex design conditions, the observational method is often used to achieve a successful design. Bayesian updating provides an analytical model for the observational method. Two recent examples are given to indicate the potential applications of Bayesian updating.
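A minimal numerical sketch of this two-state updating (all probabilities are hypothetical; the real application uses the pdf of β rather than a single likelihood number) is:

    # Discrete Bayesian updating in the spirit of Eq. (2), hypothetical values.
    prior = {"adequate": 0.9, "weak": 0.1}          # e.g. from regional records
    likelihood = {"adequate": 0.99, "weak": 0.70}   # P[survived season | state]

    evidence = sum(likelihood[x] * prior[x] for x in prior)            # P[z]
    posterior = {x: likelihood[x] * prior[x] / evidence for x in prior}

    print(posterior)   # surviving the season shifts mass toward "adequate"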
ACKNOWLEDGEMENTS

The author thanks the Organizing Committee for the opportunity to make this presentation, Prof. Y. Honjo for advice and help in the preparation of this paper, and Prof. B.K. Low and Prof. G.A. Fenton for informative discussions about their FEM analyses.

REFERENCES

Baecher, G.B., and Christian, J.T. 2003. Reliability and Statistics in Geotechnical Engineering. Chichester: John Wiley and Sons.
Bergado, D.T., Long, P.V., Lee, C.H., Loke, K.H., and Werner, G. 1994a. Performance of reinforced embankment on soft Bangkok clay with high-strength geotextile reinforcement. Geotextiles and Geomembranes, 13:403–420.
Bergado, D.T., Patron Jr, B.C., Youyongwatana, W., Chai, J-C., and Yudhbir. 1994b. Reliability-based analysis of embankment on soft Bangkok clay. Structural Safety, 13:247–266.
Bishop, A.W., and Bjerrum, L. 1960. The relevance of the triaxial test to the solution of stability problems. In Proc. Research Conf. on Shear Strength of Cohesive Soils, 437–501. New York: ASCE.
Casagrande, A. 1965. Role of the "calculated risk" in earthwork and foundation engineering. J. Soil Mechanics and Foundations Division, ASCE, 91:1–40.
Chai, J., and Bergado, D.T. 1993. Performance of reinforced embankment on Muar Clay deposit. Soils and Foundations, 33(4):1–17.
Cheung, R.W.M., and Tang, W.H. 2005. Realistic assessment of slope stability for effective landslide hazard management. Geotechnique, 55:85–94.
Chong, P.C., Phoon, K.K., and Tan, T.S. 2000. Probabilistic analysis of unsaturated residual soil slopes. In R.E. Melchers and M.G. Stewart (eds.), Applications of Statistics and Probability, 375–382. Rotterdam: Balkema.
Christian, J.T., Ladd, C.C., and Baecher, G.B. 1994. Reliability applied to slope stability analysis. J. of Geotechnical Engineering, 120:2180–2207.
DeGroot, D.J., and Baecher, G.B. 1993. Estimating autocovariance of in-situ soil properties. J. Geotechnical Engineering, 119:147–167.
El-Ramly, H., Morgenstern, N.R., and Cruden, D.M. 2002. Probabilistic slope stability analysis for practice. Canadian Geotechnical J., 39:665–683.
Freudenthal, A.M. 1947. The safety of structures. Trans. ASCE, 112:125–180.
Gilbert, R.B., Wright, S.G., and Liedke, E. 1998. Uncertainty in back analysis of slopes: Kettleman Hills case history. J. Geotechnical and Geoenvironmental Engineering, 124:1167–1176.
Griffiths, D.V., and Fenton, G.A. 2004. Probabilistic slope stability analysis by finite elements. J. Geotechnical and Geoenvironmental Engineering, 130:507–518.
Lafleur, J., Silvestri, V., Asselin, R., and Soulie, M. 1988. Behaviour of a test excavation in soft Champlain Sea clay. Canadian Geotechnical J., 25:705–715.
Lambe, T.W. 1973. Predictions in soil engineering. Geotechnique, 23:149–202.
Lefebvre, G., Ladd, C.C., and Pare, J-J. 1988. Comparison of field vane and laboratory undrained shear strengths in soft sensitive clays. In Vane Shear Testing in Soils: Field and Laboratory Studies, STP 1014, 233–246. Philadelphia: ASTM.
Leonards, G.A. 1982. Investigation of failures. J. Geotechnical Engineering, 108:222–283.
Low, B.K. 2008. Practical reliability approach using spreadsheet. In K-K. Phoon (ed.), Reliability-Based Design in Geotechnical Engineering. London: Taylor and Francis.
Low, B.K., and Tang, W.H. 1997. Reliability analysis of reinforced embankments on soft ground. Canadian Geotechnical J., 34:672–685.
Lumb, P. 1970. Safety factors and the probability distribution of strength. Canadian Geotechnical J., 7:225–242.
Malaysian Highway Authority 1989. In R.R. Hudson, C.T. Toh, and S.F. Chan (eds.), Proc. Intern. Symp. on Trial Embankments on Malaysian Marine Clays. Kuala Lumpur, Malaysia.
Massachusetts Institute of Technology 1975. Proc. Foundation Deformation Prediction Symp. Washington, DC: Federal Highway Administration.
Matsuo, M., and Asaoka, A. 1978. Dynamic design philosophy of soils based on the Bayesian reliability prediction. Soils and Foundations, 18(4):1–17.
Meyerhof, G.G. 1970. Safety factors in soil mechanics. Canadian Geotechnical J., 7:333–338.
Peck, R.B. 1969. Advantages and limitations of the observational method in applied soil mechanics, Ninth Rankine Lecture. Geotechnique, 19:171–187.
Phoon, K.K., and Kulhawy, F.H. 1999a. Characterization of geotechnical variability. Canadian Geotechnical J., 36:612–624.
Phoon, K.K., and Kulhawy, F.H. 1999b. Evaluation of geotechnical property variability. Canadian Geotechnical J., 36:625–639.
Rojiani, K.B., Ooi, P.S.K., and Tan, C.K. 1991. Calibration of load factor design code for highway bridge foundations. In Geotechnical Engineering Congress, Geotechnical Special Publication No. 27, 2:1353–1364. New York: ASCE.
Silva, F., Lambe, T.W., and Marr, W.A. 2008. Probability and risk of slope failure. J. Geotechnical and Geoenvironmental Engineering, 134:1691–1699.
Tang, W.H. 1971. A Bayesian evaluation of information for foundation engineering design. In P. Lumb (ed.), Statistics and Probability in Civil Engineering, 173–185. Hong Kong: Hong Kong Univ. Press.
Tang, W.H., Yucemen, M.S., and Ang, A. H-S. 1976. Probability-based short term design of soil slopes. Canadian Geotechnical J., 13:201–215.
Tavenas, F., and Leroueil, S. 1980. The behavior of embankments on clay foundations. Canadian Geotechnical J., 17:236–260.
Taylor, D.W. 1948. Fundamentals of Soil Mechanics. New York: John Wiley and Sons.
Terzaghi, K. 1929. Effect of minor geologic details on the safety of dams. Technical Pub. 215, 31–44. New York: American Institute of Mining and Metallurgical Engineering.
U.S. Army Corps of Engineers 1995. Introduction to probability and reliability methods for use in geotechnical engineering. Technical Letter 1110-2-547, Washington, DC.
Vanmarcke, E.H. 1977. Probabilistic modeling of soil profiles. J. Geotechnical Engineering Division, ASCE, 103:1227–1246.
Wang, Y-J., and Chiasson, P. 2006. Stochastic stability analysis of a test excavation involving spatially variable subsoil. Canadian Geotechnical J., 43:1074–1087.
Wu, T.H. 2003. Variation in clay deposits of Chicago. In E. Vanmarcke and G.A. Fenton (eds.), Probabilistic Site Characterization at the National Geotechnical Test Sites, Geotechnical Special Pub. 121. Reston: ASCE.
Wu, T.H., and Kraft, L.M. 1970. Safety analysis of slopes. J. Soil Mechanics and Foundations Division, ASCE, 96:609–630.
Xu, B., and Low, B.K. 2006. Probabilistic stability analyses of embankments based on finite-element method. J. Geotechnical and Geoenvironmental Engineering, 132:1444–1454.
Yu, H.S., Salgado, R., Sloan, S.W., and Kim, J.M. 1998. Limit analysis versus limit equilibrium for slope stability. J. Geotechnical and Geoenvironmental Engineering, 124:1–11.
Zhang, L.L., Zhang, L.M., and Tang, W.H. 2005. Rainfall-induced slope failure considering variability of soil properties. Geotechnique, 55:183–188.
Zou, J-Z., Williams, D.J., and Xiong, W-L. 1995. Search for critical slip surfaces based on finite element method. Canadian Geotechnical J., 32:233–246.
Keynote lectures
Risk assessment and management for geohazards F. Nadim International Centre for Geohazards (ICG) / Norwegian Geotechnical Institute (NGI), Oslo, Norway
ABSTRACT: Each year, natural disasters cause countless deaths and formidable damage to infrastructure and the environment. In 2004–5, more than 300,000 people lost their lives in natural disasters. Material damage was estimated at USD 300 billion. Many lives could have been saved if more had been known about the risks and possible risk mitigation measures. The paper summarizes the state-of-the-art in the assessment of hazard and risk associated with landslides, earthquakes and tsunamis. The role of such assessments in a risk management context is discussed and general recommendations for identification and implementation of appropriate risk mitigation strategies are provided.
1 INTRODUCTION
“Geohazards”, i.e. natural hazards that are driven by geological features and processes, pose severe threats to humans, property and the natural and built environment. During 2005, geohazards accounted for about 100,000 deaths worldwide, of which 84% were due to October’s Pakistan earthquake. In that year, natural disasters affected 161 million people and cost around US$ 160 billion – over double the decade’s annual average. Hurricane Katrina accounted for three quarters of this cost. During the period 1996 to 2005, natural disasters caused nearly one million lives lost, or double the figure for the previous decade, affecting 2.5 billion people across the globe (World Disaster Report, 2006). When the trend of fatalities due to natural hazards is studied over the last 100 years, it appears that the increase in the known number of deaths is due to the increase in the exposed population in this time scale and the increased dissemination of the information, and not to an increase in the frequency and/or severity of natural hazards. The economic consequences of geohazards show an even more dramatic increasing trend (Munich Re, 2007). Some of the reasons for this increase are obvious, others less so. The post-disaster effects can be especially severe in a vast, densely-populated area where sewers fail and disease spreads. Slums spring up in disaster-prone areas such as steep slopes, which are prone to landslides or particularly severe damage in an earthquake. Many of the world’s fastest growing cities are located on coastal land or rivers where climate variability and extreme weather events, from cyclones to heat waves to droughts, pose increasing risks of disaster. Several well-documented studies have shown clearly that developing countries are more severely affected by natural disasters than developed countries, especially in terms of lives lost (UNDP 2004, ISDR
2004 and International Federation of Red Cross and Red Crescent 2004). Table 1 shows the data compiled by IFRC (2001) for the decade 1991–2000. Of the total number of persons killed by natural disasters in this period, the highly developed countries accounted for only 5% of the casualties.

Table 1. Natural disasters in the period 1991–2000 (Source: IFRC 2001).

Country classification | No. of disasters | No. of lives lost
Low and medium developed countries | 1838 | 649,400
Highly developed countries | 719 | 16,200

In absolute numbers, the material damage and economic loss due to natural hazards in highly developed countries by far exceed those in developing nations. However, this reflects the grossly disproportionate values of fixed assets, rather than real economic vulnerability.
Mitigation and prevention of the risk posed by natural hazards have not attracted widespread and effective public support in the past. However, the situation has changed dramatically during the past decade, and it is now generally accepted that a proactive approach to risk management is required to significantly reduce the loss of lives and material damage associated with natural hazards. The wide media attention on major natural disasters during the last decade has clearly changed people's minds in terms of acknowledging risk management as an alternative to emergency management. A milestone in recognition of the need for natural disaster risk reduction was the approval of the "Hyogo Framework for Action 2005–2015: Building the Resilience of Nations and Communities to Disasters" (ISDR 2005). This document, which was approved by 164 UN countries during the World Conference on Disaster
Reduction in Kobe, January 2005, clarifies international working modes, responsibilities and priority actions for the coming 10 years. The first step in any decision-making process for disaster risk reduction is the quantitative assessment of the risk. This paper provides an overview of the state-of-the-art for hazard and risk assessment for landslides, earthquakes and tsunamis, and discusses possible risk mitigation strategies for these geohazards.
• • • • •
2
In the following sections, the methodologies for answering one or more of these questions for landslides, earthquakes and tsunamis will be discussed.
RISK ASSESSMENT FRAMEWORK
The terminology used in this paper is generally consistent with the recommendations of ISSMGE Glossary of Risk Assessment Terms (listed on TC32 web page: http://www.engmath.dal.ca/tc32/). The important terms used in the context of this paper are: Danger (Threat): Natural phenomenon that could lead to damage, described by geometry, mechanical and other characteristics. Description of a threat involves no forecasting. Hazard: Probability that a particular danger (threat) occurs within a given period of time. Risk: Measure of the probability and severity of an adverse effect to life, health, property, or the environment. Mathematically, risk is defined as Risk = Hazard × Potential worth of loss. Vulnerability: The degree of loss to a given element or set of elements within the area affected by a hazard. It is expressed on a scale of 0 (no loss) to 1 (total loss). In UNISDR terminology on Disaster Risk Reduction (2009), “disaster” is defined as “a serious disruption of the functioning of a community or a society causing widespread human, material, economic or environmental losses which exceed the ability of the affected community or society to cope using its own resources. The term “natural disaster” is slowly disappearing from the disaster risk management terminology because without the presence of humans, one is only dealing with natural processes. These only become disasters when they impact a community or a society. Quantitatively risk can be evaluated from the following expression:
3
LANDSLIDES
3.1 Landslide threat Landslides represent a major threat to human life, property and constructed facilities, infrastructure and natural environment in most mountainous and hilly regions of the world. Statistics from The Centre for Research on the Epidemiology of Disasters (CRED) show that landslides are responsible for at least 17% of all fatalities from natural hazards worldwide. The socio-economic impact of landslides is underestimated because landslides are usually not separated from other natural hazard triggers, such as extreme precipitation, earthquakes or floods.This underestimation contributes to reducing the awareness and concern of both authorities and general public about landslide risk. As a consequence of climate change and increase in exposure in many parts of the world, the risk associated with landslides is growing. In areas with high demographic density, protection works often cannot be built because of economic or environmental constraints, and is it not always possible to evacuate people because of societal reasons. One needs to forecast the occurrence of landslide and the hazard and risk associated with them. Climate change, increased susceptibility of surface soil to instability, anthropogenic activities, growing urbanization, uncontrolled land-use and increased vulnerability of population and infrastructure as a result, contribute to the growing landslide risk. According to the European Union Strategy for Soil Protection (COM232/2006), landslides are one of the main eight threats to European soils. Water plays a major role in triggering of landslides. Figure 1 shows the relative contribution of various landslide triggering events factor in Italy. Heavy rainfall is the main trigger for mudflows, the deadliest and most destructive of all landslides. Many coastal regions have cliffs that are susceptible to failure from sea erosion (by undercutting at the toe) and their geometry (slope angle), resulting in loss of agricultural land and property. This can have a devastating effect on small communities. For instance, parts of the north-east coast cliffs of England are eroding at rates of 1 m/yr.
3 LANDSLIDES

3.1 Landslide threat

Landslides represent a major threat to human life, property, constructed facilities, infrastructure and the natural environment in most mountainous and hilly regions of the world. Statistics from the Centre for Research on the Epidemiology of Disasters (CRED) show that landslides are responsible for at least 17% of all fatalities from natural hazards worldwide. The socio-economic impact of landslides is underestimated because landslides are usually not separated from other natural hazard triggers, such as extreme precipitation, earthquakes or floods. This underestimation contributes to reducing the awareness and concern of both authorities and the general public about landslide risk. As a consequence of climate change and the increase in exposure in many parts of the world, the risk associated with landslides is growing. In areas with high demographic density, protection works often cannot be built because of economic or environmental constraints, and it is not always possible to evacuate people because of societal reasons. One therefore needs to forecast the occurrence of landslides and the hazard and risk associated with them. Climate change, increased susceptibility of surface soil to instability, anthropogenic activities, growing urbanization, uncontrolled land use and the resulting increased vulnerability of population and infrastructure all contribute to the growing landslide risk. According to the European Union Strategy for Soil Protection (COM232/2006), landslides are one of the main eight threats to European soils. Water plays a major role in the triggering of landslides. Figure 1 shows the relative contribution of various landslide triggering events in Italy. Heavy rainfall is the main trigger for mudflows, the deadliest and most destructive of all landslides. Many coastal regions have cliffs that are susceptible to failure from sea erosion (by undercutting at the toe) and their geometry (slope angle), resulting in loss of agricultural land and property. This can have a devastating effect on small communities. For instance, parts of the north-east coast cliffs of England are eroding at rates of 1 m/yr.
Figure 1. Landslide triggers in Italy. Source: CNR-GNDCI AVI Database of areas affected by landslides and floods in Italy.

As a consequence of climatic changes and potential global warming, an increase in landslide activity is expected in the future, due to increased rainfall, changes in hydrological cycles, more extreme weather with rain concentrated in shorter periods of time, meteorological events followed by sea storms causing coastal erosion, and melting of snow and of frozen soils in high mountain regions like the Alps and the Himalayas. The growing landslide hazard and risk, the need to protect people and property, the expected climate change and the need to manage the risk have set the agenda for the profession to assess and mitigate landslide risk.

3.2 Landslide hazard assessment for specific slopes

Hazard assessment for a specific slope usually involves a probabilistic analysis of the slope, while hazard assessment for a region generally requires the computation of the frequency of landslides in the region. For regional analyses, the data to be collected are in the form of maps related to geomorphology, geology, land use/cover and triggers. For specific slopes, the required data for hazard analysis include the slope geometry (height, width, inclination of slope and potential failure plane, shape and length of failure plane, etc.), strength parameters, and data for the possible triggers, such as rainfall intensity, water level, or the severity of dynamic loads (e.g. earthquake magnitude, acceleration and/or other characteristics). The probabilistic models used for a specific slope vary depending on the failure mechanism (e.g. flows, falls or slides) and the slope-forming material (e.g. soil or rock). Analyses of specific slopes use deterministic (factor of safety, numerical analyses) and/or probabilistic methods, e.g. the first-order second-moment method (FOSM), the first-order reliability method (FORM), point estimate methods and Monte Carlo simulation (MCS) (Ang & Tang 1984). Recent trends combine different approaches for an improved model of the hazard(s). An uncertainty analysis is essential prior to the calculation of slope failure probability, as it allows a rational calculation of the total uncertainty from the different sources of uncertainty (e.g. in parameters and models). The quantification and analysis of uncertainties play a critical role in the risk assessment. The stability situation for natural and man-made slopes is often expressed by a factor of safety. The factor of safety is defined as the ratio of the characteristic resistance (resisting force) to the characteristic load (driving force). The approach does not address the uncertainty in load and resistance in a consistent manner. The choice of “characteristic” values allows the engineer to implicitly account for uncertainties by using conservative values of load (high value) and resistance parameters (low value). The choice is somewhat arbitrary. Duncan (1992 and 1996) provided an overview of deterministic slope stability analysis methods. The overview included the factor of safety approach, equilibrium methods of slope stability analysis (Janbu's generalized method of slices, Bishop's method, Spencer's method, and Morgenstern and Price's method, among others), techniques for searching for the critical slip surface (both circular and non-circular), three-dimensional analyses of slope stability, analyses of the stability of reinforced slopes, drained and undrained conditions, and total stress and effective stress analyses. Slopes with nominally the same factor of safety can have significantly different safety margins because of the uncertainties involved. Duncan (2000) pointed out that “Through regulation or tradition, the same value of safety factor is often applied to conditions that involve widely varying degrees of uncertainty. This is not logical.” To evaluate the hazard associated with the failure of a specific slope, the stability assessment must be put into a probabilistic format using one of the techniques mentioned earlier (FOSM, FORM, MCS, etc.). An overview of the available methods for probabilistic slope stability assessment of individual slopes is provided in Nadim et al. (2005).
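To make the probabilistic reformulation concrete, the following sketch estimates the failure probability of a hypothetical infinite slope by Monte Carlo simulation; the geometry and the parameter statistics are invented for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical infinite-slope example (drained, no seepage); all statistics invented.
beta = np.radians(30.0)      # slope angle
gamma, h = 19.0, 5.0         # unit weight (kN/m3) and slide depth (m)
c = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n)   # cohesion (kPa)
phi = np.radians(rng.normal(32.0, 3.0, size=n))          # friction angle

resisting = c + gamma * h * np.cos(beta)**2 * np.tan(phi)
driving = gamma * h * np.sin(beta) * np.cos(beta)
fs = resisting / driving

pf = np.mean(fs < 1.0)       # Monte Carlo estimate of the failure probability
print(f"mean FS = {fs.mean():.2f}, P(FS < 1) = {pf:.4f}")

The same hazard can be estimated with FOSM or FORM at far lower computational cost, at the price of linearizing the limit state.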
3.3 Regional landslide hazard assessment
Figure 2. Schematic approach for landslide hazard and risk evaluation (Nadim et al., 2006).
Figure 3. Landslide hazard map for parts of Latin America (Nadim et al., 2006).

Landslide hazard and risk assessment is often required on a regional or national scale, and it would not be feasible to do a stability assessment for all potentially unstable slopes in the study area. Other techniques, based on Geographical Information Technology (GIT), are therefore employed in these situations. An example of this type of hazard assessment is the study done by Nadim et al. (2006) in the Global Hotspots study for the ProVention Consortium. That model, which is currently being updated for the Global Risk Update project of ISDR, assesses the landslide hazard by considering a combination of the triggering factors and susceptibility indicators. The principles of the model are demonstrated in Figure 2. In the latest version of the model, a landslide hazard index was defined using six parameters: slope factor within a selected grid cell, lithology (or geological conditions), soil moisture condition, vegetation cover index, precipitation factor, and seismic conditions. For each factor, an index of influence was determined, and the relative landslide hazard level Hlandslide was obtained by multiplying and summing the indices. The landslide hazard indices were then calibrated against databases of landslide events in selected (mostly European) countries to obtain the frequency of the landslide events, i.e. the landslide hazard. Figure 3 shows the landslide hazard map for parts of Latin America obtained by Nadim et al. (2006).
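The sketch below mimics this index-based rating for a single grid cell: each factor gets an index of influence, the weighted indices are summed into a relative hazard index, and the index is binned into a hazard class. The indices, weights and class boundaries are invented placeholders, not the calibrated values of Nadim et al. (2006).

# Hypothetical index-based landslide hazard rating for one grid cell.
# Factor indices (here on a 0-9 scale) and weights are invented for illustration.
cell = {"slope": 7, "lithology": 5, "soil_moisture": 6,
        "vegetation": 4, "precipitation": 8, "seismicity": 3}
weights = {"slope": 0.30, "lithology": 0.20, "soil_moisture": 0.10,
           "vegetation": 0.10, "precipitation": 0.15, "seismicity": 0.15}

h_index = sum(weights[k] * cell[k] for k in cell)   # relative hazard index

# Bin the index into qualitative hazard classes (boundaries are arbitrary).
classes = [(3.0, "low"), (5.0, "moderate"), (7.0, "high"), (9.0, "very high")]
h_class = next(label for bound, label in classes if h_index <= bound)
print(f"hazard index = {h_index:.2f} -> {h_class}")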
3.4 Landslide risk assessment
The most complete description of the possible losses (or risk) is a quantitative one in terms of a “probability distribution”, which presents the relative likelihood of any particular loss value, or the probability of losses being less than any particular value. Alternatively, the “expected value” (i.e., the probability-weighted average value) of loss can be determined as a single measure of risk. A general scenario-based risk formulation is given by Nadim & Glade (2006):

E[Loss] = Σ_S P[S] × Σ_C C × P[C | S]

where C is a particular set of losses (of a collectively exhaustive and mutually exclusive set of possible losses), S is a particular scenario (of a comprehensive and mutually exclusive discrete set of possible scenarios), P[S] is the probability of occurrence of scenario S, P[C | S] is the conditional probability of loss set C given that scenario S has occurred, and E[Loss] is the “expected value” of loss. “Loss” may refer to any undesirable consequence, such as loss of human life, economic loss, loss of reputation, etc., in terms of its direct and indirect effects (e.g. local damage of railway tracks and the related interruption of industrial traffic), its effects on different social groups (e.g. individuals, community, insurance, government), as well as its short- and long-term influences on a society (e.g. fatalities could include all children of a community, the tourist industry might collapse). Most often the focus is on the loss of human life. Calculation of the terms in the above equation is not trivial. The hazard term in the above equation (i.e. P[S]) is not constant with time. Moreover, the expected number of fatalities depends on many factors, for example on which weekday and at what time of the day the landslide occurs, whether a warning system is in place and working, etc. The potentially affected population could be divided into groups based on, for example, their temporal exposure to the landslide: people living in houses that are in the path of the potential landslide, locals in the area who happen to be passers-by, and tourists and/or workers who are coincidentally at the location during certain periods of the day or of the year.

Figure 4. Procedure for risk assessment of slopes.

Figure 4 summarizes a general procedure for risk assessment of slides. The key issue is the identification of potential triggers and their probability of occurrence, the associated failure modes and their consequences. The triggering mechanisms could be natural, such as earthquake, tectonic faulting, rainfall, temperature increase caused by climate change or excess pore pressures, or man-made. Generally, one should consider several scenarios of plausible triggers, estimate the run-out distance and extent triggered by these events, and estimate the upper and lower bounds on the annual probability of occurrence of the scenarios (Roberds, 2005). This scenario-based approach involves the following steps (a numerical sketch follows the list):
• Define scenarios for landslide triggering
• Compute the run-out distance, volume and extent of the landslide for each scenario
• Estimate the loss for the different landslide scenarios
• Estimate the risk and compare it with tolerable or acceptable risk levels
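A minimal numerical sketch of the scenario-based formulation, with invented scenario probabilities and conditional loss distributions, is:

# Hypothetical scenario set: annual P[S] and conditional loss distributions P[C|S].
scenarios = {
    "moderate rainfall trigger": (0.02, {0.0: 0.70, 1e5: 0.25, 1e6: 0.05}),
    "extreme rainfall trigger":  (0.005, {0.0: 0.30, 1e5: 0.40, 1e6: 0.30}),
    "earthquake trigger":        (0.001, {0.0: 0.20, 1e5: 0.30, 1e6: 0.50}),
}

expected_loss = 0.0
for p_s, loss_dist in scenarios.values():
    assert abs(sum(loss_dist.values()) - 1.0) < 1e-9  # exhaustive, mutually exclusive
    expected_loss += p_s * sum(c * p for c, p in loss_dist.items())

print(f"E[Loss] = {expected_loss:,.0f} per year")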
4 EARTHQUAKES

4.1 Earthquake threat

Earthquakes can be especially devastating when they occur in areas with high population density. The risk posed by earthquakes to large cities and other densely populated areas is by far greater than that of all other geohazards combined (Nadim & Lacasse, 2008). As with other natural hazards, the global fatality count from earthquakes continues to rise. This has occurred despite the adoption of earthquake-resistant building codes in most countries. In the past five centuries, the global death toll from earthquakes has averaged 100,000 per year, a rate that is dominated by large infrequent disasters, mostly in the developing nations (Bilham, 2004). Just in the past few years, two of the most catastrophic earthquakes in history have occurred in Asia (Pakistan in October 2005 and Sichuan, China in May 2008). The increase in earthquake-induced fatalities is mainly due to the steady growth in global population. At the same time, there is a decline in the fatality rate expressed as a percentage of instantaneous population. It is tempting to attribute this observation to the application of earthquake-resistant construction codes in new city developments. A more pessimistic, and more realistic, interpretation is, however, that the apparent decline in risk is a statistical anomaly, and future extreme earthquake disasters in some of the world's megacities may arrest, or reverse, the current trend.

4.2 Earthquake hazard assessment

Seismic hazard analysis methods have a wide range of applications, from broadly based zonations aimed essentially at describing and delineating the seismicity, to site-specific analyses intended for the design of specific structures. Within both fields the analyses range from relatively cursory to highly detailed, with the level of detail for design purposes depending on the sensitivity of the structure. Prior to 1970, the assessment of seismic hazard was based on a deterministic approach that considered the most likely scenarios for earthquakes that could affect a particular location. The seminal paper of Cornell (1968) introduced the methodology behind Probabilistic Seismic Hazard Assessment (PSHA) and changed the way most engineering seismologists do their hazard analyses. Traditionally, the peak ground acceleration (PGA) has been used to quantify the ground motion in PSHA. Today the preferred parameter is the response spectral acceleration (SA), which gives the maximum acceleration experienced by a damped, single-degree-of-freedom oscillator (a crude representation of building response). The oscillator period is chosen in accordance with the natural period of the structure, and damping values are typically set at 5% of critical. The PSHA methodology for estimating the annual probability of occurrence of a ground motion characteristic is the same for both the PGA and the SA. In both situations, PSHA involves three steps: 1) specification of the seismic-hazard source model; 2) specification of the ground motion model (attenuation relationship); and 3) the probabilistic calculation. The seismic-hazard source model is a description of the magnitude, location, and timing of all earthquakes (usually limited to those that pose a significant threat). For example, a source model might be composed of N total earthquake scenarios, where each has its own magnitude, location, and annual rate of occurrence. The ground motion model used in PSHA consists of the source model and an attenuation relationship. The latter describes how rapidly a particular ground motion parameter decays with distance from the source. Given the typically large number of earthquakes and sites considered in an analysis, attenuation relationships must be simple and easy to apply. The most basic attenuation relationships give the ground motion level as a function of magnitude and distance, but many have other parameters to allow for a few different site types (e.g., rock vs. soil) or styles of faulting. Different relationships have also been developed for different tectonic regimes. All are developed by fitting an analytical expression to observations (or to synthetic data where observations are lacking). With the seismic-hazard source model and attenuation relationship(s) defined, the probabilistic-hazard calculation is conceptually simple. In practice, however, things can get messy. Besides the non-triviality of defining the spatial distribution of small earthquakes on large faults, there is also the problem that different attenuation relationships use different definitions of distance to the fault plane. The logic tree approach is a fundamental and well-established tool in PSHA aimed at capturing the epistemic uncertainties (uncertainties related to our lack of knowledge), primarily associated with seismic sources and with ground motion modelling (Kulkarni et al., 1984; Coppersmith and Youngs, 1986). The logic tree approach, which has been state-of-the-art in PSHA for many years, can also be described as a means by which one can include subjective information in an objective way. The use of experts is a fundamental component in the judgments that are needed in order to model epistemic uncertainties. To this end the state-of-the-art methodology is that developed by SSHAC (1997), which has been summarized in a more easily available way in a review by NRC (1997). McGuire (2004) explained how seismology, geology, strong-motion geophysics, and earthquake engineering contribute to the evaluation of seismic risk. He provided a detailed description of the methods used for the development of consensus probabilistic seismic hazard maps, an important prerequisite for the assessment of earthquake risk.
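The three steps can be illustrated with a toy calculation: a hypothetical source model of three scenarios, an invented attenuation relationship with lognormal scatter (not a published relation), and the probabilistic summation of the annual rate at which a target PGA is exceeded.

import math

# Step 1: hypothetical source model - (magnitude, distance in km, annual rate).
scenarios = [(5.5, 20.0, 0.05), (6.5, 40.0, 0.01), (7.5, 80.0, 0.002)]

# Step 2: toy attenuation relationship ln(PGA[g]) = a + b*M - c*ln(R) with
# lognormal scatter; the coefficients are invented for illustration.
a, b, c, sigma = -4.0, 1.0, 1.2, 0.6

def p_exceed(pga_target, magnitude, distance):
    """P(PGA > target | scenario) from the lognormal attenuation model."""
    ln_mean = a + b * magnitude - c * math.log(distance)
    z = (math.log(pga_target) - ln_mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # 1 - standard normal CDF

# Step 3: probabilistic calculation - total annual exceedance rate at 0.2 g.
target = 0.2
rate = sum(nu * p_exceed(target, m, r) for m, r, nu in scenarios)
print(f"annual rate of PGA > {target} g: {rate:.5f} (return period {1/rate:,.0f} yr)")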
4.3 Earthquake risk assessment
The most comprehensive work towards earthquake risk calculation to date is condensed in HAZUS, a software system prepared for use in the United States by the Federal Emergency Management Agency (FEMA, 2003).
Figure 5. Flow chart of earthquake loss estimation methodology (HAZUS Technical Manual, Federal Emergency Management Agency, 2003).
Figure 6. Estimation of earthquake-induced damage using the capacity-spectrum method.
Figure 7. Probability of a structure with spectral displacement dp in Figure 6 being in different states of damage after the earthquake.

The methodology used in HAZUS for earthquake loss estimation is outlined in Figure 5. The HAZUS approach is based on the so-called capacity-spectrum method (see Figures 6 and 7). It combines the ground motion input in terms of a response spectrum (spectral acceleration versus spectral displacement) with the building's specific capacity curve. The philosophy behind this approach is that any building is structurally damaged by the earthquake-induced permanent horizontal displacement, and not by the acceleration per se. For each building and building type, the interstorey drift is a function of the applied lateral force that can be analytically determined and transformed into building capacity curves (capacity to withstand accelerations without permanent displacements). Building capacity curves naturally vary between different building types, and also between different regions, reflecting building code regulations and local construction practice. The HAZUS methodology includes standard methods for:
1. Inventory data collection based on census tract areas
2. Using database maps of soil type, ground motion, ground failure, etc.
3. Classifying occupancy of buildings and facilities
4. Classifying building structure type
5. Describing damage states
6. Developing building damage functions
7. Grouping, ranking and analyzing lifelines
8. Using technical terminology
9. Providing output.

The HAZUS approach is attractive from a scientific/technical perspective. However, the fact that it is tailored so intimately to US conditions and to specific GIS software makes it difficult to apply in other environments and geographical regions. Aware of the need for a more internationally accessible tool for seismic risk estimation, the International Centre for Geohazards (ICG), through NORSAR and the University of Alicante, has developed a Matlab™-based tool for computing the seismic risk in urban areas using the capacity-spectrum method: SELENA (SEismic Loss EstimatioN using a logic tree Approach, see web page http://www.norsar.no/pc-35-68-SELENA.aspx). SELENA can compute the probability of damage in each of the four damage states (slight, moderate, extensive and complete) for defined building types. SELENA is a stand-alone software package that can be applied anywhere in the world. It includes a logic tree-based weighting of input parameters that allows for the computation of confidence intervals. The loss estimation algorithm in SELENA is based on the HAZUS methodology, and the 144 predefined vulnerability curves detailed in the HAZUS manual (see e.g. Figure 8) can be applied in SELENA.
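The damage-state step of the capacity-spectrum method can be sketched as follows: given the spectral displacement dp at the performance point, lognormal fragility curves give the probability of reaching or exceeding each damage state, and differencing gives the probability of being in each state (as in Figure 7). The median displacements and dispersions below are invented, not values from the HAZUS manual.

import math

def lognorm_cdf(x, median, beta):
    """Lognormal CDF used for fragility curves (beta = log-standard deviation)."""
    return 0.5 * math.erfc(-(math.log(x / median) / beta) / math.sqrt(2.0))

# Hypothetical fragility parameters (median spectral displacement in cm, dispersion)
# for the four damage states; HAZUS tabulates such values per building type.
fragility = {"slight": (1.5, 0.7), "moderate": (3.0, 0.7),
             "extensive": (7.0, 0.8), "complete": (15.0, 0.9)}

d_p = 4.0   # spectral displacement (cm) at the performance point (cf. Figure 6)

p_exceed = {state: lognorm_cdf(d_p, m, b) for state, (m, b) in fragility.items()}

# P(in state i) = P(exceed state i) - P(exceed next state); "none" takes the rest.
states = list(fragility)
p_in = {}
for i, s in enumerate(states):
    nxt = p_exceed[states[i + 1]] if i + 1 < len(states) else 0.0
    p_in[s] = p_exceed[s] - nxt
p_in["none"] = 1.0 - p_exceed[states[0]]

for s, p in p_in.items():
    print(f"P[{s}] = {p:.3f}")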
Figure 9. Historic tsunamis in the world from 17th century until today (from Tsunami Laboratory Novosibirsk, http://tsun.sscc.ru/tgi_1.htm). Different circle sizes and colors indicate different tsunami intensities (proportional to the average tsunami run-up).
Figure 8. Fragility curves for small refineries with unanchored components (HAZUS Technical Manual, FEMA 2003).
As input to SELENA, the user must supply the built area or number of buildings in the different model building types, earthquake sources, attenuation relationships, soil maps and corresponding ground motion amplification factors, capacity curves and fragility curves corresponding to each of the model building types, and finally cost models for repair or replacement. The computed damage probability is subsequently used with the built area or number of buildings to express the results in terms of damaged area (square meters) or number of damaged buildings. Simple models for computing economic damages and casualties are also included.

5 TSUNAMIS
Tsunamis constitute a serious natural hazard for the environment and populations in exposed areas. Future catastrophes can be mitigated or prevented by tsunami hazard evaluation from statistics and geological analysis, by risk analyses from studies of slide dynamics, tsunami propagation and coastal impact; and by risk mitigation measures such as tsunami warning systems, sea walls and dykes, area planning, evacuation routes to safe elevated areas, and education, preparedness and awareness campaigns. Moreover, tsunami predictions are fundamental in engineering design and location of coastal installations, dams, submerged bridges, offshore constructions, aquaculture, etc.
5.1 Tsunami threat

Tsunamis are gravity waves set in motion by large sudden changes of the sea water, having characteristics intermediate between tides and swell waves. Although they are infrequent (ca. 5–10 events reported globally per year), tsunamis represent a serious hazard to the coastal population in many areas, as demonstrated by the devastating effects of the 2004 Indian Ocean tsunami. Earthquakes are the most important mechanism of tsunami generation, causing more than 75% of all tsunamis globally. The generation mechanism is typically dominated by the co-seismic dip-slip fault movement, as strike-slip fault movements are generally less important for wave generation. Submarine landslides are also becoming increasingly recognized as an important trigger. Other sources of tsunamis include rock slides into bodies of water, collapsing/exploding volcanoes, and asteroid impacts. Tsunamis generated by large earthquakes in subduction zones along the major plate boundaries (so-called convergent plate boundaries) contribute most to the global tsunami hazard. Important areas of generation include the “ring of fire” along the Pacific Rim, the Sunda Arc including Indonesia and the Philippines, Makran south of Pakistan, the Caribbean Sea, the Mediterranean Sea, and the fault zones off the Portuguese coastline. Figure 9 shows the historical tsunamis recorded worldwide since 1628.

5.2 Tsunami hazard assessment

Following the catastrophic Indian Ocean tsunami in December 2004, several research groups have started work on the development of a theoretical framework for Probabilistic Tsunami Hazard Assessment (PTHA). The PTHA methodologies are closely related to the well-established Probabilistic Seismic Hazard Assessment (PSHA), and we define PTHA, consistently with the definition of PSHA, as the probability of exceeding a given tsunami size (either in terms of tsunami height or inundation) at a given location and in a given time interval. In this respect, the tsunami problem can (again in analogy with PSHA) be divided into three parts: the source (for example the earthquake generating a tsunami), the path (propagation from the source to some short distance from the coast line) and the site effects (inundation distance and height based on the local bathymetry and topography). In traditional PSHA, the sources are described through a zonation, which is characterized by activity rates in terms of a Gutenberg-Richter relationship. This is also the most common approach for PTHA, with the difference that distant tsunami sources must also be accounted for in the PTHA. The path effects in traditional PSHA are described through simple attenuation relations giving the ground shaking level as a function of earthquake magnitude and distance from the rupturing fault. This approach cannot
be applied for PTHA due to the strong influence of bathymetry on the tsunami propagation. It is therefore necessary to perform full wave propagation modeling to include the path effects in PTHA. This is the largest difference between PSHA and PTHA, though it should be noted that PSHA methodologies are currently being developed based on full ground shaking scenarios rather than simple attenuation relations. However, because the computation time required for the solution of the path problem may limit its practical applicability, a more efficient and practical (but less accurate) PTHA approach would be to use approximate “amplification functions” for the tsunami maximum inundation or run-up heights (analogous to “attenuation functions” for peak ground acceleration in PSHA), which depend on the profile of the sea floor from a certain water depth up to the site. Thio et al. (2007) presented a method for Probabilistic Tsunami Hazard Analysis (PTHA) based on traditional Probabilistic Seismic Hazard Analysis (PSHA). In lieu of attenuation relations, their method uses the summation of finite-difference Green's functions that have been pre-computed for individual sub-faults. This enables them to rapidly construct scenario tsunami waveforms from an aggregate of sub-faults that comprise a single large event. For every fault system, it is then possible to integrate over sets of thousands of events within a certain magnitude range that represent a fully probabilistic distribution. Because of the enclosed nature of ports and harbors, effects of resonance need to be addressed as well. Their method therefore focuses not only on the analysis of exceedance levels of maximum wave height, but also of spectral amplitudes. As in PSHA, these spectral amplitudes can be matched with the spectral response of harbors, and thus allow a comprehensive probabilistic analysis of tsunami hazard in ports and harbors. As mentioned earlier, Probabilistic Seismic Hazard Analysis (PSHA) is based on the methodology originally proposed by Cornell (1968) and is well documented in many references (e.g. SSHAC, 1997). The majority of tsunamis are caused by earthquake-induced displacement of the seafloor. Most of the world's largest tsunamis, which have caused damage at locations thousands of miles away, have been caused by megathrust (subduction interface) earthquakes around the Pacific Rim and Indian Ocean. These include the 1960 Chile earthquake, the 1964 Alaska earthquake and the 2004 Sumatra-Andaman earthquake. On a local scale, smaller earthquakes can cause significant tsunamis as well, but usually the hazard from these events is lower because of their localized impact. A crucial element in PTHA is the estimation of the frequency of occurrence and maximum magnitudes of large tsunami-generating earthquakes in each source region. Due to the very short historical record for megathrusts and other large earthquakes in relation to their recurrence times, it is not possible to base such constraints directly on the observed seismicity. Thio et al. (2007) therefore used models that were partly based on earthquake mechanics, which can be as simple as magnitude/area relations but can also include physically-based constraints in addition to empirical data such as earthquake locations. Uncertainties in source parameters, such as slip rate and maximum possible earthquake on a source, were included using logic tree analysis. Tsunami hazard assessment methodologies are one of the main research topics within the project TRANSFER (Tsunami Risk and Strategies for the European Region, http://www.transferproject.eu/). TRANSFER aims at improving the understanding of tsunamis in the Euro-Mediterranean region, including hazard and risk assessment and strategies for risk reduction.
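A minimal sketch of the logic-tree weighting used to carry epistemic uncertainty through a PTHA (or a PSHA) is given below; each branch pairs one source-model assumption with one amplification assumption, and the branches, weights and hazard values are all invented for illustration.

# Hypothetical logic-tree branches: (weight, annual prob. of run-up > 5 m at site).
branches = [
    (0.4, 2.0e-3),   # preferred source model, moderate amplification
    (0.3, 1.0e-3),   # lower activity rate
    (0.2, 5.0e-3),   # higher maximum magnitude
    (0.1, 8.0e-3),   # strong near-shore amplification
]
assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9   # weights must sum to one

mean_hazard = sum(w * p for w, p in branches)
lo = min(p for _, p in branches)
hi = max(p for _, p in branches)
print(f"weighted mean hazard = {mean_hazard:.2e} /yr (branch range {lo:.0e}-{hi:.0e})")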
5.3 Tsunami risk assessment

Tsunami vulnerability and risk assessment is a relatively unexplored discipline, and few reliable models exist. The Tsunami Pilot Study Working Group (2006) lists the following tsunami parameters as possible impact metrics that may enter as parameters in tsunami vulnerability models (i.e. mortality, building damage, forces on structures):
• Tsunami flow depth
• Wave current speed
• Wave current acceleration
• Wave current inertia component (product of acceleration and flow depth)
• The momentum flux (product of squared wave current speed and flow depth). In many circumstances this is the best “damage indicator”.

The above-mentioned parameters are important in determining the mortality of the tsunami, as well as the wave forces on structures. The selection of the flow depth is obvious, it being a direct measure of the thickness of the flowing water; the flow depth is also related to the current velocity. In a national tsunami risk evaluation for New Zealand, Berryman et al. (2005) suggested an empirically derived mortality model based solely on the flow depth of the tsunami (Figure 10); however, we note that such an approach is most likely too simplistic (see discussion below). The fluid force on a structure is proportional to the momentum flux, as are the impact forces of flotsam, and hence the momentum flux is also a natural candidate as an impact metric. Perhaps more surprising is the inclusion of the wave current acceleration. A tsunami wave that runs up the beach will often accelerate when it hits the shoreline after breaking (Synolakis, 1987), and this effect may be counterintuitive for a lay person observing the tsunami, leading to a misinterpretation of the escape time. Tsunami risk evaluation is the combination of the tsunami hazard, tsunami exposure, and vulnerability as described above. A risk evaluation may focus on different elements at risk, for instance mortality or destruction of buildings or installations. For a proper evaluation, it is therefore crucial to determine the correct damage metrics.
Figure 10. Empirical vulnerability (mortality) model of Berryman et al. (2005).

Generally, the population, buildings etc. exposed to tsunamis are found by combining flood maps with population density maps, infrastructure maps and building maps in a GIS framework. Regional and global hazard evaluations aim at a rough quantification of the effects of tsunami inundation, and simple damage metrics and measures of exposure should preferably be used. The tsunami flood maps may be found using available computational tools. However, approximate methods for near-shore wave amplification usually have to be applied for large regions. In theory, mortality risk may then be obtained using relations similar to the one in Figure 10. In practice, however, the regional analysis is limited to the hazard or, at most, the population exposure. This is because the mortality models are too simplistic, leaving out a number of factors that are important for mortality, such as local evaluation of tsunami velocity and momentum flux, tsunami travel time, effects of warning systems, time of tsunami attack (which season, what time during the day, …), etc., and hence add little value to the analyses. Local risk evaluations, on the other hand, can be done in detail and provide insight into appropriate local risk mitigation strategies. In a local analysis, run-up simulations may be done for smaller regions, which allows for a more accurate description of both the flow field and the inundated area. Furthermore, mapping of the different vulnerability parameters may be performed in far more detail than in regional evaluations, enabling the mapping of population and building vulnerability at a high level of accuracy.
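The impact metrics listed in Section 5.3 are simple products of flow depth, speed and acceleration; the sketch below evaluates them at one hypothetical inundation point, with an invented logistic curve standing in for a depth-based mortality model such as that of Berryman et al. (2005).

import math

# Hypothetical local flow conditions from an inundation simulation.
h = 2.5      # flow depth (m)
v = 4.0      # wave current speed (m/s)
a = 1.5      # wave current acceleration (m/s^2)

momentum_flux = h * v**2       # often the best "damage indicator"
inertia = h * a                # inertia component (depth x acceleration)

# Placeholder depth-only mortality curve (invented logistic, NOT the Berryman
# et al. (2005) relation; it only marks where such a curve would plug in).
mortality = 1.0 / (1.0 + math.exp(-(h - 4.0)))
print(f"momentum flux = {momentum_flux:.1f}, inertia = {inertia:.2f}, "
      f"indicative mortality = {mortality:.2f}")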
Figure 11. Risk estimation, analysis and evaluation as part of risk management and control (NORSOK Standard Z-013, 2001).

6 GEOHAZARDS RISK MANAGEMENT

6.1 Risk management framework

Risk management broadly refers to coordinated activities to assess, direct and control the risk posed by geohazards to society. It integrates the recognition and assessment of risk with the development of appropriate strategies for its mitigation. The risk management process is a systematic application of management policies, procedures and practices to the tasks of communicating, consulting, establishing the context, identifying, analyzing, evaluating, monitoring and implementing risk mitigation measures (Draft ISO/IEC 31010 Ed. 1.0: Risk Management – Risk Assessment Techniques). As depicted in Figure 11, risk assessment is an important component of risk management. In the context of geohazards, Fell et al. (2005) provide a comprehensive overview of the state-of-the-art in landslide risk management. A large body of literature on earthquake risk management also exists. Tsunami risk management, however, is a relatively new topic, and very few references specifically address this issue.

6.2 Acceptable risk

One of the most difficult tasks in risk assessment and management is the selection of risk acceptance criteria. As guidance on what risk level a society is apparently willing to accept, one can use ‘F-N curves’. The F-N curves relate the annual probability of causing N or more fatalities (F) to the number of fatalities, N. The term “N” can be replaced by other quantitative measures of consequences, such as costs. The curves can be used to express societal risk and to describe the safety levels of particular facilities.

Figure 12. F-N curves (Proske, 2004).
Figure 13. Hong Kong criteria (GEO, 2001).

Figure 12 presents a family of F-N curves. Man-made risks tend to have a steeper curve than natural hazards in the F-N diagram (Proske, 2004). F-N curves give statistical observations, not acceptable or tolerable thresholds. Who should define the acceptable and tolerable risk levels? The potentially affected population, the government, or the design engineer? Societal risk-to-life criteria reflect the reality that society is less tolerant of events in which a large number of lives are lost in a single event than if the same number of lives is lost in a large number of separate events. An example is the public concern over the loss of large numbers of lives in airline crashes, compared with the much larger number of lives lost in road traffic. Figure 13 presents an interim risk criterion recommendation for natural hillsides in Hong Kong (GEO, 1998). Acceptable risk refers to the level of risk requiring no further reduction. It is the level of risk society desires to achieve. Tolerable risk refers to the risk level one compromises on in order to gain certain benefits. A construction with a tolerable risk level requires no action/expenditure for reduction, but it requires proper control and risk reduction if possible. Risk acceptability depends on several factors, such as voluntary vs. involuntary situation, controllability vs. uncontrollability, familiarity vs. unfamiliarity, short- vs. long-term effects, existence of alternatives, type and nature of consequences, gained benefits, media coverage, availability of information, personal involvement, memory, and level of trust in regulatory bodies. Voluntary risk levels tend to be higher than involuntary risk levels. Once the risk is under personal
control (e.g. driving a car), it is more acceptable than risk controlled by other parties. For landslides, natural and engineered slopes can be considered as posing voluntary and involuntary risk, respectively. Societies experiencing frequent geohazards may have a different risk acceptance level than those experiencing them rarely. Informed societies can achieve better preparedness for natural hazards. Although the total risk is defined as the sum of the specific risks, this sum is difficult to evaluate, since the units for expressing each specific risk differ. Individual risk has the unit of loss of life/year, while property loss has the unit of loss of property/year (e.g. USD/yr). Risk acceptance and tolerability have two different perspectives: the individual's point of view and the society's point of view, or societal risk.
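F-N criteria are often written as a limit line of the form F ≤ C/N^α in the log-log diagram. The sketch below checks an invented societal risk curve against such a line; the anchor value and slope are illustrative only, since the actual thresholds differ between jurisdictions and facility types.

# Hypothetical societal risk curve: annual frequency F of events with >= N fatalities.
fn_curve = {1: 1e-3, 10: 2e-4, 100: 5e-5, 1000: 1e-6}

# Invented criterion line F = C / N**alpha (slope and anchor vary by jurisdiction).
alpha, C = 1.0, 1e-3

for n, f in fn_curve.items():
    limit = C / n**alpha
    verdict = "acceptable" if f <= limit else "NOT acceptable"
    print(f"N >= {n:>4}: F = {f:.1e} vs limit {limit:.1e} -> {verdict}")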
6.3 Risk mitigation strategies

The strategies for mitigating the risks associated with geohazards can broadly be classified into six categories: 1) land use plans, 2) enforcement of building codes and good construction practice, 3) early warning systems, 4) community preparedness and public awareness campaigns, 5) measures to pool and transfer the risks, and 6) construction of physical protection barriers. The first five strategies are referred to as non-structural measures, which aim to reduce the consequences of geohazards; the last strategy comprises active intervention and engineering works, which aim to reduce the frequency and severity of the geohazards.
Identification of the optimal risk mitigation strategy involves: (1) hazard assessment (how often do the geohazards happen?), (2) analysis of the possible consequences for the different scenarios, (3) assessment of possible measures to reduce and/or eliminate the potential consequences, (4) recommendation of specific remedial measures and, if relevant, reconstruction and rehabilitation plans, and (5) transfer of knowledge and communication with authorities and society.
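Steps (2) to (4) can be made concrete by ranking candidate measures by the expected risk reduction they buy per unit of cost; the options and all numbers below are invented for illustration.

# Hypothetical mitigation options: (annualized cost, expected annual residual risk).
baseline_risk = 500_000.0   # expected annual loss without mitigation (USD/yr)
options = {
    "early warning system": (30_000.0, 350_000.0),
    "retention barrier":    (120_000.0, 200_000.0),
    "relocation of houses": (400_000.0, 50_000.0),
}

# Rank by benefit/cost ratio (risk reduction bought per unit of annualized cost).
for name, (cost, residual) in sorted(
        options.items(),
        key=lambda kv: (baseline_risk - kv[1][1]) / kv[1][0],
        reverse=True):
    benefit = baseline_risk - residual
    print(f"{name:<22} benefit/cost = {benefit / cost:.2f}")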
6.4 Reducing the geohazards risk in developing countries

One can observe a positive trend internationally whereby preventive measures are increasingly recognized, both at the government level and among international donors. There is, however, a great need for intensified efforts, because the risk associated with natural disasters clearly increases far more rapidly than the efforts made to reduce it. Three key pillars for the reduction of the risk associated with natural hazards in developing countries are suggested (modified from Kjekstad, 2007):

Pillar 1: Identify and locate the risk areas, and quantify the hazard and the risk. Hazard and risk assessment is the central pillar in the management of the risk associated with natural hazards. Without knowledge of the characteristics of the hazard and the risk, it would not be meaningful to plan and implement mitigation measures.

Pillar 2: Implement structural and non-structural risk mitigation measures, including early warning systems. Mitigation means implementing activities that prevent or reduce the adverse effects of extreme natural events. In a broad perspective, mitigation includes structural and geotechnical measures, effective early warning systems, and political, legal and administrative measures. Mitigation also includes efforts to influence the lifestyle and behavior of endangered populations in order to reduce the risk. The Indian Ocean tsunami of 2004, which killed at least 230,000 people, would have been a tragedy whatever the level of preparedness; but even when disaster strikes on an unprecedented scale, there are many factors within human control, such as a knowledgeable population, an effective early warning system and constructions built with disasters in mind. All these measures can help minimize the number of casualties. Improved early warning systems have been instrumental in achieving disaster risk reduction for floods and tropical cyclones. Cuba has demonstrated that such reduction is not necessarily a question of expensive means. However, the recent tropical cyclone Nargis is a sad reminder that much remains to be done in decreasing the risk from tropical cyclones. Meteorological forecasting in regions where cyclones generally occur is quite effective, but early warning and response remain insufficient in unexpected regions (e.g. Catarina in 2004 in the South Atlantic Ocean). As a consequence, the focus in Early Warning System (EWS) development should take into account climatic changes and/or exceptional situations.

Pillar 3: Strengthen national coping capacity. Most of the developing countries lack sufficient coping capacity to address a wide range of hazards, especially rare events like tsunamis. International cooperation and support are therefore highly desirable. A number of countries have over the last decade been supportive with technical resources and financial means to assist developing countries where the risk associated with natural hazards is high. A key challenge with all projects from the donor countries is to ensure that they are need-based, sustainable and well anchored in the countries' own development plans. Another challenge is coordination, which has often proven difficult because the agencies generally have different policies and the implementation periods of the various projects do not overlap. A subject which is gaining more and more attention is the need to secure 100% ownership of the project in the country receiving assistance. The capacity building initiatives should focus on institutions dealing with disaster risks and disaster situations in the following four policy fields:
• Risk assessment and communication, i.e. the identification, evaluation and possibly quantification of the hazards affecting the country and their potential consequences, and the exchange of information with and awareness-raising among stakeholders and the general public;
• Risk mitigation, i.e. laws, rules and interventions to reduce exposure and vulnerability to hazards;
• Disaster preparedness, warning and response, i.e. procedures to help exposed persons, communities and organizations be prepared for the occurrence of a hazard and, when a hazard occurs, alert and rescue activities aimed at mitigating its immediate impact;
• Recovery enhancement, i.e. support to disaster-stricken populations and areas in order to mitigate the long-term impact of disasters.

In each of these fields, institutions can operate at local, regional, national or international levels.
7 CONCLUDING REMARKS
Management of the risk associated with geohazards involves decisions at local, regional, national and even transnational levels. Lack of information about the risk appears to be a major constraint on providing improved mitigation in many areas. The selection of appropriate mitigation strategies should be based on a future-oriented quantitative risk assessment, coupled with sound knowledge of the technical feasibility, as well as the costs and benefits, of risk-reduction measures. Technical experts acting alone cannot choose the “appropriate” set of mitigation and prevention measures in many risk contexts. The complexities and technical details of
managing geohazards risk can easily conceal that any strategy is embedded in a social/political system and entails value judgments about who bears the risks and benefits, and who decides. Policy makers and affected parties engaged in solving environmental risk problems are thus increasingly recognizing that traditional expert-based decision-making processes are insufficient, especially in controversial risk contexts. Risk communication and stakeholder involvement have been widely acknowledged as supporting decisions on uncertain and controversial environmental risks, with the added bonus that participation enables the addition of the local and anecdotal knowledge of the people most familiar with the problem. Precisely which citizens, authorities, NGOs, industry groups, etc., should be involved in which way, however, has been the subject of a tremendous amount of experimentation. The decision is ultimately made by political representatives, but stakeholder involvement, combined with good risk-communication strategies, can often bring new options to light and delineate the terrain for agreement. The human impact of geohazards is far greater in developing countries than in developed countries. Capacity building initiatives focusing on organizations and institutions that deal with disaster risks and disaster situations could greatly reduce the vulnerability of the population exposed to natural disasters. Many of these initiatives can be implemented within a few years and are affordable even in countries with very limited resources.

ACKNOWLEDGEMENT

The author wishes to thank his colleagues at ICG for their direct and indirect contributions to this paper. Special thanks are due to Prof. Hilmar Bungum of NORSAR (earthquake), and Drs Carl Harbitz (tsunami), Finn Løvhølt (tsunami) and Suzanne Lacasse (landslide and risk management) of the Norwegian Geotechnical Institute.

REFERENCES

Ang, A.H-S. & Tang, W.H. 1984. Probability Concepts in Engineering Planning and Design I & II. John Wiley & Sons, New York.
Berryman, K. et al. (editors) 2005. Review of tsunami hazard and risk in New Zealand. Geological and Nuclear Sciences (GNS) report 2005/104. 140 p.
Bilham, R. 2004. Urban earthquake fatalities: A safer world or worse to come? Seism. Res. Lett., December 2004.
Coppersmith, K.J. & Youngs, R.R. 1986. Capturing uncertainty in probabilistic seismic hazard assessment within intraplate tectonic environments. Proc. Third US Natl. Conf. on Earthquake Engineering, Earthquake Engineering Research Institute, Vol. 1, pp. 301–312.
Cornell, C.A. 1968. Engineering seismic risk analysis. Bull. Seism. Soc. Am. 58, pp. 1583–1606.
Duncan, J.M. 1992. State-of-the-art: Static stability and deformation analysis. Stability and performance of slopes and embankments-II, 1: 223–266.
Duncan, J.M. 1996. Soil slope stability analysis. Landslides: investigation and mitigation. Ed. by Turner & Schuster. Washington 1996 TRB Report 247.
Duncan, J.M. 2000. Factors of safety and reliability in geotechnical engineering. J. of Geotechnical and Geoenvironmental Engineering 126(4): 307–316.
Düzgün, S. & Lacasse, S. 2005. Vulnerability and acceptable risk in integrated risk assessment framework. Landslide Risk Management, Hungr, Fell, Couture & Eberhardt (eds), Taylor & Francis, London: 505–515.
Federal Emergency Management Agency 2003. HAZUS-MH MR3 Technical Manual – Earthquake Model. Web site: http://www.fema.gov/plan/prevent/hazus/index.shtm
Fell, R., Ho, K.K.S., Lacasse, S. & Leroi, E. 2005. A framework for landslide risk assessment and management – State of the Art Paper 1. Landslide Risk Management, Hungr, Fell, Couture & Eberhardt (eds), Taylor & Francis, London: 3–25.
GEO (Geotechnical Engineering Office) 1998. Landslides and Boulder Falls from Natural Terrain: Interim Risk Guidelines. GEO Report 75, Gov. of Hong Kong SAR.
IFRC (International Federation of Red Cross and Red Crescent Societies) 2001. World Disaster Report, Focus on Reducing Risk. Geneva, Switzerland, 239 pp.
IFRC (International Federation of Red Cross and Red Crescent Societies) 2004. World Disaster Report.
ISDR (International Strategy for Disaster Reduction) 2005. Hyogo Framework for Action 2005–2015, 21 pp.
Kjekstad, O. 2007. The challenges of landslide hazard mitigation in developing countries. Keynote Lecture, 1st North-American Landslide Conference, Vail, Colorado, 3–8 June 2007.
Kulkarni, R.B., Youngs, R.R. & Coppersmith, K.J. 1984. Assessment of confidence intervals for results of seismic hazard analysis. Proc. Eighth World Conf. on Earthquake Engineering, San Francisco, Vol. 1, pp. 263–270.
Lacasse, S. & Nadim, F. 2008. Landslide risk assessment and mitigation strategy. Invited Lecture, State-of-the-Art. First World Landslide Forum, Global Landslide Risk Reduction, International Consortium of Landslides, Tokyo. Chapter 3, pp. 31–61.
Lee, E.M. & Jones, D.K.C. 2004. Landslide Risk Assessment. Thomas Telford, London.
McGuire, R. 2004. Seismic Hazard and Risk Analysis. EERI monograph (MNO-10), ISBN 0-943198-01-1, 221 p.
Munich Re Group 2007. NatCat Service 2007 – Great natural disasters 1950–2007.
Nadim, F. 2004. Risk and vulnerability analysis for geohazards. Glossary of Risk Assessment Terms. ICG Report 2004-2-1, NGI Report 20031091-1, Oslo, Norway.
Nadim, F., Einstein, H.H. & Roberds, W.J. 2005. Probabilistic stability analysis for individual slopes in soil and rock – State of the Art Paper 3. Landslide Risk Management, Hungr, Fell, Couture & Eberhardt (eds), Taylor & Francis, London: 63–98.
Nadim, F. & Glade, T. 2006. On tsunami risk assessment for the west coast of Thailand. ECI Conference: Geohazards – Technical, Economical and Social Risk Evaluation, 18–21 June 2006, Lillehammer, Norway.
Nadim, F., Kjekstad, O., Peduzzi, P., Herold, C. & Jaedicke, C. 2006. Global landslide and avalanche hotspots. Landslides, Vol. 3, No. 2, pp. 159–174.
Nadim, F. & Lacasse, S. 2008. Effects of global change on risk associated with geohazards in megacities. Keynote Lecture, Development of Urban Areas and Geotechnical Engineering, St. Petersburg, Russia, 16–19 June 2008.
NORSOK Standard Z-013 2001. Risk and emergency preparedness analysis. Rev. 2, www.standard.no
Proske, D. 2004. Katalog der Risiken. Eigenverlag, Dresden. 372 p.
Roberds, W.J. 2005. Estimating temporal and spatial variability and vulnerability – State of the Art Paper 5. Landslide Risk Management, Hungr, Fell, Couture & Eberhardt (eds), Taylor & Francis, London: 129–158.
SSHAC (Senior Seismic Hazard Analysis Committee) 1997. Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. US Nuclear Regulatory Commission report CR-6372, Washington DC.
Synolakis, C.E. 1987. The run-up of solitary waves. J. Fluid Mech., 185, pp. 523–545.
Thio, H.K., Somerville, P. & Ichinose, G. 2007. Probabilistic Analysis of Strong Ground Motion and Tsunami Hazards in Southeast Asia. Proceedings of the 2007 NUS-TMSI Workshop, National University of Singapore, Singapore, 7–9 March.
Tsunami Pilot Study Working Group 2006. Seaside, Oregon Tsunami Pilot Study – Modernization of FEMA flood hazard maps. NOAA OAR Special Report, NOAA/OAR/PMEL, Seattle, WA, 94 pp. + 7 appendices.
UNDP (United Nations Development Programme) 2004. Reducing Disaster Risk – A Challenge for Development. Bureau for Crisis Prevention and Recovery, New York, 146 pp.
UNISDR 2009. Terminology on Disaster Risk Reduction. http://www.unisdr.org/eng/library/UNISDR-terminology2009-eng.pdf
World Disaster Report 2006. http://www.redcross.ca
Risk management and its application in mountainous highway construction H.W. Huang & Y.D. Xue Key Laboratory of Geotechnical & Underground Engineering, Ministry of Education Department of Geotechnical Engineering, Tongji University, Shanghai, P.R. China
Y.Y. Yang Chief Engineering Office, Shanghai HuShen Highway Construction Development Co., Ltd. Shanghai, P.R. China
ABSTRACT: More and more highways will be constructed in the mountainous areas of western China. Mountainous highway projects are subject to more risk than other construction because they entail intricate site conditions and inherent uncertainties. Risk management is an effective approach to reduce and control the risks reasonably in order to achieve the project objectives. How to put risk management techniques into practice in highway projects is the focus of this paper. Based on a study of the risk mechanism, an integrated project risk management framework is put forward. The organization of the risk management team is very important, and each member should be selected suitably. Risk identification deserves the greatest effort because its results significantly influence the risk management aims. A synthetical rock tunnel risk identification method based on Fault Tree Analysis (FTA) is proposed and named RTRI. The risk identification process is generally conducted in an iterative cycle which keeps step with the construction. The risk database, which is important and useful to risk management, is discussed in detail in another paper. The proposed risk management model for mountainous highway projects has been applied to the Shuifu-Maliuwan Highway in Yunnan Province, China. The risks of construction time delay, cost overrun, poor quality and worker safety were assessed carefully. The results proved relevant and helped the decision-maker to deal with the risks effectively.
1 INTRODUCTION

In general, highway construction projects are very complex and dynamic (Akintoye & MacLeod 1997, Carr & Tah 2001). Each project involves a variety of organizations and a large number of people. It is known that all participants in a construction project are continuously faced with a variety of unexpected or unwanted situations. The projects carry an abundance of risk due to the nature of construction. It is recognized that risk is built into any actions of people. In a construction project, risk is perceived as events that influence the project objectives of cost, quality, time, safety and environment. To reduce or control the project risk, most people agree that project risk management (PRM) is suitable (Wideman 1986). Lyons & Skitmore (2004) found in a survey that the use of risk management in the Queensland engineering construction industry is medium to high; it has become a critical part of integrated project management systems. Yet even though most people think risk management plays a crucial role in project management, and numerous papers on the subject have been published, the actual use of risk management in practice is limited. Lack of an industry-accepted model of risk analysis may be one factor that limits the implementation of risk management (Lyons & Skitmore 2004). Along with rapid economic development, more and more highways will be constructed in mountainous regions. The characteristics of these projects include complex geology, poor transport conditions, bad weather, earthquakes and so on. On the other hand, these projects always include more engineering types: tunnel, bridge, slope, subgrade and road surface. The potential risks include tunnel collapse, water burst, landslide, rockfall, injury, explosion, etc. It is evident that mountain-area highway construction is faced with significant risks. This paper aims to supply a practicable risk management framework and describe the process of each step. Finally, a case study is presented.
2 GENERAL RISK MANAGEMENT

Risk management is a system which aims to identify and quantify all risks to which the project is exposed so that a conscious decision can be taken on how to manage the risks (Roger & George 1996, Eskesen et al. 2004, MOHURD 2007). It includes a series of processes. The Association of Project Managers defines nine phases of risk management: define, focus, identify, structure, ownership, estimate, evaluate, plan, and manage (APM Group 1997, Chapman 1997). Chapman (2001) considers that the whole process of risk analysis and management is composed of two stages: risk analysis and risk management. Roger defines the general stages as risk identification, risk classification, risk analysis, risk attitude and risk response. This risk management framework is illustrated in Figure 1.

Figure 1. The risk management framework (Roger & George 1996): risk identification, risk classification, risk analysis, risk attitude, risk response.

A definition of project risk is important before risk can be managed. Wideman defines project risk as “the chance of certain occurrences adversely affecting project objectives”. Chapman (2001) defines it as “an event, which should it occur, would have a positive or negative effect on the achievement of a project's objectives”. The main difference is whether or not the chance of the risk is considered. In general, construction project risk is of two types: pure risk and speculative risk. In this paper, risk is defined as “a function of a potential adverse event's occurrence probability and consequence”. It is clear that only the pure risk concept is used in highway construction project risk management. Risk identification is the most important stage in risk management. Unidentified risks may hide severe threats to a project's objectives. Categorization of the sources of risk is helpful to risk identification. British Standard 6079 considers that risks or adverse events generally fall into one of the following five categories: technological, political, managerial, sociological and financial. Zayed et al. (2008) divided highway construction risks into two areas: company (macro) and project (micro) levels. The micro hierarchy includes emerging technology usage, contracts and legal issues, resources, design, quality, weather, etc. Technological risk assessment is in general the main part of construction risk management.
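Under this definition of risk as a function of occurrence probability and consequence, one common concrete form is a qualitative risk matrix. The grading scales and level boundaries in the sketch below are invented for illustration and are not taken from any code or guideline.

# Illustrative qualitative risk matrix: risk level = f(probability grade,
# consequence grade). The 1-5 grades and the level boundaries are invented.
def risk_level(p_grade: int, c_grade: int) -> str:
    score = p_grade * c_grade            # simple product form of f(P, C)
    if score <= 4:
        return "low"
    if score <= 9:
        return "moderate"
    if score <= 16:
        return "high"
    return "unacceptable"

print(risk_level(2, 2))   # low
print(risk_level(3, 4))   # high
print(risk_level(5, 5))   # unacceptable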
3 MOUNTAINOUS HIGHWAY CONSTRUCTION

3.1 Features

In China, with the rapid economic development, more and more highways are constructed or planned in mountainous areas. The engineering environments and conditions differ greatly from those in plain areas. The main characteristics of mountain-area highways can be summarized as:
• high mountains
• inconvenient transportation
• bad weather
• unfavorable geology
• limited construction methods
• long tunnels
• long bridges with high piers
• deep cut slopes
• construction material shortage
• earthquakes, and so on.

In general, the main engineering types include rock tunnels, bridges, cut/natural slopes, subgrade and road surface. According to surveys and statistics of accidents related to mountain road engineering in the last decade, slope failure accidents are the most frequent and the severest, in both number and loss. Next come long tunnels, which often suffer from poor geological conditions, and then high-pier bridges. New techniques and materials are widely used in bridge construction, a situation that hints at potentially severe risks. Accidents involving road subgrade and road surface are mostly faced in the operation phase, seldom in the construction phase.
3.2 Risk mechanism
There is abundant severe potential risk inherent with highway construction under such bad conditions. For the sake of effective risk management, the risk mechanism should be studied at first. A simple risk mechanism is obtained based on project management experience and theoretical studies. The mechanism of risk development is illustrated in Figure 2. Figure 2 can help people to understand the nature of construction risks. Activities, materials, tools and all kinds of equipments related with human can be considered as internal causes of risk event. Related surrounding environment can be thought as external causes of risk event. Through a careful screening of the project, the most risk factors can be identified. The risk effects mainly include economic loss, time delay, casualty, quality loss, environment loss, etc.According to the aim of risk assessment, the risk effects can be categorized differently. 4
4 MOUNTAIN HIGHWAY CONSTRUCTION RISK MANAGEMENT
It is known that there are numerous project risk management methods in different engineering areas. A risk
management framework for mountain highway construction is proposed. It will also be adopted as the basic risk management structure in the Guideline of Risk Assessment & Control for Safety Construction of Road Tunnel in China.

4.1 Risk management flowchart
The whole life of a road construction project includes the project development stage, construction contract procurement stage, design stages, construction stage and operation stage (ITIG 2006). Most people agree that risk management should be applied throughout the whole life cycle to reduce risk. At present, risk management is applied more in the execution and planning stages of the project life cycle than in the conceptual or termination phases (Lyons & Skitmore 2004). In this paper, the main project phases considered are preliminary design, construction documents design and construction. The flowchart of risk management is illustrated in Figure 3.

4.2 Risk management

4.2.1 Risk management team
For a new project or phase, the first task of risk management is to establish a risk management team. A risk analysis specialist plays a crucial role in organizing the team, and this will significantly influence the subsequent risk management processes. The team membership is dynamic and will change in different phases or for special risk issues. The core members of a risk management team are the risk specialist, experienced project managers and client representatives. Other members may include, in the design phase: representatives of the core design team, the design team representative, the consultant engineer and other related personnel; and in the construction phase: the principal designer or a representative, the contractor representative, the consultant engineer, the supervisor representative and other related personnel.

4.2.2 Scope, objective and strategy
The scope of risk management defines benchmark information such as the client's objectives, why the risk assessment is conducted, who will execute and control the process, when and how the risk is to be assessed, the anticipated achievement and other critical issues. The scope should be documented as a task instruction document. The general objective of construction project risk management is to identify and quantify all risks and consciously manage them. Because risk management consumes time, personnel and cost, it is not reasonable to pay much attention to all low-level risks. The detail of the objective and the client's circumstances will influence the depth of the risk analysis. The objective of mountain highway construction is to reduce the risks to as low as reasonably practicable (the ALARP principle). The mountain highway construction risk management strategy includes:
• carrying out risk assessment throughout the whole project construction process
• clarifying the share of risk among the various parties involved in the project
• a plan of dynamic risk assessment
• training in the risk viewpoint for all persons involved
• a standard risk document format, including the risk register and risk measures
Figure 3. Risk management flowchart.
4.2.3 Engineering types or areas
A typical mountain highway construction project is always so complex that risk management is very hard. Decomposition is an effective approach to a complex problem, so it is used here to separate a project into a set of base elements for structuring the management. It is natural and reasonable to separate a project according to its engineering types. In general, a mountain highway construction project can be separated into tunnels, bridges, slopes, subgrade, road surface, etc. A given engineering type is usually complex too, and can be further separated into sub-units according to the features of its engineering areas. A tunnel project can be separated into the tunnel portal section, poor-geology sections and other sections, depending on the nature of the tunnel engineering. A bridge can be separated into superstructure, substructure and connection sections. A slope can be categorized as a natural or cutting slope, and as a rock or soil slope.
4.2.4 Risk identification
Risk identification is the most important step in the overall process of risk management. Unidentified and therefore unmanaged risks are clearly unchecked threats to a project's objectives, which may lead to significant overruns. Floricel & Miller (2001) found that, regardless of a thorough and careful identification phase, something unexpected occurred in every project. A survey shows that risk identification and risk assessment are the most often used risk management elements, ahead of risk control and risk documentation. The quality of the identification results depends greatly on the team's professional experience and knowledge. At the same time, the identification technique plays an important role. Risk identification methods generally include brainstorming, risk checklists, expert analysis/interviews, modelling and analyzing different scenarios, and analyzing project plans. Brainstorming and expert questionnaires (Ahmad & Minkarah 1988) are the most common risk identification techniques used in road construction projects in China. Based on the Fault Tree Analysis method, a synthetical identification method (named RTRI) has been used effectively in many tunnel construction projects. Its operation structure is illustrated in Figure 4. The key feature of this method is that severe and general risk events are distinguished. It is easy to control the depth of risk identification, and the method helps in understanding the internal logical relations of different risks. It is necessary to build up a risk database and register all the risks identified. The database is very useful and helpful for identifying the risks of a new, analogous project.
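Since RTRI builds on Fault Tree Analysis, a minimal fault-tree evaluator is sketched below to show the underlying computation. The gate structure, event names and probabilities are hypothetical illustrations, not data from the paper, and basic events are assumed independent.

```python
# Minimal fault-tree evaluation sketch (illustrative only; the paper's
# RTRI method is not published as code).

class Event:
    def __init__(self, name: str, p: float):
        self.name, self.p = name, p

    def prob(self) -> float:
        return self.p

class Gate:
    def __init__(self, name: str, kind: str, children: list):
        self.name, self.kind, self.children = name, kind, children

    def prob(self) -> float:
        ps = [c.prob() for c in self.children]
        if self.kind == "AND":            # all causes must occur together
            out = 1.0
            for p in ps:
                out *= p
            return out
        out = 1.0                         # OR: at least one cause occurs
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out

# Hypothetical top event driven by two cause groups:
collapse = Gate("tunnel collapse", "OR", [
    Gate("support failure", "AND", [Event("poor support quality", 0.05),
                                    Event("large overburden", 0.30)]),
    Event("unforeseen fault zone", 0.02),
])
print(f"P(collapse) = {collapse.prob():.4f}")   # 0.0347
```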
4.2.5 Risk evaluation
After the risks have been identified, they must be evaluated in terms of their probability of occurrence and impact. In practice, risk probability and impact can be analyzed on the basis of historical statistical data, but for a mountain highway construction project such data are always very scarce. In this paper, the probability and impact rankings listed in Table 1 and Table 2 are used (MOHURD 2007). The risk impact ranking varies with different risks and with the client's risk attitude. Once the occurrence probability and the impact of a risk are defined, the risk can be rated according to the risk evaluation matrix (Table 3). For ease of use, a colour code for the risk ranking is defined in Table 4. The mountain highway construction risk acceptance criteria are described in Table 5; the criteria aid the decision maker in dealing with the risk.

Figure 4. Rock tunnel construction risk identification method (RTRI).

Table 1. Risk occurrence probability ranking.

Ranking | Occurrence probability
1 | Impossible: P < 0.01%
2 | Seldom: 0.01% ≤ P < 0.1%
3 | Occasional: 0.1% ≤ P < 1%
4 | Possible: 1% ≤ P < 10%
5 | Frequent: P ≥ 10%

Table 2. Risk impact ranking.

Ranking | Impact
1 | Insignificant
2 | Considerable
3 | Serious
4 | Severe
5 | Disastrous
4.2.6 Risk response and monitoring
Risk response is the strategy taken to manage the identified risks. In general, there are four basic forms of response or control strategy which can be used in risk management: acceptance, mitigation, transfer and avoidance. The decision on a risk response should consider the risk acceptance criteria (Fig. 5). When a risk has been identified and evaluated, the undertaker should take specific actions to control it. Any actions or measures should be analyzed carefully in order to achieve the project objectives. As an important part of the risk response, a project contingency arrangement should be established for critical risks. It includes the risk action schedules:
• Actions required (what is to be done);
• Resources (what and who);
• Responsibilities (who); and
• Timing (when).
Once the risk responses have been defined, the project risk sources should be monitored throughout the construction. The states of the risks are logged into the risk database; the risk status can be defined as in Table 6.
Table 3. Risk evaluation matrix (risk ranking).

Probability \ Impact | 1 Insignificant | 2 Considerable | 3 Serious | 4 Severe | 5 Disastrous
1 Impossible | I | I | II | II | III
2 Seldom | I | II | II | III | III
3 Occasional | II | II | III | III | IV
4 Possible | II | III | III | IV | IV
5 Frequent | III | III | IV | IV | IV

Table 4. Colour coding for risk ranks (risk rank, logo and colour).

Table 5. Risk acceptance criteria.

Table 6. Proposal for risk status definitions.

Risk status: Identified; Assessed; Responses implemented; Occurred; Avoided; Closed out.
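The rating procedure of Tables 1 and 3 is mechanical enough to script. The following sketch encodes Table 1's probability thresholds and the Table 3 matrix as reconstructed above; the function names are our own.

```python
# Illustrative sketch (not from the paper) of risk rating per Tables 1-3.

# Table 3: risk evaluation matrix, indexed by probability ranking 1-5;
# each row lists the risk rank for impact rankings 1-5.
MATRIX = {
    1: ["I", "I", "II", "II", "III"],
    2: ["I", "II", "II", "III", "III"],
    3: ["II", "II", "III", "III", "IV"],
    4: ["II", "III", "III", "IV", "IV"],
    5: ["III", "III", "IV", "IV", "IV"],
}

def probability_rank(p: float) -> int:
    """Table 1: map an occurrence probability (0.02 = 2%) to ranking 1-5."""
    thresholds = [0.0001, 0.001, 0.01, 0.10]   # 0.01%, 0.1%, 1%, 10%
    return 1 + sum(p >= t for t in thresholds)

def risk_rank(prob_rank: int, impact_rank: int) -> str:
    """Table 3: combine probability and impact rankings into rank I-IV."""
    return MATRIX[prob_rank][impact_rank - 1]

# Example: risk R11 below (poor weather), probability 5, impact 3 -> IV
assert risk_rank(5, 3) == "IV"
assert probability_rank(0.02) == 4   # 2% falls in "Possible" (1% <= P < 10%)
```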
4.2.7 Risk document and database
Risk management is a systematic process. All materials related to the process should be documented, including all kinds of memos, statistical data, photos, design and construction specifications, etc. A comprehensive risk register form logs the risks' status and information. Formatted risk information can be communicated effectively among all parties, and risk communication is a key point of successful risk management. A software package based on database techniques has been developed for the standard implementation of risk management in mountainous highway construction projects.

Figure 5. The approximate relation between risk response and acceptance criteria.

5 CASE STUDY

5.1 Project introduction
The Shuifu–Maliuwan Highway (Shui-Ma Highway for short) is located in the adjacent area of the Yungui Plateau and the Liangshan mountains, in the northeast of Yunnan Province, China. The total length of the highway is 135.55 km, the construction time is three years, and the total general estimate of the project investment is 92 billion RMB. Owing to the Yanshan and Himalayan tectonic movements, the tectonic deformation is heavy: there are high mountains, steep gorges with heavy erosion, rapid rivers and saw-cuts everywhere. The whole project includes 39 tunnels with a total length of 27.21 km, 365 bridges with a total length of 91.4 km, and many cut slopes and talus slopes along the highway. The whole project was formally commenced with 28 bidding contracts in March 2005. Some characteristics of the project are illustrated in Figures 6 to 8.

5.2 Shui-Ma Highway construction risk management
According to the aforementioned method, risk management was carried out for the Shui-Ma Highway construction project, including risk identification, risk evaluation, risk response and risk documentation. The risks are categorized into time, cost, quality and human
safety. Because this project is very long and complex, it is separated into 28 sections according to the bidding contracts. As an example of the risk management, the risks of Contract 11 were evaluated. The occurrence probability ranking of each risk event is decided according to Table 1, based on statistics of historic events or the experience of highway construction specialists; the impact ranking is decided according to Table 2; and the risk evaluation matrix (Table 3) is used for the risk ranking. The results are listed in Tables 7 to 10 (in the tables, P means probability, C means impact and R means risk rating).

The risks of time delay (Table 7) show that poor weather is a critical risk factor for the schedule. The adverse factor is mainly rain in the project area: it rains on almost two thirds of the days in a year, and the effective workdays are limited. Because the weather cannot be controlled, the better risk response is to draw up a good plan which arranges the project processes with the weather influence in mind. Table 8 shows that the major cost risks include the price rise of materials, natural disasters and geo-hazards. The materials cost risk should be considered in the tendering. The different natural disaster risks should be assessed in detail; the contractor then holds a meeting to propose practicable risk control measures and emergency plans. To reduce the geohazard risk, the design of the rock reinforcement should be certified on the basis of the new geological survey data, and the supporting parameters can be changed if necessary. The major quality risk is a contract quoted with an unreasonably low bid price; another major factor is an unreasonable project schedule. The worker safety risk is acceptable; risk education and reminders are effective management approaches to improve safety. The Shi rock tunnel project is the key sub-project of Contract 11, and there are so many uncertainties in its construction that the potential risks are very large; the risk management system must be applied to achieve the final objectives.

Figure 6. Photo of construction of Guanhe bridge.

Figure 7. The panorama of the ancient landslide.

Figure 8. Hazard of rock fall.

Table 7. Risk evaluation results of time delay.

No. | Risk events | P | C | R
R10 | Inconvenient traffic conditions | 3 | 2 | II
R11 | Poor weather resulting in short effective construction time | 5 | 3 | IV
R12 | Poor social environment, including government mismanagement and backward attitudes | 2 | 2 | II
R13 | Delay of design alteration, design change and design approval | 3 | 2 | II
R14 | Supply of the unified-supply materials not on time | 3 | 2 | II
R15 | Narrow construction site and poor construction conditions resulting in difficult transportation | 4 | 2 | III
R16 | Power supply not on time and unstable | 2 | 3 | II
R17 | Treatment and influence of unfavorable geology | 3 | 3 | III
R18 | Unreasonable resource preparation resulting in idle time | 3 | 3 | III

Table 8. Risk evaluation results of cost overrun.

No. | Risk events | P | C | R
R20 | Insufficient credit level of the insurance company | 3 | 2 | II
R21 | Price rise of raw materials, fuel and labor | 4 | 4 | IV
R22 | Natural disasters, including flood and debris flow | 4 | 4 | IV
R23 | Owner raises the quality standard while the contractor underestimates the existing risks | 2 | 3 | II
R24 | Unfavorable geology, including landslide, rock heap and rock fall | 3 | 5 | IV
R25 | Inconvenient traffic conditions resulting in increased materials transportation cost | 3 | 3 | III
R26 | Construction contract risks | 3 | 3 | III
R27 | Poor human environment | 1 | 2 | I
R28 | Design risks (design alteration resulting in increased investment) | 2 | 4 | III
R29 | Narrow construction site resulting in increased spoil cost | 2 | 4 | III

Table 9. Risk evaluation results of poor quality.

No. | Risk events | P | C | R
R30 | Construction contract risks (short construction period and low bid price) | 5 | 3 | IV
R31 | Poor quality of equipment and materials | 1 | 2 | I
R32 | Protection risks (incorrect construction of anti-slide piles, shotcrete and anchors) | 2 | 3 | II
R33 | Site personnel quality | 3 | 3 | III
R34 | Technology risks of tunnel construction | 2 | 4 | III
R35 | Technology risks of subgrade and pavement construction | 3 | 3 | III
R36 | Technology risks of bridge construction | 4 | 1 | II
R37 | Unfavorable geology | 2 | 4 | III
R38 | Poor weather conditions | 4 | 2 | III

Table 10. Risk evaluation results of human safety.

No. | Risk events | P | C | R
R40 | Unfavorable geology | 3 | 3 | III
R41 | Insufficient safety consciousness of constructors | 3 | 3 | III
R42 | Improper construction protection | 3 | 2 | II
R43 | Improper equipment manipulation | 2 | 1 | I
R44 | High-altitude construction | 3 | 2 | II
R45 | Contact with injurants | 1 | 2 | I
R46 | Road traffic accidents | 1 | 2 | I
R47 | Natural disasters | 3 | 3 | III
R48 | Custody of initiating explosive devices and blasting construction | 2 | 3 | II
R49 | Power failure during construction of the digging pile and overturn of the bridge machine | 1 | 5 | III

5.3 Shi rock tunnel construction risk management
Shi tunnel is a separated twin single-bore tunnel situated in Contract 11. Its left tunnel is approximately 4,752 m in length, with a cover depth varying from 1.5 m to 352 m; the right tunnel is approximately 4,755 m in length, with a depth varying from 0 m to 357 m. The tunnel cross-sections are 11.7 m wide and 7 m high. The traditional drill-and-blast method is employed in the tunnel excavation. The conditions of the portal section, which contains faults and fragment zones, are even worse.

5.3.1 Risk identification and assessment
Based on the in-situ investigation, the tunnel construction design and other related materials, the main risks and general risks are identified with RTRI. The tunnel risk breakdown structure is shown in Figure 9. With all the risks identified, the Delphi method is employed to evaluate the risk ratings; the risk evaluation results are given in Table 11. From Table 11, the manager can easily grasp and understand the risk profile of the tunnel project; the key points of risk management are then transmitted to all project parties. Because the Shi tunnel is very long and its geological conditions change greatly along the tunnel alignment, the tunnel is separated into several sections according to the geological conditions. The risks of each section are evaluated and diagrammed in Figure 10.
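The paper invokes the Delphi method for the rating without giving an algorithm. The sketch below shows one common variant, using the median rating as the group estimate and the inter-quartile range as the consensus measure over feedback rounds; the convergence rule and the sample scores are assumptions, not data from the project.

```python
# Hedged sketch of a Delphi-style rating round (assumed median/IQR rule).
import statistics

def summarize(scores: list[int]) -> tuple[float, float]:
    """Return the median rating and the inter-quartile range (disagreement)."""
    q1, _, q3 = statistics.quantiles(scores, n=4)
    return statistics.median(scores), q3 - q1

# Hypothetical expert impact ratings (1-5) for one risk event:
round1 = [3, 4, 2, 5, 4, 3, 4]            # first, anonymous round
m1, iqr1 = summarize(round1)               # -> 4, 1.0
# Experts see the group result and revise; stop when the IQR is small.
round2 = [4, 4, 4, 4, 4, 3, 4]             # after feedback
m2, iqr2 = summarize(round2)               # -> 4, 0.0 (consensus reached)
print(m1, iqr1, m2, iqr2)
```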
Figure 9. Possible risks of Shi tunnel during construction.

Table 11. Risk evaluation results.

5.3.2 Sub-risk factors analysis
For risk control, the sub-risk factors of the main risk events are analyzed with the specialist questionnaire method. From Figure 11 we can find that supporting quality, overburden, excavation method, etc. are the main factors that result in the occurrence of collapse, so during construction special attention should be paid to these factors in order to reduce the probability of collapse. Figure 12 shows that advanced geological forecast, construction organization design, underground water disposal, etc. affect water bursting and mud surging significantly; special attention should be paid to these factors during construction. Figure 13 shows that the excavation method, blasting method and advanced supporting influence the tunnel portal stability significantly. As a systematic risk control measure, the rock slope reinforcement quality and the portal surrounding-rock supporting parameters must satisfy the design documents (Yang et al. 2006, Pine & Roberds 2005). From Figure 14, we can find that the excavation method, blasting method, cyclical footage, advanced support, etc. are the most important factors resulting in large deformation. The risk control methods for the main risks and general risks are proposed according to the results of the risk identification, the risk assessment, the analysis of risk factors, and the actual conditions of the Shi tunnel. In the meantime, emergency plans against the possible risks are also put forward. Risk management should be implemented throughout the life of the project to keep all risks under control.
6 CONCLUSIONS
Mountainous highway construction projects are generally very complex, costly and time-consuming. Inevitably, there are many potential risks which may hinder project development and often result in poor performance, with increasing costs and time delays. Most people agree that risk management is very useful in the project management of complex systems, but few people analyze the risks in highway construction practice other than by using intuition and experience. The major factors that limit the implementation of risk management may be the lack of risk awareness and the lack of an accepted risk assessment method. In this paper, a systematic risk management framework is proposed and applied in the Shui-Ma highway construction project. The practice proved that the risk evaluation results can help the decision maker act with more confidence. In the risk management process, risk identification, risk evaluation, risk monitoring and the risk database are key steps, as is project experience. Risk identification is the most important phase; this paper proposes an FTA-based synthetical identification method which has been used in more than 10 road tunnels. Quantitative or semi-quantitative risk analysis methods for complex mountainous highway construction projects should be developed in the future.
Figure 10. Diagram of critical risks rating along the tunnel alignment (risk ranks: II middle risk, acceptable; III high risk, unwanted; IV extremely high risk, unacceptable).
Figure 11. Diagram of influence factors for tunnel collapse risk.

Figure 12. Diagram of influence factors for water bursting and mud surging.

Figure 13. Diagram of influence factors for tunnel portal slope failure.

Figure 14. Diagram of influence factors for tunnel large deformation.

ACKNOWLEDGEMENTS

The authors wish to acknowledge the support of the National Natural Science Foundation of China (No. 40772179) and the Western Science & Technology Project of the Ministry of Transport of China (No. 2006318799107).

REFERENCES

Ahmad, I. & Minkarah, I. 1988. Questionnaire survey on bidding in construction. Journal of Management in Engineering 3(4): 229–243.
Akintoye, A. & MacLeod, M. 1997. Risk analysis and management in construction. International Journal of Project Management 15(1): 31–38.
APM Group. 1997. Project risk analysis and management. http://www.eurolog.co.uk/apmrisksig/publications/minipram.pdf.
Bao, H.L. & Huang, H.W. 2008. Risk assessment for the safe grade of deep excavation. In Ng, C.W.W. et al. (eds), Geotechnical Aspects of Underground Construction in Soft Ground: 507–512. London: Taylor & Francis.
Carr, V. & Tah, J. 2001. A fuzzy approach to construction project risk assessment and analysis: construction project risk management system. Advances in Engineering Software 32: 847–857.
Chapman, C. 1997. Project risk analysis and management - PRAM the generic process. International Journal of Project Management 15(5): 273–281.
Chapman, R.J. 2001. The controlling influences on effective risk identification and assessment for construction design management. International Journal of Project Management 19: 147–160.
Eskesen, S.D. et al. 2004. Guidelines for tunnelling risk management: International Tunnelling Association, Working Group No. 2. Tunnelling and Underground Space Technology 19(3): 217–237.
Floricel, S. & Miller, R. 2001. Strategizing for anticipated risks and turbulence in large scale engineering projects. International Journal of Project Management 19: 445–455.
Huang, H.W. et al. 2006. Risk analysis of building structure due to shield tunneling in urban area. In Zhu, H.H. et al. (eds), Underground Construction and Ground Movement; Proc. of sessions of GeoShanghai, Shanghai, 2–4 June 2006. New York: ASCE.
Lyons, T. & Skitmore, M. 2004. Project risk management in the Queensland engineering construction industry: a survey. International Journal of Project Management 22: 51–61.
MOHURD. 2007. Guideline of Risk Management for Construction of Subway and Underground Works. Beijing: China Architecture & Building Press.
MTPRC. 2004. Code for design of road tunnel. Chinese Standard JTG D70-2004: 62–65.
Pine, R.J. & Roberds, W.J. 2005. A risk-based approach for the design of rock slopes subject to multiple failure modes - illustrated by a case study in Hong Kong. International Journal of Rock Mechanics & Mining Sciences 42: 261–275.
Roger, F. & George, N. 1993. Risk Management and Construction. London: Blackwell Science Ltd.
The International Tunnelling Insurance Group (ITIG). 2006. A code of practice for risk management of tunnel works. http://www.munichre.com/publications/tunnel_code_of_practice_en.pdf.
Wideman, R.M. 1986. Risk management. Project Management Journal 17(4): 20–26.
Yang, Z.F. et al. 2006. Research on the Geo-hazards of Sichuan–Tibet Road and its Prevention and Control. Beijing: Science Press of China.
Yao, C.P. & Huang, H.W. 2008. Risk assessment on environmental impact in Xizang Road tunnel. In Ng, C.W.W. et al. (eds), Geotechnical Aspects of Underground Construction in Soft Ground: 601–606. London: Taylor & Francis.
Zayed, T. et al. 2008. Assessing risk and uncertainty inherent in Chinese highway projects using AHP. International Journal of Project Management 26: 408–419.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Recent revision of Japanese Technical Standard for Port and Harbor Facilities based on a performance based design concept T. Nagao National Institute for Land and Infrastructure Management, Yokosuka, Japan
Y. Watabe & Y. Kikuchi Port & Airport Research Institute, Yokosuka, Japan
Y. Honjo Gifu University, Gifu, Japan
ABSTRACT: The purpose of this paper is to introduce the revision of the Technical Standards for Port and Harbor Facilities (TSPHF) which was recently revised in April 2007. It is thought that the TSPHF is one of the first cases of a revision of a design code based on a performance based design/specification concept. First, the reason why a performance based design concept was introduced to the TSPHF is explained. Then, the philosophy of providing a performance concept is explained. The standard verification procedure in the TSPHF guidelines is explained using an example. The policy for determining the geotechnical parameters used for the performance based design concept is introduced. Finally, the adequateness surveillance system introduced is explained. This kind of organization is inevitably required for the new design system in order to achieve higher levels of transparency and fairness.
1 INTRODUCTION
The Japanese government published the first guidelines for port and harbor facilities in 1930. These were something like a collection of design case histories, and engineers at that time designed port facilities by themselves with the aid of such design examples. In 1967, the Port and Harbor Bureau published the Standards for Port and Harbour Facility Design, which formed the basis of the standards for approximately 40 years. At that time, however, the standards had no concrete legal background. In 1973, the Port and Harbor Law was revised to provide a legal background for these standards, and in 1979 the standards and commentaries were revised to suit the law. The Port and Harbour Bureau then revised them twice during the period up until 1999; in these revisions, the concept of the standards remained the same as in 1967. In 2007, new technical standards were presented, whose concept differs from that of the former standards. These standards were formulated in order to comply with the WTO/TBT agreement. This paper presents the features of the new Japanese Technical Standards for Port and Harbor Facilities (TSPHF).
2 JAPANESE GOVERNMENT POLICY ON TECHNICAL STANDARDS AND ACCREDITATIONS
Since 1995 (the year in which the WTO/TBT agreement came into effect), the Japanese government has adopted a policy of deregulation with regard to a variety of laws and rules related to economic activities and trade. In March 1998, the Three-Year Program for Promoting Deregulation was determined as a result of a cabinet decision, and the following tasks were delineated:
1 All economic regulations should be eliminated in principle, and social regulations should be minimized.
2 Rationalization of regulation methods; for example, testing can be outsourced to the private sector.
3 Simplification and clarification of the contents of the regulations.
4 International harmonization of the regulations.
5 Speeding up of regulation-related procedures.
6 Transparency in regulation-related procedures.
Following the above plan, the Three-Year Plan for the Promotion of Regulatory Reform was determined as a result of a cabinet decision in March 2001. This plan consisted of the objectives shown below:
1 Realization of sustainable economic development through the promotion of economic activities.
2 Realization of a transparent, fair and reliable economic society.
3 Securing diversified, alternative lifestyles.
4 Realization of an economic society that is open to the world.
In order to realize such objectives, the promotion of essential and active deregulation in various administrative services was planned. In the field of standards and accreditations, the following basic policies were implemented:
– Essential reviews of standards and accreditations in order to check the necessity of the involvement of the government.
– In cases where administrative involvement is still required, administrative roles should be minimized, and self-accreditation or self-maintenance of standards and accreditations by the private sector should be promoted.
– The international harmonization of standards, performance based specifications, and the elimination of multiple examination procedures in accreditation processes should be promoted.
The third item had a very strong impact on the revision of design standards and codes for civil structures. The Ministry of Land, Infrastructure and Transportation (MLIT) started a program entitled the Restructuring of Public Works Costs in March 2003, which includes the tasks shown below:
– Revision of common specifications for civil works.
– Review of the Highway Bridge Specifications.
– Revision of the Technical Standards for Port and Harbor Facilities (TSPHF) to performance based.
The revision of the TSPHF was started around this time with the goal of achieving harmonization between the standards and the international agreement.

3 REVISION OF THE TSPHF

Based on the background explained in the previous section, the Port and Harbor Law was revised in parliament, which made a proclamation in September 2006; the revision was implemented on 1 April 2007. The item that influenced the revision of the standards, Article 56 Item 2-2, is shown in Table 1. In the revision, rather than prescribing the specifications of design details, the performances of facilities are regulated. Based on the revised Port and Harbor Law, the TSPHF was fully revised. The main points of the revision are summarized here from two aspects, namely, the system for the performance based specifications and the performance verification. Previously established comprehensive design codes (MLIT, 2002; JSCE, 2003; JGS, 2004) provided foundations for the revision of these technical standards.

Table 1. Revision of Port and Harbor Law.

Article 56 Item 2-2 (Before revision): Those port and harbor facilities, such as navigation channels and basins, protective facilities for harbors, and mooring facilities, should comply with the law that specifies such matters if such a law exists. In addition, their construction, improvement and maintenance should comply with the Technical Standards for Port and Harbor Facilities that have been specified as a ministerial ordinance by the Ministry of Land, Infrastructure and Transportation.
(After revision): Those port and harbor facilities, such as navigation channels and basins, protective facilities for harbors, and mooring facilities (termed facilities covered by the TSPHF), should comply with the law that specifies such matters if such a law exists. In addition, construction, improvement and maintenance concerning the performance of facilities covered by the TSPHF should comply with the Technical Standards for Port and Harbor Facilities that have been specified as a ministerial ordinance by the Ministry of Land, Infrastructure and Transportation.

3.1 Performance based specifications system
The basic system of the TSPHF is that the required performances of the structures are given as mandatory items at three levels, i.e. objectives, performance requirements and performance criteria, whereas the performance verification methods are not mandatory but are given in the annex or in the reference documents as some of the possible methods (Figure 1). The objectives state the necessity of the structures, whereas the performance requirements state the functions of the structures that need to be implemented in order to satisfy the objectives, in plain language from the viewpoint of accountability. The performance criteria restate the performance requirements from a technical viewpoint, thus making each performance requirement verifiable. The performance requirements are classified into basic requirements and other requirements (Table 2), where the former specify structural performances against various actions and their combinations, and the latter specify structural dimensional requirements arising from usage and convenience. The basic requirements are further classified into serviceability, reparability and safety requirements, as defined in Table 2. The basic requirements should be combined with the actions considered in the design, which are summarized in Table 3. The combinations of performance requirements and actions are termed design situations, and performance verification of the structure should be carried out for each design situation. The actions are classified into accidental and permanent/variable actions, employing an annual occurrence rate of approximately 0.01 (i.e. a return period of 100 years) as a threshold value. For both persistent and transient design situations, serviceability needs to be satisfied, whereas in accidental situations any of the three performance requirements may need to be satisfied, depending on the importance and functions of the structure under design. This concept is further illustrated in Figure 2. It should be noted that the performance of a structure may not be verifiable in accidental situations in some cases.

Figure 1. Performance based specifications for the TSPHF.

Figure 2. Classifications of performances, actions and frequency.

Table 2. Performance requirements in the TSPHF.

Classification | Definition
Basic requirement | Performance of structural response (deformation, stress, etc.) against actions.
– Serviceability | The function of the facility would be recovered with minor repairs.
– Reparability | The function of the facility would be recovered in a relatively short period of time after some repairs.
– Safety | Significant damage would take place; however, the damage would not cause any loss of life or serious economic damage to the hinterland.
Other requirements | Performance requirements for structural dimensions concerning usage and convenience of the facilities.

Table 3. Summary of the basic requirements.

Design situation | Definition | Performance requirements
Persistent situation | Permanent actions (self weight, earth pressures) are major actions. | Serviceability
Transient situation | Variable actions (waves, level 1 earthquakes) are major actions. | Serviceability
Accidental situation | Accidental actions (tsunamis, level 2 earthquakes) are major actions. | Serviceability, Reparability or Safety

The objectives and the performance requirements are prescribed in the MLIT ministerial ordinance part of the TSPHF, whereas the performance criteria are specified in the MLIT declaration part, which defines the details of the TSPHF. In this way, the hierarchy of the performance specifications is maintained. Table 4 shows an example of the provisions in the new TSPHF, in this case for a protective facility. A breakwater is a representative protective facility, and Figure 3 shows the cross section of a caisson-type breakwater. Table 5 shows the provisions for breakwaters in the former TSPHF. In the new TSPHF, objectives, performance requirements, and performance criteria are written clearly in accordance with the hierarchy shown in Figure 1; these were not clearly described in the former TSPHF. With regard to verification, this was mandatory in the former TSPHF but is not mandatory in the new TSPHF. As performance verification in accordance with the TSPHF is the Approach B verification shown in Figure 1, the recommended verification method is presented in the guidelines but is not mandatory.
3.2 Performance verifications
In order to harmonize the TSPHF with international standards such as ISO 2394 and to introduce newly developed verification methods using more sophisticated design calculation methods such as seismic response analyses, the following verification methods are introduced in the revised TSPHF (Table 6):
– Reliability based design (RBD) methods, mainly the level I partial factor approach.
– Numerical methods (NM) capable of evaluating structural response properties.
– Model tests.
– Methods based on past experience.
The performance verification implies design procedures that verify that the structures satisfy the specified performance requirements and/or performance criteria. In principle, the revised TSPHF does not specify any concrete allowable values for strength or displacement. In order to perform the verification, the tolerable failure probability, safety indices and characteristic values for the basic variables in the design are introduced; these are all decided by the designers. However, it is considered necessary to provide standard verification methods together with the minimum tolerable limit values for the design in order for the users of the revised TSPHF to understand the intentions of the code writers. For this purpose, it is judged appropriate to provide the information in the form of an annex and supporting documents. Figure 4 shows the verification procedures for the sliding safety of a caisson-type breakwater in the former and new TSPHF guidelines. For the design of a breakwater in a persistent situation where the major action is waves, it is recommended to employ RBD based on force equilibrium. Level I RBD is adopted, and recommended partial factors are provided in tables in the annex. In the case of a caisson-type breakwater on a rubble embankment, serviceability is required for a wave force with a 50 year return period (a variable action), and partial factors are provided which are determined based on an annual failure probability of 0.01 or below for sliding, overturning and bearing capacity failure.
Table 4. Example of provisions in the new TSPHF.

Level: Objectives
Definition: The reason why the facility is needed.
Mandatory status: Mandatory (Port and Harbor Law).
Example for breakwaters: The calmness of navigation channels and basins should be maintained in order to safely navigate and moor ships, to handle cargo smoothly, and to safely maintain buildings and other facilities located in ports. (Law Article 14)

Level: Performance requirements
Definition: Levels of performance the facilities are required to possess.
Mandatory status: Mandatory (Port and Harbor Law).
Example for breakwaters: Damage due to the actions of self weight, waves, and level 1 earthquakes should not affect the objectives of the breakwater and its continuous usage. (Law Article 14 – serviceability requirements)

Level: Performance criteria
Definition: Concrete criteria which represent the performance requirements.
Mandatory status: Mandatory (Notification).
Example for breakwaters (Notification Article 35): 1st, the danger of sliding failures of the ground under persistent situations in which the main action is self weight should be lower than the limit level; 2nd, the danger of sliding and rotation failures of gravity structures, and of failures of the ground due to insufficient bearing capacity, under variable situations in which the main actions are waves or level 1 earthquakes should be lower than the limit level.

Level: Performance verification
Definition: Performances should be verified using engineering procedures.
Mandatory status: Not mandatory (guidelines are presented as references).
Example for breakwaters: The guidelines present the standard procedure for performance verifications for reference purposes.
Figure 3. Typical caisson type breakwater.

Figure 4. Example of the verification of persistent and transient situations for caisson-type breakwaters.

3.3 Geotechnical parameters
The soil parameters of the ground and the quality parameters of industrial products are completely different in terms of their treatment. Statistical treatments suitable for geotechnical parameters are strongly required in consideration of non-uniform sedimentary structures, investigation errors, testing errors, the limited number of data entries, etc. In the new TSPHF, a simplified and reasonable method to determine soil characteristic values, which pursues practical usability by simplifying the statistical treatments, was introduced. Details have been published by Watabe et al. (2009a, 2009b). The method is briefly introduced in the following sections.
Table 5. Example of provisions in the former TSPHF.

Objectives and performance requirements:
– Function: Protective facilities for harbors should maintain their functions under all natural situations such as geographical, meteorological, and marine phenomena, etc. (Law Article 7)
– Safety: Protective facilities should be safe against self weight, water pressure, wave forces, earth pressure, and earthquake forces, etc. (Law Article 7)

Performance verifications (also described in notifications):
– Calculation of forces: The wave force acting on a structure shall be determined using appropriate hydraulic model experiments or design methods in the following procedure. (Notification Article 5)
– Safety verification of members: Examinations of the safety of the members of reinforced concrete structures shall be conducted as standard using the limit state design method. (Notification Article 34)
– Stability check: Examinations of the stability of upright sections of gravity type breakwaters shall be based on the design procedures using the safety factors against failures. (Notification Article 48)
Table 6. Summary of basic performance verification methods.

Design situation | Major actions | Recommended performance verification procedures
Persistent and transient situations | Self weight, earth and water pressures, live loads, waves, wind, ships, etc. | RBD
Transient situations | Level 1 earthquakes | Non-linear response analyses taking soil–structure interactions into consideration; RBD; pseudo-static procedures (e.g. seismic coefficient method)
Accidental situations | Level 2 earthquakes, tsunamis, ship collisions, etc. | Numerical procedures to evaluate displacements and damage extents
3.3.1 Principle of soil parameter determination
JGS4001, Principles for Foundation Design Grounded in the Performance-based Design Concept, was published in 2004 by the Japanese Geotechnical Society; it provides the guidelines for determining soil parameters for performance-based reliability design in Japan. Figure 5 shows the flowchart in the TSPHF for determining the soil parameters used for performance verifications. This flowchart was modified for the new TSPHF, but reflects the purpose of JGS4001. The measured value is the value directly recorded in a field or laboratory test. The derived value is the value obtained by using the relationship between the measured value and the soil parameter. The characteristic value is the representative value obtained by modeling the depth profile of the data, taking into account the variation of the estimated values; it must correspond to the critical state for the performance considered in the design. Taking account of the application range of either the verification equation or the prediction equation, the characteristic value is converted into the design value by multiplying by an appropriate partial safety factor. The partial safety factors for each facility are listed in the design code, corresponding to both the variation and the sensitivity of the soil parameter in the design verification. The characteristic value of the geotechnical parameters in Eurocode 7 is also defined following the same concept as JGS4001. In the case of industrial products, the characteristic value is generally defined as the 5% fractile corresponding to Equation 1, e.g. in Eurocode 0 (EN 1990, 2002):

xk = µ(x) − 1.645 σ(x)  (1)

where µ(x) is the average of x, and σ(x) is the standard deviation of parameter x. This kind of characteristic value is applicable to structural materials; however, it is not applicable to soil parameters because they vary significantly. If we consider ground failure, for example, we have to treat the failure of the ground as a whole, not the failure of each element. Against this background, Eurocode 7 adopted the value corresponding to the 95% confidence level instead of the 5% fractile (Orr, 2006). In JGS4001, the characteristic value is described in the same manner as in Eurocode 7, but the confidence level is not fixed at 95%.

For example, it is well known that undrained shear strengths obtained through unconfined compression tests are more variable than those obtained from triaxial tests, indicating that the reliability of the former is much lower than that of the latter (Tsuchida, 2000; Watabe and Tsuchida, 2001). In each design procedure, however, it is difficult to take account of data variations which depend on the testing method. Consequently, the new TSPHF has adopted a concept in which the characteristic value is corrected in correspondence with the reliability of the testing method. This concept aims to use a partial safety factor which is independent of the testing method. Therefore, the concept in the case of a large number of data entries is slightly different from that of the general design code, in which the characteristic value is generally the expected value of the derived values.

Figure 5. Flowchart for the determination of soil parameters in TSPHF.
Ovesen (1995) proposed a simple equation, expressed as Equation (2), to obtain the lower limit of the 95% confidence level:

ak = µ(a) − 1.645 σ(a)/√n  (2)

where n is the number of data entries. Schneider (1997) proposed a more simplified equation, Equation (3), which coincides with Ovesen's equation at about n = 11:

ak = µ(a) − 0.5 σ(a)  (3)

When n is larger than 12, Schneider's equation gives a more conservative value than Ovesen's equation. The new TSPHF adopted a more practical method, which follows the outline of both Schneider's and Ovesen's equations and is partly consistent with JGS4001, to determine the characteristic value. Because the engineer who performs the geotechnical investigation and the engineer who performs the facility design are usually different people, the design engineer cannot determine an appropriate partial safety factor taking into account the data variations of soil parameters associated with the investigation and testing methods. In addition, it is virtually impossible to determine each partial safety factor taking into account data variations derived from the heterogeneity of the ground itself, reflecting the soil locality. Therefore, it is ideal that the reliability of the soil parameters is always guaranteed to remain at the same level when the geotechnical information is transmitted from the geotechnical engineer to the facility design engineer. Consequently, the partial safety factor as a general value listed in the design code can be used.
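A quick numerical check of the crossover between the two formulas, using Equations (2) and (3) as reconstructed above (treat the exact coefficients as assumptions consistent with the crossover near n = 11 stated in the text):

```python
# Compare the two characteristic-value formulas for a sample data set.
import math

def ovesen(mean: float, sd: float, n: int) -> float:
    return mean - 1.645 * sd / math.sqrt(n)   # lower 95% confidence limit

def schneider(mean: float, sd: float) -> float:
    return mean - 0.5 * sd                    # half a standard deviation below the mean

mean, sd = 100.0, 20.0                        # e.g. an undrained strength in kPa
for n in (5, 11, 20):
    print(n, round(ovesen(mean, sd, n), 1), schneider(mean, sd))
# n=5 : Ovesen 85.3 is the more conservative value
# n=11: Ovesen 90.1 nearly coincides with Schneider 90.0
# n=20: Schneider 90.0 is now the more conservative value
```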
Table 7. Values for correction factor b1.

Coefficient of variation COV | b1 (parameter on the safe side) | b1 (parameter on the unsafe side)
COV < 0.1 | 1.00 | 1.00
0.1 < COV < 0.15 | 0.95 | 1.05
0.15 < COV < 0.25 | 0.90 | 1.10
0.25 < COV < 0.4 | 0.85 | 1.15
0.4 < COV < 0.6 | 0.75 | 1.25
0.6 < COV | Reexamination of the data / reexamination of the soil test
3.3.2 Characteristic value determination with correction factors
In the new TSPHF guidelines, the partial safety factor γ is determined from empirical calibration taking the data variation into consideration. Because the data variation is a given condition in most cases, efforts designed to decrease it, e.g. examinations designed to obtain the most appropriate depth profile, adopting a reliable laboratory testing method, or improving skills for site investigations and laboratory testing, are not rewarded; consequently, engineers prefer to use conventional investigation methods. The new TSPHF guidelines aim to solve these kinds of issues. In JGS4001, because the confidence interval narrows as the number of data entries increases, the characteristic value coincides with the mean value when the number of data entries becomes large. In the new TSPHF guidelines, because the characteristic value is determined according to the data variation, efforts to reduce the data variation are rewarded in the design. Because the derived value is influenced by the sampling method, laboratory testing method, sounding method, and empirical/theoretical equations, etc., the design values must reflect these influences. For example, it is well known that the reliability of the undrained shear strength obtained using the unconfined compression test is much lower than that obtained using the recompression triaxial test; however, it is very difficult to take account of this fact in design. The method in the new TSPHF guidelines adopted the concept in which the characteristic value is corrected according to the reliability level of the testing method. The coefficient of variation (COV) is introduced to represent the data variation. To reflect the data reliability in the characteristic value, the estimated value is corrected according to the COV. Consequently, a design code with a common partial safety factor can be established by using the characteristic value determined with this method, even though the derived values of the soil parameters have been obtained from different soil tests. A larger number of data entries is more desirable in order to reduce the COV. However, 10 data entries are sufficient in practice, because the number of data entries is generally very limited; in fact, in most cases the COV converges to a certain value when the number of data entries is more than 10. It is known that the COV for derived values obtained by highly skilled technicians is less than 0.1 (Watabe et al., 2007); in other words, variation at this level is inevitable due to ground heterogeneity and laboratory testing errors. Ground heterogeneity, sample disturbance, inappropriate soil tests, and inappropriate modeling of depth profiles, etc., result in large COV values. In such cases, it is reasonable to conservatively determine characteristic values by taking the uncertainties into account.

In order to calculate the characteristic value ak from the estimated value a∗, the correction factor b1 is introduced as a function of the COV; ak is then defined as Equation (4):

ak = b1 · a∗  (4)

When the soil parameter a contributes to the resistance moment in the safety verification (e.g. the shear strength in a stability analysis) or to the safety margin in a prediction (e.g. the consolidation yield stress pc or the coefficient of consolidation cv in a consolidation calculation), the correction factor is defined as Equation (5):

b1 = 1 − 0.5 COV  (5)

On the other hand, when it contributes to the sliding moment in the safety verification (e.g. the unit weight of the earth fill in a stability analysis) or to the unsafe side in a prediction (e.g. the compression index Cc or the coefficient of volume compressibility mv in a consolidation calculation), the correction factor is defined as Equation (6):

b1 = 1 + 0.5 COV  (6)

In these definitions, the characteristic values correspond to either the 30% or the 70% fractile value. Because the aim of this method is simplification, the values listed in Table 7 are to be used instead of correction factors with detailed fractions. When the COV is larger than 0.6, the reliability of the soil parameter is judged too low for design. In this case, the test results are reexamined, i.e. the depth profile is remodeled if necessary; in some cases, the ground investigation may be performed again. In JGS4001 and Eurocode 7, the characteristic value is defined as the upper/lower boundary of a certain confidence level (95% in most cases), as mentioned above. The new TSPHF guidelines, using a simplified
method without real statistical treatment, are partly consistent with JGS4001. The characteristic value is defined as the 30% or 70% fractile value, which corresponds to a 95% confidence level when the number of data entries n is 10 and the data variation COV is 0.1. If the number of data entries is not sufficient for statistical treatment, another correction factor b2 is introduced to correct b1. The characteristic value is then expressed as Equation (7):

ak = b1 · b2 · a∗  (7)
Approximately 10 or more data entries in the depth profile can be considered sufficient to reliably calculate the COV. In cases with fewer than 10 data entries, when the soil parameter contributes to the resistance moment in the stability verification or to the safety margin in the prediction, the correction factor b2 is defined as Equation (8); on the other hand, when it contributes to the sliding moment in the stability verification or to the unsafe side in the prediction, it is defined as Equation (9).
Here, b2 for cases with only one data entry is set at 0.5 or 1.5, and the reliability is assumed to increase rapidly with the number of data entries. In this regard, however, the correction factor b1 cannot be obtained in the case of n = 1, because the COV cannot be calculated; this indicates that at least two data entries are required for this method. In the new TSPHF guidelines, the correction factor b2 is introduced when the number of data entries is less than 10, but this number can be varied by each design guideline. Note that b1 = 1 and b2 = 1 are used for soil parameters that contribute equivalently to both action and counteraction.
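The determination procedure of Equations (4) and (7) with Table 7 can be summarized in a short script. This is a sketch, not the code of the guidelines: the estimated value a∗ is taken here simply as the mean of the derived values, and the b2 form for n < 10 is an assumed interpolation chosen only to reproduce the stated anchor b2(1) = 0.5 (or 1.5).

```python
# Sketch of the TSPHF characteristic-value procedure (Sections 3.3.1-3.3.2).
import statistics

def b1_factor(cov: float, safe_side: bool) -> float:
    """Correction factor b1 per Table 7 (COV >= 0.6 requires reexamination)."""
    bands = [(0.10, 0.00), (0.15, 0.05), (0.25, 0.10), (0.40, 0.15), (0.60, 0.25)]
    for upper, delta in bands:
        if cov < upper:
            return 1.0 - delta if safe_side else 1.0 + delta
    raise ValueError("COV >= 0.6: reexamine the data or the soil test")

def b2_factor(n: int, safe_side: bool) -> float:
    """Assumed form: 1 -/+ 0.5/n for n < 10, 1.0 otherwise (hypothetical)."""
    if n >= 10:
        return 1.0
    return 1.0 - 0.5 / n if safe_side else 1.0 + 0.5 / n

def characteristic_value(derived: list[float], safe_side: bool = True) -> float:
    """ak = b1 * b2 * a*, with a* taken here as the mean of the derived values."""
    a_star = statistics.mean(derived)
    cov = statistics.stdev(derived) / a_star
    return b1_factor(cov, safe_side) * b2_factor(len(derived), safe_side) * a_star

su = [42.0, 38.5, 45.2, 40.8, 39.9, 44.1, 41.3, 43.0, 37.8, 42.6]  # kPa
print(round(characteristic_value(su), 1))  # 41.5; b1 = b2 = 1.0 since COV < 0.1, n = 10
```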
3.4 Institution for surveillance of adequateness to the TSPHF
The new TSPHF admits designs which are not completely verified by the guidelines. There are two procedures for design verification. One procedure is to use Approach A in Fig. 1, a verification approach in which the designer's own original considerations certify that the requirements of the performance criteria provided in the TSPHF are satisfied. The other procedure is to use Approach B, a verification approach in accordance with the recommended procedures in the guidelines; even in Approach B, designers can make their own decisions within the verification procedures. For verification results using Approach A, it is necessary to certify that the verification results conform to the TSPHF. For verification results obtained using Approach B, on the other hand, it is necessary to certify that the verification results are evaluated adequately. Technical standards and verification procedures in public construction works in Japan are authorized by operating bodies such as railways, roads, or ports, and in the existing system evaluations of design results are performed by such operating bodies. This system seems less transparent and fair when viewed from outside the group concerned. In order to avoid this problem, a third party for certifying design verification results is required. This surveillance organization is required to satisfy the items required of an adequateness evaluation organization provided by ISO/IEC Guide 65. When a design verification surveillance organization issues a certification of the adequateness of a design, it is responsible for conducting a survey designed to show that the design results are free from faults. It is also required to prepare a discharge of liability for any damages arising from a misevaluation. To implement this certification, the TSPHF has a rule that design results for important facilities will be checked by a governmental institution or a third-party institution authorized by the government (Figure 6).

Figure 6. Design verification surveillance system.

4 CONCLUSION

The purpose of this paper is to introduce the revision of the Technical Standards for Port and Harbor Facilities (TSPHF), which took place in April 2007. The TSPHF is thought to be one of the first cases of a revision of a design code based on a performance based design/specification concept. First, the reason why a performance based design concept was introduced to the TSPHF is explained in this paper. Then, the philosophy of providing a performance concept is explained. The standard verification procedure in the TSPHF guidelines is explained using an example consisting of the sliding failure of a caisson type breakwater. The policy for determining the geotechnical parameters used for the performance based design concept is introduced: the TSPHF guidelines introduce a simplified determination method allowing ease of use for the practitioner, and with this determination procedure innovative geotechnical investigation methods and laboratory testing methods can be easily introduced. Finally, the adequateness surveillance system that has been introduced is explained. This kind of organization is inevitably required by the new design system in order to achieve higher levels of transparency and fairness. Engineers in the field are experiencing a certain degree of confusion with regard to the application of the new TSPHF; misunderstanding of the concept is one reason for this confusion, which is perhaps inevitable when a brand new concept is introduced. As code writers we also intend to work hard to improve the design code.
Orr, T.L.L. (2006): Development and implementation of Eurocode 7, Proceedings of the International Symposium on New Generation Design Codes for Geotechnical Engineering Practice – Taipei 2006, CDROM, 1–18. Ovesen, N.K. (1995): Eurocode 7 for geotechnical design, Proceedings Bengt B. Broms Symposium on Geotechnical Engineering, Singapore, 333–360. Schneider, H.R. (1997): Definition and determination of characteristic soil properties, Proceedings 12th International Conference on Soil Mechanics and Geotechnical Engineering, Hamburg, Vol. 4, 2271–2274. The Japan Port and Harbour Association (2007): Technical standards, and commentaries for port and harbour facilities. (in Japanese) Tsuchida, T. (2000): Evaluation of undrained shear strength of soft clay with consideration of sample quality, Soils and Foundations, 40 (3), 29–42. Watabe, Y. and Tsuchida, T. (2001): Comparative study on undrained shear strength of Osaka Bay Pleistocene clay determined by several kinds of laboratory tests, Soils and Foundations, 41 (5), 47–59. Watabe, Y., Shiraishi, Y., Murakami, T. and Tanaka, M. (2007): Variability of physical and consolidation test results for relatively uniform clay samples retrieved from Osaka Bay, Soils and Foundations, 47 (4), 701–716. Watabe, Y., Tanaka, M. and Kikuchi, Y. (2009a): Soil parameters in the new design code for port facilities in Japan, Proceedings of the International Foundation Congress & Equipment Expo’09: IFCEE’09. (in print) Watabe, Y., Tanaka, M. and Kikuchi, Y. (2009b): Practical determination method for soil parameters adopted in the new performance based design code for port facilities in Japan, Soils and Foundations. (in print)
is introduced. The TSPHF guidelines introduce a simplified determination method allowing for ease of use for the practitioner, and with this determination procedure innovative geotechnical investigation methods and laboratory testing methods can be easily introduced. Finally, the adequateness surveillance system introduced is explained. This kind of organization is inevitably required for the new design system in order to achieve higher levels of transparency and fairness. Engineers in the field are experiencing a certain degree of confusion with regard to application of the new TSPHF. Misunderstandings of the concept is one of the reasons for this confusion, with the result that the introduction of a brand new concept is inevitable. As code writers we also intend to work hard to improve the design code. REFERENCES EN 1990: 2002: Eurocode 0, Basis of structural design. EN 1997-1: 2004: Eurocode 7, Geotechnical design –Part 1: General rules. JGS (2004), JGS-4001-2004: Principles for Foundation Designs Grounded on a Performance-based Design Concept (nickname ‘Geocode 21’). JSCE (2003), Principles, Guidelines and Terminologies for drafting design codes founded on performance based design concept (nickname ‘code PLATFORM ver.1’), Japan Society of Civil Engineers. MLIT (2002); Basis for design of civil and building structures, Ministry of Land, Infrastructure and Transportation.
Special lecture
Interaction between Eurocode 7 – Geotechnical design and Eurocode 8 – Design for earthquake resistance of geotechnical structures

P.S. Sêco e Pinto
Faculty of Engineering, University of Coimbra, National Laboratory of Civil Engineering (LNEC), Lisbon, Portugal
1 INTRODUCTION

The Commission of the European Communities (CEC) initiated work in 1975 on establishing a set of harmonised technical rules for the structural and geotechnical design of buildings and civil engineering works, based on Article 95 of the Treaty. In a first stage, these rules would serve as an alternative to the national rules applied in the various Member States; in a final stage, they would replace them. From 1975 to 1989 the Commission, with the help of a Steering Committee with Representatives of the Member States, developed the Eurocodes programme. In 1989 the Commission and the Member States of the EU and EFTA decided, based on an agreement between the Commission and CEN, to transfer the preparation and publication of the Eurocodes to CEN. The Structural Eurocode programme comprises the following standards:
EN 1990 Eurocode – Basis of design
EN 1991 Eurocode 1 – Actions on structures
EN 1992 Eurocode 2 – Design of concrete structures
EN 1993 Eurocode 3 – Design of steel structures
EN 1994 Eurocode 4 – Design of composite steel and concrete structures
EN 1995 Eurocode 5 – Design of timber structures
EN 1996 Eurocode 6 – Design of masonry structures
EN 1997 Eurocode 7 – Geotechnical design
EN 1998 Eurocode 8 – Design of structures for earthquake resistance
EN 1999 Eurocode 9 – Design of aluminium alloy structures
The work performed by the Commission of the European Communities (CEC) in preparing the "Structural Eurocodes" in order to establish a set of harmonised technical rules is impressive. Nevertheless, because these documents were prepared by several experts, some provisions of EC8 concerning the special requirements of seismic geotechnical design deserve more consideration, and they will be presented here in order to clarify several questions that still remain without answer. The actual tendency is to prepare unified codes for different regions while keeping the freedom of each country to choose the safety level defined in each National Document of Application. The global factor of safety was replaced by partial safety factors applied to actions and to the strength of materials. This invited lecture summarises the main topics covered by Eurocode 7 and its interplay with Eurocode 8, and also identifies some topics that need further implementation. In dealing with these topics we should never forget the memorable lines of Lao-Tsze, Maxim 64 (550 B.C.):

"The journey of a thousand miles begins with one step".
2 EUROCODE 7 – GEOTECHNICAL DESIGN

2.1 Introduction
Eurocode 7 (EC7) "Geotechnical Design" gives a general basis for the geotechnical aspects of the design of buildings and civil engineering works. The link between the design requirements in Part 1 and the results of laboratory tests and field investigations, run according to standards, codes and other accepted documents, is covered by Part 2. EN 1997 is concerned with the requirements for strength, stability, serviceability and durability of structures. Other requirements, e.g. concerning thermal or sound insulation, are not considered.
2.2 EUROCODE 7 – Geotechnical Design – Part 1
The following subjects are dealt with in EN 1997-1, Geotechnical design:
Section 1: General
Section 2: Basis of Geotechnical Design
Section 3: Geotechnical Data
Section 4: Supervision of Construction, Monitoring and Maintenance
Section 5: Fill, Dewatering, Ground Improvement and Reinforcement
Section 6: Spread Foundations
Section 7: Pile Foundations
Section 8: Anchorages
Section 9: Retaining Structures
Section 10: Hydraulic Failure
Section 11: Overall Stability
Section 12: Embankments

Where relevant, it shall be verified that the following limit states are not exceeded:
– Loss of equilibrium of the structure or the ground, considered as a rigid body, in which the strengths of structural materials and the ground are insignificant in providing resistance (EQU);
– Internal failure or excessive deformation of the structure or structural elements, including footings, piles, basement walls, etc., in which the strength of structural materials is significant in providing resistance (STR);
– Failure or excessive deformation of the ground, in which the strength of soil or rock is significant in providing resistance (GEO);
– Loss of equilibrium of the structure or the ground due to uplift by water pressure (buoyancy) or other vertical actions (UPL);
– Hydraulic heave, internal erosion and piping in the ground caused by hydraulic gradients (HYD).
2.2.1 Design requirements
The following factors shall be considered when determining the geotechnical design requirements:
– Site conditions with respect to overall stability and ground movements;
– Nature and size of the structure and its elements, including any special requirements such as the design life;
– Conditions with regard to its surroundings (neighbouring structures, traffic, utilities, vegetation, hazardous chemicals, etc.);
– Ground conditions;
– Groundwater conditions;
– Regional seismicity;
– Influence of the environment (hydrology, surface water, subsidence, seasonal changes of temperature and moisture).
The selection of characteristic values for geotechnical parameters shall be based on derived values resulting from laboratory and field tests, complemented by well-established experience. The characteristic value of a geotechnical parameter shall be selected as a cautious estimate of the value affecting the occurrence of the limit state. For limit state types STR and GEO in persistent and transient situations, three Design Approaches are outlined. They differ in the way they distribute partial factors between actions, the effects of actions, material properties and resistances. In part, this is due to differing approaches to the way in which allowance is made for uncertainties in modelling the effects of actions and resistances. In Design Approach 1, partial factors are applied to actions, rather than to the effects of actions, and to ground parameters. In Design Approach 2, partial factors are applied to actions or to the effects of actions and to ground resistances. In Design Approach 3, partial factors are applied to actions or the effects of actions from the structure and to ground strength parameters. It shall be verified that a limit state of rupture or excessive deformation will not occur, and that serviceability limit states in the ground or in a structural section, element or connection are not exceeded.
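For reference, the three Design Approaches share a single ultimate limit state verification format; only the entry points of the partial factors differ. In the usual EN 1997-1 shorthand (sets A for actions or effects of actions, M for material properties, R for resistances):

Ed ≤ Rd

where Ed is the design value of the effect of actions and Rd the design resistance, with the sets combined as follows: Design Approach 1, Combination 1: A1 "+" M1 "+" R1, Combination 2: A2 "+" M2 "+" R1 (R4 for piles and anchorages); Design Approach 2: A1 "+" M1 "+" R2; Design Approach 3: (A1 or A2) "+" M2 "+" R3.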
For each geotechnical design situation it shall be verified that no relevant limit state is exceeded. Limit states can occur either in the ground or in the structure, or by combined failure in the structure and the ground. Limit states should be verified by one or a combination of the following methods: design by calculation, design by prescriptive measures, design by load tests and experimental models, and the observational method. To establish geotechnical design requirements, three Geotechnical Categories, 1, 2 and 3, are introduced. Geotechnical Category 1 includes small and relatively simple structures. Geotechnical Category 2 includes conventional types of structure and foundation with no exceptional risk or difficult soil or loading conditions. Geotechnical Category 3 includes: (i) very large or unusual structures; (ii) structures involving abnormal risks, or unusual or exceptionally difficult ground or loading conditions; and (iii) structures in highly seismic areas.

2.2.2 Geotechnical design by calculation
Design by calculation involves:
– Actions, which may be either imposed loads or imposed displacements, for example from ground movements;
– Properties of soils, rocks and other materials;
– Geometrical data;
– Limiting values of deformations, crack widths, vibrations, etc.;
– Calculation models.
The calculation model may consist of: (i) an analytical model; (ii) a semi-empirical model; or (iii) a numerical model.

2.2.3 Design by prescriptive measures
In design situations where calculation models are not available or not necessary, the exceedance of limit states may be avoided by the use of prescriptive measures. These involve conventional and generally conservative rules in the design, and attention to specification and control of materials, workmanship, protection and maintenance procedures.

2.2.4 Design by load tests and experimental models
When the results of load tests or tests on large or small scale models are used to justify a design, the following features shall be considered and allowed for:
– Differences in the ground conditions between the test and the actual construction;
– Time effects, especially if the duration of the test is much less than the duration of loading of the actual construction;
– Scale effects, especially if small models are used. The effect of stress levels shall be considered, together with the effects of particle size.
Tests may be carried out on a sample of the actual construction or on full scale or smaller scale models.

2.2.5 Observational method
When prediction of geotechnical behaviour is difficult, it can be appropriate to apply the approach known as "the observational method", in which the design is reviewed during construction. The following requirements shall be met before construction is started:
– The limits of behaviour which are acceptable shall be established;
– The range of possible behaviour shall be assessed and it shall be shown that there is an acceptable probability that the actual behaviour will be within the acceptable limits;
– A plan of monitoring shall be devised which will reveal whether the actual behaviour lies within the acceptable limits. The monitoring shall make this clear at a sufficiently early stage and with sufficiently short intervals to allow contingency actions to be undertaken successfully;
– The response time of the instruments and the procedures for analysing the results shall be sufficiently rapid in relation to the possible evolution of the system;
– A plan of contingency actions shall be devised which may be adopted if the monitoring reveals behaviour outside acceptable limits.

2.3 EUROCODE 7 – Part 2

EN 1997-2 is intended to be used in conjunction with EN 1997-1 and provides rules supplementary to EN 1997-1 related to the:
– Planning and reporting of ground investigations,
– General requirements for a number of commonly used laboratory and field tests,
– Interpretation and evaluation of test results,
– Derivation of values of geotechnical parameters and coefficients.
The field investigation programme shall contain:
– A plan with the locations of the investigation points, including the types of investigations,
– The depth of the investigations,
– The type of samples (category, etc.) to be taken, including specifications on the number and the depth at which they are to be taken,
– Specifications on the ground water measurement,
– The types of equipment to be used,
– The standards that are to be applied.
The laboratory test programme depends in part on whether comparable experience exists. The extent and quality of comparable experience for the specific soil or rock should be established. The results of field observations on neighbouring structures, when available, should also be used. The tests shall be run on specimens representative of the relevant strata. Classification tests shall be used to check whether the samples and test specimens are representative. This can be checked in an iterative way. In a first step, classification tests and strength index tests are performed on as many samples as possible to determine the variability of the index properties of a stratum. In a second step, the representativeness of strength and compressibility tests can be checked by comparing the results of the classification and strength index tests of the tested sample with the entire results of the classification and strength index tests of the stratum. A flow chart links design with the field and laboratory tests: the design part is covered by EN 1997-1, while the parameter values part is covered by EN 1997-2.

3 EUROCODE 8 – DESIGN OF STRUCTURES FOR EARTHQUAKE RESISTANCE

3.1 Introduction

Eurocode 8 (EC8) "Design of Structures for Earthquake Resistance" deals with the design and construction of buildings and civil engineering works in seismic regions and is divided into six Parts. Part 1 is divided into 10 sections:
Section 1 contains general information;
Section 2 contains the basic requirements and compliance criteria applicable to buildings and civil engineering works in seismic regions;
Section 3 gives the rules for the representation of seismic actions and their combination with other actions;
Section 4 contains general design rules relevant specifically to buildings;
Section 5 presents specific rules for concrete buildings;
Section 6 gives specific rules for steel buildings;
Section 7 contains specific rules for steel-concrete composite buildings;
Section 8 presents specific rules for timber buildings;
Section 9 gives specific rules for masonry buildings;
Section 10 contains fundamental requirements and other relevant aspects for the design and safety related to base isolation.
Further Parts include the following: Part 2 contains provisions relevant to bridges. Part 3 presents provisions for the seismic strengthening and repair of existing buildings. Part 4 gives specific provisions relevant to tanks, silos and pipelines. Part 5 contains specific provisions relevant to foundations, retaining structures and geotechnical aspects. Part 6 presents specific provisions relevant to towers, masts and chimneys. In particular, Part 5 of EC8 establishes the requirements, criteria and rules for siting and foundation soils, and complements the rules of Eurocode 7, which do not cover the special requirements of seismic design. The topics covered by Part 1, namely seismic action, ground conditions and soil investigations, importance categories, importance factors and geotechnical categories, and the topics treated in Part 5, namely slope stability, potentially liquefiable soils, earth retaining structures, foundation systems and topographic aspects, are discussed below.

3.2 Seismic action

The definition of the actions (with the exception of seismic actions) and their combinations is treated in Eurocode 1 "Actions on Structures". Nevertheless, for some terms defined in EN 1998-1, further clarification of terminology is important to avoid common misunderstandings and shortcomings in seismic hazard analysis, as stressed by Abrahamson (2000). In general, the national territories are divided by the National Authorities into seismic zones, depending on the local hazard.

In EC8, in general, the hazard is described in terms of a single parameter, i.e. the value ag of the effective peak ground acceleration in rock or firm soil, called the "design ground acceleration" (Figure 1), expressed in terms of: a) the reference seismic action associated with a probability of exceedance (PNCR) of 10% in 50 years; or b) a reference return period (TNCR) of 475 years. These recommended values may be changed by the National Annex of each country (e.g. in UBC (1997) the probability of exceedance is 2% in 50 years, i.e. an annual probability of about 1/2475).

Figure 1. Elastic response spectrum (after EC8).

The earthquake motion in EC8 is represented by the elastic response spectrum, defined for 3 components. The use of two types of spectra is recommended: type 1 if the earthquake has a surface wave magnitude Ms greater than 5.5, and type 2 in other cases. For the horizontal components the Type 1 spectrum takes the standard EN 1998-1 branch form:

Se(T) = ag S [1 + (T/TB)(2.5η − 1)]   for 0 ≤ T ≤ TB
Se(T) = 2.5 ag S η                    for TB ≤ T ≤ TC
Se(T) = 2.5 ag S η (TC/T)             for TC ≤ T ≤ TD
Se(T) = 2.5 ag S η (TC TD/T²)         for TD ≤ T ≤ 4 s

where:
Se(T) – elastic response spectrum;
T – vibration period of a linear single-degree-of-freedom system;
ag – design ground acceleration;
TB, TC – limits of the constant spectral acceleration branch;
TD – value defining the beginning of the constant displacement response range of the spectrum;
S – soil parameter with reference value 1.0 for subsoil class A;
η – damping correction factor with reference value 1.0 for 5% viscous damping.

The seismic motion may also be represented by ground acceleration time-histories and related quantities (velocity and displacement). Artificial accelerograms shall match the elastic response spectrum. The number of accelerograms to be used shall give a stable statistical measure (mean and variance); a minimum of 3 accelerograms should be used, and some other requirements should also be satisfied. For the computation of permanent ground deformations, the use of accelerograms recorded on soil sites in real earthquakes, or of simulated accelerograms, is allowed, provided that the samples used are adequately qualified with regard to the seismogenic features of the sources. For structures with special characteristics, spatial models of the seismic action shall be used, based on the principles of the elastic response spectra.

Table 1. Values of the parameters describing the Type 1 elastic response spectrum.

Ground type    S      TB (s)   TC (s)   TD (s)
A              1.0    0.15     0.4      2.0
B              1.2    0.15     0.5      2.0
C              1.15   0.20     0.6      2.0
D              1.35   0.20     0.8      2.0
E              1.4    0.15     0.5      2.0

Table 2. Values of the parameters describing the Type 2 elastic response spectrum.

Ground type    S      TB (s)   TC (s)   TD (s)
A              1.0    0.05     0.25     1.2
B              1.35   0.05     0.25     1.2
C              1.5    0.10     0.25     1.2
D              1.8    0.10     0.30     1.2
E              1.6    0.05     0.25     1.2

3.3 Ground conditions and soil investigations

For the ground conditions, five subsoil classes A, B, C, D and E are considered:
Subsoil class A – rock or other geological formation, including at most 5 m of weaker material at the surface, characterised by a shear wave velocity Vs of at least 800 m/s;
Subsoil class B – deposits of very dense sand, gravel or very stiff clay, at least several tens of metres in thickness, characterised by a gradual increase of mechanical properties with depth, a shear wave velocity between 360–800 m/s, NSPT > 50 blows and cu > 250 kPa;
Subsoil class C – deep deposits of dense or medium dense sand, gravel or stiff clays, with thickness from several tens to many hundreds of metres, characterised by a shear wave velocity from 180 m/s to 360 m/s, NSPT from 15–50 blows and cu from 70 to 250 kPa;
Subsoil class D – deposits of loose to medium dense cohesionless soil (with or without some soft cohesive layers), or of predominantly soft to firm cohesive soil, characterised by a shear wave velocity less than 180 m/s, NSPT less than 15 and cu less than 70 kPa;
Subsoil class E – a soil profile consisting of a surface alluvium layer with Vs,30 values of type C or D and thickness varying between about 5 m and 20 m, underlain by stiffer material with Vs,30 > 800 m/s;
Subsoil class S1 – deposits consisting of, or containing a layer at least 10 m thick of, soft clays/silts with a high plasticity index (PI > 40) and high water content, characterised by a shear wave velocity less than 100 m/s and cu between 10–20 kPa;
Subsoil class S2 – deposits of liquefiable soils, of sensitive clays, or any other soil profile not included in types A–E or S1.

For the five ground types the recommended values for the parameters S, TB, TC, TD for the Type 1 and Type 2 spectra are given in Tables 1 and 2. The recommended Type 1 and Type 2 elastic response spectra for ground types A to E are shown in Figures 2 and 3. The recommended values of the parameters for the five ground types A, B, C, D and E for the vertical spectra are shown in Table 3. These values are not applied to ground types S1 and S2.

The influence of local conditions on site amplification proposed by Seed and Idriss (1982) is shown in Figure 4. The initial response spectra proposed in the pre-standard EC8, based on the Seed and Idriss proposal, underestimated the design levels of soft soil sites, in contradiction with the observations of the last recorded earthquakes. Based on records of earthquakes, Idriss (1990) has shown that peak accelerations on soft soils have been observed to be larger than on rock sites (Figure 5). The high quality records from very recent earthquakes, Northridge (1994), Hyogo-ken-Nambu (1995), Kocaeli (1999), Chi-Chi (1999) and Tottori-ken (2000), have confirmed the Idriss (1990) proposal. Based on strong motion records obtained during the Hyogoken-Nanbu earthquake at four vertical array sites, and using an inverse analysis, Kokusho and Matsumoto (1997) have plotted in Figure 6 the maximum horizontal acceleration ratio against the maximum base acceleration and proposed a regression equation. This trend, with a base in a Pleistocene soil, is similar to the Idriss (1990) proposal, where the base was in rock.
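As an illustration, the Type 1 branch expressions and the Table 1 parameters can be evaluated directly; a minimal Python sketch (the input acceleration is an arbitrary example value):

# Type 1 elastic response spectrum, EN 1998-1 branch expressions
TYPE1 = {  # ground type: (S, TB, TC, TD), from Table 1
    "A": (1.00, 0.15, 0.4, 2.0),
    "B": (1.20, 0.15, 0.5, 2.0),
    "C": (1.15, 0.20, 0.6, 2.0),
    "D": (1.35, 0.20, 0.8, 2.0),
    "E": (1.40, 0.15, 0.5, 2.0),
}

def Se(T, ag, ground="C", eta=1.0):
    """Horizontal elastic spectral acceleration Se(T)."""
    S, TB, TC, TD = TYPE1[ground]
    if T <= TB:
        return ag * S * (1.0 + T / TB * (2.5 * eta - 1.0))
    if T <= TC:
        return 2.5 * ag * S * eta
    if T <= TD:
        return 2.5 * ag * S * eta * TC / T
    return 2.5 * ag * S * eta * TC * TD / T**2

# Example: plateau value for ground type C with ag = 2.0 m/s2
print(Se(0.3, ag=2.0))  # 2.5 * 2.0 * 1.15 = 5.75 m/s2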
The downhole arrays are useful: (i) to understand the seismic ground response; and (ii) to calibrate our experimental and mathematical models. Following the comments proposed by the Southern Member States, the current recommended elastic response spectrum of EC8 incorporates the lessons learnt from recent earthquakes. The soil investigations shall follow the same criteria adopted in non-seismic areas, as defined in EC7 (Parts 1, 2 and 3). The soil classification proposed in the pre-standard of EC8, based on only 3 ground materials classified by wave velocity, was simpler. The current ground classification of EC8 follows a classification based on shear wave velocity, on SPT values and on undrained shear strength, similar to UBC (1997), which is shown in Table 4. Based on the available strong-motion database, and on equivalent linear and fully nonlinear analyses of response to varying levels and characteristics of excitation, Seed et al. (1997) have proposed Figures 7 and 8 for site-dependent seismic response, where A0, A and AB are hard to soft rocks, B are deep or medium depth cohesionless or cohesive soils, C and D are soft soils, and E are soft, high plasticity soils.

Comments: The following comments are offered: (i) as seismic cone tests have shown good potential, they should also be recommended; (ii) EC8 (Part 5) stresses the need for the definition of the variation of shear modulus and damping with strain level, but does not refer to the use of laboratory tests such as the cyclic simple shear test, the cyclic triaxial test and the cyclic torsional test. It is important to stress that a detailed description of laboratory tests for the static characterisation of soils is given in EC7 Part 2, and the same criterion is not adopted in EC8 – Part 5.

Figure 2. Recommended Type 1 elastic response spectrum (after EC8).

Figure 3. Recommended Type 2 elastic response spectrum (after EC8).

Table 3. Recommended values of the parameters for the five ground types A, B, C, D and E (vertical spectra).

Spectrum   αvg/αg   TB (s)   TC (s)   TD (s)
Type 1     0.9      0.05     0.15     1.0
Type 2     0.45     0.05     0.15     1.0

3.4 Importance categories, importance factors and geotechnical categories

The structures following EC8 (Part 1.2) are classified in 4 importance categories related to their size, value and importance for public safety, and to the possibility of human losses in case of a collapse. To each importance category an importance factor γI is assigned. The importance factor γI = 1.0 is associated with a design seismic event having a reference return period of [475] years. The importance categories, varying from I to IV (with decreasing importance and complexity of the structures), are related to importance factors γI assuming the values [1.4], [1.2], [1.0] and [0.8], respectively. To establish geotechnical design requirements, three Geotechnical Categories 1, 2 and 3 were introduced in EC7, with the highest category related to unusual structures involving abnormal risks, or unusual or exceptionally difficult ground or loading conditions, and structures in highly seismic areas. It is also important to note that buildings of importance categories [I, II, III] shall generally not be erected in the immediate vicinity of tectonic faults recognised as seismically active in official documents issued by competent national authorities. Absence of movement in the late Quaternary may be used to identify non-active faults for most structures. It seems that this restriction is not only very difficult to follow for structures such as bridges, tunnels and embankments, but also conservative, given the difficulty of identifying reliably the surface outbreak of a fault. Anastassopoulos and Gazetas (2006) have proposed a methodology to design structures against major fault ruptures, validated through successful Class A predictions of centrifuge model tests, and have recommended some changes to EC8 – Part 5.
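The practical consequence of the importance classification is a simple scaling of the reference ground acceleration; a one-function sketch using the bracketed factors quoted above:

# Importance factors gamma_I from Section 3.4 (bracketed NDP values)
GAMMA_I = {"I": 1.4, "II": 1.2, "III": 1.0, "IV": 0.8}

def design_ground_acceleration(ag_reference, category):
    """ag = gamma_I * agR: scale the 475-year reference acceleration."""
    return GAMMA_I[category] * ag_reference

print(design_ground_acceleration(2.0, "I"))  # 2.8 m/s2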
Figure 4. Influence of local soil conditions on site response (after Seed and Idriss, 1982).
Figure 5. Influence of local soil conditions on site response (after Idriss, 1990).
Comments: The following comments are presented: (i) no reference is made to the influence on strong motion data of a near-fault factor (confined to distances of less than 10 km from the fault rupture surface), with the corresponding increase of seismic design requirements to be included in building codes; (ii) no relation is established between the ground motion and the type of fault, such as reverse faulting, strike-slip faulting and normal faulting; (iii) EC8 refers to the spatial variation of ground motion but does not present any guidance; (iv) basin-edge and other 2D and 3D effects were not incorporated in EC8. The importance of the shapes of the boundaries of sedimentary valleys, as well as of deeper geologic structures, in determining site response was shown by the analysis of records in the Northridge and Kobe earthquakes.

3.5 Potentially liquefiable soils
Following 4.1.3(2) – Part 5 – EC8: "An evaluation of the liquefaction susceptibility shall be made when the foundation soils include extended layers or thick lenses of loose sand, with or without silt/clay fines, beneath the water level, and when such level is close to the ground surface".
Figure 6. Maximum horizontal acceleration ratio plotted against maximum base acceleration (after Kokusho and Matsumoto, 1997).

Table 4. Ground profile types (after UBC, 1997).

Ground profile   Ground description              Shear wave velocity   SPT test   Undrained shear
type                                             Vs (m/s)                         strength (kPa)
SA               Hard rock                       >1500                 –          –
SB               Rock                            760–1500              –          –
SC               Very dense soil and soft rock   360–760               >50        >100
SD               Stiff soil                      180–360               15–50      50–100
SE               Soft soil                       <180                  <15        <50
SF               Special soils                   –                     –          –
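Table 4 can be mechanised directly; a minimal sketch classifying by shear wave velocity alone (the SPT and undrained strength columns offer alternative routes, and type SF requires engineering judgement):

def ubc_ground_profile(vs_mps):
    """Classify a site by Vs (m/s) per Table 4 (UBC, 1997)."""
    if vs_mps > 1500:
        return "SA"   # hard rock
    if vs_mps >= 760:
        return "SB"   # rock
    if vs_mps >= 360:
        return "SC"   # very dense soil and soft rock
    if vs_mps >= 180:
        return "SD"   # stiff soil
    return "SE"       # soft soil (SF, special soils, needs judgement)

print(ubc_ground_profile(250))  # SD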
Figure 8. Proposed site-dependent response spectra, with 5% damping (after Seed et al., 1997).

Table 5. Magnitude scaling factors.

Magnitude M   Seed & Idriss (1982)   Idriss NCEER (1997)   Ambraseys (1988)
5.5           1.43                   2.20                  2.86
6.0           1.32                   1.76                  2.20
6.5           1.19                   1.44                  1.69
7.0           1.08                   1.19                  1.30
7.5           1.00                   1.00                  1.00
8.0           0.94                   0.84                  0.67
8.5           0.89                   0.72                  0.44
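In practice Table 5 is used as a lookup; a short sketch with linear interpolation between the tabulated magnitudes (the interpolation rule for intermediate magnitudes is an assumption):

MAGNITUDES = [5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5]
MSF = {  # magnitude scaling factors from Table 5
    "Seed & Idriss (1982)": [1.43, 1.32, 1.19, 1.08, 1.00, 0.94, 0.89],
    "Idriss NCEER (1997)":  [2.20, 1.76, 1.44, 1.19, 1.00, 0.84, 0.72],
    "Ambraseys (1988)":     [2.86, 2.20, 1.69, 1.30, 1.00, 0.67, 0.44],
}

def msf(magnitude, proposal="Idriss NCEER (1997)"):
    """Scaling factor, linearly interpolated between tabulated rows."""
    ms, fs = MAGNITUDES, MSF[proposal]
    if magnitude <= ms[0]:
        return fs[0]
    if magnitude >= ms[-1]:
        return fs[-1]
    for m0, m1, f0, f1 in zip(ms, ms[1:], fs, fs[1:]):
        if m0 <= magnitude <= m1:
            return f0 + (f1 - f0) * (magnitude - m0) / (m1 - m0)

print(round(msf(6.8), 2))  # ~1.29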
Figure 7. Proposed site-dependent relationship (after Seed et al., 1997).

Soil investigations should include SPT or CPT tests and the grain size distribution (Idriss and Boulanger, 2004). Normalisation for overburden effects can be performed by multiplying the SPT or CPT value by the factor (100/σ′vo)^(1/2), where σ′vo (kPa) is the effective overburden pressure. This normalisation factor should be taken as not smaller than 0.5 and not greater than 2. The seismic shear stress τe can be estimated from the simplified expression:

τe = 0.65 αgr γf S σvo

where αgr is the design ground acceleration ratio, γf is the importance factor, S is the soil parameter and σvo is the total overburden pressure. This expression should not be applied for depths larger than 20 m. The shear level should be multiplied by a safety factor of [1.25]. The magnitude correction factors in EC8 follow the proposal of Ambraseys (1988) and are different from the NCEER (1997) factors. A comparison between the different proposals is shown in Table 5. A new proposal, with a summary of different authors, presented by Seed et al. (2001), is shown in Figure 9. Empirical liquefaction charts are given with seismic shear wave velocities versus SPT values to assess liquefaction. A comparison between the NCEER (1997) and EC8 pre-standard proposals is shown in Figure 10.

Figure 9. Recommendations for correlations with magnitude (after Seed et al., 2001).

Figure 10. Liquefaction potential assessment by NCEER (1997) and EC8 (pre-standard).

It is important to note that the proposal for EC8 is based on the results of Robertson et al. (1992), while the proposal of NCEER (1997) incorporates very recent results. However, the EC8 standard version considers that these correlations are still under development and need the assistance of a specialist. The importance of this topic has increased, and the assessment of liquefaction resistance from shear wave crosshole tomography was proposed by Furuta and Yamamoto (2000). A new proposal presented by Cetin et al. (2001), shown in Figure 11, is considered an advance on the previous ones, as it integrates: (i) data from recent earthquakes; (ii) corrections due to the existence of fines; (iii) experience from a better interpretation of the SPT test; (iv) local effects; (v) case histories from more than 200 earthquakes; (vi) Bayesian theory.

Figure 11. Probabilistic approach for liquefaction analysis (after Cetin et al., 2001).

Bray et al. (2004) have shown that the Chinese criteria proposed by Seed and Idriss (1982) are not reliable for the analysis of the liquefaction of silty sands, and have proposed the plasticity index instead. The topic of the assessment of post-liquefaction strength is not treated in EC8, but it seems that the following variables are important: fabric or type of compaction, direction of loading, void ratio and initial effective confining stress (Byrne and Beaty, 1999). A relationship between the SPT N value and the undrained residual strength was proposed by Seed and Harder (1990) from direct testing and field experience (Figure 12). Ishihara et al. (1990) have proposed a relation of normalized residual strength and SPT tests, based on laboratory tests compared with data from back-analysis of actual failure cases (Figure 13).

Figure 12. Relationship between (N1)60 and undrained residual strength (after Seed and Harder, 1990).
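Combining the simplified expression above with a magnitude-scaled resistance gives a compact screening check. The sketch below uses the τe expression as reconstructed above (its exact symbol grouping should be confirmed against EN 1998-5), the bracketed [1.25] factor and the 20 m depth limit from the text, and assumes the cyclic resistance ratio CRR is supplied by one of the empirical charts discussed:

def liquefaction_screening(alpha_gr, gamma_f, S, sigma_vo, depth_m, crr):
    """Simplified EC8-style screening (reconstructed expression, see text).

    Returns (tau_e, demand, passes): the seismic shear stress, the
    factored demand, and whether the resistance CRR * sigma_vo
    exceeds the factored demand.
    """
    if depth_m > 20.0:
        raise ValueError("simplified expression not applicable below 20 m")
    tau_e = 0.65 * alpha_gr * gamma_f * S * sigma_vo
    demand = 1.25 * tau_e            # bracketed safety factor from the text
    return tau_e, demand, crr * sigma_vo >= demand

print(liquefaction_screening(0.20, 1.0, 1.15, 100.0, 8.0, crr=0.25))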
Figure 13. Relation of normalized residual strength and SPT tests (after Ishihara et al., 1990).

Also, Ishihara et al. (1990), by assembling records of earthquake-caused failures in embankments, tailings dams and river dykes, have proposed the relation of Figure 14, in terms of the normalized residual strength plotted versus CPT value.

Figure 14. Normalized residual strength plotted versus CPT values (after Ishihara et al., 1990).

Alba (2004) has proposed a Bingham model, based on triaxial tests of large samples, to simulate the residual strength of liquefied sands. The susceptibility of foundation soils to densification and to excessive settlements is referred to in EC8, but the assessment of the expected liquefaction-induced deformation deserves more consideration. By combining the cyclic shear stress ratio and normalized SPT N-values, Tokimatsu and Seed (1987) have proposed relationships with shear strain (Figure 15).

Figure 15. Correlation between volumetric strain and SPT (after Tokimatsu and Seed, 1987).

To assess the settlement of the ground due to the liquefaction of sand deposits, based on knowledge of the safety factor against liquefaction and of the relative density converted to the value of N1, a chart (Figure 16) was proposed by Ishihara (1993).

Figure 16. Post cyclic liquefaction volumetric strain curves using CPT and SPT results (after Ishihara, 1993).

Following EC8, ground improvement against liquefaction should compact the soil or use drainage to reduce the pore water pressure. The use of pile foundations should be considered with caution, due to the large forces induced in the piles by the liquefiable layers and the difficulties in determining the location and thickness of these layers.

Two categories of remedial measures against liquefaction were proposed: (i) solutions aiming at withstanding liquefaction: confinement wall – stiff walls anchored in a non-liquefied layer (or bedrock) to avoid lateral spreading in case of liquefaction; soil reinforcement – transfer of loads to a non-liquefiable layer; (ii) solutions to avoid liquefaction:
– soil densification: compaction grouting to minimise the liquefaction potential;
– dewatering: lowering of the water table in order to minimise the risk of liquefaction;
– drainage: to facilitate the dissipation of pore pressure;
– fine grouting: to increase the soil cohesion.

The liquefaction prediction/determination methods are covered by the following main Japanese standards: (i) Design Standards for Port and Harbour Structures; (ii) Design Standards of Building Foundations; (iii) Design Standards for Railway Structures; (iv) Design Specifications for Roads. The remedial measures against liquefaction can be classified in two categories (TC4 ISSMGE, 2001; INA, 2001): (i) the prevention of liquefaction; and (ii) the reduction of damage to facilities due to liquefaction. The measures to prevent the occurrence of liquefaction include the improvement of soil properties or the improvement of conditions for stress, deformation and pore water pressure. In practice, a combination of these two methods is adopted. The measures to reduce liquefaction-induced damage to facilities include: (1) maintaining stability by reinforcing the structure: reinforcement of pile foundations and restraint of soil deformation with sheet piles and underground walls; (2) relieving the external forces by softening or modifying the structure: adjustment of bulk unit weight, anchorage of buried structures, flattening of embankments.

In the NEMISREF Project the following criteria for selection were used (Evers, 2005): (i) potential efficiency; (ii) technical feasibility; (iii) impact on structure and environment; (iv) cost-effectiveness; (v) innovation. Two methods were selected: (i) soil grouting using calcifying bacteria; (ii) confinement wall. With calcifying bacteria, the objective of soil consolidation is to create cementation between the grains of the soil skeleton, increasing the cohesion. With a confinement wall, even if partial liquefaction occurs, the final deformations will be controlled. The improvement of soil properties to prevent soil liquefaction by soil cementation and solidification is performed by the deep mixing method (Port and Harbour Research Institute, 1997), so within this framework the use of the bacteria technique is innovative. Structural strengthening is performed by pile foundations and sheet piles (INA, 2001), so the confinement wall can also be considered innovative. The proposed methods of remediation have the additional advantage of minimizing the effects on existing structures during soil improvement.

Comments: From the analysis of this section, it seems that the following items deserve more clarification (Sêco e Pinto, 1999):
i) It is important to quantify the values of extended layers or thick lenses of loose sand;
ii) What is the meaning of "…when such level is close to the ground surface"? What depth? What is the maximum depth at which liquefaction can occur?
iii) No recommendation is presented to compute the seismic shear stress τe for depths larger than 20 m;
iv) The use of the Becker hammer and geophysical tests to assess the liquefaction of gravelly materials should be stressed;
v) The recommended multiplying factor CM for earthquake magnitudes different from 7.5 deserves more explanation. It is important to note that the well-known correlation proposed by Seed et al. (1984) for the cyclic stress ratio versus N1(60) to assess liquefaction, adopted in Annex B of EC8 – Part 5, uses a different correction factor for earthquake magnitudes different from 7.5;
vi) No reference is given for the residual strength of the soil.
3.6 Foundation system

In general, for Soil-Structure Interaction (SSI), design engineers ignore the kinematic component and consider a fixed-base analysis of the structure, for the following reasons: (i) in some cases the kinematic interaction may be neglected; (ii) aseismic building codes, with a few exceptions such as Eurocode 8, do not refer to it; (iii) kinematic interaction effects are more difficult to assess than inertial forces (Sêco e Pinto, 2003). There is strong evidence that for slender tall structures, structures founded on very soft soils and structures with deep foundations, the SSI plays an important role. Eurocode 8 states: "Bending moments developing due to kinematic interaction shall be computed only when two or more of the following conditions occur simultaneously: (i) the subsoil profile is of class D, S1 or S2, and contains consecutive layers with sharply differing stiffness; (ii) the zone is of moderate or high seismicity, α > 0.10; (iii) the supported structure is of importance category I or II."

The stability of footings for the ultimate limit state design criteria shall be analysed against failure by sliding and against bearing capacity failure. For shallow foundations under seismic loads, failure cannot be defined simply as the situation in which the safety factor becomes less than 1, but is related to permanent irrecoverable displacements. The seismic codes recommend checking the following inequality:

Sd ≤ Rd (3)

where Sd is the seismic design action and Rd the system design resistance. In inequality (3), partial safety factors shall be included following the recommendations of Eurocode 8. Theoretical and experimental studies to provide bearing capacity solutions that include the effect of soil inertia forces led to an inequality (Pecker, 1997) in which φ = 0 defines the equation of the bounding surface (Figure 17). A combination of loadings lying outside the surface corresponds to an unstable situation, and a combination lying inside the bounding surface corresponds to a potentially stable situation.

Figure 17. Bounding surface for cohesive soils (after Pecker, 1997).

Piles and piers shall be designed to resist the following action effects: (i) inertia forces from the superstructure; and (ii) kinematic forces resulting from the deformation of the surrounding soil due to the propagation of seismic waves. The complete solution is a 3D analysis, very time demanding, and it is not adequate for design purposes. The decomposition of the problem into steps is shown in Figure 18 and implies (Gazetas and Mylonakis, 1998): (i) the kinematic interaction, involving the response of the system to the base acceleration with the mass of the superstructure taken equal to zero; (ii) the inertial interaction, which involves the computation of the dynamic impedances at the foundation level and the dynamic response of the superstructure.

Figure 18. Soil-structure interaction problem (after Gazetas and Mylonakis, 1998).

For the computation of the internal forces along the pile, as well as of the deflection and rotation at the pile head, either discrete models (based on the Winkler spring model) or continuum models can be used (Finn and Fujita, 2004). The lateral resistance of soil layers susceptible to liquefaction shall be neglected. In general, linear behaviour is assumed for the soil. Nonlinear systems are more general, and the term nonlinearities includes both the geometric and the material nonlinearities (Pecker and Pender, 2000). The engineering approach considers two subdomains (Figure 19): i) a far-field domain, where the nonlinearities are negligible; ii) a near-field domain, in the neighbourhood of the foundation, where the effects of the geometrical and material nonlinearities are concentrated.

Figure 19. Conceptual subdomains for dynamic soil structure analyses (after Pecker and Pender, 2000).

The following effects shall be included: (i) flexural stiffness of the pile; (ii) soil reactions along the pile; (iii) pile-group effects; and (iv) the connection between pile and structure. The use of inclined piles to absorb the lateral loads of the soils is not recommended. If inclined piles are used, they must be designed to support axial as well as bending loads. Piles shall be designed to remain elastic; if this is not possible, potential plastic hinging shall be considered for: (i) a region of depth 2d (d = diameter of the pile) from the pile cap; (ii) a region of ±2d from any interface between two layers with markedly different shear stiffness (ratio of shear moduli > 6). Evidence has shown that soil confinement increases the pile ductility capacity and increases the pile plastic hinge length. Piles have shown the capability to retain much of their axial and lateral capacity even after cracking, and have experienced ductility levels up to 2.5 (Gerolymos and Gazetas, 2006). The investigation methods for pile foundation damage are direct visual inspection, the use of borehole camera inspection and the pile integrity test. The ground deformation can be investigated by visual survey and GPS survey (Matsui et al., 1997).

Comments: The following topics deserve more consideration:
i) The influence of the pile cap;
ii) The moment rotation capacity of the pile footing;
iii) The incorporation of the nonlinear behaviour of the materials in the methods of analysis;
iv) The instrumentation of piles for design purposes;
v) Some guidelines about group effects, as there are significantly different opinions on the influence of group effects related to the number of piles, spacing, direction of loads, soil types and construction methods of piles.

For the evaluation of mitigation methods, a preliminary analysis of the following solutions was performed (Evers, 2005): (i) stiffening solutions – hard layer, reinforced concrete walls, soil stiffening at foundation level and inclined piles; (ii) soft material barriers – soft layer, expanded polystyrene (EPS) walls, air-water balloons and soft caisson; (iii) oscillators. For the criteria of selection, the following factors were used: potential efficiency, technical feasibility, impact on structure and environment, cost-effectiveness and innovation. From this analysis two mitigation methods were selected: i) soil stiffening (inclined micro-piles) and ii) deformable soft barriers (soft caisson).
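For the Winkler-spring route mentioned above, a quick first estimate of the pile-head response is available in closed form; the sketch below uses Hetenyi's classical solution for a long, free-head pile on an elastic (Winkler) foundation, with arbitrary example inputs (this is an illustration, not a provision of EC8):

def lateral_head_deflection(H, EI, k):
    """Head deflection of a long, free-head pile under lateral load H.

    Hetenyi's semi-infinite beam on a Winkler foundation:
    beta = (k / (4 EI))**0.25 and y0 = 2 H beta / k, with k the
    subgrade modulus per unit length of pile (kN/m per m).
    """
    beta = (k / (4.0 * EI)) ** 0.25
    return 2.0 * H * beta / k

# Example: H = 100 kN, EI = 5.0e5 kN*m2, k = 1.0e4 kN/m2
print(lateral_head_deflection(100.0, 5.0e5, 1.0e4))  # ~0.0053 m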
4 INTERACTION WITH OTHER SEISMIC CODES

The continuous process of elaborating codes and standards that incorporate the lessons learned from earthquakes has a significant effect on design and construction in seismic areas. As an example, following the consequences of the Kobe earthquake, new improvements have been implemented for the assessment of liquefaction in the Japanese codes (Yasuda, 1999).

The actual tendency in almost every region of the world is to elaborate unified codes. This was the main purpose of the Eurocodes. In the United States, notwithstanding the long tradition of different codes in different states, related to the different geographic regions of the U.S., where the western region faces larger levels of probabilistic seismic risk than the eastern region, efforts are being undertaken to merge the UBC (Uniform Building Code) and the NEHRP (National Earthquake Hazards Reduction Program) provisions into the IBC (International Building Code), not an international code but a unified national code (Seed and Moss, 1999). In general, all geotechnical codes use factored parameters for loads, resistance and strength to harmonize with structural practice. But the way of doing so differs: for instance, the Eurocodes use partial safety factors for loads and strength (Cuellar, 1999), while in the New Zealand code LRFD (load and resistance factored design) is used (Pender, 1999). It is important to stress that the Eurocodes represent a significant step forward and allow various Nationally Determined Parameters (NDP), which should be confirmed or modified in the National Annexes.

5 FINAL REMARKS

The work performed by the Commission of the European Communities (CEC) in preparing the "Structural Eurocodes" in order to establish a set of harmonised technical rules is impressive. However, we feel that some topics deserve more consideration. Earthquakes are very complex and dangerous natural phenomena, which occur primarily in known seismic zones, although severe earthquakes have also occurred outside these zones, in areas considered to be geologically stable. As a result, regulatory agencies became more stringent in their requirements for the demonstration of adequate seismic stability, and design engineers responded by developing new and more convincing design approaches than had previously been used. Thus the past years have seen a major change in interest in, and attitude towards, this aspect of design. The lessons learned from recent earthquakes such as Northridge (1994), Kobe (1995), Umbria-Marche (1997), Kocaeli (1999), Athens (1999), Chi-Chi (1999) and Bhuj (2001) have provided important observational data related to the seismic behavior of geotechnical structures.

The need for cost-effective methods to upgrade buildings by developing new specific foundation techniques is a major problem. So the objective of reducing the earthquake motion transferred to the structure through the foundation, by developing innovative construction techniques for soil improvement and soil reinforcement, is receiving increasing attention. Some very important questions to be discussed are: (i) how detailed must a seismic code be? (ii) how time-consuming is it to establish a set of harmonised technical rules for design and construction works? (iii) how can the relations between the users, relevant authorities, clients and designers, be improved? and (iv) how can it be implemented in practice that codes may not cover in detail every possible design situation, and may require specialised engineering judgement and experience? It is hoped that the contributions to be presented by CEN members in the next years will help to clarify several questions that still remain without answer.

It is important to notice that true innovators have a mantra: they are constantly daring to make things better. They challenge the commonly accepted. They see no limits. We should not forget that growth, evolution and reinvention sustain life. So we need to keep challenging ourselves to think better, do better and be better. Confront our limitations. Failure is a gift anyway: it takes us closer to our dreams, equips us with more knowledge and toughens us up. Success and failure go hand in hand. In dealing with the Eurocodes we should not forget 4 lessons:
Improve: always be getting better;
Observe: we need to keep our eyes open to absorb the changes;
Connect: we need to receive different inputs;
Adapt: the conditions are different, so we need to keep monitoring the process.

In dealing with this subject we should always have in mind:

All for Love
"Errors, like straws, upon the surface flow;
He who would search for pearls must dive below."
(John Dryden)

REFERENCES

Abrahamson, N.A. (2000) "State of the practice of seismic hazard evaluation". GEOENG 2000, Melbourne, Vol. 1, pp. 659–685.
Alba, P. (2004) "Residual strength after liquefaction: a rheological approach". Proc. of the 3rd International Conference on Earthquake Geotechnical Engineering, Berkeley, Editors D. Doolin, A. Kammerer, T. Nogami, R.B. Seed and I. Towhata, Vol. 1, pp. 740–746.
Ambraseys, N.N. (1988) "Engineering seismology". Earthquake Engineering and Structural Dynamics, Vol. 17, pp. 1–105.
Anastassopoulos, I. and Gazetas, G. (2006) "Design of foundations and structures against fault displacement". ETC 12 Workshop, Athens.
Bray, J., Sancio, R.B., Riemer, M. and Durgunoglu, H.T. (2004) "Liquefaction susceptibility of fine grained soils". Proc. of the 3rd International Conference on Earthquake Geotechnical Engineering, Berkeley, Editors D. Doolin, A. Kammerer, T. Nogami, R.B. Seed and I. Towhata, Vol. 1, pp. 655–662.
Cetin, K.O., Seed, R.B. and Der Kiureghian, A. (2001) "Reliability based assessment of seismic soil liquefaction initiation". XV ICSMGE TC4 Satellite Conference on Lessons Learned from Recent Strong Earthquakes, pp. 327–332. Edited by Atilla Ansal.
Cuellar, V. (1999) "Codes and standards for Europe". Proc. of the Second International Conference on Earthquake Geotechnical Engineering. Edited by Pedro S. Sêco e Pinto. Published by Balkema, Vol. 3, pp. 1129–1133.
EN 1990 Eurocode 0 – "Basis of structural design".
EN 1991 Eurocode 1 – "Actions on structures".
EN 1997 Eurocode 7 – "Geotechnical design".
EN 1998 Eurocode 8 – "Design of structures for earthquake resistance".
Evers, G. (2005) "Horizontal shaking mitigation implementation". NEMISREF Seminar, CD-Rom, Athens.
Evers, G. (2005) "Liquefaction mitigation implementation". NEMISREF Seminar, CD-Rom, Athens.
Finn, W.D. and Fujita, N. (2004) "Behavior of piles in liquefiable soils during earthquakes: analysis and design issues". Proc. of the Fifth Case Histories in Geotechnical Engineering Conference, New York, NY, SOAP, pp. 16.
Furuta and Yamamoto (2000) "Liquefaction assessment by shear wave crosshole tomography tests". Paper no. 831, 12th WCEE, Auckland, New Zealand.
Gazetas, G. and Mylonakis, G. (1998) "Seismic soil structure interaction: new evidence and emerging issues". Geotechnical Earthquake Engineering and Soil Dynamics, ASCE II, pp. 1119–1174.
Gerolymos, G. and Gazetas, G. (2006) "Capacity design of pile foundations: evidence in support of allowing pile yielding". ETC 12 Workshop, Athens.
Idriss, I.M. (1990) "Response of soft soil sites during earthquakes". Proc. H. Bolton Seed Memorial Symposium, pp. 273–290.
Idriss, I.M. and Boulanger, R.W. (2004) "Semi-empirical procedures for evaluating liquefaction potential during earthquakes". Proc. of the Fifth Case Histories in Geotechnical Engineering Conference, New York, NY, SOAP, pp. 16.
Ishihara, K. (1985) "Stability of natural deposits during earthquakes". Proc. 11th ICSMGE, S. Francisco, Vol. 2, pp. 321–376.
Ishihara, K., Yasuda, S. and Yoshida, Y. (1990) "Liquefaction induced flow failure of embankments and residual strength of silty sands". Soils and Foundations, Vol. 30, no. 3, pp. 69–80.
Ishihara, K. (1993) "Liquefaction and flow failure during earthquakes". 33rd Rankine Lecture, Geotechnique, 43(3), pp. 351–415.
Kokusho, T. and Matsumoto, M. (1997) "Nonlinear site response during the Hyogoken-Nanbu earthquake recorded by vertical arrays in view of seismic zonation methodology". Proc. of the Discussion Special Technical Session on Earthquake Geotechnical Engineering during 14th ICSMFE, Hamburg, Edited by Pedro S. Sêco e Pinto. Published by Balkema, pp. 61–69.
Matsui, Y., Kitazawa, M., Nanjo, A. and Yasuda, F. (1997) "Investigation of damaged foundations in the Great Hanshin earthquake". Proc. of the Discussion Special Technical Session on Earthquake Geotechnical Engineering during 14th ICSMFE, Hamburg, Edited by Pedro S. Sêco e Pinto. Published by Balkema, pp. 235–242.
NCEER (1997) Proc. NCEER Workshop on Evaluation of Liquefaction Resistance of Soils, Summary Report, Edited by T. Leslie Youd and I.M. Idriss, National Center for Earthquake Engineering Research, University at Buffalo, Technical Report NCEER-97-0022.
Paolucci, R. (2006) "Numerical investigation of 3D seismic amplification by real steep topographic profiles and check of the EC8 topographic amplification coefficients". ETC 12 Workshop, Athens.
Pecker, A. and Pender, M.J. (2000) "Earthquake resistant design of foundations: new construction". GEOENG 2000, Melbourne, Vol. 1, pp. 313–332.
Pecker, A. (1997) "Analytical formulae for the seismic bearing capacity of shallow strip foundations". Proc. of the Discussion Special Technical Session on Earthquake Geotechnical Engineering during 14th ICSMFE, Hamburg, Edited by Pedro S. Sêco e Pinto. Published by Balkema, pp. 262–268.
Pender, M.J. (1999) "Geotechnical earthquake engineering design practice in New Zealand". Proc. of the Second International Conference on Earthquake Geotechnical Engineering. Edited by Pedro S. Sêco e Pinto. Published by Balkema, Vol. 3, pp. 1129–1133.
Robertson, P.K., Woeller, D.J. and Finn, W.D.L. (1992) "Seismic cone penetration test for evaluating liquefaction under cyclic loading". Canadian Geotechnical Journal, Vol. 29, pp. 686–695.
Sêco e Pinto, P.S. (1999) "The relation between Eurocode 8 and Eurocode 7". Proc. of the Twelfth European Conference on Soil Mechanics and Geotechnical Engineering, Amsterdam, Vol. 3, pp. 2223–2228. Edited by F.B.J. Barends, J. Lindenberg, H.J. Luger, L. de Quelerit and A. Verruit. Published by A.A. Balkema.
Sêco e Pinto, P. (2003) "Seismic behaviour of geotechnical structures". Inaugural lecture, Proc. 13th Regional African Conference of Soil Mechanics and Geotechnical Engineering, Marrakech, pp. 3–24. Edited by M. Sahli, L. Bahi and R. Khalid.
Seed, H.B. and Idriss, I.M. (1982) "Ground motions and soil liquefaction during earthquakes". Earthquake Engineering Research Institute, Oakland, California.
Seed, H.B., Tokimatsu, K., Harder, L. and Chung, R. (1984) "The influence of SPT procedures in soil liquefaction resistance evaluations". Earthquake Engineering Research Centre Report no. 84/15, U.C. Berkeley.
Seed, R.B., Cetin, K.O. and Moss, R.E.S. (2001) "Recent advances in soil liquefaction hazard assessment". XV ICSMGE TC4 Satellite Conference on Lessons Learned from Recent Strong Earthquakes, Istanbul, pp. 319–326. Edited by Atilla Ansal.
Seed, R.B., Chang, S.W., Dickenson, S.E. and Bray, J.B. (1997) "Site dependent seismic response including recent strong motion data". Proc. of the Discussion Special Technical Session on Earthquake Geotechnical Engineering during 14th ICSMFE, Hamburg, Edited by Pedro S. Sêco e Pinto. Published by Balkema, pp. 125–134.
Seed, H.B. and Harder, L.F. (1990) "SPT-based analysis of cyclic pore pressure generation and undrained residual strength". Proc. of the H. Bolton Seed Memorial Symposium, Vol. 2, pp. 351–376.
Seed, R.B. and Moss, R.E.S. (1999) "Recent advances in U.S. codes and policy with regard to seismic geotechnics". Proc. of the Second International Conference on Earthquake Geotechnical Engineering. Edited by Pedro S. Sêco e Pinto. Published by Balkema, Vol. 3, pp. 1111–1116.
TC4 (ISSMGE) (1999) "Manual for Zonation on Seismic Geotechnical Hazards" (revised version).
TC4 (ISSMGE) (2001) "Case Histories of Post-Liquefaction Remediation". Committee on Earthquake Geotechnical Engineering.
Tokimatsu, K. and Seed, H.B. (1987) "Evaluation of settlements in sands due to earthquake shaking". JGE, ASCE, 113, pp. 861–878.
UBC (1997) "Uniform Building Code". International Conference of Building Officials, Whittier, California, Vol. II.
Yasuda, S. (1999) "Seismic design codes for liquefaction in Asia". Proc. of the Second International Conference on Earthquake Geotechnical Engineering. Edited by Pedro S. Sêco e Pinto. Published by Balkema, Vol. 3, pp. 1117–1122.
66
Special sessions

Reliability benchmarking
Reliability analysis of a benchmark problem for 1-D consolidation

J.Y. Ching National Taiwan University, Taipei, Taiwan
K.-K. Phoon National University of Singapore, Singapore
Y.-H. Hsieh National Taiwan University of Science and Technology, Taipei, Taiwan
ABSTRACT: This paper presents the analysis results of a reliability benchmark problem for one-dimensional consolidation. The peculiarity of this example is that there are two local solutions for the design point, depending on the initial point of the search algorithm. One of the local design points is the global solution of the first-order reliability method, while the other is a fake one. Unfortunately, it turns out to be relatively easy to find the fake solution. This phenomenon of multiple design points is studied and documented in detail in this paper. The analysis results obtained with other reliability methods are also presented for comparison. Finally, recommendations are given on suitable reliability methods.

1 INTRODUCTION
The purpose of this benchmark example is to examine the robustness of various reliability methods for problems with non-differentiable performance functions, which is clearly the case for a one-dimensional consolidation problem where switching between the normally consolidated (NC) and overconsolidated (OC) regimes is possible.
1.1 Description of the problem
Consider a saturated clay layer of thickness H between two sand layers, as shown in Figure 1. The surcharge pressure at the ground surface is q. The clay is lightly overconsolidated with an uncertain OCR. The compression index Cc and recompression index Cr of the clay are also uncertain. It is assumed that the recompression index is an uncertain fraction α of the compression index:

Cr = α · Cc (1)
Figure 1. The consolidation problem.
The consolidation settlement S of the clay layer is

S = [H/(1 + eclay)] Cr log10(σ/σ0) for σ ≤ σp
S = [H/(1 + eclay)] [Cr log10(σp/σ0) + Cc log10(σ/σp)] for σ > σp

where eclay is the initial void ratio of the clay; σ0 and σ are the effective consolidation stresses before and after the surcharge is applied; σp = OCR · σ0 is the preconsolidation stress.
Failure is defined as the consolidation settlement exceeding a prescribed allowable settlement Sallow. In other words, the performance function g can be written as (g < 0 defines failure):

g = Sallow − S (2)

The effective stress after consolidation is σ = σ0 + q, and the initial effective stress σ0 is evaluated at the mid-depth of the clay layer from the overburden of the profile in Figure 1, using the moist unit weight γ above the water table and the buoyant unit weight γsat − γw below it; γw = 9.8 kN/m3 is the unit weight of water.
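Because the settlement expression switches between the OC and NC branches at σ = σp, g is continuous but not differentiable there. A minimal sketch of the performance function is given below; note that σ0 is passed in as a plain number, since the overburden computation depends on details of the profile in Figure 1 that are not reproduced here, so the value used in the demonstration call is an assumption.

```python
import numpy as np

def settlement(q, H, e_clay, OCR, Cc, alpha, sigma0):
    """Consolidation settlement (m) of the clay layer.

    sigma0 is the initial effective stress at the clay mid-depth;
    it is treated as a given input here (assumed value in the demo).
    """
    Cr = alpha * Cc                # recompression index, Eq. (1)
    sigma_p = OCR * sigma0         # preconsolidation stress
    sigma = sigma0 + q             # effective stress after consolidation
    if sigma <= sigma_p:           # stays in the OC (recompression) regime
        return H / (1.0 + e_clay) * Cr * np.log10(sigma / sigma0)
    # crosses into the NC regime: recompression up to sigma_p, then virgin
    return H / (1.0 + e_clay) * (Cr * np.log10(sigma_p / sigma0)
                                 + Cc * np.log10(sigma / sigma_p))

def g(q, H, e_clay, OCR, Cc, alpha, sigma0, S_allow=0.05):
    """Performance function of Eq. (2); g < 0 defines failure."""
    return S_allow - settlement(q, H, e_clay, OCR, Cc, alpha, sigma0)

# evaluation at the Table 1 mean values, with an assumed sigma0 of 40 kPa
print(g(q=20.0, H=4.0, e_clay=1.2, OCR=2.0, Cc=0.4, alpha=0.15, sigma0=40.0))
```

At the mean values this gives a factor of safety Sallow/S of the same order as the value of about 2.1 reported in Section 1.4; the exact number depends on the assumed σ0.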
The moist and saturated soil unit weights are not independent, because they are related to the specific gravity of the soil solids Gs and the void ratio e:

γsat = [(Gs + e)/(1 + e)] γw
γ = [(Gs + S · e)/(1 + e)] γw

where a degree of saturation S = 20% is assumed for "moist". There are nine independent random variables in this problem, including q, H, eclay, esand, Gs^clay, Gs^sand, OCR, Cc and α.

1.2 Input variables

The values and distributions of the basic input variables are summarized in Table 1.

Table 1. The values and distributions of the input variables.

Variable | Distribution | Statistics
Sallow | Deterministic | 0.05 m
q | Lognormal | mean = 20 kN/m2; cov* = 20%
H | Gaussian | mean = 4 m; cov = 10%
eclay | Lognormal | mean = 1.2; cov = 15%
esand | Lognormal | mean = 0.8; cov = 15%
Gs^clay | Uniform | [2.5, 2.7]
Gs^sand | Uniform | [2.5, 2.7]
OCR | Uniform | [1.5, 2.5]
Cc | Lognormal | mean = 0.4; cov = 25%
α | Uniform | [0.1, 0.2]

* "cov" stands for coefficient of variation.

1.3 Solution methods

The reliability methods employed in this study include the First-Order Second-Moment method (FOSM), the First-Order Reliability Method (FORM) (Ang and Tang 1984), the Second-Order Reliability Method (SORM) (Der Kiureghian and De Stefano 1991), direct Monte Carlo simulation (MCS) and Subset simulation (Subsim) (Au and Beck 2001). The FOSM method is implemented with the assumption that the performance function g is Gaussian. FORM is implemented with the gradient projection algorithm (Liu and Der Kiureghian 1991) for the search of the design point; SORM is implemented with the algorithm developed by Der Kiureghian and De Stefano (1991).

1.4 Results of reliability analyses

Table 2 summarizes the analysis results of the adopted reliability methods.

Table 2. Primary analysis results.

Solution method | FOSM | FORM | SORM | Subsim | MCS
β | 2.94 | 1.43 / 2.15* | 1.55 / 2.29* | 1.53 | 1.52
PF estimate | 0.0017 | 0.076 / 0.016* | 0.061 / 0.011* | 0.063 | 0.064
% error in PF estimate | −97.4 | 19.3 / −75.3* | −4.7 / −82.6* | −0.8 | –
# of g evaluations | 19 | 152 | 261 | 1900 | 10^6
Estimator cov | n/a | n/a | n/a | 10.2% | 0.4%

* Multiple design point solutions.

The factor of safety of this problem, defined as Sallow divided by the settlement when all uncertain variables are fixed at their mean values, is around 2.1. The reliability index β for FORM is the distance of the obtained design point from the origin. For the other methods, it is simply taken to be −Φ⁻¹(PF), where PF denotes the failure probability estimate and Φ is the cumulative distribution function (CDF) of the standard Gaussian distribution. For Subsim, 1000 samples are taken in each stage. FOSM, FORM and SORM are analytical methods, so their PF estimators are deterministic. MCS and Subsim, however, are simulation methods: their PF estimators are random, and the coefficients of variation (cov) of their estimates are listed in the table. The cov for MCS is calculated from the formula cov = [(1 − PF)/(PF · n)]^0.5, where n = 10^6 is the total number of MCS samples. The cov for Subsim is estimated from the PF estimates of 100 independent Subsim runs. The % error in PF estimate is the percentage error compared to the MCS solution, which should be very close to the actual failure probability judging from its 0.4% cov.

In spite of its high efficiency, FOSM significantly underestimates PF. The poor performance of FOSM may be due to its simplified assumption on the distribution of the g function and to the inaccuracy of the linearization of the g function. For FORM, the adopted gradient projection (GP) algorithm yields two local solutions in the standard Gaussian space: the first one gives a reliability index of 1.43, and the second one gives a reliability index of 2.15. It is clear that the first solution is the global solution because it is closer to the origin. This global solution corresponds to a failure probability of 0.076, which is reasonably close to the MCS solution of 0.064. Whether the GP algorithm converges to the global solution depends on the location of the initial trial point in the standard Gaussian space. Unfortunately, when the initial trial point is randomly generated from the standard Gaussian distribution, there is only about a 20% chance of converging to the global solution: 80% of the time the algorithm converges to the fake solution. In particular, when the initial trial point is taken to be the origin of the standard Gaussian space, the GP algorithm converges to the fake solution. SORM uses the solution from FORM, and therefore also leads to two solutions.
The first SORM solution of PF = 0.061 follows from the global solution of FORM, while the second solution of PF = 0.011 follows from the fake solution of FORM. Note that the first SORM solution is fairly close to the MCS solution of 0.064, indicating that SORM indeed improves on FORM for this case study if the global solution can be found. However, SORM suffers from the same issue of multiple local design points because it uses the results from FORM. The PF estimate made by Subsim is quite accurate, although the required computation is more than that for FOSM, FORM and SORM.

Table 3. Coordinates of the two local design points in the standard Gaussian space.

Component | Global solution (β = 1.43, PF = 0.076) | Fake solution (β = 2.15, PF = 0.016)
q | 0.82 | 0.97
H | −0.05 | 0.39
eclay | 0.04 | −0.32
esand | 0.14 | 0.16
Gs^clay | −0.10 | −0.12
Gs^sand | −0.09 | −0.10
OCR | −1.09 | 0.00
Cc | 0.35 | 1.59
α | 0.18 | 0.92
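Since the FORM reliability index is just the distance of the design point from the origin, the β values quoted in Table 3 can be checked directly from the tabulated coordinates:

```python
import numpy as np

# design-point coordinates in the standard Gaussian space (Table 3)
z_global = np.array([0.82, -0.05, 0.04, 0.14, -0.10, -0.09, -1.09, 0.35, 0.18])
z_fake   = np.array([0.97,  0.39, -0.32, 0.16, -0.12, -0.10,  0.00, 1.59, 0.92])

for name, z in [("global", z_global), ("fake", z_fake)]:
    # ~1.43 and ~2.15, up to the rounding of the tabulated coordinates
    print(name, round(float(np.linalg.norm(z)), 2))
```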
2 ISSUE OF MULTIPLE DESIGN POINTS

This section discusses in detail the issue of multiple local solutions (design points). This issue only affects the analysis results of FORM and SORM; MCS and Subsim are robust against it.

2.1 Multiple solutions

Table 3 lists the coordinates of the two local solutions obtained with the GP algorithm. After verification, it is found that both solutions satisfy the following two necessary conditions for a design point:
a. The solution should reside right on the limit-state line g = 0.
b. In the standard Gaussian space, the gradient vector of the performance function evaluated at the solution should be parallel to the solution vector itself, i.e. the solution is the point on the g = 0 line that is locally closest to the origin.
Moreover, from the numerical values of the coordinates of the two solutions, it is not trivial to identify which solution is global and which is fake. As a consequence, when a single run of the GP algorithm converges to the fake solution, there seems to be no viable way at hand to detect the falsity.

2.2 Non-differentiability of performance function

Figure 2 is employed to demonstrate the geometry around these two local design points. The first plot shows the contour lines of the performance function g in the standard Gaussian space of q and OCR, where all other seven uncertain parameters are fixed at their coordinates of the global solution. The second plot is for the fake solution. The marker 'x' indicates the locations of the two local design points. The dark dashed lines represent the non-differentiable boundary σ = σp. It is clear that the existence of the two local design points has to do with the non-differentiability of the performance function. Moreover, the failure region in the standard Gaussian space seems to be the union of the failure regions defined by the following two performance functions:

g1 = Sallow − [H/(1 + eclay)] [Cr log10(σp/σ0) + Cc log10(σ/σp)] (7)

and

g2 = Sallow − [H/(1 + eclay)] Cr log10(σ/σ0) (8)

i.e. the NC and OC branches of (2), each applied over the whole space.

Figure 2. The contour lines of the performance function around the two solutions (upper: global solution; lower: fake solution).

Figure 3. The failure regions defined by the g1 and g2 functions around the second local design point.

Let us take the region around the second local design point (the lower plot in Figure 2) as an example. The region in the lower plot of Figure 2 satisfying g < 0 (failure) resides on the right-hand side of the g = 0 contour line, which is clearly the union of the two failure regions defined by g1 < 0 and g2 < 0 shown in Figure 3. In other words, the consolidation problem is similar to an in-series system whose failure is defined by the disjunction of the two failure events, i.e. failure occurs when either g1 or g2 fails. It is interesting to see that the original problem in (2) is not in-series, but it behaves like an in-series system due to the non-differentiability of the performance function.
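The sensitivity of the converged design point to the starting location can be reproduced with a generic constrained optimiser. The sketch below is a stylised two-dimensional stand-in, not the consolidation problem itself: a non-differentiable performance function is built as the minimum of two linear modes (mirroring the union of (7) and (8)), and the design point is searched from random starting points; the mode coefficients are arbitrary illustration values.

```python
import numpy as np
from scipy.optimize import minimize

def g(z):
    """Non-differentiable g: the failure region is the union of two modes."""
    g1 = 1.5 - z[0]                      # first (closer) failure mode
    g2 = 2.2 - 0.3 * z[0] - 0.95 * z[1]  # second (farther) failure mode
    return min(g1, g2)

def design_point(z0):
    """min ||z||^2 subject to g(z) = 0, started from z0."""
    res = minimize(lambda z: z @ z, z0, method="SLSQP",
                   constraints={"type": "eq", "fun": g})
    return res.x

rng = np.random.default_rng(1)
found = {tuple(np.round(design_point(rng.standard_normal(2)), 2))
         for _ in range(20)}
print(found)   # typically two distinct local design points are returned
```

Depending on the draw, the optimiser lands on either mode's locally closest point, which is exactly the multi-start behaviour reported in Table 4.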
2.3 Other design point algorithms

Apart from the GP algorithm, two other algorithms for finding the FORM design point are examined: the fmincon.m function in Matlab and the simple algorithm in Section 6.2.4, p. 361 of Ang and Tang (1984). Table 4 summarizes the convergence results. All algorithms may converge to the fake solution, depending on the location of the initial trial point. When the initial trial point is randomly generated from the standard Gaussian distribution, there is always a larger chance of converging to the fake solution. When the initial trial point is taken to be the origin, only the Matlab algorithm converges to the global solution: it is likely that this happens only by luck. Furthermore, there is some chance of non-convergence, i.e. the algorithm never reaches a stationary point.

Table 4. Convergence for the three algorithms.

Algorithm | Fake solution | Global solution | No convergence | Convergence if starting from the origin
GP | 78% | 15% | 7% | Fake solution
Matlab | 70% | 28% | 2% | Global solution
Ang and Tang | 83% | 17% | 0% | Fake solution

3 SENSITIVITY ANALYSIS

It is instructive to understand how the existence of multiple local design points for FORM/SORM is affected by changes in the input parameters and distributions. Again, this issue of multiple local design points only affects FORM/SORM; MCS and Subsim are robust against it.

3.1 Sensitivity over mean value of q

Table 5 shows the analysis results obtained with the GP algorithm for various choices of the mean value of q, denoted by µq, while the input values and distributions of the other variables are identical to those in Table 1. The chance of non-convergence is based on the case where the initial trial point is randomly generated from the standard Gaussian distribution. It is clear that the existence of multiple solutions only happens for µq near 20. Moreover, the chance of non-convergence increases as the failure probability becomes smaller. Compared with the MCS results, the PF estimated from the global solution of FORM is reasonably accurate, although a bias is noticeable.

Table 5. Analysis results for various choices of µq. The 'Fake solut.' and 'Global solut.' columns give the chance of convergence, with the PF estimate from FORM in parentheses.

µq (kN/m2) | Fake solut. | Global solut. | No converge | MCS PF (n = 10^6) | PF 2-order bound | PF point est.
5 | 0% (n/a) | 23% (6.2e-9) | 77% | 0 | [6.2e-9, 6.2e-9] | 6.2e-9
10 | 0% (n/a) | 66% (6.1e-5) | 34% | 5.6e-5 | [6.1e-5, 6.1e-5] | 6.1e-5
20 | 78% (0.016) | 15% (0.076) | 7% | 0.064 | [0.083, 0.087] | 0.085
30 | 0% (n/a) | 91% (0.35) | 9% | 0.37 | [0.36, 0.40] | 0.39
50 | 0% (n/a) | 100% (0.85) | 0% | 0.88 | [0.91, 1.00] | 0.95

3.2 Sensitivity over range of OCR

Table 6 shows the analysis results obtained with the GP algorithm for various choices of the range of OCR, while the input values and distributions of the other variables are identical to those in Table 1. It is clear that the existence of multiple solutions happens for the ranges 0–10 and 1.5–2.5. The chance of non-convergence is reasonably small regardless of the OCR range. Compared with the MCS results, the PF estimated from the global solution of FORM is reasonably accurate despite a certain amount of bias.
Table 6. Analysis results for various choices of OCR range. The 'Fake solut.' and 'Global solut.' columns give the chance of convergence, with the PF estimate from FORM in parentheses.

OCR range | Fake solut. | Global solut. | No converge | MCS PF (n = 10^6) | PF 2-order bound | PF point est.
0–10 | 76% (0.016) | 16% (0.15) | 8% | 0.16 | [0.16, 0.16] | 0.16
0–1.5 | 0% (n/a) | 100% (0.94) | 0% | 0.95 | [0.92, 0.92] | 0.92
1.5–2.5 | 78% (0.016) | 15% (0.076) | 7% | 0.064 | [0.083, 0.087] | 0.085
2.5–5 | 0% (n/a) | 98% (0.016) | 2% | 0.011 | [0.016, 0.016] | 0.016
5–10 | 0% (n/a) | 93% (0.016) | 7% | 0.011 | [0.016, 0.016] | 0.016

3.3 Sensitivity over Sallow

Table 7 shows the analysis results obtained with the GP algorithm for various choices of Sallow, while the input values and distributions of the other variables are identical to those in Table 1. It is clear that multiple solutions occur in most cases. The chance of non-convergence increases as the failure probability gets small. Compared with the MCS results, the PF estimated from the global solution of FORM is reasonably accurate despite a certain amount of bias.

Table 7. Analysis results for various choices of Sallow. The 'Fake solut.' and 'Global solut.' columns give the chance of convergence, with the PF estimate from FORM in parentheses.

Sallow (cm) | Fake solut. | Global solut. | No converge | MCS PF (n = 10^6) | PF 2-order bound | PF point est.
1 | 0% (n/a) | 100% (0.98) | 0% | 0.99 | [0.99, 1.00] | 0.99
3 | 10% (0.15) | 89% (0.24) | 1% | 0.28 | [0.29, 0.34] | 0.33
5 | 78% (0.016) | 15% (0.076) | 7% | 0.064 | [0.083, 0.087] | 0.085
10 | 19% (8.2e-6) | 39% (8.4e-3) | 42% | 4.9e-3 | [8.4e-3, 8.4e-3] | 8.4e-3
15 | 0% (n/a) | 29% (7.2e-4) | 71% | 3.7e-4 | [7.2e-4, 7.2e-4] | 7.2e-4

4 POSSIBLE REMEDY FOR FORM/SORM

As mentioned earlier, the non-in-series system defined in (2) behaves similarly to an in-series system. Therefore, a possible way to resolve the issue of multiple local solutions of FORM/SORM is to replace the original system by the in-series system. For an in-series system with the two failure modes g1 and g2 described in (7) and (8), two methods can be applied: (a) the second-order bound (Ang and Tang 1984) and (b) the point estimate (Mendell and Elston 1974; Phoon 2008). These two methods are briefly reviewed herein. Outside the FORM/SORM framework, the issue of multiple local design points can be easily handled by adopting either MCS or Subsim.

4.1 Second-order bound

The second-order bound simply says that

P1 + P2 − P21+ ≤ PF ≤ P1 + P2 − P21−

where PF is the system failure probability; Pi = Φ(−||zi*||) and zi* is the design point for the i-th failure mode: e.g. z1* is the design point if g1 in (7) is the only performance function (and similarly for z2*), so the process of finding z1* or z2* does not suffer from the issue of multiple solutions; P21+ = P(B1) + P(B2); P21− = max[P(B1), P(B2)]; and

P(B1) = Φ(−||z1*||) · Φ[−(||z2*|| − ρ||z1*||)/√(1 − ρ²)]

and

P(B2) = Φ(−||z2*||) · Φ[−(||z1*|| − ρ||z2*||)/√(1 − ρ²)]

where ρ = cos θ and θ is the angle between the two design point vectors z1* and z2*. The resulting upper and lower bounds for the cases studied in the previous section are listed in Tables 5–7. Although the issue of multiple solutions is resolved (because the process of finding z1* or z2* does not involve multiple solutions), a bias of the bounds can be observed, i.e. some MCS results fall outside the bounds.
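A compact sketch of the bound computation is given below, assuming the bivariate-normal conditional form of P(B1) and P(B2) written above; the two design-point vectors in the demonstration call are illustrative values, not results from the paper.

```python
import numpy as np
from scipy.stats import norm

def second_order_bounds(z1, z2):
    """Second-order bounds on the two-mode series-system failure
    probability from the single-mode design points z1*, z2*.
    Assumes the design points are not collinear (|rho| < 1)."""
    b1, b2 = np.linalg.norm(z1), np.linalg.norm(z2)
    rho = float(z1 @ z2) / (b1 * b2)          # rho = cos(theta)
    P1, P2 = norm.cdf(-b1), norm.cdf(-b2)
    PB1 = P1 * norm.cdf(-(b2 - rho * b1) / np.sqrt(1 - rho**2))
    PB2 = P2 * norm.cdf(-(b1 - rho * b2) / np.sqrt(1 - rho**2))
    lower = P1 + P2 - (PB1 + PB2)             # uses P21+ = P(B1) + P(B2)
    upper = P1 + P2 - max(PB1, PB2)           # uses P21- = max[P(B1), P(B2)]
    return lower, upper

# illustrative design points (not values from the paper)
print(second_order_bounds(np.array([1.2, 0.8, 0.0]), np.array([0.3, 1.9, 0.5])))
```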
4.2 Point estimate

The second-order bounds do not offer a single estimate of the system failure probability. The following point estimate can mitigate this issue:

PF = P1 + P2 − P12

where PF is the system failure probability and P12 is an estimate of the probability that the g1 and g2 failure events happen simultaneously, computed from the two design points z1* and z2* and the angle between them (Mendell and Elston 1974; Phoon 2008). The resulting point estimates are also listed in Tables 5–7. Again, although the issue of multiple solutions is resolved, a bias of the point estimates is evident.

5 RECOMMENDED RELIABILITY METHODS

For the consolidation problem, FOSM does not provide a consistent reliability estimate. Moreover, the following two issues exist in FORM/SORM: (a) the possibility of finding a non-global (fake) design point and (b) the possibility of non-convergence. The second-order bound and point estimate methods seem able to resolve the first issue, although they cannot remove the bias in the reliability estimate; they cannot resolve the second issue. Therefore, the use of the latter two methods is recommended on the condition that the second issue does not arise and the amount of bias is acceptable. Monte Carlo simulation and Subset simulation are completely robust against the existence of local design points. Therefore, their use is recommended at the expense of more computation.

6 CONCLUSION

The reliability analysis of a one-dimensional consolidation problem is presented. This "simple" geotechnical problem is challenging because the performance function is not differentiable. It is found that this non-differentiability induces a behavior similar to an in-series system, although the consolidation problem is clearly not an in-series system. Several reliability methods are implemented to investigate their feasibility and consistency. The First-Order Second-Moment (FOSM) method provides inconsistent reliability estimates. For the First-Order and Second-Order Reliability Methods (FORM/SORM), three algorithms for finding design points are examined, and all of them show the possibility of converging to a local design point that is different from the global one, hence giving inconsistent reliability estimates; sometimes they do not converge at all. The second-order bound and point estimate methods can resolve the issue of multiple local design points for FORM/SORM and can be used if convergence is not an issue and the amount of bias in the reliability estimate is acceptable. Monte Carlo simulation (MCS) and Subset simulation (Subsim) are robust and always provide consistent reliability estimates; they are highly recommended because they seem completely robust against the existence of local design points.

REFERENCES

Ang, A. H.-S. & Tang, W. H. 1984. Probability Concepts in Engineering Planning and Design, Vol. II. New York: Wiley.
Au, S. K. & Beck, J. L. 2001. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics, 16(4), 263–277.
Der Kiureghian, A. & De Stefano, M. 1991. Efficient algorithm for second-order reliability analysis. ASCE Journal of Engineering Mechanics, 117(12), 2904–2923.
Liu, P. L. & Der Kiureghian, A. 1991. Optimization algorithms for structural reliability. Structural Safety, 9(3), 161–177.
Mendell, N. R. & Elston, R. C. 1974. Multifactorial qualitative traits: genetic analysis and prediction of recurrence risks. Biometrics, 30, 41–57.
Phoon, K. K. 2008. Reliability-Based Design in Geotechnical Engineering: Computations and Applications. Taylor & Francis.
Study on determination of partial factors for geotechnical structure design

T.C. Kieu Le & Y. Honjo Department of Civil Engineering, Gifu University, Gifu, Japan
ABSTRACT: Some aspects of design verification equations in geotechnical design are discussed, such as the format of the verification formula and the effectiveness of the Baseline Approach in the determination of characteristic values. The Design Value Method (DVM) combined with First-Order Reliability Method (FORM) analysis has been used as a standard procedure to determine partial factors in the past. However, the method encounters difficulties when applied to determine partial factors for some geotechnical structures: (i) some of the basic variables are included both in resistance and load; (ii) the performance functions are highly non-linear. In order to overcome these difficulties in code calibration by the traditional method, a procedure named McDeva, which keeps the philosophy of the DVM and uses the effectiveness of Markov Chain Monte Carlo simulation (MCMC), is proposed to determine load and resistance factors. The method is applied to determine load and resistance factors in retaining wall design to illustrate its features.
1 INTRODUCTION

The primary purpose of structural design codes is to provide a simple, safe and economically efficient basis for the design of ordinary structures under loading, operational and environmental conditions. There are many sources of uncertainty, including physical, statistical and modeling uncertainties, that may impact the performance of a structure. In traditional design practice, for example Allowable Stress Design (ASD), all uncertainties are embedded within a single factor of safety. This factor of safety depends on past experience and does not give a rational basis for the treatment of uncertainties in design. As opposed to ASD, Level 1 Reliability-Based Design (RBD) is more flexible and rational because it provides consistent levels of safety over various types of structural components by applying several factors to characteristic values in the verification formula to treat the uncertainties involved in the structural design.

In Level 1 RBD, the design verification formula formats are classified into two categories: (i) the Material Factor Approach (MFA) and (ii) Load and Resistance Factor Design (LRFD). The basic philosophy of MFA is to treat the uncertainties at their origin. However, it is very difficult to evaluate the reliability of a structure by reliability analysis based on the superposition of all uncertainty sources. In LRFD, the resistance and the load (external action) are first calculated based on the characteristic values of the basic variables; load and resistance factors are then applied to the resulting resistance and load components to assure an appropriate safety margin. LRFD is considered a better format than MFA for geotechnical engineering design codes, at least in the present situation, for the following reasons:

– Especially on the resistance side, the design calculation based on LRFD predicts the most likely behavior of the structure up to the last stage of design. This coincides with the philosophy that a designer should keep track of the most likely behavior of the structure to the last stage of the design as much as possible.
– In geotechnical design, where the interaction between a structure and the ground is so strong, it is not possible to know whether discounting a material parameter value results in a safe design of the structure. This is especially true for sophisticated design calculation methods such as FEM.

Another important issue in Level 1 RBD is the choice of characteristic values. It is quite conventional to choose characteristic values of load and resistance as very high/low fractile values, such as 95%/5%, based on the so-called Baseline Approach (e.g. Phoon et al., 2003). In order to check the effectiveness of this approach, the authors have carried out some calculations and found that the advantage of the Baseline Approach is much more obvious in determining the load factor than the resistance factor. It is recommended to take a high fractile value as the characteristic value of the load; however, a low fractile value need not be chosen as the characteristic value of the resistance.
The procedure of determining load and resistance factors is termed the code calibration process. The Design Value Method combined with First-Order Reliability Method (FORM) analysis is used as a standard procedure to determine partial factors of basic variables, mainly because it is computationally inexpensive and the results are of acceptable accuracy. The method, however, encounters difficulties when applied to determine partial factors for some geotechnical structures: (i) some of the basic variables are included both in resistance and load; (ii) the performance functions are highly non-linear. To overcome these difficulties, a method which keeps the philosophy of the Design Value Method and uses Monte Carlo Simulation (MCS) is proposed in this paper to determine load and resistance factors. The design of a gravity retaining wall under the sliding failure mode is introduced as an illustrative example of the method.

2 DETERMINATION OF LOAD AND RESISTANCE FACTORS

2.1 Design Value Method

The simplest form of a limit state function is

M = R − S (1)

where the load, S, and resistance, R, are statistically independent, normally distributed variables, i.e. R ∼ N(µR, σR²) and S ∼ N(µS, σS²). Then M also follows a normal distribution N(µM, σM²), where µM = µR − µS and σM = √(σR² + σS²). The reliability index β is given as:

β = µM/σM (2)

The probability of failure pf may be defined by:

pf = Φ(−β) (3)

where Φ(·) is the standard normal probability distribution function. On the failure surface, the design point D(ZR*, ZS*) is the point at the shortest distance from the origin in the standard normal space (Figure 1). The design point is the point which maximizes the joint probability density function on the failure surface for a given problem. This concept will be used to determine the design point utilizing the samples generated during the Subset Monte Carlo simulation.

Figure 1. Definition of design point.

For the performance function given in Eq. (1), it is required in design that β ≥ βT should always be satisfied, where βT is a target reliability index. Thus,

µR − µS ≥ βT √(σR² + σS²) (4)

On the other hand,

√(σR² + σS²) = αS σS − αR σR (5)

where αR and αS are called the sensitivity factors of R and S, defined by the following equations:

αR = −σR/√(σR² + σS²), αS = σS/√(σR² + σS²) (6)

Then, the following equation can be obtained:

µR + αR βT σR ≥ µS + αS βT σS

Let the characteristic values of R and S be Rk and Sk, respectively; the load and resistance factors which fulfill the target reliability index βT can then be determined as follows:

γR = µR (1 + αR βT VR)/Rk (7)

γS = µS (1 + αS βT VS)/Sk (8)

where VR and VS are the coefficients of variation of R and S, respectively. These results are exact only when all the basic variables are normal and the performance function consists of a linear combination of these variables. When the basic variables follow lognormal distributions, the following approximation is used (e.g. Ditlevsen and Madsen, 1996):

γR = exp(λR + αR βT ζR)/Rk (9)

γS = exp(λS + αS βT ζS)/Sk (10)

where λ = ln µ − ζ²/2 and ζ = √(ln(1 + V²)) is the standard deviation of the logarithm of a lognormal variable, and the sensitivity factors are defined as:

αR = −ζR/√(ζR² + ζS²), αS = ζS/√(ζR² + ζS²) (11)
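For the normal case, Eqs. (7) and (8) translate directly into a few lines of code. The sketch below is a minimal illustration; the characteristic values used in the demonstration call (the mean of R and the 95% fractile of S) are choices matching the example of Section 3.1, not prescriptions of the method.

```python
import numpy as np

def dvm_factors_normal(muR, covR, muS, covS, Rk, Sk, beta_T):
    """Load and resistance factors by the design value method for
    independent normal R and S, Eqs. (7) and (8)."""
    sR, sS = muR * covR, muS * covS
    sM = np.hypot(sR, sS)
    aR, aS = -sR / sM, sS / sM           # sensitivity factors, Eq. (6)
    gamma_R = muR * (1 + aR * beta_T * covR) / Rk
    gamma_S = muS * (1 + aS * beta_T * covS) / Sk
    return gamma_R, gamma_S

# R ~ N(7, 1), S ~ N(3, 1) with beta_T = 2.83 (cf. Section 3.1);
# Rk = mean of R, Sk = 95% fractile of S
print(dvm_factors_normal(7.0, 1/7, 3.0, 1/3,
                         Rk=7.0, Sk=3.0 + 1.645, beta_T=2.83))  # ~(0.71, 1.08)
```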
2.2 Subset Markov Chain Monte Carlo simulation

The Subset Markov Chain Monte Carlo (MCMC) method, proposed by Au and Beck (2003), is one of the most effective and stable methods to carry out reliability analyses by MCS. The basic idea of this method is that a small failure probability can be expressed as a product of conditional probabilities of some intermediate events. Thus, the simulation problem of a rare event is converted into a problem of a sequence of more frequent events. Let the total region and the failure region be denoted by F0 and F, respectively. The subsets of F0 are Fi (i = 1,…,m) that satisfy the relationship:

F0 ⊃ F1 ⊃ F2 ⊃ … ⊃ Fm = F (12)

The failure probability P(F) can be calculated based on the conditional probabilities of these intermediate subsets as follows:

P(F) = P(F1) ∏(i=1,…,m−1) P(Fi+1 | Fi) (13)

2.2.1 Samples generation by Metropolis-Hastings algorithm
In order to calculate P(Fi), samples are generated by MCS following a specified probability density function (PDF). For the calculation of the failure probability of the intermediate subsets, samples are generated for any PDF defined on the intermediate subset Fi by using an MCMC algorithm known as the Metropolis-Hastings (M-H) algorithm. Let the current Markov chain state, or the seed for the next state, be x(k). The M-H algorithm generates the next sample x(k+1), following a given PDF π(·), by the procedure below:
Step 1: Using the current state x(k), generate a candidate state x′ from a proposal distribution q(x′|x(k)). Also, generate a sample u from the uniform distribution U(0,1).
Step 2: Calculate an acceptance probability α as

α = min{1, [π(x′) q(x(k)|x′)] / [π(x(k)) q(x′|x(k))]} (14)

Step 3: x(k+1) is generated based on α as:

x(k+1) = x′ if u ≤ α; x(k+1) = x(k) otherwise (15)

Step 4: Repeat steps (1) to (3) for as many cycles as necessary to obtain the required number of samples.
In the M-H algorithm, the generated samples may be affected by: (i) the choice of the proposal density and (ii) the burn-in period, i.e. the number of runs required for the chain to approach its stationary density so that candidate states follow the target distribution.

2.2.2 Subset MCMC algorithm
Suppose the limit state function g(X) is composed of n basic variables Xi, i = 1,…,n. Based on the subsets and MCMC, the failure probability can be calculated by the following procedure:
Step 1: k = 1. Nt samples of each basic variable Xj are generated based on the given PDF of Xj. The performance function g(x) is then calculated for the Nt generated x (x = {x1, …, xn}).
Step 2: The intermediate failure event Fk is determined, and the Ns samples that are closest to the limit state are chosen as seeds for use in the next step:

Fk = {x : g(x) ≤ gNs} (16)

where gNs is the Ns-th lowest value (Ns < Nt) of the performance function obtained using x(k) when arranged in ascending order as g1 < g2 < … < gNt.
Step 3: k = k + 1. From each of the Ns seeds of each basic variable, generate Nt/Ns samples by using the M-H algorithm. Thus, the subset Fk+1 is defined.
Step 4: Evaluate the limit state function g(x(k+1)) at each consecutive state of the basic variables. If x(k+1) lies in Fk, accept the state; if it lies in Fk−1, reject the state and take x(k+1) = x(k).
Step 5: Repeat steps 2 to 4 an appropriate number of times.
Step 6: The failure probability of the target failure event Pf is estimated by the following equation:

Pf = (Ns/Nt)^(m−1) · Nfail/Nt (17)

where Nfail is the number of samples falling into the failure region. A number of variations in the implementation of the above-mentioned algorithm for Subset simulation are possible, for example the choice of the total number of samples Nt, the definition of the intermediate failure events Fk (k = 1,…,m−1) and the stopping criteria.
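A compact sketch of the whole Subset procedure is given below. It is a simplified variant: the parameter space is assumed standard normal, and the conditional sampler uses a Crank–Nicolson proposal, which leaves the standard normal density invariant, in place of a general M-H proposal; the demonstration g is the linear example of Section 3.1 mapped to standard space.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(u):
    """Section 3.1 example in standard space: R ~ N(7,1), S ~ N(3,1),
    so Z = R - S becomes g(u) = 4 + u1 - u2."""
    return 4.0 + u[:, 0] - u[:, 1]

def subset_simulation(g, dim, n=1000, p0=0.1, s=0.6, max_levels=10):
    u = rng.standard_normal((n, dim))
    gu = g(u)
    pf = 1.0
    for _ in range(max_levels):
        ns = int(p0 * n)
        order = np.argsort(gu)
        b = gu[order[ns - 1]]                  # intermediate threshold g_Ns
        if b <= 0.0:                           # failure domain reached
            return pf * np.mean(gu <= 0.0)
        pf *= p0                               # P(F_{k+1} | F_k) = Ns/Nt
        u, gu = u[order[:ns]], gu[order[:ns]]  # seeds for the next level
        while len(u) < n:                      # grow the chains from the seeds
            cand = np.sqrt(1 - s**2) * u[-ns:] + s * rng.standard_normal((ns, dim))
            gc = g(cand)
            reject = gc > b                    # moves leaving F_k are rejected
            cand[reject], gc[reject] = u[-ns:][reject], gu[-ns:][reject]
            u, gu = np.vstack([u, cand]), np.concatenate([gu, gc])
    return pf * np.mean(gu <= 0.0)

print(subset_simulation(g, dim=2))   # close to Phi(-2.83) = 0.23e-2
```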
2.3 Procedure to determine load and resistance factors

In the reliability analysis of structural design, the resistance component, R, and the load component, S, may contain several basic variables Xi (i = 1,…,n), i.e. R(X) and S(X). Some of the basic variables may appear in both the resistance and load components of the limit state function. This leads to the troublesome situation that the design value method using FORM may not be applicable. As an alternative approach, a procedure, referred to hereafter as McDeva (Monte Carlo simulation based on the Design Value Method), is proposed to directly determine γR and γS for R and S, in place of the basic variables Xi:
Step 1: Define the basic variables Xi (i = 1,…,m) and their PDFs.
Step 2: Carry out subset MCMC in a number of runs to obtain the failure probability and many sets of random samples of the basic variables Xi (i = 1,…,n). In addition, by using the samples generated in the first step of the MCMC procedure, it is possible to estimate the joint density function fR,S(R, S).
Step 3: At the final step, it is expected that most of the points (or samples) are located close to the limit state line (see Figure 2). The point which gives the maximum likelihood for the given PDFs is considered to approximate the design point.
Step 4: With the specified design point (R*, S*), one can evaluate the sensitivity factors as follows:

αR = ZR*/√(ZR*² + ZS*²), αS = ZS*/√(ZR*² + ZS*²) (18)

where (ZR*, ZS*) is the estimated design point in normalized space, i.e. ZR* = (R* − µR)/σR and ZS* = (S* − µS)/σS. Another simple but less accurate way to obtain the sensitivity factors is shown in Eq. (19), using the probabilistic parameters of R and S obtained in Step 2:

αR = −σR/√(σR² + σS²), αS = σS/√(σR² + σS²) (19)

Figure 2. Concept of the subset MCMC method.

The sensitivity factors obtained from Eq. (18) may be more accurate than those obtained from Eq. (19), because Eq. (18) is based on the maximum likelihood point, whereas Eq. (19) is simply based on the samples generated in the first step of the subset MCMC procedure. Although the structure that has been evaluated may not have exactly the target reliability level, one can obtain load and resistance factors that correspond to the target reliability index by using the equations developed for the design value method, i.e. Eqs. (7) and (8).

If it is judged more appropriate to fit a lognormal distribution, the sensitivity factors can be estimated in the two ways shown in Eqs. (20) and (21):

αR = ZR*/√(ZR*² + ZS*²), αS = ZS*/√(ZR*² + ZS*²) (20)

where (ZR*, ZS*) is the estimated design point in normalized space, i.e. ZR* = (ln R* − µlnR)/σlnR and ZS* = (ln S* − µlnS)/σlnS, and

αR = −ζR/√(ζR² + ζS²), αS = ζS/√(ζR² + ζS²) (21)

In this case Eqs. (9) and (10) should be used to estimate the load and resistance factors. The method proposed in this study is therefore a combination of MCS and the design value method. The former is a very flexible method to evaluate the reliability of the structure under consideration, whereas the advantage of the latter is that one need not redesign the structure during code calibration as long as the safety level of the initially set structure is not very far from the target reliability level.

In order to check the results obtained by McDeva, another procedure, based on a combination of OMC (Ordinary Monte Carlo simulation) and DVM, is also attempted as follows:
Step 1: Define the basic variables and their PDFs.
Step 2: Carry out OMC based on the probability distributions of the basic random variables Xi (i = 1,…,n). A number Nt of samples are generated, and pf may be estimated as the ratio of Nf (the number of samples falling into the failure region) to Nt.
Step 3: Define the approximate design point X*.
Step 4: Calculate αR, αS and γR, γS.

3 EXAMPLES

3.1 Linear limit state function Z = R − S

R and S are independently normally distributed random variables, i.e. R ∼ N(7.0, 1.0) and S ∼ N(3.0, 1.0), for which the true reliability index is βTrue = 2.83 and the true design point is (5.0, 5.0). An example of the generated samples is shown in Figure 3. The results obtained by Subset MCMC are as follows:
Probability of failure: pf = 0.26 × 10⁻²
Reliability index: β = 2.79
The samples of the last step are used to estimate the design point and the necessary factors as shown below:
Design point: D(R*, S*) = (5.1, 5.0)
Sensitivity factors: αR = −0.69, αS = 0.73
Partial safety factors: γR = 0.73, γS = 1.08
The true design point in this problem is (R*, S*) = (5.0, 5.0), which is very close to the value obtained. The true sensitivity factors, i.e. αR = −0.707 and αS = 0.707, are also very close. The method seems to work well in this simple example.
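The exact values quoted above follow in closed form from Section 2.1:

```python
import numpy as np

muR, sR, muS, sS = 7.0, 1.0, 3.0, 1.0
sM = np.hypot(sR, sS)
beta = (muR - muS) / sM                 # true reliability index, 2.83
aR, aS = -sR / sM, sS / sM              # true sensitivity factors, -/+0.707
R_star = muR + aR * beta * sR           # true design point (5.0, 5.0)
S_star = muS + aS * beta * sS
print(beta, (R_star, S_star), (aR, aS))
```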
Figure 3. Generated samples for the simple linear performance function.

3.2 A retaining wall example

3.2.1 Determination of load and resistance factors
McDeva is now applied to the reliability analysis of a gravity retaining wall (Figure 4) under the sliding failure mode (Orr, 2005). The necessary parameters of the retaining wall, the soil below and the fill above the retaining wall are described below.
– Properties of sand beneath the wall: cs = 0, φs, γs
– Properties of fill behind the wall: cf = 0, φf, γf
– The groundwater level is at depth below the base of the wall
– Friction angle between the wall base and the underlying sand: φbs
– Thickness of the retaining wall: w = 0.4 m

Figure 4. Description of retaining wall.

The limit state function of the problem is given as:

g = R − S

where R is the total horizontal force that resists horizontal movement of the wall and S is the total horizontal force that moves the wall. The actual forms of R and S are given in terms of the basic variables, with

tan(45° − φf/2) = √(1 + (tan φf)²) − tan φf
tan(45° + φs/2) = √(1 + (tan φs)²) + tan φs

γs, γf, γc, tan φs, tan φf, tan φbs and q are considered as the basic variables of the limit state function, and some of them are included both in R and S. The probabilistic parameters of the basic variables are shown in Table 1.

Table 1. Probabilistic parameters of basic variables in the limit state function of the retaining wall under the sliding failure mode.

Variable | Distribution | µ | σ | COV | λ | ζ
X1 = γf | Lognormal | 20 | 1.000 | 0.05 | 2.994 | 0.050
X2 = γs | Lognormal | 19 | 0.950 | 0.05 | 2.943 | 0.050
X3 = γc | Lognormal | 25 | 1.250 | 0.05 | 3.218 | 0.050
X4 = tan φf | Lognormal | 0.781 | 0.107 | 0.14 | −0.256 | 0.136
X5 = tan φs | Lognormal | 0.675 | 0.086 | 0.13 | −0.402 | 0.127
X6 = tan φbs | Lognormal | 0.577 | 0.070 | 0.12 | −0.557 | 0.120
X7 = q | Lognormal | 15 | 1.500 | 0.10 | 2.703 | 0.100

Subset MCMC simulation is then carried out in 1000 runs to obtain pf of the gravity retaining wall under the sliding failure mode. The number of samples generated for each variable in a step is Nt = 100. In addition, from the samples of the basic variables generated during the subset MCMC simulation, series of values of R and S are also calculated; these R and S values are then utilized for the determination of the partial safety factors. The pf estimated from the 1000 runs has a mean value of 0.27 × 10⁻² (i.e. β = 2.70) and COV(pf) = 1.40.

Design point:
γf = 20.2, γs = 19.1, γc = 24.1, tan φf = 0.588, tan φs = 0.624, tan φbs = 0.472, q = 14.9
This approximated design point is equivalent to the point (R = 169.1, S = 166.8). The Nt points (R, S) calculated from the Nt points Xi (i = 1,…,n) generated in the subset F0 are utilized to estimate the mean and standard deviation of R and S:
µR = 201.1, σR = 21.8, µS = 120.8, σS = 20.2
In addition, based on a fitting test on the generated points, the density function fR,S(R, S) seems to be
best fitted by joint Normal and Lognormal distribution functions. Finally, sensitivity factors and load and resistance factors are obtained for a target reliability index assumed to be βT = 3.1. The joint density function fR,S(R, S) is assumed to be a joint normal and a joint lognormal distribution. The obtained results are shown in Tables 2 and 3. Note that the characteristic values of R and S adopted in this example are calculated from the characteristic values of the basic variables, which are chosen as the mean values for X1 to X6 and the 95% fractile value for X7. Some remarks on the results are as follows:
– The load and resistance factors obtained in all the cases, i.e. different joint density functions and different ways of calculating the sensitivity factors, are very similar.
– The results obtained based on the estimated design point (Table 2) are better than those obtained based only on the samples generated in the first step of the subset MCMC procedure (Table 3), as expected.
OMC simulation has been carried out with 100,000 samples and the results obtained by OMC are also shown in Tables 2 and 3, showing that the results obtained by the two procedures are very similar.

Table 2. Results obtained for βT = 3.1; sensitivity factors based on the estimated design point.

| By McDeva, Normal (R, S) | By McDeva, Lognormal (R, S) | By OMC, Normal (R, S) | By OMC, Lognormal (R, S)
α | −0.54, 0.84 | −0.61, 0.79 | −0.57, 0.82 | −0.64, 0.77
γ | 0.62, 1.76 | 0.61, 1.83 | 0.61, 1.79 | 0.61, 1.85

Table 3. Results obtained for βT = 3.1; sensitivity factors based on fR,S(R, S).

| By McDeva, Normal (R, S) | By McDeva, Lognormal (R, S) | By OMC, Normal (R, S) | By OMC, Lognormal (R, S)
α | −0.73, 0.68 | −0.54, 0.84 | −0.75, 0.67 | −0.56, 0.83
γ | 0.57, 1.66 | 0.63, 1.87 | 0.57, 1.69 | 0.63, 1.91

3.2.2 Some reflections on Subset MCMC and verification of the robustness of McDeva
In this example, the true failure probability pfTrue is not available. Therefore, OMC simulation has been carried out with a total of Nt = 1,000,000 generated random samples to obtain the failure probability, which is 0.289 × 10⁻². This value is assumed to be the true value here. McDeva has also been carried out for different values of Nt, Ns and Nf, where Nt is the total number of samples generated for each subset, Ns is the number of seeds used for generating a subset, and Nf is the minimum number of samples falling in the failure region for the Subset MCMC algorithm to be satisfied and stopped (the cut-off criterion). Nt, Ns and Nf are set as follows:
Nt = 50, 100, 150
Ns/Nt = 0.02, 0.04, 0.10, 0.20, 0.50
Nf/Nt = 0.02, 0.05, 0.10, 0.20, 0.30, 0.40, 0.50
For each combination of the above Nt, Ns and Nf, McDeva has been carried out in 1000 runs. For the case Nt = 100, Tables 4 to 6 show mean(pf), COV(pf) and the bias of the failure probability compared to the true failure probability in log scale, i.e. log(mean pf)/log(pfTrue). The mean value of pf for all Nt cases (i.e. 50, 100, 150) ranged within (0.10 ∼ 0.47) × 10⁻², whereas its COV ranged between 0.90 ∼ 3.36. In most of the cases the bias log(pf)/log(pfTrue) = 1.0 ± 0.05, which may be considered acceptable.

Table 4. Mean(pf) for Nt = 100 (rows: Ns/Nt; columns: Nf/Nt).

Ns/Nt | 0.02 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5
0.02 | 0.0030 | 0.0030 | 0.0027 | 0.0020 | 0.0017 | 0.0012 | 0.0009
0.04 | 0.0029 | 0.0028 | 0.0025 | 0.0017 | 0.0014 | 0.0012 | 0.0011
0.10 | 0.0033 | 0.0029 | 0.0027 | 0.0026 | 0.0023 | 0.0020 | 0.0018
0.20 | 0.0035 | 0.0031 | 0.0026 | 0.0026 | 0.0025 | 0.0023 | 0.0022
0.50 | 0.0040 | 0.0033 | 0.0028 | 0.0026 | 0.0024 | 0.0025 | 0.0024

Table 5. COV(pf) for Nt = 100 (rows: Ns/Nt; columns: Nf/Nt).

Ns/Nt | 0.02 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5
0.02 | 1.17 | 1.08 | 1.21 | 1.68 | 2.24 | 2.71 | 3.06
0.04 | 1.21 | 0.96 | 1.14 | 1.32 | 1.19 | 0.88 | 0.56
0.10 | 1.22 | 1.04 | 1.07 | 1.05 | 1.12 | 1.28 | 1.35
0.20 | 1.20 | 1.15 | 0.95 | 0.89 | 0.92 | 0.92 | 0.97
0.50 | 1.07 | 1.01 | 0.86 | 0.77 | 0.77 | 0.73 | 0.73

Table 6. log(pf)/log(pfTrue) for Nt = 100 (rows: Ns/Nt; columns: Nf/Nt).

Ns/Nt | 0.02 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5
0.02 | 0.96 | 0.96 | 0.98 | 1.02 | 1.06 | 1.12 | 1.16
0.04 | 0.96 | 0.97 | 0.99 | 1.05 | 1.09 | 1.11 | 1.12
0.10 | 0.94 | 0.97 | 0.98 | 0.99 | 1.00 | 1.02 | 1.04
0.20 | 0.93 | 0.96 | 0.98 | 0.98 | 0.99 | 1.00 | 1.01
0.50 | 0.91 | 0.95 | 0.97 | 0.98 | 0.99 | 0.99 | 0.99

To define the optimum combinations of the input parameters, the following factors have been considered:
– The number of times that the performance function is called, Zcall. A smaller Zcall means less computational time.
– The number of total points generated in Subset MCMC, TotP. A smaller TotP means less computational time.
– The number of points out of the TotP points that are rejected by Subset MCMC, RejP. A smaller RejP means a more effective McDeva procedure.

The results show that Zcall and TotP increase with Nt, Ns and Nf. When Ns/Nt = 0.5, TotP and RejP become very large. The COVs of TotP and RejP seem to decrease when Nt increases. The mean values of Zcall, TotP and RejP after Nrun = 1000 runs for Nt = 100 are shown in Tables 7 to 9, respectively. By considering the bias of the failure probability compared to the true failure probability in log scale, i.e. log(pf)/log(pfTrue), and its COV, and then checking the other parameters indicating the computational time, for Nt varying from 50 to 150, McDeva seems to work well when Ns/Nt = 0.1 ∼ 0.2 and Nf/Nt = 0.1 ∼ 0.2.

Table 7. Zcall for Nt = 100 (rows: Ns/Nt; columns: Nf/Nt).

Ns/Nt | 0.02 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5
0.02 | 158.5 | 166.0 | 175.5 | 188.4 | 194.2 | 197.9 | 199.3
0.04 | 168.6 | 179.4 | 192.1 | 202.3 | 207.5 | 211.8 | 213.7
0.10 | 188.2 | 206.5 | 220.3 | 236.7 | 244.3 | 251.1 | 254.4
0.20 | 214.8 | 244.2 | 264.6 | 286.8 | 299.4 | 307.2 | 313.0
0.50 | 308.8 | 393.4 | 451.7 | 503.1 | 535.6 | 552.4 | 571.5

Table 8. TotP for Nt = 100 (rows: Ns/Nt; columns: Nf/Nt).

Ns/Nt | 0.02 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5
0.02 | 111.8 | 127.6 | 148.2 | 176.0 | 189.5 | 197.6 | 200.5
0.04 | 127.7 | 150.3 | 177.2 | 199.6 | 211.1 | 219.5 | 223.8
0.10 | 158.1 | 193.6 | 221.9 | 256.8 | 272.6 | 287.9 | 295.9
0.20 | 199.5 | 255.3 | 295.2 | 341.6 | 367.5 | 384.8 | 396.7
0.50 | 345.5 | 496.3 | 607.2 | 708.1 | 774.4 | 808.7 | 847.6

Table 9. RejP for Nt = 100 (rows: Ns/Nt; columns: Nf/Nt).

Ns/Nt | 0.02 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5
0.02 | 83.5 | 96.4 | 114.4 | 137.9 | 149.1 | 155.9 | 158.4
0.04 | 93.4 | 111.6 | 134.0 | 152.9 | 163.1 | 170.1 | 173.7
0.10 | 110.7 | 138.2 | 160.8 | 189.5 | 202.9 | 215.9 | 222.8
0.20 | 133.5 | 175.1 | 206.0 | 242.7 | 263.8 | 278.5 | 288.3
0.50 | 206.9 | 310.1 | 392.6 | 470.6 | 523.3 | 550.1 | 581.3

4 CONCLUSIONS

McDeva, based on the combination of subset MCMC simulation and DVM, is proposed to determine load and resistance factors for limit state functions that contain basic variables in both the load and resistance functions and show high non-linearity. This kind of performance function is frequently encountered in geotechnical design, of which a retaining wall is an example. The results show that:
– With acceptable bias and coefficient of variation of the failure probability, McDeva can be used to obtain the load and resistance factors effectively.
– McDeva seems to work well in the example when Ns/Nt = 0.1 ∼ 0.2 and Nf/Nt = 0.1 ∼ 0.2.

For further developments, one issue needs to be studied: in the present example, the resulting correlation between R and S was very weak, by chance, and thus R and S have been treated as independent variables. If there is a non-negligible correlation between R and S, some additional consideration is required in determining the load and resistance factors to secure a sufficient safety margin.

REFERENCES

Au, S.K. & Beck, J.L. 2003. Subset simulation and its application to seismic risk based on dynamic analysis. Journal of Engineering Mechanics, ASCE, 901–917.
Ditlevsen, O. & Madsen, H.O. 1996. Structural Reliability Methods. England: John Wiley & Sons.
Gilks, W.R., Richardson, S. & Spiegelhalter, D.J. 1998. Introducing Markov chain Monte Carlo. In W.R. Gilks et al. (eds), Markov Chain Monte Carlo in Practice, 1–19. Chapman and Hall/CRC.
Honjo, Y., Kikuchi, Y., Suzuki, M., Tani, K. & Shirato, M. 2005. JGS Comprehensive Foundation Design Code: Geocode 21. Proc. 16th ICSMGE, pp. 2813–2816, Osaka.
Orr, T.L.L. 2005. Proceedings of the International Workshop on the Evaluation of Eurocode 7, p. 71, pp. 127–149, Dublin.
Phoon, K.K., Becker, D.E., Kulhawy, F.H., Honjo, Y., Ovesen, N.K. & Lo, S.R. 2003. Why consider reliability analysis for geotechnical limit state design? LSD2003 International Workshop on Limit State Design in Geotechnical Engineering Practice, USA.
Phoon, K.K. & Honjo, Y. 2005. Reliability analyses of geotechnical structures: towards development of some user-friendly tools. Proc. 16th ICSMGE, pp. 2845–2848, Osaka.
Reliability analyses of rock slope stability

C. Cherubini & G. Vessia Politecnico di Bari, Bari, Italy
ABSTRACT: The benchmark proposed here addresses instability analyses of anchored rock slopes. As will be shown, the safety factor can be defined in two different ways, so that different values of the safety level can be obtained within a deterministic approach. Here reliability analyses are performed by means of FORM, SORM and MCS in order to investigate the effects of the uncertainties and variability of rock design resistances on the safety level. Moreover, a comparison between the partial factor approach and reliability-based design is performed, pointing out the role of characteristic values in modern design provisions.
1 INTRODUCTION

1.1 Stability assessment for rock slopes

The main feature of the mechanical behaviour in rock slope instability is that shear takes place either along a discrete sliding surface or within a zone behind the face: if the acting shear force is greater than the shear strength of the rock on this surface, then the slope becomes unstable. The definition of instability will differ depending on the method adopted. Commonly, the stability of a slope can be expressed in one or more of the following terms (Wyllie and Mah, 2004):
1) Factor of safety, FS. Stability is quantified by limit equilibrium of the slope: FS > 1 means a stable slope.
2) Strain. Failure is defined by the onset of strains great enough to prevent safe operation of the slope, or such that the rate of movement exceeds the rate of mining in an open pit.
3) Probability of failure. Stability is quantified by the probability distribution of the difference between resisting and displacing forces (the safety margin), which are each expressed as probability distributions.
4) LRFD (load and resistance factor design). Stability is defined by the factored resistance being greater than or equal to the sum of the factored loads.

Figure 1. Loads and geometry of a rock slope wedge.

In order to investigate the stability of a rock slope wedge involved in a sliding mechanism, the equilibrium method is usually employed and the factor of safety is computed. In the case studied, the presence of an anchorage force has been considered (see Fig. 1). Here, according to the Mohr-Coulomb failure criterion for the rock mass behaviour, two different expressions for the factor of safety (FS) have been investigated (De Mello 1988): in the first (Eq. (1), FS+ below) the anchor-pull component along the sliding surface is treated as an additional stabilizing force, while in the second (Eq. (2), FS−) it is subtracted from the driving component of the wedge weight. In both, P is the wedge weight; T is the anchor pull; β is the anchor and slope inclination; α is the sliding surface inclination and φ is the internal friction angle. Such factor of safety values can be compared with target values proposed by Terzaghi and Peck (1967) and by the Canadian Geotechnical Society (1992), which for earthworks vary between 1.3 and 1.5. The upper value applies to usual loads and service conditions, while the lower value applies to maximum loads and the worst expected geological conditions. Therefore, apart from the ambiguity in the computing procedure, the factor of safety is affected by uncertainties and variability which cannot be avoided.
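The two definitions can be made concrete with the short sketch below. Since the explicit forms of Eqs. (1) and (2) are not reproduced above, the decomposition used here — the anchor making an angle (α + β) with the sliding surface, contributing T sin(α + β) to the normal force and T cos(α + β) along the surface — is an assumption adopted purely for illustration.

```python
import numpy as np

def wedge_fs(P, T, alpha_deg, beta_deg, phi_deg):
    """FS+ and FS- for the anchored wedge of Figure 1 (assumed forms)."""
    a, b, p = np.radians([alpha_deg, beta_deg, phi_deg])
    N = P * np.cos(a) + T * np.sin(a + b)   # force normal to the surface
    T_par = T * np.cos(a + b)               # anchor component along the surface
    drive = P * np.sin(a)
    fs_plus = (N * np.tan(p) + T_par) / drive    # T_par as extra resistance
    fs_minus = N * np.tan(p) / (drive - T_par)   # T_par reduces the driving force
    return fs_plus, fs_minus

# illustrative wedge weight and anchor pulls (kN per metre of slope)
for T in (0.0, 100.0, 200.0, 300.0):
    print(T, wedge_fs(P=500.0, T=T, alpha_deg=40.0, beta_deg=7.5, phi_deg=35.0))
```

With this decomposition FS+ grows linearly with T, while FS− diverges and changes sign once T cos(α + β) exceeds P sin α, which is consistent with the trends described in Section 2.2.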
The study performed below focuses on a reliability approach for computing the safety of the slope, accounting for random variables to describe the rock resistance parameters.

2 THE CASE STUDIED

2.1 Uncertainties in rock mass characterization

The strength of a rock mass is characterized by the strength of the intact rock and of the discontinuities. Depending on the number, orientation and condition of the joints, the rock mass behaviour is affected by anisotropy and weakness which can lead to failure where slopes are concerned. Moreover, human activity and weathering processes may contribute to increasing failure-prone conditions by increasing the acting forces or reducing the rock mass resistance. Thus, the first step in a rock slope stability study is the mechanical characterization of the rock mass by means of classification systems. Then, after a sound analysis of the discontinuities and their shear resistance, the type of sliding movement can be forecast. In the proposed case study, a rock slope sliding along sliding-prone discontinuities is considered (see Fig. 1), where the slope face is that of an open quarry reinforced by an anchorage. The friction angle of a joint in a rock mass can be computed as (Barton and Choubey 1977):

φ = φr + i + JRCn log10(JCSn/σn)

where σn is the vertical effective stress acting on the discontinuity wall; JRCn is the joint roughness coefficient for a joint of the actual length; JCSn is the joint wall compression strength for a joint of the actual length; φr is the residual friction angle, which can be obtained experimentally; and i is the roughness of the discontinuity at large scale. JRCn and JCSn are calculated by the formulation of Bandis et al. (1981), that is:

JRCn = JRC0 (Ln/L0)^(−0.02 JRC0)
JCSn = JCS0 (Ln/L0)^(−0.03 JRC0)

where JCS0 and JRC0 are the joint wall compression strength and the joint roughness coefficient, respectively, computed for the reference joint length L0 of 10 cm; Ln is the actual joint length. JCS0 can be obtained from reference tables, whereas JRC0 can be drawn from the Schmidt hammer test. As explained before, the evaluation of φ for a joint in a rock mass is affected by variability and uncertainty. Here, the friction angle probability distribution is modelled as lognormal with a coefficient of variation equal to 30, 40 and 50%. The sliding plane inclination, named α, can also be considered a random variable: it is determined by means of structural investigations and is consequently affected by human error. Therefore, for α a normal probability distribution is considered, with a mean value of 40° and a coefficient of variation equal to 10%.

Table 1. Variable values for the benchmark.

Variable | Mean value | Standard deviation | Characteristic value
Sliding surface slope α [°] | 40 | 4 | 38
Rock face slope β [°] | 7.5 | – | 7.5
Friction angle φ [°] | 35 | 10.5 | 29.8
Unit weight γ [kN/m3] | 23 | 0.5 | 22.8
Slope height H [m] | 5, 10, 15 | – | –

2.2 Factor of safety method

The commonest approach to stability studies is based on the computation of the factor of safety. According to the equilibrium method and Mohr-Coulomb behaviour, the safety factor is computed by means of the two different expressions Eq. (1) and Eq. (2): the two cases arise from different interpretations of the anchor pull component along the sliding surface direction. As a matter of fact, it can be taken as a stabilizing force or as a negative contribution to the sliding weight component. The choice between the two approaches can be rationally undertaken by considering a parametric deterministic and probabilistic study in which the anchorage pull and the height of the slope are varied. Table 1 shows the values of the variables used in this benchmark. Three values of the slope height have been investigated: 5, 10 and 15 m. As the anchor pull increases, the two factors of safety increase with different trends, as can be seen in Fig. 2. The expression of Eq. (1), indicated as FS+, increases linearly, although its slope decreases when the height increases; the factor of safety is thus more strongly affected by the anchorage pull for smaller slope heights. On the contrary, the expression of Eq. (2), reported as FS−, shows a parabolic trend with a rapid increase as the anchor pull increases. Such a trend obviously yields negative values once the contribution of the anchorage pull becomes higher than the weight component on the weak plane. Accordingly, FS− will not be discussed further.
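As a worked illustration of the Barton–Choubey criterion with the Bandis scale corrections above, the sketch below evaluates φ for assumed joint data; all numerical inputs are illustrative and are not values from the benchmark.

```python
import numpy as np

def joint_friction_angle(sigma_n, phi_r, i, JRC0, JCS0, L0=0.1, Ln=1.0):
    """Peak friction angle (degrees) of a joint; Ln = 1 m is an
    assumed in-situ joint length."""
    JRCn = JRC0 * (Ln / L0) ** (-0.02 * JRC0)   # Bandis et al. (1981)
    JCSn = JCS0 * (Ln / L0) ** (-0.03 * JRC0)
    return phi_r + i + JRCn * np.log10(JCSn / sigma_n)

# assumed data: phi_r = 28 deg, i = 2 deg, JRC0 = 8, JCS0 = 50 MPa,
# sigma_n = 0.2 MPa on the discontinuity wall
print(joint_friction_angle(0.2, 28.0, 2.0, 8.0, 50.0))   # ~42 deg
```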
2.3 Reliability method

As FS− fails, only the expression of Eq. (1) can be considered for the safety factor. It is then worth comparing the safety factor with the reliability index computed by means of a random variable approach. Such a parameter is strictly related to the probability of failure. Many studies have been performed on the allowable values of the probability of failure of engineering projects accepted by people. In the case of slope stability, values of the reliability index of about 2–4 are suggested by Baecher (1982); accordingly, this interval will be investigated in this analysis. Table 1 summarizes the mean and standard deviation values of the random variables, while Table 2 shows the assumptions about the probability distributions of these variables. As far as the unit weight γ is concerned, it can be assumed to be a random variable with a small variability, as supported by the literature (Cherubini 1997); in this case it is considered normally distributed with a coefficient of variation equal to 2%. In this benchmark, FORM, SORM and Monte Carlo methods are performed in order to evaluate the reliability index by means of the COMREL code (1997). The performance function considered is the limit-equilibrium margin corresponding to Eq. (1).

Table 2. Random variable distribution types.

Variable | Distribution type | Variation coefficient | Min value | Max value
Sliding surface slope α [°] | Normal | 10% | – | –
Rock face slope β [°] | Uniform | – | 5 | 10
Friction angle φ [°] | Lognormal | 30, 40, 50% | – | –
Unit weight γ [kN/m3] | Normal | 2% | – | –
Anchorage pull T [kN] | Constant | – | – | –
Slope height H [m] | Constant | – | – | –

Figure 2. Variation of safety factor (Eq. 1) with the anchor pull.

Results from the FORM and SORM techniques coincide, thus just the FORM results are reported in the following. At first, the reliability index is reported versus the anchorage pull, as already done for the factor of safety (Fig. 2). Then, the reliability index is compared with the safety factor for the same anchorage pull values. Fig. 3 shows curved trends of the reliability index versus the anchorage pull. For the case of slope height equal to 5 m, as the anchorage pull varies from 100 to 150 kPa the reliability index increases from 2 to 4 when the friction angle coefficient of variation CVφ is 30%. Such a trend can be seen for the other two height values, but with a slope reduction as the height increases: this means that when the height increases a larger anchorage pull increase is needed to stabilize the slope, as in the case of the factor of safety. This also holds when CVφ increases (Figs. 4–5): when the variability of the friction angle increases, the safety level of the slope reduces and much larger anchorage pull increments are needed to improve the stability condition. Accordingly, Figure 6 shows, for the slope height equal to 5 m, the safety factor and the reliability index values corresponding to the same anchorage pull for the three CVφ values. When the reliability index varies between 2 and 4, the safety factor increases according to CVφ: as a matter of fact, in order to get a reliability index equal to 2.5, the factor of safety should be increased from 1.8 to 2.2. Therefore, the reference values 1.3–1.5 suggested for the factor of safety are definitely inadequate when the φ variability is taken into account. As far as the other two slope height cases (H = 10 m and 15 m) are concerned, they give the same results, as coincident curves are obtained. Results from MCS with an adaptive sampling scheme have also been investigated: 30000 samples are needed for the case studied, and the reliability index values obtained are the same as those from FORM. Fig. 7 shows the same evolution of the reliability index curves from MCS.

Figure 3. Reliability index versus anchorage pull for CVφ = 30%.

Figure 4. Reliability index versus anchorage pull for CVφ = 40%.

Figure 5. Reliability index versus anchorage pull for CVφ = 50%.

Figure 6. Factor of safety (Eq. 1) versus reliability index for CVφ = 30%, 40% and 50% and H = 5 m.

Figure 7. Reliability index computed by MCS versus anchorage pull for CVφ = 30, 40 and 50%: H = 5 m (continuous lines); H = 10 m (dotted lines); H = 15 m (dashed lines).
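A Monte Carlo counterpart to the FORM analysis above can be sketched in a few lines. The distributions follow Table 2, but the wedge weight P and anchor pull T are illustrative constants, and the FS expression reuses the assumed decomposition from the sketch in Section 1.1, so the printed numbers are not results of the benchmark.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

alpha = np.radians(rng.normal(40.0, 4.0, n))      # sliding surface slope
beta = np.radians(rng.uniform(5.0, 10.0, n))      # rock face / anchor slope
cov_phi = 0.30                                    # also run with 0.40, 0.50
zeta = np.sqrt(np.log(1.0 + cov_phi**2))
phi = np.radians(rng.lognormal(np.log(35.0) - 0.5 * zeta**2, zeta, n))
P, T = 500.0, 150.0                               # illustrative kN values

fs = ((P * np.cos(alpha) + T * np.sin(alpha + beta)) * np.tan(phi)
      + T * np.cos(alpha + beta)) / (P * np.sin(alpha))
pf = np.mean(fs < 1.0)
print(pf, -norm.ppf(pf))   # failure probability and reliability index
```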
2.4 LRFD method

Finally, an interesting comparison is proposed between the reliability index and the partial factor design method suggested by Eurocode 7 (1997) and by the Italian Technical Building Code (TU 14/01/2008). The combination of factors suggested for the global stability analysis is named Combination 2, in which load factors (A2), resistance factors (M2) and a global factor (R2) are applied together (A2 “+” M2 “+” R2).
Figure 5. Reliability index versus anchorage pull for CVφ = 50%.
In Eq. (7), the design variable values employed are characteristic values: such values are introduced by the Eurocode for limit state design, but no suggestions are provided on how to compute them. The characteristic value of the reinforced concrete compression strength is commonly assumed to be the value with a 95% probability of being exceeded; assuming a normal distribution, it is obtained by subtracting the corresponding fractile multiple of the standard deviation s from the mean value xm. Such an assumption can be much too conservative in the case of geotechnical random variables. As a matter of fact, consider a friction angle with a mean value of 35◦ and a standard deviation of 10.5◦: its characteristic value would be 14.4◦, which is far too conservative. Here the expression from Schneider (1997), also reported by Cherubini and Orr (1999), is used:

xk = xm (1 − 0.5 CV)
Figure 6. Factor of safety (Eq. 1) versus reliability index for CVφ = 30%, 40% and 50% and H = 5 m.
where CV is the coefficient of variation and xm is the random variable mean value. Three cases are then considered according to the CVφ values. Here, the friction angle characteristic value is 30◦. Moreover, as far as Eq. (7) is concerned, the building code requires that the difference be greater than zero, although no minimum value is suggested. Hence, Figs. 8–10 show the positive difference between resistance and action according to Eq. (7) versus the anchorage pull values. The trend, for each slope height, is nonlinear, and its slope decreases as the height increases. The three values of CVφ do not affect the slopes of the trends, but they do affect the magnitude of the anchorage pull needed: when CVφ increases, the anchorage pull must be increased in order to obtain the same positive difference between resistance and action. A meaningful correlation can be observed in Fig. 11, where the reliability index is related to the positive difference between resistance and action for the same anchorage pull value. Each graph corresponds to a different CVφ value for the case of the 5 m slope height.
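The two recipes are easy to compare numerically. A minimal sketch using the 35◦/10.5◦ example above; note that the 14.4◦ quoted in the text corresponds to a fractile coefficient of about 1.96 rather than the one-sided 5% value of 1.645:

```python
# Characteristic value of the friction angle two ways: a normal fractile
# versus Schneider's (1997) rule x_k = x_m (1 - 0.5 CV) used in the paper.
from scipy.stats import norm

xm, s = 35.0, 10.5                        # mean and standard deviation (CV = 30%)
xk_5pct = xm + norm.ppf(0.05) * s         # one-sided 5% fractile: ~17.7 deg
xk_text = xm - 1.96 * s                   # ~14.4 deg, the value quoted in the text
xk_schneider = xm * (1.0 - 0.5 * s / xm)  # ~30 deg, as adopted in the analysis
print(xk_5pct, xk_text, xk_schneider)
```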
Figure 7. Reliability index computed by MCS versus anchorage pull for CVφ = 30, 40 and 50%: H = 5 m (continuous lines); H = 10 m (dotted lines); H = 15 m (dashed lines).
Therefore, the reference values of 1.3–1.5 suggested for the factor of safety are definitely inadequate when the variability of φ is taken into account. The other two slope heights (H = 10 m and 15 m) give the same results, as coincident curves are obtained. Results from MCS with an adaptive sampling scheme have also been investigated: 30,000 samples are needed for the case studied, and the reliability index values obtained are the same as those from FORM. Fig. 7 shows the same evolution of the reliability index curves from MCS.
Figure 11. Combination for sliding condition in LRFD versus reliability index for slope height H = 5 m and CVφ = 30%, 40% and 50%.
Figure 8. Combination for sliding condition in LRFD versus anchorage pull for CVφ = 30%.
From Fig. 11, the LRFD differences between 30 and 60 kPa correspond to the reliability index varying between 2 and 4 for the case of CVφ = 30%. Such a difference should be the same for higher variability, although the lower boundary of the range considered increases as the friction angle variability increases. This means that, in such a case, simply requiring (1) a resistance just higher than the actions, together with (2) characteristic values and partial safety factors, cannot deliver a constant reliability level. The required differences depend on the anchorage pull values, the slope height and the variability. Hence, when the variability of the friction angle is enlarged, larger differences between resistance and loads are needed, and these values vary not only with CVφ but also with the slope height.
3 CONCLUDING REMARKS
A benchmark has been proposed to investigate the stability of an anchored rock slope according to three different methods: the factor of safety, the reliability index, and load and resistance partial factors. Results show that, when the variability of the random variables which govern the limit equilibrium of a rock slope is taken into account:
Figure 9. Combination for sliding condition in LRFD versus anchorage pull for CVφ = 40%.
1. it causes a safety reduction which depends on the variability magnitude;
2. target values for the safety factor shall be increased according to the variability magnitude;
3. the difference between factorized resistance and load must be increased according to the variability magnitude and the height of the slope.

REFERENCES

Bandis, S.C., Lumsden, A.C. & Barton, N. 1981. Experimental studies of scale effects on the shear behaviour of rock joints. Int. Journal of Rock Mech. Min. Sci. and Geoch. Absts. 18: 1–21.
Baecher, G.B. 1982. Simplified geotechnical data analysis. Reliability theory and its application in structural and soil engineering. Thoft-Christensen, P. (ed.), Dordrecht: Reidel Publishing.
Barton, N. & Choubey, V. 1977. The shear strength of rock joints in theory and practice. Rock Mechanics 10(1): 1–54.
Figure 10. Combination for sliding condition in LRFD versus anchorage pull for CVφ = 50%.
Canadian Geotechnical Society 1992. Canadian Foundation Engineering Manual. Vancouver, Canada: BiTech Publishers Ltd.
Cherubini, C. 1997. Data and considerations on the variability of geotechnical properties of soils. In Proceedings of the ESREL’97 International Conference on Safety and Reliability, Lisbon: 1583–1591.
Cherubini, C. & Orr, T.L.L. 1999. Considerations on the applicability of semiprobabilistic Bayesian methods to geotechnical design. In Atti del XX Convegno Nazionale di Geotecnica, Parma: 421–426.
COMREL 1997. Reliability Consulting Programs RCP GmbH, München, Germany.
De Mello, Victor F.B. 1988. Risks in geotechnical engineering: conceptual and practical suggestions. Geotechnical Engineering 19(2): 171–208.
Eurocodice 7, Progettazione geotecnica – Parte 1: Regole generali, UNI ENV 1997-1, Aprile 1997.
Schneider, H.R. 1997. Definition and determination of characteristic soil properties: contribution to discussion. In XIV ICSMFE, Hamburg: Balkema.
Terzaghi, K. & Peck, R. 1967. Soil Mechanics in Engineering Practice. New York: John Wiley and Sons.
Testo Unitario 2008. D.M. Infrastrutture 14 gennaio 2008. Nuove Norme Tecniche per le Costruzioni. Ministero delle Infrastrutture, Ministero dell’Interno, Dipartimento della Protezione Civile.
Wyllie, Duncan C. & Mah, Christopher W. 2004. Rock Slope Engineering. London and New York: Spon Press.
Reliability analysis of a benchmark problem for slope stability Y. Wang, Z.J. Cao, S.K. Au & Q. Wang Department of Building and Construction, City University of Hong Kong, Hong Kong, China
ABSTRACT: This paper presents a benchmark study for slope stability analysis using the Swedish Circle method in conjunction with the general procedure of slices. The purpose is to illustrate the implementation of spatial variability of soil properties (i.e., undrained shear strength Su) in the analysis. The study reveals that different reliability analysis methods use different critical slip surfaces. The pitfalls of using different slip surfaces in different methods are discussed.
1 INTRODUCTION
The limited number of published studies and implementation examples is believed to be one reason for geotechnical practitioners’ reluctance to apply reliability methods to slope stability analysis. In view of this, this paper provides a benchmark example that illustrates the implementation of reliability methods in slope stability analysis. The example focuses on the spatial variability of soil properties and the different critical slip surfaces used in different reliability analysis methods. After this short introduction, a simple slope stability example is described, together with the random variables and solution methods used in the example. Then, the analysis results are presented, and the differences among the results from the different solution methods are highlighted. Finally, the pitfalls of using different critical slip surfaces in different methods are discussed.
2 BENCHMARK EXAMPLE

2.1 Description of the example

Figure 1. Slope stability example.

Figure 1 shows a clay slope with a height and slope angle of 5 m and 45◦, respectively. Stability of the slope is assessed using the Swedish Circle method in conjunction with the general procedure of slices (Duncan and Wright 2005). The factor of safety FS is defined as the critical (minimum) ratio of resisting moment to overturning moment, and the slip surface is assumed to be a circular arc centered at coordinate (x, y) and with radius r. The overturning and resisting moments are summed about the center of the circle to calculate the factor of safety, as shown in Figure 1. For the moment calculations, the soil mass above the slip surface is subdivided into a number of vertical slices, each of which has a weight Wi, a circular slip segment length li, an undrained shear strength Sui along the slip segment, and an angle αi between the base of the slice and the horizontal. The factor of safety is then given by

FS = min over (x, y) and r of [ Σi Sui li / Σi Wi sin αi ]    (1)

where the minimum is taken over all possible choices of slip circles, i.e., all possible choices of (x, y) and r. Note that li, Wi, and αi change as (x, y) and/or r change (i.e., the geometry of the ith slice changes). In addition, Wi is a function of the clay total unit weight γ. Therefore, for a given choice of (x, y) and r, FS depends only on Sui and γ. The performance function P of this slope stability problem can be expressed as

P = min over (x, y) and r of [ Σi Sui li / Σi Wi sin αi ] − 1    (2)

Note that the mathematical operation of “min” in Equations (1) and (2) makes the performance function implicit and non-differentiable.

2.2 Input variables
Table 1. The values and distributions of the input variables.

Variable   Distribution                                        Statistics
Su         Log-normal (the 30 entries in the vector are iid)   Mean = 20 kPa, COV* = 20%
γ          Deterministic                                       18 kN/m3

* “COV” stands for coefficient of variation.

Table 2. Primary analysis results.

Solution method     β      Pf (%)        Relative error in Pf
FOSM                4.74   1.1 × 10−4    −99.9%
FORM                3.89   5.1 × 10−3    −96.8%
MCS with Slope/W    4.34   7.0 × 10−4    −99.6%
MCS with Excel      2.95   1.6 × 10−1    N/A
Subsim with Excel   3.04   1.2 × 10−1    −25.0%
This example considers the spatial variability of soil properties; the soil between the upper ground surface and 15 m below is divided into 30 equal layers with a layer thickness of 0.5 m. A vector Su = [Su(1), Su(2), . . . , Su(30)]T is defined for the values of Su in these 30 layers. Sui for the ith slice is then the average of the entries in Su that correspond to the depths of the slip segment of the ith slice. All entries in Su are taken as independent and identically distributed (i.e., iid) lognormal random variables with a mean and coefficient of variation (i.e., COV) of 20 kPa and 20%, respectively. The total unit weight of clay γ is taken as deterministic with a value of 18 kN/m3. The values and distributions of these basic input variables are summarized in Table 1.
2.3 Solution methods

For a given choice of (x, y) and r, FS is calculated by implementing Equation 1 in an Excel spreadsheet where each row represents a particular slice and each column represents the variables and terms in Equation 1. A VBA code has been written to calculate the ratio of resisting to overturning moment for different values of (x, y) and r and then pick the minimum value as the factor of safety. As a reference, the nominal value of FS, corresponding to the case where all Su values are equal to their mean value of 20 kPa, is 1.25. For the critical slip surface, r = 15 m and (x, y) = (2.7 m, 8.8 m). The calculation results are found to be consistent with results from the commercial slope stability analysis software Slope/W (GEO-SLOPE International Ltd. 2008). Consequently, the Excel spreadsheet model is considered validated and is used in the reliability analysis. The reliability methods employed in this study include the First-Order Second Moment method (FOSM) with a given critical slip surface (Ang and Tang 1984, Tang et al. 1976, Wu 2008), the First-Order Reliability Method (FORM) using the object-oriented constrained optimization tool in the Excel spreadsheet (Low and Tang 1997, Low 2003, Low and Tang 2007), direct Monte Carlo simulation (MCS) using the commercial software Slope/W and the Excel spreadsheet, and Subset simulation using the Excel spreadsheet (Subsim) (Au and Beck 2001). The MCS in Slope/W uses the Swedish Circle method and the general procedure of slices, so the performance function defined by Equation (2) is also applicable. To facilitate direct Monte Carlo simulation and Subset simulation using the Excel spreadsheet, a package of spreadsheets and VBA functions/Add-In is developed in Excel. An uncertainty model spreadsheet is developed for generating random samples (realizations) of the random variables Su. Starting with uniform random numbers provided by the built-in function ‘Rand()’ in Excel, a transformation is performed to produce random samples of the desired distribution. Available VBA subroutines in Excel are used to facilitate the uncertainty modeling. From an input-output perspective, the uncertainty modeling worksheet takes no input but returns a random sample of Su as its output whenever a re-calculation is commanded. The uncertainty model worksheets are then ‘linked’ with the deterministic slope stability analysis spreadsheet through their input/output cells to produce a probabilistic analysis model of the slope stability problem. The value of Su shown in the deterministic analysis worksheet is equal to that generated in the uncertainty modeling worksheet, and so the FS value calculated in the deterministic analysis worksheet is random. In other words, at this stage one can perform a direct Monte Carlo simulation of the problem by repeatedly executing the built-in function ‘Rand()’ in Excel. In addition, a VBA code for Subset Simulation is developed that functions as an Add-In in Excel and can be called from the main menu ‘Tools’ followed by ‘SubSim’. A user form appears upon invoking the function, and the Subset simulation can be performed accordingly. More details on the Excel spreadsheets and VBA functions/Add-In are given in Au et al. (2009).
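The transformation step can be illustrated outside Excel as well. A minimal Python sketch, with moment-matched lognormal parameters for the stated mean of 20 kPa and COV of 20%, and with the uniform-to-lognormal inverse transform standing in for the worksheet's Rand()-based scheme:

```python
# Sketch of the uncertainty-modeling step: turn uniform random numbers
# into iid lognormal samples of the 30-layer Su vector (mean 20 kPa, COV 20%).
import numpy as np
from scipy.stats import norm

mean, cov = 20.0, 0.20
sigma_ln = np.sqrt(np.log(1.0 + cov**2))      # lognormal shape parameter
mu_ln = np.log(mean) - 0.5 * sigma_ln**2      # preserves the 20 kPa mean

rng = np.random.default_rng(1)
u = rng.random((10_000, 30))                  # plays the role of Excel's Rand()
Su = np.exp(mu_ln + sigma_ln * norm.ppf(u))   # inverse transform to lognormal
print(Su.mean(), Su.std() / Su.mean())        # ~20 kPa and ~0.20
```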
2.4 Analysis results

Table 2 summarizes the analysis results from the different reliability methods. In total, 5,000,000 samples were taken in MCS with Slope/W, as the Pf obtained is extremely small. For MCS with Excel, the number of samples is 10,000. For Subsim with Excel, three levels of simulation are performed with 500 samples taken in each level. The second and third columns of Table 2 show the equivalent reliability index β = Φ⁻¹(1 − Pf) and its corresponding probability of failure Pf from FOSM, FORM, MCS with Slope/W, MCS with Excel and Subsim with Excel, respectively. The Pf from FOSM and MCS with Slope/W is on the order of magnitude of 10⁻⁴ %. In contrast, the Pf from MCS and Subsim with Excel is on the order of magnitude of 10⁻¹ %, and the Pf from FORM falls in between (i.e., on the order of magnitude of 10⁻³ %). If the Pf from MCS with Excel is used as the reference
for comparison, the relative error in Pf is about 100% for FOSM, FORM, and MCS with Slope/W. The Pf from Subsim with Excel is reasonably accurate, with a significant increase in computational efficiency (i.e., a decreasing number of samples needed). The substantial difference between the Pf from FOSM and MCS with Slope/W and that from MCS and Subsim with Excel might seem surprising at first glance. Detailed examination shows that the difference can be attributed to the different critical slip surfaces used in the different reliability methods, which are discussed in the next section.

Figure 2. Critical slip surface in Slope/W.
3 PITFALL OF CRITICAL SLIP SURFACE
3.1 FOSM with a given critical slip surface

FOSM is based on a first-order approximation and uses the first terms of a Taylor series expansion of the performance function P. The mean of FS is therefore estimated by setting all Su values equal to their mean value of 20 kPa and searching for the critical slip surface. The resulting mean of FS is 1.25, and the corresponding critical slip surface has r = 15 m and (x, y) = (2.7 m, 8.8 m). For this given critical slip surface, the standard deviation of FS is estimated as 0.053. Note that, for the given critical slip surface, Equation 1 involves only a linear operation (i.e., summation) of the random variables Su, so the higher-order partial derivatives of the equation are zero. The solution from FOSM is therefore reasonably accurate if only the one given critical slip surface is of concern.
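For this fixed surface the FOSM index can be checked directly from the two moments quoted above; a minimal sketch, using the performance function FS − 1 of Equation (2):

```python
# FOSM reliability index for the fixed critical slip surface,
# using the moments quoted in the text (mean FS = 1.25, sigma_FS = 0.053).
from scipy.stats import norm

mean_fs, sigma_fs = 1.25, 0.053
beta = (mean_fs - 1.0) / sigma_fs   # first-order index for P = FS - 1
print(beta, norm.cdf(-beta))        # ~4.7, consistent with Table 2
```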
Figure 3. Examples of critical slip surfaces obtained from MCS with Excel.
3.2 FORM

A practical object-oriented constrained optimization approach proposed by Low (2003) is used in this work to calculate the Hasofer-Lind reliability index. The approach is implemented in an Excel spreadsheet and uses the built-in optimization tool “Solver” to obtain, as the reliability index, the minimum distance between the performance function and the center of an expanding equivalent dispersion ellipsoid in the original space of the random variables. The search for the critical slip surface is accounted for in the approach by including (x, y) and r as additional optimization variables and adding a constraint on FS (i.e., FS = 1) in the optimization for the minimum distance. The variation of potential critical slip surfaces is thus implicitly factored into the analysis. Consequently, as shown in Table 2, the Pf from FORM is more than one order of magnitude larger than that from FOSM, in which only one given critical slip surface is used. On the other hand, when compared with the Pf from MCS with Excel, which includes the variation of critical slip surfaces explicitly, FORM significantly underestimates the Pf. The poor performance of FORM might be attributed to the inadequate linear approximation of the failure criterion (i.e., Equation 1), particularly when the variation of potential critical slip surfaces is accounted for.
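The expanding-ellipsoid idea can be sketched outside the spreadsheet as a generic constrained minimization. The two-variable limit state below is a placeholder loosely based on a stability-number form, not the slope model of Equation 1, and the means and standard deviations are hypothetical:

```python
# Minimal sketch of the Low & Tang (1997) view of FORM: the Hasofer-Lind
# index is the smallest ellipsoid distance from the means to g(x) = 0.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([20.0, 18.0])            # hypothetical means (Su [kPa], gamma [kN/m3])
sd = np.array([4.0, 0.9])              # hypothetical standard deviations
Cinv = np.linalg.inv(np.diag(sd**2))   # independent variables: diagonal covariance

def g(x):                              # placeholder limit state FS(x) - 1 = 0
    su, gam = x
    return su / (0.181 * gam * 5.0) - 1.0

def distance(x):                       # ellipsoid distance from the mean point
    d = x - mu
    return float(np.sqrt(d @ Cinv @ d))

res = minimize(distance, x0=0.9 * mu, constraints={"type": "eq", "fun": g})
print(res.fun, norm.cdf(-res.fun))     # reliability index and first-order Pf
```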
3.3 MCS with Slope/W

Table 2 shows that the Pf from MCS with Slope/W is of the same order of magnitude as that from FOSM. In Slope/W, the MCS only takes into consideration the variability of soil properties, and it uses a critical slip surface that is first determined based on the mean values of the random variables. Figure 2 shows the critical slip surface obtained in Slope/W using the mean values of Su. The critical slip surface in Slope/W is quite consistent with the one used in FOSM. As a result, it is not surprising that the Pf values from FOSM and MCS with Slope/W agree well with each other. However, as the variation of potential critical slip surfaces is not properly accounted for in either method, their results are biased.
3.4 MCS and Subsim with Excel
MCS in Excel starts with the generation of random samples (realizations) of the random variables Su. Then, for each random sample of Su, the critical slip surface is searched for, and the minimum FS is obtained accordingly. Figure 3 shows examples of different critical slip surfaces obtained from different random samples of Su. It is obvious that the critical slip surface changes significantly as the spatial distribution of Su changes from one random sample to another. As a reference, critical slip surface #4 in Figure 3 is the one obtained from the deterministic analysis with mean Su values and used in FOSM and MCS in Slope/W. Table 3 summarizes the ranges of (x, y) and r for the critical slip surfaces obtained from MCS with Excel. The r varies from 9.0 m to 16.0 m, a range of 7.0 m. When the variation of potential critical slip surfaces is considered explicitly in the simulation, the Pf from MCS with Excel is three orders of magnitude larger than that from FOSM and MCS with Slope/W, which use a given critical slip surface. To further illustrate the effect of the variation of critical slip surfaces on the Pf, MCS is also performed in Excel using critical slip surface #4 in Figure 3. As shown in Table 4, the resulting Pf is on the order of 10−4 % and agrees well with those from FOSM and Slope/W, which use the same or a similar critical slip surface. The comparison summarized in Table 4 confirms that the substantial difference among the Pf values from the different methods is mainly attributable to the variation of critical slip surfaces. Subsim is carried out in Excel with either a fixed critical slip surface (i.e., #4 in Figure 3) or a search for critical slip surfaces for the different random samples of Su. As shown in Table 4, the results agree well with those from MCS with Excel. The Pf from Subsim is reasonably accurate with increased computational efficiency (i.e., a decreasing number of samples needed).
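The sample-size burden behind these comparisons follows from the standard coefficient of variation of a direct MCS estimate of Pf (a textbook formula, not from the paper); a quick check with the sample sizes quoted in Section 2.4:

```python
# Coefficient of variation of the direct Monte Carlo estimate of Pf:
# delta = sqrt((1 - Pf) / (N * Pf)); small Pf demands a very large N.
import math

def mcs_cov(pf: float, n: int) -> float:
    return math.sqrt((1.0 - pf) / (n * pf))

print(mcs_cov(1.6e-3, 10_000))     # MCS with Excel: Pf = 1.6e-1 %, ~25% error
print(mcs_cov(7.0e-6, 5_000_000))  # MCS with Slope/W: Pf = 7.0e-4 %, ~17%
```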
Table 3. Ranges of center coordinates and radius for critical slip surfaces obtained from MCS with Excel.

Parameter          Minimum   Maximum   Range
Coordinate x (m)   1.0       3.0       2.0
Coordinate y (m)   6.0       9.6       3.6
Radius r (m)       9.0       16.0      7.0

4 RECOMMENDED RELIABILITY METHODS
When the spatial variability of soil properties is taken into consideration in the reliability analysis of a slope stability problem, the variation of potential critical slip surfaces has a profound impact on the calculated Pf or β. Similar to FOSM with a given critical slip surface, MCS in Slope/W relies on a given critical slip surface obtained from a deterministic analysis with mean values of the soil properties, and hence the impact of the variation of critical slip surfaces is not accounted for in the analysis. Their results are therefore biased. The FORM approach considers the variation of potential critical slip surfaces implicitly. However, it significantly underestimates the Pf due to the inadequate linear approximation of the failure criterion (i.e., Equation 1), particularly when the variation of potential critical slip surfaces is accounted for. Therefore, the use of MCS or Subsim with explicit consideration of critical slip surface variation is recommended. Such consideration can be implemented in the analysis with relative ease, as illustrated in this paper.
Table 4. Comparison of simulation results with different critical slip surfaces.

Solution method                                β      Pf (%)       Relative error in Pf (%)
MCS with Slope/W and fixed critical slip       4.34   7.0 × 10−4   −99.6
MCS with Excel and fixed critical slip         4.38   6.0 × 10−4   −99.6
MCS with Excel and changing critical slip      2.95   1.6 × 10−1   N/A
Subsim with Excel and fixed critical slip      4.53   3.0 × 10−4   −99.8
Subsim with Excel and changing critical slip   3.04   1.2 × 10−1   −25.0
5 CONCLUDING REMARKS
This paper presents a benchmark example for slope stability analysis using the Swedish Circle method in conjunction with the general procedure of slices. The spatial variability of soil properties is taken into consideration in the reliability analysis, and the effect of the variation of potential critical slip surfaces is highlighted. Several reliability methods are implemented to investigate their feasibility and efficiency. Similar to FOSM with a given critical slip surface, MCS in Slope/W does not account for the variation of potential critical slip surfaces and only uses a given critical slip surface obtained from a deterministic analysis with mean values of the soil properties. Their results are therefore biased. The variation of potential critical slip surfaces is implicitly considered in the FORM approach proposed by Low and his collaborators. However, it significantly underestimates the Pf due to the inadequate linear approximation of the failure criterion. MCS and Subsim with explicit consideration of the variation of critical slip surfaces are implemented in an Excel spreadsheet environment. They are shown to provide reasonable results, and hence their use is recommended, at the expense of computation time and effort.
ACKNOWLEDGEMENTS The work described in this paper was supported by General Research Fund [Project No. 9041327 (CityU 110108)] and Competitive Earmarked Research Grant [Project No. 9041260 (CityU 121307)] from the Research Grants Council of the Hong Kong Special
Administrative Region, China. The financial support is gratefully acknowledged.
REFERENCES

Ang, A. H.-S. and Tang, W. H. 1984. Probability Concepts in Engineering Planning and Design, Vol. II, Wiley, New York.
Au, S. K., Wang, Y., and Cao, Z. J. 2009. Reliability analysis of slope stability by advanced simulation with spreadsheet. Proceedings of the 2nd International Symposium on Geotechnical Safety and Risk (IS-Gifu2009), June 2009, Gifu, Japan (submitted).
Au, S. K. and Beck, J. L. 2001. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics, 16(4): 263–277.
Duncan, J. M. and Wright, S. G. 2005. Soil Strength and Slope Stability, John Wiley & Sons, Inc., New Jersey.
GEO-SLOPE International Ltd. 2008. Stability Modeling with Slope/W 2007 Version, GEO-SLOPE International Ltd, Calgary, Alberta, Canada.
Low, B. K. 2003. Practical probabilistic slope stability analysis. Proceedings of Soil and Rock America, MIT, Cambridge, MA, June 2003, Verlag Glückauf GmbH, Essen, Germany, Vol. 2, 2777–2784.
Low, B. K. and Tang, W. H. 1997. Efficient reliability evaluation using spreadsheet. Journal of Engineering Mechanics, 127(7): 149–152.
Low, B. K. and Tang, W. H. 2007. Efficient spreadsheet algorithm for first-order reliability method. Journal of Engineering Mechanics, 133(2): 1378–1387.
Tang, W. H., Yucemen, M. S., and Ang, A. H.-S. 1976. Probability-based short term design of soil slopes. Canadian Geotechnical Journal, 13: 201–215.
Wu, T. H. 2008. Reliability analysis of slopes. Reliability-Based Design in Geotechnical Engineering: Computations and Applications, Chapter 11: 413–447, edited by Phoon, Taylor & Francis.
Geotechnical code drafting based on limit state design and performance based design concepts
Developing LRFD design specifications for bridge shallow foundations S.G. Paikowsky Geotechnical Engg Research Lab, University of Massachusetts, Lowell, Massachusetts, USA & Geosciences Testing and Research Inc., N. Chelmsford, Massachusetts, USA
S. Amatya Geotechnical Engg Research Lab, University of Massachusetts, Lowell, Massachusetts, USA
K. Lesny & A. Kisse University of Duisburg-Essen, Essen, Germany
ABSTRACT: An ongoing project, supported by the National Cooperative Highway Research Program, NCHRP Project 24-31, aims to develop LRFD procedures and to modify the current AASHTO design specifications for Ultimate Limit State (ULS) design of bridge shallow foundations. The current study utilizes a comprehensive database of 549 cases of shallow foundation load tests under various loading conditions (i.e. vertical-centric, vertical-eccentric, inclined-centric and inclined-eccentric). In this paper, the procedure adopted in the research study to establish the LRFD design is introduced. The design methods used for ULS design of bridge shallow foundations are presented, and the uncertainty in the estimation of the ultimate bearing capacity is expressed in terms of a bias, defined as the measured over the calculated capacity. The biases in the estimation of the ultimate bearing capacity have been appraised based on the database. Typical bridge foundation loadings and their uncertainties are defined and utilized, along with the resistance uncertainties, to establish resistance factors. The investigations lead to the conclusion that a single resistance factor for the bearing capacity is not sufficient, as different loading conditions result in different levels of uncertainty. Hence, different resistance factors have been established based on the First Order Second Moment (FOSM) method and Monte Carlo simulations (MCS), each for the vertical-centric, vertical-eccentric, inclined-centric and inclined-eccentric loading conditions. The recommended preliminary resistance factors thus obtained in the study are presented.

1 INTRODUCTION
An ongoing project, NCHRP Project 24-31: LRFD design specifications for shallow foundations, is aimed at developing LRFD procedures and modifying the current AASHTO design specifications for the Ultimate Limit State (ULS) design of bridge shallow foundations. It is supported by the National Cooperative Highway Research Program (NCHRP) under the Transportation Research Board (TRB) of the National Academy of Science (NAS). The AASHTO specifications are traditionally observed as a national code of US highway practice on all federally aided projects; hence, they influence the construction of highway bridge and other structure foundations across the USA. The current AASHTO specifications, as well as other existing codes based on Load and Resistance Factor Design (LRFD) principles, were calibrated using a combination of reliability theory, fitting to ASD (allowable stress design) and engineering judgment. The main objectives of this project therefore are the compilation of a database of load tests on shallow foundations and the calibration of resistance factors based on reliability analysis of the data, in order to obtain more rational designs with consistent levels of reliability. The challenges posed by the second objective include overcoming generic difficulties in applying the LRFD methodology to geotechnical applications, i.e. the evaluation of uncertainty in the geotechnical model incorporating, e.g., indirect variability (site or soil parameter interpretation), load dependency of the geotechnical resistance (especially in the case of shallow foundations, where a strict separation between load and resistance is not possible), judgment (e.g. previous experience), and other similar factors.
2 EVALUATION OF BEARING CAPACITY UNCERTAINTY

2.1 Database
This research study utilizes a comprehensive database of load tests on shallow foundations, UML-GTR ShalFound07, for the evaluation of uncertainties in bearing capacity (BC) estimation. It contains 549 cases of load tests, mostly performed in Germany and the USA. It has been compiled from various publications, notably using four major sources: (a) ShalDB Ver5.1 (Briaud & Gibbens 1997), (b) Lutenegger
Table 1. Summary of UML-GTR ShalFound07 database.

                                 Predominant soil type
Foundation type                  Sand   Gravel   Mix   Others   Total
Plate load tests (B ≤ 1 m)       346    46       2     72       466
Small footings (1 m < B ≤ 3 m)   26     2        4     1        33
Large footings (3 m < B ≤ 6 m)   30     –        1     –        31
Rafts and mats (B > 6 m)         13     –        5     1        19
Total                            415    48       12    74       549

Notes: “Mix”: alternating layers of sand or gravel and clay or silt. “Others”: either unknown soil types or other granular materials like loamy Scoria.
Figure 1. Failure criterion based on the minimum slope of the load-settlement curve (Vesić 1963; modified to show settlement of 0.1B); 1 psi ≈ 6.9 kPa, 1 pcf = 0.157 kN/m3, 1 in = 25.4 mm.
& DeGroot (1995), (c) a German test database in a set of volumes (e.g. Muhs & Weiss 1972) compiled by DEGEBO (Deutsche Forschungsgesellschaft für Bodenmechanik) and (d) tests carried out or compiled by the University of Duisburg-Essen, Germany (some of which are presented in Perau 1995). The database summary is presented in Table 1. Most cases relate to foundations subjected to vertical-centric loading in or on granular soils. Tests of foundations subjected to combined loadings (vertical-eccentric, inclined-centric and inclined-eccentric) were mainly small-scale model tests performed in controlled soil conditions (in laboratories using soils of known particle size and controlled compaction).
2.2 Failure load criterion and measured bearing capacity

Vesić (1975) suggested that the failure (ultimate) load be taken as the load corresponding to the point where the slope of the load-settlement curve first reaches zero or a steady, minimum value. The interpreted ultimate loads for different load tests are shown as black dots in Figure 1 (Vesić 1963). In soils with higher relative densities, there is a higher possibility of failure in the general shear mode and the failure load can be clearly identified, e.g. for test number 61. There are also cases in which the identification of the “minimum value” becomes subjective, depending on the soil relative density Dr, e.g. for test number 64 in the figure. Hence, the interpretation of the failure load of a shallow foundation from a load test is complex, as the failure mode (general, local or punching shear) depends not only on the type of soil (categorized according to the relative density or the relative stiffness), but also on the footing embedment and loading type. Except for the case of general shear failure, in which the failure load is clearly defined by a peak in the load-settlement curve, judgment is often required to interpret a unique failure load. Examination of the load test results in the database reveals that failures for which the load-settlement curves do not show a clear peak prevail, hence the interpretation of the failure loads for footings in most cases becomes difficult. In addition, many load tests in the database have been found to be unsuitable for use in the present study as they were not carried out to failure. This is especially the case for larger foundations, for which failure would be associated with very large loads and excessive displacements well beyond the service limits. A comparison of the failure loads for 195 cases of database UML-GTR ShalFound07 showed that the Minimum Slope failure criterion provided the most consistent interpretation when compared with interpreted loads using two other methods: the log-log load-settlement curve method (De Beer 1967) and the two-slope criterion described in NAVFAC (1986). Hence the measured bearing capacity has been established using the Minimum Slope failure criterion. A total of 267 load test cases in/on granular soils could be used for the evaluation of uncertainties in the bearing capacity employing the Minimum Slope failure criterion. Of the 267 cases, 172 foundations are under vertical-centric, 42 under vertical-eccentric, 39 under inclined-centric and 14 under inclined-eccentric loadings. Fourteen of the 172 footing cases under vertical-centric loadings are in natural soil conditions (for which SPT blow counts are available) from 8 sites. The average friction angles and unit weights are in the ranges 29.8◦ to 39◦ and 14.5 kN/m3 to 19.7 kN/m3, respectively. There are 158 cases in controlled soil conditions from 7 sites. The average friction angles of these soils range from 34.6◦ to 46.0◦, with most cases lying between 42◦ and 46◦. The average unit weights range from 10.2 kN/m3 to 18.4 kN/m3. The width of footings under vertical-centric loading ranges from 5 cm to 3 m, with nearly half of the footings of size 9 cm × 9 cm; 104 of the footings are square, 63 are rectangular and 5 are circular. The load tests under all other loadings, i.e. vertical-eccentric, inclined-centric and inclined-eccentric, have been carried out in controlled soil conditions. The averages of the soil friction angles for
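The Minimum Slope criterion lends itself to a simple numerical treatment; the sketch below applies it to a synthetic load-settlement curve (the curve shape and the 5% slope tolerance are illustrative assumptions, not from the database):

```python
# Sketch of the Minimum Slope failure criterion (Vesic 1975): take the
# failure load where the slope of the load-settlement curve first reaches
# a steady, minimum value. The curve below is synthetic and illustrative.
import numpy as np

settle = np.linspace(0.0, 100.0, 201)                          # settlement [mm]
load = 1000.0 * (1.0 - np.exp(-settle / 20.0)) + 2.0 * settle  # load [kN]

slope = np.gradient(load, settle)                   # dQ/ds along the curve
i_fail = int(np.argmax(slope <= 1.05 * slope[-1]))  # first near-steady-minimum
print(settle[i_fail], load[i_fail])                 # interpreted failure point
```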
vertical-eccentric, inclined-centric and inclined-eccentric load tests are 42.0◦, 43.4◦ and 44.9◦, respectively, while the averages of the soil unit weights are 16.8 kN/m3, 17.2 kN/m3 and 17.4 kN/m3, respectively. The footing widths range from 5 cm to 1.0 m for the vertical-eccentric and inclined-centric load tests, while the inclined-eccentric load tests were carried out on footings 9 cm wide. The loadings in the inclined-eccentric load tests have been applied in two ways: (a) along a radial load path and (b) along a step-like load path. In the radial load path, the load components are increased together such that both the load eccentricity and the load inclination angle are kept constant until failure. In the step-like load path, the vertical loading is kept constant while the horizontal loading is gradually increased until failure. Hence, for the step-like load path, the failure load can be interpreted from the load-displacement curve in the horizontal direction only. In addition, for the case of inclined-eccentric loading, the bearing capacity of the foundation may increase or decrease depending on the relative directions of the load eccentricity and inclination, as shown in Figure 2. The upper combination in Figure 2 is the case of positive or reversible moment, and the lower is the case of negative moment. The foundation BC is greater in the case of positive moment than in the case of negative moment for the same magnitudes of load eccentricity and inclination.

Figure 2. Positive moment (upper) and negative moment (lower) for inclined-eccentric loadings.

2.3 Calculated bearing capacity

The equation specified in AASHTO (2007), given in Equation 1 and based on Vesić (1975), has been used to calculate the bearing capacity of a footing of length L and width B supported by a soil with cohesion c, average friction angle φf and average unit weight γ:

qu = c Ncm + γ Df Nqm Cwq + 0.5 γ B Nγm Cwγ    (1)

where qu is the calculated bearing capacity, Ncm = Nc sc ic, Nqm = Nq sq dq iq, Nγm = Nγ sγ iγ, and Cwq and Cwγ are reduction coefficients for the presence of groundwater. For a groundwater table at the ground surface (Dw = 0.0), Cwq = 0.5, while Cwq = 1.0 when Dw is at the footing embedment depth (Df) or below; Cwγ = 0.5 for Dw ≤ Df and 1.0 when Dw is greater than 1.5B + Df, with values for intermediate groundwater depths interpolated. For granular soils, c = 0, and hence only the terms with Nqm and Nγm in Equation 1 come into play. The equations for the bearing capacity factors Nq and Nγ, based on the proposals by Reissner (1924) and Vesić (1973, 1975), are, respectively:

Nq = e^(π tan φf) tan²(45◦ + φf/2)    (2)

Nγ = 2 (Nq + 1) tan φf    (3)

The shape factors si used in the present calculations are those proposed by De Beer (1961, 1970) and Vesić (1973), which are also used in AASHTO (2007). For the depth factors di, it is logical to use a consistent set of equations given by the same author, here Vesić (1973, 1975); hence, the proposal by Brinch Hansen (1970) and Vesić (1973) for the depth factor dq is used instead of the discrete values provided in AASHTO (2007). For inclined loading cases, the load inclination factors ii proposed by Vesić (1975) have been used, in which H and V are the horizontal and vertical components of the applied inclined load P, c is the soil cohesion, and θ is the projected direction of the load in the plane of the footing, measured from the side of length L in degrees. In the case of eccentric loading, the effective footing dimensions L′ = L − 2eL and B′ = B − 2eB are to be used in Equations 1 through 6 instead of the full footing dimensions B × L. For the bearing capacity calculations, the soil strength parameters are taken as the weighted averages
of the strength parameters to a depth of twice the footing width below the footing base. For footing cases with missing soil parameters, correlations of the parameters with SPT values have been used for estimation. A correlation given by Peck, Hansen and Thornburn, as modified by Kulhawy & Mayne (1990), has been used to estimate the soil friction angles, while for the soil unit weight a correlation proposed by Paikowsky et al. (2005) has been used, as given by Equations 7 and 8, respectively. The values of N60 corrected for overburden, (N1)60, have been obtained based on the proposal by Liao & Whitman (1986):

(N1)60 = N60 (pa/σv)^0.5

where pa is the atmospheric pressure (≈100 kPa or 1 tsf) and σv is the effective overburden pressure in the same units as the atmospheric pressure.
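Equations 2 and 3 are straightforward to evaluate; a short sketch tabulating Nq and Nγ over the friction-angle range relevant to the bias discussion that follows:

```python
# Bearing capacity factors from Equations 2-3 (Reissner 1924; Vesic 1973):
# Nq = exp(pi tan(phi)) tan^2(45 + phi/2), Ngamma = 2 (Nq + 1) tan(phi).
import numpy as np

for phi_deg in (30.0, 35.0, 40.0, 42.0, 46.0):
    phi = np.radians(phi_deg)
    nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.radians(45.0) + phi / 2.0) ** 2
    ng = 2.0 * (nq + 1.0) * np.tan(phi)
    print(phi_deg, round(nq, 1), round(ng, 1))   # e.g. phi = 40 -> 64.2, 109.4
```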
2.4 Summary of mean bias values
The uncertainties in the aforementioned design method are expressed as a bias, defined as the ratio of the measured over the calculated bearing capacity. This lumped value includes all sources of uncertainty in the BC prediction, such as the model uncertainties (e.g. BC factors, foundation scale effects, etc.), the variation of soil properties and their interpretation, the capacity interpretation, etc. The biases in the present research have been studied according to the loading type, namely vertical-centric, vertical-eccentric, inclined-centric and inclined-eccentric loadings, as well as the nature of the soil, differentiating between natural and controlled soil conditions. Vertical-Centric Loading: The mean and coefficient of variation (COV) of the biases are calculated for the 172 vertical-centric loading cases. Figure 3 summarizes the mean bias grouped by test soil conditions and footing widths. The mean bias for the footings in natural soil conditions is found to be around 1.0 irrespective of the footing size (the largest footing tested being of 3.0 m width). In contrast, for the footings in controlled soil conditions the mean bias value is about 1.7 to 2.2. For footings in controlled soil conditions, there is less variation in the biases for small footings (B ≤ 0.1 m) than for the larger footings, even though the tests are from a larger number of sites. Compared to the tests in controlled soil conditions, the biases for those in natural soil conditions show higher variation even when the number of sites is comparable. The higher mean bias reflects conservatism (under-prediction) in the theoretical prediction of the BC factor Nγ by Vesić (1973), which was found to represent the lower bound of the back-calculated values from the load tests
Figure 3. Summary statistics of the bias (measured over calculated BC) for footings under vertical-centric loadings.
especially in the range of soil friction angles between 42◦ and 46◦. The bias of Nγ (back-calculated from the tests over the Vesić value) is found to increase from 0.95 to 2.18 for an increase in the friction angle from 42◦ to 46◦. In both the natural and controlled soil conditions, there is a trend of increasing bias with increasing footing size. It may hence be logically stated that the evaluation of uncertainties from small-scale model tests to failure is valid for prototype large footings too. This inference is of importance especially for the cases under combined loadings, for which testing on large-scale models is limited for economic and practical reasons. Other Loadings: For the 42 cases of vertical-eccentric loadings, the bias is found to have a mean of 1.81 and a COV of 0.349. For the 39 cases of inclined-centric loadings, the bias is found to have a mean of 1.43 and a COV of 0.295. There are 8 cases of positive or reversible moment for inclined-eccentric loadings and 6 cases of negative moment; the mean and COV for the former are found to be 1.41 and 0.278, respectively, and for the latter 2.03 and 0.094, respectively.
3 RELIABILITY ANALYSES
3.1 Typical bridge foundation loading and load factors

The loading condition has been taken as that used by Paikowsky et al. (2004) in establishing the LRFD for deep foundations. The load combination defined as Strength I in AASHTO is applied as follows in its primary form:

φ R ≥ γD D + γL LL    (10)
Table 2. Statistical details of the biases of the bearing capacities of shallow foundations in/on granular soils and resistance factors under different loadings.

                                                                                        Resistance factor φ
Loading type                          Underlying soil   No. of   No. of   Mean     COVλ    MCS     FOSM    Recommended
                                      conditions        cases    sites    bias λ
Vertical-centric                      Controlled        158      7        1.73     0.271   0.937   0.793   0.90
Vertical-centric                      Natural           14       8        1.00     0.329   0.457   0.396   0.45
Vertical-eccentric                    Controlled        42       4        1.81     0.349   0.779   0.680   0.75
Inclined-centric                      Controlled        39       3        1.43     0.295   0.722   0.617   0.70
Inclined-eccentric,
  positive or reversible moment       Controlled        8        1        1.41     0.278   0.748   0.635   0.70
Inclined-eccentric, negative moment   Controlled        6        1        2.03     0.094   1.773   1.318   1.00

Note: λ = bias = measured over predicted; COVλ = coefficient of variation of the bias; MCS = Monte Carlo Simulation using 100,000 iterations; FOSM = First Order Second Moment.
where R is the resistance or bearing capacity of the shallow foundation, D is the dead load and LL is the vehicular live load. The statistical characteristics of the random variables D and LL are assumed to be those used in NCHRP Report 368 (Nowak, 1999). The load factors, γL for live load and γD for dead load, are taken from AASHTO (2007) (Tables 3.4.1-1 and 3.4.1-2); for the Strength I combination, γD = 1.25 and γL = 1.75.
Further, Paikowsky et al. (2004) examined the influence of the dead load to live load ratio, demonstrating very little sensitivity of the resistance factors to that ratio, with an overall decrease of the resistance factors as the dead load to live load ratio increases. Large dead-to-live load ratios represent conditions of bridge construction typically associated with very long bridge spans. The relatively small influence of the dead-to-live load ratio on the resistance factor led Paikowsky et al. (2004) to adopt a typical ratio of 2.0, knowing that the obtained factors are by and large applicable to long-span bridges while remaining on the conservative side. This ratio was therefore adopted for the present study calibrations as well. Equation 10 above does not include the effects of horizontal earth pressure; hence, the resistance factors developed and presented in this paper do not include the uncertainties due to horizontal earth pressure. Except for footings under vertical-centric loading, the resistance factors would likely be different, especially for the cases under inclined-eccentric loadings, when the lateral loading due to horizontal earth pressure is also considered.

3.2 Resistance factor calibration

The preliminary resistance factors, based on the evaluation of biases in the ultimate limit state estimation of shallow foundations in/on granular soils presented in the previous section, have been calibrated for a target reliability βT of 3.0, i.e. a target exceedance probability of 0.135%, assuming lognormal distributions for loads and resistance. The resistance factors obtained from the FOSM (the original AASHTO calibration procedure, Barker et al. 1991) and from Monte Carlo simulation using 500,000 simulations, along with the recommended factors for shallow foundations under the different loading types, are presented in Table 2.
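Given the bias statistics in Table 2, the FOSM column can be reproduced with the closed-form calibration expression used in the original AASHTO work. In the sketch below, the Strength I load factors and the Nowak (1999) load statistics (λD = 1.05, COVD = 0.10; λLL = 1.15, COVLL = 0.20) are treated as assumed inputs:

```python
# FOSM calibration of the resistance factor (closed form after the original
# AASHTO calibrations, Barker et al. 1991), with Strength I load factors,
# load statistics after Nowak (1999) and a dead/live load ratio of 2.0.
import math

def phi_fosm(lam_r, cov_r, beta_t=3.0, dl_ll=2.0,
             gam_d=1.25, gam_l=1.75,          # AASHTO Strength I load factors
             lam_d=1.05, cov_d=0.10,          # dead load bias/COV (Nowak 1999)
             lam_l=1.15, cov_l=0.20):         # live load bias/COV (Nowak 1999)
    cov_q2 = cov_d**2 + cov_l**2
    num = lam_r * (gam_d * dl_ll + gam_l) * math.sqrt((1 + cov_q2) / (1 + cov_r**2))
    den = (lam_d * dl_ll + lam_l) * math.exp(
        beta_t * math.sqrt(math.log((1 + cov_r**2) * (1 + cov_q2))))
    return num / den

print(phi_fosm(1.73, 0.271))  # ~0.79, vertical-centric, controlled (Table 2)
print(phi_fosm(1.00, 0.329))  # ~0.40, vertical-centric, natural (Table 2)
```

Both vertical-centric rows of the FOSM column are recovered to the printed precision, which supports the assumed load statistics.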
4 SUMMARY AND CONCLUSION
The NCHRP 24-31 research project aims to develop LRFD procedures and to modify the current AASHTO design specifications for the Ultimate Limit State (ULS) design of bridge shallow foundations. The research study utilizes a comprehensive database of load tests to establish the uncertainty in the bearing capacity (BC) calculation. The failure loads in the model tests have been determined by different failure criteria, among which the minimum slope criterion proposed by Vesić (1963) was employed for the interpretation of the database load tests; this criterion was found to be the most appropriate one for identifying the failure load. The uncertainties in the design method were expressed by the bias, defined as the ratio of measured over calculated BCs. This lumped value includes all sources of uncertainty in the BC prediction originating from the model (e.g. BC factors, scale effects), variation in soil properties, etc. The bearing capacity analysis of shallow foundations on controlled soil conditions was found to systematically under-predict the measured capacity for all examined load combinations, namely vertical-centric, vertical-eccentric, inclined-centric and inclined-eccentric. This under-prediction results in a bias varying between 1.4 and 2.0, with COV values of approximately 0.3 (excluding the limited cases of inclined-eccentric loading under negative moment). Investigation of the bearing capacity factor Nγ suggests that a similar bias exists in this factor, assuming all other factors are known and measured. The bias in Nγ was found to be related to the soil internal friction angle: the bias (i.e. the BC under-prediction) increases with the increase in the
internal friction angle. The controlled soil conditions of the examined cases suggest that the bias in the BC factor Nγ may explain a large part of the uncertainty in the BC calculations. This statement and its validity are currently being further evaluated. Interpreting the findings for foundations tested in/on natural soils is more difficult, as such soils introduce larger variability in the soil type and strength parameter interpretation. It is noticeable, however, that all of the tested cases on natural soils had internal friction angles lower than those for controlled soils and hence fall in the range for which the BC factor Nγ did not exhibit a bias. Though these observations qualitatively match the data analyzed, the implementation of the indicated findings for natural soil conditions is thus incomplete at this stage. In both the natural and controlled soil conditions, however, there was a trend of increasing bias with increasing footing size. It may hence be logically stated that the evaluation of uncertainties from small-scale model tests to failure is valid for prototype large footings too. This inference is of importance especially for the cases under combined loadings, for which testing on large-scale models is limited for economic and practical reasons. The initial investigation leads to the conclusion that a single resistance factor for the bearing capacity is not sufficient, as different soil and/or loading conditions result in different levels of uncertainty. Hence, different resistance factors were established based on probabilistic analyses (FOSM and Monte Carlo simulations) for each of the vertical-centric, vertical-eccentric, inclined-centric and inclined-eccentric loading conditions.

DISCLAIMER AND ACKNOWLEDGEMENTS

The presented results are preliminary findings of ongoing research; the final results may change due to further investigation. The presented research was sponsored by the American Association of State Highway and Transportation Officials (AASHTO) under project NCHRP 24-31, awarded to Geosciences Testing and Research, Inc. (GTR) of North Chelmsford, Massachusetts. The presented parameters are preliminary and do not reflect recommended or approved parameters. The opinions and conclusions expressed or implied in this paper are entirely those of the research agency and not necessarily those of the Transportation Research Board (TRB), the National Research Council (NRC), the Federal Highway Administration (FHWA) or AASHTO. The collection and analysis of database UML-GTR ShalFound07 was conducted under a contract with the Geotechnical Research Laboratory of the University of Massachusetts Lowell by the graduate students Ms. Yu Fu and Mr. Jenia Nemirovsky and the postdoctoral fellow Dr. Shailendra Amatya. The help and dedication of Ms. Mary Canniff in conducting the research is appreciated.
REFERENCES

AASHTO. 2007. Section 10: Foundations, LRFD Bridge Design Specifications, American Association of State Highway and Transportation Officials, Washington, D.C.
Barker, R.M., Duncan, J.M., Rojiani, K.B., Ooi, P.S.K., Tan, C.K. & Kim, S.G. 1991. Manuals for the Design of Bridge Foundations, NCHRP Report 343, Transportation Research Board, National Research Council, Washington, D.C., 308p.
Briaud, J.L. & Gibbens, R. 1997. Large scale load tests and database of spread footings on sand, USDoT, Federal Highway Administration, Publication No. FHWA-RD-97-068, Washington, D.C., 228p.
Brinch Hansen, J. 1970. A Revised and Extended Formula for Bearing Capacity. Akademiet for de Tekniske Videnskaber, Geoteknisk Institut, Bulletin No. 28, Copenhagen: 5–11.
De Beer, E.E. 1967. Proefondervindelijke bijdrage tot de studie van het gransdragvermogen van zand onder funderingen op staal; Bepaling von der vormfactor sb. Annales des Travaux Publics de Belgique, 68(6): 481–506.
De Beer, E.E. 1970. Experimental Determination of the Shape Factors and the Bearing Capacity Factors of Sand. Géotechnique, 20(4): 387–411.
Kulhawy, F. & Mayne, P. 1990. Manual on Estimation of Soil Properties for Foundation Design, Report EPRI-EL-6800, Electric Power Research Institute, Palo Alto, Calif.
Lutenegger, A.J. & DeGroot, D.J. 1995. Settlement of shallow foundations on granular soils, Report UMTC-94-4, Univ. of Massachusetts Transportation Center, Amherst, Mass., USA (for Massachusetts Highway Department, Transportation research project Contract #6332 Task #4).
Muhs, H. & Weiss, K. 1972. Der Einfluss von Neigung und Ausmittigkeit der Last auf die Grenztragfaehigkeit flachgegruendeter Einzelfundamente, Degebo-Mitteilungen, Heft 28 (in German).
NAVFAC. 1986. Naval Facilities Engineering Command Design Manual DM 7.01.
Nowak, A. 1999. Calibration of LRFD Bridge Design Code, NCHRP Report 368. Transportation Research Board of the National Academies, Washington, D.C.
Paikowsky, S.G., Birgisson, B., McVay, M., Nguyen, T., Kuo, C., Baecher, G., Ayyub, B., Stenersen, K., O’Malley, K., Chernauskas, L. & O’Neill, M. 2004. Load and Resistance Factor Design (LRFD) for Deep Foundations, NCHRP Report 507. Transportation Research Board of the National Academies, Washington, D.C., 126pp.
Paikowsky, S.G., Fu, Y. & Lu, Y. 2005. LRFD foundation design implementation and specification development, NCHRP Project 20-07, Task 183, National Cooperative Highway Research Program, Transportation Research Board, National Research Council, Washington, D.C.
Perau, E.W. 1995. Ein systematischer Ansatz zur Berechnung des Grundbruchwiderstands von Fundamenten, Mitteilungen aus dem Fachgebiet Grundbau und Bodenmechanik, Heft 19 der Universität Essen, edited by Prof. Dr.-Ing. W. Richwien (in German).
Vesić, A. 1963. Bearing capacity of deep foundations in sand, Highway Research Record 39, National Academy of Sciences, National Research Council: 112–153.
Vesić, A. 1973. Analysis of ultimate loads of shallow foundations, J. of the Soil Mechanics and Foundations Div., ASCE, 99(SM1): 45–73.
Vesić, A. 1975. Bearing capacity of shallow foundations. In H.F. Winterkorn & H.Y. Fang (eds.), Foundation Engineering Handbook: 121–147. New York: Van Nostrand Reinhold.
Limit states design concepts for reinforced soil walls in North America R.J. Bathurst & B. Huang GeoEngineering Centre at Queen’s-RMC, Royal Military College of Canada, Kingston, Canada
T.M. Allen Washington State Department of Transportation, Olympia, Washington, USA
ABSTRACT: Limit states design (LSD) calibration for reinforced soil retaining wall design (called Mechanically Stabilized Earth Walls in USA terminology) has until recently been restricted to comparison to allowable stress design (ASD) practice (often called “calibration by fitting”). This paper presents LSD calibration principles and traces the steps required to arrive at load and resistance factors using closed-form solutions or Monte Carlo simulation for one typical limit state – pullout of steel reinforcement elements in the anchorage zone of a reinforced soil wall. A unique feature of this paper is that measured load and resistance values from a database of case histories are used to develop the statistical parameters in the example. Furthermore, model bias is considered explicitly in the calculations. The paper also addresses issues related to the influence of outliers in the data sets and possible dependences between variables that can have an important influence on the results of calibration.
1 INTRODUCTION

The United States and Canada are committed to a limit states design (LSD) approach for the design of geotechnical structures. In the United States the term load and resistance factor design (LRFD) is used. Most engineers are familiar with LSD practice based on experience with civil engineering design codes. However, the methodologies used to carry out calibration are less well understood, and this has proven to be an impediment to the acceptance of LSD for the design of conventional retaining wall structures. The challenge of adopting LSD for modern reinforced soil wall structures is further compounded by the complexity of the limit state mechanisms, the poor prediction accuracy of some of the underlying deterministic models and the lack of data to perform rigorous calibration. In this paper we present LSD calibration principles and trace the steps required to arrive at load and resistance factors using closed-form solutions for one typical limit state – pullout of steel reinforcement elements in the anchorage zone of a reinforced soil wall. A unique feature of this paper is that measured load and resistance values from a database of case histories are used to develop the statistical parameters in the example. Furthermore, the statistics are computed for model bias, which allows the accuracy of the underlying deterministic models for load and resistance to be considered quantitatively in LSD calibration. The paper also addresses issues related to the influence of outliers in the data sets and possible dependences between variables that can have an important influence on the results of calibration. An expanded treatment of the same general topic can be found in the references (Bathurst et al. 2008, Allen et al. 2005).
2 BACKGROUND
In North America, LSD (or load and resistance factor design – LRFD) is based on a factored resistance approach. The general approach can be expressed as:

ϕ Rn ≥ Σ γi Qni (1)
where: Qni = nominal (specified) load, Rn = nominal (characteristic) resistance, γi = load factor, and ϕ = resistance factor. In the North American approach, uncertainty in the calculation of the resistance side of the equation is captured by a single resistance factor while load contributions are assigned (typically) different load factors. Load factor terms have values γi ≥ 1, while the resistance term should have a value ϕ ≤ 1. It is important to note that there is a fundamental difference between North American and European approaches to LSD. A factored strength approach is used in Europe (Eurocode 7) where resistance factors are applied to individual parameters in resistance-side expressions. Examples of North American codes are the AASHTO LRFD Bridge Design Specifications (AASHTO 2007), National Building Code of Canada (NRC 2005) and the Canadian Highway Bridge Design Code (CHBDC 2006). The general approach for LSD of earth structures described in the UK standard (factored strength approach) has been adopted by agencies
in Australia (RTA 2003) and Hong Kong (Geoguide 6 2002). An excellent discussion of the difference in approaches between Europe and North America and the development of LSD methods in geotechnical engineering can be found in the paper by Becker (1996).
3 EXAMPLE LIMIT STATE
In this paper we use the example of pullout of steel grid reinforcement for the internal stability of reinforced soil walls. The resistance term R is expressed as:

R = Tpo = 2 F* σv Le (2)
where: Tpo = pullout capacity (kN/m), Le = anchorage length (m), σv = γs z = vertical stress (kPa), γs = soil unit weight (kN/m3), and F* = dimensionless pullout resistance factor (in this case, F* is a function of the thickness and horizontal spacing of the reinforcement transverse bars, and depth z of the reinforcement) (AASHTO 2007). The load term Q is computed as:

Q = Tmax = Sv Kr σv = Sv Kr γs z (3)
where: Tmax = maximum tensile load in the reinforcement (kN/m), Sv = the tributary spacing of the reinforcement layer (m), and Kr = dimensionless lateral earth pressure coefficient acting at the reinforcement layer depth. For steel grid reinforced soil walls, Kr varies from 2.5Ka at the top of the wall to 1.2Ka at a depth of 6 m below the wall top, and remains at 1.2Ka below 6 m. Here, Ka = f(φ) = dimensionless coefficient of active lateral earth pressure. Note that the soil friction angle is constrained to φ ≤ 40° according to AASHTO (2007). This restriction is applied in calculations for predicted load values that are described later. Using Equations 1–3, the design limit state equation for reinforcement pullout with a single load term can be expressed as:

ϕ Tpo − γQ Tmax ≥ 0 (4)

where the load factor notation γQ is now adopted for the case of a single load term in the limit state function. From Equation 4 the minimum pullout capacity is:

Tpo ≥ (γQ/ϕ) Tmax (5)

In other words, the nominal resistance value (Rn = Tpo) must always be greater than the nominal load value (Qn = Tmax) by a factor of γQ/ϕ.

4 CALIBRATION

4.1 Probability of failure

An important first step in limit states calibration is the selection of probability of failure. The choice of acceptable probability of failure will be related to the level of redundancy in the structure. For example, for pile groups, the loss of one pile does not mean that the group will fail since load shedding to the other piles is possible. A similar argument can be applied to the internal stability design of reinforced soil walls. Load shedding from a single reinforcing element that has failed to the remaining layers is possible. For this reason a probability of failure of pf = 1 in 100 recommended by D'Appolonia (1999) and Paikowsky et al. (2004) for pile groups is judged to be applicable for internal stability design of reinforced soil walls (i.e. for pullout and tensile rupture limit states) (Allen et al. 2005). The probability of failure for a prescribed unfactored limit state equation is illustrated in Figure 1. The reliability index β is used to relate the normal statistics for the distribution of limit state values (computed as g = R − Q) to probability of failure. Load and resistance factors are selected through calibration to ensure that factored values of R and Q result in an acceptable probability of failure. A probability of failure of pf = 1/100 corresponds to a reliability index value of β = 2.33. A useful means to equate these two parameters is to use the standard normal cumulative function (NORMSDIST) in Microsoft Excel as follows:

pf = 1 − NORMSDIST(β) (6)
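As a quick illustration of Equation 6, the same pf–β conversion can be scripted outside a spreadsheet; this minimal sketch uses scipy.stats.norm as a stand-in for the Excel NORMSDIST/NORMSINV worksheet functions named above.

```python
# Relation between probability of failure and reliability index (Equation 6).
from scipy.stats import norm

pf = 1.0 / 100.0                 # target probability of failure
beta = norm.ppf(1.0 - pf)        # reliability index = 2.326, i.e. approx. 2.33
pf_check = 1.0 - norm.cdf(beta)  # back-calculated pf = 0.01
print(beta, pf_check)
```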
Figure 1. Probability of failure in reliability-based design.
4.2 Model bias
Model bias is a measure of the accuracy of predicted values relative to measured values for a given limit state model. It is reasonable to assume that selection of the magnitude of resistance and load factors to meet a target β value (i.e. probability of failure) will be influenced by the accuracy of the underlying deterministic model. In this paper we have introduced two such deterministic models (one for the load side and one for the resistance side). To quantify model accuracy and to carry out calibration it is useful to introduce the ratio of measured to predicted values as a "model bias", or "bias" for brevity. For a very good model and a large number of data points, the mean bias value is close to one and the coefficient of variation (COV) is small. However, the choice of acceptable COV value is subjective. The use of bias statistics to quantify the accuracy of load predictions in reinforced soil walls has been demonstrated by Bathurst et al. (2008a,b). Adopting nomenclature introduced earlier, the resistance bias value XR for a pullout resistance data point can be expressed as:

XR = Tpo (measured)/Tpo (predicted) (7)
and the load bias value XQ as:

XQ = Tmax (measured)/Tmax (predicted) (8)
Mean and standard deviation can be computed for any set of bias data. The mean and COV of the resistance bias values are denoted here as µR and COVR, and for the load bias values as µQ and COVQ. If the limit state function is linear (the case here) and the load and resistance bias data are normally distributed, then β can be computed using the following formula:

β = (µR γQ/ϕ − µQ)/√[(µR COVR γQ/ϕ)² + (µQ COVQ)²] (9)

If the bias values are lognormally distributed then:

β = ln[(µR γQ)/(µQ ϕ) √((1 + COVQ²)/(1 + COVR²))] / √(ln[(1 + COVR²)(1 + COVQ²)]) (10)

A necessary requirement for these calculations is that there are no dependencies between bias values and corresponding load and resistance (predicted) values. This is discussed later in the paper. The equations developed above demonstrate that for our LSD calibration purposes, statistical characterization of the distribution of actual load and resistance values is not used directly. The advantage of using bias values is that variability in predicted load and resistance values resulting from the model selected for design is included explicitly in the subsequent calculation of load and resistance factors (Withiam et al. 1998, Allen et al. 2005). Furthermore, if the database of measured load and resistance values is taken from actual field measurements (for load) and laboratory measurements (for resistance) then inherent variability in computed load and resistance terms will be captured, provided that the data represent typical quality and type of construction in the field, representative material components, consistent laboratory techniques and the site conditions for the structure being designed.

4.3 Statistical treatment of resistance and load data

Predicted and measured pullout data are used here to demonstrate how resistance bias statistics can be computed and interpreted. The data are taken from Christopher et al. (1989) and represent n = 45 tests with five different reinforcement products in combination with 15 different granular soils. It may appear tempting to use the entire data set to compute the mean and COV bias values. However, this can lead to poor selection of bias statistics. The best approach is to sort the bias data in rank order from smallest to largest and then plot the data as a cumulative distribution function (CDF) plot using z − XR axes, where z is the standard normal variable, i.e. z = Φ⁻¹(pf). It can be computed using the Microsoft Excel NORMSINV function:

zi = NORMSINV(i/(n + 1)) (11)

Here, i is the rank order of the data point and n is the number of data. Resistance bias data plot close to a straight line on a CDF plot with semi-log axes (Figure 2), indicating that the data are sensibly lognormally distributed. Note that lognormal statistics can be computed directly using data points converted to natural logarithms. If the data diverge from a perfect lognormal distribution there may be discrepancies

Figure 2. Cumulative distribution function (CDF) plots of resistance bias (XR) values for pullout capacity for steel grid reinforcement and fitted approximations.
between statistics computed in these two different ways (Bathurst et al. 2008). However, the differences here are small and it is convenient to characterize all distributions using normal mean and COV values. Two data sets are shown on this plot. All n = 45 bias values are plotted as the open circles. There are a number of data points that are treated as outliers and the reasons for eliminating them from analysis are discussed later. If the filtered data set is considered only, then the remaining n = 41 data points plot as the open squares in the figure. Approximations to these curves are superimposed on the two data sets and have been computed as:

X = exp[z √(ln(1 + COV²)) + ln(µ) − ½ ln(1 + COV²)] (12)
Figure 3. Resistance bias values (XR ) versus predicted (nominal) resistance (Rn ) values.
with X used here generically to calculate bias values for z in a lognormal CDF plot described by normal statistics µ and COV. Figure 1a demonstrates that it is the lower tail of the resistance CDF plot that is important since it is the overlap between the lower tail of the resistance distribution and the upper tail of the load distribution that governs the probability of failure. Simply using the normal statistics for the entire n = 45 bias data points provides a visually poor fit to this important tail. By adjusting the selection of mean and COV of the bias statistics it is possible to perform a best fit-to-tail as shown in Figure 2 and thus a better match. The reason for the poor fit using all of the data is that the approximation is influenced by four data points at the opposite tail. If these data are removed (filtered out) then the resulting mean and COV of the bias statistics (n = 41) give a visually better match with the lower tail. Examination of the test conditions that resulted in these four data points revealed that they came from the same test series carried out under low confining pressure (σv < 40 kPa) with dense compacted granular soils (compacted unit weight of 14.9 kN/m3) and a friction angle of 45 degrees or more. Loads generated in steel reinforcement systems are difficult to predict accurately for these conditions using current design equations due to complicated steel reinforcement-soil interaction effects (Bathurst et al. 2008, 2009). Hence, the data for these four data points are at the limit of the range of values that can be usefully extracted from conventional pullout testing. A necessary condition to carry out calibration using model bias statistics is that there must not be hidden dependences between bias statistics (X = XR or XQ) and the magnitude of the nominal prediction (i.e. Rn predicted or Qn predicted in Equations 7 and 8) (Phoon and Kulhawy 2003). Possible dependency between the bias values and the magnitude of the nominal predictions can be examined using the Spearman rank correlation coefficient (Bathurst et al. 2008), Kendall's tau or Pearson's correlation coefficient. Using all n = 45 pullout resistance data points, the Spearman rank correlation coefficient between XR and Rn
(i.e. Tpo predicted) is ρ = −0.374, which corresponds to a probability p = 0.0065 that the two distributions are independent. Hence, the null hypothesis (the distributions are independent) is rejected and the bias and predicted resistance values are considered correlated at a level of significance of 0.05. Removing these four points and recalculating the Spearman rank correlation coefficient with n = 41 gives ρ = −0.242, which corresponds to a probability p = 0.063 that the two populations are independent at a level of significance of 0.05 (i.e. the null hypothesis that the two distributions are independent cannot be rejected). Visual evidence of the effect of the four outliers on statistical dependency can be appreciated in Figure 3. When all data (n = 45) are considered there is a detectable visual dependency between XR and Rn. If the four outliers are removed the dependency disappears, consistent with the hypothesis testing described earlier. An alternative strategy is to group the data into resistance ranges as shown on the figure and compute normal bias statistics for each range (Bathurst et al. 2008). The disadvantage is that the number of data points in each range is much less and a higher COV value may result for any individual subset. Allen et al. (2001, 2004) collected a data set of 20 well-instrumented reinforced soil walls constructed with bar mat and welded wire steel reinforcement (a total of 34 data points from six different wall sections constructed with compacted granular backfill). The friction angle used to calculate the lateral earth pressure value Kr in Equation 3 was taken as the reported value from triaxial or direct shear tests and capped at 40 degrees as required in AASHTO (2002, 2007) design codes. Using the entire data set, the distribution of load bias data points is reasonably well approximated by lognormal distributions in Figure 4. However, it is the data values in the upper tail of the distribution that are of interest since it is this region which will contribute to the calculation of probability of failure, for the same reason described for the distribution of resistance bias values (see Figure 1a). An approximation to the load distribution using a lognormal fit to the upper tail is
Figure 4. Cumulative distribution function (CDF) plots of load bias (XQ ) values for reinforcement loads for steel grid reinforced soil walls and fitted approximations.
Figure 5. Predicted loads versus measured values for steel grid reinforced soil walls using the AASHTO Simplified Method.
plotted in Figure 4. A test of dependency similar to that described for resistance bias data reveals that load bias values and predicted (nominal) load values are not correlated at a level of significance of 0.05. Based on best fit-to-tail for resistance and load bias data presented in Figures 2 and 4 the following normal statistics are proposed: µR = 1.30, COVR = 0.400, µQ = 0.973 and COVQ = 0.462.
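The ranking and plotting-position calculations described above are easily scripted. The sketch below assumes the i/(n + 1) plotting position of Equation 11 and the lognormal approximation of Equation 12; the function names are ours for illustration, not from the original study.

```python
import numpy as np
from scipy.stats import norm

def bias_cdf_points(measured, predicted):
    """Bias values X = measured/predicted (Equations 7-8), sorted, with
    standard-normal plotting positions z = NORMSINV(i/(n+1)) (Equation 11)."""
    X = np.sort(np.asarray(measured, float) / np.asarray(predicted, float))
    n = len(X)
    z = norm.ppf(np.arange(1, n + 1) / (n + 1.0))
    return X, z

def lognormal_cdf_curve(mu, cov, z):
    """Fitted lognormal CDF curve from normal statistics mu, COV (Equation 12)."""
    s_ln = np.sqrt(np.log(1.0 + cov**2))
    return np.exp(z * s_ln + np.log(mu) - 0.5 * s_ln**2)
```

Plotting the (X, z) points against the fitted curve allows the visual best fit-to-tail adjustment described in the text.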
4.4 Estimating the load factor

As noted earlier in the paper it is desirable to have a load factor γQ > 1. However, this may not be possible if the underlying deterministic model is poor. This has been the case for current design methods for geosynthetic reinforced soil walls that use the tie-back wedge method (Miyata and Bathurst 2007, Bathurst et al. 2008). Fortunately, the prediction of load for steel-reinforced soil walls is reasonably accurate as demonstrated by Bathurst et al. (2008, 2009). The following equation can be used as a starting point to estimate the load factor, if load bias statistics are available:

γQ = µQ (1 + nσ COVQ) (13)

where nσ is a constant. For a given value of nσ, the probability of exceeding any factored load is about the same. The greater the value of nσ, the lower the probability that the measured load will exceed the predicted nominal load. A value of nσ = 2 for the strength limit state was used in the development of the Canadian Highway Bridge Design Code and AASHTO LRFD Bridge Design specifications (Nowak 1999, Nowak and Collins 2000). This value is used in the example computations to follow. Using load bias statistics reported in the previous section gives γQ = 1.87 as a starting point. Bathurst et al. (2008) reported values from 1.73 to 1.87 depending on various minor adjustments to the selection of load and resistance normal statistics. A value of γQ = 1.75 is selected here. A visual check on the reasonableness of the selected load factor can be carried out by plotting predicted load values against "measured" values. Figure 5 shows unfactored and factored (predicted) load values for steel grid reinforced soil walls, using the AASHTO Simplified Method, plotted against measured values. The figure shows that many of the original data points are below the 1:1 correspondence line. Applying a load factor γQ = 1.75 moves almost all of the data points above the 1:1 line, which is a desirable end result.

4.5 Estimating the resistance factor using a closed-form solution
Once the load factor is selected, the resistance factor can be estimated through iteration to produce the desired magnitude for β, using Equations 9 and 10 (as applicable), a design point method based on the Rackwitz-Fiessler procedure (Rackwitz and Fiessler 1978), or the more adaptable and rigorous Monte Carlo method demonstrated later. Here, the example described earlier in the paper is continued using Equation 10 rewritten as follows:

ϕ = [γQ µR √((1 + COVQ²)/(1 + COVR²))] / [µQ exp(βT √(ln[(1 + COVR²)(1 + COVQ²)]))] (14)

where βT is the target reliability index.
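The closed-form route of Equations 13 and 14 can be checked numerically; the sketch below is an illustration of the formulas using the best fit-to-tail statistics, not code from the original study.

```python
import math

mu_R, cov_R = 1.30, 0.400    # resistance bias statistics (best fit-to-tail)
mu_Q, cov_Q = 0.973, 0.462   # load bias statistics

gamma_Q_start = mu_Q * (1.0 + 2.0 * cov_Q)   # Equation 13 with n_sigma = 2 -> 1.87
gamma_Q = 1.75                                # value selected in the text
beta_T = 2.33                                 # target reliability index

num = gamma_Q * mu_R * math.sqrt((1.0 + cov_Q**2) / (1.0 + cov_R**2))
den = mu_Q * math.exp(beta_T * math.sqrt(math.log((1.0 + cov_R**2) * (1.0 + cov_Q**2))))
phi = num / den                               # Equation 14 -> 0.612
print(round(gamma_Q_start, 2), round(phi, 3))
```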
Using the normal statistics for best fit-to-tail, load factor γQ = 1.75 and a target reliability index value of β = 2.33 (probability of failure pf ≈ 1/100) gives a resistance factor of ϕ = 0.612. From a practical point of view, ϕ = 0.60 is convenient and sufficiently accurate. If the resistance data is parsed and restricted to the middle range of data points in Figure 3, then the computed resistance factor is 0.81 (Bathurst et al. 2008). Clearly judgment is required in any calibration effort of the type described here.

4.6 Trial and error approach using Monte Carlo simulation

An alternative approach to select the resistance factor is to carry out a Monte Carlo simulation. It is
computationally convenient to assume a nominal (mean) value of Tmax = 1 kN/m; then, according to Equation 5, the nominal (mean) value for resistance must be equal to (γQ/ϕ) × 1 kN/m. The variation of Tmax values is lognormal and can be quantified using the normal statistics for load bias (µQ, COVQ) and for the factored resistance values using (γQ/ϕ)µR and COVR computed from resistance bias values. Recall that normal statistics can be used to compute lognormal distributions with suitable accuracy. The factored limit state equation is now expressed as:

g = Ri − Qj (15)
Random values of Ri and Qj can be computed using:

Ri = exp(µlnR + σlnR zi), with σlnR = √(ln(1 + COVR²)) and µlnR = ln[(γQ/ϕ)µR] − σlnR²/2 (16)
and

Qj = exp(µlnQ + σlnQ zj), with σlnQ = √(ln(1 + COVQ²)) and µlnQ = ln(µQ) − σlnQ²/2 (17)
where zi and zj are random values of the standard normal variable. Random pairs of Ri and Qj are then used to compute a set of g values. These values are then sorted in increasing order and a CDF plot constructed as shown in Figure 6. The calculations can be easily carried out using an Excel spreadsheet. Different values of γQ and ϕ can be tried until the CDF plot intersects g = 0 at the target value of β, although the general approach is to fix the load factor (which is often prescribed) and then adjust ϕ. In the example here, the numerical approach gives β = 2.35, which is very close to the target value of 2.33 used in the closed-form solution to compute ϕ = 0.60 assuming a load factor of γQ = 1.75.

Figure 6. Monte Carlo simulation for the pullout limit state, using the AASHTO Simplified Method of design (steel grid walls only – lognormal distribution assumed for Tmax and Tpo, γQ = 1.75, and ϕR = 0.60, and 10,000 values of g generated).

An advantage of Monte Carlo simulation is that normal and lognormal distributions can be used together. In fact, any fitted distribution function can be used to calculate random values of Ri and Qj in the example used here.
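A minimal Monte Carlo sketch of Equations 15–17 is given below, with NumPy used in place of the Excel spreadsheet mentioned in the text and 100,000 samples rather than the 10,000 of Figure 6. With γQ = 1.75 and ϕ = 0.60 the simulated probability of failure comes out near 1/100, consistent with the reported β = 2.35.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_R, cov_R = 1.30, 0.400
mu_Q, cov_Q = 0.973, 0.462
gamma_Q, phi = 1.75, 0.60

def lognormal_params(mean, cov):
    # normal statistics of ln(X) for a lognormal variable with given mean and COV
    s = np.sqrt(np.log(1.0 + cov**2))
    return np.log(mean) - 0.5 * s**2, s

mQ, sQ = lognormal_params(mu_Q, cov_Q)                     # Tmax nominal = 1 kN/m
mR, sR = lognormal_params((gamma_Q / phi) * mu_R, cov_R)   # Rn = (gamma_Q/phi) x 1 kN/m

N = 100_000
g = rng.lognormal(mR, sR, N) - rng.lognormal(mQ, sQ, N)    # Equations 15-17
print(np.mean(g < 0.0))   # approx. 0.01, i.e. beta close to the 2.33 target
```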
5 CONCLUSIONS
This paper has demonstrated some fundamental concepts for LSD calibration of geotechnical structures. An attempt has been made here to break through the obscure language that has traditionally been used in LSD practice. The paper has highlighted the need to carry out LSD calibration using model bias statistics in order to capture the influence of the accuracy of the underlying deterministic models on LSD load and resistance factors. Finally, the paper illustrates how simple Excel spreadsheets can be used to carry out calibration, treat data outliers and avoid hidden dependencies between variables.
REFERENCES
AASHTO. 2007. LRFD Bridge Design Specifications, American Association of State Highway and Transportation Officials, Fourth Edition, Washington, D.C., USA.
AASHTO. 2002. Standard Specifications for Highway Bridges, American Association of State Highway and Transportation Officials, 17th Edition, Washington, D.C., USA, 686 p.
Allen, T.M., Bathurst, R.J., Holtz, R.D., Lee, W.F. & Walters, D.L. 2004. A new working stress method for prediction of loads in steel reinforced soil walls, ASCE Journal of Geotechnical and Geoenvironmental Engineering, Vol. 130, No. 11, 1109–1120.
Allen, T.M., Christopher, B.R., Elias, V. & DiMaggio, J. 2001. Development of the simplified method for internal stability of Mechanically Stabilized Earth (MSE) walls, Washington State Dept of Trans, Report WA-RD 513.1, 108 p.
Allen, T.M., Nowak, A.S. & Bathurst, R.J. 2005. Calibration to determine load and resistance factors for geotechnical and structural design, Transportation Research Board Circular E-C079, Washington, D.C.
Bathurst, R.J., Miyata, Y., Nernheim, A. & Allen, T.M. 2008. Refinement of K-Stiffness method for geosynthetic reinforced soil walls, Geosynthetics International, Vol. 15, No. 4, 269–295.
Bathurst, R.J., Allen, T.M. & Nowak, A.S. 2008. Calibration concepts for load and resistance factor design (LRFD) of reinforced soil walls, Canadian Geotechnical Journal, Vol. 45, 1377–1392.
Bathurst, R.J., Nernheim, A. & Allen, T.M. 2008. Comparison of measured and predicted loads using the Coherent Gravity Method for steel soil walls, Ground Improvement, Vol. 161, No. 3, 113–120.
Bathurst, R.J., Nernheim, A., Miyata, Y. & Allen, T.M. 2009. Predicted loads in steel reinforced soil walls using the AASHTO Simplified Method, ASCE Journal of Geotechnical and Geoenvironmental Engineering, Vol. 135, No. 2, 177–184.
Becker, D.E. 1996. Eighteenth Canadian Geotechnical Colloquium: Limit states design for foundations. Part I. An overview of the foundation design process, Canadian Geotechnical Journal, Vol. 33, 956–983.
CFEM. 2006. Canadian Foundation Engineering Manual (4th Ed), Richmond, BC, Canada.
CSA. 2006. Canadian Highway Bridge Design Code (CHBDC), CSA Standard S6-06, Canadian Standards Association, Toronto, Ontario, Canada.
Christopher, B.R., Gill, S.A., Giroud, J.-P., Juran, I., Mitchell, J.K., Schlosser, F. & Dunnicliff, J. 1989. Reinforced soil structures, Vol. II: Summary of research and systems information, FHWA Report FHWA-RD-89-043, 158 pp.
D'Appolonia. 1999. Developing new AASHTO LRFD specifications for retaining walls, Report for NCHRP Project 20-7, Task 88, Transportation Research Board, Washington, D.C., USA, 63 p.
Eurocode 7. 1995. ENV 1997-1, Eurocode 7: Geotechnical design, Part 1: General rules (with the UK National Application Document), British Standards Institution, London.
Geoguide 6. 2002. Guide to reinforced fill structure and slope design, Geotechnical Engineering Office, Hong Kong, China.
Miyata, Y. & Bathurst, R.J. 2007. Development of K-stiffness method for geosynthetic reinforced soil walls constructed with c-φ soils, Canadian Geotechnical Journal, Vol. 44, No. 12, 1391–1416.
Nowak, A.S. 1999. Calibration of LRFD Bridge Design Code, NCHRP Report 368, Transportation Research Board, Washington, D.C., USA.
Nowak, A.S. & Collins, K.R. 2000. Reliability of Structures, McGraw Hill, New York, NY.
NRC. 2005. National Building Code, NRC of Canada, Ottawa, Ontario, Canada.
Paikowsky, S.G., Birgisson, B., McVay, M., Nguyen, T., Kuo, C., Baecher, G., Ayyub, B., Stenersen, K., O'Malley, K., Chernauskas, L. & O'Neill, M. 2004. Load and resistance factor design (LRFD) for deep foundations, NCHRP Report 507, Transportation Research Board of the National Academies, Washington, D.C., 126 p.
Phoon, K.-K. & Kulhawy, F.H. 2003. Evaluation of model uncertainties for reliability-based foundation design. In Der Kiureghian, Madanat & Pestana (eds), Applications of Statistics and Probability in Civil Engineering, Millpress, Rotterdam, Netherlands.
Rackwitz, R. & Fiessler, B. 1978. Structural reliability under combined random load sequences, Computers and Structures, Vol. 9, 484–494.
RTA. 2003. Design of Reinforced Soil Walls, QA Specification R57, Roads and Traffic Authority of New South Wales, Australia.
Withiam, J.L., Voytko, E.P., Barker, R.M., Duncan, J.M., Kelly, B.C., Musser, S.C. & Elias, V. 1998. Load and Resistance Factor Design (LRFD) for Highway Bridge Substructures, FHWA HI-98-032, Federal Highway Administration, Washington, D.C., USA.
Loss of static equilibrium of a structure – Definition and verification of limit state EQU B. Schuppener Federal Waterways Engineering and Research Institute, Karlsruhe, Germany
B. Simpson Arup Geotechnics, London, UK
T.L.L. Orr Trinity College, Dublin University, Ireland
R. Frank Université Paris-Est, Ecole nationale des ponts et chaussées, Navier-CERMES, Paris, France
A.J. Bond Geocentrix, Banstead, Surrey, UK
ABSTRACT: In order to satisfy the essential requirements for construction works, the Eurocodes require that structures fulfil the design criteria for both serviceability and ultimate limit states. Three ultimate limit states are of particular importance in geotechnical design: loss of static equilibrium (EQU), failure or excessive deformation of the structure (STR), and failure or excessive deformation of the ground (GEO). This paper is concerned with EQU. The problems relating to the use of EQU in geotechnical design are described in this paper and alternative views are presented. The authors have sought, by means of a number of illustrative examples, to examine these problems and clarify issues relating to the application of EQU.
1 INTRODUCTION
In order to satisfy the essential requirements for construction works, the Eurocodes require that structures fulfil the design criteria for both serviceability and ultimate limit states (SLSs and ULSs). It is a principle of the Eurocodes that "ultimate limit states shall be verified as relevant". Three ultimate limit states are of particular importance in geotechnical design: loss of static equilibrium (EQU), failure or excessive deformation of the structure (STR), and failure or excessive deformation of the ground (GEO). This paper is concerned with EQU, which the authors have debated at length how to apply in geotechnical design, encountering problems which are also relevant to structural design. Particular difficulties arise when the stability required by EQU has to be augmented by structural or ground resistance. The problems relating to the use of EQU in geotechnical design are described in this paper and alternative views are presented. The authors have sought, by means of a number of illustrative examples, to examine these problems and clarify issues relating to the application of EQU.
2 DEFINITIONS
Ultimate limit states STR and GEO are described in Eurocode: Basis of design (EN 1990) and Eurocode 7: Geotechnical design, Part 1: General rules (EN 1997-1) by the following definitions: STR: Internal failure or excessive deformation of the structure or structural members, including footings, piles, basement walls, etc., where the strength of construction materials of the structure governs [EN 1990] (or) …where the strength of structural materials is significant in providing resistance [EN 1997-1]. GEO: Failure or excessive deformation of the ground, in which the strength of soil or rock is significant in providing resistance. In the verification of both STR and GEO, it must be shown that in every section, member and connection, the design value Ed of the effect of actions (such as internal force, moment or a vector representing several internal forces or moments) does not exceed the design value Rd of the corresponding resistance of the structure or the ground:

Ed ≤ Rd (1)
Inserting partial factors, this may be expanded to:
where: Frep is the representative value of an action, derived from individual characteristic actions taking account of combination of variable actions (in the case of variable actions Q, Frep = Qrep = ψ·Qk where ψ is a factor for converting the characteristic value to the representative value; in the case of permanent actions, Frep = Grep = Gk ) Xk is the characteristic value of a ground property anom is the nominal value of geometrical data (i.e. dimension) γE and γF are partial factors for effects of actions and actions, respectively γM and γR are partial factors for ground properties and ground resistance, respectively a is a safety margin or tolerance (Note: all γ values are usually ≥ 1 and a is typically zero.) Limit state EQU is described in EN 1990 and EN 1997-1 by two slightly different definitions: EN 1997-1, 2.4.7.1(1)P gives: “Loss of equilibrium of the structure or the ground, considered as a rigid body, in which the strengths of structural materials and the ground are insignificant in providing resistance” EN 1990, 6.4.1(1)P gives: “Loss of static equilibrium of the structure or any part of it considered as a rigid body, where: •
minor variations in the value or the spatial distribution of actions from a single source are significant, and
• the strengths of construction materials or ground are generally not governing."
The concept of a 'single source' is important: EN 1990 notes (see Note 3 in Table A1.2(B) of Annex A.1): "For example, all actions originating from the self weight of the structure may be considered as coming from one source; this also applies if different materials are involved." As the definitions state that the strength of the materials or the ground either plays no part in the verification or is not governing, the expression for EQU is different from Equation (1). For equilibrium, it must be verified that the design value Ed,dst of the effect of destabilising actions does not exceed the design value Ed,stb of the effect of stabilising actions:

Ed,dst ≤ Ed,stb (2)

Inserting partial factors, this may be expanded to:

E{γF,dst Frep,dst; Xk/γM; anom} ≤ E{γF,stb Frep,stb; Xk/γM; anom} (2a)
where: Frep,dst and Frep,stb are representative destabilising and stabilising actions, respectively; γF,dst and γF,stb are partial factors for destabilising and stabilising actions, respectively; and the other symbols are as defined for Equation (1a) above. Alternatively, partial factors may be applied to the effects of actions instead:

γE,dst E{Frep,dst; Xk/γM; anom} ≤ γE,stb E{Frep,stb; Xk/γM; anom} (2b)
where γE,dst and γE,stb are partial factors for destabilising and stabilising effects of actions, respectively. In Annex A.1 of EN 1990 for buildings, the following partial factors are recommended in NOTE 1 of Table A1.2(A): γG,dst = γG,sup = 1.1 for destabilising permanent actions Gdst, γG,stb = γG,inf = 0.9 for stabilising permanent actions, and γQ,dst = 1.5 for destabilising variable actions Qdst. In situations where the expression for EQU cannot be satisfied, EN 1990 also allows the introduction of additional stabilising terms in Equation (2) resulting from "for example, a coefficient of friction between rigid bodies" (EN 1990, 6.4.2(2)). These "additional terms" are an important issue for this paper. For such situations, NOTE 2 of Table A1.2(A) in Annex A.1 of EN 1990 also allows an alternative procedure, subject to national acceptance, in which the two separate verifications of STR/GEO and EQU are replaced by a combined EQU + STR/GEO verification with recommended partial action factor values of γG,dst = 1.35, γG,stb = 1.15, and γQ,dst = 1.50 (provided that applying γG,dst = γG,stb = 1.0 does not give a more unfavourable effect) combined with the relevant STR/GEO partial material and resistance factors from Design Approaches DA 1 (Combination 1), DA 2 and DA 3. EN 1997-1 requires verification of a limit state of static equilibrium or of overall displacements of the structure or ground (EQU) by:

Ed,dst ≤ Ed,stb + Td (EN 1997-1, 2.4)
and a note is added: “Static equilibrium EQU is mainly relevant in structural design. In geotechnical design, EQU verification will be limited to rare cases, such as a rigid foundation bearing on rock, and is, in principle, distinct from overall stability or buoyancy problems. If any shearing resistance Td is included, it should be of minor importance.” In discussions between the authors, two ‘concepts’ for the interpretation and application of ENs 1990 and 1997-1 have been developed. •
Concept 1 proposes verifying only EQU in those cases where loss of static equilibrium is physically possible for the structure or part of it, considered as a rigid body. Similarly Concept 1 proposes verifying only STR/GEO in situations where the strength
of material or ground is significant in providing resistance.
• Concept 2 proposes verifying EQU in all cases; it is interpreted as a load case. Where minor strength of material or ground is involved, the combined EQU/STR/GEO verification may be used, if allowed by the national annex.
Further discussion involves the term "Td" in expression EN 1997-1 2.4, particularly with the use of Concept 1: it might be regarded either as a resistance (Concept 1-R) or as an action (Concept 1-A). Using Concept 1-R, the design resistance of the anchor is:

Rd ≥ Ed,dst − Ed,stb (3a)
and a similar approach is generally used with Concept 2. If load factors can be applied directly to action effects, Rd is given directly as:
where EQ,rep,dst is E{Qrep,dst}. Using Concept 1-A and substituting the characteristic value of a stabilising action Ak,stb (assumed to be permanent, from, for example, an anchor), expression EN 1997-1 2.4 becomes:

Ed,dst ≤ Ed,stb + γG,stb Ak,stb (4a)
Hence:

Ak,stb ≥ (Ed,dst − Ed,stb)/γG,stb (4b)
and Ak is then used in a STR/GEO verification to show that the design stabilising action Ad can be provided by the resistance Rd of, for example, an anchor:

Rd ≥ Ad = γG Ak
When variable actions are significantly larger than the permanent actions, this simplifies to:

Rd ≥ (γG/γG,stb) EQ,rep,dst γQ,dst

which is more onerous than 'normal' GEO design for which Rd ≥ EQ,rep,dst γQ,dst. The difference between Concepts 1-R and 1-A lies in the ratio γG/γG,stb (compare equations 3a and 4b), where γG is the load factor in a STR/GEO verification. Example 4, given below in section 6, illustrates this difference. EN 1997-1 allows national standards bodies to set values of partial factors for soil strength in EQU that are different from those of STR/GEO. In contrast, no values are given for EQU for partial factors for geotechnical resistances, and none are offered for structural materials or resistances in EN 1990; it might be inferred that these unspecified material and resistance factors will be the same for EQU as for STR/GEO. The following examples show how these differing concepts lead to different results in practical design situations.

3 EXAMPLE 1: BALANCED STRUCTURE ON PILED FOUNDATION

3.1 General

Figure 1 shows a balanced structure sitting on a piled foundation. In the verifications it is assumed that:
• the representative values of the two forces W are equal (= Wr),
• the column and footing are assumed to be weightless,
• all structural components are of reinforced concrete, and
• the structure consists of a single beam, one column and two piles, and there is no transfer of bending moment to the piles.

Figure 1. Balanced structure on piled foundation.
The details and the results of some verifications proposed by the authors are presented in Table 1. Concept 1 only requires limit states STR and GEO to be considered, since failure can only occur from
a lack of strength in the structure or in the ground surrounding the piles. There is no possibility of loss of equilibrium, provided the structure and foundations are strong enough. It might be useful in such a situation to introduce the load case given by the factors of EQU as an additional STR verification. Alternatively, other provisions of ENs 1992 and 1993 might dominate this problem (e.g. geometric allowances etc). Concept 2 requires the spatial distribution of actions from the self weight of the horizontal element (a single source) to be considered, so additionally EQU must be verified.

3.2 Verification of limit states STR and GEO

If there are no wind or snow loads to be considered the column will only carry the vertical load of the self weight 2Wr of the horizontal beam. The partial factor for permanent actions of γG = 1.35 must be applied to Wr to determine the design value of the effects of actions for the pile design. Clause 5.2(1)P of EN 1992-1-1 states that the "unfavourable effects of possible deviations in the geometry of the structure and the position of loads shall be taken into account in the analysis of members and structures". For isolated members, such as that shown in Figure 1, a vertical lean of approximately 1/200 should be included in the calculation of moments.

3.3 Verification of limit state EQU via Concept 2

According to the notes of Annex A of EN 1990, two alternative procedures with two sets of partial factors
to be applied to the self weight of the horizontal beam may be used to design the structure:
Note 1: γG,dst = 1.1 and γG,stb = 0.9
Note 2: γG,dst = 1.35 and γG,stb = 1.15 (not shown in Table 1 – the results are as for Note 1).

Table 1. Results of the verifications of example 1 – balanced structure on piled foundation.

Concept | 1, 2 | 2 | 1(1)
Source of factor values | EN 1990 Table A1.2(B) | EN 1990 Table A1.2(A) Note 1 = UK NA(2) for buildings | EN 1990 Table A1.2(B)
Values of factors | γG = 1.35 | γGj,sup = 1.10, γGj,inf = 0.90 | γGj,sup = 1.35, γGj,inf = 1.00
Design value(s) of the bending moment M for structural design of the bottom of the column | 0 | ±0.2 Wr a | ±0.35 Wr a
Design values of the forces F1 and F2 (compression > 0) for geotechnical design of the piles | γG Wr = 1.35 Wr | Wr (1 + 0.2a/b) | Wr (1.175 + 0.35a/b)
Design values of the forces F1 and F2 (tension < 0) for geotechnical design of the piles | No tension | Wr (1 − 0.2a/b) | Wr (1.175 − 0.35a/b)
Design values of the forces F1 and F2 (compression > 0) for structural design of the piles | γG Wr = 1.35 Wr | Wr (1 + 0.2a/b) | Wr (1.175 + 0.35a/b)
Design values of the forces F1 and F2 (tension < 0) for structural design of the piles | No tension | Wr (1 − 0.2a/b) | Wr (1.175 − 0.35a/b)
(1) Without the use of the single source concept; this is a solution one might think of when one wants to design properly all the structural connections, as well as the foundations, in order to take into account the geometrical and load uncertainties of the balanced structure. (2) UK NA = United Kingdom National Annex.

3.4 Conclusions

From the results of the verifications in Table 1, it can be seen that the design values of the different verifications depend to a large extent on the ratio a/b (width a between the forces relative to the width b between the piles). Comparing the different EQU verifications it can be seen that in this example the EQU verifications do not represent a separate ultimate limit state but a different set of partial factors on actions to account for a special design situation – here the possibility of the variance in the spatial distribution of the self weight of the horizontal concrete beam. In effect, the design values of the actions taken from EQU are applied when verifying limit state STR for the concrete of the column and piles and when verifying limit state GEO for the piles' ground bearing resistance.
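The Table 1 pile-force expressions can be evaluated directly; the sketch below uses hypothetical values of Wr, a and b (none are specified in the paper) together with the factor pairs from the table.

```python
# Illustrative check of the Table 1 expressions for the pile forces.
def pile_forces(Wr, a, b, g_sup, g_inf):
    mean = 0.5 * (g_sup + g_inf) * Wr       # average of factored self weights
    delta = (g_sup - g_inf) * Wr * a / b    # imbalance term from the two sources
    return mean + delta, mean - delta       # F1 (compression), F2

Wr, a, b = 100.0, 1.0, 4.0                  # hypothetical values
print(pile_forces(Wr, a, b, 1.10, 0.90))    # EQU set: Wr(1 +/- 0.2 a/b)
print(pile_forces(Wr, a, b, 1.35, 1.00))    # Concept 1(1): Wr(1.175 +/- 0.35 a/b)
```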
4 EXAMPLE 2: TOWER SUBJECT TO A VARIABLE ACTION
4.1 General

The second example shown in Figure 2 is a tower subjected to a variable action. Here not only the structural design and Eurocode 7's three Design Approaches for the verification of the ground bearing capacity need to
be considered but also the question as to whether and how overturning of the tower should be investigated as a separate EQU limit state. For this example also two different concepts may be considered. The details and the results of the verifications are presented in Table 2. The design must demonstrate that:
• the base of the column has sufficient strength to resist combined axial compression, shear, and bending (limit state STR),
• the ground beneath the column has sufficient strength to resist the applied bearing pressures and sliding forces (limit state GEO), and
• the foundation is wide enough to prevent toppling (limit state EQU).

Figure 2. Tower subject to a variable action.

Table 2. Partial factors used in verification of EQU for Example 2.

Factor | Concept 1 | Concept 2
Action factors γG,dst | 1.1 | 1.1
Action factors γG,stb | 0.9 | 0.9
Action factors γQ | 1.5 | 1.5
Material factors γϕ, γc | Not used | 1.1
Resistance factors γR,v, γR,h | Not used | 1.0
Source | EN 1990 Table A1.2(A) | EN 1990 Table A1.2(A); UK NA to EN 1997-1 Table A.NA.2
(γR,v: partial factor for bearing resistance; γR,h: partial factor for sliding resistance.)

4.2 Verification of limit states STR and GEO

The bottom of the column must be designed for two separate conditions, one in which the self-weight of the column is treated as an unfavourable action and one in which it is treated as favourable. The most onerous combination of design axial force NEd, design shear force VEd, and design bending moment MEd is given by either:

NEd = γG Wk, VEd = γQ Qk, MEd = γQ Qk h (self-weight unfavourable)

or

NEd = γG,fav Wk, VEd = γQ Qk, MEd = γQ Qk h (self-weight favourable)

where the partial factors γG = 1.35, γG,fav = 1.0 and γQ = 1.5 are specified in Table A1.2(B) of EN 1990. The ground beneath the column must resist the resultant (inclined) design force coming from the structure, given by Fd = √[(γG Wk)² + (γQ Qk)²]. This force acts at a design angle of inclination to the vertical of θd = tan⁻¹[γQ Qk/(γG Wk)] and, for the calculation of the effective area for determining bearing resistance, the design eccentricity of this action is given by ed = h tan θd = γQ Qk h/(γG Wk). (Note: in Design Approach DA 2∗ the calculation is performed with characteristic (representative) values. The characteristic ground resistance Rk is determined on the basis of the characteristic values of the angle of inclination θk to the vertical and the eccentricity ek, where Ev, the vertical component of the force coming from the structure, is used; at the end of the calculation the partial factors are applied and the ultimate limit state is verified by checking the corresponding expression.) The values of the partial action factors to be used in these equations depend on the magnitude of the load's eccentricity. For large eccentricities, γG = 1.0 and γQ = 1.5 are more onerous; while for small eccentricities, γG = 1.35 and γQ = 1.5 are more onerous. (Note: in Design Approach DA 1, Combination 2, γG = 1.0 and γQ = 1.3 are used, regardless of eccentricity.) The design resistance is determined according to the Design Approach adopted. In Design Approach DA 1, Combination 1: γφ = γc = γRv = γRh = 1.0; in DA 1, Combination 2, and DA 3: γφ = γc = 1.25 and γRv = γRh = 1.0; while in DA 2 and DA 2∗,
γϕ = γc = 1.0, γRv = 1.4, and γRh = 1.1. See Frank et al. (2004) or Bond and Harris (2008) for further details of the Design Approaches.

4.3 Verification of limit state EQU via Concept 1 (assuming rigid ground)

In Concept 1, an EQU check is carried out only if the ground is considered to be infinitely strong; for this the γG,dst and γG,stb factors are used. Apart from the EQU check a separate STR/GEO check is carried out, using γG and γQ factors for the actions and factors on either the finite ground strength or ground resistance, in accordance with the selected Design Approach. In former German standards, which employed the global safety concept, sufficient safety against foundation overturning was demonstrated by ensuring the eccentricity e of the resultant load did not exceed one third of the foundation's width. This ensured that there was always compression over at least half of the total base area (assuming a triangular distribution of this base pressure). If the ground is assumed to be rigid and the stabilising moment is caused by a force central to the foundation, consideration of overturning about the edge of the foundation base leads to a global factor of safety of 1.5 (equal to the ratio of the stabilising and destabilising moments about the foundation edge). With the implementation of Eurocode 7 in Germany, overturning of a shallow foundation is now covered by applying limit state EQU with partial factors (summarized in Table 2) taken from EN 1990. This verification of EQU is accompanied by verification of limit state GEO for the bearing capacity of a shallow foundation, as discussed in the previous section.
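The effect of treating the self-weight as favourable or unfavourable on the load inclination and eccentricity (Section 4.2) can be illustrated numerically; the values of Wk, Qk and h below are hypothetical and chosen only for illustration.

```python
import math

# Hypothetical tower: self weight Wk (kN), variable action Qk (kN) at height h (m).
Wk, Qk, h = 1000.0, 100.0, 10.0

for gG, gQ in [(1.35, 1.5), (1.0, 1.5)]:
    Fd = math.hypot(gG * Wk, gQ * Qk)          # resultant design force
    theta_d = math.atan2(gQ * Qk, gG * Wk)     # design inclination to the vertical
    e_d = h * math.tan(theta_d)                # design eccentricity
    print(gG, round(Fd, 1), round(math.degrees(theta_d), 2), round(e_d, 3))
# gG = 1.0 (favourable self-weight) gives the larger inclination and eccentricity.
```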
4.4 Verification of limit state EQU via Concept 2 (accounting for ground strength)

In Concept 2, the ground is treated as being of finite strength for EQU, again with a separate check for the load factors of STR/GEO. Concept 2 requires EQU to be checked simply as a further set of factors applied in the complete structural and geotechnical design. Here, two sets of partial factors from EN 1990 have to be investigated, as summarized in Table 2. The weight W of the tower may prove to be relevant for design as a favourable action, and in this case the factors of EQU are more severe than those of GEO, leading to larger inclination and eccentricity of the load on the foundation, although its magnitude is slightly smaller. These actions are then applied in a verification of the bearing resistance of the spread foundation applying a partial factor on the angle of shearing resistance of γϕ. If γϕ is taken equal to that for GEO (e.g. γϕ = 1.25), then EQU becomes the governing case. For this reason, the National Annex of the UK has used the possibility, allowed by EN 1997-1, to specify a different value for γϕ, adopting 1.1. If, however, national annexes allow the use of the factors γG,sup = 1.35 and γG,inf = 1.15 (or 1.0 for both factors), then the critical case will be γG,sup = γG,inf = 1.0, and the loading for EQU will be the same as for GEO for this problem.

4.5 Conclusions

In the verification of limit state EQU assuming rigid ground (Concept 1), Equation 2 is used in its basic form. In some cases this means that the same structure is modelled as imposing more severe loading (i.e. greater inclination and eccentricity) on the ground if it is assumed to be "rigid" than if it is assumed to be "soil". Concept 2 requires that the structure is demonstrated to be safe in all respects for EQU, STR, and GEO loading, so avoiding difficult decisions about whether the ground should be considered "rigid".

5 EXAMPLE 3: OVERTURNING OF A RETAINING STRUCTURE DUE TO EARTH PRESSURE

5.1 General

Figure 3. Overturning due to earth pressure from dry soil.
The third example shown in Figure 3 is a retaining structure built on infinitely strong rock. Here the requirements for rotational stability and sliding in terms of the earth pressure F and self weight of the retaining structure W need to be considered. As the rock is infinitely strong, overturning of the retaining wall about its edge can be considered as an EQU limit state. Some colleagues consider that both overturning and sliding should also be checked for the requirements of STR/GEO, while others would check overturning for EQU only, disregarding overturning instability in the STR/GEO case when sliding is checked. In the calculations it is assumed that
• earth pressures from the sand are active pressures, with the resultant characteristic value of Fk acting at depth 2/3 h,
• the wall friction of the earth pressure is neglected,
• the angle of friction for the interface between wall and rock is δ, and
• the wall weight W acts at the centre of the rectangular wall; its characteristic value is Wk.
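The design earth pressure coefficients Ka,d quoted in Table 3 below can be reproduced by factoring tan φ; the characteristic friction angle of 30° assumed in this sketch is our inference from Ka,k = 0.333, not a value stated in the paper.

```python
import math

phi_k = math.radians(30.0)   # assumed characteristic friction angle (gives Ka,k = 0.333)

def Ka(phi):
    # active earth pressure coefficient for a vertical wall, no wall friction
    return (1.0 - math.sin(phi)) / (1.0 + math.sin(phi))

for g_phi in (1.00, 1.25, 1.10):
    phi_d = math.atan(math.tan(phi_k) / g_phi)   # design friction angle
    print(g_phi, round(Ka(phi_d), 2))
# -> 0.33, 0.41 and 0.37, matching the Ka,d row of Table 3
```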
The solutions differ in the way the design value of the action of the earth pressure is determined. It can either be regarded as an action to be factored (DA2 and DA1 Combination 1) or as derived from factored ground strength (DA1 Combination 2 and DA3). The details and the results of the verifications are presented in Table 3.

Table 3. Results of the verifications of example 3 – overturning of a retaining structure.

Concept 2:
– Source: EN 1990 Table A1.2(B) and EN 1997-1 Table A4; factors γGj,sup = 1.35, γGj,inf = 1.0, γϕ = γδ = 1.00; Ka,d = 0.333 (Ka,d/Ka,k = 1); rotational stability: 1.35 Fk (h/3)/1.0 Wk ≤ b/2 − 0.1 m; sliding: 1.35 Fk/1.0 Wk ≤ tan δk.
– Source: EN 1990 Table A1.2(C) and EN 1997-1 Table A4; factors γG = 1.00, γϕ = γδ = 1.25; Ka,d = 0.41 (Ka,d/Ka,k = 1.23); rotational stability: 1.23 Fk (h/3)/1.0 Wk ≤ b/2 − 0.1 m; sliding: 1.23 Fk/1.0 Wk ≤ tan δk/1.25, so 1.54 Fk/Wk ≤ tan δk.
– Source: EN 1990 Table A1.2(A) and UK NA Table A2; factors γGj,sup not used, γGj,inf = 0.9, γϕ = γδ = 1.1; Ka,d = 0.37 (Ka,d/Ka,k = 1.11); rotational stability: 1.11 Fk (h/3)/0.9 Wk = 1.23 Fk (h/3)/Wk ≤ b/2 − 0.1 m; sliding: 1.11 Fk/0.9 Wk ≤ tan δk/1.1, so 1.37 Fk/Wk ≤ tan δk.

Concept 1, Overturning:
– Source: EN 1990 Table A1.2(A) and EN 1997-1 Table A.2; factors γG,sup = 1.10, γG,inf = 0.90; Ka,d = 0.333 (Ka,d/Ka,k = 1); rotational stability: 1.1 Fk (h/3)/0.9 Wk = 1.22 Fk (h/3)/Wk < (b/2 − 0.1 m(?)) [1a].

Concept 1, Sliding:
– Source: EN 1990 Table A.1.2(B) and EN 1997-1 Table A.5; factors γG,sup = 1.35, γG,inf = 1.00, γR;h = 1.1; Ka,d = 0.333 (Ka,d/Ka,k = 1); sliding: 1.35 Fk < 1.0 Wk tan δk/1.1, so 1.49 Fk < 1.0 Wk tan δk.

δ: structure-ground interface friction angle.

5.2 Overturning

There is agreement for Concepts 1 and 2 that, for overturning, it must be shown that the design value of the destabilising moment of the earth pressure with respect to the edge of the foundation is not greater than the design value of the stabilising moment of the self weight of the structure. In paragraph 6.5.4(2) of EN 1997-1 it is recommended that tolerances up to 0.10 m be considered where loads with large eccentricities occur. Opinions differ about the need to check overturning for STR/GEO. If the requirements of STR/GEO are applied for overturning, as in Concept 2, they are more severe than those of EQU, so the EQU requirements have no effect on this problem.

5.3 Verification of sliding for limit state EQU

In principle, Concept 2 also requires a verification of sliding for limit state EQU, whereas Concept 1 does not. However, if the requirements of STR/GEO are applied for sliding (as all agree they should be), they are more severe than those of EQU, so the EQU requirements have no effect on this issue.

6 EXAMPLE 4: BEAM STRUCTURE

Figure 4. Beam structure – static equilibrium.

The fourth example shown in Figure 4 is a continuous horizontal beam resting on two supports. When b ≥ a, the left support has to be designed as a tension support to provide sufficient safety of the horizontal beam against loss of stability. Potentially this design has to be checked for both EQU and STR/GEO. The details and the results of the verifications are presented in Table 4. Both concepts agree that to achieve equilibrium for limit state EQU, the design values Mdst,d and Mstb,d of destabilising and stabilising moments about the central support are compared:

Mdst,d ≤ Mstb,d

In Concept 1-A using EQU Equation 2.4, an "additional term" is introduced as an action from the left support:

Gb,k b γG,dst ≤ Ga,k a γG,stb + Ak 2a γG,stb
where Ga,k and Gb,k are the characteristic values of the two components of the self weight and Ak is the
characteristic value of the necessary tension force of the left support to satisfy limit state EQU. The tension force is the characteristic value of the action (tension) to design the support or an anchorage. If a = b and Ga = Gb = G the tension force is:

Ak ≥ (Gk γG,dst − Gk γG,stb)/2γG,stb = 0.2 Gk/1.80 = 0.11 Gk

In the next step the anchor can be designed fulfilling the expression for limit state STR for anchors (Equation 8.1 in EN 1997-1):

Ad ≤ Rd

where Rd is the design value of the resistance of the anchor, which is:

Rd ≥ Ad = γG Ak = 1.35 × 0.11 Gk = 0.148 Gk

In Concept 2, EQU equation 2.4 is used to determine the necessary design value of the anchor resistance directly in the form:

Rd ≥ Ad = (Gb,k b γG,dst − Ga,k a γG,stb)/2a = 0.1 Gk (for a = b and Ga = Gb = G)

Table 4. Results of the verifications of example 4 – beam structure.

Concept 1-A (EN 1990: Table A1.2(A), Note 1 and Table A1.2(B); γG,stb = 0.90, γG,dst = 1.10, γG = 1.35):
– EQU-ULS expression: Mdst,d ≤ Mstb,d, i.e. Gb,k b γG,dst ≤ Ga,k a γG,stb + Ak 2a γG,stb
– Characteristic tension force (a = b, Ga = Gb = G): Ak ≥ (Gk γG,dst − Gk γG,stb)/2γG,stb = 0.2 Gk/1.80, i.e. Ak ≥ 0.11 Gk
– Design value of the action on the anchor: Ad = Ak γG = 0.11 Gk γG = 0.148 Gk
– Design resistance of the anchor: Rd ≥ Ad = 0.148 Gk

Concept 1-R (EN 1990: Table A1.2(A), Note 2; γG,stb = 1.15, γG = 1.35):
– EQU-ULS expression: Mdst,d ≤ Mstb,d, i.e. Gb,k b γG,dst ≤ Ga,k a γG,stb + Ad 2a
– Characteristic tension force Ak: not required
– Design value of the action on the anchor: Ad ≥ (Gk γG,dst − Gk γG,stb)/2 = 0.1 Gk
– Design resistance of the anchor: Rd ≥ Ad = 0.1 Gk

Concept 2 (EN 1990: Table A1.2(A), Note 1; γG,stb = 0.90, γG,dst = 1.10):
– EQU-ULS expression: Mdst,d ≤ Mstb,d, i.e. Gb,k b γG,dst ≤ Ga,k a γG,stb + Ad 2a, with Ad = (Gb,k b γG,dst − Ga,k a γG,stb)/2a
– Characteristic tension force (a = b, Ga = Gb = G): Ak = 0.1 Gk/1.35 = 0.074 Gk
– Design value of the action on the anchor: Ad = (Gb,k γG,dst − Gb,k γG,stb)/2 = 0.1 Gk
– Design resistance of the anchor: Rd = Ad = 0.1 Gk
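The Table 4 results can be verified with a few lines of arithmetic; this sketch simply re-evaluates the expressions above for a = b and Ga = Gb = G, with Gk taken as a unit value.

```python
# Numeric check of the Example 4 anchor design for a = b and Ga = Gb = G.
Gk = 1.0   # characteristic self weight of one beam component (unit value)

# Concept 1-A: EQU with the anchor force treated as a stabilising action
Ak = (1.10 - 0.90) * Gk / (2 * 0.90)   # = 0.111 Gk (0.11 Gk in Table 4)
Rd_1A = 1.35 * Ak                      # Ad = gamma_G * Ak, approx. 0.148 Gk

# Concept 1-R: combined factor set, anchor force treated as a resistance
Rd_1R = (1.35 - 1.15) * Gk / 2         # = 0.10 Gk

# Concept 2: EQU equation gives the design anchor resistance directly
Rd_2 = (1.10 - 0.90) * Gk / 2          # = 0.10 Gk

print(round(Ak, 3), round(Rd_1A, 3), Rd_1R, Rd_2)
```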
7 CONCLUSION

Situations in which actions balance each other are important and must be properly considered in design. "EQU" is defined in the Eurocodes to accommodate this, but it is clear that its definition and use requires further development, both for geotechnical and structural design. This paper has summarised some alternative views by means of examples involving both actions and resistances. The authors hope that these will contribute to clarification of EQU.

REFERENCES
Bond, A.J. & Harris, A.J. (2008), Decoding Eurocode 7, Taylor and Francis, London, 598 pp.
CEN (2002), Eurocode: Basis of design (EN 1990), European Committee for Standardization, Brussels.
CEN (2004), Eurocode 7: Geotechnical design – Part 1: General rules (EN 1997-1), European Committee for Standardization, Brussels.
Frank, R., C. Bauduin, R. Driscoll, M. Kavvadas, N. Krebs Ovesen, T. Orr & B. Schuppener (2004), Designers' Guide to EN 1997-1, Eurocode 7: Geotechnical design – Part 1: General rules, Thomas Telford, London.
Geotechnical criteria for serviceability limit state of horizontally loaded deep foundations Masahiro Shirato, Tetsuya Kohno, & Shoichi Nakatani Bridges and Structures Research Group, Center for Advanced Engineering Structural Assessment and Research (CAESAR), Public Works Research Institute (PWRI), Tsukuba, Japan
ABSTRACT: This paper presents a first approximation of the elastic limit displacement of soil resistance to horizontally loaded piles. A total of 37 field test data sets were consistently retrieved from a vast database. Both mathematical and graphical approaches were adopted to interpret the elastic limit of soil resistance from the measured load-displacement curve. The data sets indicate that the mean value of the elastic limit displacement is 5–6% of the pile diameter, and its coefficient of variation is approximately 40–60%. A design horizontal threshold displacement can also be proposed as 2–4% of the pile diameter, considering the variation in elastic limit displacement.
1 INTRODUCTION
The concept of serviceability limit state is widely accepted in foundation design and usually takes into consideration the influence of foundation displacement on supporting structures. Tolerable displacement is affected by many aspects including structure type, materials, and design philosophy. While tolerable displacements have been proposed by many researchers, especially for buildings, earlier study is limited for highway bridge foundations. For example, Moulton (1985) proposed a tolerable displacement based on practical experience in the U.S. Recently, reliability-based study on the allowable displacement of foundations appears to have progressed (Phoon and Kulhawy 2008). As for highway bridges, Zhang and Ng (2005) have given reliability-based criteria on the tolerable displacement (i.e., settlement) of highway bridge foundations based on experiences in Hong Kong, and they have also commented that abutment displacement due to the movement of approach embankments is a primary concern rather than the settlement of foundations. Since 1990, the Japanese Specifications for Highway Bridges (JRA 2002) has required verification of the tolerable displacement of foundations in terms of the following two aspects: 1) tolerable displacement based on the required performance or design of the superstructure; and 2) tolerable displacement based on the required performance of the foundation. The first tolerable displacement is equivalent to the serviceability limit state mentioned above. The second tolerable displacement, traditionally considered critical in foundation design due to Japan's earthquake-prone nature, is the elastic limit displacement of foundations. While structures frequently undergo
small-scale earthquakes, residual displacement of the foundation from each frequent-scale earthquake is not allowed. This study deals with the second tolerable displacement. It is worth noting that the first tolerable displacement is not critical in design except for the lateral movement of abutment foundations on soft ground, and preventive measures are usually applied in such cases. In addition, due to their long length, highway bridge superstructures are not sensitive to the unequal vertical displacement of neighboring substructures. The second tolerable displacement must ensure that the foundation provides the superstructure with a steady-state, reversible restoring resistance against frequent loads throughout its service life. The required bridge performance in the Japanese Specifications for Highway Bridges under normal, wind, and frequent earthquake design conditions is that the bridge must remain fully functional without changing the way it responds to loads. Accordingly, the corresponding limit state for the foundation is the elastic limit. Sectional stress in shafts must not go beyond the allowable stress from a structural viewpoint, and the load effect at the foundation top must not exceed the allowable elastic bearing capacity. Horizontal soil resistance is, in reality, nonlinear even at very small displacement levels. However, if the foundation displacement remains within a certain level and no notable residual displacement appears, the horizontal soil resistance to the foundation can be expected to maintain a steady state, avoiding possible adverse effects. In addition, the elastic limit state offers an advantage in structural design: the soil resistance can be modeled as reversible (elastic) and independent of load history within the tolerable foundation displacement. If foundation resistance and
displacement were assumed to be a function of load history, designers would theoretically have to trace the entire history of foundation displacement against the entire load history over the expected service life, which is unacceptable in practice. Traditionally, the allowable horizontal bearing capacity is defined with a displacement at the design ground level. The Japanese Specifications for Highway Bridges give the allowable horizontal displacement as the larger of 1% of the pile diameter or 15 mm under normal, wind, and frequent earthquake design conditions. The engineering background of these empirical values is given by Okahara et al. (1991a, 1991b), who collected in-situ test data on piles subjected to horizontal loads and examined the elastic limit displacement on the observed load-displacement curves. Also, 1% of the pile diameter is reasonable from a practical viewpoint because the typical design calculation model is considered valid only for smaller displacements. However, it is not known whether the elastic limit points identified on the load-displacement curves in those earlier studies are attributable to soil resistance or to shaft resistance. Structural verification is implemented separately in design, and the threshold displacement should therefore be a function of soil resistance, not shaft resistance. In addition, larger-diameter shafts and stronger materials are now available, and a new nonlinear design calculation model was recently introduced along with a new seismic design of foundations under rare earthquake conditions. Technically, the foundation response at larger displacement levels can now be calculated to take advantage of shaft strength, so the traditional values have come to be seen as a barrier against newer technologies. This study re-examines the elastic limit of horizontal soil resistance using a PWRI database of field tests on piles subjected to horizontal loads. First, we propose an interpretation for extracting the elastic limit of soil resistance from the load-displacement curve measured in testing. Then, the field test data sets are analyzed consistently to illustrate the statistical characteristics of the elastic limit displacement in terms of soil resistance, so that we can furnish model statistics for reliability calibrations. Finally, we propose a revised design tolerable horizontal displacement of piles under the reliability design concept.
2 BEHAVIOR OF HORIZONTALLY LOADED PILES
Typical nonlinear behavior of horizontally loaded piles is shown in Figure 1. Nonlinearity in the total pile resistance is attributed to nonlinearity in the soil and shaft resistances, as indicated by 'A' and 'X' in Figure 1. Accordingly, the shaft must not become plastic if the elastic limit in soil resistance, marked with 'A' in Figure 1, is to be identified from a total load-displacement curve.
Figure 1. Typical nonlinear behavior of horizontally loaded piles.
Finally, the load test data should satisfy the following requirements for identifying the elastic limit point of soil resistance:

A. The load-displacement curve has clear nonlinearity.
B. It can be assumed that the shaft will not become plastic up to a large displacement level.
3 INTERPRETATION OF ELASTIC LIMIT POINT
There is no specific definition for identifying the elastic limit point of soil resistance from a load-displacement curve, and both mathematical and graphical approaches are available for determining the point at which the transition occurs from the initial linear portion to the second linear portion of a load-displacement curve (e.g., Hirany and Kulhawy, 1989). A mathematical model is basically employed in this study because it is considered more robust than a graphical approach. This study employs exponential (or Weibull) curve fitting, following Uto et al. (1985) and Okahara et al. (1991a, 1991b).
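Equation (1) is not legible in this copy; consistent with the definitions that follow, the fitted Weibull curve presumably takes the form

\[
R = R_{uw}\left[1 - \exp\left\{-\left(\frac{d}{d_0}\right)^{m}\right\}\right] \tag{1}
\]

so that the bracketed term equals 1 - e^{-1} at d = d0 for any m, consistent with the elastic limit load quoted below.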
where Ruw = ultimate total pile resistance estimated via exponential fitting, d = displacement, d0 = elastic limit displacement estimated in the exponential fitting, B = pile diameter, and m = a constant that defines the shape of the curve. Typical curves are shown in Figure 2. The load level corresponding to the elastic limit displacement is always R0 = Ruw(1 − e^{−1}) = 0.63Ruw for any m. For the sake of simplicity, this study assumes m = 1, except for drilled shafts, for which a trial-and-error analysis showed that this assumption results in errors. Also regarding drilled shafts, the data points below a displacement level of 0.01B are omitted in the exponential (Weibull) fitting analysis, because a pseudo elastic limit could appear on the load-displacement curves due to the development of cracks in the cover concrete prior to the yield of the longitudinal reinforcement.
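As an illustration of this fitting step, a minimal sketch in Python (the load test record below is hypothetical, and scipy's least-squares routine stands in for whatever fitting procedure was actually used):

import numpy as np
from scipy.optimize import curve_fit

def weibull_curve(d, R_uw, d0):
    """Exponential (Weibull) model with shape constant m = 1:
    R = R_uw * (1 - exp(-d / d0))."""
    return R_uw * (1.0 - np.exp(-d / d0))

# Hypothetical load test record: displacement d (m) and load R (kN).
d = np.array([0.002, 0.005, 0.010, 0.020, 0.040, 0.060, 0.080])
R = np.array([180.0, 380.0, 620.0, 900.0, 1150.0, 1260.0, 1310.0])

# Least-squares fit; p0 provides rough starting values for R_uw and d0.
(R_uw, d0), _ = curve_fit(weibull_curve, d, R, p0=[R.max(), 0.02])

# The elastic limit load is attained at d = d0: R0 = 0.63 * R_uw for any m.
R0 = R_uw * (1.0 - np.exp(-1.0))
print(f"d0 = {d0:.4f} m, R_uw = {R_uw:.0f} kN, R0 = {R0:.0f} kN")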
Figure 2. Weibull curve.
4 GRAPHICAL METHOD TO DETERMINE ELASTIC LIMIT
A graphical method is sometimes considered more informative than a mathematical one from the viewpoint of physical meaning; accordingly, an alternative graphical method proposed by Okahara et al. (1991a, 1991b) is also applied to confirm the mathematical method. Figure 3(a) shows the results of a cyclic load test. From the results, sets of the residual displacement at an unloaded point of R = 0 and the preceding peak displacement are extracted as [1], [2], [3], [4], and [5], as shown in Figure 3(b), and plotted in Figure 3(c). It can be seen in Figure 3(c) that a sudden change in the increment of residual displacement occurs at a peak displacement level of d1. Therefore, this displacement d1 can be regarded as the elastic limit displacement because, if the pile did not go beyond this threshold displacement, no residual displacement would remain and the initial soil resistance property could still be mobilized.
5 SELECTION OF SINGLE PILE LOAD TEST DATA
For single piles, only in-situ tests are dealt with here. The data adopted here satisfy the following conditions:

1) The pile is as slender as possible, can be assumed to have a semi-infinite length, and is a free-head, straight-sided pile.
2) The height of the loading point above the ground level is less than the pile diameter, B.
3) The maximum displacement level is larger than 5% of the pile diameter, B.
4) The observed maximum load in the load test is larger than 1.2 times the elastic limit load, R0, obtained through exponential fitting.
5) The shaft will not become plastic up to a load level of 1.2 times the calculated elastic limit load, R0.
Items 3), 4), and 5) are associated with requirements A and B mentioned earlier. A very simple method is employed to verify Item 5): using a design calculation model based on the theory of a beam on a Winkler foundation, the maximum bending moment in the shaft at a load level of 1.2R0 is estimated.

Eventually, 37 data sets were available. The observed and fitted load-displacement curves are shown in Figure 4. The displacement at the ground level was adopted if available; if not, the displacement at the point closest to the ground level was taken. If neither was available, the displacement at the loading point was adopted. For cyclic load tests, only the backbone curves are shown in Figure 4.

A summary of the data sets used here is given in Table 1 and Figure 5 in terms of construction type and pile geometry. Approximately 81% of the employed data sets fall into the category of non-composite (or typical) piles, and the rest fall into the category of composite piles. Driven piles, large-diameter screw piles, and inner-augered compressively installed piles are classified as non-composite piles; pre-bored piles and steel pipe/soil cement piles are classified as composite piles. In the data selection process, all drilled shaft tests were excluded due to nonconformance with Item 5) above.

Inner-augered compressively installed piles are constructed as follows: (1) a pile is set, and an auger is used to drill through the pile, making a hole with a diameter somewhat smaller than the pile diameter; (2) as the auger drills lower, the pile is simultaneously pushed into the ground; (3) steps (1) and (2) are repeated until the pile reaches a predefined depth. Pre-bored piles are constructed as follows: (1) the soil is augered to a diameter of B over a predefined pile length; (2) fluid cement is mixed with the residual soil in the augered hole; (3) a factory-made high-strength prestressed concrete pile with a diameter somewhat smaller than B is placed into the augered hole; the soil cement mixture made in step (2) rises up and fills the space between the soil and the pile so that a composite pile is formed. Steel pipe/soil cement piles are constructed as follows: (1) a soil cement column with a diameter of B is constructed; (2) a pile with a diameter somewhat smaller than B is pushed into the soil cement column, so that it becomes a composite pile together with the soil cement column; steps (1) and (2) can be conducted simultaneously or separately.

The augered diameter B is somewhat larger than the diameter of the inserted factory-made pile in pre-bored piles and steel pipe/soil cement piles. This study treats the diameter B as the representative diameter of these piles, because the soil cement part should respond together with the factory-made pile and is directly subjected to the soil resistance. The bending rigidity of the shaft is set equal to that of the inserted factory-made pile.
Figure 3. Graphical analysis for estimating threshold displacement in terms of rapid evolution of residual displacement in cyclic load test.
Figure 4. Observed, normalized observed, and fitted load-displacement curves.

Table 1. Number of load test results in terms of construction methods for single piles.

Category                  Construction method                            # of data   Mean of d0/B*   COV of d0/B*
Non-composite (typical)   Driven pile                                    21          0.0599          0.39
                          Large-diameter screw pile                       7          0.0693          0.22
                          Inner-augered compressively installed pile      2          0.0515          0.23
Composite                 Steel pipe/soil cement pile                     6          0.0356          0.28
                          Pre-bored pile                                  1          0.0329          –

* d0 = elastic limit displacement obtained by Weibull fitting analysis.
6 STATISTICAL CHARACTERISTICS OF ELASTIC LIMIT OF HORIZONTAL SOIL RESISTANCE
First, the cyclic load test data are chosen from the load test data sets employed above. Although the number of cyclic load test data is limited, the elastic limit displacement level identified through graphical analysis, d1/B, is compared with that estimated through exponential fitting, d0/B. One of the results is shown in Figure 3, in which an exponential curve fitted to the backbone curve is shown in the graph on the left-hand side; the elastic limit point is marked on the fitted curve. Figure 6 compares the elastic limit displacement levels derived using the graphical and mathematical methods. The number of employed load tests, n, eventually totals twelve. The results derived using the two methods agree well with each other, which confirms that both methods are capable of capturing the elastic limit of soil resistance. The bias factor, λ = d1/d0, has a mean value of 1.01 and a coefficient of variation (COV) of 0.45, computed using √(σ²/(n − 1)), in which σ = standard deviation and n = number of load tests. In addition, the mean and COV of d1/B interpreted via the graphical method were 0.048 and 0.39, respectively. Then, using all the data, the elastic limit displacement estimated through exponential fitting, d0, is plotted in the left panel of Figure 7 versus the pile diameter, B. A displacement level of 20 mm safely covers almost all the data. However, there is a tendency for the value of d0 to increase with an increase in the pile diameter, B. Therefore, the normalized displacement level, d/B, which more or less represents the strain in the soil in front of the pile, is used to characterize the elastic limit. The relationship between the elastic limit displacement level, d0/B, and the pile diameter is shown in the right panel of Figure 7.
Figure 5. Frequencies in pile geometry and types of construction.
Figure 6. Comparison of elastic limit displacements, d1 and d0 , obtained through graphical analysis and exponential fitting (N.C. = non-composite piles and C. = composite piles).
Figure 7. Relationship between estimated elastic displacement level and pile diameter (N.C. = non-composite piles and C. = composite piles).
The data are distributed around a mean value of 0.0565B with a COV of 0.39. The calculated normalized exponential curve with the mean value of d0/B, 0.0565, and m = 1 is also shown in the right panel of Figure 4. The empirical distribution of the elastic limit displacement level, d0/B, is shown in Figures 8 and 9. It should be noted that there are small variations in the number of tests cited here because some test results in the
database do not have sufficient boring log data. Based on Figure 8, the value of d0/B for composite piles tends to be somewhat smaller than that for typical non-composite piles. Composite piles have a soil cement layer around the factory-made shaft; the development of cracks in the soil cement part during loading could therefore result in nonlinearity in the load-displacement curve even at smaller displacement levels. For comparison, all driven piles in the adopted data sets are steel pipe piles.
Figure 8. Variation in elastic limit displacement level, d0 /B, of horizontally loaded piles for each pile type (N.C. = non-composite piles and C. = composite piles).
Figure 9. Relationship of elastic limit displacement level, d0 /B, with soil type and SPT-N value (N.C. = non-composite piles and C. = composite piles).
As shown in Table 1, while the mean value of d0/B for the driven piles is 0.0599, the mean value for the steel pipe/soil cement piles is 0.0356. The relationship between the estimated elastic limit displacement level, d0/B, soil type, and soil stiffness or strength, i.e., the SPT-N value, is shown in Figure 9. For each pile load test, the predominant soil type depends on the subsoil layers within the characteristic length of the pile as defined in the Specifications for Highway Bridges, in which 1/η = characteristic pile length, η = [kB/(4EI)]^{1/4}, E = Young's modulus of the shaft, I = second moment of inertia of the shaft section, and k = subgrade reaction coefficient. The total depths of the sandy subsoil layers and the clayey subsoil layers are calculated, and the predominant soil type is the one with the larger total depth. The subgrade reaction coefficient for calculating the characteristic pile length is obtained using an empirical formula in the Japanese Specifications for Highway Bridges. The SPT-N value is the mean of the measured values within the characteristic pile length. The difference in the estimated elastic limit displacement level, d0/B, resulting from the difference in soil type or SPT-N value is small.

7 DESIGN THRESHOLD DISPLACEMENT AND ITS RELIABILITY
A general design equation can be expressed as
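The equation itself is not legible in this copy; it presumably expresses the requirement that the resistance capacity exceed the load effect, i.e.,

\[
R \ge S \tag{2}
\]

with R the resistance capacity and S the load effect.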
As long as the resistance capacity is larger than the load effect, there is a margin of safety for the limit state under consideration. However, as both the resistance capacity and load effect are considered to be random variables, factored resistance and loads are usually used in design to achieve a predefined safety margin. In terms of the soil resistance to horizontally loaded piles, Eq. (2) can be rewritten as:
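Equation (3) is likewise not legible; consistent with the symbols defined immediately below, it presumably reads

\[
\phi\, d_L \;\ge\; d\!\left(\gamma Q\right) \tag{3}
\]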
in the Load and Resistance Factor Design (LRFD) format, in which φ = resistance factor, dL = elastic limit displacement, γ = load factor, Q = load, and d = calculated pile displacement in design with factored load γQ. A crude estimation of the threshold value considering uncertainty in the elastic limit displacement is still better than that obtained using a deterministic approach. Accordingly, this study presents a conditional reliability analysis, disregarding the statistical issues in the load effect, d, and applying the loads in the current Japanese Specifications for Highway Bridges as the deterministic factored values. Eventually, Eq. (3) can be rewritten as
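Again the equation is not legible; from the definition that follows, it presumably reads

\[
d_d \;=\; \phi\, d_L \;\ge\; d\!\left(\gamma Q\right) \tag{4}
\]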
where dd = threshold displacement. The elastic limit displacement level, dL/B, is assumed to follow a log-normal distribution. Hence, the first-order second-moment (FOSM) method gives a relationship for the
design threshold displacement level, dd /B = φdL /B, and reliability index, β, as follows:
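The relationship itself is not legible in this copy; a form consistent with the log-normal assumption and with the numerical values reported below is

\[
\frac{d_d}{B} \;=\; \phi\,\frac{d_L}{B} \;=\; \frac{\mu_{d_L/B}}{\sqrt{1+\mathrm{COV}_R^{2}}}\;\exp\!\left(-\beta\sqrt{\ln\!\left(1+\mathrm{COV}_R^{2}\right)}\right) \tag{5}
\]

where µ_{dL/B} is the mean of dL/B.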
where COVR = coefficient of variation of the resistance capacity, dL/B. The mean and COV of the resistance capacity, dL/B, can be estimated based on the results of the exponential fitting analysis and the graphical analysis. Although Figure 8 suggested that there may be a difference in uncertainty due to the difference in construction type, this study does not take this difference into account, assuming that the elastic limit in the horizontal soil resistance is not a function of the behavior of the soil cement part in composite piles. If the graphical interpretation is considered more reasonable, the transformation error in the elastic limit displacement between the exponential fitting analysis and the graphical analysis should be incorporated into the mean and COV of dL/B. Thus, the mean and coefficient of variation of the elastic limit displacement level, dL/B, are dL/B = λ × 0.0565 ≈ 0.057 and COVR = (0.39² + 0.45²)^{0.5} = 0.60. Accordingly, based on all the results obtained above, the mean and COV of dL/B are likely to fall somewhere in the ranges of 0.05 to 0.06 and 40% to 60%, respectively. Eventually, this study sets the mean and COV of dL/B at 0.055 and 50%.

Estimation of the target reliability index, βT, is attempted here. As mentioned above, the allowable horizontal displacement in the current specifications is based on Okahara et al. (1991a, 1991b), who adopted a theory equivalent to Eq. (5) and considered a reliability level of approximately 1 using the load test data that they compiled. Accordingly, the target reliability level should be set around that value. In addition, the typical safety margin associated with the design of shallow foundations and piles subjected to vertical loads can assist in the calculation. In the current Specifications for Highway Bridges, safety factors µ = 3 and 2 are applied to the ultimate bearing capacity to estimate the allowable bearing capacity under normal and frequent earthquake design conditions, respectively, and the allowable bearing capacity is considered to provide foundations with a sufficient safety margin for both serviceability (i.e., the elastic limit state) and safety (i.e., the ultimate bearing capacity) (Nakatani et al. 2007). The idea here is that equivalent safety factors should be applied to the horizontal soil resistance. A generalized load-displacement curve can be modeled as
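Equation (6) is not legible in this copy; a form consistent with the m = 1 Weibull curve above and with the factor translation that follows is

\[
R = R_{uw}\left(1 - e^{-d/d_L}\right), \qquad R_L = R\!\left(d_L\right) = 0.63\,R_{uw} \tag{6}
\]

under which a safety factor µ applied to the ultimate resistance Ruw is equivalent to a factor of about 0.63µ applied to RL (e.g., 3 × 0.63 ≈ 1.89).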
where RL = soil resistance at the elastic limit load corresponding to the elastic displacement capacity, dL. Therefore, the safety factors µ = 3 and 2 applied to the ultimate horizontal soil resistance are translated into safety factors µ′ = 1.89 and 1.2 applied to the elastic limit soil resistance, RL, respectively. In addition, because the resistance factor φ is equal to 1/µ′, the safety factors µ′ = 1.89 and 1.2 correspond to resistance factors φ = 1/1.89 (= 0.53) and 1/1.2 (= 0.89). Therefore, Eq. (5) leads to the design threshold values and reliability levels. The design threshold displacement level, dd/B, at a resistance factor of φ = 0.53 is estimated as 0.022 with a reliability level of β = 1.67, and that at a resistance factor of φ = 0.89 is estimated as 0.038 with a reliability level of β = 0.54. Based on these results, the target reliability indices can eventually be rounded off to βT = 1.5 and 0.5 under normal and frequent earthquake design conditions, respectively. The corresponding values of the design threshold displacement level, dd/B, are 0.024 and 0.039, respectively. As it turns out, we can expect at least 0.02B under normal conditions and 0.035B under frequent earthquake conditions as the rounded-off design tolerable displacements for the serviceability limit state.
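As a quick arithmetic check, the FOSM relationship sketched above for Eq. (5) reproduces the reported thresholds when evaluated with the adopted mean of 0.055 and COVR of 50%; a minimal sketch in Python:

import math

def threshold(mean, cov, beta):
    """FOSM threshold for a log-normal dL/B: the value whose
    non-exceedance probability corresponds to reliability index beta."""
    xi = math.sqrt(math.log(1.0 + cov**2))   # log-space standard deviation
    lam = math.log(mean) - 0.5 * xi**2       # log-space mean
    return math.exp(lam - beta * xi)

for beta in (1.67, 0.54, 1.5, 0.5):
    print(beta, round(threshold(0.055, 0.50, beta), 3))
# Prints 0.022, 0.038, 0.024, and 0.039: the dd/B values quoted above.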
8 HORIZONTALLY LOADED PILE GROUPS

While the results shown above are associated with single piles, pile foundations usually comprise several piles with close spacing between neighboring piles, equal to 2.5–3.5 times the pile diameter. Accordingly, a rough analysis of the elastic limit displacement of pile groups is conducted for reference using exponential fitting. Eventually, a drilled shaft group and nine steel-pipe pile groups are adopted. To ensure the largest possible amount of data, laboratory test data are used as well as in-situ test data. The adopted data sets satisfy the following conditions:

1) The pile group comprises several rows of piles both parallel and perpendicular to the loading direction;
2) The observed maximum displacement level is larger than 5% of the pile diameter;
3) The observed maximum load in the load test is larger than 1.2 times the calculated yield load, R0, obtained using exponential fitting.

The relationship between the elastic limit displacement level, d0/B, estimated using exponential fitting, and the pile diameter, B, is shown in Figure 10. The solid and dashed lines indicate the tolerable displacements under normal and frequent earthquake design conditions as proposed above, and the proposed tolerable displacements cover most of the data on the safe side. This result supports the application of the proposed tolerable displacements to the general design of pile groups having close pile-to-pile spacing.

Figure 10. Elastic limit displacement obtained using the exponential fitting analysis for horizontally loaded pile groups.

9 CONCLUDING REMARKS
When the soil resistance to piles behaves within the elastic limit, the piles have a reversible restoring soil
resistance force, and are expected to be firmly supported over the years against service loads, wind loads, and frequent earthquakes. Accordingly, this study examined the statistical characteristics of the elastic limit displacement. The following results were obtained from this study.

1) Clarification of the philosophy of serviceability limit states in highway bridge design in Japan from the viewpoints of superstructure design and foundation design; the latter is the elastic limit of foundation restoring resistance as a structural component of highway bridges.
2) Verification of a simple mathematical method for identifying the elastic limit of soil resistance from the load-displacement curve in the horizontal load test of a pile.
3) When the elastic limit displacement was interpreted consistently from a number of field load test results, we obtained a mean value of 4–6% of the pile diameter with a coefficient of variation of 40–55%.
4) We found that the design threshold displacement at the ground level can be set to approximately 2–4% of the pile diameter for the serviceability limit state under normal and frequent earthquake conditions. These values are generally larger than the previously proposed values of 0.01B and 15 mm for the typical range of pile diameters in highway bridge foundations.
5) The proposed threshold displacements can also be considered relevant to the design of pile groups.

As a remaining issue, the behavior of the soil cement portion of steel pipe/soil cement piles and pre-bored piles requires investigation in order to establish more detailed serviceability limit criteria.

REFERENCES
Hirany, A. and Kulhawy, F. H. 1989. Interpretation of load tests on drilled shafts, Part 3: Lateral and moment. Foundation Engineering: Current Principles and Practices, 2, ASCE, New York, pp. 1160–1172.
Japan Road Association (JRA). 2002. Specifications for Highway Bridges. Tokyo.
Moulton, L. K. 1985. Tolerable movement criteria for highway bridges. Report FHWA/RD-85/107, Federal Highway Administration, Washington, DC, USA.
Nakatani, S., Shirato, M., Iochi, H., and Nomura, T. 2007. Stability check of highway bridge pile foundations under a performance-based design concept. PWRI Technical Note (4036), Public Works Research Institute. In Japanese.
Okahara, M., Nakatani, S., and Matsui, K. 1991a. A study on vertical and horizontal bearing characteristics of piles. JSCE J. of Struct. Engrg. 37, pp. 1453–1466. In Japanese.
Okahara, M., Takagi, S., Nakatani, S., and Kimura, Y. 1991b. A study on the bearing capacity of single piles and design method of column shaped foundations. PWRI Technical Memorandum (2919), Public Works Research Institute. In Japanese.
Phoon, K. K. and Kulhawy, F. H. 2008. Serviceability limit state reliability-based design. Chapter 9, Reliability-based design in geotechnical engineering: Computations and applications, pp. 344–384, Taylor & Francis.
Uto, K., Fuyuki, M., and Sakurai, M. 1985. An exponential mathematical model to geotechnical curves. International Symposium on Penetrability and Drivability of Piles, San Francisco, USA, pp. 1–6.
Zhang, L. M. and Ng, A. M. Y. 2005. Probabilistic limiting tolerable displacements for serviceability limit state design of foundations. Géotechnique, 55(2), pp. 151–161.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Reliability-based code calibration of piles based on incomplete proof load tests Jianye Ching National Taiwan University, Taipei, Taiwan.
Horn-Da Lin & Ming-Tso Yen National Taiwan University of Science and Technology, Taipei, Taiwan.
ABSTRACT: This paper addresses an issue often encountered in calibrating resistance factors for pile ultimate capacities based on a load test database. In practice, many pile load tests are not conducted to failure but only to a multiple (e.g., 2) of the design load. This leads to a difficult situation of incomplete information: for these test results, the ultimate bearing capacities of the test piles are unknown. How can these test results still be used to calibrate resistance factors for piles? A full probabilistic framework is proposed in this research to resolve this issue. A local pile test database for Taipei (Taiwan) is presented for demonstration. The analysis results show that the inclusion of the incomplete pile load test data helps in calibrating the resistance factors. Moreover, it is found that the calibrated resistance factors are consistent with the safety factors adopted in the current Taiwan design code.
1 INTRODUCTION
Recently, reliability-based design approaches have emerged as a new design paradigm for pile ultimate capacities because reliability is a consistent measure of uncertainties. For reliability-based design, one designs a pile so that its failure probability is in an acceptable range. Moreover, a single factor, called the resistance factor, is often used to quantify resistance uncertainties to facilitate reliability-based designs. In the case that the ultimate capacities of the test piles in a region are known, the resistance factors for that region can be calibrated conveniently based on the load test database, e.g., Barker et al. (1991), McVay et al. (2000), and Paikowsky et al. (2004). However, in practice many load tests are not conducted to ultimate capacity failure but only to twice the design load, depending on the local code provisions. Taking the bored piles of Taipei City, Taiwan, as an example, there are about 57 load tests in the local database, but only 8 of them were conducted to failure. Figure 1 illustrates the load-deformation curves of two piles in the database: one was loaded to failure, while the other was not. By the Davisson criterion (Davisson 1972), the ultimate capacity of the first pile can be identified, while that of the second pile is not identifiable because of the incomplete load-deformation curve. Although the load-deformation curve is incomplete, it does provide a certain amount of information. According to the load-deformation curve of the second pile in Figure 1, it is clear that the Davisson capacity of the pile is greater than the maximum test load (2436 tons).
Figure 1. Complete (a, left) versus incomplete (b, right) load-deformation curves.
Also, the actual Davisson capacity should be less than the load at point A in the figure (3300 tons), the Davisson capacity obtained if the load-deformation curve is extended linearly indefinitely (i.e., the dashed line). Therefore, the actual Davisson capacity of the pile should lie within [2436, 3300] tons. In the context of interpreting load test results of proof piles, Zhang (2004) addressed the issue of incomplete information and proposed a systematic method of updating reliabilities of within-site pile designs. By assuming the coefficient of variation (c.o.v.) of within-site pile capacities to be around 20%, he produced tables and charts that can answer important design questions. The focus of Zhang (2004) is to update the reliabilities of "within-site" piles; for calibrating resistance factors for a region, a different approach should be taken.
The focus of this paper is to propose a rigorous framework for calibrating resistance factors of piles based on an incomplete load test database. In many regional databases, the most information available for the majority of the test piles is bounds on their Davisson capacities. How can the regional resistance factors be calibrated based on such incomplete information? A test pile database for Taipei will be studied as a demonstration, and a full probabilistic framework will be proposed to resolve the issue of incomplete information. In principle, the proposed framework is not limited to the Taipei database: the same framework can be applied to other regional test pile databases with incomplete information to calibrate regional resistance factors.

2 DATABASE OF TAIPEI
This section describes a database of the load test results of 57 bored piles in the Taipei region. The basic information on the test piles is listed in Table 1. The diameters of the bored piles in the database range from 0.6 m to 2 m, and the lengths range from 13 m to 81 m; most piles have lengths of 30 m to 60 m. In the table, the column "Capacity" reports the Davisson capacity if the load test is complete; otherwise, the interval [L, U] of the Davisson capacity is reported, where the lower bound L and upper bound U are obtained from the load-deformation curve using the interpretation approach shown in Figure 1(b). Only the Davisson criterion is used in this paper, for illustrative purposes; for any particular purpose, suitable failure criteria should be selected. It is clear that there are only 8 complete load tests. The end-bearing layers for all piles are either sandstones or gravels, depending on their geographical locations. For each test pile, the soil profile from the borehole nearest to the test pile is taken. The soil profile includes the following information: (a) thickness of each layer; (b) classification of each layer; (c) SPT-N blow counts of each layer, reported either as a range or as an average value; (d) location of the water table; (e) unit weight of each layer.

3 QUANTIFICATION OF UNCERTAINTIES
Two types of uncertainties are considered in this paper: (a) uncertainties in the basic soil parameters and indices (including water table locations) and (b) modeling uncertainties. The former is due to the inherent variabilities of soils and rocks, measurement errors of the test methods (e.g., the reported SPT-N values may deviate from their actual values due to measurement errors in SPT), and transformation errors (e.g., the errors induced in the process of estimating friction angles from SPT-N values). The latter is due to the inaccuracy of the predictive (design) models for the ultimate pile capacities. What follows summarizes our ways of determining the probability density functions (PDFs) of these uncertainties based on the field data and prior information.
3.1 Uncertainties in soil parameters and indices

The uncertainties considered herein include: (a) the SPT-N value of each layer; (b) the unit weight of each layer; (c) the undrained shear strength of each clayey layer; (d) the friction angle of each sandy or gravelly layer; (e) the water table location at each pile; and (f) the end bearing capacity for the sandstone layer. These parameters/indices are necessary for predicting the ultimate bearing capacities of the test piles.

The average or the range of the SPT-N value (N) is available for each soil layer. Note that the reported SPT-N values are "measured values" rather than actual values: even if the SPT-N value of a layer is reported, its actual SPT-N value is still uncertain due to the measurement errors of SPT. It is desirable to model these uncertainties as probability distributions, described as follows. In the case where the range [Nlow, Nup] of the SPT-N value is reported for a layer, the uncertain SPT-N value is modeled as uniformly distributed over the [Nlow, Nup] interval. In the case where the average value of SPT-N is reported, the uncertain SPT-N value is modeled as lognormally distributed with mean equal to the reported average value and c.o.v. equal to 50%. The 50% c.o.v. is based on the 15%–45% measurement error c.o.v. estimated by Phoon (1995) but is made slightly larger to accommodate the inherent variability of SPT-N values.

The c.o.v. of the measurement errors for unit weights (γ) is negligible, while that of the inherent variability is around 9%. Therefore, in the case that the unit weight of a layer is reported, this value is taken as the mean value and the c.o.v. is taken as 10%. The distribution is taken to be lognormal because the unit weight cannot be negative. For Taipei clays, the ratio of undrained shear strength to vertical effective stress (su/σ′v) is roughly constant at 0.21 for depths greater than 12 m, with c.o.v. = 30%, and roughly 0.36 for depths within 12 m, with c.o.v. = 30%. These numbers are taken as the mean value and c.o.v. of the su/σ′v ratio of a clayey layer. The distribution is taken to be lognormal because the su/σ′v ratio cannot be negative. The φ vs. (N1)60 relation summarized in Chen (2004) is used to estimate the effective friction angle φ of sands and gravels. The standard deviation of this relation is roughly 3°, quantifying the magnitude of the transformation uncertainty of the φ vs. (N1)60 relation. If the location of the water table is known at a borehole, it is treated as known. However, there are boreholes where the location of the water table is not documented. For these cases, the depth of the water table is taken to be uniformly distributed over [0 m, 4.5 m]. This range is concluded from our Taipei borehole database, which shows that in most areas of Taipei the water table is between 0 m and 4.5 m deep.

Many of the test piles in the database rest on the sandstone layer. The end bearing capacities on the sandstone (qr) are very uncertain.
Table 1. Basic information of the 57 test piles. For the cases with incomplete load-deformation curves, [L, U] are the lower and upper bounds of the Davisson capacities; for the cases where the Davisson capacities are known, the capacities themselves are shown.

No.   Diameter (m)   Length (m)   Capacity (tons)   End-bearing layer   Ground water depth (m)
P1    0.9    42      [600, 1650]     Gravel      Unknown
P2    0.8    42      [500, 800]      Gravel      Unknown
P3    0.7    42      [400, 700]      Gravel      Unknown
P4    0.6    42      [320, 600]      Gravel      Unknown
P5    0.9    44.02   [482, 1100]     Gravel      Unknown
P6    1.5    46      [1120, 7000]    Gravel      Unknown
P7    1.5    47.25   [1120, 2100]    Gravel      Unknown
P8    1.2    46.35   [814, 1900]     Gravel      Unknown
P9    0.9    47.6    [482, 1900]     Gravel      Unknown
P10   1.2    47.4    [1100, 2400]    Gravel      Unknown
P11   1.2    46.35   [1100, 1650]    Gravel      Unknown
P12   1.3    33.6    [730, 2250]     Gravel      2.5
P13   1.2    28.4    [600, 1600]     Gravel      2.5
P14   1.2    34.2    [714, 800]      Gravel      2.5
P15   1.1    33.3    [610, 2200]     Gravel      2.5
P16   1.1    33.8    [510, 1500]     Gravel      2.5
P17   0.9    32.4    [410, 1200]     Gravel      2.5
P18   0.8    29.1    [405, 2000]     Gravel      2.5
P19   0.8    37      [434, 1000]     Gravel      2.5
P20   0.9    35      [322, 800]      Gravel      2.5
P21   0.9    34.1    [322, 800]      Gravel      2.5
P22   0.9    34.5    [322, 900]      Gravel      2.5
P23   1      34.7    [376, 1000]     Gravel      2.5
P24   1      32.1    [366, 1250]     Gravel      2.5
P25   1.4    35      [598, 2900]     Gravel      2.5
P26   1.1    33      [456, 5200]     Gravel      2.5
P27   1.4    56.3    [1016, 10000]   Gravel      Unknown
P28   1.4    56.1    [1016, 10000]   Gravel      Unknown
P29   1.3    54.9    [918, 10000]    Gravel      Unknown
P30   1      45.46   [817, 1500]     Gravel      0.0
P31   1      29      [650, 900]      Gravel      4.5
P32   1      30.85   664             Sandstone   0.0
P33   1      44.5    [596, 850]      Sandstone   3.0
P34   1.2    50      [2000, 10000]   Gravel      Unknown
P35   0.8    52      700             Sandstone   Unknown
P36   1.2    55      1540            Sandstone   Unknown
P37   1.2    41.3    1040            Sandstone   Unknown
P38   1.5    45.2    1900            Sandstone   Unknown
P39   1.2    50      1300            Gravel      Unknown
P40   1.2    54.5    3600            Gravel      Unknown
P41   1.2    50      [1450, 10000]   Gravel      [0, 4.5]
P42   1.5    66.1    [2194, 10000]   Sandstone   [0, 4.5]
P43   1.5    72.5    [2061, 10000]   Sandstone   [0, 4.5]
P44   1.5    74      [2061, 10000]   Sandstone   [0, 4.5]
P45   1.5    81.1    [2061, 10000]   Sandstone   [0, 4.5]
P46   1.2    53.85   [950, 10000]    Gravel      [0, 4.5]
P47   1.2    57.4    [1970, 22500]   Gravel      [0, 4.5]
P48   1      55.5    [1500, 1650]    Gravel      [0, 4.5]
P49   1.5    72.8    3000            Sandstone   [0, 4.5]
P50   1.5    79.8    [3550, 4400]    Sandstone   [0, 4.5]
P51   1.5    68.3    [2436, 3300]    Sandstone   Unknown
P52   2      37.5    [2000, 10000]   Gravel      Unknown
P53   1.2    50      [1380, 4500]    Gravel      Unknown
P54   1      13      [550, 1080]     Sandstone   Unknown
P55   1.5    23      [800, 1240]     Sandstone   Unknown
P56   1      29      [650, 1000]     Gravel      Unknown
P57   1      44.5    [600, 800]      Sandstone   Unknown
Table 2. The two deterministic predictive models adopted for the analysis.

Model    Skin friction                                    End bearing
SPT-N    Clay: fs = αSu (1)                               Clay: qb = 9Su
         Sand and gravel: fs = N/3 ≤ 15 T/m2              Sand and gravel: qb = 30N ≤ 1500 T/m2
                                                          Sandstone: qb = qr
Static   Clay: fs = αSu (1)                               Clay: qb = 9Su
         Sand and gravel: fs = 0.67σ′v tanδ ≤ fmax (2)    Sand and gravel: qb = σ′v Nq* ≤ qmax (3)
                                                          Sandstone: qb = qr

Remarks: N is the average SPT-N value of the soils near the pile tip; (1) α suggested by Tomlinson (1994); (2) fmax suggested by DM7-2 (1982); (3) Nq* and qmax suggested by DM7-2 (1982).
According to the code in Taiwan, the pile socket length is required to be greater than 2 to 3 meters (most test piles in our database satisfy this requirement). Under this condition, the end bearing capacity on sandstone can be predicted from its uniaxial compression strength. However, the actual end bearing capacities can still be very different from the predicted ones, depending on the construction quality of the pile tips. In the case that the construction techniques are relatively uniform and the bedrock quality does not vary significantly, it may be reasonable to assume that the qr of all test piles are identically distributed. Moreover, variabilities in qr due to construction quality variabilities can be assumed independent. Therefore, in this study the qr of all test piles are assumed to be independent and identically distributed. The distribution is taken to be lognormal because qr cannot be negative. The common mean value (µr) and c.o.v. (δr) are unknown. According to Chen (2000), the end bearing capacities in Taipei are usually in the range of [50 tons/m2, 300 tons/m2]; therefore, in this study µr is constrained to this range.

3.2 Modeling uncertainties
The most important uncertainty in the problem is perhaps the modeling uncertainty in predicting the ultimate bearing capacity of a bored pile. Predictions of pile bearing capacities can be made based on the following equation:
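The equation itself is not legible in this copy; from the definitions that follow it presumably reads

\[
R \;=\; Q_s + Q_b \;=\; f_s A_s + q_b A_b \tag{1}
\]

(with the skin-friction term summed over the soil layers along the shaft in practice).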
where R = ultimate bearing capacity of a pile, Qs = skin resistance of the pile shaft, Qb = end bearing resistance, fs = skin resistance stress, qb = end bearing stress, As = surface area of the pile shaft, and Ab = cross-section area of the pile tip. In Taiwan, there are two common methods of calculating fs and qb: the SPT-N and static methods, listed in Table 2. These two methods are developed primarily based on Taiwan's Foundation Design Code for Building (TGS 2001). The predicted capacity R is clearly a function of the
soil parameters and indices described in the preceding section. However, the actual capacity of a pile, denoted by C, will generally deviate from the predicted value R due to the inaccuracy of the predictive model. Let Cj be the measured Davisson capacity of the j-th pile in the database, and let Rj be the capacity predicted either by the SPT-N model or by the static model. The ratio between Cj and Rj is called the "model factor" ρj, i.e., ρj = Cj/Rj. This factor characterizes the model uncertainty. In most literature (e.g., Whitman (1984) and Barker et al. (1991)), the model factor ρj is assumed to be lognormally distributed. Furthermore, in the case that all pile tests are from the same region, it may be reasonable to assume that the model factors of different piles are identically distributed. Therefore, in this study it is assumed that the model factors of the 57 test piles are independent and identically distributed according to a lognormal distribution whose mean value (µρ) and c.o.v. (δρ) are unknown.

Among all the uncertain parameters and indices, there are four "hyperparameters": µρ, δρ, µr, and δr. These hyperparameters are special because the model factor ρ and the end bearing capacity qr are probably the most influential parameters in the entire model; other parameters, such as unit weights and undrained shear strengths, are relatively less influential. Moreover, these four parameters are special because all piles share the same hyperparameters. The consequence is that the capacities of different piles are dependent, although the uncertain soil parameters and indices of different piles are independent. These hyperparameters play crucial roles in calibrating resistance factors. From the perspective of Bayesian analysis, it is possible to update the PDFs of the hyperparameters using the test results of the 57 piles. After the updating, the design of a new pile can be made based on the updated PDFs of the hyperparameters, which have "absorbed" the information of the 57 piles. By doing so, calibration of resistance factors becomes possible. The hyperparameters therefore play the role of passing old information (the test results of the 57 piles) into new pile designs, because the 57 piles and new piles share the same hyperparameters.

3.3 Results of preliminary deterministic analysis

A preliminary study is performed to compare the actual Davisson capacities of the 57 piles with their predicted values. In order to compute the predicted capacities, all uncertain soil parameters and indices required by the SPT-N or static model, such as unit weights, shear strengths, and water table locations, are fixed at their average values. The end bearing capacity qr is fixed at 200 tons/m2. The predicted bearing capacities of the 57 piles and their actual Davisson capacities are compared in Figure 2 for both the SPT-N and static models. In the cases where the load-deformation curves are incomplete, the lower and upper bounds of the Davisson capacities are drawn in the figure.
Figure 2. The comparison between the Davisson capacities and the predicted ones: left is for the SPT-N model, while right is for the static model. The 'x' data points correspond to the piles with complete load-deformation curves, while the bars show the lower and upper bounds for the piles with incomplete curves.
At first sight, it is evident that the SPT-N model is more conservative, but its correlation with the data points with complete load-deformation curves (i.e., the 'x' data points in the figure) seems stronger than that of the static model. Moreover, from the scatter of the data points, it seems reasonable to assume the 57 model factors to be identically distributed, because the deviations of the predicted capacities from the actual Davisson capacities or from the Davisson bounds seem relatively homogeneous.

4 FULL PROBABILISTIC ANALYSIS
4.1 Calibration of resistance factors

Given the load test data of the 57 piles, it is possible to calibrate the reliability-based resistance factor for pile designs in Taipei. Reliability-based design can be achieved by restricting the failure probability of a new design conditional on the past data, i.e., by considering the following equation:
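The equation is not legible in this copy; from the definitions that follow it presumably reads

\[
P\left(C_{\mathrm{new}} \le c^{*} \,\middle|\, C_{1:57}\right) \;\le\; P_F^{*} \tag{2}
\]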
where Cnew is the uncertain bearing capacity of a new pile, and c∗ is the design load of the pile; Cnew ≤ c∗ is the failure event; C1:57 denotes our load test data; P(Cnew ≤ c∗ | C1:57) is the failure probability conditional on the database, which, in principle, can be computed with Bayesian analysis; and PF∗ is the target failure probability. Equation (2) can be rewritten as
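Again the equation is not legible; writing Cnew = ρnew R(Ynew, qr,new), it presumably reads

\[
P\left(\rho_{\mathrm{new}}\,R\!\left(Y_{\mathrm{new}},\,q_{r,\mathrm{new}}\right) \le c^{*} \,\middle|\, C_{1:57}\right) \;\le\; P_F^{*} \tag{3}
\]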
where ρnew is the model factor of the new pile, which is lognormally distributed with mean value µρ and c.o.v. δρ; qr,new is the sandstone end bearing capacity of the new pile, which is lognormally distributed with mean value µr and c.o.v. δr; and Ynew contains all uncertain soil parameters of the new pile site other than the sandstone end bearing capacity. Note that R(Ynew, qr,new) is uncertain because both Ynew and qr,new are uncertain. The mean values of the soil properties Ynew can be obtained from proper site investigations of the new pile site through in-situ and laboratory tests, while their variabilities can be estimated based on arguments similar to those in the section on "Quantification of Uncertainties" to obtain their prior PDFs. The relationship between the resistance factor and the target failure probability for the new pile design can be derived as follows:
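The derivation itself is not legible; normalizing by the nominal capacity, Eq. (3) presumably becomes

\[
P\!\left(\underbrace{\frac{\rho_{\mathrm{new}}\,R\!\left(Y_{\mathrm{new}},\,q_{r,\mathrm{new}}\right)}{R\!\left(Y_{\mathrm{new}}^{\,n},\,q_{r,\mathrm{new}}^{\,n}\right)}}_{G\left(\rho_{\mathrm{new}},\,Y_{\mathrm{new}},\,q_{r,\mathrm{new}}\right)} \le \eta \,\middle|\, C_{1:57}\right) \;\le\; P_F^{*} \tag{4}
\]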
where Y^n_new and q^n_r,new are the nominal values of Ynew and qr,new, usually taken to be their mean values, and η = c∗/R(Y^n_new, q^n_r,new) is exactly the resistance factor, which is the reciprocal of the safety factor. One can see that the resistance factor can be simply estimated as the 100·PF∗ percentile of G(ρnew, Ynew, qr,new) conditional on the data C1:57. If one can obtain stochastic simulation samples of G(ρnew, Ynew, qr,new) conditional on the data C1:57, denoted by {G1, …, GN}, the relationship between the resistance factor and the target failure probability can be estimated as follows, according to the Law of Large Numbers:
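The estimator itself is not legible; it presumably takes the standard empirical-CDF form

\[
P\!\left(G \le \eta \,\middle|\, C_{1:57}\right) \;\approx\; \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\!\left(G^{i} \le \eta\right) \tag{5}
\]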
where 1(·) is the indicator function: it is unity if the inside statement is true and zero otherwise. Once this relationship is obtained, the calibration of the resistance factor is achieved, since one can now determine the required resistance factor of a new pile design from the target failure probability. Regarding the question of how to draw samples of {G1, …, GN} conditional on the data C1:57: since G(ρnew, Ynew, qr,new) is a function of ρnew, qr,new, and Ynew, it suffices to draw samples of ρnew, qr,new, and Ynew conditional on the data C1:57, i.e., to draw samples from f(ρnew, Ynew, qr,new | C1:57). First of all, the soil parameters at the new site, Ynew, are independent of the past data C1:57, so drawing Ynew samples conditional on C1:57 is the same as drawing Ynew samples from their prior PDF. However, drawing {ρnew, qr,new} samples from f(ρnew, qr,new | C1:57) is more challenging, since the past data C1:57 contain information about {ρnew, qr,new}, i.e., they are dependent through the hyperparameters {µρ, δρ} and {µr, δr}. In fact, these samples of {ρnew, qr,new} play the role of conveying information from the test pile database to the new pile design. In the next section, the stochastic simulation techniques used to draw {ρnew, qr,new} samples conditional on C1:57 are discussed.
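As an illustration, Eq. (5) amounts to evaluating an empirical CDF of posterior samples of G; a minimal sketch in Python (the array G below is a synthetic placeholder for samples actually drawn as described in the next section):

import numpy as np
from scipy.stats import norm

def failure_prob(eta, G):
    """Eq. (5): fraction of posterior samples G_i with G_i <= eta."""
    return np.mean(G <= eta)

def resistance_factor(pf_target, G):
    """Invert Eq. (5): eta is the 100*PF* percentile of the G samples."""
    return np.quantile(G, pf_target)

rng = np.random.default_rng(0)
G = rng.lognormal(mean=0.2, sigma=0.35, size=100_000)  # placeholder samples

beta_target = 3.0                   # target reliability index beta*
pf_target = norm.cdf(-beta_target)  # PF* = Phi(-beta*)
print(resistance_factor(pf_target, G))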
4.2 Stochastic simulation of {ρnew, qr,new} samples conditional on past data C1:57

In the case that the soil parameters and indices of the 57 test piles, Y1:57, and the end bearing capacities, qr,1:57, are fixed at their nominal values, i.e., the test values Y^n_1:57 and some chosen values of q^n_r, the posterior PDF f(µρ, δρ, µr, δr | C1:57) has a very simple expression:
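The expression is not legible in this copy; by Bayes' rule with independent priors, it presumably reads

\[
f\!\left(\mu_\rho,\delta_\rho,\mu_r,\delta_r \,\middle|\, C_{1:57}\right) \;\propto\; f\!\left(C_{1:57} \,\middle|\, \mu_\rho,\delta_\rho,\mu_r,\delta_r\right) f\!\left(\mu_\rho\right) f\!\left(\delta_\rho\right) f\!\left(\mu_r\right) f\!\left(\delta_r\right) \tag{6}
\]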
where f(C1:57 | µρ, δρ, µr, δr) is called the likelihood function. In this study, the prior PDFs f(µρ), f(δρ), f(µr), and f(δr) are taken to be flat PDFs, i.e., no prior information on them. Therefore,
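the posterior is proportional to the likelihood, which presumably factorizes over the piles as

\[
f\!\left(\mu_\rho,\delta_\rho,\mu_r,\delta_r \,\middle|\, C_{1:57}\right) \;\propto\; \prod_{j=1}^{8} f\!\left(C_j \,\middle|\, \mu_\rho,\delta_\rho,\mu_r,\delta_r\right) \prod_{j=9}^{57} P\!\left(L_j \le C_j \le U_j \,\middle|\, \mu_\rho,\delta_\rho,\mu_r,\delta_r\right) \tag{7}
\]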
Figure 3. The time histories and histograms of the samples for the hyperparameters.
where j = 1, …, 8 indicates the piles with complete load-deformation curves (and therefore known Davisson capacities), while j = 9, …, 57 indicates the piles with incomplete load-deformation curves (for which only the lower and upper bounds {Lj, Uj} of the Davisson capacities are known). Since the model factors are independent and identically lognormally distributed,
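the factors in Eq. (7) presumably take the log-normal forms (writing ξρ² = ln(1 + δρ²) and λρ = ln µρ − ξρ²/2, and recalling Cj = ρj Rj)

\[
f\!\left(C_j \,\middle|\, \cdot\right) = \frac{1}{C_j \xi_\rho}\,\varphi\!\left(\frac{\ln\!\left(C_j/R_j\right) - \lambda_\rho}{\xi_\rho}\right),
\qquad
P\!\left(L_j \le C_j \le U_j \,\middle|\, \cdot\right) = \Phi\!\left(\frac{\ln\!\left(U_j/R_j\right) - \lambda_\rho}{\xi_\rho}\right) - \Phi\!\left(\frac{\ln\!\left(L_j/R_j\right) - \lambda_\rho}{\xi_\rho}\right) \tag{8}
\]

where φ(·) is the standard Gaussian PDF; for sandstone-bearing piles, Rj depends on the chosen end bearing values q^n_r.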
where Φ(·) is the cumulative distribution function (CDF) of the standard Gaussian PDF. The Markov chain Monte Carlo method can then be applied to obtain samples from f(µρ, δρ, µr, δr | C1:57). Once the {µρ, δρ, µr, δr} samples are obtained, the ρnew sample can easily be obtained from a lognormal PDF whose mean and c.o.v. are at the sampled values {µρ, δρ}, while the qr,new sample can easily be obtained from a lognormal PDF whose mean and c.o.v. are at the sampled values {µr, δr}. These samples are then used in (4) and (5) to obtain the η–PF∗ relation, i.e., to calibrate the resistance factors.
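A minimal sketch of this sampling step, assuming a random-walk Metropolis sampler over the model-factor hyperparameters only (the end-bearing hyperparameters and the soil-parameter sampling are omitted, and all data below are synthetic placeholders rather than the actual database):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Placeholder stand-ins for the database: predicted capacities R_j and, for
# complete tests, measured Davisson capacities C_j; for proof-load tests,
# the bounds [L_j, U_j] read off the load-deformation curves (all in tons).
R_c = np.array([900.0, 1200.0, 1500.0])          # complete tests, predicted
C_c = np.array([1000.0, 1500.0, 1600.0])         # complete tests, measured
R_i = np.array([800.0, 1100.0, 1300.0, 1800.0])  # incomplete tests, predicted
L_i = np.array([900.0, 1300.0, 1400.0, 2000.0])  # lower bounds
U_i = np.array([1400.0, 2100.0, 2200.0, 3100.0]) # upper bounds

def log_likelihood(mu, delta):
    """Log-likelihood of (mu_rho, delta_rho) for i.i.d. log-normal model
    factors: exact density for complete tests, CDF difference (interval
    censoring) for tests stopped before failure, as in Eqs. (7)-(8)."""
    if mu <= 0.0 or delta <= 0.0:
        return -np.inf
    xi = np.sqrt(np.log(1.0 + delta**2))   # log-space standard deviation
    lam = np.log(mu) - 0.5 * xi**2         # log-space mean
    z = (np.log(C_c / R_c) - lam) / xi
    ll = np.sum(norm.logpdf(z) - np.log(C_c * xi))     # complete tests
    p = (norm.cdf((np.log(U_i / R_i) - lam) / xi)
         - norm.cdf((np.log(L_i / R_i) - lam) / xi))   # censored tests
    if np.any(p <= 0.0):
        return -np.inf
    return ll + np.sum(np.log(p))

# Random-walk Metropolis over (mu_rho, delta_rho) with flat priors.
theta = np.array([1.0, 0.3])
ll_cur = log_likelihood(*theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(scale=[0.05, 0.03])
    ll_prop = log_likelihood(*prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:       # accept/reject
        theta, ll_cur = prop, ll_prop
    samples.append(theta)
samples = np.array(samples)[5_000:]                    # discard burn-in
print("posterior mean of (mu_rho, delta_rho):", samples.mean(axis=0))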
5 ANALYSIS RESULTS AND VERIFICATIONS

5.1 Samples of stochastic simulations

Ten thousand samples are drawn from f(ρnew, qr,new | C1:57) for both design models, and the burn-in periods are shown in Figure 3. The samples after the burn-in periods contain interesting information about the behaviors of the 57 test piles. For instance, the samples of the qr,new parameter, shown in Figure 3, are distributed as f(qr,new | C1:57). The histogram of the sample values, also shown in the figure, basically gives the relative degree of plausibility of the sandstone end bearing capacity learned from the data. If the SPT-N model is adopted for design, the sandstone end bearing capacity qr is on average 246 tons/m2 with a c.o.v. of 55%. The samples from f(ρnew | C1:57), also shown in Figure 3, give the possible range of the model factor ρ learned from the data of the 57 test piles. From the histograms in Figure 3, it is found that the mean value and c.o.v. of the model factor are 1.247 and 34.9% for the SPT-N model and 1.102 and 23.6% for the static model. Therefore, the actual Davisson capacity of a pile in Taipei is on average 1.247 times the capacity predicted by the SPT-N model, with a c.o.v. of 34.9%; for the static model, the actual capacity is on average 1.102 times the predicted one, with a c.o.v. of 23.6%. Figure 4 shows the comparison between the actual capacities and the nominal ones predicted by the SPT-N model multiplied by the average value of the ρ samples, where the nominal capacities are computed with all uncertain soil parameters held at their nominal values and the sandstone end bearing capacity held at the average value of the qr samples. It is clear that the predicted capacities unbiasedly reflect the actual ones, indicating that the analysis results are consistent. A similar conclusion is found for the static model.
Figure 4. The comparison between the Davisson capacities and the predicted ones multiplied by the average values of the ρ samples.
5.2 Relationship between resistance factor and target failure probability
The procedure introduced in the previous section is employed to estimate the relationship between the resistance factor and the target failure probability (or the target reliability index β∗ = −Φ−1(PF∗)). In general, the estimated relationship is not unique: it depends on the configuration of the new pile design, the soil profiles, and the chosen design model. This non-uniqueness imposes a difficulty, but it can be resolved by selecting a representative set of pile design scenarios in Taipei and estimating the η–β∗ relationships for all scenarios to obtain the possible range of the relationship. One can first determine the target failure probability PF∗ for pile designs in Taipei, then convert it into the target reliability index β∗ = −Φ−1(PF∗), so that the range of the required resistance factor can be found. In this paper, the sites of the 57 piles in the database are taken as the representative set of soil profiles in Taipei. Numerous scenarios of pile designs are considered, and the corresponding η–β∗ relationships are computed. It is found that the η–β∗ relationships strongly depend on the chosen design model, i.e., the SPT-N or static model. For the SPT-N model, the calibrated relation between the target reliability index and the required resistance factor is shown in Figure 5. In the case that only the eight tests with complete load-deformation curves are used for the calibration, the required resistance factor (see the left plot in Figure 5) is in the range of [0.2, 0.3] (required safety factor between 3.3 and 5) for the target reliability index β∗ = 3 and is less than 0.2 (required safety factor greater than 5) for β∗ = 4. In the current Taiwan code for pile designs, a safety factor of 3 (equivalent to a resistance factor of 0.33) is required for the bearing capacity consideration. From the left plot in Figure 5, such a resistance factor corresponds to a reliability index in the range of [2.3, 2.8], or a failure probability in the range of [0.003, 0.01].
Figure 5. The calibrated relation between target reliability index and resistance factor for the SPT-N design model. The left figure is based on the eight test results with complete load-deformation curves, while the right figure is based on all available information.
In the case that all information (the test results of all 57 piles) is used for the calibration, the required resistance factor (see the right plot in Figure 5) is in the range of [0.3, 0.45] (required safety factor between 2.2 and 3.3) for the target reliability index β∗ = 3 and is in the range of [0.22, 0.35] (required safety factor between 2.8 and 4.5) for β∗ = 4. For the code regulation of resistance factor = 0.33, the corresponding reliability index is in the range of [3, 4], or the failure probability is in the range of [0.00003, 0.001]. It is obvious that when the test results with incomplete load-deformation curves are incorporated into the analysis, the calibrated resistance factor increases significantly, i.e., the required safety factor decreases significantly. Notice that there is one curve in the left plot of Figure 5 where the required resistance factor is significantly less than the others. That curve corresponds to a very short pile resting on the sandstone layer; this phenomenon will be discussed in detail later. When the static design model is chosen, the calibrated η–β∗ relations are shown in Figure 6. Unlike the SPT-N model, the resistance factors calibrated with and without all available information are not very different. The required resistance factor is in the range of [0.3, 0.4] (required safety factor between 2.5 and 3.3) for the target reliability index β∗ = 3 and is in the range of [0.2, 0.28] (required safety factor between 3.6 and 5) for β∗ = 4. For the code regulation of resistance factor = 0.33, the corresponding reliability index is in the range of [2.8, 3.4], or the failure probability is in the range of [0.0003, 0.003]. Again, there are several curves in Figure 6 where the required resistance factor is significantly less than the others; these curves correspond to very short piles resting on the sandstone layer.

Figure 6. The calibrated relation between target reliability index and resistance factor for the static design model. The left figure is based on the eight test results with complete load-deformation curves, while the right figure is based on all available information.
6 CONCLUSION
(a) For the load tests with incomplete load-deformation curves, it is possible to identify the lower and upper bounds of the Davisson capacities. Moreover, it is evident that these incomplete load test data may be valuable in calibrating resistance factors. For the SPT-N design model, the calibrated resistance factors change significantly after the incomplete test results are taken into consideration.
(b) Based on all load test results (including the incomplete ones) in the Taipei region, the mean value and c.o.v. of the model factor are identified to be around 1.247 and 34.9% for the SPT-N design model, and around 1.102 and 23.6% for the static design model. This indicates that both design models are conservative; moreover, the SPT-N design model is on average more conservative than the static model. However, judging from its 34.9% c.o.v., the variability of the SPT-N model is larger than that of the static model.
(c) Based on the analysis results with all available data, the safety factor of 3 required by the Taiwan code corresponds to a reliability index between 3 and 4 (failure probability between 0.00003 and 0.001) for the SPT-N model, and to a reliability index between 2.8 and 3.4 (failure probability between 0.0003 and 0.003) for the static model. For designs of pile ultimate capacity, the required safety factor is usually around 3. Therefore, the safety factor of 3 required in the Taiwan code seems reasonable.
(d) The calibrated resistance factors for short piles resting on the sandstone layer are particularly small. This is because their end bearing capacities dominate their ultimate capacities and the end bearing resistance qr of the sandstone layer is very uncertain. The ultimate capacities of these short piles are therefore also very uncertain, hence small resistance factors may be required to compensate for the high uncertainties. It is found that for piles longer than 30 m, the end bearing capacity ceases to dominate, and their η–β∗ relations fall into the ordinary ranges shown in Figures 5 and 6. Consequently, when implementing the resistance factors recommended in the previous section, one should be careful with piles that are short (shorter than 30 m) and rest on the sandstone layer; a much smaller resistance factor should be adopted for such piles.
Figure 6. The calibrated relation between target reliability index and resistance factor for the static design model. The left figure is based on the eight test results with complete load-deformation curves, while the right figure is based on all available information.
REFERENCES
Barker, R.M., Duncan, J.M., Rojiani, K.B., Ooi, P.S.K., Tan, C.K. & Kim, S.G. (1991). Manuals for the Design of Bridge Foundations. NCHRP Report 343, TRB, National Research Council, Washington, DC.
Chen, D.S. (2000). Pile types, bearing capacity and mechanisms – considerations in construction. Publication D19, Taiwan Construction Research Institute, 1–33.
Chen, J.R. (2004). Axial Behavior of Drilled Shafts in Gravelly Soils. PhD thesis, Cornell University.
Davisson, M.T. (1972). High capacity piles. Proceedings, Lecture Series, Innovations in Foundation Construction, ASCE, Illinois Section.
DM 7-2 (1982). Foundations and Earth Structures – Design Manual 7.2. Department of the Navy, Naval Facilities Engineering Command, Alexandria, VA.
McVay, M.C., Birgisson, B., Zhang, L.M., Perez, A. & Putcha, S. (2000). Load and resistance factor design (LRFD) for driven piles using dynamic methods – A Florida perspective. Geotech. Test. J., 23(1), 55–66.
Paikowsky, S.G., Birgisson, B., McVay, M., Nguyen, T., Kuo, C., Baecher, G., Ayyub, B., Stenersen, K., O'Malley, K., Chernauskas, L. & O'Neill, M. (2004). Load and Resistance Factor Design (LRFD) for Deep Foundations. NCHRP Final Report 507, Transportation Research Board, Washington, DC.
Phoon, K.K. (1995). Reliability-based Design of Foundations for Transmission Line Structures. PhD dissertation, Cornell University.
Tomlinson, M.J. (1994). Pile Design and Construction Practice. Fourth ed., E & FN Spon, London.
TGS, Taiwan Geotechnical Society (2001). Foundation Design Code for Building. Taiwan Geotechnical Society (TGS), Taipei.
Whitman, R.V. (1984). Evaluating calculated risk in geotechnical engineering. ASCE Journal of Geotechnical Engineering, 110(2), 145–188.
Zhang, L.M. (2004). Reliability verification using proof pile load tests. ASCE Journal of Geotechnical and Geoenvironmental Engineering, 130(11), 1203–1213.
Sensitivity analysis of design variables for caisson type quay wall G.L. Yoon & H.Y. Kim Korea Ocean Research & Development Institute, Ansan, Korea
Y.W. Yoon Inha University, Incheon, Korea
K.H. Lee Konyang University, Nonsan, Korea
ABSTRACT: A sensitivity analysis was carried out to quantify the effects of the design variables on the reliability indices of a caisson type quay wall, using the Hasofer–Lind approach. A reliability analysis against a target reliability index was performed for the three failure modes governing the stability of the structures: overturning, sliding and bearing capacity. The results showed that the reliability indices for the bearing capacity and sliding failure modes were mostly lower than those for the overturning failure mode. The major design variables affecting the reliability indices were found to be the coefficient of friction, the residual water pressure and the resistance moment for the sliding, overturning and bearing capacity failure modes, respectively. In particular, the sensitivity coefficients of the inertia force and the dynamic water pressure, which have high variation in the seismic cases, were not large; this corresponds to previous research on the correlation between the variation of design variables and their sensitivity coefficients.
1 INTRODUCTION
New design codes based on limit state design, performance-based design and so on have been developed in line with worldwide changes in codes for port and harbor design. In recent years there have also been efforts to apply reliability theory to the existing safety factor method. Nagao et al. (2001) carried out research on the reliability-based design (RBD) of quay wall structures and investigated the correlation between life cycle cost and probability of failure using FORM. Yoneyama et al. (2000) estimated the probability of failure, in order to obtain an accurate target reliability index, from data on a failed gravity type quay wall, together with load factors for earthquake loading. In 2004, as a result of an international joint study, Chinese researchers investigated the target reliability index of quay walls, models for the design variables, and partial safety factors for the sliding and overturning resistance of ten existing gravity quay walls. The purposes of this study are to evaluate the reliability indices for each failure mode through a reliability analysis of quay walls, to investigate the effects of each design variable on the reliability indices through a sensitivity analysis, and to estimate partial safety factors.
2 RELIABILITY ANALYSIS

2.1 Limit state functions
The failure modes of a caisson type quay wall are broadly classified into sliding, overturning, bearing capacity failure and so on. The limit state functions for each failure mode are formulated as follows:

Sliding:

Overturning:

Bearing capacity:
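The three equations were lost in extraction. In generic form (a plausible standard form only; the paper's exact expressions, including the seismic terms in kh, could not be recovered), each limit state compares a resisting term with a driving term, e.g. for sliding:

\[
Z = R - S, \qquad Z_{\text{sliding}} = f \sum V - \sum H,
\]

where ΣV is the sum of the vertical forces (self weight and surcharges, less uplift and buoyancy) and ΣH is the sum of the horizontal forces (earth pressure, wave, collision and seismic inertia forces).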
In the limit state function for sliding failure, f, W, C, Q, PU, P, S and kh are the coefficient of friction, self weight of the caisson, crane load, surcharge, uplift pressure, wave force, collision load and horizontal seismic coefficient, respectively; the subscripts V and H denote the vertical and horizontal components. In the limit state function for overturning failure, ai is the distance between the center of moment and the acting point of each load. For bearing capacity failure, qa, FV, BC and e are the allowable bearing capacity, the total vertical force, the width of the caisson and the eccentricity, respectively. B, E, R and D denote the buoyancy, earth pressure, residual water pressure and dynamic water pressure, respectively.

Figure 1. Cross sections for the analysis.

Figure 2. Correlation between reliability index and sensitivity factor.

Table 1. COV and probability distribution for typical random variables.

Random variable            Mean    COV    Probability distribution
Unit weight (kN/m3)
  Reinforced con'c         24.5    0.02   Normal
  Plain con'c              23.0    0.02   Normal
  Gravel                   18.0    0.04   Normal
Coefficient of friction    0.60    0.15   Normal
Static earth pressure      –       0.10   Normal
Seismic coefficient        0.077   0.25   Type II

2.2 Design conditions

In this study, the target reliability index (βT) was 2.05. Two sections with different caisson geometries, design tidal levels and surcharges were analyzed. Figure 1 shows the cross sections of the quay walls. The load combinations consist of an ordinary case and a seismic case; the dynamic water pressure and the inertia force were applied only in the seismic case. The coefficients of variation (COV) for the main random variables are presented in Table 1.

2.3 Results for reliability analysis

Table 2 shows the reliability indices for the most critical load cases, estimated by FORM.

Table 2. Reliability indices for critical load cases.

Section   Sliding   Overturning   Bearing capacity
1         4.199     12.804        1.416
2         2.201     8.009         3.767

One section is safe in all three modes, whereas the other shows a bearing-capacity reliability index below the target reliability index. The safety margin for overturning failure is the largest of the three failure modes.

3 SENSITIVITY ANALYSIS

3.1 Sensitivity factor

The sensitivity factor is the coefficient of the linear approximation of the limit state function. If there is no correlation among the random variables, the sensitivity factor is defined as follows:
Figure 3. Normalized sensitivity distribution for ordinary case (upper) and seismic case (lower).
Equation (4) expresses the direction cosine of the reliability index along the axis of each random variable in the standardized space. When the limit state function is defined as Z = X2 − X1, equation (4) can be illustrated as in Figure 2. In the case of no correlation among the random variables, the sensitivity factor takes a positive value for a resistance variable and a negative value for a load-effect variable, and the sum of its squares over all variables is unity. On the other hand, if correlation exists among the random variables, the correlation coefficients are taken into account through the standard deviations and sensitivity factors of the limit state function, as represented in the following equations:
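As a concrete illustration (not from the paper), a minimal numerical sketch of these direction cosines for a linear limit state with independent normal variables; the variable names and values below are hypothetical:

import numpy as np

# Hypothetical linear limit state Z = R - S (resistance minus load effect)
mu    = np.array([600.0, 400.0])   # means of R and S
sigma = np.array([ 60.0,  80.0])   # standard deviations of R and S
grad  = np.array([  1.0,  -1.0])   # dZ/dR, dZ/dS

sZ    = np.sqrt(np.sum((grad * sigma) ** 2))  # standard deviation of Z
beta  = (grad @ mu) / sZ                      # reliability index
alpha = grad * sigma / sZ                     # sensitivity (direction-cosine) factors

print(beta)                 # reliability index
print(alpha)                # positive for R (resistance), negative for S (load)
print(np.sum(alpha ** 2))   # sums to 1.0, as stated in the text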
3.2 Results for sensitivity analysis

To determine the effect of each design variable on the safety of the structures, sensitivity analyses were performed for all load cases using FORM. Only a set of results representing the characteristics of the sensitivities of the design variables is presented here. Figure 3 shows the normalized sensitivity factors for each failure mode in the ordinary case. For sliding failure, the most influential design variable was the coefficient of friction between the caisson and the base ground, followed by the horizontal earth pressure (Eh), self weight (W), buoyancy (B), residual water pressure (Pwr) and vertical earth pressure (Ev). Similarly, in the seismic case, the coefficient of friction and the horizontal earth pressure were the most sensitive, followed by the self weight, buoyancy, dynamic water pressure (Pwd), inertia force due to earthquakes (Fi), vertical earth pressure and residual water pressure. For overturning failure, which is governed by the relative magnitudes of the overturning moment and the resistance moment, the moments due to the self weight of the caisson and the buoyancy were the most sensitive in the ordinary case, followed by the moments due to the horizontal earth pressure, vertical earth pressure and residual water pressure. In the seismic case, the moment due to the horizontal earth pressure was the most sensitive, followed by the moments due to the self weight, buoyancy, vertical earth pressure, dynamic water pressure, inertia force and residual water pressure. For bearing capacity failure, the largest sensitivity factors belonged to the resistance moment (My) and the total vertical force (Fv); these variables therefore strongly affect the safety as the design values change. Similar results were obtained in the seismic case. To investigate how the sensitivity factors vary with the reliability index, the changes in the sensitivity factors were traced while varying the width of the caisson. Figure 4 shows the sensitivity changes with the reliability index.
Figure 4. Sensitivity changes with reliability index for ordinary case (upper) and seismic case (lower).
For sliding failure, the sensitivities of the coefficient of friction and the horizontal earth pressure increased as the reliability index increased, whereas the sensitivities of the other variables decreased slightly or remained constant. The sensitivity of the coefficient of friction approached unity, so it strongly affected the reliability index, whereas the sensitivity of the residual water pressure approached zero, so it had almost no effect. Similar results were obtained in the ordinary case. For overturning failure, the sensitivities of the moments due to the self weight and the horizontal earth pressure increased remarkably as the reliability index increased, while the sensitivities of the moments due to the inertia force, vertical earth pressure and buoyancy decreased; the sensitivity of the moment due to the dynamic water pressure did not change as the reliability index increased. For bearing capacity failure, the sensitivities of the overturning and resistance moments decreased, and the sensitivity of the total vertical force increased, with increasing reliability index. A significant point in these results is that, although the inertia force and the dynamic water pressure are governed by the seismic coefficient, which has a high COV, their sensitivities change little as the reliability index changes. The results show the same correlation as that between tidal level and sensitivity factor found for breakwaters by Nagao et al. (2004). Thus, the sensitivities of variables with high COV are not necessarily significant.

4 ESTIMATION OF PARTIAL SAFETY FACTORS
Partial safety factors for design by the Level 1 approach were estimated. Three components were employed in the partial safety factors: the target reliability index (2.05), the weight applied to each design variable according to its sensitivity factor, and the degree to which each design variable affects the stability of the structure.

Table 3. Partial safety factors for sliding mode.

Section   Condition   f       Eh      Ev      W       Pwr     B       Fi      Pwd
1         Ordinary    0.273   1.077   0.997   0.994   1.001   1.009   –       –
1         Seismic     0.543   1.269   0.979   0.994   8.149   1.009   1.004   1.053
2         Ordinary    0.442   1.061   0.996   0.996   1.009   –       –       –
2         Seismic     0.923   1.071   0.990   0.999   1.002   –       1.044   1.030

Table 4. Partial safety factors for overturning mode.

Section   Condition   Eh      Ev       W       Pwr     B       Fi      Pwd
1         Ordinary    1.698   0.611    0.661   1.015   1.493   –       –
1         Seismic     3.527   −0.229   0.815   1.008   1.269   1.054   1.511
2         Ordinary    2.640   0.133    0.456   1.286   –       –       –
2         Seismic     2.313   0.328    0.951   1.048   –       2.089   1.602

As shown in Tables 3 and 4, the partial safety factors for the sliding mode do not scatter significantly between the ordinary and seismic cases, although the partial factor for the coefficient of friction increases from the ordinary to the seismic case. However, the partial factors for the coefficient of friction differ considerably between the two quay walls. The partial factors for the inertia force and the dynamic water pressure, which are related to earthquakes, are close to unity, and there is little difference between the values for the two sections. For the overturning mode, the partial safety factors of the design variables related to earthquakes were relatively large, and the partial safety factor for the self weight in the seismic case was larger than in the ordinary case.
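A minimal sketch (not the authors' code) of how Level 1 partial factors of this kind can be backed out from FORM results, assuming each factor is the design-point value relative to the characteristic (mean) value; the variable names, sensitivities and COVs below are hypothetical, and the paper's exact definition may differ:

import numpy as np

beta_t = 2.05   # target reliability index used in this study

# Hypothetical FORM results for one load case: sensitivity factors
# (positive = resistance, negative = load effect) and COVs
names = ["f", "Eh", "W"]
alpha = np.array([0.85, -0.45, 0.20])
cov   = np.array([0.15,  0.10, 0.02])

# Design point of a normal variable relative to its mean:
# x*/mu = 1 - alpha*beta_t*V. Resistances come out < 1, loads > 1,
# matching the pattern of f and Eh in Table 3.
rho = 1.0 - alpha * beta_t * cov

for n, r in zip(names, rho):
    print(n, round(r, 3))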
5 CONCLUSIONS

To investigate the safety margin for each failure mode of caisson type quay walls and the effects of the design variables on safety, reliability and sensitivity analyses were performed for two structures. As a result of the reliability analysis, the safety margin for overturning failure was the largest. From the sensitivity analysis, in the ordinary case, the design variables most affecting failure of the structures were the coefficient of friction between the caisson and the base ground for sliding failure, the moments due to the self weight of the caisson and the buoyancy for overturning failure, and the resistance moment and total vertical force for bearing capacity failure. For the design variables related to earthquakes, the results showed that the effect of the dynamic water pressure was significant. A sensitivity analysis with respect to the reliability index was also performed: although the inertia force and the dynamic water pressure are governed by the seismic coefficient, which has a high COV, their sensitivities change little as the reliability index increases. This corresponds to the existing research result for breakwater structures.

REFERENCES
EN 1998. Eurocode 8: Design of structures for earthquake resistance.
Nagao, T., Yoshinami, Y., Sanuki, T. & Kamon, M. 2001. Application of reliability based design to external stability of caisson type quay wall, Structural Engineering Journal 47(A) (in Japanese).
Yoneyama, H., Shiraishi, S. & Uwabe, T. 2000. A Study on Load Factors of Seismic Loads on Limit State Design Method for Port and Offshore Structures in Japan, Proc. of 8th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability.
Yoon, G. 2005. Development of Next Generation Port & Harbour Design (V). Ministry of Maritime Affairs & Fisheries (MOMAF) (in Korean).
Evaluating the reliability of a levee against seepage flow Y. Shimizu & Y. Yoshinami Fukken Co., Ltd., Hiroshima, Japan
M. Suzuki Shimizu Corporation, Tokyo, Japan
T. Nakayama & H. Ichikawa Hiroshima Institute of Technology, Hiroshima, Japan
ABSTRACT: Levees may collapse during abnormal downpours, and the risk of flooding has increased in recent years. To date, the stability of levees has been evaluated by a safety factor obtained from seepage flow analysis and the circular arc method. Because the safety allowance is governed by the stability of the embankment, which depends on the groundwater within it, it is necessary to evaluate quantitatively both the seepage flow and the uncertainty in the resistance of the embankment. In this study, the uncertainty of the geotechnical properties of embankments was quantified, and a probabilistic evaluation was performed for local failure and for global failure assuming a circular slip surface. Specifically, the safety margin was evaluated by the failure probability and/or the reliability index, expressing the geotechnical parameters with a probabilistic model and using the stochastic finite element method. The action of the seepage flow and the rise of the river water level due to rainfall were calculated deterministically by transient analysis, and the buoyancy and seepage forces were estimated at each time step, i.e., the stability was evaluated for a particular groundwater level. Finally, the results of the proposed method are compared with those of the conventional method.

1 INTRODUCTION
The risk of flooding has increased in recent years due to the increase in extreme rainfall events, raising the risk of river levee failure. The stability of a levee against seepage has been assessed based on seepage flow analysis and the safety factor calculated by the circular arc method. However, since the safety allowance is in essence determined by the stability of the fill against the groundwater flow within the levee, it is necessary to understand the probability of the groundwater flow and the uncertainty in the resistance of the fill. In this study, the safety allowance against full-scale failure, assuming a circular slip surface, is evaluated by the failure probability and the reliability index obtained from a stochastic finite element analysis, quantitatively assessing the uncertainties in the physical properties of the levee fill. A comparison of the method proposed in this paper with the conventional method is also described.

2 ANALYTICAL METHODS

2.1 Overview

A stochastic finite element analysis was performed by setting the circular arc as the failure mode using the conventional circular arc method. A flowchart of the analysis is shown in Figure 1. Groundwater flow accompanying rainfall and a rise in the water level of the river was calculated by a transient seepage flow analysis, in which the time histories of the buoyancy and seepage force were regarded as external forces. First, a time-sequential groundwater analysis was executed. Next, a stochastic finite element analysis was performed using the analytically determined groundwater level and flow velocity vector at each time point as external forces and considering the uncertainty of the ground parameters. The safety allowance against local failure and global failure obtained from the analysis was evaluated using probability theory.

Figure 1. Flowchart of the analysis.
2.2 Seepage flow analysis

The groundwater behavior was calculated by performing a transient saturated–unsaturated seepage flow analysis that can reproduce the actual phenomena. The two-dimensional expression of the seepage flow is shown below:
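The governing equation itself did not survive extraction; for a pressure-head formulation with the variables defined below, it presumably takes the standard saturated–unsaturated (Richards-type) form:

\[
\frac{\partial}{\partial x}\left(k\,\frac{\partial \psi}{\partial x}\right)
+\frac{\partial}{\partial z}\left(k\left(\frac{\partial \psi}{\partial z}+1\right)\right)
=\left(C+a\,S_s\right)\frac{\partial \psi}{\partial t}
\]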
where x = horizontal coordinate; z = vertical coordinate; k = hydraulic conductivity (m/hr); ψ = pressure head (m); C = specific moisture capacity (1/m); a = 1 for the saturated zone and 0 for the unsaturated zone; Ss = specific storage (1/m); and t = elapsed time (hr). The specific moisture capacity C is the tangent slope of the water retention curve. The specific storages are Ss = 1 × 10−4 (1/m) for the sandy soil and Ss = 1 × 10−3 (1/m) for the cohesive soil.
2.3 Stochastic finite element method

The finite element method has been a successful analytical tool for solving various geotechnical engineering problems, since it can deal with complex geometries and boundary conditions, various soil parameters, etc. However, the deterministic FEM has several limitations in system modeling; for example, soil properties must be specified as deterministic, uncorrelated values. To overcome such limitations, many studies have developed the stochastic finite element method (SFEM) by applying random field theory. For local failure, two performance functions, with and without the assumption of a potential mobilized plane, were considered to evaluate the probability of slope failure. For global failure, another performance function was considered on a potential failure plane.

2.3.1 Local failure
(a) Local failure without assumption of a potential mobilized plane. The performance function for shear failure of the i-th element is defined as follows:

where τfi = distance between the center of the Mohr circle and the failure surface; ci = effective cohesion; ϕi = effective internal friction angle; and σ1 and σ3 = maximum and minimum principal stresses, respectively.

(b) Local failure assuming a potential mobilized plane. The performance function for shear failure of the i-th element is defined as follows:

where σi and τi = the normal stress and shear stress on the potential mobilized plane.

(c) Reliability index and failure probability of the i-th element. The reliability index βi for local failure of the i-th element is defined as follows:

where E[gi] and Var[gi] = mean and variance of the performance function, respectively. According to the reliability index, the failure probability of the i-th element is given as follows:

where Φ(·) = standard normal distribution function.

2.3.2 Global failure
Global failure along a slip circle was modeled as failure on a potential mobilized plane. The performance function for this failure was defined as follows:

where li = length of the i-th element along the plane, and N = number of elements across the potential mobilized plane. The reliability index β for global failure is defined as follows:

where E[G] and Var[G] = mean and variance of the performance function, respectively. According to the reliability index, the failure probability is given as follows:

where Φ(·) = standard normal distribution function.
Figure 2. Slip surface passing the i-th element.
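The element-level and global expressions lost in extraction are implied directly by the surrounding definitions; presumably:

\[
\beta_i=\frac{E[g_i]}{\sqrt{\mathrm{Var}[g_i]}},\qquad P_{fi}=\Phi(-\beta_i)
\]

for local failure, and

\[
G=\sum_{i=1}^{N} g_i\,l_i,\qquad \beta=\frac{E[G]}{\sqrt{\mathrm{Var}[G]}},\qquad P_f=\Phi(-\beta)
\]

for global failure along the potential mobilized plane.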
2.4 Actions with groundwater

The stresses in a levee can be roughly classified into the effective stress due to self weight and the stress caused by seepage flow. The unit weight of the soil was therefore treated as the saturated unit weight below the groundwater level and as the wet unit weight above it. The seepage force was assumed to act as an element body force. The body force was determined by multiplying the unit weight of water γw by the hydraulic gradient, which is expressed with coordinate components in Eq. (9). In the finite element method, Eq. (9) is written for a triangular element as Eq. (10), where Δ = area of the triangular element, xi and zi = coordinates of node i, and Hi = total head of node i.
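Equations (9) and (10) themselves were lost in extraction; with the nodal quantities defined above, they presumably follow the standard constant-strain-triangle form:

\[
\{f\}=\gamma_w\,\{i\},\qquad
i_x=-\frac{\partial H}{\partial x}=-\frac{1}{2\Delta}\sum_{i=1}^{3}b_i H_i,\qquad
i_z=-\frac{\partial H}{\partial z}=-\frac{1}{2\Delta}\sum_{i=1}^{3}c_i H_i,
\]

with \(b_i = z_j - z_k\) and \(c_i = x_k - x_j\) for cyclic (i, j, k).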
3 ANALYTICAL CONDITIONS

3.1 Analytical model

The model used in the analysis was prepared by modeling an actual levee and is shown in Figure 3. The finite elements are constant-strain triangular elements, 4,991 in total. The soil parameters are shown in Table 1; the values in the table (unit weight γ, cohesion c, internal friction angle φ, and hydraulic conductivity k) are all mean values. The coefficients of variation were determined as shown in Table 2 (Matsuo, 1984).

Figure 3. Analytical model.

Table 1. Soil parameters: wet unit weight γt (kN/m3), saturated unit weight γsat (kN/m3), cohesion c (kN/m2), internal friction angle φ (°) and permeability coefficient k (cm/s), given as mean values for each of the 17 soil zones of the model. The permeability coefficients of the zones range from 1.0 × 10−6 to 2.0 × 10−1 cm/s.

Table 2. Coefficient of variation.

Parameter                     Coefficient of variation
Unit weight (γ)               0.02–0.08
Cohesion (c)                  0.2–0.4
Internal friction angle (φ)   0.1–0.2

3.2 External force conditions
The time histories of the external forces (rainfall and water level of the river) used in the seepage analysis are shown in Figure 4 (Japan Institute of Construction Engineering, 2002). The water level of the river was assumed to rise from the low water level to the maximum water level in about 100 hours, remain at that level for 1 hour, and then drop suddenly back to the initial low water level in 15 hours. Although the Guideline specifies a preliminary rainfall of 1 mm/hr until the maximum water level is reached and then a rainfall intensity of 10 mm/hr until the water level starts its sudden drop, a rainfall intensity of 1 mm/hr was used up to the sudden drop in water level in this study, since a major objective of the study was to investigate the time-dependent changes in the water level within the levee accompanying the changes in the water level of the river.

Figure 4. External force model.

3.3 Circular arc

The global failure was investigated by assuming the circular arc with the smallest safety factor, as determined by the circular arc method. The circular arc on the back slope was analyzed, since the analysis mainly aimed to investigate the stability at the high water level.

Figure 5. Distribution of underground water level in levee.
4 ANALYTICAL RESULTS

4.1 Underground water level in the river levee
The water level distribution in the fill at an arbitrary time point, which was determined by the seepage analysis, is shown in Figure 5. The figure shows that the water level in the levee changed along with changes in the water level of the river.
4.2 Central safety factor and reliability index
The analyzed global failure on a slip surface at an arbitrary time point is shown in Figure 6. The figure also shows the safety factor Fs determined using the circular arc method, the central safety factor θ (ratio between the mean shear strength and the mean shear stress), and the reliability index β. Flow velocity vectors determined by the seepage analysis are also shown.
Figure 6. Results of SFEM analysis and flow velocity distribution.
The following conclusions were deduced from the figure:
(1) When Fs and θ were compared, Fs was larger than θ. The trend was more notable when the water level of the river was rising than when it was dropping. This was attributable to the differences between the circular arc method and the SFEM, in which the seepage force acts as an element body force.
(2) When θ and β were compared over time, θ and β showed the same trend regardless of the water level of the river, unlike Fs and θ.

To examine the behavior described in (2) in more detail, the time-sequential changes in θ and β are plotted in Figure 7. The plot shows that they behaved similarly and were closely correlated. Thus, the safety of a levee, which changes with the water level of the river, etc., can be assessed using β.

Figure 7. Comparison between central safety factor and reliability index.

4.3 Local and global failures

A contour diagram of the distribution of the local failure probability at the time point described in Section 4.2 is shown in Figure 8. The figure shows that the slip surface of the global failure passed near the boundary of the zone of large local failure probability.

Figure 8. Contour diagram of local failure probability.
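As a side illustration (not from the paper), the close link between the central safety factor θ and the reliability index β follows from their first-order definitions; the numbers below are hypothetical:

import numpy as np

# Hypothetical mean shear strength (R) and shear stress (S) along the arc
# at two time points, with fixed coefficients of variation
mu_R, mu_S = np.array([80.0, 70.0]), np.array([50.0, 55.0])   # kN/m2
cov_R, cov_S = 0.2, 0.1
s_R, s_S = cov_R * mu_R, cov_S * mu_S

theta = mu_R / mu_S                                # central safety factor
beta  = (mu_R - mu_S) / np.sqrt(s_R**2 + s_S**2)   # reliability index of g = R - S

print(theta)   # e.g. [1.60 1.27]
print(beta)    # drops together with theta as the water level rises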
4.4 Effects of design factors on reliability index β
To examine the effects of the cohesion, internal friction angle and unit weight, which are the design factors, on β, the time-sequential changes in β were calculated using the coefficients of variation of the soil parameters shown in Table 2. The calculation used the maximum, mean and minimum values of the coefficient of variation of one parameter while using the mean values for the coefficients of the other two parameters. The results of the calculation are shown in Figure 9, from which the following conclusions were deduced:
(1) The effects of the coefficients of variation of the soil parameters on β were relatively small for the unit weight and the cohesion, and large for the internal friction angle.
(2) This was likely because the coefficient of variation of the unit weight is small (i.e., the unit weight is relatively constant). The different trends in the effects of the cohesion and the internal friction angle were likely because soil whose strength is governed by φ is dominant where the slip surface passes.

The calculation showed that the unit weight, which fluctuates little in most fills, had a small effect on β. The effects of the cohesion and the internal friction angle, whose coefficients of variation are relatively large, were found to depend on which soil parameter is dominant in the zone of analysis.

5 SUMMARY
This study gave the following results:
(1) A comparison of the time histories of θ and β showed similar behavior regardless of changes in the water level of the river, unlike Fs and θ. Thus, the safety of a levee, which changes with the water level of the river, etc., can be assessed using β.
(2) From the contour diagram of the local failure probability distribution, the slip surface of the global failure was found to pass near the boundary of the zones where the local failure probability was large.
(3) To investigate the effects of variations in the cohesion, internal friction angle and unit weight on β, β was calculated using widely used coefficients of variation of the soil parameters. The unit weight, which is believed to fluctuate little, had a small effect on β. The effects of the cohesion and the internal friction angle, whose coefficients of variation are relatively large, were found to depend on the soil properties that are dominant in the zone of analysis.

The results should be useful for assessing the safety of river levees. A remaining problem is the uncertainty of the permeability, which fluctuates widely and is likely to greatly affect the results of the seepage analysis (the time history of the water level within the levee). The authors will continue these investigations by improving the methods of seepage analysis.

Figure 9. Effects of the coefficients of variation on reliability index.

REFERENCES
Ishii, K. & Suzuki, M. 1987. Stochastic finite element method for slope stability, Structural Safety, 4: 111–129.
Japan Institute of Construction Engineering. 2002. Kasen Teibou no Kouzou Kentou no Tebiki [Guideline for Investigating the Structure of River Levees]. Tokyo: Japan Institute of Construction Engineering.
Komada, H. & Kanazawa, K. 1975. Analysis of unsteady seepage flow and stability of fill dams under rapid drawdown of the water surface level of reservoirs, Proc. of JSCE, Japan, 240: 51–62.
Matsuo, M. 1984. Jiban Kougaku – Sinraisei Sekkei no Rinen to Jissai [Geotechnical Engineering – Principles and Practice of Reliability-Based Design]: 62–71. Tokyo: Gihodoshuppan.
Nagase, M., Shirai, K., Segawa, A. & Hukunari, K. 2007. Kasen Teibougaku [Embankment Study]. Tokyo: Sankaido.
Nagao, T., Yoshinami, Y., Mukai, M. & Shimizu, Y. 2000. Evaluation of Safety against Foundation Failure for Breakwaters with Probabilistic Method, Proc. of JCOSSAR2000, Japan, 479–486.
Suzuki, M. & Ishii, K. 1986. Probabilistic optimum design of drainage pipes on slope stability, Proc. of JSCE, Japan, 370(III-5): 209–216.
The Japan Port and Harbour Association. 2007. Kouwan no Shisetsu no Gijyutsujyou no Kijyun Dou Kaisetsu [Technical Standards and Commentaries for Port and Harbour Facilities]. Tokyo: The Japan Port and Harbour Association.
Determination of partial factors for the verification of the bearing capacity of shallow foundations under open channels A. Murakami & S. Nishimura Okayama University, Japan
M. Suzuki Shimizu Co., Japan
M. Mori National Agriculture and Food Research Organization, Japan
T. Kurata & T. Fujimura NTC Consultants, Japan
ABSTRACT: The limit state design has been introduced into the design criteria for geotechnical structures. The current paper attempts to apply the Level II reliability-based design method to the bearing capacity of the foundations of open channels from the viewpoint of the limit state design. To examine the applicability of the proposed procedure to practical structures, the reliability index is computed for evaluating the stability of the foundations of existing open channels designed by the usual method. It is shown that the existing design method is on the safe side, with computed reliability indices larger than 5.0 for sandy soil and about 3.0 for clayey soil.
1 INTRODUCTION
The formation of the World Trade Organization (WTO) and the subsequent adoption of the Technical Barriers to Trade (TBT) Agreement placed an obligation on the International Organization for Standardization (ISO) to ensure that international standards would be globally relevant. For this purpose, the structural design code for agricultural facilities is required to fulfill international standards such as ISO2394 (JSIDRE 2008). The use of a performance-based design approach is widely recognized as supporting the development of globally relevant ISO standards. The JGS published the Japanese Geotechnical Standard JGS4001-2004, entitled 'Principles for Foundation Designs Grounded on a Performance-based Design Concept' (JGS 2004), to introduce performance-based design for foundation structures, and developed a design code for geotechnical structures. In this standard, the limit state design is introduced into the design criteria. The current paper attempts to apply the first-order reliability method (FORM) to the bearing capacity of the shallow foundations of open channels from the viewpoint of the limit state design, in order to follow the performance-based design framework. As is well known, reliability analysis can be classified into three different levels; the approach described in this paper belongs to the Level II category, which considers the first- and second-order statistical moments of a parameter, e.g., the average and the variance. Several studies have been conducted on the reliability analysis of the bearing capacity of shallow foundations. For example, Babu et al. (2006) carried out a reliability analysis of a foundation resting on cohesive soil based on Prandtl's solution. Larkin (2006) proposed a reliability analysis for foundations subjected to multidirectional seismic loading by considering the shear strength and the probabilistic distribution of ground acceleration. FORM has been applied to the bearing capacity of strip footings under the assumption that the shear strength is variable (Massih et al. 2008). Griffiths et al. (2001, 2002) and Fenton and Griffiths (2003) carried out a probabilistic study on the bearing capacity of a rough rigid strip footing on a weightless cohesive soil to assess the influence of randomly distributed undrained shear strength. In this paper, a modified Terzaghi bearing capacity formula is employed as the evaluation method for the bearing capacity. The shear strength parameters, cohesion c and internal friction angle φ, and the soil density γ are dealt with as probabilistic parameters, while the load due to the self weights of the concrete structure of the channel and the water inside it is considered as a static and deterministic parameter, since the impact of the load is relatively small for the design of open channels. The statistical moments of the soil parameters are determined from the published data records,
and the database of soil test results organized for the design of agricultural infrastructure in Japan. It is shown that the existing design method is on the safe side, since the computed values of the reliability index are larger than 5.0 for sandy soil and about 3.0 for clayey soil. The partial factors to satisfy the target reliability indices are then determined for existing open channels. The determined partial factors for the cohesion c, internal friction angle φ, and soil unit weight γ are averaged over the sixteen cases. As target reliability indices, β = 2.0, 3.0 and 4.0 are adopted herein. The determined partial factors are calibrated by re-calculating the reliability indices for the sixteen channels, and are verified to be appropriate for the new design code for shallow foundations under open channels. The remainder of the paper is organized as follows. In the next section, we briefly review the reliability analysis method for the direct foundations of open channels. In Section 3, the reliability index is computed for evaluating the stability of the foundations of sixteen existing open channels designed by the usual method, to examine the applicability of the proposed procedure to practical structures. The paper ends with a summary of our main conclusions.
2 RELIABILITY ANALYSIS

Most modern bearing capacity predictions involve a relationship of the form (Terzaghi 1943)

where c: cohesion of the soil below the foundation (kPa); γ1: unit weight of the soil below the foundation (kN/m3); γ2: unit weight of the soil in the embedment portion (kN/m3); Nc, Nγ and Nq: bearing capacity coefficients; Df: embedment depth of the foundation (m); B: length of the foundation's shorter side (m); and η: correction factor for the scale effect of the foundation.
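Equation (1) itself did not survive extraction; given the definitions above, it presumably takes the standard modified Terzaghi form:

\[
q_u = c\,N_c + \frac{1}{2}\,\eta\,\gamma_1\,B\,N_\gamma + \gamma_2\,D_f\,N_q
\]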
The performance function is defined using the following equations,

where qmax: the maximum load due to the self weights of the concrete structure of the channel and the water inside it, a deterministic variable; and c, ϕ = tan φ, γ1 and γ2: probabilistic variables.

The Taylor series expansion of the performance function at the design point, e.g., c∗, ϕ∗, γ1∗ and γ2∗, is obtained as

The four probabilistic variables are normalized as defined in the following equation, and have the normal distribution N(0, 1) when c, ϕ, γ1 and γ2 follow normal distributions:

The following definitions are also used at the design point (c∗, ϕ∗, γ1∗ and γ2∗):

where µc, µϕ, µγ1 and µγ2 are the averages of c, ϕ, γ1 and γ2, and σc, σϕ, σγ1 and σγ2 are the standard deviations of c, ϕ, γ1 and γ2. The derivative of the performance function should satisfy the following equation, from equation (11):
The expected value of the performance function is approximated by equation (14), derived from equation (6), as

The standard deviation of the performance function is written as equation (15):

The reliability index is computed from the average and the standard deviation using equation (16).

where αc, αϕ, αγ1 and αγ2 are called the sensitivities and are defined as follows:

3 RELIABILITY ANALYSIS FOR THE FOUNDATION OF OPEN CHANNELS

3.1 Statistics of parameters

The statistical values of the soil parameters, e.g., c, ϕ = tan φ, γ1 and γ2, are obtained by collecting data from references such as Matsuo (1984) and JGS (1988). Table 1 lists the average, the standard deviation and the coefficient of variation obtained through a statistical analysis of the data.

Table 1. Statistical values of the soil parameters.

Parameter        Average   Standard deviation   Coefficient of variation
c (kPa)          25.0      7.35                 0.302
tan φ            0.65      0.10                 0.153
γ1, γ2 (kN/m3)   16.9      0.98                 0.055

From Table 1, the following coefficients of variation are adopted for the subsequent analyses:
– Unit weight, γ1 and γ2: 0.06 (≈ 0.055)
– Coefficient of friction, ϕ = tan φ: 0.15 (≈ 0.153)
– Cohesion, c: 0.30 (≈ 0.302)

The maximum bearing stress, qmax, is treated as static and deterministic. Since the expected value of qmax is relatively small in this problem compared with the qu value, the variability of this quantity does not significantly affect the results of the computation.

3.2 Dimensions of the open channels to be analyzed

Table 2 lists the dimensions of the sixteen open channels together with the average strength parameters and loads. Figure 1 shows a definition of the variables used in the equation for the bearing capacity, through Case 1. The sixteen cases comprise thirteen cases of sandy soil under the condition c = 0 and three cases of clayey soil under the condition φ = 0. The unit weight considering the buoyancy force below the water level is adopted for the computation.

Figure 1. Example of an open channel.

Table 2. Profiles of the open channels.

#    B (m)   H (m)   Soil type*   Unit weight (kN/m3)   Strength parameter   Load qmax (kPa)
1    2.96    1.62    S            19.8                  φ = 35°              22
2    3.00    1.80    S            18.0                  φ = 23°              27
3    2.32    1.20    C            14.0                  c = 13 kPa           21
4    1.70    0.90    C            19.8                  c = 18 kPa           19
5    1.90    1.65    S            20.0                  φ = 25°              25
6    2.00    1.70    S            20.0                  φ = 25°              33
7    2.00    1.20    S            20.0                  φ = 25°              13
8    4.50    1.80    S            19.8                  φ = 23°              22
9    2.85    1.65    S            18.8                  φ = 29°              24
10   7.90    2.48    S            20.0                  φ = 25°              37
11   2.80    1.00    S            20.0                  φ = 25°              15
12   2.00    2.20    S            20.0                  φ = 30°              21
13   3.30    1.80    S            18.0                  φ = 15°              23
14   3.30    1.80    C            15.0                  c = 8.0 kPa          23
15   3.40    1.00    S            20.0                  φ = 20°              16
16   2.20    1.20    S            20.0                  φ = 20°              16
*S: Sand, C: Clay
3.3 Reliability analysis and discussion

Table 3 lists the reliability indices and the sensitivities of each ground parameter for the sixteen case studies. It is revealed that the reliability indices for sandy soil lie between 5.2 and 17.6, with significant variability. The reasons are as follows. Since the parameters Nq and Nγ in equation (1) are very sensitive to the internal friction angle, and the maximum load qmax is relatively small compared to the value of qu in these open channel problems, the value of the performance function easily becomes extremely large. In the cases of clayey soil, the friction angle φ is zero and the value of qu has a linear relationship with the cohesion c; namely, there is no extreme change in the bearing capacity qu for a change in c. Consequently, the reliability indices are very similar among the three cases.

The sensitivity of the internal friction angle is dominant for the sandy grounds, while the cohesion c has the dominant sensitivity for the clayey grounds. The unit weights, γ1 and γ2, have small sensitivities. Since the unit weight γ1 is usually treated as a submerged unit weight, its sensitivity to the bearing capacity is smaller than that of the unit weight γ2. Consequently, the reliability indices are greater than 5.0 for the sandy grounds. Although the reliability indices for the clayey grounds are almost 3.0, which sounds like a small value, the corresponding probability of failure is 0.1%, and thus the structures on such ground are sufficiently safe.

Table 3. Reliability indices and sensitivities of the soil parameters.

                          Sensitivity for parameters
#    β      Soil type*   c       ϕ = tan φ   γ1      γ2
1    11.0   S            —       0.985       0.000   0.171
2    6.1    S            —       0.963       0.003   0.268
3    3.2    C            0.997   —           0.000   0.071
4    3.2    C            0.999   —           0.000   0.041
5    10.0   S            —       0.963       0.000   0.268
6    5.9    S            —       0.975       0.003   0.224
7    12.7   S            —       0.956       0.000   0.294
8    5.2    S            —       0.988       0.016   0.154
9    17.0   S            —       0.986       0.000   0.167
10   7.5    S            —       0.968       0.000   0.251
11   9.5    S            —       0.963       0.000   0.271
12   17.6   S            —       0.961       0.000   0.275
13   6.5    S            —       0.887       0.003   0.462
14   2.9    C            0.990   —           0.000   0.140
15   6.3    S            —       0.957       0.003   0.290
16   14.0   S            —       0.838       0.000   0.545
*S: Sand, C: Clay

4 DETERMINATION OF PARTIAL FACTORS FOR THE FOUNDATIONS OF OPEN CHANNELS

4.1 Determination of partial factors

The following methods are listed in ISO2394:
(1) Cases where the ground parameters follow a normal distribution. When the probabilistic variables for the ground parameters follow a normal distribution and their characteristic value is the average, the partial factor ρ is defined as follows:
where, fk : the characteristic value for the parameter, usually, fk = µ, fd : the design value for the parameter, for the reliability analysis, V : coefficient of the variation in the parameter, α: sensitivity of the parameter, and βt : target reliability index. (2) Cases where the ground parameters follow a logarithmic normal distribution When probabilistic variables follow a normal logarithmic distribution, partial factor ρ is written as the following equation:
where, λ: the mean of the natural logarithm of the probabilistic variable, λ = ln(µ/√(1 + V²)); ζ: the standard deviation of the natural logarithm of the probabilistic variable, ζ = √(ln(1 + V²)); and µ: the average of the probabilistic variable.

4.2 Calibration of the partial factors
A series of partial factors, ρ, is defined for each of the four parameters, and m sets of partial factors ρj (j = 1, 2, …, m) are prepared. For each of the sixteen cases listed in Table 2, the reliability indices βij = βij(ρj) (i = 1, 2, …, 16) are calculated with partial factor set ρj (j = 1, 2, …, m). Consequently, 16 × m reliability indices are obtained, and the summations of the squared deviations of βij from the target reliability index βt, Dj (j = 1, 2, …, m), are computed in the manner of the following equation:
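The equation did not survive extraction; from the description it is presumably

\[
D_j=\sum_{i=1}^{16}\left(\beta_{ij}-\beta_t\right)^2
\]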
The optimum partial factors are selected for the minimum D among Dj (j = 1, 2, …, m), so that the calculated reliability indices approach the target reliability index βt as closely as possible. In the reliability analysis in this section, the internal friction angle φ follows the normal distribution, while the cohesion c is assumed to be distributed log-normally, since the coefficient of variation of c has the large value of 0.3 and the partial factor cannot then be defined for the target reliability index βt = 4.0 under the normal distribution.
4.3 Performance function for the calibration analysis

In the current design code for open channels (Ministry of Agriculture, Forestry and Fisheries, 2001), the following equation is employed to check the stability of the foundations, in which a safety factor of 3.0 is applied to the bearing capacity. The safety factor has no relationship with the load qmax.

Since the maximum load qmax values are relatively small, as seen in Table 2, the calculated reliability indices take large values, as shown in Table 3. The values of qmax, however, differ from site to site, and therefore the actual qmax values are not used for the determination of the partial factors. As the performance function, equation (22) is employed in the following sections instead of equation (5),

where qd is the design bearing capacity, adjusted so that the reliability index computed from equation (22) coincides exactly with the target reliability index in the calibration analysis.

4.4 Computation of partial factors by design values

The partial factors for each case listed in Table 3 are computed for the target reliability indices βt = 2, 3 and 4 based on equations (18) and (19). Among the ground parameters, the partial factors for the unit weight γ and the coefficient of internal friction tan φ are computed from equation (18), adopting coefficients of variation of 0.06 for both γ1 and γ2 and 0.15 for tan φ, under the assumption that they follow normal distributions. As for cohesion, different partial factors are evaluated for each case by equation (19), depending on the characteristic values, because the cohesion c follows a logarithmic normal distribution. The expected values and the standard deviations of the partial factors for the target reliability indices βt = 2, 3 and 4 are listed in Tables 4 and 5, respectively. The following items can be pointed out from these tables: 1) the obtained partial factors of the unit weights γ1 and γ2 are comparatively small, between 1.00 and 1.07; 2) the standard deviation of the partial factors is quite small in every case. This leads to the adoption of the expected value of the partial factors for each case.

Table 4. Expected values of partial factors for sixteen cases.

                           Expected value of partial factor, ρ
Target                Sandy soil                    Clayey soil
reliability index     ϕ = tan φ   γ1      γ2        c       γ1      γ2
βt = 2                1.42        1.01    1.02      1.86    1.00    1.02
βt = 3                1.79        1.01    1.03      2.46    1.00    1.04
βt = 4                2.42        1.01    1.05      3.22    1.00    1.07

Table 5. Standard deviations of partial factors for sixteen cases.

                           Standard deviation of partial factor, ρ
Target                Sandy soil                    Clayey soil
reliability index     ϕ = tan φ   γ1      γ2        c       γ1      γ2
βt = 2                0.009       0.005   0.009     0.021   0.000   0.010
βt = 3                0.023       0.004   0.013     0.057   0.000   0.020
βt = 4                0.059       0.005   0.018     0.144   0.000   0.040

4.5 Examination of partial factors by calibration

In order to find an optimal set of partial factors based on the expected values evaluated in the previous subsection, trial partial factors are proposed as multiples of 0.05 covering the expected values of the partial factors in Table 4. Table 6 shows the sets of trial partial factors for each target reliability index βt. The calibration is made for the sets of partial factors in Table 6 using equation (20), and the deviation of the reliability index in each case from the target reliability index βt is obtained. As a result of this calibration, Table 7 lists the optimal set of partial factors minimizing the sum of the squared deviations for each target reliability index. Table 8 presents the average safety factors for the 16 cases. For the clayey grounds, the safety factors are smaller than the conventional safety factor of 3.0 for all target reliability indices, while for the sandy grounds the safety factors are greater than 3.0 for the target reliability indices βt = 3 and 4. These results indicate that the conventional safety factor of 3.0 gives a design that is further on the safe side for clayey grounds than for sandy grounds.
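A compact sketch (not the authors' code) of the grid-search calibration described above; beta_for stands in for the full FORM computation and is a hypothetical placeholder:

import itertools

beta_t = 3.0

# Trial grids as multiples of 0.05 around the Table 4 expected values
grid = {
    "gamma":  [1.00, 1.05],
    "tanphi": [1.75, 1.80],
}

def beta_for(case, factors):
    # Placeholder for the FORM reliability index of channel `case`
    # designed with the partial-factor set `factors`; in the paper this
    # comes from equations (16) and (22).
    return 3.0 + 0.01 * case - 0.5 * (factors["tanphi"] - 1.80)   # dummy

def D(factors):
    # Equation (20): sum of squared deviations from the target index
    return sum((beta_for(i, factors) - beta_t) ** 2 for i in range(1, 17))

candidates = [dict(zip(grid, combo)) for combo in itertools.product(*grid.values())]
best = min(candidates, key=D)
print(best, D(best))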
Table 6(a). Set of partial factors for the case of βt = 2.

              Parameters                         Average factors   Examined factors
Sandy soil    Unit weight γ1, γ2                 1.01, 1.02        1.00, 1.05
              Internal friction ϕ = tan φ        1.42              1.40, 1.45
Clayey soil   Unit weight γ1, γ2                 1.00, 1.02        1.00, 1.05
              Cohesion c                         1.86              1.85, 1.90

Table 6(b). Set of partial factors for the case of βt = 3.

              Parameters                         Average factors   Examined factors
Sandy soil    Unit weight γ1, γ2                 1.01, 1.03        1.00, 1.05
              Internal friction ϕ = tan φ        1.79              1.75, 1.80
Clayey soil   Unit weight γ1, γ2                 1.00, 1.04        1.00, 1.05
              Cohesion c                         2.46              2.45, 2.50

Table 6(c). Set of partial factors for the case of βt = 4.

              Parameters                         Average factors   Examined factors
Sandy soil    Unit weight γ1, γ2                 1.00, 1.05        1.00, 1.05
              Internal friction ϕ = tan φ        2.42              2.40, 2.45, 2.50
Clayey soil   Unit weight γ1, γ2                 1.00, 1.07        1.00, 1.05, 1.10
              Cohesion c                         3.22              3.20, 3.25, 3.30

Table 7(a). Optimum set of partial factors for the case of βt = 2.

Parameters                       Sandy soil   Clayey soil
Unit weight γ1, γ2               1.05         1.05
Internal friction tan φ          1.40         —
Cohesion c                       —            1.85

Table 7(b). Optimum set of partial factors for the case of βt = 3.

Parameters                       Sandy soil   Clayey soil
Unit weight γ1, γ2               1.05         1.05
Internal friction tan φ          1.80         —
Cohesion c                       —            2.50

Table 7(c). Optimum set of partial factors for the case of βt = 4.

Parameters                       Sandy soil   Clayey soil
Unit weight γ1, γ2               1.05         1.10
Internal friction tan φ          2.50         —
Cohesion c                       —            3.30

Table 8. Average safety factors.*

Target β   Sand   Clay
βt = 2     2.3    1.6
βt = 3     3.5    1.9
βt = 4     5.3    2.3
*Safety factor = qu/qd

5 CONCLUSIONS
This paper has evaluated reliability indices for the foundations of existing open channels in order to examine the safety of the current design method for the bearing capacity and the effect of the uncertainty of the ground parameters. It is revealed that the current design method is safe with respect to the bearing capacity of the foundations, with a reliability index over 3.0, and that the internal friction angle and the cohesion dominantly affect the safety. The reliability index obtained for sand is over 5.0, and its safety margin cannot be controlled because of its dependence on the underlying layers; the index for clay, at about 3.0, is smaller than that for sand. The partial factors are then obtained by considering the sensitivity to the variability of the ground parameters and the limit bearing capacity. The results obtained in the current paper are limited to shallow foundations under open channels, but the evaluation of the reliability indices shown herein is applicable to any type of structure.
ACKNOWLEDGEMENT

The authors are grateful to Messrs. Hitoshi Yano and Taro Seto of the Japanese Ministry of Agriculture, Forestry and Fisheries and its Tokai Branch for their help in collecting the data and for their arrangements and suggestions. The members of the performance-based design committee of the Japanese Institute of Irrigation and Drainage are also acknowledged for their discussions.
REFERENCES
Babu, G.L.S., Srivastava, A. & Murthy, D.S.N. 2006. Reliability analysis of the bearing capacity of a shallow foundation resting on cohesive soil, Can. Geotech. J. 43: 217–223.
Fenton, G.A. & Griffiths, D.V. 2003. Bearing capacity prediction of spatially random c–φ soils, Can. Geotech. J. 40: 54–65.
Griffiths, D.V. & Fenton, G.A. 2001. Bearing capacity of spatially random soil: The undrained clay Prandtl problem revisited, Geotechnique 51(4): 351–359.
Griffiths, D.V., Fenton, G.A. & Manoharan, N. 2002. Bearing capacity of rough rigid strip footing on cohesive soil: Probabilistic study, J. Geotech. Geoenviron. Eng. 128(9): 743–755.
Japanese Geotechnical Society. 1988. Variability of Soil Data and Design (in Japanese).
Japanese Geotechnical Society. 2004. JGS4001-2004: Principles for Foundation Designs Grounded on a Performance-Based Design Concept.
Japan Society of Irrigation, Drainage and Reclamation Engineering (JSIDRE). 2008. Introduction to Performance-Based Design for Functional Maintenance of Agricultural Facilities, JSIDRE (in Japanese).
Larkin, T. 2006. Reliability of shallow foundations subjected to multidirectional seismic loading, J. Geotech. Geoenviron. Eng. 132(6): 685–693.
Massih, D.S., Soubra, A.H. & Low, B.K. 2008. Reliability-based analysis and design of strip footings against bearing capacity failure, J. Geotech. Geoenviron. Eng. 134(7): 917–928.
Matsuo, M. 1984. Geotechnical Engineering – Theory and Practice of Reliability Theory, Gihodo (in Japanese).
Ministry of Agriculture, Forestry and Fisheries. 2001. Design Code for Land Improvement – Channels –.
Application of concept in ‘Geo-code21’ to earth structures M. Honda Nikken Sekkei Civil Engineering Ltd., Tokyo, Japan
Y. Kikuchi Port and Airport Research Institute, Yokosuka, Japan
Y. Honjo Gifu University, Gifu, Japan
ABSTRACT: 'Geo-code21' is a comprehensive foundation design code that can harmonize all the major foundation codes in Japan. The code was published by the JGS (Japanese Geotechnical Society) in 2004, and its principles for foundation design are described based on the performance-based design (PBD) concept. In order to add chapters on earth structure design to 'Geo-code21', the JGS set up a working group in 2006 under the Standardization Committee of Design and Construction of its Standardizing Department. In this paper, an outline of the draft is reported.
1 INTRODUCTION

1.1 Background

Members of the working group are listed in Table 1. The working group was set up in 2006 under the Standardization Committee of Design and Construction of the Standardizing Department of the JGS, and has completed a draft comprehensive design code for embankments and cut slopes based on the performance-based design concept. The draft is now under consideration in the JGS, and an outline of the code is reported in this paper.

Table 1. Members of the Working Group.

(Chair) Y. Honjo, Gifu University
(Secretary General) Y. Kikuchi, Port and Airport Research Institute
(Secretary) M. Honda, Nikken Sekkei Civil Engineering Ltd.
(Members) M. Hachiya, Chuo Fukken Consultants Co., Ltd.; S. Kato, Public Works Research Institute; K. Kojima, Railway Technical Research Institute; A. Murakami, Okayama University; K. Ohkubo, Nippon Expressway Research Institute Co., Ltd.; M. Suzuki, Shimizu Corporation; S. Tani, National Institute for Rural Engineering; A. Wakai, Gunma University; Y. Wakatuki, Fukken Co., Ltd.

Figure 1. Hierarchy of code description (Objective → Performance Requirements → Performance Criteria → Verification; Comprehensive Design Code, Approach B; Specific Design Codes, Approach A).

1.2 Framework of the code
Figure 1 shows the hierarchy of code description in ‘Geo-code21’, and the same hierarchy is applied to the code for earth structures. The code consists of the specification of performance requirements that achieve the objectives of the structures, performance criteria that give specific guidelines for design, and verification of the performance. Two types of verification methods are introduced. Verification approach A is the fully performance-based design approach, in which designers are given only the performance requirements of the structures; the designers are requested to verify their design, and the results are checked by authorized organizations. Verification approach B is a verification procedure based on design codes; these codes may be established for each category of structures (e.g. highway bridges, buildings) by the authorities who either own, or are responsible for the administration and safety of, that category of structures.

The table of contents of Geo-code21 is shown in Table 2. The code for earth structures has been developed as an addition to ‘Geo-code21’ (i.e. chapters 0–7): the design of embankments is described in chapter 8, and the design of cut slopes in chapter 9. The framework of the contents of ‘Geo-code21’ is also applied to earth structure design.

Table 2. Table of contents of Geo-code21.

0 BASES OF STRUCTURAL DESIGN
1 BASES OF FOUNDATION DESIGN
2 GEOTECHNICAL INFORMATION
3 DESIGN OF SHALLOW FOUNDATION
4 DESIGN OF PILE FOUNDATION
5 DESIGN OF COLUMN TYPE FOUNDATION
6 DESIGN OF RETAINING STRUCTURE
7 DESIGN OF TEMPORARY STRUCTURE
(Sections of chapters 0–7 are omitted here.)
8 DESIGN OF EMBANKMENT: 8.1 Scope; 8.2 Objective; 8.3 Performance Requirements; 8.4 Performance Criteria; 8.5 Investigations; 8.6 Items to be considered in design; 8.7 Predicting the behavior of the embankment; 8.8 Verification; 8.9 Design report; 8.10 Construction
9 DESIGN OF CUT SLOPE: 9.1 Scope; 9.2 Objective; 9.3 Performance Requirements; 9.4 Performance Criteria; 9.5 Investigations; 9.6 Items to be considered in design; 9.7 Predicting the behavior of the cut slope; 9.8 Verification; 9.9 Design report; 9.10 Construction

2 OUTLINES OF THE CODE

2.1 Scope

The scope of application is embankments and cut slopes. The code covers not only structures that support vertical loads, e.g., road or railway fills, but also structures that block water, e.g., river dikes and dams.

2.2 Objective

At the beginning of design, the necessity of the structure shall be specified. Earth structures have various objectives. For example, road or railway fills and cut slopes provide firm ground on which vehicles or trains can run safely; water storage dams, through their reservoirs, secure a stable water supply in case of water shortage; river dikes prevent flood water from reaching inland areas; flood control dams or regulating reservoirs prevent dike breaches by controlling the outflow during heavy rain; and coastal levees defend the coastal area from storm surges or tsunamis.

2.3 Performance requirements

The performance required to achieve the objectives shall be described. In the case of a complex structure whose objective is achieved by a combination of different types of structures, where the earth structure forms one part (e.g., a road or railway), the performance requirements shall be described in consideration of the other types of structures, e.g., bridges. This is because the collapse of one part may cause failure of the whole complex structure.

2.4 Performance criteria

The courses taken to estimate the performance shall be described specifically. The degree of achievement of the performance is evaluated by comparing the predicted behavior with the limit state. The limit state dividing the intended condition from the unintended condition shall be specified for each performance requirement. The actions predicted to act on the earth structure during the design working life shall also be determined in order to estimate the behavior. In addition to the external loads, hydraulic actions should be taken into account in the design, because soil material may collapse under hydraulic action. Some earth structures, e.g., river dikes, can in fact be used indefinitely with appropriate maintenance, and it is difficult to specify the design working life for earth structures; this is because soil material differs from steel or concrete in that deterioration and corrosion do not occur. In principle, the following three limit states shall be specified, although other alternatives are not necessarily excluded and not all limit states need be determined.

2.4.1 Serviceability limit state – the limit state in which regular use to achieve the objective of the structure remains possible even if damage to the structure has occurred.

2.4.2 Reparability limit state – the limit state in which damage to the structure has occurred, but regular use of the structure is possible to a limited extent and there is a reasonable prospect of restoring full functionality if economically feasible repairs are performed within a feasible period of time.
Figure 2. Concept of the limit state design (JSCE 2002).

Figure 3. Concept of the limit state design for earth structures.

Figure 4. Concept of the foundation design.
2.4.3 Ultimate limit state – the limit state in which the structure may have sustained considerable damage, but not to the extent that it has reached large-scale failure or would cause serious damage to nearby structures.

Figure 2 shows the concept of the limit states. This concept can be applied to structures consisting of steel or reinforced concrete members, because each limit state can be determined from the yield point of the material and the physical meaning of the limit states is clear. However, it is difficult to apply this concept directly to earth structures, because the yield point of the material is not clear. Moreover, some earth structures can be repaired easily even if the material has deformed beyond its peak strength; this is because soil material retains a certain strength after the peak (see Figure 3) and the construction of earth structures is comparatively easy. It is therefore difficult to distinguish between the reparability limit state and the ultimate limit state.
2.5 Investigations

Prior to the design, a geotechnical investigation and a survey of the surrounding conditions shall be performed. In the design of foundations, the mechanical behavior of the bearing layer should be estimated; the interaction between the foundation and the surrounding ground should also be estimated in the design of pile foundations (see Figure 4). In foundation design the bearing layer is generally firm, and the behavior of soft ground is estimated only through its interaction with the pile, because a pile foundation is chosen when the bearing layer of a shallow foundation does not have the required bearing capacity. In the design of an embankment, however, the behavior of the soft ground itself should be estimated, since it acts as the bearing layer. The geotechnical investigation should be conducted in consideration of the change in the stress state, because the self-weight of the embankment is heavy and affects the stress state in the ground. Moreover, the mechanical behavior of soft ground depends on the effective confining pressure, which varies with time when the permeability of the ground is low.

In addition to the bearing layer, the mechanical behavior of the earth material should be estimated. Embankments are constructed under compaction control standards, and the validity of this method has been confirmed empirically. The compaction characteristics of the earth material should be investigated, and the compaction control standard should also be examined in the investigation. When the mechanical properties are examined by testing, the condition of the specimen should be determined in consideration of the compaction control standard, because earth material is generally in an unsaturated state and its mechanical behavior depends on the dry density and water content.

In the design of cut slopes, the behavior of the slope after the soil is removed should be estimated. The investigation should therefore examine the mechanical behavior of the soil material in the slope after unloading. However, an inclination of slope that satisfies the safety requirements can also be determined from the type of soil, the height of the slope, etc. for each category of structures, and the validity of this method has been confirmed empirically.
The foundation design and the embankment design differ from the cut slope design: in the design of foundations and embankments the structure is created by loading the ground, whereas in the design of cut slopes a space is created by unloading the ground (see Figures 4 and 5).

Figure 5. Concept of the embankment design and cut slope design.

2.6 Items to be considered in design

The structure shall be designed on the basis of the behavior caused by the predicted actions. The behavior that should be studied shall be identified as items to be considered in design. In the design of earth structures, the stability of the ground, the embankment and the slope shall be examined. The settlement or deformation of the ground shall also be examined. In addition, the stability against seepage actions shall be examined in the design of watertight structures.

2.7 Predicting the behavior

The global stability, deformation and seepage failure of earth structures shall be predicted. The circular arc method can be used to study the failure of embankments and cut slopes; however, an inclination of slope that satisfies the safety requirements can also be determined from the type of soil, the height of the slope, etc. Seepage flow analysis or flow nets can be used to study the seepage behavior. If the one-dimensional behavior of the ground is to be predicted, the settlement can be predicted by one-dimensional consolidation theory, e.g., Terzaghi's model of consolidation. If two- or three-dimensional behavior is to be predicted, or if effects on nearby structures are to be studied, numerical methods such as the finite element method can be used. However, these methods are still developing, and it is advisable to observe the deformation of the ground after construction in order to confirm the accuracy of the prediction. In the prediction of dynamic behavior, reference to the conditions of verified examples is very important.

2.8 Verification

The performance of the structure shall be verified by comparing the predicted behavior with the limit value determined in consideration of the limit state. Some performances of earth structures are not easy to verify directly, and indirect verification methods whose validity has been confirmed empirically can be used as 'deemed to satisfy' provisions. For example, in most cut slope designs the inclination of the slope is not determined by the circular arc method but by the type of soil, the height of the slope, etc. This is because the soil material in the slope is in an unsaturated state and it is difficult to evaluate its mechanical properties. Moreover, the stability of the slope has been evaluated indirectly from its inclination, on an empirical basis, because the stability of a slope increases as its inclination decreases.
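Section 2.7 refers to Terzaghi's one-dimensional consolidation theory as the basic settlement prediction tool. The following is a minimal sketch of that calculation; the layer geometry and soil parameters are hypothetical and are not taken from the draft code.

```python
import numpy as np

# Hypothetical soft clay layer loaded by an embankment (illustrative values only)
H = 6.0        # layer thickness [m]
e0 = 1.2       # initial void ratio
Cc = 0.45      # compression index
sigma0 = 60.0  # initial effective vertical stress at mid-layer [kPa]
dsigma = 80.0  # stress increase due to the embankment [kPa]
cv = 2.0       # coefficient of consolidation [m2/year]

# Final primary consolidation settlement of a normally consolidated clay
S_final = H * Cc / (1.0 + e0) * np.log10((sigma0 + dsigma) / sigma0)

def avg_degree_of_consolidation(T, terms=100):
    """Terzaghi series solution for the average degree of consolidation U(T)."""
    M = np.pi * (2 * np.arange(terms) + 1) / 2.0
    return 1.0 - np.sum(2.0 / M**2 * np.exp(-(M**2) * T))

H_dr = H / 2.0  # drainage path length, assuming double drainage
for t in (0.5, 1.0, 2.0, 5.0):  # elapsed time [years]
    U = avg_degree_of_consolidation(cv * t / H_dr**2)
    print(f"t = {t:3.1f} yr: U = {U:5.2f}, settlement = {1000 * U * S_final:4.0f} mm")
print(f"final settlement = {1000 * S_final:.0f} mm")
```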
3 FINAL REMARKS

An outline of the comprehensive design code for earth structures has been reported. The draft is now under consideration in the JGS, and it will be approved and published in 2009.

REFERENCES

JGS 2004. Principles for Foundation Designs Grounded on a Performance-based Design Concept (nickname ‘Geo-code 21’).
JSCE 2002. Basis of Structural Design for Buildings and Public Works, Ministry of Land, Infrastructure and Transport: http://www.mlit.go.jp/kisha/kisha02/13/131021/131021_e.pdf
Limit state design example – Cut slope design

Wan-Kwan Lin & Limin Zhang, Department of Civil Engineering, Hong Kong University of Science and Technology, Hong Kong
ABSTRACT: This paper presents the design of a cut slope using limit state design (LSD), load and resistance factor design (LRFD), and traditional allowable stress design (ASD). A soil cut slope with a maximum height of 15 m is considered. The slope involves four major soil/rock layers from the ground surface downward: fill, completely decomposed granite (a silty sand), highly decomposed granite, and moderately decomposed granite. The objective of the design is to find a suitable cut slope angle that satisfies all design requirements. In LSD, the three design approaches (DA1, DA2, and DA3) to the application of partial factors for actions, materials, and earth resistances are also considered. Finally, the cut slope angles determined using these different design methods are compared.

1 INTRODUCTION
Except for the design of retaining structures and reinforced fills, geotechnical design in Hong Kong is primarily based on the Allowable Stress Design (ASD) methodology, which applies a global factor of safety to accommodate uncertainties in both resistances and loads. Many codes of practice in Hong Kong are based on the British Standards. Recently, many European countries, including the UK, adopted or are in a transition to the Eurocodes, which are based on Limit State Design (LSD). Besides Europe, many other countries, including China, the USA, Canada, Japan and Australia, are either using or in a transition to the LSD methodology. The direction of Hong Kong's future geotechnical code developments is a subject worth studying. The transition to LSD, however, has met with certain difficulties. One reason is that the development of partial factors for LSD requires some probability knowledge and statistical data. Another important reason is that the design procedures of LSD are relatively new to practising engineers. Some well-worked design examples may help the transition. Therefore, the objective of this paper is to explore the use of the LSD methodology in geotechnical design in the Hong Kong context. As slope stabilization is one of the most important geotechnical elements in Hong Kong, cut slope design examples are demonstrated in this paper using ASD, LSD, and load and resistance factor design (LRFD). As the necessary partial factors for geotechnical designs in Hong Kong have not been fully developed, the partial factor values recommended in Eurocode 7 are applied in the LSD examples.

2 PHILOSOPHY OF GEOTECHNICAL DESIGNS

The basic criterion for geotechnical design is that the resistance must be greater than the design loading condition with a sufficient safety margin against failure of the structure or excessive settlement of the structure. Factors of safety are usually used to compensate for uncertainties in simplified design models, ground conditions, material strengths, construction methods, and distribution of loads. Because of these uncertainties, the ultimate resistance of foundations should be factored down, while the loads should be factored up by certain amounts. The required safety margin should take into account the consequences of failure and construction tolerances. There are three common design methods used to ensure an adequate margin of safety, namely ASD, LSD, and LRFD.

2.1 Allowable Stress Design

The conventional design method for geotechnical works in Hong Kong is ASD. A global factor of safety is applied to compensate for the uncertainties in both loads and resistances:

Rn / FS ≥ Σ Qi      (1)

where Qi = nominal load component; Rn = nominal resistance; FS = factor of safety. The design procedures of ASD are relatively straightforward. However, no consideration is given to the fact that different loads or different design analysis models have different levels of uncertainty. The factor of safety depends on the type of problem, and the safety levels for different types of structures may not be consistent.

2.2 Limit state design

LSD considers the functional limits of strength and serviceability of geotechnical works by considering possible ultimate limit states (ULS) and serviceability limit states (SLS) explicitly. One of the most comprehensive LSD frameworks has been implemented in Eurocode 7, which considers risk and complexity using geotechnical categories and characteristic values. LSD may accord more logically with a performance-based Design Approach (DA). It is based on the requirement that the resistance of a geotechnical work should exceed the load effect for all potential modes of failure, allowing for uncertainties in the load effects and the variability in resistance and material properties. Therefore, partial factors are applied to loads, resistances, and materials separately in LSD:

Rd / γR ≥ Σ γi Qi      (2)

where Rd = design resistance obtained by using factored material properties; Qi = design load effect; γi = partial factor for action; γR = partial factor for resistance. The partial factors are statistically based design parameters. These parameters can be selected to achieve a specified target reliability level, so that the use of LSD can achieve relatively uniform levels of safety for various geotechnical works.

2.3 Load and resistance factor design

The AASHTO LRFD Specifications are based on Load and Resistance Factor Design (LRFD). LRFD is another widely used format of LSD. In LRFD, strength limit states, service limit states, and extreme event limit states are introduced as in LSD. The main difference between LRFD and LSD (as in Eurocode 7) is the use of factors. LRFD applies a resistance factor to the nominal resistance but does not apply factors directly to the material properties:

φR Rn ≥ Σ γi Qi      (3)

where γi = load factor; φR = resistance factor; Rn = nominal resistance obtained without factoring down the soil parameters. Although LSD and LRFD involve some concepts of probability, no complex probability and statistical analyses are required when the load factors and resistance factors provided in design codes are used.

2.4 Design approaches

In contrast to the checking of structural designs, geotechnical actions and resistances of the ground cannot be separated. Geotechnical actions sometimes depend on the ground resistance, e.g. active earth pressure, and the ground resistance sometimes depends on the actions; for example, the bearing resistance of a shallow foundation is a function of the loads on the foundation (Ovesen 2002). In order to account for this interacting relationship, EN 1997-1 proposes three DAs for checking against failure in the ground (GEO) and in the structure (STR) (Table 1). The choice of a DA is for national determination, and should be stated in the National Annex. Different design problems may be governed by different DAs. The partial factors for the DAs are grouped in sets and selected according to the DA used, and verified for structural and geotechnical limit states in persistent and transient situations (Tables 2 to 4). Set A1 or set A2 of the partial factors on actions should be applied to the permanent and variable, unfavourable or favourable, actions. Set R1, R2, R3 or R4 of the partial factors on resistance should be applied to the corresponding resistance components, such as base and shaft resistances. Set M1 or set M2 of the partial factors on soil parameters should be applied to the corresponding soil strength parameters, such as effective cohesion and friction angle.

Table 1. Design Approaches recommended in EN 1997-1 (CEN 2004).

Design Approach    Design except axially loaded piles and anchors    Axially loaded piles and anchors
DA1 C1             A1 "+" M1 "+" R1                                  A1 "+" M1 "+" R1
DA1 C2             A2 "+" M2 "+" R1                                  A2 "+" (M1 or M2) "+" R4
DA2                A1 "+" M1 "+" R2                                  A1 "+" M1 "+" R2
DA3                (A1* or A2#) "+" M2 "+" R3                        (A1* or A2#) "+" M2 "+" R3

where "+" implies "to be combined with"; * on structural actions; # on geotechnical actions.

Table 2. Partial factors on actions (γF) or the effects of actions (γE) in EN 1997-1 (CEN 2004).

Action                    Symbol    A1      A2
Permanent unfavourable    γG        1.35    1.0
Permanent favourable      γG        1.0     1.0
Variable unfavourable     γQ        1.5     1.3
Variable favourable       γQ        0       0

Table 3. Partial resistance factors (γR) for slopes and overall stability in EN 1997-1 (CEN 2004).

Resistance          Symbol    R1     R2     R3
Earth resistance    γR;e      1.0    1.1    1.0

Table 4. Partial factors for soil parameters (γM) in EN 1997-1 (CEN 2004).

Soil parameter                  Symbol    M1     M2
Angle of shearing resistance    γφ        1.0    1.25
Effective cohesion              γc        1.0    1.25
Weight density                  γγ        1.0    1.0
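The difference between the three formats boils down to where the factors are applied. The sketch below checks one and the same hypothetical design situation in each format; the load and resistance values and the ASD/LRFD factors are illustrative, while γG and γQ follow set A1 of Table 2.

```python
# Hypothetical design situation -- all numbers are for illustration only
G, Q = 400.0, 100.0   # permanent and variable load effects [kN]
R_n = 1200.0          # nominal (unfactored) resistance [kN]

# ASD (Eq. 1): a single global factor of safety on the resistance
FS = 2.0
print("ASD ok: ", R_n / FS >= G + Q)

# LSD (Eq. 2): partial factors on actions and on the resistance
gamma_G, gamma_Q = 1.35, 1.5   # set A1 (Table 2)
gamma_R = 1.4                  # illustrative resistance factor
print("LSD ok: ", R_n / gamma_R >= gamma_G * G + gamma_Q * Q)

# LRFD (Eq. 3): a resistance factor on the nominal resistance, load factors on loads
phi_R = 0.7                    # illustrative resistance factor
print("LRFD ok:", phi_R * R_n >= gamma_G * G + gamma_Q * Q)
```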
3 DESIGN METHODS FOR SLOPE STABILITY

SLOPE/W is a computer program commonly used to analyze the stability of earth slopes (Geo-Slope, 1991). The slope stability is analyzed using the method of slices. The factor of safety is defined as the ratio of the available shear strength to the shear stress required to maintain equilibrium, and is taken to be the same for all slices. The problem is normally considered in two dimensions, i.e. under the condition of plane strain. SLOPE/W allows stability analysis using the Ordinary method, Bishop's simplified method, Janbu's simplified method, the Morgenstern-Price method, etc. A variety of interslice force functions can be used in the case of the Morgenstern-Price method. In this study, the Morgenstern-Price method was used and the interslice force function was assumed to be a half-sine function.
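A full Morgenstern-Price implementation is beyond a short example, but the fixed-point iteration at the heart of the method of slices can be illustrated with the simpler Bishop's simplified method. The slice data below are hypothetical and do not correspond to the slope of Section 4; the strength parameters are merely of the order of the CDG values in Table 5.

```python
import math

# Hypothetical slices: (weight W [kN/m], base inclination alpha [deg],
#                       slice width b [m], pore-water force u*b [kN/m])
slices = [
    (120.0, 35.0, 2.0, 20.0),
    (220.0, 25.0, 2.0, 45.0),
    (260.0, 15.0, 2.0, 60.0),
    (200.0,  5.0, 2.0, 40.0),
    (110.0, -5.0, 2.0, 15.0),
]
c, phi = 5.0, math.radians(40.0)  # strength parameters of the order of CDG (Table 5)

def bishop_simplified_fs(slices, c, phi, tol=1e-6, max_iter=100):
    """Bishop's simplified method: FS appears on both sides of the equation,
    so it is obtained by fixed-point iteration."""
    fs = 1.0  # initial guess
    for _ in range(max_iter):
        resisting = 0.0
        driving = 0.0
        for W, a_deg, b, ub in slices:
            a = math.radians(a_deg)
            m_a = math.cos(a) * (1.0 + math.tan(a) * math.tan(phi) / fs)
            resisting += (c * b + (W - ub) * math.tan(phi)) / m_a
            driving += W * math.sin(a)
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

print(f"FS = {bishop_simplified_fs(slices, c, phi):.3f}")
```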
4 DESIGN EXAMPLE – CUT SLOPE
A soil cut slope with a maximum height of about 15 m and a total length of about 60 m measured along the slope toe is considered. The cross section of the slope is shown in Figure 1. The slope is inclined at 61.4° to the horizontal. The groundwater level is 2 m above the rock head, representing a scenario corresponding to a 1-in-10-year return period rainfall. Residential buildings are located near the slope toe. Three layers of soils are present: fill, completely decomposed granite (CDG), and highly decomposed granite (HDG). The shear strength parameters of the three soil layers are shown in Table 5. A stratum of moderately to slightly decomposed granite is encountered below the slope toe. Suppose the slope is identified as not sufficiently stable and needs to be cut. The design problem is then to determine the angle of slope cut that satisfies all design requirements.

Figure 1. Cross-section of the slope.

Table 5. Soil parameters of the slope.

Soil type    γ (kN/m3)    c (kPa)    φ (°)
Fill         16           0          33
CDG          19           5          40
HDG          19           8          42

4.1 Design procedures

The processes used for slope stability analysis using LSD or LRFD are similar to the process used for ASD. The design procedures of the three design methods are summarized in Figure 2. In the conventional design, the stability problem is defined by creating the profile of the slope. An analysis method, such as the Morgenstern-Price method, is chosen. Soil properties, pore water pressures and line loads are input. Then, slip surfaces of the slope are specified to solve the problem. The minimum factor of safety and the corresponding slip surface are displayed; contours, the critical slip surface and free body diagrams can also be obtained. If the factor of safety is larger than a specified value, the slope is considered satisfactory. The basic procedures for slope stability analysis in LSD and LRFD are the same, such as the definition of the problem, the specification of a method of analysis and the sketching of the problem. The soil parameters are required to be factored in LSD, but not in LRFD. At the end of the analysis in LSD or LRFD, the FS obtained from a stability analysis actually represents the ratio of the resisting moment (or force) to the mobilizing moment (or force) (MR/MD), which should be larger than the value of the partial resistance factor for slope stability (Table 3).

4.2 Conventional design

GEO (2000) suggests a factor of safety of 1.4 for slopes whose failure may have serious consequences to human life and property. Therefore, the computed factor of safety from SLOPE/W should be larger than 1.4. The original angle of the slope in Figure 1 is 61.4°. The stability analysis results for the existing slope are shown in Figure 3 and Table 6. Most of the factors of safety of the slip surfaces are smaller than 1.4, which indicates that the slope is not sufficiently safe. Assume the cut slope angle is 35.7°. The computed factors of safety are again listed in Table 6, and the minimum factor of safety is 1.40.

4.3 Limit state design

Although SLOPE/W calculates a factor of safety, the program can still be used for limit state design analysis. The input parameters, such as loads, cohesion and friction angle, should be factored before the analysis. The factor of safety obtained would then be equal to MR/MD. This ratio should be larger than the recommended value of the partial resistance factor for slopes and overall stability in Eurocode 7 (Table 3).
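"Factoring the soil parameters before the analysis" for DA1C2 and DA3 amounts to the few lines below, applied to the Table 5 values with set M2 of Table 4. Note that the partial factor γφ is applied to tan φ rather than to the angle itself.

```python
import math

gamma_c, gamma_phi = 1.25, 1.25  # set M2 (Table 4)

# Characteristic c [kPa] and phi [deg] from Table 5
soils = {"Fill": (0.0, 33.0), "CDG": (5.0, 40.0), "HDG": (8.0, 42.0)}

for name, (c_k, phi_k) in soils.items():
    c_d = c_k / gamma_c                                               # design cohesion
    phi_d = math.degrees(math.atan(math.tan(math.radians(phi_k)) / gamma_phi))
    print(f"{name}: c_d = {c_d:.1f} kPa, phi_d = {phi_d:.1f} deg")
```

These design values are then fed to the stability analysis exactly as characteristic values would be in ASD.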
Figure 2. Flow chart for slope stability analysis by ASD, LSD and LRFD.
The original slope angle is 61.4°; the stability analysis for the existing slope is shown in Table 6. Almost half of the values of MR/MD are smaller than required, which means the slope is not up to standard. However, the cut slope angle can be steeper than in the conventional design, as the differences between the computed ratio and the partial resistance factor are relatively small.

Figure 3. Specified slip surfaces before cutting the slope.

4.3.1 Design Approach 1 – Combination 1
Assume the cut slope angle is 41.5°. The partial factors for the friction angle and cohesion are 1.0. The partial resistance factor for slopes and overall stability is 1.0. However, the partial resistance factor of DA2 is 1.1, which is larger than that of DA1C1. Therefore, the analyses using these two DAs for the cut slope can be combined.

4.3.2 Design Approach 1 – Combination 2
Assume the cut slope angle is 41.5°. The partial factors for the friction angle and cohesion are both 1.25. The minimum value of MR/MD for all slip surfaces is larger than 1.0. The partial factors of DA3 are equal to the partial factors of DA1C2; therefore, the required angle of slope is the same.
Table 6. Factors of safety or MR/MD computed using SLOPE/W.

                    Factor of safety (ASD)            MR/MD (LSD), before cutting    MR/MD (LSD), after cutting
Slip surface No.    Before cutting    After cutting   DA1C1, DA2    DA1C2, DA3       DA1C1, DA2    DA1C2, DA3
1                   1.00              2.57            1.00          0.80             2.27          1.81
2                   1.31              1.77            1.31          1.05             1.62          1.29
3                   1.34              1.40            1.34          1.08             1.33          1.06
4                   0.83              2.77            0.83          0.67             1.88          1.50
5                   0.90              2.44            0.90          0.72             1.93          1.54
6                   1.59              2.35            1.59          1.28             2.07          1.66
7                   0.74              –               0.74          0.59             –             –
8                   1.20              –               1.20          0.96             –             –
9                   1.60              –               1.60          1.28             –             –
Table 7. Design angles of the cut slope from different design methods.

Design method          Angle of slope (°)    Computed FS or MR/MD    Required FS or MR/MD
Conventional design    35.7                  1.40                    1.4
LSD DA1C1              41.5                  1.33                    1.0
LSD DA1C2, DA3         41.5                  1.06                    1.0
LSD DA2                41.5                  1.33                    1.1
LRFD                   41.5                  1.33                    1.17
4.4 Load and resistance factor design

As no factors are applied to materials in LRFD, the analysis of slope stability using SLOPE/W is the same as in ASD, except that the loads are factored. The FS obtained would be equal to MR/MD, which should be larger than the reciprocal of 0.85, the resistance factor for overall stability. Because there is no applied load on the slope and the only loading is the self-weight, the analysis of the original slope is the same as in ASD. The resistance factor in LRFD is basically the reciprocal of the partial resistance factor in LSD, so the FS computed by SLOPE/W should be larger than 1/0.85 = 1.17. Comparing the factors of LRFD and DA2, the partial factor for soil parameters is equal to 1.0 in both, so the soil parameters are not factored in either design. As a result, the analysis of the cut slope design in LRFD is the same as that of DA2 in LSD.

4.5 Summary of cut slope design

The design angles of the cut slope are summarized in Table 7. The angles of the cut slope obtained for LSD and LRFD are larger than those for the conventional design; thus a smaller amount of soil needs to be cut in order to maintain the stability of the slope. Although the computed moment ratios in DA1C1 and DA2 are larger than the required values, the slope angle cannot be reduced any further. If the angle is slightly larger than 41.5°, the slope surface will contain a thin layer of fill, and the factor of safety would drop to about 0.7. The stability analysis is therefore governed by the soil layering. During the design of a slope, the self-weight of the soil is the weight of the slope. However, it is inappropriate to apply partial factors for permanent loads to the weight of soil: it is difficult in geotechnical calculations to tell to which side of the equilibrium equation the weight of a given volume of ground contributes. Therefore, the partial factor for self-weight is taken as unity (Ovesen 2002).

5 CONCLUSIONS

LSD uses partial factors or load and resistance factors to account for the variations of both resistances and loads, and achieves relatively uniform levels of safety for various geotechnical works. In practical geotechnical design, the use of LSD is similar to the use of ASD, and no complex probability and statistical analyses are required. The outcomes of the design example show that the LSD designs of the cut slope require a smaller reduction of the slope angle and are therefore more economical than the conventional design. This reduction of cost does not sacrifice the safety of the slope.

ACKNOWLEDGMENT

The work presented in this paper was substantially supported by the Research Grants Council of Hong Kong (Projects Nos. 622207 and HKUST6126/03E).
REFERENCES

AASHTO (1998). Standard Specification for Highway Bridges, 1998 edition. American Association of State Highway and Transportation Officials, Washington, D.C.
Chu, L. F. (2007). Calibration of Design Methods for Large-diameter Bored Piles for Limit State Design Code Development. MPhil thesis, Hong Kong University of Science and Technology, Hong Kong.
European Committee for Standardization (1994). Eurocode 1 – Basis of design and actions on structures – Part 1: Basis of design. European Committee for Standardization (CEN).
European Committee for Standardization (2004). Eurocode 7 – Geotechnical Design. European Committee for Standardization (CEN).
Frank, R., Bauduin, C., Driscoll, R., Kavvadas, M., Ovesen, N. K., Orr, T., and Schuppener, B. (2004). Designer's Guide to EN 1997-1. Thomas Telford Ltd., London.
Geotechnical Engineering Office (1984). Geotechnical Manual for Slopes, 2nd Edition. Geotechnical Engineering Office (GEO), Hong Kong.
Geo-Slope (1991). User's Guide, SLOPE/W for Slope Stability Analysis, version 2. Geo-Slope International, Calgary, Canada.
Honjo, Y. (2007). Development of a basic specific design code in performance based specification concept: The technical standards for port and harbor facilities. ISGSR2007, First International Symposium on Geotechnical Safety and Risk. Tongji University, China, pp. 105–115.
Morgenstern, N. R., and Price, V. E. (1965). The analysis of the stability of general slip surfaces. Geotechnique, Vol. 15, pp. 79–93.
Orr, T. (2005). Evaluation of Eurocode 7. Trinity College, Dublin.
Ovesen, N. K. (2002). Limit state design – The Danish experience. Foundation Design Codes and Soil Investigation in view of International Harmonization and Performance based Design. Swets & Zeitlinger, Lisse, The Netherlands, pp. 107–116.
Rosenblueth, E., and Esteva, L. (1972). Reliability basis for some Mexican codes. ACI Pub. SP-31, pp. 1–41.
Schuppener, B. (2007). Eurocode 7 and its latest developments. ISGSR2007, First International Symposium on Geotechnical Safety and Risk. Tongji University, China, pp. 117–132.
Zhang, J., Tang, W. H., Zhang, L. M., and Zheng, Y. R. (2007). Calibration and comparison of reliability analysis procedures for slope stability problems. ISGSR2007, First International Symposium on Geotechnical Safety and Risk. Tongji University, China, pp. 205–216.
Probabilistic charts for shallow foundation settlements on granular soil

C. Cherubini & G. Vessia, Politecnico di Bari, Bari, Italy
ABSTRACT: The calculation of settlement often governs the dimensioning of shallow footings. In loose soils, and particularly in sandy deposits, the impossibility of collecting undisturbed samples has led to the definition of a set of formulas for calculating settlements on the basis of in-situ measurements. Moreover, the inherent variability of the mechanical properties of the soil, as well as the model errors in the available settlement formulas, calls for new procedures for assessing the reliability of settlements calculated for this type of soil. A statistical analysis of nine formulas commonly used for calculating settlements on sandy soils has been carried out in this paper. The calculated settlements have been compared with the field settlement measurements of Burland & Burbidge (1985), and their accuracy, precision and entropy have been investigated. In addition, probabilistic charts of acceptable settlements have been designed by means of the most precise and accurate settlement formula resulting from this study.
1 INTRODUCTION
Past studies showed high variability in the settlements estimated by different settlement formulas. Such variability, mostly attributed to model errors, has been studied by comparing predictions with actual settlements from field tests by means of the ratio R, defined as:

R = Sc / Sm      (1)

where Sc represents the calculated settlement and Sm the measured one corresponding to the same load level. In this paper, starting from previous works on this topic (Cherubini & Greco 1998, Sivakugan & Johnson 2004, Cherubini & Vessia 2005), statistical tools are used in order to reduce the scatter in the R datasets relating to the different formulas and to identify the most efficient settlement formulation. Finally, on the basis of the latter, charts of acceptable settlements are suggested for design purposes.

2 ESTIMATION OF MODEL ERRORS IN SETTLEMENT PREDICTIONS
2.1 The R ratio method

This study investigates how efficiently nine different formulas for settlement calculation, based on NSPT measurements, can predict field-measured settlements. The R ratio is determined as in Equation (1). It can be regarded as a "bias factor", since it is an index of the deviation of the settlement estimate from its actual value; this means that R takes into account the different models of non-linearity in the soil response to the applied load and of the dependency of the soil stiffness indexes on the settlement magnitude. In the literature this ratio is often expressed as a measure of the variability of the soil deformation modulus (Wu 1974, Kaggwa et al. 2002) when each single expression is considered for calculating the settlements. It is impossible to demonstrate numerically that one formula is more correct, in mean terms, than another, on account of the strong dependence of the estimates on the empirical relations for the in-situ deformability parameters. Therefore, the following study does not assess the correctness of the calculation formulas in relation to the ability of each model to explain the theoretical behaviour of the soil; rather, it considers all the sources of uncertainty that heavily affect the estimates by means of one ratio. The R values have therefore been calculated and statistically analysed in order to quantify the efficiency of nine calculation models for settlements and to suggest some design choices based on statistical analysis.

Past studies on nine formulas were carried out by Cherubini & Greco (1998), who used the following formulations: Terzaghi & Peck (1967), Meyerhof (1965), Meigh & Hobbs (1975), Arnold (1980), Burland & Burbidge (1985), Anagnostopoulos et al. (1991), Schultze & Sherif (1973) and Berardi & Lancellotta (1991). These authors took into account the settlements measured in situ by Burland & Burbidge (1985) in 192 cases of shallow foundations of different sizes, lying on different kinds of soils, and determined the frequency of the R ratio given in Equation (1). Afterwards, Sivakugan & Johnson (2004) found that the beta probability density function represents the sampling distribution of the R ratio for settlements estimated through the following four equations: Terzaghi & Peck (1967), Schmertmann et al. (1978), Burland & Burbidge (1985) and Berardi & Lancellotta (1994). In that study the meaningfulness of the samples of R taken into account was not accurately investigated at all. Here, the model error of the nine R datasets is studied according to the following steps: 1) statistical tools for outlier detection; 2) quality control of the datasets in terms of "precision" and "accuracy".

To this end, some considerations on a possible scale for the R values must first be made. Table 1 proposes a five-level scale which identifies the distance from the ideal value of R in a qualitative and quantitative way. According to this scale, the "ideal" value for R is 1, and it falls within Degree 3 of conservation, marked as neutral. Values of R less than 1 (Degrees 1–2) correspond to non-conservative formulas, while values of R greater than 1 (Degrees 4–5) correspond to conservative formulas. A safety-oriented design does not accept values of R falling within Degrees 1 and 2, but tends to settle in the higher degrees of conservation. However, these considerations on the values taken by R do not by themselves help to choose the best formulation for the settlement estimation. This scale has been applied to the complete dataset of R values investigated throughout this paper. The dataset cannot be included here due to the limited number of pages available; nevertheless, the five pages of datasets are reported in Cherubini & Vessia (2005). R takes, in general, values belonging to all five degrees of conservation. It is therefore necessary to introduce statistical tools for gathering useful information on the quality of the R values.

Table 1. Scale of conservation based on the values of the R ratio.

Degree of conservation    Values of R    Level of conservation (or safety) of the formula adopted for the calculation
5                         >1.5           Highly conservative
4                         1.2–1.5        Conservative
3                         0.8–1.2        Neutral
2                         0.6–0.8        Non-conservative
1                         <0.6           Highly non-conservative

2.2 Analysis of the datasets

The Exploratory Data Analysis (EDA) is used in the first step. In fact, statistical properties such as central tendency and dispersion can be greatly influenced by different kinds of anomalous data, which should always be identified and, where appropriate, discarded in order to estimate any significant statistical quantity. These values are too low or too high compared with the interval within which the measures of a dataset fall. Such outliers affect the mean and the standard deviation of the dataset, and they can also affect the choice of the probability density function (PDF) which best explains the sampling distribution. There are many methods for detecting the anomalous values of a dataset, among which is the Box plot method. However, this method is not a suitable instrument for deciding on the statistical nature of these values: in a set of measures, there are many reasons which can explain the presence of values that are too low or too high compared with the average. The Box plot method has therefore been combined with the Ferguson test, which can identify and decide on the anomalous values on a statistical basis.

2.3 The box plot method and the Ferguson test

The Box plot method computes the values of the dataset, marked as "extreme adjacent values", which bound the range of dataset measures considered acceptable. Outside this range, dataset measures are considered anomalous, and so they are examined further and/or discarded. From the analyses carried out by means of the Box plot method on the nine datasets of R (the APK formula gives two possible expressions for the settlements: one changes according to the smaller size B of the foundation, and the other changes with the values of NSPT), forty-eight anomalous values have been found. As mentioned before, detecting anomalous data in a statistical way is a necessary condition for discarding them from the dataset, but it is not sufficient. Any exclusion should also be decided according to a backwards path, i.e. the conditions under which the measurements were made, so as to explain their anomaly in terms of measurement error. In our case only the basic data are known, while the data gathering and any further processing are unknown. Experience gained in the analysis of datasets of measured properties in homogeneous soil units shows that some values can be very far from the mean, which may be due to the following reasons (Rethati 1988):

• the layer contains foreign inclusions;
• some samples came from an adjacent layer (not the one to be tested) due to the inaccurate determination of the boundaries between the layers;
• the structure or the state of the sample giving the outlying result has changed during the sampling, the transport or the storage;
• some samples were tested under different conditions with respect to the others;
• the tested layer was subjected to some local effect;
• a part of the tests was performed by another laboratory assistant, or by a different method, or less carefully, though by the same assistant;
• the evaluation of test results is affected by gross error.
Considering the changeable structure of the anomalous data, Grubbs (1969) suggests two procedures according to whether the physical reasons for the anomalous values are known or not. In the case under examination, since the conditions under which the soil properties are measured are unknown, decisions about the anomalous values can be taken only according to the statistical tests. The Ferguson test (1961) is useful to this end because it applies to datasets whose variances are unknown and whose parent population is assumed to be normally distributed. Moreover, when the anomalous values all lie on the same side of the mean value (all greater or all lower than the mean), the Ferguson index is calculated as a function of skewness:

√b1 = m3 / m2^(3/2)      (2)

where m2 and m3 are the second and third central moments of the sample. The reference values of this index are reported in Table 2 according to the numerousness of the sample. If the skewness ratio exceeds the values given in Table 2, the anomalous values are discarded, starting from those farthest from the mean, and the index in Equation (2) is calculated again until the following hypothesis is satisfied:

√b1 ≤ √b1(n, α)      (3)

where √b1(n, α) is the critical value in Table 2 for sample size n and significance level α.

Table 2. Reference values for the index √b1 as the sample size varies (Ferguson, 1961).

n                 30      35      40      50      ≥60
√b1 (α = 0.01)    0.98    0.92    0.87    0.79    0.72
√b1 (α = 0.05)    0.66    0.62    0.59    0.53    0.49

For the complete datasets of the nine formulas taken into account, the values of the index √b1 are summarized in Table 3. Considering the first column of Table 3, the nine complete datasets, which include the anomalous values, do not satisfy Equation (3); therefore all the anomalous values detected through the box plot have been discarded in order to obtain the corrected values given in the third column of Table 3. Each time, this procedure has had a negligible impact on the remaining number of values in the datasets. In the following, the datasets of R corrected by means of the Ferguson test will be referred to as "corrected datasets".

Table 3. Values of the Ferguson index (1961) calculated for the nine datasets of R.

Dataset    Ferguson index, complete    Ferguson index, corrected    % of discarded measures
TP         1.46                        0.71                         3.4
M          1.22                        0.71                         7.4
MH         1.04                        0.71                         5.1
A          1.19                        0.71                         5.1
BB         1.62                        0.72                         3.5
APK1       1.86                        0.72                         3.7
APK2       2.91                        0.71                         1.1
SS         0.98                        0.70                         1.7
BL         1.04                        0.67                         2.9
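The two-stage screening just described can be condensed into a short routine: box-plot fences first, then iterative discarding until the skewness criterion of Equations (2)-(3) is met. This is only a sketch of the idea: the dataset is synthetic, the fence coefficient k = 1.5 is the usual box-plot convention, and a single critical value from Table 2 is assumed.

```python
import numpy as np

def box_plot_mask(x, k=1.5):
    """Keep values inside the fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x >= q1 - k * iqr) & (x <= q3 + k * iqr)

def sqrt_b1(x):
    """Sample skewness sqrt(b1) = m3 / m2**1.5 (Equation 2)."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

def ferguson_correct(x, crit):
    """Discard the value farthest from the mean until |sqrt(b1)| <= crit (Equation 3)."""
    x = x[box_plot_mask(x)]
    while len(x) > 3 and abs(sqrt_b1(x)) > crit:
        x = np.delete(x, np.argmax(np.abs(x - x.mean())))
    return x

rng = np.random.default_rng(1)
R = rng.lognormal(mean=0.1, sigma=0.25, size=180)  # synthetic dataset of R
R_corr = ferguson_correct(R, crit=0.49)            # Table 2: alpha = 0.05, n >= 60
print(len(R), len(R_corr), round(sqrt_b1(R_corr), 3))
```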
2.4 Accuracy, precision and entropy in the settlement estimation

In addition to the notion of conservation previously introduced and classified, other notions are introduced here in order to give an objective basis to the assessment of the ratios between calculated and measured settlements, indicated as R: accuracy, precision and entropy. Precision represents the dispersion of the values given by an instrument when measuring a specific physical quantity, and can be considered a measure of dispersion. Accuracy, on the contrary, refers to the greater or smaller closeness of a set of measures to the real value which the measures are supposed to represent; checking the accuracy of a dataset can thus be associated with an indicator of central tendency, such as the mean. These features of a dataset are often neglected: on the one hand the data are very few, and on the other hand the influence of a highly inaccurate starting dataset on geotechnical design is underestimated, even in the presence of state-of-the-art physical and numerical models. A synthetic index which takes into account at the same time the accuracy and the precision of a dataset of R is the "Ranking Index" suggested by Briaud & Tucker (1988):

RI = µ(|ln R|) + s(|ln R|)      (4)

where µ and s stand for the mean and standard deviation of the absolute value of the natural logarithm of R, defined before as the ratio between the calculated and measured settlement. The values of this index let us define the conservation of a dataset having mean µ and standard deviation s. In defining RI, however, the variable R is assumed to be lognormally distributed. To generalize the problem, Cherubini & Orr (2000) have suggested the "Ranking Distance" index, defined as follows:

RD = √[(1 − µR)² + sR²]      (5)

where µR and sR are the mean and standard deviation of R. The ideal value of RD is 0, which represents measures of the ratio R whose mean is equal to 1 and whose deviation tends to 0. A geometrical interpretation of the "Ranking Distance" can be given on a Cartesian plane where the mean values of R are put on the x-axis and the standard deviations of R on the y-axis (Cherubini & Orr 2000): RD represents the distance of the measured R dataset from the ideal point of coordinates (1, 0).
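Both indices reduce to one-liners; the following sketch computes them for a synthetic dataset of R, using Equations (4) and (5) as reconstructed above.

```python
import numpy as np

def ranking_index(r):
    """RI (Briaud & Tucker 1988): mean plus standard deviation of |ln R|."""
    a = np.abs(np.log(r))
    return a.mean() + a.std(ddof=1)

def ranking_distance(r):
    """RD (Cherubini & Orr 2000): distance of (mean R, std R) from the ideal point (1, 0)."""
    return np.hypot(1.0 - r.mean(), r.std(ddof=1))

rng = np.random.default_rng(2)
r = rng.lognormal(mean=0.05, sigma=0.3, size=160)  # synthetic corrected dataset of R
print(f"RI = {ranking_index(r):.2f}, RD = {ranking_distance(r):.2f}")
```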
Lastly, the entropy, like the variance, is another measure of dispersion, although the variance measures the dispersion around the mean value of a random variable whereas the entropy measures the overall uncertainty associated with the random variable (Kinsley 1983). This measure can be more significant than the variance whenever bimodal random variables are dealt with. Moreover, its value is only slightly affected by extreme values such as outliers, which are generally characterized by low probability values. If a random variable x has n possible outcomes xi (i = 1, 2, ..., n) and the probability assigned to each outcome is Pi, then the entropy associated with x is (Shannon 1948):

H(x) = −Σi Pi ln Pi      (6)

Table 4. Values of RD calculated for the modified datasets of R.

Dataset    Mean    Standard deviation    Skewness    Kurtosis    RD
TP         3.17    1.9                   1.83        −0.2        3.0
M          1.53    1.1                   1.84        0.34        1.5
MH         1.53    1.1                   1.83        −0.33       1.3
A          0.98    0.5                   2.02        −0.19       0.7
BB         1.58    0.8                   1.90        −0.13       1.3
APK1       1.16    0.6                   1.99        −0.39       0.6
APK2       1.24    0.6                   1.99        −0.20       0.6
SS         1.15    0.6                   1.98        −0.30       0.7
BL         0.82    0.5                   2.02        −0.39       0.5

According to the entropy definition, the higher the entropy, the less reliable the estimation. The ratio of the calculated entropy to the maximum entropy is called the relative entropy Hr(x) (Kinsley 1983):

Hr(x) = H(x) / Hmax      (7)
where Hmax is the maximum entropy of the random variable. The relative entropy of R provides a way of relating the absolute dispersion of each R dataset to that of the eight remaining ones.

2.5 Reliability estimation of the R datasets

According to what has been indicated and elaborated above, synthetic statistical indicators relating to the datasets of the ratios between calculated and measured settlements can be used for estimating the reliability of the settlement calculations. This is achieved in two stages: (1) location of the anomalous values that might be present in the samples of R, and their removal; (2) definition of the accuracy, precision and entropy of each dataset of R. In stage 1, as already seen in the previous paragraphs, the anomalous values have been located and discarded through the Box plot and Ferguson methods. In stage 2, the errors associated with the corrected samples of R have been described in terms of both precision and accuracy, through the ranking distance and the absolute and relative entropy. Table 4 gives the values of RD for the nine corrected datasets of R. Only the RD index has been taken into account, since its validity does not depend on how the dataset is distributed. Figure 1 shows the mean values on the x-axis and the standard deviations of R for the nine datasets reported in Table 4. The points corresponding to the corrected datasets are shaped like empty circles, while the dark circles refer to the most efficient formulas.
Figure 1. Plane showing the mean values (on the x-axis) and the standard deviations (on the y-axis) of the corrected R datasets for the nine settlement formulas.
If curves of constant RD are drawn on the graph, they take the shape of arcs of circles centred on the "ideal" point (µR = 1.0; sR = 0.0) (Figure 1). The same figure shows that, around the "ideal" point, three areas with different ratios between accuracy and precision can be distinguished: a sector within 30 degrees of the vertical contains the datasets that are more accurate than precise; the equally precise and accurate datasets fall between 30 and 60 degrees from the vertical; lastly, the datasets which are more precise than accurate lie between 60 and 90 degrees from the vertical. Inside these areas, to the right of the "ideal" point, low values of RD correspond to high accuracy and precision, while high values of RD correspond to low accuracy and precision. The values of RD are strongly affected by the presence of anomalous values, and their influence is unpredictable a priori with regard both to magnitude and to sign. For the nine datasets of R, the points corresponding to calculated settlements greater than the measured ones (R ≥ 1) fall to the right of the "ideal" point: under these conditions, if the RD values are small, the closer the values of R are to the "ideal" point, the more reliable they are. Figure 1 also makes the ratio between accuracy and precision easy to read through the arcs of circle.

Moreover, when choosing the most efficient formulas according to RD, datasets having a mean of R smaller than 1 must be discarded: values of R smaller than 1, and more specifically smaller than 0.8, correspond to underestimates of the settlement, which are not on the safe side in geotechnical design. For the nine formulas corrected by means of the Box plot and Ferguson test, referring to Table 4, the Berardi & Lancellotta (BL) formula is therefore discarded. The formulas corresponding to the smaller values of RD are drawn as dark circles, namely the two suggested by Anagnostopoulos et al. (1991) and those suggested by Arnold (1980) and Schultze & Sherif (1973). All these formulas fall in the area that is more accurate than precise, and they correspond to mean values and standard deviations of R which are lower than those of the other formulas. The most conservative formula among those examined is that of Terzaghi & Peck (TP), whose RD value is equal to 3, followed by that of Meyerhof (M), which together with TP is also the most dated. The Meigh & Hobbs (MH) and Burland & Burbidge (BB) formulas have the same value of RD (1.3), though MH is more accurate than precise while BB is both accurate and precise.

Nevertheless, the authors think that referring to absolute quantitative scales for the values of RD is problematic because of (1) the kind of problem taken into account and (2) the number of independent variables from which the ratio R is calculated. Orr & Cherubini (2003) introduced the "Ranking Distance" for estimating the precision and accuracy of different formulas used for calculating the at-rest thrust coefficient; in that case values of RD lower than 1.5 were recorded, but the formulas had only one independent variable, i.e. the angle of internal friction (φ). On the contrary, the formulas for settlement calculation in this study depend on many parameters (NSPT, qc, shape factors, relative density, empirical coefficients). Nevertheless, considering that the value of RD calculated at the "ideal" point is RD = 0, the values falling to the right of the "ideal" point on the graph represent acceptable mean and standard deviation values; moving away from the "ideal" point they increase RD and become more conservative. From Table 4 and the dark circles in Figure 1 it can be inferred that the APK1, APK2, A and SS formulas correspond to the lowest acceptable values of RD compared with all the other expressions, followed by the BB, MH and M formulas. Finally, the TP formula is extremely precautionary. On the contrary, the BL formula, though it has a small value of RD, falls to the left of the "ideal" point on the graph and is not on the safe side. In view of these remarks on the ranking distance, the dispersion, that is the precision, of the R datasets can be further investigated through the calculation of the absolute (H) and relative (Hr) entropy introduced in Section 2.4.

Table 5. Estimation of both entropy and relative entropy for the datasets of R.

Dataset    Entropy (H)    Relative entropy (Hr)
TP         1.5            1.0
M          1.2            0.8
MH         1.2            0.8
A          1.0            0.7
BB         1.2            0.8
APK1       1.0            0.7
APK2       1.1            0.7
SS         1.1            0.7

Table 5 gives the values of entropy for eight of the nine datasets of R. As can be seen, the most disperse formulas are TP, MH and M, while the least disperse are BL and A. These results point out that, for these nine settlement formulas, the expressions which are less disperse around the average are also less disperse in absolute terms. This result is also confirmed by the calculation of the relative entropy according to Equation (7).

Table 6. Synoptic table of both moments and distributions of the datasets of R.

Dataset    K Pearson's coefficient    Probability distribution
TP         0.048                      Normal
M          0.11                       Normal
MH         0.039                      Normal
A          0.047                      Normal
BB         0.048                      Normal
APK1       0.035                      Normal
APK2       0.046                      Normal
SS         0.04                       Normal
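The entropy values of Table 5 can be estimated along the following lines, using a histogram approximation of the outcome probabilities in Equations (6) and (7); the number of bins, and hence Hmax = ln(number of bins), is an assumption of this sketch.

```python
import numpy as np

def entropy_estimates(x, bins=12):
    """Histogram estimate of H = -sum(Pi ln Pi) (Eq. 6) and Hr = H/Hmax (Eq. 7)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    H = -np.sum(p * np.log(p))
    return H, H / np.log(bins)

rng = np.random.default_rng(3)
r = rng.lognormal(mean=0.05, sigma=0.3, size=160)  # synthetic dataset of R
H, Hr = entropy_estimates(r)
print(f"H = {H:.2f}, Hr = {Hr:.2f}")
```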
2.6 Estimation of R distributions

Once the most accurate and precise formulas have been identified, the sampling distributions of the corresponding datasets of R can be estimated. By determining the distributions of the three samples of R for the APK1, APK2 and SS formulas, it is possible to provide a reliability-based design. The A formula is not reported because its results are similar to those of the SS formula. To this end, it is crucial to know how the three datasets of R are distributed in order to develop exceedance probability charts of settlements, which give values for a quick design once the acceptable settlement is defined. The estimation of the sampling distributions has been made by means of the Pearson chart (Pearson & Hartley 1972). Table 6 shows the values of the K Pearson coefficient calculated for all the datasets considered.
It can be seen that all the formulas considered follow the normal distribution. These results differ from those of Sivakugan & Johnson (2004), showing that anomalous values can play a large role in the identification of the sampling probability distributions. As a matter of fact, Sivakugan & Johnson (2004) did not carry out preliminary studies on the available settlement datasets, and they found a beta-type sampling distribution of R for the BB, TP and BL formulas. Moreover, the estimation of the sampling distribution is affected by the numerousness of the sample under consideration. Unfortunately, Sivakugan & Johnson (2004) do not report the number of settlement values considered, so no comparison can be made with the datasets from this study, whose minimum size is 159 values.

3 SETTLEMENT PROBABILITY CURVES

Figure 2. Normal-distribution probability curves corresponding to different values of acceptable settlements calculated by APK1 formula.
The final aim of such a thorough study on the datasets of R relative to several formulas for settlement prediction under shallow footings is to provide a quick tool for design and consultation. In fact, curves of the measured settlements for the three formulas chosen before have been drawn for the different levels of probability. These levels of probability have been determined for the two expressions of Anagnostopoulos et al. (1991) and for the one of Schultze & Sherif (1973), which are the most precise formulas according to the available full-scale data. As far as the values of the possible settlements are concerned, both Eurocode 7 and the Italian Provisions (Testo Unitario 2008) do not give acceptable reference values. Nevertheless, some suggestions come from literature for different building standards (for example Sowers & Sowers 1970). In practice, settlements can be considered as acceptable if they do not cause both static and functional problems to the building. In this study we have made reference to five values of settlements considered as acceptable: 5, 15, 25, 35 and 45 mm. The exceeding probability curves have been drawn according to the normal distribution law for the values of R, so that:
This means that the probability that a value of settlement is higher than the corresponding actual value comes from the calculation of the probability that any value of R is lower than a fixed value of R. Owing to this consideration we have used three normal density distributions related to the modified datasets of R for APK1, APK2 and SS formulas (Figures 2–4) for calculating the probability curves. The values corresponding to the probability of exceedance can be obtained from these settlement probability curves. For example, considering that a 20 mm settlement has been estimated by means of the SS formula,
Figure 3. Normal-distribution probability curves corresponding to different values of acceptable settlements calculated by APK2 formula.
Figure 4. Normal-distribution probability curves corresponding to different values of acceptable settlements calculated by SS formula.
Figure 4 shows that there is a 46% probability that the actual settlement under footing will be more than 25 mm, that is to say the probability of calculating settlements below 25 mm will be 70%. Similarly, there
170
will be a 26% probability that the settlement is below 15 mm, i.e. an 74% probability that the settlement is more than 15 mm. For all three formulas the exceeding probability curves for values of settlements occurring between the curves plotted on the graph will be obtained by interpolating the values of probability for the two curves which mark the range. These probability charts let the engineer know the probability that a certain value of the actual settlement is exceeded provided that the settlement value is known, i.e. they provide the conditioned probability of the actual settlements:
where Sc and Sm are the calculated and measured settlements, respectively; s and S are the values of their respective settlements. So the conditioned probability in terms of not-exceedance will be the following:
Therefore, as a result of a reliability-based designing in terms of settlements (calculated by means of one of the three formulas APK1, APK2 and SS), the probability that the actual settlement of the footing does not exceed a given value can be easily got by the conditioned probability theorem:
where the first term comes from the probability charts (Figures 2–4) while the second one derives from the reliability-based designing. In fact, if the settlements are estimated by the SS formula, and the probability of not exceedance of a 20 mm settlement is equal to 10−4 , the probability that the settlement actually measured is below 25 mm will be easily got from Figure 4:
If the calculation is made by means of the two other formulas, for the same estimated settlement we will get:
4
CONCLUSIONS
This paper shows the results of a statistical study carried out on settlements under shallow footings on sand
by the so-called bias factor R. Settlements have been calculated according to nine formulas commonly used in geotechnical designing, while the values of fullscale settlements have been obtained from a paper of Burland & Burbidge (1985). This kind of approach has led to choose three expressions from the nine formulas which have turned out to be more precise and accurate according to both processing and available data. This selection has been made by detecting the anomalous values for the nine datasets of R and by calculating the ranking distance in order to estimate the reliability of the nine datasets of R. Moreover, this study has also estimated the best statistical distribution for each dataset by means of Pearson chart. The aim of these analyses was to develop the probability charts of the settlements measured for the three formulas chosen according to precision and accuracy criteria (APK1, APK2 and SS). Thanks to these charts and to the conditioned probability theorem, the exceeding probability of actual settlements can be determined from the exceeding probability of calculated settlements.
REFERENCES Anagnostopoulos, A.G., Papadopoulos, B.P. & Kavvadas, M.J. 1991. Direct estimation of settlement on sand, based on SPT results. In Proceedings of the 10th ECSMFE Florence, Italy: 293–296. Arnold, M. 1980. Prediction of footing settlements on sand. Ground Eng. 2: 40–49. Berardi, R. & Lancellotta, R. 1991. Stiffness of granular soils from field performance. Geotechnique 41(1):149–157. Berardi, R. & Lancellotta, R. 1994. Prediction of settlements of footings on sand: accuracy and reliability. In Proceedings of Settlement ’94, (1): 640–651. Briaud, J. & Tucker, L.M. 1988. Measured and predicted axial load response of 98 piles. Journal of Geotechnical Engineering 114(9): 984–1001. Burland, J.B. & Burbidge, M.C. 1985. Settlement of foundations on sand and gravel. In Proceedings of Inst. Civil Eng. part 1, 78: 1325–1381. Cherubini, C. & Greco, V. R. 1998. A comparison between “measured” and “calculated” values in geotechnics. In Proceedings of the 21st International Conference on Probabilities and Material (PROBAMAT): 481–498. Cherubini, C. & Orr, T.L.L. 2000. A rational procedure for comparing measured and calculated values in geotechnics. In Proceedings of the International Symposium on Coastal Geotechnical Engineering in Practice,Yokohama: 261–265. Rotterdam: Balkema. Cherubini, C. & Vessia, G. 2005. Studi di Affidabilità nel calcolo di cedimenti in terreni sabbiosi. Atti del III Convegno Scientifico Nazionale Sicurezza nei Sistemi Complessi, 19–21 ottobre, Bari. Kaggwa, W.S., Cheong, M.T. & Jaksa, M.B. 2002. Assessment of the luck associated with the settlement predictions that are based on elastic theory. In Pöttler, Klapperich and Schweiger (ed.), International Conference on Probabilistics in geotechnics. Technical and Economic risk estimation, Graz, Austria, 15–19 September. Kinsley, H.W. 1983. Some geotechnical applications of entropy. In Augusti, Borri and Vannucchi (ed.), Fourth
171
international conference on applications of statistics and probability in soil and structural engineering, Florence, 13-17 June, Italy. Eurocodice 7, Progettazione geotecnica – Parte 1: Regole generali, UNI ENV 1997 – 1-Aprile 1997. Ferguson, T.S. 1961. Rules for rejection of outliers. Rev. Inst. Int. Stat. 29: 29–43. Grubbs, F. 1969. Procedures for Detecting Outlying Observations in Samples. Technometrics 11(1): 1–21. Meigh, A.C. & Hobbs, N.B. 1975. Soil mechanics, Section 8, Civil Engineer’s Reference Book (3rd ed.). NewnesButterworth: London. Meyerhof, G.G. 1965. Shallow foundations. Journal of Soil Mechanics and Foundation Division 91(SM2). Orr, T.L.L. & Cherubini, C. 2003. Use of the ranking distance as an index for assessing the accuracy and precision of equations for the bearing capacity of piles and at-rest earth pressure coefficient. Canadian Geotechnical Journal: 1200–1207. Pearson, S. & Hartley, H. 1972. Biometrika, Tables for Statisticians (2). Rethati, L. 1988. Probabilistic solutions in geotechnics. Developments in geotechnical engineering 46. Elsevier (ed.).
Schmertmann, J.H., Hartman, J.P. & Brown, P.R. 1978. Improved strain influence factor diagrams. Journal of Geotechnical Engineering Division 104(GT8): 1131–1135. Schultze, F. & Sherif, G. 1978. Prediction of settlements from evaluated settlement observation for sand. In Proceedings of 8th ICSMFE, 3, Moscow, URSS: 225–230. Shannon, C.E. 1948. A mathematical theory of communication. The bell system technical journal 27: 379–423. Sivakugan, N. & Johnson, K. 2004. Settlement prediction in granular soils: a probabilistic approach. Geotechnique 54(7): 499–502. Sowers, G.B. & Sowers, C.F. 1970. Introductory soil mechanics and foundations. Collier-MacMillan: London. Terzaghi, K. & Peck, R.B. 1967. Soil mechanics in engineering practice. New York: J. Wiley & Sons. Testo Unitario 2008. D.M. Infrastrutture 14 gennaio 2008. Nuove Norme Tecniche per le Costruzioni. Ministero delle Infrastrutture, Ministero del’Interno, Dipartimento della Protezione Civile. Wu, T.H. 1974. Uncertainty, safety and decision in soil engineering. Journal of the geotechnical engineering 100(GT3): 329–348.
172
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Resistance factor calibration based on FORM for driven steel pipe piles in Korea J.H. Park, J.H. Lee, M. Chung & K. Kwak Korea Institute of Construction Technology, Goyang, Gyeonggi, Korea
J. Huh Chonnam National University, Yeosu, Chonnam, Korea
ABSTRACT: Resistance factors for static bearing capacity of driven steel pipe piles were calibrated in the framework of reliability theory. Two types of First Order Reliability Method (FORM), Mean Value First Order Second Moment (MVFOSM) method and Advanced First Order Second Moment (AFOSM) method were applied for the reliability analyses and resistance factor calibration. The target reliability indices are selected as 2.0 and 2.33 for group pile case and 2.5 for single pile case in consideration of the reliability level of the current design practices, redundancy effect of the pile group, acceptable risk level, and significance of individual structure. Resistance factors were recommended for the pile foundation design and construction practice and the subsurface conditions in Korea. 1
INTRODUCTION
Geotechnical design practice has been gradually changing in recent years from Allowable Strength Design (ASD) to Limit State Design (LSD), known as Load and Resistance Factor Design (LRFD) in the United States and Canada. Geotechnical resistance factors are greatly influenced by the site geology and the design and construction practice, and it is important to incorporate local ASD experience and practice accumulated over many decades in developing geotechnical LRFD codes. Foundation design and construction practice in Korea is quite different from that of other countries, and the Korean geotechnical community realized the need of developing our own LRFD codes. This paper presents the resistance factors calibrated for the static bearing capacity design methods commonly used in Korea for driven open-ended steel pipe piles. This is part of the research effort in Korea for LRFD implementation in geotechnical engineering. 2
RELIABILITY ANALYSIS
2.1 Resistance bias factor statistics Static load test reports on driven steel pipe piles were collected from all around Korea along with available records of subsurface investigation, and laboratory testing. While a total of over 2,000 static load test data was collected and reviewed, 57 static load test data were found to be reliable and useful for this study. All of these test piles were sorted into two cases for reliability analysis: 1) SPT N-value at pile tip less than
50 (N < 50), and 2) SPT N-value at pile tip equal to or more than 50 (N ≥ 50). And reliability analysis and resistance factor calibration were performed only for total pile capacity. Five different failure criteria were employed in this study to evaluate the pile’s bearing capacity, and the Davisson’s criterion was proven to perform the best in the previous study (Kwak et al. 2008). The Korean Design Standards for Foundation Structures (2003) adopted two static bearing capacity analysis methods. These two methods were used to estimate the static ultimate capacity (Qu ) of the 57 test piles.The first method is expressed in the following formula.
where, σv : effective overburden pressure at the pile tip with the depth limit to 20B, Nq : bearing capacity factor as a function of soil’s friction angle (φ), c : cohesion of soil, Nc : bearing capacity factor for cohesion, Nc = 9 for soils with φ = 0, Ap : cross section area of pile assuming 100% plugging, fs : unit frictional resistance along the shaft (=Ks σv tanδ for cohesionless soils), Ks = 1.4 (1 − sin φ) and δ = 20 degrees, σv : average effective overburden pressure along the shaft. fs = αcu for cohesive soils, α : adhesion factor, and cu : undrained shear strength of soil, As : pile shaft surface area. The second method is the empirical formula which is revised after Meyerhof (1976) as presented below.
where, m = 3 Lb /B ≤ 30 (Lb : pile embedment depth into bearing stratum), Np : average SPT N value
173
(uncorrected) in the vicinity of pile tip, mNp ≤ 1500 metric tons/m2 , n = 0.2, Ns : average SPT N value (uncorrected) along pile shaft, n Ns ≤ 10 metric tons/m2 . Resistance bias factor is defined as the ratio of the measured ultimate bearing capacity from a load test over the predicted ultimate bearing capacity by a static bearing capacity formula. The resistance bias factors for the selected test piles are computed for the two static design methods described above. The statistics of the resistance bias factors are computed for reliability analysis and resistance factor calibration, and are presented in Table 1. For the case of SPT N-value at pile tip less than 50, Equation (1) appears to predict the bearing capacity more closely to the measured capacity than Equation (2). Distribution of these resistance bias factors was also examined and lognormal distribution was found to most closely represent the bias factor distributions (Kwak et al. 2008). 2.2
The computed reliability indices by both the MVFOSM and AFOSM methods are shown in Figures 1–2. The Korean Design Standards for Foundation Structures requires a minimum safety factor of 3.0 for pile bearing capacity design, and in some cases a safety factor larger than 3.0 has been used in the Korean practice. Accordingly, reliability analysis was performed for safety factors of 3.0–5.0. Reliability of the static design methods seems to be within a reasonable range. The AFOSM method resulted in larger reliability indices than the MVFOSM method by 3.1% to 6.7% for SPT N at pile tip less than 50 case, and 2.9% to 9.8% for SPT N at pile tip equal to or more than 50 case.
Reliability analysis results
AASHTO LRFD Specification Strength Case I (2007) is considered as the critical loading case for pile bearing capacity. Two types of the First Order Reliability Methods (FORM), the Mean Value First Order Second Moment (MVFOSM) method and the Advanced First Order Second Moment (AFOSM) method were used to evaluate the reliability of the static bearing capacity design methods. Two random variables, the load (Q) and the resistance (R), are considered and they are assumed to be statistically independent and lognormally distributed. The limit state function in this case is defined as: g (R, Q) = ln (R) − ln (Q) = ln (R/Q). If the load effects to be considered are only dead and live loads, the reliability index (β) can be calculated by the following the MVFOSM formula.
Figure 1. Reliability indices (N < 50).
where, λR , COVR : mean and coefficient of variation of resistance bias factors, and FS : factor of safety. In the AFOSM analysis, the limit state function is linearized at a point on the failure surface. The limit state function g (R, Q) can be expressed in the following format.
Table 1.
Resistance bias factor statistics.
Resistance bias factor (λR )
N at tip < 50
N at tip ≥ 50
Eq.(1)
Eq.(2)
Eq.(1)
Eq.(2)
Mean COV* No. of data
0.975 0.511
1.750 0.755
0.726 0.411
1.317 0.743
27
*COV = coefficient of variation
30 Figure 2. Reliability indices (N ≥ 50).
174
3
RESISTANCE FACTOR
3.2 Calibration of resistance factors
3.1 Target reliability index Target reliability index is a very important element in resistance factor calibration, and it should be selected considering various sources of uncertainties in load and resistance evaluation, safety level required by the society, cost-effectiveness as well as the current design and construction practice. Meyerhof (1970) proposed that the probability of failure of foundations should be between 10−3 and 10−4 , which corresponds to reliability index between 3.1 and 3.7. Barker et al. (1991) suggested target reliability index as 2.0 to 2.5 in their research for the AASHTO resistance factor calibration for driven piles, and Whithiam et al. (1998) confirmed that this range of target reliability index is reasonable for a single pile design considering the redundancy of pile groups. The NCHRP 507 report (Paikowsky 2004) showed that the evaluation of resistance factors using various reliability indices before establishing the final target reliability index contributed to developing a reasonable target reliability index. It recommended target reliability indices in conjunction with group redundancy: 2.33 for redundant piles (5 or more piles per pile cap) and 3.0 for non-redundant piles (4 or fewer piles per pile cap) and these values were finally applied to develop the AASHTO LRFD resistance factors for driven piles. As stated above, target reliability index of 2.0–3.0 appears to be reasonable for pile foundations. Reliability indices computed using all 57 data in this study range from 1.5 to 2.9 for corresponding safety factors of 3.0 to 5.0, and the two design methods (Equations [1] and [2]) have fairly uniform reliability indices for both pile tip cases (N < 50 and N ≥ 50). Therefore, the same target reliability indices can be recommended for both design methods and both pile tip cases. Considering that a minimum safety factor of 3.0 has been used for steel pipe pile design, recommend target reliability index of 2.0 to 2.5 is within a reasonable compatibility with the reliability of the current design practice in Korea. Based on the reliability analysis of the current static design methods presented above and considering the target reliability indices recommended in the literature or selected in other research projects, the target reliability indices for static bearing capacity design of steel pipe piles in Korea are selected as presented in Table 2. Two different target reliability indices are recommended for the group pile case to account for acceptable risk level or significance of individual structure. Reliability index of 2.0 to 2.5 corresponds to probability of failure of 2.3% to 0.6%, respectively.
Resistance factors were calibrated by the three methods used in the reliability analysis and the results were compared. The following formula was used for the MVFOSM method:
where, βT : target reliability index. The basic algorithm of the AFOSM and MCS methods for resistance factor calibration is similar to that of the respective reliability analysis. The limit state function for AFOSM is:
Calibration was performed for the two static bearing capacity analysis methods, and the results are presented in Table 3. Calibrated resistance factors for the case of N at tip <50 are relatively larger than those for the N at tip ≥50 case. The resistance factors for Equations (1) and (2) are also different for both pile tip cases. Calibrated resistance factors for the case of N at tip <50 are relatively larger than those for the N at tip ≥50 case. The resistance factors for Equations (1) and (2) are also different for both pile tip cases. Resistance factors recommended in this study are specific for the pile foundation design and construction practice and the subsurface conditions in Korea.
4
This study presents the following findings and conclusions. 1. Target reliability indices for static bearing capacity design of steel pipe piles are selected as 2.0 and 2.33 for group pile case (5 or more piles in a group) and 2.5 for single pile case (4 or fewer piles in a group). 2. The difference between computed reliability indices by the MVFOSM and AFOSM methods can be significant and the MVFOSM method may not be reliable when the limit state function is nonlinear. Table 3.
Target reliability index (βT )
Group pile case Single pile case
2.0 and 2.33 2.5
Calibrated resistance factors. N at tip ≥ 50
N at tip < 50 Eq.(1)
βT
Table 2. Target reliability indices. Type
SUMMARY AND CONCLUSIONS
Eq.(2)
MvF* AF** MvF AF
2.00 0.41 2.33 0.34 2.50 0.31
0.44 0.37 0.34
0.46 0.37 0.33
Eq.(1)
Eq.(2)
MvF AF
MvF AF
0.48 0.37 0.39 0.32 0.35 0.30
*MvF : MVFOSM , **AF : AFOSM
175
0.40 0.35 0.35 0.28 0.33 0.25
0.37 0.30 0.27
3. Calibrated resistance factors for the case of N at tip < 50 are relatively larger than those for the N at tip ≥ 50 case. The resistance factors for Equations (1) and (2) are also different for both pile tip cases.
REFERENCES AASHTO 2007. LRFD Bridge Design Specifications, American Association of State Highway and Transportation Officials, Fourth Edition, Washington, D.C. Barker, R., Duncan, J., Rojiani, K., Ooi, P., Tan, C., and Kim, S. 1991. Manuals for the Design of Bridge Foundations. NCHRP Report 343, Transportation Research Board, Washington, D.C. Korean Society of Geotechnical Engineers 2003. Design Standards for Foundation Structures. Ministry of Construction and Transportation, Seoul, Korea.
Kwak, K., Kim, K. J., Huh, J., Park, J. H., Chung, M., and Lee, J. H. 2008. Target Reliability Indices of Static Bearing Capacity Evaluation of Driven Steel Pipe Piles. Proceedings of the 87th Annul Meeting of Transportation Research Board, Transportation Research Board, Washington, D.C. (CD-ROM) Meyerhof, G. G. 1970. Safety Factors in Soil Mechanics. Canadian Geotechnical Journal, 7(4): 349–355. Meyerhof, G. G. 1976. Bearing Capacity and Settlement of Pile Foundations. Journal of the Geotechnical Engineering Division, 102(GT3), ASCE: 197–228. Paikowsky, S. G. 2004. Load and Resistance Factor Design for Deep Foundations. NCHRP Report 507, Transportation Research Board, Washington, D.C. Whitiam, J., Voytko, E., Barker, R., Duncan, M., Kelly, B., Musser, S., and Elias, V. 1998. Load and Resistance Factor Design (LRFD) of Highway Bridge Substructures. Publication FHWA HI-98-032, FHWA, U.S. Department of Transportation.
176
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
An evaluation of the reliability of vertically loaded shallow foundations and grouped-pile foundations Tetsuya Kohno Bridges and Structures Research Group, Center for Advanced Engineering Structural Assessment and Research (CAESAR), Public Works Research Institute(PWRI), Tukuba, Japan
Takashi Nakaura Chodai CO.,LTD. Tokyo, Kita-ku, Japan
Masahiro Shirato & Shoichi Nakatani Bridges and Structures Research Group, Center for Advanced Engineering Structural Assessment and Research (CAESAR), Public Works Research Institute(PWRI), Tukuba, Japan
ABSTRACT: This paper estimates the reliability of the ultimate bearing capacity of shallow foundations and grouped-pile foundations. First, model uncertainty in the estimation for the bearing capacity of shallow foundations is evaluated using small and large-scale plate-load test results and design formulas. A similar evaluation is then conducted for single piles using in-situ load test results, and furthermore, the reliability of a pile group as a system is estimated considering the increase in the reliability with increase in the number of piles, because most pile foundations in Japan are grouped-pile foundations with more than 5 piles. Finally, this paper shows that the reliability indices for shallow foundations and grouped-pile foundations with more than 4 piles that are designed following current design specifications are equivalent.
1
INTRODUCTION
The advent of performance-based design specifications for bridges occurred in 2002 with the Japanese Specifications for Highway Bridges (Japan Road Association, 2002). The required bridge performances are clearly described for Normal Situations, Frequent Earthquake Situations, Rare Earthquake Situations, and so forth. These Specifications also call for limit states of structural components so that integrating the components that are verified for their limit state criteria can be considered to have achieved the required performance of total bridge systems. According to performance-based design, design requirements are clearly specified, while the existing detailed design methods, including computational methods and acceptable limits, are specified as typical acceptable solutions. Thus, the introduction of performancebased design encourages designers to apply advanced or better solutions in practice, even if these solutions are not presented in the Specifications. However, most of the Specifications are not based on a probabilistic background. Rather, the typical acceptable solution is achieved with empirical safety factors.The adoption of a reliability design concept has been enthusiastically encouraged so that new design approaches or materials can be compared with current practices in terms of reliability. Accordingly, the Specifications are being revised toward the implementation
of the load and resistance factor design (LRFD) format with a reliability design concept. An intermediate draft and the final draft are planned for release in 2009 and 2011, respectively. This paper reports an evaluation of the resistance factors for vertically loaded shallow foundations and grouped-pile foundations. This study is a part of the project for the revision of the Japanese Specifications for Highway Bridges by CAESAR, PWRI. In particular, we will consider the group effect on the reliability of pile foundations and attempt to achieve a consistent reliability level between shallow foundations and grouped-pile foundations.
2
RELIABILITY INDEX AND RESISTANCE FACTOR FOR SHALLOW FOUNDATIONS
The theory of bearing capacity of shallow foundations has been established, but it is well known that theoretical bearing capacity formulas tend to overestimate bearing capacity for larger sized footings. However, in the past twenty years, extensive, careful experimental and numerical research was conducted in Japan, and now the Japanese Specifications for Highway Bridges incorporate the scale effect into the bearing capacity formula of shallow foundation. Therefore, the model uncertainty in the bearing capacity of shallow
177
Figure 1. Definition of ultimate bearing capacity.
foundations is not expected to be sensitive to footing size and we may use even small scale test results to estimate model uncertainty for large-scale footings. 2.1
Error in estimating the bearing capacity of shallow foundations
In this study, the model uncertainty is defined as the ratio of X/Y, in which X is the observed bearing capacity in a load test and Y is the corresponding calculated value using the bearing capacity formula in the Japanese Specifications for Highway Bridges and soil investigation results. This ratio involves two uncertainties: the modeling of bearing capacity theory and the evaluation of geotechinical parameters. Hereinafter, the ratio X/Y is referred to as the model error. For shallow foundations, we use a database of plate load tests in which a rigid plate that modeled a shallow foundation that was subjected to centered vertical loads. We evaluate the ultimate bearing capacity based on a load-displacement curve with three different methods as shown in Figure 1. As shown in Figure 1(a), when the peak point is clearly identified in a load-displacement curve, the load at the peak point is defined as the bearing capacity. As shown in Figure 1(b), when a load-settlement curve contains a clear break point and shows a similar curve to elastoperfectly plastic bi-linear curves, the load at the break point is defined as the bearing capacity, even though there is no clear peak in the load-settlement curve. As shown in Figure 1(c), when a test was terminated before reaching the ultimate state, we approximated the load-settlement curve using an exponential function and used the approximated ultimate load as the observed bearing capacity.
in which R = load, s = settlement, Ru = approximated ultimate load, and sy = approximated yield settlement. Note that the tests that were terminated before the load reached 1.2 times the approximated yield load are omitted because the approximated ultimate load is considered to be highly unreliable.
A summary of the data sets used herein is shown in Table 1, in which B = footing width and B = effective footing width = square root of the footing base area for a rectangular footing or width for a strip footing. Both in-situ and laboratory tests are dealt with herein. Triaxial compression tests were conducted for in-situ test cases to estimate soil strength parameters. The considered confining pressures were also displayed. Subgrade reactions on the footing base of highway bridge piers are typically 100–500 kN/m2 based on a database of past design results. Accordingly, a typical lateral stress level at a depth of a half of the footing width can range 50–250 N/m2, assuming the earth pressure at rest to be 0.5, which is equivalent to the confining pressures of the triaxial compression tests considered in the in-situ test data. The laboratory loading tests considered herein were conducted by a joint research project between the University of Tokyo and the PWRI (Tatsuoka, et al. 1989, Okahara, et al. 1992), who carefully prepared the test facility to minimize the influence of friction between the sand box and sand on the experimental bearing capacity. Accordingly, we can neglect the side friction effect on the test results. It is also worth noting that the tests were conducted in plane strain conditions. The calculated bearing capacity is obtained as such:
in which Qu = ultimate bearing capacity (kN), A = footing base area and is equivalent to A of footing base area in this study (m2 ), c = cohesion (kN/m2 ), q = surcharge (kN/m2 ), γ = unit weight of soil (kN/m3 ), and B = strip footing width (m) or, if the shape of the foundation is rectangular, B is the shorter side length (m). Nq , Nc , and Nγ = coefficients of bearing capacity that are derived by Komada (1964), and Sq , Sc , and Sγ = modifiers for the scale effect on the bearing capacity coefficients (Japan Road Association),
in which c∗ = c/c0 (1 ≤ c∗ ≤ 10), c0 = 10 (kN/m2 ), q∗ = q/q0 , (1 ≤ q∗ ≤ 10), q0 = 10 (kN/m2 ), and
178
Table 1.
Statistical value of model error in the bearing capacity of shallow foundations.
(a) Square and rectangle footings Ground type
Number of grounds
B (mm)
Bias λp
Variability COVp
Soil test and confining pres-sure (kN/m2 )
8
3
300–1300
1.479
0.171
13
3
200–3000
0.894
0.257
Triaxial compression tests (300) Triaxial compression test (100–300)
Silt and clay
4 6
1 2
300–750 300–750
4.597 0.537
0.474 0.285
Soft rock
6
1
300
0.814
0.254
Gravel
Sand
Number of data sets
Box shear test Triaxial compression test (No information) Triaxial compression test (No information)
Reference Maeda, et al. (1990), Maeda, et al. (1991) Chimi, et al. (1996), Nakano, et al. (1996), Yamamoto, et al. (1996), Ouchi, et al. (1993), Onodera, et al. (1993), Okahara, et al. (1987), Briaud, et al. (1997) Utsunomiya National Highway Office (1976) Okahara, et al. (1987)
(b) Rigid strip footing in the plate strain box Ground type
Number of data sets
B (mm)
Bias λp
Variability COVp
Soil test and confining pressure (kN/m2 )
Sand (experiment)
14
230–500
0.722
0.307
Torsional simple shear test (100)
Reference Okahara, et al. (1992)
λ, ν, and µ = −1/3. The bearing capacity coefficients and the scale effect modifiers are dimensionless. For rectangular footings, α and β are the dimensionless foundation shape factors that are given as:
in which B and D = short and long side lengths of footing, respectively. For strip footings, α = β = 1. κ is the modifier for the embedment effect,
in which Df = effective foundation embedment depth (m). κ is also dimensionless. The soil strength properties of cohesion, c, and internal friction angle, φ, that are derived via soil laboratory tests shown in Table 1 are used in the calculation. When c appeared in CD triaxial compression test results even in the case of sand and gravel, the obtained c is incorporated into the calculating of bearing capacity. Figures 2 and 3 show the relationship between the model error and the effective footing width for all test results. With all data, the bias of model error X/Y, λp , is close to 1.0 and the model error, X/Y, does not have a tendency to change with effective footing size, suggesting that the modifier for the size effect to the bearing capacity coefficient works well. Accordingly, we can assume that the statistical values examined herein can also be used as the model uncertainty in the bearing capacity of larger size footings.
Figure 2. Relationship between model error and effective footing width for sand and gravel.
Figure 3. Relationship between the model error and the effective footing width for silt and clay.
179
Figure 4. Frequency distribution of the model error with all data except for silt and clay cases.
Figure 5. Difference in the model error with changing the effective footing width B .
However, the bias changes with differences in soil investigation methods for deriving soil strength parameter values. For example, for sands, calculated values tend to overestimate test values, especially when a torsional test is adopted to derive soil parameter values. It is worth noting that the laboratory footing load test was conducted in the plane strain condition while in-situ tests were not, and this is another reason to account for the difference in the bias between the in-situ and laboratory tests. When using a box shear test to obtain soil parameter values for cohesive soils, the calculated bearing capacity is underestimated by a large degree at 0.474. However, when using a triaxial compression test, the bias becomes closer to 1.0 and is equal to 0.573. Figure 4 shows the frequency distribution of the model error when the soil parameter values are obtained with triaxial compression tests only on sand and gravel.The frequency distribution seems to be consistent with both normal and log-normal distributions. In practice, shallow foundations are rarely placed on clayey soils for highway bridges in Japan. Based on these results, we finally estimate that the model error of the bearing capacity follows a log-normal distribution with a bias λp of 0.85 and a coefficient of variation (COVp ) of 0.30 under the following conditions:
load Qu is obtained using the plate loading test, φ can be estimated from back analysis using the following equation, assuming the value of cohesion c:
Since one of the test cases conducted a series of loading tests at a particular site using square and rectangular plates with differing diameters ranging from 0.3 m to 1.30 m on gravel (Maeda, et al. 1991) as summarized in Table 1, we will estimate the model uncertainty when using a plate loading test to derive soil parameter values. In this case, we considered a cohesion c of 100 kN/m2 that was estimated from CD triaxial compression test results, because the gravel on site seems to have cohesion because of moisture content. With this assumption, we performed a back calculation of φ using Eq. (6) and c = 100 kN/m2 . The back calculation of φ gives a result of 45 degrees. Figure 5 shows the relationship between the model error and effective footing widths. Figure 5 shows that the bias of the model error approaches 1.0 at every effective footing width when using a plate loading test together with a triaxial compression test to estimate the soil parameter values. This result suggests that, if we evaluate the geotechnical parameters with both triaxial compression tests and a plate loading test, we can estimate a more reliable bearing capacity.
(1) The soil parameter value should be estimated by triaxial compression tests at a confining pressure of approximately 250 kN/m2 . This confining pressure values are expected to approximate the lateral confining stress at a depth of the half depth of a shallow foundation. However, the shallow foundation width is unknown in the soil investigation stage before design. In addition, triaxial compression tests may become difficult in a lower confining pressure. Therefore, we suggest that 250 kN/m2 should be relevant based on the earlier design experience described above. (2) For cohesive soils, more complete investigations are required because the model uncertainty tends to be sensitive to the reliability of the estimation of cohesion c.
When a load Q and a resistance R are probabilistic variables assumed to follow a log-normal distribution, a safety factor µFS in the current design is associated with the reliability index β as follows:
In design, a plate loading test using a circular plate with a diameter of 0.3 m is sometimes used to derive soil parameter values indirectly. When the ultimate
in which, λQ and COVQ are the bias and coefficient of variation of load distribution, Q, (or calculated load acting on the head of the pile), and λR and COVR
2.2 Safety margin and resistance factor
180
Table 2. Reliability index β of existing shallow foundations, target reliability index βT , and resistance factor. β
βT
3.04
3.10
0.33
are the bias and coefficient of variation of resistance distribution, R. For simplicity, this study shows a conditional reliability analysis, disregarding the statistic issues of load, Q, and regarding loads in the current Japanese Specifications for Highway Bridges as the deterministic factored values. We ended up assuming λQ = 1.0 and COVQ = 1.0. Eq. (7) can be reformulated as follows:
When we apply the safety factor value of the current Specifications, µFS = 3.0, and λR = 0.85 and COVR = 0.30 as evaluated above, the reliability index is calculated as β = 3.04. Based on this result, we determined the target reliability index βT = 3.10. The resistance factor is calculated by:
in which Qd is the design vertical load, Rud is the design vertical bearing capacity, and Ruc is the characteristic value of the vertical bearing capacity, obtained with Eq. (2) and c and φ measured by a prescribed soil test or a combination of a prescribed soil test and a plate loading test. Rearranging Eq. (2) and replacing µFS with , where = the inverse of the current safety factor µFS leads to the following equation:
Finally, is calculated to be 0.33. Table 2 summarizes the reliability index β of existing shallow foundations and the target reliability index βT , resistance factor to fulfill the βT . However, as stated above, there is a large bias for cohesive soils on the unfavorable side. Accordingly, it is necessary to apply some partial factor to the term of Nc in Eq. (2) in order to compensate for the assumed bias in the estimation of above. 3
RELIABILITY INDEX AND RESISTANCE FACTOR FOR GROUPED-PILE FOUNDATIONS
Many load test results are available for single piles and the model uncertainty in the bearing capacity can be directly estimated using these test data. However,
most pile foundations are used as grouped piles that are connected though a rigid pile cap. Accordingly, it is preferred to estimate the model uncertainty of the bearing capacity of a foundation system rather than that in individual single piles in a grouped-pile foundation. For example, the AASHTO LRFD bridge design specifications (AASHTO, 2007), in the case of pile foundations, considers that the bearing capacity of pile groups should be larger than single piles because soil between piles can be compacted during construction, and it adopts different target reliability indexes between pile groups with smaller numbers of piles and those with larger numbers (more than 5) of piles (Paikowsky et al., 2004, Zhang et al., 2001). However, this compaction effect can be considered to change according to piling method. In Europe, Eurocode 7 (CEN, 2004) does not describe a specific redundancy factor that is similar to the AASHTO specifications. However, Bauduin (2002) has suggested a factor of 1.1 that can be multiplied with the bearing capacity of grouped piles with a stiff pile cap considering the redundancy that stems from a redistribution of loads within the piles when one of the piles reaches its maximum resistance. In this paper, we will first attempt to estimate the difference in the model uncertainty in the bearing capacity of grouped piles. We will then evaluate the reliability level and resistance factors for grouped-pile foundations as a function of the number of piles in the system. 3.1 Source of model uncertainty under particular consideration in this study We are particularly considering the following two kinds of uncertainty herein: 1) uncertainty in estimating the average bearing capacity of several single piles and 2) uncertainty in the bearing capacity because of the difference in site. As for the first source, let us consider an example in which there are several piles with identical details on a particular site and their bearing capacities are all tested. The bearing capacity of each pile will differ. However, as illustrated in Figure 6, when choosing n pile samples from all piles at the site, obtaining the average bearing capacity, and repeating all this process again and again, we can end up furnishing the distribution of the average bearing capacity of n piles at the site. As a result, it is expected that the variation of the average bearing capacity will decrease with increase in the number of sample piles n because the variation in the individual bearing capacity will be canceled out among the sample piles. Accordingly, the reliability of bearing capacity as a grouped pile foundation should be larger than that of a single pile and we should evaluate the variability in the bearing capacity of grouped pile foundations at a site with a function of the number of piles n. As for the second source, the bearing capacity is calculated using the following equation.
181
Figure 7. Relationship between the COV of model error and the effective embedment ratio, L/D. Table 4. Variability in model error of group piles at a site, COVL . Pile number in a footing 1 Figure 6. Difference in the uncertainty of bearing capacity between single piles and pile groups at a particular site.
COVL
Table 3. Variability in model error of single piles at a site, COVL . Site
1a∗
1b∗
2
3
4
5
Number of single piles in a site COVL
2
2
2
3
2
3
0.01
0.01
0.05
0.05
0.04
0.04
∗
Site-1a and Site-1b tests were conducted at the same site, but their pile lengths and bearing layers are different.
in which Ru = bearing capacity of a single pile (kN), qd = ultimate end bearing capacity intensity per unit area (kN/m2 ) that is a function of SPT-N value and specified in the Japanese Specifications for Highway Bridges, A = area of pile base (m2 ), U = perimeter of pile (m), L = thickness of soil layer considering shaft resistance (m), f = maximum shaft resistance of considered soil layer (kN/m2 ) as a function of SPT-N value and is specified in the Specifications. The ultimate end bearing capacity intensity qd and maximum shaft resistance f in the Specifications are based on regression analyses using pile load test results that were conducted all over Japan. Accordingly the model error changes by site. 3.2 Variability in the average bearing capacity of several piles at a particular site In the PWRI database, as tabulated in Table 3, there are six load test cases in which two piles with identical details were supported by the same bearing stratum at a particular site and tested. While the pile resistance is comprised of the base and shaft resistances, the tests at site Nos. 2–5 were conducted using piles with a friction reduction treatment over 80% of the embedded pile length. Accordingly, while the variability in the
2
3
4
5
6
9
0.050 0.035 0.029 0.025 0.022 0.020 0.017
bearing capacity for site No. 1 results from both the variation in the side and base resistances that for site Nos. 2–5 results almost entirely from the variability in the base resistance. We estimated the bias and COV of the model error for every test case, assuming no side resistance over the friction reduction treatment part in the calculation. The result is shown in Table 3 and Figure 7. Figure 7 shows the relationship between variation and the ratio of the effective embedded pile length L, where the friction reduction treatment part is disregarded, to the pile diameter D. The variability for site Nos. 2–5 is larger than that for site No 1. This tendency should be reasonable, because the spatial variation in the side resistance is supposed to be smaller than the point variation in the shaft and base resistances and the variation in the pile bearing capacity can decrease as the ratio of the shaft resistance to the total pile resistance. Finally, we estimate that the site dependent variability for two piles at a particular site is of the order of smaller than 10%. Finally, a COVL value of 0.05 is adopted as a representative value in this study. When assuming that the variation in the distribution of the average bearing capacity of several piles at a particular site follows the t-distribution, the variation in the mean value of the model error COVL decreases as shown in Table 4, with increase in the number of piles, in which 2, 3, 4, 5, 6, and 9 piles are included in commonly used highway bridge grouped pile foundations.
3.3 Variability in model error due to site differences (COVm ) As stated above, the model error in the bearing capacity of grouped piles should differ due to site differences. Accordingly, we tried to estimate the variability due to
182
site difference COVm using an available database of single pile load tests at different sites. The uncertainty in the model error for single piles is also a function of both the variation within the considered site and the variation due to site difference. Accordingly, the variation in the model error for single piles, COVP , is obtained by combining the variation for single piles within the considered site, COVL , that has already been estimated above and given in Table 3, with the variation due to site difference, COVm :
Table 5. Statistical values of the model error in the bearing capacity of single piles. Pile type and pile installation method
Number of test results
Bias λp
Variability COVp
Cast-in-place Piles Driven-in-steel piles Inner-soil-excavation Hollow steel pile
16 11 9
1.034 0.928 1.225
0.315 0.327 0.323
Table 6. Variability due to the difference of the sites, COVm .
While COVp and COVm change with the number of piles, COVm is presumed to be unaffected. To estimate the value of COVp , we analyze the model error for single piles using the PWRI database. The test value of bearing capacity of single piles is obtained as follows: (i) The peak load is considered as the bearing capacity when the peak load appears before the settlement reaches at 10% the pile diameter. (ii) In the case that the peak load does not appear, the load when the settlement reaches 10% of the pile diameter is defined as the bearing capacity, as stated in the Specifications. (iii) When the pile load test was terminated before the settlement reaches 10% of the pile diameter, the exponential fitting is applied to the loaddisplacement curve and the load at a settlement of 10% of the pile diameter is calculated. This is because most load test data indicate that the pile bearing capacity can be equivalent to the mobilized loads at a settlement level of 10% the pile diameter (Okahara et al., 1999).
COVm
Table 7. Variability in the model error of pile group, COVp . Number of piles
1
2, 3
4 or more
COVp
0.350
0.348
0.347
Table 8.
Reliability index β of pile group.
Table 5 shows the values of COVp for single piles. From these results, we set the variable model of the single pile as λp = 1.0 and COVp = 0.35. This indicates that the bias of the model error for grouped-pile foundations also will be λp of 1.0. Finally, substituting these values into Eq. (8) gives COVm = 0.346 (Table 6). 3.4
Error in estimating the bearing capacity of grouped-pile foundations
COVp is estimated by substituting COVL , which changes according to the number of piles, and COVm indicated in Table 6 into Eq. (8). We set the values indicated in Table 7. 3.5
Safety margin and resistance factor
We evaluated the safety margin and resistance factor for grouped-pile foundations as has been done in 2.2.
Number of piles
1
2, 3
4 or more
β
3.06
3.08
3.09
Table 9.
Note that we use only that data that meets the following conditions: – No treatment to reduce the side resistance. – When using the exponential fitting, the maximum load during the test was larger than the yield load calculated by the fitted curve.
0.3464, not dependent on the number of piles
Resistance factor .
Number of piles
1
2, 3
4 or more
0.329
0.331
0.333
First, we evaluated the reliability of existing groupedpile foundations to define the target reliability index. The reliability index of existing grouped-pile foundations is obtained by substituting the relevant statistical values and the resistance factor = 3 into Eq. (6). Table 8 shows the result. β of single piles is 3.06, and β increases according to the number of piles, resulting in β of grouped-pile foundations with more than four piles being 3.09. Most existing grouped-pile foundations have five or more piles. As shown in Table 8, β of grouped-pile foundations with more than four piles is assumed to be 3.09. This is almost the same as the β value of existing shallow foundations. Therefore, this study reveals that the reliability of shallow foundations and groupedpile foundations are almost equal in the current design norm. Based on the results of Table 8, we then set βT of pile foundation systems having more than four piles to 3.10 to make it equal to that of shallow foundations, 3.10. Table 9 shows the resistance factors that
183
satisfies βT . The resistance factor, , increases with increased in the number of piles. This is because the variation in the average bearing capacity of the individual piles in a grouped pile foundation decreases with increase in the number of piles. However, the difference in due to the difference in the number of piles is not large. This indicates that the accuracy of the calculated bearing capacity is poor and COVm is large. Accordingly, there is an ample room to allow for application of a larger resistance factor when a pile load test is conducted.
4
CONCLUDING REMARKS
We examined the resistance factor for shallow foundations and pile foundation systems. – In the normal design situation, the reliability indexes of both shallow foundations and pile foundation systems are almost the same: 3.10. – For pile foundation systems, we showed that the reliability index of existing pile foundations that have 4 or more piles is 3.10. – We proposed a resistance factor for pile foundation systems as a function of the number of piles with a consistent target reliability level.
REFERENCES AASHTO. 2007. LRFD Bridge and Construction Specifications. 4th edition. Bauduin, C. 2002. Design of axially loaded compression piles according to Eurocode 7. Proceedings of the Ninth International Conference on Piling and Deep Foundations. Nice, Presses de l’Ecole Nationale des Ponts et Chaussees: 301–311. Briaud J, L. and Gibbens, R. 1997. Large scale load tests and data base of spread footings on sand, Publication No. FHWA-RD-97-068, Federal Highway Administration. CEN/TC250. 2004. EN1997-1 Eurocode 7 Geotechnical Design –Part 1: General Rules. Chimi, K., Ouchi, M., Takada, S. and Tatsuoka, F. 1996. Footing settlement in central and eccentric vertical loading tests on a Pleistocene sand deposit, Proceedings of the Geotechinical Conference, JGS, No.31: 1577–1578. Chimi, K., Ouchi, M., Takada, S. and Tatsuoka, F. 1996. Footing settlement in central and eccentric vertical loading tests on a Pleistocene sand deposit, Proceedings of the Geotechinical Conference, JGS, No.31: 1577–1578. Japan Road Association. 2002. Specifications for Highway Bridges. Komada, K. 1969. Bearing capacity calculation diagrams for shallow foundations with inclined load in the two dimensional ground. Technical Report of PWRI(135). Public Works Research Institute. In Japanese.
Maeda, Y., Kusakabe, O. and Ouchi, M. 1991. Large scale in-situ loading tests of square, rectangular footings on a dense scoria, Journal of Geotechnical Engineering, JSCE, No.430/III-15: 97–106. Maeda, Y., Kusakabe, O. and Shigeki, K. 1990. The relationship between the loaded width and coefficient of subgrade reaction obtained with large-scale load tests, Proceedings of the Annual Conference, JSCE, No. 3: 1010–1011. Nakano, M., Tasaka, M., Oyake, T. and Tatsuoka, F. 1996. Shape factor and effects of load eccentricity in bearing capacity of a Pleistocene sand deposit, Proceedings of the Geotechinical Conference, JGS, No. 31: 1579–1580. Okahara, M., Kohata, H., Mori, H. and Tsugawa, Y. 1987. A study on the approximation method of the bearing capacity of shallow foundations on rock, Technical Memorandum of PWRI (2512), Public Works Research Institute. In Japanese. Okahara, M., Takagi, S., Kimura, Y., Mori, H., Asai, K., Watarai, M. and Inoue, A. 1992. An experimental study on the bearing capacity of rigid foundations, Technical Memorandum of PWRI (3087), Public Works Research Institute. In Japanese. Okahara, M., Takagi, S., Kimura, Y., Mori, H., Asai, K., Watarai, M., Inoue, A. and Tatsuoka, M. 1992. An experimental research of the bearing capacity of rigid foundations, Technical Memorandum of PWRI (3087), Public Works Research Institute. In Japanese. Okahara, M., Takagi, S., Nakatani, S., & Kimura, Y. 1991b. A study on the bearing capacity of single piles and design method of column shaped foundations. Technical Memorandum of PWRI (2919), Public Works Research Institute. In Japanese. Onodera, I., Miura, Y., Shiraki, T. and Ouchi, M. 1993. Comparison of modulus of deformation between laboratory and in-situ tests, Proceedings of the Geotechinical Conference, JGS, No. 28: 1893–1894. Ouchi, M.,Abe, S., Kusakabe, O. and Maeda,Y. 1993. Results and observation of in-situ loading tests of square footings on Hayakawa river mouth sand, Proceedings of the Geotechinical Conference, JGS, No. 28: 1583–1584. Paikowsky, S, G., Brigisson, B., McVay, M., Nguyen, T., Kuo, C., Baecher, G., Ayyub, B., Stenersen, K., O’Malley, K., Chernauskaas, L., O’Neill, M. 2004. NCHRP REPORT 507 Load and Resistance Factor Design (LRFD) for Deep Foundations, Transportation Research Board. Tatsuoka, F., Tani, K., Okahara, M., Morimoto, T., Tatsuta, M., Takagi, S. and Mori, H. 1989. Discussion on influence of the foundation width on the bearing capacity factor by Hettler and Gudehus, Soils and Foundations, 29(4): 146–154. Utsunomiya National Highway Office. 1976. Report of the analysis of in-situ test for caisson foundation. Yamamoto, N., Nakano, M., Oyake, T. and Ouchi, M. 1996. Bearing capacity tests in a pnenumatic caisson on a Pleistocene sand deposit, Proceedings of the Geotechinical Conference, JGS, No. 31: 1581–1582. Zhang, L., Tang, W. H. and Ng, C. W. W. 2001. Reliability of axially loaded driven pile groups, Journal of Geotechnical and Geoenvironmental Engineering, ASCE: 1051–1060.
184
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Study on rational ground parameter evaluation methods for the design and construction of large earth-retaining walls Y. Yamamoto, T. Hirose & M. Hagiwara East Nippon Expressway Co., Ltd., Tokyo, Japan
Y. Maeda Kyushu Kyoritsu University, Kitakyushu, Japan
J. Koseki The University of Tokyo, Tokyo, Japan
J. Fukui Advanced Construction Technology Center, Tokyo, Japan
T. Oishi Expressway Technology Center, Tokyo, Japan
ABSTRACT: Tokyo Outer Ring Expressway is an important trunk road that circularly connects the areas located about 15 km away from the center of Tokyo, Japan. In the vicinity of the Tokyo-Chiba boundary, the expressway extends for about 12.1 km, where sections approximately 9.5 km long make use of semi-underground ditch structures that will mostly be constructed with an open-cut system using earth-retaining walls. Since the construction cost of earth-retaining walls accounts for the major portion of the total project cost, the evaluation of ground parameters will greatly affect the cost. Aiming for cost reduction, this paper presents a study on rational methods for evaluating ground parameters to be used for design purposes. The following summarizes the technical characteristics and results of this study. 1) Up until now, most ground parameters to be used for design purposes estimated with empirical methods using, for example, SPT N-values had conservative values, which resulted in quite uneconomical designs. In this study, a series of highly accurate triaxial compression tests were conducted on undisturbed samples and ground strength parameters based on soil dynamics were investigated. 2) The investigation results revealed that in the case of Pleistocene sand layers, a larger value for undrained shear strength cu is obtained as compared to the value estimated using SPT N-value and unconfined compression strength, qu . In the case of Pleistocene sand layers, the existence of true cohesion cd , due possibly to cementation, was confirmed, so that it can be effectively considered in the design. 3) A rational ground parameter evaluation method was investigated via statistics based on triaxial compression test data of various soil types. 4) Soil layers were classified into several categories based on their fines content and plasticity index, and ground parameters were assigned for each categorized zone. 5) The deformation of earth-retaining walls during excavation was compared between the computed values using different sets of ground parameters and the measured value from trial construction. As a result, it was found that the ground parameters obtained following the procedures proposed in this study are sufficiently conservative, and they can provide great economical effect.
1 INTRODUCTION

1.1 Scope of project
Tokyo Outer Ring Expressway is an important trunk road that circularly connects the areas located about 15 km from the center of Tokyo (see Fig. 1). The section in the vicinity of the Tokyo–Chiba boundary, which extends for about 12.1 km, is under construction.
Urbanization is advanced along the route. For approximately 9.5 km, National Route 298 runs on the surface for public use, while the Tokyo Outer Ring Expressway itself runs below ground. The expressway makes use of semi-underground ditch structures, most of which will be constructed by the open-cut method using earth-retaining walls (see Fig. 2).
1.2 Project issues

The ground of the construction site is composed of Holocene deposits, Pleistocene deposits and alternating layers of sand and cohesive soil. A confined aquifer with a pressure head near the surface is present, so the ground conditions are relatively unfavorable. Under these environmental and construction constraints, measures for the ground excavation, such as restraining subsidence of the surrounding ground and preventing heave of the excavation bottom, are necessary. Temporary earth-retaining walls are planned for the open-cut method. In addition to the weakness of the ground, the width and depth of the excavation are large, so large-scale bracing is required. Various studies are being carried out with the goal of reducing project costs, one of which is the reduction of the temporary earth-retaining walls. One measure for constructing earth-retaining walls economically is shortening the design embedment length. In particular, in the design of temporary earth-retaining walls, the ground parameter settings greatly influence both the active and passive earth pressures and the stability of the excavation base. Therefore, the proper evaluation of soil survey results when setting ground parameters greatly contributes to the economy of the project. This report presents the results of a study on rationalizing the design of earth-retaining walls by focusing on the ground parameters (cohesion c, angle of shearing resistance φ, and modulus of deformation E) and on evaluating the natural ground appropriately while confirming its safety.

2 BACKGROUND OF STUDY

2.1 Characteristics of section

Figure 3 shows the soil profile in vertical section. Its characteristics are as follows: (1) the section consists of alternating layers of various strata; (2) soft alluvial deposits extend up to the vicinity of the structure's deck; and (3) the groundwater level is high, with a confined water head rising near the surface.

Figure 1. Location map.

Figure 2. Typical ditch structure cross-section.

Figure 3. Soil components in vertical section.

2.2 Conventional method for setting ground parameters

Ground parameters can be estimated from laboratory soil tests as well as from the N-values of standard penetration tests. In most designs, the shearing resistance angle of cohesive soil and the cohesion of sandy soil are not considered; instead of the triaxial compression test results, safe-side values such as φ = 0 for cohesive soil and c = 0 for sandy soil are applied. The ground is divided into five zones, and the parameters are set as shown in Table 1. The zones that characterize the soil strata are the plateau, the sea-erosion platform in the north, the sand dune, the sea-erosion platform in the south and the underground trough. (1) The cohesion of cohesive soil is taken as the average of the triaxial compression test results obtained from boring samples; the cohesion of sandy soil is assumed to be zero. (2) The angle of shearing resistance of sandy soil is likewise taken as the average of the triaxial compression test results; that of cohesive soil is assumed to be zero. (3) The modulus of deformation of both cohesive and sandy soil is taken as the product of a constant and the N-value (28N).

Table 1. Conventional method for setting ground parameters.

Soil          | Cohesion c                | Angle of shear resistance φ | Modulus of deformation E
Sandy soil    | –                         | Triaxial compression test   | Standard penetration test (28N)
Cohesive soil | Triaxial compression test | –                           | Standard penetration test (28N)

Although the ground parameters set according to the above method provide a safe design in terms of strength, there is room for improving the economy. In order to confirm that this tendency exists, the actual behavior of earth-retaining walls was investigated. A test section of the open-cut method (approximately 90 m long) in the sea-erosion platform in the north is under construction for the purposes of verifying the construction method, understanding possible problems during construction, and applying the findings to the actual design and construction. The displacement of the earth-retaining walls is measured and compared with the displacement calculated using the conventional ground parameters (see Table 2) set according to the method mentioned earlier (see Fig. 4). The calculated displacement according to the conventional method exceeds the measured value. It is thus confirmed that the ground parameters used in the design are sufficiently safe values. Therefore, the authors believe that it would be economical to establish new values for the ground parameters. Economical values are suggested herein by studying ground parameters that are close to those of the natural ground and confirming that the results remain safe.

Table 2. Ground parameters from the test site.

Soil type | c (kN/m²) | φ (°) | E (MN/m²)
Ap        | 20.0      | 0.0   | 6.8
Ac2       | 25.0      | 0.0   | 8.0
As2       | 0.0       | 35.0  | 9.2
Dc2       | 125.0     | 0.0   | 80.0
Ds2u      | 0.0       | 35.0  | 124.0

3 COHESION OF PLEISTOCENE SAND

3.1 Composition of the soil

This study provides ground parameters to be used in design and construction. In order to obtain practical values, it focuses on the soil composition and on the parameters that affect the design of earth-retaining walls. It was found that the design results are sensitive to the cohesion of six Pleistocene layers (i.e., Ds1, Dc1, Ds2, Ds2u, Dc2, Ds2l) near the level of the structure's floor.

3.2 Triaxial compression test results

Although cohesion is generally ignored in sandy soil, there are Pleistocene sand layers that exhibit cohesion. The triaxial compression test results gathered from the site were re-examined in order to judge whether cohesion could be considered. Cohesion is found from a failure envelope fitted by the least-squares method to the Mohr circles obtained from the triaxial compression tests. As an example, the triaxial compression test result under the CD (consolidated drained) condition for the test site's Ds2u layer is presented in Fig. 5.
Figure 4. Comparison of earth-retaining wall displacement at the final excavation (measured values and values calculated using the conventional method; deformation (mm) vs. depth (m), with the first to fourth strut levels and the excavation depth GL −15.4 m indicated).

Figure 5. Triaxial compression test result of the Ds2u layer from the test site.

The results reveal that, although cohesion is ignored in most cases involving sandy soil, it is confirmed at this site (c = 27.8 kN/m²). The Pleistocene sands Ds1, Ds2u and Ds2l showed similar results.

3.3 Unconfined compression test results

Figure 6 shows the unconfined compression test results for the test site's Ds2u layer. Unconfined compressive strength was confirmed in both saturated and partially saturated samples. This result also implies that cohesion exists in the sandy soil of the test site.

Figure 6. Unconfined compression test results of the test site's Ds2u layer.

4 RESETTING OF ZONES

4.1 Reclassification of the stratum

The stratum classification of the test site is rearranged into nine zones based on the ground's formation process and particle size distribution (see Fig. 7).

4.2 Reclassification of soil composition

In addition to cohesive soil and sandy soil, intermediate soil, which exhibits characteristics of both cohesive soil and sandy soil, is taken into consideration in the classification. Soil is classified according to its sand content, fines content and plasticity index. Basically, cohesive soil is defined as having a sand content of less than 50%, a fines content of 50% or more and a plasticity index of 30 or more; intermediate soil is defined as having a sand content of 50% to 80%, a fines content of 20% to 50% and a plasticity index of less than 30; and sandy soil is defined as having a sand content of 80% or more and a fines content of less than 20% (see Fig. 8).

5 STATISTICAL APPLICATION OF TRIAXIAL COMPRESSION TEST RESULTS (COHESION)

Usually, ground parameters are set using the average of the triaxial compression test results of boring samples, excluding outliers. However, a statistical treatment that considers the ground variability and its spatial dispersion is required. Boring sampling can be done before excavation; thus, many tests were conducted and a large amount of data was accumulated (approximately 140 UU (unconsolidated undrained) tests and 190 CD (consolidated drained) tests). This study proposes new ground parameters based on these data.
Figure 7. New soil composition.
Figure 8. Soil component classification method.
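The classification rules of Section 4.2 can be expressed as a small decision function. The sketch below uses the thresholds quoted above; the fallback label for profiles matching none of the three definitions is our assumption, since the paper does not define one.

def classify_soil(sand_pct, fines_pct, pi):
    # Classify a sample per Section 4.2 (sand content %, fines content %, PI).
    if sand_pct < 50 and fines_pct >= 50 and pi >= 30:
        return "cohesive"
    if 50 <= sand_pct <= 80 and 20 <= fines_pct <= 50 and pi < 30:
        return "intermediate"
    if sand_pct >= 80 and fines_pct < 20:
        return "sandy"
    return "unclassified"  # assumption: no fallback is defined in the paper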
The new design value for cohesion is calculated according to Equation 1, using the mean (m) and standard deviation (σ) of the data.
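As a hedged sketch of the statistical treatment described, the snippet below assumes a design value of the form m − k·σ; the factor k is a placeholder standing in for the specific reduction prescribed by Equation 1, and is not taken from the paper.

import numpy as np

def design_cohesion(c_tests, k=0.5, passive_factor=0.5):
    # Design cohesion from triaxial tests on boring samples, assuming a
    # design value of the form m - k*sigma (k is a placeholder for Eq. 1).
    # The passive-side value is halved, cf. Section 8.
    c = np.asarray(c_tests, dtype=float)
    m, s = c.mean(), c.std(ddof=1)
    c_design = m - k * s
    return c_design, passive_factor * c_design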
Figure 9 shows an example of setting the design value.

6 COMPARISON OF TRIAXIAL COMPRESSION TEST VALUES OF BLOCK SAMPLING AND DESIGN VALUES
The triaxial compression test results of block samples taken from the site are compared with the calculated design values mentioned earlier. High-quality undisturbed samples can be collected by block sampling because the soil is cut and taken from the ground in clumps. The test results obtained in this way are believed to evaluate the ground cohesion more accurately.
Figure 9. Example of design value (Ds2u(s)).
Figure 10 shows the model of the soil composition in vertical section at the test site. In this study, Dc1 is divided into two layers, cohesive soil and sandy soil, based on the results of grain size analyses. Block sampling is conducted for the Dc1, Dc1 (sand) and Ds2u soils. A comparison of the statistical values
computed according to Equation 1 from the triaxial compression tests on boring samples with the triaxial compression test results on block samples is shown in Figures 11 and 12. Figure 11 shows the results for the sandy soil of Dc1. Triaxial compression tests were conducted under the CD and CU (consolidated undrained) conditions, since sandy soil is under consideration. The results reveal that the statistical values from boring sampling and the test values from block sampling were close. Figure 12 shows the results for the intermediate soil Ds2u. Triaxial compression tests were conducted under all conditions (i.e., UU, CD and CU), since intermediate soil exhibits characteristics of both cohesive soil and sandy soil. The results reveal that all test values from block sampling exceed the statistical values from boring sampling. Therefore, it is considered that the statistical values from boring sampling produce safer design results than the test values from block sampling.
7 SHEARING RESISTANCE ANGLE AND DEFORMATION MODULUS

Based on the results of the sensitivity analysis, the shearing resistance angle and the deformation modulus have no significant effect on the design. Their new values are set following procedures similar to those employed for setting the cohesion. It is assumed that the same factors as used for setting the cohesion can be considered in the soil stratum classification and the soil composition classification. The shearing resistance angle and deformation modulus for each soil are set as follows.

7.1 Shearing resistance angle

For sandy soil and intermediate soil, the statistical treatment of the triaxial compression tests on boring samples is applied using Equation 1. For cohesive soil, φ = 0 is assumed, since its sensitivity in earth-retaining wall design is small.

7.2 Deformation modulus

The statistical treatment of the triaxial compression test results of boring samples using Equation 1 is applied for all soil types. Although the deformation modulus can also be determined using N-values and unconfined compression test results, in this study it is set using triaxial compression tests, in the same manner as the cohesion and the shearing resistance angle.
Figure 10. Soil components in vertical section of site.
8 EVALUATION OF GROUND STRENGTH DEGRADATION
The ground inside the earth-retaining walls is excavated in order to build the structure (see Fig. 13). As excavation continues, the overburden pressure is unloaded, the floor swells and eventually the ground strength is reduced. Swelling tests were conducted in order to determine the strength reduction of the ground. In this test, a CU test is executed after leaving the saturated sample to swell. The degree of strength reduction is studied by comparing the initial strength, in terms of the peak deviator stress (q0) obtained from the CU test of the non-swelled sample, with those (q) of the swelling test results. The relationship between the normalized strength and the elapsed time is shown in Figures 14 and 15 for cohesive soil and sandy soil, respectively. With respect to the initial strength, the cohesive soil reduced its strength to approximately 50% after one to two weeks. On the other hand, the sandy soil showed almost no strength reduction. Therefore, the passive shearing resistance of the cohesive soil (excavation side) reduces with elapsed time. Here the shearing strength is equal to the cohesion, since the shearing resistance angle is not considered. The cohesion used in design is set to 50% of the statistical value, since the normalized shearing strength converges to 50%. The statistical value for sandy soil is not reduced, since no significant strength reduction was found. For the safe design of intermediate soil, the cohesion is reduced to 50%, as for cohesive soil.

Figure 11. Comparison of block sampling and boring sampling of sandy layer Dc1(C).

Figure 12. Comparison of block sampling and boring sampling of intermediate layer Ds2u(I).

Figure 13. Illustration of ground excavation.

Figure 14. Strength reduction of Dc1 soil.

Figure 15. Strength reduction of Ds2u soil.

9 ESTABLISHMENT OF NEW GROUND PARAMETERS

Based on the above results, new ground parameters in the vicinity of the structure's floor are set as follows. 1) Ground parameters are set according to the results of triaxial compression tests on boring samples. 2) Soil is classified into three types, i.e. cohesive soil, intermediate soil and sandy soil. 3) The cohesion and deformation modulus of cohesive soil use the values calculated according to Equation 1. In this case, the cohesion on the passive side is assumed to be 50% of that on the active side, since the cohesion decreases with time after excavation; the shearing resistance angle is not considered. 4) The cohesion and deformation modulus of intermediate soil use the values calculated according to Equation 1. In this case, the cohesion is not reduced on the passive side. 5) The cohesion, shearing resistance angle and deformation modulus of sandy soil use the values calculated according to Equation 1. In this case, the cohesion is not reduced on the passive side.

The actual displacement of the test site's earth-retaining walls and the displacement estimated using the new ground parameters are compared in Fig. 16. Both displacement modes are close and almost similar in behavior. However, at the top of the earth-retaining walls, the displacement according to the new ground parameters is on the safe side. Therefore, it can be said that the new ground parameters properly evaluate the natural ground conditions and secure safe results.

Figure 16. Comparison of earth-retaining wall displacement (actual values vs. values estimated with the new ground parameters).

10 CONCLUSIONS

The following conclusions are drawn from the study on the cohesion evaluation method for design using the statistical treatment of triaxial compression test results from boring samples and block samples. 1) The existence of cohesion in Pleistocene sand was identified from the tests conducted herein. 2) The design value is set using the mean and standard deviation of the triaxial compression test data taken from several boring samples. 3) The cohesion on the passive side is reduced to 50% for safe design, considering the reduction of ground strength with time after excavation. A comparative study of the actual behavior of the test site's earth-retaining walls and the calculated design values showed that the two almost coincided, and the calculated value is on the safe side since it exceeds the measured value. 4) A detailed soil classification was established using fines content and plasticity index, and corresponding ground parameters were suggested. The results of a trial design of earth-retaining walls using the suggested ground parameters imply that the penetration length and steel quantity, and hence the construction cost of the earth-retaining walls, can be reduced by approximately 20%.
REFERENCES

Japan Road Association. 1993. Road earthwork: Guide on temporary structures engineering (in Japanese).
The Japanese Geotechnical Society. 1992. Geotechnical notes 2: Intermediate soils, sand or clay (in Japanese).
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
System reliability of slopes for circular slip surfaces

Jianye Ching
National Taiwan University, Taipei, Taiwan
Yu-Gang Hu National Taiwan University of Science and Technology, Taipei, Taiwan
Kok-Kwang Phoon National University of Singapore, Singapore
ABSTRACT: Evaluating the reliability of a slope is a challenging task because the possible slip surface is not known beforehand. Approximate methods via the first-order reliability method (FORM) provide efficient ways of evaluating the failure probability of the "most probable" failure surface. The tradeoff is that the failure probability estimates may be biased towards the unconservative side. Monte Carlo simulation (MCS) is a viable unbiased way of estimating the failure probability of a slope, but MCS is inefficient for problems with small failure probabilities. This study proposes a novel way of estimating slope reliability, based on the importance sampling technique, that is unbiased and yet much more efficient than MCS. In particular, the critical issue of the specification of the importance sampling probability density function is addressed in detail. Several numerical examples are investigated to verify the proposed approach. The simplified Bishop method of slices is taken as the slope stability method for these examples.

1 INTRODUCTION

Slopes with a nominal safety factor of more than 1 are not necessarily safe, because of the underlying geotechnical variabilities and the inherent uncertainties in the predictive methods. Traditionally, this limitation is well recognized but handled by empirical prescriptive rules, such as imposing a factor of safety of 1.2 for temporary slopes and 1.5 for permanent slopes. However, the prescribed factors of safety do not change regardless of the degree of variability and/or uncertainty in the problem. Reliability, namely one minus the failure probability, mitigates this limitation. Over the years, analysis methods have been proposed to evaluate the reliability of slopes, e.g. Vanmarcke (1976), Chowdhury and Xu (1993), Christian et al. (1994), Low and Tang (1997), Low et al. (1998), Bhattacharya et al. (2003) and Griffiths and Fenton (2004). However, evaluating the reliability of a slope is not an easy task, primarily due to the lack of knowledge of the failure surface. A standard procedure for evaluating the reliability of a slope is Monte Carlo simulation (MCS) (Rubinstein 1981; Ang and Tang 1984); this approach was taken by Griffiths and Fenton (2004), where the MCS method is implemented with a random field model for the spatial distribution of shear strengths. The procedure for MCS is straightforward. Let Z denote all the uncertain variables in the slope of interest. Without loss of generality, let us assume Z is independent standard Gaussian; in the case that Z is not standard Gaussian, proper transformations can be taken to convert the problem into the standard Gaussian input space. Monte Carlo simulation contains the following steps:
a. Draw Z samples {zi: i = 1, ..., N} from the standard Gaussian PDF.
b. For each sample zi, conduct a deterministic slope stability analysis to find the most critical slip surface among all trial surfaces. If the safety factor of the most critical surface is less than 1, the entire slope is considered to fail for that zi sample.
c. Repeat Step b for i = 1, ..., N. The average of the failure indicators is then an estimate of the failure probability of the slope.

Mathematically, the MCS procedure can be summarized by the following equation:

$$P_F^{MCS} = \frac{1}{N}\sum_{i=1}^{N} I\!\left[\min_{\omega\in\Omega} FS_\omega(z_i) < 1\right] \approx P(F) \qquad (1)$$

where F is the failure event of the entire slope; I[·] is the indicator function; Ω is the set of all trial slip surfaces; ω is one of the trial surfaces; and FSω is the safety factor of that trial surface, which is clearly a function of Z. Another way of interpreting the overall failure probability in (1) is as follows: let Fω denote the event FSω(Z) < 1, i.e. the failure event of the trial surface ω. The overall failure event F is then simply the union of all the individual failure events, ∪ω∈Ω Fω. This interpretation is graphically depicted in Figure 1. The overall failure probability is therefore the volume under f(z) (the standard Gaussian PDF) within the F region.
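As an illustration of steps a–c and Eq. (1), the following minimal Python sketch estimates P(F) given a user-supplied routine min_fs(z) that returns the smallest safety factor over all trial surfaces for one realization z; the function name and interface are assumptions introduced for illustration, not part of the paper.

import numpy as np

def mcs_failure_probability(min_fs, d, N=1000, seed=0):
    # Crude Monte Carlo estimate of P(F), cf. Eq. (1).
    # min_fs(z): smallest FS over all trial slip surfaces for inputs z.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((N, d))                    # step a
    fails = np.array([min_fs(zi) < 1.0 for zi in z])   # steps b-c
    pf = fails.mean()
    # c.o.v. of the estimator (see Eq. (2) below)
    cov = np.sqrt((1.0 - pf) / (N * pf)) if pf > 0 else float("inf")
    return pf, cov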
What is missing in the literature is a technique that can provide an unbiased estimate of the actual overall failure probability and yet only requires a small number of repetitive deterministic slope stability analyses. The purpose of this study is to demonstrate that it is possible to implement the importance sampling (IS) technique (Rubinstein 1981) to achieve this. Moreover, it is shown by examples that the c.o.v. of the failure probability estimator made by the IS technique can be as small as 0.2 with only 100 deterministic slope stability analyses for a practical range of failure probabilities. In this paper, the IS technique is discussed in the context of circular trial surfaces and methods of slices, although its use is obviously not limited to this scenario. The limitation of the IS technique will also be addressed.

Figure 1. Illustration of various failure regions in the standard Gaussian space.
The MCS method provides an unbiased estimate of the actual failure probability. However, it can be very time consuming, especially for slopes with small failure probabilities. This is because the coefficient of variation (c.o.v.; standard deviation divided by mean value) of the MCS estimator P_F^MCS is equal to

$$\delta\!\left(P_F^{MCS}\right) = \sqrt{\frac{1 - P(F)}{N \cdot P(F)}} \qquad (2)$$

where δ(·) denotes the c.o.v. Therefore, in order to make the c.o.v. of the MCS estimator as small as 30%, roughly 10/P(F) MCS samples, i.e. 10/P(F) deterministic slope stability analyses, are required; for instance, a slope with P(F) = 0.001 would require about 10,000 deterministic analyses.

A more efficient method based on the first-order reliability method (FORM) was proposed to estimate the failure probability. The idea is to solve the following optimization problem:

$$(z^*, \omega^*) = \arg\min_{z,\;\omega\in\Omega} \|z\| \quad \text{subject to } FS_\omega(z) \le 1 \qquad (3)$$

Note that the slip surface variable ω is augmented into the original FORM optimization problem. Let the solution of the optimization problem be z* and ω*; then Φ(−‖z*‖) is the estimated failure probability and ω* is the "most probable" slip surface. This approach was taken by Low and Tang (1997) and Low et al. (1998) for the evaluation of slope failure probability. The FORM technique is efficient and convenient because the repetitive deterministic slope stability analyses required by MCS are not needed. However, the tradeoff is that FORM may provide biased and unconservative estimates of the actual overall failure probability. This can be seen in Figure 1, where the thick curve indicates the limit state function of the most probable slip surface ω*. It is clear that the volume under f(z) within the union region ∪ω∈Ω Fω (the actual overall failure probability) is always greater than or equal to that within the region Fω* (the FORM-estimated failure probability), because the latter is a subset of the former.
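A sketch of the augmented FORM search in Eq. (3) is given below, looping over a finite set of candidate surfaces with a generic constrained optimizer; the scipy-based inner solver and the fs(z, w) interface are illustrative assumptions, and this is not the genetic-algorithm search the paper itself adopts later.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def form_estimate(fs, d, surfaces):
    # Solve Eq. (3) approximately: min ||z|| s.t. FS_omega(z) <= 1,
    # searching over a finite list of candidate slip surfaces.
    # fs(z, w): safety factor of candidate surface w for inputs z.
    best_beta, best_z, best_w = np.inf, None, None
    for w in surfaces:
        res = minimize(lambda z: z @ z, np.zeros(d),
                       constraints=[{"type": "ineq",
                                     "fun": lambda z, w=w: 1.0 - fs(z, w)}])
        if res.success and np.sqrt(res.fun) < best_beta:
            best_beta, best_z, best_w = np.sqrt(res.fun), res.x, w
    return norm.cdf(-best_beta), best_z, best_w  # Phi(-||z*||), z*, omega*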
2 METHODS OF SLICES

Some popular methods of determining the safety factor for a given trial slip surface and fixed soil parameters are briefly reviewed here. This presentation does not aim to give a complete review of slope stability methods, but only enough background for the forthcoming presentation of the importance sampling technique. Moreover, the presentation is limited to methods of slices with circular trial surfaces, although the importance sampling technique may apply to general methods and non-circular trial surfaces.

2.1 Ordinary method of slices

As shown in Figure 2, a circular trial surface is under consideration, and the soil parameters Z are fixed at some values (e.g. for MCS, they are fixed at their sampled values for each deterministic slope stability analysis). The goal is to determine the safety factor of a predefined slip surface indexed by ω. A simplified method that assumes no interacting forces between slices can be used to compute the safety factor:

$$FS_\omega = \frac{\sum_k\left[c_k l_k + (W_k\cos\alpha_k - u_k l_k)\tan\phi_k\right]}{\sum_k W_k\sin\alpha_k} \qquad (4)$$
where ck and φk are the cohesion and friction angle at the base of the k-th slice; Wk is the total weight of the k-th slice; uk is the pore water pressure at the middle point of the slice bottom; lk is the length of the slice bottom; and αk is the inclination angle of the slice bottom. Equation (4) is obtained from the equilibrium equation of the overall moment of all slices about a chosen point. This method of determining the safety factor is called the ordinary method of slices (OMS) (Fellenius 1936). The OMS is the simplest method among all methods of slices. It does not satisfy force and moment equilibrium of individual slices, and it usually provides conservative estimates of safety factors. It is also the only method of slices that does not require iterative calculations to obtain the safety factor. In practical applications, many trial surfaces are randomly generated, and the trial surface with the smallest safety factor is the critical slip surface. If the safety factor of the critical slip surface is less than 1, the entire slope is then considered unstable, or to have failed.

2.2 Simplified Bishop method of slices

The simplified Bishop method of slices (SBMS) (Bishop 1955) makes a different assumption, namely that all inter-slice forces are horizontal. Equilibrium of the vertical forces in all slices and of the overall moment gives the following expression for the safety factor:

$$FS_\omega = \frac{\sum_k\left[c_k b_k + (W_k - u_k b_k)\tan\phi_k\right]/m_{\alpha_k}}{\sum_k W_k\sin\alpha_k}, \qquad m_{\alpha_k} = \cos\alpha_k\left(1 + \frac{\tan\alpha_k\tan\phi_k}{FS_\omega}\right) \qquad (5)$$

where bk = lk cos αk is the width of the k-th slice. The safety factor FSω can be found by iteratively solving (5). Again, many trial surfaces are randomly generated, and the trial surface with the smallest safety factor is the critical slip surface. If the safety factor of the critical slip surface is less than 1, the entire slope is considered to have failed.

2.3 Transformation to standard Gaussian space

Previously, it was assumed that all uncertain variables are transformed to the standard Gaussian input space for ease of presentation. To demonstrate how the expressions for the safety factor can be transformed into the standard Gaussian space, consider the example in Figure 2, where the trial surface is ω, both soil layers are homogeneous, and the c, φ and γ parameters of both layers are uncertain and independent. Note that the first few slices share the same shear strengths and unit weights, and similarly for the last few slices. In the case that the OMS is taken, the safety factor can be expressed as:

$$FS_\omega = \frac{\sum_{n=1}^{2}\sum_{k_n}\left[c_n l_k + (\gamma_n h_k b_k\cos\alpha_k - u_k l_k)\,\psi_n\right]}{\sum_{n=1}^{2}\sum_{k_n}\gamma_n h_k b_k\sin\alpha_k} \qquad (6)$$

where cn, ψn = tan(φn) and γn are the cohesion, the tangent of the friction angle and the (average) unit weight of the n-th soil layer; the summation over the "kn" index (n = 1 or 2) is over the slices whose bases are within the n-th soil layer; and hk is the average height of the k-th slice. We assume without loss of generality that the slice bases lie fully within one soil layer; this is always achievable by appropriate division of the slices. Equation (6) can be transformed into the standard Gaussian space by a proper transformation, e.g. Rosenblatt's transformation (Rosenblatt 1952). The transformed safety factor will be denoted by FS(Z), and the standard Gaussian random variable corresponding to c1 will be denoted by Zc1, and similarly for the other random variables.

Figure 2. Illustrative slope example.
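The two safety-factor expressions above, Eqs. (4) and (5), can be coded per slice. The sketch below implements Eq. (4) directly and solves Eq. (5) by fixed-point iteration with the OMS value as the starting point; the per-slice array interface is a standard-practice assumption, not the paper's implementation.

import numpy as np

def fs_oms(c, phi, W, u, l, alpha):
    # Ordinary method of slices, Eq. (4); all arguments are per-slice arrays.
    num = np.sum(c * l + (W * np.cos(alpha) - u * l) * np.tan(phi))
    return num / np.sum(W * np.sin(alpha))

def fs_bishop(c, phi, W, u, b, alpha, tol=1e-8, max_iter=100):
    # Simplified Bishop method, Eq. (5); b is the slice width, l = b/cos(alpha).
    fs = fs_oms(c, phi, W, u, b / np.cos(alpha), alpha)  # non-iterative start
    for _ in range(max_iter):
        m = np.cos(alpha) * (1.0 + np.tan(alpha) * np.tan(phi) / fs)
        fs_new = np.sum((c * b + (W - u * b) * np.tan(phi)) / m) / \
                 np.sum(W * np.sin(alpha))
        if abs(fs_new - fs) < tol:
            break
        fs = fs_new
    return fs_new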
2.4 Approximate location of the design point of a trial surface

Given the trial surface ω, an approximate way of finding its design point, denoted by z̄ω, is proposed. Finding z̄ω is highly non-trivial because most slope stability analysis methods are iterative, e.g. the simplified Bishop method. Usually, an optimization problem similar to Eq. (3) needs to be solved (with ω fixed at the prescribed trial surface) in order to find z̄ω. Among the slope stability analysis methods, the OMS is the only non-iterative one. Therefore, it is proposed in this research to implement the OMS to find the approximate location of the design point z̄ω. One can rewrite Eq. (4) as the following OMS limit state equation:

$$g_\omega^{OMS} = \sum_{n=1}^{2}\sum_{k_n}\left[c_n l_k + (b_n h_k\cos\alpha_k - u_k l_k)\,\psi_n\right] - \sum_{n=1}^{2}\sum_{k_n} b_n h_k\sin\alpha_k \qquad (7)$$

where g^OMS is the OMS limit state function for the failure event of the trial slip surface ω: if g^OMS < 0, the surface fails under the OMS criterion, and vice versa. The crucial assumptions here are: (a) the geometry of the critical slip surface does not change, although the soil parameters are allowed to vary, and (b) b1 and b2 are constants given by the respective mean unit weights. These assumptions are obviously erroneous, but we are only looking for an approximate location of the actual design point z̄ω. Let us consider the soil parameters c1, c2, ψ1, ψ2, γ1, γ2 to be uncertain. Note that if c1, c2, ψ1, ψ2, γ1 and γ2 are independent Gaussian, the limit state function in the standard Gaussian space can be approximated by substituting cn = µcn + σcn Zcn and ψn = µψn + σψn Zψn into Eq. (7):

$$g_\omega^{OMS}(Z) \approx \sum_{n=1}^{2}\sum_{k_n}\left[(\mu_{c_n}+\sigma_{c_n}Z_{c_n})\,l_k + (b_n h_k\cos\alpha_k - u_k l_k)(\mu_{\psi_n}+\sigma_{\psi_n}Z_{\psi_n})\right] - \sum_{n=1}^{2}\sum_{k_n} b_n h_k\sin\alpha_k \qquad (8)$$

where µ and σ are the mean value and standard deviation of the subscripted variable. By doing so, Eq. (8) is a linear function of the standard Gaussian input Z; writing it as g_ω^OMS(Z) = w0 + wᵀZ, it is soon clear that the OMS design point in the standard Gaussian space is

$$\bar z_\omega^{OMS} = -\beta_\omega^{OMS}\,\frac{w}{\|w\|} \qquad (9)$$
where

$$\beta_\omega^{OMS} = \frac{w_0}{\|w\|} \qquad (10)$$

is the OMS reliability index of the trial surface ω. Note that the derivations of the design point in Eq. (9) are based on the following assumptions: (a) c1, c2, ψ1, ψ2, γ1 and γ2 are independent Gaussian, and (b) the OMS is the adopted method of slices. These assumptions are taken because the resulting approximate solution z̄ω^OMS of the actual design point z̄ω then has a simple analytical form. Clearly, such an approximate design point z̄ω^OMS is in general not the actual design point z̄ω, because in the actual application those assumptions may not be true, i.e. the actual adopted slope stability method may not be the OMS and the soil parameters may not be independent Gaussian. It is also not equal to z̄ω even under assumptions (a) and (b), because Eq. (8) is erroneous. Nonetheless, it is found empirically that the approximate design point z̄ω^OMS is usually reasonably close to the actual design point z̄ω even when the two assumptions are violated in the actual application. In the end, the IS method provides unbiased failure probability estimates regardless of the accuracy of the design point.
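With the linearized limit state written as g(Z) ≈ w0 + wᵀZ (the w-notation is introduced above only to keep Eq. (8) generic), Eqs. (9) and (10) reduce to a few lines:

import numpy as np

def oms_design_point(w0, w):
    # Design point and reliability index of a linear limit state
    # g(Z) = w0 + w.Z in standard Gaussian space, cf. Eqs. (9)-(10).
    norm_w = np.linalg.norm(w)
    beta = w0 / norm_w             # Eq. (10)
    z_bar = -beta * w / norm_w     # Eq. (9): nearest point on g(z) = 0
    return z_bar, beta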
3 IMPORTANCE SAMPLING TECHNIQUE

As mentioned earlier, the reliability of a slope is not component-wise but system-wise. Therefore, finding the location of the design point of a single slip surface does not suffice to determine the system reliability of a slope. Although MCS is a viable method, it is computationally inefficient. This is primarily due to the fact that the high probability-density region of f(z), i.e. the standard Gaussian PDF, is quite far from the failure region F = ∪ω∈Ω Fω, especially when the actual failure probability is small. The practical consequence is that MCS requires many samples before a failure sample is obtained. The basic idea of the proposed importance sampling method is to adopt a so-called importance sampling PDF (IS PDF) q(z) whose high probability-density region is closer to the failure region F. In this study, the following mixture of M Gaussian PDFs is taken as the IS PDF:

$$q(z) = \sum_{j=1}^{M} w_j\,\phi_d\!\left(z - \bar z_{\omega_j}^{OMS}\right) \qquad (11)$$

where ωj is the j-th slip surface among the M "representative slip surfaces"; d is the dimension of z; φd(·) is the d-dimensional standard Gaussian PDF, so that the center of the j-th Gaussian PDF is taken to be the approximate design point z̄ωj^OMS of the j-th representative slip surface; and wj is the weight of the j-th Gaussian PDF. As discussed later, these M representative slip surfaces can be determined by using the OMS, although the adopted stability method for defining failure may not be the OMS. Since the OMS is non-iterative, the determination of these M representative slip surfaces does not require much computation. The determination of the M representative slip surfaces and their weights will be discussed later in detail. The principle here is to choose the M representative slip surfaces such that the high probability-density region of q(z) is close to the failure region F. The IS technique is based on the following observation:

$$P(F) = E_f\!\left[I\!\left(\min_{\omega\in\Omega} FS_\omega(Z) < 1\right)\right] = E_q\!\left[I\!\left(\min_{\omega\in\Omega} FS_\omega(Z) < 1\right)\frac{f(Z)}{q(Z)}\right] \qquad (12)$$

where Ef and Eq denote the expectations with respect to the PDFs f(z) and q(z), respectively. Therefore, with {zi: i = 1, ..., N} independent samples from q(z), the IS simulation contains the following steps:

a. Draw Z samples {zi: i = 1, ..., N} from q(z).
b. For each sample zi, conduct a deterministic slope stability analysis to find the critical slip surface among all trial surfaces. If the safety factor of the most critical surface is less than 1, the slope is considered to fail for that zi sample.
c. Repeat Step b for i = 1, ..., N. The IS estimate of the overall failure probability is simply

$$P_F^{IS} = \frac{1}{N}\sum_{i=1}^{N} I\!\left[\min_{\omega\in\Omega} FS_\omega(z_i) < 1\right]\frac{f(z_i)}{q(z_i)} \qquad (13)$$

Note that P_F^IS is an unbiased estimator of the actual overall failure probability regardless of the choice of {(ωj, wj): j = 1, ..., M}. More importantly, if {(ωj, wj): j = 1, ..., M} is carefully chosen so that the high probability-density region of q(z) is close to the failure region F = ∪ω∈Ω Fω, the c.o.v. of P_F^IS can be quite small. The c.o.v. of the estimator P_F^IS can be estimated from the same samples as

$$\delta\!\left(P_F^{IS}\right) = \frac{1}{P_F^{IS}}\sqrt{\frac{1}{N(N-1)}\sum_{i=1}^{N}\left(I\!\left[\min_{\omega\in\Omega} FS_\omega(z_i) < 1\right]\frac{f(z_i)}{q(z_i)} - P_F^{IS}\right)^2} \qquad (14)$$
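A sketch of the estimator in Eqs. (11)–(14) follows; q(z) is the Gaussian mixture of Eq. (11) with unit covariances, and min_fs plays the same role as in the MCS sketch above. The helper names and interface are assumptions introduced for illustration.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def is_failure_probability(min_fs, centers, weights, N=100, seed=0):
    # Importance-sampling estimate of P(F), Eq. (13), with the
    # mixture IS PDF of Eq. (11) and the c.o.v. estimate of Eq. (14).
    rng = np.random.default_rng(seed)
    centers = np.atleast_2d(centers)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize the mixture
    M, d = centers.shape
    comp = rng.choice(M, size=N, p=w)                 # pick mixture components
    z = centers[comp] + rng.standard_normal((N, d))   # draw z ~ q(z)
    f = mvn(mean=np.zeros(d)).pdf(z)                  # target PDF f(z)
    q = sum(wj * mvn(mean=cj).pdf(z) for wj, cj in zip(w, centers))
    ind = np.array([min_fs(zi) < 1.0 for zi in z])    # failure indicators
    r = np.where(ind, f / q, 0.0)                     # I[...] * f/q
    pf = r.mean()
    cov = r.std(ddof=1) / (np.sqrt(N) * pf) if pf > 0 else float("inf")
    return pf, cov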
3.1 Choice of {(ωj, wj): j = 1, ..., M}

Although P_F^IS is unbiased regardless of the choice of the representative slip surfaces and their weights {(ωj, wj): j = 1, ..., M}, the choice may affect the efficiency of the IS estimator. An ideal choice is such that the high probability-density region of q(z) covers the failure region with high probability content, i.e. covers all failure modes. For instance, if there is only one failure mode in the slope of interest, a sensible choice is to let M equal 1, ω1 be the most probable failure surface and w1 equal 1.0, i.e. the IS PDF q(z) is simply a Gaussian PDF centered at the design point of the most probable failure surface. In the case where there are two failure modes, it is preferable to let M equal 2, ω1 and ω2 be the two most probable failure surfaces within the two
failure modes, and w1 and w2 equal the failure probabilities of ω1 and ω2, respectively, i.e. the IS PDF q(z) is the mixture of two Gaussian PDFs weighted by the failure probabilities of ω1 and ω2 and centered at the design points of ω1 and ω2. However, finding design points is itself a challenging task, especially when an iterative stability method (e.g. simplified Bishop) is adopted to define failure. Nevertheless, the IS method is unbiased regardless of the choice of {(ωj, wj): j = 1, ..., M}. This implies that rough estimates of the design points may suffice. The key idea in this paper is therefore to obtain a set of approximate design points through the OMS, called the OMS design points, although the adopted stability analysis method may be the simplified Bishop method. Because the OMS produces consistently conservative safety factors, it is expected that the resulting q(z) can still cover the failure region to a certain degree. The determination of the OMS design points does not require much computation because the OMS is non-iterative. The proposed simple procedure based on the OMS is described as follows. First of all, a circular trial slip surface can be characterized by three variables, u, v and ε in Figure 2, where u and v are the horizontal coordinates of the two points where the slip surface and the ground surface intersect, and ε is the tangent angle of the slip surface at the lower intersection point. The upper and lower bounds of the three variables, denoted by [ulow, uup], [vlow, vup] and [εlow, εup], should be specified a priori based on engineering judgment. As a result, the space of slip surfaces is the three-dimensional space {ulow ≤ u ≤ uup, vlow ≤ v ≤ vup, εlow ≤ ε ≤ εup}. Secondly, equally spaced grid points are located in the {u, v, ε} space; e.g. if the u axis is divided into three equally spaced grid points {ulow, (ulow + uup)/2, uup}, and similarly for the other two axes, there will be 3³ = 27 equally spaced grid points in the {u, v, ε} space, each grid point representing a trial slip surface. The trial surfaces corresponding to these grid points are directly taken as the representative slip surfaces {ωj: j = 1, ..., M}, except the inadmissible ones, e.g. trial surfaces intersecting the ground surface. It is hoped that all failure modes can be detected through these widely spread representative surfaces. Equation (10) can then be used to find their OMS reliability indices, denoted by {βj: j = 1, ..., M}. The weight of the j-th trial surface ωj is simply taken as its OMS failure probability, i.e. wj = Φ(−βj). Notice that the calculations for determining {(ωj, wj): j = 1, ..., M} are fast even for a large number of trial slip surfaces, because only the non-iterative OMS equations are involved.

4 PROCEDURE OF PROPOSED APPROACH

The procedure of the proposed IS method is summarized as follows:

a. Divide the {u, v, ε} space into equally spaced grid points to obtain the admissible representative slip surfaces {ωj: j = 1, ..., M}. Use Eqs. (9) and (10) to find their OMS design points {z̄ωj^OMS: j = 1, ..., M} and reliability indices {βj: j = 1, ..., M}, and let the weights be wj = Φ(−βj).
b. Draw Z samples {zi: i = 1, ..., N} from q(z), the mixture of M Gaussian PDFs centered at {z̄ωj^OMS: j = 1, ..., M} and weighted by {wj: j = 1, ..., M}.
c. For each sample zi, conduct a deterministic slope stability analysis (not necessarily the OMS) to find the critical slip surface among all trial surfaces. If the safety factor of the critical surface is less than 1, the slope is considered to have failed for that zi sample, i.e. I[min FSω(zi) < 1] = 1; otherwise I[min FSω(zi) < 1] = 0.
d. Repeat Step c for i = 1, ..., N. The IS estimate of the overall failure probability is then given by Eq. (13).

5 NUMERICAL EXAMPLE

Example 1: An example in the STABL manual
Consider the following example extracted from the STABL user manual (Siegel 1975): the slope in Figure 3, underlain by a hard layer. The shear strength parameters c and φ of the soil are uncertain. It is assumed that tension cracks are present near the top surface of the slope up to a depth of 3.35 m. The uncertain cohesion c is lognormally distributed with mean µc = 24 kN/m² and c.o.v. δc = 20%, while ψ = tan(φ) is Gaussian with mean µψ = 0.176 (corresponding to µφ = 10°) and standard deviation σψ = 0.018. There is a negative correlation of −0.3 between Zc and Zψ. The simplified Bishop method of slices is taken as the slope stability method for this example.

Figure 3. The slope considered in the first example. The grey lines are the admissible representative slip surfaces.

The main purpose of this example is to demonstrate the advantage of the IS method over the MCS method in evaluating small slope failure probabilities. The MCS method with sample size N = 1000 is taken to estimate the failure probability of the slope. For each Z sample, a global search algorithm is used to find the most critical slip surface, i.e. the slip surface with the least Bishop-based safety factor; then Eq. (1) can be employed to find P_F^MCS.

Figure 4. The MCS (left; N = 1000) and IS (right; N = 100) samples in the standard Gaussian space (µψ = 0.176). The gray region is the actual failure region F, and the triangles are the locations of the OMS design points of the representative slip surfaces.

Figure 4 shows the locations of all MCS samples in the standard Gaussian space,
where an 'o' mark indicates a non-failure sample and an 'x' mark indicates a failure sample. One can see that most of the samples are non-failure ones. The analysis shows that P_F^MCS is 0.096 and the c.o.v. of the estimator is 9.7%. As a verification, the actual failure region F is found by an exhaustive analysis described as follows. First, divide the standard Gaussian space into dense grid points, each grid point corresponding to a set of fixed values of c and ψ. At each grid point, a simplified Bishop stability analysis is carried out to see whether the slope fails (again, a global search algorithm is taken to find the slip surface with the least safety factor). Doing so for all grid points yields the actual failure region F. The gray region in Figure 4 is the actual failure region F found by the exhaustive analysis.

For the IS technique, the representative slip surfaces are found by dividing the {u, v, ε} space into 3³ = 27 equally spaced grid points. Among them, 21 are admissible, and they are plotted in Figure 3. The IS PDF therefore consists of 21 Gaussian PDFs centered at their OMS design points, and the weights are their OMS failure probabilities. Notice that the calculation so far is fast because only the non-iterative OMS equations are involved. The locations of the OMS design points of the representative surfaces are shown as triangles in Figure 4. Some design points are very far away from the actual failure boundary, e.g. the left-most triangle in the figure (the other such design points cannot be seen because their horizontal coordinates are less than −5); these design points correspond to the representative slip surfaces that intersect the hard layer. It is clear that some approximate design points are fairly close to the actual design point, even though the actual stability method is the simplified Bishop method, not the OMS, and even though in reality c and ψ are not independent Gaussian. Following the IS procedure, only 100 samples of Z are drawn from q(z) to obtain P_F^IS via Eq. (13). Those samples are plotted in Figure 4. It is clear that a large portion of the samples are failure samples. The resulting P_F^IS is 0.0913, and its c.o.v. is estimated to be 13.2% via Eq. (14).

To demonstrate the superiority of the IS approach for small failure probabilities, µψ is increased to 0.268 (corresponding to µφ = 15°), the standard deviation is changed to σψ = 0.028, and both the MCS and IS methods are applied to estimate the corresponding failure probabilities. Figure 5 shows the samples obtained by the two methods, and Table 1 summarizes the analysis results. For a sensible comparison between MCS and IS, the required numbers of samples to achieve a c.o.v. of 20% are estimated based on the rule that the c.o.v. is proportional to 1/√N. These numbers show that the MCS method becomes inefficient when the failure probability gets smaller, while the IS method is more robust in that respect. Comparisons with FORM are also made. Table 1 shows the results from the FORM method, i.e. solving Eq. (3) to find the design point and estimating the failure probability as Φ(−‖z*‖). In this paper, a global search algorithm is adopted for solving Eq. (3) to avoid local solutions. The adopted FORM method is based on an algorithm similar to the genetic algorithm for finding the most probable slip surface (e.g. Xue and Gavin (2007)): the Hasofer–Lind reliability index βHL is first found with the genetic algorithm by solving a constrained optimization problem, and the FORM failure probability is then estimated as Φ(−βHL). For this example, the adopted FORM method performs satisfactorily, and the required computational cost is much less than those required by the MCS and IS methods.

Figure 5. The MCS (N = 10000) and IS (N = 100) samples in the standard Gaussian space (µψ = 0.268).

Table 1. The analysis results from the MCS, IS and FORM methods for various values of µψ.

µψ = 0.176                          | MCS    | IS     | FORM
Number of samples N                 | 1000   | 100    | –
Computer runtime (minutes)          | 282    | 12     | 1
Estimated failure probability       | 0.0960 | 0.0913 | 0.097
Estimator c.o.v. (%)                | 9.7    | 13.2   | –
Required N to achieve c.o.v. = 20%  | 235    | 44     | –

µψ = 0.268                          | MCS    | IS     | FORM
Number of samples N                 | 1000   | 100    | –
Computer runtime (minutes)          | 282    | 12     | 1
Estimated failure probability       | 0.0960 | 0.0913 | 0.097
Estimator c.o.v. (%)                | 9.7    | 13.2   | –
Required N to achieve c.o.v. = 20%  | 235    | 44     | –

Example 2: A slope in two clayey soil layers
Consider the slope in two clayey soil layers underlain by a hard soil layer shown in Figure 6. The undrained shear strengths su1 and su2 of the two clayey layers are uncertain and independent. They are lognormally distributed with mean values µsu1 = 120 kN/m² and µsu2 = 160 kN/m², and with c.o.v.s δsu1 = 30% and δsu2 = 30%. The simplified Bishop method of slices is taken as the slope stability method for this example. The main purpose of this example is to demonstrate the advantage of the IS method over the adopted FORM method in evaluating the slope failure probability when multiple failure modes are present. By solving Eq. (3), the failure probability estimate made by the adopted FORM method is found to be 0.0016.

Figure 6. The slope considered in the second example. The grey lines are the admissible representative slip surfaces.

For the IS technique, 3³ = 27 representative slip surfaces are generated; among them, 18 are admissible, and they are plotted in Figure 6. The IS PDF therefore consists of 18 Gaussian PDFs centered at their OMS design points, and the weights are their OMS failure probabilities. Notice that the calculation so far is fast because only the non-iterative OMS equations are involved. Figure 7 shows the actual failure region F determined by the exhaustive analysis. It is clear from the geometry of the actual failure region that there are two failure modes. Unfortunately, the FORM method is only able to identify one mode. More interestingly, the distances from the origin to the limit state functions of the two modes are similar, indicating that the failure probabilities of the two failure modes are comparable. The locations of the OMS design points of the representative surfaces are shown as triangles in Figure 7. It is interesting to see that both failure modes are captured by the triangles. Following the IS procedure, 100 samples of Z are drawn from q(z) to obtain P_F^IS via Eq. (13). Those samples are plotted in Figure 7. It is clear that a large portion of the samples are failure samples. The resulting P_F^IS is 0.0038, and its c.o.v. is estimated to be 20.9% via Eq. (14). The MCS method with sample size N = 10000 is also used to estimate the failure probability; the MCS estimate is 0.0044 with a c.o.v. of 15.0%. To further verify the consistency of the IS method with a larger sample size, an IS analysis with a sample size of 1000 is performed, and the resulting P_F^IS is 0.0041. This result is fairly close to the MCS result. These results are listed in Table 2 together with the FORM results.

Figure 7. The MCS (N = 10000) and IS (N = 100) samples in the standard Gaussian space (Example 2).

Table 2. The analysis results from various methods (Example 2).

Method                              | MCS    | IS     | IS     | FORM
Number of samples N                 | 10000  | 100    | 1000   | –
Computer runtime (minutes)          | 2137   | 12     | 105    | 1
Estimated failure probability       | 0.0044 | 0.0038 | 0.0041 | 0.0016
Estimator c.o.v. (%)                | 15.04  | 20.9   | 6.62   | –
Required N to achieve c.o.v. = 20%  | 5655   | 109    | 109    | –

It is clear that the adopted FORM
method significantly underestimates the failure probability although it is computationally cheap. Both the IS and MCS methods provide unbiased estimates for the failure probability, but the IS method is obviously more efficient.
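For readers reproducing Example 1, the transformation of the correlated lognormal–Gaussian pair (c, ψ) to independent standard Gaussian inputs can be sketched as below. This is a standard Nataf/Rosenblatt-style mapping under the stated moments; applying the −0.3 correlation in the underlying Gaussian space is our assumption, since the paper does not state at which level it applies.

import numpy as np

def z_to_soil(z1, z2, mu_c=24.0, cov_c=0.20, mu_psi=0.176, sd_psi=0.018,
              rho=-0.3):
    # Map independent standard Gaussians (z1, z2) to (c, psi):
    # c lognormal, psi Gaussian, correlated via a Cholesky factor.
    zc = z1
    zp = rho * z1 + np.sqrt(1.0 - rho**2) * z2   # correlate in z-space
    s_ln = np.sqrt(np.log(1.0 + cov_c**2))       # lognormal sigma
    m_ln = np.log(mu_c) - 0.5 * s_ln**2          # lognormal mu
    c = np.exp(m_ln + s_ln * zc)
    psi = mu_psi + sd_psi * zp
    return c, psi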
6 CONCLUSION
A new method based on the importance sampling (IS) technique is proposed to efficiently estimate the failure probability of slope stability for circular slip surfaces. The main novelty is to utilize the ordinary method of slices to determine suitable center locations for the importance sampling probability density function (IS PDF), so that the IS PDF is much closer to the failure region. The resulting algorithm is much more efficient than Monte Carlo simulation, although both methods provide unbiased estimates of the failure probabilities of slopes. From the results of the numerical examples, it is concluded that methods based on the First-Order Reliability Method (FORM) may underestimate the failure probability, depending on the number of failure modes, whereas the proposed IS method always provides unbiased failure probability estimates.

REFERENCES

Ang, A.H.S. and Tang, W.H. (1984). Probability Concepts in Engineering Planning and Design, Volume II: Decision, Risk, and Reliability, John Wiley and Sons.
Bhattacharya, G., Jana, D., Ojha, S. and Chakraborty, S. (2003). Direct search for minimum reliability index of earth slopes. Computers and Geotechnics, 30, 455–462.
Bishop, A.W. (1955). The use of the slip circle in the stability analysis of slopes. Geotechnique, 5, 7–17.
Chowdhury, R.N. and Xu, D.W. (1993). Rational polynomial technique in slope-reliability analysis. ASCE Journal of Geotechnical Engineering, 119(12), 1910–1928.
Christian, J.T., Ladd, C.C. and Baecher, G.B. (1994). Reliability applied to slope stability analysis. ASCE Journal of Geotechnical Engineering, 120(12), 2180–2207.
Fellenius, W. (1936). Calculation of the stability of earth dams. Transactions of the 2nd Congress on Large Dams, Washington, DC, 4, 445–462.
Griffiths, D.V. and Fenton, G.A. (2004). Probabilistic slope stability analysis by finite elements. ASCE Journal of Geotechnical and Geoenvironmental Engineering, 130(5), 507–518.
Low, B.K. and Tang, W.H. (1997). Probabilistic slope analysis using Janbu's generalized method of slices. Computers and Geotechnics, 21(2), 121–142.
Low, B.K., Gilbert, R.B. and Wright, S.G. (1998). Slope reliability analysis using generalized method of slices. ASCE Journal of Geotechnical and Geoenvironmental Engineering, 124(4), 350–362.
Rosenblatt, M. (1952). Remarks on a multivariate transformation. Ann. Math. Statist., 23, 470–472.
Rubinstein, R.Y. (1981). Simulation and the Monte-Carlo Method, John Wiley & Sons Inc., New York.
Siegel, R.A. (1975). STABL User Manual, Joint Highway Research Project 75-9, School of Engineering, Purdue University, West Lafayette, Indiana.
Vanmarcke, E.H. (1976). Reliability of earth slopes. ASCE Journal of Geotechnical Engineering, 96(2), 609–630.
Xue, J.F. and Gavin, K. (2007). Simultaneous determination of critical slip surface and reliability index for slopes. ASCE Journal of Geotechnical Engineering, 133(7), 878–886.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Correlation of horizontal subgrade reaction models for estimating resistance of piles perpendicular to pile axis

Y. Kikuchi
Port & Airport Research Institute, Yokosuka, Japan
M. Suzuki Shimizu Corporation, Tokyo, Japan
ABSTRACT: A beam on an elastic medium is the model conventionally used for designing piles loaded perpendicular to the pile axis. Three types of subgrade reaction model, namely Chang's model, the S-type model, and the C-type model, are generally used in the design of port facilities. The coefficient of subgrade reaction estimated from geotechnical investigation results varies quite markedly between the models. It is therefore important to improve the accuracy of estimating the coefficient of subgrade reaction in reliability-based design methods. To do this for Chang's model, the most widely used model, a new relation between the SPT-N value and the coefficient of subgrade reaction in Chang's model, kCH, is proposed.
1 INTRODUCTION
In Japan, when estimating the resistance characteristics of a pile against a force acting perpendicular to the pile axis, the pile is usually modeled as a beam on an elastic medium. This modeling approach is simple, yet accurate enough for practical design situations. When a pile is modeled as a beam on an elastic medium, the modeling of the subgrade reaction is the most important part of the design from the viewpoint of geotechnical engineering. The subgrade reaction per unit area, p, is generally given as p = k·x^m·y^n, where k is the coefficient of subgrade reaction, x is the depth, y is the deflection, and m, n are exponential indices. For example, in Chang's model, which is one of the most widely used models in Japan, m and n are set as m = 0, n = 1 and the subgrade reaction is expressed as p = kCH·y. In the Japanese technical standards and recommendations for port facility design (JTSP) (MLIT, 2007), the PHRI models (PHRI = Port & Harbour Research Institute, the former name of the Port & Airport Research Institute) are also recommended as useful subgrade reaction models for pile design. They consist of two types: the S-type model and the C-type model. In the S-type model, m and n are set as m = 1 and n = 0.5 and the subgrade reaction is expressed as p = ks·x·y^0.5; the S-type model is also called Kubo's model. In the C-type model, m and n are set as m = 0, n = 0.5 and the subgrade reaction is expressed as p = kc·y^0.5; the C-type model is also called Hayashi's model. The S-type model is mainly applied to normally consolidated ground, and the C-type model is mainly applied to over-consolidated ground. The PHRI models are proposed in accordance with experimental results, and
can explain a pile’s behavior. Shinohara and Kubo (1961), who proposed the models, concluded that the coefficients of the models, ks and kc , are fundamentally constant and that their values depend solely on the ground condition. Chang’s model is applied to a wide range of analysis because the governing equation for a beam on an elastic medium can be analytically solved by using Chang’s model. The model assumes linearity between the subgrade reaction, p, and the deflection, y. However, this assumption gives improper characteristics of the resistance of a pile loaded perpendicular to the pile axis. Chang’s model and the PHRI models are considered for subgrade models for estimating the resistance of piles perpendicular to pile axis in this paper. In this research, first, the variance of the relation between the SPT-N value and the coefficient of subgrade reaction of each model is surveyed. Then the correlations of the coefficient of each model are presented. Finally, new relations for estimating the coefficients from the SPT-N value profile are developed.
2 COMPARISON OF BEHAVIOR OF THREE SUBGRADE REACTION MODELS
2.1 Fitting of laboratory loading test results

A series of experiments (Kikuchi et al., 1992) was conducted in a sand box with a length of 6 m, width of 3 m, and depth of 3 m, filled with dry sand to a depth of 2.25 m. The sand used in this study is Sengenyama sand, which has a particle density of 2.746 g/cm³ and a D50 of 0.2 mm. The relative density of the layer is
40%. A pile 0.2 m wide and 2.5 m long, with a flexural rigidity of 2.94 kN·m², is used in the experiments. The embedded length of the pile, L, is set to 2.1 m. Horizontal static loading on the pile is conducted with a loading height h of 0.25 m. The measured items are: time, load, horizontal and vertical displacements of the loading point and the pile top, horizontal displacement of the pile at the ground surface, and bending moments of the pile. Figure 1 shows the comparison between the experimental results and the fitting results using Chang's model and the S-type model. Figure 1(a) compares the deflection distributions. The coefficient of subgrade reaction of each model is selected to fit the deflection of the pile at the ground surface. The deflections at the protruding part are in good agreement for each model. However, the deflection in the deep region for Chang's model does not fit well. The values of ks for fitting at both loading levels are the same, although the value of kCH changes in accordance with the loading level. Figure 1(b) compares the bending moment distributions. The coefficients of subgrade reaction used in this figure are the same as the values used in Fig. 1(a). The results from the S-type model show good agreement with the experimental data, whereas the results from Chang's model agree less well. In the case of Chang's model, the maximum bending moment is very small and the decrease of the bending moment is very slow at depths from around 0.8 m to 1.5 m. In these figures, the best-fitting coefficient of subgrade reaction of the S-type model is independent of the loading level, but that of Chang's model decreases as the loading level increases.
2.2
Comparison of features of three subgrade reaction models under other conditions
As the model experiment was conducted under a pile-head-free condition, the calculation results of the S-type model and Chang's model were compared under pile-head-free conditions in Section 2.1. Results under other conditions are compared in this section. To begin with, the coefficients of subgrade reaction of the PHRI models and Chang's model are determined under the same load, the same pile head condition, and the same pile condition, with the same deflection at the ground surface as presented in Section 2.1. The calculated deflection distributions and bending moment distributions of these models are then compared. The comparison of the calculation results of the S-type model and Chang's model under the pile-head-fixed condition is as follows: there is little difference in the deflection distributions, but the maximum bending moment from Chang's model is quite small and the decrease of the bending moment is very slow compared to the S-type model. These behaviors are similar to the results presented in Section 2.1.
Figure 1. Comparison of the differences in subgrade reaction models in a case in which there is the same deflection at ground surface.
The comparison of the calculation results of the C-type model and Chang's model, under both pile-head-fixed and pile-head-free conditions, is as follows: there is little difference in the deflection distributions, but the maximum bending moment from Chang's model is small and the decrease of the bending
moment is slow compared to the C-type model. These differences are rather small compared to those in the comparison between the S-type model and Chang's model. The value of the maximum bending moment and the depth at which the maximum bending moment appears are important factors in pile foundation design work, in addition to the deflections at the ground surface and the protruding part. Because the S-type model represents the bending moment distributions of the experiment quite well, as shown in Fig. 1, the coefficient of subgrade reaction of Chang's model is determined so as to give the same maximum bending moment as that calculated using the S-type model under the same pile and loading conditions presented in Section 2.1. The comparison of the calculation results is shown in Fig. 2. The coefficient of subgrade reaction of Chang's model needs to be greatly reduced to obtain the same maximum bending moment as in the S-type model. As a result, the distributions of deflection and bending moment from Chang's model are quite different from the experimental results. A comparison of the calculation results of the S-type model and Chang's model under the pile-head-fixed condition, with the coefficient of subgrade reaction determined so as to give the same maximum bending moment, is also conducted. Results similar to those in Fig. 2 are observed: the coefficient of subgrade reaction of Chang's model needs to be greatly reduced, and the distributions of deflection and bending moment from Chang's model are quite different from those of the S-type model. The same comparisons are conducted for the C-type model and Chang's model. In the case of the C-type model and Chang's model under the pile-head-fixed condition, the depth at which the maximum bending moment appears and the decreasing behavior of the bending moment are quite different.

Figure 2. Comparison of the differences in subgrade reaction models in cases with the same maximum bending moments.
3
RELATION BETWEEN SPT-N VALUE AND COEFFICIENTS OF SUBGRADE REACTION
Figures 3 to 5 show the relations between the SPT-N value profile and the coefficient of subgrade reaction of each model for estimating the pile resistance perpendicular to the pile axis (MOT, 1999). Ground where the SPT-N value is constant with depth can be modeled as C-type model ground when using the PHRI models, while ground where the SPT-N value increases with depth can be modeled as S-type model ground. The abscissa of Fig. 4 is the average increment of the SPT-N value per meter of depth. When using Fig. 5 to estimate kCH, the average SPT-N value from the surface to a depth of 1/β is used. These figures were produced before 1967 from the relations between field loading test results and geotechnical investigation results existing at the time. Measurements in both clayey ground and sandy ground were used to make these figures, so as to cover the relations between ks, kc, kCH and the SPT-N profile in both
clayey and sandy ground. Dotted squares show the data from clayey ground, and solid squares the data from sandy ground, in Fig. 3. White squares show the data from clayey ground, and solid lines the data from sandy ground, in Fig. 4. White squares and shadowed squares show the data from clayey and sandy ground, respectively, in Fig. 5. These figures were prepared for the convenience of designers, to allow the coefficients to be estimated from conventional geotechnical investigations. In fact, the quality of the loading tests and geotechnical investigations used to produce the figures varies greatly, so the relations have some variance. The coefficient of subgrade reaction of Chang's model needs to be changed in accordance with the load level or deflection level when estimating pile behavior, because linearity between the subgrade reaction and the deflection is assumed in Chang's model while the actual relation is nonlinear. The values of kCH shown in Fig. 5 are back-calculated from the relation of load and surface deflection at a surface deflection y0 of 1 cm (MOT, 1999).

Figure 3. Relation between SPT-N value and kc.

Figure 4. Relation between N and ks.

Figure 5. Relation between SPT-N value and kCH.

Table 1. Relation between SPT-N value profile and coefficient of subgrade reaction.

Approximation formula            Coefficient of correlation    Coefficient of variance
kc = 540·N^0.648 (kN/m^2.5)      0.872                         0.111
ks = 592·N^0.654 (kN/m^3.5)      0.966                         0.077
kCH = 3900·N^0.733 (kN/m^3)      0.917                         0.754

From Figs. 3 to 5, the correlation and variance between the SPT-N value and the coefficients of the models are calculated. The results are shown in Table 1. As shown in Table 1, there is a large variance in the relation between the SPT-N value and kCH. It is hard to estimate kCH precisely from Fig. 5 when the SPT-N value is less than 10. The coefficients ks and kc estimated from the SPT-N value have a small variance. This means that kCH can be estimated better by considering the relation between kCH and kc or ks than directly from the SPT-N value.
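As a quick numeric check of Table 1 (a sketch; units as listed in the table), the power-law fits can be evaluated directly. For instance, the value kc ≈ 2080 kN/m^2.5 at N = 8 used in the worked example of Section 4 is reproduced:

```python
def coefficients_from_spt_n(n):
    """Power-law fits of Table 1 (units as listed there)."""
    kc = 540.0 * n**0.648    # kN/m^2.5, correlation 0.872, COV 0.111
    ks = 592.0 * n**0.654    # kN/m^3.5, correlation 0.966, COV 0.077
    kch = 3900.0 * n**0.733  # kN/m^3,   correlation 0.917, COV 0.754
    return kc, ks, kch

print(round(coefficients_from_spt_n(8)[0]))  # ~2080 kN/m^2.5, as in Section 4
```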
4
RELATIONS BETWEEN ks AND kCH, AND BETWEEN kc AND kCH
Here, a new relation between the SPT-N value and kCH is proposed. Although it cannot solve the inherent problems of Chang's model shown in Fig. 2, it will be useful in producing a more reliable estimate of kCH for design. The main factors affecting kCH are the ground condition, pile diameter, flexural rigidity of the pile, loading level, and loading height. On the other hand, ks and kc are defined so as to be constants affected only by the ground condition. This difference produces the difference in the variance of the relations between SPT-N values and the coefficients of subgrade reaction shown in Table 1.
To begin with, the relations between ks or kc and kCH are examined. Here, we try to determine these relations when the surface deflection is the same under the same pile and loading conditions. If the loading height is 0, as for surface loading, even the PHRI models have approximate analytical solutions, and the following equations are presented (MOT, 1999):

For the S-type model:
Pile head free condition:
Pile head fixed condition:

For the C-type model:
Pile head free condition:
Pile head fixed condition:

where EI: flexural rigidity, D: pile diameter, k: coefficient of subgrade reaction, T: horizontal load applied to the pile top, y0: deflection at the ground surface. The relations between ks or kc and kCH for surface loading are calculated using Eqs. (1) and (2) together with the analytical solution of Chang's model (y0 = T/(2EIβ³) when the pile top is free, and y0 = T/(4EIβ³) when the pile top is fixed):

Pile head free condition:
Pile head fixed condition:
Pile head free condition:
Pile head fixed condition:

This kind of relation is also examined for piles protruding above the ground. In such a case, the PHRI models cannot give an analytical solution, so the calculations for the PHRI models are conducted using a law of similarity (MOT, 1999). About 4800 cases are calculated with varying parameters covering a considerable range of steel pipe piles, as shown in Table 2.

Table 2. Range of pile conditions for calculation.

Pile diameter D           0.5–1.8 m
EI/D                      120–6400 MN·m
Loading height h          1–20 m
Surface deflection y0     0.01–0.1 m
ks or kc                  300–10000 kN/m^3.5 (or kN/m^2.5)

The calculation conditions for determining the coefficient of subgrade reaction in each case were the same pile condition and the same loading condition. A further condition, used to determine the coefficient, is separated into two cases: 1) the same deflection at the ground surface; 2) the same maximum bending moment. In the case of 2), the total behavior of the pile is quite different between the two models, as seen in Section 2.2. Therefore, the correlations of ks and kCH, and of kc and kCH, are considered under condition 1). Multiple regression analyses are performed, resulting in the following multiple regression equations:

For the S-type model:
Pile head free condition:
Pile head fixed condition:

For the C-type model:
Pile head free condition:
Pile head fixed condition:
where h: loading height. These regression equations have multiple correlation coefficients greater than 0.9999. Figure 6 shows an example of the relation between kCH and the calculation results of the regression equation for the S-type model under the pile-head-free condition. The relations between ks and kCH under the conditions shown in Fig. 1 are calculated from Eq. (5a) in order to verify the accuracy of this regression equation. As shown in Fig. 1, the best-fitting coefficient of subgrade
Figure 6. Relation between kCH and the regression equation in S-type ground, pile head-free condition.
reaction in the S-type model was ks = 1200 kN/m^3.5. The deflections at the ground surface under horizontal loads of 2.11 kN and 4.19 kN were 0.065 m and 0.145 m, respectively. The loading height h is 0.25 m, and EI/D is 14.7 kN·m in this case. The coefficients of subgrade reaction of Chang's model obtained from Eq. (5a) are 980 kN/m³ and 710 kN/m³ under loads of 2.11 kN and 4.19 kN, respectively. These values coincide with the values from the fitting. Because no suitable experimental data are available for comparing the C-type model and Chang's model, a comparison of calculation results is presented here instead. Assume a pile with EI of 9.56 × 10^5 kN·m², a diameter D of 1 m, inserted 20 m into ground with an SPT-N value of 8, and loaded at a height h of 3 m. When the SPT-N value is 8, kc is calculated as 2080 kN/m^2.5 from Table 1. If horizontal loads of 270 kN and 1200 kN are applied to the pile, the deflections at the ground surface calculated by the C-type model are 0.010 m and 0.096 m, respectively. The coefficients of subgrade reaction of Chang's model, kCH, calculated with the corresponding regression equation for the C-type model under the pile-head-free condition, are 26400 kN/m³ and 8760 kN/m³, respectively. Figure 7 shows the calculated deflection and bending moment distributions by both the C-type model and Chang's model. The calculation results of both models are in good agreement in this case. As designers want to know the relation between ground investigation results and the coefficient of subgrade reaction in design work, the relations between kCH and the SPT-N value (or its increment with depth) are derived by combining Eqs. (5) and (6) with Table 1. The relations are presented as follows: If the SPT-N value increases with depth: Pile head free condition:
Figure 7. Pile deformation in C-type ground.
Pile head fixed condition:
Figure 8. New relation between SPT-N value and kCH.

Figure 9. New relation between N value and kCH.
If SPT-N value is constant along the depth: Pile head free condition:
Pile head fixed condition:
In this calculation form, the COVs of kCH are the same as those of ks and kc shown in Table 1, because the multiple correlations of kCH with ks or kc are almost 1. Figures 8 and 9 show examples of these relations. The relation between kCH and the SPT-N value proposed in the JTSP, shown in Fig. 5, was obtained by fitting loading experiment results at a ground surface deflection of 1 cm. If the data used to produce Fig. 5 came from ground with a constant SPT-N value along the depth, the variance of the relation between the SPT-N value and kCH is small, as shown in Fig. 8. But if the SPT-N value increases with depth, the fitted kCH is affected by the factor (EI/D). This is the main reason why the relation between the SPT-N value and kCH shows a large variance.
5
CONCLUSION
This paper reviewed the variances of the estimated values of the coefficient of subgrade reaction for estimating the resistance of a pile perpendicular to the pile
axis from the profile of SPT-N values. The existing proposed relation between the SPT-N value and kCH showed a large variance, whereas those between the SPT-N value and kc, and between the SPT-N increment and ks, showed small variances. To correlate the subgrade reaction models, it was found that the coefficients of subgrade reaction should be compared at the same deflection at the ground surface under the same pile, ground and loading conditions. New relations between the profile of the SPT-N value and kCH, kc or ks were proposed based on a parametric study and multiple regression analysis. It was found that the large variance of the existing proposed relation between the SPT-N value and kCH was caused by disregarding the effect of (EI/D) when the SPT-N values increase with depth. The newly proposed relationships will minimize the variance of the estimated value of the coefficient of subgrade reaction in each model.

REFERENCES

Kikuchi, Y., Takahashi, K., and Suzuki, M. (1992). "Lateral resistance of single piles under large repeated loads." Report of PHRI, 31(4), 33–60. (in Japanese).
Shinohara, T. and Kubo, K. (1961). "Experimental study on the lateral resistance of piles (part 1) – lateral resistance of single free head piles embedded in uniform sand layer." Monthly Report of TTRI, 11(6), 169–242. (in Japanese).
Ministry of Land, Infrastructure, and Transportation (2007). Technical standards and commentaries for port and harbour facilities in Japan, The Japan Port and Harbour Association. (in Japanese).
Ministry of Transportation (1999). Technical standards and commentaries for port and harbour facilities in Japan, The Japan Port and Harbour Association. (in Japanese).
Risk management in geotechnical engineering
The long and big tunnel fire evacuation simulation based on an acceptable level of risk and EXODUS software Shu-qing Hao, Hong-wei Huang & Yong Yuan Department of Geotechnical engineering, Tongji University, Shanghai
ABSTRACT: With the extensive use of long and big tunnels in modern society, fire is one of the major disasters of concern. In the past, experts and researchers have done much research on the temperature and smoke diffusion rules of fires in long and big tunnels and have reached many valuable conclusions for fire evacuation. However, those conclusions are somewhat blind because they were not guided by an acceptable risk level. How to keep fire losses below the acceptable risk level and achieve the evacuation of people after a fire is very important. The authors established the socially acceptable risk level through investigation and research, and then used risk level 2 to guide the simulation of evacuation from long and big tunnels. EXODUS is a suite of software tools designed to simulate the evacuation of large numbers of individuals within complex structures. The EXODUS family of evacuation models currently consists of airEXODUS, buildingEXODUS and maritimeEXODUS. The EXODUS software takes into consideration people–people, people–fire and people–structure interactions. It has been written in C++ using object-oriented techniques and rule-based concepts to control the simulation; thus, the behavior and movement of each individual are determined by a set of heuristics or rules. The model tracks the trajectory of each individual as they make their way out of the enclosure, or are overcome by fire hazards such as heat, smoke and toxic gases. Smartfire, developed by the Fire Safety Engineering Group (FSEG) at the University of Greenwich, aims to make fire field modelling techniques accessible to non-Computational Fluid Dynamics (CFD) experts, such as fire fighters, architects or safety engineers. Smartfire achieves these aims by integrating knowledge-based tools and techniques within many of the modules. Smartfire is a complete fire field modelling environment that employs state-of-the-art finite volume methods, and provides dynamic user interaction and sophisticated case set-up tools to facilitate simulation and to help ensure solution correctness. The authors use the SMARTFIRE software to simulate fires in long and big tunnels and import the hazard model into the buildingEXODUS evacuation software to perform the evacuation simulation for the fire. By comparing different numbers and widths of evacuation holes, the tunnel structure parameters for fire evacuation and the ventilation technology parameters under acceptable risk level 2 are obtained. 1
INTRODUCTION
Tunnels, as one of the three-dimensional modes of transportation, are readily used for urban traffic. However, fire is one of the great disasters in a tunnel: its risk is horrible and its losses are large. The study of evacuation technology for fires in tunnels is therefore very important.
2
BUILDING EXODUS METHOD
EXODUS is a suite of software tools designed to simulate the evacuation of large numbers of individuals within complex structures. The EXODUS family of evacuation models currently consists of airEXODUS, buildingEXODUS and maritimeEXODUS. The EXODUS software takes into consideration people–people, people–fire and people–structure interactions. It has been written in C++ using object-oriented techniques and rule-based concepts to control the simulation. Thus, the behaviour and movement of each individual are determined by a set of heuristics or rules. The model tracks the trajectory of each individual as they make their way out of the enclosure, or are overcome by fire hazards such as heat, smoke and toxic gases. Smartfire, developed by the Fire Safety Engineering Group (FSEG) at the University of Greenwich, aims to make fire field modelling techniques accessible to non-Computational Fluid Dynamics (CFD) experts, such as fire fighters, architects or safety engineers. Smartfire achieves these aims by integrating knowledge-based tools and techniques within many of the modules. Smartfire is a complete fire field modelling environment that employs state-of-the-art finite volume methods. Smartfire provides dynamic user interaction and sophisticated case set-up tools to facilitate simulation and to help ensure solution correctness. Smartfire uses state-of-the-art physics, numerics and intuitive configuration options to give true-to-life simulation models.
Figure 1. EXODUS model of sub models.

The buildingEXODUS model evaluates population movement by computer not only in emergency situations but also under normal circumstances, such as laboratory operations. Developed through earlier research at the University of Greenwich, FSEG's buildingEXODUS simulates the interactions of people with people, people with fire, and people with structures. The model tracks the escape path of each person from the interior under the impact of heat, smoke and toxic gases.

3

LONG AND BIG TUNNEL RISK ACCEPTABLE LEVEL

Generally speaking, risk can be characterized as the product of the probability of an accident and the loss caused by the accident. The risk grading standards are as follows (see Tables 1 and 2).

Table 1. Risk classification evaluation criterion.

In Table 1, the character "P" denotes probability and the character "L" denotes loss; 1 denotes negligible, 2 to be considered, 3 serious, 4 very serious and 5 disastrous. In order to make the risk assessment more intuitive, different colors are used to denote the different risk levels.

Table 2. Risk rating standard color logo.

In this article, risk thus includes two components: the probability of occurrence and the loss due to the event. When the risk level is 2, by its definition, the probability of casualties is below 0.1 percent. This is adopted here as the structure optimization criterion.

4

EVACUATION THEORY AND EVACUATION STRUCTURE SITUATION ABOUT LONG AND BIG TUNNEL

The safe evacuation time is a very important parameter. The evacuation time is affected mainly by the density of the staff in the underground space, the rationality of the evacuation holes, and the concentration of toxic gas. The denser the evacuation channels and the larger the openings, the easier the evacuation. However, from the perspective of economic risk, it is not suitable to design the holes too densely, which causes greater economic costs. Therefore, combining the risk considerations, the optimal design parameters of the evacuation channels are sought so as to meet the optimal risk level for evacuation. The condition for safe evacuation time: the personnel evacuation time criterion is one of the main objectives in assessing fire safety. To guarantee safe evacuation from the underground space in a fire, the key requirement is that the time required to complete the evacuation of all personnel be less than the allowable safe evacuation time. Here we adopt risk level 2 as the design criterion. The trapped persons are modeled as individuals with a collection of attributes that are broadly divided into four categories: physical (such as age, gender and mobility), psychological (such as patience and drive), experiential (such as distance travelled and individual time consumed) and disaster impact (for example, FIN, FICO2, FIH). These attributes have a dual role: all trapped persons can be defined as individuals, and their progress can be understood by tracking them through the difficulties. In the simulation, some attributes are fixed, while others change with the inputs and the outcome, i.e. they are dynamic. In addition, some attributes require the user to define values manually or to accept the default settings, while other attributes are calculated within the module.
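Restating the two criteria above compactly (a sketch; the 0.1 percent threshold is the reading of risk level 2 given in Section 3, and all names and values are illustrative):

```python
def evacuation_safe(required_evacuation_time_s: float, allowable_time_s: float) -> bool:
    # Section 4 criterion: evacuation of all personnel must be completed
    # within the allowable safe evacuation time.
    return required_evacuation_time_s < allowable_time_s

def meets_risk_level_2(casualty_probability: float) -> bool:
    # Section 3 criterion (as read here): casualty probability below 0.1 percent.
    return casualty_probability < 0.001

# Illustrative check with the Section 6.2 outcome (8,133 of 20,000 evacuated):
print(meets_risk_level_2(1.0 - 8133 / 20000))  # False -> the design fails
```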
5

THE COMPUTATIONAL PROCESS OF BUILDINGEXODUS

(1) Geometry model
A tunnel model 1000 m long, 15 m wide and up to 8 m high, with an approximately square cross-section, is established. The fire, located 500 m along the tunnel, is a gas burner measuring 3 m in each of its three dimensions, with a linear ramp heat-release curve rising to 34 kW over the first 20 s. A six-flux radiation model is used to treat the radiant heat. The heat transfer model assumes turbulent heat transfer at the tunnel walls. The long tunnel fire
burning power is 35 kW; the parameter of the heat-release curve is B = 34/20 = 1.7 kW/s.

(2) Mesh
The geometry model is meshed with the Smartfire mesh tools; the result is shown in Fig. 2.

Figure 2. Mesh figure.

(3) CFD calculation engine running

Figure 3. The development of fire.

Figure 4. Gas flow pattern.

Summary: by comparison, a ventilation speed of 3.0 m/s effectively curbs smoke backflow. The hazard model is then imported into EXODUS, and the evacuation simulations are conducted to find the conditions under which the effective evacuation time achieves the fire safety risk rating.

6

EVACUATION BASED ON THE EXODUS

6.1 Set up two exits (impact of the fire hazard not considered)

Figure 5. The evacuation situation when time = 00:05:06.

After five minutes six seconds the evacuation is still not complete: of 5,000 people, only 833 have evacuated. Checked against the fire scene, the personnel cannot evacuate in time, so the evacuation fails.

6.2 Set up twelve exits (impact of the fire hazard not considered)

Figure 6. The evacuation situation when time = 00:05:37.

After 6 minutes 37 seconds the evacuation is still not complete; the crowd forms arched distributions at the exits and, checked against the fire scene, the personnel still cannot evacuate in time, so the evacuation fails. Analyzing the reason for the failure: too many occupants were set up, 20,000 in total, of whom only 8,133 evacuated within 6 minutes 37 seconds; the rest were all killed by the fire.

Figure 7. Number of evacuees as a function of time.

Figure 8. Number of evacuees as a function of time.

6.3 Set up two exits at 100 m spacing

Figure 9. The evacuation situation when time = 00:01:04.

One minute four seconds later, 180 of the 2,000 people have evacuated.
Figure 10. The evacuation situation when time = 00:11:23.
After 11 minutes 23 seconds, all people have escaped. Compared with the SMARTFIRE fire simulation of heat generation and the spread speed of the toxic gases, escape holes at 75 m spacing here fully meet the demand for safe escape.
6.4
With the escape holes set at 50 m spacing over a 100 m long section and 800 people, arching appears at the escape exits, and the escape is completed in 3 minutes 39 seconds.
Figure 15. Relationship between Elapsed time and Total Flow Rate.
The width of the doors is set to 1.5 m.

7

CONCLUSIONS
(1) When a fire breaks out inside the tunnel, it spreads from the fire source along the tunnel wall space. Different heat release rates lead to unacceptably high temperatures in the space at different times: the higher the heat release rate, the sooner the space temperature changes. At a heat release power of 35 kW, the temperature within a 200 m range of the tunnel reaches 60°C within 7 min, and the flue gas also spreads about 200 m along the tunnel. (2) From the door-passage times of the crowd calculated by EXODUS, it can be seen that when a fire with a heat release rate of 35 kW breaks out, if an escape hole is set up every ≤75 m along the tunnel, the staff can escape safely within 15 minutes, and the fire risk rating is controlled at level 2. This risk level is acceptable.
Figure 11. Evacuation situation.

Figure 12. The evacuation situation when time = 00:01:28.

Figure 13. Number of evacuees as a function of time.

Figure 14. Number of evacuees as a function of time.

ACKNOWLEDGEMENTS

The work described in this article is supported by the Key Project (2006BAJ27B04) of the National Science & Technology Pillar Program.
Risk based decision support system for the pumping process in contaminated groundwater remediation Toshiro Hata Nagano National College of Technology, Nagano, Japan
Yoshihisa Miyata National Defense Academy, Yokosuka, Japan
ABSTRACT: A new risk-based decision support system is proposed for the pump-and-treat method of remediating groundwater contaminated with VOCs. The system can control the pumping process in contaminated groundwater remediation by considering not only the remediation efficiency but also the environmental impact. In the evaluation of the remediation efficiency, the human risk is counted using the ASTM RBCA model. This achievement may contribute to low-cost and low-risk remediation works.
1
INTRODUCTION
The pump-and-treat method has been applied to many sites with VOC-contaminated groundwater in Japan. In such remediation works, it is difficult to control the pumping rate because there are many restrictions related to human health and environmental risk. The authors propose a new decision support system for a rational pumping process in contaminated groundwater remediation.

2

PUMP AND TREAT REMEDIATION

2.1 Outline
A pump-and-treat system can be used to remove contaminant mass (such as VOCs) from a plume, causing the plume to shrink toward its source. Thus, it may be possible to control risks to potential receptors arising from dissolved contaminants in the plume. For the control of plumes at depths of 40–50 m or more, pump-and-treat may be the most viable and cost-effective technology. Figure 1 illustrates the conceptual effluent concentrations over time for an array of pumping wells in a pump-and-treat system associated with a plume emanating from a long-term source of contamination below the water table. For all wells, there is a rapid decline in effluent concentrations immediately following the onset of pumping, followed by a long period of gradually declining concentrations. The quality of the effluent from the source-containment wells is influenced by the groundwater quality close to the source and by the dilution by uncontaminated groundwater drawn from surrounding regions as pumping proceeds. The effluent does not, in a practical time frame, attain a quality satisfying environmental guidelines, as a direct consequence of the continued release of contaminants from the source zone. For
Figure 1. The conceptual remediation of pump and treat.
the plume extraction wells, contaminant concentrations may decrease to acceptable levels in time frames extending from years to decades or longer. Decreasing contaminant concentrations in effluent from plume extraction wells are a consequence of the gradual removal of contaminants from the plume in combination with the introduction of uncontaminated groundwater from adjacent regions of the aquifer system. Risk-control wells can reduce the groundwater contaminants below a target such as an acceptable cancer risk.

2.2 Target of remediation
One of the more important issues concerning groundwater remediation is to set a reasonable target for the remediation work. A persuasive target is to reproduce the pre-remediation condition, which means zero
contaminant concentrations. However, this target is usually impossible to achieve because of the high cost involved. In Japan, there are no comparable standards for allowable concentrations in contaminated groundwater. The human health risk assessment approach is a tool that permits the establishment of remediation levels (targets) that are protective of human health, depend on the site conditions, remediation techniques and chemical characteristics, and utilize "acceptable" risk levels that can be selected to be more or less restrictive. Human risk assessment considers four main steps: 1) hazard identification, 2) dose–response or toxicity assessment, 3) exposure assessment and 4) risk characterization. Hazard identification is the identification of potentially harmful chemicals present in the different media at a site. The toxicity assessment examines the toxicity, or harmfulness, of each chemical found at the site. The exposure assessment step consists of identifying the conditions under which people come into contact with the chemicals and characterizing the magnitude of exposure (exposure dose, mg/kg/day). In the risk characterization step, the information from the above three steps is combined to estimate the additional risk to human health caused by exposure to the assessed toxic chemicals. For chemicals causing cancer, risk is understood to be the likelihood of cancer resulting from exposure to a chemical and is expressed as a probability. If the risk obtained from the risk assessment is larger than that established as "acceptable" (usually set to 1 × 10⁻⁵ for drinking water in Japan), then the population is said to be at risk and it is necessary to reduce the media concentrations to acceptable levels. For non-carcinogenic chemicals, the Hazard Quotient (HQ) is calculated, which is the ratio of the exposure dose for the chemical to the RfD (Reference Dose). If HQ > 1, the population is receiving a dose larger than the RfD (the maximum dose at which it is known that no adverse effects occur) and, consequently, there is a possibility that the population will experience adverse effects. In this paper, a new Decision Support System (DSS) is proposed for a rational pumping process based on fuzzy inference and RBCA risk assessment.

3

RATIONALIZATION OF PUMPING RATE BASED ON FUZZY INFERENCE

3.1 Fuzzy inference
In this paper, the new risk-based decision support system is constructed by introducing the fuzzy inference concept. This concept was proposed by Zadeh (Zadeh, 1968; Zadeh, 1973) and has already been applied to various everyday electronic devices. In the field of environmental geotechnics, its application to the optimum arrangement of monitoring wells, among others, has been proposed (Morisawa, 1991). In this paper, the inference method proposed by Mamdani is used (Mamdani, 1974; Mamdani, 1976); this is the technique Mamdani used for the first fuzzy control. The sequence of the fuzzy inference is as follows.

1) Fuzzy logic control rules are defined in the following form:
If X1 is A11 and X2 is A21 then Z is B1
If X1 is A12 and X2 is A22 then Z is B2
where X1 is the pumping rate, X2 is the VOCs concentration, Z is the remediation efficiency, and Aij and Bi are fuzzy levels.
2) The degree of conformity (wi) of each rule is calculated.
3) The inference result of each rule is calculated.
4) The results Bi* of the individual rules are unified over the whole rule set, and B* is calculated.
5) The center of gravity of the result B* is calculated.

Given X1 and X2, min (=∧) is applied in Steps 1 and 2 and max (=∨) in Steps 3 and 4, before the representative value y* is finally obtained by the center-of-gravity method; the procedure is therefore called the min–max composition method.
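A minimal sketch of this min–max (Mamdani) composition with triangular membership functions and centroid defuzzification follows; the membership shapes, universes and the two rules are illustrative assumptions, not the controller of this paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Discretised output universe (normalised) for the centroid in Step 5.
z = np.linspace(0.0, 1.0, 201)
B = {"S": tri(z, -0.5, 0.0, 0.5), "M": tri(z, 0.0, 0.5, 1.0), "L": tri(z, 0.5, 1.0, 1.5)}
# Input sets for X1 (pumping rate) and X2 (VOCs concentration), both normalised.
A = {"S": (-0.5, 0.0, 0.5), "M": (0.0, 0.5, 1.0), "L": (0.5, 1.0, 1.5)}

def mamdani(x1, x2, rules):
    """Steps 1-5: min for the rule conformity w_i, max to unify the clipped
    consequents B_i*, then the center of gravity for y*."""
    agg = np.zeros_like(z)
    for a1, a2, b in rules:
        w = min(tri(x1, *A[a1]), tri(x2, *A[a2]))   # Step 2: w_i by min
        agg = np.maximum(agg, np.minimum(w, B[b]))  # Steps 3-4: clip, unify by max
    return float(np.trapz(agg * z, z) / np.trapz(agg, z))  # Step 5: centroid y*

# Two illustrative rules of the form "If X1 is A1j and X2 is A2j then Z is Bj".
print(mamdani(0.3, 0.8, rules=[("S", "L", "L"), ("L", "S", "S")]))
```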
3.2 Rational pumping rate model

In order to keep the remediation constantly highly effective, adjusting the pumping rate so that the VOCs concentration and the groundwater level remain suitable is expected to be effective throughout the remediation period. In this study, the effects of remediation were examined by numerical simulation. The concept of the fuzzy inference model is shown in Fig. 2.

Figure 2. Outline of the fuzzy inference model.

The following three items were chosen as input data from the daily monitoring information: 1) the VOCs concentration was chosen to represent the plume situation; 2) the groundwater level was chosen to monitor the risk of settlement; 3) the pumping rate was chosen to represent the remediation effect. Each item is expressed by five membership functions. Next, 5×5 Fuzzy Logic Controllers (FLC) are used: one, using the pumping rate and the VOCs concentration, expresses the environmental impact; another, using the pumping rate and the groundwater level, expresses the remediation efficiency. Together, the twin 5×5 FLCs using the pumping rate, VOCs concentration and groundwater level express the environmental impact and the remediation efficiency. Samples of the membership functions and the fuzzy logic controller are shown in Fig. 3 and Table 1.

Figure 3. Sample of the membership function.

Table 1. Sample of the fuzzy logic controller.

Impact on geological environment
                      Groundwater level
Pumping quantity      Small     Medium    Large
Small                 Small     Small     Medium
Medium                Small     Medium    Medium
Large                 Medium    Medium    Large

Contribution to decontamination
                      TCE concentration
Pumping quantity      Small     Medium    Large
Small                 Small     Medium    Medium
Medium                Medium    Medium    Large
Large                 Large     Large     Large

4

HUMAN RISK CALCULATION BASED ON REFERENCE-DOSE MODEL

4.1 Human health risk assessment model
To perform the human health risk assessment, the four steps described in the introduction were followed. For the hazard identification step, information regarding land use and the chemical properties of the groundwater was collected; groundwater samples were taken in the saturated zone, near the source zone and in the risk control zone. For the dose–response assessment, the toxicological information used was the US-EPA's guideline values. For the off-site receptors the land use was set as residential, and for the on-site receptors the land use was considered industrial. The migration path considered for the groundwater remediation site was groundwater migration. The new human health assessment model for pump-and-treat remediation considers groundwater ingestion as the only exposure pathway. To provide an overview of the calculations done to determine whether the exposed populations are at risk, equations (5) and (6) are presented:

HQ(nc) = (Conc × IRw × EF × ED) / (RfD × BW × ATn × 365)    (5)

Risk(c) = (Conc × IRw × EF × ED × SF) / (BW × ATc × 365)    (6)

where HQ(nc) = hazard quotient for non-carcinogenic compounds; Risk(c) = risk for carcinogenic compounds; Conc = groundwater concentration; RfD = reference dose (mg/kg/day); SF = cancer slope factor; IRw = ingestion rate of water (L/day); ED = exposure duration (years); EF = exposure frequency (days/year); BW = body weight (kg); ATn = averaging time for non-carcinogens (years); ATc = averaging time for carcinogens (years). For non-carcinogenic contaminants (equation 5), the HQ value for a particular contaminant is the exposure dose divided by the RfD; if HQ > 1, this suggests the possibility of adverse effects developing. The risk is calculated by multiplying the exposure dose by the SF (cancer slope factor), which is the slope of the dose–response curve obtained in the laboratory for the particular chemical and represents the potency of the chemical carcinogen. If risk > "acceptable risk", this suggests a high risk that the exposed population will develop cancer. The individual (single-chemical) acceptable risk was set to 1 × 10⁻⁵ (carcinogenic compounds); the acceptable Hazard Quotient (HQ, single chemical, non-carcinogenic effects) was set to 1.0, as was the Hazard Index (HI); these values were established according to the ASTM recommendations.
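A minimal sketch of the ingestion-pathway calculation of Eqs. (5)–(6) follows, using the Table 2 values for TCE; EF = 350 days/year is the usual residential default and is an assumption here, as is the example concentration.

```python
def hazard_quotient(conc, rfd=0.006, irw=2.0, ef=350.0, ed=30.0, bw=50.0, atn=30.0):
    """Eq. (5): HQ(nc) = (Conc*IRw*EF*ED) / (RfD*BW*ATn*365); Conc in mg/L."""
    return (conc * irw * ef * ed) / (rfd * bw * atn * 365.0)

def cancer_risk(conc, sf=0.011, irw=2.0, ef=350.0, ed=30.0, bw=50.0, atc=70.0):
    """Eq. (6): Risk(c) = (Conc*IRw*EF*ED*SF) / (BW*ATc*365)."""
    return (conc * irw * ef * ed * sf) / (bw * atc * 365.0)

conc = 0.01  # mg/L TCE, illustrative
print(hazard_quotient(conc) > 1.0)  # adverse-effect flag (HQ > 1)
print(cancer_risk(conc) > 1e-5)     # against the 1e-5 acceptable risk
```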
5

NEW DECISION SUPPORT SYSTEM WITH FUZZY-RISK CALCULATION MODEL

The goal of a pump-and-treat remediation work should be to protect human health, and decisions should be made accordingly. We developed a new decision support system for groundwater remediation using the pump-and-treat method. This DSS can potentially achieve a reasonable remediation work while protecting human health. The concept of the decision support system is shown in Fig. 4. The human health risk assessment is calculated based on the ASTM RBCA model, and the environmental risk and the rational pumping rate are expressed by fuzzy membership functions. Samples of the risk calculation parameters are shown in Table 2.

Figure 4. Flow of a decision support system.

Table 2. Parameters of risk evaluation.

Non-carcinogens
Parameter                                    Value    Unit                   Reference*
Reference dose (RfD0)                        0.006    mg/(kg·day): TCE       U.S. EPA (1996)
Body weight (BW)                             50       kg                     JEA (1999)
Averaging time for non-carcinogens (ATn)     30       years                  JEA (1999)
Ingestion rate of water (IRw)                2        L/day                  JEA (1999)
Exposure duration (ED)                       30       years                  Assumed in this study
Exposure frequency (EF)                      350      days/year              JEA (1999)

Carcinogens
Parameter                                    Value    Unit                   Reference*
Body weight (BW)                             50       kg                     JEA (1999)
Averaging time for carcinogens (ATc)         70       years                  JEA (1999)
Slope factor (SF0)                           0.011    [mg/(kg·day)]⁻¹: TCE   U.S. EPA (1996)
Ingestion rate of water (IRw)                2        L/day                  JEA (1999)
Exposure duration (ED)                       30       years                  Assumed in this study
Exposure frequency (EF)                      350      days/year              JEA (1999)

*USEPA: U.S. Environmental Protection Agency; JEA: Japan Environmental Agency.

6

SUMMARY

The concept of a new fuzzy inference model, which is effective for groundwater contaminated with low levels of VOCs, was proposed. In this remediation technique, the remediation efficiency and the environmental impact are considered throughout the remediation period by fuzzy inference. The proposed method may contribute to low-cost and low-risk remediation works. Application to an actual site and modification of the basic concept are needed for further development.

REFERENCES

ASTM (2000) Standard guide for risk-based corrective action, ASTM E 2081-00.
Japan Environmental Agency (1999) Survey and countermeasure guidelines for soil and groundwater contamination, Geo-Environmental Protection Center.
Mamdani, E.H. (1974) Applications of fuzzy algorithms for control of simple dynamic plant, Proc. IEE, 121, 12, 1588.
Mamdani, E.H. (1976) Advances in the linguistic synthesis of fuzzy controllers, Int. J. Man-Machine Studies, Vol. 8, No. 6, pp. 669–679.
Morisawa, S. (1991) Optimum allocation of monitoring wells around a solid-waste landfill site using precursor indicators and fuzzy utility functions, Journal of Contaminant Hydrology, Vol. 7, pp. 337–370.
U.S. Environmental Protection Agency (1996) Soil screening guidance technical background document, EPA/540/R95/128.
Zadeh, L.A. (1968) Fuzzy algorithms, Information and Control, Vol. 12, pp. 94–102.
Zadeh, L.A. (1973) Outline of a new approach to the analysis of complex systems and decision processes, IEEE Transactions on SMC, SMC-3, Vol. 1, pp. 28–44.
A risk evaluation method of countermeasure for slope failure and rockfall with account of initial investment T. Yuasa & K. Maeda Graduate School of Civil Engineering, Nagoya Institute of Technology, Japan
A. Waku Takashima Ltd., central technology adviser; general manager, asset management department
ABSTRACT: In Japan, with the advancement of asset management and risk management research, soil structures, such as slopes, are now being taken into consideration as infrastructure properties. In order to execute effective countermeasures for slope failure and rockfall, it is necessary to quantify the effects of the countermeasures by numerical analysis, such as DEM rockfall simulation. At present, a method for evaluating the initial investment in countermeasures has not yet been established. To allow the determination of an appropriate investment, we propose in this paper a new method of calculating the investment based on the slope risk.
1
INTRODUCTION
In Japan, with the development of risk management technology, earth structures, such as slopes and tunnels, are now being taken into consideration as infrastructure properties. In this framework, there are circumstances in which it is necessary to execute slope measures more efficiently, because local governments face strict financial limitations while the number of dangerous slopes is increasing due to the expanding road networks in mountain areas. Slope risk management presumes the slope risk value beforehand and supports the execution of strategic measures corresponding to that value. Some risk quantification techniques have been proposed and have increasingly been studied for business use with GIS. However, several large problems remain to be addressed. The first is improving the precision of the slope risk value. This is particularly the case with the countermeasures: it is necessary to adequately reflect their positive effect in the slope risk. At present, this effect has not been quantified and an evaluation technique to quantify it has not been established. Second, an evaluation technique for calculating the investment for new slope measures has not been established. For many infrastructure properties, investment appraisal is executed, including cost–benefit analysis (B/C). However, a method to evaluate the validity of the slope measure cost has not been determined. This is a critical problem for efficient investment. The purpose of this paper is, first, to improve the precision of slope risk quantification with slope measures and, second, to propose a new method to evaluate the validity of the slope measure cost.
In this study, slope failure and rockfall are both dealt with as slope collapse disasters. Probability distributions are introduced into a mechanical stability analysis model to quantify the slope risk after the occurrence mechanism of each is presumed. Moreover, we use rockfall simulation by the DEM (Discrete Element Method) to quantify the effect of the countermeasures.

2

QUANTIFICATION OF SLOPE RISK

2.1 Definition of slope risk
In order to quantify the slope risk R, it is defined as follows, as the expected value of the amount lost:

R = p · D    (1)

where p is the probability of the slope disaster and D is the amount lost due to the disaster.

Figure 1. Infinite slope model.

Figure 2. Types of rockfall.

2.2 Calculation of occurrence probability
We indicate the calculation method for each probability separately in this section, because the occurrence mechanisms of slope failure and rockfall are different.

(1) Probability of slope failure
The calculation methods for slope failure probability are divided roughly into statistical and mechanical techniques. Here, we use the mechanical model that Ohtsu proposed, which is based on slope stability analysis. In this method, rainfall is first assumed to be the exogenous factor of the slope disaster. We calculate the slope failure probability pa from the excess probability ψ(α) of the annual probable rainfall at rainfall intensity α and the slope failure probability pf in the case of that probable rainfall. If it is supposed that the rainfall hazard follows the Gumbel distribution, the excess probability ψ(α) will be as follows.
In this equation, a and b are constants obtained from the rainfall history. Next, the collapse type is supposed to follow the infinite slope model (Fig. 1) in order to calculate pf. The safety factor of the slope stability analysis is then expressed as:

F = [c + (γH − γw·Hw) cos²θ · tanφ] / (γH · sinθ · cosθ)    (3)
where γ is the soil unit weight, γw is the unit weight of water, H is the thickness of the sliding layer, Hw is the groundwater level, θ is the slope angle, and c and φ are the soil strength parameters. Moreover, it is necessary to relate α and Hw, so we decided to use Eq. (4) from the reference (JCCA Kinki, 2006).
Now, to quantify the slope failure probability, we calculate the probability of "F < 1", because this represents the unstable (collapse) condition. To do this, we assume the soil cohesion c and the internal friction angle φ to be random variables following a normal distribution, as in Eq. (5). This represents the uncertainty of the soil parameters in physical terms.
As a result, the slope failure probability pf is calculated from the following equation:

pf = P[F < 1]    (6)
Therefore, the slope failure probability pa is as follows.
The feature of this method is that probability distributions are taken into the mechanical stability analysis model. However, it is difficult to determine the volatility of the random variables. In addition, it should be noted that we use a simple equation for ease of calculation, though it is known that the occurrence mechanism is influenced by the degree of saturation, the groundwater level, and so on.
(2) Probability of rockfall
The occurrence mechanism of rockfall is very complex and has not yet been clarified. However, the calculation methods for rockfall probability are also roughly divided into statistical and mechanical techniques. We use the mechanical model that Okimura proposed. Fig. 2 shows the overhang type and fall-off type of rockfall, and the safety factor equation for each type. The random variables c and φ are then introduced into these equations. The probability is estimated as the number of trials in which F falls below one (N′) divided by the total number of trials (N) in a Monte Carlo simulation. This mechanical model is very simple, so it is easy to execute. On the other hand, endogenous factors such as the geological situation and exogenous factors such as rainfall, snow, freeze–thaw, wind and earthquakes cannot be reflected in the model. These points will therefore become research topics in the future.
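A minimal Monte Carlo sketch of this procedure for the slope-failure case follows, assuming the standard infinite-slope safety factor for Eq. (3) and normally distributed c and φ; the means echo slope No. 3 of Table 1 below, while the standard deviations and all other values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def infinite_slope_F(c, phi_deg, gamma=18.0, gamma_w=9.81, H=2.0, Hw=1.0, theta_deg=43.0):
    """Standard infinite-slope safety factor with groundwater level Hw (assumed form)."""
    th, phi = np.radians(theta_deg), np.radians(phi_deg)
    resisting = c + (gamma * H - gamma_w * Hw) * np.cos(th) ** 2 * np.tan(phi)
    driving = gamma * H * np.sin(th) * np.cos(th)
    return resisting / driving

N = 100_000
c = rng.normal(10.4, 2.0, N)     # kN/m^2; mean from Table 1, spread assumed
phi = rng.normal(30.0, 3.0, N)   # deg;    mean from Table 1, spread assumed
pf = np.mean(infinite_slope_F(c, phi) < 1.0)  # N'/N, as described above
print(pf)
```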
2.3 Calculation of loss amount due to disaster
In this study, we define D as the sum of D1, the personal loss; D2, the road restoration cost; and D3, the traffic detour loss. In addition, the loss amount changes depending on whether countermeasures already exist or not. However, this section does not deal with the effects of countermeasures; those details are given in Section 4. The collapse types, which are needed to calculate D1, are modeled as in Fig. 3.
Figure 3. Type of collapse.
(1) Personal loss, D1
In the case of slope failure, a "buried case" occurs when the sand exceeds the height of a car; the buried case is assumed to result in death, incurring the human loss I (yen). The loss amount decreases linearly with the reduction of the disaster level. In the case of rockfall, the case of being situated "right under the falling rock" is assumed to result in death, and the loss is calculated in the same manner as before (Fig. 3). The number of victims is calculated from the daily traffic volume of the road in question and the average number of passengers. Furthermore, the number of victims is calculated separately for compact cars and large-sized cars, in order to reflect the difference in car height. The human loss used is I = 29,764,000 yen, as obtained from the survey of the Japanese Cabinet Office.
(2) Road restoration cost, D2
For both slope failure and rockfall, we use Eq. (9) to calculate D2 (yen). This equation is a regression function between the restoration cost and the deposited volume of sand V (m³), based on past disaster records (PWRI, 2004).
Figure 4. Relationship between collapse volume and recovery time (Kohashi et al. 2007).
(3) Traffic detour loss, D3
The traffic detour loss arises from the closing of roads when slope failure or rockfall occurs. D3 consists of two kinds of loss: the "cost loss of time" generated by the increase in running time, and the "running cost loss" generated by the increase in mileage. These can be calculated from the detour distance and the daily traffic volume. The recovery time has a large influence on the detour loss, and Fig. 4 shows the relation between the recovery time N (days) and the collapse volume V (m³). In this study, we use the regression function given in Eq. (10). However, extensive losses not reflected in the equation can occur if there is no detour or if the event occurs near an isolated village, so it is necessary to evaluate D3 individually for each slope in such cases.

3

RESULTS OF CASE STUDY

Table 1. Slope failure conditions.

Slope no.                                    1       2       3       4       5
V (m³)                                       2,100   600     900     800     1,500
φ (deg.)                                     30      35      30      35      25
θ (deg.)                                     41      38      43      36      38
γ (kN/m³)                                    18      18      18      18      18
c (kN/m²)                                    16.7    2.7     10.4    4.5     13.2
H (m)                                        3.5     1       2       1.5     2.5
Traffic volume (/day)                        4,502   4,502   4,502   4,502   4,502
Mix rate of large-sized cars (%)             10      10      10      10      10
Distance of original/detour road (km)        20/40   10/25   15/25   15/50   15/50
Speed of original/detour road (km/h)         50/30   50/30   50/30   50/30   50/30
In this section, the calculations of slope risk using the above method are shown. In this paper, we set the ten slope conditions as in Tables 1 and 2. These are not real slopes, but they allow us to assess which factors influence the results. In addition, the probable rainfall is based on the rainfall history for 1945–2006 at the Takayama, Gifu observatory. Previously, there was no method of indicating a slope risk expressed as a monetary value for slope failure and rockfall at the same time. Here, however, we show that each type of slope risk can be evaluated simultaneously by the slope risk index, as illustrated in the sketch below. Moreover, it is understood that the collapse probability obtained from the stability analysis used so far does not necessarily correlate with the slope risk. This is because the risk includes not only an index showing the danger of slope collapse but also one considering the amount lost when the disaster occurs. For instance, comparing slopes No. 3 and No. 9, the collapse probability is almost at the same level; however, slope No. 3 carries double or more the risk of slope No. 9 (Fig. 5).
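A small sketch of the risk ranking behind Fig. 6: order the ten slopes by R = pa · D using the values of Table 3 below (monetary units as in the table).

```python
# pa and D for slopes 1-10, transcribed from Table 3.
pa = [0.0621, 0.9960, 0.2001, 0.3384, 0.1041, 0.3870, 0.4650, 0.0200, 0.2050, 0.0350]
D = [5862, 1578, 2567, 2343, 5605, 1188, 1165, 1678, 1172, 1252]

R = {no + 1: p * d for no, (p, d) in enumerate(zip(pa, D))}
priority = sorted(R, key=R.get, reverse=True)  # largest risk first
print(priority)  # slope 2 ranks first and slope 8 last, consistent with Table 3
```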
Table 2. Rockfall conditions.

Slope no.                                    6       7       8       9       10
Type of collapse*                            (ov)    (ov)    (rf)    (rf)    (rf)
Weight of rock W (kN)                        12.9    2.76    6.9     86.3    4.1
Angle of sliding surface α (deg.)            80      95      65      50      55
Length of sliding surface Y (m)              0.8     0.4     0.5     2.5     0.2
Length of crack Y (m)                        0.0     0.2     –       –       –
φ (deg.)                                     35      35      35      35      35
c (kN/m²)                                    15      15      15      15      15
Traffic volume (/day)                        4,502   4,502   4,502   4,502   4,502
Mix rate of large-sized cars (%)             10      10      10      10      10
Distance of original/detour road (km)        10/25   20/30   20/70   5/20    10/30
Speed of original/detour road (km/h)         50/30   50/30   50/30   50/30   50/30

*(ov): overhang type; (rf): fall-off type.

Table 3. Results of slope risk analysis.

Slope no.   pa        D1      D2      D3      D       R
1           0.0621    1,446   2,157   2,258   5,862   364
2           0.9960    57      714     807     1,578   1,571
3           0.2001    803     1,002   761     2,567   515
4           0.3384    0       906     1,437   2,343   793
5           0.1041    1,307   1,580   2,719   5,605   583
6           0.3870    566     137     485     1,188   460
7           0.4650    566     136     463     1,165   542
8           0.0200    566     136     976     1,678   34
9           0.2050    566     140     466     1,172   240
10          0.0350    566     136     549     1,252   44

Figure 5. Results of slope risk and failure probability.

A methodology to determine the priority level of slope measures had not been clarified. However, these priority levels can be decided more reasonably by arranging the slopes in order of their slope risk value (Fig. 6). This also means that we can order the slopes in terms of their impact on society. In addition, the slope risk can be used as a means of accountability to residents as to when slope measures will be executed under budget reductions.

Figure 6. Priority of executing slope measures.

4

QUANTIFICATION OF SLOPE MEASURE EFFECT

This section describes how to quantify the effects of countermeasures that either already exist or will be built. This study pays attention to the rockfall disaster; the effect of a countermeasure is quantified by calculating the rockfall behavior using a two-dimensional DEM.

4.1 Concept of DEM rockfall simulation

DEM is a numerical analysis method that progressively solves an independent equation of motion for each element. At present, this method is the one most commonly used for rockfall simulations in Japan. The ground slope is an actual section of a real site, and the ground surface is approximated by a single layer of particles. Often, the ground slope is expressed by a particle assembly; in this study, however, only one layer was used because it reduces the lengthy analytical time required to calculate many cases (details are provided later). Based on preliminary surveys, the location of the rockfall generation was determined. To simplify the calculations, the rock particle was assumed to have a circular shape. Furthermore, the shape and location of the countermeasure, such as a retaining wall, are set up arbitrarily; this represents the situation of establishing a new countermeasure. To quantify the effect of a slope measure, it is necessary to judge whether the road would be struck as a result of the rockfall simulation. Two cases are used as judgment standards, as represented in Fig. 7 and Table 4. In case (B), the judgment standard is whether the energy exceeds the possible absorption energy of the countermeasure. In order to describe the uncertain rockfall behavior caused by the initial conditions, the generation location is regularly changed and the rockfall simulation is executed 38 times. As a result, the probability of being struck by
Figure 8. Results of simulation (wall height = 2.0 m).
Figure 7. Judgment standards for road being struck.

Table 4. Judgment standards for road being struck.
(A) A rock exceeds the retaining wall.
(B) A rock destroys the retaining wall.
Table 5. Analytical parameters of DEM.

Spring constant (normal)           5.0 × 10^6
Spring constant (shear)            5.0 × 10^6 × 1/4
Damping factor (normal)            0.3
Damping factor (shear)             0.3
Coefficient of particle friction   0.477
Figure 9. Results of simulation (wall height = 2.5 m).
As a result, the probability of being struck by rockfall is calculated as the number of times either judgment standard (A) or (B) is met divided by the total number of simulations. It is also possible to verify the width of the rockfall behavior. In addition, another method of describing uncertain rockfall behavior exists: a probability distribution taken over the analytical parameters of the rockfall simulation. Because the influence of the analytical parameters on falling-rock behavior has not yet been clarified, the former method is used in this study.
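A minimal sketch of this counting procedure follows. The per-run outputs (rock height at the wall and kinetic energy in the x-direction) are hypothetical stand-ins for DEM results; the thresholds correspond to the 2.0 m wall and the 300 kJ absorption energy discussed in the next subsection:

def strike_probability(runs, wall_height_m=2.0, absorb_energy_kj=300.0):
    """Fraction of simulation runs meeting judgment standard (A) or (B)."""
    struck = 0
    for height_m, energy_kj in runs:
        exceeds_wall = height_m > wall_height_m        # standard (A)
        destroys_wall = energy_kj > absorb_energy_kj   # standard (B)
        if exceeds_wall or destroys_wall:
            struck += 1
    return struck / len(runs)

# 38 hypothetical runs: (max rock height at the wall [m], energy [kJ]).
runs = [(1.2, 150.0)] * 17 + [(2.4, 180.0)] * 12 + [(1.5, 340.0)] * 9
p_strike = strike_probability(runs)                 # 21/38, about 55%
print(f"P(struck) = {p_strike:.0%}, risk decrease rate = {1 - p_strike:.0%}")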
4.2 Results of DEM rockfall simulation
The analytical parameters of the DEM are set according to Table 5. Fig. 8 represents one example of a simulation result; the solid lines show the tracks of two or more falling rocks, and the figure's scale is equivalent to the actual site. In order to determine the number of times judgment standard (A) or (B) is met, two numbers are needed: first, the number of times the rockfall height exceeds the retaining wall height of 2.0 m, and second, the number of times the rockfall energy exceeds the possible absorption energy of the retaining wall. To be more specific, the possible absorption energy is set at 300 kJ, and it is assumed that the countermeasure is destroyed if it receives more than 300 kJ of kinetic energy in the x-direction. In this study, that number was 21 out of 38; therefore, the probability of being struck by rockfall is estimated to be 55%. Next, it is necessary to calculate the risk decrease rate in order to relate this result to the slope risk. In this case, the risk decrease rate is estimated at 45%, calculated as (100 − 55)%. In addition, rockfall simulation may be used not only to examine the effects of countermeasures but also to decide their best scale and shape. For instance, Fig. 9 represents the result of raising the retaining wall by only 0.5 m and executing exactly the same simulation as before; the risk decrease rate increased to 70%. The risk decrease rate for an arbitrary scale and shape can thus be estimated by DEM rockfall simulation, both for existing and for newly planned measures.

4.3 Assignment of rockfall simulation

DEM is the method most commonly used for rockfall simulation; however, it is not clear how strongly the results are influenced by the analytical parameters, the effects of talus, rock shape, and so forth. In particular, the effects of talus (deposited material or weathered slope surfaces) into which a falling rock digs, the breakage of a falling rock when it is crushed, and other important factors all affect rockfall behavior. Moreover, when a falling rock is modeled as a circle in 2D, or a sphere in 3D, the influence of shape and the interaction between rotational and translational motion need to be considered. The points described so far represent obstacles to simulating rockfall behavior adequately. At the same time, however, energy dissipation effects such as rock crushing and digging into the talus must be considered when designing effective, natural countermeasures. Therefore, it is important to continue to improve the precision with which falling-rock behavior is categorized, in order to achieve more effective yet inexpensive measures. It will also become possible to measure the effect of slope measures and to evaluate investment decisions by using the risk decrease rate. This is described further in the next section.

Figure 10. Meaning of slope risk.

Figure 11. Comparison of the two types of LCC (w = 65%).
5 EVALUATION METHOD OF INVESTMENT FOR COUNTERMEASURES
In this section, we propose a method for evaluating the initial investment in countermeasures. At present, no technique for evaluating the validity of countermeasure cost has been established. The biggest reason is that it is quite difficult to forecast the degree of loss when a disaster occurs and to quantify the damage reduction achieved by countermeasures. However, as seen above, it is becoming feasible to use the slope risk for the former and the quantification of the slope measure effect for the latter. Thus, we propose an evaluation method for the validity of an initial investment in countermeasures based on slope risk.

5.1 Slope LCC (Life Cycle Cost)

To examine the amount of investment, we must consider the costs generated during the use period, and the concept of LCC is necessary for this, as for general infrastructure. The damage cost of a slope is generated only at the moments of disaster, as shown in Fig. 10 (left). On the other hand, the slope risk is the expected value per year of the cost that may be generated, as shown in Fig. 10 (right). In other words, the slope risk can be regarded as the slope's cost per year, and the slope LCC can therefore be calculated by integrating the slope risks over the use period. In addition, the evaluation techniques for investment amounts that have been developed for general infrastructure can be adapted through the concept of slope LCC. The slope LCC is defined as follows:

LCC = C0 + Σ_{i=1}^{N} (1 − w)·Ri/(1 + r)^i    (11)

where Ri is the slope risk of the ith year, N is the use period, C0 is the initial investment, w is the risk decrease rate achieved by the investment, and r is the Japanese social discount rate (4%). The OM cost (operation and maintenance cost) is not included in Eq. (11) because we assume that slope inspection and survey costs can be disregarded compared with the slope risk. In addition, w must be estimated from the DEM simulation results described in section 4, because it differs depending on the type and scale of the measure at each slope.

5.2 Evaluation index of investment, W

To examine the amount of the investment, it is necessary to compare the LCC when the initial investment is made (countermeasure executed) with the LCC′ when it is not (unmeasured), obtained by setting C0 = 0 and w = 0 in Eq. (11). If LCC < LCC′, the project is judged necessary to execute. This means that the standard of evaluation is the relationship between the two LCCs, as shown in Fig. 11. Therefore, to simplify investment decisions, we propose a new index W based on the ratio of the two LCCs, as in Eq. (12):

W = (LCC′ − LCC)/LCC′ × 100 (%)    (12)

With the index W, the project is judged "necessary to execute" if W is positive, and "should not be executed" if W is negative. Table 6 shows an example of investment decisions when several measure plans are proposed. The decision-maker can determine the most effective plan by choosing the plan with the largest W; in this case, Plan (B) is the most effective investment plan. For further discussion, we call the initial investment at which W = 0 the "limit investment amount", Csup.
Table 6. Example of investment judgment for each plan.

                         Non    Plan (A)        Plan (B)        Plan (C)
Measures cost            0      100             200             400
Risk decrease rate (%)   0      50%             65%             90%
LCC′ or LCC              500    350 (100 + 250) 375 (200 + 175) 450 (400 + 50)
W (%)                    –      +20             +25             +10
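A numerical sketch of Eqs. (11)–(12) follows. The constant annual risk and the 50-year use period are illustrative assumptions, calibrated so that the unmeasured LCC is 500 as in Table 6; the plan parameters follow Plan (B):

def lcc(annual_risk, years=50, c0=0.0, w=0.0, r=0.04):
    """Slope life-cycle cost, Eq. (11): initial cost plus discounted residual risk."""
    return c0 + sum((1.0 - w) * annual_risk / (1.0 + r) ** i
                    for i in range(1, years + 1))

# Constant annual risk chosen so that the unmeasured LCC equals 500.
R_annual = 500.0 / sum(1.0 / 1.04 ** i for i in range(1, 51))

lcc_u = lcc(R_annual)                      # unmeasured: C0 = 0, w = 0
lcc_m = lcc(R_annual, c0=200.0, w=0.65)    # measured, e.g. Plan (B)
W = (lcc_u - lcc_m) / lcc_u * 100.0        # Eq. (12)
print(f"LCC' = {lcc_u:.0f}, LCC = {lcc_m:.0f}, W = {W:+.0f}%")  # W = +25%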
Table 7. Volatility factors in the future slope risk.

Volatility of probability:   (A) Volatility of probable rainfall
                             (B) Volatility of geometrical parameters
Volatility of loss amount:   (C) Volatility of traffic volume

Figure 13. Result of slope LCC.
Figure 14. Relationship between W and C0 for the decision-maker.
Figure 12. Traffic volume prediction by use of “Arithmetic Brownian motion”.
5.3 Evaluation of the investment amount considering risk volatility
In the previous section, the volatility of slope risk was not considered. However, when calculating the slope LCC it is necessary to consider how the risk fluctuates over the use period. We therefore describe the evaluation of the investment amount under uncertainty. The risk volatility factors are shown in Table 7; among them, the governing factor is (C). In this study, we therefore assume that (C) is the only volatility factor in the future slope risk. It is then necessary to model the future traffic volume. This estimation is usually carried out according to scenarios based on forward road planning, but such forecasts have been criticized for their uncertainty and for the wide range of the predicted values. For this reason, we model the annual volatility of traffic volume as arithmetic Brownian motion, which is used in the field of financial engineering, and evaluate it by Monte Carlo simulation. Fig. 12 shows one example of these results. A feature of arithmetic Brownian motion is that it is a Markov process: the traffic volume of the present term depends only on the traffic volume of the preceding term. Once the traffic volatility is given, the LCC can be found; the result is shown in Fig. 13. The result demonstrates that LCC50 is distributed with a certain width centered on the mean value. Here, we take the confidence interval of the distribution to be 90%, defining the bounds of LCC50 at cumulative probabilities of 5% and 95% (the latter being the downside value, LCC50^DownSide). From these, we can calculate the limit investment amount. With this result, the relationship between W and the initial investment C0 is derived as in Fig. 14. It can then be determined whether the initial investment under examination lies in the safe zone or in the dangerous zone, and the width of the intermediate zone generated by the uncertainty of the traffic can also be understood. For instance, if a new countermeasure costing 150,000 thousand yen were proposed, the decision-maker could judge that the proposal is not economical but is not far from Csup; if a more expensive plan were proposed, they might require the measure plan to be changed fundamentally. Thus, investment evaluation under uncertain conditions becomes possible with the index W.
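A sketch of the Monte Carlo procedure described above follows. The drift and volatility of the arithmetic Brownian motion, and the linear mapping from traffic volume to annual risk, are assumed values for illustration, not those of the study:

import numpy as np

rng = np.random.default_rng(0)

def simulate_traffic(q0=4502.0, drift=0.0, sigma=150.0, years=50, n_paths=10000):
    """Arithmetic Brownian motion paths of annual traffic volume (Markov)."""
    steps = drift + sigma * rng.standard_normal((n_paths, years))
    return np.maximum(q0 + np.cumsum(steps, axis=1), 0.0)  # no negative traffic

def lcc_paths(traffic, k=0.05, r=0.04):
    """Discounted 50-year LCC per path, with annual risk = k * traffic (placeholder)."""
    years = np.arange(1, traffic.shape[1] + 1)
    return np.sum(k * traffic / (1.0 + r) ** years, axis=1)

lcc50 = lcc_paths(simulate_traffic())
lo, hi = np.percentile(lcc50, [5, 95])   # 90% confidence interval of LCC50
print(f"mean = {lcc50.mean():.0f}, 5% bound = {lo:.0f}, 95% bound = {hi:.0f}")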
6 CONCLUSION
In this paper, it has been shown that slope risk can be quantified in terms of both slope failure and rockfall when slope risk management is executed. Furthermore, it has been demonstrated that the priority level of countermeasures can be determined reasonably on the basis of slope risk. Moreover, rockfall simulation by DEM is an effective method for quantifying the slope measure effect, and the risk decrease rate of a countermeasure can be calculated by judging, over many simulation runs, whether the road is struck. We have also proposed a new method for evaluating the validity of an initial investment using the index W based on the concept of slope LCC. This method can support decision-making under uncertain conditions, as it incorporates the volatility of future traffic volume into the risk. Future tasks include determining how to treat the statistical and mechanical error margins of each model. Furthermore, the required precision of the risk differs between the steps of risk management and between decision-makers, so it is necessary to construct a complete system of risk management. For instance, decisions on measure priority are based on relative risk evaluations, whereas investment decisions are based on absolute risk values; in the latter case, high precision is needed. In short, it is necessary to clarify risk quantification techniques corresponding to the required risk precision. In addition, further research to improve the precision of DEM rockfall simulation is necessary.

REFERENCES
Cundall, P.A. 1971. A computer model for simulating progressive large scale movements in blocky rock system, Proceedings of the Symposium of the International Society of Rock Mechanics (Nancy, France), Vol. 1, No II-8 Japanese civil engineering consultants association Kinki branch, JCCA kinki, July 2006. Introduction of deterioration concept for slope stability evaluation. (in Japanese) Japanese Society of Civil Engineers, 2003, Guidebook on Traffic Demand Forecasting, Vol. 1 (in Japanese) Kohashi, H., Kato, S., Ishihara, H & Furuya, A. September 2007. Evaluation of recovery time of road slope disaster, JSCE 2007 Annual meeting, Hiroshima: 877–878 (in Japanese) Komura, T., Muranishi, T., Nishizawa, K. & Masuya, H. 2001. Impact of falling rock on field slopes used in rock-fall simulation method, Journal of Structural Engineering, JCSE, VI-47A: 1613–1620. (in Japanese) Okimura, T., Torii, N., Hagiwara, S. & Yoshida, M. 2002. A proposal for risk evaluation method for rock fall on road slope, Journal of the Japan Landslide Society, Vol. 39, No. 1, June: 22–29 (in Japanese) Otsu, H., Onishi,Y., Nishiyama, S. & Takeyama,Y. 2002. The Investigation of Risk Assessment of Rock Slopes Considering the Socioeconomic Loss due to Rock Fall, Journal of JSCE, No.708/III-59: 187–198. (in Japanese) Otsu, H., Supawiwat, N., Matsuyama, H. & Takahashi, K. 2005. A Study on Asset Management of Road Slopes Considering Performance Deterioration of Groundwater Counter-measurement System, Journal of JSCE, No.784/VI-66: 155–169. (in Japanese) Public works research institute, PWRI. 2004. Manual that supports risk analysis and management for road slope disaster (idea). (in Japanese) Usiro, T. 2001. Rockfall numerical simulation software DRSP, Kochi Prefecture bridge association. (in Japanese)
Risk assessment on the construction of interchange station of Shanghai metro system Z. W. Ning, X. Y. Xie & H. W. Huang Key laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, Shanghai, P. R. China Department of Geotechnical Engineering, Tongji University, Shanghai, P. R. China
ABSTRACT: With the rapid growth of the Shanghai metro system, the risks accompanying the construction of interchange stations have attracted great concern. In this paper, an overview of the interchange stations in the 2012 Plan is first presented through thorough statistics. Both the objective and the subjective risk factors of interchange station construction are then analyzed. Finally, a risk assessment model is established using the Fuzzy Synthetic Evaluation Method and the Analytic Hierarchy Process to evaluate the risk level of the 4-line Century Avenue interchange station.
1 INTRODUCTION
The Shanghai metro system, which currently operates 8 lines with a total length of 228.4 km, is one of the fastest growing metro systems in the world. According to the plan, the system will comprise 13 lines with a total length of over 500 km in 2012. Shanghai metro construction has therefore stepped from the previous single-line scale into a period of network-scale construction, which inevitably raises new risk issues. Among these, the risks accompanying the construction of interchange stations are a great concern. Generally speaking, their more complicated configuration compared with normal stations increases the chance of failure during excavation, and failures would cause severe consequences, as interchange stations are normally located at central spots of the city. Moreover, a large portion of interchange stations are constructed by expanding or reconstructing existing stations, which poses great threats to the safety of the operating metro lines. Apart from these technical concerns, poor risk management on site, lack of experienced contractors and qualified personnel, as well as equipment problems, are all potential causes of accidents or delays in the construction of interchange stations.
2 STATISTIC OF INTERCHANGE METRO STATION IN SHANGHAI 2012 PLAN
According to the 2012 Plan, nearly 60 interchange stations will be built. Before the risk analysis, we conducted a statistical study to obtain an overview of these interchange stations in terms of station scale, interchange mode and construction mode.
Figure 1. Distribution of interchange stations with different scale.
2.1 Statistic study on station scale

There are 59 interchange stations in the 2012 Plan: 43 2-line interchange stations, 15 3-line interchange stations and 2 4-line interchange stations, making up 73%, 25% and 2% of the total respectively. Figure 1 shows the distribution of interchange stations of different scales among all 13 metro lines. Line 4 has the most interchange stations, as it is a circular line that intersects many other lines. The early built lines (Line 1 to Line 4), which make up the framework of the Shanghai metro system, contain more large-scale (3-line and 4-line) stations than the lines newly built or still in planning. This indicates that, in the following few years, many stations on the old lines will need to be expanded or reconstructed for interchange functions.
Figure 2. Distribution of interchange pairs with different interchange modes.
Figure 3. Distribution of interchange stations with different construction modes.
2.2 Statistic study on interchange mode

In the Shanghai metro system, there are three interchange modes: parallel interchange, cross interchange and passage interchange (interchanging via a passage connecting the separate stations of different lines). Every pair of two different lines in an interchange station belongs to one interchange mode; thus, when counting interchange modes, an n-line interchange station contains C(n, 2) = n(n − 1)/2 interchange pairs. Among all the interchange pairs in these interchange stations, there are 18 pairs of the parallel-interchange mode, 34 pairs of the cross-interchange mode and 42 pairs of the passage-interchange mode, making up 19%, 36% and 45% of the total respectively. Figure 2 shows the distribution of interchange pairs with different interchange modes among all 13 metro lines. From Figure 2, a high percentage of the passage-interchange mode is found in Line 1, Line 2 and Line 3. That is because a complete metro network had not yet formed in the early age of metro construction, and little consideration was given to future interchanging needs in design. Under such circumstances, passage interchange becomes a low-cost and low-risk option for existing lines to connect with new lines. In the newly built lines and the lines still in planning, the parallel-interchange and cross-interchange modes are widely adopted for their higher efficiency and greater convenience for passengers.
2.3 Statistic study on construction mode

As the old saying goes, Rome was not built in a day; it is likewise impossible to build all the metro lines simultaneously. Besides, the alignment of lines may be adjusted after the original planning. Therefore, there are currently three construction modes for interchange stations: reconstruction, built at stages and built at one time. The reconstruction mode refers to reconstructing or expanding a station without an interchange function into an interchange station. In the built-at-stages mode, the interchange station is designed as a whole complex while the sections belonging to different lines are built at different stages. In the built-at-one-time mode, the sections belonging to different lines within an interchange station are literally built at one time. Among all the interchange stations, there are 24 stations of the reconstruction mode, 23 of the built-at-stages mode and 12 of the built-at-one-time mode, making up 41%, 39% and 20% of the total respectively. Figure 3 shows the distribution of interchange stations with different construction modes among all 13 metro lines. Figure 3 demonstrates the high percentage of the reconstruction mode in the early built lines, i.e. Line 1, Line 2, Line 3 and Line 4. This is again because there was little consideration of future network expansion when these stations were designed. The interchange stations on new lines, except for those intersecting with existing lines, are mostly built at stages or built at one time with an overall interchange design concept.

3 RISK ANALYSIS OF INTERCHANGE STATION CONSTRUCTION

In this paper, the authors divide the risk factors of interchange station construction into two categories: objective risk factors and subjective risk factors. The former are commonly recognized technical or natural issues that add inherent difficulties and uncertainties to proposed projects, while the latter are usually related to human error or organizational deficiencies in management, which may lead to the failure or delay of projects.
3.1 Objective risk factors of interchange station construction

From the statistical study of the interchange stations of the Shanghai metro system above, it can be seen that complicated configurations and various construction modes are unique features of interchange stations compared with normal stations. To a great extent, they affect the risk level of projects. Besides, as in any ground-related project, the geological conditions and the surrounding environment should also be taken into consideration as objective risk factors.
3.1.1 Station configuration

Unlike the uniform configuration of normal metro stations, interchange stations differ in excavation depth, excavation area and plan layout according to the interchange mode. The relationship between station configuration and construction risk is discussed below in terms of the three interchange modes stated above.

1 Cross interchange. In the cross-interchange mode, because the platforms of the different metro lines must be located at different levels, the stations are normally three-storey or even four-storey, with excavation depths ranging from 20 m to 25 m in Shanghai. Such deep excavation increases the chance of failure of the retaining structure and of excess displacement of the adjacent ground. Besides, the plan layouts of cross-interchange stations tend to be irregular, which is likely to result in poor arrangement of bracing and weak joints at the corners of the retaining walls. Finally, if the station is not built at one time, the later-built line has to pass up through or down through the existing line, which inevitably poses a great threat to the safety of the operating line.

2 Parallel interchange. In the parallel-interchange mode, though the plan layout of the station is usually rectangular, the excavation area is double or even triple that of a normal station, which brings a higher level of risk in excavation. However, the excavation depth can be kept below 20 m, as the platforms of the different lines can be arranged at the same level. Similarly, if the station is not built at one time, the risk increases due to the very close distance between the existing line and the later-built line.

3 Passage interchange. Because a station of the passage-interchange mode actually consists of separate stations connected by a passage, its construction risk is the same as that of a normal station. Even when new lines are incorporated later, the impact is relatively low due to the large distance between the different lines.

The risk level of interchange station construction in terms of station configuration is summarized in Table 1.

Table 1. Risk level of interchange station construction in terms of station configuration.

                   Risk level: Low  →  High
Interchange mode   Passage          Parallel      Cross
Depth              shallow/medium   medium/deep   deep
Area               normal           large         medium
Layout             regular          regular       complicated

3.1.2 Construction mode

A unique feature of interchange station construction that differs from normal stations is the possible interval between the construction of the different parts of a whole station. The risks accompanying each construction mode as defined in section 2 are therefore analyzed below.
1 Reconstruction. In the reconstruction mode, the existing station is partially reconstructed or expanded to incorporate new lines. There are inevitably loading or unloading effects on the existing structure induced by construction activities above, beneath or beside it, which cause a certain response of the early built station. According to the protection requirements for operating metros in Shanghai, neither the vertical nor the horizontal displacement of the station is allowed to exceed 20 mm. Therefore, extraordinarily high risks exist in reconstruction projects of interchange stations, especially in Shanghai's highly sensitive, saturated soft clay ground. Moreover, the rarity of experience with similar projects multiplies the potential risks. Up to now, there have been several successfully completed reconstruction projects of interchange stations of various types in Shanghai, such as Century Avenue Station (parallel/up-through), Shanghai Indoor Stadium Station (down-through) and People's Square Station (parallel). More details are presented in section 5.

2 Built at stages. Due to the limited availability of contractors, labor and equipment, as well as the endurance of the environment and the public, the number of metro projects carried out during the same period has to be controlled, so a number of interchange stations cannot be built at one time. However, in the built-at-stages mode, the needs of future expansion and measures to reduce the impact of later-built lines are taken into consideration in the structural design phase. Besides, the joint areas connecting different lines are usually built and carefully treated in advance with the first-built line. Moreover, the bids for the different lines within an interchange station usually go to the same contractor to ensure overall organization and a smooth handover between the separate construction stages. For all these reasons, though there are still impacts of the later-built line on the existing one, the risk of the built-at-stages mode is relatively lower than that of the reconstruction mode. According to the statistics above, this mode is now widely adopted in the new lines under construction.

3 Built at one time. The construction of an interchange station in the built-at-one-time mode in Shanghai is a typical deep excavation project in a soft clay area. Generally speaking, its risk is lower than that of the previous two modes for a similar scale under identical geological and environmental conditions.
The risk level of interchange station construction in terms of construction mode is summarized in Table 2.

Table 2. Risk level of interchange station construction in terms of construction mode.

Construction mode    Passage   Parallel      Cross
Built at one time    low       low/medium    medium
Built at stages      low       medium/high   high
Reconstruction       medium    high          high

3.1.3 Geological conditions

In Shanghai, soft clay layers with low strength, large compressibility and high water content are widely distributed. The weakest silty clay and clay layers lie 7 to 20 m underground, where most metro stations are located; this greatly increases the chance of retaining structure failure and excess ground movement during excavation. Confined water is another important risk source for deep excavation in Shanghai. It leads to soil up-rush at the bottom when its pressure is close to the gravity of the overlying soils. Also, seepage may occur around the foot of retaining walls due to the water pressure and the high permeability of the fine sand layer. Three confined water layers might affect the construction of metro stations in Shanghai: the sub-confined water in the sandy silt layer, the 1st confined water in the silty fine sand layer and the 2nd confined water in the sand layer. Based on the depth of and the relationship between the different confined water layers, Shanghai is divided into five confined water zones (Liu, 2008). The construction risk of interchange stations varies among these zones, as shown in Table 3.

Table 3. Risk level of interchange station construction in terms of geological conditions (risk increases from Zone I to Zone V).
Zone I: thin or missing sub-confined water layer; 1st confined water layer located at −40 m to −50 m.
Zone II: 1st confined water layer located at around −30 m.
Zone III: 1st confined water layer located at around −30 m; 1st and 2nd confined water layers are connected.
Zone IV: thick sub-confined water layer; 1st and 2nd confined water layers are connected.
Zone V: 1st confined water layer located at a shallow level.

3.1.4 Surrounding environments

As the metro network grows rapidly in the downtown areas of Shanghai, construction increasingly threatens the safety of surrounding buildings and infrastructure. The construction of interchange stations bears especially high risk, as they are usually located near commercial centers or other important city spots. A control criterion for environment protection in metro excavation, based on engineering experience in Shanghai, can be adopted as a reference for the risk assessment of interchange station construction (Specification for Excavation in Shanghai Metro Construction, 2000). In this specification, three environment protection grades are set according to the importance of the adjacent objects and the distance between the objects and the construction site, as summarized in Table 4.

Table 4. Risk level of interchange station construction in terms of environmental conditions (risk decreases from the 1st to the 3rd grade).
1st-grade (high risk): there are metro lines, municipal common trenches, gas pipes, main water pipes, important buildings or structures within the range of 0.7H* from the excavation.
2nd-grade: there are important pipelines, buildings or structures within the range of H ∼ 2H from the excavation.
3rd-grade (low risk): there are no important pipelines, buildings or structures within the range of 2H from the excavation.
*H: excavation depth.

3.2 Subjective risk factors of interchange station construction

Indeed, objective risk factors like technical difficulties and ground uncertainties increase the chances of project failure. However, whether failure occurs or not is to a large extent affected by subjective factors like management and personnel expertise. Both Sowers (1993) and Bea (2006) have studied
a large number of well-documented cases of failed civil engineering projects. Though there is a long time span between their studies, they reached similar findings: approximately 80% of failures are caused by subjective factors such as human or organizational shortcomings. In the following, the functioning of the risk management system, contractor experience, personnel qualification and equipment condition are discussed.
3.2.1 Functioning of risk management system

A well established and effectively functioning risk management system is essential for risk control, especially in the construction phase. Very important are short and efficient communication channels in conjunction with clearly defined responsibilities (Savidis, 2007), and the management of all information, such as monitoring data, is the core of the risk management system of geotechnical projects. Unfortunately, only some contractors realize the importance of a risk management system and are willing to put resources into it. In many cases, careless inspection of monitoring data or slow responses to emergencies under poor risk management have led to serious accidents.

3.2.2 Contractor experience

Underground projects present distinct regional features, because different geological conditions require different construction methods and parameters. Therefore, contractors with less experience in soft clay areas may bring higher risk than local contractors in Shanghai. Moreover, contractors with rich experience are more sensitive to signs of potential hazards, so effective measures can be carried out in time to prevent accidents from happening, or at least to minimize the negative consequences as much as possible. However, because the experienced local contractors are not sufficient for the large-scale metro construction in Shanghai, the import of contractors with less experience is inevitable.

3.2.3 Personnel qualification

Metro projects are mostly ground-related, with high risk and strict technical requirements, and can only be handled by specially trained staff. According to the relevant codes in China, certificates are compulsory for staff participating in construction projects. For a large-scale project like the interchange metro stations discussed here, the project manager should hold the 1st-class manager qualification, and the proportion of senior professional personnel in the managing group should be higher than 10%. However, due to the booming development of metros in China, the shortage of excellent project managers and qualified technical staff has become an urgent issue. There are situations where contractors employ unqualified staff or appoint inexperienced managers, which are reported to be direct or indirect causes of many metro construction accidents in China.

3.2.4 Equipment condition

Compared with other factors, although fewer structural failures or casualties are directly caused by equipment problems, they often lead to delays in construction procedures. Under the pressure of the tight schedule of the metro construction plan in Shanghai, overuse and insufficient maintenance of equipment are widely found among contractors.

4 RISK ASSESSMENT MODEL

In this general risk assessment model for the construction of interchange stations, a primary risk value Fo is first introduced concerning the objective risk factors. Then, in order to consider the impact of the subjective factors, an adjustment coefficient α is deduced as well. The final risk value is obtained as follows:

F = α · Fo    (1)

Both Fo and α are calculated by applying the Fuzzy Synthetic Evaluation Method and the Analytic Hierarchy Process (AHP) (Liang, 2001).

4.1 Calculation of primary risk value Fo
1 Factor set. The factor set for Fo is

U = {u1, u2, u3, u4}

where u1 is 'station configuration', u2 is 'construction mode', u3 is 'geological conditions' and u4 is 'surrounding environments'.

2 Weight set. The weight set for the factors is

AF = (a1, a2, a3, a4)

which can be determined by the AHP method.

3 Comment set. The comment set for Fo is

V = {v1, v2, v3, v4}

where v1 is 'very high', v2 is 'high', v3 is 'medium' and v4 is 'low'. The values assigned to v1, v2, v3 and v4 are '4', '3', '2' and '1' respectively, for quantification of the assessment result.

4 Evaluation matrix. The evaluation matrix for Fo is expressed as

RF = (rij)4×4

where rij is the degree of membership of factor ui to risk level vj. RF can be obtained from membership functions, from statistics or by referring to a pre-defined relationship between U and V.

5 Primary risk value Fo. The evaluation vector BF is calculated as

BF = AF ◦ RF    (2)
Table 5. Risk levels of interchange station construction.

Risk level   Description   Risk value
I            low           <1.5
II           medium        1.5 − 2.5
III          high          2.5 − 3.5
IV           very high     >3.5
This gives the degree of membership of the target project to the different risk levels. The primary risk value Fo is then

Fo = BF · V^T    (3)
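The weight sets above come from the AHP. A compact sketch of the eigenvector method follows, with a hypothetical pairwise comparison matrix for u1–u4 (the paper's actual expert judgments are not reproduced):

import numpy as np

# Hypothetical Saaty-scale pairwise comparison matrix for u1..u4.
C = np.array([
    [1.0, 2.0, 3.0, 3.0],
    [1/2, 1.0, 2.0, 2.0],
    [1/3, 1/2, 1.0, 1.0],
    [1/3, 1/2, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(C)
k = np.argmax(eigvals.real)                      # principal eigenvalue index
A = np.abs(eigvecs[:, k].real)
A = A / A.sum()                                  # normalized weight set AF
CI = (eigvals.real[k] - len(C)) / (len(C) - 1)   # consistency index
print("weights:", np.round(A, 3), " CI =", round(CI, 3))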
Figure 4. Overview of Century Avenue interchange station.
4.2 Calculation of adjustment coefficient α
1 Factor set. For the calculation of α, the factor set is

Uα = {u1, u2, u3, u4}

where u1 is 'functioning of risk management system', u2 is 'contractor experience', u3 is 'personnel qualification' and u4 is 'equipment condition'.

2 Weight set. The weight set for the factors is

Aα = (a1, a2, a3, a4)

which is determined by the AHP method as well.

3 Comment set. The comment set is

Vα = {v1, v2, v3}

where v1 is 'poor', v2 is 'medium' and v3 is 'excellent'. The values assigned to v1, v2 and v3 are '1.3', '1' and '0.7' respectively, for quantification of the assessment result.

4 Evaluation matrix. The evaluation matrix for α is expressed as

Rα = (rij)4×3

5 Adjustment coefficient α. The evaluation vector Bα is first calculated as

Bα = Aα ◦ Rα    (4)

This gives the degree of membership of the combined condition of the subjective risk factors to the different levels. The adjustment coefficient α is then

α = Bα · Vα^T    (5)

4.3 Calculation of risk value F

Once the primary risk value Fo and the adjustment coefficient α are both obtained, the risk value F is determined by equation (1). The final risk level is read from Table 5.
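Before turning to the case study, the chain of Eqs. (1)–(5) can be sketched numerically. Only the expert vector for 'station configuration' below is taken from the case study; the weights, the remaining rows of RF and the value of α are hypothetical placeholders, and the weighted-average operator is used for the fuzzy composition:

import numpy as np

A_F = np.array([0.30, 0.30, 0.20, 0.20])  # weight set (hypothetical)
V = np.array([4.0, 3.0, 2.0, 1.0])        # comment values: very high .. low

# Evaluation matrix R_F: row i holds the memberships of factor u_i.
R_F = np.array([
    [1/15, 5/15, 8/15, 1/15],   # station configuration (from the case study)
    [0.20, 0.50, 0.20, 0.10],   # construction mode (hypothetical)
    [0.10, 0.40, 0.40, 0.10],   # geological conditions (hypothetical)
    [0.20, 0.50, 0.20, 0.10],   # surrounding environments (hypothetical)
])

B_F = A_F @ R_F        # evaluation vector, Eq. (2)
F_o = B_F @ V          # primary risk value, Eq. (3)
alpha = 0.85           # adjustment coefficient from Eqs. (4)-(5) (assumed)
F = alpha * F_o        # final risk value, Eq. (1)
print(f"Fo = {F_o:.2f}, F = {F:.2f}")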
5 CASE STUDY

5.1 Project introduction

Century Avenue station is a 4-line interchange station for Line 2, Line 4, Line 6 and Line 9. The platforms of Line 2, Line 4 and Line 9 are parallel, with the platform of Line 6 crossing above them. The construction of this station is divided into three phases. Line 2 was built first and put into operation early in 1999. Line 4 was built years later and put into operation in 2005. Line 6 and Line 9 are now being built synchronously. The risk of construction phase 2 is assessed in the following part. The excavation for the Line 4 station is 20.8 m deep with an area of about 4300 m2. The minimum distance between the retaining wall and the operating Line 2 station is only 5.4 m. Because no future expansion was considered in the original design of Line 2, part of that station had to be reconstructed for the connection with the new line. As to the ground profile, soft clay is widely distributed down to 30 m below the ground. The 1st confined water layer is found about 8 m below the bottom of the excavation, with a pressure head of about 20 m. Apart from the existing metro line, there are commercial buildings and municipal pipelines within a range of 5 to 10 m of the excavation. The contractor who carried out this project holds the top qualification and has accumulated rich experience in underground projects in Shanghai. A web-based multilevel field monitoring and information management system was applied to this project.
5.2 Risk assessment
5.2.1 Risk assessment of objective risk factors

The statuses and rough risk levels of the relevant objective risk factors, determined by reference to the previous analysis, are summarized in Table 6. In order to obtain the weight set and the evaluation matrix, a survey was conducted among 15 experts engaged in geotechnical engineering.
Table 6. Statuses and risk levels of the objective risk factors.

Risk factor               Status           Risk level
Station configuration                      medium
  Depth                   medium
  Area                    medium
  Layout                  regular
Construction mode         reconstruction   high
Geological conditions     zone III         medium/high
Surrounding environment   1st grade        high

The weight set AF was determined by the AHP method. The evaluation matrix RF was determined in such a way that, if n of the 15 experts considered the risk level of factor ui to be vj, then rij = n/15. For example, in the risk evaluation of 'station configuration (u1)', the numbers of experts who gave the comments 'very high (v1)', 'high (v2)', 'medium (v3)' and 'low (v4)' were 1, 5, 8 and 1 respectively, so the evaluation vector R1 is (1/15, 5/15, 8/15, 1/15). The evaluation vector BF was then calculated according to equation (2), and the primary risk value Fo according to equation (3). Referring to Table 5, the risk level is 'high' based on the assessment of the objective factors.

5.2.2 Risk assessment of subjective risk factors

The weight set Aα was determined by the AHP method as well, and the evaluation matrix Rα was obtained from the expert survey in a similar way to RF. The evaluation vector Bα was calculated according to equation (4), and the adjustment coefficient α according to formula (5). The overall condition of the subjective factors is thus 'excellent'.

5.2.3 Final assessment result

With the primary risk value Fo and the adjustment coefficient α both obtained, the risk value F is calculated according to equation (1). The risk level of this project is therefore 'medium', but very close to 'high'. From the overall risk assessment it is found that, though the risk level is 'high' considering the objective factors alone, the overall risk level was effectively reduced to 'medium' by the 'excellent' condition of the subjective factors, namely an experienced contractor and effective monitoring and risk management in the field.

6 COMMENTS ON RISK COUNTERMEASURE
With the risk assessment model developed in the foregoing sections, all the interchange metro stations to be built can be classified in terms of risk level. However, the ultimate target of risk assessment, which is better decision-making to avoid or minimize the risks through corresponding countermeasures, has not yet been addressed. Here the authors provide brief comments on this concern. The risk assessment can be carried out when selecting suitable contractors during the tendering procedure. For projects with a 'low' or 'medium' risk level considering only the objective risk factors, average contractors are qualified, while for projects with a 'high' or 'very high' risk level in the objective view, a top contractor with excellent risk management should be required, so as to adjust the overall risk to a lower level. When the risk assessment is carried out during construction, special attention should be given to projects with 'high' and 'very high' risk levels. For 'high' risk, evaluation of the construction methods and intensified monitoring are required. A 'very high' risk is generally unacceptable; under this circumstance, a temporary cessation of the project may be necessary until sound countermeasures are achieved through special technical meetings or research.

7 CONCLUSIONS
A large number of interchange stations are going to be built under the plan of the Shanghai metro system in the following few years. Unlike normal metro stations, interchange stations feature complicated configurations and various construction modes, which greatly increase the risks in the construction phase. The risk factors that affect the risk level of interchange station construction can be divided into objective factors and subjective factors: 'station configuration', 'construction mode', 'geological conditions' and 'surrounding environments' are the four main objective factors, while 'functioning of risk management system', 'contractor experience', 'personnel qualification' and 'equipment condition' are the four main subjective factors. From the risk assessment of the Century Avenue 4-line interchange station using the Fuzzy Synthetic Evaluation Method and the Analytic Hierarchy Process (AHP), it is found that the risk level of an interchange station is generally high considering the objective risk factors alone. However, a qualified and experienced contractor with an effectively functioning risk management system can considerably decrease the overall risk level of the project. The results of the risk assessment can be used for contractor selection and for better decision-making to reduce or minimize risk during the construction of interchange metro stations.
REFERENCES Bea, R. 2006. Reliability and human factors in geotechnical engineering. Journal of Geotechnical and Geoenvironmental Engineering. May 2006: 631–643. Chapman, T.J.P. & Van Staveren, M.Th. et al. 2007. Ground risk mitigation by better geotechnical design and construction management. Proc. ISGSR2007 First International Symposium on Geotechnical Safety and Risk, Shanghai, 18–19 Oct. Liang, S. & Bi, J.H. 2001. Fuzzy synthetic evaluation method of construction engineering quality grade. Journal of Tianjin University 34(5): 664–669. Liu, J & Pan, Y.P 2008. Guideline of Confined Water Risk Control in Rail Transit Construction. Shanghai: Tongji University Press. Sowers, G.F. 1993. Human factors in civil and geotechnical engineering failures. Journal of Geotechnical Engineering. 119(2): 238–256. Savidis, S.A. 2007. Risk management in geotechnical engineering projects by means of an internet-based information and cooperation platform. Proc. ISGSR2007 First International Symposium on Geotechnical Safety and Risk, Shanghai, 18–19 Oct. Wang, Q.G. 2008. Research on Deformation Prediction and Construction Control in Deep Excavation of Expansion Project near Existing Metro Transfer Station. Shanghai: Tongji University.
ACKNOWLEDGEMENTS The work presented in this paper was supported by Shanghai Shentong Group Co., Ltd. The authors gratefully acknowledge this support.
Challenges in multi-hazard risk assessment and management: Geohazard chain in Beichuan Town caused by Great Wenchuan Earthquake Limin Zhang Department of Civil and Environmental Engineering, Hong Kong University of Science and Technology, Hong Kong
ABSTRACT: The Great Wenchuan Earthquake of Ms 8.0 on the Richter scale on 12 May 2008 triggered approximately 15,000 landslides. This paper starts with an overview of the disaster chain caused by the earthquake. It then describes four episodes of geohazards that occurred during and shortly after the earthquake in Beichuan Town, a small county town in northern Sichuan. In Episode I, the fault rupturing and the earthquake caused the collapse of 80% of the buildings in the town. In Episode II, one large landslide (Wangjiayan Landslide) and one rock avalanche (Jingjiashan Landslide) were triggered by the earthquake, which together buried a significant part of the town and caused the death of more than 2300 people. In Episode III, on 10 June 2008, the flood from the breaching of Tangjiashan Landslide Dam, located approximately 3.5 km upstream of Beichuan, flushed the town; the dam-breaching flood was greater than a 200-year-return-period flood. In Episode IV, on 23–25 September 2008, the loose landslide deposits near and inside the town turned into severe debris flow torrents amid a storm. The debris flowed into the town and buried much of the old town that had not been affected by the two landslides and the Tangjiashan flood that occurred earlier. Finally, several challenges to geotechnical risk assessment and management are posed based on the observed disaster chain.
1 INTRODUCTION
A great earthquake of magnitude 8 on the Richter scale occurred in Sichuan on 12 May 2008. The strike-slip rupturing started near Yingxiu and developed towards the northeast along the Yingxiu–Beichuan–Qingchuan fault zone. The rupture zone was approximately 300 km long (Fig. 1, Chen 2008). The epicenter depth was between 10 and 20 km. Latest investigations (Huang 2008; Xie et al. 2008) show that approximately 15,000 landslides were triggered during the earthquake. The earthquake caused approximately 80,000 casualties, of whom 19,065 were students. One quarter of the total casualties were caused by earthquake-induced landslides. I am deeply saddened because all this happened in the place where I lived for many years. What further touches me is the degree of damage to a single site by a chain of geohazards. It is well known that a particular site may be affected by multiple hazards, such as rockfalls, landslides and debris flow. The Great Wenchuan Earthquake posed a new challenge: multiple geohazards not only occurred at particular sites, but also developed over time as a disaster chain. Hence, assessment of the risks associated with such a disaster chain becomes a dynamic problem. This paper starts with an overview of the disaster chain caused by the Wenchuan earthquake. Then four episodes of catastrophes that occurred during and after
Figure 1. Rupture zone in the Great Wenchuan Earthquake and location of Beichuan. (After Chen 2008).
the earthquake in a single place, Beichuan Town, are described. Finally, several challenges to geotechnical risk assessment and management are raised based on the observed disaster chain.
2 DISASTER CHAIN CAUSED BY GREAT WENCHUAN EARTHQUAKE
Figure 2 summarizes some of the observed disaster/event chains caused by the Great Wenchuan Earthquake along the time line, focusing on geohazards only. Five typical types of geohazards occurred during the earthquake, namely landslides, topplings, ejected deposits, loosened or cracked soil/rock masses, and peeled terrains covered by loose dry debris. These geohazards became more severe a few days after the main shock, owing to numerous aftershocks, eight of which were larger than magnitude 6 on the Richter scale. The landslides also blocked rivers and formed over 100 landslide dams, and the water levels in the impounded lakes started to rise after the earthquake. By one to three weeks after the earthquake, 34 large landslide lakes had formed and posed enormous risks to the public both downstream and upstream of the dams. While many of these landslide dams overtopped naturally, emergency engineering diversion measures had to be carried out for several very large ones. The Tangjiashan landslide dam is an example, in which a diversion channel was excavated under extremely difficult conditions to reduce the risks to over one million people. The collapse of large landslide dams along the Mianyuan River and Jianjiang River also caused floods larger than rain-induced 50-year-return floods, which inundated towns and cut off some major highways. Then, in July and September 2008, severe storms occurred in the earthquake zone, and the widespread dry landslide debris caused by the earthquake turned into wet debris flows. Such debris flows caused not only losses of lives and property, but also a fundamental transformation of the natural environment, which is expected to last for many years.
3 MULTI-HAZARDS AT BEICHUAN CAUSED BY GREAT WENCHUAN EARTHQUAKE
3.1 Episode I, fault rupture and building collapse

Beichuan Town is located in northern Sichuan (Fig. 1). By the end of 2006, the population of Beichuan County was 160,156 (Beichuan County Government 2008). Beichuan Town (Qushan Town), where the county government is located, was a quiet and beautiful town surrounded by green mountains and the Jianjiang river (Fig. 3). During the Wenchuan earthquake, which occurred at 2:28 pm on 12 May 2008, the earthquake rupture went across the entire town (Fig. 4). The ground seismic intensity reached the 11th degree. The earthquake caused devastating damage to buildings (Figs. 4–6): 80% of the buildings in Beichuan Town collapsed, and almost all of the remaining buildings were severely damaged (Fig. 5). Approximately 15,640 people in the county lost their lives (9.8% of the population) and 4,412 people were reported missing (CCTV 2008).
Figure 3. Beautiful Beichuan Town surrounded by green mountains and a meandering river before the Great Wenchuan earthquake. (After Beichuan County Government 2008).
Figure 2. Observed disaster/event chains caused by the Great Wenchuan Earthquake.
The most tragic of all was the collapse of two five-storey classroom buildings at Beichuan Middle School (Fig. 6; loss of more than 1000 young students (Beichuan Middle School 2008)) and the burial of the entire New Beichuan Middle School (loss of over 700 young students and teachers, described later).

3.2 Episode II, Landslides

After a moment of delay, one large landslide and one rock avalanche occurred. The landslide, Wangjiayan Landslide, shown in Fig. 7, is about 10 million m3 in volume. The scar is steep, wide and very high, showing the features of a typical landslide that detached from the scar as a whole piece under tensile stresses, with the detached materials sliding down at high speed. The landslide debris buried a large part of the government office area in the old town. As the debris advanced, it pushed many buildings, which were already shattered by the earthquake but not buried, forward for a distance, turning them into ruins (Fig. 8). The scraping power of the high-speed debris was such that the shallow foundation of a building was brought to the debris surface (Fig. 9). This landslide caused the loss of about 1600 lives: the largest casualty toll of any single geohazard event.
Figure 4. The fault zone across the entire town and severe building damage. (After Huang 2008). The extents of two large landslides, which affected a significant part of the town, can also be seen.
Figure 5. Beichuan Town shortly after the earthquake. (After Xinhua News Agency, 16 June 2008).
Figure 6. Ruins of Beichuan Middle School after the earthquake. Over 1000 young students lost their lives.
Figure 7. Wangjiayan Landslide in Beichuan Town, which killed 1600 people. (After Yang et al. 2008).
Figure 8. Wangjiayan Landslide in Beichuan Town. The landslide debris not only buried numerous buildings but also pushed a number of buildings forward for a distance.
Figure 9. The high-speed debris from Wangjiayan Landslide brought the shallow foundation of a building to the debris surface.

Figure 12. Tangjiashan Landslide, Tangjiashan Landslide Lake, and Beichuan Town. (After Yang et al. 2008).
Figure 10. Jingjiashan Landslide in Beichuan Town, which buried the Beichuan Middle School, killing approximately 700 people, mainly young students.
Figure 13. Tangjiashan Landslide, which is 30 million m3 in volume. (After Yang et al. 2008).
Figure 11. Jingjiashan Landslide in Beichuan Town. Large rock blocks are seen in this picture; three such blocks would be sufficient to fill a basketball court.

The rock avalanche, Jingjiashan Landslide, was also about 10 million m3 in volume (Figs. 4 and 10). The rock avalanche is characterized by the falling of a large quantity of rock blocks with diameters larger than 5 m (Fig. 11). The avalanche buried the New Beichuan Middle School, leaving approximately 700 young students and their teachers in the darkness.

3.3 Episode III, flooding from breaching of Tangjiashan landslide dam

The end of the earthquake was not yet the end of geohazard exposure. During the earthquake, a large landslide occurred at Tangjiashan, about 3.5 km upstream of Beichuan Town (Fig. 12). Similar to the Wangjiayan landslide, the Tangjiashan landslide is a whole-piece, high-speed landslide detached from a wide and steep scar (Figs. 12 and 13), with the materials falling a vertical distance of approximately 500 m. The landslide deposit measures 611 m along the sliding direction and 803 m in the perpendicular direction, and is approximately 20.4 million m3 in volume (Liu 2008). The landslide cut off the Jianjiang river and formed a large landslide dam 82–124 m high (Figs. 12 and 13). In the three weeks following the earthquake, the landslide lake was filled at a rate of approximately 110 m3/s. By 9 June 2008, the lake volume had reached 247 million m3 (water level = 742.58 m). When full, the lake capacity would be 316 million m3 (Liu 2008).
Figure 14. Breaching of Tangjiashan Landslide Dam on 10 June 2008. The peak flow rate reached 6500 m3 /s at 11:30 am and the corresponding lake water level was 735.81 m. (After Gang Li, Xinhua News Agency, 10 June 2008).
Figure 16. Remains from the flood from the breaching of Tangjiashan Landslide Dam in Beichuan Town. The watermarks can be clearly seen on the third floor of a building.
Figure 15. The flood from the breaching of Tangjiashan Landslide Dam passed Beichuan Town, 12 noon, 10 June 2008. The peak flow rate reached 9780 m3 /s; the flood water level was 629.54 m. (After Gang Li, Xinhua News Agency, 10 June 2008).
The lake posed enormous risks to 1.2 million people downstream. A well organized emergency diversion program was implemented under the leadership of Mr. Ning Liu, Chief Engineer of the Ministry of Water Resources, during 25 May–11 June 2008. Meanwhile, approximately 250,000 people downstream were evacuated. The dam finally breached in a controlled manner on 10 June 2008 (Fig. 14). The peak flow rate reached 6500 m3/s, which is similar to a flood of 200-year return level (6970 m3/s) (Huaxi Metropolitan News 2008). When the dam-breaching flood reached Beichuan Town, the peak flow rate reached 9780 m3/s, larger than a 200-year-return flood. The old town was severely flooded (Figs. 15–17). In particular, the Jianjiang river makes a turn inside the town, creating a higher flood water level on the side of the old town (Fig. 15). The water marks can be clearly seen on the third floor of a building in Fig. 16. The flood inundated much of the old town, flushing building debris into it. The debris completely jammed the roadway, which had been open and had played a critical role during the rescue period (Fig. 17).
Figure 17. Building debris that was brought into Beichuan Town by the flood from the breaching of Tangjiashan Landslide Dam. Also shown in this figure are the building damage by the earthquake and a rock avalanche front on the right. Quake collapse, landslide and flooding debris in the same scene: what else could possibly be more?
Fortunately, all the people in the town who had survived the earthquake were evacuated about two weeks after the earthquake for infectious disease control; hence no casualties resulted from the flood.

3.4 Episode IV, debris flow

The earthquake in May 2008 caused numerous landslides and rock avalanches. Much of the landslide and avalanche deposits spread on the hilly terrain is in a marginally stable condition and is highly erodible. On 23–25 September 2008, a severe storm brought about 190 mm of rainfall, which caused widespread debris flow torrents. A severe debris flow from Weijiagou, a gully southwest of the town, as well as one originating from the Wangjiayan landslide deposit (Fig. 13), burst into the old town. A large part of the old town that had not been affected by the landslides and the Tangjiashan dam-breaching flood was now buried (Figs. 18 and 19). Again, no casualties resulted inside the town, because all the people had been evacuated in late May.
Figure 20. Timeline of geohazards in Beichuan Town.
Figure 18. Beichuan Town after the massive debris flow on 24 Sept. 2008. The buried part was planned as a memorial site in memory of those who lost their lives. (After China News Agency, 25 September 2008).
deposits and loosened terrain is likely to continue for years to come. Experience with the Chi-Chi earthquake (ML = 7.3) in Taiwan in 1999 (Lin et al. 2008) shows that the occurrence of major debris flow torrents during two typhoon events after the Chi-Chi earthquake was more than double that prior to the earthquake. Transformation of the river system took place in the course of debris flow and general soil erosion. It appears that our ability to identify possible hazard scenarios is still limited and that we have to live with the unexpected. May the deceased rest in peace.
4.2 Risk assessment
The risk assessment process answers three questions (e.g. Ayyub 2008): (1) What can go wrong? (2) What is the likelihood that it will go wrong? (3) What are the consequences if it does go wrong? The question 'What can go wrong' is addressed in hazard identification. The risk associated with multiple hazards can be expressed as

R = \sum_{i=1}^{n} p_i v_i c_i
Figure 19. Beichuan Town after the massive debris flow on 24 Sept. 2008. The debris contributions came from Weijiagou Gully behind the town and from the Wangjiayan landslide. (After China News Agency, 25 September 2008).
(Figs. 18 and 19). Again, no casualties resulted inside the town because all the people had been evacuated in late May.
4 CHALLENGES TO GEOHAZARDS RISK ASSESSMENT AND MANAGEMENT
4.1 Hazards identification
A first lesson learned through the Great Wenchuan Earthquake is the need to re-assess the risks faced by cities and communities located in high seismic zones or exposed to high-risk geohazards. A critical task in risk assessment is the identification of the possible hazards that may affect the elements of concern. The timeline for the four geohazard scenarios reported in the sections above is shown in Fig. 20. Is the debris flow in Episode IV the end of the hazards for Beichuan Town? Obviously, unexpected geohazards may occur in the future: the four episodes reported in this paper were largely unexpected years ago. Smaller earthquakes can occur in the foreseeable future. Some large landslides may reactivate due to rainfall infiltration or other triggers. Debris flow from the landslide
where p_i is the occurrence probability of hazard event i out of n possible events; v_i is the vulnerability of the element at risk to the i-th hazard; and c_i is the element at risk given the occurrence of the i-th hazard. The Beichuan Town hazard scenarios show that: (1) The hazard events are highly correlated. The outcome of one event (e.g. landslide) is the cause of other events (formation of landslide dams, dam breaching, debris flow etc.), the lead cause being the strong earthquake. (2) The events do not necessarily occur at the same time; they evolve as a disaster chain (Figs. 2 and 20). (3) The vulnerability to each event may be different. For example, the earthquake and landslide scenarios came without any warning and thus resulted in enormous loss of life, whereas the dam-breaching flood and debris flow caused few casualties owing to sufficiently early warning and evacuation of the population at risk. (4) The selection of a proper benchmark recurrence period for a hazard type is an important decision for risk analysis. The occurrence probability of the root cause (strong earthquake) is extremely small (approximately a 2000-year return period in the case of the Great Wenchuan earthquake). If a systematic risk analysis were conducted considering the
possible run-down distances of the two large landslides (Figs. 4, 7 and 10) and the debris flow (Figs. 18 and 19), the safe distance to the fault rupture (Fig. 4), and the zone susceptible to flooding (Figs. 15–17), then almost the entire town would be uninhabitable. Note that the Chinese Code for Geotechnical Engineering Investigation GB50021-94 (Ministry of Construction 1995) recommends the following safe distances to strongly active faults with the potential to cause earthquakes greater than magnitude 7 on the Richter scale: 3000 m for designs to 9th degree intensity, and 1000–2000 m for designs to 8th degree intensity. The Code also recommends that important constructions should not be placed on the upper plate near the rupture. Similar situations in many towns in the quake zone make decisions on reconstruction planning very difficult.
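The multi-hazard risk expression given in Section 4.2 lends itself to a one-line computation. The following sketch is purely illustrative: the probabilities, vulnerabilities and elements at risk are invented placeholders, not values from this paper.

    # Hedged illustration of the multi-hazard risk sum R = sum(p_i * v_i * c_i).
    # All numbers below are invented placeholders for demonstration only.
    hazards = [
        # (name, annual probability p, vulnerability v in [0, 1], element at risk c)
        ("earthquake",       1/2000.0, 0.80, 100000.0),
        ("landslide",        1/2000.0, 0.90,  10000.0),
        ("dam-breach flood", 1/200.0,  0.05,  50000.0),
        ("debris flow",      1/50.0,   0.10,   5000.0),
    ]
    R = sum(p*v*c for _, p, v, c in hazards)
    print(f"aggregate annual risk R = {R:.2f} (in units of c per year)")

The sum makes the correlation issue noted in item (1) visible: the events are not independent, so in a rigorous analysis the p_i of the chained events would be conditional probabilities.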
4.3 Risk management
Risk management answers the questions of (1) what can be done to reduce or control risk and (2) what the impacts of any proposed solutions are on the risk profile of the system (e.g. Ayyub 2008). The first question can be answered through the reduction of one or more of the three risk components: occurrence probability of the hazard, vulnerability, and element at risk. Several issues or challenges in dealing with the two questions are as follows: (1) Reducing the occurrence probability of hazards requires the identification and strengthening of a large number of natural or man-made slopes to a tolerable level. While some landslide hazards will certainly be mitigated and some slopes will be stabilized, it is not likely that a great number of features can be identified and strengthened. (2) Again, the selection of a proper benchmark hazard recurrence period for engineering design is a difficult decision. A large area suffered damage of 8th–11th degree during the Wenchuan earthquake (a 2000-year event), while the original design seismic intensity was mostly 7th degree, based on a hazard level that has a 10% probability of exceedance in a 50-year exposure period (475-year return period). Although a design intensity of 8th degree has been adopted for reconstruction works, there is still a shortfall. A cost analysis often does not allow the severe earthquake event that actually happened to be used as a benchmark for design; there must be a trade-off between risk and cost. The construction of the New Orleans Hurricane Protection System offers an example: while Hurricane Katrina in August 2005 was a 400-year event, the new system is designed for 100-year events from a cost–benefit point of view (USACE 2006). (3) Given that the hazard occurrence probability cannot be reduced significantly due to economic concerns, a more effective way to mitigate risk is the
lowering of vulnerability. Since earthquake disasters cannot be forecast in the near future and people must live with disasters in high-risk zones, measures for vulnerability reduction at the community level (safe islands etc.) are called for. While the forecasting of earthquakes is a formidable task, it is possible to monitor and predict the development of post-quake disaster chains (Figs. 2 and 20). The mitigation of the Tangjiashan Landslide Dam risks (Figs. 12–17) is a successful example. (4) When risks cannot be reduced to a tolerable level or cannot be justified on a cost–benefit basis, there is a need to reduce the elements at risk, i.e., to relocate permanently the residents of areas susceptible to near-future disaster chains. This often proves not quite viable considering various negative social impacts (e.g., separation of family members, loss of familiar communities, job market etc.). (5) There is a need for effective risk education and communication on several issues during the reconstruction period: (a) cost–benefit evaluation of a new design intensity when it is smaller than that which actually occurred; (b) keeping earthquake vulnerability in mind in city planning; (c) relocation versus reconstruction as it is; (d) use of potentially more dangerous new sites.
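The return-period figures quoted in item (2) follow from the standard exceedance-probability relation; a minimal check, assuming independent annual occurrences:

    # Hedged check of the figures in item (2): a hazard with a 10%
    # probability of exceedance in a 50-year exposure period has a
    # return period T = 1 / (1 - (1 - P)**(1/n)) of about 475 years.
    P, n = 0.10, 50
    T = 1.0 / (1.0 - (1.0 - P)**(1.0/n))
    print(f"return period T = {T:.0f} years")   # -> 475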
5 SUMMARY
The geohazard chain in Beichuan Town, in particular the four episodes of catastrophe during and shortly after the Great Wenchuan Earthquake, is described in this paper. The hazard events did not occur at the same time; instead they evolved as a disaster chain with possibly unknown future events. The hazard events are highly correlated: the outcome of one event (e.g. landslide) is the cause of other events (formation of landslide dams, dam breaching, debris flow etc.), the lead cause being the strong earthquake. The earthquake that actually occurred was an extreme event (approximately a 2,000-year event) that caused the loss of 80,000 lives. The public has witnessed this cruel fact but has yet to accept smaller benchmark hazard events for future design and construction. All of this raises challenges to risk assessment and management in a multi-hazard environment.
ACKNOWLEDGMENTS
The author would like to acknowledge the assistance of Prof. Chang-Rong He of Sichuan University, who arranged a trip for the author to visit Beichuan shortly after the Great Wenchuan Earthquake. Profs. Run-Qiu Huang of Chengdu University of Technology and Wei-Lin Xu of Sichuan University provided valuable information. Mr. Yao Xu, Miss Melissa Zhang, and Ms Jinhui Li proofread the manuscript. The financial support from the Research Grants Council of Hong Kong (Project No. 622207) and the Hong Kong
University of Science and Technology (Project No. RPC06/07.EG19) is also gratefully acknowledged.
REFERENCES
Ayyub, B.M. (2008). A risk-based framework for multihazard management of infrastructure. Proc. International Workshop on Frontier Technologies for Infrastructures Engineering, 23–15 Oct. 2008, National Taiwan University of Science and Technology, Taipei, S.S. Chen and A. H-S. Ang (eds.), 209–224.
Beichuan County Government. (2008). Beichuan county population statistics. http://beichuan.my.gov.cn/bcx/1658169087802474496/20080617/305295.html.
Beichuan Middle School (2008). Earthquake relief for Beichuan Middle School. Online: http://bczx.changhong.com/.
CCTV (2008). Beichuan casualty statistics. The China Central Television. 23 June 2008.
Chen, Y.T. (2008). Mechanisms of the Wenchuan earthquake. Keynote lecture, Forum on Earthquake Relief and Science and Technology, Chinese Academy of Sciences, Chengdu, 25 July 2008.
China News Agency. (2008). 25 September 2008.
Huang, R.Q. (2008). Preliminary analysis of the developments, distributions, and mechanisms of the geohazards triggered by the Great Wenchuan Earthquake. State Key Laboratory of Geohazards Prevention and Geological Environment Protection, Chengdu University of Technology, Chengdu, China.
Huaxi Metropolitan News. (2008). Tangjiashan diversion channel can pass floods of 200-year return period. 25 June 2008.
Lin, M.L., Wang, K.L., and Kao, T.V. (2008). The effects of earthquake on landslides – A case study of Chi-Chi earthquake, 1999. Landslides and Engineered Slopes, Chen et al. (eds.), Taylor & Francis Group, London, 193–201.
Liu, N. (2008). Landslide dams in the Wenchuan earthquake and risk mitigation measures. Keynote lecture, Forum on Earthquake Relief vs. Science and Technology, Chinese Academy of Sciences, 25 July 2008, Chengdu, China.
Ministry of Construction. (1995). Commentary – Code for Investigation of Geotechnical Engineering, GB50021-94. Ministry of Construction, Beijing.
United States Army Corps of Engineers (USACE). (2006). Performance Evaluation of the New Orleans and Southeast Louisiana Hurricane Protection System. Draft Final Report of the Interagency Performance Evaluation Task Force, Volume I – Executive Summary and Overview. 1 June 2006.
Xie, H.P., Deng, J.H., Tai, J.J., He, C.R., Wei, J.B., Chen, J.P., and Li, X.Y. (2008). Wenchuan large earthquake and post-earthquake reconstruction-related geological problems. Chinese Journal of Rock Mechanics and Engineering, Vol. 27, No. 9, pp. 1781–1791.
Xinhua News Agency. (2008). 10 June 2008.
Xinhua News Agency. (2008). 16 June 2008.
Yang, X.G., Li, S.J., Liu, X.N., and Cao, S.Y. (2008). Key techniques for emergency treatment of earthquake-induced landslide dams. State Key Laboratory of Mountain Rivers and Environment, Sichuan University, Chengdu, China.
General sessions Design method (1)
A study of the new design method of irrigation ponds using sheet materials M. Mukaitani, R. Yamamoto & Y. Okazaki Takamatsu National College of Technology, Takamatsu, Japan
K. Tanaka Naruto surveying & designing co., ltd., Naruto, Japan
ABSTRACT: There are about 210,000 irrigation ponds in Japan. Most of them have become old and have been damaged by natural disasters and by the spread of housing development. When a pond dike is repaired, soil materials can no longer be obtained from nearby mountains: quarrying mountain soil is no longer licensed, because of environmental problems. Many types of industrial sheet material have been developed since the 1970s, so a new design method for pond dikes using sheet materials is required. Most slope failures of pond dikes are governed by the shear strength at the interface between the soil of the main slip layer and the sheet material; the shearing mechanisms of dike soils on sheet material are therefore important. This paper treats the stability of dike slopes from the viewpoint of the infinite slope method, an application to an in-situ case study, a comparative study of ordinary design methods, and the progression of the seepage line. First, we considered the infinite slope method for a simple slope and clarified the relation between the safety factor of the proposed method and that of traditional slope stability analysis. We then formulated the progression of the seepage line in the soil on the sheet material and showed that the seepage line is described by a parabola.
1 INTRODUCTION
This paper treats a new design method for irrigation ponds using sheet materials. There are 210,000 irrigation ponds in Japan. If a pond is to be repaired because it is too old, or to make it earthquake-resistant, dike materials must be found. The dike of an irrigation pond consists of the cover soil, the filter soil and the core soil, and such materials cannot be obtained easily. The top five prefectures account for over 50% of the irrigation ponds in Japan: Hyogo, Hiroshima, Kagawa, Yamaguchi and Osaka. Newly constructed ponds are of the center core type, whereas old ponds are improved by the front core type, which requires a large volume of dike material. From an environmental viewpoint, it is difficult to obtain new dike materials from the nearby mountain area, and on isolated islands it is more difficult still. For these reasons, we must investigate dike improvement methods that are low-cost and use a small volume of new material. Dike improvement work cannot rely on large-scale construction machines in rural areas, where the irrigation ponds are not large. Factory-made waterproof sheet materials have been in use since the 1960s. There are many kinds of waterproof sheet material; they are imported from other countries, and each company offers its own types of structural design. A proper design method and a checking system therefore need to be clarified for pond improvement work
(Tanaka (2007), Mukaitani (2008a, b and c)). When the improvement of an old pond has been designed, design problems have been solved empirically by each engineer. In this paper we give our opinion on a new design method for old ponds using the waterproof sheet materials that have lately become a subject of special interest.
2 SUMMARY OF THEORETICAL ANALYSIS
The fundamental formula of the theoretical analysis is based on infinite slope stability. In this section, we assume that the soil of the dike on the waterproof sheet material behaves as an infinite slope. R.M. Koerner and D.E. Daniel (1997) proposed a dike analysis using the finite slope method including the waterproof sheet material. Manufacturing companies publish many technical notes and soil sliding models that take the material's tensile strength into account. We propose a new infinite slope stability analysis that can consider the parallel submergence ratio (PSR), the back-side seepage pressure, the seepage pressure and the cohesion of the soil. The PSR and the horizontal submergence ratio (HSR) are defined by Koerner et al. For example, when the PSR is zero, the groundwater level coincides with the bottom of the slope soil column on the waterproof sheet material. If the PSR is one, the groundwater level coincides with the top of the slope soil column.
Figure 1. Schematic diagram of infinite slope stability.
Figure 2. Schematic diagram of a typical case study.
When the pond is filled with water, the PSR is greater than one. For slope stability, the most dangerous groundwater condition is a PSR greater than zero and less than one. We determined the general equation for the safety factor of the infinite slope, considering the problems above, as follows;
where PSR = h_w/h, Z is the vertical height of the cover soil layer, α is the coefficient of the back-side seepage pressure, determined by c′/(γ_w · h · tanφ′), β is the slope angle of the waterproof sheet material, and c′ and φ′ are the strength parameters of the cover soil. The value of α ranges from zero to one; it is advisable to adopt α less than 0.5. The seepage water pressure f_h is determined as follows;
3 A TYPICAL CASE STUDY OF A FAILED DIKE
In this case, c and α are equal to zero for the conventional analysis. Figure 2 shows the schematic diagram of the failed dike. The failure of the pond dike was caused by continuous rainy days and a high water level in a nearby river, which generated seepage pressure in the slope soil layer on the waterproof sheet material. When the PSR is zero, equation (1) gives a safety factor of 1.019. When the PSR is 0.5, the safety factor is 0.714; when the PSR is one, it is 0.483. If the pond is filled with water, the safety factor recovers to 1.019: the pond is most stable when filled with water. We also investigated the influence of slope length on the safety factor, comparing the conventional infinite slope analysis with Koerner's method.
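The trend of the safety factor with PSR can be illustrated with a classical infinite-slope expression. The sketch below is not the paper's equation (1), which is not reproduced here; the slope angle, friction angle and unit weights are assumed values chosen only to show the decreasing trend.

    import math

    # Hedged sketch: classical infinite-slope safety factor with a
    # submergence ratio (NOT the paper's equation (1)). All parameter
    # values are assumptions chosen for illustration.
    def fs_infinite(psr, beta_deg=29.5, phi_deg=30.0,
                    gamma=18.0, gamma_w=9.81, c=0.0, z=1.0):
        """Fs of a cover-soil layer of thickness z (m) on a sheet at slope
        angle beta, with groundwater at a fraction psr of the layer height."""
        beta, phi = math.radians(beta_deg), math.radians(phi_deg)
        sigma_n = (gamma - psr*gamma_w) * z * math.cos(beta)**2  # effective normal stress
        tau = gamma * z * math.sin(beta) * math.cos(beta)        # driving shear stress
        return (c + sigma_n*math.tan(phi)) / tau

    for psr in (0.0, 0.5, 1.0):
        print(f"PSR = {psr:3.1f}  Fs = {fs_infinite(psr):.3f}")

As in the case study, the safety factor falls monotonically as the PSR rises from 0 to 1; the values are of the same order as those quoted, though not identical, since the paper's α term and full equation are not reproduced.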
Figure 3. Relation between safety factors and slope length.
Figure 3 shows the relation between safety factor and slope length. In this paper the safety factor by Koerner's analysis is called F. The safety factor Fs by our infinite slope method keeps a constant value of 1.019 regardless of the slope length, whereas the safety factor by Koerner's method decreases as the slope length grows. The relation between the safety factor by our infinite method and Koerner's F is determined as follows;
We compared the margin of error of each method at a slope length of 10 m: the safety factor by Koerner's method is 6.6% greater than ours. Our conventional infinite slope method thus gives the lower, more conservative value of the safety factor, providing some margin against sudden environmental changes. Figure 4 shows the relation between the safety factor by our proposed infinite method and the PSR, with α varied from zero to 0.3. The figure shows that the safety factor is strongly affected by the back-side seepage pressure and by the groundwater level in the soil layer on the waterproof sheet material. We believe that many slope failures involving sheet materials were affected by the existence of α or of seepage water pressure. When the bottom of the pond is partially saturated, the possibility of slope failure rises. A drainage system is needed to control groundwater near the bottom of the pond.
Figure 5. Schematic diagram of the seepage line in the slope soil layer under rainfall.
Figure 4. Relation between Fs and PSR under variation of α.
The cover soil on the waterproof sheet material has a low cohesion. After compaction work on the dike slope, the cover soil has a standard penetration test blow count of about ten. A cohesion of 5 kN/m² is required by our infinite slope analysis for a normal pond dike slope. The cohesion on the slope can be raised using the cement mixing method. We proposed that the cohesion under the wet condition of the unsaturated soil be related to the thickness of the slope layer; this investigation will be treated in another paper.
4 COMPARISON WITH TECHNICAL NOTES BY MATERIAL COMPANIES
There are two groups of companies producing waterproof sheet materials: one makes rubber sheets, the other bentonite sheets.
4.1 Finite slope stability using the mixed analysis
This method is adopted by the rubber sheet companies. After the countermeasure of bottom soil improvement is applied, they calculate the resistance based on the passive earth pressure. This viewpoint is similar to the calculation of a cantilever retaining wall. The tensile strength of the sheet is not considered. The fatal defect of this concept is that it does not consider the change of water level in the slope soil layer.
4.2 Infinite slope stability considering the reinforced sheet and soil layer
This method is adopted by the bentonite sheet companies. The safety factor is calculated with the tensile strength of the sheet on the slope. If the sheet has open areas, like a geogrid, the soil penetrates into the sheet material. The fatal defect of this concept is, again, that it does not consider the change of water level in the slope soil layer. In this case Koerner's method gives a safety factor of 1.26.
5 CALCULATION OF THE SEEPAGE LINE IN THE POND DIKE UNDER RAINFALL
We focused on two points before calculating the seepage line in the pond dike under rainfall: the formula should rest on conventional assumptions and be adaptable to general slope problems, and the safety factor should be easy to calculate during construction work at the dike. Figure 5 shows the schematic diagram of the seepage line in the slope soil layer under rainfall. We consider a soil column on the slope. The following is assumed in the analysis of the seepage line: 1) The deposit is horizontal. 2) The coefficient of horizontal permeability k_h is greater than the coefficient of vertical permeability k_v. The storage coefficient in the cover soil layer is (1 − 1/a); therefore, if the inflow is one, the outflow is 1/a. 3) The anisotropy of permeability, k_h/k_v, is assumed to be 25, following the Japanese recommendation for the design of irrigation tanks (2006). 4) The seepage velocity due to rainfall, v, is assumed to be 2 to 3 mm/hr; the seepage velocity and the rainfall intensity have a simple proportional relation, and the inflow volume is assumed to be about 26% of rainfall for decomposed granite soil. As shown in Figure 5, the coordinate origin is set at the top of the slope. The seepage line in the slope on the waterproof sheet material is calculated as follows;
where D is the output water level, Z_0 is the input water level and X_max is the peak value of the water level in the slope. X_max is calculated by the following formula;
The equation shows that X_max is affected by D, β and L. We must relate the above seepage line to our proposed PSR, which is calculated by the following formula;
where D_max is the thickness of the cover soil layer on the waterproof sheet material. The area under the seepage line is calculated by integrating equation (4).
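Since the seepage line of equation (4) is a parabola, the area under it, and hence an average PSR, can also be obtained numerically. The sketch below uses an arbitrary illustrative parabola and assumed dimensions, not the paper's equation (4):

    import numpy as np

    # Hedged sketch: averaging a parabolic seepage line hw(x) over the
    # slope to obtain a PSR, following the integration step described
    # above. The parabola and dimensions are illustrative assumptions.
    L_slope, D_max = 10.0, 1.0               # m, slope length and layer thickness (assumed)
    x = np.linspace(0.0, L_slope, 201)
    hw = 0.2 + 1.6*(x/L_slope)*(1.0 - x/L_slope)   # illustrative parabolic water level (m)
    hw = np.clip(hw, 0.0, D_max)             # water level cannot exceed the layer
    psr = np.trapz(hw, x) / (D_max*L_slope)  # area under seepage line / (Dmax * L)
    print(f"average PSR ~ {psr:.2f}")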
6 CONCLUSIONS
Many small-scale irrigation ponds must be repaired, and it is necessary to be able to perform the calculations easily and economically in situ. This paper proposed a conventional analysis using infinite slope stability. Pond dikes will be repaired using waterproof sheet materials, and the stability of the soil layer on the waterproof sheet material must be checked particularly carefully. In Japan, heavy rainfall occurs every year, and accidents during dike construction must be prevented as far as possible. Seepage pressure can cause slope failure even when the soil layer
is not fully saturated. The effects of bench cuts beneath the sheets and of tapering the cover soil layer on the sheet must also be investigated; these items are often considered in slope works. We will continue to apply the method to in-situ problems and to data from failed slopes.
REFERENCES
Koerner, R.M. & Daniel, D.E. 1997. Final Covers for Solid Waste Landfills and Abandoned Dumps, Thomas Telford Ltd.
Tanaka, K. & Mukaitani, M. 2007. The new design method of irrigation ponds using sheet materials (part 1), Proc. of domestic annual conference for geotechnical engineering, JGS Shikoku branch, 87–88. (in Japanese)
Mukaitani, M. & Tanaka, K. 2008a. The new design method of irrigation ponds using sheet materials (part 2), Proc. of domestic conference on disaster prevention engineering, JSCE Shikoku branch, 55–60. (in Japanese)
Mukaitani, M. & Tanaka, K. 2008b. The new design method of irrigation ponds using sheet materials (part 3), Proc. of domestic annual conference for civil engineering, JSCE Shikoku branch, 212–213. (in Japanese)
Mukaitani, M. & Tanaka, K. 2008c. The new design method of irrigation ponds using sheet materials (part 4), Proc. of domestic symposium for geotechnical problems and environment, JGS Shikoku branch & Ehime Univ., 63–68. (in Japanese)
Recommendation for design of irrigation tanks, 2006. The Japanese society of irrigation, drainage and rural eng. (in Japanese)
Research on key technique of double-arch tunnel passing through water-eroded groove Y. Chen Key Laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, China
X. Liu School of Civil and Architectural Engineering, Central South University, China
ABSTRACT: In the excavation of a double-arch tunnel, a big water-eroded groove was encountered, and the method of a beam on an elastic foundation was used to cope with the situation. Based on the relationship between the foundation beam, the tunnel structure and the site conditions, the mechanical behaviour of a double-arch tunnel passing through a water-eroded groove is studied using finite element numerical simulation. The results show that the interface between the two different foundation media is the weakest point of the foundation beam, where shear and tension damage can easily occur; that the physical properties of the filling material in the water-eroded groove have a distinct influence on the foundation beam, so it is reasonable to take full account of the bearing capacity of the filling; and that setting an invert improves the internal force state of the tunnel structure but does little to share the load of the foundation beam.
1 GENERAL INSTRUCTIONS
Since a big water-eroded groove exists in the excavation of the tunnel, the choice of traversing method becomes critical. In practical tunnelling engineering, the existence, location and size of a water-eroded groove are difficult to judge because of the limitations of geological investigation, so the traversing method is usually chosen according to the actual conditions only when the problem arises, rather than by a design made specially for a big water-eroded groove. This approach is workable for a single tunnel because of its simple structure. For a double-arch tunnel, however, a special design is necessary, considering the difficulty of excavation caused by the complicated structure and, more importantly, the influence of the construction on stability. Therefore, taking into account the structural characteristics of a double-arch tunnel over a water-eroded groove, the writers obtained a workable solution by the calculation and analysis of a foundation beam, combined with the design and construction of a practical project.
2 THE RELATION BETWEEN DESIGN AND CONSTRUCTION OF A DOUBLE-ARCH TUNNEL
In the construction, a middle pilot heading is completed first in order to support the arch rings on both sides. The middle wall is in a disadvantageous stress state since its two sides are not tight against the surrounding rock, allowing independent deformation. Commonly, it is not
Figure 1. Construction steps of double-arch tunnel.
suitable to construct both tunnels at the same time, in order to avoid a long span at the same cross-section that would influence the stability of the surrounding rock. The idea of the construction scheme is therefore to excavate the single span on one side first. As a consequence, the horizontal force of the first-finished arch ring subjects the middle wall to a marked bias, which is disadvantageous for the middle wall without lateral rock resistance. Apparently, the middle wall is the key element in this kind of tunnel construction. The construction steps are shown in Fig. 1. From the viewpoint of design, there are two kinds of structural stress state in the construction of a double-arch tunnel, namely the single span state and the double span state. The middle wall is under a disadvantageous bias condition in the first-finished single span lining, and this
bias disappears once the final double span lining forms a balanced construction, so the single span state is evidently the most disadvantageous condition. The design and construction should reflect this characteristic to ensure safety; the emphasis in the design is therefore on the single span state.
3 DISCUSSION OF AN ENGINEERING CASE
The Tongyou Mountain tunnel, a major project on the GuangXi South second belt highway, is 445 m in length. The area is a karst-developing district where limestone is widely distributed and dissolution grooves and cracks are common. Dissolution geodes are commonly seen in the dolomite, forming irregular karst grooves along the joint planes. Restricted by the landform, the tunnel is designed as a double-arch structure, meeting the demand of road connection at the tunnel opening; it is excavated at a clear width of 28.56 m and a clear height of 8.92 m. In the section K2+312 ∼ +332 of the tunnel there is a big water-eroded groove, 20 m in longitudinal width, crossing the tunnel axis obliquely in the northeast–southwest direction, and it is passed through by the cross-section of the double-arch tunnel. The water-eroded groove lies in a shallow-buried segment, about 20 m in burial depth. Its cover is clay with detritus, of which 20–40% is dolomitic limestone with clear corrosion surfaces. The clay, of delicate and plastic structure, is obviously the filling of the groove; this filling is compact and its bearing capacity is about 150–180 kPa. Its natural unit weight is γ = 16 ∼ 17 kN/m³, and there is no groundwater influence. To probe the extent of the bottom of the groove, we drilled to 17 m below the tunnel bottom and stopped without reaching its base.
3.1 Water-eroded groove span scheme and elastic foundation design
The main methods of crossing a water-eroded groove are filling, spanning, etc. [13,14]. The groove at this tunnel is deep, so the technical requirements on grouting reinforcement of the filling are high and the actual effect cannot easily be guaranteed. After careful study, the method of crossing on an erected beam was adopted. This method ensures that no longitudinal or transverse cracks, caused by the weakening of the bearing strength of the base, appear along the double-arch tunnel structure, thus creating the conditions to guarantee the quality of the tunnel. To let the tunnel pass safely, a strip reinforced concrete beam (foundation beam for short) is placed across the groove at the base of the right wall, left wall and middle wall. The beam is 30 m long and 1 m high. To reduce the stress on the base of the walls, the width of the foundation beam under the side walls is designed as 1.3 m and that under the middle wall as 2.7 m, all exceeding the width of the wall bottom by 15 cm to 35 cm. 5 m is retained at both ends of the beam to rest on the bedrock (grade IV surrounding rock), and the middle section of the beam rests on the groove's filling. This is therefore a beam on an elastic foundation resting on two different foundations.
3.2 Design of the tunnel's support structure and load on the elastic foundation beam
The tunnel’s support structure is compound lining. Initial support is a combined support including wire mesh, anchor and shotcrete. Double level of φ6 mat reinforcement is set in the whole cross section of the arch wall. The space between the mat reinforcement is 25 cm * 25 cm. The anchor arm is 3 m-long 22 reinforcement and its longitudinal space is 1 m and 1.2 m in ring space. The shot Crete is C20 concrete and it is 25 cm thick[15] . In order to make the surrounding rock stable (especially the clay filling of the water-eroded groove), advanced small pipe grouting support is set, 4.5 mlong φ42 steel pipe is arranged along the tunnel arch ring, one ring every longitudinal 2 m and the ring distance is 35 cm. Meanwhile “I” 18-type steel arch timbering and the longitudinal distance is 0.5 m. Secondary lining design is determined by analysis of load-structure model calculation. Considering that primary support is under a certain load of surrounding rock and also small duct grouting is taken in the assisting construction measures that help to reduce the stress of surrounding rock, so 80% of the overlying soil weight is used as the vertical load of secondary lining and the lateral pressure is comparatively treated. But there is no lateral pressure in the middle wall in the altitude rang of middle pilot heading. According to the actual situation, the most unfavorable loading state is taken namely single-span structure calculation as shown in fig 2. After the calculation of lining structure, the result of foundation pressure (per running meter) is 2460 kN in middle wall and 2200 kN in side wall. The load on the foundation beam is 2460/(2.7 × 1.0) = 911 kN/m2 in the middle wall whose foundation beam’s width is 2.7 m and 2200/(1.0 × 1.0) = 2200 kN/m2 in side wall whose foundation beam’s width is 1.0 m.
Figure 2. Lining calculation sketch.
3.3 Calculation and analysis of the elastic foundation beam
Since the water-eroded groove's filling and the bedrock at the ends of the foundation beam are two completely different media, this is a problem of solving a beam on elastic foundations of different stiffness. A finite element displacement method program is adopted, with the foundation settlement simulated by the Winkler assumption. Fig. 3 shows the calculation scheme and Table 1 the calculation parameters. The maximum internal forces and deflections of the foundation beam are summarized in Table 2, and, taking the middle wall as an example, the internal force and deflection are plotted in Fig. 4. From the calculation results it can be seen that the maximum shearing force occurs at the fifth point, the intersection between the bedrock and the filling of the groove according to Fig. 3, which shows that this point has the greatest possibility of shear failure. The absolute maximum moment occurs at the fourth point, also at the junction; moreover, it is a negative moment, which may cause tension failure at the upper edge of the beam. Great attention should be paid to all of this in the reinforcement design. The maximum deflection occurs at mid-span of the beam; however, this does not form a dangerous cross-section, since the moment and shearing force there are both very small. It should be noted that the length of foundation beam seated on the bedrock at the two ends has a great impact on the internal force state of the beam: too long a seating leads to waste, too short a seating is unsafe, and either extreme gives an unreasonable distribution of internal forces. Since the groove had to be treated as soon as possible to avoid a collapse accident, there was no time for further analysis; optimization of the ratio between the seating length of the beam on the bedrock and the cross-section dimensions is suggested for similar projects. From Table 2 it can be seen that the maximum positive and negative moments of the side-wall beam differ by 19%, which means a further adjustment of the total beam length or of the cross-section proportions would be possible; for the middle wall the difference is only 1.4%, showing that the design parameters are reasonable.
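A minimal numerical sketch of such a Winkler-foundation beam analysis is given below. It assembles standard two-node Hermite (Euler-Bernoulli) beam elements with a consistent Winkler foundation matrix, using the middle-wall parameters of Table 1 and the 5 m bedrock seating at each end; the mesh size, the free-free end condition and the moment recovery by differencing are our assumptions, not the authors' program.

    import numpy as np

    # Hedged sketch: beam on a Winkler foundation with two subgrade moduli.
    E = 2.85e7                 # kPa, elastic modulus of the beam concrete
    b, hgt = 2.7, 1.0          # m, beam width and height (middle wall)
    I = b*hgt**3/12.0          # m^4, second moment of area
    L = 30.0                   # m, beam length
    q = 911.0*b                # kN/m, line load (911 kPa x 2.7 m width)
    k_rock, k_clay = 350e3, 100e3   # kN/m^3 (350 / 100 MPa/m, Table 1)

    n_el = 60
    le = L/n_el
    ndof = 2*(n_el + 1)
    K = np.zeros((ndof, ndof)); F = np.zeros(ndof)

    def beam_ke(EI, l):
        """Euler-Bernoulli beam element stiffness, dofs (w1, th1, w2, th2)."""
        return EI/l**3*np.array([[ 12.0,   6*l, -12.0,   6*l],
                                 [  6*l, 4*l*l,  -6*l, 2*l*l],
                                 [-12.0,  -6*l,  12.0,  -6*l],
                                 [  6*l, 2*l*l,  -6*l, 4*l*l]])

    def wink_ke(kl, l):
        """Consistent Winkler foundation matrix, kl = modulus per unit length."""
        return kl*l/420.0*np.array([[ 156.0,   22*l,  54.0,  -13*l],
                                    [  22*l,  4*l*l,  13*l, -3*l*l],
                                    [  54.0,   13*l, 156.0,  -22*l],
                                    [ -13*l, -3*l*l, -22*l,  4*l*l]])

    for e in range(n_el):
        xm = (e + 0.5)*le                              # element midpoint
        k_sub = k_rock if (xm < 5.0 or xm > L - 5.0) else k_clay
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(dofs, dofs)] += beam_ke(E*I, le) + wink_ke(k_sub*b, le)
        F[dofs] += q*le*np.array([0.5, le/12.0, 0.5, -le/12.0])  # uniform load

    u = np.linalg.solve(K, F)          # free-free beam; foundation restrains it
    w = u[0::2]                        # nodal deflections (m)
    M = E*I*np.gradient(np.gradient(w, le), le)   # approximate moment (kN*m)
    print(f"max deflection {1e3*w.max():.1f} mm")
    print(f"extreme moments {M.min():.0f} / {M.max():.0f} kN*m")

Consistent with the discussion above, the moment extremes of such a model concentrate near the rock-clay transitions, while the mid-span shows the largest deflection with small internal forces.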
3.4 Discussion about the invert
The invert and the lining can be cast in either order. If the invert is done later, the foundation beam bears the whole overlying pressure, and the structural calculation omits the invert. If the invert is done first, part of the pressure is diverted, the stress on the beam decreases, and the invert is included in the calculation. To ascertain the actual influence of this kind of

Table 2. Maximum internal forces of the foundation beam.

Location of beam    Mmax at Point 4 (kN·m)    Mmax at Point 8 (kN·m)
Side wall           −1662                     1410
Middle wall         −1663                     1687

Notice: for the point locations see Figure 5; Mmax – absolute maximum moment, Qmax – absolute maximum shear, Wmax – maximum deflection. A positive/negative bending moment causes compression/extension of the top fibres of the beam; a positive shear tends to rotate each portion of the beam clockwise with respect to its other end, otherwise it is negative.
Figure 3. Calculation scheme of the elastic foundation beam.
In order to discuss the influence of the filling on the internal force state of the foundation beam, comparative calculations were made for different fillings. Taking the middle wall as an example, after regression treatment the result is shown in Fig. 5. As shown in Fig. 5, there is a well correlated power-function relationship between the maximum positive and negative moments and the elastic coefficient of the filling, the value K. As K becomes smaller, the maximum moment grows faster; in particular, when K falls below a certain level (30 MPa/m), the maximum moment increases dramatically, and the negative moment becomes more pronounced than the positive moment. No matter how the elastic coefficient changes, the point of action of the maximum negative moment is always the fourth point, namely the edge of the groove. Therefore the junction of the two different foundation media is the place most liable to damage, with the upper edge of the beam the more vulnerable. With decreasing K, the maximum positive moment gradually moves toward mid-span and eventually occurs at mid-span. This shows that the groove filling has an important effect on the distribution of the moment along the whole beam.
Table 1. Calculation parameters of the foundation beam.

Location of beam   Section of beam (m)   Elastic foundation coefficient (MPa/m):   Elastic modulus   Unit weight   Vertical load
                   (long × high)         rock of grade IV / cave clay medium       (MPa)             (kN/m³)       (kN/m²)
Side wall          1.0 × 1.0             350 / 100                                 2.85 × 10⁴        25            2200
Middle wall        2.7 × 1.0             350 / 100                                 2.85 × 10⁴        25            911
Figure 4. Internal force and deflection of the beam under the middle wall.
Figure 5. Influence of different fillings on the max moment of the foundation beam.

Table 3. The influence of the invert on the foundation beam load.

Location of beam   Setting invert   Pressure (kN)   Acting load (kN/m²)   Load decrement (%)
Middle wall        Yes              2390            885                   2.8
                   No               2460            911
Side wall          Yes              1840            1840                  16
                   No               2200            2200
groove filling on the beam, a comparative calculation was made for this case. The result shows that the main effect of the invert is to improve the internal force state of the lining structure. Taking the vault node as an example, the moment is decreased by about 70% when the invert is set, but the invert does little to reduce the load on the beam, especially the middle wall beam, where only 2.8% is shed. This is slightly inconsistent with the initial estimate that the invert would carry most of the foundation beam load. Analysis shows that the lining load is mainly taken by the foundation beam, because the invert rests on the soft groove filling while the lining's wall corner rests on the solid foundation beam; the great difference in their stiffness leads to subsidence of the invert. Therefore, when this kind of foundation beam is designed, the contribution of the invert foundation should not be overstated. The influence of the invert on the foundation beam is listed in Table 3.
4 CONCLUSION
(1) When the groove is filled, the beam should be regarded as resting on an elastic foundation rather than as a simply spanning beam. The analysis shows that even when the filling's bearing capacity is completely neglected, its effect on the internal forces of the beam is obvious: comparing foundation elastic coefficients of k = 5 MPa/m and k = 0 MPa/m, the maximum moments of the beam are 29745 kN·m and 70230 kN·m respectively, differing by a factor of more than two. Therefore, when a groove is met, the compactness and bearing capacity of its filling, and the groundwater condition, should be earnestly ascertained; where possible, the bearing capacity should be taken into account to reduce the cost of the works. Moreover, appropriate grouting or replacement of the filling can be combined to enhance the bearing capacity artificially before erecting the beam to span the groove; this is more economical than entirely neglecting the foundation capacity in the beam calculation. (2) Generally speaking, because of the uncertainty of the surrounding rock, the load on the foundation beam is fuzzy; for a shallow-buried tunnel, however, this fuzziness is much smaller, so there is reasonable justification for evaluating the load on the foundation beam in the way described here. (3) The junction where the two different kinds of foundation medium meet is the most dangerous point of the foundation beam, where shear and tension cracks easily occur. (4) The function of the invert is mainly to improve the internal force state of the lining structure; as for sharing the load of the beam, its contribution varies with the bearing capacity of the filling, and when the filling is very weak the load shared by the invert is very small.
(5) The middle wall is the weakest part of the works, and horizontal bracing should be set to improve its stress state.
REFERENCES
[1] PENG Ding-chao, YUAN Yong, ZHANG Yong-wu. Spatial effects on mid-partition due to excavation of a double-arched tunnel[J]. Modern Tunnelling Technology, 2002, 39(1):47–53.
[2] LU Yao-zong, YANG Wen-wu. Research on construction scheme of Lianhuashan double-arch tunnel[J]. China Journal of Highway and Transport, 2001, 14(2):75–77.
[3] LIU Hong-zhong, HUANG Lun-hai. Overview of design and construction of tunnel with multiple arch[J]. West China Exploration Engineering, 2001, 68(1):54–55.
[4] XIA Cai-chu, LIU Jin-lei. Study on the middle wall stress of Xiangsilin double-arch tunnel[J]. Chinese Journal of Rock Mechanics and Engineering, 2000, 19(Supplement):1116–1119.
[5] LIU Zhi-kui, LIANG Jing-cheng, ZHU Shou-zeng, et al. Stability analysis of rock foundation with cave in karst area[J]. Chinese Journal of Geotechnical Engineering, 2003, 25(5):630–633.
[6] ZHAO Ming-jie, AO Jian-hua, LIU Xu-hua, et al. Study on deformation character of the surrounding rock masses concerning the influence of karst caves in the bottom of tunnel[J]. Journal of Chongqing Jiaotong University, 2003, 22(2):20–23.
[7] ZHOU Yu-hong, ZHAO Yan-ming, CHENG Chon-guo. Optimum analysis on the construction process for joint arch tunnels in partial pressure[J]. Chinese Journal of Rock Mechanics and Engineering, 2002, 21(5):679–683.
[8] LI De-hong. Construction monitoring of multi-arch tunnel and its result analysis[J], 2003, 40(1):59–64.
[9] QI Zhi-fu, SUN Bo. Construction of multi-arch tunnels with twin large spans by NATM[J]. Journal of Railway Engineering Society, 2002(1):62–65.
[10] LIU Gui-ying, WANG Yu-xing, CHENG Jian-ping, et al. Structure analysis and working optimization of the double-arch tunnel of the expressway[J]. Geological Science and Technology Information, 2003, 22(10):97–100.
[11] CHEN Shao-hua, LI Yong. Structural analysis for a joined roof tunnel[J]. China Journal of Highway and Transport, 2000, 13(1):48–51.
[12] HAN Chang-ling. Structure design of double-arch integrity type tunnel[J]. Highway, 2000(11):79–81.
[13] The Second Survey and Design Institute of China Railway. Design technique handbook for railway engineering: Tunnel[M]. Beijing: Press of China Railway, 1995: 426–438.
[14] The Second Engineering Bureau of China Railway. Construction technique handbook for railway engineering: Tunnel, the next volume[M]. Beijing: Press of China Railway, 1995: 323–329.
[15] ZHANG De-he. Design and construction of tunnel crossing cavern with accumulations[J]. Underground Space, 1999, 19(2):93–100.
Safety measures by utilizing the old ridge road and potential risks of the land near old river alignments M. Okuda Okuda Construction: Nagoya, Japan
Y. Nakane Showa Concrete Industries Co., Ltd, Nagoya, Japan
Y. Kani Nippon Concrete Industries Co., Ltd, Nagoya, Japan
K. Hayakawa Ritsumeikan University, Kusatsu, Japan
ABSTRACT: This paper studies, from historical viewpoints, the safety value of old ridge roads and the potential risks of land near old river alignments. Many old roads in mountainous areas run along the crests of mountain ridges. Since a ridge road is located far from the river and on the top of the mountain slope, the old ridge road can be used by people coming and going while the modern riverside road is blocked by calamity. This can be demonstrated by the existence along such roads of old facilities with long and distinguished histories: old temples, old travelers' guardian deities, goddess stones, statues of Mercy, etc. The old ridge roads have existed steadily through historical calamities over long periods, and this is also borne out by experience of recent flood and earthquake calamities. The old ridge road can therefore be utilized as a fail-safe path in times of calamity. The alignments of the old Kiso River in the Nobi Plain were first investigated through the literature, and it was found that the old Kiso River had many offshoots in the Nobi Plain. Place-names near the old Kiso River and its offshoots are also useful information for judging the alignments of the old river flow. A ground reconnaissance was made along the alignments of the old Kiso River. Many sections of the old offshoot alignments have already been reclaimed and developed as residential areas, farmland, roads, irrigation channels, etc. Such developed land no longer retains what it used to be. These developed places carry potential risks of flood and of earthquake damage, owing to the lower ground, the existence of shallow groundwater, and the existence of shallow, loose sandy layers that pose a liquefaction risk.
1 INTRODUCTION
This is a qualitative study presenting safety measures that utilize old ridge roads as fail-safe paths in times of calamity. Akiha Old Road is selected as a case study of an old ridge road, and three examples of old ridge roads utilized during calamities are also presented. The study also presents the potential ground risks of flood and liquefaction along the old offshoot alignments of the Kiso River in the Nobi Plain, along which a ground reconnaissance was made. The findings on the safety measures utilizing the old ridge road and on the potential risks along the old river flow are presented from historical viewpoints in this study.
2 OLD MOUNTAINOUS ROADS
2.1 Status of old roads in the mountainous area
Many old mountainous roads in the Tokai area run along the crests of mountain ridges and have been utilized since the Japanese Warring States Period (16th century) or before. These old mountainous roads are located far from the rivers, on the tops of the mountain slopes.
Photo 1. Gateway at the entrance to the Akiha Shrine in Hamamatsu City.
Photo 2. A Stone Guidepost along Akiha Old Road.
The distinguished history of one old mountainous road, Akiha Old Road, is introduced hereinafter as an example of the long history of old ridge roads.
2.2 History of Akiha Old Road
The distinguished history of Akiha Old Road can be proved by the existence of old shrines, an old temple, stone statues, stone images of Buddha, etc. along its alignment, as follows: i) Shuyo Temple: this temple has been famous for a fire festival; it is located about 700 m above sea level and was founded by Gyoki Bodhisattva about 1300 years ago. ii) Akiha Shrine: this shrine had been integrated with the Shuyo Temple up to the end of the Edo era (1868). The shrine is located 885 m above sea level and has also been famous for fire prevention and metallurgy. It holds about 400 Japanese swords dedicated by samurai; famous dedicated swords are "Masamune" by Shingen Takeda and "Osafune" by Kansuke Yamamoto in 1534. iii) Stone statues: the letters of the Bunka era (1804–1818) are chiseled on the stone statues of Jizo, and the years 1760, 1787, etc. are chiseled on the stone guideposts. iv) Yamazumi Shrine: this shrine has been famous for the wolf and for war; it is located about 1100 m above sea level and was founded about 1300 years ago. A pair of stone guardian wolves stands at the entrance of Yamazumi Shrine. Thus, Akiha Old Road has been utilized by the people for 1300 years or more as a road of daily life, of the Akiha faith, of the transportation of volcanic glass and salt, and as a road for military operations. The evidence of this long history is spread along the alignment of Akiha Old Road.
2.3 Historical calamities near Akiha Old Road
The main part of Akiha Old Road is located in Haruno town in Hamamatsu City. According to a book
Photo 3. A Pair of Stone Guardian Wolves at the Entrance of Yamazumi Shrine.
written by Kishita (1984), there is no description of any disaster occurring along the Akiha Old Road, but the book describes the following calamities along the rivers: i) an earthquake in 715 caused a landslide, and the slid material blocked the flow of the Tenryu River for several months; ii) a flood occurred in 1715, the most serious flood in 180 years; iii) an earthquake occurred in 1854, and 1400 houses in the county were completely destroyed; iv) bridges were washed away by heavy rain in 1904. The historical calamities thus occurred near the rivers in Haruno Town, but seldom along ridge roads such as Akiha Old Road. Furthermore, the long history of Akiha Old Road mentioned above also indicates that the ridge road has been relatively safe and stable.
2.4 Recent examples of old ridge roads utilized during calamities
Example 1: Heavy rain struck Obara and Fujioka villages of Toyota City in July 1972. The downpour reached 85 mm/hour, and the death toll from the heavy rain climbed to 67. The day after the downpour an old ridge road was passable, but the prefectural road near the river was not, as shown in Figure 2.
Figure 3. Passable Old Roads in Yamakoshi Vill. of Nagaoka City after the 2004 Mid Niigata Prefecture Earthquake.
Figure 1. Alignments of Akiha Old Road in Hamamatsu City. (after Nakane).
Figure 4. Provision Road for Disaster and An Old Ridge Road in Okazaki City.
Figure 2. Passable Ridge Road at Obara Village of Toyota City after Heavy Rain in 1972.
Example 2: The 2004 Mid Niigata Prefecture Earthquake, a thrust-type earthquake, occurred on October 23, 2004 near Nagaoka City in Niigata prefecture. The maximum magnitude was 6.8 (JMA), and the death toll from the earthquake climbed to 68. The main lifeline roads in Yamakoshi village of Nagaoka City were not passable owing to earthquake damage at that time; however, some old ridge roads were passable, as shown in Figure 3, and they were utilized as access roads just after the earthquake. Example 3: A provision road against disaster in Kobu town of Okazaki City was provided by the Okazaki city authority in 2005. The road alignment was selected near and parallel to the alignment of an old ridge road (Zemanjo Road), as shown in Figure 4.
Figure 6. Typical Sectional Dimensions of Okakoizutsumi Dike (after Nishida).
Figure 5. Main Offshoots of Old Kiso River (after “Kisogawa Town History”, 1981).
2.5 Merits of ridge roads
The reasons why many mountainous roads were given ridge alignments and have been sustained for long periods are as follows: i) military operations on a ridge road have an advantage over the enemy; ii) ridge roads are safe from the dangers of wild animals, harmful insects and vipers, because animal paths do not overlap with ridge roads; iii) there are fewer crossings of rivers and swamps; iv) visibility is good; v) there is less obstruction by falling stones.
3 OFFSHOOT ALIGNMENTS OF OLD KISO RIVER
3.1 Search for old offshoots described in the literature
The old Kiso River ran along the Sakai River and joined the Nagara River until 1586. According to the "Study on the Place-names of the Owari Clan, 1916", there were seven offshoots of the old Kiso River; other literature describes not only seven but also eight or twelve offshoots (for example, the "Asahi Village Journal, 1963"). It can therefore be understood that the old Kiso River had many offshoots, though no definite number. According to a report of the resources investigation committee of the Prime Minister's Office, there were three main offshoots of the old Kiso River at the beginning of the Edo period (1603–1867), as indicated in Figure 5. The 1st, 2nd and 3rd offshoots of the old Kiso River were identified as the Ishimakura River, the Hannya River and the Azai River, respectively. Furthermore, the Kuroda River and the Ajika River were also
Photo 4. Signboard of Okakoizutsumi along present Kiso River in Konan City.
offshoots of the old Kiso River, and the Saya River was known to be a main flow of the old Kiso River. The Owari Clan, who ruled the territory on the left side of the Kiso River during the Edo period, constructed a left dike about 48 km long along the old Kiso River from Inuyama City to Yatomi City between 1608 and 1609, in order to protect the territory against floods of the old Kiso River. This left dike has been called "Okakoizutsumi". The typical sectional dimensions of the Okakoizutsumi dike are shown in Figure 6. The Okakoizutsumi dike was constructed only along the left bank of the old Kiso River; the right bank area was therefore often affected by flooding. All the offshoot flows except the Saya River were blocked off at their forks from the old Kiso River by the Okakoizutsumi embankment. It can be noticed from Figure 6 that trees were planted on the dike, although the modern Japanese river code does not, in principle, allow trees to be planted on the outer slope of an embanked dike. Planting trees on the outer dike, however, can be called a Japanese traditional method: article 17 of the Yozenryo (a regulation of building and repairs) in the Taihoritsuryo (an old Japanese regulation established in 701) specified that trees (elm, willow, etc.) should be planted on dikes.
3.2 Ground reconnaissance along the alignment of the old Kiso River
Some dike sections of Okakoizutsumi still exist along the present Kiso River and also along the left bank of the abandoned Saya River.
Figure 7. An Existing Sectional Dimension of Okakoizutsumi Dike at Futago Town in Aisai City (after Nakane).
Photo 5. Trace of 1st Offshoot, Aoki River at Kashiwamori.
Photo 6. Trace of 2nd Offshoot, Hannya River at Konan City.
Figure 8. Soil Profile along Okakoizutsumi Dike from Sobue area to Tatsuta area (after Nakane).
An existing section of the abandoned Okakoizutsumi dike at Futago Town in Aisai City is shown in Figure 7. The soil profile along the Saya River, as determined from three borings, is shown in Figure 8. Boring points A, B and C are located along the old Saya River about 30 km, 26 km and 20 km, respectively, above the present Kiso River mouth. The strata names of the soil profile comply with the descriptions in the literature "Ground of Inazawa", because the order of the strata resembles the soil profile of western Inazawa City presented there. After the blockage of the branch forks in 1610, some sections of the abandoned offshoots were gradually reclaimed and developed as residential areas, farmland, roads, irrigation channels, etc. The Saya River fork was also blocked off from the Kiso River in 1900, following the dike burst at Utasu in Aisai City in 1897, and the land along the abandoned Saya River was likewise developed.
Photo 7. Trace of 3rd Offshoot, Nikko River at Inazawa City.
As shown above, the developed land along the old offshoots no longer retains what it used to be. The Aoki River is the present name of the Ishimakura River, identified as the 1st offshoot of the old Kiso River. Place-names also indicate the traces of the old Kiso River alignments. For example, the place-name "Kotsu" at the fork of the 1st offshoot indicates a timberland for drift timber; "Kura" of the Ishimakura River (1st offshoot) indicates narrow ground between rivers. Hannya of the Hannya River (2nd offshoot) was originally called "Haniya",
Figure 9. Zero-meter Area in Nobi Plain (after Society of Tokai-three-Pref. Ground Settlement Investigation).
Figure 10. Flooding Simulation Map of Aoki River in 40 min. after Burst of Aoki River Dike at 2.2 km (after Inazawa City H.P.).
which indicates dried-up land left by a mud flow. "Azai" of the Azai River (3rd offshoot) indicates a swampy area and/or the existence of shallow groundwater. Minor place-names related to rivers and swamps can also be found along the offshoot areas of the old Kiso River, such as Furukawa (old river), Sunaba (sandy ground), Kawahara (riverbed), Hatagawa (an old river name), Hasuike (an old swamp name), etc. The grounds bearing these minor river- and swamp-related place-names used to be lower than the surrounding ground.
3.3 Potential risks of flooding at the lower ground
The Nobi Plain spreads over about 1,300 km2 and slopes gently from northeast to southwest. The plain is also known for the "zero-meter area" in its southern part. The zero-meter area spreads over about 274 km2 (after Committee of Ground Subsidence at the Tokai Three Prefectures, 1988), which occupies about 21% of the Nobi Plain. Most of the zero-meter area was flooded when the Vera Typhoon (Isewan Typhoon) struck in 1959. This zero-meter area in the Nobi Plain is the largest in Japan and carries a potential risk of flooding. Some local governments have announced the potential flood-risk areas near the offshoots of the old Kiso River; this information is available on the local governments' websites. An example of a flooding simulation along the Aoki River (1st offshoot) is shown in Figure 10. Thus, the zero-meter area and some areas near the offshoot alignments of the old Kiso River have a potential risk of flooding. This flooding risk can also be recognized in Figure 11, which overlays Figure 5 (after Kisogawa Town) of Section 3.1 on a map of the anticipated
Figure 11. Anticipated flooding area and the offshoots of old Kiso River (after Aichi Pref. 1978 and Kisogawa Town).
flooding area in case the right dike of the Kiso River bursts at Yamana in Fuso Town (after "Flood Control Plan of Aichi Prefecture", 1978).
3.4 Potential risks of liquefaction near the offshoots of the old Kiso River
The Aichi prefectural authority has officially announced the predicted degree and zones of liquefaction for the anticipated future mega-earthquake, as shown in Figure 12. In order to check the liquefaction potential along the alignment of Okakoizutsumi, four sandy soil samples
Figure 12. Predicted degree and zones of liquefaction for the future Tonankai Earthquake (Aichi Pref. H.P.).
Figure 13. Gradations of soil samples obtained from Okakoizutsumi and the liquefaction potential zone specified in JMSDC.
Table 1. Criteria of liquefaction potential by JBFDC (1974).

Item                     JBFDC criteria    Okakoizutsumi
Finer than #200 sieve    <10%              0.5–2.9%
D50                      0.075–2.0 mm      0.34–0.47 mm
Uc                       <10               2.3–3.6
were collected from the Okakoizutsumi dike at Bisai, Sobue, Saya and near Yatomi. These samples are expected to belong to the top sandy layer at those places, because the borrow pits for the labor-intensive Okakoizutsumi earthwork would have been the adjacent ground, within about two meters of the ground surface. The sandy soil samples have a mean grain diameter (D50, the 50% finer size) of 0.34–0.47 mm and a coefficient of uniformity (Uc) of 2.3–3.6. According to the Japanese Building Foundation Design Code (JBFDC, 1974), soils satisfying the conditions in Table 1 have a potential for liquefaction. Hence, the sandy soil samples obtained from Okakoizutsumi can be evaluated as having a potential risk of liquefaction under the JBFDC (1974) criteria. The Japanese Marine Structural Design Code (JMSDC, 1989) also specifies liquefaction potential zones by soil gradation; the gradations of the sandy soil samples obtained from Okakoizutsumi fall within the higher-potential zone of liquefaction, as shown in Figure 13. In addition, the Saori Town History Book records 157 liquefaction cases observed in the past. Furthermore, the Aichi Deposited Culture Center in Yatomi City holds a relief of quicksand (estimated to have occurred during the Tensho Earthquake in 1586) that penetrated the ground at the archaeological remains of the Kiyosu Castle downtown, which was located near the Gojo River. The Gojo River was the downstream reach of the 1st offshoot of the old Kiso River. Figure 14 shows the traces of quicksand observed at the archaeological remains of the Kiyosu Castle downtown. As mentioned above, some areas near the offshoot alignments of the old Kiso River in the Nobi alluvial plain have a potential risk of liquefaction.
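The screening in Table 1 is simple enough to express in a few lines of code. The sketch below is only an illustration of the JBFDC (1974) gradation criteria as summarized above; the function name and the two sample records are hypothetical, with values taken from the measured ranges quoted in this section.

```python
# Minimal sketch of the JBFDC (1974) gradation screening in Table 1.
# Function and sample records are illustrative, not from the study itself.

def jbfdc_liquefiable(fines_pct: float, d50_mm: float, uc: float) -> bool:
    """Return True if a sandy soil meets all three JBFDC (1974)
    screening conditions for liquefaction potential."""
    return fines_pct < 10.0 and 0.075 <= d50_mm <= 2.0 and uc < 10.0

# Measured ranges for the four Okakoizutsumi samples (Section 3.4):
# fines 0.5-2.9%, D50 = 0.34-0.47 mm, Uc = 2.3-3.6.
samples = [
    {"fines": 2.9, "d50": 0.34, "uc": 3.6},   # least favourable combination
    {"fines": 0.5, "d50": 0.47, "uc": 2.3},   # most favourable combination
]
for s in samples:
    verdict = "liquefiable" if jbfdc_liquefiable(s["fines"], s["d50"], s["uc"]) else "not liquefiable"
    print(s, "->", verdict)
```

Both extreme combinations satisfy the criteria, consistent with the conclusion drawn above for the Okakoizutsumi samples.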
Figure 14. Traces of Quicksand Observed at the Archaeological Remains of Kiyosu Castle Downtown (after Hattori).
4 CONCLUSION
The old ridge roads are relatively safe and stable, as evidenced by their long history and present condition. Some of them can therefore be utilized as fail-safe paths when modern paved riverside roads are blocked during a future calamity, as demonstrated by the recent cases described in Section 2.4 of this paper. Accordingly, the old ridge roads deserve more attention and proper maintenance, so that they can serve as fail-safe paths in future calamities. Some reclaimed grounds and hinterlands along the offshoot alignments of the old Kiso River have potential risks of flooding and liquefaction in future calamities, because these areas used to have (1) ground lower than the surroundings, (2) possibly shallow groundwater, and (3) possibly shallow, loose sandy layers. Many reclaimed grounds along the
offshoot alignments of the old Kiso River can be identified by place-names related to rivers and swamps. Although some local governments have officially announced these potential risks of flooding and liquefaction, ground developers, building designers and property owners should also pay attention to them when executing projects in the areas of the old offshoot alignments and their hinterland. From the historical viewpoint presented in this paper, calamity will repeat itself along riverside roads and old offshoot sites. We should not forget this potential ground risk of historical repetition.
REFERENCES

Aichi Pref. 1964. Recovery report from the calamity of Ise Bay Typhoon. p. 221, Nagoya. Aichi Pref.
Aichi Pref. 1978. Flood control plan of Aichi prefecture. p. 274, Nagoya. Aichi Pref.
Aichi Pref. H.P. 2008. Damage prediction of Tokai & Tonankai Earthquake. http://www.pref.aichi.jp/bousai/all/all.htm. Nagoya. Dept. of Disaster Prevention, Aichi Pref.
Akiha Shrine. 1999. Swords reserved by Akiha Shrine. pp. 64, 80, Hamamatsu. Akiha Shrine.
Editing committee of Asahi village journal. 1963. Asahi village journal. p. 111, Nagoya. Iwata S.
Editing committee of Bisai city history. 1998. Bisai city history Vol. 2. p. 8, Bisai. Bisai City.
Editing committee of Kisogawa town history. 1981. Kisogawa town history. pp. 41–42, Aichi. Kisogawa Town.
Editing committee of Saori town history. 1989. Saori town history. p. 17, Aichi. Saori Town.
Hattori T. 1993. Annual report of Heisei 4 financial year. pp. 129–131, Nagoya. Aichi Pref. Deposited Culture Center.
Inazawa City H.P. 2008. Flooding Simulation Map. http://www.city.inazawa.aichi.jp/kurashi/hazard_map/simulation/index.html. Inazawa. Inazawa City.
JGC-Chubu committee of study on the Nobi ground. 1996. Ground of Inazawa. pp. 29, 40, 45, Inazawa. Inazawa City.
JGS. 2006. Ground at Nobi Plain, Geotech Note 15. pp. 19, 105, Tokyo. Maruzen.
Kishita T. 1984. Chronicles of Haruno. pp. 11, 128, 166, 212, Hamamatsu. Kishita T.
Kubomi M. 1916. New interpretation of Taiho Ryo Vol. 3. p. 591, Tokyo. Meguro J.
Nakane Y. et al. 2008. A consideration about the practical use of the old road on the disaster. Proceedings of the 28th Annual Conference of Historical Studies in Civil Engineering. pp. 203–214, Tokyo. JSCE.
Nakane Y. et al. 2008. A study on the old river channels. No. 141 Symposium for the construction and conservation technique of historical geotechnical structure. pp. 13–20, Tokyo. JGS.
Nishida S. 1994. Life on the Kiso-three-river. pp. 51–53, Gifu. Institute for Water and Culture of Kiso-three-river.
Oka M. 1979. Farmers' biography 7, Nihon Agriculture Library Vol. 16. p. 293, Tokyo. Association of farm/mountain/fishing village's culture.
River Front Maintenance Center. 1999. Management manual of tree in the river boundary. pp. 183–193, Tokyo. Sankaido.
Society of Tokai-three-Pref. Ground Settlement Investigation. 1985. Ground settlement and ground water at Nobi Plain. p. 115, Nagoya. Nagoya Univ. Press.
Takigawa S. 1931. Study on Ritsuryo. p. ???
Tsuda M. 1916. Study on the place-name of Owari Clan. pp. 243, 245, 246, 253, 312, 497, Tsushima. Education Board of Ama County, Aichi Pref.
Bearing capacity of rigid strip footings on frictional soils under eccentric and inclined loads K. Yamamoto Department of Ocean Civil Engineering, Kagoshima University, Kagoshima, Japan
M. Hira Department of Environmental Sciences and Technology, Kagoshima University, Kagoshima, Japan
ABSTRACT: Finite element analyses were applied to find the exact limit load of rigid strip footings on frictional soils under eccentric and inclined loads. In addition to the values of the limit bearing capacity, the contact normal and shear stress distributions below the footing were obtained. The results were compared with the standard design assumptions of an effective width by Meyerhof for bearing capacity calculations and of a linear distribution of the contact normal stress on the base of the footing. The comparison suggests that Meyerhof's and Hansen's procedures are unconservative at large eccentricities. Finally, a new bearing capacity equation is proposed to give reasonable bearing capacities at large eccentricities.
1 INTRODUCTION
Footings are frequently subjected to eccentric and inclined loads due to moments resulting from the action of wind, water or earth pressures, among several other possible sources. In practice, the design of such footings is usually conducted in two parts: an eccentric vertical load, and a centered, inclined load. The bearing capacity of the footing is then obtained by analyzing the problem in two separate parts: (1) the bearing capacity of a footing subjected to eccentric vertical loads; and (2) the bearing capacity of a footing subjected to centered, inclined loads. The two bearing capacities so obtained are superposed to get the bearing capacity of a footing subjected to eccentric and inclined loads. Many researchers have investigated footings subjected to eccentric vertical loads (e.g., Meyerhof 1953; Prakash & Saran 1971; Purkayastha & Char 1977). Similarly, footings subjected to centered, inclined loads have also been investigated by many researchers (e.g., Meyerhof 1953; Saran et al. 1971; Hanna & Meyerhof 1981). A few researchers have investigated footings subjected to eccentric and inclined loads with the help of model tests (e.g., Meyerhof 1953; Saran & Agrawal 1991). Also, Saran & Agrawal (1991) obtained the bearing capacity of an eccentrically and obliquely loaded footing using the limit equilibrium method. Despite the common occurrence of inclined and eccentric loads, rigorous solutions for the bearing capacity and the contact stress distribution of footings subjected to such loads were not previously available. Instead, approximate solutions, often based on empirical observation, have typically been
used (e.g., Meyerhof 1953; Hansen 1970). A commonly used approach for dealing with eccentric loads in design was first suggested by Meyerhof (1953), who proposed that an effective width B − 2e, defined as the footing width (B) reduced by twice the load eccentricity (e), be used in the calculation of bearing capacity. This procedure has been widely adopted in geotechnical design. For eccentrically loaded footings, the contact stress distribution with the soil is not uniform. In design, the contact stress distribution for an eccentrically loaded footing with eccentricities in both directions is typically assumed to be linear in both the B and L directions, but this is a simplification of the real distribution. Peck et al. (1953) pointed out that if an eccentrically loaded footing is to fail in bearing capacity, the right comparison would be between the maximum stress at the toe of the footing and the limit bearing capacity for a much smaller effective width of the footing. This is conceptually the most correct approach, since any bearing capacity failure will take place at or near the toe of the footing. The unit bearing capacity near the toe in sands, however, is not the same as would be calculated for a centrally loaded footing, for which the whole width B is operative. The difficulty with this procedure is that no results exist prescribing what fraction of B to use for calculating the bearing capacity. Peck et al. (1953) suggested 1/3 of B, but this is too conservative. Regarding the approach for dealing with inclined loads in design, Meyerhof (1953) and Hansen (1961) proposed inclination factors ic, iγ and iq for the reduction of the bearing capacity factors Nc, Nγ and Nq. The objective of this paper is to apply finite element analyses to evaluate the bearing capacity of a rough
Figure 1. Sign convention.
rigid strip footing on frictional soils under eccentric and inclined loads, as well as the contact normal and shear stress distributions below the footing. In addition, the validity of the current design methods is investigated. The results are compared with the standard design assumptions of an effective width for bearing capacity calculations (Meyerhof 1953) and of a linear distribution of the contact normal stress on the base of the footing. The results obtained in this paper may provide important information for future design.
2 FINITE ELEMENT ANALYSIS
Finite element analysis can be conducted to approximate the exact limit load for a rough rigid strip footing under eccentric and inclined loads, and to obtain the contact normal and shear stress distributions below the footing. The model follows the Mohr-Coulomb yield criterion without the gradient singularities at the corners of the yield surface in the deviatoric plane (Abbo & Sloan 1995). An associated flow rule was assumed. The mesh was composed of 6-noded triangular elements with quadratic shape functions under plane-strain conditions. Every mesh used in the analysis was made very fine near the point of load application. The finite element analysis was conducted by first applying gravity to a level ground to construct the initial stress field, and then applying the vertical and horizontal loadings incrementally until a limit load was defined. A Newton-Raphson iterative scheme was used to solve the problem. The Newton-Raphson scheme is a popular method for the solution of nonlinear elasto-plastic problems. It is an incremental-iterative scheme, which uses tangent stiffness iteration to obtain the solution; the stiffness matrix is updated after each iteration. It was confirmed that the modulus of elasticity (E) and Poisson's ratio (ν) of the soil do not influence the value of the limit load, provided that E and ν are in a reasonable range.
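As an aside, the incremental-iterative character of the scheme can be illustrated with a minimal sketch. The code below is not the analysis code used in this paper; it applies the same idea (load applied in increments, tangent stiffness updated after every iteration) to a hypothetical one-degree-of-freedom hardening spring.

```python
def newton_raphson_load_stepping(f_int, k_tan, f_max, n_steps=10,
                                 tol=1e-10, max_iter=50):
    """Incremental-iterative Newton-Raphson scheme: the load is applied
    in increments and, within each increment, tangent-stiffness
    iterations (stiffness updated after every iteration) are repeated
    until the out-of-balance force vanishes."""
    u, history = 0.0, []
    for step in range(1, n_steps + 1):
        f_ext = f_max * step / n_steps        # current load level
        for _ in range(max_iter):
            residual = f_ext - f_int(u)       # out-of-balance force
            if abs(residual) < tol:
                break
            u += residual / k_tan(u)          # tangent-stiffness correction
        history.append((f_ext, u))
    return history

# Hypothetical hardening spring: f_int(u) = 10 u + 40 u^3.
for f, u in newton_raphson_load_stepping(lambda u: 10*u + 40*u**3,
                                         lambda u: 10 + 120*u**2, f_max=5.0):
    print(f"load {f:4.1f} -> displacement {u:.5f}")
```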
3 RESULTS AND DISCUSSION
Figure 1 shows the sign convention. Regarding the signs of the load eccentricity e and the load inclination angle α, the left side from the center of the footing and the clockwise direction from the vertical
are positive. Figure 2 shows a typical finite element mesh. The bottom boundary of the mesh was fixed and the lateral boundaries were modeled with rollers. The lateral boundaries were placed far enough away not to influence the limit load and failure mechanism. Although the bottom boundary appears not to have been placed very deep, it was confirmed that the results were essentially the same as when the base is placed at a depth of 4B. The thickness of the footing is 0.1B. The footing was assumed to be very rigid compared with the sand. The mesh was composed of 4160 triangular 6-noded elements and was particularly dense along the direction of the applied load and near the edges of the footing. The soil parameters were set as c = 0 and φ = 30°, 35° and 40°. In the analysis, eccentricity-to-width ratios of e/B = 0, 1/12, 1/6, 1/3 and inclination angles of α = 0°, 5°, 10°, 15° were considered. Figure 3 shows the load-displacement curves for the cases of centered and eccentric, vertical load (e/B = 0, 1/12, 1/6, 1/3 and α = 0°) obtained from FEM. In the analysis, multiple load-increment steps were used to obtain a more accurate limit load. The figure indicates that the curves converge to limiting values as the vertical displacement increases; this convergent value of qb can be taken as the limit unit vertical load at the base of the footing. It is found from Fig. 3 that when e/B increases, the convergent value of qb/γB decreases. Figure 4 shows the distribution of contact normal stress below the footing under centered and eccentric, vertical loadings. In FEM, the stresses at the integration points of the elements are obtained, and the stresses at the nodal points below the footing are interpolated from those at the integration points of the elements containing these nodes. For nodes shared by several elements, the average value of the contact normal stresses was used. The straight lines represent the results obtained from the FEM. The dashed lines represent the linear distribution of contact stresses expressed as:
$$q_{b,\max},\;q_{b,\min} = \frac{Q}{BL}\left(1 \pm \frac{6e}{B}\right) \qquad (1)$$

for e/B ≤ 1/6, and

$$q_{b,\max} = \frac{2Q}{3L\left(B/2 - e\right)} \qquad (2)$$

for e/B > 1/6; where B and L are the width and length of the footing; Q is the total vertical load; and e is the eccentricity. It is noted that in Eq. (1), when e/B becomes 1/6, qb,min equals zero. For e/B > 1/6, qb,min would be negative, which means that tension would develop. Since soil cannot take any tension, there will be a separation between the footing and the soil, and the shape of the pressure distribution becomes a triangle, described by Eq. (2). The exact distribution of contact pressures is considered difficult to estimate. In Fig. 4(a), which is the case of centered, vertical loading (e/B = 0 and α = 0°), the distribution is symmetrical with respect to the center of
Figure 2. Typical finite element mesh for FEM.
Figure 3. Relationship between qb /γB and vertical displacement/B from FEM (φ = 35◦ , α = 0◦ ).
the footing, and the maximum value of the distribution is obtained at the center of the footing. Note that tension is taken as positive for the contact normal stress below the footing. As the eccentricity-to-width ratio (e/B) increases beyond 1/6, the FEM shows that the extent of the contact stress distributions and the maximum values become smaller, consistent with the loss of contact between the footing and the soil at the trailing edge of the footing. Accordingly, qb/γB also decreases. From Fig. 4, the maximum value of the distribution obtained from the FEM occurs almost at the point of application of the load Q. For the case of e/B = 1/3 and α = 0° shown in Fig. 4(d), the area of contact between the soil and the footing is entirely located on the left side of the footing. As shown in Fig. 4, both the distribution shapes and the maximum values of contact normal stress are in good agreement between the FEM and the linear distribution. Figure 5 shows the distribution of contact shear stress below the footing from FEM, corresponding to Fig. 4. In Fig. 5(a), which corresponds to Fig. 4(a), the distribution of contact shear stress is symmetrical. Regarding the sign of the contact shear stress below the footing, the clockwise direction is positive. Like the distribution of contact normal stress shown in Fig. 4, the distribution and the value of contact shear stress become smaller as e/B increases. The point of application of the load, at which the value of contact shear stress equals zero, separates negative shear stresses on the right from positive shear stresses on the left. When e/B increases, the point at which the contact shear stress changes from positive to negative migrates in the same direction as the eccentricity e. In the case
of e/B = 1/3 and α = 0°, the contact shear stress is mobilized on the left side of the footing, as shown in Fig. 5(d). In the case of eccentric, vertical loading, the variations of the distributions of contact normal and shear stresses below the footing are more remarkable in the direction of the eccentricity. The zigzag tendency appearing in the distributions can also be found in Frydman & Burd (1997); it is likely due to the discretization of the continuum in the finite element method, the interpolation from the integration points of the elements, and the averaging over nodes shared by several elements. Figures 6 and 7 show the distributions of contact normal and shear stresses below the footing under centered and eccentric, inclined loadings. Like the tendency shown in Figs. 4 and 5, when e/B increases, the extent of the contact stress distributions and the maximum values become smaller, and qb/γB also decreases. The zigzag in the distributions is less pronounced in the direction of the applied (inclined) loading than on the opposite side. In Fig. 7, the areas where negative shear stresses act increase when the angle of load inclination is positive (clockwise from the vertical), compared with Fig. 5. Figure 8 shows the relationship between qb/γB and the eccentricity-to-width ratio (e/B) for the cases of φ = 30°, 35° and 40°. In this figure, the solution obtained from FEM and the solutions calculated using Meyerhof's and Hansen's bearing capacity equations are presented. The bearing capacity equations proposed by Meyerhof (1963) and Hansen (1970) are:
$$q_b = \tfrac{1}{2}\,\gamma\,(B - 2e)\,N_\gamma\,i_\gamma \qquad (3),\,(4)$$

where B is the footing width; e is the eccentricity; α is the angle of the load acting on the footing with respect to the vertical; φ and γ are the internal friction angle and unit weight of the soil; Nγ and the inclination factor iγ in Eqs. (3) and (4) are those given by Meyerhof (1963) and by Hansen (1970), respectively; and B − 2e is Meyerhof's effective footing width. Regarding the Nγ value, the solution obtained from FEM agrees well with Meyerhof's solution. The values of qb/γB obtained from Eq. (3) are always larger than those obtained from FEM, except when e/B = 0, when they nearly match the FEM
Figure 4. Contact normal stress distribution on footing base from FEM (φ = 35◦ ).
Figure 5. Contact shear stress distribution on footing base from FEM (φ = 35◦ ).
values. Also, the values of qb/γB obtained from Eq. (4) are larger than those from FEM for e/B > 1/12. Meyerhof's solution is always larger than Hansen's solution. When the internal friction angle increases, the difference between Meyerhof's and Hansen's solutions becomes large, particularly for e/B = 0. However, when the eccentricity-to-width
ratio (e/B) increases, the difference gradually gets smaller. These observations suggest that Meyerhof's and Hansen's Eqs. (3) and (4) tend to overestimate the bearing capacity when e/B becomes large (e/B > 1/6). This agrees well with the finding of Michalowski & You (1998), who examined Meyerhof's concept of an effective width (Meyerhof 1953) and showed that
Figure 6. Contact normal stress distribution on footing base from FEM (φ = 35◦ ).
its use may lead to upper bounds that are large for cohesionless soils. The following equation is proposed so as not to overestimate the bearing capacity at large eccentricities.
Figure 7. Contact shear stress distribution on footing base from FEM (φ = 35◦ ).
$$q_b = \tfrac{1}{2}\,\gamma\,(B - 2.5e)\,N_\gamma\,i_\gamma \qquad (5)$$

where Nγ and iγ are the bearing capacity and inclination factors given by Hansen (1970); the key of Eq. (5) is that B − 2.5e is used instead of B − 2e. The proposed equation corresponds well to Hansen's values for e/B ≤ 1/12 and is similar to the FEM results for e/B > 1/12.
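A sketch of how the three effective-width formulas compare can be coded directly. The snippet below assumes the classical closed forms Nγ = (Nq − 1) tan(1.4φ) for Meyerhof (1963) and Nγ = 1.5(Nq − 1) tan φ for Hansen (1970), restricts itself to the vertical-load case of Figure 8 (α = 0°, so no inclination factor is needed), and uses illustrative values of γ and B; it is not the authors' code.

```python
from math import exp, pi, radians, tan

def n_gamma_meyerhof(phi_deg):
    """Meyerhof (1963): N_gamma = (N_q - 1) tan(1.4 phi)."""
    phi = radians(phi_deg)
    n_q = exp(pi * tan(phi)) * tan(radians(45 + phi_deg / 2)) ** 2
    return (n_q - 1.0) * tan(1.4 * phi)

def n_gamma_hansen(phi_deg):
    """Hansen (1970): N_gamma = 1.5 (N_q - 1) tan(phi)."""
    phi = radians(phi_deg)
    n_q = exp(pi * tan(phi)) * tan(radians(45 + phi_deg / 2)) ** 2
    return 1.5 * (n_q - 1.0) * tan(phi)

def q_b(gamma, b, e, n_gamma, reduction=2.0):
    """Unit limit load 0.5 * gamma * B' * N_gamma for a vertical eccentric
    load (alpha = 0), with effective width B' = B - reduction * e;
    reduction = 2.0 gives Meyerhof's B - 2e, 2.5 the proposed B - 2.5e."""
    return 0.5 * gamma * (b - reduction * e) * n_gamma

gamma, b, phi = 18.0, 1.0, 35.0        # illustrative values (kN/m3, m, deg)
for e_over_b in (0, 1/12, 1/6, 1/3):
    e = e_over_b * b
    print(f"e/B = {e_over_b:.3f}: "
          f"Meyerhof {q_b(gamma, b, e, n_gamma_meyerhof(phi)):7.1f} kPa, "
          f"Hansen {q_b(gamma, b, e, n_gamma_hansen(phi)):7.1f} kPa, "
          f"proposed {q_b(gamma, b, e, n_gamma_hansen(phi), 2.5):7.1f} kPa")
```

Running the sketch reproduces the trends discussed above: Meyerhof's values exceed Hansen's at e/B = 0, and the proposed B − 2.5e width yields the smallest capacities at large eccentricities.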
Figure 8. Relationship between qb/γB and eccentricity-to-width ratio (e/B) for various friction angles and α = 0°.
Figure 9 shows the relationship between qb/γB and the inclination angle α for φ = 35°. It is observed that Meyerhof's and Hansen's solutions are larger than the FEM values for α = 0° in Fig. 9(b), and for α = 0°, 5°, 10° and 15° in Figs. 9(c) and (d). Particularly in the case of e/B = 1/3 (Fig. 9(d)), the values obtained from Meyerhof's and Hansen's Eqs. (3) and (4) are much larger than the FEM values when α is small. In Fig. 9(a), the proposed equation agrees well with Hansen's value when α = 0°, and it gets closer to the FEM values as α increases. In Figs. 9(b)–(d), the proposed equation is closer to the FEM values than
Figure 9. Relationship between qb /γB and the load inclination angle α for φ = 35◦ .
Meyerhof's and Hansen's solutions. As a result, we can state that the solutions obtained from Meyerhof's and Hansen's Eqs. (3) and (4) are not accurate and tend to overestimate the bearing capacity when the eccentricity is large (e/B ≥ 1/3).
4 CONCLUSIONS
The bearing capacity of a rough, rigid strip footing on purely frictional soil subjected to eccentric and inclined loads was analyzed using the finite element method. The bearing capacities obtained from the finite element method were compared with those calculated from the bearing capacity equations proposed by Meyerhof (1963) and Hansen (1970). The conclusions drawn in this paper are summarized as follows: (1) The finite element analysis produces contact stress distributions and maximum contact stress values that are in good agreement with the corresponding values of the assumed linear stress distribution. (2) The bearing capacities calculated using the Meyerhof (1963) and Hansen (1970) equations, which are expressed in terms of an effective width B − 2e, are not accurate and tend to overestimate the bearing capacity, particularly when the eccentricity is large (e/B ≥ 1/3). Meyerhof's solution is always larger than Hansen's solution. When the eccentricity-to-width ratio (e/B) or the inclination angle increases, the difference between Meyerhof's and Hansen's solutions gradually becomes small. In general, foundation design avoids large eccentricities, so this deficiency in the Meyerhof (1963) and Hansen (1970) solutions is not usually present in a significant way. (3) A bearing capacity equation is proposed that does not overestimate the bearing capacity at large eccentricities. It is shown that the proposed equation can give reasonable bearing capacities even at large eccentricities.
REFERENCES

Abbo, A. J. & Sloan, S. W. 1995. A smooth hyperbolic approximation to the Mohr-Coulomb yield criterion. Comput. Struct., 54(3): 427–441.
Frydman, S. & Burd, H. J. 1997. Numerical studies of bearing-capacity factor Nγ. J. Geotech. Geoenviron. Eng., ASCE, 123(1): 20–29.
Hanna, A. M. & Meyerhof, G. G. 1981. Experimental evaluation of bearing capacity of footings subjected to inclined loads. Can. Geotech. J., 18: 599–603.
Hansen, J. B. 1961. A general formula for bearing capacity. Danish Geotech. Inst. Bull., 11: 38–46.
Hansen, J. B. 1970. A revised and extended formula for bearing capacity. Danish Geotech. Inst. Bull., 28: 5–11.
Meyerhof, G. G. 1953. The bearing capacity of foundations under eccentric and inclined loads. Proc. of 3rd ICSMFE, Zürich, 1: 440–445.
Meyerhof, G. G. 1963. Some recent research on the bearing capacity of foundations. Can. Geotech. J., 1(1): 16–26.
Michalowski, R. L. & You, L. 1998. Effective width rule in calculations of bearing capacity of shallow footings. Comput. Geotech., 23(4): 237–253.
Peck, R. B., Hanson, W. E. & Thornburn, T. H. 1953. Foundation Engineering. John Wiley & Sons, Inc., New York.
Prakash, S. & Saran, S. 1971. Bearing capacity of eccentrically loaded footings. J. Soil Mech. and Found. Div., ASCE, 97(1): 95–117.
Purkayastha, R. D. & Char, A. N. R. 1977. Stability analysis for eccentrically loaded footings. J. Geotech. Engrg. Div., ASCE, 103(6): 647–651.
Saran, S., Prakash, S. & Murty, A. V. S. R. 1971. Bearing capacity of footings under inclined loads. Soils Found., 11(1): 47–52.
Saran, S. & Agrawal, R. K. 1991. Bearing capacity of eccentrically obliquely loaded footing. J. Geotech. Eng., ASCE, 117(11): 1669–1690.
Uncertainty
Reliability analysis of slope stability by advanced simulation with spreadsheet S.K. Au, Y. Wang & Z.J. Cao Department of Building and Construction, City University of Hong Kong, Hong Kong, China
ABSTRACT: This paper develops a package of EXCEL worksheets and functions/Add-In to implement an advanced Monte Carlo method called Subset Simulation in the EXCEL spreadsheet and applies it to reliability analysis of slope stability. The deterministic slope stability analysis and the uncertainty modeling and propagation are deliberately decoupled so that they can proceed separately, by personnel with different expertise and in a parallel fashion. An illustrative example demonstrates application of the EXCEL package to a slope with uncertainty in the undrained shear strength and highlights the computational efficiency of Subset Simulation for the slope stability problem.
1 INTRODUCTION
The reluctance of geotechnical practitioners to apply reliability methods to slope stability analysis is attributed, among other factors, to the sophistication of advanced probabilistic assessment/modelling methods, the limited published studies and worked examples illustrating their implementation, and the lack of user-friendly tools. From an implementation point of view, the less information required from the engineer regarding the probabilistic assessment or reliability computational algorithm, the smaller the hurdle the engineer will face in properly using the algorithm, and the more likely it is to be implemented. Therefore, it is desirable to decouple the processes of deterministic slope stability analysis and reliability analysis so that the work of reliability analysis can proceed as an extension of deterministic analysis in a non-intrusive manner. It is also desirable to implement the reliability analysis algorithm in a software platform with which engineers are familiar. From this perspective, the ubiquitous Microsoft EXCEL spreadsheet is of particular interest. Low (2008) showed that geotechnical analysis and the expanding ellipsoidal perspective of the Hasofer-Lind reliability index can be readily implemented in a spreadsheet environment. The Hasofer-Lind reliability index can be obtained using the object-oriented constrained optimization tool in the EXCEL spreadsheet (Low and Tang 1997, 2007). The approach has been applied to obtain the reliability index of the conventional bearing capacity problem (Low 2008), anchored sheet pile design (Low 2005a, b) and slope stability analysis (Low et al. 1998, Low 2003). This paper implements an advanced Monte Carlo method called Subset Simulation (Au and Beck 2001) in the EXCEL spreadsheet and illustrates its application to reliability analysis of slope stability. After this introduction, the Subset Simulation algorithm will be
briefly discussed, followed by the development of the Subset Simulation tool and the slope stability analysis worksheets in EXCEL. Then, an example of slope stability reliability analysis will be presented to illustrate the analysis process using the EXCEL spreadsheets.
2 SUBSET SIMULATION ALGORITHM
Subset Simulation is an adaptive stochastic simulation procedure for efficiently computing small tail probabilities (Au and Beck 2001, 2003). Originally developed for dynamic reliability analysis of building structures, it stems from the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities for some intermediate failure events, thereby converting a rare-event simulation problem into a sequence of more frequent ones. During simulation, conditional samples are generated from specially designed Markov chains so that they gradually populate each intermediate failure region until they reach the final target (rare) failure region. Let Y be a given critical response for which P(Y > y) is of interest, and let 0 < y1 < y2 < · · · < ym = y be an increasing sequence of intermediate threshold values. It should be noted that considering a single critical response leads to little loss of generality, because multiple failure criteria can be incorporated into a single one (Au and Beck 2001). By sequentially conditioning on the events {Y > yi}, the failure probability can be written as
$$P(Y > y) = P(Y > y_1)\prod_{i=2}^{m} P\left(Y > y_i \mid Y > y_{i-1}\right)$$

The basic idea is to estimate P(Y > y1) and {P(Y > yi | Y > yi−1): i = 2,…,m} by generating samples of Θ conditional on {Y(Θ) > yi: i = 1,…,m}.
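For instance, with the parameters used in the illustrative example of Section 4 (p0 = 0.1 per level and m = 3 levels), the factorization reads

$$P(Y > y_3) = P(Y > y_1)\,P(Y > y_2 \mid Y > y_1)\,P(Y > y_3 \mid Y > y_2) = 0.1^3 = 10^{-3}$$

so a probability of 10−3 is estimated from three conditional probabilities of 0.1 each, none of which requires rare-event sampling.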
Figure 1. Schematic diagram of Subset Simulation procedure.
In implementations, y1, …, ym are generated adaptively using information from the simulated samples, so that the sample estimates of P(Y > y1) and {P(Y > yi | Y > yi−1): i = 2,…,m} always correspond to a common specified value of the conditional probability p0. The efficient generation of conditional samples is highly non-trivial but pivotal to the success of Subset Simulation, and it is made possible through the machinery of Markov Chain Monte Carlo (MCMC) simulation (Roberts & Casella 1999). Markov Chain Monte Carlo is a class of powerful algorithms for generating samples according to any given probability distribution. It originates from the Metropolis algorithm developed by Metropolis and co-workers for applications in statistical physics (Metropolis et al. 1953). In MCMC, successive samples are generated from a specially designed Markov chain whose limiting stationary distribution tends to the target PDF as the length of the Markov chain increases. An essential aspect of the implementation of MCMC is the choice of the 'proposal distribution' that governs the generation of the next sample from the current one. The efficiency of Subset Simulation is robust to the choice of the proposal distribution, but tailoring it to a particular class of problems can certainly improve efficiency. For robustness in applications, the standard deviation of the proposal distribution for each random variable is set equal to that of the conditional samples of the current simulation level. The Subset Simulation procedure for adaptively generating samples conditional on {Y(Θ) > yi: i = 1,…,m} corresponding to the specified
target probabilities {P(Y(Θ) > yi) = p0^i, i = 1,…,m} is illustrated schematically in Figure 1. First, N samples {Θ0,k: k = 1,…,N} are simulated by direct Monte Carlo simulation (MCS), i.e., they are i.i.d. samples of the original PDF. The subscript '0' denotes that the samples correspond to 'conditional level 0' (i.e., unconditional). The corresponding values of the driving variable {Y0,k: k = 1,…,N} are then computed. The value of y1 is chosen as the (1 − p0)·N-th value in the ascending list of {Y0,k: k = 1,…,N}, so that the sample estimate for P(F1) = P(Y > y1) is always equal to p0. Due to the choice of y1, there are p0·N samples among {Θ0,k: k = 1,…,N} whose response Y lies in F1 = {Y > y1}. These are samples at 'conditional level 1' and are conditional on F1. Starting from each of these samples, MCMC is used to simulate an additional (1 − p0)·N conditional samples so that there is a total of N conditional samples at conditional level 1. The value of y2 is then chosen as the (1 − p0)·N-th value in the ascending list of {Y1,k: k = 1,…,N}, and it defines F2 = {Y > y2}. Note that the sample estimate for P(F2|F1) = P(Y > y2|Y > y1) is automatically equal to p0. Again, there are p0·N samples lying in F2. They are samples conditional on F2 and provide 'seeds' for applying MCMC to simulate an additional (1 − p0)·N conditional samples so that there is a total of N conditional samples at 'conditional level 2'. This procedure is repeated for higher conditional levels until the samples at 'conditional level (m − 1)' have been generated to yield ym as the (1 − p0)·N-th value in the ascending list of {Ym−1,k: k = 1,…,N}, with ym > y so that there are
enough samples for estimating P(Y > y). Note that the total number of samples is equal to N + (m − 1)·(1 − p0)·N. Approximate formulas have been derived for assessing the statistical error (in terms of the coefficient of variation), which can be estimated using the samples generated in a single run. The Subset Simulation algorithm has been applied to a variety of complex systems in structural (Au and Beck 2003), aerospace (Thunnissen et al. 2007) and fire (Au et al. 2007) engineering. Probabilistic sensitivity and failure analyses have also been carried out using Subset Simulation (Au 2004, Au and Beck 2003).
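The procedure above can be summarized in a short sketch. The following Python code is a minimal illustration of the algorithm, not the paper's VBA Add-In: it works in a standard-Gaussian input space, uses a component-wise Metropolis move in the spirit of Au and Beck (2001) with the proposal standard deviation taken from the current-level samples as described above (here approximated by the seeds), and reports the adaptive thresholds yi with their estimated exceedance probabilities p0^i. The toy response function is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_simulation(response, dim, n=200, p0=0.1, m=3):
    """Schematic Subset Simulation: level 0 is direct Monte Carlo; each
    higher level keeps the p0*N samples with the largest responses as
    seeds and grows Markov chains until N conditional samples exist."""
    n_seed = int(p0 * n)            # seeds kept per level
    n_chain = int(1 / p0)           # states per chain, seed included
    theta = rng.standard_normal((n, dim))
    y = np.array([response(t) for t in theta])
    estimates = []
    for level in range(1, m + 1):
        order = np.argsort(y)
        y_i = y[order[n - n_seed]]  # adaptive (1 - p0)N-th threshold
        estimates.append((y_i, p0 ** level))
        if level == m:
            return estimates
        seeds = theta[order[-n_seed:]], y[order[-n_seed:]]
        sigma = seeds[0].std(axis=0) + 1e-12   # proposal std from samples
        new_t, new_y = [], []
        for t, yy in zip(*seeds):
            t = t.copy()
            for step in range(n_chain):        # chain starts at its seed
                if step > 0:                   # Metropolis move
                    cand = t.copy()
                    for j in range(dim):       # component-wise proposal
                        xi = t[j] + sigma[j] * rng.standard_normal()
                        if np.log(rng.random()) < 0.5 * (t[j]**2 - xi**2):
                            cand[j] = xi
                    y_cand = response(cand)
                    if y_cand >= y_i:          # stay inside F_i, else reject
                        t, yy = cand, y_cand
                new_t.append(t.copy())
                new_y.append(yy)
        theta, y = np.array(new_t), np.array(new_y)

# Toy driving variable: Y = sum of squares of 30 standard Gaussians.
for y_i, p in subset_simulation(lambda t: float(t @ t), dim=30):
    print(f"P(Y > {y_i:6.1f}) is estimated as {p:.0e}")
```

With N = 200, p0 = 0.1 and m = 3, the sketch performs 200 + 180 + 180 = 560 response evaluations, the same budget as the example in Section 4.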
3 SIMULATION TOOLS IN EXCEL SPREADSHEET
A package of EXCEL worksheets and functions/Add-In is developed, with the aid of Visual Basic for Applications (VBA) in EXCEL, to implement the Subset Simulation algorithm in a spreadsheet environment and to apply it to reliability analysis of slope stability. A software architecture is proposed that clearly divides the package into three parts: 1) deterministic analysis of slope stability, 2) modeling of uncertainty in the slope stability problem, and 3) uncertainty propagation by Subset Simulation. It is of particular interest to decouple the development of the deterministic slope stability analysis worksheets from that of the VBA functions/Add-In for uncertainty modeling and uncertainty propagation (Subset Simulation), so that the work of uncertainty modeling and propagation can proceed as an extension of deterministic analysis in a non-intrusive manner. The deterministic analysis of slope stability and the uncertainty modeling and propagation can thus be performed separately by personnel with different expertise and in a parallel fashion. Therefore, minimum information is required from the engineers regarding the reliability computational algorithm.
3.1 Deterministic slope stability analysis worksheets
Deterministic analysis of slope stability is the process of calculating the factor of safety FS for a given 'nominal' set of values of the system parameters θ. The system parameters generally include the geometry and stratigraphy of the slope, soil properties (e.g., soil unit weight, undrained shear strength, friction angle, cohesion, and pore water pressure) and other relevant information. Limit equilibrium analysis procedures (e.g., the Ordinary Method of Slices, Simplified Bishop, Simplified Janbu, Spencer, Morgenstern and Price, and Chen and Morgenstern procedures) are implemented in a series of worksheets and VBA functions for the FS calculation. The deterministic analysis using limit equilibrium procedures is organized into one or a set of worksheets, although for discussion purposes it is referred to as a single worksheet. From an input-output perspective, the deterministic analysis worksheet takes a given θ as input, performs limit
equilibrium calculations and then returns the value of FS as the output. No probability/reliability concept is involved in the deterministic worksheet, so it can be developed by personnel without a reliability background. To allow seamless integration with Subset Simulation, the deterministic analysis worksheet is specially designed to be fully automated and does not involve any human intervention. This is necessary for automated calculation of the FS during Subset Simulation. E.g., if calculating the response required clicking a button, then the button would need to be clicked as many times as the number of samples used in the simulation, which could be of the order of a thousand and is not acceptable in the simulation.
3.2 Modeling of uncertainty in the slope stability problem
Uncertainty in slope stability analysis arises from the system parameters θ, such as soil properties (e.g., soil unit weight, undrained shear strength, friction angle, cohesion, and pore water pressure). These soil properties are therefore treated as random variables in the analysis, although different limit equilibrium analysis procedures may have slightly different sets of random variables. Note that this paper focuses on the uncertainties arising from soil properties and does not account for other uncertainties, such as calculation model uncertainties. The spatial variability of soil properties within a given soil layer is modeled by homogeneous random fields with an exponentially decaying correlation structure. An uncertainty model worksheet is developed for generating a random sample (realization) of the uncertain parameters θ. Starting with the uniform random numbers supported by EXCEL, a transformation is performed to produce random samples of the desired distribution. Available VBA subroutines in EXCEL are used to facilitate the uncertainty modeling. The uncertainty model worksheet is developed in parallel with the development of the deterministic analysis worksheet. From an input-output perspective, the uncertainty model worksheet takes no input but returns a random sample of θ as its output whenever a re-calculation is commanded (e.g., by pressing 'F9' in EXCEL). Similar to the deterministic analysis worksheets, the uncertainty model worksheet is specially designed to be fully automated and does not involve any human intervention. In addition, the uncertainty modeling is implemented in a single worksheet for the convenience of Subset Simulation: the Subset Simulation VBA code instructs EXCEL to re-calculate only the uncertainty model worksheet to generate a sample of θ, avoiding re-calculation of the deterministic analysis worksheets, which is not needed at that stage and is often the most time-consuming.
3.3 Uncertainty propagation by Subset Simulation
After the deterministic analysis and uncertainty model worksheets are developed, they are ‘linked together’
Figure 2. Schematic diagram of link between deterministic and uncertainty modeling worksheet.
Figure 4. Slope stability example.
Figure 3. Subset Simulation Add-In.
through their input/output cells to produce a probabilistic analysis model of the slope stability problem. As illustrated by Figure 2, linking simply involves setting the cell reference for the nominal values of θ in the deterministic analysis worksheet to be the cell reference for the random sample in the uncertainty model worksheet. After this task, the value of θ shown in the deterministic analysis worksheet is equal to that generated in the uncertainty model worksheet, so the FS value calculated in the deterministic analysis worksheet is random. E.g., pressing the 'F9' key in EXCEL generates a random value of the FS. In other words, at this stage one can perform a direct Monte Carlo simulation of the problem by repeatedly pressing the 'F9' key. When the deterministic analysis and uncertainty model worksheets are completed, one is ready to make use of the Subset Simulation algorithm for uncertainty propagation, which can provide better resolution at the distribution tail (i.e., low failure probability levels). A VBA code for Subset Simulation is developed that functions as an Add-In in EXCEL and can be called by selecting from the main menu 'Tools' followed by 'SubSim'. A user form appears upon invoking the function, as shown in Figure 3. The user should input the cell references of the uncertain parameters θ and the
system response Y = 1/FS in the uncertainty modeling worksheet. Other inputs include the following algorithm-specific parameters: p0 (conditional probability from one level to the next), N (number of samples per level), and m (number of levels to perform). As a basic output, the program produces the complementary CDF of the driving variable versus the threshold level, i.e., a plot of the estimate of P(Y > y) versus y. In general, the CDF, the histogram or their conditional counterparts can be produced.
4 ILLUSTRATIVE EXAMPLE
As an illustration, the worksheets and VBA functions/Add-In are applied to assess the reliability of the slope shown in Figure 4. The factor of safety FS is defined as the critical (minimum) ratio of the resisting moment to the overturning moment, and it is calculated using the Swedish Circle method in conjunction with the general procedure of slices (Duncan and Wright 2005). The slip surface is assumed to be a circular arc centered at coordinate (x, y) and with radius r. The overturning and resisting moments are summed about the center of the circle to calculate the factor of safety, as shown in Figure 4. For the moment calculations, the soil mass above the slip surface is subdivided into 24 vertical slices, each of which has a weight Wi, circular slip segment length li, undrained shear strength Sui along the slip segment, and an angle αi between the base of
the slice and the horizontal. The factor of safety is then given by
$$FS = \min_{(x,\,y),\,r}\ \frac{\sum_{i=1}^{24} S_{ui}\, l_i}{\sum_{i=1}^{24} W_i \sin\alpha_i}$$

where the minimum is taken over all possible choices of slip circles, i.e., all possible choices of (x, y) and r (the common radius r cancels from the resisting and overturning moments). A VBA code has been written to calculate the ratio of the resisting to the overturning moment for different values of (x, y) and r and then pick the minimum value as the factor of safety. As a reference, the nominal value of FS, corresponding to the case where all Su values equal their nominal value of 20 kPa, is equal to 1.2. The undrained shear strength of the soil is treated as a random variable, and the undrained shear strengths at locations at the same depth are assumed to be fully correlated. The spatial variability with depth is modeled by a homogeneous Lognormal random field with an exponentially decaying correlation structure. The correlation structure is described through the logarithm of the undrained shear strength. That is, let Su(z) be the value of the undrained shear strength at depth z. Then the correlation between log Su(zi) and log Su(zj) is given by Rij = exp(−2|zi − zj|/λ), where λ is the effective correlation length. The values of Su at different depths are simulated through transformation of i.i.d. uniform random variables; the generation of the latter is provided by the built-in function 'Rand()' in EXCEL. Specifically, let S = [Su(z1), Su(z2), …, Su(zn)]T be a vector of Su values at depths z1, …, zn. Then
$$\mathbf{S} = \exp\!\left(u\,\mathbf{1} + s\,\mathbf{L}\,\mathbf{Z}\right)$$

where u and s are the Lognormal parameters, equal to the mean and standard deviation of log Su(z); 1 is a column vector of length n with all entries equal to 1; Z is an n-dimensional standard Gaussian vector; and L is a lower triangular matrix obtained from Cholesky factorization of the correlation matrix R = [Rij] such that LLT = R. Note that each component of Z can be generated by applying the inverse of the standard Gaussian cumulative distribution function ('NORMINV' in EXCEL) to a uniform random variable. A combination of u = 2.976 and s = 0.198 is adopted in this example so that the mean and spatial variability are approximately equal to 20 kPa and 4 kPa (i.e., a 20% coefficient of variation), respectively. The effective correlation length λ is assumed to be 2 m. The soil between the upper ground surface and 15 m below is divided into 30 equal layers. In the context of Subset Simulation, the set of uncertain parameters is Θ = Z, which contains the 30 i.i.d. Gaussian random variables that characterize the spatially varying random field for Su. By default, Subset Simulation drives the samples towards the upper tail of the distribution of the response Y. As the lower tail of the distribution of the factor of safety (i.e., the unsafe zone) is of interest, the response Y is defined as the reciprocal of FS, i.e., Y = 1/FS = the ratio of the overturning to the resisting moment.
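The two computational ingredients of the example, sampling the correlated Su profile and evaluating Y = 1/FS for a trial circle, can be sketched compactly. The code below is an illustration rather than the paper's VBA implementation: the depths are assumed to be layer midpoints, and the slice data for the trial circle (depths, arc lengths, weights, base angles) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(2009)

def sample_su_profile(n_layers=30, depth=15.0, lam=2.0, u=2.976, s=0.198):
    """Sample the Su profile of Section 4: log Su Gaussian with mean u and
    std s, correlation R_ij = exp(-2|z_i - z_j|/lam); S = exp(u 1 + s L Z)
    with L from the Cholesky factorization L L^T = R."""
    z = (np.arange(n_layers) + 0.5) * depth / n_layers  # midpoints (assumed)
    r = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / lam)
    l_mat = np.linalg.cholesky(r)
    return z, np.exp(u + s * (l_mat @ rng.standard_normal(n_layers)))

def response_y(su_of_z, slices):
    """Y = 1/FS for one trial slip circle by the Swedish Circle method:
    FS = sum(Su_i l_i) / sum(W_i sin(alpha_i)); the radius cancels."""
    resisting = sum(su_of_z(sl["z"]) * sl["l"] for sl in slices)
    overturning = sum(sl["W"] * np.sin(sl["alpha"]) for sl in slices)
    return overturning / resisting

z, su = sample_su_profile()
su_of_z = lambda d: su[np.clip(np.searchsorted(z, d), 0, len(z) - 1)]

# Hypothetical slice data for a single trial circle (24 slices in the
# paper; 3 shown purely for illustration).
slices = [dict(z=2.0, l=1.2, W=150.0, alpha=np.radians(35)),
          dict(z=4.5, l=1.0, W=220.0, alpha=np.radians(10)),
          dict(z=3.0, l=1.3, W=160.0, alpha=np.radians(-15))]
print(f"Y = 1/FS = {response_y(su_of_z, slices):.2f} for this sample and circle")
```

In the actual analysis, Y would be the maximum of this ratio over all candidate circles (equivalently, FS is the minimum), evaluated once per Subset Simulation sample.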
Figure 5. Simulation results.
Subset Simulation is performed in EXCEL with the following parameters: p0 = 10%, N = 200 samples per level, and m = 3 simulation levels. This means that the results for P(Y > y) will be produced from a probability level of 1 down to 0.001. Figure 5 shows a typical estimate of the complementary CDF for Y = 1/FS, i.e., P(Y > y) versus y, estimated by Subset Simulation with a total of 200 + 180 + 180 = 560 samples (i.e., calculations of FS). For comparison, the estimate by direct Monte Carlo with the same number of samples is also plotted. It is seen that the CDF curve by direct Monte Carlo is not accurate at low probability levels, while the Subset Simulation estimate provides consistent results even in the low-probability regime. The observed computational efficiency of Subset Simulation for the slope stability problem is typical of structural reliability problems (Au et al. 2007). As mentioned before, the nominal factor of safety (i.e., ignoring soil uncertainty) is equal to 1.2. For this configuration, in the presence of the soil uncertainty described, the failure probability P(FS < 1) = P(Y > 1) is about 8%. This corresponds to an effective reliability index of β = Φ−1(0.92) = 1.41. Note that this value depends on the level of soil uncertainty assumed. E.g., had a larger spatial variability been assumed, the failure probability would be larger and the corresponding effective reliability index smaller. Nevertheless, the nominal factor of safety remains the same, as it does not address uncertainty. The actual value of spatial variability used should, of course, be consistent with the site characteristics.
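The sample budget quoted here is consistent with the count given in Section 2, since with N = 200, p0 = 0.1 and m = 3:

$$N + (m-1)(1-p_0)N = 200 + 2\times(1-0.1)\times 200 = 200 + 180 + 180 = 560$$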
5 CONCLUDING REMARKS
This paper developed a package of EXCEL worksheets and VBA functions/Add-In to implement an advanced Monte Carlo method called Subset Simulation in the EXCEL spreadsheet and illustrated its application to reliability analysis of slope stability. A software architecture was proposed that clearly divides the package into three parts: 1) deterministic analysis
of slope stability, 2) modeling of uncertainty in the slope stability problem, and 3) uncertainty propagation by Subset Simulation. The development of the deterministic slope stability analysis worksheets and of the VBA functions/Add-In for uncertainty modeling and uncertainty propagation (Subset Simulation) is deliberately decoupled, so that the work of uncertainty modeling and propagation can proceed as an extension of deterministic analysis in a non-intrusive manner. The deterministic analysis of slope stability and the uncertainty modeling and propagation can be performed separately by personnel with different expertise and in a parallel fashion. Therefore, minimum information is required from the engineers regarding the reliability computational algorithm. The illustrative example demonstrated the application of Subset Simulation to a slope stability reliability analysis that involves a large number of random variables characterizing a spatially varying random field, and highlighted the computational efficiency of Subset Simulation for the slope stability problem.

ACKNOWLEDGEMENTS

The work described in this paper was supported by General Research Fund [Project No. 9041327 (CityU 110108)] and Competitive Earmarked Research Grant [Project No. 9041260 (CityU 121307)] from the Research Grants Council of the Hong Kong Special Administrative Region, China. The financial supports are gratefully acknowledged.

REFERENCES

Au, S. K. 2004. Probabilistic failure analysis by importance sampling Markov chain simulation. Journal of Engineering Mechanics, 130(3): 303–311.
Au, S. K., and Beck, J. L. 2001. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics, 16(4): 263–277.
Au, S. K., and Beck, J. L. 2003. Subset simulation and its applications to seismic risk based on dynamic analysis. Journal of Engineering Mechanics, 129(8): 1–17.
Au, S. K., Ching, J. and Beck, J. L. 2007. Application of Subset Simulation methods to reliability benchmark problems. Structural Safety, 29(3): 183–193.
Au, S. K., Wang, Z. W. and Lo, S. M. 2007. Compartment fire risk analysis by advanced Monte Carlo method. Engineering Structures, 29(9): 2381–2390.
Duncan, J. M. and Wright, S. G. 2005. Soil Strength and Slope Stability. John Wiley & Sons, Inc., New Jersey.
Low, B. K. 2003. Practical probabilistic slope stability analysis. Proceedings of Soil and Rock America, MIT, Cambridge, MA, June 2003, Verlag Glückauf GmbH, Essen, Germany, Vol. 2, 2777–2784.
Low, B. K. 2005a. Reliability-based design applied to retaining walls. Geotechnique, 55(1): 63–75.
Low, B. K. 2005b. Probabilistic design of anchored sheet pile wall. Proceedings of 16th International Conference on Soil Mechanics and Geotechnical Engineering, 12–16 September 2005, Osaka, Japan, Millpress, 2825–2828.
Low, B. K. 2008. Practical reliability approach using spreadsheet. Reliability-based Design in Geotechnical Engineering, edited by Phoon, K. K., Taylor & Francis, London and New York.
Low, B. K., Gilbert, R. B., and Wright, S. G. 1998. Slope reliability analysis using generalized method of slices. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 124(4): 350–362.
Low, B. K. and Tang, W. H. 1997. Efficient reliability evaluation using spreadsheet. Journal of Engineering Mechanics, 123(7): 749–752.
Low, B. K. and Tang, W. H. 2007. Efficient spreadsheet algorithm for first-order reliability method. Journal of Engineering Mechanics, 133(12): 1378–1387.
Metropolis, N., Rosenbluth, A., Rosenbluth, M., and Teller, A. 1953. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6): 1087–1092.
Roberts, C. and Casella, G. 1999. Monte Carlo Statistical Methods. Springer.
Thunnissen, D. P., Au, S. K., and Tsuyuki, G. T. 2007. Uncertainty quantification in estimating critical spacecraft component temperatures. AIAA Journal of Thermal Physics and Heat Transfer, 21(2): 422–430.
Optimal moving window width in conjunction with intraclass correlation coefficient for identification of soil layer boundaries J.K. Lim & S.F. Ng Universiti Teknologi MARA Pulau Pinang, Pulau Pinang, Malaysia
M.R. Selamat & E.K.H. Goh Universiti Sains Malaysia, Pulau Pinang, Malaysia
ABSTRACT: Identifying layer boundaries and demarcating the soil profile into homogeneous layers are often much more complicated than one expects when dealing with a highly variable complex natural material. The quantitative approaches reported in the geotechnical literature are limited and varied, and are mostly restricted to a case- or project-specific basis. In this study, the performance of the intraclass correlation coefficient (RI) in conjunction with various suggested window widths is investigated using three fairly different CPT soundings obtained from the database of the National Geotechnical Experimentation Sites. RI appears to be a powerful, robust and persistent tool, and the corresponding optimal window width was shown to be a function of the average distance between boundaries, which can be determined from autocorrelation analysis. The empirical criterion of 0.7 was found useful in guiding the researcher to decide whether a peak is significant enough to be considered a valid boundary.
1 INTRODUCTION
1.1 Background of research
A major source of uncertainty in geotechnical engineering is the inherent spatial variability of soil properties. The importance of recognizing uncertainties and taking them into account in geotechnical design has been propagated by numerous leaders since the 1960s (Casagrande 1965; Peck 1969; Wu 1974; Leonards 1982; Tang 1984; Morgenstern 1995; Whitman 2000; Christian 2004). Probability theory and statistics provide a formal, scientific and quantitative basis for assessing risk and uncertainties, and their use has flourished in geotechnical engineering research in recent years. In line with this development, characterization of soil properties has advanced to functions of the deterministic mean and its stochastic characters, comprising the coefficient of variation and the scale of fluctuation, in modeling the inherent soil variability as a random field (Vanmarcke 1977; DeGroot and Baecher 1993; Jaksa et al. 1997; Phoon and Kulhawy 1999; Fenton 1999). Compliance with the stationarity or statistical homogeneity criterion is imperative in any soil data analysis. A random function used to model the variability of soil is considered stationary, or weakly stationary, if (Brockwell and Davis 2002): (1) the mean of the function is constant with distance, i.e., there is no trend in the data, (2) the variance of the function is constant with distance, i.e., homoscedastic, (3) there are no seasonal variations, (4) there are no apparent changes in behavior, and (5) there are no outlying observations.
In other words, the statistics of a stationary series are essentially functions of the separation distance between observations rather than of their absolute locations. In geotechnical characterization, the first step usually involves demarcating the soil profile into layers or sections that are homogeneous, so that the results of subsequent analyses are not biased. A homogeneous layer comprises uniform soil material that has undergone a similar geologic history and possesses certain distinctive behaviors. The identification of boundaries, and thus the demarcation process, is often much more complicated than one expects when dealing with this highly variable, complex natural material. The variability exists not only from site to site and stratum to stratum, but even within apparently homogeneous deposits at a single site (Baecher and Christian 2003).

1.2 Problem statement
The conventional method, which is based on visual observation, is of limited accuracy and introduces substantial subjectivity into the identification of the actual soil boundaries, so it would be rather useful to supplement the existing procedures with a quantitative, systematic approach. Existing statistical tools are not widely explored, well calibrated or properly defined, and thus generally give unsatisfactory outcomes. This paper intends to resolve the above problems for better characterization of soil properties. Reported useful statistical tools are compared in terms of their effectiveness, and the existing procedures are revamped for further improvement.
The Cone Penetration Test (CPT) is widely used in soil characterization in view of its ability to provide an almost continuous profile, its wide range of correlations and its high repeatability (Robertson 1986; NCHRP 2007). In this study, CPT soundings, in particular the cone tip resistance qc, were used for detailed illustration. The data were selected on the basis of their spacing, their extensiveness and their differences from one another, so as to yield a thorough examination of the performance of statistical tools in soil boundary demarcation.

2 STATISTICAL APPROACHES

2.1 General

Classical and advanced statistical approaches for testing the similarity or dissimilarity of univariate or even multivariate records are believed to be substantial in number. Some of the established analytical tools have the potential to be applied in the field of geotechnical engineering, with modifications to suit the nature of geotechnical parameters. Nevertheless, such cross-disciplinary collaboration is surprisingly limited. Many geotechnical engineers are unfamiliar with the underlying concepts of statistics and probability and remain skeptical and reluctant even to make an attempt. In this paper, two statistical methods that are relatively common and simple for identifying soil layer boundaries are presented.

2.2 Intraclass correlation coefficient

The Intraclass Correlation Coefficient (RI) was reported by Campanella and Wickremesinghe (1991) to be a useful statistical method for detecting soil layer boundaries from CPT soundings. For the identification of layer boundaries, a moving window of width Wd is first specified and the window is divided into two segments. The RI profile is then generated by moving the two contiguous segments over a measurement profile, the computed index being plotted at the midpoint of the window. RI always lies between zero and unity, and a relatively high value of RI is likely to indicate the presence of a layer boundary. RI, together with its pooled combined variance (sw²) and the between-class variance (sb²), is defined as:
where n1 and n2 are the sample sizes of the two equal segments above and below the mid-line of the window, s1² and s2² are the sample variances of the two segments, and x̄ and s² are the sample mean and variance within the designated window. The equation can also be written as follows (Zhang and Tumay 1996) for two segments with an equal sample size of m and sample means of x̄1 and x̄2, respectively.
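As a computational illustration, the moving-window evaluation of RI can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, since the display equations above could not be reproduced here: the pooled variance sw² is taken as the average of the two equal-segment variances, and RI is taken as the between-class share (s² − sw²)/s² of the whole-window variance s². The function name and arguments are illustrative only.

import numpy as np

def ri_profile(q, depth, window_width):
    # RI profile for a sounding q sampled at a uniform depth interval.
    # Assumed definitions (see lead-in): sw2 = pooled variance of the two
    # equal half-window segments; RI = (s2 - sw2) / s2, which lies in [0, 1].
    dz = depth[1] - depth[0]                            # uniform spacing assumed
    m = max(2, int(round(window_width / (2.0 * dz))))   # points per segment
    ri = np.full(len(q), np.nan)
    for i in range(m, len(q) - m):
        upper = q[i - m:i]                              # segment above mid-line
        lower = q[i:i + m]                              # segment below mid-line
        s2 = q[i - m:i + m].var(ddof=1)                 # whole-window variance
        sw2 = 0.5 * (upper.var(ddof=1) + lower.var(ddof=1))  # pooled, n1 = n2 = m
        if s2 > 0.0:
            ri[i] = max(0.0, (s2 - sw2) / s2)           # high RI suggests a boundary
    return ri

Peaks of the resulting profile can then be screened against the empirical criterion of 0.7 discussed below, e.g. depth[ri_profile(qc, depth, 1.9) > 0.7].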
Judging by visual observation whether an index value is high enough, in a relative sense, to indicate a boundary is fairly subjective and leads to inconsistency. Zhang and Tumay (1996) suggested that a peak value of RI equal to or larger than 0.7 can be empirically taken to mark a boundary line. Hegazy et al. (1996) proposed the critical value (mean + 1.65 standard deviations), corresponding to a 5% level of significance. However, Phoon et al. (2003) commented that the above critical values do not depend on the underlying correlation structure of the profile.

2.3 Window width

As all these statistical methods incorporate the concept of a moving window, the width of the sampling window becomes an important parameter that can have a substantial influence on the result of the analysis. Generally, too narrow a window results in the undesirable effect of a high noise level, with too many peaks appearing. On the other hand, too wide a window over-smooths the statistics until possible boundaries are missed because of the excessive perturbation region. Webster (1973) proposed a method to locate boundaries on transects automatically and found that the suitable width for the calculation window is approximately two thirds of the expected distance between boundaries, provided the spacing between boundaries does not differ widely. The expected distance, or average spacing, between boundaries can be determined from an autocorrelation analysis. The technique is said to be reasonably sensitive but little affected by the window width. Campanella and Wickremesinghe (1991) elaborated in detail the statistical methods for determining the window width and recommended adopting a window width that errs on the narrow side rather than the wide side, to avoid missing possible layer boundaries. Two case studies, namely the McDonald Farm site and the Haney site, were illustrated, and the window widths selected were 1.5 m and 2.0 m, respectively. At the other extreme, window widths of less than 1.0 m should not be selected, owing to the normal-distribution restriction on the samples (Wickremesinghe 1989). Zhang and Tumay (1996), based on the finding of previous research that standard 10 cm2 electric cones may require a minimum stiff layer thickness of 36 cm to 72 cm to ensure full tip resistance, concluded that the window width could conservatively be taken as 150 cm, or 75 cm for each half of the window. Nevertheless, they reported that primary
layering usually does not provide satisfactory results when the soil layers are uneven. A big difference in layer thicknesses results in too many detected layers in the thick zones and too few in the thin ones, biasing the judgment. Cafaro and Cherubini (2002) used the same procedure as proposed by Webster (1973) in analyzing a stiff overconsolidated clay at a test site in Taranto, Southern Italy, and obtained a fairly wide window width of 6.8 m for the qc, fs and Ic profiles. When the width was reduced to 4.8 m, the variation in the RI profile was found to be negligible in both the position (depth) and the value of the peaks. The geostatistical boundary was found not to always correspond well to the geolithological boundary, and it was therefore suggested that a possible offset be accounted for. Kulatilake and Um (2003) introduced a new procedure to detect statistically homogeneous layers in a soil profile. In examining the cone tip resistance data for the clay site at Texas A&M University, a window width of 0.4 m, containing 10 data points in each section, was used. Because of the short sections adopted, four possible combinations for the mean soil property (either constant or with a linear trend) were considered. The distance between the lower and upper sections was calculated and subsequently generated along the depth for evaluation of the statistical homogeneity at different levels. Phoon et al. (2003) adopted the lower limit of the permissible window width, that is 1.0 m, in generating the RI and Bstat profiles. Both profiles managed to capture the primary layer boundaries consistently with visual inspection, the Bstat peaks being much more prominent. Considerable noise was observed, and three false boundaries (with no obvious soil boundaries visible in the qt record) were identified in the RI profile when compared against the critical value of 0.7.
2.4 Autocorrelation analysis

Webster's intuition that the suitable window width should be equal to, or somewhat less than, the average distance between boundaries led to the exploitation of autocorrelation analysis (Webster 1973). The results of his study showed that the optimal width is around two thirds of the expected distance, although larger widths, up to the full expected distance, may still be useful for areas with marked changes (the main peaks appear at the same positions but with different relative heights). The autocorrelation coefficient at lag k is expressed as

rk = [ Σ(t=1 to n−k) ut · ut+k ] / [ Σ(t=1 to n) ut² ]

where n is the number of sampling points in the series, k is the lag and ut is the deviation from the series mean at the t-th (or, for ut+k, the (t + k)-th) point. In the correlogram, i.e. the plot of the autocorrelation coefficient rk against the lag k, the autocorrelation coefficient decreases more or less steadily with increasing lag distance from around 1 to some minimum value near zero, and fluctuates thereafter. The lag distance over which this decay occurs can be taken as the average distance between boundaries, which can be used as guidance in the selection of a suitable window width.
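A minimal sketch of this correlogram-based width selection follows (Python; the stopping tolerance r_min and the function name are assumptions of this illustration, not part of Webster's procedure).

import numpy as np

def webster_window_width(q, dz, r_min=0.1):
    # Sample autocorrelation r_k of the deviations u from the series mean;
    # the lag at which r_k decays to near zero is read as the average
    # distance between boundaries, and two thirds of it is returned.
    u = np.asarray(q, dtype=float) - np.mean(q)
    denom = np.sum(u * u)
    n = len(u)
    for k in range(1, n // 2):
        r_k = np.sum(u[:n - k] * u[k:]) / denom
        if r_k <= r_min:                  # decay to (near) zero reached
            return (2.0 / 3.0) * k * dz   # two thirds of the boundary spacing
    return (2.0 / 3.0) * (n // 2) * dz    # fall-back if no clear decay occurs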
3 NUMERICAL EXPERIMENTS

3.1 Selected case studies

For the present study, the established database of the National Geotechnical Experimentation Sites (NGES) (http://www.unh.edu/nges/), funded by the Federal Highway Administration (FHWA) and the National Science Foundation (NSF) of the United States, was explored. The performance and usefulness of several approaches (in terms of window width and statistical tool) that have been used by other geotechnical researchers in identifying layer boundaries were thoroughly examined. Typical CPT profiles from three sites representing different parts of North America were selected. These sites have been classified as Level I and II sites, i.e. those that most closely fit the combined criteria for research areas of significant national importance. The CPT soundings are closely spaced, extensive and fairly different from one another, which makes them well suited to this examination. The selected sites are 1) the Treasure Island Naval Station in the San Francisco Bay area (CATIFS), 2) the University of Massachusetts, Amherst campus (MAUMASSA), and 3) the Northwestern University Lake Fill site in Evanston (ILNWULAK).

3.2 Examination of existing approaches

The intraclass correlation coefficient (RI), reported to be useful in the geotechnical literature, is examined here. Since the method is used in conjunction with the moving-window averaging concept, the sensitivity to the window width in generating profiles that best discriminate the 'true' boundaries is of great concern. Several criteria for determining a suitable window width, deduced from previous researchers' work, are incorporated in this study, as follows: i) two thirds of the average distance between boundaries, determined using autocorrelation analysis (Webster 1973; Campanella and Wickremesinghe 1991); ii) the conservative assumption for full tip resistance of 1.5 m (Zhang and Tumay 1996); and iii) the minimum width of 1.0 m due to the normal-distribution restriction on the samples (Wickremesinghe 1989; Phoon et al. 2003). In reality, a perfect result is almost impossible, as actual soil data can be truly erratic. However, the analytical approach (the combination of a statistical
tool with an optimal window width) can be deemed satisfactory from at least two practical aspects: the approach should avoid missing possible prominent layer boundaries and, at the same time, be able to capture as many major boundaries as possible at one time. It is evident from past literature that the boundaries indicated by these statistical tools are often slightly offset, probably because of the variation in the upper and lower segments as the sampling window moves over the soil profile. Therefore, note that the tools can serve as a useful indicator, but the final adjustment and decision have to be made with regard to the original profile, the geological background and, not least, engineering judgment.

Figure 1. Identification of soil layer boundaries using RI of varying Wd for CATIFS site: Wd1 = 1.0 m, Wd2 = 1.5 m and Wd3 = 1.9 m.

3.3 Results of analysis
The results of the analysis for the CATIFS, MAUMASSA and ILNWULAK sites are presented in Fig. 1, Fig. 2 and Fig. 3, respectively. For each set of results, the three different window widths delineated above (i, ii and iii) have been incorporated with RI and are presented side by side for comparison.
Figure 2. Identification of soil layer boundaries using RI of varying Wd for MAUMASSA site: Wd1 = 1.0 m, Wd2 = 1.5 m and Wd3 = 2.4 m.
Case 1: CATIFS Site
From the cone tip resistance profile in Fig. 1 (CATIFS site), the heterogeneity of the soil from 7.0 m to 9.0 m, as compared with the rest of the profile, is readily observed through visual examination. The RI profile manages to capture these peaks at both the 7.0 m and 9.0 m locations, and another one at approximately 2.1 m. The above three main peaks were found to persist for all the tested widths of 1.0 m, 1.5 m and 1.9 m (Wd1 to Wd3), with quite a number of noise peaks exceeding the empirical value of 0.7 for the window widths of 1.0 m and 1.5 m (Wd1 and Wd2). Thus, it can be inferred from the results that the optimal window width should be around 1.9 m in this case. A limitation of the approach, namely the loss of information at both ends of the profile, is noted. The apparent boundary of the cone tip resistance at 1.26 m, for instance, is essentially outside the coverage of the generated output, as the computed index is plotted against the midpoint of the moving window. In addition, the identification of a potential boundary around the depth of 2.1 m using the RI profile indicates that the tool is able
to detect a considerably sharp change along the profile, suggesting that the profile be divided into two quasi-linear portions.

Case 2: MAUMASSA Site
The MAUMASSA site (Fig. 2) is the second case study; here the cone tip resistance profile exhibits an apparently heteroscedastic character, with a gradual change of gradient around the potential boundary. As shown in Fig. 2, the RI profiles generated for the window widths of 1.0 m and 1.5 m (Wd1 and Wd2) are essentially noise, implying that these windows are too narrow. When the window width is increased to 2.4 m (Wd3), which is approximately two thirds of the average distance between boundaries, the 'true' main peaks appear, one at an approximate depth of 4.4 m and another at 3.0 m. The generated profile tends to be less reliable at both ends, as shown by two erroneous peaks, and should not be considered there. In this case, 2.4 m can be considered the suitable width when the intraclass correlation coefficient is used.

Case 3: ILNWULAK Site
The results of the third case study, the ILNWULAK site, are presented in Fig. 3. The cone tip resistance profile exhibits higher resistance values at both ends, i.e. at depths above 1.0 m and below 7.0 m, and an interbedded heterogeneous layer at approximately 3.3 m to 4.3 m. The RI profiles for all the tested widths of 1.0 m, 1.5 m and 1.3 m (Wd1 to Wd3), as presented in Fig. 3, show very good agreement, the four expected boundaries all being detected. The main peaks persist as the window width changes, and more noise can be noticed at the smaller widths, particularly the 1.0 m window (Wd1). A similar inference can reasonably be drawn: the suitable width for this demarcation exercise is approximately 1.3 m, the value obtained from the autocorrelation analysis.

Figure 3. Identification of soil layer boundaries using RI of varying Wd for ILNWULAK site: Wd1 = 1.0 m, Wd2 = 1.5 m and Wd3 = 1.3 m.

3.4 Discussion
In general, RI appears to be a powerful tool, as it can capture most of the prominent major boundaries at one time fairly accurately. Besides, it is reasonably robust, persistently detecting the main peaks at the same positions even with window widths fairly different from the optimal configuration. Webster's (1973) suggestion to determine the suitable window width as a function of the average distance between boundaries using autocorrelation analysis was validated. The difference in the profiles generated with smaller window widths is that many undesired peaks, or noise, may appear, whereas larger widths tend to hide the necessary boundaries. The empirical criterion of 0.7 (Zhang and Tumay, 1996) for guiding the worker in deciding whether a peak is significant enough to be considered a valid boundary is very useful. The criterion was found to perform well in most circumstances, as illustrated through the various distinctive case studies in this paper. Observing the results of the analysis, the authors presume that these statistical tools are likely to perform well in identifying boundaries where each demarcated layer exhibits a linear trend. Nonetheless, this presumed limitation does not prevent the worker from combining two or more layers in subsequent analyses, as long as they possess very similar variation characteristics, i.e. scale of variance and autocovariance distance (or scale of fluctuation). Modeling that reasonably simplifies the soil profile into fewer layers within the same geological formation while retaining most of the important information is always of great interest from a pragmatic standpoint. Note that statistical tools appreciate neither explicit soil type classification nor engineering behavior; the final evaluation and decision still rest on sound engineering judgment. Note also that an actual soil profile can be extremely erratic and complex, so adopting any statistical method without incorporating engineering judgment may give unsatisfactory results. One of the difficulties mentioned by Webster (1973) and Zhang and Tumay (1996)
is handling profiles whose layer thicknesses differ widely. To reduce the complication, the worker must first be clear about what to expect and must plan properly before starting the analysis. For instance, if the rough average thickness between layers appears visually to be about 3.0 m, then the optimal window width will probably be around that value or slightly smaller; any attempt with a width far smaller or far larger than that value will be fruitless. Often, a single demarcation pass may not be adequate; at the other extreme, excessive subdivision and modeling of no practical value should be avoided. In the case where a homogeneous layer is clearly evident, for instance the clay layer from approximately 12.0 m depth to the end of exploration at about 30.0 m at the CATIFS site, that section of the profile should not be mixed with the relatively thinner layers at shallower depth in the analysis (note that the clay layer from 12.0 m to 30.0 m was excluded in the first case study here). Otherwise, any erroneous peaks appearing within that section should be discarded after incorporating visual observation and engineering judgment. For verification, the demarcated sections should be examined using stationarity tests, e.g. Kendall's τ test, the run test, the sign test, etc., which are not covered in this paper. Every method has its own limitations, and the underlying concepts must be well understood in order to exploit it fully and appropriately. Note that statistical tools are meant to assist, not to confuse.

4 CONCLUSIONS
Generally, statistical tools can be utilized to identify layer boundaries satisfactorily. RI appears to be a powerful one, as it can capture most of the prominent major boundaries at one time fairly accurately. It is also robust and persistent, detecting the main peaks at the same positions even with fairly different window widths. The results of the analyses carried out show that the lower-limit width of 1.0 m tends to create plenty of unnecessary noise, which may complicate the interpretation. The full-tip-resistance width of 1.5 m appears to be too restrictive and may apply only to certain specific soil profile horizons. The conclusions obtained in previous research using 1.0 m and 1.5 m window widths are likely to be coincidental. The average distance between boundaries obtained from autocorrelation analysis seems to be the most relevant, flexible and useful guide. The empirical criterion of 0.7 consistently performed well in guiding the researcher in deciding whether a peak is significant enough to be considered a valid boundary.

REFERENCES

Baecher, G.B. & Christian, J.T. 2003. Reliability and statistics in geotechnical engineering. England: John Wiley & Sons.
Brockwell, P.J. & Davis, R.A. 2002. Introduction to time series and forecasting. 2nd ed. New York: Springer-Verlag.
Cafaro, F. & Cherubini, C. 2002. Large sample spacing in evaluation of vertical strength variability of clayey soil. Journal of Geotechnical and Geoenvironmental Engineering 128(7): 558–568.
Casagrande, A. 1965. Role of the 'calculated risk' in earthwork and foundation engineering. Journal of the Soil Mechanics and Foundations Division 91(SM4): 1–40.
Christian, J.T. 2004. Geotechnical engineering reliability: How well do we know what we are doing? Journal of Geotechnical and Geoenvironmental Engineering 130(10): 985–1003.
DeGroot, D.J. & Baecher, G.B. 1993. Estimating autocovariance of in-situ soil properties. Journal of Geotechnical Engineering 119(1): 147–166.
Hegazy, Y.A., Mayne, P.W. & Rouhani, S. 1996. Geostatistical assessment of spatial variability in piezocone tests. Uncertainty in the geologic environment: from theory to practice (GSP 58): 254–268. New York: ASCE.
Jaksa, M.B., Brooker, P.I. & Kaggwa, W.S. 1997. Inaccuracies associated with estimating random measurement errors. Journal of Geotechnical and Geoenvironmental Engineering 123(5): 393–401.
Kulatilake, P.H.S.W. & Um, J. 2003. Spatial variation of cone tip resistance for the clay site at Texas A&M University. In Probabilistic Site Characterization at the National Geotechnical Experimentation Sites. New York: ASCE.
Leonards, G.A. 1982. Investigation of failures. Journal of the Geotechnical Engineering Division 108(GT2): 185–246.
Morgenstern, N.R. 1995. Managing risk in geotechnical engineering. The 3rd Casagrande Lecture. Proceedings of the 10th Pan-American Conference on Soil Mechanics and Foundation Engineering, Vol. 4: 102–126. Guadalajara.
NCHRP Synthesis 368. 2007. Cone Penetration Testing: A Synthesis of Highway Practice. National Cooperative Highway Research Program. Washington, D.C.: Transportation Research Board of the National Academies.
Peck, R.B. 1969. Advantages and limitations of the observational method in applied soil mechanics. Geotechnique 19(2): 171–187.
Phoon, K.K. & Kulhawy, F.H. 1999. Characterization of geotechnical variability. Canadian Geotechnical Journal 36: 612–624.
Phoon, K.K., Quek, S.T. & An, P. 2003. Identification of statistically homogeneous soil layers using modified Bartlett statistics. Journal of Geotechnical and Geoenvironmental Engineering 129(7): 649–659.
Robertson, P.K. 1986. In-situ testing and its application to foundation engineering. Canadian Geotechnical Journal 23(3): 573–587.
Vanmarcke, E.H. 1977. Probabilistic modeling of soil profiles. Journal of the Geotechnical Engineering Division 103(GT11): 1227–1246.
Webster, R. 1973. Automatic soil boundary location from transect data. Mathematical Geology 5(1): 27–37.
Wickremesinghe, D. & Campanella, R.G. 1991. Statistical methods for soil layer boundary location using the cone penetration test. Proc. ICASP6: 636–643. Mexico City.
Wickremesinghe, D.S. 1989. Statistical characterization of soil profile using in-situ tests. PhD thesis, University of British Columbia, Vancouver, Canada.
Wu, T.H. 1974. Uncertainty, safety and decision in soil engineering. Journal of the Geotechnical Engineering Division 100(GT3): 329–348.
Zhang, Z.J. & Tumay, M.T. 1996. The reliability of soil classification derived from cone penetration test. Uncertainty in the geologic environment: from theory to practice (GSP 58): 383–408. New York: ASCE.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Soil variability calculated from CPT data T. Oka & H. Tanaka Graduate School of Engineering, Hokkaido University, Japan
ABSTRACT: Reliability-based design methods have been used extensively for the construction of civil engineering structures, including foundation design. Since the ground is naturally, not artificially, created, reliability methods developed for conventional structures of steel or concrete cannot be applied directly to foundation design, because the variability in soil properties may differ from that in other civil engineering materials. It is therefore necessary to establish a database showing the variability of various grounds. In the present study, statistical analyses are carried out for 11 selected sites using data measured by the Cone Penetration Test (CPT).
1 INTRODUCTION
The reliability-based design method was introduced some time ago and has gradually come into use for the construction of civil engineering structures, including the design of foundations. However, many problems remain when applying this method to geotechnical engineering. For example, soil properties differ from those of other civil engineering materials, such as concrete and steel. Because the ground is naturally deposited, except for compacted or improved soils, the variability in soil properties is completely different from that of artificially manufactured materials, whose properties are strictly controlled. In addition to this inherent difference, human errors in the measurement of soil parameters should be taken into account. Sample disturbance is one of the key issues in obtaining reliable soil parameters from laboratory tests. Although in situ tests need not consider sample disturbance, their measured values are strongly influenced by the drilling method and/or the testing method. In the Standard Penetration Test (SPT), for example, the method of dropping the hammer affects the N value. The Cone Penetration Test (CPT) has gradually come into use even in Japan. The most advantageous feature of the CPT over other in situ tests may be that the measured values are nearly free from human factors, because it requires no borehole and its testing procedures are relatively simple. In addition, the CPT can obtain geotechnical information nearly continuously with depth. In this study, using CPT data measured at 11 various sites in Japan and overseas, the variability in soil parameters is examined. For comparison, the variability in the N value for sandy ground and in the unconfined compressive strength (qu) for clayey ground is also investigated.
2 TESTED SITES AND METHOD OF CPT

Statistical analyses were carried out for 11 sites, whose main features and locations are indicated in Table 1 and Fig. 1 (the latter excluding the overseas sites), respectively. From the Hachirogata to the Amagasaki site in Table 1, the ground consists of normally consolidated clayey soil. The overburden pressure at the Hachirogata and Busan sites has not changed much (except for fluctuations of the ground water table), while the Singapore and Amagasaki sites were recently reclaimed, although consolidation due to the filling is considered to be complete. From Kagoshima to Tomakomai, the investigated grounds consist of granular materials. The material at the Kagoshima, Nakashibetsu and Tomakomai sites is volcanic ash.
Figure 1. Testing Sites in Japan.
Table 1. Features of the tested sites and analyzed results from the CPT data.

Site              Soil Type               Depth (m)  a (MPa)  b (MPa/m)  β1        β2     σ (MPa)  Reference
Hachirogata       Clay                    10∼37      −0.08    0.03       0.0067    2.89   0.02     Tanaka, 2006
Busan (sand)      Clay                    5∼23       0.044    0.022      1.8       10.82  0.028    Tanaka et al., 2001a
Busan (no sand)   Clay                    5∼23       0.041    0.022      0.000062  2.63   0.024    Tanaka et al., 2001a
Singapore         Clay                    18∼30      0.33     0.037      0.00033   2.73   0.024    Tanaka et al., 2001b
Amagasaki 1       Clay                    11∼19      −0.32    0.091      0.052     3.32   0.04
Amagasaki 2       Clay                    11∼19      −0.3     0.088      0.066     2.59   0.043
Kagoshima         Volcanic ash            5∼30       9.23     −0.0096    0.0986    3.87   1.35
Yodogawa          Sand                    5∼20       12.1     0.15       0.061     2.93   3.74     Mimura, 2003
Nakashibetsu      Volcanic ash            4∼10       −8.61    2.18       0.78      1.98   4.61
Kemigawa          Sand                    6∼20       −8.4     1.41       0.33      3.1    1.94
Higashiohgishima  Filling (sandy)         5∼20       4.4      −0.092     2.16      5.18   1.92
Tsuruga           Filling (crushed rock)  4∼16       3.2      0.3        0.0064    3.26   2.54     Tanaka et al., 1999
Tomakomai         Volcanic ash            13∼20      7.48     −0.0037    1.23      4.05   0.85
At the Kagoshima site, the volcanic ash was transported and deposited by rivers (secondary deposition), while at the Nakashibetsu and Tomakomai sites the ash was deposited directly by wind at the eruption (primary deposition). The Yodogawa and Kemigawa sites are located on riversides ('gawa' means river in Japanese), and their grounds consist mainly of sandy material. The Higashiohgishima and Tsuruga sites are on land reclaimed with sand and crushed gravel, respectively. More detailed information on the soil properties of the sites is available in the literature indicated in Table 1.

The CPT was conducted following the specification of the international reference test procedure proposed by the ISSMFE technical committee on penetration testing (1988): the base area of the cone is 10 cm2 (a diameter of 35.7 mm); the apex angle of the cone is 60°; the filter for measuring pore water pressure is located at the shoulder behind the cone; and the speed of penetration is 2 cm/s. The point resistance (qt) is corrected for the effective area, taking into account the pore water pressure acting on the filter.

3 ANALYSES OF CPT DATA

3.1 Trend function due to increase in overburden pressure

qt measured by CPT may be broken down into a trend function [t(z1 . . . zn)] and a set of residuals from the trend [ξ(z1 . . . zn)]: i.e., qt = qt(t) + qt(ξ). As a typical example of a CPT result from clayey soil, Fig. 2 shows the qt distribution at the Busan site. In this example, the qt values clearly increase with depth. qt can be expressed by the following equation:
qt = Nkt · Su + σvo

where Nkt, Su and σvo are the cone factor, the undrained shear strength and the total overburden pressure, respectively. σvo increases with depth and, for normally consolidated ground, Su also increases with depth because of the increase in consolidation pressure.
Figure 2. (a) Measured and trend qt at Busan site. (b) Residuals at Busan site.
Therefore, it is anticipated that qt increases linearly with depth (z), especially in normally consolidated soil layers. On the other hand, for sandy soil, in other words where the penetration of the CPT is performed under drained conditions, it is well known that qt increases not linearly with σvo but with σvo^0.5. In this study, however, as a preliminary step, it is assumed that the qt values from the CPT have a trend function of the following linear form:

qt(t) = a + b · z

where z is depth. The constants a and b are calculated by the least-squares regression method. The calculated trend line at the Busan site is shown in Fig. 2. At this site, the depths used for obtaining the trend line are restricted to the range from 5 m to 23 m. Fig. 3 shows the observed qt values and the trend line at the Kagoshima site, which is covered by thick volcanic ash transported by rivers from the 'Shirasu' terrace.
Figure 3. (a) Measured and trend qt at Kagoshima site. (b) Residuals at Kagoshima Site.
Figure 4. Pearson chart from CPT.
The constants a and b for the other areas are indicated in Table 1. For clayey ground, the constant b is related to the increase of Su with depth, because the unit weight of the soil (γt) does not vary much among the sites. It can be seen that b for Amagasaki is clearly larger than that for the other sites. For granular ground, the b constants differ completely among the sites, and at some sites they take negative values: see, for example, Kagoshima, as shown in Table 1 and Fig. 3.
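The trend-line fit described above can be reproduced with a few lines of Python (a sketch only; np.polyfit performs the least-squares regression, and the function name is illustrative):

import numpy as np

def linear_trend(depth, qt):
    # Fit qt(t) = a + b*z by least squares and return the residuals qt(xi).
    b, a = np.polyfit(depth, qt, 1)   # polyfit returns [slope, intercept]
    residuals = np.asarray(qt) - (a + b * np.asarray(depth))
    return a, b, residuals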
3.2 Residuals

Residuals (qt(ξ)) are the differences between the measured qt and the values calculated from the trend line qt(t) at the same depths. The distribution of qt(ξ) is shown in Figs 2(b) and 3(b) for the Busan and Kagoshima sites, respectively. qt(ξ) is thought to consist of two components: measurement error and variation caused by the heterogeneity of the layer in question. It is interesting to note that qt(ξ) does not increase with depth but is nearly constant for both Busan (clay) and Kagoshima (volcanic ash). This indicates that qt(ξ) is not influenced by the magnitude of qt. In other words, qt(ξ) should not be treated in normalized form, such as the coefficient of variation qt(ξ)/qt(t), since qt(ξ)/qt(t) decreases with depth. The standard deviation of qt(ξ) is indicated in Table 1 (its symbol is σ). As expected, σ for clayey ground is definitely smaller than that for granular ground, the difference being as much as a factor of 100. This means that, in addition to qt itself being small for clayey ground, the variation in qt is smaller than that for granular grounds.

For studying the properties of qt(ξ), it is useful to know what shape of distribution is suitable for qt(ξ). If qt(ξ) were formed by measurement errors alone, it should follow the normal distribution. In this study, the Pearson chart (see Fig. 4) is used for the examination of the qt(ξ) distribution. Pearson developed an efficient system for the identification of suitable probability distributions based on the third and fourth moment statistics of a data set, as shown in Fig. 4 (Baecher & Christian 2003). In his chart, β1 is plotted on the horizontal axis and β2 on the vertical axis, and β1 and β2 are defined by the following equations:

β1 = Csk²,  β2 = Cku

where Csk and Cku are the skewness and kurtosis, respectively, given as follows:

Csk = (1/n) Σ (ϕi − mϕ)³ / sϕ³,  Cku = (1/n) Σ (ϕi − mϕ)⁴ / sϕ⁴

where n, ϕi, mϕ and sϕ are the number of samples, the residuals, their sample mean and their sample standard deviation, respectively; mϕ and sϕ are given by

mϕ = (1/n) Σ ϕi,  sϕ = [ (1/n) Σ (ϕi − mϕ)² ]^0.5
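For the residuals so obtained, β1 and β2 can be computed directly from the simple 1/n moment definitions above (a sketch; any small-sample corrections used in the original computations are not reproduced here):

import numpy as np

def pearson_beta(residuals):
    # beta1 = Csk^2 (squared skewness), beta2 = Cku (kurtosis).
    phi = np.asarray(residuals, dtype=float)
    m_phi = phi.mean()
    s_phi = np.sqrt(np.mean((phi - m_phi) ** 2))
    csk = np.mean((phi - m_phi) ** 3) / s_phi ** 3
    cku = np.mean((phi - m_phi) ** 4) / s_phi ** 4
    return csk ** 2, cku

For a normal distribution β1 = 0 and β2 = 3, which is the reference point against which the residuals are judged in the Pearson chart of Fig. 4.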
The values of β1 and β2 at the various sites are indicated in Table 1. For clayey ground, β1 is relatively small compared with granular ground, except at Busan. Both β1 and β2 at the Busan site are extremely large, even in comparison with those for granular grounds. The reason for these large values at the Busan site may be attributed to the existence of sand layers. As shown in Fig. 2, at depths of 10 and 16 m, qt suddenly increases. From the data showing the pore water pressure, it was revealed that sand layers exist at these depths. When the qt data at these sand layers are omitted, both β1 and β2 change dramatically, as shown in Table 1. In particular, β1 changes from 1.8 to 0.000062. However, the trend function is essentially unchanged (see a and b in the table; only a small change in a), and σ also changes only slightly, from 0.028 to 0.024 (MPa).
Figure 5. Histogram of residuals at Busan site.

Figure 6. Measured values and trend line from UCT at Busan site.
The distributions of qt(ξ) with and without the sand layers are compared in the histogram of Fig. 5. It can be seen that the shapes of the histograms for the two cases are almost identical, except for a few data points exceeding 80 kPa, which correspond to qt in the sand layers. The existence of these extreme values, though their frequency is very small, significantly affects the β1 and β2 values. The β1 and β2 values of qt(ξ) for the sites investigated in this paper are plotted in the Pearson chart, as shown in Fig. 4. It can be seen that the β1 and β2 pairs for the clayey and most of the granular grounds are distributed along the log-normal distribution line, with β1 values nearly zero. Therefore, it can be judged that qt(ξ) for the clayey and most of the granular grounds follows the normal distribution. Some granular grounds (Kemigawa, Higashiohgishima and Nakashibetsu) may follow the Beta distribution.
4 ANALYSIS OF SPT AND UCT DATA

Although the CPT has gradually come into use in Japan, the most conventional testing methods are the SPT for granular soils and the unconfined compression test (UCT) for clayey soils. The same analysis was therefore carried out for data from the SPT and UCT. Figs 6 and 7 show the qu/2 and N values for the Busan and Kagoshima sites, respectively, with superimposed trend lines calculated by the regression method. It is interesting to note that the trend line for the Kagoshima site also has a negative slope, similar to that for the CPT (see Fig. 3). The analyzed results are indicated in Table 2.
Figure 7. Measured values and trend line from SPT at Kagoshima Site.
For granular grounds, the slope of the trend line (i.e., the b value) is compared for the SPT and CPT in Fig. 8. It can be seen that there is a good relation between them. Fig. 9 shows the relation between the σ values obtained by the SPT and CPT. It is also found that when σ for the CPT is large, σ for the SPT is also large. This fact indicates that the variation from the trend line may be caused mainly by heterogeneity in the ground, not by measurement error.
Table 2. Analyzed results from UCT & SPT.

Site              Soil Type               Testing Method  Sample Number  a (MPa)  b (MPa/m)  β1     β2     σ (MPa)
Hachirogata       Clay                    UCT             28             0.012    0.00072    1.02   4.83   0.0047
Busan             Clay                    UCT             15             0.028    0.0029     9.67   14.04  0.029
Singapore         Clay                    UCT             7              −0.016   0.0032     0.46   1.41   0.0019
Kagoshima         Volcanic ash            SPT             35             21.46    −0.28      0.091  1.95   3.64
Yodogawa          Sand                    SPT             20             21.16    0.17       0.18   4.74   5.55
Nakashibetsu      Volcanic ash            SPT             10             −11.11   2.96       0.003  3.08   3.59
Kemigawa          Sand                    SPT             22             −21.04   3.21       1.54   4.76   4.64
Higashiohgishima  Filling (sandy)         SPT             13             −1.18    0.75       0.47   3.07   4.19
Tsuruga           Filling (crushed rock)  SPT             24             3.09     0.66       0.19   2.67   4.78
Tomakomai         Volcanic ash            SPT             26             9.65     −0.024     1.21   3.75   1.98
Figure 8. Comparison between SPT and CPT for the b value.
Figure 9. Comparison between SPT and CPT for the σ value.

Figure 10. (a) Comparison between SPT and CPT for the β1 value. (b) Comparison between SPT and CPT for the β2 value.
Unlike σ or b, β1 and β2 in the Pearson chart are dimensionless, so these values are comparable between grounds measured by the SPT and UCT. β1 and β2 were calculated from the SPT or UCT data and plotted in the Pearson chart of Fig. 10. Compared with the Pearson chart from the CPT, the points from the SPT and UCT are much more scattered; in particular, the β2 values are larger. The β1 and β2 values are compared in Figs 10(a) and (b). Though hardly any meaningful relation can be identified for β1, there seems to exist a weak relation for β2.

5 CONCLUSIONS
Using CPT data, soil variability was examined for 11 different sites consisting of various soil materials, i.e.
clay, volcanic ash, sand and crushed gravel. The measured qt values were broken down into a trend function and residuals, assuming that the trend is expressed as a linear function of depth. The properties of the residuals were analyzed, and the main conclusions are as follows: 1) The distribution of the residuals from the CPT at most sites follows the normal distribution; however, some granular grounds (Kemigawa, Higashiohgishima, Nakashibetsu) may follow the Beta distribution. 2) Statistical parameters such as σ and β2 calculated from the SPT or UCT show a relatively good relation to those from the CPT.

REFERENCES

Baecher, G.B. & Christian, J.T. 2003. Reliability and statistics in geotechnical engineering. England: John Wiley & Sons.
Mimura, M. 2003. Characteristics of some Japanese natural sands – data from undisturbed frozen samples. Characterisation and Engineering Properties of Natural Soils, Tan et al. (eds), (2): 1149–1168.
Tanaka, H. 2006. Geotechnical properties of Hachirogata clay. Characterisation and Engineering Properties of Natural Soils, Vol. 3: 1831–1854.
Tanaka, H., Mishima, O. & Tanaka, M. 1999. Applicability of CPT and DMT for grounds consisting of large granular particles. Journal of the Japan Society of Civil Engineers, III–49: 273–283 (in Japanese).
Tanaka, H., Mishima, O., Tanaka, M., Park, S.Z., Jeong, G.H. & Locat, J. 2001a. Characterization of Yangsan clay, Pusan, Korea. Soils and Foundations, Vol. 41(2): 89–104.
Tanaka, H., Locat, J., Shibuya, S., Tan, T.S. & Shiwakoti, R.D. 2001b. Characterization of Singapore, Bangkok and Ariake clays. Canadian Geotechnical Journal, 38: 378–400.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Reducing uncertainties in undrained shear strengths Jianye Ching National Taiwan University, Taipei, Taiwan
Yi-Chu Chen National Taiwan University of Science and Technology, Taipei, Taiwan
Kok-Kwang Phoon National University of Singapore, Singapore
ABSTRACT: Undrained shear strengths (su) play important roles in geotechnical design. In the context of geotechnical reliability-based design, reducing the uncertainties in su is an important research topic. There are at least two ways of reducing the uncertainties in su: conducting tests that measure su directly, or conducting laboratory or in-situ tests to obtain indices or parameters that correlate with su indirectly. The latter can be challenging: the challenge lies in the fact that the indices and parameters so obtained, e.g. a CPT reading, cannot be used directly to estimate su but can estimate su only through correlations. Above all, there is the challenge of combining information: how can the uncertainties in su be reduced when there are multiple sources of information? In this paper we address the aforementioned challenges and propose a probabilistic framework to handle these difficulties. Sets of simplified equations are obtained through the probabilistic analysis for the purpose of reducing uncertainties: the inputs to the equations are the results of in-situ or laboratory tests, and the outputs are the updated mean values and coefficients of variation (c.o.v.) of the desired undrained shear strengths. The uncertainties in su decrease as the number of inputs increases, i.e. as more information becomes available. The results of this research may be beneficial to geotechnical reliability-based design.
1 INTRODUCTION
Uncertainties are commonly encountered in geotechnical engineering. Possible sources of uncertainties include inherent variabilities, measurement errors, modeling uncertainties, etc. More economical geotechnical designs can be achieved by reducing the uncertainties in soil shear strengths through site investigation.

1.1 Reducing uncertainties by correlations

In practice, it is well known that field or laboratory test data, denoted as 'test indices' hereon, can be combined to reduce the uncertainties in undrained shear strengths through correlations. For instance, given field SPT-N test data, it is possible to infer first-order estimates of the mean value and coefficient of variation (c.o.v.) of the undrained shear strength (Su) of the clay under consideration. This process of pairwise correlation is illustrated in Figure 1: for a given observed SPT-N value (N), the corresponding mean value and c.o.v. of the undrained shear strength can be estimated. In the literature, such pairwise correlations between various test indices and undrained shear strengths have been widely studied. Table 1 lists examples of previous research studying such correlations.
Figure 1. Su versus SPT-N relationship.
In particular, Kulhawy and Mayne (1990) and Phoon (1995) both contain fairly comprehensive reviews of the pairwise correlations between various test indices and undrained shear strengths.

1.2 Graphical model for clayey soils

The undrained shear strength (Su) considered here is that determined by CIUC
(isotropically consolidated undrained compression) tests. Figure 2 presents the graphical model adopted for clays in this paper. For the model of Su of clayey soils, the adopted test indices are limited to the following: (a) the overconsolidation ratio (OCR); (b) the energy-ratio-corrected SPT-N value (N60); and (c) the adjusted CPT reading q′T = qT − σv0, where σv0 is the total vertical stress and qT is the CPT reading corrected with respect to the pore pressure behind the cone. The underlying assumptions of this model are: (a) OCR is the main factor influencing the undrained shear strength, which in turn influences the SPT-N and CPT values, i.e. the undrained shear strength is treated as the consequence of OCR, and the SPT-N and CPT values are treated as consequences of the undrained shear strength; (b) given the undrained shear strength of a clay, its SPT-N and CPT values are independent of OCR, i.e. the undrained shear strength serves as a sufficient statistic for the SPT-N value and CPT reading: once the undrained shear strength is known, OCR contains no further information about the SPT-N and CPT values; (c) given the undrained shear strength, the SPT-N value and CPT reading are mutually independent. In other words, the SPT-N value and CPT reading are treated as two pieces of independent information about the undrained shear strength. This model is deemed reasonable for unstructured, unfissured, inorganic clays. The first two moments of the following quantities are needed for the Bayesian analysis: (a) Su conditioning on OCR; (b) N60 conditioning on Su; (c) q′T conditioning on Su. Details are given below.

Figure 2. Graphical model for clayey soils.

Table 1. Previous research studying pairwise correlations between various test indices and undrained shear strengths.

Correlation pair                    Examples of previous research
Standard penetration test (SPT-N)   Terzaghi and Peck (1967); Hara et al. (1974)
Cone penetration test (CPT)         Keaveny and Mitchell (1986); Konrad and Law (1987)
Pressuremeter test (PMT)            Mair and Wood (1987)
Dilatometer test (DMT)              Lacasse and Lunne (1988)
Vane shear test (VST)               Bjerrum (1972); Mesri (1975)
Plasticity index (PI)               Skempton (1957); Chandler (1988)
Overconsolidation ratio (OCR)       Ladd et al. (1977); Jamiolkowski et al. (1985)

2 PROBABILISTIC MODELS FOR PAIRWISE CORRELATIONS

2.1 Database

Probabilistic models for the pairwise correlations between undrained shear strengths and the chosen test indices are necessary for the Bayesian analysis. Therefore, efforts were made to compile a correlation database from the literature, e.g. the Su vs. SPT-N data points shown in Figure 1, in order to estimate the probabilistic models for the pairwise correlations. For such a database, it is always a concern whether the compiled data points are sufficient to cover a wide range of possible scenarios. The following guidelines are followed to mitigate this concern: (a) unless mentioned explicitly, the data points in the database do not include those from special clays, e.g. fissured and organic clays, and the corresponding correlations should therefore not be applied to those soils; (b) in the case that all data points of a particular correlation are from the same geographical region, that correlation may be applicable only to that region, and its applicability to other regions is in general questionable. Bayesian updating will not reduce the physical limitations inherent in existing pairwise correlations. It merely provides a more rational and systematic method for combining information that will serve as a useful complement to engineering judgment. The following strategy is adopted to obtain the probabilistic models for the pairwise correlations given the pairwise-correlation data points:

– (a) In the case that the pairwise data points for a certain correlation are sufficient, the data quality is verified through empirical correlations in the literature. Only those data points that are consistent with the literature are later used to derive the probabilistic model for that correlation.
– (b) In the case that the pairwise data points for a certain correlation are insufficient, the empirical correlation provided by the literature that best matches the data points is adopted to derive the probabilistic model for that correlation. If the c.o.v. of the adopted empirical correlation is not available in the literature, it is estimated from the data points at hand.
– (c) In the case that pairwise data points for a certain correlation are absent, the empirical correlation provided by the literature is adopted.

2.2 Probabilistic models for pairwise correlations in clays

Recall that the first two moments [mean and coefficient of variation (or standard deviation)] of the following quantities are needed: (a) Su conditioning on OCR; (b) N60 conditioning on Su; (c) q′T conditioning on Su. The derivations of the first two moments are presented in detail below.
Figure 3. Su /σv0 vs. OCR correlation, and the mean value and 95% confidence interval proposed by this research.
2.2.1 First two moments of Su conditioning on OCR

The overconsolidation ratio OCR is assumed to be the main basic index affecting Su. Mayne (1988) compiled a set of Su/σ′v0 vs. OCR data (the Su data points were all from CIUC tests), shown in Figure 3. Based on these data, a least-squares fit yields the following correlation:
where εSu is the prediction error term. Its standard deviation is found to be around 0.237. Therefore,
and the standard deviation of this correlation is 0.237. Figure 3 shows the mean value and 95% confidence interval of this equation and the comparison with the actual data. In the case that the OCR information is not available, the prior c.o.v. of Su is taken to be a very large number.

Figure 4. The mean value and 95% confidence interval proposed by this research.

2.2.2 First two moments of N60 conditioning on Su

The correlation between the Su of clayey soils and the SPT-N value is well known. Figure 4 compiles the pairwise data for this correlation summarized by Hara et al. (1974) (also see Phoon (1995) and Kulhawy and Mayne (1990)), where the undrained shear strengths were determined by UU tests. Note that all data points here are from clays in Japan. The following equation was proposed:
where SuUU denotes the undrained shear strength determined by a UU test; N is the uncorrected SPT-N value; and the standard deviation of ε1 is roughly 0.15 (Phoon 1995). The SPT-N energy ratio is roughly 78% in Japan (Chen 2004); therefore, the N value in (3) is roughly 1.3 N60. In other words,
Equation (4) can be used to obtain the first two moments of SuUU conditioning on N60; however, our goal here is to derive the first two moments of N60 conditioning on Su. With the same data set as in Figure 4, a least-squares fit yields the following equation:
where the standard deviation of ε2 is 0.407, and the unit of SuUU is kPa. The 95% confidence interval of this new equation is shown in Figure 4 for comparison.

Figure 5. ln(SuUU/Su) vs. ln(Su/σ′v0) correlation, and the mean value and 95% confidence interval proposed by this research.

Moreover, according to the database presented in Chen and Kulhawy (1993) (see Figure 5), the correlation between the undrained shear strengths determined by UU and CIUC tests is as follows:
where the standard deviation of ε3 is roughly 0.167, and σ′v0 is the effective vertical stress. The mean value and 95% confidence interval, together with the database
presented in Chen and Kulhawy (1993), are plotted in Figure 5. Equations (5) and (6) imply that

where Su and σ′v0 are both in units of kPa, and the standard deviation of εN is (0.407² + 1.230² × 0.167²)^0.5 = 0.456. Therefore,

and the standard deviation of this correlation is 0.456.

2.2.3 First two moments of q′T conditioning on Su

The correlation between the undrained shear strength and the CPT reading has been studied in several works, e.g. Hanzawa (1992) (direct shear tests), Tanaka and Sakagami (1989) (CIUC tests), Fukasawa et al. (2004) (vane shear and unconfined compression tests) and Anagnostopoulos et al. (2003) (UU tests). The following correlation equation is commonly adopted (e.g. Lunne et al. (2002), Kulhawy and Mayne (1990)):

Su = (qT − σv0)/Nk = q′T/Nk

where Nk is called the cone bearing factor. Theoretical studies showed that Nk ranges from 7 to 18. Several experimental studies indicated that the measured Nk can vary widely, from 4.5 to 75, probably owing to inconsistent reference strengths, the mixing of different types of cones, the need to correct for pore water pressure, etc. (Kulhawy and Mayne 1990). On the other hand, Phoon (1995) reported that:

for CIUC test results. This corresponds to Nk = 12.7, which agrees well with the theoretical results (Nk ranging from 7 to 18). Phoon (1995) further reported that the uncertainty of (10) is roughly 35%. The probabilistic version of (10) is therefore

where ε4 is zero-mean with a standard deviation equal to 0.34, corresponding to the 35% uncertainty of (10). A simple Bayesian argument can transform the above equation into:

where εqT is also zero-mean with a standard deviation equal to 0.34. Therefore,

and the standard deviation of this correlation is 0.34.

3 BAYESIAN INFERENCE WITH MULTIVARIATE TEST DATA

In the case that only a single piece of information is available, updating the mean value and c.o.v. of the undrained shear strength is not difficult. For instance, given that the SPT-N value of a clay sample is 10, it can be concluded from Figure 1 that the updated mean value of Su is roughly 160 kPa and the c.o.v. roughly 30%. The same principle can be used to update the mean and c.o.v. of the undrained shear strength based on any other single piece of information. However, in the case that multivariate test data are available, e.g. both the SPT-N value and the OCR of a clay sample, updating the mean value and c.o.v. of Su is less straightforward. Bayesian analysis is a natural way of handling multivariate, and even conflicting, information. The basic Bayes' rule consists of the following equation:

f(y|x) = f(x|y) · f(y) / f(x)

where x and y can both be vectors; y is the uncertain variable of interest, while x is the observed variable. f(y) is called the prior PDF of y, quantifying the uncertainties in y before an observation of x is made, and f(x|y) is called the likelihood function of y given x. f(y|x) is the updated, or posterior, PDF of y conditioning on the information of x. As an example, let y be the logarithm of the undrained shear strength. In the case that OCR is given, f(ln(Su)|ln(OCR)) serves as the prior PDF f(y). If the observed variable x is the corrected SPT-N value N60, f(ln(N60)|ln(Su)) serves as f(x|y). Then

f(ln(Su)|ln(N60), ln(OCR)) ∝ f(ln(N60)|ln(Su)) · f(ln(Su)|ln(OCR))

represents the updated (posterior) PDF of the undrained shear strength given the multivariate information {N60, OCR}. In this paper, we have implemented the aforementioned model assumption that, conditioning on Su, N60 is independent of OCR.
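Under the jointly Gaussian assumptions of the next section, this updating reduces to precision-weighted averaging in log space. The following sketch illustrates the mechanics for generic linear-Gaussian correlations x_i = a_i + b_i·y + ε_i; the coefficients and the function name are placeholders for the calibrated correlations of Section 2, not the paper's values.

import numpy as np

def update_ln_su(prior_mean, prior_std, observations):
    # Posterior moments of y = ln(Su) given a Gaussian prior and independent
    # observations x_i = a_i + b_i * y + eps_i with eps_i ~ N(0, sigma_i^2).
    # observations: iterable of (x, a, b, sigma) tuples (placeholder values).
    precision = 1.0 / prior_std ** 2
    weighted = prior_mean / prior_std ** 2
    for x, a, b, sigma in observations:
        precision += b ** 2 / sigma ** 2        # information added by this source
        weighted += b * (x - a) / sigma ** 2    # precision-weighted pseudo-observation
    return weighted / precision, np.sqrt(1.0 / precision)

Each additional independent source can only increase the posterior precision, which is why the updated standard deviation of ln(Su) decreases monotonically as more test indices are included.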
4 MAIN RESULTS

If the variable of interest and the observed variables are jointly Gaussian, it is possible to derive all the relevant conditional means and conditional variances in closed form. Using these results, the updated mean and variance of the logarithm of the undrained shear strength conditioning on various combinations of multivariate test data are listed in the following, with detailed derivations:

Conditioning on OCR:
Conditioning on N60:

Conditioning on q′T:

Conditioning on OCR, N60:

Conditioning on OCR, q′T:

Conditioning on N60 and q′T:

Conditioning on OCR, N60, q′T:
The above results provide estimates for the first two moments of ln(Su). Let us denote the estimated mean value and standard deviation of ln(Su) by m and s, respectively; then the mean value and c.o.v. of Su are exp(m + s²/2) and [exp(s²) − 1]^0.5, respectively, assuming lognormality.
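The back-transformation to the physical scale is a one-liner; a worked sketch follows (values are illustrative only):

import math

def su_mean_and_cov(m, s):
    # Mean and c.o.v. of Su from the moments (m, s) of ln(Su), assuming
    # lognormality: mean = exp(m + s^2/2), c.o.v. = sqrt(exp(s^2) - 1).
    return math.exp(m + s ** 2 / 2.0), math.sqrt(math.exp(s ** 2) - 1.0)

For example, m = ln(100.0) and s = 0.30 give a mean of about 104.6 kPa and a c.o.v. of about 0.31.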
5.1
Depth Test (m) type
Equiv. Test indices CIUC Su value PI σv0 σv0 qc (kPa) OCR (%) N60 (kPa) (kPa) (kPa)
1
Conditioning on OCR, qT :
5
In-situ test data and indices for the Taipei case.
CASE STUDIES Clays in a deep excavation site of Taipei
A deep excavation site is extracted from Ou (2006). SPT-N and CPT tests were conducted at this site. The soil profile includes three thick clayey layers and three thin sandy layers. The water table is 2 m below the surface. Cone penetration and vane shear test results were taken to estimate the undrained shear strength of the clays. Moreover, several undisturbed clay samples are
extracted from the site, and laboratory tests, including UU and CK0 U tests, were taken to determine the undrained shear strengths. In principle, undrained shear strengths from different test types cannot be directly compared. Therefore, attempts are made to convert all undrained shear strengths into their equivalent CIUC values through empirical transformations suggested by Kulhawy and Mayne (1990) for CK0 UC to CIUC, suggested by Chen and Kulhawy (1993) for UU to CIUC and suggested by Ladd et al. (1977) for VST to field value (closer to DSS value; the DSS values are further converted to CIUC values by the transformation equations suggested by Kulhawy and Mayne (1990)). Table 2 summarizes the tested undrained shear strengths and their equivalent CIUC values as well as the other in-situ test indices of the clay at various depths. Notice that qT data is not available for this site since the behind cone pore pressure was not documented. As a result, qc data is directly taken to compute qT for this case study, i.e. qT = qT − σv0 . Because qT is always greater than qc , the actual qT values should be larger than the qT value reported in the table to a certain degree. Based on these data and the formulas provided in the previous section, the updated mean values and 95% confidence intervals (±2 standard deviations) with respect to depth are plotted in the left column in Figure 6 for various combinations of test indices D (Cases 1-7 in the figure). The mean values E(Su |D) and standard deviations Var(Su |D)0.5 are both normalized with respect to the measured equivalent CIUC values Sum , so mean = 1 indicates the updated mean is the same as the measured equivalent CIUC value. It is evident that the updated standard deviation of Su decreases as more information is taken for the updating. Let us take Cases 1, 2 and 4 as examples: When only one piece of information is involved (Case 1 for OCR and Case 2 for N60 ), the confidence intervals
Figure 6. Updated mean values and 95% confidence intervals of Su with respect to depth (normalized with respect to the measured Su ) for the Taipei case.
When the OCR and N60 information is combined (Case 4), the confidence intervals start to shrink; moreover, although the confidence intervals become smaller, most of them still contain 1, i.e. the measured equivalent CIUC value Sum still lies within the intervals. When all information is implemented, i.e. Case 7, the c.o.v. of the undrained shear strength is as low as 0.16. Compared to the c.o.v.s for Cases 1–3, which range from 0.24 to 0.34, the 0.16 c.o.v. is a major improvement: the uncertainty in Su is effectively reduced by incorporating multivariate data.

Notice that relatively large biases are found in the estimated Su values for Cases 3, 5 and 6: in the plots for these cases, the estimated mean values E(Su |D) clearly deviate from 1. These are the cases where the qT information is incorporated, and the biases are evidently due to the fact that the employed qc values are less than the actual qT values, so the estimated mean values of Su are significantly less than the actual CIUC values. However, by also incorporating the OCR and N60 information, i.e. Case 7, the bias is significantly reduced. Therefore, incorporating more information not only reduces uncertainty but also reduces bias. Nevertheless, if data are judged to be incomplete, as qT is here, it is preferable not to include them, even if the bias can be reduced by including more sources of information.
In other words, we do not recommend including all sources of information indiscriminately; engineering judgment remains important in this interpretation process.

The performance of the proposed Bayesian method is further compared with the level-1 T.E.A.M. approach (Technical Expert Averaging Method; Briaud et al. 2002). The level-1 TEAM approach is an intuitive way of combining multivariate information. Taking Case 7 as an example, there are 3 prediction methods (predictions based on OCR, N60 and qT ) over 13 measured events {Sum,j , j = 1, . . . , 13}, and the i-th method gives the prediction E(Suj |Di ) for Sum,j . The ratio rij = E(Suj |Di )/Sum,j is treated as a measure of the performance of the i-th prediction method for the j-th measured event; its standard deviation σi is estimated as the sample standard deviation of {rij : j = 1, . . . , 13}. The TEAM ratio rTEAM,j for the j-th measured event is simply the arithmetic average (r1j + r2j + r3j )/3, and the TEAM standard deviation is σTEAM,j = [(σ1² + σ2² + σ3²)/3]^0.5 ; the latter is not a function of j, so the subscript j is dropped hereafter. Note that the TEAM standard deviation σTEAM is estimated from the ratios {rij : j = 1, . . . , 13}, which are in turn based on the measured data {Sum,j , j = 1, . . . , 13}. The TEAM 95% confidence interval for the j-th measured event can then be plotted as rTEAM,j ± 2σTEAM .

The estimated TEAM ratios and their 95% confidence intervals are plotted in the right column of Figure 6. For the cases not involving qT , i.e. Cases 1, 2 and 4, the performance is similar to that of the Bayesian method proposed in this paper (see the left column). It is reassuring that the proposed Bayesian method agrees with the empirical rule of thumb, observed in numerous foundation capacity prediction exercises, that "averaging" seems to improve predictions. It is noteworthy that the proposed Bayesian method can also be applied to improve the estimation of foundation capacities by combining laboratory- and in-situ-based formulae. The Bayesian method is admittedly more analytically involved than simple averaging, but it provides a more general and rigorous framework for deriving practical approximate results such as Eqs. (16) to (22); in addition, it serves to validate the theoretical correctness of "averaging". Furthermore, it is quite significant that the Bayesian method gives standard deviations similar to those of the TEAM method for Cases 1, 2 and 4: the standard deviations given by the former are completely independent of the Su values of the 13 data points (they are in fact fixed numbers, as seen in Eqs. (16) to (22)), while those estimated by the latter depend on the Su values of the 13 data points. In this sense, the comparison between the Bayesian and TEAM methods is not even equitable, because the latter uses very valuable new information, i.e. the Sum,j values of the 13 data points, that is usually not available before the site investigation is carried out. The consistency between the Bayesian and TEAM results suggests that the Bayesian method can effectively predict standard deviations close to their actual values.
For the cases involving qT , i.e. Cases 3, 5, 6 and 7, the TEAM standard deviations are clearly smaller than those estimated by the Bayesian method, although the TEAM ratios are clearly biased. The TEAM standard deviation is small because it is purely the sample standard deviation of the observed ratios {rij : j = 1, . . . , 13}: when these ratios are consistently biased, as in Cases 3, 5, 6 and 7, their sample standard deviation will be small. However, the standard deviation of the Bayesian method is estimated purely from the training database (data from past correlations), not from the observed ratios; in fact, the variances from Eqs. (16) to (22) are independent of the observed ratios. In this sense, the standard deviation estimated by the Bayesian method is robust against the bias in the ratios.
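The level-1 TEAM bookkeeping described above is simple enough to sketch in a few lines of Python; the ratio matrix below is randomly generated placeholder data, not the 13 measurements of the paper:

```python
import numpy as np

# r[i, j] = E(Su_j | D_i) / Su_m,j: performance ratio of prediction method i
# on measured event j (3 methods, 13 events, placeholder values).
r = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.25, size=(3, 13))

sigma_i = r.std(axis=1, ddof=1)            # sample std of each method's ratios
r_team = r.mean(axis=0)                    # TEAM ratio: average over the methods
sigma_team = np.sqrt(np.mean(sigma_i**2))  # [(s1^2 + s2^2 + s3^2)/3]^0.5

ci = np.stack([r_team - 2 * sigma_team,    # 95% confidence intervals,
               r_team + 2 * sigma_team])   # r_TEAM,j +/- 2 sigma_TEAM
```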
5.2 Clays in various regions of the world
Ten soil profiles are extracted from Rad and Lunne (1988); they are located around the world, including Norway, the North Sea, the Norwegian Sea, England, Brazil and Canada. In these soil profiles, both OCR and qT data are available; moreover, the actual values of Su are known from either CIUC or CK0UC tests. As before, the CK0UC values are converted to their equivalent CIUC values before comparisons are made. Three scenarios of site investigation information are considered: Case 1, only OCR information is known; Case 2, only qT information is known; Case 3, OCR and qT are both known.

The updated mean values and 95% confidence intervals, both normalized with respect to the measured equivalent CIUC Su , are shown in the first column of Figure 7. The analysis results show that the uncertainties are effectively reduced by incorporating more information: when the OCR and qT information is both implemented, i.e. Case 3, the c.o.v. of the undrained shear strength is as low as 0.195. Compared to the c.o.v.s for Cases 1 and 2, which range from 0.237 to 0.337, the 0.195 c.o.v. is a major improvement. Furthermore, although the confidence intervals for Case 3 are small, they still mostly contain 1, indicating that the analysis performs satisfactorily.

For all cases, the performance of the proposed Bayesian method is again compared with the level-1 T.E.A.M. approach (second column of Figure 7). Unlike in the deep-excavation example, the TEAM results are similar to those of the multivariate Bayesian analysis, probably because the systematic bias in the TEAM ratios does not exist in the current example. Again, it is quite significant that the Bayesian method gives standard deviations similar to those of the TEAM method for all cases: the Bayesian method can effectively predict standard deviations close to their actual values.
Figure 7. Updated mean values and 95% confidence intervals of Su (normalized with respect to the measured Su ) for the ten sites.
5.3 Discussion of the case studies

– For both case studies, the updated mean values and confidence intervals (i.e. c.o.v.s) are consistent with the actual Su data points, except for the cases in the first example where CPT information is involved (Cases 3, 5 and 6), in which large biases are found in the prediction. Such biases are understandable: qc rather than qT was used for the analysis, so the predicted Su values are expected to be conservative, since qc is always less than qT .
– Incorporating more information seems helpful in reducing bias. As seen from Cases 3, 5 and 6 in the first example, there are significant biases due to the incorrect use of qc ; nonetheless, when all available information is implemented in Case 7, the bias is clearly reduced.
– Incorporating more information also helps to reduce uncertainty, i.e. it makes the confidence intervals smaller. This is evident from the analysis results in Figures 6 and 7.
– Judging from the analysis results, Eqs. (16) to (22) provide results that are consistent with the actual Su data. Moreover, this consistency appears to be independent of the location of the site of interest.
– It is possible to reduce the c.o.v. of Su to 0.16 if the OCR, qT and N60 information are all implemented. This reduction in c.o.v. is significant: the c.o.v. of the inherent variability of Su can be as large as 0.3–0.4. In fact, with only the OCR and qT information, the c.o.v. is already reduced to 0.195.
– The TEAM approach provides results similar to those of the Bayesian method proposed in this paper when there is no systematic bias in the estimation. This is significant, since it indicates that the Bayesian method can effectively predict standard deviations close to their actual values. When there is systematic bias, the TEAM approach may give a deceptively small standard deviation. The standard deviation estimated by the Bayesian method, in contrast, is based on the training database (data from past correlations) and is robust against bias in the estimation.
6 CONCLUSION
A new framework is proposed to update the first two moments of the undrained shear strength of clayey soils based on in-situ and laboratory test data and indices, e.g. overconsolidation ratio, SPT-N value and CPT reading. The method is based on pairwise correlations developed in the literature, but it implements Bayesian analysis to accommodate multivariate correlations in updating the moments. The main product of this paper is a set of equations whose inputs are the observed multivariate test index values and whose outputs are the updated mean values and coefficients of variation (c.o.v.) of the undrained shear strength. Real case studies are employed to verify the consistency of the proposed framework in predicting the shear strengths of clays. The results show that the proposed framework offers satisfactory estimates of the undrained shear strengths.

REFERENCES

Anagnostopoulos, A., Koukis, G., Sabatakakis, N. and Tsiambaos, G. (2003). Empirical correlations of soil parameters based on cone penetration tests for Greek soils. Geotechnical and Geological Engineering, 21, 377–387.
Bjerrum, L. (1972). Embankment on soft ground. Proceedings of ASCE Specialty Conference on Performance of Earth and Earth-Supported Structures, Lafayette.
Briaud, J.L., Goparaju, K. and Dahm, P.F. (2002). The T.E.A.M. approach in geotechnical engineering. ASCE Geotechnical Special Publication No. 116 (Deep Foundations 2002), Vol. 2, 976–992.
Chandler, R.J. (1988). The in-situ measurement of the undrained shear strength of clays using the field vane. Vane Shear Strength Testing in Soils: Field and Laboratory Studies (STP 1014), ASTM, Philadelphia, 13–44.
Chen, J.R. (2004). Axial Behavior of Drilled Shafts in Gravelly Soils. Ph.D. Dissertation, Cornell University.
Chen, Y.J. and Kulhawy, F.H. (1993). Undrained strength interrelationships among CIUC, UU, and UC tests. Journal of Geotechnical Engineering, 119(11), 1732–1750.
Fukasawa, T., Mizukami, J. and Kusakabe, O. (2004). Applicability of CPT for construction control of seawall on soft clay improved by sand drain method. Soils and Foundations, 44(2), 127–138.
Hanzawa, H. (1992). A new approach to determine soil parameters free from regional variations in soil behavior and technical quality. Soils and Foundations, 32(1), 71–84.
Hara, A., Ohta, T., Niwa, M., Tanaka, S. and Banno, T. (1974). Shear modulus and shear strength of cohesive soils. Soils and Foundations, 14(3), 1–12.
Jamiolkowski, M., Ladd, C.C., Germaine, J.T. and Lancellotta, R. (1985). New developments in field and laboratory testing of soils. Proceedings of the 11th International Conference on Soil Mechanics and Foundation Engineering, San Francisco.
Keaveny, J.M. and Mitchell, J.K. (1986). Strength of fine-grained soils using the piezocone. In Use of In-Situ Tests in Geotechnical Engineering (GSP 6), Ed. S.P. Clemence, ASCE, New York.
Konrad, J.M. and Law, K.T. (1987). Undrained shear strength from piezocone tests. Canadian Geotechnical Journal, 24(3), 392–405.
Kulhawy, F.H. and Mayne, P.W. (1990). Manual on Estimating Soil Properties for Foundation Design, Report EL-6800, Electric Power Research Institute, Palo Alto.
Lacasse, S. and Lunne, T. (1988). Calibration of dilatometer correlations. Proceedings of the 1st International Symposium on Penetration Testing (ISOPT-1), Orlando.
Ladd, C.C., Foote, R., Ishihara, K., Schlosser, F. and Poulos, H.G. (1977). Stress-deformation and strength characteristics. Proceedings of the 9th International Conference on Soil Mechanics and Foundation Engineering, Tokyo.
Lunne, T., Robertson, P.K. and Powell, J.J.M. (2002). Cone Penetration Testing in Geotechnical Practice. Spon Press, London.
Mair, R.J. and Wood, D.M. (1987). Pressuremeter Testing. Butterworths, London.
Mayne, P.W. (1988). Determining OCR in clay from laboratory strength. ASCE Journal of Geotechnical Engineering, 114(1), 76–92.
Mesri, G. (1975). Discussion of "New design procedure for stability of soft clays". ASCE Journal of Geotechnical Engineering, 101(4), 409–412.
Ou, C.Y. (2006). Deep Excavation Engineering. Scientific & Technical Publishing Co. Ltd.
Phoon, K.K. (1995). Reliability-based Design of Foundations for Transmission Line Structures. Ph.D. Dissertation, Cornell University, Ithaca, NY.
Robertson, P.K. and Campanella, R.G. (1989). Guidelines for Geotechnical Design Using the Cone Penetrometer Test and CPT with Pore Pressure Measurement. Hogentogler & Co., Inc.
Skempton, A.W. (1957). Discussion of "Planning and design of the new Hong Kong airport". Proceedings of the Institution of Civil Engineers, 7, 305–307.
Tanaka, Y. and Sakagami, T. (1989). Piezocone testing in underconsolidated clay. Canadian Geotechnical Journal, 26, 563–567.
Terzaghi, K. and Peck, R.B. (1967). Soil Mechanics in Engineering Practice. Wiley, 729 p.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
A case study on settlement prediction by spatial-temporal random process P. Rungbanaphan & Y. Honjo Gifu University, Gifu, Japan
I. Yoshida Musashi Institute of Technology, Tokyo, Japan
ABSTRACT: A systematic procedure for spatial-temporal prediction of settlement is proposed. The method is based on Bayesian estimation and considers both the prior information on the settlement model's parameters and the observed settlements to search for the best estimates of the parameters. By taking the spatial correlation structure into account, all observation data can be used for rational estimation of the model parameters at any location and any time. System error can be considered by a Kalman filter including process noise. A procedure to estimate the auto-correlation distance of the parameters and the observation-model error based on the maximum likelihood method is also proposed. The Kriging method is considered to be a suitable approach for determining the statistics of the estimated model parameters at any arbitrary location. A case study on the secondary compression settlement of alluvial soil due to preloading work is carried out, with the y ∼ log (t) method chosen as the basic model for settlement prediction. It is concluded that, while strong spatial correlation is required before the spatial correlation structure significantly improves the settlement prediction, the proposed approach gives a rational prediction of the settlement at an arbitrary point with quantified uncertainty. In addition, including process noise in the calculation can improve the estimation, but care should be taken to assign an appropriate level of this system error.
1 INTRODUCTION
So far, methods of predicting future settlement from past observations have been based solely on the temporal dependence of the observed quantity. However, the fact that soil properties tend to exhibit a spatial correlation structure has been clearly shown by several studies in the past, e.g. Vanmarcke (1977) and DeGroot & Baecher (1993). It is therefore natural to expect that the accuracy of settlement prediction can be improved by taking into account the spatial correlation of the ground properties, by which the observed settlement data from all of the different observation points can be utilized simultaneously. Furthermore, by introducing spatial correlation, it is possible to estimate the future settlement of the ground at any arbitrary point by considering the spatial-temporal structure. This study is an attempt to develop such an approach.
2 SPATIAL-TEMPORAL UPDATING AND PREDICTING PROCESS

2.1 Settlement prediction model

The basic model used for settlement prediction in this paper is the linear relationship between the logarithm of time and the settlement, i.e. the y ∼ log (t) method. This model is considered to be rational and practical for prediction of the secondary compression (Bjerrum 1967, Garlanger 1972, Mesri et al. 1997, etc.). The equation is given by

yk = m0 + m1 log(tk ) + εk    (1)

where yk = settlement at the kth step of observation; m0 and m1 = constant parameters; tk = time at the kth step of observation; and εk = observation-model error. The error εk is assumed to be uncorrelated between observation steps; this implies the temporally independent characteristic of εk .
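Before any Bayesian machinery is added, the model of Eq. (1) can be fitted pointwise by ordinary least squares; a sketch with invented observation values (base-10 logarithm assumed):

```python
import numpy as np

# Observation times (days) and settlements (cm) at one point -- invented values
t = np.array([103., 150., 210., 300., 430., 600., 1017.])
y = np.array([3.1, 4.9, 6.4, 8.0, 9.6, 11.1, 13.4])

# Least-squares fit of yk = m0 + m1*log(tk); polyfit returns [slope, intercept]
m1, m0 = np.polyfit(np.log10(t), y, deg=1)

# Extrapolated settlement at a future time, e.g. day 2000
y_2000 = m0 + m1 * np.log10(2000.0)
```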
2.2 Bayesian estimation considering spatial correlation structure

In order to improve the estimation and to enable local estimation, the use of Bayesian estimation considering spatial correlation is proposed in this paper. This approach uses the prior information on the parameters and the observed settlement data from all observation points to search for the best estimates of the unknown parameters, i.e. the model parameters
(m1 and m0 ), the auto-correlation distance (η), and the variance of the observation-model error (σε2 ). The formulation consists of two statistical components, namely the observation model and the prior information model. These two models are then combined by Bayes' theorem to obtain the solution.

2.2.1 Observation model
This model relates the observation data to the model parameters. At a specific time step k, let Yk denote the settlements observed at the n observation points x1 , x2 , …, xn :

Yk = [yk (x1 ), yk (x2 ), . . . , yk (xn )]T

The state vector, θ, is defined as the estimates of the model parameters (m∗1 , m∗0 ) at the n observation points. Consequently, the model in Eq. (1) can be rewritten in the matrix form

Yk = Hk θ + ε,   Hk = [ log(tk ) In,n   In,n ]

where In,n denotes an n × n unit matrix, and ε is a Gaussian observation-model error vector with E[ε] = 0 and E[εεT ] = Vε . Vε is a covariance matrix whose components are σε2 on the diagonal and zero elsewhere. It should be emphasized that σε2 is both spatially and temporally independent. Furthermore, this error is defined as the combination of the observation error and the model error; these two kinds of error cannot be separated in practice, and so are assumed to be integrated in the model as shown in Eq. (8). Given θ and σε2 , the predicted settlement distribution at any time t can be represented by a multivariate normal distribution (Eq. (9)).

2.2.2 Prior information model
By assuming two multivariate stochastic Gaussian fields for m1 and m0 , the prior information has the following structure:

θ0 = θ∗0 + δ    (11)

where the components m∗1,0 (xi ) and m∗0,0 (xi ) of θ∗0 denote the prior means at observation point xi of m1 and m0 , respectively, and δ is the uncertainty of the prior mean, with E[δ] = 0 and E[δδT ] = Vθ,0 , where Vθ,0 is the prior covariance matrix. By introducing the spatial correlation structure into the formulation of Vθ,0 , we have

Vθ,0 = blockdiag( σ2m1,0 R, σ2m0,0 R ),   Rij = ρ(|xi − xj |)    (13)

where the off-diagonal blocks are the n × n zero matrix 0n,n , and σ2m1,0 and σ2m0,0 represent the prior variances of m1 and m0 , respectively. ρ(|xi − xj |) denotes the auto-correlation function. The exponential type of auto-correlation function is chosen for the current study because it is commonly used in geotechnical applications (e.g. Vanmarcke 1977):

ρ(|xi − xj |) = exp( −|xi − xj |/η )    (14)

where xi , xj = spatial vector coordinates and η = auto-correlation distance. It should be noted that, for the sake of simplification, two important assumptions are made about the correlation structure in formulating the above covariance matrix: firstly, m1 and m0 are assumed to be independent of one another; secondly, the correlation structures of the two parameters are identical, meaning that they share the same auto-correlation distance. Given η and the prior means and prior variances of m1 and m0 , the prior distribution of the model parameters is also a multivariate normal distribution (Eq. (15)).

It is clear from this formulation that the spatial correlation of the soil properties is included in the form of the spatial correlation of m1 and m0 ; the settlements themselves are not correlated spatially. The authors believe that this is the most suitable way to introduce the spatial correlation structure into the settlement prediction model, since it is the soil properties that are spatially correlated, not the settlement.
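A sketch of the prior covariance construction under these two assumptions, with the exponential auto-correlation of Eq. (14) (function and variable names are ours):

```python
import numpy as np

def prior_covariance(xs, sigma_m1, sigma_m0, eta):
    """V_theta,0 for theta = [m1(x1)..m1(xn), m0(x1)..m0(xn)]: m1 and m0
    independent, sharing one exponential auto-correlation structure."""
    xs = np.asarray(xs, dtype=float)                      # (n, 2) coordinates
    d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    rho = np.exp(-d / eta)                                # Eq. (14)
    zero = np.zeros_like(rho)                             # 0_{n,n} blocks
    return np.block([[sigma_m1**2 * rho, zero],
                     [zero, sigma_m0**2 * rho]])
```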
2.2.3 Bayesian estimation
Suppose that the set of observations Yk at times tk for k = 0, 1, . . . , K has already been obtained. By employing Bayes' theorem, the posterior distribution of the state vector θ can be formulated as

p(θ | Y) ∝ p(Y | θ) p(θ)
where Y denotes the set of all observed data, i.e. Y = (Y1 , Y2 , . . . , YK ). By substituting Eqs. (9) and (15) into the above equation, a likelihood function can be defined, for given values of σε2 and η, as follows:
The Bayesian estimator of θ, i.e. θ∗ , is the one that maximizes the above function; this is equivalent to minimizing the following objective function:

By differentiating the above equation with respect to the state vector, we obtain the estimator θ∗ .

By trial and error, the values of σε2 , η and the corresponding θ∗ that give the maximum value of the likelihood function (L) can be obtained. These values are the Bayesian estimators for the current problem.

2.3 Process noise consideration by the Kalman filter method

In the previous section, a batch procedure was proposed in which all of the observations are treated equally in updating the parameters. In practice, it is natural to give higher weight to the more recent observations. This can be done by considering an uncertainty parameter, the so-called 'process noise', through a sequential procedure, the Kalman filter (Kalman 1960, Kalman & Bucy 1961, Jazwinski 1976). The Kalman filter has two distinct phases: time updating and observation updating. The time updating phase uses the state estimate from the previous time step to produce an estimate of the state at the current time step, taking process noise into account. In the observation updating phase, measurement information at the current time step is used to refine this prediction and arrive at a new state estimate. In fact, it can be proved that, without process noise, the Kalman filter gives the same results as the previously proposed approach, i.e. Bayesian estimation (Hoshiya & Yoshida 1996, 1998).

For the estimation of the soil parameters, the unknown parameters are considered to be stationary; the time updating process is then expressed by

where Qk denotes the covariance matrix of the process noise, the suffix k stands for the kth processing step, and k/k − 1 represents the kth step estimate conditioned on the observation data processed up to the (k − 1)th datum. To define the value of the process noise systematically, Qk is assumed to be given from a priori information, as follows (Hoshiya & Yoshida 1998):

where c is a constant parameter representing the level of the system error, i.e. the process noise. From (22) and (23), we have

For the observation updating process, the updating is given by

by defining the Kalman gain

where the suffix k/k, similarly, represents the kth step estimate conditioned on the observation data processed up to the kth datum. It should be emphasized that θ0/0 and Vθ,0/0 need to be defined in the same way as θ0 and Vθ,0 in Eqs. (11) and (13), in order to take into account the prior information of the model parameters together with the spatial correlation structure. With its ability to sequentially update the estimation and to systematically take process noise into account, this method is used for estimating the unknown parameters θ, while σε2 and η are estimated by the Bayesian estimation described in Section 2.2.
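A compact sketch of one filter cycle for the stationary parameter vector; the scaling of Qk by c² times the prior covariance is our reading of Eq. (23), not necessarily the authors' exact definition:

```python
import numpy as np

def kalman_step(theta, V, y, H, V_prior, sigma_eps, c):
    """Time updating with process noise, then observation updating."""
    Q = c**2 * V_prior                     # assumed form of the process noise Qk
    V_pred = V + Q                         # theta itself is static in time
    R = sigma_eps**2 * np.eye(len(y))      # observation-model error covariance
    K = V_pred @ H.T @ np.linalg.inv(H @ V_pred @ H.T + R)  # Kalman gain
    theta_new = theta + K @ (y - H @ theta)
    V_new = (np.eye(len(theta)) - K @ H) @ V_pred
    return theta_new, V_new
```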
2.4 Local estimation by the Kriging method

Based on the calculated statistical inferences of the model parameters at the observation points and the estimated auto-correlation distance, the statistics of the model parameters at any arbitrary location can be determined by the ordinary Kriging method (Krige 1966, Matheron 1973, Wackernagel 1998). This method provides an unbiased, minimum-error estimator built on the data from a random field, which is assumed to be second-order stationary. Based on the estimated model parameters (m∗1 , m∗0 ) at the n observation points x1 , …, xn , the values of m∗1 and m∗0 at an arbitrary point x0 can be estimated by the following equations:

m∗1 (x0 ) = Σi wi m∗1 (xi ),   m∗0 (x0 ) = Σi wi m∗0 (xi )
where wi (i = 1, …, n) are the weights attached to the data at each of the observation points, and µ is the Lagrange multiplier used for minimizing the Kriging error; x0 denotes the spatial vector coordinate of the estimation point, and ρ(|xi − xj |) represents the auto-correlation function as defined in Eq. (14).
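A sketch of the ordinary Kriging system under the exponential correlation of Eq. (14); the unbiasedness constraint Σwi = 1 is enforced through the Lagrange multiplier µ:

```python
import numpy as np

def kriging_weights(xs, x0, eta):
    """Solve for the weights w_i and multiplier mu of ordinary Kriging."""
    xs, x0 = np.asarray(xs, float), np.asarray(x0, float)
    n = len(xs)
    d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = np.exp(-d / eta)     # correlations among observation points
    A[n, n] = 0.0                    # bordered system for the constraint
    b = np.ones(n + 1)
    b[:n] = np.exp(-np.linalg.norm(xs - x0, axis=1) / eta)
    sol = np.linalg.solve(A, b)      # [w_1 .. w_n, mu]
    return sol[:n], sol[n]

# m1_at_x0 = kriging_weights(points, target, eta)[0] @ m1_estimates
```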
3 CASE STUDY

Figure 1. Soil condition.

Figure 2. Surcharge thickness and settlement vs. time.
3.1 Description of the case

The site is a residential land development project located in a suburban area of Tokyo, Japan. The area is covered by a thick alluvial deposit, which can be classified as a surface layer of peat followed by a very soft clay layer down to a depth of about 17 m (Fig. 1). Below these layers, layers of medium dense sand and silt are found. In order to avoid a large amount of settlement due to the thick soft soil layer at the surface, the ground was improved by preloading prior to construction. As shown in Figure 2, the preloading surcharge was filled up to a maximum thickness of about 6 m during the preloading period of approximately 900 days. The settlement observations were performed both during the preloading period, by settlement plates, and after removal of the surcharge, by measuring the settlement of the boundary stones around the housing lots. The settlement after removal of the surcharge, which is used in this study, was observed at about one point per 600 m2 , for a total of 42 observation points. The location plan of these observation points is shown in Figure 3, while all of the observation data are shown as semi-logarithmic plots of settlement versus time in Figure 4.
Figure 3. Location plan of the observation points and surcharge area.
Various techniques have been proposed by several authors for predicting future settlement from observed settlement, for example the hyperbola method (Sridharan et al. 1987, Tan 1994), the y ∼ log (t) method (Bjerrum 1967, Mesri et al. 1997), and Asaoka's method (Asaoka 1978).
Figure 4. Observed settlement vs. time (after surcharge removal) for all observation points.
Figure 5. Observed settlement vs. time (after surcharge removal) and trend line at point A (see Figure 3).
In this study, the y ∼ log (t) method is considered to be the most suitable approach, because the primary consolidation is expected to be completed before the surcharge removal; the settlement occurring afterwards should thus result from the secondary compression process. Figure 5 shows an example of the y ∼ log (t) plot at an observation point. It can be seen that, by excluding part of the data in the early period of observation, within which the secondary compression is considered to be influenced by the rebound effect due to surcharge removal, this semi-logarithmic relationship fits the observation data quite well.
3.2 Practical problems and solutions
In dealing with field observation results, incompleteness of the data is, of course, unavoidable. The observed data shown in Figure 5 illustrate a relatively complete set, but this is not always the case: several observation points lack settlement data at some observation steps. The problem is that, for both the Bayesian estimation and the Kalman filter, data from all of the observation points are required at every time step; it is clear from Eqs. (17), (20) and (25) that every component of Yk is needed for every time step k = 1 to K. To cope with this problem, the components of Vε corresponding to the observation points with missing data at a given time step are replaced by extremely large numbers. By this treatment, the missing data can initially be assumed within a reasonable range and are then effectively ignored in the calculation through the influence of the matrix Vε .

As can be seen from Figure 5, the early part of the observed data clearly disagrees with the previously described y ∼ log (t) model. This is due to the fact that the settlement observed in this period, which is expected to result from the secondary compression, was still strongly influenced by the rebound effect from the removal of the preloading surcharge. In order to apply the model to these sets of observation data, part of the data needs to be ignored; by inspecting the settlement data of all the observation points, the data before day 103 were discarded from the calculation by judgment.

Choosing appropriate prior statistics of the unknown parameters (m1 and m0 ) is also an important issue. In the current research, the prior means of m1 and m0 were assumed to be equal to the slope and the intercept, respectively, of the trend line resulting from a linear regression analysis of the settlement-versus-time plot considering the data from all of the observation points. The prior variances, on the other hand, were selected by trying several values of the prior coefficient of variation (COV) and choosing the one for which the results are relatively insensitive to changes of the prior means. Based on this approach, the prior means of m1 and m0 , which are assumed to be identical at every observation point, are assigned as 109.7 cm and −204.1 cm, respectively, while the prior COV is set to 0.4 for calculating the prior variance of both parameters.
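The missing-data treatment described above amounts to inflating the corresponding diagonal entries of Vε ; a sketch (the placeholder value 1e12 stands for the "extremely large number"):

```python
import numpy as np

def mask_missing(V_eps, missing_idx, big=1e12):
    """Return a copy of V_eps in which the variances of the missing
    observation points are set so large that the update ignores them."""
    V = V_eps.copy()
    for i in missing_idx:
        V[i, i] = big
    return V
```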
3.3 Estimation of the auto-correlation distance and the observation-model error

It was proposed in Section 2.2.3 that the auto-correlation distance (η) and the standard deviation of the observation-model error (σε ) can be estimated by an optimization procedure based on Bayesian estimation. Considering the observation data together with the prior information on the model parameters, the likelihood value (L) for each pair of η and σε can be determined by Eq. (17); the values of η and σε that give the maximum value of L serve as the Bayesian estimators of these parameters. Figure 6 shows the contour of L in the η–σε space for the case in which all of the settlement data up to the last observation step (day 1017) are considered. In this case, the Bayesian estimators of η and σε are 32 m and 6.75 cm, respectively. The estimated values of η are evidently more variable than those of σε .

In practice, the observation data are collected stepwise over a period of time, so it is natural to sequentially update the estimation whenever a new set of observations becomes available. Figure 7 plots the estimated values of η and σε versus the observation time up to which the data are used in the estimation. It can be observed that the estimated values of the auto-correlation distance tend to decrease with the observation time, while those of the observation-model error tend to increase, depending on the characteristics of the observed data. Both estimates seem unstable at the early stage of observation, indicating that the observation data are insufficient for the calculations; however, they become more stable as the observation data accumulate.
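The trial-and-error maximization of L over η and σε can be organized as a simple grid search; log_likelihood below is assumed to wrap the evaluation of Eq. (17):

```python
import numpy as np
from itertools import product

def grid_search(log_likelihood, etas, sigmas):
    """Return the (eta, sigma_eps, L) triple maximizing the likelihood."""
    best = (None, None, -np.inf)
    for eta, sig in product(etas, sigmas):
        ll = log_likelihood(eta, sig)
        if ll > best[2]:
            best = (eta, sig, ll)
    return best

# e.g. grid_search(ll, np.linspace(5, 100, 20), np.linspace(1, 15, 15))
```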
Figure 6. Contour of likelihood values (L), using observation data up to the last observation step (day 1017).
Figure 8. Mean absolute error of settlement prediction at the last observation time step (day 1017) vs. observation time.
Figure 7. Estimated values of auto-correlation distance (η) and the standard deviation of the observation-model error (σε ) vs. observation time.
3.4 Settlement prediction and estimation

Based on the procedure proposed in Section 2.2, the estimates of the model parameters at each observation point can be calculated, considering the prior information on the parameters and the observation data. Using these parameters, the settlement at any specific time can be estimated by the y ∼ log (t) model and compared with the observed data at that point to assess the estimation error. To describe the estimation error quantitatively, the 'mean absolute error' is defined as follows:

mean absolute error = (1/Nx ) Σi |Xest,i − Xtrue,i |/Xtrue,i × 100%    (30)
where Xest,i and Xtrue,i denote the estimated and true values, respectively, of the quantity to be estimated at each observation point, and Nx represents the total number of estimated values. In this case, the estimated value is the estimated settlement, the true value is the observed settlement, and Nx is the total number of observation points, i.e. Nx = n.

Figure 8 shows the 'mean absolute error' for prediction of the settlement at the last observation time step (day 1017) versus the observation time. For comparison, the case in which the spatial correlation is ignored is also presented; for this case, the observation data at each point are used to update the model parameters of that point only, i.e. η = 0. It should be noted that, for the case considering spatial correlation, the estimated values of the auto-correlation distance shown in Figure 7 are used in the calculations. As might be expected, the prediction error decreases as more observation data become available. However, for the current set of observation data, considering the spatial correlation does not significantly improve the estimation in terms of the mean error. This may be because the auto-correlation distance is relatively short in comparison with the spacing between the observation points.

To further investigate the efficiency of the proposed method in dealing with the space-time problem, the observation data at each selected observation point are removed and the settlement estimations, or predictions, at this point are performed using the rest of the observations. Firstly, the estimates of the model parameters at all observation points are calculated.
Figure 9. Comparison between the estimated and the observed settlement at day 430, using data from day 103 to day 430.
Then, the parameters at the removed observation point are determined by the Kriging method, using the estimated auto-correlation distance (see Section 2.4). Comparison between the settlement estimated from these parameters and the actually observed settlement reveals the estimation error. Figure 9 shows the comparison between the estimated and the observed settlement of all 42 observation points at day 430; the observation data from day 103 to day 430 are used in the calculation. For the case in which a relatively weak spatial correlation is assumed, i.e. η = 10 m (Fig. 9a), the estimated settlement tends to be uniform and cannot represent the variation of the ground settlement. On the other hand, when the estimated value of the auto-correlation distance, η = 52 m, is used (Fig. 9b), the estimation gives a more realistic pattern of settlement over the area. It should be noted that the observation data at some points are missing due to the common imperfections of field observation; only the estimated values are shown at these points. In fact, this illustrates one of the practical advantages of the proposed method, i.e. the ability to perform estimation at a specific point without any observed information.

Figure 10 shows the mean absolute error (Eq. (30)) of the settlement estimation at the removed observation points versus observation time. Two cases are presented: Case A and Case B. Case A avoids the temporal error resulting from prediction of future settlement; the estimated and observed settlements are therefore compared at the same observation time, in the same way as in the calculations shown in Figure 9. Case B compares the settlement predicted for the last observation time step (day 1017) with the settlement observed at that time. It should be emphasized that the estimated values of the auto-correlation distance and the observation-model error, which vary with the observation time as shown in Figure 7, are used in all calculations.
Figure 10. Mean absolute error of settlement estimation at the removed observation points vs. observation time. Case A: estimation of the settlement observed on the same day. Case B: prediction of the settlement at day 1017.
It can be seen that, for both cases, the estimation errors decrease with the observation time; in other words, the estimation improves as more observation data are given. However, Case B gives the lower estimation error, even though a future prediction is performed in this case. This may be the result of a cancellation of errors during the settlement prediction.

3.5 Effect of process noise consideration

As mentioned in Section 2.3, higher weight should be given to the more recent data than to the older data. This can be done by assigning appropriate values of process noise, i.e. system error, during the time updating process of the Kalman filter procedure. According to Eq. (23), the level of this system error can be controlled by assigning an appropriate value of the constant parameter c. By performing the same calculation as shown in Figure 8, but with different values of c, the effect of process noise on the settlement prediction can be investigated. Figure 11 presents the prediction error of the settlement at the last observation time step (day 1017) at different observation times; for the case c = 0, i.e. no process noise, the curve is the same as the spatial-correlation case plotted in Figure 8. It can be concluded from Figure 11 that, to some extent, the settlement prediction can be improved by considering the system error. Especially for the estimation at the last time step (day 1017), for which all observation data up to the target day of the prediction are included in the calculations, the mean absolute error reduces dramatically from about 9.8% to 0.3%. However, this is not always the case: at some stages of the prediction, assuming too high a value of process noise may mislead the prediction, and the error becomes higher instead, as can be seen in Figure 11 when c = 2.0 is assigned. Therefore, optimization of this process noise coefficient is required; this is, however, outside the scope of the current research.
Figure 11. Mean absolute error for prediction of settlement at the last observation time step (day 1017) vs. observation time under different levels of process noise.
4 CONCLUSION
A methodology was presented for observation-based settlement prediction with consideration of the spatial correlation structure. The spatial correlation is introduced among the model parameters, and the settlements at various points are spatially correlated through these parameters, which naturally describes the phenomenon. A case study on the secondary compression of alluvial deposits due to ground improvement by preloading was carried out using the proposed approach. It was found that the estimation of the auto-correlation distance is relatively unstable, and insufficient amounts of observation data may mislead the estimation. Furthermore, even though the auto-correlation distance of the soil parameters appears to be relatively short in comparison with the observation point spacing, the proposed method provides a rational estimation of the settlement at any time and any location with quantified error. Including the system error, i.e. process noise, in the calculation can improve the settlement estimation to some extent; however, care should be taken in assigning an appropriate value of this parameter, to avoid additional error due to assuming too high a level of process noise.

REFERENCES
Asaoka, A. 1978. Observational procedure of settlement prediction. Soils and Foundations 18(4): 87–101.
Bjerrum, L. 1967. Engineering geology of Norwegian normally consolidated marine clays as related to settlement of buildings. Géotechnique 17(2): 81–118.
DeGroot, D.J. & Baecher, G.B. 1993. Estimating autocovariance of in-situ soil properties. Journal of Geotechnical Engineering 119(1): 147–166.
Garlanger, J.E. 1972. The consolidation of soils exhibiting creep under constant effective stress. Géotechnique 22(1): 71–78.
Hoshiya, M. & Yoshida, I. 1996. Identification of conditional stochastic Gaussian field. Journal of Engineering Mechanics, ASCE 122(2): 101–108.
Hoshiya, M. & Yoshida, I. 1998. Process noise and optimum observation in conditional stochastic fields. Journal of Engineering Mechanics, ASCE 124(12): 1325–1330.
Krige, D.G. 1966. Two dimensional weighted moving averaging trend surfaces for ore evaluation. Proc. of Symp. on Math., Statistics and Comp. Appl. for Ore Evaluation.
Matheron, G. 1973. The intrinsic random functions and their applications. Adv. in Appl. Probab. 5.
Mesri, G. et al. 1997. Secondary compression of peat with or without surcharging. Journal of the Geotechnical Engineering Division, ASCE 123(5): 411–421.
Sridharan, A., Murthy, N.S. & Prakash, K. 1987. Rectangular hyperbola method of consolidation analysis. Géotechnique 37(3): 355–368.
Tan, S.A. 1994. Hyperbolic method for settlements in clays with vertical drains. Canadian Geotechnical Journal 31: 125–131.
Vanmarcke, E.H. 1977. Probabilistic modeling of soil profiles. Journal of the Geotechnical Engineering Division, ASCE 103(GT11): 1227–1246.
Wackernagel, H. 1998. Multivariate Geostatistics: An Introduction with Applications. 2nd ed. Berlin: Springer-Verlag.
Construction risk management
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Reliability analysis of a hydraulic fill slope with respect to liquefaction and breaching T. Schweckendiek Deltares, unit Geo-engineering & TU Delft, Delft, The Netherlands
G.A. van den Ham & M.B. de Groot Deltares, unit Geo-engineering, Delft, The Netherlands
J.G. de Gijt & H. Brassinga Public Works Rotterdam, Rotterdam, The Netherlands
P. Hudig Gate Terminal B.V.
ABSTRACT: A recently reclaimed site in the Port of Rotterdam will serve as the location and foundation of an LNG terminal. LNG (Liquefied Natural Gas) is a recognized hazardous material and is subject to strict safety requirements. As part of the safety assessment of the entire installation, a specific analysis had to be carried out concerning the geotechnical aspects. The paper describes the probabilistic approach that was chosen to verify the required level of safety of the hydraulic sand fill with regard to (static) liquefaction, slope failure and breaching processes. Several reliability analyses using the respective physical process models were carried out, and the results were combined using a fault tree or scenario approach, leading to upper bounds of the failure probability.
1 INTRODUCTION

1.1 Project outline
The paper describes an approach to a geotechnical reliability analysis problem in a real-life project in 2007. For an LNG terminal to be built in the Port of Rotterdam, hydraulic sand filling was used to extend an existing artificial terrain in order to create space for 4 large LNG tanks (Fig. 1).
The original design contained slopes with angles of 1:2.5, protected against erosion and wave action mainly by steel slag dams. Initially, it was thought that compaction would not be required, until a rough analysis of the liquefaction potential cast this assumption into serious doubt. Subsequent, more thorough analyses led to several design modifications, the most important of which were to use a shallower slope angle of 1:3 and to compact the entire slope itself, up to the height of the tanks to be built, by means of vibro-flotation. The final representative cross-section is shown in Figure 2. Note that, due to the construction process and the last-minute design amendments, some portions of the hydraulic fill could not be compacted and remained in a relatively loose state. These areas, in combination with the still relatively steep slope, caused some uncertainty about the chance of occurrence of a liquefaction flow slide with subsequent damage to the foundation of the LNG-tanks. This uncertainty was the focus of the analysis described in this paper.
Figure 1. Overview LNG terminal.
1.2 Design requirements
For the LNG installation, as for other activities involving hazardous materials, the safety requirements were formulated in terms of risk, i.e. an acceptable probability of failure, failure being defined as the occurrence of an unwanted event or accident.
Figure 2. Representative cross-section.
The safety criterion for the geotechnical aspects treated in this paper was derived from the overall safety requirement: "The probability of a slope failure, including liquefaction and breaching, affecting the foundation safety of the LNG-tanks must not exceed Pf,adm = 10−6 in the planned life time of the structure (50 years)". Note that this criterion involves several potential failure mechanisms.

1.3 Probabilistic approach

For the evaluation of the probability of failure stated in the previous section, the complex failure mechanism was split into three sub-mechanisms that are tractable for structural reliability analysis. A choice of a single dominant failure scenario did not seem appropriate, mainly because multiple failure mechanisms were involved; therefore, several failure scenarios were defined in order to ensure that no significant contributions were missed. The results for the sub-mechanisms and the scenarios were combined by means of fault tree analysis to obtain the (upper bound of the) overall probability of failure, which was then compared with the acceptance criterion. Section 2 treats the physical process models applied in the analysis, whilst Section 3 focuses on the reliability analysis aspects.

2 APPLIED PHYSICAL PROCESS MODELS

2.1 Liquefaction flow slide and subsequent breaching

Under certain circumstances, loose, saturated sand elements in a slope may be sensitive to liquefaction or, more precisely, may be in a 'meta-stable' state, which means that they will liquefy and lose their strength under any quick loading if they are free to undergo shear deformation. If most adjacent sand elements in a slope are in a much more stable state, no liquefaction will occur, because these more stable elements will prevent the shear deformation of their meta-stable neighbours. However, in a slope with sufficiently large pockets of meta-stable elements a
liquefaction flow slide may occur. The conditions for meta-stability mainly concern the soil state in terms of density and stresses, which will be discussed in Section 2.2. Whether the pockets of meta-stable elements are sufficiently large to enable a liquefaction flow slide is studied by a traditional slope stability analysis in which the originally meta-stable elements are assumed to have liquefied (Section 2.3). The final question is whether a liquefaction flow slide will result in failure of the foundation of the tanks. In the case of a relatively shallow flow slide, this will only happen if a breach in the unprotected sand created by the flow slide progresses over a sufficiently large distance. The breaching process is discussed in Section 2.4.

2.2 Meta-stability or sensitivity to liquefaction

The model used in this study for the undrained behaviour of saturated (loose) sand is based on the theory presented in Stoutjesdijk et al. (1998), which is also the basic theory behind the software SLIQ2D, used mainly by GeoDelft in the Netherlands during the last two decades. Whilst SLIQ2D only uses an instability or meta-stability criterion based on material parameters and the soil state (porosity and stresses) according to Molenkamp (1989), the approach in this study uses more information from the modelled undrained behaviour, i.e. the stress path. For a given in-situ stress point, the undrained stress path is derived as a function of the relative density from extensive laboratory tests. This path allows us to extract two types of information that help us judge the liquefaction potential and the residual strength after liquefaction:
1 whether the in-situ density is higher or lower than the wet critical density (WCD, see Figure 5). If ID < WCD, the undrained stress path exhibits a decreasing deviatoric or shear stress. This is the most important necessary, though not sufficient, condition for meta-stability and thus for the occurrence of instability and static liquefaction.
2 the maximum generated excess pore pressure, or equivalently the minimum isotropic effective stress
pmin , which can be used to estimate the ("worst case") strength reduction due to liquefaction. Both definitions are conservative and will lead to upper limits of the failure probabilities; we will come back to this question in Section 3.
2.3 Slope stability
The slope stability was treated by conventional Bishop slip circle analyses using the MStab software by GeoDelft (since 2008 Deltares). Two non-standard features had to be included:
Figure 3. Equilibrium profile after flow slide.
1 The slope stability analysis had to reflect the situation given that liquefaction has occurred in the liquefaction-sensitive parts of the slope. In the deterministic setup, the reduction in isotropic effective stress was used as a measure of the reduction in shear capacity, expressed in the form of a reduced friction angle:
2 The Rotterdam area is not typically earthquake-prone; however, due to the low required failure probability, seismic loads with very low occurrence frequencies also had to be considered. An option in MStab to account for vertical and horizontal peak accelerations in the slope stability analysis was applied (Delft GeoSystems 2006).
Figure 4. Undrained stress path of loose sand.
2.4 Breaching
If slope instability occurs, a liquefaction flow slide will start, which means that the unstable soil mass starts to slide over a shear surface and continues to do so until it finds a new equilibrium. The flow process will in this case probably take no more than several seconds to a minute, as follows from calculations in which inertia is incorporated. That time is not long enough to cause a significant reduction of the excess pore pressure in the liquefied sand pockets. Consequently, the shape of the new profile can be estimated using Bishop calculations; the new slope profile is characterized by a relatively steep slope just above the soil mass that flowed down, whose location can be characterized by L1 as defined in Figure 3. This steep slope consists of sand and is not likely to be covered by slags or other parts of any slope protection. Part of the steep slope is situated under water, as indicated in Figure 3, and this part of the slope may start breaching. Breaching is a process in which a steep underwater slope, the "breach", remains temporarily stable under the influence of dilation-induced negative pore pressures and gradually moves backwards while sand grains fall from its surface and mix with water to create an eroding, turbulent sand-water mixture. The process stops when the height of the underwater part of the breach is reduced to zero. The resulting profile is sketched in Figure 6.
Figure 5. Definition Wet Critical Density (WCD).
Figure 6. Equilibrium profile after breaching process.
The breaching process is described by Mastbergen & van den Berg (2003) and can be modelled by the computer code HMBREACH. Given the grain size distribution, the relative density and the initial height of the underwater part of the steep slope, sbh,
the model calculates the change in this height as a function of the horizontal distance, from which the total distance of breach progress, L2 (Fig. 6), can be derived. The slope of the part above the water is determined by the common shearing process and can be assumed to equal 1:1.5. The length (L2 − L1 ) of the damaged area then follows. It is assumed, supported by indicative calculations, that no significant damage to the foundation of the tanks will occur as long as (L2 − L1 ) < 22.5 m, which is the distance between the foundation and the slope crest.
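The resulting damage criterion reduces to a one-line check; a sketch with our own function name:

```python
def foundation_endangered(l1: float, l2: float, clearance: float = 22.5) -> bool:
    """True if the damaged length L2 - L1 reaches the distance between
    the slope crest and the tank foundation (22.5 m in this project)."""
    return (l2 - l1) >= clearance
```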
Figure 7. Upper and lower bounds of Pf vs. the design criterion.
3 RELIABILITY ANALYSIS
The previous section gave a concise overview of the concepts and methods used for the deterministic evaluation of the sub-mechanisms playing a role in the present safety assessment problem. In this section we discuss how the criterion stated in Section 1.2 was assessed in a probabilistic manner. First of all, we are dealing with the verification of a design criterion. This implies that it is sufficient to show that the upper bound of the estimate of the failure probability, Pf,sup , fulfills the requirement:

Pf,sup ≤ Pf,adm
Figure 8. Sequence of mechanisms in Failure mode.
Figure 9. Sequence of mechanisms leading to top event.
Thus, we can start with rough, conservative (upper bound) approaches and apply refinements if necessary, as illustrated in Figure 7. Such refinements can concern either the probabilistic analysis itself (e.g. the treatment of correlations) or more realistic physical process models. Such an approach was applied in the project, though for the sake of readability only the analysis that led to the successful outcome is described in the following.
Figure 10. Sequence of mechanisms in Failure mode.
3.1 System definition

As described in 2.1, the principal contemplated failure mode is a sequence of three mechanisms. To reiterate the sequence briefly: liquefaction of substantial uncompacted volumes in the slope part of the fill may cause a flow slide, i.e. slope failure; the residual profile is commonly steep in the upper part, and a breaching process may be initiated that could endanger the foundations of the installation in question. For the reliability analysis, this sequence is modelled as a parallel "sub-system" in a fault tree, i.e. combined by an AND-gate (Fig. 10). Given the large uncertainties, it is not trivial to determine a dominant or representative scenario, as one is used to doing in deterministic approaches: for different combinations of parameters or properties, in some cases liquefaction and slope failure in the upper part may lead to the worst consequences, in other cases failures in the lower part or along deeper sliding surfaces.
Figure 11. Schematic representation of two scenarios.
One way to circumvent the problem of choosing one scenario is to define several scenarios. Two examples of such scenarios are presented schematically in Figure 11. The main difference in this
Figure 12. Fault tree.
discrete distinction of possibilities is the assumption of which of the uncompacted volumes liquefy, and how many at a time, with all the due consequences. All the defined scenarios are integrated in a fault tree (Fig. 12). For the sake of simplicity, the "conservative", i.e. upper-bound, assumption of independence (actually even mutual exclusivity) is made (see 3.6).
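Under these assumptions the fault tree evaluation collapses to sums and products; a sketch with purely illustrative probabilities (each tuple holds the liquefaction probability and the conditional probabilities of slope failure and of damaging breaching for one scenario):

```python
def upper_bound_pf(scenarios):
    """OR-gate over mutually exclusive scenarios, each an AND-sequence of
    liquefaction, slope failure given liquefaction, and breaching damage
    given slope failure: summing yields the conservative upper bound."""
    return sum(p_liq * p_slope_given_liq * p_breach_given_slide
               for (p_liq, p_slope_given_liq, p_breach_given_slide) in scenarios)

# Illustrative numbers only:
pf_sup = upper_bound_pf([(0.3, 1e-4, 0.05), (0.1, 2e-4, 0.02)])
```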
3.2 Parameters and uncertainties
The in-situ relative densities of the hydraulic fill were determined by means of the empirical CPT correlation of Baldi et al. (1982), which correlates the density index ID to the cone penetration value qc as a function of the vertical effective stress. A total of over 50 CPTs were available. Accounting for both spatial variability and the uncertainty of the correlation function, the expected value of ID was found to be 39%, with a standard deviation of 10%. These values concern the average of ID over a potentially liquefiable area or failure surface.

By means of several drained (CD) and dry triaxial tests on a number of representative (disturbed) samples taken from the hydraulic fill, the parameters for the constitutive model (see 2.2) were determined. The influence of the soil state was assessed by performing the tests at different stress conditions and porosities. Statistical analysis of the test results, together with considerations of spatial variability, led to probability distribution functions of the important material model parameters for further use in the probabilistic analysis. In order to check the calibrated parameter set, a number of undrained (CU) triaxial tests were executed on the same samples and simulated with the model; measurements and predictions fitted reasonably well (Fig. 13).
3.3 Meta-stability or sensitivity to liquefaction
The probability of meta-stability or the sensitivity to liquefaction Pliq of each area with non-compacted sand was evaluated by determining the probability of the in-situ sand being in a state below the WCD (see 2.2),
Figure 13. Comparison of the stress path (CU) between test and calibrated model.
given a representative stress point in the area and the uncertainties in the material properties:

Pliq = P{C(x)}
with x being a vector containing all random variables and C the condition that the in-situ state lies below the WCD. Pliq was determined by means of Monte Carlo analysis. Per scenario, n = 10^5 realizations of the state, material and model parameters were produced and propagated through the model (undrained stress path, Fig. 4). Consequently the estimator for Pliq is:

P̂liq = (1/n) Σ IC(xi)   (sum over i = 1, …, n)
where xi is the ith realization of x and IC(x) is the indicator function for condition C. Considering that the WCD is a necessary but not sufficient condition for static liquefaction, this is clearly a conservative approach leading to an upper-bound estimate of the probability of liquefaction. In fact, the results in section 4 show that estimates based on this method usually lead to very high probabilities that intuitively do not reflect the judgment of most experts. For the assessment of the probability of sensitivity to liquefaction, it is definitely desirable to
Figure 14. MStab reliability module.

Table 1. Peak acceleration values.

amax [m/s2]    P{amax > ámax} [1/year]    P{amax > ámax} [1/50 year]
0.20           1/475                      1 − (1 − 1/475)^50 = 0.1
0.40           1/10000                    1 − (1 − 1/10000)^50 = 0.005
Figure 15. GEV distribution of amax.
use an approach that also includes the "distance" from instability or a critical-state model. This was not realized in the course of this project, but it is one of our goals for the future. It is also noted that seismic action was neglected in this step: due to the very low intensity, its contribution was found to be insignificant.
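A minimal sketch of the Monte Carlo estimator described above is given below. The limit-state check below_wcd() is a hypothetical placeholder for the undrained stress-path model of section 2.2, and the distributions are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000  # realizations per scenario, as in the analysis (n = 10^5)

    def below_wcd(i_d, m):
        """Placeholder indicator for condition C: the simulated in-situ state
        lies below the WCD (a necessary condition for static liquefaction).
        The real check propagates state, material and model parameters
        through the undrained stress-path model."""
        return i_d + m < 0.45  # illustrative condition only

    i_d = rng.normal(0.39, 0.10, n)  # density index: mean 39%, sd 10%
    m = rng.normal(0.0, 0.05, n)     # lumped model uncertainty (assumed)

    p_liq = np.mean(below_wcd(i_d, m))        # (1/n) * sum of I_C(x_i)
    se = np.sqrt(p_liq * (1.0 - p_liq) / n)   # sampling standard error
    print(f"P_liq ~ {p_liq:.3f} +/- {se:.3f}")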
3.4 Slope stability, given liquefaction
The second sub-mechanism in the contemplated chain of events is slope failure, given that liquefaction has occurred in one or more of the problematic uncompacted zones. A total of 6 critical failure modes could be identified. The slope reliability analysis is carried out using the reliability module of MStab, which is essentially FORM applied to a Bishop slip-circle analysis, with the average soil shear resistance properties as the main basic random variables and thus with implicit treatment of averaging effects in the probability distributions for the shear resistance (see JCSS 2001). As mentioned earlier, seismic loading was not considered in the initiation of liquefaction, i.e. the implicit assumption is that a trigger is always present with high probability. However, seismic action was taken into account in the slope stability analysis. For the considered area, two values of the peak acceleration amax are given, for return periods of 475 years and 10,000 years (see Table 1). In order not to use the heaviest condition as a deterministic value, a Generalized Extreme Value (GEV) distribution corresponding to the given quantiles was used to integrate the seismic loads in a probabilistic manner. The resulting GEV distribution is shown in Figure 15.
Since the software used did not allow us to include the uncertainty in amax in the Bishop-FORM analysis, several of these FORM analyses were carried out for a set of deterministic values of the peak acceleration. Subsequently, the results in terms of the reliability index β, conditional on amax, were integrated numerically to solve the following integral:

Pf = ∫ Φ(−β(amax)) f(amax) damax

where Φ is the standard normal distribution function and f(amax) the GEV probability density of the peak acceleration.
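This integration can be sketched numerically as follows: β(amax) is interpolated from the conditional FORM runs, and the GEV density is matched to the two 50-year quantiles of Table 1 under an assumed shape parameter. The β values and the shape parameter are illustrative assumptions.

    import numpy as np
    from scipy.stats import genextreme, norm
    from scipy.optimize import brentq
    from scipy.integrate import trapezoid
    from scipy.interpolate import interp1d

    # Conditional FORM results beta(amax); grid and values are illustrative.
    a_grid = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
    b_grid = np.array([4.8, 4.4, 3.9, 3.4, 2.9])
    beta = interp1d(a_grid, b_grid, fill_value="extrapolate")

    # 50-year exceedance probabilities from the return periods of Table 1
    p1 = 1 - (1 - 1/475) ** 50      # P{amax > 0.20} ~ 0.10
    p2 = 1 - (1 - 1/10000) ** 50    # P{amax > 0.40} ~ 0.005
    c = -0.1                        # assumed GEV shape parameter

    def err(scale):  # fix loc so that P{amax > 0.20} = p1, then check 0.40
        loc = 0.20 - genextreme.ppf(1 - p1, c, 0, scale)
        return genextreme.sf(0.40, c, loc, scale) - p2

    scale = brentq(err, 1e-3, 1.0)
    loc = 0.20 - genextreme.ppf(1 - p1, c, 0, scale)

    # Pf = integral of Phi(-beta(a)) * f(a) da, by simple quadrature
    a = np.linspace(0.0, 1.0, 2001)
    pf = trapezoid(norm.cdf(-beta(a)) * genextreme.pdf(a, c, loc, scale), a)
    print(f"unconditional Pf ~ {pf:.2e}")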
In practice this is done by an external FORM loop (design-point search); for details refer to Delft GeoSystems (2006).

3.5 Breaching, given slope failure

By carrying out an uncertainty analysis on the initial breach height sbh and the value of L1, based on the uncertainties in the strength of the liquefied sand (φred) and the strength of the non-liquefied sand (critical state), probability distribution functions for these variables were established. The breach length L2 proved to be very insensitive to L1, which was reason to give it a conservative deterministic value, L1 = 5 m (again a simplified upper-bound approach). The uncertainty in sbh is expressed as a lognormal distribution with an expected value of 1 m and a standard deviation of 1 m. The results of a large series of HMBREACH calculations could be approximated by the following equation (response surface):
where C1 and C2 are model parameters with lognormal distributions, expected values of 1 and standard deviations of 0.1 and 0.3, respectively. A reliability analysis on this response surface of the breach model resulted in:

P{(L2 − L1) > 22.5 m | slope instability} = 1.3 × 10^−7
and an expected value E(L2) = 7.8 m with a standard deviation σ(L2) = 3.7 m. It should be noted that the applied models for the breaching process, given slope instability, are very rough. Even conservative assumptions, however, make clear that no large damage is to be expected here in the unlikely case that slope instability occurs. This is due to the shallow location of the uncompacted areas. In other cases of liquefaction slope failures, experience shows that the length L2 − L1 of the damaged area may reach values of up to 100 m or even more. Research in the field of the breaching process and the interaction between liquefaction and breaching is needed to improve the models and to develop a practical tool to predict the length L2 − L1 of the damaged area.
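Since the fitted response surface itself is not reproduced above, the following sketch only illustrates the type of analysis: crude Monte Carlo on a hypothetical response surface with the stated lognormal inputs. The functional form of breach_length() is a placeholder, so its outputs will not reproduce the reported values.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 1_000_000

    def lognormal(mean, sd, size):
        """Sample a lognormal with a given arithmetic mean and std dev."""
        var = np.log(1.0 + (sd / mean) ** 2)
        return rng.lognormal(np.log(mean) - 0.5 * var, np.sqrt(var), size)

    sbh = lognormal(1.0, 1.0, n)  # initial breach height [m]
    c1 = lognormal(1.0, 0.1, n)   # model parameter C1
    c2 = lognormal(1.0, 0.3, n)   # model parameter C2

    def breach_length(sbh, c1, c2, l1=5.0):
        """HYPOTHETICAL stand-in for the HMBREACH response surface."""
        return l1 + 2.0 * c1 * sbh ** c2  # placeholder form only

    l2 = breach_length(sbh, c1, c2)
    print("E[L2] ~", l2.mean(), "sd ~", l2.std())
    print("P{L2 - L1 > 22.5} ~", np.mean(l2 - 5.0 > 22.5))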
3.6 Total failure probability

As mentioned earlier, but emphasized again at this point, the results presented here in terms of the failure probability concern an upper bound. By definition, the true value of this probability is expected to be lower. Various assumptions have led to a value "on the safe side". These assumptions can be roughly classified in two categories:

1. Assumptions in the probabilistic approach:
a. The soil properties in the constitutive models are essentially independent and are therefore treated as such.
b. For combining the scenarios, it is assumed that they are mutually exclusive, so that the total probability is the sum of the probabilities of the scenarios i (serial system):

Pf,sup = Σi Pf,i

c. The combination of the sub-mechanism probabilities concerns a parallel system (AND-gate). Here the worst case is total dependence between the sub-mechanisms. This assumption is probably not even unreasonable, since the same soil properties play a role in all mechanisms. Under total dependence the scenario probability is bounded by the probability of the governing (smallest) sub-mechanism probability:

Pf,i ≤ minj Pij

Consequently, the top event probability is determined by:

Pf,sup = Σi=1…n minj=1…m Pij

for n scenarios and m sub-mechanisms.

2. Assumptions in the physical-process modeling:
a. As mentioned in 3.3, the probability of liquefaction is actually the probability of the material being liquefiable. More conditions, in terms of stress state etc., have to be fulfilled for liquefaction to occur.
b. In the slope stability analysis, the theoretical minimum of the shear strength according to the material model is assigned to the zones that are assumed to be liquefied. It is likely that not the entire affected volume undergoes the full strength reduction and that excess pore pressures dissipate, i.e. that the shear strength is recovered at least partially.

At the same time, the assumptions made indicate where there is certainly significant potential for refinement of the applied method. More sophisticated mechanical and constitutive models for coupled analysis are in principle available in academia, but they are not yet easily applicable in consultancy work. There is a challenge for the applied-sciences community to develop these methods and tools further, closer to application in practical problems.
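A minimal sketch of these bounding rules: scenarios are summed as mutually exclusive events, and within each scenario the smallest sub-mechanism probability governs the parallel (AND) system. The probability table is illustrative, not the project's values.

    def top_event_upper_bound(p):
        """p[i][j]: probability of sub-mechanism j within scenario i."""
        return sum(min(row) for row in p)

    # (liquefaction, slope failure given liq., breaching given slope failure)
    p = [
        [0.7, 0.3, 1.3e-7],  # scenario 1
        [0.5, 0.2, 5.0e-8],  # scenario 2
    ]
    print(f"Pf,sup <= {top_event_upper_bound(p):.1e}")  # 1.8e-07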
4 RESULTS

For the project itself, it was shown that some design amendments were necessary, such as the compaction of mainly the slope part of the hydraulic fill and a slightly shallower slope than initially planned, in order to fulfill the strict safety requirement. With this amended design it was shown that the total probability of failure (upper bound, see previous section) was in the order of Pf,sup = 10^−7. Rather than presenting more figures, this section illustrates the type of results that can be produced with such an analysis:
• The probability of the top event in the fault tree, in this case the foundations of the installation being affected by slope failure, possibly induced by liquefaction and breaching, can be used in higher-level risk and reliability analyses of the entire installation. The probabilistic approach therefore provides comparability with other elements of the system that cannot be achieved with the classical deterministic methods.
• The fault tree contains probabilities at (sub-)mechanism level. This enables the identification of the most relevant mechanisms and scenarios. This information is extremely useful for optimization of the design.
• The reliability analyses at (sub-)mechanism level also produce information on the relative importance of the variables involved (e.g. FORM gives influence coefficients αi). Some of these variables can be influenced either by changes of the design or by acquiring more information and thereby reducing (epistemic) uncertainty, e.g. through additional soil investigation.
5 CONCLUSIONS
The work on this paper has led us to formulate the following three main conclusions. Firstly, the paper demonstrates the applicability of reliability analysis to a rather complex geotechnical problem in a real-world design setting. In the course
of the design verification, the upper bound of the failure probability is lowered step-wise by refinements of either the physical-process models or the probabilistic models, until it is shown that the design fulfills the rather strict requirements. Secondly, it should be emphasized that such a decomposition of the analyzed failure processes can hardly be achieved with deterministic approaches. The common safety value, be it a factor, a margin or something else, would be very difficult to compose out of the results of the evaluation of the sub-mechanisms. Once again, comparability is one of the major advantages of using probabilistic approaches. Finally, of course, a probabilistic approach does not compensate for deficiencies in physical-process-based models; it merely provides a consistent manner of dealing with the uncertainties. In the illustrated case, the sometimes quite rough upper-bound approaches led to a satisfactory answer, namely acceptance of the design by verification of the stated requirements. On the other hand, we are convinced that the use of upper bounds led to a rather conservative assessment. However, carrying out the indicated potential refinements is not a trivial task with the currently available methods. Especially for the initiation of liquefaction, the currently used models are unsatisfactory: either they are of an empirical nature and based on a limited number of (indirect and interpreted) observations, or they combine several physical-process-based models under rather restrictive assumptions. There is clearly a need for better in-depth understanding of the physical
processes and their interaction, leading to improved models.

REFERENCES

Delft GeoSystems 2006. MStab 9.9, User Manual. Delft GeoSystems, Delft.
JCSS (Joint Committee on Structural Safety) 2001. Probabilistic Model Code, Part 3.07 – Soil Properties (last update 08/2006). ISBN 978-3-909386-79-6.
Lindenberg, J. & Koning, H.L. 1981. Critical density of sand. Géotechnique 31(2): 231–245.
Lunne, T. & Christoffersen, H.P. 1983. Interpretation of cone penetrometer data for offshore sands. Proceedings of the Offshore Technology Conference, Paper no. 4464. Richardson, Texas.
Mastbergen, D.R. & Van den Berg, J.H. 2003. Breaching in fine sands and the generation of sustained turbidity currents in submarine canyons. Sedimentology 50: 635–637.
Molenkamp, F. 1989. Liquefaction as an instability. Proceedings Int. Conf. on Soil Mechanics and Foundation Engineering (ICSMFE): 157–163.
Olson, S.M. & Stark, T.D. 2003. Yield strength ratio and liquefaction analysis of slopes and embankments. Journal of Geotechnical and Geoenvironmental Engineering 129(8): 727–737. ASCE.
Sladen, J.A., D'Hollander, R.D. & Krahn, J. 1985. The liquefaction of sands, a collapse surface approach. Canadian Geotechnical Journal 22: 564–578.
Stoutjesdijk, T.P., De Groot, M.B. & Lindenberg, J. 1998. Flow slide prediction method: influence of slope geometry. Canadian Geotechnical Journal 35: 34–54.
A case study of the geological risk management in mountain tunneling T. Ikuma Dia Consultants Co., Ltd, Tokyo, Japan
ABSTRACT: In this tunneling project, research was conducted on a risk management technique for preventing the occurrence of latent geological risks in a mountain tunnel in Japan. The geological consultant played an important role in the re-evaluation of the unexcavated section and in proposing a suitable construction method from a geotechnical viewpoint. Geological risk management research on ground evaluation utilizing this experience is being continued in other mountain tunnels with the same kind of geological features.
1 INTRODUCTION
Investigation in the prior stage of a mountain tunnel with long length and high overburden was conducted for the purpose of clarifying each of the following items:
• The overall geological structure, distribution of geological features and characteristics of the tunnel section
• Ground classification with synthetic technical consideration based on the results of the investigation
• Topography and geology of the portal locations, and basic data for problems and the design of their countermeasures
• Basic data for the evaluation of face stability, design of support, selection of the auxiliary method, and selection of the excavation and tunnel driving methods

Especially in the tunnel design stage, geographical and geological data of high accuracy are required. However, it is difficult to perform a highly precise geological survey covering the whole extension of the tunnel, a linear structure, before excavation, given the present technical level, the investigation period and the current requirements of economic efficiency for geological surveys. Moreover, the Japanese Islands have belonged to a mobile belt throughout the geological ages and have a very complicated geological structure and distribution of geological features. For this reason, recognition of the uncertainty of geological phenomena in tunnel construction, and the response to it, has been an important subject. In tunnel construction, how well the latent geological risks due to the heterogeneity of the ground and the uncertainty of geological information are grasped in the prior stage is directly linked to cost and construction period. The geological risk in tunnel construction expresses the degree and grade of the uncertainty with which phenomena undesirable for construction and maintenance control are generated. This paper describes an example in which the deviation of ground evaluation between the prior stage and the construction stage was reduced in a mountain tunnel built in a steep mountain area, and presents a future view from the viewpoint of geological risk management.
2 GEOMORPHOLOGICAL FEATURE AND GEOLOGY OF RESEARCHED TUNNEL
This tunnel is planned in a steep mountain area at 1200–1400 m altitude, and some mountain streams with channels trending northeast–southwest cross the tunnel alignment (refer to Figure 1). The direction of these mountain streams also agrees with the lineament obtained by aerial photograph interpretation. Moreover, this zone is identified as a low-velocity zone by the seismic refraction method and is a geomorphologically weak zone.
Figure 1. Locality and geological map of researched tunnel.
The ground consists of the Nohi Rhyolites (rhyolitic to dacitic welded tuff), formed in the Cretaceous, and Granite porphyry which intrudes them. These welded tuffs have undergone weathering and hydrothermal alteration, and the lithofacies also changes intricately.

3 GEOTECHNICAL PROBLEMS
The geological investigation carried out in the prior stage of this tunnel made clear that the following geotechnical problems existed:
• Geological outcrops were scarce at the land surface as a whole, and it was difficult to grasp the geological structure of the ground in detail in the prior stage.
• It was presumed that welded tuffs were distributed over 96.6% of the tunnel length at the tunnel formation level and Granite porphyry over the remaining 3.4%. However, the former shows fracturing and alteration developed in various forms, and the estimation accuracy for the deep bedrock conditions is considerably low.
• The geological boundary between the welded tuffs and the Granite porphyry is considered to be an alteration zone with many clayey thin layers, and it is highly possible that the deteriorated bedrock also extends to depth. Therefore, a large increase of earth pressure caused by tunnel excavation, or sudden water inflow where a clayey layer acts as an impermeable wall, may occur in sections where the overburden exceeds 300 m. In such a case, it has a significant impact on excavation.

The ground classification was performed in consideration of the ground conditions of this tunnel, based on the Technical Standard for Road Tunnels – Structures (Japan Road Association 1989). The evaluation of the underground conditions of a tunnel with large overburden depends mainly on the elastic wave velocity values acquired by the seismic refraction method. However, since a depth of about 200 m is the limit of this exploration, the reliability of the acquired elastic wave velocity values is considerably low. Therefore, the accuracy of the position of low-velocity zones and of the boundaries between ground classes is also low at the tunnel formation level in these large-depth sections. The low-velocity zones are considered to be the above-mentioned alteration zones or shear zones. In the design phase, FEM analysis was conducted for the large-overburden part. As a result, the support pattern corresponding to the ground classification was ranked higher for the sections where the overburden exceeds 300 m. According to a prediction of the amount of water inflow using the hydrogeological conditions and hydraulic formulas, the steady inflow in this tunnel is estimated at about 0.7 m3/min/km. The concentrated water inflow at the construction stage is presumed to be several times that amount. The likely positions of concentrated water inflow are the periphery of the low-velocity zones and directly under the mountain streams at the tunnel formation level.
4 GEOLOGICAL RISK MANAGEMENT
This tunnel crosses some mountain streams at right or high angles. The intersections agree with the positions of low-velocity zones in many cases. In those parts, although the overburden is large, face falling and the generation of a large amount of water inflow are expected during construction. With respect to construction safety, such frequency and quantity pose a problem as geological risks, and geological risk management concerning countermeasures for emergencies and the prediction of geological risks is very important. An example in which the geological risk management technique could be reflected in construction is reported here. In order to cope with the geological risks and to advance construction more smoothly, a tripartite council consisting of the owner, the constructor and the geological consultant was established before construction. Since geotechnical information was always shared in the tripartite council, quick investigation was conducted by the geological consultant when collapse of the face or water inflow occurred in the tunnel. The geological engineer played an important role in the re-evaluation of the unexcavated section and in proposing a suitable construction method from a geotechnical viewpoint.

4.1 Case example (southern section of this tunnel)

In the southern section, after excavation was started, construction proceeded favorably owing to TSP (Tunnel Seismic Prediction) surveys and horizontal core boring as investigations ahead of the tunnel face. However, from the vicinity of STA.338, deteriorated bedrock came to appear frequently, and squeezing from the tunnel sidewall also became remarkable. When the excavation advanced to STA.333, the convergence increased. A maximum displacement of 500 mm due to squeezing was found at the lateral side of the tunnel at STA.331+15, and excavation was stopped. A cavity of about 3 m × 4 m was found in the upper part of this face. The spring water inflow from the cavity was 1.3 m3/min (Figure 2). When an alteration zone appears at the tunnel formation level under deep overburden, face falling, expansion of the loosened zone accompanying excavation and an increase in lateral pressure occur. Moreover, it became clear from the results of horizontal core boring that a confined aquifer existed ahead of the face. Judging from these conditions, a support pattern accompanied by a highly rigid auxiliary method was needed for the continuation of excavation. Steel pipe fore-piling to prevent face falling, additional rock bolts as a measure against lateral pressure, and drainage borings as a measure against water inflow were therefore proposed. Although the displacement converged, the water inflow did not decrease, and the water inflow pressure reached 1.7 MPa.
Figure 2. Base rock and water inflow situation around STA.331.
The geological conditions in the tunnel were examined synthetically, and the excavation of a drainage drift was proposed as a measure to break through the confined aquifer with its irregular distribution, from the viewpoints of construction and economic efficiency. As shown in Figure 2, a drainage drift was excavated. At No.331+16, it encountered the expected artesian aquifer. At that moment the face fell suddenly, 80 m3 of clayey to sandy deteriorated bedrock collapsed, and a water inflow of 2.5 m3/min occurred. The drainage drift was blocked by this collapse. However, the water inflow moved from the main tunnel to the drainage drift, and the water level and the amount of inflow decreased. The total water inflow of the tunnel was ca. 7500 m3/day. Since a large amount of water was still flowing in from the face of the main tunnel and from the steel pipe fore-piling, five more drainage borings were executed at STA.331+5.9. As a result, the water inflow from the face also decreased, and the continuation of excavation was achieved. However, since the deep-overburden section continues and sudden water inflow is also predicted, management of the water inflow treatment is needed. For this reason, the drainage capacity was re-examined, and an expansion of the drainage facilities for both normal and emergency conditions was proposed.
From this and another example (the northern section of this tunnel), the geological risk management technique applied to construction in sections with bedrock deterioration and generation of a large amount of water inflow is summarized in Figure 3.
4.2 Orientation of geological risk
Although the expected frequency of sudden water inflow during tunnel construction was low, a large amount of inflow was predicted if it occurred. The mountain stream which flows near the portal is a clear stream inhabited by mountain trout. Moreover, in the downstream region of the southern section there is a source of tap water using the infiltration water of the river, which is used as drinking water. Sufficient care must therefore be taken, also from the viewpoint of the social environment, with the measures for the drainage of water generated inside the tunnel. The convergence displacements measured during actual construction fell within the range presumed at the preliminary survey stage. The crack pattern of the shotcrete surface was also observed carefully, and the lining concrete was placed after confirming the convergence of displacement. The positioning of sudden water inflow and convergence among the geological risks in each stage of tunnel construction is shown in Figure 4.
Figure 3. Construction flow chart about the face falling section and role of the geological engineer.
Figure 4. Orientation map of the geological risks.
In the maintenance stage, these geological risks are monitored by visual observation and periodic measurement.
4.3 Quantitative evaluation of ground by modification index

Next, the comparison between the initial design stage and the actual results after the excavation stage of the ground classification is made using the modification index (i). The modification index (Inoma 1984) was used to analyze quantitatively the comparison of ground evaluation between the initial design stage and the actual results. The modification index (i) is defined by equation 1:

i = √( Σ (R² · n) )   (1)
Table 1. Brief summary of the risk management in this tunnel.
Figure 5. Contrast of the ground classification between initial design and excavation stage.
where R is the difference in class number of the corresponding rock mass between the designed and the actual case, and n is the ratio of the length with each R to the total length. This index is a statistical value using the root mean square to show the distribution of a variable. The modification index is calculated through the following process:
Procedure 1: First, the ground classes in the original design stage and the actual excavation results are compared. The changes of ground class between the two are then totaled for every width of class change, and an accumulation curve with the changed width as variable is drawn.
Procedure 2: The modification index is calculated by equation 1 using the above-mentioned R and n.
The result of comparing the ground classification at the initial design stage, the ahead-of-face investigation stage and the actual results after the excavation stage is shown in Figure 5. The modification index calculated from the ground classification at the ahead-of-face investigation stage and at the excavation stage is 0.59 and 0.80, respectively. At the general technical level, i ≤ 1.1 is considered a standard, so quite effective face design was achieved by the ahead-of-face investigation.
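A short sketch of the calculation of equation 1 from excavated segments is given below; the segment data are illustrative, not the tunnel's records.

    import math

    def modification_index(segments):
        """i = sqrt(sum(n_R * R^2)) (Inoma 1984): R is the ground-class
        change between design and actual result, n_R the fraction of the
        total length excavated with that R."""
        total = sum(length for _, length in segments)
        by_r = {}
        for r, length in segments:
            by_r[r] = by_r.get(r, 0.0) + length
        return math.sqrt(sum(r * r * (length / total)
                             for r, length in by_r.items()))

    # (class change R, length [m]) pairs -- illustrative only
    segments = [(0, 600.0), (1, 300.0), (2, 80.0), (-1, 120.0)]
    print(f"i = {modification_index(segments):.2f}")  # i = 0.82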
4.4 Effect of the geological risk management based on the case example in this tunnel

Management of risks rooted in geological features means predicting the appearance of risks in advance and preventing or avoiding them beforehand. The uncertainty of geological risk in underground construction in our country is especially high owing to the complexity of the geological features, and it is not easy to classify geological risk management straightforwardly into a few patterns. However, the geological risk working group of the JGCA recently researched geological risk management patterns and divided them into the following three types:
Type A: a case of avoiding the geological risk.
Type B: a case in which the geological risk is actualized.
Type C: a case of minimizing the damage associated with a geological risk which was actualized.
The above-mentioned case example belongs to Type A. In the southern section, deteriorated ground was expected to some extent before construction; however, its scale and extension were uncertain. In this case these conditions were confirmed by various explorations ahead of the face, and geological risks such as face collapse and water inflow were prevented beforehand. Since the bedrock deterioration repeated itself in a complicated way, NATM was continued using the observational method. Furthermore, heaving and displacement of the side wall were dealt with appropriately by geological risk management measures such as convergence measurement, resetting of control criteria values and water-inflow management. A brief summary of the geological risk management is shown in Table 1.
Table 2. Ground classification of this tunnel.
Figure 6. Ground classification of the planned tunnel with the same kind of geological features.
5 FUTURE VIEW
In ground where the geological and rock mass conditions change intricately along the tunnel extension and in the transverse direction, as in this tunnel, the difference in ground classification between the initial design stage and the construction stage is remarkable in many cases. For unexcavated sections which pose a particular problem in the construction stage, the ahead-of-face investigation results obtained by two or more techniques should be considered synthetically, and a grasp of the three-dimensional ground conditions along the tunnel extension is needed. However, the applicable exploration method differs according to the overburden, the geological structure and the rock mass conditions. Therefore, geological management which appropriately selects the more effective exploration method for grasping the quality of the unexcavated part of a tunnel is important. In evaluating the deteriorated parts of ground consisting of the Nohi Rhyolitic rocks, as in this tunnel, examination of the resistivity value was effective. However, the geological problem concerning the structure of the Granite porphyry remained during construction and caused variation in the ground evaluation. The outcrop distribution of the Granite porphyry was fragmentary, and the details of its distribution and geological structure were uncertain at the preliminary investigation stage. In the excavation stage, it became clear that the Granite porphyry had several times the extent estimated at the preliminary investigation stage. Petrographically, the Granite porphyry takes the form of a stock, widening with depth. The depth distribution of the Granite porphyry remained a residual risk until the excavation stage. The ground classification in the design and construction stages of this tunnel is shown in Table 2. The ground classes in this table conform to the ground classification of the Japan Road Association (1989). In addition, based on the actual results at the portal part, the ground classification of the general
part of this tunnel in the design stage was changed. Furthermore, DSC (Differing Site Conditions) were also considered and a new standard of classification was made. Moreover, the ground classification schemes in the planning stage of tunnels A and B, which consist of the same kind of geology as this tunnel, are shown in Figure 6. These two tunnels are now under excavation. By repeated observation of the face conditions of these tunnels and comparison of the respective ground conditions, a more practical standard of ground classification will be built from now on.
6 CONCLUSIONS
Learning from the cases of face collapse and squeezing ground which occurred in the northern section of this tunnel, geological risk management concerning the prediction and prevention of tunnel deformation and water inflow in the unexcavated part was carried out. As a result, the modification index (i) became about 0.5–0.8 for the sections where tunnel deformation and water inflow were predicted, and the deviation of the ground evaluation before and after excavation could be made considerably small. In general, a large amount of water inflow generated inside a tunnel is considered a geological risk for excavation. In this case, however, because the fresh water in particular was a precious water resource, water analysis was also used. Furthermore, separation measures for fresh and murky water were taken appropriately, and the water inflow was managed accordingly. Ground classification and water inflow could be managed appropriately using the geological risk management technique, and the effect of this management was large also from the standpoint of excavation cost.
ACKNOWLEDGMENTS

The author expresses his sincere gratitude to Professor Yusuke HONJO of Gifu University for his encouraging advice. Special thanks are extended to Professor TUNEMI WATANABE of the Kochi University of Technology and Mr. Yoshihito SABASE of CTI ENGINEERING Co., Ltd. for their kind advice. Thanks are due to Mr. Akira TAINAKA of DIA CONSULTANTS Co., Ltd, who provided suggestions during the preparation of this paper. The author also expresses gratitude to Mrs. Keiko HORIKAWA of DIA CONSULTANTS Co., Ltd, who cooperated in the creation of the figures and tables for this paper.
REFERENCES

Ikuma, T., Hatamoto, K., Yamamoto, K., Shindou, T. & Ogawa, T. 2001. Revaluation of the rock mass based on excavated result in mountain tunnel. The 36th Japan National Conference on Geotechnical Engineering, The Japanese Geotechnical Society: 1927–1928. (in Japanese)
Ikuma, T. 2008. A case study of the geological risk management using suitable investigation in mountain tunneling. Proc. of International Symposium on Society for Social Management Systems 2008, Kochi. Society for Social Management Systems.
Inoma, H. 1984. Comparison between the projected rock mass classification at the initial design and the actual results after the excavation under NATM method. Journal of the Japan Society of Engineering Geology, Special Volume: 63–70. (in Japanese with English abstract)
Japan Road Association 1989. Technical Standard for Road Tunnels – Structures. 273p. (in Japanese)
Guideline for monitoring and quality control at deep excavations T.J. Bles, A. Verweij, J.W.M. Salemans & M. Korff Deltares, Delft, The Netherlands
O. Oung & H.E. Brassinga Public Works Rotterdam, Rotterdam, The Netherlands
T.J.M. de Wit Geomet BV, Alphen a/d Rijn, The Netherlands
ABSTRACT: Geotechnical monitoring and quality control are often used as a standard approach to deep excavation risk control. However, the full potential of monitoring is often not used, and the relation with the design and construction processes is limited to some groundwater level readings and height marks. Monitoring offers many more possibilities when incorporated in the project's risk management and the construction's quality control. That is why a committee of experts from the field is setting up a guideline for the implementation of monitoring and quality control in deep excavations. The guideline helps designers, contractors and clients to determine the monitoring benefits for their project and suggests opportunities for successful embedding of monitoring in design, tender, contract and construction. This paper deals with the content of the guideline, which is to be released in 2009.
1 BENEFITS OF QUALITY CONTROL AND MONITORING
1.1 Introduction

Failure costs in the construction sector are estimated to be five to ten percent of the total turnover. A large portion of these failure costs is related to underground construction works, including deep excavations and pile foundations. The focus of the research and of this paper is on deep excavations (i.e. an excavation for the purpose of building an underground construction, with a depth from a few meters up to approximately 30 meters) and the accompanying foundation works. In order to reduce failure costs it is necessary to control the risks accompanying underground construction works. Unwanted and unforeseen events frequently occur, resulting in constructional and economic damage and in a negative image of the construction sector. Examples are severe leakages, foundation piles not reaching the required depth, discomfort for the public or even damage to the deep excavation or its surroundings. As a consequence, ground-related risk management has rapidly evolved in recent years. More and more it is used as a fruitful method to reduce geotechnical risks. To give structure to ground-related risk management, Deltares developed the GeoQ method (Van Staveren 2006). GeoQ is based on six generally accepted risk management steps:
1. Determination of objectives and data collection;
2. Risk identification;
3. Risk classification and quantification;
4. Risk remediation;
5. Risk evaluation;
6. Transfer of risk information to the next project phase.
Geotechnical risk management is already successfully used in many construction projects. Monitoring and quality control are excellent tools within this approach for deep excavation design and construction.
1.2 Monitoring and quality control
Many definitions of geotechnical monitoring are available, such as those described by Dunnicliff (1988, 1993). The most important aspects are:
– Measurements are performed repeatedly. This is necessary to gain insight into deep excavation behavior over a certain period of time.
– Depending on the (failure) mechanism to be observed, measurements are performed on (elements of) the construction, in the soil and/or on surrounding constructions.
– Measurements can be executed before, during and after the construction period.
– Measurements should create the possibility to foresee unwanted events and provide an incentive to take appropriate measures in order to prevent negative consequences.
Contractors also perform measurements for quality control. These measurements contribute to control of the building process. Measurements for quality control are usually performed only once and are usually not
part of a monitoring plan. Still, many quality control measurements complement traditional geotechnical monitoring. Therefore quality control should be part of deep excavation risk management.
1.3 Objectives of measuring
As Marr (2001) states: "Geotechnical monitoring saves money and lives and/or diminishes risks". In addition, Marr gives fourteen reasons demonstrating the benefits of geotechnical monitoring, varying from indicating construction failure to construction process control and increasing state-of-the-art knowledge. In general, four types of objectives of measuring and monitoring are identified:
1. Operational/qualitative goals: Decision making with regard to the possible occurrence of risks is improved by measuring the development of failure mechanisms. The progress of construction of the deep excavation is controlled, and checks are performed on the assumptions made for the design of the deep excavation. The constructive safety of the deep excavation and its surroundings is also guaranteed. The aim is to reduce uncertainty and gain reliability. In addition, quality control of constructional elements is an operational goal. Examples are load tests on anchors or piles, or torque measurements while installing drilled piles.
2. Communicative goals: Deep excavations are often constructed in densely populated areas. It is therefore very important to get the public's support in order to prevent complaints that can slow down the construction process. Monitoring can be used efficiently to demonstrate that the construction process is under control.
3. Legal goals: Monitoring can be used to answer questions about liability for building damage. Monitoring can also be a requirement or boundary condition for the authorities' permission for deep excavation construction.
4. Scientific goals: Monitoring can provide excellent data for scientific research to improve the understanding of deep excavation (and soil) behavior.
2 RESEARCH AND GUIDELINE
2.1 Research

In general practice, monitoring is already used as a standard part of deep excavation risk control. However, the monitoring potential is often not fully used and the relation with the design and construction processes is limited. The main reason for this research is to clarify the possibilities to use monitoring more efficiently in practice. We researched the way monitoring can be optimized during the entire construction process, providing a powerful tool within a broad risk management framework for quality control and process optimization. The research has been done together with the construction industry in the Netherlands (contractors, clients, engineers, monitoring companies and researchers). The result is a CUR guideline for the implementation of quality control and monitoring in deep excavations (CUR is a Dutch civil engineering research institute). The guideline will be available (in Dutch) mid 2009.
2.2 Improvement in monitoring practice

After implementation of the guideline, three types of improvements are expected to be achieved in Dutch construction practice:
1. Increase of the client's awareness of monitoring benefits. Monitoring is often mistaken for a time- and money-consuming activity, necessary only to satisfy the authorities' requirements and the client's demands. The guideline underlines the benefits of monitoring and describes how to maximize monitoring results.
2. Monitoring will become an integral part of the building process. This can be separated into the following aspects:
– Facilitation of explicit responsibility allocation for the different parties involved in the construction process.
– All monitoring activities are based on a risk management approach and are laid down in a standardized monitoring plan.
– Monitoring activities are coordinated by one party to prevent fragmentation of monitoring activities.
– Measurements are directly adapted to the building process (frequency, reference and end measurements, limit values of measurements, communication and interpretation of data, etc.).
3. Providing an overview of all the different measurement techniques applicable to deep excavations. Techniques are coupled to the specific risks of the different construction methods in a work breakdown structure.

2.3 Objectives of the guideline

The guideline's overall goal is to improve the use of measurements and monitoring, thus improving quality and risk management. Three objectives were formulated:
1. Describe the measurement techniques associated with deep excavation construction, coupled to all relevant construction risks and the respective parameters.
2. Present a step-by-step plan and format to set up a solid monitoring plan.
3. Provide opportunities to embed monitoring in tenders and contractual processes.
3 OVERVIEW OF MEASUREMENT TECHNIQUES
Many companies deal with monitoring. They all have their own experience with different monitoring
Table 1. Standard for describing parameters to be measured; the example deals with soil deformations.

Parameter: Soil deformation
Measurements (msmt) of: Deformation in x-, y- and z-direction with use of inclination or extenso instruments, leveling
Measurement boundary values: Measurements should be checked during each construction phase against design values
– signal values: 80% of design value
– limit values: 100% of design value
Required accuracy:
– absolute accuracy: Depends on design
– frequency: A minimum of one time per construction phase (end msmt is start msmt of next phase). During critical phases more measurements are necessary, especially when time is an important factor
– timing of reference msmt: Before start of activities
– demands on reference measurement: At least two (similar) measurements
– timing of end measurement: With time-related effects: 3 months after the last activity affecting the deformations. Without consolidation: 1 month after the last activity affecting the deformations.
– demands on end msmt: One is sufficient
Handling of data:
– processing of data: Lines of deformation against time. Check against design value.
– necessary speed for availability of data at testing company: Depends on construction phase and risk; aim for a maximum of three days. Communication to all parties involved.
– necessary speed for decision making when a measurement exceeds a boundary: Depends on construction phase and risk; aim for a maximum of three days. Communication to all parties involved.
Measurements in quality-control plan or monitoring plan: Monitoring plan

Figure 1. Structure to identify measurement techniques.

Figure 2. (Part of a) WBS of a deep excavation.
techniques. However, these experiences are not shared with others. In this way the practical knowledge level does not increase, and best practice is in fact re-invented at each deep excavation, especially for new and special techniques. For a complete outline of all measurement techniques, a risk-based overview of a deep excavation has been developed. This overview has the structure shown in Figure 1. The elements of a deep excavation are identified by means of a work breakdown structure (WBS) of a deep excavation; the WBS can be seen in Figure 2. For every element a list of unwanted events has been made. These unwanted events are described to show their relevance. In order to keep an overview, a differentiation has been made between unwanted events affecting
1. single elements of the deep excavation, e.g. instability of the trench of a cemented bentonite wall;
2. the deep excavation as a whole, a combination of those single elements, e.g. bursting/heave of a submerged concrete floor;
3. the surroundings of the deep excavation, e.g. cracks in surrounding buildings caused by vibrating piles.
For each unwanted event, the parameters have been identified which should be taken into account for the possible development of the unwanted event. With these parameters it is possible to find a list of specific measurement techniques. Each parameter and technique is described using a standard format. Examples of such standards are shown in Tables 1 and 2.
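The risk-based overview can be represented as a simple nested mapping from WBS elements to unwanted events, and from events to the parameters and candidate techniques; the entries below are illustrative, drawn from the examples above rather than from the guideline's full tables.

    # WBS element -> unwanted event -> parameters to watch and techniques
    overview = {
        "cemented bentonite wall": {
            "instability of trench": {
                "parameters": ["soil deformation", "ground water level"],
                "techniques": ["inclinometer", "piezometer"],
            },
        },
        "submerged concrete floor": {
            "bursting/heave": {
                "parameters": ["pore pressure", "vertical displacement"],
                "techniques": ["piezometer", "extensometer"],
            },
        },
        "surroundings": {
            "cracks in surrounding buildings": {
                "parameters": ["vibration", "settlement"],
                "techniques": ["geophone", "leveling"],
            },
        },
    }

    for element, events in overview.items():
        for event, info in events.items():
            print(f"{element} | {event}: measure "
                  + ", ".join(info["parameters"]))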
Table 2. Standard for describing a monitoring technique; the example deals with an extensometer.

Monitoring technique: Extenso instrument
What is monitored: Vertical displacements (swell caused by excavation, settlement caused by consolidation, relative to strains in concrete constructions)
Functioning of instrument: Extenso instruments are used to constantly measure differences in distance between two or more points over the axis of a borehole. This makes it possible to determine the vertical displacements of soil layers. In combination with inclino measurements a total view of the displacements can be derived. In practice, an open tube is placed inside a borehole. Fixed points are placed at different depths. The displacements between these fixed points and the head of the extenso instrument at the top of the tube are measured. The instruments can be, e.g., displacement recorders or potentio instruments.
Photo of instrument / Figure of measurements: [not reproduced]
Accuracy of monitoring technique:
a) sensitivity to installation errors: Little sensitive to installation errors
b) sensitivity to errors during operation: Little sensitive to measurement errors
c) vulnerability of instruments (solidness): Instruments are vulnerable
Explanation and recommendations: When used for measurements in the center of a deep excavation, instruments are sensitive to collision with vehicles. Instruments can be protected by installing a casing till 1 meter below excavation level and attaching the casing to a strut.
d) Accuracy (absolute): +/− 0.05 mm
e) Measurement range (absolute): 100 mm
Relevant influencing factors from surroundings: Instruments should be protected in case of placement in an excavation.
Long-term behavior (calibration, stability): Depends on specification of instruments. Reference point should be measured regularly.
Procedure of measurements: Automatic
Interpretation of data:
a) Existing systems for analysis and interpretation: Absolute and relative displacements can be measured.
b) Ambiguity: Unambiguous (interpretation always objective)
Maintenance: None
Application 1: Swell (vertical soil displacements) at excavation
Suitability for this application: Very suitable
Best practices:
a) Number of instruments: At minimum one location in the center of the deep excavation
b) Location of instruments: At minimum one measurement anchor in each soil layer. More accurate results can be obtained by using two anchors per soil layer.
Application 2: Damage to surroundings caused by (densification of sand layers due to) vibrating or hammering sheet piles
Application 3: Deformation of soil and surrounding buildings caused by bending or collapse of the wall of the pit
(For this example, applications 2 and 3 are not worked out in detail.)
4 DEVELOPING A MONITORING PLAN
The guideline presents a step-by-step plan to set up a good monitoring plan. The basis of this plan was formulated in HERMES (2003), but it has been adapted for practical use in construction works. Not all situations ask for the same monitoring intensity and effort. A large deep excavation in a busy and old city centre will require a much more intense monitoring system than a standard deep excavation outside the city. In addition, the type of construction will lead to a different monitoring strategy. With the help of the guideline, the reader can learn which monitoring is necessary, based on the situation and the project risks, to provide a good tool for proper risk management.
4.1 Table of contents

A good monitoring plan should include at least the elements stated below. The capitals behind the chapters refer to the steps that can be followed in order to get all the necessary input for a proper risk-based monitoring plan. These steps are described below.

1 Introduction
1.1 Project description and basic assumptions (A)
1.2 Objectives of monitoring (B)
2 Results from risk analysis (C)
3 Monitoring strategy (D–G)
4 Operational plan (H–I)
5 Maintenance plan (J)
6 Measures when measurement limits are exceeded (K)
7 Dismantle plan (L)
8 Communication plan (M)
The table of contents is the same for every type of project, big or small, simple or complex. The way it is elaborated can, however, differ. One has to use the guideline in a pragmatic way.

4.2 Steps to obtain a risk-based monitoring plan
4.2.1 Steps A and B, scope and objectives
The project needs demarcation in space and time in order to control the scope. Then the objectives for the monitoring are chosen (see the different types of objectives in paragraph 1.3 of this paper).

4.2.2 Step C, risk analysis
Risk management is of key importance to a good monitoring plan. Monitoring efforts may be an outcome of a risk analysis. Step C therefore includes a go/no-go decision. This decision should be based on the following questions:
– Is the risk to be monitored critical (big enough)?
– Is monitoring the best option to manage the risk?
The risk analysis can be technical, on operational goals, but can also be more general, for example on communicative goals. A summary of this analysis should be written in the monitoring plan.

4.2.3 Step D, monitoring strategy
The effort on monitoring needs to be evaluated together with the client, by weighing the benefits of measuring (decrease of the risk of losing money, time, quality and/or image of the client) against the costs. In this way a better understanding of the necessity of the measurements is created. For elaboration one can use the following steps D–G. However, only the chosen strategy needs to be reported and further elaborated in the final monitoring plan.

4.2.4 Step E, parameters
Determine the parameters to be measured. Are these parameters sensitive enough for all risks that have to be monitored? The scheme in the previous chapter provides background for this step.

4.2.5 Step F, demands
Determine the demands on the monitoring:
– Signal and limit values of the parameters to be measured, making use of the scheme in the previous chapter and/or drawings, norms and literature.
– Location of the measurements; is this location sensitive enough to obtain proper measurements?
– Sensitivity and range of the measurements of each parameter; this is defined from the risks to be controlled and is a demand for the instruments.
– Frequency of the measurements; a higher or lower frequency can lead to the choice of other instruments.
4.2.6 Step G, instruments
Based on the previous steps, types of instruments can be selected that fit the given demands. Each type of instrument should have a specific measurement goal; instruments without such a specific goal should be left out. Afterwards a specific instrument can be selected from a producer.

4.2.7 Step H, influence from surroundings
Effects from activities surrounding the project can disturb the measurements. For example, heavy traffic on a neighboring road causes possible vibrations, and daily temperature differences cause shrinkage and extension of constructions. This can influence the processing of the data and can put high demands on the maintenance plan. Sometimes it is necessary to go back to step F and choose a different instrument.

4.2.8 Step I, planning of operations
For each monitoring instrument the following should be clear:
– Location (x, y) and depth (z)
– Demands on the reference measurement
– Measuring frequency
– Time table for obtaining monitoring data (related to the construction process)
– Format of the data
– Demands on the processing of the data
– Demands on the end measurement

4.2.9 Step J, planning maintenance
Planning of the necessary calibration and maintenance.

4.2.10 Step K, measures
An important step in this risk-based process is to decide on the measures to be taken when signal and limit values of a measurement are exceeded. Only with pro-active thinking can measures be taken in time, in order to prevent discussions when immediate action is necessary. However, for each project one has to choose to what depth all possible measures are elaborated.

4.2.11 Step L, dismantling
A short description of when and how dismantling of the monitoring system will take place and who is responsible.

4.2.12 Step M, communication
When using monitoring it is very important to have proper communication between all parties involved. The processing of the data should be aligned with the project activities. Especially the maximum time span between measurement, processing and taking measures is of importance. When there is no attention to communication, monitoring does not make sense. After all, the purpose of monitoring is to foresee unwanted events and take measures in time.
Therefore, it is crucial to have an effective communication plan. This plan should at least give answers to the following questions:
– Who is responsible for the execution of the measurements?
– Who is responsible for the communication of the measurements?
– Who is responsible for the processing of the measurements?
– Who is responsible for the interpretation of the measurements?
– Who is responsible for taking action after the boundary values of measurements are exceeded, and how is it ensured that these actions really take place?
– Who carries final responsibility for the total program?
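The signal/limit convention of Table 1 (signal at 80%, limit at 100% of the design value) can be sketched as a simple check that triggers the communication chain above; the thresholds beyond those fractions and the action texts are illustrative.

    def check_measurement(value, design_value,
                          signal_frac=0.80, limit_frac=1.00):
        """Classify a reading against signal (80%) and limit (100%) values."""
        if value >= limit_frac * design_value:
            return "limit exceeded: take measures, inform all parties"
        if value >= signal_frac * design_value:
            return "signal exceeded: raise frequency, review with designer"
        return "ok"

    print(check_measurement(22.0, 25.0))  # deformation [mm] vs design value
    print(check_measurement(26.0, 25.0))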
5 PRACTICAL USE
In the end, monitoring has to be used in practice. However, even when monitoring is considered very important, it is often perceived as difficult to divide responsibilities between the different parties involved in the construction process. The guideline therefore provides suggestions on how monitoring can be embedded in tenders and contracts and how to allocate responsibilities. A distinction is made between traditional contracts (using specifications; design by or through the client) and integrated contracts (i.e. design and construct). Three rules form the basis for dividing the responsibilities:
1. The party that makes a certain choice in the design or construction process is responsible for that choice.
2. This party also takes the consequences accompanying the choice.
3. Accordingly, this responsible party determines the monitoring with regard to this choice and performs the monitoring or assigns a third party to perform it.
Roughly, this results in the following distribution of responsibilities for the different types of contracts:
– Traditional: the client determines the extent of monitoring and performs it himself, or assigns a third party to perform the monitoring. The monitoring can be part of the specifications for the contractor.
– Integrated: monitoring is part of the contract with the contractor, and the contractor is responsible for its determination and performance.
However, the client can state process requirements. For example, the client can demand that the contractor formulate the monitoring plan according to the guideline. During the tender phase it is difficult to give specifications within an integrated contract, because a client cannot be too specific if the contractor is to make the design himself. Risk management is the key to solving this: the client can demand that a contractor use risk management in his design approach and be specific on the role monitoring will play in the total project's risk management.
6 CONCLUSIONS
The guideline’s overall-goal was to improve the use of measurements and monitoring, thus improving quality and risk management. The guideline indeed answers the research questions. The knowledge of Dutch monitoring and construction industry with regard to deep excavations is gathered in order to get an overview of all different measurement techniques. Also a tutorial is given in order to obtain a risk based monitoring plan. Finally, suggestions are shown for implementation in practice. Two case studies have been executed to check the practical use of the guideline with positive results. Last changes are made according to the results of these cases. The guideline will be available from mid 2009. ACKNOWLEDGEMENTS This research was only possible with the contributions of all members of the committee (CUR H416) of experts in the field and Delft Cluster (www.delftcluster.nl). REFERENCES Dunnicliff, J., 1988, 1993, Geotechnical Instrumentation for Monitoring Field Performance, John Wiley & Sons, Inc. HERMES, feb 2003, Het Rationale Monitoring Evaluatie Systeem (The Rational Monitoring Evaluation System), Delft Cluster Marr, A.W., 2001, Why monitor Geotechnical Performance? 49th Geotechnical Conference in Minnesota Staveren, M. van, 2006, Uncertainty and Ground Conditions: A Risk Management Approach, Elsevier Ltd.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
A study on the empirical determination procedure of ground strength for seismic performance evaluation of road embankments K. Ichii Graduate School of Engineering, Hiroshima University, Higashihiroshima, Japan
Y. Hata R&D Center, Nippon Koei Co., Ltd., Tsukuba, Japan
ABSTRACT: Road embankments should have a certain level of seismic performance against strong earthquake motion. The ground strength parameters (cohesion c and internal friction angle φ) are key factors in the seismic performance assessment of road embankments. However, the procedure to determine the ground strength parameters depends on the experience of the engineers, and it is not well documented. For example, the patterns of in-situ soil tests at an embankment (the number of tests and their locations) are not unique. In this study, a questionnaire survey of 76 civil engineers in Japan was conducted to reveal their empirical procedures of parameter determination. The results of the questionnaire clarify the considerable variation in the determined ground strength parameters, which depends on the experience of the engineers.
1 INTRODUCTION
Road embankments should have a certain level of seismic performance against strong earthquake motion (Japan Society of Civil Engineers (JSCE), 2000). There are many applicable guidelines for the seismic performance evaluation of geotechnical works, including road embankments. In the procedure of seismic performance assessment, the ground strength parameters (cohesion c and internal friction angle φ) are the most important. However, the procedure to determine the ground strength parameters depends on the experience of the engineers, and it is not well documented. For example, the patterns of in-situ soil tests at an embankment (the number of tests and their locations) are not unique. In this study, a questionnaire survey of 76 civil engineers in Japan was conducted to reveal their empirical procedures of parameter determination. The questionnaire consists of 4 questions. The first is a question on the types of geotechnical investigation to be carried out. The second is a question on the location and number of in-situ tests. The third is a question asking for parameter identification for a virtual soil profile. The last is a question on the experience of the respondents. In this paper, the result of the third question is briefly reported.

2 CONTENTS OF QUESTIONNAIRE

2.1 Assumed background for the questions

The following are the assumptions behind the questions; note that they were determined from the viewpoint of simplicity. Seismic performance assessments of embankments are requested for point A and point B, as shown in Figure 1. Point A is a standard-type embankment on horizontally layered bedrock, whereas point B is a half-bank shaped embankment on tilted bedrock. The distance between point A and point B is 5 km. These embankments (Embankment A & Embankment B) are both located in Hiroshima Prefecture, Japan, and are filled with Masado sand (decomposed granite). The shapes of the embankments are summarized in Figure 1 and Table 1; the parameters of the embankment shapes, such as slope gradient, height and crest width, are also shown in Table 1. The bedrock is very strong, and the ground water level is very low, so the possibility of liquefaction need not be considered.

Figure 1. The targeted embankments.
Table 1. Parameters of targeted embankments.

Parameters                      Embankment A (Standard)   Embankment B (Half-bank)
Height H (m)                    12                        12
Crest width W (m)               22                        22
Gradient of slope 1:s (deg.)    1:1.8 (29.1)              1:1.8 (29.1)
Gradient of base 1:k (deg.)     Horizontal base           Tilted base 1:3.63 (15.4)
Soil material                   Masado                    Masado
2.2 Questions

In total, 4 questions are given in the questionnaire. The first concerns the type of geotechnical investigation to be carried out. The second concerns the location and number of in-situ tests. The third asks for parameter identification for a virtual soil profile. The last concerns the experience of the respondents. The contents of the third question are as follows. 'Standard penetration tests (SPT) were carried out at the crest of Embankment A and Embankment B. The obtained N values are shown in Figure 2. Based on the obtained data, please identify the shear strength parameters (cohesion c and internal friction angle φ) to carry out the slope stability assessment based on the circular slip method.' 'If possible, please mention the method to be used. Furthermore, in addition to the final estimated values of the shear strength parameters (cohesion c and internal friction angle φ), please mention the possible range of the parameters.'

Figure 2. The N values at the sites.

3 RESULTS OF THE QUESTIONNAIRE

3.1 Characteristics of respondents

The questionnaire was sent to a total of 200 engineers; 76 effective responses were obtained. The characteristics of the respondents are shown in Figure 3. As shown in Figure 3(a), the majority of respondents (42) are consulting engineers. The practical working years of the respondents, summarized in Figure 3(b), cover a very wide range. Figure 3(c) shows the qualifications of the respondents. Most of the respondents have at least one of four major qualifications (Professional Engineer of Japan in the civil engineering field, Professional Engineer of Japan in general technological project management, Doctor of Engineering, 1st grade Engineering Works Management Engineer). Therefore, most of the respondents can be expected to have a certain level of technical knowledge and experience. Figure 3(d) shows the main business field of the respondents. About one-third of the respondents mainly work on the design of embankments or on slope stability assessment.

3.2 Method for cohesion estimation

Most of the engineers estimated the cohesion as zero (c = 0) because Masado soil was used. This is based on the engineers' experience and on engineering judgment erring on the safe side. A small cohesion was given by some respondents as a dummy value to prevent surface failure of slopes in dynamic FEM analyses. Some answers used empirical equations relating the cohesion c (kPa) to the depth h (m) from the crest of the embankment (Japan Road Association (JRA), 1996; JRA, 1999).
Figure 3. The characteristics of respondents.
3.3 Method for internal friction angle estimation
The answered methods for the internal friction angle φ are summarized in Figure 4. There is no great difference between the methods for Embankment A and Embankment B. The answered methods can be classified into 10 types, as follows. Method I is the technique proposed by Fukui (Fukui et al., 2002), which is given in the specifications for highway bridges (JRA, 2002).
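The Method I correlation is commonly cited in the published sources in the form

\[ \phi = 4.8 \ln N_1 + 21 \qquad (N > 5) \]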
The conversion of N value into N1 value is as follows.
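The commonly cited form of this conversion is

\[ N_1 = \frac{170\,N}{\sigma_v + 70} \]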
where σv is the effective overburden pressure (kPa). Method II is the technique given in the earlier specifications for highway bridges (JRA, 1980); the effect of the confining pressure is not considered in its equation.
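The Method II correlation is commonly cited as

\[ \phi = \sqrt{15\,N} + 15 \qquad (N > 5,\ \phi \leq 45^{\circ}) \]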
Figure 4. The methods for parameter determination.

Method III is the technique based on actual values for expressway embankments (e.g. Okubo et al., 2004). Note that a purely empirical decision based on experience with expressway embankments is also classified under this method. Method IV is the technique proposed by Osaki (Osaki, 1959). This method is based on soil test results in the Kanto region of Japan.
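Osaki's correlation is commonly cited as

\[ \phi = \sqrt{20\,N} + 15 \]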
Method V is the technique based on previous studies on Masado (e.g. Japan Geotechnical Society (JGS), 2004). Method VI is the technique based on the guidelines for design of expressways by NEXCO (Nippon Expressway Co., Ltd, 2006). Method VII is the technique proposed by Hatanaka (Hatanaka & Uchida, 1996). This method is based on the relationship between CD test results and the corresponding N value.
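The Hatanaka & Uchida correlation is commonly cited as (written here with the N2 notation of this paper; the original publication writes N1)

\[ \phi = \sqrt{20\,N_2} + 20 \]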
The conversion of N value into N2 value considering the effect of effective confining pressure is as follows.
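The commonly cited form of this conversion is

\[ N_2 = \frac{N}{\sqrt{\sigma_v / 98}} \]

where σv is again the effective overburden pressure (kPa).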
Method VIII is the technique based on the guidelines for road earthwork, slope engineering and slope stability engineering edition (JRA, 1999). Method IX is the technique based on the guidelines for railways (Railway Technical Research Institute, 2007). Method X is the technique proposed by Dunham (Dunham, 1954).
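Dunham's correlation is commonly written as

\[ \phi = \sqrt{12\,N} + A \]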
where A is a coefficient accounting for the grain size distribution and the grain shape. In Figure 4, "Others" denotes comments that it is difficult to estimate the internal friction angle from the N value alone, and "None" denotes non-effective answers. About half of the respondents adopted techniques based on the specifications for highway bridges (Method I and Method II).

3.4 Estimated result of cohesion

The estimated results for the cohesion are shown in Figure 5. There is no great difference between the estimated cohesion for Embankment A and that for Embankment B. Most of the respondents regard the cohesion as zero (c = 0); only 15 respondents considered Masado a cohesive material. The main reason for regarding Masado as a cohesionless material (c = 0) is to obtain an evaluation on the safe side, although a certain level of cohesion is usually observed for unsaturated soil.
3.5 Estimated result of internal friction angle

The estimated results for the internal friction angle are shown in Figure 6. As in Figure 5, there is no great difference between the estimated values for Embankment A and Embankment B. The dispersion of the estimated internal friction angle is about ±4 degrees in terms of standard deviation, or about 0.13 in terms of coefficient of variation. Even when the same method is adopted, the estimated results vary; this is because of differences in the way of dealing with scattered N values. Some respondents divided the embankment into several layers. Note that there was a single answer regarding Masado as a purely cohesive (frictionless; φ = 0) material, because the obtained N values do not increase with depth.
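For orientation (an inference from the two figures just quoted, not an additional survey result): since the coefficient of variation is the ratio of the standard deviation to the mean, CoV = σφ/μφ, the quoted values together imply a mean estimated friction angle of roughly 4/0.13 ≈ 31 degrees.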
4 DISCUSSION
As a result of the variations in the estimated ground strength parameters (c, φ), the Mohr-Coulomb failure criterion varies as shown in Figure 7. The influence of the determined cohesion is significant in the low confining pressure region. Indeed, it is reported that the seismic performance of embankments evaluated by FEM depends greatly on the level of the apparent cohesion (Hata et al., 2009), possibly owing to the effect of surface failure, which corresponds to the low confining pressure region. Therefore, the obtained variation in the estimated cohesion implies a significant variation in the evaluated seismic performance of embankments. In this study, the dispersion of the estimated internal friction angle is almost 0.1 in terms of coefficient of variation (refer to Figure 6). On the other hand, it is reported that the heterogeneity of the internal friction angle obtained by laboratory tests for Japanese airport embankments is also almost 0.1 in terms of coefficient of variation (Hata et al., 2008). In other words, the dispersion of the internal friction angle based on engineering judgment is at almost the same level as the heterogeneity of the soil itself.
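In Mohr-Coulomb terms the failure envelope is

\[ \tau_f = c + \sigma_n \tan \phi \]

so the cohesion term dominates the available shear strength as the normal stress σn approaches zero, which is exactly the shallow, surface-failure region discussed above.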
5 CONCLUSIONS
In this study, a questionnaire survey of 76 civil engineers in Japan was carried out to reveal their empirical procedures of parameter determination. In this paper, the answers to one of the questions are briefly reported. The methods for determining the ground strength parameters from N values show a wide variety. In addition to the difference in the adopted method, differences in the way of dealing with scattered N values are also a major reason for the differences in the estimated parameters. As a result, a certain level of variation in the estimated shear strength parameters is reported.
Figure 5. The estimated cohesion.
Figure 6. The estimated internal friction angle.
Figure 7. The differences in the estimated Mohr-Coulomb failure criterion.
For example, the dispersion of the internal friction angle based on engineering judgment is at almost the same level as the observed heterogeneity of the soil strength. This kind of knowledge is quite important for discussing the reliability of seismic performance evaluations. A detailed examination of the obtained answers and a more detailed survey will be carried out in future work.

REFERENCES

Dunham, J.W. 1954. Pile foundations for buildings, Proc. of ASCE, Soil Mechanics and Foundation Division.
Fukui, J., Shirato, S., Matsui, K. and Okamoto, S. 2002. Relationship between the internal friction angle of sand and the N value of the standard penetration test based on the triaxial compression test (in Japanese), Technical note of PWRI, No. 3849, 50p.
Hata, Y., Ichii, K., Kano, S. and Tsuchida, T. 2008. A fundamental survey on the soil properties in the airport embankments (in Japanese with English abstract), Bulletin of the Graduate School of Engineering, Hiroshima University, Vol. 57, No. 1.
Hata, Y., Ichii, K., Tsuchida, T. and Kano, S. 2009. A study on the seismic resistance reduction of embankment due to rainfall (in Japanese with English abstract), Jour. of JSCE C, Vol. 65, No. 1. (in press)
Hatanaka, M. and Uchida, A. 1996. Empirical correlation between penetration resistance and internal friction angle of sandy soils, Soils and Foundations, JGS, Vol. 36, No. 4, 1–9.
Japan Geotechnical Society. 2004. Japanese standards for geotechnical and geoenvironmental investigation methods, Standards and explanations (in Japanese), 889p.
Japan Road Association. 1980. Specifications for highway bridges, Part IV Substructure edition (in Japanese), Maruzen Co., Ltd.
Japan Road Association. 1996. Specifications for highway bridges, Part IV Substructure edition, 566p, Maruzen Co., Ltd.
Japan Road Association. 1999. Guidelines for road earthwork, Slope engineering and slope stability engineering edition, 470p, Maruzen Co., Ltd.
Japan Road Association. 2002. Specifications for highway bridges, Part V Aseismic design edition, 406p, Maruzen Co., Ltd.
Japan Society of Civil Engineers. 2000. The third suggestion and commentary about the civil structure (in Japanese), Chapter 8, Earth structures edition, 29–34.
Kitazawa, G., Takeyama, K., Suzuki, K., Okawara, H. and Osaki, Y. 1959. Tokyo ground map (in Japanese), 18–19, Gihodo Shuppan Co., Ltd.
Nippon Expressway Company Limited. 2006. Guideline for design of expressway, Earthwork edition (in Japanese), 350p.
Okubo, K., Hamazaki, T., Kitamura, Y., Inagaki, M., Saeki, M., Hamano, M. and Tatsuoka, F. 2004. A study on seismic performance of expressway embankments subjected to high-level seismic load (part 1) – Evaluation of shear strength of embankment materials – (in Japanese), Proc. of 39th Japan National Conference on Geotechnical Engineering (CD-ROM), No. 881, 1759–1760.
Railway Technical Research Institute. 2007. Guidelines for railway, Earth structure edition (in Japanese), 703p., Maruzen Co., Ltd.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Geo Risk Scan – Getting grips on geotechnical risks T.J. Bles & M.Th. van Staveren Deltares, Delft, The Netherlands
P.P.T. Litjens & P.M.C.B.M. Cools Rijkswaterstaat Competence Center for Infrastructure, Utrecht, The Netherlands
ABSTRACT: Ground conditions appear to be a major source of cost overruns in infrastructure projects, which has been confirmed by recent Dutch research. As an answer to these cost and time overruns, ground related risk management has evolved rapidly in recent years. Deltares, for instance, developed the GeoQ-method. In the Netherlands, Rijkswaterstaat, the Centre for Public Works of the Dutch Ministry of Public Works and Water Management, is the initiator and owner of all federal infrastructure projects. They asked Deltares to perform a Geo Risk Scan on five selected projects out of the top 20 largest Dutch infrastructure projects. The Geo Risk Scan proved to be an effective tool for quickly providing information about the degree and the quality of ground related risk management in infrastructure projects. This paper describes the Geo Risk Scan, as well as its application within five projects. The evaluation of the five projects resulted in six main lessons, which are further elaborated in 20 recommendations. These lessons may help project owners, engineers, and contractors to manage their construction projects.

1 INTRODUCTION

1.1 Successful and unsuccessful projects
What makes the difference between a successful and an unsuccessful construction project – a project that is completed well within budget, time, and its requirements, or not? It cannot be size or complexity alone, because there are successful and unsuccessful examples among small and large, as well as simple and complex, projects. It is also not location, because every country seems to have its successful and its problematic projects. It is not even ground conditions, as we all know examples of successful projects that have been completed in very difficult ground conditions. There must be another reason. Perhaps it is the way the management team of a project is able to manage the inherent presence of risk, during all phases of realizing the project.
1.2 Risk management as an answer to failure costs

Several studies indicate that failure costs in the construction industry are typically 10 to 30 percent of the total construction costs (Avendano Castillo et al., 2008). This seems to be a worldwide phenomenon. There is also abundant evidence that unexpected and unfavourable ground conditions have a serious stake in these failure costs (Van Staveren, 2006). In the Netherlands, Rijkswaterstaat, the Centre for Public Works of the Dutch Ministry of Public Works and
Water Management, is initiator and owner of all federal infrastructure projects. Therefore, Rijkswaterstaat decided to pay particular attention to the management of ground related risks within their projects.
1.3 Ground related risk management
The development and application of geotechnical risk management have received more attention in recent years. More and more, it is considered an effective and efficient work process for controlling all types of ground related risk. For instance, Deltares, formerly known as GeoDelft, developed the GeoQ-method (Van Staveren, 2006), which has already been used in many construction projects with good results. The GeoQ approach is in fact an in-depth application of the RISMAN project risk management method. GeoQ focuses on controlling ground related risks. The method is based on six generally accepted risk management steps:
1. Determination of project objectives and data collection;
2. Risk identification;
3. Risk classification and quantification;
4. Risk remediation;
5. Risk evaluation;
6. Transfer of risk information to the next project phase.
These risk management steps should be explicitly taken in all phases of a construction project. Ideally, the ground related risk management process starts in the feasibility phase and is continued during the
(pre)design phase, the contracting phase, the construction phase, and the operation and maintenance phase. Obviously, to be effective and efficient, ground related risk management should be aligned with more general project risk management approaches. Because of the similarity of the risk management steps, this should be no problem. The main differentiating feature of ground related risk management, compared to generic project risk management, is its specific attention to geotechnical risk and its remediation. Therefore, ground related risk management uses conventional risk management approaches, such as qualitative risk assessments, as well as specific geotechnical approaches. The latter include, for example, risk-driven site investigations and monitoring programmes.

2 SET UP OF A GEO RISK SCAN

2.1 Introduction and objectives
To gain insight into the degree and quality of geotechnical risk management in projects, Rijkswaterstaat asked Deltares to perform a Geo Risk Scan on five selected projects out of the top 20 largest Dutch infrastructure projects. The main objectives were gaining insight into the type and characteristics of ground related risks, the possible consequences should these risks occur, and the degree to which risk remediation measures were taken within the projects. Moreover, the results of the Geo Risk Scan would generate a quality judgement about the degree of geotechnical risk management. In order to achieve these objectives, the Geo Risk Scan aims to quickly scan both the process and the content of the ground related risk management within a project. The execution of a well-structured risk management process, by taking the six risk management steps presented above, is considered the main boundary condition for generating effective and efficient geotechnical risk management. If necessary, recommendations are provided to the project organisations in order to improve the project performance and reduce the probability of ground related failure costs.

2.2 Structure of a Geo Risk Scan
The basis of the Geo Risk Scan is the GeoQ approach mentioned above. Using this approach, the Geo Risk Scan was executed with a further focus on aspects such as:
– distinction between process and content;
– within the context of a project, the scan works from a generic analysis towards more detailed analyses of specific points of interest for the projects scanned;
– any scan starts with a qualitative analysis; quantitative analyses are only performed when considered necessary, based on the qualitative analysis.
The following four stages are identified on the basis of these assumptions. The first two stages in fact form the Geo Risk Scan; the latter two stages can be completed within a project, depending on the results of the first two stages.
1. Geo Quick Scan – qualitative process test;
2. Geo Check – qualitative content and product test;
3. Geo Risk Analysis – quantitative content analysis;
4. Implementation – geo risk management as a routine work process.
3 EXECUTION OF A GEO RISK SCAN

3.1 Stage 1: Geo Quick Scan
3.1.1 Execution of a Geo Quick Scan

To be able to perform this stage, one first has to gain insight into the project objectives and context. Therefore, an interview is planned with the project management team. It is important to interview at least the technical project manager, who is normally responsible for the technical part of a project. For larger projects, it can be helpful to also interview the risk manager (when present within the project), the project leaders of specific elements of the project, and the contract manager. The interview is based on a standardized questionnaire and deals mainly with the GeoQ approach. Examples of questions are:
– Is the GeoQ approach recognizable in the scanned project?
– Have all six steps been fully elaborated in each project phase? Has everything been done to get good results from the risk management steps?
– Have all six steps been explicitly elaborated in each project phase?
It is important to know whether a step is performed explicitly, following a plan, or merely as a kind of unaware coincidence. In general, when a step is only performed implicitly, there is no guarantee that the same risk driven project management will be applied in the next project phases or in other projects, which could have negative consequences. Further insight is gained by asking for the products available from these steps and for the knowledge and tools that have been used in the project to assist in the elaboration of the steps.
3.1.2 Results of Geo Quick Scan

Elaboration of the interview and study of the gathered information make it possible to evaluate the Geo Quick Scan. Scores are based on Table 1 (completed for each project phase) and the accompanying legend. Moreover, the application of the six main lessons learned (see the next chapter of this paper) is checked. Besides this score, recommendations are provided for improving the ground related risk management process.
Table 1. Scoring the Geo Quick Scan.

Step in GeoQ-approach                         Degree of explicit execution   Degree of complete execution
1. Setting objectives and data collection
2. Risk identification
3. Risk classification and quantification
4. Risk remediation
5. Risk evaluation
6. Transfer of risk information

Each step is scored as follows:
– 1 point: step isn't executed
– 2 points: implicitly, but not fully elaborated
– 3 points: explicitly, but not fully elaborated
– 4 points: implicitly, and fully elaborated
– 5 points: explicitly, and fully elaborated

The total number of points from Table 1 gives the judgement and a 'report mark':
– <20: very insufficient (4)
– 20–22: insufficient (5)
– 22–24: moderate (6)
– 24–26: sufficient (7)
– 26–28: good (8)
– >28: excellent (9)

Figure 1. Structure of checklists.

Table 2. Risk table.

Unwanted event | Probability | Consequence | Risk | Risk after implementation of measures recommended in scan
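Purely as an illustration (not part of the Geo Risk Scan itself), the scoring scheme above can be expressed in a few lines of Python. The treatment of the overlapping bin boundaries in the published scale is an assumption (half-open intervals):

```python
# Illustrative sketch of the Geo Quick Scan scoring in Table 1.
GEOQ_STEPS = [
    "Setting objectives and data collection",
    "Risk identification",
    "Risk classification and quantification",
    "Risk remediation",
    "Risk evaluation",
    "Transfer of risk information",
]

def report_mark(step_scores):
    """Map six per-step scores (1-5 points each) to a (total, mark) pair."""
    if len(step_scores) != len(GEOQ_STEPS):
        raise ValueError("expected one score per GeoQ step")
    total = sum(step_scores)  # possible range: 6 to 30
    if total < 20:
        return total, 4   # very insufficient
    if total < 22:
        return total, 5   # insufficient
    if total < 24:
        return total, 6   # moderate
    if total < 26:
        return total, 7   # sufficient
    if total <= 28:
        return total, 8   # good
    return total, 9       # excellent

# Example: every step explicit but not fully elaborated (3 points each).
print(report_mark([3, 3, 3, 3, 3, 3]))  # -> (18, 4)
```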
3.2 Stage 2: Geo Check
3.2.1 Execution of a Geo Check

The work in the Geo Check stage focuses on the points of attention resulting from the Geo Quick Scan. The Geo Check deals particularly with the content, or quality, of the project's ground related risk management. Analyses and calculations from the project are checked qualitatively by experienced geotechnical engineers. New calculations are not performed in this stage; the primary focus is checking the work already performed by the project organisation. For example, the following questions should be answered during the Geo Check:
– Have correct starting points been chosen in relation to the boundary conditions of the project?
– Have all relevant ground related risks been identified?
– Have calculations been performed for the relevant identified risks?
– Have appropriate models and techniques been applied?
– Are the results of the performed analyses according to expectations?

3.2.2 Checklists

Despite the experience of geotechnical experts, it is of major importance to ensure that all foreseeable risks are indeed identified. Therefore, using standardized checklists is very useful. Such checklists have been developed for building pits, roads and dikes, for quickly gaining insight into the completeness of the identified ground related risks. These checklists proved to be of good assistance in all performed Geo Risk Scans.
All risks in the checklists are classified as geotechnical risks, geohydrological risks, geo-ecological risks, risks related to objects or obstacles in the ground, risks related to contract requirements, or construction risks. All risks are described in terms of causes and consequences; the consequences are by definition unwanted events. This structure makes it possible to use the checklists on different scales. If the project is still in the feasibility phase, risk identification can be done only on the scale of unwanted events. When more detail is required, one can work from causes to sub-causes and estimate the risks accordingly.

3.2.3 Results of Geo Check

When a Geo Check is performed, the project organisation gains insight into the following questions:
– Are unacceptable ground related risks present in the project?
– Have risk remediation measures already been identified for these risks (based on expert judgement during the Geo Check)?
– Which unacceptable risks are still present?
The answers to these questions are described as recommendations for improving the in-depth quality of ground related risk management. Besides these recommendations, a risk table (Table 2) is presented. Such risk tables proved to be a more practical way of displaying the project risks than conventional risk graphs (Figure 2), without losing insight. Finally, the Geo Check is evaluated by giving a 'report mark' on a scale of 1 to 10, based on expert judgement.

Figure 2. Risk graph.

3.3 Results of Geo Risk Scan

After execution of the Geo Quick Scan (process) and the Geo Check (content), a total overview of the degree of a project's ground related risk management is available. Rijkswaterstaat asked for a project portfolio of all scanned projects in order to be able to compare the results of the different project scans. Figure 3 shows the matrix which made this possible. Each scanned project can be placed in the matrix. Content is evaluated as more important than process: after all, when the results of a project are good, the project objectives will not be affected. Therefore, projects with bad scores for the Geo Quick Scan can still get a good or moderate overall score. Nevertheless, these projects should keep their focus on improving the process of ground related risk management; it may have been mere coincidence that the content part of the project's ground related risk management had good results!

Figure 3. Quality matrix and project portfolio.

3.4 Stage 3: Geo Risk Analysis

The aim of the Geo Risk Analysis stage is to improve a project's ground related risk management, either with a focus on process or with a focus on content, by performing extensive and, if necessary, quantitative analyses. By executing this stage, a project in the bottom left corner of the quality matrix in Figure 3 should move to the upper right corner of the matrix. Analyses are executed on unacceptable risks as identified in the Geo Check, and the recommendations of both the Geo Quick Scan and the Geo Check are elaborated. If necessary, advanced risk management tools can be used, as well as geotechnical calculations. Examples are the use of an Electronic Board Room for brainstorm/expert sessions, contractual risk allocation by the Geotechnical Baseline Report, model experiments, field monitoring, and so on. This makes it possible to analyse and quantify any remaining unacceptable risks in order to select and execute proper measures. The project team itself can execute work during the Geo Risk Analysis stage. At the end of the Geo Risk Analysis stage, the optimal risk assessment strategy can be chosen. Possibilities are avoiding the risk, reducing the risk probability and/or consequence, and transferring the risk to a third party.

3.5 Stage 4: Implementation

All information gathered and recommendations from the previous stages will not improve the project's work unless they are implemented in the project's work. Therefore, the implementation can be seen as the most important stage! Implementation has to be done by the project team itself. This stage is beyond the scope of this paper and is elaborated, for instance, in Van Staveren (2009).

4 LESSONS LEARNED

The evaluation of the five scanned projects resulted in six main lessons, which are further elaborated in 20 recommendations. All lessons are described in this chapter of the paper. The lessons are subdivided into two main types. The first set of lessons may help to improve the application of well-structured ground related risk management during projects. These lessons are interesting for owners who are responsible for the geotechnical conditions in their projects, as well as for contractors or engineers who have to manage ground conditions towards successful project results. These lessons are referred to as lessons dealing with content (C). The second set of lessons teaches how the coordination and delegation of managing geotechnical risk by owners can be improved. These lessons seem particularly relevant for owners who use innovative design and build types of contracts and are referred to as lessons dealing with process (P).

4.1 Lesson 1 – Clear risk management positioning

Lesson 1 concerns the positioning of ground related risk management within the project.

4.1.1 Ground related risk management should be an integral part of project risk management, but with explicit status (P)

In all of the five scanned projects, ground related risks were an integral part of the total risk management. From a project management point of view, this seems a good strategy, because more aspects than only ground related risks are of importance for a project. From a geotechnical specialist point of view, this gives the
opportunity to give ground related risks the proper attention. However, ground related risks need special attention, with specialists dealing with them and executing specific remediation measures. Most remarkable is that ground related risks mainly have consequences during the construction and maintenance of the project. Consequently, these risks are often not given the attention they need during the design phase, or are thought of as solvable later. In each specific project it is recommended that geotechnical experts determine at an early stage whether or not this may result in unacceptable risks later on in the project. Therefore, ground related risks need an explicit status in the total risk management. In two of the five scanned projects this approach was used, with good results.

4.1.2 All specific ground related risks should be part of a project's risk register (P&C)

All ground related risks should not only have an explicit status, they should also be part of the project risk register. Often, only imprecisely formulated ground related risks are part of the project risk register, for example phrases like "soil investigation is insufficient for making a good design". Such fuzzy descriptions make explicit risk management difficult and probably even impossible: it is unclear which measures have to be taken and what the anticipated effects are. Therefore, it is recommended that the ground related risk register be part of the overall project risk register.
4.2 Lesson 2 – Clear risk management responsibility
Lesson 2 highlights the importance of any identified ground related risk having one or more owners; otherwise the risk will not get the attention required for adequate remediation.

4.2.1 Appoint a coordinator who is responsible for ground related risk management (P)

Scanning the five projects showed the importance of somebody in the project acting as a coordinator of all ground related issues. The quality of the projects improved considerably with such a coordinator. It is not necessary that this person also be responsible for the ground related risks themselves. The technical manager of a big infrastructure project is usually too busy to give ground related risks the proper attention; the coordinator should assist the technical manager.

4.2.2 All ground related risks should be allocated contractually to one or more of the parties within a project (P)

Because of the inherent ground related uncertainty, it is very important to contractually arrange the responsibilities for unwanted events caused by differing soil conditions. At least, it is important to think about
the consequences of ignoring risks caused by the uncertainty accompanying the soil. One could simply assign all risks to one party or the other, but partial risk allocation is often preferred; for instance, the principles and practices of the geotechnical baseline report (GBR) are recommended (Essex, 2007). The main principle is to allocate any risk to the party that is best able to manage it. Sometimes sharing a risk is preferred, when neither party is able to manage the risk on its own.
4.2.3 Ground related risks completely allocated to the contractor still need evaluation by the client (C)

In integrated contracts, many risks are transferred from the client to the contractor. However, the client still bears consequences when the risks occur. This is especially the case for immaterial consequences, like loss of reputation, safety or political risks. The project management team can use monitoring and other quality checks in order to keep control over these risks. These checks should not only be process checks, but should also include in-depth analyses of content.
4.3 Lesson 3 – Clear risk communication
Lesson 3 stresses the importance of transparent risk communication between all parties involved in the project, as early in the project as feasible.
4.3.1 Link the functional and technical levels of the project organisation explicitly to each other (C&P)

All five scanned projects used integrated contracts, where the contractor also had to do the design, or even the financing and maintenance. This implies that the project organisation has to pay much attention to the functional description of the project specifications: all identified risks need to be transferred to the contractor in this way. Scanning the five projects revealed two handicaps:
– (geo)technical experts have difficulties in translating their recommendations to this functional level;
– on the other hand, project managers have difficulties in translating the technical requirements of the experts into functional requirements.
Only one of the five projects excelled in this link between project management and ground related technical experts. This precious link was formed by one person who could 'speak both languages'. This is recommended for every project.

4.3.2 The risk file of a client should be known by the contractor and vice versa (P)

Every project organisation of the five scanned projects faced a dilemma about sharing their risk file with the contractor. Many different concepts of sharing this information (or not) were encountered.
One might think it is desirable to show the contractor all identified (ground related) risks, and vice versa. By doing so, however, project organisations feel they attract responsibility to themselves, because the information given to the contractor may contain misconceptions. This concern is not considered correct: risks are transferred only as points of attention and cannot be 'wrong'. Another rationale is that, with innovative design and build types of contracts, one might be pushing the contractor in a certain direction when exchanging risk information. After all, the intention of an integrated project is to use the knowledge and experience of the contractor for the design, and the client should not give implicit directions to a design. Balancing these considerations, from the point of view of a professional client it is recommended to always exchange risk information between the parties, at least after the tender phase.

4.3.3 Tests should be applied to the feasibility of requirements with a geotechnical scope (C)

Requirements directly related to (geo)technical aspects can be stricter than the maximum accuracy of predictions in the design and construction phase. For example, settlement requirements for roads are often more stringent than can reasonably be designed and constructed using state-of-the-art techniques and models. Negative consequences are over-dimensioning or relatively large maintenance efforts. It is therefore recommended to verify the feasibility of these types of requirements.
4.4 Lesson 4 – A ground related risk register

Lesson 4 underlines the importance of a correct, complete and up-to-date ground related risk register and gives recommendations for its realization.

4.4.1 Description of a risk needs to satisfy basic demands (C&P)

This is a general recommendation that is applicable to all types of risks, including ground related risks. It is important to describe a risk in the risk file with at least the following aspects (a data-structure sketch follows the list):
– Cause (in words) of the risk
– Consequence (in words) of the risk
– Determination of the probability of the risk
– Determination of the amount of consequences of the risk (failure costs, quality loss, delays, loss of image & public confidence, etc.)
– Possible remediation measures
– Owner of the risk
– Responsibility for the risk
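Purely as an illustration, the aspects listed above map naturally onto a simple record structure; all field names and example values below are hypothetical, not prescribed by the Geo Risk Scan:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GroundRisk:
    """Minimal risk-register entry covering the aspects listed above.

    Field names are illustrative, not taken from the guideline."""
    cause: str                 # cause of the risk, in words
    consequence: str           # consequence of the risk, in words
    probability: float         # estimated probability of occurrence
    impact: str                # failure costs, quality loss, delays, loss of image, etc.
    remediation: List[str] = field(default_factory=list)  # possible remediation measures
    owner: str = ""            # owner of the risk
    responsible: str = ""      # party responsible for the risk

# Example entry, purely illustrative:
risk = GroundRisk(
    cause="Soil investigation too sparse near the excavation pit",
    consequence="Unexpected settlement of adjacent buildings",
    probability=0.2,
    impact="Delay and repair costs",
    remediation=["Additional CPTs", "Field monitoring during construction"],
    owner="Client",
    responsible="Contractor",
)
```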
4.4.2 All GeoQ steps should be explicitly executed step by step during each project phase (P)

Following the six risk management steps will lead to good risk control of a project and hence a correct and complete risk register; this is generally accepted. Therefore, a crucial aspect in all Geo Risk Scans was the examination of the way these steps were followed in the scanned projects. However, looking back at the project activities, it always seemed possible to assign them to the six risk management steps. One needs to keep in mind that real risk management is only possible when these steps are taken according to an explicit plan. Risk management steps are performed in a cyclic approach; consequently, the steps are not always performed in succeeding order. For example, gathering extra information (executing extra soil investigation) is part of step 1, but can be done as a measure identified in step 4. Therefore, when risk management steps are not performed in succeeding order, there needs to be an explanation induced by the risk management process itself (see the previous example).

4.4.3 A ground related risk session should be organized in early project phases (P)

In the early stages of a project little investment has been made and the steering possibilities are still high. This underlines the importance of risk sessions in the early project phases. However, technical risk sessions are often ignored during the first project stages, because project management conceives technical risks as solvable in later project stages. As a consequence of this assumption, optimal technical solutions might be overlooked in the early stage of the project, which will require much more effort in later project stages. Therefore, technical risk sessions should also be planned in the early stages. One of the scanned projects proved this to be true: an early risk session identified one major construction risk. Extra soil investigation was executed, and the risk got special attention during the tender phase, in which the contractor had to make a plan showing how this specific risk was managed.

Figure 4. Need of early risk sessions.

4.4.4 Communication between the project and third parties should be explicitly risk based (P)

A project organisation can call in the help of third parties (e.g. for soil investigation, monitoring, technical advice, etc.). Third parties are often called in to manage (implicitly or explicitly) identified risks. It is important to communicate these risks as an instruction; this ensures that the right analyses are executed by the third parties. The other way around, it is also important for the project management to ask the third parties to report identified risks to them. This ensures a more complete
risk file of the project, and no important information gets lost.

4.4.5 Explicit guarantee of completeness and correctness of ground related risks and analyses is necessary (P)

Experts are of major importance in soil related risk management. Owing to the uncertainty of geotechnics, different interpretations of the same risk can almost always be expected. A guarantee of the completeness and correctness of ground related risks is therefore important. One can think of large risk sessions with many experts, colleague checks, second opinions and the use of checklists in order to provide this guarantee.

4.4.6 Checklists are recommended as a check on the completeness of risk files (C)

In line with the previous point, checklists have proven to be of good help in checking the completeness of risk files. Paragraph 3.2.2 shows how a good checklist can be implemented and used.

4.5 Lesson 5 – Risk driven site investigations

4.5.1 Site investigation should be explicitly risk based (C)

In a well-implemented (ground related) risk management process, in situ ground investigation and supporting laboratory research can be identified as a good measure: performing in situ soil and laboratory research provides insight into specific risks. The risk management that led to the in situ soil and laboratory investigation should be extended into the plan for the investigation itself. Six basic steps can be used (Staveren, 2006) to make sure that the correct information is gathered. In short:
1. Determine the ground related constructions;
2. Determine the main mechanisms that affect fitness for purpose;
3. Determine the risks if the identified mechanisms act adversely;
4. Determine the design techniques for the identified mechanisms;
5. Determine the most critical ground parameters;
6. Determine the in situ soil and lab investigation considering the ground parameters and the geological heterogeneity.

4.5.2 In situ soil and lab research should be executed flexibly (C)

By using a flexible approach to site investigation, it is possible to adjust to the results obtained during execution. On the one hand, more detailed research can be executed if more heterogeneity is encountered than expected; on the other hand, the research can be broadened if there is no reason for more detail.

4.5.3 Quality control of site investigations is necessary (C)

Laboratory and in situ soil research are used in calculations in order to make analyses. It is therefore of great
importance that the parameters derived from laboratory and in situ soil research are reliable. There can also be contractual consequences when soil investigation results that were sent to the contractor prove to be wrong. Question marks were raised in three of the five scanned projects regarding the quality of the in situ soil and lab research. Especially with specialized experiments, quality control should be applied critically; a geotechnical specialist should be able to perform these checks.
4.6 Lesson 6 – Risk driven field monitoring

4.6.1 Monitoring should be used as a tool for guaranteeing quality and controlling risks (C)

Field monitoring is an excellent tool for controlling ground related risks during the construction and operation phases of projects. Obviously, these programmes need to be defined according to the risk profile of the project. With integrated contracts, monitoring is often coordinated by the contractor. However, the client should always check the results of the applied monitoring for the key risks of the project. Monitoring should not only be checked with regard to process; regular in-depth analyses of content should also be applied.
4.6.2 Ground related risks should have an explicit place in a monitoring plan (C)

Monitoring can be broader than measuring only for the control of ground related risks. However, ground related risks should have an explicit place in a monitoring plan, in order to make sure the measurements make sense for the ground related risks to be monitored.
5 CONCLUSIONS

Unexpected ground conditions appear to be a major source of cost and time overruns in infrastructure projects, which is confirmed by recent Dutch research. The presented Geo Risk Scan proved to be an effective tool for quickly providing information about the degree and the quality of ground related risk management in infrastructure projects. Six main lessons and supporting recommendations are derived from using the Geo Risk Scan in five major Dutch projects. The main lessons are:
1. Clear risk management positioning
2. Clear risk management responsibility
3. Clear risk communication
4. A ground related risk register
5. Risk driven site investigations
6. Risk driven field monitoring
These lessons seem to be generically applicable in construction projects. Ongoing application of these lessons in Dutch projects supports this conclusion.
ACKNOWLEDGEMENTS

The authors are grateful to all professionals who were interviewed, or who otherwise contributed, during the performance of the Geo Risk Scans in the five infrastructure projects.

REFERENCES

Avendano Castillo, J.E., Al-Jibouri, S.H. and Halman, J.I.M., 2008, Conceptual model for failure costs management in construction, Proceedings of the Fifth International Conference on Innovation in Architecture, Engineering and Construction (ACE), Antalya, Turkey, July 23–25.
Essex, R.J. (ed.), 2007, Geotechnical Baseline Reports for Construction: Suggested Guidelines. Danvers: ASCE.
RISMAN, www.risman.nl
Staveren, M.Th. van, 2006, Uncertainty and Ground Conditions: A Risk Management Approach, Elsevier Ltd.
Staveren, M.Th. van, 2009, Suggestions for implementing geotechnical risk management, Proceedings of the Second International Symposium on Geotechnical Safety and Risk, Gifu, Japan, June 11–12 (in press).
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Reduction of landslide risk in substituting road of Germi-Chay dam H. Farshbaf Aghajani M.Sc in Geotechnical Engineering, Ashenab Consulting Engineers Co., Tabriz, Iran
H. Soltani-Jigheh PhD of Geotechnical Engineering, Technical Faculty, Azerbaijan University of Tarbiyat Moallem, Tabriz, Iran
ABSTRACT: Germi-Chay dam is an earth-fill dam with a central clay core that is under construction across the Germi-Chay River in Iran. Due to the dam construction, part of the main road that connects the East Azerbaijan and Ardabil provinces falls within the dam reservoir. Therefore, it is necessary to construct a substituting road, 5 km long, outside the reservoir. Part of the substituting road is located on a natural slope that is susceptible to landslide. In this paper, first, the stability of the road slope is verified by performing a back analysis for the slip surfaces. Then, an appropriate scheme is suggested for construction of the road embankment in order to attain permanent stability of the slope. Evaluation of the measured deformations shows that the slope displacements have decreased considerably and that slipping of the slope has stopped after execution of the stabilization treatment.

1 INTRODUCTION

Landslide is one of the most important geohazards in geotechnical engineering and can be a threat to the stability of structures. The potential for landsliding may increase when the slope consists of weak material and is also subjected to groundwater rise due to its location in the vicinity of a waterway. Construction of infrastructure, such as roads and buildings, on such slopes may increase the geohazard risk and can even lead to sliding of the slope and, consequently, failure of the constructed structures. To avoid such unfavourable incidents and to guarantee the stability of structures, the landslide risk must be managed and reduced using efficient and practical methods, with due consideration of the costs. Some researchers have investigated the risk assessment and management of landslides by statistical, analytical and geological methods (Lessing et al., 1983; Fell, 1994; Dai et al., 2002). Some of these methods are appropriate for assessing the risk of landslide occurrence. However, few methods have been presented for hazard risk reduction and treatment of a slipped slope that are performable in practice at low cost. In this research, a practical and efficient method is employed to reduce the landslide risk of a slipped slope, serving as the subgrade of a road, in Iran. The geological formations, landslide potential, and stability of the slope are first investigated. Then, an appropriate methodology is proposed for construction of the road embankment, considering the necessity of reducing the current landslide risk. Also, to examine the efficiency of the proposed method, the slope deformations are monitored at regular intervals during and after embankment construction.

2 SUBSTITUTING ROAD OF GERMI-CHAY DAM
Germi-Chay dam is an earth-fill dam with a central clay core that is being constructed over the Germi-Chay River, located about 220 km north-east of Tabriz city in the East Azerbaijan province of Iran. The heights of the dam above bedrock and above the river bed are 82 and 62 m, respectively. The length and width of the dam at the crest are 730 and 10 m, respectively, and the maximum reservoir water level is 1460 m above sea level. The main purposes of the Germi-Chay dam are irrigation of farmland and supply of urban drinking water. Unfortunately, due to the dam construction, a part of the main road that connects the East Azerbaijan and Ardabil provinces is located within the dam reservoir. For this reason, a substituting road, 5 km long, is being constructed outside the reservoir. A part of the substituting road, at kilometre 2 + 065, is located on a natural slope that has already slipped and is susceptible to landslide. The geological investigation of the area shows that the slope is comprised of weak shale material located on igneous bedrock. Furthermore, a waterway is located in the vicinity of the subgrade slope, so during the rainy seasons ground water rises within the slope and causes slips and tension cracks in the subgrade slope. Field measurements indicate that the horizontal and vertical displacements during 9 months are 714 and 346 centimetres, respectively. Views of the cracks and slips that occurred in the subgrade slope are presented in Figures 1 and 2, respectively.
Figure 3. Geological map of the study area as well as the landsliding mass with the substituting road axis.

Figure 1. Tension cracks near borehole BH 55 due to slipping.
Figure 2. A view of the slip in the subgrade slope as well as the substituting road axis.
With respect to the importance of the substituting road from the viewpoint of linking the two provinces, and its location on the slipped area, this research is concerned with permanently stabilizing the subgrade slope and securing the safety of the road.
3 GEOLOGY OF LANDSLIDING AREA
To investigate the geological and geotechnical properties of the studied area, six boreholes, denoted BH-53, BH-54, BH-55, BH-56, BH-57, and BH-R1, were drilled in the subgrade slope. The results of the subsurface explorations and laboratory tests showed that the geological structure of the subgrade slope consists of recent alluvium (TR), Quaternary terraces (TA), crushed red rhyodacite (RhD), quartz diorite & quartz monzo-diorite (QD & QMD), and an alternation of grey shale with yellow sandstone (Sh & S). Details of these formations are illustrated in Figure 3 (Ashenab, 2005). The recent alluvium is located at the sides of the waterway and comprises silt and sand mixtures; the thickness of this formation is less than 3 m. The Quaternary terraces comprise mixtures of fine-grained soils with some sand and gravel.
The sedimentary formations consist of shale and sandstone materials that are frequently repeated with depth within the slipped zone, and are exposed at kilometre 2 + 065 of the substituting road. At deeper levels the amount of sandstone increases, and sometimes the sedimentary layer consists of sandstone alone. The shale mass is in the form of claystone and siltstone. In order to investigate the properties of the shale material, a number of experimental tests were carried out on samples obtained from the BH-54 and BH-55 boreholes. The results indicated that these materials have low shear strength, with an average liquid limit (LL) and plasticity index (PI) of 42 and 17, respectively. Also, according to the Unified Soil Classification System, the majority of the samples are categorized as CL (ASTM 1997). Because of the weak texture of the sedimentary formations, precise determination of the strikes and dips of the layers is difficult. However, site explorations show that the slip plane of the slope does not coincide with the dip direction of the sedimentary layers of shale material. On the right-hand side of the waterway, the geological formations are made up of the rhyodacite mass. Also, at the upper elevations of the subgrade slope, a rhyodacite mass with a depth of almost 50 m is located over the shale layer. The rhyodacite formation has good quality and strength, and its fragments are used as concrete aggregates. Bedrock formations of quartz diorite & quartz monzo-diorite materials are embedded under the mentioned layers. The bedrock has relatively good quality and high strength, with few joints. According to the site explorations, the depths of bedrock at the BH-53, BH-54 and BH-55 boreholes are 10, 14.8 and 3.5 m, respectively. The slope has slipped because of seasonal rain and the weak condition of the shale mass. Figure 3 illustrates the boundary and direction of the slips. Also, a geological longitudinal profile of the slope along the slip direction (i.e. the G-H cross section in Figure 3) is presented in Figure 4. As illustrated in this figure, the slipped area can be divided into two separate portions. The first portion, specified as the S.S.1 area, includes the land slips in the upper elevations of the slope that occurred due to the movement of the rhyodacite mass on the shale layer.
Figure 4. Geological profile of the slope (G-H), locations of slip surfaces 1 and 2, as well as the ground water level.

Table 1. Strength parameters of slope materials.

Material type      Friction angle (deg.)   Cohesion (kPa)   Density (kN/m3)
Rhyodacite         30                      10               22
Shale              23                      100              20
Slip Surface 1*    15                      20               19.5
Slip Surface 2*    15                      15               19.5

* Obtained from the back-analysis.
The values of displacement at the benchmarks installed on the rhyodacite mass are only a few centimetres. This is due to the existence of shallow bedrock in front of the rhyodacite mass, which withstands the movements. The second portion of the slips is the ground movement at the lower elevations of the slope through the shale material, i.e. the S.S.2 area shown in Figures 3 and 4. Field observations indicated that the movements have progressively increased during the rainy seasons. Also, subsurface explorations determined the depth of sliding to be about 10 m. This relatively shallow depth is related to the non-conformity of the shale lamination with the slip direction, as well as to the increasing fraction of sandstone at greater depths. Although the strong sandstone extends to a considerable depth, the horizontal movements are large: about 714 centimetres over a period of nine months.
4
Strength parameter of slope materials.
BACK ANALYSIS OF SLIP
Prior to stabilizing the subgrade slope, it is necessary to know the mechanical parameters of the materials located along the slip surfaces. The most efficient method for determining these parameters is to perform a back stability analysis of the slipped slope (Sabatini, 2002). In view of the progressive movement of the subgrade slope, it can be concluded that the shale material along the slip surface has reached the residual condition. In this condition, the soil cohesion is negligible and the effective friction angle may be determined by performing a stability analysis with an assigned safety factor of 1.0 (USACE, 2003). The back-analysis was performed with the limit equilibrium method using the SLOPE/W software, with the Mohr-Coulomb criterion utilized for modeling the materials. Since the slip surface is known from the field, it must be defined carefully for the back-analysis. Therefore, the slip surfaces (S.S.1 and S.S.2), whose positions are identified, are modeled as narrow bands of weak material with unknown friction angle, distinct from the other slope materials. It should be noted that the SLOPE/W software is unable to model interfaces between different materials for the purpose of defining slip surfaces. By applying this approach, it is not necessary to reduce the parameters of the whole slope. The Mohr-Coulomb parameters of the other materials, such as cohesion and friction angle, were obtained from triaxial and direct shear tests on samples retrieved from the boreholes. The density, cohesion and
friction angle of the slope materials are presented in Table 1. Subsurface explorations determined the ground water level to be 2 m below the ground surface. The two-dimensional geometry of the slope, the material layers, the locations of the slip surfaces and the ground water level are shown in Figure 4. The values of the friction angle and cohesion of the slip surface materials obtained from the back-analysis are presented in Table 1. Comparison between the obtained results and values recommended in the literature supports the accuracy of the back-analysis (Bowles, 1996).
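The back-calculation itself was carried out with SLOPE/W, but the underlying idea — fix the factor of safety at 1.0 and solve for the mobilised friction angle — can be sketched in a few lines of Python. The sketch below is illustrative only: it uses a simplified infinite-slope factor-of-safety expression rather than the method-of-slices model of the paper, and the slope angle, sliding depth and seepage assumptions are assumed values, not data from the project.

import math

def fs_infinite_slope(phi_deg, c, gamma, depth, beta_deg,
                      gamma_w=9.81, hw_ratio=1.0):
    # Factor of safety of an infinite slope with seepage parallel to the
    # slope face (a simplified stand-in for the limit equilibrium analysis).
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma_n = gamma * depth * math.cos(beta) ** 2          # total normal stress
    u = gamma_w * hw_ratio * depth * math.cos(beta) ** 2   # pore pressure
    tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c + (sigma_n - u) * math.tan(phi)) / tau

def back_analyse_phi(c, gamma, depth, beta_deg, target_fs=1.0):
    # Bisection for the friction angle giving FS = target_fs, i.e. the
    # residual strength mobilised along the slip surface at failure.
    lo, hi = 1.0, 45.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if fs_infinite_slope(mid, c, gamma, depth, beta_deg) < target_fs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative inputs: c and gamma echo Table 1, but the 20-degree slope
# angle and the 10 m sliding depth are assumptions for this sketch.
phi_res = back_analyse_phi(c=15.0, gamma=19.5, depth=10.0, beta_deg=20.0)
print(f"back-calculated friction angle ~ {phi_res:.1f} degrees")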
5 A SCHEME FOR SUBGRADE SLOPE STABILIZATION

Since the safety of the substituting road was threatened by probable future sliding of the subgrade mass, mainly due to the large deformations expected in the S.S.2 area, it was necessary to seek a remedial treatment for stabilizing the subgrade slope that was both practically and economically feasible. In recent decades, many approaches and methodologies have been introduced for the stabilization and treatment of slopes. These methods can be categorized into the following groups: modification of the slope geometry, control of surface water and internal seepage, provision of retention, increasing soil strength by injection, and soil reinforcement (Hunt, 2007; Cheng & Lau, 2008). Selection of the appropriate method is based on several factors such as practical feasibility, economy and available facilities. In the Germi-Chay project, the slope geometry modification method would have required enormous excavation and thus a high cost. Because of the slipped and weak texture of the shale mass, the efficiency of soil reinforcement was also doubtful. As mentioned in the geological description, the bedrock under the slope in the S.S.1 area is located at shallow depth. This feature was therefore utilized to construct a retaining system with appropriate drainage capacity. To stabilize the slope, it was proposed to excavate the foundation of the road embankment down to the bedrock level. The foundation trench was then filled with rockfill material up to the design level of the road embankment and, finally, the road embankment was completed. It was expected that the road embankment would act as a barrier against slope sliding and
Figure 5. Geometry of the slope after stabilization and road embankment.
Figure 6. Potential slip surface after stabilization.

Figure 7. Excavation and filling processes of the strips during construction.
decrease the movement of the upper mass, owing to the high strength and stiffness of the rockfill material used in the embankment and its appropriate geometry. Figure 5 illustrates the stabilization scheme of the subgrade slope as well as the road embankment section. The proposed stabilization scheme was modeled and analyzed using the SLOPE/W software to evaluate the safety of the slope and road and to determine potential slip surfaces. The factor of safety of the S.S.1 surface was also determined by a stability analysis of the proposed geometry. As indicated in Figure 6, the results of the analysis show that in the presence of the road embankment, the safety factor of the S.S.1 surface increases from nearly 1.0 (the threshold condition in the stability analysis) to 2.545. Moreover, the safety factor of the embankment constructed on the bedrock is about 3.85. Since the embankment is composed of rockfill material with high drainage capacity, the phreatic line lies at a lower level and the actual safety factor exceeds the theoretical calculation. In addition, a culvert was constructed within the road embankment to conduct surface flows toward the downstream side, further increasing safety in this respect. For the economic evaluation of the proposed stabilization method, its costs were calculated and compared with those of the geometry modification method. The results indicated that in the proposed method, the volumes of subgrade excavation and foundation filling are 21,650 and 16,100 cubic meters, respectively. The second method, by contrast, would require 256,000 cubic meters of excavation as well as 94,000 cubic meters of rockfill for constructing an embankment to a level higher than the normal water level of the reservoir. Thus, the excavation volume of the second method is almost 12 times greater than that of the proposed method (256,000 / 21,650 ≈ 11.8).
6 METHODOLOGY OF THE EMBANKMENT CONSTRUCTION
Because of the ongoing movement of the slope and the low strength parameters of the shale material, it was expected that excavating the subgrade slope down to the bedrock could itself cause instability and hazard in the slope, particularly because the trench is located at the toe of the slope. As a result, in this research a comprehensive method was proposed to excavate the road trench down to the bedrock level without causing any instability in the slope. In the suggested method, first, the material of the embankment basin located outside the slipped region (S.S.1) was excavated to the bedrock and immediately filled with coarse-grained material. For excavation of the road embankment foundation located on the slipped area, the foundation was divided into a number of narrow strips (about 3 meters wide) perpendicular to the longitudinal axis of the road. To avoid any instability within the slope during the construction process, each strip was first excavated from the downstream to the upstream side of the slope and then filled with rockfill material up to the original ground level. The adjacent strip was then treated in the same way. This operation was executed for all strips until the weak shale material of the road subgrade had been replaced with strong rockfill material. Since the bedrock is shallowest at the northeast of the embankment, the excavation and filling process commenced from this location. After completing the excavation and filling of all the strips, the road embankment was constructed safely. Figure 7 shows the excavation and filling process for one strip of the road foundation. The embankment of the substituting road after completion is shown in Figure 8.
7 MONITORING OF SLOPE BEHAVIOR
To control the deformations of the slope during construction, displacements were surveyed and evaluated regularly at the benchmarks. The investigations showed that a considerable reduction in deformations occurred after construction of the road embankment. For example, the variations of displacements during and after construction at the benchmark located near borehole BH-55 and at benchmark Z21 are presented in Figure 9. As shown in this figure, the horizontal displacement of the benchmark located at borehole BH-55 reached 721 cm before construction of the road embankment (from Oct. 2006 to July 2007), an average rate of about 2.38 cm/day. During construction of the embankment (from July 2007 to Oct. 2007), the displacement of this benchmark showed a considerable reduction. Unfortunately, this benchmark was destroyed during the construction operation, so another benchmark (Z21) was installed in the slip direction and the slope behavior was investigated by monitoring its displacements. Surveying during road construction indicated that the accumulated displacement of benchmark Z21 over a comparable rainy season was 16 centimeters (from Oct. 2007 to May 2008), an average rate of about 0.7 mm/day. Comparing the displacement rates of the two benchmarks over similar time intervals indicates that the deformation rate of the subgrade slope decreased by a factor of about 34 (23.8 mm/day against 0.7 mm/day). During embankment construction, the displacement of Z21 increased by only about 4 centimeters (from May 2008 to Oct. 2008). Since completion of the embankment, the movement of benchmark Z21, and hence the slipping of the slope, has almost stopped. This result indicates the effectiveness of the embankment in reducing the landslide risk and attaining permanent stability of the substituting road.

Figure 8. Embankment of substituting road after completion.

Figure 9. Measured accumulated displacements at benchmarks located on the slope and construction level of earthfill.

8 CONCLUSIONS

In this paper, a practical, effective and economical method was proposed and implemented to reduce the landslide risk and to construct a substituting road. The enormous cost of slope geometry modification was avoided by applying the proposed method for risk management. To ensure the stability of the slope, which forms the subgrade of the road, during and after construction, it was necessary to monitor and evaluate the deformations at benchmarks installed on the rhyodacite mass and the shale slope (upstream of the road embankment). By evaluating the measured data, the slope behavior can be managed and studied continuously, and critical deformations that could lead to hazards can be predicted.

REFERENCES
Ashenab 2005. Geology Report of Germi-Chay Dam. Ashenab Consults, Tabriz, Iran.
ASTM 1997. Standard practice for classification of soils for engineering purposes (Unified Soil Classification System), ASTM D2487. West Conshohocken, PA: American Society for Testing and Materials.
Bowles, J.E. 1996. Foundation analysis and design, 5th edition. New York: McGraw-Hill.
Cheng, Y.M. and Lau, C.K. 2008. Slope stability analysis and stabilization: new methods and insight. New York: Routledge, Taylor & Francis Group.
Dai, F.C., Lee, C.F. and Ngai, Y.Y. 2002. Landslide risk assessment and management: an overview. Engineering Geology 64(1), 65–87.
Fell, R. 1994. Landslide risk assessment and acceptable risk. Canadian Geotechnical Journal 31(2), 261–272.
Hunt, R.E. 2007. Geological hazards: a field guide for geotechnical engineers. New York: Taylor & Francis Group.
Lessing, P., Messina, C.P. and Fonner, R.F. 1983. Landslide risk assessment. Environmental Geology 5(2), 93–99.
Remondo, J., Bonachea, J. and Cendrero, A. 2005. A statistical approach to landslide risk modelling at basin scale: from landslide susceptibility to quantitative risk assessment. Landslides 2(4), 321–328.
Sabatini, P.J., Bachus, R.C., Mayne, P.W., Schneider, J.A. and Zettler, T.E. 2002. Geotechnical Engineering Circular No. 5: Evaluation of soil and rock properties. FHWA-IF-02-034, Washington, DC.
USACE 2003. Slope stability, EM 1110-2-1902. U.S. Army Corps of Engineers, 31 October 2003.
Risk assessment
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Probabilistic risk estimation for geohazards: A simulation approach M. Uzielli Georisk Engineering S.r.l., Florence, Italy
S. Lacasse & F. Nadim International Centre for Geohazards, Norwegian Geotechnical Institute, Oslo, Norway
ABSTRACT: Risk estimation for geohazards is inherently characterized by considerable uncertainties in inputs and outputs. Uncertainty-based analysis provides a more robust and complete assessment of risk in comparison with deterministic analysis. A probabilistic framework for the quantitative, uncertainty-based estimation of risk for geohazards is proposed. The framework relies on Monte Carlo simulation of risk through the preliminary definition of its macro-inputs: hazard, vulnerability and the value of elements at risk. The paper provides an operational description of the framework as well as conceptual discussions supporting and motivating each step of the procedure. A calculation example is provided.
1 INTRODUCTION
It is generally accepted that quantitative risk estimation for natural hazards is to be preferred over qualitative estimation whenever possible, as it allows for a more explicitly objective output and an improved basis for communication between the various categories of actors involved in technical and political decision-making. The considerable heterogeneity in conceptual approaches to risk estimation is a well-known fact. No univocal definition is available at present, and the conceptual unification of risk analysis methods currently appears to be a practically unattainable goal. A consistent quantitative risk estimation analysis must rely on a reference risk framework. UNDRO (1979), for instance, proposed the following model, in which risk is calculated as the product of three macro-factors:

R = H · V · E    (1)

in which H is the hazard, V is the vulnerability and E is the value of the elements at risk.
To avoid the undesirable consequences of misinterpretations of risk estimates and assessment due to the aforementioned terminological fragmentation, it is essential to provide reference definitions explicitly. In the ISSMGE Glossary of Risk Assessment Terms (e.g. http://www.engmath.dal.ca/tc32/), risk is defined as a “measure of the probability and severity of an adverse effect to life, health, property, or the environment”. Hazard is “the probability that a particular hazardous event occurs within a given period of time”; vulnerability is “the degree of expected loss in an element or system in relation to a specific hazard”. The “elements at risk” macro-component parameterizes the value of vulnerable physical or non-physical assets in a reference system. The measurement units of
elements at risk are not univocal, and depend at least on the reference time frame of the hazard, the typology of the elements at risk and the investigator's perspective. The value of physical assets, for instance, is usually measured and expressed in financial units, while the value of lives has been parameterized using both financial and non-financial units (e.g. 'equivalent fatalities'). Safety and cost-benefit optimization are primary, essential objectives of risk management. The quest for optimum decision-making, which ensures the safety and performance of a physical system while minimizing the cost of risk management and mitigation, can be associated with the concept of minimization of excess conservatism. Conservatism is in itself a positive concept, as it is aimed at the attainment of safety and at the reduction of the likelihood and risk of undesirable performance of a physical system to a level which is deemed acceptable or tolerable. Excess conservatism, however, should be avoided, as it corresponds to a non-optimal use of resources in attaining the aforementioned goals. While the above qualitative reasoning is trivial, the borders between under-conservatism, conservatism and excess conservatism are not easily assessed in risk management practice for natural hazards, because of the relevance of uncertainties in the physical environment. Among the main factors contributing to uncertainty are: (a) the difficulty in parameterizing the destructiveness of geohazards; (b) the heterogeneity and the spatial and temporal variability of the physical environment; (c) the complexity of the interaction between the hazardous event and the physical system; and (d) the indetermination in mitigation costs. Neglecting uncertainties can introduce unnecessary conservatism or, at the other extreme, lead to mitigation countermeasures which are inadequate in terms of safety and performance. Although mathematical disciplines such as statistical science can
effectively contribute to quantifying, modeling and processing uncertainties, they are not routinely used in estimating risk for geohazards. With reference to the risk model in Eq. (1), while hazard is usually investigated in an uncertainty-based perspective, vulnerability and elements at risk are almost invariably estimated deterministically. The conventional quantification of risk also ignores the effects of uncertainties, and the term "risk" is often used as a synonym for the expected value of risk. A truly non-deterministic approach to risk estimation should acknowledge the existence of uncertainties in all risk macro-factors, and address such uncertainties explicitly. This paper illustrates a framework for uncertainty-based, quantitative risk estimation for geohazards by Monte Carlo simulation. Operationally, the emphasis is on practical applicability and robustness. As large uncertainties exist in the parameters and models used for risk estimation, it is deemed neither advisable nor meaningful, from an operational perspective, to employ overly refined techniques for uncertainty modeling and uncertainty propagation analysis, since these would require significantly more complex theoretical frameworks and restrictive conditions on parameters, hardly compatible with the type and quantity of data available in risk estimation practice. Nonetheless, the methods and criteria adopted in the proposed approach are theoretically sound and well suited to systems with large uncertainties. Uzielli (2008) extended the scope of the framework to include probabilistic decision-making criteria for risk mitigation. The term "Monte Carlo simulation" embraces a wide class of computational algorithms which are capable of effectively simulating complex physical and mathematical systems by repeated deterministic computation of user-defined models using random or pseudo-random values sampled from user-assigned distributions. Despite the vast and heterogeneous character of the algorithms, a common operational sequence consisting of the following steps can be identified: (1) probabilistic modelling of uncertainties in parameters and models; (2) repeated deterministic model simulation by computation on sampled model inputs; and (3) aggregation of the results of the individual simulation instances into the analysis output. These steps are developed in the following sections.
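In code, this three-step sequence maps naturally onto array operations. A minimal Python/NumPy skeleton is given below; the two-input model and its sampling distributions are placeholders invented for illustration, not part of the framework itself.

import numpy as np

rng = np.random.default_rng(42)

# Step 1 -- probabilistic modelling: assign sampling distributions
# to the uncertain inputs (hypothetical distributions).
T = 100_000
x = rng.triangular(left=0.8, mode=1.0, right=1.4, size=T)
y = rng.uniform(low=0.2, high=0.6, size=T)

# Step 2 -- repeated deterministic computation of the user-defined
# model on the sampled inputs (vectorized instead of an explicit loop).
z = x * y  # placeholder deterministic model

# Step 3 -- aggregation of the individual instances into the output:
# sample statistics and quantiles of the simulated response.
print("mean =", z.mean(), " std =", z.std(ddof=1))
print("Q05, Q50, Q95 =", np.quantile(z, [0.05, 0.50, 0.95]))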
2 UNCERTAINTY MODELLING FOR RISK ESTIMATION

2.1 Uncertainty: definitions
The uncertainty in risk factors can be seen as the result of the complex aggregation of aleatory and epistemic uncertainties (see e.g. Phoon & Kulhawy 1999). Aleatory uncertainty stems from the temporal and/or spatial variability of relevant parametric attributes of both the hazardous event and vulnerable elements in the reference system. Epistemic uncertainties are a consequence of the practical impossibility
to measure precisely and accurately the physical and non-physical characteristics of the reference system and the hazardous event, and to model their interaction confidently. The absolute and relative magnitudes of the aleatory and epistemic components of total uncertainty are markedly case-specific. The aleatory and epistemic uncertainties addressed herein are conscious uncertainties, in the sense that the analyst is aware of their existence. Unknown uncertainty refers to the knowledge which is not currently attainable by the analyst. The reader is referred to Ayyub (2001) for an interesting discussion of uncertainty categorization. A number of mathematical techniques for modelling and processing uncertainties are available. Here, a probabilistic approach is chosen for a variety of reasons. First, probability theory (most often used in conjunction with statistical theory) provides a widely used, generally well understood and accepted framework for which a vast bulk of theoretical and applicative literature is available. Second, some of the basic concepts in risk analysis (e.g. the definition of hazard) are conceptually linked to probability. Third, enhanced computational capabilities allow extensive use of techniques such as Monte Carlo simulation, which are of greater applicability than other probabilistic techniques such as First-Order Second-Moment approximation, the latter requiring constraints on the degree of linearity of the reference models and on the magnitude of uncertainties in input variables. Fourth, due to the diffusion of probabilistic concepts in the technical disciplines, probabilistic approaches can be implemented with relative ease using general distribution software such as electronic spreadsheets. Fifth, uncertain parameters can be modelled probabilistically using both objective and subjective criteria, thereby allowing greater applicability to risk assessment analyses. The ‘dual nature’ of probability, which includes the objective ‘frequentist’ perspective and the subjective ‘degree of belief’ perspective, is of great practical significance in the context of QRE for geohazards as it is most often necessary to resolve to both objective and subjective modelling. Objective modelling can be performed if results of descriptive statistical analyses on samples of the random variates of interest are available, for instance in the form of frequency histograms. Once a suitable distribution type has been selected by the user, distribution parameters can be retrieved using appropriate inferential statistical techniques involving distribution fitting. A purely objective modelling is seldom feasible in the context of QRE for geohazards, where data are invariably limited in number and quality and the complexity of the interaction between a hazardous event and any reference system exceeds the analyst’s modelling and parameterization capabilities. Moreover, site-specific conditions may require substantial interpretation. A purely subjective modelling relies on the analyst’s experience, prior information, belief, necessity or, more frequently, a combination thereof. Subjective modelling should not be viewed as a surrogate of objective modelling (Vick 2002). In practice, probabilistic modelling is invariably hybrid
(i.e. both subjective and objective) to some extent. Well-established frameworks such as Bayesian theory allow rigorous merging of subjective and objective probabilistic estimates (a minimal conjugate-updating sketch is given at the end of this subsection). Whether the assignment is objective or subjective, it is important to recognise that reducing the magnitudes of the sources of uncertainty requires fundamentally different actions. Epistemic uncertainty can be reduced by increasing the amount and quality of data and by refining models. Aleatory uncertainty, however, may remain unchanged or even increase with increases in the quality and quantity of data, because the real degree of scatter in the values of relevant parameters may increase if more observations, better measurement tools and more refined models become available. To reduce aleatory uncertainty, it is necessary to increase the resolution of the analysis (e.g. by defining a greater number of more specific categories, or by subdividing the reference system into geographical sub-units) so as to decrease the intra-category heterogeneity of vulnerable elements.
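As an aside, the merging of subjective and objective estimates mentioned above can be illustrated with the simplest conjugate Bayesian case: a subjective Beta prior on an occurrence probability, updated with an observed event record. All numbers below are invented for illustration.

from scipy import stats

# Subjective prior on an annual occurrence probability, encoded as a
# Beta distribution (hyperparameters chosen for illustration only).
a_prior, b_prior = 2.0, 8.0          # prior mean = 0.20

# Objective data: suppose 3 triggering events observed in 40 years.
events, years = 3, 40

# Conjugate update: Beta prior + binomial likelihood -> Beta posterior.
a_post = a_prior + events
b_post = b_prior + (years - events)

posterior = stats.beta(a_post, b_post)
print(f"posterior mean = {posterior.mean():.3f}")
print("95% credible interval =", posterior.interval(0.95))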
2.2 Uncertainty-based modelling of risk factors
The character of the interaction between a hazardous event and a vulnerable element depends on the characteristics of both the event and the element. In the technical risk analysis literature, hazard, vulnerability and elements at risk are expressed formally in a variety of ways. Here, the risk macro-components are modelled as functions of intensity. In qualitative terms, intensity parameterizes the damaging potential of the hazardous agent. The concept of intensity is not univocally established in a quantitative sense: the diversity and heterogeneity of hazardous events are such that it is difficult to attain a general quantitative definition. Even for a single typology of hazardous event, the literature reveals that different parameterizations of intensity have been identified as most suitable depending on the problem to be investigated. In earthquake risk analyses, for instance, commonly adopted intensity parameters include peak ground acceleration, peak ground velocity, peak ground displacement, spectral acceleration and magnitude. The situation is even more complex for landslides. A reference intensity parameter should be selected by the user with the aim of concisely describing the most relevant damaging characteristic of the event. It depends on both the type of event and the type of vulnerable element, because different vulnerable elements may suffer prevalently from different attributes of the event. Here, the aforementioned intensity-dependence is formalized by identifying a reference intensity parameter IN for the hazardous event and by subsequently expressing hazard, vulnerability and elements at risk quantitatively as risk factor functions of IN. The framework proposed herein allows the selection of any scalar- or vector-valued intensity parameter from which risk macro-factor functions can be defined univocally. Three risk factor functions are defined, namely: the hazard function fH(IN); the vulnerability function
fV(IN); and the elements at risk function fE(IN). Each of these functions can be described analytically if a concise model is available or devised by the user; otherwise, functions can be defined empirically by points at relevant levels of nominal intensity. Risk factor functions are defined in a common domain of nominal intensity values. A nominal value is a representative deterministic value of a non-deterministic parameter. In the subsequent phase, uncertainty must be associated with the nominal values. The total uncertainty in risk factors is, in operational terms, an aggregation of parameter and transformation uncertainty. Parameter uncertainty in risk factors is due essentially to the total uncertainty in the reference intensity parameter, which serves as input to the risk factor functions. Epistemic uncertainty in intensity results at least from the limited capability of measuring dynamic characteristics (e.g. velocity, momentum, seismic magnitude) and geometric features (e.g. volume, displacement, area, depth), and from the uncertainty in the model used to define the reference intensity parameter from available data. Aleatory uncertainty in intensity is due, among other things, to the complexity of the physical media which are mobilised in the course of a hazardous event and to the spatial and/or temporal non-stationarity of the dynamic and geometric characteristics of any hazardous event. As will be shown in Section 3.2, intensity is directly involved in the sampling process of risk factors because the sampling distributions of risk factors must account for the uncertainty in intensity. Transformation uncertainty in risk factors stems from the risk functions' limited capability of modeling and approximating the physical world. Parametrically, transformation uncertainty can include bias and scatter (or dispersion). Bias is related to a function's accuracy; scatter is related to its precision.
2.3 Uncertainty modeling for simulation

In a probabilistic simulation perspective such as the one adopted herein, uncertainty modelling requires the generation of sampling distributions for intensity I and for the risk macro-factors H, V and E. Here, two approaches are proposed for the generation of sampling distributions: the direct approach is applicable if the distribution parameters are known from objective analyses or can be assigned subjectively. This approach relies on the implementation of the definitions of the selected probability distribution types. The indirect approach requires the preliminary generation of distribution parameters from second-moment statistics such as the standard deviation, variance or coefficient of variation. Experience has shown that a relatively limited set of distributions is able to fit satisfactorily a wide range of observed phenomena. Selected distribution types must be consistent with the definition and properties of the parameters which are being modelled. Different distribution types may be used for the same macro-factor at different intensity levels.
Hazard has been defined herein as a probability of occurrence of a hazardous event. As probability values are by definition inferiorly and superiorly bounded at 0 (no likelihood of occurrence) and 1 (certainty of occurrence) respectively, any distribution used in modelling hazard must be bounded in the closed interval [0,1]. Vulnerability as defined in Section 1 is also defined in the closed interval [0,1]. Even at the greatest level of generality, it is intuitive that values of elements at risk must be both non-negative and non-infinite. Hence, inferiorly and superiorly bounded distributions are assumed for all of the risk macro-components. Among the commonly used probability distributions which satisfy the conditions of lower- and upper-bounding are the uniform distribution and the PERT distribution, both special cases of the Beta distribution. The probability density function of the Beta distribution for a continuous random variable θ is given by

f(θ) = (θ − θl)^(α1−1) · (θu − θ)^(α2−1) / [B(α1, α2) · Δθ^(α1+α2−1)]    (2)

in which B(α1, α2) is the beta function with inputs α1 and α2; and

Δθ = θu − θl    (3)

is the range of θ, given by the difference of the two extreme values (upper-bound value θu and lower-bound value θl). The uniform distribution is a special case of the Beta distribution with α1 = α2 = 1. The probability density function of the uniform distribution can also be expressed in simplified form:

f(θ) = 1/Δθ for θl ≤ θ ≤ θu; f(θ) = 0 otherwise    (4)

The uniform distribution should be assumed whenever no objective information or subjective motivations are present regarding the existence of values inside the parameter domain which are more likely to occur than others. The PERT distribution is a particular case of a Pearson type-I Beta distribution which requires user specification of the modal value θm of the distribution and the extremes only. The characteristic parameters of the PERT distribution, α1 and α2, are calculated by:

α1 = 1 + 4(θm − θl)/Δθ    (5)

α2 = 1 + 4(θu − θm)/Δθ    (6)

Modal values can be taken as corresponding to nominal values when these are available, or can be assigned subjectively. When distribution parameters are not available and uncertainty is parameterized in terms of second-moment statistics, the indirect approach can be used to estimate lower- and upper-bound values. In descriptive statistical terminology, the coefficient of variation (COV) of a generic parameter ψ is defined as the ratio of the standard deviation of a dataset to its expected (i.e. mean) value:

COV(ψ) = σ(ψ)/E[ψ]    (7)

As the COV provides an effective measure of the relative dispersion of a dataset around its mean value, it can be conceptually associated with uncertainty, with higher values attesting to higher levels of uncertainty. It is thus possible to transpose a qualitative judgment on the level of uncertainty in a parameter (or model) quantitatively by associating a COV with it. For instance, a small COV (e.g. COV < 0.10) can be used to represent the belief in a low level of uncertainty. COVs in the range 0.10–0.30 can be regarded as "intermediate", while higher values attest to high uncertainty. In the indirect approach, the derivation of lower- and upper-bound parameters can be achieved using statistical theory, by which relations between the standard deviation and the range of a random variable are available depending on the known or assumed distribution type. The standard deviation σ(θ), if it is not available directly, can be calculated by inverting Eq. (7) using the modal value as the expected value (assuming that the distribution is quasi-symmetric with respect to the modal value). It is then used to calculate the upper- and lower-bound values by calculating the range. If the distribution is uniform-type, the range is given by:

Δθ = √12 · σ(θ)    (8)

In case of PERT-type distributions:

Δθ = 6 · σ(θ)    (9)

Lower- and upper-bound values are then given, respectively, by subtracting and adding the semi-range to the modal value. It may be necessary to impose constraints on the inherent limiting values θmin and θmax:

θl = max(θm − Δθ/2, θmin);  θu = min(θm + Δθ/2, θmax)    (10)

The inherent boundary values θmin and θmax are parameter-specific: for hazard and vulnerability, for instance, θmin = 0 and θmax = 1; for elements at risk, θmin = 0 and θmax < ∞. Once the characteristic parameters are defined, uniform or PERT-type probability distributions can be generated using the direct approach. The indirect approach should be applied with confidence only for parameters whose distributions can reasonably be expected to be at least approximately symmetric around their expected value.
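The indirect approach reduces to a few lines of code. The Python sketch below is a minimal illustration (the function name and example values are ours, not the paper's): a COV is converted into a standard deviation about the modal value, the range is taken as 6σ per Eq. (9), the bounds are clipped to the inherent limits per Eq. (10), and a scaled Beta variate is sampled with the PERT shape parameters of Eqs. (5)–(6).

import numpy as np

rng = np.random.default_rng(0)

def pert_sample_indirect(mode, cov, t, theta_min=0.0, theta_max=np.inf):
    # Indirect approach: COV -> sigma -> range -> bounds -> PERT sample.
    sigma = cov * mode                      # invert Eq. (7), mean ~ modal value
    half_range = 3.0 * sigma                # Eq. (9): range = 6*sigma
    lo = max(mode - half_range, theta_min)  # Eq. (10): constrained lower bound
    hi = min(mode + half_range, theta_max)  # Eq. (10): constrained upper bound
    a1 = 1.0 + 4.0 * (mode - lo) / (hi - lo)   # Eq. (5)
    a2 = 1.0 + 4.0 * (hi - mode) / (hi - lo)   # Eq. (6)
    return lo + (hi - lo) * rng.beta(a1, a2, size=t)

# Example: a vulnerability-like parameter with modal value 0.5, an
# "intermediate" COV of 0.20 and the inherent bounds [0, 1].
v = pert_sample_indirect(mode=0.5, cov=0.20, t=100_000, theta_max=1.0)
print(v.mean(), v.min(), v.max())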
3 RISK CALCULATION
The framework illustrated herein allows the estimation of risk by Monte Carlo simulation at Q ≥ 1 user-selected target nominal intensity levels, as long as the macro-factor functions are defined at each of these target levels.
3.1 Preliminary considerations
In a theoretical perspective, the definition of the size of sampling distributions is a central aspect of Monte Carlo simulation. In the most general terms, increasing the size of the sampling distributions reduces the bias in the statistics of the generated samples with respect to the true "unknown" population statistics, thus allowing for a more reliable representation of the randomness of the parameters under investigation. The trade-off for the reduction of estimation uncertainty in the samples is a larger computational expense. In risk estimation for geohazards, distribution types and parameters are assigned to a large extent subjectively, owing to the difficulties in obtaining reliable measurements and in characterizing the type of uncertainty associated with relevant parameters. Hence, the magnitude of epistemic uncertainty can be expected to exceed the estimation uncertainty associated with an excessively small sample size. For this reason, it is not deemed meaningful to pursue an exceedingly formal, rigorous estimation of the sampling distribution size. The optimum size can be identified heuristically by assessing the convergence of the outputs for increasing sample size, i.e. by adopting a sampling distribution size whose simulation outputs are sufficiently similar to those pertaining to larger sampling distribution sizes, while ensuring computational feasibility (a minimal sketch of such a check is given at the end of this subsection). Investigating correlation among input variates is another central issue in Monte Carlo simulation, and the object of a vast bulk of ongoing research involving, for instance, copula theory. In the proposed framework, correlation is implicitly accounted for, as the risk factors are all defined as functions of intensity. Monte Carlo simulations are performed at reference nominal intensity levels. At each reference intensity level, the risk factors can be assumed to be uncorrelated for operational purposes, since the distribution types and distribution parameters used for sampling the different factors are mutually independent.
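The convergence heuristic referenced above is easy to automate: run the simulation at increasing sampling distribution sizes and check that the statistics of interest stabilise. In the Python sketch below, the macro-factor distributions are placeholders invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

def simulate_risk(t):
    # One run of a placeholder risk model R = H*V*E (all distributions
    # illustrative, not those of any real case).
    h = rng.beta(2.0, 4.0, size=t)                   # hazard in [0, 1]
    v = rng.uniform(0.3, 0.7, size=t)                # vulnerability in [0, 1]
    e = rng.triangular(1.2e6, 1.5e6, 1.6e6, size=t)  # elements at risk (EUR)
    return h * v * e

# Convergence check: grow T until the outputs are sufficiently similar
# to those obtained with larger sampling distribution sizes.
for t in (1_000, 10_000, 100_000, 1_000_000):
    r = simulate_risk(t)
    q50, q95 = np.quantile(r, [0.50, 0.95])
    print(f"T={t:>9,d}  mean={r.mean():,.0f}  Q50={q50:,.0f}  Q95={q95:,.0f}")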
3.2 Operational framework
Operationally, the calculation of risk at the q-th nominal intensity level (q = 1, …, Q) is achieved through a modular procedure. For each risk factor ψ, the first step consists in the generation of a T-sized sampling distribution of intensity at the q-th nominal intensity level IN(q) (which is taken to be the modal value of the distribution), using the direct approach (if lower- and upper-bound values of intensity Il(q) and Iu(q) are available) or the indirect approach (if a user-assigned COV of parameter uncertainty of intensity COVp(I(q)) is available).
Intensity is inferiorly and superiorly bounded (it can be neither negative nor infinite); hence, a PERT or uniform distribution type can be assigned to I(q) depending on the investigator's knowledge or belief. The second step consists in the calculation of a T-sized distribution of the risk factor by applying the risk factor function deterministically to each of the T sampled values in the distribution of intensity I(q) obtained in the previous step. The resulting sampling distribution is a "parametrically uncertain" representation ψp(q) of the risk factor. The attribute "parametrically uncertain" reflects the fact that the generated distribution only accounts for the parameter uncertainty stemming from the propagation of the uncertainty in the model inputs, and does not account for the transformation uncertainty associated with the risk factor function. The third step consists in the characterization of the transformation uncertainty associated with the risk factor function at the q-th nominal intensity level, f(IN(q)). Such uncertainty can be parameterized by a dimensionless multiplicative coefficient k(q) whose modal value km(q) (representative of the bias in the risk factor function) is assigned by the user based on objective or subjective assessment. The associated uncertainty can be expressed either in the form of a COV of transformation uncertainty COVt[f(IN(q))] associated with the risk factor function, or by lower- and upper-bound values kl(q) and ku(q). The fourth step consists in the generation of a T-sized sampling distribution of the risk factor ψ(q). This is achieved by scalar-multiplying the sampling distributions of ψp(q) and k(q):

ψ(q) = ψp(q) · k(q)    (11)
The resulting sampling distribution accounts for both parameter and transformation uncertainty. Following completion of the above steps for H, V and E, risk can be calculated by scalar-wise application of the fundamental equation to the sampling distributions of the macro-factors:

R(q) = H(q) · V(q) · E(q)    (12)
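As a concrete illustration of steps 1–4 and Eqs. (11)–(12), the Python sketch below simulates the macro-factors at a single nominal intensity level. The intensity bounds, the macro-factor functions and the coefficient bounds are hypothetical stand-ins, not values from the framework or from any case study.

import numpy as np

rng = np.random.default_rng(2)

def pert(lo, mode, hi, t):
    # PERT-type sample on [lo, hi] with modal value mode (Eqs. 5-6).
    a1 = 1.0 + 4.0 * (mode - lo) / (hi - lo)
    a2 = 1.0 + 4.0 * (hi - mode) / (hi - lo)
    return lo + (hi - lo) * rng.beta(a1, a2, size=t)

def risk_factor(f, i_sample, k_lo, k_mode, k_hi, t):
    # Steps 2-4: propagate intensity through a risk factor function and
    # multiply scalar-wise by the model-uncertainty coefficient k (Eq. 11).
    psi_p = f(i_sample)              # "parametrically uncertain" factor
    k = pert(k_lo, k_mode, k_hi, t)  # transformation uncertainty
    return psi_p * k

T = 200_000
i_q = pert(15.0, 20.0, 25.0, T)  # step 1: intensity about a nominal level

# Hypothetical macro-factor functions of intensity:
f_h = lambda i: np.clip(0.02 * i, 0.0, 1.0)        # hazard function
f_v = lambda i: np.clip(0.03 * i - 0.1, 0.0, 1.0)  # vulnerability function
f_e = lambda i: np.full_like(i, 1.5e6)             # elements at risk (EUR)

H = np.clip(risk_factor(f_h, i_q, 0.7, 1.0, 1.3, T), 0.0, 1.0)
V = np.clip(risk_factor(f_v, i_q, 0.6, 1.0, 1.4, T), 0.0, 1.0)
E = risk_factor(f_e, i_q, 0.8, 1.0, 1.1, T)

R = H * V * E  # scalar-wise application of Eq. (12)
print(f"mean risk = {R.mean():,.0f} EUR/yr; Q95 = {np.quantile(R, 0.95):,.0f}")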
The procedure described above is implemented at each of the Q reference nominal intensity levels at which risk estimates are of interest. The procedure is represented graphically in Figure 1.

Figure 1. Scheme of procedure for simulation-based estimation of risk factors.

3.3 Statistical characterization of risk

The direct output of the procedure described above is a set of T deterministic risk values. These can be considered as a sample of the "risk" output random variable. Risk can then be investigated quantitatively through its sample statistics. Statistical moments, for instance, parameterize properties such as central tendency (mean), scatter (variance), degree of symmetry (skewness) and shape (kurtosis). Quantiles also provide a detailed statistical description of a sample by parameterizing its empirical cumulative distribution function. The choice of the statistics to be extracted from the risk sample is related to the scope and goal of the specific risk estimation analysis.

4 CALCULATION EXAMPLE

A simple calculation example is presented hereinafter for illustrative purposes. The object is the quantitative estimation of rainfall-induced landslide risk for a building. The maximum rainfall intensity I (in mm/h) and the rainfall duration D (in h) are selected to serve as a vector-valued reference intensity parameter. A risk estimate is desired for D = 0.5 h and I = 20 mm/h. Sampling distribution sizes T = 1 000 000 were adopted following comparative analyses. The results of the corresponding deterministic analysis are given for comparative purposes.

4.1 Intensity

COVs of parameter uncertainty of 0.03 and 0.05 are assigned to D and I, respectively. These account for epistemic uncertainty in the duration and amount of rainfall. PERT-type sampling distributions of size T = 1 000 000 are thus generated for D and I using the indirect method. The resulting lower- and upper-bound values are Dl = 0.46 h, Du = 0.54 h, Il = 17 mm/h and Iu = 23 mm/h.

4.2 Hazard

The following hazard function expresses the nominal value of the annual probability of occurrence of at least one rainfall event with a given intensity I and duration D:

Parameter uncertainty in fH corresponds to the uncertainty in the intensity parameter components D and I. The hazard function is applied deterministically to the couples of sampled values of D and I, thereby obtaining a T-sized distribution of the "parametrically uncertain" hazard Hp. Transformation uncertainty in the hazard function is parameterized by COVt(fH) = 0.30. A T-sized PERT-type sampling distribution of model uncertainty is generated by assuming fH to be unbiased (i.e. kH,m = 1) and by calculating the lower- and upper-bound values kH,l = 0.10 and kH,u = 1.90 using the indirect method. The deterministic output value of hazard for D = 0.5 h and I = 20 mm/h is 0.32.

4.3 Vulnerability

The following vulnerability function, which reflects the results of a fictitious runout analysis for different sets of values of D and I, is assigned:

A sampling distribution of parametrically uncertain vulnerability Vp is obtained by applying Eq. (15)
deterministically to the sampling distributions of D and I. Assuming the model is unbiased, a uniform-type sampling distribution of kV is obtained by preliminarily calculating the lower- and upper-bound values kV,l = 0.57 and kV,u = 1.43 using the indirect method. Subsequently, scalar multiplication is employed to calculate the T-sized sampling distribution of V. The deterministic value of V given by Eq. (15) is 0.48.

4.4 Elements at risk

A constant, intensity-invariant value of E (in €) is assumed; its deterministic value is E = 1 500 000 €. Lower- and upper-bound values for the multiplicative coefficient kE are assigned as kE,l = 0.75 and kE,u = 1.05, respectively. Assuming the model is unbiased, a PERT-type sampling distribution of kE is obtained by the direct method. Subsequently, the sampling distribution of E is obtained by multiplying each sampled value of kE by the deterministic value.

4.5 Risk

Risk is obtained by applying the fundamental model in Eq. (1) to the sampling distributions of H, V and E. The relative frequency histogram of the output T-sized distribution is shown along with those of the macro-inputs in Figure 2. In the figure, the relative frequency histograms of the parametrically uncertain distributions of H and V are plotted (in grey) along with the "total uncertainty" distributions (in black). Deterministic values of the risk factors and of risk are also plotted as dashed black lines.

Figure 2. Outputs of calculation example.

Visual inspection of the results shows the capability of simulation to accommodate heterogeneous distribution types and to provide output distributions whose shape and character may be difficult to establish a priori. Moreover, it may be seen that while model uncertainty is not very significant for H, there is a very relevant modification of the relative frequency histograms of V and E from the "parametric uncertainty" to the "total uncertainty" case. Table 1 reports the sample statistics of the probabilistic simulation as well as the output of the equivalent deterministic analysis. The table is structured as follows: the first row contains the "equivalent deterministic" outputs of H, V, E and R; the second row reports the mean values of the output samples of the same parameters from the Monte Carlo simulation, while rows 3 and 4 report the sample standard deviations and coefficients of variation, respectively, of the same output samples. Rows 5–9 report the 0.05, 0.25 (1st quartile), 0.50 (median), 0.75 (3rd quartile) and 0.95 quantiles of the samples, respectively.

Table 1. Outputs of probabilistic and deterministic calculation example (1 000 000 simulations).

            H [yr−1]   V [−]   E [€]        R [€·yr−1]
determ.     0.32       0.48    1 500 000    224 788
mean        0.32       0.47    1 450 015    218 676
std. dev.   0.13       0.12    76 185       106 489
COV [−]     0.41       0.25    0.05         0.49
Q05         0.16       0.29    1 306 960    90 038
Q25         0.23       0.37    1 400 688    142 223
Q50         0.29       0.47    1 461 618    196 418
Q75         0.38       0.58    1 510 363    270 498
Q95         0.57       0.66    1 553 255    425 124

In the example illustrated herein, the magnitude of scatter in the output samples of H, V and R around the sample means (parameterized by the sample COVs) is significant. The deterministic values of H, E and R do not coincide with the mean values; though the differences are not large, the statistics reflect the skewness of the distributions. The 0.50 and 0.90 quantile ranges, given by the differences Q75 − Q25 and Q95 − Q05, respectively, also attest to a considerable level of uncertainty in the risk estimates, and provide very useful supplementary information regarding the sensitivity of the estimated risk to parameter and transformation uncertainty.
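The bounds quoted in Sections 4.1 and 4.2 can be recovered directly from the stated COVs, which provides a quick consistency check on the indirect approach. A minimal sketch, assuming the PERT relation of Eq. (9) (range = 6σ):

# sigma = COV * modal value; PERT range = 6*sigma; bounds = mode -/+ range/2
for name, mode, cov in (("D [h]", 0.5, 0.03),
                        ("I [mm/h]", 20.0, 0.05),
                        ("kH [-]", 1.0, 0.30)):
    half_range = 3.0 * cov * mode
    print(f"{name}: lower = {mode - half_range:.3f}, "
          f"upper = {mode + half_range:.3f}")
# -> D: 0.455/0.545 (0.46/0.54 in the text); I: 17/23; kH: 0.10/1.90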
5 CONCLUSIONS
This paper has illustrated a methodology for the probabilistic estimation of risk and its macro-factors: hazard, vulnerability and elements at risk. While uncertainty-based analysis does not imply the elimination of all of the uncertainties involved in risk estimation for geohazards, it does allow a more rational assessment of the level of safety, performance and conservatism associated with risk estimates.
Monte Carlo simulation provides a powerful, robust and flexible means of modelling uncertainties and investigating their propagation through risk models. Moreover, it allows seamless aggregation of subjective and objective uncertainty estimates in parameters and models. Another main benefit of simulation is the possibility of appreciating the full "shapes" of the uncertainty distributions in the risk factors and in risk itself, as well as of comparing the relative magnitudes of epistemic and aleatory uncertainties. Such insight into the character of the uncertainty allows enhanced decision-making and the selection of appropriate mitigation measures to attain target levels of risk. The simple calculation example included in the paper, based on artificial but realistic data, attests to the considerable level of uncertainty which is inherent in risk estimation for geohazards.
REFERENCES

Ayyub, B.M. 2001. Elicitation of expert opinions for uncertainty and risks. Boca Raton: CRC Press.
Phoon, K.K. and Kulhawy, F.W. 1999. Characterization of geotechnical variability. Canadian Geotechnical Journal 36(4), 612–624.
UNDRO – United Nations Disaster Relief Organization 1979. Natural disasters and vulnerability analysis. Geneva.
Uzielli, M. 2008. Probabilistic risk analysis for geohazards: a simulation approach. NGI report 20061032-8. International Centre for Geohazards / Norwegian Geotechnical Institute, Oslo.
Vick, S.G. 2002. Degrees of belief – Subjective probability and engineering judgment. New York: ASCE Press.
ACKNOWLEDGEMENT The work described in this paper was supported by the Research Council of Norway through the International Centre for Geohazards (ICG). Their support is gratefully acknowledged.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
A research project for deterministic landslide risk assessment in Southern Italy: Methodological approach and preliminary results F. Cotecchia Department of Civil and Environmental Engineering, Technical University of Bari, Italy
P. Lollino & F. Santaloia National Research Council – IRPI, Bari, Italy
C. Vitone & G. Mitaritonna Department of Civil and Environmental Engineering, Technical University of Bari, Italy
ABSTRACT: The paper presents a methodology for the deterministic assessment of landslide risk in chain areas at the intermediate scale, which is being developed within a multi-disciplinary research project. The procedure, which needs to be implemented in a GIS system, is being tested in the Daunia region (Southern Italy), where tectonised and fissured soils outcrop, but it is intended to be of reference for risk analysis and risk management in areas characterized by similar geological contexts. A first application of the methodology to a specific urban centre in Daunia is also presented.
1 INTRODUCTION
The assessment of landslide risk is a research topic of increasing interest all over the world, due both to an increasing awareness of the dramatic impact of landslides on the socio-economic environment and to an increasing demand for the development and extension of urbanization in areas prone to landsliding. The present paper discusses the formulation of a methodology that is being developed for regional landslide risk assessment within geologically complex areas, and some preliminary results of its application at the intermediate scale (i.e. between the regional and the slope scale). The areas of application are located in the southern Apennines (Italy), where landsliding is widespread and responsible for frequent damage to structures and infrastructure. The methodology is the subject of an on-going multidisciplinary research project, which aims at the assessment of the landslide hazard, of the corresponding vulnerability of structures and of their exposure, involving different areas of expertise. In particular, both the landslide hazard and the structural vulnerability assessments are meant to be based upon knowledge of the failure mechanisms and, as such, to benefit from scientific knowledge in the fields of both geotechnical engineering and structural mechanics. At the same time, the exposure of the elements at risk is to be investigated through analyses of the socio-economic context in which the risk is being evaluated. In the present paper only the work relating to landslide hazard is presented. This work aims at the further development of Quantitative Landslide Hazard Assessment, QHA (Ho et al.
2000), following a deterministic approach. As such, it is aimed at exporting the geo-mechanical interpretation of slope stability and landslide mechanisms from the slope scale (site-specific) to the regional scale. The research work is developed with reference to a test-site area, the Daunia region, located at the eastern margin of the southern Apennines, a portion of the chain belt along the subduction zone between the African and Eurasian plates, where slopes are made up of tectonised and fissured soils and rocks. Here, frequent and intense landsliding involves the slopes extensively and repeatedly, restraining urban development. The research is currently investigating the applicability of the deterministic methodology to the landslide hazard assessment of the urban territories in the region. In the following, after a brief review of the main methodologies for landslide hazard assessment available in the literature, the paper focuses on the methodology being developed in the research. Thereafter, preliminary results of its application to evaluate the landslide hazard in the test-site region are discussed.
2 CURRENT APPROACHES TO LANDSLIDE HAZARD ASSESSMENT
It is generally recognized that the criteria to be satisfied for a successful landslide hazard assessment are: the objectivity of the data collection processes, the detection of objective relationships between landslide factors and events, the possibility of on-going updating
of the hazard assessment procedure and the adequate choice of the analysis scale (Aleotti & Chowdhury 1999, Fell et al. 2008). Several methods for landslide hazard assessment have been proposed in the literature, which satisfy these criteria to different extents and which can be classified as heuristic, statistical or deterministic (van Westen et al. 1997, Aleotti & Chowdhury 1999, Dai et al. 2002, Fall et al. 2006). Heuristic methods, also called expert evaluation models, are the most widespread. They are mainly based on geo-structural analyses, geo-morphological surveys and the interpretation of aerial photographs, which may give indications of: the slope inclination and vegetation conditions, the geological set-up, and the existence of landslide bodies and their activity (style and distribution of movements – Cruden & Varnes 1996). Based upon such expert-oriented qualitative interpretations, the hazard assessment results in either landslide inventory maps, landslide density maps, or index maps. In particular, index mapping implements procedures for processing the factors supposed to influence the landslide process, which make use of weighting coefficients (Guzzetti et al. 2000) and which result in hazard category maps. At present, these methods are extensively applied using GIS techniques to create the factor archives and deduce the category maps. However, the main disadvantage of such methods is the subjectivity in the choice of the factors and of the corresponding weighting coefficients, due to the subjectivity of the interpretation of the landslide activity and of its relation with the factors. Indeed, the choices of the weighting coefficients and of the indexes are not validated by any quantitative assessment. In addition, the difficulty of updating the hazard assessment, due to the use of rather implicit criteria in its formulation (Fall et al. 2002, Aleotti & Chowdhury 1999, Dai et al. 2002), represents another important drawback of the methodology. Statistical methods involve statistical determinations of combinations among the factors that are supposed to influence landsliding (Yin & Yan 1988, Carrara et al. 1991, Aleotti & Chowdhury 1999). These methods still make use of qualitative interpretations of the landslide phenomena, but they restrain the degree of subjectivity of the analysis by comparison with the heuristic methods, and allow for the updating of the hazard mapping with time. They are generally considered to be appropriate for landslide hazard mapping at the medium scale (1:10,000–1:50,000), since this scale is suitable for the acquisition of the landslide factor values to be processed in the statistical analyses. However, the main disadvantage of statistical methods results from the lack of validation of the statistical correlations between the factors being used, which follows from the lack of any quantitative assessment of the slope failure mechanisms (Ho et al. 2000). Such a disadvantage could be limited by applying these methods in the light of an adequate mechanical interpretation of the landslide processes. In general, however, statistical methods are quite strictly tied to the area where
they are calibrated and cannot easily be exported to areas with different features (Canuti & Casagli 1994). Deterministic methods assess landslide hazard by means of quantitative evaluations of the stability of the slope or, otherwise, of the active landslide mechanism, based upon knowledge of the geo-structural set-up, the mechanical behaviour of the materials forming the slope, the hydraulic conditions, and the values of the external factors of landsliding (e.g. rainfall, seismic loading, human actions). Landslide hazard is derived from the susceptibility assessment, accounting for the probability of occurrence of given values of the external factors with time. As such, these methods restrain the degree of subjectivity of the analyses. Because their application necessarily requires significant and reliable databases and accurate calculations, they have so far been mainly adopted for slope-specific hazard assessments. Applications at the regional scale have been carried out mainly in regions where a single-type failure mechanism could be recognized (Montgomery & Dietrich 1994, Cascini et al. 2003, Savage et al. 2004), so that the interpretation of such a mechanism could be of reference in the prediction of failure on most slopes. Conversely, within regions of complex geological set-up, where different types of landslide occur, landslide hazard is generally classified by means of index mapping (Guzzetti et al. 2000). In the following, a methodology for deterministic hazard assessment at the small scale is proposed.
3 DETERMINISTIC LANDSLIDE HAZARD ASSESSMENT AT THE SMALL SCALE
According to the research objectives, the deterministic analysis of landslide hazard at the regional scale has been designed to export the use of the geo-mechanical understanding of the slope failure mechanism from the slope scale to the regional (small) scale. The geo-mechanical understanding of the landslide hazard is based upon the recognition of the relationships between the landslide factors (Terzaghi 1950, Hutchinson 1988) and the landslide mechanisms (cause-effect relations). These can be deduced from analyses at three levels: a I level (preliminary), which accounts mainly for phenomenological studies; a II level (intermediate), performed by means of limit equilibrium analyses of the slope geotechnical model; and a III level (advanced), which involves both numerical modelling and field monitoring. The latter two analyses result in quantitative assessments of the slope stability and of the possible landslide mechanisms. The I level analysis may represent a first approach to the quantitative assessment, if developed accounting for: the geo-hydro-mechanical set-up of the slope, the hydro-mechanical properties of the materials and their influence on the landslide mechanism. Therefore, a methodology for deterministic landslide hazard assessment at the small scale has to implement site-specific studies of the landslide mechanisms which
Figure 1. Flow-chart of the proposed methodology for small scale deterministic landslide hazard assessment.
occur in the region and of their connections with the geo-hydro-mechanical factors. These studies should be widespread across the region and represent the 1st phase of the methodology being proposed. On one side, they are intended to recognize the sets of geo-hydro-mechanical factors which best represent the slopes in the region; on the other, they are aimed at identifying the slope failure mechanisms to be considered representative of the region. Such studies require the collaboration of different experts, in the fields of topography, geology/geo-morphology and geotechnics. Their results should be reported in guidelines meant to outline a framework for the quantitative interpretation of landsliding in the region. Thereafter, the methodology should implement the procedures for the use of these guidelines in the hazard assessment of any portion or slope of the region (2nd phase of the methodology). The flow-chart shown in Figure 1 summarises the main working steps of the methodology. The usefulness of the 1st phase studies for the regional deterministic hazard assessment assumes, first, that the representative geo-morpho-hydro-mechanical set-ups and failure mechanisms in the region are of limited number and, second, that it is possible to identify representative quantitative relations between the failure mechanisms and the slope geo-morpho-hydro-mechanical factors, which would be those of reference in the region (Hypotheses 1 and 2 in Fig. 1, respectively). A region characterized by a small number of set-ups and failure mechanisms will be defined as homogeneous; this is more likely the smaller
the extension of the region and the more repetitive the geological features of the soil masses. As recalled earlier, the literature reports the successful use of the deterministic approach for intermediate scale landslide hazard assessment in chain areas where the geo-mechanical set-ups are so repetitive that a single landslide mechanism is found to occur repeatedly, thus in regions which are so homogeneous that a single landslide mechanism interpretation can serve as reference for the hazard forecasting on any slope. However, Hypotheses 1 and 2 can be found to be valid also in regions where the landscape appears to be variegated and geologically complex and, as such, they have been assumed to support the general applicability of the deterministic methodology being proposed. Such an assumption is based on the awareness that, in general, the variations in mechanical behaviour among soils and lithotypes are more limited than the variations in their geological classification features. Indeed, soils and rocks belonging to different geological formations are often found to exhibit similar bulk mechanical response. If this is the case, a classification of the landscape using, as the priority classifying feature, the mechanical properties of the materials and their combination in the soil mass may be far more synthetic than one based upon the geological features. It would follow that the use of mechanics as a means to classify the set-ups and the phenomena in a given landscape would simplify its characterization. The validity of Hypotheses 1 and 2 in a geologically complex region such as the Daunia region is discussed later in the paper. Figure 1 reports a flow chart outlining the 1st phase of the deterministic methodology. Step 1 (S1) represents the creation of an analytical database of all the factors influencing the slope equilibrium (Terzaghi 1950, Hutchinson 1988). They can be either internal factors (slope geometry, lithology, geo-structural set-up, tectonic structures, mechanical properties of the materials, hydraulic regime in the slope) or external factors (rainfall, earthquakes, human action, natural variation of the slope geometry). Such a database should include both data logged at the regional scale and more detailed data from larger scale studies. All these different source data should be implemented in a GIS database after appropriate screening (Mancini et al. 2008). Although a large part of such a database is common to heuristic and probabilistic methods, for the deterministic analyses it must be far richer in information concerning the hydro-mechanical features of the materials. The second step (S2 in Fig. 1) of the methodology concerns the geo-hydro-mechanical classification of the soil masses (classes GMi in Fig. 1) and the classification of the representative landslide typologies (Li) in the region, based upon the collected database and site-specific surveys. Finally, the third step (S3 in Fig. 1) of the 1st phase is aimed at the recognition of the connections existing between the sets of internal factors of landsliding (characterizing the GMi classes), the external factors and the landslide typologies (Li). It is based both on analyses of the GIS data and on site-specific studies widespread
across the region. Such site-specific studies can be developed at the three aforementioned levels (I, II and III). In detail, the I level analyses could be based on comparisons between the different sliding phenomena and the corresponding mechanical and hydraulic conditions. The II level analyses consist of limit equilibrium analyses of the sliding process carried out using the geo-hydro-mechanical data available in the database. These analyses could be parametric with respect to the factors whose evaluation has been least objective. The results of such analyses would provide quantitatively based indications of the geometry of the landslide body, of the failure mechanism and of the average strength mobilized along the slip surface, as well as preliminary indications of the possible landslide evolution and triggering factors. These results may or may not validate the I level interpretations. The III level analyses consist of both direct in-situ monitoring and numerical modelling of the landslide processes; as such, they provide the most objective interpretation of the phenomena but, given their complexity, they should be applied to a few representative cases in the region. The results of the third step (S3) studies should outline the representative landslide mechanisms in the region, their causes and possible evolution, and must be reported in guidelines, which may also implement either algorithms or modelling procedures and which represent the reference for the susceptibility assessments in the region. Such guidelines, together with the regional GIS database, may be updated progressively with time. However, the assessment of the evolution of the failure mechanism and of the time of occurrence of the event, required for hazard assessment, requires additional information. For example, it may benefit from the results of investigations of multiple-year topographic maps, aerial photos and monitoring data, which are generally the subject of I level analyses. Alternatively, in II and III level analyses, the modelling may give indications of the evolution of the slope equilibrium and failure conditions with time if it accounts for the variation of both the internal and external factors with time. In addition, III level analyses could also give indications about the landslide run-out. All such indications would be of use to advance from susceptibility to hazard prediction. The hazard assessment application within a given portion of the region represents the second phase of the methodology (Fig. 1). In this phase, a specific database has to be developed for the area subjected to assessment. This is more detailed than the regional one created in the first phase and implements data logged at a larger scale. The data to be collected concern all the factors of landsliding as well as all the available data indicative of the activity of movements on the slopes, such as topographic monitoring data, interferometric data, at-depth monitoring data (e.g. inclinometer data), and data about the damage to structures interacting with the moving slopes. The procedure for the creation of such a local GIS database is meant to be outlined in the guidelines and is based upon the understanding
of the landslide processes gathered in the 1st phase. The interactive consultation of the local GIS database and of the results of the 1st phase studies reported in the guidelines should lead to the identification of the possible landslide mechanisms applying to the area of application. In particular, the values of the landslide factors reported in the local GIS database should be compared with those reported in the guidelines for the different geo-hydro-mechanical set-ups (GMi) in connection with the different landslide mechanisms (Li). This comparison should allow the identification of the landslide mechanism occurring on slopes characterized by factor values closest to those of the area being assessed, as sketched in the example below. The information about such a landslide mechanism and about its connection with the landslide factors should finally provide sufficient knowledge to assess what type of landslide may occur on the slope being assessed and how it may evolve.
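In a software implementation, this comparison step can be cast as a nearest-prototype classification. The following minimal sketch illustrates the idea; the set-up names follow the GMi/Li classes introduced above, but the factor names, normalization scales and numerical values are illustrative assumptions, not the project's calibrated guidelines:

```python
import math

# Guideline prototypes from the 1st phase studies: for each geo-hydro-
# mechanical set-up GMi, representative factor values and the landslide
# type Li connected with it (all numbers are illustrative assumptions).
PROTOTYPES = {
    "GM1": {"factors": {"slope_deg": 12.0, "clay_strength_kpa": 30.0,
                        "water_depth_m": 4.0}, "type": "L1"},
    "GM2": {"factors": {"slope_deg": 10.0, "clay_strength_kpa": 15.0,
                        "water_depth_m": 3.0}, "type": "L2"},
    "GM3": {"factors": {"slope_deg": 8.0, "clay_strength_kpa": 20.0,
                        "water_depth_m": 5.0}, "type": "L3"},
}

# Normalization scales so that no single factor dominates the distance.
SCALES = {"slope_deg": 15.0, "clay_strength_kpa": 30.0, "water_depth_m": 6.0}

def closest_mechanism(site_factors):
    """Return (set-up, distance, landslide type) of the guideline prototype
    whose factor values are closest to the assessed slope."""
    best = None
    for name, proto in PROTOTYPES.items():
        d = math.sqrt(sum(((site_factors[k] - v) / SCALES[k]) ** 2
                          for k, v in proto["factors"].items()))
        if best is None or d < best[1]:
            best = (name, d, proto["type"])
    return best

print(closest_mechanism({"slope_deg": 11.0, "clay_strength_kpa": 14.0,
                         "water_depth_m": 3.5}))
# -> ('GM2', ..., 'L2'): the slope is assigned the mudslide mechanism L2
```

In practice, the guideline tables consulted within the GIS environment, rather than hard-coded values, would drive such a comparison.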
4 TEST-SITE OF THE PROJECT AND PRELIMINARY 1ST PHASE RESULTS

The outcropping of stratified sedimentary sequences is widespread within the Daunia region. These are part of several different turbiditic successions (or flysch), within which rock layers alternate with clay layers that are generally highly fissured. The most frequent flysch formations are the San Bartolomeo Flysch (called SBO hereafter) and the Faeto Flysch (FAE), which both include rock and clay strata: calcareous strata and clayey marls for the FAE flysch, and arenites and clays for the SBO flysch. Conversely, the Red Flysch (FYR) is the one predominantly formed of highly fissured clays. The heterogeneity and the fissuring characterizing the rocks and clays forming these formations make them classifiable as structurally complex formations (Esu 1977). In particular, Red Flysch clays are classified as varicoloured scaly clays and are characterised by the lowest strengths. Research studies carried out by the Authors (e.g. Cotecchia & Santaloia 2003; Vitone et al. 2009) in the last decade show how the fissuring pattern of FYR clays (which is so intense as to define inter-fissure clay elements of millimetre thickness, defined as scales) causes such very low strengths. Cotecchia et al. (2006) and Vitone et al. (2009) have demonstrated that the state boundary surface of these fissured clays is even smaller than that of the same material when reconstituted in the laboratory. Since the low strength of the fissured clays plays a fundamental role in the development of the slope failure processes in the region, knowledge of their mechanical properties is crucial to effective quantitative analyses of the failure processes. The methodology proposed above has been applied to the landslide hazard assessment within the urban areas located in the mountainous part of the Daunia region (scale 1:10,000–1:50,000). In the first phase, the landslide factors characterizing the slopes within twenty-five urban areas have been investigated. In this
Figure 2. a) Geological scheme of Italy and location of the test-area; b) Geological Setup GMi and c) Landslide Typology Li defined in the study area. Setup (1): rigid cap overlying a deformable unit with sub-horizontal contact; setup (2): monoclinal contact between clayey and rocky units; setup (3): single geological units. Landslide type L1: compound landslide from medium to large depth; L2: mud-slide from shallow to medium depth; L3: rotational landslide evolving into mud-slide from medium to large depth.
respect, the study of the landsliding occurring in free-field areas has been disregarded. The lithological and geo-structural features of the slopes and the properties of the materials have been characterized based upon the analysis of geological maps and reports, geotechnical reports, landslide inventories and the results of in-situ surveys. All these data have been implemented in GIS databases, each relating to a single urban centre. The analysis of these data has been aimed at the recognition of the main geo-mechanical set-ups representative of the urban areas in the region. In particular, it has been recognized that for most towns the old part is founded on rock strata, most often either the limestones of the Faeto Flysch or the sandstones of the San Bartolomeo Flysch. Only in the last century did urbanization extend the urban areas towards the clay outcroppings flanking the rock outcroppings. The rock strata are found either to float above the clay strata (of either the same flysch or of the Red Flysch) or to be in monoclinal contact with them. In the awareness that the overall behaviour of these slopes depends on the features of the succession and of the clay strata, the type of contact between the rock and clay strata has been taken as the main element characterizing the areas of interest. Thus, the sub-horizontal contact between an upper stiff, high-strength rock slab and a lower stratum formed of very weak, low-stiffness fissured clays has been considered a representative geo-mechanical set-up, classified as GM1 (Fig. 2b). The geo-mechanical set-up corresponding to the monoclinal contact has instead been classified as GM2 (Fig. 2b). Finally, since some urban centres are found to lie in areas where a single lithological unit (rock or clay unit) outcrops, such a set-up has been distinguished as GM3, as shown in Figure 2b. For all the GMi set-ups,
the water table has been found to be shallow, generally not deeper than 6 m below ground level. The first phase studies have also been aimed at the recognition and classification of the main landslide mechanisms occurring on the slopes of the urban areas. For this purpose, the GIS database has been extended to include all the available data concerning landsliding, such as the data reported in both national and regional landslide inventories, literature data, displacement and failure data presented in geotechnical reports, information about damage to structures, and new data resulting from dedicated in-situ surveys. The analyses of all these data have resulted in the identification of three main types of landslides, named L1, L2 and L3 in Figure 2c. According to the international landslide classification (Cruden & Varnes 1996), the first typology, L1, is represented by intermediate to deep-seated compound landslides, with failure surface depth larger than 30 m and width comparable to the length. These landslides have often been found to be retrogressive and multiple. The second typology, L2, corresponds to mudslides, commonly either lobate or elongate, with shallow to intermediate depth sliding surfaces (≤30 m). Locally, the retrogressive failure surface may be deeper (30–40 m). The third typology, L3, is represented by deep-seated to intermediate depth rotational landslides evolving into either mudslides or earthflows downslope. The limited number of landslide types recognized in the region has thus confirmed one of the hypotheses assumed in the definition of the methodology, i.e. that it is possible to recognize a limited number of representative geo-morpho-hydro-mechanical set-ups and failure mechanisms even in a region of significant geological complexity, such as the one being considered.
Figure 3. Celenza Valfortore: a) geological and geomorphological map (after Melidoro 1982, modified). Key: 1) urban centre area, 2) filling material, 3) SBO Flysch (a-clayey unit, b-arenaceous unit), 4) FYR, 5) thrust, 6) strata attitude, 7) deep failure, 8) landslide (a-crown, b-body, c-depression, d-direction of movement); b) Detail of the southern area.
The first phase studies of the connections between the main classes of landslides in Daunia, Li, the main geo-hydro-mechanical set-ups, GMi, and the external landslide factors (Terzaghi 1950) are still under way. Phenomenological (I level) connections have been envisaged, and both II level analyses (using the Morgenstern & Price method) and III level analyses (using finite element modelling and field monitoring) are being carried out to verify the I level results across the region. Based upon the I level analyses, most of the L1 and L2 landslides have been found to be activated in the clays at the base of the slopes and to retrogress upslope. Conversely, L3 slides are often found to start at the top of the slope and to have advancing activity. The strength of the slope clays (i.e. the soil properties) is seen to influence the depth of the sliding process: L1 slides occur more frequently in stiffer clays, becoming deeper the stiffer the clay, whereas L2 slides occur mostly in softer clays. The contact between the rock and clay strata is seen to influence the type of retrogression. For example, the L2 slides have several source areas when retrogressing in clays, whereas they tend to have a single deep source area when retrogressing into the top rock stratum. In addition, most of the landslides in the region are reactivated processes, whose first failure occurred before the beginning of the last century. They can be mainly classified as very slow to slow-rate movements (v < 5 × 10−3 mm/s; Cruden & Varnes 1996), with possible accelerations after extreme rainfall events. In the following, preliminary results of the application of the methodology for the landslide hazard assessment in a specific urban area of the region are discussed, as resulting from I and II level analyses.
5 PRELIMINARY 2ND PHASE RESULTS: THE CASE OF CELENZA VALFORTORE
As with most of the urban centres in the Daunia Apennines, the promontory of Celenza Valfortore is
bordered by the crowns of several landslides (Fig. 3a). The application of the second phase of the methodology to Celenza Valfortore has required the collection of all the available data resulting from previous site-specific surveys and investigations (e.g. geomorphological, stratigraphical, inclinometric and piezometric data) and of data concerning damage to structures caused by landsliding, together with new data resulting from dedicated in-situ surveys. All these data have been implemented in a GIS database, according to the guidelines developed in the first phase of the work. The consultation of the GIS database in light of the GMi set-ups identified in the region has resulted in the recognition of the promontory of Celenza as belonging to category GM2 (Fig. 2b). It is formed by a sequence of SBO rock strata, SBO clay strata and FYR clay strata in monoclinal contact. The old part of the town is founded on the outcropping of the arenaceous SBO flysch, whereas the most recent area (built after 1920) lies directly on the outcropping of either the SBO or the FYR clay strata (Fig. 3a). The comparative consultation of the GIS database of Celenza and of that resulting from the 1st phase studies across the region, in light of the landslide typology classes Li, has resulted in the recognition of a single type of landslide mechanism on most of the slopes surrounding the promontory, that is, the medium depth mudslide L2 (Fig. 2c). These landslides involve both the FYR scaly clays and the SBO clays. In most cases, they are retrogressive and their rear scarp either borders or involves the foundations of the buildings at the top of the slopes. When the mudslide develops solely in FYR clays (i.e. the weakest clays: maximum strength c ≈ 18 kPa, φ ≈ 20°), it has several source areas, whereas it has a single source area when it involves either the stiffer SBO clays (maximum strength c ≈ 10 kPa, φ ≈ 25°) or approaches the SBO arenites at the top. II level limit equilibrium analyses have been carried out for the susceptibility assessment of slopes located in the southern portion of the promontory.
Figure 4. Celenza Valfortore: Landslide 1, limit equilibrium analysis.
In particular, two interacting landslides, respectively indicated as 1 and 2 in Figure 3, have been analysed; landslide 1 is an L2 mudslide with several source areas, whereas landslide 2 is an L2 mudslide with a single source area, which involves a portion of the built area of the town (Fig. 3). According to the I level analyses, both slides are retrogressive reactivation processes. The failure process of landslide 1 is likely to have started at the toe near the river, due to erosion processes. The activation of landslide 2 has probably followed the movement of landslide 1, given the interaction of its toe with landslide 1. The limit equilibrium analyses have been aimed at identifying the hierarchy of instability between the sliding bodies (in order to deduce the way they interact) and the mobilized friction angles along the slip surfaces. The limit equilibrium analyses for landslide 1 have been carried out in section A′–A–D, whose axis is shown in Figure 3 and which includes only FYR clays (Fig. 4). The analyses for landslide 2 have been carried out in a section which starts at the crown of this landslide (E in Fig. 3), reaches the mid-slope toe at point A (Fig. 3) and thereafter follows landslide 1 down to the toe at the river (section axis E–A–A′, Fig. 3). Figure 5 shows that this section includes FYR clays in the lower part and SBO clays in the top part. First, the steady-state pore pressure regime in the slopes has been reconstructed based upon finite element seepage analyses (SEEP/W, GeoStudio 2004), providing results consistent with field measurements. Thereafter, this seepage regime has been implemented in limit equilibrium analyses (SLOPE/W, GeoStudio 2004; Morgenstern & Price 1965). For body 1, different slip surfaces have been analyzed, all assumed to pass through the top and toe of the landslide recognized in the field (D and A′ respectively, Fig. 3). In addition, c = 0 kPa has been assumed for the FYR clay. Under these conditions, the analyses demonstrate that the slip surface of the most unstable body in section A′–A–D has a maximum depth of 25 m (Fig. 4); F = 1 applies to this slip surface if φ = 15°, a value consistent with the strength properties of FYR clays when sheared post-peak (at large strains), but still far from residual strength conditions (φr ≈ 6°). For landslide 2, the limit equilibrium analyses have compared the stability of landslide bodies following two kinematic hypotheses. Figure 5 shows a sliding body (hypothesis a) that coincides with landslide 1 in the lower part (portion A′–A of the section, Figs. 3, 4) and extends upslope following section A–E up to the crown; this hypothesis
Figure 5. Celenza Valfortore: Landslide 2 (A′–A–E in Fig. 3), limit equilibrium analyses: a) hypothesis a; b) hypothesis b.
simulates a sliding connection between landslides 1 and 2. This body crosses the FYR clays in the lower part and the SBO clays in the upper part. Along the portion of the slip surface crossing the FYR clays, the same mobilized strength parameters applying to landslide 1 have been implemented (c = 0 kPa, φ = 15°). The analyses have looked for the critical depth of this landslide in the portion crossing the SBO clays; this has resulted to be about 35 m (Fig. 5a), with mobilised strength parameters of the SBO clay of c = 0 kPa and φ = 18°–19° (which are consistent with the post-peak strength values of the SBO clay). The landslide body for hypothesis b (Fig. 5b) has its toe at point A (Figs 3, 5b), crosses solely the SBO clays and reaches the crown E at the top. Implementing in these analyses the same SBO strength parameters as resulting from the analyses following hypothesis a (i.e. c = 0 kPa and φ = 18°), it results that any landslide body moving in section A′–A–E with toe at A is more stable than the longer landslide body (hypothesis a) moving with toe at the river. Therefore, the limit equilibrium analyses appear to confirm the I level interpretation of the landslide mechanism, indicating that the activation of landslide 2 has followed the slipping of landslide 1 and that at present the activity of landslide 2 is likely to be connected to movements in the lower part of the slopes, down to the river.
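The kind of back-analysis performed at the II level can be illustrated with a scheme far simpler than the Morgenstern–Price method actually used: for a shallow translational body, the infinite-slope idealization yields the mobilized friction angle in closed form. The sketch below is only a stand-in for the SLOPE/W analyses described above, with illustrative input values (slope angle, unit weight, water table position) that are assumptions, not the data of the Celenza sections:

```python
import math

def mobilized_phi_deg(slope_deg, m, gamma=20.0, gamma_w=9.81, fs=1.0):
    """Infinite-slope back-analysis with c' = 0 and seepage parallel to the
    slope: FS = (1 - m * gamma_w / gamma) * tan(phi') / tan(alpha), where m
    is the fraction of the sliding depth below the water table. Solving for
    the friction angle mobilized at the target FS gives phi'_mob."""
    alpha = math.radians(slope_deg)
    tan_phi = fs * math.tan(alpha) / (1.0 - m * gamma_w / gamma)
    return math.degrees(math.atan(tan_phi))

# Gentle slope with a near-surface water table (illustrative values):
print(f"phi_mob = {mobilized_phi_deg(8.0, 0.9):.1f} deg")  # about 14 deg
```

With a near-surface water table, the back-calculated angle is of the same order as the 15° mobilized friction angle found above for the FYR clays.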
6 RESEARCH PERSPECTIVES
The further development of the research will take advantage of the results of several on-going in-situ investigations within different active landslides in Daunia, of laboratory testing on the sampled soils, and of the results of numerical modelling, which will allow for the evaluation of the validity of both the I and II level assessments conducted so far. Numerical analyses implementing statistical processing of the external factors will also provide indications about the time factor required for proper hazard assessments.

ACKNOWLEDGMENTS

The Strategic Research Project n. 119 “Landslide risk assessment for the planning of small centres located
in chain areas: the case of the Daunia region”, funded by the Apulian Region, is gratefully acknowledged.

REFERENCES

Aleotti, P. & Chowdhury, R. 1999. Landslide hazard assessment: summary review and new perspectives. Bull. Eng. Geol. Env., 58, 21–44, Springer-Verlag.
Canuti, P. & Casagli, N. 1994. Considerazioni sulla valutazione del rischio di frana. Proc. Conf. Fenomeni franosi e Centri Abitati, 27th May 1994, Bologna (Italy).
Carrara, A., Cardinali, M., Detti, R., Guzzetti, F., Pasqui, V. & Reichenbach, P. 1991. GIS techniques and statistical models in evaluating landslide hazard. Earth Surface Processes and Landforms, 16, 427–445.
Chowdhury, R. & Flentje, P. 2002. Modern approaches for assessment and management of urban landslides. 3rd Int. Conf. Landslides, Slope Stability & the Safety of Infrastructures, 11–12 July 2002, Singapore.
Cotecchia, F. & Santaloia, F. 2003. Compression behaviour of structurally complex marine clays. Nakase Memorial Symposium on Soft Ground Engineering in Coastal Areas, Yokosuka, Japan, 63–72.
Cotecchia, F., Vitone, C., Cafaro, F. & Santaloia, F. 2006. The mechanical behaviour of intensely fissured high plasticity clays from Daunia. Characterisation and Engineering Properties of Natural Soils, Singapore, 1975–2003.
Cotecchia, F., Vitone, C. & Santaloia, F. 2008. The influence of intense fissuring on the compression behaviour of two Italian clays. IV Int. Symp. on Deformation Characteristics of Geomaterials (IS2008), 22–24 September 2008, Atlanta, Georgia.
Cruden, D.M. & Varnes, D.J. 1996. Landslide types and processes. In: Turner, A.K. & Schuster, R.L. (eds.), Landslides: Investigation and Mitigation, Transp. Research Board Special Report 247, Nat. Ac. Press, WA, 36–75.
Dai, F.C., Lee, C.F. & Ngai, Y.Y. 2002. Landslide risk assessment and management: an overview. Engineering Geology, 64, 65–87.
Esu, F. 1977. Behaviour of slopes in structurally complex formations. The Geotechnics of Structurally Complex Formations; Proc. intern. symp., Capri (Italy), 2, 292–304.
Fall, M., Azzam, R. & Noubactep, C. 2006. A multi-method approach to study the stability of natural slopes and landslide susceptibility. Engineering Geology, 82, 241–263.
Fell, R., Corominas, J., Bonnard, C., Cascini, L., Leroi, E., Savage, W.Z. et al. 2008. Guidelines for landslide susceptibility, hazard and risk zoning for land use planning. Engineering Geology, 102, 83–111.
GeoStudio 2004. Geo-Slope Int. Ltd., Alberta, Canada.
Guzzetti, F., Cardinali, M., Reichenbach, P. & Carrara, A. 2000. Comparing landslide maps: a case study in the upper Tiber river basin, Central Italy. Environmental Management, 25 (3), 247–263.
Ho, K., Leroi, E. & Roberts, B. 2000. Keynote Lecture: Quantitative risk assessment: application, myths and future direction. Proc. GEOENG 2000, Melbourne, Australia, Technomic, Lancaster, 1, 263–312.
Hutchinson, J.N. 1988. General Report: Morphological and geotechnical parameters of landslides in relation to geology and hydrogeology. Proc. of the 5th Int. Symp. on Landslides, Lausanne, 1, 3–36.
Leroueil, S. & Locat, J. 1998. Slope movements – Geotechnical characterization, risk assessment and mitigation. 8th Int. IAEG Congress, Balkema, Rotterdam, The Netherlands, 933–944.
Mancini, F., Ceppi, C. & Ritrovato, G. 2008. Analisi del rischio da frana in ambiente GIS: il caso del Sub-Appennino Dauno (Puglia). Proc. ASITA, L'Aquila, ISBN 978-88-903132-1-9, II, 1393–1398.
Melidoro, G. 1982. Indagini geologiche sulle condizioni di stabilità dell'abitato di Celenza Valfortore. Relazione, 1–30.
Montgomery, D.R. & Dietrich, W.E. 1994. A physically based model for the topographic control on shallow landsliding. Water Resources Research, 30, 1153–1171.
Morgenstern, N.R. & Price, V.E. 1965. The analysis of the stability of general slip surfaces. Géotechnique, 15, 239–247.
Savage, W.Z., Godt, J.W. & Baum, R.L. 2004. Modeling time-dependent areal slope stability. In: Lacerda, W.A., Erlich, M., Fontoura, S.A.B. & Sayao, A.S.F. (eds.), Landslides: Evaluation and Stabilization, Proc. of the 9th Int. Symp. on Landslides, Balkema, 1, 23–36.
Terzaghi, K. 1950. Mechanism of Landslides. Geol. Soc. Am., Berkey Volume, 83–123.
Van Westen, C.J., Rengers, N., Terlien, M.T.J. & Soeters, R. 1997. Prediction of the occurrence of slope instability phenomena through GIS-based hazard zonation. Geologische Rundschau, 86, 404–414.
Vitone, C., Cotecchia, F., Desrues, J. & Viggiani, G. 2009. An approach to the interpretation of the mechanical behaviour of intensely fissured clays. Soils and Foundations, in press.
Yin, K.L. & Yan, T.Z. 1988. Statistical prediction models for slope instability of metamorphosed rocks. In: Bonnard, C. (ed.), Proc. 5th Int. Symp. Landslides, Lausanne, Switzerland, vol. 2, Balkema, Rotterdam, 1269–1272.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Reliability-based performance evaluation for reinforced railway embankments in the static loading condition M. Ishizuka Integrated Geotechnology Institute Limited, Japan
M. Shinoda Railway Technical Research Institute, Japan
Y. Miyata National Defense Academy of Japan, Japan
ABSTRACT: The shift from the allowable stress design method to the limit state design method is now under way. Although the limit state design method is effective for performance-based design, it is limited in its ability to quantitatively evaluate soil materials and reinforcements, whose properties are inherently variable. In this research, reliability analyses were carried out in order to quantitatively evaluate the performance of structures built with such soil materials and reinforcements.
1 INTRODUCTION
It is well known that soil parameters vary when various structures founded on soil and bedrock are designed; when the safety of such a structure is evaluated, it is therefore necessary to handle the soil parameters rationally and quantitatively. In conventional allowable stress design based on a safety factor, variability in the ground parameters has been dealt with by adopting conservative values for design. However, no technique had been established for logically and quantitatively evaluating how conservative those design values should be. The reliability design method applies probability and statistical theory to the design of structures in order to treat such uncertainty logically and quantitatively, and it evaluates the safety of the structure through indices such as the safety index or the failure probability (hereafter referred to as the limit state exceedance probability). Applying this technique makes it possible to treat quantitatively the soil parameters and other quantities that have inherent variability. Because the limit state exceedance probability is the probability of exceeding a limit state defined in a performance verification type design method, it is not necessarily equivalent to the probability of failure in the sense of a safety factor below 1. Reliability design methods are divided into three levels. Level I corresponds to the so-called limit state design method, in which reliability is provided through load and resistance factors, although
the probability of occurrence of a failure mode is not quantitatively evaluated. Level II is a technique for evaluating reliability by deriving the limit state exceedance probability from the safety index, which is obtained from the mean value and the standard deviation of the performance function. Concrete examples include the first-order second-moment (FOSM) method2–6 and the first-order reliability method (FORM)5–8. Level III is a technique for obtaining the limit state exceedance probability directly, under the assumption that the probabilistic characteristics of all the uncertain factors governing the failure mode are known. A concrete example is the Monte Carlo method5,6,9–11, illustrated by the sketch below. There are numerous applications of the reliability design method to soil structures, with results notably for embankments12,13 and retaining walls14,15, and in examinations of slope safety3,4,14,16–19. Analytical research on reinforced embankments has been active recently20–22, and reinforced embankments are the target of this research. A Level II reliability analysis method for the static loading condition was applied, and the safety index and the limit state exceedance probability were calculated for different embankment heights and performance ranks based on the railway design standard23. The relation between these indices and the conventional safety factor was then considered. For the calculation of the life-cycle cost of reinforced embankments under earthquake loading, refer to reference 24.
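As an indication of how a Level III calculation proceeds, the following minimal sketch estimates the limit state exceedance probability of a generic performance function Z = g(X) by direct sampling. The performance function and the distribution parameters are illustrative assumptions, not those of the embankment models analysed below:

```python
import random

def monte_carlo_pf(g, means, sds, n=200_000, seed=1):
    """Level III estimate of Pf = P(Z <= 0): sample the basic random
    variables (assumed independent and normal) and count the proportion
    of limit state exceedances."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if g([rng.gauss(m, s) for m, s in zip(means, sds)]) <= 0.0)
    return failures / n

# Generic resistance-minus-load example: Z = R - S.
g = lambda x: x[0] - x[1]
pf = monte_carlo_pf(g, means=[1.3, 1.0], sds=[0.13, 0.10])
print(f"limit state exceedance probability ~ {pf:.4f}")  # about 0.034 here
```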
2 ANALYSIS METHOD

2.1 Calculation of safety factor
The safety factor FS of the reinforced embankment was calculated by the modified Fellenius method using the following expression:
FS = (Mrw + Mrc + Mrt − Kh·Mrk) / (Mdw + Kh·Mdk)    (1)

Here, Kh is the horizontal seismic coefficient, Mdw is the sliding moment due to self-weight, Mrw is the resisting moment due to self-weight, Mrc is the resisting moment due to cohesion, Mrt is the resisting moment due to the reinforcements, Mdk is the increment of sliding moment per unit seismic inertia force, and Mrk is the decrease of resisting moment per unit seismic inertia force. Because this research is limited to the static loading condition, the horizontal seismic coefficient terms are not considered. Moreover, the partial safety factor concerning reinforcement pull-out and the partial safety factor concerning reinforcement breaking strength were based on the railway standards23.

2.2 Calculations of safety index and limit state exceedance probability

The reliability analysis technique used in this examination is FORM, a Level II reliability design method. If the design point is unique and the performance function is linear, FORM obtains an accurate solution in a short time. However, when there are two or more design points or the performance function is nonlinear, a large error results. This examination targets embankments constructed on stable ground, for which a single design point can be assumed and the performance function can be treated as linear, so FORM was used. FORM is briefly explained as follows. Let X1–Xn be the basic random variables for the design, let g be a function of them, and let Z be the value of the performance function. The occurrence of limit state exceedance can then be judged as follows:

Z = g(X1, X2, …, Xn) > 0: safe    (2)

Z = g(X1, X2, …, Xn) ≤ 0: limit state exceeded    (3)

Here, the performance function Z was set as follows:

Z = FS − 1    (4)

In this research, FS was taken as expression (1). If the value of the performance function is positive, the structure is safe with respect to the limit state, according to expression (2). If the performance function is zero or negative, the limit state is exceeded and the structure is at risk, according to expression (3). In almost all cases the performance function Z is a complex function, and it is difficult to obtain the limit state exceedance probability exactly. Therefore, the limit state exceedance probability is calculated by expanding the performance function Z in a Taylor series, truncating the series at the first-order term, and thereby linearizing it. In FORM, the expansion is carried out around a point where the performance function becomes 0. That is, expanding in a Taylor series around the design point xj* of the basic random variables, the performance function Z is obtained from the following expression:

Z ≅ g(x1*, x2*, …, xn*) + Σj (∂g/∂Xj)|x* (Xj − xj*)    (5)

The mean value µZ and the variance σZ² of the performance function Z are obtained as follows:

µZ ≅ g(x1*, x2*, …, xn*) + Σj (∂g/∂Xj)|x* (µj − xj*)    (6)

σZ² ≅ Σj (∂g/∂Xj)²|x* σj²    (7)

The safety index β can be obtained as follows by the use of expressions (6) and (7):

β = µZ / σZ    (8)

In the special case where the basic random variables are mutually independent and follow the normal distribution, the following relation holds between the safety index β and the limit state exceedance probability Pf (Z ≤ 0):

Pf = Φ(−β)    (9)

where Φ is the standard normal probability distribution function, and the safety index β is a measure of how far the mean value of Z lies, in relative terms, from the dangerous point (Z = 0). That is, the safety index β becomes large when the mean value of the performance function Z is large and its standard deviation is small, so that there is a margin of safety. Moreover, even if the mean value of the performance function Z is large (equivalent to a large safety factor), the structure is not necessarily safe if the standard deviation is large. This point is a major difference from the conventional design method.
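To make the Level II calculation concrete, the sketch below evaluates expressions (6)–(9) numerically for a schematic performance function Z = FS − 1, taking derivatives by central differences. For simplicity the expansion point is placed at the mean values (the mean-value FOSM variant); for the linear performance function assumed in this study this coincides with the design point expansion. The moment coefficients and the driving moment are illustrative assumptions, not the values of the embankment sections analysed below:

```python
import math
from statistics import NormalDist

def fosm_beta(g, means, sds, h=1e-6):
    """First-order estimate of the safety index beta = mu_Z / sigma_Z for
    independent basic random variables, cf. expressions (6)-(8)."""
    mu_z = g(means)
    var_z = 0.0
    for j in range(len(means)):
        x_hi, x_lo = list(means), list(means)
        x_hi[j] += h
        x_lo[j] -= h
        dg = (g(x_hi) - g(x_lo)) / (2 * h)  # central-difference dZ/dXj
        var_z += (dg * sds[j]) ** 2
    return mu_z / math.sqrt(var_z)

# Schematic Z = FS - 1: resisting moments from cohesion, friction and
# reinforcement over a fixed driving moment. The moment coefficients and
# the driving moment are assumptions made only for this sketch.
def g(x):
    c, tan_phi, t = x            # cohesion, tan(friction angle), reinforcement
    m_rc = 120.0 * c             # resisting moment terms (schematic)
    m_rw = 900.0 * tan_phi
    m_rt = 15.0 * t
    m_dw = 1500.0                # driving moment (assumed)
    return (m_rc + m_rw + m_rt) / m_dw - 1.0

# Mean values in the spirit of Tables 1-2 (c = 6 kN/m2, phi = 45 deg, long
# reinforcement 30 kN/m) and coefficients of variation from Table 4.
means = [6.0, math.tan(math.radians(45.0)), 30.0]
sds = [0.10 * means[0], 0.10 * means[1], 0.05 * means[2]]

beta = fosm_beta(g, means, sds)
pf = NormalDist().cdf(-beta)     # expression (9): Pf = Phi(-beta)
print(f"beta = {beta:.2f}, limit state exceedance probability = {pf:.2e}")
```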
3 ANALYTICAL MODELS

In this research, the assumed specifications described on p. 384 of the railway design standard23 were examined. There are two kinds of
Figure 1. Cross sections of embankments (Performance rank I, gradient of slopes 1 : 1.8)
examined structures: the reinforced embankment (with long reinforcement) and the unreinforced embankment (without long reinforcement). In the assumed specifications, the reinforced embankment is specified for performance rank I and the unreinforced embankment for performance ranks II and III. Figure 1 shows the reinforced embankment model section for performance rank I, and Figure 2 shows the unreinforced embankment model section for performance ranks II and III. Four embankment heights were assumed: 3, 4.5, 6 and 9 m. The slope gradient was set to 1:1.8 for performance rank I and to 1:1.5 for performance ranks II and III. The embankment section is divided into two parts (the surface part and the deep part) according to p. 58 of the design standard23: the zone within 2 m of the embankment slope is called the embankment surface part (henceforth surface), and the other parts constitute the embankment deep part (henceforth deep) (Fig. 3). The design values of the soils (Table 1) and the positions where the reinforcement is constructed are assigned on this basis. Moreover, the embankment soil is also divided into two zones, the upper part and the lower part of the embankment23. Specifically, for an embankment height of 3 m, the whole embankment is treated as the upper part. For heights of 4.5 m and 6 m, the zone from the top down to 3 m is the upper part and the zone below 3 m is the lower part of the embankment; the same applies for the 9 m height. In particular, an analytical model with a berm of 2.0 m width was assumed for the case
Figure 2. Cross sections of embankments (Performance rank II and III, gradient of slopes 1:1.5)

Table 1. Soil material parameters (mean values).

Material                                 Unit    Soil 1  Soil 2  Soil 3
Unit weight γt                           kN/m3   18      17      16
Surface part: cohesion c                 kN/m2   3       3       3
Surface part: internal friction angle φ  deg.    40      35      30
Deep part: cohesion c                    kN/m2   6       6       6
Deep part: internal friction angle φ     deg.    45      40      35
of 9 m embankment height, because regulations require the installation of a berm of 1.5 m standard width when the height of the embankment exceeds 6 m. Regarding the soil materials, performance rank I uses soil 1 for both the upper and lower parts; performance rank II uses soil 1 in the upper part and soil 2 in the lower part; performance rank III uses soil 2 in the upper part and soil 3 in the lower part. Two kinds of reinforcement were assumed (long and short). The vertical spacing of the short reinforcement is 0.3 m and that of the long reinforcement is 1.5 m. The design values of the reinforcement are indicated in Table 2. The short reinforcement is placed in the embankment surface part and the long reinforcement in the deep part (Fig. 3).
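The case matrix just described can be restated compactly as data; in the following sketch the numbers come from the text above, and only the layout is an illustrative choice:

```python
# Analytical model matrix for the embankment cases (restating the text).
CASES = {
    "I":   {"gradient": "1:1.8", "upper_soil": "soil 1", "lower_soil": "soil 1"},
    "II":  {"gradient": "1:1.5", "upper_soil": "soil 1", "lower_soil": "soil 2"},
    "III": {"gradient": "1:1.5", "upper_soil": "soil 2", "lower_soil": "soil 3"},
}
HEIGHTS_M = [3.0, 4.5, 6.0, 9.0]

def berm_width_m(height_m, adopted_width=2.0):
    """A berm (standard width 1.5 m) is required when the embankment height
    exceeds 6 m; a 2.0 m wide berm was adopted in the 9 m model."""
    return adopted_width if height_m > 6.0 else 0.0

for rank, spec in CASES.items():
    for h in HEIGHTS_M:
        print(rank, h, spec["gradient"], berm_width_m(h))
```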
Table 2. Reinforcement material parameters.

Reinforcement  Unit  Mean value
Long           kN/m  30
Short          kN/m  2
Figure 3. Division of embankment surface and subsurface.

Table 3. Overburden loads.

Performance rank  Track structure  Unit   Load value
I                 Concrete         kN/m2  15
II, III           Ballast          kN/m2  10
Table 4. Coefficients of variation.

Parameter                  Coefficient of variation
Unit weight γt             0.05
Cohesion c                 0.10
Internal friction angle φ  0.10
Long reinforcement         0.05
Short reinforcement        0.05
Although the berm installation for heights of 6 m or more results in a complex analytical section, the case of 9 m height was examined as one example. In addition, the applied loads23 are shown in Table 3, and the applied coefficients of variation25,26 are shown in Table 4. Here, the probability distributions of the soil parameters and the reinforcement strengths were assumed to be normal and mutually independent. The mean values of the soil parameters were set based on the railway standard.

Figure 4. Embankment height versus safety index and limit state exceedance probability.
4 ANALYSIS RESULTS

Figure 4 shows the safety index and the limit state exceedance probability versus the embankment height. The two measures are mutually equivalent: the larger the safety index and the smaller the limit state exceedance probability, the safer the structure. In (a), the safety index takes its maximum value at an embankment height of 4.5 m, decreases as the embankment height increases, and remains relatively high for every embankment height. This suggests a strong effect of the reinforcement constructed in the embankment deep part. Moreover, the reason why the safety index takes its maximum value at 4.5 m is thought to be that, up to this height, the resistance provided by the reinforcement outweighs the increase in driving force caused by the greater embankment height. For the safety index in (b), a decreasing tendency with rising embankment height is seen, similar to (a); why the values in (b) are low is a problem for future examination. The safety index in (c) decreases from 3 m to 6 m embankment height and recovers at 9 m; the recovery at the 9 m embankment height is thought to be an effect of installing the berm. The limit state exceedance probability shows a tendency similar to the safety index for each performance rank in (a)–(c).
Figure 5. Embankments height versus Safety index and safety factor.
Figure 5 shows the relation between the safety index and the safety factor versus the embankment height. The safety index data are the same as in Figure 4. In (a), the safety factor is almost constant. In (b) and (c), the safety factor increases with the height of the embankment, although it decreases when the height reaches 9 m. When the safety index is compared with the safety factor, the increase and decrease tendencies of the safety index and of the safety factor with the embankment height are seen to correspond in (a)–(c).
Figure 6. Performance rank versus safety index and limit state exceedance probability.
Figure 6 shows the relation between the safety index and the limit state exceedance probability and the performance rank. The embankment gradient of each performance rank is fixed, with the performance rank taken along the horizontal axis. The safety index and the limit state exceedance probability are seen to show similar increase and decrease tendencies from (a) to (d).

Figure 7 shows the relation between the safety index and the safety factor and the performance rank. The safety index data are the same as in Figure 6. The safety factor is seen to decrease from (a) to (d), in order of performance rank I, II, and III. A similar tendency is seen in the increase and decrease of the safety index and of the safety factor.

Figure 7. Performance rank versus safety index and safety factor.

5 CONCLUSION

The shift from the allowable stress design method to the limit state design method is now under way. Although the limit state design method is effective for performance-based design, it is limited in its ability to quantitatively evaluate soil materials and reinforcements, whose properties are inherently variable. In this research, reliability analyses were executed in order to quantitatively evaluate the performance of structures built with such soil materials and reinforcements. Concretely, reinforced and unreinforced embankments of 3, 4.5, 6 and 9 m height under the static loading condition were modelled based on the design standard for railway structures in Japan. Analyses assuming circular slip failure were carried out, and the safety index, the limit state exceedance probability and the safety factor were calculated. FORM (first-order reliability method) was used for the calculation of the safety index and the limit state exceedance probability, and the modified Fellenius method was used for the calculation of the safety factor. FORM has the feature of obtaining an accurate solution in a short time; it is one of the techniques for evaluating reliability by deriving the limit state exceedance probability from the safety index obtained from the mean value and the standard deviation of the performance function. However, when FORM is applied, it is necessary to meet the requirements that the design point be unique and the performance function linear. Accordingly, embankments constructed on stable ground, which meet these requirements, were assumed. In the modelling, the points considered besides the embankment height were the gradient of the slope faces, the arrangement of the soil materials, and the presence of a berm. When the safety index and the limit state exceedance probability resulting from the reliability analysis were compared with the safety factor resulting from the allowable stress method, it was found that their tendencies with respect to safety corresponded approximately. The features of the analytical results and future problems are given in 1) to 5) below.
1) In the relation between the embankment height and the safety index, for heights of 3, 4.5 and 6 m in performance ranks II and III, the safety index becomes smaller and the limit state exceedance probability grows as the embankment height increases. 2) In the relation between the embankment height and the safety factor, the safety factor is almost constant in performance rank I, while the safety index becomes smaller as the embankment height increases for heights of 3, 4.5 and 6 m in performance ranks II and III.
3) In the relation between the performance rank and the safety index, the safety index becomes smaller as the performance rank rises. Similarly, in the relation between the performance rank and the limit state exceedance probability, the limit state exceedance probability grows as the performance rank rises. 4) In the relation between the performance rank and the safety factor, the safety factor decreases as the performance rank rises. 5) For the calculation of the 9 m embankment height with a berm, comparatively high stability was shown. Treating the safety index, the limit state exceedance probability and the safety factor for other embankment heights, and examining analytical models including berms, are future tasks. The static loading condition was assumed in this research. In the future, the train loading condition and the seismic loading condition, etc. will be examined, and the reliability of reinforced embankments will be evaluated comprehensively.
REFERENCES

[1] Hoshiya, M. and Ishii, K. 1986. The reliability designing of structures. Kajima Institute Publishing. (In Japanese)
[2] Cornell, C.A. 1967. Bounds on the reliability of structural systems. Journal of Structural Engineering, 93(1): 171–200, ASCE.
[3] Wu, T.H. and Kraft, L.M. 1970. Safety analysis of slopes. Journal of Soil Mechanics and Foundation Division, 96(2): 609–630, ASCE.
[4] Tang, W.H., Yucemen, M.S. and Ang, A.H.-S. 1976. Probability-based short term design of soil slopes. Canadian Geotechnical Journal, 13: 201–215.
[5] Ayyub, B.M. and Haldar, A. 1984. Practical structural reliability techniques. Journal of Structural Engineering, 110(8): 1707–1724, ASCE.
[6] Lian, Y. and Yen, B.C. 2003. Comparison of risk calculation methods for a culvert. Journal of Hydraulic Engineering, 129(2): 140–152.
[7] Hasofer, A.M. and Lind, N.C. 1974. Exact and invariant second-moment code format. Journal of Engineering Mechanics Division, 100(1): 111–121, ASCE.
[8] Rackwitz, R. and Fiessler, B. 1978. Structural reliability under combined random load sequences. Computers and Structures, 9: 489–494.
[9] Nguyen, V.U. and Chowdhury, R.N. 1985. Simulation for risk analysis with correlated variables. Géotechnique, 35(1): 47–58.
[10] Gui, S., Zhang, R., Turner, J.P. and Xue, X. 2000. Probabilistic slope stability analysis with stochastic soil hydraulic conductivity. Journal of Geotechnical and Geoenvironmental Engineering, 126(1): 1–9.
[11] El-Ramly, H., Morgenstern, N.R. and Cruden, D.M. 2002. Probabilistic slope stability analysis for practice. Canadian Geotechnical Journal, 39: 665–683.
[12] Matsuo, M. and Kuroda, K. 1974. Probability approach to design of embankments. Soils and Foundations, 14(2): 1–17.
[13] Christian, J.T., Ladd, C.C. and Baecher, G.B. 1994. Reliability applied to slope stability analysis. Journal of Geotechnical Engineering, 120(12): 2180–2207, ASCE.
[14] Duncan, J.M. 2000. Factors of safety and reliability in geotechnical engineering. Journal of Geotechnical and Geoenvironmental Engineering, 126(4): 307–316.
[15] Hoeg, K. and Murarka, R.P. 1974. Probabilistic analysis and design of a retaining wall. Journal of Geotechnical Engineering, 100(3): 349–365, ASCE.
[16] Vanmarcke, E.H. 1977. Reliability of earth slopes. Journal of Geotechnical Engineering, 103(11): 1247–1265, ASCE.
[17] Alonso, E.E. 1976. Risk analysis of slopes and its application to slopes in Canadian sensitive clays. Géotechnique, 26(3): 453–472.
[18] Bergado, D.T. and Anderson, L.R. 1985. Stochastic analysis of pore pressure uncertainty for the probabilistic assessment of the safety of earth slopes. Soils and Foundations, 25(2): 87–105.
[19] Li, K.S. and Lumb, P. 1987. Probabilistic design of slopes. Canadian Geotechnical Journal, 24(4): 520–535.
[20] Shinoda, M., Horii, K., Yonezawa, T., Tateyama, M. and Koseki, J. 2006. Reliability-based seismic deformation analysis of reinforced soil slopes. Soils and Foundations, 46(4): 477–490.
[21] Shinoda, M., Yonezawa, T., Tateyama, M. and Koseki, J. 2005. Limit state exceedance probabilities of reinforced retaining walls. Journal of Geotechnical Engineering, 792, III(71): 119–129. (In Japanese)
[22] Shinoda, M., Hara, K., Masuo, T. and Koseki, J. 2006. Reliability analysis on reinforced soil structures considering the statistical characteristics of reinforcement breaking strength. Geosynthetics Engineering Journal, 21: 89–96. (In Japanese)
[23] Railway Technical Research Institute, 2007. Design standard for railway structures: Soil structures. Maruzen Co., Ltd. (In Japanese)
[24] Watanabe, K., Ohki, M., Shinoda, M., Kojima, K. and Tateyama, M. 2005. Triaxial tests of the soil strength parameters used in checking on embankment stability. Quarterly Report of RTRI, 19(3): 29–34. (In Japanese)
[25] Hara, K. and Masuo, T. 2005. Variability evaluation of geogrid tension strength in the ISO geotextile examination standard. Geosynthetics Engineering Journal, 20: 287–294. (In Japanese)
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Maximum likelihood analysis of case histories for probability of liquefaction Jianye Ching Department of Civil Engineering, National Taiwan University, Taipei
C. Hsein Juang Department of Civil Engineering, Clemson University, Clemson, USA
Yi-Hung Hsieh Department of Construction Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
ABSTRACT: In this paper, the maximum likelihood method is used to derive probabilistic models for liquefaction potential evaluation using piezocone penetration test (CPTU). Emphasis of the paper is placed on establishing the relationship between the factor of safety computed with a newly developed deterministic CPTU model and the probability of liquefaction. The modeling error of the limit state function is characterized with a bias factor and several models of the bias factor are investigated within the maximum likelihood analysis framework. The best performing model is judged based on the criteria of the well-accepted Information Theory, as well as through the proposed probability-probability (P-P) plot. The significance of the developed probabilistic model is discussed and comparison with the existing probabilistic models is presented.
1 INTRODUCTION
In this paper, maximum likelihood analyses of a liquefaction database are carried out to develop probabilistic models for liquefaction potential evaluation using the piezocone penetration test (CPTU). The liquefaction database consists of case histories compiled by Chen et al. (2008) from different sources. Each case, either liquefied or non-liquefied, is characterized with the piezocone penetration test (CPTU) data, including cone tip resistance (qt), sleeve friction (fs), and penetration porewater pressure (u2), and the seismic parameters, including peak ground surface acceleration (amax) and moment magnitude (Mw). One objective of the study is to develop a mapping function that relates the probability of liquefaction (PL) to the factor of safety (FS) against the occurrence of liquefaction computed with a CPTU model (Chen et al. 2008; Juang et al. 2008). This mapping function, an empirical relationship between PL and FS, is developed through the maximum likelihood analysis. The developed relationship allows for a practical, one-step estimate of the liquefaction probability PL based on the computed FS. The simplified procedure for evaluating the liquefaction potential of a soil was created by Seed and Idriss (1971), in which the seismic loading required to initiate liquefaction was expressed as a cyclic stress ratio (CSR) and the soil resistance against liquefaction was determined through correlations with in situ tests such as the standard penetration test (SPT),
cone penetration test (CPT), Becker penetration test (BPT), and shear wave velocity (Vs) measurements. An excellent summary of the SPT-, CPT-, BPT-, and Vs-based models was documented by Youd et al. (2001). This paper focuses on the evaluation of liquefaction potential using piezocone penetration tests (CPTU), and thus only the cone penetration-based evaluation procedure is discussed hereinafter. There is an increasing need to evaluate the liquefaction potential in terms of the probability of liquefaction. Earlier models for the probability of liquefaction are empirical models established through logistic regression analysis of historical data or through Bayes' theorem (for example, Juang et al. 2003). Recently, Moss et al. (2006) developed a simplified model for the probability of liquefaction based on a comprehensive analysis that utilizes the maximum likelihood principle. These probabilistic models allow for the determination of the probability of liquefaction without evaluating parameter uncertainty on the part of the user. In the Moss et al. (2006) model, the model uncertainty is fully characterized and clearly stated by the model developers. However, it is up to the user to further incorporate the parameter uncertainty, if present, in a separate analysis (for example, using Monte Carlo simulation) to determine the probability of liquefaction. On the other hand, Juang et al. (2006) presented a spreadsheet solution, based on the first order reliability method (FORM), that explicitly considers both model uncertainty and parameter uncertainty. The user of this method is required to
evaluate all input variables for their mean values and standard deviations, but the spreadsheet is a standalone tool that already implements FORM; thus, no separate analysis on the part of the user is needed for the determination of the probability of liquefaction. The existing CPT-based methods, either deterministic or probabilistic, as mentioned previously, are quite well established, and there is a concern of diminishing returns for further efforts to improve upon the existing methods. However, the effect of fine-grained material ("fines") on liquefaction resistance is still not well understood, particularly with respect to the liquefaction resistance of soils that are considered "too clay rich to liquefy" in a CPT-based investigation (Robertson and Wride 1998). Idriss and Boulanger (2004 & 2006) questioned the use of the soil behavior type index Ic in the Robertson and Wride method as a proxy for the effect of fines on liquefaction resistance. It should be noted that in Robertson and Wride's formulation of Ic, the penetration porewater pressure (u2) is not used; for distinction, their definition of Ic is denoted hereinafter as Ic,RW. To account for the effect of fines, Moss et al. (2006) suggested use of the friction ratio (Rf = fs/qt × 100) in lieu of Ic,RW, whereas Li et al. (2007) and Shuttle and Cunning (2007) suggested using a version of Ic that includes the penetration porewater pressure in the formulation. Based on the work of Li et al. (2007) and Shuttle and Cunning (2007), a CPTU model was initiated by Chen et al. (2008) and fully developed by Juang et al. (2008), in which the penetration porewater pressure [and thus the excess pore pressure ratio (Bq)] is included in the formulation of Ic and CRR. This CPTU model is applicable to a wider range of geomaterials, including soils that were once considered "too clay-rich to liquefy" using the Robertson and Wride method. The CPTU model is shown to be comparable to the existing CPT-based methods such as the Robertson and Wride method in general, and in cases that were judged "too clay-rich to liquefy," it is shown to yield improved results. Therefore, it is considered a worthwhile effort to develop a probabilistic CPTU-based model for liquefaction potential evaluation. In this paper, this effort culminates in a simplified mapping function (relationship) between the probability of liquefaction and the factor of safety determined with this CPTU model.
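Where the text mentions that the user may need to incorporate parameter uncertainty through a separate Monte Carlo analysis, the following sketch illustrates one way such a propagation could look. The function name, the lognormal input model, and the `pl_model` callable are illustrative assumptions, not part of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def pl_with_parameter_uncertainty(pl_model, means, covs, n_samples=10_000):
    """Propagate parameter uncertainty by Monte Carlo simulation.

    A sketch, not the cited authors' implementation: input parameters
    are assumed lognormal with the given means and COVs; `pl_model`
    maps one sampled parameter vector to a nominal probability of
    liquefaction, and the result averages over the samples.
    """
    means = np.asarray(means, dtype=float)
    covs = np.asarray(covs, dtype=float)
    sigma = np.sqrt(np.log(1.0 + covs ** 2))       # lognormal shape
    mu = np.log(means / np.sqrt(1.0 + covs ** 2))  # log-median of inputs
    samples = rng.lognormal(mu, sigma, size=(n_samples, means.size))
    return float(np.mean([pl_model(x) for x in samples]))
```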
2 DATA SET AND THE DETERMINISTIC CPTU MODEL

A database of case histories, consisting of 190 liquefied cases and 123 non-liquefied cases, was compiled by Chen et al. (2008) from five sources (Moss et al. 2006; Ku et al. 2004; Lai et al. 2004; Bray et al. 2004; PEER 2007). Notably, the CPT data reported in Moss et al. (2006) did not include Bq. An examination of all available CPTU data revealed that for those cases with Ic,RW (the Ic used in the Robertson and Wride method) < 2.2 and friction ratio Rf < 1.5%, the absolute values of Bq are very small (Bq < 0.05). Assuming Bq = 0 for these cases causes a maximum error in the computed Ic of less than 3%. Thus, the database from Moss et al. (2006) was screened with the criteria of Ic,RW < 2.2 and Rf < 1.5%, and 116 "qualified" cases were selected; these cases were assumed to have Bq = 0 and were included in the database. The rest of the data in the compiled database were derived from the 1999 Chi-Chi, Taiwan Earthquake and the 1999 Kocaeli, Turkey Earthquake, for which Bq was available. This data set provides a balance between "cleaner" sands and soils with higher Ic. Figure 1 characterizes, as a whole, the soils in the adopted data set of case histories.

Figure 1. Case histories in the database plotted on the soil behavioral classification chart (after Juang et al. 2008).

The deterministic CPTU model (Juang et al. 2008) was developed through learning of the case history data with artificial neural networks (ANN). The ANN approach has been employed in previous studies with satisfactory results (Juang et al. 2000 & 2003; Kim and Kim 2006). In this CPTU model, three derived parameters are used as the input. The first is an adjusted cone tip resistance qt1N defined by Idriss and Boulanger (2004; 2006):
where σatm is the atmospheric pressure (1 atm = 1.013 bars = 101.3 kPa). Equation 1 was proposed by Idriss and Boulanger (2004; 2006) in conjunction with their CPT-based simplified method for use in liquefaction potential evaluation. According to Juang et al. (2008), choice of
380
qt1N as defined in Equation 1, as opposed to other normalization models such as Olsen (1997) and Robertson and Wride (1998), is based on the fact that it produced the best results in the ANN learning of case histories. The second required parameter for the CPTU model is Ic, defined as follows (after Jefferies and Davies 1993, Jefferies and Been 2006):
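The printed Equation 2 is not reproduced in this text. As a point of reference only, the sketch below assumes the Jefferies and Been (2006) form of Ic discussed in the surrounding paragraphs, including the {Qt(1 − Bq) + 1} term; the coefficients are the commonly quoted ones and should be checked against the paper's Equation 2.

```python
import numpy as np

def soil_behavior_index(qt, fs, u2, u0, sigma_v, sigma_v_eff):
    """Soil behavior type index Ic including the Bq term.

    A sketch assuming the Jefferies & Been (2006) form; all stresses
    and pressures must be in consistent units (e.g., kPa).
    """
    qnet = qt - sigma_v                      # net cone resistance
    Q = qnet / sigma_v_eff                   # normalized tip resistance Qt
    F = 100.0 * fs / qnet                    # normalized friction ratio (%)
    Bq = (u2 - u0) / qnet                    # excess pore pressure ratio
    return np.sqrt((3.0 - np.log10(Q * (1.0 - Bq) + 1.0)) ** 2
                   + (1.5 + 1.3 * np.log10(F)) ** 2)
```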
It should be noted that the better-known soil behavior type index defined by Robertson and Wride (1998), denoted previously as Ic,RW, is actually a modification of Equation 2 (excluding the Bq term and modifying the coefficients). According to Shuttle and Cunning (2007), the dimensionless term {Qt(1 − Bq) + 1} in Equation 2 is "fundamental for the evaluation of undrained response during CPTU sounding," as it allows for greater differentiation between silty clays and clayey silts. The first two parameters, qt1N and Ic, are required for the calculation of the cyclic resistance ratio (CRR) in this CPTU model. The CRR is computed as follows:
Figure 2. Three dimensional boundary surfaces at three different angles with case history data (after Chen et al. 2008; Juang et al. 2008).
Note that Equation 3 was established through calibration with the CPTU case histories where the CSR (cyclic stress ratio) was computed as follows (Idriss and Boulanger 2006):
where σv and σ′v are the total stress and the effective stress, respectively, of the soil of concern at a given depth; g is the acceleration of gravity (the unit in which the peak ground surface acceleration amax is expressed); rd is the depth-dependent shear stress reduction factor; MSF is the magnitude scaling factor; and Kσ is the overburden correction factor for the cyclic stress ratio. Since Equation 3 was created on the basis of Equations 1, 2, and 4, these equations must be used together; collectively, they form the CPTU model. The validity of this CPTU model has been examined by Chen et al. (2008) and Juang et al. (2008). Figure 2 shows the liquefaction boundary surface that represents this model along with the case history data. The three-dimensional boundary surface is shown at three different angles as an example to illustrate its ability to delineate liquefied cases from non-liquefied cases. In this paper, this CPTU model is characterized with rigorous probabilistic analyses, with the objective of developing an empirical relationship between the probability of liquefaction (PL) and the factor of safety (FS) that is computed with this CPTU model.
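Equation 4 is likewise not reproduced here. The sketch below assumes the familiar Seed-Idriss form with the Idriss and Boulanger (2006) adjustment factors named in the text, treating rd, MSF, and Kσ as externally supplied values.

```python
def cyclic_stress_ratio(amax_g, sigma_v, sigma_v_eff, rd, msf, k_sigma):
    """Cyclic stress ratio adjusted by MSF and K_sigma.

    A sketch assuming CSR = 0.65 (amax/g)(sigma_v/sigma_v') rd
    / (MSF * K_sigma); amax_g is the peak ground surface acceleration
    expressed in units of g.
    """
    return 0.65 * amax_g * (sigma_v / sigma_v_eff) * rd / (msf * k_sigma)
```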
3 METHODOLOGY FOR DEVELOPING PL–FS RELATIONSHIP
To begin with, let us denote the database as D = {(CSRi, qt1N,i, Ic,i, Li) | i = 1, ..., 313}. For the ith data point, Li is an indicator of liquefaction: Li = 1 for a liquefied case; otherwise, Li = 0. The subsets for the non-liquefied cases and the liquefied cases are represented by D1 = {(CSRi, qt1N,i, Ic,i, Li = 0) | i = 1, ..., 123} and D2 = {(CSRi, qt1N,i, Ic,i, Li = 1) | i = 1, ..., 190}, respectively. Let us further denote θ = {CSR, qt1N, Ic}. The goal of the probabilistic analysis is to find a function g(.) such that the probability of liquefaction P(L = 1) = g(θ). This goal can be re-written in terms of a mapping function h(.) so that:
Prior to presenting the methodology, it is noted that the boundary surface shown in Figure 2 can be reduced to the more traditional, two-dimensional (2-D) "boundary curve" if the concept of an equivalent cone tip resistance is adopted.
Figure 3. CRR computed with two different but equivalent equations (using q∗t1N versus qt1N).
Figure 4. Two-dimensional boundary curve and data projected from three-dimensional boundary surface shown in Figure 2.
In this regard, the converting factor K is obtained by numerical solutions of the CPTU model:

And with this factor, the equivalent cone tip resistance is computed as: q∗t1N = K · qt1N. It is noted that this equivalent tip resistance is obtained for the reference condition of Ic = 1.51, the "lowest" point on the boundary surface for a given qt1N (Juang et al. 2008). Thus, the CRR calculated with a pair of (qt1N, Ic) using Equation 3, denoted previously as CRR(qt1N, Ic), is equal to the CRR computed with the following equation (derived from Equation 3 by setting Ic = 1.51 and replacing qt1N with q∗t1N):

To demonstrate the equivalency of the CRR computed from Equation 3 with that computed from Equation 7, the 313 cases in the database are computed with both equations, and the results are shown in Figure 3. As shown in this figure, the equivalency is validated. It should be noted that although use of Equations 6 and 7 is not necessarily any easier than use of Equation 3, the equivalency approach is more convenient for the subsequent maximum likelihood analysis and is adopted in this paper. Figure 4 shows the boundary curve, the 2-D counterpart of the 3-D boundary surface, along with the equivalent 2-D data. The CRR defined by this boundary curve (Equation 7) is denoted herein as CRR(q∗t1N).

3.1 Limit state function and maximum likelihood estimate

Let R be the limit state function so that R > 1 means liquefaction:
where Z characterizes the modeling errors of the limit state function. Further, let the cumulative distribution function (CDF) of Z be Fb,δ, where b is the mean value of Z, serving as the bias factor for the ratio CSR/CRR, and δ = δ(θ, a) is the coefficient of variation (COV) of Z, where a is the unknown parameter that characterizes the δ(θ, a) function. The unknown parameters a and b may be estimated based on the database D. With the limit state function thus defined, the goal expressed in Equation 5 can be refined as follows:
where a^ and b^ are the best estimates for the unknown parameters a and b based on the database D, and the function h(FS) becomes:
Finally, the form or model of the coefficient of variation of Z, namely, δ(θ, a), is also found to be a significant factor in the results of the maximum likelihood analysis. Four simple models are examined in this study:
Figure 5. PDFs and CDFs of four different types of distributions examined.
To determine the parameters a and b with the principle of maximum likelihood, the likelihood function f[D|(a, b)] is first expressed as:
Then, by maximizing the log-likelihood, log{f[D|(a, b)]}, the best estimates (a^, b^) for the unknown parameters a and b can be obtained. As will be shown later, the shape of the assumed probability distribution of Z can affect the results of the maximum likelihood analysis. In this study, the following cumulative distribution functions (CDFs) are examined: Lognormal:
Gaussian:
Minimum Gumbel:
Maximum Gumbel:
As an example, these CDFs and their corresponding probability density functions (PDFs) are illustrated in Figure 5 for one set of distribution parameters, mean = 1 and coefficient of variation (COV) = 30%. Of course, this set of distribution parameters is arbitrarily selected for illustration purposes only.
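The printed expressions for these four CDFs are not reproduced in this text. The sketch below parameterizes each candidate distribution of Z by the mean b and the COV δ used in the text; the moment conversions are standard, and scipy's gumbel_l and gumbel_r correspond to the minimum and maximum Gumbel forms, respectively.

```python
import numpy as np
from scipy import stats

def make_cdf(kind, mean, cov):
    """CDF of the bias variable Z for a given mean and COV.

    A sketch of the four candidate distributions named in the text;
    parameterizations use standard moment conversions and scipy's
    conventions (gumbel_l = minimum Gumbel, gumbel_r = maximum Gumbel).
    """
    std = mean * cov
    if kind == "lognormal":
        s = np.sqrt(np.log(1.0 + cov ** 2))        # log-space std
        scale = mean / np.sqrt(1.0 + cov ** 2)     # median of Z
        return stats.lognorm(s, scale=scale).cdf
    if kind == "gaussian":
        return stats.norm(loc=mean, scale=std).cdf
    beta = std * np.sqrt(6.0) / np.pi              # Gumbel scale parameter
    if kind == "min_gumbel":                       # left-skewed PDF
        return stats.gumbel_l(loc=mean + beta * np.euler_gamma,
                              scale=beta).cdf
    if kind == "max_gumbel":                       # right-skewed PDF
        return stats.gumbel_r(loc=mean - beta * np.euler_gamma,
                              scale=beta).cdf
    raise ValueError(f"unknown distribution type: {kind}")
```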
In summary, the likelihood function based on all cases in the database is maximized to yield the best estimates (a^, b^) of the parameters a and b. These best estimates are affected by the assumed type of distribution of Z and by the assumed model for the COV of Z. Thus, the problem at hand is not only to estimate (a^, b^) but also to determine the most appropriate "model" with regard to these two assumptions.
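The printed likelihood function is not reproduced above; the sketch below assumes the standard binary-outcome form, in which the likelihood is the product of PL over the liquefied cases and (1 − PL) over the non-liquefied cases, with PL = 1 − FZ(FS). The helper names and the example COV models are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_bias_model(fs, liquefied, q_star, cdf_family, cov_model):
    """Maximum likelihood estimates (a^, b^) of the bias parameters.

    A sketch assuming the standard binary-outcome likelihood:
    PL_i = 1 - F_Z(FS_i), and the likelihood is the product of PL over
    liquefied cases and (1 - PL) over non-liquefied cases.
    `cdf_family(mean, cov)` returns a CDF, e.g. built from make_cdf in
    the sketch above via functools.partial; `cov_model(a, q_star)` is
    one of the COV models M1-M4.
    """
    fs = np.asarray(fs, dtype=float)
    liquefied = np.asarray(liquefied, dtype=bool)

    def neg_log_likelihood(params):
        a, b = params
        delta = np.broadcast_to(cov_model(a, q_star), fs.shape)
        pl = np.array([1.0 - cdf_family(b, d)(f)   # PL = 1 - F_Z(FS)
                       for f, d in zip(fs, delta)])
        pl = np.clip(pl, 1e-12, 1.0 - 1e-12)       # numerical guard
        return -np.sum(np.where(liquefied, np.log(pl), np.log(1.0 - pl)))

    res = minimize(neg_log_likelihood, x0=np.array([0.1, 1.0]),
                   method="Nelder-Mead")
    return res.x                                    # (a^, b^)

# Example COV models (illustrative forms only): M1 is a constant COV,
# and an M2-style model is linear in q*_t1N, as described in the text.
m1 = lambda a, q: a
m2 = lambda a, q: a * np.asarray(q, dtype=float)
```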
4 SIMULATION RESULTS AND DISCUSSIONS
The maximum likelihood analysis of the database D yields an empirical relationship between the probability of liquefaction and the input parameters (q∗t1N, CSR) as follows:
where CRR(q∗t1N) is the CRR computed with Equation 7 that uses q∗t1N as its input, as noted previously. This empirical equation is derived under the assumption that the variable Z follows the "Minimum Gumbel" distribution and that the COV of Z follows the model designated as M2 (i.e., COV is a linear function of q∗t1N); collectively, this combined condition is referred to herein as the "Minimum Gumbel + M2" model. Use of any other combination (or model) in the maximum likelihood analysis yields an equation similar to Equation 20 but with different coefficients. Contours of liquefaction probability can be plotted easily with Equation 20, and some observations can be made from such graphs. Figure 6 compares the results obtained for two models, "Lognormal + M1" versus "Minimum Gumbel + M1." In both models, the COV of Z follows M1 (i.e., constant COV). However, in Figure 6(a), variable Z is assumed to follow the lognormal distribution (a right-skewed PDF), and the resulting contours are essentially "equally-spaced." On the other hand, in Figure 6(b), variable Z is assumed to follow the Minimum Gumbel distribution (a left-skewed PDF), and the resulting contours are unevenly spaced (the contour spacing is smaller in the lower right but larger in the upper left). The effect of the assumed type of distribution of Z is evident.
Figure 6. Contours of liquefaction probability for two models, “Lognormal + M1 ” versus “Minimum Gumbel + M1 ”.
Figure 8. The modified P-P plots for two models, “Minimum Gumbel + M2 ” versus “Lognormal + M1 ”.
Let us take as an example the choice of lognormal PDF; if Z is indeed lognormal with mean value = b and COV = δ(θ, a), then the liquefaction probability will be:
Figure 7. Contours of liquefaction probability for two models, “Lognormal + M1 ” versus “Lognormal + M3 ”.
Similarly, the effect of the assumed model for the COV of Z can be observed. Figure 7 compares the results of two models, "Lognormal + M1" versus "Lognormal + M3." The difference in the resulting contours of liquefaction probability can again be observed clearly. Therefore, the question as to which model is the most consistent with the data and the most appropriate to use must be addressed. To the best knowledge of the writers, this question has never been systematically addressed in geotechnical data analysis.
4.1 Modified P-P plot as a means to justify the selected model
In this regard, a new tool is developed: the modified probability-probability plot (or modified P-P plot for short). The idea is similar to the ordinary P-P plot, which is very useful in determining the PDF type for the actual distribution of a set of real-valued samples. However, the ordinary P-P plot is only suitable for real-valued observations. For the analysis of liquefaction data, the observations are binary (either 0 or 1), and thus the ordinary P-P plot is not readily applicable; in this paper, it is therefore modified. Recall that in the limit state expressed in Equation 9, R(Z, θ) is a function of CSR, CRR, and Z. The liquefaction probability can be computed as:
The probability obtained from an assumed theoretical PDF model, such as Equation 22, is referred to herein as the theoretical liquefaction probability PLT. This probability will be the "true" liquefaction probability if the lognormal distribution assumption is correct. On the other hand, the empirical liquefaction probability PLE for a given value of PLT can be estimated from an adopted database. For instance, if we are interested in knowing PLE corresponding to PLT = 0.1, we can gather all the data points whose PLT values are close to 0.1 (e.g., within 0.1 ± 0.05) and determine the ratio of the number of actually liquefied cases (nL) over the total number of cases (n) with PLT ≈ 0.1. This ratio, denoted by P̃LE, is an unbiased estimator of the empirical liquefaction probability PLE. The modified P-P plot is simply a plot whose horizontal axis is the theoretical liquefaction probability PLT and whose vertical axis is the estimated empirical liquefaction probability P̃LE. If the assumed PDF is the exact, or close to exact, distribution, the resulting PLT vs. P̃LE plot should roughly follow the 1:1 line. The modified P-P plot thus provides a means to visually examine the validity of an assumed theoretical PDF model: the closer the plot of PLT vs. P̃LE is to the 1:1 line, the better the model. Based on an analysis of simulated data, the modified P-P plot with its confidence band is also seen as an effective tool for rejecting an assumed PDF model given a set of empirical liquefaction probability data. To this end, an example is presented below to further demonstrate this approach using the adopted data set of case histories, rather than simulated data. As an example, two models, the "Minimum Gumbel + M2" model and the "Lognormal + M1" model, are compared in terms of the modified P-P plot using the data set of 313 cases. Figure 8 compares
the modified P-P plots obtained with the two models. The results show that both models are "not rejected" at the 95% confidence level, as in either case the modified P-P line falls within the confidence band. It is interesting to observe that both confidence bands are wider when the liquefaction probability PLT is close to 0.5 and become narrower when the probability gets close to either 0 or 1. This is because the number of data points n is smaller for the bins close to 0.5 and larger for the bins close to 0 or 1 (see the histogram below the P-P plot); the bins with larger n have tighter confidence intervals, and vice versa. Notice that a wider confidence interval (band) does not mean the assumed model is worse: it simply means that there is less evidence with which to reject the model. As shown in Figure 8, for small liquefaction probabilities (PLT < 0.3), the "Minimum Gumbel + M2" model performs excellently, reflected by the observation that the PLT vs. P̃LE line follows the 1:1 line very well. Since the empirical liquefaction probability may be viewed as the actual (or close to actual) probability observed from the data, the "Minimum Gumbel + M2" model is seen as an excellent model in the range of PLT < 0.3. On the other hand, the "Lognormal + M1" model appears to be conservative in that range. Thus, the "Minimum Gumbel + M2" model considerably outperforms the "Lognormal + M1" model in this range. In other ranges of PLT, the two models perform roughly equally. Overall, the "Minimum Gumbel + M2" model outperforms the "Lognormal + M1" model. This result is consistent with the higher AIC and BIC scores obtained previously for the "Minimum Gumbel + M2" model. While not shown herein, the comparison of the "Minimum Gumbel + M2" model with each of the other models has been performed, and a similar conclusion is reached. Thus, the modified P-P plot provides a simple means to visually examine the validity of an assumed model regarding the distribution of Z and the COV model of Z.
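The binning recipe described above translates directly into code. The sketch below follows the text's procedure (bins of PLT ± 0.05 and empirical probability nL/n per bin); the function name and the bin grid are chosen for illustration.

```python
import numpy as np

def modified_pp_points(pl_theoretical, liquefied, half_width=0.05):
    """Points for a modified P-P plot.

    A sketch of the binning scheme in the text: for each target PLT,
    collect the cases whose theoretical probability lies within
    PLT +/- half_width and estimate the empirical probability as the
    fraction of actually liquefied cases in that bin.
    """
    pl_theoretical = np.asarray(pl_theoretical, dtype=float)
    liquefied = np.asarray(liquefied, dtype=bool)
    points = []
    for target in np.arange(0.1, 1.0, 0.1):
        in_bin = np.abs(pl_theoretical - target) <= half_width
        n = int(in_bin.sum())
        if n == 0:
            continue                  # no evidence in this bin
        n_liq = int(liquefied[in_bin].sum())
        points.append((target, n_liq / n, n))
    return points                     # (PLT, estimated PLE, bin size n)
```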
4.2 Summary of the simulation results
The results and discussions presented previously have established that the "Minimum Gumbel + M2" model is the most appropriate model for the maximum likelihood analysis of the adopted database. Under the assumption of this model, the empirical relationship between the probability of liquefaction and the vector of input parameters θ is expressed in Equation 20. Substituting the factor of safety FS for the term CRR(q∗t1N)/CSR and replacing q∗t1N with K · qt1N, the following probability-safety factor relationship is obtained:
Figure 9. Probabilistic and deterministic boundary curves.

Equation 23 provides a means to estimate the liquefaction probability PL based on the calculated factor of safety FS. It should be noted that unlike the simplified Bayesian model by Juang et al. (2002), in which the probability PL is a function of FS only, the probability PL is seen here as a function of both FS and the equivalent cone tip resistance q∗t1N (or K · qt1N). The current result appears to be more consistent with the well-recognized observation that the uncertainty in different parts of the liquefaction boundary curve, corresponding to different q∗t1N or CSR levels, is different: for those parts of the boundary curve that are well constrained by liquefied and non-liquefied data points, there is less uncertainty regarding the "location" of the curve. Thus, for the same FS value, the probability PL is expected to be different at a different level of q∗t1N. It is noted that the developed probability model (either Equation 20 or 23) can be used to construct the traditional probability curves (contours), as shown in Figure 9. For comparison, the deterministic CPTU model by Juang et al. (2008) is also shown in this figure. As shown in Figure 9, the deterministic CPTU boundary curve is seen to correspond to probabilities in the range of 0.1 to 0.3, which indicates that this deterministic model is generally conservative and provides a safety level consistent with the generally acceptable risk for the design of ordinary structures against liquefaction (Juang et al. 2002).
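Because the fitted coefficients of Equations 20 and 23 are not reproduced in this text, the following sketch shows only the generic shape of the mapping: PL depends on both FS and the equivalent tip resistance q∗t1N, through the fitted bias mean b^ and the COV model evaluated at q∗t1N. All names are placeholders that reuse the earlier sketches.

```python
def liquefaction_probability(fs, qt1n, k_factor, a_hat, b_hat,
                             cdf_family, cov_model):
    """Generic PL-FS mapping in the spirit of Equation 23.

    A sketch only; the fitted coefficients of Equation 23 are not
    reproduced in this text. Reuses make_cdf and the illustrative COV
    models from the sketches above.
    """
    q_star = k_factor * qt1n                    # equivalent tip resistance
    delta = cov_model(a_hat, q_star)            # COV of Z at this q*_t1N
    return 1.0 - cdf_family(b_hat, delta)(fs)   # PL = 1 - F_Z(FS)
```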
5 CONCLUSIONS
The information-theoretic criteria (AIC and BIC) and the modified P-P plot are shown to be effective tools for selecting the best performing model in the maximum likelihood analysis. For the liquefaction database analyzed, the combined use of the Minimum Gumbel model (as the cumulative distribution function of the bias variable Z) and the M2 model (as the COV model of Z) yields the best results. The resulting model (Equation 23), an empirical relationship that relates the factor of safety
(FS) to the probability of liquefaction (PL), referred to as the proposed model in this paper, provides a simple means for evaluating liquefaction potential using the piezocone penetration test (CPTU). The well-recognized observation that the uncertainty level in different parts of the liquefaction boundary curve is different is confirmed by the results of this study; in particular, different parts of the deterministic CPTU boundary curve (on which FS = 1) are characterized by different probabilities (Figure 9). Sensitivity analysis within the maximum likelihood framework shows that the resulting model for the liquefaction probability is "stable" with respect to the adopted database. The sensitivity analysis also indicates that the effect of sample disparity is quite negligible, and no special effort on the part of the user of the proposed model is needed. Overall, the standard deviation of the liquefaction probability computed with the proposed model is approximately σPL = 0.035. The liquefaction probability computed with the proposed model is a nominal probability. The nominal probability implies that the input soil (CPTU) and seismic parameters have been properly evaluated and that the extent of variation (or uncertainty) in these parameters is deemed "acceptable." If the uncertainties in the input parameters are too large to ignore, a separate analysis using the Monte Carlo simulation method or a reliability method that explicitly incorporates the parameter uncertainties should be carried out.
ACKNOWLEDGMENTS

The study on which this paper is based was supported in part by the U.S. Geological Survey (USGS), Department of the Interior, under USGS award number 07HQGR0053. The views and conclusions contained in this paper are those of the writers and should not be interpreted as necessarily representing the official policies, expressed or implied, of the U.S. Government.

REFERENCES

Akaike, H. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control, AC-19: 716–723.
Bray, J.D., Sancio, R.B., Durgunoglu, T., Onalp, A., Youd, T.L., Stewart, J.P., Seed, R.B., Cetin, K.O., Bol, E., Baturay, M.B., Christensen, C., and Karadayilar, T. 2004. Subsurface characterization at ground failure sites in Adapazari, Turkey. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 130: 673–685.
Chen, C.H., Juang, C.H., and Schuster, M.J. 2008. Empirical model for liquefaction resistance of soils based on artificial neural network learning of case histories. Geotechnical Special Publication, ASCE, Reston, VA, 179: 854–861.
Cover, T.M. and Thomas, J.A. 1991. Elements of Information Theory. Wiley, New York.
Idriss, I.M. and Boulanger, R.W. 2004. Semi-empirical procedures for evaluating liquefaction potential during earthquakes. Proceedings, 3rd International Conference on Earthquake Geotechnical Engineering (ICEGE), Berkeley, CA: 32–56.
Idriss, I.M. and Boulanger, R.W. 2006. Semi-empirical procedures for evaluating liquefaction potential during earthquakes. Soil Dynamics and Earthquake Engineering, 26: 115–130.
Jefferies, M.G. and Davies, M.P. 1993. Use of CPTu to estimate equivalent SPT N60. Geotechnical Testing Journal, American Society for Testing and Materials, 16: 458–468.
Juang, C.H., Chen, C.J., Tang, W.H., and Rosowsky, D.V. 2000. CPT-based liquefaction analysis, Part 1: Determination of limit state function. Geotechnique, 50: 583–592.
Juang, C.H., Jiang, T., and Andrus, R.D. 2002. Assessing probability-based methods for liquefaction evaluation. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 128: 580–589.
Juang, C.H., Yuan, H., Lee, D.H., and Lin, P.S. 2003. Simplified CPT-based method for evaluating liquefaction potential of soils. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 129: 66–80.
Juang, C.H., Fang, S.Y., and Khor, E.H. 2006. First order reliability method for probabilistic liquefaction triggering analysis using CPT. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 132: 337–350.
Kim, Y.S. and Kim, B.T. 2006. Use of artificial neural networks in the prediction of liquefaction resistance of sands. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 132: 1502–1504.
Ku, C.S., Lee, D.H., and Wu, J.H. 2004. Evaluation of soil liquefaction in the Chi-Chi, Taiwan earthquake using CPT. Soil Dynamics and Earthquake Engineering, 24: 659–673.
Lai, S.Y., Hsu, S.C., and Hsieh, M.J. 2004. Discriminant model for evaluating soil liquefaction potential using cone penetration test data. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 130: 1271–1282.
Li, D.K., Juang, C.H., Andrus, R.D., and Camp, W.M. 2007. Index properties-based criteria for liquefaction susceptibility of clayey soils: a critical assessment. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 133: 110–115.
Moss, R.E.S., Seed, R.B., Kayen, R.E., Stewart, J.P., Der Kiureghian, A., and Cetin, K.O. 2006. CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 132: 1032–1051.
PEER 2007. Soils and liquefaction data on the 1999 Kocaeli, Turkey Earthquake. http://peer.berkeley.edu/publications/turkey/adapazari/index.html.
Seed, H.B. and Idriss, I.M. 1971. Simplified procedure for evaluating soil liquefaction potential. Journal of the Soil Mechanics and Foundations Division, ASCE, 97 (SM9): 1249–1273.
Schwarz, G. 1978. Estimating the dimension of a model. Annals of Statistics, 6: 461–464.
Shuttle, D.A. and Cunning, J. 2007. Liquefaction potential of silts from CPTu. Canadian Geotechnical Journal, 44: 1–19.
Youd, T.L., Idriss, I.M., Andrus, R.D., Arango, I., Castro, G., Christian, J.T., Dobry, R., Liam Finn, W.D., Harder, L.F., Jr., Hynes, M.E., Ishihara, K., Koester, J.P., Liao, S.S.C., Marcuson, W.F., III, Martin, G.R., Mitchell, J.K., Moriwaki, Y., Power, M.S., Robertson, P.K., Seed, R.B., and Stokoe, K.H., II. 2001. Liquefaction resistance of soils: summary report from the 1996 NCEER and 1998 NCEER/NSF workshops on evaluation of liquefaction resistance of soils. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 127: 817–833.
Suggestions for implementing geotechnical risk management

M.Th. van Staveren
Deltares & Delft University of Technology, Delft, The Netherlands
ABSTRACT: Without doubt, geotechnical engineering is a key success factor for most construction projects, and risk management is receiving more and more attention in these projects. Nevertheless, while geotechnical risk analysis is becoming common practice, the routine application, or implementation, of geotechnical risk management is still an unexplored area. After years of debate about why to apply geotechnical risk management in construction projects, a new major question is emerging within the geotechnical community: how can geotechnical risk management be implemented effectively, efficiently, and persistently in construction projects? The Dutch Delft Cluster research programme Implementing Risk Management investigates the hurdles and conditions for successfully implementing risk management in organizations. First, by theory, data, and investigator triangulation, these hurdles and conditions have been identified, analyzed, clustered, and synthesized into seven key hurdles and ten key conditions for effectively applying geotechnical risk management. This result triggered another research question: are these key hurdles and key conditions also appropriate for actually implementing geotechnical risk management within organizations? Therefore, the next research phase consisted of identifying, selecting, and combining innovation management variables and theories with those of risk management. The key hurdles and key conditions for geotechnical risk management have been synthesized and classified into two main implementation dimensions: (1) geotechnical risk management methodologies and (2) the organizations that are to use geotechnical risk management routinely. This paper presents the key conditions for implementing geotechnical risk management in organizations in a conceptual model, together with two main implementation suggestions. In conclusion, the research results demonstrate that the organizational dimension, which is usually entirely neglected, is of key importance for realizing the routine application of geotechnical risk management methodologies throughout all phases of construction projects. The synthesis of innovation management and risk management concepts and variables increases the chance of a more effective, efficient, and persistent organizational implementation of geotechnical risk management in construction projects. This may dramatically strengthen the geotechnical role in, and contribution to, the successful completion of engineering and construction projects within our societies.
1 INTRODUCTION
Interest in risk management in construction projects is growing strongly. For example, on November 1, 2007, leading Dutch organizations signed a joint agreement for rigorously implementing risk management within the construction industry. By the year 2012, three ministries, the four largest cities, and the national organizations of contractors and consulting engineers aim to fully apply risk management in eighty percent of the projects in the Netherlands. Expected benefits are lower failure costs, fewer time delays, and a reduction of the number of disputes, achieved by building trust, increasing transparency, and improving communication between all construction project parties. It is therefore promising that over the years, at least in large and complex projects, the application of geotechnical risk analysis seems to be becoming common practice. However, contrary to other papers, this paper does not introduce any new or updated methodology for
applying geotechnical risk analysis or geotechnical risk management. While still rather limited, the literature about geotechnical risk management is growing. For example, for generic geotechnical risk management methodologies, reference is made to Clayton (2001) and Van Staveren (2006). Moreover, there is an increasing amount of literature covering specific geotechnical risk analysis and management topics. Examples are the difference between unsafe geotechnical certainty and safe geotechnical uncertainty (Barends, 2005), the role of the human factor in achieving geotechnical reliability (Bea, 2006), objective and subjective ways of geotechnical risk classification (Altabba et al., 2004), and the contractual allocation of geotechnical risk (Essex, 2007). Unlike most other papers, this paper focuses on implementing existing geotechnical risk management methodologies in organizations. Implementation is here simply defined as the routinized application of risk management during the entire design and
construction process. Implementation thus differs greatly from the often incidental application of risk analyses within construction projects. For instance, Halman (2008) addresses the importance of implementing risk management in the Dutch construction industry. The need for implementing specifically geotechnical risk management in organizations in the construction industry has been raised by Smith (2008). A workshop of the US GeoCouncil in December 2006, with a group of fifty geoprofessionals, revealed that the current main areas of attention in the construction industry are innovative contracting, safety, cost analyses, and research, development, and training. Attention to these trends should contribute to providing better, faster, and cheaper solutions to geotechnical problems in construction projects. Geotechnical risk management was considered the best chance for meeting these demands in each of the trend areas (Smith, 2008). Obviously, geotechnical risk management should be routinely applied, and thus be well implemented within organizations, for its benefits to materialize. However, the author's experience teaches that even when geotechnical professionals and their managers say that they are applying geotechnical risk management in engineering and construction projects, they are often not actually doing it in an explicit and well-structured way. Moreover, even when they do work in such an explicit and well-structured way, they often execute more of a risk analysis than the full risk management cycle within each and every project phase. With conventional "hit and run" risk management, doing only one or two analyses, the potentially large benefits of routinely applying geotechnical risk management remain hidden. This results in missed opportunities, for instance saving lives of construction workers by reducing unsafety, increasing profits by reducing failure costs, and speeding up the construction process by reducing delays. Similar to quality management (Imai, 1989), a cyclic approach with continuous attention to improving "little things" is required for effective geotechnical risk management (Van Staveren, 2006). This requires full implementation within (project) organizations. Therefore, after many debates over the last years about why to apply geotechnical risk management in construction projects, a new type of question now emerges within the geotechnical community: how to implement geotechnical risk management effectively in construction projects? This how-question seems even more difficult to answer than the previous why-question. For instance, how should discipline-based geotechnical risk management be related to project risk management in construction projects? Therefore, this paper addresses a still highly underestimated topic: how to realize routine use of geotechnical risk management during the planning, engineering, and construction of all sorts of building and infrastructure projects? To date, there appears to be no literature covering this topic, despite its utmost relevance. Scientific
research and resulting practical guidance about how to implement risk management in general is very scarce. Concerning geotechnical risk management in particular, research and guidance are entirely lacking. A free-market paradox thus seems to emerge: high knowledge demand with no knowledge supply. The implementation of geotechnical risk management is still an unexplored area of research. The practical research project Implementing Risk Management of the Dutch Delft Cluster research programme aims to answer the question of how to implement risk management in organizations in the construction industry. This research project is performed by researchers of the unit GeoEngineering of Deltares (formerly known as the Dutch National Institute for GeoEngineering, GeoDelft), the unit Innovation and Environment of TNO, the Technology, Policy, and Management faculty of the Delft University of Technology, and the Construction Management and Engineering research group of the University of Twente, Netherlands. The research will be completed by the end of the year 2009. However, recent research results for successfully implementing geotechnical risk management in organizations are readily available and are shared here with the international geotechnical community. One of the innovative research approaches of the Implementing Risk Management project is to consider the implementation of risk management in organizations as a sort of innovation. If new to (part of) an organization, fostering the routine application of geotechnical risk management proved to have a lot in common with implementing innovations in organizations, such as geotechnical quality systems or software for geotechnical design. These organizations are either temporary project organizations for realizing a construction project, or well-established firms. After this necessarily comprehensive introduction, this paper continues by presenting the research approach. Then, the research results about risk management, innovation management, and their synthesis for implementing geotechnical risk management are presented. This generates the suggestions for implementing geotechnical risk management, the very core of this paper. The paper ends with the main research conclusions.
2 RESEARCH APPROACH

2.1 Introduction
This chapter briefly presents the research approach for investigating the implementation of geotechnical risk management in (project) organizations. These organizations involve people who work together to realize common goals. In the construction industry, the common goal is usually realizing projects according to pre-set quality specifications, within budget, and on schedule.
In organizations in general, and particularly when dealing with risk, the so-called human factor or people factor plays a dominant role (Bea, 2006; Van Staveren, 2006). Therefore, regarding the nature of reality (ontology), a hermeneutic worldview has been chosen for this research, which considers the world as a social construct, with its inherent subjectivities. The epistemological point of view concerns assumptions about the nature of knowledge about reality. The design science paradigm with a practical research approach (Van Aken, 2004) has been purposefully selected for generating solution-oriented knowledge about implementing risk management in organizations. Together, the ontological and epistemological positions provided the scientific research framework, which synthesized geotechnical and organizational aspects. This framework consisted of subsequent exploratory and synthesizing research of risk management and of innovation management. In the following sections, the four resulting research steps are described.

2.2 Exploratory risk management research

The exploratory risk management research consisted of literature surveys and field research, both aimed at identifying the relevant risk management concepts and variables. Extensive literature research has been performed by using Van Staveren (2006) and by an additional survey, which is reported in Van Staveren (2007). Field research involved in-depth interviews with four academic geotechnical and mining experts of leading universities in the US (Massachusetts Institute of Technology; University of California, Berkeley), the UK (University of Southampton), and South Africa (University of the Witwatersrand). In addition, three geotechnical and mining consultants from the UK and South Africa were interviewed. Moreover, Dutch experiences with applying geotechnical risk management were retrieved from a previous and the current Delft Cluster research project, as well as from RISNET, the Dutch joint knowledge platform for applying risk management in the construction industry.

2.3 Synthesizing risk management research

The synthesizing research part included analysis and classification of the identified risk management concepts and variables. Proven research tactics, including data and investigator triangulation, were applied. All variables were classified into either hurdles, which obstruct the application of geotechnical risk management, or conditions that are required for applying geotechnical risk management. An in-depth analysis generated 7 key hurdles and 10 key conditions for applying geotechnical risk management, which were considered the most relevant variables. This research result triggered another research question: Are these key hurdles and key conditions appropriate for actually implementing (routinely applying) geotechnical risk management within organizations?

2.4 Exploratory innovation management research

The exploratory innovation management research also consisted of literature surveys and field research for identifying relevant concepts and variables. An extensive literature survey has been performed, which included Ph.D. theses, scientific top journals, and additional literature about innovation management; the focus was on implementing innovations in organizations. Field research included in-depth interviews with seven Dutch experts in implementing innovations by realizing planned organizational change. All but one are well-known Dutch professors from the universities of Amsterdam, Rotterdam, Eindhoven, Groningen, and Twente, who also perform top management consultancy. The remaining expert is a professional risk manager involved in implementing risk management in public organizations.

2.5 Synthesizing innovation management research

The synthesizing research part included analysis and classification of the identified innovation management concepts and variables. Proven research tactics, such as data and investigator triangulation, were applied. In total, 55 hurdles and 93 conditions for implementing innovations in organizations were identified. These variables were compared with those from the risk management research part. However, because of the maximum length of this paper and its focus on geotechnical risk management, this comparison and its conclusions are not presented here. Nevertheless, from the synthesizing research on innovation management, particularly the resulting concepts for implementing innovations in organizations are used in the remainder of this paper.

3 GEOTECHNICAL RISK MANAGEMENT

This chapter presents the results of the exploratory and synthesizing research of geotechnical risk management concepts and variables.

3.1 Risk management concepts

Analyzing the identified risk management concepts revealed three interrelated levels for implementing risk management: (1) the discipline level, (2) the project level, and (3) the organizational level. Figure 1 symbolizes these three levels as a mountainous area, in which the risk mountains have steep and slippery slopes. Geotechnical risk management represents the discipline level. When reaching the top of geotechnical risk management, indicating routinely applied risk
management, there rises another, higher top that represents the project risk management mountain. If that top has been reached as well, indicating well-embedded geotechnical risk management in project risk management, another top is still there. This latter top represents the organizational level of risk management. This level involves managing the risks of entire project portfolios of a firm. For example, a contractor having ten projects under construction should compensate one very risky project with the remaining nine less risky projects. This would avoid going bankrupt if all risks within the risky project occur. In summary, for reasons of acceptability, as well as of effectiveness, efficiency, and persistence over time, geotechnical risk management should be well embedded in project risk management. Preferably, it should furthermore be related to portfolio risk management. Obviously, realizing this challenge is more of a management responsibility than that of a geotechnical engineer. However, the latter may substantially contribute to both project and portfolio risk management by adequately performing geotechnical risk management during all engineering and construction project phases.

Figure 1. Three risk management mountains.

3.2 Risk management variables

As mentioned before, all identified variables for applying geotechnical risk management in organizations were classified into either hurdles, obstructing the application of geotechnical risk management, or conditions that are required for applying geotechnical risk management. From the literature survey and field research, in total 109 hurdles and 147 conditions for successfully applying geotechnical risk management were identified. Table 1 shows the distribution of these hurdles and conditions over the different data sources. Despite some overlap of a number of factors, the high numbers in Table 1 demonstrate the enormous complexity of applying geotechnical risk management. This raised the following research question: Which of this unworkably large series of hurdles and conditions are the most significant? For answering this question, the hurdles and conditions have been clustered and synthesized into seven key hurdles and ten key conditions for effectively applying geotechnical risk management. Three purposefully selected main categories were the motivation of engineers for applying geotechnical risk management, the training required for learning how to operate geotechnical risk management methodologies, and the tools for facilitating the execution of geotechnical risk management.

Table 1. Numbers of hurdles and conditions for applying geotechnical risk management, from several data sources.

Data source                  Hurdles no.  Conditions no.
Van Staveren (2006)          5            10
Van Staveren (2007)          17           26
Interviews with 7 experts    63           73
Delft Cluster and RISNET     24           38
Total numbers                109          147

3.3 Hurdles for geotechnical risk management

Table 2 presents the seven key hurdles or obstructions that resulted from the data analysis, including the category of each hurdle.

Table 2. Key hurdles for applying geotechnical risk management.

No.  Category    Description
1.   Motivation  Lack of geotechnical risk management awareness.
2.   Motivation  Lack of geotechnical risk management benefits.
3.   Motivation  Fear for geotechnical risk transparency.
4.   Motivation  Difficulty of applying geotechnical risk management.
5.   Training    Lack of geotechnical risk management understanding.
6.   Tools       Lack of geotechnical risk management methods, protocols, software, guidelines.
7.   Tools       Lack of geotechnical risk management benchmarks.

Remarkably, four out of the seven key hurdles are motivational. Lack of geotechnical risk management awareness, lack of its benefits, as well as fear
for geotechnical risk transparency and difficulty of applying geotechnical risk management are hurdles at the level of the individual geotechnical engineer. Presence of these hurdles will highly restrict his or her motivation for routinely applying geotechnical risk management in day-to-day activities. Van Staveren (2006) presents six suggestions for overcoming these individual hurdles, including developing risk awareness and taking sufficient time for actually applying risk management. Of the remaining three key hurdles, one is training-related and two concern the role of tools for applying geotechnical risk management. Some sort of education and training is required for operating risk management tools, such as risk databases. Remarkably, the seventh key hurdle explicitly addresses the lack of geotechnical risk benchmarks. This means that a lack of clear levels of acceptable geotechnical risk, such as maximum allowed differential settlements, is also a key hurdle for applying risk management. This key hurdle has been allocated to the tools category, which includes, for example, geotechnical design software; such software may be required for setting geotechnical risk management benchmarks.
3.4 Conditions for geotechnical risk management

Similarly, Table 3 presents the ten key conditions that resulted from the data analysis, together with the category of each key condition.

Table 3. Key conditions for applying geotechnical risk management.

No.  Category    Description
1.   Motivation  Clear objectives and goals for applying geotechnical risk management.
2.   Motivation  Awareness of geotechnical risk consequences.
3.   Motivation  Contractually arranged responsibilities for geotechnical risk and its allocation.
4.   Motivation  Clear relationship of geotechnical risk management and project risk management.
5.   Motivation  Involvement of all project stakeholders in applying geotechnical risk management.
6.   Motivation  Availability of resources (budget, time) for applying geotechnical risk management.
7.   Training    Understanding of geotechnical risk management by geotechnical engineers.
8.   Training    Understanding of risk management in teams by geotechnical professionals.
9.   Training    Understanding of risk management and culture by geotechnical managers.
10.  Tools       Fit of geotechnical risk management methodologies with the project objectives.

In total, six out of the ten key conditions are of a motivational type. Similar to the hurdles, motivational key conditions dominate. Obviously, it should be clear to any individual geotechnical engineer why risk management is to be applied routinely. Therefore, clear objectives and goals should be defined before starting any activities; preferably, these goals are measurable. Closely related to the first key condition, there should be an individual awareness of the consequences of geotechnical risk: what are the effects, and for which parties, when a geotechnical risk occurs? This awareness may raise the desire to prevent the risk from occurring, and thus risk management motivation grows. It may help when geotechnical risk management responsibilities are clearly defined. A geotechnical baseline report (GBR) may be useful for allocating the risk of differing site conditions (Essex, 2007; Van Staveren, 2006). By relating, and preferably incorporating, geotechnical risk management within project risk management, efficiency gains may be realized that contribute to the motivation of engineers to apply geotechnical risk management. Involving other project stakeholders is also a motivational factor. In particular, clients requesting geotechnical risk management may be helpful for increasing the motivation of geotechnical engineers. The last motivational key condition, concerning resources, seems obvious. In the largely money-driven firms in the construction industry, which also put large time pressure on their projects, resources such as budget and time should be made available to the geotechnical engineers who are to apply geotechnical risk management. If risk management is effectively applied, the return on investment may be as high as a factor of ten, or more (Smith, 1996; Sperry, 1998). The training-related key conditions address the need for understanding geotechnical risk management, supplemented by understanding the application of geotechnical risk management in teams. The latter is of importance for dealing with the inherent differences in subjective risk perception, even when based on the same factual information, such as cone penetration test results or results of laboratory index tests (Van Staveren, 2006). Moreover, particularly geotechnical managers, who are responsible for the use of geotechnical risk management methodologies by their appointed geotechnical engineers, should understand the dominant role of organizational culture in routinely applying geotechnical risk management (or not). Finally, the selected risk management tools should fit the targeted users of those tools (geotechnical engineers), as well as the complexity and risk profile of the project. Rather sophisticated tools, such as Monte Carlo type software, may be required in complex projects, while just performing some sensitivity analyses with already available geotechnical software may be sufficient within less complex and smaller projects. Obviously, there is no single recipe for which tools to select. This entirely depends on the type of project, the expected ground conditions, the risk propensity of the client and the involved engineers, and so on.
Table 4. Numbers of hurdles and conditions for implementing innovations in organizations, from several data sources.

Data source                     Hurdles no.  Conditions no.
Ph.D. theses                    4            14
Scientific top journals         6            8
Additional literature           10           22
Interviews with seven experts   35           49
Total numbers                   55           93
4 INNOVATION MANAGEMENT

Table 5. Main innovation characteristics.

No.  Innovation characteristics   Acknowledged in risk management
1.   Relative advantage           yes
2.   Compatibility                no
3.   Complexity                   yes
4.   Trialability                 yes
5.   Observability                no
6.   Indirect network effect      no
7.   Relative usefulness          no
This chapter presents the results of the exploratory and synthesizing research of innovation management concepts and variables. The main objective of this research part was to provide additional scientific evidence for the relevance of the key hurdles and key conditions for implementing geotechnical risk management in organizations. An innovation management perspective was considered useful because of the assumed similarities between implementing innovations and implementing new risk management methodologies in organizations.
4.1 Innovation management concepts

By analyzing the identified innovation management concepts, the innovation diffusion model by Rogers (2003) was considered the most complete and relevant model. This has been confirmed by comparing this model with the few other available models, including those by Klein & Sorra (1996) and Lewis & Seibold (1993). A number of factors of the direct and indirect network externalities adoption model (Song, 2006) were added to the model by Rogers (2003). Combining these models generated two main dimensions for hurdles and conditions for implementing innovations: (1) those related to the innovation, and (2) those related to the organization in which the innovation has to be routinely used by its implementation.

4.2 Innovation management variables

Similar to the identified risk management variables, the variables for implementing innovations in organizations were classified into either hurdles, obstructing the routine application of innovations, or conditions that are required for implementing innovations. From the literature survey and field research, in total 55 hurdles and 93 conditions for successfully implementing innovations were identified. Table 4 shows the distribution of these hurdles and conditions over the different data sources. Despite some overlap of a number of factors, similar to Table 1 concerning risk management, the high numbers in Table 4 demonstrate the enormous complexity of implementing innovations in organizations. This raised the following research question: Which are the main characteristics of the two previously mentioned main dimensions for implementing innovations, (1) the innovation itself, and (2) the organization in which the innovation will be implemented? For answering this question, the risk management key hurdles and key conditions (see Table 2 and Table 3), as well as all identified hurdles and conditions for implementing innovations (see Table 4 for their numbers), have been classified according to the main characteristics of innovations and organizations, as derived from combining the models of Rogers (2003) and Song (2006). Next, the results of this exercise are presented.
4.3 Innovation management characteristics

By attributing the 55 innovation hurdles and 93 innovation conditions to the main characteristics of innovations, it became clear that there are seven main characteristics of innovations. However, attributing the 7 key hurdles and the 10 key conditions to these innovation characteristics revealed only three out of the seven main characteristics of innovations. In other words, by considering a risk management perspective only, the majority of the relevant characteristics of risk management methodologies, 4 out of 7 characteristics or 57 percent, would remain hidden. As indicated by the word "no" in the right column of Table 5, from an innovation perspective these hidden risk management characteristics are compatibility, observability, indirect network effect, and relative usefulness. Therefore, for successfully implementing risk management methodologies, these four characteristics also need to be addressed, in addition to the three characteristics indicated by "yes" in the right column of Table 5; the latter originate from solely a risk management perspective.

A similar exercise has been performed for the main dimension of the organization in which risk management has to be implemented. Table 6 shows the results. When attributing the 7 key hurdles and the 10 key conditions to the main organization characteristics, only two of the four main characteristics of organizations were acknowledged. In other words, by considering a risk management perspective only, fifty percent of the relevant characteristics of the organizations in which risk management methodologies have to be implemented would be entirely neglected. As indicated by the word "no" in the right column of Table 6, from an innovation perspective these hidden organizational characteristics are the norms within the organization and the type of decision making within the organization, such as cooperative decision making by involving risk management users or top-down decision making. For successfully implementing risk management methodologies in organizations, these latter two characteristics also need to be addressed.

Table 6. Main organization characteristics.

No.  Organization characteristics              Acknowledged in risk management
1.   Structure of the organization             yes
2.   Norms within the organization             no
3.   Organizational implementation decision    no
4.   Organizational innovation consequences    yes

5 RISK AND INNOVATION MANAGEMENT

By using the selected innovation management concepts introduced in the previous chapter, the key variables for implementing geotechnical risk management were further synthesized and classified into the two main implementation dimensions. Figure 2 presents the resulting key conditions in a conceptual model for implementing geotechnical risk management within organizations.

Figure 2. Conceptual model for implementing geotechnical risk management in (project) organizations.

Figure 2 aims to demonstrate that for successfully implementing geotechnical risk management in organizations, five key conditions for the risk management methodologies and another five key conditions for the organization should be fulfilled. Moreover, geotechnical risk management needs to be related to project risk management. The latter needs to be related to portfolio risk management, when present in the organization. The synergies of these three levels of risk management would provide the maximum geotechnical risk management benefits.

6 MAIN IMPLEMENTATION SUGGESTIONS

In summary, this research provides two main suggestions for implementing geotechnical risk management in (project) organizations in the construction industry:

1. Incorporate geotechnical risk management as much as possible in project risk management at the project level, and in portfolio risk management at the organizational level;
2. Maximize the presence of the five key conditions for implementing geotechnical risk management methodologies, as well as that of the five organizational key conditions.

Based upon the comprehensive research undertaken, it is expected that using these two main suggestions will considerably increase the chance of realizing effective, efficient, and persistent implementation of geotechnical risk management in (project) organizations in the construction industry. At the time of writing this paper, these suggestions are being empirically tested in four case studies. Moreover, a major Dutch public client is using the suggestions for reducing its geotechnical failure costs by fifty percent by the year 2015 (Van Staveren et al., 2009). Realizing this objective will save this organization tens of millions of euros per year. A number of other Dutch organizations are following the suggestions as well.
7
CONCLUSIONS
This paper intends to create awareness about the importance of the routine application, defined implementation, of geotechnical risk management. This topic stretches conventional geotechnical engineering, but from now on, it can not be neglected anymore. For targeting geotechnical engineers and their managers, the main results of an still ongoing Dutch research project about how to implement risk management in organizations in the construction industry are summarized. This research revealed the enormous complexity of implementing risk management methodologies. Synthesizing risk management and innovation management concepts considerably structured and reduced the many implementation variables, while fostering their completeness. In general conclusion, for realizing effective, efficient, and persistent geotechnical risk management during the entire engineering and construction process, (much) more attention should be paid to the implementation of geotechnical risk management methodologies in (project) organizations. This should be related
to project risk management and to portfolio risk management, for realizing maximum results. Moreover, the presence of the ten risk management key conditions should be maximized. Only then are the pursued benefits of geotechnical risk management expected to occur. This will give geotechnical engineering, and particularly its engineers, the credit and rewards they deserve, given the enormous impact of ground conditions on construction projects. It is up to us, the frontrunners of geotechnical risk management, to realize this organizational-geotechnical challenge.

ACKNOWLEDGEMENTS

The author would like to thank Prof. Bob Bea, Prof. Herbert Einstein, Prof. Dick Stacey, Prof. Chris Clayton, Mr. Tim Chapman, Dr. Jan Hellings and Dr. Oscar Steffen for their interviews.

REFERENCES

Altabba, B., Einstein, H. & Hugh, C. 2004. An economic approach to risk management for tunnels. In Ozdemir (ed.), Proc. North American Tunnelling 2004. New York: Taylor & Francis Group.
Barends, F.B.J. 2005. Associating with advancing insight: Terzaghi Oration 2005. In Proc. 16th International Conf. on Soil Mechanics and Geotechnical Engineering, 12–16 September, Osaka, Japan. Rotterdam: Millpress.
Bea, R.G. 2006. Reliability and human factors in geotechnical engineering. Journal of Geotechnical and Geoenvironmental Engineering May: 631–44.
Clayton, C.R.I. (ed.) 2001. Managing Geotechnical Risk: Improving Productivity in UK Building and Construction. London: The Institution of Civil Engineers.
Essex, R.J. (ed.) 2007. Geotechnical Baseline Reports for Construction: Suggested Guidelines. Danvers: ASCE.
Halman, J.I.M. (ed.) 2008. Risicomanagement in de Bouw: Nieuwe Ontwikkelingen bij een Aantal Koplopers (in Dutch). Boxtel: Aeneas.
Imai, M. 1986. Kaizen: The Key to Japan's Competitive Success. New York: Random House.
Klein, K.J. & Sorra, J.S. 1996. The challenge of innovation implementation. The Academy of Management Review 21(4): 1055–80.
Lewis, L.K. & Seibold, D.R. 1993. Innovation modification during intra-organizational adoption. The Academy of Management Review 18(2): 322–54.
Rogers, E.M. 2003. Diffusion of Innovations (5th edn.). New York: Free Press.
Smith, R. 1996. Allocation of risk: The case for manageability. The International Construction Law Review 4: 549–69.
Smith, R.E. 2008. An evolving view of geotechnical engineering: A focus on geo-risk management. In Proc. GeoCongress 2008: The Challenge of Sustainability in the Geoenvironment, 9–12 March, New Orleans. Reston: ASCE.
Song, M. 2006. Research Presentation of the Direct/Indirect Network Externalities Adoption Model DINAM. Enschede: University of Twente, Construction Management & Engineering Group.
Sperry, P.E. 1981. Evaluation of savings for underground construction. Underground Space 6: 29–42.
Van Aken, J.E. 2004. Management research based on the paradigm of the design science: The quest for field-tested and grounded technological rules. Journal of Management Studies 41(2): 219–46.
Van Staveren, M.Th., Bles, T.J., Litjens, P.P.T. & Cools, P. 2009. Geo Risk Scan: A successful geo management tool. In Proc. 17th Int. Conf. on Soil Mechanics and Geotechnical Engineering, 5–9 October, Alexandria, Egypt (in prep.).
Van Staveren, M.Th. 2007. Hurdles and conditions for implementing risk management in public project organizations. Delft Cluster Working Paper, Work Package 2, November 2007. Delft: Delft Cluster.
Van Staveren, M.Th. 2006. Uncertainty and Ground Conditions: A Risk Management Approach. Oxford: Butterworth-Heinemann.
Design method (2)
Framework for evaluation of probability of failure of soil nail system I.S.H. Harahap & W.P. Nanak Universiti Teknologi PETRONAS, Tronoh, Malaysia
ABSTRACT: This paper proposes a conceptual framework to evaluate the probability of failure of a soil nailing system. The paper starts with a review of current design methods for soil nailing. Hazards related to the structure are identified through the possible prevailing failure mechanisms. Four failure mechanisms of the soil nail system were identified: pull-out failure, nail tendon failure, face failure and general stability failure. Events that trigger failure were identified through fault tree analysis (FTA). For this purpose, the semi-quantitative hazard and risk analysis framework by Vaunat & Leroueil (2002) was adopted. The framework consists of four steps: (i) inventory of the factors that could lead to danger, (ii) forecasting of the danger, (iii) assessment of the hazard and (iv) evaluation of the risk. From inspection of the fault tree diagrams, only some of the events are quantifiable through the structural reliability analysis (SRA) method. We therefore propose a hybrid framework that combines structural reliability analysis (SRA) with a Bayesian belief network (BBN). The framework can be used as part of the risk management of a soil nail system, whereby the risks associated with the hazards can be quantified for prioritizing risk mitigation measures.
1 INTRODUCTION

1.1 Background
The soil nail system has been used as an earth-retaining structure for over thirty years. The structure solves problems associated with slope failure during slope cutting, in disturbed terrain and in excavation. The basic concept of soil nailing is the insertion of passive inclusions into the soil in a closely spaced pattern to increase the overall shear strength of the in-situ soil and restrain its displacement. This system fundamentally improves the stability, in a soil slope for example, principally through the mobilization of tension developed in the soil nails. Generally, the basic design concept of the soil nailing system is still based on conventional deterministic approaches, which depend upon the transfer of the resisting tensile forces generated in the inclusions into the ground through friction at the interfaces. System reliability and risk-based design concepts have been increasingly proposed in a number of recent papers. However, the application to soil nailing (Yuan et al. 2003) has been limited to the evaluation of the sliding mechanism only, one of the many possible failure mechanisms in soil nailing. The purpose of reliability analysis is to estimate the effects of the surrounding uncertainties on the safety of the application; in other words, reliability analysis is a means of evaluating the effects of uncertainties (Duncan, 2000). The probabilistic methods on which these concepts are based allow the uncertainties in the input parameters and in the model describing the potential failure modes to be evaluated. The probability of failure of a system and its components plays a role in risk management, where the danger, hazard and risk which potentially occur in the system
can be evaluated for the purpose of prioritizing mitigation measures. Danger has been defined as the event of failure, hazard as the occurrence probability of that event, and risk as the expected consequences of the hazards. In this paper we propose a framework to evaluate the probability of failure of a soil nail system. Using it, we are able to assess the effect of uncertainty on the overall performance of the system as well as on the performance of its components. The framework consists of: (i) identification of failure mechanisms through fault tree analysis (FTA), (ii) evaluation of the component probabilities of failure and (iii) evaluation of the system probability of failure. It will become clear that not all of the identified failure mechanisms can be evaluated using structural reliability analysis (SRA). For this purpose, a Bayesian belief network (BBN) is used to evaluate the system probability of failure, combined with component probabilities of failure evaluated using the SRA method.
1.2 General concept
The method used within this paper follows a risk-based approach for the soil nail system. To predict the failure risk, R, of an existing soil nail system, it is necessary to evaluate the probability of failure (Pf) of the respective slope instability and the potential damages, E(D), resulting from failure. The risk can be estimated as the product of the probability (Pf) and the damage E(D), i.e. R = Pf × E(D). The concept of determining the overall probability of failure for a soil nail system is illustrated in Fig. 1. The figure shows an eight-step procedure which starts with the deterministic design and the selection of the most relevant failure modes for which limit
Figure 1. Simplified flow chart for the determination of the failure probability of a soil nail system (adapted from Kortenhaus et al. 2002).
Figure 2. Pullout failure mode (a) before and (b) after failure (from FHWA, 1998).
state equations have to be determined. These initial three steps are purely deterministic and are essentially based on failure analyses. Step 3 links with the set-up of a fault tree (Step 7), which can be used to schematize the potential failure mechanisms of the soil nail system. Following this method, a deterministic fault tree can be derived, which can later be used for the probabilistic design. Steps 4 and 5 address the uncertainties of the parameters and models that will be included in the probabilistic design. In Step 6 the calculation of the probability of failure of all limit state equations is performed. Steps 7 and 8 address the relation between the failure modes and the calculation of the overall probability of failure (Kortenhaus et al. 2002).
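As a minimal numerical sketch of the risk product R = Pf × E(D) introduced in Section 1.2, the snippet below multiplies an annual failure probability by the expected damage. Both values are assumed for illustration only and are not taken from the paper.

```python
# Minimal sketch of the risk estimate R = Pf * E(D) from Section 1.2.
# Both numbers below are assumed for illustration only.
pf = 2.0e-3               # annual probability of failure of the soil nail system
expected_damage = 5.0e6   # expected consequences of failure, E(D), monetary units

risk = pf * expected_damage
print(f"Estimated annual risk: {risk:,.0f} monetary units")
```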
2 SOIL NAIL FAILURE MECHANISMS
Figure 3. Nail tendon failure mode: (a) before and (b) after failure (from FHWA, 1998).
Generally, the failure modes of a soil nail system can be classified broadly into external and internal failure mechanisms. External failure refers to the development of potential failure surfaces effectively outside the soil-nailed ground mass (active zone). The failure can be in the form of sliding, rotation, bearing or other forms of loss of overall stability. Internal failure refers to failures within the soil-nailed ground mass (passive zone). The internal failure modes in the active zone include failure of the ground mass, bearing failure underneath the soil nail heads, structural failure of the soil nail under the combined actions of tension, shear and bending, structural failure of the soil nail head or facing, and surface failure between soil nail heads. In the passive zone, the modes of failure are pullout failure at the ground-grout interface or at the grout-reinforcement interface (GEO, 2006). In risk analysis, all possible or potential failure modes must be considered to address the potential consequences of failure. This is achieved by evaluating the available nail forces to stabilize the active block defined by any particular slip surface. The failure modes of soil nails can be categorized as pullout failure, nail tendon failure, face failure and overall failure (Chow & Tan, 2004). Overall, the potential failure modes must be considered in evaluating the available nail force to stabilize the active block defined by any particular slip surface. The following are the general failure modes of soil nails:

2.1 Pullout failure

This failure results from insufficient embedded length into the resistant zone to resist the destabilizing force. The pullout capacity of the soil nails is governed by the following factors (Tan & Chow, 2004):

• The location of the critical slip plane of the slope
• The size (diameter) of the grouted hole for the soil nail
• The ground-grout bond stress (soil skin friction)

Nail tendon failure – This failure results from inadequate tensile strength of the nails to provide the resisting force required to stabilize the slope. It is primarily governed by the grade of steel used and the diameter of the steel (FHWA, 1998).

Face failure – This failure mode of soil nailing is sometimes overlooked, as it is generally wrongly "assumed" that the face does not resist any earth pressure (Chow & Tan, 2004). These failures tend to occur either as failure of the facing or as sliding off of the front of the nailed zone (FHWA, 1998).

Overall failure – This failure mode is commonly analyzed based on limit equilibrium methods. The analyses are carried out iteratively until the nail resisting force corresponds to the critical slip plane from the limit equilibrium analysis. To carry out such an iterative analysis, it is important that the nail load diagram (Fig. 5) is established (Tan & Chow, 2004). From Fig. 5, it can be seen that the nail load diagram consists of three zones, A, B and C. Zone A is
governed by the strength of the facing, TF, and also by the ground-grout bond stress, Q. If the facing of the soil nails is designed to take the full tensile capacity of the nail, TN, then the full tensile capacity of the nail can be mobilized even if the critical slip circle passes through Zone A. However, designing the facing for the full tensile capacity of the nails, TN, instead of the lower TF, is not economical for high slopes (e.g. more than 15 m). Zone B is governed by the nail tendon tensile strength. The ultimate tendon capacity is calculated as

TN = AN FY

where AN = nominal cross-sectional area of the nail tendon and FY = yield strength of the tendon material. Zone C is governed by the ground-grout bond strength. The ultimate ground-grout bond load, Q, is determined from the shear strength of the ground-grout interface (FHWA, 1998), in which σn = average radial effective stress and φdes & cdes = design values of the soil shearing resistance. The average radial effective stress σn acting along the pull-out length of a soil nail may be derived from the vertical stress state, where σv = average vertical effective stress, calculated mid-way along the nail pull-out length, and KL = coefficient of lateral earth pressure parallel to the slope. If active conditions (i.e. σh = Ka σv) are assumed to develop perpendicularly to the slope, the value of Ka may be taken from active earth pressure theory (FHWA, 1998).

Figure 4. Face failure mode (from FHWA, 1998).

Face failure: flexure. The critical nominal nail head strength, TFN, is calculated from the flexural capacity of the facing (FHWA, 1998), where TFN = critical nail head strength, CF = flexure pressure factor, mV,NEG & mV,POS = vertical nominal unit moment resistances at the nail head and at mid-span, and SH & SV = horizontal and vertical nail spacing, respectively. The vertical nominal unit moment resistance is obtained from the facing reinforcement, where As = area of tension reinforcement in a facing panel of width b, b = width of the unit facing panel (equal to SV), d = distance from the extreme compressive fiber to the centroid of the tension reinforcement, and fc = compressive strength of the concrete.

Face failure: punching shear. The nominal internal punching shear strength of the facing, VN, is evaluated on an effective punching cone of diameter Dc = bPL + hc, and the nominal nail head strength, TFN, follows from it, where CS = pressure factor for punching shear, and AC & AGC = bearing plate connection details given in FHWA (1998).

Figure 5. Nail load diagram (from FHWA, 1998).

It is clear that the mobilized nail resistance should not exceed the nail load envelope developed from the three failure criteria discussed above. Therefore, the nail resistance input into the slope stability analysis should be read from the nail load diagram (Fig. 5), corresponding to the available bond length for the critical slip plane (Fig. 6). A simple limit state equilibrium using the method of slices can be expressed as

FoS = Q / L

where FoS = factor of safety, Q = moment due to the resisting forces (capacity), and L = moment due to the driving forces.
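The tri-linear nail load envelope of Fig. 5 can be illustrated with a short numerical sketch. All values below (facing strength, tendon area, bond rate, nail length) are assumed for illustration and are not taken from the paper or from FHWA (1998); the envelope is simply the minimum of the three criteria discussed above.

```python
import numpy as np

# Hypothetical nail properties (illustrative values only)
L_nail = 12.0   # nail length (m)
TF = 150.0      # facing (nail head) strength (kN), assumed
AN = 5.0e-4     # tendon cross-sectional area (m^2), assumed
FY = 420e3      # tendon yield strength (kPa); AN * FY gives kN
TN = AN * FY    # ultimate tendon capacity (kN)
q_bond = 40.0   # ultimate ground-grout bond load per metre (kN/m), assumed

# Tri-linear nail load envelope T(x), with x measured from the face:
# Zone A near the face (facing strength plus bond build-up), Zone B
# (tendon capacity plateau), Zone C near the far end (remaining bond).
x = np.linspace(0.0, L_nail, 121)
T = np.minimum.reduce([TF + q_bond * x,
                       np.full_like(x, TN),
                       q_bond * (L_nail - x)])

# Mobilizable resistance for a slip plane crossing the nail at x = 7 m
print(np.interp(7.0, x, T))   # -> 200.0 kN for these assumed values
```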
The moments Q and L are obtained by summation over the slices and the nails, where n = number of slices, m = number of nails, and TNJ = ultimate ground-grout capacity of nail number j, established with consideration of the nail-soil interaction model given in Fig. 5. Other parameters are shown in Fig. 6.

Figure 6. (a) Available bond length for slope stability analysis, (b) Force diagram.

2.2 Failure analysis

Through assessment of the potential failures described above, the soil nail failure mechanisms that may eventually lead to failure of the soil nail system have been identified. The summarized procedures are shown in Fig. 7 for the internal, external and global stability failures of the soil nail system. All relevant processes shown in Fig. 7 need to be described and simplified by limit state equations (LSE). The processes for the global stability of the soil nail system are slightly more complex but can be simplified in the same way.

Figure 7. Failure modes of soil nails system.

3 EVALUATION OF PROBABILITY OF FAILURE

In the reliability theory context, the word "failure" does not necessarily imply catastrophic failure. In this context, sliding of a retaining wall, for example, would be more appropriately described as "unsatisfactory performance" than as "failure", whereas slope failures that involve large displacements are appropriately described by the word "failure". It is important to bear in mind the likely consequences of the mode of "failure" being analyzed, and whether they are catastrophic or more benign (Duncan, 2000). The tools for performing a reliability analysis fall into three broad categories. The first comprises the methods of reliability analysis; the second includes the event trees, fault trees and influence diagrams that describe the interaction among events and conditions in an engineering system; and the third includes other statistical techniques. In particular, some problems are so poorly defined that it is useless to try to formulate mechanical models, causing the engineer to rely on simple statistics (Christian, 2004). The direct reliability analysis procedure is borrowed from practice in structural analysis; it propagates the uncertainties in properties, geometries, loads, water levels, etc., through analytical models, to obtain probabilistic descriptions of the behavior of a structure or system. The common approach is the load-resistance or capacity-demand model.
3.1 Structural reliability analysis

Uncertainty related to design variables such as the load (F) and the capacity (Q) is modeled by random variables, and the design risk is given as a probability of failure (Pf). The basic reliability problem is to evaluate the probability of failure Pf from pertinent statistical data of Q and F, usually in the form of means (mF and mQ) and standard deviations (SF and SQ). If each of Q and F has a normal distribution, then the margin of
safety, M = Q − F, also has a normal distribution, whose mean and standard deviation are given by

mM = mQ − mF and SM = √(SQ² + SF²)

If the probability distribution of M is known, the probability of failure (Pf) can be evaluated as

Pf = Prob(M < 0) = Φ(−mM/SM)

where Prob(.) is the probability of an event and Φ(.) is the standard normal cumulative distribution function. Because of the difficulty of interpreting probabilities of failure, particularly small probabilities, and because of the connotation of "failure", the reliability index β is used as a measure of safety, defined as

β = −Φ⁻¹(Pf) = mM/SM

where Φ⁻¹(.) = inverse of the standard normal cumulative distribution function. The reliability index β is an alternative form of Pf that is more meaningful because of its scale of magnitude. Reliability indices commonly used in geotechnical and structural engineering are between 1 and 4, corresponding to probabilities of failure between 16% and 0.003%.

Figure 8. Safety, reliability and capacity-demand model.
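As a numerical illustration of these relationships, the short sketch below evaluates β and Pf for assumed capacity and load statistics (the values are illustrative, not from the paper):

```python
from scipy.stats import norm

# Hypothetical first-order statistics of capacity Q and load F (assumed)
mQ, SQ = 120.0, 18.0
mF, SF = 80.0, 15.0

# Safety margin M = Q - F is normal if Q and F are independent normals
mM = mQ - mF
SM = (SQ**2 + SF**2) ** 0.5

beta = mM / SM           # reliability index
pf = norm.cdf(-beta)     # probability of failure, Pf = Phi(-beta)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```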
3.2 System reliability evaluation
The concept used to consider multiple failure modes and/or multiple component failures is known as system reliability evaluation. A complete reliability analysis includes both the component level and the system level. There are three types of systems:

Component system – A system with only one performance function.

Series system or weakest-link system – A system with multiple performance criteria, where system failure may be defined as occurring when any of the criteria is violated. Thus system failure is defined by the union of the individual performance failures. The first-order bounds for a system with n failure events Ei are

max_i P(Ei) ≤ PS ≤ Σ P(Ei)

where PS = probability of failure of the system and P(Ei) = probability of failure of the ith failure event. The lower bound holds when all events are perfectly dependent; the upper bound holds when they are mutually exclusive. If the events are independent, the upper bound becomes

PS = 1 − Π [1 − P(Ei)]
Parallel or redundant system – This is the case where failure occurs only when all the components fail. System failure is defined as the intersection of the individual (component) failure events. The first-order bounds are

max[0, Σ P(Ei) − (n − 1)] ≤ PS ≤ min_i P(Ei)

In most cases the lower bound will be close to zero. The upper bound is exact if the events are perfectly correlated. If the events are independent, the lower bound becomes

PS = Π P(Ei)
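A minimal sketch of these bounds, for assumed component probabilities of failure:

```python
import numpy as np

# Assumed component failure probabilities, for illustration only
p = np.array([1e-3, 5e-4, 2e-3])
n = len(p)

# Series (weakest-link) system: first-order bounds and independent case
series_lower = p.max()
series_upper = min(1.0, p.sum())
series_indep = 1.0 - np.prod(1.0 - p)

# Parallel (redundant) system: first-order bounds and independent case
parallel_lower = max(0.0, p.sum() - (n - 1))
parallel_upper = p.min()
parallel_indep = np.prod(p)

print("series:  ", series_lower, series_indep, series_upper)
print("parallel:", parallel_lower, parallel_indep, parallel_upper)
```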
As shown, to analyze a series or a parallel system we first have to calculate the individual component probabilities of failure; the rest is only substitution into the above equations to obtain the system probability of failure. In this study we have therefore concentrated on the analysis of the component system with one performance function.

3.3 Bayesian belief network

Belief networks are graphical representations of models that capture the relationships between model parameters. The network identifies the variables that interact directly and simplifies belief updating by limiting the processing of such variables to their local neighborhood.
Figure 9. Propagation of uncertainty in Bayesian Belief Network.
A Bayesian belief network (BBN) is a causal belief network whose nodes represent stochastic variables and whose arcs specify direct causal influence between the linked variables (Suermondt, 1992). According to Bayes' rule, the posterior probability, or belief, of a hypothesis H given the evidence e is

p(H|e) = p(e|H) p(H) / p(e)

Figure 10. Top events and sub-top events that cause failure of the soil nail system.
where p(e|H) = likelihood of the evidence e if the hypothesis H is true and p(H) = prior probability that H is the correct hypothesis. The propagation of belief from a parent to its children, for the case of a three-node network, is obtained from the chain rule, in which λ(Z) is evaluated from p(Y|Z) and π(X) is evaluated from the known evidence.
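A minimal numerical sketch of this λ–π updating for a hypothetical binary chain X → Z → Y follows; all conditional probability tables are assumed for illustration and are not the paper's values.

```python
import numpy as np

# Hypothetical CPTs for a three-node chain X -> Z -> Y (all binary)
p_x = np.array([0.7, 0.3])             # p(X)
p_z_given_x = np.array([[0.9, 0.1],    # p(Z|X), rows indexed by X
                        [0.2, 0.8]])
p_y_given_z = np.array([[0.8, 0.2],    # p(Y|Z), rows indexed by Z
                        [0.3, 0.7]])

# pi(Z): causal support propagated down from X (no evidence on X here)
pi_z = p_x @ p_z_given_x

# lambda(Z): diagnostic support from observing the evidence Y = 1
lam_z = p_y_given_z[:, 1]

# Belief: BEL(Z) proportional to pi(Z) * lambda(Z), then normalized
bel_z = pi_z * lam_z
bel_z /= bel_z.sum()
print(bel_z)
```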
4 RELIABILITY ANALYSIS OF SOIL NAIL SYSTEM
Based on the failure analysis described in the previous section, fault tree analysis is applied for the deterministic reliability assessment of each failure mechanism of the soil nail system. The results are set up in detailed fault trees comprising the different failure mechanisms: pull-out failure, nail tendon failure, facing failure and overall instability. These four failure mechanisms identified in the soil nail system were defined to be the top events of the fault tree. The independence of the failure modes was carefully checked and adjusted. The following presents the results of the analyses of the failure mechanisms of the soil nail system. The top-level events and sub-top-level events that cause failure of the soil nail system are shown in Fig. 10. The top events that cause failure of the soil nail system are identified as pull-out failure, nail tendon failure, punching failure and general stability failure. The sub-top events are common to all top-level events: fault during design, fault during construction, fault due to poor maintenance and
Figure 11. Failure events that lead to tendon pull-out failure: fault tree analysis.
fault due to external factors such as unexpected loads, changes in the set-up of the surrounding area, etc. The last three sub-top events are undeveloped events, in the sense that quantitative probability evaluations are difficult to obtain, and they are amenable to other, non-quantitative methods. The fault tree diagram of the pull-out failure, together with its consolidated form, is shown in Fig. 11. Examination shows that, among the events that lead to failure, only some can
be quantified through structural reliability analysis (SRA), i.e. event E5. Event E5 reads as the event that the tendon capacity is overestimated due to uncertainty in the parameters determining the ultimate ground-grout bond strength. The event of overestimation of the tendon pull-out capacity (E2) is related to events E3, E4, B6 and B7. In turn, E3 depends on B1, B2 or B3. The effect of the construction method is difficult to quantify using the standard SRA method; however, it is suitable for analysis using the BBN. Event E6 depends on either B4 or B5. Fault tree diagrams and their consolidated forms for tendon failure, face failure and general stability failure can be established similarly. A simpler causal relationship can be obtained using the BBN, as shown in Fig. 11. Inspection of Fig. 11 indicates that:

1. The causal relationships of some nodes cannot be cast into a capacity-demand model and therefore cannot be evaluated using SRA.
2. The propagation of uncertainty is not necessarily of a "true-false" type; therefore, system failure evaluation using Eqs. 15 to 18 provides only upper and lower bounds.

Figure 12. Data sheet for soil nail example.

5 EXAMPLE
The following example is taken from the FHWA manual as an example of the LRFD design of soil nailing. The geometry data of the soil nail are given in Fig. 12 and the level of detail for the BBN is given in Fig. 13. The grey nodes in Fig. 13 represent stochastic variables with known values. Since face failure does not lead to total system failure, it is not included in the evaluation of the system failure; however, this failure mode is still relevant for risk evaluation. The soil nail system probability of failure is evaluated based on the probability of failure of each nail, which is at a different stage of failure given the failure surface. The different stages of failure are propagated into failure of the system using the BBN.
Figure 13. Bayesian belief network for soil nail system.
Given the geometry and the stochastic variables, the steps are as follows (a sketch of this loop is given below):

1. Establish a trial failure circle and calculate the tension in each tendon using SRA, with the limit state function given in Eq. (1), considering the soil-nail interaction diagram shown in Fig. 5.
2. Evaluate the system failure probability using the BBN, adopting the Pf of each tendon established in Step 1 as nodes in the BBN. The dependency of the failure of the system on each nail is found using SRA with the limit state functions given in Eqs. (10) and (11).
3. Repeat Step 1 with a new trial circle until the optimum Pf can be established.

An example of the application of the framework and the results of parametric studies will be presented during the conference.
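The loop can be sketched as follows. This is an illustrative stand-in with assumed per-nail capacity/load statistics, in which an independent series bound replaces the full BBN propagation of Step 2; it is not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def nail_pf(m_q, s_q, m_f, s_f):
    # Component probability of failure from the capacity-demand model
    beta = (m_q - m_f) / np.hypot(s_q, s_f)
    return norm.cdf(-beta)

# Assumed (mQ, SQ, mF, SF) statistics per nail for two trial slip circles
trial_circles = {
    "circle_1": [(200, 25, 120, 20), (180, 22, 130, 18), (160, 20, 140, 15)],
    "circle_2": [(210, 26, 110, 19), (170, 21, 150, 17), (150, 18, 145, 14)],
}

worst = 0.0
for name, nails in trial_circles.items():
    pfs = np.array([nail_pf(*nail) for nail in nails])
    # Stand-in for the BBN propagation of Step 2: independent series bound
    pf_system = 1.0 - np.prod(1.0 - pfs)
    worst = max(worst, pf_system)
    print(name, pf_system)
print("governing system Pf:", worst)
```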
6 CONCLUDING REMARKS
Based on the preceding discussion, the following conclusions can be advanced:

1. We have reviewed the current design approach for the soil nail system. The failure mechanisms of the soil nail system consist of pull-out failure, tendon failure, face failure and general stability failure. Fault tree analysis reveals that the failure events are amenable to either quantitative or qualitative methods.
2. We propose a hybrid approach to evaluate the probability of failure of the soil nail system. Events that trigger failure of the system are evaluated using quantitative and qualitative methods: the structural reliability analysis method is used as the quantitative method and the BBN is used to propagate belief among the nodes forming the system.
3. An example of the application of the framework and the results of parametric studies will be presented during the conference.
ACKNOWLEDGEMENT

The authors acknowledge the management of Universiti Teknologi PETRONAS for their encouragement and support to attend the conference.
REFERENCES

Chow, C.M. & Tan, Y.C. 2006. Soil nail design: A Malaysian perspective. International Conference on Slopes, 7–8 August 2006, Kuala Lumpur, Malaysia.
Christian, J.T. 2004. Geotechnical engineering reliability: How well do we know what we are doing? Journal of Geotechnical and Geoenvironmental Engineering, October 2004.
Duncan, J.M. 2000. Factor of safety and reliability in geotechnical engineering. Journal of Geotechnical and Geoenvironmental Engineering, April 2000.
FHWA 1998. Manual for Design & Construction Monitoring of Soil Nail Walls. FHWA-SA-96-69R, U.S. Department of Transportation, Federal Highway Administration, Washington D.C.
Kortenhaus, A., Oumeraci, H., Weissmann, R. & Richwien, W. 2002. Failure mode and fault tree analysis for sea and estuary dikes. Research in coastal engineering funded by the Federal Ministry for Science, Education, Research and Technology (BMBF), Germany.
Suermondt, H.J. 1992. Explanation in Bayesian Belief Networks. PhD thesis, Stanford University.
Vaunat, J. & Leroueil, S. 2002. Analysis of post-failure slope movements within the framework of hazard and risk analysis. Natural Hazards 26: 83–109. Kluwer Academic Publishers, Netherlands.
Yuan, J.-X., Yang, Y., Tham, L.G., Lee, P.K.K. & Tsui, Y. 2003. New approach to limit equilibrium and reliability analysis of soil nailed walls. International Journal of Geomechanics 3(2): 145–151.
Reliability analysis of embankment dams using Bayesian network Dianqing Li, Haihao Liu, Shuaibing Wu State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan, China
ABSTRACT: This paper proposes a Bayesian network approach for the reliability analysis of dams. The basic principle of the Bayesian network is briefly introduced. The importance coefficient of nodes in the Bayesian network is defined and the formula for the importance coefficient is presented. The Bayesian network is then employed to evaluate the reliability of embankment dams. The results indicate that the reliability of embankment dams can be evaluated using the Bayesian network in a more rational way. The relationship between the causes and consequences associated with dam safety evaluation can be expressed in an intuitive format through the Bayesian network. In addition, the common cause failure associated with the reliability evaluation of embankment dams can be easily solved using the Bayesian network. The results of the importance analysis can aid in the identification of which factors are candidates for mitigation actions.
1 INTRODUCTION
It is increasingly recognized by decision makers that efficient management and consistent quantification of the risks associated with large dams is an issue of societal concern. The book entitled Risk and Uncertainty in Dam Safety provides details of the analysis methods available to characterize and quantify the risks and uncertainties in dam safety (Hartford and Baecher 2004). The reliability and risk of dams have been extensively studied in the literature. For example, Benjamin (1983) explored the application of statistical decision theory to dams and levees. Lave and Balvanyos (1998) studied risk analysis and risk management using benefit-cost analysis. In recent years, the Bayesian network (BN) (Jensen 1996; Jensen 2001) has received considerable interest for risk assessments and as a decision support tool for technical systems. For instance, Friis-Hansen (2001) explored the application of BN to marine structures. Faber et al. (2002) studied the risk assessment of decommissioning options using BN. Bayraktarli et al. (2005) proposed a framework for the assessment and management of earthquake risks based on BN, where optimal risk mitigation actions are identified based on indicators. Holický (2005) developed a BN for fire safety design situations. Straub (2005) explored the application of BN to natural hazards risk assessment. The objective of this paper is to demonstrate the potential and advantages of BN for application to the reliability analysis of embankment dams. Firstly, a brief introduction to the BN is presented. Thereafter, the importance coefficient for nodes in a BN is defined and the formula for calculating the importance coefficient is presented. Finally, examples are investigated to illustrate the potential and advantages of BN for application to the reliability analysis of embankment dams.
2 BAYESIAN NETWORK
A Bayesian network, also known as a Bayesian belief network, is a directed acyclic graph formed by the variables (nodes) together with directed edges, with a table of conditional probabilities of each variable on all its parents attached (Jensen 1996). It is therefore a graphical representation of uncertain quantities and decisions that explicitly reveals the probabilistic causal dependence between the variables as well as the flow of information in the model. The Bayesian network methodology has been developed and applied mostly in the field of artificial intelligence, where most studies have focused on developing efficient algorithms for the construction and computation of large Bayesian networks. A brief and concise overview of BN is provided by Pearl and Russell (2000); more extensive textbooks on BN include Jensen (1996) and Jensen (2001). Furthermore, many software packages, both commercial and freeware, are available for the computation of BN. Therefore, only a highly condensed introduction to BN is presented in the following. A BN is a probabilistic model based on directed acyclic graphs. In a Bayesian network, the nodes without any arrows directed into them are called root nodes; they have prior probability tables (discrete nodes) or functions (continuous nodes) associated with them. The nodes that have arrows directed into them are called child nodes and the nodes that have arrows directed from them are called parent nodes. Each child has a conditional probability table (or function) associated with it, given the state (or value) of the parent
Figure 1. A simple Bayesian network.
nodes. The BN represents the joint probability distribution P(X) of a set of variables X = X1, X2, …, Xn. The joint probability distribution can be easily computed using the BN. However, in order to obtain such a joint probability distribution, the number of required probabilities increases exponentially with the number of variables in the model. In the last decade, algorithms that facilitate calculation in graphical models have been developed (e.g., Celeux et al. 2006). The BN enables efficient modeling by factoring the joint probability distribution into conditional distributions for each variable given its parents. A simple BN is illustrated in Fig. 1 (Straub 2005). It consists of three variables X1 to X3. X1 is a parent of X2 and X3, which are its children. The joint probability distribution of the network shown in Fig. 1 is given by (Straub 2005)

P(X1, X2, X3) = P(X2|X1) P(X3|X1) P(X1)    (1)
According to the assumptions of d-separation and conditional independence, the joint probability distribution for any BN can be expressed as

P(X) = Π P(Xi | parents(Xi)), with the product taken over i = 1, …, n    (2)
For computational reasons, BN is restricted to variables with discrete states. In some cases, Gaussian variables can be used; however, it is then required to discretize all continuous random variables. The BN allows entering evidence, so the probabilities in the network can be updated when new information becomes available. For example, when the state of X2 in the network shown in Fig. 1 is observed to be e, this information will propagate through the network and the joint prior probabilities of X1 and X3 will change to the joint posterior probabilities (Straub 2005)

P(X1, X3 | X2 = e) = P(X2 = e | X1) P(X3 | X1) P(X1) / P(X2 = e)    (3)
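The factorization of Eq. (1) and the evidence update of Eq. (3) can be illustrated numerically. The sketch below uses the three-node network of Fig. 1 with assumed (illustrative) conditional probability tables; it is not based on the paper's data.

```python
import numpy as np

# Hypothetical CPTs for the network of Fig. 1: X1 -> X2 and X1 -> X3 (binary)
p_x1 = np.array([0.95, 0.05])         # p(X1)
p_x2_x1 = np.array([[0.99, 0.01],     # p(X2|X1), rows indexed by X1
                    [0.30, 0.70]])
p_x3_x1 = np.array([[0.98, 0.02],     # p(X3|X1), rows indexed by X1
                    [0.40, 0.60]])

# Joint distribution, Eq. (1): P(x1, x2, x3) = P(x2|x1) P(x3|x1) P(x1)
joint = p_x1[:, None, None] * p_x2_x1[:, :, None] * p_x3_x1[:, None, :]

# Enter evidence X2 = 1 and renormalize, Eq. (3): P(X1, X3 | X2 = e)
posterior = joint[:, 1, :] / joint[:, 1, :].sum()
print(posterior)
```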
Note that the common influencing variable X1 introduces a dependency between X2 and X3. This is a typical situation in dam risk assessment. For instance, X1 may represent the load conditions, and X2 and X3 dam sites at two different locations. The BN can be extended to decision graphs by including decision nodes and utility nodes in the network, which enables the assessment and the optimization of possible actions in the framework of decision theory.
The optimal action is the one yielding the maximal expected utility. The decision problem of dam safety is beyond the scope of this paper. One main purpose of evaluating the probability of dam failure is to find the most important cause, so that more attention can be paid to it to reduce the probability of dam failure. Such importance analysis can be easily conducted in the Bayesian network. The importance coefficient associated with the nodes representing the basic variables can be defined as
αi = (p0 − pi) / p0    (4)

where αi is the importance coefficient of the ith node in the Bayesian network; p0 is the failure probability of the parent node; and pi is the failure probability of the parent node when the state of the ith node is set to safe.

3 ILLUSTRATIVE EXAMPLES
Reliability analysis of embankment dams is investigated to illustrate the application of the Bayesian network. The first example is the reliability analysis of an embankment dam with common cause failure factors. The second example is the reliability analysis of embankment dams based on data from China using the Bayesian network. In the first example, the considered causes of dam failure are overtopping and slope failure. Both spillway failure and an unexpected flood can lead to overtopping. An unexpected flood and cracks in the dam body can lead to slope failure. It can be seen that the unexpected flood can lead not only to overtopping but also to slope failure; it is thus a common cause failure factor in the reliability analysis of the dam. Therefore, the reliability analysis of the dam should account for the correlation between the probabilities of overtopping and slope failure. The Bayesian network for this problem is shown in Fig. 2. It can be seen from Fig. 2 that the node representing cracks in the dam body includes three states, namely, normal cracks that will not lead to slope failure, serious cracks that will possibly lead to slope failure, and catastrophic cracks that will definitely lead to slope failure. The occurrence probabilities of normal, serious, and catastrophic cracks in the dam body are assumed to be 0.949, 0.05, and 0.001, respectively. All the remaining nodes have two states, namely, true and false, representing safe and failure, respectively. The corresponding probabilities are also shown in Fig. 2. The cause and effect relationships defining these mechanisms are strongly interrelated and cannot be represented by a conventional event tree or fault tree, which considers each mechanism independently. However, such a problem can be easily solved by the Bayesian network, as discussed above. Based on the data reported by Li et al. (2006), the main causes of dam failure and the corresponding annual probabilities of failure in China are reproduced in Table 1. It is clear that overtopping is the major cause of dam failure, resulting in 50.2% of dam failures among the total dam accidents. 34.8% of dam
failures are caused by the poor quality of the dam body. Human errors and the other causes account for only 15% of dam failures. Based on Table 1 and the principles of the Bayesian network, the corresponding Bayesian network shown in Fig. 3 for the reliability analysis of embankment dams is constructed with the help of the Hugin software. For the considered problem, each node has two states, namely, true and false. Based on the data shown in Table 1, the value of the dam failure node can be determined as 8.44E-04, which is of the same order of magnitude as the global probability of dam failure. To identify the key factors contributing to dam failure, the importance coefficients of the considered factors should be evaluated. For illustrative purposes, the node representing seepage is selected for importance analysis. Take the Bayesian network shown in Fig. 3 as an example again. The importance analysis can be conducted by setting the probability of the state false to zero in the Bayesian network, which implies that seepage failure cannot occur. The corresponding Bayesian network is shown in Fig. 4. The annual probability of failure of the considered dam can be calculated with the help of the Hugin software. The probability of the quality of the dam body having problems decreases from 3.05E-04 to 1.27E-04 when measures are taken to avoid seepage failure. Accordingly, the corresponding annual probability of failure decreases from 8.44E-04 to 6.67E-04. Applying Eq. (4), the importance coefficient for seepage is obtained as 0.21. Similarly, the importance coefficients of the other variables can be calculated using Eq. (4); they are shown in Table 2. As can be seen from Table 2, insufficient discharge capacity and seepage of the dam foundation are significant random variables with higher importance coefficients. That is, changes in the discharge capacity and in the seepage of the dam foundation will significantly influence the probability of dam failure. Therefore, to reduce the probability of dam failure

Figure 2. Bayesian network including common cause failure for reliability evaluation of embankment dam.

Table 1. Causes of dam failure and annual probability of dam failure based on Li et al. (2006).

Cause of dam failure | No. | Percentage (%) | Annual dam failure (×10⁻⁴)
Overtopping – unexpected flood | 435 | 12.6 | 1.01
Overtopping – insufficient discharge capacity | 1032 | 37.6 | 3.29
Quality of dam body – seepage of dam foundation | 701 | 20.2 | 1.77
Quality of dam body – slope stability of dam body | 110 | 3.2 | 0.28
Quality of dam body – spillway | 208 | 6.0 | 0.53
Quality of dam body – flood-releasing tunnel | 5 | 0.1 | 0.013
Quality of dam body – culvert | 168 | 4.9 | 0.42
Quality of dam body – collapse of dam body | 13 | 0.4 | 0.033
Human errors | 185 | 5.3 | 0.47
Others | 212 | 6.1 | 0.54
Total | 3339 | | 8.75
Figure 3. Bayesian network applied for reliability evaluation of embankment dam.
Figure 4. Bayesian network for reliability evaluation of embankment dam without seepage damage.
effectively, some measures, such as enlarging the spillway, should be taken to increase the discharge capacity. Among the remaining random variables, the probability of dam failure is most sensitive to changes in the unexpected flood, while changes in the flood-releasing tunnel appear to be the least important factor.
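Using the figures quoted above for the seepage node, the importance coefficient of Eq. (4) can be checked directly. This is a minimal sketch; only p0 and pi are taken from the paper, and Eq. (4) is read as the relative reduction in failure probability, as reconstructed above.

```python
# Values from the paper's seepage example
p0 = 8.44e-4    # annual probability of dam failure, all causes active
p_i = 6.67e-4   # annual probability of failure with seepage failure removed

alpha = (p0 - p_i) / p0   # importance coefficient, Eq. (4)
print(f"alpha(seepage) = {alpha:.2f}")   # -> 0.21, as reported
```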
4 CONCLUSIONS
The application of BN to the reliability analysis of embankment dams has been demonstrated in this paper. An approach for the importance analysis of the nodes in a BN is proposed. Two examples on reliability
analysis of embankment dams are presented to demonstrate the validity and capability of the proposed approach. The results indicate that the reliability of embankment dams can be evaluated using the Bayesian network in a more rational way. The relationship between the causes and consequences for dam safety evaluation can be expressed in an intuitive format through the Bayesian network. In addition, the common cause failure problem in the reliability evaluation of embankment dams can be easily solved using the Bayesian network. The results of the importance analysis can aid in the identification of which factors are candidates for mitigation actions. For the considered example, insufficient discharge capacity and seepage of the dam foundation are significant random variables with higher importance coefficients. Accordingly, to reduce the probability of dam failure effectively, some measures such as enlarging the spillway should be taken to increase the discharge capacity.

Table 2. Importance coefficients of factors causing dam failure.

Factor causing dam failure | Importance coefficient (%)
Unexpected flood | 13.0
Insufficient discharge capacity | 39.0
Seepage of dam foundation | 21.0
Slope stability of dam body | 3.3
Spillway | 6.2
Flood-releasing tunnel | 0.1
Culvert | 5.1
Collapse of dam body | 0.5
Human errors | 5.5
Others | 6.3

ACKNOWLEDGMENT

This research was substantially supported by grants from the National Natural Science Foundation of China (Project No. NSFC50879064) and the Natural Science Foundation of Hubei Province, China (Project No. 2008CDA091).

REFERENCES
Bayraktarli, Y.Y., Ulfkjaer, J., Yazgan, U. et al. 2005. On the application of Bayesian probabilistic networks for earthquake risk management. Proceedings of the 9th International Conference on Structural Safety and Reliability, edited by Augusti, G., Schuëller, G.I. & Ciampoli, M. Millpress, Rotterdam: 3505–3512.
Benjamin, J.R. 1983. Risk and decision analyses applied to dams and levees. Structural Safety 1(1): 257–268.
Celeux, G., Corset, F., Lannoy, A. et al. 2006. Designing a Bayesian network for preventive maintenance from expert opinions in a rapid and reliable way. Reliability Engineering and System Safety 91(7): 849–856.
Faber, M.H., Kroon, I.B., Kragh, E. et al. 2002. Risk assessment of decommissioning options using Bayesian networks. Journal of Offshore Mechanics and Arctic Engineering 124(4): 231–238.
Friis-Hansen, A. 2001. Bayesian Networks as a Decision Support Tool in Marine Applications. PhD thesis, Technical University of Denmark.
Hartford, D.N.D. & Baecher, G.B. 2004. Risk and Uncertainty in Dam Safety. Thomas Telford.
Holický, M. 2005. Reliability and risk assessment of buildings under fire design situation. Proceedings of the 9th International Conference on Structural Safety and Reliability, edited by Augusti, G., Schuëller, G.I. & Ciampoli, M. Millpress, Rotterdam: 3237–3242.
Jensen, F.V. 1996. An Introduction to Bayesian Networks. New York: Springer.
Jensen, F.V. 2001. Bayesian Networks and Decision Graphs. New York: Springer-Verlag.
Lave, L. & Balvanyos, T. 1998. Risk analysis and management of dam safety. Risk Analysis 18(4): 455–462.
Pearl, J. & Russell, S. 2000. Bayesian Networks. UCLA Cognitive Systems Laboratory, Technical Report (R-277).
Straub, D. 2005. Natural hazards risk assessment using Bayesian networks. Proceedings of the 9th International Conference on Structural Safety and Reliability, edited by Augusti, G., Schuëller, G.I. & Ciampoli, M. Millpress, Rotterdam: 2509–2516.
Von Thun, L.J. 1987. Use of risk based analysis in making decisions on dam safety. In Engineering Reliability and Risk in Water Resources, edited by Duckstein, L. & Plate, E.J. Martinus Nijhoff Publishers: 135–146.
Identification and characterization of liquefaction risks for high-speed railways in Portugal P.A.L.F. Coelho & A.L.D. Costa Department of Civil Engineering, Faculty of Science and Technology, University of Coimbra, Portugal
ABSTRACT: The Portuguese Government decided to develop a High-Speed Railway (HSR) network linking its major cities and other European countries via the extensive Spanish network. This encouraged the development of an international research project aiming at assessing, analyzing and mitigating risks affecting this transportation infrastructure. Risk analyses undoubtedly depend on the ability to identify and characterize the risk factors affecting the HSR construction and operation. The information available to perform these tasks is generally scarce and ambiguous, which limits the use of ordinary methods. This paper considers the use of experts' opinions to carry out the identification and quantitative assessment of earthquake-induced liquefaction risks. Both the probability of occurrence and the consequences resulting from the risks are considered. Liquefaction risks are selected because of how seriously they can affect the HSR. The results are compared with respect to the level and type of experience of the experts. It is shown that experts' opinions are a useful tool to characterize liquefaction risks, even if some issues need to be considered.

1 INTRODUCTION
The Portuguese Government decided to develop the first High-Speed Railway (HSR) network in the country. This network will significantly reduce journey durations between the major Portuguese cities located along the coast, where most of the population lives. It will also radically improve the quality and speed of international railway connections, via the planned links to the vast Spanish network. The political support for the construction of the HSR and for the establishment of joint research projects in the field of transportation infrastructures motivated the creation of an international research project that aims at identifying, assessing, analyzing and mitigating the risks that may affect the construction and the operation of an HSR, which involves uncommonly high financial and technical complexity. Hence, Risk emerged as a collaborative research project involving various Portuguese universities and research centers and MIT, which has a long and fruitful experience with research on risk analyses in engineering (e.g. Einstein et al. 1995). This experience includes risk analyses of railway infrastructures built in countries like Japan that, like Portugal, are seismically active and present complex geotechnical conditions (Sussman & Shimamura 2007). The aim of the project is to develop tools that allow the development of comprehensive risk analyses to support political decisions, namely for the HSR. A component of the Risk project considers the seismic and geotechnical risks affecting the Portuguese HSR. This is justified by the fact that seismic and geotechnical risks can have a very serious impact on the
construction and operation of the HSR. In addition, there is considerable difficulty in identifying and characterizing, in terms of probability of occurrence and magnitude of the consequences, the different hazards of seismic and geotechnical nature that may affect the HSR. Therefore, the consideration of seismic and geotechnical risks is a fundamental aspect of risk analyses applied to the HSR to be built in Portugal. When performing risk analyses, experts' opinions are often the only practical alternative for accomplishing the task of identifying and assessing seismic and geotechnical risks. Based on their experience, experts identify the risks that may possibly be relevant to the problem under consideration. They also quantify the probability that each risk will effectively materialize and the effects caused by its occurrence, which depend a great deal on the local conditions. Despite the understandable need to use expert panels as a tool to appraise the data allowing seismic and geotechnical risks to be incorporated in risk analyses, the soundness of this tool is seldom discussed. This paper presents a preliminary assessment of the validity of using experts' opinions to characterize the risks related to the occurrence of earthquake-induced (cyclic) liquefaction in HSR.
2 THE USE OF EXPERTS' OPINIONS IN RISK ANALYSIS

In order to carry out a risk analysis, different procedures can be used to characterize the risk factors relevant
to the problem under consideration (Scawthorn 2008):

– Analytical;
– Empirical;
– Expert opinion.

In many circumstances, it is practically impossible to obtain analytical solutions able to quantify a certain risk. This is particularly true in the case of geotechnical and seismic risks, where even approximate numerical solutions can rarely offer a feasible alternative. Empirical solutions also have limited value in the case of seismic hazards, since the return periods tend to be considerably long. Therefore, it is impracticable to collect enough good quality data to perform statistical analyses. With respect to this issue, Christian (2004) expresses his concern that properties estimated from small samples may be seriously in error. Therefore, expert opinion is often the ultimate source of information that can be used. Hartford (2008) considers that the term "expert judgment" involves enumerating subjective probabilities that reflect an expert's degrees of belief. He also states that an expert must possess two fundamental capabilities: substantive expertise and normative expertise. The first determines how well a set of assessments predicts the actual outcome, high probabilities being on average assigned to those events that turn out to occur. The second relates to the ability to assign probabilities to events that correspond to their empirical frequency of occurrence. Expert opinions can be used at different phases of a risk analysis involving seismic and geotechnical risks, namely during the early planning and decision stages, during the comparison of technical alternatives, and during the final design and construction stages (Einstein et al. 1992). The usefulness of this tool has been demonstrated in practice, namely in cases where the geotechnical part was a significant element of the risk analysis (Einstein & Haas 1984, Einstein et al. 1992). Christian (2004) reckons that experts generally tend to estimate mean trends well but tend to underestimate uncertainty. In addition, he considers that experts tend
to be overconfident in their estimates. However, these conclusions are debatable, mostly due to the reasons why experts’ opinions became popular: the results are typically difficult to verify (Scawthorn 2008). Taking into account all the issues regarding the use of this tool in risk characterization, Faber (2008) considers that the use of expert panels still lacks a generally applicable and consistent philosophy.
3 LIQUEFACTION IN PORTUGAL

3.1 Characteristics of the territory

Earthquake-induced liquefaction is a serious hazard to an HSR, which is extremely susceptible to some of the effects caused by the occurrence of the phenomenon:

– the settlement or failure of the ground foundations, which can cause derailment of the high-speed train and/or long-lasting disruption of the operation or reduction of its quality;
– the vibration induced at the ground surface, which can cause derailment of the high-speed train even if the earthquake causes no permanent deformation of the railway, as observed in the Niigata Chuetsu Earthquake (Watanabe et al. 2004).

The Portuguese HSR will also be vulnerable to the effects of earthquake-induced liquefaction. In fact, the geological and hydrological conditions of a significant part of the territory, which are shown in Figure 1 along with its seismicity, favor the occurrence of earthquake-induced liquefaction. First of all, various regions located near the coast and along the alluvial plains of the major Portuguese rivers (Figure 1a), especially south of the city of Porto, have relatively shallow water tables. In addition, many of the deposits existing in those regions are mostly formed by recent sedimentary soils, which are illustrated as dark areas in the geological map of Figure 1b. Because these deposits commonly contain cohesionless materials in a loose condition, they are extremely prone to earthquake-induced liquefaction. Finally, the Portuguese territory is a seismically active region, as
Figure 1. Hydrological, geological and seismic characteristics of the Portuguese territory.
412
boldly demonstrated by the catastrophic event that struck Lisbon in 1755. The seismic zoning proposed in EC8 (Figure 1c) shows that the southern part of the territory is more prone to large seismic events, namely the areas around and south of Lisbon in the vicinity of the western and southern coasts. The seismicity depends on the mechanisms causing the earthquake, the country being affected by both interplate and intraplate type of events. 3.2
Historical liquefaction events and implications to the Portuguese HSR
Figure 2 clearly underlines the argument that liquefaction can be particularly damaging to the HSR in Portugal. In fact, analyses of historical testimonies describing the effects of past earthquakes suggest that liquefaction has occurred repeatedly in the territory (Figure 2a). The occurrences were triggered by earthquakes of different magnitudes and with distinct mechanisms at their origin. The results compiled by Jorge & Vieira (1997) indicate that the 1755 Lisbon Earthquake caused liquefaction episodes in all the regions identified as potentially liquefiable. However, no firm evidence was found of liquefaction caused by other earthquakes outside the alluvial plain of the Tagus (Tejo) river. This implies that the Portuguese HSR may be affected, more or less frequently, by liquefaction effects along a significant part of its alignment (Figure 2b). This alignment is fairly rigid, due to the need to serve the population, which lives predominantly in the regions where liquefiable deposits are present (Figure 2c).

Figure 2. Alignments of the HSR in relation to the demographic distribution and the historical liquefaction events in Portugal.

3.3 Empirical assessment of liquefaction risks

In order to qualitatively judge the value of experts' opinions in the characterization of liquefaction risks, an empirical assessment of those risks was carried out for the HSR section between Lisbon and Porto using the information available. The first principle accepted was the historical criterion, which states that liquefaction tends to recur at locations where it occurred in previous earthquakes, as observed in the field (Youd 1984) and in centrifuge models (Coelho et al. 2006). On the other hand, even if the scale of the effects depends on the local ground conditions and on the magnitude of the seismic motion, it is reasonable to expect that some degree of damage will always take place when liquefaction occurs. Thus, the probability of occurrence of a liquefaction risk equals the probability of exceedance of an earthquake that once liquefied the ground at that location. Consequently, the annual probability of occurrence of a liquefaction risk is given by the inverse of the return period of a ground motion that induced local liquefaction on previous occasions. The return period of interplate earthquakes generated at sea and affecting the Portuguese territory, such as the 1755 and 1761 events, is in the order of a thousand years. Intraplate earthquakes occur more often, the return period of those generated in the Lower Tagus Valley region being in the order of a few hundred years. Table 1 relates return periods and earthquake magnitudes for the Lisbon area.

Table 1. Return periods and magnitudes of earthquakes in the Lisbon area (Campos Costa 2008).

Return period (years):  95   200  475  700  975  2000  5000
Magnitude (M):          7.2  7.6  7.9  8.1  8.2  8.4   8.5

Table 2 lists the magnitudes and origins of the earthquakes supposed to have caused liquefaction in Portugal in the past, which are plotted in Figure 2a.

Table 2. Magnitudes and origins of the earthquakes considered when identifying liquefaction occurrences in Figure 2a.

Year  Origin                                Magnitude
1531  Lower Tagus Valley region             ≈7
1755  Interplate (exact location unknown)   8.5–9(1,2)
1761  Interplate (Gorringe zone)            ≈8
1856  Loulé Fault                           –
1858  Sado Submarine Valley region          ≈7
1909  Lower Tagus Valley region             5.8–6.3(3,4)

(1) Gutscher (2006); (2) Chester (2008); (3) Teves-Costa et al. (1999); (4) Sousa Oliveira et al. (2006)

Larger interplate earthquakes, generated at the contact between the Eurasian and African plates, have magnitudes of 8 or higher. These events, which can cause liquefaction in all the major liquefiable deposits existing in the Portuguese territory, may have return periods as high as 5000 years, according to Table 1. However, more comprehensive studies suggest shorter return periods for these events, varying between an overly pessimistic estimate of 614 ± 105 years (Chester 2008) and a more realistic prediction of 1500 to 2000 years (Gutscher 2006). Taking the midpoint of this last range as the most suitable value, the annual probability of exceedance of an interplate seismic event capable of liquefying deposits in different regions of Portugal is around 0.06%. Therefore the annual probability of liquefaction occurring in a particular liquefiable deposit in Portugal, unless it is located near the Tagus river, may be in the order of 10−4 to 10−3.
With respect to intraplate earthquakes, whose magnitude can reach values as high as 6 to 7, the probability of occurrence tends to be larger. According to Table 1, the return period of such events is under one hundred years, which suggests an annual probability of exceedance close to 1%. Based on this, and considering that these earthquakes have definitely liquefied deposits in the Tagus river valley, the annual probability of liquefaction occurring at this location is in the order of 10−2. Because no firm evidence was found that these seismic events can trigger liquefaction in other deposits, this probability may not apply to liquefiable deposits located outside the Tagus river valley region.
In summary, the annual probabilities estimated for the interplate (10−4 to 10−3) and intraplate (10−2) earthquakes approximate the probability of liquefaction occurring in liquefiable deposits outside and within the Tagus river valley region, respectively. The probability of a liquefaction risk materializing will obviously be equal to or less than the probability of liquefaction occurring, each risk having its own probability of occurrence. For example, the probability of liquefaction-induced settlements affecting the operation of the HSR is closer to the probability, PL, of liquefaction occurring than is the probability of train derailment due to liquefaction-induced foundation settlement or failure, both being below PL.
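Since these figures follow directly from the return periods, the arithmetic can be restated compactly. The sketch below is an illustration added here, not part of the original study; it converts the return periods quoted above into annual exceedance probabilities, with the 1750-year value taken as the assumed midpoint of the 1500–2000-year range.

```python
# Illustrative sketch: the annual probability of a liquefaction risk is
# taken as the inverse of the return period of the ground motion that
# previously liquefied the site (the historical criterion of Section 3.3).

def annual_probability(return_period_years: float) -> float:
    """Annual exceedance probability from a return period in years."""
    return 1.0 / return_period_years

# Interplate events (deposits outside the Tagus valley): 1500-2000 years
p_interplate = annual_probability(1750.0)  # ~5.7e-4 per year, about 0.06%

# Intraplate events (Lower Tagus Valley): return period under ~100 years
p_intraplate = annual_probability(100.0)   # ~1e-2 per year, about 1%

print(f"interplate: {p_interplate:.1e}  intraplate: {p_intraplate:.1e}")
```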
4 USE OF EXPERTS' OPINIONS IN RISK CHARACTERIZATION: A CASE STUDY
In connection with the research performed under the Risk Project, a survey was conducted to evaluate the value of experts' opinions in assessing different risk factors for the HSR. Using a set of risk factors identified by a restricted number of experts involved in the research project, other experts were invited to check whether the risks listed were adequate and to quantify the probability of occurrence and the potential consequences of each risk. This paper discusses the results obtained regarding liquefaction risks.

4.1 Risk factors identified

This paper considers the risks related to the liquefaction phenomenon and affecting the embankments of the HSR. The risks taken into account, which concern the deformation, stability and soil-structure interaction aspects of the problem, are:
– Risk L1: settlement of the embankment due to liquefaction of the ground foundation;
– Risk L2: failure of the embankment due to liquefaction of the ground foundation;
– Risk L3: train derailment due to liquefaction effects on the ground motion propagation through the deposit.

4.2 Groups of experts consulted

To assess the importance of the professional experience and occupation of the experts, three groups of experts were invited to participate in the survey. The groups include experts representative of the following professional categories in Portugal:
– G1: industry/practitioners (8 experts);
– G2: junior academics (7 experts);
– G3: senior academics (9 experts).
Thus, the results can be compared to assess the influence of the experts' previously accumulated level and type of experience. The soundness of the experts' opinions can also be examined on a qualitative basis.

4.3 Regions tested

Two regions were considered in the present study (Figure 3). The northern region is located on the alluvial plain of the Mondego river, on potentially liquefiable soils, in an area of moderate to low seismicity. The southern region is located in the higher-seismicity area surrounding Lisbon, on the relatively hard rock formations of the Jurassic and Cretaceous.
Figure 3. Location of the regions tested in hydrological and geological maps of the Portuguese territory.
4.4 Information requested

Each expert invited to participate in the investigation was asked to classify the probability of occurrence and the magnitude of the effects of each risk factor, using semi-quantitative scales comprising 5 levels. The annual probability of occurrence could vary between the extreme values of 10−4 and 1, with 3 intermediate values spaced by factors of 10. The consequences of each risk occurrence could be qualitatively graded as: very little significant, little significant, significant, serious and very serious. The study was carried out through a blind written survey, with no supplementary explanations given on the risks proposed. The surveys were completed individually and no attempt was made to promote a consensus among the experts with respect to the values assigned to each risk.
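For concreteness, the two survey scales can be written out as simple lookup tables. The sketch below is a minimal illustration; the assumption that level 1 corresponds to the lowest probability (10−4) and level 5 to the highest (1) is ours, since the survey form itself is not reproduced in the paper.

```python
# Hypothetical coding of the two 5-level survey scales described above.
# The probability levels span 1e-4 to 1 in factors of 10; which end of the
# scale is "level 1" is an assumption made for this illustration.

PROBABILITY_LEVELS = {level: 10.0 ** (level - 5) for level in range(1, 6)}
# -> {1: 1e-4, 2: 1e-3, 3: 1e-2, 4: 0.1, 5: 1.0} (annual probabilities)

CONSEQUENCE_LEVELS = {
    1: "very little significant",
    2: "little significant",
    3: "significant",
    4: "serious",
    5: "very serious",
}
```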
4.5 Results obtained

Almost every expert agreed with the set of risk factors listed. None of the few experts describing the list as incomplete suggested additional risks related to the occurrence of earthquake-induced liquefaction.

4.5.1 Probability of occurrence

The experts consulted assessed the probability of occurrence of the risk factors proposed in the survey for the two regions, as shown in Figure 4. Except for risk L3, all the experts managed to quantify the probability of occurrence of each risk factor. This suggests that risk L3 is poorly understood or even ignored by many engineers. The fact that the number of experts declining to estimate the probability of occurrence of risk L3 is considerably larger within the least-experienced group of junior academics corroborates this hypothesis.
Figure 4. Annual probability of the occurrence of liquefaction risks in the two regions considered as quantitatively assessed by different categories of experts.
Figure 4 highlights the considerable disparity of the values proposed by the experts within any of the groups. Maximum and minimum predictions for the probability of occurrence often differ by two orders of magnitude, and in some cases the extreme values diverge by three orders of magnitude. Industry and senior academic experts tend to agree more on the probability of occurrence, yielding similar relative discrepancies and average values for each risk.
Table 3 presents the averages of the probabilities of occurrence of each risk, calculated per region and per group of experts in the manner stated in the footnote.

Table 3. Averages(1) of the annual probabilities of occurrence of liquefaction risks predicted by the experts.

Risk  Region  Industry    Junior academic  Senior academic  Global
L1    1       5.6 × 10−3  1.4 × 10−3       1.7 × 10−3       2.4 × 10−3
L1    2       3.2 × 10−3  5.2 × 10−3       1.3 × 10−3       2.8 × 10−3
L2    1       5.6 × 10−3  0.5 × 10−3       2.2 × 10−3       1.8 × 10−3
L2    2       3.2 × 10−3  3.7 × 10−3       1.7 × 10−3       2.7 × 10−3
L3    1       3.7 × 10−3  4.0 × 10−3       3.2 × 10−3       3.6 × 10−3
L3    2       3.7 × 10−3  1.8 × 10−3       2.4 × 10−3       2.5 × 10−3

(1) Computed from 10^(average of log of probabilities), ignoring the "no answer" replies.

It can be inferred that:
– the annual probability of occurrence of the risks is estimated between 0.5 × 10−3 and 5.6 × 10−3;
– industry and senior academic experts tend to predict that liquefaction risks are more likely to occur in region 1 than in region 2, while junior academics tend to express the contrary view;
– in region 1, senior and junior academics believe that risk L3 has the largest probability of occurrence, whereas practitioners consider this risk the least likely to occur, which suggests that information on occurrences of earthquake-induced train derailment is better known among academics;
– in region 2, senior academic and industry experts consider L3 the most likely risk to occur, while junior academics consider that L1 has the largest probability of occurrence;
– industry experts always predict a higher probability of a risk occurring than senior academics;
– industry experts and senior academics tend to produce estimates closer to the global average values, while junior academics assign values that fall arbitrarily above or below that average and that tend to have a significant impact on it.
The results suggest that industry experts are consistently more pessimistic than senior academics when evaluating the probability of occurrence. This may reflect the need of industry experts to incorporate a factor of safety in design, which apparently does not influence the assessments made by academic experts to the same extent. The values proposed by junior academics vary randomly and often significantly around the average, which may reflect their inexperience with some of the risks. Even though industry and senior academic experts predict that the risks are more likely to occur in region 1, the global average is controlled by the opinion of the junior academics. In fact, this group predicts, on average, that liquefaction risks L1 and L2 have a probability of occurrence that is about 4 and 7 times larger in region 2, respectively.
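The averaging rule in the footnote of Table 3 is a geometric mean in probability space, which prevents a single large estimate from dominating an average of quantities spread over orders of magnitude. A minimal sketch, using the group averages of risk L1 in region 1 as a stand-in for the individual replies:

```python
import math

# Geometric-mean aggregation from the footnote of Table 3:
# 10^(average of log10 of the probabilities), ignoring "no answer" replies.

def log_average(probabilities):
    values = [p for p in probabilities if p is not None]  # drop "no answer"
    logs = [math.log10(p) for p in values]
    return 10.0 ** (sum(logs) / len(logs))

# Group averages for risk L1 in region 1 (industry, junior, senior),
# with one hypothetical "no answer" shown as None:
print(log_average([5.6e-3, 1.4e-3, None, 1.7e-3]))  # ~2.4e-3, as in Table 3
```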
Figure 5. Consequences of the occurrence of liquefaction risks in the two regions considered as qualitatively assessed by different categories of experts.
4.5.2 Consequences of occurrence

The experts consulted also assessed the consequences of the occurrence of each risk factor in both regions (Figure 5). The variability of the values proposed for the magnitude of the effects induced by the occurrence of each risk factor, within each group, is very significant. In fact, 3 or 4 different levels of magnitude are normally used by the different experts to quantify the consequences of a risk factor in a given region. Moreover, senior academics needed to employ all 5 levels of magnitude proposed in the scale when assessing the consequences of liquefaction risk L2 in both regions.
More than 25% of the junior experts and about 10% of the other experts were unable to quantify the effects of the occurrence of liquefaction risk L3. Because all the experts managed to quantify the consequences of the other risks, this suggests that risk L3 is less well understood than the others, especially by junior academics.
Table 4 presents the averages of the consequences of occurrence of each risk predicted by the three groups of experts for each region.

Table 4. Averages of the consequences(1) of liquefaction risks predicted by the experts.

Risk  Region  Industry      Junior academic  Senior academic  Global
L1    1       3.0 (s)       3.4 (s/se)       3.0 (s)          3.1 (s)
L1    2       3.5 (s/se)    3.6 (s/se)       3.3 (s/se)       3.5 (s/se)
L2    1       4.0 (se)      3.1 (s)          3.3 (s/se)       3.5 (s/se)
L2    2       3.9 (se)      3.9 (se)         3.4 (s/se)       3.7 (s/se)
L3    1       4.0 (se)      3.4 (s/se)       4.0 (se)         3.8 (se)
L3    2       4.3 (se/vse)  3.2 (s)          4.1 (se)         3.9 (se)

(1) Measured on the following scale: 1 – very little significant (vls); 2 – little significant (ls); 3 – significant (s); 4 – serious (se); 5 – very serious (vse).

In view of the scale used to assess the consequences, ranging from 1 (very little significant) to 5 (very serious), it can be observed that:
– the consequences predicted for each risk vary between a minimum value of 3.0, indicating significant effects, and a maximum value of 4.3, indicative of serious to very serious effects;
– the consequences of the three risks considered are normally expected to be more severe in region 2 than in region 1, which suggests that the experts focused more on the seismicity of the region than on the geological and geotechnical conditions of the foundation;
– in region 1, all the groups believe that risk L3 can cause some of the most serious consequences, although industry and junior academic experts also believe the effects of L3 are comparable to those of risks L2 and L1, respectively;
– in region 2, industry and senior academic experts identify risk L3 as the most serious in terms of consequences, whereas junior academic experts predict that this risk would have the least serious consequences;
– industry experts tend to predict more serious consequences than senior academics in most cases, although the difference is more significant when the probability of occurrence is evaluated;
– industry experts and senior academics tend to produce average predictions of consequences that are closer to the global average values, while the average estimates of junior academics vary more notably and randomly around the average, though not as much as for the probability of occurrence.
The results suggest that, as with the prediction of the probability of occurrence, industry experts tend to be more pessimistic than senior academics when evaluating the consequences of the occurrence of the risks. Also, the larger variation of the values proposed by junior academics may indicate a poorer understanding of some of the risks. The fact that larger consequences are expected in region 2 suggests that experts believe that, if the risks occur, the magnitude of the effects will be governed by the scale of the seismic motion and not by the geological and geotechnical conditions of the site. Experts also seem to fear the occurrence of derailments, namely those caused by the vibration of the ground surface.

4.6 Soundness of the experts' opinions
One of the main difficulties in assessing the soundness of the opinions expressed by experts in risk analyses, namely those involving geotechnical and seismic hazards, is the scarcity of reliable data available for verification. Still, a qualitative assessment of some of the values produced by the experts can be made based on the results presented. Comparing the estimates produced by the different groups, it is apparent that:
– estimates produced by senior experts exhibit less variation than those of junior experts, which suggests that the level of experience of the expert is an important factor;
– predictions of industry experts are more pessimistic than those of academic experts with the same level of experience, both with respect to the probability of occurrence and to the consequences, which indicates that the type of experience of the experts influences their opinions;
– because many earthquakes may cause ground deformations without causing failure, the fact that senior experts suggest that L2 is at least as likely to occur as L1 seems incongruent;
– the reason why the consequences of the risks tend to be larger in region 2, where the ground foundation is less susceptible to liquefaction, is unclear;
– the fact that the consequences of risk L3 differ between regions 1 and 2, as proposed by all the groups of experts, is not straightforwardly explicable.
Similar analyses can also be performed based on the absolute values proposed by the experts for the probability of occurrence of each risk. As discussed in Section 3.3, the annual probability of a liquefaction risk materializing in a liquefiable deposit varies from about 10−4 to 10−3 for region 1 and is about 10−2 for region 2. Thus, the range of annual probabilities of occurrence obtained for region 2, from 1.3 × 10−3 to 5.2 × 10−3, seems qualitatively reasonable, since the local deposits are fairly unsusceptible to liquefaction. For region 1, the annual probabilities of occurrence, varying between 0.5 × 10−3 and 5.6 × 10−3, seem overestimated, as they exceed the expected probability of liquefaction occurrence at the site.
5 CONCLUSIONS

Risk analyses considering the effects of geotechnical and seismic hazards are extremely demanding, especially with respect to the characterization of the risk factors in terms of probability of occurrence and magnitude of the effects. Suitable risk analyses are required to justify public investment in large transportation infrastructures, such as the HSR planned in Portugal. Due to the characteristics of the territory, the geotechnical and seismic aspects of risk are of crucial importance, which motivated research on this subject.
The study presented in this paper considers the use of experts' opinions as a tool to provide data on the probability of occurrence and on the magnitude of the consequences to the HSR of liquefaction risks. Based on a blind written survey of three different types of experts (junior and senior academics and industry experts), it can be concluded that experts' opinions are a useful tool to characterize seismic and geotechnical risks. However, some issues must not be ignored. The level and type of experience of the experts may have a significant impact on the data obtained: experienced experts yield less variability, and industry members are more pessimistic than academics. Also, some of the values proposed are incongruent. In order to overcome these problems, obtain less biased data and help clarify possible doubts of the experts, joint meetings may contribute to improving the use of experts' opinions in risk analyses.

ACKNOWLEDGEMENTS

This publication was made possible by the generous support of the Government of Portugal through the Portuguese Foundation for International Cooperation in Science, Technology and Higher Education and was undertaken in the MIT-Portugal Program.

REFERENCES

Campos Costa, A. 2008. Fundaments of probabilistic risk analysis – Methods and applications, Advanced Course on Risk Management in Civil Engineering, LNEC, Lisbon.
Chester, D.K. 2008. The effects of the 1755 Lisbon earthquake and tsunami on the Algarve region, southern Portugal, Geography, 93 (2): 78–90.
Christian, J.T. 2004. Geotechnical Engineering Reliability: How Well Do We Know What We Are Doing? – Karl Terzaghi Lecture, ASCE Journal of Geotechnical and Geoenvironmental Engineering, 130 (10): 985–1003.
Coelho, P.A.L.F. 2007. In situ densification as a liquefaction resistance measure for bridge foundations, PhD Thesis, Cambridge University, UK.
Coelho, P.A.L.F., Haigh, S.K. & Madabhushi, S.P.G. 2006. Effects of successive earthquakes on saturated deposits of sand, International Conference on Physical Modelling in Geotechnical Engineering, Hong Kong.
Einstein, H., Chiaverio, F. & Koppel, U. 1995. Risk analysis for the Alder tunnel, Int. Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstracts, 32(4).
Einstein, H.H., Dudt, J.-P., Halabe, V.B. & Descoeudres, F. 1992. Decision Aids in Tunneling – Principle and Practical Application, Swiss Federal Office of Transportation, Project AlpTransit.
Faber, M. 2008. Management of structural risks, Advanced Course on Risk Management in Civil Engineering, LNEC, Lisbon.
Gutscher, M.-A. 2006. The great Lisbon earthquake and tsunami of 1755: lessons from the recent Sumatra earthquakes and possible link to Plato's Atlantis, European Review, 14(2): 181–191, Cambridge University Press.
Hartford, D. 2008. Science-based management of civil asset risk – Objectives, Principles, Process and Analytical Techniques, Advanced Course on Risk Management in Civil Engineering, LNEC, Lisbon.
INE 1991. XIIth General Census of the Population, National Institute of Statistics, Portugal.
Jorge, C. & Vieira, A.M. 1997. Liquefaction potential assessment – application to the Portuguese territory and to the town of Setúbal, Seismic Behaviour of Ground and Geotechnical Structures, Sêco e Pinto (ed.), Balkema: 33–43.
LNEC 2008. Study of alternative locations for the New Lisbon Airport, Report LNEC 2/2008-DT, LNEC (in Portuguese).
Pardal, L.F. 2005. The Portuguese Development of High-Speed Railway, www.rave.pt (in Portuguese).
Scawthorn, C. 2008. Earthquake risks, Advanced Course on Risk Management in Civil Engineering, LNEC, Lisbon.
Sousa Oliveira, C., Roca, A. & Goula, X. 2006. Assessing and Managing Earthquake Risk: Geo-scientific and Engineering Knowledge for Earthquake Risk Mitigation: Developments, Tools, Techniques, Springer.
Sussman, J.M. & Shimamura, M. 2007. R&D Symposium roundtable – Cooperation with MIT, JR EAST Technical Review No. 10, Japan.
Teves-Costa, P., Borges, J.F., Rio, I., Ribeiro, R. & Marreiros, C. 1999. Source Parameters of Old Earthquakes: Semi-Automatic Digitization of Analog Records and Seismic Moment Assessment, Natural Hazards Journal, 19 (2-3): 205–220.
Watanabe, G., Lee, T.Y., Nagata, S., Sakellarai, D. & Kawashima, K. 2004. Preliminary investigation on the damage of transportation facilities in the 2004 Niigata Chuetsu Earthquake, Technical Report, Kawashima Research Group, Japan.
Youd, T.L. 1984. Recurrence of liquefaction at the same site, 8th WCEE, USA, Vol. 3: 231–238.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Field characterization of patterns of random crack networks on vertical and horizontal soil surfaces J.H. Li & L.M. Zhang Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
ABSTRACT: Cracks are common in natural and engineered soils and provide preferential pathways for water infiltration into soils. Randomly distributed cracks can induce randomness and anisotropy in permeability of soils. The objectives of this paper are to study the crack pattern and the probability distribution of the orientation of cracks on two soil surfaces in the natural environment. Field tests were conducted to study the desiccation crack pattern by fully exposing the soils to rainfall and evaporation. A digital imaging method was used to measure the cracks. The patterns of the cracks on a vertical soil surface and on a horizontal soil surface are investigated separately. The cracks at the survey site form an inter-connected columnar structure. The cracks on the vertical soil surface are mainly along the vertical direction and the horizontal direction. Therefore, the cracks are grouped into two crack sets: a vertical crack set and a horizontal crack set. The probability distributions of the orientations of the two crack sets follow normal distributions. On the other hand, the orientations of the cracks on the horizontal soil surface are relatively uniform in each direction and follow a uniform distribution.

1 INTRODUCTION
Cracks are common in natural and engineered soils. The presence of cracks may decrease the stability of slopes through three effects. First, cracks provide preferential pathways for water flow, which significantly increase the hydraulic conductivity of slope soils and, in turn, the pore water pressures in them. As a result, the shear strength of the slope soils is reduced. Second, water-filled cracks exert an additional driving force on the slope. Third, cracks can form part of the critical slip surface that has no shear strength, which leads to a reduced safety margin.
Cracks in soils can be induced by desiccation, temperature changes, uneven settlement, construction, etc. Crack patterns may differ for cracks of different origins. It is essentially impossible to deterministically characterize the pattern and geometry of a complex network of cracks. Therefore, it has become more common to describe the crack network stochastically (Long et al. 1982). The flow of water through a cracked soil is strongly influenced by the randomly distributed cracks (Morris et al. 1992; Konrad & Ayad 1997; Perret et al. 1999). The randomness in the geometrical parameters of cracks can induce randomness and anisotropy in the permeability tensor (Li 2008; Li et al. 2009). In particular, the anisotropy of permeability is significantly affected by the orientation of the cracks (Li & Zhang 2007). Therefore, the study of the crack pattern and the probability distribution of crack orientation provides a basis for analyzing the randomness and anisotropy of permeability induced by random crack networks. The objectives of this paper are to characterize the patterns of desiccation cracks and to study the probability distributions of the orientation of the cracks on a vertical soil surface and on a horizontal soil surface through a field study.
The survey site is located at the foot of the Huangshan slope beside Nanxu Avenue in Zhenjiang, on the lower reaches of the Yangtze River, China. The average annual rainfall at the survey site is 1088 mm and the average annual duration of sunshine is 2001 hours. The survey site was excavated and backfilled in 2002. In 2003, a landslide occurred after a heavy rainstorm, which led to serious traffic congestion. The densely distributed cracks on the slope are believed to have played a crucial role in the landslide (Xiao et al. 2004). A crack survey, double-ring infiltration tests, and two-stage infiltration tests were conducted to study the geometry of the cracks and the hydraulic properties of the cracked soil.
A vertical soil surface and a horizontal soil surface (as shown in Fig. 1) were selected for the field characterization of cracks. Fig. 1 also shows the ground conditions of the test site. The particle size distribution of the shallow soil is shown in Fig. 2. The predominant particles are silt (34%) and clay (35%), with some sand (23%) and gravel (8%). Typical geotechnical properties of the soil, including dry unit weight, saturated water content, liquid limit, plastic limit, shrinkage limit, and expansive index, are given in Table 1. The water content in the field was in the range of 13% to 30%. The soil is classified as a silty clay of medium expansivity in accordance with the criteria in ASTM standard D4829-03 (ASTM 2003). An expansive soil exhibits significant swelling and shrinking upon wetting and drying.
Figure 1. Ground conditions at the survey site.
Figure 2. Particle size distribution of the silty clay.

Table 1. Properties of the silty clay at the survey site.

Soil property                Value
Dry unit weight (kN/m3)      16.7
Saturated water content (%)  30.0
Liquid limit (%)             40.4
Plastic limit (%)            15.6
Shrinkage limit (%)          13
Expansive index              56
Consequently, densely distributed cracks develop at shallow depths.

2 FIELD CHARACTERIZATION OF RANDOM CRACK NETWORKS
A field survey was conducted to identify the cracks on a vertical soil surface and on a horizontal soil surface that were exposed to rainfall and evaporation. A digital imaging method was used to log the cracks. A large survey area was partitioned into many smaller plots, each about 180 mm by 210 mm in size. Snapshots of each plot were taken periodically. Each snapshot was taken with minimal distortion by fixing the camera on a tripod and using two scales placed around the plot as a reference. The camera had a 5-megapixel capacity and the effective pixels were about 2560 by 1920. The resolution of each snapshot, the ratio between the picture size and the effective pixels, was about 0.09 mm. Each snapshot was imported into AutoCAD and scaled to its full size based on the reference scales around the picture. The snapshots of an area of interest were then assembled; the combination of the snapshots gave a complete picture of the cracks in the large survey area. Figure 3 shows the random crack network on the vertical soil surface, and Fig. 4 shows the random crack network on the horizontal soil surface. Each picture covers a 530 mm by 820 mm area and comprises 12 snapshots.
The cracks formed a complex, three-dimensional columnar structure, although only the parameters of the surface cracks were measured in this study. Mizuguchi et al. (2005) observed a similar columnar crack structure in an 11.6 mm thick starch-water mixture using a resin solidification method.
A continuous opening along a certain direction in the soil is defined as a crack. The crack is approximated by a straight line as shown in Fig. 5. The geometry of the crack can be described by its length, orientation, location, and aperture (as shown in Fig. 5). The length of the crack is defined as the length of the straight line that approximates the crack. The orientation of the crack is defined as the angle between the crack and the global coordinates. The crack length and crack orientation were determined with the aid of AutoCAD in this study.
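The quantities taken off the scaled drawings reduce to elementary geometry. A minimal sketch, assuming each digitized crack is stored as the two end points of its straight-line approximation:

```python
import math

# Length and orientation of a crack approximated by the straight line
# between its two digitized end points (coordinates in metres).

def crack_length_and_orientation(x1, y1, x2, y2):
    length = math.hypot(x2 - x1, y2 - y1)
    # Angle between the crack and the horizontal axis of the global
    # coordinates, in degrees.
    orientation = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return length, orientation

# Example: a near-vertical crack about 0.25 m long
print(crack_length_and_orientation(0.10, 0.00, 0.11, 0.25))
```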
Figure 3. A digital image of the crack network on the vertical soil surface.

Figure 4. A digital image of the crack network on the horizontal soil surface.

3 PATTERN OF CRACKS ON THE VERTICAL AND HORIZONTAL SOIL SURFACES
The cracks on the two soil surfaces are approximated by straight lines to study the crack pattern. Fig. 6 shows the crack structure on the vertical soil surface obtained from the snapshots shown in Fig. 3. Fig. 7 shows the crack structure on the horizontal soil surface obtained from the snapshots shown in Fig. 4. Most cracks in the sample area tend to be connected and form triangles, quadrangles, pentagons or hexagons on both the vertical soil surface and the horizontal soil surface. These polygons are interconnected and form a random crack network.
The cracks form a polygonal network due to the influence of both the stress field and the moisture field in the soils. Because evaporation from crack walls is more rapid than that from the bulk soil, the water content gradient is large at the ends and turning points of existing cracks. As a result, the stresses at the ends and turning points are large and the branching of cracks is most likely to occur at these points. The new cracks connect with the existing cracks and form a polygonal network.
There are 106 cracks in the survey area on the vertical soil surface shown in Fig. 6. These cracks form one triangle, twelve quadrangles, twelve pentagons, and three hexagons. There are 903 cracks in the survey area on the horizontal soil surface shown in Fig. 7. These cracks form nine triangles, 74 quadrangles, 96 pentagons, and 32 hexagons. The distributions of the polygons along the x and y directions are relatively uniform. The number of polygons in the middle of the plot is larger than that near the edges of the plot, because many cracks near the edge do not form complete polygons. The interconnected polygonal structure of cracks offers preferential pathways for water flow through the cracked soils.

Figure 5. Approximation of a crack.

Figure 6. Crack structure on the vertical soil surface (unit: m).

Figure 7. Crack structure on the horizontal soil surface (unit: m).

4 ORIENTATION OF CRACKS ON THE VERTICAL AND HORIZONTAL SOIL SURFACES
Cracks on the horizontal soil surface are distributed uniformly along the x and y directions, as shown in Fig. 7. These cracks developed mainly due to desiccation of the soil in the relatively homogeneous horizontal ground. The soil water evaporates uniformly from the soil surface. In the process, the soil suction increases and the soil volume decreases, which induces tensile stresses in the surface soils (Fredlund & Rahardjo 1993). When the soil properties are isotropic in all horizontal directions, the tensile stresses are identical in all horizontal directions. When the tensile stresses become larger than the tensile strength of the soil, cracks develop. The isotropic stress field results in the uniformly distributed cracks on the horizontal soil ground.
Cracks on the vertical soil surface are mainly distributed in two directions: the horizontal direction and the vertical direction (as shown in Fig. 6). Cracks in different directions result from different stress states. An isotropic incremental tensile stress field is present on the vertical soil surface due to isotropic drying in the vertical plane, which is similar to the stress field in the horizontal soil ground. However, the gravity of the soil exerts an additional anisotropic stress field, with the major principal stress in the vertical direction. The superposition of the isotropic tensile stress field and the anisotropic gravity stress field gives an anisotropic resultant stress field prior to crack formation. The direction of the major principal stress is consistent with the direction of gravity, that is, the vertical direction. As a result, the horizontal stress must be the minor principal stress and tends to be tensile. When the tensile stress is larger than the tensile strength of the soil, cracks appear first along the vertical direction. Hence, the cracks in the vertical direction form mainly due to the combined action of the two stress fields resulting from desiccation and the gravity of the soil.
The cracks in the horizontal direction develop mainly due to desiccation of the soil. When the desiccation-induced tensile stress is larger than the tensile strength of the soil, new cracks form and tend to be perpendicular to the existing cracks. This phenomenon can be interpreted using a spring network model (Vogel et al. 2005). In this model, soil is represented by a two-dimensional triangular network of simple Hookean springs. When the springs break, a crack forms. Perpendicular to a crack the neighboring springs are relaxed, while parallel to a crack the tension is slightly higher compared with the fully connected springs. Consequently, if a crack tip approaches another existing crack, the junction of the two cracks will most likely be at 90°. Therefore, the new crack set will be perpendicular to the existing vertical cracks and mainly in the horizontal direction.
The cracks on the vertical soil surface can thus be grouped into two crack sets: a vertical crack set and a horizontal crack set. The average orientation of the vertical crack set is about 87°; the average orientation of the horizontal crack set is about −4°.

Figure 8. Histogram and assumed normal distribution for the orientation of the cracks on the vertical soil surface: (a) the vertical crack set; (b) the horizontal crack set.

Figure 9. Histogram and assumed distributions for the orientation of the cracks on the horizontal soil surface.

Mahtab et al. (1972) developed a numerical method for analyzing clusters of orientation data. It was found that rock fracture orientations followed a normal distribution with a mean value in the prevailing direction. In this study, the normal distribution is assumed to fit the observed data of crack orientation on the vertical soil surface. Figure 8 shows the histograms of crack orientation for the two crack sets on the vertical soil surface. A chi-square test, a widely used goodness-of-fit test (Ang and Tang 2007), is used to check the goodness of fit of the assumed normal distributions for the two crack sets. The vertical crack set has 19 intervals; therefore the degrees of freedom are f = 19 − 2 = 17. The critical value, c1−α,f, is 40.8 for the normal distribution at a significance level of 0.001, and the observed difference cs is 8.1. The observed difference is smaller than the critical value. Therefore, the normal distribution can be used to model the vertical crack set on the vertical soil surface. Similarly, for the horizontal crack set, the observed difference, 11.2, is
much smaller than the critical value, 40.8, at the 0.001 significance level. According to the test, the normal distribution is valid at the 0.001 significance level for modeling the probability distributions of the orientations of the cracks on the vertical soil surface. Normally distributed orientations are also observed for fractures in rocks (National Research Council 1996).
Figure 9 shows the histogram of the orientations of the cracks on the horizontal soil surface. Both a normal distribution and a uniform distribution are assumed to model the orientation of the cracks. The observed difference for the normal distribution is 155, which is larger than the critical value, 66.6, at a significance level of 0.001. This indicates that the normal distribution cannot be used to model the orientation of the cracks on the horizontal soil surface. The chi-square test results for the uniform distribution are listed in Table 2. The results indicate that the uniform distribution is appropriate at the 0.001 significance level. The average orientation of the cracks is about 90°. The relatively uniformly distributed crack orientations result from the isotropic drying of the relatively homogeneous soils in the two horizontal directions. The uniform distribution of crack orientation gives rise to uniformly distributed polygons.
The difference between the probability distributions of the orientation of the cracks on the two soil surfaces may result from the different stress fields prior to crack formation, as discussed in the previous sections. On the vertical soil surface, a number of approximately parallel cracks are present and are characterized as two crack sets. Each set of cracks has a dominant orientation, and the deviations from the dominant direction may be normally distributed. Instead of obvious crack sets, uniformly distributed crack polygons are present in the horizontal ground. Desiccation cracks disperse evenly and result in uniformly distributed crack orientations on the horizontal soil surface.
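The goodness-of-fit check can be reproduced in a few lines of code. The sketch below is an illustration, not the authors' original computation; it follows the paper's convention f = (number of intervals) − 2 for the fitted normal distribution and recovers the critical values quoted in Table 2 below.

```python
import numpy as np
from scipy import stats

# Chi-square goodness-of-fit check for binned crack orientations.

def chi_square_statistic(observed_counts, expected_counts):
    """Observed difference cs between binned data and a fitted model."""
    o = np.asarray(observed_counts, dtype=float)
    e = np.asarray(expected_counts, dtype=float)
    return float(np.sum((o - e) ** 2 / e))

def critical_value(dof, alpha=0.001):
    """Critical value c_(1-alpha),f of the chi-square distribution."""
    return float(stats.chi2.ppf(1.0 - alpha, dof))

print(critical_value(19 - 2))  # ~40.8 for the 19-interval normal fit
print(critical_value(35))      # ~66.6 for the horizontal-surface fits
# A fit is accepted when chi_square_statistic(...) < critical_value(f).
```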
Table 2. Results of the chi-square test for the orientation of cracks on the vertical soil surface and on the horizontal soil surface.

Location                          Assumed distribution  d.o.f. f  Critical value c1−0.001,f  Observed difference cs
Vertical surface, vertical set    Normal                17        40.8                       11.2
Vertical surface, horizontal set  Normal                17        40.8                       8.1
Horizontal surface                Uniform               35        66.6                       40.8
Horizontal surface                Normal                35        66.6                       155.0

5 CONCLUSIONS

A field survey was conducted and a digital imaging method was used to document and analyze the cracks
on a vertical soil surface and a horizontal soil surface. The crack pattern and the probability distributions of the orientation of the cracks on these two soil surfaces were studied.
Most cracks tend to be connected and form triangles, quadrangles, pentagons or hexagons on both the vertical soil surface and the horizontal soil surface. These polygons are interconnected and form a random crack network. The interconnected polygonal structure of cracks provides preferential pathways for water flow.
The isotropic drying of the horizontal soil ground results in an isotropic stress field in the horizontal directions. The uniform stress field leads to uniformly distributed cracks and crack polygons on the horizontal soil surface. Hence, the orientation of the cracks on the horizontal soil surface follows a uniform distribution.
On the vertical surface, an additional anisotropic stress field induced by gravity acts besides the isotropic stress field due to desiccation. Cracks on the vertical soil ground develop under the effects of the resultant anisotropic stress field, with the major and minor principal stresses along the vertical and horizontal directions, respectively. Therefore, the cracks on the vertical soil surface are characterized by two crack sets: a vertical crack set and a horizontal crack set. The orientations of the two sets of cracks follow normal distributions.

ACKNOWLEDGEMENTS

This research was substantially supported by the Research Grants Council of the Hong Kong Special Administrative Region (Project No. 622206). The authors would like to thank Professors Y. Wang and J. Wei, Mr. C. P. Kwong, Mr. D. Feng, and Mr. H. H. Chen for their kind help during the field survey.

REFERENCES
Ang, A.H.S., and Tang, W.H. 2007. Probability concepts in engineering: emphasis on applications in civil & environmental engineering. John Wiley & Sons, New York.
ASTM 2003. Designation D4829-03: Standard test method for expansion index of soils. ASTM Book of Standards, Vol. 04.08, West Conshohocken, PA.
Fredlund, D.G., and Rahardjo, H. 1993. Soil mechanics for unsaturated soils. John Wiley & Sons, New York.
Konrad, J.M., and Ayad, R. 1997. Desiccation of a sensitive clay: field experimental observations. Can. Geotech. J., 34: 929–942.
Li, J.H. 2009. Field experimental study and numerical simulation of seepage in saturated/unsaturated cracked soil. PhD thesis, The Hong Kong University of Science and Technology, Hong Kong.
Li, J.H., and Zhang, L.M. 2007. Water flow through random crack network in saturated soil. Geotechnical Special Publications, 170: 205–214, ASCE, Reston, Va.
Li, J.H., Zhang, L.M., Wang, Y., and Fredlund, D.G. 2009. Permeability tensor and REV of saturated cracked soil. Canadian Geotechnical Journal, in press.
Long, J.C.S., Remer, J.S., Wilson, C.R., and Witherspoon, P.A. 1982. Porous media equivalents for networks of discontinuous fractures. Water Resources Research, 18(3): 645–658.
Mahtab, M.A., Bolstad, D.D., Alldredge, J.R., and Shanley, R.J. 1972. Analysis of fracture orientations for input to structural models of discontinuous rock. U.S. Bur. Mines, Rep. Invest. RI (7669).
Mizuguchi, T., Nishimoto, A., Kitsunezaki, S., Yamazaki, Y., and Aoki, I. 2005. Directional crack propagation of granular water systems. Physical Review E, 71(5): 056122.
Morris, P.H., Graham, J., and Williams, D.J. 1992. Cracking in drying soils. Can. Geotech. J., 29(2): 263–277.
National Research Council 1996. Rock fractures and fluid flow: contemporary understanding and applications. Washington, D.C.
Perret, J., Prasher, S.O., Kantzas, A., and Langford, C. 1999. Three-dimensional quantification of macropore networks in undisturbed soil cores. Soil Sci. Soc. Am. J., 63(6): 1530–1543.
Vogel, H.J., Hoffmann, H., Leopold, A., and Roth, K. 2005. Studies of crack dynamics in clay soil: A physically based model for crack formation. Geoderma, 125: 213–223.
Xiao, G.F., Chen, C.X., Lin, T., and Liu, C.H. 2004. Stability analysis of clayey gentle slopes considering variation of water level. Rock and Soil Mech., 25(11): 1754–1760.
Geotechnical Risk and Safety – Honjo et al. (eds) © 2009 Taylor & Francis Group, London, ISBN 978-0-415-49874-6
Stochastic methods for safety assessment of a European pilot site: Scheldt M. Rajabalinejad, P.H.A.J.M. van Gelder & J.K. Vrijling Hydraulic Engineering Department, Faculty of Civil Engineering, TUDelft, the Netherlands
ABSTRACT: In this study, we compare the methods applied to the sliding mechanism in the reliability analysis of the pilot site Scheldt. These methods are based on the Bishop technique combined with random field theory (Vanmarcke 1983). The finite element method is used as a benchmark to evaluate the outcome of the Bishop method integrated with random field theory, which takes into account the variation of the soil parameters. Since sliding is a key mechanism in the safety assessment of dike rings, this comparison is important.

1 INTRODUCTION

1.1 Pilot site Scheldt
Flood defence systems are becoming increasingly important structures to engineers, inhabitants, and decision makers. The FloodSite project(1) is evidence of a collective attempt of 37 European countries to reduce the risk of flooding in their homelands. Several pilot sites are defined in this project to apply and evaluate new techniques: the River Elbe Basin in Germany, the River Thames Estuary in England, and the River Scheldt Estuary in the Netherlands.
The 'Scheldt' pilot site is a typical North Sea area protected against coastal flooding by means of different flood defence structures such as forelands, sea dikes, dunes and other constructions. The Western Scheldt forms the entrance to the harbor of Antwerp (Belgium). Water levels are influenced by wind surges on the North Sea, as well as by the river discharges of the Scheldt. There are four dike ring areas along the Western Scheldt. These dike rings are numbered from 29 to 32, as shown in Figure 1. A plan view of the area is presented in Figure 2.
The reliability assessment of this site is an important task, which is carried out by utilizing limit state equations (LSEs). A considerable number of LSEs are addressed in the report of Task 4 of FloodSite (Allsop 2007). As a matter of fact, the outcomes of Task 4 and Task 7 (Van Gelder 2008) present the building blocks of this enormous research project. In these studies, an attempt has been made to define and apply LSEs for the different failure modes of a typical flood defence structure, and most of the presented LSEs can easily be utilized for safety assessment. Among the different failure modes, sliding is one of the important and influential modes, which can be modeled in two classes of approaches: analytical approaches (AA) and finite elements (FE). Since the reliability assessment of FE models is a complicated and time-consuming process (Rajabalinejad, Kanning et al. 2007), a few analytical methods are programmed in MProstab. This program is able to communicate with PC-Ring, a reliability tool which takes into account different parameters to assess the overall safety of a dike ring. Sliding, an important failure mechanism in the safety assessment of dikes, must therefore be watched carefully. This paper presents a part of the observations on the correspondence of the currently implemented method with a finite element model using the Plaxis code.

(1) www.floodsite.net (2004–2008). Integrated Flood Risk Analysis and Management Methodologies.

Figure 1. Dike ring areas in the southern part of the province of Zeeland; the study area is counted as number 32 (Van Gelder et al., 2008).
2 THE MODEL

The dike ring area Zeeuws-Vlaanderen, counted as Dike Ring 32, was initially divided into 287 dike
Figure 2. Plan view of the Scheldt estuary; the white area, called "Zeeuwsch Vlaanderen", refers to Dike Ring 32, Scheldt estuary, the Netherlands (Google Earth, 2007).
Figure 3. The selected cross sections of Dike Ring 32 in Zeeuws-Vlaanderen, Scheldt, the Netherlands.
sections. Then a selection was made: in total, 33 dike sections and 4 dune sections are deemed to be representative of the whole dike ring. This number excludes the 14 water-retaining structures. The location of the selected dike sections is shown in Figure 3. Because calculating the sliding mechanism is an elaborate process, this calculation was not performed for all sections. The district water board made a selection of 7 cross-section profiles (out of a series of 40 that were used for the testing) during the process of schematization (Van Gelder et al., 2008). As a result, 7 profiles have been selected to calculate the probabilities of failure for the sliding mechanism with MProstab. In this study, two of these sections are briefly presented; a more complete report is given in (Rajabalinejad and van Gelder 2007).
2.1 Section ALS166B

A typical cross section of the dike ring in Zeeland is presented in Figure 4. This section is located in the north-east part of Ring 32, as indicated in Figure 3. The available data for this area can be divided into the two categories of geometry and geotechnics. Figure 4 shows the geometry of this cross section, in which all the soil layers are horizontal. The material properties of this dike are presented in Table 4-2 of the report of Rajabalinejad and van Gelder (2007), where the material numbers are correlated with the numbers of the soil layers in the figure. The main body of the dike is sand, at the top of the figure, and its toe is made of clay. The main material of the foundation is also sand. However, there are some layers of peat (Veen) and loose clay (Slapklei) under the body of the dike. The variation of the soil properties is described by a mean value, a standard deviation, and a distribution type, which are presented in the aforementioned report.

Figure 4. A typical cross section of Dike Ring 32 in the Scheldt pilot site, named ALS166B, depicted by MStab.

Figure 5. A typical cross section of Dike Ring 32 in the Scheldt pilot site, named EMMA118, depicted by MStab.

2.2 Section EMMA118

Figure 5 shows the cross section EMMA118. The location of this section is illustrated in Figure 3. This section is also a sandy dike founded on sand, peat, and loose clay. The downstream toe of the dike is made of clay to prevent erosion and make the dike less permeable. The detailed properties of the materials are presented in Table 5-2 of (Rajabalinejad and van Gelder 2007). The mean value, standard deviation, and distribution type of each variable are also presented in that table.

3 THE THEORY

Apart from Section 3.1, which presents an introduction to soil variability, the rest of this paper is consistent with the user manual of MProstab (Calle 1994).

3.1 Lognormal distribution

Here we apply a lognormal probability distribution function (PDF) in combination with an autocorrelation function to define the stochastic model for both the cohesion (c) and the tangent of the friction angle (tan φ). There is also the possibility of assigning a normal PDF to the soil parameters if there is a low probability of generating values less than zero; however, given a normal PDF there is always a possibility of getting negative values. A normal PDF of a random variable, X, is defined as

f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right]

where µ and σ are respectively the mean and standard deviation of its PDF, and x is a scalar value called a random sample. If the random sample x from a normal random variable X is replaced by ln(x), a lognormal distribution results. The PDF of the lognormal distribution is

f_X(x) = \frac{1}{x\,\sigma_{\ln X}\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{\ln x-\mu_{\ln X}}{\sigma_{\ln X}}\right)^{2}\right], \quad x > 0

where

\sigma_{\ln X}^{2} = \ln\left(1+\frac{\sigma_X^{2}}{\mu_X^{2}}\right)

and

\mu_{\ln X} = \ln\mu_X - \frac{1}{2}\sigma_{\ln X}^{2}

A lognormal distribution always yields positive values. The PDFs of the normal and lognormal distributions are depicted in Figure 6 for a mean value of 100 and a standard deviation of 10. The lognormal PDF, shown by the dashed line, is an asymmetrical distribution, and the generated random numbers differ from those of the normal distribution. However, the difference is small for small ratios of the standard deviation to the mean value, σ[x]/µ[x]. A good comparison between the normal and lognormal PDF is given in (Griffiths and Fenton 2007).

Figure 6. A comparison between the normal and lognormal PDF, where µX = 100 and σX = 10.
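The moment relations above are easily coded. A minimal sketch, assuming only the standard lognormal identities given in the equations of this section:

```python
import math

# Parameters of ln(X) from the mean and standard deviation of a lognormal X.

def lognormal_parameters(mean_x: float, std_x: float):
    var_ln = math.log(1.0 + (std_x / mean_x) ** 2)  # sigma_lnX squared
    mu_ln = math.log(mean_x) - 0.5 * var_ln         # mu_lnX
    return mu_ln, math.sqrt(var_ln)

# The case plotted in Figure 6: mean 100, standard deviation 10
mu_ln, sigma_ln = lognormal_parameters(100.0, 10.0)
print(mu_ln, sigma_ln)  # ~4.600 and ~0.0997: nearly normal for small sigma/mu
```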
MProstab describes the pattern of fluctuation within a soil layer as a weak stationary. Weak stationary means that for any two spatial points (x1 , y1 , z1 ) and (x2 , y2 , z2 ), where x and z are two horizontal and y the
427
vertical spatial coordinates, the (marginal) probability distributions are identical and that the correlation among the random variables is only a function of the distance between these points. The selected autocorrelation function, which expresses the correlation among any two points as a function of the distance lags, is of a modified Gaussian type:
these parameters. The larger the number of tests, the smaller the coefficient of correlation among estimates of expected mean values of the parameters; correlation can be taken into account in MProstab. The assumption of zero correlation, however, is slightly conservative (Calle, 1990). 3.3
The autocorrelation function is a function of the distance between two points in different directions of x, y, and z as
Dh and Dv are the so-called correlation lengths, which are related to the scales of fluctuation as introduced by (Vanmarcke, 1977). Typical values of Dh range between 25 m and 100 m and values of Dv may range between 0.1 m and 3 m (Calle 1994). The parameter α is a variance parameter, which equals the ratio of local variance, i.e. the variance of fluctuations relative to the mean value along a vertical line, and the total variance, which is the variance relative to the mean value taken over the whole deposit space.
Bishop technique
Most of the problems of slope stability are modeled on the base of physical equilibrium between parameters; yet they are statistically undetermined. As a result, some simplifying assumptions are necessary, and under various assumptions, different methods have been developed. Some of the popular methods are Fellenius, Bishop, Janbu, and Spencer (Malkawi, Hassan et al. 2000). Here we consider the simplified Bishop model which satisfies only the overall moment and is applicable to a circular slip surface. For the Bishop model, the factor of safety is directly obtained; this method assumes that the inter slice forces are parallel to the base of each slice, thus they can be neglected (Figure 7). On the base of the Bishop Method, a limit state equation can be defined as a difference between the total resistance and driving forces as
where the resistance and driving moments are respectively called as Mr and M0 . Another stochastic For α = 1 the autocorrelation function takes on the classical Gaussian form, which is often suggested in literature. It was found, however, that such type may be inconsistent with actual measurements. The αparameter enables consistent modeling of the presence of overall weak and strong locations within a layer. For fluctuations of cone resistance in clayey layers, αvalues ranging between 0.5 and 1.0 have been found (Calle 1994). Parameters of the probability distributions, i.e. expected mean values and standard deviations, are usually estimated on the basis of series of laboratory or in-situ test results. Since the number of samples is limited, the estimators are statistically biased. A bias estimator is a source of uncertainty, and Mprostab suggests to take this uncertainty into account by adjusting the field variance with a factor (n+1)/n and modifying the autocorrelation function into:
Figure 7. Schematic representation of the Bishop method, showing the forces considered on a sample slice.
Another stochastic parameter implemented in the model is the model factor q, which has expected value µq and standard deviation σq. As a result, the limit state equation becomes:
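Assuming the model factor multiplies the resisting moment (a reconstruction of the elided equation):
\[
Z = q\,M_r - M_0
\]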
The resistance moment is obtained by
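The elided expression is presumably the standard simplified-Bishop resistance moment; a common form, with slice index i, slice width b, base inclination θ and slip-circle radius R (the paper's exact symbols follow Figure 7 and may differ), is:
\[
M_r = R \sum_i \frac{c\,b_i + \left(W_i - u_i b_i\right)\tan\varphi}{\cos\theta_i\left(1 + \tan\theta_i\tan\varphi / F\right)}
\]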
where F is the safety factor, u the water pressure, c and ϕ the soil strength parameters, and W the weight of a slice; the remaining quantities are shown in Figure 7. The driving moment comes from the total weight of each slice:
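Correspondingly, the driving moment from the slice weights is commonly written as:
\[
M_0 = R \sum_i W_i \sin\theta_i
\]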
3.4 Finite element approach
Finite element analysis is a technique for solving partial differential equations by discretizing them in the space dimension. The discretization can be done with different element shapes, but always with a finite number of elements. A matrix is then constructed that relates the inputs to the outputs at specific points called nodes, and the resulting system of equations is solved so that inputs and outputs are related according to the underlying partial differential equation. The finite element models of the cross sections ALS166B and EMMA118 were made with Plaxis² and are presented in Figures 8 to 10. Their geometry follows the Mprostab model presented in Figure 4, and further information on the models is given in Table 1. The groundwater flow is assumed hydrostatic, to provide a closer comparison with the Bishop method.

Table 1. Number and type of elements and integration points used in the finite element analysis.

Type   Type of element   Type of integration   Total no.
Soil   6-node triangle   3-point Gauss         410
Considering c, ϕ, the variation of the water level u and the model factor as the stochastic variables, the first-order estimate of the limit state equation at the expectation point is obtained as
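A first-order Taylor expansion about the expectation point µX, matching the description in the text:
\[
Z_l \approx Z(\mu_X) + \nabla Z(\mu_X)^{T}\left(X - \mu_X\right)
\]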
where Zl is a random variable that may be characterized by its first and second moments, E[Zl] and σ²[Zl]; ∇ is the gradient and T denotes the transpose. Given the first and second moments of the first-order expansion of the limit state equation, the reliability index can be obtained as
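With E[Zl] and σ[Zl] as just defined, the standard first-order expression is:
\[
\beta = \frac{E[Z_l]}{\sigma[Z_l]}
\]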
Figure 8a. The finite element model of cross Section ALS166B, depicted by Plaxis.
which leads to the estimated probability of failure Pf = ΦN(−β), where ΦN(·) is the cumulative distribution function of the standard normal distribution, i.e. the probability that a standard normal random variable does not exceed a given value.

Figure 8b. The finite element model of cross section EMMA118, depicted by Plaxis.
Figure 8c. The finite element model of Section ALS166B, depicted by Plaxis.
² Plaxis B.V. (2002). Plaxis, finite element code for soil and rock analyses. R.B.J. Brinkgreve (ed.). Delft, Netherlands.
Figure 9. Three different results for the failure of cross section ALS166B: (a) the slip circle from the Bishop model with model factor 0.9; (b) the slip circle from the Bishop model with model factor 1.00; (c) the failure shape calculated by finite element analysis (Plaxis).
4 RESULTS AND DISCUSSION
Results are presented for the two typical cross sections of Dike Ring 32 that were introduced in Section 2. Some differences appear between the reliability assessments by the Bishop and finite element methods. This comparison is important because many reliability programs for dikes and coastal structures use the Bishop technique; for instance PC_Ring, developed to assess the total safety of dikes in the Netherlands, relies on it for the sliding mechanism (see Section 1). The finite element method gives more accurate results. The tan φ − c reduction technique is implemented in the finite element analysis: the strength parameters of the soil are gradually reduced until the model fails, which provides a relatively accurate and meaningful approach to failure. To compare the different failure shapes in section ALS166B, three illustrations are shown in Figure 9, presenting the Bishop results with and without the model factor together with the finite element output. The figures show the influence of the model factor on the failure shape: there is a considerable difference between Figure 9(a) and Figure 9(b) as a result of the different model factors. In fact, the slope in Figure 9(a) does not collapse completely but shows partial failure. It is therefore possible to have a big difference in
Figure 10. Three different results for the failure of cross section EMMA118: (a) the slip circle from the Bishop model with model factor 0.9; (b) the slip circle from the Bishop model with model factor 1.00; (c) the failure shape calculated by finite element analysis (Plaxis).
failure modes (slip circles) when using this method. This deserves particular attention where a loose foundation prevents a circular failure shape. When the failure shape is almost circular, however, the finite element and Bishop results correspond well, as shown in Figures 10(a) and 10(b). This kind of failure is usually expected when the body and foundation of the dike do not contain materials with strongly contrasting strength properties.
5 CONCLUSION
The finite element method is an accurate technique for the failure assessment of dikes; it determines the actual water table and water head at different points in a flood defence structure. The Bishop method, on the other hand, is a simplified method widely used in slope stability problems. It is less time-consuming, easier to apply, and needs fewer parameters than the finite element method; it can therefore be used for a first estimate of the probability of failure. Nevertheless, it is shown that the result of the simplified Bishop method may in some cases be far from the real failure mode, as shown in Figure 9, or in good correspondence with it, as shown in Figure 10. The influence of the model factor on the safety is more consistent with the model itself if the whole limit state function is multiplied by the model factor. Otherwise, if the model factor is only partially implemented in
the limit state equation, or when the limit state equation needs to be discretized, the outputs may not always be close to the finite element analysis. Figure 9 shows a strong effect of the model factor on the results. Consequently, before assigning a model factor to a model, attention should be paid to the failure shape (slip circle), which needs to be close to a circle; otherwise the implementation of a model factor might lead to unexpected results.

6 SYMBOLS AND ABBREVIATIONS
µX: Mean value of the variable X with a normal PDF
σX: Standard deviation of the variable X with a normal PDF
µln X: Mean value of the variable X with a lognormal PDF
σln X: Standard deviation of the variable X with a lognormal PDF
fX: PDF of the variable X
ΦN: The probability of exceeding the design point in a standard normal PDF
β: The reliability index
q: Model factor
φ: Friction angle
Dh: Horizontal correlation length
Dv: Vertical correlation length
LSE: Limit state equation
PDF: Probability distribution function
ρ(δx, δy, δz): Autocorrelation function
r(δx, δy, δz): Modified autocorrelation function

REFERENCES
Allsop, W. (2007). Failure mechanisms for flood defence assets. Floodsite task reports. Floodsite.
Calle, E.O.F. (1994). MPROSTAB user's guide. GeoDelft, Delft.
Griffiths, D.V. & Fenton, G.A. (2007). Probabilistic methods in geotechnical engineering. Springer, Wien/New York.
Malkawi, A.I.H., Hassan, W.F. et al. (2000). Uncertainty and reliability analysis applied to slope stability. Structural Safety 22(2): 161–187.
Plaxis B.V. (2002). Plaxis, finite element code for soil and rock analyses. R.B.J. Brinkgreve (ed.). Delft, Netherlands.
Rajabalinejad, M., Kanning, W. et al. (2007). Probabilistic assessment of the flood wall at 17th Street Canal, New Orleans. Risk, Reliability and Societal Safety, Vols 1–3: 2227–2234.
Rajabalinejad, M. & van Gelder, P.H.A.J.M. (2007). Geotechnical applications of the probabilistic methods. FloodSite, Task 7, TU Delft.
Rajabalinejad, M. (2009). Reliability methods for finite element models. IOS Press, Amsterdam, the Netherlands.
Rajabalinejad, M., van Gelder, P. et al. (2008). Improved dynamic limit bounds in Monte Carlo simulations. 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL, USA. American Institute of Aeronautics and Astronautics.
Rajabalinejad, M., van Gelder, P.H.A.J.M. et al. (2008). Probabilistic finite elements with dynamic limit bounds; a case study: 17th Street flood wall, New Orleans. 6th International Conference on Case Histories in Geotechnical Engineering and Symposium in Honor of Professor James K. Mitchell, Rolla, Missouri, USA. Missouri University of Science and Technology.
Rajabalinejad, M., van Gelder, P. et al. (2008). Application of Bayesian interpolation in Monte Carlo simulation. Safety, Reliability and Risk Analysis (ESREL), Valencia, Spain. Taylor and Francis Group, London, UK.
Van Gelder, P. (2008). Reliability analysis of flood sea defence structures and systems. Floodsite task reports. Floodsite.
Vanmarcke, E. (1983). Random fields, analysis and synthesis. MIT Press, Cambridge, Mass.
www.floodsite.net (2004–2008). Integrated Flood Risk Analysis and Management Methodologies.
A report by JGS chapter of TC23 limit state design in geotechnical engineering practice
Code calibration in reliability based design level I verification format for geotechnical structures Y. Honjo, T.C. Kieu Le & T. Hara Gifu University, Gifu, Japan
M. Shirato Public Works Research Institute, Tsukuba, Japan
M. Suzuki Research Institute, Shimizu Corporation, Tokyo, Japan
Y. Kikuchi Port and Airport Technical Research Institute, Yokosuka, Japan
ABSTRACT: This report is prepared by a working group in the JGS chapter of TC23 Limit State Design in Geotechnical Engineering Practice and summarizes the important points to note, and recommendations, when new design verification formulas are developed based on the Level I reliability based design (RBD) format. There are several different types of verification format in Level I RBD, such as the partial factor method (PFM) and load and resistance factor design (LRFD). The LRFD format is recommended in this report as the most suitable verification format at present, for the following reasons: (1) a designer can trace the most likely behavior of the structure to the very last stage of design if the characteristic values of the resistance basic variables are appropriately chosen; (2) the format can accommodate a performance function with high non-linearity and with some basic variables appearing on both the force and resistance sides; and (3) code calibration is possible even when only the total uncertainty of the design method is known and cannot be decomposed into its individual sources. It is recommended that the design value method (DVM) be adopted as the basic approach for code calibration, for the following reasons: (1) load and resistance factors can be determined within a sound theoretical framework, based on the target reliability level, the COVs of the basic variables and the sensitivity factors; (2) redesign of a structure is not required if the initially designed structure has a reliability level that is not very far from the target; (3) by referring to the sensitivity factors, one can evaluate the contribution of each load and resistance component. On the other hand, DVM based on FORM often suffers from instability and non-uniqueness of convergence when the performance functions become complex and highly non-linear. It is therefore recommended in this report to employ the Monte Carlo simulation (MCS) method when the performance functions are complex and non-linear: MCS is easy to implement for any complex performance function, and a solution can always be obtained if sufficient computational effort is invested. A method is proposed in this report to determine the factors based on the DVM approach by MCS; furthermore, a method that applies the subset MCMC method is suggested to improve the efficiency of the calculation in this procedure. Some related issues concerning code calibration are also addressed: the definition of the characteristic values of basic variables, the baseline technique, and the determination of appropriate reliability levels. It is emphasized that a mean value, possibly adjusted for statistical uncertainty, should be used as the characteristic value of resistance-side basic variables; this is related to the philosophy that a designer should trace the most likely behavior of the structure to the very last stage of design as much as possible. The baseline technique is examined by some simple calculations, which show that taking high/low fractile values has some merit on the load side but much less obvious merit on the resistance side; taking low fractile values for the characteristic values of resistance-side basic variables is therefore not necessary. Some information on target reliability indices and background risks is provided for the convenience of the reader.
1 INTRODUCTION

1.1 Background and purpose of the report
Many design codes in the world have now been revised to RBD Level I format. The limit state design (LSD),
PFM and LRFD are all included in this category of design verification format. Factors such as the WTO/TBT agreement, the near-future establishment of the Structural Eurocodes, and ISO documents (e.g. ISO2394
(ISO, 1998)) are all taking part in this movement (Honjo, 2006). Some examples of such design codes in geotechnical engineering are Eurocode 7 (CEN, 2004), the Ontario Highway Bridge Code (MOT, 1979, 1983, 1991), the Canadian Bridge Design Code (CSA, 2000; Becker, 2006) and the AASHTO LRFD Bridge Design Specifications (AASHTO, 1994, 1998, 2004; Allen, 2005; Paikowsky, 2004). In Japan, the AIJ guideline for loads on buildings (AIJ, 1993, 2004), the AIJ guideline for limit state design of buildings (AIJ, 2002), JGS Geocode 21 (JGS, 2006) and the Technical standards for port and harbor facilities (JPHA, 2007) have been developed based on this approach. This report is prepared by a working group in the JGS chapter of TC23 Limit State Design in Geotechnical Engineering Practice and summarizes the important points to note, and recommendations, when new design verification formulas are developed based on the Level I reliability based design (RBD) format for geotechnical structures.

1.2 Reliability based design

1.2.1 Design methods
A design method is defined as a method or procedure for making decisions under the uncertainties encountered in a structural design process (Matsuo, 1984). The most traditional, popular and widely used design method is the safety factor method. This method introduces a safety factor, defined as the ratio between the total resistance and the total force, which should usually be kept above a value, greater than one, given by experience. The safety factor method is a superior design method supported by a vast amount of experience, but it has some drawbacks: (1) The safety factor is not an absolute measure of the safety of a structure. It is determined from past experience, so a structure built with a safety factor twice as big as the one usually employed is not necessarily twice as safe. (2) One of the reasons the safety factor is not an absolute measure of safety is that the method was developed hand in hand with the allowable stress design (ASD) method, and ASD does not necessarily design a structure for a limit state.
With the progress of plastic theory in mechanics, which directly takes into account the failure of a structure or a member, limit state design treated resistances and forces as random variables from the beginning; this approach has become established as today's reliability based design (RBD) method. A brief summary of the development of RBD follows. The so-called classic RBD method, the direct origin of today's RBD and based directly on probability theory, was started in the 1940s by Freudenthal and others in the western world (Takaoka, 1988). However, this classic theory had the following drawbacks (Matsuo, 1984): (1) In modeling the uncertainties of forces and resistances, it is the tail part of the distribution that really affects the reliability of a structure, and it is very difficult to model the tail part accurately from limited data. (2) The calculation of the failure probability by probability theory requires multiple integrations; this becomes practically impossible when the number of random variables grows to more than a few, which is typical in actual design calculations, so the solution cannot be obtained. (3) A design calculation method is a simplification and idealization of real phenomena, and it is rarely possible to evaluate the model uncertainty involved in such a method; thus the calculated failure probability cannot be an accurate indicator of the failure event.

Table 1. Classification of RBD.

Level       Basic variables                                       Reliability evaluation                         Verification
Level III   Random variables with full distributions              Failure probability                            Acceptable level of reliability, economical optimization, etc.
Level II    Random variables with mean, variance and covariance   Reliability index                              Target reliability index
Level I     Deterministic variables                               Partial factors, load and resistance factors   Verification formula
One of the proposals to overcome drawback (2) was the FOSM (first order second moment) method of Cornell (1969). In this method the uncertainties are treated only parametrically, by the means, variances and covariances of the random variables, and the non-linear equations are linearized using a Taylor expansion. Cornell (1969) also introduced the reliability index β in place of the failure probability Pf. The concept proposed by Cornell (1969) was subsequently developed and revised by researchers such as Ditlevsen, Rackwitz, Lind and Hasofer; these efforts led to FORM (first order reliability method), which is the standard method in RBD today. RBD is usually classified into three levels, as presented in Table 1:
Level III: Basic variables are treated as random variables with full distributions. The failure probability is evaluated based on a performance function that is defined in the basic variable space and separates the stable region from the failure region.
Level II: A simplified version of Level III. Basic variables are described parametrically by their means, variances and covariances. A method such as FORM is used to evaluate the reliability index.
Level I: The necessary safety margin is provided by applying partial factors to the characteristic values of the basic variables directly (i.e. PFM) or to the calculated forces and resistances (i.e. LRFD). All calculations are done deterministically, and a designer need not have knowledge of reliability theory; code writers, however, need to determine the factors based on reliability analysis. The method is sometimes called limit state design (LSD), and many design codes are now being revised to this format.
1.2.2 Design methods and design codes
A design code is a guideline document that describes the design procedure for general and routinely designed structures so that a certain level of technical quality is assured (Ovesen, 1989, 1993). Most of the structures we see daily are designed and built according to design codes, a fact that reflects the importance of design codes. As explained in the previous section, many design codes in the world today are in the process of revision from traditional ASD to LSD, i.e. Level I RBD. Some of the features of Level I RBD can be listed as follows: (1) The reliability of a structure is evaluated with respect to so-called limit states of the structure, criteria that separate a desirable state (e.g. a stable condition) from an undesirable one (e.g. failure); the ultimate and serviceability limit states are typical examples employed in RBD. (2) The uncertainties involved in the basic variables concerning actions, material properties, shape, size, etc. are taken care of by factors determined from reliability analyses; general users of these factors, however, are not required to understand the details of the reliability analyses. (3) The harmonization of design codes is possible under an identical concept. Traditionally, design codes have been developed in different ways for different structures (e.g. buildings vs. bridges) and different materials (e.g. steel, concrete, composite, wood and geotechnical). ISO documents such as ISO2394 'General principles on reliability of structures' (ISO, 1998) play an important role in code development because the WTO/TBT agreement requires member countries to respect internationally agreed standards. Although most of the important parts of RBD have been developed by structural engineers, as explained in the previous section, geotechnical RBD also has some original developments (Meyerhof, 1993). It is natural for engineers to invent methods for dealing with uncertainties while they carry out design in their own ways. The most significant contribution to geotechnical RBD came from Brinch Hansen (1953, 1956, 1967). He was the first person to use limit states in
geotechnical engineering and to propose introducing partial safety factors into design codes. Under his influence, a geotechnical design code using partial safety factors has been established in Denmark since 1965. LSD thus became popular in the Scandinavian countries early on, which gave Eurocode 7 a somewhat different background from the other parts of the Structural Eurocodes. One of the best-known structural reliability scholars, Ove Ditlevsen, states in his structural reliability textbook that 'A consistent code formulation of a detailed partial safety factor principle was started in the 1950's in Denmark before other places in the world. This development got particular support from the considerations of J. Brinch Hansen who applied the principles in the field of soil mechanics.' (Ditlevsen and Madsen, 1996, p. 31). It is understood that the contribution of Brinch Hansen has been widely recognized. One of the new trends concerning design codes is the rise of the performance based design (PBD) concept, which requires transparency and accountability in design (Honjo & Kusakabe, 2002; Honjo et al., 2005). RBD (or LSD) seems to be the only rational tool that provides a design verification procedure which designs a structure for clearly defined limit states (i.e. performances of structures and members) and introduces a sufficient safety margin. It is concluded that RBD will be used as a tool for developing design codes for at least the next several decades.
2 FORMAT OF VERIFICATION FORMULA
2.1 PFM vs. LRFD
The definitions of the partial factor method (PFM) and load and resistance factor design (LRFD) used in this report are given in the following subsections.
2.1.1 PFM
The basic philosophy of this format is to treat the uncertainties at their origin. Many partial factors are therefore introduced, and the format can be written as:
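A generic form consistent with this description (a sketch only, not the verbatim formula of the report):
\[
\gamma_R\,R\!\left(\gamma_{r1}x_{r1},\ldots,\gamma_{rn}x_{rn}\right) \ \ge\ \gamma_S\,S\!\left(\gamma_{s1}x_{s1},\ldots,\gamma_{sm}x_{sm}\right) \qquad (1)
\]
with each partial factor applied in the direction (multiplying or dividing) that decreases the computed resistance and increases the computed load.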
where γri is a partial factor applied to basic variable xri on the resistance side of the equation, and γsi a partial factor applied to basic variable xsi on the load side. γRi and γSi, another important group of partial factors, need to be introduced to cover design model error, redundancy, brittleness and statistical uncertainties. It should be noticed that partial factors for material properties are introduced to counter material uncertainty: a characteristic material value is usually discounted by a material partial factor before it is inserted into a design equation to calculate the resistance.
2.1.2 LRFD
The resistance and the load (external action) are first calculated from the characteristic values of the basic variables; load and resistance factors are then applied to the resulting resistance and load components to secure an appropriate safety margin. It is convenient to list two types of LRFD format, Eqs. (2) and (3) below:
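The two formats, reconstructed from the 'where' clauses that follow (Eq. (2) with a single resistance factor, Eq. (3) with multiple resistance factors):
\[
\gamma_R\,R \ \ge\ \sum_j \gamma_{Sj}\,S_j \qquad (2)
\]
\[
\sum_i \gamma_{Ri}\,R_i \ \ge\ \sum_j \gamma_{Sj}\,S_j \qquad (3)
\]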
In Eq. (2), γR is a resistance factor and γSj a load factor for load component Sj. In Eq. (3), γRi are resistance factors for resistance components Ri and γSj load factors for load components Sj; the resistance side may be decomposed into several lump sums that are superimposed to obtain the final resistance (for example, pile tip and side resistances), and different resistance factors may be applied to the different terms. This approach is sometimes called MLRFD, Multiple Load and Resistance Factor Design (Phoon et al., 1995; Kulhawy and Phoon, 2002). Traditionally, PFM was developed mainly in Europe (Gulvanessian and Holicky, 1996) and LRFD in North America (Ellingwood et al., 1980, 1982), but they are now used side by side worldwide. In Eurocode 7 (CEN, 2004) and Geocode 21 (JGS, 2006), the Material Factor Approach (MFA) and the Resistance Factor Approach (RFA) are used in almost the same way as PFM and LRFD here.

2.2 Pros and cons of PFM
PFM has advantages over LRFD in the following respects:
– It is intuitively reasonable to counter uncertainties at their sources.
– It is easier to accommodate developments in construction techniques and design methods when the uncertainties are treated at their origin: one can simply change the partial factors related to the improvement brought by the new development.
However, it is very difficult to evaluate the reliability of a structure by a reliability analysis based on superposition of all the uncertain sources, for the following reasons:
– Not all sources of uncertainty are known quantitatively; the model uncertainties, especially, are difficult to evaluate.
– Some of the sources are correlated, and it is difficult to account for this correlation in the reliability analysis.
– Sometimes the overall uncertainty of the structure can be estimated, but it is difficult to break the result down into the individual sources.
– Many of the factors have non-linear effects on the resulting uncertainty of the structural reliability, so it is difficult to control the reliability of the structure at the sources of uncertainty.
– Since PFM modifies the material properties that are input to the design calculation before the calculation, the calculated behavior of the structure can be far from reality (the most likely behavior). Especially in geotechnical design, where engineering judgment plays a very important role, such unrealistic calculation may not be favorable: there should be a philosophy that a designer keeps track of the most likely behavior of the structure to the end of the design calculation as much as possible.
As this discussion suggests, although PFM looks theoretically sound, it encounters many difficulties in practical situations: it is very difficult, if not impossible, to determine partial factors rigorously in practice.

2.3 Pros and cons of LRFD
The pros of PFM are the cons of LRFD. However, LRFD is superior to PFM in the following points:
– Especially on the resistance side, a design calculation based on LRFD predicts the most likely behavior of the structure to the last stage of design, in accordance with the philosophy that a designer should keep track of the most likely behavior of the structure to the last stage of the design as much as possible.
– In geotechnical design, where the interaction between the structure and the ground is strong, it is not possible to know whether discounting a material parameter value results in a safe design. This is especially true for sophisticated design calculation methods such as FEM.
– In many code calibration situations, the uncertainties of a design method are provided as the results of many loading tests, for example in the form of a database. Such uncertainties include all aspects of uncertainty and cannot be decomposed into their various sources; in practice, therefore, it is possible to calibrate codes based on LRFD but not on PFM.
– The LRFD verification formula is closer to the traditional safety factor method than that of PFM, so it may be easier for practising engineers to become familiar with LRFD.
For these reasons, LRFD is considered the better format for geotechnical engineering design codes, at least in the present situation.

3 DESIGN VALUE METHOD
3.1 General consideration on code calibrations
An engineer can design structures using the Level I RBD format without knowing the details of probability and statistics. A code writer, on the other hand, has to determine the factors that introduce a sufficient safety margin into the designed structures based on detailed knowledge of the treatment of uncertainties; this is known as code calibration. The basic issues of code calibration can be listed as follows:
– There are infinitely many combinations of factor values in design verification that give the required reliability level to a structure. Mathematically, the problem is ill-posed (no unique solution exists).
– Even within the same category of structure, the design conditions vary from one structure to another. The factor values may depend on these conditions and may not be the same from one structure to the next.
For these reasons, Level I RBD should not be regarded as the ultimate design method for introducing sufficient reliability into a structure. There should always be a way open to carry out the reliability design of a structure by a more direct method such as Level II or III RBD; in fact, it is recommended to design important structures by directly employing the higher-level RBD methods.

Figure 1. Illustration of the design value method.

Figure 2. Definition of the design point and sensitivity factors.
3.2 Design value method
The design value method (DVM) is a convenient method for determining the factors of a Level I reliability based design format while overcoming the ill-posedness pointed out above. The basic idea of DVM is that each factored item, i.e. each basic variable, is assigned its value at the design point through its relationship with the characteristic value. Here, the design point is defined as the point on the limit state line that has the maximum likelihood in the basic variable space (Figure 1). Hasofer and Lind (1974) proposed calculating the reliability index as an invariant by transforming the basic variable space into a standardized space; in this standardized space the design point is the closest point on the limit state line to the origin (Figure 2). Because of this graphical representation of β,
their reliability index is sometimes called the geometric reliability index (Ditlevsen & Madsen, 1996). The DVM is most visually and intuitively explained when the performance function consists of a linear combination of two independent normal random variables, namely the resistance R and the external force S. The other typical case is a performance function consisting of a linear combination of ln R and ln S, where R and S independently follow lognormal distributions. These two cases are discussed in detail in the following subsections.
3.2.1 Linear performance function with two independently normally distributed basic variables R and S
The mean and standard deviation of the safety margin M = R − S are µM = µR − µS and σM = √(σR² + σS²); thus the reliability index is β = µM/σM.
It is required in design that the reliability index β always be larger than a target reliability index βT chosen beforehand, i.e. β ≥ βT.
Also, σM can be decomposed as
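The elided identity, reconstructed from the definitions that follow:
\[
\sigma_M = \frac{\sigma_R^2 + \sigma_S^2}{\sigma_M} = -\alpha_R\,\sigma_R + \alpha_S\,\sigma_S
\]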
where
\[
\alpha_R = -\frac{\sigma_R}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad
\alpha_S = \frac{\sigma_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}
\]
are termed the sensitivity factors of R and S. For β = µM/σM ≥ βT, the following can be obtained:
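The condition written out (a reconstruction of the elided inequality):
\[
\mu_R - \mu_S \ \ge\ \beta_T\,\sigma_M = \beta_T\left(-\alpha_R\,\sigma_R + \alpha_S\,\sigma_S\right)
\]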
Rearranging this inequality, it is obvious that (µR + βT αR σR) and (µS + βT αS σS) are the design values. Let the characteristic values of R and S be R̄ and S̄, respectively. The partial factors that fulfill the target reliability index βT can then be defined by:
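The elided definitions follow directly from the design values above:
\[
\gamma_R = \frac{\mu_R + \beta_T\,\alpha_R\,\sigma_R}{\bar{R}}, \qquad
\gamma_S = \frac{\mu_S + \beta_T\,\alpha_S\,\sigma_S}{\bar{S}}
\]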
The target reliability level is then guaranteed by requiring γR R̄ ≥ γS S̄ in the design verification.
The definition of the design point and the sensitivity factors is illustrated in Figure 2.

3.2.2 Linear performance function with independently lognormally distributed basic variables
If X is a lognormally distributed variable with mean E[X] and variance Var[X], then ln X is a normally distributed variable with mean E[ln X] and variance Var[ln X]. The mean and variance of M = ln R − ln S, and the corresponding sensitivity factors, are given by:
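Reconstructions of the elided relations (standard lognormal algebra; a sketch consistent with the surrounding definitions):
\[
\mu_M = \mu_{\ln R} - \mu_{\ln S}, \qquad \sigma_M = \sqrt{\sigma_{\ln R}^2 + \sigma_{\ln S}^2},
\]
\[
\alpha_{\ln R} = -\frac{\sigma_{\ln R}}{\sigma_M}, \qquad \alpha_{\ln S} = \frac{\sigma_{\ln S}}{\sigma_M},
\]
where, for example, σ²ln X = ln(1 + Var[X]/E[X]²) and µln X = ln E[X] − σ²ln X/2. The derivation then parallels the normal case of 3.2.1, with ln R and ln S in place of R and S.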
3.2.3 General solution of design value method
The general solution of the design value method covers many basic variables (the multivariate case) with a non-linear limit state function. Let X = (X1, …, Xn) be a vector of n basic random variables following a joint distribution function fX(x). The safety margin is M = g(X) = R(X) − S(X). The solution of the DVM is obtained by solving the following system of equations:
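The system is elided in the source; the standard FORM design-point conditions it presumably contained are:
\[
g(x^{*}) = 0, \qquad x_i^{*} = \mu_{X_i} + \beta_T\,\alpha_i\,\sigma_{X_i},
\]
\[
\alpha_i = \frac{-\left(\partial g/\partial X_i\right)\sigma_{X_i}}{\sqrt{\sum_j \left[\left(\partial g/\partial X_j\right)\sigma_{X_j}\right]^2}}\Bigg|_{x=x^{*}},
\]
solved iteratively for the design point x* and the sensitivity factors αi.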
A general code calibration procedure based on DVM using Monte Carlo simulation will be proposed in Section 5.

4 SOME ISSUES RELATED TO CODE CALIBRATIONS

4.1 Characteristic values
4.1.1 Introduction
When using the PFM or LRFD formats of Level I RBD verification to design a structure, one of the important problems that needs to be considered carefully is how the characteristic values of the material and load parameters should be determined. In general, the characteristic value of materials such as steel and concrete, which show low dispersion in their physical properties, is chosen as a fractile value (usually 1% or 5%). If a fractile value is also used as the characteristic value of a soil material, it will be considerably smaller than the mean value.
In a design calculation, if the output values change proportionally with a reduction of the input values (i.e. a linear model), whatever safety margin is introduced in the input values is propagated proportionally to the output. In the design of foundation structures, however, the calculation of soil stress, bearing capacity, etc. follows highly non-linear models. Furthermore, the finite element method and other numerical methods are now used widely and directly in geotechnical design. In these cases, if the characteristic value of a material used in design differs considerably from its mean value, the behavior of the designed structure will be far from the most likely behavior of the real structure; and, considering the complex interaction among the basic variables, it is impossible to judge whether a reduction of the material parameters preserves the safety of the whole structure. Simpson and Driscoll (1998) mention in their commentary on Eurocode 7 that the characteristic values of geotechnical parameters are fundamental to all calculations carried out in accordance with the code, and that their definition was the most controversial topic in the whole process of drafting Eurocode 7. The characteristic value is usually set equal to a certain percentile of the distribution of the quantity (e.g. mean, mode, median, mean minus one standard deviation, fractiles, etc.). Whitman (1984) showed that some engineers use the mean value, while others use the most conservative of the measured strengths, as the characteristic value. According to Eurocode 0, the nominal values for resistance given by producers should correspond (at least approximately) to certain fractiles; for steel structures, for example, the values on the resistance side should correspond to the 5% fractile. On the action side, the characteristic values are defined as the 50% fractile (mean value) for permanent actions and the 98% fractile of the distribution of the annual extremes, corresponding to an average return period of 50 years, for variable actions (e.g. climatic actions).
4.1.2 Characteristic value in Eurocode 7
Numerous debates on the choice of an appropriate characteristic value for geotechnical design took place during the development of Eurocode 7; the proposed methods include the statistical method in Eurocode 7, the method proposed by Schneider (1997), and the Bayesian approach. The selection of the characteristic value is still open to debate among engineers. In Eurocode 7 the characteristic values of loads and resistances are chosen as explained in detail by Orr (2000).
(1) Characteristic value of loads
The characteristic values of permanent loads derived from the weights of materials, including water pressures, are normally selected using average or nominal unit weights, with no account taken of the variability in unit weight. Characteristic earth pressures are obtained using characteristic
Figure 3. Process for obtaining design values from test results, Orr & Farrell (1999).
ground properties and surface loads, and include characteristic water pressures. The characteristic values of variable actions, for example wind and snow loads, are either specified values or values obtained from meteorological records for the area concerned.
(2) Characteristic values of geotechnical parameters
Article 2.4.3(2,3,4,5) of Eurocode 7 states that the characteristic value of a geotechnical material parameter is based on an assessment of the material actually in the ground and of the way that material will affect the performance of the ground and structure in relation to a particular limit state. The characteristic value of a soil or rock parameter shall be selected as a cautious estimate of the value affecting the occurrence of the limit state. Eurocode 7 proposes a process for obtaining the characteristic values of geotechnical parameters from the results of field and laboratory tests. This process involves three stages, summarized by Orr & Farrell (1999) as shown in Figure 3, during which the following values are obtained:
1. measured values;
2. derived values of a parameter;
3. characteristic value of a parameter.
A measured value is simply defined in Eurocode 7, Part 2, as the value measured in a field or laboratory test. A derived value is the value of a ground parameter at one particular location in the ground, obtained by theory, correlation or empiricism from measured test results, without consideration of the nature of the structure. Orr (2000) also emphasized that the derived values of the same parameter may vary with the type of test used.
(3) Statistical methods for determining the characteristic value
The characteristic value of a ground property in Eurocode 7 is defined as the value such that the probability of a worse value governing the occurrence of a
limit state is not greater than 5%, i.e. there is 95% confidence that the actual mean value is greater than the selected characteristic value. Eurocode 7 also emphasizes that the statistical result should be compared with that obtained by a Bayesian method, rather than relying on a purely statistical approach without consideration of the actual design situation and comparable experience. The characteristic value X̄ of a soil property in the statistical method is given by:
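In symbols, consistent with the definitions in the next sentence:
\[
\bar{X} = \mu_X\left(1 - k_n V_X\right)
\]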
where µX is the mean value, kn a factor depending on the type of statistical distribution and the number of test results, and VX the coefficient of variation. Since the actual mean value µX of a soil parameter cannot normally be determined statistically from a sufficient number of tests, it must be assessed from the average of the test results. One way to obtain the factor kn in the above equation is to use the Student t distribution (published by Gosset under the pseudonym 'Student'). Orr (2000) also noted that a characteristic value obtained by the statistical method using the Student t value becomes too cautious and uneconomic for the limited numbers of test results that are common in practice.
(4) Schneider's method for determining the characteristic value
Based on comparative calculations, Schneider (1997) has shown that a good approximation to X̄ is obtained with kn = 0.5, i.e. if the characteristic value is chosen as one half of a standard deviation below the mean value, as in the following equation:
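That is, Schneider's rule written out:
\[
\bar{X} = \mu_X - 0.5\,\sigma_X
\]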
where the mean, standard deviation and coefficient of variation are obtained from the test results.
(5) Bayesian approach for determining the characteristic value
This approach may be used to determine the characteristic value when some comparable experience is available that enables the mean and standard deviation to be estimated. In this situation it is possible to combine the test results with the estimated values, i.e. the a priori values, using Bayes' theorem, so as to obtain more reliable mean and standard deviation values and hence a more reliable characteristic value. The relevant equations for the mean and standard deviation in the Bayesian approach are (Tang, 1971):
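A reconstruction using the standard normal-theory combination (the published form in Tang (1971) may differ in detail), in the notation defined in the next sentence:
\[
\bar{X}_3 = \frac{\bar{X}_1\,\sigma_2^2 + \bar{X}_2\,\sigma_1^2}{\sigma_1^2 + \sigma_2^2}, \qquad
\sigma_3 = \sqrt{\frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}}
\]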
Figure 4. Flow chart for determining the design values of the geotechnical parameters.
where X̄1, σ1 are the estimated mean and standard deviation based on experience; X̄2, σ2 are the mean and standard deviation obtained from the test results; and X̄3, σ3 are the updated mean and standard deviation.
4.1.3 Characteristic value in Geocode 21
In the 'Principles for Foundation Design Grounded on a Performance-Based Design Concept' (nicknamed 'Geocode 21') (JGS 4001, 2004), the characteristic values of soil parameters are recommended to be defined according to the philosophy explained below. In Geocode 21, the definitions of characteristic value and design value are shown in Figure 4 and stated as follows:
– Measured value: the value obtained from any of a variety of investigations or tests, e.g. the groundwater level, the SPT N value, or the stress and strain in a triaxial test.
– Derived value: estimated from measured values by theory, experience, or correlation, e.g. the cohesion and internal friction angle obtained from the Mohr circles of a triaxial test, or Young's modulus obtained from the SPT N value.
– Characteristic value: a representative value of a soil parameter that is most appropriately estimated for predicting the limit state of the foundation/ground model used in design. Deciding a characteristic value must be based on theories and experience, and must sufficiently consider the dispersion of the soil parameters and the applicability of the simplified model. Geocode 21 defines the characteristic value as follows: 'The characteristic value of a geotechnical parameter is principally thought of as the average (expected value) of the derived values. It is not a mere mathematical average, but also accounts for the estimation errors associated with statistical averaging. Moreover, the average value shall be carefully and comprehensively chosen considering data from past geologic/geotechnical engineering, experiences with similar projects, and relationships and consistencies among the results of different geotechnical investigations and tests.' (Design principle 2.4.3(c))
– Design value: the value of a foundation parameter used in the design calculation model in the material factor approach, obtained by applying a partial factor to the characteristic value.

4.1.4 Various discussions on the characteristic value
Phoon et al. (2003) state that, from a reliability perspective, the key consideration is that the engineer should not be allowed to introduce additional conservatism into the design by using, for example, some lower-bound value, because the uncertainty in the design parameters is already built rationally into the RBD equations. Phoon et al. (2003) also note that it is conventional to choose characteristic values of loads and resistances at very high/low fractile values, such as 95% and 5%, the reason being the so-called baseline technique: factors determined from such fractile values are less sensitive to changes in the uncertainties of the basic variables used in the code calibration. However, this is no longer true when the COV exceeds 0.3, as will be explained in detail in the next section.
4.2 Baseline technique

It is very conventional to choose characteristic values of loads, S, and resistances, R, at very high/low fractile values, such as 95%/5%. One of the reasons given for taking such values is the so-called baseline technique (e.g. Phoon et al., 2003): if S̄ and R̄ are chosen as high and low fractile values, respectively, to define the load and resistance factors, then those factors, γS and γR, are relatively insensitive (robust) to changes in the coefficients of variation of S and R, i.e. VS and VR. In this section the effectiveness of the baseline technique in choosing the fractile values used to determine the load and resistance factors is examined: for each fractile value of S̄ and R̄, the load and resistance factors are calculated while the coefficients of variation and the fractile values of S and R are varied.

Suppose a limit state function M = R − S is given in terms of two statistically independent basic variables, where R is the resistance side, S the loading side, and M a random variable denoting the safety margin. R and S follow normal distributions with means and standard deviations µR, σR and µS, σS, respectively. Following the procedure introduced in subsection 3.2.1 on the DVM, and with the sensitivity factors calculated as in that subsection, a parametric study was carried out with the parameters set as follows:
– target reliability index: βT = 2, 3;
– mean values of R and S: µR = 7.0, µS = 3.0;
– VR, VS: 0.10, 0.20, 0.30;
– percentage of fractile: wR = 1% ∼ 50%, wS = 99% ∼ 50%.
The load and resistance factors, and the characteristic values R̄ and S̄ obtained by choosing the fractile values wR and wS with the inverse standard normal probability distribution function Φ−1(·), are related as follows:
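Written out (consistent with the DVM results of subsection 3.2.1; a reconstruction of the elided relations, with overbars denoting characteristic values):
\[
\gamma_R = \frac{\mu_R + \beta_T\,\alpha_R\,\sigma_R}{\bar{R}}, \qquad
\gamma_S = \frac{\mu_S + \beta_T\,\alpha_S\,\sigma_S}{\bar{S}},
\]
\[
\bar{R} = \mu_R + \Phi^{-1}(w_R)\,\sigma_R, \qquad
\bar{S} = \mu_S + \Phi^{-1}(w_S)\,\sigma_S.
\]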
Some of the important figures from this study are presented in Figures 5–8. The following remarks can be made:
a. According to Figures 5a, b and c, as far as γS is concerned, it changes with VS, but the change is smaller when a higher fractile value of S is taken. The baseline technique is therefore valid in the sense that the characteristic value S̄ should be chosen as
Figure 6(a). wS vs. γS , Normal, βT = 3, VR = 0.1.
Figure 5(a). VS vs. γS , Normal, βT = 3, VR = 0.1.
Figure 6(b). wS vs. γS, Normal, βT = 3, VR = 0.2.
Figure 5(b). VS vs. γS, Normal, βT = 3, VR = 0.2.
Figure 5(c). VS vs. γS , Normal, βT = 3, VR = 0.3.
Figure 6(c). wS vs. γS , Normal, βT = 3, VR = 0.3.
a higher fractile value, for example 95% or 99%. It should be noticed, however, that if one takes a very high fractile value, one may get a load factor γS smaller than unity (Figures 6a, b and c), which may not be preferable for practising design engineers.
b. For γR, however, the advantage of the baseline technique is not as obvious as for γS. As can be seen from Figures 7a, b and c, the stability of γR at low fractile values is no better than at values closer to the mean; moreover, γR is very unstable to changes in VR when a very low fractile value is taken. The same can be read from Figures 8a, b and c: the γR curves for different VR do not draw closer together at small fractile values of R̄.
c. Statements (a) and (b) are more pronounced for larger βT, although those results are not included in this paper.
Therefore, for the case of two statistically independent, normally distributed random variables S and R in the linear performance function M = R − S, it is recommended to take a high fractile value for S̄; using a low fractile value for R̄, however, may not be as effective as in the case of S. The same procedure was also carried out for the case where S and R follow lognormal distributions. It is observed that the baseline technique again works better for S than for R and, compared to the normal case, all the statements are more pronounced in the lognormal case. In conclusion, the advantage of the baseline technique is much more obvious in determining the load factor γS than the resistance factor γR. It is recommended to take a high fractile value for S̄, whereas a low fractile value need not be chosen for R̄.
4.3 Determination of a target reliability
As introduced in many references and textbooks, there are mainly three ways to determine a target reliability level:
(1) Adopt the reliability level preserved in existing structures. This implies preserving the reliability level of the existing design codes.
Figure 7(a). VR vs. γR, Normal, βT = 3, VS = 0.1.
Figure 7(b). VR vs. γR, Normal, βT = 3, VS = 0.2.
Figure 7(c). VR vs. γR, Normal, βT = 3, VS = 0.3.
Figure 8(a). wR vs. γR, Normal, βT = 3, VS = 0.1.
Figure 8(b). wR vs. γR, Normal, βT = 3, VS = 0.2.
Figure 8(c). wR vs. γR, Normal, βT = 3, VS = 0.3.
(2) Determine the reliability level based on comparison with other existing individual and societal risks, i.e. the background risks.
(3) Use an optimization scheme based on some economic or other criteria (e.g. LQI).
The methods are ordered from the simple and practical to the more theoretical and ideal. In actual code calibration the first method is mostly employed. The information obtained by the second method is useful when one needs to compare the design reliability risk with other societal risks, a situation that is becoming important when decisions must be made between building infrastructure and other, softer alternatives. As for the third method, it may be important to understand its general framework, but it is rarely used in practical code calibration and is not considered further in this report.
4.3.1 Target reliability by existing design codes
Owing to the various difficulties in evaluating the absolute reliability level of a designed structure, most practical code calibrations set the target reliability level based on existing structures, on the assumption that the existing structures satisfy the reliability level required by society. Common practice is to evaluate the reliability level of existing structures before setting the target reliability level, which implies reliability analysis of structures designed by the existing design codes. For the convenience of code writers, some typical target reliability indices, βT, are listed in documents like ISO2394 (ISO, 1998). ISO2394 first gives the relationship between β and the failure probability, based on the standard normal distribution, as shown in Table 5; typical target reliability indices are presented in Table 6. AISC LRFD (Ellingwood et al., 1980, 1982), an LRFD-based code, adopted a target β of 2.5–3.5 for steel beams and columns and 3.0–3.5 for RC beams and columns; this was one of the first attempts to calibrate load and resistance factors in an LRFD-based design code.
Table 5. Relationship between β and Pf.

Pf   10−1   10−2   10−3   10−4   10−5   10−6   10−7
β    1.3    2.3    3.1    3.7    4.2    4.7    5.2
Table 6. Target β-values (life-time, examples).

Relative costs of      Consequences of failure
safety measures     small   some      moderate   great
High                0       1.5 (A)   2.3        3.1 (B)
Moderate            1.3     2.3       3.1        3.8 (C)
Low                 2.3     3.1       3.8        4.3

Some suggestions are:
A: for serviceability limit states, use β = 0 for reversible and β = 1.5 for irreversible limit states;
B: for fatigue limit states, use β = 2.3 to 3.1, depending on the possibility of inspection;
C: for ultimate limit state design, use the safety classes β = 3.1, 3.8 and 4.3.
The target β's that have been used in geotechnical code calibrations are summarized in Paikowsky (2004). Based on that review, a target β of 3.0 was adopted for a single pile, whereas 2.33 is used for pile groups supporting a foundation with more than five piles; the reduced β was adopted for pile groups because of the redundancy of the foundation system. There are many proposed target β's in various reports and papers. It is, however, important to review carefully the purpose for which each target β was set: they may differ by material, member, failure mode, limit state and calculation method. Furthermore, β can be defined either per annum or for the lifetime of a structure, and the distinction between the two is sometimes overlooked.

4.3.2 Criteria on allowable risk
ICG (2003) defines risk as the probability of an event times the consequence if the event occurs; similar definitions can be found in much of the literature (e.g. Baecher & Christian, 2003). It is therefore important to consider both the probability (i.e. frequency) of occurrence of an event and its consequence at the same time and, in preparing countermeasures for reducing risk, to consider reducing both the frequency and the consequence. It is not appropriate to assume an absolute threshold between acceptable and unacceptable risk. Every individual has his or her own preference for risk, so the acceptable risk level differs from one person to another depending on the risk under consideration. Starr (1969) pointed out that there is quite a difference between the acceptable levels for involuntary and voluntary risks, and Slovic (1987) found that factors like 'dread or uncontrollable' and 'unknown or unobservable' considerably influence the risk perception of ordinary individuals. To take this ambiguity in the recognition of acceptable risk levels into account, the HSE (1999) of the UK proposed dividing risk levels into three regions:
(1) Unacceptable region: risk cannot be justified except in extraordinary circumstances.
(2) ALARP (as low as reasonably practicable) region: tolerable only if the cost of risk reduction is grossly disproportionate to the reduction in risk achieved.
(3) Risk negligible region: further effort to reduce risk is not normally necessary.
The ALARP region is well established in case law in the UK and has been adopted in the UK health and safety Act (GEO, 2007; Diamantidis, 2008). Diamantidis (2008) proposed making a distinction between individual risk and societal risk in developing criteria for acceptable risk levels, which is also followed in this report.
(1) Individual risk
Individual risk is defined as the annual probability of being harmed by a hazardous situation. A typical individual risk is fatality risk, measured by the annual probability of being killed by some harm. Table 7 presents the annual fatal accident rates in developed countries according to Diamantidis (2008). In addition, annual fatal accident rates in Japan, derived by Honjo (2006) from official statistics for 2004, are presented in Table 8; some of these figures were also copied into Table 7 for comparison. It should be noticed that the methods used to obtain the rates in the two tables differ, so comparisons should be made with care. It may be concluded from these results that the acceptable risk level differs from one event to another, depending on whether the risk is voluntary or involuntary and on the benefit one gains from the event.
(2) Societal risk
One of the most popular ways to describe societal risk is an F–N curve, where F stands for frequency and N for the number of fatalities. An F–N curve presents the probability of occurrence of an accident that involves more than n fatalities, and is related to the cumulative distribution function of N as follows:
The F–N curve was made widely known by a report prepared by the USNRC (1975), known as WASH-1400, showing the risk of building nuclear power plants in the US: F–N curves comparing various societal risks with the risk of operating nuclear power plants were presented in order to justify the construction of new nuclear power plants. In the field of geotechnical engineering, an F–N curve drawn by Baecher in 1982 is well known (Baecher & Christian, 2003). Some relatively new F–N curves proposed in various parts of the world are plotted in Figure 9: the risk guidelines published by the Dutch government (Versteeg, 1987), those of the Hong Kong government planning department (HKPD, 1994) and those of ANCOLD
Table 7. Fatal accident rate in developed countries (modified from Diamantidis, 2008).

Cause of death                              During activity (/10^8 hrs)   Proportion of time (average)   Annual probability
Rock climbing                               4000                          0.005                          2.00 × 10−3
Motorcycle accident                          300                          0.01                           3.33 × 10−4
Skiing                                       130                          0.01                           1.25 × 10−4
Workers in high rise building industry        70                          0.2                            1.43 × 10−4
Deep sea fishing                              50                          0.2                            1.00 × 10−3
Workers on offshore oil and gas rigs          20                          0.2                            4.00 × 10−4
Disease average for 40–44 age group           17                          1                              1.67 × 10−3
Travel by air                                 15                          0.01                           1.4 × 10−5
Travel by car                                 15                          0.05                           7.7 × 10−5
Disease average for 30–40 age group            8                          1                              8.3 × 10−4
Coal mining                                    8                          0.2                            1.7 × 10−4
Travel by train                                5                          0.05                           2.5 × 10−5
Construction industry                          5                          0.2                            1.0 × 10−4
Agriculture (employees)                        4                          0.2                            8.3 × 10−5
Accidents in the home                          1.5                        0.8                            1.1 × 10−4
Travel by local bus                            1                          0.05                           5 × 10−6
Chemical industry                              1                          0.2                            2 × 10−5
California earthquake                          0.2                        1                              2 × 10−5

For comparison, a further column, "Annual fatal rate in Japan (Honjo, 2006)", gives for selected causes: 5.8 × 10−5; 9.8 × 10−4; 2.2 × 10−4; 2.3 × 10−4; 3.4 × 10−6 (average over 1960–2004).
Table 8. Annual fatal rate in Japan from various causes in 2004 (Honjo, 2006).

Cause of death             Annual fatal rate
All causes                 0.80 × 10−2
Cancer                     0.25 × 10−2
Heart disease              0.12 × 10−2
Cerebrovascular disease    0.10 × 10−2
Pneumonia                  0.75 × 10−3
Work related accident      0.26 × 10−4
Natural hazard             0.24 × 10−5
Traffic accident           0.58 × 10−4
Suicide (2003)             0.25 × 10−3
5 CODE CALIBRATION BY MCS

A method is proposed in this report to determine the load and resistance factors by the DVM approach combined with MCS. Only the method that uses ordinary MCS is introduced here; a method that applies the subset MCMC technique to improve the efficiency of the calculation is proposed in Kieu Le and Honjo (2009).

5.1 Background of the proposal
5.1.1 LRFD format
The reasons for recommending the LRFD format for design verification have been explained in detail in Section 2. The LRFD format employed in this report is of the multiple resistance factor type, given below:
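A presumable sketch of the multiple resistance factor format (the γR, γS notation for resistance and load factors follows the tables later in this report; the report's own equation numbering is not reproduced):

$$\sum_{i} \gamma_{R_i}\, R_i \;\ge\; \sum_{j} \gamma_{S_j}\, S_j$$

where the γRi are resistance factors (typically ≤ 1) and the γSj are load factors (typically ≥ 1).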
Both force and resistance are lumped into several terms; the Sj and Ri are functions of the basic variables, and some of the basic variables are included in both terms.
5.1.2 Merits of MCS
The performance functions used in reliability analysis are becoming more complex and non-linear. This makes it difficult to use FORM in reliability analysis, owing to time-consuming programming work, slow convergence and the existence of local optimum solutions. On the other hand, MCS is easy to apply to any complex performance function, and a stable solution can be obtained if enough computational time is provided; the computational-time problem is becoming less severe with the rapid development of computers. The authors feel the problems of MCS in code calibration are the following:
– In code calibration using MCS alone, a structure needs to be redesigned every time the factors are modified.
– It is difficult to obtain an overall view of the code calibration process when MCS is used.
5.1.3 Merits of DVM
The DVM was developed within the framework of FORM to determine factors that ensure a sufficient safety margin is secured in design. The method has the following advantages:
– Load and resistance factors can be calculated from the target reliability index, the sensitivity factors and the COVs (coefficients of variation) of S and R.
– Redesign of a structure is not required in the code calibration process, as long as the initially chosen structural dimensions give a reliability level not far from the target.
– The contribution of each load and resistance term can be evaluated through its sensitivity factor.
5.2 Code calibration by ordinary Monte Carlo simulation
In complex cases where the performance function contains basic variables in both the load and resistance functions, i.e. g(X) = R(X) − S(X) with X = (Xi), i = 1, . . . , n, and is highly non-linear in the basic variables, it is difficult, if not impossible, to determine load and resistance factors by DVM alone. A simple method which is based on the DVM concept and yet takes advantage of Ordinary Monte Carlo Simulation (OMCS) to obtain the load and resistance factors is proposed:

Step 1: Define the basic variables Xi (i = 1, . . . , n) and their PDFs.
Step 2: Carry out OMCS. The OMCS is carried out based on the probability distributions of the basic random variables Xi (i = 1, . . . , n). Nt samples are generated, and Pf may be estimated as the ratio of Nf (the number of samples falling into the failure region) to Nt. In addition, the generated samples can be used to estimate the joint density function fR,S(R, S).
Step 3: Define an approximate design point (R∗, S∗). Among the generated pairs (R, S), choose the ones close to the limit state line, and among these select the one with the maximum likelihood. The chosen pair is used as an approximation of the design point (R∗, S∗).
Step 4: Calculate αR, αS and γR, γS. The sensitivity factors can be obtained by either of the following two methods:

Method 1:
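Presumably of the form (a sketch, taking the sensitivity factors as the direction cosines of the estimated design point in the standardized space; this form reproduces the Method 1 values in Table 11):

$$\alpha_R = \frac{Z_R^{*}}{\sqrt{(Z_R^{*})^2 + (Z_S^{*})^2}}, \qquad \alpha_S = \frac{Z_S^{*}}{\sqrt{(Z_R^{*})^2 + (Z_S^{*})^2}}$$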
where (ZR∗, ZS∗) is the estimated design point in the standardized space, i.e. ZR∗ = (R∗ − μR)/σR and ZS∗ = (S∗ − μS)/σS, respectively.

Method 2:
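Presumably of the form (a sketch, estimating the sensitivity factors directly from the moments of R and S; this form reproduces the Method 2 values in Table 12):

$$\alpha_R = \frac{-\sigma_R}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad \alpha_S = \frac{\sigma_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}$$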
Eqs. (8) and (9) should be used to estimate the load and resistance factors.
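Eqs. (8) and (9) belong to an earlier part of the full report; a form consistent with the numerical results in Table 9 (design values divided by characteristic values) would be:

$$\gamma_R = \frac{\mu_R + \alpha_R \beta_T \sigma_R}{R_k}, \qquad \gamma_S = \frac{\mu_S + \alpha_S \beta_T \sigma_S}{S_k}$$

where Rk and Sk are the characteristic values of R and S.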
Table 9. Sensitivity factors and load and resistance factors obtained for the simple linear performance function example.

                 αR       αS       γR       γS
OMCS            −0.72     0.69     0.67     1.12
Subset MCMC     −0.69     0.73     0.69     1.15
True results    −0.71     0.71     0.68     1.13
Figure 10. Generated samples for the simple linear performance function.

If it is judged more appropriate to fit a lognormal distribution, the sensitivity factors can be estimated by either of the following two methods:

Method 1′:
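Presumably, in parallel with Method 1 (a sketch using the standardized logarithmic design point):

$$\alpha_R = \frac{Z_{\ln R}^{*}}{\sqrt{(Z_{\ln R}^{*})^2 + (Z_{\ln S}^{*})^2}}, \qquad \alpha_S = \frac{Z_{\ln S}^{*}}{\sqrt{(Z_{\ln R}^{*})^2 + (Z_{\ln S}^{*})^2}}$$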
where (Zln R∗, Zln S∗) is the estimated design point in the standardized space of the logarithms, i.e. Zln R∗ = (ln R∗ − μln R)/σln R and Zln S∗ = (ln S∗ − μln S)/σln S, respectively.

Method 2′:
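Presumably, in parallel with Method 2 (a sketch using the lognormal standard deviations; this form reproduces the Method 2′ values in Table 12):

$$\alpha_R = \frac{-\sigma_{\ln R}}{\sqrt{\sigma_{\ln R}^2 + \sigma_{\ln S}^2}}, \qquad \alpha_S = \frac{\sigma_{\ln S}}{\sqrt{\sigma_{\ln R}^2 + \sigma_{\ln S}^2}}$$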
Eqs. (13) and (14) should be used to estimate the load and resistance factors.
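A form of Eqs. (13) and (14) consistent with the lognormal results in Table 11 (design values computed in logarithmic space and transformed back) would be:

$$\gamma_R = \frac{\exp(\lambda_R + \alpha_R \beta_T \zeta_R)}{R_k}, \qquad \gamma_S = \frac{\exp(\lambda_S + \alpha_S \beta_T \zeta_S)}{S_k}$$

where λ and ζ denote the lognormal parameters of R and S.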
5.3 Examples
5.3.1 Linear performance function Z = R − S
R and S are independent, normally distributed random variables, R ∼ N(7.0, 1.0) and S ∼ N(3.0, 1.0) (Kieu Le & Honjo, 2009). In this example the true results are known:

True failure probability: Pf = 0.23 × 10−2
True reliability index: β = 2.83
True design point: (R∗, S∗) = (5.0, 5.0)
In this example the sensitivity factors obtained by Method 1 (Eq. (29)) are the true values. Load and resistance factors are then obtained for a given target reliability index, for example βT = 3.2; the results are also shown in Table 9. The results obtained by OMCS after 50,000 runs are as follows:
Probability of failure: Pf = 0.25 × 10−2
Reliability index: β = 2.81
Design point: D(R∗, S∗) = (5.1, 4.8)

Sensitivity factors are calculated using Method 2 (Eq. (30)), and the load and resistance factors are then obtained for βT = 3.2; the results are shown in Table 9. A similar procedure based on DVM and Subset MCMC (Kieu Le & Honjo, 2009) is also attempted, with Nt = 100 samples; an example of the generated samples is shown in Figure 10. The results are:

Probability of failure: Pf = 0.26 × 10−2
Reliability index: β = 2.79

The samples of the last step are used to estimate the design point:

Design point: D(R∗, S∗) = (5.1, 5.0)

Sensitivity factors are again calculated using Method 2 (Eq. (30)), and load and resistance factors are obtained for βT = 3.2; the results are also shown in Table 9. It may be recognized that the results obtained by the proposed method based on OMCS (or Subset MCMC) and DVM are very close to the true results; the method therefore seems to work well in this simple example.
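As a cross-check of the numbers above, the whole Step 1 to Step 4 procedure fits in a few lines for this linear example. A minimal Python sketch (the band half-width of 0.1 around the limit state line, the random seed, and the 95% fractile characteristic load are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)            # illustrative seed

# Step 1: basic variables and their PDFs (R ~ N(7,1), S ~ N(3,1))
mu_R, sig_R, mu_S, sig_S = 7.0, 1.0, 3.0, 1.0
Nt = 50_000

# Step 2: ordinary Monte Carlo simulation; Pf = Nf / Nt
R = rng.normal(mu_R, sig_R, Nt)
S = rng.normal(mu_S, sig_S, Nt)
g = R - S
Pf = np.mean(g <= 0.0)
beta = -norm.ppf(Pf)

# Step 3: approximate design point = maximum-likelihood sample near g = 0
near = np.abs(g) < 0.1                    # samples close to the limit state line
logL = norm.logpdf(R[near], mu_R, sig_R) + norm.logpdf(S[near], mu_S, sig_S)
k = np.argmax(logL)
R_star, S_star = R[near][k], S[near][k]

# Step 4 (Method 1): sensitivity factors from the standardized design point
zR = (R_star - mu_R) / sig_R
zS = (S_star - mu_S) / sig_S
aR, aS = zR / np.hypot(zR, zS), zS / np.hypot(zR, zS)

# Load and resistance factors for the target reliability index
beta_T = 3.2
R_k = mu_R                                # characteristic resistance: mean value
S_k = mu_S + 1.645 * sig_S                # characteristic load: 95% fractile (assumed)
gam_R = (mu_R + aR * beta_T * sig_R) / R_k
gam_S = (mu_S + aS * beta_T * sig_S) / S_k
print(f"Pf={Pf:.4f} beta={beta:.2f} DP=({R_star:.1f},{S_star:.1f}) "
      f"alpha=({aR:.2f},{aS:.2f}) gamma=({gam_R:.2f},{gam_S:.2f})")
```

With these settings the sketch reproduces values close to the OMCS row of Table 9.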
5.3.2 A retaining wall example
Determination of load and resistance factors
The proposed method is now applied to the reliability analysis of a gravity retaining wall (Figure 11) under the sliding failure mode (Kieu Le & Honjo, 2009). The example is taken from Orr (2005). The necessary parameters of the retaining wall, the soil beneath it and the fill behind it are as follows:
– Properties of sand beneath the wall: cs = 0, φs, γs
– Properties of fill behind the wall: cf = 0, φf, γf
– Groundwater level is at depth below the base of the wall
– Friction angle between the wall base and the underlying sand: φbs
– Thickness of the retaining wall: w = 0.4 m
The limit state function of the problem is given, as in Section 5.2, by g = R − S, where R is the total horizontal force resisting horizontal movement of the wall and S is the total horizontal force driving the wall.
Table 10. Probabilistic parameters of the basic variables in the limit state function of the retaining wall under the sliding failure mode.

Variable          Distribution   μ       σ       COV     λ        ζ
X1  γf            Lognormal      20      1.000   0.05     2.994   0.050
X2  γs            Lognormal      19      0.950   0.05     2.943   0.050
X3  γc            Lognormal      25      1.250   0.05     3.218   0.050
X4  tan φf        Lognormal      0.781   0.107   0.14    −0.256   0.136
X5  tan φs        Lognormal      0.675   0.086   0.13    −0.402   0.127
X6  tan φbs       Lognormal      0.577   0.070   0.12    −0.557   0.120
X7  q             Lognormal      15      1.500   0.10     2.703   0.100
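The λ and ζ columns of Table 10 can be reproduced from μ and σ with the standard lognormal moment relations ζ² = ln(1 + (σ/μ)²) and λ = ln μ − ζ²/2. A minimal Python sketch (variable names are illustrative):

```python
import math

# Mean and standard deviation of each basic variable (Table 10)
variables = {
    "gamma_f": (20.0, 1.000), "gamma_s": (19.0, 0.950), "gamma_c": (25.0, 1.250),
    "tan_phi_f": (0.781, 0.107), "tan_phi_s": (0.675, 0.086),
    "tan_phi_bs": (0.577, 0.070), "q": (15.0, 1.500),
}

for name, (mu, sigma) in variables.items():
    cov = sigma / mu                            # coefficient of variation
    zeta = math.sqrt(math.log(1.0 + cov ** 2))  # std. dev. of ln X
    lam = math.log(mu) - 0.5 * zeta ** 2        # mean of ln X
    print(f"{name:10s} COV={cov:.3f} lambda={lam:.3f} zeta={zeta:.3f}")
```

Running the sketch returns, e.g., λ = 2.994 and ζ = 0.050 for γf, matching the table.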
Figure 11. Description of retaining wall.

The actual forms of R and S are given as functions of the basic variables through the active and passive earth pressure coefficients, where

tan(45° − φf/2) = √(1 + tan²φf) − tan φf
tan(45° + φs/2) = √(1 + tan²φs) + tan φs

Thus γf, γs, γc, tan φf, tan φs, tan φbs and q are the basic variables of the limit state function, and some of them appear in both R and S. The probabilistic parameters of the basic variables are shown in Table 10.

OMCS has been carried out with 100,000 samples. The estimated failure probability and reliability index are:

Pf = 0.29 × 10−2 (i.e. β = 2.76)

The approximated design point is (R∗ = 168.9, S∗ = 168.8). Based on the 100,000 generated samples of (R, S), the mean and standard deviation of R and S are estimated as:

μR = 204.5, σR = 23.6, μS = 122.5, σS = 21.1

In addition, fitting tests on the generated points indicate that the joint density function fR,S(R, S) is well fitted by both the joint normal and the joint lognormal distribution functions, so fR,S(R, S) may be assumed to be either of the two. The sensitivity factors are obtained by the two methods, and the load and resistance factors are finally obtained for a target reliability index assumed to be βT = 3.1; they are shown in Tables 11 and 12.

Table 11. Results obtained for βT = 3.1; sensitivity factors obtained by Methods 1 & 1′.

          By OMCS                       By Subset MCMC
          Normal       Lognormal       Normal       Lognormal
          R      S     R      S        R      S     R      S
α      −0.57   0.82  −0.64   0.77    −0.54   0.84  −0.61   0.79
γ       0.61   1.79   0.61   1.85     0.62   1.76   0.61   1.83

Table 12. Results obtained for βT = 3.1; sensitivity factors obtained by Methods 2 & 2′.

          By OMCS                       By Subset MCMC
          Normal       Lognormal       Normal       Lognormal
          R      S     R      S        R      S     R      S
α      −0.75   0.67  −0.56   0.83    −0.73   0.68  −0.54   0.84
γ       0.57   1.69   0.63   1.91     0.57   1.66   0.63   1.87
Note that the characteristic values of R and S adopted in this example are calculated from the characteristic values of the basic variables, chosen as the mean values for X1 to X6 and the 95% fractile value for X7. It appears that the OMCS method in this case requires a very large calculation effort. Subset MCMC simulation is then carried out, with 1,000 runs, to obtain Pf of the gravity retaining wall under the sliding failure mode; the number of samples generated for each variable in a step is Nt = 100. In addition, from the samples of the basic variables generated during the subset MCMC simulation, series of values of R and S are calculated, and these R and S values are then utilized for the determination of the partial safety factors. The Pf estimated from the 1,000 simulations has a mean value of 0.27 × 10−2 (i.e. β = 2.70), with COV(Pf) = 1.40. The design point is:
γf = 20.2, γs = 19.1, γc = 24.1, tan φf = 0.588, tan φs = 0.624, tan φbs = 0.472, q = 14.9
This approximated design point is equivalent to the point (R∗ = 169.1, S∗ = 166.8). The Nt pairs of (R, S) calculated from the Nt points Xi (i = 1, . . . , n) generated in the subset F0 are utilized to estimate the mean and standard deviation of R and S, and the joint density function fR,S(R, S) is again assumed to be either the joint normal or the joint lognormal distribution:

μR = 201.1, σR = 21.8, μS = 120.8, σS = 20.2

The sensitivity factors are obtained by the two methods, and the load and resistance factors (shown in Tables 11 and 12) are obtained for the same βT and the same characteristic values of R and S as in the OMCS case. The following are some remarks on the results:
– The load and resistance factors obtained in all the cases, i.e. for the different joint density functions and the different ways of calculating the sensitivity factors, are very similar.
– The results based on the estimated design point in Table 11 (Methods 1 & 1′) are better than those based only on the samples generated in the first step of the subset MCMC procedure in Table 12 (Methods 2 & 2′), as expected.
– The results obtained by OMCS (with 100,000 samples) and by Subset MCMC are quite similar.
6 SUMMARY AND CONCLUSIONS
This report was prepared by a working group of the JGS chapter of TC23 "Limit State Design in Geotechnical Engineering Practice" and summarizes the important points to note, and the recommendations to follow, when new design verification formulas are developed based on a Level I RBD format. The following are the major recommendations of this report.
(1) Several RBD Level I verification formats, such as PFM and LRFD, have been proposed. It is recommended to use the LRFD format at present, for the reasons below:
– A designer can trace the most likely behavior of the structure to the very last stage of design if the characteristic values of the resistance basic variables are appropriately chosen.
– The format can accommodate a performance function with high non-linearity and with some basic variables appearing on both the force and resistance sides.
– Code calibration is possible even when only the total uncertainty of the design method is known, i.e. when the total uncertainty cannot be decomposed into its individual sources.
(2) The DVM should be adopted as the principal approach in code calibration. The MCS method may be used for reliability analysis. A procedure is proposed that uses the MCS method to obtain sensitivity factors and design points.
(3) It is recommended to use the mean value as the characteristic value of basic variables related to resistance. Statistical uncertainty in estimating the mean value may be taken into account. This is again based
on the philosophy that a designer should trace the most likely behavior of the structure to the very last stage of design. This is very important in geotechnical design, where engineering judgment plays an important role.
(4) The baseline technique, which states that a fractile value should be used for the characteristic value because of the stability of the resulting partial factors against changes in the variation of a basic variable, is criticized. It was shown by some simple calculations that this merit does not exist for resistance-side basic variables.

REFERENCES

AASHTO. 1994. LRFD Bridge Design Specifications, 1st Ed. Washington, D.C.: American Association of State Highway and Transportation Officials.
AASHTO. 1998. LRFD Bridge Design Specifications, 2nd Ed. Washington, D.C.: American Association of State Highway and Transportation Officials.
AASHTO. 2004. LRFD Bridge Design Specifications, 3rd Ed. Washington, D.C.: American Association of State Highway and Transportation Officials.
AIJ. 1993. Recommendations for Loads on Buildings. Architectural Institute of Japan.
AIJ. 2002. Recommendations for Limit State Design of Buildings. Architectural Institute of Japan. Japanese version.
AIJ. 2004. AIJ Recommendations for Loads on Buildings (2004). Architectural Institute of Japan.
Allen, T.M. 2005. Development of Geotechnical Resistance Factors and Downdrag Load Factors for LRFD Foundation Strength Limit State Design. Reference Manual, No. FHWA-NHI-05-052.
ANCOLD. 1994. Guidelines on Risk Assessment. Australian National Committee on Large Dams.
Au, S.K. & Beck, J.L. 2003. Subset simulation and its application to seismic risk based on dynamic analysis. Journal of Engineering Mechanics 129(8): 901–917.
Baecher, G.B. & Christian, J.T. 2003. Reliability and Statistics in Geotechnical Engineering. England: John Wiley & Sons.
Becker, D.E. 2006. Limit states design based codes for geotechnical aspects of foundations in Canada. TAIPEI2006 International Symposium on New Generation Design Codes for Geotechnical Engineering Practice, Taiwan.
Brinch Hansen, J. 1953. Earth Pressure Calculation. Copenhagen: The Danish Technical Press.
Brinch Hansen, J. 1956. Limit state and safety factors in soil mechanics. Danish Geotechnical Institute, Copenhagen, Bulletin No. 1.
Brinch Hansen, J. 1967. The philosophy of foundation design: design criteria, safety factor and settlement limit. In A.S. Vesic (ed.), Bearing Capacity and Settlement of Foundations: 9–13. Duke University.
CEN. 2004. EN 1997-1 Eurocode 7: Geotechnical design – Part 1: General rules.
Cornell, C.A. 1969. A probability based structural code. Journal of ACI 66(12): 974–985.
CSA. 2000. Canadian Highway Bridge Design Code, CSA S6-2000. Rexdale, Ontario: Canadian Standards Association.
Diamantidis, D. 2008. Background document on risk assessment in engineering: Risk Acceptance Criteria (Document #3). JCSS (Joint Committee on Structural Safety).
Ditlevsen, O. & Madsen, H.O. 1996. Structural Reliability Methods. England: John Wiley & Sons.
Ellingwood, B., Galambos, T.V., MacGregor, J.G. & Cornell, C.A. 1980. Development of a Probability Based Load Criterion for American National Standard A58: Building Code Requirements for Minimum Design Loads in Buildings and Other Structures. NBS Special Publication 577.
Ellingwood, B., MacGregor, J.G., Galambos, T.V. & Cornell, C.A. 1982. Probability based load criteria: load factors and load combinations. ASCE 108(ST5): 978–997.
GEO. 2007. Landslide Risk Management and the Role of Quantitative Risk Assessment Techniques (Information Note 12/2007).
Gulvanessian, H. & Holicky, M. 1996. Designers' Handbook to Eurocode 1 – Part 1: Basis of Design. London: Thomas Telford.
Hasofer, A.M. & Lind, N.C. 1974. Exact and invariant second-moment code format. ASCE 100(EM1): 111–121.
HKGPD. 1994. Hong Kong Planning Standards and Guidelines, Chapter 11, Potential Hazard Installations. Hong Kong.
Honjo, Y. & Kusakabe, O. 2002. Proposal of a comprehensive foundation design code: Geo-code 21 ver. 2 (keynote lecture). Foundation Design Codes and Soil Investigation in View of International Harmonization and Performance Based Design (Proc. IWS Kamakura): 95–103. Balkema.
Honjo, Y., Kikuchi, Y., Suzuki, M., Tani, K. & Shirato, M. 2005. JGS comprehensive foundation design code: Geo-code 21. Proc. 16th ICSMGE, Osaka: 2813–2816.
Honjo, Y. 2006. Some movements toward establishing comprehensive structural design codes based on performance-based specification concept in Japan. TAIPEI2006 International Symposium on New Generation Design Codes for Geotechnical Engineering Practice, Taiwan.
HSE. 1999. Reducing Risks, Protecting People. London: Health and Safety Executive.
ICG. 2003. Glossary of Risk Assessment Terms for ICG, based on recommendations of ICOLD, the IUGS working group on landslides and ISDR/UN. Edited by F. Nadim, S. Lacasse and H. Einstein.
ISO. 1998. ISO 2394 General Principles on Reliability for Structures.
JGS. 2006. JGS4001-2004 Principles for Foundation Designs Grounded on a Performance-Based Design Concept (Geo-code 21). Japanese Geotechnical Society.
JPHA. 2007. Technical Standards for Port and Harbor Facilities. In Japanese.
Kieu Le, T.C. & Honjo, Y. 2009. Study on determination of partial factors for geotechnical structure design. Proc. IS Gifu (in press).
Kulhawy, F.H. & Phoon, K.K. 2002. Observations on geotechnical reliability-based design development in North America. Foundation Design Codes and Soil Investigation in View of International Harmonization and Performance Based Design: 31–48.
Matsuo, M. 1984. Geotechnical Engineering: Concept and Practice of Reliability Based Design. Tokyo: Gihodo Shuppan. In Japanese.
Meyerhof, G.G. 1993. Development of geotechnical limit state design. Proc. International Symposium on Limit State Design in Geotechnical Engineering, Copenhagen, 1: 1–12. Danish Geotechnical Society.
Moan, T. 1998. Target levels of reliability-based reassessment of offshore structures. In Shiraishi, Shinozuka & Wen (eds), Structural Safety and Reliability (Proc. ICOSSAR'97): 2049–2056. Rotterdam: Balkema.
MOT. 1979. Ontario Highway Bridge Design Code. Ministry of Transportation, Canada.
MOT. 1983. Ontario Highway Bridge Design Code. Ministry of Transportation, Canada.
MOT. 1991. Ontario Highway Bridge Design Code, 3rd Ed. Ministry of Transportation, Canada.
Orr, T.L.L. & Farrell, E.R. 1999. Geotechnical Design to Eurocode 7. London: Springer Verlag.
Orr, T.L.L. 2000. Selection of characteristic values and partial factors in geotechnical designs to Eurocode 7. Computers and Geotechnics 26: 263–279.
Orr, T.L.L. 2005. Proceedings of the International Workshop on the Evaluation of Eurocode 7. Ireland.
Ovesen, N.K. 1989. General report, Session 30: Codes and standards. Proc. ICSMGE, Rio de Janeiro.
Ovesen, N.K. 1993. Eurocode 7: a European code of practice for geotechnical design. Proc. International Symposium on Limit State Design in Geotechnical Engineering, Copenhagen, 3: 691–710. Danish Geotechnical Society.
Paikowsky, S.G. 2004. Load and Resistance Factor Design (LRFD) for Deep Foundations. NCHRP Report 507.
Phoon, K.K., Kulhawy, F.H. & Grigoriu, M.D. 1995. Reliability-Based Design of Foundations for Transmission Line Structures. Report TR-105000. Palo Alto, CA: Electric Power Research Institute.
Phoon, K.K., Kulhawy, F.H. & Grigoriu, M.D. 2000. Reliability-based design for transmission line structure foundations. Computers and Geotechnics 26: 169–185.
Phoon, K.K., Becker, D.E., Kulhawy, F.H., Honjo, Y., Ovesen, N.K. & Lo, S.R. 2003. Why consider reliability analysis for geotechnical limit state design? LSD2003 International Workshop on Limit State Design in Geotechnical Engineering Practice, USA.
Schneider, H.R. 1997. Definition and determination of characteristic soil properties. Contribution to Discussion Session 2.3, Proc. ICSMGE, Hamburg. Balkema.
Simpson, B. & Driscoll, R. 1998. Eurocode 7 – A Commentary. London: CRC Ltd.
Simpson, B. 2000. Partial factors: where to apply them? LSD2000: International Workshop on Limit State Design in Geotechnical Engineering, Melbourne, Australia.
Slovic, P. 1987. Perception of risk. Science 236: 280–285.
Starr, C. 1969. Social benefit versus technological risk. Science 165: 1232–1238.
Takaoka, N. 1988. History of reliability theory. Journal of Structural Engineering Series 2: Evaluation of Lifetime Risk of Structures: 294–300. Japan Society of Civil Engineers. In Japanese.
Tang, W. 1971. A Bayesian evaluation of information for foundation engineering design. International Conference on Applications of Statistics and Probability to Soil and Structural Engineering.
USNRC. 1975. Reactor Safety Study. WASH-1400, NUREG 75/014.
Versteeg. 1987. External safety policy in the Netherlands: an approach to risk management. Journal of Hazardous Materials 17: 215–221.
Whitman, R.V. 1984. Evaluating calculated risk in geotechnical engineering. ASCE, J. Geotech. Eng. 110(2): 145–186.
Author index Aghajani, H.F. 347 Allen, T.M. 103 Amatya, S. 97 Au, S.K. 89, 275 Bathurst, R.J. 103 Bles, T.J. 327, 339 Bond, A.J. 111 Brassinga, H. 311 Brassinga, H.E. 327 Cao, Z.J. 89, 275 Chen, Y. 251 Chen, Y.-C. 293 Cherubini, C. 83, 165 Ching, J.-Y. 193 Ching, J.Y. 69, 127, 293, 379 Chung, M. 173 Coelho, P.A.L.F. 411 Cools, P.M.C.B.M. 339 Costa, A.L.D. 411 Cotecchia, F. 363 de Gijt, J.G. 311 de Groot, M.B. 311 de Wit, T.J.M. 327 Frank, R. 111 Fujimura, T. 147 Fukui, J. 185 Goh, E.K.H. 281 Hagiwara, M. 185 Hao, S.-Q. 211 Hara, T. 435 Harahap, I.S.H. 397 Hata, T. 217 Hata, Y. 333 Hayakawa, K. 257 Hira, M. 265 Hirose, T. 185 Honda, M. 155 Honjo, Y. 39, 75, 155, 301, 435 Hsein Juang, C. 379 Hsieh, Y.-H. 69, 379 Hu, Y.-G. 193 Huang, B. 103 Huang, H.-W. 211 Huang, H.W. 27, 229 Hudig, P. 311 Huh, J. 173 Ichii, K. 333 Ichikawa, H. 141 Ikuma, T. 319 Ishizuka, M. 371
Kani, Y. 257 Kieu Le, T.C. 75, 435 Kikuchi, Y. 39, 155, 201, 435 Kim, H.Y. 135 Kisse, A. 97 Kohno, T. 119, 177 Korff, M. 327 Koseki, J. 185 Kurata, T. 147 Kwak, K. 173 Lacasse, S. 355 Lee, J.H. 173 Lee, K.H. 135 Lesny, K. 97 Li, D.Q. 405 Li, J.H. 419 Lim, J.K. 281 Lin, H.-D. 127 Lin, W.K. 159 Litjens, P.P.T. 339 Liu, H.H. 405 Liu, X. 251 Lollino, P. 363 Maeda, K. 221 Maeda, Y. 185 Mitaritonna, G. 363 Miyata, Y. 217, 371 Mori, M. 147 Mukaitani, M. 247 Murakami, A. 147 Nadim, F. 13, 355 Nagao, T. 39 Nakane, Y. 257 Nakatani, S. 119, 177 Nakaura, T. 177 Nakayama, T. 141 Nanak, W.P. 397 Ng, S.F. 281 Ning, Z.W. 229 Nishimura, S. 147 Oishi, T. 185 Oka, T. 287 Okazaki, Y. 247 Okuda, M. 257 Orr, T.L.L. 111 Oung, O. 327 Paikowsky, S.G. 97 Park, J.H. 173 Phoon, K.-K. 69, 193, 293
Rajabalinejad, M. 425 Rungbanaphan, P. 301 Salemans, J.W.M. 327 Santaloia, F. 363 Schuppener, B. 111 Schweckendiek, T. 311 Selamat, M.R. 281 Sêco e Pinto, P.S. 51 Shimizu, Y. 141 Shinoda, M. 371 Shirato, M. 119, 177, 435 Simpson, B. 111 Soltani-Jigheh, H. 347 Suzuki, M. 141, 147, 201, 435 Tanaka, H. 287 Tanaka, K. 247 Uzielli, M. 355 van den Ham, G.A. 311 van Gelder, P.H.A.J.M. 425 van Staveren, M.Th. 339, 387 Verweij, A. 327 Vessia, G. 83, 165 Vitone, C. 363 Vrijling, J.K. 425 Waku, A. 221 Wang, Q. 89 Wang, Y. 89, 275 Watabe, Y. 39 Wu, S.B. 405 Wu, T.H. 1 Xie, X.Y. 229 Xue, Y.D. 27 Yamamoto, K. 265 Yamamoto, R. 247 Yamamoto, Y. 185 Yang, Y.Y. 27 Yen, M.-T. 127 Yoon, G.L. 135 Yoon, Y.W. 135 Yoshida, I. 301 Yoshinami, Y. 141 Yuan, Y. 211 Yuasa, T. 221 Zhang, L.M. 159, 237, 419