NDT Data Fusion
NDT Data Fusion
X.E. Gros, DUT, BSc (Hon), MSc, PhD
Independent NDT Centre, France
A member of the Hodder Headline Group
LONDON • SYDNEY • AUCKLAND
Copublished in North, Central and South America by John Wiley & Sons, Inc., New York • Toronto

First published in Great Britain in 1997 by Arnold, a member of the Hodder Headline Group, 338 Euston Road, London NW1 3BH
Copublished in North, Central and South America by John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012

© 1997 X.E. Gros

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronically or mechanically, including photocopying, recording or any information storage or retrieval system, without either prior permission in writing from the publisher or a licence permitting restricted copying. In the United Kingdom such licences are issued by the Copyright Licensing Agency: 90 Tottenham Court Road, London W1P 9HE.

Whilst the advice and information in this book is believed to be true and accurate at the date of going to press, neither the author nor the publisher can accept any legal responsibility or liability for any errors or omissions that may be made.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN 0 340 67648 5
ISBN 0 470 23724 4 (Wiley)

Typeset in 10/12 pt Times by Mathematical Composition Setters Ltd, Salisbury, Wiltshire SP3 4UF. Printed and bound in Great Britain by St. Edmundsbury Press, Bury St. Edmunds, Suffolk and Hartnolls Ltd, Bodmin, Cornwall.
To Rachael
Contents
Preface ix
Acknowledgements xi
List of Abbreviations xiii

1 Introduction 1
1.1 Introduction 1
1.2 In brief 2

2 Data Fusion - A Review 5
2.1 Introduction 5
2.2 Data fusion system models 6
2.3 Fusion methodology 22
2.4 Data integration and fusion applications 34
2.5 A practical example of NDT data fusion 34
2.6 Conclusion 36

3 Non-destructive Testing Techniques 43
3.1 Introduction 43
3.2 Visual inspection 44
3.3 Liquid penetrant inspection 46
3.4 Magnetic particle inspection 48
3.5 Eddy current testing 51
3.6 Alternating current potential drop 57
3.7 Alternating current field measurement 58
3.8 Ultrasonic testing 59
3.9 Radiographic inspection 66
3.10 Additional NDT methods 71
3.11 Computers in NDT 72
3.12 Performance assessment of NDT methods 73

4 Scientific Visualisation 82
4.1 Introduction 82
4.2 Data visualisation 83
4.3 Volume visualisation 87
4.4 Animation and virtual reality 88
4.5 Fundamentals of image processing 89
4.6 Visualisation in NDT 91
4.7 Summary 91

5 A Bayesian Statistical Inference Approach to the Non-destructive Inspection of Composite Materials 95
5.1 Introduction 95
5.2 Composite materials 96
5.3 Current NDT methods for the inspection of composites 97
5.4 Description of test specimens 101
5.5 Methodology and experimental design 104
5.6 Inspection results 104
5.7 NDT data integration and fusion 114
5.8 Discussion 121

6 Application of NDT Data Fusion to Weld Inspection 127
6.1 Introduction 127
6.2 Weld samples 128
6.3 Non-destructive examination of the test specimens 129
6.4 NDT data fusion 141
6.5 Discussion 177

7 Perspectives of NDT Data Fusion 180
7.1 Concluding comments 180
7.2 The future of NDT data fusion 184

Glossary 189
Bibliography 195
Index
Preface
This book is the first to be devoted exclusively to the concept of multisensor integration and data fusion applied to non-destructive testing (NDT). Data fusion is a rapidly evolving technology, and is now the most recent addition to NDT signal processing methodologies for efficient understanding and interpretation of data. This text provides a valuable source of information on NDT, data fusion, composite inspection, scientific visualisation and performance analysis of NDT methods. The text is intended for inspectors, students and researchers working in the fields of NDT, signal processing and measurement and testing, and delivers a comprehensive, easy-to-read guide on the concept of NDT data fusion. Its main objective is to initiate the readers into the subject by introducing data fusion processes. Problems are approached progressively through detailed, original experimental case studies, and solutions gradually implemented so that beginners can follow the procedures effectively. In addition, the oil, nuclear and aerospace industries may find the application of data fusion to weld and composite inspections worthwhile. NDT Data Fusion offers practical guidance for those wishing to develop and explore NDT data fusion further. It is intended to offer the most comprehensive introduction available at this time to the concepts of NDT and multisensor data fusion.

In order to maintain a balance between theoretical and experimental discussions, chapters 1 to 4 present the theory behind NDT data fusion while chapters 5 to 7 concentrate on the implementation phase. Multisensor NDT data fusion is introduced in the first chapter, which gives the readers a general idea of the applications of non-destructive examination (NDE), its limitations and how data fusion techniques can contribute to the improvement of the overall performance of a non-destructive inspection. Chapter 2 presents a review of multisensor data fusion models, and the statistical and probabilistic methods currently used to combine information from multiple sources. A survey of data integration and fusion applications as well as a practical example of NDT data fusion are discussed. Existing non-destructive testing techniques, their advantages and limitations, are outlined in chapter 3. This chapter offers synthesised information about NDT techniques as well as their possible industrial applications. The computer visualisation of scientific data constitutes the topic of the fourth chapter. A brief description of the tools available and the applications of scientific visualisation in industry, and more particularly in NDT, is presented and illustrated. Virtual reality, computer animation and image processing techniques are also addressed.

The first implementation of multisensor NDT data fusion in the assessment of composite materials using a Bayesian statistical inference approach is discussed in chapter 5. Inspection of composite samples currently used in the aerospace industry using multiple NDT techniques is described. The experimental results of these inspections are presented and the
performance of these techniques evaluated using probabilistic and statistical processes. The integration and combination of information from multiple eddy current sensors, using a Bayesian approach, and the outcome of this implementation, are discussed and analysed. Chapter 6 investigates NDT data fusion applied to weld inspection. The applications of data fusion highlighted in this chapter are directly relevant to the nuclear and offshore industries. A Bayesian statistical inference approach and the Dempster-Shafer theory of evidence are used to combine data from multiple NDT instruments; their advantages and limitations are discussed in this chapter. Finally, chapter 7 concludes this book with discussion and comments on the current and future trends of NDT data fusion and its impact on industry. A bibliography of text and literature related to NDT, data fusion, visualisation and artificial intelligence is also included, as well as a glossary of some of the most important terms used in these disciplines.

It is hoped that NDT Data Fusion will give the reader a general overview of what is still required and what could be achieved to maintain high safety levels in industry, and will provide engineers, researchers, students and NDT inspectors with a more complete picture and a more accurate assessment of structural integrity than are currently possible with a single NDT technique. Finally, it is hoped that readers will gain an understanding of the great benefits which can be achieved by implementing multisensor data fusion, not only in NDT but in any discipline involved in measurement and testing.

X.E. Gros
Acknowledgements
The author would like to thank P. Strachan and D.W. Lowden from the Robert Gordon University, Aberdeen, for the useful advice and support they provided throughout his research work on multiprobe NDT data fusion. The author also wishes to acknowledge G. MacGregor (Core Technical) for his interest in this research and for the loan of NDT equipment without which measurements could not have been carried out. Special thanks are also due to D. Graham and to I. Findlay for access to radiographic and infrared thermographic equipment. In addition, the author would like to acknowledge J. Bousigue (Ingenieur E.N.S.I. de Caen, D.E.A. Traitement et Synthese d'Image) for his invaluable discussion and generous help in programming. Last but not least, the author cheerfully thanks Dr R.D. Wakefield for her faithful support, patience and encouragement during completion of this book.
List of Abbreviations
These acronyms and abbreviations are official terms related to research and NDT in Europe. They are widely used in present-day papers and NDT literature. This list is not definitive and is given only for information, as some of the terms are used in this book.

AC      Alternating Current
ACFM    Alternating Current Field Measurement
ACPD    Alternating Current Potential Drop
ADC     Analog-to-digital Converter
AE      Acoustic Emission
AGOCG   Advisory Group on Computer Graphics
AI      Artificial Intelligence
apE     Animation Production Environment
A-Scan  Amplitude Scan
ASCII   American Standard Code for Information Interchange
AVS     Application Visual Software
Bel     Belief
bpa     Basic Probability Assignment
BS      British Standard
B-Scan  Brightness Scan
BVID    Barely Visible Impact Damage
CAD     Computer Aided Design
CCD     Charge Coupled Device
CIE     Commission Internationale de l'Eclairage
C3I     Command Control Communication and Intelligence
CCTV    Closed Circuit Television
CMYK    Cyan-Magenta-Yellow-Black
COD     Crack Opening Displacement
CPU     Central Processing Unit
CRO     Cathode Ray Oscilloscope
CRT     Cathode Ray Tube
C-Scan  Contrast Scan
CT      Computerised Tomography
DAC     Digital-to-analog Converter
dB      Decibel
DC      Direct Current
DCPD    Direct Current Potential Drop
DGS     Distance-Gain-Size
DPEC    Deep Penetration Eddy Currents
EC      Eddy Current
EMA     Electromagnetic Array
EMAT    Electromagnetic Acoustic Transducer
emf     Electromotive Force
ES      Expert System
ET/ECT  Eddy Current Testing
FFD     Focus-to-film Distance
FFT     Fast Fourier Transform
GEP     Generalised Evidence Processing
GFRP    Glass Fibre Reinforced Plastic
GMAW    Gas Metal Arc Welding
GTAW    Gas Tungsten Arc Welding
HAZ     Heat Affected Zone
He-Ne   Helium-Neon
HSV     Hue-Saturation-Value
HVT     Half Value Thickness
IACS    International Annealed Copper Standard
ID      Inner Diameter
IEEE    Institute of Electrical and Electronics Engineers
IIT     Image Intensifier Tube
IRT     Infrared Thermography
IT      Information Technology
IQI     Image Quality Indicator
KBS     Knowledge Based System
LCD     Liquid Crystal Detector
LOSWF   Lack of Side Wall Fusion
LPI     Liquid Penetrant Inspection
LR      Likelihood Ratio
MPI     Magnetic Particle Inspection
MPT     Magnetic Particle Testing
MRI     Magnetic Resonance Imaging
MRF     Markov Random Field
NDA     Non-destructive Assessment
NDE     Non-destructive Examination
NDI     Non-destructive Inspection
NDT     Non-destructive Testing
NMR     Nuclear Magnetic Resonance
NN      Neural Network
NP      Neyman-Pearson
OD      Outer Diameter
OP      Operational Amplifier
PC      Personal Computer
PCB     Printed Circuit Board
PFC     Probability of False Calls
PISC    Programme on the Inspection of Steel Components
Pls     Plausibility
POD     Probability of Detection
P-Scan  Projection Scan
PT      Penetrant Testing
QA      Quality Assurance
QC      Quality Control
QNDE    Quantitative Non-destructive Evaluation
RFEC    Remote Field Eddy Current
RGB     Red-Green-Blue
ROC     Receiver (or Reliability) Operating Characteristic
ROV     Remotely Operated Vehicle
RT      Radiographic Testing
RTR     Real-time Radiography
SAFT    Synthetic Aperture Focusing Technique
SAW     Surface Acoustic Wave
SCC     Stress Corrosion Cracking
SEM     Scanning Electron Microscopy
SFD     Source-to-film Distance
SNR     Signal-to-noise Ratio
SP      Signal Processing
SQUID   Superconducting Quantum Interference Device
3-D     Three-dimensional
TOFD    Time-of-flight Diffraction
TV      Television
UT      Ultrasonic Testing
UV      Ultraviolet
VDA     Visual Data Analysis
VDU     Visual Display Unit
VT      Visual Testing
XR      X-ray Radiography
1
Introduction

Science is nothing but trained and organised common sense.
T.H. Huxley, 1878
1.1 Introduction
In order to improve manufacturing quality and ensure public safety, components and structures are regularly inspected for defects or faults which may reduce their structural integrity. Among the methods of testing developed for maintenance and inspection purposes, non-destructive testing (NDT) techniques present the advantage of leaving the components undamaged after inspection. Such techniques find applications in the aerospace [1,2], transport [3,4], nuclear [5,6], food [7] and offshore industries [8,9]. Most NDT methods can now be automated and computer controlled in order to facilitate signal interpretation [10,11]. Despite these improvements, non-destructive examinations (NDE) are usually performed by a qualified NDT inspector using NDT techniques which are applied on an individual basis.

Scientific measurements based on a single sensor can provide only limited information about the environment in which it operates. Because each NDT method presents different advantages and limitations, the use of more than one method is usually required to inspect a material fully. For example, ultrasonic testing helps in the detection of internal defects while eddy current examination is more appropriately applied in the detection of surface breaking defects. However, information from different NDT systems can be conflicting, incomplete or vague if looked at as discrete data. The concept of data fusion can be used to combine information from multiple NDT systems and help in decision making to reduce human interpretation error. Data fusion can be defined as the synergistic use of information from multiple sources in order to assist in the overall understanding of a phenomenon.

Multisensor data integration and fusion have gained popularity in military and robotics applications and more recently in non-destructive testing; data fusion applied to NDT was first introduced in 1993 [12] and research interests are rapidly increasing throughout Europe [13-15]. During the last decade, although considerable research effort has gone into the application of data fusion to robotics [16,17], imaging techniques [18] and target tracking [19,20], relatively little use has been made of the concept in NDT. Thus the objective of this text is to present recent research advances of interest to the NDT community by taking the existing base of published knowledge and adapting and extending this into a model which can be used to enhance the value and cost-effectiveness of non-destructive methods of testing, analysis and evaluation.
The development of a data fusion process to combine information from multiple non-destructive testing sensors, in order to provide a more complete picture and a more accurate assessment of structural integrity than is currently possible with a single NDT method, is described in this book. A review of the existing multisensor data integration and fusion models, methods and applications is discussed in chapter 2, to help determine and understand the data fusion processes to be used in the implementation phase. The actual NDT techniques available, their advantages and limitations, are described in the third chapter, while chapter 4 presents an introduction to scientific visualisation and identifies the usefulness of visualisation methods to present NDT data efficiently.

NDT data fusion, implemented through two different approaches, is analysed in chapters 5 and 6. First, a Bayesian statistical approach was used to make inferences and test binary hypotheses from information collected from multiple eddy current sensors used to inspect composite materials. This approach demonstrated the potential of Bayesian theory, and visualisation enabled data to be presented in a colour-coded visual format. In addition, the efficiency of different NDT techniques used in the inspection of composite materials was analysed using statistical theories which are described in this book. The second data fusion approach concerned the inspection of welds. The procedures carried out can be of direct relevance to the nuclear and offshore industries, which are currently inspecting welds using more than one NDT method. A Bayesian statistical inference approach and the Dempster-Shafer theory of evidence were used to combine information from (i) multiple NDT sensors of a similar type but from different instruments, and (ii) different NDT sensors. A study was made of the performance and efficiency of each approach to combine data effectively and to provide the user with valuable results, and the most appropriate approach for NDT data fusion was identified. This book concludes with personal views regarding the future of NDT data fusion, its implications in industry and how it can be developed further.

1.2 In brief
Different probability approaches to NDT data fusion were studied and their efficiency in combining information was assessed using statistical theories such as probability of detection [21] and receiver operating characteristic curves [22]. The NDT data fusion process implemented presents results in the form of a probability associated with a measurement, which is used to make inferences [23]. Because analogue signals on cathode ray tubes are difficult to analyse, a data visualisation approach was adopted to facilitate signal interpretation and provide the user with qualitative and quantitative information about defects. This information is very useful to the structural engineer who needs to advise on possible levels of failure of a component. Visualisation of NDT data and fusion of information at pixel level were also performed, as described in this book.

References

1. Hobbs C, Temple A. The inspection of aerospace structures using transient thermography, April 1993, The British Journal of Non Destructive Testing, 35(4), 183-9.
2. Wassel AB. Safety and reliability in the air, June 1993, The British Journal of Non Destructive Testing, 35(6), 315-18.
3. Egelkraut K. Are the railways the real pioneers of NDT?, May 1994, Insight, 36(5), 306-9.
4. Gartside C. Automated ultrasonic testing of rail axles on TGV and Channel Tunnel trains, May 1994, Insight, 36(5), 310-12.
5. Rylander L, Gustafsson J. Non-destructive examination of the primary system in Ignalina nuclear power plant, April 1994, Insight, 36(4), 210-12.
6. Gartside C, Hurst J. Application of TOFD inspection technique to fasteners in power generating plant, April 1994, Insight, 36(4), 215-17.
7. Lacey RE, Payne FA. Ultrasonic velocity in used corn oil as a measure of quality, 1994, ASAE Transactions, 37(5), 1583-9.
8. Rogers LM. Sizing fatigue cracks in offshore structures by the acoustic emission method, Sept. 1994, Insight, 36(9), 661-5.
9. Raine GA. An alternative method for offshore inspection, Sept. 1994, Insight, 36(9), 678-82.
10. McNab A, Dunlop I. Advanced visualisation and interpretation techniques for the evaluation of ultrasonic data: the NDT workbench, May 1993, The British Journal of Non Destructive Testing, 35(5), 233-40.
11. Smith RA. Evaluation and accuracy assessment of Andscan - a portable non-destructive scanner - Part 1: Andscan hardware and software, April 1995, Insight, 37(4), 284-9.
12. Edwards I, Gros XE, Lowden DW, Strachan P. Fusion of NDT data, Dec. 1993, British Journal of Non Destructive Testing, 35(12), 710-13.
13. Georgel B, Lavayssiere B. Fusion de donnees: un nouveau concept en CND, 24-28 Oct. 1994, Proceedings of the 6th European Conference on Non Destructive Testing, Nice, France, 1, 31-5.
14. Johannsen K, Heine S, Nockemann C. New data fusion techniques for the reliability enhancement of NDT, 24-28 Oct. 1994, Proceedings of the 6th European Conference on Non Destructive Testing, Nice, France, 1, 361-5.
15. Gros XE, Strachan P, Lowden DW. A Bayesian approach to NDT data fusion, May 1995, Insight, 37(5), 363-7.
16. Carrol MS, Meng M, Cadwallender WK. Fusion of ultrasonic and infrared signatures for personnel detection by a mobile robot, Proceedings of the SPIE Conference, 1611, Sensor Fusion IV, 1991, 619-29.
17. Mandelbaum R, Mintz M. Active sensor fusion for mobile robot exploration and navigation, Proceedings of the SPIE Conference, 2059, Sensor Fusion VI, Sept. 1993, 120-9.
18. Pinz AJ, Bartl R. Information fusion in image understanding: Landsat classification and Ocular Fundus images, Proceedings of the SPIE Conference, 1828, Sensor Fusion V, Nov. 1992, 276-87.
19. Stewart L, McCarty P. The use of Bayesian belief networks to fuse continuous and discrete information for target recognition, tracking and situation assessment, Proceedings of the SPIE Conference, 1699, Signal Processing, Sensor Fusion and Target Recognition, 1992, 177-85.
20. Thomopoulos SCA, Chen BH. Fusion of Ladar and FLIR data for enhanced automatic target recognition, Proceedings of the SPIE Conference, 2093, Substance Identification Analytics, Oct. 1993, 600-9.
21. Berens AP, Hovey PW. Evaluation of NDE reliability characterization, Dec. 1981, AFWAL-TR-81-4160, Vol. 1, Air Force Wright-Patterson Aeronautical Laboratories.
22. Centor RM, Keightley GE. Receiver operating characteristics (ROC) curve area analysis using the ROC Analyzer, Nov. 1989, Proceedings of the 13th Symposium on Computer Applications in Medical Care, IEEE Publications, 222-6.
23. Gros XE, Strachan P, Lowden D. Theory and implementation of NDT data fusion, 1995, Research in Non Destructive Evaluation, 6(4), 227-36.
2
Data Fusion - A Review

Data fusion is deceptively simple in concept but enormously complex in implementation.
US Department of Defense, 1990
2.1 Introduction
Scientific measurements using identical or disparate multiple sensors generate large amounts of data of similar or different class which need to be processed in a meaningful way. Owing to the increasing demand for more accurate information, a practical and robust procedure needed to be developed to manage data efficiently, in order to improve system reliability. The systematic integration of multisensor information is known as data fusion. Its aim is to combine and manage multisensory data (by integrating information from multiple sources) in order to obtain a more complete evaluation of the environment in which the sensors operate. Multisensor data integration and fusion can be described as the synergistic use of information from multiple sources to assist in the overall understanding of a phenomenon and to measure evidence or combine decisions.

The data fusion concept is not a new process; it appears in scientific literature of the late 1960s [1,2] in a theoretical form using mathematical algorithms before being implemented in the 1970s and 1980s in multiple disciplines [3-5]. In 1984 a data fusion sub-panel was established in the USA, organising conferences, promoting data fusion to industry and educational establishments, identifying the needs of industry and coordinating data fusion projects. From a survey of publications related to data fusion, it has been estimated that worldwide more than 50 universities and industrial companies are currently performing (or have carried out) research in multisensor data fusion and integration.

When it first appeared in the literature, the term 'data fusion' had no particular meaning for many scientists. Only an industry in need of an efficient and operator-independent data management and analysis system, with substantial capital, was able to invest and begin research in this field. For these reasons, most of the early data fusion projects were military oriented, their principal objectives being to improve the efficiency of national defence by developing a man-machine interface system. This would reduce the skills required for data interpretation and decision making operations for battlefield surveillance or tactical situation assessment. Therefore the majority of the publications relating to data fusion have been focused on defence applications where its potential was realised [6-11]. Nowadays the areas of application of data fusion span a broad range of disciplines such as robotics [12], non-destructive evaluation [13], pattern recognition [3], geoscience [14], medicine [15] and even finance [16].
This chapter gives a broad overview of what has been achieved to date and shows the potential of the technique by presenting a résumé of several data fusion implementation areas. A survey of the most frequent data fusion systems together with the advantages and limitations of some of these techniques is also included. The last section describes a practical implementation of data fusion applied to non-destructive evaluation.

2.2 Data fusion system models
This section describes multiple data fusion models and compares centralised and distributed fusion systems. The role of data fusion, as viewed through several publications, and its use in managing uncertainty and improving accuracy are presented. The features that characterise multiple sensor devices as opposed to single sensor devices are briefly reviewed.

2.2.1 FROM BIOLOGY TO TECHNOLOGY

The adaptation of data fusion to technology appeared through artificial intelligence, the theory of which is based on the development of an artificial system which will be able to reproduce human reasoning. Data fusion is carried out by the human brain when, for example, associating images and sound while watching television. Some interesting examples of human and other animal data fusion processes are discussed by Luo and Kay [17], whose paper presents a fundamental review of data fusion. Pearson et al. [18] present an analogy between the neural system of barn owls to detect and locate prey, and an artificial neural network for target tracking. A barn owl performs fusion of visual and acoustic information via a series of four computational maps whose combined information is used for target localisation. Using a similar artificial neural network system, these authors produced an efficient target location mechanism which could be adapted for technical applications.

The fusion of multiple information by humans occurs every time the senses are stimulated by appropriate signals. Our sensors, for example eyes, ears, nose, tongue and skin, are fusing sight, hearing, smell, taste and tactile information in our brain. The sound of a voice combined with visual information helps in identifying a person; Fig. 2.1 illustrates this example. The human brain is probably the best analogy to a data fusion system. Artificial intelligence (AI) is a science which is trying to reproduce the advantages of human behaviour, such as reasoning, identification or combination of information, in a more technological way in order to minimise human error [19]. Rauch [20] presents aspects of tactical data fusion using expert systems for decision making. Artificial intelligence systems still need more research, and significant progress will have to be made before they are able to compete with the human fusion centre on an autonomous and consistent basis. The scientific reality is not as simple as it appears; several parameters have to be taken into account by an artificial system in order to perform efficient integration of information. The quality and accuracy of the sensors, the environment in which they operate and the type of information collected are all factors which may influence the quality of a fusion system if they have not been clearly specified and adequately assessed.

2.2.2 MULTIPLE VERSUS SINGLE SENSOR DEVICES

Sensors are devices used to obtain information from the environment in which they operate. For example, radar in aerospace applications is used to detect the presence of a
[Fig. 2.1 Illustration of the human data fusion system]

vehicle, underwater sonar gives information on the range of an object, and an ultrasonic sensor in non-destructive examination aids in the location of an internal flaw in a component. Information from a single sensor can be very limited; in the previous examples, radar information would be more complete if the vehicle could be identified, and it would be more valuable if the shape of objects detected by the sonar could be elucidated. In non-destructive evaluation, it is essential to detect surface flaws as well as internal ones. Measurements taken using single sources are not fully reliable and are very often incomplete due to the operating range and limitations which characterise each sensor. The transmission of information is also dependent on the reliability of the system. Moreover, the electronics of the sensor, the operational environment (underwater, space) or natural phenomena such as storm and lightning can affect sensor reading by reducing the signal-to-noise ratio.

The use of multiple sensors has numerous advantages over single sensor instruments. Because of the technical features which characterise each sensor, redundant and/or complementary observations about a measurand are made. The combination of this information can be used to generate a more complete picture of the environment than is currently obtainable with a single sensor. Using the previous examples, a satellite image could be coupled with radar information, an infrared camera added to a sonar and an eddy current probe combined with an ultrasonic sensor, to enable the missing information, in this case vehicle identification, depth information and flaw information, to be gained.

A multiple sensor device can include any instrument with several sensors of identical or dissimilar types used to measure a physical quantity. The simultaneous use of similar sensors can be very advantageous when large areas need to be covered in a short time, or to assess the accuracy of a reading by comparing multiple outputs. Magee and Aggarwal [21] and Krzysztofowicz and Long [22] presented the advantages of using multiple sensors over a single sensor using several examples, and Smith et al. [23] defined an algorithm to detect faults in multisensor probes. Dawn [24] presented some fundamental limits in multisensor
data fusion, and Chao et al. [25] demonstrated that increasing the number of sensors led to a significant reduction in error. The probability of error decreases asymptotically with the number of sensors (Fig. 2.2). However, increasing the number of sensors also increases the complexity of a system.
[Fig. 2.2 Probability of error versus number of sensors]
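The trend sketched in Fig. 2.2 can be reproduced with a short numerical example. The sketch below is not from the book: it assumes n independent sensors that each make a wrong binary decision with the same probability p, fused by a simple majority vote; the value p = 0.2 is illustrative.

```python
from math import comb

def majority_vote_error(p: float, n: int) -> float:
    """Probability that a majority of n independent sensors, each wrong
    with probability p, produces a wrong fused decision (n taken odd)."""
    # A wrong fused decision requires more than half the sensors to be wrong.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Error falls asymptotically as sensors are added, as sketched in Fig. 2.2.
for n in (1, 3, 5, 7, 9):
    print(n, round(majority_vote_error(0.2, n), 4))
```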
The following benefits can be identified in the use of multiple sensor devices:

• a reduction in measurement time
• a downtime reduction and an increase in reliability
• redundant and complementary information
• a higher signal-to-noise ratio
• a reduction in measurand uncertainty
• a more complete picture of the environment.
All of these result in an overall increase in system performance. Information from multiple sources needs to be effectively combined in a coherent and efficient manner in order to compensate for their limitations and deficiencies. Several data fusion models have been developed which are presented in the next section.

2.2.3 SENSOR MANAGEMENT

Data integration is a process which gathers sensor information and relates it to other information in order to identify common sources. Because sensor information can differ in time, space and accuracy, an efficient sensor management system must be considered.

Sensor information in data fusion

The decreasing cost of sensors has made possible the use of more sensors in science. Each sensor can produce either a single signal/decision or multiple signals (subset of hypotheses).
The data from any type of sensor can be fused providing that each sensor refers to the same measurand. Sensor selection is difficult and depends on the type of information required as well as the application. The use of both identical and different sensors presents advantages and disadvantages. With identical sensors, the signal output is in an identical format which can be fused with minimum processing and can be used to validate the information provided by a previous sensor. By comparing information from multiple identical sources, redundant information increases the certainty and reliability of an inspection. Lee and Van Vleet [26] gave an estimate of the error between multiple identical sensors in order to improve the efficiency of a fusion system. Durrant-Whyte [27] used multiple cameras to track objects. However, the use of identical sensors is limited, as they provide the same type of information and present similar strengths and weaknesses.

The use of different sensors presents the advantage of providing complementary information which can be fused to enhance the overall performance of an inspection. In this case, the signals will have to be processed to obtain an identical data format. Identification of the sensors in operation has to be performed to process the data according to sensor type. The multisensor inputs must be related to one another in both time and space for real-time data fusion applications. In non-real-time applications, input data may be related only in space. Table 2.1 presents a survey of the different types of sensor which have been used in data fusion. Among the most commonly used sensors are radar, cameras and infrared [28]. It is not surprising to find these sensors at the top of the list as they are widely used in military applications for surveillance and target tracking.

Table 2.1 Survey of typical sensors used in data fusion

Sensor             Output format  Applications                           References
Optical sensor     Image          Mobile robot guidance                  18, 26, 27, 31, 34, 35, 59, 61, 62, 65, 72, 73, 77, 90, 100
Radar              Pulse signal   Target detection and target tracking   10, 29, 30, 36, 42, 49, 72, 85, 94, 96
Infrared sensor    Image          Object identification                  10, 26, 29, 31, 36, 72, 86, 98
Satellite          Image          Surveillance and pattern recognition   3, 4, 14, 32, 33
Ultrasonic sensor  Pulse signal   Mobile robot guidance                  34, 37, 53, 59, 60, 74, 77, 90
NDT sensor         Voltage        Materials examination                  13, 83, 91, 103
Sonar              Pulse echo     Obstacle detection                     36, 37, 53
Laser              Image          Pattern recognition                    21, 60, 80
X-ray              Image          Medical                                15
Combining laser radar images with infrared images is very common for target detection [10,29-31]. Johnson et al. [32] and Ehlers [33] gave examples of satellite image data fusion for missile location and weather broadcast applications. Franklin and Blodgett [14] described the application of data fusion in the field of geoscience, by fusing multiple satellite images used for classification, discrimination of vegetation communities, and analysis and mapping of ecological areas.

Sensors are used because they help to enhance human senses or provide additional information otherwise unobtainable (e.g. sonar, radar). One of the major advantages of using multisensor systems is that information can be more accurate and more rapidly transferred. Robotics make great use of sensors for position location, distance assessment, object recognition and guidance. Richardson and Marsh [34] and Shapiro and Mowforth [35] combined vision, tactile and range sensors for industrial mobile robots to perform assembly tasks and classification. Flynn [36] addressed the problem of combining information from sonar and infrared sensors for mobile robot navigation. A sonar gives range measurement as a dense sample of data but no depth information. Infrared sensors provide good depth information but poor distance measurement. Information from each sensor is integrated to overcome the deficiency of the other in order to produce a more accurate representation of the environment in which the robot operates [37]. McCoy [11] described an application of data fusion for fuel management requirements of military fighter aircraft. Sensor characteristics are a limiting factor in the performance of a data fusion system.

Sensor performance

The performance and potential of each sensor used needs to be established in order to assign weight of evidence, for example. The uncertainty of each of the sensors listed in Table 2.1 can be modelled as a Gaussian distribution; this will also be the case for NDT sensors. Sensor modelling is a very important characteristic for a multisensor fusion system using a Gaussian distribution. System accuracy is limited by the resolution, sensitivity and precision of sensors. Only by knowing the limit of each sensor will meaningful information with a high confidence level be extracted. Uncertainty and errors are other factors which may cause problems for signal interpretation and decision making. The most common sources of uncertainty are:

• little or no knowledge about a measurement;
• incomplete measurement (when data are approximated rather than waiting for complete data which may be time-consuming and costly);
• limitations of the system.
Sensor uncertainty can be a source of wrong decision making. In NDT, for example, uncertainty may mean unnecessary repair cost or structural failure if a major fault has been detected but wrongly identified and no action taken. Figure 2.3 shows some common types of errors.

Sensor performance can be statistically represented using detection probability criteria. Such a criterion can be the probability of detection (POD) of a given measurand by a specific sensor. In NDT, POD curves are plotted against flaw size and are used to assess the potential and limitations of a technique [39]. An idealised POD curve is a step function, but more realistic POD curves are not as perfect (Fig. 2.4). A system should be able to detect all flaws above a critical flaw size, denoted C_s, in order to fulfil quality and safety standards. Sensor performance can also be expressed as a receiver operating characteristic (ROC) graph which plots the probability of detection versus the probability of false alarm (Fig. 2.5) [40].
[Fig. 2.3 Common types of errors (modified from Giarratano and Riley [38])]
[Fig. 2.4 Typical POD versus flaw size curve]

Van Dijk and Boogaard [41] expressed the performance of a system as

    k = 1 - 2√((1 - POD) PFC)    (2.1)
where POD is the probability of detection and PFC the probability of false call.
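Equation (2.1) can be evaluated directly; in the short sketch below the POD and PFC values are purely illustrative, not taken from the book.

```python
from math import sqrt

def performance_index(pod: float, pfc: float) -> float:
    """Van Dijk and Boogaard's system performance measure, equation (2.1):
    k = 1 - 2 * sqrt((1 - POD) * PFC)."""
    return 1.0 - 2.0 * sqrt((1.0 - pod) * pfc)

print(performance_index(1.00, 0.00))  # ideal system: k = 1
print(performance_index(0.90, 0.05))  # k ~ 0.86
print(performance_index(0.70, 0.30))  # k = 0.40: misses and false calls both penalised
```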
[Fig. 2.5 Typical ROC curve]
For a fictitious system, the performance increases as k increases, as shown in Fig. 2.5. ROC curves are very often used to compare the performance between two or more sensors where each set of sensor data is assumed to be statistically independent. The major advantage of ROC curves compared to POD curves is that false calls are taken into account to plot a ROC curve; with a POD curve there is no information on the number of false calls of a system. In practice, ROC curves are difficult to realise. Evaluation and characterisation of the performance of a sensor have to be assessed prior to fusion and before applying a weight to a particular system. Parra-Loera et al. [42] presented a methodology to determine sensor confidence factors for a multitarget tracking environment using the Dempster-Shafer mathematical theory of evidence. Blackman and Broida [94] evaluated the performance of multiple sensor tracking systems in aerospace applications. The system can be very complex, especially if we update the weight as a function of different parameters which may influence a sensor (i.e. location, temperature, experimental setup).

2.2.4 DATA FUSION MODELS

As the terminologies - data fusion, data integration, multisensor integration - become more widely used in day-to-day scientific publications, their meaning needs to be clarified. Waltz and Llinas [8], Hall [9] and Rothman and Denton [43] gave their views and definitions of what data fusion really is. In 1990, the US Department of Defense defined data fusion as 'a technology which involves the acquisition, integration, filtering, correlation and synthesis of useful data from diverse sources for the purposes of situation/environment assessment, planning, detecting, verifying, diagnosing problems, aiding tactical and strategic decisions, and improving systems performance and utility'. This is a very complex definition oriented towards military applications rather than a general explanation of data fusion. In simple terms it can be summarised as the processing, interpretation and use of data from multiple sources.

Data fusion is used in an important variety of topics and technologies. A general data fusion system model capable of handling various applications is very difficult, if not impossible, to design. As a consequence, various data fusion models can be found in the literature. General reviews on data fusion were presented in 1988 by Blackman [44], Schoes and Castore [45] and Luo and Kay [46], in 1990 by Hackett and Shah [47], and in 1991 by Rothman and Denton [43], where different fusion technologies were described. Luo and Kay [17] presented a detailed survey of multisensor integration and fusion systems. Their paper covers the broad aspects of data fusion with clear descriptions, examples and an extensive bibliography which is very useful to anyone requiring to know more about this topic. In Luo and Kay's model, the outputs of two sensors are fused into a new representation. This is then fused with information from a third sensor and so on. Fusion occurs gradually with care and deliberation using only two sets of information at a time. An interactive information system composed of three units - sensor selection, world model and data transformation - is used to modify the fusion process. The first unit, sensor selection, is used to select the most appropriate group of information from sensors to be fused. Sensory information is represented within the world model and the data transformation system is used to normalise data prior to fusion.

A functional data fusion model for multisensor non-destructive testing (NDT) data is presented in Fig. 2.6. In the first phase, measurements from n sensors are integrated.
At this stage the raw data are processed, using thresholding, averaging or image processing techniques, and converted to a common numerical format usable by the fusion unit where data association is performed. Evidential reasoning, probabilistic and belief theories are used to process the data further and make inferences. The results are classified and selected before a decision on the optimum fused sensor data information can be made. With this
system, information from multiple sensors is processed by the same data integration centre before being fused.

[Fig. 2.6 Functional model for a data fusion process]

Krzysztofowicz and Long [22] outlined three schemes for fusing information in multisensor detection systems. The first scheme, fusion at the sensor level, fuses raw sensor data. The second one fuses decision information after data processing and integration. The final scheme fuses detection probabilities resulting from sensory data information. The performance of each scheme is assessed using a Bayes risk theory. The authors concluded that fusion of detection probabilities offers advantages over the other two schemes. Among these are higher performance and flexibility, and better suitability to situations where observations are in a probability format. They also described a probability fusion and decision model making use of likelihood and Bayesian statistics.

A specific data fusion model can be created for each particular application but it sometimes appears that models may also vary within an identical application. Functional models of a data fusion process for target tracking and identification have been presented by Waltz and Llinas [8]. Tong et al. [29] and Ruck et al. [30] described different models for the same application. Another general approach to data fusion was proposed by Pau [48], whose paper reviewed some knowledge representation approaches devoted to sensor fusion. Methodological problems in multisensor data fusion and application to fusion of acoustical and optical data are given by Richardson and Marsh [34].

Data fusion models differ from author to author, but they all agree on a three-level process. Thomopoulos [49] presented a sensor integration paradigm composed of three levels: the signal level, the level of evidence and the level of dynamics (Fig. 2.7). The integration phase at the signal level is called data fusion, at the level of evidence it is referred to as features fusion, and decision fusion relates to the level of dynamics. Artificial neural networks can be used at the signal level for image processing. Statistical models and probabilities are used to describe the experiment carried out and to integrate data at the evidential level. This level makes use of a Bayesian approach or Dempster-Shafer theory. A mathematical model that describes the experiment is used at the level of dynamics. Harris [50] described a three-level distributed fusion system. Pau [48] demonstrated the performance of a multilevel data fusion approach. His introduction will certainly convert any non-believer in data fusion into a fervent follower of the theory. He also reported the benefits of combining evidence from multiple sensors (e.g. better features, reduced cost, achievement of sensor diversity, increased speed, provision of range data, etc.).
[Fig. 2.7 A three-level fusion paradigm (modified from Thomopoulos [49])]

The choice of the fusion level depends mainly upon the application and complexity of the system. An extensive survey of multisensor data fusion systems was presented by Linn et al. [28]. Their survey was oriented towards military applications of data fusion systems. For each of the 54 data fusion systems they identified, a short description and comments were given, providing the reader with detailed information on the performance of each system. Among the 54 data fusion systems identified, 49 were performing fusion of information at the first and second levels and only five at the third level. This may be due to the numerous sensors producing image-based data from satellites used by the military services and also to a decision to minimise the complexity of a system by performing most of the operations at the first level.

Two of the most common types of architecture for data fusion systems are centralised and distributed (or decentralised) decision structures (Figs 2.8a and 2.8b respectively). Chao et al. [25], Schoes and Castore [45], Tenney and Sandell [51] and Thomopoulos [52] have compared their performances and presented decision theories for distributed systems.
[Fig. 2.8a Centralised signal detection system]

Distributed signal detection systems fuse identity declarations using Bayesian theory, the Dempster-Shafer paradigm or Thomopoulos generalised evidence processing (GEP). The output from each sensor is a decision and these decisions form the inputs to a fusion centre where association is performed. Centralised systems are more suitable for the fusion of raw data but the association phase can be difficult. They are used with commensurate sensor data (e.g. infrared images, satellite images) which are fused using pattern recognition or estimation techniques with high computational requirements.
[Fig. 2.8b Distributed (decentralised) signal detection system]
Sensor 1
Sensor 2
Sensor j Fig. 2.9 Parallel multisensor suite
Data fusion system models 17 information from identical or dissimilar sensors. A serial sensor suite is more suited to sensors of different ranges which are complementary in a sequential order (Fig. 2.10). A serial sensor suite consists of j sensors in series from which the information is sequentially combined.
Sensor 1
Sensor 2
Sensor j
Fig. 2.10 Serial multisensor suite Sensor output can be regarded as a decision array of n decisions. The efficiency of each sensor, noted rjj9 is the probability of correctness of the decision Dy from sensor j ; it is a measure of the effectiveness of a sensor. The following is based on Dempster-Shafer theory which is described in section 2.3.3. Let us define C; and Wj as the belief that the decisions from sensor j are correct and wrong respectively. By definition, from the Dempster-Shafer theory we can write Cj+Wj^l and U}= 1 - (Cy+ Wj) where £/, is called the ignorance (or uncertainty) of a measurand. From Shafer56 and Dasarathy,57 CL+
WL+
UL=
UL
(2.2)
where ck9 wk and uk are the incremental probabilities of the fused correct and incorrect decisions and non-decision respectively at the kth stage of the fusion process. Recursively, we obtain A.
K
(2.3)
which can be written as Ck+Wk+uk=l
(2.4)
uk=uk_i-ck-wk
(2.5)
C, = c 1 ^ M ( /- 1 »
(2.6)
and with
1=1
it
wt-w^ur*
(2.7)
;=1
Therefore k
1
uk
(2.8) /-l
withO^Wi^l.
\-U\
18
Data fusion - a review
Ck and Wk are the summed correct and incorrect fused decision rates; equations (2.6) and (2.7) can be rewritten as 1
*
Ck = cx \-Ux
(2.9)
\-U\ 1
1 - u*x
(2.10)
\-U\
for 0 ^ ux ^ 1. The asymptotic limits (as k tends to infinity) give
c \
-
^ k I max ""
wk
(2.11) C\ + Wi Wi
(2.12)
C\ +W\
with ^ it I max +
*^ifc I max ~
(2.13)
^
Figure 2.11 is a plot of Ck |max against cx for different values of w{. It can be seen that as wx approaches zero, C j m a x reaches unity, resulting in a high correct fused decision rate (similar experimental results achieved with a Bayesian approach are described in chapter 6).
0.8 -f
Wl =
0.5
w^O.2 w,=0.1 0.2
H
1 0.4
1
h 0.6
H
h0.8
1.0
Fig. 2.11 Plot of Ck |max against c{ for different values of wx Two decision fusion cases can be identified: binary decision making and multiple hypothesis decision making. Table 2.2 presents the possible decision outputs for two NDT sensors in the case of binary decision output.
Data fusion system models 19 Table 2.2 Binary decision outputs for multiple NDT sensor outcomes Sensor 1 Sensor 2
Defect
No defect
Uncertain
Defect No defect Uncertain
Defect Uncertain Uncertain
Uncertain No defect Uncertain
Uncertain Uncertain Uncertain
From Shafer56 and Dasarathy57 we can write Ci = i7i%
(2.14)
w ^ d - T / i X l - ^ l
(2.15)
" I = (^7I + ^ 2 ) - 2 7 7 I ^ 2 < 1
(2.16)
Three cases corresponding to the values of rjt can be considered. If rjl•,= 1, this means that one sensor is very efficient and the other will be totally discarded at the fusion level. If 77, = 0, both sensors are assumed wrong and the fusion process will not improve any decision making. In the general case, where there is a specific value for rjx and rj2, we can write m a x ^ , rj2) = rj and mm(rjl9 rj2) = arj. This leads to Cl Wl
= arj2^l
(2.17)
= (l-i/)(l-ai7)«l 2
ul = (l + a)rj-2arj
(2.18) (2.19)
Therefore 2
arj Ck |max = / 2 2a?7 -?7(a + l) + l
(2.20)
which gives the curves in Fig. 2.12. It is evident that the more efficient the system, the faster the improvement. The general case for t sensors can be written as t
ci-n*/*1
(2 2i)
w^Ylil-rjj)^!
(2.22)
-
and the minimum value of rj at which the correct fused decision rate exceeds the incorrect fused decision rate is given for rjmin ^ 0.5. Ck |max and Wk |max become t
C t U = —-— -<1 ti'+a-tiY (i - v)' ^L» =- r ^ ^ 1 v' + (i->7)
(2.23)
(2-24)
20
Data fusion - a review
0.1
0.2
0.3
0.4
0.5
*7«0.1
0.6
*7-0.5
0.7
0.8
0.9
1.0
-?;-0.9
Fig. 2.12 Plot of Ck |max against a for different values of rj
Figure 2.13 illustrates the benefit of using multiple sensors. The lower the value of rj, the higher is the number of sensors required to achieved similar performance. In the case of multi-hypothesis decision analysis for t sensors, the Dempster-Shafer
4
5 6 7 Number of sensors
8
Fig. 2.13 Plot of Ck |max against number of sensors for different efficiency values
Data fusion system models 21 theory can be written as t
Cl
= Y\t]j^l
(2.25)
fid-*,) w, = ^ — < 1 (w-l)*'" 0 «I = 1 - ( P I + ? I ) < 1
(2.26) (2-27)
where w is the number of hypotheses tested. In the general case where r/, = tj Cl
= rj'
(2.28) (I-17)'
w, = - i !!— (m-l) ( ' _ 1 ) ^
-
"
-
^
(2.29)
(2 30)
^
-
The minimum number of sensors t necessary to obtain a correct decision probability higher than a wrong decision probability (i.e. cx^wx) is given by Ci-w,3=0
(2.31) ln(m-l) In[i7(w-l)/(l-i7)]
(2.32)
It can be easily seen that for a given number of hypotheses m, the higher 77 is, the fewer sensors are required to infer a correct decision. The general expressions for Cj m a x and Wk I max are
CU
"/<"''")
,
(2.33)
riffa m) + (I - ri) WkU=
\
'\
r
(2.34)
with /(7;,m)=[(m-l)^](-1) (2.35) The main objective of a data integration and fusion scheme is to gather information from multiple sensory systems (e.g. radar, sonar, NDT sensor) in order to produce a single and more complete solution from information measured by each sensor. The system output can take the form of an analogue signal or a colour coded computer image. Most architectures rely on one central data fusion processing unit with multiple levels of preprocessing units. The first level involves fusion of raw data, the second is a fusion decision level and the third is a feature or probability fusion level. In the first stage, unprocessed (or raw) sensor data are integrated and put into an identical format. These data can then be fed into the data fusion centre before being processed according to a
22
Data fusion - a review
mathematical algorithm which will produce a coherent global result. Data fusion is not only a signal processing unit but also an information evidential combiner. The choice between centralised and distributed architecture really depends on the application being considered, as does the choice of a data fusion methodology.
2.3
Fusion methodology
This section presents different theories and strategies applied to multisensor data integration and fusion. Multiple heuristic and analytical techniques for data fusion have appeared in the literature during the last 20 years. Table 2.3 presents a survey of the most common of these. From the literature, no optimal combination technique has been proposed, all varying from one application to another. However, the most commonly used theories appear to be Bayesian probabilistic reasoning and Dempster-Shafer theory of evidence.56 The advantages and limitations of these two approaches have been widely illustrated by Waltz and Llinas, 8 Thomopoulos,49'52'58 Abidi,59 Abdulghafour et a/.,60 Crowley and Demazeau61 and Blackman.44 Linn et al.2S summarised more than 50 different data fusion procedures developed for military applications.
Table 2.3 Survey of most common data fusion and integration methods Fusion method
Applications
Pixel level fusion
Image processing, image segmentation Bayesian theory Decision making between multiple hypotheses Dempster-Shafer theory Decision making, beliefs of evidence intervals Neural network Signal interpretation Neyman-Pearson criteria Decision making Fuzzy logic Handle vagueness Knowledge based system Pattern recognition Markov random field Image processing
References 10, 14, 26, 29, 30, 31, 32, 33, 35, 36, 37, 61, 62, 72, 74, 75, 77, 87, 89, 90, 98, 101 22, 27, 35, 49, 50, 52, 53, 58, 60, 64, 65, 66, 67, 79, 94 10, 23, 42, 44, 49, 52, 60, 68, 94, 101, 103 18, 30, 75, 83, 84, 85, 86, 87, 91 54, 55, 58, 63 59,60,70 19, 45, 48 79, 101
Averaging of data conveying information in an identical format is considered the simplest form of fusion.13,47 Data fusion processes in general make great use of statistical and probability theories. Among these are classical inference (or probability theory), Bayesian inference, Dempster-Shafer evidential reasoning, generalised evidence processing (GEP) theory and fuzzy logic inference techniques. These methods, used to represent uncertainty62 in measurement by expressing a degree of belief which supports or refutes a hypothesis, are described in more detail in the next sections.

2.3.1 CLASSICAL INFERENCE

The classical inference method, also known as probability theory, computes probabilities from multiple hypotheses in order to determine their acceptability. This method is useful to assess two hypotheses at a time. Classical inference provides quantitative information
about a sensor observation in the form of a probability. It was first introduced by Pascal (1623-1662). The probability of an event E, denoted p(E), is equal to the limiting frequency of occurrence of E, denoted f(E), in n trials:

p(E) = \lim_{n \to \infty} \frac{f(E)}{n} \qquad (2.36)
In all cases, 0 ≤ p(E) ≤ 1. The alternative event, denoted Ē, is defined as

p(\bar{E}) = 1 - p(E) \qquad (2.37)
It is assumed that if p(E) > p(Ē), E is true, otherwise Ē is true. In classical inference, the probability which does not support a hypothesis refutes it. One limitation is that probability theory is not able to distinguish between uncertainty and ignorance. Among the most common inference approaches based on an observed sample of data for acceptance or rejection of a hypothesis we find:

• maximum a posteriori
• likelihood ratio criterion
• Neyman-Pearson test
• Bayes criteria.
Maximum a posteriori

The maximum a posteriori approach is probably the most widely used method to test a hypothesis. H_0 will be true if

p(H_0|y) > p(H_1|y) \qquad (2.38)
where y is an observation from a sensor and H_i a hypothesis i. The maximum a posteriori criterion compares two probabilities assigned to two hypotheses and favours either one or the other depending only on their chance of occurrence.

Likelihood ratio criterion

The likelihood ratio is a test to decide between hypothesis H_0 or its alternative H_1 (H̄_0 = H_1). The decision maker follows a likelihood ratio criterion of choosing H_0 if the ratio of p(u_i|H_0) to p(u_i|H_1) exceeds a defined value t. It can be formulated by the equation

\Lambda(u) = \prod_{i=1}^{n} \frac{p(u_i|H_0)}{p(u_i|H_1)} > t \qquad (2.39)
where H_1 and H_0 designate hypotheses 1 and 0 respectively, n is the number of sensors, u_i is random observed sample data with normal distribution N(μ, σ²), p(u_i|H_1) is the probability of observation u_i given that H_1 is true, Λ(u) is the likelihood ratio (also called the level of sufficiency), and t is the threshold (or significance level) determined from experiment (or survey) and set at the fusion centre. Λ(u) represents the degree to which the observation of evidence u influences the prior
probability of H. If Λ(u) > t, then H_0 is true as the observation u tends to confirm H_0. If Λ(u) ≤ t, then H_1 is true. If Λ(u) = 0, u does not influence the prior confidence in H_0. The threshold t is defined as

t = \frac{P_0(C_{10} - C_{00})}{P_1(C_{01} - C_{11})} \qquad (2.40)

The likelihood test is used to select the most likely hypothesis based on the a priori probability of each hypothesis. It represents the degree to which the evidence u_i influences the prior probability of hypothesis H. A cost function is a function which gives the probability of obtaining a hypothesis as a function of parameters affecting the cost such as loss, human cost or repair cost. For example, C_11 is a measure of the cost in monetary units if a decision 1 is taken when 1 is true.

Neyman-Pearson hypothesis test

The Neyman-Pearson test (1937) is a general theory used to make a decision between two hypotheses. The hypothesis H_0 is rejected if the following equation is verified:
\frac{\Lambda(u|H_0)}{\Lambda(u|H_1)} < t \qquad (2.41)

Λ(u|H_i) is the likelihood function given the hypothesis H_i (i = 0, 1). The threshold t is chosen depending on the risk the user is prepared to take to accept or reject H. The smaller the value of t, the lower the risk. Thomopoulos et al.58,63 studied the performance of a Neyman-Pearson (NP) test for a distributed detection system at the sensor level and decision level. A NP test can be used in a centralised fusion centre and is performed independently on each sensor providing data to the fusion centre.

Bayes criteria

A cost function based on false alarm and probability of detection is used to select between two hypotheses H_0 and H_1. P_0 and P_1 are a priori probabilities which govern the decision output. A cost function C can be defined for each decision outcome:

• C_00 is the cost function assigned to the decision 0 when the true outcome is 0, and P(H_0|H_0) is the probability associated with this decision.
• C_01 is the cost function assigned to the decision 0 when the true outcome is 1, and P(H_0|H_1) is the probability associated with this decision.
• C_10 is the cost function assigned to the decision 1 when the true outcome is 0, and P(H_1|H_0) is the probability associated with this decision.
• C_11 is the cost function assigned to the decision 1 when the true outcome is 1, and P(H_1|H_1) is the probability associated with this decision.
The expected value of the cost, known as the risk R, is defined as

R = C_{00}P_0P(H_0|H_0) + C_{01}P_1P(H_0|H_1) + C_{10}P_0P(H_1|H_0) + C_{11}P_1P(H_1|H_1) \qquad (2.42)
The decision intervals can be defined as

\frac{p(y|H_0)}{p(y|H_1)} \gtrless \frac{p(H_1)(C_{01} - C_{11})}{p(H_0)(C_{10} - C_{00})} \qquad (2.43)

By defining

\Lambda(y) = \frac{p(y|H_0)}{p(y|H_1)} \qquad (2.44)

and

\eta = \frac{p(H_1)(C_{01} - C_{11})}{p(H_0)(C_{10} - C_{00})} \qquad (2.45)

the equation can be rewritten as

\Lambda(u) \gtrless \eta \qquad (2.46)

where η is the threshold of the test and should be such that the cost is as small as possible. Bayes criteria lead to a likelihood ratio test. Bayes and Neyman-Pearson tests consist of finding a likelihood ratio and comparing it to a threshold value in order to make a decision.
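As an illustration of the classical inference tests above, the Python sketch below (not from the original text; the Gaussian sensor models and all numerical values are assumptions) implements the likelihood ratio criterion of equations (2.39) and (2.46) for n sensor observations.

    from math import exp, pi, sqrt

    def gaussian_pdf(x, mu, sigma):
        # Normal density N(mu, sigma^2), the sensor model assumed in the text
        return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

    def likelihood_ratio_test(observations, mu0, mu1, sigma, threshold):
        # Equations (2.39)/(2.46): accept H0 when the product of the
        # per-sensor likelihood ratios p(u_i|H0)/p(u_i|H1) exceeds the threshold
        ratio = 1.0
        for u in observations:
            ratio *= gaussian_pdf(u, mu0, sigma) / gaussian_pdf(u, mu1, sigma)
        return 'H0' if ratio > threshold else 'H1'

    # Hypothetical example: H0 = 'no defect' (mean signal 0), H1 = 'defect' (mean 1)
    print(likelihood_ratio_test([0.2, -0.1, 0.4], 0.0, 1.0, 0.5, 1.0))  # 'H0'

Raising or lowering the threshold trades the false alarm rate against the detection rate, which is exactly the choice formalised by the Bayes and Neyman-Pearson criteria.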
2.3.2 BAYESIAN INFERENCE

The Bayesian inference technique takes its name from an English clergyman named Thomas Bayes (1702-1761) who developed a theorem now known as Bayes's theorem. His theory can be used to estimate the degree of certainty of multiple sensors providing information about a measurand. Bayes's theory is the most commonly used probabilistic method of combination of evidence. Durrant-Whyte27,64,65 described an algorithm for a decentralised architecture for multisensor data fusion. He presented a method for integrating observations of geometric states of an environment by multiple sensors. He makes use of Bayes's probabilistic approach and presents an example of his system for mobile robot applications. Duda et al.66 and Krzysztofuwicz and Long22 used a Bayesian model to fuse detection probabilities. Bayesian inference theory has been used by Malik and Polkowski67 to decide on the presence of an obstacle in mobile robot applications; by Shapiro and Mowforth35 to yield an updated sensor reading; and by Chair and Varshney53 to produce an optimised data fusion algorithm and weight each signal differently for each sensor according to its reliability.

The Bayesian probabilistic method uses the a priori probability of a hypothesis (or conditional probability) to produce an a posteriori probability of this hypothesis. The a priori probability is updated as new information is conveyed to the system. The Bayesian theory can be mathematically described as follows. Suppose there are n mutually exclusive and exhaustive hypotheses H_0, H_1, ..., H_n that an event E will occur. The conditional probability p(E|H_i) states the probability of event E given that H_i is true, and p(H_i|E) is given by

p(H_i|E) = \frac{p(E \cap H_i)}{\sum_j p(E \cap H_j)} = \frac{p(E|H_i)\,p(H_i)}{\sum_j p(E|H_j)\,p(H_j)} = \frac{p(E|H_i)\,p(H_i)}{p(E)} \quad (i = 0, \ldots, n) \qquad (2.47)
where p(H_i) is called the a priori probability of hypothesis H_i and p(H_i|E) is the a posteriori probability of H_i given that E has occurred, where

\sum_{i=0}^{n} p(H_i) = 1 \qquad (2.48)
Bayesian theory can be adapted for decision making. If multiple sensors are used, the general equation becomes

p(E \mid H_0 \cap H_1 \cap \cdots \cap H_n) = \frac{p(H_0|E)\,p(H_1|E) \cdots p(H_n|E)\,p(E)}{\sum_i p(H_i)} \qquad (2.49)
This theory presents the advantage of giving a belief (probability) in a hypothesis given an evidence (event), whereas classical inference gives the probability of occurrence of an event given a hypothesis. It uses an a priori probability about the feasibility of a hypothesis. When no a priori information is available, the principle of indifference is used, in which the p(H_i) for all i are assumed equal. Some limitations of this theory are:

• no representation of ignorance is possible;
• prior probability may be difficult to define;
• result depends on choice of prior probability;
• it assumes coherent sources of information;
• adequate for human assessment (more difficult for machine-driven decision making);
• complex with large number of hypotheses;
• poor performance with non-informative prior probability (relies on experimental data only).

The outcome of the Bayesian inference method is a single 'hard' number related to a proposition (single decision probability).
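A minimal sketch of equation (2.47) applied sensor by sensor is given below (Python; added for illustration and not part of the original text, with hypothetical hypotheses and likelihood values). Each sensor report updates the prior, and the posterior after one sensor becomes the prior for the next.

    def bayes_fuse(prior, sensor_likelihoods):
        # Sequentially update the a priori probabilities of n mutually
        # exclusive and exhaustive hypotheses with the likelihoods
        # p(E|H_i) reported by each sensor (equation 2.47)
        posterior = list(prior)
        for lik in sensor_likelihoods:
            evidence = sum(l * p for l, p in zip(lik, posterior))  # p(E)
            posterior = [l * p / evidence for l, p in zip(lik, posterior)]
        return posterior

    # Hypothetical two-hypothesis case: H0 = 'no crack', H1 = 'crack'
    prior = [0.9, 0.1]
    reports = [[0.3, 0.8], [0.4, 0.7]]  # p(signal|H_i) for two probes
    print(bayes_fuse(prior, reports))   # posterior shifts towards H1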
Bayesian estimation is used to eliminate unlikely information/hypotheses and to solve ambiguities and conflicting information from multiple sensors. An example of Bayesian theory applied to NDT is described in chapter 5.

2.3.3 DEMPSTER-SHAFER EVIDENTIAL REASONING

The Dempster-Shafer theory is often described as an extension of probability theory or a generalisation of the Bayesian inference method.2 The Dempster-Shafer theory was not specifically designed for reasoning with uncertainty and its application to expert systems and data fusion did not become apparent until the 1980s. Dempster-Shafer theory has been used for assigning a degree of belief in target identification applications10,44 and tactical inferencing.68

Before combining information, the theory of evidence must be presented. It is called a theory of evidence because it deals with weights of evidence. Assume a set of n mutually exclusive and exhaustive propositions, Θ = {X_0, X_1, ..., X_n}, where Θ is called a frame of discernment. Propositions can thus be developed with the Boolean operator OR; 2^Θ is the set of all the subsets of Θ. Dempster and Shafer developed the concept of mass probability to assign evidence to a proposition, which is denoted m(X), where

0 \le m(X_i) \le 1
\sum_X m(X) = 1

m(\emptyset) = 0

Another term for mass probability is basic probability assignment (bpa). The support for a hypothesis is the total degree of belief for this hypothesis to be true. A belief function can be defined by Bel: 2^Θ → [0, 1]:
Bel(X) = \sum_{Y \subseteq X} m(Y) \quad \text{for each } X \subseteq \Theta \qquad (2.50)

where Bel(X) is the degree of support for the proposition X, which for multiple hypotheses becomes

Bel(X) = \sum_{X_i \subseteq X} m(X_i) \qquad (2.51)
The properties of a belief function are

Bel(\Theta) = 1 \quad \text{as } \sum m(X) = 1
Bel(X) = 0 \quad \text{if } X = \emptyset
0 \le Bel(X) \le 1 \quad \text{if } X \subset \Theta \text{ and } X \ne \emptyset
Bel(X) = m(X) \quad \text{for each } X \subset \Theta \text{ containing only one element}

Bel(X) + Bel(\bar{X}) \ne 1

since

Bel(\Theta) = Bel(X \cup \bar{X}) = Bel(X) + Bel(\bar{X}) + \sum_{X \cap Y \ne \emptyset,\; \bar{X} \cap Y \ne \emptyset} m(Y) = 1
Any belief which is not assigned to a specific subset is called a non-belief and is associated with Θ. A belief function is usually expressed in terms of bpa. By convention the mass probability of the empty set is zero, m(∅) = 0.
28 Data fusion - a review Data association using Dempster-Shafer theory is performed through the Dempster rule of combination. It can be defined as follows ml®m2(Z) = m3(Z) = K
J ] mx{Xx)m2{X2) xlnx2=z
(2.52)
This is called the orthogonal sum of mx and m2 and is defined as the sum of the mass product intersections. The total belief committed to Z is Bel(Z)=
]T
ml(Xi)m2(XJ)
(2.53)
ij
xtr\xrz The Dempster rule is both commutative and associative. The order and grouping of combination do not affect the resulting joint probability masses. Figure 2.14 is a geometric representation of the Dempster rule of combination. In this diagram, evidence from two sources mx and m2 is combined by computing mx(Xi)m2(Xj) committed to Xt fl Xj and their orthogonal sum is shown by a shaded rectangle. The combination of multiple belief functions is carried out by repeating the above process in a pairwise manner. 1 m2(Xn) - •
Fig. 2.14 Geometrical representation of the Dempster rule of combination

If the sum of all the masses is less than 1, a normalisation factor (1 - k) has to be considered. It is given by the equation

k = \sum_{X_i \cap X_j = \emptyset} m_1(X_i)\,m_2(X_j) \qquad (2.54)
The factor k indicates the amount of evidential conflict. If k = 0 this shows complete compatibility, if k = 1 it shows complete contradiction, and if 0 < k < 1 it shows partial compatibility.
The general form of the Dempster rule of combination can be written as

m_1 \oplus m_2(Z) = \frac{\sum_{X_1 \cap X_2 = Z} m_1(X_1)\,m_2(X_2)}{1 - k} \qquad (2.55)

The output from the Dempster rule of combination is a set of evidential intervals (EI), denoted EI = [Bel(Z), Pls(Z)], where Bel(Z) is the belief assigned to the proposition Z and Pls(Z) its plausibility. The plausibility is a measure of the evidence that supports the hypothesis, and is given by

Pls(Z) = 1 - Bel(\bar{Z}) \qquad (2.56)
A decision is made in favour of the hypothesis having the best evidential interval. Figure 2.15 is a graphical representation of the evidential interval and Table 2.4 gives the decisions associated with different evidential intervals.
Fig. 2.15 Graphical representation of the evidential interval
Table 2.4 Decision interpretation for several evidential intervals

[Bel(X), Pls(X)]   Decision
[0, 1]             Total ignorance, no belief in support of X
[1, 1]             Proposition X is completely true
[0, 0]             Proposition X is completely false
[0.4, 1]           Partial belief, tends to support X
[0, 0.7]           Partial disbelief, tends to refute X
[0.3, 0.5]         Both support and refute X
The Dempster-Shafer theory presents several advantages for combining evidence. In this theory, the probability masses are combined according to the commonality of hypotheses. An a priori mass density function is updated to obtain an a posteriori evidential interval. The interval provides information on the belief, plausibility, disbelief and ignorance about a hypothesis. Some of the features of the theory are:

• an overestimation of the final assessment can occur;
• small changes in input can cause important changes in output;
• high efficiency with bodies of evidence in pseudo-agreement;
• lower efficiency with bodies of evidence in conflict.
Dempster-Shafer theory provides a measure of the amount of knowledge about a measurand.

2.3.4 GENERALISED EVIDENCE PROCESSING THEORY

The generalised evidence processing (GEP) theory was introduced by Thomopoulos49,52 as an extension of both Bayes and Dempster-Shafer theories. GEP generalises the Bayesian distributed decision fusion theory into a concept where soft decision making can occur. It uses the concept of separation of hypotheses from decisions and extends Bayesian theory to a frame of discernment similar to Dempster-Shafer. The masses assigned to a decision are combined pairwise according to the Dempster rule of combination. The major difference between GEP and Dempster-Shafer is that in GEP the masses are associated via thresholds in a likelihood or Neyman-Pearson test manner, whereas in Dempster-Shafer theory the bpa are combined according to the commonality of events. Thomopoulos claimed that his theory combines the strong points of both methods without their disadvantages.

2.3.5 FUZZY LOGIC INFERENCE TECHNIQUE

Fuzzy logic theory is very flexible and there is no universal rule of formalism which can be associated with it. This depends on the type of application, and different strategies can be implemented to associate elements to a particular set. Fuzzy logic evaluates a signal from a sensor qualitatively, and fuzzy sets associate a grade (numerical value) to each element (Table 2.5).

Table 2.5 Typical associated values for different elements in fuzzy logic

Element         Associated values   Associated reliability
Signal high     [1.0, 0.7]          Certain
Signal medium   [0.7, 0.3]          Uncertain
Signal low      [0.3, 0.0]          Incorrect
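The grading of Table 2.5 can be expressed as a simple mapping. The Python sketch below is an illustration only: the band thresholds follow the table, but the function itself and its crisp if/else form are assumptions, since a practical fuzzy system would use overlapping membership functions.

    def signal_grade(amplitude):
        # Assign a linguistic label and reliability to a normalised
        # signal amplitude, following the bands of Table 2.5
        if amplitude >= 0.7:
            return 'signal high', 'certain'
        if amplitude >= 0.3:
            return 'signal medium', 'uncertain'
        return 'signal low', 'incorrect'

    print(signal_grade(0.82))  # ('signal high', 'certain')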
A membership function usually defines a fuzzy set and is mathematically described as follows:

f(X) = \begin{cases} 1 & \text{if } X \text{ is definitively a member of a set} \\ 0 & \text{if } X \text{ is definitively not a member of a set} \\ d,\; 0 < d < 1 & \text{if } X \text{ is a member of a set to a certain degree} \end{cases} \qquad (2.55)
This function is used to associate a specific set to each element. Fuzzy set theory is a very subjective method as one element may belong to one or multiple sets, depending on the threshold fixed by the membership function. Abidi59 demonstrated that fuzzy logic membership functions can be used for classification and decision purposes of sensor information. Zadeh,69 Ferrari70 and Russo and Rampoui71 showed that fuzzy logic techniques give a linguistic interpretation of a numerical measurement by associating a descriptive reasoning to a numerical value (e.g. tall associated to the value 1, small associated to the value 0). They presented a multilevel system to handle vagueness: sensor level, data fusion level and reasoning level, information being produced at the sensor level and integrated at the data fusion level. The reasoning level generates a decision making use of artificial intelligence systems. A confidence level can be associated with each set of numerical values (Table 2.5). The next stage of the process is the data fusion phase which associates a degree of reliability to each set as a function of its numerical values. Fuzzy logic methods can be very useful to represent uncertainty from multiple sensors and to handle vagueness, but they have been mainly dedicated to image segmentation purposes.

2.3.6 PIXEL LEVEL DATA FUSION

Combining information from multiple images to improve the classification accuracy of a scene, where images are processed at the pixel level using a segmentation algorithm, is very common.32 Pixel level data fusion can be performed for image processing and image smoothing.10,26,29,31,72 This approach is used with noisy multisensor data. In computer vision applications, data fusion is used for image segmentation to combine information perceived by two or more visual sensors.32,73 Image segmentation and image processing are widely used for pixel level fusion.32,33 Figure 2.16 shows the image segmentation of a similar scene as seen by two sensors, where similarity detection patterns are performed by correlation. The final representation conveys information from both sensors. The fundamentals of image processing theory for image segmentation are described by Seetharaman and Chu.73 There are two types of segmentation process: region-based and edge-based segmentation. The region-based segmentation process groups pixels according to their similarity. The edge-based segmentation process identifies the boundaries of objects by noting the changes in pixel values. Duncan et al.74 describe segmentation as a hill-climbing problem according to an objective function.
Fig. 2.16 Image segmentation of similar scene as seen by two sensors. The final representation conveys information from both sensors
32 Data fusion - a review improve results of pixel association at the image region level.75 Mathur et al.12 transported a mathematical algorithm into CMOS technology, and presented an analogue electronic circuit to reduce noise in order to facilitate spatial feature extraction at the pixel level fusion. Libby and Bardin76 presented a hardware unit, called TIGER, used to perform calculations for 3-D object format data. Another system called MITAS72 is used to collect and process multisensor imaging data for airborne surveillance operations. Pau48'78 uses a statistical technique for pattern recognition. Crowley and Demazeau61 reviewed the problem of fusion in machine vision. They present a Kalman filter fusion system used in the integration process of numerical information. A Kalman filter is an iterative technique of dynamic linear modelling which can be used to predict the state of the model and to update the property estimates in the model.2 It provides a mean to determine the weight given to input data, and an estimate of the target tracking error statistics, through the covariance matrix, and is used for gating information. Wright79 explained the data fusion process at pixel level using a Markov random field (MRF). The MRF is a stochastic process which defines a probability measure for each independent pixel value of an image and compares it with the measure of the same pixel location of the next image. Xu80 used a maximum likelihood approach to fuse disparate sensory data, implementing his approach by using a Monte-Carlo simulation for mobile robot guidance. 2.3.7 ARTIFICIAL INTELLIGENCE Artificial intelligence (AI) techniques developed for data association make use of expert systems and neural networks. Expert systems are computer systems designed to emulate the decision making ability of the human brain. Expert systems make use of specialised knowledge and expertise and are often referred to as knowledge based systems. The decision made by an expert system will be based on the information acquired during its development. However, the efficiency of the expert system will be a function of the amount of knowledge pre-programmed into it. Given facts and data, expert systems make inferences about an event or a hypothesis. Permanence, consistency, increased reliability, fast response and cost reduction are direct benefits of using expert systems. Artificial neural networks (NNs) are software simulated processing units, or nodes, which are trained in order to solve problems. The training of NNs is performed with historical data and associated outcomes. The NN calculates its response to test input data and compares it with a known result. Through this process, the weight of each node can be adjusted according to a specified algorithm. Neural networks can be very useful to solve problems in applications where it is difficult to specify an algorithm. They are usually composed of a number of interconnected nodes or 'neurones' that act as independent processing units (Fig. 2.17). Each node processes input signals through a specified transformation and produces an output signal which can be mathematically expressed as J =/(!>,*,)
y = f\left(\sum_i w_i x_i\right) \qquad (2.57)

where y is the output, w_i the weight associated with input node i and x_i the input on node i. One of the most common NN models is the multilayer Perceptron. It is composed of three or more layers of nodes (Fig. 2.18), each feeding its outputs forward to one or more nodes in the following layer.
Fig. 2.17 A single-layer artificial neural network

Fig. 2.18 A two-layer neural network, Perceptron
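Equation (2.57) and the layered structure of Figs 2.17 and 2.18 can be summarised in a few lines of Python (an added illustration; the sigmoid transformation and all weights are assumptions, since the text leaves f unspecified):

    import math

    def node_output(inputs, weights):
        # Equation (2.57): y = f(sum_i w_i x_i), here with a sigmoid f
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1.0 / (1.0 + math.exp(-s))

    def two_layer_perceptron(inputs, hidden_weights, output_weights):
        # Forward pass of the Perceptron of Fig. 2.18: each hidden node
        # feeds its output forward to the single output node
        hidden = [node_output(inputs, w) for w in hidden_weights]
        return node_output(hidden, output_weights)

    # Hypothetical fusion of three sensor signals through two hidden nodes
    y = two_layer_perceptron([0.9, 0.2, 0.4],
                             [[0.5, -0.3, 0.8], [-0.6, 0.9, 0.1]],
                             [1.2, -0.7])
    print(y)

Training consists of adjusting the weights until the computed output matches the known outcomes of the historical data, as described above.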
Detailed information concerning neural networks can be found in the literature.18,81-83 They have been used for signal interpretation and decision making, and also for sensor data fusion, for example as a neural network data fusion decision system for the detection and correct classification of space object manoeuvres observed by radars of different frequencies and resolution.84-87 Pearson et al.,18 Ruck et al.,30 Eggers and Khuon85 and Chilips and Steele88 described the use of neural networks in decision systems for target tracking, object detection, recognition and classification in defence applications. Kjell and Wang89 discussed image processing operations, such as filtering and segmentation, using NNs. Chen90 used a NN to select matching pixel based fusion from sensors for robotics applications. Tsao and Libert75 used a neural network to perform pixel-to-pixel image association for object identification using segmentation and filtering. They took into account a temporal frequency parameter (motion) in addition to conventional factors such as image intensity and boundaries (see section 2.3.6). Artificial neural networks have already been applied to non-destructive examination (NDE)91 for eddy current signal classification and automatic tube inspection,83 defect characterisation,92 classification of weld defects93 and signal interpretation.50

2.4 Data integration and fusion applications
Applications of data fusion span a broad range of disciplines such as robotics,67,35 airborne surveillance,14,96 target tracking10,30,44,95-97 and defence.7 Pickles16 combined financial information about market equities using expert systems. The most impressive data fusion implementations are in military applications,6,7,29,98 and the statement of Macnicol,99 'the next war will be intelligent', encapsulates current success in this discipline. Table 2.6 presents a survey of applications of data fusion. Again, target tracking represents the major area of data fusion implementation through military applications.

Table 2.6 Areas of implementation of multisensor data integration and fusion

Application of data fusion   References
Target tracking              6, 7, 10, 24, 27, 30, 42, 44, 46, 55, 58, 68, 84, 95, 96, 97
Pattern recognition          3, 4, 21, 29, 34, 36, 48, 61, 72, 75, 86, 87, 100
Robotics                     34, 35, 50, 59, 61, 64, 65, 67, 70, 80, 90, 100
Review on data fusion        6, 17, 20, 28, 43, 44, 45, 46, 47, 49, 52, 60, 78
Image processing             31, 32, 33, 48, 59, 62, 73, 74, 79, 89, 101
Surveillance                 14, 18, 37, 53, 77, 85, 94, 98
Medical                      15
Other important areas of implementation of data fusion are in pattern recognition,74,79,89,100,101 image analysis,4,10,30,101 fusion of satellite images,33 and coordination and integration of disparate sensory information from mobile robot systems for obstacle detection.34,61,64 Edwards et al.13 and Gros et al.103 described the use of data fusion in non-destructive examination, and Garreau et al.15 used it to produce 3-D representations of vascular networks by fusing radiographs.

2.5 A practical example of NDT data fusion
A practical example ofNDT data fusion
35
U.Z3 -
- - -
-
System 1 o..„*
0.2 -
o
0.15 -
» 0.1 -
\ %
i
\
0.05 -
0
% \
i
/ -'
%
\
-"^r\— 1 — 1 — \ —4-H—h-H—i
0.5
i r > - i—i—v-r-=\—i
1.0 1-5 Defect depth / mm
1
2.0
i
1 1 1 1
2.5
Fig. 2.19 Sensor models for two NDT systems

The objective is to fuse systems 1 and 2 in order to generate a consensus assessment of the defect depth. The Dempster-Shafer rule of combination was applied to integrate information from each system. Each sensor provides the fusion centre with a probability (or belief) of an event. The fusion is performed at a statistical level and not directly at a measurement level (details of the operation are given in chapter 6). The fusion centre collects the information from multiple sensors and produces a global inference operation. The resulting consensus is a third curve (Fig. 2.20), with a greater degree of support than either of the two models. Using data fusion to combine information from multiple NDT sensors can improve the performance of an inspection.

Fig. 2.20 Fusion of multisensor data using the Dempster-Shafer rule of combination
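The flavour of this example can be reproduced with a short Python sketch. The book performs the fusion at the statistical level with the Dempster-Shafer rule (chapter 6 gives the details); the sketch below instead forms a simple product-and-renormalise consensus of two Gaussian sensor models, which likewise yields a third, narrower curve. The means and standard deviations are invented for illustration and are not the measured values of systems 1 and 2.

    import numpy as np

    # Hypothetical sensor models for the toe crack depth (values invented)
    depth = np.linspace(0.0, 2.5, 500)               # defect depth / mm
    s1 = np.exp(-0.5 * ((depth - 1.0) / 0.4) ** 2)   # system 1
    s2 = np.exp(-0.5 * ((depth - 1.3) / 0.3) ** 2)   # system 2

    # Consensus: pointwise product of the two beliefs, renormalised to
    # unit area (a simple stand-in for the statistical-level fusion)
    consensus = s1 * s2
    consensus /= np.trapz(consensus, depth)

    print('fused depth estimate: %.2f mm' % depth[np.argmax(consensus)])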
2.6 Conclusion
The basic principles and most common methodologies of data fusion have been presented in this chapter. The number of publications related to this technology demonstrates its popularity and implementation in a wide range of areas. The potential of data fusion was described and illustrated through several references. As stated by Rothman and Denton,43 a successful application of sensor fusion can only be achieved by an understanding of both the data fusion theory and the application domain. Among the data fusion methodologies presented, application of Bayes's theorem and Dempster-Shafer evidential reasoning can be used to model sensor uncertainty. Dempster-Shafer theory provides a powerful methodology for the representation and combination of evidence. More implementation will have to be performed to assess more deeply the potential and limits of this theory applied to NDT.

The wide range of application and the potential of the data fusion process for any scientific application where sensors play a role at any level are demonstrated by the large number of industrial and educational establishments in the world that are involved in data fusion. The automatic fusion of data from multiple sensors and the use of artificial intelligence for decision making have already demonstrated their capabilities for multiple target tracking, air combat, robotics and air/sea traffic control.59 It has been demonstrated through a practical example, using current instrumentation, that data fusion can improve the accuracy and reliability of non-destructive examinations. Its implementation should not be hindered by ignorance or conservatism. Conventional NDT methods have limited performance; data fusion may be the technology to develop in order to enhance their efficiency. The acceptance of data fusion and artificial intelligence in NDT will require a change in operator training, which will be oriented towards new technologies useful for NDT allied to integrity engineering. Problems exist with using data fusion for decision making and it will be some time before machines approach the data fusion capabilities of the human brain. Research needs to be carried out in a range of technical disciplines. Research on neural networks, expert systems and artificial intelligence will certainly produce new developments in various fields which hitherto have benefited very little from these advances.

References

1. Van Trees HL. Detection estimation and modulation theory, 1968, Vol. 1, John Wiley and Sons, New York.
2. Dempster AP. A generalization of Bayesian inference, Journal of the Royal Statistical Society, 1968, 30, 205-47.
3. Barnea DI, Silverman HF. A class of algorithms for fast digital image registration, Institute of Electrical and Electronics Engineers Transactions on Computers, Feb. 1972, C-21(2), 179-86.
4. Goodenough DG, Robson MA. Data fusion and object recognition, Proceedings of Vision Interface Conference, June 1988, Edmonton, Canada, 42pp.
5. Barbera AJ, Fitzgerald ML, Albus JS, Haynes LS. RCS: the NBS real time control system, Robots 8th Conference Proceedings, Detroit, MI, June 1984, 19.1-19.33.
6. Llinas J, Hall DL, Waltz E. Data fusion technology forecast for C3 MIS, Institute of Electrical and Electronics Engineers Proceedings of 3rd International Conference on Command, Control, Communications and Management Information Systems, May 1989, Bournemouth, UK, 148-58.
7. White FE, Llinas J. Data fusion: the process of C3I, June 1990, Defence Electronics, 77-83.
8. Waltz E, Llinas J. Multisensor data fusion, 1990, Artech House.
9. Hall DL. Mathematical techniques in multisensor data fusion, 1992, Artech House.
10. Roggemann MC, Mills JP, Rogers SK, Kabrisky M. Multisensor information fusion for target detection and classification, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, USA, 8-13.
11. McCoy M. Sensor data interpretation for improved flow data accuracy, Feb. 1992, Sensors, 8-14.
12. Miller WT, Glanzz FH, Kraft LG. Application of a general learning algorithm to the control of robotic manipulators, International Journal of Robotics Research, 1987, 6(2), 84-9.
13. Edwards I, Gros XE, Lowden DW, Strachan P. Fusion of NDT data, Dec. 1993, British Journal of Non Destructive Testing, 35(12), 710-13.
14. Franklin SE, Blodgett CF. An example of satellite multisensor data fusion, 1993, Computers & Geosciences, 19(4), 577-83.
15. Garreau M, Coatrieux JL, Collorec R, Chardenon C. Symbolic and numeric data fusion for the 3-D reconstruction of vascular networks, 1990, Signal Processing V: Theoretical Applications, Elsevier, Amsterdam, 919-27.
16. Pickles W. Distributed agents in parallel data fusion, Wall Street - A financial system, Software Development 92, Technical Track, Proceedings of the Conference, London, June 1992, 127-35.
17. Luo RC, Kay MG. Multisensor integration and fusion in intelligent systems, Sept./Oct. 1989, Institute of Electrical and Electronics Engineers Transactions on Systems, Man and Cybernetics, 19(5), 901-31.
18. Pearson JC, Gelfand JJ, Sullivan WE, Peterson RM, Spence CD. Neural network approach to sensory fusion, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 103-8.
19. Garvey TD. A survey of Artificial Intelligence approaches to the integration of information, Proceedings of the SPIE, 782, Infrared Sensors and Sensor Fusion, May 1987, Orlando, FL, 68-82.
20. Rauch HE. Probability concepts for an expert system used for data fusion, Fall 1984, AI Magazine, 55-60.
21. Magee MJ, Aggarwal JK. Using multisensory images to derive the structure of 3-D objects - A review, 1985, Computer Vision, Graphics and Image Processing, 32, 145-57.
22. Krzysztofuwicz R, Long D. Fusion of detection probabilities and comparison of multisensor systems, May/June 1990, Institute of Electrical and Electronics Engineers Transactions on Systems, Man and Cybernetics, 20(3), 665-77.
23. Smith LA, Godfrey K, Fox P, Warwick K. A view technique for fault detection in multisensor probes, March 1991, 1062-7.
24. Dawn FE. Fundamental limits in multisensor data fusion, 1990, IEEE CH2872-0/90/0000-0316, 316-19.
25. Chao JJ, Drakopoulos E, Lee CC. An evidential reasoning approach to distributed multiple-hypothesis detection, Institute of Electrical and Electronics Engineers Proceedings of the 26th Conference on Decision and Control, Los Angeles, CA, Dec. 1987, 1826-31.
26. Lee RH, Van Vleet WB. Registration error analysis between dissimilar sensors, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 109-14.
27. Durrant-Whyte HF. A modular, transputer-based architecture for multisensor data fusion, April 1990, SERC ACME GRE/42419, 229-53.
28. Linn RJ, Hall DL, Llinas J. A survey of multisensor data fusion systems, Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, FL, 13-29.
29. Tong CW, Rogers SK, Mills JP, Kabrisky MK. Multisensor data fusion of laser radar and forward looking infrared (FLIR) for target segmentation and enhancement, Proceedings of the SPIE, 782, Infrared Sensors and Sensor Fusion, May 1987, Orlando, FL, 10-19.
30. Ruck DW, Rogers SK, Kabrisky M, Mills JP. Multisensor target detection and classification, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 14-21.
31. Duane G. Pixel-level sensor fusion for improved object recognition, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 180-5.
32. Johnson DG, Hindley N, Fullwood J. Multisensor fusion for classification and change detection in remote sensed imagery, Feb. 1991, 4/1-4/4.
33. Ehlers M. Multisensor image fusion techniques in remote sensing, Feb. 1991, Journal of Photogrammetry and Remote Sensing, 46(1), 19-30.
34. Richardson JM, Marsh KA. Fusion of multisensor data, Dec. 1988, International Journal of Robotics Research, 7(6), 78-96.
35. Shapiro J, Mowforth P. Data fusion in 3D through surface tracking, Proceedings of the 3rd International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, 1, 15-18 July 1990, Charleston, USA, 163-8.
36. Flynn AM. Combining sonar and infrared sensors for mobile robot navigation, Dec. 1988, International Journal of Robotics Research, 7(6), 5-14.
37. Automated scene interpretation for ROVs, Nov./Dec. 1993, Underwater System Design, 18(6), 11-13.
38. Giarratano J, Riley G. Expert Systems, 1989, Boyd & Fraser.
39. Beissner RE, Bartels KA, Fisher JL. Prediction of the probability of eddy current flaw detection, 1988, Review of Progress in Quantitative Non Destructive Examination, Williamsburg, VA, USA, 22-26 June 1987, Plenum Press, 7B.
40. Nockemann C, Tillack GR, Wessel H, Heidt H, Konchina V. Receiver operating characteristic (ROC) in nondestructive testing inspection, Proceedings of the NATO Advanced Research Workshop, 'Advances in Signal Processing for NDE of Materials', Quebec, Canada, 17-20 Aug. 1993.
41. Van Dijk GM, Boogaard J. NDT Reliability - a way to go, 1992, Proceedings of the Non-Destructive Testing 1992 Conference, Elsevier Science, xxxi-xliii.
42. Parra-Loera R, Thompson WE, Salvi AP. Adaptive selection of sensors based on individual performances in a multisensor environment, Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, FL, 30-6.
43. Rothman PL, Denton RV. Fusion or confusion: knowledge or nonsense? Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, FL, 2-12.
44. Blackman SS. Theoretical approaches to data association and fusion, Proceedings of the SPIE, 931, Sensor Fusion, 4-6 April 1988, Orlando, FL, 50-5.
45. Schoes J, Castore G. A distributed sensor architecture for advanced aerospace systems, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 74-85.
46. Luo RC, Kay MG. Multisensor integration and fusion: issues and approaches, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 42-9.
47. Hackett JK, Shah M. Multisensor fusion: a perspective, IEEE CH2876-1/90/0000-1324, 1990, 1324-30.
48. Pau LF. Sensor data fusion, 1988, Journal of Intelligent and Robotic Systems, 1(2), 103-16.
49. Thomopoulos SCA. Sensor integration and data fusion, Proceedings of the SPIE, 1198, Sensor Fusion II: Human and Machine Strategies, Nov. 1989, Philadelphia, PA, 178-91.
50. Harris CJ. Distributed estimation, inferencing and multi-sensor data fusion for real time supervisory control, Sept. 1989, Proceedings of the Artificial Intelligence in Real-time Control IFAC Workshop, Shenyang, China, 19-24.
51. Tenney RR, Sandell NR Jr. Detection with distributed sensors, July 1981, Institute of Electrical and Electronics Engineers Transactions on Aerospace and Electronic Systems, 17(4), 501-10.
52. Thomopoulos SCA. Theories in distributed decision fusion: comparison and generalization, Proceedings of the SPIE, 1383, Sensor Fusion III: 3D Perception and Recognition, Nov. 1990, 623-34.
53. Chair Z, Varshney PK. Optimal data fusion in multiple sensors detection system, Jan. 1986, Institute of Electrical and Electronics Engineers Transactions on Aerospace and Electronic Systems, 22(1), 98-101.
54. Viswanathan R, Thomopoulos SCA, Tumuluri R. Optimal serial distributed decision fusion, July 1988, Institute of Electrical and Electronics Engineers Transactions on Aerospace and Electronic Systems, AES-24(4), 366-75.
55. Thomopoulos SCA, Okello NN. Distributed detection with consulting sensors and communication cost, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, FL, 31-40.
56. Shafer G. A mathematical theory of evidence, 1976, Princeton University Press, Princeton, New Jersey, USA.
57. Dasarathy BV. Decision fusion, 1994, IEEE Computer Society Press.
58. Thomopoulos SCA. Theories in distributed decision fusion, 1991, IFAC Distributed Intelligence Systems, Virginia, USA, 195-200.
59. Abidi MA. Sensor fusion: a new approach and its application, Proceedings of the SPIE, 1198, Sensor Fusion II: Human and Machine Strategies, Nov. 1989, Philadelphia, USA, 235-46.
60. Abdulghafour M, Goddard J, Abidi MA. Non-deterministic approaches in data fusion - A review, Proceedings of the SPIE, 1383, Sensor Fusion III: 3D Perception and Recognition, Nov. 1990, 596-610.
61. Crowley JL, Demazeau Y. Principles and techniques for sensor data fusion, May 1993, Signal Processing, 32(1-2), 5-27.
62. Huntsberger TL, Jayaramamurthy SN. A framework for multi-sensor fusion in the presence of uncertainty, 1987, Proceedings of the 1987 Workshop on Spatial Reasoning and Multisensor Fusion, 5-7 Oct. 1987, AAAI, 345-50.
63. Thomopoulos SCA, Viswanathan R, Bougoulias DC. Optimal decision fusion in multiple sensor systems, Sept. 1987, Institute of Electrical and Electronics Engineers Transactions on Aerospace and Electronic Systems, AES-23(5), 644-53.
64. Durrant-Whyte HF. Sensor models and multisensor integration, 1988, 303-12.
65. Durrant-Whyte HF. Consistent integration and propagation of disparate sensor observations, Fall 1987, International Journal of Robotics Research, 6(3), 3-24.
66. Duda RO, Hart E, Nilsson NJ. Subjective Bayesian methods for rule based inference systems, 1976, Proceedings of the National Computer Conference, 1075-82.
67. Malik R, Polkowski E. Morphological technique for combination of sensor readings, Proceedings of the SPIE, 1350, Image Algebra and Morphological Image Processing, 1990, 165-76.
68. Dillard RA. Tactical inferencing with the Dempster/Shafer theory of evidence, 1983, Proceedings of the Institute of Electrical and Electronics Engineers 17th Asilomar Conference on Circuits, Systems and Computers.
69. Zadeh LA. Fuzzy logic, April 1988, Institute of Electrical and Electronics Engineers Computer, 94-102.
70. Ferrari C. Coupling fuzzy logic techniques with evidential reasoning for sensor data interpretation, Proceedings of Conference on Intelligent Autonomous Systems 2, Dec. 1989, Amsterdam, Netherlands, 2, 965-71.
71. Russo F, Rampoui G. Fuzzy methods for multisensor data fusion, April 1994, Institute of Electrical and Electronics Engineers Transactions on Instrumentation and Measurement, 43(2), 288-94.
72. Mathur B, Wang HT, Liu SC, Koch C, Luo J. Pixel level data fusion: from algorithm to chip, Proceedings of the SPIE, 1473, Visual Information Processing: from Neurons to Chips, April 1991, Orlando, FL, 153-60.
73. Seetharaman G, Chu CHH. Image segmentation by multisensor data fusion, Institute of Electrical and Electronics Engineers Proceedings of 22nd South-eastern Symposium on System Theory, March 1990, Cookeville, USA, 583-7.
74. Duncan JS, Gindi GR, Narendra KS. Low level information fusion: multisensor scene segmentation using learning automata, Proceedings of the 1987 Workshop on Spatial Reasoning and Multisensor Fusion, Oct. 1987, AAAI, 323-33.
75. Tsao TR, Libert JM. Fusion of multiple sensor imagery based on target motion characteristics, Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, USA, 37-47.
76. Libby V, Bardin RK. Conversion of sensor data for real time scene generation, Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, USA, 59-64.
77. Thomas J. MITAS: multisensor imaging technology for airborne surveillance, Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, USA, 65-74.
78. Pau LF. Behavioral knowledge in sensor/data fusion systems, 1990, Journal of Robotic Systems, 7(3), 295-308.
79. Wright WA. A Markov random field approach to data fusion and colour manipulation, May 1989, Image and Vision Computing, 7(2), 144-50.
80. Xu H. Efficient fusion technique for disparate sensory data, 1991, Institute of Electrical and Electronics Engineers Proceedings of IECON'91, CH2971-9/91/0000-2535, 2535-40.
81. Beard W, Jones A. Harnessing neural network, Dec. 1990, Electronics World and Wireless World, 1047-52.
82. Learning with neural network, August 1993, Laboratory Equipment Digest, 8-9.
83. Charlton PC. Investigation into the suitability of a neural network classifier for use in an automated tube inspection system, August 1993, British Journal of Non Destructive Testing, 35(8), 433-7.
84. Whittington G, Spraclen T. The application of a neural network model to sensor data fusion, Proceedings of the SPIE, 1294, Applications of Artificial Neural Networks, April 1990, Orlando, USA, 276-83.
85. Eggers M, Khuon T. Neural network data fusion concepts and application, Institute of Electrical and Electronics Engineers Proceedings of International Joint Conference on Neural Networks, June 1990, San Diego, USA, II.7-II.16.
86. Ruck DW, Rogers SK, Kabrisky M, Mills JP. Multisensor fusion classification with a multilayer Perceptron, Institute of Electrical and Electronics Engineers Proceedings of International Joint Conference on Neural Networks, June 1990, San Diego, USA, II.863-II.868.
87. Rajapakse J, Acharya R. Multisensor data fusion within hierarchical neural networks, Institute of Electrical and Electronics Engineers Proceedings of International Joint Conference on Neural Networks, June 1990, San Diego, USA, II.17-II.22.
88. Chilips ML, Steele NF. Non destructive evaluation using neural network, Nov./Dec. 1989, Nuclear Plant Journal, 44-50.
89. Kjell BP, Wang PY. Data fusion and image segmentation using hierarchical simulated annealing on the connection machine, Proceedings of the SPIE, 1002, Intelligent Robots and Computer Vision, Nov. 1988, Cambridge, USA, 330-7.
90. Chen S. Adaptive control of multisensor systems, Proceedings of the SPIE, 931, Sensor Fusion, April 1988, Orlando, USA, 98-102.
91. Udpa L, Udpa S. Application of neural network to non destructive evaluation, Oct. 1989, Report, Colorado State University, 143-7.
92. Udpa L, Udpa SS. Eddy current defect characterization using neural network, March 1990, Materials Evaluation, 48, 342-53.
93. Windson CG, Anselme F, Capineri L, Mason JP. The classification of weld defects from ultrasonic images: a neural network approach, Jan. 1993, British Journal of Non Destructive Testing, 35(1), 15-22.
94. Blackman SS, Broida TJ. Multiple sensor data association and fusion in aerospace applications, June 1990, Journal of Robotic Systems, 7(3), 445-85.
95. Easthope PF, Goodchild EJG, Rhodes SL. A computationally tractable approach to real time multi-sensor data fusion, Proceedings of the SPIE, 1096, Signal and Data Processing of Small Targets, March 1989, Orlando, USA, 298-308.
96. Deb S, Mallubhatla R, Pattipati K, Bar-Shalom Y. A multisensor multitarget data association algorithm for heterogeneous sensors, 1992, Proceedings of the 1992 American Control Conference, IEEE 92CH3072-6, 2, 1779-83.
97. Thompson WE, Parra-Loera R, Ta CWO. A pseudo k-means approach to the multisensor multitarget tracking problem, Proceedings of the SPIE, 1470, Data Structures and Target Classification, April 1991, Orlando, USA, 48-58.
98. Gerhart G, Martin G, Gonda T. Thermal image modeling, Proceedings of the SPIE, 782, Infrared Sensors and Sensor Fusion, May 1987, Orlando, Florida, USA, 3-9.
99. Macnicol G. Complex imagery, Sept. 1991, Computer Graphics World, 75-9.
100. Henderson T, Weitz E, Hansen C, Mitiche A. Multisensor knowledge systems: interpreting 3-D structure, Dec. 1988, International Journal of Robotics Research, 7(6), 114-37.
101. Lee RH, Leahy R. Segmentation of multisensor images, Proceedings of the 6th Multi-dimensional Signal Processing Workshop, Sept. 1989, Pacific Grove, USA, 23.
102. Popovic D, Heine R, Scharne T, Wolter F. Search strategies for collision-free path planning for robot manipulators, Robotersysteme, 8(2), 1992, 67-73.
103. Gros XE, Strachan P, Lowden DW, Edwards I. NDT data fusion, 1994, 6th European Conference on Non Destructive Testing, Nice, France, Oct. 1994.
3

Non-destructive Testing Techniques

'Non-destructive testing has no clearly defined boundaries'
R. Halmshaw, 1991
3.1 Introduction
The structural integrity of materials, components and structures has to be assessed for quality control, safety regulations and product specifications. Numerous testing techniques have been developed for maintenance and condition monitoring. These techniques can be categorised into two main classes: destructive testing, based on fracture mechanics, and non-destructive testing, which leaves the inspected component undamaged. Non-destructive testing (NDT) is particularly relevant to the inspection of large and expensive components. The aerospace, food, nuclear and offshore industries are only a few examples of industries which employ a wide range of NDT techniques. The most commonly used NDT methods in industry include visual inspection, liquid penetrant inspection, magnetic particle inspection, eddy current testing, alternating current potential drop, alternating current field measurement, ultrasonic testing and radiography. These NDT techniques can be used for:

• the detection of unwanted discontinuities and separations in a material (flaws);
• structural assessment of a component (microstructure and matrix structure);
• metrology and dimensional purposes (thickness measurement, checking of displacement and alignment);
• determination of physical properties of a material (electrical, magnetic or mechanical properties);
• the detection of foreign bodies in food.
The principal objective of a non-destructive examination (NDE) is to provide the inspector with quantitative as well as qualitative information. This is achieved by detecting, locating and sizing any detected flaws. Several types of defect exist, for example cracks, voids, corrosion, inclusions, delamination, impact damage and holes. These defects begin as minor flaws which can occur as the result of excessive loading or external stresses applied to a material. If not discovered at an early stage, they may develop into dangerous faults. Defect quantification requires considerable skill and experience, very often leading to the use of more than one NDT method, owing to the fact that each method is able to provide limited information on a particular category of defect. For example, eddy current testing will allow detection of surface defects but internal defects will remain unseen by this method. Therefore, the use of another method such as ultrasound will be required.

The terms 'technique' and 'method' have been cautiously applied as follows: method will be used for the description of a discipline, such as ultrasonic inspection, while pulse-echo or through-transmission will be qualified as techniques. NDT methods make use of physical principles such as electromagnetism and optics, and an understanding of the physics of the inspection methods is required to ensure an effective inspection procedure. The limitations and advantages of the most common NDT methods suited to data fusion are reviewed in this chapter. The sensitivity and accuracy of each method is dependent upon its application, and the performance of NDT systems can be compared and assessed using probabilistic and statistical analysis. Such studies are becoming more common in industry in order to define the best suited method or defect detection technique for a specific application. These performance criteria will be discussed in this chapter. Less common methods such as infrared thermography and acoustic emission, and more specialised methods including proton annihilation, neutron scattering and microwaves, are briefly described below. This list is not exhaustive as new methods are continuously being developed. Not all possible NDT methods are presented in this chapter, and additional information on NDT can be found in the literature.1-5
Visual inspection
Visual inspection is the original method of NDT and should not be neglected when performing a NDE. Direct visual inspection, with the naked eye or with optical aids such as a magnifying glass, microscope, lamp, camera or horoscope, is still the first step to be carried out in a NDE and is usually followed by more sophisticated NDT methods.6 3.2.1 MICROSCOPY Microscopy and macro-photography can be used for non-destructive examination of components. Traditionally a light microscope is used to observe and characterise the surface structure up to the limit of magnification of the apparatus. Spatial resolution, depth of focus and magnification are limiting factors. Figure 3.1 is a micrograph of the surface of a composite material showing damages to composite fibres and matrix breakage. Image processing developments have greatly improved the quality of visual inspection. Photographs or images from boroscopes, microscopes and video cameras can now be enhanced using smoothing and filtering facilities. 3.2.2 BOROSCQPY Boroscopes, also known as endoscopes and introscopes, can be used for the inspection of areas which otherwise are not accessible without disassembly.7 A horoscope is an instrument used for inspecting cavities and is normally composed of an illuminating light and a miniature camera. The use of optical fibres makes boroscopic equipment flexible, allowing the inspection of hollow components, such as tubes or pipes, and cavities where there is no straight or direct access. Boroscopes 6 m long with a direction of view from 0 ° to 100° and magnification facilities are now available. They can also be coupled to a closed-circuit television camera (CCTV), providing the operator with an image on a
Fig. 3.1 Micrograph of the surface of a composite material showing broken fibres and matrix breakage after impact damage (magnification: x 100)

screen which can be recorded on a videotape. A remote viewing facility finds applications in the inspection of valves, pistons and cylinders of an engine, searching for drugs or explosives by security officers, inspection of combustion chambers in aircraft, corrosion mapping in pressure tanks and nuclear reactor vessel inspection.

3.2.3 LASER HOLOGRAPHY

Laser holography can be used in NDE to measure deformation of components under stress, to detect the presence of a defect or to measure the surface uniformity of a component. The general principle of holography is as follows: a 3-D image is obtained by recording the interference of a wavefront diffusely reflected by an object with a reference wavefront (Fig. 3.2). Laser holography makes use of electromagnetic waves from visible, coherent laser light (typically a He-Ne gas laser).
Fig. 3.2 Recording of a laser hologram
Stable conditions are required during the exposure time. Pulse lasers reduce the exposure time and can be used where stable conditions cannot be achieved. Image reconstruction can be performed by lighting the hologram with the same reference beam. An observer will be able to view a virtual 3-D image of the object through the hologram (Fig. 3.3).
Fig. 3.3 Reconstruction of a laser hologram
Components inspected by laser holography are usually heated slightly and placed under stress or subjected to small vibrations during inspection, and a differential test is performed by comparing interference fringes of the specimen under stress with reference fringes of the static component. Laser holography is used for the inspection of pneumatic tyres, turbine blades and honeycomb composite structures in the aerospace industry.8 However, it is limited to surface examination and provides only qualitative defect information.

In summary, visual examination techniques can be described as limited to the detection of surface-breaking defects, location of corrosion and surface roughness assessments. Their main advantage is that results are displayed in a visual format which can be readily interpreted by an inspector. With visual equipment being continuously updated and coupled with image processing equipment, visual inspection still has an important place in NDT and should not be neglected.

3.3 Liquid penetrant inspection
Liquid penetrant inspection is a low cost method, easy to apply, used to detect surface breaking defects such as cracks, laps and porosity in forgings, castings, ceramics and non-porous materials.9 This method can be described as an extension of visual inspection but with a greater sensitivity. Large areas can be inspected, but liquid penetrant is a slow process in terms of application and flaw indication. The principle of liquid penetrant inspection consists of spraying a coloured dye onto the surface of a component; the dye seeps into any surface opening by capillary action. The liquid concentrates in cavities and, after removal of excess penetrant, is made visible by applying a developer, which reverses the capillary action, to the surface of the specimen. As a manual process, liquid penetrant can be very time consuming for large scale inspection tasks. It can be automated, using robotics and pattern recognition facilities, for on-line manufacture examination.
Liquid penetrant inspection is a six-stage process (Fig. 3.4) which is performed as follows:

• The surface of the component inspected should be cleaned and dried prior to inspection.
• A coloured liquid penetrant is sprayed onto the surface of the component.
• The excess liquid is removed from the surface by rinsing with water or a chemical.
• A developer is applied over the surface of the component to reveal liquid penetrant trapped in defects by chemical reaction.
• Inspection of the component is performed and defects located.
• Post-cleaning of the specimen is carried out after inspection.
Fig. 3.4 Liquid penetrant inspection procedure
Three types of liquid penetrant can be used: water soluble, post-emulsifiable and solvent removable. Water soluble penetrant can be washed directly with water and is the most widely used type of penetrant. Post-emulsifiable dyes are oil based penetrants which require the use of an emulsifier prior to water washing. They are more costly than water washable penetrants but have a greater defect sensitivity. Solvent removable penetrants are used for on-site inspection of large workpieces. They are oil based penetrants which require a chemical solvent for cleaning. Liquid penetrants can be visible under white light or fluorescent under ultraviolet (UV) light. The choice of penetrant depends upon the sensitivity required, the size of the specimen inspected, accessibility and the cost of the inspection. The different sensitivities and relative costs of dye penetrants are shown in Fig. 3.5. Surface cleanliness and roughness, and the size, shape and accessibility of the sample are limiting factors of liquid penetrant inspection. This method is usually used for testing non-ferrous components such as austenitic stainless steel, and is widely used in the aircraft industry to inspect ceramics and structural weldments. Magnetic particle inspection is considered more appropriate for ferrous or magnetisable materials.
Fig. 3.5 Difference in sensitivity and cost for various liquid penetrants (in decreasing order of sensitivity and cost: post-emulsifiable fluorescent, solvent removable fluorescent, water washable fluorescent, post-emulsifiable visible dye, solvent removable visible dye, water washable visible dye)
3.4 Magnetic particle inspection
Magnetic particle inspection (MPI) is used for the detection of surface breaking cracks in ferromagnetic materials.10-12 It is one of the most extensively used electromagnetic methods in industry as it is easy to apply and provides a direct visual indication of surface breaking cracks. Magnetic particle inspection involves magnetisation by the application of a permanent magnet, electromagnet or electric current to the surface of the component inspected. This produces a magnetic field inside the material which becomes distorted by the presence of a flaw, causing a local magnetic flux leakage (Fig. 3.6).
Fig. 3.6 Magnetic leakage flux in the vicinity of a surface breaking crack and accumulation of magnetic particles

Fine ferromagnetic particles in the form of dry powder or suspended in a liquid (oil or water) are then sprayed onto the surface of the specimen to reveal the leakage field. Magnetic particles are available as daylight visible or UV fluorescent particles (Fig. 3.7(a) and (b)). Their size ranges from 1 to 25 µm for wet particles and up to 150 µm for dry particles. The particles are attracted by the leakage field and accumulate in the vicinity of the crack which is subsequently made visible (Fig. 3.6). The greatest leakage flux for a given test field is obtained for flaws positioned at right angles to the lines of force. For detection, flaws should lie between 45° and 90° to the magnetic field, which is commonly applied in two directions at right angles.
Fig. 3.7 (a) Daylight and (b) fluorescent magnetic particle detection of a HAZ crack in a butt welded plate of medium carbon steel
3.4.1 MAGNETIC HYSTERESIS

The magnetic flux density, B, induced in the test zone is a function of the magnetic permeability of the material, μr, and the applied magnetic field, H:

B = \mu_r \mu_0 H \qquad (3.1)

where B is measured in tesla (T), H in ampere per metre (A/m) and μ0 in henry per metre (H/m); μr is dimensionless. μ0 is the permeability of free space: μ0 = 4π × 10⁻⁷ H/m. The value of μr varies with the material composition; μr = 1 for non-ferromagnetic material. The magnetisation curve B = f(H) varies in a non-linear manner (Fig. 3.8). For unmagnetised specimens, the value of B is null (point a) and increases up to a saturation value Bs. If a ferromagnetic material is magnetised to saturation its relative permeability is equal to unity. After saturation of a component, any increase in the magnetic field strength H
Fig. 3.8 Typical hysteresis curve for a ferromagnetic material
due to the presence of a defect leads to a decrease in permeability and flux leakage occurs. If the magnetic field H is reduced to zero, the value of B does not return to zero but to a value Br called the remanent flux density. By reversing the direction of H, B returns to zero (point b), where the distance ab is known as the coercive force of the material. If H is increased further in the reverse direction, B decreases to a negative saturation value (−Bs). Finally, if H is reduced to zero, the plot of B against H does not retrace its original path but follows the path cde and eventually reaches Bs again. The plot of B against H is known as a hysteresis curve and is shown in Fig. 3.8. The behaviour of electromagnetic waves in conducting materials can be derived from Maxwell's equations. It is not within the scope of this book to investigate these equations further; they can be found in Reference 11. Several procedures of magnetisation are summarised in Table 3.1; they include prods, permanent magnets, electromagnetic coils, yokes and flexible cables. Prods are current flow techniques; permanent magnets, electromagnetic coils, yokes and flexible cables are magnetic flow techniques. Permanent magnets and yokes are well suited for on-site and laboratory inspection but are limited to the coverage of small areas. Flexible cables and prods are more useful for the magnetisation of large specimens. The best inspection results are usually achieved on materials with high relative permeability.
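To make equation (3.1) concrete, here is a minimal numerical sketch; the field strength and relative permeability below are illustrative values, not taken from the text:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def flux_density(mu_r, H):
    """Equation (3.1): B = mu_r * mu_0 * H, with H in A/m and B in tesla."""
    return mu_r * MU_0 * H

# Illustrative values: an applied field of 2000 A/m in a steel with mu_r ~ 400
print(f"B = {flux_density(400, 2000):.2f} T")  # ~1.01 T
```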
Table 3.1 Advantages and limitations of magnetising techniques used for MPI

Permanent magnet
Advantages: portable; low cost
Restrictions: limited test area; field magnitude limited; magnet difficult to remove from component

Electromagnetic coil
Advantages: magnetises the specimen parallel to the axis of the coil; very uniform field
Restrictions: mainly for bar-shaped materials; magnetisation uniformity is affected by the position of the specimen within the coil

Yoke
Advantages: portable; adjustable shape and design; easy to manipulate
Restrictions: positioning is important; good contact with specimen is required

Flexible cable
Advantages: magnetisation of large specimens; no danger of burn marks; efficient for underwater inspection of pipes
Restrictions: field limited to the outer surface of the pipe; voltage requirements increase with cable length

Prod contacts
Advantages: hand held; useful for magnetisation of large areas
Restrictions: difficult to position; can cause current arc damage (burn marks); requires high current supply
3.4.2 MAGNETISING CURRENT

The choice of the magnetising current, AC or DC, depends upon the material to be inspected and the type of defect sought. Alternating current is preferred for the inspection of soft materials, such as pure iron and low carbon steels, as it produces excellent mobility of the particles and demagnetisation is usually not required (the higher the magnetic permeability, the easier the magnetisation). Moreover, AC confines the magnetic field to the surface of the specimen which makes the technique very efficient for the detection of surface breaking defects. Hard materials (alloy steels, high carbon steel) are difficult to magnetise and exhibit a high remanence. The field created with DC penetrates deeper through the component inspected and sub-surface discontinuities can sometimes be detected. Direct current magnetising techniques introduce a constant magnetic flux in the material. Demagnetisation of the specimen is required after inspection, especially with hard materials, as:

• residual fields may interfere with magnetically sensitive components;
• abrasive particles may be attracted to magnetised areas;
• with electric arc welding, the arc may be deflected.
Demagnetisation can be performed by submitting the component to a continuously reversing magnetic field of decreasing strength. Magnetic particle inspection is a well established method and favoured by industry as it is low cost and portable and provides the operator with an immediate visual display of the flaw. However, quantitative information other than the length of the defect cannot be obtained except by using filing or grinding methods. It is limited to the detection of surface and near-surface flaws in ferromagnetic materials, and the sensitivity of the method is dependent upon multiple parameters such as the magnetisation technique and the electromagnetic properties of the material inspected, as well as the size, shape and orientation of the defect. Automatic MPI, using video cameras and image processing facilities, has been developed for the examination of blade roots, rotor grooves of turbine rotors and butt welds of pressure vessels.13

3.5 Eddy current testing
The eddy current method uses the principle of electromagnetic induction to inspect a component.14-19 A magnetic field which varies with time induces electrical currents in conducting materials. These currents are called Courants de Foucault, after the physicist who discovered them, but are more commonly known as eddy currents. The presence of a flaw affects the formation of eddy currents, and this perturbation can be measured to locate and quantify defects. The eddy current method can be applied to the inspection of any electrically conductive material for detection of surface and sub-surface defects as well as corrosion mapping. Eddy currents can also be used for monitoring crack growth, as this is a very reproducible method of inspection.

3.5.1 CONVENTIONAL EDDY CURRENT TESTING

An alternating current of fixed frequency sent to a coil creates a magnetic field in the vicinity of the coil. The alternating magnetic field is perpendicular to the direction of the
current and parallel to the axis of the coil. The coil is held in a probe which is scanned over the surface of a component. If the coil is brought into proximity with a conductive material, the magnetic field in the coil induces electrical 'eddy currents' in the material. These currents give rise to a secondary magnetic field in the specimen called the 'induced magnetic field'. According to Lenz's law, the induced magnetic field is of equal magnitude but has a polarity which opposes the original magnetic flux in a non-ferromagnetic metal. The presence of a discontinuity on the surface of the component inspected will perturb the induced magnetic field and will also affect the eddy currents. The eddy current variation is recorded by measuring the changes in electrical impedance of the coil in terms of magnitude and phase. The output of the coil is usually displayed on the cathode ray tube (CRT) of an oscilloscope. An eddy current transducer can be electrically represented as a resistance and a coil in series (Fig. 3.9).
Fig. 3.9 Basic equivalent electric circuit of an eddy current transducer

The excitation current to the coil is a single frequency sinusoidal current. An eddy current transducer is characterised by two electrical quantities: its ohmic resistance, R in ohm (Ω), and its inductance, L in henry (H). The reactance X_L (Ω) and the impedance Z (Ω) of the circuit shown in Fig. 3.9 are given by

X_L = 2\pi f L

Z = R + i\omega L

|Z| = \sqrt{R^2 + X_L^2} \qquad (3.2)

\tan\phi = \omega L / R

where i² = −1, f = ω/2π is the frequency of the alternating current in hertz (Hz) and |Z| is the modulus of Z. Impedance plane diagrams are used to present graphically the form and amplitude of impedance changes by plotting the inductance against the resistance (Fig. 3.10). In free space, R = R₀ and L = L₀, and a normalised plane diagram can be produced by plotting ωL/ωL₀ against R/ωL₀ (Fig. 3.11). The output signal of an eddy current probe is a time varying voltage of which the amplitude and phase can be measured. The induced eddy currents tend to be concentrated near the surface of the specimen; this phenomenon is called the skin effect. Eddy currents decay exponentially with depth below the surface of the specimen. Ferromagnetic materials have a high magnetic permeability which causes a shallow skin depth and a rapid attenuation of the magnetic field throughout the material.
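As a minimal sketch of equation (3.2), the impedance magnitude and phase angle of the series R-L model of Fig. 3.9 can be computed as follows; the coil values are illustrative, not taken from the text:

```python
import math

def coil_impedance(R, L, f):
    """Return |Z| (ohm) and phase (degrees) of a series R-L circuit at frequency f (Hz)."""
    omega = 2 * math.pi * f
    X_L = omega * L                           # reactance X_L = 2*pi*f*L
    Z_mod = math.sqrt(R**2 + X_L**2)          # |Z| = sqrt(R^2 + X_L^2)
    phase = math.degrees(math.atan2(X_L, R))  # tan(phi) = omega*L / R
    return Z_mod, phase

# Illustrative probe: R = 20 ohm, L = 100 uH, driven at 100 kHz
Z_mod, phase = coil_impedance(20.0, 100e-6, 100e3)
print(f"|Z| = {Z_mod:.1f} ohm, phase = {phase:.1f} deg")  # ~65.9 ohm, ~72.3 deg
```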
Fig. 3.10 Impedance plane diagram
Fig. 3.11 Normalised impedance plane diagram for a non-ferromagnetic tube encircled by a coil for different frequencies (f) and fill factor values (η)

The depth of penetration, or skin depth, for a plane conductor in a uniform field is defined as the depth at which the magnitude of the eddy current is equal to 1/e of its surface value. The skin depth d (m) is given by

d = \frac{1}{\sqrt{\pi \mu \sigma f}} \qquad (3.3)

where μ is the absolute magnetic permeability (H/m) and is equal to μ₀μr (see section 3.4.1), and σ is the electrical conductivity (S/m) of the material inspected.
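A small sketch evaluating equation (3.3) shows the effect of frequency on penetration; the conductivity value for aluminium is a typical handbook figure, not one quoted in the text:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def skin_depth(sigma, mu_r, f):
    """Equation (3.3): d = 1 / sqrt(pi * mu * sigma * f), with mu = mu_0 * mu_r."""
    return 1.0 / math.sqrt(math.pi * MU_0 * mu_r * sigma * f)

# Aluminium: sigma ~ 3.5e7 S/m, mu_r = 1 (non-ferromagnetic)
for f in (1e3, 10e3, 100e3, 1e6):
    print(f"f = {f:9.0f} Hz -> skin depth = {skin_depth(3.5e7, 1.0, f) * 1e3:.3f} mm")
```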
It can be seen that as the frequency, permeability and conductivity increase, the skin depth decreases. At high frequencies, eddy currents will be concentrated at the surface of the specimen. Calibration of the eddy current equipment is necessary prior to inspection. A typical eddy current instrument uses a Wheatstone type bridge as illustrated in Fig. 3.12. A null potential difference is produced by balancing the inductance of the bridge. When the inspection coil is brought close to a conductive material, its impedance is modified, which upsets the balance of the bridge. The distance of the coil above the specimen, called the lift-off, must be kept constant during calibration and inspection. Lift-off variations will cause a change in the amplitude of the eddy current signal which may lead to misinterpretation.
Fig. 3.12 Typical eddy current bridge circuit
Eddy current coils have been categorised into three main types: encircling coil, probe coil and bobbin coil (Table 3.2). Among these, absolute and differential transducers can be found. An absolute transducer is composed of a single coil where any variation in impedance is directly measured. A differential sensor is made of a pair of coils; the resistance and inductance are cancelled out when the two coils measure the same value. Differential eddy current probes have been designed for tubing inspection. Rotary probes have been designed for rapid and thorough inspection of tubes and bolt holes. The resulting signal is a waterfall display which provides the operator with a readout of the distance of the defect from the start of the scan. In order to improve the efficacy and to reduce the time of eddy current inspection, multiple sensor systems have been developed.20,21 The advantages of multisensor arrays over single sensors have already been discussed in chapter 2. One of the most common eddy current multiprobe devices is the Lizard system manufactured by Millstrong Ltd. The system is composed of a multisensor probe head, computer and software for signal display and analysis. It allows rapid detection and sizing of surface breaking defects by scanning areas 300 mm wide and 1 m long in a single pass. The probe head includes a multiple
Table 3.2 Description of characteristics of various eddy current coils

Absolute coil: signal easy to analyse; responds to sudden, narrow and gradual disturbances; affected by lift-off
Differential coil: complex signal; does not detect gradual long defects; can detect sudden flaws
Encircling coil (OD coil, feed-through coil): the test object is surrounded by the coil; inspection of bar and tubular materials; does not detect circumferential defects
Probe coil (pancake coil, surface coil, flat coil): used for surface examination; slow manual inspection speed; high defect resolution
Bobbin coil (ID coil, inside coil): inspection from the inside of an object; for pipe and tube inspection; fast for large inspection sampling
array of two types of sensor coils: absolute and differential. Two probes are conventionally used: the LP100 for flaw detection and the LP600 for sizing. The signal display consists of four pairs of traces and one lift-off trace (Fig. 3.13). A disadvantage of this system is that the speed of the scan has to be identical to the speed of the traces displayed on the screen. This is a major problem, especially for a system which has been developed for diver-executed underwater inspection. Eddy current inspection is dependent upon multiple variables: the amplitude and the frequency of the coil current, the electrical conductivity and magnetic permeability of the material inspected, the probe lift-off, the component design and size, and the size, orientation and location of the defect. The material conductivity itself is affected by the chemical composition of the metal, heat treatment and the temperature of the specimen at the time of inspection. Eddy current signal interpretation is not always obvious and requires training and experience. Flaw characterisation is difficult as there are very few theoretical solutions which mathematically represent the information contained in the phase of the signal. Moreover, changes in material composition (e.g. the HAZ in a weld), lift-off and edge effects may significantly perturb the signal. Another limitation of the technique is that flaws which lie outside the perimeter of the test coil are not detectable.

3.5.2 SPECIAL EDDY CURRENT TECHNIQUES

Pulse eddy current testing uses high amplitude current pulses (10 A) of short duration (5 to 75 µs at a frequency of 1 kHz) to excite a coil.22 The resultant pulse is propagated more deeply through the surface of the specimen than with conventional eddy current testing. A detector coil placed at the opposite surface to the emitter coil is used to record the reflected signal. By measuring the time delay and peak amplitude, variations in material integrity
T-S-A+ 10-10-1993 11:13:58 Old File : calib4.105 Node Calibration Block Component : Channel 6 From Factors
0 : 0.150,
0.150
If
To : 400 Plotting range
El-
If
All signals
Marks : 100 400/ 400] Threshold
1/
]l
I T+S+A+ 15-10-1993 10:46:18 New File : CALIB2.104 Node : Calibration Block Component : Calibration Block :
0 : 0.150,
0.150
To : 400 Plotting range
[1-
All signals
Marks : 100 400/ 400] Threshold
Fig. 3.13 Example of a defect-free signal (top) and a signal with a defect (bottom) from weld inspection performed with a Lizard eddy current system
can be quantified. Pulse eddy current testing requires advanced instrumentation for signal filtering, analysis and recording. The technique finds applications in wall thickness measurement and detection of internal flaws. In remote field eddy current (RFEC) testing, an exciter or driver coil energised with a low frequency alternating current generates an electromagnetic field which diffuses through the wall of a hollow specimen, such as a pipe (Fig. 3.14). The field inside the pipe is rapidly attenuated. Outside the wall, the field propagates axially along the pipe with less attenuation. A detector or pick-up coil, usually spaced at three inside diameters, detects the reflected electromagnetic field inside the specimen. Remote field eddy current testing is useful to test pipes and heat exchanger tubes, to measure gradual changes in tube wall thickness, pit corrosion and external and internal defects. However, because RFEC signals have a low signal-to-noise ratio, sensitive instrumentation for filtering and signal analysis is required.23,24

Fig. 3.14 Typical coil display in remote field eddy current inspection
Since its development in the 1940s by Dr F. Förster, eddy current inspection has benefited from improvements in electronics and computer technology. Nowadays, inspectors are faced with automation,25 multiple probe arrays,20 defect visualisation26 and computer aided decision-making systems.27 Developments of artificial neural networks for the classification of eddy current signals28 have improved the efficiency of the method by providing qualitative and quantitative information about a defect.

3.6 Alternating current potential drop
The alternating current potential drop (ACPD) method measures a difference of potential between two reference electrodes to enable the detection of surface and sub-surface defects in conductive materials.29 Two contact electrodes placed on the surface of a component inject an alternating current into the material. This has the effect of creating a potential difference between the contacts. Spatial variation of electrical conductivity caused by the presence of a flaw perturbs the potential difference which leads to a change in the reference voltage. The voltage difference is measured and the detected defect can be sized. The crack depth is a function of the potential measured and the distance between the
potential contacts:

d = \frac{l}{2}\left(\frac{V_1}{V_0} - 1\right) \qquad (3.4)

where d = defect depth (mm), l = distance between the prod tips (mm), V₀ = potential measured in a defect-free sample (V) and V₁ = potential measured across a crack (V). An alternative technique which utilises DC power can be used (DCPD), but it requires several hundred amperes to achieve the same degree of sensitivity as ACPD. Irregular geometric shapes are difficult to inspect as a uniform field is hard to establish. ACPD is, however, relatively inexpensive and, because of its high sensitivity (a few microvolts), this method is used for crack monitoring of in-service components.
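A minimal sketch of the sizing formula as reconstructed in equation (3.4); the probe spacing and the two voltage readings below are invented for illustration:

```python
def acpd_crack_depth(spacing, v_ref, v_crack):
    """Equation (3.4): d = (l/2) * (V1/V0 - 1); depth comes out in the unit of 'spacing'."""
    return (spacing / 2.0) * (v_crack / v_ref - 1.0)

# Illustrative readings: 10 mm probe spacing, 100 uV defect-free, 140 uV across the crack
print(f"crack depth = {acpd_crack_depth(10.0, 100e-6, 140e-6):.1f} mm")  # 2.0 mm
```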
3.7 Alternating current field measurement
Alternating current field measurement (ACFM) is a non-contact method which measures changes in a magnetic field induced in a material to detect and quantify surface and sub-surface defects. The ACFM method induces a uniform electric field on the surface of a component in a similar way to ACPD. The induced currents produce a magnetic field in the specimen, and it is the variation of this magnetic field, produced by a flaw, which is recorded. The input current flow, which follows the profile of the crack, is assumed to be uniform away from the crack, and an estimate of the actual crack depth can be calculated using a theoretical modelling technique for semi-elliptical cracks.29,30 Two parameters of the induced magnetic field (B) are measured: Bx is the magnetic field strength parallel to the crack edge and Bz is the magnetic field strength perpendicular to the material surface (Fig. 3.15). A magnetic field probe is used to measure the variations of Bx and Bz. The presence of a defect is indicated by a decrease in the Bx component and a peak followed by a trough in the Bz component. Bz provides information on the length of the crack and Bx on its depth. By plotting Bx against Bz, a butterfly plot can be drawn (Plate 3).
Fig. 3.15 X, Y and Z coordinates of a magnetic field around a semi-elliptical surface breaking crack
Alternating current field measurement equipment is portable, benefits from computer aided facilities for signal display and analysis, and has been designed for underwater inspection of offshore platforms.
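The butterfly plot can be illustrated with synthetic signals; the field shapes below are a deliberately simple caricature of the Bx trough and Bz peak/trough pair described above, not a calibrated crack model:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-20.0, 20.0, 400)               # scan position along the crack (mm)
Bx = 1.0 - 0.4 * np.exp(-(x / 5.0) ** 2)        # Bx dips over the crack (depth information)
Bz = 0.5 * (x / 5.0) * np.exp(-(x / 5.0) ** 2)  # Bz shows a peak followed by a trough (length)

plt.plot(Bx, Bz)                                # plotting Bx against Bz traces the 'butterfly'
plt.xlabel("Bx"); plt.ylabel("Bz")
plt.title("Synthetic ACFM butterfly plot")
plt.show()
```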
3.8 Ultrasonic testing
The ultrasonic testing method uses ultrasonic waves for materials examination and internal flaw detection and sizing. Ultrasonic testing and MPI are probably the two most widely used methods of inspection.31,32

3.8.1 PRINCIPLE OF ULTRASONIC TESTING

Ultrasonic beams generated by a piezoelectric transducer are mechanical waves of frequency above the audible range (>20 kHz). The waves are propagated into an elastic medium and are detected either by the same or by a different transducer. An emitter probe containing a piezoelectric crystal generates high frequency ultrasound (0.1-25.0 MHz) which is injected into a material by placing the probe in contact with the surface of the component inspected. The sound wave propagates through the specimen and is reflected from the far surface, producing the backwall echo. The reflected beam is detected by a receiver probe, or by the same probe in the case of pulse-receive systems, and the signal is displayed on a CRT as an A-scan plot of signal amplitude versus time (Fig. 3.16).
Fig. 3.16 Typical ultrasonic pulse echo system and A-scan plot
The transmitted pulse is indicated by the first rise on the screen of the CRT; the second rise is the reflected pulse either from the backwall of the specimen or from a defect. By measuring the time taken for the two pulses to travel through the specimen, the thickness of a component or the position of a defect can be accurately measured. If the speed of the ultrasonic wave in the material inspected is v (m/s) and if t₁ (s) is the time measured between the two peaks, the distance d₀ (m) of the defect from the surface of the specimen can be calculated from

d_0 = \frac{v t_1}{2} \qquad (3.5)

and the ultrasonic wavelength λ (m) is given by

\lambda = \frac{v}{f} \qquad (3.6)

where f is the frequency in hertz (Hz) of the ultrasonic wave. Two techniques, known as 6 dB drop and 20 dB drop, are commonly used for sizing of defects. The decrease in signal amplitude caused by a flaw is used as an indicator of flaw dimension. The ultrasonic testing technique using the transit time of an acoustic wave to measure the distance of a flaw from the probe and the amplitude of the reflected wave to size this flaw is called the pulse echo technique. In this technique, a single probe is used to transmit and receive the ultrasonic signal. Another technique, called through-transmission, uses two transducers (a transmitter and a receiver). This technique requires access to both sides of the specimen, as the receiver is generally placed on the opposite side. Ultrasonic signals can be displayed in A-scan, B-scan or C-scan formats. A typical A-scan plot is shown in Fig. 3.16. Both B-scan and C-scan formats display a 2-D ultrasonic image of a defect. In the case of a B-scan, the displayed signal is time versus a linear position. With a C-scan, a 2-D image of the signal amplitude at a particular depth range and over a surface is generated. C-scan systems are usually computer controlled and the image displayed is colour coded to facilitate interpretation. Ultrasonic systems displaying both B-scan and C-scan formats are called P-scan systems.
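A minimal sketch of equations (3.5) and (3.6); the wave speed is a typical value for longitudinal waves in steel and the transit time is invented for illustration:

```python
def defect_depth(v, t1):
    """Equation (3.5): d0 = v * t1 / 2 for a pulse echo measurement."""
    return v * t1 / 2.0

def wavelength(v, f):
    """Equation (3.6): lambda = v / f."""
    return v / f

v_steel = 5900.0  # m/s, typical longitudinal velocity in steel
print(f"defect depth = {defect_depth(v_steel, 6.8e-6) * 1e3:.1f} mm")     # ~20.1 mm
print(f"wavelength at 4 MHz = {wavelength(v_steel, 4e6) * 1e3:.2f} mm")   # ~1.48 mm
```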
ULTRASONIC TRANSDUCERS
There are four main types of ultrasonic transducer:33'34 normal, single crystal, twin and angle beam (Table 3.3). Designs of single crystal probe and angle beam probe are shown in Figs 3.17 and 3.18. A variable angle probe (VAP), developed by Babcock, can be sequenced through up to eight different shear wave angles using a curved piezoelectric crystal mounted on the circumference of a hemi-cylindrical perspex shoe.35 Beyond the plastic window of a probe, three regions characterise the behaviour of an ultrasonic wave: the dead zone, the near field zone and the far zone (Fig. 3.19). The dead zone - due to the transmission pulse width - is a region immediately beneath the entry surface from which no reflection from flaws can be observed. The near zone, or Fresnel region, is the region in an ultrasonic beam which is subject to complex interference due to diffraction effects. Sizing of flaws should be avoided in this region. The near field zone can
Table 3.3 Different types of ultrasonic transducers

Normal probe: generates longitudinal waves; separate transmitter and receiver
Single crystal probe: a single crystal both transmits and receives the ultrasonic signal
Twin crystal probe: transmitter and receiver in the same housing but electrically and acoustically separated
Angle beam probe: produces an ultrasonic beam which is introduced at an angle into the material; typical angles: 30°, 45°, 60°, 70°, 80°
Fig. 3.17 Design of an ultrasonic compressional wave transducer

Fig. 3.18 Design of a shear wave ultrasonic transducer
Fig. 3.19 Schematic representation of the ultrasonic beam intensity
The near field zone can be calculated from

N = \frac{D^2 - \lambda^2}{4\lambda} \approx \frac{D^2}{4\lambda} \quad (\text{for } D^2 \gg \lambda^2) \qquad (3.7)
where D is the diameter of the probe. The far zone, or Fraunhofer region, is the region in an ultrasonic beam where the energy of the beam decays exponentially. In this region, the intensity of the ultrasonic wave is inversely proportional to the square of the distance from the transmitter. Flaw sizing is performed in this region using the 6 dB or 20 dB drop technique. Krautkramer and Krautkramer32 demonstrated how distance-gain-size (DGS) diagrams can be used to estimate the size of a defect. A DGS diagram is a plot of signal amplitude (dB) against the near field length for different values of the gain (G) of the ultrasonic instrument. The minimum detectable size of a defect for a particular ultrasonic sensor can be estimated as follows:

G = \frac{\text{defect size}}{\text{probe diameter}} \qquad (3.8)
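A small sketch evaluating equations (3.7) and (3.8) for an illustrative probe; the crystal diameter, wavelength and defect size are assumed values:

```python
def near_field_length(D, lam):
    """Equation (3.7): N = (D^2 - lambda^2) / (4 * lambda) ~ D^2 / (4 * lambda)."""
    return (D ** 2 - lam ** 2) / (4.0 * lam)

def dgs_gain(defect_size, probe_diameter):
    """Equation (3.8): G = defect size / probe diameter."""
    return defect_size / probe_diameter

D, lam = 10e-3, 1.48e-3  # 10 mm crystal, 1.48 mm wavelength (4 MHz in steel)
print(f"near field length N = {near_field_length(D, lam) * 1e3:.1f} mm")  # ~16.5 mm
print(f"G for a 2 mm defect = {dgs_gain(2e-3, D):.2f}")                   # 0.20
```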
3.8.3 ULTRASONIC WAVE PROPAGATION

Because air is a poor transmitter of sound, a couplant is generally used between the ultrasonic probe and the surface of the specimen inspected in order to increase the amount of ultrasonic energy transmitted.36 The couplant can be a gel or water. A common technique involves immersion of the specimen in a water or oil tank and the use of an automated scanning system for inspection. Reflection of ultrasonic waves between two materials depends on their acoustic impedance (Fig. 3.20) which determines the amount of reflection of the ultrasonic wave
and is given by

Z_l = \rho v_l \qquad (3.9)

where Z_l is the acoustic impedance of a longitudinal wave (kg/m² s), ρ is the material density (kg/m³) and v_l is the velocity of the longitudinal wave in the material (m/s).

Fig. 3.20 Compressional ultrasonic incident, reflected and transmitted waves

Longitudinal, or compressional, waves, in which the particle motion is parallel to the direction of propagation, travel at the velocity v:

v = \sqrt{E/\rho} \qquad (3.10)

where E is the elastic modulus of the material (N/m²). When the angle of incidence is zero (Fig. 3.21), the reflection coefficient R and the transmission factor T for compressional and transverse waves are given by

R = \frac{I_r}{I_i} = \left(\frac{Z_2 - Z_1}{Z_1 + Z_2}\right)^2 \qquad (3.11)

T = 1 - R = \frac{I_t}{I_i} = \frac{4 Z_1 Z_2}{(Z_1 + Z_2)^2} \qquad (3.12)
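Equations (3.11) and (3.12) are easily evaluated; the impedances below are typical handbook values, so the results come out close to, but not exactly, the figures quoted in the next paragraph:

```python
def reflection_transmission(Z1, Z2):
    """Equations (3.11)-(3.12): energy reflection and transmission at normal incidence."""
    R = ((Z2 - Z1) / (Z1 + Z2)) ** 2
    return R, 1.0 - R

# Acoustic impedances in kg/(m^2 s), typical handbook values
Z_steel, Z_copper, Z_water = 45.4e6, 41.6e6, 1.48e6
print("steel/copper: R = %.3f, T = %.3f" % reflection_transmission(Z_steel, Z_copper))
print("steel/water : R = %.3f, T = %.3f" % reflection_transmission(Z_steel, Z_water))
```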
The reflection and transmission coefficients of an ultrasonic wave if medium 1 is steel and medium 2 is copper, for example, are respectively 0.001 and 0.999. This shows that almost the whole of the incident wave is transmitted. In the case of a solid/liquid interface, the transverse wave is always completely reflected. For example, with a steel/water interface, R = 0.871 and T = 0.125. The behaviour of an ultrasonic wave of velocity v₁ arriving at an angle α on an interface between two materials can be described by a physical law called Snell's law (Fig. 3.21):

\frac{\sin\alpha}{v_{C1}} = \frac{\sin\beta}{v_{C2}} = \frac{\sin\theta}{v_{S1}} = \frac{\sin\delta}{v_{S2}} \qquad (3.13)

Fig. 3.21 Shear waves on an interface between two media

where α is the angle of incidence and reflection, and β the angle of refraction. Angle waves are often called shear waves. Attenuation of ultrasonic waves varies with the type of material. Material inhomogeneities such as crystal discontinuities, mixed microstructure, anisotropic material, large grain size and low acoustic impedance cause beam scattering and interference effects as a result of diffraction. Austenitic stainless steel, nickel-chromium alloys (Inconel, Incoloy) and copper castings have large anisotropic grains (Fig. 3.22) and are difficult to inspect as they produce severe attenuation and scattering.37
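Equation (3.13) is often used to design angle beam probes. A minimal sketch, with typical velocities for a compressional wave in a perspex wedge converting to a shear wave in steel (both assumed values):

```python
import math

def refracted_angle(incident_deg, v1, v2):
    """Snell's law, equation (3.13): sin(a)/v1 = sin(b)/v2; None beyond the critical angle."""
    s = math.sin(math.radians(incident_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None  # total reflection: no refracted wave
    return math.degrees(math.asin(s))

# Compressional wave in perspex (~2730 m/s) converting to a shear wave in steel (~3240 m/s)
print(f"refracted shear angle = {refracted_angle(45.0, 2730.0, 3240.0):.1f} deg")  # ~57.1 deg
```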
Fig. 3.22 Scanning electron microscope (SEM) image of austenitic stainless steel sheeting
3.8.4 OTHER ULTRASONIC TECHNIQUES

The time of flight diffraction (TOFD) technique relies on the diffraction of acoustic waves when interacting with a defect (Fig. 3.23).38,39 The diffracted wave from the tip of a crack is weaker than the backwall reflection and can be accurately analysed. TOFD is less sensitive to flaw orientation than conventional ultrasonic techniques, and provides more accurate sizing information. This technique requires the use of two transducers, the distance between which must be kept constant.
Fig. 3.23 TOFD ultrasonic wave interaction
The synthetic aperture focusing technique (SAFT) uses the same principles as radar and sonar.40 A small, wide beam transducer is scanned over the surface of a specimen. The ultrasonic echo waveform is sampled and recorded at regularly spaced intervals. Echoes are shifted in time and added together, and an image reconstruction of a defect is performed. Image quality is often modest; the best results are obtained with homogeneous and isotropic materials. Knowledge of the scan, the surface geometry of the specimen and the probe positioning are required for SAFT inspection. This technique is limited to the inspection of flat surfaces, and its advanced signal processing operations restrict it mainly to a laboratory inspection procedure. Electromagnetic acoustic transducer (EMAT) testing is a non-contact ultrasonic technique which differs from conventional ultrasonic techniques in the generation of ultrasonic waves. An electromagnetic acoustic transducer uses a strong magnetic field and a radio frequency coil placed close to the surface of a conducting material to induce radio frequency eddy currents in the material. Interaction of the eddy currents with the magnetic field generates Lorentz forces which produce ultrasonic stress waves.41 Applications for ultrasonic examination span a wide range of industries and materials, such as welds, castings and composite materials. Weld testing is a major area in which ultrasound is applied.42,43 The method is also used for examination of concrete (location of steel reinforcement, thickness measurement, integrity assessment) and composite
materials (honeycomb structures, graphite and glass fibre materials) for detection of delaminations. Ultrasonic inspection is a well-accepted and extensively used NDT method in industry, and much research has been carried out to improve signal interpretation and display. Automated ultrasonic examination is currently used for on-line manufacturing inspection.44 Expert systems have been developed for signal interpretation, which is often difficult with an A-scan display.45

3.9 Radiographic inspection
Radiographic inspection is based on absorption of penetrating radiation by the material under test. Due to variations in density and thickness, materials absorb different amounts of radiation. Photons, such as X-rays and gamma rays, pass through materials and the emergent radiation is recorded on radiographic film. Radiographs provide visual information on the size and location of internal defects.

3.9.1 PRINCIPLE OF RADIOGRAPHIC INSPECTION

For radiographic inspection, the component to be inspected is exposed to a radiation source of X-rays or gamma rays.46,47 The absorption of radiation varies as a function of the thickness, composition and structural integrity of the material exposed. The unabsorbed, or emergent, radiation passes through the whole thickness of the material and is detected, on the side opposite the source, by a photographic film (Fig. 3.24).
Fig. 3.24 Geometric projection of X-rays

Once exposed, the film is processed and analysed. Variations in density, thickness and composition of the inspected material appear as variations of grey level on the film. The darker the shade of grey, the more radiation has passed through the specimen and been incident on the film; flaws appear as dark shadows on the radiograph.
The sensitivity of the method is dependent on the orientation of the defect; a defect parallel to the direction of the radiation is easier to detect than a defect perpendicular to the radiation.

Sources of radiation

Two main types of radiation can be used for radiographic inspection: X-rays and gamma rays. Both types have short wavelengths (<1 pm), are physically indistinguishable and differ only in the manner by which they are produced. X-rays are generated in an X-ray tube from the interaction between accelerated electrons hitting a solid target material (Fig. 3.25). An X-ray tube consists of a cathode containing a filament and an anode containing a metallic target (tungsten, platinum, gold), both in a vacuum chamber. The filament is heated by an electric current and, at incandescence, emits electrons which are accelerated between the cathode and the anode by a high potential (50 kV). The electrons hit the target on a small area called the focal spot, where they are absorbed. The energy lost by each electron is given up as radiation quanta which appear as X-rays (typical energy 10-500 keV). Microfocus X-ray tubes have been developed, and have the advantage of improving sharpness by producing a magnified image. High energy tubes such as the Van de Graaff, Betatron and Linac allow the production of photons in the megaelectronvolt (MeV) range.

Fig. 3.25 Design of an X-ray tube
Gamma rays are emitted during the radioactive decay of an unstable isotope. The most common isotopes used are cobalt-60, iridium-192, ytterbium-169 and caesium-137. Gamma radiation is usually less sensitive than X-ray and its period of use is limited due to the decay of the isotope source. The isotope, contained in a sealed unit, is remotely operated by a mechanical system which may present more radiation hazards than an X-ray tube. X-rays are electromagnetic radiation and can be regarded as photons of energy E given by
E = h\nu = \frac{hc}{\lambda} \qquad (3.14)

where E is the photon energy in electronvolts (1 eV = 1.6 × 10⁻¹⁹ J), h is Planck's constant (h = 6.63 × 10⁻³⁴ J s), ν is the frequency of radiation (Hz), λ is the wavelength of radiation (m) and c is the velocity of electromagnetic radiation in free space (m/s). The energy of photons ranges from 10 to 10⁷ eV for X-rays and from 5 × 10⁴ to 5 × 10⁷ eV for gamma rays. The intensity of radiation, denoted I, varies exponentially with the thickness of homogeneous material through which it passes, and can be calculated as follows:

I = I_0 e^{-\mu t} \qquad (3.15)

where I is the intensity of the emergent radiation, I₀ is the initial intensity, t is the material thickness (m) and μ is the linear absorption coefficient of the material (m⁻¹). The value of t required to reduce I by 50 per cent is called the half-value thickness and is given by

T = \frac{0.693}{\mu} \qquad (3.16)

Equation (3.15) can be rewritten as

I = I_0 \, 2^{-t/T} = I_0 \,\text{antilog}\left(-0.301\,\frac{t}{T}\right) \qquad (3.17)
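A minimal sketch of equations (3.15) and (3.16); the absorption coefficient and plate thickness are invented for illustration:

```python
import math

def emergent_intensity(I0, mu, t):
    """Equation (3.15): I = I0 * exp(-mu * t)."""
    return I0 * math.exp(-mu * t)

def half_value_thickness(mu):
    """Equation (3.16): T = 0.693 / mu."""
    return 0.693 / mu

mu, t = 50.0, 0.025  # linear absorption coefficient (m^-1) and thickness (m), assumed values
print(f"I/I0 = {emergent_intensity(1.0, mu, t):.3f}")                     # ~0.287
print(f"half-value thickness = {half_value_thickness(mu) * 1e3:.1f} mm")  # ~13.9 mm
```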
The sensitivity of a radiograph can be estimated using an image quality indicator (IQI). The IQI sensitivity test accepted by the UK, the USA and Germany is the wire IQI sensitivity:

\text{wire IQI sensitivity} = \frac{\text{diameter of smallest discernible wire}}{\text{thickness of specimen}}

Geometry of image formation

The image formation is dependent on the amount of energy absorbed by a material and on the characteristics of the source of radiation used, as well as the sensitivity of the radiographic film. Lead screens are sometimes used to improve the image quality of the radiographs. Their effect is to absorb scattered radiation.48 The sharpness of a radiograph will depend on several factors such as the source of radiation, the size of the source (S in Fig. 3.24), the source to film distance (L in Fig. 3.24), the density of the film, the exposure time and the source-specimen orientation. U_g is the geometric unsharpness as described in Fig. 3.24 and can be expressed as

U_g = \frac{ST}{L - T} \qquad (3.18)

It can be minimised by using a smaller focal spot size or by increasing the distance between the source and the detector.

3.9.2 PROGRESS IN RADIOGRAPHIC TECHNIQUES

The development of techniques such as real-time radiography (RTR) and computer tomography (CT) has enhanced the quality and efficiency of radiography by providing instantaneous radiographs in 2-D and 3-D formats.
Radiographic inspection 69 Real-time radiography (RTR), also known as radioscopy and fluoroscopy, was first, and still is, used by customs officers for baggage inspection.49 Real-time radiography produces instantaneous radiographs which are displayed on a TV monitor. With RTR, there is no need for film processing, as the film is replaced by a fluorescent screen, an image intensifier tube (IIT) or a scanning linear array system sensitive to radiation. Figure 3.26 is a schematic diagram of an RTR system using an IIT. Real-time radiography is a technique that lends itself to automation for on-line inspection at the manufacturing level and is well suited for remote operation. Automatic welding inspection systems using pattern recognition and image processing have been developed and have proved to be very efficient.50"52 Research that has been carried out in the use of RTR for underwater inspection has demonstrated its full potential.53 Notwithstanding its high capital investment, modern computer and image processing facilities have rendered RTR potentially more cost-effective than film for large operations.
Fig. 3.26 Schematic diagram of an RTR system using an image intensifier tube
Computer tomography (CT) produces highly detailed images of the inside of a specimen.54 It allows the generation of 3-D images of a defect and of the component inspected by creating an image of a cross-section of an object (Fig. 3.27). The principle of CT involves passing an X-ray beam through a test piece and recording the emergent photons at multiple angles (Fig. 3.28). This operation is performed by moving the source around the object or rotating the object itself. A computerised reconstruction algorithm maps the points in a cross-sectional area. Image processing is usually required as artefacts are often produced in the image when the photons are absorbed differentially through the object. CT is widely used for turbine blade inspection in the aerospace and nuclear industries.55 Neutron radiography uses low energy neutrons (cold neutrons, 0.0025 eV) to produce an image.56 The absorption of neutrons in materials such as aluminium, iron and lead is much less than that of X-rays. This technique finds application in the inspection of components of large thickness, explosive fillings and detection of corrosion.
Fig. 3.27 Computer tomography image of a section of a helicopter rotor blade

Fig. 3.28 Schematic layout of a CT inspection system
The production of low energy neutrons from atomic reactors and particle accelerators is time consuming and costly, limiting the application and portability of the technique. X-ray fluorescence is a metrologic technique which can be applied to thickness measurement and detection of corrosion. The specimen inspected is irradiated by X-rays and its surface produces fluorescence which is detected by a scintillator. The result is indicated on a coordinate plotter for quantitative analysis. Metallurgical specimens of small size and area can be inspected with this technique.

3.9.3 TYPES OF INSPECTION

Forensic science and medicine remain the main areas of application of radiography. Radiography is also used for the inspection of welds, pipes and pressure vessels. The
procedure for weld inspection can be found in British Standards BS 2600 and BS 2910. Inspection of rocket parts, detection of foreign objects in food, assessment of the structural integrity of buildings and examination of the construction of statues and artistic objects are currently performed with radiography.57 This method allows the detection of internal flaws and a measure of specimen thickness. Radiography requires large investment and the equipment is not fully portable. Moreover, it is still a hazardous method, and strict control and safety regulations need to be followed to ensure the safety of the inspector. Flaws have to be large enough and positioned in a direction parallel to the radiation beam to be detected, and access to two sides of the component to be inspected is necessary. In spite of these disadvantages, radiography is widely used in industry as it can be automated and provides visual information of the internal parts of an object. Inspection can be fully automated using real-time radiography equipment and pattern recognition facilities.58

3.10 Additional NDT methods
Non-destructive testing methods are not limited to those previously described; in this section, less common and more specialised methods and techniques are briefly presented.

3.10.1 ACOUSTIC EMISSION

Plastic deformation and crack growth can result in the generation of acoustic signals. Acoustic emission inspection is principally used for crack monitoring and defect location.59,60 Acoustic emissions are high frequency stress waves generated by the release of strain energy from a material under stress. Because static flaws cannot be detected, external stress is usually applied to the material. This release of energy can be recorded and analysed. Sensitive instrumentation for filtering and signal processing is required in order to obtain optimum results. This method is well suited for laboratory experiments but can be very time consuming and difficult for on-site monitoring.

3.10.2 SQUID MAGNETOMETERS

Superconducting quantum interference devices (SQUIDs) have been developed for precise and accurate measurement of small variations in electromagnetic fields.61 A SQUID consists of a superconducting coil which generates a very stable field. This acts as a polarising field which is distorted by variations in permeability in ferromagnetic materials. These distortions can be detected by a SQUID. Magnetometers are limited to laboratory applications as they operate at the temperature of liquid helium and therefore require cooling equipment.

3.10.3 INFRARED THERMOGRAPHY

Infrared thermography is the mapping of isotherms over the surface of a component using heat-sensitive devices.62 An infrared camera scans the surface of a component and records any changes in temperature. The presence of a defect appears as a cold or hot spot and can be rapidly located. Very often, the component inspected has to be pre-heated before inspection. Thermography is a non-contact method which provides rapid visual qualitative information about the structural integrity of a material. It is used for condition monitoring of composite materials and concrete and to check alignment of parts.
3.10.4 MICROWAVE INSPECTION

Microwaves are waves of electromagnetic radiation (0.001 m ≤ λ ≤ 0.1 m) which are directed towards an object and propagate through the material.63 A phase detector compares the refracted wave with a reference signal. A flaw will act as a reflector which will perturb the refracted wave. Because microwaves do not penetrate deeply into metals, this method is restricted to thickness measurement of thin metallic coatings, determination of voids and inclusions in ceramics, plastics and insulators, and the control of homogeneity.

3.10.5 SHEAROGRAPHY

Shearography has been developed for the inspection of composite materials and honeycomb structures.64 The technique uses a laser based interferometer which produces two overlapping sheared images of a component under stress. These images interfere at paired points and are detected by a CCD camera. Surface strains due to sub-surface flaws on the component inspected are made visible by analysing the induced fringe pattern. Strain anomalies on the interferogram are characteristic of the presence of a flaw.

3.10.6 LEAK DETECTION

For the detection of leakage, a search gas is injected into a sealed enclosure and leakage is detected using a vacuum or pressure gauge (also called a sniffer). Leak detection can be performed on non-porous materials for quality control of seals in glass envelopes, vacuum chambers and containers. Porosity, holes, cracks and lack of seals can be detected, although their location can be difficult to define.

3.10.7 ACOUSTIC IMPACT

Acoustic impact, also known as coin tapping, consists of mechanically tapping the surface of a component. This causes mechanical acoustic vibrations which are detected by a sonometer or by ear. Any change in sound is indicative of anomalies or flaws. This very low cost technique is mainly a manual and slow process, and sensitivity is limited to the hearing of the inspector. Its main application is for the detection of cracks, disbonds and delaminations in metals and composites (honeycomb structures and helicopter rotor blades, for example).

3.11 Computers in NDT
The reliability and efficiency of NDT inspections have been improved by the use of computer aided systems and artificial intelligence techniques. Computers have become an essential tool in NDT for automated inspections, remotely operated systems, signal processing, signal interpretation and defect visualisation. Automated systems can operate in hazardous environments and provide inspection results of greater reliability than those from a human operator. Automated systems for MPI,13 eddy current,25 ultrasonic44,65 and radiographic inspection50,52 have been developed and are currently employed in a number of industries. Computer facilities allow 2-D and 3-D visualisation of flaws, improving the reliability and
accuracy of an inspection.26 The increasing use of computers in NDT (e.g. Lizard, ACFM, Andscan) has the major objective of minimising human intervention in terms of signal interpretation,26,66,67 therefore enabling the inspector to concentrate more on the inspection and less on interpretation. Inspectors are subject to fatigue and loss of concentration and may work in a high risk environment; machine-driven NDT systems provide a means to surmount these problems and are cost-effective both as a short-term and as a long-term solution. Portable PCs are available at affordable prices and can be used to download on-site data sets which can then be processed at a later stage.66,68,69 Progress in artificial intelligence has contributed towards the development of artificial neural networks and expert systems for pattern recognition, signal interpretation and defect classification.27,45,67,70 They usually require broad databases representative of different types of signals and inspections. This can be a costly and time consuming phase in development and research. Expert systems emulate human reasoning by applying mathematical algorithms and logical inferences to a knowledge base for decision making purposes.71,72 Neural networks have been shown to be effective for classification of weld defects from ultrasonic images and for defect characterisation from ultrasonic and eddy current signals.28,73-76 NDT techniques producing a visual display, such as a radiograph, have benefited from developments in image processing and feature extraction using neural networks. As stated by Guettinger et al.,77 'Image processing assists the NDT technician during manual testing and lends itself to automation'. Digital signal processing (e.g. FFT, filtering, averaging) and image enhancement techniques for radiographs and ultrasonic images have been developed.78 Such systems find applications in the on-line manufacturing inspection of components. The combination of artificial intelligence and knowledge based systems has led to automation of NDT procedures. Computer visualisation is also a new feature of recent NDT equipment such as Andscan. The area of the component inspected is displayed on the screen of the computer and any defect is displayed in a colour coded manner.77,79 Defect location and quantification are greatly facilitated, reducing error in signal interpretation. Moreover, the images produced can be rotated, saved on disk for quality record purposes, and printed as hard copy documents. Multiprobe array scanning systems, for eddy current or ultrasonic inspection equipment, generate signals which can be imaged using commercial software such as Excel, PV-Wave or Dadisp (see chapter 4).26 Computer aided design (CAD) software has been coupled with visualisation of NDT data for complete and accurate defect characterisation.80,81 The combination of CAD, defect visualisation and finite element analysis could provide the NDT inspector with information on the structural integrity of a component by displaying areas prone to failure. Software has also been designed for NDT training and certification.82 Remotely operated vehicles (ROVs) are already in operation for underwater inspection of offshore platforms and can be more effective than human operators. Development of robotic systems for totally independent inspection, signal interpretation and decision making is still under way. Data fusion will contribute efficiently towards the achievement of such systems.

3.12 Performance assessment of NDT methods
Because the accuracy of inspection processes is uncertain, statistical and probabilistic methods to describe and compare the performance of NDT techniques have been developed.83-86
3.12.1 PROBABILITY OF DETECTION

The concept of probability of detection (POD) is a statistical representation of the ability of a technique to detect a specific defect size. The use of probability of detection curves is becoming more frequent, especially in the aerospace industry.84,86 Figure 3.29 shows the change in probability of detection of a NDT method against the change in defect size. Several useful parameters can be defined from this curve. These are the defect detection threshold value, the sure defect detection value and the median defect detection value. The threshold value corresponds to the minimum detectable size of a defect, the sure detection value is the minimum defect size detected with a POD close to 100 per cent, and the median defect detection value is the defect size detected with a POD of 50 per cent.
Fig. 3.29 Example of a probability of detection (POD) curve
Probability of detection curves should be specific to a particular type of defect on a defined material. The same defect on a different material will produce a different signal response. A POD(a) function can be theoretically modelled and is defined as the frequency of all cracks of size a that will be detected by a particular NDT system. It has been demonstrated that the log-logistics (log odds) functions present the best model for representation of POD data. The POD mathematical model can be defined by

POD(a) = exp(α + β ln a) / [1 + exp(α + β ln a)]    (3.19)

which can also be expressed as a linear function of ln a:

ln[POD(a) / (1 − POD(a))] = α + β ln a    (3.20)
The parameters α and β can be estimated experimentally using a regression analysis. To plot a POD curve, an inspection should be performed over a series of independent samples of identical composition but with defects of various known dimensions. In its simplest form, if a sample contains N independent flaws of various dimensions and K flaws have been detected, the probability of detection of a particular system can be written as

POD(%) = (K / N) × 100    (3.21)
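To make the estimation procedure concrete, the following sketch fits α and β by an ordinary least-squares regression on the log-odds of the observed detection fractions, in the spirit of equations (3.19)-(3.21). It is a minimal illustration, not code from the experiments reported in this book: the trial counts are invented, and the clipping of extreme fractions is an assumption made only to keep the logarithms finite.

import numpy as np

def pod_model(a, alpha, beta):
    # Log odds POD model of equation (3.19)
    z = alpha + beta * np.log(a)
    return np.exp(z) / (1.0 + np.exp(z))

def fit_pod(sizes, detected, present):
    # Estimate alpha and beta by linear regression on the log odds
    # of the observed detection fractions K/N (equations 3.20 and 3.21)
    frac = np.asarray(detected) / np.asarray(present)
    frac = np.clip(frac, 0.01, 0.99)         # avoid log(0) at 0% or 100% detection
    x = np.log(np.asarray(sizes, dtype=float))
    y = np.log(frac / (1.0 - frac))
    beta, alpha = np.polyfit(x, y, 1)        # least-squares line y = alpha + beta*x
    return alpha, beta

# Invented trial data: K detections out of N flaws for each known size (mm)
sizes   = [0.5, 1.0, 2.0, 4.0, 8.0]
hits    = [2, 9, 18, 27, 30]
present = [30, 30, 30, 30, 30]

alpha, beta = fit_pod(sizes, hits, present)
print(pod_model(2.0, alpha, beta))           # estimated POD for a 2 mm defect

In practice a maximum likelihood fit is often preferred to a simple regression when the number of trials per size class is small, but the regression conveys the structure of the model.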
For example, in the inspection of a composite sample submitted to multiple impact energies using an eddy current system, if the inspection is repeated at multiple different sensitivities, POD curves such as those presented in chapter 5 can be plotted. From these curves we can estimate the most appropriate sensitivity in order to obtain a high probability of detection for a particular type of defect on a specific material. Probability of detection curves do not provide information concerning the number of false calls occurring during an inspection. To overcome this problem, another type of representation called the receiver operating characteristic curve has been proposed.

3.12.2 RECEIVER OPERATING CHARACTERISTIC

A receiver operating characteristic (ROC) curve is a plot of probability of detection (POD) against the probability of false calls (PFC).87-89 A false call, also called a false alarm or false positive, is obtained when a signal representative of a defect is detected in an area where no defect is present in reality. Figure 3.30 is a representation of a typical ROC curve. The main difference between POD and ROC curves is that false alarm information is displayed on a ROC curve but not on a POD curve. Both POD and ROC curves are required in order to assess the performance of an instrument. A NDT apparatus which can detect only medium-sized cracks with a low number of false alarms will be preferred to a system which can detect smaller cracks but with a high number of false alarms. The accuracy of an inspection system, or procedure, can be assessed with ROC curves.
Fig. 3.30 A typical receiver operating characteristic (ROC) curve (probability of detection against probability of false calls, both from 0 to 1)
A large quantity of data is necessary to plot accurate ROC curves. Moreover, such a representation is inadequate if the NDT system has very few false calls. A solution to overcome the large amount of experimental work required has been proposed by Nockemann et al.88,89 The signals from any NDT instrument are assumed to be statistically independent and normally distributed but with different means and standard deviations. Experimentally, six points have to be defined to plot an ROC curve (Fig. 3.31). The signal analysis of the instrument has to be carried out to identify the values of these points. A detectability scale corresponding to the signal amplitude has to be determined (Fig. 3.32). The signal level has been divided into five ranks ranging from a very good signal, representative of a defect, to a no defect signal.
Fig. 3.31 Diagram of an experimental ROC curve (modified from Nockemann et al.89)

Fig. 3.32 Detectability scale for different signal ranks, from rank 1 (very good) to rank 5 (no defect) (modified from Nockemann et al.88)
To plot the ROC curve presented in Fig. 3.31, the six points are calculated as follows. For the POD (Y-axis), where a defect is actually present:

• Point 0: value = 0
• Point 1: value = (Σ signals of rank 1) / (Σ defects actually present)
• Point 2: value = (Σ signals of rank 1 + Σ signals of rank 2) / (Σ defects actually present)
• Point 3: value = (Σ signals of rank 1 + Σ signals of rank 2 + Σ signals of rank 3) / (Σ defects actually present)

and so on until signals of all ranks have been summed up. The value of the fifth point is assumed equal to 1. For the PFC (X-axis), where a defect is detected but is not actually present:

• Point 0: value = 0
• Point 1: value = (Σ signals of rank 1) / (Σ non-defective samples)
• Point 2: value = (Σ signals of rank 1 + Σ signals of rank 2) / (Σ non-defective samples)

and so on until signals of all ranks have been summed up. The value of the fifth point is assumed equal to 1.

The use of ROC tests has been suggested for personnel certification and assessment.88 However, ROC curves are not adequate in the case of very few false calls and cannot then be plotted with the method previously described. Both POD and ROC methods appear, at the moment, to be adequate to provide an estimation of the quality of NDT methods from experimental results. Still more research is required to establish the most appropriate way to assess the performance of an inspection.
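The cumulative construction described above is mechanical enough to automate. The following sketch, a hypothetical illustration rather than code from the studies cited, tallies signals by rank and returns the six (PFC, POD) points; all counts in the example are invented.

import numpy as np

def roc_points(ranks_defect, ranks_clean, n_defects, n_clean):
    # ranks_defect[i]: number of rank i+1 signals where a defect actually exists
    # ranks_clean[i] : number of rank i+1 signals recorded on defect-free areas
    pod = np.concatenate(([0.0], np.cumsum(ranks_defect) / n_defects))
    pfc = np.concatenate(([0.0], np.cumsum(ranks_clean) / n_clean))
    pod[-1] = pfc[-1] = 1.0     # the fifth point is assumed equal to 1
    return pfc, pod             # X and Y coordinates of the six points

# Invented tallies for ranks 1 (very good) to 5 (no defect)
pfc, pod = roc_points([12, 8, 5, 3, 2], [1, 2, 4, 8, 15],
                      n_defects=30, n_clean=30)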
References

1. Halmshaw R. Non-destructive testing, 1991, 2nd Edition, Edward Arnold, London.
2. Nichols RW. Advances in non-destructive examinations for structural integrity, 1982, Applied Science Publishers, London.
3. Hanstead PD. The capability and limitations of Non Destructive Testing, 1988, British Institute of NDT.
4. Nichols RW, Dau GJ, Crutzen S. Effective NDE for structural integrity, 1988, Elsevier Applied Science Publishers, London.
5. Gilardoni A, Orsini A, Taccani M. Non-destructive testing, Gilardoni Spa. Pub.
6. Allgaier MW. Visual testing: method with a future, Sept. 1991, Materials Evaluation, 49(9), 1186-7.
7. Ammann F. Industrial endoscopy, 1984, ASNT.
8. Barbier P, Le Floc'h C. Holography - The NDT of composite structures, Proceedings of the 3rd European Symposium on Spacecraft Materials in Space Environment, Noordwijk, The Netherlands, 1-4 Oct. 1985, 29-33.
9. Lovejoy DJ. Penetrant testing: a practical guide, 1991, Chapman & Hall.
10. Lovejoy DJ. Magnetic particle inspection: a practical guide, 1993, Chapman & Hall.
11. Blitz J. Electrical and magnetic methods of non-destructive testing, 1991, Adam Hilger.
12. Jiles DC. Review of magnetic method for NDE, April 1990, NDT International, 23(2), 83-91.
13. Borucki JS. Development in automated magnetic particle testing systems, March 1991, Materials Evaluation, 49(3), 324-9.
14. Libby HL. Introduction to electromagnetic non-destructive test methods, 1971, RE Krieger Publishing Co.
15. Hagemaier DJ. Fundamentals of eddy current testing, 1990, ASNT.
16. Cecco VS, Vandrumen G, Sharp FL. Eddy current testing, 1987, US Edition, GP Courseware Pub.
17. McNab A. A review of eddy current system technology, July 1988, British Journal of Non Destructive Testing, 30(7), 249-56.
18. Rudlin JR. A beginner's guide to eddy current testing, June 1989, British Journal of Non Destructive Testing, 31(6), 314-20.
19. Lord W. Electromagnetics for engineering NDE, 1994, Chapman & Hall.
20. Hedengren KH, Hurley DC, Kornrumpf WP, Young JD, Sutton GH. An array system for fast, sensitive eddy current inspections, 1993, Proceedings of ASNT 1993 Fall Conference and Quality Testing Show, 8-12 Nov. 1993, Long Beach, CA, ASNT, 51-3.
21. Gulliver J, Newton K. A new eddy current instrument for inspection of welds on steel jackets, 1988, Offshore Inspection, Repair and Maintenance, Offshore Exhibition and Conference Ltd., Aberdeen, UK, 8-10 Nov. 1988.
22. Wittig G, Thomas HM, Maser D. Developments and investigations for the application of the pulsed eddy current technique, 1981, New Procedures in Non Destructive Testing, Springer Verlag, Berlin and Heidelberg, 479-87.
23. Schmidt TR. History of the remote field eddy current inspection technique, Jan. 1989, Materials Evaluation, 47(1), 14-22.
24. Kilgore RJ, Ramchandran S. Remote field eddy current testing of small diameter carbon steel tubes, Jan. 1989, Materials Evaluation, 47(1), 32-6.
25. Granville RK, Charlton PC. Real time automated tube inspection system, Jan. 1993, British Journal of Non Destructive Testing, 35(1), 11-14.
26. Lowden DW, Gros XE, Strachan P. Visualising defect geometry in composite materials, Proceedings of the International Symposium on Advanced Materials for Lightweight Structures, ESTEC, Noordwijk, The Netherlands, March 1994.
27. Levy AJ, Oppenlander JE, Brudnoy DM, Englund JM, Loomis KC, Barsky AM. Dodger: an expert system for eddy current evaluation, Jan. 1993, Materials Evaluation, 51(1), 34-44.
28. Udpa L, Udpa S. Application of neural networks for classification of eddy current data, 1990, Review of Progress in Quantitative Non Destructive Examination, 9, Plenum Press, New York, 673-9.
29. Dover WD, Collins R, Michael DH. Review of developments in ACPD and ACFM, March 1991, British Journal of Non Destructive Testing, 33(3), 121-7.
30. Lugg MC, Lewis AM, Michael DH, Collins R. The non-contacting ACFM technique, 1988, Institute of Physics Meeting No. 12, IOP Publishing Ltd, 41-8.
31. Ensminger D. Ultrasonics: fundamentals, technology, applications, 1988, 2nd Edition, Marcel Dekker.
32. Krautkramer J, Krautkramer H. Ultrasonic testing of materials, 1990, 4th Edition, Springer Verlag.
33. Silk MG. Ultrasonic transducers for Non Destructive Testing, 1984, Adam Hilger.
34. Iddings FA, Rock ML. The ultrasonic test probe, 1988, ASNT, 34-5.
35. Thomson JL, Farley JM. Demonstration trials of an electronically scanned variable angle ultrasonic probe, 1987, Proceedings of the 8th International Conference on Non Destructive Evaluation in the Nuclear Industry, 17-20 Nov. 1986, Kissimmee, Florida, USA, ASMI, 603-10.
36. Erhard VA, Rathgeb W, Wustenberg M. The influence of the coupling layer on the sound transmission in ultrasonic testing with inclined beam, 1976, Materialpruf, 18(9), 312-15.
37. Edelmann X. Ultrasonic testing of austenitic stainless steel components, 1987, Proceedings of the 4th European Conference on Non Destructive Testing, 13-18 Sept. 1987, London, Pergamon Press, 338-57.
38. Charlesworth JP, Temple JAG. Engineering applications of ultrasonic time-of-flight diffraction, 1990, John Wiley & Sons.
39. Kramer S, Leeman DV. New ultrasonic techniques: a review, Jul.-Aug. 1990, Canadian Society of Non Destructive Testing Journal, 11(4), 18-29.
40. Seydel JA. Ultrasonic synthetic aperture focusing techniques in NDT, 1983, Research Techniques in Non Destructive Testing, 6, ed. RS Sharpe, Academic Press, New York.
41. Billson D, Edwards E, Dixon S, Idris A, Rohani S, Palmer SB. Laser/EMAT techniques for materials evaluation, 33rd Annual Conference on Non Destructive Testing, York, UK, 13-15 Sept. 1994.
42. Handbook on the ultrasonic examination of welds, 1986, International Institute of Welding.
43. Procedures and recommendations for the ultrasonic testing of butt welds, 1989, 2nd Edition, Welding Institute, London.
44. Boogaard J, Vanwersch P. Automated ultrasonic examination of tube-to-tubesheet welds, Proceedings of the 4th European Conference on Non Destructive Testing, 13-18 Sept. 1987, London, Pergamon Press, 1086-94.
45. Hopgood AA, Woodcock N, Hallam NJ, Picton PD. Interpreting ultrasonic images using rules, algorithms and neural networks, April 1993, European Journal of Non Destructive Testing, 2(4), 135-49.
46. Halmshaw R. Industrial radiology, theory and practice, 1982, Elsevier Applied Science Publishers, London.
47. Becker GL. Radiographic Non Destructive Testing, 1990, DuPont Non Destructive Testing Systems.
48. Halmshaw R. Scattered radiation in industrial radiology, March 1993, British Journal of Non Destructive Testing, 35(3), 113-18.
49. Halmshaw R. X-ray real time radiography and image processing, Nov. 1988, Proceedings of a Symposium held at Newbury, Berkshire, British Institute of NDT.
50. Arrondeau PY, Charbonnier A. Robot for real time radiographic inspection of circumferential welds with computer processing of the whole system, 1987, Proceedings of the 4th European Conference on Non Destructive Testing, 13-18 Sept. 1987, London, Pergamon Press, 1575-9.
51. Rokhlin SI. In process radiographic evaluation of arc welding, Feb. 1989, Materials Evaluation, 47(2), 219-24.
52. Gayer A, Saya A, Shiloh A. Automatic recognition of welding defects in real time radiography, June 1990, Non Destructive Testing International, 23(3), 131-6.
53. Gros XE. Real-time radiography of underwater pipelines, Sept. 1993, British Journal of Non Destructive Testing, 35(9), 492-5.
54. Burstein P. Performance issues in computed tomography specifications, May 1990, Materials Evaluation, 48(5), 579-93.
55. Ross JB, McQueeney K. Computed tomography imaging of turbine blades, Oct. 1990, Materials Evaluation, 48(10), 1270-3.
56. Berger H. Neutron radiography: a method for Non Destructive Testing, 1992, Industrial Quality Inc.
57. Lang J, Higgins T. Recent developments in radiography at the British Museum, July 1993, British Journal of Non Destructive Testing, 35(7), 363-8.
58. Daum W, Rose P, Heidt H, Builtjes JH. Automatic recognition of weld defects in X-ray inspection, March 1987, British Journal of Non Destructive Testing, 29(3), 79-82.
59. Filho PF. Acoustic emission monitoring of known flaws, Proceedings of the 13th World Conference on Non Destructive Testing, Brazil, Oct. 1993, 1, 34-9.
60. Raj B, Jha BB. Fundamentals of acoustic emission, Jan. 1994, British Journal of Non Destructive Testing, 36(1), 16-23.
61. Cochran A, Donaldson GB, Morgan LNC, Bowman RM, Kirk KJ. SQUIDs for NDT: the technology and its capabilities, April 1993, British Journal of Non Destructive Testing, 35(4), 173-82.
62. Maldague XPV. Nondestructive evaluation of materials by infrared thermography, 1993, Springer Verlag.
63. Yeh CY, Rami E, Zoughi R. A novel microwave method for surface crack detection using higher order waveguide modes, June 1994, Materials Evaluation, 52(6), 676-81.
64. Newman JW. Shearographic inspection of aircraft structure, Sept. 1991, Materials Evaluation, 49(9), 1106-9.
65. Scruton G, Dagen JM. Use of computers to enhance ultrasonic inspection, March 1988, Computer Applications In Non Destructive Testing, 12-18.
66. Tietze M. Valuable eddy current inspection utilising latest computer technology, May 1991, British Journal of Non Destructive Testing, 33(5), 217-20.
67. Chapman CE, Fahr A, Pelletier A, Hay DR. Artificial intelligence in the eddy current inspection of aircraft engine components, Sept. 1991, Materials Evaluation, 49(9), 1090-4.
68. Kreier P, Gribi M, Durocher JM, Hay DR, Pelletier A, Edelmann X. Computer integrated testing, Jan. 1993, European Journal of Non Destructive Testing, 2(3), 94-9.
69. Fere C, Jordon M, Lamant D, Paradis L. Eddy current signal analysis on PC, Proceedings of the 9th International Conference on Non Destructive Evaluation in the Nuclear Industry, Tokyo, Japan, 25-28 April 1988, ASMI, 375-8.
70. Klyuev VV, Orlov NA. Implementation of artificial intelligence methods in NDT: expert system approach, Proceedings of the 13th World Conference on Non Destructive Testing, Brazil, Oct. 1993, 1, 305-7.
71. McEwan W, Abou-Ali M, Belavendram N. Parameter design and expert systems in NDT, March 1991, British Journal of Non Destructive Testing, 33(3), 115-20.
72. Kataoka S, Okamoto A, Miyoshi S, Kurokawa A. The development of an expert system for defect identification and its assessment, Proceedings of the 9th International Conference on Non Destructive Evaluation in the Nuclear Industry, Tokyo, Japan, 25-28 April 1988, ASMI, 61-5.
73. Ogi T, Notake M, Yabe Y, Kitahara M. A neural network applied to crack type recognition, 1990, Review of Progress in QNDE, 9, Plenum Press, New York, 689-96.
74. Windsor CG, Anselme F, Capineri L, Mason JP. The classification of weld defects from ultrasonic images: a neural network approach, Jan. 1993, British Journal of Non Destructive Testing, 35(1), 15-22.
75. Udpa L, Udpa S. Eddy current defect characterization using neural networks, March 1990, Materials Evaluation, 48(3), 342-53.
76. Philips ML, Steele NF. Nondestructive evaluation using neural networks, Nov./Dec. 1989, Nuclear Plant Journal, 44-50.
77. Guettinger TW, Grotz K, Wezel H. Eddy current imaging, April 1993, Materials Evaluation, 51(4), 444-51.
78. Geogel B. Digital signal processing for NDT, 1992, Proceedings of the Non Destructive Testing 1992 Conference, Elsevier Science Publishers, 283-7.
79. Siores E. Automated NDE using robots and 3D imaging techniques, Proceedings of the Australian Institute of Non Destructive Testing National Conference, Melbourne, 19-21 Aug. 1991, AINDT.
80. McNab A, Cornwell I, Dunlop I. A framework for improved NDT data interpretation, May 1994, Insight, 36(5), 326-30.
81. Deuster G, Brinette R. Defect characterisation with advanced NDE methods, Oct. 1991, European Journal of Non Destructive Testing, 1(2), 59-63.
82. Jenkins SA, Lowden DW. Eddy current inspection simulators: computer-aided learning, Nov. 1993, Materials Evaluation, 51(11), 1226-36.
83. Mittleman J. Statistical comparison for selection of inspection procedure, Feb. 1990, Materials Evaluation, 48(2), 240-3.
84. Hovey PW, Berens AP. Statistical evaluation of NDE reliability in the aerospace industry, 1988, Review of Progress in QNDE, Plenum Press, New York, 7B, 1761-8.
85. Berens AP, Hovey PW. Evaluation of NDE reliability characterization, 1981, Vol. 1, Air Force Wright-Patterson Base, Report AFWAL-TR-81-4160.
86. Barbier P, Blondet P. Using NDT techniques in the maintenance of aeronautical products, 1992, Aerospatiale France, Report No. 93-11587/1/GAL.
87. Van Dijk GM, Boogaard J. NDT reliability - A way to go, 1992, Non Destructive Testing 92, Elsevier Science Publishers, xxxi-xliv.
88. Nockemann C, Heidt H, Thomsen N. Reliability in NDT: ROC study of radiographic weld inspection, Oct. 1991, Non Destructive Testing and Evaluation International, 24(5), 235-45.
89. Nockemann C, Tillack GR, Wessel H, Heidt H, Konchina V. Receiver operating characteristic (ROC) in nondestructive inspection, 1993, Proceedings of the NATO Advanced Research Workshop on Advances in Signal Processing for NDE of Materials, Quebec, Canada, 17-20 Aug. 1993.
4 Scientific Visualisation

... thought is impossible without an image
Aristotle, 325 BC

4.1 Introduction
Scientific phenomena and technological developments can be quickly understood if experimental or theoretical results are presented in an image format instead of as a numerical result. The expression 'I see' is commonly used to mean 'I understand', and graphs are often used to represent experimental data and help in their interpretation and understanding. Throughout history scientists and engineers have utilised simple or complex charts to represent their data visually. We learned about the engineering developments of Leonardo da Vinci because of his drawings and sketches. In 1637, Descartes stated, 'Imagination or visualisation, and in particular the use of diagrams, has a crucial part to play in scientific investigation'. Data visualisation using computer facilities allows 2-D and 3-D representation of data, colouring and rendering.

Relationships between computer visualisation and NDT data fusion may not be obvious but the two are closely linked. Data from non-destructive measurements were originally limited to one-dimensional pointer-type readings or to two-dimensional analogue displays on a CRT. Magnetic particle inspection and radiography were, until recently, the only techniques capable of imaging defects. The interpretation of poor quality radiographs, which in the past would have been difficult and was not always feasible, can today be achieved through the use of digital image processing facilities. Signal and image processing of digital images from real-time radiography systems and infrared cameras can be performed to extract information hidden by noise. Two- and three-dimensional imaging of ultrasonic and eddy current data is currently performed, which facilitates signal interpretation by displaying the specimen inspected, and any defect detected, in a colour coded format.1 Minimisation of inspection time and expense has been attained using visualisation techniques. Further achievements will be made if statistical numerical data fusion results can be presented as colour coded probabilistic maps. Particular care must be taken in the visualisation to prevent the display of erroneous information. Moreover, fusion at pixel level of radiographic and ultrasonic data requires the use of computer graphics and image processing techniques. It is becoming increasingly necessary to be aware of the limits and possibilities of computer graphics and visualisation facilities available for NDT data fusion.
Scientific visualisation was officially born in 1987 with a report by the National Science Foundation's Advisory Panel on Graphics and Image Processing. Since then scientific visualisation has become commonplace and can be defined as an interactive way to display and explore data in order to understand the significance of an event and to extract a meaningful slice of data from within a huge data set. It is a graphical examination of data rather than a numerical analysis and is sometimes known as visual data analysis (VDA). It can be viewed as an alternative to numbers, and its ultimate goal is to create images that 'speak' to the viewer without additional explanation. The purpose of an image should be to facilitate communication of knowledge as well as to display information from an experiment. Visualisation is a wide topic and encompasses the fields of computer graphics, image processing (IP), computer aided design (CAD), signal processing and computer vision (virtual reality). These themes will be briefly discussed in the following sections of this chapter.

4.2 Data visualisation
Visualisation is a subject in which considerable research and development is being carried out in universities and industry.2-10 The place that computer graphics has in everyday science and engineering research is widely illustrated through publications (e.g. IEEE Computer Graphics) and conferences (e.g. SIGGRAPH) which present advances and evolution in visualisation. The Institute of Physics Congress 1994 also included a special section on Scientific Visualisation. A visualisation community club11 and the advisory group on computer graphics (AGOCG) produce reports on visualisation systems and help and advise researchers, developers and users interested in computer graphics by organising meetings, workshops, courses and conferences in the UK. Visual data analysis facilitates data display, processing and analysis by allowing researchers to interact with their data in order to select areas of interest. Quality images can also be produced to present results to a wide audience and to inform and communicate with other scientists. Comparison of prototype performance with theoretical expectations and the monitoring of experiments over a long period are other advantages of VDA. Computer visualisation can be time consuming if little knowledge of the software is available or if the system has a poor interface and limited flexibility. Improper representation of data, by choosing the wrong type of graph or an inadequate colour palette, can lead to a wrong conclusion. As Globus and Raible stated, '... if the picture looks good, it must be correct',12 but there must be more to it than a 'good picture'. There are various applications of computer graphics and visualisation. These include industrial design, architecture, aerodynamics, oil exploration, geology,13 chemistry, mathematics, physics, fluid flow modelling,14 medical and business applications.15 Effective data representation may appear to be simple, but in reality discovering the technique that best fits a visualisation purpose is often a very time consuming process of trial and error. As in any discipline, a methodology is necessary to produce efficient and effective images.16 Fundamental guidelines on how to display data effectively, through useful advice and a multitude of illustrations, are given by Tufte.17 A set of data and a visualisation system are two essential requirements for scientific visualisation. Supercomputers provide the computational power necessary to process and visualise large data sets rapidly. However, a hand-drawn data chart can be very accurate, efficient and rich in information. Data can have the form of numerical inputs (ASCII,
binary) collected from a scientific instrument or artificially generated. Images from satellites,18,19 cameras or X-rays, and even analogue signals from ultrasonic sensors,20 can be used. All these data types may need to be converted to a format which is recognisable and usable by the software utilised.

4.2.1 VISUALISATION TOOLS

When they first started in the 1980s, simple visualisation applications required months of programming using graphics languages such as PHIGS and Silicon Graphics' GL. By 1989, Application Visualisation Software (AVS) and Animation Production Environment (apE) offered visualisation packages requiring minimum programming knowledge, which allowed scientists to concentrate more on research and less on programming. For this, hardware able to handle large sets of data in multiple formats and to produce high quality images and animation was necessary. The most successful visualisation software runs on workstations such as Sun or Silicon Graphics machines, or on Crays, and requires significant investment. Imaging tools are now available for PCs and are more accessible to scientists with a limited background in computing, providing user friendly interfaces and point-and-click facilities. However, the difficulty of using such software increases with its flexibility. The most common visualisation software available includes PV-Wave, Iris Explorer, Khoros, AVS, apE, Uniras, Spyglass, Data Visualiser, Gplot, Grass, ER-Mapper, HIPS and Visilog. Such software was first designed for specific scientific applications of interactive VDA (e.g. Uniras for the oil industry) but has now been adapted for general applications such as medical imaging,21,22 test engineering (NDT), computational mathematical functions, and signal and image processing. Computer aided design (CAD) software such as AUTOCAD, 3-D Studio and ARE24 comprises mainly drawing packages for geometric modelling, design and architecture. Although they are not strictly visualisation software, inexpensive spreadsheets such as Lotus 123, Excel and Quattro Pro can produce rapid data displays on a PC. Plate 1 is a view from above of disbonds, in red, on the sub-surface of a section of a helicopter rotor blade. Spreadsheets are useful if a small amount of data has to be analysed on-site using a laptop, for example. However, their flexibility is restricted and they can only cope with a limited amount of data. Matlab, Mathcad, Mathematica and Maple V are mathematically oriented software packages with graphics facilities to represent mathematical functions and algorithms. They can produce 2-D and 3-D charts as well as generate short animations of time series functions. Visualisation helps in the representation of complex mathematical functions and geometric shapes. Mathematical visualisation influences the process of mathematics by providing a tool which can solve equations and functions. Techniques of visualisation can be applied to compare two data sets, to represent multivariate data sets, to show an evolution in time, to process images and to animate dynamic data.

4.2.2 THE USE OF COLOURS

The communication of information and the identification and location of points of interest within a large data set can be quickly and effectively achieved by producing colour images. Colour can change the perception of form and readability of an image and can add dimension by locating extremes, identifying stress points or defective regions in a material.
Specific details of an image can be examined by cueing a particular colour. Choosing the correct colour is critical for efficient data visualisation.23 Because the eye has a non-linear response to colour, the perception of colour varies from one individual to the next; therefore choosing appropriate colours is difficult and must be carefully considered. Colour codes are experienced daily, for example in the assessment of risk in crossing a road controlled by traffic lights, where red is a symbol of danger and green of safety. Colour-blind persons have a limited perception of certain colours, which has to be considered when producing a colour-coded image. Changing colour codes through the use of alternative colour palettes is a necessary option for such image-producing systems.23,24 Other important factors, for instance perspective, texture, shadows, hue and intensity, can modify the perception of an image.

Three primary colours - red, green and blue (RGB) - are commonly used in computer graphics. Red, green and blue values of a colour table are generally accessed through a pixel value. An n-bit colour display provides a 2^n element index to the red, green and blue components for each pixel value (for example, an 8-bit colour display has a 2^8 = 256 element index). Three different colour models exist: the HSV (hue, saturation, value) model describes the perception of colours; the RGB model specifies colours for CRT display; and the CMYK (cyan, magenta, yellow, black) model specifies colours for printer output. Both RGB and CMYK relate to output devices. Hue describes a colour (e.g. red, orange or yellow), saturation is the vividness of a colour, and value describes colour intensity (e.g. light or dark). The Commission Internationale de l'Eclairage (CIE) created a chromaticity diagram in 1931 which represents the perceptual attributes of hue and saturation. This diagram, recognised as an international standard, expresses a colour as an (x,y,z) coordinate. Three monochromatic primaries (RGB) are represented on this diagram, and at equal energy (R = G = B = 1) the colour is white. Before a difference in colour is perceived, a change of at least 10 per cent in RGB is necessary. The simultaneous use of highly saturated extremes (e.g. red and blue) produces ill-defined images because the human eye has low perception and acuity in purple, blue and red. Red and green or yellow and blue go well together and produce distinct images. Red and yellow or green and blue produce poor images. Unrelated and intense colours will confuse the observer. Blue should not be selected as a foreground colour but can be used as a background colour. It is difficult to focus simultaneously on blue text against a red background. Yellow and green on a light background will have low contrast and will be difficult to see. Details can be masked in areas of little colour change. Some colours have preconceived universal meanings, such as red for heat or danger, green for safety and blue for cold. A neutral background colour should be chosen - black, white or blue - to provide adequate contrast. A limited number of realistic, logical colours provides the best results. Taking a 'natural' colour palette, some colours suggest landscapes well: black (sea bottom), dark blue (deep water), light blue (shallow water), green (plateau), brown (hills, mountains, elevation), white (mountain top, snow).
Plate 2 shows visualisation results achieved by using Visual Numerics' PV-Wave to visualise the same data set as displayed in Plate 1. The values have been pseudo-coloured with a predefined palette. In this case the maximum signal amplitude, related to defects, is shown by the red-yellow colours, while non-defect areas are shown in black-blue-green. The legend on the left provides information on the meaning of the colours in relation to surface height in millimetres. The colours of the image show that elevations greater than 0.6 mm - characterised as a defect - are located and identified as red patches. Colour coded NDT maps of the area inspected can help in the location of defects and the assessment of the structural integrity of a component. Black-and-white images are still employed, particularly in medical applications where a greater range of contrast through grey scale can be obtained with X-rays. Nevertheless, the visual detectability of a defect can be enhanced using pseudo-colours to map black-and-white radiographs.

4.2.3 ILLUSION IN VISUALISATION

Visualisation may initially be considered as a simple technical problem of how to transform numerical or analogue data into a pictorial format, but it involves many communication problems. Optical illusions or visual fiction can often occur, depending on the individual, while observing an image which may contain multiple patterns. Gregory24 described how humans perceive images and his work contains several examples of perceptual illusion. Darius25 discussed the misapprehensions of the eye and misinterpretations of the brain which he thought it necessary to understand before producing images. Kanizsa's triangle (Fig. 4.1(a)) is a good example of an illusory surface bounded by imaginary contours. The viewer has the impression of two triangles, one on top of the other; the white triangle appears brighter than the background and imaginary contours are produced. Rubin's vase-face (Fig. 4.1(b)) is another well-known example of an optical effect. A black vase and/or two faces looking at each other can be seen in this image. The two examples given in Fig. 4.1 demonstrate that great care must be taken when trying to draw an object, by making sure that the image produced conveys the information required and is free of optical illusion. This rule also applies to scientific visualisation. The image generated must be carefully analysed and must not contain erroneous or hidden information which may induce misinterpretation of the data.
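The mapping from a scalar NDT measurement to a colour table entry, as used for Plate 2, can be sketched as follows. The anchor colours and the linear scaling are assumptions chosen to mimic the black-blue-green to red-yellow palette described above; a real system would expose the palette as a user option.

import numpy as np

# Anchor colours (RGB, 0-1) from dark 'no defect' to red/yellow 'defect'
ANCHORS = np.array([[0.0, 0.0, 0.0],    # black
                    [0.0, 0.0, 0.8],    # blue
                    [0.0, 0.7, 0.2],    # green
                    [1.0, 0.0, 0.0],    # red
                    [1.0, 1.0, 0.0]])   # yellow

def make_palette(n=256):
    # Interpolate the anchors into an n-entry colour lookup table
    x = np.linspace(0.0, 1.0, len(ANCHORS))
    t = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t, x, ANCHORS[:, c]) for c in range(3)], axis=1)

def pseudo_colour(data, palette):
    # Map a 2-D scalar field (e.g. surface elevation in mm) to RGB
    # by scaling its values onto the palette index range
    lo, hi = data.min(), data.max()
    idx = ((data - lo) / (hi - lo) * (len(palette) - 1)).astype(int)
    return palette[idx]                  # shape (rows, cols, 3)

rgb = pseudo_colour(np.random.rand(64, 64), make_palette())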
Fig. 4.1 (a) Kanizsa's triangle; (b) Rubin's vase. Two examples of optical illusion

Kaiser and Proffitt26 discussed the human perception of images and described the use of perceptual technology to design and develop effective tools for scientific visualisation. They demonstrated that a methodology is necessary to produce natural images rather than using ad hoc methods. Standard visualisation techniques are preferable to user-defined techniques. The compilation of a hand drawing or a mental model is advised before producing an image. Gershon27 also demonstrated that changing colour palettes or smoothing can hide information and display totally different information (as seen in Fig. 4.1). The use of different imaging techniques on the same data set can yield unexpected results (Plate 5). Plate 2 displayed the variation of elevation on the surface of a helicopter rotor blade. The data are from an eddy current sensor output which scanned the surface of the rotor blade. The same data set using a different signal amplitude threshold and colour palette is shown in Plate 5. This example illustrates the need for a basic approximation of what is expected and shows that computer graphics require a minimum of training and understanding of visualisation techniques. Alternative visualisation approaches may be considered to compare the results. The challenge remains for each user to discover how to get the most from the data and how to produce effective images.

4.3 Volume visualisation
Two-dimensional contouring techniques take a two-dimensional array of values and create a surface plot with isolines. Contour plots present scalar data of the form f(x,y) by constructing level curves of equal values of the function f. Such graphs are very useful in the representation of the variation of altitude, for example, or of scalar data which can use elevation as an analogue (e.g. temperature). Contour lines and contour surfaces are classified as slicing techniques; they are used to represent 3-D information in a 2-D format. Contour lines reveal troughs and peaks. Numbers can be added to the contour lines to provide extra information on the actual altitude at a particular position. Slices can be made through a volume to obtain a sequence of images at different heights and in different directions (Fig. 4.2); a simple contouring example follows Fig. 4.2. Computer tomography (see chapter 3) already provides X-ray images of slices through an object to gain insights into the third dimension. Another way to visualise internal information is to produce a volumetric representation of the data.
Fig. 4.2 Slicing through a cube
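As a minimal illustration of the contouring described above, the following sketch draws labelled level curves of an invented scalar field f(x,y); the field itself is a stand-in for real scan data.

import numpy as np
import matplotlib.pyplot as plt

# Scalar field f(x, y) standing in for scan data (e.g. elevation)
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
f = np.exp(-(x**2 + y**2)) + 0.5 * np.exp(-((x - 1.5)**2 + (y + 1)**2))

cs = plt.contour(x, y, f, levels=10)        # level curves of equal f
plt.clabel(cs, inline=True, fontsize=8)     # numbers on the contour lines
plt.xlabel('x'); plt.ylabel('y')
plt.show()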
Volume visualisation is well suited for interpretation of volumetric data. It is a method to generate 2-D projections of 3-D objects, render volumes and create shading. Several introductions to volume visualisation describing the progress and applications of this technique can be found in the literature.28-32 The improvements in hardware and software systems have allowed researchers to study the interior of objects and structures by creating volumetric views. With 3-D numerical simulation, the parameters are of the form f(x,y,z). At each point (x,y,z) a value is defined and is termed a voxel. A voxel is an abbreviation for 'volume element' or 'volume pixel'; it is the 3-D equivalent of a pixel (Fig. 4.3). A voxel can be represented as a cube centred at a particular position in 3-D space. It is a
quantum unit of volume with a numerical value which contains information on the colour, opacity and density of the particular part of an object.

Fig. 4.3 Representation of a volume element (voxel) in a cube

Translucent isosurfaces can reveal the structure within a volume. Volume data can be generated from multiple sources, for example 2-D data from sampling and scanning through slices (discrete points which are processed and a 3-D image reconstructed by interpolation between points), or voxels can be generated by computer from a mathematical algorithm. A large amount of memory is required to generate and store a volumetric image (e.g. a 512³ voxel image requires 512³ bytes = 134 MBytes of memory). Volume rendering is a software algorithm which converts the data specification of an image to a volumetric image, including colour, opacity and shading information. A rendering technique is usually composed of a hidden-surface elimination procedure and a shading model. The hidden-surface procedure displays the final visible parts of an object and the shading procedure determines the final appearance of each surface (colour, texture, reflection). Volumetric objects can be rendered using ray casting methods based on ray tracing, where thin rays are cast through each pixel to determine which objects in the scene contribute to the pixel's colour. Shading is also used to render images and to create photorealism.
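A bare-bones version of the ray casting idea can be sketched as follows. This is an illustrative front-to-back compositing loop under simplifying assumptions (axis-aligned stepping, nearest-neighbour voxel lookup, per-value colour and opacity tables); production volume renderers interpolate between voxels and handle projection geometry and shading.

import numpy as np

def cast_ray(volume, colour, opacity, start, step, n_steps):
    # volume  : 3-D array of voxel indices into the colour/opacity tables
    # colour  : (k, 3) per-voxel-value RGB table
    # opacity : (k,) per-voxel-value alpha table
    c_out = np.zeros(3)             # accumulated colour
    a_out = 0.0                     # accumulated opacity
    pos = np.asarray(start, dtype=float)
    for _ in range(n_steps):
        i, j, k = pos.astype(int)
        if not (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break                   # ray has left the volume
        v = volume[i, j, k]
        # Composite this voxel behind what has already accumulated
        c_out += (1.0 - a_out) * opacity[v] * colour[v]
        a_out += (1.0 - a_out) * opacity[v]
        if a_out > 0.99:            # early termination: ray is effectively opaque
            break
        pos += step
    return c_out

One such ray would be cast per screen pixel; the early-termination test is what makes front-to-back compositing attractive for dense volumes.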
4.4 Animation and virtual reality
Computer animation is a recent tool which aids scientists in extrapolating an idea, visualising and analysing data which evolve with time (e.g. the dynamic study of a component under stress) or simply rotating a volume. Magnenat-Thalmann and Thalmann33 have collected papers on animation, simulation and modelling which describe the use of animated scientific representations. Fracture mechanics, the evolution of cracks and the study of crack propagation in different materials to understand their limits, strengths and weaknesses can be analysed using animation and finite element analysis software.
Animation, which allows the generation of successive different views of the image produced, gives the operator information on hidden parts of the image. Rotation around the x, y or z axis individually is possible, as well as around a combination of these axes. By defining an animation window of 400 x 250 pixels, the complete rotation of the image through 360° with a step size of 2° can be performed on a SUN Sparc workstation in 15 seconds, and only 90 seconds are required to compile each frame for the animation. A sequence of images of a rotation around a 3-D defect is shown in Plate 4. Animation can also be very useful to show the evolution of a defect with time, and Plate 7 shows different stages of disbond growth on the surface of a helicopter rotor blade under dynamic stress. This animation allows fracture analysts to estimate the expected life of a component as a function of the degree of propagation of a defect.

Conventional real-time computer systems using a mouse and a keyboard are commonly used to manipulate 2-D/3-D data displays. Virtual reality was developed as a better way to handle and interact with data.34 The display is updated with the movement of the head of the user, who wears a head-mounted display unit, and virtual objects are manipulated through a data glove which replaces the mouse. Virtual environments are more recent but already provide a full 3-D interface for display and control of volumetric views of an object. Significant visualisation problems can be addressed through virtual reality, but it is not useful for conventional representation of scientific data. However, visualisation of phenomena such as crack propagation in a material can be effectively analysed. Interactive visualisation through virtual reality systems is a very powerful technique to explore data, manipulate molecules and interact with scientific equipment.35-38 Virtual reality already finds applications in medicine (e.g. to explore models of human biological systems), space science (e.g. to train astronauts) and chemistry (e.g. molecular manipulation). Artificial defects could be generated and randomly positioned in a component in order to train and assess the competence of NDT inspectors.39 For example, the use of virtual reality to simulate inspection in different environments would provide a low cost, flexible and safe system for the training of underwater inspectors.

4.5 Fundamentals of image processing
Image processing refers to the manipulation of digital images in order to extract more information than is actually visible on the original image.40,41 A digital image is a 2-D matrix of pixels of different values which define the colour or grey level of the image. The higher the resolution of an image, the greater the number of pixels. Each pixel represents a variation in grey or colour. The grey level resolution is expressed in terms of bits; an 8-bit image has 256 grey levels. Image processing techniques use filters to enhance an image. Their main applications are to transform the contrast, brightness, resolution and noise level of an image. Contouring, image sharpening, blurring, embossing and edge detection are typical image processing functions (see Table 4.1). Low and high pass (spatial) filters are used when the filtering is based on pixel values and gradients. High pass filters are used for edge enhancement and to sharpen an image. Low pass filters are used to smooth an image and reduce noise and detail. Only a small number of image processing filters are necessary to obtain information about an image. Image processing operations can be performed in the spatial domain and the frequency domain of an image. Spatial domain refers to the matrix of pixels composing an image (the original pixels of the image). Frequency domain refers to the matrix of numbers making up a Fourier-transformed image (spectral
representation of the image). The Fast Fourier Transform (FFT) is a powerful tool for analysis and computation in image processing and is used to compensate for imaging/optical effects. The 2-D FFT of an image identifies the spatial frequency components, and suspected noise components can be eliminated in the transform. The inverse FFT is then taken and a filtered image is produced; a short code sketch of this procedure follows Table 4.1.

Table 4.1 Image processing techniques and their applications

Colour enhancement/transformation: used to distinguish fine detail, to map original data into a particular palette or to colour a black-and-white image.
Contrast enhancement: threshold operation on pixel values to improve image display.
Low pass filters (mean, weighted mean, median, mode): replace a pixel value by the median of its neighbours or by its most common neighbour (mode); their main effect is to smooth and despeckle, which often produces a loss of edge sharpness and a slightly out of focus effect.
High pass filters: used for edge enhancement to sharpen an image.
Averaging: averages multiple images of different quality for noise reduction.
Arithmetic operation: adds, subtracts, divides, multiplies; differential imaging between two images can be used to identify changes.
Geometric operation: rotation, translation, scaling or warping to remove distortion.
Segmentation: to divide an image into meaningful regions (e.g. background + foreground).
Fast Fourier Transform: used for noise reduction and to extract detail.
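The FFT filtering procedure described above might look as follows in outline; the keep_fraction parameter and the square pass-band are illustrative choices, not prescriptions.

import numpy as np

def fft_low_pass(image, keep_fraction=0.1):
    # Remove high spatial frequencies from a 2-D image: transform,
    # zero out components away from the origin, transform back
    F = np.fft.fftshift(np.fft.fft2(image))          # origin to array centre
    rows, cols = image.shape
    mask = np.zeros_like(F)
    r, c = int(rows * keep_fraction), int(cols * keep_fraction)
    mask[rows//2 - r:rows//2 + r, cols//2 - c:cols//2 + c] = 1
    F_filtered = F * mask                            # suppress high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_filtered)))

noisy = np.random.rand(128, 128)
smooth = fft_low_pass(noisy, keep_fraction=0.05)

A high pass filter is obtained by inverting the mask, which retains edges while suppressing slowly varying background.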
In materials testing, digital images can be produced from X-rays, gamma rays, infrared cameras or ultrasound waves. Laser disc storage is an important factor for digital images. An example of image processing techniques, for the detection of impacts on composite materials, is shown in Plate 8. Image processing can be very useful to improve the interpretation of NDT data from thermographs by reducing noise and tracing contour lines of damaged areas. Lack of information in radiographs, due to poor contrast or variation of the attenuation of X-rays in the material, can be overcome using digital image processing techniques: radiographic images can be enhanced by increasing the contrast, reducing the noise level or using colour to represent the grey levels of radiographs in order to extract hidden information. Processing two images of the same object taken at different energies can reveal different characteristics of a material. Link et al.42 and Jensen43 discussed the possibilities of image processing applied to real-time radiography. MacNicol44 described the use of image processing for satellite images in order to transform satellite data into a more understandable form. Classification can be used in image processing to identify and label an object. This can lead to automation of defect recognition and characterisation.

4.6 Visualisation in NDT
Computer graphics and visualisation have already started to influence the development of NDT products (e.g. Andscan®, ultrasonic C-scan systems) as they have in design, architecture, medicine and the aerospace industry. Moreover, colour-coded images are less prone to misinterpretation than an analogue signal on a CRT, and they allow rapid location of defect areas, displaying qualitative and quantitative information. A thousand times more information can be recorded by eye than by ear, and because defects are usually three-dimensional it appears logical to represent them in 3-D. In visualisation, as with analogue signal displays, operators require training in order to recognise a defect from background noise. However, training would be quicker and fewer errors would occur compared with those from analogue signals.

Visualisation of NDT data on a workstation can be time consuming. Thus an alternative is to display the data in real time on a PC using in-house or commercial software for on-site data analysis. Information can be transmitted through electronic mail for study or diagnosis in a laboratory situated in another country. Further analysis can be performed at a later stage on workstations for recording and quality control purposes. Imaging in non-destructive examination can be used to represent surface and internal flaws, or to combine both on the same image. Computer graphics also find applications in defect monitoring, by comparing NDT data collected over a period of time, or in simulation of the evolution of a defect in a component under stress (e.g. disbond growth).

Interpretation of eddy current analogue signals is difficult and can be facilitated by the use of visualisation tools, as described by Guettinger et al.45 McNab and Cornwell20 developed a system called Trappist to present and manipulate 3-D images from ultrasonic data combined with CAD images. Lowden et al.46 described the use of a computer graphics workstation to visualise defect geometry in composite materials. Plate 10 is a 2-D view from above of the surface of a steel plate with a centreline weld, inspected using eddy currents and revealing two defects. Plate 6 shows impacts (in red/yellow) detected on the surface of a composite material using an eddy current technique. Imaging of flaws gives quantitative information about the defects; in this case, high energy impacts have a larger area than low energy impacts. This information would not be visible on an amplitude display chart. An advanced PC-controlled computer aided testing system using a CAD concept-supported ultrasonic scanning system has been developed by Kreier et al.47 Coupling CAD information (Plate 11) with finite element software could provide information on weak areas and on the degree of safety of a material.

4.7 Summary
The use of computer visualisation provides enormous potential for enhancing scientific studies and it is now an essential tool for the advancement of theories and experiments. With the increasing amount of experimental and theoretical data generated by scientists, an effective tool is necessary for rapid and efficient data analysis. Visualisation appears to be well suited for this task as it allows a more effective use of the scientist's time and resources.
The graphical possibilities of computers have been recognised and are currently in use in different scientific areas. However, NDT is only on the threshold of benefiting from this potential. Research in visualisation should be encouraged in the field of NDT and, in order to keep pace with advancing technology, engineers should have essential knowledge of computer science and be trained in the interpretation of visual displays of NDT signals. Colour-coded images of NDT signals are becoming more common as such displays facilitate defect location, provide qualitative and quantitative information about a defect and reduce errors in signal interpretation, with consequent safety and economic benefits. Visualisation will also lead to automation if expert systems for pattern recognition are combined with visualisation software. The continuous development of visualisation, with new techniques such as virtual reality, will improve the display of information and enhance data analysis.

References

1. McNab A, Cornwell I. 3-D visualisation of offshore NDT data, Colloquium on 'New techniques for manual and automated offshore Non Destructive Evaluation', 6 Dec. 1994, Digest No 1994/24, IEE Pub.
2. Kaufman AE. Visualization, July 1994, Computer, 27(6), 18-19.
3. Earnshaw RA, Wiseman N. An introductory guide to scientific visualisation, 1992, Springer-Verlag.
4. Brodlie KW, Carpenter LA, Earnshaw RA, Gallop JR, Hubbold RJ, Mumford AM, Osland CD, Quarendon P. (eds). Scientific visualization - Techniques and applications, 1991, Springer-Verlag.
5. Patrikalakis NM. (ed.). Scientific visualization of physical phenomena, 1991, Springer-Verlag.
6. Farrell EJ, Christidis ZD. Visualization of complex data, Proceedings of the SPIE, 1083, 3D Visualization and Display Technologies, Jan. 1989, California, 153-60.
7. Thompson J. Science on display, Jan. 1990, Systems International, 33-5.
8. Keller PR, Keller MM. Visual cues - Practical data visualization, 1993, IEEE Computer Society Press.
9. Nielson GM, Shriver B. (eds). Visualization in scientific computing, 1990, IEEE Computer Society Press.
10. Cunningham S, Brown JR, McGrath M. Visualisation in science and engineering education, 1990, IEEE Computer Society Press.
11. Popovic R, Gallop J, Brodlie K. The 2nd Annual report of the EASE Visualization Community Club, March 1994, RAL-94-024.
12. Globus A, Raible E. Fourteen ways to say nothing with scientific visualization, July 1994, Computer, 27(7), 86-7.
13. Hibbard W, Santek D. Visualizing large data sets in the earth sciences, Aug. 1989, Computer, 53-7.
14. Helman J, Hesselink L. Representation and display of vector field topology in fluid flow data sets, Aug. 1989, Computer, 27-36.
15. Robertson B. Biz Viz, Sept. 1991, Computer Graphics World, 45-9.
16. Papadakis EP. Imaging methodologies: making decisions before seeing the images, Feb. 1993, Materials Evaluation, 51(2), 120-3.
17. Tufte ER. The visual display of quantitative information, 1983, Graphic Press, USA.
18. Schroder F. Visualizing meteorological data for a lay audience, Sept. 1993, Institute of Electrical and Electronics Engineers Computer Graphics and Applications, 12-14.
19. Wolff RS, Yaeger L. Visualization of natural phenomena, 1993, Springer-Verlag.
20. McNab A, Cornwell I. Visualisation of 3-D ultrasonic NDT data, 33rd British Conference on Non Destructive Testing, York, UK, Sept. 1994.
21. Nelson TR, Elvins TT. Visualization of 3D ultrasound data, Jan. 1993, Institute of Electrical and Electronics Engineers Computer Graphics and Applications, 13(6), 50-7.
22. Fuchs H, Levoy M, Pizer SM. Interactive visualization of 3D medical data, Aug. 1989, Computer, 46-51.
23. Rheingans P. Color, change, and control for quantitative data display, Proceedings of the Institute of Electrical and Electronics Engineers Conference on Visualization, Oct. 1992, MA, USA, IEEE Computer Society Press, 252-9.
24. Gregory R. How do we interpret images, 1986, Images and understanding, Cambridge University Press, Miller J. (ed.), 310-30.
25. Darius J. Scientific images: perception and deception, 1986, Images and understanding, Cambridge University Press, Miller J. (ed.), 333-57.
26. Kaiser MK, Proffitt DR. Perceptual issues in scientific visualization, Proceedings of the SPIE Conference, 1083, 3D Visualization and Display Technologies, Jan. 1989, California, 205-12.
27. Gershon N. How to lie and confuse with visualisation, Jan. 1993, Institute of Electrical and Electronics Engineers Computer Graphics and Applications, 13(6), 102-3.
28. Ranjan V, Fournier A. Volume models for volumetric data, July 1994, Computer, 27(7), 28-36.
29. Hanson AJ, Munzner T, Francis G. Interactive methods for visualizable geometry, July 1994, Computer, 27(7), 73-83.
30. Kaufman A. Fundamentals of volume visualization, Proceedings of the 10th Conference of the Computer Graphics Society, Visual Computing, Integrating Computer Graphic with Computer Vision, June 1992, Tokyo, Japan, Springer-Verlag, Kunii TL. (ed.), 239-52.
31. Avila RS, Sobierajski LM, Kaufman AE. Towards a comprehensive volume visualization system, Proceedings of the Institute of Electrical and Electronics Engineers Conference on Visualization, Oct. 1992, MA, USA, IEEE Computer Society Press, 13-20.
32. Banchoff TF. Beyond the third dimension - Geometry, computer graphics and higher dimensions, 1990, Scientific American Library.
33. Magnenat-Thalmann N, Thalmann D. (eds). New trends in animation and visualization, 1991, Wiley.
34. Bryson S. Virtual environment in scientific visualization, 1993, Animation and scientific visualization tools and applications, Earnshaw RA, Watson D. (eds), Academic Press, 113-22.
35. Ma KL, Smith PJ. Virtual smoke: an interactive 3D flow visualization technique, 1992, Proceedings of the Institute of Electrical and Electronics Engineers Conference on Visualization, Oct. 1992, MA, USA, IEEE Computer Society Press, 46-53.
36. Beshers C, Feiner S. Automated design of virtual worlds for visualizing multivariate relations, Proceedings of the Institute of Electrical and Electronics Engineers Conference on Visualization, Oct. 1992, MA, USA, IEEE Computer Society Press, 283-90.
37. Wright J. Computed reality can illuminate research, July 1994, Scientific Computing, No. 2, 28-30.
38. Satterthwaite K. New frontiers in catalysis: modelling holds the key, Sept. 1994, Scientific Computing, No. 3, 9-13.
39. Jenkins SA, Lowden DW. Eddy current inspection simulator: computer aided learning, Nov. 1993, Materials Evaluation, 51(11), 1226-36.
40. Pratt WK. Digital Image Processing, 1978, John Wiley & Sons Ed.
41. Niblack W. An introduction to digital image processing, 1986, Prentice & Hall Inter.
42. Link R, Nuding W, Sauerwien K, Souw EK. Image processing in real time radiography, 1985, 11th World Conference on Non Destructive Testing, Las Vegas, Nevada, USA, Nov. 1985, Taylor Pub. Co., 1, 560-5.
43. Jensen TH. Filmless X-ray evaluation of structures made of light materials, 1994, Non Destructive Testing and Evaluation International, 27(2), 89-96.
44. Macnicol G. Complex imagery, Sept. 1991, Computer Graphics World, 75-9.
45. Guettinger TW, Grotz K, Wezel H. Eddy current imaging, April 1993, Materials Evaluation, 51(4), 444-51.
46. Lowden DW, Gros XE, Strachan P. Visualising defect geometry in composite materials, Proceedings of the International Symposium on Advanced Materials for Lightweight Structures, ESTEC, Noordwijk, The Netherlands, March 1994.
47. Kreier P, Gribi M, Durocher JM, Hay DR, Pelletier A, Edelmann X. Computer integrated testing, Jan. 1993, European Journal of Non Destructive Testing, 2(3), 94-9.
5
A Bayesian Statistical Inference Approach to the Non-destructive Inspection of Composite Materials

'Statistical methods of analysis are intended to aid the interpretation of data that are subject to appreciable haphazard variability.'
Cox and Hinkley, 1974

5.1 Introduction
Composite materials are used in a wide range of industries, including aerospace, transport and leisure, in the manufacture of aircraft, trains, cars and sports equipment such as golf clubs and tennis rackets. Composite materials combine the advantages of high strength, high fatigue resistance, good thermal oxidative stability, freedom of design and very low weight compared with more conventional materials such as aluminium or steel. For these reasons they are now extensively used, but knowledge of the effects of impact damage, delamination and fatigue cycling on the mechanical properties of composites remains limited. In order to meet the safety criteria required by industry, composites need to be carefully inspected to assess their structural integrity. The heterogeneous and multi-layered characteristics of these materials make them difficult to inspect with conventional non-destructive testing (NDT) techniques. At the present time, there is no reliable test for the inspection of composite materials and there is a need for an efficient and accurate NDT technique. This chapter is devoted to the use of an electromagnetic NDT method for the inspection of composite materials such as helicopter rotor blades and composite panels. Visual examination, infrared thermography, radiography and electromagnetic testing have been carried out on several samples provided by the aerospace industry. These case studies are presented in this chapter. An eddy current method appears to be well suited to the inspection of composite materials, especially for detecting disbonds and impacts in carbon fibre materials. The performance of the method was analysed through probability of detection (POD) and receiver operating characteristic (ROC) curves, and computer visualisation of eddy current data was employed to facilitate signal interpretation and provide qualitative and quantitative information about a defect. Data fusion and integration have been performed using ensemble averaging, thresholding and the Bayesian statistical theory to improve the signal-to-noise ratio and to help in decision making with regard to the presence and location of defects. The theory of Bayesian inference is described through Bayes's rule, and an example of binary decision making in regard to
disbond detection in a helicopter rotor blade, using multiple eddy current sensors, is presented. The results of binary decision analysis show that combining information from multiple sensors can help in decision making by reducing uncertainty and verifying the degree of support of a hypothesis. The final outcome improves the reliability of information and facilitates defect location and characterisation in a statistical and probabilistic manner.

5.2 Composite materials
Composite materials are made up of two or more different components, each having complementary properties. Fibres of boron, carbon (graphite), nylon, Kevlar® or Tedlar® are woven together into a specific pattern which is then bound by a polymer, metal or ceramic matrix. A reinforcement component is incorporated into the matrix, which transmits stress and protects the fibre weave from damage. The reinforcement material, usually a fibre a few micrometres in diameter, is made of glass, carbon, aramid, polyethylene or silicon carbide. Because of their heterogeneous composition, composites present a combination of properties otherwise unavailable through the use of more conventional materials. The manufacture of composite materials is still expensive as it is a labour intensive production process which requires sophisticated equipment. Despite their advantages, the materials show relatively poor resistance to impact and ageing. The limited understanding of composites and their complex failure mechanisms presents problems in establishing safety limits.

5.2.1 DISBONDS AND IMPACTS

Composite materials can be manufactured to almost any shape at relatively low cost and require only minimum maintenance. In addition to the limited knowledge concerning the mechanics of fatigue damage and resistance to strains and stresses, defects are usually difficult to detect in composite materials. Composites are anisotropic, multi-layered and inhomogeneous, and their degradation is difficult to assess; it depends upon the dimension, location and geometry of the defect, as well as on the composition and properties of the material and the nature of the applied stress.1 Two types of defects, disbonds and impacts, which are the main causes of failure of composite materials, will be considered in this chapter. Areas of delamination and disbond are hazardous because they cause a reduction in compressive strength and are usually difficult to detect with conventional NDT methods.2 Generally a disbond can be defined as a lack of adhesion between two different materials, whereas a delamination is a lack of adhesion between two layers of wound fibres. Heida et al.3 demonstrated that artificial delamination can be used for inspection calibration purposes because of its similarity with real delamination. Solomos and Lucia4 described the application of laser holography for the detection of delamination in composite materials. Sometimes changes on the surface of the material can indicate the presence of a flaw; however, the main problem occurs when there are no visible marks. Defects can be introduced to materials during manufacture or during in-service operations. They can develop over time, with overloading, in stress concentration areas and under particular environmental conditions. At the manufacturing stage disbonds can occur for the
following reasons:

• if composite adherends are not dried before bonding, the absorbed moisture can vaporise and produce bubbles in the adhesive layer;
• because of voids in the adhesive layer, caused by trapped air or gases between the glue and the protective coating;
• due to a lack of adhesive, creating disbonds;
• due to the presence of grease on the glue, or as a result of impact.

Disbond formation during in-service operations can be due to:

• environmental conditions (gale, hail);
• hitting a bird or stone;
• dropping a tool during a maintenance operation.
The mechanical properties of composite materials can be affected by impact damage, especially with a brittle component such as graphite. Impact damage is described by Wey and Kessler5 as the primary cause of delamination, and damage to aircraft can be caused by strong gales, stones, bird strikes or the dropping of a tool. Low velocity impacts can reduce the structural integrity of a component without any visual evidence on its surface and can produce internal damage reducing both static and dynamic mechanical characteristics.2 The effects of impact damage on the structural integrity of a material, as well as their detection with NDT methods, have been well illustrated in the literature.5-13 The resistance of graphite/epoxy specimens to impacts ranging from 12.0 to 200.0 J has been studied by Marshall and Bouadi.6 They showed that the threshold energy to fracture was dependent on the material thickness (for identical material composition) and that the greater the thickness, the higher the threshold energy. Greszczuk and Chao7 investigated the failure modes in graphite fibre materials subject to low velocity impact and demonstrated that resistance to impact increased with fibre strength and with decreasing Young's modulus of the matrix. Chester and Clark8 described a computer model used to improve the prediction of the structural response of composite materials subjected to impacts. They demonstrated that delaminations resulting from impact were more likely to occur than ply cracks or fibre failures. Reed and Bevan9 used a finite element method to estimate the stress distributions at damage initiation in composite materials, showing that matrix cracking occurs at the fibre interface. Because of the structural damage caused to composites through impacts, damaged areas need to be detected at an early stage and quantified accurately in order to assess the structural integrity of a component.

5.3 Current NDT methods for the inspection of composites
In this section, the advantages and limitations of the most common NDT methods for composite material examination will be briefly reviewed. The most common NDT methods for the inspection of composites are visual inspection, coin tapping, ultrasound, infrared thermography and radiography. The sensitivity and effectiveness of these methods depend on the type of defect sought, the material inspected, the inspection conditions and location, and the experience of the operator. Other less common inspection methods include laser holography,4 shearography, vibro-thermography, acoustic emission, neutron radiography, air-coupled ultrasound and acoustic microscopy. It is not within the scope of this section to discuss in detail the principle and potential of each technique; these have already been reviewed by Jones and Berger,14 Bar-Cohen15 and Hawkins et al.16
5.3.1 VISUAL INSPECTION

Visual inspection by an experienced operator is the most common procedure for materials examination. It is fairly inexpensive and rapid, but limited to relatively pronounced surface defects. Visual examination does not give quantitative information concerning a defect and is usually performed as an initial rapid inspection prior to damage quantification by another method.

5.3.2 COIN TAPPING

The manual acoustic impact method, also known as coin tapping, is currently in use in the aerospace industry to detect delamination and disbonds.17 Disbonds produce a variation in the emitted sound, and defects as small as 10 mm diameter can be detected. The tap test method can be automated and the resulting sound amplitude recorded, which allows the production of C-scan images of a defect.18 Detection of damage can be very efficient with thin laminates (layers less than 1.0 mm) but is sensitive only to laminar-type flaws such as disbonds and delaminations. In addition, the fact that it relies only on differences in acoustic waves makes signal interpretation of manual testing very subjective. The technique appears to be inadequate for the inspection of thick laminates.

5.3.3 ULTRASONIC TESTING

Ultrasonic testing is the most widely used NDT method in industry for the inspection of metals and composite materials. The method is popular because it can be relatively inexpensive and gives real-time information on internal defects, and software is now available to present the results of an inspection in a 3-D format. However, there are several disadvantages to this method which make it unsuitable for composite materials inspection. Typically, ultrasonic frequencies range between 1.0 and 25.0 MHz depending upon the material composition and the sensitivity required. Signal processing techniques are often needed to help signal interpretation and to reduce the noise level,19 which can be time consuming and costly. The inspection of honeycomb/cellular foam structures is difficult due to the high attenuation occurring in the material. It has been reported that variations in material composition, density and porosity content provoke an attenuation of the ultrasonic beam.10,20 Also, the characteristic dimensions of composites are sometimes comparable to the wavelengths of the ultrasonic waves used to inspect them, which results in multiple scattering and wave dispersion distorting the ultrasonic signal.21 The viscoelastic properties of the epoxy matrix also lead to additional wave dispersion.21 The pulse-echo technique, using a focused transducer, is not suitable for the inspection of thick composites in the detection of delamination, porosity, matrix cracking and non-uniform matrix distribution,20 and is also insensitive to fibre breakage. Damage from impacts of energies over 14.0 J,1 15.0 J22 and 12.0-30.0 J10 has been detected in composites with ultrasound. Although ultrasonic testing has found much success in the detection of large voids and delamination in composites, little work has been undertaken as yet into its ability to detect low energy impacts in composite materials. Sophisticated ultrasonic systems such as laser ultrasound23,24 are more suited to composite material inspection but are, at the moment, expensive and limited to laboratory applications.
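The scattering argument above can be made concrete with a rough wavelength estimate. The short sketch below assumes a longitudinal wave velocity of about 3000 m/s, a typical order of magnitude for CFRP rather than a value measured in this study; at the higher test frequencies the wavelength approaches the dimensions of individual plies, which is when multiple scattering and dispersion become severe.

# A rough wavelength check for ultrasonic inspection of CFRP, assuming a
# longitudinal velocity of ~3000 m/s (an assumed typical value).
velocity_m_s = 3000.0

for f_mhz in (1.0, 5.0, 25.0):
    wavelength_mm = velocity_m_s / (f_mhz * 1e6) * 1e3
    # A single cured ply is of the order of 0.1-0.25 mm thick, so the
    # shortest wavelengths are comparable to the material's microstructure.
    print(f"{f_mhz:5.1f} MHz -> wavelength ~ {wavelength_mm:.2f} mm")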
Ultrasonic inspection requires a couplant such as a gel or water to provide an interface between the specimen inspected and the ultrasonic sensor. In the case of water, immersion of the specimen or the application of a steady jet of water is required. If a gel couplant is used, the material to be inspected needs to be cleaned, which can be very time consuming, especially if a large area is to be tested. Furthermore, an experienced inspector is required for the interpretation of ultrasonic signals. Ultrasonic inspection is a time consuming operation and appears to have limited reproducibility when performed manually. Automated ultrasonic inspection, using a multi-axis robotic system and computer visualisation software, is becoming more widespread but still remains expensive.25 Such a system allows an operator-independent inspection to be performed, reducing human error and improving the signal-to-noise ratio. However, specially designed tanks are required in which the specimen is immersed. Conventional ultrasonic techniques need to be modified and adapted for efficient inspection of composites.

5.3.4 INFRARED THERMOGRAPHY

Because of its portability and real-time display, infrared thermographic equipment is becoming a common means of inspection in the aerospace industry. Recent research has demonstrated the popularity11,25-28 of infrared techniques for non-destructive examination of materials with low thermal conductivity. Infrared thermal wave imaging finds several applications in industry as a real-time, non-contact method of inspection which provides clear colour-coded information on certain types of defect. Portable systems allow rapid inspection of large areas and are suitable for on-site inspection, although a heat source is often required to preheat the specimens to be inspected. The method has limited resolution for low energy impact detection on carbon reinforced composites30 because of the high thermal anisotropy of graphite fibre;16 thermographic signals therefore require post-processing operations, and the equipment demands a high capital investment. The sensitivity of infrared thermography is dependent upon the thermal conductivity of the material. Materials with high surface emissivity will exhibit local surface temperature changes which tend to mask 'hot spots'. Significant noise can appear due to material reflectivity, especially when inspecting components made of two different materials, such as metal and composite on a helicopter rotor blade. Image processing operations such as thresholding, sharpening and smoothing are needed to enhance the quality of thermographs and reduce noise. Infrared thermography does not give quantitative information and images can sometimes be difficult to interpret. Infrared thermography (IRT) is a standard technique used by the aerospace industry for on-site inspection.28 The technique has been successfully applied to the detection of impact damage in composite materials at energies equal to 6.0 J26,27 and 3.4-30.5 J.11 The thermal transient time of composite materials is between 1.0 and 10.0 seconds, making IRT methods well suited to recording rapid changes in temperature. However, the efficiency of the technique still needs to be improved to enable detection of low energy impacts (0.5 to 6.0 J) and to allow inspection of materials with variable surface emissivity.
5.3.5 RADIOGRAPHIC INSPECTION

Radiography is very often used with ultrasound, as it provides complementary direct visual information on the internal structure of a component. Despite the hazards of accidental exposure to X-rays, radiography is used in several applications in the inspection of composite
materials. Although a major limiting factor is the high cost of X-ray apparatus, X-rays can detect honeycomb-core defects in bonded sandwich assemblies; delamination, however, cannot be easily detected31 because there is no variation in density between the delaminated and undelaminated areas. This problem can be overcome by using a dye penetrant visible on the radiograph. Computer tomography (CT) may produce better results32 but the apparatus is more expensive compared with conventional X-ray systems. Computer tomography coupled to an image processing system has proved to be useful for the detection of matrix cracks, fibre fractures and impact damage.17 Hastings33 described a real-time radiographic system using a 160 keV source to detect water entrapment in the honeycomb structure of a helicopter rotor blade.

5.3.6 EDDY CURRENT TESTING

Eddy current testing is a well known and established method for the inspection of metals but, contrary to the NDT methods previously described, it has not attracted much interest or development in regard to composite inspection. Until recently ultrasonic inspection was considered the primary candidate for composite material inspection.33,35 Recent developments have demonstrated that eddy current testing can provide defect information in carbon reinforced composites.35-41 In 1976, Owston38 demonstrated that it was feasible to use eddy currents to reveal cracks and variations in fibre orientation in carbon fibre reinforced plastic (CFRP) materials. Vernonet and Gammell36 demonstrated the potential of using eddy currents to assess the structural integrity of non-metallic fibre composites. Eddy currents provide reproducible quantitative information on a damaged area in composites42 and can be used to monitor flaw growth under loading and under different environmental conditions. Eddy current inspection of composite materials has been successfully carried out using a commercial system to detect disbonds in a helicopter rotor blade,40,42 and to locate and characterise defects in carbon fibre reinforced materials39 as well as impact damage in carbon fibre reinforced materials.30 The method can be made portable and is relatively inexpensive, and results are reproducible, unlike coin tapping or ultrasound. However, further experimental tests are still required to fully appreciate the limitations and advantages of the technique.

5.3.7 MORE SOPHISTICATED NDT METHODS

Sophisticated NDT methods such as laser holography, air-coupled ultrasound and shearography have recently been developed or improved in order to overcome the limitations of existing techniques. Laser holography is considered a specialised visual inspection technique4 which has proved to be an efficient and valuable technique in the detection of surface damage in composites. Impacts of energies as low as 1.0 J have been detected by laser holography.4 Unfortunately, the results achieved depend upon the stability of the test surface and the technique is mainly limited to laboratory applications.
Shearography is a laser-based optical method developed to overcome the limitations of laser holography.43 Portable shearographic equipment is now available which can inspect any material surface by measuring surface displacements and producing maps of strain contours.44 Shearography is sensitive to sub-surface flaws a few millimetres below the surface,43 but it remains an expensive technique with lower defect detectability than laser holography.
Although air-coupled ultrasound equipment is already available, this method is still at the research and development stage. The main advantage of this type of system is that no couplant is required between the ultrasonic probe and the component. However, the other limitations of conventional ultrasonic equipment still apply.

5.4 Description of test specimens
All samples used in the following examinations were real samples donated by the aerospace industry and include sections of helicopter rotor blades and multiple CFRP composite panels. Defects of known size and location were introduced in each sample. Inspections were performed as a blind test; the exact locations of defects were revealed to the inspector only after completion of the inspection. A description of each specimen inspected follows.

5.4.1 HELICOPTER ROTOR BLADE OF A BOEING VERTOL 234

The first specimen tested was a section of a helicopter rotor blade from a Boeing Vertol 234 (BV234) supplied by British International Helicopters (BIH). Helicopter rotor blades are exposed to extreme environmental conditions, and are subject to bending, torsion, stretching, stress, and temperatures ranging from -40 °C to 70 °C depending on the location of the vehicle. Figure 5.1 shows a cross-section of this rotor blade. The titanium erosion plate of the rotor blade was inspected in order to detect defects which may occur as a result of the breakdown of adhesion between the protective metal skin and the glass fibre reinforced plastic (GFRP) material. Such defects, commonly referred to as disbonds, can be described as 'blisters' of minuscule height (a few micrometres) on the metal surface.42

Fig. 5.1 Cross-section of a BV234 helicopter rotor blade (GFRP, titanium erosion plate and honeycomb core; 61 cm x 34 cm)
Fig. 5.2 Artificial disbond map of the rotor blade section, x axis in mm (modified from Lowden42)
Seven disbonds were present on the section of the rotor blade (Fig. 5.2) and were artificially induced by the helicopter operator by localised heating of the erosion plate. The portion with the titanium metal skin was 1000.0 cm long and 34.0 cm wide. The inspection was concentrated on a 20.0 x 20.0 cm2 portion of the rotor blade.

5.4.2 HELICOPTER ROTOR BLADE OF AN ECUREUIL

Two sections of a rotor blade from an Ecureuil type helicopter, with disbonded leading edge shields and disbonds, were supplied by Eurocopter, France. Although the full lengths of the first and second samples were 90.0 cm and 53.0 cm respectively, the inspection of the stainless steel erosion plate on each sample was limited to a distance of approximately 50.0 cm. Schematic diagrams of these two samples are shown in Figs 5.3 and 5.4.

Fig. 5.3 Schematic diagram of the first sample of rotor blade from the Ecureuil helicopter (metal-composite interface; unit: mm)

Fig. 5.4 Schematic diagram of the second section of rotor blade from the Ecureuil helicopter (metal-composite interface; unit: mm)

5.4.3 CARBON REINFORCED FIBRE COMPOSITE PANELS

To determine the limits of an electromagnetic technique for the detection of low energy impacts, five samples with different impact loading, generated by a drop tower mechanism, were prepared by Eurocopter, France. The samples were realistic specimens of composite material similar to that used in the manufacture of helicopter panels. Each sample was made of carbon composite materials of different thickness and weave (Fig. 5.5).
Fig. 5.5 Positions of impacts and impact energies in joules for the five composite samples inspected

The number of plies of carbon varies for each sample as follows:

• sample 1: 4 plies carbon on each layer
• sample 2: 2 plies carbon on first layer and 3 plies carbon on second layer
• sample 3: 2 plies carbon on each layer
• sample 4: 6 plies carbon on first layer and 2 plies carbon on second layer
• sample 5: 6 plies carbon in a single layer.
The matrix of the composites also differed from sample to sample. The samples were subjected to low energy impacts ranging from as little as 0.5 J up to 10.0 J. A total of 53 impacts were introduced on the five samples; their positions and impact energies are presented in Fig. 5.5.

5.5 Methodology and experimental design
In addition to eddy current testing, complementary NDT methods such as X-ray radiography and infrared thermography were used, where possible, for the inspection of the rotor blade samples and the composite panels. The results of these inspections, together with the eddy current testing, are presented in section 5.6. Because of the low conductivity of graphite epoxy materials, coil design, frequency and sensitivity had to be carefully considered. Each sample described in section 5.4 was inspected using an electromagnetic testing method. The surface of each sample was scanned in a raster manner by an eddy current probe connected to a commercial eddy current instrument (Nortec 25L). The output signal from the probe was read by the computer controlled eddy current instrument, and the signal transferred, via an RS-232 connection, to a computer where it was stored on file. A ferrite core probe was used to direct the magnetic field. Such a probe is fairly inexpensive and is available in different shapes and dimensions, which is useful for the inspection of materials of unconventional design. The sensitivity of the system to small defects is increased by the use of small probe diameters. The electrical resistivity of composite materials is between 5000 and 20 000 μΩ cm, up to 8000 times higher than that of aluminium,39 and therefore high frequency eddy currents are required to inspect these components. A frequency range from 0.5 to 10.0 MHz was used in order to concentrate the eddy currents and the magnetic field in the upper layer of the material inspected. The surface of each specimen inspected was automatically scanned using a computer controlled scanning frame. Although a 3-D map of each sample was graphically generated at a later stage, it was also possible to display the signal amplitude as a 2-D graph in real time. Using computer visualisation facilities, the monitoring of a defect could be easily performed. The resulting images were colour coded to facilitate defect location and quantification even by a relatively untrained operator. Numerical information such as defect depth and signal amplitude was read from the computer generated map. Post-processing using thresholding, filtering and image processing facilities could also be performed for detailed evaluation of data.
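The need for such high test frequencies can be checked against the standard depth of penetration formula of eddy current theory, delta = sqrt[rho/(pi f mu0 mur)]. The sketch below evaluates it over the resistivity range quoted above; the formula is standard theory rather than a calculation reported in this study.

# A minimal sketch of the standard eddy current depth of penetration,
# delta = sqrt(rho / (pi * f * mu0 * mu_r)), evaluated over the resistivity
# range quoted above (5000-20 000 uOhm cm) and the 0.5-10.0 MHz test band.
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def skin_depth_mm(resistivity_uohm_cm: float, frequency_hz: float, mu_r: float = 1.0) -> float:
    rho = resistivity_uohm_cm * 1e-8  # convert uOhm cm to Ohm m
    return math.sqrt(rho / (math.pi * frequency_hz * MU0 * mu_r)) * 1e3

for rho in (5000.0, 20000.0):
    for f in (0.5e6, 10.0e6):
        print(f"rho = {rho:7.0f} uOhm cm, f = {f/1e6:4.1f} MHz: "
              f"delta ~ {skin_depth_mm(rho, f):5.2f} mm")

Even at 10.0 MHz the calculated depth of penetration is of the order of a millimetre, consistent with the aim of concentrating the eddy currents in the upper plies.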
5.6 Inspection results
5.6.1 DETECTION, VISUALISATION AND QUANTIFICATION OF DISBONDS ON A BV234 HELICOPTER ROTOR BLADE

Visual inspection of the titanium-GFRP surface of the BV234 helicopter rotor blade did not provide any indication of the presence of disbonded areas. Manual coin tapping inspection, however, detected most of the large disbonded areas rapidly and gave an
outline of the extent of the damage. No quantification was possible as the method did not provide direct visual indication or depth information. An eddy current inspection was then performed using the equipment described in section 5.5; it provided real-time 2-D visual qualitative and quantitative information about a defect (Fig. 5.6). The amplitude of the signal output from the eddy current sensor is proportional, within limits, to the actual height of the disbond. The graph of signal output against sensor position for multiple inspections in Fig. 5.6 shows the reproducibility of the technique in the detection and quantification of a disbond.
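The mean curve plotted in Fig. 5.6 is the result of ensemble averaging the repeated scans. The sketch below illustrates the principle on synthetic data; the disbond profile, noise level and scan count are invented for illustration and are not the recorded data. For zero-mean random noise, averaging N scans improves the signal-to-noise ratio by a factor of about sqrt(N).

# A minimal sketch of ensemble averaging over repeated eddy current scans,
# assuming each scan samples the same positions; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
positions = np.arange(200)
true_profile = np.where(np.abs(positions - 100) < 50, 2.0, 0.0)  # idealised disbond signal

# Ten noisy scans of the same track, then their pointwise mean (the MEAN curve):
scans = np.array([true_profile + rng.normal(0.0, 0.5, positions.size) for _ in range(10)])
mean_scan = scans.mean(axis=0)

print("noise std, single scan:    ", np.std(scans[0] - true_profile))
print("noise std, 10-scan average:", np.std(mean_scan - true_profile))  # ~1/sqrt(10) smaller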
Fig. 5.6 Two-dimensional graphs of signal output vs. sensor position for multiple eddy current inspections on the surface of a BV234 helicopter rotor blade

The disbond with the detection curve shown in Fig. 5.6 begins at 5.0 cm and ends at 15.0 cm along the length of the section of the rotor blade inspected. The location of this particular disbond corresponded with the position of disbond number 5 shown on the map in Fig. 5.2. Assuming the disbond to be circular, its area can be estimated using the algorithm described in Lowden.42 The relationship between signal amplitude and disbond height can be estimated by plotting a lift-off curve of the sensor output against the elevation of the sensor above the specimen inspected (Fig. 5.7). A map of the actual area of the BV234 rotor blade inspected is shown in Plate 17. The inspection was confined to a surface of 20.0 x 20.0 cm2. The image is colour coded, with each colour corresponding to a particular elevation of the surface of the titanium layer on the rotor blade. The most important fact gained from this map is that a disbond is not necessarily circular, and in this case the algorithm used previously to calculate its area was not adequate. A different algorithm making use of the trapezoidal rule was used to calculate the disbond area more precisely. Results of area calculations from multiple algorithms are summarised in Table 5.1. The area of the disbond was also calculated using Autocad facilities. Autocad calculations gave results which appeared to be in close agreement with the actual size of the disbond estimated by the manufacturer; however, more work is required to make inferences about its accuracy. Information on disbond height can also be obtained interactively by pointing to the area of interest using a computer mouse.
Fig. 5.7 Lift-off curve for the eddy current sensor used on the BV234 helicopter rotor blade (sensor output vs. distance from sensor to rotor blade in mm)

Table 5.1 Calculations of disbond area with multiple algorithms

Method                     Disbond length/cm    Disbond area/cm2
Algorithm from Lowden42    10.0                 90.0
Trapezoidal rule           9.1                  67.0
Autocad                    9.0                  69.8
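The trapezoidal rule entry in Table 5.1 can be illustrated with a short sketch that integrates the disbond width measured on successive scan lines along the scan direction; the width values below are hypothetical, not the measured BV234 data.

# A minimal sketch of disbond area estimation with the trapezoidal rule,
# assuming a thresholded scan map gives the disbond width on each scan line.
# The widths below are hypothetical, not the measured BV234 data.
y_mm = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0]
width_mm = [0.0, 30.0, 55.0, 70.0, 78.0, 80.0, 72.0, 55.0, 30.0, 0.0]

area_mm2 = sum(0.5 * (width_mm[i] + width_mm[i + 1]) * (y_mm[i + 1] - y_mm[i])
               for i in range(len(y_mm) - 1))
print(f"trapezoidal-rule area ~ {area_mm2 / 100.0:.1f} cm^2")

# For comparison, a circular assumption with the same 9.0 cm extent gives
# pi * 4.5**2 ~ 63.6 cm^2; an elongated disbond makes the two estimates diverge.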
The map shown in Plate 17 was generated by integrating the data from 11 channels which were designed for multiple probe array scanning. This type of array allows a full inspection of the rotor blade in only one scan. Currently, the inspection is done separately for each channel, therefore requiring 11 scans in order to cover the total surface of the rotor blade. The inspection could have been performed in a single scan if an array of multiplexing sensors had been available. A contour map can be added to the display in Plate 17, which gives supplementary information to the inspector. The inspection map conveys valuable information such as defect location, area, height and profile. This type of information display facilitates signal interpretation, defect location and characterisation. Automatic defect recognition using expert systems and neural networks could also be achieved given the relevant software and hardware. The image displayed in Plate 17 was obtained after data processing using ensemble averaging and multisensor integration. Multisensor integration is related to the use of multiple sensors providing data of the same class, which are amalgamated to map the area of an inspected component, in our case the surface of a helicopter rotor blade. By integrating the data from 11 channels a 2-D map can be produced as shown in Plate 17. The map displays the signal output (volts) of the eddy current probes for 11 channels along a distance of 20 cm. In order to assess the reproducibility of the technique, the inspection was repeated several times, 10 scans being concentrated on the same portion of the rotor blade. A disparity between the data from the 10 separate inspections was noted, which was probably caused by the inability of the experimental test rig to scan the same portion of the rotor blade each time. In order to improve the signal-to-noise ratio, and therefore provide a more accurate image representative of the surface of the specimen inspected, data fusion, as previously defined, was used on the NDT data collected. In this experiment, the eddy current probes used were identical (within the manufacturing tolerance) and the output signals were of the same class and in the same format, simplifying the fusion process. For this first NDT data fusion approach, the inspection was carried out under optimum test conditions, i.e. the use of identical sensors, identical signal format and similar geographic position of the probes on the inspected area. Under these conditions, data alignment was easily achieved and no overlapping of information occurred. The 2310 items of data collected were stored, and low level data fusion using ensemble averaging was implemented. Ensemble averaging is considered as low level data fusion since it takes into account the data acquired from multiple sensors and multiple inspections of the same specimen to produce a final output with a major noise reduction and relevant information extraction. If the noise is random, averaging the signals will cause the output signal-to-noise ratio to approach infinity. The major defect in the BV234 rotor blade sample is clearly visible in the foreground of Plate 17 and appears to be connected to another defect in the top right-hand corner of the image. A doubt remains as to whether the defect is a single disbond extending diagonally through the area inspected or two different disbonds. This ambiguity was resolved using the Bayesian statistical inference approach described in section 5.7.

5.6.2 DETECTION OF DELAMINATION AREAS ON AN ECUREUIL HELICOPTER ROTOR BLADE

Visual inspection

Damage to the composite materials on both sides of the helicopter rotor blade sections was observed by visual inspection, as shown in Plate 12. Adhesive failure, which causes delaminations and disbonds as the two materials (metal and resin) flex in different ways when the blade curvature changes from up-flexion to down-flexion with the helicopter stationary, was not visible.

Coin tapping inspection

The results of coin tapping inspection were very subjective, in part due to the manufacturing composition and design of this rotor blade. Differences in sound could be perceived when moving to a section of the blade thinner than other sections due to curvature. The variation of density filling also affected the sound of the metal tip. Coin tapping inspection was carried out by Eurocopter at La Courneuve in France, but the results of this inspection were never communicated, making comparison with other methods impossible.

Computer tomography examination

Computer tomography was performed on the first sample at an energy of 150.0 keV. The results gave poor resolution and artefacts occurred at the interface between the stainless
steel and the composite material. A possible delamination appears to be detectable on the radiograph shown in Fig. 5.8, but the results were not conclusive. Also, with such a system, the rotor blade would have to be removed from the helicopter for inspection.
Fig. 5.8 Positive image of CT scan of the second section of the rotor blade from the Ecureuil helicopter showing light delamination
Infrared thermographic examination

Infrared thermographic inspection was carried out using an AGEMA-400 video camera interfaced to a computer. This system is portable and can be used for on-site inspection. The thermographs were recorded on disk for post-processing operations. The temperature range of the camera was -20 °C to +500 °C, the thermal sensitivity was 0.1 °C at 30.0 °C and the detector was a single element MCT-Sprite (HgCdTe) type. The images were displayed on a portable computer screen with 256 colour levels (8 bit). The two Ecureuil rotor blade sections were heated using a hot air gun and were placed in a dark closed room to prevent reflection from ambient radiation. Inspection of the first section did not give any conclusive results. The problem was in part due to the high heat reflection that occurred at the metal-composite interface of the rotor blade. The heat was strongly reflected by metallic parts, but reflection from the composite was negligible. The result was a bright spot on all metallic parts of the examined section, hiding any potential defects at the metal-composite interface. Better results were obtained with the inspection of the second section, presented in Plate 13. A disbonded area appears to have been detected at the metal-composite interface, as shown by the yellow spot in Plate 13. The effect of different emissivities across the component's surface can now be eliminated using the new Agema Thermovision 900 Lock In system,37 but such equipment was not available at the time of inspection.
Eddy current inspection

The eddy current inspection of the first section of the Ecureuil rotor blade was performed over a distance of 50.0 cm. A major disbond was detected which ran along the interface between the metal and the composite material. The map in Plate 15 permits an estimation of the size and location of this disbond. The inspection of the second sample was performed over a length of 50.0 cm, shown in Plate 16. A disbond at the level of the V-joint was detected (Plate 16). An additional defect was also detected along the interface of the metal and composite material. It is not clear whether this was due to (i) a disbond, (ii) a variation in the thickness of the glue, (iii) a foreign carbon fibre material or (iv) a variation of the signal caused by the curvature of the blade or the laboratory test rig. As with other NDT methods, such as coin tapping or ultrasound, the eddy current signal output is compared with a reference signal calibrated in a defect free area. In this case, because a sample of reference material was not available, calibration of the instrument had to be carried out on a portion of the rotor blade which was assumed to be defect free. A few difficulties were encountered during the inspection of the two sections of the Ecureuil helicopter rotor blade. The difficulties were related to the test rig, which was not designed for specimens of this size. The previous tests on helicopter rotor blades were performed on a Boeing Vertol 234 type rotor blade which was almost twice the depth of the Ecureuil rotor blade (95.0 cm for the Boeing and 35.5 cm for the Ecureuil). Therefore the test rig had to be redesigned and optimised for the inspection of these types of rotor blades. The problems encountered greatly influenced the results of the inspection by increasing the noise level. In the previous experiment of impact detection, the defects were roughly mapped to ascertain the success of the technique in detecting these areas.

5.6.3 LOW ENERGY IMPACT DETECTION ON CARBON REINFORCED COMPOSITE MATERIALS

Visual inspection

An initial visual inspection was carried out on each sample described in section 5.4. Only surface defects where the matrix had broken were found to be detectable by visual examination. However, the extent of the impact damage was not identifiable from visual inspection. Low energy impacts may result in no surface damage and are therefore undetectable by visual inspection. Fibre damage can occur with and without matrix damage, and sometimes appears on the back surface of the component (Fig. 5.9). Impacts of energy less than 2.0 J were difficult to detect on sample 1 and no defect was visible on sample 5. The visual inspection results were compared with the impact locations shown in Fig. 5.5.

Fig. 5.9 Damage on the back surface of a composite panel (sample 4) caused by an impact of 10.0 J and detected by visual examination

Infrared thermographic examination

The same apparatus as described in section 5.6.2 was used to carry out infrared thermographic inspection on the five carbon fibre reinforced materials described in Fig. 5.5. The composite material specimens were placed in an oven preheated to 100 °C (the temperature advised by the manufacturer). After 5 minutes, the samples were removed
and the thermal variations on the impacted sides were recorded. Variations in temperature occurring at the edges of all samples were noted, which created heat spots in defect-free areas. Four impacts of energies ranging from 4.0 to 6.0 J were detected on sample 1. Plate 9 shows a thermograph of side 1 of sample 2. Three defects resulting from low energy impacts of 2.0, 2.5 and 3.0 J are shown as circular hot spots. However, image processing was required to enhance impact location. No impacts were detected on samples 4 and 5 under similar conditions. It was noted that infrared thermographic inspection was only partially efficient, and only for samples 1, 2 and 3, when compared with the impact maps in Fig. 5.5. Poor results were also obtained from the thermographic inspection performed by Eurocopter (France).

Radiographic testing

Because of the low absorption coefficient of composites to X-rays, low voltage radiography (5.0-50.0 keV) is used to increase the sensitivity of the system for impact detection. Medical radiographic equipment (Siemens Ergophos-4 with a tungsten anode) was used for these inspections. The radiographs were taken at 42.0 keV from 50.0 to 80.0 mA with an exposure time of approximately 0.6-1.0 s, as recommended by the radiographer. Figure 5.10 is a positive image of a radiograph showing the damaged honeycomb structure and fibres of sample 2 for an impact of 2.0 J. Radiographic examination did not provide additional information over what was obtained with visual inspection, and the interpretation of radiographs was difficult.

Fig. 5.10 Positive image of a radiograph showing fibre and honeycomb damage caused by an impact of 2.0 J on sample 2

Eddy current inspection

Eddy current inspection of the surface of the five composite panels was carried out using a computer-controlled scanning frame. Using computer visualisation facilities, monitoring
of a defect could be easily performed. The resulting images were colour coded to facilitate defect location and quantification even by a relatively untrained operator (Plate 18). A 3-D display of the inspected area can be produced after completion of each inspection (Fig. 5.11). Numerical information, such as defect depth or signal amplitude, and visualisation of the extent of the damaged area can be estimated from the computer generated map. Post-processing could be performed for a more detailed evaluation of the data.

Fig. 5.11 Example of 3-D representation of the surface of sample 2 inspected with an eddy current system

Eddy current detection of impacts on carbon reinforced composites was found to be a low cost technique with higher impact detectability and easier signal interpretation than was achieved with
infrared thermography or radiography. The efficiency of eddy current testing depends on the sensitivity of the instrument, which should be selected as a function of the sample composition. Probability of detection and receiver operating characteristic analyses, described in sections 5.7.2 and 5.7.3 respectively, were performed to assess the actual performance of each inspection method previously used in the detection of low energy impacts in carbon reinforced composites.

Impact quantification on both sides of sample 2

The eddy current inspection sensitivity depends on the material composition, structure and weave. From the experimental results, eddy current testing appears to be more sensitive in the detection of low energy impacts on Tedlar-carbon based materials than on carbon-Nida materials. Figures 5.12 and 5.13 are plots of the damaged area (in mm2) and impact depth (in mm) as functions of the impact energy (in J) for side 1 and side 2 respectively. These plots show that both the damaged area and the impact depth increase with the impact energy. By plotting on the same graph the damaged area against impact energy (Fig. 5.14) and against impact depth (Fig. 5.15) for both sides, additional information can be obtained. At equal impact energy, side 2 appears to be more resistant to low energy impacts than side 1. It can be seen in Fig. 5.14 that with an impact energy of 3.0 J, the damaged area is 80 per cent greater for side 1 than for side 2. Brittle fracture of the Tedlar-carbon side was revealed by visual inspection, and damage occurred mainly in the highest plies of the component. The three layers of carbon on side 2 offer a good resistance to crack propagation and present little deformation from impacts compared with Tedlar. Therefore, from the point of view of inspection and impact resistance, components made of carbon-Nida materials would be preferable to Tedlar-carbon even though their threshold value is slightly higher than for Tedlar-carbon. Still more research needs to be carried out to fully characterise an adequate combination of materials to achieve optimum strength and toughness of composites.
Fig. 5.12 Plots of damaged area and impact depth as functions of impact energy for sample 2 side 1

Fig. 5.13 Plots of damaged area and impact depth as functions of impact energy for sample 2 side 2

Fig. 5.14 Comparison of damaged area against impact energy for sides 1 and 2 of sample 2

Fig. 5.15 Comparison of impact depth against impact energy for sides 1 and 2 of sample 2
5.7 NDT data integration and fusion
5.7.1 THE BAYESIAN APPROACH

The Bayesian inference theory (first described in chapter 2) is adequate when an explicit representation of ignorance and of the support of a hypothesis needs to be obtained. The combination of information is performed through a recursive process known as Bayes's rule, which is derived from Bayesian decision theory. Assuming mutually exclusive hypotheses, the probability of occurrence of a hypothesis H is denoted p(H) and its probability of non-occurrence is denoted p(H̄). From conditional probability theory we can write p(H) + p(H̄) = 1. Bayes's theorem was first introduced by the Reverend Thomas Bayes in 1763 and is used to verify the degree of support of one or multiple hypotheses. It can be written as follows:

p(H/E) = p(E/H)p(H) / [p(E/H)p(H) + p(E/H̄)p(H̄)]    (5.1)

where E is an event and p(E/H) is the conditional probability of occurrence of this event when the hypothesis H is verified. p(H) is called the prior probability of the hypothesis H and p(H/E) is defined as the posterior probability of occurrence of H when the event E is true. The general form of equation (5.1) is

p(Hi/Ej) = p(Ej/Hi)p(Hi) / Σ(j=1 to k) p(Ej/Hj)p(Hj)    (for i = 1, ..., k)    (5.2)
where Ej = {e1, ..., ek} is a sequence of events (or pieces of evidence) and Hi is a set of hypotheses such that Hi = {h1, ..., hk}. Assuming all sensors to be mutually independent of one another, one can see that information from multiple sources can be combined to estimate the final probability value. Whenever new data are received, Bayes's rule is used to update the probabilities. The outcome of the combination of information through Bayes's rule is a hard decision level, in contrast with the Dempster-Shafer theory, which provides a plausibility value associated with the degree of support of a hypothesis. Ambiguity can be reduced by calculating the probability of a defect being detected at a particular location of an inspected area of a component. Assuming p(D) to be the probability of a defect being present and p(S/D) the conditional probability of measuring a specific signal output when a defect is actually present, the asymptotic posterior probability p(D/S) can be computed (Fig. 5.16). From Fig. 5.16 it can be seen that the posterior probability of D given S is proportional to the prior probability of having a defect and the likelihood of S when D is verified. In the case where there is no knowledge about the material and defect distribution, a probability p(D) = 0.5 can be assumed. This is represented as a straight line in Fig. 5.16; it corresponds to pure chance and defines a limit above which p(D/S) > p(S/D). One of the restrictions of the Bayesian approach is that a large amount of data is required to determine the actual prior and conditional probabilities.
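The recursive application of Bayes's rule to binary defect decisions can be sketched in a few lines. The detection and false call probabilities used below are hypothetical values chosen for illustration, not the figures estimated for the BV234 inspection.

# A minimal sketch of recursive Bayesian updating (equation 5.1) for binary
# defect decisions, assuming hypothetical sensor operating probabilities.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior p(H/E) from Bayes's rule."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# Assumed sensor model: a sensor flags a defect with probability 0.9 when one
# is present (POD) and 0.1 when none is present (false call rate).
POD, PFC = 0.9, 0.1

p_defect = 0.5  # non-informative prior, as suggested in the text
for flagged in (True, True, False, True):  # hypothetical readings from four sensors
    if flagged:
        p_defect = bayes_update(p_defect, POD, PFC)
    else:
        p_defect = bayes_update(p_defect, 1.0 - POD, 1.0 - PFC)
    print(f"updated p(D) = {p_defect:.3f}")

Three positive readings out of four drive the posterior close to unity, illustrating how fusing several moderately reliable sensors reduces uncertainty more than any single reading can.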
Fig. 5.16 Asymptotic posterior decision probability p(D/S) vs. conditional probability p(S/D) for different prior probability values

In the inspection of the BV234 helicopter rotor blade, the sensors were assumed to be spatially distributed. Because all the sensors were similar, they were assumed to operate at identical probabilities of detection and probabilities of false alarm. The signal output threshold was determined from experimental inspections and information on safety limits provided by an experienced inspector. The prior and conditional probabilities were estimated from repeated inspections and improved by using
interrelationships between defect attributes, materials inspected, equipment selected and operator experience. The results of the Bayesian approach using equation (5.2) are shown in Plate 14, which represents a probability map showing the degree of support and certainty associated with the data displayed in Plate 17. It is evident that an important reduction in noise has been achieved and the defects are clearly delineated, which facilitates defect location and characterisation. The probability of having a defect at a particular position is again colour coded and can be obtained by selecting a desired location on the computer generated map. As an example, the probability of having a defect at the previously defined location (between 5.0 and 15.0 cm) reaches 96 per cent, which supports the reading of the sensors to a great extent (only 4 per cent of the actual sensor outputs are related to a false call). Before combination of information, the probability of having a defect was equal to 0.28. This value was decreased to 0.12 in the area where the two disbonds were supposed to be connected. The Bayesian inference reasoning approach has removed this ambiguity: two distinct disbonds are in fact present on the section inspected. Assuming H to be the hypothesis of having a defect and E to be the signal output from a sensor, the probability of having a defect knowing the probe signal output, p(H/E), can be computed using the following expression:

p(H/E) = p(H)p(E/H) / [p(H)p(E/H) + p(H̄)p(E/H̄)]

where p(H) = 0.276 and p(H̄) = 0.724 (in our case). In the case where the signal output is E = 1.9, the probability that there is a defect is p(H/E) = 3.6 per cent. If E = 5.1, p(H/E) = 88.4 per cent. By computing the above equation for each value of the sensor output, a probability map (Plate 14) can be generated. The Bayesian inference theory can distinguish inconsistencies among various sources of information and improve the reliability of decision making by reducing vagueness and providing a measure of certainty. One of the limits of the efficiency of this approach is that Bayes's theory is dependent on the prior probability, as illustrated in Fig. 5.16.

5.7.2 PROBABILITY OF DETECTION ANALYSIS

The concept of probability of detection (POD) is a statistical representation of the ability of a technique to detect a specific defect size. Probability of detection curves are becoming more frequently used, especially in the aerospace industry.45,46 Probability of detection curves for different low energy impacts for the visual, eddy current, infrared and radiographic inspections of the composite panels described in section 5.4.3 were plotted (Fig. 5.17) using equation (5.3) described by Berens and Hovey:47

POD(a) = 1 / {1 + exp[-(ln a - μ)/σ]}    (5.3)

where a is the defect size, and μ and σ are mathematical parameters defined in Berens and Hovey47 (see chapter 3 for more details). Several useful parameters can be defined from a POD curve. These are the defect detection threshold value, the sure defect detection value and the median defect detection value. The threshold value corresponds to the minimum detectable size of a defect, the
sure detection value is the minimum defect size detected with a POD close to 100 per cent, and the median defect detection value is the defect size with a POD of 50 per cent (Table 5.2). The impact detection threshold value for eddy current inspection is five times better than that for inspection by infrared or X-ray. Moreover, impact energies above 0.27 J can be detected with a 50 per cent confidence level using eddy current testing, against 0.5, 1.6 and 3.0 J for radiographic, visual and infrared inspection respectively. The POD curves show the high performance of eddy current testing compared with visual, infrared and radiographic examination for low energy impact detection in carbon reinforced composites.

Fig. 5.17 Probability of detection vs. impact energy for different NDT methods

Table 5.2 POD parameters from Fig. 5.17 for different NDT methods

POD parameter               Visual examination   Infrared inspection   Radiographic inspection   Eddy current examination
Threshold value/J           0.5                  1.5                   0.3                       0.1
Median detection value/J    1.6                  3.0                   0.5                       0.27
Sure detection value/J      8.0                  8.5                   6.5                       5.0
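Equation (5.3) is straightforward to evaluate. In the sketch below, μ is set from the eddy current median detection value of Table 5.2, while σ is an assumed spread parameter chosen for illustration rather than a value fitted to the inspection data.

# A minimal sketch of the log-odds POD model of equation (5.3).
import math

def pod(a: float, mu: float, sigma: float) -> float:
    """POD(a) = 1 / (1 + exp(-(ln a - mu) / sigma)) for defect size a."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

mu = math.log(0.27)  # median detection value: POD = 0.5 at 0.27 J (Table 5.2)
sigma = 0.55         # assumed spread parameter, chosen for illustration

for energy_j in (0.1, 0.27, 1.0, 5.0):
    print(f"POD({energy_j:4.2f} J) = {pod(energy_j, mu, sigma):.2f}")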
Figure 5.18 is a plot of POD against impact energy for different sensitivities of the eddy current instrument; sensitivities used varied from 5.0 V/div (volts per division) to
0.2 V/div. The sensitivity of the apparatus needed to be adjusted carefully in order to select the most appropriate level for the material inspected (Fig. 5.19). It can be seen from Fig. 5.18 that the best results are obtained with the highest sensitivity of 0.2 V/div. However, other factors such as the probability of false calls, which do not appear in POD curves, have to be taken into account to assess the performance of a system. False calls are considered when performing receiver operating characteristic analysis, as described in the following section.
Fig. 5.18 Probability of detection vs. impact energy for different sensitivities of the eddy current instrument used for inspection of sample 1
Aeronautical products have to be inspected at the manufacturing and in-service levels for safety and economic reasons. Certain flaws need to be detected at an early stage and their growth monitored before they reach a critical size. As has been shown in the previous experiments, eddy currents are well suited to the detection of low energy impacts in composite materials. Visual inspection provides only limited information for low impact energy defect detection but is useful for a rapid check and estimation of a damaged area. Infrared thermography was not efficient in the detection of impact energies of less than 2.0 J. The IRT technique appeared to be as efficient as visual inspection. The resulting radiographs showed that X-rays are not sensitive to planar defects such as impacts, and that only broken fibres can be detected. One of the limitations of the POD representation is that it does not display the number of false calls of a technique. In order to take false calls into account, a receiver operating characteristic analysis is performed (see section 5.7.3) on the data obtained for eddy current inspection at different sensitivities.
Fig. 5.19 Contour maps from eddy current inspection of sample 1 with different sensitivities
5.7.3 RECEIVER OPERATING CHARACTERISTIC ANALYSIS

Receiver operating characteristic (ROC) curves can be used to compare the performance of inspection systems, as shown by Nockemann et al.48 The analysis gives a very good representation of system performance as it takes into account false calls as well as true calls. It can be seen from Fig. 5.20, showing the POD and number of false calls for different sensitivities on the carbon reinforced panel 1 presented in Fig. 5.5, that the eddy current sensitivity selected would have to give a high POD and a low probability of false calls. The best suited sensitivity in this case would lie between 1.0 V/div and 2.0 V/div, both of which give no false calls and a 100 per cent POD.
Fig. 5.20 Probability of detection (POD) and number of false calls (NOFC) for different sensitivities (0.2, 0.5, 1.0, 2.0 and 5.0 V/div) of the eddy current instrument used for inspection of sample 1

The inspection performance, denoted K, for each sensitivity can be calculated by selecting the tangent on an ROC curve as described by Boogaard and Van Dijk,49 and is defined as

$$K = 1 - 2\sqrt{(1 - \mathrm{POD}) \times \mathrm{PFC}} \qquad (5.4)$$

where PFC is the probability of false calls.
The value of K gives an estimate of the performance of a method; the greater K is, the better the method. The inspection performance values for the five eddy current sensitivities used in the inspection of composite panels are summarised in Table 5.3. From experimental studies and the ROC curves obtained (Fig. 5.21), a sensitivity of 1.0 V/div appeared to be best suited for the inspection of a carbon reinforced composite sample of composition similar to sample 1. It achieved the highest POD for impact energies of less than 2.5 J compared to the POD achieved using a sensitivity of 2.0 V/div. This type of study could be performed in any inspection case requiring the most efficient method of defect detection. For methods which give high POD and very low false calls, an ROC analysis is not adequate to show the potential of the technique; another mathematical method would be required to assess system performance.

Table 5.3 Inspection performance values for different eddy current sensitivities

Sensitivity/V/div    Inspection performance (K)
5.00                 0.76
2.00                 0.90
1.00                 0.94
0.50                 0.91
0.20                 0.89
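Equation (5.4) is straightforward to evaluate once a POD and a probability of false calls (PFC) have been estimated for each sensitivity. A minimal sketch, using hypothetical (POD, PFC) pairs rather than the measured values behind Table 5.3:

import math

def inspection_performance(pod, pfc):
    """Inspection performance of Boogaard and Van Dijk,
    K = 1 - 2*sqrt((1 - POD) * PFC), equation (5.4)."""
    return 1.0 - 2.0 * math.sqrt((1.0 - pod) * pfc)

# Hypothetical (sensitivity, POD, PFC) triples:
for sens, pod, pfc in [(5.0, 0.90, 0.15), (1.0, 0.99, 0.01), (0.2, 1.00, 0.08)]:
    print(f'{sens} V/div: K = {inspection_performance(pod, pfc):.2f}')

As the formula shows, K rewards a high POD only when the probability of false calls stays low, which is exactly the trade-off the ROC analysis is designed to expose.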
Fig. 5.21 ROC curves for different sensitivities of the eddy current instruments
5.8 Discussion
Defects detected through visual examination in helicopter rotor blade samples were limited to gross visible damage. Disbond and delamination on the BV234 and Ecureuil helicopter rotor blades were beyond the detection level of visual inspection. The visual detection of impacts in the composite panels described in section 5.6.3 was also limited to broken matrices which showed through the surface. No damage was visible on the surface of thin panels (samples 4 and 5), and 'blind side' impact damage can be visually detected only if access to the back of a component is possible. With an overall probability of detection of 53 per cent for impact detection, visual inspection should not be neglected, although impacts of less than 1.0 J are difficult to detect in thick composites (samples 1, 2 and 3) and another NDT method would be required for suitable low impact energy detection. A possible alternative would be to use laser holography or shearography. It was noted that thick composites made of Tedlar and with a honeycomb structure (samples 2 and 3) show very poor resistance to impact, as does glass fibre/graphite material (sample 1), unlike materials made solely of carbon (sample 5).

Several problems were encountered in the thermographic inspection of the Ecureuil helicopter rotor blades made of stainless steel and epoxy. The metal part and the epoxy resin acted as two different sources of emissivity and it was impossible to detect any fault. A solution would be to filter the heat emissivity from the metal or from the composite material. The problem of different emissivities with different parent materials appears to have been solved by recent developments in signal processing and by measuring separately the phase and amplitude of thermal emissivity.50 Modern flash tubes delivering energies up to 5.0 J/cm2 may produce better impact detectability in carbon reinforced composites, but due to such high heat pulses, physical damage could occur in materials of low heat tolerance. The same defects detected by visual inspection in composite panel 2 were found with IRT, a limiting factor of the technique which should be taken into account when purchasing such expensive equipment. Successful detection of impacts of 2.0, 2.5 and 3.0 J was possible in sample 2; poor results were obtained in the detection of impacts of less than 2 J. The detection limit of passive thermography for low energy impact in composites appeared to have been reached. Impacts were not readily detected on sample 1. No changes in temperature were visible on thin composites with a carbon layer (samples 4 and 5). Impacts on glass fibre and Tedlar surfaces were detected (samples 1, 2 and 3). From these experiments, passive thermography does not appear to be adequate to detect impacts on composites made of carbon fibres in a brittle epoxy resin matrix (samples 4 and 5). Infrared thermography achieved a 20 per cent POD, far less than that achieved by visual inspection. However, this technique needs to be further investigated with state-of-the-art equipment such as the Thermovision 900 Lock-in before a definitive conclusion can be made.

Low energy radiographic inspection of the Ecureuil helicopter rotor blade showed that detection of faults in a metal-composite junction produced artefacts. Poor defect sensitivity was achieved, which could be overcome by using filtering systems to reduce artefacts. The interpretation of radiographs for impact detection on composite panels was difficult and required magnification of the images in order to detect fibre breakage or honeycomb damage (samples 1, 2 and 3). It was impossible to tell which layer of the panel was damaged from honeycomb radiographs, as no information was available regarding the position of the honeycomb structure. Identifying faults on a radiograph of a honeycomb structure can be very time consuming and awkward. A solution would be to compare inspection results with a reference defect-free area using image processing facilities. The inclusion of fibres such as bronze, Nida, glass and Tedlar into composites facilitates X-ray inspection, allowing the detection of fibre deterioration rather than matrix deformation.
Girshovich et al.13 have shown that, even using an X-ray-sensitive dye penetrant, impacts were difficult to detect using X-rays. Radiographic examination is not a suitable technique for on-site inspection of composites and does not provide the operator with an easy signal interpretation display. In addition, a large investment is required to set up radiographic inspection and image enhancement systems. Eddy current inspection appears to be best suited for detection and quantification of disbonds and low energy impacts in composites. The technique is very easy to set up and
the calibration of the equipment can be stored and recalled when performing inspections on the same type of material. Disbonds were successfully detected on the titanium layer and stainless steel layer of the BV234 and Ecureuil helicopter rotor blades respectively. The eddy current inspection of the Ecureuil helicopter rotor blade presented several problems, which have been highlighted in section 5.6.2. The results of the eddy current inspection obtained with the carbon reinforced composite panels show the potential of eddy current testing for low energy impact detection. More experiments would have to be performed to fully appreciate the limitations and advantages of the system. The system can be fully automated for on-line manufacture inspection of composites. Moreover, eddy currents have proved to give very reproducible results compared to ultrasound or coin tapping. This makes the technique suitable for monitoring defect growth in composites by comparing test results (as computer images) of successive inspections over time. The colour-coded computer generated map facilitates signal interpretation and can provide extra information: defect dimensions such as the height and area of a disbond or the depth of an impact. The quantitative area information can be used to assess the structural integrity of a component. Another main advantage is that it is a fairly cheap technique compared to the high capital outlay required for infrared, X-ray or holographic inspection. Only impacts of 0.5 J were difficult to detect. Increasing the sensitivity of the instrument, although increasing the noise level, did enable defects of 0.5 J to be detected. Eddy currents were sensitive enough to detect damaged areas extending further than could be detected with the naked eye. The only limitation is that eddy current testing is limited to the inspection of materials made of carbon. The system is easy to operate and the visual format display of the information greatly facilitates signal interpretation. In addition to its low development cost, eddy current inspection presents other advantages:

• it is a non-contact method;
• it provides real-time visual information of the area inspected;
• results are reproducible;
• there are no radiation hazards;
• the inspection can be totally operator-independent;
• the development and running costs are extremely low compared to other methods.
The visual format display allows an operator with limited experience and only general knowledge of the type of material inspected to accurately locate and quantify the defects detected. Eddy current inspection coupled with ultrasound will certainly enhance the quality and efficiency of non-destructive examinations and improve the level of quality and safety required by the aerospace industry. Further improvements would be to develop a fully automated system for on-site and/or manufacture inspection, displaying 3-D results in real time, and to store calibration parameters according to material composition.

Bayesian inference reasoning was used for binary decision making, and demonstrated that data fusion on multiple sensors can reduce uncertainty, enhance the signal-to-noise ratio and improve decision making in regard to the presence or absence of a defect. The Bayesian approach described and implemented in this chapter concentrated on disbond detection, but it can be applied to any type of inspection. A different approach would be to implement Bayesian reasoning on sensor information from different instruments and for non-binary decision making. The main limitation of the approach is the large amount of data required in order to implement Bayes's rule and to define prior and conditional probabilities. These probabilities can be modelled using mathematical algorithms, but this approach is difficult when little or no knowledge of the material inspected is available.
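For the binary 'defect/no defect' case referred to above, the Bayesian update amounts to multiplying, across sensors, the likelihood of each reading under the two hypotheses. The sketch below is a minimal illustration, assuming each sensor is characterised by its POD (alarm given a defect) and its probability of false call (alarm given no defect); all numbers are hypothetical.

def fuse_binary(prior_defect, sensors, alarms):
    """Naive Bayes fusion of binary NDT sensor outputs.
    sensors: list of (pod, pfc) pairs characterising each sensor;
    alarms: list of True/False readings, one per sensor.
    Returns the posterior probability that a defect is present."""
    p_d, p_nd = prior_defect, 1.0 - prior_defect
    for (pod, pfc), alarm in zip(sensors, alarms):
        p_d *= pod if alarm else (1.0 - pod)
        p_nd *= pfc if alarm else (1.0 - pfc)
    return p_d / (p_d + p_nd)

# Two hypothetical sensors, both raising an alarm:
print(fuse_binary(0.5, [(0.9, 0.2), (0.8, 0.1)], [True, True]))  # ~0.973

Two mediocre sensors in agreement already push the posterior well above either sensor taken alone, which is the uncertainty reduction described above.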
Several advantages can be gained from using data fusion which will enhance the overall performance of an inspection. Data fusion can be very useful to NDT as, in the case of single or multiple sensor systems, it can support or refute inspection results or enhance signal information from a sensor. It can improve the efficiency and performance of an inspection by providing the inspector with additional colour-coded information that is easy to interpret. For remotely operated vehicle inspection, the combination of both eddy current and ultrasonic information from the same sample will provide the operator with a more complete representation of the actual structural integrity of a component. Data fusion will help in decision making by producing a probability map and/or giving a degree of belief of an inspection. The performance of the combined system is greater than that of any system taken separately. The fusion of information from different commercial eddy current instruments used for weld inspection has also been achieved and is presented in the next chapter.

References

1. Buynak CF, Moran TJ, Martin RW. Delamination and crack imaging in graphite-epoxy composites, 1989, Materials Evaluation, 47, 438-47.
2. Morlo H, Kunz J. Impact behaviour of loaded composites, 1990, Proceedings of the 4th Conference on Composite Materials, Development in Science and Technology of Composite Materials, Sept., Germany, 987-91.
3. Heida JH, Benker GJ, Verdegaal MA. Manufacturing and inspection of artificial delamination in composite materials, 1992, Non Destructive Testing 92, Elsevier Science Publishers, 882-5.
4. Solomos GP, Lucia AC. Delamination detection via holographic interferometry, 1992, Materials Engineering, 3(2), 341-9.
5. Wey AC, Kessler LW. Quantitative measurement of delamination area in low velocity impacted composites using acoustic microscopy, 1992, Review of Progress in Quantitative Non Destructive Examination, 11, Plenum Press, 1563-8.
6. Marshall AP, Bouadi H. Low-velocity impact damage on thick-section graphite epoxy laminated plates, Dec. 1993, Journal of Reinforced Plastics and Composites, 12, 1281-94.
7. Greszczuk LB, Chao H. Impact damage in graphite-fiber-reinforced composites, Proceedings of the 4th Conference on Composite Materials: Testing and Design, ASTM Pub., May 1976, 389-408.
8. Chester RJ, Clark G. Modelling of impact damage features in graphite/epoxy laminates, 1992, Damage Detection in Composite Materials, ASTM, 200-12.
9. Reed PE, Bevan L. Impact damage in a composite material, Aug. 1993, Polymer Composites, 14(4), 286-91.
10. Pezzoni R, Merletti L, Battagin G, Denis R. Ultrasonic inspection of impact induced damage in polymeric composite materials, Proceedings of the 18th European Rotorcraft Forum, Sept. 1992, 52.1-52.9.
11. Jones T, Berger H. Thermographic detection of impact damage in graphite-epoxy composites, Dec. 1992, Materials Evaluation, 50, 1446-53.
12. Frock BG, Martin RW, Moran TJ, Shimmin KD. Imaging of impact damage in composite materials, 1988, Review of Progress in Quantitative Non Destructive Examination, 7B, Plenum Press, 1093-9.
13. Girshovich S, Gottesman T, Rosenthal H, Drukker E, Steinberg Y. Impact damage assessment of composites, 1992, Damage Detection in Composite Materials, ASTM, 183-99.
14. Jones TS, Berger H. Application of NDI methods to composites, April 1989, Materials Evaluation, 47, 390-400.
15. Bar-Cohen Y. Nondestructive evaluation of fiber-reinforced composite materials - a review, March 1986, Materials Evaluation, 44, 446-54.
16. Hawkins GF, Sheaffer PM, Johnson EC. NDE of thick composites in the aerospace industry - an overview, 1991, Review of Progress in Quantitative Non Destructive Examination, 10B, 1591-7.
17. Mohindru JI, Murthy CRL. Detection of bond defects in a helicopter rotor blade by acoustic impact technique, 1992, Non Destructive Testing 92, Elsevier Science, 13-17.
18. Adams RD, Cawley P. Sensitivity of the coin tap method of NDT, May 1989, Materials Evaluation, 47(5), 558-63.
19. Frock BG, Martin RW. Digital image enhancement for ultrasonic imaging of defects in composite materials, April 1989, Materials Evaluation, 47, 442-7.
20. Martin BG. Ultrasonic attenuation due to voids in fiber reinforced plastics, 1976, Non Destructive Testing International, 9(5), 242-6.
21. Sachse W. Towards a quantitative ultrasonic NDE of thick composites, 1991, Review of Progress in Quantitative Non Destructive Examination, 10B, 1575-82.
22. Steiner KV. Defect classification in composites using ultrasonic NDE techniques, 1992, Damage Detection in Composite Materials, ASTM, 72-84.
23. Dewhurst RJ, He R, Shan Q. Defect visualisation in carbon fiber composite using laser ultrasound, August 1993, Materials Evaluation, 51, 935-40.
24. Monchalin JP. Laser-ultrasonics, 1992, Flight-vehicle Materials, Structures, and Dynamics - Assessment and Future Directions, 4, ASME, chapter 4.
25. Jones TS. Inspection of composites using the automated ultrasonic scanning system (AUSS), May 1985, Materials Evaluation, 43, 746-53.
26. Cielo P, Maldague X, Deom AA, Lewak R. Thermographic nondestructive evaluation of industrial materials and structures, April 1987, Materials Evaluation, 45, 452-60.
27. Vavilov VP, DeGiovanni A, Didierjean S, Maillet D, Sengoulier AA, Houlbert AS. Thermal flaw detection and tomography of CFRP articles, Sept. 1991, Soviet Journal of Non Destructive Testing, 27(9), 609-19.
28. Hobbs CP. The inspection of aeronautical structures using transient thermography, Feb. 1992, Non Destructive Testing for Corrosion in Aerospace Structures, Proceedings of the Royal Aeronautical Society, 6.1-6.10.
29. Maldague XPV. Nondestructive examination of materials by infrared thermography, 1993, Springer Verlag.
30. Gros XE. Eddy current inspection of composite materials, NDT Summary Report from Eurocopter Collaboration, 1994.
31. Sheldon WH. Comparative evaluation of potential NDE techniques for inspection of advanced composite structures, Feb. 1978, Materials Evaluation, 41-6.
32. Bathias C, Cagnasso A. Application of X-ray tomography to the NDT of high-performance polymer composites, 1992, Damage Detection in Composite Materials, ASTM, 35-54.
33. Hastings KP. Real-time X-ray technology gains a firm foothold in aerospace composite component inspection, May 1994, Insight, 36(5), 314-15.
34. Hagemaier DJ, Fassbender RH. Ultrasonic inspection of carbon-epoxy composites, April 1985, Materials Evaluation, 43, 556-80.
35. Hashimoto M, Nakamma H, Sugiura T, Miya K. Eddy current testing of graphite material, Proceedings of the 9th International Conference on Non Destructive Examination in the Nuclear Industry, Tokyo, Japan, ASMI, April 1988, 385-9.
36. Vernon SN, Gammell PM. Eddy current inspection of broken fiber flaws in nonmetallic fiber composites, 1985, Review of Progress in Quantitative Non Destructive Examination, 4B, Plenum Press, 1229-37.
37. Fitzpatrick GL, Thome DK, Skaugset RL, Shih EYC, Shih NCL. Magneto-optic/eddy current imaging of ageing aircraft: a new NDI technique, Dec. 1993, Materials Evaluation, 51, 1402-7.
38. Owston CN. Eddy current methods for the examination of carbon fiber reinforced epoxy resins, Nov. 1976, Materials Evaluation, 34(11), 237-44.
39. Valleau AR. Eddy current nondestructive testing of graphite composite materials, Feb. 1990, Materials Evaluation, 48, 230-9.
40. Lowden DW, Gros XE, Strachan P. Visualising defect geometry in composite materials, Proceedings of the International Symposium 'Advanced Materials for Lightweight Structures', ESTEC, Noordwijk, The Netherlands, March 1994, 683-6.
41. Gros XE, Lowden DW. Electromagnetic testing of composite materials, April 1995, Insight, 37(4), 290-3.
42. Lowden DW. Quantifying disbond area, Proceedings of the International Symposium 'Advanced Materials for Lightweight Structures', ESTEC, Noordwijk, The Netherlands, March 1992, 223-8.
43. Newman JW. Production and field inspection of composite aerospace structures with advanced shearography, 1991, Review of Progress in Quantitative Non Destructive Examination, 10B, 2123-33.
44. Newman JW. Advanced shearography aviation applications, 1990, Air Transport Association Non Destructive Testing Forum, Montreal.
45. Barbier P, Blondet P. Using NDT techniques in the maintenance of aeronautical products, 1992, Aerospatiale France, report No. 93-11587/1/GAL.
46. Hovey PW, Berens AP. Statistical evaluation of NDE reliability in the aerospace industry, 1988, Review of Progress in Quantitative Non Destructive Examination, 7B, Williamsburg, VA, USA, 22-26 June 1987, Plenum Press.
47. Berens AP, Hovey PW. Evaluation of NDE reliability characterization, Dec. 1981, AFWAL-TR-81-4160, Vol. 1, Air Force Wright-Aero. Lab., Wright-Patterson Air Force Base.
48. Nockemann C, Tillack GR, Wessel H, Hobbs C, Konchina V. Performance demonstration in NDT by statistical methods: ROC and POD for ultrasonic and radiographic testing, 6th European Conference on Non Destructive Testing, Nice, France, 24-28 Oct. 1994, 37-44.
49. Boogaard J, Van Dijk GM. NDT reliability and product quality, 1993, Non Destructive Testing and Evaluation International, 26(3), 149-55.
50. The impossible comes into view, October 1994, Eureka Transfers Technology, 40-3.
6 Application of NDT Data Fusion to Weld Inspection

...the consequences of decision-making can also have numbers attached to them, and these two sets of numbers combined to solve the problem and determine the best decision
Lindley, 1971
6.1 Introduction
Inspection results from two or more NDT systems used for the examination of the same specimen are usually different, sometimes in disagreement, even if related to the same defect. In the case of two contradictory results, the NDT inspector has no reason to favour one instrument over another, and more inspections may be required using a third system to confirm or refute the presence or size of a defect. In the worst cases, decisions could be made resulting in unnecessary and expensive repairs, or no action taken where a major fault has been detected but wrongly characterised. Uncertainty and errors in signal interpretation are factors which may cause problems in decision making. The information from NDT sensors can be combined to help in decision making. Both a theoretical approach to, and experimental results from, the application of NDT data fusion to weld inspection are presented in this chapter. Non-destructive examination of welds on plates, pipes and T-joints with induced defects, performed using two commercial eddy current systems, is described. A parallel multisensor approach using Bayesian statistical theory and Dempster-Shafer evidential reasoning was used to combine defect information such as depth and length from up to five NDT sensors. Fusion of eddy current, ultrasonic and radiographic data from a surface breaking defect is also presented. In addition, NDT data fusion at pixel level is introduced. The performances of these three data fusion processes applied to NDT are discussed. The following procedure was adopted:

• selection of weld samples
• selection of NDT systems
• sample inspection, defect detection and sizing
• assessment of experimental results for each system
• data processing
• data visualisation
• data fusion
• decision making
• comparison and appraisal of data fusion strategies.
Prior to eddy current and ultrasonic inspections, each weld sample should be subject to visual examination, dye penetrant inspection and magnetic particle inspection (MPI). If no flaws are detected with visual and dye penetrant inspections, the crack length measurements performed with MPI should be used as reference.

6.2 Weld samples
The performance of each sensor, in relation to a particular application, needs to be established prior to fusion in order to assign a weight of evidence or degree of efficiency to the sensor. It is only by knowing the limits of each sensor that meaningful information, with a high confidence level, can be extracted. In order to do so, data have to be collected from the inspection of weld samples made of material similar to the one to be inspected in the field, with defects of known dimension and location. Because a weld is itself a defect, its inspection needs to be carried out carefully. Figure 6.1 shows a cross-section of a typical plate with a centreline weld. It is usually good practice to cut three calibration slots into a butt welded plate sample in order to calibrate the NDT equipment prior to inspection (Table 6.1). Calibration slots can be in the parent metal, as it is highly unlikely that cracking occurs in this region due to external loading; the most important defects that do occur are fatigue cracks at the weld toe regions.1 Access to a library of weld sample geometries with different defect types, such as a toe crack, a root crack and a HAZ crack (Fig. 6.2), is required to define the prior probabilities which will be used in data fusion.

Fig. 6.1 Cross-section of a typical plate with a centreline weld (weld, heat affected zone (HAZ) and parent metal)

Table 6.1 Typical dimensions of calibration slots on a butt welded plate

Calibration slot    Length/mm    Depth/mm
A                   40.28        1.96
B                   19.81        1.00
C                   10.23        0.53
Fig. 6.2 Cross-section of a butt welded plate with artificial flaws (toe crack, HAZ crack and root crack)
6.3 Non-destructive examination of the test specimens
In addition to visual examination, dye penetrant inspection and MPI, the weld samples were inspected using eddy current and ultrasound. The inspection results for these two methods follow.

6.3.1 EDDY CURRENT TESTING

Two commercial eddy current systems, the Millstrong Lizard Topscan and the Hocking Phasec 1.1, were used for inspection of the weld samples. For commercial reasons these systems will not be referred to in the text by their respective names but as system A and system B. Each sample was inspected and each defect detected was repeatedly sized (depth, length and location were recorded) using each system in order to gather real data for the fusion phase. It should be noted that in a real on-site inspection, only one numerical datum from each instrument may be available and would be necessary for fusion. Of the 27 defects present in our experiments, six were not detected by system B. In such an event, no fusion of information is possible as only system A provided information. The undetected defects included a root crack, an intermittent root crack and a HAZ crack.

Estimation of sensor efficiency

The inspection of calibration slots was performed in order to build a database of the efficiency of each sensor in detecting a specific defect. As expected, the readings from each instrument on the same defect were different. The graphs of estimated calibration slot depth and length against actual values for systems A and B are shown in Figs 6.3-6.6.
Fig. 6.3 Plot of the average estimated depth (and standard error) vs. actual depth for repeated sizings of three calibration slots with eddy current system A
Fig. 6.4 Plot of the average estimated depth (and standard error) vs. actual depth for repeated sizings of three calibration slots with eddy current system B
Fig. 6.5 Plot of the average estimated length (and standard error) vs. actual length for repeated sizings of three calibration slots with eddy current system A
Fig. 6.6 Plot of the average estimated length (and standard error) vs. actual length for repeated sizings of three calibration slots with eddy current system B
One can see that the measurements with system A appear very scattered and that the deeper the defect, the more scattered the data and the higher the standard error associated with them. For both systems, the deeper the calibration slot, the wider the spread of estimated depth. The lowest error is 0.22 mm, associated with the depth of 0.57 mm (Fig. 6.7). From analysis of estimated length against actual length of the calibration slots, it can be said that system A provides very accurate measurements compared to system B. The small standard errors associated with each value also show the high accuracy of system A over system B. System B appears to overestimate defect length; this may be due to the design of the system, which does not provide the user with an easily identifiable reference point.
Fig. 6.7 Diagram showing increasing error with increasing defect depth for measurements with system A (actual depth vs. estimated depth/mm)
The automatic calculation by the instrument of the length and depth measurements was not found to be sufficiently accurate. There is such a disparity in the estimation of defect length between system A and system B that little improvement is expected from the data fusion of length measurements. System A will always have a higher weight than B associated with it, and information from system B will therefore be discarded. For this reason, fusion of depth measurements will be investigated in more detail than that of length. Moreover, in relation to fracture mechanics estimates of structural integrity, the depth is an important parameter to measure accurately when performing an inspection, as the inspector is required to calculate the degree of penetration of the defect through the material and the remaining undamaged thickness, essential factors affecting strength.

Gaussian normal distribution

From repeated measurements performed on calibration slots, the Gaussian normal distribution associated with an NDT system is assumed and estimated from the sample mean $\bar{x}$ and the standard deviation $\sigma$.2 The normal probability density function $f(x)$ is given by

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x-\bar{x})^2}{2\sigma^2}\right) \quad \text{for } -\infty < x < +\infty \qquad (6.1)$$
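Equation (6.1) only requires the sample mean and standard deviation of the repeated sizings. A minimal sketch (the list of sizings is hypothetical, not the recorded calibration data):

import math

def normal_pdf(x, mean, std):
    """Normal probability density function of equation (6.1)."""
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Hypothetical repeated depth sizings of one calibration slot (mm):
sizings = [1.89, 2.05, 1.96, 2.10, 1.85, 2.00]
mean = sum(sizings) / len(sizings)
std = math.sqrt(sum((s - mean) ** 2 for s in sizings) / (len(sizings) - 1))
print(f'mean = {mean:.2f} mm, std = {std:.2f} mm, f(mean) = {normal_pdf(mean, mean, std):.2f}')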
The Gaussian normal distributions of the length and depth estimates from both systems for the calibration slot 40.28 mm long and 1.96 mm deep are given in Fig. 6.8. One can see from these graphs that both systems are in agreement regarding the defect depth. It also appears that system B has more support than system A for the depth estimation. Regarding defect length, the systems appear to be in disagreement, with system A presenting a high and narrow probability density function (therefore with a smaller standard deviation) and system B showing much lower support regarding the defect length. Prior to data fusion, and without the use of MPI, no conclusion can be made regarding which system gives the best length estimate. It can be seen from the graph that system A will always have the strongest probability function regardless of the accuracy of system B.
Fig. 6.8 Normal probability density function for systems A and B for a 40.28 mm × 1.96 mm calibration slot
The normal probability distribution graphs for the depth of the calibration slots can be transformed into the standard normal distribution by means of the equation

$$z = \frac{x - \bar{x}}{\sigma} \qquad (6.2)$$

and the function $\Phi(z)$ can be plotted:

$$\Phi(z) = \frac{1}{\sqrt{2\pi}} \exp\left(\frac{-z^2}{2}\right) \qquad (6.3)$$
This transformation has the effect of centring the normal distribution at zero. The reason for this transformation is that the probability associated with any defect depth can be calculated using Z values and statistical tables. The graphs of the normal probability density function confirm the previous observations: as the defect depth decreases, system B becomes more precise than system A in terms of standard error and probability. The relative probabilities associated with each system for different defect depths are summarised in Table 6.2. It can be seen that the probabilities for each system increase as the defect depth decreases, although the probability for system B increases much faster than that for system A. Therefore, more belief is given to the measurements of small shallow defects sized with system B.

Table 6.2 Probability values for systems A and B for different depth measurements

Defect depth/mm    Probability, system A    Probability, system B
2.00               0.10                     0.14
1.00               0.14                     0.22
0.50               0.18                     0.40
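In code, the statistical-table lookup associated with a Z value is the cumulative probability of the standard normal distribution, which follows from the error function. A minimal sketch, with a hypothetical sensor standard deviation:

import math

def standard_normal_cdf(z):
    """Cumulative probability of the standard normal distribution at z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical: probability that a depth reading falls within +/- 0.2 mm
# of the true 1.96 mm slot depth, for a sensor with std = 0.35 mm.
mean, std = 1.96, 0.35
z_lo = (1.76 - mean) / std    # equation (6.2)
z_hi = (2.16 - mean) / std
print(standard_normal_cdf(z_hi) - standard_normal_cdf(z_lo))  # ~0.43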
The χ² test

A χ² (chi-squared) test was performed in order to estimate the degree to which the signal output from eddy current NDT systems A and B could be modelled using a Gaussian normal distribution.3 The test was performed on the data collected from inspection of the calibration slots, and the resulting graphs are shown in Fig. 6.9. It appears from this test that the Gaussian normal distribution is a good approximation for modelling sensor output and the probability associated with it. The χ² test depends upon the characteristics of the sample distribution; it assesses the difference between measured and expected values. The comparison between the measured and expected values is made by calculating the χ² statistic, given by

$$\chi^2 = \sum_{i=1}^{k} \frac{(m_i - e_i)^2}{e_i}$$

where $m_i$ and $e_i$ are the measured and expected values respectively.4 With large samples, the χ² statistic approximates to a continuous χ² distribution. Comparison between the χ² statistic and a χ² distribution provides a measure of the probability of the distribution of differences between measured and expected values.
Fig. 6.9 Chi-squared test on calibration slots for depths of 1.96 and 1.00 mm for system A
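The χ² statistic itself is a one-line sum over the frequency bins. A minimal sketch with invented measured and expected counts (not the data behind Fig. 6.9):

def chi_squared(measured, expected):
    """Chi-squared statistic over k bins: sum of (m_i - e_i)^2 / e_i."""
    return sum((m - e) ** 2 / e for m, e in zip(measured, expected))

# Hypothetical frequency counts of depth readings per bin, against the
# counts expected under a fitted normal distribution:
measured = [2, 5, 9, 6, 3]
expected = [2.5, 5.5, 8.0, 5.5, 3.5]
print(f'chi-squared = {chi_squared(measured, expected):.3f}')
# The statistic is then compared against a chi-squared distribution with the
# appropriate number of degrees of freedom to obtain a probability.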
A χ² distribution is an asymmetrical continuous distribution which behaves as a probability distribution. The plots of the measured and expected frequency distributions of depth values for two calibration slots are shown in Fig. 6.9. The area under the curve represents the relative frequency with which specific values of χ² occur. The χ² statistic is a cumulative statistic: once χ² has been calculated, it is compared to the χ² distribution. This is known as the χ² test, and it is used here to examine the hypothesis that a distribution is normal. From the χ² test it was estimated that the distribution of depth measurements can be considered to approximate to a Gaussian normal distribution at 72 per cent and 88 per cent for the 1.96 mm and 1.00 mm deep calibration slots respectively. More data would be required to make a definitive statement, but with limited measurements the Gaussian normal distribution appears to be a good approximation. From the χ² test applied to the lengths measured on the calibration slots, a 78 per cent degree of confidence that all three calibration slots can be modelled using a Gaussian normal distribution was achieved; the distribution of length measurements can therefore be considered to approximate a Gaussian normal distribution. The χ² test shows that for the 0.53 mm deep calibration slot, the Gaussian normal estimate is correct at 76 per cent, which reflects the small spread of measured values. On the other hand, the measurements on the 1.96 and 1.00 mm deep slots cannot be correctly estimated as a Gaussian normal distribution. System B appears to be more accurate than system A for measurements of shallow defects compared to deeper ones. It can be said neither that approximating the depth measurements with a Gaussian normal distribution is incorrect, nor that it is correct; at this stage, this approximation will be made in order to combine information from both systems in a similar manner. The χ² test performed on the length measurements for system B showed that the Gaussian normal distribution was correct up to 40 per cent. Because system B does not give consistent measurements of defect length, this can explain the low values of correctness compared to system A. In general terms, and in order to implement NDT data
Non-destructive examination of the test specimens 135 fusion, the measurements for these two systems can be assumed to be normally distributed. The Kolmogorov-Smirnov test In addition to the %2 test, a Kolmogorov-Smirnov test was performed to determine how well the estimated measurements of the depth and length of the calibration slots fit a normal distribution for systems A and B. Unlike the %2 test> ^ e Kolmogorov-Smirnov test is most suited to estimate the goodness-of-fit of a small number of samples. It tests each datum individually and gives the degree of agreement between the distribution of the estimated values and the normal distribution.5 The Kolmogorov-Smirnov test compares the estimated cumulative distribution function for a variable with a specified distribution which may be normal. The test statistic is computed (using a statistical analysis and data management software such as SPSS) from the largest difference between the estimated and expected distribution functions. The results of the Kolmogorov-Smirnov test are summarised in Tables 6.3 and 6.4. Plots of the expected normal values against the observed values are in Figs 6.10 and 6.11. From Table 6.3, one can see that the distribution of depth measurement can be considered to approximate to a normal distribution for systems A and B. The P values from the Kolmogorov-Smirnov test below 0.05 imply that a data set is not normally distributed, while P values above 0.05 imply that a data set fits a normal distribution. In our case, all P values are above the 0.05 threshold of non-normality, therefore there is no evidence that the data from systems A and B are not normally distributed. The P values below 0.05 presented in the system A column of Table 6.4 show that the distribution of the Table 6.3 P values from the KolmogorovSmirnov test for depth measurements on calibration slots measured by systems A and B P values Actual depth/mm
Systemi A
System B
1.96 1.00 0.53
0.97 0.57 0.20
0.75 0.54 0.12
Table 6.4 P values from the KolmogorovSmirnov test for length measurements on calibration slots measured by systems A and B P values Actual length/mm
System A
System B
40.28 19.81 10.23
0.04 0.02 0.31
0.80 0.31 0.40
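The same test is available in standard statistical libraries; a minimal sketch, assuming SciPy is available and using hypothetical sizings:

import statistics
from scipy import stats

# Hypothetical repeated depth sizings of the 1.96 mm slot (mm):
sizings = [1.89, 2.05, 1.96, 2.10, 1.85, 2.00, 1.93, 2.02]
mean, std = statistics.mean(sizings), statistics.stdev(sizings)

# One-sample Kolmogorov-Smirnov test against the fitted normal distribution;
# a P value above 0.05 gives no evidence of non-normality.
statistic, p_value = stats.kstest(sizings, 'norm', args=(mean, std))
print(f'D = {statistic:.3f}, P = {p_value:.3f}')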
Fig. 6.10 Normal plots of the expected values against the observed values for depth measurements on calibration slots 1.96 (top), 1.00 (middle) and 0.53 mm deep (bottom) for systems A and B
This is due to the lack of precision of the length measurements performed with system A, which leads to a grouping of results through rounding. In this particular case the χ² test is more appropriate for estimating the degree of normality of the set of sample values. However, there is no evidence that the distribution of the estimated length measurements from system B does not fit a normal distribution. In general it can be said that the Kolmogorov-Smirnov test supports the possibility that the spread of the measurements from NDT systems A and B does follow a normal distribution.
Fig. 6.11 Normal plots of the expected values against the observed values for length measurements on calibration slots 40.28 (top), 19.81 (middle) and 10.23 mm long (bottom) for systems A and B
Curve fitting

In order to apply the Bayesian and Dempster-Shafer theories to data from NDT systems A and B, standard deviation values for any defect depth and length have to be determined. From the data collected during the inspection of the calibration slots, a mathematical function modelling the expected standard deviations of both systems was calculated. A curve fitting operation using a function of the form $y = ax^2 + bx + c$ was selected. A standard deviation can thus be associated with any depth measurement and, using Gaussian normal distribution tables, the probability associated with this measurement can be calculated. This information is necessary in order to implement the fusion phase described in section 6.4.
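A least-squares polynomial fit reproduces this step; the sketch below uses invented (depth, standard deviation) pairs for the three calibration slots, not the recorded ones:

import numpy as np

# Hypothetical (actual depth, standard deviation of repeated sizings), in mm:
depths = np.array([0.53, 1.00, 1.96])
stds = np.array([0.20, 0.35, 0.60])

# Least-squares fit of y = a*x^2 + b*x + c through the calibration points.
a, b, c = np.polyfit(depths, stds, deg=2)

def expected_std(depth):
    """Expected standard deviation for an arbitrary defect depth."""
    return a * depth ** 2 + b * depth + c

print(f'expected std at 1.50 mm: {expected_std(1.50):.3f} mm')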
6.3.2 ULTRASONIC EXAMINATION

Ultrasonic examination was performed on two ferritic samples using a Staveley Sonic 136 ultrasonic system and 45°, 60° and 70° probes. Three defects were studied: a slag inclusion and a lack of side wall fusion (LOSWF) on a plate, and a toe crack on a pipe.

Fig. 6.12 Normal probability density function for ultrasonic inspections (toe crack P3075; lack of side wall fusion PL3071; slag inclusion PL3071)
Each detected defect was repeatedly sized using the 6 dB drop technique, and the average length estimates for multiple probe angles are summarised in Table 6.5. A Gaussian normal distribution, similar to the one described for the eddy current signals, can be computed; the Gaussian normal distribution graphs related to these inspections are shown in Fig. 6.12. One can see that although the average defect lengths from different probe angles are in relative agreement for the slag inclusion and the LOSWF, the measurements are very different from the actual length given by the manufacturer. However, the length of the surface breaking crack estimated with a 60° probe appears to be close to the value estimated by the manufacturer, although this greatly depends on the sizing method used. A χ² test was performed for each probe angle, and the probe whose sensor output was best modelled by a Gaussian normal distribution was selected for data fusion. For data fusion purposes, since it is intended to combine information from eddy current and ultrasonic data, the angle probe with the highest probability of correctness for toe crack measurement (i.e. the 45° probe) was chosen. Because of the small set of measurements from the ultrasonic inspections, a Kolmogorov-Smirnov test was performed in addition to the χ² test to determine whether these measurements fit a normal distribution. The P values from the Kolmogorov-Smirnov test are summarised in Table 6.6. They are all above the 0.05 threshold of non-normality; therefore there is no evidence that the measurements from the ultrasonic probes do not follow a normal distribution.

Table 6.5 Results of ultrasonic inspections

Defect type       Actual length/mm    Probe angle/degrees    Estimated length/mm
Slag inclusion    22                  45                     31.75 ± 1.45
Slag inclusion    22                  60                     34.80 ± 1.82
Slag inclusion    22                  70                     32.15 ± 1.76
LOSWF             19                  45                     26.30 ± 2.00
LOSWF             19                  70                     23.20 ± 2.67
Toe crack         29                  45                     32.35 ± 2.92
Toe crack         29                  60                     29.90 ± 2.71

Table 6.6 P values from the Kolmogorov-Smirnov test for different angle probes and defect types

Defect type       Angle of probe    P value
Slag inclusion    45                0.24
Slag inclusion    60                0.24
Slag inclusion    70                0.32
LOSWF             45                0.69
LOSWF             70                0.24
Toe crack         45                0.96
Toe crack         60                0.86
6.3.3 X-RAY RADIOGRAPHIC INSPECTION

X-ray radiographic inspection was carried out on three weld samples, two plates and one pipe, using a Scanray CPA 150 kV constant potential X-ray unit. The single wall, single image technique was used for each sample, with the film placed on the root side and the X-ray source placed on the cap side for both plates. For the pipe sample, the X-ray source was placed on the inside surface against the opposite wall, with the film placed on the outside (cap). Results of the radiographic inspection are summarised in Table 6.7 and can be seen in Figs 6.13 and 6.14. Because of its orientation, and because it is not a volumetric defect, the toe crack of a plate sample was almost invisible to X-rays (even with the film placed on the cap side of the weld) and would not have been detected in a standard radiographic examination. The length estimation of the volumetric defect (LOSWF) and of the toe crack on the pipe is greatly dependent on the inspection method (i.e. ultrasound, eddy current or radiography) and the sizing technique used (i.e. 6 dB or 20 dB drop method). Because of the limited number of radiographic examinations performed, it was difficult to model a normal distribution for defects detected by radiography. Despite this restriction, and because of the different physical properties of this NDT method, information from radiographic examination has been combined with eddy current and ultrasonic data using both the Bayesian and the Dempster-Shafer approaches.

Table 6.7 Results of X-ray radiographic inspection of weld samples

Defect type    Defect length on radiograph/mm    Defect length after correction/mm
Toe crack      10.8                              10.4
LOSWF          27.0                              25.9
Toe crack      28.0                              26.9
Fig. 6.13 Positive radiographic image of a toe crack
Fig. 6.14 Radiography of a lack of side wall fusion
6.4 NDT data fusion
Parallel multisensor fusion systems are well suited to gathering information from identical or dissimilar sensors to provide knowledge about the same measurand. The raw signals from each sensor are processed in order to obtain an identical format suitable for fusion. The data fusion centre gathers the information from multiple sensors and outputs information on features such as defect length and depth. These features are then integrated and a general fused feature generated. Fused features are fed into the decision centre, which applies inference rules and estimates a degree of certainty about a defect to help in decision making. Six stages can be identified in the inspection and data fusion phases (Fig. 6.15). Before developing a fusion procedure and applying inference rules, the format and class of information provided by the NDT sensors have to be clearly defined. Considering the case of two different NDT sensors (from two different NDT instruments), two features can be identified. The first is concerned with conflicting information such as 'defect' or 'no defect' detection. In this case we are dealing with a simple binary problem, for which the output of fusing information is shown in Table 2.2. This binary approach can also be used to add extra information at the pixel level of data fusion (e.g. by combining surface defects detected with eddy current and internal defects detected with ultrasound). The second is a defect characteristic estimation feature, which may occur when measuring two different sizes of the same defect (e.g. depth or length). In the approach described in this chapter, data fusion was first implemented using a Bayesian statistical process similar to the one described in chapter 5. The Bayesian and Dempster-Shafer approaches to data fusion described in sections 6.4.1-6.4.7 are concerned with combining quantitative defect information such as depth and length. The binary case 'defect/no defect' has already been implemented in chapter 5 with the inspection of a helicopter rotor blade, and will therefore not be described in this chapter.6 The combination of qualitative information at pixel level has been implemented and is discussed in section 6.4.6.
Fig. 6.15 General flow diagram for NDT data fusion (non-destructive examination; data acquisition; data processing; data visualisation; probabilistic and pixel-level data fusion; signal interpretation; defect characterisation; decision)

If a hard decision fusion system were considered, based only on the probability associated with each measurement, the threshold decision criterion would favour the sensor output with the highest probability.4 Hard decision systems are those which provide the operator with a single hypothesis decision (Fig. 6.16); data below the threshold are ignored. With the Bayesian and Dempster-Shafer approaches, both sets of data are combined, regardless of their associated probability, and the decision is then performed on the final fused result. Soft decision systems provide information in the form of intervals representing different confidence levels about a hypothesis (Fig. 6.17). A measure of the belief and uncertainty associated with a hypothesis characterises a soft decision output. Although the combination of information using the Bayesian approach can be described as a soft decision operation, the final output is a hard decision probability applied to a single parameter.
Fig. 6.16 Hard decision design system (each sensor outputs a hard decision to a threshold decision centre, which delivers the final hard decision)
Fig. 6.17 Soft decision design system (each sensor outputs a soft decision to a data fusion algorithm, which delivers an interval decision output)

On the other hand, the Dempster-Shafer approach generates an interval for the measurand and provides belief and uncertainty values associated with this interval. It can be seen in Fig. 6.18 that with a hard decision system, both signals fall below a certain threshold, are ignored, and no decision can be performed; with a soft decision approach both signals are combined before a decision is made.

6.4.1 ASSOCIATION OF EDDY CURRENT DATA WITH THE BAYESIAN THEORY

Depth measurements

In an actual on-site inspection only one measurement would be performed; therefore, in the experiments carried out, each measurement can be considered as an individual measurement. A degree of belief can be associated with each measurement, either from a database using the Gaussian normal distribution and Z values previously calculated, or simply by estimating a prior probability value. The Bayesian fusion inference process can be applied to make inference on a hypothesis7 by either:
0.25
considering a single measurement from a unique NDT system (or sensor) and making inference on the estimated measurement (Fig. 6.19, top), or (a)
Decision Threshold
c
7
0.2
1 — System 1
System 1
System 2 System 2
Fused Data
0.15
/
f
/
/
/
'
'
' X Nv \
\ vv
\
/'•' V*
0.05
'-^
1—
0.5
1—^-
1
1 v ** -
1
1.5 0.5
Defect depth / mm
1
Defect depth / mm
Fig. 6.18 Comparison between hard (a) and soft (b) decision outputs
1.5
144 Application ofNDT data fusion to weld inspection
Sensor 1
Bayesian Inference
Datal DA
Sensor 1
Datal
—»
Technique
*• Bayesian Inference DA — •
Technique Sensor 2
Decision DA or-.DA
Decision DA or -iDA orDBor-DB
Data 2 DB
Fig. 6.19 Flow diagram of Bayesian inference fusion approach for one measurement from a single sensor (top) and multiple measurements from two sensors (bottom)
•
combining two (or more) distinct measurements from two (or more) NDT systems and making inference in favour of one (or more) measurement (Fig. 6.19, bottom).
Case of one sensor

Figure 6.20 is a schematic example of the Bayesian calculation for one depth measurement estimated by a single sensor. The key to the abbreviations used can be found in Table 6.8.
Fig. 6.20 General diagram of the Bayesian approach for the probability of one depth measurement estimated by one eddy current sensor (e.g. system A): priors p(CD) = 0.583, p(¬CD) = 0.417; conditionals p(DA/CD) = 0.350, p(¬DA/CD) = 0.650, p(DA/¬CD) = 0.250, p(¬DA/¬CD) = 0.750; joint probabilities p(DA∧CD) = 0.204, p(¬DA∧CD) = 0.379, p(DA∧¬CD) = 0.104, p(¬DA∧¬CD) = 0.313; posteriors p(CD/DA) = 0.662, p(¬CD/DA) = 0.338, p(CD/¬DA) = 0.548, p(¬CD/¬DA) = 0.452
Table 6.8 Key to abbreviations used in this chapter

Abbreviation      Key
CD/CL             Correct depth/correct length
¬CD/¬CL           Incorrect depth/incorrect length
p(CD)             Probability of correct depth
p(CD)A            Probability of correct depth for system A
p(¬CD)            Probability of incorrect depth
DA                Estimated depth with system A
DB                Estimated depth with system B
p(DA)             Probability of having DA
p(CD/DA)          Probability that the correct depth is DA
p(DA/CD)          Probability of having DA given the correct depth
p(CD/DA∧DB)A      Probability that the correct depth is DA from the fusion of DA and DB
It can be seen from Fig. 6.20 that the probability of having the correct depth for one measurement has increased by 14 per cent from the original prior probability, but no further information about the uncertainty of this measurement is given. The probability that the correct depth is not the estimated value is 0.548. Therefore, greater confidence is given to the depth measurement being correct than incorrect. The higher the prior unconditional probability, the higher the posterior probability. As an example, if the estimated depth with system A is 1.89 mm, a probability of 75 per cent is associated with the fact that the depth from system A is correct, but no uncertainty or depth range is given: a single hard decision number is associated with the estimated depth. The probabilities on measurements for up to five NDT systems can be found in Table 6.9. Systems A and B are the two eddy current systems previously described and used in the experiments; systems C, D and E are virtual systems whose probabilities have been randomly generated. The graphs in Fig. 6.21 show that, for a given value of p(DA/CD), as the probability p(DA/¬CD) decreases, the probability of having the correct depth (p(CD/DA) and p(CD/DB)) increases. This is due to the fact that there is no evidence towards the estimated depth being incorrect (i.e. p(DA/¬CD) is low); thus p(CD/DA) reaches unity, as there is a 100 per cent chance that the estimated depth is correct. As p(DA/¬CD) approaches zero, p(CD/DA) reaches 100 per cent more quickly, irrespective of the value of the prior probability.

Table 6.9 Probabilities associated with depth measurements for five NDT systems

NDT system    p(CD)    p(D/CD)    p(D/¬CD)
A             0.583    0.350      0.250
B             0.667    0.400      0.200
C             0.622    0.380      0.300
D             0.710    0.420      0.220
E             0.458    0.330      0.180
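The single-sensor calculation behind Fig. 6.20 is a direct application of Bayes's rule, and can be checked in a few lines using the Fig. 6.20 values for system A:

def posterior_correct_depth(p_cd, p_da_given_cd, p_da_given_not_cd):
    """Bayes's rule: probability that the estimated depth is correct,
    given that the sensor reported the depth DA."""
    num = p_cd * p_da_given_cd
    den = num + (1.0 - p_cd) * p_da_given_not_cd
    return num / den

# Values from Fig. 6.20 for eddy current system A:
print(posterior_correct_depth(0.583, 0.350, 0.250))  # ~0.662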
Fig. 6.21 Plots of Bayesian posterior probability p(CD/DA) vs. p(DA/CD) for increasing values of p(DA/¬CD) for eddy current systems A and B
The graphs in Fig. 6.22 also show that the higher the probability of having the correct depth for a measurement, the higher the posterior probability p(CD/DA). These graphs plot equation (5.2) for different values of p(CD), illustrating the benefit of having a high prior probability, which is representative of the sensor efficiency. The greater the sensor efficiency, the better the posterior probability. This is normal for any system: as the belief in and accuracy of a system increase, there is a greater chance of obtaining a correct depth estimation. For given values of p(DA/CD) and p(DA/¬CD), one can plot the graphs shown in Fig. 6.23 of the posterior probability p(CD/DA) against the probability of correct depth. From this graph, the output of the Bayesian operation for systems A and B, for any particular sensor efficiency, can be determined graphically. Following the previous example, if the estimated depth is DA = 1.89 mm, its associated probability is p(CD)A = 0.678, and graphically the posterior probability is p(CD/1.89)A = 0.75. The graphs in Fig. 6.24 show that p(CD/DA) increases non-linearly with p(DA/CD), and that low values of p(DA/CD) affect system performance even for high prior probability values.
Fig. 6.22 Plots of Bayesian posterior probability p(CD/DA) vs. p(DA/CD) for increasing values of the prior probability p(CD) for eddy current systems A and B
Case of two sensors

In this section Bayes's theory is applied to the combination of information (depth measurements) from two eddy current sensors (i.e. systems A and B). Two cases have to be computed when implementing Bayes's theory. First, the posterior probability assuming the depth measurement from sensor A is correct, which can be written mathematically as

$$p(\mathrm{CD}/\mathrm{DA}\wedge\mathrm{DB})_\mathrm{A} = \frac{p(\mathrm{CD})_\mathrm{A}\,p(\mathrm{DA}\wedge\mathrm{DB}/\mathrm{CD})}{p(\mathrm{CD})_\mathrm{A}\,p(\mathrm{DA}\wedge\mathrm{DB}/\mathrm{CD}) + p(\neg\mathrm{CD})_\mathrm{A}\,p(\mathrm{DA}\wedge\mathrm{DB}/\neg\mathrm{CD})} \qquad (6.5)$$

which, assuming the sensor measurements are conditionally independent, can be written as

$$p(\mathrm{CD}/\mathrm{DA}\wedge\mathrm{DB})_\mathrm{A} = \frac{p(\mathrm{CD})_\mathrm{A}\,p(\mathrm{DA}/\mathrm{CD})\,p(\mathrm{DB}/\mathrm{CD})}{p(\mathrm{CD})_\mathrm{A}\,p(\mathrm{DA}/\mathrm{CD})\,p(\mathrm{DB}/\mathrm{CD}) + p(\neg\mathrm{CD})_\mathrm{A}\,p(\mathrm{DA}/\neg\mathrm{CD})\,p(\mathrm{DB}/\neg\mathrm{CD})} \qquad (6.6)$$
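Equation (6.6) reduces to a short computation; the sketch below evaluates the case CD = DA using the system A and B figures from Table 6.9 (the likelihoods for a specific pair of readings would in practice come from the fitted Gaussian models, so the printed value is illustrative only and is not taken from Table 6.10). The same function extends unchanged to three or more sensors by appending further likelihood pairs.

def fused_posterior(p_cd, lik_cd, lik_not_cd):
    """Equation (6.6): posterior that a candidate depth is correct after
    fusing several sensors, assuming conditionally independent sensors.
    lik_cd, lik_not_cd: per-sensor values of p(Di/CD) and p(Di/not CD)."""
    num = p_cd
    comp = 1.0 - p_cd
    for l_cd, l_ncd in zip(lik_cd, lik_not_cd):
        num *= l_cd
        comp *= l_ncd
    return num / (num + comp)

# Case CD = DA, with Table 6.9 values for systems A and B:
print(fused_posterior(0.583, [0.350, 0.400], [0.250, 0.200]))  # ~0.80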
p(CD) Fig. 6.23 Plots of Bayesian posterior probability p(CD/DA) vs. p(CD) for eddy current systems AandB
Second, the posterior probability, assuming that the depth measurement from system B is correct, has to be computed and can be written as

p(CD/DA \wedge DB)_B = \frac{p(CD)_B \, p(DA \wedge DB/CD)}{p(CD)_B \, p(DA \wedge DB/CD) + p(\neg CD)_B \, p(DA \wedge DB/\neg CD)}   (6.7)
Decisions would be made in favour of the estimated depth which has the highest posterior probability (i.e. the depth with the highest support). It can be seen from these equations that the final output is directly dependent on sensor efficiency and the estimated depth. For example, if DA = 1.62 and DB = 2.00, in the case where the correct depth is assumed to be DA, the posterior probability is p(CD/DA ∧ DB)_A = 0.850, while in the case where the depth estimated with system B (DB) is assumed correct, the posterior probability is p(CD/DA ∧ DB)_B = 0.855. Therefore we will tend to have more confidence in the depth estimated with system B as its posterior probability is higher. Graphs of p(CD/DA ∧ DB) vs. p(DA ∧ DB/CD) for increasing values of p(DA ∧ DB/¬CD), for both cases (i.e. case 1, CD = DA; case 2, CD = DB), are presented in Fig. 6.25. One can see that the posterior probability increases as the belief in a system measurement increases. Graphs in Fig. 6.26 show the dependence of the fused decision rate p(CD/DA ∧ DB) on sensor efficiency. These curves are useful to determine optimal sensor efficiency as a function of posterior probability; the posterior probability may be recommended by quality standards. Low values of p(CD) affect the system performance even for high values of p(DA/CD).

Fig. 6.24 Plots of Bayesian posterior probability p(CD/DA) vs. p(CD) for increasing values of p(DA/CD) for eddy current systems A and B

In the general case (for no particular depth measurement), the posterior probabilities for one and two sensors are summarised in Table 6.10. It can be seen that the posterior probability with system B is greater than that with system A, either with one or with two sensors. In this specific example, the posterior probability resulting from the combination of information from both systems is higher than the prior probability of each system taken individually. The superiority of system B over system A, in terms of ability to quantify a defect depth accurately, is obvious.

Case of three sensors
The posterior probability for the three cases CD = DA, CD = DB and CD = DC can be computed using the equation

p(CD/D1 \wedge D2 \wedge D3) = \frac{p(CD) \, p(D1 \wedge D2 \wedge D3/CD)}{p(CD) \, p(D1 \wedge D2 \wedge D3/CD) + p(\neg CD) \, p(D1 \wedge D2 \wedge D3/\neg CD)}   (6.8)
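Equations (6.6)-(6.8) extend to any number of sensors by multiplying the per-sensor likelihoods. The following minimal sketch is ours (not the author's code) and assumes conditionally independent sensors; with the Table 6.9 values it reproduces the general-case posteriors summarised in Table 6.10.

```python
from math import prod

def bayes_posterior(p_cd, p_d_cd, p_d_ncd):
    """p(CD/D1 ... Dn) for conditionally independent sensors;
    p_d_cd and p_d_ncd are the per-sensor likelihood lists."""
    num = p_cd * prod(p_d_cd)
    return num / (num + (1.0 - p_cd) * prod(p_d_ncd))

# Case CD = DA (prior of system A), one then two sensors:
print(bayes_posterior(0.583, [0.350], [0.250]))                # ~0.66
print(bayes_posterior(0.583, [0.350, 0.400], [0.250, 0.200]))  # ~0.80
# Case CD = DB (prior of system B), two sensors:
print(bayes_posterior(0.667, [0.350, 0.400], [0.250, 0.200]))  # ~0.85
```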
Fig. 6.25 Plots of Bayesian approach to the combination of estimated depth from systems A and B in the cases where CD = DA and CD = DB (posterior probability vs. p(DA ∧ DB/CD) for increasing values of p(DA ∧ DB/¬CD))
Performance analysis
By comparing results from the Bayesian data fusion approach, it can be noted that at equal p(DA ∧ DB/CD)_A the posterior probability is lower; in this case, the confidence associated with a measurement was decreasing. It can be seen graphically that, at equal prior probability, more confidence is achieved in combining information from multiple sensors. For example, at equal values of p(CD) = 0.1 and for p(DA/CD) = p(DA ∧ DB/CD) = 0.8, the posterior probabilities are p(CD/DA) = 0.25 and p(CD/DA ∧ DB)_A = 0.58. A comparison of posterior probability between system A and system B prior to fusion (Fig. 6.23) shows that system B is more efficient than system A. It can be seen when comparing Figs 6.21 and 6.25 that as the number of sensors increases, the posterior probability (and therefore the belief associated with a specific value) improves. Figure 6.27 shows that the best posterior probability is achieved with the system with the highest sensor efficiency, and that inference will be made in favour of the depth estimated with system D (Fig. 6.28). Figure 6.29 is a comparison of the Bayesian posterior probability achieved by combining information from one and up to five sensors, for two different cases where CD = DA and CD = DE. These graphs show the benefit of using multiple systems to quantify a defect.
Fig. 6.26 Plots of p(CD/DA ∧ DB) vs. p(DA ∧ DB/CD) (top) and p(CD/DA ∧ DB) vs. p(CD) (bottom) using Bayes's theory to combine estimated depths DA and DB
Table 6.10 Results of the Bayesian posterior probability for the general case of fusion of depth information from one and two sensors

Posterior probability   One sensor   Two sensors
p(CD/DA)                66%          80%
p(CD/DB)                80%          85%
Fig. 6.27 Posterior probability vs. sensor efficiency for five NDT systems

The certainty regarding a defect depth increases with the amount of fused information; knowing p(D/CD) and the number of sensors, one can graphically determine the posterior probability and make an inference in favour of the estimated depth with the highest p(CD/DA...DE). In this case it would be assumed that the depth estimated with system D is the most representative of the actual defect depth. From the example in Table 6.11, it can be said that an improvement in decision making can be achieved with the Bayesian inference theory. Before fusion it was difficult to make a decision in favour of DA or DB, as each system had almost the same prior probability. After fusion, in the cases of one and two sensors, the posterior probability provides more confidence in DA than DB. In order to assess the performance of Bayes's theory, random data can be generated and fused for the 1.96 mm deep calibration slot. However, the numerical results show that there is little variation in the final output with different defect depths. If the decision on a depth estimation is made in favour of the one with the highest posterior probability, a significant spread of results can be obtained. Considering the calibration slot of 1.96 mm depth, it can be seen that in the case where both posterior probabilities are equal and the estimated depths are different, no decision can be made in favour of one or the other. This illustrates one problem of the Bayesian approach. In the case where there are two different posterior probabilities, decisions can be made based on the depth estimation with the highest probability. However, it should be noted that this does not always reflect reality. For example, if DA = 2.00 mm and DB = 1.96 mm, more confidence is given to DA than DB which, according to the data provided by the manufacturer of these calibration slots, is not the value expected.
Fig. 6.28 Comparison of Bayesian posterior probability for five possible outcomes from five NDT systems

Table 6.11 Results of the Bayesian posterior probability of fusion of depth information from one and two sensors

System   Estimated defect depth/mm   Prior probability before fusion (one sensor)   Posterior probability after fusion, one sensor   Posterior probability after fusion, two sensors
A        1.89                        68%                                            75%                                              86%
B        1.80                        67%                                            80%                                              85%
Similar events have been noted for the 1.00 mm and 0.53 mm deep calibration slots. It sometimes appears that no decision can be made in favour of one depth or another as they both have identical posterior probability (Table 6.12), a limitation of the Bayesian approach. In summary, three types of fusion output are possible, which may lead to:

• the decision of a depth measurement which is close to the actual depth;
• the decision of a depth which is far from the actual depth;
• no decision, as both posterior probabilities are identical.
Fig. 6.29 Comparison of posterior probability achieved with Bayes's theory applied to depth information from one to five sensors (cases CD = DA and CD = DB; curves for one to five sensors with p(D/¬CD) = 0.5, plotted against p(D/CD))
Table 6.12 Results of the fusion of depth information from systems A and B using Bayes's theory (calibration slot 1.96 mm deep)

System A, DA   System B, DB   p(CD/DA)   p(CD/DB)
2.00           1.96           0.856      0.855
1.80           1.70           0.854      0.853
1.90           1.85           0.854      0.853
1.83           1.80           0.855      0.850
1.37           1.70           0.853      0.853
Length measurements
The probabilities associated with five NDT systems can be found in Table 6.13. As previously discussed (section 6.3.1), system A is superior to any of the other four systems and therefore it is expected that the improvement from data fusion will be insignificant, as its efficiency is already almost equal to unity.

Case of one sensor
Results of the Bayesian inference theory performed on one measurement for systems A and B show very little variation in the posterior probability regardless of the estimated length. System A also appears superior to system B whatever the measurement. The graphs in Figs 6.30 and 6.31 clearly show the superiority of system A over system B.

Case of two sensors
The combination of length information from systems A and B using Bayes's theory was performed for the cases where CL = LA and CL = LB. Where length information from systems A and B has been fused, no variation in the posterior probability is apparent. System A is 'too' efficient and perturbs the calculations by assigning a high posterior probability to both length measurements. In the case of length, it clearly appears that no decision can be made in favour of one system or the other.
Table 6.13 Probabilities associated with length measurements for five NDT systems

NDT system   p(CL)   p(L/CL)   p(L/¬CL)
A            0.950   0.950     0.050
B            0.483   0.483     0.517
C            0.622   0.380     0.300
D            0.710   0.420     0.220
E            0.458   0.330     0.180
Fig. 6.30 Plots of Bayesian posterior probability p(CL/LA) vs. p(LA/CL) for increasing values of p(LA/¬CL) for eddy current systems A and B
The Bayesian inference theory does not facilitate decision making with these specific eddy current systems. The graphs of p(CL/LA ∧ LB) vs. p(LA ∧ LB/CL) for increasing values of p(LA ∧ LB/¬CL) are shown in Fig. 6.32. From these graphs, it can be seen that more belief is associated with system A than with system B; even so, the numerical results were confusing at first.

Performance analysis
From the graph in Fig. 6.33, it can be seen that more belief is associated with the length estimated with system A than with any other system. This is due to the fact that the sensor efficiency of system A was almost equal to unity prior to fusion (Fig. 6.34).
Fig. 6.31 Plots of Bayesian posterior probability p(CL/LA) and p(CL/LB) vs. p(CL) for length estimation on calibration slots with eddy current systems A and B
In the case of length measurements, an improvement in the length estimated by system A has been achieved (Table 6.14), which has reinforced the belief that LA is a good estimation of the actual defect length. In the example described in Table 6.14, in the case where the estimated lengths are considered individually, the belief associated with LB is actually decreasing as the belief associated with LA is increasing and almost reaching unity. The addition of extra information from NDT systems with lower sensor efficiencies than system A has the effect of slightly decreasing its final posterior probability, from 97.58 per cent to 97.42 per cent. However, in the case of sensors with low efficiency (e.g. system B) the result is a significant increase in the posterior probability, which in this case does not help in making inferences as both posterior probabilities are almost equal. It can therefore be said that fusion of data from multiple sensors can only improve the belief associated with a measurement if this particular measurement actually needs to be improved. In the case of system A, there will be no improvement gained from the combination of length information with other data as it already provides the user with a high degree of certainty.
The Bayesian estimate
The Bayesian estimate approach is better suited to making inferences with random variables. This approach can be used to estimate the mean μ of a population; in our case the depth or length of a defect. Let us define μ0 and σ0 as a subjective prior mean and standard deviation respectively, for the possible values of μ and σ. These prior estimations will be combined with evidence (or measurement, denoted x) consisting of a sample of size n and an estimate of σ, where σ is the standard deviation of the population, here taken from 10 depth measurements.
Fig. 6.32 Plots of Bayesian approach to the combination of estimated length from systems A and B in the cases where CL = LA and CL = LB (posterior probability vs. p(LA ∧ LB/CL) for increasing values of p(LA ∧ LB/¬CL))
Using the Bayes estimate, the posterior density for μ is the conditional density of M given the sample values, and can be written as

f_{M|X}(\mu|x) = \frac{f_{X,M}(x, \mu)}{f_X(x)} = \frac{f_{X|M}(x|\mu) \, f_M(\mu)}{f_X(x)}   (6.9)

where M is the estimate of the depth and μ is unknown. The prior density expresses the degree of belief in the location of the value of μ prior to sampling. The posterior density expresses the degree of belief in the location of the value of μ given the results of the sample.
Fig. 6.33 Posterior probability vs. sensor efficiency for five NDT systems (cases CL = LA to CL = LE, plotted against p(CL))
Fig. 6.34 Comparison of Bayesian posterior probability for five possible outcomes from five NDT systems
Table 6.14 Results of the Bayesian posterior probability of fusion of length information from one and two sensors

System   Estimated defect length/mm   Prior probability before fusion (one sensor)   Posterior probability after fusion, one sensor   Posterior probability after fusion, two sensors
A        20.00                        68.00%                                         97.58%                                           97.42%
B        22.00                        67.80%                                         66.30%                                           97.39%
The density function of the sample can be written as

f_{X|M}(x|\mu) = \left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{n} \exp\left[-\frac{\sum_i (x_i - \mu)^2}{2\sigma^2}\right]   (6.10)

and the joint density of the sample values and M is

f_{X|M}(x|\mu)\, f_M(\mu) = \left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{n} \frac{1}{\sqrt{2\pi}\,\sigma_0} \exp\left[-\frac{\sum_i (x_i - \mu)^2}{2\sigma^2}\right] \exp\left[-\frac{(\mu - \mu_0)^2}{2\sigma_0^2}\right]   (6.11)

with

\sum_i (x_i - \mu)^2 = \sum_i (x_i - \bar{x})^2 + n(\bar{x} - \mu)^2 \quad \text{and} \quad \bar{x} = \frac{1}{n}\sum_i x_i

Defining K as a constant term, equation (6.11) can be written as

f_{X,M}(x, \mu) = K \exp\left[-\frac{(\mu - \mu_0)^2}{2\sigma_0^2}\right] \exp\left[-\frac{n(\bar{x} - \mu)^2}{2\sigma^2}\right]   (6.12)

The exponent can be rewritten as

\frac{n}{2\sigma^2}(\bar{x} - \mu)^2 + \frac{1}{2\sigma_0^2}(\mu - \mu_0)^2 = \frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\right)(\mu - \mu')^2 + \text{terms not involving } \mu

and equation (6.12) becomes

f_{X,M}(x, \mu) = K' \exp\left[-\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\right)(\mu - \mu')^2\right]   (6.13)

From Lindley7, the estimate can be calculated using

\mu' = \frac{\dfrac{n\bar{x}}{\sigma^2} + \dfrac{\mu_0}{\sigma_0^2}}{\dfrac{n}{\sigma^2} + \dfrac{1}{\sigma_0^2}}   (6.14)
The value of μ' can be defined as a weighted mean of x̄ (direct sample evidence) and μ0 (estimated prior information), with n/σ² and 1/σ0² as respective weights. An estimate of the actual defect depth can be calculated if it is assumed that the depths of defects are normally distributed with unknown mean μ and known standard deviation σ = 0.41 for the 1.96 mm deep calibration slot, and σ = 0.29 and σ = 0.21 for the 1.00 mm and 0.53 mm deep calibration slots respectively. Tables 6.15-6.17 show the results of the Bayes estimate for depth measurements on the three calibration slots, assuming a prior defect depth μ0 and a standard deviation σ0, and with a randomly selected depth measurement from each instrument.

Table 6.15 Results of the Bayes estimate for calibration slots 1.96 mm deep with σ = 0.41, μ0 = 2.00 and σ0 = 0.20 with evidence from one to five sensors

System A   System B   System C   System D   System E   Depth/mm
2.18       —          —          —          —          2.03
2.18       2.00       —          —          —          2.03
2.18       2.00       1.89       —          —          2.01
2.18       2.00       1.89       1.98       —          2.01
2.18       2.00       1.89       1.98       2.05       2.01
Table 6.16 Results of the Bayes estimate for calibration slots 1.00 mm deep with σ = 0.29, μ0 = 0.80 and σ0 = 0.30 with evidence from one to five sensors

System A   System B   System C   System D   System E   Depth/mm
0.78       —          —          —          —          0.79
0.78       0.85       —          —          —          0.81
0.78       0.85       0.91       —          —          0.84
0.78       0.85       0.91       0.95       —          0.86
0.78       0.85       0.91       0.95       0.95       0.87
Table 6.17 Results of the Bayes estimate for calibration slots 0.53 mm deep with σ = 0.21, μ0 = 0.50 and σ0 = 0.10 with evidence from one to five sensors

System A   System B   System C   System D   System E   Depth/mm
0.35       —          —          —          —          0.47
0.35       0.50       —          —          —          0.48
0.35       0.50       0.65       —          —          0.50
0.35       0.50       0.65       0.55       —          0.55
0.35       0.50       0.65       0.55       0.45       0.50
One can see that as the amount of information, or evidence, increases, the depth estimation shifts towards the sample mean x̄. The Bayes estimate allows inference to be made about a defect size when no prior probability can be assigned to a measurement. The Bayes estimate gives the advantage of producing an agreement over the depth, and may be better than the Bayesian probability theory at combining information from multiple NDT sensors.

6.4.2 ASSOCIATION OF EDDY CURRENT DATA WITH THE DEMPSTER-SHAFER THEORY

Depth measurements
The same data collected during the eddy current inspection, to which Bayesian theory was applied in section 6.4.1, can also be fused using the Dempster-Shafer theory. Experimental results of data fusion with the Dempster-Shafer theory are shown in Table 6.18. The Dempster-Shafer data fusion operation is performed by combining information from systems A and B. The theory behind the data fusion process has already been described in chapter 2 and will not be reviewed in this chapter. The example in Table 6.19 illustrates the Dempster-Shafer data fusion principle.9,10 Unlike the Bayesian inference theory, which provides the user only with a hard probability value, the output of Dempster-Shafer evidential reasoning takes the form of several depth intervals, each having an evidential interval associated with it. A decision is made in favour of the depth interval with the highest evidential interval. In the example of Table 6.19, the evidential intervals associated with each depth interval are summarised in Table 6.20.

Table 6.18 Results of fusion of depth measurements from systems A and B with Dempster-Shafer theory (unit: mm) (calibration slot 1.00 mm deep)

System A, DA   System B, DB   Depth interval   Evidential interval
0.78           0.40           [0.51-1.05]      [0.521, 0.771]
0.78           0.80           [0.61-0.99]      [0.678, 0.782]
1.06           0.80           [0.75-0.99]      [0.459, 0.563]
0.70           0.70           [0.53-0.87]      [0.662, 0.770]
1.33           0.60           [0.99-1.67]      [0.528, 0.779]
Table 6.19 Fusion of depth information from eddy current systems A and B using Dempster-Shafer evidential reasoning

             DB = 1.00            Θ
DA = 1.37    I1 = [1.03-1.22]     I3 = [1.03-1.71]
Θ            I2 = [0.78-1.22]     Θ
Table 6.20 Evidential intervals associated with depth intervals of the example described in Table 6.19

Depth interval/mm     Evidential interval
I1 = [1.03-1.22]      EI1 = [0.459, 0.563]
I2 = [0.78-1.22]      EI2 = [0.216, 0.320]
I3 = [1.03-1.71]      EI3 = [0.221, 0.325]
From Table 6.20, a decision can be made in favour of the interval I1 as the belief associated with it is the highest, with a probability of 0.459. Table 6.21 presents the information associated with depth interval I1 resulting from the Dempster-Shafer data fusion process. There is a 45.9 per cent chance that the actual defect depth lies between 1.03 mm and 1.22 mm, and a 56.3 per cent chance that this belief is correct. The uncertainty on this depth interval is very low (10.4 per cent) and therefore a decision can be made in favour of I1. The actual defect depth was 1.00 mm, which in this case means that the decision was made in favour of a defect depth which can be assumed correct at ±0.03 mm.

Table 6.21 Information associated with depth interval I1

Belief         0.459
Plausibility   0.563
Uncertainty    0.104
Disbelief      0.437

Length measurements
As with depth, data have been combined in pairs, and length and evidential intervals generated to make inferences (Tables 6.22 and 6.23). In this example, a decision can be made in favour of the length interval I3, which has a 47.5 per cent chance that the actual defect length lies within it, with a plausibility of 69.9 per cent. The actual defect length was 19.81 mm, which correctly lies in the interval selected.
Table 6.22 Fusion of length information from eddy current systems A and B using Dempster-Shafer evidential reasoning

             LA = 20.00            Θ
LB = 16.00   I1 = Ø                I2 = [14.07-17.93]
Θ            I3 = [19.02-20.98]    Θ
Table 6.23 Evidential intervals associated with length intervals of the example described in Table 6.22

Length interval/mm      Evidential interval
I2 = [14.07-17.93]      EI2 = [0.470, 0.694]
I3 = [19.02-20.98]      EI3 = [0.475, 0.699]
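The pairwise combination used throughout this section can be sketched in code. This is our illustrative implementation: focal elements are restricted to depth or length intervals plus the frame of discernment Θ, and the basic probability assignments shown are invented for illustration (the book does not list its exact masses, so these numbers will not reproduce Tables 6.18-6.23).

```python
THETA = (float("-inf"), float("inf"))   # frame of discernment

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def ds_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are closed intervals (tuples) or the frame THETA."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = intersect(a, b)
            if c is None:
                conflict += wa * wb              # mass assigned to conflict
            else:
                fused[c] = fused.get(c, 0.0) + wa * wb
    return {s: w / (1.0 - conflict) for s, w in fused.items()}   # renormalise

def belief(m, a):
    """Sum of masses committed to subsets of interval a."""
    return sum(w for s, w in m.items() if a[0] <= s[0] and s[1] <= a[1])

def plausibility(m, a):
    """Sum of masses of focal elements compatible with interval a."""
    return sum(w for s, w in m.items() if intersect(s, a) is not None)

# Hypothetical masses for the Table 6.19 estimates (DA = 1.37, DB = 1.00):
mA = {(1.03, 1.71): 0.7, THETA: 0.3}
mB = {(0.78, 1.22): 0.7, THETA: 0.3}
m = ds_combine(mA, mB)   # focal sets: I1 = (1.03, 1.22), I2, I3 and THETA
print(belief(m, (1.03, 1.22)), plausibility(m, (1.03, 1.22)))
```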
It was noted that if the two estimated lengths are far apart from each other, no decision could be made, as the two length intervals generated have the same evidential interval (i.e. identical belief and plausibility). This means that there is no more support for one system than the other; in this case no information is available and there is the same degree of uncertainty for each system.

Cases of three and four sensors
The previous section described the fusion of depth and length information from two distinct eddy current sensors using Dempster-Shafer evidential reasoning. Combination of information with the Dempster-Shafer theory is performed in a pairwise manner. Therefore, if information from another instrument is to be combined, this will be another fusion process performed after completion of the fusion of information from the first two sensors. Unlike the Bayesian approach, combination of information from three or more sensors cannot be performed simultaneously with the Dempster-Shafer approach. Considering the data fusion example described in Table 6.19, the interval selected (I1) can now be fused with another depth estimation from another system (Table 6.24); as with the Bayesian approach, the data from these instruments were randomly simulated. From the data fusion example presented in Table 6.25 it can be seen that the decision is shifted towards a new interval (I′2), with a belief of 36.2 per cent and a plausibility of 54.1 per cent. The addition of extra information in this particular example induced an increase in depth interval and a reduction in its associated evidential interval.

Table 6.24 Fusion of depth information from I1 (Table 6.19) and data from another NDT system using Dempster-Shafer evidential reasoning

             I1 = [1.03-1.22]      Θ
DC = 0.90    I′1 = [1.03-1.11]     I′2 = [0.69-1.11]
Θ            I′3 = [1.03-1.22]     Θ
Table 6.25 Evidential intervals associated with depth intervals of the example described in Table 6.24

Depth interval/mm      Evidential interval
I′1 = [1.03-1.11]      EI′1 = [0.308, 0.487]
I′2 = [0.69-1.11]      EI′2 = [0.362, 0.541]
I′3 = [1.03-1.22]      EI′3 = [0.151, 0.330]
Results from the addition of information from a fourth sensor are presented in Tables 6.26 and 6.27. The addition of information from a fourth sensor greatly improved decision making by producing a smaller depth interval with an 82.4 per cent belief, supported at 93.7 per cent. Results from Tables 6.18-6.27 are summarised in Table 6.28. In order to obtain an improvement in the combination of information from multiple sensors, the information from each system must overlap. If no overlapping regions can be identified, the final combination will be at least equal to the system with the highest probability. Discrimination between the two sensor outputs is required to avoid errors.

Table 6.26 Fusion of depth information from I′2 (Table 6.24) and data from another NDT system using Dempster-Shafer evidential reasoning

             I′2 = [0.69-1.11]     Θ
DD = 1.25    I″1 = Ø               I″2 = [1.15-1.35]
Θ            I″3 = [0.69-1.11]     Θ
Table 6.27 Evidential intervals associated with depth intervals of the example described in Table 6.26

Depth interval/mm      Evidential interval
I″2 = [1.15-1.35]      EI″2 = [0.824, 0.937]
I″3 = [0.69-1.11]      EI″3 = [0.063, 0.176]
Table 6.28 Summary of the Dempster-Shafer data fusion applied to depth measurements from four eddy current NDT systems

Combination operation   Depth and evidential intervals
DA ∩ DB                 I = [1.03-1.22], EI = [0.459, 0.563]
I ∩ DC                  I′ = [0.69-1.11], EI′ = [0.362, 0.541]
I′ ∩ DD                 I″ = [1.15-1.35], EI″ = [0.824, 0.937]
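Adding a third or fourth sensor, as in Tables 6.24-6.28, is then just repeated pairwise combination of the fused mass function with the next sensor's masses. Continuing the sketch above (again with hypothetical masses):

```python
mC = {(0.69, 1.11): 0.7, THETA: 0.3}   # hypothetical system C, DC = 0.90
mD = {(1.15, 1.35): 0.7, THETA: 0.3}   # hypothetical system D, DD = 1.25
m3 = ds_combine(m, mC)    # second fusion step (intervals overlap)
m4 = ds_combine(m3, mD)   # third step: non-overlapping focal elements
                          # create conflict, which the rule renormalises away
```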
6.4.3 FUSION OF ULTRASONIC DATA FROM DIFFERENT ANGLE PROBES

Ultrasonic data gathered from the inspection of the pipe sample using 45° and 60° angle probes were combined using the Bayesian and the Dempster-Shafer approaches. The lengths of the toe crack estimated by the 45° and 60° probes are denoted L45 and L60 respectively. The probabilities associated with length measurements for the two ultrasonic angle probes are summarised in Table 6.29.
Table 6.29 Probabilities associated with length measurements for two ultrasonic angle probes

Angle probe (°)   p(CL)   p(L/CL)   p(L/¬CL)
45                0.350   0.070     0.130
60                0.700   0.140     0.060
Bayesian data fusion
Ultrasonic data from both probe angles can be combined using the Bayesian inference process, and the results are summarised in Tables 6.30 and 6.31. In Table 6.30, where L45 = 28.00 mm and L60 = 29.00 mm, logically a decision should be made in favour of the sensor with the highest posterior probability, in this case the 45° angle probe, whereas when L45 = L60 = 28.00 mm (Table 6.31) no decision can be made in favour of either probe angle as the posterior probabilities are identical. From these calculations it can be seen that the Bayesian inference process is not suitable for making inferences when the prior probabilities associated with each angle probe are very similar.

Dempster-Shafer data fusion
The ultrasonic data combined above using the Bayesian inference process were also fused using Dempster's rule of combination. Results of these combinations are summarised in Tables 6.32-6.35. It can be seen that the Dempster-Shafer evidential reasoning allows decisions to be made in favour of a length interval regardless of the input values of the estimated lengths.

Table 6.30 Results of the Bayesian posterior probability of fusion of length information from two ultrasonic angle probes

Angle probe (°)   Estimated defect length/mm   Prior probability before fusion   Posterior probability after fusion
45                28.00                        0.6802                            0.728
60                29.00                        0.6778                            0.726
Table 6.31 Results of the Bayesian posterior probability of fusion of length information from two ultrasonic angle probes

Angle probe (°)   Estimated defect length/mm   Prior probability before fusion   Posterior probability after fusion
45                28.00                        0.6802                            0.728
60                28.00                        0.6802                            0.728
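Assuming the bayes_posterior sketch given earlier, the Table 6.30 posteriors follow directly from the Table 6.29 likelihoods combined with the measurement-specific priors listed in Table 6.30:

```python
lik_cl  = [0.070, 0.140]   # p(L45/CL), p(L60/CL) from Table 6.29
lik_ncl = [0.130, 0.060]   # p(L45/not CL), p(L60/not CL)
print(bayes_posterior(0.6802, lik_cl, lik_ncl))  # case CL = L45 -> ~0.728
print(bayes_posterior(0.6778, lik_cl, lik_ncl))  # case CL = L60 -> ~0.726
```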
Table 6.32 Fusion of length information from two ultrasonic angle probes using Dempster-Shafer evidential reasoning

              L60 = 28.00            Θ
L45 = 28.00   I1 = [25.46-30.54]     I3 = [25.46-30.54]
Θ             I2 = [25.46-30.54]     Θ
Table 6.33 Evidential intervals associated with length intervals of the example described in Table 6.32

Length interval/mm     Evidential interval
I = [25.46-30.54]      EI = [0.9, 1.0]
Table 6.34 Fusion of length information from two ultrasonic angle probes using Dempster-Shafer evidential reasoning

              L60 = 29.00            Θ
L45 = 28.00   I1 = [26.37-30.54]     I3 = [25.46-30.54]
Θ             I2 = [26.37-31.63]     Θ
Table 6.35 Evidential intervals associated with length intervals of the example described in Table 6.34

Length interval/mm     Evidential interval
I1 = [26.37-30.54]     EI1 = [0.461, 0.564]
I2 = [26.37-31.63]     EI2 = [0.219, 0.322]
I3 = [25.46-30.54]     EI3 = [0.217, 0.320]
So, in the case where both angle probes estimate a defect length of 28.00 mm, there is a 90 per cent chance that the defect length is included in the interval I = [25.46-30.54 mm]. The reason for the high value supporting this interval is the wide range of lengths it includes. The second example provides a more informative estimation of the length by supporting the interval I1 = [26.37-30.54 mm] with 46.1 per cent. Although the length interval has been only slightly reduced, the actual defect length appears to be (i) longer than 25.46 mm and (ii) shorter than 31.63 mm (actual defect length = 29.00 mm). The use of another ultrasonic probe angle or another NDT method could help to reduce the length interval and thus approach a more accurate
estimate of defect length. This was performed in section 6.4.5 by combining data from eddy current, ultrasonic and radiographic examinations.

6.4.4 FUSION OF EDDY CURRENT AND ULTRASONIC DATA

When a defect can be, or has been, detected and sized using multiple different NDT methods, combination of information can be performed. A toe crack from a pipe sample was detected and sized using the eddy current system A and the ultrasonic system U described in section 6.3.2. As for the fusion of information from multiple eddy current systems previously described, the fusion of eddy current and ultrasonic data was performed using both the Bayesian and Dempster-Shafer theories.

Bayesian data fusion approach
Noting LA and LU as the lengths estimated with eddy current system A and the ultrasonic system respectively, the Bayesian posterior probability, in the case where the correct length is assumed to be LA, can be calculated using the following equation:

p(CL/LA \wedge LU)_A = \frac{p(CL)_A \, p(LA/CL) \, p(LU/CL)}{p(CL)_A \, p(LA/CL) \, p(LU/CL) + p(\neg CL)_A \, p(LA/\neg CL) \, p(LU/\neg CL)}   (6.15)

and in the case where the correct length is assumed to be LU the equation becomes

p(CL/LA \wedge LU)_U = \frac{p(CL)_U \, p(LA/CL) \, p(LU/CL)}{p(CL)_U \, p(LA/CL) \, p(LU/CL) + p(\neg CL)_U \, p(LA/\neg CL) \, p(LU/\neg CL)}   (6.16)

As with the fusion of information from multiple eddy current systems, no decision can really be made, as both posterior probabilities are very similar and only the fourth digit is modified.
Dempster-Shafer data fusion approach
This approach appears more appropriate than Bayes's theory as no confusion arises from the output of the fusion process, which facilitates decision making. Tables 6.36 and 6.37 present an example of the Dempster-Shafer fusion of eddy current and ultrasonic data. In this example, a decision can be made in favour of interval I1. There is a 46.35 per cent chance that the actual defect length lies within the estimated length interval I1, with 56.5 per cent support for this decision (actual defect length = 29.00 mm).

Performance analysis
The implementation of the fusion of eddy current and ultrasonic data demonstrated that it is possible to fuse information from different NDT systems provided they refer to the same defect. As with fusion of eddy current data from multiple systems, the Bayesian approach does not help in decision making because the support associated with the estimated length
Table 6.36 Fusion of length information from eddy current system A and ultrasonic system U using Dempster-Shafer evidential reasoning

             LA = 29.00             Θ
LU = 27.00   I1 = [28.04-29.45]     I2 = [24.56-29.45]
Θ            I3 = [28.04-29.96]     Θ

Table 6.37 Evidential intervals associated with length intervals of the example described in Table 6.36

Length interval/mm     Evidential interval
I1 = [28.04-29.45]     EI1 = [0.463, 0.565]
I2 = [24.56-29.45]     EI2 = [0.218, 0.320]
I3 = [28.04-29.96]     EI3 = [0.218, 0.320]
by eddy current system A is too high (95 per cent). Dempster-Shafer appears to be more appropriate for the combination of information in a more objective manner, and for presenting results in the informative form of intervals. Still, the fusion of ultrasonic and eddy current data can be performed in order to gain information otherwise unavailable when using only one NDT system; for example, combining surface breaking defects and internal defects from the same sample. This combination of information has to be performed graphically at pixel level.
Table 6.38 Results of the Bayesian posterior probability of fusion of length information from eddy current, ultrasonic and radiographic weld inspections
Estimated defect NDT method length/mm
Prior probability before fusion
Posterior probability after fusion
Eddy current 29 Ultrasound 28 Radiography 27
0.680 0.680 0.500
0.788 0.788 0.636
170 Implementation ofNDT data fusion to weld inspection Table 6.39 Results of the Dempster-Shafer fusion process of length information from eddy current and ultrasonic weld inspections Lm = 28.00 0 LEC = 29.00 II = [28.04-29.96] 12 = [28.04-29.96] 0 13 = [25.46-30.54] 0
Table 6.40 Evidential intervals associated with length intervals of the example described in Table 6.39 Length interval/mm
Evidential interval
I = [28.04-29.96]
EI = [0.680, 0.782]
Table 6.41 Results of the Dempster-Shafer fusion process of length information from fused data from Table 6.34 and radiographic weld inspections
L (Fused EC -KUT)
0
^ X R = 27.00 11 =: [28.04--29.96] 13 =• [26.50--28.50]
0 12 =: [28.04--29.96] 0
Table 6.42 Evidential intervals associated with length intervals of the example described in Table 6.41 Length interval/mm
Evidential interval
11 = [28.04-29.96] 12 = [26.50-28.50]
Ell = [0.340, 0.500] EI2 = [0.340, 0.500]
presented here, the combination of information from a third source of NDT data (i.e. X-ray radiography, LXR) with little associated knowledge has the effect of increasing uncertainty regarding the defect length estimation. More radiographic experiments on weld samples with a similar defect type would be required to be able to estimate a realistic probability value.

6.4.6 DATA FUSION AT PIXEL LEVEL

Pixel level data fusion was used to combine information from eddy current systems A and B for binary-like hypothesis testing, and to combine ultrasonic and eddy current data in the case of adding extra information to an inspection.
Combination of information from two eddy current systems
The fusion of information from two identical methods of inspection, but from two different systems (i.e. eddy current systems A and B), can be used for:

• a qualitative pixel level data fusion (e.g. defect/no defect testing hypothesis);
• a quantitative pixel level data fusion (e.g. defect dimension testing).
In the case of qualitative pixel level data fusion, the final objective is to increase confidence in the presence or absence of a defect.11 In the case of quantitative pixel level data fusion, the objective is to compare and make a consensus assessment on the estimated defect dimension.12 Eight steps in pixel level data fusion can be identified (the image fusion step is sketched in code below):

• data acquisition
• data processing
• data transfer
• data visualisation
• image enhancement
• alignment
• image fusion
• decision phase.
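For the arithmetic operations discussed in the following paragraphs, the image fusion step reduces to elementwise operations on co-registered, normalised images. A minimal NumPy sketch (our code; the weights are illustrative stand-ins for the probabilities taken from previous inspections):

```python
import numpy as np

def fuse_images(img_a, img_b, w_a=1.0, w_b=1.0, mode="add"):
    """Pixel level fusion of two co-registered NDT images with values in [0, 1];
    w_a and w_b weight each image by the belief in its source system."""
    a, b = w_a * img_a, w_b * img_b
    if mode == "add":
        out = a + b
    elif mode == "multiply":
        out = a * b
    elif mode == "subtract":
        out = np.abs(a - b)
    else:
        raise ValueError(f"unknown mode: {mode}")
    peak = out.max()
    return out / peak if peak > 0 else out   # rescale for colour display
```

For the weighted variants discussed below, w_a and w_b could, for instance, be set to the prior probabilities of systems A and B (of the order of 0.68 and 0.67 in the earlier examples).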
Pixel level data fusion was performed on eddy current images obtained from the visualisation of inspection of a butt welded plate which had two defects - a toe crack and a HAZ crack. Figure 6.35 is a view from above the plate with a centreline weld and two defects. The results of one inspection with both eddy current systems are shown in Figs 6.36 and 6.37. By comparing inspection results with actual defect location it can be seen that system A provides a good estimation of defect location. The two surface breaking defects were detected with system B, but the toe crack appears intermittent and the HAZ crack is much longer than when measured with system A. Moreover a false call arose as a second toe crack appeared to be detected. The image fusion phase of pixel level data fusion can be performed by combining images using logical functions available with IP software (e.g. AND, OR) or mathematical operations (e.g. addition, subtraction, division, multiplication). Another alternative would be to combine processed images using segmentation, feature extraction,
Fig. 6.35 Schematic defect position on sample tested (toe crack and HAZ crack either side of the centreline weld)
Fig. 6.36 View from above of the eddy current inspection of the sample described in Fig. 6.35 using system A
Fig. 6.37 View from above of the eddy current inspection of the sample described in Fig. 6.35 using system B
classification and geometrical transformation. Results from this approach are presented in Table 6.43. Pixel level data fusion results can be assessed using colour intensity, with bright red and yellow representing high probability defects and lighter red and yellow representing low support towards a defect indication. From the results in Table 6.43 it was noted that direct image addition does not improve decision making, as even false calls are combined. The multiplication operation tends to increase the support of identical features such as the toe and HAZ cracks, but to the detriment of length information. The subtraction operation had the effect of reducing length information, and the false toe crack was still visible. Direct mathematical operations on raw images from two eddy current systems do not really aid decision making. Another approach investigated was to perform the same operations using weighted images, the weight associated with each original image being selected from the previous weld inspection. Results of these operations are summarised in Table 6.44. Pixel level fusion of information using images weighted with probabilities collected from real inspections improves the pixel fusion outcome. The best results are obtained with the addition and multiplication operations. The output of the addition operation is a high degree of support for the HAZ crack and the toe crack without any false indication. The result of the multiplication operation appears more realistic and presents interesting information: there is a high degree of support in favour of the toe crack, though it still appears intermittent; an intermediate degree of support for the HAZ crack; and a very low degree of support for the false toe crack. This is a good approach as both real defects are visible and have been detected, and any potential defects are displayed, but with a low degree of support, which means that they are not neglected as with the addition operation.
Table 6.43 Mathematical operations for fusion of eddy current data at pixel level

Image operation                Comments
Original image from system A   Toe crack and HAZ crack detected; defect location and dimension correct.
Original image from system B   Toe crack detected but intermittent; HAZ crack detected but longer than actual length; false call (extra toe crack detected).
Addition                       All defects detected with systems A and B are present, including the false toe crack; increase in support regarding the presence of the actual toe crack and HAZ crack, but no improvement on length information or the intermittent toe crack.
Multiplication                 More certainty on the presence of the actual toe crack; reduction in support regarding both the HAZ crack and the false toe crack.
Subtraction                    Actual toe crack and HAZ crack visible; toe and HAZ crack lengths smaller than actual length; false toe crack still present.
Table 6.44 Mathematical operations of weighted images for fusion of eddy current data at pixel level

Image operation     Comments
Weighted image A    Actual toe crack and HAZ crack detected; defect location and dimension correct; stronger support.
Weighted image B    Toe crack detected but appears intermittent; HAZ crack detected but longer than actual length; false call (extra toe crack detected), but with less support associated with it.
Addition            Toe crack and HAZ crack with high support; correct location and length; no false toe crack visible.
Multiplication      Toe crack and HAZ crack with high support; false toe crack almost cancelled; more certainty towards information from system A, even so toe crack appears intermittent.
Subtraction         High support for the HAZ crack, correct length; high support for toe crack but smaller length; very low support for false toe crack.
Again there is more confidence in the measurement from system A. This is shown by a higher confidence level for the estimated location and length of the HAZ and toe cracks, and by a lower confidence level for the false toe crack detected with system B.

Combination of eddy current and ultrasound data
With the fusion of images from eddy current and ultrasonic inspection, an increase in the knowledge of the actual total number of defects present in the same sample is expected. Because both techniques are complementary - eddy current inspection should detect mainly surface breaking defects and ultrasonic testing should detect internal defects - the logical operation AND was chosen. This has the result of directly adding both types of defects on the same image. The surface breaking defect detected with eddy currents (toe crack) and the internal defects detected with ultrasound (slag inclusion) can be represented and made visible on the same image to facilitate structural assessment by fracture mechanics specialists. An increase in the spatial observation domain can therefore be achieved.

6.4.7 PERFORMANCE ASSESSMENT OF THE BAYESIAN AND DEMPSTER-SHAFER DATA FUSION PROCESSES

In order to determine the optimal decision-fusion process, the performance of the Bayesian and Dempster-Shafer approaches to NDT data fusion should be assessed. In
the previous sections the decision output from each data fusion process was compared to the actual and expected defect depth or length. In this section, receiver operating characteristic (ROC) curves are used to establish a measure of the decision making associated with depth estimation from the fusion of eddy current data by each data fusion process.13,14 Assessment of the decision making associated with length estimation was not possible due to the large difference in sensor efficiency between system A and system B (section 6.4.1). Receiver operating characteristic curves for the Bayesian and Dempster-Shafer data fusion approaches, regarding decisions associated with depth estimation from the measurements on calibration slots, are shown in Figs 6.38 and 6.39. A binary decision rule to determine whether or not a depth estimation could be assumed accurate was used. A measurement of the accuracy of the Bayesian and Dempster-Shafer approaches was performed by calculating the area under the ROC curve using a trapezoidal rule.15 Figure 6.40 is a measure of this area for depth estimation with the Bayesian and Dempster-Shafer approaches from the fusion of eddy current measurements of calibration slots.
Fig. 6.38 Receiver operating characteristic (ROC) curves of the Bayesian data fusion approach at depths 1.96 mm (a), 1.00 mm (b) and 0.53 mm (c) on calibration slots
Fig. 6.39 Receiver operating characteristic (ROC) curves of the Dempster-Shafer data fusion approach at depths 1.96 mm (a), 1.00 mm (b) and 0.53 mm (c) on calibration slots
Fig. 6.40 Measured values of area under ROC curves for decision making of depth estimation with the Bayesian and Dempster-Shafer data fusion approaches (calibration slot depths 1.96 mm, 1.00 mm and 0.53 mm)
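The areas reported in Fig. 6.40 can be computed from the ROC points with the trapezoidal rule cited above.15 A minimal sketch (our code; the example points are invented):

```python
def roc_auc(points):
    """Area under an ROC curve by the trapezoidal rule; points are
    (false positive rate, true positive rate) pairs including (0, 0) and (1, 1)."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

print(roc_auc([(0, 0), (0.2, 0.7), (0.5, 0.9), (1, 1)]))  # 0.785
```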
6.5 Discussion

Fusion of NDT data from eddy current and ultrasonic systems applied to weld defect characterisation using the Bayesian inference theory and the Dempster-Shafer theory is presented in this chapter. Quantitative and qualitative data fusion at a statistical level and pixel level is performed. Finally a comparison of the two data fusion processes implemented is carried out using ROC curves. From this study it was noted that data association through Bayes's theorem is adequate to provide a measurement of certainty about a hypothesis and to make inferences. The Bayesian approach is very dependent upon sensor efficiency and knowledge of a measurement and does not always allow decision making, particularly in the case of length estimation. It was also demonstrated that an increase in the number of sensors for a specific task would not only reduce inspection time, but also increase the performance of a data fusion system. An increase in accuracy can be achieved with the Dempster-Shafer approach by presenting data in the form of an interval. The Dempster-Shafer evidential theory appears to be more adequate when information from multiple systems is combined, and does not require prior knowledge of a measurement. The Bayesian inference theory updates the probability of hypotheses and allows multiple hypotheses to be evaluated simultaneously, while the Dempster-Shafer evidential reasoning can evaluate only two hypotheses at a time. The Dempster-Shafer theory updates an a priori mass density function to obtain an a posteriori evidential interval. The evidential interval quantifies the belief of a proposition and its plausibility. In some cases an improvement in decision making was achieved from the fusion of NDT data. However, in all instances, mistakes were made which resulted in no, or inaccurate, decision outputs. It was important to identify the causes of errors in order to determine the actual limitations and advantages of each data fusion process and to prevent similar errors from happening again in future experiments. Errors in fusion output do not appear to be dependent upon defect type; however, defect detection clearly depends upon defect type, as root cracks and intermittent cracks were not detected with eddy current system B. Combination of length measurements from eddy current systems A and B using the Bayesian approach did not help in decision making, as prior to fusion, system A was too efficient compared to system B. Due to the low belief associated with measurements from system B, information from system B was discarded at the fusion level. In this particular case, the increase in sensor number did not produce an increase in sensor efficiency. This demonstrated that NDT data fusion will ameliorate measurement accuracy only if there is a need for improvement. The system with high sensor efficiency will always have more belief associated with its signal output than any other system. Performance assessment through ROC curves showed that the Dempster-Shafer data fusion approach was more accurate than the Bayesian statistical inference theory. To be truly objective, performance of both data fusion processes would have to be compared with results from depth estimation performed by experienced operators during field inspections who do not have information about the location and size of any defects.
It was also seen that the combination of information from three different NDT methods, namely eddy current, ultrasound and radiography, does not necessarily have the effect of increasing the belief associated with a measurement. This is mainly related to the input values from each NDT method considered; little knowledge associated with one NDT method has the effect of increasing the uncertainty of the overall fused information.
An increase in the spatial observation domain was achieved from the combination of ultrasonic and eddy current data at pixel level. Pixel level data fusion can be useful to gather information from different types of sensors, but data visualisation is required prior to fusion. Applying Bayesian, Dempster-Shafer or fuzzy logic rules to pixel level data fusion may present interesting results and this is certainly a viable alternative, especially with the increasing use of digital output from NDT equipment. No data fusion system can produce a 100 per cent accurate decision output, but it can certainly improve decision making by providing operators with a degree of certainty associated with a measurement. One could ask the question: 'Is NDT data fusion worthwhile?' The operations described in this chapter have shown that no two NDT systems provide identical information regarding a defect size or location. However, the combination of information from multiple systems helped in decision making and increased accuracy. Therefore, it can be said that NDT data fusion will greatly improve accuracy in defect characterisation and will definitely:

• increase confidence level,
• increase the spatial observation domain,
• reduce ambiguity,
• improve defect detection range,
• enhance NDT system reliability, and
• improve the overall performance of a non-destructive examination.
References

1. Dover WD, Rudlin JR. Underwater inspection reliability trials, 13-16 Oct. 1992, International Offshore Conference and Exhibition, Aberdeen, UK.
2. The normal distribution, 1984, The Open University Press, Unit 9.
3. Caulcott E. Significance tests, 1973, Routledge & Kegan Paul.
4. Waltz E, Llinas J. Multisensor data fusion, 1990, Artech House.
5. Gibbons JD. Nonparametric statistical inference, International Student Edition, 1971, McGraw-Hill.
6. Gros XE, Strachan P, Lowden DW. A Bayesian approach to NDT data fusion, May 1995, Insight, 37(5), 363-7.
7. Lindley DV. Introduction to probability and statistics from a Bayesian viewpoint. Part 2: Inference, 1970, Cambridge University Press.
8. Rudlin JR, Wolstenholme LC. Development of statistical probability of detection models using actual trial inspection data, Dec. 1992, British Journal of Non Destructive Testing, 34(2), 583-9.
9. Gros XE, Strachan P, Lowden D, Edwards I. NDT data fusion, Oct. 1994, Proceedings of the 6th European Conference on Non Destructive Testing, Nice, France, 1, 31-5.
10. Gros XE, Strachan P, Lowden D. Theory and implementation of NDT data fusion, 1995, Research on Non Destructive Evaluation, 6(4), 227-36.
11. Georgel B, Lavayssiere B. Fusion de donnees: un nouveau concept en CND, 24-28 Oct. 1994, Proceedings of the 6th European Conference on Non Destructive Testing, Nice, France, 1, 31-5.
12. Johannsen K, Heine S, Nockemann C. New data fusion techniques for the reliability enhancement of NDT, Oct. 1994, Proceedings of the 6th European Conference on Non Destructive Testing, Nice, France, 1, 361-5.
13. Swets AJ. Measuring the accuracy of diagnostic systems, June 1988, Science, 240, 1285-93.
14. Nockemann C, Tillack GR, Wessel H, Heidt H, Konchina V. Receiver operating characteristic (ROC) in non-destructive inspection, Aug. 1993, Proceedings of the NATO Advanced Research Workshop 'Advances in Signal Processing for Non Destructive Evaluation of Materials', Quebec, Canada.
15. Pete A, Pattipati KR, Kleinman DL. Methods for fusion of individual decisions, June 1991, Proceedings of the 1991 American Control Conference, Boston, IEEE Cat. No. 91CH293-7, 2580-5.
7 Perspectives of NDT Data Fusion

'The use of diverse methods, in concert, to solve data fusion problems is still evolving.' D.L. Hall, 1992
7.1 Concluding comments
The most common data fusion methodologies, and theoretical and experimental approaches to the concept of NDT data fusion, are presented in this book. The identification of data fusion algorithms and of the factors that contribute to the reliability of NDT data fusion is discussed. The design of a data fusion system for industrial applications is described. This chapter completes the work presented on multisensor data fusion by summarising the material from earlier chapters and making concluding remarks on the theory and application of NDT data fusion. Future research directions and possible applications are also presented for the further development of multisensor NDT data fusion. Application of NDT data fusion, described in chapters 5 and 6, showed that fusion should be performed using data which are as close as possible to the original data. This has the effect of minimising the loss of information which could occur from extensive signal processing, and of reducing complex processing operations. The criteria for the design of a data fusion model are identified as maximum simplicity, maximum efficiency and operational flexibility; unfortunately the last point is not a synonym for simplicity. Raw sensor data constitute the most common input format of data integration and fusion systems; however, if a multi-usage compatible system is required, other parameters have to be considered. Among these are information on sensor location, information on external databases and environmental data, as well as information related to the experience of human operators (Fig. 7.1). Very complex data management would therefore be required to develop such a powerful system, which would have to perform the following operations:

• store data in multiple formats;
• access and modify data formats if required;
• allow interactive access to data by multiple users;
• have a user friendly interface;
• be compatible with other systems;
• be secure, to prevent unauthorised access to the data;
• be able to compress data for optimisation of storage space.
It is clear from the research presented in this book that multisensor NDT data fusion is worthwhile and can improve the overall efficiency of a non-destructive examination.
Fig. 7.1 Factors to consider in the development of a data fusion system for multiple uses (defect detection and quantification: location, orientation, length, depth; NDT techniques and equipment: specification, calibration; component inspected: material, dimensions, access; inspection location: on site, laboratory; inspection procedure: manual, automated; environment; inspector: qualification, experience, age)
It has also been demonstrated that NDT data fusion can be applied to different types of inspections, regardless of sensor types. The data fusion algorithms implemented took the form of probabilistic inference processes such as the Bayesian inference theory, the Bayesian estimate and Dempster-Shafer evidential reasoning. Data fusion can be performed at the signal level using raw eddy current data collected during the inspection of composite materials, and with data which characterise defect size (depth or length) from eddy current and ultrasonic sensors used for weld inspection. A pixel level data fusion approach with signals from eddy current and ultrasonic systems applied to weld inspection has also been used. It was demonstrated that data fusion can help to make an inference about a hypothesis both in the case of binary decision making, such as defect/no defect,1 and in the case of quantitative information, such as defect depth, from more than one NDT system.2 From the experimental results achieved, the Dempster-Shafer approach was preferred as it is more efficient than the Bayesian approach in making accurate estimations of defect depth. It also presented the results with associated probability intervals, which were used to make decisions in favour of a data fusion output with the highest degree of confidence. The outcome of the Bayesian inference process is a posterior probability which either supports or refutes a hypothesis. This type of reasoning is useful for binary testing and provides a measure of uncertainty of a hypothesis. In the case of non-binary testing, the Dempster-Shafer evidential reasoning is better suited to making inferences. In both cases, an increase in sensor number would not only reduce the inspection time but also increase the performance of the fusion system. It was also noted in chapter 6 that the final decision outcome is affected by the performance of each NDT system. Improvements from data fusion would only be achieved if each NDT system is in a similar performance range. No advantage would be gained from the fusion of NDT systems with poor performances, as the belief associated with any sensor output would be very low. As stated by J.W. Tukey (1977), 'We have not looked at our results until we have displayed them effectively'. Visualisation of NDT data was performed using commercially available software which provides an affordable way to display data in a format which has colour coded images. This allows rapid location and sizing of defects as illustrated in chapter 5, with the detection and visualisation of disbonds and impact damage in composite materials. For NDT data fusion, visualisation enabled clarification of many aspects of defect detection by presenting data fusion results in a format which facilitates interpretation and also enables pixel level data fusion. Similar displays could be used as inputs to neural networks to assist the human operator in decision making by performing pattern recognition tasks as well as pixel level data fusion operations. Prior to the implementation of NDT data fusion, the most common NDT methods were described and their physical principles studied before carrying out experimental inspections. This brief study is necessary to select complementary NDT methods, the data from which could be combined at a later stage.
The use of eddy currents to detect and quantify defects in composite materials was also investigated. This study demonstrated that electromagnetic techniques are a low cost, highly reproducible and efficient alternative to ultrasound and infrared thermography for the inspection of composites. Standard statistical analyses such as POD and ROC were used to assess the performance of the eddy current system. Similar statistical analyses were carried out to assess the performance of the data fusion algorithms implemented; in the experiments described in chapter 6, these demonstrated the Dempster-Shafer evidential theory to be more accurate than the Bayesian theory for decisions on defect depth estimation.
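Both measures reduce to simple counts over inspection trials. As a hedged illustration (the trial data below are invented, not the book's experimental results), a single ROC operating point can be computed as follows:

```python
# Illustrative computation of one ROC operating point from inspection trials.
# Each trial pairs the ground truth (defect present?) with the system's call.
trials = [(True, True), (True, True), (True, False),      # defective samples
          (False, False), (False, True), (False, False)]  # sound samples

hits = sum(1 for truth, call in trials if truth and call)
misses = sum(1 for truth, call in trials if truth and not call)
false_calls = sum(1 for truth, call in trials if not truth and call)
rejections = sum(1 for truth, call in trials if not truth and not call)

pod = hits / (hits + misses)                    # probability of detection
pfa = false_calls / (false_calls + rejections)  # probability of false alarm
print(f"POD = {pod:.2f}, PFA = {pfa:.2f}")      # one point on the ROC curve
```

Sweeping the decision threshold of the instrument and repeating the count traces out the full ROC curve used for performance assessment.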
From the comment by R.A. Armistead, 'No single NDT method alone can provide a total solution to the needs of the advanced engineering materials community', the use of multiple NDT techniques is anticipated, leading to a need to display and combine data effectively. Fusion of NDT data can be used to combine information from multiple identical or different sensors and to make inferences about inspection results. The technology to perform such a task is already available, but only a direct requirement from industry will promote further development of NDT data fusion. Research has already started at an industrial level, and it is not surprising that this need has come from the nuclear industry, for which the use of multiple NDT methods is necessary to meet safety standards.

More research is required to develop an NDT data fusion system for on-site inspections, to develop a database and a man-machine interface, and to configure a system for a specific application. Configuration and interfacing of communication software for data collection and transfer, and of visualisation software for data display, mapping and analysis, would be required. A data fusion algorithm adapted to the problem would have to be selected or specially designed, depending on the data format and application.

The statistical approach to NDT data fusion described in this book demonstrated that an improvement in defect characterisation can be achieved by combining information from multiple sensors. Two approaches, based on the Bayesian and the Dempster-Shafer theories, were implemented. It was noted that the Bayesian posterior probability tended to be more affirmative than the support generated by Dempster-Shafer, because the latter produces uncertain results when the information is conflicting. The Bayesian inference process provides the probability of a hypothesis being true given evidence; however, the prior probabilities are highly dependent on experimental results. Unlike the Dempster-Shafer approach, no uncertainty is associated with the decision towards a hypothesis resulting from a Bayesian calculation. The Bayesian approach is, however, more appropriate than the Dempster-Shafer theory for binary testing with multiple identical sensors. Owing to the difficulty of defining prior probabilities, the Bayes estimate approach appeared best suited to making inferences towards the measurement of an unknown quantity, i.e. a defect depth or length. The Bayes estimate has the advantage of producing an estimated defect depth given evidence from multiple sensors for which a normal prior distribution on μ and σ can be assumed.

The Dempster-Shafer process is highly dependent upon sensor efficiency; a small change in input data can produce a large variation in the outcome of the Dempster-Shafer rule of combination. For example, a 55 per cent increase in accuracy for length estimation can be achieved with the rule of combination when combining length information from instruments in relative agreement, but a variation of 1.00 mm in the length measured by one instrument changes the output, and an increase of only 11 per cent in accuracy is then achieved. Combining the same information using the Bayesian approach produces an increase in accuracy of 38 per cent in both cases, regardless of the sensor variations. The advantages and limitations of both approaches are summarised in Tables 7.1 and 7.2.
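The behaviour just described can be reproduced with a few lines of code. The sketch below is illustrative only: the mass assignments and the depth hypotheses are hypothetical, not the book's experimental values. It implements Dempster's rule of combination for two sensors assigning belief to defect-depth intervals:

```python
# Dempster's rule of combination for two mass functions over a small frame
# of depth hypotheses. 'THETA' carries the mass assigned to ignorance.
# Masses are illustrative; the rule assumes the evidence is not in total conflict.
THETA = frozenset({"2-3 mm", "3-4 mm", "4-5 mm"})

def combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass lost to conflicting evidence
    # Normalise by the non-conflicting mass (Dempster's rule)
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

m_ultrasonic = {frozenset({"3-4 mm"}): 0.8, THETA: 0.2}
m_eddy = {frozenset({"3-4 mm", "4-5 mm"}): 0.7, THETA: 0.3}
print(combine(m_ultrasonic, m_eddy))
```

Belief in a hypothesis is the mass committed exactly to it; plausibility adds the mass of every set consistent with it, giving the evidential interval [belief, plausibility] from which decisions are made. Because the conflict term rescales every fused mass, a small shift in one instrument's assignment can move the output markedly, which is the sensitivity noted above.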
The data fusion process is dependent on the type of defect detected and the equipment used. Data input is in the form of length or depth measurements, and preprocessing of the original signal into a common numerical format is required to build a data fusion engine able to combine information at the signal level. A large amount of experimental data needs to be collected prior to fusion, on multiple test samples and with several NDT instruments, in order to build a database from which prior probabilities can be assigned to each measurement. However, once this set of data has been collected, data fusion could be performed in real time using a computer program specifically designed for a particular inspection procedure. A schematic for the development of an NDT data fusion engine is presented in Fig. 7.2.

Table 7.1 Advantages and limitations of the Dempster-Shafer data fusion approach

Advantages:
• Provides a soft decision output
• Associates belief and uncertainty values with a decision output
• Better accuracy than the Bayesian approach for depth estimation (5% more accurate)

Limitations:
• Poor results if the systems are in relative disagreement
• Small changes in input data can produce important changes in the decision output
• The estimated defect depth is given as a depth interval, not a definitive depth value

Table 7.2 Advantages and limitations of the Bayesian data fusion approach

Advantages:
• Best suited for binary decision testing
• Low computational requirements

Limitations:
• Provides only a hard decision output (no uncertainty values)
• Prior probabilities can be difficult to obtain (the Bayes estimate is preferable)
• Output highly dependent on sensor efficiency
• Sensors need to be of similar efficiency (not suitable for combining information from sensors with high discrepancy)
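For the quantitative case, the Bayes estimate noted in Table 7.2 can be sketched as a conjugate normal update. The code below is a minimal illustration under the simplifying assumption of known measurement variances (the book describes a normal prior on μ and σ; here σ is taken as given, and all figures are hypothetical):

```python
# Minimal sketch of a Bayes estimate of defect depth from several sensors,
# assuming a normal prior (e.g. from a database of past inspections) and
# normal measurement errors with known variances. Numbers are illustrative.

def bayes_estimate(prior_mean, prior_var, measurements):
    """Fuse (value, variance) pairs with a normal prior; return (mean, var)."""
    precision = 1.0 / prior_var
    weighted = prior_mean / prior_var
    for value, var in measurements:
        precision += 1.0 / var   # each sensor adds precision
        weighted += value / var  # weighted by its reliability
    return weighted / precision, 1.0 / precision

# Prior depth 3.0 mm (variance 1.0); ultrasonic and eddy current readings:
mean, var = bayes_estimate(3.0, 1.0, [(3.6, 0.25), (3.4, 0.4)])
print(f"estimated depth = {mean:.2f} mm, variance = {var:.2f}")
```

The estimate weights each reading by its precision, so a database of prior inspections directly sets how strongly the prior pulls on the fused depth.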
7.2 The future of NDT data fusion
From the progress achieved in the fields of NDT data visualisation, artificial intelligence, pattern recognition and NDT data fusion, and from discussions with industries in need of an efficient and reliable NDT technique, a rapid evolution of the concept of NDT data fusion is foreseen.3-5 It is more economically viable to keep using conventional NDT equipment and to combine the data collected from such systems than to redesign existing apparatus. A single NDT instrument can be used for multiple inspection purposes, and the data produced combined using a data fusion system designed for a specific NDT task. Pattern recognition and signal processing are already used to build expert systems which detect and classify defects with minimum operator intervention. In a highly advanced NDT data fusion system, the expertise of the human operator could be coupled with information provided by a machine, and expert systems developed or adapted to represent this knowledge and make inferences. The development of a neural network could have four major advantages (a minimal sketch of such a network follows the list):
• to estimate the certitude of the sensor output (NN input) for different NDT systems;
• to extract significant information from each system;
• to extract relevant information from each system in relation to the type of inspection;
• to indicate a decision in a numerical or graphical format easy for an operator to interpret and ready for fusion.
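As a structural illustration only (the architecture, feature choice and weights are all assumptions; in practice the weights would be learned from a database of inspections), such a network might take per-sensor features and output a fused defect confidence:

```python
import numpy as np

# Structural sketch of a multi-layer perceptron fusing features extracted
# from two NDT systems into a single defect confidence. The architecture is
# hypothetical, and random values stand in for trained weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)  # input: 2 features x 2 sensors
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)  # output: fused confidence

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(features):
    """Forward pass: sensor features -> hidden layer -> fused confidence."""
    hidden = sigmoid(features @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

# e.g. [amplitude, depth estimate] from eddy current and ultrasonic systems
print(fuse(np.array([0.8, 3.4, 0.7, 3.6])))
```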
Fig. 7.2 Schematic for the development of an NDT data fusion engine (for each of sensors 1 to N: extraction of signal information, defect detection, identification of sensor type and geographical sensor location, identification of defect location and defect type elements, and data alignment; a database management module supplies prior probabilities to the data fusion engine, whose output feeds the decision)
A three-level expert system could be designed to assist in non-destructive examinations (Fig. 7.3).

Fig. 7.3 Design of an expert system to assist in non-destructive examinations (level 1: NDT sensors 1 to n, signal processing and data alignment; level 2: data fusion centre; level 3: expert system for situation assessment and failure analysis, feeding a human-machine interface and the final decision)

Signal processing, data alignment and correlation are performed at the first level, while data fusion operations are carried out at level 2 using conventional or specific data fusion algorithms such as the Bayesian or Dempster-Shafer theories or fuzzy logic. At level 3, an expert system is used for situation assessment and for identification of defect type and estimation of defect size, location and orientation. Other tasks such as failure analysis can also be performed, and information collected from an inspection compared with pre-existing information stored in a database. A human-machine interface analyses equipment malfunction and displays signals from the NDT systems in an analogue, digital, 2-D or 3-D colour coded format, as well as displaying information from the data fusion centre in the form of images and/or statistical and probabilistic numerical values. The operator is also informed of the output of the expert system operations and of any major danger which may be associated with the presence of a defect. The final decision is left to the human operator but could be automated if required.

Fully automated inspection using a remotely operated vehicle (ROV) for testing in hazardous environments has already been developed for the nuclear and offshore industries. By fitting multiple NDT sensors to an ROV, automatic logging of data, display of remote information in a safe environment and real-time data fusion could be performed (Fig. 7.4). Instrument and operator errors are significant factors in the sizing and location of flaws. The knowledge and expertise of a human operator could be coupled with information provided by a machine in order to reduce human error, and artificial systems could be developed further to represent this information and make inferences. Nuclear plants already have supervisory control configurations in which decision support systems based on the Dempster-Shafer theory aid human operators in performing high level tasks by processing and displaying uncertain information.6 The final phase of this approach could be the adaptation of multiple NDT systems to remotely operated vehicles for completely automated inspection.
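The three-level division maps naturally onto a processing pipeline. The skeleton below is an assumption-laden sketch, not a specification from the book: the stage names and the trivial averaging fusion are placeholders for the real instruments and algorithms described above.

```python
# Skeleton of the three-level architecture of Fig. 7.3. Each stage is a
# placeholder; a real system would wrap actual instruments and one of the
# fusion algorithms discussed earlier (Bayesian, Dempster-Shafer, fuzzy).

def level1_process(raw_signals):
    """Signal processing and data alignment for each NDT sensor."""
    return [{"sensor": i, "feature": s} for i, s in enumerate(raw_signals)]

def level2_fuse(aligned):
    """Data fusion centre: combine aligned evidence (averaging as a stand-in)."""
    return {"defect_confidence": sum(a["feature"] for a in aligned) / len(aligned)}

def level3_assess(fused):
    """Expert system: situation assessment and failure analysis."""
    verdict = "defect likely" if fused["defect_confidence"] > 0.5 else "no defect"
    return {"assessment": verdict, **fused}

# The human-machine interface displays the result; the final decision
# remains with the operator unless automation is required.
print(level3_assess(level2_fuse(level1_process([0.8, 0.6]))))
```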
Fig. 7.4 Schematic diagram of a fully automated ROV inspection (safe environment: data logging, data visualisation and data fusion; underwater environment: ultrasonic and eddy current sensors fitted to the ROV)
References

1. Gros XE, Strachan P, Lowden DW. A Bayesian approach to NDT data fusion, Insight, May 1995, 37(5), 363-7.
2. Gros XE, Strachan P, Lowden D. Theory and implementation of NDT data fusion, Research in Nondestructive Evaluation, 1995, 6(4), 227-36.
3. McNab A, Dunlop I. A review of artificial intelligence applied to ultrasonic defect evaluation, Insight, Jan. 1995, 37(1), 11-16.
4. Kirk I, Lewcock A. Neural networks - an introduction, Insight, Jan. 1995, 37(1), 17-24.
5. Windsor CG. Can we train a computer to be a skilled inspector?, Insight, Jan. 1995, 37(1), 36-49.
6. Hasegawa S, Inagaki T. Dempster-Shafer theoretic design of a decision support system for a large complex system, Proceedings of the Institute of Electrical and Electronics Engineers International Workshop on Robot and Human Communication, 1994.
Glossary
Accuracy: the extent to which an estimated or measured value approaches the actual true value (related to the systematic errors associated with an experiment or an instrument).
Acoustic emission testing: a technique which enables the detection of flaws by monitoring acoustic signals caused by plastic deformation of structures.
Acoustic impact testing: a technique which uses variations in sound from the tapping of an object on a surface to detect surface anomalies in components.
Algorithm: a set of rules which specifies a sequence of actions to be taken to solve a problem. Each rule is precisely and unambiguously defined so it can be carried out by a machine (computer).
Alpha particle (α): a positively charged particle emitted in the radioactive decay of certain isotopes.
Alternating current magnetisation: magnetisation of a material induced by a magnetic field generated by an alternating current.
Angle ultrasonic transducer: a sensor which transmits ultrasonic energy at a specific angle to the surface of a component.
Arc strikes: burn damage to a material caused by the breaking of an active electric circuit.
Array sensor system: a group of sensors combined in one system to reduce measurement time.
A-scan: a cathode ray tube image which displays signal amplitude against sweep time.
A-to-D converter: an apparatus which converts an analogue signal into a digital signal.
Bayesian statistical inference: a decision rule used to make probabilistic inferences about hypotheses.
Beam spread: the divergence of an ultrasonic wave traversing a medium.
Brittle fracture: rupture in a material without prior plastic deformation.
B-scan: a 2-D image of the cross-section of a component inspected.
Calibration slots: artificial slots or defects manufactured in a standard material to calibrate an instrument prior to inspection.
Coercive force: the magnetic field strength required to reduce remanent magnetism to zero.
Coil: a conducting material shaped in the form of one or multiple loops which can induce magnetic fields when conveying an electric current.
Convection: term used to describe the transfer of heat due to temperature differences.
Conversion screen: a screen used to convert incident photons into another form of energy.
C-scan: a 2-D plan of the scanned surface of a component inspected.
Curie: an international unit of the rate of radioactive activity (1 Ci = 3.7 × 10¹⁰ disintegrations per second).
Dead zone: the zone after an ultrasonic pulse in which an additional echo cannot be detected.
Defect: a flaw or discontinuity in a material which may affect its structural integrity and/or may make it unsuitable for the task it has been designed for.
Diamagnetic material: material repelled by a magnet and with a magnetic permeability of less than 1.
Ductile fracture: a break in a material which has undergone plastic deformation.
Eddy current examination: detection and quantification of surface and sub-surface flaws through measurement of the variations in an electric current induced by a time-varying magnetic field in the material inspected.
Edge effect: phenomenon which causes signal distortion when a probe approaches the edge of a sample.
Electromagnet: a ferromagnetic material which behaves as a magnet when the coil surrounding it is energised by an electric current.
Electron: a negatively charged subatomic particle of charge 1.602 × 10⁻¹⁹ coulombs.
EMAT: apparatus generating ultrasonic, horizontally polarised shear waves from a coil excited by an alternating current and placed close to the surface of a conductive material.
Far field (also known as the Fraunhofer zone): the distance beyond which the decrease in ultrasonic signal amplitude is inversely proportional to the distance of the surface of the material inspected from the sensor.
Ferromagnetic material: a material whose magnetic resistivity and magnetic permeability are high and depend upon the strength of the magnetising field (e.g. iron, nickel, cobalt). Usually exhibits the hysteresis phenomenon.
Fill factor: a term used to describe the level of electromagnetic coupling occurring between a test coil and the material that surrounds it.
Fuzzy logic: a theory developed to quantitatively express imprecision between categories in the form of membership functions.
Gating: the process of selecting a portion of a signal on account of time, magnitude or phase.
Gauss: a unit of magnetic flux density.
Hall effect: a change in voltage which occurs at right angles to the direction of the electric current and the magnetic field in a conductor stimulated by an electric current.
Heuristic programme: a programme which attempts to improve its own performance as a result of learning from previous actions within the programme.
Heuristic rules: an approach based on commonsense rules and trial and error rather than comprehensive theory.
Holography: an optical imaging process in which reflected light from an object is captured on photographic film without the use of a lens.
Hue: the characteristic by which a colour can be classified as red, green, blue, etc.
Hysteresis loop: a closed curve formed by plotting the magnetic flux density B versus the magnetic field H.
IACS: a standard conductivity measurement in which the conductivity of unalloyed copper is set at 100 per cent.
Image processing: technique used to filter and enhance the quality of an image.
Image quality indicator: a small reference specimen radiographed with the specimen under test to ensure the quality of a radiograph.
Image segmentation: the division of an image into multiple regions. Specific parameters representative of a scene, such as image intensity, texture, colour, spatial arrangement, shape and geometry, are used for selection of each region before segmentation occurs.
Impedance: the resistance of a material to the passage of an electric current.
Inductance: the magnetism produced in a ferromagnetic material by an external magnetising force.
Infrared radiation: the region of the electromagnetic spectrum associated with heat transfer.
Intensifying screen: metallic or fluorescent screen used to convert incident X-radiation into light energy or electrons.
Knowledge-based system: a software system composed of a knowledge-based module and an inference engine used to assist in decision making.
LASER (Light Amplification by Stimulated Emission of Radiation): an apparatus which produces a beam of coherent light.
Leak detection: a technique which consists of injecting a search gas into a sealed enclosure and monitoring loss of gas to check for leaks.
Lenz's law: if an emf is induced in a material, an electrical current is created which flows in a direction which tends to oppose the cause of the induced emf.
Lift-off: a term used to describe the distance between the test coil and the test object.
Liquid penetrant testing: a method of inspection in which the surface of the component to be tested is covered with a visible penetrating liquid which concentrates in cracks.
Magnetic field inspection: a technique used to detect surface and sub-surface defects by monitoring the variation of a magnetic field induced in the material tested.
Magnetic permeability: the ease with which a magnetic field can be induced in a material.
Magnetic susceptibility: the amount by which the relative magnetic permeability of a medium differs from unity.
Magnetising force: the force used to create a magnetic flux in a magnetic circuit.
Maxwell's equations: the fundamental equations of electromagnetic field theory.
Measurand: the physical quantity measured by an instrument.
Microwave testing: a technique in which microwave radiation directed onto a test component is detected to check for the presence of flaws in composites.
MPI: an NDT technique in which the component tested is magnetised and magnetic particles are sprayed onto its surface to reveal cracks.
Multifrequency system: apparatus capable of generating more than one frequency in a sequential or simultaneous manner.
Multi-layer perceptron (also known as a feed-forward network): a type of neural network in which the nodes are arranged in layers. It is composed of an input layer, an output layer and any number of hidden layers.
Multisensor data fusion: a theory which can be described as the synergistic use of information from multiple sources to assist in the overall understanding of a phenomenon, and to measure evidence or combine decisions.
Near field zone (also known as the Fresnel zone): disturbance zone after the initial ultrasonic pulse in which defects cannot be sized or detected.
Neural network: a computer system designed to produce a set of output values from a set of input data.
Non-ferromagnetic material: a material into which a magnetic field cannot be induced.
Paramagnetic: a phenomenon in some materials in which the susceptibility to magnetism is positive and the magnetic permeability is slightly higher than unity and independent of the magnetising force.
Precision: related to the random error distribution associated with an experiment or an instrument.
Prods: hand-held electrodes used to pass a magnetising current through a material.
Pulse-echo method: an ultrasonic method which uses back-echoes to detect flaws in components.
Radiographic inspection: X-ray, gamma or neutron radiography techniques which use penetrating radiation to detect internal faults in components.
Rayleigh wave: an ultrasonic wave which propagates at or near the surface of a material.
Remanent magnetic field (also known as residual magnetic field): the magnetic field remaining in a ferromagnetic material after the magnetising force has been reduced to zero.
Resolution: a measure of the capability of an instrument to distinguish between two signals at very small distances from each other.
SAFT: an ultrasonic technique based on the concept of collecting waveforms from a scanning transducer and processing them as a single unit.
Sensitivity: the lowest limit of detectability of a signal, defect or detail on an image.
Signal-to-noise ratio: the ratio of signal amplitude to noise amplitude.
Skin depth: the depth at which the intensity of an induced eddy current has decreased to 37 per cent of its surface value.
Skin effect: a phenomenon by which high frequency electrical currents tend to be concentrated in a thin surface layer of conductors.
Skip distance: the distance from the point at which the ultrasound beam first enters the test specimen to the point at which the back-reflected pulse first encounters the front surface.
Synergism: combination of the action of two or more sensors resulting in enhancement of the efficiency of a process.
Thermal conductivity: a measure of the rate of heat flow through a given area and thickness in the presence of a temperature gradient.
Thermography: a technique which consists of mapping isotherms over a surface.
TOFD: an ultrasonic technique based on measuring the time separating two waves diffracted from the extremities of a defect.
Transmission: a physical process by which energy waves travel through a medium.
Ultrasonic testing: a technique which uses variations in the echoes of ultrasonic pulses injected into a material to detect and size internal defects.
Vibrothermography: a thermal inspection technique which uses cyclic vibrations to induce heat in a material.
Visual inspection: an optical technique carried out with or without optical aids to inspect the surface of materials.
Index
Acoustic emission 71
Acoustic impact 72, 98, 107
Alternating current field measurement 58-59
Alternating current potential drop 57-58
Artificial intelligence 6, 31-34; neural network 14, 32-34
Bayes: cost function 24; criteria 24-25; risk 14, 24; Thomas 25
Bayesian: estimate 157-162; inference 22, 25-26, 95-126, 114-116, 143-157, 166, 168-169, 183-184; statistics 14
Boroscopy 44-45
Chi-squared test 133-135, 139
Classical inference 22-23
Composite materials 95-96 (see also impact damage): CFRP 103-104; coin tapping 98, 107; computer tomography 107-108; eddy current testing 100, 104-107, 109-111; examination 104; GFRP 101; infrared thermography 99, 108-110; inspection results 104-114; laser holography 100; NDT methods 97-101; panels 103-104; radiographic inspection 99-100, 110-111; shearography 100; Tedlar 112, 122; ultrasonic testing 98-99; visual inspection 98, 107, 109-110
Data fusion (see also NDT): applications 34; centralised 15; definition 1, 5, 13; distributed 15-16; factors 181; flow diagram 14, 142, 185; human 6; methodology 22-34; models 6, 13-22; paradigm 14; performance assessment 174-176; pixel level 31-32, 170-174; process 13-14; sensors 8-13
Data integration 13, 114-121
Decision 22-31: binary 18-19; hard 142-143; output 17-19; probability 20-21; rate 18; soft 142-143
Delamination 102-103, 107-109
Demagnetisation 51
Dempster-Shafer 13, 17, 22, 26-30, 162-170, 184: belief function 27; data fusion 35, 162-170; decision 29; evidential interval 29, 162-166; plausibility 29; rule of combination 28
Disbond 96-97, 101-107
Eddy current: bridge circuit 54; coil 52, 54-55; conventional 51-55; impedance diagram 53; multi-sensors 54-55; pulse 55, 57; remote field 57; skin depth 53-54; testing 51, 100, 105-107, 109-112, 129-137; variables 55
Endoscopy see Boroscopy
Expert system 186
Evidential reasoning see Dempster-Shafer
Fuzzy logic 22, 30-31: membership function 30
Gaussian normal distribution 132-133, 139
GEP theory 15, 22, 30
Helicopter rotor blade 101-109
Holography 45-46, 100
Hypothesis: multiple 20-21; testing 114-116
Image processing 89-91
Image segmentation 31-32
Impact damage 96-97, 109-114
Infrared thermography 71, 99, 108-110
Introscopy see Boroscopy
Kolmogorov-Smirnov test 135-136, 139: normal plots 136-137; P-values 135, 139
Laser holography see Holography
Leak detection 72
Likelihood ratio criterion 23-24
Liquid penetrant inspection 46-48
Magnetic: field 48, 51, 58; flux density 48; flux leakage 48; hysteresis 49-50; particle inspection 48-51
Magnetising currents 51
Markov random field 22, 32
Maximum a posteriori 23
Microscopy 44-45
Microwave inspection 72
Multiple sensors 6-8, 16-17 (see also eddy current): combination 16-17; integration 13; output 17-22; parallel suite 16; serial suite 16-17
NDT: computers 72-73; data fusion 34-36, 95-96, 107, 114-116, 127-187, 141-178; definition 1, 43-44; expert systems 73, 186; performance assessment 73-77; underwater 186-187; visualisation 73, 91-92
Neural network see Artificial intelligence
Neyman-Pearson test 24
Perceptron 32-33
POD 10, 12-13, 74-75, 116-118
Probability 20-25, 133, 145 (see also POD): a priori 26; density function 132, 138; posterior 148-162; prior 166
Radiography: computer tomography 69-70, 100, 107-108; electromagnetic radiation 67-68; gamma rays 66-68; geometric projection 66; geometric unsharpness 68; half value thickness 68; image formation 66, 68; inspection 66-71, 99-100, 110-111, 140-141; IQI 68; neutron radiography 69-70; principle 66-68; radiation intensity 68; real time radiography 69; X-rays 66-68; X-ray fluorescence 70
Receiver operating characteristic 10, 12-13, 75-77, 120-121, 175-177
ROC see Receiver operating characteristic
ROV 186-187
Sensor: efficiency 17-20, 129-132; errors 10-11; information 8-9; management 8-13; performance 10-13; selection 8-10
Shearography 72, 100
SQUID magnetometers 71
Tomography see Radiography
Ultrasonic: acoustic impedance 63; A-scan 59-60; attenuation 64; B-scan 60; C-scan 60; couplant 62; DGS 62; EMAT 65; far zone 62; frequency 59-60; near-field zone 60, 62; principle 59-60; P-scan 60; pulse echo 59-60; reflection 63-64; SAFT 65; shear waves 63-64; testing 59-66, 98-99, 137-139; through transmission 59, 60; time-of-flight 65; TOFD see time-of-flight; transducers 60-62; transmission 63; velocity 63; wave propagation 60; wavelength 62-64
Vagueness 31
Visual inspection 44-46, 98, 109-110 (see also microscopy, boroscopy, holography)
Visualisation: animation 88-89; CAD 83, 91; colours 84-86; data visualisation 83-87; definition 82-83; illusion 86-87; NDT 73, 87, 89, 91-92; RGB 85; spreadsheet 84; tools 84; virtual reality 88-89; visual data analysis 83; volume 87-88
Weld: eddy current testing 129-137; heat affected zone 128, 171-174; inspection 127-179; lack of side wall fusion 140-141; NDT 129-141; radiographic inspection 140-141; samples 128; toe crack 140, 171-174; ultrasonic examination 137-139
Plate 1 Visualisation, using an Excel spreadsheet, of eddy current data from the inspection of the surface of a helicopter rotor blade (legend: colour-coded elevation bands in mm, from 0 to 1.088)
Plate 2 Combination of 3-D colour coded variations in elevation on the surface of a helicopter rotor blade and contour map. Defects are shown in red and yellow
Plate 3 Typical ACFM signal display from a defect (Bx and Bz plots and butterfly plot are respectively displayed on the left and right of the screen)
Plate 4 Multiple sequences from a computer animation procedure showing defects on the sub-surface of a section of a helicopter rotor blade
Plate 5 An example of results which may be obtained by applying a different visualisation technique to the same set of data (see Plate 2)
Plate 6 Three dimensional visualisation of impacts on the surface of a composite material inspected using an eddy current system
Plate 7 Multiple sequences from a computer animation procedure used to model the evolution of a disbond on a helicopter rotor blade under stress, using contour plots to delimit the defect area
Plate 8 Image processing on a thermograph (original image) using posterisation, edge enhancement and contour lines. Image processing procedures help in determining the boundaries of heat flow and facilitate defect location - in this case impacts on the surface of a composite material
Plate 9 Thermograph of sample 2, side 1, showing impacts of 2.0 J, 2.5 J and 3.0 J
Plate 10 NDT visualisation of the surface of a steel plate using an eddy current system and showing a toe crack and a HAZ crack
Plate 11 Autocad visualisation of a toe crack (red, right-hand side) on a metallic plate (green) with a centreline weld (blue)
Plate 12 Detection of surface damage by visual inspection of Ecureuil helicopter rotor blade sample 2
Plate 13 Thermograph from infrared inspection of sample 2 (Ecureuil rotor blade). The hot spot shows an area of delamination
Plate 14 Bayesian probability map of the data from Plate 17 showing the degree of support of a defect being present on the area inspected
Plate 15 Results of the eddy current inspection of Ecureuil rotor blade, sample 1
Plate 16 Results of the eddy current inspection of Ecureuil rotor blade, sample 2
Plate 17 Visualisation of eddy current data from the inspection of a section of the BV234 helicopter rotor blade
Plate 18 Detection and visualisation of impact damage with eddy current inspection on sample 1