MATLAB® for Photomechanics - A Primer
Elsevier Science Internet Homepage: http://www.elsevier.com
Consult the Elsevier homepage for full catalogue information on all books, journals and electronic products and services.
Elsevier Titles of Related Interest
CHESNOY Undersea Fiber Communication Systems. ISBN: 012171408X
WOLF (Series Editor) Progress in Optics. Series ISSN: 0079-6638
RASTOGI & INAUDI (Editors) Trends in Optical Non-destructive Testing and Inspection. ISBN: 0-08-043020-1

Related Journals
Computers and Electrical Engineering; Image and Vision Computing; ISA Transactions®; Optics and Lasers in Engineering; Optics and Laser Technology; Optics Communications; Signal Processing; Signal Processing: Image Communication
A sample journal issue is available online by visiting the homepage of the journal (homepage details at the top of this page). Free specimen copy gladly sent on request: Elsevier Science Ltd, The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK
MATLAB® for Photomechanics - A Primer
Anand Krishna Asundi
School of Mechanical and Production Engineering, Nanyang Technological University, Singapore
2002
ELSEVIER
AMSTERDAM - BOSTON - LONDON - NEW YORK - OXFORD - PARIS - SAN DIEGO - SAN FRANCISCO - SINGAPORE - SYDNEY - TOKYO
ELSEVIER SCIENCE Ltd The Boulevard, Langford Lane Kidlington, Oxford OX5 1GB, UK
© 2002 Elsevier Science Ltd. All rights reserved. This work is protected under copyright by Elsevier Science, and the following terms and conditions apply to its use:

Photocopying
Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use. Permissions may be sought directly from Elsevier Science via their homepage (http://www.elsevier.com) by selecting 'Customer support' and then 'Permissions'. Alternatively you can send an e-mail to: [email protected], or fax to: (+44) 1865 853333. In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 207 631 5555; fax: (+44) 207 631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works
Tables of contents may be reproduced for internal circulation, but permission of Elsevier Science is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage
Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter. Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Global Rights Department, at the mail, fax and e-mail addresses noted above.

Notice
No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.
MATLAB is a registered trademark of The MathWorks, Inc. For MATLAB product information, please contact: The MathWorks Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, USA Tel: +1 508 647 7000; Fax +1 508 647 7101 E-mail:
[email protected] Web: www.mathworks.com First edition 2002 Library of Congress Cataloging in Publication Data A catalog record from the Library of Congress has been applied for. British Library Cataloguing in Publication Data A catalogue record from the British Library has been applied for.
ISBN: 0-08-044050-9
∞ The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper). Printed in The Netherlands.
Supported by Qian Kemao, V. Krishnakumar, Liu Tong, Wang Jun and Xu Lei
Dedicated to my parents Kala and Krishna Asundi
Acknowledgements
School of Mechanical and Production Engineering, Nanyang Technological University
Ministry of Education, Singapore
Dr. V. Murukeshan
Dr. Sujata Surinder Kathpalia
Foon Lai Kuen
Centre for Graphics and Imaging Technology
Contents

Chapter 1  Introduction
1.1  Introduction  1
1.2  Principles of Optical Methods  2
1.3  Photomechanics  4
1.4  Introduction to MATLAB®  7
1.5  MATLAB® for Image Processing  12

Chapter 2  MATLAB® for Photomechanics (PMTOOLBOX)
2.1  Introduction  15
2.2  Image Processing in Photomechanics  15
2.3  MATLAB® Demonstration  32
2.4  Conclusion  37

Chapter 3  Digital Photoelasticity
3.1  Introduction  39
3.2  Digital Polariscope  40
3.3  Phase-Shifting in Digital Photoelasticity  45
3.4  Phase-Shifting Method with a Normal Circular Polariscope  48
3.5  Isoclinic Ambiguity  50
3.6  MATLAB® Demonstration  59
3.7  Dynamic Phase Shift Photoelasticity  68
3.8  MATLAB® Demonstration  74
3.9  Conclusion  77

Chapter 4  Moiré Methods
4.1  Introduction  79
4.2  Digital Moiré  81
4.3  MATLAB® Demonstration  83
4.4  Moiré of Moiré  87
4.5  Moiré of Moiré Interferometry  92
4.6  Gabor Strain Segmentation  93
4.7  MATLAB® Demonstration  102
4.8  Differentiation of Low Density Fringe Patterns  108
4.9  MATLAB® Demonstration  111
4.10  Conclusion  114

Chapter 5  Digital Holography
5.1  Introduction  115
5.2  Digital Holography  116
5.3  Digital Holographic Interferometry  124
5.4  MATLAB® Demonstration  126
5.5  Conclusion  141

Chapter 6  Speckle Methods
6.1  Laser and White Light Speckles  145
6.2  MATLAB® Demonstration  146
6.3  Sampled Speckles  149
6.4  Defocus Detection Using Speckles  154
6.5  MATLAB® Demonstration  157
6.6  Curvature Contouring  158
6.7  MATLAB® Demonstration  160
6.8  Conclusion  164

Chapter 7  Conclusion  165

References  169
Appendix A  MATLAB® Functions Used in Primer  179
Appendix B  PMTOOLBOX Functions  181
Appendix C  MATLAB IP TOOLBOX (Version 3.1) Function List  184
CD of M-files and demo programs
Chapter 1
Introduction

1.1 Introduction
Photomechanics is a suite of experimental techniques that use optics (photo) to solve problems in mechanics. These techniques, which have been in existence for quite some time, have always lagged behind other experimental techniques, notably the electrical strain gauge method. Furthermore, numerical techniques such as the Finite Element Method were developed as alternatives to experimental mechanics with great success. The main reason for the restricted use of photomechanics is the difficult interpretation of data and the lack of post-processing and display routines. While the fringe pattern, the end product of the optical method, provides whole-field visualization of deformation or of stress and strain, it is not presented in a form useful to an engineer. There is a need to process the fringe pattern further and display quantitative information in a manner best suited to the problem being solved. Both experimental and numerical means have been proposed to delineate the necessary information from the fringe patterns. In particular, over the last couple of decades, various attempts have been made to automate these techniques via software. However, the software is in general too specific, built for the patterns and images of the developer's laboratory, and it cannot always be tailored or modified for the images obtained in the end-user's experiment. Thus the onus is on users to develop their own software. This is generally time consuming and detracts from the user's original intention of testing a particular application.

With this in mind, this primer is written to help the user get started with the experimental analysis quickly. It is based on MATLAB®, which has dedicated and optimized routines for a variety of image processing tasks. These can be readily scripted together, along with some additional mathematical expressions, for any specific experimental technique.
This primer assumes that the reader has some background knowledge of optical methods for strain analysis. Except where necessary, no effort is made to explain the basic concepts of these methods in detail; suitable references are provided which will give the reader a deeper understanding of each optical method. In particular, effort is put into explaining the concepts of some of the newer techniques which use MATLAB® as the tool for data processing, and the particular equations that are programmed in the MATLAB® m-files. In some cases the theory is interspersed with the MATLAB® routines.
1.2 Principles of Optical Methods (1.1)
Optical methods offer significant advantages over their electronic counterparts for mechanical measurements. Remote, non-contact measurement and whole-field visualization of deformation are the major advantages. With the growth of fiber-optic sensors, their passive, dielectric and insulating properties have also become important. In addition, the absence of electrical power at the point of measurement enables precise measurement without the risk of electrical shorting, heat generation, etc. However, electrical sensors and devices have a longer, tried and tested history, and so most current commercial sensors for engineering measurement are still electrically based, with the ubiquitous foil strain gauge being a prime example in engineering mechanics. Nevertheless, optical techniques are gaining in popularity as the limitations of electrical sensors have been exposed in precise measurements with high spatial resolution over large areas. Furthermore, the availability of low-cost, off-the-shelf optical components and devices has made optical systems more affordable and compact. Developments in lasers, optical components and devices, and digital and electronic detectors and processing have given optical methods a fresh edge over electrical sensors. This has resulted in a resurgence of design and development of traditional methods. In addition, a host of new applications of optical sensing and detection are growing in the areas of micro-mechanics and micro-systems as well as biomedical and bioengineering.

Optical techniques for mechanical measurement can be classified as point sensors and whole-field sensors. Point sensors measure deformation, strain, temperature or another physical quantity at discrete points. They offer high speed and high sensitivity of measurement. However, many applications require whole-field deformation information so that critical areas can be identified. Since light is used for measurement, whole-field visualization and processing can proceed at very high rates. In this primer, the discussion is restricted to whole-field sensors.

Optical methods also come in four different flavours, characterized by the optical property they monitor. These are:

Intensity Based Techniques
External disturbances causing changes in the amplitude (intensity) of the light are the basis for these sensors. Their main advantage is that they are rugged. However, their sensitivity leaves much to be desired, although with active illumination they are finding applications in different fields. Fluctuations
in the light source, especially with laser diodes, can be a problem in some applications. The suite of moiré methods using low-frequency gratings is typical of techniques that use active illumination to improve sensitivity and extend the range of applications in engineering measurement. Speckle correlation methods fall into this category as well.

Interferometric/Diffraction Based Techniques
These form the majority of techniques used in engineering measurement. They use the wave nature of light, and thus the deformation sensitivity is of the order of the wavelength of light (i.e. a few hundred nanometres). Many systems are available for point-wise and whole-field detection, and compact systems are being developed incorporating laser diodes and fiber optics with digital imaging and processing. Moiré interferometry and holographic interferometry are two widely used technologies that utilize these phenomena for engineering measurement applications. Speckle methods use these principles as well, albeit in a subtle way.

Polarization Based Techniques
These techniques measure changes in the state of polarization (SOP) caused by external loading to monitor stresses in components. They are more reliable than intensity-based techniques and, at the same time, more rugged than interferometric methods. Their sensitivity also lies between that of the intensity and interferometric sensors. Overall, for engineering measurement applications, this approach is the easiest to use, provides adequate sensitivity and is reliable. However, there are some limitations. A transparent model needs to be fabricated for general use, and if the technique is applied to the actual machine part, sensitivities are low. Hence techniques are being explored to improve sensitivity; novel lighting and improved data acquisition and processing have been proposed to overcome these drawbacks. Low-level birefringence measurements are in increasing demand within the micro-electronics sector for residual stress measurement and for characterizing new materials such as CaF2.

Spectroscopic Techniques
Spectroscopic methods rely on light-matter interaction to determine the characteristics of the specimen. While traditional spectroscopy is applied for chemical analysis, spectroscopic methods are being used for stress analysis as well. Raman spectroscopy is currently being used for the evaluation of residual stresses in semiconductor devices and micro-mechanics. Micro-Raman
spectroscopy is capable of analyzing the local stress at the micrometer scale and is extensively used for local stress evaluation. Residual stresses cause a spectral shift in the Raman signal that can be detected. Since Raman techniques are currently point-wise methods, they do not directly benefit from MATLAB® image processing routines.

1.3 Photomechanics (1.2-1.10)
Two of the major techniques used extensively in experimental mechanics are photoelasticity and the moiré/grid methods. These methods exploit the whole-field characteristic of the optical method. However, they still require a fair amount of processing to deduce the engineering parameters of interest, such as strain and stress. Hence, while the techniques provide a whole-field picture, processing was primarily local rather than global. In the early days (1950s), efforts were made to use optical techniques to process the picture and extract the desired information. Thus, from this early stage, photomechanics (optical methods in experimental mechanics) involved an all-optical system, i.e. both recording and processing of data were done optically. It is only recently that telecommunication systems have been moving from electro-optic systems to all-optical systems; surprisingly, photomechanics is going in the opposite direction, developing opto-electronic systems in which the sensing is optical while the recording and processing are electronic or digital.

Photoelasticity and moiré are two of the oldest optical techniques for experimental stress analysis. Brewster first observed the photoelastic effect in the early nineteenth century, while Robert Hooke used grids, the basis of the moiré methods, in the seventeenth century to verify his stress-strain equations. Lord Rayleigh first proposed moiré as a measurement tool in the late nineteenth century. Despite these early beginnings, both techniques were systematically developed into tools for experimental stress analysis only in the second quarter of the last century. The works of Coker and Filon [3.1] and Frocht [3.2] were pioneering in developing photoelasticity into a viable experimental tool for research and industrial applications. Photoelasticity was further extended to three-dimensional stress analysis with the development of the frozen-stress method, and to this day it remains one of the few experimental techniques that can be applied for internal stress distribution measurement. Moiré methods were developed by personalities such as Guild [4.1], Theocaris [4.2], and Durelli and Parks [4.3]. Photoelasticity has generally found greater acceptance in industrial applications and was widely used for problems which could not be
easily solved using analytical methods. Moiré methods, although in principle very simple, did not garner the level of acceptance that would have been expected of such a versatile method. The reason for this could lie in the information that could be directly gleaned from the respective fringe patterns. Photoelasticity provides contours of principal stress difference and can visually display regions of high stress concentration, which designers and analysts can directly relate to. This further enhanced the role of photoelasticity in experimental stress analysis. Moiré methods, on the other hand, provide displacement contours from which strain and stress had to be derived. This additional step was tedious and time consuming, and thus the technique was not as widely accepted. Furthermore, specimen grating preparation and the relatively low sensitivity of the method at that time restricted the application of moiré methods to large-deformation problems, which is the emphasis of the book by Durelli and Parks [4.3]. Nevertheless, the versatility of the moiré method shone through in techniques for topographic measurement and for slope and curvature measurement, for which no other technique was available. Theocaris, in his book [4.2], exemplifies these applications of the moiré method. Indeed, in those early days the moiré method was developed as a tool to complement, rather than compete with, photoelasticity and was thus applied to problems which photoelasticity could not address. Over the years, the versatility of the moiré method was shown in the increasing number of applications to which it was applied. Techniques were also developed to enhance the sensitivity of the method to make it a viable alternative to photoelasticity. Other applications of the moiré method developed rapidly, and the advent of computers and fringe processing, which eased the tedium of data processing, augured well for wider usage of the moiré method.
Moreover, developments in optics, in particular the ability to manufacture and print low-cost, high-frequency specimen gratings, enabled moiré methods to achieve interferometric sensitivities, making moiré a truly versatile method. Photoelasticity, on the other hand, became an established tool and was widely used as a technique to introduce students to the field of photomechanics. There were attempts to extend photoelasticity into the plastic domain (photoplasticity); however, the need to model the actual structure using a birefringent material was a cause for concern in some applications. The birefringent coating method was developed to apply photoelasticity to the actual specimen, but the sensitivity was low, and alternative techniques were developed to overcome this problem.

The development of the laser in the early 1960s heralded the advent of new optical techniques in experimental mechanics. Leading the way was holographic interferometry, proposed initially by Powell and Stetson [5.3]. Holography is a two-step process. In the first step a hologram, which is a photograph of the interference pattern between light scattered from the object and a reference beam, is recorded. The next step, termed hologram reconstruction, reconstructs the object wave by illuminating the hologram with the reference beam. The essence of holography is to store and reconstruct the entire object wave, i.e. both its amplitude and phase. In holographic interferometry, two holograms of the object, before and after deformation, are interferometrically compared to reveal fringes which contour the total displacement incurred by the object between exposures.

While holographic interferometry promised a revolution in the field of experimental mechanics, it failed to deliver completely. While developers and proponents of the method highlighted and demonstrated its versatility and wide applicability, fringe interpretation was even more complicated than for the moiré methods. This was because the method was sensitive to the complete displacement vector, which has three unknown components. By suitable alignment of the recording and reconstruction optics, the sensitivity of the technique could be adjusted to measure specific displacement components. Real-time recording was particularly sought after, and the concept of TV-holography [1.6] evolved. A hologram of the object before deformation is recorded using a video camera. This image is stored electronically and subtracted from the live hologram of the deformed object. The subtracted image is high-pass filtered and rectified to reveal real-time deformation fringes. Analysis was still primarily qualitative. Recently, developments in digital holography [5.9] have caused the technique to resurface, but it faces stiff competition from other experimental techniques that have also benefited from the growth in technology. A spin-off from holographic interferometry was the field of speckle techniques.
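The subtract-and-rectify principle of TV-holography lends itself to a short numerical illustration. The sketch below is not the book's PMTOOLBOX code; it uses simulated speckle fields (the random phase phi and the tilt-like deformation phase dphi are assumptions made for the demonstration) to show how subtracting a stored reference frame from a live frame and rectifying the result reveals correlation fringes.

```matlab
% Illustrative sketch of subtraction-mode TV-holography (simulated data).
[x, y] = meshgrid(1:256, 1:256);
phi  = 2*pi*rand(256);            % assumed random speckle phase
dphi = 2*pi*(x + y)/256;          % assumed tilt-like deformation phase
Ib = 1 + cos(phi);                % stored intensity before deformation
Ia = 1 + cos(phi + dphi);         % live intensity after deformation
fringes = abs(Ia - Ib);           % subtract and rectify
colormap(gray); imagesc(fringes); % display the correlation fringes
```

Since Ia - Ib = -2 sin(phi + dphi/2) sin(dphi/2), the rectified image is dark wherever dphi is a multiple of 2*pi, i.e. wherever the two speckle fields correlate, which is what produces the deformation fringes.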
It was observed that speckles (more precisely, laser speckles), created by the random interference of light scattered from the surface of the object, were a source of noise limiting the resolution and sensitivity of holography and holographic interferometry. Burch [6.1], in his seminal work, showed that these speckles could themselves be used as displacement-sensing elements, and thus was spawned the field of speckle metrology [6.2]. The speckle method, as with holography, was a two-step process: a recording step, in which a double-exposed speckle pattern (specklegram) of the object before and after deformation was recorded, and a reconstruction step, which could be done on a point-wise or a whole-field basis. In the latter, the specklegram was filtered to reveal the displacement vector at a point or the whole-field contours of a displacement component. The versatility of the speckle method was also demonstrated, and its similarities to and differences from the moiré methods were highlighted in various works [6.4]. Research into laser speckle metrology exposed some of the drawbacks of the technique. One of the major concerns was that the
speckle movement could not be simply explained in general cases, because the speckles are not physically attached to the surface of the object being measured. Thus there was a need to rethink the application of the laser speckle method. Out of this arose the white-light speckle method [6.3]. While the white-light speckle method for strain analysis was systematically studied only after the development of the laser speckle method, earlier work had shown that a white-light speckle or random dot pattern can generate moiré fringes similar to those obtained using gratings.

White-light speckles show great similarities with the moiré method, with the following differences. The sensitivity (i.e. grating pitch) of the moiré method is pre-selected at the start of the experiment, whereas the speckle methods exhibit variable sensitivity, which can be selected after the experiment is performed. Thus specimen preparation for the moiré method, especially in-plane moiré, required some care and patience, while specimen preparation in the speckle methods was minimal. However, the time spent on specimen preparation resulted in good-quality displacement fringes in the moiré method, while for the speckle method the fringe pattern was noisy despite point-wise or whole-field filtering.

The speckle method also provided the impetus for electronic and digital processing techniques to make a significant impact in photomechanics. Techniques such as Electronic Speckle Pattern Interferometry and Digital Speckle Correlation methods [6.5] were actively researched. This led to increased use of digital fringe processing in photomechanics and to the development of the Fourier Transform and Phase Shifting methods for quantitative evaluation of fringe patterns. However, each laboratory had its own unique approach to fringe processing, based on its specialization and equipment.
Hence part of the experimenter's job was to develop image-processing tools as a precursor to the experimental work. While this was justified where new techniques were being developed, it did little to promote wider use of photomechanics techniques. There was, therefore, a need to develop routines within the framework of a broader programming language. In this book MATLAB® is chosen as that framework, since it uses matrix processing routines and digital images are a matrix of intensity information.

1.4 Introduction to MATLAB®
The name MATLAB® is short for MATrix LABoratory [2.1]. MATLAB® uses matrices as its fundamental building block to carry out high-performance numerical analysis, signal processing and scientific visualization tasks. Like any
language, MATLAB® has a grammar which has to be adhered to; failing this, an error is reported or actions undesirable to the user are performed. The power of MATLAB®, however, is that, unlike high-level programming languages such as C and FORTRAN, which require the data processing and graphical visualization to be programmed, MATLAB® provides most graphics routines as library functions which can be scripted for complex data representation and analysis based on the user's needs.

When MATLAB® is invoked, a command window is opened in the MATLAB® environment. The window opens with a '>>' prompt and a blinking cursor, ready to take in commands. Typing commands at the MATLAB® command line permits small programs to be built up. Data and commands typed directly in the command window are lost once we exit the MATLAB® environment; they can also be stored in MATLAB® files, an aspect of the program discussed later. For extensive data analysis and processing, programs (macros) need to be written, and they should follow MATLAB® syntax. A typical example of a simple arithmetic operation at the command line is

>> 2+3

Pressing Enter gives the result

ans = 5

To store the value in a variable for later use,

>> a=2+3
a = 5

To suppress the display of the result, place a ';' at the end of the statement, e.g.

>> a=2+3;

Note that the use of the semicolon is very important when dealing with images. Without the semicolon, the entire matrix of image pixels will be dumped to the screen.
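As a small illustration of why the trailing semicolon matters for images (the variable name and size here are arbitrary choices, not conventions from this book):

```matlab
% An image-sized matrix: the semicolon suppresses dumping 65,536 numbers.
img = zeros(256);   % a 256x256 'image' of zeros; nothing is printed
whos img            % reports the name, size and class of the variable
```

Omitting the semicolon on the first line would scroll the entire 256x256 matrix through the command window.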
Matrices in MATLAB® can be represented as variables, and matrix operations proceed as for the arithmetic operations described above. Thus

>> C = A+B;

is the sum of two matrices A and B, with the result placed in a third matrix C. Matrices can be input at the command line by writing the elements of the individual rows, with a semicolon separating the rows. Thus

>> A=[1 2 3 4; 5 6 7 8]

represents a 2x4 matrix with the first row having elements 1, 2, 3, 4 and the second row elements 5, 6, 7, 8. Written in this form, the matrix will be displayed in the command environment as

A =
     1     2     3     4
     5     6     7     8

To avoid the tedium of generating such matrices by manually typing the elements, they can be filled in using the colon operator as follows:

>> A=[1:4; 5:8]

This generates the same matrix as above.

The MATLAB® command environment has a menu at the top where the user can see the variables used by his/her program, the option to open .m files, and the help files. The help files can be opened either as HTML files or in an interactive MATLAB® window where the help topics are listed by sub-topic. The command window has options for opening a file, saving a file, the standard cut-and-paste options found in text editors, and help. Knowledge of other programming languages is not necessary to write macros in MATLAB®, but it can help you get started faster. Some basic commands used frequently in photomechanics are highlighted below; Appendix A has the complete list of MATLAB® commands used in this book. We will go through the stages of data acquisition, pre-processing, main processing and post-processing, and explain in detail with examples how MATLAB® commands can be used to implement the mathematical tasks for these processing methods. We will explain with an
illustration some of the syntax to be followed while programming in the MATLAB® environment. MATLAB® programs can either be run from the command line directly or be saved as a file, which can then be used to execute a series of MATLAB® commands and kept for future use. For complicated data analysis, it is better to have the analysis program written to a file, commonly referred to as an m-file. MATLAB® can execute a sequence of statements saved in an m-file. Much of the work in MATLAB® involves creating and refining m-files. There are two types of m-file: script files and function files.

Script files. A script file consists of a sequence of normal MATLAB® statements. If the file has the filename, say, 'rotate.m', then the MATLAB® command rotate will cause the statements in the file to be executed. Variables in a script file are global and will change the values of variables of the same name in the environment of the current MATLAB® session. An m-file can reference other m-files, including referencing itself recursively.

Function files. Function files provide extensibility to MATLAB®. You can create new functions specific to your problem, which will then have the same status as other MATLAB® functions. Variables in a function file are local by default. MATLAB® thus permits one to test various combinations of functions and arrive at the best combination for a particular application. During the course of an experiment, one can develop a variety of function files, to be scripted into different routines for fringe processing. Appendix B lists some of the function files that have been developed. Each user will stamp his own identity on these function and script files, so some duplication of functions common to various image-processing tasks is to be expected. A typical example is shown below.

Gratings are widely used in photomechanics. A simple grating with equal line widths can be specified by its size (number of rows (M), number of columns (N)) and its pitch. Since different applications require gratings with different sizes and different pitches, a function called 'cgg.m' can be created which allows us to generate gratings with variable parameters.
%computer generated grating %grating=cgg(M,N,period,shift) %M: pixels in x direction %N: pixels in y direction %period: pitch of the grating %shift: normally the start point is the beginning %of one period, shift means how many periods is shifted %toward left %grating: return grating
%By Qian Kemao 24/05/01 %Toolbox:Photomechanics function grating=cgg(M,N,period,shift) error(nargchk(3,4,nargin)); if nargin==3 shift=0; end period=ceil(period/2)*2; N l=N+fix(shift*period); c=fix(N1/(period/2)); for i=l:M forj=l:c if (j/2)~ =fix(j/2) t(l:M,(j-1)*period/2+l:j*period/2)=0; else t( 1:M,(j - 1)*period/2 + l:j*periocY2) = 1; end end end if (c*period/2
For example, at the MATLAB® command prompt, typing cgg(300,100,10,0) would scroll a matrix of ones and zeroes corresponding to a grating of width 300 pixels and length 100 pixels with a pitch of 10 pixels. The last argument, zero, is the starting pixel position. For this example, the first five pixels will be zero. If it is required that the first pixel be 1, then one could shift the grating by 5 pixels (half a period). This shift is useful in some applications discussed in a later chapter. To use this function, one could run the following, either on the command line or saved as a script file:
>> a=cgg(300,100,10,0); % generate matrix 'a' representing the grating
>> colormap(gray);      % change the colormap to grayscale
>> imagesc(a);          % display the matrix as an image
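For readers without MATLAB, the same grating-generation idea can be sketched in Python with NumPy. This is an illustrative translation, not the author's 'cgg.m' code; the function name and vectorized approach are assumptions.

```python
# Illustrative NumPy analogue of the 'cgg' grating generator described above:
# a binary grating of M x N pixels with a given pitch and an optional shift
# expressed in periods. Not the book's MATLAB code.
import numpy as np

def cgg(M, N, period, shift=0.0):
    period = int(np.ceil(period / 2.0) * 2)   # force an even pitch
    offset = int(shift * period)              # shift expressed in pixels
    cols = np.arange(offset, offset + N)      # column indices after shifting
    # Each half-period alternates between 0 and 1, starting with 0.
    row = ((cols // (period // 2)) % 2).astype(float)
    return np.tile(row, (M, 1))
```

With the book's example values, cgg(300, 100, 10, 0) starts with five columns of zeros, and a shift of half a period makes the first pixel 1.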
1.5 MATLAB® for Image Processing
MATLAB® is ideally suited for image processing, since images are nothing but matrices of numbers, and matrix manipulation provides the image processing capability. Image processing involves data acquisition, analysis, interpretation and display. The data is usually in the form of images captured by a CCD camera or scanned photographs. The data, once digitized, is ready for processing. The task of the researcher is then to feed the data to the program for estimation of the parameters of interest. MATLAB® for image processing in general, and photomechanics in particular, can be divided into four parts: image or data acquisition, pre-processing, main processing and post-processing. A complete set of tools for control of, and communication with, a variety of PC compatible data acquisition hardware is available in MATLAB®. Together, MATLAB® and the Data Acquisition Toolbox offer a single, integrated environment to support the entire data acquisition and analysis process. Data can be easily analyzed or visualized on the fly, saved for post-processing, and used for making iterative updates to a test setup based on the analysis results. In addition, The MathWorks now offers an Image Processing Toolbox as an add-on to the basic MATLAB® package. Thus, with the Data Acquisition and Image Processing Toolboxes, most of the image acquisition, pre-processing and post-processing can be easily accomplished. The main processing module is the major content of this book, and since the development of the Image Processing Toolbox is more recent, most routines can be run without the toolbox in place.
Image Acquisition Module

Images can be acquired in a variety of ways with scanners, digital cameras etc. Currently MATLAB® only works with images stored as files; it is not possible to work directly with live images. However, with improvements in image capture modules and cameras using USB and FireWire ports, the possibility of TWAIN modules for acquiring images directly into MATLAB® is not that far-fetched. MATLAB® allows reading in many different formats, which can then be numerically processed. Data can be binary, as is the case for most images, or in standard ASCII format. The data is read using the function 'imread' or 'load' depending on the format of the input: for formatted images 'imread' is used, and for ASCII data 'load' is used. When 'imread' is used, the data is returned as unsigned 8-bit integers. The values range from 0 to 255; when converting to uint8 form, values are rounded to the nearest integer, and values below or above these limits saturate at 0 or 255. For example, for the vector p=[0.9,9.6,4.9,-1.0], the uint8 version of p will be [1,10,5,0]. Most mathematical operations are not supported when the data is in this form, although some operations, such as the logical operations and display of data values, are allowed. The data format is changed using the 'double' function in MATLAB® to make the data usable for numerical operations. A typical MATLAB® command line to read an image is
>> a = imread('');
where a is the variable into which the image is stored. The data in variable 'a' will be in 8-bit unsigned integer form; it is converted into double precision data, ready for processing, with
>> b=double(a);
Depending on whether the image is a grayscale image or an RGB image, the size of the matrix b will be either M x N or M x N x K. The data can be saved for later use either in one of the standard image formats, like JPEG or TIFF, or as a plain ASCII file. To load an ASCII file, use the following syntax:
>> image = load('')
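The rounding-and-saturation behaviour of MATLAB's double-to-uint8 conversion can be modelled in a few lines of Python. This is an illustrative sketch (the helper name to_uint8 is ours, not MATLAB's):

```python
# Illustrative model of MATLAB's double -> uint8 conversion: round to the
# nearest integer, then saturate at the ends of the 0..255 range.
def to_uint8(value):
    return min(255, max(0, int(round(value))))

p = [0.9, 9.6, 4.9, -1.0]
print([to_uint8(v) for v in p])  # -> [1, 10, 5, 0]
```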
Pre-Processing Module

Pre-processing routines prepare the data for analysis. Before the actual processing starts, the data has to be pre-processed to remove detector effects; pre-processing is one of the most important aspects of data processing. When data is acquired as the output of an experiment, the next step is modeling the data to extract useful information. Globally, the data output could be too large, too little or fragmented; pre-processing includes classifying the data into one of these three kinds and processing it accordingly. Hence data filtering, data ordering, data editing and noise modeling play an important role in any pre-processing. MATLAB® has routines like 'resample' and 'decimate' which help achieve some of these tasks. Also, conversion between real and integer data representations can be made, which helps in retaining the sensitivity, or least count, of the experiment whose output is being processed.

Main Processing Module

Once pre-processing is complete, the data needs to be analyzed. A variety of library routines, from the commonly used Fast Fourier Transform (FFT) algorithm to wavelet transforms, are available as MATLAB® functions. Interpolation and curve fitting routines, which are routinely used for signal analysis, are interactive. The main processing module decides the extent of information extracted from the data, which mainly depends on the ingenuity of the researcher analyzing the data. MATLAB® supports novel methods and techniques for information extraction by providing the researcher with the basic building blocks for developing the necessary tools. Routines like FFT allow both one- and two-dimensional data to be analyzed.

Post-Processing Module

Communication is one of the most important aspects of research work, because it helps the researcher publicize his results and make people understand his work. This aspect of data processing is usually termed data visualization.
Post-processing of data deals with the display of the results obtained in the main processing module. 2D and 3D visualization tools with color coding, and animation of a time sequence of data, all form part of the post-processing module. MATLAB® offers a wide variety of post-processing options, ranging from the commonly used 2D plots to 3D visualization with lighting effects, and visualization of surfaces using a grid mesh or a continuously shaded image.
Chapter 2 MATLAB® for Photomechanics (PMTOOLBOX)

2.1 Introduction
This chapter provides some standard image processing routines used in photomechanics. The functions developed could form the basis for a Photomechanics Toolbox, which would form a special subset of The MathWorks' Image Processing Toolbox (2.1). Users can learn the operation of the programs by following the step-by-step instructions. They can work with their own images and can input various parameters to find the best solution for their fringe patterns. However, it is our experience that fringe patterns recorded with different experimental techniques and facilities in different laboratories have their own subtle processing nuances. The use of the toolbox enables the user to jumpstart with MATLAB® for photomechanics and allows him to decide the best routine for his fringe pattern. The MATLAB® scripts and functions in subsequent chapters highlight the subtleties of each technique and the development of MATLAB® routines for specific methods. MATLAB® routines are also evolving, and no attempt is made here to optimize the solutions. It is hoped that the primer will enable users to become familiar with MATLAB® for fringe processing quickly, and also give them an option to make improvements best fitting their applications.
Legend: The following notation will be used in the text:
• The commands following the prompt ">>" should be entered by the user.
• Information displayed on the screen is shown in italic font.
• Parameters to be input will appear as bold text.
• Any step followed by one or more figures shows the results of the program.

2.2 Image Processing in Photomechanics (2.2, 2.3)
In photomechanics (optical methods in experimental mechanics), most of the techniques, such as photoelasticity, moiré and holography, provide fringes which contour the principal stress difference or the deformation. To convert the contours into a continuous variation of the measurand is the task for MATLAB®.
The intensity at any point (x, y) of an interferometric fringe pattern can be described as

I(x, y) = Ib + Im cos[θ(x, y) + α]    (2.1)
where I(x, y) is the intensity of the fringe pattern at a generic point (x, y), Ib is the background intensity, Im is the fringe modulation intensity, θ(x, y) is the phase difference between the two interfering beams and α is a known added phase. The phase θ(x, y) is the required quantity and the intensity I(x, y) is the measured quantity. The phase is related to the principal stress difference in photoelasticity via the stress optic law, and to deformation in holography, speckle and moiré via the sensitivity vector of the system set-up.

Pre-processing

While equation (2.1) gives the intensity distribution for an ideal fringe pattern, the actual distribution may be different depending on the set-up parameters and optical components. Hence there is a need for pre-processing to remove noise and other specimen or system artifacts. Traditional smoothing routines have to be employed with care, as we do not wish to remove some of the signal as well. One typical smoothing routine, which has found favour in experimental mechanics, is the Gaussian filter. The Gaussian filter is a 2-D convolution operator similar to the mean filter in image processing. The difference is in the kernel used for filtering. As the name suggests, the Gaussian kernel has a bell-shaped profile and is given as
G(x, y) = (1/(2πσ²)) e^(−(x² + y²)/(2σ²))    (2.2)

where σ is the standard deviation.
The function 'gaussflt' can be used to filter an image using the Gaussian kernel.

>> help gaussflt
result=gaussflt(imdata,fwhm);
Generate a Gaussian filter function and filter the image in the Fourier domain
imdata: input image data
fwhm: full width half maximum of Gaussian profile

The script 'gfdemo' provides a demonstration of this function for fringe pattern enhancement. The demonstration first loads a noisy pattern and then filters it with different widths of the Full Width Half Maximum (FWHM) profile as required by the function.
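The idea behind 'gaussflt' can be sketched in NumPy. This is an assumed analogue for illustration (the function name gauss_filter and the exact FWHM convention are ours, not the toolbox's): build a Gaussian window of given FWHM and multiply it with the centred Fourier spectrum of the image.

```python
# Illustrative sketch of Fourier-domain Gaussian filtering with an FWHM
# parameter. Assumption: FWHM specifies the spatial-domain Gaussian width
# in pixels; the book's 'gaussflt' may define it differently.
import numpy as np

def gauss_filter(imdata, fwhm):
    """Low-pass filter `imdata` in the Fourier domain with a Gaussian window."""
    rows, cols = imdata.shape
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> standard deviation
    y = np.arange(rows) - rows // 2
    x = np.arange(cols) - cols // 2
    yy, xx = np.meshgrid(y, x, indexing="ij")
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))    # Gaussian centred on dc
    spectrum = np.fft.fftshift(np.fft.fft2(imdata))    # move dc term to centre
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * g))
    return np.real(filtered)
```

A constant (noise-free) image passes through unchanged, since only the dc term carries energy; noisy fringe patterns are smoothed in proportion to the chosen FWHM.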
Press start to begin the demo.

Using a Gaussian filter with a Full Width Half Maximum (FWHM) of 40 gives the filtered pattern shown in the corresponding screen capture.
Other filter sizes are also demonstrated within this demo. As this filter blurs the image, a careful balance has to be struck between the loss of resolution and sensitivity on the one hand, and the improved image quality on the other.

Main Processing
There are three methods to extract the phase θ(x, y) from the digitized fringe intensity distribution. They are the Fractional Fringe Method (FFM) (2.4, 2.5), the Fourier Transform Method (FTM) (2.6-2.10) and the Phase Shifting Method (2.11-2.13). Both the FFM and FTM need to process only one fringe pattern but are unable to determine the sign of the fringes. The Phase Shifting Method requires more than one fringe pattern but allows the determination of the sign of the fringes. Brief demos of the three routines are described below.

Fractional Fringe Method (2.4, 2.5)
As indicated by equation (2.1), the fringe pattern has a sinusoidal distribution. However, due to variations in intensity and contrast, Ib and Im are also functions of the coordinates (x, y). The approach in FFM is to determine Ib and Im for each half of a fringe and interpolate the phase, between the minimum and maximum of this fringe, via the cosine function. Thus equation (2.1) can be rewritten as
θ(x, y) = cos⁻¹[(I(x, y) − Ib)/Im]    (2.3)

where α is taken as 0 without any loss of generality. 'ffm.m' is the function which evaluates the phase using the Fractional Fringe Method. This is elaborated below.

>> help ffm
phase=ffm(imdata, command_str);
This function is to evaluate the phase by the Fractional Fringe Method
imdata: input image data
command_str: 'x' or 'y' for scanning direction, 'x' by default

Typing 'ffmdemo' at the command line runs the demonstration script, which uses the 'ffm' function.
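The core of equation (2.3) can be sketched for a single half fringe. This is an illustrative NumPy example, not the toolbox 'ffm' function: the local background and modulation are estimated from the extremes of the half fringe, and the phase is interpolated through the inverse cosine.

```python
# Illustrative sketch of the fractional-fringe idea in equation (2.3): once
# the local background Ib and modulation Im of a half fringe are known, the
# phase at each sample follows from the inverse cosine.
import numpy as np

def half_fringe_phase(intensity):
    """Phase (0..pi) across one half fringe from its intensity samples."""
    i_max, i_min = intensity.max(), intensity.min()
    ib = 0.5 * (i_max + i_min)                 # local background intensity
    im = 0.5 * (i_max - i_min)                 # local fringe modulation
    ratio = np.clip((intensity - ib) / im, -1.0, 1.0)
    return np.arccos(ratio)

theta = np.linspace(0.0, np.pi, 50)            # simulated half fringe
recovered = half_fringe_phase(100.0 + 40.0 * np.cos(theta))
```

Because the cosine is even, this recovers the phase only up to sign, which is exactly the FFM sign ambiguity mentioned above.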
>>ffmdemo and press the start button
Next the 'ffm' function is invoked to evaluate the phase one half fringe at a time. Details are shown in the screen below.
Finally the 3-D mesh of the unwrapped phase can be plotted using the MATLAB function 'mesh' as shown below.
This routine is best suited if there are few fringes in the field.

Fast Fourier Transform Method (FFT) (2.6-2.10)

The FFT method generally requires a high fringe density. This is because the method uses the Fourier transform to isolate the phase component of the fringe pattern. The Fourier transform of a quasi-periodic fringe pattern has two side lobes centred on the mean fringe frequency, and a zero order central lobe. The side lobes carry the information of the phase θ(x, y). The higher the fringe density, the easier the delineation of the side lobes from the central lobe. Setting one of the side lobes as the only non-zero component, the data is inverse Fourier transformed. Since the data selected in the Fourier domain is not symmetric, the inverse transform yields a real and an imaginary component. The desired phase can then be determined as:

θ(x, y) = tan⁻¹(Im/Re)    (2.4)

where Im and Re are the imaginary and real parts of the inverse transform.
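The steps above can be sketched on a 1-D carrier-fringe signal. This is an illustrative NumPy example (the toolbox's 'ftm_carrier' works on 2-D images, and the variable names here are ours): transform, keep one side lobe, inverse-transform, take the angle of the complex result as in equation (2.4), and subtract the known carrier.

```python
# Illustrative 1-D sketch of the Fourier Transform Method for carrier fringes.
import numpy as np

n = 256
x = np.arange(n)
carrier = 16                                    # carrier fringes over the field
phi = 2.0 * np.sin(2.0 * np.pi * x / n)         # slowly varying test phase
signal = 100.0 + 50.0 * np.cos(2.0 * np.pi * carrier * x / n + phi)

spectrum = np.fft.fft(signal)
window = np.zeros(n)
window[carrier - 8:carrier + 9] = 1.0           # isolate the +carrier side lobe
analytic = np.fft.ifft(spectrum * window)
wrapped = np.angle(analytic)                    # carrier + phase, wrapped to (-pi, pi]
recovered = np.unwrap(wrapped) - 2.0 * np.pi * carrier * x / n
recovered -= recovered[0] - phi[0]              # remove the arbitrary phase offset
```

The width of the window (here ±8 bins) plays the role of the filter_width parameters of 'ftm_carrier': too narrow and the phase is truncated, too wide and the dc and negative lobes leak in.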
The MATLAB® function 'ftm_carrier' provides the basis of the FFT method for fringe processing.

>> help ftm_carrier
phase=ftm_carrier(imdata, carrier_freq_x, carrier_freq_y, filter_width_x, filter_width_y);
This function is to evaluate the phase from carrier fringe patterns by the Fourier Transform Method (FTM)
imdata: input image data
carrier_freq_x: carrier frequency in x direction
carrier_freq_y: carrier frequency in y direction, default value is carrier_freq_x
filter_width_x and filter_width_y: widths of the filter in the frequency domain;
the default width is 1/2*maximum(carrier_freq_x, carrier_freq_y)

This phase is wrapped in the region (−π to +π). Phase unwrapping is needed to make the distribution of phase continuous. Unwrapping requires knowledge of the fringe sign and the phase of the starting point. In the unwrapping function shown below, the fringe sign is set to monotonically increase from right to left of the starting point, which has zero phase. The starting point defaults to the centre of the pattern but can be user-selected as well.

>> help unwrapping
unwrapped_phase=unwrapping(original_phase, original_x, original_y);
This function is to unwrap the phase for comparison use
input the original phase and output the unwrapped phase
original_x, original_y is the starting point, whose default value is the central point of the map

The script 'ftmcdemo' demonstrates the use of the ftm_carrier function to process a fringe pattern and the unwrapping function for phase unwrapping. The screen captures of the routine are shown below. The window below each image highlights the MATLAB® statement used.
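The basic unwrapping principle can be shown in one dimension. This is an illustrative sketch, not the toolbox's 2-D 'unwrapping' function: wherever the wrapped phase jumps by more than π between neighbouring samples, a multiple of 2π is added or subtracted (NumPy ships this as np.unwrap).

```python
# Illustrative 1-D phase unwrapping: correct 2*pi jumps between neighbours.
import numpy as np

def unwrap_1d(wrapped):
    out = np.array(wrapped, dtype=float)
    for k in range(1, len(out)):
        diff = out[k] - out[k - 1]
        if diff > np.pi:          # a downward 2*pi wrap occurred
            out[k:] -= 2.0 * np.pi
        elif diff < -np.pi:       # an upward 2*pi wrap occurred
            out[k:] += 2.0 * np.pi
    return out
```

This works only when the true phase changes by less than π per sample, which is why adequate sampling of the fringe pattern matters, and why 2-D schemes with a chosen starting point (as in 'unwrapping') are used in practice.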
>>ftmcdemo
As mentioned previously, the Fourier Transform Method is best suited if the fringe density is high. There is also the possibility of using the Fourier Transform Method for patterns with a few fringes. In this case, a second phase shifted pattern is necessary to eliminate the ambiguity in phase.

>> help ftm_no_carrier
phase=ftm_no_carrier(i1, filter_width_x, filter_width_y, i2);
This function is to evaluate the phase from fringe patterns without carrier by the Fourier Transform Method (FTM)
i1: input image data to be processed
filter_width_x and filter_width_y: width of the filter for filtering the dc part, the default value is 3*3
i2: phase shifted image data for removing the phase ambiguity
when i2 is not provided, the phase output is a complex matrix phase_x+j*phase_y, where phase_x is the result when the frequency component fx>0 is filtered and phase_y when fy>0 is filtered
when i2 is provided, the output is the phase without ambiguity

The script 'ftmncdemo' demonstrates the use of this function for processing a fringe pattern with no carrier fringes. This demonstration is illustrated later in the chapter and hence is not repeated here.

Phase Shifting Methods
(2.11-2.13)
It can be seen from equation (2.1) that a minimum of three images is necessary to solve for the three unknowns Ib, Im and θ(x, y). The three independent images can be obtained by adding a known incremental phase α to the fringe pattern. Depending on the method, phase shifting can be achieved in different ways. One possibility is to provide four phase steps of 0, π/2, π and 3π/2. The required phase θ(x, y) can then be deduced as

θ(x, y) = tan⁻¹[(I4 − I2)/(I1 − I3)]    (2.5)
Phase unwrapping, as before, is performed to obtain a continuous map of the phase distribution. Some of the more frequently used phase shift routines have been programmed in the function ps.m.
>> help ps
phase=ps(Algorithm_Type, i1, i2, i3, i4, i5, i6);
This function evaluates the phase by phase shifting algorithms
Algorithm_Type is the algorithm used to evaluate the phase
in are the phase shifted intensity distributions
The number of images, phase steps and merits are as follows:

Type  intensities  phase shift          merits (insensitive to ...)
1     i1~i3        (n-1)*pi/2
2     i1~i4        (n-1)*pi/2           linear phase shifter error
3     i1~i4        (n-1)*pi/2           quadric phase shifter error
4     i1~i4        (n-1)*pi/2           quadric intensity error
5     i1~i5        (n-1)*pi/2           linear and quadric phase shifter error
6     i1~i5        (n-1)*pi/2           linear phase shifter error & quadric intensity error
7     i1~i5        (n-1)*pi/2           linear & quadric phase shifter error & quadric intensity error
8     i1~i4        (-3,-1,1,3)*alpha    linear phase shifter error (Carre)
9     i1~i5        (-2,-1,0,1,2)*alpha  linear phase shifter error (Stoilov)

In types 8 and 9, alpha is arbitrary and fixed.
The script 'psdemo' demonstrates the phase shift routine for fringe pattern analysis using the four-step algorithm. As before, run the script at the command prompt and press start and next to proceed through the different steps.
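The four-step computation itself is a one-liner. The NumPy sketch below is illustrative (not the toolbox 'ps' function) and uses the widely used four-step formula θ = tan⁻¹[(I4 − I2)/(I1 − I3)] with shifts 0, π/2, π and 3π/2, as in equation (2.5).

```python
# Illustrative four-step phase shifting: four intensity maps with known
# pi/2 phase steps give the wrapped phase via a four-quadrant arctangent.
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)   # wrapped into (-pi, pi]

theta = np.linspace(-3.0, 3.0, 64)                   # test phase (1-D here)
frames = [100.0 + 40.0 * np.cos(theta + a)
          for a in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
wrapped = four_step_phase(*frames)
```

Because the numerator and denominator carry sine and cosine separately, the four-quadrant arctangent resolves the fringe sign, which single-pattern methods cannot.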
>>psdemo
The four images are sequentially loaded, and the phase shift and unwrapping functions are invoked to display the phase map shown below.
Speckle Correlation (2.14)

While the above three methods deal with fringe patterns, speckle photography can also use digital correlation as a tool to determine the deformation between the reference and deformed patterns recorded separately. The correlation algorithm scans a pixel array from the undeformed speckle pattern over the deformed pattern to locate the array with the best correlation. This then permits the displacement of the array to be determined. By similarly scanning other arrays in the undeformed pattern, a vector map of displacements can be generated. The MATLAB® function sc.m does the speckle correlation.

>> help sc
speckle correlation method
function d=sc(i_o,i_d,x,y,templ,thr)
i_o: the original image
i_d: the deformed image
x: x index of the point calculated
y: y index of the point calculated
templ: the width and height of the template, should be [width height]
thr: the max and min values for displacement set previously, should be [min_u max_u min_v max_v]

The MATLAB® script 'scdemo' demonstrates speckle correlation for rigid body rotation measurement. An interesting pattern, called the moiré of the speckle pattern, is also generated, indicating the similarities between the speckle and moiré methods.
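The scanning-template idea can be sketched in NumPy. This is an illustrative example, not the toolbox 'sc' function: the minimum sum of squared differences stands in here for the correlation criterion, and the function name and parameters are ours.

```python
# Illustrative speckle correlation: a template from the reference image is
# scanned over the deformed image; the best match gives the local displacement.
import numpy as np

def match_template(reference, deformed, x, y, half):
    """Displacement (du, dv) of the square template centred at (y, x)."""
    templ = reference[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = None, (0, 0)
    rows, cols = deformed.shape
    for v in range(half, rows - half):
        for u in range(half, cols - half):
            cand = deformed[v - half:v + half + 1, u - half:u + half + 1]
            ssd = np.sum((cand - templ) ** 2)   # dissimilarity measure
            if best is None or ssd < best:
                best, best_uv = ssd, (u - x, v - y)
    return best_uv

rng = np.random.default_rng(0)
ref = rng.random((40, 40))                      # synthetic speckle field
defo = np.roll(ref, (2, 3), axis=(0, 1))        # rigid-body shift: 2 down, 3 right
```

Real implementations restrict the search to the expected displacement range (the 'thr' argument of 'sc') and use normalized correlation for robustness to intensity changes.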
2.3 MATLAB® Demonstration
While the above routines are run via the command line interface, MATLAB® also allows one to develop a Windows-type graphical interface. The script 'pmdemo' demonstrates one possible approach.

>>pmdemo
Click on each button to run the particular demo. For example, selecting "FTM without carrier" gives the screen shown below.
Press Start to begin the demo.
As can be seen, the top half is for display, while the bottom area shows the function used to generate the image above. Brief comments are also included to assist the user.
2.4 Conclusion
MATLAB® has standard functions for image acquisition, pre-processing and post-processing. While the pre-processing might not always work for fringe patterns, which have slightly different structures than conventional images, the image input/output and post-processing functions can be used as provided with the MATLAB® package. Indeed, improvements in the Image Processing Toolbox should improve these functions with each new upgrade. The main processing functions are what is specific to photomechanics. MATLAB® allows a fast approach to comparing the different methods available for fringe processing. The Fractional Fringe Method is a useful tool if only part of the image, or single lines in the image, need to be processed. The method, however, requires the user to be very familiar with the fringe pattern, as significant interaction is required in the processing. The Fourier Transform Method is a standard function in MATLAB®, as is the correlation function. However, the stages following these transforms are crucial. For example, in the Fourier Transform Method, the size and shape of the window selected in the transform plane prior to determining the inverse transform is vital. MATLAB® can be used to test the various windows and select the one most suitable for the user's application. Finally, the phase shifting method is very specific to photomechanics, and different expressions are available depending on the number of phase shifted images that are used. Phase unwrapping is essential for both the Fourier Transform Method and the Phase Shifting Methods. Once again, while MATLAB® has a function for unwrapping, the numerous schemes available in the literature give the user a wide choice. These schemes have not been presented in the MATLAB® demonstrations above, but can be readily implemented.
Chapter 3 Digital Photoelasticity

3.1 Introduction
Photoelasticity is based on the property exhibited by most transparent plastics: birefringence, or double refraction. Some materials, such as mica and quartz crystals, exhibit this behaviour naturally, but in photoelasticity the materials chosen are those which become birefringent when subject to loads (or stresses). This temporary birefringence is referred to as load-induced (stress-induced) birefringence. David Brewster first observed this phenomenon in glass in the early nineteenth century, and he did foresee its application to stress analysis. However, glass was far from the ideal material for photoelasticity, and it was not until the first half of the twentieth century that the method took off as a diagnostic and analytic tool. The treatise by Coker and Filon (3.1) and the works of M.M. Frocht (3.2) were pioneering in exploring the various aspects and applications of photoelasticity. Since then the method has been widely used in industry as a means of two- and three-dimensional stress analysis (3.3-3.11). Its simplicity of use and its ability for whole-field visualization of stresses were the driving forces in its acceptance by industry and research organisations. Only with the advent of the computer and the development of the finite element method has the popularity of the method waned to a certain extent. Nevertheless, photoelasticity is, apart from strain gauges, the experimental method to which students are exposed as a tool for whole-field stress visualization, in particular in its application to Stress Concentration Factor determination. More recently the photoelastic effect is being revived, with applications in fiber-optic sensors, in the evaluation of rapid prototyped products made using the stereolithographic process, and in birefringence measurements of materials (Si and CaF) used in the electronics industry. This has been further enhanced by developments in digital photoelasticity (3.12), i.e. the automation of photoelastic data collection and analysis. The classical manual procedure of analysis is usually very tedious and time consuming and requires skilled and experienced personnel. With the advent of digital image processing techniques in photomechanics, digital photoelasticity has become popular due to its capability of providing quantitative data with minimum manpower, and its compatibility with computerized design systems. However, unlike other interferometric techniques, such as moiré interferometry discussed in Chapter 4, computer aided analysis of photoelastic fringes is not straightforward. Consider in particular the phase shift method for fringe pattern analysis. This method requires that the phase of one of the beams of an interferometer be shifted by known amounts to generate the required phase shifted images for analysis. However, in photoelasticity, since the two beams travel the same optical path, it is difficult to phase shift one beam without introducing a
shift in the other beam. Apart from photoelasticity, the other method which has this problem is the shadow moiré method (4.5). In addition, unlike other interferometric techniques, in photoelasticity there are two unknowns that need to be determined. Techniques were proposed to overcome these difficulties by developing a new polariscope and recording six rather than the usual four phase shifted patterns. However, this would mean, firstly, that polariscopes which exist in many research and teaching laboratories would become obsolete. Secondly, phase shifting precludes application to dynamic problems, which is one of the fortes of the new digital polariscopes. Finally, if a new polariscope is to be developed, why not utilize new optical components such as LEDs and laser diodes as light sources, coupled to the digital imaging system.

3.2 Digital Polariscope
Figure 3.1 is a schematic of a typical digital polariscope (3.13). A stressed model made of temporary birefringent material is inserted in the polariscope. Fringe patterns related to the principal stress difference and the principal stress direction in the specimen are captured by the digital camera and processed and displayed by a computer.
Figure 3.1: Schematic of a Digital Polariscope
The effects of the stressed model in the digital polariscope are revisited using Jones Calculus.
Jones Vector for a Birefringent Plate

Consider a beam of light with component U1 along the x-axis and V1 along the y-axis impinging on a birefringent plate. Each component is resolved into components along the slow axis (A1) and the fast axis (A2) of the plate (Figure 3.2(a)).
Figure 3.2: Polarised light (a) entering the birefringent plate (b) leaving the birefringent plate
If θ is the angle between the slow axis of the plate and the x axis, then

A1 = U1 cos θ + V1 sin θ    (3.1)

A2 = −U1 sin θ + V1 cos θ    (3.2)
When the light emerges from the plate, the two components along the fast axis and the slow axis result in components A3 and A4 (Figure 3.2(b)): A3= (U 1 cosO + V 1 sinO) e -i5/2
(3.3)
A4= (-U 1 sinO + V I cosO) e i 6 / 2
(3.4)
where δ is the relative retardation. Then the component along the x-axis, U2, and the component along the y-axis, V2, become

U2 = A3 cos θ − A4 sin θ    (3.5)

V2 = A3 sin θ + A4 cos θ    (3.6)
From equations (3.3) to (3.6),

U2 = A3 cos θ − A4 sin θ
   = (U1 cos θ + V1 sin θ) e^(−iδ/2) cos θ − (−U1 sin θ + V1 cos θ) e^(+iδ/2) sin θ
   = U1 (cos δ/2 − i cos 2θ sin δ/2) − V1 (i sin 2θ sin δ/2)    (3.7)

V2 = A3 sin θ + A4 cos θ
   = (U1 cos θ + V1 sin θ) e^(−iδ/2) sin θ + (−U1 sin θ + V1 cos θ) e^(+iδ/2) cos θ
   = −U1 (i sin 2θ sin δ/2) + V1 (cos δ/2 + i cos 2θ sin δ/2)    (3.8)

In matrix form,

{U2}   [cos δ/2 − i cos 2θ sin δ/2      −i sin 2θ sin δ/2          ] {U1}
{V2} = [−i sin 2θ sin δ/2               cos δ/2 + i cos 2θ sin δ/2 ] {V1}    (3.9)

so that the Jones matrix of the plate is

JM = [cos δ/2 − i cos 2θ sin δ/2      −i sin 2θ sin δ/2          ]
     [−i sin 2θ sin δ/2               cos δ/2 + i cos 2θ sin δ/2 ]    (3.10)
Equation (3.10) is the general form of the Jones matrix for any birefringent plate.

Special Case I: Quarter-Wave Plate

When a birefringent plate is cut parallel to the optic axis to a thickness resulting in a phase difference between the two components of one quarter of a wavelength, we get a quarter-wave plate (λ/4). This optical element is used to convert plane-polarized light into circularly polarized light and vice-versa. The Jones calculus for a quarter-wave plate is a special case of that for the birefringent plate. In eqn. (3.10), let δ = π/2; then

JQ = (√2/2) [1 − i cos 2θ      −i sin 2θ   ]
            [−i sin 2θ         1 + i cos 2θ]    (3.11)

Furthermore, when θ = π/4, the Jones matrix simplifies to:

JQ = (√2/2) [1    −i]
            [−i    1]    (3.12)
In the above analysis, θ is defined as the angle between the slow axis of the quarter-wave plate and the horizontal axis (fig. 3.2). If θ were the angle between the fast axis of the quarter-wave plate and the horizontal axis, then the θ in equation (3.11) is replaced by θ + π/2. The equation becomes

JQ = (√2/2) [1 + i cos 2θ      i sin 2θ   ]
            [i sin 2θ          1 − i cos 2θ]    (3.13)

For θ = π/4, we get

JQ = (√2/2) [1    i]
            [i    1]    (3.14)
Jones matrices for other elements can be similarly derived. A polariscope is a combination of different optical elements, and Jones' calculus allows the resultant light intensity emerging from the polariscope set-up to be readily deduced.

General Intensity Equation for a Circular Polariscope

A schematic of a typical circular polariscope is shown in Figure 3.3.
Figure 3.3: A circular polariscope (source, polarizer, quarter-wave plate 1, specimen, quarter-wave plate 2, analyzer)
Using Jones' calculus, the components of the transmitted light vector perpendicular and parallel to the analyzer axis are obtained as

{U; V} = JA JQ2 JM JQ1 JP a e^(iωt)    (3.15)
where JP, JQ1, JM, JQ2 and JA are the Jones matrices for the polarizer, the first quarter-wave plate, the model, the second quarter-wave plate and the analyzer respectively. The subscripts indicate the reference systems in which the calculus is carried out. Expanding and multiplying we get:
U = (1/2) (cos β   sin β) JQ2 [cos δ/2 − i sin δ/2 cos 2θ      −i sin δ/2 sin 2θ          ] JQ1 {U1}
                              [−i sin δ/2 sin 2θ               cos δ/2 + i sin δ/2 cos 2θ ]     {V1}    (3.16)

where β is the angle of the analyzer, θ is the principal stress direction in the model, JQ1 and JQ2 are the quarter-wave plate matrices of equations (3.13) and (3.11), and {U1; V1} is the light vector leaving the polarizer. If β = 0 and the quarter-wave plates are set with their axes at π/4 and −π/4 (fig. 3.3), we get a dark field circular polariscope and the resulting pattern is the dark field fringes. With this arrangement, equation (3.16) simplifies to:

U = a sin(δ/2) e^(i(ωt − 2θ − π/2))    (3.17)

I = I0 + |U|² = I0 + a² sin²(δ/2) = I0 + a²/2 − (a²/2) cos δ = Ib − Im cos δ    (3.18)

where Ib (= I0 + a²/2) is the background/stray light intensity and Im (= a²/2) is the maximum intensity of the light emerging from the analyzer, or fringe modulation. Other polariscope settings, such as the dark field plane polariscope and the light field circular polariscope, can be similarly formed. Since the angle θ does not appear in the intensity expression, the isoclinics have been eliminated from the fringe pattern. From eqn. (3.17), extinction (I = 0) occurs when

δ = 2nπ, for n = 0, 1, 2, 3, ...    (3.19)
An example of dark- and light-field isochromatics for a disk under diametral compression is shown in figure 3.4.
Figure 3.4: (a) Dark- and (b) bright-field isochromatics of a disc in a circular polariscope (same setting as (a) except β = π/2).
The fringe patterns need to be analyzed using equation (3.19). As is obvious, this is no easy task unless certain information is known, such as the location of the zero fringe order and the sign of the fringes, i.e. whether fringe orders are increasing or decreasing in different regions. In addition, there are isoclinics, contours of principal stress directions, which have to be recorded and evaluated globally. Traditionally this was the task of the trained photoelastician, and it was one of the reasons for the limited application of the technique. Computer-assisted processing has alleviated this shortcoming tremendously. In particular, phase shifting in photoelasticity makes it possible to determine both isochromatics and isoclinics over the whole field, even for complex geometries.
3.3
Phase-Shifting in Digital Photoelasticity
While the fractional fringe method and the Fourier transform methods have been, and still are, used for fringe analysis in photoelasticity, this discussion concentrates on phase shifting. Apart from some novel developments in this area, dynamic measurement and analysis also become possible.

Tardy Compensation Method

Before the advent of digital processing, the most common method for measuring fractional fringe orders was the Tardy method. It requires no additional equipment and can be used with a normal polariscope. Its disadvantage is that it is a point-wise method, and thus tedious and time-consuming for whole-field analysis. However, it is a precursor to the phase-shift method and hence is explained briefly here. To employ the Tardy method, the axis of the polarizer is aligned with the direction of the maximum principal stress at the point in question (θ = 0). All other elements of the polariscope are rotated relative to the
polarizer so that a standard dark-field polariscope exists. The analyzer is then rotated by an amount β to produce extinction at the point of interest. The procedure to determine the fringe order is as follows. Substituting θ = 0 into equation (3.10) gives the Jones matrix for a birefringent plate as

J_M = \begin{bmatrix} \cos\frac{\delta}{2} + i\sin\frac{\delta}{2} & 0 \\ 0 & \cos\frac{\delta}{2} - i\sin\frac{\delta}{2} \end{bmatrix}   (3.20)
Equation (3.16) can thus be rewritten as:

\begin{Bmatrix} U \\ V \end{Bmatrix} = \frac{1}{2}
\begin{bmatrix} 0 & 0 \\ \cos\beta & \sin\beta \end{bmatrix}
\begin{bmatrix} 1 & i \\ i & 1 \end{bmatrix}
\begin{bmatrix} \cos\frac{\delta}{2} + i\sin\frac{\delta}{2} & 0 \\ 0 & \cos\frac{\delta}{2} - i\sin\frac{\delta}{2} \end{bmatrix}
\begin{Bmatrix} a e^{i\omega t} \\ i\,a e^{i\omega t} \end{Bmatrix}   (3.21)

so that V = i\,a e^{i\omega t}\left(\cos\beta\sin\frac{\delta}{2} + \sin\beta\cos\frac{\delta}{2}\right), giving the dark-field intensity as

I = |V|^2 = a^2\sin^2\!\left(\frac{\delta}{2} + \beta\right)   (3.22)
Extinction occurs when

\frac{\delta}{2} \pm \beta = n\pi, \qquad n = 0, 1, 2, \ldots

and the fractional fringe order N at the point of interest is given by

N = \frac{\delta}{2\pi} = n \pm \frac{\beta}{\pi}
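As a minimal illustration (Python rather than the book's MATLAB; the function name and the explicit sign argument are ours), the Tardy fringe order at a point follows directly from the relation above:

```python
import numpy as np

def tardy_fringe_order(n, beta, sign=+1):
    """Fractional fringe order N = n +/- beta/pi from Tardy compensation.
    n    : integer order of the nearer fringe
    beta : analyzer rotation (radians) giving extinction at the point
    sign : +1 or -1, chosen according to which neighbouring fringe
           moves to the point of interest."""
    return n + sign * beta / np.pi
```

For example, extinction after an analyzer rotation of π/4 near the second-order fringe gives N = 2.25 (or 1.75 for the opposite sign).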
This was the basis of the phase-shifting method. Since the method relies on a circular polariscope, only those techniques utilizing a circular polariscope will be mentioned, with the most current technique highlighted for comparison. By using phase-shifting expressions, Asundi (3.14) extended the Tardy method of compensation from a point-to-point technique to evaluation of the fractional fringe order at all points lying on an isoclinic line. The system still used the normal polariscope, but applications were limited.

Patterson and Wang Algorithm

Hecker and Morche (3.15) changed the phase by rotation of the second quarter-wave plate and the analyzer, while the fast axis of the first quarter-wave plate was set
to 45° with respect to the axis of the polarizer. This was the start of developments in which the normal polariscope could no longer be used. Patterson and Wang (3.16) used the same basic optical arrangement as Hecker and Morche. However, by using different steps of optical rotation of the second quarter-wave plate and analyzer, Patterson and Wang's algorithm had a higher modulation over the field. It uses six intensity equations, with the settings of the second quarter-wave plate (φ) and the analyzer (β) varied as follows:

Table 3.1: Patterson and Wang's algorithm

  φ      β      Intensity equation
  3π/4   π/2    I1 = Ib + (Im/2)(1 + cos δ)
  3π/4   0      I2 = Ib + (Im/2)(1 − cos δ)
  π/2    π/2    I3 = Ib + (Im/2)(1 − sin 2θ sin δ)
  3π/4   3π/4   I4 = Ib + (Im/2)(1 + cos 2θ sin δ)
  π/2    0      I5 = Ib + (Im/2)(1 + sin 2θ sin δ)
  π/4    π/4    I6 = Ib + (Im/2)(1 − cos 2θ sin δ)
From the six intensity equations summarized in Table 3.1, both the isoclinic angle and the phase retardation can be computed as

\theta = \frac{1}{2}\tan^{-1}\frac{I_5 - I_3}{I_4 - I_6}   (3.23)

\delta = \tan^{-1}\frac{2(I_5 - I_4)}{(I_1 - I_2)(\sin 2\theta - \cos 2\theta)}   (3.24)
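In code, equations (3.23) and (3.24) map directly onto four-quadrant arctangents. The sketch below (NumPy rather than the book's MATLAB; `patterson_wang` is our name) uses the algebraically equivalent numerator (I5 − I3) sin 2θ + (I4 − I6) cos 2θ = Im sin δ over I1 − I2 = Im cos δ, which avoids dividing by sin 2θ − cos 2θ:

```python
import numpy as np

def patterson_wang(I1, I2, I3, I4, I5, I6):
    """Isoclinic angle (eq. 3.23) and wrapped retardation (eq. 3.24,
    rearranged into a four-quadrant-safe form) from six phase-stepped images."""
    theta = 0.5 * np.arctan2(I5 - I3, I4 - I6)
    num = (I5 - I3) * np.sin(2 * theta) + (I4 - I6) * np.cos(2 * theta)
    delta = np.arctan2(num, I1 - I2)    # wrapped retardation
    return theta, delta

# Synthetic check against the Table 3.1 intensity equations
Ib, Im, th0, d0 = 100.0, 50.0, 0.3, 2.5
I1 = Ib + Im / 2 * (1 + np.cos(d0))
I2 = Ib + Im / 2 * (1 - np.cos(d0))
I3 = Ib + Im / 2 * (1 - np.sin(2 * th0) * np.sin(d0))
I4 = Ib + Im / 2 * (1 + np.cos(2 * th0) * np.sin(d0))
I5 = Ib + Im / 2 * (1 + np.sin(2 * th0) * np.sin(d0))
I6 = Ib + Im / 2 * (1 - np.cos(2 * th0) * np.sin(d0))
theta, delta = patterson_wang(I1, I2, I3, I4, I5, I6)
```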
All the existing phase-shifting algorithms have an ambiguity when determining both the isoclinic and isochromatic parameters. The isoclinic angle θ is computed using an inverse tangent function with a multiplying factor of 1/2 and is therefore obtained in the range [−π/4, π/4] instead of its true range [−π/2, π/2]. In circular-polariscope-based phase-shifting algorithms, although the phase retardation can be determined over a full range [0, 2π] by using the four-quadrant arctangent function, its interaction with the isoclinic angle still leads to ambiguity regarding its mathematical sign.
When ambiguity exists at a given point, however, a simple relationship between the calculated values of θ and δ and their true values θt and δt can be derived from any of the above full-field algorithms as

\theta_t = \theta + \frac{\pi}{2}; \qquad \delta_t = \tan^{-1}(-\tan\delta) = -\delta   (3.25)
A restriction on the widespread application of Patterson & Wang's algorithm is that it cannot be implemented with a normal circular polariscope. In their algorithm, the change in phase is achieved by rotation of the output optical elements of the circular polariscope, i.e. the analyzer and the second quarter-wave plate, which must therefore rotate independently. Unfortunately, this is not the case for many commercially available circular polariscopes widely used in industrial and university laboratories. These polariscopes are designed to change between a plane polariscope and a circular polariscope with ease and precision; thus the quarter-wave plates are inter-linked, with a 45° maximum rotation capability. Patterson & Wang's algorithm therefore cannot be used with a normal circular polariscope without some permanent modification to the polariscope, possibly lowering its accuracy and integrity. Hence an alternative method, which applies the phase-shifting technique without modifying the normal polariscope, is desirable.

3.4 Phase-Shifting Method with a Normal Circular Polariscope (Asundi and Liu Algorithm (3.17))

A normal polariscope (fig. 3.5) is a circular polariscope with its two quarter-wave plates inter-linked. This facilitates transformation from a circular polariscope to a plane polariscope with precision and ease. Most commercial polariscopes are normal polariscopes.
Figure 3.5: The two normal circular polariscopes used in this project
A new algorithm is proposed to apply phase-shifting techniques with a normal polariscope using only four images. For a normal circular polariscope with both quarter-wave plates inter-linked at the common angle φ, eqn. (3.16) can be rewritten as

\begin{Bmatrix} U \\ V \end{Bmatrix} = \frac{1}{2}
\begin{bmatrix} 0 & 0 \\ \cos\beta & \sin\beta \end{bmatrix}
\begin{bmatrix} 1 - i\cos 2\varphi & -i\sin 2\varphi \\ -i\sin 2\varphi & 1 + i\cos 2\varphi \end{bmatrix}
\begin{bmatrix} \cos\frac{\delta}{2} - i\sin\frac{\delta}{2}\cos 2\theta & -i\sin\frac{\delta}{2}\sin 2\theta \\ -i\sin\frac{\delta}{2}\sin 2\theta & \cos\frac{\delta}{2} + i\sin\frac{\delta}{2}\cos 2\theta \end{bmatrix}
\begin{bmatrix} 1 + i\cos 2\varphi & i\sin 2\varphi \\ i\sin 2\varphi & 1 - i\cos 2\varphi \end{bmatrix}
\begin{Bmatrix} a e^{i\omega t} \\ 0 \end{Bmatrix}   (3.26)
The intensity of light emerging from a point on the specimen is then given as

I = I_0 + a^2\left[\cos^2\beta\,\sin^2\frac{\delta}{2}\,\sin^2 2(\varphi-\theta) + \sin^2\beta\,\cos^2\frac{\delta}{2} + \frac{1}{2}\sin 2\beta\,\sin\delta\,\sin 2(\varphi-\theta) + \sin^2\frac{\delta}{2}\,\cos^2 2(\theta-\varphi)\,\sin^2(\beta-2\varphi)\right]   (3.27)

where I₀ is the background light. For a normal polariscope, the restrictions are that φ can take values of π/4 and π/2 only, while β can have any value between 0 and π. Six intensity equations identical to those in Patterson & Wang's algorithm (Table 3.1) can be obtained by substituting appropriate values of φ and β from Table 3.2 into this equation. This shows that even with the limited capability of a normal circular polariscope, the phase-shifting technique can still be used. Moreover, equations (3.23) and (3.24) can still be used to determine the isoclinic and isochromatic parameters; they are restated here:
\theta = \frac{1}{2}\tan^{-1}\frac{I_5 - I_3}{I_4 - I_6}   (3.23)

\delta = \tan^{-1}\frac{2(I_5 - I_4)}{(I_1 - I_2)(\sin 2\theta - \cos 2\theta)}   (3.24)
Table 3.2: Intensity equations required for a normal circular polariscope (both quarter-wave plates at the common angle φ)

  Image #   QWP angle φ   Analyzer angle β   Intensity equation             Fringe pattern
  1         π/4           π/2                I1 = Ib + Im cos δ *           Bright field
  2         π/4           0                  I2 = Ib − Im cos δ             Dark field
  3         π/2           3π/4               I3 = Ib − Im sin δ sin 2θ
  4         π/4           π/4                I4 = Ib + Im sin δ cos 2θ
  5         π/2           π/4                I5 = Ib + Im sin δ sin 2θ
  6         π/4           3π/4               I6 = Ib − Im sin δ cos 2θ

  * Ib = I0 + a²/2,  Im = a²/2
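The correspondence between equation (3.27) and Table 3.2 can be verified numerically. Below is a NumPy sketch (the book itself works in MATLAB; the function name is ours) evaluating the reconstructed equation (3.27) at the first and third settings of the table, with Ib = I0 + a²/2 and Im = a²/2 as in the footnote:

```python
import numpy as np

def intensity_3_27(beta, phi, theta, delta, a=1.0, I0=0.0):
    """Emerging intensity of the normal circular polariscope, eq. (3.27)
    as reconstructed here: analyzer at beta, linked quarter-wave plates
    at phi, model principal direction theta, retardation delta."""
    return I0 + a**2 * (
        np.cos(beta)**2 * np.sin(delta / 2)**2 * np.sin(2 * (phi - theta))**2
        + np.sin(beta)**2 * np.cos(delta / 2)**2
        + 0.5 * np.sin(2 * beta) * np.sin(delta) * np.sin(2 * (phi - theta))
        + np.sin(delta / 2)**2 * np.cos(2 * (theta - phi))**2
          * np.sin(beta - 2 * phi)**2)

theta, delta = 0.4, 1.1
Ib, Im = 0.5, 0.5                                            # a = 1, I0 = 0
row1 = intensity_3_27(np.pi / 2, np.pi / 4, theta, delta)    # bright field
row3 = intensity_3_27(3 * np.pi / 4, np.pi / 2, theta, delta)
```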
The Four-Step Phase-Shifting Method

Equations (3.23) and (3.24) are based on the premise that the background/stray light I_b and the maximum intensity of light I_m are constant at a generic point during the phase-stepping procedure. With this assumption, the six steps can be simplified to four steps using intensities I1, I2, I4 and I5 from Table 3.2. In this case, θ and δ are deduced as

I_b = \frac{1}{2}(I_1 + I_2)

\theta = \frac{1}{2}\tan^{-1}\frac{I_5 - I_b}{I_4 - I_b}

\delta = \tan^{-1}\frac{2(I_4 - I_5)}{(I_1 - I_2)(\cos 2\theta - \sin 2\theta)}   (3.28)
It can be seen from the above equations that the phase angle δ depends on the proper determination of the isoclinic angle θ. Ambiguity in the determination of the isoclinic angle will arise due to the factor of 1/2 in the expression for θ.
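A NumPy sketch of the four-step evaluation follows (the book's pspe.m is MATLAB; the function name is ours). As with the six-step case, the retardation is formed with a four-quadrant arctangent using the equivalent numerator (I4 − Ib) cos 2θ + (I5 − Ib) sin 2θ = Im sin δ:

```python
import numpy as np

def four_step(I1, I2, I4, I5):
    """Asundi-Liu four-step method, eq. (3.28): background from the
    bright/dark pair, then isoclinic angle and wrapped retardation."""
    Ib = 0.5 * (I1 + I2)
    theta = 0.5 * np.arctan2(I5 - Ib, I4 - Ib)
    num = (I4 - Ib) * np.cos(2 * theta) + (I5 - Ib) * np.sin(2 * theta)
    delta = np.arctan2(num, 0.5 * (I1 - I2))   # Im*sin(delta) over Im*cos(delta)
    return theta, delta

# Synthetic check against the Table 3.2 intensity equations
Ib0, Im0, th0, d0 = 100.0, 40.0, -0.2, 2.0
I1 = Ib0 + Im0 * np.cos(d0)
I2 = Ib0 - Im0 * np.cos(d0)
I4 = Ib0 + Im0 * np.sin(d0) * np.cos(2 * th0)
I5 = Ib0 + Im0 * np.sin(d0) * np.sin(2 * th0)
theta, delta = four_step(I1, I2, I4, I5)
```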
3.5
Isoclinic Ambiguity
The main restriction to the application of phase-shifting techniques is the problem of isoclinic ambiguity. The principle behind this effect is that the isoclinic
angle is calculated in the range [−π/4, π/4] instead of its true range [−π/2, π/2]. Thus, in some regions, the isochromatic parameter will have the wrong mathematical sign, as the signs of sin 2θ and sin(2θ + π/2) [or cos 2θ and cos(2θ + π/2)] are opposite. To solve this problem, Ekman and Nurse's (3.18) load stepping combined with the phase-shifting method was the best available, although it has two major drawbacks. First, it is difficult to obtain three precisely equal-stepped loads. Second, in low-stressed regions the difference in phase retardation, Δδ, between load steps is very small, so results in these regions are unreliable and noisy. Thus a generalized multi-load approach using a completely new algorithm is developed. Two methods, termed the three-load phase-shifting method and the two-load phase-shifting method, are proposed and verified.

The Three-Load Method

Consider a specimen subjected to three different loads P1, P2 and P3, applied sequentially with P1 > P2 > P3. The phase retardation δ at each point in the specimen will then satisfy

\delta_1 > \delta_2 > \delta_3   (3.29)
However, from the phase-shift equations it is apparent that in regions where the isochromatic parameter has the wrong mathematical sign, the above inequality becomes

\delta_1 < \delta_2 < \delta_3   (3.30)

Taking into account the periodic nature of the wrapped isochromatic parameter, inequalities (3.29) and (3.30) can be grouped into non-ambiguous and ambiguous relationships:

no ambiguity: ① δ3 > δ1 > δ2; ② δ1 > δ2 > δ3; ③ δ2 > δ3 > δ1
ambiguity: ④ δ3 < δ1 < δ2; ⑤ δ1 < δ2 < δ3; ⑥ δ2 < δ3 < δ1   (3.31)

as illustrated graphically in fig. 3.6.
Figure 3.6: Schematic diagram of the distributions of phase retardation wrapped in the range (0, 2π) for three different loads. (a) No ambiguity: ① δ3 > δ1 > δ2; ② δ1 > δ2 > δ3; ③ δ2 > δ3 > δ1. (b) Ambiguity exists: ④ δ3 < δ1 < δ2; ⑤ δ1 < δ2 < δ3; ⑥ δ2 < δ3 < δ1.
Using these six relationships, the isoclinic angle θ and phase retardation δ can be determined unambiguously, as summarized in Table 3.3.
Table 3.3: Possible relationships among δ1, δ2, δ3 and the determination of θ and δ in the three-load method

  Relationship obtained                   State of ambiguity   Determination of θ and δ
  ① δ3 > δ1 > δ2                          No ambiguity         θ = θi
  ② δ1 > δ2 > δ3                                               δ = δi
  ③ δ2 > δ3 > δ1
  ④ δ3 < δ1 < δ2                          Ambiguity present    θ = θi + π/2 if −π/4 < θi < 0
  ⑤ δ1 < δ2 < δ3                                               θ = θi − π/2 if 0 < θi < π/4
  ⑥ δ2 < δ3 < δ1                                               δ = tan⁻¹(−tan δi)
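Pointwise, the decision logic of Table 3.3 amounts to a few comparisons. A Python sketch (the function name is ours; the book implements this inside pspe.m in MATLAB), taking the raw isoclinic angle θi in (−π/4, π/4) and the three wrapped retardations for loads P1 > P2 > P3:

```python
import numpy as np

def three_load_correct(theta_i, d1, d2, d3):
    """Resolve the isoclinic ambiguity at one point per Table 3.3.
    Returns the corrected (theta, delta) for the first load."""
    if (d3 > d1 > d2) or (d1 > d2 > d3) or (d2 > d3 > d1):
        return theta_i, d1                     # relationships 1-3: no ambiguity
    # relationships 4-6: ambiguity present
    theta = theta_i + np.pi / 2 if theta_i < 0 else theta_i - np.pi / 2
    return theta, -d1                          # delta_t = tan^-1(-tan d) = -d
```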
To compare this method with Ekman & Nurse's approach, a ring in compression is investigated. Three equal-stepped loads of 101±0.7 N, 108±0.7 N and 116±0.7 N are used for the test, and the resulting dark-field fringes are shown in fig. 3.7. Usually this loading procedure needs to be repeated many times before two equal load increments are obtained. The isochromatic and isoclinic maps obtained by the phase-shifting method, Ekman & Nurse's method and the three-load method are shown in fig. 3.8. Also shown in the figure is the colour-coded stress-difference map. A comparison of the unwrapped phase distribution along the marked line in fig. 3.7(a) is shown in fig. 3.9.
Figure 3.7: Dark-field images of the ring under diametral compression. (a) 143±1 N; (b) 155±1 N; (c) 167±1 N.

From the above comparison, the two methods agree quite well. However, in the low-stressed region, Ekman & Nurse's results yield noisy isoclinics and isochromatics. This is due to the fact that in this region the phase retardation
difference between load steps is very small and the value of the phase retardation is not reliable. In contrast, the results obtained by the method proposed in this project are much better in this low-stressed region, because the relationships between the phase retardations are maintained even when the phase-retardation differences are not accurate.
Figure 3.8: Isochromatics and isoclinics using (a) the phase-shift method only, (b) Ekman and Nurse's method, (c) the current method; (d) dark-field isochromatics and the unwrapped phase map showing the principal stress difference.
Figure 3.9: Phase distributions along a horizontal line for (a) Phase-shifting only; (b) Current Three-load method and (c) Ekman & Nurse's method
The Two-Load Method

The success of the three-load method depends on its ability to categorize the six relationships between the phase retardations for different loads. However, further investigation shows that under some conditions two loads are sufficient. Consider loads P1 and P2, with P1 < P2, applied sequentially; thus δ1 < δ2 everywhere. However, the wrapped phase retardation obtained using the phase-shifting technique can follow either figure 3.10(a) or (b), depending on the ambiguity. Note that in both situations the two relationships δ1 < δ2 and δ1 > δ2 are observed. Therefore, in general it is not possible to identify the ambiguity using only two loads.
Figure 3.10: The four possible relationships in calculated phase retardation between two-load steps for an actual linear distribution of phase retardation.
However, if the two load steps are selected such that the maximum difference in phase retardation over the region of interest is less than π, there exist four new unique
relationships, as shown in Table 3.4: two of them exhibit an ambiguity and the other two none.

Table 3.4: Relationships between δ1 and δ2 and the determination of θ and δ by the two-load method

  Relationship                          Ambiguity   Determination of θ and δ
  ① δ2 > δ1 and δ2 − δ1 < π             None        θ = θi
  ② δ2 < δ1 and δ1 − δ2 > π                         δ = δi
  ③ δ2 < δ1 and δ1 − δ2 < π             Present     θ = θi + π/2 if −π/4 < θi < 0
  ④ δ2 > δ1 and δ2 − δ1 > π                         θ = θi − π/2 if 0 < θi < π/4
                                                    δ = tan⁻¹(−tan δi)
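The two-load decision reduces to the sign and magnitude of the wrapped difference. A Python sketch (names are ours), assuming P1 < P2 and |δ2 − δ1| < π everywhere as required above:

```python
import numpy as np

def two_load_correct(theta_i, d1, d2):
    """Resolve the isoclinic ambiguity at one point per Table 3.4.
    d1, d2 are the wrapped retardations for loads P1 < P2."""
    diff = d2 - d1
    if 0 < diff < np.pi or diff < -np.pi:      # cases with no ambiguity
        return theta_i, d1
    theta = theta_i + np.pi / 2 if theta_i < 0 else theta_i - np.pi / 2
    return theta, -d1
```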
The advantage of the two-load method over the three-load method is obvious. First, it is simpler and easier to implement, particularly when the loading environment is complicated. Second, the error introduced by the third load is avoided, so less noise can be expected in the resulting images. This is evident by comparing the effectiveness of the two approaches on the specimen used previously, shown in fig. 3.11 for a ring under diametral compression.
Figure 3.11: Comparison of the three-load and two-load methods for a ring under compression. Left: phase retardation; right: principal stress direction.

A common problem of multi-load methods is that they are noisy in low-stress regions, as shown in figure 3.11, because there the differences between the phase retardations for different loads are very small and the relationships are no longer reliable. The two-load method shows less noise in these low-stress areas than the three-load method. Moreover, it can be seen from Table 3.4 that if the absolute value of the phase retardation δ at a given point is less than π, the value of sin δ is always positive. That means the signs of the intensities I3, I4, I5 and I6 depend only on θ. Therefore, the principal direction θ can be determined directly in the range [−π/2, π/2] using the four-quadrant inverse tangent, ensuring that the phase retardation is determined without ambiguity. The problem in implementing this idea is to identify the regions where the phase retardation is less than π. From the multi-load fringe patterns, an estimate of the difference between the phase retardations for different loads can be made. For example, if the two loads of 100 N and 90 N differ by 10%, then for the absolute value of the phase retardation to be less than π, the maximum difference between the phase retardations should be less than π × 10% = 0.1π. In view of this, if the
difference between the two phase retardations at a given point is less than 0.1π, the isoclinic angle and phase retardation at this point can be determined using the four-quadrant arctangent functions. In this way, the principal direction is obtained directly in the range [−π/2, π/2] and the phase retardation has no ambiguity in this region. For practical implementation, however, the cut-off value chosen should be smaller than the theoretical one; in the above case a value between 0.03π and 0.05π, instead of 0.1π, is used, because in regions having a phase-retardation difference greater than this value the relationships in Tables 3.3 and 3.4 are reliable. This can be appreciated from a closer inspection of the images in figure 3.11, where the noisy area is always limited to a small region. The optimal threshold still has to be determined by experience. Applying this solution to the above specimens gives the results shown in figure 3.12. Note that the noise in the low-stress zone is much less than before.
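The low-stress test described above is a simple mask. A NumPy sketch (the cutoff default and names are ours, following the 0.03π–0.05π guideline in the text):

```python
import numpy as np

def low_stress_mask(d1, d2, cutoff=0.04 * np.pi):
    """Flag points whose wrapped retardations for the two loads differ by
    less than the cutoff; there |delta| < pi holds, so theta can be taken
    directly from the four-quadrant arctangent in (-pi/2, pi/2] and delta
    needs no sign correction."""
    return np.abs(np.asarray(d2) - np.asarray(d1)) < cutoff

mask = low_stress_mask([0.01, 1.0], [0.02, 2.0])
```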
Figure 3.12: The results of applying the modified multi-load algorithm to resolve noise in the low-stress regions seen in figure 3.11.

3.6 MATLAB® Demonstration

The MATLAB® script pspe.m is a menu-driven routine made up of different scripts and functions for photoelastic phase-shift analysis.

>>pspe

The following will be displayed on the screen:
1. input the image files
2. segment the region of interest
3. calculate the retardation and direction for each load
4. recompute the parameters for low stressed region
5. correct the results with multi-load-step method
6. Automated phase unwrapping with multi-load-step method
input the choice->
(Normally image files should be loaded first.) 1

Step 1. Read image from image file
input the file names for processing or load the saved data (i / l) ?

(File names can be entered via the keyboard one by one. If there are many file names this process is tedious; hence after the first time you enter the names, the data are saved in a file pspe1.dat, which can be loaded instead.) In this demo we select i to input the file names.

Use six-step or four-step method? (4/6)>> 6
Use two-load method or three-load method (2/3)>> 3
I11= c4590f90.tif   I12= c4500f90.tif   I13= c9045f90.tif   I14= c4545f90.tif
I21= c4590f100.tif  I22= c4500f100.tif  I23= c9045f100.tif  I24= c4545f100.tif
I31= c4590f110.tif  I32= c4500f110.tif  I33= c9045f110.tif  I34= c4545f110.tif
I15= c90135f90.tif  I16= c45135f90.tif
I25= c90135f100.tif I26= c45135f100.tif
I35= c90135f110.tif I36= c45135f110.tif

After the files are loaded, the second step is to segment the region of interest from the original image. Run the file again.

>>pspe

The opening screen will come up again:
1. input the image files
2. segment the region of interest
3. calculate the retardation and direction for each load
4. recompute the parameters for low stressed region
5. correct the results with multi-load-step method
6. Automated phase unwrapping with multi-load-step method
input the choice-> 2

Step 2. Segment specimen from its background
Select a rectangle area of interest->
Click two points as the left and right boundaries->
The following figure will appear; click two points as the left and right boundaries.

Click two points as the top and bottom boundaries->
Click two points as the top and bottom boundaries to select the region of interest.

by thresholding or by drawing a disk or ring (t/d/r)->
These are the three choices for further segmentation.
Enter t to select thresholding in this example. The other two choices are preset for solid disk- or ring-shaped objects, which are traditionally used in photoelastic calibration or testing. This step results in the following temporary image.

Choose a point to separate the image into four subimages for local thresholding
This separates the image into four subimages so that each subimage can be given a different threshold. Click on one point and the histograms of the four images are shown as below.
The image is separated into four sub-images with their respective histograms shown.

Click two thresholds in each part according to the histogram, following the sequence top-left, top-right, bottom-left, bottom-right.
Click a point in the top-left sub-image as its low threshold, and then click another point in the top-left sub-image as its high threshold. Similarly select the thresholds for the other three sub-images. A good segmentation of the boundary is thus obtained, as below.
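The quadrant-wise double thresholding of this step can be sketched as follows (Python/NumPy rather than the book's MATLAB; the function and its argument names are ours):

```python
import numpy as np

def quadrant_threshold(img, cx, cy, bounds):
    """Local double-thresholding: split img at column cx / row cy into four
    sub-images and keep pixels whose grey level lies between that quadrant's
    (low, high) pair.  bounds: {'tl','tr','bl','br'} -> (low, high)."""
    mask = np.zeros(img.shape, dtype=bool)
    quads = {'tl': (slice(None, cy), slice(None, cx)),
             'tr': (slice(None, cy), slice(cx, None)),
             'bl': (slice(cy, None), slice(None, cx)),
             'br': (slice(cy, None), slice(cx, None))}
    for key, (rows, cols) in quads.items():
        lo, hi = bounds[key]
        sub = img[rows, cols]
        mask[rows, cols] = (sub >= lo) & (sub <= hi)
    return mask

img = np.array([[10, 200],
                [50, 120]])
mask = quadrant_threshold(img, 1, 1, {'tl': (0, 20), 'tr': (150, 255),
                                      'bl': (40, 60), 'br': (100, 130)})
```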
Are you satisfied? y/n
Enter y to finish the segmentation (you may select n to redo it). This completes the preprocessing stage, and the next stage of phase calculation follows. Run the program a third time and select 3 from the start menu.

>>pspe
select 3
Step 3. Calculation of the photoelastic parameters The isoclinic (left) and isochromatic (right) maps are calculated and displayed.
Though the phase retardation (isochromatic map) has been calculated, a phase ambiguity exists. There is, however, no phase ambiguity in the low-stress parts, so for these parts the phase retardation is recalculated. Run the program a fourth time and select item 4 from the start menu.

>>pspe
select 4

Step 4. Recompute the values in low stress areas
Enter the threshold for low stress (in radian, 0.1
Images showing the low-stress areas are displayed as below: the top-left image shows the low-stress area, the bottom-left is the corresponding region in the isoclinic image, and the bottom-right shows the region in the isochromatic image.
Are you satisfied with the result? (y/n)-> y
The phase ambiguity in other areas still exists, so the next step is to remove it. Run the program a fifth time and select item 5 from the start menu.

>>pspe
select 5
Step 5. Cancelling the ambiguity of the retardation by the multi-load method
Calculation is in progress...
The corrected isochromatic map is shown below.
Also the corrected isoclinic pattern is displayed, as shown below.

Finally, for phase unwrapping, run the program a sixth time and select item 6 from the start menu.

>>pspe
select 6
Step 6. Phase unwrapping
Calculation is in progress...
Enter the load steps
F1= 90
F2= 100
The unwrapped phase map is then correctly calculated and ordered.
A post-processing function such as 'mesh' allows viewing the result as a 3D plot.
Noise removal can be achieved with the MATLAB® function 'medfilt2'.
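For readers working outside MATLAB, the same post-processing can be done in NumPy; the sketch below is a minimal stand-in for medfilt2 (zero-padded borders, our own implementation):

```python
import numpy as np

def medfilt2(a, k=3):
    """k x k median filter with zero padding, analogous to MATLAB's medfilt2."""
    p = k // 2
    padded = np.pad(a, p)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

# A single salt-noise spike is removed by the 3x3 median
phase = np.zeros((5, 5))
phase[2, 2] = 100.0
clean = medfilt2(phase)
```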
3.7
Dynamic Phase Shift Photoelasticity (3.19)
In all the phase-shifting methods described so far, the phase-shifted images can only be collected sequentially, by changing the orientations of elements within the polariscope. This restricts the application of the phase-shifting technique to the study of static events only. Patterson & Wang (3.20) developed an instrument for simultaneous observation of the four phase-shifted images. In their set-up, three beam-splitters split the beam emerging from the model into four paths in four different directions. By placing a different configuration of quarter-wave plate and analyzer in each path, the four phase-shifted images were recorded separately on four CCD cameras. The principle of the system is straightforward; its implementation, however, is tedious and difficult. First, the requirement for the images to be identical in both size and intensity makes it difficult to align a system with so many optical elements. Second, a monitoring and control system is required to ensure that the four CCD cameras operate synchronously, so that the four phase-shifted images are captured at the same instant. To simplify the set-up, a novel alternative is proposed and developed. Use of a "Multispec Imager™" enables four different combinations of quarter-wave plate and analyzer to be placed along the same optical path, capturing the four images on the same CCD plane. Phase shifting can then be accomplished using Asundi and Liu's four-step algorithm described in section 3.4. By capturing more
than one image at different stages during the dynamic process, the two- or three-load method can also be implemented to eliminate the isoclinic ambiguity. A schematic diagram of the new instrument is shown in Figure 3.13. This is basically a standard circular polariscope except for the "Multispec Imager™" and the four groups of quarter-wave plates and analyzers.
Figure 3.13: Schematic of the dynamic phase-shift polariscope

Apart from its simplicity, this system has many advantages. First, the image-splitting device is an off-the-shelf item which can be integrated directly with user-designed configurations of quarter-wave plates and analyzers. The size and position of the images can be adjusted easily: the position of each image can be changed independently, while the sizes of the four phase-shifted images change only in a synchronous manner. Another important advantage is that only one CCD camera is needed. The configuration of the four sets of quarter-wave plates and analyzers necessary to obtain the four phase-shift equations according to equation (3.28) is shown in fig. 3.14. The proposed system is assessed using the "C"-shaped model frequently used in the previous experiments. The stress distribution in area A is one of combined compression and bending, and serves as a good example for evaluating the system since the stresses on either side of the neutral axis have opposite signs in beam bending.
Figure 3.14: Configurations of the 2nd quarter-wave plate, Q2, and the analyzer, A, to obtain the desired four phase-shifted images.

An INSTRON loading machine applies a cyclic load, thus generating the dynamic event. The schematic of the loading cycle is shown in fig. 3.15. Since our camera can only capture images frame by frame, the frequency of the loading vibration is not set high.
Figure 3.15: Schematic of the loading cycle (between 10 N and 60 N) and the image capture times.

However, using the LED-based polariscope with a Time Delay and Integration (TDI) camera system (3.21), this limitation can be readily overcome. A monochromatic filter centred at 575 nm is used with a white-light source. A series of images, each consisting of four phase-stepped images, is captured. The isoclinic and isochromatic parameters are determined for each image using the two-load method, with the image preceding the event selected for eliminating the isoclinic
ambiguity. A typical set of three images recorded at times t1, t2 and t3 is shown in fig. 3.16.

Figure 3.16: Three sets of phase-shifted images recorded at different times during a load cycle.
Figures 3.17(a) and (b) are the isochromatic and isoclinic fringes obtained by processing one set of phase-shifted images as per equation (3.28). The ambiguity in the isoclinic pattern, and its effect on the isochromatic pattern, is obvious. It is also worth noting that the discontinuities observed in the isoclinic map are a common problem of the phase-shifting technique, caused by the non-deterministic isoclinic angle at points where the four light intensities are all zero. To correct the ambiguity, one of the other load images is used; fig. 3.17(c) shows the corrected isoclinic pattern. With this corrected isoclinic pattern, the isochromatics can be properly ordered (fig. 3.18). The unwrapped phase maps are shown in fig. 3.19, together with the fringe orders along the central line of interest. The combined bending and compression effect is obvious from the maximum fringe orders and the shift of the neutral axis from the centroidal axis of the beam.
Figure 3.17: (a) Isochromatic and (b) isoclinic maps obtained by using the phase-shifting technique on one load image. (c) Corrected isoclinic map using the two-load-step phase-shifting technique.
Figure 3.18: Wrapped isochromatic maps at three different times.

Figure 3.19: (a–c) Unwrapped isochromatics for the three loads and (d) the distribution of principal stress along the horizontal centre line for the three loads.
Figure 3.20: Phase maps for three configurations before (left) and after (right) eliminating ambiguity. (a) Inward; (b) Straight; (c) Outward.

Figure 3.21: The unwrapped phase maps for the three specimens.
3.8
MATLAB® Demonstration
In dynamic photoelasticity it is necessary to segment the image into four sub-images corresponding to the four phase-shifted images acquired simultaneously. Once the images are segmented, phase shifting can be performed as before.

>>multi_seg
Enter the path and name of the images for cropping
Example: I1= 'c:\map\multipleimager\c2405\3'
I1= ms1.tif
I2= ms2.tif

The first set of four images is displayed as below.
Establish four reference points, 1: lt, 2: rt, 3: lb, 4: rb. Click four points as marks to identify the four sub-images: lt - left top; rt - right top; lb - left bottom and rb - right bottom.
The marks show up in the images as displayed below
Click four points to establish the boundaries of the first image. The boundaries of the other three sub-images will be obtained automatically. The sequence of picking points is: 1. left boundary; 2. right boundary; 3. top boundary; 4. bottom boundary
The figure shown below is displayed
If ok, the four images are segmented and displayed as shown below
Processing then proceeds as usual for the four-phase-shift method.
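The segmentation and phase-calculation steps can be sketched in a few lines. The following NumPy version is illustrative only: the corner coordinates and the arctan-based phase formula are our assumptions (the book's multi_seg and phase-shifting routines define the actual interface), but it shows the indexing idea behind splitting one composite frame into four simultaneously acquired sub-images.

```python
import numpy as np

def segment_four(image, left, right, top, bottom):
    """Split a composite frame into four simultaneously acquired sub-images.

    (left, right, top, bottom) bound the left-top sub-image; the other
    three (rt, lb, rb) are assumed to sit at the same offsets to its
    right and below it, as in the multi-imager layout described above.
    """
    h, w = bottom - top, right - left
    lt = image[top:bottom, left:right]
    rt = image[top:bottom, left + w:right + w]
    lb = image[top + h:bottom + h, left:right]
    rb = image[top + h:bottom + h, left + w:right + w]
    return lt, rt, lb, rb

def wrapped_phase(i1, i2, i3, i4):
    """One common four-step phase formula (an assumption here; the exact
    combination of intensities depends on the polariscope arrangement)."""
    return 0.25 * np.arctan2(i4 - i2, i3 - i1)
```

With the four sub-images in hand, processing continues exactly as for the four phase-shifted images of the static case.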
3.9 Conclusion
Photoelasticity has been the last technique to benefit from automated fringe processing, primarily due to the complexities associated with its fringe analysis. There was the need to separate the isoclinic and isochromatic fringes. This in itself was not a major concern; rather, the problem was that during processing the isoclinic angle could be erroneously determined, which affected the isochromatic fringes as well. Using the load-stepping method, this problem was successfully resolved. MATLAB® was instrumental during the evaluation phase of this routine, and Liu Tong's Ph.D. thesis (3.13) is testament to this. The next improvement in the normal polariscope would be to incorporate novel light sources such as LEDs. Advantages of LEDs for dynamic imaging have been exploited (3.21). In addition, multi-colour LEDs permit both colour as well as monochromatic fringes. Advances in colour fringe processing and the benefits gained from them would be worth exploring. Current colour fringe processing in photoelasticity limits itself to analysing the RGB components (3.22) separately.
Chapter 4 Moiré Methods

4.1 Introduction
Moiré methods (4.1 - 4.9) are a versatile set of techniques for in-plane and out-of-plane deformation measurement, topographic contouring, and slope and curvature measurement. The basis of all moiré methods is the grating, which for metrological applications is primarily an array of dark and bright lines (line gratings) or a crossed array of lines or dots (cross gratings). Other grating forms used are circular and radial gratings. Generally such choices of gratings are made based on the ease of interpretation of the patterns resulting from the superposition of two such patterns. Indeed, selective superposition of gratings of different profiles results in some interesting and aesthetically pleasing pictures (figure 4.1).
Figure 4.1 Moiré pattern of two circular gratings
Furthermore, the widths of the dark and bright lines are generally the same, and the sum of these widths is the pitch of the grating. The frequency of the grating, expressed in lines per mm, is the inverse of the pitch. Once again, different widths of the dark and bright lines have been used to enhance moiré patterns. Metrological applications conventionally make use of two initially identical gratings. One of them follows the deformation of the object and is referred to as the specimen or deformed grating, while the other remains unchanged and is the reference or undeformed grating. Incidentally, as the names suggest, the two gratings can be records of the same grating before and after deformation, or could be two physically separate gratings. In the first case moiré patterns are generated by superposing the recorded images of the two gratings, while in the latter case the onus is on recording the moiré pattern. Recording of the individual gratings enables techniques for increasing the sensitivity to be applied. However, if the gratings initially have a sufficiently high frequency, it is difficult if not impossible to record the gratings themselves, and thus the moiré pattern has to be recorded. Moiré methods have been historically classified as grid methods, moiré methods and moiré interferometry, based on the frequency of the gratings used. For the grid methods, the pitch of the grating is generally greater than 1 mm (i.e. a frequency smaller than 1 line/mm). Moiré methods utilize gratings of pitch between 0.01 mm and 1 mm, corresponding to a maximum frequency of 100 lines/mm. Gratings of pitch smaller than 0.01 mm (or frequencies greater than 100 lines/mm) fall into the realm of moiré interferometry. All the methods provide the same information and can be interpreted in identical ways. The differences arise from the optical methods used for pattern formation.
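The pitch thresholds quoted above translate directly into a small classification helper. The sketch below simply encodes the stated ranges (the function names are ours, for illustration):

```python
def grating_frequency(pitch_mm):
    """Frequency (lines/mm) is the inverse of the pitch (mm)."""
    return 1.0 / pitch_mm

def classify(pitch_mm):
    """Conventional classification by grating pitch, per the text."""
    if pitch_mm > 1.0:
        return "grid method"           # frequency below 1 line/mm
    if pitch_mm >= 0.01:
        return "moire method"          # up to 100 lines/mm
    return "moire interferometry"      # beyond 100 lines/mm
```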
In the grid method, the individual gratings are analyzed and subtracted, resulting in no visual moiré pattern to start with. In the moiré method, geometrical optics in the form of obstruction of light by the two superposed gratings is used to explain the formation of the moiré pattern. In moiré interferometry, diffraction and interference form the basis of fringe pattern formation. The sensitivity of the moiré methods depends on the pitch of the grating, with a smaller pitch providing higher sensitivity. The range of applications of the moiré method differs in the way the specimen or deformed grating is generated. For in-plane deformation measurements, the specimen grating is printed on or adhered to the specimen surface and deforms with the specimen. In projection moiré, the grating is obliquely projected onto the surface and is modulated by the topography of the surface. In reflection moiré methods the specimen grating is the image of a grating as seen through the reflective surface of the object. Finally, in the shadow moiré method the specimen grating is the shadow of the reference grating wrapped around the object. The specimen gratings thus generated are superposed onto a reference grating to generate the respective moiré patterns. Except for the shadow moiré method, in which the specimen and reference gratings are inter-linked, all other moiré methods benefit from the fact that the two gratings are separate. In addition, since the sensitivity of the moiré method depends on the pitch of the grating, the ability to vary the pitch to suit the application is an advantage of these methods. This feature is available for all moiré methods except the in-plane moiré method, where the pitch of the specimen grating, once selected, cannot be changed within the same experiment. This, however, does not necessarily mean that the sensitivity cannot be changed, as various techniques have been developed to overcome this particular limitation.
4.2 Digital Moiré
(4.10)
Digital and computer processing of fringe patterns has, as with other optical methods, been applied to the processing of moiré patterns. Since digital methods and computers use regular arrays for recording and displaying images, it is only natural that they exhibit moiré effects. This has been realized and observed in the image processing field, where it is termed 'aliasing'. Efforts were made there to overcome or avoid this situation, since it gave rise to inaccuracies in analysis. However, for our applications this feature is desired, and is what the moiré method for deformation measurement is all about. This is the origin of Computer Aided Moiré Methods (CAMM), as will be explained in this section.
Computer Generated Gratings (CGG)
A line grating comprises an array of dark and bright lines of equal widths, with the pitch of the grating being the sum of the dark and bright line widths. It is thus simple to write a software program to generate a binary grating and then display it. The dark line is represented as having a gray level of '0' while a bright line has a gray level of '1'. A run of pixels with a '0' thus corresponds to the dark line width, and similarly the following run of pixels with a gray level of '1' is the bright line width. The pitch, in units of pixels, is the sum of these two line widths. The smallest grating pitch is thus two pixels. Circular, radial and gratings of other forms can also be generated. With a better understanding of the display scheme, a grating with a triangular or sinusoidal intensity (gray level) distribution can be generated and displayed. Indeed, colour gratings are also possible, although applications using colour gratings are only recently being understood and used.
Logical and Arithmetic Moiré
(4.11)
Having generated the computer gratings, moiré patterns need to be generated by superposing two different gratings. Traditionally, moiré patterns result from arithmetic operations, such as addition, subtraction etc., of the two generating gratings. The same can be applied to computer generated gratings. Since the two gratings are made up of binary numbers, logical operations provide an exciting alternative means of superposition. The right side of Table 4.1 is the so-called TRUTH TABLE resulting from the logical operations. It is seen that the AND operation gives a moiré pattern which is identical to the physical moiré pattern. The OR operation also gives a moiré which, however, like the subtraction moiré, is not the familiar distribution. Finally, the XOR operation also provides a moiré, but with a difference.
Table 4.1 Logical and Arithmetic Moiré
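The operations of Table 4.1 can be reproduced numerically. The sketch below is illustrative (1-D gratings for brevity; the pitches are our arbitrary choices): two binary gratings of slightly different pitch are superposed with the arithmetic and logical operations discussed above. For pitches of 8 and 10 pixels the beat (moiré) period is their least common multiple, 40 pixels.

```python
import numpy as np

def line_grating(n, pitch):
    """Binary line grating: alternating dark (0) and bright (1) runs,
    each pitch/2 pixels wide."""
    x = np.arange(n)
    return ((x % pitch) < pitch // 2).astype(np.uint8)

ref = line_grating(240, 8)     # reference grating, pitch 8 pixels
spec = line_grating(240, 10)   # "deformed" grating, pitch 10 pixels

# Arithmetic superpositions
add_moire = ref + spec
sub_moire = ref.astype(int) - spec.astype(int)

# Logical superpositions
and_moire = ref & spec   # resembles the physical (obstruction) moiré
or_moire = ref | spec
xor_moire = ref ^ spec   # the "moiré with a difference"
```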
Moiré fringes are contour maps of the displacement component in the direction perpendicular to the grating lines. Conventional analysis of moiré fringes involves locating the centers of the dark fringes, which are then assigned monotonically changing integer fringe orders. Displacement is then simply the product of the fringe order and the pitch of the reference grating. To aid in the location of the centerlines of dark fringes, novel optical schemes have been suggested. Furthermore, to improve the sensitivity by increasing the number of dark fringes, fringe multiplication and fringe shifting techniques have been proposed. That is, each half fringe, from the minimum to the maximum intensity point, is split into equal steps, the number of which is the same as the pitch of the reference grating. Since the fringe order from a minimum to the next maximum changes by 0.5, the intermediate steps have fractional orders, which increment by the inverse of the pitch of the reference grating. It is seen that this method of fringe interpretation implies that for a small pitch there are more fringes and correspondingly smaller steps, while for a larger pitch the number of fringes is reduced but the number of steps is increased; thus both gratings have the same sensitivity. This method, however, assumes that the sign of the fringe order is known, i.e. whether the fringe orders are increasing or decreasing. One of the ways of determining the fringe sign is fringe shifting. In this method the reference grating is translated perpendicular to the grating lines and the shift of the fringe pattern is observed. If the moiré fringe shifts in the same direction as the reference-grating shift, the fringe orders are increasing; else they are decreasing. Translating a computer grating is relatively straightforward through software, and thus one can readily determine the sign of the fringes.

4.3 MATLAB® Demonstration
To aid in the understanding of computer gratings, logical moiré and fringe shifting, a MATLAB® demo file is available. Run dmdemo (or dmdemol, the text version) from the MATLAB® command window prompt.
>>dmdemo (or >>dmdemol)
The following screens result, which display the computer generated gratings (cgg) and the digital moiré (dm) with phase shifting. Changes to the grating pitch can be made in the program cgg.m, and the choice of the appropriate logical or mathematical operation can be made in the file dm.m.
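The fringe-shifting step of the demo can be imitated by translating the reference grating array, e.g. with np.roll. This sketch (our own, not the dmdemo code; grating sizes and pitches are arbitrary) shows the property exploited for sign determination: a quarter-pitch translation of the reference moves the moiré fringes, while a full-pitch translation leaves them unchanged.

```python
import numpy as np

def line_grating(n, pitch):
    x = np.arange(n)
    return ((x % pitch) < pitch // 2).astype(np.uint8)

ref = line_grating(400, 8)     # reference grating, pitch 8 pixels
spec = line_grating(400, 10)   # specimen grating, pitch 10 pixels

def xor_moire(a, b):
    return a ^ b

base = xor_moire(ref, spec)
quarter = xor_moire(np.roll(ref, 2), spec)  # quarter-pitch shift
full = xor_moire(np.roll(ref, 8), spec)     # full-pitch shift
```

Tracking which way the fringes move between base and quarter, relative to the direction of the grating shift, gives the fringe sign.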
Screen 1 Arithmetic Moiré
Screen 2 Logical Moiré
Screen 3 Four phase-shifted reference gratings and the deformed grating
Screen 4 Phase-shifted XOR Moiré
Screen 5 Averaged digital moiré with the average taken over the pitch of the reference grating
Screen 6 Wrapped and unwrapped phase
Many variations on the theme are possible, which can provide insights into the understanding not only of moiré but also of interference. One can generate a specimen grating to simulate a particular deformation, e.g. bending of a cantilever, and see the resulting moiré pattern. A possible advanced question to consider after gaining familiarity with the program is what happens if the difference in grating pitches is large, e.g. if the pitch of the specimen grating is twice that of the reference grating. Or what would happen if the specimen grating were shifted? To further enhance the similarities between the arithmetic and logical moiré, MATLAB® m-files can be scripted to display all images sequentially in one frame. Running the m-file dmdemo4 in the MATLAB® command window gives the following end result, displaying the arithmetic moiré on the left and the logical moiré on the right. The similarities are apparent. The program is self-explanatory, and the MATLAB® programs called at each stage are prompted for in the window below the main screen.
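As a concrete instance of simulating a deformation, the sketch below (our own; the quadratic displacement profile is just an illustrative stand-in for bending) builds a specimen grating whose lines are displaced by u(x) and superposes it on the reference by XOR. The number of moiré fringes across the field equals the total displacement divided by the pitch.

```python
import numpy as np

n, p = 512, 8                   # field width and reference pitch (pixels)
x = np.arange(n)
ref = ((x % p) < p // 2).astype(np.uint8)

u_max = 32.0                    # total displacement across the field (pixels)
u = u_max * (x / n) ** 2        # assumed quadratic displacement profile
spec = (((x + u) % p) < p / 2).astype(np.uint8)

moire = ref ^ spec              # XOR moiré of reference and deformed grating
fringes = u_max / p             # expected number of moiré fringes: 4
```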
4.4 Moiré of Moiré
The initial application of the moiré technique was for the measurement of in-plane deformations and strains. In this method, the specimen grating is adhered or printed on the specimen surface and is interrogated by a reference grating. The moiré pattern depicts contours of the displacement component in a direction perpendicular to the grating lines. For complete surface strain measurement it is necessary to obtain the displacement field in two perpendicular directions. To achieve this, a cross grating is generally mounted on the specimen and each of the two perpendicular directions is interrogated sequentially using a line reference grating, rotated through 90° in between recording the two displacement fields. Generally these are referred to as the u and v displacement fields, which need to be differentiated either optically or numerically to determine the two normal and one shear strain components. For in-plane moiré methods it is necessary to measure small deformations in the elastic region of materials as well as very large deformations, in order to understand the plastic deformation characteristics and response of materials. Measuring large deformations using coarse gratings is relatively simple, from the printing of the grating, to the recording of the deformed grating, to the analysis of the resulting deformation. However, great effort was put into trying to measure small deformations. This ranged from enhancing the sensitivity of the coarse grating to enable small deformation measurement, to the more recent use of high frequency gratings with their inherent higher sensitivity. As discussed earlier, when using high frequency gratings it is usually not possible to record the grating lines due to limitations of the imaging systems, while for coarse gratings the deformed and undeformed gratings can be, and normally are, recorded separately and superposed later. In the case of moiré interferometry, which utilizes high frequency gratings, it is not practical to record the individual grating lines. In these cases, the moiré pattern is recorded. Figure 4.2 shows a typical moiré interferometric pattern of the displacement component in the direction of the applied load for a tensile coupon with a central hole. The figure only shows a quarter of the symmetric pattern. In the in-plane moiré method the derivative of displacement, which is related to the strain component, is desired. To get the derivative of displacement, traditionally the displacement curve is first plotted, smoothed and then numerically differentiated. Numerical differentiation is an inherently error-prone approach. Furthermore, due to the high density of fringes, it is possible that in the process of smoothing, subtle variations in the fringe pattern might be averaged out.
One possibility is to consider that the moiré interferometric pattern is made up of uniform fringes modulated by local non-uniformity. This introduces the concept of the central frequency. This is the average frequency of the fringe pattern, and would be what would accrue if the specimen were a uniform tensile specimen. Local deviations from this central frequency reflect the effects of specimen geometry such as holes and stress concentrators. The uniform fringes can also be thought of as the carrier fringes found in other interferometric techniques such as holography and interferometry. The non-uniformities or distortions in the uniform pattern are of interest and may get washed away during the averaging process.
Figure 4.2 Moiré interferometric pattern of a disk with a central hole subject to uniform tension. The fringe pattern contours the displacement component in the direction of the load with a sensitivity of 0.417 µm/fringe
Subtracting the uniform component of the fringe pattern using the moiré effect can enhance the non-uniform part as bold variations. These bolder variations in the displacement curve assist with the differentiation and yield more accurate values for the displacement derivatives. The uniform pattern that was subtracted is added back as a constant value to determine the total strain distribution. However, fewer fringes might introduce greater errors into the numerical differentiation. Instead of numerical differentiation, optical differentiation, by superposing on the original pattern a shifted image of the same fringe pattern, provides a moiré of moiré, which contours strain components in the direction of the shift.
The following MATLAB® routine demonstrates this moiré of moiré effect. >>mom
- Step 1: Read image file into matrix Enter the file name (jpg, tiff, or bmp): mom.jpg
- Step 2: Generate moire fringes to contour derivatives of displacement There are two ways to do this: 1. by multiplication; 2. by addition Enter your choice (1/2): 1 Determine the shear amount 1. according to center fringe frequency or 2. arbitrarily Enter your choice (1/2): 1 Firstly, detecting center frequency (carrier frequency) and orientation Step 1¹: choose a line segment along the first order of the spectrum
¹ Note that this form of scripting is confusing due to the repeated use of Step 1
Click the left button of the mouse at one point and then click the right button of the mouse at another point; a line linking these two points will then be constructed. Step 2: Choose a line segment along the zero order of the spectrum Construct the second line in the same way.
Next, enter your selection on the shear amount (1/2/3): 1. small; 2. medium; 3. large -> 3 The shear amounts are 6 and 24 in the horizontal and vertical directions Lastly, automatically threshold the pattern to highlight the moire fringes
- Step 3: Low pass filtering to filter moire fringes out Firstly, choose the shape of the filter function (default is a zonal window) 1. zonal window 2. circular window 3. Gaussian window with an aspect ratio (default is 1/2 along the main axis) Enter your choice (1/2/3): 2 Then, as determined by a ratio, choose the window size in such a way that the size will be the ratio multiplied by the Nyquist frequency. The ratio is 1
- End of the program for moire of moire fringe calculation -
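The multiplication branch of mom can be mimicked in a few lines. In this sketch (a synthetic 1-D fringe pattern; the frequency, shear and filter length are our choices), multiplying the pattern by a sheared copy of itself produces a low-frequency term 0.5·cos(2πf·s) set by the local fringe frequency f and shear s, plus a high-frequency term that a moving-average low-pass removes.

```python
import numpy as np

n, shear = 2048, 16
x = np.arange(n)
f = 0.05                               # fringe frequency, cycles/pixel
pattern = np.cos(2 * np.pi * f * x)

# Moiré of moiré by multiplication with a sheared copy of the pattern
product = pattern * np.roll(pattern, shear)

# Low-pass: 80-tap moving average (an integer number of cycles of the
# 2f sum term, so that term is suppressed exactly in the interior)
kernel = np.ones(80) / 80
lowpassed = np.convolve(product, kernel, mode="same")
```

In the interior of the field, lowpassed is a constant whose value encodes the local fringe frequency, which is exactly the information the moiré of moiré contours carry.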
While contours of the derivative of the displacement component can be readily visualized, quantification of these contours is still not automatic and has to be set manually based on the expertise of the user. Thus possible alternatives are sought. Two alternative solutions are presented here in which image processing, and thus MATLAB®, is useful. The first is called Moiré of Moiré Interferometry and the second is a novel multi-channel Gabor filtering scheme.
4.5 Moiré of Moiré Interferometry
(4.12)
Following from the earlier discussion, wherein the moiré of the moiré interferometric pattern is used to subtract a uniform part of the original pattern, it was observed that parts of the original pattern where the fringe spacing matches that of the uniform grating could be readily isolated. This means that the frequency of the moiré interferometric pattern at these points could be determined. The frequency of the moiré interferometric pattern is proportional to the derivative of displacement, and hence the derivative of displacement could be determined. Thus by using a series of gratings with different pitches and orientations, whole-field derivatives of the displacement component could be mapped (fig. 4.3). Using computer generated gratings and logical moiré, this process can be done through digital processing without the need for physically having a large array of gratings. The MATLAB® m-files 'dmom' and 'dmom2' do precisely this. Figure 4.3 gives the resulting patterns, from which strain has been deduced at points where the two gratings have the same pitch. Similar points can also be determined where the frequency of the moiré interferometric pattern is twice that of the uniform grating. Thus points with two times the strain can also be gleaned from these images.
Figure 4.3 Logical moiré of moiré interferometry for strain determination.

4.6 Gabor Strain Segmentation
(4.13)
Two approaches for strain determination using the moiré effect have been described above. The traditional moiré of moiré approach provides patterns which contour the derivative of displacement in the direction of the shift. While a dense moiré pattern is required, the strain (derivative of displacement) contour interval depends on the shift, which can be large. Furthermore, the fringe pattern still needs interpretation. An alternative approach involves superposing on the moiré pattern a grating of known pitch. The concept is that the strain at any point depends on the fringe spacing, and hence if the fringe spacing can be obtained the strain can be determined. Thus by using a grating of known pitch and identifying points on the fringe pattern with the same spacing, strains at these points can be readily determined. However, strains can be determined at only specific points, and there is need for many gratings if the whole image is to be segmented. To automate this process, a novel approach of strain segmentation using a multi-channel Gabor filter was proposed. This approach attempts to obtain fringe frequencies within moiré patterns based on a frequency-sorting algorithm. Via a set of Gabor channel filters, the fringe frequencies at any point are evaluated based on the selection of the best matching channel from the filter bank. Hence the frequencies to be determined are represented by the associated tuning functions. This method allows one to directly segment derivatives of displacement from only one interferogram, without the need for phase unwrapping and numerical differentiation. Appropriate filter design allows for user-specific segmentation, which is essential in engineering design and analysis. A 2-D Gabor function is a harmonic function (i.e. a sinusoidal grating) modulating a Gaussian. This can be expressed as
h(x, y) = g(x', y') exp(j2πFx')    (4.1)
where (x', y') = (x cos θ + y sin θ, x sin θ − y cos θ) are the rotated spatial coordinates and (F, θ) are the radial center frequency and orientation, which specify the filter in the spatial domain. g(x, y) is a Gaussian function, given as
g(x, y) = [1/(2πσ_x σ_y)] exp{−0.5[(x/σ_x)² + (y/σ_y)²]}    (4.2)
(σ_x, σ_y) are the bandwidths of the Gaussian, and hence the bandwidths of the Gabor filter. This filter can be tuned to a specific frequency (F) and orientation (θ), enabling it to be used in the manner described above for the physical grating. Since in strain analysis using Cartesian coordinates the frequency components in two perpendicular directions are required, instead of 'F' and 'θ' we choose U (= F cos θ) and V (= F sin θ) as the Gabor filter parameters. Thus equation (4.1) becomes
h(x, y) = g(x', y') exp(j2π(Ux + Vy))    (4.3)
In the Fourier transform domain, the frequency response function of the Gabor filter is still a Gaussian, centered at the frequencies (U, V) with standard deviations (σ_u, σ_v). This response function can be expressed as
H(u, v) = exp{−0.5[(u'/σ_u)² + (v'/σ_v)²]}    (4.4)
where (u', v') = (u cos θ + v sin θ, u sin θ − v cos θ) are the rotated coordinates in the frequency domain. (σ_u, σ_v) = (1/(2πσ_x), 1/(2πσ_y)) are proportional to the bandwidth of the filter along the u' and v' directions in the frequency domain. The bandwidth is defined as the half-peak of the frequency response of the filter and is given as ΔU' = α/(πσ_x) and ΔV' = α/(πσ_y) in the u' and v' directions
respectively, where α = √(ln 2 / 2). Figure 4.4 depicts a typical Gabor filter in both the spatial and frequency domains. The Gaussian and its frequency response, together with the complex exponential function and its frequency response, are shown at the center of the figure. Multiplication of these functions in the spatial domain gives a Gabor filter in the same domain. Its real and imaginary components are shown to the left of the figure. Both are sinusoidal functions, but with a phase difference of π/2 between them. Convolution of the Gaussian and delta functions in the Fourier domain gives the frequency response of the filter in the Fourier domain, which is a Gaussian shifted according to the center frequencies (U, V). As restricted by their Gaussian envelopes, Gabor filters are local in both the spatial and frequency domains. These filters can be seen as band-pass filters with narrow bandwidth and can thus be used to select local frequencies of interest from a fringe pattern. This is achieved by convolving a fringe pattern with a Gabor filter to give a response R(x, y). The magnitude of R(x, y) is m(x, y), given as:
m(x, y) = |i(x, y) ⊗ h(x, y)|    (4.5)
where ⊗ is the convolution operator. m(x, y) is an indicator of the correlation between the filter and the localized fringe frequencies at a particular point (x, y). A maximum response indicates regions where the local fringe frequency matches the Gabor filter frequency. This is the basic idea for evaluating the fringe frequencies based on a set of channels of a Gabor filter bank. By a judicious choice of filters, contours of fringe frequencies in two perpendicular directions can be obtained. Let {m_n(x, y)}, n ∈ [1, N] be the magnitude outputs from the N channels. From these N channel responses the local frequency can be obtained as follows:
(f_x(x, y), f_y(x, y)) = (U_l, V_l), where l = arg max{m_n(x, y)}, n ∈ [1, N]    (4.6)
where (f_x, f_y) are the fringe frequencies along the x and y directions respectively. (U_l, V_l) are the frequencies of the channel that is exactly tuned (has the highest response of the N channels) to the fringe frequencies surrounding the image point at the coordinates (x, y). The local frequency at a point is proportional to the derivative of displacement. Thus, the components of the derivative of displacement at the point can be assigned from the frequencies of the corresponding Gabor channel. When the response value at the point reaches a maximum, the harmonic is said to be tuned to the fringe frequencies, i.e. the frequencies at the point are equal to the filter frequencies. Thus the derivatives of displacement are discretely sampled. In the current application, the contours are the loci of points with the same strain, given as Δf/f, where Δf is the frequency separation of adjacent channels and f is the frequency of the reference grating. Consider the moiré interferometric fringe pattern shown in fig. 4.5(a). The pattern depicts contours of the displacement component in the 'y' (load) direction of one quarter of an isotropic tensile specimen with a hole. A 1200 lines/mm diffraction grating is used to sample the deformation over the specimen, with an optically generated reference grating with a frequency of 2400 lines/mm. Figure 4.5(b) is the resulting Fourier spectrum of this fringe pattern. The Fourier transform assists in identifying the frequency spread of the fringe pattern and thus enables suitable Gabor channel filters to be selected. One such selection is shown in fig. 4.6. There are a total of 42 equi-spaced channels with a frequency separation of Δf = 0.26 lines/mm between adjacent channels. These channels are used to tune harmonics to the fringe frequencies in the original pattern. Each cross in fig. 4.6 corresponds to one Gabor filter with tuned parameters f_x and f_y. Convolving each of the Gabor filters with the moiré pattern yields a set of response patterns.
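The channel-selection idea of equations (4.3)-(4.6) can be sketched numerically. Everything here is our own toy setup (a synthetic plane-wave "fringe pattern", a 3×3 bank of candidate frequencies, σ = 6 px): each complex Gabor kernel is correlated with a patch of the pattern, and the channel with the largest magnitude response identifies the local frequencies (f_x, f_y).

```python
import numpy as np

def gabor(U, V, sigma, size):
    """Complex 2-D Gabor kernel h(x, y) = g(x, y) exp(j2pi(Ux + Vy))."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    g = np.exp(-0.5 * (X ** 2 + Y ** 2) / sigma ** 2)
    return g * np.exp(2j * np.pi * (U * X + V * Y))

# Synthetic fringe pattern with known frequencies (cycles/pixel)
n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n))
fx, fy = 0.10, 0.05
pattern = np.cos(2 * np.pi * (fx * X + fy * Y))

# Bank of candidate channels (U, V); m(x, y) = |i * h| sampled at the centre
channels = [(u, v) for u in (0.05, 0.10, 0.15) for v in (0.0, 0.05, 0.10)]
size = 25
patch = pattern[n // 2 - size // 2:n // 2 + size // 2 + 1,
                n // 2 - size // 2:n // 2 + size // 2 + 1]
responses = [abs(np.sum(patch * gabor(U, V, 6.0, size))) for U, V in channels]
best = channels[int(np.argmax(responses))]   # tuned channel -> (fx, fy)
```

In a full implementation the response would be evaluated at every pixel (a convolution per channel), exactly as described for the 42-channel bank above.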
Consider a typical Gabor filter corresponding to the cross in the fourth row and second column. This corresponds to the frequency components f_x = 0.26 lines/mm and f_y = 0.92 lines/mm. The response pattern after convolution with the moiré fringe pattern is shown in fig. 4.7(a). The highest response corresponds to points on the image which match the Gabor filter frequencies. Thus this channel is tuned to the uniform grating. Figure 4.7(b) is a similar pattern, but with the Gabor filter tuned to the fringe frequencies f_x = 1.30 lines/mm and f_y = 0.92 lines/mm. The maximum response occurs at a different location.
From the set of 42 Gabor frequency response channels, the maxima at each point on the fringe pattern can thus be identified. Shown in figs. 4.8(a) and (b) are the maxima of the f_x and f_y frequency components divided by the frequency f = 2400 lines/mm respectively. Thus the distribution in the former figure gives the contours of the displacement derivative in the x (horizontal) direction, while that in the latter gives the contours of the derivative of displacement in the y (vertical) direction. From the frequency separation of the Gabor filter, the contour interval is
Δf / f = 0.26 / 2400 = 1.08 × 10⁻⁴    (4.7)
Since the fringe pattern is a contour of the displacement component in the vertical direction, fig. 4.8(b) shows contours of the normal strain (ε_y) in the vertical direction. Figures 4.8(c) and (d) show the moiré of moiré pattern generated by superposing a copy of the moiré pattern on itself and shifting it either in the horizontal (fig. 4.8(c)) or vertical direction (fig. 4.8(d)). The moiré of moiré patterns provide contours of strain proportional to the shift between the two gratings. In this case the contour interval is chosen to be identical to that of the Gabor filter contours. Thus the shift Δx = 1/Δf = 1/0.26 = 3.85 mm (or 34 pixels). The similarity between the moiré of moiré patterns and the multi-channel Gabor filtered patterns is obvious. However, while the moiré of moiré patterns provide contours which need to be further interpreted, the Gabor filter segmentation provides quantitative values of the displacement derivatives directly, and these are thus easily discernible by design or test engineers. Gabor filters provide an effective means of segmenting fringe patterns to contour derivatives of displacement. By a suitable design of the Gabor filters, full-field contours of derivatives of displacement with the desired resolution can be determined. Here the Gabor filter bank comprised equi-spaced increments of fringe frequencies. While this would be an obvious choice, it may not be an appropriate filter design for all situations. Indeed, from an engineering point of view, there is generally a need to segment regions where the stresses (strains) are above a critical value, which is known a priori.
Gabor filters can then be readily used to segment the fringe pattern using a few channels of filters. This would be a rapid and quantitative approach for quantifying displacement fields obtained using optical methods in a form suitable to design and test engineers.
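The contour-interval arithmetic used in this comparison is easy to verify; the snippet below just recomputes equation (4.7) and the equivalent moiré of moiré shift.

```python
delta_f = 0.26        # channel frequency separation, lines/mm
f_ref = 2400.0        # reference grating frequency, lines/mm

contour_interval = delta_f / f_ref   # strain per contour, eq. (4.7)
shift_mm = 1.0 / delta_f             # equivalent grating shift, mm
```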
Figure 4.4 Gabor filter in the spatial and frequency domains.
(a)
(b) Figure 4.5 (a) Moiré fringes showing the contours of the displacement component in the vertical (load) direction for a tensile specimen with a hole; only a quarter of the image is shown. (b) Fourier spectrum of the fringe pattern.
Figure 4.6 Multi-channel Gabor filter design. Each cross represents a pair of frequency components in the horizontal and vertical directions used to generate one channel of the Gabor filter. Frequency separation between adjacent channels is 0.26 lines/mm.
Figure 4.7 Response function for convolution of the moiré pattern with a Gabor filter tuned to frequencies (a) fx = 0.26 lines/mm and fy = 0.92 lines/mm and (b) fx = 1.30 lines/mm and fy = 0.92 lines/mm.
Figure 4.8 Derivatives of displacement along the horizontal and vertical directions: (a) and (b) using Gabor segmentation, and (c) and (d) using moiré of moiré contouring. Contour interval is 1.08x10^-4. Note that the derivative in the vertical direction is the normal strain contour.

4.7 MATLAB® Demonstration
Two MATLAB® routines are demonstrated. The first uses a Gabor filter for fringe segmentation. This is useful if there is interest either in subtracting the background carrier pattern and keeping only the region of interest for further processing, or in identifying regions where the strain (i.e. fringe frequency) exceeds a certain critical value. The second routine is as described above for strain segmentation.
Single-Channel Gabor Filtering
>>singabor
Step 1: Read the pattern into matrix
Enter the file name: singabor.jpg
Step 2: Select one channel of Gabor filter: 1. according to center frequency & orientation or 2. arbitrarily
Enter your choice (1/2): 1
Firstly, there is a need to determine the center frequency and orientation.
Step 1: Choose a line segment along the first order of the spectrum!
Click the left button of the mouse at one point and then click the right button at another point in the vicinity of the first diffraction order; a line linking these two points will be constructed.
Step 2: Choose a line segment along the zero order of the spectrum!
Construct a second line as above.
Next, calculate response function for this channel.
- End of the program for segmentation of fringe pattern -
Multi-Channel Gabor Filtering
>>mgabor
Step 1: Input the data and required parameters
Enter the file name: mgabor.jpg
Press any key to continue
The histogram is displayed next.
Determine image domain for the given pattern by selecting one level from the histogram
Click to select a threshold from the histogram; you can try the point on the above figure
Enter frequency of the reference grating (lines/mm): 2400
The Fourier transform of the fringe pattern is calculated and the spectrum is displayed.
Step 2: Design a filter bank for the given pattern
Press any key to continue
Step i: Choose a line segment along the first order of the spectrum!
Click the left button of the mouse at one point and then click the right button at another point, near the first fundamental frequency point. A line linking these two points is constructed.
Step ii: Choose a line segment along the zero order of the spectrum!
Construct a second line similar to the previous step, but around the zero frequency region.
A set of Gabor filter frequencies as shown is generated. Note that in this demo the set of filters is pre-selected, but these can be changed as per requirements.
Next, calculate response function for all channels.
Channel 1 Channel 2 Channel 3 Channel 4 Channel 5 Channel 6 Channel 7 Channel 8 Channel 9
Step 3: Frequency matching between Gabor filters and fringes
Computation will take some time; press any key to continue.
Press any key
Running...
The segmented strain plots are displayed.
- End of the program for strain contouring based on multi-channel Gabor filters -
4.8 Differentiation of Low-Density Fringe Patterns
All the above routines work best if the fringe density is relatively high. However, in many instances there are only a few fringes. Phase shifting can help improve the deformation measurement accuracy and sensitivity. However, there is still a need for numerical differentiation to get the strains, i.e. the derivatives of displacement. Herein, a new alternative using phase-shifted moiré patterns to directly determine the derivatives of displacement, without the need for unwrapping the phase image or differentiating the displacement data, is proposed. Furthermore, a means of increasing the sensitivity of the derivatives is also shown. These routines are then simply programmed into MATLAB® and a demonstration program is shown. The intensity of a linear carrier fringe pattern with a phase shift α_i can be written as

$$I_i(x,y) = a(x,y) + b(x,y)\cos\bigl(kx + k\,w(x,y) + \alpha_i\bigr) \qquad (4.8)$$
where I_i is the intensity of the i-th frame, a(x,y) represents the background intensity, b(x,y) is the fringe modulation, and k = 2πf0 is the sensitivity vector. The phase shift α_i = i(2π/N), i ∈ [0, N-1]. When two identical carrier patterns, as in equation (4.8), with a shift Δx are superimposed, the overlapped fringe pattern is given as
$$
\begin{aligned}
I_i'(x,y) ={}& I_i(x,y)\,I_i(x+\Delta x,y)\\
={}& a(x,y)\,a(x+\Delta x,y) + \tfrac{1}{2}\,b(x,y)\,b(x+\Delta x,y)\cos\!\bigl(k[w(x+\Delta x,y)-w(x,y)]+\phi_0\bigr)\\
&+ a(x+\Delta x,y)\,b(x,y)\cos\!\bigl(kx + k\,w(x,y) + \alpha_i\bigr)\\
&+ a(x,y)\,b(x+\Delta x,y)\cos\!\bigl(k(x+\Delta x) + k\,w(x+\Delta x,y) + \alpha_i\bigr)\\
&+ \tfrac{1}{2}\,b(x,y)\,b(x+\Delta x,y)\cos\!\bigl(2kx + k[w(x,y)+w(x+\Delta x,y)] + \phi_0 + 2\alpha_i\bigr)
\end{aligned}
\qquad (4.9)
$$

where I'_i(x,y) denotes the intensity of the i-th overlapped pattern, i ∈ [0, N-1]. The second term contains information regarding the derivative of displacement, and can be rewritten as

$$\tfrac{1}{2}\cos\!\bigl(k[w(x+\Delta x,y)-w(x,y)]+\phi_0\bigr) = \tfrac{1}{2}\cos\!\Bigl(k\,\frac{\partial w(x,y)}{\partial x}\,\Delta x + \phi_0\Bigr) \qquad (4.10)$$

where φ0 = 2πf0Δx is the original phase. Adding the N overlapped patterns yields a moiré pattern M'(x,y), as

$$M'(x,y) = \sum_{i=0}^{N-1} I_i'(x,y) \qquad (4.11)$$

where I'_i(x,y), i ∈ [0, N-1], is given by equation (4.9). It is clear that for the N phase-shifted fringe patterns, all terms except for the first two low-frequency terms are eliminated in this summation. This is due to the fact that the summation is done over a full period of the carrier grating. Thus the moiré fringes are highlighted while the background carrier pattern is averaged to a uniform background. Substituting equations (4.9) and (4.10) into equation (4.11) gives the resulting moiré pattern M'(x,y), as
$$M'(x,y) = N\,a(x,y)\,a(x+\Delta x,y) + \frac{N}{2}\,b(x,y)\,b(x+\Delta x,y)\cos\!\Bigl(k\,\frac{\partial w(x,y)}{\partial x}\,\Delta x + \phi_0\Bigr) \qquad (4.12)$$

Subtracting the constant background intensity N a(x,y) a(x+Δx,y) and dividing by (N/2) b(x,y) b(x+Δx,y) makes the intensity M(x,y) proportional to the derivative of displacement, ∂w(x,y)/∂x. Thus

$$M(x,y) = \cos\!\Bigl(k\,\frac{\partial w(x,y)}{\partial x}\,\Delta x\Bigr) = \frac{M'(x,y) - N\,a(x,y)\,a(x+\Delta x,y)}{(N/2)\,b(x,y)\,b(x+\Delta x,y)} \qquad (4.13)$$

where M'(x,y) is computed using equation (4.12). The original phase φ0 is set, without loss of generality, to be a multiple of 2π. In the above section, we made use of phase-shifted carrier fringe patterns to generate moiré fringes representing the derivative of the displacement component along the shear direction. The sensitivity of measurement is a function of the carrier frequency. For instance, the upper limit of the carrier frequency of a digital sinusoidal grating is fc = 0.25 cycles/pixel. Thus, the measurement sensitivity is limited. Consider a carrier pattern with a frequency of fc and let the shear amount be one pitch (= 1/fc) of the carrier grating. Using 0.25 cycles/pixel for the carrier frequency and 4 pixels for the shear amount gives the upper limit of the measurement sensitivity as 1. In order to increase the measurement sensitivity, a digital multiplication method is proposed. Let
{I_i(x,y)}, i ∈ [0, N-1], denote a series of phase-shifted fringe patterns, where N is the number of phase steps. Furthermore, let N = 2N' be an even number, where N' is an integer. When the N' fringe patterns {I_2i(x,y)}, i ∈ [0, N'-1], are substituted into equation (4.13), contours of ∂w(x,y)/∂x with a shear of Δx along the x direction are given as

$$M_1(x,y) = \cos\!\Bigl(2\pi f_0\,\frac{\partial w(x,y)}{\partial x}\,\Delta x\Bigr) = \frac{\displaystyle\sum_{i=0}^{N'-1} I'_{2i}(x,y) - N' A(x,y)}{(N'/2)\,B(x,y)} \qquad (4.14)$$

where {I'_2i(x,y)}, i ∈ [0, N'-1], are the shifted and superposed fringe patterns from each of the original phase-shifted fringe patterns. The background intensity A(x,y) is given by A(x,y) = a(x,y) a(x+Δx,y), where a(x,y) = (1/N') Σ_{i=0}^{N'-1} I_2i(x,y), and the fringe contrast intensity B(x,y) = b(x,y) b(x+Δx,y), where b(x,y) is calculated using the following relation:

$$b(x,y) = \frac{2}{N'}\sqrt{\Bigl[\sum_{i=0}^{N'-1} I_{2i}(x,y)\sin\alpha_{2i}\Bigr]^2 + \Bigl[\sum_{i=0}^{N'-1} I_{2i}(x,y)\cos\alpha_{2i}\Bigr]^2} \qquad (4.15)$$
Furthermore, substituting the remaining N' fringe patterns {I_2i+1(x,y)}, i ∈ [0, N'-1], into equation (4.13) produces a second moiré pattern M_2(x,y). It represents the same derivative as described by equation (4.14), with the same carrier frequency f0. Multiplying M_1(x,y) with M_2(x,y) yields moiré fringes representing ∂w(x,y)/∂x with a frequency of 2f0. Thus the moiré fringe contours of ∂w(x,y)/∂x with twice the carrier frequency are generated. These are governed by

$$M(x,y) = \cos\!\Bigl(4\pi f_0\,\frac{\partial w(x,y)}{\partial x}\,\Delta x\Bigr) = 2\,M_1(x,y)\,M_2(x,y) - 1 \qquad (4.16)$$

Both the fringe density and the measurement sensitivity increase two-fold.
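The cancellation of the carrier terms in equations (4.9)-(4.13) and the sensitivity doubling of equation (4.16) can be verified numerically. The sketch below (Python/NumPy rather than the book's MATLAB; the displacement field and intensity parameters are assumed example values) builds N = 8 phase-shifted carrier patterns, forms the shifted products, and checks that the summed moiré reduces to the cosine of k times the sheared displacement difference.

```python
import numpy as np

# Assumed 1-D example of eqs (4.8)-(4.16)
n, N = 512, 8
f0, dx = 0.25, 4             # carrier 0.25 cycles/pixel, shear of one pitch
k = 2 * np.pi * f0           # sensitivity vector; phi0 = k*dx = 2*pi
a, b = 2.0, 1.0              # background and modulation
x = np.arange(n)
w = 3.0 * np.sin(2 * np.pi * x / n)          # assumed displacement field
alphas = 2 * np.pi * np.arange(N) / N

I = [a + b * np.cos(k * x + k * w + al) for al in alphas]   # eq (4.8)
dW = w[dx:] - w[:n - dx]                     # w(x+dx) - w(x)

def moire(frames):
    """Eqs (4.9)-(4.13): sum of shifted products, background removed."""
    Np = len(frames)
    Mp = sum(f[:n - dx] * f[dx:] for f in frames)            # eq (4.11)
    return (Mp - Np * a * a) / (Np / 2 * b * b)              # eq (4.13)

M = moire(I)
assert np.allclose(M, np.cos(k * dW))        # carrier terms cancelled

# Sensitivity doubling, eq (4.16): even and odd frames give M1 and M2
M1, M2 = moire(I[0::2]), moire(I[1::2])
Mdbl = 2 * M1 * M2 - 1
assert np.allclose(Mdbl, np.cos(2 * k * dW))
print("carrier cancelled; doubled sensitivity verified")
```

With f0 = 0.25 cycles/pixel and a 4-pixel shear, φ0 = 2π, so the cosine argument reduces to k ΔW exactly as the text requires.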
4.9 MATLAB® Demonstration
- momph -
Step 1: Read data into matrix and get information on the phase shifting
Enter phase steps to be used (3/4/5): 4
Enter phase shift amount for the first pattern (degree): 0
Enter phase shift amount for the next pattern (degree): 90
Enter phase shift amount for the next pattern (degree): 180
Enter phase shift amount for the last pattern (degree): 270
Enter file name of the first pattern: Momph_0.tif
Enter file name of the next pattern: Momph_90.tif
Enter file name of the next pattern: Momph_180.tif
Enter file name of the last pattern: Momph_270.tif
The four phase shifted images are displayed.
Step 2: Reconstruction of moiré fringes
Press any key to continue
This reveals the spectrum of the fringe pattern
There is a need to detect the center frequency (carrier frequency) and orientation in the Fourier domain.
Step 1: Choose a line segment along the first order of the spectrum!
Click the left button of the mouse at one point and then click the right button at another point. A line linking these two points is constructed.
Step 2: Choose a line segment along the zero order of the spectrum!
Construct a second line in a manner similar to above.
The resulting derivatives of displacement before and after enhancement are shown.
Step 3: Multiplication of moiré fringes
Press any key to continue
Pattern with double the fringe density is shown.
- End of the program -
4.10 Conclusion
Moiré methods can be used for determination of in-plane displacements, out-of-plane displacements, slopes and curvatures. However, to obtain information in a form useful for application, fringe processing is necessary. Moiré methods are, in most cases, the simplest to process, and most image processing techniques can be applied to moiré fringe pattern analysis. If the fringe density is low, the fractional fringe method can be used; for dense fringes, Fourier transform methods have been used. The use of computer-generated gratings and digital moiré gives the moiré method added flexibility when incorporated with phase-shift processing schemes. The processing of the moiré pattern is generally the first step towards converting the fringe pattern into a form which can be easily interpreted and used. Additional steps, including numerical differentiation and feature detection, are necessary. In this chapter, schemes to directly determine derivatives of displacements have been highlighted. The Gabor filter is proposed as a novel scheme for segmentation of moiré fringes based on the fringe gradient; the scheme can be applied to patterns with high fringe density. Alternatively, moiré of moiré in conjunction with phase shifting enables gradient determination in cases where the fringe density is not high.
Chapter 5
Digital Holography

5.1 Introduction
One of the most significant contributions of holography is in the field of holographic interferometry. High sensitivity and the full-field 3-D visualization of complex shapes with rough, diffusely reflecting surfaces are the major features contributing to its development during recent decades. Besides sharing similar advantages with other interferometric methods, holographic interferometry exhibits unique benefits in static and dynamic measurement of displacement, both out-of-plane and in-plane, for a broad range of objects in engineering problems. Historically, holography is inherently linked with microscopy. Gabor's (5.1) original concept of holography was primarily proposed in 1948 to correct for the spherical aberration of an electron microscope through lensless imaging and thereby to achieve atomic resolution. Although a successful application of the technique to the electron microscope has not materialized so far because of several practical problems, the validity of Gabor's ideas in the optical field was recognized and confirmed in 1950. Spurred by the development of the laser, which provided an available source of coherent light, and enhanced by the off-axis reference beam set-up proposed by E. Leith and J. Upatnieks (5.2), optical holography surged in the 1960s. Among these research forays, holographic interferometry was demonstrated in 1965 by Stetson and Powell (5.3), capable of mapping minute displacements or refractive index changes over an extended field of view with high resolution. This led to Holographic Non-Destructive Testing (HNDT), which showed good industrial promise (5.4). From the view of practical application, however, holographic interferometry was mostly confined to research laboratories in spite of its obvious advantages (5.5-5.9). The reasons for this mainly lie in the following aspects.
First, the high sensitivity of holography proved a bane for engineers, as the stability requirement of holography was not readily compatible with industrial shop-floor environments. Pulsed lasers did alleviate the problem to some extent. Second, the photographic recording process and the subsequent development of the film (hologram) introduced annoying time delays, which prevented on-line inspection. Moreover, reconstruction of the fringe patterns depended on optical systems. For real-time measurement, the exact repositioning of holographic plates after chemical processing was required. With respect to micro-measurement, the application of conventional optical holography has become more and more difficult, since the optical setups may be rather complicated when structures with smaller lateral sizes have to be examined.
Solutions to the above issues are crucial for further advancing the progress of holography. The successful use of charge-coupled devices (CCD) in digital speckle pattern interferometry (DSPI) suggests a more effective means to greatly enhance the flexibility and efficiency of such systems. Utilizing CCD sensors as the holographic recording medium, holograms can be recorded at video rates, thus eliminating the need for wet processing. On the other hand, real-time measurement is frequently required in engineering applications, which implies using the increasing power of computer technology to realize numerical solutions that replace traditional optical procedures. Aided by efficient numerical algorithms, holographic reconstruction and quantitative evaluation can be realized quickly. As with other optical techniques, holographic interferometry needed to tap digital imaging and image processing technology not only for fringe processing but also to record and reconstruct the holograms.
5.2 Digital Holography
Digital imaging for holographic recording first appeared in the development of digital speckle pattern interferometry (DSPI). DSPI is, in fact, image-plane holography with interferometric evaluation based on intensity correlation. Both the amplitude and the phase of the object wave need to be reconstructed for a technique to be classed as holography. To this end, diffraction principles need to be invoked, and digital holography can be classified as Fresnel type and Fraunhofer type. In Fresnel digital holography, as shown in fig. 5.1, the CCD is placed in the Fresnel diffraction zone of the object wave and the interference pattern with a collimated reference beam is recorded. θ is the angle between the object and reference beams. With such an arrangement, the numerical reconstruction is straightforward.
Figure 5.1 Fresnel digital holography

Fraunhofer holography is a specific case of the Fresnel type, with the object placed at a very large distance (infinity) from the CCD. In the Fraunhofer configuration shown in fig. 5.2, a lens is used to generate the far-field diffraction pattern of the object wave.
Figure 5.2 Fraunhofer digital holography

To put it in historical perspective, the original concept of digital holography was first put forward in 1980 by L. P. Yaroslavskii and N. S. Merzlyakov (5.10). Their work focused on computer synthesis of holograms on the basis of numerical descriptions of an object; both holographic recording and reconstruction were done numerically in computers. The premise of the method is that an accurate mathematical description of the object has to be known; relevant scalar wave theories are then applied to simulate the propagation, interference and diffraction of the waves. Later, some work was done to reconstruct holographic images from photographically enlarged holograms using mathematical methods. U. Schnars and W. Jüptner (5.11) proposed the new concept of digital holography in 1994. Off-axis Fresnel holograms were recorded by a CCD and reconstructed numerically. Despite
its short development period, the potential of digital holography is widely acknowledged, both for non-destructive measurement and testing and for imaging and visualization. In digital holography, since holograms are digitally sampled, the information of the optically interfering waves is stored in the form of matrices. Numerical processing can thus be performed to simulate the optical processes of interferometry, spatial filtering, etc. Both amplitude and phase distributions of the wave can be extracted numerically. Furthermore, digital processing allows subtraction of background noise and can be useful in eliminating the zero-order diffraction term for an in-line holographic system. Thus digital processing can compensate for the lack of spatial resolution of digital cameras. To apply digital holography successfully, two features need to be considered:

Recording Medium
By replacing photographic film with charge-coupled devices (CCD), conventional optical recording techniques have to be modified to meet the requirements placed by the use of a CCD. Most importantly, the fine interference pattern needs to be resolved. As long as the sampling theorem is fulfilled, digitally sampled holograms can be reliably reconstructed without any loss. However, the spatial resolution of CCD sensors is generally about an order of magnitude lower than that of conventional recording media. For this reason, the maximum allowable angle between the object and reference waves, namely the interference angle, is limited to a few degrees.
Reconstruction Algorithm

Numerical reconstruction of a digital hologram follows scalar diffraction theory. According to the diffraction properties of the object wave and the system structure, different techniques are employed to formulate the diffraction integrals. The convolution-type diffraction integral and the Fresnel transform diffraction formula, based on the theory of space-invariant linear systems and the Fresnel approximation, respectively, have been developed for the reconstruction of digital holograms. The first approach treats the diffraction integral as a convolution and allows economic calculation of the integral without approximation, whereas in the Fresnel transform method the paraxial approximation is adopted to reconstruct the diffraction field U by
$$U(x_1,y_1) = \frac{\exp(jkD)}{j\lambda D}\exp\!\Bigl[\frac{j\pi}{\lambda D}(x_1^2+y_1^2)\Bigr]\iint h(x_H,y_H)\,R(x_H,y_H)\exp\!\Bigl[\frac{j\pi}{\lambda D}(x_H^2+y_H^2)\Bigr]\exp\!\Bigl[-\frac{j2\pi}{\lambda D}(x_1 x_H + y_1 y_H)\Bigr]\,dx_H\,dy_H \qquad (5.1)$$

where D is the reconstruction distance, λ is the working wavelength, h is the recorded hologram, R is the reference wave and k is the wave number. To apply the Fresnel transform equation, the distance D between the object and the CCD target needs to satisfy the Fresnel diffraction condition given as

$$D^3 \gg \frac{\pi}{4\lambda}\Bigl[(x_1-x_H)^2 + (y_1-y_H)^2\Bigr]^2_{\max} \qquad (5.2)$$
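A discretized form of the Fresnel reconstruction of equation (5.1) reduces to a single FFT of the hologram multiplied by the reference wave and a quadratic phase factor. The 1-D sketch below (Python/NumPy; a unit plane reference R = 1, a single point object, and all distances and pixel counts are assumed illustrative values) records an in-line hologram of a point source and checks that the numerical reconstruction refocuses it at the correct position.

```python
import numpy as np

N = 1024
dpx = 9e-6                 # CCD pixel pitch, 9 um (Megaplus-like, assumed)
lam = 532e-9               # wavelength
D = 0.5                    # recording / reconstruction distance (m), assumed

xH = (np.arange(N) - N // 2) * dpx      # hologram-plane coordinates
x0 = 100 * lam * D / (N * dpx)          # point-object position (falls on bin 100)

# Object wave at the CCD: Fresnel chirp of a point source at x0
O = np.exp(1j * np.pi * (xH - x0) ** 2 / (lam * D))
h = np.abs(O + 1.0) ** 2                # recorded in-line hologram, R = 1

# Eq (5.1) discretized: multiply by R and the Fresnel chirp, then FFT
chirp = np.exp(1j * np.pi * xH ** 2 / (lam * D))
U = np.fft.fft(h * chirp)               # constant prefactor omitted

peak = np.argmax(np.abs(U))
print(peak)   # -> 100: the image refocuses at bin 100, i.e. x1 = x0
```

The zero-order and twin-image terms of the in-line hologram spread out as residual chirps, so the focused peak dominates; separating those terms cleanly is exactly the issue the off-axis geometry addresses below.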
For holographic microscopy, however, this assumption is generally not valid. Therefore, numerical reconstruction based on the Fresnel-Kirchhoff integral is used. It allows imaging with high resolution in cases where light passing through an object containing high spatial frequency components diffracts at a large angle. In the above numerical schemes, either several Fourier transform computations or complex multiplications are required. Despite the increased speed of digital image processing, this is still computationally intensive, especially when a high-resolution CCD is used. The Gabor hologram and the Leith-Upatnieks hologram are the two basic wavefront recording and reconstruction configurations in conventional optical holography. Being a typical in-line system, Gabor holography employs coaxial object and reference beams, while the Leith-Upatnieks hologram, also known as the off-axis hologram, uses a distinct reference wave with an offset angle. Let O(x,y) and R(x,y) represent the complex amplitudes of the object and reference waves on the hologram plane, respectively. After photographic recording and processing, the hologram is illuminated with a plane wave of uniform amplitude B. The resulting image wavefront of a Gabor hologram is given by

$$U(x,y) \propto B\,t_b + B\,|O(x,y)|^2 + B\,R^*(x,y)\,O(x,y) + B\,R(x,y)\,O^*(x,y) \qquad (5.3)$$

where t_b is a constant coefficient, and * denotes the complex conjugate.
It is seen that the real and virtual images of the object are reconstructed simultaneously in Gabor holography, both centered on the hologram axis and accompanied by a coherent background. In an off-axis system, because of the introduction of an offset angle θ, the wavefront reconstructed by the hologram is given as

$$U(x,y) \propto B\,t_b + B\,|O(x,y)|^2 + r\,O(x,y)\exp(j2\pi\nu_0 y) + r\,O^*(x,y)\exp(-j2\pi\nu_0 y) \qquad (5.4)$$

where r denotes the real amplitude of the plane reference wave and ν0 = (sin θ)/λ is the spatial frequency related to the offset angle. The equation implies that the real and virtual images are generated symmetrically on either side of the hologram, at an angle θ from the optical axis. From the analysis of the two holographic configurations, it is found that the Gabor-type in-line hologram suffers from the superposition of the coherent background and the reconstructed images. As a result, observation of one focused image is always disturbed by the presence of the zero-order diffraction wave and the other, defocused, conjugate image. The inseparability of the image terms and the coherent background greatly degrades the image quality and visualization of an in-line hologram. Thus off-axis holographic recording, which spatially separates the three reconstructed beams, was developed. The offset object and reference beams generate an interference pattern whose spatial frequencies depend on the angle between the two interfering beams.
So far, a major issue in digital holography has been the interference-angle limitation induced by the low spatial resolution of currently available CCD sensors, because of which the diffraction wave components frequently cannot be completely separated in off-axis reconstruction. As a consequence, image quality and measurement accuracy are greatly degraded. Even if the overlapped pixels can be retrieved, current off-axis systems have other disadvantages, such as the limited space for the image and the inefficient use of CCD pixels. In contrast, an in-line setup is free from such limitations. Applications of an in-line system, however, depend on effective solutions to its inherent problem of coaxial diffraction wave components. The unique properties of digital holography, such as the digital form of the optical wavefields, together with powerful numerical techniques, offer more flexibility to manipulate the information so as to achieve this goal.
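The spatial separation expressed by equation (5.4) is easy to see in a hologram's spectrum. The sketch below (Python/NumPy; carrier frequency and object phase are assumed example values) records a 1-D off-axis hologram of a unit-amplitude phase object and confirms that the twin-image terms cluster around the carrier frequency ν0, well clear of the zero order.

```python
import numpy as np

N = 512
y = np.arange(N)
nu0 = 64 / N                              # reference carrier: 64 cycles per frame
phase = 2.0 * np.sin(2 * np.pi * y / N)   # assumed smooth object phase
O = np.exp(1j * phase)                    # unit-amplitude object wave
R = np.exp(2j * np.pi * nu0 * y)          # off-axis plane reference wave

h = np.abs(O + R) ** 2                    # recorded hologram intensity
H = np.abs(np.fft.fft(h))

# Zero order sits at bin 0; one image term clusters around bin 64,
# its conjugate around bin N - 64; the band in between stays empty.
assert H[50:80].max() > 100 * H[100:200].max()
print("image terms shifted to the carrier frequency, clear of the zero order")
```

If ν0 is reduced too far (or the object bandwidth increased), the image band slides into the zero order, which is precisely the overlap problem described above.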
To achieve good-quality reconstruction, digital recording of holograms has to satisfy the requirements of the Nyquist sampling theorem. For exact recovery of the image of the original object in digital holography, the sampling theorem implies that the interference fringe spacing must be larger than two pixels. Its direct effect in digital holography is that the interference angle between the object and reference waves has to be limited to within a few degrees. For a high-resolution KODAK Megaplus 4.2i CCD camera with 2048 x 2048 pixels and a pixel size Δ_N of 9 µm x 9 µm, the maximum interference angle is limited to

$$\alpha_{\max} = \frac{\lambda}{2\Delta_N} = 0.0296\ \text{rad} \approx 1.7^{\circ} \qquad (5.5)$$

for λ = 0.532 µm. For efficient use of the CCD pixels, it is important that the sampling theorem be fulfilled across the entire CCD sensor. For an object of a certain lateral size, the recording distance D between the object and the CCD target has to be greater than a minimum value D_min. This is to allow spherical wavelets from each point on the object to interfere with the reference wave in the recording plane at an angle no greater than α_max. For an in-line setup, the maximum possible interference angle, as illustrated in fig. 5.3, occurs between the rays from the lowest end point on an object and the normally incident plane reference wave at the topmost point of the CCD sensor.
Figure 5.3 Digital recording mechanism of in-line holography

If the centers of the object and CCD target are both on the optical axis of the system, then the minimum allowable distance is given as

$$D_{\min:\text{in-line}} = \frac{\Delta_N}{\lambda}\,(N\cdot\Delta_N + L_{ox}) \qquad (5.6)$$
for a small α_max. Here, L_ox is the lateral dimension of the object along the x-axis, and N·Δ_N is the sensor size of the CCD. It is seen that, for a specific recording system, D_min:in-line increases linearly with the object size at a rate of Δ_N/λ. In the off-axis setup, an offset angle θ is introduced to separate the various diffraction wave components in space. To separate the twin image terms from each other and from the light transmitted in directions close to the optical axis, the offset angle θ of the object with respect to the reference wave must be greater than a specific minimum value θ_min given by

$$\theta_{\min} = \sin^{-1}(3W_0\lambda) \qquad (5.7)$$

where W0 is the maximum spatial frequency content of the object. If the object is offset along the x-axis as illustrated in fig. 5.4, the bandwidth 2W_ox of the object along the x-axis is of interest. Therefore, θ_min is approximately given by

$$\theta_{\min} = \frac{3L_{ox}}{2D} \qquad (5.8)$$

where D is the distance from the object to the CCD plane.
Figure 5.4 Digital recording mechanism of off-axis holography
In the off-axis setup, digital recording of a hologram is affected by two interacting factors simultaneously. The position of the object has to be determined carefully so as to ensure the minimum offset angle θ_min and to comply with the limitation of the maximum interference angle α_max. Taking into account these angle requirements, the relationship between the minimum allowable distance D_min:off-axis and the object size is

$$D_{\min:\text{off-axis}} = \frac{N\cdot\Delta_N + L_{ox} + 2b_{\text{off}}}{2\alpha_{\max}} \qquad (5.9)$$

where b_off = θ_min · D_min:off-axis. Thus D_min:off-axis is given as

$$D_{\min:\text{off-axis}} = \frac{\Delta_N}{\lambda}\,(N\cdot\Delta_N + 4L_{ox}) \qquad (5.10)$$
From the equations, we know that in both the in-line and off-axis cases the minimum recording distance increases linearly with object size, as shown in fig. 5.5.
Figure 5.5 Minimum allowable recording distance in digital holography

The pixel size and pixel number characterize the spatial resolution of a CCD sensor. For a digital holography system with given recording parameters, the object has to be placed at a distance 3L_ox/(2α_max) farther in the off-axis setup than in the in-line system. Furthermore, to separate the twin images and avoid overlap with the undiffracted wave component, only a small fraction of the CCD pixels is used in off-axis digital reconstruction. Besides the obvious negative influence on the observation, the inefficient usage of the CCD chip leads to degradation of the measurement resolution and accuracy, both of which are especially important for the measurement of fine structures. In-line digital holography, on the contrary, allows more efficient utilization of the limited sensor area because of its coaxial arrangement. In principle, it is possible to use the whole matrix of CCD pixels for the desired image.
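Equations (5.5), (5.6) and (5.10) can be evaluated directly for the CCD quoted in the text (2048 x 2048 pixels, 9 µm pitch, λ = 0.532 µm); the 10 mm object size below is an assumed example value.

```python
import numpy as np

lam = 0.532e-6    # wavelength (m)
dN = 9e-6         # pixel pitch (m)
N = 2048          # pixels per side
Lox = 10e-3       # assumed object size (m)

alpha_max = lam / (2 * dN)                 # eq (5.5)
D_in = (dN / lam) * (N * dN + Lox)         # eq (5.6), in-line
D_off = (dN / lam) * (N * dN + 4 * Lox)    # eq (5.10), off-axis

print(round(np.degrees(alpha_max), 2))     # -> 1.69 (degrees)
print(round(D_in, 3), round(D_off, 3))     # -> 0.481 m vs 0.989 m
# The off-axis object sits 3*Lox/(2*alpha_max) farther away, as in the text
assert np.isclose(D_off - D_in, 3 * Lox / (2 * alpha_max))
```

For this (assumed) 10 mm object the off-axis geometry roughly doubles the required stand-off distance, which illustrates the in-line setup's advantage in pixel and space utilization.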
In reviewing the current applications of digital holography, besides the important field of interferometry, digital holography is also widely used for 3-D visualization. It is capable of recording full-field information of objects for 3-D imaging, and numerical focusing can be adjusted freely to yield images at different depths. In applications of holographic microscopy, this flexibility is employed to improve the performance of conventional systems. In addition, since the complex amplitude distribution can be obtained, visualization of pure phase objects is possible. Moreover, the obtained phase-contrast image can provide a quantitative measurement of the optical phase distribution at the surface of the test object. Another class of applications of digital holography in 3-D visualization is particle analysis. Current approaches for measuring the spatial distribution or motion of particles operate either pointwise or in a two-dimensional (2-D) manner. In-line digital holography is reported as a new alternative for particle monitoring and characterization. By making extensive use of its flexible numerical focusing capability, the position, size and velocity of particles in 3-D space can be quantitatively determined with high accuracy.
5.3 Digital Holographic Interferometry

Digital recording and computer-based numerical processing of holograms offer many new possibilities in metrological applications. To realize interferometric measurement by holography, determination of the phase variation from interferograms is a key step. From the phase distribution, together with the sensitivity vector, which is given by the geometric arrangement of the optical setup, the displacement vectors and thus the strain/stress fields are determined. In conventional holographic interferometry, the analysis of fringe patterns, which represent the displacement only qualitatively, is complex. To increase the reliability and accuracy, specific interferogram evaluation techniques, such as fringe tracking, phase stepping, Fourier transform evaluation or heterodyning, are employed. All these techniques, however, require additional experimental and computational effort. Moreover, phase information has to be determined from intensity distributions, which are generally disturbed by noise. With digital holographic interferometry, holograms corresponding to the undeformed and deformed states of the object are recorded digitally and evaluated numerically. Not only the intensity but also the phase of the recorded waves can be calculated. This means that the phase variation, which provides quantitative information on the displacement fields, can be evaluated by simply comparing the reconstructed phases. The effect of optical interference can be simulated, without the need to generate the intensity distribution of interferograms. Besides static applications, digital holography
Digital Holography
125
is found especially useful in pulsed laser interferometry for dynamic or vibration analysis. Since the phase can be obtained from a single recording, the phase-shifting procedure, which is nearly impossible in transient processes, is not required. In addition to deformation measurement, off-axis digital holography incorporating the two-wavelength contouring or two-source methods is used for 3-D shape analysis. This could lead to combined measurement of shape and deformation with a single system.
Experimentally, the procedure differs to some extent from conventional double-exposure holography. For film-based systems, two holograms of the object before and after deformation are recorded on the same film. The holograms record the complete object wavefronts before and after deformation. Reconstruction of the processed double-exposed hologram reveals the interference of the two object wavefronts. As with traditional interferometry, this intensity distribution is modulated by cosine fringes as a result of the phase difference between the two wavefronts. Due to the even and periodic nature of the cosine function, phase determination from a single interferogram is indeterminate with respect to its sign and absolute phase value. Fringe processing techniques, such as phase shifting, are thus required.
In digital holographic interferometry, the undeformed and deformed wavefields are recorded and stored as separate image files. Each of these is reconstructed numerically as described above for the in-line or off-axis situation. The reconstruction provides both the amplitude and phase of the wavefront. For display the amplitude is sufficient, but for measurement the phase is important. Cosine fringes as in conventional holographic interferometry can be generated by superposing the two reconstructed wavefronts and displaying the resultant amplitude. For measurement, however, the phases of the two wavefronts are separately extracted. Since the phase is determined as the arc-tangent of the ratio of the imaginary part to the real part of the wavefront, the sign of these two components can be determined consistently in the interval [-π, π]. Consequently the difference in phase between the undeformed and deformed wavefronts can be determined without sign ambiguity. Phase unwrapping then proceeds routinely to remove the modulo 2π phase jumps. The phase map can then be related to the deformation vector d via the sensitivity vector S as
d · S = (λ/2π) Δφ    (5.11)
The sensitivity vector is the vector sum of the unit vectors from the object to the source and from the object to the detector. Proper choice of sensitivity vector permits determination of out-of-plane displacement or in-plane displacement components.
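The phase-difference and displacement computation described above can be sketched numerically. The book's own routines are MATLAB m-files that are not listed here; the following Python/NumPy fragment is an illustrative equivalent (all function and variable names are assumptions). It extracts the wrapped interference phase from two reconstructed complex wavefields and converts it to the displacement component along the sensitivity vector via Eq. (5.11).

```python
import numpy as np

def interference_phase(U0, U1):
    """Wrapped phase difference between two reconstructed complex wavefields.

    np.angle returns atan2(imag, real) in [-pi, pi], so the sign of the
    phase difference is recovered consistently, as described in the text.
    """
    return np.angle(U1 * np.conj(U0))

def displacement_along_S(dphi, wavelength, S_mag):
    """Displacement component along the sensitivity vector, from Eq. (5.11):
    d . S = (wavelength / (2*pi)) * dphi, so d = wavelength * dphi / (2*pi*|S|)."""
    return wavelength * dphi / (2.0 * np.pi * S_mag)

# Synthetic check: impose a known phase ramp between two wavefields.
x = np.linspace(0.0, 1.0, 128)
U0 = np.exp(1j * 0.0) * np.ones_like(x)      # undeformed wavefield
U1 = np.exp(1j * 2.0 * x)                    # deformed: phase ramp 0..2 rad
dphi = interference_phase(U0, U1)
# out-of-plane illumination and observation: |S| = 2 (sum of two unit vectors)
d = displacement_along_S(dphi, wavelength=0.6328e-6, S_mag=2.0)
```

After phase unwrapping (not needed here because the ramp stays within one 2π interval), `d` is the out-of-plane displacement field.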
The next section demonstrates the use of MATLAB® for digital holography imaging as well as digital holographic interferometry.
5.4 MATLAB® Demonstration
Figure 5.6 is the flow chart for the recording and reconstruction of digital in-line transmission and reflection holograms.
[Flow chart of the sample program: record the experimental parameters (working wavelength, recording distance, exposure); save the recorded images (grayscale, tif or jpg format); read the image files; input the system parameters (working wavelength, pixel size in X- and Y-axis, reconstruction distance = 58 or 74 mm); reconstruct the image; compute the intensity of the image; graphic presentation of the reconstructed image.]
Figure 5.6 Flow chart for the recording and reconstruction of digital in-line transmission and reflection holograms
For an in-line transmission hologram, a single hologram of a three-dimensional object is recorded using a collimated beam. For this example, the set-up and images shown in Fig. 5.7 were used.
Figure 5.7 Schematic of set-up for digital in-line holography (the object is a transparency illuminated by the collimated beam)
For this demonstration program the object comprises a cross pattern and a circular dot pattern placed at distances of 58 and 74 mm respectively from the recording CCD plane. The hologram is recorded and stored in the computer. To process this hologram, run the dhinline.m program from within MATLAB®. The program flows as follows:
>> dhinline
Choose the system configuration in-line transmission hologram (1) or in-line reflection hologram (2)? (1 or 2): 1
Step 2 is to input some system parameters
input the system parameters: working wavelength (micron)= 0.6328 pixel size in X-axis (micron)= 6.8 pixel size in Y-axis (micron)= 6.8 Reconstruction distance (mm)= 58 or 74 Reconstruction in progress ..... be patient ....
The reconstructed images are shown depending on the reconstruction distance chosen.
The figure on the left corresponds to a reconstruction distance of 58 mm while the one on the right shows the circular dot pattern reconstructed 74 mm from the hologram plane. The bright background from the undiffracted beam is also quite apparent, but in this case it does not degrade the image, which is made of black dot patterns. To demonstrate subtraction of the background bright square due to the reconstruction beam, an in-line reflection hologram of a small beam is recorded. Before the hologram is recorded, the intensity distributions of the object beam and the reference beam are separately recorded. Thus in this case three images are used in the reconstruction. The object and reference beams that were individually recorded are used to preprocess the hologram prior to reconstruction. Details of this can be found by opening the m-file.
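The numerical reconstruction step itself is not listed in the text at this point. As a hedged illustration, one common way such reconstructions are implemented is the single-FFT discrete Fresnel transform, sketched below in Python/NumPy (the function and variable names are assumptions, not the book's m-file):

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, dy, z):
    """Reconstruct a digital hologram at distance z with the discrete
    Fresnel transform (single-FFT formulation, plane reference wave assumed).

    hologram   : 2-D array of recorded intensities
    wavelength : same length units as dx, dy and z
    dx, dy     : CCD pixel pitch
    z          : reconstruction distance
    Returns the complex wavefield in the image plane.
    """
    Ny, Nx = hologram.shape
    x = (np.arange(Nx) - Nx / 2) * dx
    y = (np.arange(Ny) - Ny / 2) * dy
    X, Y = np.meshgrid(x, y)
    # quadratic phase (chirp) factor of the Fresnel kernel
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

# Example with the parameters quoted in the demo (units assumed to be metres)
h = np.random.rand(256, 256)                     # placeholder hologram
U58 = fresnel_reconstruct(h, 0.6328e-6, 6.8e-6, 6.8e-6, 58e-3)
intensity = np.abs(U58)**2                       # what the program displays
```

Running it again with z = 74e-3 would bring the second object plane into focus, mirroring the two reconstruction distances used in the demo.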
>> dhinline
Choose the system configuration in-line transmission hologram (1) or in-line reflection hologram (2)? (1 or 2): 2
input the system parameters: working wavelength (micron)= 0.532 pixel size in X-axis (micron)= 9 pixel size in Y-axis (micron)= 9 Reconstruction distance (mm)= 485.5 Reconstruction in progress ..... be patient ....
Generally, for the best reconstruction, as with optical holography, the recorded image and the reconstruction parameters, such as the wavelength of light and pixel size, all have to be exactly as they were when the recording was made. This is the case for optical reconstruction of holograms as well. Digital holography allows us to experiment with what would happen if one of the parameters were not properly selected. Consider the recording stage of the hologram. For proper reconstruction, the pixel size of the recorded hologram has to be precisely known. This would imply that the image is saved in the raw format, where each pixel in the image file corresponds to an object pixel during recording. However, this would mean that the image file size is large. Various image formats are available which can compress the information. In some cases this results in lossy images. If these images were used to reconstruct the hologram, we would not get the best image. The optical counterpart of this is to try to reconstruct the hologram from a portion of the original holographic plate. The following demo shows the result of compressing the image file of the digital hologram of a three-bar test target. Only a portion of the original hologram is used for reconstruction.
>> dhinlinej
Choose the system configuration in-line transmission hologram (1) or in-line reflection hologram (2)? (1 or 2): 2
input the system parameters: working wavelength (micron)= 0.532 pixel size in X-axis (micron)= 9 pixel size in Y-axis (micron)= 9 Reconstruction distance (mm)= 485 Reconstruction in progress ..... be patient ....
Compare this with the reconstructed image from the digital hologram saved as a raw image file and the difference is apparent. Digital holography, however, allows the flexibility of trying different combinations to get the best result.
Off-Axis Holography
As discussed earlier, off-axis digital holograms are restricted by the resolution of the CCD in their ability to have a large angle between the reference and object beams. The flow chart for reconstruction of an off-axis digital hologram is shown in Fig. 5.8.
Figure 5.8 Flow chart for reconstruction of an off-axis digital hologram
The following demonstrates the off-axis reconstruction of a three-bar test target.
>> dhoffaxis
Step 1. Read the image file into matrix
Step 2. Numerical reconstruction of the images
input the system parameters: working wavelength (micron)= 0.532 pixel size in X-axis (micron)= 9 pixel size in Y-axis (micron)= 9
reconstruction distance (mm)= 747.5
The following image results.
Note that only part of the image provides useful information and hence the resolution of the off-axis system is lower than that of the in-line system. Of course there is no need to eliminate the undiffracted beam as it is spatially separated from the first-order holographic reconstruction.
Digital Holographic Interferometry
For this demonstration, a micro-cantilever beam fabricated using silicon MEMS technology is used. Since the object is small (length = 5 mm and width = 0.8 mm), the resolution of the 2048x2048 Kodak Megaplus camera is sufficient to resolve the hologram from the whole object. In-line reflection digital holograms are recorded of the object before and after deformation. The flow chart shown in Fig. 5.9 describes the progress of the program dhinter.m, the steps of which are highlighted below
>> dhinter
Step 1. Read the image files into matrices dimensions of the images
Ny= 1024 Nx = 1024
Step 2. Pre-processing of digital holograms
Step 3. Numerical reconstruction of the images input the system parameters: working wavelength (micron)= 0.532 pixel size in X-axis (micron)= 9 pixel size in Y-axis (micron)= 9 reconstruction distance (mm)= 485.5 Reconstruction in progress ...... Choose to reconstruct the real (1) or virtual (2) image? (1 or 2): 1
Step 4. Numerical simulation of the interferogram
Step 5. Numerical calculation of the phase distributions
Numerically calculated phase distribution of the stressed state
Step 6. Numerical evaluation of the interference phase
Step 7. Determination of the displacement
Having calculated the interference phase, the displacement field can be determined if the experimental parameters of recording are input. The program proceeds further as below
illumination angle (degree)= 45 length of the object (mm)= 5 width of the object (mm)= 0.8 Graphic presentation of the displacement Choose the divisions in the length = 60 Choose the divisions in the width = 20
As a further example, holograms of a MEMS pressure sensor are digitally recorded before and after deformation. These are the files PS1.jpg and PS2.jpg, which can be found in the images directory. In addition, the files PSO.jpg and PSR.jpg are the images of the object beam and the reference beam, which are necessary for preprocessing of the hologram to remove the bright undiffracted beam. Substitute these images in the program 'dhinter' to process this example.
5.5 Conclusion
Digital holography offers many advantages to offset the disadvantage of the low resolution of the CCD recording system. With the ability to subtract the background noise numerically, in-line holography becomes a viable alternative. This permits one to use the full range of the CCD pixels. As far as digital holographic interferometry is concerned, direct subtraction of the phases of the undeformed and deformed wavefronts allows deformation to be deduced without traditional phase processing techniques. MATLAB® processing permits rapid evaluation of alternative approaches and schemes. One typical example is to reconstruct using a different light source. In traditional holography, and this includes digital holograms, recording and reconstruction have to be done with the same laser. In physical holography, the choice of wavelengths is to a certain extent limited, albeit with developments in laser technology, continuously tunable lasers are becoming commonplace. However, digital reconstruction allows one to choose any arbitrary wavelength to get a better understanding of the reconstruction process. But the reconstruction process also depends on the pixel size and reconstruction distance. Thus changing the reconstruction light source might affect the reconstruction distance. The MATLAB® routine allows one to readily test this. Figure 5.10 shows the reconstruction of the in-line transmission hologram recorded as shown in Fig. 5.7. Reconstructions with both the 633 nm laser and a 532 nm laser at several distances are shown. The differences in reconstruction distances are obvious.
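The wavelength dependence of the reconstruction distance can be estimated with a simple back-of-the-envelope model. If the hologram of each object point is treated as a recorded Fresnel zone plate, its focal length scales inversely with the reconstruction wavelength, giving roughly z2 = z1·λ1/λ2. This is an illustrative estimate under that zone-plate assumption, not a formula taken from the book, and it need not match the exact distances in Fig. 5.10:

```python
# Hedged sketch: zone-plate scaling of the reconstruction distance.
# A hologram recorded at distance z1 with wavelength lam1 comes to focus
# at roughly z2 = z1 * lam1 / lam2 when reconstructed with wavelength lam2.

def rescaled_distance(z1_mm, lam1_nm, lam2_nm):
    """Estimated reconstruction distance after a wavelength change."""
    return z1_mm * lam1_nm / lam2_nm

# HeNe recording at 58 mm, reconstructed with a 532 nm laser
z2 = rescaled_distance(58.0, 632.8, 532.0)   # roughly 69 mm
```

The MATLAB® routine lets one verify such estimates directly by scanning the reconstruction distance at each wavelength.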
[Flow chart of the sample program dhinter.m (Figure 5.9): read the image files; pre-processing to suppress the background noise; input the system parameters for digital holographic reconstruction; numerical reconstruction of the image wavefields U0 and U1; numerical simulation of the interferogram UI = U0 + U1; numerical calculation of the phase distributions ph0 and ph1; interference phase modulo 2π, DPh; input the experimental parameters (length, width); displacement calculation; graphic presentation of displacement.]
[Figure 5.10 panels: reconstructions at 58 mm, 60 mm and 62 mm; (a) reconstruction with 633 nm wavelength light; (b) reconstruction with 532 nm wavelength light.]
Figure 5.10 Effect of reconstruction wavelength on the reconstruction distance
Chapter 6 Speckle Methods
6.1 Laser and White Light Speckles
Speckle methods come in two traditional flavors: laser speckles and white light speckles. Laser speckles arise from multiple interference of light scattered from an object illuminated by a coherent laser beam, and white light speckles are a physically generated speckle pattern on the surface of the object. Speckle photography evolved out of the seminal paper by Burch (6.1), which described the application of speckle - noise in a coherently illuminated object - for the purpose of displacement and deformation measurement. In this paper, speckles were generated due to the multiple interference of light scattered from an object illuminated by a coherent laser. These speckles will be referred to as 'virtual speckles', since they only exist when the laser beam is on. A double-exposed speckle pattern is recorded of the object before and after deformation. This specklegram is optically filtered using the point-wise or the whole-field methods to delineate the deformation. A variety of applications of these speckles have been demonstrated, from static to dynamic displacement and strain measurement (6.2). However, a few of the drawbacks of the technique also came through, primarily that of speckle de-correlation. One solution to this was the development of the white light speckle method (6.3). In this method, artificial (or 'real') speckles are created on the surface of the object, which is then illuminated with white light and recorded and processed as for laser speckles. Since the speckles are real and not virtual (as with laser speckles), their movement can be understood more easily. The white light speckle method showed a similar breadth of application for both static and dynamic problems. While optical filtering, using the point-wise or whole-field methods, is relatively straightforward and fast, quantification of this data is slower and more cumbersome.
Digital image processing techniques were adopted for the acquisition and quantification of the specklegrams for analysis (6.4). Two streams of processing emerged. The first approach made use of optical filtering to first delineate the displacement information in the form of Young's fringes, which were then digitized and analyzed. A whole-field map of the displacement distribution could thus be obtained by evaluating the displacement at an array of points on the specklegram. The second approach used the individual speckle patterns themselves along with digital speckle correlation (6.5) to extract the deformation. This approach has gathered more steam and is currently being exploited for 3-D displacement measurement in micro-mechanics. However, due to high image magnifications, there is a need to create very small artificial speckles.
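The core of the second approach, digital speckle correlation, can be illustrated with a minimal sketch. The fragment below (Python/NumPy, names assumed; real implementations add sub-pixel interpolation and subset shape functions) finds the integer-pixel displacement of one subset by maximising the zero-normalised cross-correlation over a search window:

```python
import numpy as np

def correlate_subset(ref, cur, top, left, size, search):
    """Integer-pixel displacement (u, v) of a square subset, found by
    maximising the zero-normalised cross-correlation (ZNCC)."""
    f = ref[top:top+size, left:left+size].astype(float)
    f = (f - f.mean()) / (f.std() + 1e-12)
    best, best_uv = -np.inf, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            g = cur[top+v:top+v+size, left+u:left+u+size].astype(float)
            g = (g - g.mean()) / (g.std() + 1e-12)
            score = (f * g).mean()          # ZNCC for this trial shift
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv

# Synthetic check: shift a random speckle pattern by (u, v) = (3, 2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
uv = correlate_subset(ref, cur, top=20, left=20, size=16, search=5)
```

Repeating this for an array of subsets yields the whole-field displacement map the text refers to.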
There is, nevertheless, a need to digitally process either the speckle correlation fringes or the speckle patterns themselves. In this section, a few of the standard processing routines will be highlighted, first as standalone programs run by typing the appropriate m-file in the MATLAB® command window and then using a menu-driven program within MATLAB® which provides a much more user-friendly interface. Note that the speckle correlation method was introduced in the chapter on basic image processing and hence will not be covered again.
6.2 MATLAB® Demonstration
Determining the spacing and orientation of the Young's fringes, obtained using point-wise filtering of a double-exposed speckle pattern, provides a sensitive means of determining the displacement direction and magnitude. The direction of the displacement is perpendicular to the fringe orientation and the magnitude of displacement is inversely proportional to the fringe spacing. As an example, Young's fringes at six selected points along a cantilever beam, loaded at the end, are used. The Young's fringes were pre-recorded and MATLAB® is used to determine the orientation and magnitude of the fringes.
>> youngdemo1
Point-wise filtered fringes analysis First the fringe orientation and spacing are determined by the Fourier transform method using function young_ftm type help young_ftm in command window for info The 1st pattern is shown and processed.
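The Fourier transform step can be sketched as follows. The book's young_ftm m-file is not listed, so this Python/NumPy fragment is an assumed illustrative equivalent: it locates the spectral peak of the fringe pattern and converts it to a fringe spacing and a fringe-normal angle.

```python
import numpy as np

def young_ftm_sketch(fringes):
    """Estimate Young's fringe spacing (pixels) and fringe-normal angle
    (degrees) from the peak of the FFT spectrum of a square pattern."""
    N = fringes.shape[0]
    F = np.abs(np.fft.fftshift(np.fft.fft2(fringes)))
    F[N // 2, N // 2] = 0.0                      # suppress the DC term
    ky, kx = np.unravel_index(np.argmax(F), F.shape)
    fy = (ky - N // 2) / N                       # cycles per pixel
    fx = (kx - N // 2) / N
    spacing = 1.0 / np.hypot(fx, fy)             # fringe spacing in pixels
    angle = np.degrees(np.arctan2(fy, fx))       # direction of fringe normal
    return spacing, angle

# Synthetic vertical fringes with a 16-pixel period.
y, x = np.mgrid[0:128, 0:128]
pattern = 1.0 + np.cos(2 * np.pi * x / 16.0)
spacing, angle = young_ftm_sketch(pattern)
```

Because the displacement magnitude is inversely proportional to the fringe spacing, `spacing` and `angle` translate directly into the displacement at the filtered point.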
The application of the Fourier transform method is straightforward as far as the fringe interpretation is concerned. The orientation and spacing can be readily deduced from the spectrum in the transform plane. Similarly, five other images are processed and displayed. A key press is required after each image is processed, and a statement such as Press any key to calculate the nth pattern is displayed as a prompt in the command window, where 'n' is the next image in the sequence. The final window after all six images are processed is shown below:
Next the orientation and spacing are re-calculated using the Radon transform method using function young_rtm type help young_rtm in command window for info The Radon transform is ideally suited for Young's fringe analysis, since it obtains the orientation and spacing by projecting the fringes onto a line. All lines are scanned from 0 to 180 degrees, and the angle which gives the best contrast of the fringe pattern is perpendicular to the desired fringe orientation. For a better understanding of the Radon transform method, run the MATLAB® demo m-file 'ordemo' and follow the instructions in the lower window. The six images are sequentially analysed and displayed as before, with the user requested to press a key after each image analysis.
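The projection idea behind the Radon scan can be sketched without the Image Processing Toolbox. In this assumed Python/NumPy illustration (not the book's young_rtm), each pixel is projected onto a line at a trial angle and binned; the projection whose profile has the highest contrast (variance) runs along the fringes, so that angle is the fringe normal for a pattern whose intensity varies along x:

```python
import numpy as np

def fringe_angle_rtm(img, angles_deg):
    """Radon-style scan: return the projection angle (degrees) giving the
    highest-contrast (largest-variance) projected profile."""
    N = img.shape[0]
    y, x = np.mgrid[0:N, 0:N]
    x = x - N / 2.0
    y = y - N / 2.0
    best_angle, best_var = None, -1.0
    for a in angles_deg:
        t = np.radians(a)
        # signed distance of each pixel from a centre line at angle a
        s = x * np.cos(t) + y * np.sin(t)
        bins = np.round(s).astype(int) + N        # shift to non-negative
        total = np.bincount(bins.ravel(), weights=img.ravel())
        counts = np.bincount(bins.ravel())
        profile = total[counts > 0] / counts[counts > 0]
        if profile.var() > best_var:
            best_var, best_angle = profile.var(), a
    return best_angle

# Vertical fringes (intensity varies along x): expected answer 0 degrees.
yg, xg = np.mgrid[0:64, 0:64]
fringes = 1.0 + np.cos(2 * np.pi * xg / 8.0)
theta = fringe_angle_rtm(fringes, np.arange(0.0, 180.0, 1.0))
```

Scanning all 180 angles in 1-degree steps is what makes the pure Radon approach slow, which motivates the hybrid method described next in the demo.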
It would appear that the process is time consuming if all angles between 0 and 180 degrees in steps of 1 degree are analyzed. Furthermore, if the orientation needs to be determined with greater accuracy, smaller steps would be required, further increasing the processing time. An alternative would be to use the Fourier transform method to get a rough estimate of the spacing and orientation and then use the Radon transform over a small range of angles to fine-tune the result. The hybrid approach, which follows next in the same demo, does exactly that. Note a slight increase in speed as compared to the pure Radon transform approach. The program continues... Finally the orientation and spacing are redetermined by the hybrid transform method using function 'young_htm'. type help young_htm in command window for info Press any key to process the 1st pattern After the first pattern is processed and displayed, key presses by the user process and display the next five images sequentially.
--This finishes the Young fringes analysis demo--
Typing 'youngdemo' at the MATLAB® command prompt runs an alternate version of the same demo. In this case the image is shown in the top half and the hints as well as the MATLAB® commands are shown in the bottom window. The Start button initiates the demo, while the Next button is analogous to the key press in the previous section. Other buttons are self-explanatory. Note that the Radon transform method takes some time to process, so one needs to wait a bit. Pressing the Next button at this stage would initiate the next step as soon as the current one is completed.
6.3 Sampled Speckles (6.6)
A third form of speckles called 'sampled speckles' was recently proposed (6.6). The formation of these speckles and their application for displacement measurement will be described here in some detail since these are relatively new developments. However, instead of showing figures, the results of the demo program 'sspeckle1' (or 'sspeckle' for the window version) are shown.
Digital imaging records a scene by sampling it into a number of pixels, each of whose intensity depends on the amount of light received by it. A typical gray scale image would thus have pixels with 256 shades of gray. Indeed, these can be considered as a random distribution of intensities - i.e. sampled speckles.
>> sspeckle1
Step 1: show sample speckle by enlarging the image Press any key to view the original image Press any key to view the locally enlarged image please observe the sample speckles Press any key to view a further enlarged image please observe the sample speckles Press any key to view a further enlarged image please observe the sample speckles
press any key to continue The shapes of these speckles are square but they have a random intensity distribution. The smallest speckle size corresponds to one pixel. Larger speckles can be thought of as a collection of pixels, which can be obtained by digitizing the image with a larger sample size.
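The idea of obtaining larger sampled speckles by digitizing with a larger sample size can be sketched numerically. In this assumed Python/NumPy illustration (not the book's sspeckle1 code), a random gray-level image of one-pixel speckles is block-averaged and each sample is expanded back to its block, producing square speckles of the chosen size:

```python
import numpy as np

def resample_speckles(img, block):
    """Produce larger 'sampled speckles' by block-averaging the image with
    a larger sample size, then expanding each sample back to its block so
    the result can be viewed at the original scale."""
    Ny, Nx = img.shape
    Ny, Nx = (Ny // block) * block, (Nx // block) * block
    img = img[:Ny, :Nx]                                  # trim to whole blocks
    coarse = img.reshape(Ny // block, block,
                         Nx // block, block).mean(axis=(1, 3))
    return np.kron(coarse, np.ones((block, block)))      # one value per block

rng = np.random.default_rng(1)
fine = rng.random((64, 64))            # 1-pixel speckles: random gray levels
coarse4 = resample_speckles(fine, 4)   # 4x4-pixel sampled speckles
```

Each 4x4 block of `coarse4` carries a single random intensity, which is exactly the square-speckle structure the demo asks the user to observe.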
Step 2: show sample speckle by further sampling Press any key to view the original image Press any key to view the further sampled image please observe the sample speckles Press any key to view the further sampled image please observe the sample speckles Press any key to view the further sampled image
please observe the sample speckles press any key to continue
Digital Fourier transform of a typical sampled speckle pattern provides a halo similar to that which would be observed in an optical filtering (point-wise or whole-field) set-up with the laser beam illuminating the entire speckle pattern. Since digital transforms are involved, artifacts due to the finite window and truncated sampling data are also visible. However, the halo, which is circular (or elliptical) in the case of laser or white light speckles and optical filtering, is square in shape for sampled speckles.
Step 3: sample speckle and its spectrum Press any key to view the original image. Press any key to view its spectrum please observe the halo
A rigid body displacement of the sampled speckle photograph is then added to the original undeformed pattern to give a sampled double exposed specklegram. When this specklegram is transformed, characteristic Young's fringes are seen. Press any key to view the simulated double-exposure image Press any key to view its spectrum please observe the halo and the fringe
If a small sub-image of this pattern is transformed, the same Young's fringes will be visible, however with larger speckles. The fringe pattern spans the entire halo as for traditional speckle patterns. Alternatively, in speckle photography, multiple images of the object deformed by equal amounts in between exposures will provide a fringe pattern with enhanced contrast (6.1). This can be achieved in sampled speckle photography by recording the blurred image of a moving object. This image can be considered as the sum of many single images translated by the same amount. An FFT of this image shows the enhanced and sharpened fringe contrast as expected. However, one artifact of this, not seen with optical speckles, is that the halo size becomes smaller in the direction of the displacement. This is due to the fact that the multiple exposure blurs the speckle, effectively increasing the speckle size and thus reducing the halo size. Step 4: blurred (multi-exposure) image and its spectrum Press any key to view the blurred image Press any key to view its spectrum please observe the halo and the fringe press any key to continue
Finally, speckle size also contributes to degradation of fringe contrast. The smaller the illuminating beam, the larger the secondary speckles. Since the digital analogue of the illuminating beam is the image size, a smaller image or a subsection of the main image will give rise to larger speckles. Step 5: influence of image size Press any key to view the spectrum of the 16x16 image
Press any key to view the spectrum of the 32x32 image
Press any key to view the spectrum of the 64x64 image
Press any key to view the spectrum of the 128x128 image
Note that as the speckle size becomes smaller, the fringe contrast is improved, as is the halo size.
6.4 Defocus Detection Using Speckles (6.6)
Some of the existing methods for estimating focus errors are based on power spectral analysis. In the Fourier domain, defocusing results in a reduction of power at higher Fourier frequencies. This results in blurring of the image in the real domain. In general, an imaging instrument can be thought of as a low pass filter. A method to estimate the amount of defocus using speckle correlation is demonstrated here. The principle behind this approach is that the speckles are present only on the surface of the object. Hence any defocus would mean that the speckles would be enlarged and not as sharp. This would affect their contrast as well as their spatial frequency content. Indeed this feature was used with optical transformation to determine the amount of defocus (6.7). In this approach, an alternate method is used. If the speckle pattern is translated as a rigid body and then added to the original speckle pattern, a double-exposed pattern is obtained. Fourier transformation of this pattern would result in a uniform fringe pattern. The contrast of these fringes depends on the focus of the original speckle pattern. Thus by determining the contrast of the fringe patterns for different speckle patterns of the object recorded with different amounts of focus, the image with the best focus can be determined. Consider a speckle pattern given by the intensity distribution f(x,y). Let the pattern be shifted in the x direction by an amount δx. The resulting image g(x, y) obtained by superposing the original and shifted images is given as:
g(x, y) = f(x, y) + f(x + δx, y)
(6.1)
Fourier transformation of this equation gives the familiar Young's fringes. To enhance the contrast of the fringes, the two-dimensional intensity distribution is converted to a one-dimensional intensity distribution by adding the array values in a direction perpendicular to the direction of shift. Let I(u, v) be the intensity distribution in the Fourier domain. The two-dimensional intensity distribution is summed up along the direction perpendicular to the fringes such that the resulting fringe pattern I(v) can be expressed as,
I(v) = ∫ I(u, v) du
(6.2)
To get the width of the fringes we perform an auto-correlation on I(v) along the fringe direction,
K(v) = ∫ I(v') I(v + v') dv'
(6.3)
Visibility V is defined as,
V = (Kmax(v) - Kmin(v)) / (Kmax(v) + Kmin(v))
(6.4)
The visibility V of the auto-correlation fringes is plotted for the images obtained at different image distances. The position of the imaging instrument where the visibility of the auto-correlation fringes is at a maximum is the best focus plane. Figure 6.1(a) is the two-dimensional intensity distribution of Young's fringes for an image at the true focal plane. Figure 6.1(b) shows the Young's fringes obtained by summing the two-dimensional fringe pattern in a direction perpendicular to the fringe direction. These fringes have been obtained using an image recorded at the true focal plane. Figure 6.2(a) is the two-dimensional intensity distribution of Young's fringes and Fig. 6.2(b) shows the Young's fringes obtained by summing the two-dimensional intensity distribution in a direction perpendicular to the fringe direction. This image is recorded at an image plane located 30 μm from the best focus plane. To estimate the goodness of focus, we run an auto-correlation routine on the fringe pattern and obtain the fringe width. The visibility of the auto-correlation fringes is plotted for different images. The image for which the visibility of the auto-correlation fringes is at a maximum is the best focus image. The auto-correlation function has higher visibility in the frame recorded closer to the true focal plane. The auto-correlation function obtained with frames for different positions of the image plane is plotted in Fig. 6.3. It can be seen from Fig. 6.3 that the visibility of the Young's fringes is highest for the image recorded at the best focus plane. This is used to find the true focal plane in an imaging instrument where the depth of focus is of the order of a few microns. The low complexity of this technique ensures fast analysis and hence it can be adapted for real-time focus error correction in imaging instruments.
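The whole pipeline of Eqs. (6.1)-(6.4) can be sketched in a few lines. The book's defocus.m is not listed, so the Python/NumPy fragment below is an assumed illustrative equivalent: shift-and-add the speckle pattern, Fourier transform, collapse the spectrum perpendicular to the fringes, autocorrelate, and form the visibility. Box blurring is used here as a crude stand-in for defocus.

```python
import numpy as np

def fringe_visibility(speckle, shift=8):
    """Defocus metric following Eqs. (6.1)-(6.4)."""
    g = speckle + np.roll(speckle, shift, axis=1)   # Eq. (6.1), circular shift
    g = g - g.mean()                                # drop the DC term
    I = np.abs(np.fft.fft2(g)) ** 2
    Iv = I.sum(axis=0)                              # Eq. (6.2): sum over v
    Iv = Iv - Iv.mean()
    K = np.correlate(Iv, Iv, mode='same')           # Eq. (6.3)
    return (K.max() - K.min()) / (K.max() + K.min())  # Eq. (6.4)

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))                        # in-focus speckle pattern
k = np.ones(5) / 5.0                                # crude defocus: 5-px box blur
blur_rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blur_rows)
v_sharp = fringe_visibility(sharp)
v_blurred = fringe_visibility(blurred)
```

As the text predicts, the sharp pattern yields a higher fringe visibility than its blurred counterpart, so scanning this metric over a focus series picks out the true focal plane.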
Figure 6.1: (a) Two dimensional intensity distribution of Young's fringes obtained using a focused image. (b) Plot of the intensity distribution of the fringes obtained by adding the intensity distribution perpendicular to the direction of displacement to enhance the fringe contrast.
Figure 6.2 Same as Fig. 6.1 but for an image recorded at a distance of 40 micron from the image of Fig. 6.1.
Speckle Methods
157
[Plot residue: visibility of the auto-correlation fringes (ordinate, approximately 0.3 to 0.5) versus displacement on either side of the true focal plane in microns (abscissa), with the true focal plane at 0.]
Figure 6.3: Visibility of the Young's fringes obtained as a function of defocus distance. Visibility of the auto-correlation fringe is the highest for the image obtained at the true focal plane.
6.5 MATLAB® Demonstration
For the MATLAB| demonstration, speckle patterns recorded at 50X magnification of a silicon wafer surface at 5 different focus positions in intervals of 2 ~tm are used. A m-file called defocus.m contains the routine to generate the Young's fringes and calculate the contrast of the fringes. This routine is called for all the images and the contrast plotted. The speckle pattern, which gives best contrast, is the one, which is in best focus. The program proceeds as follows: ~ dfdemo 1
Step 1: Read the first image file and calculate its contrast by speckle method
Current plot held
Press any key to continue
Step 2: Read and process the second image file
Press any key to continue
Step 3: Read and process the third image file
Press any key to continue
Step 4: Read and process the fourth image file
Press any key to continue
Step 5: Read and process the fifth image file
Press any key to continue
The third image appears to be in best focus. Typing 'dfdemo' at the MATLAB® command prompt runs the window version of this program.

6.6 Curvature Contouring
Electronic Speckle Pattern Interferometry (ESPI) and Digital Shearing Speckle Interferometry (DSSI) are the two workhorses of the speckle technique for real-time deformation and slope contouring. Unlike the earlier techniques of laser, white light and sampled speckle photography, speckle interferometry is similar to holography; indeed, the set-up is not much different from the in-line holographic set-up, and replacing the reference beam with a shearing element converts the ESPI system into a DSSI system. Unlike holography, however, processing of the images is based on intensity correlation of the
speckle interferometric patterns recorded before and after deformation. This is accomplished primarily by subtracting the speckle interferometric pattern of the deformed object (fig. 6.4(b)) from that of the undeformed object (fig. 6.4(a)). The subtracted image (fig. 6.4(c)) is contrast enhanced prior to display, which reveals the resulting correlation fringes (fig. 6.4(d)). If the correlation fringes are to be observed in real time, the undeformed image is stored in a buffer, and the live deformed image is subtracted from it, rectified and filtered before display.
Figure 6.4: (a) Undeformed sheared speckle interferometric pattern of a disk. (b) Deformed sheared speckle interferometric pattern of the disk subjected to a central load. (c) Subtraction of (a) - (b). (d) Contrast enhancement of (c).
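The subtraction-correlation step can be sketched as follows; the file names are placeholders, and imadjust, stretchlim and medfilt2 (Image Processing Toolbox functions) are used here as one plausible rectify-and-enhance chain:

```matlab
% Sketch of digital speckle correlation by subtraction (assumed file names).
I1 = double(imread('undeformed.bmp'));   % pattern before deformation
I2 = double(imread('deformed.bmp'));     % pattern after deformation
D  = abs(I1 - I2);                       % subtraction and rectification
D  = D / max(max(D));                    % normalize to [0, 1]
D  = imadjust(D, stretchlim(D), [0 1]);  % contrast enhancement
D  = medfilt2(D, [3 3]);                 % suppress speckle noise for display
imshow(D);                               % correlation fringes appear
```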
6.7 MATLAB® Demonstration
The speckle correlation pattern seen in fig. 6.4(d) contours the derivative of the out-of-plane displacement in the direction of the shear. In such problems, however, the in-plane stresses are related to the curvature, i.e. the second derivative of the displacement, and hence this pattern must be differentiated one more time. As discussed in the chapter on Moiré Methods, this is an error-prone operation, compounded here by the small number of fringes. The low fringe density also precludes obtaining derivatives by superposing and shearing. A novel digital solution is thus provided, scripted in MATLAB® as the macurv m-file.

>>macurv
1. Read image file
2. Lowpass Filtering
3. Obtain curvature fringes
4. Lowpass Filtering
You may choose 1->2->3->4 or 1->2->3 or 1->3->4 and compare the results
Select your choice: 1
Here we show the mode 1->2->3->4. A window will pop up; select the image 'macurv.bmp'.
The image is displayed; its contrast is very low, as shown below.
Run the file a second time (>>macurv) and input 2; the image with enhanced contrast is shown below.
Run the file a third time (>>macurv) and input 3 to select the region to be processed. The figure is opened again; use the mouse to click the top-left and bottom-right points to select the region of interest (ROI) for processing. The pre-processed image will then be displayed as in the figure below.
Run the file a fourth time (>>macurv) and input 4; the resultant curvature fringes are displayed as shown below.
This ends the file.
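The macurv listing itself is not reproduced in the text. One plausible digital route to step 3 (obtaining curvature fringes) is to shear the slope-fringe pattern numerically by a few pixels and subtract, a discrete second difference of the underlying displacement; the shear amount and file name below are assumptions:

```matlab
% Hypothetical sketch of digital curvature contouring from a slope-fringe
% pattern: numerically shear the pattern along the shear direction and
% subtract, then low-pass filter the result.
F = double(imread('macurv.bmp'));        % slope (derivative) fringe pattern
F = F / max(max(F));                     % normalize to [0, 1]
s = 10;                                  % digital shear in pixels (assumed)
C = abs(F(:, 1:end-s) - F(:, 1+s:end));  % sheared subtraction
C = medfilt2(C, [5 5]);                  % low-pass filtering of the fringes
imshow(C, []);                           % curvature fringes
```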
6.8 Conclusion
A specklegram is a double-exposed speckle pattern, created either artificially or generated through interference, recorded on the object before and after deformation. Speckle methods require post-processing to delineate the deformation from the specklegram. Traditionally, optical filtering techniques such as the point-wise and whole-field methods are used, and the data obtained with them require further processing to determine the displacement map. The point-wise filtering approach, in particular, provides greater sensitivity and accuracy, and image processing routines were developed to speed it up for full-field displacement mapping. The Fourier transform method is the obvious choice for converting the point-wise Young's fringes into the displacement vector at the point of interrogation; however, as shown, other transforms such as the Radon transform can provide greater sensitivity and accuracy for displacement measurement.

Shearography is an extension of the speckle method for whole-field visualization of contours of the derivatives of displacement. In some cases these patterns need to be differentiated further to obtain curvature contours. While this can be achieved optically, digital processing provides a flexible and robust alternative. With the growing influence of digital image processing in photomechanics, the speckle correlation method has proved a better solution than the opto-digital processing techniques used above. Speckle correlation techniques compare separately recorded deformed and undeformed speckle patterns, and greater sensitivity and accuracy of deformation sensing can be obtained. While the MATLAB® correlation functions provide a first step, dedicated correlation functions need to be created specifically for photomechanics applications. Various schemes are possible, and MATLAB® permits these algorithms to be quickly generated and compared. In addition, there is an increasing need to compress images for storage.
The effect of compression on the correlation process is another interesting question that can be explored using MATLAB® scripts and functions.
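As a first step of the kind mentioned above, the toolbox function normxcorr2 can track a small subset of the undeformed speckle pattern inside the deformed one; the file names, subset location and subset size below are assumptions for illustration:

```matlab
% Sketch of speckle-correlation displacement sensing with normxcorr2.
U = double(imread('undeformed.bmp'));    % reference speckle pattern
D = double(imread('deformed.bmp'));      % deformed speckle pattern
r0 = 100; c0 = 100; w = 32;              % subset corner and size (assumed)
sub = U(r0:r0+w-1, c0:c0+w-1);           % subset cut from the reference
cc  = normxcorr2(sub, D);                % normalized cross-correlation map
[cmax, idx] = max(cc(:));                % locate the correlation peak
[rpk, cpk]  = ind2sub(size(cc), idx);
dy = rpk - (w - 1) - r0;                 % row displacement of the subset
dx = cpk - (w - 1) - c0;                 % column displacement of the subset
```

Repeating this over a grid of subsets yields a full-field displacement map, which is the basis of digital image correlation.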
Chapter 7
Conclusion
MATLAB® provides a quick way to process fringe patterns obtained using optical methods in experimental mechanics. Using predefined MATLAB® functions, images can be read into the workspace, where they can be manipulated using user-defined scripts. Some of these scripts can then be saved as user-generated functions, especially once they have been thoroughly tested and need to be routinely applied. This permits each laboratory, or indeed each individual within a laboratory, to tailor the processing of a fringe pattern to his or her needs. In this primer, functions and scripts generated as part of the research endeavours in the Photomechanics Laboratory within the School of Mechanical and Production Engineering at Nanyang Technological University are highlighted as a means of introducing the reader to the ease and flexibility of the MATLAB® programming environment. Each student stamps his or her own style on the scripting process, as is evident from the different styles of the demonstration programmes for specific fringe processing tasks.

Following a brief overview of photomechanics and MATLAB®, routines that are common to image processing in photomechanics applications are highlighted in the third chapter. Preprocessing of fringe patterns does not always benefit from the standard image enhancement functions of traditional image processing; hence some new algorithms, such as adaptive thresholding or Gaussian filtering, are called for in some applications. The three main processing routines are the Fractional Fringe Method, the Fourier Transform Method and the Phase Shift Method. Apart from the Phase Shift Method, which can use one of a number of different processing combinations depending on the number of images available, the other two methods require a single image. These functions, it is hoped, will become standard in the MATLAB® toolbox.
While the FFT function is a standard MATLAB® toolbox function, the fractional fringe method and the Phase Shift Method could also be included, as their applications go beyond photomechanics. Image post-processing routines are standard functions in the MATLAB® toolbox and benefit from it. The MATLAB® Image Processing Toolbox, which provides functions (Appendix C) and tools for enhancing and analyzing digital images and for developing image-processing algorithms, is a growing part of the MATLAB® package. It further
simplifies the learning and teaching of image processing techniques in both academic and research settings. The major focus of this toolbox is medical and biological applications, using image enhancement, edge detection, blob and morphological analysis, segmentation and image deblurring functions. Details of these functions and demos are available and updated at the Mathworks website (http://www.mathworks.com). Some of these functions are also useful in photomechanics. In particular, phase shifting in photoelasticity typically requires 18 images to be input. As with a set of Magnetic Resonance Images (MRI), a set of phase-shifted images can be saved as a single multi-frame TIFF image and then accessed one image at a time during processing; there is then no need to specify each and every image as in the demonstration detailed in Chapter 3. Another useful group is the suite of transform functions. While the FFT function is standard in the original toolbox, the Radon transform has some interesting applications in speckle pattern analysis, as demonstrated in Chapter 6. In addition, moiré interferometry uses Gabor filters for strain segmentation (Chapter 4); while the Image Processing Toolbox has segmentation routines, these are not suitable for fringe pattern segmentation. One other area with potential for becoming a useful function is the reconstruction of digital holograms (Chapter 5). This is essentially a subset of the transform functions, and the Fourier transform can serve for the reconstruction of Fourier holograms. The following script, which uses functions from the Image Processing Toolbox, performs this hologram reconstruction.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB program for reconstructing Fourier digital holograms
% dhfourier.m
% Developed by Xu Lei, NTU, Mar 2002
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear all
H = imread('Holo', 'tif');            % read the recorded hologram
H = double(H);
[Ny, Nx] = size(H);
disp('Figure No.1: Fourier digital hologram');
figure(1);
imshow(H, [], 'notruesize');
title('Fourier digital hologram');
U = fft2(H);                          % numerical reconstruction by FFT
U = fftshift(U);
I = U .* conj(U);                     % intensity of the reconstructed field
clear H U;
I = log(I.^(1/9));                    % compress the dynamic range
Imax = max(max(I));
Imin = min(min(I));
I = (I - Imin) / (Imax - Imin);       % normalize to [0, 1]
I = imadjust(I, [0.3 0.5], [], 2.5);  % contrast adjustment
disp('Figure No.2: Numerically reconstructed image');
figure(2);
imshow(I, [], 'notruesize');
colorbar;

The new functions used in this script are figure, imshow and imadjust. The first two are means of displaying images in different windows, while the third is used to adjust the contrast of the image. Running this script first displays the recorded Fourier digital hologram.
The numerically reconstructed image is then computed and shown below.
MATLAB®, while being generic image processing software in some respects, can be scripted for specific applications. The scripts do require knowledge of the available functions and, in some cases, the generation of new functions. However, scripts to perform the required image processing tasks can be written easily, and improvements made as familiarity with the software develops. MATLAB® thus permits rapid implementation and subsequent optimization of a script. Each script is imprinted with the particular style of its author, as is evident in this primer: the scripts for the basic functions, photoelasticity, moiré, holography and speckle were written by different students and clearly demonstrate the preference and style of each individual.
References

Chapter 1 - Introduction

1.1 A. Asundi, Optical Sensors for Mechanical Measurements, Academic Research Fund Final Report for Ministry of Education, Singapore, 2002.
1.2 J. W. Dally and W. F. Riley, Experimental Stress Analysis, McGraw-Hill, New York, 1978.
1.3 L. S. Srinath, Experimental stress analysis, Tata McGraw-Hill, New Delhi, 1984.
1.4 D. C. Williams (ed.), Optical methods in engineering metrology, Chapman & Hall, London, New York, 1993.
1.5 A. S. Kobayashi, Handbook of Experimental Mechanics, VCH, New York, 1993.
1.6 K. J. Gasvik, Optical Metrology, John Wiley and Sons, New York, 1995.
1.7 G. L. Cloud, Optical methods of engineering analysis, Cambridge University Press, Cambridge, 1995.
1.8 P. K. Rastogi (ed.), Optical Measurement Techniques and Applications, Artech House Inc., Boston, 1997.
1.9 R. S. Sirohi, F. S. Chau, Optical methods of measurement: wholefield techniques, Marcel Dekker, New York, 1999.
1.10 P. K. Rastogi (ed.), Photomechanics, Springer Verlag, Berlin, Heidelberg, 2000.
1.11 MATLAB®, http://www.mathworks.com

Chapter 2 - MATLAB® for Photomechanics

2.1 MATLAB for Image Processing, Mathworks, Inc., http://www.mathworks.com/products/image/
2.2 D. W. Robinson and G. T. Reid (eds.), Interferogram analysis: Digital fringe pattern measurement techniques, Institute of Physics, London, 1993.
2.3 C. M. Wong, Image processing in experimental mechanics, M.Phil thesis, Department of Mechanical Engineering, The University of Hong Kong, 1995.
2.4 A. S. Voloshin, C. P. Burger, Half-fringe photoelasticity: a new approach to whole-field stress analysis, Experimental Mechanics, 1983.
2.5 A. Asundi, Recent Advances in Photoelasticity, Proceedings Advanced Technology in Experimental Mechanics (ATEM-97), pp. 385-390, The Japan Society of Mechanical Engineers, Tokyo, Japan, 1997.
2.6 M. Takeda, H. Ina, S. Kobayashi, Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry, J. Optical Society of America, vol. 72, pp. 156-160, 1982.
2.7 M. Takeda and K. Mutoh, Fourier transform profilometry for the automatic measurement of 3-D object shapes, Applied Optics, vol. 22, 1983.
2.8 D. J. Bone, H.-A. Bachor and R. J. Sandeman, Fringe pattern analysis using 2-D Fourier transform, Applied Optics, vol. 25, pp. 1653-1660, 1986.
2.9 M. Takeda, Spatial carrier fringe pattern analysis and its application to precision interferometry and profilometry: an overview, Industrial Metrology, vol. 1, pp. 79-99, 1990.
2.10 J. B. Liu and P. D. Ronney, Modified Fourier transform method for interferogram fringe pattern analysis, Applied Optics, vol. 36, pp. 6231-6241, 1997.
2.11 K. Creath, Phase measurement interferometry techniques, Progress in Optics, E. Wolf, ed., vol. 26, pp. 349-393, 1988.
2.12 Y. Surrel, Design algorithms for phase measurements using phase stepping, Applied Optics, vol. 35, pp. 51-60, 1996.
2.13 A. Asundi and C. S. Chan, Phase Shifting Applied to Non-Sinusoidal Intensity Distribution - An Error Simulation, Optics and Lasers in Engineering, vol. 21, pp. 3-30, 1994.
2.14 T. C. Chu, W. F. Ranson, M. A. Sutton and W. H. Peters, Applications of digital image correlation techniques to experimental mechanics, Experimental Mechanics, vol. 25, pp. 232-244, 1985.
Chapter 3 - Digital Photoelasticity

3.1 E. G. Coker and L. N. G. Filon, A treatise on photoelasticity, Cambridge University Press, New York, 1957.
3.2 M. M. Frocht, Photoelasticity, Vols. I, II, John Wiley & Sons, New York, 1941, 1948.
3.3 V. Brcic, Photoelasticity in theory and practice, Springer-Verlag, New York, 1972.
3.4 R. B. Heywood, Photoelasticity for designers, Pergamon Press, Oxford, New York, 1969.
3.5 A. Kuske and G. Robertson, Photoelastic stress analysis, Wiley, London, New York, 1974.
3.6 H. Aben, Integrated photoelasticity, McGraw Hill International Book Co., New York, London, 1979.
3.7 P. S. Theocaris, E. E. Gdoutos, Matrix theory of photoelasticity, Springer-Verlag, Berlin, New York, 1979.
3.8 L. S. Srinath, Scattered light photoelasticity, Tata McGraw Hill, London, 1983.
3.9 S. A. Paipetis and G. S. Holister (eds.), Photoelasticity in engineering practice, Elsevier Applied Science Publishers, New York, 1985.
3.10 J. T. Pindera and M.-J. Pindera, Isodyne stress analysis, Kluwer Academic Publishers, Dordrecht, Boston, 1989.
3.11 T. Y.-F. Chen (ed.), Selected papers on photoelasticity, SPIE Optical Engineering Press, Bellingham, WA, 1999.
3.12 K. Ramesh, Digital photoelasticity: advanced techniques and applications, Springer-Verlag, Berlin, New York, 2000.
3.13 Liu Tong, Advances in Digital Photoelasticity, Ph.D. thesis, School of Mechanical and Production Engineering, Nanyang Technological University, Singapore, 2002.
3.14 A. Asundi, Phase Shifting in Photoelasticity, Experimental Techniques, vol. 17, pp. 19-23, 1993.
3.15 F. W. Hecker and B. Morche, Computer aided measurement of relative retardation in plane photoelasticity, in Experimental Stress Analysis, ed. H. Wieringa, pp. 535-542, 1986.
3.16 E. A. Patterson and Z. F. Wang, Towards full field automated photoelastic analysis of complex components, Strain, vol. 27, pp. 49-56, 1991.
3.17 A. Asundi, Liu Tong and G. B. Chai, Phase Shifting Method with a normal polariscope, Applied Optics, vol. 38, pp. 5931-5935, 1999.
3.18 M. J. Ekman and A. D. Nurse, Completely automated determination of two-dimensional photoelastic parameters using load stepping, Optical Engineering, vol. 37, pp. 1845-1851, 1998.
3.19 A. Asundi, Liu Tong and G. B. Chai, Dynamic Phase Shift Photoelasticity, Applied Optics, vol. 40, pp. 3654-3658, 2001.
3.20 E. A. Patterson and Z. F. Wang, Simultaneous observation of phase stepping images for automated photoelasticity, Journal of Strain Analysis, vol. 22, pp. 1-15, 1998.
3.21 A. Asundi and M. R. Sajan, Multiple LED camera for dynamic photoelasticity, Applied Optics, vol. 34, pp. 2236-2240, 1995.
3.22 A. Ajovalasit, S. Barrone and G. Petrucci, Towards RGB photoelasticity: Full field automated photoelasticity in white light, Experimental Mechanics, vol. 35, pp. 193-200, 1995.
Chapter 4 - Moiré Methods

4.1 J. Guild, Diffraction gratings as measuring scales: practical guide to the metrological use of moiré fringes, Oxford University Press, Oxford, 1960.
4.2 P. S. Theocaris, Moiré fringes in strain analysis, Pergamon Press, London, 1969.
4.3 A. J. Durelli and V. J. Parks, Moiré analysis of strain, Prentice-Hall, Englewood Cliffs, New Jersey, 1969.
4.4 C. Sciammarella, The moiré method - a review, Experimental Mechanics, vol. 22, pp. 418-433, 1982.
4.5 F. P. Chiang, Moiré methods of strain analysis, Manual on Experimental Stress Analysis (eds. J. F. Doyle and J. W. Philips), Chapter 7, Society for Experimental Mechanics, Connecticut, 1989.
4.6 O. Kafri, I. Glatt, The physics of moiré metrology, Wiley, New York, 1990.
4.7 K. Patorski, M. Kujawinska, Handbook of the Moiré Fringe Technique, Elsevier, Amsterdam, New York, 1993.
4.8 D. Post, B. Han, P. Ifju, High sensitivity moiré, Springer Verlag, New York, 1994.
4.9 I. Amidror, The theory of the Moiré phenomenon, Kluwer Academic, Dordrecht, Boston, 2000.
4.10 A. Asundi, Computer aided moiré methods, Optics and Lasers in Engineering, vol. 17, pp. 107-116, 1993.
4.11 A. Asundi and K. H. Yung, Phase shifting and logical moiré, Journal of the Optical Society of America (A), vol. 8, pp. 1591-1600, 1991.
4.12 A. Asundi and M. T. Cheung, Moiré of moiré interferometry, Experimental Techniques, vol. 11, pp. 28-30, 1987.
4.13 A. Asundi and Wang Jun, Strain contouring using Gabor filters: Principle and Algorithm, Optical Engineering, 2002.
Chapter 5 - Digital Holography

5.1 D. Gabor, A new microscopic principle, Nature, vol. 161, pp. 777-778, 1948.
5.2 E. N. Leith and J. Upatnieks, Wavefront reconstruction with diffused illumination and three-dimensional objects, Journal of the Optical Society of America, vol. 54, pp. 1294-1301, 1964.
5.3 R. L. Powell and K. A. Stetson, Interferometric vibration analysis by wavefront reconstruction, Journal of the Optical Society of America, vol. 55, pp. 1593-1598, 1965.
5.4 R. K. Erf, Holographic Non-destructive Testing, Academic Press, New York, 1974.
5.5 W. Schumann and M. Dubas, Holographic interferometry, Springer-Verlag, Berlin, 1979.
5.6 C. M. Vest, Holographic interferometry, Interscience, New York, 1979.
5.7 Yu. I. Ostrovsky, M. M. Butusov, G. V. Ostrovskaya, Interferometry by holography, Springer-Verlag, Berlin, New York, 1980.
5.8 P. Hariharan, Optical holography: principles, techniques, and applications, Cambridge University Press, Cambridge, New York, 1984.
5.9 T. Kreis, Holographic interferometry - Principles and methods, Akademie, Berlin, 1996.
5.10 L. P. Yaroslavskii and N. S. Merzlyakov, Methods of digital holography, Consultants Bureau, New York, 1980.
5.11 U. Schnars and W. Juptner, Direct phase determination in hologram interferometry with the use of digitally recorded holograms, Journal of the Optical Society of America A, vol. 11, pp. 2011-2015, 1994.
Chapter 6 - Speckle Methods

6.1 J. M. Burch and J. M. J. Tokarski, Production of multiple beam fringes from photographic scatterers, Optica Acta, vol. 19, pp. 253-257, 1972.
6.2 R. K. Erf, Speckle metrology, Academic Press, New York, 1978.
6.3 A. Asundi, White Light Speckle Method for Strain Analysis, Ph.D. Thesis, Department of Mechanical Engineering, SUNY at Stony Brook, New York, 1981.
6.4 R. S. Sirohi (ed.), Speckle metrology, Marcel Dekker, New York, 1993.
6.5 W. H. Peters and W. F. Ranson, Digital imaging techniques in experimental stress analysis, Optical Engineering, vol. 21, pp. 427-431, 1982.
6.6 A. Asundi, Sampled speckle photography for deformation measurement, Optics Letters, vol. 25, pp. 218-220, 2000.
6.7 A. Asundi and V. Krishnakumar, Defocus measurement using speckle correlation, Journal of Modern Optics, vol. 48, pp. 935-940, 2001.
6.8 A. Asundi and F. P. Chiang, Defocused white light speckle effect, Applied Optics, vol. 21, pp. 1887-1888, 1982.
6.9 V. Krishnakumar, V. M. Murukeshan, A. Kishen and A. Asundi, Opto-digital system for curvature measurement, Optical Engineering, vol. 40, pp. 340-342, 2001.
APPENDIX A

MATLAB® Functions Used in Primer

ABS - Absolute value
ACOS - Inverse cosine
ASIN - Inverse sine
ATAN - Inverse tangent
ATAN2 - Four quadrant inverse tangent
AXIS - Control axis scaling and appearance
CEIL - Round towards plus infinity
CLEAR - Clear variables and functions from memory
CLF - Clear current figure
COLORMAP - Color look-up table
COLORBAR - Display color bar (color scale)
CORR2 - Compute 2-D correlation coefficient
COS - Cosine function
DISP - Display information on the screen
DOUBLE - Convert to double precision
EXP - Exponential function
FEVAL - Execute function specified by string
FFT - Discrete Fourier transform
FFT2 - Two-dimensional discrete Fourier transform
FFTSHIFT - Shift DC component to center of spectrum
FIGURE - Create figure window
FIX - Round towards zero
FLOOR - Round towards minus infinity
GINPUT - Graphical input from mouse
HOME - Return the cursor to the upper left corner of the screen
IFFT - Inverse discrete Fourier transform
IFFT2 - Two-dimensional inverse discrete Fourier transform
IMAG - Complex imaginary part
IMAGE - Display image
IMAGESC - Scale data and display as image
IMHIST - Display histogram of image data
IMREAD - Read image from graphics file
IMSHOW - Display image
IMWRITE - Write image to graphics file
INPUT - Prompt for user input
LENGTH - Length of vector
LOAD - Load workspace variables from disk
MAX - Largest component
MEAN - Average or mean value
MEDFILT2 - Perform 2-D median filtering
MEDIAN - Median value
MESH - 3-D mesh surface
MIN - Smallest component
NARGCHK - Validate number of input arguments
NARGIN - Number of function input arguments
PLOT - Plot
POLYFIT - Fit polynomial to data
QUIVER - Quiver plot
RADON - Compute Radon transform
REAL - Complex real part
ROUND - Round towards nearest integer
SAVE - Save workspace variables to disk
SUBPLOT - Create axes in tiled positions
SIN - Sine function
SIZE - Size of matrix
SORT - Sort in ascending order
SQRT - Square root
STD - Standard deviation
SURF - 3-D colored surface
TAN - Tangent
TITLE - Graph title
TRUESIZE - Adjust display size of image
UIGETFILE - Standard open file dialog box
UINT8 - Convert to unsigned 8-bit integer
XLABEL - X-axis label
YLABEL - Y-axis label
ZEROS - Zeros array
ZLABEL - Z-axis label
APPENDIX B

PMTOOLBOX© Functions
Based on these functions, scripts or m-files can be developed to solve specific problems. The idea is to have optimized functions for the main processing, which can be tailored to a specific application by allowing certain variables to be adjusted. The variables that can be adjusted, or that are required to run each function, can be found by entering help at the MATLAB® command prompt. For example:

>> help ps
phase = ps(Algorithm_Type, i1, i2, i3, i4, i5, i6);
This function evaluates the phase by phase shifting algorithms.
Algorithm_Type is the algorithm used to evaluate the phase; i1..i6 are the phase shifted fringe patterns.
Detailed listing of the phase shift Algorithm_Type is as follows:

Type         intensities  phase shift           merits (insensitive to ...)
1            i1~i3        (n-1)*pi/2            -
2            i1~i4        (n-1)*pi/2            linear phase shifter error
3            i1~i4        (n-1)*pi/2            quadric phase shifter error
4            i1~i4        (n-1)*pi/2            quadric intensity error
5            i1~i5        (n-1)*pi/2            linear & quadric phase shifter error
6            i1~i5        (n-1)*pi/2            linear phase shifter & quadric intensity error
7            i1~i5        (n-1)*pi/2            linear & quadric phase shifter error and quadric intensity error
8 (Carre)    i1~i4        (-3,-1,1,3)*alpha     linear phase shifter error
9 (Stoilov)  i1~i5        (-2,-1,0,1,2)*alpha   linear phase shifter error

In types 8 and 9, alpha is arbitrary but fixed. Images for the demo programs are found in the images sub-folder.

Basic
Ffm - Fractional Fringe method for phase retrieving
Ftm_carrier - Fourier transform method for fringe patterns with carrier
Ftm_no_carrier - Fourier transform method for fringe patterns without carrier
Gauss - Generate a Gaussian profile
Gaussflt - Gaussian filter
Ps - Phase shifting algorithms
Radonflt - Radon filter
Sc - Speckle correlation
Unwrapping - Phase unwrapping
Window - Generate a 1D window function
Window - Generate a 2D window function
Wrapping - Wrapping the phase into (-pi, pi)
Photoelasticity
Disc_seg_pe - Segment a disc area from the original image
Hist_pe - Calculate the histogram of an image
Multi_seg - Segment one image into four subimages, for use in spatial phase shifting
Ps al_pe - Phase shifting algorithm for photoelasticity
Pspe - Phase shifting in photoelasticity
Ring_seg_pe - Segment a ring area from the original image
Thresh_seg_pe - Segment the processed area from the original image by thresholding
Moiré
add_moire - Addition of two sheared images
Centerfre - Detects centre frequency of Fourier spectrum of moiré pattern
CenterfreDetect - Detects the center frequency of a pattern in the frequency domain
gaborfilter4 - 2-D Gabor filter
gassian2 - Generates a 2-D Gaussian function
Grating4 - Generate a grating based on the user-specified frequencies along the u (freqx) and v (freqy) directions
IMFFT - Two-dimensional discrete Fourier transform of matrix
Imwrite - Save an image file using imwrite
kernel5 - Smooth a 1-D input signal using a 1x5 kernel, [0.05, 0.245, 0.4, 0.245, 0.05]
Lowpass - Low pass filtering in Fourier domain
Mgabor - Strain contouring using only one interferogram based on wavelet analysis
Moi_Phshifting - Generate moiré fringes from patterns with low carrier fringe frequency
moiredoub - Doubles moiré fringe density using eight-step phase-shifted fringe patterns
Mom - Moiré of moiré
momph - Moiré of moiré with phase shifting
regionsegment - Perform logical operation between image and mask
response - Loose filter bank design for hole pattern
Singabor - Segmentation of a pattern based on fringe frequency content
Cgg and cgg2 - Computer generated gratings
Dm - Digital moiré
Dmdemo - Demo for digital moiré (windows version)
Dmdemo1 - Demo for digital moiré (command-line version)
Dmdemo* - Variations on the dmdemo theme
Ftm_carrier_i.m - Fourier transform method for phase retrieving from fringe patterns with carrier (interactive mode)
Ftmcidemo.m - Demo for ftm_carrier_i.m

Holography
dhfourier - Reconstruction of digital Fourier holograms
dhinline - Reconstruction of in-line digital holograms
dhinter - Reconstruction of in-line digital holographic interferometry
dhoffaxis - Reconstruction of off-axis digital holograms
Speckle
curvdemo - Demo for curvature detection
Defocus - Defocus determination using contrast of speckle fringes
Dfdemo - Demo for defocus detection (windows version)
Dfdemo1 - Demo for defocus detection (command-line version)
macurv - Curvature detection by speckle shearography
ordemo - Young's fringe pattern analysis using Radon transform (Windows version)
ordemo1 - Young's fringe pattern analysis using Radon transform (command-line version)
ori - Fringe rotation determination using Radon transform
sc - Speckle correlation function
scdemo - Demo of speckle correlation
speckledemo - Demo programs for speckle fringe analysis
sspeckle - Sample speckle demo (Windows version)
sspeckle1 - Sample speckle demo (command-line version)
young_ftm - Fourier transform method for Young's fringe analysis
young_htm - Young's fringe analysis using hybrid method
young_rtm - Young's fringe analysis using Radon transform method
youngdemo - Demo for Young's fringes (Windows version)
youngdemo1 - Demo for Young's fringes (command-line version)
APPENDIX C

MATLAB® IP TOOLBOX (Version 3.1) Function List
Image display
colorbar - Display colorbar (MATLAB® Toolbox)
getimage - Get image data from axes
image - Create and display image object (MATLAB® Toolbox)
imagesc - Scale data and display as image (MATLAB® Toolbox)
immovie - Make movie from multiframe image
imshow - Display image
montage - Display multiple image frames as rectangular montage
movie - Play recorded movie frames
subimage - Display multiple images in single figure
truesize - Adjust display size of image
warp - Display image as texture-mapped surface
Image file I/O
dicominfo - Read metadata from a DICOM message
dicomread - Read a DICOM image
imfinfo - Return information about image file (MATLAB® Toolbox)
imread - Read image file (MATLAB® Toolbox)
imwrite - Write image file (MATLAB® Toolbox)

Image arithmetic
imabsdiff - Compute absolute difference of two images
imadd - Add two images, or add constant to image
imcomplement - Complement image
imdivide - Divide two images, or divide image by constant
imlincomb - Compute linear combination of images
immultiply - Multiply two images, or multiply image by constant
imsubtract - Subtract two images, or subtract constant from image
Spatial transformations
checkerboard - Create checkerboard image
findbounds - Find output bounds for spatial transformation
fliptform - Flip the input and output roles of a TFORM struct
imcrop - Crop image
imresize - Resize image
imrotate - Rotate image
imtransform - Apply spatial transformation to image
makeresampler - Create resampler structure
maketform - Create spatial transformation structure (TFORM)
tformarray - Apply spatial transformation to multidimensional array
tformfwd - Apply forward spatial transformation
tforminv - Apply inverse spatial transformation
Image registration
cpstruct2pairs - Convert CPSTRUCT to valid pairs of control points
cp2tform - Infer spatial transformation from control point pairs
cpcorr - Tune control point locations using cross-correlation
cpselect - Control point selection tool
normxcorr2 - Normalized two-dimensional cross-correlation

Pixel values and statistics
corr2 - Compute 2-D correlation coefficient
imcontour - Create contour plot of image data
imhist - Display histogram of image data
impixel - Determine pixel color values
improfile - Compute pixel-value cross-sections along line segments
mean2 - Compute mean of matrix elements
pixval - Display information about image pixels
regionprops - Measure properties of image regions
std2 - Compute standard deviation of matrix elements
Image analysis
edge - Find edges in intensity image
qtdecomp - Perform quadtree decomposition
qtgetblk - Get block values in quadtree decomposition
qtsetblk - Set block values in quadtree decomposition

Image enhancement
histeq - Enhance contrast using histogram equalization
imadjust - Adjust image intensity values or colormap
imnoise - Add noise to an image
medfilt2 - Perform 2-D median filtering
ordfilt2 - Perform 2-D order-statistic filtering
stretchlim - Find limits to contrast stretch an image
wiener2 - Perform 2-D adaptive noise-removal filtering

Linear filtering
convmtx2 - Compute 2-D convolution matrix
fspecial - Create predefined filters
imfilter - Filter 2-D and multidimensional images

Linear 2-D filter design
freqspace - Determine 2-D frequency response spacing (MATLAB Toolbox)
freqz2 - Compute 2-D frequency response
fsamp2 - Design 2-D FIR filter using frequency sampling
ftrans2 - Design 2-D FIR filter using frequency transformation
fwind1 - Design 2-D FIR filter using 1-D window method
fwind2 - Design 2-D FIR filter using 2-D window method
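The enhancement and filtering functions listed above are commonly chained to clean a noisy fringe or speckle image before further analysis. A minimal sketch, assuming a grayscale image has already been read (the file name is a placeholder):

```matlab
% Read a noisy fringe image (placeholder file name)
I = imread('fringes.tif');

% Suppress speckle noise with a 5x5 median filter
If = medfilt2(I, [5 5]);

% Further smooth with a Gaussian low-pass kernel;
% 'replicate' padding avoids dark borders
h = fspecial('gaussian', [9 9], 2);
Is = imfilter(If, h, 'replicate');
```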
Image deblurring
deconvblind - Deblur image using blind deconvolution
deconvlucy - Deblur image using Lucy-Richardson method
deconvreg - Deblur image using regularized filter
deconvwnr - Deblur image using Wiener filter
edgetaper - Taper edges using point-spread function
otf2psf - Convert optical transfer function to point-spread function
psf2otf - Convert point-spread function to optical transfer function
Image transforms
dct2 - 2-D discrete cosine transform
dctmtx - Discrete cosine transform matrix
fft2 - 2-D fast Fourier transform (MATLAB Toolbox)
fftn - Multidimensional fast Fourier transform (MATLAB Toolbox)
fftshift - Reverse quadrants of output of FFT (MATLAB Toolbox)
idct2 - 2-D inverse discrete cosine transform
ifft2 - 2-D inverse fast Fourier transform (MATLAB Toolbox)
ifftn - Multidimensional inverse fast Fourier transform (MATLAB Toolbox)
iradon - Compute inverse Radon transform
phantom - Generate a head phantom image
radon - Compute Radon transform
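The transform functions above underpin much of the fringe analysis in this book; in particular, the Radon transform can estimate the orientation of straight Young's fringes, as in the demo programs of Appendix B. A minimal sketch, assuming a grayscale fringe image I is already in the workspace:

```matlab
% Project the image at orientations 0..179 degrees
theta = 0:179;
R = radon(double(I), theta);

% The projection taken along the fringe direction shows the
% strongest modulation, i.e. the largest column variance
[~, idx] = max(var(R));
fringeAngle = theta(idx);   % estimated fringe orientation in degrees
```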
Neighborhood and block processing
bestblk - Choose block size for block processing
blkproc - Implement distinct block processing for image
col2im - Rearrange matrix columns into blocks
colfilt - Columnwise neighborhood operations
im2col - Rearrange image blocks into columns
nlfilter - Perform general sliding-neighborhood operations
Morphological operations (intensity and binary images)
conndef - Default connectivity
imbothat - Perform bottom-hat filtering
imclearborder - Suppress light structures connected to image border
imclose - Close image
imdilate - Dilate image
imerode - Erode image
imextendedmax - Extended-maxima transform
imextendedmin - Extended-minima transform
imfill - Fill image regions and holes
imhmax - H-maxima transform
imhmin - H-minima transform
imimposemin - Impose minima
imopen - Open image
imreconstruct - Morphological reconstruction
imregionalmax - Regional maxima
imregionalmin - Regional minima
imtophat - Perform top-hat filtering
watershed - Watershed transform
Morphological operations (binary images)
applylut - Perform neighborhood operations using lookup tables
bwarea - Compute area of objects in binary image
bwareaopen - Binary area open (remove small objects)
bwdist - Compute distance transform of binary image
bweuler - Compute Euler number of binary image
bwhitmiss - Binary hit-miss operation
bwlabel - Label connected components in 2-D binary image
bwlabeln - Label connected components in multidimensional binary image
bwmorph - Perform morphological operations on binary image
bwpack - Pack binary image
bwperim - Determine perimeter of objects in binary image
bwselect - Select objects in binary image
bwulterode - Ultimate erosion
bwunpack - Unpack binary image
makelut - Construct lookup table for use with applylut
Structuring element (STREL) creation and manipulation
getheight - Get strel height
getneighbors - Get offset location and height of strel neighbors
getnhood - Get strel neighborhood
getsequence - Get sequence of decomposed strels
isflat - Return true for flat strels
reflect - Reflect strel about its center
strel - Create morphological structuring element
translate - Translate strel

Region-based processing
roicolor - Select region of interest, based on color
roifill - Smoothly interpolate within arbitrary region
roifilt2 - Filter a region of interest
roipoly - Select polygonal region of interest

Colormap manipulation
brighten - Brighten or darken colormap (MATLAB Toolbox)
cmpermute - Rearrange colors in colormap
cmunique - Find unique colormap colors and corresponding image
colormap - Set or get color lookup table (MATLAB Toolbox)
imapprox - Approximate indexed image by one with fewer colors
rgbplot - Plot RGB colormap components (MATLAB Toolbox)

Color space conversions
hsv2rgb - Convert HSV values to RGB color space (MATLAB Toolbox)
ntsc2rgb - Convert NTSC values to RGB color space
rgb2hsv - Convert RGB values to HSV color space (MATLAB Toolbox)
rgb2ntsc - Convert RGB values to NTSC color space
rgb2ycbcr - Convert RGB values to YCbCr color space
ycbcr2rgb - Convert YCbCr values to RGB color space
Array operations
circshift - Shift array circularly
padarray - Pad array

Image types and type conversions
dither - Convert image using dithering
gray2ind - Convert intensity image to indexed image
grayslice - Create indexed image from intensity image by thresholding
graythresh - Compute global image threshold using Otsu's method
im2bw - Convert image to binary image by thresholding
im2double - Convert image array to double precision
im2uint8 - Convert image array to 8-bit unsigned integers
im2uint16 - Convert image array to 16-bit unsigned integers
ind2gray - Convert indexed image to intensity image
im2mis - Convert image to Java MemoryImageSource
ind2rgb - Convert indexed image to RGB image (MATLAB Toolbox)
isbw - Return true for binary image
isgray - Return true for intensity image
isind - Return true for indexed image
isrgb - Return true for RGB image
mat2gray - Convert matrix to intensity image
rgb2gray - Convert RGB image or colormap to grayscale
rgb2ind - Convert RGB image to indexed image
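The type-conversion functions typically feed the binary morphology routines listed earlier, for example when segmenting fringes before skeletonizing them. A minimal sketch, with a placeholder file name:

```matlab
% Binarize a grayscale fringe image using Otsu's global threshold
I = imread('fringes.tif');      % placeholder file name
level = graythresh(I);          % threshold in [0, 1]
bw = im2bw(I, level);

% Label the connected bright regions for counting or measurement
[labels, n] = bwlabel(bw);
```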