Wolfgang Osten (Ed.)

Fringe 2005
The 5th International Workshop on Automatic Processing of Fringe Patterns
With 448 Figures and 14 Tables
Professor Dr. Wolfgang Osten Institut für Technische Optik Universität Stuttgart Pfaffenwaldring 9 70569 Stuttgart Germany
[email protected]
Library of Congress Control Number: 2005931371
ISBN-10 3-540-26037-4 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-26037-0 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2006
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: By the authors
Production: LE-TEX Jelonek, Schmidt & Vöckler GbR, Leipzig
Cover design: design & production GmbH, Heidelberg

Printed on acid-free paper 7/3142/YL - 5 4 3 2 1 0
Conference Committee
Organizers and Conference Chairs:
Wolfgang Osten (Germany)
Werner Jüptner (Germany)
Program Committee:
Armando Albertazzi (Brazil), Anand Asundi (Singapore), Gerd von Bally (Germany), Josef J.M. Braat (Netherlands), Zoltan Füzessy (Hungary), Christophe Gorecki (France), Peter J. de Groot (USA), Min Gu (Australia), Igor Gurov (Russia), Klaus Hinsch (Germany), Jonathan M. Huntley (UK), Yukihiro Ishii (Japan), Guillermo Kaufmann (Argentina), Richard Kowarschik (Germany), Malgorzata Kujawinska (Poland), Vladimir Markov (USA), Kaoru Minoshima (Japan), Erik Novak (USA), Tilo Pfeifer (Germany), Ryszard J. Pryputniewicz (USA), Gerald Roosen (France), Fernando Mendoza Santoyo (Mexico), Joanna Schmit (USA), Rajpal S. Sirohi (India), Paul Smigielski (France), Mitsuo Takeda (Japan), Ralph P. Tatam (UK), Hans Tiziani (Germany), Vivi Tornari (Greece), Michael Totzeck (Germany), Satoru Toyooka (Japan), James D. Trolinger (USA), Theo Tschudi (Germany), Ramon Rodrigues Vera (Mexico), Elmar E. Wagner (Germany), Alfred Weckenmann (Germany), Günther Wernicke (Germany), James C. Wyant (USA), Ichirou Yamaguchi (Japan), Toyohiko Yatagai (Japan)

Session Chairs (Sessions 1-5 and Poster Session):
W. Jüptner (Germany), M. Takeda (Japan), F. Mendoza Santoyo (Mexico), J.M. Huntley (UK), K. Creath (USA), P.J. de Groot (USA), G. Häusler (Germany), J.D. Trolinger (USA), I. Yamaguchi (Japan), V. Markov (USA), T. Yatagai (Japan), H. Tiziani (Germany), A. Albertazzi (Brazil), A. Asundi (Singapore), G. Wernicke (Germany), N. Demoli (Croatia)
Preface

In 1989 the time was ripe to create a workshop series dedicated to the discussion of the latest results in the automatic processing of fringe patterns. This idea was promoted by the insight that automatic, high-precision phase measurement techniques will play a key role in all future industrial applications of optical metrology. Such a workshop, however, must take place in a dynamic environment. The main topics of the previous events were therefore always adapted to the most interesting subjects of the new period. In 1993 new principles of optical shape measurement, setup calibration, phase unwrapping and nondestructive testing were the focus of discussion, while in 1997 new approaches in multi-sensor metrology, active measurement strategies and hybrid processing technologies played a central role. The 2001 meeting, the first of the 21st century, was dedicated to optical methods for micromeasurements, hybrid measurement technologies and new sensor solutions for industrial inspection. The fifth workshop takes place in Stuttgart, the capital of the state of Baden-Württemberg and the centre of a region with a long and remarkable tradition in engineering. Thus, after Berlin 1989 and Bremen 1993, 1997 and 2001, Stuttgart is the third Fringe city where international experts meet to share new ideas and concepts in optical metrology. This volume contains the papers presented during FRINGE 2005. The focus of this meeting was directed especially to resolution-enhanced technologies, new approaches in wide-scale 4D optical metrology and advanced computer-aided measurement techniques. Since optical metrology is becoming more and more important for industrial inspection, sophisticated sensor systems and their applications to challenging measurement problems were again chosen as one of the central topics of the workshop. This extended scope was again honored by a great response to our call for papers. Scientists from all around the world offered more than 110 papers. This enormous response demanded a rigorous review of the papers to
select the best out of an overwhelming number of excellent submissions. This hard job fell to the program committee, since the number of papers that can be presented and discussed during our workshop without parallel sessions is strictly limited. The papers presented at this workshop are grouped under 5 topics:

1. New Methods and Tools for Data Processing
2. Resolution Enhanced Technologies
3. Wide Scale 4D Optical Metrology
4. Hybrid Measurement Technologies
5. New Optical Sensors and Measurement Systems

Each session is introduced by an acknowledged expert who gives an extensive overview of the topic and a report on the state of the art. The classification of all submitted papers into these topics was again a difficult job that often required compromises. We hope that our decisions will be accepted by the audience. On this occasion we would like to express our deep thanks to the international program committee for helping us to find a good solution in every situation. The editor would like to thank all the authors, who spent much time and effort on the preparation of their papers. Our appreciation also goes to Dr. Eva Hestermann-Beyerle and Monika Lempe from Springer Heidelberg for providing excellent conditions for the publication of these proceedings. My deep thanks go to the members of the ITO staff. The continuous help given by Gabriele Grosshans, Ruth Edelmann, Christa Wolf, Reinhard Berger, Witold Gorski, Ulrich Droste, Jochen Kauffmann and Erich Steinbeißer was the basis for a successful FRINGE 2005. Finally, special thanks and appreciation go to my co-chair, Werner Jüptner, for sharing with me the spirit of the 5th Fringe workshop. Looking forward to FRINGE 2009.

Stuttgart, September 2005
Wolfgang Osten
Table of Contents

Conference Committee .............................................................................V
Preface ....................................................................................................VII
Table of Contents .................................................................................... IX
Key Note

R.S. Sirohi
Optical Measurement Techniques............................................................2
Session 1: New Methods and Tools for Data Processing

M. Kujawinska
New Challenges for Optical Metrology: Evolution or Revolution ......14

P. de Groot, X. Colonna de Lega
Interpreting interferometric height measurements using the instrument transfer function...................................................................30

K.A. Stetson
Are Residues of Primary Importance in Phase Unwrapping? ............38

W. Wang, Z. Duan, S.G. Hanson, Y. Miyamoto, M. Takeda
Experimental Study of Coherence Vortices: Birth and Evolution of Phase Singularities in the Spatial Coherence Function........................46

C.A. Sciammarella, F.M. Sciammarella
Properties of Isothetic Lines in Discontinuous Fields...........................54

R. Dändliker
Heterodyne, quasi-heterodyne and after ...............................................65

J.M. Huntley, M.F. Salfity, P.D. Ruiz, M.J. Graves, R. Cusack, D.A. Beauregard
Robust three-dimensional phase unwrapping algorithm for phase contrast magnetic resonance velocity imaging ......................................74
R. Onodera, Y. Yamamoto, Y. Ishii
Signal processing of interferogram using a two-dimensional discrete Hilbert transform.....................................................................................82

J.A. Quiroga, D. Crespo, J.A. Gomez Pedrero, J.C. Martinez-Antón
Recent advances in automatic demodulation of single fringe patterns ...................................................................................................................90

C. Breluzeau, A. Bosseboeuf, S. Petitgrand
Comparison of Techniques for Fringe Pattern Background Evaluation ...................................................................................................................98

W. Schumann
Deformed surfaces in holographic Interferometry. Similar aspects concerning nonspherical gravitational fields.......................................107

I. Gurov, A. Zakharov
Dynamic evaluation of fringe parameters by recurrence processing algorithms...............................................................................................118

T. Haist, M. Reicherter, A. Burla, L. Seifert, M. Hollis, W. Osten
Fast hologram computation for holographic tweezers .......................126

Y. Fu, C. Quan, C. Jui Tay, H. Miao
Wavelet analysis of speckle patterns with a temporal carrier ...........134

C. Shakher, S. Mirza, V. Raj Singh, Md. Mosarraf Hossain, R.S. Sirohi
Different preprocessing and wavelet transform based filtering techniques to improve Signal-to-noise ratio in DSPI fringes ............142

J. Liesener, W. Osten
Wavefront Optimization using Piston Micro Mirror Arrays ............150

E. Hack, P. Narayan Gundu
Adaptive Correction to the Speckle Correlation Fringes using twisted nematic LCD ..........................................................................................158

R. Doloca, R. Tutsch
Random phase shift interferometer .....................................................166
V. Markov, A. Khizhnyak
Spatial correlation function of the laser speckle field with holographic technique.................................................................................................175

Q. Kemao, S. Hock Soon, A. Asundi
Fault detection from temporal unusualness in fringe patterns..........183

T. Böttner, M. Kästner
The Virtual Fringe Projection System (VFPS) and Neural Networks .................................................................................................................191

F.J. Cuevas, F. Mendoza Santoyo, G. Garnica, J. Rayas, J.H. Sossa
Fringe contrast enhancement using an interpolation technique .......195

S. Drobczynski, H. Kasprzak
Some remarks on accuracy of imaging polarimetry with carrier frequency ................................................................................................204

A. Federico, G.H. Kaufmann
Application of weighted smoothing splines to the local denoising of digital speckle pattern interferometry fringes ....................................208

R.M. Groves, S.W. James, R.P. Tatam
Investigation of the fringe order in multi-component shearography surface strain measurement ..................................................................212

Q. Kemao, S. Hock Soon, A. Asundi
Metrological Fringe inpainting.............................................................217

B. Kemper, P. Langehanenberg, S. Knoche, G. von Bally
Combination of Digital Image Correlation Techniques and Spatial Phase Shifting Interferometry for 3D-Displacement Detection and Noise Reduction of Phase Difference Data ..........................................221

T. Kozacki, P. Kniazewski, M. Kujawinska
Photoelastic tomography for birefringence determination in optical microelements ........................................................................................226

A. Martínez, J.A. Rayas, R. Corsero
Optimization of electronic speckle pattern interferometers ..............230
K. Patorski, A. Styk
Properties of phase shifting methods applied to time average interferometry of vibrating objects ......................................................234

P.D. Ruiz, J.M. Huntley
Depth-resolved displacement measurement using Tilt Scanning Speckle Interferometry..........................................................................238

G. Sai Siva, L. Kameswara Rao
New Phase Unwrapping Strategy for Rapid and Dense 3D Data Acquisition in Structured Light Approach .........................................242

P.A.A.M. Somers, N. Bhattacharya
Determination of modulation and background intensity by uncalibrated temporal phase stepping in a two-bucket spatially phase stepped speckle interferometer.............................................................247
Session 2: Resolution Enhanced Technologies

K. Sugisaki, M. Hasegawa, M. Okada, Z. Yucong, K. Otaki, Z. Liu, M. Ishii, J. Kawakami, K. Murakami, J. Saito, S. Kato, C. Ouchi, A. Ohkubo, Y. Sekine, T. Hasegawa, A. Suzuki, M. Niibe, M. Takeda
EUVA's challenges toward 0.1nm accuracy in EUV at-wavelength interferometry ........................................................................................252

M. Totzeck
Some similarities and dissimilarities of imaging simulation for optical microscopy and lithography .................................................................267

I. Harder, J. Schwider, N. Lindlein
A Ronchi-Shearing Interferometer for compaction test at a wavelength of 193nm .............................................................................275

J. Zellner, B. Dörband, H. Feldmann
Simulation and error budget for high precision interferometry .......283

G. Jäger, T. Hausotte, E. Manske, H.-J. Büchner, R. Mastylo, N. Dorozhovets, R. Füßl, R. Grünwald
Progress on the wide scale Nano-positioning- and Nanomeasuring Machine by Integration of Optical-Nanoprobes .................................291
J.J.M. Braat, P. Dirksen, A.J.E.M. Janssen
Through-Focus Point-Spread Function Evaluation for Lens Metrology using the Extended Nijboer-Zernike Theory ......................................299

C.D. Depeursinge, A.M. Marian, F. Montfort, T. Colomb, F. Charriére, J. Kühn, E. Cuche, Y. Emery, P. Marquet
Digital Holographic Microscopy (DHM) applied to Optical Metrology: A resolution enhanced imaging technology applied to inspection of microscopic devices with subwavelength resolution ...........................308

T. Tschudi, V.M. Petrov, J. Petter, S. Lichtenberg, C. Heinisch, J. Hahn
An adaptive holographic interferometer for high precision measurements.........................................................................................315

T. Yatagai, Y. Yasuno, M. Itoh
Spatio-Temporal Joint Transform Correlator and Fourier Domain OCT.........................................................................................................319

W. Hou
Subdivision of Nonlinearity in Heterodyne Interferometers .............326
Session 3: Wide Scale 4D Optical Metrology

O. Loffeld
Progress in SAR Interferometry ..........................................................336

P. Aswendt, S. Gärtner, R. Höfling
New calibration procedure for measuring shape on specular surfaces .................................................................................................................354

T. Bothe, W. Li, C. von Kopylow, W. Jüptner
Fringe Reflection for high resolution topometry and surface description on variable lateral scales ...................................................362

J. Kaminski, S. Lowitzsch, M.C. Knauer, G. Häusler
Full-Field Shape Measurement of Specular Surfaces.........................372

L.P. Yaroslavsky, A. Moreno, J. Campos
Numerical Integration of Sampled Data for Shape Measurements: Metrological Specification.....................................................................380
P. Pfeiffer, L. Perret, R. Mokdad, B. Pecheux
Fringe analysis in scanning frequency interferometry for absolute distance measurement ...........................................................................388

I. Yamaguchi, S. Yamashita, M. Yokota
Surface Shape Measurement by Dual-wavelength Phase-shifting Digital Holography ................................................................................396

Y. Ishii, R. Onodera, T. Takahashi
Phase-shifting interferometric profilometry with a wide tunable laser source ......................................................................................................404

P. Andrä, H. Schamberger, J. Zänkert
Opto-Mechatronic System for Sub-Micro Shape Inspection of Innovative Optical Components for Example of Head-Up-Displays 411

X. Peng, J. Tian
3-D profilometry with acousto-optic fringe interferometry...............420

E. Garbusi, E.M. Frins, J.A. Ferrari
Phase-shifting shearing interferometry with a variable polarization grating recorded on Bacteriorhodopsin...............................................428

G. Khan, K. Mantel, N. Lindlein, J. Schwider
Quasi-absolute testing of aspherics using Combined Diffractive Optical Elements ....................................................................................432

G. Notni, P. Kühmstedt, M. Heinze, C. Munkelt, M. Himmelreich
Selfcalibrating fringe projection setups for industrial use.................436

J. Tian, X. Peng
3-D shape measurement method using point-array encoding ...........442

H. Wagner, A. Wiegmann, R. Kowarschik, F. Zöllner
3D measurement of human face by stereophotogrammetry..............446

M. Wegiel, M. Kujawinska
Fast 3D shape measurement system based on colour structure light projection................................................................................................450
Session 4: Hybrid Measurement Technologies

L. Koenders, A. Yacoot
Tip Geometry and Tip-Sample Interactions in Scanning Probe Microscopy (SPM) .................................................................................456

N. Demoli, K. Šariri, D. Vukicevic, M. Torzynski
Applications of time-averaged digital holographic interferometry...464

P. Picart, M. Grill, J. Leval, J.P. Boileau, F. Piquet
Spatio-temporal encoding using digital Fresnel holography .............472

G. Wernicke, M. Dürr, H. Gruber, A. Hermerschmidt, S. Krüger, A. Langner
High resolution optical reconstruction of digital holograms .............480

J. Engelsberger, E.-H. Nösekabel, M. Steinbichler
Application of Interferometry and Electronic Speckle Pattern Interferometry (ESPI) for Measurements on MEMS ........................488

J. Trolinger, V. Markov, J. Kilpatrick
Full-field, real-time, optical metrology for structural integrity diagnostics ..............................................................................................494

T. Doll, P. Detemple, S. Kunz, T. Klotzbücher
3D Micro Technology: Challenges for Optical Metrology .................506

S. Grilli, P. Ferraro, D. Alfieri, M. Paturzo, L. Sansone, S. De Nicola, P. De Natale
Interferometric Technique for Characterization of Ferroelectric Crystals Properties and Microengineering Process............................514

J. Müller, J. Geldmacher, C. König, M. Calomfirescu, W. Jüptner
Holographic interferometry as a tool to capture impact induced shock waves in carbon fibre composites .........................................................522

H. Gerhard, G. Busse
Two new techniques to improve interferometric deformation measurement: Lockin and Ultrasound excited Speckle-Interferometry .................................................................................................................530
A. Weckenmann, A. Gabbia
Testing formed sheet metal parts using fringe projection and evaluation by virtual distortion compensation....................................539

V.B. Markov, B.D. Buckner, S.A. Kupiec, J.C. Earthman
Fatigue damage precursor detection and monitoring with laser scanning technique.................................................................................547

G. Montay, I. Lira, M. Tourneix, B. Guelorget, M. François, C. Vial
Analysis of localization of strains by ESPI, in equibiaxial loading (bulge test) of copper sheet metals........................................................551

P. Picart, J. Leval, J.P. Boileau, J.C. Pascal
Laser vibrometry using digital Fresnel holography ...........................555

P. Picart, J. Leval, M. Grill, J.P. Boileau, J.C. Pascal
2D laser vibrometry by use of digital holographic spatial multiplexing .................................................................................................................563

V. Sainov, J. Harizanova, S. Ossikovska, W. Van Paepegem, J. Degrieck, P. Boone
Fatigue Detection of Fibres Reinforced Composite Materials by Fringes Projection and Speckle Shear Interferometry.......................567

L. Salbut, M. Jozwik
Multifunctional interferometric platform specialised for active components of MEMS/MOEMS characterisation..............................571

V. Tornari, E. Tsiranidou, Y. Orphanos, C. Falldorf, R. Klattenhof, E. Esposito, A. Agnani, R. Dabu, A. Stratan, A. Anastassopoulos, D. Schipper, J. Hasperhoven, M. Stefanaggi, H. Bonnici, D. Ursu
Laser Multitask ND Technology in Conservation Diagnostic Procedures ..............................................................................................575
Session 5: New Optical Sensors and Measurement Systems

T.-C. Poon
Progress in Scanning Holographic Microscopy for Biomedical Applications............................................................................................580

K. Creath, G.E. Schwartz
The Dynamics of Life: Imaging Temperature and Refractive Index Variations Surrounding Material and Biological Systems with Dynamic Interferometry .......................................................................588

M. Józwik, C. Gorecki, A. Sabac, T. Dean, A. Jacobelli
Microsystem based optical measurement systems: case of optomechanical sensors.........................................................................597

C. Richter, B. Wiesner, R. Groß, G. Häusler
White-light interferometry with higher accuracy and more speed ...605

F. Depiereux, R. Schmitt, T. Pfeifer
Novel white light Interferometer with miniaturised Sensor Tip .......613

W. Mirandé
Challenges in the dimensional Calibration of sub-micrometer Structures by Help of optical Microscopy ...........................................622

A. Albertazzi, A. Dal Pont
A white light interferometer for measurement of external cylindrical surfaces ...................................................................................................632

J. Millerd, N. Brock, J. Hayes, M. North-Morris, B. Kimbrough, J. Wyant
Pixelated Phase-Mask Dynamic Interferometers ...............................640

K.D. Hinsch, H. Joost, G. Gülker
Tomographic mapping of airborne sound fields by TV-holography 648

S. Toyooka, H. Kadono, T. Saitou, P. Sun, T. Shiraishi, M. Tominaga
Dynamic ESPI system for spatio-temporal strain analysis ................656
V. Striano, G. Coppola, P. Ferraro, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, R. Marcelli
Digital holographic microscope for dynamic characterization of a micromechanical shunt switch..............................................................662

Y. Emery, E. Cuche, F. Marquet, S. Bourquin, P. Marquet
Digital Holographic Microscopy (DHM): Fast and robust 3D measurements with interferometric resolution for Industrial Inspection................................................................................................667

R. Höfling, C. Dunn
Digital Micromirror Arrays (DMD) – a proven MEMS technology looking for new emerging applications in optical metrology .............672

C. Bräuer-Burchardt, M. Palme, P. Kühmstedt, G. Notni
Optimised projection lens for the use in digital fringe projection.....676

K. Mantel, J. Lamprecht, N. Lindlein, J. Schwider
Absolute Calibration of Cylindrical Specimens in Grazing Incidence Interferometry........................................................................................682

A. Michalkiewicz, J. Krezel, M. Kujawinska, X. Wang, P.J. Bos
Digital holographic interferometer with active wavefront control by means of liquid crystal on silicon spatial light modulator..................686

K.-U. Modrich
In-Situ-Detection of Cooling Lubricant Residues on Metal Surfaces Using a Miniaturised NIR-LED-Photodiode-System .........................690

E. Papastathopoulos, K. Körner, W. Osten
Chromatic Confocal Spectral Interferometry (CCSI) .......................694

F. Wolfsgruber, C. Rühl, J. Kaminski, L. Kraus, G. Häusler, R. Lampalzer, E.-B. Häußler, P. Kaudewitz, F. Klämpfl, A. Görtler
A simple and efficient optical 3D-Sensor based on "Photometric Stereo" ("UV-Laser Therapy") ............................................................702
Appendix: New Products ...........................................................707
Key Note
Optical Measurement Techniques
Given by Rajpal S. Sirohi, Bhopal (India)
Optical Measurement Techniques

R.S. Sirohi
Vice-Chancellor, Barkatullah University, Bhopal 462 026, India
1 Introduction

Man's romance with light may date back millions of years, but light as a measurement tool is of recent origin. Light is used for sensing a variety of parameters, and its domain of applications is so vast that it pervades all branches of science, engineering, technology, biomedicine, agriculture, etc. Devices that use light for sensing, measurement and control are termed optical sensors. Optical sensing is generally non-contact and non-invasive, and it provides very high measurement accuracy; in many cases the accuracy can be varied over a large range. In these sensors an optical wave is both the information sensor and the carrier of information. Any one of the following characteristics of a wave can be modulated by the measured quantity (the measurand): amplitude or intensity, phase, polarization, frequency, and direction of propagation. However, the detected quantity is always intensity, since detectors cannot follow the optical frequency. The measurand modifies the characteristics of the wave in such a way that, on demodulation, a change in intensity results; this change in intensity is related to the measured quantity. In some measurements the intensity of the wave is modulated directly, and hence no demodulation before detection is needed. Measurement of phase is often used; phase can be measured by direct as well as indirect methods. Indirect methods of measuring phase make use of interferometry. I will therefore confine my attention to some of the
techniques developed by us over the last couple of decades. For the moment I will further confine myself to two areas: 1. Collimation Testing, and 2. Speckle Interferometry. Many applications require a collimated beam, and a number of collimation-testing methods are available. We have researched this topic and developed several novel techniques. Similarly, we have carried out detailed investigations in speckle shear interferometry.
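The point made above, that detectors record only intensity while phase is recovered indirectly by interferometry, can be sketched numerically. The following is my own illustration (the beam intensities and the phase value are made-up numbers, not taken from the talk): it simulates the two-beam interference law I = I1 + I2 + 2*sqrt(I1*I2)*cos(phi) and recovers phi with the standard four-step phase-shifting formula.

```python
import numpy as np

def intensity(i1, i2, phi):
    """Detected intensity of two interfering beams with phase difference phi."""
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * np.cos(phi)

def four_step_phase(i1, i2, phi):
    """Recover phi (mod 2*pi) from four intensity frames shifted by pi/2 each."""
    frames = [intensity(i1, i2, phi + k * np.pi / 2) for k in range(4)]
    I0, I1_, I2_, I3_ = frames
    # (I3 - I1) = 2B sin(phi), (I0 - I2) = 2B cos(phi), with B = 2*sqrt(i1*i2)
    return np.arctan2(I3_ - I1_, I0 - I2_)

phi_true = 1.234                       # assumed phase, radians
phi_rec = four_step_phase(1.0, 0.8, phi_true) % (2 * np.pi)
print(round(phi_rec, 3))               # recovers 1.234
```

The intensity itself carries phi only inside the cosine; the four shifted frames are the "demodulation" step that turns intensity samples back into a phase value.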
2 Collimation Testing

The laser beam is used for a variety of measurements in all branches of science, engineering and technology. Usually a laser oscillating in the TEM00 mode is used. In some applications the beam has to be expanded to a larger diameter. Conversion of the small-diameter, large-divergence beam (as emitted from the laser) into a large-diameter, low-divergence beam is done with an inverted telescope arrangement. The foci of the two lenses must coincide, and their optical axes must be aligned, so that a diffraction-limited incident beam emerges diffraction-limited. The purpose of collimation testing is to check whether the two foci are coincident and the axes are aligned. In general the foci are not coincident, and hence the emergent beam is either divergent or convergent, depending on the locations of the focal points of the two lenses. Interferometric methods are commonly used for checking collimation. These methods can be grouped as follows [1-21, 23-34]:

a. Shear interferometry [1, 2, 7, 21-24]
b. Classical interferometry [2, 8, 30]
c. Talbot interferometry [5, 6, 13, 16, 19, 24]
d. Hybrid methods [32]
e. Special techniques [10, 18, 27, 31, 33]

All these methods require a long coherence length and hence are used only to collimate laser beams.

2.1 Shear Interferometry

A plane parallel plate is one of the most convenient elements for introducing linear shear, in both the reflected and the transmitted beams [1]. For reasons of high fringe contrast, a reflection arrangement is preferred. An interference pattern is observed in the region of overlap. For a divergent or convergent beam incident on the plate, a system of equally spaced
straight fringes is obtained. For a collimated beam, however, there is uniform illumination in the superposed region, i.e. only a single fringe is formed. This offers a quick method of realizing correct collimation. However, determining the lens location that yields an infinite fringe over a finite beam introduces a certain inaccuracy. This problem is resolved by the use of a wedge plate with a small wedge angle of about 10 arc seconds [7]. The wedge plate can be used in two orientations, namely (i) shear and wedge directions parallel, and (ii) shear and wedge directions perpendicular. Interference between the beams reflected from the front and rear surfaces of the wedge plate results in a straight fringe pattern. For a collimated beam the fringes run perpendicular to the wedge direction. For a divergent or convergent beam, however, there is a change of fringe width in case (i) and a change of fringe orientation in case (ii). Usually the second configuration is used, and collimation is achieved when the fringes run parallel to a fiduciary line drawn on the face of the plate itself. This offers better accuracy than the uncoated plane parallel plate. Wedge plate shear interferometry is certainly an improvement over the plane-parallel-plate method, since aligning the fringe pattern parallel to a fiduciary mark can be done more accurately than determining an infinite fringe width over a finite aperture. However, there remains a need to dispense with the fiduciary mark; essentially one seeks self-referencing methods [12, 13, 34]. This can be achieved with a pair of wedge plates. The plates are arranged anti-parallel, i.e. the wedge angles of the two plates are oppositely directed. This composite plate can be used in two orientations: (i) wedge direction perpendicular to the shear direction (orthogonal configuration), and (ii) wedge direction parallel to the shear direction (parallel configuration) [13].
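The fringe geometry of the single wedge plate can be sketched numerically. This is my own illustrative calculation, not code from the text: the wavelength, plate index and wedge angle are assumed values, and the rotation formula arctan((s/R)/(2*n*alpha)) is the standard small-angle result for a tilt s/R (shear s, residual beam curvature radius R) superposed perpendicular to the wedge tilt 2*n*alpha.

```python
import numpy as np

LAMBDA = 632.8e-9                 # assumed He-Ne wavelength, m
ALPHA = np.deg2rad(10 / 3600.0)   # 10 arc-second wedge angle, rad
N = 1.5                           # assumed plate refractive index

def fringe_spacing_collimated():
    """Fringe spacing (m) for a perfectly collimated beam: lambda / (2*n*alpha)."""
    return LAMBDA / (2.0 * N * ALPHA)

def fringe_rotation(shear, radius):
    """Fringe rotation (rad) away from the fiduciary orientation for a
    beam of curvature radius R observed through lateral shear s."""
    return np.arctan2(shear / radius, 2.0 * N * ALPHA)

print(f"{fringe_spacing_collimated() * 1e3:.2f} mm fringe spacing")
print(f"{np.rad2deg(fringe_rotation(5e-3, 100.0)):.1f} deg rotation")
```

At collimation (R tending to infinity) the rotation goes to zero and the fringes lie along the fiduciary line; any residual curvature shows up directly as a rotation angle, which is why the orientation criterion is more sensitive than judging an infinite fringe width.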
At collimation, straight-line fringes normal to the wedge direction are formed in both halves of the composite plate. For non-collimated illumination, however, the fringe patterns rotate in opposite directions in the orthogonal configuration, and the fringe width changes in the parallel configuration. A number of methods have been demonstrated wherein a single wedge plate, together with additional optics, simulates the two-wedge-plate interferometer; the wedge plates can also be arranged in tandem [12, 13, 17, 25]. If, instead of an uncoated wedge plate, a plate coated on both sides is used, some interesting results are observed. First, the fringe pattern can be observed both in reflection and in transmission, but the transmission pattern is recommended. It can be shown that the principal fringe has satellites when the plate is illuminated by a convergent or divergent beam [22, 23]. The satellite fringes disappear and sharp, equidistant straight fringes are observed when the illumination on the plate is collimated. The transmission pattern is recommended over the reflection pattern because (i) the contrast of the fringes
Key Note
5
is high, and (ii) the satellite fringes are stronger. This method is better than that based on the uncoated wedge plate, as it relies on the appearance and disappearance of satellite fringes. It may be emphasized that certain reflectivity values give better results than others. For testing the collimation of short-duration laser beams, a cyclic interferometer with a shear element inside the interferometer has been proposed18,31. The shear element may, for example, introduce radial shear.

2.2 Classical Interferometry

Classical interferometry can be used for collimation testing, but it requires a fairly large path difference between the two beams2,8. The basic idea is that if the incident beam is collimated, it should give either a fringe-free field or a straight-line fringe pattern irrespective of the path difference between the two beams. However, if the incident beam departs from collimation, the pattern arises from interference between two spherical waves, thereby displaying circular or, in general, curved fringes. Obviously the sensitivity depends on the path difference, and the setup is highly susceptible to vibration. This technique is therefore seldom used for collimation testing.

2.3 Talbot Interferometry

A certain class of objects, when illuminated by a coherent beam, image themselves at specific distances along the direction of propagation. These objects have to satisfy the Montgomery condition. A linear grating is an example of such an object. Under Talbot imaging, periodicity in the transverse direction translates into periodicity along the longitudinal direction. A linear grating images itself at equal distances; theoretically, an infinite number of identical images are formed. However, when the illumination is either convergent or divergent, the grating pitch in the image changes and the Talbot images are no longer equispaced. 
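For orientation (this numerical aside is not part of the original text), the paraxial self-imaging distance of a linear grating of pitch p is the Talbot length z_T = 2p²/λ; the grating pitch and wavelength below are illustrative choices:

```python
# Paraxial Talbot self-imaging distance z_T = 2 p^2 / lambda for a linear
# grating of pitch p; 100 um grating and He-Ne laser are illustrative values.
def talbot_distance(pitch_m, wavelength_m):
    return 2.0 * pitch_m**2 / wavelength_m

zT = talbot_distance(100e-6, 633e-9)
print(f"Talbot distance = {zT*1e3:.1f} mm")
```

A 100 µm grating under He-Ne illumination self-images roughly every 32 mm, which sets the scale at which the second grating must be placed.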
However, for a small departure from collimation, which is usually the case, the Talbot planes may be assumed to lie at the locations dictated by collimated illumination, while the grating pitch changes. If another grating is placed at a Talbot plane, a moiré pattern is formed3,5,6,19. This moiré pattern can have either finite or infinite fringe width, depending on the orientation of the grating. A departure from collimation will either produce moiré fringes, when infinite fringe width is used, or rotate the moiré pattern, in the case of finite fringe width. This poses the same problem encountered with the plane parallel plate and wedge plate techniques. Therefore a dual grating is
conceived which, when illuminated by a coherent beam, images itself at the Talbot planes. At an appropriate Talbot plane, another identical grating is placed, producing moiré patterns in both halves. The moiré fringes in these patterns run parallel to each other if the illumination is collimated. This is a self-referencing technique, like the double wedge plate11,16. The dual grating has been produced in two different configurations. Several other types of gratings, such as circular and spiral gratings, have also been used for collimation testing24,29.

2.4 Hybrid Techniques

These techniques combine shear interferometry with the Talbot effect. Several combinations have been adopted and found to provide reasonable sensitivities32.

2.5 Special Techniques

All self-referencing techniques could be called special techniques, as they are self-referencing and provide double the sensitivity. We can also explore the possibility of phase conjugation for this purpose. Since a phase conjugate mirror converts a diverging wave into a converging wave and vice versa, it offers an exciting way of collimating a beam. A Michelson interferometer setup is used in which one of the mirrors is replaced by a phase conjugate mirror10,15,33. In general one observes curved or circular fringes; only at correct collimation does one observe straight-line fringes or a fringe-free field. The method offers double the sensitivity of the plane parallel plate or wedge plate technique, as the interference arises between diverging and converging waves of nearly the same curvature. The self-referencing feature can be introduced by using a double-mirror arrangement instead of the single mirror of the Michelson interferometer. The angle between the mirrors is taken to be very small, which presents two fields; the interference pattern in one field acts as a reference for the other.
3 Speckle interferometry

Coherent light reflected from a rough surface, or transmitted through a medium having refractive index inhomogeneities or random surface height variations, such as a ground glass, shows a grainy structure in space, which is
called a speckle pattern; the grains are called speckles35. They arise from the self-interference of a large number of randomly dephased waves. The speckle pattern degrades image quality under coherent illumination and hence was considered the bane of holographers. Methods were therefore investigated to eliminate or reduce the speckle noise. It was soon realized, however, that the speckle pattern is also a carrier of information. This realization gave rise to a class of techniques known as speckle metrology36,37. Initially applied to fatigue testing, it has slowly evolved into a technique comparable to holographic interferometry and is applied to deformation measurement, contouring, stress analysis, vibration measurement, etc. Earlier, speckle interferometry was carried out with photoemulsions for recording the speckle pattern. However, it can also be carried out with electronic detection; hence phase shifting can easily be performed, and processing can be almost real-time, providing 3-D displays of deformation maps38. Further, the configuration can be designed to measure all three components of the deformation vector simultaneously, and equipment for such measurements is commercially available39. Equipment for directly measuring displacement derivatives, and hence strains, slopes and curvature, is also commercially available40. The technique has therefore come out of the laboratory and is used in the field. Speckle interferometry, unlike holographic interferometry, uses an imaging geometry, and, as in classical interferometry, a reference beam is added to code the phase information into intensity variations41. The reference beam can be specular or diffuse and is generally added axially, except where spatial phase shifting is incorporated42. In the latter case it makes a very small angle with the object wave, so that an appropriate fringe frequency is produced. 
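As a numerical aside (not from the original text), a common rule of thumb gives the mean subjective speckle diameter in the image plane of a lens with f-number F and magnification M as d ≈ 1.22 λ (1 + M) F; the function name and parameter values are illustrative:

```python
# Rule-of-thumb mean subjective speckle diameter in the image plane of a
# lens with f-number F and magnification M: d ~ 1.22 * lambda * (1 + M) * F.
def subjective_speckle_size(wavelength_m, f_number, magnification):
    return 1.22 * wavelength_m * (1.0 + magnification) * f_number

d = subjective_speckle_size(633e-9, f_number=8, magnification=1.0)
print(f"mean speckle size ~ {d*1e6:.1f} um")
```

At f/8 and unit magnification this is on the order of 10 µm, which is why matching the speckle size to the detector pixel size matters in electronic detection.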
In shear speckle interferometry, the reference beam is built in: one sheared field acts as a reference for the other43. The shear methods are grouped under the following categories (Table 1):

Table 1. Shear types and fringe formation
1. Lateral (linear) shear: G(x+Δx, y+Δy) − G(x, y)
2. Rotational shear: G(r, θ+Δθ) − G(r, θ)
3. Radial shear: G(r ± Δr, θ) − G(r, θ)
4. Inversion shear: G(x, y) − G(−x, −y)
5. Folding shear: G(x, y) − G(−x, y) (folding about the y-axis); G(x, y) − G(x, −y) (folding about the x-axis)
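The lateral-shear entry of Table 1 can be illustrated numerically: for a smooth phase G, the sheared difference G(x+Δx, y) − G(x, y) approximates Δx·∂G/∂x, i.e. a partial-slope map. A minimal numpy sketch (the quadratic test phase is an illustrative choice):

```python
import numpy as np

# Lateral shear (Table 1, entry 1): the sheared-field phase difference
# G(x+dx, y) - G(x, y) approximates dx * dG/dx, a partial slope.
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
phi = 20.0 * (X**2 + Y**2)                   # defocus-like test phase (rad)

s = 4                                        # shear in pixels
dx = x[s] - x[0]
measured = phi[:, s:] - phi[:, :-s]          # sheared-field phase difference
slope_mid = 40.0 * (X[:, :-s] + dx / 2.0)    # analytic dG/dx at the midpoint
print(np.allclose(measured, dx * slope_mid)) # exact for a quadratic phase
```

For the quadratic phase the finite difference equals the slope evaluated at the shear midpoint, which is why shear interferograms are read as slope maps.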
Linear shear provides fringe patterns that depict partial slope. Rotational shear can fully compensate a rotationally symmetric deflection/deformation and hence presents fringes that are due to departure from circular symmetry44. Radial shear provides radially increasing sensitivity, with maximum sensitivity at the periphery45. Folding yields double the sensitivity for tilt. In the case of inversion and folding shear there is full superposition even though the shear has been applied; this is also true for rotational shear. A structural engineer is interested in obtaining deflection, strains, bending moments, etc. from the same experiment. This is done by placing an opaque plate having several apertures in front of the imaging lens46. These apertures may carry shear elements, etc. By a judicious choice of aperture configuration, it is possible to obtain in-plane and out-of-plane displacement components, partial slopes and curvature fringes from a single double-exposure specklegram. The technique works well with photographic recording, as the different pieces of information can be retrieved at the frequency plane through Fourier filtering; the method employs both frequency and theta multiplexing. Several other interesting techniques have also been reported in which the sensitivity has been enhanced47-49. It is, however, difficult to implement multiplexing techniques in electronic speckle pattern interferometry due to the limited resolution of the CCD array. It may nevertheless be possible to overcome this limitation using techniques similar to those used in digital holographic interferometry.
4 Acknowledgements This paper contains information that has been reported in some form in several publications. I would therefore like to express my sincere gratitude to all my students and colleagues who have contributed to the development of these techniques.
5 References 1. MVRK Murty, Use of a single plane parallel plate as a lateral shearing interferometer with a visible gas laser source, Appl. Opt., 3, 531-534 (1964) 2. P Langenbeck, Improved collimation test, Appl. Opt., 9, 2590-2593 (1970)
3. DE Silva, A simple interferometric method of beam collimation, Appl. Opt., 10, 1980-1983 (1971) 4. JC Fouere and D Malacara, Focusing errors in collimating lens or mirror: Use of a moiré technique, Appl. Opt., 13, 1322-1326 (1974) 5. P Hariharan and ZS Hegedus, Double grating interferometers II, Applications to Collimated beams, Opt. Commun., 14, 148-152 (1975) 6. K Patorski, S Yokezeki and T Suzuki, Collimation test by double grating shearing interferometer, Appl. Opt., 15, 1234-1240 (1976) 7. MVRK Murty, Lateral Shear Interferometers, in Optical Shop Testing, Ed. D. Malacara, John Wiley & Sons, pp. 105-148 (1978) 8. M Bass and JS Whittier, Beam divergence determination and collimation using retroreflectors, Appl. Opt., 23, 2674-2675 (1984) 9. MW Grindel, Testing collimation using shearing interferometry, Proc. SPIE, 680, 44-46 (1986) 10. WL Howes, Lens collimation and testing using a Twyman-Green interferometer with self-pumped phase-conjugating mirror, Appl. Opt., 25, 473-474 (1986) 11. MP Kothiyal and RS Sirohi, Improved collimation testing using Talbot interferometry, Appl. Opt., 26, 4056-4057 (1987) 12. RS Sirohi and MP Kothiyal, Double wedge plate shearing interferometer for collimation test, Appl. Opt., 26, 4954-4056 (1987) 13. MP Kothiyal, RS Sirohi and K-J Rosenbruch, Improved techniques of collimation testing, Opt. Laser Technol., 20, 139-144 (1988) 14. CW Chang and DC Su, Collimation method that uses spiral gratings and Talbot interferometry, Opt. Lett., 16, 1783-1784 (1991) 15. RP Shukla, M Dokhanian, MC George and P Venkateshwarlu, Laser beam collimation using phase conjugate Twyman-Green interferometer, Opt. Eng., 30, 386-390 (1991) 16. MP Kothiyal, KV Sriram and RS Sirohi, Setting sensitivity in Talbot interferometry with modified gratings, 23, 361-365 (1991) 17. DY Xu and K-J Rosenbruch, Rotatable single wedge plate shearing interference technique for collimation testing, Opt. Eng., 30, 391-396 (1991) 18. 
TD Henning and JL Carlsten, Cyclic shearing interferometer for collimating short coherence length laser beams, Appl. Opt., 31, 1199-1209 (1992) 19. AR Ganesan and P Venkateshwarlu, Laser beam collimation using Talbot interferometry, Appl. Opt., 32, 2918-2920 (1993) 20. KV Sriram, MP Kothiyal and RS Sirohi, Self-referencing collimation testing techniques, Opt. Eng., 32, 94-100 (1993)
21. KV Sriram, P Senthilkumaran, MP Kothiyal and RS Sirohi, Double wedge interferometer for collimation testing: new configurations, Appl. Opt., 32, 4199-4203 (1993) 22. RS Sirohi, T Eiju, K Matsuda and P Senthilkumaran, Multiple beam wedge plate shear interferometry in transmission, J. Mod. Opt., 41, 1747-1755 (1994) 23. P Senthilkumaran, KV Sriram, MP Kothiyal and RS Sirohi, Multiple beam wedge plate shear interferometer for collimation testing, Appl. Opt., 34, 1197-1202 (1994) 24. KV Sriram, MP Kothiyal and RS Sirohi, Collimation testing with linear dual field, spiral and evolute gratings: A comparative study, Appl. Opt., 33, 7258-7260 (1994) 25. J Choi, GM Perera, MD Aggarwal, RP Shukla and MV Mantravadi, Wedge plate shearing interferometers for collimation testing: Use of moiré technique, Appl. Opt., 34, 3628-3638 (1995) 26. JH Chen, MP Kothiyal and HJ Tiziani, Collimation testing of a CO2 laser beam with a shearing interferometer, Opt. Laser Technol., 12, 179-181 (1995) 27. DY Xu and S Chen, Novel wedge plate beam tester, Opt. Eng., 34, 169-172 (1995) 28. JS Darlin, KV Sriram, MP Kothiyal and RS Sirohi, A modified wedge plate shearing interferometer for collimation testing, Appl. Opt., 34, 2886-2887 (1995) 29. JS Darlin, V Ramya, KV Sriram, MP Kothiyal and RS Sirohi, Some investigations in Talbot interferometry for collimation testing, J. Opt. (India), 42, 167-175 (1996) 30. CS Narayanamurthy, Collimation testing using temporal coherence, Opt. Eng., 35(4), 1161-1164 (1996) 31. JS Darlin, MP Kothiyal and RS Sirohi, Self-referencing cyclic shearing interferometer for collimation testing, J. Mod. Opt., 44, 929-939 (1997) 32. JS Darlin, MP Kothiyal and RS Sirohi, A hybrid wedge plate-grating interferometer for collimation testing, Opt. Eng., 37(5), 1593-1598 (1998) 33. JS Darlin, MP Kothiyal and RS Sirohi, Phase conjugate Twyman-Green interferometer with increased sensitivity for laser beam collimation, J. Mod. Opt., 45, 2371-2378 (1998). 34. 
JS Darlin, MP Kothiyal and RS Sirohi, Wedge plate interferometry – a new dual field configuration for collimation testing, Opt. Laser Technol. 30, 225-228 (1998). 35. JC Dainty (Ed.), Laser Speckle and Related Phenomena, Springer, Berlin (1975).
36. R Jones and C Wykes, Holographic and Speckle Interferometry, Cambridge University Press, Cambridge, England (1989). 37. RS Sirohi (Ed.), Speckle Metrology, Marcel Dekker, New York (1993). 38. PK Rastogi (Ed.), Digital Speckle Pattern Interferometry, Wiley, New York (2001). 39. Steinbichler Optotechnik GmbH, Germany. 40. Bremer Institut fuer Angewandte Strahltechnik (BIAS), Germany. 41. RS Sirohi, Speckle Interferometry, Contemporary Physics, 43(3), 161-180 (2002). 42. RS Sirohi, J Burke, H Helmers and KD Hinsch, Spatial phase-shifting for pure in-plane displacement and displacement derivatives measurement in electronic speckle pattern interferometry (ESPI), Appl. Opt., 36(23), 5787-5791 (1997). 43. RS Sirohi, Speckle shear interferometry - A review, J. Opt. (India), 13, 95-113 (1984). 44. RK Mohanty, C Joenathan and RS Sirohi, NDT speckle rotational shear interferometry, NDT International (UK), 18, 203-205 (1985). 45. C Joenathan, CS Narayanamurthy and RS Sirohi, Radial and rotational slope contours in speckle shear interferometry, Opt. Commun., 56, 309-312 (1986). 46. RK Mohanty, C Joenathan and RS Sirohi, Speckle and speckle shear interferometers combined for simultaneous determination of out-of-plane displacement and slope, Appl. Opt., 24, 3106-3109 (1985). 47. N Krishna Mohan, T Santhanakrishnan, P Senthilkumaran and RS Sirohi, Simultaneous implementation of Leendertz and Duffy methods for in-plane displacement measurement, Opt. Commun., 124, 235-239 (1996). 48. T Santhanakrishnan, N Krishna Mohan, P Senthilkumaran and RS Sirohi, Slope change contouring of 3D-deeply curved objects by multiaperture speckle shear interferometry, Optik, 104, 27-31 (1996). 49. T Santhanakrishnan, N Krishna Mohan, PK Palanisamy and RS Sirohi, Various speckle interferometric configurations for contouring and slope change measurement, J. Instrum. Soc. India, 27, 16-22 (1997).
SESSION 1
New Methods and Tools for Data Processing Chairs: Werner Jüptner Bremen (Germany) Mitsuo Takeda Tokyo (Japan) Fernando Mendoza Santoyo Guanajuato (Mexico) Jonathan M. Huntley Loughborough (UK)
Invited Paper
New Challenges for Optical Metrology: Evolution or Revolution Malgorzata Kujawinska Institute of Micromechanics & Photonics, Warsaw Univ. of Technology, 8, Sw. A. Boboli Str., 02-525 Warsaw, Poland
1 Introduction

Although experimental interferometry began several centuries ago, it was Thomas Young who reported one of the first examples of quantitative fringe analysis when, in the early 1800s, he estimated the wavelength of light by measuring the spacing of interference fringes. Later in the same century, Michelson and Morley employed fringe measurement in their interferometric experiments. As interferometry began to develop into a mature subject, interferometric metrology was primarily concerned with two-dimensional measurements of optical surfaces, but routine quantitative interpretation of surface form and wavefront deviation was not practical in the absence of computers. Optical workshops used interferometry as a null-setting technique, with craftsmen polishing surfaces until fringes were removed or linearised. The real revolution in interferometry and, more generally, in optical metrology came with the invention of the laser in the early sixties. This highly coherent and efficient light source not only broadened the application of classical interferometry but also enabled the practical development of measurement techniques such as holographic interferometry, ESPI and interferometric grid-based techniques. The information coded in interferograms required quantitative analysis; at first, however, fringe numbers and spacings were determined manually and relied strongly on the a priori knowledge of human operators. At the end of the 1980s we experienced the next revolution in full-field, fringe-based optical metrology. This was due to the rapid development of personal computers with image-processing capabilities and matrix detectors (CCD and later CMOS), as well as the introduction of temporal [1] and spatial [2] phase-based interferogram analysis methods. This was also the time when the Fringe Workshop was born, together with at least three other international conferences (Fringe Analysis’89 FASIG (UK), Interferome-
try’89 (Poland), Interferometry: Techniques and Analysis (USA)) covering the subject of automatic fringe pattern analysis applied to coherent and non-coherent methods of fringe generation. The topics of the Fringe Workshop have steadily expanded, and its original focus has shifted with the needs of the surrounding world. In 1993 the expansion was towards shape measurement and material fault detection. Fringe’97 dealt with scaled metrology, active measurement and hybrid processing technologies, while the first meeting of the 21st century focused on optical methods for micromeasurements and new optical sensors. The new topics for the 2005 meeting are resolution-enhanced technologies and wide-scale 4D optical metrology, with special emphasis on modern measurement strategies that combine physical modelling, computer-aided simulation and experimental data acquisition, as well as new approaches for extending the existing resolution limits. These changes of focus and subject have been driven by new needs expressed by researchers and industry, and by advances in technology that provide new sources, detectors, optoelectronics and electromechanics with enhanced properties. It seems to me, however, that we have experienced evolution of approaches, methods and apparatus concepts rather than revolution. Today, sixteen years after the first Fringe, I was asked by the Program Committee of the Fifth Fringe Workshop to look both back in time and into the future: to evaluate what has happened in the research and application of optical metrology, and to discuss the visionary concepts and advancements in methods and photonic technologies that may create a new generation of optical metrology tools and expand their application to new areas of research, industry, medicine and multimedia technologies. 
The full topic is too wide to cover in this presentation; I will therefore focus on the analysis of a few areas in optical metrology. These include:
- new methods and tools for the generation, acquisition, processing and evaluation of data in optical metrology, including active phase-manipulation methods,
- novel concepts of instrumentation for micromeasurements.
Most of these topics are developing through an evolution of tasks, approaches and tools; however, some of them are, or may become, the subject of technological revolution.
2 New methods for the generation, acquisition, processing and evaluation of data

The success of implementing optical full-field measuring methods in industry, medicine and commerce depends on the capability to provide quick, accurate and reliable generation, acquisition, processing and evaluation of data that may be used directly in a given application or as input for CAD/CAM, FEM, specialized medical or computer graphics and virtual-reality software. I have already mentioned that the revolution in automatic fringe pattern processing was brought about by the introduction of temporal (phase shifting [1]) and spatial (2D Fourier transform [2]) phase-based interferogram analysis, which spawned hundreds of works devoted to modifications, improvements and extensions of these methods. M. Takeda gave an excellent overview, focused on the analogies and dualities in fringe generation and processing, at Fringe’97 [3]. It was shown that all spatial, temporal and spectral fringe pattern analysis methods have common roots, which is why they develop in parallel, although their applications and full implementation are sometimes restricted by the lack of proper technological support. Let us see where we are now. A fringe pattern obtained as the output of a measuring system may be modified physically by optoelectronic and mechanical hardware (sensors and actuators) and virtually by image-processing software [4]. These modifications affect the phase and amplitude (intensity) of the signal produced in space and time, so that the general form of the fringe pattern is given by

I(x, y, t) = a_0(x, y) + \sum_{m=1}^{\infty} a_m(x, y) \cos m\{2\pi(f_{0x} x + f_{0y} y + \nu_0 t) + \alpha(t) + \phi(x, y)\}   (1)
where a_m(x, y) is the amplitude of the mth harmonic of the signal, f_{0x} and f_{0y} are the fundamental spatial frequencies, \nu_0 is the temporal frequency and \alpha is the phase-shift value. The measurand is coded in the phase \phi(x, y); (x, y) and t represent the space and time coordinates of the signal. In addition, there are now several active ways in which the phase coded in the fringe pattern may be modified to allow more convenient analysis. One such method relies on placing an active beam-forming device, e.g. an LCOS modulator, in the reference beam of the interferometer [5]. This makes it possible to reduce the number of fringes in the final interferogram, correct systematic errors, or introduce a proper reference wavefront (including conical or other higher-order wavefronts if necessary). An example of an ac-
tive interferogram generation process, resulting in the subtraction of the initial microelement shape to facilitate the measurement of the deformation of a vibrating object, is shown in Fig. 1. The general scheme of the fringe pattern (FP) analysis process is shown in Fig. 2. After passive or active FP generation, acquisition and preprocessing, the fringe pattern is analysed. Although phase-measuring methods are used in most commercial systems, two alternative approaches should be addressed:
Fig. 1. Active phase correction of a micromembrane: a) initial interferogram obtained in a Twyman-Green interferometer, b) phase map mod 2π used for phase correction, c) interferogram after phase correction, and shapes computed from d) the initial and e) the final interferogram.
- intensity methods, in which we work passively on the image intensity distribution(s) captured by a detector [6]. These include: fringe-extreme localization methods (skeletoning, fringe tracking), widely applied in fault detection and NDT, often coupled with neural-network or fuzzy-logic approaches [7]; phase evaluation by regularization methods [8], which in principle may deliver the unwrapped phase directly from a fringe pattern, but are at present computationally expensive and suffer from multiple restrictions; and contrast-extreme localization methods for white-light interferometry;
- phase methods, for which we actively modify the fringe pattern(s) in order to provide the additional information needed to solve the sign ambiguity [6,9]: temporal heterodyning (introducing running fringes); this method, realized electronically, is very advantageous as it delivers the phase directly with no 2π ambiguity. For a long time it required scanning the field of view with a single detector; recently, however, CMOS photo-sensor technology has made it possible to add functionality [10] and create light-detection systems with parallel phase detection at all pixels. This requires high-end camera systems with active pixel sensors and high-speed video capabilities. Optical metrology, being a smaller market, does not by itself generate commercially available specialized CMOS sensors for temporal-heterodyne phase analysis; however, I strongly believe that in the future the extended usage of optical metrology will show the market that this functionality is worth the extra expense of additional silicon real estate and development cost. When that happens we will experience the next revolution in fringe-based optical metrology, allowing rapid and highly accurate analysis of arbitrary fringe patterns; spatial heterodyning (the Fourier-transform method, PLL and spatial-carrier phase-shifting methods), whose importance increases with the development of high-resolution CCD and CMOS detectors and the need to analyse time-varying objects and to perform measurements in unstable environments; and temporal and spatial phase shifting, which are discrete versions of the above methods, in which the time- or spatially-varying interferogram is sampled over a single period. The extended Fourier analysis of these methods [9] allows their full understanding and the development of numerous algorithms that are insensitive to a variety of errors, although usually at the expense of an increased number of images required for the phase calculations. 
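The discrete temporal phase-shifting scheme mentioned above can be sketched with the classical four-step algorithm (shifts 0, π/2, π, 3π/2), for which the wrapped phase is recovered as arctan2(I₄ − I₂, I₁ − I₃); the synthetic test phase below is an arbitrary choice:

```python
import numpy as np

# Classical four-step temporal phase shifting with shifts 0, pi/2, pi, 3pi/2:
# I1 - I3 = 2*a1*cos(phi) and I4 - I2 = 2*a1*sin(phi), so the wrapped
# phase follows from arctan2(I4 - I2, I1 - I3).
N = 128
y, x = np.mgrid[0:N, 0:N] / N
phi_true = 0.9 * np.pi * np.sin(2*np.pi*x) * np.cos(2*np.pi*y)  # test phase
a0, a1 = 0.5, 0.4                                # background and modulation

I1, I2, I3, I4 = (a0 + a1*np.cos(phi_true + k*np.pi/2) for k in range(4))
phi_rec = np.arctan2(I4 - I2, I1 - I3)           # recovered phase, mod 2*pi

print(np.allclose(phi_rec, phi_true))            # exact here since |phi| < pi
```

Because the test phase stays within (−π, π), the recovery is exact; in general the result is wrapped and still needs phase unwrapping.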
We experience a constant evolution of analysis methods and algorithms; however, the most significant solutions and rapid changes may come from hardware modifications.
Fig. 2. The general scheme of the fringe pattern analysis process.
A very good example of such a process is the evolution of spatial phase-shifting and spatial-carrier phase-shifting methods. They were first introduced into interferometric, holographic and ESPI configurations for dynamic-event analysis in the mid-to-late eighties [11], but have only recently been implemented commercially in single-shot interferometers [12], which allow an object to be measured in the presence of significant vibrations, or the vibration of the sample itself to be measured (thanks to a micropolarizer phase-shifting array overlaid on the CCD detector). Such a compact, hardware-based solution brings optical metrology efficiently to industry and other customers, and may in the future have a much greater impact on intelligent manufacturing, medical and multimedia applications.
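The spatial-carrier idea behind such single-shot systems can be sketched with the 2D Fourier-transform method mentioned earlier: isolate the +f₀ sideband of the spectrum, remove the carrier, and take the argument of the resulting complex signal. A simplified sketch (carrier frequency, band-pass width and test phase are all illustrative choices):

```python
import numpy as np

# Single-shot spatial-carrier analysis in the spirit of the 2D Fourier-
# transform method: keep the +f0 sideband, remove the carrier, take the angle.
N = 256
y, x = np.mgrid[0:N, 0:N]
f0 = 32                                          # carrier cycles across field
phi_true = 3.0 * np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * (N/6)**2))
I = 0.5 + 0.4 * np.cos(2*np.pi*f0*x/N + phi_true)

F = np.fft.fft2(I)
H = np.zeros_like(F)
H[:, f0-16:f0+16] = F[:, f0-16:f0+16]            # crude band-pass around +f0
c = np.fft.ifft2(H)                              # complex fringe signal
phi_rec = np.angle(c * np.exp(-2j*np.pi*f0*x/N)) # carrier removed, wrapped

err = np.abs(phi_rec - phi_true)
print(float(np.median(err)) < 0.1)
```

A single frame suffices, which is exactly why the spatial-carrier approach tolerates vibration; the price is that the carrier must be high enough to separate the sideband from the background term.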
A newer approach, which has gained a lot of attention during the last ten years, is phase reconstruction through digital holography [13]. This is again a concept that became practical due to the availability of high-resolution matrix detectors. Digital recording and numerical reconstruction of holograms enable convenient phase manipulation [14], which, without the need to produce secondary interferograms (as in classical holographic interferometry [15]), enables a wide range of measurements, especially in support of microtechnology. Another very interesting issue connected with digital holography and digital holographic interferometry is the possibility of performing remote measurements or structure monitoring. This relies on capturing and transferring digital data through the Internet and near real-time optoelectronic reconstruction of digital holograms at a distant location [14,16]. The great challenges for DH and DHI are to increase the object size significantly and to develop a versatile camera for a variety of industrial and medical applications. The next constant challenge in fringe pattern analysis is phase unwrapping. As mentioned above, the only method that measures the phase with no 2π ambiguity is temporal heterodyning. All other methods, including [17]:
- path-dependent and path-independent phase unwrapping,
- hierarchical unwrapping,
- regularized phase-tracking techniques,
have several limitations and are often computationally expensive. We still await a new, revolutionary approach that solves this problem efficiently; it slows down the phase calculations significantly and often makes it impossible to fully automate the measurement process. The phase unwrapping procedures finalize the fringe measurement process, which reduces a fringe pattern to a continuous phase map (see Fig. 2). However, to solve a particular engineering problem, phase scaling [18], which converts the phase map into a physical quantity, has to be applied. 
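In one dimension, the simplest unwrapping rule (Itoh's condition) adds multiples of 2π so that successive samples never jump by more than π; numpy's `unwrap` implements exactly this. It fails wherever sampling or noise violates that condition, which is why the 2D case remains hard. A minimal sketch:

```python
import numpy as np

# Itoh's condition: if neighbouring samples of the true phase differ by less
# than pi, the wrapped phase can be unwrapped by adding multiples of 2*pi.
t = np.linspace(0.0, 1.0, 500)
phi_true = 40.0 * t**2                      # smooth ramp spanning many fringes
wrapped = np.angle(np.exp(1j * phi_true))   # phase known only modulo 2*pi

unwrapped = np.unwrap(wrapped)              # restores the continuous phase
print(np.allclose(unwrapped, phi_true))
```

In 2D, path-dependent methods apply this rule along chosen integration paths, and inconsistencies (residues) are what make the problem computationally hard.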
Further processing is strongly application-oriented and is developing rapidly due to the strong need for full-field metrology data in CAD/CAM/CAE, rapid prototyping, intelligent manufacturing, medicine and multimedia technology. To show the importance and complexity of this stage of data processing, I refer to just one example connected with the great demand for realistic imaging of real three-dimensional objects in multimedia techniques [19]. These 3D objects are most often used in computer-generated scenes, with (virtual reality, simulators and games) or without (film and animation) implemented interaction. In general, multimedia techniques require information about the shape and texture of existing 3D objects in a form compatible with existing applications. For the shape representation of virtual 3D objects, a triangle mesh or pa-
rametric surface should be used [20]. Also, to deliver additional colour information, a texture should be created and mapped onto the virtual model [21]. The processing path of a 3D object is shown in Fig. 3. Structured-light projection systems deliver information about a 3D object in the form of (x, y, z, R, G, B) co-ordinates from a single direction, known as two-and-a-half-dimensional (2.5D) data. In consequence, an object has to be measured from N overlapping directions to cover the whole surface and to create a complete virtual representation. In most cases, each directional cloud of points (CoP) is located in its own co-ordinate space, because the relative position of object and system changes during measurement. After the data are captured, they are pre-processed. The software environment imports the data in the form of CoPs. It should work efficiently with huge numbers of points, sometimes more than ten million, and it has to enable the user to pre-process, fit directional CoPs, and convert and export them to a compatible form. Pre-processing algorithms are used for data smoothing, noise removal and simplification of the results in order to decrease the number of points. The component CoPs are automatically merged to create one virtual object. Next, a triangle mesh or parametric description is calculated from the main CoP, with an attached texture map. Finally, the virtual object is exported into a format compatible with the target application.

Fig. 3. Processing path of a 3D object.

To illustrate the complexity of the measurement and processing challenges and the diversity of the objects to be scanned, a virtual model of a full hussar armour rendered in a virtual-reality application [19] is shown in Fig. 4. The measurements were done in the Kórnik castle (Poland). During a two-week measurement session, more than 50 objects were scanned from more than 800 directions. The raw measurement data take up approximately 80 GB of hard-drive space, and the number of measurement points is greater than 2 billion.
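The "simplification of the results in order to decrease the number of points" step can be sketched as voxel-grid decimation: average all points falling into the same cubic cell. This is only one common choice (not necessarily the one used by the authors), and the function name and voxel size are illustrative:

```python
import numpy as np

# Voxel-grid decimation: replace all points that fall into the same cubic
# cell by their centroid -- a common cloud-of-points simplification step.
def voxel_downsample(points, voxel):
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, points)                 # accumulate per-voxel sums
    counts = np.bincount(inv).reshape(-1, 1)
    return sums / counts                         # per-voxel centroids

rng = np.random.default_rng(0)
cloud = rng.random((100_000, 3))                 # synthetic cloud of points
small = voxel_downsample(cloud, voxel=0.05)
print(len(small) < len(cloud))                   # far fewer points survive
```

The voxel size trades fidelity for point count, which is exactly the balance needed before meshing billions of measured points.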
Fig. 4. Virtual model of full hussar armor rendered in virtual reality application: a-d) different views.
Recently, the main challenge facing optical metrology specialists is not only to provide quick and accurate measurements in arbitrary environments, but also to prepare the measurement data for further complex analysis and application-oriented tasks.
3 New challenges and solutions in development of novel instrumentation for micromeasurements

Novel materials, microcomponents and microsystems (MEMS/MOEMS, electronic chips) require constant modification of measurement tools and procedures. Measurement has traditionally been realized using bulk optics systems (Fig. 5a); however, such tools often cannot meet the requirements of micromeasurements, which include: integrated multiple functions; improved performance, specifically high spatial and temporal resolution and nanometer accuracy; inexpensive, compact and batch-fabricated designs; portability; low power consumption; and easy massive parallelism. Supporting micro- and nanotechnology with measurement and testing requires "rulers" that allow measurement of very small dimensions. These rulers should be based on a novel strategy that fits the measurement tool to the dimensions of the object (Fig. 5b). This can be achieved with a lab-on-chip strategy and/or a micro-optical (M-O) platform approach. Novel MEMS and MOEMS technologies offer new possibilities to create measuring devices. Usually micro-optical
MEMS consist of elements used to shape and steer an optical beam (actively or passively) and of electro-optical elements (laser diodes, detectors, etc.). For these functionalities, generalized platforms are needed which simplify the assembly of the MOEMS. The platforms then become an enabling technology for designing complex micro-optical systems in which micro-optical components are fabricated with extremely tight dimensional tolerances, positioned precisely on chip, and microactuated in a well controlled manner. Following this strategy, the first new measuring architectures have been proposed: a Michelson interferometer with a MEMS-based actuator [22], a waveguide-based multifunctional microinterferometric system [23] and an on-chip integrated optical scanning confocal microscope [24].
Fig. 5. From macroscopic scale to microsystem concept of optical metrology (courtesy of C. Gorecki).
The first one takes the form of an in-plane beam steering platform (micro-optical bench, MOB), which consists of a passive micromirror and a beamsplitter integrated with a movable micromirror (Fig. 6) and fixing elements for mounting optical devices (diffractive elements, laser diodes, etc.). Such a device finds applications in low cost, mass-produced, miniature spectroscopy, but it can easily be modified into an active Twyman-Green interferometer allowing microshape and out-of-plane displacement measurement.
Fig. 6. Michelson interferometer with MEMS-based actuator: a) scheme, b) photograph of a comb drive actuator with mirror.
Another example of a miniature measurement system based on microoptics is the novel multifunctional waveguide microinterferometer produced with low cost technology (moulding) and material (PMMA) (Fig. 7). It consists of one or several measurement modules including:
Fig. 7. The scheme of multifunctional integrated waveguide microinterferometric system.
- a grating (moiré) microinterferometer (or ESPI) for in-plane displacement/strain measurements [25],
- a Twyman-Green interferometer (or digital holographic interferometer) for out-of-plane displacement/shape measurement [23,25],
- a digital holographic interferometer for u, v, w displacement determination [26].
The system also includes an illuminating/detection module, in which a VCSEL light source and a CMOS matrix are integrated on one platform, and may include an active beam manipulation module which introduces phase shifting or linear carrier fringes for rapid interferogram analysis. The next example, the on-chip scanning confocal microscope, is obtained by the "smart pixel" solution [24]. The individual pixel is configured from a vertical-cavity surface-emitting laser (VCSEL) flip-chip bonded to a microactuator that moves the integrated microlens up and down, flying directly above the specimen (Fig. 8). The use of the optical feedback of the laser cavity as an active detection system simplifies the microscope design, because the light source and detector are unified parts of the VCSEL itself. The microscope can be fabricated as a single device (Fig. 8a) or as an array-type device (Fig. 8b), the so-called multi-probe architecture. The focusing system of the multi-probe microscope must contain an array of microlenses, each moved by an individual vertical actuator. This can be constructed as a two-silicon-wafer system, where one wafer carries the array of microlenses on moving microstructures (membranes, beams) and the second contains the steering electrodes. Using such an array of confocal microscopes, light from multiple pixels can be acquired simultaneously. Each microscope is able to capture 2-D images (and 3-D object reconstructions) with improved dynamic range and improved sensitivity due to the independent control of the illumination of each pixel. The miniature confocal microscope can possess a lateral resolution of 1 to 2 µm. The multiprobe (array) approach will in future allow one to overcome the fundamental limitation of single-optical-axis imaging, namely the tradeoff between the field of view (FOV) and image resolution.
If this bottleneck problem is solved, industry will be able to check at high speed the quality of new products manufactured with silicon technologies, which in turn will support the development of wafer-based micro-optics technology (Fig. 9).
Fig. 8. Chip-scale optical scanning confocal microscope: a) an individual “smart pixel”, b) multiprobe system.
Fig. 9. An exemplary silicon wafer with multiple active micromembranes, which requires parallel testing: a) photo of the wafer, b) close-up of individual elements, c) result of a single micromembrane shape measurement, and d) processed results in the form of the P-V value distribution over the whole wafer.
Interestingly, commercial realization of an array microscope has already started [27]. Researchers from DMETRIX Inc. recently introduced a new generation of optical system with an FOV-to-physical-lens-diameter ratio (FDR) of around eight (for classical microscope objectives this value is on the order of 25 to 50). This allows a large number of microscope objectives to be assembled to simultaneously image different parts of the same object at high resolution. The system consists of an array of multi-element micro-optical systems overlaying a 2-D detector array (Fig. 10). Each of the individual (aspheric) optical systems has a numerical aperture of 0.65 and an FOV on the order of 250 µm in diameter. The specially designed custom image detector captures data at 2500 frames/s with 3.5 µm pixels; it has 10 parallel output channels and operates at a master clock of 15 MHz. The design of the whole array as a monolithic ensemble does not require any form of stitching or image postprocessing, making the completely reconstructed image available immediately after completion of the scan. DMETRIX's microscope is focused on biological applications (histopathology), but the array approach can be extended to other modalities including epi-illumination microscopy, confocal microscopy, and interferometric and digital holography microscopes. It is anticipated that the development of array-based, ultra-fast, high-resolution microscope systems will launch the next chapter of digital microscopy. It
is difficult to predict whether this is evolution or revolution in micromeasurement, as it depends strongly on the amount of money allocated to the implementation of this concept.
Fig. 10. The 8×10 array of miniature microscope objectives constructed from several monolithic plates [27].
4 Conclusions

Revolutions do not happen often in science, but they influence our lives significantly. We have certainly experienced the revolution connected with the introduction of lasers, powerful desktop computers and matrix detectors. However, several problems remain to be solved. At the moment, the evolution of active optoelectronic and MEMS-based devices, as well as of phase analysis and processing methods, is bringing us to a higher level of fulfilling the requirements formulated by the users of measurement systems. Several efficient solutions have been demonstrated. The presented concepts of micromeasurement systems demonstrate that sophisticated photonic and micromechanical devices, and their associated electronic control, can be made small, low power, and inexpensive, even permitting the device to be disposable. We are also close to converting our 2D image world into a 3D or even 4D one based on active data capture, processing and visualization. If this concept is fully implemented it will be a revolution in IT technologies; however, it is a real challenge for system designers and software developers. On the other hand, future analysis by the temporal heterodyning method, performed electronically by customized CMOS cameras, may completely transform our software-based fringe pattern analysis concepts. However, a hardware-based technological revolution requires a critical mass of product quantity; otherwise it is not financially viable to implement and is therefore destined for evolutionary rather than revolutionary change.
5 Acknowledgments

We gratefully acknowledge the financial support of the EU within the Network of Excellence for Micro-Optics (NEMO) and of the Ministry of Scientific Research and Information Technology within the statutory work realized at the Institute of Micromechanics and Photonics, Warsaw University of Technology.
6 References

1. Bruning, J.H., et al. (1974) Digital wavefront measuring interferometer for testing optical surfaces and lenses. Applied Optics 13: 2693-2703
2. Takeda, M., Ina, H., Kobayashi, S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. JOSA 72: 156-160
3. Takeda, M. (1997) The philosophy of fringes – analogies and dualities in fringe generation and analysis. In: Jüptner, W., Osten, W. (eds) Akademie Verlag Series in Optical Metrology 3: 17-26
4. Kujawinska, M., Kosinski, C. (1997) Adaptability: problem or solution? In: Jüptner, W., Osten, W. (eds) Akademie Verlag Series in Optical Metrology 3: 419-431
5. Kacperski, J., Kujawinska, M., Wang, X., Bos, P.J. (2004) Active microinterferometer with liquid crystal on silicon (LCOS) for extended range static and dynamic micromembrane measurement. Proc. SPIE 5532: 37-43
6. Robinson, D.W., Reid, G.T. (eds) (1993) Interferogram analysis: digital fringe pattern measurement techniques. IOP Publishing, Bristol
7. Jüptner, W., Kreis, Th., Mieth, U., Osten, W. (1994) Application of neural networks and knowledge-based systems for automatic identification of fault-indicating fringe patterns. Proc. SPIE 2342: 16-24
8. Servin, M., Marroquin, J.L., Cuevas, F. (1997) Demodulation of a single interferogram by use of a two-dimensional regularized phase-tracking technique. Opt. Eng. 36: 4540-4548
9. Malacara, D., Servin, M., Malacara, Z. (1998) Optical testing: analysis of interferograms. Marcel Dekker, New York
10. Lauxtermann, S. (2001) State of the art in CMOS photo sensing and applications in machine vision. In: Osten, W., Jüptner, W. (eds) Proc. Fringe 2001, Elsevier, Paris: 539-548
11. Kujawinska, M. (1993) Spatial phase measurement methods. In: Robinson, D.W., Reid, G.T. (eds) Interferogram Analysis. IOP Publishing, Bristol: 141-193
12. Millerd, J., et al. (2005) Modern approaches in phase measuring metrology. Proc. SPIE 5856: 14-22
13. Schnars, U. (1994) Direct phase determination in hologram interferometry with use of digitally recorded interferograms. JOSA A 11: 2011-2015
14. Michalkiewicz, A., et al. (2005) Phase manipulation and optoelectronic reconstruction of digital holograms by means of an LCOS spatial light modulator. Proc. SPIE 5776: 144-152
15. Kreis, Th. (1996) Holographic Interferometry. Akademie Verlag, Berlin
16. Baumbach, T., Osten, W., Kopylow, Ch., Jüptner, W. (2004) Application of comparative digital holography for distant shape control. Proc. SPIE 5457: 598-609
17. Ghiglia, D.C., Pritt, M.D. (1998) Two-dimensional phase unwrapping. John Wiley & Sons, New York
18. Osten, W., Kujawinska, M. (2000) Active phase measurement metrology. In: Rastogi, P., Inaudi, D. (eds) Trends in Optical Nondestructive Testing and Inspection. Elsevier Science BV: 45-69
19. Sitnik, R., Kujawinska, M., Zaluski, W. (2005) 3DMADAMC system: optical 3D shape acquisition and processing path for VR applications. Proc. SPIE 5857 (in press)
20. Foley, J.D., van Dam, A., Feiner, S.K., Hughes, J.F., Phillips, R.L. (1994) Introduction to Computer Graphics. Addison-Wesley
21. Saito, T., Takahashi, T. (1990) Comprehensible rendering of 3-D shapes. SIGGRAPH '90: 197-206
22. Sasaki, M., Briand, D., Noell, W., de Rooij, N.F., Hane, K. (2004) Three-dimensional SOI-MEMS constructed by buckled bridges and vertical comb drive actuator. IEEE J. Selected Topics in Quantum Electronics 10: 456-461
23. Kujawinska, M., Gorecki, C. (2002) New challenges and approaches to interferometric MEMS and MOEMS testing. Proc. SPIE 4900: 809-823
24. Gorecki, C., Heinis, D. (2005) A miniaturized SNOM sensor based on the optical feedback inside the VCSEL cavity. Proc. SPIE 5458: 183-187
25. Kujawinska, M. (2002) Modern optical measurement station for micro-materials and microelements studies. Sensors and Actuators A 99: 144-153
26. Michalkiewicz, A., Kujawinska, M., Krezel, J., Salbut, L., Wang, X., Bos, P.J. (2005) Phase manipulation and optoelectronic reconstruction of digital holograms by means of an LCOS spatial light modulator. Proc. SPIE 5776: 144-152
27. Olszak, A., Descour, M. (2005) Microscopy in multiplex. OE Magazine, SPIE, Bellingham, May: 16-18
Interpreting interferometric height measurements using the instrument transfer function Peter de Groot and Xavier Colonna de Lega Zygo Corporation Laurel Brook Rd, Middlefield, CT 06455, USA
1 Introduction

Of the various ways of characterizing a system, one of the most appealing is the instrument transfer function, or ITF. The ITF describes system response in terms of an input signal's frequency content. An everyday example is the graph of the response of an audio amplifier or media player to a range of sound frequencies. It is natural, therefore, to characterize surface profiling interferometers according to their ITF. This is driven in part by developments in precision optics manufacturing, which increasingly tolerances components as a function of spatial frequency [1]. Metrology tools must faithfully detect polishing errors over a specified frequency range, and so we need to know how such tools respond as a function of lateral feature size. Here we review the meaning, applicability, and calculation of the ITF for surface profiling interferometers. This review leads to useful rules of thumb as well as some cautions about what can happen when we apply the concept of a linear ITF to what is, fundamentally, a nonlinear system. Experimental techniques and example results complete the picture. Our approach is informal, as is appropriate for a conference paper. The foundation for a rigorous understanding of the ITF is well documented in the literature, including the well-known books by Goodman [2].
2 Linear systems

ITF is most commonly understood to apply to linear systems, which share certain basic properties that lend themselves naturally to frequency analysis. Principally, the response of a linear system is the sum of the responses that each of the component signals would produce individually. Thus if
two frequency components are present in an input signal, we can propagate them separately and add up the results. Another property of linear systems is that the response for a given spatial frequency f along a coordinate x is given by a corresponding ITF value characteristic of the system alone, independent of signal magnitude and phase. Thus to determine the output g′ given an input g, we write

G′(f) = ITF(f) G(f)    (1)

where

G(f) = FT{g(x)},   G′(f) = FT{g′(x)}    (2)

and the Fourier Transform is defined by

FT{·} = ∫_{-∞}^{+∞} {·} exp(−2πifx) dx .    (3)
This is a powerful way of predicting system response to diverse stimuli.
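Numerically, Eqs. (1)-(3) amount to a single multiplication in the frequency domain. The sketch below pushes a two-frequency signal through an assumed Gaussian low-pass ITF; the signal frequencies and the ITF width are illustrative choices, not values from the text:

```python
import numpy as np

# Sample a signal containing two spatial frequencies (cycles/mm).
x = np.linspace(0.0, 10.0, 1024, endpoint=False)   # mm
g = np.sin(2 * np.pi * 0.5 * x) + 0.3 * np.sin(2 * np.pi * 3.0 * x)

# Eq. (2): transform the input to the frequency domain.
G = np.fft.fft(g)
f = np.fft.fftfreq(x.size, d=x[1] - x[0])          # cycles/mm

# Eq. (1): multiply by an ITF -- here an assumed Gaussian low-pass
# with a 1 cycle/mm width, purely for illustration.
itf = np.exp(-(f / 1.0) ** 2)
g_out = np.fft.ifft(itf * G).real

# Linearity: each frequency component is scaled independently, so the
# 0.5 cycle/mm component passes nearly unattenuated while the
# 3 cycle/mm component is strongly suppressed.
amp = lambda sig, f0: 2 * abs(np.fft.fft(sig)[np.argmin(abs(f - f0))]) / x.size
print(amp(g_out, 0.5), amp(g_out, 3.0))
```

Because the two components propagate independently, their output amplitudes are simply the input amplitudes multiplied by the ITF at each frequency.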
3 OTF for optical imaging

A familiar ITF is the optical transfer function, or OTF, which describes how an optical system reproduces images at various spatial frequencies. The modulus of the OTF is the modulation transfer function (MTF). One approach to the OTF is to consider the effect of a limiting aperture in the pupil plane of an unaberrated imaging system. A plane wavefront generated by a point source illuminates a perfectly flat object (top left diagram in Fig. 1). The object reflectivity profile may be dissected in terms of sinusoidal amplitude gratings over a range of spatial frequencies. Allowing each constituent grating its own DC offset, each grating generates three diffraction orders: -1, 0, +1. The separation of the ±1 orders in the pupil plane is proportional to the grating frequency. According to the Abbé principle, if the pupil aperture captures all of the diffracted beams, then the system resolves the corresponding frequency. Assuming that the optical system is perfect and that it obeys the sine condition, the principal rays in Fig. 1 show that the optical system faithfully reproduces the amplitude reflectivity frequency content up to a limiting
frequency NA/λ. This coherent imaging MTF is therefore a simple rectangle, as shown in the top right of Fig. 1.
Fig. 1. Illustration of incoherent and coherent light imaging systems (left) and the corresponding MTF curves (right).
The reasoning is much the same for an extended, incoherent source (lower left of Fig. 1) [3], although the results are very different. The various source points in the pupil generate overlapping, mutually incoherent images that add together as intensities. As we move across the pupil, the obscurations of the ±1 diffraction orders vary. The calculation reduces to the autocorrelation of the pupil plane light distribution, which for a uniformly filled disk is
MTF(f) = (2/π) [φ − cos(φ) sin(φ)],   φ = cos⁻¹(λf / 2NA) .    (4)
This curve, shown in the lower right of Fig. 1, declines gradually, reaching zero at twice the coherent frequency limit. Incoherent imaging is often preferred in microscopes because of this higher frequency limit and softer transfer function, which suppresses ringing and other coherent artifacts. Note that coherent systems are linear in amplitude and incoherent systems are linear in intensity. This leads to an ambiguity in the ITF for partially coherent light, addressed pragmatically by the apparent transfer function, which uses the ratio of the output and input modulations for single, isolated frequencies while simply ignoring spurious harmonics [4].
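Eq. (4) is straightforward to evaluate. A minimal sketch, using an assumed 0.5 µm wavelength and 0.8 NA purely for illustration:

```python
import numpy as np

def incoherent_mtf(f, wavelength, na):
    """Eq. (4): diffraction-limited incoherent MTF of a circular pupil.
    f in cycles per unit length; the cutoff frequency is 2*NA/wavelength."""
    f = np.asarray(f, dtype=float)
    cutoff = 2.0 * na / wavelength
    phi = np.arccos(np.clip(f / cutoff, 0.0, 1.0))
    mtf = (2.0 / np.pi) * (phi - np.cos(phi) * np.sin(phi))
    return np.where(f <= cutoff, mtf, 0.0)

# Illustrative values: 0.5 um wavelength (in mm) and NA = 0.8,
# giving a cutoff of 3200 cycles/mm.
wl, na = 0.5e-3, 0.8
freqs = np.array([0.0, 1000.0, 2 * na / wl])   # cycles/mm
print(incoherent_mtf(freqs, wl, na))
```

The curve equals 1 at zero frequency and falls to 0 at the cutoff 2·NA/λ, twice the coherent limit, matching the lower right of Fig. 1.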
4 ITF for optical profilers

The ITF is so useful that it is tempting to use it even for systems that are explicitly nonlinear. Traditional tactile tools, for example, are nonlinear at high spatial frequencies because of the shape of the stylus, but their response is often plotted as a linear ITF [5]. If we are lucky, we find that over some limited range the system is satisfactorily approximated as linear. This is the case for optical profilers as well, with appropriate cautions.
Fig. 2. Comparison of the diffracted beams from amplitude (upper, PV = 100% reflectivity) and phase (lower, PV = λ/4 height) gratings illustrates the complex diffraction behavior of height objects, leading to nonlinear response when profiling surface heights.
Returning to the elementary concept of constituent gratings, consider coherent illumination of an object that has uniform reflectivity but a varying height. The surface impresses upon the incident wavefront a phase profile that propagates through the system to the image plane as a complex amplitude. Using any one of the known interferometric techniques, we can estimate the imaged phase profile and convert this back to height. Just as before, a Fourier Transform of the object wavefront yields sinusoidal phase gratings over a range of spatial frequencies. Each grating generates diffracted beams, although Fig. 2 shows that for phase gratings, the light spreads into higher angles than just the -1, 0, 1 orders present with amplitude gratings. Generally, the deeper the grating, the stronger and more numerous the higher diffraction orders, resulting in a very different situation from simple imaging. Spatial frequencies couple together, resulting in harmonics and beat signals in the imaged wavefront, inconsistent
with the simple formula of Eq.(1). The response of the system is now inseparable from the nature of the object itself. Unavoidably, interferometers are nonlinear devices, as are all optical tools that encode height information as wavefront phase. The solution to this dilemma is to restrict ourselves to small surface heights, where small means ≪ λ/4. For such small heights, diffraction from a phase grating is once again limited to the -1, 0, +1 orders and the higher orders become insignificant. The optical system responds to these small surface heights in much the same way as it images pure intensity objects, suggesting that we may be able to approximate the ITF by the OTF. This last idea gains credence by considering a simple example. Arrange an interferometer so that the reference phase is balanced at the point where the intensity is most sensitive to changes in surface height h. Then

I(h) = I₀ + I′ sin(kh)    (5)

where I₀ is the DC offset, I′ is the amplitude of the intensity signal and k = 2π/λ. Inversion of Eq.(5) as the approximation

h ≈ (I − I₀) / (I′ k)    (6)
shows a linear relationship between height and intensity. More sophisticated algorithms will reduce in this limit to the same kind of simple linear equation. For a coherent system such as a laser Fizeau, the variation I − I₀ in Eq.(6) is proportional to the amplitude, since it is the product of the reference and object waves that gives rise to the measured intensity. For small surface heights, the coherent interferometer ITF is the same as the coherent imaging OTF. Similarly, for an incoherent system, we add together the interference intensity patterns for multiple source points—a calculation that mimics that of the incoherent imaging OTF. To summarize the key conclusions of this section: (1) The measurement of surface heights optically, e.g. by interferometry, is a fundamentally nonlinear process. (2) A linear interferometer ITF is a reasonable approximation in the limit of very small surface deviations (≪ λ/4). (3) In the limit of small surface deviations, the interferometer ITF is the same as its imaging OTF.
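A quick numerical check of Eqs. (5) and (6) shows where the linear approximation holds; the wavelength, DC offset, and fringe amplitude below are illustrative assumptions:

```python
import numpy as np

wavelength = 0.633   # um, an illustrative HeNe value
k = 2 * np.pi / wavelength
I0, Ip = 1.0, 0.8    # assumed DC offset and fringe amplitude

def intensity(h):
    """Eq. (5): interferometer balanced at the most sensitive point."""
    return I0 + Ip * np.sin(k * h)

def height_linear(I):
    """Eq. (6): linear inversion, valid only for small heights."""
    return (I - I0) / (Ip * k)

# The linear estimate is excellent for h << lambda/4 (~0.158 um here)
# and degrades as sin(kh) departs from its small-angle regime.
for h in [0.001, 0.01, 0.05]:   # surface heights in um
    h_est = height_linear(intensity(h))
    print(f"h = {h:.3f} um  ->  linear estimate {h_est:.5f} um")
```

The growing discrepancy at larger h is exactly the nonlinearity that conclusions (1) and (2) warn about.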
5 Measuring interferometer ITF
Fig. 3. Comparison of the theoretical ITF magnitude (Eq.(4)) and experimental results for a white-light interference microscope using a 100X, 0.8 NA Mirau objective and incoherent illumination. The data derive from the profile of a 40-nm step object.
As a consequence of conclusion (3) above, it is sufficient to describe an interferometer's imaging properties to infer how it will respond to shallow height features. Of the many ways to measure the OTF, one of the most convenient is to image a sharp reflectivity step [6], generated e.g. by depositing a thin layer (≪ λ/4) of chrome over one half of a flat glass plate. The idea is to determine the frequency content of the image via Fourier analysis and compare it to that of the original object. The ratio of the frequency components directly provides the OTF. The experiment does not require interferometry—we may even wish to block the reference beam to suppress interference effects. Curiosity at least demands that we attempt the same experiment by directly profiling a step height [7]. The ITF in Fig. 3 for one of our white light interferometers illustrates how closely the magnitude of the resulting experimental ITF matches the prediction based on the incoherent imaging MTF calculated from Eq.(4). The resolution of low-magnification systems is often limited by the camera. Fig. 4 shows the ITF of our laser Fizeau interferometer configured for coherent imaging. The coherent optical ITF is assumed equal to one for the theory curve over the full spatial frequency range shown, while the finite pixel size modulates the ITF by a sinc function.
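The step-imaging procedure can be simulated end to end: blur an ideal edge with a known transfer function, then recover that function as the ratio of output to input spectra. The Gaussian "instrument" below is an assumed stand-in for a real OTF:

```python
import numpy as np

n = 1024
x = np.arange(n)
step = np.where(x < n // 2, 0.0, 1.0)          # ideal sharp edge

# Blur the step with a known "instrument" transfer function
# (an assumed Gaussian, standing in for the real system OTF).
f = np.fft.fftfreq(n)
true_itf = np.exp(-(f / 0.05) ** 2)
blurred = np.fft.ifft(true_itf * np.fft.fft(step)).real

# Recover the ITF as the ratio of output to input frequency content,
# skipping bins where the step spectrum is essentially zero.
S_in, S_out = np.fft.fft(step), np.fft.fft(blurred)
valid = np.abs(S_in) > 1e-9
recovered = np.abs(S_out[valid] / S_in[valid])

print(np.max(np.abs(recovered - true_itf[valid])))
```

In a real measurement the "input" spectrum is that of the known chrome-edge object, and noise forces some averaging, but the ratio-of-spectra principle is the same.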
Fig. 4. The predicted and experimental ITF curves for this 100-mm aperture coherent laser Fizeau interferometer are dominated by the lateral resolution of the 640×480 camera. Here the data stop at Nyquist because the sampling is too sparse above this frequency.
Fig. 5. Theoretical ITF curves for 2.5X, 5X, 20X and 100X microscope objectives illustrate the spatial frequency overlap achieved in typical microscope setups, and the influence of the camera at low magnification.
Fig. 5 shows the coverage of a range of microscope objectives in incoherent imaging, including the effects of the camera. At lower magnifications, the lobes correspond to frequencies for which the optical resolution surpasses that of the camera. This figure illustrates how a range of objectives on a turret provides complete coverage over a wide spatial frequency range.
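The crossover between camera-limited and optics-limited operation can be estimated from two numbers per objective: the incoherent optical cutoff 2·NA/λ at the object, and the camera Nyquist frequency referred back to the object through the magnification. All numerical values below (NAs, magnifications, pixel size, wavelength) are typical assumptions, not data read from the figure:

```python
# For each objective, the usable band ends at whichever is lower:
# the optical cutoff 2*NA/lambda or the camera Nyquist limit M/(2*p).
wavelength_mm = 0.5e-3        # 0.5 um illumination (assumed)
pixel_mm = 7.4e-3             # 7.4 um camera pixel (assumed)

objectives = {                # magnification: numerical aperture (assumed)
    "2.5X": (2.5, 0.075),
    "5X":   (5.0, 0.13),
    "20X":  (20.0, 0.40),
    "100X": (100.0, 0.80),
}

for name, (mag, na) in objectives.items():
    optical_cutoff = 2 * na / wavelength_mm      # cycles/mm at the object
    camera_nyquist = mag / (2 * pixel_mm)        # cycles/mm at the object
    limit = "camera" if camera_nyquist < optical_cutoff else "optics"
    print(f"{name}: optics {optical_cutoff:7.0f}, "
          f"camera {camera_nyquist:7.0f} cycles/mm -> {limit}-limited")
```

With these assumed values the low-magnification objectives come out camera-limited and the 100X optics-limited, consistent with the influence of the camera at low magnification noted above.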
6 Conclusions

Much of this paper has emphasized the precariousness of using a linear ITF for what is fundamentally a nonlinear process of encoding height into the phase of a complex wave amplitude. A more accurate model begins with an explicit calculation of this amplitude, then propagates the wavefront through the system to determine what the instrument will do. Nonetheless, a kind of quasi-linear ITF is an increasingly common way to thumbnail the capabilities and limitations of interferometers in terms of lateral feature size, and to evaluate the effects of aberrations, coherence, defocus and diffraction [8]. As we have seen, the basic requirement for a meaningful application of a linear ITF is that the surface deviations be small. This allows us to estimate the expected behaviour for coherent illumination, as in laser Fizeau systems, and incoherent illumination, which is the norm for interference microscopes. Happily, in this limit of small departures, the profiling behaviour follows closely that of imaging, so that with appropriate cautions we can get a good idea of expected performance using the imaging OTF as a guide to the expected ITF.
References and notes

1. Wolfe, R., Downie, J., Lawson, J. (1996) Measuring the spatial frequency transfer function of phase-measuring interferometers for laser optics. Proc. SPIE 2870: 553-557
2. Goodman, J. (1985) Statistical Optics. John Wiley & Sons
3. To be truly incoherent, the source pupil should be much larger than the imaging pupil. Fig. 1 is a simplification to illustrate the basic idea.
4. Reynolds, G., DeVelis, J., Parrent, G., Thompson, B. (1989) The New Physical Optics Notebook: Tutorials in Fourier Optics. AIP: 139
5. Lehmann, P. (2003) Optical versus tactile geometry measurement—alternatives or counterparts. Proc. SPIE 5144: 183-196
6. Barakat, R. (1965) Determination of the optical transfer function directly from the edge spread function. J. Opt. Soc. Am. 55: 1217
7. Takacs, P., Li, M., Furenlid, K., Church, E. (1993) A step-height standard for surface profiler calibration. Proc. SPIE 1993: 65-74
8. Novak, E., Ai, C., Wyant, J. (1997) Transfer function characterization of laser Fizeau interferometer for high spatial-frequency phase measurements. Proc. SPIE 3134: 114-121
Are Residues of Primary Importance in Phase Unwrapping? Karl A. Stetson Karl Stetson Associates, LLC 2060 South Street Coventry, CT 06238
1 Introduction

Phase step interferometry and related techniques have given rise to the problem of phase unwrapping, that is, how to add and subtract multiples of 2π to the values of a wrapped phase map in order to create the continuous phase distribution whose measurement is desired. Wrapped phase maps are obtained by calculating phase via the arctangent function, which can generate phase values over an interval of no more than 2π. Ghiglia and Pritt, in Reference 1, have discussed this problem and its many solutions at length, and the majority of techniques they present are based upon establishing what are called branch cuts that connect what are referred to as residues in the wrapped phase map. These branch cuts define paths across which unwrapping may not proceed, and a successful set of branch cuts will allow phase unwrapping to proceed around them so as to generate the most nearly continuous phase map possible. The purpose of this paper is to examine the concept of residues as applied to the phase maps generated in electronic holographic interferometry and to consider how they arise. Further, it suggests that residues are actually imperfect indicators of a more primary phenomenon in these phase maps. The goal of this discussion is to encourage the use of phase unwrapping methods that do not use residues and branch cuts in their operation [2,3].
2 Residues

Residues are detected in discrete two-dimensional phase maps by making counterclockwise circuits around every set of four neighboring data points
in the phase map and summing the number of positive and negative phase transitions greater than π. The resulting sum around the circuit is usually zero, but will occasionally be plus or minus one, in which case a residue is detected and assigned to the center of the circuit of four points. Ref. 1 strongly relates residues in discrete, two-dimensional phase maps to residues as defined in the theory of complex functions of two dimensions, and it is shown that residues in bounded complex functions are associated with points where the amplitude of the function goes to zero. It has further been shown experimentally [4] that such phenomena exist in fully developed laser speckle patterns and that the points where the amplitude of a speckle pattern is zero exhibit what are called optical vortexes, points about which the phase cycles by 2π. Reference 4 goes on to identify the translation of such points as the main source of residues in speckle interferograms.
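The 2×2 circuit test described above is a few lines of array code. A minimal sketch that plants a synthetic optical vortex (phase cycling by 2π around a point) and finds exactly one residue; the grid size is an arbitrary choice:

```python
import numpy as np

def wrap(d):
    """Wrap phase differences into the interval [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def residues(phase):
    """Circuit around every 2x2 cell of a wrapped phase map: the sum
    of the four wrapped differences is 0 or +/-2*pi; a nonzero sum
    marks a residue of charge +/-1 at the center of the cell."""
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # along the top edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # down the right edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # along the bottom edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # up the left edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A vortex centered between grid points generates exactly one residue.
y, x = np.mgrid[-8:8, -8:8] + 0.5
vortex = np.arctan2(y, x)             # phase cycles by 2*pi around origin
charges = residues(vortex)
print(np.count_nonzero(charges), "residue(s), net charge", charges.sum())
```

Away from the vortex every loop sums to zero, which is why residues are sparse markers rather than a map of all phase discontinuities.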
Fig. 1. A histogram of pixel values from a typical electronic hologram with the camera lens set to f/5.6.
In electronic holographic interferometry, aka electronic speckle pattern interferometry, we may question the relevance of optical vortexes and residues in two-dimensional complex functions. In Ref. 4, the speckle patterns examined were expanded so that their characteristic speckle size was much larger than the pixels of the camera recording the patterns. In a practical electronic holography system, it is common to have speckles that are smaller than the camera pixels. For example, with laser light at 633 nm
and the camera lens set between f/5.6 and f/11, the speckle size will range from 4.25 µm to 8.5 µm. For a camera in the 2/3-inch format, the pixel cell size will be on the order of 11.6 by 13.5 µm. Furthermore, the speckle patterns are usually unpolarized and therefore not fully developed. The effect of this can be seen in Fig. 1, which shows a histogram of the pixels in a typical electronic holography image. Note that there are nearly no pixels with a value of zero. Furthermore, in electronic holography the phase function measured is the phase difference between two images of the same object with little if any lateral shift between their speckle patterns.
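The quoted speckle sizes are consistent with the common estimate that the subjective speckle diameter is roughly 1.22 λ times the working f-number (magnification terms neglected); a quick check, with the constant and the neglected terms being assumptions rather than the authors' exact formula:

```python
# Approximate subjective speckle diameter: d ~ 1.22 * wavelength * f-number.
# This is a rough textbook estimate; it reproduces the paper's quoted
# 4.25-8.5 um range only approximately.
def speckle_diameter_um(f_number, wavelength_um=0.633):
    """Speckle diameter in micrometers for He-Ne light (633 nm) by default."""
    return 1.22 * wavelength_um * f_number

for fn in (5.6, 11):
    print(f"f/{fn}: {speckle_diameter_um(fn):.2f} um")
```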
3 Phase Inclusions As discussed in Section 2.5 of Ref. 1, the goal of unwrapping can be described as the elimination from the phase map of all transitions greater than π, and, if this is possible for the final phase map, there is really no difficulty. In such a case, as pointed out in that reference, the unwrapping process is path independent and can proceed along any path with the same result. Such a phase map is also free of residues. In reality, this circumstance is rare, and the number of transitions greater than π can only be minimized. The first assertion of this paper is the obvious one that any wrapped phase map containing residues will, by necessity, generate a final unwrapped phase map that contains some transitions greater than π. These remaining transitions greater than π are central to the thesis of this paper, and for convenience they require a name. Herráez et al. have referred to them as phase breaks, to distinguish them from phase wraps.5 In this paper, we call them phase inclusions and make an analogy to particles of foreign matter in an otherwise homogeneous material. We may also think of these remaining transitions greater than π as gaps in the continuous phase that must be included in the final phase map, and thus the word inclusion seems additionally appropriate. The next assertion is that whenever a residue is detected in a phase map, a phase inclusion must exist between one pair of neighboring points among the four in the circuit. This requires amplification, so consider the example in Table 1, taken from a wrapped phase map of an actual electronic holography interferogram. There is a residue of –1 within the circuit of B3, B4, C4, & C3 and a residue of +1 in the circuit of C3, C4, D4, & D3. These residues are generated by the transitions greater than π between cells B3 & C3 and between cells C3 & D3. Simple trial and error will show that it is impossible to add or subtract any multiples of 256 (one full 2π cycle on the 8-bit scale of the data) from these
numbers in any pattern that will remove all transitions greater than π from the circuits surrounding the cells with the residues. The value of 215 at cell C3 is clearly the problem, and what must be done is to subtract 256 from it to leave –41. This will reduce the number of transitions greater than π from three to one and place that transition between the two residues. It is this final transition greater than π that we call a phase inclusion. Table 1. A set of 25 points containing two residues, taken from a wrapped phase map generated by electronic holographic interferometry.
[Table 1 data garbled in extraction: a 5 × 5 array of 8-bit wrapped phase values (rows A-E, columns 1-5), with the value 215 at cell C3 and the residues –1 and +1 flanking that cell.]
Fig. 2. A residue map for a wrapped phase map from the electronic holographic interferogram shown in Fig. 3.
4 Relationship between Residues and Phase Inclusions It is significant that the residues in Table 1 occur as a pair of opposite polarity; in phase-step holographic interferometry, residues generally do occur in bipolar pairs. The reason is that residues straddle phase inclusions. Residues occur singly only when they are at the edge of the phase map and the corresponding phase inclusion is between two pixels along this edge. Otherwise, one phase inclusion will generate two bipolar residues, and the pair of residues indicates the presence of the inclusion. Figure 2 illustrates this for the wrapped phase map of a disk shown in Fig. 3. The positive residues are rendered as pixel values of 255, the negative residues as 0, and zero residues as 127. Values outside the object are rendered as zero.
Fig. 3. The wrapped phase map of a disk for which the residue map in Fig. 2 was calculated.
The pairing of residues is clear where they are relatively isolated, but becomes confused where they are clustered. The fact that the density of residues is greater in the center of the disk where the phase gradient of the deformation is greatest and is minimum at the edges where the gradient is
least is consistent with a model of noise combining with the phase gradient to create phase inclusions. This is further supported by the fact that there are more pairs of bipolar residues in the horizontal direction, where the phase gradient is vertical. To get an indication of the noise level, consider Fig. 4, which shows a histogram of part of an interferogram of an undeformed object for which the phase gradient is zero. While most of the pixel values are within a range of about 20 units out of 255, a phase range of about 28 deg., there are pixel values spanning a range of 111 units, or 156 deg., which approaches 180 deg., or π in phase. This random variation in calculated phase, combined with a phase gradient due to the object deformation, can easily produce phase inclusions in the data, and the higher the phase gradient, the more inclusions and residues. As noted above, we expect the phase inclusions to be aligned in the direction of the phase gradient.
Fig. 4. A histogram of an undeformed object showing pixel variation due to noise.
If phase inclusions could be identified a priori in a wrapped phase map, unwrapping would be trivial, but they are only evident a posteriori, after unwrapping. Branch cuts based on residues serve to prevent phase inclusions from being unwrapped, provided the residues indicate correctly the locations of all phase inclusions. Unfortunately, residues may not always indicate the correct locations of phase inclusions. This is particularly true when phase inclusions occur diagonally, as shown in Table 2. Both sets of three phase inclusions have the same residue pattern, but they are quite different, and there is a question as to how the branch cuts are to be drawn. There would be a temptation to put a branch cut between the negative and positive residues in the center of the array, especially if other residues were available to connect to the residues at the edges, and such an incorrect branch cut would lead to serious errors in unwrapping. Table 2. Residues for sets of diagonally occurring phase inclusions. The pixels are indicated as spots and the phase inclusions by arrows.
[Table 2 data garbled in extraction: two pixel arrays (columns A-F), each containing a diagonal set of three phase inclusions marked by arrows, together with their associated –1 and +1 residues.]
5 Conclusions In discrete phase maps, phase inclusions and residues are inseparably linked, with phase inclusions being of primary importance and residues indicating where phase inclusions lie. Phase inclusions result when noise in the phase measurement combines with the gradient of the phase being measured to create steps greater than π that must not be unwrapped. Phase inclusions, unlike residues, cannot be used to guide phase unwrapping; however, residues are inadequate for this purpose because they do not clearly define the locations of phase inclusions. In general, then, it is better to use methods that do not rely on residues for phase unwrapping. To date, it would appear that there are only two such methods, which are cited in Refs. 2 and 3. Of these, we recommend the method of Ref. 2, which calculates unwrap regions based upon the idea that the locations of phase wraps depend upon the phase reference used in the arctangent calculation whereas the locations of phase inclusions do not. These unwrap regions are then used to
guide the unwrapping process in a way that guarantees that phase inclusions will be ignored.
6 References
1. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping (John Wiley, New York, 1998), Chap. 2.
2. K. A. Stetson, J. Wahid, and P. Gauthier, "Noise-immune phase unwrapping by use of calculated wrap regions," Appl. Opt. 36, 4830-4838 (1997).
3. T. J. Flynn, "Two-dimensional phase unwrapping with minimum weighted discontinuity," J. Opt. Soc. Am. A 14, 2692-2701 (1997).
4. J. M. Huntley and J. R. Buckland, "Characterization of sources of 2π phase discontinuity in speckle interferograms," J. Opt. Soc. Am. A 28, 3268-3270 (1995).
5. M. A. Herráez, J. G. Boticario, M. J. Lalor, and D. R. Burton, "Agglomerative clustering-based approach for two-dimensional phase unwrapping," Appl. Opt. 44, 1129-1140 (2005).
Experimental Study of Coherence Vortices: Birth and Evolution of Phase Singularities in the Spatial Coherence Function Wei Wang a, Zhihui Duan a, Steen G. Hanson b, Yoko Miyamoto a, and Mitsuo Takeda a a The University of Electro-Communications, Dept. of Info. & Comm. Engg., Chofu, Tokyo, 182-8585, Japan b Risoe National Laboratory, Dept. for Optics and Plasma Research, OPL128, P.O. Box 49, 4000 Roskilde, Denmark
1 Introduction Optical vortices have been known for a long time, and extensive studies have been made of their basic properties since the seminal work of Nye and Berry in the early 1970s [1]. While previous studies have primarily centered on phase singularities in fully coherent, monochromatic optical fields, recent theoretical research on Young's interference with partially coherent light has revealed numerous new effects and predicted the existence of phase singularities in the phase of a complex coherence function [2,3]. This new type of phase singularity, referred to as a coherence vortex, has attracted increasing attention because of its unique properties. Here, we present the first direct experimental evidence of coherence vortices and experimentally investigate the mechanism for the birth and evolution of phase singularities in a spatial coherence function along the optical axis.
2 Principle A schematic diagram of the proposed system is illustrated in Fig. 1. A conventional Michelson interferometer composed of two plane mirrors is illuminated by an extended quasi-monochromatic, spatially incoherent light source (S) located at a distance Δf from the focal plane of lens L1. Light emitted from point A(x₀, y₀) of the source is collected by lens L1
Fig. 1. Optical system for synthesis of the coherence vortex. Abbreviations are defined in the text.
and is split into two beams by the beam splitter BS. One beam is reflected from mirror M1, which serves as the reference field at the origin, and the other is reflected from mirror M2, which serves as the three-dimensionally displaced field to be correlated with the reference field. Mirrors M1 and M2 are located at distances z = Z and z = Z + ΔZ, respectively, from the lens L1. The interference fringes generated on the CCD image sensor are the result of the superposition of the two mutually displaced optical field distributions, imaged by lens L2 onto the CCD image sensor. The point source ũ₀(x₀, y₀) at A creates a field distribution behind lens L1 [4]:

u(x, y, z) = [f ũ₀(x₀, y₀) / (jλB(z))] exp[j2π(f + z − Δf)/λ]
  × exp{jπ[Δf(x² + y²) + (z − f)(x₀² + y₀²)] / (λB(z))}
  × exp[−j2πf(x₀x + y₀y) / (λB(z))],    (1)

where λ is the wavelength of light, (x, y, z) are the coordinates behind lens L1 with origin at the center of lens L1, f is the focal length of lens L1, and B(z) ≡ f² + Δf z − f Δf. The field ũ(x, y, Z) at object mirror M1 and the field ũ(x, y, Z + 2ΔZ) at the corresponding location in the other arm of the interferometer are imaged and superimposed to form interference fringes on the CCD image sensor. Because each point on the extended source is
assumed to be completely incoherent with respect to every other point on the source, the overall intensity on the image sensor contributed by all the source points becomes a sum of the fringe intensities obtained from the individual point sources:

I(x, y, Z) = ∫∫ |ũ(x, y, Z) + ũ(x, y, Z + 2ΔZ)|² dx₀ dy₀,    (2)
where the integration is performed over the area of the extended source. After some straightforward algebra, this intensity distribution becomes:
I(x, y, Z) ∝ A{1 + |μ̃(x, y, 2ΔZ)| cos[φ(x, y, 2ΔZ) + 4πΔZ/λ + 2πΔf²ΔZ(x² + y²)/(λB(Z)B(Z + 2ΔZ))]},    (3)

where A = 2f² ∫∫ I₀(x₀, y₀) dx₀ dy₀ / [λ²B(Z)B(Z + 2ΔZ)], I₀ = |ũ₀|², and the complex degree of coherence μ̃ = |μ̃| exp(jφ) is now given by

μ(x, y, 2ΔZ) = ∫∫ I₀(x₀, y₀) exp[−j2πΔZf²(x₀² + y₀²)/(λB(Z)B(Z + ΔZ)) − j4πΔfΔZf(x₀x + y₀y)/(λB(Z)B(Z + ΔZ))] dx₀ dy₀ / ∫∫ I₀(x₀, y₀) dx₀ dy₀.    (4)
Eq. (4) has a form similar to that of the 3-D complex degree of coherence derived from the generalized van Cittert-Zernike theorem [5]:

μ(Δx, Δy, 2Δz) = ∫∫ I₀(x₀, y₀) exp[−j2πΔz(x₀² + y₀²)/(λf²) − j2π(x₀Δx + y₀Δy)/(λf)] dx₀ dy₀ / ∫∫ I₀(x₀, y₀) dx₀ dy₀.    (5)
Apart from the difference in the scaling factor, the proposed system can give a simultaneous full-field visualization of a three-dimensional coherence function in the form of the fringe contrast and the fringe phase, with the magnification controllable through the scaling factor. From the analogy to optical diffraction theory [6-8], our problem of producing a coherence vortex for some particular optical path difference 2ΔZ can be reduced to the problem of finding real and nonnegative aperture distributions that produce optical fields with a phase singularity on the optical axis. A circular aperture whose transmittance has the form of a spiral zone plate satisfies this requirement if we choose

I₀(x₀, y₀) = ½{1 + cos[2πγ(x₀² + y₀²) + arctan(y₀/x₀)]},  0 ≤ √(x₀² + y₀²) ≤ R,    (6)

where R is the radius of the circular source I₀(x₀, y₀), and γ is a variable that determines the location of the coherence vortices along the ΔZ-axis. Substituting the proposed source distribution, Eq. (6), into Eq. (4), we have the following degree of coherence:
|μ| = 1  at ΔZ = 0;

μ ∝ [J₁(4πΔfγR√(x² + y²)/f²) / (8πΔfγR√(x² + y²)/f²)] ⊛ (y + jx)/(x² + y²)^(3/2)  at ΔZ = +γλB(Z)B(Z + ΔZ)/f²;

μ ∝ [J₁(4πΔfγR√(x² + y²)/f²) / (8πΔfγR√(x² + y²)/f²)] ⊛ (x + jy)/(x² + y²)^(3/2)  at ΔZ = −γλB(Z)B(Z + ΔZ)/f²,    (7)
where ⊛ denotes the convolution operation. In evaluating Eq. (7) we have used the 2-D Riesz kernel [9]:

F{exp[j arctan(y/x)]} = (v + ju)/[2π(u² + v²)^(3/2)],    (8)

where F{·} stands for the Fourier transform. As seen in Eq. (7), a spatially incoherent source whose irradiance distribution has the form of Eq. (6) produces fields that exhibit a high degree of coherence for the three optical path differences ΔZ = 0 and ΔZ = ±γλB(Z)B(Z + ΔZ)/f², and at the two side peaks a coherence vortex can be clearly observed, with opposite topological charges. Relation (7) gives three correlation peaks, and we can control the distance between the central peak and the side peaks carrying the coherence vortices by changing the parameter γ of the zone plate with a spatial light modulator.
3 Experiments Experiments have been conducted to demonstrate the validity of the proposed technique. A schematic illustration of the experimental system is shown in Fig. 2. Linearly polarized light from a 15 mW He-Ne laser was expanded and collimated by collimator lens C to illuminate a liquid-crystal spatial light modulator (SLM), which modulates the light intensity transmitted by analyzer P placed immediately behind the SLM. A computer-generated spiral zone plate pattern was displayed on the SLM and was imaged onto a rotating ground glass GG by a combination of lenses L1 and L2 through pinhole PH, which functions as a spatial filter to smooth out the discrete pixel structure of the SLM. The image of the spiral zone plate on the rotating ground glass serves as a quasi-monochromatic incoherent light source. The light from this spatially
Fig. 2. Schematic illustration of the experimental system: C, collimator lens; P, polarizer; L1, L2, L3 and L4, lenses; PH, pinhole; GG, ground glass; BS, beam splitter.
incoherent source, placed at some distance from the focal plane of lens L3, was collected by L3 and introduced into a Michelson interferometer consisting of prism beam splitter BS, reference mirror MR, and object mirror MO, the surface of which is imaged by lens L4 onto the sensor plane of the CCD camera. The experiments were performed as follows. First, we designed a zone plate source that produces high coherence peaks for a mirror displacement ΔZ = 1 mm by choosing the appropriate value for the parameter γ. Then we observed the fringes virtually located on mirror MO with the CCD camera, with lens L4 focused on MO. By moving mirror MR, we changed the optical path difference between the two arms of the Michelson interferometer and measured the visibility of the fringes along the optical axis from the recorded interferograms. Fig. 3 shows the irradiance distribution of the source, which has the shape of a computer-generated spiral zone plate. We detected the coherence vortices by moving the reference mirror along the optical axis. The fringes recorded by the CCD camera for different optical path differences ΔZ are shown in Fig. 4(a)-(g). As predicted by the theoretical analysis, coherence vortices with opposite topological charges are readily observed at the positions ΔZ = ±1.0 mm in Fig. 4(b) and (f), respectively, which correspond to the plus and minus first-order coherence peaks. We also observe high coherence when ΔZ is equal to zero, at the position of the zeroth-order coherence peak. As theoretically predicted by Schouten et al. [2], the coherence vortices have a degree of coherence equal to zero, and hence no fringe contrast, while the intensities of the field do not vanish; this is quite different from traditional optical vortices. From the recorded interferogram in Fig. 4(c), we can directly calculate the
Fig. 3. Source irradiance distribution designed to have the shape of a spiral zone plate.
complex degree of coherence by the Fourier transform method (FTM) [10]. The result is shown in Fig. 5. As expected, a cone-like structure, whose apex with a degree of coherence equal to zero indicates the position of a phase singularity in the coherence function, is observed for the
Fig. 4. Interferograms recorded for different optical path differences ΔZ, panels (a)-(g) spanning ΔZ = −1.5 mm to +1.5 mm in 0.5 mm steps.
Fig. 5. The distributions of (a) amplitude and (b) phase of the complex degree of coherence around the coherence vortex.
amplitude of the complex degree of coherence. In addition, we can also observe that the corresponding phase of this coherence function has a helical structure. Fig. 5 thus provides the first direct experimental evidence of the existence of phase singularities in coherence functions.
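The FTM evaluation of the complex degree of coherence referenced above [10] can be sketched in one dimension as follows; the carrier frequency, sideband window, and the test contrast and phase are illustrative assumptions, not values from the experiment:

```python
import numpy as np

# One-dimensional sketch of the Fourier transform method (FTM): a fringe
# record I = 1 + mu*cos(2*pi*f0*x + phi) is Fourier transformed, one carrier
# sideband is isolated, and the inverse transform recovers the complex
# modulation mu*exp(j*phi).
N, f0 = 512, 32                     # samples, carrier cycles per record
x = np.arange(N) / N
mu, phi = 0.4, 0.7                  # contrast and phase to be recovered
I = 1.0 + mu * np.cos(2 * np.pi * f0 * x + phi)

S = np.fft.fft(I)
side = np.zeros_like(S)
side[f0 - 8:f0 + 9] = S[f0 - 8:f0 + 9]      # keep only the +f0 sideband
c = 2.0 * np.fft.ifft(side)                 # mu * exp(j*(2*pi*f0*x + phi))
rec = c * np.exp(-2j * np.pi * f0 * x)      # remove the carrier

print(np.abs(rec).mean())           # recovered contrast, ~0.4
print(np.angle(rec).mean())         # recovered phase, ~0.7
```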
4 Conclusions In summary, we have presented evidence of coherence vortices for the first time and experimentally investigated the properties of phase singularities in the coherence function. Unlike conventional optical vortices, coherence vortices have an intensity that does not vanish, while their fringe contrast becomes zero. Furthermore, the proposed method for synthesizing coherence vortices facilitates direct observation of the detailed local properties of a coherence vortex and opens new opportunities to explore other topological phenomena of the coherence function.
Acknowledgments Part of this work was supported by Grant-in-Aid of JSPS B(2) No.15360026, Grant-in-Aid of JSPS Fellow 15.52421, and by The 21st Century Center of Excellence (COE) Program on “Innovation of Coherent Optical Science” granted to The University of Electro-Communications.
References
1. Nye, J F and Berry, M V (1974) Dislocations in wave trains. Proc. R. Soc. Lond. A 336:165-190.
2. Schouten, H F, Gbur, G, Visser, T D and Wolf, E (2003) Phase singularities of the coherence functions in Young's interference pattern. Opt. Lett. 28(12):968-970.
3. Gbur, G and Visser, T D (2003) Coherence vortices in partially coherent beams. Opt. Comm. 222:117-125.
4. Goodman, J W (1968) Introduction to Fourier optics. McGraw-Hill, New York.
5. Rosen, J and Yariv, A (1996) General theorem of spatial coherence: application to three-dimensional imaging. J. Opt. Soc. Am. A 13:2091-2095.
6. Rosen, J and Takeda, M (2000) Longitudinal spatial coherence applied for surface profilometry. Appl. Opt. 39(23):4107-4111.
7. Wang, W, Kozaki, H, Rosen, J and Takeda, M (2002) Synthesis of longitudinal coherence functions by spatial modulation of an extended light source: a new interpretation and experimental verifications. Appl. Opt. 41(10):1962-1971.
8. Takeda, M, Wang, W, Duan, Z and Miyamoto, Y (2005) Coherence holography: holographic imaging with coherence function. Holography 2005, International Conference on Holography, Varna, Bulgaria.
9. Larkin, K G, Bone, D J and Oldfield, M A (2001) Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform. J. Opt. Soc. Am. A 18(8):1862-1870.
10. Takeda, M, Ina, H and Kobayashi, S (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72:156-160.
Properties of Isothetic Lines in Discontinuous Fields C.A. Sciammarellaa, F.M. Sciammarellab a Dipartimento di Ingegneria Meccanica e Gestionale, Politecnico di Bari, Viale Japigia, 182, 70126, Bari, ITALY,
[email protected] b Department of Mechanical, Materials and Aerospace Engineering, Illinois Institute of Technology, 10 West 32nd St., 60616, Chicago (IL), USA,
[email protected]
1 Introduction Optical methods that retrieve displacement or frequency information produce fringes generated by the beating of two close spatial frequencies of the real or virtual gratings (deformed and undeformed) that are the carriers of information (moiré fringes). The analysis of displacements and strains requires knowledge of the topology of the fringe patterns. Moiré fringes have a dual interpretation: they can be seen as lines of equal projected displacement, in which case the phase-modulated-signal interpretation is used, or they can be considered as frequency-modulated spatial signals. Utilizing the phase modulation concept, the moiré fringes are the loci of equal projected displacement, isothetic lines. The displacements on a plane are given by a vector,
a = u(x, y) i + v(x, y) j    (1)
These two components produce separate families of moiré fringes whose light intensity is given by
I_α(x, y) = I₀[1 + Q cos φ_α(x, y)],  α = x, y    (2)

where I₀ is the background intensity, Q is the visibility of the fringes, and φ_α(x, y) is the phase of the signal. The moiré fringes are characterized by the property

φ_α(x, y) = (2π/p) u_α = c    (3)
where p is the pitch, or fundamental frequency, of the real or virtual carrier generating the fringes; the moiré fringes are isophase loci. If we consider the x-axis as the projection axis, the fringes are integral solutions of the differential equation [1]

dy/dx = −(∂φ₁(x, y)/∂x) / (∂φ₁(x, y)/∂y)    (4)

In the above equation the subscript 1 indicates the phase of the moiré fringes corresponding to the x-direction, the u(x, y) family. A similar equation can be written for the φ₂(x, y) family. The phase functions φ_α(x, y) are not independent of each other, because they are subject to the restrictions imposed by the basic assumptions of continuum mechanics. The system of equations (4) and the corresponding system for the other family of fringes have solutions that leave the phase indeterminate; these points are called singular points. At singular points the two partial derivatives are equal to zero. The shape of the isothetic lines in the neighbourhood of a singular point is characterized by the Jacobian matrix,
J = ( ∂²φ₁/∂x²    ∂²φ₁/∂x∂y )
    ( ∂²φ₁/∂y∂x   ∂²φ₁/∂y²  )₀    (5)
where the subscript 0 indicates that the derivatives are taken at the point of coordinates (x₀, y₀), the singular point. It can be shown that the behaviour of the lines depends on the solutions of the characteristic equation,
λ² − λS + Δ = 0    (6)
where,
S = trace J = (∂²φ₁/∂x²)₀ + (∂²φ₁/∂y²)₀    (7)

Δ = det J = (∂²φ₁/∂x²)₀ (∂²φ₁/∂y²)₀ − (∂²φ₁/∂x∂y)₀²    (8)
There is a large variety of singular points. A discussion and graphical examples of some frequently found singular points, as well as some singular lines, can be found in [1]. The φ_α(x, y) cannot be arbitrary functions; one of the hypotheses of the continuum is that the functions φ_α(x, y) are analytic. Analyticity requires that each function have a single gradient vector at a point. The isothetic lines cannot intersect. The other important property is that the isothetic lines are either closed lines or they begin and end at boundaries. The properties described above are enough to understand the topography of the phase function and therefore to provide the necessary rules for phase unwrapping. There are of course solid mechanics problems where the analyticity of the displacement function does not apply. If the displacement function is not analytic, the phase functions can have a large variety of shapes. A similar problem arises when optical techniques are used to obtain the shapes of surfaces. Three-dimensional surfaces are also described by second-order tensors, and hence when one wants to interpret the fringes of complex surfaces one faces the same problem that we have indicated in the case of displacement information.
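The fringe model of Eqs. (2) and (3) can be simulated directly; the displacement field and all parameter values in this sketch are illustrative assumptions:

```python
import numpy as np

# Moire fringes from Eq. (2): I = I0*(1 + Q*cos(phi)), with the phase tied
# to the projected displacement by Eq. (3): phi = (2*pi/p)*u.
p, I0, Q = 0.1, 1.0, 0.8            # carrier pitch, background, visibility
y, x = np.mgrid[0:1:128j, 0:1:128j]
u = 0.05 * x**2 + 0.02 * y          # an illustrative projected displacement
phi = 2 * np.pi * u / p             # Eq. (3): fringes are isophase loci
I = I0 * (1 + Q * np.cos(phi))      # Eq. (2)
# Bright fringes lie where u is an integer multiple of the pitch p.
print(I.min(), I.max())
```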
2 Fringe pattern dislocations The analysis of displacement fields of a more general nature than those generated by an analytic function leads to the concept of dislocations. Dislocation theory is used extensively by materials scientists to analyse crystal kinematics and dynamics. A natural extension, for interpreting the lack of a single-valued solution for the displacement field, is to use the concept of dislocation to analyse fringe patterns. This fundamental step in fringe analysis was taken by Aben in his pioneering work on the interpretation of photoelastic fringes [2]. Following Aben's model, the introduction of the definition of dislocations in moiré fringes, with preliminary results, can be found in [3], [4], [5]. In this paper this subject is explored further. Let us define a dislocation in a moiré fringe pattern as a departure of the pattern from the continuity properties outlined in the introduction. Figure 1 shows a tensile pattern of a particulate composite specimen: the v-pattern corresponding to the axial direction of the specimen; the material is a live solid propellant. In Figure 2 the dislocation is defined by the Burger circuit indicated in the figure. When one draws the circuit in a pattern that contains a dislocation, the circuit will not close after the removal of the dislocation. The length of the
Burger vector is equal to the pitch p of the real or virtual grating generating the pattern times the number of fringes introduced at the dislocation, in the present example two fringes.
Fig. 1. Moiré pattern corresponding to v(x,y) in a propellant grain tensile specimen. Load applied in the y-direction.
Fig. 2. (a) Burger circuit around a fringe dislocation of the pattern in Fig. 1, (b) Burger circuit with dislocation present, (c) Burger vector magnitude 2p.
The Burger vector defined here is not the same Burger vector utilized in crystal analysis. The dislocations that appear in fringe patterns are manifestations of the presence of dislocations in the field under analysis. The dislocations are positive or negative sources of displacement. Positive sources are defined as sources that increase the displacement field, and negative sources are sources that reduce the displacement field. This convention follows the definition of tensile deformation as positive and compressive deformation as negative. The isothetic lines corresponding to a dislocation in the field will also have dislocations. This problem can be looked at in a slightly different way by considering the phase interpretation of displacements. Recall equation (3): the phase of the fringes is proportional to the displacements. For each fringe family we have a phase function which geometrically has the interpretation of the rotating vector that generates the phase-modulated function. Let us now consider that a displacement vector has two projections, and each projection has a rotating vector with a corresponding phase. The phases of the rotating vectors cannot be independent because they are the result of projections of the same vector.
3 Singularities of the displacement field If there are no discontinuities in the field the solution of a two dimensional continuum mechanics problem is a vector field of the form defined in
(1). The vector field is characterized by trajectories that obey the differential equation

dx/u = dy/v    (9)

The above equation defines the tangent to the trajectories, which should be curves that either end at the boundaries of the domain or are closed curves. The trajectories can never intersect inside the domain. It is a well known fact that a vector field defined by (1) can be represented by the sum of two vector fields,

a = a_Φ + a_Ψ    (10)

with

a_Φ = ∇Φ    (11)

and

a_Ψ = curl A_Ψ    (12)

where Φ is a scalar potential function and A_Ψ is a vector potential. If the divergence of the vector field is zero at every point (x, y), there are no sources or sinks of displacement inside the field, and the field has no discontinuities. If there are dislocations in the field, the properties indicated above are no longer valid, and the displacement trajectories can now intersect inside the field at points that we can call singular points of the field. The same development utilized to obtain equation (5) can be applied to equation (9) to obtain
J = ( ∂u/∂x   ∂u/∂y )
    ( ∂v/∂x   ∂v/∂y )₀    (13)
where the subscript 0 indicates that the corresponding values are taken at the singular point P₀ of coordinates (x₀, y₀). To simplify the analysis we will consider the case of small deformations and rotations. In this case the Jacobian becomes,
J = ( ε_x   −ϑ_y )
    ( ϑ_x    ε_y )₀    (14)
where ε_x, ε_y are the normal components of the strain tensor and ϑ_x, ϑ_y are the rotations of the elements of arc initially parallel to the x-axis and the y-axis, respectively. The behavior of the surface that characterizes the displacement trajectories at the singular point is defined by the characteristic equation,

λ² − λT + D = 0    (15)

where T = ε_x⁰ + ε_y⁰ = E₁, with E₁ the first invariant of the strain tensor, and D = ε_x⁰ ε_y⁰ + ϑ_x⁰ ϑ_y⁰ is the determinant of the Jacobian. The resulting eigenvectors will define the surface that provides the trajectories in the neighbourhood of the singular point,
Tr
T2 4D
(16)
2
According to the type of solutions it is possible to have a variety of singular points (Fig. 3). Besides the isolated points it is possible to have entire lines made out of singular points. The displacement trajectories are related to the isothetic lines and to the lines of principal strains (isostatics in the elastic case). The tangent to the trajectory is given by

$$ \mathbf{t} = u(x,y)\,\mathbf{i} + v(x,y)\,\mathbf{j} \qquad (17) $$

Fig. 3. Singular points of the trajectories: (a) hyperbolic point, (b) node point, (c) center
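The root classification of the characteristic equation (15) can be sketched in code. This is the standard trace–determinant test of phase-plane analysis; the function name and the numerical tolerance are illustrative, and the spiral case is included for completeness even though Fig. 3 shows only the hyperbolic, node and center cases:

```python
def classify_singular_point(eps_x, eps_y, theta_x, theta_y):
    """Classify a singular point from the small-deformation Jacobian (14).

    The characteristic equation (15) is lambda**2 - lambda*T + D = 0 with
    T = eps_x + eps_y (trace, the first strain invariant E1) and
    D = eps_x*eps_y - theta_x*theta_y (determinant of the Jacobian).
    """
    T = eps_x + eps_y
    D = eps_x * eps_y - theta_x * theta_y
    disc = T * T - 4.0 * D
    if D < 0:
        return "hyperbolic point"   # real eigenvalues of opposite sign
    if disc >= 0:
        return "node point"         # real eigenvalues of the same sign
    if abs(T) < 1e-12:
        return "center"             # purely imaginary eigenvalues
    return "spiral point"           # complex eigenvalues, nonzero real part

# A pure shear-like state yields eigenvalues of opposite sign:
print(classify_singular_point(0.001, -0.001, 0.0, 0.0))   # hyperbolic point
```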
The displacement trajectories can be obtained directly from the isothetic lines (moiré fringes) by applying (9) and computing the modulus of the vector t. The displacement trajectories are also directly related to the principal strain trajectories. At a given point of the field (Fig. 4), by computing the line integrals

$$ u(x,y) = \int_{L_1} \varepsilon_1(s)\, ds \qquad (18) $$

$$ v(x,y) = \int_{L_2} \varepsilon_2(s)\, ds \qquad (19) $$

it is possible to get the tangent and the value of the vector at a given point of coordinates (x, y). The above relationships are important because they connect different families of lines that correspond to the solution of a displacement field where the continuity conditions are no longer valid. The type of singular point will depend on the roots of the corresponding characteristic equation (15). The singular points of the isothetic lines correspond to the singular points of the displacement field. It is necessary to restrict our analysis to some fundamental types of discontinuous fields. From experimental observations it follows that many problems of solid mechanics can be solved by assuming a discrete number of discontinuity lines.
Fig. 4. Relationship between isothetic lines, isostatic lines, and displacement trajectories
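The line integrals (18) and (19) are straightforward to evaluate numerically once the principal strain is sampled along the integration path. A minimal sketch with a synthetic, uniform strain profile (all numbers are illustrative, not measured data):

```python
import numpy as np

# Eq. (18): u(x, y) from the principal strain eps_1(s) integrated along the
# isostatic path L1, sampled here as a 1D arc-length array (synthetic data).
s = np.linspace(0.0, 10.0, 1001)      # arc length along the path, mm
eps1 = 1e-3 * np.ones_like(s)         # uniform principal strain of 0.001

# trapezoidal rule for u = integral of eps_1(s) ds
u = float(np.sum(0.5 * (eps1[1:] + eps1[:-1]) * np.diff(s)))
print(round(u, 6))   # 0.01 (mm)
```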
It can be assumed that it is possible to patch a displacement field with regions where continuum mechanics applies, separated by singular lines. The singular lines will intersect, creating singular points. It is also possible to have isolated singular points in a continuum patch. Isolated singular points are points where the displacement trajectory has a discontinuity. For example, at a bifurcation singular point there will be two different displacement vectors. These displacement vectors will have four projections, four rotating vectors and the corresponding four phases. At the point of discontinuity of a trajectory two different tangent vectors exist. At this type of point the isothetic lines will be discontinuous; that is, the phases $\phi_1^{1}(x_0, y_0)$, $\phi_2^{1}(x_0, y_0)$ before the singularity and the phases $\phi_1^{2}(x_0, y_0)$, $\phi_2^{2}(x_0, y_0)$ after the discontinuity will be present.
Going back to equation (10), one can see that if the divergence of the field is zero there will be no discontinuities. Since the divergence of the curl is zero, the divergence of the gradient of the potential Φ of equation (11) must be different from zero at a dislocation; that is, the Laplacian

$$ \nabla^2 \Phi \neq 0 \qquad (20) $$

If $\nabla^2 \Phi > 0$ one will have an increment of the phase at the singular point, via an increase of the displacement or in the phase space; the opposite will occur if $\nabla^2 \Phi < 0$.
4 Applications to actual observed fields

The derivations presented in this paper can be used to model actual displacement fields. Figure 5 (a) and (b) correspond to the v(x,y) and u(x,y) displacement patterns of a tensile specimen of a metallic particulate composite. The matrix of the composite is an aluminium alloy and the reinforcing particles are SiC particles. The region observed is 100 × 80 µm in size, the spatial resolution is 200 nm, and the displacement value of each fringe is 55 nm. Figure 6 (a) and (b) correspond to plots of the phase of the fringes represented as levels of grey. It is possible to see a number of dislocations in the moiré patterns, and in the projected displacement field one can see a number of lines along which the phase field is discontinuous. The discontinuity lines end in the dislocation points. In place of the bifurcations observed in the previously shown moiré patterns, a large number of fringes emerging from the dislocation can be seen. This is an example of what the authors have called a center point. Figure 7 shows the detail of a dislocation region. The dislocation exists in one of the families; it does not exist in the other. In the corresponding u-field no dislocations are present. In the case of Figure 6 (a) and (b) there are black lines along which the phase is indeterminate; these lines correspond to cracks between the particles and the matrix. These lines end at points that are crack tips; these points will be singular points in one of the two patterns, or in both of them, depending on the particular case. If the singular lines of the phase field are open, the domain where the displacements are defined is simply connected: all the regions of the domain can be connected by a continuous line, and the fringe order can be determined following the rules of fringe order determination. It is possible to unwrap the fringe patterns, as shown in Figure 8, which corresponds to the patterns of Figure 5.

If the domain is not simply connected, the domain has to be tiled with continuous patches where the relative displacements are defined.
This occurs, for example, when one observes a polycrystalline material: the inter-granular regions create discontinuities in the fringe field, and each crystal has its own fringe pattern. Figure 9 shows the axial moiré pattern of a propellant grain tensile specimen. It is possible to see a large variety of fringe dislocations. Figure 10 shows the strains corresponding to the displacement field. It is possible to see strain peaks in correspondence with the dislocations; many of the strain peaks come as pairs of positive and negative peaks. When observing a tensile specimen the average strain must be positive, so a high negative strain peak must be compensated by a positive one. Figure 11 shows the detail of the enlarged field of a tensile specimen of a propellant grain. The particle has an elasticity modulus that makes its strain and displacement fields practically zero when compared to those of the rubber matrix. Consequently a particle acts as a region of zero strain and zero relative displacement, a singular region of the displacement field.
Fig. 5. Moiré patterns of the u and v displacement fields on a region of a tensile specimen made out of aluminum matrix reinforced with silicon carbide particles. Equivalent grating pitch 55 nm.
Fig. 6. Gray level representation of the phases of the patterns of Fig. 5
Fig. 7. Detail of the moiré patterns of Fig. 5
Fig. 8. Displacements corresponding to the patterns of Fig. 5
Fig. 9. Moiré pattern of the v(x, y) field
Fig. 10. Strains corresponding to the pattern of Fig.9
Fig. 11. Detail of Fig. 9 with higher level of spatial resolution
5 Discussion and conclusions

The presence of dislocations in fringe patterns is indicative of the presence of positive or negative sources of displacement. The analysis of the singularities of the differential equations that define the isothetic and trajectory lines provides insight into the analysis of the complex fringe patterns observed in microscopic fields. In order to unwrap isothetic lines it is necessary to take into consideration all the conclusions obtained and to use the relationships between isothetic lines, lines of principal strains, and trajectory lines. Observing discontinuous fringe patterns, one sees that the variety of singular points that can be found is very large.
Bibliography

[1] C.A. Sciammarella, "Theoretical and Experimental Study on Moiré Fringes", PhD Dissertation, Illinois Institute of Technology, 1960.
[2] H. Aben, L. Ainola, "Interference blots and fringe dislocations in optics of twisted birefringent media", Journal of the Optical Society of America A, Vol. 15, pp. 2404-2411, 1998.
[3] C.A. Sciammarella, F.M. Sciammarella, "On the theory of moiré fringes in micromechanics", Proceedings of the SEM Conference, 2002.
[4] C.A. Sciammarella, F.M. Sciammarella, "Isothetic lines in microscopic fields", Proceedings of the SEM Conference, Charlotte (NC), June 2003.
[5] C.A. Sciammarella, B. Trentadue, F.M. Sciammarella, "Observation of displacement fields in particulate composites", Materials Technology, Vol. 18, pp. 229-233, 2003.
[6] C.A. Sciammarella, D. Sturgeon, "Digital Techniques Applied to the Interpolation of Moiré Fringe Data", Experimental Mechanics, Vol. 11, pp. 468-475, 1967.
Invited Paper
Heterodyne, quasi-heterodyne and after

René Dändliker
Institute of Microtechnology, University of Neuchâtel, Rue A.-L. Breguet 2, CH-2000 Neuchâtel, Switzerland
1 Introduction

In this paper, I will tell you the story of heterodyne and quasi-heterodyne interferometric fringe analysis based on my own experience over the last 30 years, which is strongly related to holographic interferometry. I have presented a similar talk, on the "Story of speckle interferometry", at the International Conference on Interferometry in Speckle Light: Theory and Applications in 2000 at Lausanne, Switzerland [1]. Soon after the first presentation of holographic reconstructions of three-dimensional opaque objects in 1964 [2], the field of holographic interferometry developed rapidly, independently and in parallel with speckle metrology. A self-contained treatment of the theory, practice, and application of holographic interferometry, with emphasis on the quantitative evaluation of holographic interferograms, was published by C. M. Vest in 1979 [3]. The introduction of electronic fringe interpolation techniques had an important impact on interferometry, because it offers high accuracy and automated data acquisition. These methods are based on the principle of shifting the relative phase between the interfering wavefields, either linearly in time by introducing a frequency offset (heterodyne) or stepwise (quasi-heterodyne or phase-shifting). These techniques can readily be applied to real-time holographic interferometry. However, the application of electronic interference phase measurement in double-exposure holographic interferometry requires a setup with two reference beams [4]. The progress in the 1980s was mainly driven by the ever increasing power of digital image recording and processing, thanks to the development of CCD cameras, digital frame grabbers, and personal computers. As can be seen from more recent reviews [5], [6], it also became more and more obvious that holographic interferometry and speckle interferometry are rather two different experimental approaches to the same basic concept of interferometry with
opaque and diffusely scattering three-dimensional objects, where speckles play an inherent key role. The latest developments in digital electronic holography bring the two techniques even closer together [7].
2 Heterodyne holographic interferometry

Heterodyne interferometry was first described and experimentally realized for a conventional two-beam interferometer by Crane [8]. The heterodyne technique can be applied, with some restrictions, to nearly all known kinds of holographic interferometry. For this purpose it is only necessary that the two wavefields to be compared interferometrically can be reconstructed with slightly different optical frequencies. This can be accomplished in real-time holographic interferometry by a frequency offset between the reference beam, reconstructing the first wavefield from the hologram, and the object illumination, generating the second wavefield. In the case of double-exposure holography, however, the two holographically recorded wavefields have to be stored independently, so that they can be reconstructed with different optical frequencies. This is most conveniently achieved by using two different reference waves. Heterodyne holographic interferometry for displacement measurement of diffusely scattering objects was first reported in 1973 by Dändliker, Ineichen and Mottier [9]. In 1980 an overview of the state of the art of heterodyne holographic interferometry was published in Progress in Optics (ed. Emil Wolf) [10]. The early experiments were performed with a rotating radial grating as frequency shifter and with a readily available phase-sensitive detector, using the calibrated phase shifter of its reference unit to measure the interference phase. As shown in Fig. 1, the interference phase φ(x) was obtained by scanning the image of the object with the photodetector D1 and measuring the phase of the beat frequency with respect to the phase obtained from the second detector Dr at a fixed reference point. The signals obtained from the moving interference pattern are shown in Fig. 1 for three positions, corresponding to φ = 0°, 90°, 180°, respectively.

The phase difference of the two detector signals was measured by adjusting the calibrated phase shifter in the reference signal path to obtain the minimum reading, corresponding to a 90° phase difference, at the phase-sensitive detector. The phase reading was reproducible within δφ = 0.3° at any position. This corresponds to less than 10⁻³ of the fringe separation. The phase measurements were taken at intervals of Δx = 3 mm. The experimental results for the bending of the cantilever, clamped at x = 0 and loaded at x = 123 mm, are shown in Fig. 2.
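The beat-frequency phase comparison between the two detectors can be mimicked numerically by lock-in style I/Q demodulation of the detector signals. A sketch with illustrative parameters (the beat frequency, record length, and 40° test phase are made up, not taken from the experiment):

```python
import numpy as np

f_beat = 100e3                        # beat frequency, Hz (illustrative)
t = np.arange(0, 2e-3, 1e-7)          # 2 ms record sampled at 10 MHz
phase_true = np.deg2rad(40.0)         # interference phase to be recovered

ref = np.cos(2 * np.pi * f_beat * t)                 # reference detector Dr
sig = np.cos(2 * np.pi * f_beat * t - phase_true)    # scanning detector D1

def demod_phase(x, t, f):
    """Phase of x relative to cos(2*pi*f*t), lock-in style I/Q demodulation."""
    i = np.mean(x * np.cos(2 * np.pi * f * t))
    q = np.mean(x * np.sin(2 * np.pi * f * t))
    return np.arctan2(q, i)

dphi = demod_phase(sig, t, f_beat) - demod_phase(ref, t, f_beat)
print(round(float(np.rad2deg(dphi)), 3))   # 40.0
```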
Fig. 1. Schematic representation of the interference fringes on a bent cantilever and the corresponding beat frequency signals obtained from two photodetectors Dr and D1 [10]
Fig. 2. Experimental results for the bending of the cantilever clamped at x = 0 and loaded at x = 123 mm, as shown in Fig. 1. Comparison of the second derivatives d²u_z/dx² of the normal displacement u_z(x) with theory indicates an accuracy for the interference phase measurement of δφ = 0.3°, corresponding to 10⁻³ of a fringe [10]
Later, the experimental arrangement was improved by using commercially available acousto-optic modulators for the phase shift and digital phase meters. For 2D fringe pattern analysis, the interference phase was determined by scanning an array of three detectors, which measured the phase differences in two orthogonal directions in the image plane. The total phase φ(x,y) was then obtained by appropriate integration. The phase differences between the detected signals were measured with zero-crossing phasemeters, which interpolate the phase angles to 0.1°. Scanning and data acquisition were automated and computer controlled. The performance and the limits of 3D strain measurement on a curved object surface by heterodyne holographic interferometry were investigated in the mid 1980s by Thalmann [11].
3 Quasi-heterodyne holographic interferometry

Heterodyne holographic interferometry offers high spatial resolution and interpolation up to 1/1000 of a fringe. However, it requires sophisticated electronic equipment and mechanical scanning of the image by photodetectors. For moderate accuracy and spatial resolution, quasi-heterodyne techniques [12] have been developed, which allow electronic scanning of the image by photodiode arrays (CCD) or TV cameras and use microprocessor-controlled digital phase evaluation [13]. The relative phase is changed step-wise, using at least three different values. The interference phase can then be computed from the measured intensity values. Quasi-heterodyne techniques are more adequate for digital processing and TV detection. Two-reference-beam holography with reference sources close together and video-electronic processing allows one to measure the interference phase with an accuracy of 1/100 of a fringe at any point in the TV image [14]. Quasi-heterodyne holographic interferometry with TV detection is nearly as simple as standard double-exposure holography, and it does not require any special instrumentation apart from a video-electronic data acquisition system (Fig. 3). The required two-reference-beam holography can be operated as easily as classical double-exposure holography by using two reference beams close together. The method combines the simplicity of standard double-exposure holography, video-electronic processing, and the power of heterodyne holographic interferometry. Phase-shifting fringe processing is very well suited for industrial applications, where high speed and medium accuracy are required. Corresponding equipment for recording and reconstruction, as well as software for fringe analysis and data processing, are commercially available.
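The step-wise phase computation can be illustrated with the common four-step algorithm, one of several possible N-step schemes (the text only requires at least three phase steps; the fringe parameters below are synthetic):

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Interference phase from four intensities recorded at relative phase
    shifts of 0, pi/2, pi and 3*pi/2."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic fringe signal I_k = a + b*cos(phi + k*pi/2) with known phase:
phi = 0.7
I = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(round(float(four_step_phase(*I)), 6))   # 0.7
```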
Fig. 3. Two-reference-beam holographic interferometry with video-electronic fringe evaluation system [4]
The optical arrangement shown in Fig. 3 is best suited for recording and reconstructing double-exposure holograms with the same cw laser in the same setup. Shutters are helpful to switch the reference beams between the two exposures. The phase shift for the fringe evaluation is obtained by the computer-controlled piezo element. More compact optical modules for the generation of the two reference beams, such as slightly misaligned Michelson interferometers or Wollaston prisms, are commonly used [4]. A comparison of heterodyne and quasi-heterodyne holographic interferometry can be found in [15]. The 3D displacement vector can be determined even in an industrial environment from two-reference-beam holograms recorded with a double-pulse Q-switched laser [16]. For this purpose, three holograms with three different illumination directions are recorded independently by three temporally separated pulses, which are obtained from one laser pulse using optical delay lines. The lengths of the delay lines are typically 4 and 8 m for Q-switched pulses of about 10 ns duration. Figure 4 exhibits the recombination of the three delayed pulses to produce three reference beams, which are coded with three aperture masks. The following Pockels cell switches the two reference beams between the two exposures of the hologram. The aperture masks are imaged through the two-reference-beam module onto the hologram plate. Using the corresponding masks during reconstruction, the three spatially multiplexed double-exposure two-reference-beam holograms can be analyzed independently by standard phase-shifting methods. The described system has been successfully applied to 3D vibration analysis [16]. It was first tested for the case of a rotating disk. The disk had a diameter of 150 mm and was rotating at an angular speed of about 0.1 rad/s. The two pulses of the Q-switched ruby laser were fired at an interval of 600 µs.
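Recovering the displacement vector from the three evaluated phases amounts to solving a small linear system built from the sensitivity vectors of the three illumination directions. A sketch under an assumed geometry (the sensitivity vectors and the displacement below are invented for illustration):

```python
import numpy as np

# Each evaluated hologram yields phi_k = (2*pi/lam) * dot(s_k, L), where s_k is
# the sensitivity vector of illumination direction k and L the displacement.
lam = 694e-9                                   # ruby laser wavelength, m
S = np.array([[ 1.8,  0.1, 0.9],
              [ 0.1,  1.8, 0.9],
              [-1.2, -1.2, 1.1]])              # invented sensitivity vectors
L_true = np.array([120e-9, -60e-9, 300e-9])    # displacement to recover, m

phi = (2 * np.pi / lam) * S @ L_true           # the three measured phases
L = np.linalg.solve(S, phi * lam / (2 * np.pi))
print(np.allclose(L, L_true))                  # True
```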
Fig. 4. Arrangement to recombine three delayed pulses to produce three reference beams coded with different aperture masks. The Pockels cell switches the two reference beams between the two exposures of the hologram [16]
Fig. 5. Measuring the 3D displacement of a rotating disk by double-pulsed holography. (a) Computer-evaluated holograms, corresponding to three different illumination vectors. (b) Calculated Cartesian components Lx, Ly, Lz of the displacement vector L [16]
Figure 5 shows the results of three computer-evaluated holograms, corresponding to the three different illumination vectors. The measured phases are represented by a linear gray scale between 0 and 2π. Taking into account the geometry of the experimental setup, the three Cartesian components of the displacement vector L have been calculated. The results are given in Fig. 5b, still represented by their phase values, similar to Fig. 5a. The theoretical prediction is zero fringes for the z-component (out-of-plane) and parallel, equidistant fringes for both the x- and y-components (in-plane). A statistical analysis of the results shows an rms phase error of the order of 10° with respect to the theoretical values for the in-plane components. The residual fringe in the z-direction can be explained by out-of-plane vibrations induced by the driving motor.
4 Digital electronic holography, the modern times

With the increasing power of digital image recording and processing, thanks to the development of CCD cameras, digital frame grabbers, and personal computers, these techniques became an important and widespread tool in interferometry for direct phase determination by phase-shifting methods. The impact of this development on holographic interferometry can be seen in the book Holographic Interferometry edited by P. K. Rastogi in 1994 [17]. Based on the technology available in 1985, Stetson and Brohinsky proposed a practical implementation of direct digital electronic recording of holograms and its application to holographic interferometry [18], which they realized one year later. As described by Pryputniewicz in the chapter on Quantitative Determination of Displacements and Strains from Holograms [19] in Rastogi's book, in electro-optic holography, also known as electronic holography or TV holography, the interfering beams are combined by a speckle interferometer, which produces speckles large enough to be resolved by the TV camera. The output of the TV camera is fed to a system that computes and stores the magnitude and phase, relative to the reference beam, of each picture element in the image of the illuminated object. Yet another approach to direct phase determination in holographic interferometry, with use of digitally recorded holograms, was published in 1994 by U. Schnars [20]. The off-axis Fresnel holograms, which represent the undeformed and the deformed states of the object, are generated on a CCD camera and stored electronically. In contrast to speckle interferometry, no lens or other imaging device is used. The reconstruction is done from the digitally stored holograms by numerical methods. This makes it possible to
calculate the interference phase, which is the phase difference between the two reconstructed object waves, directly from the holograms, without generation of an interference pattern. The recording geometry and the angular size of the object are, however, severely limited by the spatial resolution of the CCD camera, which limits the spatial carrier frequency of the off-axis holograms, determined by the wavelength and the angle between the object and reference waves. Further improvement in the digital reconstruction and filtering of the overlapping twin images even makes it possible to work with in-line holograms, as reported by Pedrini et al. in 1998 [21]. A systematic approach to TV holography, including holographic and speckle interferometry, is presented in a recently published review by Doval [7].
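The numerical reconstruction from a digitally stored hologram can be sketched with the single-FFT discrete Fresnel transform. This is a generic textbook form rather than Schnars' exact implementation; amplitude scale factors are omitted and all parameters (wavelength, distance, pixel pitch) are illustrative:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, pixel):
    """Single-FFT discrete Fresnel transform: multiply the hologram by a
    quadratic phase factor and FFT to propagate to distance d."""
    n_rows, n_cols = hologram.shape
    y = (np.arange(n_rows) - n_rows / 2) * pixel
    x = (np.arange(n_cols) - n_cols / 2) * pixel
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * np.pi / (wavelength * d) * (X**2 + Y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

# Illustrative call on a synthetic 512 x 512 "hologram"
holo = np.random.default_rng(0).random((512, 512))
field = fresnel_reconstruct(holo, 633e-9, 0.5, 6.7e-6)
print(field.shape)   # (512, 512)
```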
References
1. Dändliker, R (2000) The story of speckle interferometry, Interferometry in Speckle Light: Theory and Applications, eds. P. Jacquot and J.-M. Fournier, Springer, Heidelberg, 3-10
2. Leith, E N, Upatnieks, J (1964) Wavefront reconstruction with diffused illumination and three-dimensional objects, J. Opt. Soc. Am. 54:1295-1301
3. Vest, C M (1979) Holographic Interferometry, John Wiley & Sons, New York
4. Dändliker, R (1994) Two-reference-beam holographic interferometry, Holographic Interferometry, Springer Series in Optical Sciences, Vol. 68, ed. P. K. Rastogi, Springer, Berlin, 75-108
5. Jones, R, Wykes, C (1989) Holographic and Speckle Interferometry, Cambridge University Press, Cambridge
6. Dändliker, R, Jacquot, P (1992) Holographic interferometry and speckle methods, Optical Sensors, Sensors, Vol. 6, eds. E. Wagner, R. Dändliker, K. Spenner, VCH Verlagsgesellschaft, Weinheim, Germany, 589-628
7. Doval, Á F (2000) A systematic approach to TV holography, Meas. Sci. Technol. 11:R1-R36
8. Crane, R (1969) New developments in interferometry. V. Interference phase measurement, Appl. Opt. 8:538-542
9. Dändliker, R, Ineichen, B, Mottier, F M (1973) High resolution hologram interferometry by electronic phase measurement, Opt. Commun. 9:412-416
10. Dändliker, R (1980) Heterodyne holographic interferometry, Progress in Optics, Vol. XVII, ed. E. Wolf, North-Holland, Amsterdam, 1-84
11. Thalmann, R, Dändliker, R (1987) Strain measurement by heterodyne holographic interferometry, Appl. Opt. 26:1964-1971
12. Bruning, J H, Herriott, D R, Gallagher, J E, Rosenfeld, D P, White, A D, Brangaccio, D J (1974) Digital wavefront measuring interferometer for testing optical surfaces and lenses, Appl. Opt. 13:2693-2703
13. Hariharan, P, Oreb, B F, Brown, N (1983) Real-time holographic interferometry: a microcomputer system for the measurement of vector displacements, Appl. Opt. 22:876-880
14. Dändliker, R, Thalmann, R, Willemin, J-F (1982) Fringe interpolation by two-reference-beam holographic interferometry: reducing sensitivity to hologram misalignment, Opt. Commun. 42:301-306
15. Dändliker, R, Thalmann, R (1985) Heterodyne and quasi-heterodyne holographic interferometry, Opt. Eng. 24:824-831
16. Linet, V (1991) Développement d'une méthode d'interférométrie holographique appliquée à l'analyse quantitative 3D du comportement dynamique de structures, thèse de doctorat, Institut d'Optique, Orsay, France
17. Rastogi, P K (1994) Holographic Interferometry, Springer, Berlin
18. Stetson, K A, Brohinsky, W R (1985) Electrooptic holography and its application to hologram interferometry, Appl. Opt. 24:3631-3637
19. Pryputniewicz, R J (1994) Quantitative determination of displacements and strains from holograms, Holographic Interferometry, ed. P. K. Rastogi, Springer, Berlin, 33-74
20. Schnars, U (1994) Direct phase determination in hologram interferometry with use of digitally recorded holograms, J. Opt. Soc. Am. A 11:2011-2015
21. Pedrini, G, Fröning, Ph, Fessler, H, Tiziani, H J (1998) In-line digital holographic interferometry, Appl. Opt. 37:6262-6269
Robust three-dimensional phase unwrapping algorithm for phase contrast magnetic resonance velocity imaging Jonathan M. Huntley1, María F. Salfity1, Pablo D. Ruiz1, Martin J. Graves2, Rhodri Cusack3, and Daniel A. Beauregard3 1. Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Loughborough LE11 3TU, UK 2. Department of Radiology, Addenbrooke’s Hospital, Hills Road, Cambridge CB2 2QQ, UK 3. MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK
1 Introduction

A broad range of full-field measurement techniques, such as interferometry, synthetic aperture radar, phase contrast optical coherence tomography and magnetic resonance imaging (MRI), yield two-dimensional (2D) or three-dimensional (3D) phase distributions that are wrapped onto the range (−π, π]. In order to recover the true phase it is necessary to restore to the measured wrapped phase the unknown multiple of 2π. This process of phase unwrapping is not trivial due to the presence of phase singularities (points in 2D, lines in 3D) generated by local or global undersampling. The correct 2D branch cut lines and 3D branch cut surfaces should be placed where the gradient of the original phase distribution exceeded π rad voxel⁻¹. This information, however, is lost due to undersampling and cannot be recovered from the sampled wrapped phase distribution alone. As a consequence, empirical rules such as finding the surface of minimal area or using the wrapped phase gradient will fail to find the correct branch cut surfaces. We conclude that additional information must be included in the branch cut placement algorithm. An example with real data is provided in which downsampled phase contrast magnetic resonance imaging (PC-MRI) data is successfully unwrapped when the position of the vessel walls and the physical properties of the flowing blood are taken into account.
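In the well-sampled, singularity-free 1D case, unwrapping reduces to restoring the multiple of 2π wherever a sampled difference leaves (−π, π]. A minimal sketch:

```python
import numpy as np

def unwrap_1d(phi_w):
    """Add to each sample the multiple of 2*pi that keeps successive
    differences inside (-pi, pi]."""
    d = np.diff(phi_w)
    corrections = -2 * np.pi * np.round(d / (2 * np.pi))
    return phi_w + np.concatenate(([0.0], np.cumsum(corrections)))

phi_true = np.linspace(0.0, 6 * np.pi, 50)                 # ramp over 3 fringes
phi_wrapped = (phi_true + np.pi) % (2 * np.pi) - np.pi     # wrap onto [-pi, pi)
print(np.allclose(unwrap_1d(phi_wrapped), phi_true))       # True
```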
2 Branch cuts in 2-D and 3-D phase unwrapping

Consider a continuous phase distribution φ_u and its wrapped version φ_w. The sampled forms of φ_u and φ_w will be denoted φ_us and φ_ws, respectively. To avoid undersampling, the Nyquist criterion requires the change in φ_us between two consecutive samples to satisfy |Δφ_us| < π. Fig. 1 illustrates the process of downsampling a 2D distribution and failing to satisfy this criterion, which results in the formation of the singular points s1 and s2. The 2D branch cut method involves identifying such points and placing branch cuts (barriers to unwrapping) between points of opposite sign. Two ambiguities arise, however: first, how to pair the points, and second, the shape of the cut line between them. One approach to resolving the first is to minimize the distance between singularities, which can be achieved with a minimum-cost matching method from graph theory [1]. The second ambiguity is not normally addressed, and a straight cut line is typically the default choice. Fig. 1(b) illustrates the potential errors introduced by this choice: branch cuts should be placed between samples where |Δφ_us| > π, indicated by a dotted line in the figure. If a straight branch cut is placed instead, local errors will appear in the shaded region. When the 2D branch cut method is extended to 3D, the first ambiguity (pairing of positive and negative phase singularities) largely disappears [2]. This is because in 3D space phase singularities arrange themselves into closed phase singularity loops (PSLs). Branch cut surfaces must then be placed on the singularity loops to block the unwrapping path. There remains, however, the second ambiguity, i.e. where the branch cut surface should be placed. For instance, a loop shaped like the letter C (Fig. 2) admits two different spanning surfaces of equal area. If the wrong surface is built, a localized unwrapping error will result within the volume of the loop.

The problem can be summarized as follows: the PSLs due to undersampling mark the edges of the regions where undersampling has occurred. Only these boundaries are known from the wrapped phase φ_ws alone. However, to place the branch cut surfaces and prevent the unwrapping path from producing localized errors, the spatial coordinates defining where undersampling has occurred are needed. That information is lost in the sampling process, and any robust method for placing the branch surfaces in the presence of undersampling requires extra information from the physical problem under study. In the next section an approach that can be used in PC-MRI blood velocity measurements will be described. Related criteria can be derived in a similar way for other 3-D phase volumes, e.g. from speckle interferometry studies in solid mechanics application areas.
Fig. 1. Phase contour map (contour spacing of π) demonstrating formation of a dipole pair s1, s2 due to undersampling. Grid represents sample points; dotted lines along the region of high phase gradient represent the branch cut that should join the dipole pair
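Singular points of a sampled wrapped phase map, such as the dipole pair s1, s2 of Fig. 1, can be located by summing wrapped differences around each elementary 2 × 2 loop (the standard residue test; function names and the synthetic single-vortex field below are illustrative):

```python
import numpy as np

def wrap(p):
    """Wrap values onto [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def residues(phi_w):
    """Sum wrapped differences around every elementary 2x2 loop of a wrapped
    phase map. A result of +/-1 (in units of 2*pi) marks a phase singularity."""
    d1 = wrap(phi_w[:-1, 1:] - phi_w[:-1, :-1])   # top edge, left to right
    d2 = wrap(phi_w[1:, 1:] - phi_w[:-1, 1:])     # right edge, downwards
    d3 = wrap(phi_w[1:, :-1] - phi_w[1:, 1:])     # bottom edge, right to left
    d4 = wrap(phi_w[:-1, :-1] - phi_w[1:, :-1])   # left edge, upwards
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A synthetic spiral (vortex) phase carries a single singularity, placed
# between grid points so that exactly one loop encloses it.
y, x = np.mgrid[-8:9, -8:9]
phi = np.arctan2(y + 0.5, x + 0.5)
q = residues(phi)
print(int(np.abs(q).sum()))   # 1
```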
Fig. 2. A simple example of a C-shaped 3D ambiguous loop. Two of the possible branch cut surfaces are shown shaded in (a) and (b)
3 Building branch cut surfaces in CINE PC-MRI

PC-MRI allows quantitative velocity mapping within arteries and veins by measuring the phase shift φ of the MRI signal. A linear relationship exists between φ and the velocity of the moving spins, v:

$$ v(\phi) = \frac{\phi\, V_{enc}}{\pi} \qquad (1) $$
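Eq. (1) is a one-line conversion; a sketch with an illustrative Venc value:

```python
import numpy as np

Venc = 1.5   # m/s: the velocity that produces a phase shift of pi (illustrative)

def velocity_from_phase(phi):
    """Eq. (1): v = phi * Venc / pi."""
    return phi * Venc / np.pi

print(round(velocity_from_phase(np.pi / 2), 6))   # 0.75 m/s
```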
The velocity encoding parameter Venc is the velocity that produces a phase shift of π. When combined with cardiac triggering, CINE PC-MRI produces a temporal series of images of a single slice of the test subject, i.e. a 3D phase volume consisting of two spatial axes and one time axis (t). Under the commonly-used assumptions of incompressible flow and constant viscosity μ, and also assuming that the measured velocity component v in the main direction of flow is much larger than the perpendicular components, the Navier-Stokes equations reduce to:

ρ ∂v/∂t = P + μ (∂²v/∂x² + ∂²v/∂y²) ,   (2)
where U is the fluid density and P the pressure gradient. When Eq.(2) is applied to flow within a cylinder of radius R with no-slip boundary conditions, the well-known parabolic velocity profile results: v ( x, y )
P R 2 4P
2 2 § ¨1 x y ¨ R2 ©
·¸ . ¸ ¹
(3)
In reality, non-Newtonian effects may cause the blood velocity profile to differ from the parabolic form; in the middle of the vessel it typically has a block profile [3]. If unwrapping is successful within the vessel, the velocity field should be a continuous function of the spatial coordinates. If, on the other hand, the branch cut surfaces are placed inappropriately, the velocity field will show spatial discontinuities that are not an actual property of the flow. This can be used to our advantage to check the effects of different branch cut surfaces on the velocity field. From Eq. (3), the Laplacian of v(x, y), ∇²v = ∂²v/∂x² + ∂²v/∂y², is constant for a parabolic velocity profile. For the block profile, however, ∇²v will be close to zero inside the vessel except in the region close to its edges. We can now define the first criterion that may be used to guide the construction of branch surfaces: Criterion 1: By evaluating the standard deviation of the 2D Laplacian of the velocity field, σL, obtained for different branch surfaces, the surface that minimizes σL can be considered the appropriate one.
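Criterion 1 can be sketched numerically. The following toy example (all names and parameter values are illustrative, not from the paper) compares σL for a smooth parabolic profile against the same profile corrupted by a localized unwrapping error; the correctly unwrapped field gives the smaller value:

```python
import numpy as np

def sigma_L(v):
    """Standard deviation of the discrete 2D Laplacian (interior points only)."""
    lap = (v[:-2, 1:-1] + v[2:, 1:-1] + v[1:-1, :-2] + v[1:-1, 2:]
           - 4.0*v[1:-1, 1:-1])
    return np.std(lap)

# Smooth parabolic velocity profile inside a circular "vessel"
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
r2 = x**2 + y**2
v_correct = np.where(r2 < 1, 1.0 - r2, 0.0)

# Same field with a localized unwrapping error (a 2*Venc-like jump)
v_wrong = v_correct.copy()
v_wrong[30:34, 30:34] += 2.0

# The appropriate branch surface is the one minimizing sigma_L
assert sigma_L(v_correct) < sigma_L(v_wrong)
```

In a real implementation the candidate velocity fields would come from unwrapping the same wrapped volume with different candidate branch surfaces, but the selection rule is the same comparison of σL values.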
To avoid temporal undersampling, the temporal resolution δt, the peak blood acceleration ap, and the velocity encoding value must satisfy

ap δt ≤ Venc .   (4)
This condition may not hold in the case of a sudden increase or decrease in pressure gradient, which is followed by a corresponding change in blood velocity, and horizontally oriented PSLs may result (we assume here that the time axis is vertical). These loops require branch surfaces that lie in the same plane in order to block the unwrapping path exactly where the phase is undersampled. Hence, a second criterion is: Criterion 2: After a sudden change in pressure gradient, the horizontally oriented loops that appear require minimum-area surfaces with their normal in the direction of the blood flow.
Finally, from Eq. (3), the velocity gradient is highest at the vessel wall, and therefore spatial undersampling is more likely to occur here than anywhere else. The Nyquist conditions to avoid spatial undersampling are:

|∂v/∂x|p δx ≤ Venc ,  |∂v/∂y|p δy ≤ Venc ,   (5)
where δx and δy are the spatial resolutions in x and y, and the subscript p stands for peak value. A third criterion can be expressed as follows: Criterion 3: Vertically oriented (along the time axis) portions of loops, with singularities near the vessel wall, require branch surfaces placed at the vessel wall.
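Conditions (4) and (5) are straightforward to check for a planned acquisition. The parameter values below are purely illustrative, not taken from the paper:

```python
# Hypothetical acquisition parameters (illustrative values only)
Venc = 0.50                 # velocity encoding, m/s
a_p = 20.0                  # peak blood acceleration, m/s^2
dt = 0.018                  # temporal resolution, s
dvdx_p = dvdy_p = 30.0      # peak spatial velocity gradients, 1/s
dx = dy = 1e-3              # spatial resolution, m

temporal_ok = a_p * dt <= Venc                              # Eq. (4)
spatial_ok = dvdx_p * dx <= Venc and dvdy_p * dy <= Venc    # Eq. (5)
print(temporal_ok, spatial_ok)   # → True True
```

If either flag is False, PSLs due to undersampling should be expected in the corresponding direction of the phase volume.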
The MRI magnitude image can be used to locate the position of the vessel wall, either manually or automatically with methods such as deformable models [4]. A test based on the fulfilment of these three criteria could be implemented by defining an energy term that is a function of the standard deviation of the Laplacian of the resulting velocity field and of the distance of the vertical branch surface patches to the vessel wall. An initial surface built on a PSL could then be modified iteratively to minimize this energy. The present work is not concerned with the details of this implementation, but rather with showing that the criteria above can indeed produce appropriate surfaces.
4 Results and Discussion

A high temporal resolution phase-contrast CINE MRI acquisition φw through the ascending aorta of a single subject was obtained using a 1.5 T magnetic resonance scanner (Echospeed, GE Medical Systems), with velocity encoding value Venc = 50 cm s⁻¹. One hundred temporal phases were reconstructed with a temporal resolution of 18 ms. Such temporal resolution is much finer than would normally be used in clinical practice, where the desired resolution has to be weighed against the acquisition time, and hence cost, of the measurement. In order to demonstrate the use of the three criteria described in section 3, φw was temporally downsampled to sampling rates that might typically be of clinical interest. Downsampling to one sixth of the original temporal resolution produced the wrapped phase volume φwt6. Results from other temporal downsampling rates, as well as from spatially downsampled data, are presented in [5]. The high resolution phase volume φw presented only small singularity loops due to noise in static tissue surrounding the aorta and a few small loops due to spatial undersampling near the vessel edges, but none inside the aorta. It could therefore be unwrapped without ambiguity, to provide a benchmark φu against which the performance of the branch cut placement criteria outlined in section 3 could be judged. The temporally downsampled version of φu will be denoted φut6. The temporally undersampled wrapped phase volume φwt6 contains a large C-shaped loop, shown in Fig. 3(a). The initial surface produces localized unwrapping errors (Fig. 3(b)) and the rms error when compared to φut6 is 0.81 cm s⁻¹. The Laplacian of the velocity in the x and y directions, calculated by convolving the original velocity image with the 3×3 kernels
[1 −2 1; 1 −2 1; 1 −2 1]  and  [1 1 1; −2 −2 −2; 1 1 1]

(rows separated by semicolons) respectively, has large values in the error region (Fig. 3(c)). When the branch cut surface is modified according to criteria 2 and 3 to bring vertical patches to the vessel wall and to minimize the standard deviation of ∇²v (Fig. 3(d,f)), the unwrapping errors are corrected (Fig. 3(e)) and the unwrapped phase coincides with the one obtained from the full-resolution volume (rms error = 0). Before concluding, we should comment on the main limitation of the approach described here, namely the assumed presence of a single PSL in the region of interest. As progressively more undersampling is introduced, one can expect additional loops to appear between voxels within the flow, and not just at the vessel walls. The approach presented here could be extended to deal with such a case, although at the expense of a more complex numerical optimization process.
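The behaviour of these two kernels can be verified on a synthetic parabolic profile, for which both directional second derivatives are constant (a sketch; the grid size is arbitrary):

```python
import numpy as np

# The two 3x3 kernels from the text: a second difference along one axis,
# summed over the three rows of samples perpendicular to it.
Ka = np.array([[1, -2, 1],
               [1, -2, 1],
               [1, -2, 1]], dtype=float)
Kb = Ka.T

def conv3(img, K):
    """'valid' 3x3 correlation (the kernels here are symmetric)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += K[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32), indexing='ij')
v = 1.0 - x**2 - y**2          # parabolic profile, Eq. (3) with P R^2/(4 mu) = 1

La, Lb = conv3(v, Ka), conv3(v, Kb)
# For a parabola both kernel outputs are constant over the whole field
assert np.allclose(La, La[0, 0]) and np.allclose(Lb, Lb[0, 0])
```

For a block profile the same outputs would be near zero in the interior and large only at the edges, which is exactly the behaviour exploited by Criterion 1.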
Fig. 3. (a) Phase singularity loops and their branch surfaces from the first three frames of the temporally downsampled wrapped phase volume φwt6, in the ascending aorta. (b) Unwrapped phase corresponding to the frames shown in (a); there is an unwrapping error on frame 2. (c) Laplacian of frame 2 of (b). (d) Same phase sub-volume as in (a) with modified branch surface. (e) Unwrapped phase corresponding to the frames shown in (d). (f) Laplacian of frame 2 of (e)
5 Conclusions

In neither 2D nor 3D is it possible to find where undersampling has occurred from the measured wrapped phase distribution alone. Such information is lost in the sampling process, which is also responsible for the creation of the singularities. It is therefore necessary to make use of additional information about the physics of the problem. In the particular case of CINE-MRI, the pulsatile nature of the flow produces a sudden acceleration that can result in temporal undersampling and PSLs with singularities oriented perpendicular to the time axis, which require a surface with its normal in the direction of the time axis. This sudden increase in velocity may then result in spatial undersampling near the vessel wall, forming loops that require a branch surface located along the wall, with its normal perpendicular to the time axis. These considerations, taken together with an approximate form of the relevant Navier-Stokes equation, resulted in three criteria for the branch cut placement algorithm that, when implemented manually, allowed even one-voxel errors to be prevented. An unfortunate consequence of this observation, however, is that no universal approach can solve all possible undersampled 3D phase distributions.
6 References
1. Buckland, J R, Huntley, J M, Turner, S R E (1995) Unwrapping noisy phase maps by use of a minimum-cost-matching algorithm. Applied Optics 34: 5100-5108
2. Huntley, J M (2001) Three-dimensional noise-immune phase unwrapping algorithm. Applied Optics 40: 3901-3908
3. Han, S-I, Marseille, O, Gehlen, C, Blümich, B (2001) Rheology of blood by NMR. Journal of Magnetic Resonance 152: 87-94
4. Hu, Y-L, Rogers, W J, Coast, D A, Kramer, C M, Reichek, N (1998) Vessel boundary extraction based on a global and local deformable physical model with variable stiffness. Magnetic Resonance Imaging 16: 943-951
5. Salfity, M F, Ruiz, P D, Huntley, J M, Graves, M J, Cusack, R, Beauregard, D A (2005) Branch cut surface placement for unwrapping of undersampled three-dimensional phase data: application to magnetic resonance imaging arterial flow mapping. Applied Optics (submitted)
Signal processing of interferogram using a two-dimensional discrete Hilbert transform
Ribun Onodera¹, Yoshitaka Yamamoto¹, and Yukihiro Ishii²
¹ Department of Electronics, University of Industrial Technology, 4-1-1 Hashimotodai, Sagamihara, Kanagawa 229-1196, Japan
² Department of Applied Physics, Faculty of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan
1 Introduction

Sub-fringe interferometry has been used for testing high-precision optical systems. Many achievements in sub-fringe interferometry have been reported: optical heterodyne interferometry [1], phase-shifting interferometry [2], and Fourier transform interferometry [3]. Recently, demodulation methods utilizing a Hilbert-transform technique have been reported, in which the interference phase is measured from the analytic signal determined from a single interference signal [4-6]. We have presented a phase measurement method using a one-dimensional discrete Hilbert transform [7]. In this paper we propose a signal processing method for the interferometric phase that uses a two-dimensional (2-D) discrete Hilbert transform. The coefficients of the 2-D Hilbert transform are determined so as to make the transfer function a spiral phase function [6]. A two-dimensional phase map is numerically calculated by using the 2-D discrete Hilbert transform and 2-D discrete high-pass filtering.
2 Measurement principle

In this section the measurement principle is described. First, we show the formalism used to describe the 1-D discrete Hilbert transform [7]. Next, the extension of the 1-D discrete Hilbert transform to two dimensions is introduced. The interferometric phase is obtained as the arctangent of the ratio of the sine and cosine components, which are obtained by a 2-D discrete Hilbert transform and 2-D discrete high-pass filtering.
2.1 One-dimensional discrete Hilbert transform
The Hilbert transform of f(t) is given by

H1D[f(t)] = ℱ⁻¹[ −j sgn(2πν) ℱ[f(t)] ] ,   (1)
where ν is a frequency, sgn(2πν) is the sign function, and ℱ and ℱ⁻¹ denote the Fourier transform and its inverse. Fig. 1 shows the discrete signal processing for performing the Hilbert transform of the interference signal. x[n] are the interference signal data, sampled at spacing P. The discrete Hilbert transform signal s[n] is calculated by

s[n] = Σk hk x[n−k] ,  k = −(N−1)/2, …, (N−1)/2 ,   (2)

where hk are the coefficients of the discrete Hilbert transform [7]. A set of N interference signal data is used for the computation of one discrete Hilbert transform signal. Table 1 shows the coefficients of the one-dimensional (1-D) discrete Hilbert transform with N = 31.
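Both the frequency-domain definition of Eq. (1) and the FIR convolution of Eq. (2) can be sketched as follows. Note that the coefficients used here are the windowed ideal values hk = 2/(πk) for odd k (zero for even k); they only approximate the optimized Table 1 values and may differ from them in overall sign convention:

```python
import numpy as np

# FIR approximation of the discrete Hilbert transform, Eq. (2),
# using windowed ideal coefficients (an assumption, not Table 1's values).
N = 31
k = np.arange(-(N - 1)//2, (N - 1)//2 + 1)
ksafe = np.where(k == 0, 1, k)                      # avoid division by zero at k = 0
h = np.where(k % 2 != 0, 2.0/(np.pi*ksafe), 0.0)*np.hamming(N)

n = np.arange(200)
nu0 = 0.2                                           # inside the 0.04-0.46 passband
x = np.cos(2*np.pi*nu0*n)                           # 40 full periods -> no leakage

# Eq. (1): frequency-domain Hilbert transform, multiply by -j*sgn(nu)
X = np.fft.fft(x)
nu = np.fft.fftfreq(len(x))
s_fft = np.real(np.fft.ifft(-1j*np.sign(nu)*X))

# Eq. (2): s[n] = sum_k h_k x[n-k]
s_fir = np.convolve(x, h, mode='same')

target = np.sin(2*np.pi*nu0*n)
assert np.max(np.abs(s_fft - target)) < 1e-9                    # exact for periodic input
assert np.max(np.abs(s_fir[20:-20] - target[20:-20])) < 0.05    # away from the edges
```

Evaluating H(ν) = Σk hk exp(−j2πkν) with the same coefficients (Eq. (3) below) reproduces the ≈ 0 dB gain and ±π/2 phase behaviour shown in Fig. 2.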
Fig. 1. Discrete signal processing procedure (N=7 case).
Table 1. Coefficients of the discrete Hilbert transform

  k    hk                  k    hk                  k    hk
 -15    8.97716449e-03    -5    1.08367272e-01      6    2.60121154e-07
 -14   -1.16333093e-06    -4   -7.56601751e-07      7   -6.59163141e-02
 -13    1.41161992e-02    -3    2.00378615e-01      8    1.07408968e-07
 -12   -9.38455946e-07    -2   -9.83483723e-08      9   -4.09581120e-02
 -11    2.48205850e-02    -1    6.32598450e-01     10   -9.02016579e-09
 -10    9.02016579e-09     0    0.00000000e+00     11   -2.48205850e-02
  -9    4.09581120e-02     1   -6.32598450e-01     12    9.38455946e-07
  -8   -1.07408968e-07     2    9.83483723e-08     13   -1.41161992e-02
  -7    6.59163141e-02     3   -2.00378615e-01     14    1.16333093e-06
  -6   -2.60121154e-07     4    7.56601751e-07     15   -8.97716449e-03
                           5   -1.08367272e-01
Next we analyze the transfer function of the 1-D discrete Hilbert transform. The transfer function is defined as

H(ν) = Σk hk exp(−j2πkνP) ,  k = −(N−1)/2, …, (N−1)/2 ,   (3)
where ν is a spatial frequency. It is evident from Eq. (3) that the coefficients hk determine the characteristics of the transfer function, i.e., the frequency response of the discrete signal processing. Figure 2 shows the transfer function of the discrete Hilbert transform, i.e., the amplitude gain (a) and the phase (b) as a function of spatial frequency ν, obtained by substituting the coefficients in Table 1 into Eq. (3) with N = 31 and P = 1. It is clear from Fig. 2 that the amplitude gain is 0 dB over the spatial frequency ranges from −0.46 (lines/P) to −0.04 (lines/P) and from 0.04 (lines/P) to 0.46 (lines/P). It is also shown that the phases are π/2 and −π/2 over these spatial frequency ranges, respectively. Therefore the discrete Hilbert transform given by Eq. (2), with N = 31 and the coefficients in Table 1, has the desired frequency response of −j sgn(2πν)
Fig. 2. Transfer function of the one-dimensional discrete Hilbert transform: (a) amplitude gain and (b) phase as a function of the spatial frequency.
for interference signals whose spatial frequency lies in the ranges from −0.46 (lines/P) to −0.04 (lines/P) and from 0.04 (lines/P) to 0.46 (lines/P).

2.2 Two-dimensional discrete Hilbert transform
Larkin et al. have proposed a two-dimensional (2-D) Hilbert transform of g(x, y):

H2D[g(x, y)] = −j e^(−jβ(x,y)) ℱ⁻¹[ e^(jφ(νx,νy)) ℱ[g(x, y)] ] ,   (4)
where β(x, y) is the orientation angle of the local fringe pattern and φ(νx, νy) is a spiral phase [6]. The spiral phase function is defined by

e^(jφ(νx,νy)) = (νx + jνy) / √(νx² + νy²) ,   (5)
where νx, νy are spatial frequencies. By comparing Eqs. (1) and (4), the spiral phase function can be considered as the transfer function of the 2-D discrete Hilbert transform. The transfer function is generally defined as

H(νx, νy) = Σk Σl hk,l exp[−j2π(kνxPx + lνyPy)] ,  k, l = −(N−1)/2, …, (N−1)/2 ,   (6)
where hk,l are the coefficients of the 2-D discrete Hilbert transform. We can now estimate the coefficients hk,l by substituting Eq. (5) into the left side of Eq. (6):

hk,l = j (k + jl) / [2π (k² + l²)^(3/2)] .   (7)
Finally, the 2-D discrete Hilbert transform is given by

s[n, m] = −j e^(−jβ[n,m]) Σk Σl hk,l x[n−k, m−l] ,  k, l = −(N−1)/2, …, (N−1)/2 .   (8)
Figure 3 shows the transfer function of the 2-D discrete Hilbert transform, i.e., the amplitude gain (a) and the phase (b) as a function of the spatial frequencies νx, νy, obtained by substituting the coefficients of Eq. (7) into Eq. (6) with N = 7 and Px = 1, Py = 1. It is clear from Fig. 3 that the 2-D discrete Hilbert transform of Eq. (8) has the desired transfer function characteristics, i.e., the spiral phase function. A discrete high-pass filtering is used to remove the bias intensity and to extract the cosine component, giving

c[n, m] = Σk Σl qk,l x[n−k, m−l] ,  k, l = −(N−1)/2, …, (N−1)/2 ,   (9)
where qk,l are the coefficients of the discrete high-pass filter. Finally, we can determine the interference phase under test from s[n, m] and c[n, m]:
Fig. 3. Transfer function of the two-dimensional discrete Hilbert transform: (a) amplitude gain and (b) phase as a function of the spatial frequencies.
φ[n, m] = tan⁻¹( s[n, m] / c[n, m] ) .   (10)
3 Numerical calculation

Here we apply the algorithm described in section 2 to a 2-D interferogram. Fig. 4(a) shows a test interferogram x[n, m] = A + B cos φ[n, m] that is numerically calculated with 256×240 points. The phase distribution φ[n, m] is assumed to have a hill-like shape. First, we calculate the 2-D discrete Hilbert transform of x[n, m] using Eq. (8). The orientation angle is given by the azimuth angle, since the interferogram has point symmetry. Fig. 4(b) is the numerical result for s[n, m]. Next, the high-pass filtering of x[n, m] is performed using Eq. (9). The result c[n, m] is shown in Fig. 4(c). The cross-sectional profiles of the interferograms in Fig. 4(a), (b), and (c) are shown in Fig. 5(a), (b), and (c), respectively. It is clear from Fig. 5 that the sine and cosine components of the interferogram are successfully obtained. Lastly, we calculate the phase over the range from −π to π using Eq. (10); the result is shown in Fig. 5(d). The calculated phases are unwrapped along the n direction with a phase-unwrapping algorithm.
Fig. 4. Interferometric intensities: (a) x[n, m], (b) s[n, m], and (c) c[n, m].
Fig. 5. Cross-sectional profiles of the interferometric intensities in Fig. 4 ((a), (b), and (c)) and calculated phase distribution (d).
Fig. 6. Phase distribution calculated by using the proposed method.
Fig. 6 shows the calculated phase profile, spanning a range of 15π rad, where it is evident that the hill-shaped phase distribution is successfully demodulated.
4 Conclusion

We have proposed a signal processing technique for the interferometric phase that uses a two-dimensional (2-D) discrete Hilbert transform and 2-D discrete high-pass filtering. A 2-D phase profile has been calculated from a single interferogram. A numerical result for a phase distribution with a hill-like shape has been demonstrated. This technique offers a useful demodulation scheme with as wide a spectral bandwidth as the 1-D discrete Hilbert transform [7].
References
1. N. A. Massie, R. D. Nelson, and S. Holly, "High-performance real-time heterodyne interferometry," Appl. Opt. 18, 1797-1803 (1979).
2. K. Creath, "Phase-measurement interferometry techniques," in Progress in Optics, E. Wolf, Ed. (Elsevier, Amsterdam, 1988), Vol. 26, pp. 349-393.
3. M. Takeda, H. Ina, and S. Kobayashi, "Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry," J. Opt. Soc. Am. 72, 156-160 (1982).
4. S. S. C. Chim and G. S. Kino, "Three-dimensional image realization in interference microscopy," Appl. Opt. 31, 2550-2553 (1992).
5. Y. Watanabe and I. Yamaguchi, "Digital Hilbert transformation for separation measurement of thickness and refractive indices of layered objects by use of a wavelength-scanning heterodyne interference confocal microscope," Appl. Opt. 41, 4497-4502 (2002).
6. K. G. Larkin, D. J. Bone, and M. A. Oldfield, "Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform," J. Opt. Soc. Am. A 18, 1862-1870 (2001).
7. R. Onodera, H. Watanabe, and Y. Ishii, "Interferometric phase-measurement using a one-dimensional discrete Hilbert transform," Opt. Rev. 12, 29-36 (2005).
Recent advances in automatic demodulation of single fringe patterns
J. A. Quiroga, D. Crespo, J. A. Gomez Pedrero, J. C. Martinez-Antón
Departamento de Optica, Universidad Complutense de Madrid, 28040 Madrid, Spain
[email protected]
1 Introduction

This work presents some advances in the field of phase demodulation from a single fringe pattern with closed or open fringes and a wide spectral content, where conventional techniques such as Fourier analysis cannot be applied. In particular, we present a fast fringe orientation calculation method with application to the Generalized Quadrature Transform, and a direct temporal demodulation algorithm for fringe patterns with sensitivity variation. Both techniques have been applied to the analysis of shadow moiré topography images. Finally, we discuss the possibilities of web-based fringe pattern processing combined with Wi-Fi technologies for building a fully portable shadow moiré system for external defect measurement on aeronautical surfaces, in which the previous results on demodulation from a single fringe pattern have been widely used.
2 Fast algorithm for estimation of the fringe orientation angle

2.1 Phase demodulation using a quadrature operator: the relevance of the orientation term

The irradiance distribution of a fringe pattern can be represented by

I(r) = b(r) + m(r) cos φ(r) ,   (1)
where I is the irradiance, b the background, m the modulation, φ the modulating phase, and r = (x, y) the position vector. The process of phase demodulation consists in extracting the relevant information, φ(r), from an observed irradiance distribution of the form given by (1). One of the methods proposed recently to solve this problem is the General Quadrature Transform (GQT) [1]. The GQT is a quadrature operator Qn{·} that transforms a given fringe pattern into its quadrature term, Qn{I_HP(r)} = −m(r) sin φ(r), where I_HP is the background-filtered version of I. With this quadrature signal one can easily determine the wrapped phase, W{φ(r)}, over the whole region of interest by a simple arctan calculation. As shown in reference [1], the GQT is given by

Qn{I_HP(r)} = (∇φ(r)/|∇φ(r)|) · (∇I_HP(r)/|∇φ(r)|) .   (2)

The first term of equation (2), n_φ(r) = ∇φ(r)/|∇φ(r)|, is a unit vector pointing in the direction of ∇φ(r), denominated the fringe orientation term. The second term is an isotropic generalization of the 1D Hilbert transform, H{I_HP} = ∇I_HP/|∇φ|. The term H{I_HP} can be estimated by a linear operator regardless of the dimension of the problem [2]. But, for fringe patterns with closed fringes, the calculation of the orientation term n_φ is a nonlinear problem. From its definition, the orientation term can be written as n_φ(r) = (cos β_2π, sin β_2π), where β_2π = arctan[(∂φ/∂y)/(∂φ/∂x)] is the fringe orientation angle. However, to obtain the orientation term we only have access to the fringe pattern irradiance and its gradient, from which the orientation angle we obtain is β = arctan[(∂I/∂y)/(∂I/∂x)], which, due to the sign flips in the irradiance gradient, is defined only modulo π. The relation of β to the modulo-2π orientation angle is β = β_2π + kπ, with k an integer such that 0 ≤ β ≤ π; in consequence:

W{2β} = W{2β_2π + 2kπ} = W{2β_2π} ,   (3)

where W denotes the modulo-2π wrapping operator. Therefore the problem of demodulating a fringe pattern is reduced to the application of a linear operator and the unwrapping of the distribution W{2β}. Because β is an angular magnitude, for patterns with closed fringes the distribution 2β will be piecewise continuous, with discontinuities of ±4π for paths enclosing a fringe center, and pairs of poles of the same sign at the origin of the closed fringes; so, in the general case the unwrapping process is path dependent, and standard phase unwrapping algorithms cannot be used for this purpose [3].

2.2 Fast calculation of the fringe orientation angle
Within the Regularized Phase Tracker (RPT) method [4], the problem of unwrapping the signal W{2β} can be solved by the local minimization of the cost function

U_r(θ, ω) = Σ_{ρ∈N} { [f_C(ρ) − cos p(r, ρ)]² + [f_S(ρ) − sin p(r, ρ)]² + μ W_4π{θ(ρ) − p(r, ρ)}² M(ρ) } ,   (4)

(W_4π denotes the modulo-4π wrapping operator), where

f_C(r) = cos W{2β} ,  f_S(r) = sin W{2β} ,   (5)

and
p(r, ρ) = θ(r) + ω(r)·(r − ρ) ,   (6)

with N a given neighborhood around the point r, ρ and ω two-dimensional vectors corresponding to the local neighborhood position and the local spatial frequencies of W{2β}, μ the regularization parameter, and M a field that indicates whether a point has been processed or not. The cost function U_r(θ, ω) is 3D, and therefore the processing time for its minimization can be high. To reduce the processing time we propose, first, to estimate the local spatial frequency vector ω from the phase map by wrapped differences of W{2β}:

ω(r) ≈ ω̂(r) = W{∇W{2β(r)}} ,   (7)

and, second, to minimize locally the 1D cost function

U_r(θ) = Σ_{ρ∈N} { [f_C(ρ) − cos p̂(r, ρ)]² + [f_S(ρ) − sin p̂(r, ρ)]² + μ W_4π{θ(ρ) − p̂(r, ρ)}² M(ρ) } ,   (8)

where p̂(r, ρ) = θ(r) + ω̂(r)·(r − ρ). This means that the time consumed by the RPT algorithm becomes lower than in the former case. With this proposed method the values of ω are calculated taking into account only the nearest neighbors of each point. This implies that the size of the neighborhood region N in (8) must be kept small in order to obtain consistent results in the minimization process. In the new algorithm, a typical size of N would be 5. This results in an extra time optimization, since the neighborhood region for a typical RPT problem, where both θ and ω have to be estimated at each point, is usually larger, typically 7-11. In conclusion, the proposed algorithm is faster than the traditional RPT and scales better with the dimension of the problem, while keeping the advantages that make RPT so robust [5]. To demonstrate its capabilities for phase demodulation from a single fringe pattern, figure 1a shows a 400×500 px image of a shadow moiré pattern of a 500 µm indentation observed with a Ronchi grid of 10 mm⁻¹. The spatial frequency varies from 7 fringes/field to about 30 fringes/field in the central closed-fringe region. There is also a background and modulation variation, produced by the illumination non-uniformity and by the distance from the sample to the reference grid. Figure 1b shows the modulo-2π wrapped version of the obtained continuous phase; the processing time was about 10 s on a 2 GHz Pentium based portable computer, with regularization parameter μ and a spatial neighborhood of 7 px.
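The modulo-π ambiguity of the measured orientation angle, and the relation W{2β} = W{2β_2π} that the RPT unwrapping relies on, can be verified numerically on a synthetic closed-fringe pattern (a numpy sketch; the pattern and all parameters are illustrative):

```python
import numpy as np

def wrap(p):
    """Modulo-2pi wrapping operator W, returning values in (-pi, pi]."""
    return np.angle(np.exp(1j*p))

# Synthetic closed-fringe pattern: phi = 8 (x^2 + y^2), I = cos(phi)
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128), indexing='ij')
phi = 8.0*(x**2 + y**2)
I = np.cos(phi)

# Orientation angle from the irradiance gradient (defined modulo pi only);
# doubling the angle removes the sign ambiguity of the gradient
gIx, gIy = np.gradient(I)
two_beta = wrap(2.0*np.arctan2(gIy, gIx))

# Ground truth from the analytic phase gradient (modulo 2pi orientation)
two_beta_2pi = wrap(2.0*np.arctan2(16.0*y, 16.0*x))

# Compare away from fringe extrema (|sin phi| small), the pattern centre,
# and the image borders where the numerical gradient is one-sided
mask = (np.abs(np.sin(phi)) > 0.3) & (x**2 + y**2 > 0.1)
mask[:2, :] = mask[-2:, :] = mask[:, :2] = mask[:, -2:] = False
err = np.abs(wrap(two_beta - two_beta_2pi))
assert err[mask].max() < 0.2
```

The same wrap() helper can be reused to build the wrapped-difference frequency estimate ω̂ of equation (7) by applying it to the finite differences of W{2β}.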
3 Temporal demodulation of fringe patterns with sensitivity change

A simpler alternative to the GQT approach discussed above for demodulating the phase from the irradiance is to normalize the irradiance and afterwards compute the arccos of the normalized image. If we normalize the irradiance signal of equation (1) we get I_N(r) = cos φ(r) [6], from which the phase can be demodulated as

φ̂(r) = arccos I_N(r) .   (10)

However, due to the even nature of the cos(·) function, the phase obtained by equation (10) corresponds to the absolute value of the modulo-2π wrapped version of the continuous phase, that is,

φ̂(r) = |W{φ(r)}| .   (11)

From equation (11) it is clear that the wrapped version of the actual modulating phase is given by

W{φ} = sgn(W{φ}) φ̂ ,   (12)

where sgn(·) stands for the signum function. Equation (12) can be written as
W{φ} = sgn(sin φ) arccos(I_N) = QS{φ} arccos(I_N) ,   (13)

where the term QS{φ} = sgn(sin φ) is denominated the Quadrature Sign (QS) [7]. Equation (13) indicates that if we can estimate the QS of a fringe pattern, we can demodulate the phase just by a normalization process and a multiplication in the direct space by the QS. The normalization can be achieved with a linear filter [6]; thus, if the QS can be calculated directly, the technique proposed by equation (13) is direct, fast, and asynchronous, in the sense that the phase variations (spatial or temporal) need not be known. Another interesting point is that equation (13) is a general n-dimensional expression. For a temporal experiment with sensitivity change, the modulating phase can be written as

φ(x, y, t) = h(x, y) S(t) ,   (14)

where h(x, y) is the quantity to be measured and S(t) is a scaling factor that relates the modulating phase φ to the quantity to be measured. The factor S is known as the sensitivity, and its value depends on the nature of the magnitude h(x, y) and the type of experimental set-up (moiré, interferometry, etc.) employed. If the background and the modulation are temporally smooth, the temporal component of the irradiance gradient can be approximated by

∂_t I = ∂I/∂t ≈ −m sin φ ∂φ/∂t = −m sin φ ω_t ,   (15)

where ω_t is the instantaneous temporal phase frequency. In the case of a sensitivity change, the instantaneous temporal frequency is given by

ω_t(x, y, t) = h(x, y) S_t(t) ,   (16)

where S_t(t) = ∂S/∂t. Usually it is possible to fix the sign of the sensitivity variation, that is, the sign of the instantaneous temporal frequency ω_t is known. Therefore, from equation (15) the QS can be computed as

QS{φ} = −sgn(ω_t) sgn(∂_t I) .   (17)

Equation (17) indicates that if we have a sensitivity change with known behavior (increasing or decreasing), the term sgn(ω_t) is known, and the QS can be computed directly from the temporal irradiance gradient.
Once the QS is estimated, the wrapped modulating phase can be obtained using equation (13). This technique is what we have denominated the Quadrature Sign method for direct asynchronous demodulation of fringe patterns. In this case the term asynchronous refers to the fact that explicit knowledge of the instantaneous phase frequency is not necessary, only its sign. Figure 2 shows the results obtained in a shadow moiré experiment in which we used three LED sources with different illumination angles, so that an RGB shadow moiré pattern is acquired in which each channel has a different sensitivity. In figure 2a we show the fringe pattern of the green channel, corresponding to a 3 mm indentation. The obtained surface topography is displayed in figure 2b. To test the reliability of the method, we measured the surface profile located between points A and B of figure 2a using a Leica VMM 200 metrology microscope with a lateral resolution of 0.1 microns and a depth resolution of ±100 microns. In figure 2c we plot this profile and compare it with the profile obtained using our method. The good agreement between both curves shows the ability of our technique to measure surface topography with accuracy and reliability comparable to alternative techniques.
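The QS method can be sketched for a single pixel with monotonically increasing sensitivity (illustrative values; the pattern is assumed already normalized, i.e. b = 0 and m = 1):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400)
S = 30.0*t**2                   # monotonically increasing sensitivity, so S_t >= 0
h = 1.0                         # quantity to be measured at this pixel (h > 0)
phi = h*S                       # Eq. (14)
I_N = np.cos(phi)               # normalized irradiance

dI_dt = np.gradient(I_N, t)                       # temporal irradiance gradient
QS = -np.sign(dI_dt)                              # Eq. (17) with sgn(omega_t) = +1
phi_w = QS*np.arccos(np.clip(I_N, -1.0, 1.0))     # Eq. (13): wrapped phase

# Check against the true phase where the sign is well defined
# (away from fringe extrema and from the one-sided gradient at the ends)
mask = np.abs(np.sin(phi)) > 0.2
mask[:2] = mask[-2:] = False
err = np.abs(np.exp(1j*phi_w) - np.exp(1j*phi))
assert err[mask].max() < 1e-6
```

Note that no knowledge of S(t) beyond the sign of its variation is used in the demodulation itself, which is exactly the asynchronous property described above.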
4 Fully portable measurement system using wireless technologies

In our case, the main objective of developing fast, reliable and automatic methods for phase demodulation from a single image is the development, in cooperation with NDT Expert, of a fully portable system for surface inspection by shadow moiré topography of external defects on aeronautical surfaces, denominated MoireView® [8]. At present there exists a digital still camera based version of the system, which first acquires the fringe patterns and later processes the images off-line. Currently we are working towards a fully portable version that, by incorporating wireless technologies, will allow on-line visualization and processing of the obtained fringe patterns. The global architecture of the prototype system is shown in figure 3a. The system will consist of the following components:
• A shadow moiré measuring head with a wireless camera.
• A remote processing server: a computer connected to the Wi-Fi network to receive the images from the camera. This server does all the processing necessary to demodulate the shadow moiré fringe patterns and provide a measurement of surface topography. In our case, the server implements a software application with all the demodulation techniques explained in the previous discussion.
• A thin mobile client used for launching the calculations and displaying the results on site. We use a pocket-PC device with Wi-Fi connection that connects to the processing server using web services to send processing instructions and receive the information from the server. In figure 3b we show an example screen of the application on the mobile device.
Currently we have implemented all of the mentioned architecture and functionality except for the mounting of the wireless camera in the shadow moiré head. We expect that the first prototype version of the portable system will be ready very soon.
Fig. 1. a) shadow moiré fringe pattern of a 300 µm indentation, b) wrapped version of the continuous phase demodulated by the GQT method using the fast orientation algorithm presented.
Fig. 2. a) green channel of the RGB image for a 3 mm indentation, b) 3D topography demodulated from the RGB image, c) profile comparison for the proposed technique and a measuring microscope
[Fig. 3 panels: a) shadow moiré head with wireless camera, wireless access, remote processing server; b) mobile client for display]
Fig. 3. a) architecture of the proposed wireless portable system, b) example screen of the application on the mobile device.
References
1. M. Servin, J. A. Quiroga, J. L. Marroquin, "A general n-dimensional quadrature transform and its application to interferogram demodulation", J. Opt. Soc. Am. A 20, 925-934 (2003)
2. K. G. Larkin, D. J. Bone, M. A. Oldfield, "Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform", J. Opt. Soc. Am. A 18, 1862-1870 (2001)
3. J. A. Quiroga, M. Servin, F. J. Cuevas, "Modulo 2π fringe-orientation angle estimation by phase unwrapping with a regularized phase tracking algorithm", J. Opt. Soc. Am. A 19, 1524-1531 (2002)
4. M. Servin, F. J. Cuevas, D. Malacara, J. L. Marroquin, R. Rodriguez-Vera, "Phase unwrapping through demodulation by use of the regularized phase-tracking technique", Appl. Opt. 35, 2192-2198 (1996)
5. D. Crespo, J. A. Quiroga, J. A. Gomez-Pedrero, "Fast algorithm for estimation of the orientation term of the general quadrature transform with application in the demodulation of an n-dimensional fringe pattern", Appl. Opt. 43, 6139-6146 (2004)
6. J. A. Quiroga, M. Servin, "Isotropic n-dimensional fringe pattern normalization", Opt. Comm. 224, 221-227 (2003)
7. J. A. Quiroga, J. A. Gómez-Pedrero, M. J. Terrón-López, M. Servin, "Temporal demodulation of fringe patterns with sensitivity change", Optics Communications, in press (2005)
8. http://www.ndt-expert.fr/pdf/MoreView.pdf
Comparison of Techniques for Fringe Pattern Background Evaluation
C. Breluzeau¹, A. Bosseboeuf¹, S. Petitgrand²
¹ Institut d'Electronique Fondamentale, UMR 8622, Université Paris XI, Bât. 220, F-91405 Orsay Cedex, France
² Fogale nanotech, Parc Kennedy-Bât A3, 285 Rue Gilles Roberval, F-30915 Nimes Cedex 2, France
1 Introduction
Single fringe pattern phase demodulation techniques often require a subtraction of the background intensity distribution before their application. This is the case for the phase-locked loop (PLL) demodulation techniques [1,2] and for recent techniques able to demodulate interferograms with closed fringes, such as the quadrature transform techniques [3,4] and the regularized phase tracking technique [5,6]. For interferograms with a linear fringe carrier and a background intensity with low spatial frequency variations, background subtraction can be achieved by low-pass filtering or by differentiation along the tilt direction [1]. When the fringe pattern has no carrier, the background intensity can still be estimated by using a fringe-less defocused image [7] or by a linear regression of the whole intensity data set after an optional low-pass filtering. Finally, the background intensity distribution can be computed by FFT processing techniques [7,8] or by averaging interferograms having a π phase shift or random phase offsets [9]. As discussed below, these methods have various limitations, such as a dependence on the fringe pattern content, the need for an isotropic background or the need for several fringe patterns with a phase shift between them. In this paper, we propose a new method, based on the cancellation of the fringe contrast by vibrating the whole sample surface or the reference mirror, that does not rely on any assumption about background intensity variations. Performances and limitations of this method are analyzed from computations and from real measurements on various surfaces by interference microscopy.
2 Fringe pattern intensity and Fourier spectrum
In this work, we consider that the fringe pattern intensity maps i_n(x,y) to be analyzed can be described by the following general equation:
i_n(x,y) = a(x,y) + b(x,y)·cos[φ(x,y) + 2π(f_0nx·x + f_0ny·y) + α_n] = a(x,y) + b(x,y)·cos[Φ(x,y)]     (1)
where a(x,y) is the background intensity distribution, possibly corrupted by noise, b(x,y) is the fringe amplitude map, and φ(x,y) is the optical phase map to be determined. f_0nx and f_0ny are the spatial frequencies of an optional linear carrier generated by a tilt between the interfering wavefronts, and α_n (n = 1,...) are optional additional phase shifts between the n interferograms. We will assume that a(x,y) and b(x,y) are the same for the n interferograms considered. The 2D Fourier spectrum of a fringe pattern intensity given by Eq. 1 is classically written as:
I_n(f_x,f_y) = A(f_x,f_y) + C(f_x − f_0nx, f_y − f_0ny) + C*(f_x + f_0nx, f_y + f_0ny)     (2)
where A(f_x,f_y) is the Fourier transform of a(x,y), C(f_x,f_y) is the Fourier transform of c(x,y) = (1/2)·b(x,y)·exp{i[φ(x,y) + α_n]} and C* is the complex conjugate of C. It is well known that when the spatial variations of a(x,y), b(x,y) and φ(x,y) are slow with respect to the carrier frequency components, the Fourier spectrum contains three main separated peaks: a peak around the origin related to the background intensity variations, and two symmetrical sidelobes C and C* centered around (f_0x,f_0y) and (−f_0x,−f_0y) that are related to the phase-modulated fringe carrier.
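The three-peak structure of Eqs. 1 and 2 is easy to verify numerically. The following sketch (illustrative parameter values only, chosen by us) builds a synthetic fringe pattern with a smooth background and a 20-fringe carrier, and locates the dominant non-DC peak of its spectrum:

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N] / N

a = 120 + 40 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.18)  # background a(x,y)
b = 50.0                                                      # fringe amplitude b(x,y)
phi = 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)     # slowly varying phase map
f0x, f0y = 20, 0                                              # linear carrier, 20 fringes
i = a + b * np.cos(phi + 2 * np.pi * (f0x * x + f0y * y))

# Eq. 2: the spectrum shows a DC peak A plus two sidelobes C, C* at +/-(f0x, f0y)
I = np.fft.fftshift(np.fft.fft2(i))
mag = np.abs(I)
c = N // 2
mag[c, c] = 0.0                    # blank the dominant DC bin to expose the sidelobes
ky, kx = np.unravel_index(np.argmax(mag), mag.shape)
print(kx - c, ky - c)              # strongest remaining peak lies at the carrier frequency
```

The peak separation assumed here (slow a, b and φ relative to the carrier) is exactly the condition stated in the text; when it fails, the sidelobes and the background peak overlap.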
3 Interferogram background extraction techniques
An easy method to evaluate the background intensity distribution is to record a fringe-less pattern [7]. Such a pattern can be obtained by adjusting the optical path difference of the interferometer to a value larger than the coherence length of the light source, by inserting a stop in the reference beam or, for interferometers including an objective or a lens, by defocusing. In all these cases the resulting image typically has a lower average intensity, a lower spatial frequency bandwidth and/or various disturbances, so it is only an approximation of the true interferogram background intensity.
Another simple method is to perform a low-pass filtering in real space or in the Fourier domain. However, the choice of the filter type, of its cut-off frequency, of the kernel size and of the number of filtering steps is somewhat arbitrary. Indeed, it depends on the fringe pattern, and there is no simple way to check the validity of the result. This method is limited to the restrictive case where the fringe pattern background intensity a(x,y) has slow spatial variations with respect to the total optical phase Φ(x,y).
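A minimal sketch of the Fourier-domain variant follows. The example is idealized on purpose (the background is exactly band-limited and periodic, so a cut-off of 5 cycles recovers it perfectly); the arbitrariness of the cut-off noted above only appears with real, non-band-limited backgrounds:

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N] / N
a = 100 + 30 * np.cos(2 * np.pi * x) + 20 * np.cos(2 * np.pi * y)  # slow background
i = a + 40 * np.cos(2 * np.pi * 25 * x)                            # 25-fringe carrier

# Low-pass filter in the Fourier domain: keep only frequencies below the cut-off
cutoff = 5                                    # cut-off in cycles per field (arbitrary)
f = np.fft.fftfreq(N) * N
FX, FY = np.meshgrid(f, f)
mask = FX**2 + FY**2 <= cutoff**2
background = np.real(np.fft.ifft2(np.fft.fft2(i) * mask))

err = np.abs(background - a).max()
print(err < 1e-6)                             # exact here because a(x,y) is band-limited
```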
Fig. 1. Fourier transform methods of background extraction. a) Interferograms, b) Fourier spectra, c) Extracted background with method 1 (top) and method 2 (bottom). Interferogram size 256×256. Tilt x: 15 fringes, tilt y: 25 fringes; Gaussian background: standard deviation 140 pixels, offsets X and Y: 30 pixels
The background of fringe patterns with a linear spatial carrier can be extracted by using fast Fourier transform (FFT) techniques [7,8] (Fig. 1). In these methods, data within a frequency window around the modulated carrier sidelobes in the Fourier space are replaced by data in the same frequency windows taken either from another spectrum or from another quadrant of the same spectrum. The background is then computed by the inverse Fourier transform of the modified Fourier spectrum. In the first case, sidelobe data are replaced by data in the same spatial frequency ranges taken in the Fourier spectrum of a fringe pattern with carrier fringes perpendicular to those in the original interferogram (case 1 in Fig. 1b). In the second case they are replaced, in the same frequency window, by data mirrored with respect to the fx axis (case 2 in Fig. 1b) or by data after a 90° rotation (case 3 in Fig. 1b). The main advantage of these FFT methods
is that noise and high spatial frequency components are kept in the extracted background. A drawback is that background data with spatial frequency components in the frequency window of the filter used to remove the fringe carrier may be altered. For cases 1 and 3 this is notably the case when the background spatial frequencies close to those of the fringe carrier are not isotropic. For case 2, the fx components of the spatial frequencies of the background around f0x are correctly extracted, while the fy components around f0y are altered. An additional drawback of the first method is the need to record a second interferogram with a precisely adjusted 90° rotation of the fringes with respect to the first interferogram. Application of the first and second FFT techniques is demonstrated on a simulated interferogram in Fig. 1. The simulated interferograms (Fig. 1a) are fringe patterns of a tilted plane with a background built by superimposing an off-centered Gaussian background and a scaled image of a surface with scratches. Fig. 1b shows the corresponding Fourier spectra in logarithmic scale. According to the method, complex data within circle B are replaced by complex data within circles A, A' or A'', and a similar procedure is used for the symmetrical modulated carrier sidelobe. The extracted backgrounds are displayed in Fig. 1c. They are correctly retrieved in this case. More generally, we found that some parasitic undulations often appear near the background image boundaries. As integer fringe numbers along the x and y axes were chosen in the simulations to limit spectral leakage effects, it is thought that these undulations occur when there is a fringe contrast discontinuity along the boundaries. In summary, FFT methods can be applied only to fringe patterns with a fringe carrier and may provide a background with artefacts near its boundaries.
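The sidelobe-replacement idea of case 2 can be sketched with numpy alone. This is a simplified reconstruction under our own assumptions (periodic band-limited background, integer fringe numbers, an arbitrary window radius of 8 bins), not the authors' implementation:

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N] / N
a = 100 + 30 * np.cos(2 * np.pi * x) + 20 * np.cos(2 * np.pi * y)  # background
f0x, f0y = 15, 25                                 # tilt carrier, 15 x 25 fringes
i = a + 40 * np.cos(2 * np.pi * (f0x * x + f0y * y))

F = np.fft.fft2(i)
f = np.fft.fftfreq(N) * N
KX, KY = np.meshgrid(f, f)                        # KY varies along axis 0, like fft2 rows

# Mirror of the spectrum with respect to the fx axis: F_mirror(fx, fy) = F(fx, -fy)
iy = (-np.arange(N)) % N
F_mirror = F[iy, :]

# Replace data inside a window around each carrier sidelobe by the mirrored data
r = 8                                             # window radius in frequency bins
win = ((KX - f0x)**2 + (KY - f0y)**2 <= r**2) | ((KX + f0x)**2 + (KY + f0y)**2 <= r**2)
background = np.real(np.fft.ifft2(np.where(win, F_mirror, F)))

residual = np.std(background - a)
print(residual < 1e-6)
```

Because both sidelobes are replaced symmetrically, the modified spectrum keeps its conjugate symmetry and the inverse transform is real, as the method requires.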
Techniques that potentially provide a better approximation of the true background of fringe patterns, with or without a fringe carrier, are phase shifting techniques. They consist of recording interferograms with several phase shifts α_n between them. The first one is simply based on the addition of two fringe patterns with a π shift between them. Eq. 1 shows that this provides twice the background intensity map if the phase shift is strictly equal to π. A simple calculation shows that when the actual phase shift is equal to π ± ε (ε << 2π), the extracted background is corrupted by low-amplitude fringes in quadrature with the original fringes:
i(x,y) + i_{π±ε}(x,y) ≈ 2a(x,y) ± b·ε·sin[φ(x,y) + 2π(f_0x·x + f_0y·y)]     (3)
An accurate phase shift is thus needed. A similar method is to sum a large number of interferograms with random phase shifts in the ±π range [8]. The background intensity map can as well be extracted for all standard phase shifting algorithms by using suitable formulas. Actually, when phase
shifting measurements can be performed, there is no real need to extract the background intensity map, as the fringe phase and contrast can be obtained directly, irrespective of the fringe background variations. Extraction of the background intensity is mainly useful for single interferogram analysis techniques. A case where most of the methods described above fail is that of a fringe pattern without a fringe carrier and with a background intensity map having large, high-frequency and non-symmetrical variations.
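The error term of Eq. 3 can be checked numerically: with an exact π shift the two-frame sum gives 2a(x,y), while a shift error ε leaves a quadrature residual of amplitude close to b·ε (parameter values below are ours, for illustration):

```python
import numpy as np

N = 256
x = np.linspace(0.0, 1.0, N, endpoint=False)
a, b = 100.0, 40.0
phi = 2 * np.pi * 10 * x                      # total phase of a 10-fringe pattern

def summed(eps):
    i1 = a + b * np.cos(phi)
    i2 = a + b * np.cos(phi + np.pi + eps)    # imperfect pi shift
    return i1 + i2                            # ~ 2a + b*eps*sin(phi), per Eq. 3

exact = summed(0.0)
off = summed(0.05)
print(np.max(np.abs(exact - 2 * a)))          # ~0: a perfect shift returns 2a
print(np.max(np.abs(off - 2 * a)))            # ~b*eps = 2: residual quadrature fringes
```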
4 Interferogram background extraction by time-averaged interferometry
In this work we propose an alternative method to extract the background intensity distribution that requires neither an accurate phase shifting device, nor calibration, nor computation. It consists of generating a sinusoidal variation of the optical path difference at a frequency much higher than the inverse of the acquisition time. It can be shown that for a two-beam interferometer the resulting time-averaged interferogram can be described by the following equation:
i_n(x,y) = a(x,y) + b(x,y)·J_0(4πa/λ)·cos[φ(x,y) + 2π(f_0nx·x + f_0ny·y) + α_n]     (4)
where a is the vibration amplitude, λ the mean detected wavelength and J_0 is the Bessel function of the first kind of zero integer order (Fig. 2).
Fig. 2. Bessel function J0(x)
Fig. 3. Piezoelectric vibrating system
Table 1. First eight zeros of the Bessel function J_0 and corresponding values of dJ_0/dx, dJ_0/da and of the vibration amplitude a for λ = 0.6 µm.
Root order        1        2        3        4        5        6        7        8
Value           2.4048   5.5200   8.6537  11.7915  14.9309  18.0710  21.2116  24.3524
dJ0/dx         -0.5175   0.3398  -0.2712   0.2323  -0.2064   0.1876  -0.1732   0.1616
dJ0/da (%/nm)    1.084    0.712    0.568    0.487    0.432    0.393    0.363    0.338
a (nm)           114.8    263.6    413.2    563.0    712.9    862.8   1012.8   1162.7
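The first column pairs of Table 1 can be reproduced with numpy only: J_0 is evaluated through its integral representation (no special-function library assumed), its zeros are bracketed and bisected, and each zero x is converted into the vibration amplitude a = x·λ/4π for λ = 0.6 µm:

```python
import numpy as np

def j0(x):
    # Integral representation J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt,
    # evaluated with the midpoint rule
    t = (np.arange(4000) + 0.5) * np.pi / 4000
    return np.mean(np.cos(x * np.sin(t)))

def bisect_root(f, lo, hi, iters=60):
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

lam_nm = 600.0                       # lambda = 0.6 um as in Table 1
for lo, hi in [(2, 3), (5, 6), (8, 9)]:
    x = bisect_root(j0, lo, hi)
    a_nm = x * lam_nm / (4 * np.pi)  # amplitude for which 4*pi*a/lambda hits a zero
    print(round(x, 4), round(a_nm, 1))
```

The printed triples match the first three columns of the table (zeros near 2.4048, 5.5200, 8.6537 and amplitudes near 114.8, 263.6, 413.2 nm).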
When the vibration amplitude a is adjusted such that 4πa/λ corresponds to a zero of J_0 (Table 1), the second term of Eq. 4 cancels out and the time-averaged interferogram becomes simply equal to the true background intensity distribution. It is obvious from the shape of the J_0 function and from the values of the derivative dJ_0/da for the values of a corresponding to the first 8 roots of J_0 (see Table 1) that the choice of a high zero order is preferable to minimize the error related to an incorrect adjustment. However, beyond the third zero of J_0 there is only a slight improvement. Let us emphasize that this technique can theoretically be applied to any fringe pattern recorded by a two-beam interferometer and any background intensity distribution. This is of course true only if the vibration amplitude is homogeneous, i.e. when the frequency does not correspond to a resonance of a part of the vibrating surface. It can as well be adapted to other interferometric techniques that allow time-averaged interferometry, like electronic speckle pattern interferometry and holographic interferometry. Then the vibration amplitudes must be adjusted to the zeros of the corresponding fringe contrast modulation function. Some experiments were performed with an interference microscope to validate this time-averaged interferometry method. Experimental conditions and test samples were selected to provide fringe patterns with high spatial frequency non-uniformities. Test samples were vibrated with a simple piezoelectric translator powered with an alternating voltage (Fig. 3). The vibration amplitude was adjusted in each case to minimize the fringe contrast visually. This background estimation method is particularly well suited to interference microscopy measurements because a homogeneous vibration amplitude could be obtained in most cases. Fig. 4a shows an interferogram recorded on a tilted silicon nitride flat membrane fabricated by KOH bulk micromachining of a silicon wafer.
A close look at this interferogram shows that it is entirely corrupted by quasi-horizontal parasitic fringes with a vertical spatial frequency relatively close to that of the real fringe carrier. These fringes are related to unwanted
Fig. 4. Background measurement by time-averaged interferometry on a tilted transparent silicon nitride membrane. a) Static interferogram recorded with a Michelson X5 objective. b) Measured background intensity distribution. Interferogram size: 1.5 mm × 1.5 mm
interferences in the optical set-up when a highly coherent light source is used. Fig. 4b is the background intensity image obtained by vibrating the sample at 1 kHz with an amplitude corresponding to the 3rd root of the Bessel function J_0(4πa/λ) (see Table 1). It demonstrates that, as expected, the contrast of the main interference fringes could be fully cancelled while the parasitic fringes are kept intact in the background image. The results of another measurement performed on the same silicon nitride transparent membrane, but with a larger field of view, are shown in Fig. 5a. For this measurement the sample was put on a rough surface to get a background with an inhomogeneous reflectivity. Figs. 5b and 5c display the interferograms recorded for vibration amplitudes adjusted visually to values respectively lower than and equal to the 3rd root of the Bessel function J_0(4πa/λ). In that case, the fringe contrast on the surrounding frame and on the membrane could not be cancelled simultaneously. Nevertheless, the high spatial variations of the background intensity are correctly retrieved.
Fig. 5. Background measurement by time-averaged interferometry on a transparent silicon nitride membrane over a rough surface. a) Static interferogram recorded with a Michelson X5 objective. b) and c) Interferograms recorded on the sample vibrated at 500 Hz with a vibration amplitude lower than and approximately equal to the 3rd root of J_0(4πa/λ).
5 Conclusion
Starting from a critical analysis of the main existing techniques for fringe pattern background evaluation, we proposed in this paper an alternative technique based on the cancellation of the fringe contrast by vibrating the whole sample surface. This technique can be applied whatever the fringe pattern content and background spatial frequencies, and does not require any computation. Its accuracy is limited by the need to adjust the vibration amplitude precisely over the whole measurement field. Experiments in progress have shown that some simple on-the-fly image processing can be performed to improve the accuracy of this adjustment.
6 References
[1] Servin, M, Rodriguez-Vera, R, Malacara, D (1995) Noisy fringe pattern demodulation by an iterative phase locked loop. Opt. and Lasers in Eng. 23: 355-365
[2] Gdeisat, M.A, Burton, D.R, Lalor, M.J (2000) Real-time fringe pattern demodulation with a second-order digital phase-locked loop. Appl. Opt. 39(29): 5326-5336
[3] Quiroga, J.A, Servin, M, Marroquin, J.L, Gomez-Pedrero, J.A (2003) An isotropic n-dimensional quadrature transform and its application in fringe pattern processing. Proc. SPIE 5144: 259-267
[4] Larkin, K.G, Bone, D.J, Oldfield, M.A (2001) Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform. J. Opt. Soc. Am. A 18(8): 1862-1870
[5] Servin, M, Marroquin, J.L, Cuevas, F.J (1997) Demodulation of a single interferogram by use of a two-dimensional regularized phase-tracking technique. Appl. Opt. 36(19): 4540-4548
[6] Legarda-Sáenz, R, Osten, W, Jüptner, W (2002) Improvement of the regularized phase tracking technique for the processing of nonnormalized fringe patterns. Appl. Opt. 41(26): 5519-5526
[7] Roddier, C, Roddier, F (1987) Interferogram analysis using Fourier transform techniques. Appl. Opt. 26(9): 1668-1673
[8] Baldi, A, Bertolino, F (2001) On the application of the 2D fast Fourier transform to the surface reconstruction by optical profilometers. Proc. XII ADM Int. Conf., Rimini, Italy: B1:9-16
[9] Lovric, D, Vucic, Z, Gladic, J, Demoli, N, Mitrovic, S, Milas, M (2003) Refined Fourier-transform method of analysis of two-dimensional digitized interferograms. Appl. Opt. 42(8): 1477-1484
[10] Petitgrand, S, Yahiaoui, R, Bosseboeuf, A, Danaie, K (2001) Quantitative time-averaged microscopic interferometry for micromechanical device vibration mode characterization. Proc. SPIE 4400: 51-60
Deformed surfaces in holographic interferometry. Similar aspects concerning nonspherical gravitational fields
Walter Schumann
Zurich, Switzerland
1 Derivatives of the optical path difference, strain, rotation, changes of curvature, fringe and visibility vectors
The basic expression in holographic interferometry for a small surface deformation is the optical path difference D = u·(k − h) = λQ. Here u is the displacement, h and k are unit vectors along the incident and reflected rays, λ is the wavelength and Q is the fringe order. In the case of a large deformation, when using two modified holograms [1], the exact expression becomes D = (λ/2π)(φ − φ′) + (L − L′), where L, L′ denote the distances from the image points P, P′ to a point K of fringe localisation. The phases at the image points P, P′ are φ = (2π/λ)(L_T + L_S + p + q + q_T + p̃ + q̃ + q̃_T) + π − ψ, φ′ = ... + Δψ, with the distances L_T, L_S, p, q, q_T, p̃, q̃, q̃_T, ... (see the figure), so that we obtain
D = L_S − L′_S + (L − p) − (L′ − p′) + (p − q) − (p′ − q′) + q − q′ + λΔψ/2π.     (1)
Many authors [2],... have studied the recovering of the fringes. In digital holography [3] the modification must be simulated by the computer. The contrast of the fringes depends on the smallness of the derivative of D.
[Figure: geometry of recording, modification and reconstruction, showing the laser, the holograms, the camera, the undeformed and deformed surfaces, the image points P, P′, the distances L_T, L_S, p, q, q_T, p̃, q̃, q̃_T, and the centre K of fringe localisation]
The fringe spacing leads incidentally to the strains. Thus the differential dD = dL_S − dL′_S + d(L − p) − ... − dq + dq′ is primary. In particular we have, with the normal projector N = I − n⊗n, dL_S = dr·∇L_S = dr·Nh, .... In the following we use the rules v·(a⊗b) = (v·a)b, (a⊗b)·w = a(b·w) for any dyadic. The 2D-operator ∇ = a^α ∂/∂θ^α (α summed from 1 to 2) is the projected 3D-operator. Here θ^1, θ^2 are coordinates, a_α = ∂r/∂θ^α are the base vectors a_1, a_2, a^α·a_β = δ^α_β, a_α·a_β = a_αβ, and N = a_αβ a^α⊗a^β = a^α⊗a_α. We get so
dD = dr′·N′(k′ − h′) − dr·N(k − h) + dr̂′·N̂′(k′ − c′) − dr̂·N̂(k − c) − dρ·K∇u·(k − k′).
The deformations read N′dr′ = FNdr, N̂′dr̂′ = F̂N̂dr̂, .... Only the semi-projection FN = N + [(∇⊗u)^T]N of the 3D-deformation gradient F = I + (∇⊗u)^T intervenes here. The polar decomposition is F = QU, with the (orthogonal) rotation Q (Q^TQ = I) and the symmetric dilatation U, defined by the Cauchy-Green tensor F^TF = UU. At the surface the decomposition becomes, with a rotation Q_n (n′ = Q_n n = Q_i n) and the in-plane dilatation V (NF^TFN = VV),
FN = Q_n V = Q_i Q_p V.     (2)
For small values, with a strain tensor γ, an inclination vector ω, a pivot rotation scalar Ω and the 2D-permutation E = ε_αβ a^α⊗a^β (ε_11 = 0, ε_12 = −ε_21 = 1, ε_22 = 0), the decomposition is FN ≈ N + γ + ΩE + n⊗ω, Q_i ≈ N + n⊗ω − ω⊗n, Q_p ≈ N + ΩE, V ≈ N + γ, EE = −N. We write also
∇⊗n = −B,  ∇⊗N = [B⊗n]^T + n⊗B.     (3,4)
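The polar decomposition F = QU invoked above can be computed numerically from a singular value decomposition. This small numpy sketch is illustrative only and not part of the original derivation; the sample deformation gradient is an arbitrary choice of ours:

```python
import numpy as np

def polar(F):
    # Polar decomposition F = Q U: Q orthogonal rotation, U symmetric dilatation
    W, s, Vt = np.linalg.svd(F)
    Q = W @ Vt                      # closest orthogonal matrix to F
    U = Vt.T @ np.diag(s) @ Vt      # symmetric positive part, satisfying U U = F^T F
    return Q, U

# Deformation gradient F = I + (grad u)^T for a small shear plus stretch
F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.98, 0.00],
              [0.02, 0.00, 1.01]])
Q, U = polar(F)
print(np.allclose(Q @ U, F), np.allclose(Q.T @ Q, np.eye(3)), np.allclose(U, U.T))
```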
The tensor B = B_αβ a^α⊗a^β = (1/r_1)e_1⊗e_1 + (1/r_2)e_2⊗e_2 describes the exterior curvature of a surface with principal values 1/r_1, 1/r_2. Eqs. 3,4 correspond to the Frenet relations dn/ds = −e/r, de/ds = n/r in the case of a plane curve. The open bracket ]^T in Eq. 4 indicates a transposition of the last two factors in the triadic, so that [B⊗n]^T = B_αβ a^α⊗n⊗a^β. At an isotropic, elastic surface we have γ = (W − ν_0 EWE)/E_0, with coefficients ν_0, E_0, the stress tensor W and the involution E(...)E. The image point P̃ is defined by dθ_P = 0 (i.e. Eq. 6) of the phase θ_P = 2π(p + q + p̃ + q̃)/λ for the rays of the aperture. We then get, with V′ = Q′_n′ ... Q_n V,
dD = dr′·N′[(k′ − h′) − Q_n V^(−1)(k − h)] − dρ·K∇u·(k − k′),     (5)
N̂[V̂Q̂_n^T(k − c) + (k̂ − ĉ)] = 0.     (6)
Next, the equation of a geodesic curve, relative to the arc s, can be written Nd²r/ds² = 0, because the osculating plane contains the unit normal n. However, for any curve and its image we find
Nd²r = V^(−1)[Q_n^T N′d²r′ − (dr·D_V dr)],     (7)
D_V = [(∇⊗V)·N]N − [(∇⊗Q_n)·V]N′Q_n.     (8)
The sign ·N marks a projection of the middle factor in a triadic. Using the integrability ∇·(EF^T) = 0, we could eliminate the rotation and would obtain an expression for D_V in terms of [(∇⊗V)·N]N, ∇·(EVE), V^(−1) and the involution E⊗EV. Finally, if we apply the formal relation dr′·(∇′⊗n′) = dr·(∇⊗n′) with dr′ = Q_nVdr, we obtain the change of surface curvature by deformation
B′ = −Q_n V^(−1)(∇⊗n′)N′ = Q_n V^(−1)[BQ_n^T − (∇⊗Q_n)·n]N′.     (9)
Consider now θ_R = 2π(ℓ + q + ℓ̃ + q̃)/λ. The relation dθ_R = 0 gives also Eq. 6. Therefore we find from neighboring rays d²(ℓ + q + ℓ̃ + q̃) = 0 and d²q = (N̂d²r̂)·c + dr̂·[B̂(n̂·c) − N̂ĈN̂/q]dr̂ + ..., where for N̂d²r̂ we apply Eq. 7, so that the total term N̂d²r̂(...) cancels because of Eq. 6. We use also the affine connection dr̂ = −ℓM̂^Tdk with the oblique projector M̂ = I − n̂⊗k/n̂·k. Resolving dr̂, N̂dk, d²ℓ, ..., we get a transformation dk̃ = Tdk, where T is built from M̂, B̂(n̂·(k − c)), N̂ĈN̂/q and Q_nV^(−1)[B(n·(k − c)) − NCN/q − D_V·V^(−1)(k − c) − K̄/ℓ]V^(−1)Q_n^TM̂^T; Ĉ denotes the curvature tensor of the converging nonspherical wavefront at Ĥ. The inverses of the distances ℓ_1, ℓ_2 to the focal lines of the astigmatic interval R̃ (the origin at recording of the camera centre R at the reconstruction) are the eigenvalues of T. The ray aberration reads Kdr − Kdr̂ = pdk; therefore the bridge ℓdk̃ = ℓKV̂^(−1)Q̂_n^TM̂^Tdk gives the virtual deformation
Kdr̃ = G(Kdr),  G = ℓ(p̃T + K)Q̂_nV̂^(−1)M̂^T/(ℓ + p̃).     (10,11)
If the surface areas projected by the aperture overlap sufficiently, we should have k − k′ ≈ −K∇u_S/L′, where the superposition vector f_S = K∇u_S should be small because of the correlation. To apply Eq. 5, we use dr′ = dr′_K′ + M̃′(ℓ′ + p′ − L′)dk′_R′ with M̃′ = I − n′⊗k′/n′·k′, and dk′_R′ = m′dβ′ with a unit vector m′ and an angle dβ′. We write now dD/dβ′ = m′·f′_R′ or dD/dβ′ = m′·f′_K. The fringe vector f′_R′ (fringe spacing [4]) and the visibility vector f′_K (distance of the homologous rays and contrast [5]) are
f′_R′ = (ℓ′ + p′ − L′)G̃′^TM̃′[k′ − h′ − Q_nV^(−1)(k − h)] − K′f_S(ℓ′ + p′ − L′)/L′,     (12)
f′_K = L′G̃′^TM̃′[k′ − h′ − Q_nV^(−1)(k − h)].     (13)
2 Aspects of deformation for spherical and nonspherical gravitational fields, gravitational lens, rotating bodies
This section is only indirectly related to the previous subject. An extension should illustrate Eqs. 2,3,4,7,8,9 and focus on the problem of general gravitational fields. For B ≡ 0, Eq. 9 gives the curvature B′ of a surface Σ² in E³ as a deformed part of E². For a hypersurface Σ^k in E^n, n > k, this leads to the Ricci tensor R. We recall incidentally the components R_αβ = Γ^λ_αλ,β − Γ^λ_αβ,λ + Γ^μ_αλΓ^λ_μβ − Γ^μ_αβΓ^λ_μλ, showing the Christoffel symbols Γ^λ_αβ = a^λμ(a_μα,β + a_μβ,α − a_αβ,μ)/2. But the projector N′ = a_αβ a^α⊗a^β = I − n′_i⊗n′_i (α, β from 1 to k or from 0 to k−1; i
from 1 to n−k) implies both the "metric tensor" a_αβ and the exterior orthogonal unit vectors n′_i. If we use these vectors, it can be seen that the Riemann-Christoffel tensor is
R_T = N′N′[∇⊗n′_i ⊗ ∇⊗n′_i − N′∇⊗n′_i ⊗ (∇⊗n′_i N′)^T]N′N′ = B′_i⊗B′_i − [B′_i⊗B′_i]]^T,     (14)
according to Eq. 4 and B′_i = −Q_nV^(−1)(∇⊗n′_i)N′ (Eq. 9). The bracket ]]^T indicates a transposition of the factors 2 and 4. The Ricci tensor is the contraction of R, thus alternatively R = B′_iB′_i − B′_i(B′_i··N′). For a spherical gravitational field first, one uses the Schwarzschild radius 2M̄ = 2GM/c², with the constant of gravitation G, the mass M and the velocity of light c, as well as polar coordinates r, θ, φ and the radius a of the central body. We define an angle ψ by sin²ψ = 2M̄/r, where 2M̄ = const for r > a and 2M̄ = κ∫₀^r ρ(r̂)r̂² dr̂ for r ≤ a, with κ = 8πG/c² and the "density" ρ. The fundamental form [6] is, by means of r²dθ² + r²sin²θ dφ² = dr·K_n dr,
ds² = −(cos²ψ/Y²)c²dt² + (cos²ψ)^(−1)dr² + dr·K_n dr,     (15)
where Y = 1 for r > a. The projector K_n = N_k = I − k⊗k refers to the radial unit vector k(θ,φ). The space part ds′² = dr·(k⊗k/cos²ψ + K_n)dr = dr·VVdr gives V^(−1) = cosψ k⊗k + K_n. With r′ = rk + wn and dw/dr = w,_r we obtain a deformation gradient FN = (k + w,_r n)⊗k + K_n and ds′² = dr·F^TFdr = dr·[(1 + w,²_r)k⊗k + K_n]dr, so that cos²ψ = 1/(1 + w,²_r). Eq. 2 becomes Q_nN = FV^(−1) = k′⊗k + K_n with k′ = k cosψ + n sinψ, n′ = −k sinψ + n cosψ. Using the key relation (sinψ),_r = −ζ sinψ/2r, where ζ = 1 − κρr³/2M̄, we find ∇⊗n′ = −(k⊗k′)ζ tanψ/2r − K_n sinψ/r and the 3D-curvatures
B′ = −Q_nV^(−1)(∇⊗n′)N′ = (sinψ/r)[(ζ/2)k′⊗k′ + K_n] = (1/r_1)k′⊗k′ + (1/r_2)K_n,     (16)
R_3D = B′B′ − B′(B′··N′) = −(sin²ψ/r²)[ζ k′⊗k′ + (ζ/2 + 1)K_n]
= −(2/r_1r_2)k′⊗k′ − (1/r_1r_2 + 1/r_2r_2)K_n,     (17)
as well as the known vase-like surface [7]. Second, as for the time-radial terms in Eq. 15, we introduce a vector r′ = (2M̄ cosψ/Y)k + wn. Defining an angle χ by sinχ = 2M̄(cosψ/Y),_r cosψ and w,_r = cosχ/cosψ, we get an inclination k′ = k sinχ + n cosχ, n′ = −k cosχ + n sinχ and the curvatures
B′ = −Q_nV^(−1)(∇⊗n′)N′ = (K̄/r)h⊗h + (r/K̄)[Y(sinχ),_r/2M̄]k′⊗k′ = h⊗h/r_0 + k′⊗k′/r_1,     (18)
R_2D = B′B′ − B′(B′··N′) = −[Y(sinχ),_r/2M̄](h⊗h + k′⊗k′) = −(1/r_0r_1)(h⊗h + k′⊗k′).     (19)
Note that K̄ = Yr cosχ/2M̄cosψ is not relevant in Eq. 19 and that both meridians have the same arc s′. Third, the field equation and its inverse are, with R̄ = R_4D, the relation N′··N′ = 4 and the energy-impulse tensor T,
R̄ − (1/2)(R̄··N′)N′ = κT,  R̄ = κ[T − (1/2)(T··N′)N′],     (20,21)
implying also (∇·T)·N′ = 0. The principal components of T in this static case are T_00 = ρ and T_11 = T_22 = T_33 = p, where p(r) is the "pressure". We now replace the curvatures of Eqs. 15,17, on a meridian-stripe Σ⁴ in E⁶, by
B′_1 = (1/r_0)h⊗h + (1/r_1)k′⊗k′ + (1/r_2)K_n, with 1/r_0 = ω̄ sinψ/2r,
and by B′_2 = (1/r̄_0)h⊗h + (1/r̄_1)k′⊗k′, where the relations cosβ/r_0 = sinβ/r̄_0, cosβ/r_1 = −sinβ/r̄_1 must hold; β and π − β are the angles of n′_1, n′_2 with respect to n′. We can also use the definition of ω by κp = (sin²ψ/r²)(ω − 1) besides κρ = (sin²ψ/r²)(1 − ζ). The 4D-Ricci tensor becomes, with Eq. 21, tan²β = [(2 − ζ)/ω − 1](2/ζ − 1) and by composition (i summed from 1 to 2)
R_4D = B′_iB′_i − B′_i(B′_i··N′) = (κ/2)[(ρ + 3p)h⊗h + (ρ − p)(k′⊗k′ + K_n)]
= (sin²ψ/2r²)[(3ω − ζ − 2)h⊗h + (2 − ζ − ω)(k′⊗k′ + K_n)]
= (1/r_0r_1 + 1/r̄_0r̄_1 + 2/r_0r_2)h⊗h + (1/r_0r_1 + 1/r̄_0r̄_1 + 2/r_1r_2)k′⊗k′ + (1/r_0r_2 + 1/r_1r_2 + 1/r_2r_2)K_n.     (22)
As we have 2/r_0r_2 = a^00Γ^1_00(Γ^2_12 + Γ^3_13) = a^00a^11a_00,1/r = Y sinχ/M̄r, the comparison with 1/r_0 = ω̄ sinψ/2r shows that
ω̄ = Y r sinχ/M̄ sin²ψ = Y(cosψ/Y),_r (2r cosψ/sin²ψ).     (23)
Eq. 22 is now compatible with Eqs. 17,19 and also with all the component equations of R_αβ if we have the connection
1/r_0r_1 + 1/r̄_0r̄_1 = 1/r_0r̄_1 + 1/r̄_0r_1.     (24)
Thus Eq. 22 appears as an "intrinsic" form, expressed by the parts 17,19 alone. Further, the combination 3R_11 − R_00 eliminates ω, so that we obtain, in the case where ρ(r) is a given function, a linear differential equation for 1/Y:
d[(cosψ/Y),_r cosψ/r]/dr = (1/Y) d(sin²ψ/2r²)/dr.     (25)
The following two special cases should be noted: a) r > a, ρ = 0, ζ = 1, ω = Y = 1, sinχ = sin⁴ψ/2 (Schwarzschild solution); b) r ≤ a, ρ = ρ_0, ζ = −2, ω = Y = 2cosψ/(3cosψ_a − cosψ) [8], sinχ = (a³/r³)sin⁴ψ/2, p = ρ_0(cosψ − cosψ_a)/(3cosψ_a − cosψ) (TOV equation). However, in general ρ(r) must satisfy an equation of state.
In the case r_2 ≠ r_3 of a nonspherical gravitational field we take k normal to the surfaces of constant potential U in the flat space. Here we have the equation U,_ss + 2U,_s/r̄ = κρ/2, with a normal arc s and the mean curvature 1/r̄ = 1/2r_2 + 1/2r_3. We define ψ by sin²ψ = 2(M̄_P U)^(1/2), with the key relations (sinψ),_s = −Ūζ sinψ/2r̄ and ζ = 1 − r̄(M̄_P),_s/2M̄_P − κρr̄/4U,_s. In the exterior, ρ = 0, we choose ζ = 1 and M̄_P = M̄P; thus we get the equation Ū = 1 − r̄(ln P),_s/2 for P(s). In the interior, however, it is convenient to define q(s) by U,_s = −M̄Pq² (spherical case: U,_s = −M̄/r²). We obtain sin²ψ = −2U,_s/Pq and the equation Ūζ = 1 − 2r̄(Pq),_s/Pq − κρr̄/2U,_s for Pq(s).
Further, at any point the vector k differs by an angle α from another unit vector k*, which must be determined by the conditions of vanishing mixed terms. The inclination ψ will then be between k* and k′. The unit normal reads n′ = −[k cosα + (t cosγ + u sinγ)sinα]sinψ + n cosψ, where t and u denote orthogonal unit vectors and where γ is a second angle for these principal directions. In the case of a rotational symmetry we have γ ≡ 0 and
n′,ψ = k′⊗(n,α + t cosα) + n⊗(k sinα + t) sinψ. The condition t·n′,ψ = 0 gives sinα = t·∇ln(PU,s) r̄/2Û, with Û cosα = U. The 3D-tensor

B′ = Q,ψ σ^(−1)(n⊗n′) = N′ + (1/r₁)k′⊗k′ + B_k sinψ

is now symmetric and contains the 2D-curvature tensor B_k − k⊗k. The two factors Û and P depend on α; thus an iteration must be applied. The generalization for the angle χ is

sinχ = (2Mr̄/Uq)(cosψ/Y),s cosψ, 1/r₀ = Uω sinψ/2r, 1/r₁ = Uζ sinψ/2r, Y = (sinχ),s q/2Mr̄.

On use of r̄₀ = r₀/tanβ, as well as the 4D-curvatures B′₁ = [(1/r₀)h⊗h + (1/r₁)k′⊗k′ + B_k sinψ]/2, B′₃ = …, and the 2D-curvatures B′₂ = [(1/r₀)h⊗h − (1/r₁)k′⊗k′]/2, B′₄ = …, we then write

B′₁⊗B′₁ − B′₁(B′₁·N′) = (1/2r₀r₁ + 1/r₀r)h⊗h + (1/2r₀r₁ + 1/r₁r)k′⊗k′ +
(sin²ψ/2)[U(ω + ζ)B_k/2r̄ + (1/r₂r₃)K_n], (26)

B′₃⊗B′₃ − B′₃(B′₃·N′) = (1/2r₀r₁ + 1/r₀r)h⊗h + (1/2r₀r₁ + 1/r₁r)k′⊗k′ + (sin²ψ/2)[U(ω + ζ)εB_kε/2r̄ + (1/r₂r₃)K_n], (27)

B′₂⊗B′₂ − B′₂(B′₂·N′) = (1/2r₀r₁)(h⊗h − k′⊗k′), B′₄⊗B′₄ − B′₄(B′₄·N′) = … (28,29)

Here ε is the 2D-permutation tensor, 1/r₂r₃ = B_k·(εB_kε)/2 = K the Gauss curvature, and 1/r̄ = 1/2r₂ + 1/2r₃, 1/r = sinψ/r̄. Adding the four Eqs. (26)-(29) we obtain, with 1/r₂ = sinψ/r̄₂, 1/r₃ = sinψ/r̄₃ and B_k·εB_kε = 2K_n/r̄, the 4D-Ricci tensor (i summed from 1 to 4):

Σᵢ [B′ᵢ⊗B′ᵢ − B′ᵢ(B′ᵢ·N′)] = κ[(ρ + 3p)h⊗h + (ρ − p)(k′⊗k′ + K_n)]
= sin²ψ (U/2r̄r)[(3ω + ζ − 2)h⊗h + (ω + ζ − 2)k′⊗k′]
+ sin²ψ[(U/2r̄r)(ω + ζ) − K]K_n + (1/r₀r + 1/r₁r + 1/r₂r₃)K_n
+ (1/r₀r₁ + 1/r̄₀r̄₁ + 2/r₀r)h⊗h + (1/r₀r₁ + 1/r̄₀r̄₁ + 2/r₁r)k′⊗k′. (30)

With U/r̄r = K this gives κp = K sin²ψ (ω − 1), κρ = K sin²ψ (1 + ζ), (31-33)
if tan 2β = [(2 + ζ)/ω − 1] 2r̄/r Uζ − 1. Similar to Eqs. (23)-(25) we have

ω = (2cosψ/K r̄ sin²ψ) Y ∂(cosψ/Y)/∂s, 1/r₀r₁ = 1/r̄₀r̄₁ + 1/r̂₀r̂₁, and

(∂/∂s + 1/r̄)[cosψ ∂(cosψ/Y)/∂s] = (1/Y)[∂/2r̄∂s + K] sin²ψ,

∂(Pq)/∂s = (K r̄ cosα − 2/r̄ + κU/2U,s)Pq + κU r̄ cosα/2U,s for Pq(s).

As for the general gravitational lens, outside with ζ = 1, ω = Y = 1, we use the equation of a geodesic curve N′·d²r′/ds′² = 0. A type of Eqs. (7), (8) then gives the corresponding backwards deformation into the flat space. With an auxiliary sphere of radius p̂ = r̂/sinψ̂, approximating the hypersurface at ψ̂ = ψ, and with r̄₁ and r̂₁ = r₁/tanβ, we write, similar to Eqs. (26)-(29), four parts

N₁d²r = k sin²ψ [U(dr̂·h)² + U dr̂²/cos²ψ + 2r̄ dr·B_k·dr]/4r̄, (34,35)
N₂d²r (N₃d²r) = k sin²ψ [U(dr̂·h)² + U dr̂²/cos²ψ − 2r̄ dr·εB_kε·dr]/4r̄,
N₄d²r = k sin²ψ tanβ [U(dr̂·h)² − U dr̂²/cos²ψ]/4r̄. (36,37)

The vector κ d²r₄D(σ′) = (N₁d²r + N₃d²r)cosβ + (N₂d²r + N₄d²r)sinβ gives the image relation κ d²r₄D(σ′) = k sin²ψ K (dσ′² + 3dr·K_n·dr)/2, with K_n = κ − k⊗k, dr̂ = dr/2 and dr̂·h = ic dt cosψ/2. For a 4D-null geodesic or light ray, where dσ′² = k dϑ², k → 0 holds, we then obtain simply (real when K > 0, but imaginary when K < 0)

κ d²r₄D(ϑ) = k sin²ψ K (3dr·K_n·dr)/2. (38)
The surrounding field of a rotating star, for instance, is nonspherical. In the rotating system there, we may write for the scalar of the inertial force V = −(Ω²/2c²) r·K₀·r, where K₀ = κ − k₀⊗k₀ is the projector for the equatorial plane and Ω denotes the angular velocity. The gravitational potential reads U = GM/rc² = M/r, and the gradient of the sum is ∇(U + V) = −M(r − F K₀·r)/r³ with F = Ω²r³/Mc². The normal of U + V = const. is k, and B_k = ∇⊗k = (r − F K₀·r)/W, where W² = (k₀·r)² + (1 − F)² r·K₀·r and

B_k = [K_n − F K_n K₀ − k⊗∇F·K₀·r − k⊗∇(ln W)(r − F K₀·r)]/W.
In the equatorial plane we have k·∇F = 0, k·k₀ = 0, W = (1 − F)r and α = 0. We thus get the curvatures

1/r̄ = (2 − F)/(1 − F)²r, K = 1/(1 − F)r²,

and k·∇(U + V) = WM/r³ = U,r + V,r, sin²ψ = 2(M̄P)^{1/2}(U,r + V,r). The key relations (sinψ),r = U sinψ/2r̄ and U = (2 − F)/(2 + F) + r(ln P),r/2 lead finally to

d(ln P)/dr = (4 − F)(2 − F)^{−1}(1 − F)^{−1} F/r, P = (1 − F/2)^{2/3}(1 − F)^{−1}, (39)

dσ′² = (cos²ψ)c²dt² − r²dφ² − (cos²ψ)^{−1}dr² − r²dθ², sin²ψ = (2M/r)P(1 − F) = (2M/r)(1 − F/2)^{2/3}, (40)

so that cos²ψ = 1 − (2M/r)(1 − F/2)^{2/3} > 1 − 2M/r. (41)
For small Ω we have cos²ψ ≅ 1 − 2M/r − Ω²r²/3c². A Lorentz transformation leads, with 1 − Ω²r²/c² = 1/X, to Eq. (42). In comparison, Eq. (43) shows the Kerr solution [9], see also [10], Eq. (10.58), where Δ/r² = 1 − 2M/r + a²/r²:

dσ′² = [c dt − Ωr²dφ/c]² X cos²ψ − [r dφ − Ωr dt]² X − dr²/cos²ψ − …,
dσ′² = [c dt − a dφ]² Δ/r² − [(r² + a²)dφ − ac dt]²/r² − r²dr²/Δ − r²dθ². (42,43)

This tentative approach may be extended to the interior of the rotating body if ρ = ρ₀. In the equatorial plane we have

1/r̄ = (2 − F_a)/(1 − F_a)²r, K = 1/(1 − F_a)r², sin²ψ = κρ₀(1 − F_a)r²/6, r̄/r = r̄(Pr),r/Pr.

The elimination of ζ gives the result for Pq, with Uζ = 2(U,r + V,r)Pq, and, using the condition at r = a, for cos²ψ (complex when r > a) and 1/Y:

3r d(Pq)/dr = (4 − F_a)Pq − 6…, Pq = C r^{(4−F_a)/(2−F_a)}, (44)

cos²ψ = 1 − (2Mr²/a³)(2 − F_a)^{−1} { (r/a)^{2F_a/(3−F_a)} [3(1 − F_a) …] − F_a … }, (45)

[2(1 − F_a)r/(2 − F_a)] d[(cosψ/Y),r cosψ]/dr + (cosψ/Y),r cosψ = (1/Y)[(sin²ψ),r/2 + 2 sin²ψ/(2 − F_a)r]. (46)
References
1. Champagne, E B (1974) Holographic interferometry extended. International Optical Computing Conference, Zurich, IEEE: 73-74
2. Cuche, D E (2000) Modification methods in holographic and speckle interferometry. Interferometry in Speckle Light, Springer: 109-114
3. Osten, W (2003) Active metrology by digital holography. Speckle Metrology, Proc. SPIE 4933: 96-110
4. Stetson, K A (1974) Fringe interpretation for hologram interferometry of rigid-body motions and homogeneous deformations. J. Opt. Soc. Am. 64: 1-10
5. Walles, S (1970) Visibility and localization of fringes in holographic interferometry of diffusely reflecting surfaces. Ark. Fys. 40: 299-403
6. Schwarzschild, K (1916) Über das Gravitationsfeld eines Massenpunktes. Deutsche Akademie der Wissenschaften, Kl. Math.: 196
7. Misner, C W, Thorne, K S, Wheeler, J A (1972) Gravitation. W. H. Freeman and Company, New York: 837
8. Sexl, R U, Urbantke, H K (1981) Gravitation und Kosmologie. Wissenschaftsverlag, Wien: 240-243
9. Kerr, R P (1963) Gravitational field of a spinning mass as an example of algebraically special metrics. Physical Review Letters 11(5): 237-238
10. Goenner, H (1996) Einführung in die spezielle und allgemeine Relativitätstheorie. Spektrum Akademischer Verlag, Heidelberg: 303-304
Dynamic evaluation of fringe parameters by recurrence processing algorithms Igor Gurov, Alexey Zakharov Saint Petersburg State University of Information Technologies, Mechanics and Optics Sablinskaya Street 14, 197101 Saint Petersburg, Russia
1 Introduction

Fringe processing methods are widely used in non-destructive testing and optical metrology. High accuracy, noise immunity and processing speed are very important in the practical use of systems based on fringe formation and analysis. A few fringe processing methods are in common use, such as the Fourier transform (FT) method [1] and the phase-shifting interferometry (PSI) technique (see, e.g., [2]). The FT method is based on a description of interference fringes in the frequency domain using an integral transformation and can be classified as a non-parametric method, because it does not involve a priori information about fringe parameters in an explicit form. Indeed, the Fourier transformation formula

S(f) = F{s(x)} = ∫₋∞^{+∞} s(x) exp(−j2πfx) dx (1)

is valid for any function s(x) that satisfies the integrability condition. In non-parametric methods, a priori knowledge about general fringe properties is used mainly after the calculations, to interpret the processing results. The PSI methods utilize series of fringe samples or a few fringe patterns obtained with known phase shifts between them. This means that PSI methods belong to the parametric class, since a priori information about at least the fringe phase is used in explicit form. A new approach to interference fringe processing was recently described in detail [3]. It is based on describing fringes by stochastic differential equations in a state space, involving the a priori information about fringe properties in a well-defined explicit form, which allows dynamic evaluation of interference fringe parameters. In the discrete case, difference equations lead to recurrence algorithms, in which the fringe signal is predicted one discretization step ahead using the full information available before this step, and the fringe signal prediction error is used for step-by-step dynamic correction of the fringe parameters. The developed recurrence fringe processing algorithms have been successfully applied to estimating interference fringe parameters in rough-surface profilometry [3], multilayer tissue evaluation [4], optical coherence tomography (OCT) [5] and the analysis of 2-D fringe patterns [6]. New results of applying the proposed approach to PSI were obtained recently [7]. In this paper, the general approach and the peculiarities of recurrence fringe processing algorithms are considered and discussed.
2 Fringe description based on the differential approach

The commonly used mathematical model of interference fringes is expressed as

s(x) = B(x) + A(x) cos Φ(x), (2)

where B(x) is the background component, A(x) is the fringe envelope, and Φ(x) is the fringe phase,

Φ(x) = ε + 2πf₀x + φ(x), (3)

where ε is the initial phase at the point x = 0, f₀ is the mean fringe frequency, and φ(x) describes a phase-change nonlinearity. In the determinate model of Eq. (2), the fringe background, envelope and phase nonlinearity are usually supposed to belong to an a priori known class of determinate functions that vary slowly with respect to the cosine function cos(2πf₀x). This assumption allows one to use processing algorithms applicable to high-quality fringes obtained when measuring mirror-reflecting objects. In the parametric approach, a priori knowledge about the dependence of the fringe signal on its parameters is involved in the processing algorithm in an explicit form before the calculations. The fringe signal is initially defined as dependent on its parameters, i.e.

s(x) = s(x, θ); θ = (B, A, Φ, f)ᵀ, (4)

where θ is the vector of fringe parameters in the state space {θ}. This allows taking into account more accurately the a priori knowledge about the supposed variations of fringe parameters and their dynamic evolution.

If the fringe signal background component B, amplitude A and fringe frequency f are supposed, e.g., to be constant, and the fringe phase Φ varies linearly in a given interferometric system, one can write

dθ/dx = (0, 0, 2πf, 0)ᵀ. (5)

It is evident that Eq. (5) relates to ideal monochromatic fringes. If the fringe envelope follows a Gaussian law, as is inherent in low-coherence fringes, all possible envelopes are solutions of the differential equation for the envelope

dA/dx = −A(x − x₀)/σ², (6)

where x₀ and σ are, respectively, the maximum position and the Gaussian curve width parameter. Random variations of fringe parameters can be introduced by modifying Eq. (4) as follows:

dB/dx = w_B(x), dA/dx = w_A(x), dΦ/dx = 2πf + w_Φ(x), df/dx = w_f(x), (7)

where w = (w_B, w_A, w_Φ, w_f)ᵀ is a random vector. The first-order Eqs. (7) are stochastic differential equations of the Langevin kind, which can be rewritten in the vectorial form

dθ/dx = Ψ(x, θ) + w(x), (8)

where the first term describes the determinate evolution of the fringe parameters and the second one represents their random variations. The a priori information about the evolution of the vector of parameters θ is included by appropriately selecting the vectorial function Ψ and the supposed statistical properties of the "forming" noise w(x). It is important to emphasize that Eq. (8) also covers non-stationary and non-linear processes. In the discrete case, Eq. (8) is rewritten in the form of a stochastic difference equation defining a series of discrete samples at the points x_k = kΔx, k = 1, ..., K, where Δx is the discretization step. This provides the possibility of recurrence calculation at the k-th step in the form

θ(k) = θ(k/k−1) + w(k), (9)

where θ(k/k−1) is the value predicted from the (k−1)-th step to the k-th one, taking into account the concrete properties of Eq. (8). The prediction in Eq. (9) contains an error, i.e. the difference between the a priori knowledge at the (k−1)-th step and the real information at the k-th step. This difference is available for observation only as a fringe signal error. To obtain the a posteriori information about the parameters at the k-th step, the signal error has to be transformed into a correction of the fringe parameters, namely

θ̂(k) = θ(k/k−1) + P(k){s_obs(k) − s[k, θ(k/k−1)]}, (10)

where P(k) is a vectorial function that transforms the scalar difference between the observed signal sample value s_obs(k) and the modelled (predicted) one s(k, θ) into a vectorial correction of the fringe parameters. The peculiarities of recurrence fringe processing algorithms based on the general formula Eq. (10) are considered in the following section of the paper.
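As a minimal numerical sketch, the discrete state-space model of Eqs. (2), (5), (7) and (9) can be simulated directly. All numerical values below (signal length, background, envelope, mean frequency, noise level) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

K, dx = 1024, 1.0                            # samples and discretization step (assumed)
theta = np.array([100.0, 50.0, 0.0, 0.05])   # state (B, A, Phi, f), cf. Eq. (4)
sigma_w = np.array([0.0, 0.0, 0.0, 1e-4])    # forming noise: only f drifts here

s = np.empty(K)
for k in range(K):
    B, A, Phi, f = theta
    s[k] = B + A * np.cos(Phi)                        # observation, Eq. (2)
    drift = np.array([0.0, 0.0, 2*np.pi*f*dx, 0.0])   # determinate part, Eq. (5)
    theta = theta + drift + sigma_w * rng.standard_normal(4)  # Eqs. (7), (9)
```

The random walk on f makes the local fringe frequency non-stationary, which is exactly the situation the recurrence algorithms of the next section are designed to track.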
3 Recurrence fringe processing by the extended Kalman filtering algorithm

As is known, discrete Kalman filtering is defined by an observation equation and a system equation. The observation equation describes the evolution of the signal s(k) as dependent on the signal parameters, and the system equation defines the dynamic evolution of the vector of parameters θ(k). For linear discrete Kalman filtering these equations are, respectively (see, e.g., [3]),

s(k) = C(k)θ(k) + n(k), (11)
θ(k) = F(k−1)θ(k−1) + w(k), (12)

where C(k), F(k) are known matrix functions, n(k) is the observation noise, and w(k) is considered as the system (forming) noise. In the discrete Kalman filtering algorithm (see Fig. 1), the vector of parameters at the k-th step is predicted using the estimate θ(k−1) obtained at the previous step. According to Eq. (12), the predicted estimate of the vector of parameters is calculated as F(k−1)θ(k−1). The a posteriori estimate is obtained using the recurrence equation involving the input signal sample s(k) in the form

θ(k) = F(k−1)θ(k−1) + P(k)[s(k) − C(k)F(k−1)θ(k−1)], (13)

where P(k) is the filter amplification factor. The useful component of the interferometric signal is defined by the nonlinear observation equation

s(k) = A(k) cos Φ(k) + n(k) = h(θ(k)) + n(k), (14)

and the a posteriori estimate of the vector of fringe parameters is expressed as

θ̂(k) = θ(k/k−1) + P(k)[s(k) − h(θ(k/k−1))]. (15)
Fig. 1. Scheme of linear Kalman filter
The Kalman filter amplification factor P(k) is calculated [3] in the following form involving the covariance matrix R_pr(k) of the a priori estimation error of the vector θ and the observation noise covariance matrix R_n:

P(k) = R_pr(k)Cᵀ(k)[C(k)R_pr(k)Cᵀ(k) + R_n]⁻¹, (16)

where C(k) = h′_θ(θ(k/k−1)) is obtained by local linearization of the nonlinear observation equation. The matrices R_pr(0) and R_n in Eq. (16) are evaluated a priori, taking into account the general correlation properties and dispersions of the fringe parameters and the observation noise. The covariance matrix of the a posteriori estimation error is determined as

R(k) = [I − P(k)C(k)]R_pr(k). (17)

It is clearly seen that the Kalman filtering method allows introducing a priori information about the dynamic evolution of fringe parameters, including their correlation properties, in a well-defined form.
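A compact sketch of the extended Kalman recursion of Eqs. (14)-(17), for a reduced state θ = (A, Φ, f), might look as follows. The covariance values and the synthetic test signal are assumptions chosen for illustration only, not the settings used in the paper:

```python
import numpy as np

def ekf_fringe(s_obs, dx=1.0, f_init=0.05, r_n=0.1):
    """Track fringe amplitude, phase and frequency in s(k) = A cos(Phi) + n(k)."""
    theta = np.array([1.0, 0.0, f_init])          # state (A, Phi, f)
    F = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 2*np.pi*dx],         # Phi advances by 2*pi*f*dx
                  [0.0, 0.0, 1.0]])
    R = np.diag([1.0, 1.0, 1e-2])                 # a priori error covariance (assumed)
    Q = np.diag([1e-4, 1e-4, 1e-6])               # forming-noise covariance (assumed)
    est = []
    for s in s_obs:
        theta = F @ theta                          # prediction, Eq. (12)
        Rpr = F @ R @ F.T + Q
        A, Phi, _ = theta
        C = np.array([np.cos(Phi), -A*np.sin(Phi), 0.0])  # linearization of h
        P = Rpr @ C / (C @ Rpr @ C + r_n)          # gain, Eq. (16)
        theta = theta + P * (s - A*np.cos(Phi))    # correction, Eq. (15)
        R = (np.eye(3) - np.outer(P, C)) @ Rpr     # Eq. (17)
        est.append(theta.copy())
    return np.array(est)

k = np.arange(2000)
est = ekf_fringe(2.0 * np.cos(2*np.pi*0.04*k), f_init=0.05)
```

Note that the gain is recomputed at every sample because the linearization point C(k) changes with the predicted phase; this is what distinguishes the extended filter from the linear recursion of Eq. (13).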
4 Experimental results

4.1 Dynamic recurrence processing of low-coherence interference fringes

The Kalman filtering algorithm described above has been used for processing OCT signals and recovering tomograms of multilayer tissues [4, 8]. Fig. 2 shows an example of an experimental tomogram, represented in a logarithmic inverse grey-level scale for better visibility.
Fig. 2. (a) Optical coherence tomogram recovered by evaluating envelopes of low-coherence fringes in parallel depth-scans of multilayer tissue; (b) example of fringe envelope evaluation within a single depth-scan by the Kalman filtering algorithm; (c) example of fringes with variable local frequency; and (d) unwrapped phase (in radians) of the signal in (c), recovered dynamically by the extended Kalman filter (the sample number k is indicated on the horizontal axes)
The accuracy of the method was compared with well-known analogue amplitude demodulation methods, such as signal rectification with subsequent low-pass filtering and synchronous amplitude demodulation [9]. It was found that the Kalman filtering method provides better resolution of fringe envelope variations.

4.2 Application to phase-shifting interferometry

It is well known that the PSI technique is one of the most accurate methods for measuring fringe phase. It provides high accuracy, with a phase error near 2π/1000 or less. The basic approach to fringe processing in PSI is usually a least-squares fitting of the interferometric data series and phase estimation on the condition that the fitting error is minimized. The model of Eqs. (2)-(3) is characterized by the vector of parameters θ = (B, A, Φ, f)ᵀ, taking into account that the initial phase ε can be calculated as ε = Φ(k) − 2πfkΔx. If the phase step 2πfΔx is non-stable, e.g., due to external disturbances of the optical path difference in the interferometer, this may be interpreted by an observer as fringe frequency variations, i.e. f = f(k). Thus, knowing Φ(k) and f(k), one can easily calculate the initial phase as

ε̂(k) = Φ(k) − 2π Σ_{k′=1}^{k} f(k′) Δx.
Fig. 3. (a) A priori supposed phase distribution equal to π/2; (b), (c) wavefront estimates after 5 and 10 recurrence processing steps, respectively; and (d) true tilted wavefront obtained after the 20th phase step (the vertical axes show ε in radians)
Fig. 3 shows the dynamic evolution of the initial phase estimates obtained when measuring a tilted wavefront. The number of lateral points at which fringe phases are calculated is 50×50. The phase-shift step was selected equal to 2π/100. It is seen that after approximately the 20th step the phase errors become small. It has been found [7] that the phase error becomes smaller than 2π/100 after approximately 20 processing steps, and smaller than 2π/1000 after half a fringe period has been processed. This confirms the high phase accuracy of the extended Kalman filtering algorithm.
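The initial-phase relation of Sec. 4.2, ε̂(k) = Φ(k) − 2π Σ f(k′)Δx, reduces to a cumulative sum in code. The constant-frequency test data below are assumed for illustration:

```python
import numpy as np

def initial_phase(Phi, f, dx):
    # eps_hat(k) = Phi(k) - 2*pi * sum_{k'=1}^{k} f(k') * dx
    return Phi - 2*np.pi*np.cumsum(f)*dx

dx = 1.0
k = np.arange(1, 51)
f = np.full(50, 0.01)             # stable phase step of 2*pi/100
Phi = 0.7 + 2*np.pi*0.01*k*dx     # unwrapped phase with eps = 0.7
eps = initial_phase(Phi, f, dx)   # constant 0.7 for every k
```

When the phase step is disturbed, f(k) is no longer constant, but the same cumulative sum still removes the accumulated phase and isolates ε̂(k).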
5 Discussion and conclusions

Recurrence fringe processing methods are based on the difference equations formalism, and a priori knowledge about the fringes has to be included in Eq. (16) before the calculations. This means that recurrence parametric methods are more specialized, providing advantages in accuracy, noise immunity and processing speed. At first sight, the requirement of accurate a priori knowledge looks like a restriction. However, almost the same information is needed after the calculation in conventional methods to interpret the processing results. The parametric approach allows one to use a priori knowledge in a well-defined form, including non-stationary and nonlinear fringe transformations. Thus, the parametric approach presents a flexible tool for dynamic fringe analysis and processing. The advantages of the recurrence algorithms considered consist in high noise immunity and signal processing speed.
6 References
1. Takeda, M, Ina, H, Kobayashi, S (1982) Fourier transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72: 156-160
2. Greivenkamp, J E, Bruning, J H (1992) Phase-shifting interferometry. In: Malacara, D (ed) Optical Shop Testing, Wiley, New York
3. Gurov, I, Ermolaeva, E, Zakharov, A (2004) Analysis of low-coherence interference fringes by the Kalman filtering method. J. Opt. Soc. Am. A 21: 242-251
4. Alarousu, E, Gurov, I, Hast, J, Myllylä, R, Zakharov, A (2003) Optical coherence tomography of multilayer tissue based on the dynamical stochastic fringe processing. Proc. SPIE 5149: 13-20
5. Alarousu, E, Gurov, I, Hast, J, Myllylä, R, Prykäri, T, Zakharov, A (2003) Optical coherence tomography evaluation of internal random structure of wood fiber tissue. Proc. SPIE 5132: 149-160
6. Zakharov, A, Volkov, M, Gurov, I, Temnov, V, Sokolowski-Tinten, K, von der Linde, D (2002) Interferometric diagnostics of ablation craters created by femtosecond laser pulses. J. Opt. Technol. 69: 478-482
7. Gurov, I, Zakharov, A, Voronina, E (2004) Evaluation of interference fringe parameters by recurrence dynamic data processing. Proc. ODIMAP IV: 60-71
8. Bellini, M, Fontana, R, Gurov, I, Karpets, A, Materazzi, M, Taratin, M, Zakharov, A (2005) Dynamic signal processing and analysis in the OCT system for evaluating multilayer tissues. To be published in Proc. SPIE
9. Gurov, I, Zakharov, A, Bilyk, V, Larionov, A (2004) Low-coherence fringe evaluation by synchronous demodulation and Kalman filtering method: a comparison. Proc. OSAV'2004: 218-224
Fast hologram computation for holographic tweezers Tobias Haist, Marcus Reicherter, Avinash Burla, Lars Seifert, Mark Hollis, Wolfgang Osten Institut für Technische Optik, Universität Stuttgart Pfaffenwaldring 9 70569 Stuttgart, Germany
1 Introduction

In this paper we give a short introduction to the basics of using consumer graphics boards for computing holograms. These holograms are employed in a holographic tweezer system in order to generate multiple optical traps. The phase-only Fourier holograms, generated at video frequency, are displayed on a liquid crystal display and then optically reconstructed through the microscope objective. Using a standard consumer graphics board (NVidia 6800GT) we outperform our fastest CPU-based solution, which employs hand-coded machine code and the SSE multimedia extensions, by a factor of more than thirty at a floating point precision of 32 bit. With the help of this fast computation it is now possible to control a large number of Gaussian or doughnut-shaped optical traps independently of each other in three dimensions at video frequency.
2 Holographic Tweezers

Holographic tweezers are a special case of optical tweezers in which the micromanipulation of small objects (e.g. cells) is realized by a holographically generated light field [1,2,3]. If modern spatial light modulators (SLMs) are used as the hologram medium, it is possible to change the trapping field in video real time. By superposition one can generate a large number of traps and control them in three dimensions with high accuracy. In addition, it is possible to correct for field-dependent aberrations via the holograms [4] and to change the trapping potential (e.g. to doughnut modes [2]).
New Methods and Tools for Data Processing
127
The principal setup is depicted in Fig. 1. The SLM, a Holoeye LC-R-2500 reflective twisted-nematic liquid-crystal-on-silicon (LCoS) display with XGA (1024 × 768) resolution (pixel pitch: 19 µm, fill factor: 93%), is illuminated by a 150 mW laser diode working at 830 nm (Laser 2000 LHWA-830-150). By proper selection of the input and output polarization and the driving signal it is possible to obtain a linear 2π phase shift. For these settings one still has an amplitude modulation of about 70%. The phase-only holograms displayed on the LCD are coupled into the microscope objective (Zeiss Achroplan 100×, 1.0 W) by a telescope and thereby Fourier transformed into the trapping volume.
Fig. 1. Principle setup for holographic tweezers
Of course the complete flexibility of this method is only exploited if one is able to compute the phase-only Fourier holograms in real time. A simple estimation of the computational cost shows (see below) that this is not trivial if ordinary personal computers (PCs) without specialized hardware are used. For a single trap j the corresponding light field in the Fourier (hologram) plane equals

E_j(x, y) = E_0 exp[i(k_x x + k_y y) + iα(x² + y²) + iψ(x, y)], (1)
where the lateral and axial position of the trap is given by the tilt terms exp[i(k_x x + k_y y)] and the quadratic phase term exp[iα(x² + y²)]. The additional phase ψ(x, y) determines the light field's potential. For Gaussian-shaped light distributions (and a Gaussian input field) this term equals zero. In optical trapping a doughnut mode is often advantageous. In this case a phase singularity has to be introduced, resulting in
tan(ψ(x, y)/n) = y/x (2)
for a doughnut of order n. Other light fields are of course possible. For M traps we just have to compute the phase of the superposition of the individual light fields.
φ(x, y) = arctan[ Im(Σ_{j=1}^{M} E_j) / Re(Σ_{j=1}^{M} E_j) ]. (3)
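Eqs. (1)-(3) translate almost line by line into array code. The sketch below is an illustration only; trap positions, doughnut order and the (reduced) hologram size are arbitrary assumed values:

```python
import numpy as np

def trap_hologram(shape, traps, alpha=0.0, n=0):
    """Phase-only Fourier hologram for traps given as (kx, ky) tilt pairs.
    alpha: axial (quadratic phase) term of Eq. (1); n: doughnut order, Eq. (2)."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    psi = n * np.arctan2(y - ny/2, x - nx/2)   # Eq. (2): tan(psi/n) = y/x
    E = np.zeros(shape, dtype=complex)
    for kx, ky in traps:                        # superposition of Eq. (1) fields
        E += np.exp(1j*(kx*x + ky*y + alpha*(x**2 + y**2) + psi))
    return np.angle(E)                          # Eq. (3): phase of the sum

phi = trap_hologram((192, 256), [(0.1, 0.0), (0.0, 0.2), (0.05, 0.05)])
```

The inner loop over traps and pixels is exactly the work that is moved onto the GPU in the following sections.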
3 CPU-based computation

For one hundred doughnuts and a 1024 × 768 (XGA) pixel hologram we have to compute 78.6·10⁶ hologram-pixel contributions. At least 14 operations (including trigonometry) are necessary for the computation of one pixel. For a 15 Hz update of the hologram we therefore need at least 14 · 15 · 78.6·10⁶ = 16.5·10⁹ floating point operations per second (16.5 GFlops). The peak performance of an Intel Pentium 4 CPU at 3.2 GHz is about 6.4 GFlops. So even if one were able to use that peak performance, the computation of such holograms would not be fast enough on the CPU. The average achievable performance is considerably below the peak performance. In [5] it is reported that carefully hand-coded matrix multiplication using the SSE extensions on the Pentium 4 reaches up to 3.3 GFlops. This average achievable performance of course strongly depends on the problem to be solved. For our hand-coded hologram computation we achieved about 0.5 GFlops on a Pentium 4 at 3.0 GHz (also using hand-coded SSE). To generate the holograms fast enough, basically two different approaches are possible. One might try to improve the algorithms and optimize the code for the computation, or one might use faster hardware. Specialized hardware for the computation of holograms [6,7] as well as high-speed digital signal processor boards are available, but the performance-to-cost ratio (GFlops/Euro) is much better if one uses consumer graphics boards (or even video game hardware). Furthermore, by going that way one can strongly profit from the extraordinary performance growth over time in that field. Whereas for ordinary CPUs, according to Moore's law, we have more or less a doubling of the performance every eighteen months, this doubling of performance occurs every twelve months for graphics processing units (GPUs).
4 GPU-based computation

In the past, several authors have already used graphics boards for the computation of holograms [8,9,10]. Their approaches were based on amplitude-modulation Fresnel holograms. Therefore it was possible for them to render a hologram as the superposition of precomputed holograms which were translated and scaled. Translation, rotation, and scaling are of course basic operations which modern graphics boards can perform very easily. For our phase-only Fourier holograms this is not possible, because translating the hologram would not translate the reconstruction but would lead to an additional phase tilt. Therefore it is necessary to process the different exponential terms of Eq. (1) separately and to add up all the results. Fortunately, today it is also possible to implement the hologram computation directly on the GPU, because current GPUs are programmable in a quite flexible way. A lot of scientific applications running on GPUs have been proposed and implemented. For a good overview of this field the reader is referred to http://www.gpgpu.org. Currently we are using an NVidia 6800GT based AGP graphics board (about 350 Euro in spring 2005) within an ordinary PC. It incorporates 16 pixel shaders, each consisting of two floating point units, giving an overall peak performance of 51.2 GFlops at 400 MHz, since every unit can in principle work on 4 floats (red, green, blue, and alpha) in parallel. For implementing Eqs. (1) to (3) we use the straightforward algorithm depicted in Fig. 2. The computationally intensive parts are done in Cg, a freely available programming language for GPU programming [11]. The Cg syntax should be easily understandable if one is already familiar with C. A short example is shown in Fig. 3. The basic framework of the program is written in C++ using the OpenGL library.
Fig. 2. Algorithm for the computation of the phase-only Fourier holograms using the GPU.
The computation of the sum over the E_j (see Eqs. (1) and (3)) was implemented using a texture or a so-called p-buffer (pixel buffer). For both versions the basic processing and storage unit is a pixel. Every pixel on the GPU consists of four numbers (red, green, blue, and alpha). Therefore we are able to compute two hologram pixels (real and imaginary part each) within every texture pixel at the same time. This approach is depicted in Fig. 4.
    // -------------------------------------------------
    // Compute the phase out of the complex (re,im) field
    float hphase(in float2 wpos)
    {
        float xx = (wpos.x - 0.5) / 2.0;
        float x  = floor(xx);
        float2 h;
        wpos = float2(x + 0.5, wpos.y);
        if (x == xx)
            h = texRECT(hcomplex, wpos).xy;   // even hologram pixel: (r,g)
        else
            h = texRECT(hcomplex, wpos).zw;   // odd hologram pixel: (b,a)
        float A = atan2(h.y, h.x) / PI_2;
        return frac(A);
    }

Fig. 3. Example code in Cg computing the phase by an arctan operation
Fig. 4. Doubling of the performance is possible if two hologram pixels are computed within each pixel of the texture. Two complex values corresponding to two hologram pixels can be stored/processed by one GPU pixel.
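The packing of Fig. 4 can be emulated on the CPU to see the data layout: one RGBA texel carries the (re, im) pairs of two neighbouring hologram pixels, and the phase is recovered per channel pair, analogous to the Cg fragment of Fig. 3. The NumPy sketch below illustrates the layout only; it is not the GPU code:

```python
import numpy as np

def pack(row):
    """Store a row of N complex hologram pixels in N/2 RGBA texels."""
    texels = np.empty((row.size // 2, 4), dtype=np.float32)
    texels[:, 0], texels[:, 1] = row.real[0::2], row.imag[0::2]  # r,g: even pixel
    texels[:, 2], texels[:, 3] = row.real[1::2], row.imag[1::2]  # b,a: odd pixel
    return texels

def unpack_phase(texels):
    even = np.arctan2(texels[:, 1], texels[:, 0])
    odd  = np.arctan2(texels[:, 3], texels[:, 2])
    return np.column_stack([even, odd]).ravel()   # interleave back to one row

row = np.exp(1j * np.linspace(-3.0, 3.0, 8))      # 8 unit-amplitude pixels
phase = unpack_phase(pack(row))                   # recovers np.angle(row)
```

Because every shader unit processes four floats at once, filling all four channels in this way is what doubles the effective throughput per pixel.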
The overall performance that can be achieved depends on the details of the implementation as well as on the number of doughnuts to be reconstructed. For 100 doughnuts our best solution results in 0.78 ms per doughnut (1024 × 768 hologram size). This corresponds to at least 14.1 GFlops. The driver for the board was set to "dynamic overclocking" in order to obtain the best performance. For such large numbers of doughnuts it is advantageous to do the looping over all doughnuts also within Cg. This outperforms our fastest CPU solution by a factor of more than thirty. The performance can be further increased if two graphics boards are used in one PC (linking is possible via the scalable link interface (SLI) of NVidia).

Table 1. Performance of hologram computation based on an NVidia 6800GT with dynamic overclocking for different numbers of doughnuts. "Best time" always denotes the shortest time if the program is run several times. All time values are listed as initialization time/update time (in ms). Initialization is only done once while running the program. Results for four different implementations are shown.
    Doughnuts   2 doughnuts,         2 doughnuts,          2 doughnuts,         4 doughnuts,
                for loop inside Cg   for loop outside Cg   p-buffer             for loop inside Cg
                Best      Avg        Best      Avg         Best      Avg        Best      Avg
    1           31/0      47/0       15/0      15/0        16/0      16/0       47/0      47/0
    10          31/16     32/15      16/0      31/15       31/16     31/16      31/0      31/0
    100         141/110   141/110    171/172   187/172     203/188   203/188    93/62     109/78
    255         312/297   328/297    453/422   453/422     500/484   500/484    250/219   250/219
5 Conclusions

We have shown that it is possible to considerably accelerate the computation of phase-only Fourier holograms by using a consumer graphics board (MSI 6800GT) instead of the ordinary CPU. Our fastest CPU solution (Pentium 4 at 3.0 GHz), using hand-coded assembly code together with the multimedia extensions SSE, was outperformed by a factor of more than thirty, resulting in an average performance of 14.1 GFlops for 100 doughnuts. The presented algorithm can be used for the computation of holograms for an arbitrary number of traps (up to 250) located at different positions in three dimensions and having independent trapping potentials. We thank the Landesstiftung Baden-Württemberg for financial support within the project "AMIMA".
Wavelet analysis of speckle patterns with a temporal carrier

Yu Fu, Chenggen Quan, Cho Jui Tay and Hong Miao
Department of Mechanical Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260
1 Introduction

Temporal phase analysis [1] and the temporal phase unwrapping technique [2] have been reported in recent years for measurements on continuously deforming objects. In this technique, a series of fringe or speckle patterns is recorded throughout the entire deformation history of the object. The intensity variation at each pixel is then analyzed as a function of time. Among the several temporal phase analysis techniques, the temporal Fourier transform is the predominant method. The accuracy of Fourier analysis is high when the signal frequency is high and the spectrum is narrow. In some cases, however, the spectrum of the signal is wide due to the non-linear phase change along the time axis, and the varying spectra at different pixels increase the difficulty of automatic filtering. In recent years, the wavelet transform was introduced in temporal phase analysis to overcome the disadvantages of Fourier analysis. The concept was introduced by Colonna de Lega [3] in 1996, and some preliminary results [4] were presented. Our previous research [5,6] also showed the advantages of the wavelet transform over the Fourier transform in temporal phase analysis.

The temporal phase analysis technique has the advantage of eliminating speckle noise, as it evaluates the phase pixel by pixel along the time axis. However, it does have its disadvantages: it can analyze neither a part of an object that is not moving with the rest, nor objects that deform in opposite directions at different parts. Determination of the absolute sign of the phase change is impossible by both temporal Fourier and wavelet analysis. This limits the technique to the measurement of deformation in one direction which is already known. Adding a carrier frequency to the image acquisition process is a method to overcome these problems. In this study, a temporal carrier is applied in ESPI and DSSI set-ups. The phase is retrieved by temporal wavelet transform, and no phase unwrapping process in the temporal or spatial domain is required. The phase variation due to the temporal carrier is also measured experimentally. After removing the effect of the temporal carrier, the absolute phase change is obtained.
2 Theory of wavelet phase extraction

When a temporal carrier is introduced in speckle interferometry, the intensity at each point can be expressed as

I_{xy}(t) = I_{0xy}(t) + A_{xy}(t) \cos[\phi_{xy}(t)]
          = I_{0xy}(t) + A_{xy}(t) \cos[\phi_C(t) + \varphi_{xy}(t)]    (1)

where I_{0xy}(t) is the intensity bias of the speckle pattern and \phi_C(t) is the phase change due to the temporal carrier. At each pixel the temporal intensity variation is a frequency-modulated signal and is analyzed by the continuous wavelet transform. The continuous wavelet transform (CWT) of a signal s(t) is defined as its inner product with a family of wavelet functions \psi_{a,b}(t):

W_S(a, b) = \int_{-\infty}^{+\infty} s(t)\, \psi^{*}_{a,b}(t)\, dt    (2)

where

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\left(\frac{t - b}{a}\right),  b \in \mathbb{R},  a > 0    (3)

Here a is a scaling factor related to the frequency, b is the time shift, and * denotes the complex conjugate. In this application, the complex Morlet wavelet is selected as the mother wavelet:

\psi(t) = \exp(-t^2/2) \exp(i \omega_0 t)    (4)

Here \omega_0 = 2\pi is chosen to satisfy the admissibility condition [7]. The CWT expands the one-dimensional temporal intensity variation of a pixel into a two-dimensional plane of scaling factor a (which is related to the frequency) and position b (which is the time axis). The trajectory of the maximum of |W_{xy}(a, b)|^2 on the a-b plane is called a 'ridge'. The instantaneous frequency of the signal, \phi'_{xy}(b), is calculated as

\phi'_{xy}(b) = \frac{\omega_0}{a_{rb}}    (5)
Fig. 1. Experimental set-up of ESPI with a temporal carrier
where a_{rb} denotes the value of a at instant b on the ridge. The phase change \Delta\phi_{xy}(t) can be calculated by integrating the instantaneous frequency in Eq. (5), so that no phase unwrapping procedure is needed in either the temporal or the spatial domain. Subtracting the phase change \Delta\phi_C(t) due to the temporal carrier, the absolute phase change representing different physical quantities can be obtained at each pixel.
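The ridge extraction and phase integration of Eqs. (2)-(5) can be sketched in a few lines. This is a NumPy-only illustration; the scale grid, the discrete integration by cumulative summation, and all names are assumptions rather than the authors' implementation.

```python
import numpy as np

def morlet_ridge_phase(s, dt, scales, w0=2 * np.pi):
    """Continuous phase change of a frequency-modulated signal s(t).

    Computes a complex-Morlet CWT on the given scales, follows the ridge
    (the maximum of |W|^2 over scale at each instant b), and integrates
    the instantaneous frequency w0/a_rb of Eq. (5), so no phase
    unwrapping is required.
    """
    n = len(s)
    t = (np.arange(n) - n // 2) * dt
    W = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        psi = np.exp(-(t / a) ** 2 / 2) * np.exp(1j * w0 * t / a) / np.sqrt(a)
        # correlation of s with the conjugate wavelet, cf. Eq. (2)
        W[i] = np.convolve(s, np.conj(psi)[::-1], mode='same') * dt
    ridge = np.abs(W).argmax(axis=0)       # index of a_rb at each instant b
    inst_freq = w0 / scales[ridge]         # Eq. (5), rad per unit time
    return np.cumsum(inst_freq) * dt       # integrated phase change
```

For a real measurement, s would be the recorded gray-value history of one pixel and dt the frame period of the camera.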
3 Temporal carrier with ESPI

When a vibrating object is measured using ESPI, the phase change of each point has opposite directions at different instants, and different points may have different frequencies of intensity variation due to the different amplitudes of the vibration. Figure 1 shows the experimental set-up of ESPI with a temporal carrier. The specimen tested in this study is a perspex cantilever beam with a diffuse surface. The beam is subjected to a sinusoidal vibration at the free end using a vibrator. To generate the temporal carrier, the reference plate is mounted on a computer-controlled piezoelectric transducer (PZT) stage. During the vibration of the cantilever beam, a linear rigid-body motion at a certain velocity is applied to the reference plate. In order to retrieve the phase change of the temporal carrier, a still reference block with a diffuse surface is mounted above the vibrating beam and is captured together with the beam. The object and reference beams are recorded on a CCD sensor.
Fig. 2. (a) Gray-value variation of one point on the cantilever beam; (b) modulus of the complex wavelet transform.
Figure 2 shows the intensity variation and the modulus of the Morlet wavelet transform of a point on the cantilever beam. Integration of 2\pi/a_{rb} was carried out along the time axis to generate a continuous phase change \Delta\phi(t). Figure 3(a) shows the temporal phase change obtained on the cantilever beam and on the reference block. The difference between these two lines gives the absolute phase change of that point due to vibration. In a speckle interferometer, as shown in Fig. 1, a 2\pi phase change represents a displacement of \lambda/2 (= 316.4 nm) in the z direction. Figure 3(b) shows the temporal displacement obtained at that point. Figure 4(a) shows the out-of-plane displacement on a cross section of the beam at different time intervals (T_1 - T_0), (T_2 - T_0) and (T_3 - T_0) [shown in Fig. 3(b)]. For comparison, temporal Fourier analysis was also applied to the same speckle patterns. Figure 4(b) shows the temporal displacements obtained by the temporal Fourier transform. It was observed that the CWT on each pixel generates a smoother spatial displacement distribution at different instants compared to the result of the Fourier transform. The maximum displacement fluctuation due to noise is around 0.04 µm for the Fourier transform, but only 0.02 µm for the wavelet analysis.
Fig. 3. (a) Phase variation retrieved on the reference block and on the cantilever beam; (b) out-of-plane displacement of one point on the cantilever beam.
Fig. 4. The displacement distribution on one cross section at different time intervals obtained by (a) wavelet transform; (b) Fourier transform.
Fig. 5. Typical shearography fringe pattern and area of interest.
4 Temporal carrier with DSSI

Shearography is an optical technique that measures displacement derivatives. Even when the displacement of the test object is in one direction, the phase change in shearography may be in opposite directions at different parts of the object. In addition, areas of zero phase change also exist. In this case, introducing a temporal carrier is the only method to overcome these problems. The specimen tested in this study is a square plate with a blind hole, clamped at the edges by screws and loaded by compressed air. Similar to the ESPI set-up mentioned above, a still reference block with a diffuse surface is mounted beside the object and is captured together with the plate. A modified Michelson shearing interferometer is adopted as the shearing device, so the temporal carrier can easily be introduced by shifting the mirror in one arm of the interferometer using a PZT stage. Figure 5 shows the specimen and the reference block with typical shearography fringes. Figure 6 shows the intensity variations of point R on the reference block and of points A and B (shown in Fig. 5) on the plate. Different frequencies are found at points A and B, as the directions of the phase change at these two points are opposite. Similar to the process mentioned above, the absolute phase change can be obtained by temporal wavelet analysis. The combination of the phase changes of all points at a certain instant gives an instantaneous spatial phase distribution which is proportional to the deflection derivative, in this case \partial w/\partial y. Figure 7 shows a high-quality 3D plot of the reconstructed value of \partial w/\partial y at a certain instant.
Fig. 6. Temporal intensity variations of (a) point R on reference block; (b) point A and (c) point B on the square plate.
Fig. 7. The 3D plot of the reconstructed value of \partial w/\partial y.
5 Conclusion

This paper presents a novel method to retrieve the transient phase change on a vibrating or continuously deforming object using a combination of temporal wavelet analysis and the temporal carrier technique. The introduction of the temporal carrier ensures that the phase change of each point on the object is in one direction, so that temporal phase analysis methods can be applied. Two applications of the temporal carrier are illustrated with different optical techniques. A complex Morlet wavelet is selected as the wavelet basis. The phase change is retrieved by extracting the ridge of the wavelet coefficients. As wavelet analysis extracts the instantaneous frequency with the highest energy (which is the frequency of the signal), it performs adaptive filtering of the measured signal, thus limiting the influence of various noise sources and increasing the resolution of the measurement. A comparison between the temporal wavelet transform and the Fourier transform shows that wavelet analysis can significantly improve the result in temporal phase measurement. However, the continuous wavelet transform maps a one-dimensional intensity variation of a signal onto a two-dimensional plane of position and frequency, and then extracts the optimized frequencies. This is obviously a time-consuming process requiring high computing speed and large memory. In this investigation, the computation time is about 10 times that of the temporal Fourier transform. This disadvantage, however, becomes less significant with the rapid improvement in the capacity of computers.
References

1. Kaufmann, GH, Galizzi, GE (2002) Phase measurement in temporal speckle pattern interferometry: comparison between the phase-shifting and the Fourier transform methods. Applied Optics 41:7254-7263
2. Huntley, JM, Saldner, H (1993) Temporal phase-unwrapping algorithm for automated interferogram analysis. Applied Optics 32:3047-3052
3. Colonna de Lega, X (1996) Continuous deformation measurement using dynamic phase-shifting and wavelet transform. In: Grattan, KTV (ed) Applied Optics and Optoelectronics 1996, Institute of Physics Publishing, Bristol, 261-267
4. Cherbuliez, M, Jacquot, P, Colonna de Lega, X (1999) Wavelet processing of interferometric signal and fringe patterns. Proc. SPIE 3813:692-702
5. Fu, Y, Tay, CJ, Quan, C, Chen, LJ (2004) Temporal wavelet analysis for deformation and velocity measurement in speckle interferometry. Optical Engineering 43:2780-2787
6. Fu, Y, Tay, CJ, Quan, C, Miao, H (2005) Wavelet analysis of speckle patterns with a temporal carrier. Applied Optics 44:959-965
7. Mallat, S (1998) A Wavelet Tour of Signal Processing. Academic Press, San Diego, Calif.
Different preprocessing and wavelet transform based filtering techniques to improve signal-to-noise ratio in DSPI fringes

Chandra Shakher, Saba Mirza, Vijay Raj Singh, Md. Mosarraf Hossain and Rajpal S Sirohi
Laser Applications and Holography Laboratory, Instrument Design Development Centre, Indian Institute of Technology Delhi, New Delhi - 110 016, India
1 Introduction

Digital Speckle Pattern Interferometry (DSPI) has emerged as a powerful tool for the measurement and monitoring of vibrations [1,2]. DSPI interferograms, however, contain inherent speckle noise. Many possibilities for improving DSPI fringes have evolved as a result of advances in digital image processing techniques, but the different methods investigated to reduce speckle noise in DSPI fringes are only partially successful. Methods based on the Fourier transform, such as low-pass filtering or spectral-subtraction image restoration, have proven quite efficient at reducing speckle noise. The Fourier method, however, does not preserve details of the object. This is a severe limitation because, in practice, test objects usually contain holes, cracks or shadows in the image field. The basic reason is that in the Fourier transform method the original function is expressed in terms of orthogonal basis functions of sine and cosine waves of infinite duration [3]. Thus, errors are introduced when the filtered fringe pattern is used to evaluate the phase distribution. For better visual inspection and automatic analysis of vibration fringes, a number of methods for optimizing the signal-to-noise ratio (SNR) have been reported [4]. Wavelets have emerged as a powerful tool for image filtering, and several publications have recently appeared on reducing speckle noise using wavelet filters [5-9]. Our investigations reveal that filtering schemes based on a combination of preprocessing and wavelet filters are quite effective in reducing the speckle noise present in speckle fringes of vibrating objects. In this paper, different filtering schemes based on wavelet filtering are presented for the removal of speckle noise from speckle fringes. The preprocessing of speckle interferograms depends mainly upon the texture and the number of speckles present in the interferogram. The potential of the different filtering schemes is evaluated in terms of the speckle index / SNR in speckle fringes of vibrating objects.
2 DSPI Fringe Pattern Recording

In the case of vibration measurement using DSPI, let us assume that the frequency of vibration \omega of the harmonically vibrating object is greater than the frame rate of the CCD camera used to record the image of the object. Two time-averaged specklegrams of the vibrating plate are recorded. If the two time-averaged specklegrams are subtracted, the intensity is given by [6]

I(x, y) = 2 A_o A_r J_0[(2\pi/\lambda)\, \gamma\, w_0(x, y)] \cos[2(\phi_o - \phi_r)]    (1)

where A_o and A_r are the amplitudes of the object and reference wavefronts, respectively; \lambda is the wavelength of the laser light used to illuminate the object (plate); \phi_r is the phase of the reference beam and \phi_o is the position-dependent phase of the object beam corresponding to the original state of the object; w_0(x, y) is the out-of-plane displacement amplitude of the harmonically vibrating object with respect to some reference position; \gamma is a geometric factor which depends on the angles of illumination and observation; and J_0 is the zero-order Bessel function. The term \cos[2(\phi_o - \phi_r)] represents phase-dependent high-frequency speckle information. The Bessel function J_0 spatially modulates the brightness of the speckle pattern. The time-averaged subtraction method improves the SNR in the speckle interferograms. The speckle noise, however, cannot be removed by the mere subtraction process. The presence of undesired bright speckles in the areas of dark fringes, and similarly of dark speckles in the areas of bright fringes, introduces inaccuracies in measurements from the speckle interferogram. In coherent imaging a number of filters have been investigated to reduce speckle noise. Recently we have studied various schemes to remove noise from speckle interferograms. Some of the schemes with the potential to handle speckle noise effectively are discussed below.
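For intuition, the J_0 fringe modulation of Eq. (1) can be evaluated numerically. The sketch below uses the standard integral representation of J_0 so that it stays NumPy-only; the function names are illustrative, while the 632.8 nm wavelength and \gamma = 1.938 defaults are the values quoted in the experimental section.

```python
import numpy as np

def bessel_j0(x):
    """Zero-order Bessel function via J0(x) = (1/pi) * int_0^pi cos(x sin t) dt."""
    theta = np.linspace(0.0, np.pi, 2001)
    f = np.cos(np.atleast_1d(np.asarray(x, float))[:, None] * np.sin(theta))
    d = theta[1] - theta[0]
    # trapezoidal rule along theta
    return ((f.sum(axis=1) - 0.5 * (f[:, 0] + f[:, -1])) * d) / np.pi

def fringe_brightness(w0, wavelength=632.8e-9, gamma=1.938):
    """Relative brightness |J0[(2 pi / lambda) * gamma * w0]| of the
    time-averaged fringe pattern for vibration amplitude w0 (in meters)."""
    return np.abs(bessel_j0((2 * np.pi / wavelength) * gamma * w0))
```

Dark fringes occur where the argument reaches a zero of J_0 (2.405, 5.520, ...), i.e. for these parameters at vibration amplitudes of roughly 125 nm, 287 nm, and so on.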
3 Filtering schemes

In a DSPI fringe pattern, speckle noise appears at different scales of resolution. To remove the noise, a filtering scheme is needed which can decompose images at different scales and then remove the unwanted intensity variations. The filtering scheme should be such that the desired structure of minima and maxima remains the same. Intensity changes occur at different scales in the image, so that their optimal detection requires the use of operators of different sizes. A sudden intensity change produces a peak and a trough in the first derivative of the image. This requires that the vision filter have two characteristics: first, it should be a differential operator, and second, it should be capable of being tuned to act at any desired scale. Wavelet filters have these properties. Wavelets are new families of orthonormal basis functions which need not be of infinite duration. When the wavelet decomposition function is dilated, it accesses lower-frequency information; when contracted, it accesses higher-frequency information. It is computationally efficient and provides significant speckle reduction while maintaining the sharp features in the image [6]. One of the parameters used to test filtering schemes is the speckle index, the ratio of the standard deviation to the mean in a homogeneous area:

C = \sigma / m

where \sigma = \sqrt{\mathrm{var}(x)} is the standard deviation and m = E(x) is the mean. The SNR is the reciprocal of the speckle index, i.e. SNR = 1/C.
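The figure of merit just defined is straightforward to compute; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def speckle_index(region):
    """Speckle index C = sigma/m over a homogeneous image region; SNR = 1/C."""
    region = np.asarray(region, dtype=float)
    c = region.std() / region.mean()
    return c, 1.0 / c
```

The region passed in should be a patch of the interferogram that would be uniform in the absence of speckle, so that all measured variation is attributable to noise.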
4 Experimental

The DSPI set-up for recording the DSPI fringes is shown in Fig. 1. A beam from a 30 mW He-Ne laser of wavelength 632.8 nm is split into two beams by a beam splitter BS1. One of the beams fully illuminates the surface of the object under study and the other beam is used as the reference beam. The value of \gamma for our experimental set-up is 1.938.
Fig. 1. Schematic of the DSPI set-up for DSPI fringe recording
The object beam is combined with the reference beam to form a speckle interferogram that is converted into a video signal by a CCD camera. The analog video output from the CCD camera is fed to a PC-based image-processing system developed using National Instruments' IMAQ PCI-1408 card. A LabVIEW 5.0 based program in the graphical programming language was developed to acquire, process and display the interferograms. The program implements accumulated linear histogram equalization after subtraction of the interferograms. The histogram equalization alters the gray-level values of the pixels: it transforms the gray-level values of the pixels of an image to evenly occupy the range of the histogram (0 to 255 in an 8-bit image), increasing the contrast of the image. Pixels out of range are set to zero. The IMAQ PCI-1408 card is set to process the interferogram images at a rate of 30 images/second. One time-averaged interferogram of the vibrating object over the frame acquisition period (1/30 s) is grabbed and stored as a reference interferogram. The successive time-averaged interferograms are subtracted from the reference interferogram continuously and displayed on the computer screen.
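A rough NumPy sketch of the subtraction and accumulated (cumulative) histogram-equalization step described above; this is not the LabVIEW/IMAQ implementation, and the exact mapping details are assumptions:

```python
import numpy as np

def subtract_and_equalize(frame, reference):
    """Absolute subtraction of two time-averaged specklegrams, followed by
    a cumulative-histogram equalization that stretches the occupied gray
    levels to the full 0-255 range of an 8-bit image."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    diff = diff.astype(np.uint8)
    cdf = np.bincount(diff.ravel(), minlength=256).cumsum().astype(float)
    lo = cdf[diff.min()]                        # cdf value of the lowest level
    norm = (cdf - lo) / max(cdf[-1] - lo, 1.0)  # stretch to [0, 1]
    return np.round(norm[diff] * 255).astype(np.uint8)
```

In the set-up described, `reference` would be the stored time-averaged interferogram and `frame` each successive time-averaged acquisition.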
To arrive at an optimum filtering scheme, the first experiments were conducted on loudspeakers / tweeters [6,8]. One typical result for tweeters is given in Fig. 2 and Fig. 3. The results show that speckle noise can be reduced significantly by using appropriate preprocessing and wavelet filtering.
Fig. 2. DSPI speckle interferograms recorded for a vibrating tweeter at (a) frequency 3 kHz and force 0.6 mV, (b) frequency 9.31 kHz and force 3 V and (c) frequency 2.41 kHz and force 3 V.
Fig. 3. Filtered speckle interferograms (a)-(c) for the speckle interferograms in Fig. 2(a)-(c), respectively, obtained by implementing Wiener filtering followed by Symlet wavelet filtering, and (d) line profile of the filtered interferogram shown in Fig. 2(c).
Following the cue from these experiments, a systematic experiment was conducted on a cantilever beam of dimensions 50 mm × 50 mm × 0.8 mm fixed at one end. The aspect ratio of the cantilever beam is a/b = 1 (where a and b are the length and width of the beam). The cantilever beam was made of aluminum (Young's modulus = 70 GPa, density = 2700 kg/m³). The surface of the beam was made flat on an optical grinding/polishing machine. A sketch of the cantilever beam with the point of loading P is shown in Fig. 4(a). A function generator (model HP 33120A) regulates the frequency and the magnitude of the force of the exciter. The function generator was set to generate a sinusoidal signal. An unfiltered speckle interferogram recorded for the cantilever beam is shown in Fig. 4(b).
Fig. 4. (a) Sketch of the cantilever beam with the point of loading P (t = thickness; all dimensions are in mm) and (b) unfiltered speckle interferogram for the cantilever beam of dimensions 50 mm × 50 mm × 0.8 mm, fixed at one edge with the other edges free (frequency: 1.937 kHz, force: 0.8 × 10^-3 N, frequency parameter: 24.63).
The following filtering schemes are implemented on the recorded fringe pattern shown in Fig. 4(b):
1. Preprocessing by averaging followed by a Daubechies (db) wavelet.
2. Preprocessing by averaging or median filtering followed by a Symlet wavelet.
3. Preprocessing by sampling, thresholding and averaging followed by a Symlet wavelet.
4. Preprocessing by sampling, thresholding and averaging followed by a Biorthogonal wavelet.
The filtered images of Fig. 4(b) for averaging followed by Daubechies (db), averaging followed by Symlet, the preprocessing scheme (consisting of averaging, sampling, thresholding, averaging) followed by Symlet, and the preprocessing scheme followed by the Biorthogonal wavelet are shown in Fig. 5(a), Fig. 5(b), Fig. 5(c) and Fig. 5(d), respectively. The speckle index and SNR for the unfiltered speckle interferogram of Fig. 4(b) and the filtered interferograms shown in Fig. 5(a)-(d) are given in Table 1.
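The preprocessing-plus-wavelet idea behind these schemes can be sketched as follows. For self-containment the sketch uses a 3 × 3 mean prefilter and a one-level Haar transform with soft thresholding instead of the Daubechies/Symlet/Biorthogonal filters used in the paper; all names are illustrative.

```python
import numpy as np

def mean3(img):
    """3 x 3 mean (averaging) prefilter with edge replication."""
    p = np.pad(np.asarray(img, float), 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def haar_denoise(img, thresh):
    """One-level 2D Haar decomposition, soft-threshold the detail bands,
    then reconstruct. Image sides must be even."""
    img = np.asarray(img, float)
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)  # approximation band untouched
    a2, d2 = np.empty_like(a), np.empty_like(d)
    a2[:, 0::2], a2[:, 1::2] = ll + lh, ll - lh
    d2[:, 0::2], d2[:, 1::2] = hl + hh, hl - hh
    out = np.empty_like(img)
    out[0::2], out[1::2] = a2 + d2, a2 - d2
    return out
```

Scheme 2, for instance, would then correspond to `haar_denoise(mean3(fringes), thresh)` with the Haar step replaced by a Symlet filter bank.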
Fig. 5. The filtered speckle interferograms of Fig. 4(b) for (a) averaging followed by Daubechies (db), (b) averaging / median filtering followed by Symlet, (c) the preprocessing scheme (consisting of averaging, sampling, thresholding, averaging) followed by Symlet and (d) the preprocessing scheme followed by the Biorthogonal wavelet.
Table 1. Speckle index and SNR with different filtering schemes (speckle index C = \sigma/m, SNR = 1/C)

Image               Fig. 4(b)   Fig. 5(a)   Fig. 5(b)   Fig. 5(c)   Fig. 5(d)
Speckle index (C)   1.3100      0.3587      0.3398      0.1494      0.1236
SNR (1/C)           0.7633      2.788       2.9429      6.6946      8.0926
It is observed that there is a significant reduction in the speckle index of the filtered speckle interferogram when an appropriate preprocessing scheme followed by the Biorthogonal wavelet filter is used.
Conclusions

The experimental results reveal that using an appropriate preprocessing scheme followed by a Biorthogonal wavelet filter reduces the speckle index and increases the SNR significantly. This results in enhanced contrast between dark and bright fringes. Figure 5(d) shows that the implementation of an appropriate preprocessing scheme followed by the Biorthogonal wavelet filter gives a clearer fringe pattern compared with the filtering schemes whose results are shown in Fig. 5(a), Fig. 5(b) and Fig. 5(c).
References

1. P. Varman and C. Wykes, "Smoothing of speckle and moiré fringes by computer processing," Opt. Lasers Eng. 3, 87-100 (1982).
2. O. J. Lokberg, "ESPI - the ultimate holographic tool for vibration analysis?," J. Acoust. Soc. Am. 75, 1783-1791 (1984).
3. M. Takeda and K. Mutoh, "Fourier-transform profilometry for the automatic measurement of 3-D object shapes," Appl. Opt. 22, 3977-3982 (1983).
4. S. Krüger, G. Wernecke, W. Osten, D. Kayser, N. Demoli and H. Gruber, "The application of wavelet filters in convolution processors for the automatic detection of faults in fringe patterns," in Proc. Fringe 2001, W. Osten and W. Jüptner, eds. (Elsevier, 2001).
5. G. H. Kaufmann and G. E. Galizzi, "Speckle noise reduction in television holography fringes using wavelet thresholding," Opt. Eng. 35, 9-14 (1996).
6. C. Shakher, R. Kumar, S. K. Singh, and S. A. Kazmi, "Application of wavelet filtering for vibration analysis using digital speckle pattern interferometry," Opt. Eng. 41, 176-180 (2002).
7. A. Federico and G. H. Kaufmann, "Evaluation of the continuous wavelet transform method for the phase measurement of electronic speckle pattern interferometry fringes," Opt. Eng. 41, 3209-3216 (2002).
8. C. Shakher and R. S. Sirohi, "Study of vibrations in square plate and tweeters using DSPI and wavelet transform," ATEM'03, Sept. 10-12, Nagoya, Japan (2003).
9. Y. Fu, C. J. Tay, C. Quan and L. J. Chen, "Temporal wavelet analysis for deformation and velocity measurement in speckle interferometry," Opt. Eng. 43, 2780-2787 (2004).
Wavefront Optimization using Piston Micro Mirror Arrays

Jan Liesener, Wolfgang Osten
Institut für Technische Optik, Universität Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
1 Introduction

Spatial light modulators (SLMs) are key elements in the field of active and adaptive optics, where the defined control of light fields is required. The removal of aberrations in optical systems, for example, requires a modulator by which the phase of light fields can be influenced, thus shaping the outgoing wavefront. Presently, deformable membrane mirrors are widely used, although their spatial resolution is very limited. Pixelated liquid crystal displays offer a high resolution, but their polarization effects must be carefully considered [1][2]. A new type of SLM, a micro mirror array (MMA) developed by the IPMS (Fraunhofer Institut für Photonische Mikrosysteme), consists of an array of micromirrors that move with a piston-like motion perpendicular to their surfaces, enabling pure phase modulation.

A breadboard was set up on which the MMA's wavefront shaping capability was tested by measuring the maximum achievable coupling efficiency (CE) into a monomode fiber after compensation of artificially induced wavefront errors. The artificial wavefront errors resembled typical wavefront errors expected for lightweight or segmented mirrors of space telescopes. Several wavefront optimization methods were applied. The methods can be divided into direct wavefront measurements, among them Shack-Hartmann wavefront measurements as well as interferometric phase measurements, and iterative methods that use the CE as the only measurand. The latter methods include a direct search algorithm and a genetic algorithm. Only minor changes in the optical setup were necessary in order to switch between the methods, which makes the methods directly comparable.
2 Micro Mirror Array description

The MMA fabricated at the Fraunhofer Institut für Photonische Mikrosysteme (FhG-IPMS, www.ipms.fraunhofer.de) consists of 240 x 200 micro mirrors arranged on a regular grid with a pixel pitch of 40 µm. Unlike, for example, the flip mirror arrays developed by Texas Instruments (DLP technology, www.dlp.com), which operate in a binary on/off mode, the micro mirrors in this investigation perform a continuous motion perpendicular to their surface. Thereby, a pure phase shift of the light reflected from the mirrors is accomplished. The deflection of a single mirror is induced by applying a voltage to the electrode underlying the pixel, resulting in an equilibrium between the electrostatic force and the restoring force of the suspension arms (see Fig. 1). Since the maximum deflection of the mirrors is 360 nm, wavefront shaping with more than 720 nm of path shift has to be done in a wrapped manner, i.e. phase values are truncated to the interval [-\pi, \pi] by adding or subtracting multiples of 2\pi. The one-level architecture device is fabricated in a CMOS-compatible surface micro-machining process allowing individual addressing of each element in the matrix. The mirrors and actuators of the mechanism are formed simultaneously in one structural layer of aluminium. The MMA currently operates with a 5% duty cycle, and all CE measurements in this investigation refer to the 5% period in which the micro mirrors have the desired deflection value. The measurements could be performed by triggering all other hardware to the MMA driving board. The 5% duty cycle was not problematic in this investigation, but it could prove to be obstructive in other applications.
Fig. 1. Left: Pixel structure of an MMA showing the mirrors and the suspension arms. Right: White light interferometric measurement of deflected and undeflected mirrors.
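The wrapped addressing described above can be sketched as follows. This assumes that a mirror deflection d changes the reflected optical path by 2d (i.e. the phase by 4πd/λ, consistent with the stated 360 nm stroke covering 720 nm of path), uses the 633 nm wavelength employed later on the breadboard, and all names are illustrative.

```python
import numpy as np

WAVELENGTH = 633e-9      # He-Ne line used on the breadboard
STROKE = 360e-9          # maximum mirror deflection

def mirror_deflections(target_phase):
    """Piston deflections (meters) realizing a desired reflected phase.

    Phases are wrapped to [-pi, pi) by adding/subtracting multiples of
    2*pi; a deflection d shifts the reflected phase by 4*pi*d/lambda, so
    the full 2*pi range fits within the stroke.
    """
    wrapped = np.mod(np.asarray(target_phase) + np.pi, 2 * np.pi) - np.pi
    d = wrapped * WAVELENGTH / (4 * np.pi)   # +- lambda/4 around zero
    return d - d.min()                       # shift into the [0, stroke] range
```

Any smooth compensation phase function (e.g. from Zernike coefficients) would be sampled on the 240 x 200 mirror grid before applying this mapping.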
3 Breadboard description
Fig. 2. Drawing of the opto-mechanical setup for coupling efficiency maximization including the generation and compensation of wavefront errors.
With the opto-mechanical setup depicted in Fig. 2, all optimization routines for the fiber CE maximization can be performed with only minor alterations. A collimated beam with 633 nm wavelength is generated with the He-Ne laser (L), a spatial filter (SF), and a collimation lens (CL). The unit consisting of the polarizing beam splitter (BS1), the telescope (T1) and the deformable membrane mirror (DMM) acts as a wavefront error generator. The deformation of the DMM is transferred to the wavefront reflected from the DMM. The telescope is necessary for the adjustment of the beam diameters used to read out the DMM (35 mm) and the MMA (8 x 9.6 mm). The MMA's task is to compensate for the wavefront errors in reflection. BS2 directs the beam towards the focusing lens (FL), by which the light is focused onto the front end of a monomode fiber (MF). BS3 couples out some intensity for the Shack-Hartmann and the interferometric measurements. The telescope T2 acts as a beam reducer and projects an image of the MMA either onto the CCD chip or onto the microlens array in the Shack-Hartmann approach. Part of the initial collimated beam passes through BS1 to form a reference beam for the interferometric approach. The angle of the reference beam can be adjusted using the second adjustable mirror. Quarter wave plates (QWP) control the transmittance/reflectance of the polarizing beam splitters (BS1, BS2, BS4).
4 Optimization methods

For the determination of the MMA phase pattern necessary to compensate the system's aberrations, four optimization methods were investigated and successfully applied. The compensation phase function is represented either by a set of Zernike coefficients (modal) or by a set of localized phase/amplitude values (zonal).

4.1 Direct Wavefront Measurements
Direct wavefront measurements return the shape of the wavefront reflected by the MMA. The difference from a calibration wavefront is calculated and subtracted from the MMA phase pattern. Measurements can be performed repeatedly, enabling closed-loop operation. In a Shack-Hartmann sensor (SHS) [3], a microlens array is placed in the wavefront. The focal spot behind each lens is shifted according to the local tilt of the wavefront at the microlens position. This tilt information is used for the reconstruction of the wavefront shape. In our test setup the SHS is used to establish a closed-loop system for continuous measurement of the wavefront after compensation by the MMA. The sensor is placed in the conjugate plane of the MMA. A Mach-Zehnder type interferometer is established by also using the reference light path described above. By superposing the light coming from the MMA and the reference beam, which is not affected by the aberrations introduced by the DMM, an interference pattern is generated on the CCD. The reference beam is tilted so that the interferogram carries a carrier frequency. Fourier transform methods [4] are applied to extract the phase information from only one image, as opposed to phase shifting interferometry, for which at least 3 images are necessary.

4.2 Iterative methods
With the iterative methods, the compensation pattern displayed by the MMA is iteratively modified while the fiber CE is monitored. The genetic algorithm¹ [5] assesses several sets of parameters (Zernike coefficients) in terms of fiber CE. Only the best sets of parameters are selected; bad sets are discarded. In order to get back to the original number of sets, new sets are created by inheritance of the properties of two or more

¹ The genetic algorithm components library GAlib (http://lancet.mit.edu/ga/) was used.
previous sets (parents). Analogous to nature, the parameters of all new sets also undergo a certain degree of mutation in order to find the best fit to the environment. In the direct search algorithm [6], fractions of the MMA (zones) are optimized successively in a random order. In each optimization step the deflection of one zone is changed (e.g. in four steps) while the fiber CE is observed. The (four) measured CEs are then used to calculate the optimal zone deflection with algorithms equivalent to the ones used in phase shifting interferometry.
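The per-zone evaluation can be sketched as follows. `measure_ce` is a hypothetical callback standing in for the monitored fiber CE, and the four trial deflections at 0°, 90°, 180° and 270° feed the same arctangent evaluation as four-step phase-shifting interferometry:

```python
import numpy as np

def optimal_zone_phase(ce0, ce90, ce180, ce270):
    """Optimal phase of one zone from four trial deflections, assuming
    CE(theta) ~ a + b*cos(theta - theta_opt) for a small zone -- the
    four-step phase-shifting-interferometry arctangent formula."""
    return np.arctan2(ce90 - ce270, ce0 - ce180)

def direct_search(phases, measure_ce):
    """One pass of the direct search: optimize the zones successively
    in random order.  `measure_ce` is a hypothetical stand-in for the
    monitored fiber coupling efficiency."""
    rng = np.random.default_rng(0)
    steps = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
    for z in rng.permutation(phases.size):
        ce = []
        for s in steps:
            trial = phases.copy()
            trial[z] = s           # deflect only this zone
            ce.append(measure_ce(trial))
        phases[z] = optimal_zone_phase(*ce)
    return phases
```

With a toy additive CE model, a single pass recovers the target phases exactly, because the contributions of the untouched zones cancel in the pairwise differences.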
5 Optimization experiments

Using the different optimization methods, the MMA had to correct continuous wavefront errors of 0.7λ, 2.0λ and 7.9λ PV that were created with the DMM. A step wavefront error was generated by introducing the crossed edges of two microscope cover plates. The grey-scaled phase errors can be seen in table 1.

5.1 Performance of the optimization methods
Table 1 shows the wavefront errors that were corrected by the MMA, together with the fiber coupling efficiencies (CEs) that could be obtained with the different optimization methods. The CE is defined as the ratio of the light intensity exiting the fiber to the entire light intensity in the fiber input plane (including all diffraction orders caused by the periodic MMA structure). Of all optimization methods the direct search algorithm performed best with small and continuous wavefront errors. An important parameter for the optimization is the size of the zones, i.e. the number of pixels that are optimized simultaneously. On the one hand, the zones must not be chosen too large, since wavefront error variations within a zone then remain undetected. This problem becomes serious for big slopes and especially for step errors. On the other hand, the zones must not be chosen too small, since the intensity
Table 1. Imposed wavefront errors and achieved coupling efficiencies with elapsed times (wavefront phase plots omitted)

Imposed WFE (rms / PV):    0.13 λ / 0.7 λ | 0.25 λ / 2.0 λ | 1.2 λ / 7.9 λ | 0.17 mm glass plate edges (step)

Direct search algorithm:   63% (22 sec)  | 49% (1.5 min) | 40% (12 min)              | 42% (12 min)
Genetic algorithm:         62% (36 min)  | 47% (45 min)  | optimization unsuccessful | not applicable
Shack-Hartmann sensor:     60% (0.4 sec) | 48% (0.8 sec) | 40% (0.8 sec)             | not applicable
Interferometer:            61% (2 sec)   | 46% (2 sec)   | 44% (2 sec)               | 47% (2 sec)
Without compensation:      51%           | 1.4%          | 0.27%                     | 8.1%
change at the detector, caused by the zone's phase variation during optimization, must be larger than the noise in the signal. A typical progression of the CE during the optimization is depicted in fig. 3 (left). The CE rises roughly quadratically until all zones have been optimized once (at 480 optimization steps). A second optimization of all zones (steps 481 to 960) did not result in a further improvement. The CE jump at the very end of the optimization is achieved by smoothing the phase distribution: phase values that are not at the center of a zone are interpolated. Contrary to the zonal optimization in the direct search algorithm, the genetic algorithm performs a modal (global) optimization in terms of Zernike modes. In principle it can cope better with big slopes than the direct search algorithm, provided the error can be represented by the chosen number of Zernike modes. Step errors cannot be represented by Zernike modes; therefore the GA is not suitable for step errors. The consequence of higher aberration amplitudes is an unpredictably longer optimization time. A further very important parameter of the optimization is the mutation magnitude. Too little mutation causes an unnecessarily long optimization time, while with too much mutation an iteration toward the "perfect" wavefront becomes unlikely. In fig. 3 (right) a typical CE progression within a GA optimization is depicted. The mutation magnitude in this optimization run was cut in half every 150 generations.
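A minimal sketch of such a genetic optimization follows. GAlib itself is not reproduced here; the population size, selection and crossover details are assumptions, and the hidden quadratic fitness merely stands in for the measured CE:

```python
import numpy as np

def ga_optimize(fitness, n_coeffs, pop=20, keep=5, gens=300, mut0=0.5, seed=1):
    """Minimal genetic-algorithm sketch: keep the best parameter sets,
    refill the population by two-parent crossover, and mutate with a
    magnitude that is cut in half every 150 generations."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, (pop, n_coeffs))
    for g in range(gens):
        mut = mut0 * 0.5 ** (g // 150)          # halve mutation every 150 gens
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-keep:]]   # best sets survive
        children = []
        while len(children) < pop - keep:
            a, b = parents[rng.integers(keep, size=2)]
            mask = rng.random(n_coeffs) < 0.5              # uniform crossover
            children.append(np.where(mask, a, b) + rng.normal(0.0, mut, n_coeffs))
        population = np.vstack([parents] + children)
    scores = np.array([fitness(ind) for ind in population])
    return population[int(np.argmax(scores))]

# usage with a stand-in fitness (hypothetical "true" Zernike set):
target = np.array([0.5, -1.0, 0.25, 0.0, 0.8])
best = ga_optimize(lambda c: -np.sum((c - target) ** 2), n_coeffs=5)
```

Keeping the best sets unchanged (elitism) guarantees that the best CE found so far is never lost between generations.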
Fig. 3. Typical coupling efficiency progression for the iterative methods with the intermediate continuous wavefront error using the direct search algorithm with 24x20 zones (left) and genetic algorithm with 66 Zernike modes (right).
The wavefront control with the Shack-Hartmann sensor requires more equipment than the stochastic approaches. In turn we have a very powerful control that measures the wavefront in "one shot" and can also perform iterative measurements in a closed-loop manner. Measurable wavefront slopes are only limited by hardware parameters such as the focal length of the microlenses and the size of the subaperture of each microlens. In our setup a slope of 0.36 degrees can be measured, which corresponds to a wavefront tilt of 95λ over the MMA aperture and is definitely sufficient for the given wavefront errors. As in the GA approach, step errors cannot be detected reasonably with the SHS since, also from a mathematical point of view, the slope of a phase step is not defined, and the slope is the relevant measurement quantity with this technique. The interferometric approach provides "one shot" measurements with step error detection capability. The carrier frequency is adjusted so that one interference fringe has a period of approximately four camera pixels. The filtering in Fourier space admits spatial frequencies between 0.5 and 1.5 times the carrier frequency, i.e. local fringe periods between 8 and 2.67 camera pixels are allowed. This limits the measurable wavefront tilt to 96λ over the MMA aperture, which is far above the present wavefront tilts.

5.2 Comparison to flip mirror array
The same driving board that drives the piston MMA chip can also drive an MMA chip with flip mirrors, which can also perform continuous motion; in this investigation, however, it was used in a binary on/off mode. Of the two MMA types the piston type MMA performed much better, since the tilt MMA operates in a binary amplitude mode, i.e. a big fraction of the incident light (about 50%) is taken out of the system and light is also diffracted into unwanted diffraction orders.
6 Summary and conclusions

With the given micromirror array, several wavefront optimization methods could be applied that greatly improved the fiber coupling efficiency. The improvement was especially significant for the piston type MMA. Direct phase measurement methods (Shack-Hartmann sensor and interferometry) are superior when time is critical or when strong aberrations are present. The advantage of the iterative methods (direct search algorithm and genetic algorithm) is the simple experimental setup and the fact that no calibration is necessary. These methods performed especially well with small aberrations.
Acknowledgements

We thank EADS-Astrium and the Fraunhofer-Institut für Photonische Mikrosysteme (IPMS) for the effective collaboration and ESA-ESTEC for supervising and financing the project (16632/02/NL/PA).
References

1. Kohler, C, Schwab, X, Osten, W (2005) Optimally tuned spatial light modulators for digital holography. Submitted for publication in Applied Optics
2. Osten, W, Kohler, C, Liesener, J (2005) Evaluation and Application of Spatial Light Modulators for Optical Metrology. Optoel'05 (Proceedings Reunión Española de Optoelectrónica)
3. Seifert, L, Liesener, J, Tiziani, H.J (2003) Adaptive Shack-Hartmann sensor. Proc. SPIE 5144:250-258
4. Takeda, M, Ina, H, Kobayashi, S (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. Journal of the Optical Society of America 72:156-160
5. Schöneburg, E, Heinzmann, F, Feddersen, S (1994) Genetische Algorithmen und Evolutionsstrategien. Addison Wesley
6. Liesener, J, Hupfer, W, Gehner, A, Wallace, K (2004) Tests on micromirror arrays for adaptive optics. Proc. SPIE 5553:319-329
Adaptive Correction to the Speckle Correlation Fringes using twisted nematic LCD
Erwin Hack and Phanindra Narayan Gundu
EMPA, Laboratory Electronics/Metrology, Überlandstrasse 129, CH-8600 Dübendorf, Switzerland
1 Introduction

In digital speckle pattern correlation interferometry (DSPI), intensity patterns from the interference of the speckled object wave with a reference wavefront are recorded digitally [1]. Subtracting two interference patterns recorded before and after an object change reveals a correlation fringe pattern. Speckle correlation fringes can be conceived as a smoothly varying intensity distribution multiplied by a noise term. The high noise content is due to the random distribution of speckle intensity and speckle phase across the image plane. Although intensity modulation and speckle phase can be eliminated by phase stepping [2,3] or other phase retrieval methods, noise is not eliminated completely because of the limited dynamic range of the sensor, given by the saturation of the camera, the digitisation depth and the electronic noise. Besides, these techniques require recording several frames. Fourier transform methods, temporal phase unwrapping and spatial phase shifting have been developed to overcome the sequential image capture. The speckle noise remaining in the phase map is generally eliminated by filtering techniques. Many digital processing techniques have been developed to reduce the speckle noise in the fringe pattern [4-8]. Lowpass and Wiener filtering have proved inefficient and inadequate because they smooth both the noise and the signal. Local averaging and median filtering with various kernel sizes and shapes and multiple iterations result in blurring of the image. Recently developed methods such as wavelet-based filtering have met with some success, but the intensity profile of the fringes is not restored completely. To obtain a phase distribution with minimal error one would need to restore the smooth intensity profile of the fringes across the image plane.
To improve the intensity profile of speckle correlation fringes, we reduce the speckle noise by reducing the range of the modulation intensity values. The pixelwise adaptive compensation is made only once, before the deformation of the object. It leaves the correlation fringes with a well-defined intensity envelope interspersed with notches or gaps. Hence, a simple morphological filtering – a dilation – is sufficient to obtain smooth correlation fringes.
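The dilation step can be illustrated on a one-dimensional toy fringe; the window width and the notch probability below are assumptions:

```python
import numpy as np

def gray_dilate(signal, width=9):
    """Grayscale dilation: a sliding maximum over `width` samples -- the
    simple morphological filter that fills the dark notches left in the
    fringe envelope after the adaptive compensation."""
    pad = width // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([padded[k:k + width].max() for k in range(signal.size)])

# toy fringe: a smooth envelope with 30% of the pixels knocked out
rng = np.random.default_rng(3)
u = np.linspace(0.0, 4.0 * np.pi, 400)
envelope = 0.5 * (1.0 + np.cos(u))
notched = envelope * (rng.random(u.size) > 0.3)
restored = gray_dilate(notched)
```

Because the envelope varies slowly compared with the window, the sliding maximum restores it closely while erasing the isolated dark gaps.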
2 Speckle noise

The intensity observed on a CCD where a beam scattered from a rough object interferes with a plane reference beam is given by:

\[ I_i = I_0 + I_M \cos(\varphi_{sp,i} - \varphi_{ref}), \qquad I_0 = I_{ref} + I_{sp}, \quad I_M = 2\sqrt{I_{ref}\, I_{sp}} \tag{1} \]
where the background and modulation intensities, I_0 and I_M, are expressed in terms of the reference and speckle wave intensities, I_ref and I_sp. After an object state change, an intensity pattern I_f is recorded. Assuming that neither the speckle intensity nor the speckle phase change,

\[ I_f = I_0 + I_M \cos(\varphi_{sp,f} - \varphi_{ref}) \tag{2} \]
Correlating the two intensities, Eqs. 1 and 2, by subtraction leads to the well-known expression for the speckle correlation fringe pattern [1]

\[ F = I_f - I_i = -2 I_M \sin\!\left(\frac{\Delta\varphi}{2}\right) \sin\!\left(\frac{\Delta\varphi}{2} + \varphi_{sp,i} - \varphi_{ref}\right) \tag{3} \]

where Δφ = φ_sp,f − φ_sp,i is the phase change due to the object state change. The speckle noise in the fringe pattern is multiplicative in nature and arises from the two highly varying terms in Eq. 3, the modulation intensity, I_M, and the speckle phase term

\[ P_{sp} = \sin\!\left(\frac{\Delta\varphi}{2} + \varphi_{sp,i} - \varphi_{ref}\right) \tag{4} \]
Assuming uncorrelated speckle intensity and speckle phase distributions, the signal-to-noise ratio (SNR) is [9]
\[ \mathrm{SNR} = \frac{\langle F \rangle^2}{\mathrm{var}[F]} = \frac{\langle I_M \rangle^2 \langle P_{sp} \rangle^2}{\mathrm{var}[I_M]\,\langle P_{sp} \rangle^2 + \langle I_M^2 \rangle\,\mathrm{var}[P_{sp}]} \tag{5} \]
which is independent of the difference phase term, as expected for multiplicative noise. Eq. 5 shows that reducing the variance of the modulation intensity, of the speckle phase term, or of both will improve the SNR and lead to better fringe quality. We consider here a fully developed, polarized speckle field, whose intensity obeys negative exponential statistics and whose phase is uniformly distributed. The distribution of I_sp for an unresolved speckle pattern depends upon the average number of speckles, n, in one pixel of the CCD [1], from which the joint probability density function (pdf) of the modulation and background intensity can be deduced when a smooth reference wave is used [10]
\[ p(I_0, I_M) = \frac{n^n\, I_M \left( I_0 - I_{ref} - \dfrac{I_M^2}{4 I_{ref}} \right)^{n-2}}{2 I_{ref}\, \Gamma(n-1)\, \langle I_{sp} \rangle^n}\, \exp\!\left( -n\, \frac{I_0 - I_{ref}}{\langle I_{sp} \rangle} \right) \tag{6} \]

with I_M² ≤ 4 I_ref (I_0 − I_ref)
and n ≥ 2. Note that, due to the integration
over the pixel, the bracket expression in the numerator is not zero. The joint pdf reaches its maximum value at

\[ \hat{I}_0 = I_{ref} + \langle I_{sp} \rangle\, \frac{2n-3}{2n}, \qquad \hat{I}_M = 2\sqrt{\frac{I_{ref}\, \langle I_{sp} \rangle}{n}} \tag{7} \]
From Eq. 6 the pdf of the modulation intensity alone is

\[ p(I_M) = \frac{n\, I_M}{2\, I_{ref}\, \langle I_{sp} \rangle}\, \exp\!\left( -\frac{n\, I_M^2}{4\, I_{ref}\, \langle I_{sp} \rangle} \right) \tag{8} \]
which has its maximum at the same value as given in Eq. 7, while the interval [0, 2Î_M] includes 95.6% of all modulation intensity values.
3 Reducing the variance of I_M

We note from Eq. 1 that I_M can be modified by varying I_ref in each pixel to obtain a constant modulation intensity, i.e. var[I_M] = 0. In order to illustrate the concept and its effect we consider the out-of-plane fringe pattern expected from a point-loaded bending beam clamped at x = 0. Fig. 1 shows the analytical bending line z(x) and, at the top, a cross-section through simulated speckle fringes, where we have assumed a modulation intensity distribution given by Eq. 8 for n = 4. The same cross-section is displayed with a constant modulation intensity, i.e. the smooth envelope multiplied by the random speckle phase factor, Eq. 4, only.
(Figure 1 plot: bending line z(x) = (x/L)² (3 − x/L) in arbitrary units, with the simulated fringe cross-sections plotted above it versus beam position.)
Fig. 1. (a) Simulated cross-section through the out-of-plane speckle fringe pattern expected from a bending beam. (b) smooth fringe pattern (c) speckle fringe pattern with constant modulation intensity (d) bending line
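The simulated cross-section of Fig. 1 can be reproduced in outline; the fringe count and the Rayleigh-type modulation statistics (standing in for Eq. 8 with n = 4, up to scale) are assumptions:

```python
import numpy as np

# Sketch of the simulation behind Fig. 1: out-of-plane speckle fringes
# of a point-loaded cantilever, with random vs. constant modulation.
rng = np.random.default_rng(0)
L = 1.0
x = np.linspace(0.0, L, 1000)
z = (x / L) ** 2 * (3.0 - x / L)                     # bending line (Fig. 1d)
dphi = 2.0 * np.pi * 6.0 * z / z.max()               # ~6 fringes across the beam
speckle_phase = rng.uniform(-np.pi, np.pi, x.size)   # phi_sp,i - phi_ref, Eq. 4
i_m = rng.rayleigh(scale=1.0, size=x.size)           # random modulation intensity

# fringe cross-sections per Eq. 3: random I_M (Fig. 1a) vs. constant I_M (Fig. 1c)
speckled = 2.0 * i_m * np.sin(dphi / 2) * np.sin(dphi / 2 + speckle_phase)
constant = 2.0 * 1.0 * np.sin(dphi / 2) * np.sin(dphi / 2 + speckle_phase)
```

The randomly modulated trace shows a markedly larger intensity spread than the constant-modulation one, which is exactly the variance the adaptive compensation removes.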
4 Experimental implementation

4.1 Amplitude-only SLM
The experimental implementation is performed using an amplitude-only spatial light modulator (Sony LCX016AL SLM with 832 x 624 pixels at a pitch of 32 µm) in a conventional ESPI set-up [9]. In general, twisted-nematic LCDs vary both the phase and the amplitude of the incident light together, but not independently. Nevertheless, it has been shown [11] how to obtain amplitude-only characteristics with elliptically polarized light. The phase and amplitude transmission characteristics are plotted in Fig. 2.
Fig. 2. Intensity variation and phase stability for an amplitude-only SLM: normalized intensity and phase change (deg.) versus gray level (0-255).
The plot shows that the incident intensity can be reduced by up to 96% while the phase changes by less than 3° over the entire dynamic range (0 to 255 grey levels) of the LCD.

4.2 Adaptive DSPI
The experimental verification is performed using a conventional DSPI set-up (Fig. 3). In this set-up, an F/3.5 imaging lens is used to image both the LCD and the rough object onto the CCD. By working with ~1/3 magnification, we have a one-to-one pixel correspondence between the LCD and the CCD. The speckle intensity and the reference intensity are measured by shuttering the reference and object beam, respectively. From the values of I_sp and I_ref the modulation intensity I_M is estimated at each pixel according to Eq. 1. In order not to saturate the camera, the total intensity must remain below 255 GL for a fraction P of the pixels (e.g. P = 99%). The modulation intensity values should be maximized by adaptively controlling the reference intensity transmission at each pixel within the dynamic range of the LCD, i.e. τ ∈ [τ_min, 1] (see Fig. 2).
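The pixelwise adaptation can be sketched by solving Eq. 1, with the reference attenuated by τ, for the transmission that yields a uniform target modulation; the 128 GL target and τ_min = 0.04 follow from the analysis below, the rest are assumptions:

```python
import numpy as np

def lcd_transmission(i_ref, i_sp, i_m_target=128.0, tau_min=0.04):
    """Per-pixel LCD transmission equalizing the modulation intensity.
    From I_M = 2*sqrt(tau * I_ref * I_sp) (Eq. 1 with an attenuated
    reference), tau = I_M_target**2 / (4 * I_ref * I_sp), clipped to
    the LCD's dynamic range [tau_min, 1]."""
    tau = i_m_target ** 2 / (4.0 * i_ref * i_sp)
    return np.clip(tau, tau_min, 1.0)

# usage: two speckle intensities around the optimum working point
tau = lcd_transmission(122.0, np.array([50.0, 200.0]))
```

Wherever the clip limits are not reached, the resulting modulation intensity 2·sqrt(τ·I_ref·I_sp) equals the target exactly.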
Fig. 3. Experimental realization of adaptive speckle correction. BS are non-polarizing beam splitters, P1 and P2 are linear polarizers, QWP1 and QWP2 are quarter-wave plates, TN-LCD is the twisted-nematic LCD, IL is the imaging lens.
The saturation constraints read

\[ I_{i,f} \le I_0 + I_M \le 255\ \mathrm{GL}, \qquad |I_f - I_i| \le 2 I_M \le 255\ \mathrm{GL} \tag{9} \]

In order to optimise the measurement, we have to choose an optimum modulation intensity which copes with as many speckle intensities as possible; following Eq. 8, we accommodate most of the speckles within an interval of twice the most probable values. Hence from Eq. 9

\[ 2\hat{I}_0 + 2\hat{I}_M = 2 I_{ref} + \frac{2n-3}{n}\,\langle I_{sp} \rangle + 4\sqrt{\frac{I_{ref}\,\langle I_{sp} \rangle}{n}} \le 255\ \mathrm{GL}, \qquad 4\hat{I}_M = 8\sqrt{\frac{I_{ref}\,\langle I_{sp} \rangle}{n}} \le 255\ \mathrm{GL} \tag{10} \]

Best results are expected for full modulation, i.e. I_M = 128 GL, which then calls for a background intensity of the same level. Due to the dynamic range of the LCD, the reference intensity can be varied within the interval [τ_min · I_ref, I_ref]:

\[ I_M \overset{!}{=} 2\sqrt{I_{sp,max}\, I_{ref}\, \tau_{min}} = 2\sqrt{I_{sp,min}\, I_{ref}} \tag{11} \]

Hence, the range of speckle intensities that can be accommodated by the adaptive technique is given by I_sp,min = τ_min · I_sp,max. These cases of bright and dark speckles correspond to a maximum total intensity of

\[ I_{i,f}(\mathrm{bright}) \le I_{ref}\,\tau_{min} + I_{sp,max} + I_M \le 255\ \mathrm{GL}, \qquad I_{i,f}(\mathrm{dark}) \le I_{ref} + I_{sp,min} + I_M = I_{ref} + I_{sp,max}\,\tau_{min} + I_M \tag{12} \]

The optimum value I_M = 128 GL then yields the condition

\[ I_{ref} + I_{sp,max} \le \frac{255\ \mathrm{GL}}{1 + \tau_{min}} \tag{13} \]

In our case we have τ_min = 0.04, from which I_ref + I_sp,max ≤ 245 GL. From Eq. 13 the two intensities are found to be I_ref = I_sp,max = 122 GL.
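Plugging in the numbers gives a quick check of the gray-level budget of Eq. 13:

```python
# Numeric check of the gray-level budget (Eq. 13) for tau_min = 0.04:
tau_min = 0.04
bound = 255.0 / (1.0 + tau_min)   # I_ref + I_sp,max must stay below ~245 GL
i_ref = i_sp_max = bound / 2.0    # budget split equally -> about 122 GL each
```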
Conclusion

Theory, simulation and experimental investigation have shown the feasibility of improving speckle fringe patterns by modulating the I_M values pixelwise with an amplitude-only LCD SLM. From Fig. 9c we can see that without adaptive modulation the intensity in the central region of the fringe pattern is saturated and the fringes at the rim are rather weak. In contrast, from Fig. 9d we note that the fringe pattern after adaptive modulation is much more homogeneous, and fringes are also discernible along the rim. This can be understood from the histograms in Figs. 9e and 9f of the marked bright fringes in Figs. 9a and 9b, respectively. Most of the pixels are clustered close to zero grey value in Fig. 9e, resulting in a weak appearance of the fringe, whereas the pixels in Fig. 9f have a broad spectrum of values, leading to a better quality fringe pattern. The SNR has been improved significantly. Complete reduction of the spectrum to a single value was not possible with our LCD SLM because of the limitation of its dynamic range.
References

1. Rastogi, P.K (Ed.) (2001) Digital speckle pattern interferometry and related techniques. John Wiley & Sons, Chichester
2. Creath, K (1985) Phase-shifting speckle interferometry. Appl. Optics 24:3053-3058
3. Surrel, Y (1996) Design of algorithms for phase measurements by the use of phase stepping. Appl. Optics 35:51-60
4. Arsenault, H.H, April, G (1976) Speckle removal by optical and digital processing. J. Opt. Soc. Am. 66:177
5. Jain, A.K, Christensen, C.R (1980) Digital processing of images in speckle noise. Proc. SPIE 243:46-50
6. Lim, J.S, Nawab, H (1980) Techniques for speckle noise removal. Proc. SPIE 243:35-44
7. Federico, A, Kaufmann, G.H (2001) Comparative study of wavelet thresholding methods for denoising electronic speckle pattern interferometry fringes. Opt. Eng. 40:2598-2604
8. Sveinsson, J.R, Benediktsson, J.A (2002) Review of applications of wavelets in speckle reduction and enhancement of SAR images. Proc. SPIE 4541:47-58
9. Hack, E, Gundu, P.N, Rastogi, P.K (2005) Adaptive correction to the speckle correlation fringes using twisted nematic LCD. Appl. Opt. 44:2772-2781
10. Lehmann, M (1996) Phase-shifting speckle interferometry with unresolved speckles: A theoretical investigation. Opt. Commun. 128:325-340
11. Pezzaniti, J.L, Chipman, R.A (1993) Phase-only modulation of a twisted nematic liquid-crystal TV by use of the eigenpolarization states. Opt. Lett. 18:1567-1569
Random phase shift interferometer
Doloca, Radu; Tutsch, Rainer
Technische Universität Braunschweig, Institut für Produktionsmesstechnik
1 Introduction

Mechanical vibrations give rise to significant problems for most interferometric test methods. With the classic phase-shift interferometric methods the experimental data are obtained sequentially, taking four or five frames with a CCD camera. This means that during the measurement the fringe pattern must remain stable. Because of floor vibrations the interferometric systems must be mounted on a vibration-isolation table, which is usually expensive. Several vibration-tolerant solutions have been developed [1]. Taking the interferogram data at higher frame rates pushes the sensitivity to higher vibration frequencies [2], [3]. With instantaneous phase-shifting techniques, using polarisation components [4], [5] or holographic elements [6], [7], [8], the beams are split into multiple paths and phase-shifted interferograms are acquired simultaneously. This paper presents the concept and the first experimental setup of an interferometric system that is designed to work without vibration isolation and uses the random mechanical vibrations as phase shifter. An additional detector system consisting of three photodiodes is used to determine the phase shifts that occur at the moments when the interference images are taken. An adequate PSI algorithm for random phase shifts must be used.
2 Experimental setup

The system consists basically of a two-beam Fizeau interferometer. For a Fizeau interferometer only the relative oscillations between the reference and the test plates influence the fringe pattern. This reduces the
demands on the adjustment of the optical components with respect to internal vibration sensitivity and thermal effects. Two orthogonally polarised laser beams of different wavelengths, a continuous He-Ne laser with λ1 = 632.8 nm wavelength and a pulsed laser diode with λ2 = 780 nm wavelength (see Fig. 1), are coupled through the
Fig. 1. Experimental setup
beam splitter-1 and collimated with the achromat-1 onto the reference and test plates, which are 50 mm in diameter. The waves reflected from the test and reference surfaces are deviated by the beam splitter-2 and pass through the spatial filter-2. Using the polarising beam splitter-3, the inter-
Fig. 2. The oscillating mounting system of the test plate
Fig. 3. The detector system, consisting of three photodiodes P1, P2, P3 placed in the interference field of the collimated He-Ne rays
ference fields are separated. The fringes from the pulsed laser are projected onto the sensor of a CCD camera, and the He-Ne fringes are collimated by the achromat-2 and hit the detector system that consists of three photodiodes. The achromats have the same focal length f = 300 mm, in order to achieve the magnification M = 1 at the photodiode system. Under the influence of the mechanical vibrations, the relative oscillations between the test and the reference plates lead to a continuous random phase shift between the test and reference beams. We assume the vibration-induced movements of the reference plate and the test plate to be rigid-body shifts and tilts.
3 Description of the method

Preliminary tests are performed on a vibration-isolation table. The mechanical holder of the test plate is oscillated around the X axis with a piezoelectric transducer (see Fig. 2). The post holder has a parallelepipedic form, with the X side much longer than the Z side, so all the points of the test surface and of the mounting system oscillate in phase around the X axis. Starting with simple oscillating functions, such as sinusoidal signals, the excitation is gradually extended to random oscillations in order to simulate the influence of floor vibrations. The oscillations of the test plate produce a continuous phase shift between the test and reference beams. The CCD camera is externally triggered and synchronised with the laser pulses, which are intense and short enough to freeze the laser pulse fringes on the CCD sensor and to obtain good quality interferogram images. The three photodiodes of the detector system are placed in the interference field of the collimated He-Ne beams and define a plane perpendicular to the optical axis. The sensitive area of each photodiode is one square millimeter. The analog signals of the photodiodes show the time dependence of the intensity of the He-Ne fringe pattern at three different measurement points, and are connected through an acquisition card to a computer running LabVIEW software. In case of a phase shift linear in time, which would correspond to a translation of the test plate with constant velocity in the direction of the optical axis, the signal of a photodiode has a sinusoidal dependence in time. According to the fundamental equation of PSI [9], the variation of the intensity measured by the photodiodes can be written as:

\[ I_i(t) = I'_i + I''_i \cos[\delta_i(t) + \Phi_i], \qquad i = 1, 2, 3; \quad \delta_i(t) = \frac{4\pi}{\lambda_1}\, z_i(t) \tag{1} \]
where I'_i is the intensity bias, I''_i is half the peak-to-valley intensity, δ_i(t) is the time-varying phase shift introduced into the test beam, Φ_i is a constant phase offset, i is the photodiode index, and z_i(t) is the optical path difference (OPD) introduced into the test beam. For continuous oscillations of the test plate the photodiode signal looks more like a frequency-modulated signal, as can be seen in Fig. 4a for a 100 Hz oscillation frequency of the piezo transducer. In this representation the signals have an arbitrary intensity bias.
In order to correlate the photodiode signals with the test plate oscillations, a single-point laser vibrometer from Polytec is used. A laser vibrometer measures the vibrations and the velocity of an object in the direction of the laser beam. It is oriented as shown in Fig. 2 and measures the oscillations in the Z direction of a point on the mechanical holder of the test plate. The vibrometer signal is also connected through the acquisition card and synchronised with the photodiode signals, as shown in Fig. 4b. The evaluation of the photodiode signals gives the variation of the phase shifts with time for the three corresponding individual points on the test surface. The plane defined by these three points yields the variation of the phase shift for every point on the test surface. It can be observed that an extreme value of the photodiode signals corresponds to an extreme value of the vibrometer signal. We call these the main extreme values. Between two main extreme points, the signals present a series of maximum and minimum values, IM and Im, so the vibrations introduce phase shifts of up to several wavelengths. Because of the tilt, there is a spatial variation of the phase shift across the test surface. A photodiode that corresponds to a point placed at a higher level on the test surface, P1 for example, gives a signal with a larger number of maximum and minimum values, due to the larger oscillation amplitude. During the measurements, the interference fringes must remain several times larger than the sensitive area of a photodiode. Otherwise the signal would be the integration of the light intensity over an area comparable with the fringe width, and the signal contrast would be very low. To avoid this effect, the mechanical oscillation should be limited to about (3×10⁻³)°. To determine the dependence of the phase shift in time for the three individual points on the test surface, if we set (see Eq. 1)
I'_i = 0 and Φ_i = 0, we obtain

\[ z_{ij}(t) = \frac{\lambda_1}{4\pi}\, \arccos\!\left( \frac{I_i(t)}{I''_i} \right), \qquad i = 1, 2, 3 \ \text{(photodiode index)} \tag{2} \]
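Equation 2 can be sketched numerically for one interval between two main extreme values, where the OPD varies monotonically; the 200-sample ramp and the modulation amplitude of 100 in the usage are illustrative assumptions:

```python
import numpy as np

LAMBDA1 = 632.8  # He-Ne wavelength in nm

def opd_between_extrema(intensity, i_mod, z_offset=0.0, rising=True):
    """Invert Eq. 2 on one interval between two extreme values of a
    photodiode signal: z = (lambda1 / 4*pi) * arccos(I / I''), with the
    sign and offset chosen so that consecutive segments join smoothly."""
    frac = np.clip(intensity / i_mod, -1.0, 1.0)   # guard against noise
    z = LAMBDA1 / (4.0 * np.pi) * np.arccos(frac)
    return z_offset + (z if rising else -z)

# usage: recover a linear OPD ramp spanning one arccos branch (0..lambda/4)
t = np.linspace(0.0, 1.0, 200)
z_true = t * LAMBDA1 / 4.0
signal = 100.0 * np.cos(4.0 * np.pi * z_true / LAMBDA1)
z_rec = opd_between_extrema(signal, 100.0)
```

Stitching such segments with alternating signs and running offsets, as described next, yields z_i(t) over the full measurement time.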
Using a computation algorithm, the main extreme values are identified. We iteratively apply this equation on every time interval T_ij = (t_ij, t_i(j+1)) between the maximum and minimum values and obtain the OPD variations z_i(t) for the entire measurement time. The results (see Fig. 4b) show
Fig. 4. a) The simultaneous photodiode signals. b) The corresponding calculated oscillations of the three individual points on the test surface
the oscillations of the three individual points on the test surface, with arbitrary offsets. The oscillations are in phase, but with different amplitudes, due to the different position levels of the points. There is good agreement with the vibrometer signal, which shows the oscillations of a measurement point situated on the mounting system, at a higher level than the test plate (see Fig. 2). Combining the photodiode signals, we obtain the oscillating plane defined by the three individual points. Under the assumption of a rigid-body movement of the test plate we obtain the time dependence of the OPD shift z(x, y, t) at every measurement point on the test surface. Because of the arbitrary offsets of the z_i(t), the OPD shift is determined with reference to an arbitrarily tilted plane. Only shifts in the Z direction and tilts around the X and Y axes are effective for fringe modulation, while lateral movements of the test plate can be neglected. While the movement of the test plate is measured continuously, the CCD camera records a number of interferograms of the laser diode. The trigger signal of the pulse laser is connected in parallel with the photodiode signals to the acquisition card. Comparing the disc interference image of the laser diode on the CCD camera with the disc interference image of the He-
New Methods and Tools for Data Processing
172
Ne laser at the detector system plane, we can determine the (xi,yi) coordinates of the three individual points on the test surface. Using the threepoint form of the plane equation we obtain: A A1 x A2 y (3) A3 where A = det(X1X2X3); Xi = (xi, yi, zi(t)), and Ai is the determinant obtained by replacing Xi with a column vector of 1s. Next we get the phase shift variation for every point on the test surface: z x, y , t
G x, y , t
4S
O1
z x, y , t
(4)
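The plane interpolation of Eq. (3) is straightforward to implement; the sketch below uses the determinant form with the coordinate-column replacement described above (function and variable names are our own, not from the paper):

```python
import numpy as np

def plane_opd(points, x, y):
    """OPD z(x, y) of the plane through three points (xi, yi, zi), per Eq. (3)."""
    X = np.asarray(points, dtype=float)        # shape (3, 3), rows Xi = (xi, yi, zi)
    ones = np.ones(3)
    A = np.linalg.det(X)
    A1 = np.linalg.det(np.column_stack([ones, X[:, 1], X[:, 2]]))  # x-column -> 1s
    A2 = np.linalg.det(np.column_stack([X[:, 0], ones, X[:, 2]]))  # y-column -> 1s
    A3 = np.linalg.det(np.column_stack([X[:, 0], X[:, 1], ones]))  # z-column -> 1s
    return (A - A1 * x - A2 * y) / A3
```

For the plane z = 1 + x + 2y through (0, 0, 1), (1, 0, 2), (0, 1, 3), `plane_opd` reproduces z at any (x, y).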
A four-step PSI algorithm can be used for arbitrary phase shifts introduced between the sequentially recorded interferograms. The correlation on the time scale of the trigger signal of the pulse laser with the photodiode signals makes it possible to determine, for every interference image and at any point (x, y), the random phase shift δk(x, y) that occurs, where k = 1, 2, 3, 4 is the index of the interferogram. The equation system:

Ik(x, y) = I′(x, y) + I″(x, y) cos[φ(x, y) − δk(x, y)]   (5)
with three unknowns, I′(x, y) the intensity bias, I″(x, y) the half peak-to-valley intensity modulation, and φ(x, y) the unknown phase, can be solved for the value of φ(x, y) at every point of the interferograms. First, the intensity bias I′(x, y) is eliminated. If we introduce the notations:

c12 = cos δ1 − cos δ2,  s12 = sin δ1 − sin δ2,
c34 = cos δ3 − cos δ4,  s34 = sin δ3 − sin δ4,
R = (I1 − I2)/(I3 − I4),   (6)
the result of the PSI algorithm is:

φ(x, y) = tan⁻¹[(R c34 − c12)/(s12 − R s34)]   (7)
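With the four shifts δk known from the photodiode data, Eq. (7) reduces to a few lines of array code. A minimal NumPy sketch (names are illustrative; it assumes the fringe model Ik = I′ + I″ cos(φ − δk)):

```python
import numpy as np

def psi_four_step(I, delta):
    """Wrapped phase from four interferograms with known but arbitrary
    phase shifts, per Eqs. (6)-(7).
    I: array of shape (4, H, W); delta: the four phase shifts in radians."""
    c12 = np.cos(delta[0]) - np.cos(delta[1])
    s12 = np.sin(delta[0]) - np.sin(delta[1])
    c34 = np.cos(delta[2]) - np.cos(delta[3])
    s34 = np.sin(delta[2]) - np.sin(delta[3])
    R = (I[0] - I[1]) / (I[2] - I[3])                # Eq. (6)
    return np.arctan2(R * c34 - c12, s12 - R * s34)  # Eq. (7)
```

The arctan2 form returns the wrapped phase per pixel; the overall sign and 2π branch follow the chosen shift convention.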
The optical path difference OPD, related to the test surface profile, is given by:

OPD(x, y) = (λ2/4π) φ(x, y)   (8)
The real profile, in the Z-axis direction, is obtained after rotating the OPD profile (Eq. 8) by the tilt angle of the plane defined by the offsets of the zi(t) oscillations. As depicted in Fig. 4b), the values of zi(t) are calculated with reference to the plane of maximum tilt of the test plate. In order to find the value of the tilt angle, we change the offsets of the zi(t) iteratively to obtain the equilibrium position of the test plate, using the fact that dzi(t)/dt has its maximum value at the equilibrium position. The reconstruction of the OPD shifts zi(t) also works for random oscillations of the piezoelectric transducer; the same computation algorithm is used for the identification of the main extreme values and of the maxima and minima of the photodiode signals. In the next step we will test the interferometer placed on a table without a vibration-isolation system. The photodiodes will then measure the relative oscillations between the reference and test plates. Suitable dimensions of the post holder must be found, so that an appropriate oscillating phase shift is induced by the floor vibration.
4 Summary
In this paper a two-beam interferometric system has been introduced that is designed to work without vibration isolation and uses the random mechanical vibration as a phase shifter. A detector system consisting of three photodiodes is placed in the interference field of a continuous He-Ne laser to determine the random phase shifts, and a CCD camera is simultaneously used to evaluate the fringes from a pulsed laser diode. A four-step PSI algorithm for random phase shifts was described.
5 References
1. Hayes, J.: Dynamic interferometry handles vibration. In: Laser Focus World, March 2002, p. 109-113
2. Wizinowich, P.L.: Phase shifting interferometry in the presence of vibration: a new algorithm and system. In: Applied Optics, Vol. 29 (1990) 29, p. 3271-3279
3. Deck, L.: Vibration-resistant phase-shifting interferometry. In: Applied Optics, Vol. 35 (1996) 34, p. 6655-6662
4. Koliopoulos, C.L.: Simultaneous phase-shift interferometer. In: Doherty, V.J. (Ed.): Advanced Optical Manufacturing and Testing II, Proceedings of SPIE Vol. 1531 (1992), p. 119-127
5. Engineering Synthesis Design, Inc. (ESDI): product information: Intellium H1000, Tucson, AZ, 2005
6. Hettwer, A.; Kranz, J.; Schwider, J.: Three channel phase-shifting interferometer using polarisation-optics and a diffraction grating. In: Optical Engineering, Vol. 39 (2000) 4, p. 960-966
7. Millerd, J.E.; Brock, N.J.; Hayes, J.B.; Wyant, J.C.: Instantaneous phase-shift point-diffraction interferometer. In: Creath, K.; Schmit, J. (Eds.): Interferometry XII: Techniques and Analysis, Proceedings of SPIE Vol. 5531 (2004), p. 264-272
8. Millerd, J.E.; Brock, N.J.; Hayes, J.B.; North-Morris, M.B.; Novak, M.; Wyant, J.C.: Pixelated phase-mask dynamic interferometer. In: Creath, K.; Schmit, J. (Eds.): Interferometry XII: Techniques and Analysis, Proceedings of SPIE Vol. 5531 (2004), p. 304-314
9. Greivenkamp, J.E.; Bruning, J.H.: Phase shifting interferometry. In: Malacara, D. (Ed.): Optical Shop Testing, Chap. 14, 2nd Edition, 1992
Spatial correlation function of the laser speckle field with holographic technique
Vladimir Markov, Anatoliy Khizhnyak
MetroLaser, Inc., 2572 White Road, Irvine, CA 92614, USA
1 Introduction
Speckle-field characterization has been a subject of interest since the very beginning of coherent optics. Despite the variety of approaches that have been tried, a reliable and consistent method for an accurate experimental estimation of the spatial correlation function of such a field has not yet been achieved. An analysis of the power spectral density [1] and the autocorrelation function of the field's intensity distribution [2] allows one to derive such key parameters as the average lateral and longitudinal speckle sizes ⟨σ⊥⟩ and ⟨σ∥⟩. A number of methods may be used to measure these values experimentally, such as correlation [1-3], speckle photography [4], and analysis of spatial intensity [5]. However, they all provide qualitative, rather than quantitative, information, especially with regard to ⟨σ∥⟩. This report discusses a method that allows for the direct measurement of the spatial correlation function of the speckle field and, as a result, the 3-D dimensions of speckle. The method is based on a fundamental feature of volume holography allowing for selection of the component of the reconstruction field that is matched to the spatial structure of the field used at recording [6].
2 Characterization of the speckle-hologram
2.1 Recording stage
Let us consider a volume hologram recorded with a plane-wave object beam and a speckle reference beam. No general solution has been established so far for characterizing such a hologram, although a partial solution
can be obtained by using certain approximations, such as the first Born approximation, which works well at low diffraction efficiency [7]. The method we will apply is known as holographic mode analysis [8]. It can be used when the following conditions are satisfied: (1) a volume hologram is employed; (2) its thickness ℓ encloses several speckles (ℓ >> ⟨σ∥⟩) and the entire interaction area of the recording beams; (3) the spacing Λ of the holographic grating (cross-grating) is smaller than the inter-modulation component (Λ << ⟨σ⊥⟩). We will also assume that the strength of this inter-modulation grating (gsp) is much smaller than that of the cross-grating, a condition that can be satisfied by using photorefractive materials at their optimal recording geometry [9]. In this case, for a hologram illuminated with a reference beam RR(x, y, z0), which is different from the one used in recording, R0(x, y, z0), the reconstructed wave Srec(x, y, z) can be derived as [6]:

Srec(x, y, z) = iS0(x, y, z) × sin(πΔℓ/λ) × exp[−i(kz z + kx x)] × ∫∫P0 R0(x′, y′, z0) RR*(x′, y′, z0) dx′dy′   (1)

Here, S0(x, y, z) is the original object wave; R0,R(x, y, z0) are the recording (R0) and reconstruction (RR) speckle fields at the input plane P0 of the hologram (z = z0); Δ ~ n2|S0||RR| is the refractive-index modulation of the cross-grating; and λ is the recording wavelength. Compared to Kogelnik's solution for a plane-wave hologram [10], Eq. (1) differs only in the last multiplier, which is the scalar product of the recording and reconstruction fields at the input plane of the hologram. The presence of this multiplier is easy to explain. Indeed, the complete field inside the hologram can be described as the sum of two modes, M1(r) and M2(r), with different propagation constants. These two modes are superpositions of the reference and object waves:

M1,2(r) = [RR(r) ± S0(r)]/2   (2)
Illumination of the hologram with the reconstruction beam generates these two modes, with their excitation coefficients determined by the projection of the reconstruction field on the field of the mode. As a result, what propagates inside the hologram is a complex field composed of the superposition of the modes and the component of the reconstruction field that is orthogonal to the mode structure. The latter is the product of light scattering on the inter-modulation grating (gsp) and generates the noise in the reconstructed field. Hence, it is essential that the modulation of gsp remains small. In addition, the angle between the recording beams is larger than the angular spectra of these beams, resulting in an insignificant noise component in the reconstructed field. Taking into account Eq. (2), the complex field inside the hologram illuminated with the RR beam is:

E(r) = A(R0, RR)[M1(r) exp(Δ1z) + M2(r) exp(Δ2z)] + EOR(R0, RR)   (3)

where A(R0, RR) = ∫∫P0 R0(x′, y′, z0) RR*(x′, y′, z0) dx′dy′; EOR(R0, RR) is the part of the reconstruction field RR that is orthogonal to the recording field R0; and Δ1 and Δ2 are the propagation constants of the modes M1(r) and M2(r), respectively.

2.2 Reconstruction
It follows from Eq. (1) that the amplitude of the reconstructed beam is proportional to the degree of orthogonality between the spatial functions of the reference beam used at recording and at reconstruction of the hologram, i.e., the spatial correlation function of these two fields. Thus, by measuring the intensity of the reconstructed beam, it is possible to obtain a direct estimate of the spatial correlation function of these two fields. We will now show that this correlation function coincides with the mutual intensity function used to characterize the speckle field. When the ergodicity conditions of the light field that passes through the diffuser are satisfied, the integration in Eq. (1) can be substituted by an ensemble averaging:

∫∫P0 R0(x, y, z0) RR*(x, y, z0) dxdy = ⟨R0(x, y, z0) RR*(x, y, z0)⟩   (4)

which corresponds to the definition of the mutual intensity function JR(r, r′). Following [11], we now introduce the normalized mutual intensity μI(r, r′), as this is the parameter that can be measured experimentally. In the Fresnel approximation, μI(r, r′) is:
New Methods and Tools for Data Processing
178
μI(r, r′) = |JR(r, r′)|² / [JR(r, r) JR(r′, r′)] = |∫∫P0 P(ξ, η) exp[iτ(ξ² + η²) − i(αξ + βη)] dξdη|² / [∫∫P0 P(ξ, η) dξdη]²   (6)
where P(ξ, η) is a real-valued aperture function of the illuminated area on the diffuser. The vector r = (x, y, z0) describes the distance and direction from a point p = (ξ, η) on the diffuser at z = 0 to the point in the observation plane with transverse coordinates (x, y) at distance z0 from the diffuser. The vector r′ = (x + γ′, y + δ′, z0 + ε′) describes the position of the second point in the space of the speckle field under analysis. The parameters τ, α, and β are:

τ = kε′/2z0²;  α = k(zγ′ − ε′x)/z0²;  β = k(zδ′ − ε′y)/z0²   (7)
Thus, the diffraction efficiency of the volume hologram is directly proportional to the mutual intensity. Therefore, by recording a hologram of a plane wave and a speckle wave and measuring its diffraction efficiency as a function of the spatial position of the reconstruction speckle beam, the correlation function of this beam can be estimated. Because the hologram completely replicates the spatial distribution of the recorded speckle field, the method should allow for a complete characterization of the latter. As an example, let us consider a common experimental situation in which the diffuser is illuminated with a Gaussian beam, i.e.:

|P(p)|² = exp[−b(ξ² + η²)]   (8)

with D = 2/√b being the diameter of the illuminated part of the diffuser at the 1/e value. Taking into account Eqs. (6) and (7), Eq. (8) can be reduced to:

μI(r, r′) = b²/(b² + τ²) × exp[−b(α² + β²)/2(b² + τ²)]   (9)
New Methods and Tools for Data Processing
179
Eq. (9) has an analytical solution that allows us to treat the two most typical cases of speckle field characterization: when the spatial decorrelation to be measured is in the lateral or in the longitudinal direction.

2.2.1 Lateral shift
For a lateral shift only (γ′ ≠ 0, δ′ ≠ 0, ε′ = 0), the mutual correlation function takes the exponential form:

μI(r, r′) = exp[−2π²(γ′² + δ′²)/λ²z0²b]   (10)

The correlation function of Eq. (10) has the characteristic lateral scale at the 1/e value:

⟨σ⊥⟩ = (√2/π)(λ/D)z0   (11)
2.2.2 Longitudinal shift

The longitudinal correlation function, derived for a displacement along the z-axis (γ′ = 0, δ′ = 0, ε′ ≠ 0), can be expressed as a Lorentzian:

μI(r, r′) = [1 + (π²D⁴/16λ²z0⁴)ε′²]⁻¹   (12)

where the longitudinal scale is:

⟨σ∥⟩ = (4/π)(λ/D²)z0²   (13)
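For orientation, the two characteristic scales of Eqs. (11) and (13) are easy to evaluate numerically; a small sketch (the illumination parameters below are illustrative, not taken from the experiment):

```python
import math

def speckle_sizes(lam, D, z0):
    """Average lateral and longitudinal speckle size, per Eqs. (11) and (13).
    lam: wavelength, D: illuminated diffuser diameter, z0: observation distance."""
    sigma_lat = (math.sqrt(2.0) / math.pi) * lam * z0 / D       # Eq. (11)
    sigma_long = (4.0 / math.pi) * lam * z0**2 / D**2           # Eq. (13)
    return sigma_lat, sigma_long

# e.g. He-Ne illumination of a 2 mm spot observed 100 mm away
lat, lon = speckle_sizes(lam=633e-9, D=2e-3, z0=0.1)
```

For these illustrative values the lateral size is on the order of 10 μm while the longitudinal size is on the order of millimeters, reflecting the strong cigar-like elongation of speckles along the observation axis.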
3 Experimental results
To verify the proposed method of 3-D speckle mapping experimentally, a volume hologram was recorded in a thick (ℓ = 2.5 mm to 7.0 mm) Fe:LiNbO3 crystal. The recording geometry was asymmetric, with a normally incident plane-wave object beam and a speckle-encoded reference beam at incidence angle θR, forming a grating with spacing Λ ≈ 0.8 μm (<< ⟨σ⊥⟩). The crystal was set on a high-precision, computer-controlled XYZ positioning table. After recording, the hologram was illuminated with the reference speckle beam. Introducing a lateral (Δ⊥) or longitudinal (Δ∥) shift between the hologram and the reconstruction beam and measuring the dependence η = F(Δ), we could derive the shift-selectivity function η(Δ) that can be used to characterize the spatial (3-D) decorrelation between the recorded and reconstructing speckle patterns and, therefore, an “average” speckle size in both the lateral and longitudinal directions. Typical dependencies of the normalized intensity of the diffracted beam IN(Δ⊥,∥) = ID/I0 (the ratio of the diffracted beam intensity ID at a given Δ to the intensity I0 in the initial, non-decorrelated state) are shown in Fig. 1a and 1b, respectively. Evidently the behavior of IN(Δ⊥,∥) is consistent with the calculated correlation function of the speckle field (dashed line), thus making possible a direct estimation of ⟨σ⊥⟩ and ⟨σ∥⟩.
Fig. 1. Lateral (a), longitudinal (b) and relative shift selectivity (c) of the volume hologram
The data in Fig. 1a and 1b are sufficient for a complete characterization of the 3-D correlation function of an arbitrary speckle field regardless of the actual values of ⟨σ⟩. The same is true for IN(Δ∥), which does not depend upon the initial diffraction efficiency of the hologram and, therefore, has no threshold requirements or saturation restrictions [5].
Fig. 2. Normalized diffracted beam intensity upon relative lateral shift for various grating spacings Λ (a) and initial values of diffraction efficiency η (b) of the recorded hologram
Fig. 3 illustrates the measured map of the spatial shift selectivity for a particular experimental setting. It essentially gives the spatial profile of an average speckle along its two major axes: lateral and longitudinal. The apparent tilt of the speckle map is associated with the shift of the computer-controlled table that holds the hologram and the spatial orientation of the speckle beam.
Fig. 3. The spatial configuration of the speckle estimated through measurements of the spatial-shift selectivity of a volume hologram with a speckle-encoded reference beam.
Obviously, the spatial map of the speckle shown in Fig. 3 is for a single XZ-plane. However, because of the axial symmetry of the lateral correlation function of shift selectivity, the spatial map of an average speckle (its spatial profile) can be derived for any arbitrary cross-section if desired, giving more detailed information on the average speckle size.
4 References
1. Goldfisher, L (1965) Autocorrelation function and power spectral density of laser-speckle patterns. J. Opt. Soc. Am. 55:247-253
2. Goodman, JW (1975) Statistical properties of laser speckle patterns. In: Dainty, JC (Ed.) Laser Speckle and Related Phenomena, Springer-Verlag
3. Marron, J (1986) Correlation function of clipped laser speckle. J. Opt. Soc. Am. A 2:403-410
4. Vikram, CS, Vedam, K (1979) Measurement of subspeckle-size changes by laser-speckle photography. Optics Letters 4:406-407
5. Alexander, TL, Harvey, JE (1994) Average speckle size as a function of intensity threshold level. Appl. Optics 33:8240-8250
6. Markov, V, Soskin, M, Khizhnyak, A, Shishkov, V (1978) Structural conversion of coherent beams with a volume phase hologram in LiNbO3. Sov. Tech. Phys. Lett. 4:304-306
7. Zel'dovich, B, Shkunov, V (1986) Holograms of speckle fields. Sov. Phys. Uspekhi J. 149:511-548
8. Sidorovich, V (1976) Calculation of the diffraction efficiency of three-dimensional phase holograms. Sov. Phys. Tech. Phys. 41:507-510
9. Kukhtarev, N, Markov, V, et al. (1979) Holographic storage in electro-optic crystals. Ferroelectrics 22:949-960
10. Kogelnik, H (1969) Coupled wave theory for thick hologram gratings. Bell Syst. Tech. J. 48:2909-2947
11. Leushacke, L, Kirchner, M (1990) Three-dimensional correlation coefficient of speckle intensity. J. Opt. Soc. Am. A 7:827-832
Fault detection from temporal unusualness in fringe patterns
Qian Kemao, Seah Hock Soon
School of Computer Engineering, Nanyang Technological University, Singapore, 639798
Email: [email protected]
Anand Asundi
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, 639798
1 Introduction
Detection of faults from interferometric fringe patterns is important for condition monitoring, industrial inspection, and nondestructive testing and evaluation (NDT and NDE). Generally a fault occurs at a location where the phase of the fringe changes abruptly; accordingly, there is a sudden change in the fringe density at that location. It can be detected by tracking the fringe number in each local area [1,2]. Osten et al. [3] defined six frequently occurring basic fault patterns for detection and classification [3,4]. It was then realized that fault detection is a task of spatial-frequency analysis: determine the frequency of the fringes in each local area and find the places where the frequency exceeds a certain limit [5]. This can also be adapted to the pattern classification problem [6]. In all the previous works, faults are detected from a single fringe pattern, though sequences of dynamic fringe patterns are often available [3]. In real applications, such as condition monitoring, a fringe sequence evolving over time is generally used to monitor damage or faults. Hence the temporal property of a fault should be highlighted: a fault occurs when temporal unusualness exists between the fringes of the initial normal frame and the current frame. Though we emphasize the temporal evolution of faults in fringe patterns, we do not imply that the existing algorithms for fault detection are unimportant; on the contrary, they complement the present approach. Following this principle, two approaches, the Fourier transform and normalized cross correlation, are introduced, and then a windowed Fourier transform approach is proposed.
2 Approaches for fault detection from temporal unusualness
2.1 Fourier transform (FT) approach
An interferometric fringe pattern can be expressed as:

f(x, y, t) = a(x, y, t) + b(x, y, t) cos[φ(x, y, t)]   (1)
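When a spatial carrier is present, the phase change between two frames of this model can be extracted by keeping one FFT sideband per frame and multiplying the analytic signals; a minimal NumPy sketch of this FT route (the rectangular band-pass window is an illustrative choice, not the paper's):

```python
import numpy as np

def phase_difference(f1, f2, carrier):
    """Wrapped phase difference between two carrier-fringe frames via the FT
    approach: keep one sideband around the carrier (along x), invert, and
    multiply conjugate analytic signals. `carrier` is in cycles per frame width."""
    def analytic(f):
        F = np.fft.fft2(f)
        kx = np.fft.fftfreq(f.shape[1]) * f.shape[1]   # frequency index per column
        mask = np.zeros(f.shape[1])
        mask[(kx > 0) & (kx < 2 * carrier)] = 1.0      # one-sided band-pass
        return np.fft.ifft2(F * mask[None, :])
    return np.angle(analytic(f2) * np.conj(analytic(f1)))
```

Because only the phase difference is formed, no per-frame phase unwrapping is needed as long as the change stays within 2π.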
where f, a, b and φ are the recorded intensity, background intensity, fringe amplitude and phase distribution, respectively; (x, y, t) indicates the spatial and temporal coordinates. Usually the change of φ(x, y, t) with respect to t is much faster than those of a(x, y, t) and b(x, y, t), and hence the detection of the phase change φ(x, y, t2) − φ(x, y, t1) is a natural and direct way to indicate the existence of faults. This can be done using the traditional Fourier transform, provided that a carrier is introduced into the fringe patterns [7,8]. Figure 1 shows the scheme for this approach, where Fig. 1(c) is the phase difference between the two sequential frames shown in Figs. 1(a) and (b). The FT approach can generally detect the faults and is insensitive to noise, but some problems are revealed: (i) a global carrier frequency is not always available in the two frames. For example, though the carrier is introduced in Fig. 1(a) and maintained in Fig. 1(b), the “eyes” in Fig. 1(b) make this carrier insufficient to demodulate the phase distribution correctly. It is even harder to process fringe patterns without a carrier frequency; (ii) phase unwrapping might be necessary if the range of the phase difference exceeds 2π. One basic assumption for phase unwrapping is that the phase distribution is continuous over the whole field. This may not always be the case, since the fault itself introduces phase discontinuities.

2.2 Normalized cross correlation approach
Since the phase change results in different fringe intensity, another natural
Fig. 1. Fault detection by the FT approach
approach is to directly compare the intensities of each local block at the same location in the two frames, rather than the phase itself. This scheme is illustrated in Fig. 2. The fringe tracking approach [1,2] can be adopted for this comparison, but it is somewhat complicated and sensitive to noise; it also has the problem of accounting for fractional fringes in a block. The direct comparison can be realized using the following normalized cross correlation (NCC) approach:

c(x, y, t1, t2) = Σp=−P..P Σq=−Q..Q [f1(x+p, y+q) − f̄1][f2(x+p, y+q) − f̄2] / √{Σp=−P..P Σq=−Q..Q [f1(x+p, y+q) − f̄1]² × Σp=−P..P Σq=−Q..Q [f2(x+p, y+q) − f̄2]²}   (2)

where p and q are dummy variables; the block size is (2P+1) × (2Q+1); f(x, y, ti) is written as fi(x, y) for simplicity; and f̄i is the average value of fi(x, y) in the block. The faults are successfully detected by the correlation coefficients c(x, y, t1, t2), as shown in Fig. 2(c). Unfortunately, this result is not as encouraging in the presence of noise.
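Eq. (2) is a standard blockwise correlation coefficient; a direct (unoptimized) sketch with illustrative names:

```python
import numpy as np

def ncc_map(f1, f2, P, Q):
    """Correlation coefficient c(x, y) of Eq. (2) for two frames, computed
    over (2P+1) x (2Q+1) blocks; border pixels are left at zero."""
    H, W = f1.shape
    c = np.zeros((H, W))
    for x in range(P, H - P):
        for y in range(Q, W - Q):
            b1 = f1[x - P:x + P + 1, y - Q:y + Q + 1]
            b2 = f2[x - P:x + P + 1, y - Q:y + Q + 1]
            d1, d2 = b1 - b1.mean(), b2 - b2.mean()   # remove block means
            denom = np.sqrt((d1**2).sum() * (d2**2).sum())
            c[x, y] = (d1 * d2).sum() / denom if denom else 0.0
    return c
```

The coefficient is invariant to the local bias and amplitude of the fringes (c = 1 for any affine intensity change), which is why it responds to phase changes rather than to illumination drift.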
Fig. 2. Fault detection by the NCC approach: (a) frame 1; (b) frame 2; (c) correlation of (a) and (b)
2.3 Windowed Fourier transform approach
The WFT approach is proposed by combining the advantages of the FT approach (i.e., the transform-based processing is insensitive to noise) and the NCC approach (i.e., the block processing localizes the fringe features). The proposed scheme is illustrated in Fig. 3. We first construct a database containing all possible local fringe patterns (step 1). A block in frame 1 is compared with the database (step 2) and the most similar pattern in the database is selected (step 3). Finally the selected pattern is compared with the block at the same location in frame 2 and their similarity is measured (step 4). If they are similar, no fault is detected; otherwise a fault is identified. Figure 3(c) shows that the WFT approach can detect faults successfully.
Fig. 3. Fault detection by the proposed WFT approach: (a) frame 1; (b) frame 2; (c) fault measure of (a) and (b); (i) construct a database; (ii) compare a local area with the database; (iii) select the most similar basis from the database; (iv) compare this basis with the same local area in the new frame to detect the faults.
The algorithm is as follows. (i) Construct the WFT elements for the database as:

k(x, y, ξ, η) = exp[−(x² + y²)/2σ²] exp(jξx + jηy)   (3)

where σ indicates the spatial extension of the patterns; j = √−1; ξ and η are the angular frequencies in the x and y directions, respectively. Different values of ξ and η give different WFT elements. (ii) Compute the similarity of a block centered at (x, y) in frame 1 and a WFT element in the database as:

Af(x, y, ξ, η, t1) = Σq=−Q..Q Σp=−P..P f(p + x, q + y, t1) k*(p, q, ξ, η)   (4)

where p and q are dummy variables; the block size is (2P+1) × (2Q+1).
Fig. 4. Fault detection in speckle correlation fringes: (a) frame 1; (b) frame 2; (c) phase difference of (a) and (b) using the FT approach; (d) correlation of (a) and (b) using the NCC approach; (e) fault measure of (a) and (b) using the WFT approach
It is recommended that P = Q = 2σ and σ = 10. (iii) The best WFT element is determined where the similarity is highest:

[ξ(x, y, t1), η(x, y, t1)] = arg max over (ξ, η) of |Af(x, y, ξ, η, t1)|   (5)

r(x, y, t1) = |Af[x, y, ξ(x, y, t1), η(x, y, t1), t1]|   (6)

where ξ(x, y, t1) and η(x, y, t1) are the instantaneous frequencies at (x, y) and r(x, y, t1) is the highest similarity. (iv) Compute the similarity between the block centered at (x, y) in frame 2 and the selected element as:

r(x, y, t2) = |Σq=−Q..Q Σp=−P..P f(x + p, y + q, t2) k*[p, q, ξ(x, y, t1), η(x, y, t1)]|   (7)

A fault measure (FM) is then defined and computed as:

FM(x, y, t1, t2) = r(x, y, t2)/r(x, y, t1) × 100%   (8)

A fault alarm is raised if FM(x, y, t1, t2) drops below a preset threshold. In all the following examples, an FM threshold of 50% is used.
New Methods and Tools for Data Processing
189
The four-step algorithm is thus realized by the preceding equations: step (i) by Eq. (3), step (ii) by Eq. (4), step (iii) by Eqs. (5)-(6), and step (iv) by Eqs. (7)-(8).
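The four steps can be condensed into a short sketch for a single block pair (the parameter choices, e.g. the small frequency database, are illustrative, not the paper's settings):

```python
import numpy as np

def wft_element(P, Q, xi, eta, sigma=10.0):
    """WFT basis of Eq. (3) on a (2P+1) x (2Q+1) grid."""
    p = np.arange(-P, P + 1)[:, None]
    q = np.arange(-Q, Q + 1)[None, :]
    return np.exp(-(p**2 + q**2) / (2 * sigma**2)) * np.exp(1j * (xi * p + eta * q))

def fault_measure(block1, block2, freqs):
    """FM of Eq. (8) for one block pair; freqs is the (xi, eta) database."""
    P, Q = block1.shape[0] // 2, block1.shape[1] // 2
    # steps (ii)-(iii): pick the element most similar to the frame-1 block
    best = max(freqs, key=lambda fe: abs((block1 * np.conj(wft_element(P, Q, *fe))).sum()))
    k = wft_element(P, Q, *best)
    r1 = abs((block1 * np.conj(k)).sum())     # Eqs. (4)-(6)
    r2 = abs((block2 * np.conj(k)).sum())     # Eq. (7), step (iv)
    return 100.0 * r2 / r1                    # Eq. (8)
```

An unchanged block gives FM = 100%, while a block whose local frequency has moved away from the selected element drops toward zero, triggering the alarm at the 50% threshold.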
3 Comparison results
Speckle correlation fringe patterns with and without faults are simulated, as shown in Fig. 4(a) and (b). They are tested using the FT, NCC and WFT approaches, and the results are shown in Fig. 4(c), (d) and (e), respectively. All three approaches detect the defects, but the faults are easiest to identify from the WFT result.
4 Conclusions
In this paper the temporal unusualness of faults is emphasized, and three approaches (Fourier transform, normalized cross correlation and windowed Fourier transform) are analyzed and compared. The results show that all three approaches can detect the faults in the example, while the WFT is the most promising approach.
References
[1] Tichenor, D. A. and Madsen, V. P. (1979) Computer analysis of holographic interferograms for nondestructive testing. Opt. Eng. 18:469-472
[2] Robinson, D. W. (1983) Automatic fringe analysis with a computer image-processing system. Appl. Opt. 22:2169-2176
[3] Osten, W., Juptner, W. and Mieth, U. (1993) Knowledge assisted evaluation of fringe patterns for automatic fault detection. Proc. SPIE 2004:256-268
[4] Juptner, W., Mieth, U. and Osten, W. (1994) Application of neural networks and knowledge based systems for automatic identification of fault indicating fringe patterns. Proc. SPIE 2342:16-26
[5] Li, X. (2000) Wavelet transform for detection of partial fringe patterns induced by defects in nondestructive testing of holographic interferometry and electronic speckle pattern interferometry. Opt. Eng. 39:2821-2827
[6] Krüger, S., Wernicke, G., Osten, W., Kayser, D., Demoli, N. and Gruber, H. (2001) Fault detection and feature analysis in interferometer
fringe patterns by the application of wavelet filters in convolution processors. Journal of Electronic Imaging 10:228-233
[7] Takeda, M., Ina, H. and Kobayashi, S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72:156-160
[8] Qian, K., Seah, H. S. and Asundi, A. K. (2003) Algorithm for directly retrieving the phase difference: a generalization. Opt. Eng. 42:1721-1724
[9] Kemao, Q. (2004) Windowed Fourier transform for fringe pattern analysis. Appl. Opt. 43:2695-2702
[10] Kemao, Q. (2004) Windowed Fourier transform for fringe pattern analysis: addendum. Appl. Opt. 43:3472-3473
[11] Kemao, Q., Soon, S. H. and Asundi, A. (2003) Instantaneous frequency and its application to strain extraction in moire interferometry. Appl. Opt. 42:6504-6513
The Virtual Fringe Projection System (VFPS) and Neural Networks
Thomas Böttner
Institut für Mess- und Regelungstechnik, Universität Hannover, Nienburger Str. 17, 30167 Hannover, Germany
Markus Kästner
Institut für Mess- und Regelungstechnik, Universität Hannover, Nienburger Str. 17, 30167 Hannover, Germany
1 Introduction
Optical measurement systems like fringe projection systems (FPS) are complex systems with a great number of parameters influencing the measurement uncertainty. Purely experimental investigations are unable to determine the influence of the individual parameters on the measurement results. The virtual fringe projection system (VFPS) was developed for this purpose: it makes it possible to control parameters individually and independently of the other parameters [1]. The VFPS is a computer simulation of a fringe projection system, developed mainly to investigate different calibration methods. While several black-box calibration methods are shown in [1], neural networks are the main subject of this paper.
2 Neural Networks
Many different kinds of artificial neural networks have been developed, of which backpropagation networks are probably the best known. Most neural networks can be considered simply as a nonlinear mapping between the input space and the output space. When a neural network is provided with a set of training data, it will be able to respond with the correct answer after a learning phase, at least within a given error margin. A neural network's generalization ability refers to how well the learned nonlinear approximation performs on new input data.
Backpropagation and radial basis function (RBF) networks are appropriate for the approximation of functions [2]. Backpropagation networks construct global approximations to the nonlinear input-output mapping, whereas RBF networks construct local approximations. The calibration process of a FPS means, mathematically, the determination of the nonlinear calibration function f. The function f describes the relationship between the image coordinate system, consisting of the camera pixels (i, j) and the phase value φ, and the object coordinate system (X, Y, Z), i.e.

(X, Y, Z) = f(i, j, φ)   (1)
Experimental investigations of backpropagation networks gave poor results, therefore only RBF networks will be considered. RBF networks consist of one hidden layer and one output layer. The hidden layer consists of radial basis neurons. The output value oi of neuron i is:

oi = h(||wi − x||)   (2)

with h a radial basis function like the Gaussian bell-shaped curve:

h(r) = exp(−αr²),  α > 0   (3)

and input vector x and weight vector wi. Because the Gaussian curve is a localized function, i.e. h(r) → 0 as r → ∞, each RBF neuron approximates locally. The parameter α determines the radius of the area in the input space to which each neuron responds. The output layer is a linear mapping from the hidden space into the output space; each output neuron directly delivers an output value. The number of hidden neurons is normally much greater than the number of input signals. In the case of the calibration task, there are three input signals (image coordinates) and three output signals (object coordinates).
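The hidden/output structure of Eqs. (2)-(3) amounts to a kernel least-squares fit; a minimal sketch with one hidden neuron centered at each calibration point (the α value and names are illustrative, not the parameters used in the paper):

```python
import numpy as np

def rbf_fit(inputs, targets, alpha=0.5):
    """Solve the linear output layer of an RBF net (Eqs. 2-3), one Gaussian
    hidden neuron per input point; returns the output-layer weights."""
    d2 = ((inputs[:, None, :] - inputs[None, :, :]) ** 2).sum(-1)
    H = np.exp(-alpha * d2)                    # hidden activations, Eq. (3)
    return np.linalg.lstsq(H, targets, rcond=None)[0]

def rbf_predict(inputs, centers, W, alpha=0.5):
    """Evaluate the trained network at new input points."""
    d2 = ((inputs[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-alpha * d2) @ W
```

For the calibration task, `inputs` would hold the (i, j, φ) triples and `targets` the (X, Y, Z) coordinates of the calibration points; prediction then maps measured image coordinates to object coordinates.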
3 Simulation and results With the virtual fringe projection system (VFPS) it is possible to evaluate exclusively the error of an RBF network, since it allows the calibration method to be investigated under ideal conditions, independently of other influences [1]. The dimensions of the measuring volume used in the VFPS are 120 × 90 × 40 (the unit is set to one). For the calibration process, 20 × 15 × 10 calibration points p (in X, Y and Z direction), uniformly distributed in the measuring volume, are used. With this setup, the RBF network is calculated using the VFPS. For this purpose the phase value φ for all calibration points p
is calculated first (via a direct projection onto the projector plane and calculation of the corresponding phase value at this point). Subsequently, the points p are projected onto the image plane of the camera, delivering the corresponding image coordinates (i, j). Now the calibration function f can be determined with the aid of all known values (i, j, φ) and (X, Y, Z). In order to determine the resulting error of the RBF network, 10,000 points are randomly generated in the measuring volume and “measured” with the VFPS. The corresponding object coordinates of these points are then calculated by means of the previously determined RBF network. The resulting standard deviation of the RBF network was 9.8×10⁻⁴. For the same configuration a polynomial method (method C in [1]) yielded a standard deviation of 9.2×10⁻⁴. Figure 1 shows the 3D gray-coded error map of the RBF network. Three section planes through the measuring volume show the deviation of the calculated values from the correct values. To show border effects, the volume is extended by 10% in each direction compared to the calibration process. Figure 2 shows a similar diagram for the polynomial method mentioned above.
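The calibrate-then-test procedure can be sketched with SciPy's RBFInterpolator standing in for the RBF network (a sketch under assumptions, not the authors' implementation; the forward model below is a made-up stand-in for the VFPS, and the default thin-plate-spline kernel is used for numerical robustness instead of the Gaussian of Eq. (3)):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator  # SciPy >= 1.7

rng = np.random.default_rng(0)

# Hypothetical stand-in for the VFPS forward model: a known smooth
# mapping (i, j, phi) -> (X, Y, Z); the real VFPS is not reproduced here.
def forward(p):
    i, j, phi = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([i + 0.1 * phi, j - 0.05 * phi, 0.5 * phi], axis=1)

# Regular grid of calibration points in the "image" space (i, j, phi).
axes = (np.linspace(0, 1, 8), np.linspace(0, 1, 8), np.linspace(0, 1, 5))
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

# Fit the calibration function f of Eq. (1) to the calibration points.
f = RBFInterpolator(grid, forward(grid))

# "Measure" random test points and evaluate the residual error.
test = rng.random((1000, 3))
err = float(np.std(f(test) - forward(test)))
```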
Fig. 1. Gray-coded 3D error map for the RBF network; three section planes through the measurement volume are visible
Fig. 2. Gray-coded 3D error map for the polynomial method; three section planes through the measurement volume are visible
4 Conclusion The RBF network and the previously investigated polynomial method produce comparable results. So far no compelling argument can be found to prefer one of them as the calibration method. Nevertheless, further investigations have to show the effects of different disturbances (such as noise) on the two methods.
5 Acknowledgment The author gratefully acknowledges the support of the DFG.
6 References 1. Böttner, T, Seewig, J (2004) “Black box” calibration methods investigated with a virtual fringe projection system. Proc. SPIE, Optical Metrology in Production Engineering 5457:150-157 2. Haykin, S (1999) Neural Networks. Prentice Hall, 290-294
Fringe contrast enhancement using an interpolation technique F.J. Cuevas, F. Mendoza Santoyo, G. Garnica and J. Rayas Centro de Investigaciones en Óptica, A.C., Loma del Bosque 115, Col. Lomas del Campestre, CP. 37150, León, Guanajuato, México J.H. Sossa Centro de Investigación en Computación, Av. Othón de Mendizabal s/n, Zacatenco México, D.F., México
1 Introduction We can model a fringe pattern mathematically using the following expression:

I(x, y) = a(x, y) + b(x, y) cos(ωx·x + ωy·y + φ(x, y)), (1)

where x, y are the coordinates of the pixel in the interferogram or fringe image, a(x, y) is the background illumination, b(x, y) is the amplitude modulation and φ(x, y) is the phase term related to the physical quantity being measured. ωx and ωy are the angular carrier frequencies in the x and y directions. The main idea in metrology tasks is to calculate the phase term, which is proportional to the physical quantity being measured. We can approximate the phase term φ(x, y) by using the phase-shifting technique (PST) [1-5], which needs at least three phase-shifted interferograms. The phase shift among the interferograms should be controlled. This technique can be used when mechanical stability conditions are met throughout the interferometric experiment. The phase-shifting technique can be affected by background illumination variations due to experimental conditions. When the stability conditions mentioned are not fulfilled and a carrier frequency can be added, there are alternative techniques to estimate the phase term from a single fringe pattern, such as the Fourier method [6,7], the synchronous method [8] and the phase locked loop (PLL) method [9], among others. Recently, techniques using regularization, neural networks and genetic
algorithms have been proposed by Cuevas et al. [10-18] to approximate the phase term from a single image. Phase demodulation errors appear when the analyzed interferogram has irradiance variations due to the background illumination and amplitude modulation (a(x, y) and b(x, y)). In real fringe metrology applications, it is common to capture fringe images with low and variable contrast. These contrast problems are generated by the use of different optical components and light sources, such as lenses and lasers, and they complicate the phase calculation with the above-mentioned techniques. In this paper we present a technique to enhance the contrast of a fringe pattern. We use spline interpolation [19] to obtain well-contrasted fringes. Two splines are fitted over the maxima and the minima of the fringe irradiance. Then, the splines (max function and min function) are used to interpolate and enhance the contrast of the intermediate points of the fringe pattern. Preliminary results are presented for the method applied to a degraded computer-simulated fringe pattern.
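As background, the phase-shifting idea can be illustrated by the standard four-step formula (a generic sketch, not any of the specific algorithms referenced above): with shifts of 0, π/2, π and 3π/2, the background and modulation terms cancel.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Standard four-step phase-shifting formula (shifts 0, pi/2, pi, 3*pi/2).

    The background a(x,y) and modulation b(x,y) cancel, leaving the
    wrapped phase in (-pi, pi].
    """
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic single-pixel check: a = 120, b = 100, true phase 0.7 rad.
a, b, phi = 120.0, 100.0, 0.7
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*frames)
```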
2 Fringe enhancement using spline interpolation The phase detection techniques for fringe patterns [1-17] work adequately only if a well-contrasted fringe image is available. Because of this, a contrast enhancement or normalization step is required prior to the fringe demodulation procedure. This paper is concerned with solving the fringe normalization problem. The main idea is to fit two spline functions over the irradiance maxima and minima of the fringe pattern in order to approximate the functions a(x, y) and b(x, y) in Eq. 1. Then, each point of the fringe is normalized by using its respective maximum and minimum values of the splines. The procedure can be described in the following way: 1. For each line in the fringe pattern the fringe irradiance maxima and minima are calculated by using the first and second derivatives. Then, two lists containing the maximum and minimum fringe peaks are generated for line x = xi: min(I(xi, y)) = {ymin0, ymin1, …, yminn}, where ymin0 < ymin1 < … < yminn, and analogously max(I(xi, y)).
2. Take each list (max(·) and min(·)) and fit a set of spline interpolant functions FK and F′K in y. Each function FK satisfies the following conditions: a) FK is a cubic function; b) the function values are the same at the boundary points of adjacent pieces; c) the first derivatives are the same at the boundary points of adjacent pieces; d) the second derivatives are the same at the boundary points of adjacent pieces. 3. For each pixel in the fringe line xi, interpolate and calculate the new irradiance value using the following expression:
I′(xi, y) = [(I(xi, y) − F′K(y)) / (FK(y) − F′K(y))] × N,  y ∈ [1, M] (2)
where I′(xi, y) and I(xi, y) are the new and original values of the irradiance in line xi, respectively, FK(y) and F′K(y) (for K = 0, 1, …, M−1) are the spline interpolant functions for the maximum and minimum fringe peak lists, respectively, and N is the maximum gray level. This process continues until the last line in the fringe image is reached.
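Steps 1-3 can be sketched for a single fringe line as follows (a hypothetical sketch using SciPy's peak detection and cubic splines, not the authors' code; real, noisy data would need a more robust peak detector):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def normalize_line(I, N=255):
    """Hypothetical sketch of steps 1-3 for one fringe line.

    Two cubic splines are fitted through the irradiance maxima and minima
    (steps 1-2); each pixel is then rescaled to [0, N] via Eq. (2).
    """
    y = np.arange(len(I))
    ymax = argrelextrema(I, np.greater)[0]   # step 1: peak lists
    ymin = argrelextrema(I, np.less)[0]
    Fmax = CubicSpline(ymax, I[ymax])        # step 2: spline interpolants
    Fmin = CubicSpline(ymin, I[ymin])
    return (I - Fmin(y)) / (Fmax(y) - Fmin(y)) * N   # step 3: Eq. (2)

# Demo: a fringe line with slowly varying background and contrast.
y = np.arange(200, dtype=float)
line = (100 + 0.2 * y) + (50 + 0.1 * y) * np.cos(0.2 * np.pi * y)
out = normalize_line(line)
```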
3 Experimental Results We used the contrast enhancement technique to enhance the contrast of a computer-simulated fringe pattern. First, the fringe pattern was calculated using the following expression:

I(x, y) = 128 + 127 cos(0.2πx + φ(x, y)),  x, y ∈ [1, 200], (3)

where

φ(x, y) = 5×10⁻⁴ [2(x − 100)² + (y − 100)²],  x, y ∈ [1, 200]. (4)
The image resolution was 200 × 200 pixels. This fringe image is shown in Fig. 1. The middle line (line 100) of this image is plotted in Fig. 2.
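The reference pattern of Eqs. (3)-(4) can be generated directly (a sketch; the sign conventions are as reconstructed here from the source):

```python
import numpy as np

# Reference pattern of Eqs. (3)-(4): 200 x 200 pixels, irradiance in the
# 8-bit range [1, 255].
x, y = np.meshgrid(np.arange(1, 201), np.arange(1, 201), indexing="ij")
phi = 5e-4 * (2 * (x - 100) ** 2 + (y - 100) ** 2)   # Eq. (4)
I = 128 + 127 * np.cos(0.2 * np.pi * x + phi)        # Eq. (3)
```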
Fig. 1. Reference computer simulated fringe pattern
Fig. 2. A graph of line 100 from fringe pattern in Fig.1.
Then, a degraded version of the computer-simulated fringe pattern was generated by using in Eq. 1 the following background illumination and amplitude modulation functions a(x, y) and b(x, y):

a(x, y) = 130 exp[−((x − 100)² + (y − 100)²)/3600] + 80,  x, y ∈ [1, 200], (5)

and

b(x, y) = 43[1 + ((x − 100)³ + (y − 100)³)/(2×10⁶)],  x, y ∈ [1, 200]. (6)
The degraded fringe contrast image is shown in Fig. 3. A middle line (line 100) of this degraded image is drawn in Fig. 4.
Fig. 3. Degraded version of the computer-simulated fringe pattern.
Fig. 4. Line number 100 from fringe pattern in Fig.3.
A 3D graph of the degraded fringe image of Fig. 3 is drawn in Fig. 5. The background illumination and amplitude modulation functions are pictured in Figs. 6 and 7, respectively. The absolute irradiance error between the original fringe image (Fig. 1) and the degraded image (Fig. 3) is displayed in Fig. 8. The first step of the enhancement procedure was to approximate the max and min fringe peaks using the points where the first derivative was near 0. The second derivatives were calculated to determine the concavity at each point.
Fig. 5. 3D graph related to degraded fringe pattern of Fig. 3.
Fig. 6. Background illumination function related to fringe pattern of Fig. 3.
Fig. 7. The amplitude modulation function related to fringe pattern of Fig.3.
Fig. 8. Absolute irradiance differences between fringe patterns of Fig. 1 and 3.
Then, the splines are calculated using these node peak lists. The splines are drawn over the fringe pattern in Fig. 9. The next step in the procedure is to interpolate each point of the fringe using the expression of Eq. 2. The middle line (line 100) of the interpolated fringe image is drawn in Fig. 10. The final normalized fringe image obtained using the spline interpolation is presented in Fig. 11. The absolute error between the original and normalized fringe images is shown in Fig. 12.
Fig. 9. Spline fitting over the maximum and minimum irradiance of Fig. 4.
Fig. 10. Normalized fringe line 100 using the spline interpolation technique.
Fig. 11. Final normalized fringe image obtained from spline interpolation technique.
Fig. 12. Absolute irradiance differences between normalized and reference fringe images.
4 Conclusions A fringe normalization process was presented to compensate for the background illumination a(x, y) and amplitude modulation b(x, y) of a fringe image. To achieve this, two node lists containing the fringe peaks corresponding to the local maximum and minimum irradiance of the fringe image are generated. Then, two spline interpolation functions are calculated, one fitting each node list. These functions are used to interpolate each point in the fringe line, and this process is repeated for every line in the fringe image. The relative error between the original and the contrast-degraded fringe image was 22%. The technique can recover the original fringe image with a relative error of around 1%. Future work includes applications to shadow moiré and speckle interferogram images.
5 Acknowledgements The authors would like to thank Dr. Manuel Servin, Dr. Carlos Perez Lopez, and Dr. Ramón Rodriguez-Vera for the invaluable technical and scientific support in the development of this work. We acknowledge the support of the Consejo Nacional de Ciencia y Tecnología de México, Consejo de Ciencia y Tecnología del Estado de Guanajuato and Centro de Investigaciones en Óptica, A.C.
6 References 1. D.W. Robinson and G.T. Reid (1993) Interferogram Analysis: Digital Fringe Measurement Techniques. IOP Publishing Ltd, London 2. D. Malacara, M. Servin and Z. Malacara (1998) Interferogram Analysis for Optical Testing. Marcel Dekker, New York 3. D. Malacara (1992) Optical Shop Testing. Wiley, New York 4. K. Creath (1988) Phase-measurement interferometry techniques, in Progress in Optics 26, 350-393, E. Wolf, Ed., Elsevier Science, Amsterdam 5. K. Creath (1993) Temporal phase measurement methods, in Interferogram Analysis, 94-140, D. Robinson and G.T. Reid, Eds., IOP Publishing Ltd, London 6. M. Takeda, H. Ina and S. Kobayashi (1981) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72:156-160
7. X.Y. Su and W.J. Chen (2001) Fourier transform profilometry: a review. Opt. Lasers Eng. 35:263-284 8. K.H. Womack (1984) Interferometric phase measurement using spatial synchronous detection. Opt. Eng. 23:391-395 9. M. Servin and R. Rodriguez-Vera (1993) Two-dimensional phase locked loop demodulation of interferograms. J. Mod. Opt. 40:2087-2094 10. M. Servin, J.A. Quiroga and F.J. Cuevas (2001) Demodulation of carrier fringe patterns by use of a non-recursive digital phase locked loop. Opt. Comm. 200:87-97 11. M. Rivera, R. Rodriguez-Vera and J.L. Marroquin (1997) Robust procedure for fringe analysis. App. Opt. 38:8391-8392 12. M. Servin, F.J. Cuevas, D. Malacara, J.L. Marroquin and R. Rodriguez-Vera (1999) Phase unwrapping through demodulation by use of the regularized phase-tracking technique. Appl. Opt. 38:1934-1941 13. F.J. Cuevas, M. Servin and R. Rodriguez-Vera (1999) Depth object recovery using radial basis functions. Opt. Comm. 163:270-277 14. F.J. Cuevas, M. Servin, O.N. Stavroudis and R. Rodriguez-Vera (2000) Multi-layer neural network applied to phase and depth recovery from fringe patterns. Opt. Comm. 181:239-259 15. M. Servin, J.L. Marroquin and F.J. Cuevas (2001) Fringe-follower regularized phase tracker for demodulation of closed-fringe interferograms. J. Opt. Soc. Am. A 18:689-695 16. F.J. Cuevas, J.H. Sossa-Azuela and M. Servin (2002) A parametric method applied to phase recovery from a fringe pattern based on a genetic algorithm. Opt. Comm. 203:213-223 17. F.J. Cuevas, M. Servin, R. Rodriguez-Vera and J.H. Sossa Azuela (2003) Soft-computing algorithms for phase detection from fringe patterns. Recent Research Devel. Optics, Chapter 2, 21-39, ISBN: 81-2710028-5, Research Signpost, Kerala, India 18. R. Legarda, W. Osten and W. Jüptner (2002) Improvement of the regularized phase tracking technique for the processing of nonnormalized fringe patterns. App. Opt. 41:5519-5526 19. S. Nakamura (1992) Applied Numerical Methods with Software. Prentice Hall, London
Some remarks on accuracy of imaging polarimetry with carrier frequency Slawomir Drobczynski, Henryk Kasprzak Institute of Physics, Wrocław University of Technology Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
1 Introduction Imaging polarimetry with carrier frequency is a relatively new method. It has some analogy to interferometry with carrier frequency, but it is not as widely used. The method consists in obtaining maps of the distribution of both the phase retardation between the eigenwaves propagating in a birefringent object and the azimuth angle of the first eigenvector. The application of the carrier frequency principle in imaging polarimetry makes the method competitive with other imaging polarimetry methods, which use algorithms based on the principle of stepped phase changes. Imaging polarimetry with carrier frequency applies a spatially periodic modulation of the light polarization, which is achieved by a Wollaston prism placed in the light path. The speed-up of the measurement of both birefringence and azimuth angle relative to other imaging polarimetry methods, due to the reduced number of recorded intensity images, enables the application of the method to the measurement of fast, dynamic variations of the object birefringence. This paper describes the main factors that principally influence the accuracy of the reconstruction of the birefringence and the azimuth angle of optically anisotropic objects.
2 Method The proposed method [1] enables the calculation of 2D distributions of the azimuth angle of the fast wave and the phase shift for linearly birefringent and nondichroic media. Figure 1 presents the scheme of the proposed optical system. The first image is recorded for a phase shift given by the liquid crystal modulator equal to γLC = 90°:
I1 = T[1 + sinδOB sin2αOB sin(2πf0(xi + k)) + sinδOB cos2αOB cos(2πf0(xi + k))], (1)

while the second one is recorded for γLC = 0°:

I2 = T[1 + sinδOB cos2αOB cos(2πf0(xi + k)) + cosδOB sin(2πf0(xi + k))], (2)
where T is the transmittance of the whole system, αOB is the azimuth angle of the fast wave, δOB is the phase difference between the fast and the slow waves, xi is the horizontal coordinate (pixel number along the x axis of the 512×512 CCD camera), f0 is the space frequency along the x axis and 2πf0k is the beginning phase of the calculation. Next, the 2D Fourier transforms of the two light intensity distributions recorded at the output of the system are calculated. By filtering the first order of the spectrum in the frequency domain, shifting it to the origin of the coordinate system by the value of the carrier frequency f0 and calculating the inverse Fourier transform, one obtains the complex distributions c1 and c2. The imaginary and real parts of those distributions are

Im c1 = T[sin(2πf0k) sinδOB cos2αOB + cos(2πf0k) sinδOB sin2αOB] (3)

Re c1 = T[sin(2πf0k) sinδOB sin2αOB + cos(2πf0k) sinδOB cos2αOB] (4)

Im c2 = T[sin(2πf0k) sinδOB cos2αOB + cos(2πf0k) cosδOB] (5)

Re c2 = T[sin(2πf0k) cosδOB + cos(2πf0k) sinδOB cos2αOB]. (6)
Finally, the azimuth angle of the first eigenvector of the object and the phase retardation between the fast and the slow waves behind the object for k = 0 are

αOB = (1/2) arctan(Im c1 / Re c1) (7)

δOB = arctan(Re c2 / (Im c2 cos2αOB)). (8)
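The Fourier filtering step described above can be sketched in one dimension (a hypothetical illustration, not the authors' code; the 2D case used in the paper is analogous):

```python
import numpy as np

def demodulate_carrier(I, f0, halfwidth):
    """1-D sketch of the step described above: FFT, keep a band around the
    +f0 carrier lobe, shift it to the origin, inverse FFT -> complex c.
    f0 is in cycles/sample; the selected band must stay inside [0, N)."""
    N = len(I)
    F = np.fft.fft(I)
    k0 = int(round(f0 * N))                      # carrier bin
    mask = np.zeros(N)
    mask[k0 - halfwidth:k0 + halfwidth + 1] = 1.0
    return np.fft.ifft(np.roll(F * mask, -k0))   # lobe shifted to origin

N = 512
x = np.arange(N)
delta = 0.8 * np.sin(2 * np.pi * x / N)             # slowly varying test phase
I = 1.0 + np.cos(2 * np.pi * (8 / N) * x + delta)   # carrier: 8 cycles/frame
c = demodulate_carrier(I, 8 / N, 4)
recovered = np.angle(c)                             # ~ delta, up to truncation
```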
Fig. 1. Scheme of the optical system. [P] – polarizer, [QW] – quarter-wave plate, [OB] – examined object, [LC] – liquid crystal modulator, [W] – Wollaston prism, [A] – analyzer.
3 Influence of the beginning phase on the obtained results This part of the paper presents the results of a numerical calculation that enables the estimation of the influence of the beginning phase 2πf0k on the obtained values of the azimuth angle and the phase retardation. Paper [2] describes the influence of the adjustment of the liquid crystal modulator on the obtained results.
Fig. 2. Results for the azimuth angle.
Fig. 3. Results for the phase retardation.

The examined object was a quarter-wave plate. In the given examples the beginning phase 2πf0k was changed for k ∈ {−2, −1, 0, 1, 2} pixels, with f0 = 8, and the azimuth angle was varied from −30° to 30°. Figure 2 shows the influence of the beginning phase on the calculated azimuth angle, while Figure 3 shows its influence on the value of the retardation.
4 Conclusion The presented method offers new possibilities for the measurement of dynamic changes of birefringence in anisotropic media. However, the beginning phase of the calculation is a very important parameter, which significantly influences the obtained results.
5 References 1. Drobczynski, S, Kasprzak, H, “Application of space periodic variation of light polarization in imaging polarimetry”, Appl. Opt. 44 (2005). 2. Drobczynski, S, Kasprzak, H, “Modeling of influence of liquid crystal modulator adjustment on reconstruction of birefringence and azimuth angle in imaging polarimetry with carrier frequency”, Proceedings of Optical Security and Safety (International Conference on Systems of Optical Security, Warsaw, Poland, 11-12 December 2003), Vol. 5566, pp. 273-277.
Application of weighted smoothing splines to the local denoising of digital speckle pattern interferometry fringes Alejandro Federico1, and Guillermo H. Kaufmann2 1 Física y Metrología, Instituto Nacional de Tecnología Industrial, P.O. Box B1650WAB, B1650KNA San Martín, Argentina. 2 Instituto de Física Rosario (CONICET-UNR), Bvd. 27 de Febrero 210 bis, S2000EZP Rosario, Argentina.
1 Introduction The analysis of dynamic events is favoured when a single pattern of digital speckle pattern interferometry (DSPI) fringes is acquired. Several authors have recently reported on the application of methods based on the continuous wavelet transform [1] and the Wigner-Ville distribution [2] to retrieve the phase distribution from a single interferogram, avoiding the application of a phase unwrapping algorithm and the introduction of carrier fringes. Although these methods have been successfully used in DSPI, large inaccuracies can be introduced when the fringe pattern contains a high level of residual speckle noise [3-4]. In this paper we propose the use of weighted smoothing splines for the local smoothing of DSPI fringes. The performance of the proposed denoising method is evaluated, and a comparison with filtering methods based on the continuous wavelet transform is also presented.
2 Weighted smoothing spline method Splines s(x), x ∈ ℝ, are piecewise polynomials smoothly connected together. The joining points of these polynomials are called knots, and here we will only consider splines with uniform knots and unit spacing. Given a set of discrete signal values {g(k)}, the weighted smoothing spline s(x) of degree 2m − 1 is defined as the unique minimizer of [5]
H = ρ Σj ωj [g(xj) − s(xj)]² + ∫a^b λ(x) [∂^m s(x)/∂x^m]² dx, (1)
given the data sites x1 < … < xN in [a, b], the weight vector {ωj} of positive weights, the smoothing parameter ρ and the nonnegative integrable function λ(x). In this paper we use m = 2 and set the parameter to
ρ = 1. Moreover, the weight vector coordinates are set to ωj = 1 and the function λ(x) is specified as a non-negative piecewise constant function with breaks at the data sites. The use of this function allows the roughness to be controlled in different parts of the interval. To extend the smoothing method to a DSPI image, the tensor product of splines is applied; it is then convenient to define a weight matrix λ. The application of this weighted smoothing spline method produces a smoothing of the correlation fringes. To avoid fluctuations of the modulation intensity and also the influence of the background intensity, an algorithm to normalize the fringe pattern is finally applied [6].
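For a single signal, a smoothing spline minimizing a functional of the form of Eq. (1) is available in SciPy (a sketch, not the authors' code; note that SciPy only exposes a constant smoothing parameter, not the piecewise-constant λ(x) used here):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
clean = np.cos(6 * np.pi * x)                      # stand-in fringe profile
noisy = clean + 0.3 * rng.standard_normal(x.size)

# Cubic (m = 2) smoothing spline minimizing
#   sum_j w_j [g(x_j) - s(x_j)]^2 + lam * integral (s''(x))^2 dx,
# i.e. Eq. (1) with rho = 1 and uniform weights, but a *constant* lam.
s = make_smoothing_spline(x, noisy, lam=1e-5)
denoised = s(x)
```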
3 Numerical evaluation The DSPI fringes were simulated by means of the method reported in Ref. 7, for a given image resolution and grey-level scale and for average speckle sizes of 1 and 2 pixels. The local denoising method was evaluated using a phase distribution map that produces closed fringes with low and high local fringe densities. Figure 1 shows the correlation fringes that were generated for an average speckle size of 1 pixel. In this case, the fringes were obtained without introducing any decorrelation effects. To evaluate the performance of the noise reduction method, we used an image quality index Q. This quality index has a dynamic range of [−1, 1] and allows any distortion to be modelled as a combination of three different factors: loss of correlation, luminance distortion and contrast distortion [8]. We measured the statistical features of the filtered image by combining the three factors within local regions using a sliding window of size 5×5 pixels. This window was moved pixel by pixel along the horizontal and vertical directions through all the rows and columns of the image, and finally the average quality index was evaluated as the mean value of the previously computed values.
Fig. 1. Computer-simulated fringes.
Fig. 2. Filtered image obtained by applying the proposed method.
Table 1. Quality index Q obtained for the weighted smoothing spline and the wavelet shrinkage methods for an average speckle size of 1 and 2 pixels, as a function of the ratio D, and when decorrelation effects are and are not present.
                      Without decorrelation       With decorrelation
 Q           D        1 pixel     2 pixels        1 pixel     2 pixels
 Smoothing   1        64×10⁻²     1.4×10⁻²        40×10⁻²     −2.5×10⁻²
 Spline      2        80×10⁻²     8.0×10⁻²        68×10⁻²     −0.9×10⁻²
             3        80×10⁻²     21×10⁻²         72×10⁻²     3.0×10⁻²
             4        80×10⁻²     46×10⁻²         73×10⁻²     7.0×10⁻²
 Wavelet     1        52×10⁻²     0.6×10⁻²        24×10⁻²     −0.5×10⁻²
 Shrinkage   2        67×10⁻²     3.8×10⁻²        49×10⁻²     −0.3×10⁻²
             3        68×10⁻²     13×10⁻²         55×10⁻²     0.3×10⁻²
             4        69×10⁻²     27×10⁻²         57×10⁻²     3.0×10⁻²
As described in Section 2, a weight matrix must be applied to minimize Eq. (1). This matrix was chosen by observing that the low and high fringe densities are locally maintained for λ values around 30 and 1, respectively. These values determine the high and low limits of the λ matrix. To preserve the fringe pattern, two rectangles with minimum λ values were defined around the high fringe densities of the image, which were smoothly connected to the higher λ values. Once the weighted filtering
method was applied, the obtained smoothed image was normalized using the normalization algorithm. Figure 2 shows the filtered fringe pattern obtained after the weighted smoothing spline method and the normalization algorithm were applied to Fig. 1. The denoised image clearly shows the ability of the proposed filtering method to preserve the fringe structure while effectively reducing the speckle noise. The results obtained for the proposed filtering method are summarized in Table 1, where D represents the quotient between the reference intensity and the average intensity of the speckle beam [7], with and without decorrelation effects. These results were also compared with those obtained from the application of a wavelet shrinkage technique. The wavelet denoising method is based on the use of a nondecimated Daubechies filter with 8 vanishing moments and the elimination of the first two levels of the decomposition [7]. The proposed denoising method provides a local treatment of the smoothness, and consequently considerable improvements in the filtering of DSPI images can be obtained. It is also demonstrated that the use of weighted smoothing splines is more accurate than the application of wavelet shrinkage techniques.
4 References 1. L. Watkins, S. Tan and T. Barnes. Determination of interferometer phase distributions by use of wavelets. Opt. Lett. 24, 905-907 (1999) 2. C.A. Sciammarella and T. Kim. Determination of strains from fringe patterns using space-frequency representations. Opt. Eng. 42, 3182-3193 (2003) 3. A. Federico and G.H. Kaufmann. Evaluation of the continuous wavelet transform method for the phase measurement of electronic speckle pattern interferometry fringes. Opt. Eng. 41, 3209-3216 (2002) 4. A. Federico and G.H. Kaufmann. Phase retrieval in digital speckle pattern interferometry by use of a smoothed space-frequency distribution. Appl. Opt. 42, 7066-7071 (2003) 5. C. de Boor. Calculation of smoothing spline with weighted roughness measure. http://www.cs.wisc.edu 6. J.A. Quiroga, J. Gómez-Pedrero and A. García-Botella. Algorithm for fringe pattern normalization. Opt. Comm. 197, 43-51 (2001) 7. A. Federico and G.H. Kaufmann. Comparative study of wavelet thresholding methods for denoising electronic speckle pattern interferometry fringes. Opt. Eng. 40, 2598-2604 (2001) 8. Z. Wang and A.C. Bovik. A universal quality index. IEEE Sig. Proc. Lett. 9, 81-84 (2002)
Investigation of the fringe order in multicomponent shearography surface strain measurement Roger M. Groves1, Stephen W. James and Ralph P. Tatam Optical Sensors Group, Centre for Photonics and Optical Engineering School of Engineering, Cranfield University Bedford MK43 0AL United Kingdom.
1 Introduction Shearography [1], a full-field speckle interferometry technique, is sensitive to the displacement gradient. Multi-component shearography instrumentation can be used to determine the full surface strain of an object undergoing loading [2]. This is achieved by combining measurements from three channels using a coordinate transformation. However, before this coordinate transformation is performed, care must be taken to correctly identify the zero-order fringe in the phase map in order to avoid amplification of errors in the calculation. A number of strategies have been demonstrated to identify the zero fringe order in shearography. These include: (i) the use of an additional measurement channel to over-determine the coordinate transformation equations [3], (ii) identification of an area within the fringe map where the phase change is less than ±π radians [4] and (iii) step loading of the object, ensuring that the phase change is always within ±π radians for each step [5]. In the third example phase unwrapping is also avoided, as there are no phase discontinuities. In this manuscript a zero-phase identification technique based on fringe tracking through a sequence of images is described. The loading step size may then be increased relative to that employed in technique (iii), making
Current Address: ITO Institut für Technische Optik, Universität Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany. 1
full use of the shearography measurement range for each step and increasing the speed of the measurement procedure.
2 Theory In multi-component shearography, individual phase maps are generated by each measurement channel. These phase maps are sensitive to the component of displacement gradient determined by the sensitivity vector, which is defined by the bisector of the illumination and viewing vectors, and by the direction of shear. In general, the loading step size is optimised to be close to the maximum number of fringes that may be unwrapped successfully by the fringe unwrapping software, which typically lies between 5 and 10 fringes. For a multi-component measurement this may require a compromise, as the number of fringes for a particular load step is likely to vary between the different measurement channels. The identification of the zero fringe order in the phase maps is the next stage. Firstly, the load is reset to zero and, for each measurement channel, an additional sequence of phase maps with smaller load sub-steps is recorded. The magnitude of the sub-step loading is not important as long as the maximum phase change induced at all pixels is less than ±π radians. It is also not necessary for all sub-steps to have the same loading magnitude. The number of sub-step images in the sequence is typically 5 to 10. This sequence is then analysed to identify the direction of fringe movement. By taking the initial phase to be zero, the number of times a fringe crosses a reference point in the image over the whole sequence is determined, and this is identified as the fringe order correction. A second, sub-fringe order correction is also required, as the fringe unwrapping software calculates all phase changes in the phase map relative to the reference point in the image (which is set to zero phase). The sub-fringe order correction value is the absolute phase value at the reference point. Finally, the sub-fringe order and fringe order corrections are added to the phase at all pixels in the image to convert the relative phase to absolute phase.
These fringe order corrections are made for all three measurement channels. The phase maps are then processed, using the coordinate transformation, to yield the in-plane and out-of-plane displacement gradient components.
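The final correction step above amounts to a single offset applied per channel (a minimal sketch with hypothetical names, not the authors' software):

```python
import numpy as np

def to_absolute_phase(rel_phase, fringe_order, ref_phase):
    """Apply the two corrections described above (names are hypothetical).

    rel_phase    : unwrapped phase map, zero at the reference point
    fringe_order : integer fringe order counted at the reference point by
                   tracking fringes through the sub-step sequence
    ref_phase    : sub-fringe order correction, i.e. the absolute phase at
                   the reference point within one fringe
    """
    return rel_phase + 2 * np.pi * fringe_order + ref_phase
```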
3 Experimental details The phase maps were recorded using a six-channel time-division-multiplexed shearography instrument [6]. The object was sequentially illuminated from three directions by in-fibre Bragg grating stabilised laser diodes (800 nm, 100 mW optical power). Optical fibres were used for beam delivery and mechanical shutters were used to switch between illumination channels. The shearing Michelson interferometer contained a piezoelectrically controlled phase-stepping reference mirror. An area-scan CCD camera was used for image capture.
4 Results and discussion An ABS pipe, internally hydrostatically loaded, was investigated using the shearography instrumentation. The pipe had an internal diameter of 150 mm, an external diameter of 170 mm, a length of 860 mm, and was terminated by steel end plates. The pipe was pressurised in 6.9 kPa steps. Figure 1 shows phase maps from a 6.9 kPa loading step for the six measurement channels.
Fig. 1. Wrapped phase maps from measurement channels 1 to 3 respectively: (a) to (c) horizontal shear; (d) to (f) vertical shear.
The zero fringe order was determined using an additional sequence of phase maps, recorded with a sequence of smaller loading steps, starting at 10 kPa and continuing until past 20 kPa. As an example, the sequence for Channel 1 is shown in Figure 2. The fringe counting is illustrated by the X marked in each image, showing the position of the zero order or, if the zero order is located outside the image, of a known fringe order. The sub-fringe order correction was made as described in §2.
This fringe order correction technique was used to analyse a sequence of fringe maps with loading steps of 6.9 kPa. It was noted that the fringe maps for load steps 0 to 6.9 kPa, 6.9 kPa to 13.8 kPa, etc. were very similar and that it was possible to reuse the same analysis of the sub-load-step fringe maps. We applied this to a sequence of fringe maps for a loading range from 0 to 69 kPa. After completing the coordinate transformation, a measurement of the axial and hoop strains for the pipe was obtained. These may be calculated theoretically using knowledge of the diameter and wall thickness of the pipe [7]. Final results showed errors of ±90 µε in the measurement of the displacement gradient. From the comparison of experimental with theoretical results there are two main findings. The first is that operator training in counting the fringe order is required. Initially a systematic error was found, in that the fringe order was being over-counted by one for each image. This is considered to be an error in tracking the fringe from the first to the second image in the sequence. Once corrected, it was found that the fringe counting was reasonably reliable, but subject to an occasional error of ±1 fringe.
Fig. 2. The sequence of images (a) to (g) recorded using measurement Channel 1 for the sub-loading steps. Note that (g) has approximately the same loading as (a). The Xs mark the approximate position in the image where the phase is 0 radians, or 2π, 4π, etc. if the 0 radians position is not located within the image.
The second finding is the propagation and amplification of errors in the fringe order through the displacement gradient calculation. For example, errors of a factor of 2 in the estimate of the displacement gradient were found for fringe order errors of ±1 fringe. It was possible to trap these occasional errors by comparison with the theoretical value and then rechecking the data processing. For certain fringe maps the fringes are unhelpfully positioned and it was not possible to identify, with certainty, the exact fringe order.
5 Conclusions This paper both highlights an important problem in shearography, the location of the zero fringe order, and gives a solution to the problem based on fringe counting.
6 Acknowledgements The work was supported by the Engineering and Physical Sciences Research Council, UK under grant No. GR/T09149/01. The authors would like to thank Prof. Li (Wuhan University of Technology, P. R. China), Dr. Chehura and Mr Staines.
7 References
1. Leendertz, JA, Butters, JN (1973) An image-shearing speckle pattern interferometer for measuring bending moments. J. Phys. E 6:1107-1110
2. Aebischer, HA, Waldner, S (1997) Strain Distributions Made Visible with Image-shearing Speckle Pattern Interferometry. Opt. Laser. Eng. 26:407-420
3. Siebert, T, Spitthof, K, Ettemeyer, A (2004) A Practical Approach to the Absolute Phase in Speckle Interferometry. J. Holography Speckle 1:32-38
4. Groves, RM (2001) Development of Shearography for Surface Strain Measurement of Non-Planar Objects. PhD Thesis, Cranfield University, UK
5. Groves, RM, James, SW, Tatam, RP, Furfari, D, Irving, PE, Barnes, SE, Fu, S (2004) Full-Field Laser Shearography for the Detection and Characterisation of Fatigue Cracks in Titanium 10-2-3. ASTM Symposium on Full-Field Optical Deformation Measurement: Applications and User Experience, Salt Lake City, USA
6. James, SW, Tatam, RP (1999) Time-Division-Multiplexed 3D Shearography. Proc. SPIE 3744:394-403
7. Sinnott, RK (1996) Coulson and Richardson's Chemical Engineering. Butterworth-Heinemann, Oxford, UK
Metrological Fringe inpainting Qian Kemao, Seah Hock Soon School of Computer Engineering, Nanyang Technological University, Singapore, 639798 Email: [email protected] Anand Asundi School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, 639798
1 Introduction In optical metrology, important information is often carried by fringe patterns, which can be expressed as

f(x, y) = b(x, y) cos φ(x, y)   (1)

where f(x, y), b(x, y) and φ(x, y) are the recorded intensity, fringe amplitude and phase distribution, respectively. We have assumed that the background intensity could be removed by, say, low-pass filtering, and hence it is not shown in Eq. (1). Sometimes the fringe patterns have undesired invalid areas due to the irregularity of the tested specimen, illumination shadows, or imperfections of the optical elements and detector. These invalid areas may introduce difficulties for further processing [1-3]. It is hence often required to "repair" the fringe patterns. This is very similar to the digital inpainting of artistic pieces [4,5]. Fringe extrapolation and interpolation usually carry the same meaning. Assume the intensity of the fringe pattern (usually noisy), f, is available over the entire image plane Ω, except for some areas D. The goal of inpainting is to reconstruct the intensity at D (usually without noise), f0, as faithfully as possible. This task can be modeled as a Maximum A Posteriori (MAP) optimization, i.e., determining f0(Ω) that maximizes the posterior probability p[f0(Ω) | f(Ω − D)].
By Bayes' law, we have

p[f0(Ω) | f(Ω − D)] = p[f(Ω − D) | f0(Ω)] p[f0(Ω)] / p[f(Ω − D)]   (2)
As p[f(Ω − D)] can be treated as a constant, the MAP optimization is equivalent to maximizing the likelihood probability p[f(Ω − D) | f0(Ω)] and the prior probability p[f0(Ω)].
The iterative Fourier transform was proposed to extrapolate fringes [1]. It assumes that the fringe pattern over the entire Ω has a narrow spectral band, which gives the prior constraint. The likelihood requirement is that, in each iteration, the spectrum of f0(Ω) consists of the salient components of the spectrum of f(Ω). Though this approach was shown to be useful [2,3], it has some limitations. Firstly, it fails when the fringe pattern itself has a broad spectral band; secondly, since the Fourier spectrum is a global measure of a fringe pattern, it is unreasonable to manipulate it to inpaint a small area. Both limitations are due to insufficient prior information. As an example, a noiseless fringe pattern is simulated as shown in Fig. 1(a), to which additive noise and 25 invalid areas are added, as shown in Fig. 1(b). Its Fourier spectrum is shown in Fig. 1(c). We use the iterative Fourier transform to inpaint the fringe pattern. The iteration continues until the result converges. In each iteration the spectrum components within the black square are retained while the others are set to zero. Different square sizes for spectrum clipping were tested. Figure 1(d) shows the best result we could obtain, which is far from satisfactory.
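The iterative Fourier scheme of [1] can be sketched in a few lines. This is a simplified illustration, not the reference implementation; the square band half-width and the iteration count are free parameters chosen by the user:

```python
import numpy as np

def iterative_fourier_inpaint(f, valid, half_width, n_iter=50):
    """Iterative Fourier-transform fringe extrapolation.

    f          : fringe pattern, with arbitrary values in the invalid areas
    valid      : boolean mask, True where the recorded intensity is trusted
    half_width : half-size of the retained square spectral band (the prior)
    """
    g = f.copy()
    cy, cx = f.shape[0] // 2, f.shape[1] // 2
    for _ in range(n_iter):
        F = np.fft.fftshift(np.fft.fft2(g))
        # prior: keep only a narrow square band around the spectrum centre
        mask = np.zeros(F.shape)
        mask[cy - half_width:cy + half_width + 1,
             cx - half_width:cx + half_width + 1] = 1.0
        g = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
        # likelihood: restore the measured intensity at the valid pixels
        g[valid] = f[valid]
    return g
```

For a genuinely narrow-band pattern and small invalid areas this Gerchberg-style iteration converges; for broad-band patterns it fails, as discussed above.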
2 Proposed inpainting algorithm A direct extension is to replace the Fourier transform by a block cosine transform: in a smaller area, it is more probable that the fringe pattern is narrow-banded. However, the "broadness" of the spectrum is a rather subjective measure, which makes accurate inpainting very difficult. Hence we discard the approach of spectrum analysis and return to the spatial domain. Our assumptions are as follows: (a) in a small area E that includes D, the phase distribution is a quadratic polynomial (the polynomial order can be increased when necessary), and the fringe amplitude is constant; this is the prior information. (b) The likelihood of f and f0 at E − D is that their intensities should be very similar in a least-squares sense; in other words, their difference is minimized. The area descriptions are illustrated in Fig. 1(b). The MAP model and Bayes' law should be slightly modified by replacing Ω by E in Eq. (2). With the above assumptions, our proposed inpainting process is as follows. (a) Assume the phase is a quadratic polynomial. Fit the polynomial coefficients and fringe amplitude for each pixel (x, y) ∈ E − D as
f(x, y) = b cos(a00 + a10 x + a01 y + a20 x² + a11 xy + a02 y²)   (3)

The least-squares solution of the seven unknowns [a00, a10, a01, a20, a11, a02, b] can be obtained by minimizing the cost function

c = Σ(x,y)∈E−D [f(x, y) − b cos(a00 + a10 x + a01 y + a20 x² + a11 xy + a02 y²)]²   (4)
It can be solved by iteration if good initial values are available [6], or by a genetic algorithm (GA) without initial values [7]. (b) The intensity data at D is generated from the obtained coefficients through Eq. (3) with the proper coordinates (x, y) ∈ D. Only the data at D is inpainted, while the intensity elsewhere remains unchanged. To see the effects of the proposed algorithm, inpainting is applied to each invalid area in Fig. 1(b) one by one and the final result is shown in Fig. 1(e). Since the fringe is properly inpainted, further processing is easier. For example, windowed Fourier filtering [8] is applied to Fig. 1(e) and the very satisfying result is shown in Fig. 1(f), which would be impossible without the fringe inpainting. We then inpaint two real fringe patterns using the proposed algorithm. Figure 2(a) shows a defect in an interferometric moiré fringe pattern, which is due to an imperfection of the grating on the specimen. The inpainted fringe pattern is shown in Fig. 2(b). Figure 3(a) shows a projected fringe pattern where some parts are invisible due to the black words on the specimen. The portion with Chinese characters is inpainted as an example and the result shown in Fig. 3(b) is satisfying. This approach can be easily extended to wrapped phase fringes as well.
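The fitting step can be sketched with a generic nonlinear least-squares solver. This is an illustration only: the paper uses an iterative solver with good initial values [6] or a GA [7], and the helper name below is hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def inpaint_patch(f, valid, x, y, p0):
    """Fit f ~ b cos(a00 + a10 x + a01 y + a20 x^2 + a11 xy + a02 y^2)
    over the valid pixels of a patch E, then regenerate the invalid
    pixels D from the fitted model.  p0 holds initial guesses for
    [a00, a10, a01, a20, a11, a02, b]."""
    def model(p, xs, ys):
        a00, a10, a01, a20, a11, a02, b = p
        return b * np.cos(a00 + a10 * xs + a01 * ys
                          + a20 * xs**2 + a11 * xs * ys + a02 * ys**2)

    # minimize the least-squares cost over the valid pixels E - D
    sol = least_squares(lambda p: model(p, x[valid], y[valid]) - f[valid], p0)
    out = f.copy()
    out[~valid] = model(sol.x, x[~valid], y[~valid])  # inpaint D only
    return out
```

Because the cosine model is nonconvex, the solver only converges from initial values reasonably close to the true parameters, which is exactly why the GA alternative is attractive.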
Fig. 1. Fringe inpainting of a simulated fringe pattern, (a) to (f).
3 Conclusions The fringe inpainting problem is modeled in this paper. An algorithm is proposed and tested with various simulated and real fringe patterns, and encouraging results are obtained. It is useful for regaining the lost information in invalid areas as well as for reducing the border effects in further processing.
Fig. 2. An interferometric moiré fringe (a) and the inpainted fringe (b)
Fig. 3. A projected fringe (a) and the inpainted fringe (b)
References
1. Roddier, C., Roddier, F. (1987) Interferogram analysis using Fourier transform techniques. Appl. Opt. 26:1668-1673
2. Quiroga, J.A., Crespo, D., Bernabeu, E. (1999) Fourier transform method for automatic processing of moire deflectograms. Opt. Eng. 38:974-982
3. Kujawinska, M., Wojciak, J. (1991) High accuracy Fourier transform fringe pattern analysis. Optics and Lasers in Engineering 14:325-339
4. Sapiro, G. (2001) Geometric Partial Differential Equations and Image Processing. Cambridge University Press
5. Chan, T.F., Shen, J., Vese, L. (2003) Variational PDE models in image processing. Notices of the AMS 50:14-26
6. Guse, F., Kross, J. (1992) A new approach for quantitative interferogram analysis. Proc. SPIE 1781:258-265
7. Cuevas, F.J., Sossa-Azuela, J.H., Servin, M. (2002) A parametric method applied to phase recovery from a fringe pattern based on a genetic algorithm. Opt. Comm. 203:213-223
8. Qian, K. (2004) Windowed Fourier transform for fringe pattern analysis: addendum. Appl. Opt. 43:3472-3473
Combination of Digital Image Correlation Techniques and Spatial Phase Shifting Interferometry for 3D-Displacement Detection and Noise Reduction of Phase Difference Data Björn Kemper, Patrik Langehanenberg, Sabine Knoche, Gert von Bally Laboratory of Biophysics Robert-Koch-Straße 45, D-48129 Münster Germany
1 Introduction Spatial phase shifting (SPS) interferometric techniques enable the quantitative determination of optical path length changes and axial displacements with, in contrast to temporal phase shifting methods, low demands on the stability of the experimental setup. These methods are therefore particularly suitable for measurements on unstable surfaces and for the investigation of biological specimens [1, 2]. In combination with digital image correlation algorithms, SPS interferometric techniques open up new prospects for the simultaneous detection of 3D displacements. Furthermore, such methods can be utilized to compensate wave front decorrelation effects, e.g. as caused by rigid body motions of the investigated surface in Electronic Speckle Pattern Interferometry. The application of digital image correlation techniques in SPS interferometry requires the reconstruction of the object wave's intensity from the SPS interferograms. Therefore, an SPS method and a Fourier transform method are characterized and compared. The application of the methods for the compensation of decorrelation effects is demonstrated by results obtained on a biological specimen.
2 Experimental Methods The determination of axial object surface displacements is performed by SPS interferometry. For this purpose, spatially phase shifted interferograms are generated by superposition of the object wave front with a tilted reference wave front. By taking into account the intensity values of neighbouring CCD pixels, the object wave phase mod 2π is determined [1]. For the detection of lateral changes of the object wave front as well as lateral displacements, the object wave's intensity in the image plane is reconstructed. Therefore, the square of the interferogram modulation is calculated for each object state by a 3 step SPS algorithm (MOD) [1] (see Fig. 1). In comparison, a Fourier transform method (FTM) is used for carrier fringe removal [3] (see Fig. 2).

Fig. 1. Object wave intensity reconstruction by MOD. (a): SPS speckle interferogram of a USAF 1951 paper test chart (λ = 532 nm); (b): reconstruction of the spatial object wave intensity distribution by calculation of the square of the speckle interferogram modulation

From the reconstructed object wave intensity data, lateral wave front changes and in-plane displacements are determined by digital cross correlation of sub-images with sub-pixel accuracy. The obtained information about lateral wave front changes is utilized to decrease the noise of phase difference data in SPS Electronic Speckle Pattern Interferometry that is affected by speckle decorrelation in the image plane. For this purpose, the measured speckle field displacement is compensated by lateral local shifts in one of the recorded interferograms.
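The sub-pixel cross correlation step can be illustrated as follows. This is a generic FFT cross-correlation with a three-point parabolic peak fit; the paper does not specify its actual implementation or sub-image handling, so the code below is only a plausible sketch:

```python
import numpy as np

def _parabolic(cm, c0, cp):
    """Sub-pixel offset of a peak from three correlation samples."""
    d = cm - 2.0 * c0 + cp
    return 0.0 if d == 0 else 0.5 * (cm - cp) / d

def subpixel_shift(a, b):
    """Estimate the shift s = (rows, cols) such that b(r) ~ a(r - s),
    by FFT cross-correlation with parabolic sub-pixel interpolation."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    c = np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    dy = _parabolic(c[iy - 1, ix], c[iy, ix], c[iy + 1, ix])
    dx = _parabolic(c[iy, ix - 1], c[iy, ix], c[iy, ix + 1])
    # after fftshift the zero-shift peak sits at the array centre
    return (c.shape[0] // 2 - (iy + dy), c.shape[1] // 2 - (ix + dx))
```

Applied to corresponding sub-images of the squared-modulation maps of two object states, the returned shift gives the local lateral speckle displacement used for the decorrelation compensation.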
Fig. 2. Object wave intensity reconstruction by FTM. (a): SPS speckle interferogram of a USAF 1951 paper test chart (λ = 532 nm); (b): frequency spectrum obtained by Fast Fourier Transformation (FFT); (c): filter applied in the frequency domain to the right side band in (b), which is shifted to the origin of the spectrum; (d): object wave intensity reconstructed from (c) by application of an inverse FFT

Fig. 3. Comparison between FTM, MOD and DSP for object wave intensity reconstruction. (a): lateral displacement detection of a white painted metal plate from spatially phase shifted speckle interferograms; (b): standard deviation of the displacement measurement in dependence of the speckle size
Afterwards, the phase difference data is recalculated by an SPS algorithm from the decorrelation compensated interferograms.
3 Results Investigations with an SPS speckle interferometer on a white painted metal plate that is laterally shifted by a calibrated piezo translator for distances dE,x are carried out to characterise and optimise the methods MOD and FTM in comparison to conventional digital speckle photography (DSP).
Fig. 3a shows a linear dependence of the detected displacement dobj,x on dE,x with no significant accuracy differences between MOD, FTM and DSP. The accuracy of MOD and FTM is determined by calculating the standard deviation of a uniform displacement field (dE,x = (40 ± 2.5) µm) in dependence of the speckle size and is estimated to be ≈ 0.1 pixel for a speckle size of ≈ 3.5 pixels (see Fig. 3b). Fig. 4 shows exemplary results from a tilted section of a fixed part of a tumorous ovary obtained by MOD with a microscope ESPI system, illustrating the compensation of inhomogeneous lateral speckle decorrelation. After the compensation of the lateral speckle field displacements (see Fig. 4c), the noise of the phase difference distribution in Fig. 4b appears significantly reduced in Fig. 4d.
Fig. 4. Compensation of inhomogeneous lateral speckle decorrelation. (a): white light image of a section of a fixed part of a tumorous ovary; (b): phase difference mod 2π obtained from a tilt of the specimen, with noise caused by lateral speckle decorrelation; (c): speckle displacement field calculated by MOD and digital cross correlation; (d): phase difference mod 2π after the compensation of speckle decorrelation in the interferogram data

4 Conclusions The results show that lateral displacement components can be determined by application of cross correlation techniques to the squared modulation distributions obtained from SPS speckle interferograms. Furthermore, these data can be utilized effectively to compensate lateral speckle decorrelation. Finally, the applicability of the presented methods to a biological specimen is demonstrated.
Acknowledgements Financial support of the German Federal Ministry of Education and Research (BMBF) is gratefully acknowledged.
References
1. Knoche, S, Kemper, B, Wernicke, G, von Bally, G (in press) Modulation analysis in spatial phase shifting Electronic-Speckle-Pattern Interferometry and application for automated data selection on biological specimens. Opt. Commun.
2. Kemper, B, Carl, D, Knoche, S, Thien, R, von Bally, G (2004) Holographic interferometric microscopy systems for the application on biological samples. Proc. SPIE 5457:581-588
3. Fricke-Begemann, T, Burke, J (2001) Speckle interferometry: three-dimensional deformation field measurement with a single interferogram. Appl. Opt. 40:5011-5022
Photoelastic tomography for birefringence determination in optical microelements Tomasz Kozacki, Pawel Kniazewski, Malgorzata Kujawinska Warsaw University of Technology Institute of Micromechanics and Photonics 8 Sw. A.Boboli St., 02-525 Warsaw, Poland
1 Introduction Mechanical stresses are present in every optical or photonic element and are responsible for inducing element anisotropy. In many components stress-induced birefringence plays a crucial role in their proper performance. On the other hand, many photonic applications require minimum birefringence of elements. In this paper we study photoelastic tomography, which uses an integrated photoelasticity algorithm [1] for the determination of birefringence and tomography [2] for its 3D reconstruction (Fig. 1). The tomographic reconstruction assumes that the object does not have birefringence in the rotation plane. Thus, using this method, only the axial object stresses (along the z axis) can be determined.
Fig. 1. (a) Circular integrated polariscope with rotational object for tomographic birefringence measurement; S – source, SF – spatial filter, P – polarizer polarizing light in the z direction, QW1 – quarter wave plate (45° angle between slow axis and z axis), O – measured rotational object (rotation axis z) placed in an immersion tank, QW2 – rotational quarter wave plate, A – rotational analyzer, D – detector; (b) distributions of object refractive indices used in the simulation, nco and ncl – refractive indices of object core and cladding [nco = ncl/(1 − Δn)].
In this paper we verify the correctness of the algorithm for the measurement of small objects, where diffraction phenomena cannot be neglected. In order to identify this error we simulate the measurement process. Computations are performed using a full vectorial propagation method based on the Maxwell curl equations (the finite-difference time-domain method, FDTD).
2 Errors of tomographic reconstruction In the optical model (Fig. 1a) the object is illuminated by left-hand circularly polarized light. Thus, to simulate light propagation through the object we have used a 2D full vector FDTD algorithm composed of both orthogonal modes TMz and TEz with a phase shift of π/2. The most popular FDTD algorithms are inefficient if we want to simulate light propagation through an object whose dimensions are large compared to the optical wavelength (most micro-optical elements are large compared to the wavelength). Thus we have implemented the more efficient PSTD (pseudospectral time-domain) algorithm [3]. As a result of the polariscopic measurement we obtain the retardation ΔR, which is the phase difference between the modes TEz and TMz. From the retardation, the object integrated birefringence ΔI along the propagation axis x is calculated. In this calculation diffraction phenomena are neglected: we assume a linear dependence between the object integrated birefringence and the retardation. Such an assumption introduces errors for objects with a considerable gradient of birefringence. To study this error we have performed the measurement simulation for two classes of objects (Fig. 1b): with abrupt and with continuous changes of birefringence. For simplicity, a constant refractive index in the x, y plane is assumed here. In Fig. 2 the mentioned errors are presented for a step-index object with the following parameters: nx,y = 1.46, nzco = nzcl/(1 − Δn), nzcl = 1.46, and for an object having a Gaussian distribution of birefringence with parameters: nx,y = 1.46, nz = nzcl + (nzco − nzcl)exp{−(x² + y²)/2σo²} (σo = 8.33 µm). Fig. 2a shows the obtained errors of birefringence integration σΔIerr = std{100%(ΔIteor − ΔI)/max(ΔIteor)}, where ΔIteor = d(nx,y − nz) and ΔI = ΔRλ0/(2πΔx) are the theoretical and computed birefringence (ΔR – retardation received from the simulated polariscope, λ0 – optical wavelength in vacuum, Δx – distance between sampling points, max(…) – maximum value, std(…) – standard deviation).
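The error measure above is straightforward to evaluate numerically. In this sketch the function names are illustrative and the normalisation of ΔI by Δx follows our reading of the definitions in the text, so it should be treated as an assumption:

```python
import numpy as np

def integrated_birefringence(dR, lam0, dx):
    """Computed integrated birefringence from retardation dR (radians),
    assuming the linear relation dI = dR * lam0 / (2 * pi * dx)."""
    return dR * lam0 / (2.0 * np.pi * dx)

def integration_error(dI_theor, dI):
    """sigma_dIerr = std{100% (dI_theor - dI) / max(dI_theor)}."""
    return np.std(100.0 * (dI_theor - dI) / np.max(dI_theor))
```

The metric is a relative one: it normalises the pointwise deviation by the peak theoretical value before taking the standard deviation, which is why excluding the highest-error region changes its value so strongly, as discussed below.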
The received errors are computed for the object core, excluding 15% of the object area adjacent to the retardation step. For small values Δn (0 to 0.01) the errors are represented by small diffraction sidelobes. In the region Δn > 0.015 a parabolic phase error appears. The largest oscillation error appears near the birefringence discontinuity. In Fig. 2b the errors of retardation reconstruction Δerr = std{100%(Δteor − Δrec)/max(Δteor)} are presented, where Δteor and Δrec are the theoretical and reconstructed object retardation. The abrupt jumps of the error are connected to the mentioned large sidelobes appearing close to the object birefringence discontinuity. In Fig. 2a, b the errors of birefringence integration σΔIerr and retardation reconstruction Δerr for an object having a Gaussian distribution are also presented. Both errors are calculated for the object area of nonzero birefringence. Comparing the birefringence integration errors σΔIerr for the Gaussian and step-index objects, it is surprising that for the Gaussian object the errors are higher. This is because we have used the standard deviation to represent the errors, and we have excluded the area with the highest errors for the step-index object but not for the Gaussian one. The errors σΔIerr for the step-index object fluctuate over the whole error distribution, while those for the Gaussian object are highest where the derivative of the birefringence anisotropy has its maximum value. Comparing the errors of retardation reconstruction Δerr for both objects, it is noticeable that they are substantially smaller for the Gaussian object.
Fig. 2. (a) Error of birefringence integration σΔIerr for the step-index and Gaussian objects (Fig. 1b) with varying parameters; (b) errors of retardation reconstruction for both objects.
To visualize the differences in the errors introduced by a smooth object and a step-index one, we have performed a simulation of the reconstruction of objects having the following parameters: nclz = nclxy = 1.46, ncoxy = 1.4822 (Δnxy = 0.015), ncoz = 1.4747 (Δnz = 0.01) – "step index"; nclz = nclxy = 1.46, ncoxy = 1.5051 (Δnxy = 0.03), ncoz = 1.4747 (Δnz = 0.01) – "Gaussian". In Fig. 3 the cross-sections of the reconstructed retardation for both objects are presented. For the step-index object there are considerable reconstruction fluctuations in the region surrounding the retardation discontinuity, while for the object with a smooth variation of refractive index three error maxima are visible: one at the center and two smaller ones at the left and right sides of the object. These object areas are the places where the object retardation has the highest gradient.
Fig. 3. Cross-sections of the reconstructed birefringence of objects having a step-index (a) and a continuous (Gaussian) (b) distribution of birefringence; solid lines – reconstructed birefringence, dotted line – theoretical birefringence.
3 Conclusions In this paper we have analyzed an optical system for the 3D measurement of the birefringence of micro-optical elements. For small objects, where fine birefringence sampling in the measurement is necessary, the diffraction of light cannot be neglected; thus the standard tomographic reconstruction algorithm gives considerable errors. We have analyzed these errors through the simulation of the reconstruction of objects having step-index and continuous distributions of birefringence.
4 References
1. Mangal, S.K., Ramesh, K. (1999) Determination of characteristic parameters in integrated photoelasticity by phase-shifting technique. Optics and Lasers in Engineering 31:263-278
2. Hsieh, J. (2003) Computed Tomography. SPIE Press
3. Liu, Q.H. (1997) The PSTD algorithm: a time-domain method requiring only two cells per wavelength. Microwave and Optical Technology Letters 15:158-165
5 Acknowledgements The authors acknowledge the financial support by Ministry of Science and Information Technologies within the project 4T10C00425.
Optimization of electronic speckle pattern interferometers Amalia Martínez*, Juan Antonio Rayas*, Raúl Cordero** * Centro de Investigaciones en Óptica, A. C. Apartado Postal 1-948, C. P. 37000, León, Gto., MÉXICO, e-mail: [email protected] ** Department of Mechanical and Metallurgical Engineering, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Santiago, CHILE
1 Introduction The main optical interferometric techniques for measuring deformations are speckle techniques [1], moiré methods [1], photoelasticity [2], and holographic interferometry [3]. They are used to obtain relative displacements from fringe patterns that may be interpreted as contour maps of the phase difference induced by the specimen deformation. The automatic extraction of the information that a pattern encodes includes several stages: first, a phase shifting algorithm [3] or a Fourier transform method [4] is used to obtain the wrapped phase map; second, phase unwrapping is performed to obtain the phase differences induced by the deformation process; third, by using the unwrapped phase difference values and the adequate sensitivity vector component, the relative displacement is evaluated [5]. In this work, the sensitivity vector components are analyzed for each illumination source of an optical system with three divergent illumination beams. Our attention is focused on the quantification of the sensitivity vector components with respect to the effect of the incidence angles and the angular distribution of the illumination sources.
2 Theory
2.1 Sensitivity matrix
The basis of the displacement determination is given by [3]:

ΔI(P) = d(P) · e(P).   (1)

Then at each point P we have to solve the system of linear equations

[ΔI¹(P)]   [e¹x  e¹y  e¹z] [u(P)]
[ΔI²(P)] = [e²x  e²y  e²z] [v(P)]   (2)
[ΔI³(P)]   [e³x  e³y  e³z] [w(P)]

to obtain d = (u, v, w). The solution is

d(u, v, w) = E⁻¹(P) ΔI(P)   (3)

where E(P) is called the sensitivity matrix. The sensitivity vector components are computed by [5]:

eⁱx = (2π/λ) { (x0 − x)/[(x0 − x)² + (y0 − y)² + (z0 − z)²]^(1/2) − (x − xsi)/[(x − xsi)² + (y − ysi)² + (z − zsi)²]^(1/2) }   (4a)

eⁱy = (2π/λ) { (y0 − y)/[(x0 − x)² + (y0 − y)² + (z0 − z)²]^(1/2) − (y − ysi)/[(x − xsi)² + (y − ysi)² + (z − zsi)²]^(1/2) }   (4b)

eⁱz = (2π/λ) { (z0 − z)/[(x0 − x)² + (y0 − y)² + (z0 − z)²]^(1/2) − (z − zsi)/[(x − xsi)² + (y − ysi)² + (z − zsi)²]^(1/2) }   (4c)

with i = 1 … 3, where P0 = (x0, y0, z0) is the observer position (CCD camera position), Psi = (xsi, ysi, zsi) is the illumination point, and P = (x, y, z) is a point on the specimen surface. Then we can define the sensitivity function for each component as

Sⁱx = (eⁱx)²/|eⁱ(P)|² × 100, Sⁱy = (eⁱy)²/|eⁱ(P)|² × 100 and Sⁱz = (eⁱz)²/|eⁱ(P)|² × 100.   (5a-c)
2.2 Theoretical cases for three divergent beams We analysed a first case where the source positions are Ps1 = (17.45 cm, 0 cm, -166 cm), Ps2 = (-8.7 cm, 15.11 cm, -166 cm) and Ps3 = (-8.7 cm, -15.11 cm, -166 cm). The incidence angle of each illumination source is θi = 6° and the angular separation between illumination sources is ω = 120°. In the second case, the source positions were Ps1 = (167 cm, 0 cm, 0 cm), Ps2 = (-83.5 cm, 145 cm, 0 cm) and Ps3 = (-83.5 cm, -145 cm, 0 cm); the incidence angle is θi = 90° and the angular separation between illumination sources is ω = 120°. Finally, the third case was Ps1 = (0 cm, 0 cm, -167 cm), Ps2 = (0 cm, 161.3 cm, -43.22 cm) and Ps3 = (161.3 cm, 0 cm, -43.22 cm); the incidence angle for Ps1 is θi = 0° and it is 75° for the last two sources, and the angular separation between Ps2 and Ps3 is ω = 90°. The sensitivity functions for each component of the sensitivity vector according to Eq. 5, for each source, are presented in Figure 1 for the last case only. The observer position (CCD camera position) is P0 = (0 cm, 0 cm, -82 cm) and the specimen surface is considered plane. Figure 1 shows that if we choose a large incidence angle and the sources are located on the x and y axes, the sensitivity functions Sx and Sy are increased. The sensitivity function Sz increases when the source is located near the optical axis. Among the proposed geometries, the one described in the third case makes it possible to have a maximum sensitivity in each of the three directions.
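The sensitivity vector, the sensitivity functions of Eq. 5 and the inversion of Eq. 3 can be sketched numerically. The function names are our own, and the sign convention e = (2π/λ)(observation unit − illumination unit) is an assumption consistent with Eq. 4:

```python
import numpy as np

def sensitivity_vector(P, P0, Ps, lam):
    """Sensitivity vector e at surface point P for observer P0 and a
    divergent point source Ps (Eq. 4), all in the same length units."""
    P, P0, Ps = map(np.asarray, (P, P0, Ps))
    obs = (P0 - P) / np.linalg.norm(P0 - P)   # towards the CCD camera
    ill = (P - Ps) / np.linalg.norm(P - Ps)   # beam propagation direction
    return (2.0 * np.pi / lam) * (obs - ill)

def sensitivity_percent(e):
    """Sensitivity functions Sx, Sy, Sz of Eq. 5, in per cent."""
    return 100.0 * e**2 / np.dot(e, e)

def displacement(dPhi, e1, e2, e3):
    """Solve the 3x3 system of Eq. 2 for d = (u, v, w)."""
    return np.linalg.solve(np.vstack([e1, e2, e3]), dPhi)
```

For the third-case geometry, evaluating sensitivity_percent at the origin for the on-axis source Ps1 yields Sz = 100%, in line with the observation that a source near the optical axis maximises the out-of-plane sensitivity.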
3 Conclusion Simple geometries of ESPI systems were discussed, which show the influence of the source positions on the sensitivity vector. It can be observed from the analyzed cases that, in an optical system using three illumination beams, an illumination source located near the optical axis (incidence angle near 0°) gives maximum sensitivity in the w-direction. Positioning the second source on the y axis with an incidence angle near 90° gives maximum sensitivity in the v-direction, and positioning the third source on the x axis with an incidence angle near 90° gives maximum sensitivity in the u-direction.
4 Acknowledgments The authors wish to thank the Consejo de Ciencia y Tecnología del Estado de Guanajuato for partial financial support. R. R. Cordero thanks the support of
New Methods and Tools for Data Processing
233
the MECESUP PUC/9903 project and the Vlaamse Interuniversitaire Raad (VLIR-ESPOL, Componente 6).
Fig. 1. Percentage contribution of each sensitivity vector component for each source, for the optical set-up with three illumination beams: S1: Ps1 = (0 cm, 0 cm, -167 cm); S2: Ps2 = (0 cm, 161.3 cm, -43.22 cm) and S3: Ps3 = (161.3 cm, 0 cm, -43.22 cm).
References
1. Martínez Amalia, Rodríguez-Vera R., Rayas J. A., Puga H. J. (2003) Fracture detection by grating moiré and in-plane ESPI techniques. Optics and Lasers in Engineering 39(5-6): 525-536
2. Timoshenko S. P., Goodier J. N. (1970) Chapter 5: Photoelastic and Moiré Experimental Methods: 150-167; Chapter 1: Introduction: 1-14, in Theory of Elasticity. McGraw-Hill International Editions, Singapore
3. Kreis T. (1996) Chapter 3: Holographic Interferometry: 65-74; Chapter 4: Quantitative Evaluation of the Interference Phase: 126-129; Chapter 5: Processing of the Interference Phase: 186-189, in Holographic Interferometry (Jüptner W., Osten W., eds.). Akademie Verlag, New York
4. Takeda M., Ina H., Kobayashi S. (1982) Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. Journal of the Optical Society of America 72(1): 156-160
5. Martínez Amalia, Rayas J. A., Rodríguez-Vera R., Puga H. J. (2004) Three-dimensional deformation measurement from the combination of in-plane and out-of-plane electronic speckle pattern interferometers. Applied Optics 43(24): 4652-4658
Properties of phase shifting methods applied to time average interferometry of vibrating objects K. Patorski, A. Styk Warsaw University of Technology Institute of Micromechanics and Photonics
1 Introduction Time average interferometry allows easy finding of resonant frequencies and their vibration mode patterns irrespective of the frequency value. Temporal phase shifting (TPS) for automatic interferogram analysis supports the method in contrast and modulation calculations [1, 2]. The properties of the TPS method applied to two-beam interferogram modulation calculations are summarized. The modulation changes are introduced by a sinusoidal vibration. Simulations are conducted for two experimental errors: the phase step error and average intensity changes of TPS frames (the latter error might be caused by light source power changes and/or CCD matrix auto mode operation). Noise-free and intensity-noise fringe patterns obtained under null field and finite fringe detection modes are studied. Exemplary calculation results and experimental resonant mode visualizations of silicon membranes are shown. Next, the influence of the mentioned errors on phase-shift histograms is addressed.
3 TPS algorithms The following algorithms have been applied for two-beam interferogram intensity modulation simulations and calculations using experimental data: a) the classical four-frame algorithm [3-5]; b) a modified four-frame algorithm [6]; c) the five-frame modulation algorithm [3-5]; d) the Larkin five-frame algorithm [7]; e) the four-frame algorithm 4N1 using the frame sequence (1,3,4,5) [8]; f) the four-frame algorithm 4N2 using the frame sequence (1,2,3,5) [8]. The last two algorithms emphasize the phase step errors. The number of detector pixels with the same phase shift angle might serve as a source of information on TPS errors. Five-frame histograms of the phase shift angle α(x, y) have been calculated using the following equations:
The well-known five-frame algorithm introduced by Schwider et al. 1983, Cheng and Wyant 1985, and Hariharan et al. 1987 [3-5]:

α(x, y) = arccos[(I5 − I1) / (2(I4 − I2))]    (1)
Equations presented by Kreis [9]:

α(x, y) = arccos[(I1(I3 − I4) + (I2 − I3 + I4)(I2 − 2I3 + I4) − I5(I2 − I3)) / (4(I2 − I3)(I3 − I4))]    (2)
α(x, y) = arccos[(I2 − I4)(I1 − 2I2 + 2I3 − 2I4 + I5) / ((I2 − I4)(I1 − 2I3 + I5) − (I1 − I5)(I2 − 2I3 + I4))]    (3)

To see the influence of average intensity changes of TPS frames, the lattice-site representation of phase shift angles [10] was calculated as well.
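The three phase-step estimators can be cross-checked on simulated data. The sketch below uses assumed demo values for the bias, modulation, phase and phase step, generates one pixel of five TPS frames, and confirms that Eqs. (1)-(3) each return the applied step:

```python
import numpy as np

# One pixel of five TPS frames, I_k = a + b*cos(phi + (k-3)*alpha);
# bias a, modulation b, phase phi and step alpha are assumed demo values
a, b, phi, alpha = 120.0, 80.0, 0.7, 1.1
I1, I2, I3, I4, I5 = (a + b*np.cos(phi + (k - 3)*alpha) for k in range(1, 6))

# Eq. (1), Schwider / Cheng-Wyant / Hariharan
a1 = np.arccos((I5 - I1) / (2*(I4 - I2)))
# Eq. (2), Kreis
a2 = np.arccos((I1*(I3 - I4) + (I2 - I3 + I4)*(I2 - 2*I3 + I4) - I5*(I2 - I3))
               / (4*(I2 - I3)*(I3 - I4)))
# Eq. (3), Kreis
a3 = np.arccos((I2 - I4)*(I1 - 2*I2 + 2*I3 - 2*I4 + I5)
               / ((I2 - I4)*(I1 - 2*I3 + I5) - (I1 - I5)*(I2 - 2*I3 + I4)))
print(a1, a2, a3)   # each recovers alpha = 1.1
```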
3 Numerical simulations and experimental works Let us comment on the general case of simultaneous presence of phase step and average intensity errors. Parasitic modulations caused by those errors depend on their particular combination, the interferogram contrast, and the orientation of the two-beam interference fringes with respect to their contrast change direction. Average intensity changes of TPS frames represent a crucial factor for true visualization and measurement of the vibration amplitude. They influence the location and minima values of dark Bessel fringes in the case of the null field detection mode and when the carrier fringes are not parallel to their contrast change direction. Although null field detection provides the best results, stringent experimental conditions must be met. Detection with carrier fringes parallel to their contrast change direction (if possible) is recommended. Five-frame algorithms give better modulation reproduction than four-frame ones. Figure 1 shows exemplary results for a membrane vibrating at 170 kHz.
Fig. 1. a) grey level representation of simulated modulation distribution; b) cross-sections along columns 1 and 91; and c) experimentally determined modulation map using algorithm 4N1. Square membrane vibrating at 170 kHz; estimated frame recording errors: δIav ≈ 5% (relative average intensity error) and δαc ≈ −20° (quasi-linear phase step error).
4 Phase shift angle histograms Before histogram and lattice-site representation calculations, the component TPS frames should be noise preprocessed, because high frequency intensity noise influences the bias and modulation of the interferogram intensity distribution [1-5]. For that purpose spin filtering was used [11]. The following conclusions have been obtained from the calculated histograms: Conventional and lattice-site phase shift angle representations provide limited information on average intensity changes, except for some cases related to min and max intensity values of the first and fifth frame. Sharp asymmetries and/or quasi-central dips are found in phase shift histograms; they correspond to vertical displacements of characteristic quasi-elliptical patterns in lattice-site representations. Clear dips appear in 5-frame histograms when I1 and I5 represent the Imin and Imax average intensity values, or vice versa. Sharp asymmetries are found in histograms when I1 or I5 are the Imin or Imax values; in those cases lattice-site patterns shift vertically as well. For I1 > I5 the lattice-site pattern shifts upwards, for I1 < I5 it shifts downwards. Lattice-site representations calculated from equations (2) and (3) are much more irregular and generally give a different average shift angle αc (the most populated α value or dip location) than the widely used shortest histogram equation (1). The reason might be the much longer forms of Eqs. (2) and (3).
Fig. 2. Upper row - histograms calculated using Eqs. (1) (left), (2) (center) and (3) (right); bottom row – lattice-site representations. Circular membrane vibrating at 833 kHz; average intensities of TPS frames expressed by the numbers: 75.3, 68.3, 68.3, 69.1 and 67.1.
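The histogram diagnostic described above can be sketched as follows. The fringe pattern, noise level and bin count are assumed demo values, and the mode of the per-pixel α(x, y) map (computed with Eq. (1)) plays the role of the average shift angle αc:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, alpha = 120.0, 80.0, np.pi/2         # assumed bias, modulation, step
x = np.linspace(0, 4*np.pi, 256)
phi = np.tile(x, (256, 1))                 # carrier fringes across the field
frames = [a + b*np.cos(phi + (k - 2)*alpha) + rng.normal(0, 0.5, phi.shape)
          for k in range(5)]
I1, I2, I3, I4, I5 = frames

# Per-pixel phase-step estimate from Eq. (1)
with np.errstate(divide='ignore', invalid='ignore'):
    ratio = (I5 - I1) / (2*(I4 - I2))
alpha_map = np.arccos(ratio[np.abs(ratio) <= 1])   # keep arccos domain only

# Histogram of alpha(x, y); its mode plays the role of alpha_c
hist, edges = np.histogram(alpha_map, bins=90, range=(0, np.pi))
alpha_c = 0.5*(edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(alpha_c)
```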
5 Acknowledgements This work was supported, in part, by the Ministry of Scientific Research and Information Technology grant No. 3T10C 001 27 and statutory funds.
6 References 1. Patorski K, Sienicki Z, Styk A, (2005) Phase-shifting method contrast calculations in time-averaged interferometry: error analysis, Optical Engineering 44, in press 2. Patorski K, Styk A, (2005) Interferogram intensity modulation calculations using temporal phase shifting: error analysis, Proc. SPIE 585655, in press 3. Schwider J, (1990) Advanced evaluation techniques in interferometry, Chap. 4 in Progress in Optics, ed. Wolf E, 28: 271-359, Elsevier, New York 4. Greivenkamp J.E, Bruning J.H, (1992) Phase shifting interferometry, Chap. 14 in Optical Shop Testing, ed. Malacara D, 501-598, John Wiley & Sons, New York 5. Creath K, (1994) Phase-shifting holographic interferometry, Chap. 5 in Holographic Interferometry, ed. Rastogi P.K, 109-150, Springer-Verlag, Berlin 6. Schwider J, Falkenstorfer O, Schreiber H, Zoller A, Streibl N, (1993) New compensating four-phase algorithm for phase-shift interferometry, Optical Engineering 32: 1883-1885 7. Larkin K.G, (1996) Efficient nonlinear algorithm for envelope detection in white light interferometry, Journal of the Optical Society of America A 13: 832-843 8. Joenathan C, (1994) Phase-measuring interferometry: new methods and error analysis, Applied Optics 33: 4147-4155 9. Kreis T, (1996) Holographic Interferometry: Principles and Methods, Akademie Verlag, Berlin 10. Gutmann B, Weber H, (1998) Phase-shifter calibration and error detection in phase-shifting applications: a new method, Applied Optics 37: 7624-7631 11. Yu Q, Liu X, (1994) New spin filters for interferometric fringe patterns and grating patterns, Applied Optics 33: 3705-3711
Depth-resolved displacement measurement using Tilt Scanning Speckle Interferometry Pablo D. Ruiz and Jonathan M. Huntley Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Ashby Road, Loughborough, Leicestershire, LE11 3TU, United Kingdom
1 Tilt Scanning Interferometry The first demonstrations of depth-resolved displacement field measurement have been presented recently. In those based on low coherence interferometry (LCI) [1, 2] the system is sensitive only to the movement of scattering points lying within the coherence gate slice selected by the reference mirror position. Wavelength Scanning Interferometry (WSI) systems provide decoupling of the depth resolution and displacement sensitivity, but also appear to possess some additional practical advantages over LCI, the most important being an improved signal-to-noise ratio [3, 4]. In this paper we present a different approach to measure depth-resolved displacements within semi-transparent materials, based on tilting the illumination angle during the acquisition of image sequences (Fig. 1). This provides the necessary depth-dependent phase shifts that allow the reconstruction of the object structure and its internal displacements. In a proof-of-principle experiment, a collimated beam is steered by a mirror mounted on a tilting stage controlled by a ramp generator. An imaging system captures the interference between the scattered light that propagates nearly normal to the object surface and a smooth wavefront reference beam R. The time varying interference signal is recorded throughout the whole tilt scanning sequence. The test object was a beam manufactured in-house with clear cast epoxy resin seeded with a small amount of titanium oxide white pigment to increase the scattering within the material. Under a three-point bending test, the beam was loaded with a ball tip micrometer against two cylindrical rods, as shown in Fig. 2(a).
Fig. 1. By continuously tilting the illumination beam, depth-dependent Doppler shifts f1 and f2, corresponding to slices S1 and S2, are introduced in the time varying interference signal.
Fourier transformation of the resulting 3D intensity distribution along the time axis reconstructs the scattering potential (magnitude spectrum) and the optical phase within the medium. Repeating the measurements with the object wave at equal and opposite angles about the observation direction resulted in two 3D phase-change volumes, the sum of which gave the out-of-plane-sensitive phase volume and the difference between them gave the in-plane phase volume. From these phase-change volumes the in-plane and out-of-plane depth-resolved displacement fields are obtained. A reference surface was placed just in front of the object, and served to compensate for shifts of the spectral peaks along the horizontal axis x (see Fig. 2(a)) due to non-linearity of the tilting stage. The main measurement parameters were set as follows: camera exposure time Texp = 0.1397 s; framing rate FR = 7.16 fps; acquired frames Nf = 480; acquisition time T = Nf × Texp = 68.6 s; spatial resolution of field of view (FOV): 256×256 pixels; size of FOV: 7.2×7.2 mm2; tilt angle scanning range Δθ = 0.0048 rad; illumination angle θ = 45 deg; material refractive index n1 = 1.4; laser wavelength λ = 532 nm; laser power per beam: ~35 mW CW; loading pin displacement: 40 μm along the z axis. It can be shown that the depth resolution of the system is:
δz = γλ / (n0 ξ Δθ)    (1)
where n0 is the refractive index of the medium surrounding the object (air) and ξ is a constant that depends on the material index of refraction and the illumination angle θ. Depending on the windowing function used to
Fig. 2. Experimental results: (a) Schematic view of an epoxy resin beam under 3-point bending load; (b) In-plane (top row) and out-of-plane (bottom row) wrapped phase-change distribution for different slices within the beam. Black represents −π and white +π. Fringe spacing is equivalent to ~0.38 μm and ~0.15 μm for in-plane and out-of-plane sensitivity, respectively
evaluate the Fourier transform, γ = 2, 4 for rectangular or Hanning windows, respectively. In our experiment, n0 = 1, γ = 4, and therefore the depth resolution was δz ~ 1.1 mm. The top row of Fig. 2(b) shows the in-plane (x axis) phase-change distribution for different slices within the epoxy resin beam, starting at the object surface z = 0 mm (left) in steps of -1.74 mm down to z = -5.22 mm (right). The out-of-plane phase-change distributions for the same depth slices are shown in the bottom row of Fig. 2(b). These phase maps have been corrected for the refraction due to the slightly bent surface of the beam. It can be seen that the gradient of the in-plane displacements is reversed as we move from the front to the back surface of the beam. This indicates a tensile state for the first front slices and a compressive state for the slice behind the neutral axis at z = -3.8 mm. A nearly flat phase distribution is obtained for the slice in the neutral axis (third column), as would
be expected. The out-of-plane displacements show different levels of bending as we approach the back surface from the front surface. The asymmetry of the distribution is produced by the position of the point of contact between the loading pin and the beam, which was ~2mm below the horizontal symmetry axis of the beam. The last slice at z = -5.22 mm starts to reveal detail of the local deformation around the point of contact. The reference surface can be seen in Fig. 2(b) at the bottom of each wrapped phase distribution. These results compare well with finite element simulations.
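The depth-resolution formula of Eq. (1) can be exercised numerically. The constant ξ is not reproduced in this excerpt, so the value below is a placeholder chosen to reproduce the reported δz ≈ 1.1 mm; the sketch mainly illustrates the factor-of-two cost of a Hanning window (γ = 4) relative to a rectangular window (γ = 2):

```python
# Eq. (1): delta_z = gamma * lambda / (n0 * xi * delta_theta)
wavelength = 532e-9     # m, laser wavelength
delta_theta = 0.0048    # rad, tilt angle scanning range
n0 = 1.0                # refractive index of the surrounding medium (air)
xi = 0.40               # placeholder for the geometry-dependent constant

def depth_resolution(gamma):
    return gamma * wavelength / (n0 * xi * delta_theta)

print(depth_resolution(2))  # rectangular window
print(depth_resolution(4))  # Hanning window, ~1.1e-3 m as reported
```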
2 Conclusion Promising results were achieved by means of a novel technique that we call Tilt Scanning Interferometry (TSI) to measure 3D depth-resolved displacement fields within semitransparent scattering materials. A depth resolution δz ~ 1.1 mm was achieved for a tilting range of 0.0048 rad using a home-made tilting stage. By means of TSI, the scattering potential within the sample can be reconstructed in a 3D data volume as in scanning Optical Coherence Tomography. Most importantly, in-plane and out-of-plane displacements can be measured within the object under study with a sensitivity of σz ~ λ/30 (decoupled from the depth resolution) and up to a depth of ~6 mm with our simple system.
3 References 1. Gülker, G, Kraft, A (2003) Low-coherence ESPI in the investigation of ancient terracotta warriors. Speckle Metrology 2003, Trondheim, Norway, SPIE 4933: 53-58 2. Gastinger, K, Winther, S, Hinsch K, D (2003) Low-coherence speckle interferometer (LCSI) for characterization of adhesion in adhesivebonded joints. Speckle Metrology 2003, Trondheim, Norway, SPIE 4933: 59-65 3. Ruiz, P, D, Zhou, Y, Huntley J, M, Wildman, R, D (2004) Depthresolved whole-field displacement measurement using wavelength scanning interferometry. Journal of Optics A: Pure and Applied Optics 6: 679-683 4. Ruiz, P, D, Huntley, J, M, Wildman, R, D (2005) Depth-resolved whole-field displacement measurement using Wavelength Scanning Electronic Speckle Pattern Interferometry. Applied Optics (in press)
New Phase Unwrapping Strategy for Rapid and Dense 3D Data Acquisition in Structured Light Approach G Sai Siva and L Kameswara Rao Department of Instrumentation Indian Institute of Science Bangalore-12, India.
Abstract The sinusoidal structured light projection (SSLP) technique, specifically the phase stepping method, is in widespread use to obtain accurate, dense 3-D data. But if the object under investigation possesses surface discontinuities, the phase unwrapping stage (an intermediate step in SSLP) mandatorily requires several additional images of the object with projected fringes (of different spatial frequencies) as input to generate a reliable 3D shape. On the other hand, the color-coded structured light projection (CSLP) technique is known to require a single image as input, but generates sparse 3D data. We therefore propose the use of CSLP in conjunction with SSLP to obtain dense 3D data with a minimum number of images as input. This approach is shown to be significantly faster and more reliable than the temporal phase unwrapping procedure that uses a complete exponential sequence. For example, if a measurement with the accuracy obtained by interrogating the object with 32 fringes in the projected pattern is carried out with both methods, the new strategy requires only 5 frames, as compared to 24 frames required by the latter method. Keywords: Structured light projection; shape measurement; phase stepping; phase unwrapping; color-coding; surface discontinuities
1 Introduction The measurement of surface shape by use of projected structured light patterns is a well-developed technique. Especially, SSLP techniques have
been extensively used, as they can give accurate and dense 3D data. The procedure involves projecting a pattern onto the object from an offset angle and recording the image of the pattern, which is phase modulated by the topographical variations of the object surface. An automated analysis is then carried out to extract the phase from the deformed fringe pattern, mostly using either FFT [1] or phase stepping [2] methods, both of which produce a wrapped phase distribution. The reconstruction of the surface profile of objects with inherent surface discontinuities or spatially isolated regions is usually a difficult problem for standard phase unwrapping techniques. To overcome this problem several phase unwrapping strategies were developed [3][4][5]. All of them mandatorily require multiple phase maps generated by varying the spatial frequency of the projected fringe pattern either linearly or exponentially. Further, the degree of reliability varies from method to method. A different class of structured light projection techniques relies upon color-coded projection. They are capable of extracting 3D data from a single image. Different color-coding strategies can be seen in [6],[7]. However, they can give only sparse 3D data. In the following sections we suggest an approach for obtaining dense 3D data of objects, even with surface discontinuities, while requiring a minimum number of input images compared to any of the contemporary phase unwrapping algorithms.
2 Method The first step of profiling objects in the proposed method involves the generation of a wrapped phase map using the four-frame phase shifting algorithm. The fundamental concept of the phase stepping method, described elsewhere [2], is only briefly reviewed here. 2.1 Phase Stepping Algorithm
When a sinusoidal fringe pattern is projected onto a 3-D diffuse object, the mathematical representation of the deformed fringe pattern may be expressed in the general form

I(x, y) = a(x, y) + b(x, y) cos φ(x, y)    (1)
where a(x, y) and b(x, y) represent unwanted irradiance variations arising from the non-uniform light reflection by a test object. The phase function
φ(x, y) characterizes the fringe deformation and is related to the object shape h(x, y). The principal task is to obtain φ(x, y) from the measured fringe-pattern intensity distribution. Upon shifting the original projected fringe pattern by a fraction 1/N of its period P, the phase of the pattern represented by Eq. (1) is shifted by 2π/N. Using four images, φ(x, y) can be retrieved independently of the other parameters in Eq. (1):

φ(x, y) = arctan[(I4 − I2) / (I1 − I3)]    (2)

2.2 Phase Unwrapping
The object phase calculated according to Eq. (2) is wrapped in the range −π to π. The true phase of the object is

φun(x, y) = φ(x, y) + 2πn(x, y)    (3)

where n(x, y) is an integer. Unwrapping is only a process of determining n(x, y). A conventional spatial phase unwrapping algorithm searches for locations of phase jumps in the wrapped phase distribution and adds/subtracts 2π to bring the relative phase between two neighboring pixels into the range −π to π. Thus, irrespective of the actual value of n(x, y) to be evaluated, it always assigns ±1, and thereby fails to reliably unwrap phase maps when profiling objects with surface discontinuities. In order to determine n(x, y) we introduce the following procedure:

2.3 New Unwrapping Procedure
In this new approach, an additional image of the object captured under illumination of a color-coded pattern is used for calculating n(x, y). A color-coded pattern is generated using MATLAB in a computer and projected with the help of an LCD projector. The generated color-coded pattern comprises an array of rectangular bands, each band identified uniquely by its color and arranged in a specific sequence, as shown in Fig. 1. This pattern is projected onto the reference plane and its image (Cr(i,j)) is recorded. A non-planar real object distorts the projected structured pattern in accordance with its topographical variations (Co(i,j)). Now, we know a priori the color expected to be returned by every point from an ideally planar object surface. Deviations in the detected color at any point on the non-planar object surface essentially correspond to local height deviations. Therefore, from the knowledge of the observed color (Co) and expected color (Cr), the height deviation at every point on the object surface can be expressed in terms of the difference of their band indices (m), as explained in [8]. If the width of every band (w) is made equal to the pitch of the gray-scale fringe pattern used in phase stepping, then m can be directly related to n(x, y) in Eq. (3). This is the basis for determining n(x, y) unambiguously. The procedure to extract the necessary information from the deformed color-coded pattern and its use in determining n(x, y) is presented in [8].
Fig. 1. Generated color-coded pattern
3 Experimental Results
Fig. 2. (a) Fringe pattern (b) color-coded pattern on the surface of an object with two step discontinuities (c) wrapped phase map obtained with phase stepping (d) phase map after unwrapping with the help of Fig.2 (b)
It is impossible to unwrap the phase map in Fig. 2(c) correctly by conventional spatial methods because the phase jumps at the steps are too large (more than 2π). Even though it is impossible to determine the exact number of fringes shifted at each step height from the gray scale fringe pattern (Fig. 2(a)) alone, the color-coded pattern on the object surface clearly reveals this information, as can be seen from Fig. 2(b).
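This failure mode is easy to reproduce. The sketch below builds a 1D phase profile with an assumed 7 rad step (larger than 2π) and applies a conventional spatial unwrapper, with numpy's np.unwrap standing in for the class of algorithms described in Sec. 2.2; the step is mis-unwrapped by exactly one fringe order:

```python
import numpy as np

# 1D phase profile with a step discontinuity of 7 rad (> 2*pi)
true_phase = np.concatenate([np.linspace(0.0, 2.0, 50),
                             np.full(30, 2.0 + 7.0)])
wrapped = np.angle(np.exp(1j * true_phase))    # wrap into (-pi, pi]

unwrapped = np.unwrap(wrapped)                 # conventional spatial unwrapping

# The step survives wrapping as a jump of only 7 - 2*pi ~ 0.72 rad, so the
# unwrapper sees no fringe-order change and the result is off by 2*pi:
error = np.max(np.abs(unwrapped - true_phase))
print(error)
```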
4 Conclusions The new approach proposed, that combines CSLP and SSLP in a specific way, has resulted in a new and more powerful method for generating rapid
and dense 3D data. It is shown to be significantly faster and more reliable than temporal phase unwrapping using a complete exponential sequence; compared to it, the reduction both in image acquisition and in analysis times by the factor [N(log2 S + 1)] / (N + 1) is an important advantage of the present approach (N: number of frames used in phase stepping; S: number of fringes in the projected pattern).
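The claimed reduction can be checked with a few lines of arithmetic, using the paper's example of N = 4 phase-stepped frames and S = 32 projected fringes:

```python
from math import log2

N, S = 4, 32                      # phase steps; fringes in the pattern
exponential = N * (log2(S) + 1)   # complete exponential sequence: 24 frames
proposed = N + 1                  # 4 phase-stepped frames + 1 colour image
print(exponential, proposed, exponential / proposed)   # 24.0 5 4.8
```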
5 References
[1] Mitsuo Takeda et al., (1982) J. Opt. Soc. America 72(1): 156-160
[2] V. Srinivasan et al., (1984) Applied Optics 23(18): 3105-3108
[3] H. Zhao et al., (1994) Applied Optics 33: 4497-4500
[4] J.M. Huntley and H.O. Saldner, (1997) Meas. Sci. Technol. 8: 986-992
[5] Hong Zhang et al., (1999) Applied Optics 38(16): 3534-3541
[6] Weiyi Liu et al., (2000) Applied Optics 39(20): 3504-3508
[7] Li Zhang et al., (2002) Proc. of the 1st Int. Symp. on 3DPVT: 24-36
[8] Sai Siva et al., (2005) Proc. of SPIE 5856, Paper 78
Determination of modulation and background intensity by uncalibrated temporal phase stepping in a two-bucket spatially phase stepped speckle interferometer Peter A.A.M. Somers and Nandini Bhattacharya Optics Research Group, Delft University of Technology Lorentzweg 1, NL-2628 CJ Delft, the Netherlands
1 Abstract A new phase stepping method is presented, based on the combination of spatial and temporal phase stepping. The method comprises one fixed spatial phase step of π/2 and two arbitrary, unknown temporal phase steps. The method prevents phase errors caused by phase changes during temporal phase stepping. It is therefore particularly useful for dynamic applications, but will also improve system performance for quasi-static applications in non-ideal environments.
2 Introduction Optical path differences between two interfering beams can be calculated modulo 2π by applying a phase stepping method. In general three or four interferograms are involved in the calculation of phase; mostly multiples of π/2 or 2π/3 are applied as a phase step [1]. Phase stepped interferometers can be subdivided into two classes:
- temporally phase stepped systems
- spatially phase stepped systems
For systems in the first class, phase steps are applied sequentially, in general by a physical change of the optical path length in one of the interfering beams, for instance by displacing a mirror by means of a piezo element. After each phase step a new interferogram is acquired, and after the desired number of interferograms is obtained, phase is calculated. This approach has the advantage that all interferograms are acquired with one single camera. The disadvantage is that the object of interest or the medium between the object and the interferometer may have changed between two exposures, which will lead to errors in the calculation of phase. As a result this approach is not appropriate, without adaptation, for measurements of dynamic events. In addition, piezo elements suffer from hysteresis, drift, and non-linear behaviour, which is a disadvantage with respect to calibration of the phase stepping procedure. In systems of the second class, phase stepping can be realized by introducing a phase difference for adjacent pixels by applying an oblique reference beam. In general two to four pixels are involved. This method is known as spatial phase stepping [1]. The advantage of this approach is that the information necessary to calculate phase is present in a single image, representing one particular state of the object. A disadvantage is that a speckle should be large enough to cover the three or four adjacent pixels involved, which limits the efficient use of available light. An alternative method for spatial phase stepping is also based on simultaneous acquisition of two or more phase stepped interferograms. This can be implemented by dividing the interfering beams over two or more optical branches that each have a fixed phase step with respect to each other. Such a system has been realized recently: a shearing speckle interferometer with two optical branches allowing simultaneous acquisition of two phase stepped interferograms [2]. The phase step is π/2. The intensities I1 and I2 of the two phase stepped interferograms that can be acquired simultaneously with this system are:

I1 = IB + IM cos(φ)    (1)
I2 = IB + IM sin(φ)    (2)
where IB and IM are the background and modulation intensities, respectively, and φ is the phase. The phase step in Eq. 2 is −π/2. Phase φ can be calculated by Eq. 3, which can be derived from Eqs. 1 and 2:
φ = arctan[(I2 − IB) / (I1 − IB)]    (3)
In Eq. 3 the modulation intensity IM is eliminated, but the unknown background intensity IB is still present. In the next section a method will be presented that resolves IB, after which phase φ can be calculated.
3 Quadrature phase stepping When three π/2 spatially phase stepped interferogram pairs are taken, each pair with an additional temporal phase step, this combination of spatial and temporal phase stepping yields six equations with five unknowns. For each pair the π/2 phase step is fixed; the temporal phase steps are assumed to be unknown. IB and IM are assumed not to have changed during temporal phase stepping, a requirement also to be met for conventional phase stepping.

I1 = IB + IM cos(φ1), I2 = IB + IM sin(φ1)    (4),(5)
I3 = IB + IM cos(φ2), I4 = IB + IM sin(φ2)    (6),(7)
I5 = IB + IM cos(φ3), I6 = IB + IM sin(φ3)    (8),(9)

We are particularly interested in φ1, the phase angle that represents the initial state of the object. After a change, a second set consisting of three pairs of π/2 phase stepped interferograms is taken, and another phase result, representative of the changed state of the object, is obtained. The phase change caused by the event can now be calculated modulo 2π by taking the difference between initial and final phase. The method can be illustrated graphically by a parametric presentation of the three intensity pairs (Fig. 1). On the horizontal axis the intensities I1, I3, and I5, given by the cosine equations, are plotted. On the vertical axis the intensities given by the sine equations are plotted: I2, I4, and I6. The three points representing the three intensity pairs all lie on a circle with a radius of IM. The location of the centre of the circle is given by IB. Both IB and IM can be calculated with known geometrical methods.
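The "known geometrical methods" can be sketched as a circumcircle computation: the three (cosine, sine) intensity pairs determine the circle centre (IB, IB) and radius IM, after which φ1 follows from Eq. 3. All numeric values below are assumed demo inputs:

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[2*(x2 - x1), 2*(y2 - y1)],
                  [2*(x3 - x1), 2*(y3 - y1)]])
    rhs = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                    x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(A, rhs)
    return cx, cy, np.hypot(x1 - cx, y1 - cy)

# Simulate Eqs. (4)-(9) for one pixel: fixed spatial step of pi/2,
# arbitrary (uncalibrated) temporal phase steps
IB, IM, phi1 = 130.0, 55.0, 0.8            # assumed demo values
phases = [phi1, phi1 + 1.234, phi1 + 2.345]
pairs = [(IB + IM*np.cos(p), IB + IM*np.sin(p)) for p in phases]

cx, cy, r = circle_through(*pairs)          # recovers IB (centre) and IM (radius)
phi1_rec = np.arctan2(pairs[0][1] - cy, pairs[0][0] - cx)   # Eq. (3)
print(cx, cy, r, phi1_rec)
```

Because the temporal steps enter only through the positions of the points on the circle, their values never need to be known, which is the robustness argument made below.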
I5 = IB + IM cos(M I6 = IB + IM sin(M
M
M
M
I90
I1 = IB + IM cos(M I2 = IB + IM sin(M
IM IB IB I0
Fig. 1. Parametric presentation of three pairs of S/2 spatially phase stepped interferograms. Temporal phase step between pairs is arbitrary. Intensity data belong to a single pixel.
250
New Methods and Tools for Data Processing
It is clear that the values of M2 and M3 can be arbitrary, so it is not necessary to calibrate the temporal phase steps. These steps can even be unknown, which allows the object or the medium between the object and the interferometer to change during phase stepping. As a result the method is very robust for quasi-static applications, and exceedingly appropriate for dynamic applications. The method requires three temporally phase stepped pairs of S/2 spatially phase stepped interferograms, but can be extended to four or more.
4 Conclusions A new phase stepping method has been presented, based on the combination of spatial and temporal phase stepping. The spatial phase step is fixed at π/2; the temporal phase steps are arbitrary and need not be known. At least three temporally phase stepped pairs of π/2 spatially phase stepped interferograms are required. The proposed method is very robust for quasi-static applications and well suited for dynamic applications, since the object is allowed to change during phase stepping.
5 Acknowledgements This research was supported by the Technology Foundation STW, the Applied Science Division of NWO and the Technology Programme of the Ministry of Economic Affairs.
6 References 1. B. V. Dorrio and J. L. Fernandez, “Phase-evaluation methods in whole-field optical measurement techniques,” Measurement Science & Technology, vol. 10, pp. R33-R55 (1999). 2. Peter A.A.M. Somers and Hedser van Brug, “A single camera, dual image real-time-phase-stepped shearing speckle interferometer”, Proceedings Fringe 2001, pp. 573-580, Wolfgang Osten, Werner Jüptner, eds, Elsevier (2001).
SESSION 2 Resolution Enhanced Technologies Chairs Katherine Creath Tucson (USA) Peter J. de Groot Middlefield (USA)
Invited Paper
EUVA's challenges toward 0.1 nm accuracy in EUV at-wavelength interferometry Katsumi Sugisaki, Masanobu Hasegawa, Masashi Okada, Zhu Yucong, Katsura Otaki, Zhiqiang Liu, Mikihiko Ishii, Jun Kawakami, Katsuhiko Murakami, Jun Saito, Seima Kato, Chidane Ouchi, Akinori Ohkubo, Yoshiyuki Sekine, Takayuki Hasegawa, Akiyoshi Suzuki, Masahito Niibe* and Mitsuo Takeda** The Extreme Ultraviolet Lithography System Development Association (EUVA), 3-23 Kanda Nishiki-cho, Chiyoda-ku, Tokyo, 101-0054, Japan *University of Hyogo 3-1-2 Kouto, Kamigori-cho, Ako-gun, Hyogo, 678-1205 Japan **University of Electro-Communications 1-5-1, Chofugaoka, Chofu, 182-8585, Japan
1 Introduction Extreme ultraviolet (EUV) lithography, using radiation at a wavelength of 13.5 nm, is a next-generation technology for fabricating fine device patterns below 32 nm in size. The wavefront tolerance of the projection optics used in EUV lithography is required to be less than λ/30 RMS, corresponding to 0.45 nm RMS. Wavefront metrologies are used to fabricate these optics. In the EUV region, the optics use multilayer-coated mirrors. In general, visible and ultraviolet interferometries are also applicable to the evaluation of such optics. However, the wavefront measured with visible/ultraviolet light differs from that measured with EUV light due to the effect of the multilayers. Figure 1 shows the wavefront difference between wavelengths of 266 nm and 13.5 nm. Therefore, wavefront metrology at the operating wavelength (at-wavelength) is essential for developing such optics. A study on EUV wavefront metrology was started in the early 1990s at Lawrence Berkeley National Laboratory.[1] A Japanese research project was started at the Association of Super-Advanced Electronics Technologies (ASET) in 1999. The Extreme Ultraviolet Lithography System Development Association (EUVA) has taken over this project.
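The quoted tolerance can be checked directly as a one-line sanity check of the λ/30 figure at λ = 13.5 nm:

```python
# wavefront tolerance of the projection optics: λ/30 RMS at λ = 13.5 nm
wavelength_nm = 13.5
tolerance_nm = wavelength_nm / 30.0
print(round(tolerance_nm, 2))  # 0.45 nm RMS, matching the value quoted above
```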
Fig. 1. Wavefront difference between measurement wavelengths of λ = 266 nm and λ = 13.5 nm; 11.5 mλ_EUV RMS.
The final goal of EUVA is to build the EUV Wavefront Metrology System (EWMS) by March 2006, which will evaluate six-mirror projection optics of NA 0.25 for mass-production EUV exposure tools. In order to develop metrological techniques for evaluating such optics with ultra-high accuracy, we have built an Experimental EUV Interferometer (EEI) at the NewSUBARU.[2] Using the EEI, six different metrological methods can be tested on the same test optic to determine the most suitable measurement methods. The six methods are point diffraction interferometry (PDI), line diffraction interferometry (LDI), lateral shearing interferometry (LSI), slit-type lateral shearing interferometry (SLSI), double-grating lateral shearing interferometry (DLSI) and cross-grating lateral shearing interferometry (CGLSI). The EEI works well, and the six types of interferograms were successfully obtained. In this paper, we present our recent results including a comparison among the six metrological methods, analyses of error factors, developments of calibration methods for achieving high accuracy, and systematic error evaluations as part of an assessment of the accuracy.
2 Interferometric methods and analyses Figure 2 shows a schematic diagram of the EEI. The test optic is a Schwarzschild optic of NA 0.2. A coherent EUV beam from the long undulator of the NewSUBARU enters from the left of this figure. The beam is focused onto the 1st pinhole mask by a Schwarzschild-type illuminator. The EEI has five piezo-stages for precise alignment of optical
components such as pinhole masks and gratings. Each mask and grating carries many kinds of patterns. By exchanging the patterns on these masks and gratings, we can easily change the type of the testing interferometer.
Fig. 2. Schematic diagram of the EEI.
2.1 PDI and LDI
The left-side schematic in Fig. 3 shows the PDI [3] and the LDI. The PDI uses a 650 nm pinhole. The pinhole generates an aberration-free spherical wavefront. The spherical wavefront is divided into the 0th and +/-1st-order diffracted waves by a binary grating. These waves pass through the test optic and arrive at a 2nd pinhole mask, which has a small pinhole and a large window. The 0th-order wave passes through the small pinhole and is converted into a spherical wave again by the pinhole. One of the 1st-order diffracted waves goes through the large window, carrying the aberration information of the test optic. These two waves interfere, and the interference fringes are observed by a CCD camera. In the LDI, the pinholes of the PDI are replaced by slits in order to increase the number of detectable photons. This compensates for the degradation of the S/N ratio of the PDI for high-NA optics. Because the LDI utilizes diffraction by slits instead of pinholes, only one-dimensional information on the wavefront can be obtained.
In order to obtain two-dimensional data, two sets of measurements with perpendicular diffracting slits are required.
Fig. 3. Principles of the PDI, the LDI, the LSI, the SLSI and the DLSI.
2.2 LSI and SLSI
The middle schematic in Fig. 3 shows the LSI [4] and the SLSI. The 1st pinhole is illuminated by the EUV radiation. An aberration-free spherical wavefront is generated by diffraction at the 1st pinhole. The aberration-free wave goes through the optic under test. The wave passing through the optic is aberrated and is diffracted by a binary grating. An order-selection mask is placed at the image plane of the test optic. The mask has two large windows, which act as spatial filters. Only the +/-1st-order diffracted waves can pass through the windows in the mask; the 0th- and higher-order diffracted waves are blocked. By using the order-selection mask for spatial filtering, noise is reduced and measurement precision is improved. The +/-1st-order diffracted waves, which carry the aberration information of the test optic, interfere with each other, and the interference fringes are observed by the CCD camera. By shifting the grating laterally, phase shifting measurement is achieved for high sensitivity. In the SLSI, the 1st pinhole of the LSI is replaced by a slit in order to increase the number of detectable photons. From the viewpoint of light quantity, the SLSI therefore has an advantage over the LSI. Because the SLSI utilizes diffraction by the slit instead of the pinhole of the LSI, only one-dimensional information on the wavefront can be obtained. In order to obtain two-dimensional data, two sets of measurements with perpendicular diffraction directions are required, the same as for the LDI.
2.3 DLSI
The DLSI is one type of shearing interferometry.[5] The DLSI uses two gratings, placed before the object and image planes of the test optic, in conjugate positions with respect to the optic under test. The illuminating EUV wave is divided into a 0th-order wave and +/-1st-order diffracted waves by the first binary grating. A two-window mask is placed at the object plane of the test optic. The 0th-order and +1st-order waves diffracted by the first grating are selected by the mask. The two waves pass through the test optic in mutually laterally shifted positions and are diffracted again by the second grating. Since the two gratings are placed in conjugate positions, the first grating is imaged onto the second grating. Therefore, the 0th-order and +1st-order waves diffracted by the first grating completely overlap, and aberrations in the illuminating beam cancel out. The second mask, with a large window, selects two waves: the 2nd grating's 0th order of the 1st grating's +1st order, and the 2nd grating's -1st order of the 1st grating's 0th order. These two waves interfere after the second mask, and the interference fringes are observed by the CCD camera. By shifting one of the gratings laterally, phase shifting measurement is achieved.
2.4 CGLSI
The CGLSI is based on the Digital Talbot Interferometer (DTI) [6][7] and the EUV lateral shearing interferometer (LSI). We use a cross grating to divide and shear the beam passing through the test optic. One of its features is the use of order-selecting windows. The optical layout of the CGLSI is shown in Fig. 4. Two configurations of the CGLSI are available. The aberration-free spherical wavefront generated by an object pinhole passes through the test optic and is diffracted by the cross grating located before the image plane. In the window-less type of CGLSI, by setting the cross grating at the Talbot plane, the retrieved image of the cross grating is observed by the CCD camera as an interferogram. In this case, the interferogram is deformed by the aberrations of the test optic. In the 4-window type of CGLSI, four windows are set on the image plane and work as a spatial filter that blocks undesired orders of diffracted light. The four first-order diffracted beams (+/-1st order in the X direction, +/-1st order in the Y direction) interfere on the CCD camera and form an interferogram. This interferogram is also deformed by the aberrations of the test optic.
Fig. 4. Principle of the CGLSI.
The Fourier transform method (FTM) was applied to retrieve the differential wavefronts. Figure 5 shows the wavefront retrieval process of the CGLSI. First, applying a 2-dimensional Fourier transform, we obtain the spatial frequency spectrum of the interferogram. Secondly, we set a spectral band-pass filter around the carrier-frequency domain that corresponds to the pitch of the interferogram. After shifting one of the carrier-frequency spectra to zero frequency and executing an inverse Fourier transform, we obtain a differential wavefront. This process is applied to the two spectra corresponding to the x- and y-differential wavefronts, as shown in Fig. 5. The phases of the two complex amplitude maps correspond to the differential wavefronts in the x-direction and the y-direction, respectively. The differential Zernike polynomial fitting method [8] was applied to retrieve the wavefront in the CGLSI; annular Zernike polynomials are used in the process. There are advantages and disadvantages for both the phase shifting method and the FTM. The phase shifting method is mainly influenced by factors that vary in the time domain, for example, light intensity changes and vibrations of the system. The FTM is mainly influenced by factors that vary in the spatial domain, for example, the light intensity distribution of the interferogram and the light cross-talk among the different diffraction orders.
Fig. 5. Wavefront retrieval process of the CGLSI.
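The retrieval chain of Fig. 5 (2-D Fourier transform, band-pass around the carrier, shift to zero frequency, inverse transform, phase) can be sketched on a synthetic carrier-fringe pattern. The grid size, carrier frequency and filter width below are illustrative choices, not the parameters of the EEI:

```python
import numpy as np

N = 256
X, Y = np.meshgrid(np.arange(N), np.arange(N))
# synthetic differential wavefront [rad], smooth and well below the carrier bandwidth
phi = 2.0 * np.exp(-((X - N//2)**2 + (Y - N//2)**2) / 2000.0)
k = 32                                  # carrier: 32 fringes across the field
I = 1.0 + np.cos(2*np.pi*k*X/N + phi)   # interferogram with carrier fringes

S = np.fft.fftshift(np.fft.fft2(I))     # spatial frequency spectrum
# band-pass filter around the +1st-order carrier peak (column N//2 + k)
mask = np.zeros_like(S)
w = 16
mask[:, N//2 + k - w : N//2 + k + w] = 1.0
# shift the selected carrier spectrum to zero frequency and transform back
c = np.fft.ifft2(np.fft.ifftshift(np.roll(S * mask, -k, axis=1)))
phi_rec = np.angle(c)                   # retrieved differential wavefront
```

The phase of the filtered complex amplitude reproduces the differential wavefront, as long as it stays within the +/-π range of the arctangent.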
2.5 Comparison among six kinds of interferometers
Figure 6 shows the comparison result for five kinds of interferometers, namely the PDI, the LDI, the CGLSI, the LSI, and the DLSI. We did not succeed in the wavefront reconstruction of the SLSI because of the non-uniformity of its interferogram. In the LSI, the low-order astigmatism was not correctly measured. In the DLSI, Z5, Z6, Z8 and Z9 show different values from those obtained by the other methods; it seems that the condition for aberration compensation of the illuminator in the DLSI was not completely achieved. The wavefronts obtained by the PDI, the LDI and the CGLSI agree well. Two points are noteworthy. First, the LDI uses two interferograms for wavefront retrieval whereas the PDI and the CGLSI each use one interferogram; the agreement shows that the composition of the two LDI wavefronts succeeded well. Second, the wavefront of the point-diffraction method and the wavefront of the shearing method show good agreement. These three methods are considered to be the candidates to be installed in the EWMS.
Fig. 6. Comparison of Zernike coefficients of six kinds of interferometers.
3 Error factors and calibration methods 3.1 Error factors
Since the wavelength of the EUV is extremely short, noise sources such as shot noise and electronic noise are less significant than in normal interferometers using visible or ultraviolet light. Instead, major errors are induced by the geometrical configuration of the optical components. Figure 7 shows the major error factors of the PDI and the LSI's. The geometrical errors can be calibrated using calculated data based on accurate measurements of the system configuration. However, accurate measurement of the configuration is a hard task; therefore, another calibration method is required. In the following sections, we describe the error factors in EUV wavefront metrology.
Fig. 7. Major error factors in EUV interferometry.
3.1.1 Errors in spherical wavefront emitted from the pinhole
Ideally, the pinholes are expected to eliminate all aberrations contained in the illuminating beam and to generate perfect spherical wavefronts. In practice, the pinhole cannot perfectly eliminate the aberrations. For example, a small amount of astigmatism easily passes through the pinhole. In addition, the pinhole substrate cannot perfectly block the beam outside the pinhole; that is, a weak wavefront carrying the aberrations of the illuminating beam penetrates the pinhole substrate. Although more aberrations can be eliminated by using a smaller pinhole, the light intensity emitted from the pinhole also becomes small compared to the light transmitted through the pinhole substrate. In addition, misalignment of the beam illuminating the pinhole causes deformation of the spherical wavefront, and fluctuations of the illuminating beam cause intensity variations. Therefore, we should carefully control the conditions related to the pinholes. The accuracy of the spherical wavefront generated by pinholes is discussed in other literature.[9-11] 3.1.2 Errors generated in the grating
When a converging or diverging beam is divided by a planar grating, the diffracted beam contains diffraction aberrations. These are caused by the diffraction angle varying with changes in the incident angle. The major diffraction aberration is coma. In addition, astigmatism is induced by grating tilt. Since the grating tilt is difficult to determine, another calibration method is required.
The grating generates not only the required diffraction orders but also unwanted ones. In particular, transmission phase gratings cannot be fabricated in the EUV region. The unwanted diffraction orders act as noise in the interferogram. 3.1.3 Errors induced by the geometrical configuration of the detector
Interference fringes generated by separate point sources are hyperbolically curved. The curved fringes cause coma, which is termed "hyperbolic coma." Similar to the grating, a detector tilt also induces astigmatism. These errors are reviewed in another paper.[11] 3.1.4 Errors due to flare
EUV radiation is scattered much more strongly than visible and ultraviolet radiation because of its extremely short wavelength. The scattered radiation is observed as flare. The flare of one of the interfering beams, including the unwanted diffraction orders, overlaps the other beam as noise. The flare effect is reviewed in another paper as a factor hindering measurements.[12] 3.2 Calibration method
Calibrations are essential for achieving accurate measurements. Therefore, we have been developing calibration techniques continuously. 3.2.1 Absolute PDI
The absolute PDI is a calibrated PDI.[12] Figure 8 shows the principle of the absolute PDI. The absolute PDI uses two measurements. The first measurement is carried out using the standard PDI configuration with a pinhole-window mask (Fig. 8 (a)). The second measurement is carried out with a window-window mask (Fig. 8 (b)) under the same diffraction orders of the grating. The second measurement is used as calibration data for the systematic error of the interferometer. Both measurements have the same systematic errors, including the diffraction aberrations and the geometrical aberrations. Therefore, subtracting the first measurement from the second, the result is
the direct comparison between the wavefront aberration of the test optic and the ideal wavefront diffracted from the small pinhole.
Fig. 8. Principle of the absolute PDI. (D: diffraction aberrations; G: geometrical aberrations; T: wavefront of the test optic.)
3.2.2 Calibrating grating aberrations
For LSI's, calibrating the diffraction aberrations is important because the grating is inserted into a large-NA beam, which makes the aberrations large. Therefore, we developed a calibration method for LSI's.[14] Figure 9 shows the principle of this calibration method, which uses two measurements. One is a shearing measurement using the 0th and +1st order beams; the other uses the -1st and 0th order beams. The second measurement is shifted by the shear amount. After shifting the second measurement, subtracting the differential wavefront generated by the -1st and 0th orders from the differential wavefront generated by the 0th and +1st orders, we obtain the difference between them. The wavefronts of the test optic cancel out, and the diffraction aberrations are derived from this difference.
Fig. 9. Principle of the calibration method for LSI's.
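The cancellation of the test wavefront can be illustrated with a toy 1-D bookkeeping model. The shear, the wavefront shape and the two order-dependent aberration terms below are all hypothetical; the point is only that subtracting the shifted second measurement removes W:

```python
import numpy as np

N, s = 256, 8                          # samples and shear in samples (illustrative)
x = np.linspace(-1.0, 1.0, N)
W  = 0.5 * (3*x**3 - 2*x)              # unknown test-optic wavefront (arbitrary shape)
Dp = 0.04 * x**2                       # diffraction aberration of the +1st order (hypothetical)
Dm = -0.03 * x**2                      # diffraction aberration of the -1st order (hypothetical)

j = np.arange(N - s)
# measurement 1: differential wavefront between the 0th and +1st orders
M1 = W[j + s] - W[j] + Dp[j]
# measurement 2: differential wavefront between the -1st and 0th orders,
# already shifted laterally by the shear amount s
M2 = W[j + s] - W[j] + Dm[j + s]
diff = M1 - M2                         # the test-optic wavefront W cancels out
```

Only the order-dependent diffraction aberrations survive in `diff`, from which they can be derived as described above.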
3.2.3 Removing flare effect
The flare can be removed by utilizing temporal and spatial filtering. Three methods are proposed to cancel the flare effect. The first method is referred to as the "dual-domain" method;[15] it is a hybrid of spatial filtering and phase shifting (temporal-domain filtering). The second method uses a specialized algorithm for the phase-shifting analysis,[4] which utilizes the difference in angular velocity of the phase between the true measurement and the noise; it is applied to the LSI using the +/-1st diffraction orders. The third method uses the FTM and averages over different initial phases.[16]
4 Systematic error evaluation In order to assess the accuracy of our interferometer, we have tried to evaluate a part of its systematic error.[17] By analysing the error, we expect to identify the error sources, which is important for achieving high accuracy. The assessed method is the absolute PDI. 4.1 Evaluating method
This method is based on the absolute measurement.[18] In the absolute measurement, the wavefront of the test optic is measured, whereas in the systematic error evaluation the errors of the measuring system are measured. When the test optic is rotated, the wavefront of the test optic is also rotated, but the errors of the measuring system are not. Therefore, rotating
and non-rotating components can be separated by measurements before and after rotation of the test optic. Figure 11 shows the principle of the systematic error derivation. First, the wavefront of the test optic is measured at the normal orientation. The measured wavefront Wm consists of the wavefront of the test optic and the systematic error Ws of the metrology. Then the test optic is rotated and the wavefront is measured again; the measured wavefront is returned to the original orientation numerically. Subtracting the wavefront measured after rotation (Wm0,α) from that measured before rotation (Wm0,0), we obtain the difference between the systematic errors at the different orientations; the test wavefronts are eliminated. The systematic error is derived from this difference. Note that the rotationally symmetric and the nθ components of the systematic error cannot be obtained, because these components are unchanged by the rotation. The number n is given by n = 2π/α, where α is the rotation angle.
Fig. 11. Systematic error evaluation method. The systematic error is derived from the difference between the measurements before and after rotating the test optic.
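The subtraction in Fig. 11 can be illustrated with a toy azimuthal model in which the test wavefront rotates with the optic while the systematic error stays fixed to the instrument. The harmonic contents chosen below are hypothetical:

```python
import numpy as np

M = 360
t = np.deg2rad(np.arange(M))                 # azimuthal coordinate, 1-degree sampling
T = 0.8*np.cos(2*t) + 0.3*np.sin(3*t)        # test-optic wavefront (hypothetical)
S = 0.1*np.cos(t) + 0.05*np.sin(2*t)         # systematic error (hypothetical)

def measure(alpha_deg):
    """Measured map with the optic rotated by alpha: T rotates, S does not."""
    return np.roll(T, alpha_deg) + S

alpha = 90
W00 = measure(0)                             # Wm0,0
W0a = np.roll(measure(alpha), -alpha)        # Wm0,α, rotated back numerically
d = W00 - W0a                                # the test wavefront cancels
```

What remains is the systematic error minus its rotated copy. Components of S invariant under the rotation (rotationally symmetric terms and, for α = 90°, the 4θ terms) drop out of d and cannot be recovered from this pair, which is why a second rotation angle (120°) is used below.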
With a 90-degree rotation, the rotationally symmetric and 4θ components of the systematic error cannot be obtained. Therefore, we also rotated the optic by 120 degrees. With the 120-degree rotation, the 3θ components cannot be obtained, but the 4θ components can be. We therefore obtained the rotationally asymmetric error by combining the 90-degree and 120-degree rotation measurements. 4.2 Result
We measured the wavefront at four different orientations of the test optic. Figure 12 shows the measured wavefronts. The RMS values range from 1.26 to 1.30 nm. The measured wavefronts show good repeatability.
Fig. 12. Measured wavefronts at four orientations of the test optic: (a) 0°, 1.26 nm RMS; (b) 90°, 1.30 nm RMS; (c) 180°, 1.27 nm RMS; (d) 120°, 1.28 nm RMS.
Figure 13 shows the annular Zernike coefficients of the derived systematic errors. We calculated three systematic errors using three wavefront pairs: 0 and 90 degrees, 90 and 180 degrees, and 0 and 120 degrees. The RMS values of the systematic errors range from 0.075 to 0.086 nm, corresponding to about λ/170. The evaluated systematic errors are quite small compared to the wavefront of the test optic.
Fig. 13. Evaluated systematic error
5 Conclusions We have been developing metrological techniques to achieve 0.1 nm accuracy in evaluating EUV lithographic optics. To select the most suitable methods, six different methods were compared. As a result, we have concluded that the PDI, the LDI and the CGLSI are the most promising candidates to be installed in the EWMS for evaluating EUV lithographic optics. To achieve ultra-high accuracy, we have analysed various error
factors and developed various calibration methods. In order to assess the accuracy of our interferometer, the asymmetric systematic errors were evaluated. The evaluated asymmetric error is less than 0.09 nm RMS, which is small enough for measuring the wavefront of EUV lithographic optics. Interferometry can thus be extended to the extremely short wavelengths of the EUV region while achieving ultra-high accuracy.
Acknowledgement This work was performed under the management of EUVA as a research and development program of NEDO/METI, Japan.
References [1] K. A. Goldberg, et al., Extreme Ultraviolet Lithography, F. Zernike and D. T. Attwood (eds.), (OSA, Washington, D.C.), 134 (1994). [2] T. Hasegawa, et al., Proc. SPIE, 5374, 797 (2004). [3] H. Medecki, et al., Opt. Lett., 21 (19), 1526 (1996). [4] Y. Zhu, et al., Jpn. J. Appl. Phys., 42 (9A), 5844 (2003). [5] Z. Liu, et al., Jpn. J. Appl. Phys., 43 (6B), 3718 (2004). [6] M. Takeda, et al., Appl. Opt., 23 (11), 1760 (1984). [7] M. Hasegawa, et al., Proc. SPIE, 5533, 27 (2004). [8] G. Harbers, et al., Appl. Opt., 35 (31), 6162 (1996). [9] Y. Sekine, et al., J. Vac. Sci. Technol., B22 (1), 104 (2004). [10] K. A. Goldberg, et al., Extreme Ultraviolet Lithography, G. Kubiak and D. Kania (eds.), (OSA, Washington, D.C.), 133 (1996). [11] P. P. Naulleau, et al., Appl. Opt., 38 (35), 7252 (1999). [12] K. Sugisaki, et al., Proc. SPIE, 4688, 695 (2002). [13] Y. Zhu, et al., Proc. SPIE, 5752, to be published. [14] Z. Liu, et al., Proc. SPIE, 5752, to be published. [15] P. P. Naulleau, et al., Appl. Opt., 38 (16), 3523 (1999). [16] S. Kato, et al., Proc. SPIE, 5751, 110 (2005). [17] K. Sugisaki, et al., J. Vac. Sci. Technol., to be submitted. [18] D. Malacara, Optical Shop Testing (Wiley, New York, 1992), 501.
Some similarities and dissimilarities of imaging simulation for optical microscopy and lithography Michael Totzeck Carl Zeiss SMT AG Carl Zeiss Straße 22, 73446 Oberkochen Germany
1 Introduction High-resolution optical methods require numerical simulation as a means to understand and improve the obtained images. A major reason for this is the limited resolution of optical imaging, described by Rayleigh's formula
Δx = N λ / NA    (1)
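Plugging representative numbers into Eq. (1) (the ArF wavelength from the text; the immersion index is an assumed value of about 1.44 for water at 193 nm):

```python
def half_pitch(N, wavelength_nm, NA):
    """Resolution from Eq. (1): Δx = N · λ / NA."""
    return N * wavelength_nm / NA

dry = half_pitch(0.25, 193.0, 1.0)    # dry ArF tool at the physical limit N = 0.25
wet = half_pitch(0.25, 193.0, 1.44)   # water immersion raises NA by n ≈ 1.44 (assumed)
print(round(dry, 2), round(wet, 2))   # 48.25 33.51
```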
The means to increase the optical resolution are straightforward: we can decrease the wavelength, increase the numerical aperture, or decrease the prefactor N. Decreasing the wavelength is possible by using radiation of higher frequency ν with the correspondingly reduced wavelength λ = c0/ν (c0 = velocity of light in vacuum). In the utilization of higher frequencies, optical inspection is very likely to follow the road paved by optical lithography (see Table 1 in the introduction), which proceeded from the Hg i-line at 365 nm to KrF excimer laser radiation at 248 nm and currently to ArF excimer laser radiation at 193 nm. A leap to 13 nm EUV optics is likely to follow. If near-field methods are excluded, the remaining way to increase the numerical aperture beyond 1 is to embed the structure in a material of higher refractive index n. This optical immersion increases the maximum numerical aperture by a factor of n. In optical microscopy, immersion is applied on the object side, mainly in biology, where the object is already situated within a medium. In optical lithography, immersion is applied on the image side: a fluid is put between the last lens and the photoresist [1]. Immersion, however, allows higher relative propagation angles of the
plane waves whose interference composes the image, so that vector effects become important [2]. A decreased constant N in the optical resolution limit can be achieved in optical microscopy by non-linear imaging processes such as two-photon and multi-photon confocal microscopy. Then linear system theory and the derived resolution limits are no longer valid. Because of the non-linearity, though, the image interpretation must be performed with care, and the need for preparation of the objects makes it difficult to apply these methods to technical surfaces. In optical lithography, N is driven to its physical limit of 0.25 by applying proper illumination techniques. Microscopy can enhance N within limits by superresolution techniques, for instance by utilizing model-based imaging, threshold criteria or extrapolation of the spatial frequency spectrum. A low N means that the light field interacts with object-structure sizes that are comparable to the wavelength. Because of this, strong deviations from the geometrical-optics transmission through the object occur frequently. Furthermore, the polarization of the incident field becomes important on both the object and the image side. For numerical simulation, this requires accurate computations based on Maxwell's equations. What is different and what is similar when these are applied to microscopy and lithography is the subject of this paper.
2 Imaging model for microscopy and lithography An optical system satisfying the sine condition can mathematically be represented by spherical entrance and exit pupils centered on the Gaussian reference sphere [3].
Fig. 1. Imaging model
A Fourier-transform relation exists between the object plane and the entrance pupil and between the exit pupil and the image plane. The imaging model for microscopy and lithography is the same; it is sketched in Fig. 1. According to the illumination setting, the illumination pupil is assumed to be filled over a sigma region with an incoherent source (Hopkins effective source). Each point of the illumination pupil radiates a plane wave onto the object. The illuminating plane wave is diffracted by the object, producing an angular spectrum of plane waves, i.e. a number of mutually coherent plane waves that propagate in different directions. Within the numerical aperture (NA) of the object side of the lens, they are captured by the entrance pupil and enter the imaging system. The lens transfers the diffraction orders to the exit pupil with a linear scaling of the pupil coordinate (= inverse magnification). There, each diffraction order generates a plane wave that radiates into the image plane, where it produces the structure image by interference with the plane waves stemming from the other diffraction orders. In terms of the model, the difference between microscopy and lithography lies in the numerical apertures on the image and object sides. Optical microscopy generates a magnified image and therefore has a high NA on the object side. In lithography the image is demagnified (usually by a factor of 4); the high NA is located on the image side, and the NA on the object side is smaller by the demagnification factor (Fig. 2). The consequence of this difference is that different approximations are allowed in microscopy and lithography simulation.
Fig. 2. Comparison of numerical apertures of microscopy and lithography
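The model of Fig. 1 can be reduced to a 1-D coherent sketch: Fourier transform of the object, truncation by the pupil (NA cut-off), inverse transform to the image. The grating period and cut-off frequency below are illustrative, not tied to a particular tool:

```python
import numpy as np

N = 256
pitch = 16                                                # object period in samples (illustrative)
obj = (np.arange(N) % pitch < pitch // 2).astype(float)   # binary line/space object
O = np.fft.fft(obj)                                       # object spectrum (entrance pupil)
f = np.fft.fftfreq(N)                                     # spatial frequency [cycles/sample]
cutoff = 0.09                                             # pupil cut-off (assumed): passes DC and ±1st orders
pupil = (np.abs(f) <= cutoff).astype(float)
field = np.fft.ifft(O * pupil)                            # exit pupil -> image plane
img = np.abs(field)**2                                    # image intensity
```

With the fundamental at 1/16 ≈ 0.063 cycles/sample, only the 0th and ±1st orders pass the pupil, so the square-wave object images as a smooth fringe: exactly the interference of a few plane waves described above.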
Resolution Enhanced Technologies
270
3 Rigorous computation of object transmission necessary For structure widths smaller than approximately three wavelengths, the "Kirchhoff approximation" for the mask transmission, i.e. the geometrical-optics transmission of the incident field through the object structures, is violated. The field distribution at the back of the object deviates significantly from the field expected from geometrical optics. It has to be computed according to "rigorous" theories, for instance the Fourier modal method (RCWA, cf. [4] and papers cited therein) or the finite-difference time-domain method [5]. The diffraction orders in the entrance pupil of the projection lens deviate accordingly. The diffraction efficiency and phase become dependent on the structure geometry and the incident field. For binary dense lines, for instance, we end up with a dependence on refractive index, pitch, thickness and wavelength. Fig. 3 shows as an example the halfpitch dependence of the diffraction efficiencies and the corresponding phases of the 0th and 1st diffraction orders of a binary Cr mask, calculated with the RCWA as implemented in MicroSim [4]. For a halfpitch of 500 nm, which corresponds to a pitch of 1 µm, the efficiencies and the phases for TE and TM polarization agree well. With decreasing pitch an increasing deviation occurs, i.e. in this region the diffraction depends strongly on the polarization. Additionally, the deviations are not equal for the 0th and the 1st diffraction orders. These effects depend strongly on the structure material: it makes a difference whether the object is made of SiO2, Cr or MoSi, for instance. A small deviation of the refractive index from the assumed value, however, is less critical.
Fig. 3. a) Diffraction efficiency and b) phase of the 0th and 1st diffraction order of a binary Cr-grating (110 nm thick) on SiO2 for a perpendicularly incident plane wave of 193 nm wavelength. n(Cr) = 0.8396 + 1.6512i and n(SiO2) = 1.5628 [3]. xx = TE-, yy = TM-polarization.
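For reference, the Kirchhoff (thin-mask) description that these rigorous methods replace is a one-line Fourier series. The sketch below (scalar model, hypothetical helper name) reproduces the classic amplitude-grating values for a 1:1 opaque line/space mask: 25% in the 0th and 10.1% in each 1st order.

```python
import numpy as np

def thin_mask_orders(duty=0.5, t_line=0.0, t_space=1.0, m_max=3):
    """Fourier coefficients c_m of an ideal binary line/space mask
    (Kirchhoff approximation). `duty` is the line fraction of the period."""
    orders = {}
    for m in range(-m_max, m_max + 1):
        if m == 0:
            orders[m] = t_space * (1 - duty) + t_line * duty
        else:
            # c_m of a rect of width (1-duty)*p and height (t_space - t_line)
            orders[m] = (t_space - t_line) * np.sin(np.pi * m * (1 - duty)) / (np.pi * m)
    return orders

c = thin_mask_orders()   # 1:1 opaque mask
```

In this approximation the order amplitudes are independent of polarization and incidence angle, which is exactly what breaks down below roughly three wavelengths.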
4 Applicability of rigorous approximations

The polarization dependence of the diffraction process discussed in section 3 would require a rigorous computation for microstructures of less than three wavelengths in width. But rigorous simulations are very time-consuming, so one may wish to apply approximations. Two of these are: 1) the "Hopkins approximation", assuming that amplitude and phase of the diffraction orders do not change with the angle of incidence, and 2) the "constant polarization effect" approximation, assuming that the polarization effect is the same for all diffraction orders. In the following subsection the applicability of the Hopkins approximation to microscopy and lithography simulation is considered for a simple example system.

4.1 Hopkins approximation
Within the Hopkins approximation it is sufficient to compute the rigorous diffraction for perpendicular incidence only. The shift theorem of Fourier theory is applied to get the correct phase for other angles of incidence while the amplitude values are unchanged. The Hopkins approximation is more likely to be applicable to lithography simulation because there the angular range of incidence
θ_max = arcsin(NA_illumination)    (2)
is considerably smaller than for high-resolution microscopy. For an NA = 1 lithography scanner the maximum angle of incidence would be θ_max = arcsin(0.25) = 14.5°. In contrast to lithography, high illumination angles occur in optical microscopy: for confocal microscopy at NA = 0.9, for instance, the maximum angle of incidence is 64.2°. Fig. 4a shows the dependence of the diffraction efficiency (0th and 1st order) of a binary Cr-grating on the input order, which is the sine of the angle of incidence. The pitch of the grating is 600 nm. The two vertical lines denote the 0.25 limit corresponding to optical lithography at NA = 1. Within that interval the maximum error of the Hopkins approximation is < 3%. Outside the interval the error increases rapidly. This observation is supported by the dependence of the phase (Fig. 4b). So the Hopkins approximation is certainly better suited for lithography than for microscopy simulation. The final decision whether it may be applied should depend on the required accuracy.
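The quoted maximum angles of incidence follow directly from the illumination-side NA; a two-line check:

```python
import numpy as np

# NA = 1 scanner with 4x demagnification: object-side illumination NA = 0.25
theta_litho = np.degrees(np.arcsin(1.0 / 4))
# confocal microscopy at NA = 0.9
theta_micro = np.degrees(np.arcsin(0.9))
# theta_litho is about 14.5 deg, theta_micro about 64.2 deg
```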
Fig. 4. a) Diffraction efficiency and b) phase of the 0th and 1st diffraction order of a binary Cr-grating of 600 nm pitch on SiO2 for incident plane waves of various angles of incidence (arcsin(input diffraction order)).
5 Vector effect of polarized interference

The vector effect denotes the dependence of the image contrast on the polarization of the interfering waves. Consider the interference pattern of two monochromatic electromagnetic waves E1(r), E2(r), i.e.

I(r) = |E1(r) + E2(r)|² = I0 [1 + γ cos((2π/p)·x + Δφ)]    (3)

It is governed in each point by the projection of one polarization state onto the other, because both the contrast γ and the phase difference Δφ (indicating the pattern shift Δx = p·Δφ/2π) are determined by the inner product of E1 and E2:

γ = 2|E1·E2*| / (|E1|² + |E2|²)   and   Δφ = arg(E1·E2*)    (4)

The resulting dependence of the contrast on the enclosed angle θ for TE and TM polarization is

γ = 1 (TE polarization),   γ = cos θ (TM polarization)    (5)

The angle θ is taken in the medium of interference.
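Eq. (5) and the two limiting cases discussed in this section can be evaluated in a few lines (the function name is ours; the parameter values, an NA = 1 immersion scanner in resist of index 1.75 and a 0.9 NA / 100x microscope objective, are taken from the text):

```python
import numpy as np

def contrast(theta_deg, polarization):
    """Two-beam interference contrast vs. enclosed angle theta, Eq. (5)."""
    if polarization == "TE":
        return 1.0
    return float(np.cos(np.radians(theta_deg)))   # TM: signed contrast

# Maximum enclosed angle = twice the half-angle inside the medium:
theta_litho = 2 * np.degrees(np.arcsin(1.0 / 1.75))   # scanner in resist, ~70 deg
theta_micro = 2 * np.degrees(np.arcsin(0.9 / 100))    # microscope image side, ~1 deg
```

The signed TM contrast makes the contrast reversal beyond θ = 90° explicit.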
Fig. 5. Interference contrast depending on the enclosed angle θ for TE and TM polarization. Insets: field orientations for TE (s-) and TM (p-) polarization; the TM contrast reverses sign beyond θ = 90°.
The polarization-dependent contrast is depicted in Fig. 5. For TE polarization the electric field is perpendicular to the plane enclosed by both propagation vectors; in TM polarization it lies within that plane (insets in Fig. 5). The TE-polarized contrast is independent of the enclosed angle, while the TM-polarized one drops to zero at θ = 90°. For larger enclosed angles the contrast increases again, but with reversed sign, meaning that bright and dark regions are exchanged. The vector effect is negligible in high-resolution microscopy because of the low numerical aperture on the image side: a 0.9/100× image results in a maximum enclosed angle of ≈1°. The large enclosed angles on the object side do not contribute a vector effect; it is the final interference producing the image that counts. And there the situation in lithography is different: large enclosed angles do occur in the image, resulting in contrast degradation for improperly polarized light. An NA = 1 scanner together with a resist refractive index of 1.75 results in a maximum enclosed angle of 70°.

5.1 Conclusions
The target of this report was to summarize the similarities and dissimilarities of the numerical simulation of high-resolution optical microscopy and optical lithography.
- Because of the higher numerical apertures on the object side of microscopy, for both detection and illumination, rigorous methods for the computation of object transmission are even more critical for microscopy than for lithography. This has practical consequences for the applicability of the Hopkins approximation, which is feasible for some mask structures in lithography but unfeasible for high-NA illumination microscopy.
- On the image side, the vector effect can be neglected for microscopy, but it has to be taken into account for lithography, where it degrades the TM-polarized contrast.
References
1. B. J. Lin, "Immersion lithography and its impact on semiconductor manufacturing", JM3 3 (2004) 377-395.
2. D. G. Flagello, S. Hansen, B. Geh, M. Totzeck, "Challenges with hyper-NA (NA > 1.0) polarized light lithography for sub-λ/4 resolution", Proc. SPIE 5754 (2005).
3. H. H. Hopkins, "On the diffraction theory of optical images", Proc. Roy. Soc. (London) A 217 (1953) 408-432.
4. M. Totzeck, "Numerical simulation of high-NA quantitative polarization microscopy and corresponding near-fields", Optik 112 (2001) 399-406.
5. A. Taflove, The Finite-Difference Time-Domain Method (Artech, Boston, 1995).
A Ronchi-Shearing Interferometer for compaction tests at a wavelength of 193 nm

Irina Harder, Johannes Schwider, Norbert Lindlein
Institute of Optics, Information and Photonics, University Erlangen-Nuernberg
Staudtstr. 7/B2, 91058 Erlangen, Germany
email: [email protected]
1 Introduction

A common problem in UV lithography systems is the deterioration of the optical components made of fused silica due to long-term exposure to high-energy radiation. One effect is colour centre formation; another is the compaction or rarefaction of the fused silica [1]. Compaction or rarefaction leads on the one hand to stress-induced birefringence, as the structure of the fused silica is changed; on the other hand the refractive index is changed due to the change in density. Although the change of the refractive index and the stress-induced birefringence are very weak effects, they nevertheless contribute to the aberrations of a lithographic objective, since long path lengths in fused silica are quite common. Long path lengths also have to be used in the measurement in order to achieve measurable aberrations. Therefore polished cubes of fused silica with a length of several hundred mm are used for the examination of this effect. The cubes are exposed for several months to a UV-laser beam of 0.2 mJ/cm²-10 mJ/cm² [2]. This leads to a deteriorated volume along the beam path with a diameter of about 2 mm-3 mm. Several methods are used to measure the effect. The in situ measurements mainly analyze the laser-induced fluorescence [3], the Raman scattering [3], the absorption of the exposure beam [4], or the changes of its polarization [1]. After exposure, the stress-induced birefringence [2] or the wave front distortion due to the compaction or expansion of the material [5] are analyzed in a setup which is normally based on a HeNe laser; the measuring wavelength is therefore λ = 633 nm. The measured wave front distortions are between 1 nm/cm and 8 nm/cm for this wavelength. However, the interesting wavelength domain for the measurement remains the DUV, because the working wavelengths are situated there. We set up a Ronchi shearing interferometer with a working wavelength of λ = 193 nm for compaction measurement. The interferometer is based on an ArF excimer laser, although this is a light source with poor spatial coherence. For a first test of the setup, small samples of fused silica with a diameter of 1" and a thickness of 5 mm are used. The samples were structured by reactive ion etching so that a flat step on the surface in the centre of the sample is achieved. The step has a diameter of 2 mm and the shape of a yin-yang sign.
2 Shearing interferometry with Ronchi phase gratings

2.1 Lateral shearing with Ronchi phase gratings

Lateral shearing interferometry uses two copies of a wave front which are laterally sheared with respect to one another. Interference fringes are only observed in the overlap region of the two shifted copies. The measured wave front ΔW is then the difference of the two sheared wave front copies W(x+½Δs, y) and W(x−½Δs, y), if the shear Δs was applied in the x-direction. For compaction measurement one has to deal with local disturbances on an undisturbed area, so a total separation of the two sheared disturbances can be achieved. In this case the wave front error can be seen directly in the unwrapped image of the measured wave front, with both positive and negative sign with respect to the undisturbed wave front. Several setups are known to double the wave front and apply the shear [6]. The setup used in this paper is based on two identical Ronchi phase gratings as the shearing unit. The first grating acts as a diffractive beam splitter. As a Ronchi grating is defined to have a duty cycle of 1:1, all even orders are suppressed; apart from the zeroth order, only the two first diffraction orders carry significant intensity. In case of a pure (opaque) Ronchi grating the diffraction efficiency of each of the two first orders is 10.1%. The efficiency and the total intensity are increased by using a phase-only Ronchi grating. If the height of the phase structure is chosen to be λ/[2(n−1)], where n is the refractive index of the substrate material, then the zeroth order is also suppressed. The efficiencies of the remaining orders of a phase-only Ronchi grating are shown in Tab. 1. If the two first diffraction orders are used as the two wave front copies, a common-path interferometer with symmetric paths for the two interfering beams is achieved, as long as the impinging wave is mostly perpendicular to the plane of the first Ronchi grating. An identical second Ronchi grating is placed parallel behind the first grating and parallelizes the two first orders [7] by introducing similar diffraction angles. The spacing between the two Ronchi gratings results in a lateral displacement of the two first orders. The shear is given by the distance a between the gratings and the diffraction angle α:
Δs = 2a·tan(α) ≈ 2aλ/p ,    (1)

where p is the period of the grating.

Table 1. Efficiency of a Ronchi phase grating

Order:      0th    1st     2nd    3rd    4th    5th
Efficiency: 0.00%  40.53%  0.00%  4.50%  0.00%  1.62%
2.2 Reduced coherence
Simple evaluation procedures of shearing interferograms require a total separation of the local features on a flat background. The common exposure setup for compaction tests produces disturbances with a size of some mm. To achieve a total separation of the disturbance on the two wave front copies, shears of several mm with respect to the object plane are needed. This places high requirements on the coherence of the light source. In comparison to a laser light source like the HeNe laser, which is normally used for interferometric measurements, the excimer laser has both poor temporal and poor spatial coherence. The bandwidth is specified by the manufacturer to be 0.44 nm at a wavelength of 193 nm, which results in a coherence length of 85 µm. As this shearing setup is a totally symmetric common-path interferometer for perpendicularly impinging waves, the temporal coherence has no effect on the visibility of the interference fringes. In addition, the excimer laser emits a spatial multi-mode pattern due to the short dwelling time in the laser resonator. The resulting reduced spatial coherence limits the maximum possible shear, as the distance between two interfering wave front points has to be smaller than the coherence area of the light source. The coherence area is defined by the first zero crossing of the absolute value of the complex coherence factor |μ12(Δs)|, which is a cross-correlation of two interfering wave front points 1 and 2 with a spacing of Δs. The distribution of the complex coherence factor in a specified plane can be calculated from the intensity distribution of the light source by using the Van Cittert-Zernike theorem. For a simple rectangular aperture with a diameter of d = 1 mm the complex coherence factor is a sinc function with a single narrow central peak. Its width results in a maximum possible shear of Δs = 0.306 mm if a collimation lens with a focal length of f = 650 mm is used. Obviously this shear is too small to achieve a total separation of the disturbances which we want to measure.

2.3 Use of a periodic light source

The Van Cittert-Zernike theorem can be used to tailor the spatial coherence, so that larger shears become possible. If a periodic intensity distribution is introduced as light source instead of a homogeneous luminous area, the complex coherence factor will also show several peaks, similar to the diffraction image of a grating. This implies that the contrast of the interference fringes will return if the shear corresponds to the spacing of the first two orders of the complex coherence factor. In [8] the concept of using a periodic light source for a white-light shearing interferometer was introduced; it was systematically discussed in [9]. The tolerance for the shear is again given by the width of the peaks and therefore by the number of luminous periods. An easy way to realize a periodic light source is to illuminate an opaque grating with incoherent light. As the duty cycle of the grating influences the height of the peaks of the complex coherence factor, the contrast can be increased by narrowing the slits of the grating. Unfortunately, this also reduces the total intensity, so one has to compromise between the desired contrast and the required intensity. We use a chromium grating on a fused silica substrate with a duty cycle of 1:4. This leads to a theoretically possible contrast of 87% and a reduced intensity of 0.2 I0. The period of the grating is p = 50 µm, so the contrast will return for a shear of Δs = 5.02 mm in the plane of the sample under test (again with the collimation lens of f = 650 mm). In addition to enabling large shears, disturbing interference fringes are suppressed: only those which meet the condition of an optical path difference of Δs remain visible, so disturbances in the interference image are reduced.
3 Experimental setup

In the experimental setup the remaining spatial coherence of the laser has to be destroyed to achieve an incoherent illumination of the periodic light source (see Fig. 1). Therefore a defocused spot on a rotating scatterer is imaged onto the light source grating. A single pulse of the laser delivers a practically stationary speckle pattern, but successive pulses produce differing speckle patterns; in the measuring process an average over 100 pulses is therefore used. The slits of the light source grating must be parallel to the grooves of the two shearing gratings to achieve maximum contrast, so it must be possible to rotate and tilt the light source grating.
Fig. 1. Experimental setup
The light is collimated by a fused silica lens with a focal length of f = 650 mm. The sample is then imaged by a telescopic setup onto the CCD (Quantix 1401E, Photometrics). The scaling is 1:5, as the CCD chip has a size of 7 mm × 9 mm. The lateral resolution is limited by the pixel size of the CCD, which is 6.8 µm. As the DUV region suffers from a lack of optical materials, mainly fused silica and CaF2 are used for the fabrication of optical components like lenses. To reduce the aberrations, the collimation lens and the imaging lenses in this setup are best-form lenses made of fused silica; objectives made of several lenses would be too expensive in this case. The shearing unit consists of two Ronchi phase gratings made of fused silica with a period of 5 µm and a surface structure height of 170 nm, placed in front of the first imaging lens. The theoretical height is 172 nm for a wavelength of λ = 193 nm. Due to errors during development of the structure, the duty cycle is not 1:1 over the whole grating. This leads to a small remaining second order, which is filtered by the aperture of the second imaging lens like the other higher orders. The diffraction angle is 2.2°.
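With these numbers (λ = 193 nm, p = 5 µm) and the grating spacing a = 65 mm quoted for the required shear, Eq. (1) can be checked directly:

```python
import numpy as np

wavelength, period, a = 193e-9, 5e-6, 65e-3   # values from this setup
alpha = np.arcsin(wavelength / period)        # exact diffraction angle, ~2.2 deg
shear = 2 * a * np.tan(alpha)                 # Eq. (1), ~5.0 mm
```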
So the grating spacing must be 65 mm to achieve a shear of Δs = 5.06 mm. For phase-shifting purposes the first grating is mounted on a piezo which moves perpendicular to the grooves of the grating. The wave front under test is measured by five-frame phase-shifting interferometry [10]. It is well known that statistical error sources like air turbulence or fluctuations of the intensity during the measurement cause errors which cannot be eliminated by software afterwards. To avoid fluctuations in the phase due to air turbulence, the whole setup is shielded. Statistical intensity fluctuations cannot be eliminated in such an easy way; they originate mainly from the excimer laser and partly from inhomogeneities of the rotating scatterer. The shearing interferogram offers the possibility to eliminate fluctuations of the total intensity, as the impinging intensity can be monitored simultaneously with the measurement: the shifted areas where no interference occurs are also captured during the measurement (Fig. 2). Afterwards, every measured intensity frame is normalized by the sum of the intensity in the monitoring area with respect to the first frame.
Fig. 2. CCD image of the sheared wave front of the sample under test
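The normalization step described above can be sketched as follows (the function name and array layout are our assumptions): each frame is scaled by the summed intensity in a monitor region where the sheared beams do not overlap, referenced to the first frame.

```python
import numpy as np

def normalize_frames(frames, monitor_mask):
    """Normalize phase-shifted frames (shape: n_frames x H x W) by the
    summed intensity inside `monitor_mask`, relative to the first frame."""
    frames = np.asarray(frames, dtype=float)
    monitor = frames[:, monitor_mask].sum(axis=1)   # pulse energy per frame
    return frames * (monitor[0] / monitor)[:, None, None]
```

This removes global pulse-to-pulse intensity fluctuations before the five-frame phase evaluation.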
4 First results

In Fig. 3a the unwrapped phase of the structured fused silica sample is shown. The desired phase step nearly vanishes due to the wave front errors of the sample itself and the optical setup. To achieve a useful sensitivity for small steps, those errors have to be reduced. First of all, a linear polynomial was fitted to the phase to eliminate the tilt of the measurement (Fig. 3b).
To get rid of the unwanted wave front errors, the sample under test was already measured before the structuring. In Fig. 3c the unwrapped phase of the empty sample is shown, corrected by subtracting a fitted linear polynomial. This measurement was then subtracted from the tilt-corrected measurement of the structured sample (Fig. 3b). The resulting phase profile is shown in Fig. 3d. The background is now smaller than the desired phase steps if the repositioning of the sample after structuring is done well. The repositioning must be laterally accurate to better than one pixel on the CCD chip, which in our case means 34 µm in the sample plane. The double optical path difference of the disturbed area with respect to the undisturbed area can now be measured by determining the difference between the two sheared disturbances. For this sample we measured a height of 2·OPD = 0.13λ, which corresponds to a step height of 22 nm. Unfortunately the setup still suffers from instabilities of the light source, which can be seen from the statistical values of Fig. 3d. Some effort is still required to stabilize the setup and so increase the certainty of the measurement.
Fig. 3. First measurement of a small phase step (a); a linear polynomial was fitted (b) and the measurement of the empty sample before structuring (c) was subtracted (d). The resulting phase difference between the sheared phase step images was 2·OPD = 0.13λ
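The tilt-removal step used in Fig. 3b and 3c can be sketched as a least-squares fit of a linear polynomial a + b·x + c·y to the unwrapped phase map (function name and implementation are ours):

```python
import numpy as np

def remove_tilt(phase):
    """Fit and subtract a linear polynomial a + b*x + c*y (offset and tilt)
    from a 2-D unwrapped phase map."""
    h, w = phase.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(phase.size), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return phase - (A @ coeffs).reshape(h, w)
```

The same routine applied to both the structured and the empty-sample measurement makes the subsequent subtraction insensitive to measurement tilt.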
5 References
1. Schenker R. et al., (1994) Deep ultraviolet damage of fused silica, J. Vac. Sci. Technol. B 12(6): 3275-3279
2. Liberman V. et al., (1999) Excimer-laser-induced densification of fused silica: laser-fluence and material-grade effects on scaling law, J. Non-Crystalline Solids 244: 159-171
3. Muehling Ch. et al., (2000) In situ diagnostics of pulse laser-induced defects in DUV transparent fused silica glasses, Nucl. Instr. and Meth. in Phys. Res. B 166/167: 698-703
4. Stafast H. et al., (2004) Vakuum-UV-Spektroskopie an synthetischem Quarzglas unter UV-Pulslaserbestrahlung [Vacuum-UV spectroscopy of synthetic fused silica under UV pulsed-laser irradiation], DGaO-Proceedings 2004
5. Kuehn B. et al., (2003) Compaction vs. expansion behaviour related to the OH-content of synthetic fused silica under prolonged UV-laser irradiation, J. Non-Crystalline Solids 330: 23-32
6. Malacara D., (1978) Optical Shop Testing, Wiley & Sons, New York, 105-148
7. Schreiber H. et al., (1997) A lateral shearing interferometer based on two Ronchi gratings in series, Appl. Opt. 36(22): 5321
8. Schwider J., (1984) Continuous lateral shearing interferometer, Appl. Opt. 23(23): 4403-4409
9. Wyant J.C., (1974) White light extended source shearing interferometer, Appl. Opt. 13: 200
10. Robinson D., Reid G. T., (1993) Interferogram Analysis, IOP Publishing, Bristol/Philadelphia, 94-140
Simulation and error budget for high precision interferometry

Johannes Zellner, Bernd Dörband, Heiko Feldmann
Carl Zeiss SMT AG, Carl-Zeiss-Str. 22, 73447 Oberkochen, Germany
1 Introduction

In interferometric measurements the total wavefront includes both the wavefront from the sample and the wavefront due to the experimental setup. The latter can be removed from the total wavefront by first performing a calibration without the sample. The desired wavefront of the sample is then obtained by subtracting the calibration wavefront from the total wavefront of the measurement. The drawback of this method is that it does not cope with drift errors, i.e. errors due to small changes in the experimental setup between calibration and measurement. We will show that these drift errors can be calculated by a computer simulation which takes the real experimental setup into account.
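The calibration scheme and the drift error it cannot remove can be written down schematically (a sketch; the names are ours):

```python
import numpy as np

def calibrated_wavefront(w_measurement, w_calibration):
    """Remove the setup contribution recorded in the calibration."""
    return w_measurement - w_calibration

# A change of the setup wavefront between calibration and measurement
# survives the subtraction: this residual is the drift error.
def drift_error(w_setup_drifted, w_setup_at_calibration):
    return w_setup_drifted - w_setup_at_calibration
```

If the setup does not change, the drift error vanishes and the calibration recovers the sample wavefront exactly.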
2 Targets

The drift errors depend both on the magnitude of the drift and on the experimental setup, including the wavefront contributions of the interferometer components. These contributions comprise the design wavefront of each component and the wavefront contributions due to manufacturing errors. Setting up an error budget for the individual interferometer components is one of the main targets of the interferometer simulation. This way, critical components can be spotted easily and their tolerance budgets set up accurately. On the other hand, the interferometer simulation helps relax tolerances for non-critical components and therefore helps to save production costs. As the drift errors depend both on the wavefront contributions of the individual interferometer components and on the drifts, another target of the interferometer simulation is to set up an error budget for the acceptable drifts. Last but not least, as the interferometer simulation determines the measurement errors for a given interferometric setup including all real-world disturbances such as manufacturing errors and drift errors, it allows the precision of the total setup to be qualified.
3 Simulation of the experimental setup

For the sake of generality, our computational tool allows in principle the simulation of arbitrary interferometric setups. For the following considerations, we use a Fizeau-interferometric setup which consists of:
- a point-like illuminating source
- a tilted beam-splitter
- a collimator
- a tilted Fizeau plate for generating the reference wave
- a sample (for the simulations, e.g. a flat mirror)
- an "eyepiece"
- a CCD camera which records the interferogram

Fig. 1. Sketch of a simple Fizeau interferometer as used for the simulations (not to scale). The rays of the reference wave (reflected from the Fizeau surface) are drawn in light grey. The angle of the rays of the reference wave is greatly exaggerated.
Generally, there will be more components than those shown in Fig. 1 (like mirrors for example), which were left out for the sake of simplicity.
3.1 Wavefronts propagating differently through the system
The reference wave is reflected by the Fizeau surface, the back side of the Fizeau plate. The sample wave is reflected by the sample. Both waves interfere at the CCD camera. As we want to record the interference at the position of the sample, the sample has to be focused onto the CCD camera by moving the CCD camera to the conjugate plane of the sample. The focus position depends on the cavity length, i.e. the distance between the Fizeau plane and the sample. As can be seen from Fig. 1, different parts of the illuminating wave are used to build the reference and the sample waves. The "shearing" of the waves (e.g. at the place of the Fizeau surface) depends on the cavity length. Therefore both waves propagate differently through the interferometer, seeing different parts of each component.

3.2 Wavefronts and interferograms
The source aperture has to be large enough to illuminate both waves completely; in fact it has to be a little larger than the apertures propagated by the two waves through the system.

Fig. 2. Sample wavefront (left) and reference wavefront (right) relative to the source wave, which is indicated by the surrounding circle. The greyscale is arbitrary. The part of the reference wave which interferes at the CCD camera is sheared relative to the illuminating wave and the sample wave diagonally to the top left because of the diagonally tilted Fizeau surface. Both waves show significant coma due to the tilted beam splitter. (Wavefronts are drawn with constant offsets and tilts removed.)
The resulting interferogram is obtained as the difference between reference and sample wave. In the case of both waves being dominated by coma as above, the interferogram will show significant astigmatism and focus as the shearing of the two wavefronts performs a differentiation of the wavefront.
4 Drifts

The interferogram measured as shown above contains all features of the interferometric setup, including design errors and manufacturing errors of the individual components. To calibrate the interferometer, an interferogram can be recorded with a calibrating mirror instead of a real sample; this interferogram can be subtracted later from the measurements. This way, the wavefront contributions of the interferometric setup can be removed from the total interferogram, allowing the desired wavefront contributions of the sample to be separated. In practice, calibration and measurement are subject to small drifts: small changes in the interferometric setup due to environmental influences. These can be small changes in air pressure or temperature as well as small motions of the interferometer components. Such drifts are not covered by the calibration and lead to measurement errors. The sensitivity to drifts increases with increasing wavefront contributions by the interferometer. In other words: if the interferometer were an ideal one, having no wavefront contributions itself, it would not be sensitive to any drifts.
5 How a simulation can help

Drift errors are influenced both by the wavefront contributions of the interferometer and by the magnitude and type of the drifts. Keeping drift errors small can be achieved by reducing the wavefront contributions of the interferometer components and by reducing the magnitude of the possible drifts. Reduction of the wavefront contributions of the interferometer components is limited by the design wavefront of the interferometer, i.e. the wavefront which is already present without any manufacturing errors.

5.1 Precision of the interferometer

Given different types of expectable drifts, a simulation of the setup allows the calculation of the resulting drift errors. These drift errors, which are obtained without taking any manufacturing errors into account, limit the measurement accuracy of the interferometer. Therefore the simulation allows calculating the limiting accuracy of a given interferometric setup.
5.2 Drift errors due to manufacturing errors
Manufacturing errors of the interferometer components usually lead to an increased wavefront contribution of the interferometer, which in turn leads to increased drift errors. Given the expected drifts and the desired measurement accuracy of the interferometer, the simulation allows calculating the tolerances of the individual interferometer components precisely, based on the real experimental setup. For the simulation, manufacturing errors can be applied, for example, as aspheric deformations on the surfaces of interest. The simulation will therefore locate critical components with tight tolerances. On the other hand, it will locate non-critical components with relaxed tolerances and therefore helps to reduce production costs.

5.3 Reducing the drift

Reducing the magnitudes of the drifts themselves is the most obvious way to reduce the drift errors. The simulation allows the critical drifts to be spotted and their maximum allowable magnitudes to be calculated. On the other hand, the simulation helps to spot non-critical drifts, thereby relaxing the specification for their maximum allowable magnitude.
6 Examples

For the following examples we consider a typical Fizeau interferometer as shown in Fig. 1. The phase shift between the reference and the sample wave is achieved by tilting the Fizeau plate. The speed of the system (speed of the collimator and eyepiece) was 5.6.

6.1 Errors due to a specific drift
As a specific drift we consider an additional tilt of the Fizeau plate of 0.0023 deg about the axis perpendicular to the plane shown in Fig. 1. This tilt was chosen to add 10 additional fringes in the interferogram at the CCD plane. The measurement error due to this drift can be calculated by computing the interferogram of the drifted state and then subtracting the calibration, i.e. the interferogram of the undrifted state. The result is shown in Fig. 3. Note that this error is already present in the absence of any manufacturing errors. Therefore, if the given tilt of the Fizeau plate (10 additional fringes at the CCD) were an expectable drift, the wavefront error shown in Fig. 3 would be the limiting precision of the interferometer setup.
Fig. 3. Measurement errors in nm due to an additional drift of the Fizeau plate of 10 fringes (at the plane of the CCD). Tilts and constant offsets are removed from the plot.
6.2 Error contributions due to assumed manufacturing errors
Fig. 4. Total measurement error in nm due to an additional drift of the Fizeau plate of 10 fringes (at the CCD plane), given a spherical manufacturing error of 1 λ at the first surface of the eyepiece. This measurement error contains both the contributions of the interferometer setup according to the optical design and the contributions of the manufacturing error. Tilts and offsets are removed from the plot.
As a specific manufacturing error, we consider a spherical surface deformation of 1 λ at the first surface of the eyepiece, in combination with the drift of the Fizeau plate as given above. The measurement error is again calculated by subtracting the calibration from the drifted interferogram. The result is shown in Fig. 4. To separate the contributions of the manufacturing error to the measurement error from the contributions of the “ideal” interferometer (according to the optical design), the wavefront shown in Fig. 3 can be subtracted from the total wavefront shown in Fig. 4. The resulting pure contribution of the manufacturing error to the measurement error is shown in Fig. 5:
Fig. 5. Contribution of the spherical manufacturing error of 1 λ on the first surface of the eyepiece to the measurement error (in nm) if the drift of the Fizeau plate is 10 fringes (at the CCD plane) between calibration and measurement.
6.3 Error contributions due to known manufacturing errors
If the manufacturing error of a specific interferometer component is known, e.g. from a measurement of the wavefront contributions of this component, the resulting measurement error can be calculated by applying the known manufacturing error to the corresponding component and carrying out the calculations as shown above. Fig. 6 shows a measured wavefront of a real eyepiece and the contributions to the measurement error due to the measured wavefront in combination with an assumed drift of the Fizeau plate as given above.
Fig. 6. Left: measured wavefront of an eyepiece in nm. The wavefront shows significant coma due to manufacturing errors (e.g. lens element decentration). Right: contributions of the wavefront on the left to the measurement error, assuming a drift of the Fizeau plate of 10 fringes at the CCD plane.
7 Results

The simulation was carried out for special interferometer setups used at Carl Zeiss SMT AG. Different drifts, such as motion and tilt of all relevant interferometer components, were investigated, as well as manufacturing errors on all interferometer components. The “eyepiece” turned out to be a critical component regarding manufacturing tolerances. Mirrors and the reflecting Fizeau surface also turned out to have tight surface tolerances. On the other hand, and not surprisingly, an axial shift of the source or the CCD camera turned out to have little impact on the resulting drift errors.
8 Conclusions

An accurate calculation of the error budget of individual interferometer components can only be based on a simulation of the complete interferometric setup. We have shown that such a simulation can in fact make accurate predictions of the achievable measurement precision. Furthermore, a simulation allows calculating the maximum allowable drift magnitudes. Using a simulation of the interferometric setup therefore allows focusing on critical components and drifts on the one hand, and relaxing requirements for non-critical components and drifts on the other.
Progress on the Wide-Scale Nano-positioning and Nanomeasuring Machine by Integration of Optical Nanoprobes
Gerd Jäger, Tino Hausotte, Eberhard Manske, Hans-Joachim Büchner, Rostyslav Mastylo, Natalja Dorozhovets, Roland Füßl, Rainer Grünwald
Technische Universität Ilmenau, Institut für Prozessmess- und Sensortechnik, PF 100 565, 98684 Ilmenau, Germany
Abstract

The paper describes the operation of a high-precision wide-scale three-dimensional nanopositioning and nanomeasuring machine (NPM-Machine) having a resolution of 0.1 nm over the positioning and measuring range of 25 mm x 25 mm x 5 mm. The NPM-Machine has been developed by the Technische Universität Ilmenau and manufactured by SIOS Meßtechnik GmbH, Ilmenau. The machines are operating successfully in several German and foreign research institutes, including the Physikalisch-Technische Bundesanstalt (PTB). The integration of several optical and tactile probe systems and scanning force microscopes makes the NPM-Machine suitable for various tasks, such as large-area scanning probe microscopy, mask and wafer inspection and circuit testing, as well as measuring optical and mechanical precision workpieces such as micro lens arrays, concave lenses and mm-step height standards.
1 Introduction

If one believes the prediction made by Gordon Moore, according to which the number of transistors on a chip doubles every two years, more
than one thousand million transistors will have to be realized per chip in the year 2010. As a consequence, 45-nm structures will have to be implemented. However, the technological development will not have reached its ultimate destination yet. The path beyond is outlined by the “Technology Roadmap for Semiconductors 2003”, which predicts that around the year 2016, 22-nm structures will have to be realized. In the face of these enormous technological objectives, high requirements will also have to be fulfilled by nanometrology and by nanomeasuring and nanopositioning techniques. Thus, scanning probe microscopes scanning over large areas are required for mask and wafer inspection and also for the testing of ICs, with those microscopes being suited to industrial applications, too. In addition, nanomeasuring and nanopositioning devices are necessary for positioning and measuring to within a nanometre of, for example, nanosurface and nanostructure standards and mechanical and optical high-precision parts, as well as for material analysis. A nanopositioning and nanomeasuring machine (NPM machine) providing a relatively large positioning and measuring range of 25 mm x 25 mm x 5 mm and a resolution of 0.1 nm has been developed at the Institute of Process Measurement and Sensor Technology of the TU Ilmenau. The structure and operating principle of this machine, the integration of different sensor systems, and the measurements performed are explained.
2 Design and operation of the NPM-Machine

The vision is the development of highly capable, reliable nanopositioning and nanomeasuring instruments (NPM-Machines) with sub-nanometre resolution across large ranges. Our NPM-Machines /1, 2, 3/ consist of the following main components:
- traceable linear and angular measurement instruments with high resolution and accuracy
- 3D nanopositioning stages (bearings, drives)
- nanoprobes (AFM, focus sensor) suitable for integration into the NPM-Machine
- control equipment.
First of all, a new concept is required to assemble the main components in order to achieve uncertainties as small as possible.
2.1 Traceable linear and angular sensors
Fig. 1. Plane mirror interferometers
Fig. 1 shows the difference between a state-of-the-art plane mirror interferometer (left) and the interferometer developed by our institute (right) /4/. The main advantage of our plane mirror interferometer is that it has only one measuring beam. This is important for compliance with the Abbe comparator principle on all three measurement axes. The interferometer on the left needs two beams to achieve tilt invariance over a small angular range of the moving plane mirror.
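The practical weight of the Abbe criterion can be illustrated with rough numbers (our illustration; the offset and tilt values are assumed, not taken from the paper): a guide tilt couples any Abbe offset into a first-order length error, whereas an offset-free axis suffers only a second-order cosine error.

```python
import math

# Hypothetical figures: 1 arcsecond guide tilt, 5 mm Abbe offset, 25 mm travel.
phi = math.radians(1.0 / 3600.0)   # guide tilt in radians
L = 5e-3                           # Abbe offset (m)
s = 25e-3                          # measured travel (m)

abbe_error = L * math.tan(phi)           # first-order error: ~24 nm
cosine_error = s * (1 - math.cos(phi))   # second-order error: ~0.3 pm
print(abbe_error, cosine_error)
```

With a target resolution of 0.1 nm in mind, even arcsecond-level tilts make a millimetre-scale Abbe offset intolerable, which is why the single-beam interferometers can be arranged so that all measurement axes intersect at the probing point.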
Fig. 2. Single, double and triple beam plane mirror interferometers
Single, double and triple beam plane mirror interferometers can be applied in the NPM-Machine in order to measure the x-, y- and z-displacements and the pitch, yaw and roll angles (see Fig. 2).
2.2 Operation of the NPM-Machine
The approach of the NPM-Machine consists of a rigorous realization of this principle in all measurement axes at all times (see Fig. 3).
Fig. 3. Principle design of the nanomeasuring machine: 1) x-interferometer, 2) y-interferometer, 3) z-interferometer, 4) metrology frame made of Zerodur, 5) roll and yaw angle sensor, 6) pitch and yaw angle sensor, 7) surface-sensing probe, 8) sample, 9) corner mirror, 10) fixing points for probe system
The intersection of all length measurement axes is the point of contact between the probe and the sample. This Abbe offset-free design with three interferometers and a user-selectable surface-sensing probe provides extraordinary accuracy. The sample is placed on a movable corner mirror that is positioned by three-axis drive systems. The position of the corner mirror is measured by three plane mirror interferometers. Angular deviations of the guide systems are measured at the corner mirror by means of sensitive angle sensors and used for angular control. Guide error compensation of the stages is achieved by a closed-loop control system. The electromagnetic drives used achieve high speed and, at the same time, a positioning resolution of less than 1 nm. The NPM-Machine has one electromagnetic drive each for the x- and y-axes and four drives for the z-axis. Therefore, the angular errors caused by the x- and y-axes of the linear guides can be compensated.

2.3 Nanoprobes integrated into the NPM-Machine
Many different nano sensor types can be used for integration into the NPM-Machine. A focus sensor /5/, a scanning force microscope and a metrological AFM have been developed by the Institute of Process Measurement and Sensor Technology. The central part of the focus sensor is a so-called hologram laser unit. This multifunctional element has made the extreme miniaturization of the sensor possible. The structure of the entire focus sensor is shown in Fig. 4. The lateral resolution is about 0.8 µm. The resolution depends on the laser wavelength and the focal aperture of the probe. The optical system has been dimensioned such that a measurement range of about ±3 µm can be achieved. Thus, a resolution at the zero point of < 1 nm is made possible by the AD converter used. To be able to see the point of optical scanning on the surface of the sample, the focus sensor has been combined with a CCD camera microscope, which allows the user to spot interesting regions on the sample surface. The camera illumination is fed from an LED via optical fibres to minimize heat penetration into the measuring machine. The characteristic curve of the focus probe can be calibrated using the laser interferometers of the NPM machine.
Fig. 4. Setup of focus sensor
The focus sensor has been used to design a scanning force microscope. The bending of the cantilever is detected by the focus sensor. Owing to an integrated piezo translator, measurements in intermittent-contact mode as well as in contact mode are possible (Fig. 5).
Fig. 5. Scanning force microscope with focus sensor
The developed metrological AFM is the first AFM traceable to international standards. The bending of the cantilever is measured by a plane mirror interferometer and additionally detected by the focus sensor.
3 Measurement results

Five step-height standards from 7 nm to 780 nm were measured with the NPM-Machine in combination with the focus sensor as the probe system. The step-height standards had been calibrated at the PTB. The maximum difference between the mean values measured by the PTB and our own results was ±1.3 nm. The achievable scanning speed is of special interest with regard to large-area scans. Scan speeds up to 500 µm/s have been achieved without an observable increase of the expanded uncertainty (k = 2), which ranged from 0.7 nm to 2 nm. The NPM-Machine provides for the first time a large scanning and measurement range of 25 mm x 25 mm x 5 mm with a resolution of 0.1 nm. The focus sensor has proven to be versatile in its possibilities for use. Step height measurements up to 5 mm can be carried out. Fig. 6 shows the measurement results of a 1 mm step height.
Fig. 6. 1 mm-step height
Fig. 7 illustrates the long-range nanoscale measurement of a concave lens.
Fig. 7. Concave lens
A step height standard of 780 nm was measured with the developed focus sensor AFM. A standard deviation of only 0.4 nm was calculated.
Conclusion

This paper describes the design and operation of a nanopositioning and nanomeasuring machine with a scanning and measurement range of 25 mm x 25 mm x 5 mm and a resolution of 0.1 nm in all measurement axes. The laser-interferometric measurement is free from Abbe errors of first order in all three coordinates. The presented single, double and triple beam
interferometers can be applied in NPM-Machines. A new high-speed focus sensor and a scanning force microscope with an integrated focus sensor have been explained. It was possible to measure step height standards with uncertainties below 1 nm as well as long-range samples at nanometre scale.
Acknowledgements

The authors wish to thank all those colleagues who have contributed to the developments presented here. Our special thanks are due to the Thuringian Ministry of Science, Research and Arts for promoting nanocoordinate metrology in the framework of joint projects, and to the German Research Foundation (DFG) for funding the Collaborative Research Center 622 “Nanopositioning and Nanomeasuring Machines” at the Technische Universität Ilmenau.
References
1. G. Jäger, E. Manske, T. Hausotte, W. Schott: Operation and analysis of a nanopositioning and nanomeasuring machine, Proceedings of the 17th Annual Meeting of the ASPE, St. Louis, Missouri, USA, 2002, pp. 299-304
2. G. Jäger, E. Manske, T. Hausotte, H. Büchner, R. Grünwald, R. Füßl: Application of miniature interferometers to nanomeasuring and nanopositioning devices, Proceedings of the Conference Scanning Probe Microscopy, Sensors and Nanostructures (TEDA), Beijing, China, May 2004, pp. 23-24
3. G. Jäger, E. Manske, T. Hausotte, R. Füßl, R. Grünwald, H. Büchner, W. Schott, D. Dontsov: Miniature interferometers developed for applications in nano-devices, Proceedings of the 7th International Conference on Mechatronic Technology, ICMT 2003, Taipei, Taiwan, December 2003, pp. 41-45
4. H.-J. Büchner, G. Jäger: Interferometrische Messverfahren zur berührungslosen und quasi punktförmigen Antastung von Messoberflächen, Technisches Messen, 59 (1992) 2, pp. 43-47
5. R. Mastylo, E. Manske, G. Jäger: Development of a focus sensor and its integration into the nanopositioning and nanomeasuring machine, OPTO 2004, 25.-27.05.2004, Nürnberg, Proceedings, pp. 123-126
Invited Paper
Through-Focus Point-Spread Function Evaluation for Lens Metrology using the Extended Nijboer-Zernike Theory
Joseph J.M. Braat1, Peter Dirksen2, Augustus J.E.M. Janssen3
1 Optics Research Group, Department of Imaging Science and Technology, Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands ([email protected])
2 Philips Research Laboratories, Kapeldreef 75, B-3001 Leuven, Belgium ([email protected])
3 Philips Research Laboratories, Professor Holstlaan 4, 5656 AA Eindhoven, The Netherlands ([email protected])
1 Introduction

Lens metrology is of utmost importance in the field of optical lithography; both at the delivery stage and during the lifetime of the projection objective, a very high imaging quality should be guaranteed. Frequent on-line tests of the optical quality are required, and they should be well adapted to the environment in which the objective has to function, viz. a semiconductor manufacturing facility. The most common method for high-precision lens characterization is at-wavelength optical interferometry [1]. A first limitation for applying this method is the availability of an appropriate coherent source at the desired wavelength. The second problem is the reference surface that is needed in virtually all interferometric set-ups. For these reasons, there has been much interest in lens quality assessment by directly using the intensity distribution in the image plane. A reconstruction of the complex pupil function of the objective from a single intensity measurement is generally not possible. A combined intensity measurement in the image plane and the pupil plane, if possible, can give rise to an improved reconstruction method. More advanced methods use several through-focus images, but it is not possible to guarantee the uniqueness of the aberration reconstruction in such a numerical ‘inversion’ process [2]-[4]. In this paper we give an overview of a new method that is based on an analysis of the through-focus images of a pointlike object using a complex pupil function expansion in terms of Zernike polynomials.
While the original analysis of the diffracted intensity by Nijboer and Zernike, using the orthogonal circle polynomials, was limited to a close region around the image plane, the so-called Extended Nijboer-Zernike (ENZ) theory offers Bessel series expressions for the through-focus intensity distribution over a much larger range. Good convergence for the through-focus intensity is obtained over a range of typically 10 to 15 focal depths. For high-quality projection lenses, this is typically the range over which relevant intensity variations due to diffraction are observed, and the information from this focal volume is used for the reconstruction of both the amplitude and phase of the complex pupil function of the lens. In a recent development, the theory has been extended to incorporate vector diffraction problems, so that the intensity distribution in the focal volume of an imaging system with a high (geometrical) aperture (e.g. sin α = 0.95) can be adequately described. From this analysis one can basically also extract the so-called ‘polarization aberrations’ of the imaging system. As mentioned previously, the starting point in all cases is a pointlike object source that is smaller than or comparable to the diffraction limit of the optical system to be analyzed. A sufficient number of defocused images serves to create three-dimensional ‘contours’ of the intensity distribution that are then used in the reconstruction or retrieval scheme. In Section 2 we briefly describe the basic features of the Extended Nijboer-Zernike theory, followed in Section 3 by a presentation of its implementation in the retrieval problem for characterizing the lens quality. In Section 4 we present experimental results and in Section 5 we give some conclusions and an outlook towards further research and developments in this field.
2 Basic outline of the Extended Nijboer-Zernike theory

To a large extent, the imaging quality of an optical system is described by the properties of the complex (exit) pupil function. In Fig. 1 we show the geometry corresponding to the exit pupil and the image plane. The Cartesian coordinates on the exit pupil sphere are denoted by µ and ν and the pupil radius is ρ0. The distance from the centre E0 of the exit pupil to the centre P0 of the image plane is R. The Cartesian coordinates on the exit pupil sphere are normalized with respect to the pupil radius. The real space coordinates (x,y) in the image plane are normalized with respect to the diffraction unit λ/s0, where s0 is the numerical aperture of the imaging system, and these coordinates are then denoted by (X,Y). In an analogous way, the axial coordinate z is normalized with respect to the axial diffraction unit,
$u = \lambda / \{1-(1-s_0^2)^{1/2}\}$, and denoted by Z. As usual, the diffraction calculations are carried out using polar coordinates, $(\rho,\theta)$ for the pupil coordinates and $(r,\varphi,z)$ for the image plane coordinates. The complex pupil function is written as

$$B(\rho,\theta) = A(\rho,\theta)\,\exp\{i\Phi(\rho,\theta)\},$$   (2.1)

with $A(\rho,\theta)$ equal to the lens transmission function and $\Phi(\rho,\theta)$ the aberration function in radians of the objective.
Fig. 1. The choice of coordinates in the pupil and the image space.
The point-spread function in image space is obtained from Fourier optics [5] and is written as

$$U(r,\varphi;f) = \frac{1}{\pi}\int_0^{2\pi}\!\!\int_0^1 \exp\{if\rho^2\}\,B(\rho,\theta)\,\exp\{i2\pi r\rho\cos(\theta-\varphi)\}\,\rho\,d\rho\,d\theta,$$   (2.2)
where the defocusing parameter f has been included to allow through-focus evaluation of the image space amplitude. The calculation of $U(r,\varphi;f)$ can be done in a purely numerical way, but the Nijboer-Zernike theory has shown that a special representation of $B(\rho,\theta)$ in terms of the Zernike polynomials allows an analytical solution for f = 0. It has turned out that for nonzero values of f, as large as ±2π, a well-converging series expression for Eq. (2.2) can be found [6], and this solution has proven to be very useful
and effective [7] once we are confronted with the inversion problem described in the introduction.

2.1 Zernike representation of the pupil function
The common way to introduce Zernike polynomials in the pupil function representation is to put the amplitude transmission function $A(\rho,\theta)$ equal to unity (a frequently occurring situation in optical systems) and to apply the expansion only to the phase aberration function $\Phi(\rho,\theta)$ according to

$$B(\rho,\theta) = \exp\{i\Phi(\rho,\theta)\} \approx 1 + i\Phi(\rho,\theta) = 1 + i\sum_{n,m}\alpha_n^m Z_n^{|m|}(\rho,\theta),$$   (2.3)

with

$$Z_n^{|m|}(\rho,\theta) = R_n^{|m|}(\rho)\exp(im\theta).$$   (2.4)

The radial polynomial $R_n^{|m|}(\rho)$ is the well-known Zernike polynomial of radial order n and azimuthal order |m| (with n − |m| ≥ 0 and even), and the azimuthal dependence is represented by the complex exponential function exp(imθ). To represent all possible cosine and sine dependences in the Zernike polynomial expansion, the summation over n,m for the representation of $\Phi(\rho,\theta)$ has to be extended to both positive and negative values of m up to a chosen maximum value of |m|. In our analysis of the through-focus amplitude we prefer to use a Zernike polynomial expansion for the complete pupil function $B(\rho,\theta)$, and this leads to the following expression

$$B(\rho,\theta) = A(\rho,\theta)\exp\{i\Phi(\rho,\theta)\} = \sum_{n,m}\beta_n^m Z_n^{|m|}(\rho,\theta),$$   (2.5)
where the coefficients β now represent both the amplitude and phase of the complex pupil function. For sufficiently small values of the β-coefficients, an unequivocal reconstruction of the separate functions $A(\rho,\theta)$ and $\Phi(\rho,\theta)$ is feasible.

2.2 Amplitude distribution in the focal region
The Extended Nijboer-Zernike theory preferably uses the general representation of the complex pupil function according to Eq. (2.5), and from this expression the amplitude in the focal plane is obtained as

$$U(r,\varphi;f) = \sum_{n,m}\beta_n^m\,U_n^m(r,\varphi;f),$$   (2.6)

with the functions $U_n^m(r,\varphi;f)$ given by
$$U_n^m(r,\varphi;f) = 2\,i^m\,V_n^m(r,f)\exp(im\varphi).$$   (2.7)

The expression for $V_n^m(r,f)$ reads

$$V_n^m(r,f) = \begin{cases} \displaystyle\int_0^1 \exp(if\rho^2)\,R_n^m(\rho)\,J_m(2\pi r\rho)\,\rho\,d\rho, & m \ge 0,\\[2ex] \displaystyle(-1)^m\int_0^1 \exp(if\rho^2)\,R_n^{|m|}(\rho)\,J_{|m|}(2\pi r\rho)\,\rho\,d\rho, & m < 0. \end{cases}$$   (2.8)

It is a basic result of the Extended Nijboer-Zernike theory that the function $V_n^m(r,f)$ can be analytically written as a well-converging series expansion over the domain of interest in the axial direction, say |f| ≤ 2π. The integrals in (2.8) are given by

$$V_n^m(r,f) = \exp(if)\sum_{l=0}^{\infty}\left(\frac{-if}{\pi r}\right)^{l}\,\sum_{j=0}^{p} u_{lj}\,\frac{J_{|m|+l+2j+1}(2\pi r)}{2\pi r},$$   (2.9)

where p = (n−|m|)/2 and q = (n+|m|)/2. The coefficients $u_{lj}$ are given by

$$u_{lj} = (-1)^{p}\,\frac{|m|+l+2j+1}{q+l+j+1}\,\binom{|m|+j+l}{l}\binom{j+l}{l}\binom{l}{p-j}\binom{q+l+j}{l}^{-1},$$   (2.10)

where the binomial coefficients ‘n over k’ are defined by n!/(k!(n−k)!) for integer k,n with 0 ≤ k ≤ n and equal to zero for all other values of k and n.

2.3 Intensity distribution in the focal region
The intensity distribution is proportional to the squared modulus of the expression in Eq. (2.6) and can be written as

$$I(r,\varphi;f) = \Big|\sum_{n,m}\beta_n^m\,U_n^m(r,\varphi;f)\Big|^{2},$$   (2.11)

and in a first order approximation we obtain

$$I_a(r,\varphi;f) \approx |\beta_0^0|^2\,|U_0^0(r,\varphi;f)|^2 + 2\sum_{n,m}{}'\,\mathrm{Re}\big\{\beta_0^0\,\beta_n^{m*}\,U_0^0(r,\varphi;f)\,U_n^{m*}(r,\varphi;f)\big\}$$
$$= 4\,|\beta_0^0|^2\,|V_0^0(r,f)|^2 + 8\,\beta_0^0\sum_{n,m}{}'\,\mathrm{Re}\big\{i^{-m}\,\beta_n^{m*}\,V_0^0(r,f)\,V_n^{m*}(r,f)\exp(im\varphi)\big\},$$   (2.12)

where the two summation signs Σ′ exclude (n,m) = (0,0). The approximated expression $I_a(r,\varphi;f)$ neglects all quadratic terms with factors $\beta_n^m\beta_{n'}^{m'*}$ in them, which is reasonable if in the pupil function expansion the term with $\beta_0^0$, assumed to be > 0, is the dominant one.
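The series result can be checked numerically. The following sketch (our own illustration, not the authors' code; grid sizes and test values are arbitrary) evaluates the defining integral of Eq. (2.8) by quadrature and compares it with the Bessel-series expansion of Eqs. (2.9)-(2.10); at f = 0 the series reduces to the classical Nijboer-Zernike result $(-1)^p J_{n+1}(2\pi r)/(2\pi r)$.

```python
import numpy as np
from math import comb, pi

def trapezoid(y, x):
    """Trapezoidal rule on a uniform grid (works for complex values)."""
    dx = x[1] - x[0]
    return (y[0] + y[-1]) * dx / 2 + y[1:-1].sum() * dx

def bessel_j(k, x, n=2000):
    """Integer-order Bessel J_k(x) via its integral representation."""
    t = np.linspace(0.0, pi, n)
    return trapezoid(np.cos(k * t - x * np.sin(t)), t) / pi

def zernike_R(n, m, rho):
    """Radial Zernike polynomial R_n^|m|(rho)."""
    m = abs(m)
    p = (n - m) // 2
    return sum((-1)**s * comb(n - s, s) * comb(n - 2 * s, p - s)
               * rho**(n - 2 * s) for s in range(p + 1))

def V_direct(n, m, r, f, npts=3000):
    """Defining integral, Eq. (2.8), for m >= 0."""
    rho = np.linspace(0.0, 1.0, npts)
    jm = np.array([bessel_j(m, 2 * pi * r * x) for x in rho])
    return trapezoid(np.exp(1j * f * rho**2) * zernike_R(n, m, rho) * jm * rho,
                     rho)

def V_series(n, m, r, f, lmax=25):
    """Bessel-series expansion, Eqs. (2.9)-(2.10), for m >= 0."""
    p, q = (n - m) // 2, (n + m) // 2
    v = 2 * pi * r
    total = 0.0 + 0.0j
    for l in range(lmax):
        inner = 0.0
        for j in range(p + 1):
            u = ((-1)**p * (m + l + 2 * j + 1) / (q + l + j + 1)
                 * comb(m + j + l, l) * comb(j + l, l) * comb(l, p - j)
                 / comb(q + l + j, l))
            inner += u * bessel_j(m + l + 2 * j + 1, v)
        total += (-1j * f / (pi * r))**l * inner / v
    return np.exp(1j * f) * total

# Series and integral agree inside |f| <= 2*pi, e.g. for V_2^0 at r = 0.7:
print(abs(V_series(2, 0, 0.7, 1.0) - V_direct(2, 0, 0.7, 1.0)))
```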
3 Retrieval scheme for the complex pupil function

In this section we develop the system of linear equations that allows us to extract the Zernike coefficients from the measured through-focus intensity function. We suppose that the intensity has been measured in a certain number of defocused planes (typically 2N+1, with e.g. N = 5). A typical step of the defocus parameter from plane to plane is e.g. 4π/(2N+1). The discrete data in the defocused planes are interpolated and, if needed, transformed from a square to a polar grid so that they optimally fit the retrieval problem. After these operations we effectively have the real measured intensity function $I(r,\varphi;f)$ at our disposal for further analysis.

3.1 Azimuthal decomposition
We first carry out a Fourier decomposition of the measured intensity distribution according to

$$X^m(r,f) = \frac{1}{2\pi}\int_{-\pi}^{\pi} I(r,\varphi;f)\exp(-im\varphi)\,d\varphi.$$   (3.1)

Our task is to match the measured function $I(r,\varphi;f)$ to the analytical intensity function $I_a(r,\varphi;f)$ of Eq. (2.12) in the focal volume by finding the appropriate coefficients $\beta_n^m$. The harmonic decomposition of $I_a(r,\varphi;f)$ yields

$$X_a^m(r,f) = \frac{1}{2\pi}\int_{-\pi}^{\pi} I_a(r,\varphi;f)\exp(-im\varphi)\,d\varphi$$
$$= 4\,\delta_{m0}\,|\beta_0^0|^2\,\psi_0^0(r,f) + 4\,\beta_0^0\sum_{n\neq 0}\big[\beta_n^{m*}\,\psi_n^{m*}(r,f) + \beta_n^{-m}\,\psi_n^{-m}(r,f)\big].$$   (3.2)

Here we have used the shorthand notation

$$\psi_n^m(r,f) = i^{m}\,V_0^{0*}(r,f)\,V_n^m(r,f).$$   (3.3)

We now have at our disposal the harmonic decomposition of both the measured data set and the theoretically predicted intensity distribution that depends on the unknown $\beta_n^m$-coefficients. These coefficients can be evaluated by solving, for each harmonic component m, the approximate equality

$$X_a^m(r,f) \approx X^m(r,f).$$   (3.4)

Our preferred solution of these equations is obtained by multiplying with $\psi_n^m(r,f)$ and integrating over the relevant region in the (r,f)-domain (inner product method). In this way, we get a system of linear equations in the coefficients $\beta_n^m$ that can be solved by standard methods.
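On sampled data the decomposition of Eq. (3.1) is a discrete Fourier transform over the azimuthal coordinate. A toy illustration (our own sketch, with an artificial intensity ring; not the authors' implementation):

```python
import numpy as np

# For intensity sampled on a polar grid, the harmonic X^m(r, f) at one
# (r, f) ring is the m-th FFT coefficient over the azimuthal angle phi.
nphi = 64
phi = 2 * np.pi * np.arange(nphi) / nphi

# Toy intensity ring: DC background plus an m = 2 modulation of amplitude 0.3.
I_ring = 1.0 + 0.3 * np.cos(2 * phi + 0.5)

# X[m] approximates (1/2pi) * integral of I * exp(-i*m*phi) d(phi):
X = np.fft.fft(I_ring) / nphi
print(abs(X[0]), abs(X[2]))   # DC term 1.0, |m| = 2 harmonic 0.15
```

In the actual retrieval, one such coefficient set per (r, f) sample feeds the inner products that build the linear system for the β-coefficients.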
4 Experimental results

The ENZ aberration retrieval scheme has been applied to a projection system that allows, in a controlled way, the addition or subtraction of a certain amount of a specific aberration. In this way, by controlled adjustment of the lens setting, we were able to execute two sets of consecutive measurements and detect the aberrational change between them. In Fig. 2 we show the results of these measurements, where each time a certain amount of extra aberration has been introduced (50 mλ in rms value). It can be seen that the measured aberration values correspond quite well to the changes in the objective predicted by the mechanical adjustments. The through-focus intensity contours have been obtained from a large number of recorded point-spread images at different dose values. For a fixed resist clipping level, we can track the various intensity contours from the developed resist images that are automatically analyzed by an electron microscope. The accuracy of the method can be checked by a forward calculation of the point-spread function intensity using the retrieved pupil function. The measured and reconstructed intensity distributions show differences of 1% at the most. At this moment, the accuracy of the reconstructed wavefront aberration is of the order of a few mλ in rms value.
5 Conclusions and outlook

The Extended Nijboer-Zernike theory for point-source imaging has been applied to a lens metrology problem and has proven to be a versatile and accurate wavefront measurement method in an environment where, e.g., classical interferometry is difficult to implement. The wavefront retrieval process requires a reliable measurement of the through-focus intensity distribution. In our case, for a moderate-NA projection lens (NA = 0.30), we have used a large number of defocused resist images with widely varying exposure values. Automated resist contour evaluation of the developed images with an electron microscope yields an accurate representation of the through-focus intensity distribution.
[Fig. 2: three bar charts, “Spherical added” (+50 mλ spherical), “X-coma added” (+50 mλ X-coma) and “HV-Astigmatism added” (+50 mλ X-astigmatism), each plotting nominal and detuned aberration values in mλ for the categories Spher., Y-coma, X-coma, Y-ast., X-ast., Y-thre. and X-thre.]
Fig. 2. Bar diagrams of the measured change in aberration value for three specific aberration types (spherical aberration, coma and astigmatism) using the retrieval method according to the Extended Nijboer-Zernike theory. The black bars indicate the reference aberration values, the white bars the values corresponding to the detuned systems (all values in units of mλ rms aberration).
With the aid of the reconstructed exit pupil function, the point-spread image of the aberrated projection lens can be calculated and we have observed a fit to better than 1% in intensity between the measured and calculated through-focus intensity distributions. Recent research has focused on the incorporation of high-NA vector diffraction into the ENZ-theory [8], on the effects of image blurring due to latent resist image diffusion during the post-exposure bake and on the influence of lateral smear by mechanical vibrations [9]. All these effects tend to obscure the real lens contribution to image degradation and they have to be taken into account in the retrieval process.
References
1. Malacara, D (1992) Optical Shop Testing, 2nd edition, Wiley, Hoboken NJ (USA)
2. Gerchberg, R W, Saxton, W O (1971) Phase determination from image and diffraction plane pictures in electron-microscope. Optik 34:277-286
3. Gerchberg, R W, Saxton, W O (1972) Practical algorithm for determination of phase from image and diffraction pictures. Optik 35:237-246
4. Fienup, J R (1982) Phase retrieval algorithms - a comparison. Appl. Opt. 21:2758-2769
5. Born, M, Wolf, E (1970) Principles of Optics, 4th rev. ed., Pergamon Press, New York (USA)
6. Janssen, A J E M (2002) Extended Nijboer-Zernike approach for the computation of optical point-spread functions. J. Opt. Soc. Am. A 19:849-857
7. Braat, J J M, Dirksen, P, Janssen, A J E M (2002) Assessment of an extended Nijboer-Zernike approach for the computation of optical point-spread functions. J. Opt. Soc. Am. A 19:858-870
8. Braat, J J M, Dirksen, P, Janssen, A J E M, van de Nes, A S (2003) Extended Nijboer-Zernike representation of the vector field in the focal region of an aberrated high-aperture optical system. J. Opt. Soc. Am. A 20:2281-2292
9. Dirksen, P, Braat, J J M, Janssen, A J E M, Leeuwestein, A (2005) Aberration retrieval for high-NA optical systems using the extended Nijboer-Zernike theory. To appear in Proc. SPIE 5754, Conference on Microlithography 2005, San Jose, USA, February 26 - March 4
Digital Holographic Microscopy (DHM) applied to Optical Metrology: A resolution enhanced imaging technology applied to inspection of microscopic devices with subwavelength resolution
Christian D. Depeursinge, Anca M. Marian, Frederic Montfort, Tristan Colomb, Florian Charrière, Jonas Kühn, STI-IOA, EPFL, 1015 Lausanne, Switzerland
Etienne Cuche, Yves Emery, Lyncée Tec SA, rue du Bugnon 7, CH-1005 Lausanne, Switzerland
and Pierre Marquet, Physiology Institute, Lausanne University, Switzerland
1 Introduction

Digital Holographic Microscopy is an imaging technique offering both sub-wavelength resolution and real-time observation capabilities. We show in this article that, while longitudinal accuracies can be as low as one nanometer in air, or even less in media of elevated refractive index, the lateral accuracy and the corresponding resolution are lower, but can be kept at a sub-micron level by the use of a high numerical aperture (N.A.) microscope objective (M.O.). In the present state of the art, it can currently be kept below 600 nm. We show that the use of high N.A. objectives provides an effective means of adapting the sampling capacity of the digital camera (here a CCD) to the needs of hologram registration. On the other hand, the accuracy may also be limited by the weak intensities of the optical signals from nanometer-size diffracting objects.
2 Digital Holographic Microscopes (DHM)
Several optical arrangements have been selected for taking digital holograms of various specimens of diffracting objects. The most frequently used are the reflection geometry for surface topology measurement (Fig. 1) and
Resolution Enhanced Technologies
309
the transmission geometry for the measurement of object thicknesses and refractive indices (Fig. 2). The originality of our approach is to provide high accuracy in the reconstructed images, both by using a slightly modified microscope design yielding digital holograms of microscopic objects, and by taking advantage of an interactive computer environment to easily reconstruct the object shape from digital holograms. The use of a slightly off-axis configuration enables capturing the whole image information with a single hologram acquisition. By using a gated camera with an aperture time of a few tens of microseconds, or pulsed illumination sources, it is possible to avoid perturbations originating from parasitic movements, vibrations or perturbing ambient light. Moreover, the wavefront reconstruction rate may be as high as 15 frames/second, making DHM an ideal solution for performing systematic investigations on large volumes of micro-devices such as full wafers bearing MEMS, MOEMS and micro-optical devices. Real-time image reconstruction and rendering is henceforth possible, thus providing a new tool in the hands of micro- and nano-system engineers. This imaging modality is based on the reconstruction of the wavefront in numerical form, directly from a single digitized hologram taken in a slightly off-axis geometry. In our DHM implementation, no time heterodyning or moving mirrors are required, and the microscope design is therefore simple and robust. DHM brings quantitative data derived simultaneously from the amplitude and phase of the complex reconstructed wavefront diffracted by the object. Microscopic objects can be imaged in transmission and reflection geometry. DHM provides an absolute phase image, which can be directly interpreted in terms of the refractive index and/or profile of the object.
Very high accuracies can be achieved, comparable to those provided by high quality interferometers, but DHM offers better flexibility and the capability of adjusting the reference plane in the computer, i.e. without repositioning the beam or the object. This computerized procedure adds much flexibility and can even be made transparent to the user.

Fig. 1. DHM configuration for use in reflection microscopy.
The holograms are acquired with a CCD or CMOS camera and then digitized. A digital expression of the wavefront is formed in the hologram plane and then propagated to the object plane according to the Fresnel diffraction law. The Fresnel-Huygens expression is given by Eq. (1), which, in the paraxial approximation, can be put in the form of Eq. (2), evaluated numerically after discretization. The reconstructed wavefront simultaneously delivers the phase information, which reveals the 3D topography of the object surface, and the intensity image, as obtained by a conventional optical microscope.

Fig. 2. Geometry for use in transmission microscopy.
\[
\Psi(x,y) = \frac{1}{i\lambda} \iint \Psi_0(\xi,\eta)\,\frac{\exp\left(ik\left|\mathbf{r}-\mathbf{r}'\right|\right)}{\left|\mathbf{r}-\mathbf{r}'\right|}\, d\xi\, d\eta \qquad (1)
\]

\[
\Psi(\xi,\eta) = \Phi(\xi,\eta)\,\frac{\exp(i 2\pi d/\lambda)}{i\lambda d} \iint R_D(x,y)\, I_H(x,y)\, \exp\!\left\{ \frac{i\pi}{\lambda d}\left[ (x-\xi)^2 + (y-\eta)^2 \right] \right\} dx\, dy \qquad (2)
\]
Expression (2) of the reconstructed wavefront Ψ is the Fresnel transform of the un-propagated wavefront R_D I_H. λ is the wavelength and d the propagation distance. Eq. (2) provides the transformation of the wavefront Ψ from the hologram plane 0xy, where it is equal to R_D I_H, to the observation plane 0ξη. The digital reference wave R_D, introduced in Ref. 1, and the digital phase mask Φ, introduced in Ref. 2 in order to correct the quadratic phase dependence and possible aberrations introduced by the M.O., play a major role in phase reconstruction. R_D is defined as a computed replica of the experimental reference wave R. If we assume that the hologram has been recorded in the off-axis geometry, with a plane wave as reference, R_D is defined as follows:
\[
R_D(x,y) = \exp\!\left[ i\,\frac{2\pi}{\lambda}\left( k_x x + k_y y \right) + i\,\delta(t) \right] \qquad (3)
\]
where the parameters kx, ky define the propagation direction, and δ(t) is the phase delay between the object and reference waves, which can vary. The adjustment of several parameters is needed for proper reconstruction of the phase distribution. In particular, kx and ky compensate for the tilt aberration resulting from the off-axis geometry or from an imperfect orientation of the specimen surface. Equation (3) can be discretized and computed by calculation of the discrete Fresnel transform (see [2] and [3]).

2.1 High longitudinal resolution
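As an illustrative sketch (array sizes, pixel pitch, wavelength and reconstruction distance below are assumed values, not parameters from the paper), the discrete reconstruction of Eqs. (2)-(3) amounts to multiplying the hologram by R_D and a quadratic phase factor and applying an FFT; the phase of the result carries the height information discussed in this section:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, dx, kx=0.0, ky=0.0):
    """Single-FFT discrete Fresnel reconstruction of a digital hologram (sketch).

    hologram   : 2-D recorded intensity I_H
    wavelength : laser wavelength (m)
    d          : reconstruction (propagation) distance (m)
    dx         : camera pixel pitch (m)
    kx, ky     : direction cosines of the digital reference wave R_D
    """
    n, m = hologram.shape
    y, x = np.mgrid[0:n, 0:m] * dx
    # Digital reference wave, cf. Eq. (3)
    R_D = np.exp(1j * 2 * np.pi / wavelength * (kx * x + ky * y))
    # Quadratic phase factor of the discrete Fresnel transform
    chirp = np.exp(1j * np.pi / (wavelength * d) * (x ** 2 + y ** 2))
    # FFT form of the Fresnel transform (digital phase mask Phi omitted for brevity)
    return np.fft.fftshift(np.fft.fft2(R_D * hologram * chirp))

holo = np.random.rand(256, 256)                      # synthetic hologram
psi = fresnel_reconstruct(holo, 633e-9, 0.05, 6.45e-6)
height = np.angle(psi) * 633e-9 / (4 * np.pi)        # reflection: phase -> height
# A phase accuracy of ~0.5 degree, as estimated in the text, maps to a
# sub-nanometer height uncertainty in reflection:
dh = 633e-9 / (4 * np.pi) * np.deg2rad(0.5)
```

The last line reproduces the order of magnitude of the vertical resolution quoted below (well under 1 nm).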
Fig. 3. Reconstructed image of a pentacene layer deposited on gold. The diagram shows the profile of the border of the layer; nanometric accuracies are obtained.
The accuracy of the reconstructed wavefront on the optical axis is given by the phase of the reconstructed wavefront Ψ. Accuracies of approximately half a degree have been estimated experimentally for phase measurements. In a reflection geometry, this corresponds to a vertical resolution of less than 1 nanometer at a wavelength of 630 nanometers. In the transmission geometry, the resolution for thickness measurements depends on the refractive index of the specimen; a resolution of less than approximately 16 nanometers has been estimated for quartz objects, and down to 2 nm after averaging [to be published]. Fig. 3 illustrates this feature. The sample observed in reflection is a layer of pentacene deposited on a gold substrate. Two surfaces are visible in the specimen: their height difference is around 142 nm, and the details of the border can be observed. These details are very useful for industrial inspection, since DHM provides direct brightfield vision and avoids lengthy scanning procedures.

2.2 High lateral resolution
Very high accuracy has been obtained by using a microscope objective (M.O.) with a high numerical aperture (N.A.). The role of this high-N.A. M.O. is to provide a simple means of adapting the sampling capacity of the camera to the information content of the hologram. As illustrated in Fig. 4, a lens or M.O. achieves a reduction of the kx, ky components of the k vector in the x,y plane perpendicular to the optical axis. The reduction factor is given by the magnification M of the M.O. The new components k'x, k'y of the k' wavevector of the beam, after it has crossed the M.O., can be made as small as required by the Shannon theorem applied to the sampling capacity dictated by the pixel size of the camera. Using a high magnification objective, the match can be optimized. At the same time, by maximizing the N.A., the transverse resolution can be pushed to the diffraction limit, and sub-micron resolution can easily be achieved (ordinarily better than 600 nm).
Fig. 4. Use of a lens or microscope objective to match the sampling capacity of a CCD placed in the plane of the hologram with the spatial spectrum of the object.
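The matching argument can be made concrete: the finest object feature is of order λ/(2 N.A.), and after magnification M it must span at least two camera pixels (Shannon sampling). The wavelength, N.A. and pixel pitch below are illustrative assumptions, not the authors' exact parameters:

```python
import math

wavelength = 633e-9   # m, assumed illumination wavelength
NA = 0.9              # assumed numerical aperture of the M.O.
pixel = 6.45e-6       # m, assumed CCD pixel pitch

# Diffraction-limited feature size ~ lambda / (2 NA); after magnification M
# it must cover at least two camera pixels (Shannon/Nyquist criterion):
min_feature = wavelength / (2 * NA)
M_min = 2 * pixel / min_feature
print(f"diffraction limit {min_feature * 1e9:.0f} nm -> magnification M >= {M_min:.0f}")
```

With these numbers the diffraction limit is indeed below the 600 nm figure quoted in the text, and a standard high-magnification objective provides the required match.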
Rather than computing the propagation from the object to the hologram plane, it is appropriate in this case to reconstruct the wavefield in the plane of the image of the object by propagating the wavefront numerically over the distance d and, finally, to compute the wavefront radiated by the physical object by deconvolving the field with the complex point spread function characterizing the M.O.
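The deconvolution step mentioned above can be sketched with a regularized (Wiener-type) inverse filter; the delta-like PSF and the regularization constant below are purely illustrative assumptions, not the authors' characterization of the M.O.:

```python
import numpy as np

def deconvolve_field(field, psf, eps=1e-6):
    """Remove the complex point-spread function of the M.O. from a
    reconstructed field by regularized (Wiener-type) inverse filtering."""
    F = np.fft.fft2(field)
    # OTF of the objective; ifftshift puts the PSF center at the origin
    H = np.fft.fft2(np.fft.ifftshift(psf))
    # The regularization eps avoids division by near-zero OTF values
    return np.fft.ifft2(F * np.conj(H) / (np.abs(H) ** 2 + eps))

# Round-trip demo with an idealized delta-like PSF (purely illustrative)
rng = np.random.default_rng(0)
obj = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
psf = np.zeros((64, 64)); psf[32, 32] = 1.0
recovered = deconvolve_field(obj, psf)
```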
Such theoretical predictions could be easily demonstrated and are illustrated in Fig. 5: gratings embossed in polycarbonate are used as test objects, and two periods are shown: a) 8 µm period, b) 2 µm period. The images clearly show submicron resolution. Such high lateral resolution is also needed to evaluate correctly the height profile of small peaks or dips. This performance is also required to measure surface roughness correctly.
Fig. 5. Gratings embossed in polycarbonate. Field of view: 250×250 µm. Depth of lines around 500 nm. a) 8 µm period, b) 2 µm period.
3 Conclusions
Digital Holographic Microscopy is an imaging technique with high resolution and real-time observation capabilities. It opens large and appealing perspectives in microscopy. The method brings quantitative data that can be derived from the digitally reconstructed complex wavefront. Simultaneous amplitude and quantitative phase contrast can be derived from the acquisition of a single hologram and used to determine the optical path length precisely. In the field of microscopy, the use of high-N.A. lenses or M.O.s provides an optimal space-bandwidth product for scattered-beam characterization. Highly resolved images of the refractive index and/or shape of the object can be derived from these data. Purely numerical procedures enable DHM to investigate the shape and size of microscopic objects: the topology of surfaces and/or the distribution of refractive index.
4 Acknowledgments
The development of the technology has been supported by the Swiss government through grant 205320-103885/1 from the Swiss National Science Foundation, and by CTI grants TopNano 21 #6101.3 and NanoMicro #6606.2 and #7152.1. The work was carried out in cooperation with Lyncée Tec SA.
5 References
1. E. Cuche, F. Bevilacqua, and C. Depeursinge, "Digital holography for quantitative phase-contrast imaging," Opt. Lett. 24(5), 291-293 (1999).
2. E. Cuche, P. Marquet, and C. Depeursinge, "Simultaneous amplitude and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms," Appl. Opt. 38, 6994-7001 (1999).
3. Y. Emery, E. Cuche, F. Marquet, S. Bourquin, P. Marquet, J. Kühn, N. Aspert, M. Botkin, C. Depeursinge, "Digital Holographic Microscopy (DHM): Fast and robust 3D measurements with interferometric resolution for industrial inspection," this volume.
An adaptive holographic interferometer for high precision measurements Theo Tschudi, Viktor M. Petrov, Jürgen Petter, Sören Lichtenberg, Christian Heinisch, Julia Hahn Institute of Applied Physics, Darmstadt University of Technology Hochschulstrasse 6, 64289 Darmstadt Germany
1 Introduction
Adaptive holographic interferometers are powerful tools for many practical applications. They can be used to detect acoustic and ultrasonic waves, vibrations, small displacements or distortions of surfaces, or phase differences between coherent waves. For example, a few years ago the possibility of detecting a periodic longitudinal displacement of a surface with an amplitude of 0.1-0.2 nm was demonstrated [1]. Our adaptive holographic interferometer is based on the principles of dynamic holography in photorefractive materials. It combines the advantages of a traditional Fabry-Perot interferometer and of known adaptive holographic interferometers, such as stability against environmental influences, without some of the disadvantages of these two interferometer types. We demonstrate the possibility of detecting readout wavelength deviations of 10 fm and corresponding angular deviations of 10⁻⁸ rad.
2 Principle of operation
It is well known that the transmission geometry of holograms provides extremely high angular selectivity, while the reflection geometry provides extremely high wavelength selectivity. We combined both properties in a single optical scheme of the interferometer (Fig. 1). Let us suppose that the initial hologram is recorded by two plane waves. In order to read this hologram in the reflection geometry, the readout wavelength has to be matched to the grating spacing written by the recording beams:
\[
\lambda_r = \frac{n\,\lambda_\omega}{\sin\theta} = 2 n \Lambda
\]

where λω is the recording wavelength, λr is the readout wavelength, n is the refractive index of the crystal, θ is the recording angle and Λ is the grating spacing.
Fig. 1. Hologram is written in transmission geometry and simultaneously read out in reflection geometry. TO is the test object.
Now let us consider that the test object (TO) is installed in one of the recording beams and that its front and back surfaces are not perfectly parallel. After passing through the non-parallel object, one of the recording beams will be deflected by refraction, and consequently the recording angle θ will change to θ + Δθ. Therefore a new hologram with a different grating period will be recorded. In order to read out the new hologram, one has to change the readout wavelength. The variation of the recording angle Δθ leads to a variation of the reflected wavelength Δλr:
\[
\Delta\lambda_r = \lambda_r\,\Delta\theta\,\cot\theta .
\]

From this equation one can see that the variation of the angle between the recording beams can be detected by measuring the shift of the Bragg wavelength Δλr of the readout beam. The final accuracy depends on the accuracy of the wavelength measurement.
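Plugging illustrative numbers into these two relations (a sillenite-like refractive index, 532 nm recording light and a 45° recording angle are assumptions, not the authors' exact parameters) shows how a 10⁻⁸ rad angular deviation maps to a femtometre-scale Bragg shift:

```python
import math

n = 2.5                    # assumed refractive index (sillenite-like crystal)
lam_w = 532e-9             # m, recording wavelength (cf. Fig. 2)
theta = math.radians(45)   # assumed recording angle

# Bragg readout wavelength in reflection: lambda_r = n * lambda_w / sin(theta)
lam_r = n * lam_w / math.sin(theta)

# Shift for a small angular deviation: d(lambda_r) = lambda_r * cot(theta) * d(theta)
dtheta = 1e-8              # rad, the angular resolution reported in the text
dlam_r = lam_r * dtheta / math.tan(theta)
print(f"lambda_r = {lam_r * 1e9:.0f} nm, Bragg shift = {dlam_r * 1e15:.0f} fm")
```

The resulting shift is of the order of 10 fm, consistent with the wavelength resolution claimed for the interferometer.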
3 Realisation of the interferometer and applications
We present experimental results for a high resolution interferometer based on dynamic volume holography in photorefractive materials in combination with phase-shift keying of reflective holographic gratings. Two phase-shifted holographic gratings are recorded in transmission geometry and simultaneously read out in reflection geometry using a tunable IR laser. We measure the spectral transfer function of the combined gratings, which is extremely sensitive to the phase shift between the two recorded gratings. The proposed interferometer is able to detect extremely small refractive index variations in gases due to pressure changes, small angular deviations (down to 10⁻⁸ rad), and wavelength shifts in one of the readout beams in the range of 10 fm. Small changes in the recording beams of the hologram are detected by evaluating the corresponding readout Bragg wavelength. As a first application of this technology we demonstrate the measurement of small refractive index variations of gases due to pressure changes. Using our technique, relative phase shifts can be measured with a resolution of about 0.025π. We present experimental investigations as well as a numerical approach to simulate the transfer function for Bragg gratings with a phase shift.
Fig. 2. Experimental set-up for the measurement of light pressure. 1: Nd-YAG cw laser, λ=532 nm; 2: Bi12SiO20 crystal; 3: pellicle placed in a vacuum chamber; 4: glass mirror with a piezo-driver; 5: laser (λ=405, 532, 1569 nm) used as a source of coherent striking light; 6: amplitude modulator; 7: beam-forming system (6 and 7 are individual for each wavelength); 8: sync-generator; 9: lock-in amplifier; angle α ≈ 22.5°. Outp. 1, Outp. 2 are the two outputs of the interferometer. Insert (a): orientation of the crystal.
In a second application we present an adaptive interferometer based on a dynamic reflection hologram in sillenite-type crystals, which we used for measuring the pressure of light. A reflective pellicle was used as one of the interferometer's mirrors. The pellicle was illuminated by amplitude-modulated light to produce periodic mechanical displacements, which were detected by the interferometer (Fig. 2). We were able to detect modulations of the light intensity within a frequency range of 0.5 Hz - 30 kHz. The minimum measurable modulation of the light intensity was about 1-2 mW/cm², which corresponds to displacements of several tens of picometers! No dependence on the wavelength in the band of 405-1550 nm was observed.
4 References
1. Petrov, V. M., Denz, C., Petter, J., Tschudi, T., Enhancing sensitivity of an adaptive holographic interferometer using non-Bragg diffraction orders. Opt. Lett. 22 (1997) 1902
Spatio-Temporal Joint Transform Correlator and Fourier Domain OCT
T. Yatagai, Y. Yasuno and M. Itoh
Institute of Applied Physics, University of Tsukuba, Tennoudai 1-1, Tsukuba, Ibaraki 305-8573, Japan
1 Introduction
The joint transform correlator (JTC) is one of the most useful optical computing techniques for obtaining the correlation between two objects [1]. In this optical method, the Fourier spectra of two objects are superimposed and then square-law detected to obtain the intensity of the sum of the spectra. The intensity of the spectra is then Fourier transformed by optical means so as to obtain the correlation between the two objects. On the other hand, a spectral modulating pulse shaper is employed to control the shapes of ultrafast light pulses [2]. The pulse shaper decomposes the temporal spectrum components of an ultrafast light pulse spatially by a grating-lens pair, modulates the spectrum, and then reconstructs a temporally modulated light pulse by another grating-lens pair. We have proposed a spatio-temporal joint transform correlator (ST-JTC) [3,4], in which the temporal spectrum of double pulses with a time delay is spatially displayed by a grating-lens pair. Then the spatial distribution of the spectrum is spatially Fourier transformed. As a result, the temporal auto-correlation of the double pulses is obtained as a spatial pattern. In this paper, profilometry using the ST-JTC is introduced, and then the modification of the ST-JTC to Fourier domain optical coherence tomography is discussed with some in vivo measurements of bio-medical samples.
2 Spatio-temporal joint transform correlator
The schematic setup of the ST-JTC system is shown in Fig. 1. The optical system consists of three parts. The first part is a signal generator, which uses light to encode the three-dimensional shape of the sample object. We used a Michelson interferometer with its arm length shifted several hundred micrometers from its zero point. In the second part, a spectrometer, the temporally encoded depth information is spread along the spatial axis as a spectral interferogram. In the last part, the Fourier-transform part, the spatially spread spectral interferogram is Fourier transformed spatially. As a result, the sectional image of the surface under test is obtained by CCD2.
Fig. 1. Schematic diagram of spatio-temporal joint transform correlator.
First, light is divided into reference and probe beams by a beam splitter (BS1), and these two beams are reflected and scattered by the mirror and the object, respectively. The spectral electric fields of the probe and reference light, E_p(ω) and E_r(ω), are described as:
\[
E_p(\omega) = F_t\!\left[ E_0(t) * n'(t) \right](\omega) = \hat{E}_0(\omega)\,\hat{n}'(\omega) \qquad (1)
\]

and

\[
E_r(\omega) = F_t\!\left[ E_0(t-\tau) \right](\omega) = \hat{E}_0(\omega)\exp(i\tau\omega) \qquad (2)
\]
where E_0(t) is the temporal electric field of the light source, n' is the derivative of the axial structure of the refractive index of the object, and the axial spatial variable is transformed to the time-of-flight t. F_t[·](ω) denotes the Fourier transform from t to ω, f̂ denotes the Fourier transform of f, and * is the convolution operator. τ is the time delay introduced by the optical path difference between the two arms. This light is led into the spectrometer, which consists of a grating and a cylindrical lens (CL). The electric field of the two beams is dispersed by the grating [5] and Fourier transformed by the CL. The electric fields are superimposed on a CCD camera to make an interferometric fringe as:
\[
I(F) = \left| \hat{E}_0(F)\,\hat{n}'(F) + \hat{E}_0(F)\exp(i\tau F) \right|^2
= \left| \hat{E}_0(F)\,\hat{n}'(F) \right|^2 + \left| \hat{E}_0(F) \right|^2 \left[ 1 + 2\left|\hat{n}'(F)\right| \cos\!\left\{ \tau F - \arg \hat{n}'(F) \right\} \right] \qquad (3)
\]

where F = xω₀²d cos(γ₀)/(2πfcm), x is the space variable on the CCD camera, parallel to the reflecting surface of the grating, ω₀ the central frequency of the light source, γ₀ the angle of diffraction at the central frequency, d the groove interval of the grating, f the focal length of the CL and c the speed of light. The correlation signal between the reference and probe light is calculated by Fourier transforming the interference fringe as:
\[
\hat{I}(F') = p\!\left[ E_0 * n' \right](F') + p_c\!\left[ E_0 * n',\, E_0 \right]\!\left( F' - \frac{\tau}{2\pi} \right) + p_c\!\left[ E_0,\, E_0 * n' \right]\!\left( F' + \frac{\tau}{2\pi} \right) + p\!\left[ E_0 \right](F') \qquad (4)
\]
where p[·] is the auto-correlation and p_c[·,·] the cross-correlation. The first and fourth terms are the auto-correlations of the probe and reference light, respectively; the second and third terms are the cross-correlations between the probe and reference light, shifted by the time delay τ. Finally, the axial structure of the object is obtained from the cross-correlation terms. We have measured the path length difference of the Michelson interferometer and the three-dimensional profiles of two objects with the spectral interferometric joint transform OCT system shown in Fig. 1. In the experiment, an SLD with a central wavelength of 850 nm and a spectral FWHM of 12 nm is used as a cw broadband light source, and a parallel-aligned nematic liquid crystal SLM [6] is used as the SLM. First, we measured the path length difference of a Michelson interferometer. In this case, plane mirrors are placed at the ends of the two arms of the interferometer. The two cylindrical lenses XCL1 and YCL1 shown in Fig. 1 are removed. The results are shown in Fig. 2. In Fig. 2(a), there are an auto-correlation peak of the input signal (0th-order peak), halation noise, and a 1st-order cross-correlation peak on the left side of the 0th-order peak, indicated by the white arrow. The position of the 1st-order peak represents the path length difference. In the case of Fig. 2(b), one arm is 400 µm longer than in case (a). Hence, the 1st-order peak shifts in proportion to the arm-length shift. From these results, we calculated the coefficient between the arm-length difference and the spatial position on CCD2 as
16 µm/pixel. Thus the coefficient gives the measurement accuracy of this system.
Fig. 2. Output images of the ST-OCT: (a), (b) correlation peaks corresponding to the path length difference of the two arms of a Michelson interferometer.
In the next experiment, we measured a sample object with a stepped surface. The sample is constructed of a cover glass and a glass slide covered with aluminum. The optical system included the two cylindrical lenses which had been removed in the first experiment. Using XCL1, the light is focused on the measured sample, improving the resolution in the x direction. Furthermore, XCL1 corrects the direction of the reflected light, presumably leading to reduced light power loss. The image of the object surface is formed on CCD1 by YCL1, resulting in improved resolution in the y direction. In this system, the scanning operation along the z axis runs in parallel with the spectral interferogram. Hence, we have to carry out only one-dimensional scanning in the x direction to measure the three-dimensional profile of the object. In this experiment, we scanned 50 points and reconstructed the object surface as shown in Fig. 3. The stepped surface of the object can be observed.
Fig. 3. Three-dimensional surface measured by ST-OCT.
3 Fourier domain optical coherence tomography
In the case of a 3-D scattering object, such as biological tissue, with a numerical Fourier transform for calculating the final correlation, the ST-JTC is usually called Fourier domain OCT (FD-OCT). Figure 4 shows a typical optical setup of FD-OCT, in which the power spectrum is detected by a CCD and then Fourier transformed in a computer to obtain the correlation, including the depth distribution of the scattering object. Because the reference mirror is not moved, fast measurement is performed. To measure the sectional distribution of the scattering object, the object is moved. Figure 5 shows an example of an in vitro FD-OCT image. The object is a pig eye. Cornea, iris and crystalline lens of the eye are clearly observed. To calculate this image, the complex spectrum measurement method is used by modulating the reference phase [7].
Fig. 4. Optical setup of Fourier domain OCT (FD-OCT).
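The principle can be sketched numerically: Fourier transforming a detected spectral fringe maps a reflector's time delay τ to the position of a correlation peak, as in Eqs. (3)-(4). All values below are illustrative, not experimental parameters:

```python
import numpy as np

# Spectral axis (arbitrary units) and a single mirror-like reflector at delay tau
F = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
tau = 100.0   # delay introduced by the path-length difference (illustrative)

# Spectral interferogram for a single reflector, cf. Eq. (3) with |n'| constant
I = 1.0 + np.cos(tau * F)

# Fourier transform of the fringe, cf. Eq. (4): the cross-correlation term
# appears as a peak displaced in proportion to tau
spectrum = np.abs(np.fft.fft(I))
peak = int(np.argmax(spectrum[1:1024])) + 1   # skip the 0th-order (DC) term
print("correlation peak at bin", peak)        # bin index equals tau here
```

Because no reference mirror needs to be scanned, the whole depth profile is obtained from this single transform, which is the reason for the speed advantage noted above.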
Fig. 5. in vitro FD-OCT image of a pig eye.
By combining FD-OCT with polarization measurement, we obtain a polarization-sensitive FD-OCT for measuring the complex birefringence of biological tissue [8]. With arbitrary polarization states of the incident and reference light, the change of polarization state can be acquired, and the axial structure of the tissue is measured by a single detection of the power spectrum, the superposition of the object and reference light. Using this OCT system, cross-sectional Mueller matrix images of human skin can be observed.
4 Conclusion
We have presented a novel optical system, the spatio-temporal joint transform correlator (ST-JTC). The system is based on the similarity between a conventional spatial joint transform correlator, one of the most popular optical computing techniques, and a spectral filtering method, one of the most popular methods of controlling the temporal profile of an ultrafast light pulse. Based on ST-OCT, we have constructed a surface-measurement system. This principle was then applied to highly scattering biological tissue, with the Fourier transform of the power spectrum done numerically; this system is usually called Fourier domain OCT. Finally, polarization-sensitive FD-OCT was discussed for measuring the birefringence of biological tissue.
References
1. Weaver, C. S. and Goodman, J. W. (1966) Appl. Opt., 5:1248.
2. Weiner, A. M. and Heritage, J. P. (1988) Opt. Lett., 13:300-302.
3. Yasuno, Y., Sutoh, Y., Yoshikawa, N., Itoh, M., Mori, M., Komori, K., Watanabe, M. and Yatagai, T. (2000) Opt. Commun., 177:135-139.
4. Yasuno, Y., Nakama, M., Sutoh, Y., Itoh, M., Mori, M. and Yatagai, T. (2000) 186:51-26.
5. Danailov, M. B. and Christov, I. P. (1989) J. Modern Opt., 36:725.
6. Mukozaka, N., Yoshida, N., Toyoda, H., Kobayashi, Y. and Hara, T. (1994) Appl. Opt., 33:2804-2811.
7. Yasuno, Y., Makita, S., Sutoh, Y., Itoh, M. and Yatagai, T. (2004) Opt. Express, 12:6184-6191.
8. Yasuno, Y., Makita, S., Endo, T., Aoki, G., Itoh, M. and Yatagai, T. (2002) Opt. Lett., 27:1803-1805.
Subdivision of Nonlinearity in Heterodyne Interferometers
Wenmei Hou
University of Shanghai for Science and Technology*, Yangpu-Qu, Jungong-Lu 516, 200093 Shanghai, P.R. China
* The fourth key discipline of the education committee of Shanghai.
1 Introduction
The demands for ultra-precise displacement measurement, arising from the need to make future generations of semiconductor devices and to explore the miniature structures of micro-electro-mechanical systems, as well as from nanopositioning in scanning probe microscopes, have made the laser interferometer an important instrument in widespread use. The measurement of positions and displacements of various components is needed to an accuracy in the sub-nanometre range. The heterodyne interferometer has special advantages for measurement in the nanometer region, as it allows a very stable subdivision of the light wavelength with high resolution on the basis of a phase measurement. Theoretically, the subdivision of the wavelength could be unlimited. Using precision phase measurement with a typical accuracy better than 0.1°, a resolution of about 0.1 nm can easily be obtained for the displacement measurement. However, the effective resolution of a laser interferometer is often limited by its cyclic nonlinearity in the phase-versus-displacement response. Finding the most effective approach to its solution remains one of the most important research subjects.
2 The nonlinearity of the heterodyne interferometer
A heterodyne interferometer uses two polarized beams having slightly different frequencies, normally orthogonally polarized (see Fig. 1). This is usually achieved using either a Zeeman-stabilized laser or a single-frequency laser with an acousto-optic modulator (AOM). The two beams are separated by a polarizing beam-splitter and travel separately through the two interferometer arms: one beam becomes the reference beam and the other is used as the measurement beam.
Fig. 1. The heterodyne interferometer, principle. DR, DM: optical detectors; IR: reference signal; IM: measurement signal.
The two beams traverse their respective paths, are recombined in the beam-splitter, and then pass through a polarizer set at 45° to the polarization axes of the two beams. As the measurement arm of the interferometer changes its length Δl, there is a phase shift (or, equivalently, a Doppler shift of the frequency) of the beam in the measurement arm. This results in a change Δφ in the phase (frequency) difference between the two beams which have passed through the interferometer. This difference is then compared with the frequency difference between the two beams prior to their entering the beam-splitter (see Fig. 1) to yield a phase difference corresponding to the distance moved by the measurement arm of the interferometer. Ideally, when only one frequency occurs in each interferometer arm, the measured phase shift ΔΦ varies in proportion to the path difference ΔL:

\[
\Delta L = \frac{\lambda}{2\pi}\,\Delta\Phi \qquad (1)
\]

With a phase detection resolution of 2π/3600 and an interferometer (see Fig. 1), we have a resolution of about 0.05 nm for the displacement measurement. However, due to various influences, mixed frequency states are often found in one or both interferometer arms. This results in a nonlinear relation between the measured phase difference ΔΦ and the respective path difference ΔL. Thus the path difference ΔL would be:

\[
\Delta L = \frac{\lambda}{2\pi}\,(\Delta\varphi + \gamma) \qquad (2)
\]
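Equation (1) can be checked numerically; dividing by the number of one-way passes N through the interferometer (double- and fourfold-pass arrangements are common) gives displacement resolutions of the order quoted in the text. The He-Ne wavelength of 633 nm is an assumed value:

```python
import math

wavelength = 633e-9          # m, assumed He-Ne laser wavelength
dphi = 2 * math.pi / 3600    # phase detection resolution (0.1 degree)

# Eq. (1): delta_L = (lambda / (2*pi)) * delta_Phi; the mechanical displacement
# resolution is delta_L / N for N one-way passes through the interferometer.
res = {N: wavelength / (2 * math.pi) * dphi / N for N in (1, 2, 4)}
for N, dl in res.items():
    print(f"N = {N}: displacement resolution = {dl * 1e9:.3f} nm")
```

With these assumptions, N = 1 gives about 0.18 nm and N = 4 about 0.04 nm, bracketing the 0.05 nm figure above.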
where Δφ is the real phase shift and γ is a nonlinearity error that changes periodically with Δφ. This results in a nonlinear relation between the measured phase shift ΔΦ and the real phase shift Δφ (see Fig. 2), and thus the respective path difference ΔL. Besides the first harmonic nonlinearity (see Fig. 2), there is also a second harmonic nonlinearity in γ, and higher harmonic nonlinearities can occur as well [3].
Fig. 2. The nonlinearity in heterodyne interferometer
The nonlinearity of the heterodyne interferometer is caused mainly by the radiation of the laser source, i.e. elliptical or non-orthogonal polarizations of the laser beams, and by imperfect optical parts. Even perfect optical parts can contribute to the nonlinearity, and misalignment of the interferometer affects it as well. 2γmax is the maximum phase error of the nonlinearity. It is usually of the order of several nanometers to 10 nanometers, and can be worse under certain conditions. This cyclic nonlinearity in the phase-versus-displacement response has limited the effective resolution of laser interferometers.
3 The subdivision of nonlinearity in heterodyne interferometers Since the error of the nonlinearity in heterodyne interferometer has been discovered [1], many works have been contributed for eliminating this error [1-9]. With the double detectors[3] or using AOM [6-8], the nonlinearity of heterodyne interferometer could be essentially reduced. For most cases, especially in precise industrial measurements, there is an interesting approach being found for easy and efficient decreasing the nonlinearity of the heterodyne interferometers.
Normally, because the nonlinearity is a periodic deviation, the maximum amplitude of the nonlinearity in a heterodyne interferometer is held constant as long as the interferometer is kept stable. But if one considers the nonlinearity in terms of measured displacement length, this statement is not completely exact. Here we generally adopt ΔL for the measured optical path shift and ΔΦ for the measured optical phase shift: ΔΦ = Δφ + γ; ΔL = N·Δl_m, where N is the number of one-way passes through the interferometer and Δl_m is the measured displacement length. Then from Eq. (1) we have:

Δl_m = ΔL/N = (1/N)·(λ/2π)·ΔΦ = (1/N)·(λ/2π)·(Δφ + γ) = (1/N)·(λ/2π)·Δφ + (1/N)·(λ/2π)·γ = Δl + δl(γ)   (3)
Δl is the displacement length, and we define δl(γ) as the measurement error of displacement length caused by the nonlinearity γ. Then

Δl = (1/N)·(λ/2π)·Δφ   (4)

δl(γ) = (1/N)·(λ/2π)·γ   (5)

and we get an interesting result:

a) For the heterodyne interferometer with single optical path difference, i.e. N = 1:

Δl_m = (λ/2π)·ΔΦ,  then  δl(γ) = (λ/2π)·γ   (6)

b) For the heterodyne interferometer with double optical path difference, i.e. N = 2:

Δl_m = (1/2)·(λ/2π)·ΔΦ,  then  δl(γ) = (λ/4π)·γ   (7)

c) For the heterodyne interferometer with fourfold optical path difference, i.e. N = 4:

Δl_m = (1/4)·(λ/2π)·ΔΦ,  then  δl(γ) = (λ/8π)·γ   (8)

d) For the heterodyne interferometer with N-fold optical path difference we have

Δl_m = (1/N)·(λ/2π)·ΔΦ,  then  δl(γ) = λ·γ/(2Nπ)
which is just Eq. (5). This means that, for the same nonlinearity γ, the measured periodic displacement error differs for different interferometer arrangements. Like the subdivision of the wavelength, the nonlinearity of the heterodyne interferometer, in terms of length, is also subdivided by the optical-path multiple. This interesting and significant finding gives us a simple and efficient approach to decrease the nonlinearity of heterodyne interferometers. Figure 3 shows the relationship between the measured phase error and the corresponding length error for different interferometer types, assuming that the nonlinearity γ is a first harmonic error of the same magnitude for each type of interferometer. It shows that the measured length error due to nonlinearity is reduced as N increases. If N is large enough, the measured length error tends approximately to zero.
Fig. 3. The nonlinearity in measured displacement length. a. Single optical path difference: ΔΦ/2π = Δl/λ; b. Double optical path difference: ΔΦ/2π = 2Δl/λ; c. Fourfold optical path difference: ΔΦ/2π = 4Δl/λ; x. N-fold optical path difference: ΔΦ/2π = N·Δl/λ. γ: the nonlinearity of the phase measurement error
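The subdivision expressed by Eq. (5) can be checked with a few lines of Python; the wavelength and the peak nonlinearity gamma_max below are illustrative assumptions:

```python
import numpy as np

# Numerical sketch of Eq. (5): for a fixed peak phase nonlinearity gamma_max,
# the peak displacement error delta_l_max is subdivided by the optical-path
# multiple N. Wavelength and gamma_max are assumed values for illustration.

wavelength = 633e-9   # assumed He-Ne laser wavelength [m]
gamma_max = 0.06      # assumed peak phase nonlinearity [rad]

for N in (1, 2, 4, 8):
    delta_l_max = (1.0 / N) * (wavelength / (2 * np.pi)) * gamma_max
    print(f"N={N}: peak length error = {delta_l_max * 1e9:.2f} nm")
```

Doubling the optical-path multiple halves the peak length error, in line with cases a) to d) above.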
As an example, Figure 4 shows nonlinearities measured with commercial heterodyne interferometers. A double-path type (Fig. 5a) and a fourfold-path type (Fig. 5b) interferometer were set up with the same laser source and detector. The nonlinearity in the double-path interferometer is about twice that of the fourfold-path type, and its period in displacement is twice as long (λ/2 versus λ/4). This also indicates why the measured nonlinearity in fourfold-path instruments, e.g. the differential plane mirror interferometer (DPMI), is mostly smaller than in double-path instruments. Eq. (5) is generally valid for all nonlinearity errors, whether first-order or higher-order. Even the nonlinearity caused by polarizing leakage of the PBS, where the unwanted optical component is polarized orthogonally to the main component and is usually filtered out by multiple passes through the PBS cube (and therefore mostly neglected), is also decreased by subdivision through multiple optical paths if it remains.
Fig. 4. The nonlinearity measured with different interferometers (double-path and fourfold-path)
Fig. 5. The heterodyne interferometers, principle. (a) Interferometer with double optical path difference; (b) interferometer with fourfold optical path difference
Nevertheless, it should be pointed out that the nonlinearity does not decrease unconditionally with the optical-path multiple, because multiplying the path can bring more optical imperfections into the interferometer and therefore cause extra frequency mixing in the interferometer arms, and hence nonlinearity. To reduce the nonlinearity by multiplying the optical path, the optical components therefore have to be made as perfect as possible. However, experiments confirmed that the measured nonlinearity in fourfold-path instruments, even commercial ones, is mostly smaller than in the commonly used double-path Michelson interferometers. This is because once the incident beams enter the polarizing beam splitter (PBS), the frequency-mixing ratio is essentially determined by the laser source and the splitting of the PBS; optics further along the path mostly have only a subordinate influence. For example, errors of the retardation plates after the PBS cannot lead to nonlinearity but only cause an additional constant phase shift, provided the incident beams and the PBS are perfect. If they are not perfect, waveplate errors can affect the nonlinearity γ (whether it increases or decreases depends on the combination of all parameters), but usually only by a factor of the order of 10^-2 if the waveplate errors are held within a range of several degrees. Furthermore, the experiments also confirmed that the losses of optical power caused by multiple passes through the PBS cube normally do not affect the nonlinearity, unless the measured displacement is very large.
4 Conclusion

A significant finding is that, like the subdivision of the wavelength, the nonlinearity of heterodyne interferometers, in terms of length, is also subdivided by the optical-path multiple. This gives a simple approach to decreasing the nonlinearity of heterodyne interferometers while the resolution of the interferometer increases. Theoretical and experimental investigations showed that decreasing the nonlinearity of a heterodyne interferometer through the subdivision of the wavelength is especially efficient in industrial precision nanomeasurement.
5 Acknowledgments

The author thanks the Startup Fund of the University of Shanghai for Science and Technology for supporting this work.
6 References
1. Quenelle, R.C., Nonlinearity in interferometric measurements, Hewlett-Packard J. 34, 1983
2. Bobroff, N., Residual errors in laser interferometry from air turbulence and nonlinearity, Appl. Opt. 26, 2676-2682, 1987
3. Hou, W., Wilkening, G., Investigation and compensation of the nonlinearity of heterodyne interferometers, Precis. Eng. 14(2), 91-98, 1992
4. Hou, W., Zhao, X., The drift of the nonlinearity of heterodyne interferometers, Precis. Eng. 16(1), 25-34, 1994
5. Stone, J.A., Howard, L.P., A simple technique for observing periodic nonlinearities in Michelson interferometers, Precis. Eng. 22(4), 220-232, 1998
6. Lawall, J., Kessler, E., Michelson interferometry with 10 pm accuracy, Rev. Sci. Instrum. 71(7), 2669-2676, 2000
7. Schmitz, T.L., Beckwith, J.F., Acousto-optic displacement-measuring interferometer: a new heterodyne interferometer with Angstrom-level periodic error, J. Mod. Opt. 49(13), 2105-2114, 2002
8. Wu, C.M., Periodic nonlinearity resulting from ghost reflections in heterodyne interferometry, Opt. Commun. 215(1-3), 17-23, 2003
9. Peggs, G.N., Yacoot, A., A review of recent work in sub-nanometre displacement measurement using optical and X-ray interferometry, Philos. Trans. R. Soc. A 360(1794), 953-968, 2002
SESSION 3
Wide Scale 4D Optical Metrology
Chairs:
Gerd Häusler, Erlangen (Germany)
Jim Trolinger, Irvine (USA)
Invited Paper
Progress in SAR Interferometry
Otmar Loffeld
Center for Sensorsystems (ZESS)
Paul-Bonatz-Str. 9-11, 57068 Siegen, Germany
1 A Brief Review of SAR Interferometry

The first Earth observation satellite to provide Synthetic Aperture Radar (SAR) data suitable for interferometry was SEASAT. Launched in 1978, it was operated for 100 days, with SAR data collection limited to a period of 70 days. The interferometric usefulness of the SEASAT SAR data for topographic mapping was demonstrated 8 years later by Zebker and Goldstein [12], and for detection and mapping of small elevation changes by Gabriel [13]. Meanwhile SAR systems have reached a maturity that has transformed them from experimental non-optical imaging systems into true and coherent measuring systems providing valuable, highly resolved large scale remote sensing information for numerous environmental sciences like geodesy, geophysics, climatology, oceanology etc. The phase information, becoming available due to that coherency, enables the coherent integration of partial images of the same region as well as interferometric techniques. All this is naturally based on the prerequisite of sufficient phase stability of the SAR sensor hardware, and on the phase preservation of the SAR processing algorithm converting the SAR raw data into a focussed two-dimensional complex SAR image. Thus we can imagine each image pixel being characterized by its absolute value, describing the radar brightness or, after calibration, the radar backscattering coefficient σ0, and by its phase, which is directly proportional to the distance between the pixel's corresponding object point and the SAR sensor at that specific time when the pixel is located perpendicularly to the flight track (zero Doppler condition). Nevertheless any image pixel of a focussed image represents a 'facet', meaning a resolution cell (with dimensions of the SAR sensor's spatial resolution, for spaceborne sensors usually in the meter range) rather than exactly one surface point. Since any of these facets actually comprises an
infinite collection of individual and independent backscatterers, neither the pixel's amplitude nor its phase is deterministic; both must be considered as random variables. For homogeneous scenes the pixel's amplitude will tend to a Rayleigh distribution, while the pixel's phase shows a uniform distribution [1]. With decreasing resolution cell size or in urban areas, however, the assumption of homogeneity does not hold anymore. Some recent compilations concerning the statistics of amplitude and phase may be found in [2, 3]. Despite all statistical issues, the range information of any pixel phase can be exploited by an interferometric approach. Two complex SAR images of the same scene acquired from different paths can be interferometrically superimposed, meaning that after coregistering, one SAR image can be multiplied with the complex conjugate of the other image on a pixel-by-pixel basis. Thus any interferogram pixel i(n,m) is obtained as:

i(n,m) = s1*(n,m) · s2(n,m) = a1(n,m) · a2(n,m) · exp[j(φ2(n,m) − φ1(n,m))] = a_i(n,m) · exp[j·φ_i(n,m)]   (1)
n, m denote column and row index, a_i(n,m) is the interferometric amplitude and φ_i(n,m) the interferometric phase. While any individual pixel phase in a SAR image will be hopelessly noisy (uniformly distributed in the interval (−π, π]), the interferometric phase, calculated as the phase difference of two corresponding pixels, will assume a meaningful value. This means that the two corresponding SAR image pixels, from which the interferogram pixel has been calculated by equation (1), will be noisy, but they will be noisy in the same way. The quantity measuring the similarity is the pixel coherence γ(n,m), defined as the normalized cross-correlation coefficient:

γ(n,m) = E{s1*(n,m) · s2(n,m)} / √( E{|s1(n,m)|²} · E{|s2(n,m)|²} )   (2)
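In practice the expectations in equation (2) are replaced by spatial averages over a small estimation window. A minimal sketch of such a local coherence estimate on synthetic data follows; the window size, speckle model and noise level are assumptions for illustration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Local coherence estimate: the expectations E{.} of Eq. (2) are replaced by
# sums over a small sliding window (here 5x5). s1, s2 stand in for two
# coregistered complex SLC images; the data here is synthetic.

rng = np.random.default_rng(2)
shape = (64, 64)
s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s2 = 0.9 * s1 + 0.1 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))

def coherence(s1, s2, w=5):
    num = sliding_window_view(np.conj(s1) * s2, (w, w)).sum(axis=(-2, -1))
    p1 = sliding_window_view(np.abs(s1) ** 2, (w, w)).sum(axis=(-2, -1))
    p2 = sliding_window_view(np.abs(s2) ** 2, (w, w)).sum(axis=(-2, -1))
    return np.abs(num) / np.sqrt(p1 * p2)   # |complex coherence| in [0, 1]

rho = coherence(s1, s2)
```

Since s2 here is a slightly perturbed copy of s1, the estimated coherence is close to one almost everywhere.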
Conceptually the coherence can be defined as a complex value, while the classical coherence known from laser interferometry [5] might be interpreted as the absolute value of the complex coherence expressed by equation (2). A lot of scientific work concerning interferometric phase and amplitude statistics as well as optimal filtering to reduce interferometric phase noise has been done and published by various authors. An example is [7], and a compilation of several results can be found in [2, 3]. It can be shown that the interferometric phase noise variance is some nonlinear function of the classical coherence, where the total coherence
ρ(n,m) = |γ(n,m)| can be factorized into three individual, physically meaningful terms:

ρ(n,m) = ρ_temporal(n,m) · ρ_spatial(n,m) · ρ_thermal(n,m)   (3)
The spatial coherence describes the correlation or decorrelation effects caused by the different aspect angles of the two SAR sensors imaging the same scene. These effects generally grow with increasing baseline length, so for each interferometric mission, depending on incidence angle and employed wavelength, there is an upper baseline length limit which cannot be exceeded without completely decorrelating the individual SAR images. The thermal coherence describes the decorrelation caused by thermal sensor noise. Finally, the temporal coherence reflects the influence of the time delay between the acquisitions of the individual SAR scenes and the change in scene content introduced by that delay (vegetation growth, environmental or man-made changes, changing weather conditions, etc.).

1.1 Repeat Pass or Single Pass Interferometry?
It is the temporal decorrelation which basically introduced the two basic options of SAR interferometry, namely
- two or repeat pass interferometry
- single or one pass interferometry
While the first option images the same scene with the same instrument from different paths at different times, being prone to any temporal variation of the scene between the acquisition times, the second option uses two different instruments on the same platform, separated by a rigid baseline (a 60 m mast in the Shuttle Radar Topography Mission, 1-2 m in airborne experiments). Here the two scenes are essentially acquired at the same time, minimizing the temporal decorrelation. As an example of spaceborne repeat pass interferometry, the ERS-1 interferometric setups and the ERS-1/ERS-2 tandem mission might be regarded. The first spaceborne single pass interferometer mission was the Shuttle Radar Topography Mission (SRTM), which will be considered subsequently. While in repeat pass interferometry no mechanical coupling between the two sensors is present, this coupling, existing in the second option, introduces dynamical motion errors to the baseline vector in a way that the whole interferometric constellation becomes time varying. While repeat pass interferometry seems feasible for spaceborne sensors showing very stable orbits that require almost no motion compensation, repeat pass interferometry is much more
demanding for airborne sensors requiring more sophisticated motion compensation.
1.2 Across Track or Along Track Interferometry?
The spatial separation of the SAR sensors might be in the flight direction, giving rise to along track interferometry, or perpendicular to the motion vector, described by across track interferometry. Along track interferometry allows for the determination of motion, such as oceanic surface flows [9-11, 13-15], arctic glacier flow and traffic monitoring, while across track interferometry in general enables the generation of high accuracy Digital Elevation Models (DEM). More recently, substantial developments have been achieved in the context of centimetric/millimetric accuracy ground deformation monitoring via the Multiple-Pass Differential SAR Interferometry technique [16-18]. A topical review of Synthetic Aperture Radar Interferometry, with emphasis on processing aspects, can be found in [8]. In the following we will restrict our interest to across track SAR systems.
2 A Simple Across Track SAR Interferometer

Fig. 1. Principle of across track interferometry: the two sensors S1 (SAR 1) and S2 (SAR 2), separated by the baseline B with orientation angle ξ, image the same terrain surface point at slant ranges r and r + dr under the look angle θ; H is the height of S1 above the point P at height z
The principle of an across track interferometer is shown in Fig. 1. Any pixel of the coregistered scenes corresponds to one surface point which, before the coregistration, would be imaged in one scene at slant range r, showing a phase value proportional to r, while in the second scene the surface point would be imaged at slant range r + dr, showing a phase value proportional to that slightly altered range value.
With the cosine law for arbitrary triangles we then have:

(r + dr)² = r² + B² − 2·B·r·cos γ,  where γ = 90° + ξ − θ   (4)

Hence we may write:

(r + dr)² = r² + B² − 2·B·r·cos(90° + ξ − θ) = r² + B² − 2·B·r·sin(θ − ξ)   (5)

and for the sine term we obtain:

sin(θ − ξ) = (r² + B² − (r + dr)²) / (2·B·r)

For the right-angled triangle consisting of S1, the local surface point and the point P at height z we have:

z = H − r·[√(1 − a²)·cos ξ − a·sin ξ],  where a(r, B, dr) = sin(θ − ξ) = (r² + B² − (r + dr)²) / (2·B·r)   (6)

indicating that we can determine the height of any imaged surface point from its slant range r, the baseline length B and the slant range difference dr. Further parameters we need to know are the baseline orientation ξ and the height H of the master satellite S1. Obviously the slant range difference dr is proportional to the interferometric phase difference dφ = φ_i:

φ_i = (4π/λ)·dr,  i.e.  dr = (λ/(4π))·φ_i   (7)
so that we are finally able to determine the height from the interferometric phase, provided that we know all the other parameters, like baseline length and orientation. It is worth noting that here the interferometric phase is essentially the unambiguous phase which unfortunately is not directly observable from the interferogram.
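Equations (6) and (7) can be combined into a small phase-to-height routine for the airborne flat-geometry case. The function below is a hypothetical sketch under the stated geometry, not the author's code; all parameter names are assumptions:

```python
import math

# Flat-geometry phase-to-height conversion combining Eqs. (6) and (7).
# phi_i: unwrapped interferometric phase [rad], r: slant range [m],
# B: baseline length [m], xi_deg: baseline orientation [deg],
# H: master sensor height [m], wavelength [m]. Illustrative sketch only.

def insar_height(phi_i, r, B, xi_deg, H, wavelength):
    dr = wavelength * phi_i / (4.0 * math.pi)           # Eq. (7)
    a = (r**2 + B**2 - (r + dr)**2) / (2.0 * B * r)     # a = sin(theta - xi)
    xi = math.radians(xi_deg)
    # z = H - r*cos(theta), expanded with theta = xi + arcsin(a): Eq. (6)
    return H - r * (math.sqrt(1.0 - a**2) * math.cos(xi) - a * math.sin(xi))
```

Running the geometry forward (choosing θ, computing dr and φ_i from it) and then inverting with this routine reproduces z = H − r·cos θ, which is a quick consistency check of the sign conventions.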
3 Interferometric Processing

3.1 Raw Data Focussing
Interferometric processing starts with two complex raw data sets collected by the individual, spatially separated SAR sensors (Fig. 2).
Fig. 2. Two raw data sets (real part)
Due to the large number of individual scatterers in one antenna footprint, SAR raw data always resembles noise. Only very bright and isolated point-like scatterers yield some structure in the noise; in fact they yield the SAR sensor's complex-valued point spread function. After the focussing process, which basically consists of a space-variant, phase-preserving two-dimensional matched filter operation, both raw data sets are converted into focussed Single Look Complex (SLC) images, the amplitude of any pixel reflecting the radar brightness and the phase being proportional to the distance (Fig. 3).

Fig. 3. Focussed Single Look Complex SAR images (absolute values); Titisee (Schwarzwald), Dornier raw data; the right image is rotated and shifted
For non-parallel flight tracks, the images will not necessarily be aligned and in general can be rotated against each other; see Fig. 3, right image.

3.2 Coregistration and Interferogram Formation
The process of coregistering the images eliminates relative shift and rotation of the two SAR images against each other. In a first step a large number of imagelets in one image is selected, and then correspondences
in the other image are searched. This can be done by correlation or by information-theoretic similarity measures (e.g. mutual information). As part of that procedure the relative shifts in two directions between matching imagelets are determined, and from that large number of relative shift vectors a similarity transformation from one image to the other is set up. The result of that transform is a pair of coregistered images that match pixelwise (Fig. 4). Usually coregistering the images implies subsampling and interpolation techniques applied to the complex data.
Fig. 4. Coregistered Images
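The imagelet-matching step can be sketched with an FFT-based cross-correlation; the patch geometry, the synthetic data and the known shift below are illustrative assumptions, not values from the mission data:

```python
import numpy as np

# Sketch of imagelet matching for coregistration: estimate the integer shift
# between two patches as the peak of their circular cross-correlation,
# computed via FFT. Two overlapping crops of one synthetic scene are used.

rng = np.random.default_rng(1)
big = rng.normal(size=(128, 128))        # synthetic scene amplitude
a = big[10:74, 12:76]                    # 64x64 imagelet from the master image
b = big[14:78, 17:81]                    # same area in the slave, shifted by (4, 5)

xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
# correlation peaks wrap around; map indices to signed shifts
dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
```

Repeating this for many imagelets yields the field of shift vectors from which the similarity transformation is estimated; subpixel refinement and interpolation (as mentioned above) are omitted here.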
After coregistering the images, the complex interferogram is formed by pixelwise multiplying one image with the complex conjugate of the other image (cf. equation (1)). The result is shown in Fig. 5. From the complex interferogram the phase image (Fig. 6) is readily found by calculating the arctangent of the ratio of imaginary and real part of any pixel.
Fig. 5. Complex Interferogram
Fig. 6. Phase Image (modulo 2π)
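Interferogram formation and wrapped-phase extraction following equation (1) can be sketched as follows; the speckle model and the synthetic phase field are assumptions for illustration:

```python
import numpy as np

# Sketch of Eq. (1): pixelwise s1* . s2 on coregistered SLC images, followed
# by wrapped phase extraction. s1, s2 are synthetic: identical speckle with a
# smooth phase field imposed, so the true phase is known for checking.

rng = np.random.default_rng(0)
shape = (256, 256)
true_phase = np.cumsum(rng.normal(0, 0.05, shape), axis=1)  # smooth synthetic phase

s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)   # complex speckle
s2 = s1 * np.exp(1j * true_phase)                           # same speckle, phase-shifted

interferogram = np.conj(s1) * s2            # pixelwise s1* . s2, cf. Eq. (1)
phase = np.angle(interferogram)             # arctan(Im/Re), wrapped to (-pi, pi]
```

Because both pixels carry the same speckle realization, the individual noisy phases cancel in the product and the interferometric phase equals the imposed phase field, wrapped into (−π, π], which is exactly the point made after equation (1).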
Due to the arctan operation all pixel phases are wrapped into the ambiguity interval (−π, π], which is denoted as the wrap-around effect. The phase jumps, called fringes, now become clearly visible, introducing the term fringe image. The 'flat earth' phase contribution, introducing a phase
ramp of constant slope and being visible as phase fringes almost parallel to the azimuth flight track, should be noted in Fig. 6.

3.3 Interferogram Filtering, Phase Unwrapping, Phase to Height Conversion
For the phase-to-height conversion the phase image must be unwrapped, which, due to the implicit phase noise stemming from lacking coherence, is not a trivial task. The result is shown in Fig. 7. Additionally, reference points (e.g. corner cubes or landmarks with known heights) are utilized to determine the absolute phase offset of the whole scene. From the unwrapped phase the geometric height of any pixel can be determined by means of equation (6) for the airborne case (Fig. 8). In spaceborne SAR interferometry height determination is usually done in WGS-84 coordinates using vectorial calculus.
Fig. 7. Unwrapped Phase Image
Fig. 8. Slant Range Height Image
Due to phase noise the raw height image usually appears quite noisy, containing spikes and arbitrarily wrong individual pixel heights. These effects raise the need for filtering the phase images or employing noise-eliminating phase unwrapping approaches, to be addressed later.

3.4 Geometrical Transformations - Orthoprojection
The height image is still in slant range azimuth coordinates. Hence it must be converted to ground range azimuth coordinates. In the airborne case the orthoprojection process essentially makes use of Pythagoras' law (Fig. 9); in the spaceborne case the mapping is performed onto some ellipsoidal reference plane. Finally the orthoprojected height image can be displayed in pseudo-3D representation (Fig. 10) or as a red-green anaglyph. Provided that absolute
GPS-track references are available, the image can be mapped in a geocoding process to any geodetic coordinate frame (e.g. Gauss-Krüger).
Fig. 9. Orthoprojected Height Image
Fig. 10. Orthoprojected Image
4 Crucial Issue - Phase Unwrapping

Phase unwrapping is an extremely critical operational issue. A very basic approach to phase unwrapping consists of first calculating finite phase differences (phase slopes), removing the 2π phase jumps by a modulo operation and integrating the finite phase differences again [19]. It has been shown, for example in [8, 24], that the complexity of phase unwrapping essentially depends on the degree of coherence and on the phase slope. The coherence is a quality measure depending on mission design (geometric and temporal baseline, wavelength, mean incidence angle) and sensor design, and can only be improved by filtering out the uncorrelated noise stemming from disjoint parts of the power spectral densities of the complex SAR images (cf. [8]). This reduction of phase noise comes at the cost of decreasing the geometrical (slant range/azimuth) resolution. Further filtering (weighted spatial averaging) to reduce the noisiness of the phases can be applied to the complex interferogram (denoted as multi-looking), again improving phase resolution at the cost of slant range/azimuth resolution. The phase slope is usually reduced by phase image flattening procedures, which first extract the flat earth contribution or demodulate the phase image with some nominal or coarsely known height image. After that process the residual phase image is unwrapped, and after unwrapping the previously extracted phase offsets are superimposed again. Kalman filter based phase unwrapping interprets the in-phase and quadrature component of the complex interferogram as noisy nonlinear observations of the true unambiguous phase. This approach is followed and described in [20-24] and needs neither prefiltering nor phase slope
flattening. Fig. 11 shows an undisturbed, unambiguous fractal phase image. From that phase image a complex interferogram was formed with superimposed complex noise of 10 dB. From that noisy interferogram the interferometric phase image in Fig. 12 was generated by an arctan operation. While classical phase unwrappers would try to unwrap that phase image, the Kalman filter directly processes the complex interferogram values and estimates the unambiguous phase image from the complex interferogram (Fig. 13). Rewrapping that result (Fig. 14), it becomes obvious that the Kalman filter eliminated the noise almost completely without, however, smoothing away the tiny details of the phase image. It has furthermore been shown in [23] that the Kalman filter approach maintains the fractal dimension of the phase image.
Fig. 11. Unambiguous fractal phase image
Fig. 12. Noisy (wrapped around) phase image (coherence equivalent SNR=10 dB)
Fig. 13. Kalman Filter based Phase Unwrapping Result
Fig. 14. Rewrapped Phase Unwrapping Result
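The basic finite-difference unwrapper described at the beginning of this section can be sketched in one dimension (a minimal illustration of the difference/modulo/reintegrate scheme, not the Kalman filter approach; the synthetic phase ramp is an assumption):

```python
import numpy as np

# 1-D sketch of the basic unwrapper [cf. 19]: difference the wrapped phase,
# remove the 2*pi jumps by wrapping the differences back into (-pi, pi],
# then reintegrate. Works as long as true phase steps stay below pi.

def unwrap_1d(wrapped):
    d = np.diff(wrapped)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi    # modulo operation on slopes
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d_wrapped)))

true_phase = np.linspace(0, 20, 500) ** 1.2          # smooth synthetic phase ramp
wrapped = np.angle(np.exp(1j * true_phase))          # wrap into (-pi, pi]
recovered = unwrap_1d(wrapped)                       # recovers true_phase
```

This noise-free example hides exactly the difficulty stressed above: with real, noisy interferograms a single corrupted phase difference propagates a 2π error through the whole integration, which is why coherence-driven and Kalman filter based approaches are needed in practice.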
5 Recent and Future Interferometric Missions

While most of the airborne interferometric missions have been single or one pass interferometric missions, all of the satellite based missions in the past have been two or repeat pass missions. It must be emphasized that the
real breakthrough in SAR interferometry was achieved through the European ERS-1 satellite and its follow-on, ERS-2. The satellite orbit was determined with dm and cm accuracy, the baseline control was very good, and many orbit pairs met the baseline conditions for repeat-pass interferometry. The ERS-2 SAR is identical to that of ERS-1. The satellite was launched in 1995 and has the same orbit parameters as ERS-1. Most important from the SAR interferometry point of view was the TANDEM mission [25], during which ERS-1 and ERS-2 were operated in parallel. ERS-2 followed ERS-1 on the same orbit at a 35 min delay. Together with the Earth's rotation this orbit scenario assured that ERS-1 and ERS-2 imaged the same areas at the same look angle at a 1 day time lag. The orbits were deliberately tuned slightly out of phase such that a baseline of some 100 m allowed for cross-track interferometry. This virtual baseline between ERS-1 and ERS-2 could be kept very stable, because both satellites were affected by similar disturbing forces. The first of several TANDEM missions was executed in May 1996. Despite all the excellent scientific results obtained with ERS data, it should be kept in mind that the instrument had been designed for oceanographic imaging and, hence, used a very steep incidence angle of 23°. Consequently, terrain slopes steeper than about 20° could not be mapped.

5.1 Shuttle Radar Topography Mission
Based on the extremely successful Shuttle Imaging Radar SIR-C/X-SAR missions (see, e.g., the special SIR-C/X-SAR issue of IEEE Transactions on Geoscience and Remote Sensing 33 (4), 1995), the Shuttle Radar Topography Mission SRTM [27] was launched in February 2000, acquiring topographic mapping of the entire land mass within ±60° latitude during an 11-day flight. This first spaceborne single pass/dual-antenna across-track interferometer reused the existing SIR-C/X-SAR hardware, augmented by a second set of receive antennas for the C- and X-band SARs mounted at the tip of a 60 m boom which extended from the cargo bay of the shuttle (Fig. 15). The mission was intended to combine the stability of orbital SAR platforms with the advantages of single pass interferometric imaging, thus eliminating all phase noise influences from temporal decorrelation. The German X-band data was intended to provide DEMs of about 6 m height accuracy and 25 m posting accuracy [28], imaging about 70% of the area covered by the C-band interferometer. Despite the tremendous success the mission finally turned into, one of the most serious problems
during mission operation turned out to be the mast oscillations due to failing mast dampers.
Fig. 15. Shuttle Radar Topography Mission (DLR)
Fig. 16 shows an oscillation amplitude of several cm in each direction. As a drawback of those attitude instabilities, baseline length and baseline orientation angle turned out to be massively time varying, introducing height errors of up to 70 m without compensation. In order to cope with these effects, the phase-to-height conversion had to use time-varying baseline parameters. A lot of work has been performed to estimate these parameters over ocean and then to propagate them over land by dynamic models employing Kalman filtering techniques [29-34]. Fig. 17 shows a result of estimating the baseline length over an ocean track and then propagating the baseline over land with a Kalman filter.

Fig. 16. Origin of the outboard coordinate frame in inboard coordinates (x and z components, in m)
A more detailed description can be found in [34]. A nice compilation of recent results can be found in [35].
Fig. 17. Baseline length estimates over flight time: baseline estimates determined over an ocean calibration track, baseline estimates predicted over land, and nominal baseline estimates from the mission data base
5.2 Interferometric Cartwheel
While the main problem in the Shuttle Radar Topography Mission turned out to be the mechanical coupling between the outboard antenna and the shuttle, giving rise to oscillations, a new generation of spaceborne interferometers aims at achieving highly stable interferometric constellations without any mechanical coupling. The basic idea is to use the implicit short and medium term stability of a group or cluster of satellites orbiting on identical orbits with slightly detuned orbit parameters. One of the most prominent missions developed in this context is CNES' (Centre National d'Etudes Spatiales) Interferometric Cartwheel [36-38]. Another well-known example is DLR's (German Aerospace Research Centre) Interferometric Pendulum [39]. By employing a set of passive, receive-only satellites, cost reductions may be realized, the transmit antenna and high power electronics with the corresponding power supply being among the main cost drivers in SAR satellites. Using only passive receivers, such missions need to employ active SAR satellites, such as Envisat or TerraSAR-X, as illuminators. The Interferometric Cartwheel consists of a group of N passive (receive-only) satellites (e.g. N=3). All satellites are copositioned in the same orbital plane; they move on identical (same semi-major axes) but slightly eccentric orbits (eccentricity << 0.1). The points of perigee of the individual satellites are equiangularly distributed over 2π and the times of perigee are shifted against each other by TS = TO/N, where TO is the time for one orbit and N the number of cartwheel satellites. For N=3 the angular
shift of the points of perigee is 2π/3 and TS = TO/3. Fig. 18 shows the satellite trajectories in the orbital plane for an exaggerated eccentricity of 0.6. The Earth is assumed in the center of the circular orbit with a radius equal to the ellipses' semi-major axis. For small eccentricities (cf. Fig. 19) the satellites form a fixed constellation moving with constant velocity on a small cartwheel ellipse around some virtual center, while that center point of the cartwheel ellipse also moves on a circle with constant velocity.
Fig. 18. Interferometric Cartwheel with exaggerated eccentricity (e=0.6)
Fig. 19. Interferometric Cartwheel with small eccentricity (e=0.18)
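The constellation described above can be reproduced numerically. The sketch below propagates N satellites on identical, slightly eccentric Keplerian orbits with perigee arguments spaced 2π/N and perigee times shifted by TO/N; all orbital values are illustrative assumptions:

```python
import numpy as np

# Sketch of the cartwheel geometry: N satellites, same semi-major axis a and
# eccentricity e, perigee arguments spaced 2*pi/N, perigee times shifted by
# T_O/N. Two-body Kepler motion; all values below are illustrative.

def kepler_E(M, e, iters=20):
    E = M.copy()
    for _ in range(iters):                    # Newton iteration: E - e*sinE = M
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return E

N, e, a = 3, 0.01, 7000.0                     # satellites, eccentricity, a [km]
t = 0.3                                       # time as fraction of one orbit
positions = []
for k in range(N):
    w = 2 * np.pi * k / N                     # argument of perigee
    M = np.array([2 * np.pi * (t - k / N)])   # mean anomaly with shifted perigee time
    E = kepler_E(M, e)
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1 - e * np.cos(E))
    positions.append((r[0] * np.cos(nu[0] + w), r[0] * np.sin(nu[0] + w)))
```

At any instant the three satellites cluster within a region of order a·e (tens of km here) around a common virtual center, rather than spreading over the full orbit, which is the cartwheel behaviour sketched in Fig. 19.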
Wide Scale 4D Optical Metrology
The virtual lines from the individual satellites to the center of the cartwheel ellipse (cf. Fig. 19) form the spokes of a rotating wheel, the so-called cartwheel. The details are developed and analyzed in [40]. The Interferometric Cartwheel as well as the Pendulum are examples of spaceborne satellite clusters with slowly time-varying but orbit-mechanically stable and thus predictable baseline parameters, where the cartwheel most of the time enables simultaneous along-track and across-track interferometry. The across-track baseline component introduces height sensitivity into the interferometric phase, while the along-track component introduces surface motion sensitivity; both contributions add up. The individual contributions can be separated by a weighted combination of the individual phase contributions of the cluster satellites. Besides the fact that the interferometric processing will in general be more complex owing to the time-varying nature of the interferometric constellations, the raw data raise a further processing issue: with spatially separated transmitters and receivers mounted on different platforms, the primary imaging constellation is no longer monostatic but bistatic, and the focussing of bistatic SAR raw data is not yet scientifically well established. First promising processing approaches have been published [41-50], yet the whole field is under investigation.
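The weighted phase combination mentioned above can be sketched as a small linear inversion: each interferometric pair contributes a phase that is a weighted sum of a height term and a motion term, and two pairs with different baseline geometries suffice to solve for both. The sensitivity coefficients below are purely illustrative and not taken from any mission design:

```python
def separate_phase_contributions(phases, height_sens, motion_sens):
    """Solve the 2x2 system  phi_i = a_i*h + b_i*v  (i = 1, 2)
    for topographic height h and surface motion v, given two
    interferometric phases with different baseline geometries."""
    p1, p2 = phases
    a1, a2 = height_sens   # rad per metre of height (across-track)
    b1, b2 = motion_sens   # rad per (m/s) of motion (along-track)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("baseline geometries are degenerate")
    h = (p1 * b2 - p2 * b1) / det
    v = (a1 * p2 - a2 * p1) / det
    return h, v

# Purely illustrative sensitivities, not from any mission design:
h, v = separate_phase_contributions(
    phases=(0.9, 0.8), height_sens=(0.010, 0.006), motion_sens=(2.0, 2.5))
# -> h = 50.0 m, v = 0.2 m/s
```

In practice the combination is applied per pixel of the interferogram stack; with more than two pairs the same system would be solved in a least-squares sense.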
6 References
1. Madsen, S.N., 1986, Speckle Theory – Modelling, Analysis, and Applications Related to Synthetic Aperture Radar, Lic.Techn. (Ph.D.) thesis, Electromagnetics Institute, Technical University of Denmark, Copenhagen
2. Walessa, M., 2001, Bayesian Information Extraction from SAR Images, Ph.D. thesis, Center for Sensorsystems/University of Siegen, http://www.zess.uni-siegen.de/cms/diss/diss.php?diss=41
3. Quartulli, M.F., 2005, Hierarchical Bayesian Analysis of High Complexity Data for the Inversion of Metric InSAR in Urban Environments, Ph.D. thesis, Center for Sensorsystems/University of Siegen
4. Middleton, D., 1987, Introduction to Statistical Communication Theory, Peninsula Publishing, Los Altos, pp. 396-410
5. Goodman, J.W., 1975, Statistical Properties of Laser Speckle Patterns, in Laser Speckle and Related Phenomena (ed. J.C. Dainty), Springer, New York
6. Zebker, H.A., Villasenor, J., 1992, Decorrelation in Interferometric Radar Echoes, IEEE Trans. Geosci. Remote Sens., Vol. 30, No. 5, pp. 950-959, Sep. 1992
7. Lee, J.S., Hoppel, K., Mango, S.A., Miller, A.R., 1994, Intensity and phase statistics of multi-look polarimetric SAR imagery, IEEE Trans. Geosci. Remote Sens., Vol. 32, pp. 1017-1028
8. Bamler, R., Hartl, Ph., 1998, Synthetic aperture radar interferometry, Inverse Problems, 14, No. 5, IoP Electronic Journals, http://ej.iop.org/links/q13/kjAIU3U6W3G+H6s3YTD6hQ/ip84r1.pdf
9. Hirsch, O., 2002, Neue Verarbeitungsverfahren von Along-Track-Interferometrie-Daten eines Radars mit synthetischer Apertur, Ph.D. thesis, Center for Sensorsystems/University of Siegen, http://www.zess.uni-siegen.de/cms/diss/diss.php?diss=52
10. Bao, M., Brüning, C., Alpers, W., 1997, Simulation of ocean waves imaging by an along-track interferometric synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., 35, pp. 618-31
11. Carande, R.E., 1994, Estimating ocean coherence time using dual-baseline interferometric synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., 32, pp. 846-54
12. Zebker, H.A., Goldstein, R.M., 1986, Topographic mapping from interferometric synthetic aperture radar observations, J. Geophys. Res., 91, pp. 4993-9
13. Gabriel, A.K., Goldstein, R.M., Zebker, H.A., 1989, Mapping small elevation changes over large areas: differential radar interferometry, J. Geophys. Res., 94, pp. 9183-91
14. Goldstein, R.M., Barnett, T.P., Zebker, H.A., 1989, Remote sensing of ocean currents, Science, 246, pp. 1282-1285
15. Goldstein, R.M., Zebker, H.A., 1987, Interferometric radar measurement of ocean surface currents, Nature, 328, pp. 707-709
16. Wu, X., Thiel, K.-H., Hartl, P., 1997, Estimating ice changes by SAR interferometry, 3rd Int. Airborne Remote Sensing Conf. and Exhibition (Copenhagen), pp. 110-117
17. Massonnet, D., Holzer, T., Vadon, H., 1997, Land subsidence caused by the East Mesa geothermal field, California, observed using SAR interferometry, Geophys. Res. Lett., 24, pp. 901-4
18. Massonnet, D., Rossi, M., Carmona, C., Adragna, F., Peltzer, G., Feigl, K., Rabaute, T., 1993, The displacement field of the Landers earthquake mapped by radar interferometry, Nature, 364, pp. 138-42
19. Goldstein, R.M., Zebker, H.A., Werner, C.L., 1988, Satellite Radar Interferometry: Two-dimensional Phase Unwrapping, Radio Science, Vol. 23, No. 4, pp. 713-720
20. Arndt, Ch., Loffeld, O., 1997, Optimal Weighting of Phase Data with Varying Signal to Noise Ratio, SPIE Conference on Sensors and Sensor Systems, European Symposium on Lasers, Optics, and Vision for Productivity and Manufacturing, 16-20 June 1997, Munich, Germany
21. Krämer, R., Loffeld, O., 1996, A Novel Procedure for Cutline Detection, International Journal of Electronics and Communications (AEÜ), Vol. 50, No. 2, Hirzel Verlag, Stuttgart, pp. 112-116, March 1996, ISSN 0001-1096
22. Krämer, R., Loffeld, O., 1996, Phase Unwrapping for SAR Interferometry, Proc. EUSAR'96, pp. 165-169, Königswinter, March 1996
23. Krämer, R., Loffeld, O., 1997, New results in calculating the unambiguous phase for SAR interferometry, Sensor, Sensor Systems, and Sensor Data Processing, Munich, June 16-20, 1997, Proc. of SPIE Vol. 3100, pp. 166-174, ISBN 0-8194-2520-6
24. Loffeld, O., Arndt, Ch., Hein, A., 1996, Estimating the derivative of modulo-mapped phases, Fringe 96, ETH Zürich, 1-3 October 1996, http://www.geo.unizh.ch/rsl/fringe96/papers/loffeld-et-al/
25. Duchossois, G., Martin, P., 1995, ERS-1 and ERS-2 Tandem Operations, ESA Bull., 83, pp. 54-60
26. Stebler, O., Pasquali, P., Small, D., Holecz, F., Nüesch, D., 1996, Analysis of ERS-SAR tandem time-series using coherence and backscattering coefficient, FRINGE 96 ESA Workshop on Applications of ERS SAR Interferometry (Zurich)
27. Jordan, R.L., Caro, E.R., Kim, Y., Kobrick, M., Shen, Y., Stuhr, F.V., Werner, M.U., 1996, Shuttle radar topography mapper (SRTM), Microwave Sensing and Synthetic Aperture Radar (Proc. SPIE), ed. G. Franceschetti, C.J. Oliver, F.S. Rubertone, S. Tajbakhsh (Bellingham: SPIE), pp. 412-22
28. Bamler, R., Eineder, M., Breit, H., 1996, The X-SAR single-pass interferometer on SRTM: Expected performance and processing concept, EUSAR'96 (Königswinter), pp. 181-4
29. Zink, M., Geudtner, D., 1999, Calibration of the Interferometric X-SAR System on SRTM, Proceedings of IGARSS'99, Hamburg, Germany
30. Knedlik, S., Loffeld, O., Hein, A., Arndt, Ch., 1999, A Novel Approach to Accurate Baseline Estimation, Proceedings of IGARSS'99, Hamburg, Germany
31. Werner, M., Klein, K.B., Haeusler, M., 2000, Performance of the Shuttle Radar Topography Mission, X-Band Radar System, Proceedings of IGARSS'00, Honolulu, Hawaii, USA
32. Werner, M., 2000, Operating the X-band SAR Interferometer of the SRTM, Proceedings of IGARSS'00, Honolulu, Hawaii, USA
33. Knedlik, S., Loffeld, O., 2000, Analysis of different Maximum a Posteriori Estimation Approaches for Interferometric Parameter Calibration, Proceedings of IGARSS'00, Honolulu, Hawaii, USA
34. Knedlik, S., 2003, Auf Kalman-Filtern basierende Verfahren zur Erzielung genauer Höhenmodelle in der SAR-Interferometrie, Dissertation, Aachen: Shaker Verlag, ZESS Forschungsberichte (Band 20), ISBN 3-8322-1596-4, 298 pages
35. SRTM – Ausschnitte aus der Weltkarte des 21. Jahrhunderts, DLR Nachrichten, Magazin des Deutschen Zentrums für Luft- und Raumfahrt, ed. S. Wittig, May 2003/G 12625, http://www.dlr.de/dlr/Presse/dlr-nachrichten104/104__gesamt.pdf
36. Massonnet, D., 2001, Capabilities and Limitations of the Interferometric Cartwheel, IEEE Transactions on Geoscience and Remote Sensing, Vol. 39, No. 3, March 2001, pp. 506-520
37. Massonnet, D., 2001, The interferometric cartwheel: a constellation of passive satellites to produce radar images to be coherently combined, Int. J. Remote Sensing, 2001, No. 12, pp. 2413-2430
38. Ramongassie, S., Phalippou, L., Thouvenot, E., Massonnet, D., 2000, Preliminary design of the payload for the interferometric cartwheel, Proceedings of EUSAR 2000, pp. 29-32, Cologne
39. Krieger, G., Fiedler, H., Hounam, D., Moreira, A., 2003, Analysis of Systems Concepts for Bi- and Multistatic SAR Missions, Proc. IGARSS 2003, International Geoscience and Remote Sensing Symposium 2003, Toulouse, France
40. Loffeld, O., Nies, H., Gebhardt, U., 2001, Beschreibung des Interferometrischen Cartwheels und dessen Vorteile zu Standard-SAR-Verfahren, ZESS Technical Note TN-Cartwheel-ZS/01
41. D'Aria, D., Monti Guarnieri, A., Rocca, F., 2004, Precision Bistatic Processing with a Standard SAR Processor, Proc. EUSAR 2004, European Symposium on Synthetic Aperture Radar, Ulm
42. D'Aria, D., Monti Guarnieri, A., Rocca, F., 2004, Focusing Bistatic Synthetic Aperture Radar using Dip Move Out, IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 7, July 2004
43. Wong, F.H., Yeo, T.S., 2001, New Applications of Nonlinear Chirp Scaling in SAR Data Processing, IEEE Transactions on Geoscience and Remote Sensing, Vol. 39, No. 5, May 2001
44. Krieger, G., Fiedler, H., Hounam, D., Moreira, A., 2003, Analysis of Systems Concepts for Bi- and Multistatic SAR Missions, Proc. IGARSS 2003, International Geoscience and Remote Sensing Symposium 2003, Toulouse, France
45. Loffeld, O., Nies, H., Peters, V., Knedlik, S., 2003, Models and Useful Relations for Bistatic SAR Processing, Proc. IGARSS 2003, Toulouse, France
46. Ender, J.H.G., 2003, Signal Theoretical Aspects of Bistatic SAR, Proc. IGARSS 2003, Toulouse, France
47. Loffeld, O., Nies, H., Peters, V., Knedlik, S., Wiechert, W., 2004, Bistatic SAR – Some Reflections on Rocca's Smile, Proc. EUSAR 2004, Ulm, Germany
48. Walterscheid, I., Brenner, A.R., Ender, J.H.G., 2004, Geometry and System Aspects for a Bistatic Airborne SAR Experiment, Proc. EUSAR 2004, Ulm, Germany
49. Walterscheid, I., Brenner, A.R., Ender, J.H.G., 2004, New results on bistatic airborne radar, IEE Electronics Letters, Vol. 40, No. 19, September 2004
50. Loffeld, O., Nies, H., Peters, V., Knedlik, S., 2004, Models and Useful Relations for Bistatic SAR Processing, IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 10, October 2004
New calibration procedure for measuring shape on specular surfaces
Petra Aswendt, Sören Gärtner
Fraunhofer Institute IWU, Reichenhainer Str. 88, 09126 Chemnitz, Germany
Roland Höfling
ViALUX GmbH, Reichenhainer Str. 88, 09126 Chemnitz, Germany
1 Introduction
The use of optical methods for 3-D shape measurement has been gaining more and more acceptance in industry, and there is a growing number of suppliers of measuring equipment. In contrast to contacting techniques like coordinate measuring machines, the optical properties of the object surface have a significant influence on the measurements. In particular, the object appearance can vary from completely dull to highly glossy. However, there is no sharp limit between the two categories; the properties change gradually according to the spatial scatter distribution. For this reason, two major approaches are used in full-field shape measurement:
- projection of structured light onto an object and
- reflection of structured light by an object.
While the first one operates best if the object is dull, the latter is superior for highly reflective parts like glass or mirrors. The structured light is used to determine a certain point or direction in space. In this paper, sinusoidal intensity distributions are used as a precise and robust coding technique. A phase value is measured from a series of fringes. Combining two sets of patterns with orthogonal directions yields a unique phase coordinate (φ1, φ2) for each pixel of the recording camera. Fig. 1 shows the phase measuring principle schematically; in this case the phase coordinate is obtained from altogether six camera recordings.
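The six recordings suggest two orthogonal sets of three phase-shifted patterns. The paper does not name the specific algorithm, so the sketch below assumes the classic three-step scheme with 120° shifts:

```python
import math

def three_step_phase(i1, i2, i3):
    """Phase of a sinusoidal pattern from three recordings shifted by
    120 degrees:  I_k = A + B*cos(phi + 2*pi*k/3),  k = 0, 1, 2."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

def phase_coordinate(horizontal_imgs, vertical_imgs):
    """Two orthogonal fringe sets -> one phase coordinate (phi1, phi2)
    per pixel, from six camera recordings in total."""
    return (three_step_phase(*horizontal_imgs),
            three_step_phase(*vertical_imgs))
```

Applied per pixel over the whole camera image, this yields the (φ1, φ2) map used in the remainder of the paper; a real system would additionally unwrap the phases.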
Fig. 1. Measuring phase coordinates from sinusoidal patterns
The phase measurement is applied to both the projection and the reflection method. The difference lies in the measuring value that is obtained in one or the other case. As visible in Fig. 2, the projection is sensitive to a variation of the distance of the scattering surface and insensitive to any tilt of the normal vector n at this point. In contrast, the reflective setup in Fig. 3 has a strong response to changes of the surface normal but is simultaneously sensitive to the distance.
Fig. 2. Measuring sensitivity of the projection method for displacement (left) and tilt (right)
Fig. 3. Measuring sensitivity of the reflection method for displacement (left) and tilt (right)
It becomes obvious from Figures 2 and 3 that both methodologies can be used to determine the object shape from the measured phase coordinates. The evaluation, however, is quite different. A calibrated projection system provides the object shape unambiguously, while the processing of a reflective measurement signal is not straightforward due to the simultaneous sensitivity to displacement and tilt. On the other hand, the projection method is hard, if not impossible, to apply to glossy surfaces. Therefore, this work is focused on the reflective technique, and the combination of evaluation methods is discussed in detail in the following section.
3 Shape reconstruction from reflective measurements
There are commercial systems with reflective sensors available on the market [4]. Their measurement results prove that the reflection technique makes it possible to measure curvature variation, and correspondingly the waviness of shiny surfaces, with outstanding sensitivity. However, serious limitations occur in applications where the global 3-D shape is required in addition to the local values. As described in the introduction, the phase coordinate (φ1, φ2) depends upon both the point location and the surface normal n at this point. Looking along one ray h yields an indefinite number of geometric solutions: the same phase coordinate is obtained from different sets of object points (x, y, z) and normal vectors n. An example of two such sets is drawn in Fig. 4. This fact is an obstacle for the direct evaluation even in a pre-calibrated setup. Additional measurements or a priori information about the surface are required to calculate an unambiguous result.
Fig. 4. Ambiguous solutions for the shape reconstruction of glossy surfaces
Fig. 5. Exact shape reconstruction by TFT translation
Several approaches have been proposed in the past to solve the shape reconstruction problem for specular surfaces. Multiple recordings with a defined variation of the sensor geometry were introduced in [5,6] in order to acquire the additional information needed. A direct, exact solution can be found if the TFT panel is displaced by a certain distance D. The two phase coordinates (φ1, φ2)A and (φ1, φ2)B define a second ray k, and the object point S is given by the intersection between h and k (Fig. 5). The drawback of this procedure is the requirement of a precise mechanical translation of the flat panel relative to camera and object, which makes such a sensor difficult to handle. Another solution avoids any mechanical movement by adding a second camera looking at the same object area. The two cameras see different regions of the TFT flat panel, and this increase of information reduces the ambiguity of the location and normal vector of the surface point S [7,8]. The whole setup has to be calibrated for this purpose. A practical drawback is that the surface areas seen by the two cameras do not completely overlap, thus reducing the measuring field.
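The intersection of the rays h and k can be sketched as the standard closest-point computation between two 3-D lines; with measurement noise the rays are in general skew, so the midpoint of the shortest connecting segment is a natural estimate for S. This is an illustrative implementation, not the authors' code:

```python
def closest_point_between_rays(p1, d1, p2, d2):
    """Point S as the (pseudo-)intersection of observation ray h
    (p1 + t*d1) and second ray k (p2 + s*d2). With noise the rays
    are skew, so return the midpoint of the shortest segment."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [x - y for x, y in zip(p1, p2)]
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b
    if abs(den) < 1e-12:
        raise ValueError("rays are parallel")
    t = (b * e - c * d) / den
    s = (a * e - b * d) / den
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(u + v) / 2.0 for u, v in zip(q1, q2)]
```

For truly intersecting rays the midpoint coincides with the intersection point; the residual gap length could additionally serve as a consistency check on the calibration.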
The iterative procedure is shown in Fig. 6. An assumption is made for the location of the surface point S* along the given observation ray h1 of camera 1. The corresponding illumination ray k1 follows from the phase coordinate (φ1, φ2)1 measured for the corresponding pixel P1. The normal vector is given by the bisecting line between h1 and k1. Using the calibration parameters, the back projection of S* into camera 2 can be calculated, and from the phase coordinate (φ1, φ2)2 the illumination vector k2 is found. Again a normal vector n2 results from the intersection of h2 and k2. In general, the normal vectors n1 and n2 will be different for a freely assumed surface point S*. A match will be found if S* is moved along the observation ray h1 to the actual position of the surface under test (S).
A third technique exploits basic assumptions about the surface. The geometry of many engineering objects is smooth and can be described by steady functions. The ambiguity problem shown in Fig. 4 is present if, and only if, each single point of the object is considered independently. However, if the neighborhood is considered to be connected smoothly, then the locations of the set of points and the corresponding normal vectors are no longer independent of each other. In other words, a measured point cloud of a surface region also defines the normal vector field of this area. This fact constrains the number of valid solutions for the underdetermined problem of shape reconstruction of specular surfaces and should be taken into account whenever the smoothness assumption is fulfilled.
Fig. 6. Iterative shape reconstruction using two cameras
An iterative procedure has been described that starts from a zero-order surface and approaches more complex shapes step by step until a consistent solution is achieved [9,10]. It is not yet proven that this approach will be successful in every case; in particular, local minima of the error functions used may occur. Therefore, the authors propose a combined Double Camera & Steady Surface (DCSS) solution methodology, described in the following.
4 Combined DCSS methodology
Driven by practical needs, the DCSS approach aims at a maximum measurement field even for large objects like, for example, a car windshield. In parallel, the calculation of the surface topography has to be robust, unambiguous, and fully automated. It is obvious from the discussion in the previous chapter that these objectives can be met only by a combined method. The leading idea is to use a multiple camera configuration. There are two areas within the measuring field:
- points seen from a single camera only and
- points where the fields of view of at least two cameras overlap.
In a first step, only the overlapping regions are considered. The whole setup has been calibrated so that the mutual position of any two cameras is known. The surface topography for these object parts is obtained by the iterative dual-camera procedure described in section 3. At this stage each point is treated separately, and the iteration aims at the optimum position and normal based on the phase coordinates in both cameras. The result is a 3D point cloud for the overlapping patches. This partial information provides a valuable input for the surface iteration in which the remaining part of the object is calculated. While the previous solutions have to start the iteration with a plane and proceed to more complex shapes over many steps, the input data now allow the iteration to start with an estimated model of higher degree. In this way, the evaluation is sped up and, even more important, becomes more reliable. The whole processing scheme is summarized in Fig. 7.
Fig. 7. Processing scheme for the DCSS evaluation
Experiments have validated the DCSS method. A large spherical mirror with well-known curvature (R = 2.5 m) was used to determine the measurement accuracy. As an example, Fig. 8 shows a line profile of the measured shape compared against the ideal sphere. The small difference between both is made visible in Fig. 9. The standard deviation for a 200 mm object field is 0.05 mm. The results have proven that the developed methodology operates stably and accurately and has good potential for future integration in ViALUX SurfCheck sensors for industrial inspection.
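The accuracy figure amounts to computing, for each measured point, its radial distance from the sphere centre minus the nominal radius, and then the standard deviation of these residuals. A minimal sketch with hypothetical data, not the actual measurement:

```python
import math

def sphere_residuals(points, center, radius):
    """Radial deviation of measured points from the ideal sphere:
    distance to the fitted centre minus the nominal radius."""
    return [math.dist(p, center) - radius for p in points]

def std_dev(values):
    """Population standard deviation of the residuals."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

# Hypothetical points near a sphere of radius 2.5 m (units: metres):
pts = [(2.5, 0.0, 0.0), (0.0, 2.5, 0.0), (0.0, 0.0, 2.50005)]
deviation = std_dev(sphere_residuals(pts, (0.0, 0.0, 0.0), 2.5))
```

In a full evaluation the centre itself would be determined by a least-squares sphere fit before the residuals are taken.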
Fig. 8. Reconstructed shape of a large spherical mirror with specified radius of 2.5 m
Fig. 9. Difference between ideal sphere and experimental data
5 References
1. Höfling, R., Aswendt, P., Neugebauer, R. (2000) Phase reflection – a new solution for the detection of shape defects on car body sheets. Opt. Eng. 39(1):175-182
2. Häusler, G. (1999) Verfahren und Vorrichtung zur Ermittlung der Form oder der Abbildungseigenschaften von spiegelnden oder transparenten Objekten, Patentanmeldung DE 19944354 A1
3. (2002) Vorrichtung zur Oberflächenprüfung, ViALUX Gebrauchsmuster Nr. 200216852.2
4. Aswendt, P., Gärtner, S., Höfling, R. (2005) Industrial inspection of specular surfaces using a new calibration procedure. Proc. SPIE 5856 (in press)
5. Ritter, R., Hahn, R. (1983) Contribution to analysis of the reflection grating method. Opt. and Lasers in Eng. 4(1):13-24
6. Petz, M., Ritter, R. (2001) Reflection grating method for 3D measurement of reflecting surfaces. Proc. SPIE 4399:35-41
7. Knauer, M.C., Kaminski, J., Häusler, G. (2004) Phase measuring deflectometry: a new approach to measure specular free-form surfaces. Proc. SPIE 5457:366-376
8. Petz, M., Tutsch, R. (2004) Reflection grating photogrammetry. VDI Verlag, No. 1844:327-338
9. Beyerer, J., Pérard, D. (1997) Automatische Inspektion spiegelnder Freiformflächen anhand von Rasterreflexion. Technisches Messen 64:394-400
10. Pérard, D. (2001) Automated visual inspection of specular surfaces with structured-lighting reflection techniques. VDI Verlag, No. 869
Fringe Reflection for high resolution topometry and surface description on variable lateral scales
Thorsten Bothe, Wansong Li*, Christoph von Kopylow, Werner Jüptner
BIAS – Bremer Institut für angewandte Strahltechnik, Klagenfurter Strasse 2, 28359 Bremen, Germany
E-Mail: [email protected]
*: VEW, Edisonstrasse 19, 28357 Bremen, Germany; E-Mail: [email protected]
1 The Fringe Reflection Technique (FRT)
1.1 Classification, Resolution and Application Fields
The FRT is a robust, non-coherent technique for the characterization of specular surfaces. The technique delivers the local gradients of the surface; further evaluation delivers the local curvatures and, for continuous objects, the height map of the object. For this purpose, the measurement system generates a straight fringe pattern on a plane. A camera records the pattern reflected from the surface under investigation. The system evaluates the fringe distortions and calculates surface normals for each camera pixel. The display window and the camera resolution define the lateral resolution. The high sensitivity of the fringe reflection technique makes it possible to measure gradient changes in the range of microdegrees and thus local height changes in the range of nanometres. Despite the high resolution of the system, the investigated objects may have a height range of centimetres. Thus, the dynamic range is larger than a million – quite promising for a large range of measurement problems. The technique has several advantages with respect to classical interferometric techniques: it is quite simple regarding hardware requirements and thus inexpensive, and it is less sensitive to disturbances. Hence, it applies to many fields of industrial inspection.
1.2 Development and Current State
The physical basis of the fringe reflection technique (FRT) is the distortion of a mirror image by a specular surface. The first technique to utilize this effect by reflecting a defined, regular pattern for topological evaluation is the reflection moiré method, which was demonstrated as early as the 1950s [1]. The development of the FRT follows the history of the fringe projection technique, which started as a moiré technique and then "lost" one of the two necessary gratings, resulting in a simpler calibration procedure and widened application fields. A comparison of fringe projection and fringe reflection can be found in [2]. A first publication using only one grating is [3]. Recently, the technique was stimulated again by measurement problems in the automobile industry, which needs high precision inspection of specular surfaces [4] and of less specular surfaces [5]. New manufacturing possibilities for smooth high-quality freeform surfaces like eyeglasses are demonstrated in this paper and in [6]. Machined high quality surface components also benefit from inspection/qualification by the FRT [7]. The FRT can be realized with extremely low technical effort [8] but gains a lot from recent technology developments like computer-controllable monitors for pattern generation [9]. Especially the availability of digital TFT monitors with defined pixel positions boosts the attainable robustness and accuracy in fringe reflection. A main issue for high accuracy FRT measurements is the distance-angle ambiguity: from single camera beams it is impossible to evaluate the object distance and the surface reflection angle at the same time. This issue allows classifying the existing main system development approaches:
a) Get rough object coordinates by triangulation based on the controlled movement of the pattern generating monitor [10].
b) Get rough object coordinates by triangulation based on the use of a (stereoscopic) pair of cameras [6,11].
c) Use knowledge of the measured object for evaluation, like a defined shape and orientation (inspection), or grant a known object distance for one object point (unknown shape) and iteratively recalculate the object coordinates from the physical/topological conditions. The general possibility was mentioned in [12,13] and is continuously developed for the system that is the topic of this paper [14].
The techniques a) and b) greatly ease the mathematical calculation of exact coordinates, but the hardware requirements are relatively high and the measured objects need to be cooperative (e.g. both cameras must get a mirror
image on the same parts of the object). Additionally, it is difficult to maintain the potentially high resolution of the data when combining the stereo views. Method c) allows building a very simple, robust and flexible measurement head on the basis of just a monitor, a camera and a device for object repositioning or for measuring the object distance (fixing the distance-angle ambiguity). A depth resolution of better than a nanometre on large smooth surfaces is obtainable when the FRT is combined with:
- state-of-the-art phase measurement techniques that allow completely freely sized, linearized, sinusoidal fringes [15],
- up-to-date photogrammetric calibration techniques [16],
- optimized algorithms to handle real measured gradient fields (i.e. to allow invalid areas) for further processing and integration to height data [17].
In this paper, we describe the measurement technique, discuss possible surface description techniques to treat the description of waviness on different lateral scales, and demonstrate the attainable resolution, thus showing part of the range of possible materials and applications.
1.3 Function principle and measurement setup
Fig. 1. FRT setup: coordinate plane (monitor) with camera fixed at border a) vertical: front side showing straight fringes, b) vertical: back side and measured object (car window) with reflected distorted fringes, c) boxed horizontal mobile system driven by a laptop.
To evaluate the mirror image distortions, the measurement system generates a straight fringe pattern on a plane (Fig. 1a). A camera records the pattern reflected from the surface under investigation (Fig. 1b). A compact, mobile setup based on this technique is shown in Fig. 1c. The FRT system evaluates the fringe distortions and calculates surface normals for each camera pixel (Fig. 2). Further calculation techniques have been developed for feature extraction and object analysis [17].
Fig. 2. Simplified physical model showing the imaging beam of one camera pixel at distance l from the object, which has a local surface angle α. The doubled mirror angle (2α) leads to the position s on the monitor, which is coded into the phase φ of the displayed sinusoidal fringes. By phase measurement of φ the local surface normal can be calculated for every camera pixel. Additionally, the local angle represents the height change Δz per pixel and can be used to calculate the object shape.
The following obtainable resolutions have been demonstrated on smooth surfaces [14]: angle: 500 microdegrees (1.8''), shape: 0.7 nm, curvature: 0.05 dioptres. The accuracy is lowered by systematic errors due to the inexactly chosen distance l for each point on the object (distance-angle ambiguity), which show up as:
- an additional ramp in the surface normals,
- respectively: a constant offset in curvature (deviation of the shape),
- respectively: an additional parabolic shape (integrated normals).
The relative curvature error is similar to the relative distance error: e.g. for a distance l that is wrong by 1%, there will be an offset in the evaluated curvature of the same 1%. This is a relatively small value, but when the object shape is calculated by field integration spanning a large object size, the local systematic errors add up and result in additional parabolic components of up to micrometres. Generally, statistical and systematic shape errors increase with the evaluated structure size (scale dependent). The systematic, additive error is nearly removed in differential measurements, when parabolic shape components can be neglected (defect or microstructure evaluation – the example of this paper), or when a known shape (e.g. a measured master or CAD model) is used for the calculation.
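The growth of the systematic error with structure size can be illustrated in one dimension: integrating per-pixel slopes into a height profile turns a constant curvature offset (a slope ramp) into exactly the parabolic height component described above. A sketch with illustrative numbers:

```python
def integrate_profile(slopes, pixel_pitch):
    """Integrate per-pixel surface slopes into a height profile --
    a 1-D analogue of the field integration step."""
    z, heights = 0.0, [0.0]
    for s in slopes:
        z += s * pixel_pitch
        heights.append(z)
    return heights

# A constant curvature offset c adds a slope ramp c*x, which the
# integration turns into a parabolic height component ~ c*x^2/2:
n, pitch, c_err = 100, 5e-5, 0.01   # 100 pixels, 50 um pitch, 0.01 1/m offset
ramp = [c_err * k * pitch for k in range(n)]
parabolic_error = integrate_profile(ramp, pitch)
```

Because the error grows quadratically with the integration length, it is negligible for small defects but dominates over large fields, matching the scale dependence stated above.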
2 Experimental Results
Many possible application fields for the FRT have been published, e.g. in [14]. The main condition is an at least partial specularity of the surface: metal, glass, plastic, lacquer and fluids (also dynamically) have been measured successfully by the FRT. Exemplary results for a freeform eyeglass (Fig. 3a) are demonstrated in Fig. 3.
(Colour scales of Fig. 3: curvature from +8.7 1/m (convex) to +25.1 1/m (concave); microstructure range 780 nm)
Fig. 3. a) free form plastic lens and reflected fringes, b) curvature, c) microstructure
The evaluated curvature (Fig. 3b) directly describes the near and far range of the eyeglass by its optical power in dioptres, as well as local variations and manufacturing traces. The shape (Fig. 3c) can be used to investigate the polishing traces in the range of 780 nm; however, the global shape must first be removed so that the microstructure becomes visible. It is a general observation that the curvature of a surface is an excellent property to describe surface microstructure – in many cases more powerful than a height plot: e.g. for the qualification of waviness or of bumps and dents, where not just the height of a structure defines a failure but its slope change per lateral distance – which is the curvature. The visual impression of the lacquer structure (orange skin effect) corresponds directly to the local curvature of the surface: a strong optical impression corresponds to a large curvature. Wherever a subjective perception defines the surface quality, as for lacquer surfaces or moulded plastics, standard roughness descriptions of the surface do not match, but the curvature does. Thus, the FRT curvature evaluation provides an opportunity to switch to an objective description and qualification of such surfaces. Choosing the curvature for surface description is not a final conclusion but a topic put up for discussion. Thus, for the following examples, the results of shape microstructure and curvature are compared as far as possible, and the generation of objective surface parameters is discussed.
Fig. 4. KTL inspection a) measurement situation b) reflected fringes, c) camera view
In the lacquer process, different layers of material are superimposed, e.g. starting with raw metal (thin sheet), then phosphate coating, cathodic electro-deposition varnish (Kathodentauchlack – KTL), filler and finally the top coating. Each layer except the phosphate (which is perfectly non-specular) is measurable by the FRT. Thus, the complete process can be supported by the FRT, e.g. to choose the filler material that best minimizes the orange-skin effect of the final surface. Fig. 4a shows the measurement situation at a freshly KTL-lacquered car door: the FRT measurement head is placed at a defined distance (two crossing laser beams) to evaluate the reflected fringes (Fig. 4b,c). The evaluated field size was 4.5 x 3.4 cm with a lateral resolution of 50 µm.
(Colour scales of Fig. 5: shape range 1.56 mm; microstructure ranges 3.72 µm and 3.4 µm; curvature from -40.4 1/m (convex) to +103.5 1/m (concave))
Fig. 5. a) curvature, b) shape, c) microstructure (by Savitzky-Golay filter), d) enlarged curvature around defect, e) enlarged microstructure around defect
The measurement results are displayed in Fig. 5. The curvature (Fig. 5a) directly shows the microstructure of the surface and, with a strong signal, a surface defect, which is displayed enlarged in Fig. 5d. The evaluated shape (Fig. 5b) has a range of 1.56 mm. Thus, the 3.7 µm microstructure is not visible unless the global shape is removed. As the actual shape of the measured part is unknown, it is a difficult task to remove 1.5 mm from the height and still retrieve valid 3.7 µm data; high-order polynomials tend to generate artificial structures here. To generate the microstructure in Fig. 5c, a localized polynomial fit has been used: an extended 2D Savitzky-Golay filter [18]. The area of the defect is displayed enlarged in Fig. 5e and shows a dent of about 2.4 µm depth.
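The idea of a localized polynomial fit for shape removal can be sketched as follows. This is a simplified separable variant of a 2D Savitzky-Golay smoother (our own illustrative construction, not the extended filter of [18]), used to strip a large global shape from a small ripple:

```python
import numpy as np

def savgol_kernel_1d(window, order):
    """Savitzky-Golay smoothing coefficients: least-squares fit of a
    polynomial of given order over a centered window, evaluated at 0."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)   # columns 1, t, t^2, ...
    # first row of the pseudo-inverse gives the fitted value at t = 0
    return np.linalg.pinv(A)[0]

def sg_detrend_2d(z, window=31, order=2):
    """Remove the locally fitted smooth shape from a height map z and
    return the microstructure (separable row/column pass; illustrative)."""
    k = savgol_kernel_1d(window, order)
    pad = window // 2
    zp = np.pad(z, pad, mode='reflect')
    smooth_rows = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, zp)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, smooth_rows)
    return z - smooth

x = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(x, x)
shape = 800.0 * (X**2 + Y**2)               # ~1.6 mm global shape (in µm)
ripple = 1.8 * np.sin(40 * np.pi * X)       # ~3.7 µm microstructure
micro = sg_detrend_2d(shape + ripple)
# away from the borders the ripple survives and the global shape is gone
print(np.std(micro[30:170, 30:170]))
```

Because the local fit reproduces polynomials up to its order exactly, the parabolic global shape is removed without assuming a global model, which is the advantage over high-order global polynomial fits.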
In the curvature map Fig. 5d, a high-frequency component is visible which vanishes in the height map. Since the curvature corresponds to the visual impression of the surface, Fig. 5d comes closer to the impression that the surface has main structures on two different lateral scales. This would become visible in Fig. 5e only if it were possible to correctly remove the shape component of the larger lateral scale as well. A frequency analysis is able to quantify the components on different lateral scales. In the process of developing methods to describe lacquer surfaces, it was found that a single roughness value is not sufficient to quantitatively describe e.g. the orange-skin effect. A logarithmic series of lateral scale intervals was defined to describe the surface for different structure sizes: Wa [0.1-0.3 mm], Wb [0.3-1 mm], Wc [1-3 mm], Wd [3-10 mm]. Fourier band-pass filtering can separate the measurement results into these intervals (Fig. 6, single images).
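The band separation can be sketched with a one-dimensional profile. The band edges below follow the Wa-Wd wavelength intervals from the text; the synthetic profile and function names are our own illustration:

```python
import numpy as np

# Lateral-scale intervals from the text; edges are wavelengths in mm,
# so the FFT mask keeps the corresponding 1/wavelength frequency bands.
BANDS_MM = {'Wa': (0.1, 0.3), 'Wb': (0.3, 1.0), 'Wc': (1.0, 3.0), 'Wd': (3.0, 10.0)}

def bandpass_components(profile, dx_mm):
    """Fourier band-pass split of a profile sampled every dx_mm mm."""
    n = len(profile)
    spec = np.fft.rfft(profile)
    freq = np.fft.rfftfreq(n, d=dx_mm)          # cycles per mm
    out = {}
    for name, (lo, hi) in BANDS_MM.items():
        mask = (freq > 1.0 / hi) & (freq <= 1.0 / lo)
        out[name] = np.fft.irfft(spec * mask, n)
    return out

# Synthetic profile with one 0.18 mm and one 0.7 mm structure (cf. the text)
dx = 0.05                                       # 50 µm sampling
x = np.arange(0, 45.0, dx)                      # 45 mm profile
prof = 1.0 * np.sin(2 * np.pi * x / 0.18) + 2.0 * np.sin(2 * np.pi * x / 0.7)
comps = bandpass_components(prof, dx)
# The 0.18 mm component lands in Wa, the 0.7 mm component in Wb.
print(np.std(comps['Wa']), np.std(comps['Wb']))
```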
Fig. 6. Curvature components over logarithmized lateral structure size: band pass filtered interval components (images and enlarged areas around defect) with relative energy level (dashed line) and high resolution continuous spectrum.
The analysed spectrum (Fig. 6) reveals the observed high frequency to lie in interval Wa, and the longer period as well as the defect in Wb. The continuous spectrum localizes the main structure components at lateral sizes of 180 µm and 700 µm, respectively. When carrying out the same evaluation for the (integrated) shape (Fig. 7), the high-frequency component in Wa is flattened and the second maximum shifts from Wb to the beginning of Wc. Periods larger than 2 or 3 mm are attenuated by the global shape removal filtering. It is not possible to evaluate the shape by FFT in the same way without removing the global shape, because the energy loss at large periods becomes too strong and produces artefacts.
Fig. 7. Microstructure (shape) components over logarithmized lateral structure size: band-pass filtered interval components (images and enlarged areas around the defect) with relative energy level (dashed line) and high-resolution continuous spectrum (thick line), with the curvature spectrum weighted by the inverse derivative factors overlaid (thin line).
Apart from the changed amplitudes, the normalized component images of curvature and shape look very similar. This behaviour is completely understandable when taking into account that the curvature is the second derivative of the shape. A Fourier decomposition delivers the shape as a sum of sin(f_x x) and cos(f_x x) terms. The second derivative of all elements delivers the same terms, but weighted by the square of their frequency.
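This frequency-squared weighting is easy to verify numerically; the profile below is synthetic and only illustrates the relation between shape and curvature spectra:

```python
import numpy as np

# For a shape written as a Fourier sum of sin/cos(2*pi*f*x), the curvature
# (second derivative) has the same components weighted by -(2*pi*f)^2, so
# curvature and shape spectra differ only by a frequency-squared factor.
n, L = 4096, 40.0                     # samples, profile length in mm
x = np.linspace(0, L, n, endpoint=False)
f1, f2 = 0.5, 2.0                     # cycles per mm
shape = 3.0 * np.sin(2 * np.pi * f1 * x) + 0.2 * np.sin(2 * np.pi * f2 * x)

# exact second derivative of the two sine terms
curv = -(2 * np.pi * f1)**2 * 3.0 * np.sin(2 * np.pi * f1 * x) \
       - (2 * np.pi * f2)**2 * 0.2 * np.sin(2 * np.pi * f2 * x)

freq = np.fft.rfftfreq(n, d=L / n)
S = np.abs(np.fft.rfft(shape))
C = np.abs(np.fft.rfft(curv))
i1, i2 = np.argmin(np.abs(freq - f1)), np.argmin(np.abs(freq - f2))
print(C[i1] / S[i1], (2 * np.pi * f1)**2)   # both ~9.87
print(C[i2] / S[i2], (2 * np.pi * f2)**2)   # both ~157.9
```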
Fig. 8. Lacquer curvature spectra for: blank steel, KTL, filler and top coating (BC)
When the spectrum of Fig. 6 is multiplied by the inverse weights (thin line in Fig. 7), the result is very similar to the shape spectrum of Fig. 7 (thick line) for small periods. The deviation for larger periods is due to the microstructure filter and to the noise, which increases with the rapidly growing factors towards large periods. Different layers of the varnishing process have been measured and frequency analysed (Fig. 8). The frequency analysis of the topology is very selective (compare KTL 1/2/3, Filler 1/2) but also highly repeatable (compare pos1/pos2). KTL 3 pos1 is the surface which was chosen for detailed evaluation in this article. It has been demonstrated that the FRT - besides being able to inspect the shape of specular surfaces - delivers a very sensitive, robust and selective method for topological microstructure analysis.
4 Acknowledgments
The authors thank the BIA (Bremer Innovations Agentur), VEW (Vereinigte Elektronikwerkstätten in Bremen) and Satisloh for financial support for the development of the FRT, which will result in a series prototype manufactured and sold by VEW. Thanks also to the customers who drive the continuous development - many of their challenging measurement tasks provide a practice-oriented guide for the ongoing developments.
5 References
1. Ligtenberg, F. K.: "The moiré method, a new experimental method for the determination of moments in small slab models", Proc. Soc. Exp. Stress Anal. 12, 1954/55, pp. 83-98
2. Hung, Y.Y., Lin, L., Shang, H.M., Park, B.G.: "Practical three-dimensional computer vision techniques for full-field surface measurement", Opt. Eng. 39(1), 2000, pp. 143-149
3. Ritter, R., Hahn, R.: "Contribution to analysis of the reflection grating method", Optics and Lasers in Engineering 4(1), 1983, pp. 13-24
4. Kammel, S.: "Topography reconstruction of specular surfaces from a series of grey-scale images", Proc. SPIE 4189-18, 2001, pp. 136-144
5. Höfling, R., Aswendt, P., Neugebauer, R.: "Phase reflection - a new solution for the detection of shape defects on car body sheets", Opt. Eng. 39(1), 2000, pp. 175-182
6. Knauer, M., Kaminski, J., Häusler, G.: "Phase Measuring Deflectometry: a new approach to measure specular free-form surfaces", Optical Metrology in Production Engineering, Proc. SPIE 5457, Strasbourg, France, 2004, pp. 366-376
7. Gläbe, R., Flucke, C., Bothe, T., Brinksmeier, E.: "High speed fringe reflection technique for nm resolution topometry of diamond turned free form mirrors", Proc. Euspen 5th Int. Conf., Vol. 1, 2005, pp. 25-28
8. Massig, J.: "Deformation measurement on specular surfaces by simple means", Opt. Eng. 40(10), 2001, pp. 2315-2318
9. Hung, Y.Y., Chen, F., Tang, S.H.: "Reflective computer vision technique for measuring surface slope and plate deformation", Proc. SEM Spring Conference on Experimental Mechanics, 1993, pp. 948-953
10. Hung, Y.Y., Chen, F., Tang, S.H.: "Reflective computer vision technique for measuring surface slope and plate deformation", Proc. SEM Spring Conference on Experimental Mechanics, 1993, pp. 948-953
11. Petz, M., Tutsch, R.: "Measurement of optically reflective surfaces by imaging of gratings", Proc. SPIE 5144, 2003, pp. 288-294
12. Beyerer, J., Pérard, D.: "Automatische Inspektion spiegelnder Freiformflächen anhand von Rasterreflexion", Technisches Messen 64(10), 1997, pp. 394-400
13. Pérard, D., Beyerer, J.: "Three-dimensional measurement of specular free-form surfaces with a structured-lighting reflection technique", Proc. SPIE 3204-11, 1997, pp. 74-80
14. Bothe, T., Li, W., Kopylow, C., Jüptner, W.: "High resolution 3D shape measurement on specular surfaces by fringe reflection", Proc. SPIE 5457, 2004, pp. 411-422
15. Burke, J., Bothe, T., Osten, W., Hess, C.: "Reverse engineering by fringe projection", Proc. SPIE 4778, 2002, pp. 312-324
16. Legarda-Saenz, R., Bothe, T., Jüptner, W.: "Accurate procedure for the calibration of a structured light system", Opt. Eng. 43(2), 2004
17. Li, W., Bothe, T., Kopylow, C., Jüptner, W.: "Evaluation methods for gradient measurement techniques", Proc. SPIE 5457, 2004, pp. 300-311
18. Savitzky, A., Golay, M.J.E.: Analytical Chemistry 36, 1964, pp. 1627-1639
Full-Field Shape Measurement of Specular Surfaces Jürgen Kaminski, Svenja Lowitzsch, Markus C. Knauer, and Gerd Häusler Max Planck Research Group, Institute of Optics, Information and Photonics, University of Erlangen-Nuremberg, Staudtstr. 7/B2, 91058 Erlangen, Germany
1 Introduction
The measurement of highly curved specular free-form surfaces is a challenging task. The surface itself is "not visible"; only its effect on the reflected light can be measured - if the light is actually reflected into the pupil of the observation device. For example, the use of interferometers for the measurement of highly curved aspheric surfaces usually requires sophisticated compensation optics depending on the object under test. We present the combination of a deflectometric sensor and a numerical algorithm to obtain the shape of specular free-form surfaces. The sensor measures the local slope of the surface, which then has to be integrated to obtain the object's overall shape. Most deflectometric methods cannot measure absolute data. To overcome this problem, we have developed a novel stereo method for specular surfaces. So far, most implemented methods for shape reconstruction from slopes have various drawbacks: highly curved boundaries lead to artifacts, and holes and outliers cause error propagation. We introduce a new reconstruction algorithm which overcomes these weaknesses. By interpolation of the slopes we obtain an analytic representation of the surface. The method can be applied to grid-free noisy data with holes or outliers. With the combination of the new measuring tool and the robust numeric tool we are able to measure the global shape of specular free-form surfaces with sub-micron accuracy, while local details can even be measured in a range of a few nanometers depth variation.
2 Sensor Principle
The sensor is based on "Phase-Measuring Deflectometry" (PMD): we generate a sinusoidal fringe pattern on a large screen [1]. This pattern is observed with a camera using the surface under test as a mirror. Depending on the shape of the surface, the observed pattern appears distorted (see Fig. 1). The method is based on the "Reflection Grating Method" introduced by Ritter and Hahn in 1983 [2]. We encode the screen with a series of sinusoidal fringes. By applying well-known phase-shift algorithms we measure the phase of each observed point, which corresponds to a location on the screen. The use of sinusoidal fringes is essential because we cannot focus on the surface and the screen at the same time due to the limited depth of focus. For the best possible lateral resolution we focus on the surface and thus acquire a blurred image of the pattern. Since the fringes are sinusoidal, the measured phase does not change.
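The insensitivity of the measured phase to defocus blur can be illustrated with the standard four-step phase-shift formula (a textbook sketch; the paper does not specify which phase-shift algorithm is actually used):

```python
import numpy as np

# Four-step phase shifting: for sinusoidal fringes I_k = A + B*cos(phi + k*pi/2),
# the phase follows from phi = atan2(I_3 - I_1, I_0 - I_2). Defocus blur of a
# sinusoidal pattern only reduces the modulation B, so the recovered phase is
# unchanged -- the reason one can focus on the surface instead of the screen.
def recover_phase(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

phi = 1.234                              # true phase at one camera pixel
A = 0.5
for B in (0.4, 0.1):                     # sharp vs. strongly blurred fringes
    frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
    print(recover_phase(*frames))        # 1.234 in both cases
```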
Fig. 1. Principle of Phase-Measuring Deflectometry [4]
The primary data acquired by PMD is the local surface slope. Wagner [3] showed that, among other advantages, the channel capacity of a system can be used more efficiently if the slope, instead of the height, is transmitted.
2.1 Absolute Measurement
Most deflectometric methods give only qualitative results because the unknown height of the object influences the measured slope. The situation is depicted on the left of Fig. 2:
Assuming the system has been calibrated [4], for each camera pixel P we know the ray of vision v and the observed point Q on the screen. Since we do not know the position on the surface S, we cannot calculate a unique normal vector n. Thus, it is impossible to get an absolute measurement.
Fig. 2. Left: Although P, Q and v are known, the normal n still depends on the height of the surface S. Right: Potential normals for two cameras. The normals are only identical at the actual position of the surface [5].
We solve this ambiguity problem by employing a technique which we call "Stereo Deflectometry" [4, 5]. For this method, we use two cameras. In comparison to common stereo approaches on diffusely reflecting surfaces, it is more difficult to find corresponding points in reflected images: for each surface point the two cameras observe two different phases of the fringe pattern. Hence, we use the surface normal itself as the correspondence parameter. For each camera we calculate a series of potential normals based on varying height assumptions. For the real surface position, both normals must coincide (Fig. 2, right). Using this method we get absolute slope values which are independent of the position of the object in the measuring volume¹.
2.2 Resulting Slope Data
The noise on the measured slopes is less than 10 arcsec. This accuracy allows the detection of local surface defects in a range of a few nanometers. The Stereo Deflectometry approach yields a rough estimate of the object's surface height, with an absolute uncertainty of up to 0.1 mm (see Fig. 3). However, the height estimate is sufficient to obtain accurate slope values with an absolute error of less than 100 arcsec.
¹ A similar method was developed by Petz et al. [6]
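The height search behind Stereo Deflectometry can be sketched in a 2D toy geometry (plane mirror, two cameras and a screen; all coordinates below are invented for illustration and have nothing to do with the calibrated sensor):

```python
import numpy as np

# 2D sketch: an unknown plane mirror z = 0, a fringe screen at z = 100 and
# two cameras. For candidate heights h along camera 1's ray we build the
# normal each camera would require; they coincide only at the true height.
def unit(v):
    return v / np.linalg.norm(v)

def reflect(d, n):
    return d - 2 * np.dot(d, n) * n

Z_SCREEN, N_TRUE = 100.0, np.array([0.0, 1.0])
c1, c2 = np.array([-30.0, 50.0]), np.array([30.0, 50.0])
p_true = np.array([0.0, 0.0])

def screen_point(cam, target):
    """Screen location the camera's ray through `target` actually observes:
    intersect the real mirror z = 0, reflect, propagate to the screen."""
    u = unit(target - cam)
    m = cam + u * (-cam[1] / u[1])          # hit point on the real mirror
    r = reflect(u, N_TRUE)
    return m + r * ((Z_SCREEN - m[1]) / r[1]), u

q1, v1 = screen_point(c1, p_true)           # camera 1's measured data

def normal_mismatch(h):
    p = c1 + v1 * ((h - c1[1]) / v1[1])     # candidate point on camera 1's ray
    n1 = unit(unit(q1 - p) - v1)            # normal required by camera 1
    q2, u2 = screen_point(c2, p)            # what camera 2 measures there
    n2 = unit(unit(q2 - p) - u2)            # normal required by camera 2
    return np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

heights = np.linspace(-5, 5, 201)
best = heights[np.argmin([normal_mismatch(h) for h in heights])]
print(best)                                 # ~0.0: the true surface height
```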
Fig. 3. Left: Measured slope of a planar λ/10 mirror in x-direction. Right: Height profile estimated by Stereo Deflectometry.
To exploit the intrinsic accuracy of the slope data for the resulting height, a new shape reconstruction method is required. Hereby, we have to keep certain issues in mind: first, the desired method should be capable of accurately reconstructing the object's global shape while preserving its local detail. Second, it has to be stable to noise and data gaps. Last, the method should be able to deal with highly curved object shapes in order to be applicable to a large class of objects.
3 Shape Reconstruction
A variety of shape reconstruction methods exists [7]. They all belong to one of two types: local or global reconstruction techniques. Numerical integration in 2D is a local method which accumulates the slope along a path of choice. However, noisy slope data and outliers cause error propagation, so that the result depends heavily on the integration path. Fourier methods such as the Frankot-Chellappa method [8] are global methods. They imply a periodic extension of the boundaries; if this is not fulfilled, the shape reconstruction fails. These methods also require regularly and completely sampled data.
3.1 Analytic Slope Interpolation
Our new method satisfies the above requirements by interpolating the slope data to acquire an analytic data representation. The analytic integral of the interpolating function then yields the object's height. The method is a combined local and global approach: it is based on Hermite-Birkhoff interpolation employing so-called radial basis functions (RBFs) [9]. We use Wendland functions [10] of sufficient polynomial order as basis functions.
They have limited support; hence, the radius of the basis function controls the locality and the stability of the interpolation. To perform the RBF interpolation, the inversion of a linear system is required. A typical PMD data set consists of up to one million measured points and thus leads to a system of 2 million by 2 million entries. In order to cope with such huge data sets, we developed a two-step method (Fig. 4). In the first step, we split the data field into overlapping patches. On each patch the slope data is interpolated and the height is calculated up to a constant of integration. In the second step, we determine the height constants using a least-squares fit on the overlaps.
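The two-step idea of patch-wise integration followed by matching of the constants on the overlaps can be sketched in 1D, with plain trapezoidal integration standing in for the RBF machinery (names and patch sizes are purely illustrative):

```python
import numpy as np

# Step 1: integrate slope data on overlapping patches (each up to an
# unknown constant). Step 2: chain the constants by a least-squares fit
# on the overlaps (for a single offset this is just the mean gap).
def integrate_patchwise(slope, dx, patch=40, overlap=10):
    n = len(slope)
    starts = list(range(0, n - overlap, patch - overlap))
    pieces, offsets = [], [0.0]
    for s in starts:
        seg = slope[s:s + patch]
        h = np.concatenate(([0.0], np.cumsum((seg[:-1] + seg[1:]) / 2) * dx))
        pieces.append((s, h))
    for (s0, h0), (s1, h1) in zip(pieces[:-1], pieces[1:]):
        shared = s0 + len(h0) - s1            # number of overlapping samples
        delta = np.mean((offsets[-1] + h0[-shared:]) - h1[:shared])
        offsets.append(delta)                 # least-squares offset = mean gap
    out = np.full(n, np.nan)
    for (s, h), c in zip(pieces, offsets):
        out[s:s + len(h)] = c + h             # later patches overwrite overlaps
    return out

x = np.linspace(-1, 1, 201)
dx = x[1] - x[0]
height = x**3 - x                             # test shape
rec = integrate_patchwise(np.gradient(height, dx), dx)
rec -= rec[0] - height[0]                     # global constant is arbitrary
print(np.nanmax(np.abs(rec - height)))        # small residual
```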
Fig. 4. The 2-step height reconstruction method shown on a part of a progressive eyeglass lens. Left: The height is reconstructed on overlapping patches up to an integration constant. Right: A least-squares fit has been applied to obtain the object’s overall shape.
3.2 Simulation Results
To show the accuracy of the method, we simulated the slope measurement of a sphere with a radius of 80 mm on a field of 80×80 mm². We added uniformly distributed noise of 10 arcsec. The lateral resolution was 0.2 mm. Figure 5 shows the difference between the original sphere and the reconstructed one. The global deviation from the ideal sphere is less than 100 nm, and the local error is below 10 nm.
Fig. 5. Deviation of a reconstructed sphere from the nominal sphere. The measurement was simulated adding 10 arcsec of noise. The absolute reconstruction error is less than 100 nm.
4 Experimental Results and Conclusion
We reconstructed the shape of the planar λ/10 mirror depicted in Fig. 3 using the new interpolation method. The shape of the mirror could be reconstructed with sub-micron precision (Fig. 6). Whereas the reconstruction method is stable to noise, it is sensitive to systematic errors, such as errors caused by inaccurate sensor calibration. This can be used in the future as feedback to improve the sensor calibration.
Fig. 6. Cross-section through the reconstructed shape of the planar λ/10 mirror in Fig. 3. The absolute error is about 0.6 µm on an 80 mm profile.
Further, we measured an object slide with a groove of about 0.5 µm depth both with the PMD and with a white-light interferometer. In Figure 7, the difference between the reconstructed surface (black) and the white-light interferometry measurement (grey) is depicted. On a field of 20×20 mm², the maximal difference between both measurements is less than 1 µm.
Fig. 7. Comparison of a PMD and a white-light interferometry measurement of an object slide with a groove. The maximal deviation is below 1 µm.
Fig. 8. Rendered height map of a 0.5×0.5 mm² part of a plastic micro-lens. The clearly visible groove has a depth of approximately 30 nm.
In Figure 8, a part of the height map of a plastic micro-lens is depicted. The global shape has been subtracted to make fine details visible. The largest groove has a depth of roughly 30 nm.
In conclusion, we have shown that our novel method is able to measure surfaces of specular free-form objects with nanometer precision. Not only are fine details preserved, the global shape is reconstructed as well. This new method also makes it possible to fine-calibrate the deflectometry sensor and to obtain results of even higher global accuracy.
5 Acknowledgements This work was supported by the DFG-SFB 603 and by the Bayerische Forschungsstiftung (AZ 450/01).
6 References
1. Häusler, G.: Verfahren und Vorrichtung zur Ermittlung der Form oder der Abbildungseigenschaften von spiegelnden oder transparenten Objekten. Patent DE19944354 (1999)
2. Ritter, R., Hahn, R.: Contribution to analysis of the reflection grating method. Optics and Lasers in Engineering 4(1): 13-24 (1983)
3. Wagner, C., Häusler, G.: Information theoretical optimization for optical range sensors. Applied Optics 42(27): 5418-5426 (2003)
4. Knauer, M., Kaminski, J., Häusler, G.: Phase Measuring Deflectometry: a new approach to measure specular free-form surfaces. Optical Metrology in Production Engineering, Proc. SPIE 5457: 366-376 (2004)
5. Knauer, M., Kaminski, J., Häusler, G.: Absolute Phasenmessende Deflektometrie. DGaO Proceedings, A15 (2004)
6. Petz, M., Tutsch, R.: Reflection grating photogrammetry. VDI-Berichte 1844, Photonics in Measurement: 327-338 (2004)
7. Schlüns, K., Klette, R.: Local and global integration of discrete vector fields. In: Advances in Computer Vision, F. Solina, W.G. Kropatsch, R. Klette, R. Bajcsy (eds.), Springer, Wien, 1997, 149-158
8. Frankot, R.T., Chellappa, R.: A method for enforcing integrability in shape from shading algorithms. IEEE Trans. on PAMI 10: 439-451 (1988)
9. Lowitzsch, S.: Matrix-valued radial basis functions: stability estimates and applications. Advances in Computational Mathematics 23(3): 299-315 (2005)
10. Wendland, H.: Piecewise polynomial, positive definite and compactly supported radial basis functions of minimal degree. Advances in Computational Mathematics 4(4): 389-396 (1995)
Numerical Integration of Sampled Data for Shape Measurements: Metrological Specification L. P. Yaroslavsky Dept. Interdisc. Studies, Faculty of Engineering, Tel Aviv University, Tel Aviv, Ramat Aviv 69978, Israel A. Moreno, J. Campos Dept. Física, Universidad Autónoma de Barcelona, 08193 Bellaterra, Spain
1 Introduction
Given a computational algorithm applied to sampled data, one should always find out to what continuous transformation of the continuous functions represented by the sampled data this algorithm corresponds. In this paper, we address this problem for the case of numerical integration of functions using a sampled representation of their derivatives, a processing step frequently used in optical metrology. Integration of functions can be regarded as a convolution of the functions with a corresponding integration kernel, or point spread function. Different numerical integration algorithms correspond to different approximations of the ideal integration point spread function. The convolution integral, in its turn, can be treated in the Fourier transform domain as a product of the Fourier spectrum of the function and that of the convolution kernel, called the convolution frequency transfer function, or frequency response. Therefore one can also characterize numerical integration algorithms in terms of the accuracy of approximating the ideal integration frequency response. The purpose of the paper is to investigate the frequency responses and resolving power of several numerical integration algorithms. Specifically, we investigate the trapezoidal integration formula, two modifications of the Simpson formula, integration using cubic splines and two methods of integration by "1/f-filtering" in the domain of the Discrete Fourier Transform, which, in a certain sense, approximates the frequency response of the ideal integrator most closely. We will also show the limitations imposed on the integration accuracy by the finiteness of the number of available function samples and by the methods of treating boundary effects in the numerical integration.
2 Continuous and numerical integrators and their frequency responses
Digital signal integration is an operation that assumes interpolation of the sampled data. Similarly to signal differentiation, signal integration operates with infinitesimal signal increments. It can also be treated as a signal convolution. In the Fourier transform domain, it is described as

$$\int_{-\infty}^{x} a(\xi)\,d\xi = \int_{-\infty}^{\infty} \frac{\alpha(f)}{i2\pi f}\,\exp(i2\pi f x)\,df,\qquad(1)$$

where $a(x)$ and $\alpha(f)$ are the signal and its Fourier transform spectrum, respectively. From Eq. (1) it follows that $H^{int}(f) = 1/(i2\pi f)$ is the integrating filter frequency response. In digital processing, the integrating filtering described by Eq. (1) can be implemented in the domain of the Discrete Fourier Transform (DFT) as

$$\{\hat a_k\} = \mathrm{IDFT}\{K_r^{int} \times \mathrm{DFT}\{a_k\}\},\qquad(2)$$

where $\{a_k\}$, $k = 0, 1, \ldots, N-1$, is a set of $N$ samples of the input signal to be integrated, $\mathrm{DFT}\{\cdot\}$ and $\mathrm{IDFT}\{\cdot\}$ are the operators of the direct and inverse Discrete Fourier Transforms, the sign $\times$ denotes the element-wise product of vectors, and

$$K_r^{int} = \begin{cases} \dfrac{N}{i2\pi r}, & r = 1, \ldots, \dfrac{N}{2}-1\\[1mm] \dfrac{1}{2\pi}, & r = N/2 \end{cases}
\qquad\text{and}\qquad
K_r^{int} = \dfrac{N}{i2\pi r},\quad r = 1, 2, \ldots, \dfrac{N-1}{2}\qquad(3)$$

for even and odd $N$, respectively, with $K_0^{int} = 0$ and $K_r^{int} = -K_{N-r}^{int}$ for $r = N/2+1, \ldots, N-1$. Numerical integration according to Eq. (2) automatically implies discrete sinc-interpolation of the signal [1]. We will refer to this integration method as the DFT-based method. The numerical integration methods best known in the literature are the Newton-Cotes quadrature rules [2, 3]. The first three rules are the trapezoidal, the Simpson and the 3/8 Simpson rules. In these integration methods, a linear, a quadratic and a cubic interpolation, respectively, are assumed between the sampled slope data. More recently, spline interpolation methods were introduced. In cubic spline interpolation, a cubic polynomial is evaluated between every pair of points [3], and then an analytical integration of these polynomials is performed. One can show that the discrete frequency response of a digital filter (the Discrete Fourier Transform coefficients of its point spread function) consists of samples of the equivalent continuous filter frequency response. Therefore, we
will compare the numerical integration methods in terms of their discrete frequency responses. In Fig. 1, the absolute values of the frequency responses of the DFT-based method of integration, of the Newton-Cotes rules and of the cubic spline method are represented, with the frequency coordinate normalized to its maximal value.
Fig. 1. Comparison of the frequency responses of the trapezoidal, Simpson, 3/8 Simpson, cubic spline and Fourier methods of integration
The frequency response coefficients of the DFT method are, by definition, samples of the ideal "1/f" frequency response of the continuous integrator. Therefore, with respect to approximation of the ideal frequency response, the DFT method can be regarded as a "gold standard". As one can see in the figure, the frequency responses of all methods are similar in the low-frequency region, while the Newton-Cotes rules and the cubic spline method begin to deviate from the ideal one in the medium- and high-frequency zones. The Simpson and 3/8 Simpson rules exhibit large deviations and even poles (the frequency response tends to infinity) at the highest frequency and at 2/3 of the maximal frequency, respectively. The cubic-spline-based method is the closest to the "gold standard" DFT method.
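A minimal sketch of the DFT-based integrator of Eqs. (2)-(3) might look as follows, assuming even N and, for simplicity, zeroing the DC and Nyquist coefficients (the text assigns the Nyquist bin a real value):

```python
import numpy as np

# DFT-based integration: multiply the signal spectrum by the sampled ideal
# 1/f response N/(i*2*pi*r) and transform back.
def dft_integrate(a):
    n = len(a)
    A = np.fft.fft(a)
    K = np.zeros(n, dtype=complex)
    r = np.arange(1, n // 2)
    K[r] = n / (1j * 2 * np.pi * r)
    K[n - r] = np.conj(K[r])                  # K_r = -K_{N-r} for imaginary K
    # divide by n: Eq. (3) integrates w.r.t. the sample index; here x = k/n
    return np.real(np.fft.ifft(K * A / n))

# integrate the derivative of a band-limited periodic signal
n = 256
x = np.arange(n) / n
f = np.sin(2 * np.pi * 5 * x) + 0.3 * np.cos(2 * np.pi * 17 * x)
df = 2 * np.pi * 5 * np.cos(2 * np.pi * 5 * x) \
     - 0.3 * 2 * np.pi * 17 * np.sin(2 * np.pi * 17 * x)
rec = dft_integrate(df)
rec += f[0] - rec[0]                          # free integration constant
print(np.max(np.abs(rec - f)))                # exact up to round-off
```

For band-limited periodic signals the filter coincides with the ideal integrator on every occupied bin, which is why the method serves as the "gold standard" here.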
3 DCT based integrator
Although the DFT-based integration (FI) method is the closest approximation to the continuous integrator, it suffers from boundary effects, since it implements a cyclic convolution rather than a shift-invariant convolution. Boundary effects exhibit themselves in the form of oscillations around signal discontinuities that may occur between samples at the beginning and the end of the available signal realization. One can substantially decrease the
influence of boundary effects by means of signal extension to double length with its "mirror reflected" copy: $\tilde a_k = a_k$, $k = 0, 1, \ldots, N-1$; $\tilde a_k = a_{2N-1-k}$, $k = N, \ldots, 2N-1$. Such an extended signal $\{\tilde a_k\}$, by definition, has no discontinuities at its end or in the middle and can be used in the DFT-based integration method instead of the initial signal. The frequency response of the integration filter in this case is defined as

$$K_r^{int} = \frac{N}{i\pi r},\ r = 1, \ldots, N-1;\qquad K_r^{int} = -K_{2N-r}^{int},\ r = N+1, \ldots, 2N-1;\qquad K_0^{int} = 0.\qquad(4)$$

The value of $K_N^{int}$ is inessential, as the $N$-th DFT spectral coefficient of symmetrical signals such as $\{\tilde a_k\}$ is equal to zero. Note that in this case of doubled signal length the degree of approximation of the ideal "1/f" frequency response of the continuous integrator is even better than that of the above DFT-based method, because the doubling of the number of signal samples results in a twice denser grid of samples of the frequency response. The doubling of the number of signal samples in this implementation of the DFT-based method does not necessarily double the computational complexity. It is known that the 2N-point DFT convolution of signals obtained by mirror-reflection extension can be carried out using fast algorithms of the Discrete Cosine Transform and of the associated Discrete Cosine/Sine Transform for signals of N samples [1]. We will refer to this modified DFT-based integration method as the "extended", or DCT-based, method.
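A sketch of the extended method via an explicit mirror reflection (the separate handling of the mean value as a linear ramp is our implementation choice, not spelled out in the text):

```python
import numpy as np

# "Extended" (DCT-based) integration: mirror-reflect the sampled derivative
# to length 2N so the periodic extension has no jump, apply the 1/f filter
# with the kernel of Eq. (4), K_r = N/(i*pi*r), and keep the first N samples.
def extended_integrate(a, dx):
    n = len(a)
    ext = np.concatenate([a, a[::-1]])       # ~a_k = a_{2N-1-k} for k >= N
    m = ext.mean()                           # the DC part integrates to a ramp
    A = np.fft.fft(ext - m)
    K = np.zeros(2 * n, dtype=complex)
    r = np.arange(1, n)
    K[r] = n * dx / (1j * np.pi * r)         # Eq. (4), scaled by the sample step
    K[2 * n - r] = np.conj(K[r])
    osc = np.real(np.fft.ifft(K * A))[:n]
    return osc + m * dx * np.arange(n)

n = 256
x = np.arange(n) / n                         # sample step dx = 1/n
f = np.sin(2.7 * np.pi * x)                  # non-integer number of periods
df = 2.7 * np.pi * np.cos(2.7 * np.pi * x)
rec = extended_integrate(df, 1.0 / n)
rec += f[0] - rec[0]                         # free integration constant
print(np.max(np.abs(rec - f)))               # small; no strong boundary ripple
```

The plain cyclic DFT integrator would see a jump between the last and first samples of this aperiodic signal; the mirrored copy removes that jump, which is the whole point of the extension.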
4 Experimental comparison
4.1 Integration of periodical sinusoidal signals
In these experiments, sinusoidal signals were used as test signals, since the analytical integration result for such signals is easily found. The test signals and their derivatives are generated as, correspondingly,

$$f_0(x) = \cos\!\left(2\pi\frac{p}{N}x + \varphi\right);\qquad f_0'(x) = -2\pi\frac{p}{N}\,\sin\!\left(2\pi\frac{p}{N}x + \varphi\right),\qquad(5)$$
where $N$ is the number of signal samples, $p$ is the frequency parameter of the sinusoidal signal, $\varphi$ is an initial random phase and $x$ represents the domain of the signal ($0 \le x \le N$). The frequencies are selected to have an integer number of periods in the signal length. The different integration methods were applied to the derivatives and then the root mean square error between the results of analytical and numerical integration was found. In the experiments, pseudo-random phase data were generated using a pseudo-random number generator and the error was averaged over 1000 realizations. In Fig. 2a, b the average integration error for each of the studied integration methods is presented as a function of the normalized frequency $2p/N$ of the sinusoidal signal, for $N = 256$.
Fig. 2. Integration error of periodical sinusoidal signals as a function of the normalized frequency: a) for all methods; b) only for DFT-based, trapezoidal and cubic spline methods
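The experiment behind Fig. 2 can be reproduced in miniature for two of the methods (fewer trials and only a few frequencies, so the RMS values are qualitative only):

```python
import numpy as np

# RMS error of trapezoidal vs. DFT-based integration of sampled derivatives
# of sinusoids with an integer number of periods, averaged over random phases.
rng = np.random.default_rng(0)
n = 256

def dft_int(a):                               # 1/f filter as in Eqs. (2)-(3)
    A = np.fft.fft(a)
    K = np.zeros(n, dtype=complex)
    r = np.arange(1, n // 2)
    K[r] = n / (1j * 2 * np.pi * r)
    K[n - r] = np.conj(K[r])
    return np.real(np.fft.ifft(K * A))

def trapz_int(a):
    return np.concatenate(([0.0], np.cumsum((a[:-1] + a[1:]) / 2)))

def rms_err(p, method, trials=50):
    errs = []
    for _ in range(trials):
        phi = rng.uniform(0, 2 * np.pi)
        k = np.arange(n)
        f = np.cos(2 * np.pi * p * k / n + phi)
        df = -(2 * np.pi * p / n) * np.sin(2 * np.pi * p * k / n + phi)
        rec = method(df)
        rec -= rec.mean() - f.mean()          # compare up to a constant
        errs.append(np.sqrt(np.mean((rec - f) ** 2)))
    return np.mean(errs)

for p in (8, 64, 120):                        # low, medium, high frequency
    print(p, rms_err(p, dft_int), rms_err(p, trapz_int))
```

As in Fig. 2, the Fourier method stays at round-off level for periodic signals, while the trapezoidal error grows strongly towards the Nyquist frequency.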
From Fig. 2, one can see that while all methods give a similarly low error at low frequencies, the different integration methods exhibit different behaviour as the frequency of the sinusoidal signal increases. For example, for a frequency equal to 2/3 of the maximum frequency, the 3/8 Simpson method gives a very high error. A similar behaviour occurs for the Simpson method when the frequency is near the signal's maximal frequency as defined by the sampling rate. Figures 2a and b show that the trapezoidal method and the method based on cubic spline interpolation give similar errors over the whole frequency spectrum, lower for the cubic-spline-based method. The error produced by the Fourier integration method for the considered periodical signals is determined only by computational round-off errors.
4.2 Aperiodical signals and boundary effects
In order to study the boundary effects for the best methods (the DFT-based method (FI), the DCT-based method (Extended method) and the method based on the interpolation of the slope data by cubic splines (CSI)), another numerical experiment was carried out with sinusoidal signals of a non-integer number
of periods. The number of periods is given by the parameter p in Eq. (21). For different integer frequencies pint , sinusoidal signals were generated with parameter p p int s / n , s 0,..., n with values equally spaced in the interval p int , p int 1 . For each pint , the root mean square integration on average over s was found. Figure 3 (a) shows experimental results for the integration error within first 10 signal samples for the three methods, the signal normalized frequency Q = 0.273, the number of signal samples N =256 and the number of divisions of the frequency interval n=20. From the figure one can see that the boundary effects are more severe for FI method than for the CSI method. DCT-based (Extended) method shows errors that are very close though slightly larger then those for the cubic spline method. One can also appreciate that the boundary effects practically disappear after 10-th signal sample. The sample-wise integration error obtained for signal frequency increased to Q = 0.547 is shown in Figure 3(b). In this case the boundary effect error for DFT-based and spline methods are similar while the error for DCT-based (Extended) method is substantially lower. These boundary effects also last only approximately in the first 10 pixels. By comparing these errors with those for the low frequency region one can see that the boundary errors in the first 10 pixels increased for all methods and that for DCT-based (Extended) method they are the lowest. In the stationary region (beyond the 10 pixels), the error for the CSI method is, in agreement with Figs. 1 and 2, higher than that for both DFT-based (FI) and DCT-based methods For higher initial high frequency, Q = 0.820, the error produced by the three studied methods is shown in Figure 3(c). From the figure, one can see that the errors for the CSI method are much higher than those for both DFT and DCT-based methods. In the CSI method, the stationary errors predominate those due to boundary effects. 
One can also see that, for the DFT-based (FI) and DCT-based methods, boundary effects last the same first 10 pixels and that their values are higher than in the low- and medium-frequency regions.
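The boundary-effect behaviour described above can be reproduced with a minimal sketch of a DFT-based (FI) integrator; the signal parameters follow the text, while the helper name dft_integrate is ours:

```python
import numpy as np

def dft_integrate(dy):
    """DFT-based (Fourier) integration: divide each spectral coefficient
    by i*2*pi*u (u = normalized frequency); the DC term is left at zero,
    which fixes the integration constant to a zero-mean result."""
    N = len(dy)
    u = np.fft.fftfreq(N)
    H = np.zeros(N, dtype=complex)
    H[u != 0] = 1.0 / (1j * 2 * np.pi * u[u != 0])
    return np.fft.ifft(np.fft.fft(dy) * H).real

N, Q = 256, 0.273                                # values used for Fig. 3(a)
k = np.arange(N)
dy = 2 * np.pi * Q * np.cos(2 * np.pi * Q * k)   # derivative of sin(2*pi*Q*k)
y = dft_integrate(dy)
ref = np.sin(2 * np.pi * Q * k)
err = np.abs((y - y.mean()) - (ref - ref.mean()))
# for non-integer Q*N the largest errors sit near the domain boundaries
```

When Q·N is an integer the implicit periodic extension is continuous and the error drops to numerical precision, which is why the experiment averages over the fractional frequencies s/n.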
Fig. 3. Experimentally obtained integration error versus sample k for the DFT-based (FI) method (black), the CSI method (red), and the DCT-based (Extended) method (blue). Normalized initial frequency: (a) Q = 0.273, (b) Q = 0.547, (c) Q = 0.820
Finally, in order to check how the boundary effects depend on the number N of signal samples, the previous numerical experiments were repeated for different signal lengths. Figure 4 shows the error obtained in the first 10 samples for N = 256, 512 and 1024. The initial normalized frequency in this experiment was Q = 0.547. From the figure, one can conclude that the boundary errors are similar in all cases and that they do not last more than about 10 samples, independently of the signal length. Yet another important conclusion follows from these data: the boundary errors for the DCT-based method rapidly decrease as the number of signal samples increases.
Fig. 4. Average error evaluated in the first 10 samples of the domain for the same initial normalized frequency (Q = 0.547) but different N: (a) N = 256, (b) N = 512, (c) N = 1024.
4.3 Resolving power of integrators
The resolving power of integrators characterizes their capability to distinguish between close sharp impulses in the integrated data. Although it is fully defined by the integrator frequency responses, it is much more straightforward to compare the resolving power of different integrators directly in the signal domain. Figure 5 illustrates the results of a numerical evaluation of the capability of three types of integrators (trapezoidal, cubic spline and DFT-based) to reproduce two sharp impulses separated by one sample, for the case when the second impulse is
half the height of the first impulse. The signals are 8-times sub-sampled to imitate the corresponding continuous signals at the integrator output.
Fig. 5. Theoretical profile (a) and profiles integrated with the trapezoidal rule (b), the cubic spline method (c), and the DFT method (d). All are shown sub-sampled.
The figure clearly shows that the tested integrators differ in their resolving power. The DFT-based integrator produces the sharpest peaks with the lowest valley between them, while the cubic spline and trapezoidal integrators exhibit poorer behavior. In particular, the latter seems incapable of reliably resolving the half-height impulse against the oscillations seen in the sub-sampled signal.
5 Acknowledgments We would like to thank M. J. Yzuel for useful discussions. This work was partially financed by the Ministerio de Ciencia y Tecnología under project BFM2003-006273-C02-01, and by the European Community project CTB556-01-4175. L. Yaroslavsky acknowledges grant SAB2003-0053 from the Spanish Ministry of Education, Culture and Sport.
6 References
1. L. Yaroslavsky, Digital Holography and Digital Image Processing (Kluwer Academic Publishers, 2004)
2. J. H. Mathews and K. D. Fink, Numerical Methods Using MATLAB (Prentice-Hall, Englewood Cliffs, N.J., 1999)
3. C. Elster, I. Weingärtner, "High-accuracy reconstruction of a function f(x) when only df(x)/dx is known at discrete measurement points," Proc. SPIE 4782 (2002)
Invited Paper
Fringe analysis in scanning frequency interferometry for absolute distance measurement
Pierre Pfeiffer, Luc Perret, Rabah Mokdad and Bertrand Pecheux
University Louis Pasteur Strasbourg / ENSPS, Laboratoire des Systèmes Photoniques, Bld S. Brant, 67400 Illkirch, France
e-mail: [email protected]
1 Introduction
Absolute distance interferometry (ADI) has been thoroughly studied during the last decades. Various methods using wavelength-shifting techniques with dye lasers or laser diodes have been extensively described, and in recent years external cavity lasers (ECL) have been introduced. Furthermore, different detection schemes have been successfully tested. One of them is the fringe counting technique /1,2/, which provides good results. Unfortunately, it cannot be implemented with multiple targets, which produce multi-frequency components generated by interference of the different reflected beams. Another technique is the Fourier transform technique (FTT) proposed by Suematsu and Takeda /3/. It consists in windowing the data set, calculating the Fourier transform, filtering the frequency of interest, and finally calculating the inverse Fourier transform to extract the frequency of interest from the phase of the filtered signal. This technique is sensitive to amplitude and frequency modulation in the signal, but when implemented as described above, slow variations of amplitude are eliminated. Numerical simulations by Talamonti et al. /4/ demonstrated the sensitivity of this technique to windowing, amplitude modulation and frequency modulation. The technique can theoretically reach uncertainties of 10⁻⁷ for perfectly recorded data. They also pointed out that scanning non-linearities are another factor limiting the measurement accuracy. In a more recent paper, we proposed an autoregressive (AR) method /5/ to extract the frequency of the beat signal. Such methods are known to have fine resolution and to perform well for short data records. Good results were obtained with this method when a laser diode was modulated by a sawtooth current for wavelength tuning. In this paper
we will analyse results from a Burg autoregressive method and from the Fourier transform technique. We will also show how non-linearities deteriorate the accuracy, and we will present some results from a two-target interferometer.
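The FTT pipeline outlined above (window, FFT, band-pass around the beat peak, inverse FFT, frequency from the phase slope) can be sketched as follows; a Hanning window and a rectangular band-pass stand in for the Gaussian window and Blackman filter used later in the paper, and all names and signal parameters are ours:

```python
import numpy as np

def ftt_frequency(x, half_bw=0.02):
    """Estimate the dominant normalized frequency of a fringe signal by the
    Fourier-transform technique: window, FFT, isolate the positive-frequency
    peak, inverse FFT, then fit the slope of the unwrapped phase."""
    N = len(x)
    xw = (x - x.mean()) * np.hanning(N)        # windowing limits leakage
    X = np.fft.fft(xw)
    f = np.fft.fftfreq(N)
    peak = np.argmax(np.abs(X[1:N // 2])) + 1  # positive-frequency peak
    mask = np.abs(f - f[peak]) < half_bw       # band-pass around the peak
    analytic = np.fft.ifft(np.where(mask, X, 0))
    phase = np.unwrap(np.angle(analytic))
    k = np.arange(N)
    sel = slice(N // 8, -(N // 8))             # discard window-distorted edges
    return np.polyfit(k[sel], phase[sel], 1)[0] / (2 * np.pi)

k = np.arange(4096)
beat = 1.0 + 0.8 * np.cos(2 * np.pi * 0.1234 * k + 0.7)   # synthetic beat signal
f_hat = ftt_frequency(beat)
```

In the actual setup this estimate would be formed for both the measurement and the reference beat signals, and their ratio taken.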
2 System description

2.1 The interferometric setup
We implemented a double interferometer system (fig. 1). The reference interferometer is a fiber-optic Mach-Zehnder. With fiber-optic technology, it is possible to achieve a longer optical path difference than with a non-guided technique. But the fiber has the drawback of being more sensitive to temperature changes than an interferometer manufactured from Zerodur. The drift of the optical path is limited by implementing a copper metal-clad fiber in the short arm; the metal-clad fiber has a thermal expansion coefficient similar to that of its metal coating. In our experiment, the optical path difference is 10 m long and we used a 22 cm copper-clad fiber to compensate the temperature drift.
Fig. 1. Setup of the system. L collimator, TLS tunable laser source, FC fibered coupler, RR1 and RR2 retro-reflectors, PDmeas and PDref photodiode detectors, Dmeas distance to be measured, ISO optical isolator.
2.2 Laser source
At a wavelength of 1.5 µm, the external cavity laser from Agilent can be tuned over 140 nm without mode hopping and with sweeping speeds as high as 40 nm/s. For ADI, continuous tuning is a prerequisite; otherwise a very high bias in the frequency estimation is introduced. Attention must be paid to
the start of the sweep, where mode hopping and larger non-linearities occur. These non-linearities are identified by a change of the duration between two successive periods. With the FTT, Talamonti et al. have demonstrated that frequency modulation (FM) noise degrades the accuracy at FM levels higher than 0.01% for recorded sequences in the range of 5000 to 25000 samples and a 70 GHz (0.14 nm) tuning range. Larger tuning ranges of 3-4 nm are necessary to reduce the sensitivity of the ADI setup. We have also identified the non-linearities as a main limiting factor, constituting the major component of systematic error in an autoregressive processing technique.

2.3 Absolute distance interferometry with wavelength sweeping
The setup (fig. 1) consists of two interferometers, a measurement Michelson interferometer with a moving target and the reference Mach-Zehnder interferometer. If Lref and Lmeas are respectively the optical path length difference of the reference and measurement interferometers, and fref and fmeas are the respective frequencies of the beat signals, the unknown length Lmeas is given by :
Lmeas = Lref · fmeas / (2 fref)   (1)

The synthetic wavelength is defined by ΛS = λiλf /(λi − λf), where λi is the start wavelength and λf is the end-of-sweep wavelength. The ratio of the synthetic wavelength to the optical wavelength, ΛS/λ, determines the
error magnification of an ADI and can be the major source of limitation, particularly for long-distance measurements. To keep this factor as low as possible, two conditions must be met: first, large tuning ranges without mode hopping, and secondly, a high wavelength tuning speed. Air turbulence, temperature drift of the optical path difference and vibrations are sources of length modification during the measurement and are amplified by this factor. An optical path change of one wavelength corresponds to an error of one synthetic wavelength. The ECL has a long coherence length and large tuning ranges, which decrease the synthetic wavelength and consequently reduce the sensitivity of the ADI system. Non-linearities of the scanning are expressed by the following formula:

Δλ = α0 t + Δα(t) t   (2)

where α0 is the linear coefficient and Δα(t) is the non-linear component of the sweeping speed. The non-linear component is generally expressed
by a polynomial depending on the type of laser, in which, at least for an external cavity laser, the first non-linear term is dominant.

2.4 Fringe processing
The Fourier transform technique generally suffers from leakage, which is inherently associated with the method: energy from one frequency spreads, or leaks, into adjacent frequencies. Windowing is the main origin of leakage and is inherent in finite-length data records. Without an integral number of fringes across both interferograms, discontinuities occur across the boundaries of the original function and its replicas, resulting in an error in the retrieved phase. By weighting the data through windowing to reduce their intensity at the edges of the sequence, leakage is minimized. We used a Gaussian window to reduce the leakage effect and a Blackman filter to select the frequency spectra (fig. 3a). The parametric modelling /6/ approach eliminates the need for windowing functions and for the assumption that the autocorrelation function is zero when the number of lags m is greater than the number of recorded points N. Parametric models can provide better frequency estimation than FFT-based methods do, particularly for short data records. The data sequence x(n) (for AR) is modelled by a function whose difference equation is:

x(n) = −Σ_{k=1..p} ak x(n−k) + w(n)   (3)
x(n) represents the output sequence, ak are the parameters of the model, p is the order of the model, and the input sequence w(n) is assumed to be zero-mean white noise with autocorrelation γww(m) = σw²δ(m), where σw² is the variance of the noise. We consider an autoregressive model where the {ak} parameters are obtained from the solution of the Yule-Walker equations. Since the matrix of the equations is Toeplitz, it can be efficiently inverted by use of the Levinson-Durbin algorithm. Burg proposed a method based on the minimization of the forward and backward errors of the linear predictors x̂(n), with the constraint that the AR parameters satisfy the Levinson-Durbin recursion. The Burg method is stable and computationally efficient, but is limited by line splitting, spurious peaks and frequency bias. The resolution of the method is set by the variance of the estimated frequency f̂ of the modelled signal, and is limited by the Cramér-Rao bound according to the following equation:

var(f̂1) ≥ 6σ² / [(2π)² A1² N(N² − 1)]   (4)
where A1²/σ² is the signal-to-noise ratio. As can be seen, the theoretical limit of the resolution, which is given by the square root of the variance, is inversely proportional to N^(3/2).
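The scaling stated above can be checked numerically; the sketch below merely evaluates the bound of Eq. (4) (function name ours):

```python
import numpy as np

def crb_freq_var(snr, N):
    """Cramer-Rao lower bound, Eq. (4), on the variance of the normalized
    frequency estimate of a single sinusoid in white noise; snr = A1^2/sigma^2."""
    return 6.0 / ((2.0 * np.pi) ** 2 * snr * N * (N ** 2 - 1))

# doubling the record length lowers the variance bound by a factor of ~8,
# i.e. the standard deviation improves as N**(3/2)
ratio = crb_freq_var(10.0, 1000) / crb_freq_var(10.0, 2000)
```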
3 Measurements and results
Non-linear sweeping is another major error source in ADI systems with frequency sweeping. Figure 2 shows how the dispersion of a measurement is affected by the sweeping speed. A faster tuning speed allows shorter measurement times, so that the drift of the optical path is kept as low as possible and the probability of disturbance by vibrations is limited. For our laser, the best results were obtained at 10 or 20 nm/s tuning speed.
Fig. 2. Dispersion of the measurements as a function of the sweeping speed.
The Fourier transform technique was implemented in the following way. For each position, several records of 2^16 samples were made. The target was moved on a linear stage over a length of 2 mm or 20 mm through twelve different positions. For short distances, measurements were made on an optical table to avoid the influence of vibrations; for longer distances, measurements were made on the floor. The performance of the method is evaluated by calculating the relative uncertainty, which is the standard deviation of a set of measurements with respect to the line (dashed in figure 4) that best fits the results. Despite the high dispersion of the values for a single measurement, as can be seen in figure 3b, the relative uncertainty (at 1σ) for the set of measured positions is of the order of 2×10⁻⁶, which is rather good. This result can be explained by
Fig. 3. (a) Spectra of the reference and the measurement signals; the dashed curve is the applied filter. (b) Histogram of the frequency ratio for a given measurement with the FTT.
Fig. 4. Experimental results for a target at a distance of 5 m: *** measured points, ooo average of the 2 records.
the symmetrical distribution of the frequency ratio of each measurement, so that the mean ratio stays unbiased. Further improvements can be obtained with longer records and an increased number of measurements to reduce errors through averaging. The AR Burg method was implemented by adding a moving average for a given record. The elimination of the bias on the frequency estimation due to the initial phase error requires records longer than 1000 samples. We have experimentally shown that the dispersion of one measurement was lowest for records of 20000 samples and a second-order AR method. For one measurement the dispersion is of the order of 10⁻⁴, which is better than that of the FTT by an order of magnitude. But the dispersion does not include the bias of the frequency estimation. To reduce the bias introduced by the AR method, we added a moving average, which consists in
segmenting the sequence and estimating the frequency for each segment. In order to increase the number of segments, a 95% overlap between them was implemented. A total of 50 segments was processed for each measurement. Some of our results are summarized in Table 1. The relative uncertainty achieved with the method remains an order of magnitude worse than that of the FTT. We explain these results by the bias introduced through non-linear wavelength sweeping and by a greater sensitivity of the non-optimized AR method to non-linearities.

Table 1. Relative uncertainty (×10⁻⁵) in relation with distance (measurements up to 10 m on an optical table, beyond 10 m on the floor)

Distance   2 m    5 m    7 m    8 m    10 m   12 m   20 m   25 m
FTT        0.15   0.11   0.2    0.25   0.14   0.2    0.5    1.5
AR         3      3      3.5    6      3      9      10     10
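The second-order Burg estimate with segment averaging described above can be sketched as follows; the recursion follows the standard Burg algorithm (cf. Kay /6/), and the synthetic signal, names and segment sizes are ours:

```python
import numpy as np

def burg_ar(x, p):
    """Burg's method: each reflection coefficient minimizes the summed forward
    and backward prediction error under the Levinson-Durbin constraint."""
    f = np.asarray(x, float).copy()   # forward prediction errors
    b = f.copy()                      # backward prediction errors
    a = np.array([1.0])               # AR polynomial [1, a1, ..., ap]
    for m in range(p):
        ff, bb = f[m + 1:], b[m:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        a = np.concatenate((a, [0.0])) + k * np.concatenate(([0.0], a[::-1]))
        fn, bn = ff + k * bb, bb + k * ff
        f = np.concatenate((f[:m + 1], fn))
        b = np.concatenate((b[:m + 1], bn))
    return a

def burg_frequency(x):
    """Normalized frequency from the positive-angle root of the AR(2) polynomial."""
    r = np.roots(burg_ar(x, 2))
    r = r[r.imag > 0][0]
    return np.angle(r) / (2.0 * np.pi)

# segment averaging: overlapping segments, one estimate each, then the mean
n = np.arange(20000)
x = np.cos(2 * np.pi * 0.123 * n + 0.4)        # noiseless synthetic beat signal
seg, step = 2000, 100                          # 95% overlap between segments
est = [burg_frequency(x[i:i + seg]) for i in range(0, len(x) - seg + 1, step)]
f_hat = np.mean(est)
```

Averaging the per-segment estimates reduces the phase-dependent bias of the individual Burg estimates, which is the role of the moving average in the text.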
To reduce the effect of the non-linear wavelength sweep with an AR method, a commonly used solution is to sample the measured signal with the reference signal: at each zero crossing of the reference signal, one sample of the object signal is acquired. The method was implemented by increasing the sample rate four times and resampling the object signal in software. To get enough samples per period, the distance of the target was reduced to one meter. The relative uncertainty (at 1σ) then decreases to 2×10⁻⁶ for two records per position.
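The zero-crossing resampling can be sketched as follows; the sweep law and beat frequencies below are illustrative, not the experimental values. Since both beat signals share the same sweep phase, sampling the object signal at reference zero crossings yields a uniform-frequency sinusoid whose normalized frequency is the frequency ratio:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400_000)
sweep = t + 0.15 * t ** 2                    # non-linear sweep common to both beats
ref = np.cos(2 * np.pi * 3000.0 * sweep)     # reference beat signal
obj = np.cos(2 * np.pi * 1110.0 * sweep)     # object beat (true ratio 0.37)

# upward zero crossings of the reference, refined by linear interpolation
neg = ref < 0
i = np.nonzero(neg[:-1] & ~neg[1:])[0]
tc = t[i] - ref[i] * (t[i + 1] - t[i]) / (ref[i + 1] - ref[i])
res = np.interp(tc, t, obj)                  # object signal sampled at the crossings

# after resampling, the sweep non-linearity cancels and a plain FFT peak
# recovers the frequency ratio
w = np.hanning(len(res))
spec = np.abs(np.fft.rfft((res - res.mean()) * w))
ratio = (np.argmax(spec[1:]) + 1) / len(res)
```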
4 Multi-beam interferometry
In the case of more than one target, the optical fields travel through three different paths and then combine through a beam-splitter cube (fig. 5a). The frequency of the beat signal arising from the longest arm is higher than that from the short arm. Both spectra are independent only if the beat signals are decorrelated from each other. This condition is satisfied if the optical path difference between the two arms is long enough that no frequency from one spectrum spreads into the other. Even weak overlap will introduce new frequencies and thus increase the dispersion and the bias of the estimated frequency. If this condition is met, processing with the FTT shows no degradation of the relative uncertainty. Figure 5b shows results with target A at 2 m and target B at 8 m.
Fig. 5. (a) Schematic arrangement of the multi-beam interferometer. (b) Dispersion of the frequency ratio for 2 records per position (relative uncertainty 2.6×10⁻⁶); target A is at a distance of 2 m.
5 Conclusion
The deterministic FTT gives good results, whereas the parametric method implemented needs proper setting of its parameters. Also, we observed no degradation of the uncertainty in the case of two targets with the FTT.
6 References
1. J. A. Stone, A. Stejskal, L. Howard, Absolute interferometry with a 670-nm external cavity diode laser, Appl. Opt. 38, 5981-5994 (1999)
2. J. Thiel, T. Pfeifer, M. Hartmann, Interferometric measurement of absolute distances of up to 40 m, Measurement 16, 1-6 (1995)
3. M. Suematsu and M. Takeda, Wavelength-shift interferometry for distance measurement using the Fourier transform technique for fringe analysis, Appl. Opt. 30, no. 28, 4046-4055 (1991)
4. J. J. Talamonti, R. B. Kay, D. J. Krebs, Numerical model estimating the capabilities and limitations of the fast Fourier transform technique in absolute interferometry, Appl. Opt. 35, 2182-2191 (1996)
5. R. Mokdad, B. Pécheux, P. Pfeiffer, P. Meyrueis, Fringe pattern analysis using a parametric method for absolute distance measurement with a frequency modulated continuous optical wave technique, Appl. Opt. 42, no. 6 (2003)
6. S. M. Kay, Modern Spectral Estimation: Theory and Application, PTR Prentice Hall, chapter 13
Surface Shape Measurement by Dual-wavelength Phase-shifting Digital Holography
Ichirou Yamaguchi, Shinji Yamashita, and Masayuki Yokota
Faculty of Engineering, Gunma University, Kiryu, Gunma 376-8515, Japan
1 Introduction
Noncontacting and quick measurement of surface shape is now strongly demanded in industry. Optical surfaces can be measured by a variety of interferometers that can be used in various circumstances. However, for diffusely reflecting surfaces, especially those having a depth exceeding the focal depth of an imaging system, conventional instruments using principles of geometrical optics, such as triangulation, light-sectioning, and grating projection, are difficult to apply. Confocal microscopes employing mechanical adjustment of focus can be used for these objects but require complicated optical systems. For measurement of three-dimensional objects, holographic contouring is suitable. In conventional holography, two-wavelength methods were proposed1,2. In optical reconstruction the image is covered with contour fringes whose sensitivity depends on the difference of the wavelengths. The fringes have to be recorded by a CCD and analyzed by a computer for quantitative analysis. Digital holography, which combines recording and analysis through CCD recording of holograms and computer reconstruction, saves the time and trouble of photographic processing as well as that of mechanical focusing on the reconstructed images3. Issues due to the much lower resolution of the CCD were substantially relaxed by phase-shifting digital holography, which uses both an in-line setup and a phase shift of the reference beam4. It was applied to surface contouring, where the difference of the phases recorded before and after a change of the incident angle on the object is derived5,6. Although this method is equivalent to the fringe projection method, it offers higher flexibility and more direct evaluation of the required quantities because phase values are processed numerically in the reconstruction. It was also applied to larger objects by employing an imaging setup, and it was combined with deformation measurement7.
A limitation of the method is that oblique illumination is needed, which induces a shadowing effect and a tilt of the reference plane for the contours. Phase unwrapping is also obstructed by these properties owing to the steeper phase slope. To overcome these limitations we propose here a method in which the wavelength of the illumination is shifted at normal incidence. The wavelength shift is provided by a change of the injection current of a laser diode. Although surface contouring based on wavelength shifts of many steps has been reported8,9, imaging setups and mechanical focusing are always needed there. The present method simplifies the setup and reduces the measurement time. We describe below the principle and experimental results.
2 Principle
The basic principle of digital holographic contouring is illustrated in Fig. 1. The wave vectors of the object illumination beam, changed successively, are represented by ka and kb. If we denote the wave vectors representing the observation direction by kao and kbo, the difference of the reconstructed phases corresponding to identical surface points is given by

ΔΦ(x, y) = (ksaz − ksbz + koaz − kobz) h(x, y) + (ksax − ksbx) x,   (1)

where we have assumed the incident plane to be parallel to the x-z plane and the reference plane for the surface height to be the x-y plane. The first term on the right-hand side of Eq. (1) is the phase difference proportional to the surface height, and the second term is the tilt component, which is eliminated in the wavelength-shift method by employing normal incidence. If we denote the wavelengths of the illumination at normal incidence by λa and λb and reconstruct each of the holograms with the same wavelength as in the recording, the phase difference is expressed by

ΔΦ(x, y) = 2(ka − kb) h(x, y) = 4π h(x, y)/Λ,   (2)

which represents the contours of the object height with a sensitivity given by the synthetic wavelength defined by

Λ = 1/(1/λa − 1/λb).   (3)
Fig. 1. Principles of dual wavelength contouring
In conventional dual-wavelength holographic interferometry, the reconstruction was performed using the same wavelength as one of the recording wavelengths; therefore a wavelength aberration was introduced, while in digital holography this is not the case. The height difference corresponding to a phase difference of 2π, called here the height sensitivity, is given by

Δh = Λ/2 ≈ λ²/(2Δλ),   (4)

where we put λa = λ and λb = λ + Δλ with Δλ ≪ λ for deducing the right-hand side.
3 Experiments
We conducted experiments using the setup shown in Fig. 2. A laser diode with a wavelength of 657 nm and an output power of 30 mW is collimated, divided by a cube beam splitter, and incident on the reference PZT mirror and the object. The light reflected from them is combined again by the same beam splitter and recorded by a CCD having 512×512 pixels with a pitch of 12.92 × 12.87 µm². The video signal is A/D converted at 8 bits and transferred to a personal computer, which calculates the complex amplitude from the three-step phase-shifted holograms. Numerical reconstruction is
Fig. 2. Experimental setup for contouring by dual wavelength phase-shifting digital holography
done by the single-FFT method. The wavelength change resulting from the change of the injection current is shown in Fig. 3. The wavelength shift is accompanied by a change in output power which, however, causes no serious effect in the reconstructed phase. If we change the current from 64 mA to 70 mA, the wavelength is shifted by 0.5 nm, which leads to a height sensitivity of Δh = Λ/2 = 432 µm.
Fig. 3. Wavelength and intensity variations against injection current
Figure 4 shows the experimental results for a Japanese coin positioned at a distance of 540 mm from the CCD. Fig. 4(a) displays the intensity of the reconstructed image, (b) shows the phase difference, and (c) the unwrapped phase difference. The height sensitivity is equal to 432 µm as mentioned above. There is neither a shadowing effect nor the fine phase structures caused by the additional tilt component that appeared in the dual-incident-angle method and increased the frequency of phase jumps. Hence the phase unwrapping leads to better results. The distribution of phase difference contains noise associated with the speckles appearing in the intensity image (a). The speckle size is given by the wavelength divided by the angular aperture of the CCD as seen from the object. We suppressed this noise by extracting, from each 2×2 matrix, only the point where the intensity is maximum. This filtering is based on the fact that the phase value is more reliable for higher amplitude. The compressed data are then smoothed by averaging over each 2×2 matrix, giving final data of 128×128 pixels. The cross-section along the upper white line in (c) is shown in Fig. 5(a). By averaging the cross-sections over 20 horizontal lines down to the lower white line in Fig. 4(c), we obtain the smoothed cross-section shown in Fig. 5(b). The fluctuation is caused by residual speckle noise, and its standard deviation in a flat region is about 100 µm. This value is a quarter of the sensitivity and about the same as obtained with the fringe projection method.
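The amplitude-weighted compression described above (keep the most reliable phase in each 2×2 block, then average over 2×2 blocks) can be sketched as follows; the function and array names are ours:

```python
import numpy as np

def speckle_filter(amp, phase):
    """From each 2x2 block keep the phase at the pixel of maximum amplitude
    (phase is more reliable where amplitude is high), then smooth the
    compressed map by a 2x2 block average (e.g. 512 -> 256 -> 128 per side)."""
    h, w = amp.shape
    blk = lambda z: (z.reshape(h // 2, 2, w // 2, 2)
                      .transpose(0, 2, 1, 3)
                      .reshape(h // 2, w // 2, 4))
    a, p = blk(amp), blk(phase)
    pick = np.take_along_axis(p, a.argmax(-1)[..., None], -1)[..., 0]
    hh, ww = pick.shape
    return pick.reshape(hh // 2, 2, ww // 2, 2).mean(axis=(1, 3))
```

Applied to a 512×512 amplitude/phase pair, this returns the 128×128 smoothed phase map used for the cross-sections of Fig. 5.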
Fig. 4. Results from a coin. (a) Reconstructed image. (b) Phase difference. (c) Unwrapped phase difference
Fig. 5. Cross-sections of the height distribution of Fig. 4(c): (a) before averaging; (b) after averaging over 20 horizontal lines
4 Discussions
The upper limit of the measurement range is governed by the imperfect cancellation of the random fluctuation of the reconstructed phases that is associated with speckle noise. In the focused image, speckles do not move as a result of the wavelength change. Hence the imperfect cancellation depends only on the speckle decorrelation caused by the wavelength shift at the reconstruction plane. This decorrelation is governed by the speckle displacement at the CCD plane as compared with the CCD size. If the incident angle and the observation angle are denoted by θs and θo, the speckle displacement at the center of the CCD is given10 by

AX = (Δk/k) [L0 (sin θs / cos θo + tan θo) + x] = (Δλ/λ) x,   (5)

where the right-hand side is valid for normal incidence and observation. For the amount of wavelength shift used above, the speckle displacement is much smaller than the CCD size even at the edge of the CCD, and thus we can ignore this effect.
5 Conclusions
A new method for surface contouring using digital holography has been proposed and verified by experiments. It takes the difference of the reconstructed phases recorded with two wavelengths separately. The wavelength shift was provided by a change of the injection current of a laser diode. With a wavelength shift of a fraction of a nanometer we realized a height sensitivity of a few hundred micrometers. Compared with the previous method, which changes the incident angle, it is free from shadowing and from the tilt component, making phase unwrapping less noisy. By combining the results of various wavelength shifts we might be able to compare the resulting phase differences and suppress the speckle noise; this would also overcome the difficulty in phase unwrapping and improve the accuracy of the measurement. The object size is limited by the wavelength divided by the angular pixel size of the CCD as seen from the object plane. Larger objects can be measured by using an imaging lens, and even the defocused region will be reconstructed sharply.
6 Acknowledgment The authors thank Dr. Jun-ichi Kato of RIKEN for providing the program for phase-unwrapping.
References
1. Hildebrand, B. P., Haines, K. A. (1967) Multiple-wavelength and multiple-source holography applied to contour generation, J. Opt. Soc. Am. 57: 155-162
2. Yonemura, M. (1985) Wavelength-change characteristics of semiconductor lasers and their applications to holographic contouring, Opt. Lett. 10: 1-3
3. Schnars, U. (1994) Direct phase determination in hologram interferometry with use of digitally recorded holograms, J. Opt. Soc. Am. A 11: 2011-2015
4. Yamaguchi, I., Zhang, T. (1997) Phase-shifting digital holography, Opt. Lett. 22: 1268-1270
5. Yamaguchi, I., Kato, J., Ohta, S. (2001) Surface shape measurement by phase-shifting digital holography, Opt. Rev. 8: 85-89
6. Yamaguchi, I., Ohta, S., Kato, J. (2001) Surface contouring by phase-shifting digital holography, Optics and Lasers in Engineering 36: 417-428
7. Yamaguchi, I., Kato, J., Matsuzaki, H. (2003) Measurement of surface shape and deformation by phase-shifting image digital holography, Opt. Eng. 42: 1267-1271
8. Takeda, M., Yamamoto, H. (1994) Fourier-transform speckle profilometry: three-dimensional shape measurement of diffuse objects with large height steps and/or spatially isolated surfaces, Appl. Opt. 33: 7829-7837
9. Yamaguchi, I., Yamamoto, A., Yano, M. (2000) Surface topography by wavelength scanning interferometry, Opt. Eng. 39: 40-46
10. Yamaguchi, I., Kobayashi, K., Yaroslavsky, L. (2004) Measurement of surface roughness by speckle correlation, Opt. Eng. 43: 2753-2761
Phase-shifting interferometric profilometry with a wide tunable laser source
Yukihiro Ishii1, Ribun Onodera2, and Takeshi Takahashi2
1 Department of Applied Physics, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan
2 Department of Electronic System Engineering, University of Industrial Technology, 4-1-1 Hashimotodai, Sagamihara, Kanagawa 229-1196, Japan
1 Introduction
Many interferometric measuring systems with a wavelength-shifted laser source have been developed [1]. Phase measurement alone would appear to restrict us to optical path differences of no more than half a wavelength. Phase shifts can, however, be produced by changing the optical frequency of a tunable laser in an unbalanced interferometer. The measured phase shifts then provide a measuring range from a few micrometers to a few millimeters for range and profile measurements, with a measurement accuracy inversely proportional to the frequency change of the tunable laser. A most promising source is the tunable CW Ti:sapphire laser, whose tuning range of several tens of nanometers is adequate for phase-shifting range-measurement interferometry. Distances have been measured from the phase shift itself [2,3], from the period of the beat signal produced by a frequency-ramped laser diode (LD) [4], from two consecutive odd harmonics in the interference signal under sinusoidal phase modulation [5], and from the frequency of a temporal fringe signal analyzed by the Fourier-transform technique [6]. To obtain high accuracy together with a wide dynamic range, a laser with a large frequency tuning range has become an important light source. Wavelength-tunable dye lasers have been used for noncontact measurement of optical thickness [7]. Several interferometric methods employ external-cavity tunable laser diodes, e.g. optical coherence tomography [8] and wavelength-shift speckle profilometry [9]. The sum of the initial phase and the phase shift caused by the wavelength change of an LD can be subtracted from the phase measured with a PZT phase shifter, yielding a profile measurement [10].
In this paper we present a phase-shifting interferometer using a tunable Ti:sapphire laser for profile measurement. The phase shift produced by wavelength tuning of the Ti:sapphire laser source is measured by the Schwider-Hariharan technique [11,12] using five measurements with equal phase shifts. The profile is calculated from the measured phase shifts, the phase measured by a four-step phase-shifting method driven by a PZT, and the emitting wavelengths of the light source, which are monitored by an optical spectrum analyzer. Three-dimensional (3-D) profile measurements of a discontinuous step mirror are presented using the tunable Ti:sapphire phase-shifting interferometer.
2 Measurement
In an unbalanced Twyman-Green interferometer with an optical path difference (OPD) of d, shown in Fig. 1, the interference intensity is

I = I_0 \left[ 1 + \gamma \cos\phi \right],   (1)
where I_0 is the bias intensity, \gamma is the visibility, and \phi is the interference phase given by \phi = 2\pi d/\lambda. If we change the wavelength \lambda by \delta\lambda, a phase shift can be introduced in the interferometer [1], such that
\delta\phi = \frac{2\pi d\,\delta\lambda}{\lambda^2}.   (2)
The optical path difference d can be measured from the phase shift, giving
d = \frac{\lambda^2\,\delta\phi}{2\pi\,\delta\lambda}.   (3)
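As a quick numerical check of Eq. (3), a minimal sketch using the wavelength and tuning step quoted later in Secs. 3-4 (\lambda = 780 nm, \delta\lambda = 0.62 nm) and the top-surface phase shift of 250.2°; whether d denotes a single- or round-trip path is an assumption made here:

```python
import math

def opd_from_phase_shift(lam, dlam, dphi):
    """Eq. (3): d = lambda^2 * dphi / (2*pi*dlam)."""
    return lam**2 * dphi / (2 * math.pi * dlam)

# lambda = 780 nm, dlam = 0.62 nm, dphi = 250.2 deg (top surface of the blockgauge)
d = opd_from_phase_shift(780e-9, 0.62e-9, math.radians(250.2))
print(f"d = {d * 1e3:.2f} mm")   # ~0.68 mm, i.e. ~0.34 mm height if d is the round-trip path
```

The result is consistent with the 0.34 mm step height reported in Sec. 4 if d is interpreted as the double-pass path difference.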
The absolute distance can thus be obtained from the phase shift in conjunction with measurements of the wavelength \lambda and its change \delta\lambda. Here we measure the amount of the phase shift by the Schwider-Hariharan algorithm, in conjunction with the sub-fringe measurement by a PZT moved in four steps. It is assumed that the phase is shifted by \delta\phi between consecutive measurements; five steps then yield five equations, i.e.,
Fig. 1. A phase-shifting interferometric profilometer with a tunable Ti:sapphire laser. (Components in the schematic: frequency-stabilized He-Ne laser, CW Ti:sapphire laser, reference mirror (mirror image), 3-D object, half mirror, lenses, objective, PZT with OPD d/2, optical spectrum analyzer (OSA), CCD, computer, and an interferogram fringe monitor.)
I_j = I_0 \left[ 1 + \gamma \cos\{\phi + (j-3)\,\delta\phi\} \right] \quad (j = 1, \dots, 5).   (4)
From Eq. (4), the phase shift \delta\phi is calculated by
\sin\delta\phi = \frac{2I_3 - I_5 - I_1}{2(I_2 - I_4)}\,\tan\phi ,   (5)

\cos\delta\phi = \frac{I_1 - I_5}{2(I_2 - I_4)} ,   (6)

\delta\phi = \arctan\left[ \frac{2I_3 - I_5 - I_1}{I_1 - I_5}\,\tan\phi \right].   (7)
The phase \phi is extracted by the common four-step algorithm, \phi = \arctan\{(I_2' - I_4')/(I_1' - I_3')\}, from additional intensities I_1' to I_4' at phase steps of 0, \pi/2, \pi, and 3\pi/2. The measurement sensitivity in the phase \phi is of sub-fringe order and depends directly on the test surface profile d(x,y), so \phi must be measured with the PZT phase shifter. The profile d(x,y) can then be computed from Eqs. (3) and (7).
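The two extraction steps, the four-step formula for \phi and Eq. (7) for the phase shift, can be sketched on synthetic data following Eq. (4). The sign convention of the PZT steps below is chosen so that the four-step formula returns +\phi; this convention is an assumption, since the hardware sign is not specified in the paper:

```python
import numpy as np

def four_step_phase(I1p, I2p, I3p, I4p):
    """Four-step algorithm: phi = arctan[(I2' - I4')/(I1' - I3')]."""
    return np.arctan2(I2p - I4p, I1p - I3p)

def phase_shift_eq7(I1, I2, I3, I4, I5, phi):
    """Eq. (7): dphi = arctan[(2*I3 - I5 - I1)/(I1 - I5) * tan(phi)]."""
    return np.arctan((2*I3 - I5 - I1) / (I1 - I5) * np.tan(phi))

# Synthetic frames, Eq. (4): I_j = I0[1 + gamma*cos(phi + (j-3)*dphi)]
I0, gamma, phi_true, dphi_true = 100.0, 0.8, 0.7, np.radians(70.0)
I = [I0*(1 + gamma*np.cos(phi_true + (j - 3)*dphi_true)) for j in range(1, 6)]
# PZT frames with steps 0, pi/2, pi, 3*pi/2 (subtracted, so the formula yields +phi)
Ip = [I0*(1 + gamma*np.cos(phi_true - k*np.pi/2)) for k in range(4)]
phi = four_step_phase(*Ip)
dphi = phase_shift_eq7(*I, phi)
print(np.degrees(dphi))   # ~70, the phase shift applied to the five frames
```

Note that the plain arctan of Eq. (7) only resolves phase shifts inside (-90°, 90°); larger shifts need the quadrant information of Eqs. (5) and (6).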
3 Error analysis in measurement
Fig. 1 shows the experimental setup of the phase-shifting interferometer for profile measurement. The light source is a CW Ti:sapphire laser (Showa Optronics Model LJ5-B) with a center wavelength of 780 nm and a tuning range of 300 nm. The output beam from the Ti:sapphire laser is coupled into a Twyman-Green interferometer, where one beam is reflected from a reference mirror and the other from a step-mirror object. The object mirror is imaged onto a CCD camera whose video signal is converted into an 8-bit signal by a frame grabber. The phase-shift extraction of Eq. (7) is performed by a computer at 480×512 sample points. The output power and the emitting wavelength are monitored during the measurement by a power meter and an optical spectrum analyzer with a resolution of 0.05 nm (Ando AQ-6315A), respectively. A frequency-stabilized He-Ne laser forms interference fringes in a fringe monitor to check long-term fringe drift caused by the PZT; instability due to the PZT movement can thus be compensated by its adjustment. The measured interference fringe of a tilted reference mirror is used to estimate the noise level in the interference intensity. The interference intensity with additive Gaussian noise n is rewritten from Eq. (4) as
I_j = I_0 \left[ 1 + \gamma \cos\{\phi + (j-3)\,\delta\phi + \varepsilon_j\} \right] + n \quad (j = 1, \dots, 5),   (8)
where the error in the phase-shift separation is defined by \varepsilon_j - \varepsilon_{j+1} = 2\alpha\,\delta\phi with 0 \le \alpha \le 1, and n is Gaussian noise generated from uniformly distributed random numbers. The measured one-dimensional (1-D) intensity profile across one tilt fringe in Fig. 2 is used to determine the bias intensity I_0, the modulation intensity I_0\gamma, and the period \Lambda of the tilt fringe, whose phase is written as \phi(x) = 2\pi x/\Lambda. The bias and modulation intensities are taken as half the sum and half the difference, respectively, of the maximum and minimum intensities averaged over the three pixels nearest each extremum. The fringe period is taken as half the sum of the separation of the maxima and the separation of the minima, each averaged over two extreme positions.
Fig. 2. One-dimensional intensity profile across one tilt fringe and the estimated cosine function (solid curve).

Fig. 3. Plot of the rms error in \delta\phi as a function of the phase shift (numerical simulation for \alpha = 0, n/(I_0\gamma) = 0.15).
The estimated cosine function of the 1-D fringe is shown as the solid curve in Fig. 2. From the estimated cosine function I(x) = 92 + 80\cos\{2\pi x/211 - 2\pi(104/211)\}, with the interference intensity quantized to 255 levels, the noise n in Eq. (8) has an rms value of n = 12 for \alpha = 0, i.e. n/(I_0\gamma) = 0.15. Since the choice of the phase shift is arbitrary, it is best chosen to minimize the rms error in \delta\phi. This numerical result is plotted in Fig. 3; the minimum rms error occurs for phase shifts centered at 90° and 270°.
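The Fig. 3 simulation can be reproduced in outline with a hedged Monte Carlo sketch using the fitted values I_0 = 92, I_0\gamma = 80 and noise rms n = 12. For simplicity the sub-fringe phase \phi is taken as known rather than measured with the PZT, and the quadrant of \delta\phi is resolved from both Eqs. (5) and (6); these are simplifying assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
I0, A, n_rms = 92.0, 80.0, 12.0   # fitted bias, modulation I0*gamma, noise rms (n/(I0*gamma) = 0.15)

def rms_error_deg(dphi_deg, trials=2000):
    """Monte Carlo rms error of the recovered phase shift, Eq. (8) with alpha = 0."""
    d = np.radians(dphi_deg)
    errs = []
    for _ in range(trials):
        phi = rng.uniform(0.3, 1.2)   # keep clear of the phi = m*pi/2 singularities
        I = [I0 + A*np.cos(phi + (j - 3)*d) + rng.normal(0.0, n_rms) for j in range(1, 6)]
        s = (2*I[2] - I[4] - I[0]) / (2*(I[1] - I[3])) * np.tan(phi)   # Eq. (5): sin(dphi)
        c = (I[0] - I[4]) / (2*(I[1] - I[3]))                          # Eq. (6): cos(dphi)
        est = np.degrees(np.arctan2(s, c)) % 360.0
        errs.append(((est - dphi_deg + 180.0) % 360.0) - 180.0)
    return float(np.sqrt(np.mean(np.square(errs))))

print(rms_error_deg(90.0), rms_error_deg(160.0))   # the error is markedly smaller near 90 deg
```

The error grows as \delta\phi approaches 0° or 180°, where the denominators I_2 - I_4 shrink, in qualitative agreement with the minima at 90° and 270° in Fig. 3.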
4 Experimental Results
Fig. 4 shows five tilted fringe patterns (left) obtained by stepwise changes of the wavelength, and the measured surface profile of a step blockgauge (right) whose floor plane lies at a mirror image (reference plane) of the interferometer. Five intensity patterns I_1, …, I_5 are measured sequentially with a stepwise wavelength increase of \delta\lambda = 0.62 nm of the Ti:sapphire laser, and four intensity patterns I_1', …, I_4' are measured sequentially with stepwise PZT movements giving equal \pi/2 phase shifts. In total, nine intensity frames are substituted into Eq. (7) such that
Fig. 4. Left: five interference patterns induced by the wavelength changes (\lambda - 2\delta\lambda, \lambda - \delta\lambda, \lambda, \lambda + \delta\lambda, \lambda + 2\delta\lambda). Right: measured profile of the step blockgauge object above the reference plane; top surface 0.34 mm (±0.01 mm) at \delta\phi = 250.2° (b), bottom surface 0.13 mm (±0.005 mm) at \delta\phi = 90° (c).
the three-dimensional range d(x,y) in Eq. (3) can be measured, as shown in Fig. 4. The top and bottom surfaces correspond to phase shifts \delta\phi = 250.2° and \delta\phi = 90°, which yield the rms phase errors indicated by arrows (b) and (c) in Fig. 3: 0.11 rad and 0.08 rad, respectively. The rms phase error on the top surface of Fig. 4 is much larger than on the bottom surface, so a corrugated noise surface appears at the top. Following Eq. (7), there is a singularity and indetermination when \phi = m\pi/2 (m integer), since the interference fringes used for the measurement are tilted. The step-height difference between the top and bottom of the blockgauge can be absolutely measured as 0.21 mm, with rms accuracies of ±0.01 mm (top surface) and ±0.005 mm (bottom surface).
5 Conclusion
A phase-shifting interferometric profilometer has been constructed using a Ti:sapphire laser tunable over 300 nm for three-dimensional profile measurement. The phase shift produced in five steps by wavelength tuning is measured with the Schwider-Hariharan technique, in which the sub-fringe phase is measured by four-step phase-shifting interferometry with a PZT movement. A step blockgauge object whose height is ~0.21 mm has been measured.
References
1. Y. Ishii, "Laser-Diode Interferometry," in Progress in Optics, E. Wolf, Ed. (Elsevier, Amsterdam, 2004), Vol. 46, pp. 243-307.
2. H. Kikuta, K. Iwata and R. Nagata, "Distance measurement by the wavelength shift of laser diode light," Appl. Opt. 25, 2976-2980 (1986).
3. O. Sasaki, T. Yoshida and T. Suzuki, "Double sinusoidal phase-modulating laser diode interferometer for distance measurement," Appl. Opt. 30, 3617-3621 (1991).
4. G. Beheim and K. Fritsch, "Remote displacement measurements using a laser diode," Electron. Lett. 21, 93-94 (1985).
5. A. J. den Boef, "Interferometric laser rangefinder using a frequency modulated diode laser," Appl. Opt. 26, 4545-4550 (1987).
6. M. Suematsu and M. Takeda, "Wavelength-shift interferometry for distance measurements using the Fourier transform technique for fringe analysis," Appl. Opt. 30, 4046-4055 (1991).
7. A. Olsson and C. L. Tang, "Dynamic interferometry techniques for optical path length measurements," Appl. Opt. 20, 3503-3507 (1981).
8. S. R. Chinn, E. A. Swanson and J. G. Fujimoto, "Optical coherence tomography using a frequency-tunable optical source," Opt. Lett. 22, 340-342 (1997).
9. H. J. Tiziani, B. Franze and P. Haible, "Wavelength-shift speckle interferometry for absolute profilometry using a mode-hop free external cavity diode laser," J. Mod. Opt. 44, 1485-1496 (1997).
10. J. Kato and I. Yamaguchi, "Phase-shifting fringe analysis for laser diode wavelength-scanning interferometer," Opt. Rev. 7, 158-163 (2000).
11. J. Schwider, R. Burow, K.-E. Elssner, J. Grzanna, R. Spolaczyk and K. Merkel, "Digital wavefront measuring interferometry: some systematic error sources," Appl. Opt. 22, 3421-3432 (1983).
12. P. Hariharan, B. F. Oreb and T. Eiju, "Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm," Appl. Opt. 26, 2504-2505 (1987).
Opto-Mechatronic System for Sub-Micro Shape Inspection of Innovative Optical Components for Example of Head-Up-Displays

Peter Andrä, Hans Schamberger, Johann Zänkert
[email protected]
LINOS Photonics GmbH & Co. KG, Isartalstraße 43, D-80469 München, Germany
1 Introduction
This paper describes an advanced concept for highly resolved shape measurement and inspection of large aspheric, especially free-formed, optical components, which are increasingly required for improved and innovative optical systems, e.g. head-up displays (HUD). A special opto-mechanical 3-D co-ordinate measuring system allows object-adapted measurements at different scales with nanometre depth resolution and variable lateral resolution (wide-scale measurement). The surface shape of INOs is measured pointwise, at macro- down to nano-scale, using an interferometric distance sensor that is machine-guided around the object along spherical paths. As a result, sub-micron accuracy can be achieved that meets the requirements of precision optics and is comparable to whole-field interferometry.
2 Optical and Tactile Shop Testing
Many methods for "optical shop testing" of spherical as well as aspherical lenses, such as interferometry (also with holograms), the Ronchi test, the Foucault test and so on, have been known partly for about 80 years [1], and some are still applied successfully today. A promising current approach is deflectometry [2], developed from the fringe projection technique for reflecting surfaces. The continued "deflectogrammetry" denoted by the author aims at self-calibration of fringe-reflecting set-ups, similar to photogrammetric methods.
Besides, different tactile surface measurement techniques, e.g. profile scanners (stylus instruments) and 3-D co-ordinate measuring machines (CMM), have been well established in industry for many years. Nanomeasuring machines [3] are now available for the inspection of microsystems. On the other hand, the modern precision optics industry, like other branches of industry, has to meet ever higher requirements on its optical systems. Such requirements refer mostly to new system functions, improved imaging quality (aberrations), smaller system size, fewer components, lower weight or price, and other system properties. The LINOS Photonics company is increasingly developing innovative optical elements (INO) and systems for different branches, e.g. head-up displays for the automotive and aircraft industries (Fig. 1). For HUDs, aspherical asymmetrical combiners or mirrors can reduce the number of optical components and generate sharp, undistorted images within the required eye motion box. INOs are aspherical and mainly asymmetric free-formed optical components (lenses, mirrors) made of glass, plastic or metal. The development of INOs requires integrated process chains including new or modified technologies for computer-aided design, manufacture and shape measurement [4].

Fig. 1. Two general types of HUD system: optics design for an automotive HUD with display, projector and aspheric combiner; sketch of a wide-angle HUD for airliners with projector and aspheric mirror.
3 Measurement Problem
A general challenge is the wide-scale shape measurement of large optical elements with high resolution, such as the HUD combiner with a diameter of around 250 mm and different curvatures in the x- and y-directions.
The macroscopic shape (sagitta z) of different optical surfaces, regarding radius of curvature and size (lens diameters up to 200 mm and more), has to be measured with sub-micron accuracy and nanometre resolution (1 nm / 100 mm = 10⁻⁸!). Additionally, free-formed surfaces are characterised by non-rotational symmetry and strong curvature. They can deviate considerably from spherical form (best-fit sphere), by up to several mm, and the shape still changes during the manufacturing process from one step to the next. Moreover, the great variety of optical components requires object-adapted surface measurements. The classical measuring techniques mentioned above can be applied only with some limitations during the development or series production of INOs: interferometry allows an uncomplicated inspection of spherical surfaces, but otherwise it needs expensive optical correction systems or computer-generated holograms (CGH) that suit only one aspherical element. Most tactile systems touch the surface in the z-direction (not perpendicularly) and account for the radius of the calliper sphere, but they cannot achieve a large z-range and interferometric accuracy at the same time. Important error sources are a large calliper range in the z-direction together with a strong surface slope at the lens border. Since optical 3-D sensors are limited to a defined depth resolution per area [5], a combined measuring concept for wide-scale inspection of technical free-form surfaces (scaled topometry) was developed [6]. Instead of applying different optical methods, the concept described in this paper combines the advantages of optical and mechanical solutions. Owing to the above limitations, an advanced opto-mechatronic measuring device for aspheric lenses [7] has now been modified at LINOS Photonics for the inspection of free-formed aspheric elements [4].
4 Interferometry-based 3-D Measuring System
The global basic shape of aspherical surfaces is mostly spherical (best-fit sphere). The idea is to measure their deviation in the radial direction R from that reference sphere with high resolution, instead of the whole z-height range of the lens as in tactile systems [7]. This is comparable to whole-field interferometry, but aspherical deviations have to be measured over a much larger range, up to some mm. This can be solved using a pointwise distance-measuring, e.g. interferometric, sensor which is machine-guided precisely around the object along defined paths on the reference sphere (Fig. 2a).
In contrast to usual tactile methods, this sensor can scan surfaces perpendicular to the reference sphere even at large slopes, which also reduces the measurement range and errors. The shape data are thus primarily acquired as 3-D polar co-ordinates: the radius R as the distance value to the reference sphere, generated by two rotations (\varphi, \vartheta), Fig. 2a.
Fig. 2. a) Principle: measuring form deviations as polar co-ordinates (lens rotation \varphi, sensor swivelling \vartheta, radius R about the centre of curvature M); b) 3-D shape measuring system with interferometric distance sensor (sensor on swivelling table 1, aspheric lens on rotation table 2, tilting table 3).
4.1 System Realisation
This approach is realised by a special opto-mechatronic set-up with a tactile miniature interferometric distance sensor (Fig. 2b) [7]. It executes a defined rotation (\varphi) of the lens to be measured and a swivelling (\vartheta) of the interferometric sensor about the centre of lens curvature M. The lens rotation generates circular scan paths as contour lines. All distance and rotation values are measured optically, using opto-electronic encoders or an interferometer. The swivelling \vartheta of the sensor about point M (at an arbitrary lens radius of curvature, Fig. 2a) is not realised directly, to avoid excessively long rotation levers when flat lenses are measured. Instead, the sensor is swivelled about axis 1, which is mounted on a tilting table 3 (Fig. 2b). The height position of point M (the intersection of axes 1 and 2), and hence the mechanical sphere radius, can be changed by tilting the swivelling table 1 with table 3.
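The polar acquisition geometry just described can be sketched in code. The spherical parametrisation below (rotation \varphi, swivel \vartheta, reference radius R0) and the vertex-referenced sagitta are illustrative assumptions; the exact machine parametrisation is not given in the paper:

```python
import numpy as np

def polar_to_cartesian(R, phi, theta, R0):
    """Convert primary polar data (deviation R along the reference-sphere normal,
    lens rotation phi, sensor swivel theta) into Cartesian co-ordinates."""
    r = R0 + R                                  # distance from the sphere centre M
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = R0 - r * np.cos(theta)                  # sagitta relative to the vertex (assumed)
    return np.column_stack([x, y, z])

# One circular scan path: 96 points at a fixed swivel angle, zero deviation
phi = np.linspace(0.0, 2*np.pi, 96, endpoint=False)
pts = polar_to_cartesian(np.zeros(96), phi, np.full(96, np.radians(5.0)), R0=390e-3)
print(pts.shape)   # (96, 3): all points lie on the R0 = 390 mm reference sphere
```

With zero deviation R, every converted point lies exactly on the reference sphere, which makes the mapping easy to verify.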
Thus INOs of different size and curvature radius, with convex, concave and plane surfaces, can be measured.

4.2 System Components and Properties
The main components of the measuring system are (Fig. 2b):
- optical-tactile distance sensor (Fig. 3a) with a miniature interferometer and a glass-fibre-coupled HeNe laser (\lambda ≈ 633 nm)
- lens rotary and sensor swivelling tables with air bearings and optical encoders
- PC and control system with a DOS real-time operating system.
The rotary and swivelling tables are equipped with air bearings having a mechanical true-running accuracy of about 20-30 nm. Both deviations together amount to less than \lambda/10, which limits the overall measurement accuracy of the system. The polar angle values are acquired with incremental encoders (accuracy 0.001°, resolution 0.0001°). The stable air-bearing bridge construction (Fig. 2b) ensures the necessary stability and insensitivity to vibrations during the measurement. The miniature distance sensor (length ~55 mm) consists of a replaceable tactile calliper and a small interferometer (Figs. 2b, 3a). The stiff stylus with a small sphere at its end is guided by two spiral springs. Because of the low touch force (~0.1 N), even sensitive plastic surfaces can be measured without damage. The calliper range is around 2 mm, which limits the measurable distance and hence the deviation of aspherical lenses. For the larger deviations especially of free-formed surfaces, a new calliper with a range of ~8 mm is currently being developed and evaluated. The displacement of the stylus during scanning is acquired by a phase-sensitive length-measuring interferometer of Michelson type (Fig. 3a). It provides intensity signals phase-shifted by 90° (I ∼ cos \pi L/\lambda), which are evaluated by an incremental counter card (resolution ~\lambda/1000). The used resolution of 1 nm and the basic noise of 3 nm are lower limits of the attainable sensor accuracy, which is much better than that of the rotary tables used. This tactile sensor can scan both reflecting and matt surfaces.
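The quadrature evaluation performed by such counter cards can be illustrated with a short sketch: two 90°-shifted signals allow the fringe phase, and hence the stylus displacement, to be tracked continuously. The signal model and the 2\pi phase scaling below are illustrative assumptions, not the card's actual processing:

```python
import numpy as np

lam = 633e-9                         # HeNe wavelength
L = np.linspace(0.0, 2e-6, 4001)     # simulated stylus displacement, 0..2 um
Ia = np.cos(2*np.pi*L/lam)           # in-phase signal
Ib = np.sin(2*np.pi*L/lam)           # quadrature (90 deg shifted) signal
phase = np.unwrap(np.arctan2(Ib, Ia))    # continuous fringe phase from the signal pair
L_rec = phase * lam / (2*np.pi)          # recovered displacement
print(np.max(np.abs(L_rec - L)))         # ~0 for these ideal, noise-free signals
```

The quadrature pair also resolves the direction of motion, which a single intensity signal cannot.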
Thus the system can be applied throughout the development and manufacture of INOs, after every processing step: from the first pre-processed ground glass lenses, which still have large shape deviations, up to final polished or blank-pressed lenses, injection-moulded plastic lenses, as well as precision-machined metal mirrors. The system concept together with the miniature sensor makes it possible to record unusually large apertures of concave and convex surfaces with sizes up to 250 mm.
Fig. 3. a) Image of the miniature sensor with tactile calliper and interferometer; b) calibration result: measuring deviation [\mu m] of a test glass used for a combiner (150 nm P-V).
5 3-D Shape Measurement Steps and Results
A complete shape measurement of the HUD combiner includes the adjustment and calibration of the described system using a test glass as reference, and the acquisition and evaluation of measuring data, including the comparison with the given surface target data (theoretical lens parameters or CAD surface data, e.g. NURBS).

5.1 Adjustment and Calibration
The set-up can be adjusted and calibrated very flexibly with respect to the size and basic radius of any aspherical surface (best-fit sphere). Accurate spherical test glasses of different radii of curvature are available as calibration references. Both preparation steps are mostly automated to save time. Any positive or negative sphere radius is adjusted by defined tilting (3) of axis 1 (Fig. 2b). Then the calliper has to be aligned into the vertical direction to coincide with rotation axis 2, positioned automatically by a linear table. Now the selected test glass with a suitable lens radius (e.g. R ≈ 390 mm for the combiner) can be placed onto the rotation table and adjusted in space so that the lens vertex is at the scanning start position. The
current system state, influenced by imperfect adjustment, has to be identified by calibration using this test glass. As a result, a possible radius correction of the mechanical spherical paths and the deviations of the measured test glass data from the spherical reference are computed. These can be composed of systematic measuring deviations as well as deviations of the test glass itself, which are usually smaller than \lambda/10. Fig. 3b shows a typical calibration result (P-V value 150 nm).

5.2 Shape Measurement
After calibration the system is ready for measuring the combiner shape (Fig. 1). The combiner is adjusted on the rotation table similarly to the test glass, with respect to the usable calliper range. The swivelled sensor normally scans the surface in contact from the centre to the border along circular paths. Additionally, free-formed INOs with non-circular, e.g. rectangular, contours such as the combiner should be measured up to the edge. This requires a new system control that swivels the sensor inwards when it is positioned near the border; besides, the adjustment of such INOs is somewhat harder. At the moment these system modifications are being implemented and tested (Fig. 5a). The system allows scanning strategies with variable field diameter, number of paths and measuring points. The combiner is normally scanned along 17 circular paths with 96 points each within one minute. After acquisition of the primary 3-D polar co-ordinates, the 3-D Cartesian co-ordinates of the combiner are determined using the system parameters identified before.

5.3 Shape Inspection
In principle such 3-D point clouds can be processed further, similarly to the inspection of optically measured mechanical components [8, 9]. They can be compared with given surface target data, i.e. parameters of regular surfaces or NURBS models (spherical and aspherical surface inspection), or be transformed into 3-D models (surface reconstruction, reverse engineering). Results of both ways are used in available CAE and optics design packages to improve optical elements and systems within complete process chains (modelling of lenses, simulation and analysis of the imaging system, form correction in manufacture) [4]. In optics the shape inspection is of great interest. The evaluation of symmetrical regular surfaces is a well-solved problem if these are described analytically by one or a few parameters. Such a result is similar to Fig. 3b, which shows as an example the deviation of a spherical test glass (R ≈ 390 mm). But the comparison of acquired asymmetrical surfaces with given NURBS data is still a problem if no additional reference points (RPS) or features can be defined; the surface orientation is then an ill-posed problem. The remaining method of surface best-fit matching is sensitive in the normal direction only, so surfaces cannot be aligned correctly in their tangential direction. The matching of free-formed surfaces that are close to a spherical or regular basic shape is badly conditioned and ambiguous, additionally depending on measuring or manufacturing errors of the lens. To improve the numerical optimisation and convergence of the algorithm nevertheless, a new solution has been developed [4] and applied to the evaluation of the pre-processed HUD combiner (Fig. 5b). The deviation of the measured combiner is about 270 nm P-V (~50 nm rms). It shows residual manufacturing errors that will be corrected in the following manufacturing step.
Fig. 5. a) Scanning paths for rectangular lens contours; b) pre-processed HUD combiner: measured shape deviation [\mu m] from target data (NURBS).
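As the simplest stand-in for the best-fit evaluation step, a linear least-squares sphere fit yields the radial form deviation from the best-fit sphere. This is an illustrative sketch on synthetic data; the authors' NURBS matching is considerably more involved:

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere: |p - c|^2 = r^2 rewritten as 2 p.c + (r^2 - |c|^2) = |p|^2."""
    A = np.column_stack([2*pts, np.ones(len(pts))])
    b = np.sum(pts**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = float(np.sqrt(sol[3] + c @ c))
    return c, r

# Synthetic check: samples on a cap of a 390 mm sphere with slight measurement noise
rng = np.random.default_rng(1)
th = rng.uniform(0.0, 0.3, 500)
ph = rng.uniform(0.0, 2*np.pi, 500)
pts = 390.0 * np.column_stack([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
c, r = fit_sphere(pts + rng.normal(0.0, 1e-4, pts.shape))
dev = np.linalg.norm(pts - c, axis=1) - r    # radial deviation from the best-fit sphere
print(round(r, 2))   # ~390.0 mm
```

Because the deviation is evaluated only along the sphere normal, this simple fit shares the tangential-alignment ambiguity discussed above for near-spherical free-formed surfaces.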
The described measuring system is applied for industrial inspection and quality control of novel free-formed INOs, e.g. HUD combiners, within complete process chains. The measured 3-D data and surface deviations can be used in the optical design and manufacturing process.
6 Acknowledgements We would like to thank the project partners of FINO, the German Federal Ministry for Education and Research (BMBF) and the Project Institution for Production and Manufacturing Technologies Forschungszentrum Karlsruhe (PTKA-PFT).
7 References
1. Malacara, D. (1978) Optical shop testing. John Wiley & Sons, Inc.
2. Petz, M., Tutsch, R. (2004) Reflection grating photogrammetry for the measurement of specular surfaces. tm-Technisches Messen 71: 389-397.
3. Jäger, G., Manske, E., Hausotte, T., Büchner, H.J. (2000) Laserinterferometrische Nanomessmaschinen. VDI-Report 1530: 15-18.
4. Zänkert, J.: Verbundprojekt FINO - Flexible Prototypen- und reproduzierbare Serienfabrikation innovativer Optikelemente.
5. Osten, W., Andrä, P., Kayser, D. (1999) Highly-resolved measurement of extended technical surfaces with scalable topometry. tm-Technisches Messen 66, Heft 11: 413-428.
6. Andrä, P., Ivanov, E., Osten, W. (1997) Scaled topometry - an active measurement approach for wide scale 3d surface inspection. In: Jüptner, W., Osten, W. (Eds.): Proc. Fringe '97, 3rd International Workshop, Akademie Verlag Berlin: 179-189.
7. Schamberger, H. (1999) Keep in touch. Feinwerktechnik & Messtechnik, Jahrg. 107/12: 43-45.
8. Osten, W. (2000) Application of optical shape measurement for the nondestructive evaluation of complex objects. Optical Engineering Vol. 39 No. 1: 232-243.
9. Andrä, P., Steinbichler, H., Maidhof, A., Lazar, M., Thoss, F. (2002) COMET VarioZoom - A novel sensor concept for flexible optical 3-D coordinate measurement. VDI-Report 1694: 285-292.
3-D profilometry with acousto-optic fringe interferometry

Xiang Peng and Jindong Tian
Institute of Optoelectronics, Key Laboratory of Optoelectronic Devices and Systems of Education Ministry of China, Shenzhen University, 518060 Shenzhen, P. R. China
1 Introduction
Three-dimensional (3-D) shape measurement attracts considerable interest from both academic and industrial sectors [1]-[2]. Profilometry based on phase mapping is among the methods for 3-D shape measurement. Projecting a periodic pattern upon an object and observing its deformation from a different viewing angle is a common principle of various 3-D shape measurement techniques, including the Moiré technique and Fourier-transform profilometry (FTP). Phase-mapping-based profilometry involves two basic operations. (1) Phase encoding: the spatial phase of the fringe pattern is modulated by the height variation of the object surface as the pattern is projected upon the test object. We refer to this process as phase encoding because the topographic information about the object surface is encoded into the deformed fringe pattern through the phase modulation. (2) Phase decoding: this process can be divided into two further sub-operations, phase evaluation and phase unwrapping. Phase evaluation can be performed either by phase-shifting techniques or by Fourier-transform-based techniques. However, phase unwrapping plays the more significant role in phase decoding, because it becomes a difficult task if the object surface has a complicated geometric shape or topology. For example, if the object surface has a portion with a steep slope, or there are holes or breakouts on the surface, most conventional phase unwrapping techniques will not work well. This limits the applicable scope of phase-mapping-based profilometry. To overcome the discontinuities, the basic idea is to change the sensitivity of the measurement system, which changes the fringe density. This
means that an integer number of fringe orders sweeps through the discontinuity. In order to extend the scope of phase-mapping-based techniques, making them applicable to a broader class of object surfaces, temporal phase unwrapping techniques [3]-[4] have been proposed to deal with object surfaces with discontinuities or surface isolations. In addition, spatial frequency-multiplexing techniques have also been proposed to adapt phase unwrapping to object surfaces with complex geometry and topology [5]-[6]. Besides research efforts on new phase unwrapping techniques, additional efforts have been made to develop new hardware suitable for profiling such object surfaces. For example, video projectors have been used to generate variable-sensitivity fringe patterns for structured-light illumination [7]-[8]. Recently, a so-called accordion fringe interferometry using an acousto-optic modulator (AO-AFI) has been proposed to reach video-rate image acquisition and 3-D arbitrary shape sensing [9]. As a structured-light active triangulation technique, AO-AFI overcomes many of the weaknesses of conventional active triangulation techniques: it provides absolute range measurements that are (1) computationally inexpensive, (2) at video rate, (3) full-field, and (4) unambiguous. However, in the AO-AFI scheme, amplitude-modulated (AM) laser power must be used to freeze the traveling interference pattern at a particular phase, adding complexity to the system; the effect is analogous to a stroboscope that lights up the moving fringe pattern only when it aligns with the desired spatial phase. In this paper, we present a new scheme for dynamic 3-D profilometry constructed with two acousto-optic deflectors (AODs). We refer to this new scheme as dual acousto-optic profilometry (DAOP).
The rest of this paper is organized as follows: Section 2 describes the working principle of the DAOP, Section 3 presents the experimental results obtained with the DAOP, and the last section gives the concluding remarks.
2 Method

2.1 Acousto-optic deflection
An acousto-optic cell utilizes the effect of Bragg diffraction of the laser beam incident on a volume grating. When an ultrasonic wave propagates through the crystal material, different regions of expansion and compression are created inside the Bragg cell, causing changes in density. The index of refraction is periodically modulated, and the medium becomes
equivalent to a moving grating, which can be described by the following equation
\Delta n(x, t) = \Delta n \, \sin(\omega_s t - k_s x)   (1)
where x is the position in the Bragg cell along the vertical axis, \omega_s is the sound angular frequency, and k_s is the sound wave vector, given by \omega_s / v, where v is the acoustic velocity. Diffraction of light by the ultrasonic wave is used to change the direction of the laser beam. The electromagnetic wave with angular frequency \omega and wave vector \vec{k} can also be considered to consist of photons (light particles) with energy and momentum. If one regards the acousto-optic interaction as a photon-phonon interaction, then, due to the conservation of energy and momentum, one has [10]
f_i = f_d - f_s   (2)

and

\vec{k}_i = \vec{k}_d - \vec{k}_s   (3)
For the isotropic case, when n_i = n_d, it can be shown that

\sin\theta_B = \sin\theta_i = \sin\theta_d = \frac{\lambda_l}{2\Lambda_s} = \frac{\lambda_l}{2 v_s} f_s   (4)

where \lambda_l = \lambda_0 / n is the wavelength of the light in the material itself and
\theta_B is the Bragg angle. For a uniaxial crystal, for diffraction in a plane perpendicular to the optical axis, we have n = n_o or n = n_e. Under the assumptions of n_i = n_d = n and small \theta_i and \theta_d, we can determine the deflection angle \theta_D by [10]
\theta_D = \frac{\lambda_0 f_s}{n v_s} = 2\theta_B   (5)
By differentiating (4) we obtain the spread in the Bragg angle, \Delta\theta_B:

\Delta\theta_B = \frac{\lambda_l}{2 v_s \cos\theta_B} \, \Delta f_s   (6)
This implies that changing the frequency of the acoustic wave changes the Bragg angle, and hence the deflection direction of the diffracted beam.

2.2 DAOD-based fringe projector
A fringe projector plays a central role in the Moiré technique and in Fourier-transform profilometry (FTP). Using the effect of acousto-optic deflection, we construct a fringe projector with dual acousto-optic deflectors. A schematic diagram of the dual-acousto-optic-deflector (DAOD) based fringe projector is shown in Fig. 1.
Fig. 1. Schematic diagram of DAOD-based profilometer
The DAOD is composed of the following units: a coherent light source, a beam splitter (BS), two acousto-optic deflectors (AODs), two optical wedges, a beam synthesizer, a lens, a microscope objective, the RF signals, and the direct digital synthesizer (DDS). The working principle of the DAOD can be described as follows: the incident light is divided into two parallel beams by the beam splitter, which are fed into the two acousto-optic deflectors (AODs), respectively. By adjusting the two optical wedges, we are able to compensate the deviations of the propagation directions of the two first-order diffraction beams from the AODs. The two output beams are then recombined and converged by the beam synthesizer and the lens to form an interference region on a micro scale. This micro interference region is expanded by a microscope objective and
thus projected upon the test object surface. The amplified RF signals used to drive the AODs are generated and controlled by the DDS. By changing the temporal frequency of the RF signal, we are able to generate spatial gratings with varying frequencies according to Eq. (6). In comparison with other fringe generators, the proposed DAOD projector has the following advantages: (1) static optical fringes with a sinusoidal profile can be obtained, (2) the sensitivity of the fringe pattern can be precisely changed and controlled, (3) no moving mechanical parts are needed, owing to the use of the acousto-optic interaction, (4) perturbation immunity is achieved because of the inherently symmetric configuration with equal optical paths, and (5) the optical sensitivity can be varied quickly thanks to the electronic control circuit.

2.3 DAOD-based fringe profilometer
A deformed fringe pattern is captured by a CCD camera and digitized via a frame grabber, as also shown in Fig. 1. Through the desktop computer, a time-sequence control can be realized that changes the sensitivity of the projected fringes by changing the temporal frequency of the RF drive signal. The deflection, i.e. the propagation direction of the first-order diffraction beam, can be adjusted by changing the temporal frequency of the RF signal according to Eq. (6). The spatial frequency and phase shift of the fringe patterns can be altered by switching the sound-wave frequency on a millisecond timescale, so that time-sequence fringe patterns with variable sensitivities can be generated much faster than the video-frame rate.
3 Experiment and results

The experimental setup of the DAOD-based optical profilometer is shown in Fig. 2. The whole setup is composed of a 'transmitter' and a 'receiver'. The main components of the transmitter are the light source, beam splitter, AODs, DDS, beam synthesizer, optical wedges, converging lens, and microscope objective. The laser source is a single-longitudinal-mode laser with a wavelength of 532 nm and 21 mW output power (SUTTECH G-SLM-020). The AODs used in the experiment are AO deflectors made by A.A OPTO-ELECTONIQUE and designed for a 532 nm laser. The central frequency is 80 MHz and the bandwidth is ±25 MHz, so that the AO laser frequency shift is 80 ± 25 MHz. The material of the acousto-optic birefringent crystal is tellurium oxide (TeO2), and the Bragg (incidence) angle is close to auto-collimation, i.e. approximately perpendicular to the input
face. The AO efficiency is larger than 70% for RF powers below 1 W. The active aperture of the AOD is 4.5 mm, while the laser beam diameter is smaller than 4 mm. The driver for the AOD is a direct digital synthesizer (DDS), OEM version, plus an amplifier. The communication time of the DDS is less than 40 ns and the rise/fall time is less than 10 ns. The desktop computer controls the DDS through a parallel port and 15-bit TTL logic. The receiver is composed of an imaging lens and a CCD sensor as well as a frame grabber. The control software is programmed to generate a time sequence that synchronizes the spatial frequency variation, phase shifting, and image acquisition. A time sequence of deformed fringe patterns with different optical sensitivities is thus obtained for later processing. A modified spatial-temporal phase-unwrapping algorithm [11] is applied to extract the phase maps at different resolution levels, which are then synthesized using a recursive scheme. In order to demonstrate that our approach is valid for object surfaces with steep slopes, we select a step-like object as the test object. The deformed fringe patterns with 25, 47, and 95 fringes are shown in Fig. 2 (a)-(c), respectively, and the corresponding unwrapped phase maps obtained with the spatial-temporal phase unwrapping algorithm are presented in Fig. 2 (d)-(f), respectively. Fig. 2 (g) shows the reconstructed object surface.
Fig. 2. Experimental results (a)-(g); each panel is explained in the text
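The coarse-to-fine idea behind temporal phase unwrapping [3]-[4] can be sketched on synthetic data (a minimal illustration: the step profile and the unit-sensitivity starting map are invented here, and this is not the modified algorithm of Ref. [11]; only the fringe counts 25, 47, 95 are reused):

```python
import numpy as np

def unwrap_step(phi_coarse, phi_fine_wrapped, ratio):
    """One coarse-to-fine step: scale the already-unwrapped coarse phase by
    the fringe-count ratio and use it to select the 2*pi order of the
    wrapped fine phase."""
    estimate = phi_coarse * ratio
    order = np.round((estimate - phi_fine_wrapped) / (2 * np.pi))
    return phi_fine_wrapped + 2 * np.pi * order

# Synthetic step-like profile; the finest map alone would be hopeless to
# unwrap spatially because of the height step at x = 0.5.
x = np.linspace(0.0, 1.0, 400)
true_fine = 2 * np.pi * 95 * (0.2 * x + 0.25 * (x > 0.5))

counts = [1, 25, 47, 95]   # a unit-sensitivity map plus the three densities
wrapped = [np.angle(np.exp(1j * true_fine * c / 95)) for c in counts]

phi = wrapped[0]           # less than one fringe over the field: unambiguous
for i in range(1, len(counts)):
    phi = unwrap_step(phi, wrapped[i], counts[i] / counts[i - 1])
```

Each step only has to resolve the integer fringe order, so the coarse estimate may be in error by a sizable fraction of a fringe without breaking the unwrapping.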
4 Conclusion

In conclusion, we have introduced a new scheme for dynamic 3-D profilometry that is constructed with two acousto-optic deflectors (AODs). We refer to this new scheme as dual acousto-optic profilometry (DAOP). The DAOP has several unique features: (1) static optical fringes with a sinusoidal profile can be obtained, (2) the sensitivity of the fringe pattern can be precisely changed and controlled, (3) no moving mechanical parts are needed, owing to the use of the acousto-optic interaction, (4) perturbation immunity is achieved because of the inherently symmetric configuration with equal optical paths, and (5) the optical sensitivity can be varied quickly thanks to the electronic control circuit. The control software designed for DAOP operation is programmed to generate a time sequence that synchronizes the spatial frequency variation, phase shifting, and image acquisition. A time sequence of deformed fringe patterns with different optical sensitivities is thus obtained, and a modified spatial-temporal phase-unwrapping algorithm is applied to extract the phase maps at different resolution levels, which are then synthesized using a recursive scheme. An object surface with a steep slope has been used to validate the approach presented in this paper.
Acknowledgement The authors are grateful for the financial support from the Natural Science Foundation of China (NSFC) through grants No. 60275012 and No. 60472107. The authors also thank the Natural Science Foundation of Guangdong province (No. 031804 and 04300864) and the Science and Technology Bureau of Shenzhen for their support of the work reported here. The work reported in this paper has been filed for patents.
References
1. Chen, F, Brown, G M and Song, M (2000) Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 39: 10-22, and references therein
2. Hung, Y Y, Lin, L, Shang, H M and Park, B G (2000) Practical three-dimensional computer vision techniques for full-field surface measurement. Opt. Eng. 39: 143-149
3. Huntley, J M and Saldner, H (1993) Temporal phase unwrapping algorithm for automated interferogram analysis. Appl. Opt. 32: 3047-3052
4. Saldner, H and Huntley, J M (1997) Temporal phase unwrapping: application to surface profiling of discontinuous objects. Appl. Opt. 36: 2770-2775
5. Burton, D R, Godall, A J, Atkinson, J T and Lalor, M J (1995) The use of carrier frequency shifting for the elimination of phase discontinuities in Fourier transform profilometry. Opt. Laser Eng. 23: 245-257
6. Takeda, M, Gu, W, Kinoshita, M, Takai, H, and Takahashi, Y (1997) Frequency-multiplex Fourier-transform profilometry: a single-shot three-dimensional shape measurement of objects with large height discontinuities and/or surface isolations. Appl. Opt. 36: 5347-5354
7. Kalms, M K, Juptner, W and Osten, W (1997) Automatic adaptation of projected fringe patterns using a programmable LCD projector. Proc. SPIE 3100: 156-165
8. Saldner, H and Huntley, J M (1997) Profilometry by temporal phase unwrapping and spatial light modulator based projector. Opt. Eng. 36: 610-615
9. Mermeistein, M S, Feldkhum, D L and Shirley, L G (2000) Video-rate surface profiling with acousto-optic accordion fringe interferometry. Opt. Eng. 39: 106-113
10. Das, P K (1991) Optical Signal Processing, Springer-Verlag, Berlin, pp. 150-152
11. Peng, X, Yang, Z L and Niu, H B (2003) Multi-resolution reconstruction of 3-D image with modified temporal unwrapping algorithm. Opt. Commun. 224: 35-44
Phase-shifting shearing interferometry with a variable polarization grating recorded on Bacteriorhodopsin Eugenio Garbusi, Erna M. Frins and José A. Ferrari Instituto de Física, Facultad de Ingeniería J. Herrera y Reissig 565, 11300 Montevideo Uruguay
1 Introduction

In this paper we describe a novel optical configuration for shearing interferometry [1]. The proposed optical architecture is schematically shown in Figs. 1 and 2. The interferometer is based on the wavelength-dependent polarization recording properties of Bacteriorhodopsin (BR). BR is a photochromic protein from the purple membranes produced by halobacteria (Halobacterium salinarum). Due to the light-induced dichroism of BR when illuminated with yellow-green light [2,3], a polarization mask is generated when a linearly polarized (yellow-green) laser illuminates a BR film. The intensity and polarization direction of the writing laser beam can be spatially inhomogeneous. A polarization grating generated by the superposition of circularly polarized (green) light beams is recorded on a Bacteriorhodopsin (BR) film and read out by a linearly polarized (red) laser beam. This (red) wave also passes through the phase object and thus carries the information of the test wavefront. On reading the BR film, the incident (red) wave generates three copies of itself, which correspond to the zero and ±1 diffraction orders of the polarization grating. The zero order has the same polarization as the incident (red) wave, while the ±1 diffraction orders are circularly polarized waves with opposite circularity. The zero diffraction order can be suppressed with a polarizer whose transmission direction is orthogonal to the polarization direction of the incident (red) wave, which in turn allows the ±1 diffraction orders to interfere with each other and generate the shearing interferogram. In the following section it will be shown that the phase shift between the interfering (red) waves can be controlled by changing the relative phase of the (green) laser beams used to record the polarization grating on the BR.
Fig. 1. Schematic layout of the proposed shearing interferometer
2 Shear interferometer with a polarization grating

The proposed device is shown in Figs. 1 and 2. A green laser beam is split into two linearly (orthogonally) polarized beams by means of a Wollaston prism (W). One of the linearly polarized waves passes through a Pockels cell (PC) without polarizers and with one of its birefringence axes along the polarization direction of the incident beam. Thus, the relative phase (Φ) of the orthogonally polarized (green) beams can be controlled electrically. Afterwards, the beams pass through a λ/4 waveplate (QW) with its fast axis at 45° with respect to the polarization directions, so that we obtain two (orthogonal) circularly polarized waves. These waves superpose under a certain angle (θ) on a BR film. The electric field (E) resulting from the superposition of the circularly polarized beams is linearly polarized, with a direction that depends on the phase Φ. The field E gives rise to a polarization grating on the BR film with the transmission matrix
M = \begin{pmatrix} \cos^2[(\beta x + \Phi)/2] & \sin[(\beta x + \Phi)/2] \cos[(\beta x + \Phi)/2] \\ \sin[(\beta x + \Phi)/2] \cos[(\beta x + \Phi)/2] & \sin^2[(\beta x + \Phi)/2] \end{pmatrix}   (1)
Fig. 2. The figure shows the polarization directions of the beams incident on the BR film, and the waves diffracted by the polarization grating recorded on it. In this figure the ±1 diffraction orders are spatially separated, but in the actual optical system both diffraction orders have a superposition region
This is the matrix of a polarization grating along the x-axis (see, e.g., Ref. [4]). Note that the phase Φ actually determines the lateral displacement of the polarization grating along the x-axis. If the zero diffraction order (and the spurious light scattered by the BR film) is removed with a polarizer (P) orthogonal to the polarization state of the incident He-Ne beam, and the ±1 diffraction orders are allowed to interfere, it is easily shown that the intensity distribution I = |E(x'', y'')|^2 at the interferometer output can be written as

I(x'', y'') = 2A^2 \{ 1 + \cos[ W(x'' - d, y'') - W(x'' + d, y'') + 2\Phi ] \}   (2)

where W(x', y') is the phase modulation acquired by the He-Ne beam passing through the test phase object (PO), and A is a constant factor.
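The phase-shifting mechanism of Eq. (2) can be checked numerically (a sketch with arbitrary grating parameters): sample the transmission of the polarization grating acting on a horizontally polarized readout wave and extract the diffraction orders by Fourier decomposition along x.

```python
import numpy as np

beta = 2 * np.pi * 5.0     # grating spatial frequency: 5 periods per unit length
PHI = np.pi / 3            # relative phase of the recording (green) beams

x = np.linspace(0.0, 1.0, 4096, endpoint=False)
psi = beta * x + PHI

# Output Jones field E(x) = M(x) @ (1, 0): the first column of the grating
# transmission matrix applied to a horizontally polarized incident (red) wave.
ex = np.cos(psi / 2) ** 2
ey = np.sin(psi / 2) * np.cos(psi / 2)

def order(field, m):
    """Fourier coefficient of the m-th diffraction order (frequency m*beta)."""
    return np.mean(field * np.exp(-1j * m * beta * x))

j0 = np.array([order(ex, 0), order(ey, 0)])    # zero order Jones vector
jp = np.array([order(ex, +1), order(ey, +1)])  # +1 order
jm = np.array([order(ex, -1), order(ey, -1)])  # -1 order
```

The coefficients reproduce the behaviour described above: the zero order keeps the input polarization, the ±1 orders are circular with opposite handedness, and their relative phase is 2Φ, i.e. exactly the phase-shift term appearing in Eq. (2).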
3 Experimental results

Some validation experiments were performed with the proposed setup, using a frequency-doubled Nd:YAG laser (λ_g = 532 nm) as the green light source for recording the polarization grating on the BR. As recording medium, a BR module provided by MIB GmbH (variant D96N, OD = 4) was used. The light source of the shearing interferometer was a polarized He-Ne laser (λ = 633 nm). The interferograms were acquired with a digital camera (C).
Fig. 3. Figs. (a)-(c) show three shearing interferograms (with the shear in the x-direction) obtained for Φ = 0, Φ = π/4 and Φ = π/2, respectively. The fringes between consecutive images are shifted by π/2 (rad).
In our experiments we worked without a phase object (PO) and intentionally introduced a small convergence into the originally plane test wavefront in order to obtain straight fringes as the interference pattern. Figs. 3(a)-(c) show straight fringes in the region of interference of the sheared (red) beams, as expected for a slightly convergent (or divergent) test wavefront. The three interferograms were obtained with different electrical potentials applied to the Pockels cell.
4 Conclusions

In this paper we presented a novel optical configuration for lateral shearing interferometry with phase-shifting control. The method uses the capability of BR to record polarization information. The proposed configuration has several advantages compared with other solutions. The shear amount and the phase shift can be controlled by means of an external system (associated with the green laser) that does not require moving parts inside the interferometer: neither PZT-mounted gratings nor axial or lateral translations are required, which contributes to the mechanical stability of the interferometer.
5 References
1. E. Garbusi, E. M. Frins and J. A. Ferrari. Opt. Comm. 241, 309-314 (2004).
2. N. Hampp. Chem. Rev. 100, 1755-1776 (2000).
3. Y. Okada-Shudo, J.-M. Jonathan, and G. Roosen. Opt. Eng. 41(11), 2803-2808 (2002).
4. F. Gori. Opt. Lett. 24(9), 584-586 (1999).
Quasi-absolute testing of aspherics using Combined Diffractive Optical Elements Gufran Khan, Klaus Mantel, Norbert Lindlein, Johannes Schwider Institute of Optics, Information, and Photonics Staudtstr. 7/B2, 91058 Erlangen Germany
1 Introduction

Aspheric optical surfaces are becoming more and more important, as they can deliver improved performance with a reduced number of elements and hence less weight [1]. The testing of precision aspherics, however, requires a calibration procedure to separate the surface deviations of the aspheric from the systematic aberrations of the interferometer. To achieve this, the three-position measurement procedure [2] used for the absolute testing of spheres has been transferred to the case of aspherics by using dual-wavefront computer-generated holograms. A quasi-absolute interferometric test for aspheric surfaces is presented, where Combined Diffractive Optical Elements (combo-DOEs) [3] are used as null elements in a Twyman-Green interferometric setup (Fig. 1). The combo-DOE carries the information of the ideal aspheric surface and additionally provides the spherical wave for the measurement at the cat's-eye position. In order to avoid reflections from the DOE front and back sides, the DOE is tilted by an angle of one degree. There are two possibilities for the design of these elements. The aperture can be sliced into stripes (sliced DOE), which are alternately assigned to the spherical and the aspheric wave. The other method is to add the complex amplitudes of the hologram functions of both waves (superposed DOE). Both possibilities have been investigated.
Fig. 1. Schematic of a phase shifting Twyman-Green interferometer with a combo-DOE as a null element. The phase shifting is introduced in the reference arm by a piezo-driven reference mirror. The polarization optics allows for an adjustment of the fringe visibility. The wavelength is λ = 632.8 nm
2 Quasi-absolute test of aspherics

Analogous to the three-position test for spheres, the aspheric is measured in different positions relative to the interferometer. The positions are the basic position W1, the 180° rotated position W2, and the cat's-eye position W3 (Fig. 2).
Fig. 2. Schematic of the three positions for the quasi-absolute measurement of aspherical surfaces
The measured phase in the first two positions contains the systematic aberrations of the interferometer and the deviations of the surface under test. In the cat's-eye position, a mirror is placed at the focal point to invert the wavefront. The aberrations from the reference arm, the object arm, and the surface deviations are denoted by WR, WS, and P, respectively. By suitably combining the measurement results of the three positions, the surface deviations are given by

P(x, y) = \frac{1}{2} \left[ W_1(x, y) + W_2(x, y) - W_3(x, y) - W_3(-x, -y) \right]   (1)
For equation (1) to hold, it is, however, necessary that the wave aberrations of the spherical and the aspherical wave are approximately the same. The procedure is therefore called a "quasi"-absolute test.
3 Results and Discussion

The quasi-absolute test was performed for both types of DOEs (superposed and sliced). Fig. 3 shows the contour plot of the surface deviations obtained from a rotationally symmetric aspheric with a diameter of 50 mm. For a comparison of the calibration results with another, independent procedure, a rotational averaging [4] has been performed. By subtracting the rotationally averaged result from a single measurement, the rotationally symmetric surface deviations and the systematic aberrations cancel, leaving the non-rotational deviations of the surface (Fig. 4), which can be compared with the non-rotational parts of the results obtained with the quasi-absolute test (Fig. 5).
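The logic of this rotational-averaging check can be sketched on synthetic data (all surface and error terms below are invented for illustration): averaging measurements taken at N rotation positions cancels the non-rotationally-symmetric part of the surface but keeps the fixed interferometer error, so subtracting the average from a single measurement isolates the non-rotational surface deviations.

```python
import numpy as np

# Polar sampling grid of the test surface.
r = np.linspace(0.1, 1.0, 50)[:, None]
theta = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)[None, :]

systematic = 0.05 * r * np.cos(3 * theta)   # fixed interferometer error
sym_part = 0.2 * r ** 2                     # rotationally symmetric surface part
nonrot_part = 0.1 * r * np.cos(2 * theta)   # non-rotational surface part

# 36 measurements with the surface rotated in 10 degree steps; the
# interferometer error stays fixed while the surface turns with the part.
angles = np.deg2rad(np.arange(0, 360, 10))
measurements = [systematic + sym_part + 0.1 * r * np.cos(2 * (theta - a))
                for a in angles]

average = np.mean(measurements, axis=0)
recovered_nonrot = measurements[0] - average   # compare with nonrot_part
```

On this ideal data the cos 2θ term is recovered exactly; on real interferograms, interpolation and noise limit how completely the rotating terms cancel.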
Fig. 3. Surface deviations after a three-position quasi-absolute test. Left: superposed combo-DOE. Right: sliced combo-DOE. The pv values are 0.8023λ and 0.8216λ, respectively. The contour line spacing is λ/20
Fig. 4. Left: Single surface measurement. Middle: Average of 36 measurements at 10° angle intervals. Right: Difference, giving the non-rotational part of the surface deviations (contour line spacing λ/50)
Both results show a qualitative agreement, indicating, however, that the consistency of the two procedures still needs to be improved. After eliminating residual error sources, the accuracy of the quasi-absolute test should be high enough for a useful calibration.
Fig. 5. Left: Quasi-absolute measurement. Middle: Rotationally symmetric part of the absolute result. Right: Difference, giving the non-rotational part of the surface deviations (contour line spacing λ/50)
4 References
1. Born, M, Wolf, E (1999) Principles of Optics. Cambridge University Press, Seventh Edition
2. Schulz, G, Schwider, J (1976) Interferometric testing of smooth surfaces. Progress in Optics 13, E. Wolf, ed. (Elsevier, New York)
3. Beyerlein, M, Lindlein, N, Schwider, J (2002) Dual-wave-front computer generated holograms for quasi-absolute testing of aspherics. Applied Optics 41: 2440-2447
4. Evans, C J, Kestner, R N (1996) Test optics error removal. Applied Optics 35: 1015-1021
Selfcalibrating fringe projection setups for industrial use Gunther Notni, Peter Kühmstedt, Matthias Heinze, Christoph Munkelt, Michael Himmelreich Fraunhofer Institute for Applied Optics and Precision Engineering, IOF Albert-Einstein-Straße 7, 07745 Jena Germany
1 Introduction

The digitisation of models, continuous process control, and the inspection of selected complex components are important tasks within the modern industrial product development chain. The application of optical 3D measurement systems for this purpose requires the choice of a suitable measurement system combined with an adequate strategy for the evaluation of the data. For this purpose, all-around 3D measurement arrangements have been developed, based on the phasogrammetric approach recently introduced by the authors [1, 2].
2 Phasogrammetric approach

The method called phasogrammetry - phase-value-based photogrammetry - describes the mathematical merging of photogrammetric methods with fringe projection techniques [1,2,3,4]. The basic principle of phasogrammetry is to project fringe patterns from at least two different positions, each one including two series of pattern sequences (e.g. Gray-code sequences in combination with the phase shift method) onto the object being measured, with the second series of patterns rotated by 90° with respect to the first sequence. The resulting phase values φx(i) and φy(i) at the object point P and its associated projection centres define spatial bundles of rays similar to those of photogrammetry, which can be used to calculate the coordinates using a bundle adjustment calculation, giving a self-calibration approach.
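The triangulation step at the heart of this principle can be sketched as a ray intersection (a toy two-ray example with invented geometry; the actual systems solve for all rays and system parameters simultaneously by bundle adjustment): the phase values identify, for one object point, a ray from each projection centre, and the 3-D coordinate is the point closest to both rays.

```python
import numpy as np

def intersect_rays(p1, d1, p2, d2):
    """Least-squares intersection of the rays x = p1 + s*d1 and x = p2 + t*d2:
    the midpoint of their common perpendicular (rays must not be parallel)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    c = d1 @ d2
    # Normal equations of min |p1 + s*d1 - p2 - t*d2|^2 with unit directions.
    s, t = np.linalg.solve(np.array([[1.0, -c], [c, -1.0]]),
                           np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# Two hypothetical projection centres and the rays toward a known point.
point = np.array([0.1, -0.2, 1.5])
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([0.8, 0.0, 0.0])
recovered = intersect_rays(c1, point - c1, c2, point - c2)
```

With noisy phase values the two rays become skew, and the midpoint of the common perpendicular is the natural least-squares estimate of the object point.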
Based on this principle, different application-specific systems can be realised; two of them are described in the following. The aim of the systems described below is to realise a fully automatic all-around measurement.
3 Measurement setups

3.1 Measurement of large objects – “kolibri 1500”
For the automatic measurement of large objects, a system named “kolibri 1500” has been developed (see Fig. 1). Here, different views of the object can be realised by a combination of a multi-camera arrangement, a rotation of the sensor head, and an x-y stage that allows lateral object positioning [5]. The measuring procedure is outlined in Fig. 2.
Fig. 1. Picture of the measurement system “kolibri 1500”
The advantages of this measuring system can be summarised as follows: The output of each measurement exists in a uniform coordinate system without the requirement of additional matching. Consequently, to locate the homologous points, neither stationary or projected landmarks nor particular object characteristics are needed for the combination of the multiple range images. The pixel patterns of the calibration camera and the linking camera provide free virtual points (landmarks) [5]. This avoids any interaction with the object. The number of object views can be freely chosen and is not restricted to the number of cameras used. Because of the self-calibration of the system, no accurate, expensive rotation and x-y stages are needed.

[Flow chart: projection of two 90° rotated fringe sequences → variation of the object position with the x-y stage (k times) → variation of the sensor head position (n times) → simultaneous image recording by the m fixed calibration cameras Kmi, the measuring camera Kpi, and the linking camera KLi → k(n+m) patches in a unique world coordinate system by simultaneous calculation of the system parameters and the 3D coordinates (bundle adjustment) → 3D coordinates of the measured object]

Fig. 2. Measuring procedure of the measurement system “kolibri 1500”
Because of these benefits the system is very well suited for quality control near the production line. A typical application, the inspection of a motor block at the company BMW, is shown in Fig. 3.
Fig. 3. STL-surface and 3D CAD-comparison of a motor-block
In this procedure, one linking camera, four calibration cameras (m=4), and one measuring camera at three positions (k=3) of the x-y stage and ten rotation positions (n=10) of the sensor head are used. All in all, the measuring result is composed of 42 patches, resulting in a complete processing time of about 25 min. The measurement takes place fully automatically, whereas the evaluation process is only partly automatic and still needs some interaction, taking about 45 min. The measurement volume was 900x500x300 mm³, whereas an accuracy below 50 µm has been reached.

3.2 Measurement of small objects – “kolibri flex mini”
The same methodology can also be used for the measurement of objects below 10 cm in diameter, see Fig. 4.
Fig. 4. Configuration and photo of the system “kolibri flex mini”
This system is characterised by the following technical features: The object is illuminated and observed in a complete central projection arrangement. The fringe projection unit is based on an LCoS microdisplay with a high-power LED as the light source. Two different illumination directions (20° and 60°) are realised by a switching mirror in front of the projection unit (projection paths 1 and 2). The object is simultaneously observed by 3 cameras, one from the top and the others providing side views (20° and 60°) (remark: only the top camera and one side camera are shown in the sketch). The object can be rotated with respect to the cameras and the projection unit by a rotation unit. The system is self-calibrating, works fully automatically, and is easy to use even by non-technical staff. Up to 37 different views can be captured within one measurement procedure. The typical measurement time is 2 min (13 patches). The accuracy is better than 10 µm. The system is a tabletop unit (weight 33 kg). Because of these benefits the system is very well suited for the measurement of small complex objects, typical for CAD/CAM in the dental industry [6], and for the inspection of small work pieces.
Fig. 5. STL-surface of a measured dental impression (complete arch) and a heat sink.
4 Summary

It was outlined that the developed multi-view 3D measurement systems based on the method of phasogrammetry can be used for the automated measurement of complex objects. This helps to open up new fields of application within industrial measurement. The methodology of phasogrammetry is not restricted to a specific measurement volume and can be very well adapted to customer needs. A highest relative measuring accuracy of up to M/100,000 (M = illuminated measuring field) can be reached.
References
1. Schreiber W., Notni G. (2000) Theory and arrangements of selfcalibrating whole-body three-dimensional measurement systems using fringe projection technique. Optical Engineering 39: 159-169
2. Kühmstedt P., Heinze M., Himmelreich M., Bräuer-Burchardt C., Notni G. (2004) Phasogrammetric optical 3D-sensor for the measurement of large objects. Proc. SPIE 5457: 56-64
3. Notni G. (2001) 360-deg shape measurement with fringe projection - calibration and application. Proc. Fringe '01 (Eds. W. Osten, W. Jüptner), Elsevier-Verlag: 311-323
4. Kirschner V., Schreiber W. (1997) Self-calibrating shape-measuring system based on fringe projection. Proc. SPIE 3102: 5-13
5. Kühmstedt P., Heinze M., Himmelreich M., Bräuer-Burchardt C., Brakhage P., Notni G. (2005) to be published in Proc. SPIE 5856
6. www.hintel.com
3-D shape measurement method using point-array encoding Jindong Tian, Xiang Peng Institute of Optoelectronics, Key Laboratory of Optoelectronic Devices and Systems of Education Ministry of China, Shenzhen University 518060 Shenzhen P. R. China
1 Introduction

Optical 3-D profilometry has been widely used for machine vision, industrial inspection, rapid prototyping, biomedicine, etc. [1]-[3]. For 3-D measurement methods based on triangulation, the structured light can be either a laser point or a laser sheet. Phase measuring profilometry (PMP) [4]-[5] is another sort of 3-D measurement technique. Laser triangulation with point-structured or sheet-structured light has low efficiency and is not suitable for real-time measurement, because a scanning mechanism must be used to realise full-field measurement. In this regard the PMP method is featured by high-speed measurement: a full-field range image can be obtained at one time. However, the PMP method is an indirect method, recovering the range image of the object surface by measuring the phase of a spatial carrier. When dealing with a surface with inherently discontinuous regions, or a surface with a very steep slope, the phase calculation usually runs into trouble, such as phase ambiguity and error propagation. In this paper, a new point-array encoding method is proposed that calculates the object height directly through an affine transformation. Compared with PMP, the point-array encoding method does not involve phase measurement and therefore has no phase ambiguity and no error propagation from phase unwrapping. The rest of the paper is organized as follows: Section 2 establishes a mathematical and geometric model that formulates the 3-D imaging process based on point-array encoding. Section 3 gives the experimental results and discussion. The last section summarizes the major results of this approach.
Wide Scale 4D Optical Metrology
443
2 Principle of method
The imaging geometry of a point-array is shown in Fig. 1, in which POw is the optical axis of the projector lens. It crosses the optical axis of the camera lens, OcOw, at the point Ow on a reference plane R. The point P is the center of the exit pupil of the projector lens. Ow denotes the origin of the object coordinate system. The points P, Oc and Ow form a triangulation system. The camera lens, with its entrance pupil at Oc, images the point-array patterns onto the front image plane XuOuYu following the perspective projection. The transformation of any point in the 3-D world coordinate system comprises three steps: rigid-body transformation, perspective projection and discrete processing [6].
Fig. 1. Imaging geometry of the point-array projected on object surface
For an arbitrary point projected on the object surface and on the reference plane respectively, the position difference of the point between these two surfaces along the lateral direction of the computer image coordinate system is

\Delta u = f\left(\frac{L'x_0 + L'z_w\sin\alpha - z_w x_0\cos\alpha}{(L - z_w)(L'\cos\alpha + x_0\sin\alpha)} - \frac{x_0\cos\alpha}{L + x_0\sin\alpha}\right) \quad (1)

Then the object height z_w can be deduced:

z_w = \frac{L'x_0 - CL(L'\cos\alpha + x_0\sin\alpha)}{x_0\cos\alpha - L'\sin\alpha - C(L'\cos\alpha + x_0\sin\alpha)}, \qquad C = \frac{x_0\cos\alpha}{L + x_0\sin\alpha} + \frac{\Delta u}{f} \quad (2)

where L is the distance OcOw; L' is the distance POw; α is the angle between OcOw and POw; x_0 denotes the center-to-center distance between adjacent points of the point-array on the reference plane; and f is the focal length of the camera lens. All of the above parameters are known. So we first measure the spatial positions of both images of the point-array, projected on the reference plane and on the object surface respectively; second, we compute the lateral displacement Δu for each pair of points in the point-array; finally, we calculate the object height at every discrete point according to Eq. 2 to form the range image of the object.
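The two-step computation can be sketched in code. The fragment below is an illustration, not the authors' implementation: it implements a height-from-displacement relation consistent with the triangulation geometry described above (taking the reference plane perpendicular to the projector axis is an assumption), together with the forward relation as a round-trip check; all numeric values are hypothetical.

```python
import numpy as np

def height_from_shift(du, L, Lp, alpha, x0, f):
    """Object height z_w from the lateral point displacement du (Eq. 2).
    L: distance OcOw, Lp: distance POw, alpha: angle between the axes,
    x0: point spacing on the reference plane, f: camera focal length."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    C = x0 * ca / (L + x0 * sa) + du / f
    num = Lp * x0 - C * L * (Lp * ca + x0 * sa)
    den = x0 * ca - Lp * sa - C * (Lp * ca + x0 * sa)
    return num / den

def shift_from_height(zw, L, Lp, alpha, x0, f):
    """Forward relation (Eq. 1): displacement caused by a height zw."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    num = Lp * x0 + Lp * zw * sa - zw * x0 * ca
    return f * (num / ((L - zw) * (Lp * ca + x0 * sa)) - x0 * ca / (L + x0 * sa))

# round trip with hypothetical geometry (units: mm)
du = shift_from_height(5.0, L=1000.0, Lp=1200.0, alpha=0.3, x0=2.0, f=25.0)
zw = height_from_shift(du, L=1000.0, Lp=1200.0, alpha=0.3, x0=2.0, f=25.0)
print(round(zw, 6))  # recovers the input height, 5.0
```

Since Eq. 2 is the algebraic inversion of Eq. 1, the round trip recovers the input height exactly up to floating-point error.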
In order to achieve high spatial resolution, the projected point-array can be shifted digitally along both the lateral and the longitudinal direction with a one-pixel minimum step, to fill in the information lost through the discrete processing. By merging all calculated range images of the sequence, a range image with high spatial resolution is obtained.
3 Experimental Results
A digital projector (Benq 2115) is used to project the point-array patterns. A JVC TK-C1481BEC color video camera with a 25 mm focal-length lens is used to acquire the point-array. A frame grabber (OK-USB20A) is used to capture and save sets of point-array images for further processing. Figure 2 shows the result for a mannequin face. Figure 2(a) is an image of the point-array modulated by the face. Figure 2(b) shows the range image of the selected region. A regularly arranged point-array with 64×64 points was used and shifted along the lateral and longitudinal directions with a two-pixel step to fill in the information lost through the discrete processing.
Fig. 2. 3-D profilometry result: (a) Point-array image of a head surface, (b) range image of selected region in (a)
Another kind of object surface used to demonstrate our approach is a step-like surface. Figure 3(a) is an image of the object illuminated by the point-array structured light. Figure 3(b) shows the range image. Only one 64×64 point-array pattern, without movement, is used in this experiment.
Fig. 3. 3-D profilometry result: (a) point-array image of a step-like object, (b) range image of (a)
4 Conclusion
A point-array encoding method for 3-D measurement has been proposed. In comparison with conventional triangulation methods, our approach is more convenient because it does not require any scanning mechanism. Compared with PMP methods, it does not suffer from the problems of phase ambiguity and error propagation encountered in spatial phase unwrapping. With this method we have successfully measured two representative classes of object surfaces. The experimental results have shown the effectiveness of this approach and have revealed that the method is applicable to a broad range of objects, especially objects with large height discontinuities and/or surface isolation.
Acknowledgments
The authors are grateful for the financial support of the Natural Science Foundation of China (NSFC) through grant No. 60275012, the Natural Science Foundation of Guangdong province (No. 031804 and 04300864) and the Science and Technology Bureau of Shenzhen.
References
1. F. Chen, G. M. Brown, M. Song, "Overview of three-dimensional shape measurement using optical methods", Opt. Eng., 39, 10-22 (2000).
2. Hans J. Tiziani, "Optical 3D-shape, surface, and material analysis", in Second International Conference on Experimental Mechanics, F. S. Chau and C. Quan, eds., Proc. SPIE 4317, 204-210 (2001).
3. X. Peng, Z. Zhang, and Hans J. Tiziani, "3-D imaging and modeling - Part I: acquisition and registration", OPTIK, 113, 448-452 (2002).
4. B. Fiona, P. Paul, C. James, "A theoretical comparison of three fringe analysis methods for determining the three-dimensional shape of an object in the presence of noise", Optics and Lasers in Engineering, 39, 35-50 (2003).
5. M. Takeda, K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes", Appl. Opt. 22, 3977-3982 (1983).
6. R. Jain, R. Kasturi, B. G. Schunck, Machine Vision (McGraw Hill, New York, 1995).
3D measurement of human face by stereophotogrammetry Holger Wagner*, Axel Wiegmann, Richard Kowarschik, Friedrich Zöllner Friedrich-Schiller-University Jena, Institute of Applied Optics Fröbelstieg 1, 07743 Jena, Germany *[email protected] Phone: 0049 3641 947666
Abstract
In this article a rapid, self-calibrating method for 3D measurement based on stereophotogrammetry is described. The approach uses a series of statistically generated illumination patterns to encode the surface under test. The allocation of homologous points, which is necessary for 3D reconstruction by triangulation, is done by an adapted correlation technique.
1 Introduction
The advantages of optical measurement, such as fast data acquisition, non-interaction with the object under test and the possibility of soft-tissue measurement, are exploited in a wide range of technical and medical applications [1-6]. Examples are the comparison of work pieces with their CAD references during production, or growth monitoring of children for medical examinations. The background of our activities was to adapt a method for function-oriented diagnostics and therapy in dentistry, to provide prognoses for jaw growth or surgical procedures. The aim was to obtain a dense 3D point cloud of the human face with a single, rapid measurement. These two requirements were posed to avoid measurement errors caused by movements of the person's head, and to overcome difficulties in matching point clouds affected by inevitable changes in facial expression.
2 Adapted photogrammetric method
The adapted photogrammetric method consists of four essential components. First, a convergent arrangement of two cameras is used for image acquisition and for providing the coordinates for 3D reconstruction by triangulation. The cameras are calibrated in a preliminary calibration procedure to obtain the necessary parameters such as focal length, principal-point coordinates, and parameters describing deviations from an idealized pinhole model, i.e. distortion or shear.
Second, for encoding the object shape, a special illumination is used which maps a sequence of about 20 statistically generated patterns onto the object. In this way every point of the object surface is characterized by an individual sequence of intensity values. This approach makes it possible to use a single sensor element for allocating homologous points; it is not necessary to take a spatial template, e.g. nine or eleven pixels square, into account. As a result, a denser point cloud is generated, especially at higher profile gradients.
The third main component is a correlation technique for allocating homologous points. The correlation coefficients between each point of the first camera and the points in the corresponding areas of the second camera are computed. A point is accepted as homologous if the maximum of the correlation coefficients exceeds a certain threshold.
Finally, a self-calibration of the extrinsic parameters is implemented. This feature makes the arrangement insensitive to environmental changes, because the geometric relations of the cameras are determined at the time of measurement; the arrangement has to be time-stable only during the period of image capture. This approach is possible because every pair of homologous points provides four coordinates for the determination of the three unknown values of the 3D point.
Consequently, if a sufficient number of homologous points is known, it is possible to determine the 3D points and the extrinsic parameters.
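The correlation step can be sketched as follows. This is an illustration only, not the authors' code: the function names, the row-wise (rectified) search and the 0.9 acceptance threshold are assumptions, and real systems search along epipolar lines of calibrated cameras.

```python
import numpy as np

def temporal_ncc(seq_a, seq_b):
    """Normalized cross-correlation of two per-pixel intensity sequences."""
    a = seq_a - seq_a.mean()
    b = seq_b - seq_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_row(left, right, y, x, search, threshold=0.9):
    """Find the homologous point of left pixel (y, x) on row y of the right
    image stack by maximizing the temporal correlation coefficient.
    left/right: (n_patterns, H, W) stacks; search: candidate x positions."""
    ref = left[:, y, x]
    best_x, best_c = None, threshold
    for xr in search:
        c = temporal_ncc(ref, right[:, y, xr])
        if c > best_c:
            best_x, best_c = xr, c
    return best_x, best_c

# toy data: 20 random patterns; the "right" stack is the left one shifted by 3 px
rng = np.random.default_rng(0)
left = rng.random((20, 8, 32))
right = np.roll(left, 3, axis=2)
x_match, coeff = match_row(left, right, y=4, x=10, search=range(32))
print(x_match, round(coeff, 3))  # → 13 1.0
```

Because every surface point carries an individual 20-value intensity sequence, a single-pixel temporal correlation suffices and no spatial template is needed.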
3 Experimental setup
The photogrammetric method described above is realized by the experimental setup shown in figure 1. Note that it is not necessary to use a digital projector as pictured in figure 1; a classical projector, or more than a single illumination direction, e.g. to avoid unilluminated or poorly illuminated areas, is also possible.
The spatial resolution, determined by the camera resolution and the reproduction scale, is about 0.2 mm, whereas the longitudinal resolution covers a range of 0.4 to 0.8 mm depending on the angle between the cameras.
Fig. 1. Experimental setup with projector (p), left and right camera (cl, cr) and 3D object (o)
4 Results of 3D measurements
Figure 2 shows two examples of reconstructed 3D objects. The measured human face consists of about 200,000 points and is rendered with OpenGL. Areas with a lower density of measured points are caused by low reflectance, e.g. at the eyebrows, or by masking of the surface. The right part of figure 2 shows a point cloud of a shuttlecock and demonstrates that technical objects with complex structures can also be measured. For experimental tests regarding precision and measurement uncertainty we used a matt-finished aspherical lens. The deviations from the reference lie within a range of +/- 0.2 mm, whereas the rms error is about 0.1 mm.
Fig. 2. Examples of measured 3D objects: Human face and a shuttlecock
5 Conclusions
In this work a method for rapid 3D measurement by stereophotogrammetry has been presented. In addition to the short measurement time (< 1 second), which is determined by the digital cameras, a dense point cloud is generated. The accuracy of about +/- 0.1 mm (rms) is sufficient for a wide range of medical and technical applications.
Acknowledgments This project was supported by the Thuringia ministry of science, research and culture under the topic: ‘3D shape measurement for function orientated diagnostic and therapy in dentistry’.
References
1. W. Schreiber, G. Notni: Opt. Eng. 39, pp. 159-169 (2000)
2. R. Kowarschik, P. Kühmstedt, J. Gerber, W. Schreiber, G. Notni: Opt. Eng. 39, pp. 150-158 (2000)
3. F. Zöllner, V. Matusevich, R. Kowarschik: Proc. of SPIE Vol. 5144 (2003)
4. P. Albrecht, B. Michaelis: Proc. 14th Int. Conf. on Pattern Recognition, Volume I, Brisbane, Australia, pp. 845-849 (1998)
5. M. Kujawinska, L. Salbut, K. Patorski: Appl. Opt. 30, 1633 (1991)
6. G. Sansoni, M. Carocci, S. Lazzari, R. Rodella: J. Opt. A: Pure Appl. Opt. 1, 83 (1999)
Fast 3D shape measurement system based on colour structure light projection
Marek Wegiel, Malgorzata Kujawinska
Warsaw University of Technology, Institute of Micromechanics and Photonics
8 Boboli St., 02-525 Warsaw, Poland
e-mails: [email protected], [email protected]
1 Introduction
Many multimedia and medical applications require the rapid gathering of data about real 3D objects. Typical shape measurement systems based on multiframe fringe/Gray-code projection restrict the applicability of these instruments to static objects only [1-3]. In this paper we present a fast 3D shape measurement method based on colour structured light projection. It utilizes combined multicolour sinusoidal phase-shifted codes and colour Gray codes and therefore limits the number of required frames to 3. However, the main problem when using colour masks is the mismatch between the spectral sensitivities of projector and detector. We therefore propose a new calibration method based on an appropriate modification of the projected colour that enables the detector to see the desired colour. This approach is suitable for detectors that do not have their own spectral control, or for detectors whose spectral sensitivity differs strongly from the colour temperature of the projector lamp. Additionally, in order to determine the proper colour threshold during colour Gray-code analysis, we use a hue-intensity calibration matrix. The colour Gray codes consist of chromatic colours only (without black and white), and the pixel colours of the projected sinusoidal mask are matched to the spectral sensitivity of the detector.
2 The methodology of 3D shape measurement
The shape measurement system comprises a digital light projector based on LCD technology and a video camera with 3 CCD chips. Optical
axes of these devices are crossed. The object is located in a measurement volume described by a calibration matrix, which allows the phase data to be scaled into (x, y, z) surface coordinates [3]. The measurement methodology consists of the projection of three colour rasters:
- one colour sine raster to obtain the phase mod 2π distribution,
- two colour Gray codes for phase unwrapping.
For the colour sine raster, three sine intensity distributions with 2π/3 phase shifts are coded independently in the RGB colour components [4]. The colour Gray codes include only chromatic colour stripes (RGBCMY). Additionally, the colours of the Gray-code stripes are arranged so that adjacent stripes carry colours that are not neighbours on the hue scale (Fig. 1).
Fig. 1. Colour Gray rasters generation
Notice that during the measurement process, when the object is moving, the captured images of the three projected colour rasters do not cover the same part of the object surface: they are shifted with respect to each other, depending on the displacement of the object between sequential frames. We therefore take the sinusoidal raster as the base with respect to the two remaining rasters with colour Gray codes. Thus the borders of the Gray-code stripes are defined in reference to the calculated pixels with phase mod 2π jumps. This compensation of the raster displacement (caused by the object movement) works as long as the change of position of the measured object over all required frames is less than half a period of the sinusoidal fringes.
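The phase mod 2π can be recovered from the three channels with the standard three-step phase-shifting formula. The sketch below is illustrative, not the authors' code; the assignment of the -2π/3, 0, +2π/3 shifts to the R, G and B channels is an assumption.

```python
import numpy as np

def phase_from_rgb(r, g, b):
    """Wrapped phase (mod 2*pi) from three sinusoidal codes with
    -2*pi/3, 0, +2*pi/3 phase shifts stored in the R, G, B channels
    (standard three-step phase-shifting formula)."""
    return np.mod(np.arctan2(np.sqrt(3.0) * (r - b), 2.0 * g - r - b),
                  2.0 * np.pi)

# synthetic fringe: offset a, modulation m, true phase phi
phi = np.linspace(0.1, 6.0, 50)
a, m = 0.5, 0.4
r = a + m * np.cos(phi - 2 * np.pi / 3)
g = a + m * np.cos(phi)
b = a + m * np.cos(phi + 2 * np.pi / 3)
print(np.allclose(phase_from_rgb(r, g, b), phi))  # → True
```

The formula cancels the offset and modulation terms, so only the colour mismatch between projector and detector, which the calibration of Section 3 addresses, perturbs the recovered phase.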
3 System calibration
Performing the fast 3D shape measurement methodology in real conditions requires appropriate matching of the colours of the projected rasters, in order to minimize the mismatch between projector and detector spectral sensitivity. The colour sine raster calibration process, carried out in RGB colour space, includes calculation of the amplitude and offset of the sine distribution
on the basis of the sine-raster colour pixel projections, and iterative matching of the projected colour of each pixel over one period of the colour sine raster. On the basis of the matched pixel colours the modified colour sine raster is generated, so that when it is projected onto the object the detector sees the desired theoretical colour. This minimizes the phase error (Fig. 2).
Fig. 2. One period colour sine raster matching: a) before; b) after
For the colour Gray codes, the analysis is performed in HSI colour space [5]. First, the hue-intensity distribution for the primary additive (RGB) and subtractive (CMY) colours is obtained. Next, the appropriate weights of the RGB components for the subtractive colours are changed iteratively, in order to position the centre of gravity of each CMY area midway between the areas of the primary additive colours. After the shares of the RGB values in the CMY colours have been defined, the second stage of the colour Gray-code procedure is performed. It consists of projecting a sequence of rasters with all possible neighbouring pairs of colours over the whole intensity range. From each frame the hue and intensity of all pixels are calculated. The last operation specifies the points in hue-intensity coordinates which contain pixels coming from the same projected colour areas. This approach makes it possible to determine independent detection regions for all six colours of the colour Gray-code rasters, matched to the spectral sensitivity of the current projector and detector (Fig. 3). When a Gray-code raster is analysed, the RGB value of each pixel is converted into a hue-intensity value and then into the colour value at the corresponding position of the hue-intensity calibration matrix.
Fig. 3. Colour Gray codes calibration matrix
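A much-simplified sketch of the final lookup step: instead of the measured hue-intensity calibration matrix, idealized hue centres (a hypothetical stand-in) are used, and each pixel is assigned to the nearest Gray-code colour by circular hue distance.

```python
import colorsys

# hypothetical hue centres (degrees) for the six Gray-code colours;
# in practice these regions come from the hue-intensity calibration matrix
HUE_CENTRES = {"R": 0, "Y": 60, "G": 120, "C": 180, "B": 240, "M": 300}

def classify_pixel(r, g, b):
    """Map an RGB pixel (floats in 0..1) to the nearest Gray-code colour
    by circular distance in hue."""
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    hue_deg = h * 360.0

    def circ_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    return min(HUE_CENTRES, key=lambda c: circ_dist(hue_deg, HUE_CENTRES[c]))

print(classify_pixel(0.9, 0.1, 0.1))  # reddish pixel → R
print(classify_pixel(0.1, 0.8, 0.7))  # cyan-ish pixel → C
```

The calibration described above effectively replaces the fixed centres with measured detection regions in the hue-intensity plane, so that the decision boundary follows the actual projector/detector pair.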
4 Experiments in real conditions and conclusions
To examine the new 3D shape measurement methodology in real conditions, measurements of a moving hand in a white glove were performed. The system configuration consists of an LG RL-JT10 projector, a SONY DSC390P camera, a MatrixVision Delta frame grabber and a PC (P IV, 3 GHz). With this equipment the measurement of one object position takes about 200 ms. Fig. 4 shows a few stages of the measured hand. The experiment confirmed the applicability of the proposed method. However, at the moment the application is restricted to white objects, and the frame update rate is too low in comparison with the requirements of multimedia applications (16 Hz).
Fig. 4. Clouds of points of the measured moving hand in a white glove
This work was financed by the Ministry of Science and Information Technology within the project PB 4T10C 03425.
5 References
[1] F. Chen, G. M. Brown, M. Song: "Overview of three-dimensional shape measurement using optical methods", Opt. Eng., 39, 10-22, 2000
[2] R. Kowarschik, P. Kühmstedt, J. Gerber, W. Schreiber, G. Notni: "Adaptive optical three-dimensional measurement with structured light", Opt. Eng., 39, 150-158, 2000
[3] R. Sitnik, M. Kujawinska: "Digital fringe projection system for large-volume 360-deg shape measurement", Opt. Eng., 40, 2001
[4] M. Węgiel, M. Kujawińska: "Active colour structure light projection - the tool for quasi real-time 3D object measurement", International Conference on Computer Vision and Graphics, Zakopane, 25-29.09.2002, v. II, 777-786
[5] A. Hanbury, J. Serra: "A 3D-polar Coordinate Colour Representation Suitable for Image Analysis", PRIP-TR-77, Vienna University of Technology, 2002
SESSION 4 Hybrid Measurement Technologies Chairs: Ichirou Yamaguchi Gunma (Japan) Vladimir Markov Irvine (USA)
Invited Paper
Tip Geometry and Tip-Sample Interactions in Scanning Probe Microscopy (SPM) Ludger Koenders and Andrew Yacoot Physikalisch-Technische Bundesanstalt Braunschweig und Berlin Bundesallee 100, 38116 Braunschweig Germany
1 Introduction
Since their invention in 1981 [1] and 1986 [2], respectively, the scanning tunnelling microscope (STM) and the atomic force microscope (AFM) (Fig. 1) have proven their suitability in various fields of application. The STM was developed as an instrument to image conductive surfaces with atomic resolution, whereas the AFM was designed for measurements on non-conductive surfaces. The capability of both to investigate surfaces with unprecedented resolution gave rise to a large variety of techniques using probes that exploit different interactions in order to probe sample properties. SPMs have become important for the measurement of small structures in dimensional metrology, e.g. pitch [3], step height [3], particle diameter [4], line width [5] and roughness, as these techniques achieve high spatial resolution.
Fig. 1. Schematic diagrams of scanning tunnelling microscopy (STM) (left) and scanning force microscopy (SFM) (right). The probe is a fine tip (a metal needle for STM, or a silicon cantilever with tip for SFM) and is moved across the sample surface either in contact or in non-contact mode at a small distance.
Hybrid Measurement Technologies
457
The uncertainties for step height and pitch measurements are now in the sub-nanometre and picometre range, respectively [3]. The main contributions to the uncertainty are still due to properties of the scanning and positioning apparatus. On the atomic scale, however, effects due to the tip shape or tip wear, together with the interaction forces between tip and sample, which cause elastic or plastic deformations of both, have to be taken into account. For scanning probe microscopes the fine tip is crucial to the spatial resolution of structures on the sample, but it must be considered that this tip is naturally not infinitely fine. Thus the geometry and the physical characteristics of the probe, together with the interaction between probe and sample, are of substantial importance for the measurement. For dimensional metrology within the range of atomic dimensions, this is consequently essential for the understanding of measurements.
2 Tip geometry
In some cases the influence of the tip shape can be assumed to be very small, e.g. for pitch or step height measurements on homogeneous material. But it is obvious that for measurements of line width, particle diameter and roughness in the nanometre range, knowledge of the shape of the tip is essential for the result. Therefore several techniques have been developed over the years [6-8] to determine the tip shape, i.e. the opening angle α and the radius R of the tip. Neglecting effects due to the interaction between tip and sample, the measurement of a sample by a tip formally corresponds to a morphological operation and can be described as a dilation [7, 8]. Consequently, after performing an erosion (the mathematical reverse of a dilation) the reconstructed surface is obtained, which in the ideal case is identical to the imaged surface. Measurements of the tip shape can be made by SEM; however, these are off-line measurements and the whole procedure is rather time-consuming. Therefore special samples, so-called tip characterizers, have been developed for the measurement of the tip shape. The idea is based on a reconstruction from the measurement data; consequently, exact knowledge of the geometrical shape of the characterizer is required. On the other hand, the calibration standards must be used relatively often for precise measurements. As with SEM tip inspection, these measurements need to be performed ex situ. Another method is called 'blind tip reconstruction' [7, 8]. Here the geometry of the tip is estimated from the data measured during the scanning of the sample. The approach is based on the assumption that the image of
the shape of the tip is always a component of the topological maps of a sample that can be generated by means of dilation. An iterative algorithm thus permits an estimation of the maximum dilation of the tip. A closer look at the upper-bound estimate for the tip provided by the blind reconstruction method leads to a first, qualitative estimation of the geometric properties of the tip apex: symmetry, opening angle α and radius R. However, this technique is quite restricted, as can be seen in [5]. Czerkas et al. used different samples to determine the tip shape. Using gold nanospheres and a so-called TipCheck sample they obtained a tip angle of about 40-50°; other samples gave a value of 60-70° instead. The reason for this deviation can be related to the different physical properties of the different samples, which affect the interaction forces. Thus the geometry of the tip is only one component of the tip-related influences that need to be taken into account for a precise description of a measurement. A non-negligible effect may be caused by the interaction forces, which can temporarily or permanently modify the tip shape by elastic or plastic deformation; strong adhesion may cause tip wear, and capillary forces due to water films may strongly influence, e.g., line width measurements.
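The dilation/erosion picture can be made concrete with a one-dimensional sketch (illustrative only; real tip reconstruction works on 2-D images, and the parabolic tip and step surface below are arbitrary choices).

```python
import numpy as np

def dilate(surface, tip):
    """Grayscale dilation: the image an ideal (interaction-free) scan of
    `surface` with probe shape `tip` would record. The tip is given as
    heights relative to its apex (apex = 0, samples <= 0)."""
    n, k = len(surface), len(tip)
    half = k // 2
    pad = np.pad(surface, half, mode="edge")
    return np.array([np.max(pad[i:i + k] + tip) for i in range(n)])

def erode(image, tip):
    """Grayscale erosion: the reconstructed surface (an upper bound on
    the true surface) recovered from the dilated image."""
    n, k = len(image), len(tip)
    half = k // 2
    pad = np.pad(image, half, mode="edge")
    return np.array([np.min(pad[i:i + k] - tip) for i in range(n)])

# a sharp step scanned with a blunt parabolic tip (11 samples wide)
surface = np.where(np.arange(40) < 20, 0.0, 5.0)
x = np.arange(-5, 6)
tip = -0.2 * x ** 2

image = dilate(surface, tip)   # the step edge appears rounded
recon = erode(image, tip)      # erosion sharpens it again
print(bool(np.all(recon >= surface - 1e-9)))  # → True
```

The final check illustrates why the reconstruction is only an upper bound: erosion after dilation (a morphological closing) never digs below the true surface, but it cannot recover regions the tip could not reach.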
3 Tip – sample interaction
3.1 STM tips and electron density of states
In a scanning tunnelling microscope, electrons in the fine metal needle sense the empty electron states in the sample and can tunnel through the vacuum to the sample and vice versa. Only at very small distances of fractions of a nanometre does this effect lead to detectable currents; if the tip is withdrawn only a little, the probability of tunnelling decreases substantially. On the other hand, if the metallic needle is very close to the sample surface, the electronic states of the sample may be affected a priori by the presence of the probe. According to the Tersoff-Hamann theory [9], the STM measures the electron density of the sample at the position of the centre of the tip at the Fermi energy (small voltages). Additionally, on the electronic scale the "sample geometry", e.g. its edges, is no longer sharp: the electron states tend to flatten sharp edges of steps, lines or grooves.
Fig. 2. Interaction forces in SFM (vdW: van der Waals, M: magnetic, E: electrostatic forces)
3.2 SFM tips and interaction forces
Fig. 2 illustrates the profile produced by a tip which experiences different interaction forces across its scan over the surface. In contact, the Lennard-Jones potential repels the tip, whereas in non-contact mode attractive forces such as the short-range van der Waals force and the long-range capillary force may act on the tip [10]. Local electrical charges on the surface may lead to attractive or repulsive electrostatic forces on the tip (electrostatic force microscopy). In a similar way, magnetic forces can be imaged if the tip is coated with a magnetic material, e.g. iron, that has been magnetised along the tip axis (magnetic force microscopy); the tip probes the stray field of the sample and allows the magnetic structure of the sample to be determined. Stick-slip processes exert friction or lateral forces on the tip. The SFM tip can be driven in an oscillating mode to probe the elastic properties of a surface (elastic modulus spectroscopy), and increasing the tip force allows plastic deformation or nanoindentation to be probed. All the interaction forces depend strongly on the separation between tip and sample and on the materials and material properties (e.g. magnetic) involved. This is only a short overview (see also [10, 11]); more information about a modern approach to the calculation of van der Waals forces is given by Hartmann [12]. The lateral resolution of an SFM is highest in contact mode, or when the tip strongly feels repulsive forces during oscillation. In non-contact mode the tip is sensitive to van der Waals, electric or magnetic forces, but the resolution is reduced due to the increased probe-sample separation. Information on force gradients can be obtained by cantilever oscillation
techniques. Depending on the oscillation amplitude, terms like 'tapping mode' or 'dynamic force microscopy' are used. Dynamic force microscopy is able to provide true atomic resolution on various surfaces under ultra-high vacuum conditions and allows force spectroscopy on specific sites. The measurement of forces as a function of the tip-sample separation (force-distance curves) allows conclusions to be drawn regarding the material characteristics of surfaces and their chemical properties [13]. For a better understanding of dimensional measurements these effects have to be taken into account, especially on inhomogeneous samples, e.g. height measurements of chromium lines on glass or of metal electrodes in a silicon oxide matrix. This is essential for non-contact measurements, where the attractive van der Waals force acts on the tip, but other modes may also be influenced by such a material-dependent interaction, especially during "contact" with the sample surface. An understanding of the contact, adhesion and friction forces between surfaces requires knowledge of the area of contact between them. Depending on the elastic modulus and the adhesion, strong deformation of tip and sample can occur. Continuum models which predict the contact area for various geometries have been worked out, starting with the work of Hertz [14], which assumes no surface forces (Table 1). The spatial range over which surface forces act depends on the chemistry of the materials in contact, and may or may not be long-range compared to the scale of the elastic deformations they cause. Two limiting cases are apparent. If the surface forces are short-range in comparison to the elastic deformations they cause, the contact area is described by the Johnson-Kendall-Roberts-Sperling (JKRS) model [15, 16]. The opposite limit is referred to as the Derjaguin-Müller-Toporov (DMT) regime [17], and the form of the contact area is presented in the work of Maugis [18].
Maugis provides an analytic solution using a Dugdale model, but the resulting equations are cumbersome when compared with experimental data deduced from scanning force microscope measurements. Therefore there is a demand for good theoretical calculations of the response of an atomic force microscope tip to the extremely non-linear impacts received while touching the sample. The dependence of the tip or cantilever amplitude and phase on the sample stiffness, adhesion and damping has to be investigated using an appropriate theoretical model.
Table 1. Comparison of the Hertz, DMT, JKRS and MD models

Hertz [14]
  Assumptions: no surface forces.
  Limitations: not appropriate for low loads if surface forces are present.

Derjaguin-Müller-Toporov (DMT) [17]
  Assumptions: long-range surface forces act only outside the contact area (stiff materials, weak adhesion forces, small tip radii).
  Limitations: may underestimate the contact area due to the restricted geometry.

Johnson-Kendall-Roberts-Sperling (JKRS) [15, 16]
  Assumptions: short-range surface forces act only inside the contact area; the contact geometry is allowed to deform (compliant materials, strong adhesion forces, large tip radii).
  Limitations: may underestimate the loading due to surface forces.

Maugis-Dugdale (MD) [18]
  Assumptions: surface forces act everywhere; the contact geometry is allowed to deform.
  Limitations: must be solved numerically [see also 19].
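For orientation, the Hertz and DMT contact radii can be computed from the standard closed-form results a³ = 3FR/(4E*) and, for DMT, the same expression with the adhesion force 2πRw added to the load. These are textbook formulas, not given in the text above, and the numbers below are purely illustrative.

```python
import math

def reduced_modulus(E1, nu1, E2, nu2):
    """Reduced (contact) modulus E* of the tip/sample pair."""
    return 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)

def hertz_contact_radius(F, R, E_star):
    """Hertz model: a^3 = 3*F*R / (4*E*), no surface forces."""
    return (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)

def dmt_contact_radius(F, R, E_star, w):
    """DMT model: the adhesion force 2*pi*R*w simply adds to the load F."""
    return hertz_contact_radius(F + 2.0 * math.pi * R * w, R, E_star)

# illustrative numbers: silicon-on-silicon (E = 165 GPa, nu = 0.22),
# tip radius 10 nm, load 10 nN, work of adhesion 0.1 J/m^2
E_star = reduced_modulus(165e9, 0.22, 165e9, 0.22)
a_hertz = hertz_contact_radius(10e-9, 10e-9, E_star)
a_dmt = dmt_contact_radius(10e-9, 10e-9, E_star, 0.1)
print(f"Hertz: {a_hertz*1e9:.2f} nm, DMT: {a_dmt*1e9:.2f} nm")
```

As Table 1 suggests, including adhesion (DMT) always enlarges the predicted contact radius relative to the adhesion-free Hertz value at the same external load.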
3.3 A model for SFM tip – sample interaction
Ideally, a complete description of the experimental phenomena would take account of:
a) the non-linear long-range attractive forces between the tip and the sample,
b) the non-linear mechanical compliance of the contact region,
c) the contact area itself, which is a function of the surface forces that act during contact, as well as of the externally applied load and of the elasticity and geometry of the materials,
d) the geometrical shape of the tip and the sample structure.
Such a model has to include the attractive van der Waals force during the non-contact movement as well as the ionic repulsion and additional adhesion forces during contact. Burnham et al. [20] have set up a model which fulfils the first points of the above list but does not account for plasticity of the tip or sample. Nevertheless, with this model it is possible to calculate the instantaneous contact pressure, which can then be used to estimate whether permanent deformation occurs. In the model the cantilever is represented by a massless spring of constant kc and an effective point mass m, whereas in reality its mass is distributed. The model uses a tip-sample interaction model that is analytic, covers all possible combinations of tip and sample materials and interaction strengths, and accounts for long-range attractive forces. The starting point for the model is an excited damped harmonic oscillator with an additional term describing the interaction between tip and sample. The motion is expressed in terms of the displacement of the tip, d(t), and the position of the root of the cantilever beam, z(t); β_c describes the damping between the base of the cantilever and the tip. Rearrangement of the second-order differential equation leads to
md(t ) 2mE c [d (t ) z(t )] kc [d (t ) z (t )] P[d (t )] (1) Here the right-hand side P[d(t)] contains the unique physics of the contact. Two separate function P are needed to describe the attractive force in non-contact regime and the net force in contact. Using normalised forces makes it convenient to discuss the pure contact mechanics. Burnham et al. used this model to reproduce details of experimentally observed behaviour of intermittent contact measurements in both air and liquids. Furthermore they used the model to predict the dependence of cantilever response upon materials’ properties and conclude that topographic images obtained are not independent of sample properties. However extracting quantitative mechanical properties from the measurements will prove to be too challenging. Nevertheless, although not all questions are answered, such models are necessary for a better understanding of experimental results obtained in dimensional metrology.
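The dynamics of Eq. (1) can be explored numerically. The sketch below integrates the oscillator with a classical RK4 scheme for a hypothetical piecewise tip-sample force P (van der Waals attraction in non-contact, linear repulsion in contact); all parameter values are assumed for illustration only and are not those of Burnham et al.

```python
import math

# Sketch: integrate m*d'' + 2*m*beta_c*(d' - z') + k_c*(d - z) = P[d] (Eq. 1)
# for a static cantilever base, with an assumed piecewise force P.
m      = 1.0e-10   # effective point mass [kg] (assumed)
k_c    = 1.0       # cantilever spring constant [N/m] (assumed)
beta_c = 5.0e4     # damping rate [1/s] (assumed)
HR     = 1.0e-27   # Hamaker constant times tip radius [J*m] (assumed)
a0     = 0.3e-9    # separation at contact onset [m] (assumed)
k_s    = 10.0      # contact stiffness [N/m] (assumed)
d_rest = 10.0e-9   # rest position of the undriven cantilever [m] (assumed)

def P(d):
    """Hypothetical tip-sample force: vdW attraction above a0, repulsion below."""
    if d > a0:
        return -HR / (6.0 * d * d)                    # non-contact regime
    return -HR / (6.0 * a0 * a0) + k_s * (a0 - d)     # contact regime

def accel(d, v):
    # static base: z(t) = d_rest, z'(t) = 0
    return (P(d) - 2.0 * m * beta_c * v - k_c * (d - d_rest)) / m

def rk4_step(d, v, h):
    k1d, k1v = v, accel(d, v)
    k2d, k2v = v + 0.5*h*k1v, accel(d + 0.5*h*k1d, v + 0.5*h*k1v)
    k3d, k3v = v + 0.5*h*k2v, accel(d + 0.5*h*k2d, v + 0.5*h*k2v)
    k4d, k4v = v + h*k3v,     accel(d + h*k3d,     v + h*k3v)
    return (d + h/6.0*(k1d + 2*k2d + 2*k3d + k4d),
            v + h/6.0*(k1v + 2*k2v + 2*k3v + k4v))

# release the tip at 5 nm separation and let it ring down
d, v, h = 5.0e-9, 0.0, 1.0e-7
for _ in range(5000):
    d, v = rk4_step(d, v, h)

print(d)   # settles near d_rest: here surface forces barely shift equilibrium
```

With these assumed parameters the tip rings down to its rest position without reaching contact; moving d_rest closer to the surface strengthens the attractive term and can drive the tip into the contact regime.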
4 Conclusions
Scanning probe microscopes make it possible to resolve small structures with very high resolution. These instruments are therefore essential for dimensional metrology from the micro scale down to the atomic scale. The image obtained, however, is more than a morphological operation linking tip and sample geometry. To achieve high accuracy, detailed knowledge of the interaction between sample and tip is necessary. First approaches have been published; however, SPM techniques still need more appropriate and more quantitative models.
Hybrid Measurement Technologies
463
5 References
1. Binnig G, Rohrer H, Gerber Ch, Weibel E (1982) Phys. Rev. Lett. 49:57
2. Binnig G, Quate C F, Gerber Ch (1986) Phys. Rev. Lett. 56:930
3. Dai G, Koenders L, Pohlenz F (2005) Meas. Sci. Technol. 16:1241
4. Meli F (2005) in Nanoscale Calibration Standards and Methods: Dimensional and Related Measurements in the Micro- and Nanometer Range. Edited by G. Wilkening and L. Koenders, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, p. 361
5. Czerkas S, Dziomba T, Bosse H (2005) in Nanoscale Calibration Standards and Methods: Dimensional and Related Measurements in the Micro- and Nanometer Range. Edited by G. Wilkening and L. Koenders, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, p. 311
6. Keller D J (1991) Surf. Sci. 253:353
7. Villarrubia J S (1994) Surf. Sci. 321:287
8. Williams P M, Shakesheff K M, Davies M C, Jackson D E, Roberts C J, Tendler S J B (1996) J. Vac. Sci. Technol. B 14:1557
9. Tersoff J, Hamann D R (1985) Phys. Rev. B 31:805
10. Giessibl F J (2003) Rev. Mod. Phys. 75:949
11. Tsukada M, Sasaki M, Gauthier M, Tagami K, Watanabe S (2002) in Noncontact Atomic Force Microscopy, ed. S Morita, R Wiesendanger, E Meyer (Springer, Berlin) Chap. 15, p. 257
12. Hartmann U (1991) Theory of van der Waals microscopy. J. Vac. Sci. Technol. B 9:465
13. Burnham N A, Kulik A J, Oulevey F, Mayencourt C, Gourdon D, Dupas E, Gremaud G (1997) in B. Bhushan (ed.) Micro/Nanotribology and Its Applications, p. 421
14. Hertz H (1881) J. Reine Angew. Math. 92:156
15. Johnson K L, Kendall K, Roberts A D (1971) Proc. R. Soc. A 324:301
16. Sperling G (1964) Eine Theorie der Haftung von Feststoffteilchen an festen Körpern. PhD Thesis, T. H. Karlsruhe
17. Derjaguin B V, Muller V M, Toporov Y P (1975) J. Coll. Interface Sci. 53:314
18. Maugis D (1992) J. Coll. Interface Sci. 150:243
19. Carpick R W, Ogletree D F, Salmeron M (1999) J. Coll. Interface Sci. 211:395
20. Burnham N A, Behrend O P, Oulevey F, Gremaud G, Gallo P-J, Gourdon D, Dupas E, Kulik A J, Pollock H M, Briggs G A D (1997) Nanotechnology 8:67
Applications of time-averaged digital holographic interferometry
Nazif Demoli, Kristina Šariri
Institute of Physics, Bijenička c. 46, 10000 Zagreb, Croatia
Dalibor Vukicevic, Marc Torzynski
University Louis Pasteur, Ecole Nationale Superieure de Physique de Strasbourg, Boulevard Sebastien Brant, 67412 Illkirch, France
1 Introduction
A well-established technique for studying surface vibrations is time-averaged holographic interferometry [1]. In the same manner, with appropriate capturing (an exposure time much longer than the period of vibration), vibration modes can be observed using the digital holography technique as well [2]. Digital holograms are recorded using a CCD camera, stored in the frame-grabber memory, and reconstructed numerically using computer software. The drawbacks of digital holography techniques are related to the recording conditions: the necessarily low numerical aperture leads to large speckle noise and to overlapping of the zero- and first-order terms. To overcome these difficulties we have proposed a method called subtraction digital holography, which enables efficient suppression of the zero-order disturbance in off-axis digital holography [3]. By subtracting two stochastically changed primary fringe patterns, corresponding in the cases presented here to time-averaged digital holograms, and applying the standard numerical FFT procedure, clear object reconstructions covered with time-averaged interferometric fringes are obtained. Previously, we have demonstrated the use of subtraction digital holography for speckle-noise reduction, and thereby spatial-resolution enhancement, through wavelength multiplexing, i.e. through color recordings [4]. In this work, we present our most recent results in time-averaged subtraction digital holographic interferometry.
In the first example we demonstrate the power of the subtraction digital holography method for the time-averaged analysis of realistic, large vibrating objects. With the second example we demonstrate how the particular properties of digital holography can be exploited to obtain additional information that is not available at all through classical holographic recordings: a method is developed for detecting displacements of a surface from its steady-state position to its equilibrium position while it is vibrating [5].
2 Theory
We consider a quasi-Fourier off-axis setup with both a reference point source and an opaque object located at the input plane P1 [coordinates (x_1, y_1)] and a CCD sensor located at the hologram plane P2 [coordinates (x_2, y_2)] at a distance d from the plane P1. Digital holograms are captured optically and stored in a computer memory by the CCD. The recorded primary fringe patterns are then reconstructed numerically. The input field at the plane P1 can be described as

U(x_1, y_1, t) = s(x_1, y_1, t) + δ(x_1 − X, y_1 − Y),   (1)

where s(x_1, y_1, t) = |s(x_1, y_1)| exp[iφ(x_1, y_1, t)] is the object wave front and (X, Y) is the position of the point source. The time-dependent phase

φ(x_1, y_1, t) = ϕ(x_1, y_1, t) + ψ(x_1, y_1, t)   (2)

is composed of its deterministic and random parts, ϕ(x_1, y_1, t) and ψ(x_1, y_1, t), respectively. The diffracted field at the plane P2 is calculated according to the Fresnel approximation,

U(x_2, y_2, t) ∝ exp[iξ(x_2, y_2)] F{U(x_1, y_1, t) exp[iξ(x_1, y_1)]},   (3)

where ξ(α, β) = (π/λd)(α² + β²), F denotes the Fourier-transform operator, λ is the wavelength, and constant terms are omitted. From Eqs. 1 and 3 it follows that

U(x_2, y_2, t) ∝ exp[iξ(x_2 − X, y_2 − Y)] + exp[iξ(x_2, y_2)] F{s(x_1, y_1, t) exp[iξ(x_1, y_1)]}.   (4)
The intensity at the hologram plane is given by the squared modulus of the amplitude,

I(x_2, y_2, t) ∝ DC + exp[−iζ(x_2, y_2)] F{s(x_1, y_1, t) exp[iξ(x_1, y_1)]} + CC,   (5)

where DC denotes the zero-order term, ζ(x_2, y_2) = (2π/λd)(x_2 X + y_2 Y), CC denotes the complex conjugate of the previous term, and a complex constant is omitted. A time-integrating photodetector records an exposure given by

E(x_2, y_2) = ∫₀^τ I(x_2, y_2, t) dt,   (6)

where τ is the exposure time. Two exposures recorded at different time instants are given by

E_i(x_2, y_2) ∝ DC_i + CC_i + exp[−iζ(x_2, y_2)] F{s(x_1, y_1, t_i) exp[iξ(x_1, y_1)]},   (7)

where s(x_1, y_1, t_i) = |s(x_1, y_1)| exp{i[ϕ(x_1, y_1) + ψ(x_1, y_1, t_i)]} and i = 1, 2. To apply the subtraction digital holography technique (for more details see reference [3]), we must calculate ΔE(x_2, y_2) = E_1(x_2, y_2) − E_2(x_2, y_2),

ΔE(x_2, y_2) ∝ exp[−iζ(x_2, y_2)] F{s(x_1, y_1, t_i) Z(x_1, y_1) exp[iξ(x_1, y_1)]} + exp[iζ(x_2, y_2)] F*{s(x_1, y_1, t_i) Z(x_1, y_1) exp[iξ(x_1, y_1)]},   (8)

where Z(x_1, y_1) represents the difference function

Z(x_1, y_1) = 1 − exp[iΔψ(x_1, y_1)]   (9)

with Δψ(x_1, y_1) = ψ_2(x_1, y_1) − ψ_1(x_1, y_1). The hologram reconstruction I′(x_1, y_1) is obtained by calculating the inverse Fourier transform of ΔE(x_2, y_2) and then taking the squared modulus:

I′(x_1, y_1) ∝ |s(x_1 − X, y_1 − Y)|² + |s(x_1 + X, y_1 + Y)|².   (10)
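As a toy numerical illustration of Eqs. 5-10 (a sketch, not the experimental procedure): a quasi-Fourier hologram is simulated as the squared modulus of the Fourier transform of an object field plus a reference point source, two exposures with stochastically changed random phases are subtracted, and the inverse FFT shows that the zero-order term cancels while the object order survives.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 128

# object support placed away from the reference point, so that the
# reconstructed orders separate from the zero-order halo
amp = np.zeros((N, N))
amp[40:60, 40:60] = 1.0

def exposure(psi):
    """Hologram intensity |F{object*exp(i*psi) + reference delta}|^2."""
    field = amp * np.exp(1j * psi)
    field[0, 0] += 20.0               # reference point source at the origin
    return np.abs(np.fft.fft2(field))**2

psi1 = rng.uniform(0, 2*np.pi, (N, N))
psi2 = psi1 + rng.uniform(0, 2*np.pi, (N, N))  # stochastically changed phase

E1, E2 = exposure(psi1), exposure(psi2)
rec_single = np.abs(np.fft.ifft2(E1))**2       # ordinary reconstruction
rec_sub    = np.abs(np.fft.ifft2(E1 - E2))**2  # subtraction reconstruction

# the DC pixel cancels exactly (Parseval: both exposures carry the same
# total energy), while the object order at the block location survives
print(rec_single[0, 0], rec_sub[0, 0], rec_sub[40:60, 40:60].sum())
```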
Thus, we obtain two symmetrical object reconstructions free from the zero-order disturbance, which allows improved recordings of large objects. The principle of the subtraction technique rests on the stochastic nature of the phase Δψ(x_1, y_1), since we must calculate the squared modulus of the difference function. In the analysis of the various object deformations below, the stochastic term is omitted and only one object reconstruction (the +1 order) is described. First we consider a static deformation influencing the phase change of the object wave front, described by

Δφ(x_1, y_1) = Δϕ(x_1, y_1),   (11)

which, when the corresponding exposure is subtracted from that of the initial object wavefront,

ΔE(x_2, y_2) ∝ exp[−iζ(x_2, y_2)] F{s(x_1, y_1, t_i) (1 − exp[iΔϕ(x_1, y_1)]) exp[iξ(x_1, y_1)]},   (12)

yields the reconstruction

I′(x_1, y_1) ∝ |s(x_1 − X, y_1 − Y)|² {1 − cos[Δϕ(x_1 − X, y_1 − Y)]}.   (13)

Second we consider a dynamic deformation with the object phase given by

Δφ(x_1, y_1, t) = (4π/λ) h(x_1, y_1) sin(2πft),   (14)

where we have assumed that only out-of-plane harmonic vibrations of the object occur and that the illumination is normal to the object surface. In Eq. 14, h(x_1, y_1) is the vibration amplitude (the maximum deviation from the surface equilibrium) and f is the vibration frequency. If the integration time of the CCD sensor satisfies the condition τ >> 1/f, the exposure is

ΔE(x_2, y_2) ∝ exp[−iζ(x_2, y_2)] F{s(x_1, y_1, t_i) J_0[(4π/λ) h(x_1, y_1)] exp[iξ(x_1, y_1)]}   (15)

and the reconstruction is

I′(x_1, y_1) ∝ |s(x_1 − X, y_1 − Y)|² J_0²[(4π/λ) h(x_1 − X, y_1 − Y)].   (16)
In Eqs. 15 and 16, J_0 denotes the zero-order Bessel function of the first kind. Third we consider both deformations, static and dynamic, applied simultaneously to the object surface and described by

Δφ(x_1, y_1, t) = (4π/λ) h(x_1, y_1) sin(2πft) + Δϕ(x_1, y_1),   (17)

with the reconstruction the same as described by Eq. 16. Evidently, the information about the static deformation is lost. To retrieve this information, the following procedure was proposed [5]:

(i) the subtraction of the undisturbed exposure and the exposure obtained for the last case, followed by the reconstruction procedure, yielding

I′(x_1, y_1) ∝ |s(x_1 − X, y_1 − Y)|² |1 − J_0[(4π/λ) h(x_1 − X, y_1 − Y)] exp[iΔϕ(x_1 − X, y_1 − Y)]|²;   (18)

(ii) the addition of the undisturbed exposure and the exposure obtained for the last case, followed by the reconstruction procedure, yielding

I′(x_1, y_1) ∝ |s(x_1 − X, y_1 − Y)|² |1 + J_0[(4π/λ) h(x_1 − X, y_1 − Y)] exp[iΔϕ(x_1 − X, y_1 − Y)]|²;   (19)

(iii) the subtraction of Eqs. 18 and 19, followed by taking the modulus, yielding

ΔI′(x_1, y_1) ∝ |s(x_1 − X, y_1 − Y)|² |J_0[(4π/λ) h(x_1 − X, y_1 − Y)] cos[Δϕ(x_1 − X, y_1 − Y)]|.   (20)

The result in Eq. 20 contains the reconstructed image of the object multiplied by both the dynamic and the static deformation fringes.
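At a single pixel, steps (i)-(iii) reduce to the identity |1 + J_0 e^{iΔϕ}|² − |1 − J_0 e^{iΔϕ}|² = 4 J_0 cos Δϕ, which is what makes the mixed static/dynamic fringe term retrievable. A minimal numeric check with assumed values for the vibration amplitude and static phase:

```python
import math

def j0(x, terms=30):
    """Power-series J0, adequate for the small argument used here."""
    return sum((-1.0)**k * (x / 2.0)**(2 * k) / math.factorial(k)**2
               for k in range(terms))

lam = 647e-9
h, dphi = 80e-9, 1.1          # assumed amplitude [m] and static phase [rad]
J = j0(4 * math.pi * h / lam)
e = complex(math.cos(dphi), math.sin(dphi))

sub_rec = abs(1 - J * e)**2   # single-pixel form of Eq. (18)
add_rec = abs(1 + J * e)**2   # single-pixel form of Eq. (19)

# Eq. (20): the difference isolates the mixed static/dynamic fringe term
diff = add_rec - sub_rec
print(diff, 4 * J * math.cos(dphi))
```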
3 Applications
We demonstrate two applications using the quasi-Fourier off-axis experimental setup described in reference [5]. We used a krypton-ion laser (647 nm) as the light source and an integrating CCD sensor (Kodak Megaplus, 1008 × 1018 pixels, 9 × 9 µm² each). The intensity ratio between the reference and object beams was adjusted to 3:1, using an exposure time of 64 ms. To apply the subtraction technique, two holograms were captured and stored in computer memory for each situation.

3.1 Large objects
As an example of a large object, we present the results for a standard 3.5-inch PC hard disk, which was forced to vibrate using a piezoelectric actuator. The total object surface was 110 × 150 mm², and the recording was made at a sensor-to-object distance of 3 m. Without the subtraction technique, only a four-times-smaller object could be recorded equivalently. Figure 1 shows the reconstructions, with the reference point scarcely visible in the lower middle region to the left of the hard disk.
Fig. 1. Hologram reconstructions of a hard disk. Left: no vibration, right: vibration at the frequency of 1350 Hz
3.2 Detection of hidden stationary deformations
As an example of detecting hidden deformations, we present the results for an oscillating membrane with a diameter of 32 mm. For the dynamic deformation, the membrane was excited to vibrate sinusoidally by a function generator. For the static deformation, the membrane was mounted on a rotational stage with two fixed positions. Digital holograms of the membrane were recorded corresponding to the initial position, the rotated position, and the rotated position with the membrane vibrating. Figure 2 shows these situations in (a) to (c), respectively, reconstructed by the corresponding digital holography procedures. The fringes of the rotated static displacement, not visible in Fig. 2(c), are detected by applying our method of interferogram analysis, see Fig. 2(d). Notice that the fringe pattern shown in Fig. 2(d) reveals a static deformation in addition to the rotation displacement.
Fig. 2. Hologram reconstructions of a membrane: (a) the initial position, (b) the difference between the initial and rotated positions, (c) the rotated position with vibration at 1020 Hz and (d) the same as (c) but obtained with the procedure described with Eqs. (18) – (20)
4 Summary
This work reports recent results in the area of time-averaged digital holographic interferometry. Two techniques are described: one that removes the zero-order reconstruction term, thus allowing the recording of large objects, and another by which hidden stationary deformations can be detected. Quantifying hidden deformations is important because it allows direct measurement of a stationary bias strain in the dynamic analysis of a vibrating object. Both techniques are explained mathematically and illustrated by experimental results.
Acknowledgments This work was supported by the French-Croatian Cogito project “Dynamic Multiwave Digital Holographic Interferometry” and by the Croatian Ministry of Science, Education and Sports under the project 0035005.
References
1. Powell R L, Stetson K A (1965) Interferometric vibration analysis by wavefront reconstruction. J. Opt. Soc. Am. 55:1593-1598
2. Picart P, Leval J, Mounier D, Gougeon S (2003) Time-averaged digital holography. Opt. Lett. 28:1900-1902
3. Demoli N, Meštrović J, Sović I (2003) Subtraction digital holography. Appl. Opt. 42:798-804
4. Demoli N, Vukicevic D, Torzynski M (2003) Dynamic digital holographic interferometry with three wavelengths. Opt. Express 11:767-774
5. Demoli N, Vukicevic D (2004) Detection of hidden stationary deformations of vibrating surfaces by use of time-averaged digital holographic interferometry. Opt. Lett. 29:2423-2425
Spatio-temporal encoding using digital Fresnel holography Pascal Picart, Michel Grill, Julien Leval, Jean Pierre Boileau, Francis Piquet Laboratoire d’Acoustique de l’Université du Maine, Avenue Olivier Messiaen, 72085 LE MANS Cedex 9, France ; email : [email protected]
1 Introduction
Over the past three decades, holography and speckle interferometry have demonstrated their strong ability to measure the displacement fields of mechanical assemblies under static or dynamic loading. Since the 1990s, digital holography has become practical with the advances in data-acquisition technology and the increasing computational power of computers [1]. It is reasonable to expect that, in the medium term, digital holography will replace classical holography with photographic plates as the recording medium. Digital holography can be used advantageously to image objects, and many interesting possibilities have recently been demonstrated, including amplitude-contrast and phase-contrast microscopic imaging [2]. Speckle metrology is suited to studying objects under static or quasi-static loading [3], under sinusoidal excitation using stroboscopic light [4], or with time averaging [5]. Transient situations are difficult to study because of the detector frame rate. High-speed cameras allowing up to 40000 frames per second to be recorded can give microsecond temporal resolution [6]; however, the spatial resolution is poor because of the low pixel number that can be used at such frame rates. For example, at 4500 frames per second the image resolution is about 256 × 256 pixels, and 64 × 64 at 40500 frames per second [6]. Another strategy consists in using additive speckle-pattern interferometry and a spatial-carrier fringe-pattern evaluation [7]; the main advantages are that the detector frame rate can be low and that the full recording is performed during a single exposure. The use of a multi-pulse and multi-detector architecture also allows transient deformations to be evaluated [8].
In this paper, we propose an alternative approach to transient deformation analysis based on digital Fresnel holography and the spatial multiplexing of digital holograms onto a single CCD detector. The method is based on path switching during the recording of the hologram. This allows an efficient simultaneous spatio-temporal encoding, leading to a temporal analysis without loss of spatial resolution and with a low frame rate.
2 Theory
Let us consider an interferometric set-up with a laser as the illumination source, one part of the laser beam illuminating the object. Since the object surface is rough, time varying and illuminated by a coherent beam, it induces a spatio-temporal optical phase modulation which can be written as A(x, y, t) = A_0(x, y) exp[iψ_0(x, y, t)], where ψ_0(x, y, t) is a time-varying random phase. Diffraction of the object beam over the distance d_0 from object to detector produces a speckle pattern written under the Fresnel approximation as

O(x′, y′, d_0, t) = (i/λd_0) exp(i2πd_0/λ) exp[(iπ/λd_0)(x′² + y′²)] ∫∫ A(x, y, t) exp[(iπ/λd_0)(x² + y²)] exp[−(2iπ/λd_0)(xx′ + yy′)] dx dy,   (1)

where the double integral runs over the whole object plane.
Consider that the set-up produces a path-dependent, smooth, plane reference wave written as R_j(x′, y′) = a_j exp[2iπ(u_j x′ + v_j y′)], with j = 1 for path no. 1 and j = 2 for path no. 2; ideally a_1 = a_2 = a_R. Consider now the following sequence: the laser source emits two pulses, the first at time t_1 and the second at time t_2; the recording by the CCD is performed during an exposure time T greater than the pulse separation Δt = t_2 − t_1; and the reference path is switched from path no. 1 to path no. 2 between the two pulses. Thus the reference wave associated with pulse no. 1 follows path no. 1, the reference wave associated with pulse no. 2 follows path no. 2, and the three beams (object, reference no. 1, reference no. 2) illuminate the CCD during the exposure T. The recorded hologram can then be written as

H(t_1, t_2) = |O(t_1)|² + |O(t_2)|² + |R_1|² + |R_2|² + R_1* O(t_1) + R_2* O(t_2) + R_1 O*(t_1) + R_2 O*(t_2).   (2)
The numerical reconstruction of the object plane is based on the diffraction integral given in Eq. 1, treating the hologram as a transmittance and considering a discrete spatial sampling of H(x′, y′, d_0, t_1, t_2) given in Eq. 2. It assumes that x′ = l p_x and y′ = k p_y, {p_x, p_y} being the pixel pitches of the solid-state sensor. The computation of the reconstructed field in the +1 order for the jth hologram leads to [5]

A_{+1}^{Rj}(x, y, d_0, t_j) ≃ a_j exp[−iπλd_0(u_j² + v_j²)] A(x − λu_j d_0, y − λv_j d_0, t_j).   (3)

Note that A_{+1}^{R1} and A_{+1}^{R2} are multiplexed in the reconstructed field because of the appropriate choice of the spatial frequencies {u_j, v_j}. When the two holograms are demultiplexed according to the method proposed in [3], each phase term can be extracted; it is expressed as

ψ_j(x, y) = 2π(u_j x + v_j y) − πλd_0(u_j² + v_j²) + ψ_0(x, y, t_j).   (4)

When the object is in a static state we have ψ_0(x, y, t_2) = ψ_0(x, y, t_1). When the object is under dynamic excitation, we get ψ_0(x, y, t_2) = ψ_0(x, y, t_1) + Δϕ(t_1, t_2). So the phase difference Δψ = ψ_2 − ψ_1 includes the time-varying phase change Δϕ. Since in a static state we have Δψ_S = 2π[(u_2 − u_1)x + (v_2 − v_1)y] − πλd_0(u_2² − u_1² + v_2² − v_1²), the phase term of the subtraction includes a phase bias term. This term hides the useful information and must therefore be removed. After removing Δψ_S, the phase difference in a dynamic state is simply Δψ_D = Δϕ(t_1, t_2). Spatio-temporal encoding thus makes it possible to study transient deformations using low-frame-rate cameras, the temporal resolution being limited only by the time difference between the two laser pulses.
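The bias-removal bookkeeping of Eq. 4 can be sketched numerically; the carrier frequencies, wavelength, distance and object phase below are assumed values chosen only for illustration:

```python
import numpy as np

lam, d0 = 532e-9, 1.4                # wavelength [m], distance [m] (assumed)
N, pitch = 256, 4.65e-6
coords = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(coords, coords)

u1, v1 = 2.0e4, 1.5e4                # carrier frequencies, path 1 [1/m]
u2, v2 = -1.8e4, 2.2e4               # carrier frequencies, path 2 [1/m]

# hypothetical object phase change between the two pulses (a smooth bump)
dphi = 2 * np.pi * np.exp(-(X**2 + Y**2) / (2 * (N * pitch / 6)**2))

def psi(u, v, psi0):
    """Reconstructed phase of one demultiplexed hologram, Eq. (4)."""
    return 2*np.pi*(u*X + v*Y) - np.pi*lam*d0*(u**2 + v**2) + psi0

dpsi = psi(u2, v2, dphi) - psi(u1, v1, np.zeros((N, N)))

# static phase-bias term, known from the chosen carrier frequencies
dpsi_S = (2*np.pi*((u2 - u1)*X + (v2 - v1)*Y)
          - np.pi*lam*d0*(u2**2 - u1**2 + v2**2 - v1**2))

residual = np.max(np.abs((dpsi - dpsi_S) - dphi))
print(residual)   # essentially zero: removing the bias recovers dphi
```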
3 Experimental set-up
We used an interferometric set-up with two different paths for the reference wave, described in figure 1. The laser is a frequency-doubled Nd:YAG pulsed laser (20 ns, 15 mJ). The mixing of the four waves on the CCD area produces the addition of two spatio-temporally shifted holograms of the object. The path switching is performed by polarization switching in a secondary Mach-Zehnder architecture, using a high-speed Pockels cell to which two different voltages are applied in order to obtain a half-wave plate. Such a device is usually devoted to laser-pulse generation in pulsed lasers. Each path of this secondary interferometer includes a telescopic system whose last lens is translated perpendicularly to its optical axis in order to produce suitable spatial frequencies for each reference wave.
The reference wave propagates along one of the two possible paths indicated in figure 1. The path is determined by the polarization of the reference wave before polarizing beam splitter no. 2: if the reference wave is s-polarized, it follows path no. 1; if it is p-polarized, it follows path no. 2. The switching between the two paths is produced by the Pockels cell placed just before polarizing beam splitter no. 2.
Fig. 1. Experimental set-up for dual space-time encoding
As pointed out, off-axis holographic recording is introduced using lens L2 on path no. 1 and lens L3 on path no. 2. The lenses are displaced off the afocal axis by means of two micrometric transducers to adjust the values of the spatial frequencies {u_j, v_j}. They are adjusted such that there is no overlapping between the five diffracted orders when the field is numerically reconstructed [3]. The digital holograms are reconstructed using a discrete version of the Fresnel transform programmed in Matlab 5.3 [5]. De-multiplexing is the step that determines the point-to-point relation between the two holograms; it is achieved according to [3]. The detector is a 12-bit digital CCD (PCO PixelFly) with 1024 × 1360 pixels, each of size p_x = p_y = 4.65 µm. The camera is driven by the CamWare software via a PCI acquisition board.
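The discrete Fresnel transform used for the reconstruction can be sketched as a single-FFT chirp transform (a Python sketch rather than the authors' Matlab code; the random array below merely stands in for a recorded hologram, and the distance is an assumed value):

```python
import numpy as np

def fresnel_reconstruct(hologram, lam, d0, px, py):
    """Discrete Fresnel transform, single-FFT form: treat the hologram as a
    transmittance, multiply by the quadratic phase chirp, Fourier transform."""
    M, N = hologram.shape
    y = ((np.arange(M) - M // 2) * py)[:, None]
    x = ((np.arange(N) - N // 2) * px)[None, :]
    chirp = np.exp(1j * np.pi / (lam * d0) * (x**2 + y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

lam, d0 = 532e-9, 1.4          # frequency-doubled Nd:YAG line; assumed distance
px = py = 4.65e-6              # PixelFly pixel pitch
holo = np.random.default_rng(0).random((1024, 1360))   # stand-in hologram
field = fresnel_reconstruct(holo, lam, d0, px, py)
```

The chirp has unit modulus, so the transform conserves the hologram energy up to the FFT normalization; only the choice of d_0 changes the plane that comes into focus.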
4 Experimental results
We applied the measurement principle to a loudspeaker 40 mm in diameter placed 1400 mm in front of the CCD area. The laser delivered pulses at a rate of 50 Hz, so two pulses were selected with a time difference t_2 − t_1 = 20 ms. Note that the pulse delay is limited by the flash-pump rate and that it can be reduced considerably by using a twin Nd:YAG laser or a double-pulse ruby laser. The CCD exposure was set to 30 ms. In full-frame mode, the CCD can perform acquisitions at 7 frames/s. The detector thus has a very low frame rate and a temporal resolution of only 1/7 s ≈ 143 ms; the two holograms generated by the two laser pulses could not have been recorded in two consecutive frames.
Fig. 2. Reconstructed spatio-temporally multiplexed holograms of the loudspeaker
Thus, the proposed method appears to be very well adapted to the current state of the art in detector technology; its limitation is due only to the laser source. The path switching was performed before the second laser pulse was fired. Figure 2 shows the multiplexed holograms of the loudspeaker.
Figure 3 shows the phase difference Δψ_S (modulo 2π) when the loudspeaker is in a static state. This term has such high spatial frequencies that the 2π phase jumps cannot be observed with the naked eye. Computing the phase bias term allows its modulo-2π subtraction from Δψ_S; this is shown in figure 4. After removal of the phase bias term, the phase change is uniform. This result is consistent with the fact that the loudspeaker is in a static state and thus does not produce any temporal phase change. The result shown in figure 4 therefore validates the measurement principle.
Fig. 3. Static phase difference Δψ_S
Fig. 4. Static phase difference after removal of the phase bias term
The loudspeaker was then excited sinusoidally at a frequency of 2855 Hz, so that the pulse delay corresponds to a phase shift of π/5 rad of the sinusoidal excitation. The illumination and recording parameters were the same as in the static state.
Figure 5 shows the wrapped phase change Δψ_D = Δϕ(t_1, t_2), extracted after removal of the phase bias term. The deformation of the loudspeaker membrane between times t_1 and t_2 can be clearly seen; it is indicated by the 2π phase jumps.
Fig. 5. Vibrating phase difference after removal of the phase bias term
Figure 6 shows the unwrapped version of the phase map in figure 5.
Fig. 6. Unwrapped phase map of figure 5
5 Conclusion
This paper has presented a new method for stationary or transient deformation analysis. The method requires compensation of a phase bias term in order to retrieve the phase change induced by the object deformation. The validation was performed successfully using a static state of the object, and the potential of the method was demonstrated through its application to a loudspeaker. This new technique could be applied in the future to full-field optical metrology of fast or very fast transient phenomena for which the frame rate of current solid-state detectors is too low.
6 References
1. Schnars, U, Jüptner, W (1994) Direct recording of holograms by a CCD target and numerical reconstruction. Applied Optics 33:179-181
2. Cuche, E, Bevilacqua, F, Depeursinge, C (1999) Digital holography for quantitative phase contrast imaging. Optics Letters 24:291-293
3. Picart, P, Moisson, E, Mounier, D (2003) Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms. Applied Optics 42:1947-1957
4. Doval, AF, Trillo, C, Cernadas, D, Dorrio, BV, Lopez, C, Fernandez, JL, Perez-Amor, M (2000) Measuring amplitude and phase of vibration with double exposure stroboscopic TV holography. In: Interferometry in Speckle Light - Theory and Applications, Jacquot, P & Fournier, JM (eds.), 25-28 September 2000, Lausanne, Switzerland, Springer (Berlin):281-288
5. Picart, P, Leval, J, Mounier, D, Gougeon, S (2005) Some opportunities for vibration analysis with time-averaging in digital Fresnel holography. Applied Optics 44:337-343
6. Moore, AJ, Duncan, DP, Barton, JS, Jones, JDC (1999) Transient deformation measurement with electronic speckle pattern interferometry and a high speed camera. Applied Optics 38:1159-1162
7. Farrant, DI, Kaufmann, GH, Petzing, JN, Tyrer, JR, Oreb, BF, Kerr, D (1998) Measurement of transient deformation with dual-pulse addition electronic speckle pattern interferometry. Applied Optics 37:7259-7267
8. Pedrini, G, Froning, PH, Fessler, H, Tiziani, HJ (1997) Transient vibration measurements using multi-pulse digital holography. Optics and Laser Technology 29:505-511
High resolution optical reconstruction of digital holograms Günther Wernicke, Matthias Dürr, Hartmut Gruber, Andreas Hermerschmidt*, Sven Krüger*, Andreas Langner Humboldt University Berlin, Institute of Physics Newtonstrasse 15, 12489 Berlin Germany * Holoeye Photonics AG Einsteinstrasse 14, 12489 Berlin, Germany
1 Introduction
Electro-optical effects in liquid-crystal displays make them suitable for the amplitude and phase modulation of coherent wave fronts, so that they can be used as programmable diffractive elements. The devices are addressed by a digital signal at video frame rate; they can therefore act as adaptive optical elements, digital-optical interfaces or digital-analog interfaces. In practical applications, the important parameters of these devices are the available phase shift, the light efficiency, and the space-bandwidth product. The improved parameters of recent LC displays make them usable for digital holography as well as for many other coherent optical applications. The requirements for such usage can be summarized as follows:
- Small (square) pixels, high pixel number, high dynamic range (8 bit to 12 bit)
- High fill factor, high transmission and reflectivity
- Uncoupled amplitude or phase modulation, high contrast, phase modulation above 2π
- No flicker and no cross talk, exact pixel-to-pixel addressing
- Flat panel surface (no wave-front distortions)
- Homogeneous thickness of the liquid-crystal layer
- Analog addressing (not time-sequential or field-sequential)
- High frame rate, short response times, standard signal sources
In the Laboratory for Coherence Optics at Humboldt University we have investigated the applicability of liquid-crystal displays to the optical reconstruction of digital holograms.
In the first part of this paper we give a short overview of the technology of liquid-crystal spatial light modulators. In part two we show the results of our investigations of the performance of the displays when illuminated with coherent light, and in part three we show their application to the optical reconstruction of digital holograms.
2 Spatial light modulators
Spatial light modulators (SLMs) have become very important components in optical systems. Besides the well-known shutter and display applications, the possibilities of phase modulation are more and more the subject of current research and development [1,2]. SLMs using two-dimensional arrays of phase-modulating pixels are the basis of many new system proposals in adaptive optics, image processing and optical switching. One challenge is their implementation in diffractive optics in order to realize high-resolution phase functions [3]. With increased resolution and performance they might even compete with micro-lithographically fabricated diffractive elements in some applications. Using the available phase modulation, SLMs can be used in a wide range of application fields such as laser beam splitting and beam shaping for projection and material-processing applications.
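As a simple example of such a programmable diffractive element, the sketch below computes an 8-bit blazed-grating pattern for beam steering on a phase SLM, assuming a linear 0 to 2π phase response over gray levels 0 to 255; the resolution, pixel pitch and wavelength are assumed values, not the specifications of any particular device:

```python
import numpy as np

W, H = 800, 600            # assumed SLM resolution (pixels)
period_px = 16             # grating period in pixels; sets the steering angle

x = np.arange(W)
phase = 2 * np.pi * (x % period_px) / period_px    # sawtooth, wrapped to 2*pi
gray = np.rint(phase / (2 * np.pi) * 255).astype(np.uint8)
pattern = np.tile(gray, (H, 1))                    # constant along y

# first-order diffraction angle (small-angle): theta = lam / (period_px * p)
lam, p = 532e-9, 32e-6     # assumed wavelength [m] and pixel pitch [m]
theta = lam / (period_px * p)
print(pattern.shape, theta)
```

Addressed to a phase display with the assumed linear response, this sawtooth concentrates the light into the first diffraction order; on a device whose maximum phase stroke falls short of 2π, part of the energy remains in the zero order.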
2.1 Micro displays: technologies and developments
Driven mainly by multimedia applications, displays with high resolution, efficiency and contrast have been developed. The increase in pixel number has been overcompensated by the decrease in pixel size, so that the total area of the displays has become smaller over recent years. However, these displays have not been developed for the purpose of realizing high-resolution phase-modulating spatial light modulators. Thus the phase modulation caused by the birefringence of the LC material, of about 2π at 532 nm, which we found rather early in liquid-crystal display devices such as the SONY LCX012BL series and used for the reconstruction of digital holograms [3, 4], can be considered a more or less accidental side effect. Since that time quite a few research groups have been looking for the right display type for coherent or diffractive applications, and there are now even institutes designing displays for special optical applications. The displays developed so far still lack important properties of a high-resolution two-dimensional phase modulator.
Hybrid Measurement Technologies
Video and data projectors, and now also rear-projection TVs, mostly use micro-displays such as LCDs, DMDs and LCoS displays of small pixel size and high resolution. The LCD technology turned from translucent TFT LCDs with pixel sizes down to 15 µm to reflective LCoS displays, which can realize pixels smaller than 10 µm with an enormous fill factor of >90 %. So especially the liquid-crystal-on-silicon technology is very promising to deliver displays with high resolution, small pixels, and high light efficiency. The development of smaller pixels for translucent displays has the drawback of a reduction of the fill factor (i.e. the optically usable fraction of the overall area). For example, the Sony SVGA display LCX016 with a 1.3 inch panel diagonal and a 32 µm pixel has a fill factor of about 85%. The 0.9 inch LCX029 XGA display with a pitch of 18 µm but a pixel size of only 12 µm has a fill factor of only around 45%. Moreover, the micro-lens arrays mounted on these displays are more or less distorting because of their influence on the optical wave front. 2.2 Implemented micro display devices
Based on translucent SONY LC displays, the spatial light modulators LC 1004 and LC 2002 were developed by Holoeye Photonics AG. Measurements of the phase modulation and a subsequent adaptation of the driver electronics led to the successful implementation of a dynamic phase-modulating system with an almost linear modulation and a maximum phase shift of π. Such a system is suitable for addressing highly efficient diffractive phase functions. Limiting parameters of the system are determined by physical boundary conditions such as pixel number and size, response time, transmission etc. Some of these limitations can be overcome by devices based on LCoS displays, which have a high optical fill factor, high resolution and very small pixels. Three different LCoS systems with various resolution
Fig. 1. Spatial light modulator LC-R 3000 with WUXGA resolution
and pixel size have been tested for their suitability for phase-modulating SLM systems. The MD800G6 Micromonitor is a high-resolution LCoS micro display with a 0.5 inch diagonal, 1.44 million dots and a pixel size of 12.55 µm x 12.55 µm. It has 800x600 SVGA resolution and a 91 % fill factor. A second system currently in use is an LCoS prototype with XGA resolution and 19 µm pixel size. Moreover, the Holoeye SLM LC-R 3000, based on a WUXGA-resolution micro display fabricated with the LCoS technology, was investigated. The display has a 0.85 inch diagonal and a 9.5 µm pixel pitch and can address the HDTV standard of 1920x1080 with a lower border (120 pixels high) for other data. The fill factor is >92 %. It has over 2.3 million pixels and can display up to 256 grey levels. This display device is shown in Fig. 1.
3 Automated system for the measurement of complex modulation The complex modulation properties are of high importance for the performance of SLM systems, so the dynamic modulation range has to be investigated carefully. Different kinds of measurement systems have been proposed and discussed in the literature [1-3]. For the automated measurement of the complex modulation of micro displays in reflection, we built a comparatively simple measuring system based on a double-slit experiment (Fig. 2).
Fig. 2. Experimental set-up for SLM characterization in reflection (laser, aperture, polarizer, mirrors, LCoS display illuminated at 1.5°, analyzer, CCD camera, PC)
Fig. 3. Experimental result: phase distribution dependent on the gray value (λ = 633 nm)
Fig. 4. Phase change for different wavelengths in the range 355 nm to 780 nm
The LCoS display is addressed with an image consisting of two areas of equal size but different graylevels. By changing one of the graylevels, the mutual phase delay of the two waves created by the two slits can be measured. The interference patterns are recorded with a CCD camera and averaged along the direction perpendicular to the optical table in order to reduce noise. Each interference pattern is thus reduced to a single pixel row in the merged image shown in Fig. 3, where the interference patterns for 256 different graylevels are assembled. In Fig. 3 one can see that for gray values of about 200 the fringe contrast nearly disappears. The reason for the low contrast is the simultaneous change of polarization and phase by the LC molecules. The relation between these quantities can be expressed by the geometrical phase [5]. The dynamical part of the overall phase is caused by differences in the optical path length of two beams. In contrast, the geometrical phase is independent of the dynamical progression of the beam. The theoretical basis for the geometrical phase was developed by Pancharatnam in the 1950s. It can be shown that the phase difference between two single beams is affected not only by the optical path length but also by the different polarization states that each beam passes through. For applications of the SLM system as a phase-modulating element it is not necessary to distinguish between geometrical and dynamical phase, because both parts simply add up to the overall phase [6]. A publication of these results is in preparation. From the shape of the individual interference patterns, which are proportional to cos(Λx + Φ), the phase difference Φ can be determined as a function of the addressed graylevel, as shown in Fig. 4. These results were
verified for cw- and femtosecond lasers and no significant differences were measured.
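The paper does not give the evaluation code; the following is a minimal sketch of how the phase Φ of a fringe row proportional to cos(Λx + Φ) could be extracted via the Fourier transform, demonstrated here on synthetic data (the fringe period and the 0.5π delay are assumptions for illustration):

```python
# Sketch (not the authors' code): estimating the phase Φ of a double-slit
# fringe row I(x) ∝ 1 + cos(Λx + Φ) from its Fourier transform.
import numpy as np

def fringe_phase(row):
    """Phase of the dominant spatial frequency in a 1-D fringe profile."""
    spectrum = np.fft.rfft(row - row.mean())
    k = np.argmax(np.abs(spectrum[1:])) + 1   # skip the DC bin
    return np.angle(spectrum[k])

# Synthetic test rows: the second gray level adds a 0.5π phase delay
x = np.arange(512)
ref = 1 + np.cos(2 * np.pi * x / 32)
shifted = 1 + np.cos(2 * np.pi * x / 32 + 0.5 * np.pi)
delta = fringe_phase(shifted) - fringe_phase(ref)
print(f"measured phase delay: {delta / np.pi:.3f} π")
```

In practice the phase of each of the 256 graylevel rows would be referenced to the fixed-graylevel area in the same way.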
4 Application in the optical reconstruction of digital holograms For the optical reconstruction of digital holograms we have set up the experiment shown in Fig. 5. The holograms were recorded with a 12-bit CCD camera (Kappa DX2N, 1382x1032 pixels, pixel size 4.65 µm x 4.65 µm). We used a frequency-doubled Adlas Nd:YAG laser with an output power of 140 mW. For the reconstruction, the LCoS LC-R 3000 was used, with a gamma curve linearized on the basis of the measurements described above. This procedure is comparable to the well-known procedure of bleaching in conventional holography.
Fig. 5. Scheme for the recording and reconstruction of digital holograms
The hologram recorded by the CCD camera is used as the video signal addressed to the LCoS display. The display is illuminated with the laser and a reconstruction of the hologram is obtained. With an additional function of the software used, deformations and displacements of the object are detected. The optical wave fields are subtracted without any intermediate change in the system. This makes it possible to store one hologram and to subtract a second one, or to subtract a live stream of the camera, simulating
the double exposure and the real-time technique of conventional holography, respectively. By a change in the geometry of the object, e.g. due to displacements, interference fringes become visible. Additionally, it is possible to add diffractive distributions to the hologram, realizing an additional lens for focusing the hologram reconstruction or a prism phase to shift the image away from the zeroth order of the reconstruction. The deformation of a thermoelectric device is shown as an example in Fig. 6.
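The subtraction of wave fields can be sketched numerically. The following is an illustrative simulation, not the authors' software: two holograms are subtracted and the difference is propagated with a single-FFT Fresnel transform (the propagation distance and the random stand-in holograms are assumptions):

```python
# Sketch (assumed geometry, not the authors' software): simulating the
# double-exposure technique by subtracting two digital holograms and then
# performing a single Fresnel reconstruction of the difference.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pixel, distance):
    """Single-FFT Fresnel transform of a hologram (plane-wave illumination)."""
    n = hologram.shape[0]
    x = (np.arange(n) - n / 2) * pixel
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * np.pi / (wavelength * distance) * (X**2 + Y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

# Two holograms of the same object before/after deformation; subtracting
# them suppresses the static terms and leaves the interference of the two
# object states (cf. double exposure in conventional holography).
h_before = np.random.default_rng(0).random((256, 256))
h_after = np.roll(h_before, 1, axis=0)          # stand-in for a deformed state
diff = fresnel_reconstruct(h_after - h_before, 532e-9, 4.65e-6, 0.3)
print(diff.shape)
```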
Fig. 6. Deformation of a thermoelectric device
Fig. 7. Optical reconstruction of a microscopic digital hologram of a USAF resolution target
The spatial resolution of the optical reconstruction is in the µm range. An optical reconstruction of a microscopic hologram of a resolution target, recorded as a digital hologram by D. Carl [7], is shown in Fig. 7.
5 Conclusion High resolution spatial light modulators provide a new technology option for adaptive optics, which can be used for the realization of dynamic diffractive devices. In this work, we have used the possibilities of modern SLM technology for the optical reconstruction of digital holograms. By this method it is possible to reconstruct an optical wave field and to manipulate it practically in real time. Further investigations will evaluate applications in the field of optical metrology.
6 Acknowledgements The financial support of the German Ministry for Science and Technology under Grants Nr. 13N8096 (Humboldt University Berlin) and 13N8097 (HoloEye Photonics) is gratefully acknowledged.
7 References 1. D. A. Gregory, J. A. Loudin, J. C. Kirsch, E. C. Tam, F. T. S. Yu, 1991 Use of the hybrid modulating properties of liquid crystal television. Appl. Opt. 30, 1374-1378 2. K. Ohkubo, J. Ohtsubo, 1993 Evaluation of LCTV as spatial light modulator. Opt. Comm. 102, 116-124 3. G. Wernicke, S. Krüger, J. Kamps, H. Gruber, N. Demoli, M. Dürr, S. Teiwes, 2004 Application of a liquid crystal spatial light modulator system as dynamic diffractive element and in optical image processing. J. Optical Communications 25, 141-148 4. G. Wernicke, S. Krüger, H. Gruber, 2000 New challenges for spatial light modulator systems. New Prospects of Holography and 3D Metrology - International Berlin Workshop, Strahltechnik-Bremen, Vol. 14, 27-28 5. P. Hariharan, H. Ramachandran, K. A. Suresh, J. Samuel, 1997 The Pancharatnam phase as a strictly geometric phase: A demonstration using pure projections. J. Mod. Optics 44, 707-713 6. A. Langner, 2004 Untersuchungen an reflektiven Flüssigkristall-Lichtmodulatoren (Investigations of reflective liquid crystal light modulators). Diploma Thesis, Humboldt University Berlin 7. D. Carl, B. Kemper, G. Wernicke, G. von Bally, 2004 Parameter-optimized digital holographic microscope for high-resolution living-cell analysis. Applied Optics 43 (36), 6536-6544
Application of Interferometry and Electronic Speckle Pattern Interferometry (ESPI) for Measurements on MEMS J. Engelsberger, E.-H. Nösekabel, M. Steinbichler Steinbichler Optotechnik GmbH, Am Bauhof 4, D-83115 Neubeuern, Germany
1 Introduction For many years, interferometric measurement methods have been applied in research and industry for the investigation of the deformation and vibration behaviour of mechanical components. Besides classical interferometers, electronic speckle pattern interferometers have been introduced to a wide range of applications in many industrial branches. Recently, these techniques have also been introduced to measure the deformations of MEMS, especially under vibration conditions. In the following, some measurements carried out with a modified Michelson interferometer and some measurements carried out with a continuous-wave 3D-ESPI interferometer will be described.
2 Measurements with a Michelson Interferometer For flat and highly reflecting objects like mirrors and micro-mirrors, deformation measurements can be carried out directly with a Michelson interferometer. Usually, such an interferometer is equipped with a CCD camera connected to a fringe processing system. In order to obtain quantitative deformation results with high accuracy, temporal phase shifting is usually applied. Using temporal phase shifting, measurement resolutions of about λ/50 can be obtained. A common phase shifting device consists of a piezo crystal and its computer-controlled driver, mounted to the reference mirror of the Michelson interferometer.
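Temporal phase shifting can be illustrated with the standard four-step algorithm (the paper does not state which variant was used; this is a generic sketch with an assumed test phase):

```python
# A minimal sketch of four-step temporal phase shifting: four frames
# recorded with reference-mirror shifts of 0, π/2, π and 3π/2 determine
# the interferogram phase via an arctangent.
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Phase from four interferograms shifted by 0, π/2, π, 3π/2."""
    return np.arctan2(i3 - i1, i0 - i2)

phi_true = 1.2                                   # assumed test phase (rad)
frames = [1 + np.cos(phi_true + k * np.pi / 2) for k in range(4)]
print(four_step_phase(*frames))   # recovers phi_true
```

Applied pixel-wise to camera frames, this yields the wrapped phase map that is then unwrapped and scaled to deformation.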
Fig. 1. Fringe pattern of a deformed micro mechanical actuator and quantified result presented as pseudo-3D plot
The deformation of a micro mechanical actuator structure as a typical result obtained by such an interferometer is shown in figure 1. The size of the mirror plate is about 2 mm x 2 mm with a thickness of about 30 µm. The torsion springs measure 2 mm x 30 µm x 30 µm. In figure 1, the left image shows the obtained fringe pattern, the right image a pseudo-3D plot of the quantified deformation.
3 Measurements with a 1D-ESPI Interferometer In most cases, ESPI systems use continuous laser light to illuminate the investigated structure. The output beam of the laser is separated into two beams, the object beam and the reference beam. The object beam travels through a lens to the object and illuminates it. Due to the roughness and shape of the object, the backscattered object light no longer has a homogeneous wave front, but is modulated in a way that is characteristic of the object, showing a granular pattern of bright and dark spots, the “speckle pattern”. The reference beam is guided by fibre optics directly to the recording camera. The backscattered light from the object travels to a CCD camera, where it is superposed with the reference beam, with which it creates an interference pattern. For a deformation measurement, the interference pattern coming from the unloaded object describes the “reference state” of the object. When the object is deformed by applying a load to the structure, the light coming from each object point changes its phase, depending on the amount of deformation, whereas the reference beam remains constant.
Fig. 2. Scheme of the basic set-up of a 1D-ESPI interferometer
This fact modifies the interference pattern on the CCD target for all object points which have moved. Again, this interference pattern, which describes the “deformed state” of the object, is recorded. The difference of the two recorded interference patterns describes the shape difference between the loaded and unloaded state in the direction of the sensitivity vector. The basic principle of an electronic speckle pattern interferometer is shown in figure 2. If measurements of MEMS are to be carried out with ESPI, the reflecting surfaces have to be treated in such a way that they scatter back the illuminating light. For measurements of sinusoidal vibration, it is advantageous to introduce a stroboscopic illumination in such a way that the duty cycle of the illumination can be shifted in phase with respect to the excitation phase. By setting the duty cycle of the illumination so that the vibrating object is illuminated during its amplitude maximum, the vibration can be treated like a static deformation.
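The formation of correlation fringes by subtracting the reference-state and deformed-state interferograms can be sketched on synthetic data (the speckle statistics and the tilt-like deformation below are assumptions for illustration, not a real measurement):

```python
# Sketch (synthetic data): ESPI correlation fringes from subtracting
# speckle interferograms of the reference and deformed object states.
import numpy as np

rng = np.random.default_rng(1)
n = 256
speckle_phase = rng.uniform(0, 2 * np.pi, (n, n))       # random object phase
deform_phase = np.tile(np.linspace(0, 6 * np.pi, n), (n, 1))  # tilt-like load

i_ref = 1 + np.cos(speckle_phase)                       # reference state
i_def = 1 + np.cos(speckle_phase + deform_phase)        # deformed state
fringes = np.abs(i_def - i_ref)   # dark fringes where deform_phase = 2πk
print(fringes.shape)
```

The subtraction cancels the static speckle terms, so dark fringes appear wherever the deformation phase is a multiple of 2π.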
Fig. 3. Deformation of a micro-mechanic mirror vibrating at 200 Hz, evaluated results
Figure 3 shows the quantified deformation of a micro-mechanical mirror vibrating at 200 Hz. The mirror has a size of 10 mm x 6 mm and a thickness of approx. 50 µm. For measurements of eigenfrequencies it is interesting to know, in addition to the frequency and the amplitude, the vibration phase with respect to the excitation. This vibration phase can easily be determined with the light modulator if it is synchronized with the excitation: If the excitation energy of an object vibrating at a natural frequency is kept constant and the duty cycle of the modulator is set to a small value, it is sufficient to shift the duty cycle of the light modulator with respect to the excitation until the obtained vibration amplitude reaches its maximum. The phase difference between the excitation and the illumination then indicates the vibration phase. Figure 4 shows the result of such a measurement: All mirrors of a micro mirror array have been excited at their respective eigenfrequencies, and for each of the mirrors, the phase difference with respect to the excitation has been determined.
4 Measurements with a 3D-ESPI Interferometer It is often interesting to learn about the vibration behaviour of MEMS in all three directions in space. In this case, a 3D-ESPI system can be applied. Although different techniques for the measurement of the 3-dimensional vibration behaviour exist, the easiest set-up uses three illumination directions and one observation direction.
Fig. 4. Vibration phases of independent mirrors of a micro mirror array. Each mirror is vibrating at one of its natural frequencies
Fig. 5. Scheme of the basic set-up of a 3D-ESPI interferometer
Consecutive speckle pattern acquisition using all illumination directions leads to three reference-state speckle patterns. If the system is equipped with a light modulator, these measurements can also be carried out with sinusoidally vibrating objects. The acquisition procedure is repeated after the object has been loaded to obtain the deformed-state speckle patterns. Now the deformation phase maps can be calculated for each of the illumination directions. As the sensitivity vectors for the three measurements can be derived from the geometrical set-up of the system, it is easy to transform the intermediate phase images into deformation maps representing the deformations along the three axes of a Cartesian coordinate system. The scheme of such a set-up is shown in figure 5, the applied system in figure 6.
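The transformation from three phase maps to Cartesian deformations amounts to solving a small linear system per pixel, Δφ_i = (2π/λ)·s_i·d, where s_i is the sensitivity vector of illumination direction i. A sketch with hypothetical numbers (the matrix S below is an assumed geometry, not that of the actual instrument):

```python
# Sketch (assumed geometry): converting three measured phase values into
# Cartesian deformation components d = (dx, dy, dz).
import numpy as np

wavelength = 532e-9
# Hypothetical sensitivity vectors for three illumination directions and
# one observation direction (rows of S); each row s_i satisfies
# Δφ_i = (2π/λ) · s_i · d.
S = np.array([[0.3, 0.0, 1.9],
              [-0.15, 0.26, 1.9],
              [-0.15, -0.26, 1.9]])

def deformation(phases):
    """Solve S · d = (λ/2π) · Δφ for the 3-D displacement d."""
    return np.linalg.solve(S, np.asarray(phases) * wavelength / (2 * np.pi))

d_true = np.array([50e-9, -20e-9, 120e-9])       # a test displacement
phases = 2 * np.pi / wavelength * S @ d_true
print(deformation(phases))   # recovers d_true
```

In a real system this solve is applied pixel-wise to the three unwrapped phase maps.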
Fig. 6. The 3D-ESPI interferometer applied for the measurements
With such a set-up, it is possible to measure the resonant mode shapes, e.g. of silicon resonators, not only in the out-of-plane direction, but also torsional modes, in-plane rotations and in-plane translations, as shown in figures 7 and 8.
Fig. 7. The silicon resonator investigated with 3D-ESPI
Fig. 8. 299 Hz out-of-plane torsion (left), 371 Hz in-plane rotation (middle) and 773 Hz in-plane translation (right) of the silicon resonator
5 Acknowledgments We would like to thank the Technische Universität Chemnitz, Prof. Dr. Gessner, Dr. Markert and Mr. Kurth, for their kind contribution of test results.
Full-field, real-time, optical metrology for structural integrity diagnostics Jim Trolinger, Vladimir Markov, Jim Kilpatrick MetroLaser, Inc., 2572 White Road, Irvine, CA 92614, USA
1 Introduction In this paper, we explore the relationship between two optical diagnostics techniques: Electronic Digital Holography (EDH) and Laser Doppler Vibrometry (LDV), two methods that can be used to analyze the relative micro-movements of points on surfaces. Both methods produce phase and phase difference maps of lightwaves reflected from a surface, which can be related to the makeup and condition of the underlying structures, as well as the integrity of the entire structure. We have used both extensively to locate and identify defects and to assess structural health. The interpretation of LDV as a real time variation of EDH can offer insights into new ways to improve and deploy the methods by carrying over signal and image processing procedures that have been developed separately for the two. Other investigators have made similar observations [1-3]. A primary motive for our research is to produce advanced sensors for space exploration applications. Such sensors can fulfill a need to equip robots with a wide range of advanced diagnostics capability. We show how robots equipped with EDH sensors could provide investigators with a presence in remote space environments to assess the characteristics and health, and provide microscopic examination, of both manmade and natural objects and structures.
2 Relating EDH and LDV Digital holography. Digital holography is the division of coherent optics that deals with recording, storing, reconstructing, processing, and synthesizing wavefronts by using discrete values of information distributed over discrete points in space and time. Historical and geographical variations in terminology have led to a wide range of names that can be encompassed under the digital holography umbrella, including electronic speckle pattern interferometry (ESPI), TV holography, electro-optical holography, electronic holography, and LDV. Within a few years of the invention of holography, investigators had begun to examine the possibility of digital holography [4]. For many years, available computers fell far short of being able to compete with analogue methods, and the spatial resolution of electronic detectors was far below that needed to compete with photographic materials. With new megapixel detector arrays; fast, low-cost computers; and new signal processing algorithms, this is changing fast, and electronic digital holography is replacing analogue methods in many applications.
Fig. 1. Recording and reconstructing wavefronts with digital holography. The hologram can be a set of ones and zeros in a computer memory or it can be a specific configuration of an optical device.
EDH is more often used as a tool in applications where the information is processed, analyzed, quantified, and compared, with the goal of viewing the results of the analysis as opposed to the raw wavefront itself. The information can be manipulated entirely inside the computer to do such things as retrieve and manipulate the object wavefront or add and subtract other wavefronts (interferometry). The ability to subtract two wavefronts is unique to electronic holography, since with optical means alone, they can only be added. This fact has been exploited in EDH beneficially and may also have application in LDV. Digital holography comes with all the advantages and disadvantages of digital information processing and handling. In many ways, a digital hologram can be thought of as an “ideal” hologram, because it provides a direct, real time link between optical wavefront information and computers, allowing one to access efficiently the power of both optics and electronics to record, process, and even create optical information. Available framing
detector arrays are about one centimeter square and contain about 2000 sensors on a side, and the associated spatial resolution is at least 10 to 20 times coarser than even common photographic materials used in holography. So, compared to analogue holograms recorded on photographic materials, the equivalent size of a typical digital hologram is about one millimeter in diameter in terms of information capacity. This must be offset somehow by all the other benefits of digital recording. Much of the work, and many of the opportunities for advancement, lies in producing more efficient algorithms and sampling methods that overcome the spatial resolution limits of the recording devices and compress and use the vast amounts of data that can be generated. Laser Doppler vibrometry. In the simplest form of LDV, a beam of laser light is focused onto a surface, and the scattered light from the surface, which is Doppler shifted by surface movement, is collected and mixed with a reference wave, producing a beat or heterodyne signal, which is proportional to the frequency difference of the two waves. Demodulating this signal retrieves the Doppler frequency, which is proportional to the surface velocity at the illumination point (with geometry factors) and which is also proportional to the phase change rate of the scattered light wave. Integrating this signal provides a measure of surface displacement, which can quite easily be achieved with nanometer resolution. Likewise, this integral provides the phase of the light wave emerging from the surface. Therefore, such an LDV can be thought of as a tiny, continuously varying hologram of a point on a surface, actually an electronic, analogue hologram at this stage. The phase of the scattered wave can be determined with great precision, since the availability of temporal information allows a precise measurement of the relative phase between the object and reference waves (this is also known as heterodyne interferometry).
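The demodulation chain just described (beat signal, instantaneous phase, displacement) can be sketched as follows; the carrier frequency, sampling rate and 50 nm vibration are assumed values, chosen so that the synthetic signal is periodic over the record:

```python
# Sketch (simplified baseband model, not a real instrument): demodulating
# an LDV heterodyne signal. The instantaneous phase of the beat signal
# tracks the optical path change; unwrapping and removing the carrier
# yields surface displacement.
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    h[n // 2] = 1
    return np.fft.ifft(X * h)

fs, f_carrier, wavelength = 1.024e6, 40e3, 633e-9    # assumed parameters
t = np.arange(4096) / fs
displacement = 50e-9 * np.sin(2 * np.pi * 500 * t)   # 50 nm vibration
phase = 4 * np.pi / wavelength * displacement        # round-trip phase
beat = np.cos(2 * np.pi * f_carrier * t + phase)     # heterodyne signal

inst_phase = np.unwrap(np.angle(analytic_signal(beat)))
recovered = (inst_phase - 2 * np.pi * f_carrier * t) * wavelength / (4 * np.pi)
print(f"peak displacement = {np.abs(recovered).max() * 1e9:.1f} nm")
```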
To produce such a hologram of a line or a finite area on a surface requires going digital, since continuous electronic detectors do not exist. A line or area on a surface can be imaged onto a linear or area detector array, where it is also mixed with a reference wave that floods the detectors, providing the temporally resolved phase of the wave at each imaged point. In practice, the lines, areas, and even the reference wave are digitized for efficiency, so beams are focused to points on a line or matrix. An example of such an LDV system is shown in Figure 2. The outgoing laser beam is split into an array of beams by a holographic optical element (HOE), and these are focused onto a surface. Similarly, a HOE is used to combine the return signal with individual reference waves. These systems provide the amplitude and phase of the scattered signal from each illuminated point on the surface. Therefore the resulting data can be described as a real-time, digital hologram of a line or area of the surface, depending upon the detector array used. To capture the precise movement of the whole surface requires that the points be positioned sufficiently close to each other to resolve the spatial variations or modes.
Fig. 2. Linear array (multipoint) laser Doppler vibrometer
Optical scanning holography. A related holography method, in which holograms of a surface are produced point by point, sequentially, by scanning the object beam over the surface has been demonstrated by Poon in a procedure described as optical scanning holography [5]. In this procedure, an object is scanned by a point source and the reflected light is mixed with a reference wave at a single detector. The small detector size is not a limiting factor, although the time for scanning does lead to limitations. It would seem beneficial to deploy this method with multiple beams and a detector array to reduce the recording time.
3 Digital hologram recording devices and limitations Framing arrays like CCD and CMOS require a certain amount of time, typically tens of microseconds, to produce a storable, few mega pixel hologram and the most commonly affordable systems can record up to about 30 holograms per second, although very high speed systems are available at much higher costs with reduced numbers of pixels. Sensors are typically sized between 5 and 10 microns in diameter and a mega pixel ar-
498
Hybrid Measurement Technologies
ray is therefore between 5 and 10 millimeters in diameter. Two challenges with these detectors are to deal with the relatively small size of the equivalent hologram and to efficiently handle and exploit the vast amount of data that can be produced in a short period of time. Real time detector arrays, in which each detector continuously reads and transmits temporal data, can, in principle, record holograms continuously and in near real time and are even more limited presently by available sensor size and number. Available real time arrays are limited to less than 10,000 detectors being between 50 and 100 microns in diameter. Clearly this resolution limits the application to very special cases where the angle between the object and reference waves is less than one degree, and phase changes over the detected area are small. Even so, some of the benefits of such recording could lead to remarkable new applications. For example, this kind of phase detection enables heterodyne detection that can provide nanometer-scale, optical path length sensitivity, at least an order of magnitude better than is possible with static interferograms. The challenges here are in overcoming the spatial resolution limitation of such detectors and devising ways to transfer and store the vast amount of data that would be produced. One group working on such a sensor has approached this problem by employing on-chip processing, so the hologram is formed, analyzed, and used to extract and compress the specific useful data all on the same chip [6,7]. To be useful, holograms must record fringe information with a sufficient spatial resolution to define the details of interest in the object wavefront. To make the digital hologram, this means that the interferogram of the object and reference wave must be sampled sufficiently to capture every fringe whose information content is needed in the wavefront of interest that is being stored. 
For a static recording, classical sampling methods would require that at least two individual detectors lie on each fringe needed to store the object wave of interest. Fringe spacing is given by Λ ≈ λ/α, where α is the angle between the object and reference wave and λ is the wavelength. For practical cases with typical detector sizes of 5 to 10 microns in megapixel arrays, this limits the allowed angle between the object and reference wave to a few degrees. Methods exist and are continuously under development to circumvent such limitations, and classical methods may not be the optimal way to sample [8,9]. Scanning the object, scanning the detector [10], super resolution, and magnifying the interference fringe pattern to produce arrays of holograms that must be stitched together properly [11] are a few possibilities.
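A quick back-of-the-envelope check of this sampling limit (illustrative numbers; λ = 532 nm is an assumed wavelength):

```python
# Illustrative check: from fringe spacing Λ ≈ λ/α and the rule that at
# least two detectors must sample each fringe, the maximum beam angle is
# α_max ≈ λ / (2 · pixel size).
import math

def max_angle_deg(wavelength_m, pixel_m):
    """Largest beam angle whose fringes are still sampled twice per period."""
    return math.degrees(wavelength_m / (2 * pixel_m))

for pixel in (5e-6, 10e-6):
    print(f"{pixel * 1e6:.0f} µm pixels: α_max = {max_angle_deg(532e-9, pixel):.1f}°")
```

For 5 to 10 µm pixels this gives roughly 1.5° to 3°, consistent with the "few degrees" stated above.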
As an illustration of scanning to extract additional information, consider the specific case of a detector whose diameter, D, is larger than the period, P, of the fringes that we wish to resolve, I(x) = 1 + cos(2πx/P).
Fig. 3. Sampling a fringe pattern by scanning with a detector that is larger than a fringe
We consider a sinusoidal fringe pattern with period P and scan the pixel of diameter D across the fringe pattern in a direction normal to the fringes. At any position x, the energy collected by the pixel is given by the following integral:
E = ∫ from x−D/2 to x+D/2 of [1 + cos(2πx′/P)] dx′ = D + (P/π) sin(πD/P) cos(2πx/P)
Note that by scanning the detector, we have recovered the cosine wave, even though the pixel itself may be much larger than the cosine period. This is because the information is provided by the edges of the pixel as it is scanned, at the expense of some loss in contrast. The highest contrast occurs when the pixel diameter obeys D = qP/2, where q = 1, 3, 5, … Then:
E = D[1 ± (2/qπ) cos(2πx/P)]
The signal contrast is reduced directly by q, and for even values of q the contrast is zero and the information is lost. Consequently, totally recovering all frequencies would require that the effective pixel size D also be varied. This simple example primarily illustrates that information can be extracted by scanning, since, of course, it assumes that the variation is
in one dimension, limiting its use in general. Any of these methods come with the penalty of added information processing. For many applications, a change in the phase map is the quantity of greatest interest, not the absolute wavefronts themselves. Instead of reconstructing two wavefronts and subtracting one from the other, it is possible to reconstruct the difference only, eliminating two operations in the process. In addition, the subtraction procedures enable the removal of constant terms and the cancellation of noise and system-optics effects. A unique phase difference algorithm was developed to allow wavefronts to be compared efficiently and accurately [12]; it also helps with noise cancellation and management. By solving for phase differences directly, the number of computer operations can be cut in half. Some of the greatest features of holographic interferometry are even easier to implement electronically and offer additional advantages.
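The scanned-pixel derivation above can be checked numerically: this sketch sweeps a pixel of width D = qP/2 across the fringe pattern and compares the measured contrast with the predicted value 2/(qπ) (all parameters below are illustrative choices):

```python
# Numerical check (illustrative) of the scanned-pixel result: sweeping a
# pixel of width D across I(x) = 1 + cos(2πx/P) recovers a cosine with
# contrast 2/(qπ) when D = qP/2, q odd.
import numpy as np

P, q = 1.0, 3
D = q * P / 2

def pixel_energy(x, samples=20001):
    """Integrate the fringe intensity over a pixel of width D centred at x."""
    xs = np.linspace(x - D / 2, x + D / 2, samples)
    return D * np.mean(1 + np.cos(2 * np.pi * xs / P))

positions = np.linspace(0, P, 101)
e = np.array([pixel_energy(x) for x in positions])
contrast = (e.max() - e.min()) / (e.max() + e.min())
print(f"measured contrast {contrast:.4f}, predicted {2 / (q * np.pi):.4f}")
```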
4 Four Applications Landmine Detection. Sabatier [13,14] and his group have shown that LDV with acoustic-to-seismic coupling-based technology is a viable tool for locating landmines, especially those that are difficult to sense with other methods, i.e., those made of non-magnetic, non-metallic materials. The ground is vibrated acoustically and the LDV detects anomalies in the surface velocity immediately above the landmine, because the landmine behaves differently than soil in the overall mechanical system. A major remaining problem is to improve coverage and detection speed. This is a motivation for employing multiple-beam systems, arrays, and digital holography. We have applied and compared all of these methods. In one field test, the multi-beam LDV [15,16] simultaneously measured the vibration of the ground at 16 points spread over a 1-meter line. It was used in two modes of operation: stop-and-stare and continuously scanning beams. The noise floor of measurements in the continuously scanning mode increased with increasing scanning speed. This increase in the velocity noise floor is caused by dynamic speckles. Either airborne sound or mechanical shakers can be used as a source to excite vibration of the ground. A specially designed loudspeaker array and mechanical shakers were used in the frequency range from 85-2000 Hz to excite vibrations in the ground and elicit resonances in the mine. Field experiments show that buried landmines can be detected within one square meter in several seconds using a system in a continuously scanning mode, with loudspeakers or shakers as the excitation source. Development of a new demodulation technique that can provide a lower noise floor in continuously scanning mode is a high-priority task for future work. This system is driven over an area to produce a scan of a surface. In some ways, these recordings are analogous to the synthetic aperture radar images that led Leith to employ holographic methods in his original work on holography. Non-Destructive Inspection of Space Shuttle Tiles. We have applied and compared multiple-beam LDV and digital holography to analyze the structural integrity of space shuttle tiles, which had been purposely pre-programmed with typical defects. Defect detection is achieved by performing a vibrational spectral analysis on the surface as different frequencies are used to excite the test piece. At characteristic frequencies, the defects become apparent. In these tests, all of the programmed defects were located and classified. LDV systems are more sensitive, by at least an order of magnitude, than those that employ framing types of digital holography, i.e., CCD cameras. This is because the temporal information is used to improve the phase resolution and displacement sensitivity. The obvious disadvantage is the lower spatial resolution. Vibrational Analysis. In a typical application, a digital holocamera and an LDV were used to analyse the vibrational characteristics of an aircraft component. LDVs similar to the one shown above have been used extensively for this purpose. The digital holocamera that we use employs a frequency-doubled YAG laser to illuminate the object. In this case, digital holograms are formed on a 2000 x 2000 CCD array. Wavefronts are reconstructed through the use of an instantaneous phase-shifting interferometry procedure that was reported elsewhere [17], wavefront subtractions are done in the computer, and phase difference maps are displayed.
The system determines instantaneously (within one vibrational cycle) the displacement difference between the two extreme positions of the blade as it vibrates. These recordings can be produced in the existing system at a rate of up to about 20 frames per second, limited by the CCD camera. Commercially available cameras can now support such measurements at kilohertz rates.
Virtual Laboratories - The remote window concept. Digital holograms can be thought of as windows into other spaces and times [18]. If a hologram is produced in real time and the information is transmitted in near real time to another location where a second hologram like the original is synthesized, this second hologram is the optical equivalent of a window into the space where the original hologram is being recorded. Far more than closed-circuit TV, this provides a physical window that looks into someplace else, in a different time and place. The concept, taken to its fullest, has the ultimate promise of providing scientists with “eyes” in space. A robot equipped with “holographic windows or eyes” could provide a presence in space without the scientist actually being there. For example, a scientist looking into a holographic window on earth could see the true surface of Mars, 150 million miles away, just millimeters beyond the window. An experimental chamber of this type, originally developed for deployment on the International Space Station under the NASA SHIVA [19] (Spaceflight Holography Investigation in a Virtual Apparatus) program, is now being examined for its potential application as a planetary instrument. All of the necessary concepts were tested and validated in the SHIVA project [20]. A holocamera of this type was developed in the early seventies by Hughes Research Corporation for deployment on the moon. Difficulties and hardware limitations limited its potential, and it was never deployed. With the improvements in technology since that time, such applications should be reconsidered.
5 Concept for a Holographic Eye for a Robot Figures 4 and 5 illustrate the concept of a holographic eye for a robot. The eye projects its own illumination to the target, then senses the full 3-D movement of the surface. A user who is located in a remote station can “see” with great precision the full dynamics of the structure in 3-D as fast as the data can be transmitted to the receiver station. This capability can be especially useful in space exploration where it is critical that sensors provide the most complete information to the investigator.
Fig. 4. Block Diagram for a robotic digital holographic sensor
Fig. 5. Conceptual application of a digital holographic sensor to robotics, i.e., a “holographic robot eye”.
6 Conclusions We have shown how laser Doppler velocimetry can be considered as a special case of digital holography. When used in a matrix form, LDV can provide an extremely precise full 3-D dynamic analysis of a structure, and can provide the user with a presence in a remote location, leading to the capability to make precision measurements of structures and structural changes. Such measurement capability can provide information about the structural characteristics and health of either manmade objects, such as the robot itself, or of natural objects, such as geological formations. By recognizing that multipoint LDV is a special case of digital holography, we can exploit techniques that have been developed for either method for enhancing the other.
7 References
1. Kilpatrick JM, Moore AJ, Barton JS, Jones JDC, Reeves M, Buckberry C (2000) Measurement of complex surface deformation by high-speed temporal phase-stepped digital speckle pattern interferometry. Opt Lett 25:1068-1070
2. Ruiz PD, Huntley JM, Shen Y, Coggrave CR, Kaufmann GH (2001) Low-frequency vibration measurement with high-speed phase-shifting speckle pattern interferometry and temporal phase unwrapping. In: Jüptner W, Osten W (eds) Fringe 2001. Elsevier, Paris, pp 247-252
3. Kauffmann J, Tiziani HJ (2002) Temporal speckle pattern interferometry for vibration measurement. Proc SPIE 4827:133-136
4. Huang TS, Prasada B (1966) Considerations on the generation and processing of holograms by digital computers. MIT/RLE Quart Prog Rep 81:199-205
5. Poon TC (2004) Recent progress in optical scanning holography. J Holography and Speckle 1:6-25
6. Pui BH, Hayes-Gill BR, Clark M, Somekh M, See CW, Pieri JF, Morgan S, Ng A (2001) Optical VLSI processor fabricated via a standard CMOS process. Proc SPIE 4408:73-80
7. Pui BH, Hayes-Gill B, Clark M, Somekh MG, See CW, Pieri JF, Morgan SP, Ng A (2002) The design and characterisation of an optical VLSI processor for real time centroid detection. Analog Integrated Circuits and Signal Processing 32:67-75
8. Onural L, Ozaktas H (2005) Diffraction and holography from a signal processing perspective. In: Proc Int Conf on Holography, Optical Recording, and Processing of Information, Varna, Bulgaria
9. Onural L (2000) Sampling of the diffraction field. Applied Optics 39:5929-5935
10. Trolinger J, Markov V, Khizhnyak A (2005) Applications, challenges, and approaches for electronic, digital holography. In: Proc Int Conf on Holography, Optical Recording, and Processing of Information, Varna, Bulgaria
11. Gyimesi F, Borbely V, Raczkevi B, Fuzessy Z (2004) Speckle-based photo stitching in holographic interferometry for measuring range extension. J Holography and Speckle 1:39-45
12. Vikram CS, Witherow WK, Trolinger JD (1993) Algorithm for phase-difference measurement in phase-shifting interferometry. Applied Optics 32:6250-6252
13. Sabatier JM, Xiang N (1999) Laser-Doppler based acoustic-to-seismic detection of buried mines. Proc SPIE 3710:215-222
14. Xiang N, Sabatier JM (2000) Land mine detection measurements using acoustic-to-seismic coupling. Proc SPIE 4038:645-655
15. Lal AK, Zhang H, Aranchuk V, Hurtado E, Hess CF, Burgett RD, Sabatier JM (2003) Multi-beam LDV system for buried landmine detection. Proc SPIE 5089:579-590
16. Aranchuk S, Sabatier JM, Lal AK, Hess CF, Burgett RD, O’Neill M (2005) Multi-beam laser Doppler vibrometry for acoustic landmine detection using airborne and mechanically-coupled vibration. Proc SPIE 5794:624-632
17. Brock NJ, Millerd JE, Trolinger JD (1999) A simple real-time interferometer for quantitative flow visualization. AIAA Paper 99-0770, 37th Aerospace Sciences Meeting, Reno, NV
18. Trolinger JD, L’Esperance D (2004) Digital holography in real and virtual research - a window into remote places. SPIE Holography Newsletter, November
19. Trolinger JD, L'Esperance D, Rangel RH, Coimbra CFM, Witherow WK (2004) Design and preparation of a particle dynamics space flight experiment, SHIVA. Annals of the New York Academy of Sciences 1027:550-566
20. L’Esperance D, Coimbra CFM, Trolinger JD, Rangel RH (2005) Experimental verification of fractional history effects on the viscous dynamics of small spherical particles. Experiments in Fluids
Invited Paper
3D Micro Technology: Challenges for Optical Metrology
Theodore Doll, Peter Detemple, Stefan Kunz, Thomas Klotzbücher Institut für Mikrotechnik Mainz GmbH Carl-Zeiss Str. 18-20, 55129 Mainz Germany
1 Introduction
Production control and specification verification are two major demands for the successful industrial use of micro systems and have thus been brought into the focus of several research calls by the CEC [1] and other national authorities. The reason is that the tolerances typical of micro systems do not scale with the shrinking dimensions of their parts, even though three-dimensional assembly, especially of heterogeneous hybrid systems combining plastic, metal and electronic components, is naturally the best way of achieving low-cost mass products. That these components originate from precision engineering and semiconductor technology indicates that micro systems metrology will always be seen from two different viewpoints of testing: 100% validation of serially manufactured parts on the one hand and statistical control of batch-parallel processes on the other. The first aspect requires online control or fast inspection, whilst the other may employ destructive inspection methods. As several technologies merge into a hybrid, new-level multiparameter and multitool test beds are discussed; however, optical inspection remains the fastest and often most cost-effective approach to dimensional metrology, even though the micro dimensions come close to the wavelengths in use. Where these limitations lie and which measurements are desired is exemplified here from an application point of view.
2 Technologies
Historically, the so-called LIGA technique was the first important approach to the realization of non-silicon micro parts with high aspect ratios and structural dimensions from the µm up to the mm range. Combining deep X-ray lithography as a mastering step with subsequent electroforming for the fabrication of ultra-precise mould inserts, the LIGA technique allows the manufacture of high-quality micro components and micro-structured parts, in particular from plastics but also from ceramics and metals, on a large scale. In recent years, different competing variants of this technique have been established, mainly based on UV lithography, which employ e.g. the processing of thick photoresists such as SU-8 or methods of deep reactive ion etching of silicon such as the so-called ASETM (Advanced Silicon Etch) process. These processes are also suitable for the direct integration of electronics for the realization of advanced MEMS devices. Thanks to recent advances in mechanical ultra-precision machining, laser micro machining and micro electro-discharge machining, these techniques today also allow the realization of structural dimensions that until recently were accessible only by lithography-based processes.
3 Measurement Challenges
Usually each technology comes with its own specialized measurement set-ups and strategies; heterogeneous hybrid systems, however, need one approach that does it all. SEM pictures, as provided here, give an idea, but their metrological validity is still under research [3].
3.1 Three Dimensional Geometries
Precision-machined micro holes are used for professional inkjet printing heads, spinnerets and injection nozzles. Typical diameters range from 300 µm down to some tens of µm. Figure 1 shows a study for hydrogen gas turbines which employed laser drilling, sink erosion and wire erosion. Major demands are roundness and diameter tolerances, both below 1 µm. Others are inside-wall roughness and debris, deviations from cylinder geometry and circumferential burr [2]. SEM inspection, as seen in the inset, provides some idea; however, for industrial production and quality control, standard measuring methods and parameters are lacking.
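As an illustration of how such roundness and diameter tolerances could be evaluated from extracted edge coordinates (a generic sketch, not the authors' procedure; the algebraic Kåsa least-squares circle fit and all names here are illustrative), one can fit a circle and take the peak-to-valley radial deviation as a roundness figure:

```python
import numpy as np

def circle_fit(x, y):
    """Algebraic (Kasa) least-squares circle fit to edge points (x, y).
    Returns centre, diameter, and a roundness figure taken as the
    peak-to-valley radial deviation from the fitted circle."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)      # from c = R^2 - cx^2 - cy^2
    radial = np.hypot(x - cx, y - cy)
    return (cx, cy), 2 * radius, radial.max() - radial.min()

# synthetic edge of a nominally 100 µm hole with 0.2 µm radial noise
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
rng = np.random.default_rng(0)
r = 50.0 + 0.2 * rng.standard_normal(t.size)     # radius in µm
(cx, cy), dia, rnd = circle_fit(r * np.cos(t), r * np.sin(t))
```

A standardized evaluation would use a minimum-zone or least-squares reference circle per ISO roundness definitions; the algebraic fit above is merely the simplest self-contained variant.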
Fig. 1. Hole machined in 0.5 mm V4A steel by laser drilling of starting holes, sink EDM of thread cones (inserts) and subsequent wire EDM refinement. SEM inspection indicates 0.5 µm roundness tolerance; however, internal cylindricity, roughness and creep remain undetermined.
Other examples of 3D parameters in micro systems are given in figure 2. On the left a miniature x,y,z biomedical sensor is shown, fabricated from solid single-crystalline silicon by ASETM, in which the “joystick” protrudes from the sensor surface. The processing damage at the outer rim does not interfere with applicability. Critical parameters are, however, the precise thickness control of the suspension at the bottom and, for use as tactile probes, low tolerances of the shaft diameter. The SEM inspection suggests some narrowing down the shaft [4].
Fig. 2. Three-dimensional micro systems with demanding production requirements. Left: the silicon x,y,z (joystick) sensor needs proper bottom-thickness and shaft-diameter control. Right: the polymer micro gear box needs checking of complex geometries, interlacing and moulding shrinkage from LIGA processing.
The right photograph shows a micro gearbox subassembly made by polymer moulding from LIGA-machined inserts [5]. Whilst the assembly still works, problems with polymer shrinkage of the individual parts are obvious.
3.2 Sidewalls
As with the x,y,z sensor above, sidewall control is also important for setting up a hot-embossing process. Figure 3, left, shows a silicon-etched structure that is replicated by electroplating and finally ends up as PMMA micro channels for massively parallel capillary electrophoresis (right). Rigorously steep flanks are a must and, for high uniformity of the parallel ion drift, the sidewalls need uniform roughness over a macroscopic chip of ID-card dimensions [6].
Fig. 3. Mold insert fabrication for fluidic capillary electrophoresis polymer chips. Right: Silicon master structure made by ASETM, turned into Nickel structures by electroplating. Left: PMMA channels replicated by hot embossing. Processing quality strongly depends on the etching steepness and sidewall roughness for successful mold separation.
Such smooth flanks are not a standard result of ASE processes. Due to its scheme of iterative etching and passivation, sidewall ripple is inherent to this technology. Figure 4 highlights the problem and the high sidewall quality achieved by a post-process developed at IMM. The surface would appear defocused if there were no remaining sharp pits in the sidewall. Up to now, no roughness measurement of either the original or the smoothed sidewall has been realized.
Fig. 4. Sidewall roughness smoothing of ASETM silicon structures before (right & inset) and after two steps of oxidation and HF etching (left). Destructive SEM inspection indicates substantial improvement; however, no quantitative access is known for easy process control.
Fig. 5. Holey silicon devices for gas-diffusive membranes and electron-optical elements (inset). Note the bottom curvature that influences the active areas, as well as the variance in cylindricity, burr and sidewall roughness.
3.3 Bottom Curvature
All deep reactive ion etching processes produce some bottom curvature. This becomes critical if membrane or opening areas directly influence the device performance. Figure 5 gives two examples. The larger photograph shows, in cross-section, a gas-diffusive SiO2 membrane at the bottom of a silicon structure. It is used for helium leak detection, where the membrane withstands normal air pressure against vacuum. It is obvious
that silicon remainders forming the bottom curvature reduce the active diffusible membrane area. Even for such larger hole diameters of 500 µm, there is no standard method for checking these edges of microstructures without destructive preparation techniques. The problem becomes severe if the hole dimensions are drastically reduced, as shown in the inset. These micro grids, made of heavily doped silicon, are used in advanced electron microscopy, e.g. for energy filtering. The silicon structure acts like a structured metal foil but allows smaller hole sizes and higher hole densities and thus leads to a reduction of Moiré effects. For industrial use, it would again be important to easily control the bottom curvature in order to tailor the best potential distribution [7].
Fig. 6. Sink EDM tool fabricated by wire EDM turning (left) and micro electron-optical structure fabricated with that tool (right, dissected). Proper non-destructive inspection of such internal micro geometries is still a major challenge.
3.4 Undercuts
Again with an electron microscopy application, figure 6 exemplifies the demands of measuring undercuts. The tool required for the sink erosion was machined by wire EDM turning. What would need to be measured in this case are simple geometries inside a 2.5 mm diameter element: diameters, rotational symmetry and roundness of edges. As these products are individually made to specification, destructive testing is rather unfavourable.
4 Standard Approaches
What is common to the tasks described above is that they almost all require coupled methods, e.g. for roughness and geometries, and, most importantly, sidewall data in cavities as small as 100 µm. These are
largely inaccessible by the standard methods listed in Table 1. Tilted SEM inspection offers some access to sidewalls for aspect ratios up to one. The tactile probes are in general able to do the job; however, the tip radii make them of limited use. The fibre-optical probe would be the best choice but needs fundamental adaptation. Only micro tomography would give full access, even to undercuts, despite its price and its limitation to low-Z materials such as polymers.
Table 1. State of the art in MEMS measurement
Type of probe | Res. / Tip Ø | Uncertainty 1D, 2D (x, y) | Uncertainty 3D (z) | Sidewall capability (AR)
Video probes | > 5 µm | > 1 µm | > 1 µm | NO
Classical tactile probes | 300 µm | > 0.5 µm | > 0.5 µm | YES, 5
Tactile-optical probes (“fibre probes”) | 20 µm | 0.5 µm | 0.5 µm | Ø, 1
MEMS micro-probes | 300 µm | 50 nm | > 50 nm | Ø
Scanning probe microscopes, SPMs (AFM, SNOM) | 20-50 nm | < 10 nm | no 3D objects measurable | NO
Confocal microscopes (scanning) and other focussing sensors | < 1 µm | > 0.5 µm | 1 µm | NO
Electron microscope, tilted beam | 3 nm | 20 nm | 0.5 µm without tilt | Ø, 1
Micro tomography | > 2 µm | | | YES
5 New Concepts and Outlook
Confocal scanning probes have the advantage of high resolution in the z-direction (< 1 nm) but low lateral resolution (1-2 µm) due to the wavelength of light. At aspect ratios below 1, combining multiple views (rotation of the object or the probe) will allow some access comparable to SEM views. Complex software to stitch the views is required, because the resolution is high in only one coordinate direction, as is the use of far-UV light sources. For stand-offs of up to 5 mm, measurements at the bottom of deep and/or rotated structures might become feasible. For the most part, the confocal microscope will be used to test the multiple-view techniques. As the workhorse of classical micro topography is the interference microscope, its combination with automatic fringe evaluation software might offer additional access to cavities not coming too close to the wavelength restrictions. Other ideas are focus probes or confocal point probes combined with SPM probes and optical microscopes. With this it might be possible to combine fast orientation with the optical microscope, high-speed optical scanning and the high lateral resolution of the SPM technique. It is estimated that such advanced measurement strategies will reduce measurement times dramatically and that combined instruments will bridge the milli-, micro- and nanometre scales. In conclusion, the advent of multitools combining tactile and optical sensing is regarded as the most promising approach towards 3D micro systems measurement. Optics allows for enhanced speed, whilst mechanics should “bring the light down”. Ideas for combining these within a novel fibre-optical inspection scheme are still missing.
References
1. CEC FP6-2002-NMP-1, 17.12.2002
2. Neumann F (2005) Challenges for Mikrometrology. Proc 12th Sensor Congress, Nuremberg, May 2005
3. Herrmann K, Koenders L, Wilkening G (2001) Dimensional metrology using scanning probe microscopy. Sino-German Symposium on Micro Systems and Nano Technology, Braunschweig, Germany, 5-7 September 2001
4. Beccai L, Roccella S, Arena A, Valvo F, Valdastri P, Menciassi A, Dario MP, Schmitt L, Staab W, Schmitz F, Design and fabrication of a hybrid silicon three-axial force sensor for biomechanical applications. Sensors and Actuators A 120(2):370-382
5. Nienhaus M, Ehrfeld W, Berg U, Schmitz F, Soultan H (2000) Tools and methods for automated assembly of miniaturized gear systems. In: Microrobotics and Microassembly II, 5-6 November 2000, Boston, USA. SPIE, Bellingham, WA, pp 33-43
6. Griebel A, Rund S, Schönfeld F, Dörner W, Konrad W, Hardt S (2004) Integrated polymer chip for two-dimensional capillary gel electrophoresis. Lab Chip 4:18-23
7. Haase F, Detemple P, Schmitt S, Lendle A, Haverbeck O, Doll T, Gnieser D, Bosse H, Frase G (2005) Electron permeable membranes for environmental MEMS electron sources. To be presented at Eurosensors 2005
Interferometric Technique for Characterization of Ferroelectric Crystals Properties and Microengineering Process Simonetta Grilli, Pietro Ferraro, Domenico Alfieri, Melania Paturzo, Lucia Sansone, Sergio De Nicola, and Paolo De Natale Istituto Nazionale di Ottica Applicata (INOA) del CNR c/o Istituto di Cibernetica "E.Caianiello" del CNR Via Campi Flegrei, 34 - c/o Compr. "Olivetti" - 80078 Pozzuoli (NA) Italy
1 Introduction
Ferroelectric crystals, such as lithium niobate (LN) or lithium tantalate, have emerged as an important class of materials useful for several technological applications, including quasi-phase-matched (QPM) frequency converters, electro-optic scanners and surface acoustic wave (SAW) devices. Most of these applications depend critically on the ability to micro-engineer ferroelectric domains, usually performed by the electric field poling process. Therefore non-invasive and in-situ diagnostic methods for a thorough understanding of domain formation become very important. In particular, the fabrication of high-quality ferroelectric micro-engineered devices strictly depends on the ability to control the domain switching process during poling. In the case of low-conductivity ferroelectric materials, such as LN, electric field poling is usually monitored by controlling the poling current flowing in the external circuit, which gives information about the amount of charge delivered to the sample but ignores the spatial and temporal evolution of the domain walls [1]. In this paper we present an interferometric technique based on digital holography (DH) to provide the phase-shift distribution of the object wavefield, due to the linear electro-optic (EO) and piezoelectric effects, for real-time visualization of ferroelectric domain switching. The temporal and spatial evolution of reversing domain regions in congruent LN crystal samples is presented. During the application of the external poling voltage, a Mach-Zehnder (MZ) type interferometer set-up generates an interference fringe pattern recorded by a solid-state array detector. The recorded digital holograms are used to numerically reconstruct both the amplitude and the phase of the wavefield transmitted by the crystal. Sequences of amplitude- and phase-maps of the domain walls, moving under the effect of the poling process, are obtained and collected into movies for real-time visualization of the domain evolution. Depending on the domain wall velocity experienced during the poling process, a CCD or a high frame-rate CMOS camera can be employed. The technique can be used for real-time monitoring of the periodic poling process as an alternative to the poling-current control method, giving full-field spatial information. In fact, this technique can provide in-situ and non-invasive characterization of the reversed domain structures obtained after the poling process. By this method it is possible to assess the quality of the fabricated periodically reversed samples, avoiding the destructive ex-situ selective etching usually adopted to reveal reversed domains [2].
2 Experimental procedure
Congruent LN crystal samples, (20×20×0.5) mm in size, are obtained by dicing single-domain 3-inch diameter crystal wafers, polished on both sides, and mounted in a special holder for electric field poling and simultaneous interferometric investigation (see Fig. 1). The structure of the sample holder allows simultaneously the application of an external high voltage for reversing ferroelectric domains and the laser illumination of the crystal sample along its z-axis direction, through the quartz windows [3].
[Fig. 1 labels: plexiglas mount, quartz windows, silicone o-ring, object laser beam, liquid electrolyte (LiCl), LiNbO3 crystal, x-y-z axes, HVA]
Fig. 1. Schematic view of the sample holder used for electric field poling and simultaneous interferometric acquisitions (HVA high voltage amplifier)
Electrical contact on the sample surfaces is obtained by tap water. The sample area under poling and simultaneous laser illumination is 5 mm in diameter. The sample holder is inserted into one arm of an MZ-type interferometer set-up, as shown in Fig. 2. The horizontally polarized beam emitted by a frequency-doubled Nd-YAG laser at 532 nm is divided by a polarizing beam-splitter into two beams, which are properly expanded to obtain two plane parallel wavefields. The object wavefield propagates through the crystal sample along its z-axis direction. The two beams are then recombined by the second beam-splitter and an interferogram is obtained and captured by the camera. The fringe pattern is digitized either by a CCD camera with (1024×1024) pixels, 6.7 µm in size, or by a CMOS camera with (512×512) pixels, 11.7 µm in size. The CCD provides an acquisition time of about 10 s at a rate of about 12 frames/s and is used here in the case of slow LN poling [1]. A fast poling process is captured by the CMOS camera, which provides an acquisition time of about 1 s at a rate of about 800 frames/s. It is well known that for EO materials such as LN, under a uniform external electric field the refractive index increases from $n$ to $n + \Delta n$ in one domain, while in the oppositely oriented one it decreases from $n$ to $n - \Delta n$, thus providing an index contrast across the domain wall [4]. In fact, the refractive-index change due to the linear EO effect along the z crystal axis depends on the domain orientation according to $\Delta n \propto r_{13} E_3$, where $E_3$ is the external electric field parallel to the z crystal axis. The index difference across a domain wall is equal to $2\Delta n$ and causes a phase retardation of the transmitted beam.
[Fig. 2 labels: mirror, PBS, 532 nm, λ/2, BE, sample, BS, mirror, camera]
Fig. 2. MZ type interferometer set-up used for real-time visualization of reversing ferroelectric domains by the linear EO and piezoelectric effect (PBS polarizing beam-splitter; BE beam-expander; BS beam-splitter)
This effect has been widely used for in-situ EO imaging of domain reversal in ferroelectric materials [4], thus avoiding the ex-situ invasive chemical etching process [2]. The phase retardation of a plane wave incident along the domain boundaries is also affected by the piezoelectric effect, which induces a negative or positive sample thickness variation $\Delta d$ in reversed domain regions. Therefore, during electric field poling, an incident plane wave experiences a phase shift $\Delta\varphi$, mainly due to the linear EO and piezoelectric effects along the z crystal axis, according to

$$\Delta\varphi = \frac{2\pi}{\lambda}\left[ 2\Delta n\, d + 2\left(n_0 - n_w\right)\Delta d \right] = \frac{2\pi}{\lambda}\left[ r_{13} n_0^3 + 2\left(n_0 - n_w\right) k_3 \right] U \qquad (1)$$

where the piezoelectric thickness change $\Delta d$ depends on $k_3$, the ratio between the linear piezoelectric and the stiffness tensors ($k_3 = 7.57\times10^{-12}$ m/V) [4], $n_w = 1.33$ is the refractive index of water and $U$ is the applied voltage. Compared to the amplitude-contrast EO imaging methods [4], the DH technique allows reconstruction of the object wavefield in both amplitude and phase. The phase-shift distribution provides high-contrast images of the reversing domain regions and quantitative information about the phase retardation at the domain walls. A DH technique was used by these authors in a previous paper [5] for in-situ visualization of switching ferroelectric domains in congruent LN. An improvement of the technique is proposed here to obtain domain reversal visualization with high spatial and temporal resolution, by replacing the RGI set-up with an MZ-type interferometer and by using higher-speed cameras. DH is an imaging method in which the hologram resulting from the interference between the reference and the object complex fields, $r(x,y)$ and $o(x,y)$ respectively, is recorded by a camera and numerically reconstructed [6,7]. The hologram is multiplied by the reference wavefield in the hologram plane, namely the camera plane, to calculate the diffraction pattern in the image plane. The reconstructed field $\Gamma(\nu,\mu)$ in the image plane, namely the plane of the object, at a distance $d$ from the camera plane, is obtained by using the Fresnel approximation of the Rayleigh-Sommerfeld diffraction formula

$$\Gamma(\nu,\mu) \propto \iint h(\xi,\eta)\, r(\xi,\eta)\, \exp\!\left[\frac{i\pi}{\lambda d}\left(\xi^2 + \eta^2\right)\right] \exp\!\left[-2 i\pi\left(\xi\nu + \eta\mu\right)\right] d\xi\, d\eta \qquad (2)$$

The reference wave $r(\xi,\eta)$, in the case of a plane wave, is simply a constant, and $h(\xi,\eta) = \left|r(\xi,\eta) + o(\xi,\eta)\right|^2$ is the hologram function; $\lambda$ is the laser source wavelength and $d$ is the reconstruction distance, namely the distance measured between the object and the camera plane along the beam path. The coordinates $(\nu,\mu)$ are related to the image plane coordinates $(x', y')$ by $\nu = x'/(\lambda d)$ and $\mu = y'/(\lambda d)$. The reconstructed field $\Gamma(\nu,\mu)$ is obtained by applying the Fast Fourier Transform (FFT) algorithm to the hologram $h(\xi,\eta)$ multiplied by the reference wave $r(\xi,\eta)$ and the chirp function $\exp\left[(i\pi/\lambda d)\left(\xi^2 + \eta^2\right)\right]$. The discrete finite form of equation (2) is obtained through the pixel size $(\Delta\xi, \Delta\eta)$ of the camera array, which is different from the pixel size $(\Delta x', \Delta y')$ in the image plane; they are related as follows:

$$\Delta x' = \frac{\lambda d}{N \Delta\xi}\,; \qquad \Delta y' = \frac{\lambda d}{N \Delta\eta} \qquad (3)$$

where $N$ is the pixel number of the camera array. The 2D amplitude $A(x', y')$ and phase $\phi(x', y')$ distributions of the object wavefield can be re-imaged by simple calculations:

$$A(x', y') = \mathrm{abs}\left[\Gamma(x', y')\right]; \qquad \phi(x', y') = \arctan\frac{\mathrm{Im}\left[\Gamma(x', y')\right]}{\mathrm{Re}\left[\Gamma(x', y')\right]} \qquad (4)$$
Hybrid Measurement Technologies
519

3 Results
Two different configurations have been used in this work. In case A the LN sample is subjected to a slow poling process by using a high series resistor (100 MΩ) in the external circuit [1]. The whole area under investigation (diameter 5 mm) is reversed in less than 10 s and the interferograms are acquired by the CCD camera. In case B another virgin LN crystal sample is reversed by fast poling (series resistor 5 MΩ) in order to reverse the whole crystal area in less than 1 s, and the interferograms are acquired by the CMOS camera. Amplitude and phase maps of the object wavefield are numerically reconstructed by the DH method as described in the previous section. The reconstruction distance is 125 mm in case A and 180 mm in case B, while the lateral resolution obtained in the reconstructed amplitude and phase images is 9.7 µm in case A and 16 µm in case B, according to (3). A reference interferogram of the sample in its initial virgin state is acquired before applying the external voltage and is used to calculate the phase shift experienced by the object wavefield during poling. The DH reconstruction is performed for both the reference hologram and the nth hologram, recorded during the domain switching, to obtain the corresponding phase distributions φ0(x', y') and φn(x', y'). The 2D phase shift map Δφ(x', y') = φn(x', y') − φ0(x', y') is calculated for each hologram and the corresponding images are collected into a movie. Figs. 3-4 show some of the frames extracted from this movie in cases A and B, respectively. The out-of-focus real image term, generated by the DH numerical procedure [6,7], is filtered out for clarity. The in-focus image of the domain wall propagating during the application of the external voltage is clearly visible. The switching process always starts with nucleation at the electrode edges. It is interesting to note that a residual phase shift gradient is present at previously formed domain walls, as indicated in Figs. 3-4. This is probably due to the decay effect of the internal field related to the polarization hysteresis in ferroelectric crystals [8,9]. It is important to note that crystal defects and non-uniformities are clearly visible and readily detectable in Figs. 3-4, due to their different EO behaviour [10]. Moreover, in Fig. 4 the high temporal resolution of the frames obtained with the CMOS camera shows that the evolution of the domain walls is clearly influenced by the crystal defects, where the domain wall propagation appears to be partially blocked.
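The per-hologram phase-shift maps Δφ(x', y') described above amount to a pixelwise difference of two reconstructed phases; a minimal sketch (the rewrapping of the difference to the principal interval is an assumption about the processing, and the function name is illustrative):

```python
import numpy as np

def phase_shift_map(phi_n, phi_0):
    """2D phase-shift map Δφ(x', y') = φn(x', y') − φ0(x', y') between the
    nth-hologram reconstruction and the virgin-state reference phase,
    rewrapped to the principal interval (−π, π]."""
    dphi = phi_n - phi_0
    return np.angle(np.exp(1j * dphi))  # rewrap via the complex exponential
```

Each frame of the movie is then one such map, computed against the same reference phase φ0.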
Fig. 3. Selected frames from the phase-map movie obtained in case A. The frame area is (5x5) mm² and the time t (in seconds) corresponding to each frame is (a) 4.2, (b) 4.6, (c) 5.0, (d) 6.3, (e) 6.7, (f) 7.9. The polarization axis is normal to the image plane
Fig. 4. Selected frames from the phase-map movie obtained in case B. The frame area is (5x5) mm² and the time t (in milliseconds) corresponding to each frame is (a) 390, (b) 430, (c) 470, (d) 510, (e) 530, (f) 580. The polarization axis is normal to the image plane
4 Conclusions
A DH technique for non-invasive real-time visualization of switching ferroelectric domains with high spatial and temporal resolution has been proposed and demonstrated in this paper. The technique provides the reconstruction of the phase shift distribution of the wavefield transmitted by the sample during poling, by making use of the EO and piezoelectric effects occurring under the external voltage. It can be used as an accurate and high-fidelity method for monitoring the periodic poling process, as an alternative to the commonly used poling current control. Further experiments on photoresist-patterned samples are in progress, using a microscopic configuration of the MZ interferometer.
5 Acknowledgments
This research was partially funded by the Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) within the project "Microdispositivi in Niobato di Litio" n. RBNE01KZ94, and partially by the project MIUR n.77 DD N.1105/2002.
6 References
1. Myers, L, Eckardt, R, Fejer, M, Byer, R, Bosenberg, W, Pierce, J (1995) Quasi-phase-matched optical parametric oscillators in bulk periodically poled LiNbO3. Journal of the Optical Society of America B 12:2102-2116
2. Nassau, K, Levinstein, H, Loiacono, G (1966) Ferroelectric lithium niobate. 1. Growth, domain structure, dislocations and etching. Journal of Physics and Chemistry of Solids 27:983-988
3. Wengler, M, Müller, M, Soergel, E, Buse, K (2003) Poling dynamics of lithium niobate crystals. Applied Physics B 76:393-396
4. Gopalan, V, Mitchell, T (1999) In situ video observation of 180° domain switching in LiTaO3 by electro-optic imaging microscopy. Journal of Applied Physics 85:2304-2311
5. Grilli, S, Ferraro, P, Paturzo, M, Alfieri, D, De Natale, P, de Angelis, M, De Nicola, S, Finizio, A, Pierattini, G (2004) In-situ visualization, monitoring and analysis of electric field domain reversal process in ferroelectric crystals by digital holography. Optics Express 12:1832-1842
6. Schnars, U, Jüptner, W (2002) Digital recording and numerical reconstruction of holograms. Measurement Science and Technology 13:R85-R101
7. Grilli, S, Ferraro, P, De Nicola, S, Finizio, A, Pierattini, G, Meucci, R (2001) Whole optical wavefields reconstruction by digital holography. Optics Express 9:294-302
8. Paturzo, M, Alfieri, D, Grilli, S, Ferraro, P, De Natale, P, de Angelis, M, De Nicola, S, Finizio, A, Pierattini, G (2004) Investigation of electric internal field in congruent LiNbO3 by electro-optic effect. Applied Physics Letters 85:5652-5654
9. de Angelis, M, De Nicola, S, Finizio, A, Pierattini, G, Ferraro, P, Grilli, S, Paturzo, M (2004) Evaluation of the internal field in lithium niobate ferroelectric domains by an interferometric method. Applied Physics Letters 85:2785-2787
10. de Angelis, M, Ferraro, P, Grilli, S, Paturzo, M, Sansone, L, Alfieri, D, De Natale, P, De Nicola, S, Finizio, A, Pierattini, G (2005) Two-dimensional mapping of the electro-optic phase retardation in lithium niobate crystals by digital holography. Optics Letters (to be published)
Holographic interferometry as a tool to capture impact induced shock waves in carbon fibre composites J. Müller1, J. Geldmacher1, C. König2, M. Calomfirescu3, W. Jüptner1 1 BIAS GmbH, Klagenfurter Str. 2, 2 Bremer Institut für Konstruktionstechnik BIK, Badgasteiner Str. 1, 3 Faserinstitut Bremen e.V., Am Biologischen Garten 2, 28359 Bremen, Germany
Abstract
In this work an analysis of impacts on carbon fibre structures using holographic interferometry is presented. Impacts are caused e.g. by stones or hail at high vehicle speeds. An impact is defined as a force acting for a time shorter than the travelling time of the impact waves through the structure. The measurements are therefore performed using a pulsed Nd:YAG laser, making it possible to record digital interferograms at different times after the impact [1]. The impact is produced by an air-driven projectile and the holograms are stored digitally using a CCD camera. The numerical evaluation of the interferograms then gives access to the out-of-plane displacements of the surface. Thus a 2D strain field is extracted and analysed quantitatively. The experiments cover the influence of different parameters: the contact time during impact, the momentum of the projectile and the evolving wave forms (longitudinal, transversal, bending wave). The effect of these parameters is also investigated for different layer designs of the composite. Due to the anisotropic properties of carbon fibre composites, not much is known about the damage tolerance and failure limits, especially in the dynamic case. The goal of these experiments is therefore to gain a deeper understanding of the dynamic behaviour of these materials and to provide the dynamic material parameters, which can be used for numerical simulations on the one hand and for design and construction on the other.
1 Introduction
The designers of modern airplanes and automobiles make extensive use of carbon fibre reinforced structures. These composites have advantages concerning weight and durability and therefore reduce costs. But due to high motion speeds they are exposed to damage by impacts caused e.g. by stones, birds or hail. Most of the analytically and experimentally gained parameters for structural analysis only cover the static case. For the design it is necessary to know the effects of highly dynamic cases, like different impact induced wave forms, on the structures [2]. Additionally, the design of structural components is difficult due to the anisotropic nature of the composite. Therefore research is necessary to understand the behaviour of impact induced waves in these materials. In the following we will report on our work using optical methods to gain access to these waves and the influence of different material parameters. We calculated the principal sum of strains from measurements performed using a holographic double exposure setup. These will be combined with results from experiments performed at the Bremer Institut für Konstruktionstechnik BIK. The photoelastic coating method (PCM) used there gives the principal difference of strains [3]. From these a 2D representation of the principal strains ε1 and ε2 can be calculated. The data is then used for further work in finite element simulation at the Faserinstitut Bremen, to gain understanding of the effects of wave forms on carbon fibre structures and to predict secondary damage that may occur outside the zone of an otherwise non-damaging impact.
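Combining the principal sum of strains (from holography) with the principal difference (from the PCM), as described above, reduces to simple arithmetic per pixel; a minimal sketch, with all names illustrative:

```python
def principal_strains(strain_sum, strain_diff):
    """Recover the individual principal strains ε1 and ε2 from
    the principal sum S = ε1 + ε2 (holography) and the principal
    difference D = ε1 - ε2 (photoelastic coating method)."""
    eps1 = 0.5 * (strain_sum + strain_diff)
    eps2 = 0.5 * (strain_sum - strain_diff)
    return eps1, eps2
```

Applied pixelwise to the two 2D maps, this yields the 2D representation of ε1 and ε2 mentioned in the text.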
2 Impact waves
Unlike in static cases, the loads caused by an impact are space- and time-dependent. The known dependency between the stress and the strain measured at the same time at a different part of the structure does not exist in this highly dynamic case. An impact is defined as a loading whose duration is smaller than the running time of the wave through the structure:

t_impact < l/c   (1)
Here l is the typical dimension and c is the speed of the wave. The amplitude of the stress depends on the momentum that was carried into the structure, the wavelength depends on the contact time and the speed of the
wave. Therefore three parameters have to be controlled: contact time, force distribution over time and momentum. Three kinds of waves need to be considered. The phase speed of the longitudinal wave is given by [4]

c_L0 = √(E/ρ);  c_L1 = √(E/(ρ(1 − ν²)))   (2)

for the one- and two-dimensional case. The 3D case is given by

c_L2 = √(E(1 − ν)/(ρ(1 + ν)(1 − 2ν)))   (3)
where E is Young's modulus, ρ is the mass density and ν is Poisson's ratio. The second wave form is the transversal wave, which travels at roughly half the speed of the longitudinal wave:

c_T = √(E/(2ρ(1 + ν)))   (4)
Another wave form is a mixture of the two preceding forms: the Rayleigh wave, which forms at free surfaces of the body. We are looking at flat plates, and for this two-dimensional case Rayleigh waves can be neglected because their wavelength would be much larger than the thickness of the panel. The third wave form that has to be considered is the bending wave. The phase speed of this wave is frequency dependent and given by

c_B = √(π f h) · (E/(3ρ(1 − ν²)))^1/4 = (π h/λ) · √(E/(3ρ(1 − ν²)))   (5)

with h being the thickness of the plate. Note that c_B → 0 for large wavelengths and c_B → ∞ for λ → 0. This is called the anomalous dispersion. For physical reasons the upper limit for the phase speed is the speed of the Rayleigh wave [5, 6]

c_B,max = c_Rayleigh = q · c_T   (6)

where q can be approximated by q = (0.874 + 1.12ν)/(1 + ν) [4].
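Equations (2)-(6) can be evaluated with a small helper; the following sketch and the material values in the test are illustrative only, not code or data from the paper:

```python
import math

def wave_speeds(E, rho, nu, f=None, h=None):
    """Phase speeds from eqs. (2)-(6), SI units.
    E: Young's modulus [Pa], rho: density [kg/m^3], nu: Poisson's ratio,
    f: frequency [Hz] and h: plate thickness [m] for the bending wave."""
    c_L0 = math.sqrt(E / rho)                                          # 1D, eq. (2)
    c_L1 = math.sqrt(E / (rho * (1 - nu**2)))                          # 2D, eq. (2)
    c_L2 = math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))   # 3D, eq. (3)
    c_T = math.sqrt(E / (2 * rho * (1 + nu)))                          # transversal, eq. (4)
    q = (0.874 + 1.12 * nu) / (1 + nu)
    c_B_max = q * c_T                                                  # Rayleigh limit, eq. (6)
    c_B = None
    if f is not None and h is not None:                                # bending wave, eq. (5)
        c_B = math.sqrt(math.pi * f * h) * (E / (3 * rho * (1 - nu**2))) ** 0.25
    return c_L0, c_L1, c_L2, c_T, c_B, c_B_max
```

The ordering c_L0 < c_L1 < c_L2 and c_B < c_B,max < c_T reproduces the anomalous dispersion argument above: the bending wave is the slowest for all admissible frequencies.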
3 Holographic double exposure method
The capturing of the travelling wave follows the well known double exposure method [7], with the difference that the two holograms are recorded separately with a CCD target and the reconstruction is performed numerically [8]. The procedure used here is shown in Fig. 1. The first hologram H1 represents the unloaded reference state before the impact. The impact triggers the second laser pulse and hologram H2 is recorded at a defined time after the impact. From these two holograms the complex wavefields b' are reconstructed and the phases are calculated:

φ_n(x', y') = arctan(Im[b'(x', y')]/Re[b'(x', y')])   (7)
The phase difference Δφ of the loaded and the unloaded state is then directly related to the displacement u_z of the surface:

u_z = Δφ · λ/(2π)   (8)
In our case a third hologram H3 is recorded after the impact. This serves the purpose of removing any remaining deformations of the plate due to frictional forces by the anvil, and of checking whether the impact was non-destructive. This second difference phase DP2 is subtracted from the first one, resulting in the final difference phase DP3. This difference phase is then unwrapped (DP) and converted into a 2D field of the principal sum of strain according to

(ε1 + ε2) = 2(1 − ν) u_z/(ν h)   (9)

Fig. 1. Schematic procedure for the evaluation of holograms
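The core of the evaluation, equations (7) and (8), can be sketched as follows. This is a minimal illustration that ignores the unwrapping and the third-hologram correction described above; the function name is an assumption.

```python
import numpy as np

def displacement_from_holograms(b_ref, b_load, wavelength):
    """Out-of-plane displacement u_z from two reconstructed complex
    wavefields b' (reference and loaded state), via eqs. (7) and (8).
    Valid as a sketch only where |Δφ| < π; a real evaluation would
    unwrap the difference phase first."""
    phi_ref = np.angle(b_ref)                        # eq. (7)
    phi_load = np.angle(b_load)
    dphi = np.angle(np.exp(1j * (phi_load - phi_ref)))  # wrapped difference phase
    return dphi * wavelength / (2 * np.pi)           # eq. (8)
```

A quarter-wave phase step (Δφ = π/2) thus maps to a displacement of λ/4.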
4 Experimental Setup
The light source used in this work is a pulsed ruby laser working in double pulse mode at 694 nm. The emitted pulses have an energy of 1 J and a pulse duration of t_pulse = 30 ns. The beam is divided into object and reference beam by a 90/10 beamsplitter BS1, and a lens L1 is used to illuminate the sample with the object beam. The reference beam is expanded using a telescope arrangement. Both beams are brought to interference on the CCD target using a second beamsplitter BS2. Since a CCD target can only record a limited angle between object and reference wave [7] due to the sampling theorem, the diverging lens L2 is introduced.
Fig. 2. Experimental setup for holographic double exposure
The impact unit consists of a pressure container which drives a steel ball in a tube acting as the projectile. The velocity is recorded by two photosensors at the end of the tube. The force is recorded by a transducer mounted between the end of the tube and an anvil which provides a constant surface of contact during the impact. Experiments show that the force can be accurately reproduced for each impact. The duration of the impact can be controlled by choosing anvils of different sizes. The force is controlled by choosing different driving pressures and therefore different velocities of the projectile. The sample is a plate measuring 30 cm x 30 cm and is clamped at the upper and lower edges. The other two edges are free. The pulse laser is triggered by the first photosensor. The first pulse is emitted before the projectile hits the sample. The second pulse is triggered at a selectable time after the impact. The CCD camera is synchronized to this trigger in order to capture two images of the two pulses. Therefore, for each point in time that has to be captured, a single impact has to be performed. The reconstruction of the holograms is performed off-line on a personal computer.
5 Results
The material under investigation is a carbon fibre structure typically used in fuselages of aircraft. It contains fibres in the directions of 0°, 45°, -45° and 90° in equal fractions of 25%, which is the simplest case without anisotropic behaviour. The impact was performed using a 7 mm steel ball with an average impact velocity of v_ave = 18.1 m/s. The contact time of the impact was 9.5 µs at a maximum impact force of 10.9 kN. Finite element simulations show that the impact produces a series of circular wave peaks (see Fig. 3a). The impact was non-damaging and so was the wave itself. From these calculations the speeds of the transversal and the bending wave were also extracted.
Fig. 3. a) Simulation of displacement (x1000), 30µs after impact (zoomed), b) Unwrapped phase differences at diff. times after impact (complete sample)
Figure 3b shows the corresponding measurements of the phase differences at 4 different times after the impact. It can be seen that the equal fractions of fibre directions lead to the expected circular wave front. The center section of the excited area is not resolved due to the large amplitude causing an undersampled fringe density. Only the front part of the wave can be observed. Inspection of the difference phases after the impact also shows that no damage was done to the material. From these unwrapped difference phases, representative line scans as indicated by the white lines are extracted and the corresponding displacement is calculated according to Equ. 8. One typical result in Y direction (fibre direction 90°) is shown in Figure 4. Here the displacement of the wave is shown for three points in time. The data points are only shown for the first one for clarity. Two peaks of the
wave front are identifiable. The speed of the slower one can be evaluated as indicated by the dotted line. The velocity is v = (1700±200) m/s, indicating a bending wave.
Fig. 4. Amplitude at different times after impact in Y-direction (90°)
The second peak, indicated by the dash-dotted line, travels at nearly double the speed of the slower one at v = (3500±400) m/s. This velocity suggests a transversal wave, since the theoretical value from numerical calculations is v_T = 3705 m/s; moreover, its wavelength is longer than that of the slower wave, which would not be the case for a bending wave (see Equ. 5). Calculations show that the longitudinal wave travels at v_L = 6282 m/s. This wave has not been observed yet.
6 Summary and Outlook
In this paper we have shown the possibility of recording the amplitude of transient events such as an impact using double exposure digital holography. We have shown that the resolution of digital holography is sufficient for the recording of different wave forms. Comparisons to FEM simulations show good agreement concerning wave speeds, but further work is required for more accurate modelling of e.g. the scaling of the displacement and the damage effects of the impacts, such as delaminations. Future work will include the calculation of 2D maps of the principal strains by combining
the results from holography and the PCM method. This data can then be used for evaluation of the FEM simulations. Further investigation is also needed concerning the frequency dependent damping of bending waves and the effect of different layer designs of the composites on the damping. With more knowledge about the damping it becomes possible to make better predictions of the damaging characteristics of structures and to give corresponding design rules.
7 Acknowledgements The authors would like to thank the Deutsche Forschungsgemeinschaft for funding this work under the grant number Ju 142/54-1.
8 References
1. Hariharan, P (1984) Optical Holography: Principles. Cambridge University Press
2. Müller, D, Jüptner, W, Franz, T (1996) Untersuchung der Stosswellenausbreitung in Faserverbundwerkstoffen mittels dynamischer Spannungsoptik und holografischer Interferometrie. Engineering Research Vol. 62, No. 7/8, pp. 195-213
3. Franz, T (1998) Experimentelle Untersuchung von impactbelasteten versteiften bzw. gekerbten Platten aus Faserverbundwerkstoffen. Diss. Universität Bremen
4. Cremer, L, Heckl, M (1996) Körperschall. Springer Verlag, Berlin
5. Kolsky, H (1963) Stress Waves in Solids. Dover Publications, New York
6. Goldsmith, W (1960) Impact – The Theory and Physical Behaviour of Colliding Solids. Edw. Arnold Ltd., London
7. Kreis, T (1996) Holographic Interferometry: Principles and Methods. Akademie Verlag, Berlin
8. Schnars, U, Jüptner, W (1994) Direct recording of holograms by a CCD target and numerical reconstruction. Applied Optics 33(2), pp. 179-181
Two new techniques to improve interferometric deformation-measurement: Lockin and Ultrasound excited Speckle-Interferometry Henry Gerhard, Gerhard Busse University Stuttgart, IKP-ZFP Pfaffenwaldring 32, 70569 Stuttgart Germany
1 Introduction
Speckle methods like Shearography and Electronic-Speckle-Pattern-Interferometry (ESPI) display changes of surface deformation in a fringe pattern [1]. Such a deformation can be induced e.g. by applying a pressure difference or by remote heating. Defects cause a distortion of the fringe pattern and so reveal themselves this way. However, the deformation of the whole sample makes it difficult to detect the much smaller superposed defect induced distortions. We show how modulated heating induced deformation improves the detectability. The probability of defect detection (POD) is much higher if heating acts selectively on defects. This is achieved by using elastic waves for enhanced loss angle heating in defects ("ultrasound activated speckle-interferometry" [2,3]). The hidden defect is then marked by a "small bump" on the surface.
2 Principle 2.1 Electronic-Speckle-Pattern-Interferometry (ESPI)
In an ESPI measurement a diffusely reflecting sample is exposed to a laser beam and imaged in this light by a CCD or CMOS camera. A superposed reference beam causes an interferometric speckle pattern that responds to wavelength-sized deformations of the object. The superposition of the two speckle patterns "before and after deformation" results in fringes that display lines of equal surface deformation (similar to the equal-height lines in
a map). The components of the object deformations, perpendicular ("out of plane") or tangential ("in-plane") to the object surface, can be imaged this way simultaneously, providing complete 3D information about the deformation field [4]. For our experiments we used out-of-plane imaging, where the height difference between adjacent fringes is half the laser wavelength (λ/2 = 328 nm). Phase shifting techniques and unwrapping algorithms were used to improve the signal-to-noise ratio and to obtain absolute deformation values [5]. The ESPI system and the software were developed at our institute (IKP-ZfP).
2.2 Lockin Interferometry
Modulation techniques allow for noise suppression due to the reduction of bandwidth. As such filtering used to be performed by electronics named lockin, the term is also applied to imaging, where it denotes the use of such a procedure at each pixel. An example is lockin thermography (OLT), where an absorbing sample is exposed to intensity modulated light. The resulting surface temperature modulation propagates as a thermal wave into the object. At boundaries this wave is back-reflected to the surface, where it affects both phase and magnitude of the initial wave by superposition. The stack of thermography images obtained during the modulated irradiation of the object is pixelwise Fourier transformed at the modulation frequency. This way the information from all images is narrowband filtered and compressed into an amplitude and a phase image of the temperature modulation [6-10]. We applied this principle to ESPI to perform optical lockin interferometry (OLI) [11-12]. In contrast to OLT, which monitors the temperature amplitude or the phase shift of the modulated temperature field at the surface of the object, OLI analyses the modulated deformation resulting from periodical heating (e.g. by intensity modulated irradiation). The advantages are the same as before: one obtains an amplitude and a phase image at a much improved signal to noise ratio. It should be mentioned that phase is involved twice: both the lockin phase image and the amplitude image are derived by Fourier transformation from a sequence of interferometric phase images.
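The pixelwise narrowband filtering just described amounts to a discrete Fourier sum of the image stack at the lockin frequency; a minimal sketch, where the array layout and all names are assumptions and not the institute's software:

```python
import numpy as np

def lockin_demodulate(stack, f_lockin, f_sample):
    """Pixelwise lockin demodulation of an image sequence:
    projects each pixel's time series onto a complex reference at the
    lockin frequency and returns an amplitude and a phase image.
    stack: array of shape (n_frames, ny, nx), sampled at f_sample [Hz]."""
    n = stack.shape[0]
    t = np.arange(n) / f_sample
    ref = np.exp(-2j * np.pi * f_lockin * t)        # complex lockin reference
    F = np.tensordot(ref, stack, axes=(0, 0)) / n   # per-pixel Fourier coefficient
    return 2 * np.abs(F), np.angle(F)               # amplitude image, phase image
```

For a pixel modulated as A·cos(2πft + φ) over an integer number of periods, the returned amplitude and phase images recover A and φ exactly; off-frequency noise is suppressed by the narrowband projection.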
Fig. 1. Principle of Lockin-ESPI
OLI is suited for imaging of hidden features, where the depth range depends on the thermal diffusion length µ = (2k/(ωρc))^1/2 (k denotes the thermal conductivity, ρ the density and c the specific heat, respectively) [13,14], which can be varied by the modulation frequency ω [14].
2.3 Ultrasound activated interferometry
Information in speckle-interferometry images is coded by fringes. The dynamic range of measurements is limited by the maximum number of detectable lines in the image. When the whole sample is illuminated, the whole surface is heated and deformed, while the effect of the defect may be quite small. We developed a method where a defect responds selectively, so that the image displays mostly the defect induced fringes and not the potentially confusing background of the intact structure. As a mechanical defect is generally characterized by local stress concentration and/or an enhanced mechanical loss angle ("hysteresis"), excitation of the sample by ultrasound together with internal friction in defects converts elastic wave energy locally into heat. The resulting local thermal expansion in the defect area causes a bump that reveals the hidden defect.
Fig. 2. Principle of ultrasound excited ESPI
3 Results We present examples for the potential of the two methods described above. 3.1 Optical-Lockin-ESPI
Defect detection in Polymethylmethacrylate (PMMA)
The first example is an investigation of a homogeneous PMMA model sample (120x86x6mm³) with subsurface holes drilled perpendicular to the rear surface, in order to simulate defects at different depths underneath the inspected front surface. The transparent specimen was painted black on the front side to hide the holes. The plate was illuminated at 0.02 Hz modulation frequency while ESPI images were continuously recorded.
In each single image taken out of the sequence, the defects can be detected neither in the wrapped (Fig. 3, left) nor in the unwrapped phase image (Fig. 3, right). Only a two-dimensional fit could find small differences in deformation (Fig. 4, left), at a poor signal to noise ratio. The phase image derived by Fourier transformation from the whole sequence of modulated deformation at the lockin frequency makes all holes clearly visible (Fig. 4, right). It has been shown previously that holes at different depths can be distinguished by variation of the modulation frequency, because it controls the depth range of the generated thermal wave probing the defect [11].
Fig. 3. Left: wrapped-phase image, Right: same image after phase unwrapping
Fig. 4. Left: demodulated image with a two-dimensional fit. Right: lockin phase image with a two-dimensional fit
Inclusions in a honeycomb structure
The high specific stiffness of honeycomb structures is of interest for aerospace applications. The critical part of such structures is where the skin is bonded to the core. Ingress of water or excessive glue may result in lower stiffness or too much weight, respectively. In our investigations we used a honeycomb structure (420x170x13mm³) partially filled with glue. In the middle of the plate a marking is visible.

Fig. 5. Left: One wrapped interferometric phase image of the sequence. Right: Lockin phase image from Fourier transform of the sequence at 0.06 Hz

The two areas of modified stiffness stand out clearly in the lockin phase image (Fig. 5, right), derived from the sequence at the frequency of excitation, while they are hidden in the strong background deformation in the single image of the sequence (Fig. 5, left).
Depth resolved measurements in wood
Wood is a natural material which is important for furniture, where genuine wood is used for the veneer layer and cheap wood for the core sheet. Delaminations caused by glue failure can be detected using OLI. The dimensions of the plate were discussed in [11]. Since that publication, the measurements have not only been improved with respect to the signal to noise ratio, but also extended to a depth resolved measurement of the holes under different veneer thicknesses. At a frequency of 0.09 Hz only the holes under a thin veneer layer are visible. By decreasing the frequency the penetration depth of the thermal waves increases, thus allowing more holes to be detected at lower frequencies. The veneer grain structure also gains contrast at lower frequencies. The applied lockin frequencies are given below the images.
Fig. 6. Depth resolved measurements of holes under different veneer thickness; applied lockin frequencies 0.09 Hz, 0.06 Hz and 0.03 Hz
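The frequency dependence of the probed depth follows the thermal diffusion length µ = (2k/(ωρc))^1/2 introduced in Section 2.2; a small helper to sketch it (the PMMA-like material values in the usage example are assumed for illustration, not from the paper):

```python
import math

def diffusion_length(k, rho, c, f_lockin):
    """Thermal diffusion length µ = sqrt(2k/(ω·ρ·c)), with ω = 2πf.
    k: thermal conductivity [W/(m·K)], rho: density [kg/m^3],
    c: specific heat [J/(kg·K)], f_lockin: modulation frequency [Hz]."""
    omega = 2 * math.pi * f_lockin
    return math.sqrt(2 * k / (omega * rho * c))
```

Since µ scales with 1/√f, halving the lockin frequency increases the probed depth by a factor of √2, which is why lower frequencies reveal holes under thicker veneer layers.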
3.2 Ultrasound activated interferometry
Impacts, delaminations and embedded foil in CFRP
In this example the cooling-down process of an impact damaged CFRP sample (100x150x5mm³) is shown over a period of 13.4 s after excitation for 3 s at an ultrasound input power of 200 W. However, the actually injected power is much lower because of the impedance mismatch between sample and transducer. The plate contains an impact damage in the center, some delaminations at the edges and a laminated foil in the upper right part. The defects (heated selectively due to their enhanced mechanical losses) are clearly visible in the time sequence (Figure 7). The number of fringes is reduced since overall sample heating is avoided. The temporally shifted appearance of the delaminations makes it possible to investigate the nature of the defects and the depth at which they are located.
Fig. 7. Time sequence after ultrasound excitation
4 Conclusion
Two new techniques have been presented which improve interferometric defect detection. Optical lockin interferometry extracts depth-resolved weak structures from a strong background of overall deformation. The mechanism involved in this interferometric tomography is the frequency dependence of the thermal wave depth range. Phase images have the advantage of a larger depth range and insensitivity to variations of surface absorption and scattering. The second technique, ultrasound activated interferometry, responds selectively to the mechanical losses in defects because ultrasound is converted into heat. This is an alternative method to reduce the influence of background deformation and to enhance the probability of detection (POD).
5 References
[1] Cloud, G.: Optical Methods of Engineering Analysis. Cambridge: University of Cambridge, 1995
[2] Salerno, A.; Danesi, S.; Wu, D.; Ritter, S.; Busse, G.: Ultrasonic loss angle with speckle interferometry. 5th International Congress on Sound and Vibration, University of Adelaide, 15.-18.12.1997
[3] Gerhard, H.; Busse, G.: Ultraschallangeregte Verformungsmessung mittels Speckle-Interferometrie. In: DGZfP Berichtsband BB7-CD, 6. Kolloquium Qualitätssicherung durch Werkstoffprüfung, Zwickau, 13.11.-14.11.2001
[4] Ritter, S.; Busse, G.: 3D-electronic speckle-pattern-interferometer (ESPI) in der zerstörungsfreien Werkstoff- und Bauteilprüfung. Deutsche Gesellschaft für Zerstörungsfreie Prüfung e.V. (Jahrestagung 17.-19.05.93, Garmisch-Partenkirchen), pp. 491-498
[5] Ghiglia, D.; Pritt, M.: Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software. Wiley, New York, 1998
[6] Patent DE 4203272-C2, (1992)
[7] Busse, G.; Wu, D.; Karpen, W.: Thermal wave imaging with phase sensitive modulated thermography. In: J. Appl. Phys. Vol. 71 (1992), pp. 3962-3965
[8] Carlomagno, G.; Bernardi, P.: Unsteady thermotopography in non-destructive testing. In: Proc. 3rd Biannual Exchange, St. Louis/USA, 24.-26. August 1976, pp. 33-39
[9] Rosencwaig, A.; Gersho, A.: Theory of the photo-acoustic effect with solids. J. Appl. Phys. (1976), pp. 64-69
[10] Busse, G.: Optoacoustic phase angle measurement for probing a metal. In: Appl. Phys. Lett. Vol. 35 (1979), pp. 759-760
[11] Gerhard, H.; Busse, G.: Use of ultrasound excitation and optical-lockin method for speckle interferometry displacement imaging. In: Green, R.E. Jr.; Djordjevic, B.B.; Hentschel, M.P. (Eds): Nondestructive Characterisation of Materials XI, Springer-Verlag Berlin (2003), pp. 525-534, ISBN: 3-540-40154-7
[12] Gerhard, H.; Busse, G.: Zerstörungsfreie Prüfung mit neuen Interferometrie-Verfahren. In: Materialprüfung, Vol. 45 Nr. 3 (2003), pp. 78-84
[13] Rosencwaig, A.; Busse, G.: High resolution photoacoustic thermal wave microscopy. Appl. Phys. Lett. 36 (1980), pp. 725-727
[14] Opsal, J.; Rosencwaig, A.: Thermal wave depth profiling: Theory. J. Appl. Phys. 53 (1982), pp. 4240-4246
[15] Dillenz, A.; Zweschper, T.; Busse, G.: Elastic wave burst thermography for NDE of subsurface features. In: INSIGHT Vol. 42 No. 12 (2000), pp. 815-817
Testing formed sheet metal parts using fringe projection and evaluation by virtual distortion compensation A. Weckenmann, A. Gabbia, Chair Quality Management and Manufacturing Metrology, University Erlangen-Nuremberg, Naegelsbachstr. 25, 91052 Erlangen, Germany
1 Introduction Nowadays forming technology allows the production of highly sophisticated free-form sheet material components, affording great flexibility to the design and manufacturing processes across a wide range of industries. Due to versatile forming technologies, sheet material parts with complex shapes can be produced at acceptable cost. A very important factor in the production and measurement of these objects is the elastic springback of the material, which corresponds to the linear part of the stress-strain (σ-ε) graph (see Fig. 1). The curve in this graph has a characteristic course depending on the material, but a feature point can be identified up to which an elastic return is obtained (right case); the elastic return is larger for ductile materials and smaller for resilient materials. In cases where the transition between elastic and plastic behaviour is not obvious (left case), the elasticity limit is assumed to be the stress σs which causes a permanent strain of ε = 0.2% (Fig. 1a). In this work only deformations in the elastic region are subject to simulation, i.e. stresses σ < σs.
Fig. 1. Classic stress-strain (σ-ε) curve for a steel material
Hybrid Measurement Technologies
The objects therefore are subject to forces that can modify their shape during movement and assembly, according to their structure. Such deformations require the work piece to be clamped during the measurement process with conventional measuring systems (e.g. tactile coordinate measuring systems) in order to set the work piece in its assembled state. In this state geometrical features can be inspected and compared to the respective tolerances. The conventional measuring systems require an accurate alignment of the clamped work piece. Therefore the inspection process chain consists of six steps (see Fig. 2). These steps are laborious and time consuming, and some actions (e.g. clamping) cannot be automated. With this traditional approach it is not possible to test 100% of the production. Progress in the field of optical coordinate measuring systems has brought up systems robust enough to be used in industrial environments. Systems such as fringe projection allow a fast, parallel and contact-free acquisition of point clouds. The measured data consists of points of the visible surface lying within the measuring range and nearest to the camera. From this a surface model representing the work piece can be extracted, which can be used in Finite Element Method (FEM) simulations of the clamping process. Using a virtual distortion compensation method with a FEM analysis, the inspection process chain can be significantly shortened (e.g. for a car door from one day to a few minutes) and automated. Such a process chain could serve to increase the control over sheet material production processes and decrease inspection costs. With this method 100% of the production can be tested in line.
Fig. 2. Measuring chain with tactile and with optical measurement system
2 Measuring system and measuring chain Optical surface measuring systems allow a fast, parallel and contact-free sampling of the measurand's surface. Using a measuring system with a measuring range that fits the space where the measurand will be located, one obtains data that can be used for the extraction of the work piece from its surroundings and for the virtual distortion compensation. Although several optical measurement principles have been developed for the measurement of surfaces in 3D (e.g. stereo vision, autofocus), fringe projection systems have turned out to be the most successful method on the market. Their success is based on fast, robust and accurate data acquisition combined with high configuration flexibility. In fringe projection systems the triangulation principle is used to calculate 3D coordinates. If a surface is illuminated by a light spot, a displacement Δz in the z direction is translated, via the spot's position, into a displacement Δx in the x direction with respect to the observation system under a calibrated triangulation angle α (see Fig. 3). The spot is used as a marker for depth information.
Fig. 3. Triangulation principle with a single CCD camera
Fringe projection systems use a fringe pattern as a marker. The phase of a sinusoidal intensity fringe pattern serves as a parallel marker. This pattern can be projected by DMD projectors. The illuminated scene is observed by at least one camera under a triangulation angle with respect to the projection's optical axis. The phase difference between the captured picture and a picture captured by measuring a calibrated plane (reference plane) is proportional to the position in the plane's normal direction, according to the triangulation principle. This method is suitable for the measurement of non-reflective surfaces. Reflection of the projected marker leads to outliers and makes an accurate measurement impossible. As the phase has to be determined by looking at a neighbourhood of the sample point, outliers also impact the measurement accuracy of their neighbouring measured points.
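The phase-to-height conversion described above can be sketched numerically. The following is a minimal illustration assuming a four-step phase-shifting sequence and a simplified telecentric geometry with fringe period p on the reference plane and triangulation angle α; the parameter names and values are illustrative, not those of the system described here:

```python
import numpy as np

def phase_from_steps(images):
    """Wrapped phase from four pi/2-shifted fringe images (standard 4-step formula)."""
    I0, I1, I2, I3 = images
    return np.arctan2(I3 - I1, I0 - I2)

def height_from_phase(phi_obj, phi_ref, fringe_period_mm, alpha_rad):
    """Convert the phase difference against a calibrated reference plane
    into height along the plane normal (simple telecentric approximation)."""
    dphi = np.angle(np.exp(1j * (phi_obj - phi_ref)))  # wrap to (-pi, pi]
    return dphi / (2 * np.pi) * fringe_period_mm / np.tan(alpha_rad)
```

Real systems additionally unwrap the phase (e.g. with multi-frequency patterns) and use a full camera-projector calibration instead of the single angle assumed here.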
In the example described, a fringe projection system consisting of two cameras and one Digital Micromirror Device (DMD) projector is used. Its measuring field has a size of 700 mm x 900 mm x 400 mm at a resolution of 0.5 mm in the x-y direction and 0.15 mm in the z direction. The system was designed for the measurement of large sheet metal parts of the sizes used in the car industry. The following information, objects and systems are needed for this work: the work piece, the fringe projection system, a PC with FEM and data processing software, clamping information, a CAD model of the clamped work piece as an STL (STereoLithography) file, and the material properties. The work piece is first measured without a clamping system, and holes and edges are detected in this measured model. Then, with the clamping information (for example the position of holes and edges) and through a FEM analysis (which needs the material properties), the virtually distortion-compensated model (FEM-model) is obtained. Figure 4 shows the measuring and processing chain used for this analysis. The work piece itself is needed only for the measurement, which takes 8-10 seconds; the other steps are executed on a PC. Production does not have to be stopped to test the work piece. For this reason this method is more suitable for an in-line test than a conventional tactile measuring process. The most important steps are described in the following chapters.
Fig. 4. Processing chain
3 Measured data Fringe projection systems provide the measured coordinates (x, y, z) in a so-called point cloud, which is a list of all measured points (see Fig. 5). As no three-dimensional order is contained in a list, geometrical operations require a comparison of all points to find the points' neighbourhood relations.
Fig. 5. Measured point cloud
Fig. 6. Triangle mesh
Although the point cloud contains all measured information, for a complete surface model the space between the points has to be approximated as well. A linear approximation to the surface is a triangle mesh, in which the space between adjacent points is approximated by triangles (see Fig. 6). Methods like Delaunay triangulation allow the automated determination of a unique triangle patch representing the measured surface. In addition to the point cloud (or vertex list), a triangle mesh consists of a triangle list, each entry being a 3-vector containing the corner points' indices in the vertex list. Such a structure provides the point cloud's topology. Using the shared vertices of triangles, the mentioned neighbourhood information can be accessed easily, which increases the numerical efficiency of geometrical calculations. As fringe projection systems calculate up to a few million coordinates per measurement, the data has to be reduced for further analysis. Using the edge collapse algorithm, points in regions with small feature density can be removed iteratively as long as a given geometrical error threshold is not exceeded. Thus the number of sample points can be reduced to speed up the subsequent algorithms without losing geometrical information. The noise contained in the measured coordinates can be removed using adaptive smoothing. Several adaptive smoothing methods have been proposed in the past. In the research carried out, the local sphere assumption proposed in [2] has been used. Such smoothing algorithms preserve geometrical features and remove noise within specific limits. As this measured
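The vertex-list/triangle-list structure and the neighbourhood access it enables can be sketched as follows (a minimal illustration with hypothetical data; in practice the triangle list comes from the Delaunay triangulation step):

```python
import numpy as np
from collections import defaultdict

# Minimal indexed mesh: a vertex array plus a triangle list, as described
# in the text. The coordinates here are illustrative placeholders.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.1],
                     [1.0, 1.0, 0.2]])
triangles = [(0, 1, 2), (1, 3, 2)]  # each entry: three indices into `vertices`

# Build a vertex -> incident-triangle map once; afterwards the neighbours
# of a vertex are found without scanning the whole point cloud.
incident = defaultdict(set)
for t, (a, b, c) in enumerate(triangles):
    for v in (a, b, c):
        incident[v].add(t)

def vertex_neighbours(v):
    """All vertices sharing a triangle with v (its 1-ring)."""
    ring = set()
    for t in incident[v]:
        ring.update(triangles[t])
    ring.discard(v)
    return ring
```

This one-time adjacency construction is what turns the O(n) all-points comparison mentioned above into a constant-time lookup per query.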
data will be used in the later FEM analysis, noise would lead to wrong simulation results. Thus smoothing has to be performed as strongly as possible while still preserving features. The resulting model is subject to further inspection.
4 Virtual distortion compensation The measured, smoothed and thinned triangle mesh is the basis for the FEM model of the work piece (here with 180,000 degrees of freedom). First, the mesh points on the surface have to be translated by half of the metal's thickness in the normal direction to obtain the midsurface. Using e.g. the MARC Mentat pre-processor, each triangle in this mesh can be specified to represent a triangular shell element with a given thickness. The boundary conditions for the FEM problem are found by the above feature extraction. As the fixing process acts with given translations and torques at the holes, these have to be detected on the mesh. From a nominal-actual comparison of the position and orientation of the holes, the resulting translations and torques can be determined. The corresponding nodes in the FEM model are then assigned the calculated translations and torques. Solvers like MARC are used to calculate the shape of the measured object under the given boundary conditions using the updated Lagrange algorithm. For a surface description, the nodes localized on the midsurface have to be shifted back in the normal direction.
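The midsurface construction, i.e. shifting each measured surface point by half the sheet thickness along the local normal, might look like the following sketch (assuming a consistently oriented triangle mesh; this is an illustration, not the MARC Mentat workflow itself):

```python
import numpy as np

def midsurface_offset(vertices, triangles, thickness):
    """Shift measured surface points by half the sheet thickness along
    area-weighted per-vertex normals to approximate the midsurface."""
    normals = np.zeros_like(vertices)
    for a, b, c in triangles:
        # face normal, with magnitude twice the triangle area
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        for v in (a, b, c):
            normals[v] += n
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices - 0.5 * thickness * normals
```

The same normals can be reused after the FEM solution to shift the deformed midsurface nodes back to the outer surface.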
Fig. 7. FEM model with boundary conditions
Fig. 8. Qualitative analysis (FEM-model and CAD-model)
In an experiment, the work piece was measured in a relaxed state and in two clamped states. By means of virtual distortion compensation, the unclamped state was transformed into the clamped states.
A qualitative analysis consists of a colour comparison: the two shapes (CAD-model and FEM-model) are overlaid. Taking the surface of the CAD-model as reference, it is possible to characterize positive or negative deviations based on the visualized colour in the specific area of the work piece under consideration (see Fig. 8).
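Such a deviation colour map rests on a signed point-to-reference distance. A brute-force sketch follows (illustrative only; a production system would use an accelerated nearest-neighbour search and true point-to-triangle distances):

```python
import numpy as np

def signed_deviation(points, ref_points, ref_normals):
    """Signed deviation of compensated points from a CAD reference, taken
    along the normal of the nearest reference point: positive outside the
    reference surface, negative inside."""
    dev = np.empty(len(points))
    for i, p in enumerate(points):
        j = np.argmin(np.sum((ref_points - p) ** 2, axis=1))  # nearest CAD point
        dev[i] = np.dot(p - ref_points[j], ref_normals[j])
    return dev
```

Mapping the resulting values onto a colour scale produces the overlay shown in Fig. 8.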
Fig. 9. Quantitative analysis: clamped (red) and virtually clamped (green) datasets
With a quantitative analysis, a cut through the data (see Fig. 9) shows the success of this method. Figure 10 shows the deviation of the marked points from their nominal positions with and without virtual distortion compensation.
Fig. 10. Deviation of a point from its nominal position with and without virtual distortion compensation
The limits of the method lie in the measurement uncertainty of the measured coordinates (measuring system), the stability and accuracy of the feature extraction algorithms, and the quality of the thickness assumption (there is only a single value for the whole work piece).
5 Conclusion Fringe projection systems are suitable for the fast and contact-free measurement of formed sheet metal (FSM) parts without clamping. The measurement result can be used to extract features of the object like holes or edges. Some of these are relevant for the assembly process, others are subject to further inspection. From the information about the transformation of the assembly features from their actual to their nominal position, virtual distortion compensation can be used to calculate feature parameters of the distortion-compensated shape. Thus the inspection process chain can be shortened and automated at the same time. The current limitation of the method is the measurement uncertainty of the measured coordinates. A further extension of this work would involve a comparison of the obtained results with tactile coordinate measurements.
6 References 1. G. Frankowski, M. Chen, T. Huth: Real-time 3D shape measurement with digital stripe projection by Texas Instruments micromirror devices (DMD). Proceedings of SPIE Vol. 3958, San Jose, USA, 22.-28.1.2000, pp. 90-106. 2. S. Karbacher, G. Häusler: A new approach for modelling and smoothing of scattered 3D data. In: Richard N. Ellson and Joseph H. Nurre (Eds.), Proceedings of SPIE Vol. 3313, San Jose, USA, 24.-30.1.1998, pp. 168-177. 3. Weckenmann, A.; Gall, P.; Hoffmann, J.: Inspection of holes in sheet metal using optical measuring systems. In: Proceedings of the VIth International Science Conference Coordinate Measuring Technique (April 21-24, 2004, Bielsko-Biala, Poland), pp. 339-346. 4. Y. Sun, D. Page, J.K. Paik, A. Koschan, M.A. Abidi: Triangle mesh-based edge detection and its application to surface segmentation and adaptive surface smoothing. In: Proceedings of the IEEE 2002 International Conference on Image Processing (ICIP 2002), September 22-25, 2002, Rochester, New York (USA), pp. III 825-828. 5. Steinke, P.: Finite-Elemente-Methode. Springer, Berlin, Heidelberg, New York 2004. 6. T. Asano, R. Klette, C. Ronse: Geometry, Morphology, and Computational Imaging. Proceedings of the 11th International Workshop on Theoretical Foundations of Computer Vision, Dagstuhl Castle, Germany, April 7-12, 2002. 7. Weckenmann, A.; Gall, P.; Ernst, R.: Virtuell einspannen, Prüfprozess flächiger Leichtbauteile verkürzen. In: Qualität und Zuverlässigkeit QZ 49 (2004) 11, pp. 49-51. 8. Weckenmann, A.; Knauer, M.; Killmaier, T.: Uncertainty of coordinate measurements on sheet metal parts in automotive industry. In: Sheet Metal 1999, Geiger, M.; Kals, H.; Shrivani, B.; Singh, U. (Eds.). Proceedings of the 7th International SheMet, 27.-28.09.1999, Erlangen, pp. 109-116. 9. Weckenmann, A.; Gall, P.; Ströhla, S.; Ernst, R.: Shortening of the inspection process chain by using virtual distortion compensation. In: 8th International Symposium on Measurement and Quality Control in Production (October 12-15, 2004, Erlangen, Germany), VDI-Berichte 1860, pp. 137-143. 10. Weckenmann, A.; Gall, P.; Gabbia, A.: 3D surface coordinate inspection of formed sheet material parts using optical measurement systems and virtual distortion compensation. In: Proceedings of the 8th International Symposium on Laser Metrology, 12-18 February 2005, Merida, Mexico.
Fatigue damage precursor detection and monitoring with laser scanning technique V.B. Markov, B.D. Buckner, S.A. Kupiec, J.C. Earthman* MetroLaser, Inc., 2572 White Road, Irvine, CA 92614, USA *Department of Chemical Engineering and Materials Science, University of California, Irvine, CA 92697, USA
1 Introduction Over the last decade, investigations into fatigue damage precursors have established that metal components subjected to cyclic stress develop surface-evident defects such as microcracks and slip bands [1-4]. These defects evolve with the development of fatigue. Detection of such damage precursors on a component and examination of their evolution can thus provide information on the state of the critical parts and prognostics of their catastrophic failure. Evolving surface defects, although of a different nature, can similarly be observed on some materials when subjected to thermal cycling fatigue.
2 Technique The damage detection technique we use is to scan a focused laser beam over the component surface and then detect the scattered light intensity (Fig.1). The scattering signal variation as the beam sweeps over the microcracks provides information on the specimen surface. Additionally, the microscale surface texture (roughness) affects both the mean signal level and (under coherent illumination) the statistical properties of the speckle. The areal surface microcrack density tends to increase in characteristic patterns as fatigue progresses in the component, and knowledge of this pattern, combined with the measured crack density from the scanning technique, can provide a means of gauging the fatigue state of the component. Surface micro-roughness evolution can be more complex, but it does yield some information as well.
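Counting scattering-signal peaks along a scan line, the quantity later compared with microscopic defect counts, can be sketched as a simple threshold-crossing count (an illustration only, not the authors' hardware-based feature extraction):

```python
import numpy as np

def count_scattering_peaks(signal, threshold):
    """Count isolated peaks in the scattered-intensity trace recorded while
    the beam scans the surface: each rising crossing of the threshold is
    treated as one candidate defect."""
    above = signal > threshold
    # rising edges: samples where the trace first exceeds the threshold
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return len(rising)
```

Dividing the peak count by the scanned length then yields the defect density used to gauge the fatigue state.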
Fig. 1. Illustration of light scattering from defects on a fatigued surface.
With this technique, we have investigated fatigue damage precursors on the surfaces of nickel-base superalloy turbine components under low-cycle fatigue conditions. The scanning approach allows selective exploration of the information space (sometimes with hardware-based feature extraction), requiring a much lower data throughput rate than an image-based technique would. As a result, the present technique is capable of scanning speeds substantially greater than those achieved with image processing methods. Specially designed Waspaloy specimens and sections of turbine rotors were tested using a servo-hydraulic MTS machine at ambient temperature under load control conditions, as well as at elevated temperature. The fatigue damage was monitored by scanning a laser beam along the specimen in situ and during periodic interruptions of the cyclic loading. Acetate replicas of the gage section surface were also made to examine the surface morphology using SEM. Comparisons of the results demonstrate that a rapid rise in the mean defect frequency corresponds to the emergence of surface relief features that follow the grain boundaries intersecting the surface in the areas of greatest stress. This surface relief can be attributed to relatively soft precipitate-free zones along the grain boundaries that preferentially deform under fatigue loading conditions, leading to the formation of microcracks. Fig. 2 shows a comparison of the visual count of slip-band groups and deformed grain boundaries with the scattering peak count along a Waspaloy fatigue coupon, showing that regions with high fatigue defect densities also have high
densities of scattering signal peaks, demonstrating that this technique provides a good estimate of surface fatigue damage.
Fig. 2. Comparison of scanned defect counts and microscopically sampled defect counts (in a 0.3 mm² area) at the same positions on the Waspaloy sample
Measurements of scattering peaks over the fatigue life have been performed as well (Fig. 3). These measurements show a general trend toward increasing surface defect density, though a few samples show anomalous variations, probably due to surface contamination, which is one of the major hurdles to practical implementation of the technology. This technology is being developed for both laboratory and field use, along with approaches to utilizing compact beam-scanning devices for in-situ structural health monitoring in several areas.
Fig. 3. Mean defect frequency on different Waspaloy samples with an increasing number of fatigue cycles
3 Acknowledgments Portions of the work reported here were supported under U.S. Army Aviation Applied Technology Directorate, RDECOM, Contract DAAH10-02C-0007 and U.S. Army Aviation & Missile Command (DARPA), Contract DAAH01-02-C-R192. The information contained herein does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
4 References 1. Schmidt, P, Earthman, JC (1995) Development of a scanning laser crack detection technique for corrosion fatigue testing of fine wire. J. Mater. Res. 10:372 2. Chou, KJC, Earthman, JC (1997) Characterization of low-cycle fatigue damage in Inconel 718 by laser light scattering. J. Mater. Res. 12:2048-2056 3. Earthman, JC, Angeles, J, Markov, V, Trolinger, J, Moffatt, J (2004) Scattered light scanning for fatigue damage precursor detection on turbine components. Materials Evaluation 62:460-465 4. Lee, C, Chao, YJ, Sutton, MA, Peters, WH, Ranson, WF (1989) Determination of plastic strains at notches by image-processing methods. Experimental Mechanics 29:214-220
Analysis of localization of strains by ESPI in equibiaxial loading (bulge test) of copper sheet metals Guillaume Montay1, Ignacio Lira2, Marie Tourneix1, Bruno Guelorget1, Manuel François1 and Cristián Vial2 1 Université de Technologie de Troyes, Laboratoire des Systèmes Mécaniques et d'Ingénierie Simultanée (LASMIS FRE CNRS 2719), 12 rue Marie Curie, BP 2060, 10010 Troyes, France. 2 Pontificia Universidad Católica de Chile, Department of Mechanical and Metallurgical Engineering, Vicuña Mackenna 4860, Santiago, Chile.
1 Introduction The problem of strain localization is important in sheet metal forming, as it determines the forming limit diagram of the material. To analyze this process, engineers use a variety of tests. One of them is the bulge test. In it, an initially flat specimen is placed between a matrix and a blank holder, and hydraulic pressure is applied on one of its surfaces. An approximately equi-biaxial strain loading path is obtained [1,2]. In this paper we report on the application of electronic speckle-pattern interferometry (ESPI) to determine strain localization in the bulge test. A video sequence of images was captured and stored. The video allowed a posteriori analysis of the test. By subtracting pairs of images, fringes were obtained at load steps close to fracture. In this way, the progress of local strain rate at various positions on the apex of the dome was followed. The stages of diffuse and localized necking [3] were clearly seen.
2 Experimental procedure Experiments were performed on cold rolled copper plates, 0.8 mm thick, annealed at 400 °C. Pressure was applied with a tensile testing machine through a hydraulic jack. Oil flow was 105 mm³/s, except near the end of the test, where it was changed manually to 52.5 mm³/s to follow the localization stage more precisely. A computer-connected pick-up sensor was used to monitor pressure. After reaching the maximum, a short period of nearly constant pressure ensued, followed by a slow decrease until fracture. Total strains between the initial flat state and the current state were found from white light images of a square grid impressed on the specimen. The grid sizes in the x and y directions, Lx and Ly, were measured at different positions and at different load stages. This information (in pixels) was transformed into millimetres with the magnification factor of the imaging system. The strains were computed as εxx = ln(Lx/L0) and εyy = ln(Ly/L0), where L0 = 5 mm was the initial grid size. The strain in the third direction was obtained under the hypothesis of incompressibility, neglecting elastic strain, giving εzz = −(εxx + εyy). Because of the height variation, the magnification factor changed during the test. This change was followed by monitoring the image of a piece of graph paper glued onto the specimen near its apex.
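The strain evaluation from the measured grid sizes can be written compactly, using the relations above with L0 = 5 mm as in the text:

```python
import numpy as np

L0 = 5.0  # initial grid size in mm

def strains(Lx_mm, Ly_mm):
    """Logarithmic (true) in-plane strains from the measured grid sizes,
    plus the thickness strain obtained from incompressibility."""
    exx = np.log(Lx_mm / L0)
    eyy = np.log(Ly_mm / L0)
    ezz = -(exx + eyy)  # volume conservation, elastic strain neglected
    return exx, eyy, ezz
```

The final sheet thickness then follows as t = t0 · exp(εzz), which is how the 130% thickness strain is converted to a final thickness in Section 3.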
Fig. 1. Example of a fringe pattern depicting eleven bright fringes. Fringes are contour lines of equal deformation, about 1 µm per fringe order
An interferometer with in-plane sensitivity in the y direction was placed above the specimen to measure deformation due to bulging. An expanded He-Ne laser beam was divided into two by a beam splitter. The two beams impinged from opposite sides onto the surface. They produced speckle patterns that were captured and stored at a rate of four pictures per second. Electronic fringes were obtained by subtracting pairs of images. From one fringe to the next, the displacement in the y direction between the two images was S = λ/(2 sin α), where α is the incidence angle. We used α = 18°, giving a sensitivity of about 1 µm per fringe. However, because of the change in height, the incidence angle of the laser beams had to be readjusted several times to maintain constant sensitivity. Figure 1 shows one of the fringed images; it corresponds to the central part of the dome. On this image, three parallel lines were drawn along the y direction; the one in the centre was close to the fracture zone. Fringe positions on these lines were measured with image processing software. The strain increment along each line was obtained as εyy,inc = NS/L, where L is the length of the line in millimetres and N is the number of fringes that cross the line. The conversion from distance in pixels to length was carried out as explained above. Dividing the strain increment by the time interval Δt between the two images gave the strain rate. In practice, Δt was about 2 seconds.
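The sensitivity and strain-rate relations can be checked numerically (assuming a He-Ne wavelength of 632.8 nm, as implied by the laser used; the function and variable names are illustrative):

```python
import numpy as np

lam_um = 0.6328          # He-Ne wavelength in micrometres (assumed)
alpha = np.radians(18)   # incidence angle used in the experiment

# in-plane displacement per fringe order: S = lambda / (2 sin(alpha)) ~ 1 um
S_um = lam_um / (2 * np.sin(alpha))

def strain_rate(n_fringes, line_length_mm, dt_s):
    """Strain increment N*S/L between two images, divided by their time lag."""
    eps_inc = n_fringes * S_um * 1e-3 / line_length_mm
    return eps_inc / dt_s
```

With the stated geometry this gives a sensitivity of about 1.02 µm per fringe, consistent with the "about 1 µm" quoted in the text.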
3 Results As expected, total strains εxx and εyy were about equal, indicating isotropy. In the z direction, the maximum strain was about 130%. This value corresponds to a final thickness of 222 µm, in agreement with the thickness measured after the crack, 238 µm.
Fig. 2. Strain rate as a function of average strain for the three lines in figure 1
Figure 2 depicts the strain rate as a function of average strain for the three lines in figure 1. The plots start from an average strain of 27%, below which the strain rate was almost the same for the three lines. After that, the strain rate at the centre line increased a little faster. Strong localization started at ε ≈ 59%. The maximum strain rate was obtained at the central line, close to the final fracture. The two regions in figure 2 correspond to those of diffuse and localized necking.
4 Conclusions In this paper, an original application of ESPI in materials engineering has been described. The technique was used to analyze a bulge test in order to study strain localization by following the progress of the strain rate. The results show that ESPI clearly detects the two stages of localization, namely the diffuse and localized necks. Using this technique, forming limit diagrams can thus be established accurately.
Acknowledgments The support of Conicyt (Chile) through Fondecyt 1030399 and ECOS/CONICYT C01E04 research grants is gratefully acknowledged.
References 1. Atkinson M (1997) Accurate determination of biaxial stress-strain relationships from hydraulic bulging tests of sheet metals. International Journal of Mechanical Sciences 39:761-769 2. Gutscher G, Wu HC, Ngaile G, Altan T (2004) Determination of flow stress for sheet metal forming using the viscous pressure bulge (VPB) test. Journal of Materials Processing Technology 146:1-7 3. Rees DWA (1996) Sheet orientation and forming limits under diffuse necking. Applied Mathematical Modelling 20: 624-635
Laser vibrometry using digital Fresnel holography Pascal Picart, Julien Leval, Jean Pierre Boileau, Jean Claude Pascal Laboratoire d’Acoustique de l’Université du Maine, Avenue Olivier Messiaen, 72085 LE MANS Cedex 9, France ; email : [email protected]
1 Introduction Recently, it has been demonstrated that digital Fresnel holography offers new opportunities for metrological applications, examples of which include object deformation [1], surface shape measurement [2], phase-contrast microscopic imaging [3] and twin-sensitivity measurements [4]. Nevertheless, retrieving vibration amplitude and phase remains a challenge for full field optical metrology, and it is necessary for some applications. This paper presents a full field vibrometer using digital Fresnel holography. A simple three-step algorithm is presented, so the extraction of vibration amplitude and phase does not require many images, and the information is full field, leading to direct full field vibrometry. Experimental results are presented; in particular, the mean quadratic velocity is estimated using the full field digital holographic vibrometer.
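As a rough illustration of the numerical reconstruction step used throughout the paper, a discrete Fresnel transform can be implemented with a single FFT. This is a sketch with illustrative parameter names; constant amplitude prefactors and the reference-wave term are omitted:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d0, pitch):
    """Reconstruct a digital hologram at distance d0 by the discrete Fresnel
    transform: multiply by a quadratic phase factor, then take one FFT
    (single-FFT formulation; amplitude normalization omitted)."""
    N, M = hologram.shape
    y = (np.arange(N) - N / 2) * pitch
    x = (np.arange(M) - M / 2) * pitch
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * d0))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))
```

The off-axis reference wave of spatial frequencies {u0, v0} used below shifts the +1 order away from the zero order in this reconstructed field, which is what allows it to be isolated.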
2 Theory A rough object submitted to a harmonic excitation and illuminated by a coherent laser beam induces a spatio-temporal optical phase modulation. Thus, at any time t, the surface of the illuminated object diffuses an optical wave written as
A(x, y, t) = A0(x, y) exp[iψ0(x, y)] × exp[iΔφm(x, y) sin(ω0t + φ0(x, y))]   (1)
where Δφm is the maximum phase-modulation amplitude at pulsation ω0 = 2π/T0, and φ0 is the phase of the mechanical vibration. In equation (1), A0 is the modulus of the diffused wave and ψ0 is a random phase uniformly distributed over the interval [−π, +π]. At any distance d0 the diffracted object field can be mixed with a reference wave of spatial frequencies {u0, v0}. Interference between the diffused wave and the plane reference wave R(x′, y′) = aR exp[2iπ(u0x′ + v0y′)] generates an instantaneous hologram. The instantaneous hologram is time-integrated by the solid state sensor during an exposure time noted T. The digital reconstruction is computed by a discrete two-dimensional Fresnel transform of the interference pattern. After digital reconstruction at distance d0, the reconstructed +1 order is:
AR+1(x, y, d0, tj) ≈ NM λ⁴d0⁴ R*(x, y) exp[iπλd0(u0² + v0²)] × ∫[tj, tj+T] A(x − λu0d0, y − λv0d0, t) dt   (2)
where {M, N} is the number of pixels. The +1 order is then localized at coordinates (λu0d0, λv0d0) in the image plane. The temporal integration can be evaluated for the harmonic excitation expressed in equation (1). The phase of the digitally reconstructed object can then be calculated; it is found to be
arg{AR+1(x, y, d0, tj)} = 2π(u0x + v0y) + πλd0(u0² + v0²) + ψ0(x, y) + Δφj(x, y)
+ arctan{ qj sin[Δφj(x, y) + Θj(x, y)] / (1 + qj cos[Δφj(x, y) + Θj(x, y)]) }   (3)

where

qj exp(iΘj) = Σ(k = −∞ … +∞) P(kπT/T0) Jk(Δφm) exp[ik(ω0tj + φ0 + ω0T/2)]   (4)

and

P(x) = Σ(n = 1 … ∞) (−1)^(n+1) x^(2n−2)/(2n−1)! = sin(x)/x   (5)
In the ideal case where the exposure time T is infinitely small compared to the vibration period T0, the phase of the reconstructed object contains three unknowns, so only three equations are necessary to extract the amplitude and phase of the vibration. The set of three equations with three unknowns can be obtained by applying a π/2 phase shift between excitation and recording. Under these considerations, with j = 1, 2, 3, we get
arg{A_R^{+1}(t_j)} = ψ_j = ψ'0 + Δφ_m sin[ω0t_j + φ0 + (j − 1)π/2]    (6)
It is straightforward to demonstrate that the amplitude and phase of the vibration may be extracted with the two following equations:
Δφ_m(x, y) = ½ [Δψ13²(x, y) + (Δψ23(x, y) + Δψ21(x, y))²]^{1/2}    (7)

φ0(x, y) = arctan[ Δψ13(x, y) / (Δψ23(x, y) + Δψ21(x, y)) ]    (8)

where Δψ_kl = ψ_k − ψ_l.
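Equations (7) and (8) translate directly into array operations on the three unwrapped phase maps; the sketch below is illustrative (a synthetic scalar example stands in for real maps) and assumes the maps are already unwrapped:

```python
import numpy as np

def vibration_from_three_maps(psi1, psi2, psi3):
    """Vibration amplitude and phase from three phase maps recorded
    with successive pi/2 shifts, following equations (7) and (8)."""
    dpsi13 = psi1 - psi3
    dpsi23 = psi2 - psi3
    dpsi21 = psi2 - psi1
    amp = 0.5 * np.sqrt(dpsi13**2 + (dpsi23 + dpsi21)**2)
    phase = np.arctan2(dpsi13, dpsi23 + dpsi21)
    return amp, phase

# Synthetic check: maps built from a known vibration; the static term
# psi0 of equation (6) cancels in the differences.
psi0, amp_true, phi_true = 1.2, 3.0, 0.7
psi1, psi2, psi3 = (psi0 + amp_true * np.sin(phi_true + (j - 1) * np.pi / 2)
                    for j in (1, 2, 3))
amp, phase = vibration_from_three_maps(psi1, psi2, psi3)
# amp ~ amp_true, phase ~ phi_true
```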
3 Influence of pulse width

In practical situations, it is common for T not to be negligibly small compared with T0. Because of the non-zero cyclic ratio Rc = T/T0, the phase measurement Δψ_kl includes error terms dΔψ_kl = dψ_k − dψ_l. The variation terms dψ_j are extracted from equation (3) and may be approximated as [5]
dψ_j ≈ q_j sin(Δφ_j + Θ_j) / [1 + q_j cos(Δφ_j + Θ_j)]    (9)
Considering a linear approximation of the phase error for the vibration amplitude and phase, it is found that

dΔφ_m = α1¹ + Σ_{k=1}^{+∞} (α1^{4k−1} + α1^{4k+1}) cos[4k(ω0t1 + φ0 + ω0T/2)]    (10)
dφ0 = (1/Δφ_m) × Σ_{k=0}^{+∞} (α1^{4k+3} + α1^{4k+5}) sin[(4k + 4)(ω0t1 + φ0 + ω0T/2)]    (11)
where α1^k is the kth coefficient of the Fourier expansion of dψ1. These errors have a period equal to one quarter of the vibration period. In order to quantify the error it is useful to define a criterion that takes these characteristics into account. We have chosen criteria corresponding to the mean power of expressions (10) and (11). Figure 1 presents a 3D plot of the criterion for the Δφ_m measurement as a function of Δφ_m and Rc. This result shows the highly nonlinear behavior of the distortion with respect to vibration amplitude and cyclic ratio. As a conclusion, the cyclic ratio Rc must be smaller than 1/Δφ_m to avoid distortion.
Fig. 1. 3D plot of the criterion for amplitude distortion (%) vs Rc and Δφ_m
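The exposure-time distortion can also be reproduced with a direct simulation: time-integrate the object wave of equation (1) over a finite window T = Rc·T0 and compare the phase of the result with the instantaneous phase at the window centre. This is an illustrative sketch with assumed parameters, not the paper's full processing chain; it shows the error growing rapidly with the cyclic ratio:

```python
import numpy as np

def integrated_phase(dphi_m, Rc, t_start, phi0=0.3, nsteps=4000):
    """Phase of exp(i*dphi_m*sin(2*pi*t + phi0)) averaged over an
    exposure window of width Rc*T0 starting at t_start (T0 = 1)."""
    t = np.linspace(t_start, t_start + Rc, nsteps)
    field = np.exp(1j * dphi_m * np.sin(2 * np.pi * t + phi0))
    return np.angle(field.mean())

dphi_m, phi0, t0 = 2.0, 0.3, 0.05

def error(Rc):
    # reference: instantaneous phase at the window centre, cf. eq. (6)
    true = dphi_m * np.sin(2 * np.pi * (t0 + Rc / 2) + phi0)
    return abs(integrated_phase(dphi_m, Rc, t0, phi0) - true)

err_small, err_large = error(1 / 50), error(1 / 5)
# the distortion grows quickly with the cyclic ratio Rc
```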
4 Experimental set-up

The digital holographic set-up is described in figure 2. The object under sinusoidal excitation is a loudspeaker 60 mm in diameter, placed at d0 = 1400 mm from the detector area. The off-axis holographic recording is carried out using lens L2, which is displaced off the afocal axis by means of two micrometric transducers [4]. The detector is a 12-bit digital CCD with (M×N) = (1024×1360) pixels of pitch px = py = 4.65 µm. Digital reconstruction was performed with K = L = 2048 data points. The synchronisation between acquisition and excitation is performed by use of a stroboscopic set-up, based on a mechanical shutter with a rotating aperture [5]. Considering the mechanical and electronic devices, the cyclic ratio of the stroboscope is found to be Rc ≈ 1/18, so the stroboscopic set-up is designed for vibration amplitudes smaller than 18 rad in order to keep the amplitude distortion below 0.15 %.
Fig. 2. Experimental set-up
5 Experimental results

The loudspeaker was excited in sinusoidal regime from 1.36 kHz to 4.32 kHz in steps of 40 Hz. Figure 3 shows the phase maps obtained at the different steps of the process for a frequency of 2 kHz. Figure 4 shows the vibration amplitude and phase at 2 kHz extracted from the three π/2-phase-shifted phase maps of figure 3, according to algorithms (7) and (8). In figure 4, the maximum amplitude is found to be 16.1 rad, so the distortion on Δφ_m is less than 0.25 %. The region of interest in each map contains about 240,000 data points. Evaluation of the amplitude and phase of the loudspeaker membrane determines its velocity along the z direction. A criterion usually used for quantifying the vibration is the mean quadratic velocity:
⟨v_z²⟩ = (1/(S·T0)) ∫∫_S ∫_0^{T0} v_z²(x, y, t) dt dx dy    (12)
where S is the surface of the vibrating object and the velocity is given by
v_z(x, y, t) = [λf0 / (1 + cosθ)] Δφ_m(x, y) cos(ω0t + φ0)    (13)
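Because the time average of cos² over one period is 1/2, combining equations (12) and (13) reduces the mean quadratic velocity to a spatial average of Δφ_m². A discrete sketch, with an assumed wavelength and uniform sampling of the surface (illustrative names):

```python
import numpy as np

def mean_quadratic_velocity(dphi_m, wavelength, f0, theta):
    """Mean quadratic velocity <v_z^2> from the vibration-amplitude map,
    using equations (12)-(13); the time integral of cos^2 gives 1/2."""
    scale = wavelength * f0 / (1.0 + np.cos(theta))
    return 0.5 * scale**2 * np.mean(dphi_m**2)

# Uniform 16.1 rad map at 2 kHz; lambda = 532 nm and theta = 0 are
# assumptions for illustration only.
dphi = np.full((64, 64), 16.1)
v2 = mean_quadratic_velocity(dphi, 532e-9, 2000.0, 0.0)
```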
Fig. 3. Phase maps for vibration amplitude and phase reconstruction at 2 kHz
Fig. 4. Vibration amplitude (left) and vibration phase (right) for a frequency of 2 kHz
Figure 5 shows the mean quadratic velocity of the loudspeaker extracted from the set of data, the frequency varying from 1.36 kHz to 4.32 kHz in steps of 40 Hz.
Fig. 5. Mean quadratic velocity of the membrane of the loudspeaker
In order to validate the stroboscopic measurement, figure 6 shows a comparison between a time-averaged measurement [6] and a time-average computed from the stroboscopic result.
Fig. 6. Comparison between time-averaging and stroboscopic measurement (left: experimental; right: computation from the stroboscopic result)
The experimental Bessel fringes and the computed ones are seen to be in close agreement. This confirms the suitability of the set-up for accurate full-field amplitude and phase vibration measurements.
6 Conclusion

This paper has discussed a full-field vibrometer based on digital Fresnel holography. The influence of the pulse width was studied and analytical expressions for the amplitude and phase distortions were proposed. As a result, it was demonstrated that amplitude and phase extraction are possible with a cyclic ratio of about 1/Δφ_m if the maximum vibration amplitude does not exceed Δφ_m. Thus, the use of a moderate cyclic ratio is possible in vibration analysis. Experimental results were presented and exhibit the relevance of digital Fresnel holography in vibrometry. The comparison between time-averaging and stroboscopic measurement confirms the accuracy of the method.
7 References

1. Pedrini, G, Tiziani, HJ (1995) Digital double pulse holographic interferometry using Fresnel and image plane holograms. Measurement 18:251-260
2. Wagner, C, Seebacher, S, Osten, W, Jüptner, W (1999) Digital recording and numerical reconstruction of lensless Fourier holograms in optical metrology. Applied Optics 38:4812-4820
3. Dubois, F, Joannes, L, Legros, JC (1999) Improved three-dimensional imaging with a digital holography microscope with a source of partial spatial coherence. Applied Optics 38:7085-7094
4. Picart, P, Moisson, E, Mounier, D (2003) Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms. Applied Optics 42:1947-1957
5. Leval, J, Picart, P, Boileau, JP, Pascal, JC (2005) Full field vibrometry with digital Fresnel holography. Applied Optics (to be published)
6. Picart, P, Leval, J, Mounier, D, Gougeon, S (2003) Time-averaged digital holography. Optics Letters 28:1900-1902
2D laser vibrometry by use of digital holographic spatial multiplexing

Pascal Picart, Julien Leval, Michel Grill, Jean Pierre Boileau, Jean Claude Pascal
Laboratoire d'Acoustique de l'Université du Maine, Avenue Olivier Messiaen, 72085 LE MANS Cedex 9, France; email: [email protected]
1 Introduction

In this paper we present opportunities for full-field two-dimensional vibrometry. We demonstrate that it is possible to simultaneously encode and decode the 2D amplitude and phase of harmonic mechanical vibrations. The process allows determination of the in-plane and out-of-plane components of the vibration of a sinusoidally excited object. The principle is based on spatial multiplexing in digital Fresnel holography [1].
2 Spatial multiplexing of digital holograms

Spatial multiplexing is based on the incoherent addition of two views of the object of interest. For this, we used a twin Mach-Zehnder interferometer. The incoherent summation of holograms is realized by orthogonal polarizations along the two interferometers. The spatial frequencies of the reference waves are adjusted such that there is no overlap between the five diffracted orders when the field is reconstructed [1]. Figure 1 presents the set-up. De-multiplexing is the step that consists in determining the point-to-point relation between the two holograms; it is achieved according to [1]. Digital reconstruction is performed following diffraction theory under the Fresnel approximation [2]. When the rough object is submitted to a harmonic excitation it undergoes a spatio-temporal displacement vector which can be written as U(t) = u_x sin(ω0t + φ_x)i + u_y sin(ω0t + φ_y)j + u_z sin(ω0t + φ_z)k, where {u_x, u_y, u_z} are the maximum amplitudes at pulsation ω0 = 2π/T0, and {φ_x, φ_y, φ_z} are the phases of the mechanical vibration along the three directions (see reference axes of figure 1).
Fig. 1. Twin Mach-Zehnder digital holographic interferometer for 2D vibrometry
Because of the sensitivity of the set-up, at any time t_j at which we perform a recording followed by a reconstruction, the phase of the ith (i = 1, 2) reconstructed object is given by
ψ_j^i = ψ0 ± Δφ_x sinθ sin(ω0t_j + φ_x) + Δφ_z(1 + cosθ) sin(ω0t_j + φ_z)    (1)
where ψ0 is a random phase mainly due to the roughness of the object and Δφ_{x,z} = 2πu_{x,z}/λ.
3 2D amplitude and phase retrieval

If we compute the phases ψ_j^i at three synchronous times t1, t2, t3 such that ω0(t2 − t1) = π/2 and ω0(t3 − t1) = π, we can extract the quantities Δψ_kl^i = ψ_k^i − ψ_l^i. Synchronisation can be realized with a stroboscopic set-up [3]. The quantities Δψ13^i, Δψ21^i and Δψ23^i carry information on the in-plane and out-of-plane vibrations. Note that these quantities are determined modulo 2π and must be unwrapped. Determination of the pure in-plane and out-of-plane phase terms is performed by computing the continuous quantities Δψ_kl_x = Δψ_kl^1 − Δψ_kl^2 and Δψ_kl_z = Δψ_kl^1 + Δψ_kl^2. With these quantities it is now possible to extract the amplitude of the vibration according to the following algorithm [3]
Hybrid Measurement Technologies
565
Δφ_A = ½ [Δψ13_A² + (Δψ23_A + Δψ21_A)²]^{1/2}    (2)
where A = x or A = z. For the vibration phase, the following relation holds
φ_A = arctan[ Δψ13_A / (Δψ23_A + Δψ21_A) ]    (3)
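The separation into in-plane and out-of-plane terms followed by equations (2) and (3) can be sketched as below. This is an illustrative implementation: the dictionaries of unwrapped phase differences are assumed given, and the returned amplitudes still carry the sensitivity factors (2 sinθ for x, 2(1 + cosθ) for z under the ± convention of equation (1)), which have to be divided out to obtain Δφ_x and Δφ_z.

```python
import numpy as np

def separate_and_extract(d1, d2):
    """d1, d2: phase differences {'13', '23', '21'} from interferometers
    1 and 2.  Their difference isolates the in-plane (x) terms and their
    sum the out-of-plane (z) terms; eqs. (2)-(3) then give amplitude
    and phase for each component."""
    result = {}
    for axis, sign in (('x', -1.0), ('z', +1.0)):
        d13 = d1['13'] + sign * d2['13']
        d23 = d1['23'] + sign * d2['23']
        d21 = d1['21'] + sign * d2['21']
        amp = 0.5 * np.sqrt(d13**2 + (d23 + d21)**2)
        result[axis] = (amp, np.arctan2(d13, d23 + d21))
    return result
```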
4 Experimental results

We applied the set-up and the measurement principle to an industrial automotive joint made of elastomer material. The inspected zone on the piece is 58.4 mm by 15.4 mm, at a distance of 1348 mm from the CCD. Figure 2 shows the multiplexed holograms of the piece.
Fig. 2. Reconstructed multiplexed holograms of the car joint piece
Figure 3 shows the in-plane and out-of-plane vibrations for a frequency of 690 Hz. The determination of amplitude and phase allows the computation of the mean quadratic velocity along the two sensitivities. It is given by
⟨v²⟩ = (1/(S·T0)) ∫∫_S ∫_0^{T0} v²(x, y, t) dt dx dy    (4)
where S is the surface of the object. Figure 4 shows the mean quadratic velocities extracted from the two sets of measurements for an excitation frequency varying from 200 Hz to 1000 Hz.
Fig. 3. Vibration amplitudes and phases along i and k directions at a frequency of 690 Hz
Fig. 4. Mean quadratic velocities along i and k directions
5 References

1. Picart, P, Moisson, E, Mounier, D (2003) Twin sensitivity measurement by spatial multiplexing of digitally recorded holograms. Applied Optics 42:1947-1957
2. Kreis, Th, Adams, M, Jüptner, W (1997) Methods of digital holography: a comparison. Proceedings SPIE 3098:224-233
3. Leval, J, Picart, P, Boileau, JP, Pascal, JC (2005) Full field vibrometry with digital Fresnel holography. Applied Optics (to be published)
Fatigue Detection of Fibre-Reinforced Composite Materials by Fringe Projection and Speckle Shear Interferometry

Ventseslav Sainov, Jana Harizanova, Sonja Ossikovska
Bulgarian Academy of Sciences, CLOSPI-BAS, Acad. G. Bonchev Str., bl. 101, PO Box 95, 1113 Sofia, Bulgaria
Wim Van Paepegem, Joris Degrieck, Pierre Boone
University of Gent, Belgium
1 Introduction

Fatigue detection by fringe projection and speckle shear interferometry of different types of fibre-reinforced composite materials subjected to cyclic loading is presented. As the sensitivity of the applied methods can be varied over broad limits in comparison with other interferometric techniques, the inspection of the loaded specimens is realized over a wide dynamic range. A three-point bending test has been applied at two static loadings (pure tensile at 1 kN and 5 kN), with two consecutive displacements along the z (normal) direction, 0.1 mm and 1.5 mm, for both loading conditions. The results are presented as the difference ratio Δz/Δy, obtained by subtracting phase maps modulo 2π corresponding to the shape differences after loading the specimens. Testing of a composite vessel subjected to cyclic loading (pressure) was also performed. Two-spacing phase-stepping fringe projection interferometry was applied for absolute coordinate measurement of the object. Derivatives of the in-plane and out-of-plane components of the displacement vector were obtained by lateral speckle shear interferometry. The experimentally obtained results for non-pre-loaded, loaded and cycled specimens are presented together with the results from the pure tensile tests and from the cyclic tests. The selected measurement methodology follows the tendency in the development of optical methods for remote measurement and non-destructive testing [1].
2 Testing of fibre-reinforced materials before and after cyclic loading

The tested samples are plates with dimensions 200 × 30 × 3 mm, cut from a unidirectional glass/epoxy fibre-reinforced composite with eight layers. All layers are reinforced with unidirectional glass fibres. The stacking sequence used is [+45/−45]_2s. The results from the usual tensile and cyclic tests are shown in Figs. 1 and 2, with loading along the y-axis.
Fig. 1. Tensile test of specimen G3 ([+45/−45]_2s UD glass fibres): load [kN] vs displacement [mm]
Fig. 2. Cyclic test of specimen G4 ([+45/−45]_2s UD glass fibres): load [kN] vs displacement [mm]
Optical measurements are performed on the same samples by a three-point bending test. The results for relative coordinate measurement by fringe projection interferometry of cycled and non-cycled samples at different loadings (1 and 5 kN) and 1.5 mm normal displacement are shown in Fig. 3.
Fig. 3. Three-point bending test of cycled and non-cycled samples: a) normal displacement of non-cycled sample G3; b) normal displacement of cycled sample G4
A He-Ne laser (λ = 632.8 nm) is used as the light source in a Mach-Zehnder interferometer for generation of the projected fringes with a spacing of 0.5 mm, an illumination angle of 70 deg, and a distance of 2 m, as described in [3]. All measurements are performed under static conditions. A five-step algorithm is used for phase calculation. First, five frames with consecutive π/2 phase shifts of the projected fringes onto the loaded sample (the sample's surface being the reference plane) are recorded.

The results of the differential three-point bending test of non-cycled and cycled samples are presented in Fig. 4. Two different loadings (1 and 5 kN) with two different normal displacements (0.1 and 1.5 mm) for each loading state have been consecutively applied. The results, presented as Δz/Δy, are calculated by subtracting the measured values of the normal displacements z(x, y) for the two loading states, 1 kN and 5 kN, obtained at a given normal displacement z(0, 0) = 0.1 or 1.5 mm respectively in the center of the specimen:

Δz/Δy = (Δz_5kN − Δz_1kN)/Δy    (1)

For the central part (y = 0) of the samples, Δy is about 0.3 mm. As Δy → 0, Δz/Δy → dz/dy, which is more informative for the tested materials.
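The text does not state which five-frame formula is used; a common choice for five frames with π/2 steps is a Hariharan-type algorithm, sketched here with synthetic fringe intensities (an assumption for illustration, not the authors' stated method):

```python
import numpy as np

def five_step_phase(i1, i2, i3, i4, i5):
    """Phase from five frames with pi/2 steps (Hariharan-type formula
    for frames I_k = a + b*cos(phi + (k-1)*pi/2)):
    tan(phi) = 2*(I4 - I2) / (I1 + I5 - 2*I3)."""
    return np.arctan2(2.0 * (i4 - i2), i1 + i5 - 2.0 * i3)

# Synthetic fringes with a known phase
phi_true = 1.2
frames = [1.0 + 0.8 * np.cos(phi_true + k * np.pi / 2) for k in range(5)]
phi = five_step_phase(*frames)
# phi ~ phi_true
```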
Fig. 4. Differential three-point bending tests of cycled sample G4: a) at normal displacement 0.1 mm; b) at normal displacement 1.5 mm
3 Testing of a fibre-reinforced composite vessel

In the case of real 3D objects, correct information on the object's shape is necessary for determination of the three components of deformation on the curved surface [2]. The test object is a composite vessel. The results of shape measurement in the central part of the object, sized 100×100 mm, using two-spacing fringe projection interferometry (absolute coordinate measurement [3]) at d1 = 0.5 mm and d2 = 2 mm are presented in Fig. 5. The applied cyclic loading is close to sinusoidal, with modulation from 300 to 500 kPa at a frequency of 0.2 Hz. The initial, interim and final results from macro measurements, performed by lateral shear interferometry along the x direction (1% shear in the central 100×100 mm part of the object) at ~200 kPa static loading (static pressure), are shown in Fig. 6. The fibre bands should form a number of fringes. An unusual mechanical response appears in the different surface zones.
Fig. 5. Absolute coordinate measurement of the central part of the composite vessel by two-spacing fringe projection interferometry: a) unwrapped phase map modulo 2π; b) 3D presentation
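The role of the two spacings can be illustrated with a hierarchical sketch: the coarse-spacing phase map (d2 = 2 mm, unambiguous over a larger height range) resolves the fringe order of the fine-spacing map (d1 = 0.5 mm). This is a generic two-spacing scheme, not the authors' processing chain; heights in mm and noise-free wrapped phases are assumed:

```python
import numpy as np

def two_spacing_height(phi_fine, phi_coarse, d_fine=0.5, d_coarse=2.0):
    """Resolve the fringe order of the fine-spacing phase map using the
    coarse-spacing map (spacings in mm, phases wrapped to [-pi, pi))."""
    z_coarse = phi_coarse * d_coarse / (2 * np.pi)   # coarse height estimate
    z_fine = phi_fine * d_fine / (2 * np.pi)         # fine, ambiguous in d_fine
    order = np.round((z_coarse - z_fine) / d_fine)   # integer fringe order
    return z_fine + order * d_fine

# Synthetic check over heights within the coarse unambiguous range
z = np.linspace(0.0, 0.9, 50)                        # mm
wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi
phi1 = wrap(2 * np.pi * z / 0.5)
phi2 = wrap(2 * np.pi * z / 2.0)
z_rec = two_spacing_height(phi1, phi2)
# z_rec reproduces z over the full range
```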
Fig. 6. Fatigue of the composite vessel under cyclic loading: a) initial state; b) after 400 cycles; c) after 600 cycles
4 Conclusions

Fringe projection and speckle shear interferometry are applied for fatigue detection of fibre-reinforced composite materials. Comparative results from tensile, cyclic and three-point bending tests are presented. The fatigue of the tested objects after cyclic loading is clearly identified. The presentation of the results in both cases (fringe projection and shearography) as the first difference of the normal displacement is more informative due to the higher sensitivity, which allows fatigue detection of composite materials and machine parts to be performed at low loading levels, in working conditions and in real-time operation.
5 References

1. Chen, F, Brown, GM, Song, M (2000) Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 39(1):10-22
2. Hung, MYY, Shang, HM, Yang, L (2003) Unified approach for holography and shearography in surface deformation measurement and nondestructive testing. Opt. Eng. 42(5):1197-1207
3. Sainov, V, Stoilov, G, Harisanova, J, Boone, P (2000) Phase-stepping interferometric system for relative and absolute co-ordinate measurement of real objects. Proc. Int. Conf. OWLS V, Springer-Verlag Berlin, pp 50-5
Multifunctional interferometric platform specialised for the characterisation of active components of MEMS/MOEMS

Leszek Salbut¹, Michal Jozwik²
¹ Warsaw University of Technology, Institute of Micromechanics and Photonics, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
² Département d'Optique, Université de Franche-Comté, 16 Route de Gray, 25030 Besançon Cedex, France
1 Introduction

Strict requirements with respect to the reliability and lifetime of microsystems have to be fulfilled if they are to be used in practice. Both reliability and lifetime are strongly dependent on the material properties and the mechanical design. In comparison with conventional technologies, the situation in microsystems technology is extremely complicated. Modern microsystems (MEMS and MOEMS) and their components are characterized by high-volume integration of a variety of materials, and it is well known that the material behavior in combination with new structural designs cannot be easily predicted by theoretical and numerical simulations. The objective of this work is to develop a new instrument and procedures for characterization of the mechanical behavior of MEMS elements. Highly sensitive and accurate measurements are required for automatic determination of the static shape of microelements and for monitoring the value and phase of the out-of-plane displacement at chosen frequencies and stages of vibration. The measurement system is based on conventional two-beam interferometry [1] with cw and pulsed light sources. It combines the capabilities of time-average, stroboscopic and pulse interferometry techniques. The opto-electro-mechanical system and measurement techniques create the multifunctional interferometric platform (MIP) for testing various types of microelements. The efficiency of the MIP is presented on examples of resonance frequencies and amplitude distributions in vibration modes of active micromembranes.
2 Measurement system

The scheme of the measurement system is shown in Fig. 1a. It is based on a Twyman-Green microinterferometer integrated with an optical microscope and a variety of supporting devices for object manipulation and loading, and for synchronization between the object loading system and the pulsed light source controller. A photo of the interferometric system, designed and manufactured at IMiF PW, is presented in Fig. 1b. As the light source (PLS), two types of lasers can be used: a pulsed microlaser (λ = 543 nm, power 80 mW, frequency up to 50 kHz) or a pulsed laser diode (λ = 630 nm, power 15 mW, frequency up to 2 MHz). The reference mirror is mounted on a piezoceramic transducer for realization of the phase shifts required for automated fringe pattern analysis methods (PSM). Two manipulators with arms provide the voltage for electrical loading of microelements and enable testing them on the silicon wafer (before cutting) and separately (after cutting).

Fig. 1. Scheme of the measurement system for static and pulse (stroboscopic) interferometry (a) and photo of the measurement area (b)
3 Measurement procedures and exemplary results

The MIP with a cw light source can work as a conventional Twyman-Green interferometer for shape and deformation measurement, or can be used for testing vibrating microelements by the time-average technique. To improve the visibility of the Bessel fringes obtained by the time-average technique, a special numerical procedure called the "four-frame method" is used [2]. If the pulsed laser is applied, one of the following techniques for the study of non-static objects can be used:
- pulse interferometry: the most general method for testing moving elements [3],
- stroboscopic interferometry: the method for testing vibration modes [2,4],
- quasi-stroboscopic interferometry: a simplified stroboscopic method for qualitative low-frequency vibration analysis [2].
Fig. 2. The principle of the pulse and stroboscopic (a) and quasi-stroboscopic (b) techniques
Fig. 3. The vibration modes of the square membrane for resonance frequencies: a) 92.8 kHz; b) 107.1 kHz; c) 172 kHz (excitation signal 5.6 VPP in each case), determined using the time-average and quasi-stroboscopic methods
For vibration-mode profiling by pulse or stroboscopic interferometry, a light pulse of width δt, synchronized with the vibration excitation signal (see Fig. 2a) but with an adjustable delay time tm, is used to freeze the object vibration at any point of the vibration cycle.

The idea of the quasi-stroboscopic technique is shown in Fig. 2b. The pulsed laser and the amplifier activating the microelement under test are controlled by the same signal. In this case the excitation signal and the light pulses are synchronized automatically and the light illuminates the vibrating microelement at its maximal deflection. Due to the rectangular shape of the excitation signal, the shape of the microelement is quasi-stable during the relatively long light pulse (this situation is similar to the stroboscopic technique used at tm = T/4).

Fig. 3 shows exemplary results of vibration analysis of silicon micromembranes by the time-average and quasi-stroboscopic techniques. The first three resonance frequencies of a 1.35 × 1.35 mm² square silicon micromembrane with an active PZT layer (made by THALES under the European project OCMMM) were tested.
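In the time-average technique the fringe contrast on a sinusoidally vibrating object follows the Bessel function J0 of the local phase-modulation amplitude, which is why the four-frame procedure is needed to enhance the visibility of the higher-order dark fringes. A minimal generic illustration of this fringe function (not the IMiF implementation):

```python
import numpy as np

def time_average_factor(dphi):
    """Time-averaged interference factor for a sinusoidal vibration of
    phase amplitude dphi: (1/T0) * Int exp(i*dphi*sin(w0*t)) dt = J0(dphi)."""
    t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    return float(np.real(np.mean(np.exp(1j * dphi * np.sin(t)))))

# Visibility falls with vibration amplitude and vanishes at the zeros
# of J0 (the first zero is near dphi = 2.405), producing dark fringes.
vis = [abs(time_average_factor(a)) for a in (0.0, 1.0, 2.405, 3.5)]
```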
5 Acknowledgements

The work was supported in part by the European project OCMMM and by the Polish Scientific Council project no. 4 T10C 021 24.
6 References

1. Kujawinska, M, Gorecki, C (2002) New challenges and approaches to interferometric MEMS and MOEMS testing. Proc. SPIE 4900:809-823
2. Salbut, L, Patorski, K, Jozwik, M, Kacperski, J, Gorecki, C, Jacobelli, A, Dean, T (2003) Active microelements testing by interferometry using time-average and quasi-stroboscopic techniques. Proc. SPIE 5145:23-32
3. Cloud, G (1995) Optical methods in engineering analysis. Cambridge University Press
4. Petitgrand, S, Yahiaoui, R, Danaie, K, Bosseboeuf, A (2001) 3D measurement of micromechanical devices vibration mode shapes with a stroboscopic interferometric microscope. Optics and Lasers in Engineering 36:77-101
Laser Multitask ND Technology in Conservation Diagnostic Procedures
V. Tornari1, E. Tsiranidou1, Y. Orphanos1, C. Falldorf2, R. Klattenhof2, E. Esposito3, A. Agnani3, R. Dabu4, A. Stratan4, A. Anastassopoulos5, D. Schipper6, J. Hasperhoven6, M. Stefanaggi7, H. Bonnici8, D. Ursu9
1 FORTH/IESL, 2 BIAS, 3 UNIVPM, 4 NILPRP, 5 Envirocoustics S.A., 6 Art Innovation b.v., 7 LRMH, 8 MMRI, 9 ProOptica
Introduction

Laser metrology techniques successfully applied in industrial diagnostic fields have not yet been adjusted to the investigation requirements of cultural heritage. The setback is due to the partial applicability of each technique, unsuited by itself to the variety of diagnostic problems implicated in the field. This fragmented applicability obstructs technology transfer and motivates the aim of integrating complementary properties to provide the essential functionality. In particular, structural diagnosis in art conservation intends to depict the mechanical state of the cultural treasure concerned in order to plan its restoration strategy. Conventional conservation practice relies on point-by-point finger-knocking on the exposed surfaces and acoustic differentiation of the surface sound. The ultimate tools, available only for movable items and in emergency cases, are x-ray imaging and thermography. Modern optical metrology may provide better-suited alternatives incorporating transportable, non-contacting, safe, sensitive and fast subsurface topography, provided a diagnostic methodology is developed to design an integrated working procedure of techniques.
EC 5th FWP - DG Research, EESD, LASERACT EVK4-CT-2002-00096 Scientific coordinator: [email protected], tel:+30 810 391394, fax:+30 810 391318, 1 Foundation for Research and Technology-Hellas/Institute of Electronic Structure and Laser, 71 110 Heraklion, Crete, Greece.
1 Integration concept and diagnostic methodology

The integration concept is based on two considerations, constituting separate methodology development steps. a) Techniques act on complementary advantages: holography-related techniques were tested to provide diverged object beams, allowing a larger field of view for artworks of moderate dimensions and a high resolving power for complex micro-defect detection and parametric analysis, whereas scanning vibrometry was tested to allow remote access to distant objects of extended dimensions with larger but simpler defects [1-3]. b) Art classification table versus defect pathology: despite the broad range of objects and materials constituting the cultural heritage, two characteristic structural problems are persistently identified as dominating deterioration growth. These are the detachments and cracks formed in a plethora of artworks with multilayered structure and inhomogeneous materials. The experimental work was based on detection of the dominant conservation problems from simulated samples and real artworks. The suitability of techniques assigned to the art classification table permits standardisation of the inspection sequence according to artwork character and potential pathology.

[Interconnection diagram of the operational procedure: 1st step, INSPECTION; 2nd step, ANALYSIS; 3rd step, EVALUATION. Sequences per object scale: SSS: DSHI→DSS*, output a defect map (image processing, defect detection, indication of a suggested defect type); MSS: SLDV*→DSS→DSHI, output a defect map plus vibration threshold; LSS: SLDV→DSS→DSHI*, output a vibration threshold plus defect map.]

Fig. 1. Operational sequence for development of integrated diagnosis (SSS: Small Scale Structures, MSS: Medium and LSS: Large). The technique in italics is optional to the operator.
2 Results

The feasibility tests concluded in the simultaneous development of interchangeable transportable modules, based on the techniques of Digital Speckle Holographic Interferometry (DSHI), Digital Speckle Shearography (DSS) and Scanning Laser Doppler Vibrometry (SLDV), constituting one compact prototype. For the DSHI, a custom 8 ns pulsed laser at 532 nm based on microlaser pumping was additionally developed, with the green pulse energy adjustable in 50 steps by use of a λ/2 wave-plate at 532 nm, rotated by a computer-controlled step motor, and a Glan polariser. Software integrating the art classification table with the operational parameters of the modules drives the operator, through an interactive user-friendly interface, to perform the investigation and conclude the diagnosis.

Fig. 2. Principle of operation for the multitask prototype system.
The prototype system in field action is shown in figure 3. The compact dimensions allowed transportation under extreme out-of-laboratory conditions. Some characteristic results are shown in figures 4a-c.
Fig. 3. Photograph of the system during on-field investigation, and output parameters of the laser: 1. wavelength 532 nm; 2. pulse energy > 10 mJ; 3. standard deviation < 5%; 4. pulse duration 7-8 ns; 5. repetition rate adjustable, max. 5 Hz; 6. coherence length > 1.5 m
578
Hybrid Measurement Technologies
Fig. 4. a) DSHI on a Maltese fortification for defect detection; b) DSS for defect detection on a simulated sample; c) SLDV on Maltese sample stone for definition of quality
3 Discussion

The conservation community qualifies a technique as suitable if it is non-destructive, non-invasive and non-contacting, acquires subsurface information and visualises defect presence, is capable of remote access and on-field transportation, is applicable to a variety of artworks, shapes and materials, and provides objective and repeatable results. It was successfully shown that these requirements can be delivered by integrating the complementary characteristics of laser techniques existing in optical metrology with the development of an art classification database and user-friendly integrated software.
4 References

1. Tornari, V, Zafiropulos, V, Bonarou, A, Vainos, NA, Fotakis, C (2000) Modern technology in artwork conservation: a laser-based approach for process control and evaluation. Optics and Lasers in Engineering 34:309-326
2. Castellini, P, Esposito, E, Paone, N, Tomasini, EP (1998) Non-invasive measurements of damage of frescoes paintings and icons by laser scanning vibrometer: experimental results on artificial samples and real works of art. Proc. SPIE 3411:439-448
3. Tornari, V, Bonarou, A, Esposito, E, Osten, W, Kalms, M, Smyrnakis, N, Stasinopulos, S (2001) Laser based systems for the structural diagnostic of artworks: an application to XVII-century Byzantine icons. Proc. SPIE 4402
SESSION 5
New Optical Sensors and Measurement Systems

Chairs:
Toyohiko Yatagai, Tsukuba (Japan)
Hans Tiziani, Stuttgart (Germany)
Invited Paper
Progress in Scanning Holographic Microscopy for Biomedical Applications

Ting-Chung Poon
Optical Image Processing Laboratory, Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Virginia 24061, USA
1 Introduction

Optical scanning holography (OSH) is a unique technique in that the holographic information of a three-dimensional (3-D) object is acquired with a single 2-D optical heterodyne scan. OSH has several potential applications in areas such as 3-D holographic microscopy, recognition of 3-D objects, 3-D holographic television, 3-D optical cryptography, and 3-D optical remote sensing. In this talk, I will concentrate on the use of OSH for 3-D microscopy.
2 Generalized two-pupil processing system Optical scanning holography starts with the so-called two-pupil processing system [1-3]. A generalized version of its set-up is shown in Fig. 1. p1(x, y) and p2(x, y) are the two pupil functions, located in the front focal plane of Lens L1. The two pupils are illuminated by laser light of temporal frequencies ω0 and ω0 + Ω, respectively. The beamsplitter BS combines the two pupil fields, and the combined fields are projected, through an x-y scanner, onto the specimen slice T(x, y; z) located at a distance f + z0 + z from Lens L1, as shown in Fig. 1. We model the 3-D object as a stack of transverse slices, each represented by an amplitude transmittance T(x, y; z), which is thin and weakly scattering. We place the 3-D object in front of the Fourier transform lens L2. M(x, y) is a mask placed in the back focal plane of Lens L2. The
photodetector PD collects all the light after the mask and delivers a scanned current i(t). The electronic bandpass filter BPF, tuned at the frequency Ω, then delivers a heterodyne current iΩ(t), which can be written as

iΩ(t) ∝ Re[iΩp(x, y) exp(jΩt)],   (1)

where x = Vt, y = Vt, and V is the speed of the scanning beam.
Fig. 1. A generalized two-pupil optical system: ⊗ - electronic multiplier, LPF - lowpass filter, BPF - bandpass filter, PD - photodetector.
2.1 Coherency of imaging
When M(x, y) = 1, iΩp(x, y) in Eq. (1) becomes [4]

iΩp(x, y) = ∫∫∫ P1*(x′, y′; z + z0) P2(x′, y′; z + z0) |T(x′ + x, y′ + y; z)|² dx′ dy′ dz   (2)
where Pi(x′, y′; z + z0) = F{pi(x, y)} ⊛ h(x, y; z + z0), i = 1 or 2, with the Fourier transform F{pi(x, y)} = ∫∫ pi(x, y) exp(jkx x + jky y) dx dy evaluated at kx = k0 x/f and ky = k0 y/f; ⊛ denotes the 2-D convolution involving the x and y coordinates; and finally h(x, y; z) = exp(−jk0 z) (jk0/2πz) exp[−j k0 (x² + y²)/(2z)] is the free-space spatial impulse response in Fourier optics, with k0 being the wavenumber of the laser light [5]. Equation (2) corresponds to the case of incoherent imaging. Incoherent objects include fluorescent specimens in biology, or diffusely reflecting surfaces as encountered in remote sensing. When M(x, y) = δ(x, y), i.e., the mask is a pinhole, and one of the pupils is also a pinhole, i.e., p1(x, y) = δ(x, y), iΩp(x, y) in Eq. (1) becomes [6]

iΩp(x, y) = ∫∫∫ P2(x′, y′; z + z0) T(x′ + x, y′ + y; z) dx′ dy′ dz.   (3)

This corresponds to coherent imaging, which is important for quantitative phase-contrast imaging in some biological applications.

2.2 Detection scheme
This heterodyne current can be processed electronically in two channels by electronic multipliers and lowpass filters, as shown in Fig. 1, to give two outputs id(x, y) and iq(x, y) to be displayed or stored in a computer. They are given by [6]

id(x, y) ∝ |iΩp(x, y)| cos θ  and  iq(x, y) ∝ |iΩp(x, y)| sin θ,   (4)

where iΩp(x, y) = |iΩp(x, y)| exp(jθ(x, y)).
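The two-channel detection scheme above can be sketched numerically: the heterodyne current is multiplied by cos(Ωt) and sin(Ωt) and lowpass filtered to give the in-phase and quadrature outputs. The sample rate, heterodyne frequency, and the test amplitude and phase below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the two-channel detection of Eq. (4): the heterodyne current is
# multiplied by cos(Omega*t) and sin(Omega*t) and lowpass filtered to give
# the in-phase and quadrature channels i_d and i_q.
fs = 1.0e6                       # sample rate [Hz] (assumed)
Omega = 2 * np.pi * 50e3         # heterodyne frequency [rad/s] (assumed)
t = np.arange(0, 5e-3, 1 / fs)

A_true, theta_true = 0.8, 0.6    # amplitude and phase to be recovered (assumed)
i_t = A_true * np.cos(Omega * t + theta_true)     # heterodyne current

def lowpass(x, n=2000):
    # moving average standing in for the LPF blocks of Fig. 1
    return np.convolve(x, np.ones(n) / n, mode="same")

i_d = lowpass(i_t * np.cos(Omega * t))   # -> (A/2) cos(theta)
i_q = lowpass(i_t * np.sin(Omega * t))   # -> -(A/2) sin(theta)

mid = len(t) // 2                        # read channels away from filter edges
A_est = 2 * np.hypot(i_d[mid], i_q[mid])
theta_est = np.arctan2(-i_q[mid], i_d[mid])
```

The recovered A_est and theta_est match the assumed amplitude and phase; the sign on i_q accounts for the product cos(Ωt + θ) sin(Ωt) lowpassing to −(A/2) sin θ.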
3 Optical scanning holography The generalized two-pupil processing system can operate in a holographic mode by properly choosing the pupil functions. Taking the case of incoherent imaging, if we let p1(x, y) = 1 and p2(x, y) = δ(x, y), the two equations in Eq. (4) become [6]
id(x, y) ∝ ∫ [k0/2π(z + z0)] sin[k0 (x² + y²)/2(z + z0)] ⊗ |T(x, y; z)|² dz,   (5a)

iq(x, y) ∝ ∫ [k0/2π(z + z0)] cos[k0 (x² + y²)/2(z + z0)] ⊗ |T(x, y; z)|² dz,   (5b)

where ⊗ denotes 2-D correlation involving the x and y coordinates. Eqs. (5a) and (5b) are called, respectively, the sine- and cosine-Fresnel zone plate (FZP) holograms of the incoherent object |T(x, y; z)|². Figure 2a) shows the original "fringe" 2-D pattern located at z = 0, i.e., |T(x, y; z)|² = I(x, y)δ(z), where I(x, y) represents the 2-D "fringe" pattern. Figures 2b) and 2c) show the sine-FZP and cosine-FZP holograms, respectively, and Figures 2d) and 2e) show their reconstructions. Reconstruction can simply be done by convolving the holograms with the free-space impulse response matched to the depth parameter z0, h(x, y; z0). Note that there is twin-image noise in these reconstructions. For a reconstruction free of twin-image noise, we can construct a complex hologram Hc(x, y) according to the following equation [6]:

Hc(x, y) = iq(x, y) + j id(x, y)   (6)

Figure 2f) shows the reconstruction of the complex hologram, and it is evident that the twin-image noise has been rejected completely.
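The twin-image rejection can be checked numerically for the simplest case of a point object, for which the sine- and cosine-FZP holograms are the zone-plate patterns themselves. The grid size, pixel pitch, wavelength and depth parameter below are illustrative assumptions.

```python
import numpy as np

# Point-object check of the FZP holograms and the complex hologram: build the
# sine/cosine FZPs, form H_c = i_q + j*i_d, and reconstruct by convolving
# with the impulse response matched to z0.
N, dx = 256, 10e-6            # grid size and pixel pitch [m] (assumed)
lam, z0 = 633e-9, 0.05        # wavelength and depth parameter [m] (assumed)
k0 = 2 * np.pi / lam

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

# for |T|^2 a point at the origin, the holograms are the FZP patterns
i_d = k0 / (2 * np.pi * z0) * np.sin(k0 * r2 / (2 * z0))   # sine-FZP
i_q = k0 / (2 * np.pi * z0) * np.cos(k0 * r2 / (2 * z0))   # cosine-FZP
H_c = i_q + 1j * i_d                                        # complex hologram

# matched free-space impulse response (constant phase exp(-j k0 z0) dropped)
h = 1j * k0 / (2 * np.pi * z0) * np.exp(-1j * k0 * r2 / (2 * z0))

def reconstruct(hologram):
    # 2-D convolution with h(x, y; z0) via FFTs
    F = np.fft.fft2(np.fft.ifftshift(hologram))
    H = np.fft.fft2(np.fft.ifftshift(h))
    return np.abs(np.fft.fftshift(np.fft.ifft2(F * H))) * dx**2

rec_sine = reconstruct(i_d)
rec_complex = reconstruct(H_c)

# the complex hologram concentrates the energy into the focused point; the
# sine hologram keeps a spread-out twin-image background
ratio_complex = rec_complex[N // 2, N // 2] / rec_complex.mean()
ratio_sine = rec_sine[N // 2, N // 2] / rec_sine.mean()
```

The peak-to-mean ratio of the complex-hologram reconstruction is much larger than that of the sine hologram, reproducing the twin-image rejection seen in Fig. 2f).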
Fig. 2. a) Original "fringe", b) Sine-FZP hologram, c) Cosine-FZP hologram, d) Reconstruction of sine-hologram, e) Reconstruction of cosine-hologram, f) Reconstruction of complex hologram (no twin-image noise).
4 Scanning holographic fluorescence microscopy We have applied the principles of optical scanning holography to 3-D microscopy [7]. Scanning holographic microscopy was first proposed in 1996 [8]. In 1997, we captured a hologram of fluorescent beads about 15 µm in size within a volume of about 2 mm by 2 mm by 2 mm, the first hologram of fluorescent information ever recorded [9]. The hologram is shown in Fig. 3a). Figs. 3b) and 3c) show its reconstruction at two planes.
Fig. 3. a) Hologram of fluorescent beads; b) and c) reconstructions at two planes [After Schilling et al., Optics Letters, Vol. 22, 1507 (1997)].
In 1998, using optical scanning holography, we described and experimentally illustrated a method for the three-dimensional (3-D) imaging of fluorescent inhomogeneities embedded in a turbid medium [10]. In 2002, Swoger et al. analyzed the use of optical scanning holography as a technique for high-resolution 3-D biological microscopy [11], and most recently, Indebetouw et al. have demonstrated optical scanning holographic microscopy with a resolution of about 1 µm [12].
5 References
1. Lohmann, A, Rhodes, W (1978) Two-pupil synthesis of optical transfer function. Applied Optics 17:1141-1150
2. Poon, TC, Korpel, A (1979) Optical transfer function of an acoustooptic heterodyning image processor. Optics Letters 4:317-319
3. Indebetouw, G, Poon, TC (1992) Novel approaches of incoherent image processing with emphasis on scanning methods. Optical Engineering 31:2159-2167
4. Poon, TC, Indebetouw, G (2003) Three-dimensional point spread functions of an optical heterodyne scanning image processor. Applied Optics 42:1485-1492
5. Poon, TC, Banerjee, P (2001) Contemporary Optical Image Processing with MATLAB®, Elsevier Science, Oxford
6. Poon, TC (2004) Recent progress in optical scanning. Journal of Holography and Speckle 1:6-25
7. Poon, TC, Schilling, B, Indebetouw, G, Storrie, B (2000) Three-dimensional Holographic Fluorescence Microscope. U.S. Patent # 6,038,041
8. Poon, TC, Doh, K, Schilling, B, Wu, M, Shinoda, K, Suzuki, Y (1996) Three-dimensional microscopy by optical scanning. Optical Engineering 34:1338-1344
9. Schilling, B, Poon, TC, Indebetouw, G, Storrie, B, Wu, M, Shinoda, K, Suzuki, Y (1997) Three-dimensional holographic fluorescence microscopy. Optics Letters 22:1506-1508
10. Indebetouw, G, Kim, T, Poon, TC, Schilling, B (1998) Three-dimensional location of fluorescent inhomogeneities in turbid media by scanning heterodyne holography. Optics Letters 23:135-137
11. Swoger, J, Martinez-Corral, M, Huisken, J, Stelzer, E (2002) Optical scanning holography as a technique for high-resolution three-dimensional biological microscopy. Journal of the Optical Society of America 19:1910-1918
12. Indebetouw, G, Maghnouji, A, Foster, R (2005) Scanning holographic microscopy with transverse resolution exceeding the Rayleigh limit and extended depth of focus. Journal of the Optical Society of America A 22:892-898
The Dynamics of Life: Imaging Temperature and Refractive Index Variations Surrounding Material and Biological Systems with Dynamic Interferometry Katherine Creath a,b,c,d, Gary E. Schwartz d,b
a College of Optical Sciences, University of Arizona, 1630 E. University Blvd, Tucson, AZ, USA 85721-0094
b Biofield Optics, LLC, 2247 E. La Mirada St., Tucson, AZ, USA 85719
c Optineering, 2247 E. La Mirada St., Tucson, AZ, USA 85719
d Center for Frontier Medicine in Biofield Science, University of Arizona, 1601 N. Tucson Blvd., Su.17, Tucson, AZ, USA 85719
1 Abstract Dynamic interferometry is a highly sensitive means of obtaining phase information, determining phase at rates of a few measurements per second. Many different techniques have been developed to obtain multiple frames of interferometric data simultaneously. Commercial instruments have recently been designed with the purpose of measuring phase data in the presence of vibration and air turbulence. The sensitivity of these phase-measurement instruments is on the order of thousandths of a wavelength at visible wavelengths. This sensitivity enables the measurement of small temperature changes and thermal fields surrounding living biological objects as well as material objects. Temperature differences are clearly noticeable using a visible wavelength source because of subtle changes in the refractive index of air due to thermal variations between an object and the ambient room temperature. Living objects can also easily be measured to monitor changes as a function of time. Unwrapping dynamic data in time enables the tracking of these subtle changes to better understand the dynamics and interactions of these subtle variations. This technique has many promising applications in biological and medical sciences for studying thermal fields around living objects. In this paper we outline methods of dynamic interferometry, discuss challenges and theoretical concerns, and
[email protected]; phone 520 626-1730; fax 520 882-6976
present experimental data comparing thermal fields measured with dynamic phase-measuring interferometry surrounding warm and cold material objects as well as living biological objects.
2 Introduction Dynamic interferometers have been designed with the purpose of measuring phase data in the presence of vibration and air turbulence so that interferograms can be captured in "real" time [1], turbulence in the area near an object can be "frozen," and flows and vibrational motion can be followed [2]. They are designed to take all necessary interferometric data simultaneously to determine phase in a single snapshot [3,4,5,6]. Variations in optical path as a function of time can be calculated to obtain OPD movies ("burst" mode) of dynamic material and living biological objects. Any object not in thermal equilibrium with its environment will have a thermal field surrounding it. Temperature variations in thermal fields will alter the refractive index of the air surrounding objects. These subtle variations can be measured interferometrically, and with dynamic capabilities fluctuations in thermal fields can be frozen in time and followed over a period of time. For the study presented in this paper, we focus on the difference between room temperature and body temperature objects and compare these to a human finger. The human body dynamically emits thermal energy. This thermal energy relates to metabolic processes. We hypothesize that the thermal emission from the human body is dynamic and cycles with time constants related to blood flow and respiration [7]. These cycles create small bursts of thermal energy that create convection and air currents. We have termed these subtle air currents generated by the human body "microbreezes" [8]. The thermal variations of these microbreezes modulate the refractive index of the air path.
Because dynamic interferometry can measure subtle changes in refractive index and thereby measure air currents and microbreezes, we hypothesize that this technique will enable us to visualize the thermal aura around human body parts such as a finger tip, and furthermore that we will be able to quantify the relative variations over time.
3 Dynamic interferometer The specific type of dynamic interferometry used for this study is a spatial multichannel phase-measurement technique [9]. Multiple duplicate interferograms are generated with different relative phase shifts between the object and reference beams. These interferograms are then recorded using either a number of separate cameras or by multiplexing all of the interferograms onto a single camera. These interferometers are well suited for studying objects that are not vibrationally isolated or that are dynamically moving. They are able to "freeze" the motion of an object to obtain phase and surface height information as a function of time. Commercial systems utilizing this type of phase measurement are manufactured by 4D Technology Corporation, Engineering Synthesis Design, Inc. and ADE Phase Shift. The system used for this study was a PhaseCamTM from 4D Technology [10].

Fig. 1. Schematic of dynamic interferometer system used for this study. The object under study is between the collimating lens and the return mirror. (Schematic courtesy of 4D Technology Corporation).

A schematic of the 4D Technology PhaseCam is shown in Fig. 1. A single-mode polarized HeNe laser beam is expanded and collimated to provide illumination coupled by a polarization beam splitter. Quarter-wave plates are used to set orthogonal polarizations for the object and reference beams. The object beam is further expanded, collimated and directed at a return mirror. The cavity between the collimating lens and the return mirror is where the objects were placed for this study. When the object and reference beams are recombined inside the interferometer, the polarizations are kept orthogonal. The combined beams pass through optical transfer elements providing imaging of the return mirror onto the high-resolution camera. In one embodiment a holographic optical element creates four copies of the interference pattern that are mapped onto four camera quadrants (see
Fig. 2). A phase plate consisting of polarization components is placed in front of the camera to provide a different relative phase shift between the object and reference beams in each of the four quadrants of the image plane. Phase values are determined modulo 2π at each point in the phase map using the standard 4-frame algorithm (see Fig. 2) [11]. This calculation does not yield absolute phase differences in the object cavity. The arctangent phase calculation provides modulo 2π data, requiring that phase values be unwrapped to determine a phase map of the relative phase difference between the object and reference beams [12]. If data frames are tracked in time, and the phase values do not jump by more than half a fringe between frames of data, it is possible to track the relative phase differences dynamically (also known as unwrapping in time). The version of the software used for this study did not yet have the capability of unwrapping the phase in time. Because the interferograms are multiplexed onto different quadrants of the image plane, care needs to be taken to determine pixel overlap of all the interferograms, to remove spatial optical distortion and to balance the gain values between interferograms.

Fig. 2. Schematic showing the creation and encoding of 4 phase shifted interferograms onto the CCD camera. (Courtesy of 4D Technology Corporation).

In practice, when measurements are taken, a reference file is generated from an average of a large number of frames of data with an empty cavity and subtracted from subsequent measurements to provide a null. Dynamic variations in the object cavity can be monitored by taking a "burst" of data comprised of a user-selectable number of data frames with a fixed time delay between frames. The sensitivity of phase measurements taken with this instrument is on the order of thousandths of a wavelength at the visible HeNe wavelength (633 nm), enabling measurement of small
temperature changes in thermal fields surrounding material and living biological objects.
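The standard 4-frame algorithm used by this instrument can be sketched in a few lines: four interferograms with relative phase shifts of 0, π/2, π and 3π/2 yield the wrapped phase from an arctangent. The phase map, bias and modulation below are synthetic test values, not data from the instrument.

```python
import numpy as np

# 4-frame phase-shifting algorithm: with shifts of 0, pi/2, pi, 3*pi/2,
# the wrapped phase is phi = arctan2(I4 - I2, I1 - I3).
phi_true = np.linspace(-1.2, 1.2, 64)[None, :] * np.ones((64, 1))  # assumed phase map
bias, mod = 2.0, 1.0                                               # assumed fringe bias/modulation

I1, I2, I3, I4 = (bias + mod * np.cos(phi_true + k * np.pi / 2) for k in range(4))

phi_wrapped = np.arctan2(I4 - I2, I1 - I3)   # modulo-2*pi phase map
```

Because the four frames share the same bias and modulation, the arctangent cancels both and returns the phase map modulo 2π, which must then be unwrapped as described above.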
4 Results The dynamic interferometer was set up as shown in Fig. 1. The object cavity between the collimating lens and the return mirror was enclosed with a cardboard tube except for an approximately 3 cm space to place the object in the beam. This limited the effect of ambient air currents on the measurements as much as possible. A reference data set was taken with no object present as the average of 30 consecutive measurements taken in a single burst. This reference data set is subtracted from all subsequent measurements, accounting for variations across the field due to the interferometer itself and creating a null cavity when no object is present. Figure 3 displays various OPD maps of air taken in a single CCD frame (30 ms) under different conditions. All OPD maps are scaled from -0.05 to +0.05 waves OPD. Figure 3(A) shows the empty object cavity. Bright areas (white) are warmer than dark areas (black). Figure 3(B) shows a blast from a can of canned air. Note that the turbulence is easily frozen in time and that the canned air is cooler than the background air. Figure 3(C) shows the effect of a candle flame below the object beam. The area heated by the candle flame is obviously brighter than the darker ambient air temperature. Figure 4(A) shows OPD maps of a screwdriver handle approximately 2 cm across at room temperature. These images are scaled the same as Fig. 3. The presence of the room temperature screwdriver handle does not appear to thermally affect the air path at all. However, when the screwdriver handle is warmed up to body temperature and placed in the beam, there is obviously a thermal gradient around it (Fig. 4(B)). Figure 4(C) shows a finger of the second author placed in the beam. Note that the thermal gradient around the finger is similar to that around the body temperature screwdriver handle. The differences between these two objects are mainly in the surrounding "halo".
Fig. 3. OPD maps of air patterns recorded in a dynamic interferometer with different objects. (A) Empty cavity. (B) Blast of canned air. (C) Candle flame. Brighter shades (white) are warmer air temperatures and darker shades (black) are cooler. All OPD maps are scaled to the same range.
Fig. 4. (A) Room temperature screwdriver handle. (B) Body temperature screwdriver handle. (C) Human finger. All OPD maps are scaled the same as Fig. 3. Note “halo” around warm objects.
Figure 5 displays three consecutive OPD maps of dynamic air patterns taken ~0.1 s apart surrounding the tip of a human finger (A-C) and a screwdriver handle at finger temperature (D-F). The OPD maps were processed using ImageJ software [13], utilizing a lookup table to reveal structure and changes in structure over time. A number of distinctions can be seen in these figures. The screwdriver is more symmetric and static, while the finger is more asymmetric and dynamic. In the generated OPD movies it is possible to see pulsing around the finger, probably corresponding to the heart rate, that is not visible around the screwdriver handle.
Fig. 5. Consecutive OPD maps taken ~0.1s apart of a human finger (A-C) and a body temperature screwdriver handle (D-F). Lines indicate areas of equal optical path like a topographic map. Note there are more dynamic variations between images of the finger than the screwdriver handle.
5 Discussion and Conclusions The sensitivity of these phase measurements is such that our eye can easily discern 0.01 waves of difference in OPD from an OPD map. Calculations can further extend the repeatability to a sensitivity of around 0.001 waves. The refractive index of air is roughly 1.0003 and is dependent upon temperature, pressure, humidity, CO2 quantity and wavelength. These dependencies have been studied extensively for accurate distance measurements using light [14]. Operating with an interferometric measurement sensitivity on the order of 0.001 waves, variations of 1 part in 10^4 of refractive index can be resolved. As seen in the OPD maps presented here, this type of variation is apparent in the fields around the human finger and can be extrapolated to be present around other living biological objects. Since we are interested in dynamic changes and not absolute values, we
feel that this technique shows promise for tracking dynamic changes in thermal fields around biological objects. The main limitation of the method used in this study was that the software was not yet able to unwrap phase in time. Unwrapping in time enables tracking specific air currents over time and determining the refractive index (or OPD) changes between frames of phase data. This type of calculation could be invaluable for a number of different applications such as modal analysis and mapping of air turbulence in a telescope dome. Dynamic interferometry is relatively new. The first dynamic interferometers were designed simply to get around vibration and air turbulence issues. As the field evolves it is becoming apparent that dynamic interferometry has a huge advantage over standard phase-measurement interferometry in being able to follow dynamic motions and capture dynamic events. A survey of vendors of dynamic interferometers indicates that they are in the process of incorporating this type of analysis into their products. It is anticipated that in the not too distant future dynamic analysis and visualization of motions and flows will be the industry standard. The studies presented in this paper clearly show that it is possible to discern the difference between objects at different temperatures by looking at the gradient of the phase map around the object. This experiment has also shown that there is a difference in the dynamic air currents and temperature gradients around living biological objects and inanimate objects at the same temperature. Adding the dimension of time enables the study of subtle changes as a function of time. This type of measurement will enable the study of the dynamics of thermal emission from the human body. We anticipate that dynamic interferometry will enable the correlation of dynamic biofield measurements of thermal microbreezes to variations in metabolic function such as heart rate, respiration and EEG.
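The unwrapping-in-time operation discussed above amounts to unwrapping each pixel's phase sequence along the time axis: as long as the phase changes by less than half a fringe between consecutive frames, the continuous history is recovered. A minimal sketch with a synthetic drifting phase (the frame spacing and drift below are assumptions):

```python
import numpy as np

# Temporal phase unwrapping: if the per-frame phase step stays below pi
# radians (half a fringe), np.unwrap along the time axis recovers a
# continuous phase history even when the excursion exceeds +/- pi.
t = np.linspace(0.0, 3.0, 31)                   # frame times, ~0.1 s apart (assumed)
phi_true = 4.0 * np.sin(2 * np.pi * 0.4 * t)    # slow thermal drift (synthetic)

phi_wrapped = np.angle(np.exp(1j * phi_true))   # what each single frame delivers
phi_unwrapped = np.unwrap(phi_wrapped)          # continuous history recovered
```

For a full OPD movie the same call is applied per pixel (e.g. `np.unwrap(stack, axis=0)` on a time-first array), after the spatial unwrapping of the first frame.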
6 Acknowledgements The authors wish to thank 4D Technology, Inc. for the use of their PhaseCam interferometer and specialized software they created for this study. One of the authors (GES) is partially supported at the University of Arizona by NIH grant P20 AT00774 from the National Center for Complementary and Alternative Medicine (NCCAM). The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of NCCAM or NIH.
7 References
1. Hayes, J. (2002). Dynamic interferometry handles vibration. Laser Focus World, 38(3):109.
2. Millerd, J.E., et al. (2004). Interferometric measurement of the vibrational characteristics of light-weight mirrors. In H.P. Stahl (Ed.), Proceedings of SPIE -- Volume 5180: Optical Manufacturing and Testing V (Vol. 5180, pp. 211-218). Bellingham, WA: SPIE.
3. Wyant, J.C. (2003). Dynamic Interferometry. Optics and Photonics News, 14(4):36-41.
4. North-Morris, M.B., VanDelden, J., & Wyant, J.C. (2002). Phase-Shifting Birefringent Scatterplate Interferometer. Applied Optics, 41:668-677.
5. Koliopoulos, C.L. (1992). Simultaneous phase-shift interferometer. In V.J. Doherty (Ed.), Advanced Optical Manufacturing and Testing II (Vol. 1531, pp. 119-127). Bellingham, WA: SPIE.
6. Smythe, R., et al. (1984). Instantaneous Phase Measuring Interferometry. Optical Engineering, 23(4):361-364.
7. Creath, K., & Schwartz, G.E. (2005). The Dynamics of Life: Imaging Changing Patterns of Air Surrounding Material and Biological Systems with Dynamic Interferometry. J. Alt. Comp. Med., 11:222-235.
8. Creath, K., & Schwartz, G.E. (2004). Dynamic visible interferometric measurement of thermal fields around living biological objects. In K. Creath & J. Schmit (Eds.), Interferometry XII: Techniques and Analysis (Vol. 5531, pp. 24-31). Bellingham, WA: SPIE.
9. Creath, K., & Schmit, J. (2004). Phase-Measurement Interferometry. In B.D. Guenther et al. (Eds.), Encyclopedia of Modern Optics. New York: Academic Press.
10. Millerd, J.E., & Brock, N.J. (2003). Methods and apparatus for splitting, imaging, and measuring wavefronts in interferometry. USPTO. USA: MetroLaser, Inc.
11. Creath, K. (1988). Phase-measurement interferometry techniques. In E. Wolf (Ed.), Progress in Optics (Vol. 26, pp. 349-393). Amsterdam: Elsevier Science Publishers.
12. Robinson, D.W. (1993). Phase unwrapping methods. In D.W. Robinson & G.T. Reid (Eds.), Interferogram Analysis (pp. 194-229). Bristol: IOP Publishing.
13. Rasband, W.S. (1997-2005). ImageJ. Retrieved 5 May, 2005, http://rsb.info.nih.gov/ij.
14. Ciddor, P.E. (1996). Refractive index of air: new equations for the visible and near infrared. Applied Optics, 35(9):1566-73.
Microsystem based optical measurement systems: case of opto-mechanical sensors Michał Józwik, Christophe Gorecki, Andrei Sabac Département d'Optique, FEMTO-ST, Université de Franche-Comté, 16 Route de Gray, 25030 Besançon Cedex, France Thierry Dean, Alain Jacobelli Thales Research & Technology France, Domaine de Corbeville, 91404 Orsay Cedex, France
1 Introduction MEMS technology offers a large field for the development of miniature optical sensors by combining planar waveguiding structures with micromachined elements. The achieved functions may be passive or active. Passive functions, such as alignment between optical fibers and integrated optical devices by U-grooves and V-grooves on silicon, are attractive in terms of low-cost packaging [1], providing good reproducibility and precision of the fiber-to-waveguide connection. The use of active functions like modulation or sensing, resulting from the combination of integrated optics and mechanical structures, is also attractive because of the potential to produce low-cost opto-mechanical sensors [2]. In opto-mechanical sensors, the active structural element converts an external mechanical input signal (force, pressure, acceleration) into an electrical signal via a waveguide read-out of the micromechanical sensing element [2,3]. Structurally active elements are typically high-aspect-ratio components such as suspended beams or membranes. The most widespread application of micromachined opto-mechanical sensors is pressure sensing. In resonant pressure sensors, the detection of a frequency shift offers high accuracy, high stability and excellent repeatability [4]. The pressure range and sensitivity are limited by the dimensions and thickness of the membrane. In general, optical testing methods offer the advantage that the optical observation does not influence the mechanical behaviour. In particular, integrated opto-mechanical structures offer the possibility of monitoring behaviour in situations where laboratory equipment with free-space beams
no longer has access to the parts to be observed. This situation is perfectly suited for reliability tests and for monitoring micromechanical performance during the lifetime of MEMS devices.
2 Design and testing of devices 2.1 Waveguide fabrication
Most published work refers to waveguide structures based on depositing pure silica for the cladding layers and doped silica for the core layer [4], or silicon nitride or silicon oxynitride (SiOxNy) for the core layer [5-7]. SiOxNy waveguides permit low attenuation of light and a refractive index that is well adjustable over a large range; this is suitable for matching to single-mode fibers, since the mode-field profile of such waveguides can be tailored to that of silica-based optical fibers. Our single-mode buried channel waveguides are composed of 1.5-µm-thick silicon oxide claddings (n = 1.47) deposited by plasma-enhanced chemical vapour deposition (PECVD) [7]. The light is laterally confined by the resulting SiOxNy core rib with refractive index n = 1.53, 4 µm wide and 0.22 µm deep. PECVD, working with a relatively high deposition rate and low deposition temperature, is compatible with well-established microelectronic processing. 2.2 Integrated Mach-Zehnder interferometer
The first type of device consists of three main parts: a silicon membrane, an integrated MZI interferometer, and a PZT layer as mechanical actuator. A schematic is shown in Fig. 1. The device contains a measuring arm of the MZI crossing a 1350×1350-µm-wide, 5-µm-thick membrane, acting as an interrogation system [8]. The reference arm, positioned outside the membrane, is rigid.
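As a quick plausibility check on the waveguide geometry of Sect. 2.1 (core n = 1.53, cladding n = 1.47, core depth 0.22 µm), a symmetric-slab estimate confirms single-mode operation in the depth direction. The operating wavelength below is an assumption (1.55 µm, a typical choice for matching single-mode fibers); it is not stated in the paper.

```python
import math

# Symmetric-slab single-mode check for the quoted waveguide parameters.
n_core, n_clad = 1.53, 1.47
d = 0.22e-6                  # core depth [m]
lam = 1.55e-6                # wavelength [m] (assumed)

na = math.sqrt(n_core**2 - n_clad**2)   # numerical aperture, about 0.42
v = 2 * math.pi / lam * d * na          # slab normalized frequency
# for a symmetric slab the first higher-order TE mode cuts on at v = pi,
# so v < pi means only the fundamental mode is guided
single_mode = v < math.pi
```

With these numbers v is well below π, so the guide is comfortably single-mode in depth; the lateral single-mode behaviour of the 4-µm rib depends on the rib geometry and is not captured by this slab estimate.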
Fig. 1. Schematic of the device with Mach-Zehnder interferometer
The optical waveguides of the MZI are sandwiched between the SOI wafer and the PZT transducer. The 2.6-µm-thick piezoelectric actuator is located on top of the membrane, integrated with the measuring arm of the MZI. After the fabrication process, all structures are separated by saw cutting into individual chips with dimensions of 37×6 mm².
Fig. 2. Comparison of normalised amplitudes of the optical signal at the MZI output, for the sensing arm at the center (a), at the quarter point (b), and at the border of the membrane (c).
Three positions of the measuring arm were considered: an arm crossing the center of the membrane, an arm at its quarter point, and an arm at the border of the membrane. The position of the sensing arm on the membrane influences the optical signal at the MZI output. The sensing arm should be located where the
largest change of refractive index is expected. The MZI optical signal also changes depending on the resonant mode of the membrane. We have tested the signal amplitude vs. frequency for three configurations of the sensing arm position (Fig. 2). The application of the micromachined integrated MZI is in the area of resonant pressure sensors. We observed the highest amplitudes at the MZI output for the structure with the sensing arm placed at the quarter point of the membrane. This MZI configuration was adapted and tested as a resonant pressure sensor at a frequency of 111.413 kHz. The output signal was compared with the excitation sinusoid from the generator in a synchronous detection module. In this way, the amplitude of the output optical signal and the phase change between the electrical excitation and the optical output can be directly visualised and measured. First, the amplitude at 111.413 kHz, equal to 7.9 V, was accompanied by a 90-degree phase change. Second, pressure was applied from -2000 Pa to 2000 Pa. The amplitude and phase changes observed at the output of the MZI are plotted in Fig. 3a.
Fig. 3. The amplitude (solid) and phase (square symbols) of the MZI optical signal as a function of pressure (a), and the shift of resonance frequency due to applied pressure (b).
The asymmetric behaviour of the amplitude is caused by stresses induced by the pressure; their value adds to the initial stress state caused by the technological process [9]. Fig. 3b presents the amplitude versus frequency: three amplitude peaks correspond to resonance frequency changes. The results show that 4000 Pa of pressure corresponds to a 4000 Hz shift of the resonance frequency. This confirms the very good sensitivity of the presented device and proves its applicability as a resonant pressure sensor. 2.3 Integrated Michelson interferometer
The second device is an integrated Michelson interferometer (MI) (Fig. 4) fabricated on a silicon substrate using single-mode waveguides.
Fig. 4. The scheme of the device with Michelson interferometer
The input and output facets of the interferometer are obtained by high-precision saw dicing. The light source is a commercially available laser diode coupled via a polarisation-maintaining optical fiber to the input of the MI, while the output is linked by an optical fiber to a photodiode. Facing the cleaved waveguide end face of the reference arm of the MI, a mirror driven by an electrostatic actuator is placed. The input light beam is divided into reference and sensing arms. The light in the sensing arm is guided up to the output plane where, after reflection from a measured micromechanical part, it is coupled back into the waveguide. The photodiode at the output provides information about the light intensity resulting from the interference of the sensing and reference beams. The displacement of the electrostatic mirror generates a phase shift between the reference and sensing arms of the MI. This optical phase modulator is used for high-resolution optical heterodyning with phase-modulated single-sideband detection. In this paper we present the simulation data and the first results of a chip version of the Michelson interferometer with dimensions of 5×40 mm². Two MI configurations were considered. The first one consists of two Y junctions with waveguides crossing at the MI center (distance 0). The second one is a directional coupler, where two adjacent waveguides are designed such that the light can be transferred from one waveguide to the other by coupling. The coupling is obtained by adjusting the distance between the waveguides, varying from 0 to 4 µm. Using the commercial integrated-optics OlympIOs software, we simulated the light propagation, and the power transfer was calculated by finite-difference 2D-BPM (Fig. 5) [10]. When the distance between the waveguides equals 0, the structure acts as a splitter and we obtain exactly the same power at both ends of the structure.
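The coupling behaviour of the directional-coupler configuration can be illustrated with the standard coupled-mode relation. The coupling coefficient below is a hypothetical value; in the real device it depends strongly on the 0–4 µm gap and has to come from a BPM simulation such as the OlympIOs run described above:

```python
import numpy as np

def coupled_power(kappa, length):
    """Fraction of optical power transferred between two parallel
    single-mode waveguides (coupled-mode theory): P2/P0 = sin^2(kappa*L)."""
    return np.sin(kappa * length) ** 2

kappa = 0.5e3                         # hypothetical coupling coefficient [1/m]
L_x = np.pi / (2 * kappa)             # coupling length for full power transfer
half = coupled_power(kappa, L_x / 2)  # 3 dB point: equal power in both guides
full = coupled_power(kappa, L_x)      # complete transfer to the second guide
```

Adjusting the gap changes kappa, and therefore the length at which the power splits equally or crosses over completely.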
Fig. 5. Light propagation in Michelson interferometer by OlympIOs.
Following the simulation results, a set of devices was fabricated (Fig. 6), but good performance is hard to obtain, mostly because of the photolithographic transfer. The deviation of the waveguide width and of the distance x has to be smaller than 0.2 µm. In this case the optical attenuation of the waveguide is 0.5 dB/cm for TE polarisation, and the total loss of the device is about 10 dB.
Fig. 6. Photograph of the diced chips.
The goal of the proposed study is the implementation of a high-resolution MOEMS sensor based on an MI integrated with an electrostatic actuator, applied to the characterisation of the dynamic behaviour of micromechanical parts. Future experimental tests will concentrate on connecting the MI with a beam actuator and on output-signal demodulation.
3 Conclusions

This paper describes the design and investigation of a family of MOEMS measurement devices based on light propagation in SiOxNy waveguide structures. The integration of planar optical waveguides with micromachined structures and the inclusion of micro-optic elements within a MEMS environment offer significant promise for achieving advanced functionality of opto-mechanical architectures. The required optical sources and detectors can stay outside the opto-mechanical system, which then requires light transport by fibers. As an example, a resonant pressure sensor based on a micromembrane with optical interrogation was designed, fabricated and tested. It works on the principle of a resonance frequency shift caused by the change of internal stress due to a change of the external physical environment. The introduced pressure sensor combines the advantages of the resonant operational mode with a MEMS fabrication process and optical signal detection, providing high sensitivity while maintaining stable performance. The technology has to be optimised to decrease the initial stress state, which influences the sensitivity of the sensor. The results indicate that the sensor does not require vacuum encapsulation; low-cost packaging is sufficient. The presented devices with optical read-out open a new methodology for reliability testing and for monitoring the mechanical performance of MEMS devices during their lifetime. In this case the microinterferometer is completely integrated with the MEMS and in general cannot be reused for other measuring systems. The second architecture is a waveguide version of an integrated Michelson interferometer (MI). The MI can measure position, displacements and vibrational characteristics. The fabrication and tests proving the functionality of the device were accomplished, but the integration with a micromechanical actuator is still under development.
4 References
1. M. Tabib-Azar, G. Beheim, "Modern trends in microstructures and integrated optics for communication, sensing, and actuation", Opt. Eng. 36, pp. 1307-1318, 1997
2. C. Gorecki, "Optical waveguides and silicon-based micromachined architectures", in: P. Rai-Choudhury (Ed.), MEMS and MOEMS – Technology and Applications, SPIE Press, Bellingham, 2000
3. E. Bonnotte, C. Gorecki, H. Toshiyoshi, H. Kawakatsu, H. Fujita, K. Wörhoff, K. Hashimoto, "Guided-wave acousto-optic interaction with phase modulation in a ZnO thin film transducer on Silicon-based integrated Mach-Zehnder interferometer", IEEE J. of Lightwave Technol. 17, pp. 35-42, 1999
4. S. Valette, S. Renard, J.P. Jadot, P. Guidon, C. Erbeia, "Silicon-based integrated optics technology for optical sensor applications", Sensors and Actuators A21-A23, pp. 1087-1091, 1990
5. C. Gorecki, F. Chollet, E. Bonnotte, H. Kawakatsu, "Silicon-based integrated interferometer with phase modulation driven by acoustic surface waves", Opt. Lett. 22, pp. 1784-1786, 1997
6. K. Wörhoff, P.V. Lambeck, A. Driessen, "Design, Tolerance Analysis, and Fabrication of Silicon Oxynitride Based Planar Optical Waveguides for Communication Devices", J. Lightwave Technol. 17, No. 8, pp. 1401-1407, 1999
7. A. Sabac, M. Józwik, L. Nieradko, C. Gorecki, "Silicon oxynitride waveguides developed for opto-mechanical sensing functions", Proc. SPIE, Vol. 4944, pp. 214-218, 2003
8. A. Sabac, C. Gorecki, M. Józwik, T. Dean, A. Jacobelli, "Design, testing, and calibration of an integrated Mach-Zehnder-based optical readout architecture for MEMS characterization", Proc. SPIE, Vol. 5458, pp. 141-146, 2004
9. L. Sałbut, J. Kacperski, A.R. Styk, M. Józwik, C. Gorecki, H. Urey, A. Jacobelli, T. Dean, "Interferometric methods for static and dynamic characterizations of micromembranes for sensing functions", Proc. SPIE, Vol. 5458, pp. 16-24, 2004
10. OlympIOs, BBV Software BV, http://www.bbvsoftware.com
5 Acknowledgements

This work was supported by the Growth Programme of the European Union (contract G1RD-CT-2000-00261). The development of the MI structure is the main subject of a European Union Marie Curie Intra-European Fellowship (contract FP6-501428). Michał Józwik thanks the Université de Franche-Comté for the financial support of his work. Special thanks to Lukasz Nieradko from FEMTO-ST and to Pascal Blind from the Centre de Transfert des Microtechniques for guidance and help in the realisation of the technological process.
White-light interferometry with higher accuracy and more speed Claus Richter, Bernhard Wiesner, Reinhard Groß and Gerd Häusler Max Planck Research Group, Institute of Optics, Information and Photonics, University of Erlangen-Nuremberg Staudtstr. 7/B2, 91058 Erlangen Germany
1 Introduction

White-light interferometry is a well-established optical sensor principle for shape measurements. It provides high accuracy on a great variety of surface materials. However, to accomplish future industrial tasks several problems have to be solved. One major task is to increase the scanning velocity. Common systems provide scanning speeds of up to 16 µm/sec, depending on the surface texture. We introduce a system based on a standard white-light interferometer which can achieve scanning speeds of up to 100 µm/sec at a standard frame rate of 25 Hz. With a hardware add-on we achieve an up to 10 times higher modulation in the signal compared to the standard setup. To cope with the sub-sampled signals, we introduce new evaluation methods. On a mirror we achieve a distance measurement uncertainty of 230 nm at a scanning speed of 100 µm/sec. On optically rough surfaces we achieve an improvement of the scanning speed up to 78 µm/sec without any loss of accuracy. Another major task concerns white-light interferometry on rough surfaces ("Coherence Radar") [1]. Here the physically limited measuring uncertainty is determined by the random phase of the individual speckle interferograms. As a consequence, the standard deviation of the measured shape data is given by the roughness of the surface under test [4]. The statistical error in each measuring point depends on the brightness of the corresponding speckle: a dark speckle yields a more uncertain measurement than a bright one. If the brightness is below the noise threshold of the camera, the measurement fails completely and an outlier occurs.
We present a new method to reduce the measuring uncertainty and the number of outliers. In our method, we generate two or more statistically independent speckle patterns and evaluate these speckle patterns by assigning more weight to brighter speckles.
2 Increasing the scanning speed

The major factor limiting the vertical scanning speed is the frame rate of the video camera used. With a standard frame rate of 25 Hz, the scanning speed cannot exceed 16 µm/sec. To fulfil future demands of industrial applications this speed has to be increased.

2.1 Increasing modulation at higher scanning speeds
Using a standard setup and simply increasing the scanning speed causes some difficulties. During the exposure of an image the linear positioning system keeps moving, so the optical path difference between the reference arm and the object arm of the sensor changes. This leads to a decrease of the modulation of the interferograms at high scanning speed [2]. To avoid this effect the optical path difference must be approximately constant. Our basic idea is to move the reference mirror so as to compensate for the motion of the positioning system during the exposure of one frame. During this time the optical path length of both arms changes, but the optical path difference remains the same. In the time gap between two images the reference mirror is switched back to its initial state [3]. To test this setup we recorded the signals of 10,000 interferograms at several sample distances and calculated the mean modulation for each sample distance (see Fig. 1).
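The loss of modulation from camera integration can be modelled with a simple sinc envelope: averaging the fringe cosine cos(4πz/λ) while the stage travels one sample distance attenuates the contrast. The wavelength below and the idealized compensation are assumptions for illustration:

```python
import numpy as np

LAM = 0.84e-6   # assumed central wavelength of the white-light source [m]

def contrast_factor(sample_distance):
    """Fringe-contrast attenuation when the camera integrates while the
    scanner travels `sample_distance` during one exposure.  Averaging
    cos(4*pi*z/LAM) over that travel yields a sinc envelope
    (np.sinc(u) = sin(pi*u)/(pi*u))."""
    return abs(np.sinc(2 * sample_distance / LAM))

slow = contrast_factor(0.04e-6)   # small step: almost full modulation
fast = contrast_factor(0.42e-6)   # step of lambda/2: modulation vanishes
# With the compensated reference mirror the optical path difference is
# (ideally) frozen during the exposure, so the factor stays near 1
# even at large sample distances.
```

This reproduces the qualitative behaviour of the grey curve in Fig. 1: modulation collapses once the per-frame travel approaches half a wavelength.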
Fig. 1. Modulation of interferograms at different scanning speeds with and without compensating the integration effect
The object under test was a mirror. The grey curve shows the integrating effect of the camera: the modulation rapidly decreases to about 13 digits at high sample distances, corresponding to the background noise. With the compensation movement (black curve), the modulation at small sample distances is the same as with the standard setup, yet at higher scan velocities the modulation remains high. With this setup it is possible to increase the scan velocity by a factor of 8 while obtaining the same modulation as the standard system.

2.2 Evaluating sub-sampled signals
Carrying out measurements with large sample distances causes sub-sampling of the interferogram. Evaluating these signals with established algorithms does not provide the required accuracy. We developed two new approaches to evaluate sub-sampled signals in white-light interferometry. The first approach is a centre-of-mass algorithm. It is very simple and enables very fast data processing; to improve this evaluation method, the interferogram is rectified beforehand. The second approach uses the information we have about the interferogram shape. To calculate the height information we apply a cross-correlation between the recorded interferogram and a simulated interferogram. This method is more complex than the centre-of-mass algorithm and needs much more evaluation time.
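A minimal sketch of the centre-of-mass evaluation on a synthetic sub-sampled correlogram; all signal parameters (wavelength, coherence length, sample distance, background) are illustrative assumptions:

```python
import numpy as np

def envelope_position_com(correlogram, z):
    """Centre-of-mass estimate of the correlogram envelope position.
    The signal is 'rectified' first: the background is removed and the
    absolute value is taken, as described in the text."""
    rectified = np.abs(correlogram - np.mean(correlogram))
    return np.sum(z * rectified) / np.sum(rectified)

# Synthetic sub-sampled white-light correlogram (assumed parameters)
lam, lc, z0 = 0.84, 46.0, 120.0   # wavelength, coherence length, true height [um]
z = np.arange(0.0, 241.0, 2.0)    # 2 um sample distance -> sub-sampled fringes
corr = 100 + 80 * np.exp(-(((z - z0) / (lc / 2)) ** 2)) \
           * np.cos(4 * np.pi * (z - z0) / lam)
z_est = envelope_position_com(corr, z)   # close to the true height z0
```

The aliased fringes no longer matter because only the (rectified) envelope weight enters the centre of mass, which is what makes this estimator robust to sub-sampling.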
2.3 Results
With this combination of hardware add-on and new evaluation methods we achieved a measurement uncertainty of 230 nm when measuring a mirror. The scanning speed was 100 µm/sec at a 25 Hz frame rate. On optically rough objects we were able to increase the scanning speed up to 78 µm/sec without great loss of accuracy. Figure 2 compares two cross-sections of a measured coin. The measurement on the left was done with the standard setup at a scanning speed of 4 µm/s; the measurement on the right was done with the new setup and the new evaluation methods at a scanning speed of 34 µm/s.
Fig. 2. Cross-section of a measurement (coin). Left: normal setup, scanning speed 4 µm/s. Right: new setup, scanning speed 34 µm/s
3 Better accuracy and reliability

Another challenge for white-light interferometry is the measurement of rough surfaces. Generally we speak of a rough surface if height variations greater than λ/4 appear within the diffraction spot of the imaging system. In that case the well-known interference fringes disappear and a speckle pattern appears instead. Since the phase varies statistically from speckle to speckle, it does not carry any useful information and one can only evaluate the envelope of the interference signal ("correlogram"). Since this resembles a time-of-flight measurement, we called the method "coherence radar" [1]. Comparing the correlograms of different camera pixels, one can see two main features of the interference signal that differ from smooth surfaces:
- statistical displacement of the signal envelope ("physical measurement error")
- varying interference contrast

It has been shown [4] that the surface roughness can be evaluated from the ensemble of all those displacements. If we explore the reliability of one measured height value, we find [5] that the standard deviation of the height values σ_z(I) depends on the surface roughness σ_h, the average intensity ⟨I⟩ and the individual speckle intensity I:

σ_z(I) = (1/2) √(⟨I⟩ / I) σ_h    (1)
The consequence of Eq. 1 is quite far-reaching, because it reveals that every measured height value is associated with a physical measurement error: the darker the speckle, the bigger this error. Hence, we are eager to create and select bright speckles.

3.1 Consequences of varying interference contrast
An additional error source that has to be taken into account is demonstrated in the following experiment: a rough surface was measured ten times. To ensure that the same speckle pattern was measured each time, the object under test remained at the same position. A cross-section through the surface is shown in Figure 3. There is a spreading of the height values in every pixel. Since the speckles are the same for all ten measurements, the physical error and thus the measured height value should remain the same. The spreading therefore has to be caused by another error source, namely the camera noise. A dark and a bright speckle are highlighted to point out the difference: the spreading is bigger for darker speckles due to the bigger share of the camera noise. In the worst case the interference contrast of the correlogram is below the noise threshold of the camera. This means that the measurement in that speckle fails completely and an outlier appears. If the camera noise is reduced, for example by cooling the CCD chip or by applying a longer integration time, the spreading will disappear, but not the physical measuring error. The repeatability would then be perfect, but according to Eq. 1 the measured height is still unreliable.
Fig. 3. Profile of a rough surface measured ten times.
The consequences can be summarized as:
- bright speckles generate more reliable measurements
- bright speckles avoid outliers

Therefore, in order to improve the quality of a measurement one has to look for bright speckles. A posteriori solutions such as filtering the measured image are not an appropriate approach.

3.2 Offering different speckle patterns to the system
Our new approach is to offer not only one but two (or even more) decorrelated speckle patterns to the system. The combination yields better statistics: for a single speckle pattern the darkest speckles have the highest probability. However, if we may select the brightest speckle in each pixel out of two (or more) independent speckle patterns, the most likely speckle intensity is shifted to higher values and the probability of ending up with a very dark speckle is small. Decorrelated speckle patterns can be generated either by the use of different wavelengths [6] or by moving the light source; in both cases the camera sees different speckle patterns. In our setup we synchronized the camera with the light sources: for odd frame numbers only light source "one" was on, whereas for even frames only light source "two" was on. A separate signal evaluation of the correlograms recorded in odd and even camera frames is carried out. Subsequently, the SNR of both signals is estimated and the height value with the better SNR is chosen. The cost of this method is of course a reduction of the actual frame rate, but according to Eq. 1 there is a significantly higher reliability of the measured profile.
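The per-pixel selection can be sketched as follows. The speckle statistics are simulated (an exponential intensity distribution is a standard assumption for fully developed speckle), so all numbers are illustrative:

```python
import numpy as np

def select_brighter(h1, snr1, h2, snr2):
    """Fuse two measurements taken with decorrelated speckle patterns:
    for every pixel keep the height value whose correlogram showed the
    better SNR ('choice of the brighter speckle')."""
    use_first = snr1 >= snr2
    return np.where(use_first, h1, h2), np.maximum(snr1, snr2)

rng = np.random.default_rng(0)
n = 10_000                      # ten thousand camera pixels, as in the text
snr1 = rng.exponential(4.0, n)  # illustrative per-pixel SNR, pattern 1
snr2 = rng.exponential(4.0, n)  # illustrative per-pixel SNR, pattern 2
h1 = rng.normal(0.0, 1.0, n)    # dummy height maps
h2 = rng.normal(0.0, 1.0, n)
h, snr = select_brighter(h1, snr1, h2, snr2)
low_one = np.mean(snr1 < 4.0)   # share of unreliable pixels, one pattern
low_two = np.mean(snr < 4.0)    # clearly smaller with two patterns
```

Taking the per-pixel maximum of two independent SNR draws shifts the distribution to higher values, which is exactly the effect shown by the arrow in Fig. 4.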
3.3 Results
In an experimental verification, two LEDs with a central wavelength of 840 nm are used as light sources. They are placed in front of a beam splitter. One of the LEDs is mounted on a micrometer slide to shift the sources against each other. The rough object under test is a diffuse surface, and the signals are recorded by a standard 50 Hz camera. The LEDs are alternately switched on and off as described above. Subsequently, the higher SNR value is estimated in ten thousand camera pixels. For comparison with the standard setup, a second measurement is performed using only one LED; again, the SNR value is estimated in ten thousand camera pixels. Figure 4 displays the SNR distributions of the two measurements. The improvement by the "choice of the brighter speckle" is significant: the maximum of the probability is shifted to higher SNR values (indicated in Fig. 4 by the arrow) and the number of pixels with low SNR is significantly reduced. To quantify this, a series of measurements with different scanning speeds was performed and the share of pixels with an SNR not exceeding 4 was determined; an SNR value of at least 4 ensures a safe distinction from noise. The result is displayed in Figure 5. For all scanning speeds the proportion of low-SNR camera pixels is smaller for two speckle patterns than for one. Figure 6 displays a part of a coin as a measurement example.
Fig. 4. Distribution of ten thousand SNR values measured with one (grey) and two speckle patterns (black). The arrow shows the improvement of the SNR by the "choice of the brighter speckle"
Fig. 5. Percentage of camera pixels with a SNR < 4 for one and two speckle patterns.
Fig. 6. Part of a coin measured with one (left) and two (right) speckle patterns. The number of outliers has been reduced significantly.
4 References
1. Dresel, T, Häusler, G, Venzke, H (1992) Three-dimensional sensing of rough surfaces by coherence radar. Applied Optics 31: 919-925
2. Windecker, R, Haible, P, Tiziani, H J (1995) Fast coherence scanning interferometry for smooth, rough and spherical surfaces. Journal of Modern Optics 42: 2059-2069
3. Richter, C (2004) Neue Ansätze in der Weisslichtinterferometrie. Diploma Thesis, University Erlangen-Nuremberg
4. Ettl, P, Schmidt, B, Schenk, M, Laszlo, I, Häusler, G (1998) Roughness parameters and surface deformation measured by "Coherence Radar". Proceedings of SPIE Volume 3407: 133-140
5. Ettl, P (2001) Über die Signalentstehung bei Weißlichtinterferometrie. PhD Thesis, University Erlangen-Nuremberg
6. George, N, Jain, A (1973) Speckle Reduction Using Multiple Tones of Illumination. Applied Optics 12: 1202-1212
Novel white light Interferometer with miniaturised Sensor Tip Frank Depiereux, Robert Schmitt, Tilo Pfeifer Fraunhofer Institute for Production Technology IPT Dept. Metrology and Quality Management Steinbachstrasse 17, 52074 Aachen Germany
1 Introduction

White-light interferometry is an established technique in metrology [1]. It allows absolute distance measurements on different surfaces. White-light systems are mostly bulky stand-alone solutions and cannot be used for certain measurement tasks, such as the inspection of small cavities. White-light interferometers can also be realized as fiber-based systems, which offer a great potential for miniaturization. We describe such a fiber-based white-light interferometer with its main, innovative components. In principle, the presented system is based on linking two interferometers: a measuring interferometer (the donor) and a receiving interferometer (the receiver) [2]. The donor was realized as a fiber-based Fabry-Perot solution, which reduces the sensor tip diameter to 800 µm. This sensor tip is very sturdy and can be used in an industrial environment. A Michelson interferometer is used as receiver. Scanning of the measuring range in the time domain is replaced by a spatial projection of the white-light fringes onto a CCD, CMOS or line detector. The choice of the detector depends on the preferred combination of measuring frequency and range; since the fringe pattern is detected digitally, the measuring frequency is determined in the first instance by the frame rate of the chosen detector. A stepped CERTAL (oxygen-free aluminium) mirror element replaces commonly used phase-shifting elements such as piezos or linear stages. The range of the system can be designed as required through the number and geometry of the steps [5]. A slight tilt of the mirror perpendicular to the direction of the incident beams results in a characteristic fringe pattern on the sensor chip.
2 Theoretical background

In contrast to laser interferometry, white-light interferometry provides limited coherence areas in which interference is possible. They depend on the FWHM (full width at half maximum) and the central wavelength of the light source. Short-coherence light sources allow absolute distance measurement. The function of the sensor and the principle of white-light interferometry can be developed in the frequency domain by considering the transmission functions of the donor and the receiver, combined with the quasi-Gaussian power density spectrum of the light source [6]. The signal intensity and position result from the path differences in the sensor and the receiver [5].

2.1 Power density spectrum of the light source (Gaussian)
P(λ) = (1/Δλ) √(2/π) exp(−2 (λ − λ0)² / Δλ²)    (1)

Here, λ0 is the central wavelength and Δλ the FWHM of the light source.

2.2 Transmission function for the sensor (Airy function)
T_G(x, k) = [(r1 − r2)² + 4 r1 r2 sin²(2πkx)] / [(1 − r1 r2)² + 4 r1 r2 sin²(2πkx)]    (2)

Here, k = 1/λ is the wave number, and r1 and r2 are the reflection coefficients of the Fabry-Perot donor, where the first surface is the end surface of the fiber and the second surface is the surface of the measurement object at a distance x.

2.3 Transmission function for the receiver
T_E(y, k) = (1/2) (1 + cos(2πky))    (3)

The path difference y is that between the stepped mirror and the reference mirror in the receiver.
2.4 The signal intensity U_x(y)

U_x(y) = ∫_{−∞}^{+∞} P(λ) T_G(x, k) T_E(y, k) dk    (4)
In order to simplify these equations, differences in the running time of the waves due to changes in the refractive indices of the optical components are not taken into account. On the one hand there is a so-called main signature for equal geometric paths in the receiver (y = 0); on the other hand there are sub-signatures. These become visible on the mirror when the path lengths differ in the donor and the path lengths in the receiver are equal for x and y. There are redundant signatures spaced n·4X = n(x + y) = n·2x apart for a measured value X, as long as the path difference can be compensated by the geometry of the stepped mirror, i.e. they appear within the measuring range. Only the first pair of sub-signatures is of interest for the detection, because the second pair does not provide more information. It is sufficient for the measurement evaluation to detect the main signature and one sub-signature, because the distance between the center values of the signatures is exactly x = 2X. The signature width depends on the distribution of the light source's power density. The theoretical signal described by (4) is shown in Fig. 1; both the main signature and the first two pairs of sub-signatures are shown. The width of the signatures relates to the SLD used. With the central wavelength λ0 = 846.9 nm and the FWHM value Δλ = 15.6 nm, the coherence length of the SLD is given by:
l_c,SLD = λ0² / Δλ = 846.9² nm² / 15.6 nm ≈ 46 µm    (5)

Fig. 1. Detector signal, theoretical
3 Interferometer set-up

The setup (Fabry-Perot donor / Michelson receiver) is shown in Fig. 2. When light is emitted from the source (A), a short-coherence wave reaches the Fabry-Perot donor via the single-mode fiber coupler (B). The reference wave, which originates from the end surface of the sensor tip (C), is superimposed with the measuring wave from the object (D) in the Michelson receiver. When the paths match, they interfere. The interference signals can be detected by a CMOS, CCD or line camera (E), depending on the desired measuring range and frequency. The light source is an SLD with an average power output of ~3 mW and a central wavelength of 850 nm, already pigtailed to a single-mode fiber. The light from the source is coupled into the fiber and transmitted to the Fabry-Perot sensor via the coupler (50/50). The single-mode fiber has a diameter of approx. 4.5 µm and a numerical aperture of 0.12, which, in conjunction with a collimating sensor tip, results in an almost collimated beam with a spot size of approx. 40 µm along the complete measuring range. The use of a focusing sensor tip is also possible, which increases the capability to measure trailing edges. The single-mode fiber has a further advantage: it acts as a spatial mode filter, so the spatial coherence is restored [6]. The beam from the fiber is collimated again in the receiver to illuminate the reference mirror (F) and the tilted mirror (G). As mentioned above, the stepped mirror replaces commonly used phase-shifting elements.

Fig. 2. Schematic system setup
4 CFP sensor tip

When the system is used with a bare fiber as sensor tip, the beam expands under the half angle θ = 6.8° (NA of the fiber). This results in a beam diameter of d_b ~ 245 µm at a measurement distance of 1 mm. A focused or collimated beam can be provided by the use of a gradient-index fiber [7].
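The quoted beam diameter follows directly from the numerical aperture; a quick check, taking the 4.5 µm fiber diameter from the set-up description:

```python
import math

NA = 0.12            # numerical aperture of the single-mode fiber
core = 4.5e-6        # fiber (mode-field) diameter [m]
theta = math.asin(NA)             # half divergence angle, ~6.8-6.9 deg
half_angle_deg = math.degrees(theta)
distance = 1e-3                   # measurement distance [m]
beam_diameter = core + 2 * distance * math.tan(theta)   # ~245 um, as in the text
```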
Fig. 3. Collimating connector (Fa. Diamond)
A short piece of gradient-index fiber (2) is spliced (3) to the single-mode fiber and glued into the ferrule (1) of the chosen connector (Fig. 3). The GRIN fiber then has to be polished to a proper length to provide beam shaping. The same technique has been used to realise the miniaturized sensor tip. In order to achieve an outer diameter of the sensor head below 1 mm and a sensor shaft length of min. 50 mm, a newly designed CFP tube (carbon-fiber-reinforced plastic) acts as sensor tip [8]. The integration of the spliced fiber resulted in a Fabry-Perot sensor tip with a diameter of 0.8 mm. A prototype of the sensor terminated with an E2000 connector is shown in Fig. 4. Compared to other materials, the main advantage of CFP lies in its special properties: on the one hand, the sensor is flexible enough to allow industrial handling, and on the other hand it is stiff enough to keep its shape, which is essential for measurement purposes.
Fig. 4. Sensor prototype [9]
5 Mirror element

As mentioned above, the sensor tip is connected to the Michelson receiver via the fiber coupler. The lens set-up between the fiber and the Michelson interferometer provides a collimated beam which illuminates the reference and the stepped mirror. The mirror (length 10 mm, width 7 mm) provides a measuring range that depends on the dimensions, number of steps and angle of the mirror. A mirror with ten steps, each 100 µm high, was used in the set-up (Fig. 5).
Fig. 5. Stepped mirror: (a) calibration step, (b) serpentine structure
To ensure a continuous measuring range, the required angle of incidence is ~0.6°. This angle follows from the selected step dimensions (step length and height); the visibility of the fringes diminishes continually with increasing incidence angle [5][10]. The use of the stepped mirror within the system delivered important information for improving its design. In order to increase the measuring distance to 1 mm while simultaneously detecting both the main and sub-signature, a new design with a so-called "calibration step" was realized (Fig. 5a: first step). The advantage of this design is that the mirror always reflects a stably detectable main signature on the first step (height 1 mm). The main signature is important not only for signal processing but also for monitoring the receiver condition. The signal processing required for a mirror with planar steps is quite intensive because of the "signature jump" at the end of each step. Fig. 5b shows an improved mirror with a serpentine structure. This structure enables continuous signal detection without signature jumps, although such mirrors are far more difficult to manufacture than the stepped versions.
6 Results

The combination of the stepped mirror with the light source results in signatures which are laterally spread across regions of the mirror, with a maximum intensity in the centre of the signature. The CCD image shows the stepped mirror with signatures on different steps. The signature on the first step (the calibration step) is the main signature; as described above, it has a higher intensity than the sub-signatures (Fig. 7). The sub-signature is encircled and can be located on the third step. In order to determine the distance between sensor and object it is necessary to filter and analyze the raw image data. After background subtraction, the noise can be reduced with a frequency filter (e.g. by means of the FFT).
Fig. 7. CCD image of four steps with main and sub signature
Fig. 8a shows the filtered grey-value signal together with a Gaussian fit of the peak. The pixel positions of the peaks of both the main and the sub-signature can now be used to calculate the distance. The linearity of the measured positions can be seen in Fig. 8b: the y-axis shows the pixel positions measured for the sensor-object distances given on the x-axis.
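The peak localisation and calibration can be sketched as follows; the signature is synthetic and the calibration points are hypothetical, chosen only to illustrate the linear fit of Fig. 8b:

```python
import numpy as np

def gaussian_peak_position(profile):
    """Sub-pixel peak position: fit a parabola to the logarithm of the
    three samples around the maximum (a Gaussian is a parabola in log
    space), then return the vertex position."""
    i = int(np.argmax(profile))
    a, b, c = np.log(profile[i - 1 : i + 2])
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# Synthetic signature on a pixel grid with a sub-pixel centre
px = np.arange(200)
sig = np.exp(-(((px - 123.4) / 8.0) ** 2))
peak = gaussian_peak_position(sig)            # recovers ~123.4

# Hypothetical calibration data: peak pixel vs. sensor-object translation
translation = np.array([0.0, 0.2, 0.4, 0.6, 0.8])          # [mm]
peak_pixel = np.array([50.1, 80.0, 109.8, 140.2, 170.0])   # measured peaks
slope, offset = np.polyfit(translation, peak_pixel, 1)     # linear relation
```

Once slope and offset are known, any measured peak-pixel pair converts directly into a sensor-object distance.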
Fig. 8. (a) Processed image data with signature and peak, (b) Linear fit
With this method it is possible to calibrate the system and obtain a linear relation between pixel position and distance. The measuring uncertainty depends on the capability to clearly separate two peaks.
7 Summary We presented the set-up of a novel fiber-based, miniaturized white light interferometer with a unique CFP sensor tip. A stepped mirror that replaces mechanical scanning components was introduced. Improved designs of this mirror open up further developments of the system, such as monitoring the stability and condition of the receiver (calibration step) on the one hand and optimizing the signal processing (serpentine structure) on the other. It was also shown how fringe patterns on different steps of the mirror encode the measurement (distance) information.
8 Acknowledgements The presented results arose from a national research project supported by the German Ministry of Education and Research (BMBF) and the Stiftung Industrieforschung. The project is carried out by CeramOptec GmbH, Bonn; Precitec GmbH, Rodgau; and Mahr-OKM GmbH, Jena.
9 References 1. Wyant, J. C.: White Light Interferometry. Optical Sciences Center, University of Arizona, Tucson, AZ 85721 2. Bludau, W.: Lichtwellenleiter in Sensorik und optischer Nachrichtentechnik. Springer Verlag, Heidelberg 3. Bosselmann, T. (1985) Spektral-kodierte Positionsübertragung mittels fasergekoppelter Weißlichtinterferometrie. Universitätsbibliothek Hannover, Hannover 4. Koch, A. (1985) Streckenneutrale und busfähige faseroptische Sensoren für die Wegmessung mittels Weißlichtinterferometrie. Universität Hamburg-Harburg, VDI-Verlag, Düsseldorf 5. Chen, S., Meggitt, B. T., Rogers, A. J. (1990) Electronically-scanned white-light interferometry with enhanced dynamic range. Electronics Letters 26 (20): 1663-1665 6. Company brochure, Superlum Diodes Ltd., Moscow, Russia
7. Cerini, A., Caloz, F., Pittini, R., Marazzi, S.: High Power PS Connectors. DIAMOND SA, Via dei Patrizi 5, 6616 Losone, Switzerland ([email protected]) 8. Depiereux, F., Schmitz, S., Lange, S. (2003) Sensoren aus CFK. F&M Mechatronik, Hanser Verlag, 11-12/2003 9. Photograph courtesy of Felix Depiereux, Düsseldorf 10. Chen, S., Meggitt, B. T., Rogers, A. J. (1990) A novel electronic scanner for coherence multiplexing a quasi-distributed pressure sensor. Electronics Letters 26 (17): 1367-1369
Honorary Lecture
Challenges in the Dimensional Calibration of Submicrometer Structures with the Help of Optical Microscopy Werner Mirandé (retired from) Section for Quantitative Microscopy, Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116 Braunschweig, Germany
1 Introduction Optical microscopes are well-established instruments for dimensional measurements on small structures. The main advantage of imaging methods that use the visible and UV parts of the electromagnetic spectrum is the minimal risk of damage to the objects to be measured. Measurement results with high accuracy, however, can only be obtained by carefully analysing the process of image formation in the microscope and accounting for all sources of systematic uncertainty, or by calibrating the system using traceable standards.
2 Basics 2.1 Image Formation in optical Microscopes
Essential components of a typical measuring microscope are the light source, the condenser, the objective lens, a tube lens and, in measuring systems, an electro-optical receiver system. Although there is a tendency to assume that the images at least qualitatively resemble the shape of the sample structures, the distortion of the features produced by the imaging system, or in optical microscopy by the illumination conditions, can sometimes be severe. In practice, because of diffraction at the diaphragms that are introduced as aperture stops in the optical setup, the image of an object point results in a three-dimensional distribution of the complex amplitude or intensity, even if the system is free of aberrations and perfectly focussed. Objects that are representative in the present context are usually non-luminescent and therefore have to be illuminated with the help of an auxiliary light source and condenser system. The condenser-aperture diaphragm controls the maximum angle of incidence of the illumination light cone. Some of this light is then transmitted through the object, or it is absorbed, reflected or scattered, with or without a change of phase or polarisation. In the objective lens system the objective aperture controls the maximum angle of inclination to the optical axis of marginal rays that can pass through the objective. In combination with the wavelength of light it usually determines the limit of resolution. The ratio between the condenser aperture and the objective aperture is a critical parameter of an imaging system, as it determines the total degree of spatial coherence and consequently also essentially affects the image intensity. A point to note is that, even for an aperture ratio of one, the microscopic image formation of a non-luminescent object is partially coherent [1]. That is why pure phase objects give rise to an intensity distribution in a diffraction-limited, perfectly focussed system with an aperture ratio of one; for perfectly incoherent imaging this would not be the case.
Out of the collection of various particular arrangements and methods of observation that have been developed, each suitable for the study of certain types of objects or designed to bring out particular features of the object, [2- 4] only the conventional bright field and two dark field methods shall be discussed in some more detail. 2.1.1 Bright Field Methods
For dimensional measurements on structures on photomasks or on other features on transparent substrates, so-called bright field imaging is a suitable method [5-8]. Fig. 1 shows the schematic setup of a typical bright field system. For the sake of simplicity it has been assumed that the focal lengths of the condenser lens, the objective lens, and the tube lens are equal; in this case a magnification of 1 results. In a microscope for imaging objects in bright field illumination mode the objective lens has to do double duty, acting as a condenser in the illumination system as well as the imaging objective. An additional diaphragm in the illumination system may then act as condenser aperture in order to provide adequate conditions of spatial coherence. That is of interest, for instance, when the image contrast of topographical structures is to be enhanced.
Fig. 1. Schematic beam path of bright field microscope
2.1.2 Dark Field Methods
As will be shown later, dark field imaging can be advantageous in the context of edge localisation. In a common dark field system only the light scattered or diffracted by object details reaches the image plane. In the reflected-light mode this can, for example, be achieved in a conventional microscope by an elliptical ring mirror at the periphery of the objective that directs the light onto the object at a suitable angle. Incidentally, image patterns with intensity distributions similar to those obtained by the methods mentioned above can also be produced by differential interference contrast or by special adjustment of confocal microscope systems [9,10]. 2.2 Some Terms and Definitions 2.2.1 Precision
A fundamental and of course desirable quality of a measuring instrument is that it delivers the same result for a certain measurement every time. Thus, the consistency of measurement results is an important concept for characterising the quality of a measuring system. This property is usually called precision. The International Organisation for Standardisation (ISO) defines repeatability and reproducibility instead of precision for the variability observed in repeated measurements that are performed under the same conditions [11]. On the one hand the repeatability and reproducibility depend on the
scale and its relation to the image; on the other hand they include the effects of noise and thermal or mechanical drift. 2.2.2 Accuracy
According to [11] and [12], nowadays the reciprocal term uncertainty (in the present context, the total measurement uncertainty) is defined as a combination of the random and systematic uncertainties together with some estimate of the confidence in this number. It is also a parameter associated with the result of a measurement that characterises the dispersion of the values that could be attributed to the measurand. 2.2.3 Traceability
According to the ISO (International Organisation for Standardisation), traceability is the property of a result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties. For dimensional measurements the length reference of the PTB (Physikalisch-Technische Bundesanstalt) and other NMIs (National Metrology Institutes) is the SI unit of length, the definition of the metre [13]. 2.3 Standards and Calibration
From the preceding discussion it is obvious that high precision or good reproducibility is not sufficient to guarantee that a measurement result has high accuracy or a small uncertainty. Measurements performed with an instrument that provides excellently reproducible results can be precisely wrong because of significant systematic errors that have not been identified. But what can be done in order to reduce at least these errors or deviations? An often-used solution is to compensate a measuring instrument for systematic deviations by measuring samples with well-known values of the parameter to be measured. This process is called calibration, and the samples that are specially designed for this purpose are usually called standards. The efficiency of the calibration depends critically on the quality, and particularly on the uncertainty, of the known values of the standard on the one hand, and on its adequate use on the other. Such standards are widely used by various customers in the context of their quality management systems. They are also needed for vendor/buyer communication, for developing specifications and ensuring that products meet specifications, and sometimes for compliance with legal requirements. The essential value of a traceable standard lies in the carefully estimated calibration uncertainty claimed by its purveyor, and in the ultimate user's confidence in that claim. These qualities are then transferred to the user's subsequent in-house measurements. Now the question arises how these standards themselves can be calibrated. The PTB and other NMIs have been working on this task for more than two decades [14-16]. For this purpose specially designed measuring microscopes, sophisticated procedures for evaluating the image information and strategies like cross calibration [17] have been developed in order to reduce the ever-present uncertainties to the lowest possible level. Apart from the optical microscopes that are mainly used for this task, other methods, e.g. scatterometry, scanning electron microscopy and scanning force microscopy, are employed by the working groups for Quantitative Microscopy and Ultra-High Resolution Microscopy at the PTB in Braunschweig, either for cross calibration or in order to obtain additional, more detailed information on the samples to be calibrated. 2.4 Edge Localisation
Frequently used dimensional standards for the calibration of measuring microscopes are pitch or linewidth standards. The pitch value is defined by the distance of congruent edges. The linewidth is the distance between two neighbouring edges of a sample structure. Accurate edge localisation, therefore, is a substantial task in the calibration of dimensional standards. If the measurements are performed by optical microscopy, usually an intensity distribution in the magnified image of the object has to be evaluated to determine the edge positions. The intensity distribution in the image, however, results from the reflected or transmitted profile of the complex amplitude across an object feature and depends on the relative reflectivities or transmittances, on the phase shifts induced by the materials composing the feature and the substrate, and on the coherence conditions. Because of diffraction at the apertures of the imaging system and residual aberrations, even perfect edges are represented by more or less blurred intensity distributions in the image plane. By applying threshold or extreme-value criteria, edge localisation with a precision or reproducibility better than a nanometre can in principle be achieved in well-designed instruments [18]. But what about the uncertainty?
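The threshold criterion mentioned above can be sketched as follows. The 50 % threshold, the blurred-step model and all names are illustrative assumptions; real instruments work on calibrated image scales rather than raw pixels:

```python
import numpy as np

def edge_positions(profile, threshold):
    """Locate edges as subpixel crossings of an intensity threshold,
    using linear interpolation between the two bracketing pixels."""
    edges = []
    for i in range(len(profile) - 1):
        lo, hi = profile[i], profile[i + 1]
        if (lo - threshold) * (hi - threshold) < 0:  # sign change -> crossing
            frac = (threshold - lo) / (hi - lo)
            edges.append(i + frac)
    return edges

# Illustrative blurred profile of a bright bar on a dark substrate
x = np.arange(200.0)
profile = (1.0 / (1.0 + np.exp(-(x - 60) / 3))
           - 1.0 / (1.0 + np.exp(-(x - 140) / 3)))
left, right = edge_positions(profile, 0.5)
print(right - left)  # apparent linewidth in pixels, ≈ 80
```

As the paper stresses, such a criterion is precise but not necessarily accurate: the crossing positions depend on the threshold choice and on the physics of image formation.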
3 Examples Some challenging capabilities and problems of edge localisation using extreme-value criteria can be demonstrated in the context of the calibration of standards that are designed for characterising the tips of scanning force microscopes. These samples consist of silicon chips with surface structures that have been produced by wet etching. They were developed by the IPHT (Institut für Physikalische Hochtechnologie), Jena, in collaboration with the PTB and partners from industry [19, 20]. In the following examples it is assumed that the object structures consist of flat, isolated silicon bars on a silicon substrate. All are 100 nm in height and have perfectly vertical edges. They are imaged in reflected light with quasi-monochromatic radiation at λ = 365 nm. The calculations have been performed with the software package MICROSIM [21], which is based on the RCWA (Rigorous Coupled Wave Analysis) method and was developed at the Institut für Technische Optik, University of Stuttgart. The intensity across the images of silicon bars with linewidths of 1500 nm and 300 nm is shown for bright field imaging and different conditions of polarisation in Fig. 2. Fig. 3 shows the modelled distribution of the field in the neighbourhood of the sample surface and the intensity across the image of a silicon bar with 300 nm linewidth that is imaged in conventional dark field mode with a circularly shaped condenser aperture.
4 Discussion From the modelled distributions shown in Fig. 2 and Fig. 3 it becomes plainly visible that the extreme values of the image intensity are not located exactly at the positions of the edges and that the deviation depends on the polarisation (TE: E parallel to the edge, TM: E perpendicular to the edge). However, in all cases the error of a linewidth measurement remains smaller than 50 nm. Incidentally, calculations using a model based on scalar diffraction theory also do not reveal a deviation larger than 50 nm for bright field imaging of an object corresponding to that of Fig. 2.
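The characteristic lateral scale λ/NA invoked in the conclusion can be worked out for the imaging conditions used in these simulations. This is plain arithmetic, not a result from the paper:

```python
# Characteristic lateral scale lambda/NA at lambda = 365 nm for the
# two simulated imaging conditions
wavelength_nm = 365.0
for name, na in [("bright field, NA = 0.9", 0.9),
                 ("dark field,  NA = 0.85", 0.85)]:
    print(f"{name}: lambda/NA = {wavelength_nm / na:.0f} nm")
# 365/0.9 ≈ 406 nm and 365/0.85 ≈ 429 nm: the 300 nm bar lies below
# this scale, while the 1500 nm bar is well above it
```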
Fig. 2. Modelled image intensity across Si bars with linewidths (a) 1500 nm and (b) 300 nm. Bright field imaging; objective numerical aperture: 0.9, condenser aperture: 0.6
Fig. 3. Modelled intensity for a Si bar with a linewidth of 300 nm. Dark field imaging; objective aperture: 0.85, ring-shaped condenser aperture
There is another unfavourable feature of conventional dark field imaging: for a linewidth not much smaller than the wavelength, the intensities of the extreme values already begin to merge. In contrast, the new dark field imaging methods AGID and FIRM [22] make it possible to separate the signals from the different edges by alternating grazing incidence illumination or by making use of frustrated total internal reflection. But in this case, too, a non-negligible deviation from the true linewidth results, and modelling based on rigorous diffraction theories has to be used to determine the systematic offsets. Alternating grazing incidence dark field illumination (AGID) is assumed for the calculation of the image distributions for the same object as in Fig. 3.
Fig. 4. Image distribution calculated for the same object as in Fig. 3 but with illumination from the left side according to the AGID method
5 Conclusion A basic task in dimensional metrology is edge localisation. The distance of two neighbouring edges of an object structure, for instance, can be determined by evaluating the intensity distribution using threshold or extreme-value criteria. However, the distributions in the images begin to overlap for structures with dimensions below λ/NA, where λ is the wavelength and NA is the numerical aperture of the imaging lens. As a consequence, the distances of the extreme values or of the threshold crossings become strongly dependent on the width of the structures, and for still smaller structures the extrema usually merge into one extremum. By use of a special new type of dark field illumination it becomes possible to separate the intensity maxima representing the edges of single microstructures whose edges would not be resolved by conventional dark field techniques. But with this method, too, the positions of the extreme values in the image distribution have an offset with respect to the true positions of the structure edges. In order to obtain traceable measurements, modelling of the image intensity on the basis of rigorous diffraction theories can be applied to compensate for the residual offsets from the exact edge positions [23]. The most direct connection to the length scale of a measuring microscope is achieved with the object scanning method [24], where the object stage of the system is equipped with a laser interferometer.
6 Acknowledgements The author wants to thank N. Kerwin and G. Ehret for performing the calculations and providing the figures, as well as A. Diener for kind help in preparing the final version of the manuscript.
7 References 1. Hopkins, H. H. (1953) On the diffraction theory of optical images. Proc. Roy. Soc. Lond. A 217: 408-432 2. Pluta, M. (1989) Advanced Light Microscopy, Vol. 2, Specialized Methods. PWN-Polish Scientific Publishers, Warszawa, 494 pages 3. Totzeck, M., Jacobsen, H., Tiziani, H. J. (2000) Edge localisation of subwavelength structures by use of interferometry and extreme-value criteria. Applied Optics 39: 6295-6305 4. Bodermann, B., Michaelis, W., Diener, A., Mirandé, W. (2003) New methods for measurements on photomasks using dark field optical microscopy. Proc. of 19th European Mask Conference on Mask Technology for Integrated Circuits and Micro-Components, GMM-Fachbericht 39: 47-52 5. Nyyssonen, D., Larrabee, R. (1987) Submicrometer linewidth metrology in the optical microscope. J. Research of the National Bureau of Standards, Vol. 16 6. Potzick, J. (1989) Automated calibration of optical photomask linewidth standards at the National Institute of Standards and Technology. SPIE Symposium on Microlithography 1087: 165-178 7. Czaske, M., Mirandé, W., Fraatz, M. (1991) Optical linewidth measurements on masks and wafers in the micrometre and submicrometre range. Progress in Precision Engineering: 328-329 8. Nunn, J., Mirandé, W., Jacobsen, H., Talene, N. (1997) Challenges in the calibration of a photomask linewidth standard developed for the European Commission. GMM-Fachbericht 21: 53-68 9. Lessor, D. L., Hartmann, J. S., Gordon, R. L. (1979) Quantitative surface topography determination by Nomarski reflection microscopy, I. Theory. J. Opt. Soc. Am. 69: 22-23 10. Kimura, S., Wilson, T. (1994) Confocal scanning dark-field polarization microscopy. Applied Optics 33: 1274-1278 11. ISO, Geneva (1993) International Vocabulary of Basic and General Terms in Metrology. 2nd Edition
12. ISO, Geneva (1993) Guide to the Expression of Uncertainty in Measurement. 1st Edition 13. Bureau International des Poids et Mesures (1991) Le Système International d'Unités (SI), 6ième Édition 14. Nyyssonen, D. (1977) Linewidth measurement with an optical microscope: the effect of operating conditions on the image profile. Applied Optics 16: 2223-2230 15. Downs, M. J., Turner, N. P., King, R. J., Horsfield, A. (1983) Linewidth measurements on photomasks using optical image-shear microscopy. Proc. 50th PTB-Seminar Micrometrology PTB-Opt-15: 24-32 16. Mirandé, W. (1983) Absolutmessungen von Strukturbreiten im Mikrometerbereich mit dem Lichtmikroskop. Proc. 50th PTB-Seminar Micrometrology PTB-Opt-15: 3-16 17. Bodermann, B., Mirandé, W. (2003) Status of optical CD metrology at PTB. Proc. 188th PTB-Seminar, PTB-Bericht F-48: 115-129 18. Hourd, A. C. et al. (2003) Implementation of 248 nm based CD metrology for advanced reticle production. Proc. of 19th European Mask Conference on Mask Technology for Integrated Circuits and Micro-Components, GMM-Fachbericht 39: 203-212 19. Hübner, U. et al. (2003) Downwards to metrology in nanoscale: determination of the AFM tip shape with well known sharp-edged calibration structures. Appl. Phys. A 76: 913-917 20. Hübner, U. et al. (2005) Prototypes of nanoscale CD standards for high resolution optical microscopy and AFM. Proc. 5th euspen International Conference 21. Totzeck, M. (2001) Numerical simulation of high-NA quantitative polarization microscopy and corresponding near-fields. Optik 112: 399-406 22. Mirandé, W., Bodermann, B. (2003) New dark field microscopy methods. Proceedings of the 187th PTB-Seminar on Current Developments in Microscopy PTB-Opt-68: 73-86 23. Schröder, K. P., Mirandé, W., Geuther, H., Herrmann, C. (1995) In quest of nm accuracy: supporting optical metrology by rigorous diffraction theory and AFM topography. Optics Communications 115: 568-575 24. Mirandé, W. (1990) Strukturbreiten-Kalibrierung und Kontrolle. VDI-Berichte 870: 47-82
A white light interferometer for measurement of external cylindrical surfaces Armando Albertazzi G. Jr., Alex Dal Pont Universidade Federal de Santa Catarina Metrology and Automation Laboratory Cx Postal 5053, CEP 88 040-970, Florianópolis, SC Brazil
1 Introduction White light interferometry has been extensively used for profiling of technical parts. It combines the high sensitivity of interferometers with the ability to perform absolute height measurements [1-8]. Parts with lateral sizes ranging from a few micrometers to over 100 mm can be measured. It is possible to achieve a height resolution better than one nanometer and measurement ranges up to several millimeters, which makes this technique excellent for industrial applications in geometric quality control. Several commercial systems using this measurement principle are already available on the market. A typical white light interferometer is a Michelson-like configuration. Light from a low-coherence light source is collimated and directed to a partial mirror. Part of the light is directed to a reference surface, usually a high quality mirror, and is reflected back to the imaging device. Light is also directed to the object to be measured, is reflected back to the imaging device and is combined with the light reflected by the reference surface. An interference pattern is only visible in those regions where the optical path difference is smaller than the coherence length of the light source. The locus of the points where the interference pattern is visible is a contour line for a given height. By moving the part to be measured, or the reference mirror, it is possible to scan the entire surface. An algorithm is used to find the position of maximum contrast of the interference pattern for each pixel of the image and to assign a height value. White light interferometers naturally measure in rectangular coordinates: X and Y are associated with the lateral dimensions of the image and Z with the heights. In this paper the authors extend white
light interferometry to measure in cylindrical coordinates. A high precision 45° conical mirror is used both to illuminate the cylindrical parts and to image the resulting interference pattern onto a CCD camera. This configuration opens possibilities for measuring high precision cylindrical or almost cylindrical parts. Either continuous or stepwise surfaces can be measured. The measurement principle, practical considerations and performance results are presented here, as well as a few applications of practical interest.
2 The optical setup 45° conical mirrors have some interesting optical properties. They can be used to optically transform rectangular coordinates into cylindrical coordinates. Collimated light propagating in the Z direction is reflected by the conical mirror to propagate in the radial direction, as Fig. 1 shows. If a cylinder is aligned with the axis of the conical mirror, its image reflected in the 45° conical mirror is transformed in such a way that it is seen as a flat disc. If the observer is located at infinity or a telecentric optical system is used, and if the alignment and mirror geometry are ideal, a perfect cylinder is transformed into a perfect flat disc. If the quality of the optical components and the alignment are good enough, the form deviations of the cylindrical surface are directly mapped onto flatness errors of the flat disc.
Fig. 1. Reflection of a cylinder by a conical mirror: cylindrical surfaces become flat discs
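Under the idealised assumptions above (perfect 45° cone, telecentric viewing, cylinder on the cone axis), the coordinate transform can be sketched as follows. The calibration constants and function name are illustrative, not values from the paper:

```python
import numpy as np

def image_to_cylinder(x_px, y_px, r_cone_inner_mm, pixel_mm):
    """Map image coordinates (pixels, origin on the cone axis) to
    cylindrical coordinates (theta, z) for an ideal 45-degree conical
    mirror viewed telecentrically: the image azimuth equals the
    cylinder azimuth, and the image radius maps linearly to the axial
    position z along the cylinder."""
    rho = np.hypot(x_px, y_px) * pixel_mm   # image radius in mm
    theta = np.arctan2(y_px, x_px)          # azimuth, unchanged by the cone
    z = rho - r_cone_inner_mm               # axial position on the cylinder
    return theta, z
```

This is why a perfect cylinder appears as a perfect flat disc: each circle of constant height z maps to a circle of constant image radius.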
To measure in cylindrical coordinates the white light interferometer is modified in the way presented in Fig. 2. A near-infrared ultra-bright LED is used as a low-coherence light source with a coherence length of about 20 µm. The light is naturally expanded and split by a partial mirror into two components. The first component goes through the partial mirror, is collimated, reaches a reference flat mirror, is reflected back toward the partial mirror and then is imaged onto a high resolution (1300 x 1030) digital camera. The second light component is reflected toward the bottom of the figure by the partial mirror, is collimated and reaches a 45° conical mirror. The conical mirror reflects the collimated light radially toward the cylindrical surface to be measured, located inside the conical mirror. The light is reflected back by the measured surface to the conical mirror and then propagates and is imaged onto the camera. Unlike most white light interferometers, the collimating lenses are placed after the partial mirror, since a larger clear aperture was needed for the image of the measured cylinder reflected by the conical mirror. Both collimating lenses are similar, to minimize the optical aberration differences between the two arms of the interferometer. The outer diameter of the 45° conical mirror is about 80 mm; it was designed to fit the set of diameters and heights of the cylindrical pieces to be measured and was manufactured in aluminium on an ultra-precision turning machine with a diamond tool.
Fig. 2. Modified white light interferometer to measure in cylindrical coordinates
Interference patterns are visible if the optical path difference is smaller than the coherence length of the light source. A high precision motor moves the flat reference mirror across the measurement range, which produces equivalent changes in the radius of a virtual cylinder that scans the cylindrical measurement volume. The peak of maximum contrast of the fringes is found by software for each pixel of the image and represents the height of the flat disc, which is equivalent to the radius where the virtual cylinder crosses the actual measured shape. The measured heights are thus converted to radii and the actual 3D surface is reconstructed in cylindrical coordinates.
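The per-pixel envelope-peak search and the conversion to radii described above can be sketched as follows. The contrast measure is a deliberate simplification (deviation from the per-pixel mean); the paper does not specify its actual peak-detection algorithm:

```python
import numpy as np

def reconstruct_radii(stack, scan_radii_mm):
    """For each pixel of an image stack acquired while scanning the
    reference mirror, pick the frame of maximum fringe contrast and
    assign the corresponding virtual-cylinder radius.
    'stack' has shape (n_frames, H, W); 'scan_radii_mm' has length
    n_frames. The contrast proxy used here (absolute deviation from
    the per-pixel mean intensity) is a simplification."""
    contrast = np.abs(stack - stack.mean(axis=0))  # (n_frames, H, W)
    best = np.argmax(contrast, axis=0)             # frame index per pixel
    return scan_radii_mm[best]                     # radius map, shape (H, W)
```

A production implementation would fit the contrast envelope around the maximum for subpixel (sub-step) radius resolution, analogous to the peak fitting in height-scanning white light interferometers.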
3 Alignments and Calibration To align and calibrate the interferometer a 22.5 mm diameter master cylinder was used as reference. The form error of the master cylinder is known to be better than ±0.5 µm. The master cylinder, mirrors and lenses were carefully aligned to minimize the number of visible residual fringes. The master cylinder was measured ten times. The apparent shape deviation of the master cylinder was computed from the mean values. Since the master cylinder was assumed to be the reference, its deviation from a perfect mathematical cylinder was taken as the systematic error of the interferometer. This systematic error was saved and used to correct all further measurements. The data sets from the ten repeated measurements were also analyzed to estimate typical random error components. The standard deviation was computed separately for each measured point on the cylindrical surface. A typical χ² distribution was obtained for the standard deviations of all measured points. The most frequent value was 0.11 µm and 95% of the values were smaller than 0.27 µm. The influences of other major error sources were analyzed and their contributions are presented in Table 1. The type A component (standard deviation) and the master cylinder uncertainty were the most significant ones. The overall expanded uncertainty was estimated to be about 1.0 µm at a 95% confidence level. The alignment of the part to be measured with respect to the conical mirror axis is not a relevant error source: translations and tilts of the measured cylinder relative to the conical mirror axis can easily be detected and corrected by software. However, a finer alignment reduces the measurement time, since a smaller scanning range is required.
Table 1. Uncertainty budget for cylindrical shape measurement

Symbol | Uncertainty source              | Value    | Distribution | Divider | u       | ν
uA     | Type A (standard deviation)     | 0.27 µm  | normal       | 1       | 0.27 µm | 9
uSE    | Systematic error uncertainty    | 0.09 µm  | normal       | 1       | 0.09 µm | ∞
uCil   | Master cylinder uncertainty     | 0.50 µm  | rectangular  | √3      | 0.29 µm | ∞
uC     | Combined uncertainty            |          | normal       |         | 0.41 µm | 9
U95%   | Expanded uncertainty (k = 2.32) |          | normal       |         | 0.95 µm |
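The combined and expanded uncertainty in Table 1 follow from the usual root-sum-of-squares combination of the individual contributions, which can be checked numerically:

```python
import math

# Contributions from Table 1, after dividing by the distribution divider
u_A = 0.27                   # type A, µm (normal, divider 1)
u_SE = 0.09                  # systematic error uncertainty, µm (normal, divider 1)
u_Cil = 0.50 / math.sqrt(3)  # master cylinder, µm (rectangular, divider sqrt(3))

# Root-sum-of-squares combination and expansion with k = 2.32
u_C = math.sqrt(u_A**2 + u_SE**2 + u_Cil**2)
U95 = 2.32 * u_C

print(f"u_C = {u_C:.2f} µm, U95 = {U95:.2f} µm")
# u_C ≈ 0.41 µm as in the table; 2.32 times the rounded 0.41 µm
# gives the tabulated 0.95 µm
```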
At the present stage, the alignment requires some practice and about 15 minutes. It starts with a coarse alignment, trying to make the piston's image uniformly illuminated. After that, the software starts a loop where it acquires one image, moves the reference flat mirror by about 180° in phase and acquires a second image. The images are subtracted and the result is squared. The result shows white areas in those regions of the measured cylinder where the interference pattern is visible. These white areas are equivalent to pseudo-interferometric fringes, as Fig. 3 shows. Mechanical stages are used for aligning the measured cylinder. The fine alignment is guided by the shape of the pseudo-interferometric fringes and is completed when only one fringe occupies the entire image.
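The core of this alignment loop reduces to a per-pixel squared difference of two phase-shifted frames; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def pseudo_fringes(img_a, img_b):
    """Squared difference of two frames acquired about 180 degrees
    apart in phase: pixels where interference is visible flip in
    intensity between the frames and therefore appear bright, forming
    the pseudo-interferometric fringes used for alignment."""
    diff = np.asarray(img_a, dtype=float) - np.asarray(img_b, dtype=float)
    return diff ** 2
```

Pixels outside the coherence region look the same in both frames and come out dark, so the bright areas directly outline where the virtual cylinder intersects the part.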
4 Measurement examples The interferometer has been successfully applied to measure the cylindrical deviation of gas compressor pistons. Ranging from 17 mm to 26 mm in diameter, these pistons are made of steel and covered with a phosphate coating, which makes the cylinder surface quite rough. Fig. 4 shows the results of a measurement of a gas compressor piston on a much exaggerated scale. Note that the difference between the minimum and maximum radius is only about 7 µm. Quantitative analyses of a longitudinal and a transversal section of this piston are shown in Fig. 5.
Fig. 3. Alignment sequence using pseudo-interference fringes
Fig. 4. Measurement example of a gas compressor piston
Fig. 5. Quantitative analysis of the piston shown in Fig. 4.
Fig. 6 demonstrates that it is possible to measure stepped cylinders. The scale of the left part of the figure was chosen to make both cylindrical surfaces visible. The scale of the right part was chosen to emphasize the form deviation of the cylindrical surface with the larger diameter.
Fig. 6. Measurement results for a stepped piston
No surface preparation at all was needed in any case. For the continuous cylinder the scanning was done in one range only. For the stepped one, the scanning was done in two continuous regions close to the expected diameter values for each area. The scanning time for each cylindrical part was typically three to five minutes.
5 Conclusions This paper shows that it is possible to extend white-light interferometers to measure both continuous and stepped cylindrical surfaces. The optical setup is modified by introducing a high-precision 45° conical mirror to optically transform rectangular coordinates into cylindrical coordinates. A prototype of this new interferometer design was built, aligned and calibrated using a master cylinder as reference. At the current stage the prototype is not optimized, but it was possible to perform preliminary evaluations and apply it to measure pistons of gas compressors. The typical measurement time ranges from three to five minutes. An overall expanded measurement uncertainty of about 1.0 µm was found, which is sufficient for several industrial applications. This configuration opens possibilities for new applications of high interest in mechanical engineering, such as wear measurement on cylindrical surfaces, for either continuous or stepped cylinders. The authors believe that improvements in the scanning mechanism and the use of a better reference cylinder can reduce the expanded uncertainty to below 0.3 µm. Current development efforts are focused on the measurement of inner cylindrical geometries and on algorithms for wear measurement.
6 Acknowledgments The authors would like to thank Analucia V. Fantin, José R. Menezes, Danilo Santos, Fabricio Broering, Ricardo S. Yoshimura and Lucas B. de Oliveira for their help and encouragement, and MCT/TIB, Finep and Embraco for financial support.
Pixelated Phase-Mask Dynamic Interferometers James Millerd, Neal Brock, John Hayes, Michael North-Morris, Brad Kimbrough, and James Wyant 4D Technology Corporation 3280 E. Hemisphere Loop, Suite 146 Tucson, AZ 85706
1 Introduction We demonstrate a new type of spatial phase-shifting, dynamic interferometer that can acquire phase-shifted interferograms in a single camera frame. The interferometer is constructed with a pixelated phase-mask aligned to a detector array. The phase-mask encodes a high-frequency spatial interference pattern on two collinear and orthogonally polarized reference and test beams. The wide spectral response of the mask and the true common-path design permit operation with a wide variety of interferometer front ends, and with virtually any light source, including white light. The technique is particularly useful for measurement applications where vibration or motion is intrinsic. In this paper we present the designs of several types of dynamic interferometers, including a novel Fizeau configuration, and show measurement results.
2 Phase Sensor Configuration The heart of the system consists of a pixelated phase-mask where each pixel has a unique phase-shift. By arranging the phase-steps in a repeating pattern, fabrication of the mask and processing of the data can be simplified. A small number of discrete steps can be arranged into a “unit cell” which is then repeated contiguously over the entire array. The unit cell can be thought of as a super-pixel; the phase across the unit cell is assumed to change very little. By providing at least three discrete phase-shifts in a unit cell, sufficient interferograms are produced to characterize a sample surface using conventional interferometric algorithms.
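The unit-cell tiling described above can be sketched in a few lines; the 2×2 cell and the 1000×1000 array size are assumptions for illustration:

```python
import numpy as np

# Build a pixelated phase-mask by tiling a 2x2 unit cell of discrete
# phase shifts (in degrees) over the whole detector-sized array.
unit_cell = np.array([[0, 90],
                      [270, 180]])   # one possible arrangement of the four steps
rows, cols = 1000, 1000              # detector size assumed for illustration
mask = np.tile(unit_cell, (rows // 2, cols // 2))

# Each super-pixel (unit cell) now contains all four phase shifts, so the
# local phase can be recovered with conventional four-step algorithms.
```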
The overall system concept is shown in Fig. 1 and consists of a polarization interferometer that generates a reference wavefront R and a test wavefront T having orthogonal polarization states (which can be linear as well as circular) with respect to each other; a pixelated phase-mask that introduces an effective phase-delay between the reference and test wavefronts at each pixel and subsequently interferes the transmitted light; and a detector array that converts the optical intensity sensed at each pixel to an electrical charge. The pixelated phase-mask and the detector array may be located in substantially the same image plane, or positioned in conjugate image planes.
Fig. 1. Basic concept for the pixelated phase-mask dynamic interferometer
In principle, a phase-mask as shown in Fig. 1 could be constructed using an etched birefringent plate; however, such a device is difficult to manufacture accurately. An alternative approach is to use an array of micropolarizers. Kothiyal and Delisle [1] showed that the intensity of two beams having orthogonal circular polarization (i.e., right-hand circular and left-hand circular) that are interfered by a polarizer is given by

I(x, y) = ½ [Ir + Is + 2√(Ir Is) cos(Δφ(x, y) + 2αp)]   (1)
where αp is the angle of the polarizer with respect to the x, y plane. The basic principle is illustrated in Fig. 1. From this relation it can be seen that
a polarizer oriented at zero degrees causes interference between the in-phase (i.e., 0°) components of the incident reference and test wavefronts R and T. Similarly, polarizers oriented at 45, 90 and 135 degrees interfere the in-phase quadrature (i.e., 90°), out-of-phase (i.e., 180°) and out-of-phase quadrature (i.e., 270°) components, respectively. The basic principle can be extended to an array format so that each pixel has a unique phase-shift transfer function. Several possible methods can be used to construct the pixelated phase-mask. Nordin et al. [2] describe the use of micropolarizer arrays made from fine conducting wire arrays for imaging polarimetry in the near infrared spectrum. Recently, the use of wire grid arrays has also been demonstrated in the visible region of the spectrum [3]. The planar nature of the conducting strip structure permits using it as a polarizer over an extremely wide incident angle, including zero degrees, and over a broad range of wavelengths, provided the period remains much less than the wavelength. For circularly polarized input light, the micropolarizer array can be used directly. For linearly polarized input light, which is more typical of polarization interferometers, a quarter-wave retarder plate (zero order or achromatic [4]) can be used in combination with the micropolarizer array. The quarter-wave retarder may be adjoined to the oriented polarizer array to form the pixelated phase-mask; however, it can also be separated by other imaging optics.
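The relation above can be checked numerically. The sketch below evaluates Eq. (1) for the four polarizer angles; the beam intensities and the test phase difference are arbitrary values chosen for illustration:

```python
import numpy as np

# Eq. (1): for two circularly polarized beams, a polarizer at angle
# alpha_p introduces an effective phase shift of 2*alpha_p.
def intensity(delta_phi, alpha_p, i_r=1.0, i_s=1.0):
    return 0.5 * (i_r + i_s + 2 * np.sqrt(i_r * i_s)
                  * np.cos(delta_phi + 2 * alpha_p))

dphi = 0.7  # arbitrary phase difference between test and reference (radians)

# Polarizers at 0, 45, 90, 135 degrees give 0, 90, 180, 270 degree shifts:
shifts = [intensity(dphi, np.deg2rad(a)) for a in (0, 45, 90, 135)]
# shifts[0] interferes the in-phase components, shifts[2] the out-of-phase ones.
```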
3 Data Processing The effective phase-shift of each pixel of the polarization phase-mask can have any spatial distribution; however, it is highly desirable to have a regularly repeating pattern. A preferred embodiment for the polarization phase-mask is based on an arrangement wherein neighboring pixels are in quadrature or out-of-phase with respect to each other; that is, there is a ninety-degree or one-hundred-eighty-degree relative phase shift between neighboring pixels. Multiple interferograms can thus be synthesized by combining pixels with like transfer functions. To generate the continuous fringe map that opticians are accustomed to viewing for alignment, pixels with like transfer functions can be combined into a single image or interferogram. The phase difference is calculated at each spatial coordinate by combining and weighting the measured signals of neighboring pixels in a fashion similar to a windowed convolution algorithm. The phase difference and
modulation index can be calculated by a variety of algorithms that are well known in the art [5]. This method provides an output phase-difference map having a total number of pixels equal to (N-w) times (M-v), where w and v are the sizes of the correlation window and N and M are the sizes of the array in the x and y directions, respectively. Thus, the resolution of the phase map is close to the original array size, although the spatial frequency content has been somewhat filtered by the convolution process. Figure 1 illustrates two possible ways of arranging the polarization phase-mask and detector pixels (circular and stacked). We examined the sensitivity of both orientations as a function of phase gradient using a computer model and plot the results in Figure 2. The stacked orientation preferentially reduces the effects of sensor smear because each column of pixels has a constant signal level regardless of the input phase. However, the circular orientation has a significantly reduced sensitivity to phase gradients and is therefore the preferred orientation under most conditions.
Fig. 2. Simulated phase error as a function of fringe tilt for two pixel orientations (circular and stacked), with and without sensor smear.
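The parsing of like-phase pixels and the four-step phase computation described above can be sketched as follows. The 2×2 pixel layout (A = 0°, B = 90°, C = 180°, D = 270°) is an assumption for illustration; the actual unit-cell geometry of the instrument may differ:

```python
import numpy as np

# Separate the four phase-shift families into sub-images and apply the
# standard four-step formula, one phase value per unit cell.
def phase_from_mask(frame):
    a = frame[0::2, 0::2]   # 0 deg pixels
    b = frame[1::2, 0::2]   # 90 deg pixels
    c = frame[0::2, 1::2]   # 180 deg pixels
    d = frame[1::2, 1::2]   # 270 deg pixels
    return np.arctan2(d - b, a - c)   # wrapped phase per unit cell

# Synthetic single frame with a linear tilt and the assumed mask layout:
y, x = np.mgrid[0:64, 0:64]
true_phase = 0.2 * x
step = np.zeros((64, 64))
step[1::2, 0::2] = np.pi / 2
step[0::2, 1::2] = np.pi
step[1::2, 1::2] = 3 * np.pi / 2
frame = 1.0 + np.cos(true_phase + step)

wrapped = phase_from_mask(frame)   # recovers the tilt, wrapped to (-pi, pi]
```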
4 Interferometer Configurations 4.1 Twyman Green
One type of measurement system is illustrated in Fig. 3, wherein the pixelated phase-mask is used in conjunction with a Twyman-Green (TG) interferometer. An afocal relay is used to form an image of the input pupil plane at the location of the pixelated phase-mask. The aperture is preferably selected so that the diffraction-limited spot size at the pixelated phase-mask is approximately 2 effective pixels in diameter in order to avoid aliasing of the interference pattern spatial frequency. This selection of the aperture ensures that spatial frequencies higher than the pixel spacing are not present in the final interference pattern.
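The aperture criterion can be illustrated with a back-of-envelope calculation; the wavelength is an assumption, and the 9 µm pixel pitch is taken from the sensor described later in the paper:

```python
# Choose the aperture so the diffraction-limited spot covers ~2 effective
# pixels, which keeps the interference pattern below the Nyquist limit.
wavelength = 0.633e-6   # m, assumed HeNe-class source (illustrative)
pixel_pitch = 9e-6      # m, from the sensor described in Section 5
target_spot = 2 * pixel_pitch

# Airy-disk diameter ~ 2.44 * lambda * f/#; solve for the working f-number.
f_number = target_spot / (2.44 * wavelength)
print(round(f_number, 1))  # roughly f/11.7
```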
Fig. 3. Twyman-Green implementation of a dynamic interferometer
4.2 Fizeau
The pixelated phase-mask can also be combined with a Fizeau-type interferometer employing both on-axis [5] and off-axis beams (shown in Figure 4). The on-axis configuration achieves very high uncalibrated accuracy due to the true common-path arrangement, but requires the additional step of path matching during alignment. The off-axis arrangement is simple to use but requires careful design of the optical imaging system in order to mitigate off-axis aberrations. We have built and demonstrated both types of systems.
Fig. 4. Fizeau implementation of the dynamic interferometer. The path-matching module can be used with either an on-axis or off-axis configuration.
5 Measurement Results We constructed a pixelated phase-mask sensor using a planar deposition technique. The pixel pitch of both the mask and the CCD was 9 microns, and the array was 1000 x 1000 pixels. The pixelated phase-mask was bonded directly in front of a CCD array.
Fig. 5. Measurements made with the TG interferometer. The checked pattern is a magnified grayscale image showing 24 x 17 pixels. The fringe pattern is synthesized by selecting every fourth pixel. The sawtooth map is generated with a 3x3 convolution phase algorithm.
Fig. 5 shows data measured from the pixelated phase-mask sensor configured as a Twyman-Green interferometer. A flat mirror was used as the test object. The angle between the mirrors was adjusted to give several fringes
of tilt. The magnified image shows an area of 24 x 17 pixels from the CCD array. The grayscale of the image corresponds to the measured intensity at each pixel. The high contrast between adjacent pixels demonstrates the ability to accomplish discrete spatial phase shifting at the pixel level. Every fourth pixel was combined to generate a continuous fringe map or interferogram. A wrapped fringe map was calculated using the 3x3 convolution approach. The resulting sawtooth map, shown in Figure 5, had a total of 974 x 980 pixels, just under the actual CCD dimensions. We measured good fringe contrast with up to 170 fringes of tilt in each direction before the onset of unwrapping errors. Figure 6 shows measurements of a mirror having a 2 meter radius of curvature using the TG interferometer and of a 400 mm diameter mirror using a large-aperture Fizeau interferometer. The mirror and interferometer were located on separate tables for the TG measurement, and a spider was introduced into the cavity for the Fizeau measurement to demonstrate the ability of the technique to successfully process high spatial frequency content without edge distortion or ringing.
Fig. 6. Measurement of a mirror with a 2 meter radius of curvature using the TG interferometer located on a separate table, and measurement of a flat mirror (400 mm dia.) using a large-aperture Fizeau interferometer. Exposures were made in under 60 microseconds.
We performed a series of measurements to determine the instrument repeatability. Ten measurements were made of a test mirror, each consisting of 16 averages. The results of the study are shown in Table 1. The uncalibrated accuracy, defined as the pixel-wise average of all 160 measurements, was limited mainly by the polarization beamsplitter. Precision, defined as the average deviation of each measurement from the calibrated surface on a pixel-by-pixel basis, was below 1 milliwave rms. Repeatability, defined as the standard deviation of the 10 measurements, was below 1/10th of a milliwave rms.
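The precision and repeatability definitions above can be sketched with synthetic data; the noise level, surface statistics and image size are illustrative assumptions, not the instrument's actual figures:

```python
import numpy as np

# 10 synthetic measurements (each assumed to already be a 16-frame average)
# of the same test surface, in waves.
rng = np.random.default_rng(1)
truth = 0.003 * rng.standard_normal((50, 50))              # "true" surface
meas = truth + 0.0007 * rng.standard_normal((10, 50, 50))  # measurement noise

calibrated = meas.mean(axis=0)   # best pixel-wise estimate of the surface

# Precision: rms deviation of each measurement from the calibrated surface.
precision = np.sqrt(((meas - calibrated) ** 2).mean())

# Repeatability: pixel-wise standard deviation over the 10 measurements.
repeatability_map = meas.std(axis=0, ddof=1)
```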
Table 1. Measured performance for the pixelated phasemask interferometer using a flat reference.
Uncalibrated accuracy   0.0039 waves rms
Precision               0.0007 waves rms
Repeatability           0.00008 waves rms
6 Summary We have demonstrated a new type of dynamic measurement system that is comprised of a micropolarizer array and can work with any type of polarization interferometer to measure a variety of physical properties. The unique configuration overcomes many of the limitations of previous single-frame, phase-shift interferometer techniques. In particular, it has a true common-path arrangement, is extremely compact, and is achromatic over a very wide range. We demonstrated high-quality measurements with both Twyman-Green and Fizeau type interferometers. The technique is useful for many applications where vibration or motion is intrinsic to the process.
7 References 1. M. P. Kothiyal and R. Delisle, “Shearing interferometer for phase shifting interferometry with polarization phase shifter,” Applied Optics, Vol. 24, No. 24, pp. 4439-4442, 1985. 2. Nordin et al., “Micropolarizer array for infrared imaging polarimetry,” J. Opt. Soc. Am. A, Vol. 16, No. 5, 1999. 3. See, for example, U.S. Patent No. 6,108,131. 4. Helen et al., “Achromatic phase-shifting by a rotating polarizer,” Optics Communications 154, pp. 249-254, 1998. 5. See, for example, D. Malacara et al., Interferogram Analysis for Optical Testing, Marcel Dekker, Inc., New York, 1998. 6. U.S. Patent No. 4,872,755, October 1989.
Tomographic mapping of airborne sound fields by TV-holography K. D. Hinsch, H. Joost, G. Gülker Applied Optics, Institute of Physics, Carl von Ossietzky University D-26111 Oldenburg Germany
1 Introduction Optical detection of sound utilizes the pressure-induced change in the refractive index n. Phase-sensitive techniques measure the resulting modulation of the optical path at the sound frequency. Thus, any method that responds to the phase modulation of light scattered from a vibrating surface can also be applied to the sensing of sound. Since the pressure fluctuations in airborne sound, however, are extremely small, only an interferometric method of high sensitivity deserves consideration. Time-averaging TV-holography or Electronic Speckle Pattern Interferometry (ESPI) with sinusoidal reference-wave modulation and phase shifting is usually used for vibration studies with amplitudes in the range of only a few nanometers. In the present study we use this technique for an acoustic challenge that requires mapping of a three-dimensional sound field with high spatial resolution. The recordings represent a two-dimensional projection of the refractive-index modulation of the sound field integrated along the viewing direction. The three-dimensional field is obtained from many such projections through the sound field at different viewing angles in a tomographic setup. Inversion by filtered backprojection yields the three-dimensional sound amplitude and phase. These data have been used to optimize the sound field of a parametric acoustic array. Parametric acoustic arrays are built to generate highly directional audio sound by nonlinear interaction of two ultrasonic waves differing in frequency by the audio frequency to be generated. Both these waves are made to overlap in the air volume in front of the sound transducer and create the difference frequency by parametric mixing [1]. Since the wavelength of the ultrasound is smaller by one or two orders of magnitude than the dimensions of its sound source, it can be radiated with high directionality.
The angular diagram of the audio sound radiation is also very narrow, because it is governed by the length of the interaction volume. Due to the low efficiency of the nonlinear process, high-level ultrasound of more than 110 dB is needed. Arrays for applications at high audio sound pressure use piezoelectric transducers of PZT. Since the individual transducer elements are very small (<16 mm in diameter), several hundred of them must be combined to reach the desired audio sound pressure level. Usually, however, they differ slightly in their resonance frequencies and thus radiate with differing phases. This, in turn, degrades the directionality and homogeneity of the ultrasound field – prerequisites for high performance of the method. We are using TV-holography to map the 3D ultrasound field to guide us in the phase adjustment of each individual transducer, which can be done by adjusting its longitudinal position in the array. In our sound-field studies ESPI operates in a range with good prospects for this application, since sound at a pressure level of 110 dB propagating over a path of 5 cm would produce a phase modulation equivalent to a vibration amplitude of 1 nm.
2 Sound-mapping by ESPI Let us briefly recall the basic technique as illustrated in Fig. 1. Primarily we measure the modulation in n integrated over the light path through the sound field. In our case, the field is generated in front of a rigid rough background wall and the light from a cw Nd:YAG laser at 532 nm penetrates the field twice on its path to and from the wall to the CCD camera. A proper reference beam is split off the illumination beam, fed through a fiber wound in several turns around a hollow cylinder of a piezoelectric material and then superimposed in the well-known optical configuration onto the object light. Both waves interfere in an image-plane hologram that is registered by the camera target. The system relies on time-averaging by the CCD and on electronic high-pass filtering and squaring of the video signal. The final image is stored in a computer memory. In time-average operation with sinusoidal phase modulation, the image intensity versus the amplitude a0 of the phase modulation is governed by the square of the zero-order Bessel function, J0²(a0), the so-called characteristic function [2].
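The characteristic function J0²(a0) can be evaluated without special-function libraries via the integral representation of J0; the sketch below also illustrates its zero slope at a0 = 0, which motivates the reference-beam modulation:

```python
import numpy as np

# J0(x) = (1/pi) * integral_0^pi cos(x*sin(theta)) d(theta),
# approximated here by a simple mean over a uniform theta grid.
def bessel_j0(x, n=2000):
    theta = np.linspace(0.0, np.pi, n)
    return np.mean(np.cos(x * np.sin(theta)))

def characteristic(a0):
    return bessel_j0(a0) ** 2   # time-average fringe visibility J0^2(a0)

# Finite-difference slope at a0 = 0: essentially zero, so small sound
# amplitudes produce almost no intensity change without biasing.
eps = 1e-3
slope_at_zero = (characteristic(eps) - characteristic(0.0)) / eps
```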
Fig. 1. Schematic of ESPI set-up for the measurement of sound fields
Due to its zero slope at a0 = 0, this technique is only poorly suited for our case of small-amplitude phenomena. An improved sensitivity is obtained by reference-beam modulation to shift the working point to the location aR of highest slope γ in the characteristic function. This task is performed by the piezoelectric cylinder, which oscillates at the sound frequency and modulates the optical length of the reference-beam fiber. We can now assume a linear relation between the output image intensity I(x,y) and the small amplitude a0(x,y) of the total phase modulation. Taking into account a phase shift Φ(x,y) between the object-wave modulation by the sound and the reference-wave modulation, we obtain the following general relation

I(x, y) = Ib(x, y) + γ(x, y) a0(x, y) cos Φ(x, y)   (1)
Here, Ib(x,y) is a background intensity, while the amplitude a0(x,y) and phase Φ(x,y) of the modulation are the data characterizing the sound effect on the traversing light – the quantities we are interested in. To start with, we have to determine calibration values for γ(x,y). Before turning on the sound we measure intensities I+δ and I−δ at two known amplitude values aR+δR and aR−δR that we introduce with the aid of the PZT device in the reference beam. In the sound measurement we apply four 90° phase steps by means of the piezoelectric cylinder to calculate the amplitude and phase distribution in the 3D field from four measured distributions of I(x,y) for each projection. For the noise reduction that is crucial in the small-amplitude situation we face, each of the images is acquired one hundred times and several averaging procedures are applied. We thus achieved a sensitivity equivalent to a vibration amplitude of 1 nm at 532 nm laser wavelength. It has been mentioned that the measured light phase is the result of integrating the change in refractive index along the projection path of the light through the sound field. In a 3D field we need many such projections to invert for the sound distribution in space. We have implemented a tomographic set-up for the study of the field radiated by a 2D array of transducers. It utilizes the arrangement of Fig. 1, in which the sound source can be rotated around a horizontal axis parallel to the wall. In this way we obtain 2D interferometric data for 180 projection directions at 1° angular separation, adjusted by a computer-controlled rotation stage. Tomographic sound-field mapping needs special treatment, since it is different from ordinary tomography. Usually, the physical quantity that is integrated along the light path does not change with time. The sound amplitude, however, is an oscillating quantity. Along the integration path we thus encounter contributions at differing phase settings and a direct inversion is not possible. Because of this, the integration has to be done separately for the real and imaginary parts of the modulation, which can be obtained from the four measured distributions of I(x,y) at 90° phase shifts [3].
a0 cos Φ = (I180 − I000) / (2γ),   a0 sin Φ = (I090 − I270) / (2γ)   (2)
These so-called quadrature components are linear projections of the equivalent quadrature components of the acoustic field. For each of them a separate tomographic backprojection is made. In our case we have implemented a filtered backprojection as an inverse Radon transform algorithm in Matlab. The two backprojected images Q1(x′,y′,z′) and Q2(x′,y′,z′) are combined according to the usual vector algebra to get the amplitudes and phases in the projected field. The coordinates x′,y′,z′ refer to positions in the backprojected space. The final amplitude a0(x′,y′,z′) – which can be converted into sound pressure via the refractive-index relation – and phase Φ(x′,y′,z′) are calculated according to
a0 = √(Q1² + Q2²),   Φ = arctan(Q2 / Q1) · sgn Q2   (3)

where sgn Q is defined as −1 for Q < 0 and +1 for Q ≥ 0.
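The four-step evaluation and the quadrature combination of Eqs. (2) and (3) can be checked at a single point. The sign convention of the phase steps is assumed here such that Eq. (2) holds exactly, and `arctan2` is used in place of the explicit sgn logic; all names and values are illustrative:

```python
import numpy as np

gamma = 1.3                  # calibration slope from the +/- delta_R steps
Ib = 10.0                    # background intensity (assumed)
a_true, phi_true = 0.5, 0.8  # sound amplitude and phase to recover

# Four images with 90 deg reference phase steps (sign convention assumed):
I = {s: Ib - gamma * a_true * np.cos(phi_true + np.deg2rad(s))
     for s in (0, 90, 180, 270)}

q1 = (I[180] - I[0]) / (2 * gamma)    # quadrature component a0*cos(phi), Eq. (2)
q2 = (I[90] - I[270]) / (2 * gamma)   # quadrature component a0*sin(phi)

a0 = np.hypot(q1, q2)                 # amplitude, Eq. (3)
phi = np.arctan2(q2, q1)              # phase; arctan2 covers the sgn logic
```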
3 Sound field radiated from an array of 37 ultrasonic transducers The following study was made on a field produced by an array of 37 ultrasonic transducers (Fig. 2). Each transducer element is mounted against a spring in a small tube in such a way that its axial position can be adjusted
with a screw from the backside. This allows changing the relative phase of the sound radiated from the element. The 3D set of sound-field data is handled by visualization software that offers arbitrarily oriented cuts through the sound field.
Fig. 2. Sound source consisting of 37 piezoelectric transducers of 16 mm diameter each
We will demonstrate the kind of data obtained and their utilization for the purpose of field optimization in a few illustrations. For Fig. 3, let us begin with the array of transducers assembled as they came from the producer. The figure gives the phase and amplitude of the backprojected field for a plane parallel to the transducer array and directly above it, and for a plane perpendicular to the array. Fig. 3a presents the saw-tooth representation of the acoustic phase in the sound field slightly above the transducers. The positions of the individual transducers in the cluster (cf. Fig. 2) are visible from the structures in the phase-value image. We are close to the surface of the transducers. Since the diameter of the transducers is similar in size to the acoustic wavelength, each of them radiates a spherical-wave pattern indicated by the individual circular phase patterns. From a rough analysis of the gray levels we conclude that there are mainly “gray” and “white” transducers that will need adjustment in a first optimization step. As a consequence, the amplitude in Fig. 3b shows large fluctuations. It, too, allows each element to be identified. The situation is also shown in the data from the cut perpendicular to the array in Fig. 3c (phase) and Fig. 3d (amplitude). This cut passes through the centers of the lower row of four transducers, evident in the irregularities of the phase distribution and the pronounced “plumes” in the amplitude distribution. Obviously the field is far from homogeneous.
Fig. 3. Tomographic analysis of 38.5-kHz ultrasound field above an array of 37 piezoelectric transducers. The transducers have been assembled without any correction to their phases. a and b: phase and amplitude in a plane parallel to the array of transducers and directly above it. c and d: phase and amplitude in a plane perpendicular to the array and through the centers of the lower row of four transducers
We took the data from this entrance test for the phase adjustment of the transducers. Fig. 4 gives the corresponding results after the first corrections. In Fig. 4a the phases of all transducers are well matched. One should not be irritated by the many changes from black to white – in the saw-tooth representation the phase jumps from black to white whenever it passes 2π. The amplitude above the array in Fig. 4b is also much better balanced. The same holds for the amplitude in the perpendicular plane (Fig. 4d). In addition, the phase distribution in Fig. 4c demonstrates an extended region of nicely parallel fringes indicating a well-defined plane-parallel wave.
Fig. 4. Tomographic analysis of the 38.5-kHz ultrasound field as in Fig. 3, but after a first correction of the transducers’ phases
4 Optimization of ultrasonic parametric array To finish, we would like to demonstrate the improvement in the performance of the nonlinear generation of audio sound due to the adjustment of our array of elementary ultrasonic transducers. This can be done best by showing the angular diagram of the radiated sound. Let us have a look at the audio sound at 500 Hz generated when we drive the source with 38.5 kHz and 39 kHz. Fig. 5a gives the angular diagram for the original transducer device, Fig. 5b the improved performance. We see that three original lobes covering an angular range of some 30° turn into a single strong forward-directed lobe with a beam angle less than 10°. This is evidence that we are now obtaining the performance expected from the parametric array under optimum conditions.
Fig. 5. Angular radiation diagram of 500-Hz audio sound produced by parametric mixing of ultrasound; a: phases of transducers uncorrected; b: phases adjusted under optical control
5 Conclusion Tomographic ESPI is a powerful technique for the non-intrusive mapping of a 3D sound field. It has been shown that sophisticated processing of the measurement data from a state-of-the-art ESPI instrument in time-average mode makes it possible to measure the spatial distribution of sound pressure and phase in airborne ultrasound fields. The optical data have been used as input for an acoustic optimization of a parametric array for the nonlinear generation of audio sound that could not have been achieved otherwise.
6 Acknowledgements We thank V. Mellert for the acoustical expertise that he dedicated to the present study and M. Schellenberg for his dedicated contribution to developing the Matlab routines. We also acknowledge financial support by DFG.
7 References 1. Westervelt P.J. (1963) Parametric Acoustic Array, J. Acoust. Soc. Am. 35: 535-537. 2. Vest C.M. (1979) Holographic Interferometry, John Wiley, New York, p. 180. 3. Espeland M., Løkberg O.J., Rustad R. (1995) Full field tomographic reconstruction of sound fields using TV holography, J. Acoust. Soc. Am. 98: 280-287.
Dynamic ESPI system for spatio-temporal strain analysis Satoru Toyooka and Hirofumi Kadono, Dept. Environmental Science & Human Engineering, Saitama University, 255 Shimo-okubo, Saitama, Japan, [email protected]; Takayuki Saitou and Ping Sun, Fujinon Corporation; Tomohisa Shiraishi, Saitama Institute of Technology Center; Manabu Tominaga, Ibaraki National College of Technology
1 Introduction Electronic Speckle Pattern Interferometry (ESPI) is one of the most appropriate methods for highly sensitive deformation measurement of a rough-surface object. Precise quantitative deformation analysis is required for strain analysis, which is our final target. For this purpose, the phase-shifting method is popularly used [1]. In this method, several speckle patterns (at least three frames) with different reference phases have to be taken while the object is stationary in order to determine the speckle phase. This becomes a problem when we want to analyse dynamic deformation continuously. To overcome the problem, we proposed dynamic ESPI, which includes phase analysis based on the Hilbert transformation in the time domain. From the viewpoint of non-destructive testing, it is very important to follow a deformation process continuously. In many cases, it is expected that wave-like propagation of deformation may provide insight into a degradation condition, faults, or defects before they visibly appear. According to meso-mechanics based on the field theory of deformation, the deformation of a solid material can be explained as a synergetic interaction between shearing and rotation of structural deformation elements [2]. In our previous experiments with dynamic ESPI (DESPI), we found wavy propagation of a localized non-uniform deformation band after yielding of the material [3]. The experimental results provided important information about the degradation process of the material. We proposed new algorithms of phase analysis suited to investigating dynamic deformation processes [4]. In this paper, we give a brief introduction to our proposed phase analysis based on the time-domain Hilbert transformation, and present a prototype DESPI system that implements two-dimensional strain analysis in tensile and fatigue experiments on metal samples.
2 Continuous observation of correlation fringes in ESPI
Fig. 1. Spatio-temporal observation of the interference signal (left) and resultant correlation fringes (right)
A simple way to observe a continuous deformation process is to take successive frames of interference speckle patterns and calculate the subtraction between successive frames. Figure 1 shows an example of experimental results. Experiments were done with an in-plane sensitive optical setup. The object under study was a dumbbell-shaped plate of aluminum alloy with an effective length of 80 mm, width of 30 mm, and thickness of 5 mm. The specimen, stretched on a tensile machine at a constant tensile speed of 0.5 µm/s, was illuminated by two collimated beams symmetric about the optical plane containing the tensile axis of the sample. In Fig. 1, the left-side pattern is a spatio-temporal display of the original speckle intensity along the white line drawn in the right-side pattern, which is a correlation fringe pattern obtained by subtracting two speckle images. The amount of deformation per unit fringe spacing is determined by the wavelength of the light and the incident angle of the illuminating laser light. In the experiment, the light source was the SHG of a YAG laser with
a wavelength of 532 nm and an incident angle α = 45°; thus the amount of deformation per unit fringe spacing was 380 nm. The CCD camera was positioned perpendicular to the surface of the specimen, and the interference speckle patterns were taken continuously at a constant acquisition rate of 30 fps. The specimen was in a state of plastic deformation. On the spatio-temporal diagram, we found a clear difference bounded by the central horizontal line. The frequency of intensity variation along the time axis (horizontal) in the upper half is higher than that in the lower half of the diagram. This is caused by the difference in the displacement rate of each point on the surface. The sample plate was fixed at its bottom end and stretched upward. The horizontal border line corresponds to the diagonal bright zone found in the right-side correlation fringe pattern. In this diagonal bright zone, the fringe structure disappears owing to decorrelation of the speckle intensity, caused by the propagation of crystallographic sliding where stress is suddenly released. Fringes observed on both the upper and lower sides of the band display wavy propagation of deformation originating from the diagonal slip band. The band becomes wider and narrower and runs over the sample at an almost constant speed.
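The quoted fringe sensitivity follows from the symmetric two-beam in-plane geometry: the deformation per fringe is λ/(2 sin α). A quick check (the function name is ours, a minimal sketch):

```python
import math

def inplane_fringe_sensitivity(wavelength_nm, alpha_deg):
    """Deformation per fringe for symmetric two-beam in-plane ESPI:
    lambda / (2 * sin(alpha))."""
    return wavelength_nm / (2.0 * math.sin(math.radians(alpha_deg)))

# 532 nm illumination at 45 deg incidence
sensitivity = inplane_fringe_sensitivity(532.0, 45.0)
print(round(sensitivity, 1))  # -> 376.2, i.e. roughly the 380 nm quoted above
```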
3 Phase analysis based on Hilbert transformation in the time domain The spatio-temporal observation of interfering speckles shown in Fig. 1 reveals an interesting feature: the speckle intensity varies randomly in the spatial domain but smoothly in the time domain. In view of this feature, phase analysis in the time domain is preferable. We proposed a phase analysis for dynamic ESPI based on the Hilbert transformation in the time domain [4]. In general, an interference speckle pattern can be represented by

I(x, y, t) = I₀(x, y, t) + I_m(x, y, t) cos{θ(x, y) + φ(x, y, t)}   (1)
where I₀(x, y, t) and I_m(x, y, t) are the bias and modulation intensities, respectively. The speckle phase θ(x, y) varies randomly in the space domain but is almost stationary in the time domain. In contrast, the phase term φ(x, y, t) varies linearly with the object deformation, independent of speckle noise. Consider the temporal variation of the intensity of Eq. (1) at a fixed point on the object and pay attention to the interference term. If a conjugate function of the interference term can be obtained, we can derive the phase term by calculating the arctangent of their ratio,
θ(x, y) + φ(x, y, tᵢ) = tan⁻¹[ HT{I′(x, y, tᵢ)} / I′(x, y, tᵢ) ]   (2)

where I′ = I − I₀ is the interference term in Eq. (1) and HT denotes the Hilbert transform. The phase difference at a time tᵢ₊ₚ is given by subtracting the phase value at the reference time tᵢ,

Δφ(x, y, tᵢ₊ₚ) = φ(x, y, tᵢ₊ₚ) − φ(x, y, tᵢ)   (3)
where the random speckle phase θ(x, y) is cancelled. The speckle phase will change over the long term of an experiment; for this reason, the reference value is renewed after a certain interval. The unwrapped phase values are recorded as two-dimensional images, which represent the spatial development of the deformation. In order to derive the strain distribution, spatial differences are calculated numerically for every deformation map. To reduce the influence of the fluctuations that accompany numerical differentiation, smoothing filters are applied to the deformation maps.
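A minimal numerical sketch of this temporal analysis (Eqs. 1-3): the function names, and estimating the bias I₀ as the temporal mean, are our assumptions, not part of the paper.

```python
import numpy as np

def analytic_signal(x, axis=-1):
    """FFT-based analytic signal x + i*HT{x} (numpy-only Hilbert transform)."""
    n = x.shape[axis]
    X = np.fft.fft(x, axis=axis)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * x.ndim
    shape[axis] = n
    return np.fft.ifft(X * h.reshape(shape), axis=axis)

def temporal_phase(intensity, axis=-1):
    """Wrapped phase theta + phi(t_i) of Eq. (2); the bias I0 is estimated
    here as the temporal mean (an assumption of this sketch)."""
    i_ac = intensity - intensity.mean(axis=axis, keepdims=True)  # I' = I - I0
    return np.angle(analytic_signal(i_ac, axis=axis))            # atan(HT{I'}/I')

def phase_change(intensity, i_ref=0, axis=-1):
    """Unwrapped phase difference of Eq. (3) relative to frame i_ref;
    the random speckle phase theta(x, y) cancels in the subtraction."""
    ph = np.unwrap(temporal_phase(intensity, axis=axis), axis=axis)
    return ph - np.take(ph, [i_ref], axis=axis)

# toy check: a linear phase ramp (16 cycles over 256 frames) is recovered
t = np.arange(256)
I = 100.0 + 50.0 * np.cos(0.3 + 2 * np.pi * 16 / 256 * t)
dphi = phase_change(I[None, :])[0]
```

In a real measurement each pixel's intensity history would be processed this way, frame by frame, with the reference frame renewed periodically as described above.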
4 Experiments with a prototype DESPI system 4.1 Prototype of the DESPI system
Figure 2 is a schematic drawing of a prototype DESPI system. The system consists of (1) the optical box, which is the main body of the system, (2) a YAG laser and a fibre cable to introduce the laser light into the box, (3) a digital camera, which is separated from the optical box, (4) a controller, and (5) a microcomputer.

Fig. 2. Prototype of the DESPI system (optical box with laser, fibre, camera, sample, controller, and computer)

The optical setup inside the optical box consists of a pair of two-beam illuminating systems, one in the horizontal optical plane and one in the vertical optical plane. The light guided to the two branches is switched by a mechanical chopper. PZT actuators are set up in the two branches in order to introduce a constant phase modulation to distinguish the sign of the deformation. They are controlled by the computer through the controller. One of the unique features of our system is that the digital camera is separated from the optical box. This allows any type of digital camera available in a laboratory to be used; in particular, a high-speed camera is expected to measure dynamic phenomena precisely. 4.2 Analysed results
The prototype DESPI was attached to a fatigue machine. We carried out a tensile experiment on a dumbbell-shaped plate of aluminum alloy with half-circle notches, having an effective length of 45 mm, width of 15 mm, thickness of 3 mm, and notch radius of 3 mm. The specimen was stretched on a tensile machine at a constant tensile speed of 3 µm/s, and speckle patterns were taken by the CCD camera at a frame rate of 30 fps. Analysed results of one shot of the deformation process just before yielding are illustrated in Fig. 3. Deformation maps along the tensile axis (dy) and the horizontal axis (dx) and their spatial derivatives, the strain maps (εyy, εxx, εxy), are shown. The complex strain distribution around the notches is displayed precisely.
Fig. 3. Analyzed maps of deformation (dy, dx) and strain (εyy, εxx, εxy)
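The strain maps of Fig. 3 are obtained from the deformation maps by smoothing followed by numerical differentiation, as described above. A sketch of this step (pixel pitch and box-filter size are illustrative assumptions):

```python
import numpy as np

def box_smooth(a, k=3):
    """Separable moving-average filter to tame differentiation noise."""
    kern = np.ones(k) / k
    a = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), 1, a)

def strain_maps(u, v, pitch=1.0, k=3):
    """eps_xx, eps_yy, eps_xy from deformation maps u (along x) and v (along
    the tensile y axis) by smoothing and central differences."""
    u_s, v_s = box_smooth(u, k), box_smooth(v, k)
    dudy, dudx = np.gradient(u_s, pitch)  # rows = y, columns = x
    dvdy, dvdx = np.gradient(v_s, pitch)
    return dudx, dvdy, 0.5 * (dudy + dvdx)

# toy check: uniform 1% / 2% stretches give constant strain in the interior
yy, xx = np.mgrid[0:32, 0:32].astype(float)
exx, eyy, exy = strain_maps(0.01 * xx, 0.02 * yy)
```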
5 Conclusions In the proposed method, the data processing to derive the phase distribution is performed in the temporal domain, considering the temporal history of the interference signal at every single pixel. The resultant phase distributions are converted to a time series of deformation maps, and finally to a time series of strains. The final results give the temporal development of the two-dimensional strain field. The prototype presented here provides a fully automatic system for two-dimensional strain analysis.
6 Acknowledgments We thank the members of the DESPI group of the cooperation council at Saitama University, with whom we discussed eagerly in constructing the prototype.
7 References 1. Rastogi P.K., Digital Speckle Pattern Interferometry and Related Techniques (John Wiley and Sons Ltd., Chichester, 2001) 2. Yoshida S., Toyooka S., Field theoretical interpretation on dynamics of plastic deformation — Portevin-Le Chatelier effect and propagation of shear band, J. Phys.: Condens. Matter 13, 6474-6757 (2001) 3. Toyooka S., Widiasututi R., Zhang Q.C., Kato H., Dynamic Observation of Localized Pulsation Generated in the Plastic Deformation Process by Electronic Speckle Pattern Interferometry, Jpn. J. Appl. Phys. 40-2A, 873-876 (2001) 4. Madjarova V.D., Kadono H., Toyooka S., Dynamic electronic speckle pattern interferometry (DESPI) phase analyses with temporal Hilbert transform, Optics Express 11-6, 617-623 (2003)
Digital holographic microscope for dynamic characterization of a micromechanical shunt switch Valerio Striano1, Giuseppe Coppola1, Pietro Ferraro2, Domenico Alfieri2, Sergio De Nicola3, Andrea Finizio3, Giovanni Pierattini3 and Romolo Marcelli4 1 Istituto per la Microelettronica e Microsistemi – Sez.Napoli, Via P.Castellino, 111 – 80131, Napoli (Italy) 2 Istituto Nazionale di Ottica Applicata – Sez. Napoli, Via Campi Flegrei 34 – 80078, Pozzuoli (Na), (Italy) 3 Istituto di cibernetica del CNR, “E.Caianiello”, Via Campi Flegrei 34 – 80078, Pozzuoli (Na), (Italy) 4 Istituto per la Microelettronica e Microsistemi – Sez. Roma, Via del Fosso del Cavaliere 100 – 00133 Roma, (Italy)
1 Introduction Microelectromechanical (MEM) switches have recently been considered as alternative key elements to PIN diode switches for high-frequency applications. A digital holographic microscope (DHM) is employed as a non-destructive metrological tool for the characterization of micromechanical shunt switches. By means of DHM, we have fully characterized electrically actuated shunt switches, both in static and in dynamic conditions. In particular, the out-of-plane deformation of the bridge has been investigated with high accuracy.
2 RF MEMS switches Switch matrices and phase shifters can take advantage of an all-passive environment, with respect to PIN diode switches, when the switching time is not a critical issue but lower losses are required [1]. Presently, MEM switches can exhibit insertion losses lower than 0.5 dB up to 40 GHz, and switching times on the order of tens of µs.
The investigated MEMS switch, shown in Fig. 1, is based on a bridge that can be actuated by electrodes positioned laterally with respect to the central conductor of the guide. With this solution, the DC signal for the bridge actuation is separated from the RF signal. Generally, a switch biased by a DC voltage V (a voltage difference between the bridge, acting as the ground plane, and the coplanar waveguide that also carries the high-frequency signal) experiences an electrostatic actuation force that is balanced by its mechanical stiffness, measured in terms of a "spring constant". The balance holds, in theory, until the bridge has come down by approximately 1/3 of its initial height. Beyond that point, the bridge is fully actuated, and a value of V less than the initial one suffices to keep it in the OFF (actuated) position.
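The 1/3-of-gap balance point described above follows from the standard parallel-plate actuator model, in which the pull-in voltage is V_pi = sqrt(8 k g₀³ / (27 ε₀ A)). A sketch with illustrative parameters (the numbers are assumptions for illustration, not data for the device under test):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Pull-in voltage of the parallel-plate model: the electrostatic force
    overcomes the spring restoring force once the plate has travelled g0/3."""
    return math.sqrt(8.0 * k * g0 ** 3 / (27.0 * EPS0 * area))

# assumed values: spring constant 10 N/m, gap 3 um, electrode area (100 um)^2
v_pi = pull_in_voltage(10.0, 3e-6, 100e-6 * 100e-6)
print(round(v_pi, 1))  # -> 30.1 (volts)
```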
3 Digital holographic microscopy The purpose of holography is to capture the complete wavefront scattered by an illuminated object and then to reconstruct it in order to obtain quantitative information about its topographic profile. The phase of the scattered wave incorporates information about the shape of the device under investigation, because it is related to the optical path difference. Since all light-sensitive sensors respond to intensity only, the phase is encoded in an intensity fringe pattern through an interferometric approach. Generally, in Digital Holography (DH) the recording device is a digital camera, CCD or CMOS, i.e. a two-dimensional array of N x M pixels. In DH the hologram is spatially sampled and stored as a numerical array in a computer, and the reconstruction process is performed by discretization of the Fresnel approximation of the Rayleigh-Sommerfeld diffraction integral [2].
Fig. 1. Classical shunt switch realization in coplanar configuration (left) and detail of the bridge (right).
4 Experimental setup The holograms are recorded using the experimental setup shown in Fig. 2. It consists of a Mach-Zehnder interferometer. The laser source wavelength is λ = 0.532 µm. The holographic pattern is recorded by a CCD camera at a distance d from the hologram plane. The CCD array has a resolution of 1024 x 1024 square pixels with a pixel size of 6.7 µm.
5 Experimental results With digital holographic microscopy, the behavior of the device has been studied to verify the correct operation of the bridge in working condition and to determine the value of the DC voltage at which complete commutation of the switch is obtained. Moreover, DH inspection has allowed investigation of the shape of the bridge during actuation, the total warpage due to the actuation, possible residual gap, possible hysteresis, and so on. To this aim, holograms of the switch, biased with a voltage ramp in the range [0; 30 V], were recorded.
Fig. 2. Experimental set-up
The phase map and the reconstructed profile of the switch in the actuated condition are shown in Fig. 3. From the analysis of the reconstructed profiles, we have seen that for voltages less than 20 V the bridge is unperturbed. Only at higher voltages, up to 25 V, does the switch start to deflect, but it does not pass the balance zone. This behavior has been found in several of the analyzed devices. We suppose that impurities, such as powder particles, are present in the gap between the bridge and the substrate and do not allow the complete commutation of the switch.
Fig. 3. Phase map (a) and profile (b) of the RF switch
6 Conclusion From the DH analysis we have found an asymmetric behavior of the device, probably due to the presence of impurities in the air gap between the bridge and the substrate. These results show the potential of this non-destructive inspection method, which promises to play a very important role in future processes of dynamic MEMS characterization.
7 References 1. G. M. Rebeiz, Guan-Leng Tan and J. S. Hayden: "RF MEMS Phase Shifters, Design and Applications", IEEE Microwave Magazine, Vol. 3, No. 2, pp. 72-81 (2002) 2. U. Schnars and W. Jüptner: "Direct recording of holograms by a CCD target and numerical reconstruction", Appl. Opt. 33, 179-181 (1994) 3. G. Coppola, P. Ferraro, M. Iodice, S. De Nicola, A. Finizio, S. Grilli: "A digital holographic microscope for complete characterization of microelectromechanical systems", Meas. Sci. Technol. 15, 529-539 (2004)
Digital Holographic Microscopy (DHM): Fast and robust 3D measurements with interferometric resolution for Industrial Inspection Yves Emery, Etienne Cuche, François Marquet, Sébastien Bourquin, Pierre Marquet Lyncée Tec SA, rue du Bugnon 7, CH-1005 Lausanne, Switzerland, Jonas Kühn, Nicolas Aspert, Mikhail Botkin, Christian Depeursinge STI-IOA, EPFL, 1015 Lausanne, Switzerland
1 Nanometer scale measurements for industrial inspection The main microscopy systems for industrial inspection are optical microscopes (including confocal), Scanning Electron Microscopes (SEM), Atomic Force Microscopes (AFM), and interferometers. With recent technological advances in micro-, nano-, and bio-technologies, the demand for nanometer-scale systems is increasing. Micro-optics components, biochips, and Micro Electro- and Opto-Mechanical Systems (MEMS and MOEMS) are examples of devices, now produced in large quantities, which require solutions for efficient quality control. The use of current systems in industrial environments is limited either by limited resolution (e.g. optical microscopes), by sensitivity to vibrations (e.g. AFM and interferometers), by time-consuming measurement protocols (e.g. SEM), or by restricted tolerances with respect to sample positioning (e.g. white-light interferometers). An ideal system should simultaneously offer high precision and resolution, high measurement rates, robustness, ease of use, and non-contact measurement without sample preparation. Digital Holographic Microscopy (DHM) is a new technology for optical imaging and metrology which is particularly well adapted to quality-control applications, since it provides interferometric resolution at a high acquisition rate, with an unrivalled degree of robustness with respect to external perturbations. This paper describes the principles of the technology and its potential for industrial applications. In particular we show that
DHM offers attractive solutions to compensate for variations of the sample orientation (tilt) and vertical position (defocus). Fig. 1. Recording principle of off-axis holograms [2]. There is an angle θ of a few degrees between the reference (R) and object (O) beams. This makes it possible to reconstruct the information from a single hologram acquisition; on-axis holography (i.e. θ = 0) requires the acquisition of several holograms. The portrait is that of Dennis Gabor, Nobel Prize laureate in physics for his discovery of holography.
2 Digital Holographic Microscopes (DHM) The strength of our approach lies in particular in the use of the so-called off-axis configuration (see Fig. 1 and [1]), which enables the whole information to be captured in a single image acquisition, that is to say typically a few tens of microseconds with a standard camera and down to two microseconds with fast cameras. These extremely short acquisition times make DHM systems insensitive to vibrations and ambient light. The instruments can operate without vibration insulation, making them a cost-effective solution for implementation on production lines. Holograms can be acquired with an optical setup for either transparent or reflective samples [1].
Fig. 2. DHM process: A) Acquisition of a single hologram by a DHM and B) digital reconstruction of both 3D phase and intensity images of the observed sample by numerical procedures.
Once holograms are acquired (see Fig. 2 and [2]), they are numerically processed within a tenth of a second to reconstruct simultaneously: (1) the
phase information, which reveals the 3D topography of the object surface with vertical resolution at the nanometer scale along the optical axis, and (2) intensity images, as obtained by a conventional optical microscope. Both images are defined with diffraction-limited resolution in the transverse (0xy) plane and are available in real time (more than 10 frames per second). According to [1], the reconstructed wavefront Ψ(mΔξ, nΔη) is

Ψ(mΔξ, nΔη) = A Φ(m, n) exp[(iπ/λd)(m²Δξ² + n²Δη²)]
    × FFT{ R_D(i, j) I_H(i, j) exp[(iπ/λd)(i²Δx² + j²Δy²)] }_{m,n}   (1)

where m and n are integers (−N/2 ≤ m, n < N/2), FFT is the Fast Fourier Transform operator, A = exp(i2πd/λ)/(iλd), Δξ and Δη are the sampling intervals in the observation plane, Δx and Δy are the pixel sizes of the CCD,

R_D(i, j) = exp[i(k_Dx iΔx + k_Dy jΔy)]   (2)

is the digital reference wave, where k_Dx and k_Dy are the two components of the wave vector, and Φ(m, n) is the digital phase mask.
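The reconstruction formula maps directly onto a few lines of array code. The following is a sketch, not the authors' implementation: the phase mask Φ(m, n) is omitted and the helper names are ours.

```python
import numpy as np

def reconstruct(hologram, wavelength, d, dx, dy, kx=0.0, ky=0.0):
    """Discrete Fresnel reconstruction in the style of Eq. (1). kx, ky are
    the reference-wave components k_Dx, k_Dy; Phi(m, n) is omitted."""
    N, M = hologram.shape
    jj, ii = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2)
    ref = np.exp(1j * (kx * ii * dx + ky * jj * dy))          # digital reference wave
    chirp_in = np.exp(1j * np.pi / (wavelength * d)
                      * ((ii * dx) ** 2 + (jj * dy) ** 2))    # hologram-plane chirp
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(ref * hologram * chirp_in)))
    # sampling intervals in the observation plane
    dxi, deta = wavelength * abs(d) / (N * dx), wavelength * abs(d) / (M * dy)
    chirp_out = np.exp(1j * np.pi / (wavelength * d)
                       * ((ii * dxi) ** 2 + (jj * deta) ** 2))
    A = np.exp(1j * 2 * np.pi * d / wavelength) / (1j * wavelength * d)
    return A * chirp_out * F

# sanity check: a hologram equal to the conjugate inner chirp focuses to a point
lam, d, px = 532e-9, 0.05, 6.7e-6
jj, ii = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32)
h = np.exp(-1j * np.pi / (lam * d) * ((ii * px) ** 2 + (jj * px) ** 2))
psi = reconstruct(h, lam, d, px, px)
```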
3 Numerical focusing Varying d in Eq. 1 enables numerical focusing: several image planes can be reconstructed from a single hologram. This makes it possible to compensate automatically for variations of the specimen height, as may occur on the conveyor tray of a manufacturing process. Another application of digital focusing is the quality control of samples with multiple surface levels, or of transparent thick samples with defects potentially located at different depths. Fig. 3 illustrates this feature. The sample, observed in transmission, is a sapphire window with two internal defects located at different depths, about 10 µm apart. In phase images reconstructed from the same hologram at different distances d, one can observe that only one of the defects is in focus at a time. This procedure is very useful for industrial inspection, since it avoids motorized mechanical translation of the sample along the optical axis for focusing.
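The compensation for specimen height amounts to reconstructing the same hologram at several candidate distances and picking the sharpest plane. A generic sketch (the gradient-energy focus metric and the toy reconstruction stand-in are our choices, not the authors'):

```python
import numpy as np

def sharpness(img):
    """Gradient-energy focus metric on the image amplitude."""
    gy, gx = np.gradient(np.abs(img))
    return float(np.sum(gx ** 2 + gy ** 2))

def autofocus(reconstruct, hologram, distances):
    """Numerical focusing: reconstruct the same hologram at several distances
    d and return the one whose image maximises the focus metric."""
    return max(distances, key=lambda d: sharpness(reconstruct(hologram, d)))

# toy stand-in for a Fresnel reconstruction: blur grows away from d = 2.0
def toy_reconstruct(h, d):
    k = 1 + 2 * int(round(abs(d - 2.0) * 4))
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), 0, h)
    return np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), 1, out)

rng = np.random.default_rng(0)
holo = (rng.random((32, 32)) > 0.5).astype(float)
best_d = autofocus(toy_reconstruct, holo, [1.0, 1.5, 2.0, 2.5, 3.0])
```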
Fig. 3. Phase reconstructions from the same hologram of a sapphire window at two different distances: A) d = -1.4 cm and B) d = -2.4 cm. In-focus defects are outlined in black.
4 Numerical tilt adjustment Eqs. 1 and 2 require the adjustment of several parameters for proper reconstruction of the phase distribution. In particular, kDx and kDy compensate for the tilt aberration resulting from the off-axis geometry or from an imperfect orientation of the specimen surface, which should be accurately perpendicular to the optical axis. Fig. 4 illustrates this feature on phase images reconstructed from the hologram of a cylindrical lens. A numerical fit along the two profiles indicated by dashed lines in Fig. 4A is performed to calculate kDx and kDy. The result is shown in Fig. 4B: the lens appears perfectly aligned perpendicular to the optical axis. Standard mechanical compensation of orientation variations, as performed for example with standard interferometers, is a difficult task requiring high-precision motorized positioning devices. The numerical procedure described here makes DHM easy to use in industrial environments. Fig. 4. Numerical tilt adjustment of a cylindrical micro-lens. Diameter: 160 microns, max height 7.73 microns. The tilt is adjusted along the two dashed profiles indicated in panel A.
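The fit-and-subtract step can be sketched as follows; a real reconstructed phase would first need unwrapping, and the central-profile choice merely mirrors the dashed lines of Fig. 4A:

```python
import numpy as np

def remove_tilt(phase):
    """Fit a linear ramp along the central row and column of an (unwrapped)
    phase map and subtract the resulting plane; this corresponds to
    adjusting kDx and kDy in the digital reference wave."""
    ny, nx = phase.shape
    kx = np.polyfit(np.arange(nx), phase[ny // 2, :], 1)[0]  # slope along x
    ky = np.polyfit(np.arange(ny), phase[:, nx // 2], 1)[0]  # slope along y
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    return phase - kx * xx - ky * yy

# a pure tilt plane is flattened to (numerically) zero
tilted = 0.01 * np.arange(64)[None, :] + 0.02 * np.arange(48)[:, None]
flat = remove_tilt(tilted)
```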
5 Conclusions The use of numerical procedures at a level never reached before in optical imaging enables DHM to overcome two cumbersome alignment procedures of interferometric microscopes: (1) fine focusing of the sample, and (2) sample tilt adjustment. In addition, the technology retrieves the full information from a single hologram acquisition. This enables very short acquisition times, shorter than the typical periods of ambient vibrations, so the instruments can be operated without vibration-insulation tables. Combined with the simplicity of the opto-mechanical design and the absence of moving parts, DHM instruments are new measurement systems with nanometer-scale resolution ready to take their place on production lines. DHM can be used to characterize the shape and surface roughness of a large variety of samples, and its use for industrial inspection should spread markedly over the next years.
6 Acknowledgments The development of the technology has been supported by the Swiss government through CTI grants TopNano 21 #6101.3 and NanoMicro #6606.2 and #7152.1, and by grant 205320-103885/1 from the Swiss National Science Foundation. Measurements of micro-lenses have been realized with the help of the Disco program. The systems are commercialized by Lyncée Tec SA.
7 References 1. E. Cuche, P. Marquet, and C. Depeursinge: "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms", Appl. Opt. 38, 6994-7001 (1999) 2. E. Cuche, P. Marquet, and C. Depeursinge: "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms", Appl. Opt. 38, 6994-7001 (1999)
Digital Micromirror Arrays (DMD) – a proven MEMS technology looking for new emerging applications in optical metrology
Roland Höfling ViALUX GmbH Reichenhainer Str. 88, 09126 Chemnitz, Germany
Cheraina Dunn Texas Instruments Inc. 6550 Chase Oaks Blvd., Plano, TX 75023 U.S.A.
1 Introduction Intensive development in microtechnology paved the way to a mature MEMS solution in the field of microdisplays: the Digital Micromirror Device [1] (DMD) built by Texas Instruments Inc. (TI). The history is sketched in Fig. 1. For the past 10 years, DLP™ applications have focused on high-sales-volume devices, with 6 million data projectors and TV sets sold since 1996. In order to address the numerous needs of optical engineering, TI introduced the DMD Discovery™ general-purpose DLP chipset for the development of new business opportunities [2,3].
Fig. 1. Milestones of DLP development
DMD Discovery™ and DLP™ are trademarks of Texas Instruments Inc.
2 DMD operation and properties The current DMD consists of an array of hundreds of thousands of movable, highly reflective mirrors arranged on a 13.68 µm pitch. The mirror tilt is +12° or -12°, respectively, and the transition between the two states takes less than 10 µs. This extremely short switching time, together with the 12.8 Gbit per second data rate of the latest LVDS devices, yields the impressive potential of this spatial light modulator (SLM) for optical engineering. The DMD can be considered an intensity-only modulator with outstanding contrast values of 3000:1. The bi-stable mirror's output can be either "ON" or "OFF" at any time. The high XGA frame rate of up to 16,300 fps allows for sophisticated and precise gray-value generation if the target processes or detectors integrate over a certain time interval. The pulse-width modulation (PWM) methodology is used to reduce the number of switches and therefore maximize the duty cycle. An example of 3-bit gray-value PWM is shown in Fig. 2. In the case of integrating detectors, it is necessary to synchronize the sampling interval with the DMD output sequence. If this handshaking is given, the DMD modulator will produce illumination levels with perfect linearity and high digital precision. The details of the desired PWM sequence can differ significantly depending upon the application. The requirement for smooth patterns seen by the human eye, in particular, is very different from the objectives in engineering processes like photofinishing or lithography. The metallic reflection in the SLM gives a lot of flexibility with respect to the wavelength and polarization state of the light. The DMD Discovery™ product family covers one order of magnitude in the optical spectrum, ranging from the UV (>350 nm) up to the NIR (<2750 nm). Impressive illumination levels of up to 45 W incident power and 5 W/mm² power density (visible spectrum) can be switched.
Fig. 2. Bi-stable mirror positions (left) and 3-bit gray value generation by PWM (right)
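The 3-bit scheme of Fig. 2 can be written down compactly: each bit plane b is displayed for 2^b unit time slices, so an integrating detector sees a signal proportional to the gray value. A sketch (the slice ordering is illustrative; real DLP sequences split and reorder bit planes):

```python
def pwm_slices(gray, bits=3):
    """Expand a gray value into a binary mirror-state sequence whose
    time-integrated sum over 2**bits - 1 unit slices equals the gray value."""
    assert 0 <= gray < 2 ** bits
    seq = []
    for b in range(bits):
        state = (gray >> b) & 1        # mirror ON/OFF during bit plane b
        seq.extend([state] * 2 ** b)   # hold for the bit's weight in slices
    return seq

# gray value 5 (binary 101): ON 1 slice, OFF 2 slices, ON 4 slices -> 5/7 duty
print(pwm_slices(5))  # -> [1, 0, 0, 1, 1, 1, 1]
```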
Fig. 3. Supporting chipsets for DLP™ technology (left) and turnkey development system: DMD Discovery™ 1100 board with ALP 1.1 high-speed interface (right)
Fig. 3 gives an overview of the DMD supporting electronics. The analog driver chip (DAD1000) is directly paired with the DMD. The data and control lines of the DMD can be served either by the graphics controller chip (DDP) or by the DMD Discovery™ chipset. While the DDP is dedicated to human-viewing applications, the DMD Discovery™ interface is preferred for optical engineering because it allows for customized PWM algorithms. The latest LVDS generation comes with a 200 MHz DDR data interface. Plug&play components are available to speed up and simplify prototyping [4], so that high-frequency array updates from onboard RAM and PWM gray-value patterns can be sequenced without any hardware development.
3 Stimulating applications in optical metrology It is clearly beyond the scope of this paper to go into the details. A number of typical applications are selected just to stimulate thinking with the DMD as a new optical component. 3.1 Light emitting systems for metrology
Very common is the use of the DMD for the generation of intensity patterns quite similar to those in projectors and TVs, exploiting the structured light for full-field triangulation or reflection in 3D shape measurement. The direct high-speed control of the mirrors by the DMD Discovery™ chipset leads to fast 3D cameras and inspection systems. Adding a Fourier-transform lens, the primary 2D intensity pattern is converted into an angular distribution. In this way, the initially bi-stable mirrors give a tunable beam-steering device that has potential for testing electro-optic devices under defined illumination conditions.
The digital nature of synchronized, time-averaged PWM gray values yields a precise gauge of illumination levels for photometry. A tunable color composition of light is achieved by using the SLM to remix the spectral content of a light source. Thus, long-term-stable standard light sources can be emulated for color measurement. 3.2 Light collecting systems for metrology
In general, the DMD is a versatile component for filtering out certain spectral, spatial, or angular contents of light. The replacement of the Nipkow disc in confocal microscopes is a good example of how a MEMS may replace mechanical parts in optical devices: the pin-holes of the spinning disc are modeled by moving clusters of mirrors, gaining both accuracy and flexibility. Quite similar is the moving slit in DMD-based spectrometers. The PWM technique can be reversed for light collection. If built into the detection path, the DMD acts as a precise attenuator able to increase the dynamic range of detectors by several orders of magnitude. Compensation of spatial variations is also feasible. Adaptive configurations have been demonstrated for Shack-Hartmann wavefront sensors, where the DMD models an array of diffractive optical elements. Binary XGA masks that can be switched at 16 kfps make the DMD a good candidate for high-speed pattern matching without the need for a (much slower) camera. Such an electro-optical pattern comparator may also be implemented by applying the DMD masks in the Fourier plane. Finally, mirror-by-mirror scanning of a whole picture is also in the focus of interest. The resulting camera, e.g. for DUV or NIR, would not be extremely fast but would benefit from the high accuracy and dynamic range of a (cooled) single-point detector.
4 References 1. Hornbeck, L J, (1997) Digital light processing for high-brightness, high-resolution applications. Proc. SPIE, 3013:27-41 2. Dudley, D, Duncan, W M, Slaughter, J (2003) Emerging digital micromirror device (DMD) applications. Proc. SPIE, 4985:14-25 3. Dudley, D, Dunn, C (2005) DLP Technologie- nicht nur für Projektoren und Fernsehen. Photonik 1:32-35 4. Höfling, R, Ahl, E (2004) ALP: universal DMD controller for metrology and testing. Proc. SPIE 5289:322-329
Optimised projection lens for use in digital fringe projection
Christian Bräuer-Burchardt, Martin Palme, Peter Kühmstedt, Gunther Notni
Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Straße 7, 07745 Jena, Germany
1 Introduction
The quality of fringe projection is mainly determined by the optical properties of the projection lens, because the projected fringes directly influence the metric of the measurement. In order to achieve the best measuring results, the properties of the lenses used should be optimised according to the measuring conditions. The aim is to achieve minimal distortion from the lens design and/or a correction of the remaining distortion. Usually, only lateral distortion effects in the image plane (2D grid distortion) are taken into account, while the change of the aberration along depth (z-axis) is neglected. Here, we introduce approaches for a distortion compensation that includes the z direction. In typical applications the aperture should be large to give more light to the object, but this reduces the usable depth range along the z-axis. The lens design tries to reduce this conflict with a specifically adapted modulation transfer function (MTF). Digitising effects due to the discrete structure of the microdisplays should also be suppressed by this specific MTF design.
2 Fringe projection sources
The fringes which are projected onto the object are produced by microdisplays (MD) using different physical effects. Examples of MDs are digital micromirror devices (DMD), liquid crystal displays (LCD), liquid crystal on silicon displays (LCoS), and organic light emitting diodes
New Optical Sensors and Measurement Systems
(OLED) [1]. An MD consists of a number of pixels arranged in rows and columns; a typical pixel count is 1024 x 768. The pixels are separated by gaps and may themselves have substructures. Common problems of digital fringe projection are discussed in [2].
3 Lens distortion
Lens distortion occurs in almost all lenses of an optical system, usually including fringe projection systems. Uncorrected lens distortion leads to considerable measuring errors; thus, exact knowledge of the lens distortion is required in order to avoid measuring errors in fringe projection systems. Lens distortion is not included in the pinhole camera model, which is the basis for calculating the measuring values using central projection. Currently, lens distortion is mainly modelled as a function in the 2D image plane, as described e.g. in [3,4], where the modelled image plane has to be perpendicular to the optical axis. Brakhage [5] suggests a correction method including distance information with polynomials. However, lens distortion is often quite complex and cannot be described by the above-mentioned models. A more general model, able to describe any possible 2D distortion, is the model of a distortion grid [6].

3.1 Distance-dependent lens distortion
The calculation of the 3D measuring data in our system is based on triangulation between the rays coming from the fringe projector and the observing camera (see Fig. 1).

Fig. 1. Scheme of the triangulation situation in the analysed case (fringe projector, camera, inclined object plane, z-axis)
It is obvious that only one of the optical axes can be perpendicular to the object (image) plane (here the axis of the camera, α = 90°). For the other optical component (here the projection unit) this is not the case: the object plane is tilted with respect to the optical axis (here β < 90°), resulting in a z-dependence of the distortions (over the range Δz) which has to be taken into account. For this, a new method to determine the lens distortion of projection systems was developed (see also [6]) and is briefly outlined as follows. In order to avoid adjustment errors, the distortion is determined with the device ready for measurement. The projection unit projects a fringe pattern onto a plane surface, e.g. a stone slab, positioned in the centre of the actual measuring volume (position of the object plane). The fringe pattern should be a Gray-code sequence followed by a cosine-pattern sequence in order to obtain calculated phase values. The pattern is recorded by the observing camera of the system. The algorithm uses the projected fringes as a calibration pattern for the determination of the lens distortion. The distortion of the camera lens was largely corrected beforehand and is further improved within the iterative process of determining the projection lens distortion. For more details see [6]. The distortion is determined for the object plane. However, the elements of the distortion matrix correspond to different distances between projector and object within a range of Δz. Thus 3D information enters into the 2D distortion grid, leading to a “2½D matrix”.
Fig. 2. Distortion matrix of the lens used
In Fig. 2 the distortion matrix of the projector lens (for the analysed plane) is shown. Figure 3 illustrates the effect of applying the distortion correction to a measurement of a plane surface with a size of 400 x 400 mm. The deviation from planarity (pv-value) decreased from 350 µm to
32 µm. Typically, the distortion of a projection lens can be reduced by a factor of ten, to less than 0.03 pixel on the projector chip.
Fig. 3. Example of a plane measurement without correction (above) and corrected (below). Correction leads to a reduction of the pv-value by a factor of 12
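The distortion-grid model [6] admits a compact numerical sketch. The following illustrative Python fragment (not the authors' implementation; the grid values and spacing are invented) corrects a pixel position by bilinearly interpolating the node displacements:

```python
import numpy as np

# Sketch of the distortion-grid model: a coarse grid stores displacement
# components (dx, dy) at its nodes, and the correction at an arbitrary pixel
# is obtained by bilinear interpolation within the enclosing grid cell.

def correct_point(x, y, grid_dx, grid_dy, spacing):
    """Return the distortion-corrected position of pixel (x, y)."""
    gx, gy = x / spacing, y / spacing
    i, j = int(gx), int(gy)          # lower-left grid node of the cell
    fx, fy = gx - i, gy - j          # fractional position inside the cell
    def bilerp(g):
        return ((1 - fx) * (1 - fy) * g[j, i] + fx * (1 - fy) * g[j, i + 1]
                + (1 - fx) * fy * g[j + 1, i] + fx * fy * g[j + 1, i + 1])
    return x - bilerp(grid_dx), y - bilerp(grid_dy)
```

A "2½D" extension along the lines of the text would store one such grid per reference distance and interpolate between grids along z as well.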
4 Lens optimisation
In some special arrangements a lens design with minimal distortion is possible. This is the case for the one-to-one magnification often applied in microfringe projection [1].

4.1 Projection Lens
In this case the requirement of widely correcting the aberrations, first of all distortion, can be met by a projection lens of the double-Gauss type (see Fig. 4). This kind of lens is symmetrical about the aperture stop, which enables the reduction of the distortion down to 0.0003%. The lateral colour aberration is less than 0.006 µm, and the defocus caused by field curvature does not exceed 0.01 µm.
Fig. 4. Projection lens of the double-Gauss type
The values of the through-focus MTF at the relevant spatial frequency determine the depth of focus (see Fig. 5). It has been experimentally shown that a minimum contrast of 0.1 is necessary for a successful phase measurement. According to Fig. 5 the depth of focus is about 5 mm, while the illuminated field is 8 mm in diameter for the chosen type of OLED display.
Fig. 5. Through-focus MTF with NA = 0.06
This projection lens was produced and tested for use in an OLED-based fringe projection unit [1]. It successfully met the theoretical predictions.

4.2 Suppression of substructures in the image
A further demand on the lens design was the suppression of the subpixel structure generated by the discrete MD pixels in the optical image of the cosine and Gray-code distributions, in order to avoid nonlinear distortion and digitisation errors. The task was to design a projection lens which suppresses spatial frequencies greater than 10 cycles/mm in the MTF, assuming a pixel pitch of 15 µm. In the case of a very small numerical aperture in image space, the MTF is limited by the diffraction limit, which decreases strongly with spatial frequency. Extending the case discussed in section 4.1, we can use the optical aberrations to generate a well-defined decrease of the MTF, which goes to zero for frequencies above 10 cycles/mm. It should be pointed out, however, that a contrary effect of the decreased MTF is the reduction of the depth of focus, which is defined by the through-focus MTF. This has to be taken into account during the lens design.
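As a reference curve for this design task, the incoherent diffraction-limited MTF can be evaluated directly. The sketch below is illustrative; the 550 nm wavelength is an assumed value, and it shows that a purely diffraction-limited cutoff at 10 cycles/mm would correspond to an image-space NA of about 0.00275:

```python
import numpy as np

# Sketch of the incoherent diffraction-limited MTF (the limiting curve for a
# very small image-space NA).  For an assumed wavelength of 550 nm, a cutoff
# nu_c = 10 cycles/mm requires NA = nu_c * lambda / 2 ≈ 0.00275.

def diffraction_mtf(nu, na, wavelength):
    """Diffraction-limited incoherent MTF; nu in cycles per metre."""
    nu_c = 2 * na / wavelength                 # incoherent cutoff frequency
    x = np.clip(np.abs(nu) / nu_c, 0.0, 1.0)   # normalized frequency
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

print(diffraction_mtf(0.0, 2.75e-3, 550e-9))    # ≈ 1 at zero frequency
print(diffraction_mtf(1.0e4, 2.75e-3, 550e-9))  # ≈ 0 at 10 cycles/mm
```

In the actual design, controlled aberrations rather than a tiny aperture shape the roll-off, at the price of depth of focus as noted above.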
5 Summary and outlook
It was outlined that lens design for fringe projection systems should include a number of aspects in order to achieve optimal 3D measuring results. For an arrangement with a considerable range Δz, the z-dependence of the distortion must be considered; for this case a 2½D approach has been developed. Furthermore, it is possible to realise a distortion-free design for the special magnification of one-to-one. Future work should include a proper 3D lens distortion correction in order to decrease the errors due to distortion when measuring objects with large variations in height.
References
1. Notni, G, Riehemann, S, Kühmstedt, P, Heidler, L, Wolf, N (2004) OLED microdisplays – a new key element for fringe projection setups. Proc. SPIE 5532:170-177
2. Notni, G H, Notni, G (2003) Digital fringe projection in 3D shape measurement – an error analysis. Proc. SPIE 5144:372-380
3. Luhmann, T (2003) Nahbereichsphotogrammetrie. Wichmann Verlag
4. Shah, S, Aggarwal, J K (1996) Intrinsic parameter calibration procedure for a (high distortion) fish-eye lens camera with distortion model and accuracy estimation. Pattern Recognition 29(11):1775-1788
5. Brakhage, P, Notni, G, Kowarschik, R (2004) Image aberrations in optical three-dimensional measurement systems with fringe projection. Applied Optics 43(16):3217-3223
6. Bräuer-Burchardt, C (2005) A new methodology for determination and correction of lens distortion in 3D measuring systems using fringe projection. In: Pattern Recognition (Proc. 27th DAGM Symposium), Springer LNCS
Absolute Calibration of Cylindrical Specimens in Grazing Incidence Interferometry
Klaus Mantel, Jürgen Lamprecht, Norbert Lindlein, Johannes Schwider
Institute of Optics, Information, and Photonics, Staudtstr. 7/B2, 91058 Erlangen, Germany
1 Introduction
Interferometry at grazing incidence is an appropriate tool for the optical metrology of rod objects, especially cylindrical lenses. The light impinges obliquely onto the specimen, which leads to an increased effective wavelength λeff equal to the DOE period p (Fig. 1) [1]. Since the whole circumference of a rod object can be measured, the surface quality together with the relative orientation and position of the surfaces may be determined simultaneously [2]. The absolute accuracy of the obtained surface profiles is limited by systematic errors of the interferometric setup. In this work, the principle of a multiple-positions test (Fig. 2) was combined with the methods of rotational averaging and of measuring difference quotients. The test is performed on hollow cylinders whose mantle surface is completely accessible.
Fig. 1. Setup of a grazing incidence interferometer. The diffractive optical elements are used as null elements and in addition act as beam splitter and beam combiner. The oblique incidence (α ≈ 90°) results in an increased effective wavelength λeff
Fig. 2. A calibration procedure in grazing incidence in general uses four positions of the specimen relative to the interferometer [3]. In the following, this principle is combined with rotational averaging as well as measuring difference quotients
2 The four positions test with rotational averaging
For the rotational averaging, a total of N measurements is performed, with the cylinder rotated by an angle of 2π/N about its axis between successive measurements [4]. This is done in the normal position, the flipped position, and the shifted position. The fourth position of the test is provided by the normal position without averaging (Fig. 3).
Fig. 3. The four positions test with rotational averaging. The rotational averaging is performed in the basic position, the flipped position, and the shifted position
The difference between the normal position and the averaged normal position directly gives the non-rotationally-symmetric part of the surface deviations, eliminating the interferometer aberrations, which remain unchanged under a movement of the surface under test. The difference of the averaged measurements in the shifted and the normal positions yields the difference quotient of the rotationally symmetric deviations, which can be integrated numerically. To reconstruct a conical deviation of the specimen, an extra measurement in the flipped position is required. The difference quotient for an axial shift is a constant, which is ambiguous in interferometry. As the number of averaging steps is necessarily finite (between 10 and 100 steps are generally used), all azimuthal spatial frequencies of the surface deviations that are integral multiples of the averaging frequency are not removed by the averaging process and impair the measurement results. However, the amplitudes of the higher frequencies are small for smooth surfaces, so the separation of the surface shape from the interferometer aberrations should work with satisfactory accuracy.
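The averaging step can be sketched numerically. In the following illustrative Python fragment (not the authors' software), a deviation map on the unrolled cylinder mantle is rotated N times by 2π/N and averaged; subtracting the average from the measurement then isolates the non-rotationally-symmetric part:

```python
import numpy as np

# Illustrative sketch of rotational averaging: S(z, phi) is sampled on the
# unrolled cylinder mantle (axis 0 = z, axis 1 = azimuth phi).  The average
# over N rotations retains only the rotationally symmetric part of S (plus
# any unchanged interferometer error), except for azimuthal frequencies that
# are multiples of N, as noted in the text.

def rotational_average(surface, n_steps):
    """Average over n_steps rotations about the cylinder axis."""
    n_phi = surface.shape[1]
    if n_phi % n_steps:
        raise ValueError("n_steps must divide the azimuthal sampling")
    shift = n_phi // n_steps
    acc = np.zeros_like(surface, dtype=float)
    for k in range(n_steps):
        acc += np.roll(surface, k * shift, axis=1)
    return acc / n_steps
```

For example, a surface composed of a z-dependent symmetric part plus a cos φ deviation averages to the symmetric part alone for any N ≥ 2, since cos φ has azimuthal frequency 1.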
3 The four positions test via measuring difference quotients
An alternative method is to measure the difference quotients of the surface deviations in both the φ- and z-directions and then integrate to obtain the surface deviations (Fig. 4). Again, a measurement in the flipped position is needed in addition in order to reconstruct a conical deviation of the specimen.
Fig. 4. The four positions test via measuring difference quotients
4 Measurement Results
Fig. 5 shows two calibration results. The reproducibility was λeff/430 rms for the rotational averaging and λeff/330 rms for the measurement via difference quotients. The difference of the two results shows an rms value of λeff/160, indicating a satisfactory agreement of both methods and therefore a sufficiently high reliability of the calibration procedures.
Fig. 5. Surface deviations obtained by rotational averaging (left) and by measuring difference quotients (right). The values are given in µm. The difference shows an rms value of λeff/160
5 Acknowledgments
We are grateful to the Deutsche Forschungsgemeinschaft (DFG) for supporting this project.
6 References
1. Birch, K G (1973) Oblique incidence interferometry applied to non-optical surfaces. J. Phys. E 6:1045
2. Mantel, K, Lindlein, N, Schwider, J (2005) Simultaneous characterization of the quality and orientation of cylindrical lens surfaces. Appl. Opt. (to be published)
3. Blümel, T, Elßner, K-E, Schulz, G (1997) Absolute interferometric calibration of toric and conical surfaces. Proc. SPIE 3134:370
4. Evans, C J, Kestner, R N (1996) Test optics error removal. Appl. Opt. 35:1015
Digital holographic interferometer with active wavefront control by means of a liquid crystal on silicon spatial light modulator
Aneta Michalkiewicz, Jerzy Krezel, Malgorzata Kujawinska
Institute of Micromechanics & Photonics, Warsaw Univ. of Technology, 8 Sw. A. Boboli Str., 02-525 Warsaw, Poland
Xinghua Wang, Philip J. Bos
Liquid Crystal Institute, Kent State University, POB 5190, Kent, OH 44242, USA
1 Introduction
In recent years microsystems engineering has become a fast-growing technology. Therefore, methods that measure the deformation of microelements and determine their properties at the micro scale must be developed and implemented. One frequently applied method is digital holographic interferometry (DHI), which facilitates the measurement of an arbitrary displacement vector. However, in a general configuration the proper calculation of the in-plane (u, v) and out-of-plane (w) displacements is troublesome. Therefore, a configuration and analysis method for the simple determination of the u (or v) and w displacements is proposed for the system reported here. Two beams impinging obliquely from two symmetrical directions illuminate a sample in the system sequentially. Additionally, a liquid crystal on silicon spatial light modulator (LCOS SLM) is used in the system as an active device for phase modification, including:
- phase shifting for enhanced digital holography reconstruction,
- tilting of the object wavefront for object contouring.
In this paper, the methods mentioned above and the modified active DHI experimental system are presented. In addition, the results of shape, in-plane, and out-of-plane displacement measurements of a silicon micromembrane loaded by pressure are shown and discussed.
2 Principles
Digital holography provides new possibilities for phase manipulation, which simplify the holographic system for determining an arbitrary displacement vector [1].
Fig. 1. Basic setup for combined displacement and shape measurement by means of DHI. LCOS – liquid crystal on silicon SLM, M – mirror, BS – beamsplitter, RM – rotating mirror
In the system proposed, the beams impinging from the left (Σill) and right (Σilr) directions (Fig. 1) illuminate the specimen symmetrically. They interfere with the reference beam Σref, and two pairs of holograms (before and after object loading) are captured sequentially. In order to obtain the u and w displacement components, the component phase differences Δφr = φr2 − φr1 and Δφl = φl2 − φl1 are calculated [2] and applied to the calculation of the x component of the in-plane displacement (Eq. 1a) and of the out-of-plane displacement (Eq. 1b):

u2 − u1 = λ(Δφl − Δφr) / (4π sin α)  (1a)

w2 − w1 = λ(Δφl + Δφr) / (4π(1 + cos α))  (1b)
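Equations (1a) and (1b) translate directly into a few lines of code. The following Python sketch is illustrative only; the wavelength matches the laser used in Sec. 3, while the illumination angle α = 30° is an assumed example value:

```python
import numpy as np

# Sketch of Eqs. (1a)/(1b): in-plane (u) and out-of-plane (w) displacement
# maps from the left/right phase-difference maps.  The default alpha is an
# arbitrary illustrative illumination angle, not a value from the paper.

def displacements(dphi_l, dphi_r, wavelength=532e-9, alpha=np.deg2rad(30.0)):
    """In-plane (u) and out-of-plane (w) displacement from phase differences."""
    u = wavelength * (dphi_l - dphi_r) / (4 * np.pi * np.sin(alpha))
    w = wavelength * (dphi_l + dphi_r) / (4 * np.pi * (1 + np.cos(alpha)))
    return u, w
```

With symmetric illumination, equal phase differences (Δφl = Δφr) yield u = 0 and a pure out-of-plane displacement, as expected from the geometry.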
The displacement fields are correct under the assumption of a flat object. In the case of 3D object studies, the shape has to be known. Therefore, it is advisable to combine this setup with a shape measurement system. Here the contouring is introduced by tilting the object beam by means of the LCOS SLM (linear phase δ = 2πΔθ·x, Fig. 1). The object contours are defined by Δh = λ/(Δθ sin θ) if the tilt angle is sufficiently small [3]. The LCOS SLM is additionally used to implement phase-shifting digital holography (PSDH), which improves the quality of the digital holograms. The LCOS is placed in the object beam, and the sequential phase shifts in the object beam are introduced by displaying on the LCOS constant phase shifts equal to δi = iπ/2 (equivalent to 32 gray levels, where i = 1, 2, …, 5). In the system proposed, the proper phase-shifting procedure is combined with sensitivity vector modifications, which allow the shape to be determined. On
the other hand, the symmetrical illumination arrangement facilitates convenient measurement of the out-of-plane and in-plane displacements of a sample. If both the object and illuminating beams are controlled by the LCOS, it is possible to build a system that combines all requirements for monitoring an arbitrary displacement of a 3D diffuse object.
3 Experimental setup and results of measurement
The scheme of the experimental setup is shown in Fig. 1. A collimated beam from a laser (λ = 532 nm, 5 mW) is split by a beam splitter to form the reference and object beams. The two illumination beams are formed sequentially by the rotating mirror RM, which reflects the beam towards the left or right LCOS SLM. Light scattered from the object illuminated by Σill or Σilr interferes with the reference wave, and the intensity is captured by a CCD (Δx = 4.8 µm, 1024 x 1024 pixels).
Fig. 2. Measurement of a silicon micromembrane under pressure load: profiles of a) in-plane and b) out-of-plane displacement in the range 0-0.7 kPa with a step of 0.1 kPa; c) 3D representation of the in-plane displacement; d) 3D shape of the out-of-plane displacement
The object under study was a quasi-flat silicon micromembrane (3.5 mm x 3.5 mm) fixed at its edges and loaded by changing pressure. The results of the in-plane and out-of-plane displacement measurements performed for the series of loads in the range 0-0.7 kPa with a step of 0.1 kPa are shown in Fig. 2.
The P-V values of the in-plane displacement increase with load and vary from 50 to 250 nm. They are most significant in the neighborhood of the micromembrane edges. The P-V values of the out-of-plane displacement reach 2500 nm (for p = 0.7 kPa) at the centre of the micromembrane. The capability of the system to measure object shape was demonstrated for a strongly loaded micromembrane (Fig. 3, p = 2 kPa). The P-V shape value after scaling according to the sensitivity vector reached 20 µm (Δh ≈ 10 µm).
Fig. 3. Shape of the silicon micromembrane loaded with 2.0 kPa: a) mod 2π fringes, b) cross-section A-A, c) 3D shape representation
4 Conclusions
The DHI system with symmetric object illumination and active phase modification performed by the LCOS SLM facilitates enhanced studies of the shape and displacement of 3D objects. The presented measurements of a silicon micromembrane proved the validity of such a system for testing microelements and microsystems.
5 Acknowledgments
We gratefully acknowledge the financial support of the EU CoE COMBAT and the Ministry of Scientific Research and Information Technology within the project PBZ-MIN-09/T11/2003.
6 References
1. Kreis, T (1996) Holographic Interferometry. Akademie Verlag, Berlin
2. Michalkiewicz, A, et al. (2005) Phase manipulation and optoelectronic reconstruction of digital holograms by means of LCOS spatial light modulator. Proc. SPIE 5776:144-152
3. Yamaguchi, I, Ohta, S, Kato, J (2001) Surface contouring by phase-shifting digital holography. Optics and Lasers in Engineering 36:417-428
4. Yamaguchi, I, Zhang, T (1997) Phase-shifting digital holography. Optics Letters 22(16):1268-1270
In-Situ Detection of Cooling Lubricant Residues on Metal Surfaces Using a Miniaturised NIR-LED-Photodiode System
Kai-Udo Modrich
Fraunhofer Institut für Produktionstechnik und Automatisierung, Nobelstraße 12, 70569 Stuttgart, Germany
1 Introduction
To produce high-quality products, the technical cleanliness of components is indispensable. In the automobile industry and mechanical engineering, the requirements regarding the functionality and stress of metal surfaces of units and system components have increased in recent years. Thus, the effort spent on cleaning processes has grown in order to reach the required surface cleanliness. Impurities remaining on the components not only diminish the quality of technical systems but also increase production costs due to increased rejects. The cleaning of components therefore contributes considerably to the added value of units and components in the manufacturing process. In the production of high-grade metal surfaces, cleaning residues from the manufacturing process are disturbing even in small layer thicknesses because of the high requirements on surface cleanliness and on functional and coating qualities. The metal-working industry strives to configure more efficient cleaning processes and thus to guarantee a sufficient surface quality. This requires efficient cleaning processes as well as testing methods which control components directly during the manufacturing process. Optical measuring methods offer the necessary potential to realise process-integrated control, but presently available testing methods do not permit reproducible in-situ detection under manufacturing conditions. The development of an optical testing method for in-situ detection of oil-based contamination films is therefore an essential necessity. The analysis of the fields of application revealed the greatest potential in production processes requiring a surface treatment after metal cutting (Fig. 1). With the analysis of the technical metal surface
and the occurring contamination films the boundary conditions were defined, which result from the manufacturing process and have an effect on the development of the testing methods.
Fig. 1. Application scenario
2 Conception
On the basis of this analysis and the defined requirements, available measuring methods were evaluated. As a result, near-infrared spectroscopy was identified as the most promising basis for developing the testing method. Deficits of the available systems arise from their apparatus effort, their susceptibility to the environmental influences of production, and from detection on varying surface roughness. It follows that no reproducible in-situ detection is possible with the available systems under manufacturing conditions. On this account, alternative solutions for the testing facility were conceived based on near-infrared spectroscopy. They comprised miniaturised lighting and detection systems in the near-infrared spectral range combined with an integrated analysis system to eliminate ambient light. Subsequently, the developed solutions were assessed and compared, and the most promising variant was selected with respect to the requirements. As a result of this assessment, a miniaturised NIR-LED photodiode system with an electronic filter was conceived and developed on the basis of a comparison measurement. Based on the developed solution concept and subsystems, a test facility was realised which can carry out surface and punctual measurements. By means of the implemented evaluation algorithm and its flexible functionality, the realised test facility was used as a basic tool for the development of the testing method and for the experiments to verify it.
3 Implementation
Finally, the developed testing method was successfully integrated into an existing production sequence through the development of a pilot installation. A testing facility was integrated into a robot gripper. In the realised pilot installation, the test of surface cleanliness regarding oil-based contamination films takes place during the palletising process. Furthermore, local residues of cooling lubricants are identified, visualised on the monitor and, finally, sorted out by the robot. The testing results obtained with the pilot installation completely fulfil the requirements on surface cleanliness resulting from the manufacturing process.

Fig. 2. Palletising of clean components and sorting out of contaminated components (sequence of operations: localisation, gripping, detection, visualisation, palletising or rejection)
Fig. 3. Robot gripper with integrated NIR-LED-photodiode sensor system (drive system, evaluation system, illumination and detection system, vacuum suction gripper, part)
By means of the developed pilot installation, it could be proved that a reliable in-situ detection of contamination films of >1 µm is achieved directly on the component during the manufacturing process. Until now, an in-situ detection of oil-based contamination films for quality control of surface cleanliness could not be carried out directly on the component during a running manufacturing process. Thus, the developed testing method creates new possibilities for industrial application: increasing product quality through efficient quality control, reducing defects and the resulting costs, and consequently guaranteeing a more economic production of cleaning-sensitive components.
4 Summary
Because of the flexibility of this testing method concerning punctual and surface measurements as well as its low apparatus effort, the testing method described in the present paper offers the possibility to considerably reduce the reject rate of cleaning-sensitive manufacturing in metal working by in-situ detection of oil-based contamination films. The developed in-situ detection represents an efficient and economic method for reproducibly testing surface cleanliness, especially in metal cutting, where partly rough and changing surfaces have made this impossible until now. In manufacturing processes in which the quantification of contamination films within defined limits is of interest for subsequent processes, radiation sources have to be used whose wavelength lies within the range of higher absorption coefficients.
Chromatic Confocal Spectral Interferometry (CCSI)
Evangelos Papastathopoulos, Klaus Körner and Wolfgang Osten
ITO – Institut für Technische Optik, Pfaffenwaldring 9, 70569 Stuttgart, Germany
1 Introduction
In recent years, several methods have been proposed to characterize the geometry of complex surfaces. Due to their enhanced depth and lateral resolution, the optical techniques of Confocal Microscopy (CM) and White-Light Interferometry (WLI) have been established as standard methods for investigating the topography of various microscopic structures. In CM the depth information necessary to construct a 3D image is obtained by selectively collecting the light emerging from a well-defined focal plane, while in WLI the same information is obtained by analyzing the cross-correlation pattern created during the optical interference between the low-coherence light field reflected from the sample and a reference field. Both techniques are based on sequential recording of the depth information, experimentally realized by mechanically scanning the distance between the investigated object and the microscope objective. Nevertheless, simultaneous acquisition of the entire depth information is possible using so-called Focus-Wavelength Encoding [1]. Here, a dispersive element is combined with the objective lens of the microscope to induce a variation of the focal position depending on the illumination wavelength (chromatic splitting). Finally, the light reflected from the sample is spectrally analyzed to deliver the depth information. In WLI the spectrally resolved measurement [2-6] results in an oscillatory waveform (Spectral Interference, SI) whose periodicity encloses the depth information. By use of these chromatic concepts, mechanical scanning is no longer necessary; the measurement is performed in a so-called “single-shot” manner. On the other hand, the method used to acquire the depth information does not affect the properties of the lateral image. In CM and
WLI, focusing with a higher numerical aperture (NA) increases both the lateral resolution and the light-collection efficiency of the detection.
Fig. 1. (a-d) Simulated spectral waveforms arising from the optical interference between two identical fields with a Gaussian spectral profile, focused at various numerical apertures. (e) Combined schematic representation of the focused fields. After reflection from the sample and reference mirrors the two fields are recombined and brought to optical interference (not shown here).
In the present communication, we theoretically address the hybrid technique of Chromatic Confocal Spectral Interferometry (CCSI). As shown in the following, the waveform acquired by SI undergoes a severe loss of contrast when high-NA focusing is employed. Combining the technique of SI with the chromatic concept allows for an effective compensation of this discrepancy, while a large dynamic range is retained for the topographical measurement. Additionally, confocal filtering of the light emerging from the sample allows for an effective suppression of background signals, often encountered in WLI measurements of objects with a high degree of volume scattering (biological samples, thick polymer probes, etc.).
2 Spectral Interference at High Numerical Apertures
We assume two broad-band fields originating from the sample and the reference arms of a Linik WLI microscope and suppose that both fields have identical Gaussian spectra and equal amplitudes (optimal contrast conditions). After reflection from the sample and reference mirrors the two fields are recombined and their optical interference is observed with a spectrometer. To simulate the emerging interference pattern we assume the
focused geometries depicted in Fig. 1e. The interference contribution for a single ray bundle through the optical system is given by [7]:

dI(θ, k, z) = [R1 + R2 + 2√(R1R2) cos(2kz·cos θ + φ)] sin θ dθ  (1)

where θ is the incidence angle of the ray bundle with respect to the axis of the optical system, k is the wavenumber of the light field, z the displacement of the sample with respect to the reference arm, R1 and R2 the reflectivities of the sample and reference mirrors, respectively, and φ the relative phase between the sample and reference fields acquired during their propagation and reflection. The total interference signal recorded with the spectrometer is given by the integral of the ray-bundle contributions dI(θ, k, z) over the whole range of incidence angles:

I(k, z) = V(k) ∫₀^θmax dI(θ, k, z)  (2)
where θmax is the maximum incidence angle, defined by the numerical aperture, and V(k) is the optical spectrum of the interfering fields. For a fixed sample position z = 4 µm and assuming equal reflectivities R1 = R2, we evaluated the integral in Eq. 2 numerically. The results are summarized in Fig. 1(a-d) for various NA. At the relatively low NA = 0.1 (Fig. 1a) the interferometric signal exhibits a pronounced oscillatory behaviour. The frequency of this spectral modulation scales linearly with the displacement z, as is readily seen from Eq. 1. Under these focusing conditions the paraxial approximation holds and the cos θ term can be approximated by unity. However, for higher NA this approximation fails, giving rise to a periodicity that is a function of the incidence angle θ. Consequently, after integrating over θ (Eq. 2), the contrast of the spectral interference is reduced, to such an extent that at NA = 0.7 the modulation is hard to analyze (Fig. 1c, 1d). This loss of spectral modulation creates the necessity for an alternative interferometric scheme, which is the subject of the following section.
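The contrast collapse just described can be reproduced with a short numerical sketch. This is not the authors' code: it integrates the reconstructed ray-bundle form of Eqs. 1-2 with a flat spectrum V(k) = 1, wavenumbers in rad/µm, and equal reflectivities, and compares the spectral contrast at low and high NA.

```python
import numpy as np

def spectral_interferogram(k, z, na, r1=0.5, r2=0.5, n_theta=400):
    """Integrate the ray-bundle contributions over theta (Eq. 2)."""
    theta = np.linspace(1e-6, np.arcsin(na), n_theta)
    dtheta = theta[1] - theta[0]
    w = np.sin(theta) * np.cos(theta)          # ray-bundle weighting (Eq. 1)
    out = np.empty_like(k)
    for i, ki in enumerate(k):
        fringe = np.cos(2 * ki * z * np.cos(theta))
        out[i] = np.sum((r1 + r2 + 2 * np.sqrt(r1 * r2) * fringe) * w) * dtheta
    return out

# wavenumbers spanning roughly 600-900 nm, sample displacement z = 4 um
k = np.linspace(2 * np.pi / 0.9, 2 * np.pi / 0.6, 1500)

def contrast(sig):
    return (sig.max() - sig.min()) / (sig.max() + sig.min())

c_low = contrast(spectral_interferogram(k, z=4.0, na=0.1))
c_high = contrast(spectral_interferogram(k, z=4.0, na=0.7))
# the spectral modulation washes out as the NA grows, so c_low >> c_high
```

Consistent with Fig. 1, the modulation is nearly fully contrasted at NA = 0.1 and largely washed out at NA = 0.7.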
3 Chromatic-Confocal Filtering
To acquire the interference patterns in Fig. 1(a-d) we assumed a constant displacement z of the sample with respect to the reference field. However, the amplitude of modulation depends both on the NA employed, as well as on
the displacement z. This effect is illustrated in Fig. 2, where the modulation depth is plotted as a function of z in the range of 1-10 µm.
Fig. 2. Simulation of the modulation depth observed with spectral interferometry under various focusing conditions. The results are presented as functions of the displacement of the sample with respect to the reference field. The trend of the curves shown here resembles the depth-of-focus function reported for CM and WLI.
These results follow from the numerical evaluation of the interference using Eq. 2 for different displacements z and various focusing conditions. At NA = 0.1 the modulation depth hardly depends on the displacement z (less than 5% reduction over the 1-10 µm range). However, under sharp focusing conditions (higher NA) the dependence becomes more pronounced. At NA = 0.9 (Fig. 2, solid line), the amplitude of the interference is reduced by almost 90% within the first 500 nm. The plots in Fig. 2 resemble the depth-of-focus functions reported for CM and WLI [8]. Despite the loss of modulation at high NA, the interference always exhibits a maximum when the displacement z approaches 0. For z = 0 the interferometric scheme assumed here is perfectly symmetric and the optical interference is perfect. This effect is exploited in CCSI. The basic idea behind this concept is to introduce a (chromatic) wavelength dependence of the focal plane in the sample arm of the interferometer, so that for a wide range of z a part of the broad light spectrum always interferes at equal optical paths with the reference. A possible experimental realization of this concept is schematically depicted in Fig. 3. The basis of the set-up is a standard Linnik-type interferometer. To introduce the chromatic dependence of the focal position, a focusing Diffractive Optical Element (DOE) is added at the back Fourier plane of the objective lens. Insertion of the DOE results in a linear dependence of the focal position on the wavenumber of the illumination field, i.e. the focal length for the "blue" part of the spectrum is larger than that for the "red" part.
Fig. 3. Schematic representation of the modified Linnik interferometer used for monitoring spectral interference. A Diffractive Optical Element, located at the back Fourier plane of one objective lens, separates the focal positions of the different spectral components. To compensate for the group-velocity mismatch of the two fields, the reference field propagates through a dispersive material of variable thickness. The recombined fields are focused on the entrance pinhole of the spectrometer and the interference is recorded by a CCD camera.
Accounting for the combined operation of the DOE with the objective lens, the chromatic shift of the focal position can be summarized by the expression:

z_f(k) = A\,(k - k_0) \qquad (3)

where k0 is the wavenumber corresponding to the center of the optical spectrum and A is a measure of the chromatic splitting. The interference component dI(θ,k,z) then becomes:

dI'(\theta,k,z) = \bigl[R_1 + R_2 + 2\sqrt{R_1 R_2}\,\cos\bigl(2k\,[z - A(k-k_0)]\cos\theta + \varphi\bigr)\bigr]\,\sin\theta\cos\theta\,d\theta \qquad (4)

To derive the above expression, we assumed that the focal length for k0 is the same as that of the reference field, which is assumed to be achromatic.
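Continuing the same numerical sketch (again an illustration rather than the authors' code), Eq. 4 amounts to replacing the fixed displacement z in the interference phase by the chromatic mismatch z − A(k − k0); the parameter values z = 4 µm, A = 7 µm² and NA = 0.7 are those quoted in the text:

```python
import numpy as np

def ccsi_interferogram(k, z, na, A, k0, r1=0.5, r2=0.5, n_theta=300):
    """Integrate the chromatically shifted contributions dI' (Eq. 4) over theta."""
    theta = np.linspace(1e-6, np.arcsin(na), n_theta)
    dtheta = theta[1] - theta[0]
    w = np.sin(theta) * np.cos(theta)
    out = np.empty_like(k)
    for i, ki in enumerate(k):
        dz = z - A * (ki - k0)                 # chromatic-dependent mismatch
        fringe = np.cos(2 * ki * dz * np.cos(theta))
        out[i] = np.sum((r1 + r2 + 2 * np.sqrt(r1 * r2) * fringe) * w) * dtheta
    return out

k0 = 2 * np.pi / 0.75                          # center wavelength 750 nm
k = np.linspace(2 * np.pi / 0.9, 2 * np.pi / 0.6, 2000)
sig = ccsi_interferogram(k, z=4.0, na=0.7, A=7.0, k0=k0)

# the high-contrast wavelet is centered where z = A*(k - k0), i.e. inside
# the 700-800 nm window discussed in the text
k_star = k0 + 4.0 / 7.0
```

Despite the high NA, the interferogram now carries a fully modulated wavelet around k_star, which is exactly what CCSI exploits.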
Fig. 4. a) Simulated interference pattern following the insertion of the DOE. The chromatic splitting of the light field reflected by the sample induces a high-contrast modulation in the vicinity of z = A(k − k0). b) Due to the spatial filtering by the spectrometer pinhole, a confocal spectral filter (dashed line) is imposed on the interference signal (solid line).
Using the same field parameters as in Fig. 1(a-d), we calculated the spectral interference by integrating dI´(θ,k,z) as in Eq. 2. The spectral interferogram acquired for z = 4 µm, A = 7 µm² and NA = 0.7 is depicted in Fig. 4a. A fast-oscillating wavelet is seen in the vicinity of 750 nm. The amplitude of this modulation is maximal when z = A(k − k0), with a contrast practically equal to unity. It has to be noted that not only the position of the spectral interference but also its periodicity encodes the information on the position z. This allows for an accurate measurement of z based both on the envelope of the modulation and on the spectral phase underlying the interferogram. The width of the wavelet in Fig. 4a is determined by the NA employed. At high NA the amplitude of the spectral interference drops faster (Fig. 2) and the wavelet becomes narrower. In Fig. 4a the interference pattern is confined within a spectral window from about 700 nm to 800 nm. Beyond this region the observed waveform originates from the non-interfering Gaussian light spectrum of the individual fields. Usually the entrance of a spectrometer comprises a pinhole or slit (Fig. 3), the opening of which significantly affects the resolution of the instrument. Focusing the two interfering fields onto the pinhole incorporates a spatial confocal filtering. This imposes a modification of the interference, since only the frequency components of the chromatically analyzed spectrum that are sharply focused propagate through the pinhole and contribute to the interference. This effect is included in the calculation by replacing the reflectivity term R2 by:
R_2' = R_2\left[\frac{\sin(u/4)}{u/4}\right]^2, \qquad u = 4k\,[z - A(k-k_0)]\,\sin^2(\theta_{\max}/2) \qquad (5)

The added term resembles the confocal depth-response function [8], except that the axial position has been replaced by the chromatic-dependent coordinate z − A(k − k0). The dashed line in Fig. 4b represents the resulting confocal spectral filter, while the interference signal is depicted as a solid line. The confocal filtering evidently does not affect the spectral contribution of the reference field, since no chromatic dispersion is involved, i.e. all spectral components are equally focused and propagate through the pinhole. The confocal filtering of the sample field is particularly advantageous for reducing the background signal in measurements where a high degree of volume scattering is involved (thick polymer samples, biological samples, etc.). As previously mentioned, in order to accomplish a high-contrast spectral modulation, the optical paths of the sample and reference fields must be approximately equal. This entails the requirement that the optical interference take place within the coherence length of the employed light field. However, the chromatic dispersion introduced by the DOE induces a group-velocity mismatch between the various components of the light spectrum. Upon reflection from the sample, the "blue" part of the spectrum propagates over a longer optical path than the "red" part before recombining with the reference. According to Eq. 3, the group delay of the sample field at the recombiner is a linear function of k. This resembles the effect of Group-Velocity Dispersion (GVD) for propagation within dispersive materials, where for a given geometrical distance the optical path also exhibits a linear dependence on k. Therefore, the group delay of the interfering fields can be matched by simply including a dispersive element in the reference arm of the interferometer, indicated as the GVD compensator in Fig. 3.
With this configuration, the optical path difference can be compensated without the necessity of readjusting the length of the reference arm (no scanning is required).
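The confocal weighting of Eq. 5 can be sketched as follows; the depth-response form used here is an assumption based on the standard confocal response of [8], evaluated at the chromatic coordinate z − A(k − k0) as stated in the text:

```python
import numpy as np

def confocal_filter(k, z, na, A, k0):
    """Confocal spectral filter of Eq. 5 (assumed sinc^2 depth response)."""
    theta_max = np.arcsin(na)
    u = 4.0 * k * (z - A * (k - k0)) * np.sin(theta_max / 2.0) ** 2
    # np.sinc(x) = sin(pi*x)/(pi*x), so this evaluates [sin(u/4)/(u/4)]^2
    return np.sinc(u / (4.0 * np.pi)) ** 2

k0 = 2 * np.pi / 0.75                          # center wavelength 750 nm
k = np.linspace(2 * np.pi / 0.9, 2 * np.pi / 0.6, 1000)
f = confocal_filter(k, z=4.0, na=0.7, A=7.0, k0=k0)
# the filter peaks (value ~1) where the chromatic focus matches the sample,
# z = A*(k - k0), and strongly suppresses the rest of the spectrum
```

This reproduces the dashed curve of Fig. 4b qualitatively: the pinhole passes only the spectral band that is sharply focused on the sample.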
4 Conclusion
In the present communication, we addressed the hybrid technique of Chromatic-Confocal Spectral Interferometry (CCSI). A number of recent developments have proved the feasibility of encoding the depth information of topological measurements into the spectrum of broad-bandwidth low-coherence light sources. The loss of contrast arising in SI measurements when a high NA is employed was discussed, as well as how this discrepancy is lifted by incorporating a diffractive focusing element (DOE) into a typical Linnik interference microscope. A qualitative description of the emerging chromatic spectral interference was also presented by means of numerical simulations. On the basis of these simulations, a number of issues were raised concerning the confocal filtering of the light reflected from the sample and the compensation of the group-velocity mismatch induced by the DOE. The functional proposals presented in this discussion aim to contribute towards the development of so-called "single-shot" metrology suitable for the dynamic topology characterization of micro-structured surfaces.
5 References
1. Akinyemi, O, Boyde, A, Browne, MA, (1992) Chromatism and confocality in confocal microscopes. Scanning 14:136-143
2. Mehta, DS, Sugai, M, Hinosugi, H, Saito, S, Takeda, M, Kurokawa, T, Takahashi, H, Ando, M, Shishido, M, Yoshizawa, T, (2002) Simultaneous three-dimensional step-height measurement and high-resolution tomographic imaging with a spectral interferometric microscope. Appl. Opt. 41:3874-3885
3. Calatroni, J, Guerrero, AJ, Sainz, C, Escalona, R, (1996) Spectrally resolved white-light interferometry as a profilometry tool. Opt. & Laser Tech. 28:485-489
4. Sandoz, P, Tribillon, G, Perrin, H, (1996) High-resolution profilometry by using phase calculation algorithms for spectroscopic analysis of white-light interferograms. J. Mod. Opt. 43:710-708
5. Li, G, Sun, PC, Lin, C, Fainman, Y, (2000) Interference microscopy for three-dimensional imaging with wavelength-to-depth encoding. Opt. Lett. 25:1505-1507
6. Pavlícek, P, Häusler, G, (2005) White-light interferometer with dispersion: an accurate fiber-optic sensor for the measurement of distance. Appl. Opt. 44:2978-2983
7. Kino, GS, Chim, SSC, (1990) Mirau correlation microscope. Appl. Opt. 29:3775-3783
8. Corle, TR, Kino, GS, (1996) Confocal scanning optical microscopy and related imaging systems. Academic Press (San Diego)
A simple and efficient optical 3D-Sensor based on "Photometric Stereo" ("UV-Laser Therapy") F. Wolfsgruber1, C. Rühl1, J. Kaminski1, L. Kraus1, G. Häusler1, R. Lampalzer2, E.-B. Häußler3, P. Kaudewitz3, F. Klämpfl4, A. Görtler5 1 Max Planck Research Group, Institute of Optics, Information and Photonics, University of Erlangen-Nuremberg; 2 3D-Shape GmbH, Erlangen; 3 Dermatologische Klinik und Poliklinik der Ludwig-Maximilians-Universität München; 4 Bayerisches Laserzentrum gGmbH, Erlangen; 5 TuiLaser AG, München
1 Introduction
We report on the present state of our research project "UV Laser Therapy": its objective is the precise, sensor-controlled treatment of skin lesions such as dermatitis and psoriasis, using high-power (UV) excimer laser radiation. We present an optical 3D sensor that measures the lesion areas. The acquired 2D and 3D information is used to control the laser scanner for the reliable exposure of the lesion areas. The medical and commercial requirements for the sensor and the algorithms are high reliability, accurate and fast identification of the lesions and, last but not least, low cost. These requirements can be satisfied by a sensor based on "Photometric Stereo".
2 The Treatment System
The treatment system (see Fig. 1) consists of the 3D sensor, the laser scanner, and the UV laser. The 3D sensor measures the skin of a patient. The output of the sensor is the slope of the surface in each pixel. These data are used to control the laser so that the correct radiation dose is applied to the skin. The slope and the additionally acquired 2D color images are used to automatically detect the diseased skin regions. The scanner directs the beam exclusively over the identified regions. Thus, the exposure dose on healthy skin is reduced to a minimum.
Fig. 1. Sketch of the complete treatment system
The laser (made by TuiLaser, Munich) makes it possible to apply high radiation doses onto the skin, such that the number of treatments can be reduced in comparison to conventional light therapy. The whole therapy is more comfortable for the patient and reduces the time required of the physician.
3 Photometric Stereo
Photometric Stereo [1] is a simple and fast principle with high information efficiency [2]. It measures the surface slope precisely and allows surface anomalies to be detected with high sensitivity. The object is illuminated from four different directions (see Fig. 2). A CCD camera grabs an intensity image for each illumination direction. Based on the different shadings in each image, the surface normal can be computed for each camera pixel:
\mathbf{n}(x,y) = \frac{1}{\rho}\,\bigl(S^{\mathsf T}S\bigr)^{-1}S^{\mathsf T}\,\mathbf{E}(x,y) \qquad (1)

n = surface normal, ρ = local reflectance, S = illumination matrix, E = irradiance vector
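For a single pixel, Eq. 1 can be sketched as the least-squares recovery below. The four illumination directions, the normal and the albedo are hypothetical values chosen only to illustrate the recovery; with four lights the 4×3 system is overdetermined and solved via the pseudo-inverse (SᵀS)⁻¹Sᵀ.

```python
import numpy as np

def surface_normal(S, E):
    """Photometric-stereo solution of Eq. 1 for one pixel.
    S: 4x3 matrix of unit illumination directions, E: 4 measured irradiances."""
    g = np.linalg.inv(S.T @ S) @ S.T @ E   # g = rho * n
    rho = np.linalg.norm(g)                # local reflectance (albedo)
    return g / rho, rho

# hypothetical light directions (rows are unit vectors) and pixel values
S = np.array([[ 0.5,  0.0, 0.866],
              [-0.5,  0.0, 0.866],
              [ 0.0,  0.5, 0.866],
              [ 0.0, -0.5, 0.866]])
n_true = np.array([0.1, 0.2, 0.9747])      # unit-length surface normal
rho_true = 0.8
E = rho_true * (S @ n_true)                # noise-free Lambertian shading
n, rho = surface_normal(S, E)              # recovers n_true and rho_true
```

In the noise-free Lambertian case the normal and albedo are recovered exactly; with real camera data the redundancy of the fourth light suppresses noise and shadowed measurements.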
Fig. 2. Principle of Photometric Stereo
Fig. 3. Left: camera image; Right: intensity-encoded slope image
A complete measurement (3D data and RGB image) takes about 0.7 s. Fig. 3 displays a measurement example of a patient's knee with a psoriasis lesion. The change of the surface structure caused by the lesion is clearly observable in the slope image. In addition, we need accurate slope data to control the power of the laser. High accuracy is difficult to achieve because the method requires precise knowledge of the illumination parameters (direction and power). Our approach to obtaining these parameters consists of a new calibration procedure utilising a set of calibration gauges and of new evaluation algorithms [3]. The result of the calibration is displayed in Fig. 4.
Fig. 4. Result of the calibration: The measured slope of a tilted plane is more accurate with the additional direction calibration
4 Automatic Detection of the Lesions
The automatic detection of diseased skin is a segmentation problem. We have to distinguish healthy skin from psoriasis lesions and from background. The background is segmented with a simple empirical rule [4]. The segmentation of the skin itself is more difficult because of the different manifestations of psoriasis. Additionally, the appearance of the psoriasis varies during the therapy. We achieve the best results by using a k-means clustering algorithm [5]. In a final step, remaining holes inside a lesion are closed automatically.
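The clustering step can be illustrated with a minimal k-means sketch. This is not the authors' implementation, and the choice of pixel RGB triples as feature vectors is our assumption; it merely shows how k = 3 clusters (lesion, healthy skin, background) would be formed.

```python
import numpy as np

def kmeans(X, k=3, n_iter=50, seed=0):
    """Plain k-means: alternate nearest-center assignment and center update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# toy data: three synthetic "color" clusters standing in for lesion,
# healthy skin and background pixels
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.05, size=(100, 3))
               for c in ([0.9, 0.6, 0.5], [0.8, 0.3, 0.3], [0.2, 0.2, 0.2])])
labels, centers = kmeans(X, k=3)
```

On real images the cluster whose center is closest to the expected lesion color would be taken as the lesion mask, followed by the hole-closing step mentioned above.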
Fig. 5. Left: region of interest of the camera image from Fig. 3; Right: the white line displays the result of the discrimination process
The reliability of this method depends on the manifestation of the psoriasis. In some cases (see Fig. 5), healthy skin is detected as part of the lesion (or the other way round). To overcome this problem, we are investigating additional methods that also analyze the local surface structure of the skin. Untreated lesions have a significant surface texture which can be detected by local frequency analysis methods such as Gabor filters. First experiments show promising results.
5 Conclusions
We presented an improved sensor setup based on Photometric Stereo that satisfies all the requirements (speed and accuracy) for "UV-Laser Therapy". The present study shows that the discrimination between healthy skin and skin lesions will be reliable in many cases. With a combination of 2D and 3D methods, we expect to obtain a reliable procedure for the automatic treatment of patients.
6 Acknowledgements
This work is supported by the "Bayerische Forschungsstiftung".
7 References
1. B. K. P. Horn, M. J. Brooks, Shape from Shading, MIT Press (1989)
2. C. Wagner, G. Häusler, Information theoretical optimization for optical range sensors. Applied Optics 42(27):5418-5426 (2003)
3. C. Rühl, Optimierung von Photometrischem Stereo für die 3D-Formvermessung, Diploma Thesis, University of Erlangen (2005)
4. G. Gomez, E. Morales, Automatic Feature Construction and a Simple Rule Induction Algorithm for Skin Detection, Proceedings of the ICML Workshop on Machine Learning in Computer Vision: 31-38 (2002)
5. R. Cucchiara, C. Grana, M. Piccardi, Iterative fuzzy clustering for detecting regions of interesting skin lesions, Atti del Workshop su Intelligenza Artificiale, Visione e Pattern Recognition (in conjunction with AI*IA 2001): 31-38 (2001)
6. P. Asawanonda, R. R. Anderson, Y. Chang, C. R. Taylor, 308 nm Excimer Laser for Treatment of Psoriasis, Arch. Dermatol. 136:619-624 (2000)
APPENDIX New Products
YOUR PARTNER IN 3D MEASUREMENT
Portable 3D measurement device for industrial and medical applications
Your advantages:
- Mobile hand-held 3D measurement device
- No tripod necessary
- Compact and flexible
- Very easy to use
- Available for industrial measurements and life sciences
Key features:
- 3D measurement within milliseconds
- FireWire color camera
- Different measurement fields available
- Used with a laptop computer
- Complete software solution for different applications
- Database
CONTACT GFMesstechnik GmbH
Warthestr. 21, 14513 Teltow / Berlin Tel.: +49 (0) 3328-316760 Fax: +49 (0) 3328-305188 Web: www.gfmesstechnik.com E-Mail: [email protected]
3-D-Camera
The VEW 3-D-Camera is a miniaturized fringe projection device for robust surface topometry. Due to the small size of 20 x 22 x 14 cm and an opening angle larger than 90°, the system can be optimally utilized under difficult measurement conditions with limited space and/or small distance to the object's surface. Because of the large opening angle, the measurement field size reaches 1 x 1 m at a distance of 1 m with a measurement resolution of 0.1 mm. The allowed measurement distance ranges from 20 cm up to 4 m. The integrated high-power light source illuminates a measurement area of 2 x 2 m, rich in contrast, even under difficult environmental conditions. The measurement process is fully automated and delivers accurate X-, Y-, Z-coordinates.
Scope of delivery: 3D-Camera, tripod, system case, measurement and evaluation software, computer (optional)
Technical facts: warp-resistant aluminum profile package, size 200 x 220 x 140 mm, with quick fastener (exchangeable adapter); measurement field diameter: 200 ... 4000 mm
Projector: pixel quantization 12-bit grays; resolution 1024 x 768 pixels; interface DVI; objective f = 12.5 mm
Camera: 8-bit grays; 1040 x 1392 pixels (opt. 1200 x 1600); IEEE 1394 (FireWire); objective f = 6.5 mm
Illumination: 160 W UHP lamp (switchable hi/lo intensity)
The objectives can be exchanged to adapt the 3D-Camera to special measurement tasks.
VEW Vereinigte Elektronikwerkstätten GmbH, Edisonstraße 19 * Pob: 330543 * 28357 Bremen
Fon: (+49) 0421/271530, Fax: (+49) 0421/273608, E-Mail: [email protected]
DIE ENTWICKLER